How to properly run an application with docker compose using Docker Hub

As far as I understand, only images can be uploaded to Docker Hub; they then need to be pulled and can be launched via docker run. But what if I have several images that I run through docker compose? I have a site built with Next.js and nginx. Here is the docker-compose.yml:
version: '3'
services:
  nextjs:
    build: ./
    networks:
      - site_network
  nginx:
    build: ./.docker/nginx
    ports:
      - 80:80
      - 443:443
    networks:
      - site_network
    volumes:
      - /etc/ssl/certs:/etc/ssl/certs:ro
    depends_on:
      - nextjs
networks:
  site_network:
    driver: bridge
If I do a git clone of the repository on the server and run docker-compose up --build -d, everything works. I want to automate all of this via GitLab CI/CD. I found an article that describes how to install a runner on the server, plus a .gitlab-ci.yml that builds an image, pushes it to Docker Hub, and then pulls it on the server and launches it with docker run. So I see this approach: in .gitlab-ci.yml I build several images and push them to the hub. Next, I copy the docker-compose.yml from the repository to the server, which will have the following structure (a sketch of such a pipeline follows after the compose file):
version: '3'
services:
  nextjs:
    image: registry.gitlab.com/path_to_project/next:latest
    networks:
      - site_network
  nginx:
    image: registry.gitlab.com/path_to_project/nginx:latest
    ports:
      - 80:80
      - 443:443
    networks:
      - site_network
    volumes:
      - /etc/ssl/certs:/etc/ssl/certs:ro
    depends_on:
      - nextjs
networks:
  site_network:
    driver: bridge
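For reference, here is a minimal .gitlab-ci.yml sketch of this build-and-push flow (the stage and job names, server paths, and the SSH-based deploy step are my assumptions, not something from the original setup):

stages:
  - build
  - deploy

build-images:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # log in to the GitLab container registry with the CI-provided credentials
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # build and push both images referenced by the server-side compose file
    - docker build -t "$CI_REGISTRY_IMAGE/next:latest" ./
    - docker build -t "$CI_REGISTRY_IMAGE/nginx:latest" ./.docker/nginx
    - docker push "$CI_REGISTRY_IMAGE/next:latest"
    - docker push "$CI_REGISTRY_IMAGE/nginx:latest"

deploy:
  stage: deploy
  script:
    # copy the compose file to the server and restart the stack with the new images
    - scp docker-compose.yml user@server:/srv/site/docker-compose.yml
    - ssh user@server "cd /srv/site && docker-compose pull && docker-compose up -d"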
How correct is this approach? Maybe there is a more reliable and better way? I'm not considering an advanced stack (Kubernetes, etc.) yet; I want to learn the basics first before moving on.

Related

Running an environment with docker-compose

I have a docker-compose.yml file like so:
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
image: darajava/audio-diary
ports:
- 80:3001
volumes:
- .:/app
- "/app/node_modules"
depends_on:
- db
container_name: "soliloquy_express"
db:
image: mariadb:latest
restart: always
environment:
- MYSQL_DATABASE=soliloquy
- MYSQL_USER=soliloquy
- MYSQL_PASSWORD=password
- MYSQL_ROOT_PASSWORD=password
volumes:
- ../db_data:/var/lib/mysql
container_name: "soliloquy_db"
I'm planning to add an nginx service here too.
I use
docker-compose build
and
docker-compose push
to push to Docker Hub, which I can pull from (from my EC2 instance) using:
docker pull darajava/audio-diary:latest
However, when I run that image, it only runs the app service (I think).
using
docker-compose pull darajava/audio-diary:latest
does not work and leads to an error regarding a missing docker-compose.yml file.
So I have 2 questions:
Is there a way I can pull a whole docker-compose config, with app, db, and other services and pull and run it on my EC2 instance just by pulling from Docker Hub? or do I have the wrong use case for Docker Compose?
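For what it's worth, the usual pattern is that the compose file itself never goes to Docker Hub: only the built images are pushed, and the docker-compose.yml is copied (or cloned) onto the target host, roughly like this (paths are assumptions):

# on the EC2 instance, in a directory containing the project's docker-compose.yml
docker-compose pull    # pulls darajava/audio-diary and mariadb from their registries
docker-compose up -d   # starts the app and db services together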

does celery docker container have to copy the same files from django container build?

Here is the raw question:
I was wondering if I can run the command from the celery container using another container's data (like the django container), since these containers are in the same network of containers on the server, or do I have to duplicate the data from the django project into every container, or is there another way?
Here is the question with an explanation:
I am new to this, and I tried to make a docker-compose file for a Django REST project with Celery, RabbitMQ and PostgreSQL.
I followed a bunch of tutorials and managed to make it work: the celery container uses a shared volume from the Django container to start the worker (the celery beat container too). See the related code from the docker-compose file below (edited following one of the suggestions: I put the code instead of a picture):
version: '3.8'
services:
  api:
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.prod
    container_name: 'prod-djbackend'
    image: youdeal_djangopart-prod:0.63
    restart: unless-stopped
    expose:
      - '8000'
    env_file: .env
    volumes:
      - static-data:/static
      - media-data:/youdeal_djangopart/media
      - api_vol:/youdeal_djangopart
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
    depends_on:
      - db
      - rabbitmq
  celeryworker:
    container_name: celeryworker
    image: celeryworker:0.51
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.celery.prod
    env_file: .env
    volumes:
      - ./:/api_vol/
    links:
      - db
      - rabbitmq
      - api
    depends_on:
      - rabbitmq
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
volumes:
  api_vol:
Now here is the problem: when I want to deploy the project, the hosting provider I am using (https://www.arvancloud.com/en) doesn't really allow shared volumes between containers and doesn't answer well in this regard. I was wondering if I can run the command from the celery container using another container's data (like the django container), since these containers are in the same network of containers, or do I have to duplicate the data from the django project into every container, or is there another way?
I found this and some other related topics, but I couldn't make them work.
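One common way around this, sketched below on the assumption that both Dockerfiles already COPY the project code into their images at build time (the Celery app name in the command is also an assumption), is to drop the shared volume so each container runs from its own baked-in copy of the code:

services:
  api:
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.prod
    env_file: .env
    # no api_vol mount: the image already contains the project code
  celeryworker:
    build:
      context: ./youdeal_djangopart        # same build context as the api service
      dockerfile: Dockerfile.celery.prod
    env_file: .env
    # the worker starts from the code copied into its own image,
    # so nothing has to be shared with the api container at runtime
    command: celery -A youdeal_djangopart worker -l info
    depends_on:
      - rabbitmq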

Docker-compose Apache mapping external local directory to document root

I am a beginner with Docker, and I'm trying to create a local development LAMP stack (more exactly Apache, MariaDB, PHP) using docker-compose, existing Docker images from Docker Hub, and no Dockerfile if possible, to be used with several local web projects.
I'd like to map my local web project directory /Users/myusername/projects/myprojectname to the default document root of the Apache container (which seems to be /app for the Apache image I'm using).
Here is my docker-compose.yml file:
version: "3"
services:
mariadb:
image: mariadb:10.5
container_name: mariadb
restart: always
ports:
- 8889:3306
volumes:
- ./mysql:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_USER=localmysqluser
- MYSQL_PASSWORD=localmysqlpwd
php:
image: bitnami/php-fpm:7.4
container_name: php
ports:
- 9000:9000
volumes:
- /Users/myusername/projects/myprojectname:/app
apache:
image: bitnami/apache:latest
container_name: apache
restart: always
ports:
- 8080:80
volumes:
- ./apache-vhosts/myapp.conf:/vhosts/myapp.conf:ro
- /Users/myusername/projects/myprojectname:/app
depends_on:
- mariadb
- php
But when I do docker-compose up -d and then browse to http://localhost:8080/, I get nothing back. Where am I going wrong? Is my docker-compose.yml configuration wrong, or is it a matter of file permissions?
I've been looking at this similar question, but I'd prefer not to use any Dockerfile if possible.
Further question: is it possible to make a local directory /Users/myusername/projects/ browsable by Apache in my local browser?
As answered by J. Song, the exposed port of this Apache Docker image is 8080, not 80.
So we just need to change the port mapping of the apache service to 8080:8080 instead of 8080:80.
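Applied to the compose file above, the apache service would then look roughly like this (only the ports line changes):

  apache:
    image: bitnami/apache:latest
    container_name: apache
    restart: always
    ports:
      - 8080:8080   # the bitnami/apache image listens on 8080 inside the container
    volumes:
      - ./apache-vhosts/myapp.conf:/vhosts/myapp.conf:ro
      - /Users/myusername/projects/myprojectname:/app
    depends_on:
      - mariadb
      - php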

setup networking of multiple docker containers in different projects using docker-compose

Hello, I have multiple projects that have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into one docker-compose.yml and setting up all the services I need from all the other projects in this yml file. I do not think this is ideal, and there is a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an external Docker network.
Set up the network, and the containers can talk to each other by their service names.
Set up the network on the host:
docker network create my-net
First compose file
version: '3.9'
services:
  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"
networks:
  default:
    external: true
    name: my-net
Second compose file
version: '3.9'
services:
  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]
networks:
  default:
    external: true
    name: my-net
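As a usage example (the variable name and connection string are assumptions): with both stacks attached to my-net, the ui service can reach the database by the container name mongo (the service name mymongo also resolves), for instance via an environment variable in the second compose file:

  ui:
    environment:
      # "mongo" is the container_name from the first compose file;
      # on the shared external network it resolves like a hostname
      - MONGO_URL=mongodb://root:password@mongo:27017/mymongo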
You can do this without any special Compose setup, if:
each project is self-contained (they do not share databases)
the service locations are configurable via environment variables
you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]

# go/docker-compose.yml
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname where the container can call back to the host. On macOS and Windows hosts, Docker provides a special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
If you're doing development, you can run the service you're working on locally, with its dependencies in containers, and point the environment variable at the dependency's published port on localhost:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.

How to network between multiple containers of the same image in docker-compose?

I am using docker-compose and my configuration file is simply:
version: '3.7'
volumes:
  mongodb_data: {}
services:
  mongodb:
    image: mongo:4.4.3
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=super-secure-password
  rocket:
    build:
      context: .
    depends_on:
      - mongodb
    image: rocket:dev
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - .:/var/rocket
    ports:
      - "30301-30309:30300"
I start MongoDB with docker-compose up, and then in new terminal windows I run two Node.js applications, each with all the source code in /var/rocket, with:
# 1st Node.js application
docker-compose run --service-ports rocket
# 2nd Node.js application
docker-compose run --service-ports rocket
The problem is that the 2nd Node.js application service needs to communicate with the 1st Node.js application service on port 30300. I was able to get this working by referencing the 1st Node.js application by the id of the Docker container:
Connect to 1st Node.js application service on: tcp://myapp_myapp_run_837785c85abb:30300 from the 2nd Node.js application service.
Obviously this does not work long term, as that generated container name changes every time I bring the stack up and down with docker-compose. Is there a standard way to do networking when you start multiple containers of the same image from docker-compose?
You can run the same image multiple times in the same docker-compose.yml file:
version: '3.7'
services:
  mongodb: { ... }
  rocket1:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30301:30300"
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
As described in Networking in Compose, the containers can communicate using their respective service names and their "normal" port numbers, like rocket1:30300; any ports: are ignored for this. You shouldn't need to manually docker-compose run anything.
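For instance (the variable name is an assumption), rocket2 could be told where to find rocket1 directly in the compose file:

  rocket2:
    build: .
    depends_on:
      - mongodb
    environment:
      # inside the Compose network the service name resolves to rocket1's container,
      # and the container-side port 30300 is used, not the published 30301
      - PEER_URL=tcp://rocket1:30300
    ports:
      - "30302:30300"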
Well you could always give specific names to your two Node containers:
$ docker-compose run --name rocket1 --service-ports rocket
$ docker-compose run --name rocket2 --service-ports rocket
And then use:
tcp://rocket1:30300
