I have been using a microservices architecture to develop my software, and I run my services with Docker Compose. My problem is that every time a new service is created I have to add it to the docker-compose.yml, which has now grown to 200+ lines. I currently have around 17 services, and the services depend on each other.
So my question is: "How do I manage the docker-compose.yml so that it stays clean and easy to maintain?"
My docker-compose.yml:
version: '2'
services:
  mongo:
    container_name: mongodb
    image: mongo:3.4.7
    volumes:
      - ./mongo/data:/data/db
    ports:
      - 54321:27017
    networks:
      - zensorium_backend
    restart: always
    command: mongod --smallfiles
  golang_oauth:
    container_name: golang_oauth
    build: .
    volumes:
      - ./oauth:/go/src/oauth
    working_dir: /go/src/oauth
    ports:
      - 8080:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_account:
    container_name: golang_account
    build: .
    volumes:
      - ./account:/go/src/account
    working_dir: /go/src/account
    ports:
      - 8081:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_client:
    container_name: golang_client
    build: .
    volumes:
      - ./client:/go/src/client
    working_dir: /go/src/client
    ports:
      - 8082:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_mail:
    container_name: golang_mail
    build: .
    volumes:
      - ./mail:/go/src/mail
    working_dir: /go/src/mail
    expose:
      - 8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_user:
    container_name: golang_user
    build: .
    volumes:
      - ./user:/go/src/user
    working_dir: /go/src/user
    ports:
      - 8083:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_gateway2:
    container_name: golang_gateway2
    build: .
    volumes:
      - ./gateway2:/go/src/gateway
    working_dir: /go/src/gateway
    ports:
      - 8084:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_measurement:
    container_name: golang_measurement
    build: .
    volumes:
      - ./measurement:/go/src/measurement
    working_dir: /go/src/measurement
    ports:
      - 8085:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_app:
    container_name: golang_app
    build: .
    volumes:
      - ./app:/go/src/app
    working_dir: /go/src/app
    ports:
      - 8086:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_logging:
    container_name: golang_logging
    build: .
    volumes:
      - ./logging:/go/src/logging
    working_dir: /go/src/logging
    ports:
      - 8087:8082
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_notify:
    container_name: golang_notify
    build: .
    volumes:
      - ./notify:/go/src/notify
    working_dir: /go/src/notify
    ports:
      - 8088:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_routine:
    container_name: golang_routine
    build: .
    volumes:
      - ./routine:/go/src/routine
    working_dir: /go/src/routine
    ports:
      - 8089:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  angular_cli:
    container_name: angular_cli
    build: ./angular-cli
    ports:
      - "4200:4200"
    networks:
      - zensorium_frontend
    working_dir: /home/node/webPortal
    volumes:
      - ./angular-cli/webPortal:/home/node/webPortal
      - /home/node/webPortal/node_modules
    restart: always
    command: npm start
  golang_dev:
    container_name: golang_dev
    build: .
    volumes:
      - ./dev:/go/src/dev
    working_dir: /go/src/dev
    ports:
      - 8090:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
networks:
  zensorium_backend:
    driver: bridge
  zensorium_frontend:
    driver: bridge
You can also use YAML's own features to write repeated strings in a shorter way. For example:
---
version: '3.4'
x-command: &command bash -c "ls && sleep infinity"
services:
  service1:
    command: *command
  service2:
    command: *command
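The same trick extends to whole mappings via YAML merge keys, combined with the extension fields available since compose file format 3.4. A minimal sketch adapted to the services above (the x-golang-defaults name is mine, not from the original file):

version: '3.4'
x-golang-defaults: &golang-defaults
  build: .
  command: realize start --run
  restart: always
  env_file:
    - ./.api.env
  networks:
    - zensorium_backend
services:
  golang_oauth:
    <<: *golang-defaults
    working_dir: /go/src/oauth
    volumes:
      - ./oauth:/go/src/oauth
    ports:
      - 8080:8082
  golang_user:
    <<: *golang-defaults
    working_dir: /go/src/user
    volumes:
      - ./user:/go/src/user
    ports:
      - 8083:8082

Each service then only declares what actually differs.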
And one can use templates:
https://matthiasnoback.nl/2018/03/defining-multiple-similar-services-with-docker-compose/
My recommendation is to keep the docker-compose.yml as small as possible when you have a lot of services defined. For example, you could write the definitions inline, use a .env file so you don't always have to specify the variables, and remove the container_name entries. working_dir can be moved into the Dockerfile. If you are using version 2 of the file format, you can use service inheritance (extends; a sketch follows below), but this feature is not supported in compose file format 3 and newer... You should also use named volumes.
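A sketch of that version 2 inheritance (the file name docker-compose.common.yml and the golang_base service are hypothetical; note that extends does not carry over depends_on or links, so those stay in each child service):

# docker-compose.common.yml
version: '2'
services:
  golang_base:
    build: .
    command: realize start --run
    restart: always
    env_file:
      - ./.api.env

# docker-compose.yml
version: '2'
services:
  golang_oauth:
    extends:
      file: docker-compose.common.yml
      service: golang_base
    working_dir: /go/src/oauth
    volumes:
      - ./oauth:/go/src/oauth
    ports:
      - 8080:8082
    networks:
      - zensorium_backend
    depends_on:
      - mongo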
Example of an inline docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.4.7
    command: mongod --smallfiles
    restart: always
    ports: ["54321:27017"]
    networks: [zensorium_backend]
    volumes: ["my_mongo_data:/data/db"]
  golang_oauth:
    build: .
    command: realize start --run
    restart: always
    working_dir: /go/src/oauth # could be moved to the Dockerfile
    ports: ["8080:8082"]
    depends_on: [mongo]
    environment: ["VAR_1=${VAR_1}","VAR2=${VAR2}"] # using the .env file
    networks: [zensorium_backend]
    volumes: ["my_oauth_data:/go/src/oauth"]
volumes:
  my_mongo_data:
  my_oauth_data:
networks:
  zensorium_backend:
    driver: bridge
  zensorium_frontend:
    driver: bridge
And the .env file, in the same folder as the .yml:
VAR_1=mazel
VAR2=tov
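Along the same lines (my suggestion, beyond what is shown above), you can split the definition across several compose files and merge them on the command line; the file names here are hypothetical:

docker-compose -f docker-compose.yml -f docker-compose.backend.yml -f docker-compose.frontend.yml up -d

Later files override and extend earlier ones, so the base file stays small.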
Related
Maybe somebody has had this specific problem... There are two web applications in different directories, both in Docker containers, on Linux (CentOS). When I start the first application (docker-compose up -d), everything works fine. When I then launch the second application from another directory, the first application's container goes down. Why? The container names are different, and the ports forwarded by Docker are also different.
First app config docker-compose.yml:
services:
  web:
    container_name: myapp-nginx
    image: nginx:latest
    ports:
      - "8000:80"
      - "443:443"
    volumes:
      - ./:/myapp
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
    links:
      - php
  php:
    build: .
    container_name: myapp-php-fpm
    image: php:7.4-fpm
    volumes:
      - ./:/myapp
      - ./logs:/myapp/logs
      - ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
    links:
      - mysql:db
  mysql:
    image: mariadb:latest
    container_name: myapp-mysql
    volumes:
      - /opt/myapp/data:/var/lib/mysql
    env_file:
      - mysql.env
    restart: unless-stopped
    ports:
      - "3306:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: myapp-phpmyadmin
    environment:
      - MAX_EXECUTION_TIME=600
      - UPLOAD_LIMIT=800M
      - PMA_HOST=localhost
      - PMA_PORT=3306
      - PMA_ARBITRARY=1
    ports:
      - "80:80"
    links:
      - mysql:db
Second app docker-compose.yml:
version: '3'
services:
  web:
    container_name: client-nginx
    image: nginx:latest
    ports:
      - "20203:81"
    volumes:
      - ./:/myapp_client
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
    links:
      - php
  php:
    build: .
    container_name: client-php-fpm
    image: php:7.4-fpm
    volumes:
      - ./:/myapp_client
      - ./logs:/myapp_client/logs
      - ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
    links:
      - mysql:db
  mysql:
    image: mariadb:latest
    container_name: client-mysql
    volumes:
      - /opt/myapp_client/data:/var/lib/mysql
    env_file:
      - mysql.env
    restart: unless-stopped
    ports:
      - "20202:3307"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: client-phpmyadmin
    environment:
      - MAX_EXECUTION_TIME=600
      - UPLOAD_LIMIT=800M
      - PMA_HOST=localhost
      - PMA_PORT=3307
      - PMA_ARBITRARY=1
    ports:
      - "20204:82"
    links:
      - mysql:db
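A first debugging step for this kind of failure (a generic sketch, not a confirmed diagnosis) is to check why the first container actually exits:

# list stopped containers and their exit codes
docker ps -a
# read the logs of the container that went down
docker logs myapp-nginx
# check which host ports are already taken
sudo ss -tlnp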
I have a docker-compose.yml file which contains frontend, backend, testing, postgres and pgadmin containers. All the containers except testing are able to communicate with each other, but the testing container fails to communicate with the backend and frontend containers in docker-compose.
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.local
    ports:
      - '3000:3000'
    networks:
      - test-network
    environment:
      # For the frontend can be applied only during the build!
      # (while it's applied when TS is compiled)
      # You have to build manually without cache if one of those are changed at least for the prod mode.
      - REACT_APP_BACKEND_API=http://localhost:8000/api/v1
      - REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
      - CI=true
      - CHOKIDAR_USEPOLLING=true
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - test-network
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
      PGADMIN_DEFAULT_PASSWORD: dev
    volumes:
      - pgadmin:/root/.pgadmin
      - ./pgadmin-config/servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"
    networks:
      - test-network
    restart: unless-stopped
  backend:
    container_name: test-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.local
    ports:
      - '8000:80'
    volumes:
      - ./backend:/app
    command: >
      bash -c "alembic upgrade head
      && exec /start-reload.sh"
    networks:
      - test-network
    depends_on:
      - postgres
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
      - LOG_LEVEL=debug
      - SQLALCHEMY_ECHO=True
      - AUTH_ENABLED=True
      - CORS=*
      - GCP_ALLOWED_DOMAINS=*
  testing:
    container_name: test-testing
    build:
      context: ./testing
      dockerfile: Dockerfile
    volumes:
      - ./testing:/isp-app
    command: >
      bash -c "/wait
      && robot ."
    networks:
      - test-network
    depends_on:
      - backend
      - frontend
    environment:
      - WAIT_HOSTS= frontend:3000, backend:8000
      - WAIT_TIMEOUT= 3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
  postgres:
  pgadmin:
networks:
  test-network:
    driver: bridge
All the containers are attached to test-network. When the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How to fix it?
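One detail worth checking (an observation from the file above, not a confirmed fix): on a compose network, containers reach each other on container ports, not on the published host ports. Since backend is published as '8000:80', it listens on port 80 inside test-network, so the wait list would have to be:

environment:
  - WAIT_HOSTS=frontend:3000, backend:80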
I'd like to deploy my saleor-shop application completely via docker.
So I've built the respective images for saleor backend, storefront & dashboard.
Running the app locally works fine.
Backend is available on localhost:8000/graphql
Storefront runs at localhost:3000
Dashboard runs at localhost:9000
When I try to run the app on the droplet IP, I get issues with the saleor backend.
As of now, trying to access XXX.XX.XXX.XXX:8000 results in "This site can't be reached".
The storefront and dashboard are accessible on XXX.XX.XXX.XXX:3000 and XXX.XX.XXX.XXX:9000, but without any interaction with the backend, because it's not available. That's why the GraphQL calls are not working on the storefront, and logging into the dashboard does not work either, since the backend is unreachable. I think I'm missing something here and would appreciate any help.
Within my droplet I'm using the following docker-compose.yml file to get my docker containers up:
services:
  api:
    ports:
      - 8000:8000
    image: XXX/murukku-shop
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    depends_on:
      - db
      - redis
      - jaeger
    env_file: common.env
    environment:
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://XXX.XX.XXX.XXX:3000/
      - DASHBOARD_URL=http://XXX.XX.XXX.XXX:9000/
  storefront:
    image: XXX/murukku-storefront
    ports:
      - 3000:80
    restart: unless-stopped
  dashboard:
    image: XXX/murukku-dashboard
    ports:
      - 9000:80
    restart: unless-stopped
  db:
    image: library/postgres:11.1-alpine
    ports:
      - 5432:5432
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=saleor
      - POSTGRES_PASSWORD=saleor
  redis:
    image: library/redis:5.0-alpine
    ports:
      - 6379:6379
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-redis:/data
  worker:
    image: XXX/murukku-shop
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
      - mailhog
    environment:
      - EMAIL_URL=smtp://mailhog:1025
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    restart: unless-stopped
    networks:
      - saleor-backend-tier
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
    restart: unless-stopped
    networks:
      - saleor-backend-tier
volumes:
  saleor-db:
    driver: local
  saleor-redis:
    driver: local
  saleor-media:
networks:
  saleor-backend-tier:
    driver: bridge
I was testing Saleor like you in a Docker setup and I've found a solution! You have to set more env variables; they are all explained on the GitHub pages of the storefront and the dashboard.
Here is my config if you want:
version: '2'
services:
  api:
    ports:
      - 8000:8000
    build:
      context: ./saleor
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    depends_on:
      - db
      - redis
      - jaeger
    volumes:
      - ./saleor/saleor/:/app/saleor:Z
      - ./saleor/templates/:/app/templates:Z
      - ./saleor/tests/:/app/tests
      # shared volume between worker and api for media
      - saleor-media:/app/media
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    environment:
      # - DEFAULT_CURRENCY=EUR
      # - DEFAULT_COUNTRY=
      - ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,192.168.0.50
      - ALLOWED_HOSTS=localhost,192.168.0.50
      - JAEGER_AGENT_HOST=jaeger
      - STOREFRONT_URL=http://192.168.0.50:3000/
      - DASHBOARD_URL=http://192.168.0.50:9000/
  storefront:
    build:
      context: ./saleor-storefront
      dockerfile: ./Dockerfile.dev
    ports:
      - 3000:3000
    restart: unless-stopped
    volumes:
      - ./saleor-storefront/:/app:cached
      - /app/node_modules/
    command: npm start -- --host 0.0.0.0
    environment:
      - NEXT_PUBLIC_API_URI=http://192.168.0.50:8000/graphql/
      - API_URI=http://192.168.0.50:8000/graphql/
  dashboard:
    build:
      context: ./saleor-dashboard
      dockerfile: ./Dockerfile.dev
    ports:
      - 9000:9000
    restart: unless-stopped
    volumes:
      - ./saleor-dashboard/:/app:cached
      - /app/node_modules/
    command: npm start -- --host 0.0.0.0
    environment:
      - API_URI=http://192.168.0.50:8000/graphql/
      - APP_MOUNT_URI=/dashboard/
      - STATIC_URL=http://192.168.0.50:9000/
  db:
    image: library/postgres:11.1-alpine
    ports:
      - 5432:5432
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=saleor
      - POSTGRES_PASSWORD=saleor
  redis:
    image: library/redis:5.0-alpine
    ports:
      - 6379:6379
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    volumes:
      - saleor-redis:/data
  worker:
    build:
      context: ./saleor
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    command: celery -A saleor --app=saleor.celeryconf:app worker --loglevel=info
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
      - mailhog
    volumes:
      - ./saleor/saleor/:/app/saleor:Z,cached
      - ./saleor/templates/:/app/templates:Z,cached
      # shared volume between worker and api for media
      - saleor-media:/app/media
    environment:
      - EMAIL_URL=smtp://mailhog:1025
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    restart: unless-stopped
    networks:
      - saleor-backend-tier
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui. Visit http://localhost:8025/ to check emails
    restart: unless-stopped
    networks:
      - saleor-backend-tier
volumes:
  saleor-db:
    driver: local
  saleor-redis:
    driver: local
  saleor-media:
networks:
  saleor-backend-tier:
    driver: bridge
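Applied to the droplet setup from the question, the host-dependent variables are the key part of this config; replacing 192.168.0.50 with the droplet IP (XXX.XX.XXX.XXX as above) would look roughly like this:

# api service
- ALLOWED_HOSTS=localhost,XXX.XX.XXX.XXX
- ALLOWED_CLIENT_HOSTS=localhost,127.0.0.1,XXX.XX.XXX.XXX
# storefront and dashboard services
- API_URI=http://XXX.XX.XXX.XXX:8000/graphql/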
PS: It's my first answer on Stack Overflow :D Don't forget to accept the answer if it solved your problem ;)
I am trying to use Celery for asynchronous jobs, with Celery, Docker and DigitalOcean.
I have the docker-compose file depicted below.
As you can see, there is a celery beat service.
In the celery beat command there is "django_celery_beat.schedulers:DatabaseScheduler", and as far as I understand it cannot find django_celery_beat.schedulers:DatabaseScheduler. I cannot figure out how to solve this problem.
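For reference, DatabaseScheduler ships with the django-celery-beat package, so it has to be installed in the proje image and registered in the Django settings; whether that is the missing piece here is an assumption, but the usual setup looks like:

# in the image, e.g. via requirements.txt
pip install django-celery-beat

# in proje/settings.py
INSTALLED_APPS = [
    # ...
    'django_celery_beat',
]

# its tables are created by the migrate step
python manage.py migrate django_celery_beat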
version: '3.3'
services:
  web:
    build: .
    image: proje
    command: gunicorn -b 0.0.0.0:8000 proje.wsgi -w 4 --timeout 300 -t 80
    restart: unless-stopped
    tty: true
    env_file:
      - ./.env.production
    networks:
      - app-network
    depends_on:
      - migration
      - database
      - redis
    healthcheck:
      test: ["CMD", "wget", "http://localhost/healthcheck"]
      interval: 3s
      timeout: 3s
      retries: 10
  celery:
    image: proje
    command: celery -A proje worker -l info -n worker1@%%h
    restart: unless-stopped
    networks:
      - app-network
    environment:
      - DJANGO_SETTINGS_MODULE=proje.settings
    env_file:
      - ./.env.production
    depends_on:
      - redis
  celerybeat:
    image: proje
    command: celery -A proje beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    restart: unless-stopped
    networks:
      - app-network
    environment:
      - DJANGO_SETTINGS_MODULE=proje.settings
    env_file:
      - ./.env.production
    depends_on:
      - redis
  migration:
    image: proje
    command: python manage.py migrate
    volumes:
      - .:/usr/src/app/
    env_file:
      - ./.env.production
    depends_on:
      - database
    networks:
      - app-network
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./static/:/var/www/static/
      - ./conf/nginx/:/etc/nginx/conf.d/
      - webserver-logs:/var/log/nginx/
    networks:
      - app-network
  database:
    image: "postgres:12" # use latest official postgres version
    restart: unless-stopped
    env_file:
      - .databaseenv # configure postgres
    ports:
      - "5432:5432"
    volumes:
      - database-data:/var/lib/postgresql/data/
    networks:
      - app-network
  redis:
    image: "redis:5.0.8"
    restart: unless-stopped
    command: [ "redis-server", "/redis.conf" ]
    working_dir: /var/lib/redis
    ports:
      - "6379:6379"
    volumes:
      - ./conf/redis/redis.conf:/redis.conf
      - redis-data:/var/lib/redis/
    networks:
      - app-network
# Docker networks
networks:
  app-network:
    driver: bridge
volumes:
  database-data:
  webserver-logs:
  redis-data:
And it gives me the result depicted below. I have been stuck on this in my project for months.
Any help will be appreciated.
Update: I have uploaded all of this to an Ubuntu server and there it worked, so I think my computer (Win 10) has some incompatibility.
Thanks.
ERROR: Encountered errors while bringing up the project.
docker-compose up -d --build
docker ps
------------- docker-compose.yml -----------------
version: "3"
services:
php:
build:
context: ./docker/php
container_name: 'web'
env_file: ./docker/php/.env
restart: 'always'
ports:
- "80:80"
- "443:443"
links:
- mssql
volumes:
- ./:/var/www/html
- ./docker/config/php/php.ini:/usr/local/etc/php/php.ini
- ./docker/config/vhosts:/etc/apache2/sites-enabled
- ./docker/logs/apache2:/var/log/apache2
- ./docker/logs/php:/var/log/php
environment:
PHP_IDE_CONFIG: "serverName=local.dev.com"
mssql:
build: ./docker/mssql
container_name: 'mssql'
env_file: ./docker/mssql/.env
ports:
- "1433:1433"
volumes:
- ./docker/data/mssql:/var/opt/mssql
Make sure your port 80 isn't already in use.
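For example, to see what is already bound to port 80 on the host:

sudo ss -tlnp | grep ':80 '
# or
sudo lsof -i :80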
Also, here's a thread with several possible solutions.