This is my docker-compose file. Every time I run the docker-compose up command I get the error specified in the title. I have tried running docker-compose config and everything matches.
version: "3.8"
services:
  phedon-service:
    build: .
    restart: always
    ports:
      - "8080:8080"
    networks:
      - phedon
    depends_on:
      - phedon_db
    env_file:
      - .env
  phedon_db:
    image: "mariadb:10.6"
    container_name: mariadb
    restart: always
    healthcheck:
      test: [ "CMD", "mariadb-admin", "--protocol", "tcp", "ping" ]
      timeout: 3m
      interval: 10s
      retries: 10
    ports:
      - "3307:3306"
    networks:
      - phedon
    environment:
      -MYSQL_DATABASE: "phedondb"
      -MYSQL_USER: "root"
      -MYSQL_PASSWORD: "12345"
      -MYSQL_ROOT_PASSWORD: "12345"
    env_file:
      - .env
networks:
  phedon:
Your services.db.environment must be a mapping, not a list: drop the leading dashes from the -MYSQL_* lines. We can also omit the root user password with MARIADB_ALLOW_EMPTY_ROOT_PASSWORD:
db:
  image: "mariadb:10.6"
  container_name: mariadb
  restart: always
  healthcheck:
    test: [ "CMD", "mariadb-admin", "--protocol", "tcp", "ping" ]
    timeout: 3m
    interval: 10s
    retries: 10
  ports:
    - "3333:3306"
  environment:
    MYSQL_DATABASE: "phedondb"
    MYSQL_USER: "phedon"
    MYSQL_PASSWORD: "12345"
    MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: true
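For reference, Compose accepts environment either as a mapping (used above) or as a list of KEY=value strings; the file in the question mixed the two. A sketch of the list form:

```yaml
environment:
  # list form: a dash, a space, then KEY=value
  - MYSQL_DATABASE=phedondb
  - MYSQL_USER=phedon
  - MYSQL_PASSWORD=12345
```

Either form works, but a single environment block must use one or the other.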
If you would like to use the root password instead:
For the admin ping command we have to provide the password for the DB server. This can be done in different ways:
Using the MYSQL_PWD environment variable
Using the -p flag in the docker CLI
Creating a credentials file
services:
  db:
    image: "mariadb:10.6"
    container_name: mariadb
    restart: always
    healthcheck:
      test: [ "CMD-SHELL", "mysqladmin ping" ]
      timeout: 3m
      interval: 10s
      retries: 10
    ports:
      - "3333:3306"
    environment:
      MYSQL_DATABASE: "phedondb"
      MYSQL_USER: "phedon"
      MYSQL_PASSWORD: "12345"
      MYSQL_ROOT_PASSWORD: "12345"
      MYSQL_PWD: "12345"
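The credentials-file option can be sketched like this; the file name and mount path here are illustrative assumptions, not something from the compose file above:

```yaml
# healthcheck.cnf (mounted below) would contain, roughly:
#   [client]
#   user = root
#   password = 12345
services:
  db:
    image: "mariadb:10.6"
    volumes:
      - ./healthcheck.cnf:/healthcheck.cnf:ro
    healthcheck:
      # --defaults-extra-file must come before other options
      test: [ "CMD", "mysqladmin", "--defaults-extra-file=/healthcheck.cnf", "ping" ]
      interval: 10s
      retries: 10
```

This keeps the password out of the process list, at the cost of an extra file to manage.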
I created a secret "databasePassword" using the below command:
echo 123456 | docker secret create databasePassword -
Below is my yml file creating MySQL and phpMyAdmin, where I'm trying to use this secret:
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: unless-stopped
    container_name: db-mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
      MYSQL_ROOT_PASSWORD: /run/secrets/databasePassword
    ports:
      - 3306:3306
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    secrets:
      - databasePassword
  beyond-phpmyadmin:
    image: phpmyadmin
    restart: unless-stopped
    container_name: beyond-phpmyadmin
    environment:
      PMA_HOST: db-mysql
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    links:
      - db
    ports:
      - 8081:80
But databasePassword is not being set to 123456; it is set to the literal string "/run/secrets/databasePassword". I tried using docker stack deploy as well, but it also didn't work.
Based on some web research I also tried declaring the secrets at the end of the file, like below, but that didn't work either.
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: unless-stopped
    container_name: db-mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
      MYSQL_ROOT_PASSWORD: /run/secrets/databasePassword
    ports:
      - 3306:3306
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    secrets:
      - databasePassword
  beyond-phpmyadmin:
    image: phpmyadmin
    restart: unless-stopped
    container_name: beyond-phpmyadmin
    environment:
      PMA_HOST: db-mysql
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    links:
      - db
    ports:
      - 8081:80
secrets:
  databasePassword:
    external: true
Docker cannot know that /run/secrets/databasePassword is not a literal value for the MYSQL_ROOT_PASSWORD variable but a path to a file you would like the secret read from. That's not how secrets work: they are simply made available in a /run/secrets/<secret-name> file inside the container, and to use a secret, your container needs to read it from that file.
Fortunately for you, the mysql image knows how to do this. Simply use MYSQL_ROOT_PASSWORD_FILE instead of MYSQL_ROOT_PASSWORD:
services:
  db:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/databasePassword
    secrets:
      - databasePassword
  ...
secrets:
  databasePassword:
    external: true
See "Docker Secrets" in the mysql image documentation.
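Under the hood, images that support the *_FILE convention read the secret inside their entrypoint. A simplified sketch of the pattern (this is illustrative, not the actual mysql entrypoint code; the temp file stands in for the mounted secret):

```shell
#!/bin/sh
# Stand-in for the mounted secret; in a real container this would be
# /run/secrets/databasePassword, provided read-only by Docker.
secret_file="$(mktemp)"
printf '123456' > "$secret_file"

MYSQL_ROOT_PASSWORD_FILE="$secret_file"

# The *_FILE pattern: if VAR_FILE is set, load VAR from the file's contents.
if [ -n "${MYSQL_ROOT_PASSWORD_FILE:-}" ]; then
  MYSQL_ROOT_PASSWORD="$(cat "$MYSQL_ROOT_PASSWORD_FILE")"
fi

echo "resolved password: $MYSQL_ROOT_PASSWORD"
rm -f "$secret_file"
```

Any image without this logic will treat the path as a literal value, which is exactly the behavior observed in the question.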
I have a working mariadb+phpmyadmin container setup on my local dev computer.
I would like to create another mariadb+phpmyadmin container pair, but I'm getting the error: [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES).
I can't figure out where the problem is. I tried adding the parameter MYSQL_HOST: '%' and modifying the line:
test: mysqladmin ping -h $$MYSQL_HOST -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
but still no success.
Working docker-compose.yml
version: '3.9'
networks:
  rosetta_net:
    driver: bridge
services:
  maria_db_service:
    image: mariadb:10.5.9
    container_name: 'rosetta-api-db'
    restart: always
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: 'root123'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'root'
      MYSQL_DATABASE: 'rosetta'
    volumes:
      - ./docker/db/mariadb/data:/var/lib/mysql
      - ./docker/db/mariadb/my.cnf:/etc/mysql/conf.d/my.cnf
    networks:
      - rosetta_net
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      interval: 5s
      retries: 5
  phpmyadmin_service:
    image: phpmyadmin/phpmyadmin:5.1
    container_name: 'rosetta-api-db-admin'
    ports:
      - '8081:80'
    environment:
      PMA_HOST: db_server
      MAX_EXECUTION_TIME: 3600
      UPLOAD_LIMIT: 128M
    depends_on:
      maria_db_service:
        condition: service_healthy
    volumes:
      - ./docker/db/phpmyadmin/sites-enabled:/etc/apache2/sites-enabled
      - db_rosetta_data:/var/www/html
    networks:
      - rosetta_net
volumes:
  db_rosetta_data:
Not working docker-compose.yml (it is very similar to the previous one):
version: '3.9'
networks:
  parkovisko_net:
    driver: bridge
services:
  maria_db_service:
    image: mariadb:10.5.9
    container_name: 'parkovisko-api-db'
    restart: always
    ports:
      - '3307:3306'
    environment:
      MYSQL_ROOT_PASSWORD: 'root123'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'root'
      MYSQL_DATABASE: 'parkovisko'
    volumes:
      - ./docker/db/mariadb/data:/var/lib/mysql
      - ./docker/db/mariadb/my.cnf:/etc/mysql/conf.d/my.cnf
    networks:
      - parkovisko_net
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      interval: 5s
      retries: 5
  phpmyadmin_service:
    image: phpmyadmin/phpmyadmin:5.1
    container_name: 'parkovisko-api-db-admin'
    ports:
      - '8083:80'
    environment:
      PMA_HOST: db_server
      MAX_EXECUTION_TIME: 3600
      UPLOAD_LIMIT: 128M
    depends_on:
      maria_db_service:
        condition: service_healthy
    volumes:
      - ./docker/db/phpmyadmin/sites-enabled:/etc/apache2/sites-enabled
      - db_parkovisko_data:/var/www/html
    networks:
      - parkovisko_net
volumes:
  db_parkovisko_data:
The problem is that you are using the same host volumes in both docker-compose files, so MySQL can't initialize: the same data directory is mounted into two containers.
There are multiple ways to fix this:
Change it to named volumes:
volumes:
  - mariadb-data:/var/lib/mysql
Copy the data to another folder with cp and mount that instead (assuming you want the same data in both containers).
The same applies to the .cnf mount and the phpmyadmin containers.
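The copy option can be sketched like this; the throwaway directories stand in for the real ones (in practice the source would be ./docker/db/mariadb/data from the first project):

```shell
#!/bin/sh
# Demo with throwaway directories; replace src/dst with the real host paths.
src="$(mktemp -d)/data"; mkdir -p "$src"
echo 'ibdata' > "$src/ibdata1"   # stand-in for the MariaDB data files
dst="$(mktemp -d)/data"

# -a preserves ownership, permissions and timestamps, which MariaDB cares about.
cp -a "$src" "$dst"
ls "$dst"
```

Each compose file then mounts its own copy, so the two servers never share a live data directory.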
Problem Definition
I am trying to use two docker-compose.yml files (each in a separate directory) on the same host machine, one for Airflow and the other for another application. I have put Airflow's containers on the same named network as my other app (see the compose files below) and confirmed with docker network inspect that the Airflow containers are in the network. However, when I make a curl from the airflow worker container to the my_keycloak server, I get the following error:
Error
Failed to connect to localhost port 9080: Connection refused
Files
Airflow docker-compose.yml
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.1.0}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
  #added working directory and scripts folder 6-26-2021 CP
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}"
  depends_on:
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    #added so that airflow can interact with baton 6-30-2021 CP
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    #changed from default of 8080 because of clash with baton docker services 6-26-2021 CP
    ports:
      - 50309:8080
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:50309/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-worker:
    <<: *airflow-common
    command: celery worker
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
  airflow-init:
    <<: *airflow-common
    command: version
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
    networks:
      - baton_docker_files_tempo
  flower:
    <<: *airflow-common
    command: celery flower
    ports:
      - 5555:5555
    networks:
      - baton_docker_files_tempo
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
volumes:
  postgres-db-volume:
#added baton network so that airflow can communicate with baton cp 6-28-2021
networks:
  baton_docker_files_tempo:
    external: true
Other app's docker-compose file
version: "3.7"
services:
  db:
    image: artifactory.redacted.com/docker/postgres:11.3
    ports:
      - 11101:5432
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: keycloaks156
    networks:
      - tempo
  keycloak:
    image: registry.git.redacted.com/tempo23/tempo23-server/keycloak:${TEMPO_VERSION:-develop}
    container_name: my_keycloak
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      KEYCLOAK_DEFAULT_THEME: redacted
      KEYCLOAK_WELCOME_THEME: redacted
      PROXY_ADDRESS_FORWARDING: 'true'
      KEYCLOAK_FRONTEND_URL: http://localhost:9080/auth
      DB_VENDOR: postgres
      DB_ADDR: db
      DB_USER: postgres
      DB_PASSWORD: postgres
    ports:
      - 9080:8080
    networks:
      - tempo
    depends_on:
      - db
  db-migrate:
    image: registry.git.redacted.com/tempo23/tempo23-server/db-migrate:${TEMPO_VERSION:-develop}
    command: "-url=jdbc:postgresql://db:5432/ -user=postgres -password=postgres -connectRetries=60 migrate"
    restart: on-failure:3
    depends_on:
      - db
    networks:
      - tempo
  keycloak-bootstrap:
    image: registry.git.redacted.com/tempo23/tempo23-server/server-full:${TEMPO_VERSION:-develop}
    command: ["keycloakBootstrap", "--config", "conf/single.conf"]
    depends_on:
      - db
    restart: on-failure:10
    networks:
      - tempo
  server:
    image: registry.git.redacted.com/tempo23/tempo23-server/server:${TEMPO_VERSION:-develop}
    command: [ "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005", "conf/single.conf" ]
    environment:
      AKKA_PARALLELISM_MAX: "2"
      DB_THREADPOOL_SIZE: "4"
      UNSAFE_ENABLED: "true"
      DOCKER_BIND_HOST_ROOT: "${BIND_ROOT}"
      DOCKER_BIND_CONTAINER_ROOT: "/var/lib/tempo2"
      MESSAGING_HOST: "server"
      PUBSUB_TYPE: inmem
      TEMPOJOBS_DOCKER_TAG: registry.git.redacted.com/tempo23/tempo23-server/tempojobs:${TEMPO_VERSION:-develop}
      NUM_WORKER: 1
      ASSET_CACHE_SIZE: 500M
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - "${BIND_ROOT}:/var/lib/tempo2"
    ports:
      - 2551:2551 # akka port
      - 8080:8080 # application http port
      - 8081:8081 # executor http port
      - 5005:5005 # debug port
    networks:
      - tempo
    restart: always
    depends_on:
      - db
networks:
  tempo:
Read the documentation on ports carefully: it publishes a container port on a host port. Between services on the same network you can simply reach a service at service-name:port, in this case keycloak:8080 instead of localhost:9080.
It doesn't matter where each container is defined (any docker-compose file on the same machine); the only thing that matters is the network. As you mentioned in your question, they are on the same network, so they can see each other. The misunderstanding is that containers are isolated from each other, so instead of localhost you should use the container (service) name in the curl, together with the container port (8080), not the host-mapped port (9080).
Try running:
curl keycloak:8080
I created the Rasa X server in server mode using docker-compose (link: https://rasa.com/docs/rasa-x/installation-and-setup/install/docker-compose). I want to get intents and entities, so I am hitting this URL from Postman (https://2286950d1621.jp.ngrok.io/model/parse) with text. But I'm getting an HTML response instead of a JSON response.
Docker-compose.yml file
version: "3.4"
x-database-credentials: &database-credentials
  DB_HOST: "db"
  DB_PORT: "5432"
  DB_USER: "${DB_USER:-admin}"
  DB_PASSWORD: "${DB_PASSWORD}"
  DB_LOGIN_DB: "${DB_LOGIN_DB:-rasa}"
x-rabbitmq-credentials: &rabbitmq-credentials
  RABBITMQ_HOST: "rabbit"
  RABBITMQ_USERNAME: "user"
  RABBITMQ_PASSWORD: ${RABBITMQ_PASSWORD}
x-redis-credentials: &redis-credentials
  REDIS_HOST: "redis"
  REDIS_PORT: "6379"
  REDIS_PASSWORD: ${REDIS_PASSWORD}
  REDIS_DB: "1"
x-duckling-credentials: &duckling-credentials
  RASA_DUCKLING_HTTP_URL: "http://duckling:8000"
x-rasax-credentials: &rasax-credentials
  RASA_X_HOST: "http://rasa-x:5002"
  RASA_X_USERNAME: ${RASA_X_USERNAME:-admin}
  RASA_X_PASSWORD: ${RASA_X_PASSWORD:-}
  RASA_X_TOKEN: ${RASA_X_TOKEN}
  JWT_SECRET: ${JWT_SECRET}
  RASA_USER_APP: "http://app:5055"
  RASA_PRODUCTION_HOST: "http://rasa-production:5005"
  RASA_WORKER_HOST: "http://rasa-worker:5005"
  RASA_TOKEN: ${RASA_TOKEN}
x-rasa-credentials: &rasa-credentials
  <<: *rabbitmq-credentials
  <<: *rasax-credentials
  <<: *database-credentials
  <<: *redis-credentials
  <<: *duckling-credentials
  RASA_TOKEN: ${RASA_TOKEN}
  RASA_MODEL_PULL_INTERVAL: 10
  RABBITMQ_QUEUE: "rasa_production_events"
  RASA_TELEMETRY_ENABLED: ${RASA_TELEMETRY_ENABLED:-true}
x-rasa-services: &default-rasa-service
  restart: always
  image: "rasa/rasa:${RASA_VERSION}-full"
  volumes:
    - ./.config:/.config
  expose:
    - "5005"
  command: >
    x
    --no-prompt
    --production
    --config-endpoint http://rasa-x:5002/api/config?token=${RASA_X_TOKEN}
    --port 5005
    --jwt-method HS256
    --jwt-secret ${JWT_SECRET}
    --auth-token '${RASA_TOKEN}'
    --enable-api
    --cors "*"
  depends_on:
    - rasa-x
    - rabbit
    - redis
services:
  rasa-x:
    restart: always
    image: "rasa/rasa-x:${RASA_X_VERSION}"
    expose:
      - "5002"
    volumes:
      - ./models:/app/models
      - ./environments.yml:/app/environments.yml
      - ./credentials.yml:/app/credentials.yml
      - ./endpoints.yml:/app/endpoints.yml
      - ./logs:/logs
      - ./auth:/app/auth
    environment:
      <<: *database-credentials
      <<: *rasa-credentials
      SELF_PORT: "5002"
      DB_DATABASE: "${DB_DATABASE:-rasa}"
      RASA_MODEL_DIR: "/app/models"
      PASSWORD_SALT: ${PASSWORD_SALT}
      RABBITMQ_QUEUE: "rasa_production_events"
      RASA_X_USER_ANALYTICS: "0"
      SANIC_RESPONSE_TIMEOUT: "3600"
      RUN_DATABASE_MIGRATION_AS_SEPARATE_SERVICE: "true"
    depends_on:
      - db
  db-migration:
    command: ["python", "-m", "rasax.community.services.db_migration_service"]
    restart: always
    image: "rasa/rasa-x:${RASA_X_VERSION}"
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8000/health || kill 1"]
      interval: 5s
      timeout: 1s
      retries: 3
      start_period: 2s
    expose:
      - "8000"
    environment:
      <<: *database-credentials
      RUN_DATABASE_MIGRATION_AS_SEPARATE_SERVICE: "true"
    depends_on:
      - db
  rasa-production:
    <<: *default-rasa-service
    environment:
      <<: *rasa-credentials
      RASA_ENVIRONMENT: "production"
      DB_DATABASE: "tracker"
      MPLCONFIGDIR: "/tmp/.matplotlib"
      RASA_MODEL_SERVER: "http://rasa-x:5002/api/projects/default/models/tags/production"
  rasa-worker:
    <<: *default-rasa-service
    environment:
      <<: *rasa-credentials
      RASA_ENVIRONMENT: "worker"
      DB_DATABASE: "worker_tracker"
      MPLCONFIGDIR: "/tmp/.matplotlib"
      RASA_MODEL_SERVER: "http://rasa-x:5002/api/projects/default/models/tags/production"
  app:
    restart: always
    image: "rasa/rasa-x-demo:${RASA_X_DEMO_VERSION}"
    expose:
      - "5055"
    depends_on:
      - rasa-production
  db:
    restart: always
    image: "bitnami/postgresql:11.9.0"
    expose:
      - "5432"
    environment:
      POSTGRESQL_USERNAME: "${DB_USER:-admin}"
      POSTGRESQL_PASSWORD: "${DB_PASSWORD}"
      POSTGRESQL_DATABASE: "${DB_DATABASE:-rasa}"
    volumes:
      - ./db:/bitnami/postgresql
  rabbit:
    restart: always
    image: "bitnami/rabbitmq:3.8.9"
    environment:
      RABBITMQ_HOST: "rabbit"
      RABBITMQ_USERNAME: "user"
      RABBITMQ_PASSWORD: ${RABBITMQ_PASSWORD}
      RABBITMQ_DISK_FREE_LIMIT: "{mem_relative, 0.1}"
    expose:
      - "5672"
  duckling:
    restart: always
    image: "rasa/duckling:0.1.6.4"
    expose:
      - "8000"
    command: ["duckling-example-exe", "--no-access-log", "--no-error-log"]
  nginx:
    restart: always
    image: "rasa/nginx:${RASA_X_VERSION}"
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - ./certs:/opt/bitnami/certs
      - ./terms:/opt/bitnami/nginx/conf/bitnami/terms
    depends_on:
      - rasa-x
      - rasa-production
      - app
  redis:
    restart: always
    image: "bitnami/redis:6.0.8"
    environment:
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    expose:
      - "6379"
If you want to hit the Rasa Open Source server specifically, add /core to the endpoint, so here: https://2286950d1621.jp.ngrok.io/core/model/parse
The /core prefix tells the nginx proxy which service to send your request to. If you want to hit the Rasa X API instead, add /api.
What you are seeing currently is the Rasa X UI response, which is the default upstream (which is why you can log into the UI on the base URL directly).
This question already has answers here:
Docker Compose wait for container X before starting Y
(20 answers)
Closed 2 years ago.
I have a Java Spring Boot application which uses a MySQL DB. I want to start my Spring application only after MySQL is up and running (MySQL takes 40-60 seconds to come up). Please suggest how to achieve this.
Here is the compose file:
version: "3.8"
services:
  mysql:
    networks:
      - my-network-1
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_USER: root
      MYSQL_DATABASE: mydb
    expose:
      - "3306"
  my-spring:
    depends_on:
      - mysql
    build:
      context: .
      dockerfile: dockerfile.dockerfile
    networks:
      - my-network-1
    expose:
      - "8080"
networks:
  my-network-1:
    driver: overlay
Here is the Dockerfile:
FROM openjdk:8u252-jdk
ARG JAR_FILE=/somepath/jar.jar
COPY ${JAR_FILE} my.jar
ENTRYPOINT ["java","-jar","my.jar"]
Currently I am getting a connection refused error.
Thanks
Adarsha
Use this under the mysql service in your docker-compose file:
healthcheck:
  test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
  interval: 1m30s
  timeout: 20s
  retries: 10
So your compose file should look like this:
version: "3.8"
services:
  mysql:
    networks:
      - my-network-1
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_USER: root
      MYSQL_DATABASE: mydb
    expose:
      - "3306"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 1m30s
      timeout: 20s
      retries: 10
  my-spring:
    depends_on:
      - mysql
    build:
      context: .
      dockerfile: dockerfile.dockerfile
    networks:
      - my-network-1
    expose:
      - "8080"
networks:
  my-network-1:
    driver: overlay
If the above solution doesn't work, I would recommend going through the docs: https://docs.docker.com/compose/startup-order/
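Note that a healthcheck alone does not make depends_on wait for health; the short form only waits for the container to start. The newer Compose specification (the docker compose v2 CLI; the condition form was dropped from the classic version-3 file format) can gate startup on the healthcheck with the long form, sketched here:

```yaml
my-spring:
  depends_on:
    mysql:
      condition: service_healthy
```

With this, my-spring is only started once the mysql healthcheck above reports healthy.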