[docker-compose question]
hello all! I've been stuck on this for a while so hopefully we can debug together.
I'm using docker compose to bring up three separate services.
Everything builds and comes up great. The health check for the app passes and the services make contact with each other, but I can't seem to curl my app from the host.
I've tried the following values for app.ports:
"127.0.0.1:3000:3000"
"3000:3000"
"0.0.0.0:3000:3000"
I've also tried running this with a "host" network, but that didn't seem to work either, and I'd rather avoid it because host networking apparently isn't supported on Mac, and my local development environment is macOS. The prod server is Ubuntu.
And I've tried defining the default bridge network explicitly:
networks:
default:
driver: bridge
Here is my docker-compose.yml
version: "2.4"
services:
rabbitmq:
image: rabbitmq
volumes:
- ${ML_FILE_PATH}/taskqueue/config/:/etc/rabbitmq/
environment:
LC_ALL: "C.UTF-8"
LANG: "C.UTF-8"
celery-worker:
image: ${ML_IMAGE_NAME}
entrypoint: "celery --broker='amqp://<user>:<password>#rabbitmq:5672//' -A taskqueue.celeryapp worker --uid 1111"
runtime: ${RUNTIME} ## either "runc" if running locally on debug mode or "nvidia" on production with multi processors
volumes:
- ${ML_FILE_PATH}:/host
depends_on:
- rabbitmq
- app
environment:
LC_ALL: "C.UTF-8"
LANG: "C.UTF-8"
MPLCONFIGDIR: /host/tmp
celery-beat:
image: ${ML_IMAGE_NAME}
entrypoint: "celery --broker='amqp://<user>:<password>#rabbitmq:5672//' -A taskqueue.celeryapp beat --uid 1111"
runtime: ${RUNTIME} ## either "runc" if running locally on debug mode or "nvidia" on production with multi processors
depends_on:
- rabbitmq
- app
environment:
LC_ALL: "C.UTF-8"
LANG: "C.UTF-8"
MPLCONFIGDIR: /host/tmp
volumes:
- ${ML_FILE_PATH}:/host
app:
build: .
entrypoint: ${ML_ENTRYPOINT} # just starts a flask app
image: ${ML_IMAGE_NAME}
ports:
- "3000:3000"
expose:
- "3000"
volumes:
- ${ML_FILE_PATH}:/host
restart: always
runtime: ${RUNTIME}
healthcheck:
test: ["CMD", "curl", "http:/localhost:3000/?requestType=health-check"]
start_period: 30s
interval: 30s
timeout: 5s
environment:
SCHEDULER: "off"
TZ: "UTC"
LC_ALL: "C.UTF-8"
LANG: "C.UTF-8"
I can hit the service from within the container as expected.
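To be concrete, this is the kind of check that fails from the host (the path matches the container healthcheck above):
curl -v "http://localhost:3000/?requestType=health-check"
(docker compose ps is also worth checking: its PORTS column shows whether port 3000 was actually published, e.g. 0.0.0.0:3000->3000/tcp.)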
I'm not sure what I'm missing. Thanks so much for any help!
I'm not sure, but I don't think you can route traffic from the host to containers on macOS.
https://docs.docker.com/desktop/mac/networking/
This ended up being mostly unrelated to docker-compose.
My Flask app was starting up on 127.0.0.1. I needed to start it as an externally visible server; I just had to add --host=0.0.0.0 to my start script.
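A minimal sketch of that change, assuming the app is launched with flask run (the port matches the compose file above):
# Before: Flask binds to 127.0.0.1 inside the container, so the published
# port mapping has nothing to forward to.
flask run --port=3000
# After: bind to all interfaces so traffic arriving via the port mapping is accepted.
flask run --host=0.0.0.0 --port=3000
The same applies when starting the server programmatically: app.run(host="0.0.0.0", port=3000).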
Related
I am getting a segmentation fault and "docker exited with code 139" when running the hyperledger-explorer Docker image.
docker-compose file for creating explorer-db
version: "2.1"
volumes:
data:
walletstore:
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorerdb.mynetwork.com:
image: hyperledger/explorer-db:V1.0.0
container_name: explorerdb.mynetwork.com
hostname: explorerdb.mynetwork.com
restart: always
ports:
- 54320:5432
environment:
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWORD=password
healthcheck:
test: "pg_isready -h localhost -p 5432 -q -U postgres"
interval: 30s
timeout: 10s
retries: 5
volumes:
- data:/var/lib/postgresql/data
networks:
mynetwork.com:
aliases:
- postgresdb
pgadmin:
image: dpage/pgadmin4
restart: always
environment:
      PGADMIN_DEFAULT_EMAIL: user@domain.com
PGADMIN_DEFAULT_PASSWORD: SuperSecret
PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION: "True"
# PGADMIN_CONFIG_LOGIN_BANNER: "Authorized Users Only!"
PGADMIN_CONFIG_CONSOLE_LOG_LEVEL: 10
volumes:
- "pgadmin_4:/var/lib/pgadmin"
ports:
- 8080:80
networks:
- mynetwork.com
docker-compose-explorer file
version: "2.1"
volumes:
data:
walletstore:
external: true
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorer.mynetwork.com:
image: hyperledger/explorer:V1.0.0
container_name: explorer.mynetwork.com
hostname: explorer.mynetwork.com
# restart: always
environment:
- DATABASE_HOST=xx.xxx.xxx.xxx
      # Host is the VM IP address, with ports exposed for Postgres. No issues here.
- DATABASE_PORT=54320
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWD=password
- LOG_LEVEL_APP=debug
- LOG_LEVEL_DB=debug
- LOG_LEVEL_CONSOLE=info
# - LOG_CONSOLE_STDOUT=true
- DISCOVERY_AS_LOCALHOST=false
volumes:
- ./config.json:/opt/explorer/app/platform/fabric/config.json
- ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
- ./examples/net1/crypto:/tmp/crypto
- walletstore:/opt/wallet
- ./crypto-config/:/etc/data
command: sh -c "node /opt/explorer/main.js && tail -f /dev/null"
ports:
- 6060:6060
networks:
- mynetwork.com
error
Attaching to explorer.mynetwork.com
explorer.mynetwork.com | Segmentation fault
explorer.mynetwork.com exited with code 139
Postgres is working fine. Docker is updated to the latest version.
Fabric network being used is generated inside IBM Blockchain VS Code extension.
I too faced the same problem with the Docker images, though I succeeded with a manual start.sh and not with the Docker image. After some exploration, I came to know that this is build/architecture related: there seems to be a segmentation fault issue in the latest v1.0.0 container image.
This has been fixed on the latest master branch, but the fix has not yet been released on Docker Hub.
Please build the Explorer container image yourself using build_docker_image.sh on your local machine for the time being.
(From the HLF forum.)
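For reference, a rough sketch of that local build; build_docker_image.sh comes from the quote above, and the repository URL is my assumption:
# repository URL is an assumption; adjust to wherever your Explorer sources live
git clone https://github.com/hyperledger/blockchain-explorer.git
cd blockchain-explorer
./build_docker_image.sh
docker images | grep explorer   # the freshly built image should appear here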
Okay!! So I did some testing and found that if Docker is set to run on Windows login, Explorer will throw a segmentation fault error, but if I start Docker manually after the Windows login, it works well. Strange!!
I want to run RabbitMQ in one container, and a worker process in another. The worker process needs to access RabbitMQ.
I'd like these to be managed through docker-compose.
This is my docker-compose.yml file so far:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
worker:
build: ./worker
depends_on:
- rabbitmq
# Allow access to docker daemon
volumes:
- /var/run/docker.sock:/var/run/docker.sock
So I've exposed the RabbitMQ ports. The worker process accesses RabbitMQ using the following URL:
amqp://guest:guest@rabbitmq:5672/
This is what they use in the official tutorial, but localhost has been swapped for rabbitmq, since the containers should be discoverable at a hostname identical to the container name:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Whenever I run this, I get a connection refused error:
Recreating ci_rabbitmq_1 ... done
Recreating ci_worker_1 ... done
Attaching to ci_rabbitmq_1, ci_worker_1
worker_1 | dial tcp 127.0.0.1:5672: connect: connection refused
ci_worker_1 exited with code 1
I find this interesting because it's using the IP 127.0.0.1, which (I think) is localhost, even though I specified rabbitmq as the hostname. I'm not an expert on Docker networking, so maybe this is desired.
I'm happy to supply more information if needed!
Edit
There is an almost identical question here. I think I need to wait until rabbitmq is up and running before starting the worker. I tried doing this with a healthcheck:
version: "2.1"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 10s
timeout: 10s
retries: 5
worker:
build: .
depends_on:
rabbitmq:
condition: service_healthy
(Note the different version.) This doesn't work, however - it always fails as not healthy.
Aha! I fixed it. @Ijaz was totally correct: the RabbitMQ service takes a while to start, and my worker tries to connect before it's running.
I tried using a fixed delay, but that failed whenever RabbitMQ took longer than usual.
This is also indicative of a larger architectural problem - what happens if the queuing service (RabbitMQ in my case) goes offline during production? Right now, my entire site fails. There needs to be some built-in redundancy and polling.
As described in this related answer, we can use healthchecks in docker-compose 3+:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- 5672
- 15672
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 5s
timeout: 15s
retries: 1
worker:
image: worker
restart: on-failure
depends_on:
- rabbitmq
Now the worker container will restart a few times while the rabbitmq container stays unhealthy. rabbitmq becomes healthy as soon as nc -z localhost 5672 succeeds - i.e. when the queue is live!
Here is a correct working example:
version: "3.8"
services:
rabbitmq:
image: rabbitmq:3.7.28-management
#container_name: rabbitmq
volumes:
- ./etc/rabbitmq/conf:/etc/rabbitmq/
- ./etc/rabbitmq/data/:/var/lib/rabbitmq/
- ./etc/rabbitmq/logs/:/var/log/rabbitmq/
environment:
RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE:-secret_cookie}
RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER:-admin}
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS:-admin}
ports:
- 5672:5672 #amqp
- 15672:15672 #http
- 15692:15692 #prometheus
healthcheck:
test: [ "CMD", "rabbitmqctl", "status"]
interval: 5s
timeout: 20s
retries: 5
mysql:
image: mysql
restart: always
volumes:
- ./etc/mysql/data:/var/lib/mysql
- ./etc/mysql/scripts:/docker-entrypoint-initdb.d
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: mysqldb
MYSQL_USER: ${MYSQL_DEFAULT_USER:-testuser}
MYSQL_PASSWORD: ${MYSQL_DEFAULT_PASSWORD:-testuser}
ports:
- "3306:3306"
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
timeout: 20s
retries: 10
trigger-batch-process-job:
build: .
environment:
- RMQ_USER=${RABBITMQ_DEFAULT_USER:-admin}
- RMQ_PASS=${RABBITMQ_DEFAULT_PASS:-admin}
- RMQ_HOST=${RABBITMQ_DEFAULT_HOST:-rabbitmq}
- RMQ_PORT=${RABBITMQ_DEFAULT_PORT:-5672}
- DB_USER=${MYSQL_DEFAULT_USER:-testuser}
- DB_PASS=${MYSQL_DEFAULT_PASSWORD:-testuser}
- DB_SERVER=mysql
- DB_NAME=mysqldb
- DB_PORT=3306
depends_on:
mysql:
condition: service_healthy
rabbitmq:
condition: service_healthy
Maybe you don't need to expose/map the ports on the host if you are only accessing the service from another container.
From the documentation:
Expose
Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.
expose:
- "3000"
- "8000"
So it should be like this:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
worker:
build: ./worker
depends_on:
- rabbitmq
# Allow access to docker daemon
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Also make sure to connect to rabbitmq only when it is ready to serve on its port.
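For example, a minimal entrypoint sketch that blocks until the broker accepts TCP connections (nc must be available in the worker image, and start-worker is a placeholder for your real command):
#!/bin/sh
# Poll the broker until the AMQP port accepts connections.
until nc -z rabbitmq 5672; do
    echo "waiting for rabbitmq..."
    sleep 1
done
exec start-worker   # placeholder for your actual worker command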
The cleanest way for Docker Compose v3.8:
version: "3.8"
services:
  worker:
    build: ./worker
    depends_on:
      rabbitmq:
        condition: service_healthy
rabbitmq:
image: library/rabbitmq
ports:
- 5671:5671
- 5672:5672
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 5s
timeout: 10s
retries: 3
I have the following docker-compose, where I need to wait for the service jhipster-registry to be up and accepting connections before starting myprogram-app.
I tried the healthcheck way, following the official doc https://docs.docker.com/compose/compose-file/compose-file-v2/
version: '2.1'
services:
myprogram-app:
image: myprogram
mem_limit: 1024m
environment:
- SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
- SPRING_DATASOURCE_URL=jdbc:postgresql://myprogram-postgresql:5432/myprogram
- JHIPSTER_SLEEP=0
- SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=myprogram-elasticsearch:9300
- JHIPSTER_REGISTRY_PASSWORD=53bqDrurQAthqrXG
- EMAIL_USERNAME
- EMAIL_PASSWORD
ports:
- 8080:8080
networks:
- backend
depends_on:
- jhipster-registry:
"condition": service_started
- myprogram-postgresql
- myprogram-elasticsearch
myprogram-postgresql:
image: postgres:9.6.5
mem_limit: 256m
environment:
- POSTGRES_USER=myprogram
- POSTGRES_PASSWORD=myprogram
networks:
- backend
myprogram-elasticsearch:
image: elasticsearch:2.4.6
mem_limit: 512m
networks:
- backend
jhipster-registry:
extends:
file: jhipster-registry.yml
service: jhipster-registry
mem_limit: 512m
ports:
- 8761:8761
networks:
- backend
healthcheck:
test: "exit 0"
networks:
backend:
driver: "bridge"
but I get the following error when running docker-compose up:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.myprogram-app.depends_on contains {"jhipster-registry": {"condition": "service_started"}}, which is an invalid type, it should be a string
Am I doing something wrong, or is this feature no longer supported? How can I achieve this synchronization between services?
Updated version
version: '2.1'
services:
myprogram-app:
image: myprogram
mem_limit: 1024m
environment:
- SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}@jhipster-registry:8761/config
- SPRING_DATASOURCE_URL=jdbc:postgresql://myprogram-postgresql:5432/myprogram
- JHIPSTER_SLEEP=0
- SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=myprogram-elasticsearch:9300
- JHIPSTER_REGISTRY_PASSWORD=53bqDrurQAthqrXG
- EMAIL_USERNAME
- EMAIL_PASSWORD
ports:
- 8080:8080
networks:
- backend
depends_on:
jhipster-registry:
condition: service_healthy
myprogram-postgresql:
condition: service_started
myprogram-elasticsearch:
condition: service_started
#restart: on-failure
myprogram-postgresql:
image: postgres:9.6.5
mem_limit: 256m
environment:
- POSTGRES_USER=myprogram
- POSTGRES_PASSWORD=tuenemreh
networks:
- backend
myprogram-elasticsearch:
image: elasticsearch:2.4.6
mem_limit: 512m
networks:
- backend
jhipster-registry:
extends:
file: jhipster-registry.yml
service: jhipster-registry
mem_limit: 512m
ports:
- 8761:8761
networks:
- backend
healthcheck:
test: ["CMD", "curl", "-f", "http://jhipster-registry:8761", "|| exit 1"]
interval: 30s
retries: 20
#start_period: 30s
networks:
backend:
driver: "bridge"
The updated version gives me a different error,
ERROR: for myprogram-app Container "8ebca614590c" is unhealthy.
ERROR: Encountered errors while bringing up the project.
saying that the jhipster-registry container is unhealthy, even though it's reachable via browser. How can I fix the healthcheck command to make it work?
Best Approach - Resilient App Starts
While Docker does support startup dependencies, the official recommendation is to update your app's start logic to test for the availability of external dependencies and retry. Besides circumventing the race condition in docker compose up, this makes applications more robust against dependencies restarting in the wild.
depends_on & service_healthy - Compose 1.27.0+
The condition form of depends_on is back in Docker Compose v1.27.0+ (it had been dropped in the v3 file format) as part of the Compose Specification.
Each dependency should also implement a healthcheck so it can report when it is fully set up and ready for downstream services.
version: '3.0'
services:
php:
build:
context: .
dockerfile: tests/Docker/Dockerfile-PHP
depends_on:
redis:
condition: service_healthy
redis:
image: redis
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 1s
timeout: 3s
retries: 30
wait-for-it.sh
The recommended approach from Docker, according to their docs on Control startup and shutdown order in Compose, is to download wait-for-it.sh, which polls the given domain:port and then executes the next set of commands once the port accepts connections.
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
db:
image: postgres
Note: This requires overriding the startup command of the image, so make sure you know what the image's default command is in order to maintain parity with its default startup.
Further Reading
Docker Compose wait for container X before starting Y
Difference between links and depends_on in docker-compose.yml
How can I wait for a docker container to be up and running?
Docker Compose Wait til dependency container is fully up before launching
depends_on doesn't wait for another service in docker-compose 1.22.0
The documentation suggests that, in Docker Compose version 2 files specifically, depends_on: can be a list of strings, or a mapping where the keys are service names and the values are conditions. For the services where you don't have (or need) health checks, there is a service_started condition.
depends_on:
# notice: these lines don't start with "-"
jhipster-registry:
condition: service_healthy
myprogram-postgresql:
condition: service_started
myprogram-elasticsearch:
condition: service_started
Depending on how much control you have over your program and its libraries, it's better still if you can arrange for the service to start without its dependencies necessarily being available (equivalently, to keep functioning if its dependencies die while the service is running), and not use the depends_on: option at all. You might return an HTTP 503 Service Unavailable error if the database is down, for instance. Another strategy that is often helpful is to exit immediately if your dependencies aren't available, and use a setting like restart: on-failure to ask the orchestrator to restart the service, as sketched below.
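A minimal sketch of that restart-based strategy, reusing the service names from the question:
version: '2.1'
services:
  myprogram-app:
    image: myprogram
    # The app exits immediately when jhipster-registry is unreachable;
    # Compose restarts it until the connection finally succeeds.
    restart: on-failure
    depends_on:
      - jhipster-registry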
Update to version 3+ and follow the version 3 documentation:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be "ready" before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
Version 3 no longer supports the condition form of depends_on.
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
I would consider using the restart_policy option for configuring your myprogram-app to restart until the jhipster-registry is up and accepting connections:
restart_policy:
condition: on-failure
delay: 3s
max_attempts: 5
window: 60s
With the new Docker Compose API, we can now use the --wait option:
docker compose up --wait
If your service has a healthcheck, Docker waits until it has the "healthy" status; otherwise, it waits for the service to be started. That's why it is crucial to have relevant healthchecks for all your services.
Note that this option automatically activates the --detach option.
Check out the documentation here.
The best approach I found is to check for the desired port in the entrypoint. There are different ways to do that, e.g. wait-for-it, but I like this solution because it is cross-platform between Alpine and Bash images and doesn't download custom scripts from GitHub:
Install netcat-openbsd (works with both apt and apk). Then use this in the entrypoint (works with both #!/bin/bash and #!/bin/sh):
#!/bin/bash
wait_for()
{
    echo "Waiting $1 seconds for $2:$3"
    timeout $1 sh -c 'until nc -z $0 $1; do sleep 0.1; done' $2 $3 || return 1
    echo "$2:$3 available"
}
wait_for 10 db 5432
wait_for 10 redis 6379
You can also make this into a 1-liner if you don't want to print anything.
Although you already got an answer, it should be mentioned that what you are trying to achieve has some nasty risks.
Ideally a service should be self-sufficient and smart enough to retry and await its dependencies becoming available before giving up and going down. Otherwise you will be more exposed to one failure propagating to other services. Also consider that a system reboot, unlike a manual start, might ignore the dependency order.
If one service's crash causes your whole system to go down, you might have a tool to restart everything again, but it would be better to have services that can resist that case.
After trying several approaches, IMO the simplest and most elegant option is using the jwilder/dockerize (dockerize) utility image with its -wait flag. Here is a simple example where I need a PostgreSQL database to be ready before starting my app:
version: "3.8"
services:
# Start Postgres.
db:
image: postgres
# Wait for Postgres to be joinable.
check-db-started:
image: jwilder/dockerize:0.6.1
depends_on:
- db
command: 'dockerize -wait=tcp://db:5432'
# Only start myapp once Postgres is joinable.
myapp:
image: myapp:latest
depends_on:
- check-db-started
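If I read the dockerize README correctly, it also accepts multiple -wait flags and a -timeout, so one check container can gate several dependencies (redis here is a hypothetical second service):
command: 'dockerize -wait=tcp://db:5432 -wait=tcp://redis:6379 -timeout=60s'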
I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am running the airflow webserver, airflow scheduler, airflow worker, and airflow flower. The airflow.cfg file is used to configure Airflow.
I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/
My docker-compose file is as follows:
version: '3'
services:
rabbit1:
image: "rabbitmq:3-management"
hostname: "rabbit1"
environment:
RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
RABBITMQ_DEFAULT_USER: "user"
RABBITMQ_DEFAULT_PASS: "password"
RABBITMQ_DEFAULT_VHOST: "/"
ports:
- "5672:5672"
- "15672:15672"
labels:
NAME: "rabbitmq1"
webserver:
build: "airflow/"
hostname: "webserver"
restart: always
environment:
- EXECUTOR=Celery
ports:
- "8080:8080"
depends_on:
- rabbit1
command: webserver
scheduler:
build: "airflow/"
hostname: "scheduler"
restart: always
environment:
- EXECUTOR=Celery
depends_on:
- webserver
- flower
- worker
command: scheduler
worker:
build: "airflow/"
hostname: "worker"
restart: always
depends_on:
- webserver
environment:
- EXECUTOR=Celery
command: worker
flower:
build: "airflow/"
hostname: "flower"
restart: always
environment:
- EXECUTOR=Celery
ports:
- "5555:5555"
depends_on:
- rabbit1
- webserver
- worker
command: flower
I am able to build the images using docker compose. However, I am not able to connect my Airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried using both 127.0.0.1 and localhost.
What am I doing wrong?
From within your airflow containers, you should be able to connect to the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
Docker compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 & 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, it is generally not recommended to build images inside docker-compose.
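A sketch of the corresponding airflow.cfg change; the setting names are taken from the question, and the [celery] section header is my assumption about where they live:
# section header assumed; check your Airflow version's config layout
[celery]
broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/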
I solved this issue by installing the RabbitMQ server directly on my system with sudo apt install rabbitmq-server.
I'm trying to integrate the new healthcheck feature into my Docker setup, but I don't really know how to do it the right way. :/
The problem is that my database container needs more time to start up and initialize the database than the container that starts my main application.
As a result, the main container won't start correctly because the database connection is missing.
I wrote a healthcheck.sh script to check the database container for connectivity, so that the main container only starts booting once connectivity is available. But I don't know how to integrate it correctly into the Dockerfile and my docker-compose.yml.
healthcheck.sh is like:
#!/bin/bash
COUNTER=0
while [[ $COUNTER = 0 ]]; do
    mysql --host=HOST --user="user" --password="password" --database="databasename" --execute="SELECT 1";
    if [[ $? -ne 0 ]]; then
        sleep 1
        echo "Let's sleep again"
    else
        COUNTER=1
        echo "OK, let's go!"
    fi
done
mysql container Dockerfile:
FROM repository/mysql-5.6:latest
MAINTAINER Me
... some copies, chmod and so on
VOLUME ["/..."]
EXPOSE 3306
CMD [".../run.sh"]
HEALTHCHECK --interval=1s --timeout=3s CMD ./healthcheck.sh
docker-compose.yml like:
version: '2'
services:
db:
image: db image
restart: always
dns:
- 10.
ports:
- "${MYSQL_EXTERNAL_PORT}:${MYSQL_INTERNAL_PORT}"
environment:
TZ: Europe/Berlin
data:
image: data image
main application:
image: application image
restart: always
dns:
- 10.
ports:
- "${..._EXTERNAL_PORT}:${..._INTERNAL_PORT}"
environment:
TZ: Europe/Berlin
volumes:
- ${HOST_BACKUP_DIR}:/...
volumes_from:
- data
- db
What do I have to do to integrate this healthcheck into my docker-compose.yml file so that it works?
Or is there any other way to delay the startup of my main container?
Thanks, Markus
I believe this is similar to Docker Compose wait for container X before starting Y.
Your db_image needs to support curl.
To do that, create your own db_image as:
FROM base_image:latest
RUN apt-get update
RUN apt-get install -y curl
EXPOSE 3306
Then all you should need is a docker-compose.yml that looks like this:
version: '2'
services:
db:
image: db_image
restart: always
dns:
- 10.
ports:
- "${MYSQL_EXTERNAL_PORT}:${MYSQL_INTERNAL_PORT}"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:${MYSQL_INTERNAL_PORT}"]
interval: 30s
timeout: 10s
retries: 5
environment:
TZ: Europe/Berlin
main_application:
image: application_image
restart: always
depends_on:
db:
condition: service_healthy
links:
- db
dns:
- 10.
ports:
- "${..._EXTERNAL_PORT}:${..._INTERNAL_PORT}"
environment:
TZ: Europe/Berlin
volumes:
- ${HOST_BACKUP_DIR}:/...
volumes_from:
- data
- db
In general your application should be able to cope with unavailable resources, but there are also some cases when starting up where it is pretty convenient to have one container waiting for another to be "fully available". Docker itself doesn't handle that for you, but there are ways to handle the startup in the resource-using container by delaying the actual command with some script.
There is a good example for a postgresql startup check that can be used in any container that needs to wait for the database to be "fully started". Please see the sample code in the docker docs: https://docs.docker.com/compose/startup-order/
Since docker-compose 1.10.0 you can specify healthchecks in your compose file: https://github.com/docker/docker.github.io/blob/master/compose/compose-file.md#healthcheck
It makes use of https://docs.docker.com/engine/reference/builder/#/healthcheck, which was introduced with Docker 1.12.
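Applied to the compose file above, a rough sketch of that wiring (v2.1 syntax; mysqladmin ping stands in for the custom healthcheck.sh, and the image names are placeholders):
version: '2.1'
services:
  db:
    image: db-image            # placeholder
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 30
  main-application:
    image: application-image   # placeholder
    depends_on:
      db:
        condition: service_healthy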