Airflow: how to run webserver and scheduler together from a docker image?

I'm somewhat inexperienced with both Docker and Airflow, so this might be a silly question. I have a Dockerfile that uses the apache/airflow image together with some of my own DAGs. I would like to launch the airflow web server together with the scheduler and I'm having trouble with this. I can get it working, but I feel that I'm approaching this incorrectly.
Here is what my Dockerfile looks like:
FROM apache/airflow
COPY airflow/dags/ /opt/airflow/dags/
RUN airflow initdb
Then I run docker build -t learning/airflow .. Here is the tough part: I then run docker run --rm -tp 8080:8080 learning/airflow:latest webserver and in a separate terminal I run docker exec `docker ps -q` airflow scheduler. The trouble is that in practice this generally happens on a VM somewhere, so opening up a second terminal is just not an option, and multiple machines will probably not have access to the same docker container. Running webserver && scheduler does not seem to work either: the webserver blocks, and I still see the message "The scheduler does not appear to be running" in the Airflow UI.
Any ideas on what the right way to run server and scheduler should be?
Many thanks!

First, thanks to @Alex and @abestrad for suggesting docker-compose here -- I think this is the best solution. I finally managed to get it working by referring to this great post. So here is my solution:
First, my Dockerfile looks like this now:
FROM apache/airflow
RUN pip install --upgrade pip
RUN pip install --user psycopg2-binary
COPY airflow/airflow.cfg /opt/airflow/
Note that I'm no longer copying DAGs into the image; they will be mounted through volumes instead. I then build the image via docker build -t learning/airflow .. My docker-compose.yaml looks like this:
version: "3"
services:
postgres:
image: "postgres:9.6"
container_name: "postgres"
environment:
- POSTGRES_USER=airflow
- POSTGRES_PASSWORD=airflow
- POSTGRES_DB=airflow
ports:
- "5432:5432"
volumes:
- ./data/postgres:/var/lib/postgresql/data
initdb:
image: learning/airflow
entrypoint: airflow initdb
depends_on:
- postgres
webserver:
image: learning/airflow
restart: always
entrypoint: airflow webserver
healthcheck:
test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-webserver.pid ]"]
interval: 30s
timeout: 30s
retries: 3
ports:
- "8080:8080"
depends_on:
- postgres
volumes:
- ./airflow/dags:/opt/airflow/dags
- ./airflow/plugins:/opt/airflow/plugins
- ./data/logs:/opt/airflow/logs
scheduler:
image: learning/airflow
restart: always
entrypoint: airflow scheduler
healthcheck:
test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-scheduler.pid ]"]
interval: 30s
timeout: 30s
retries: 3
depends_on:
- postgres
volumes:
- ./airflow/dags:/opt/airflow/dags
- ./airflow/plugins:/opt/airflow/plugins
- ./data/logs:/opt/airflow/logs
To use it, first run docker-compose up postgres, then docker-compose up initdb and then docker-compose up webserver scheduler. That's it!
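As a possible refinement (not part of the original setup, and only if your docker-compose is new enough to understand service_completed_successfully, roughly 1.29+ / the Compose Spec), the three-step startup could be collapsed by letting webserver and scheduler wait for initdb to exit successfully. A rough sketch showing only the depends_on part of each service; everything else stays as in the file above:
  webserver:
    depends_on:
      postgres:
        condition: service_started
      initdb:
        condition: service_completed_successfully   # wait until "airflow initdb" has exited with code 0
  scheduler:
    depends_on:
      postgres:
        condition: service_started
      initdb:
        condition: service_completed_successfully
With that in place, a single docker-compose up should bring the whole stack up in order.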

Spinning up two docker containers alone may not achieve your goal, as you would need communication between the containers. You can manually set up a docker network between your containers, although I haven't tried this approach personally.
An easier way is to use docker-compose: you define your services in a YAML file and let docker-compose create them for you.
version: '2.1'
services:
  webserver:
    image: puckel/docker-airflow:1.10.4
    restart: always
    ...
  scheduler:
    image: puckel/docker-airflow:1.10.4
    restart: always
    depends_on:
      - webserver
    ...
You can find the complete file here

Note: your question applies to any processes, not only Airflow.
It's not recommended, of course, but the Docker documentation covers supervisor, which monitors and runs multiple processes under a single supervisord daemon:
https://docs.docker.com/config/containers/multi-service_container/

Related

Docker compose to turn on multiple container based on each other [duplicate]

I am using rabbitmq and a simple python sample from here
together with docker-compose. My problem is that I need to wait for rabbitmq to be fully started. From what I have searched so far, I don't know how to make container x (in my case worker) wait until container y (rabbitmq) is started.
I found this blog post where he checks if the other host is online.
I also found this docker command:
wait
Usage: docker wait CONTAINER [CONTAINER...]
Block until a container stops, then print its exit code.
Waiting for a container to stop is maybe not what I am looking for, but if it is, is it possible to use that command inside the docker-compose.yml?
My solution so far is to wait some seconds and check the port, but is this the way to achieve this? If I don't wait, I get an error.
docker-compose.yml
worker:
  build: myapp/.
  volumes:
    - myapp/.:/usr/src/app:ro
  links:
    - rabbitmq
rabbitmq:
  image: rabbitmq:3-management
python hello sample (rabbit.py):
import pika
import time
import socket

pingcounter = 0
isreachable = False
while isreachable is False and pingcounter < 5:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(('rabbitmq', 5672))
        isreachable = True
    except socket.error as e:
        time.sleep(2)
        pingcounter += 1
    s.close()

if isreachable:
    connection = pika.BlockingConnection(pika.ConnectionParameters(
        host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue='hello')
    channel.basic_publish(exchange='',
                          routing_key='hello',
                          body='Hello World!')
    print(" [x] Sent 'Hello World!'")
    connection.close()
Dockerfile for worker:
FROM python:2-onbuild
RUN ["pip", "install", "pika"]
CMD ["python","rabbit.py"]
Update Nov 2015:
A shell script or waiting inside your program is maybe a possible solution. But after seeing this Issue I am looking for a command or feature of docker/docker-compose itself.
They mention a solution for implementing a health check, which may be the best option. An open TCP connection does not mean your service is ready or will remain ready. In addition to that, I would need to change the entrypoint in my Dockerfile.
So I am hoping for an answer that uses docker-compose's on-board commands, which will hopefully be the case once they finish this issue.
Update March 2016
There is a proposal for providing a built-in way to determine if a container is "alive". So docker-compose can maybe make use of it in the near future.
Update June 2016
It seems that the healthcheck will be integrated into docker in Version 1.12.0
Update January 2017
I found a docker-compose solution, see:
Docker Compose wait for container X before starting Y
Finally found a solution with a docker-compose method. Since docker-compose file format 2.1 you can define healthchecks.
I did it in an example project; you need at least Docker 1.12.0.
I also needed to extend the rabbitmq-management Dockerfile, because curl isn't installed on the official image.
Now I test whether the management page of the rabbitmq container is available. If curl finishes with exit code 0, the app container (python pika) is started and publishes a message to the hello queue. It's now working (see output below).
docker-compose (version 2.1):
version: '2.1'
services:
  app:
    build: app/.
    depends_on:
      rabbit:
        condition: service_healthy
    links:
      - rabbit
  rabbit:
    build: rabbitmq/.
    ports:
      - "15672:15672"
      - "5672:5672"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:15672"]
      interval: 30s
      timeout: 10s
      retries: 5
output:
rabbit_1 | =INFO REPORT==== 25-Jan-2017::14:44:21 ===
rabbit_1 | closing AMQP connection <0.718.0> (172.18.0.3:36590 -> 172.18.0.2:5672)
app_1 | [x] Sent 'Hello World!'
healthcheckcompose_app_1 exited with code 0
Dockerfile (rabbitmq + curl):
FROM rabbitmq:3-management
RUN apt-get update
RUN apt-get install -y curl
EXPOSE 4369 5671 5672 25672 15671 15672
Version 3 no longer supports the condition form of depends_on.
So I moved from depends_on to restart: on-failure. Now my app container restarts 2-3 times until it is working, but it is still a docker-compose feature without overwriting the entrypoint.
docker-compose (version 3):
version: "3"
services:
rabbitmq: # login guest:guest
image: rabbitmq:management
ports:
- "4369:4369"
- "5671:5671"
- "5672:5672"
- "25672:25672"
- "15671:15671"
- "15672:15672"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:15672"]
interval: 30s
timeout: 10s
retries: 5
app:
build: ./app/
environment:
- HOSTNAMERABBIT=rabbitmq
restart: on-failure
depends_on:
- rabbitmq
links:
- rabbitmq
Quite recently they've added the depends_on feature.
Edit:
From Compose file format 2.1 up to (but not including) version 3, you can use depends_on in conjunction with healthcheck to achieve this:
From the docs:
version: '2.1'
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
  redis:
    image: redis
  db:
    image: redis
    healthcheck:
      test: "exit 0"
Before version 2.1
You can still use depends_on, but it only affects the order in which services are started - not whether they are ready before the dependent service is started.
It seems to require at least version 1.6.0.
Usage would look something like this:
version: '2'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
From the docs:
Express dependency between services, which has two effects:
docker-compose up will start services in dependency order. In the following example, db and redis will be started before web.
docker-compose up SERVICE will automatically include SERVICE’s dependencies. In the following example, docker-compose up web will also create and start db and redis.
Note: As I understand it, although this does set the order in which containers are started, it does not guarantee that the service inside the container has actually loaded.
For example, your postgres container might be up, but the postgres service itself might still be initializing within the container.
Natively that is not possible, yet. See also this feature request.
For now you need to handle that in your container's CMD and wait until all required services are there.
In the Dockerfile's CMD you can refer to your own start script that wraps starting your container's service. Before you start it, you wait for a dependency like this:
Dockerfile
FROM python:2-onbuild
RUN ["pip", "install", "pika"]
ADD start.sh /start.sh
CMD ["/start.sh"]
start.sh
#!/bin/bash
while ! nc -z rabbitmq 5672; do sleep 3; done
python rabbit.py
You probably need to install netcat in your Dockerfile as well; I do not know what is pre-installed on the python image.
There are a few tools out there that provide easy-to-use waiting logic for simple TCP port checks:
wait-for-it
dockerize
For more complex waits:
goss - Explanation blog
Using restart: unless-stopped or restart: always may solve this problem.
If the worker container stops because RabbitMQ is not ready yet, it will be restarted until it is.
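A minimal sketch of that idea (the service names simply mirror the question and are only for illustration):
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management
  worker:
    build: myapp/.
    depends_on:
      - rabbitmq
    # if the worker exits because RabbitMQ is not reachable yet,
    # Docker keeps restarting it until the connection succeeds
    restart: unless-stopped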
You can also just add the wait to the command option, e.g.:
command: bash -c "sleep 5; start.sh"
https://github.com/docker/compose/issues/374#issuecomment-156546513
To wait on a port you can also use something like this:
command: bash -c "while ! curl -s rabbitmq:5672 > /dev/null; do echo waiting for xxx; sleep 3; done; start.sh"
To increase the waiting time you can hack a bit more:
command: bash -c "for i in {1..100} ; do if ! curl -s rabbitmq:5672 > /dev/null ; then echo waiting on rabbitmq for $i seconds; sleep $i; fi; done; start.sh"
If you want to start a service only after another service has completed successfully (for example a migration or data-population job), docker-compose version 1.29 comes with built-in functionality for this - service_completed_successfully.
depends_on:
  <service-name>:
    condition: service_completed_successfully
According to the specification:
service_completed_successfully - specifies that a dependency is expected to run to successful completion before starting a dependent service
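For example, a rough sketch (service and image names are made up for illustration) where the app only starts after a one-shot migration container has exited with code 0:
services:
  migrate:
    image: myapp:latest              # hypothetical image
    command: ./run-migrations.sh     # runs once and exits
  app:
    image: myapp:latest
    depends_on:
      migrate:
        condition: service_completed_successfully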
restart: on-failure
did the trick for me, see below:
---
version: '2.1'
services:
  consumer:
    image: golang:alpine
    volumes:
      - ./:/go/src/srv-consumer
    working_dir: /go/src/srv-consumer
    environment:
      AMQP_DSN: "amqp://guest:guest@rabbitmq:5672"
    command: go run cmd/main.go
    links:
      - rabbitmq
    restart: on-failure
  rabbitmq:
    image: rabbitmq:3.7-management-alpine
    ports:
      - "15672:15672"
      - "5672:5672"
For container start ordering use:
depends_on:
For waiting until the previous container has actually started, use a script:
entrypoint: ./wait-for-it.sh db:5432
This article will help you:
https://docs.docker.com/compose/startup-order/
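Putting the two together, a minimal sketch (assuming wait-for-it.sh has been copied into the image and is executable, the database listens on port 5432, and the app command is a placeholder):
version: "3"
services:
  db:
    image: postgres
  app:
    build: .
    depends_on:
      - db                                   # start ordering only
    entrypoint: ./wait-for-it.sh db:5432 --  # block until db:5432 accepts TCP connections
    command: python app.py                   # placeholder for your real command
Because the command is appended to the entrypoint, the container effectively runs ./wait-for-it.sh db:5432 -- python app.py.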
Tried many different ways, but liked the simplicity of this: https://github.com/ufoscout/docker-compose-wait
The idea is that you can use an environment variable in the docker-compose file to submit a list of service hosts (with ports) which should be "awaited", like this: WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017.
So let's say you have the following docker-compose.yml file (copy/paste from the repo README):
version: "3"
services:
mongo:
image: mongo:3.4
hostname: mongo
ports:
- "27017:27017"
postgres:
image: "postgres:9.4"
hostname: postgres
ports:
- "5432:5432"
mysql:
image: "mysql:5.7"
hostname: mysql
ports:
- "3306:3306"
mySuperApp:
image: "mySuperApp:latest"
hostname: mySuperApp
environment:
WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017
Next, in order for services to wait, you need to add the following two lines to your Dockerfiles (into the Dockerfile of each service that should wait for other services to start):
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.5.0/wait /wait
RUN chmod +x /wait
A complete example of such a Dockerfile (again from the project repo README):
FROM alpine
## Add your application to the docker image
ADD MySuperApp.sh /MySuperApp.sh
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.5.0/wait /wait
RUN chmod +x /wait
## Launch the wait tool and then your application
CMD /wait && /MySuperApp.sh
For other details about possible usage, see the README.
You can also solve this by setting an entrypoint which waits for the service to be up using netcat (the docker-wait approach). I like this approach because you still have a clean command section in your docker-compose.yml and you don't need to add Docker-specific code to your application:
version: '2'
services:
  db:
    image: postgres
  django:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    entrypoint: ./docker-entrypoint.sh db 5432
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Then your docker-entrypoint.sh:
#!/bin/sh
postgres_host=$1
postgres_port=$2
shift 2
cmd="$@"
# wait for the postgres docker to be running
while ! nc $postgres_host $postgres_port; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
>&2 echo "Postgres is up - executing command"
# run the command
exec $cmd
This is nowadays documented in the official docker documentation.
PS: You should install netcat in your Docker image if it is not available. To do so, add this to your Dockerfile:
RUN apt-get update && apt-get install netcat-openbsd -y
There is a ready-to-use utility called "docker-wait" that can be used for waiting.
Based on this blog post: https://8thlight.com/blog/dariusz-pasciak/2016/10/17/docker-compose-wait-for-dependencies.html
I configured my docker-compose.yml as shown below:
version: "3.1"
services:
rabbitmq:
image: rabbitmq:3.7.2-management-alpine
restart: always
environment:
RABBITMQ_HIPE_COMPILE: 1
RABBITMQ_MANAGEMENT: 1
RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 0.2
RABBITMQ_DEFAULT_USER: "rabbitmq"
RABBITMQ_DEFAULT_PASS: "rabbitmq"
ports:
- "15672:15672"
- "5672:5672"
volumes:
- data:/var/lib/rabbitmq:rw
start_dependencies:
image: alpine:latest
links:
- rabbitmq
command: >
/bin/sh -c "
echo Waiting for rabbitmq service start...;
while ! nc -z rabbitmq 5672;
do
sleep 1;
done;
echo Connected!;
"
volumes:
data: {}
Then, to run it:
docker-compose up start_dependencies
The rabbitmq service will start in daemon mode, and start_dependencies will finish the work.
In version 3 of a Docker Compose file, you can use restart.
For example:
docker-compose.yml
worker:
build: myapp/.
volumes:
- myapp/.:/usr/src/app:ro
restart: on-failure
depends_on:
- rabbitmq
rabbitmq:
image: rabbitmq:3-management
Note that I used depends_on instead of links since the latter is deprecated in version 3.
Even though it works, it might not be the ideal solution since you restart the docker container at every failure.
Have a look at restart_policy as well; it lets you fine-tune the restart policy (see the sketch below).
When you use Compose in production, it is actually a best practice to use a restart policy:
Specifying a restart policy like restart: always to avoid downtime
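For reference, restart_policy lives under deploy: in the version 3 file format and is mainly honoured by docker stack deploy / swarm (plain docker-compose up needs the --compatibility flag). A rough sketch:
version: "3.7"
services:
  worker:
    build: myapp/.
    deploy:
      restart_policy:
        condition: on-failure   # only restart when the container exits with a non-zero code
        delay: 5s               # wait 5 seconds between restart attempts
        max_attempts: 3
        window: 120s            # how long to wait before deciding a restart has succeeded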
Not recommended for serious deployments, but here is essentially a "wait x seconds" command.
With docker-compose file format 3.4, a start_period option has been added to healthcheck. This means we can do the following:
docker-compose.yml:
version: "3.4"
services:
# your server docker container
zmq_server:
build:
context: ./server_router_router
dockerfile: Dockerfile
# container that has to wait
zmq_client:
build:
context: ./client_dealer/
dockerfile: Dockerfile
depends_on:
- zmq_server
healthcheck:
test: "sh status.sh"
start_period: 5s
status.sh:
#!/bin/sh
exit 0
What happens here is that the healthcheck is invoked after 5 seconds. This calls the status.sh script, which always returns "No problem".
We just made the zmq_client container wait 5 seconds before starting!
Note: It's important that you have version: "3.4". If the .4 is not there, docker-compose complains.
An alternative is to use a container orchestration platform like Kubernetes. Kubernetes has support for init containers, which run to completion before other containers can start. You can find an example here with a SQL Server 2017 Linux container, where the API container uses an init container to initialise the database:
https://www.handsonarchitect.com/2018/08/understand-kubernetes-object-init.html
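Not taken from the linked article, but roughly what such an init container looks like in a pod spec (names and images are placeholders; here the init container simply waits until the db service name resolves):
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  initContainers:
    - name: wait-for-db             # must finish before the main container starts
      image: busybox:1.36
      command: ['sh', '-c', 'until nslookup db; do echo waiting for db; sleep 2; done']
  containers:
    - name: api
      image: myapp:latest           # placeholder application image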
I currently also have the requirement of waiting for some services to be up and running before others start. I also read the suggestions here and in some other places, but most of them require the docker-compose.yml to be changed somehow.
So I started working on a solution which I consider to be an orchestration layer around docker-compose itself and I finally came up with a shell script which I called docker-compose-profile.
It can wait for a TCP connection to a certain container even if the service does not expose any port to the host directly. The trick I am using is to start another docker container inside the stack, and from there I can (usually) connect to every service (as long as no other network configuration is applied).
There is also a waiting method that watches for a certain log message.
Services can be grouped together to be started in a single step before the next step is triggered.
You can also exclude some services without listing all other services to start (like a collection of available services minus some excluded services).
This kind of configuration can be bundled to a profile.
There is a YAML configuration file called dcp.yml which (for now) has to be placed beside your docker-compose.yml file.
For your question this would look like:
command:
aliases:
upd:
command: "up -d"
description: |
Create and start container. Detach afterword.
profiles:
default:
description: |
Wait for rabbitmq before starting worker.
command: upd
steps:
- label: only-rabbitmq
only: [ rabbitmq ]
wait:
- 5#tcp://rabbitmq:5432
- label: all-others
You could now start your stack by invoking
dcp -p default upd
or even simply by
dcp
as there is only a default profile to run up -d on.
There is a tiny problem: my current version does not (yet) support a special waiting condition like the one you actually need, so there is no test that sends a message to rabbit.
I have already been thinking about a further waiting method that runs a certain command on the host or as a docker container. Then we could extend the tool with something like:
...
wait:
  - service: rabbitmq
    method: container
    timeout: 5
    image: python-test-rabbit
...
having a docker image called python-test-rabbit that does your check.
The benefit would be that there is no longer any need to put the waiting logic into your worker.
It would be isolated and stay inside the orchestration layer.
Maybe someone finds this helpful. Any suggestions are very welcome.
You can find this tool at https://gitlab.com/michapoe/docker-compose-profile
After trying several approaches, IMO the simplest and most elegant option is using the jwilder/dockerize utility image (mentioned by @Henrik Sachse, but he did not show a concrete example) with its -wait flag. Here is a simple example where I need RabbitMQ to be ready before starting my app:
version: "3.8"
services:
# Start RabbitMQ.
rabbit:
image: rabbitmq
# Wait for RabbitMQ to be joinable.
check-rabbit-started:
image: jwilder/dockerize:0.6.1
depends_on:
- rabbit
command: 'dockerize -wait=tcp://rabbit:5672'
# Only start myapp once RabbitMQ is joinable.
myapp:
image: myapp:latest
depends_on:
- check-rabbit-started
Here is an example where the main container waits for the worker until it starts responding to pings:
version: '3'
services:
  main:
    image: bash
    depends_on:
      - worker
    command: bash -c "sleep 2 && until ping -qc1 worker; do sleep 1; done &>/dev/null"
    networks:
      intra:
        ipv4_address: 172.10.0.254
  worker:
    image: bash
    hostname: test01
    command: bash -c "ip route && sleep 10"
    networks:
      intra:
        ipv4_address: 172.10.0.11
networks:
  intra:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/24
However, the proper way is to use healthcheck (>=2.1).
I guess the docker people really want us to wait on services using code in our own images. I still want to configure the services to wait for in docker-compose.yml. Here's one way if you're willing to use an entrypoint script.
Add this loop to your entrypoint script, using your choice of wait-for-it tool included in the image. I am using https://github.com/vishnubob/wait-for-it/. If you pass no services, the loop does nothing.
for service in "$#"; do
echo "$0: wait for service $service"
if ! wait-for-it "$service"; then
echo "$0: failed on service $service"
exit 1
fi
done
Pass required services with this entry for the container in docker-compose.yml:
command: ["my-data-svc:5000"]
This relies on the behavior that docker commands are passed as arguments to the entrypoint script. You can probably make a convincing argument that I'm abusing the intent of the docker command here. I'm not gonna die on that hill, it just works for me.
I just have 2 compose files and start one first and the second one later. My script looks like this:
#!/bin/bash
#before i build my docker files
#when done i start my build docker-compose
docker-compose -f docker-compose.build.yaml up
#now i start other docker-compose which needs the image of the first
docker-compose -f docker-compose.prod.yml up

How will you ensure that a container 1 runs before container 2 while using docker compose? [duplicate]


Docker - need a delay before starting the container [duplicate]

I am using rabbitmq and a simple python sample from here
together with docker-compose. My problem is that I need to wait for rabbitmq to be fully started. From what I searched so far, I don't know how to wait with container x (in my case worker) until y (rabbitmq) is started.
I found this blog post where he checks if the other host is online.
I also found this docker command:
wait
Usage: docker wait CONTAINER [CONTAINER...]
Block until a container stops, then print its exit code.
Waiting for a container to stop is maybe not what I am looking for but if
it is, is it possible to use that command inside the docker-compose.yml?
My solution so far is to wait some seconds and check the port, but is this the way to achieve this? If I don't wait, I get an error.
docker-compose.yml
worker:
build: myapp/.
volumes:
- myapp/.:/usr/src/app:ro
links:
- rabbitmq
rabbitmq:
image: rabbitmq:3-management
python hello sample (rabbit.py):
import pika
import time
import socket
pingcounter = 0
isreachable = False
while isreachable is False and pingcounter < 5:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect(('rabbitmq', 5672))
isreachable = True
except socket.error as e:
time.sleep(2)
pingcounter += 1
s.close()
if isreachable:
connection = pika.BlockingConnection(pika.ConnectionParameters(
host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
routing_key='hello',
body='Hello World!')
print (" [x] Sent 'Hello World!'")
connection.close()
Dockerfile for worker:
FROM python:2-onbuild
RUN ["pip", "install", "pika"]
CMD ["python","rabbit.py"]
Update Nov 2015:
A shell script or waiting inside your program is maybe a possible solution. But after seeing this Issue I am looking for a command or feature of docker/docker-compose itself.
They mention a solution for implementing a health check, which may be the best option. A open tcp connection does not mean your service is ready or may remain ready. In addition to that I need to change my entrypoint in my dockerfile.
So I am hoping for an answer with docker-compose on board commands, which will hopefully the case if they finish this issue.
Update March 2016
There is a proposal for providing a built-in way to determine if a container is "alive". So docker-compose can maybe make use of it in near future.
Update June 2016
It seems that the healthcheck will be integrated into docker in Version 1.12.0
Update January 2017
I found a docker-compose solution see:
Docker Compose wait for container X before starting Y
Finally found a solution with a docker-compose method. Since docker-compose file format 2.1 you can define healthchecks.
I did it in a example project
you need to install at least docker 1.12.0+.
I also needed to extend the rabbitmq-management Dockerfile, because curl isn't installed on the official image.
Now I test if the management page of the rabbitmq-container is available. If curl finishes with exitcode 0 the container app (python pika) will be started and publish a message to hello queue. Its now working (output).
docker-compose (version 2.1):
version: '2.1'
services:
app:
build: app/.
depends_on:
rabbit:
condition: service_healthy
links:
- rabbit
rabbit:
build: rabbitmq/.
ports:
- "15672:15672"
- "5672:5672"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:15672"]
interval: 30s
timeout: 10s
retries: 5
output:
rabbit_1 | =INFO REPORT==== 25-Jan-2017::14:44:21 ===
rabbit_1 | closing AMQP connection <0.718.0> (172.18.0.3:36590 -> 172.18.0.2:5672)
app_1 | [x] Sent 'Hello World!'
healthcheckcompose_app_1 exited with code 0
Dockerfile (rabbitmq + curl):
FROM rabbitmq:3-management
RUN apt-get update
RUN apt-get install -y curl
EXPOSE 4369 5671 5672 25672 15671 15672
Version 3 no longer supports the condition form of depends_on.
So i moved from depends_on to restart on-failure. Now my app container will restart 2-3 times until it is working, but it is still a docker-compose feature without overwriting the entrypoint.
docker-compose (version 3):
version: "3"
services:
rabbitmq: # login guest:guest
image: rabbitmq:management
ports:
- "4369:4369"
- "5671:5671"
- "5672:5672"
- "25672:25672"
- "15671:15671"
- "15672:15672"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:15672"]
interval: 30s
timeout: 10s
retries: 5
app:
build: ./app/
environment:
- HOSTNAMERABBIT=rabbitmq
restart: on-failure
depends_on:
- rabbitmq
links:
- rabbitmq
Quite recently they've added the depends_on feature.
Edit:
As of compose version 2.1+ till version 3 you can use depends_on in conjunction with healthcheck to achieve this:
From the docs:
version: '2.1'
services:
web:
build: .
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
redis:
image: redis
db:
image: redis
healthcheck:
test: "exit 0"
Before version 2.1
You can still use depends_on, but it only effects the order in which services are started - not if they are ready before the dependant service is started.
It seems to require at least version 1.6.0.
Usage would look something like this:
version: '2'
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
From the docs:
Express dependency between services, which has two effects:
docker-compose up will start services in dependency order. In the following example, db and redis will be started before web.
docker-compose up SERVICE will automatically include SERVICE’s dependencies. In the following example, docker-compose up web will also create and start db and redis.
Note: As I understand it, although this does set the order in which containers are loaded. It does not guarantee that the service inside the container has actually loaded.
For example, you postgres container might be up. But the postgres service itself might still be initializing within the container.
Natively that is not possible, yet. See also this feature request.
So far you need to do that in your containers CMD to wait until all required services are there.
In the Dockerfiles CMD you could refer to your own start script that wraps starting up your container service. Before you start it, you wait for a depending one like:
Dockerfile
FROM python:2-onbuild
RUN ["pip", "install", "pika"]
ADD start.sh /start.sh
CMD ["/start.sh"]
start.sh
#!/bin/bash
while ! nc -z rabbitmq 5672; do sleep 3; done
python rabbit.py
Probably you need to install netcat in your Dockerfile as well. I do not know what is pre-installed on the python image.
There are a few tools out there that provide easy to use waiting logic, for simple tcp port checks:
wait-for-it
dockerize
For more complex waits:
goss - Explanation blog
Using restart: unless-stopped or restart: always may solve this problem.
If worker container stops when rabbitMQ is not ready, it will be restarted until it is.
you can also just add it to the command option eg.
command: bash -c "sleep 5; start.sh"
https://github.com/docker/compose/issues/374#issuecomment-156546513
to wait on a port you can also use something like this
command: bash -c "while ! curl -s rabbitmq:5672 > /dev/null; do echo waiting for xxx; sleep 3; done; start.sh"
to increment the waiting time you can hack a bit more:
command: bash -c "for i in {1..100} ; do if ! curl -s rabbitmq:5672 > /dev/null ; then echo waiting on rabbitmq for $i seconds; sleep $i; fi; done; start.sh"
If you want to start service only then another service successfully completed (for example migration, data population, etc), docker-compose version 1.29, comes with build in functionality for this - service_completed_successfully.
depends_on:
<service-name>:
condition: service_completed_successfully
According to specification:
service_completed_successfully - specifies that a dependency is expected to run to successful completion before starting a dependent service
restart: on-failure
did the trick for me..see below
---
version: '2.1'
services:
consumer:
image: golang:alpine
volumes:
- ./:/go/src/srv-consumer
working_dir: /go/src/srv-consumer
environment:
AMQP_DSN: "amqp://guest:guest#rabbitmq:5672"
command: go run cmd/main.go
links:
- rabbitmq
restart: on-failure
rabbitmq:
image: rabbitmq:3.7-management-alpine
ports:
- "15672:15672"
- "5672:5672"
For container start ordering use
depends_on:
For waiting previous container start use script
entrypoint: ./wait-for-it.sh db:5432
This article will help you
https://docs.docker.com/compose/startup-order/
Tried many different ways, but liked the simplicity of this: https://github.com/ufoscout/docker-compose-wait
The idea that you can use ENV vars in the docker compose file to submit a list of services hosts (with ports) which should be "awaited" like this: WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017.
So let's say you have the following docker-compose.yml file (copy/past from repo README):
version: "3"
services:
mongo:
image: mongo:3.4
hostname: mongo
ports:
- "27017:27017"
postgres:
image: "postgres:9.4"
hostname: postgres
ports:
- "5432:5432"
mysql:
image: "mysql:5.7"
hostname: mysql
ports:
- "3306:3306"
mySuperApp:
image: "mySuperApp:latest"
hostname: mySuperApp
environment:
WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017
Next, in order for services to wait, you need to add the following two lines to your Dockerfiles (into Dockerfile of the services which should await other services to start):
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.5.0/wait /wait
RUN chmod +x /wait
The complete example of such sample Dockerfile (again from the project repo README):
FROM alpine
## Add your application to the docker image
ADD MySuperApp.sh /MySuperApp.sh
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.5.0/wait /wait
RUN chmod +x /wait
## Launch the wait tool and then your application
CMD /wait && /MySuperApp.sh
For other details about possible usage see README
You can also solve this by setting an endpoint which waits for the service to be up by using netcat (using the docker-wait script). I like this approach as you still have a clean command section in your docker-compose.yml and you don't need to add docker specific code to your application:
version: '2'
services:
db:
image: postgres
django:
build: .
command: python manage.py runserver 0.0.0.0:8000
entrypoint: ./docker-entrypoint.sh db 5432
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Then your docker-entrypoint.sh:
#!/bin/sh
postgres_host=$1
postgres_port=$2
shift 2
cmd="$#"
# wait for the postgres docker to be running
while ! nc $postgres_host $postgres_port; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up - executing command"
# run the command
exec $cmd
This is nowadays documented in the official docker documentation.
PS: You should install netcat in your docker instance if this is not available. To do so add this to your Docker file :
RUN apt-get update && apt-get install netcat-openbsd -y
There is a ready to use utility called "docker-wait" that can be used for waiting.
basing on this blog post https://8thlight.com/blog/dariusz-pasciak/2016/10/17/docker-compose-wait-for-dependencies.html
I configured my docker-compose.yml as shown below:
version: "3.1"
services:
rabbitmq:
image: rabbitmq:3.7.2-management-alpine
restart: always
environment:
RABBITMQ_HIPE_COMPILE: 1
RABBITMQ_MANAGEMENT: 1
RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 0.2
RABBITMQ_DEFAULT_USER: "rabbitmq"
RABBITMQ_DEFAULT_PASS: "rabbitmq"
ports:
- "15672:15672"
- "5672:5672"
volumes:
- data:/var/lib/rabbitmq:rw
start_dependencies:
image: alpine:latest
links:
- rabbitmq
command: >
/bin/sh -c "
echo Waiting for rabbitmq service start...;
while ! nc -z rabbitmq 5672;
do
sleep 1;
done;
echo Connected!;
"
volumes:
data: {}
Then I do for run =>:
docker-compose up start_dependencies
rabbitmq service will start in daemon mode, start_dependencies will finish the work.

Docker Compose wait for container X before starting Y

I am using RabbitMQ and a simple Python sample from here
together with docker-compose. My problem is that I need to wait for RabbitMQ to be fully started. From what I have found so far, I don't know how to make container x (in my case worker) wait until container y (rabbitmq) is started.
I found this blog post where he checks if the other host is online.
I also found this docker command:
wait
Usage: docker wait CONTAINER [CONTAINER...]
Block until a container stops, then print its exit code.
Waiting for a container to stop is maybe not what I am looking for, but if it is, is it possible to use that command inside docker-compose.yml?
My solution so far is to wait a few seconds and then check the port, but is that the right way to achieve this? If I don't wait, I get an error.
docker-compose.yml
worker:
build: myapp/.
volumes:
- myapp/.:/usr/src/app:ro
links:
- rabbitmq
rabbitmq:
image: rabbitmq:3-management
python hello sample (rabbit.py):
import pika
import time
import socket

pingcounter = 0
isreachable = False
while isreachable is False and pingcounter < 5:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(('rabbitmq', 5672))
        isreachable = True
    except socket.error as e:
        time.sleep(2)
        pingcounter += 1
    s.close()

if isreachable:
    connection = pika.BlockingConnection(pika.ConnectionParameters(
        host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue='hello')
    channel.basic_publish(exchange='',
                          routing_key='hello',
                          body='Hello World!')
    print(" [x] Sent 'Hello World!'")
    connection.close()
Dockerfile for worker:
FROM python:2-onbuild
RUN ["pip", "install", "pika"]
CMD ["python","rabbit.py"]
Update Nov 2015:
A shell script or waiting inside your program is a possible solution, but after seeing this issue I am looking for a command or feature of docker/docker-compose itself.
They mention a solution for implementing a health check, which may be the best option. An open TCP connection does not mean your service is ready (or will remain ready). On top of that, I would need to change the entrypoint in my Dockerfile.
So I am hoping for an answer that uses docker-compose's on-board commands, which will hopefully be the case once they finish this issue.
Update March 2016
There is a proposal for providing a built-in way to determine whether a container is "alive", so docker-compose may be able to make use of it in the near future.
Update June 2016
It seems that the healthcheck will be integrated into Docker in version 1.12.0.
Update January 2017
I found a docker-compose solution see:
Docker Compose wait for container X before starting Y
Finally found a solution with a docker-compose method. Since docker-compose file format 2.1 you can define healthchecks.
I did it in an example project;
you need to install at least Docker 1.12.0.
I also needed to extend the rabbitmq-management Dockerfile, because curl isn't installed on the official image.
Now I test whether the management page of the rabbitmq container is available. If curl finishes with exit code 0, the app container (python pika) is started and publishes a message to the hello queue. It's now working (output below).
docker-compose (version 2.1):
version: '2.1'
services:
app:
build: app/.
depends_on:
rabbit:
condition: service_healthy
links:
- rabbit
rabbit:
build: rabbitmq/.
ports:
- "15672:15672"
- "5672:5672"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:15672"]
interval: 30s
timeout: 10s
retries: 5
output:
rabbit_1 | =INFO REPORT==== 25-Jan-2017::14:44:21 ===
rabbit_1 | closing AMQP connection <0.718.0> (172.18.0.3:36590 -> 172.18.0.2:5672)
app_1 | [x] Sent 'Hello World!'
healthcheckcompose_app_1 exited with code 0
Dockerfile (rabbitmq + curl):
FROM rabbitmq:3-management
RUN apt-get update
RUN apt-get install -y curl
EXPOSE 4369 5671 5672 25672 15671 15672
Version 3 no longer supports the condition form of depends_on.
So I moved from depends_on to restart: on-failure. Now my app container will restart 2-3 times until it is working, but it is still a docker-compose feature that does not require overriding the entrypoint.
docker-compose (version 3):
version: "3"
services:
rabbitmq: # login guest:guest
image: rabbitmq:management
ports:
- "4369:4369"
- "5671:5671"
- "5672:5672"
- "25672:25672"
- "15671:15671"
- "15672:15672"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:15672"]
interval: 30s
timeout: 10s
retries: 5
app:
build: ./app/
environment:
- HOSTNAMERABBIT=rabbitmq
restart: on-failure
depends_on:
- rabbitmq
links:
- rabbitmq
Quite recently they've added the depends_on feature.
Edit:
From Compose file version 2.1 up to (but not including) version 3, you can use depends_on in conjunction with healthcheck to achieve this:
From the docs:
version: '2.1'
services:
web:
build: .
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
redis:
image: redis
db:
image: redis
healthcheck:
test: "exit 0"
Before version 2.1
You can still use depends_on, but it only affects the order in which services are started - not whether they are ready before the dependent service is started.
It seems to require at least version 1.6.0.
Usage would look something like this:
version: '2'
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
From the docs:
Express dependency between services, which has two effects:
docker-compose up will start services in dependency order. In the following example, db and redis will be started before web.
docker-compose up SERVICE will automatically include SERVICE’s dependencies. In the following example, docker-compose up web will also create and start db and redis.
Note: As I understand it, although this does set the order in which containers are loaded, it does not guarantee that the service inside the container has actually loaded.
For example, your postgres container might be up, but the postgres service itself might still be initializing within the container.
Natively that is not possible, yet. See also this feature request.
For now you need to do this in your container's CMD and wait until all required services are there.
In the Dockerfile's CMD you could refer to your own start script that wraps starting up your container's service. Before you start it, you wait for the dependency, like this:
Dockerfile
FROM python:2-onbuild
RUN ["pip", "install", "pika"]
ADD start.sh /start.sh
CMD ["/start.sh"]
start.sh
#!/bin/bash
while ! nc -z rabbitmq 5672; do sleep 3; done
python rabbit.py
You probably need to install netcat in your Dockerfile as well; I do not know what is pre-installed on the python image.
There are a few tools out there that provide easy-to-use waiting logic for simple TCP port checks (see the wait-for-it sketch after this list):
wait-for-it
dockerize
For more complex waits:
goss - Explanation blog
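For illustration, here is a minimal sketch (not from any particular answer) of wiring wait-for-it into the worker from the question; it assumes wait-for-it.sh was copied into the image by its Dockerfile:
version: '2'
services:
  worker:
    build: myapp/.
    depends_on:
      - rabbitmq
    # wait-for-it.sh blocks until rabbitmq:5672 accepts TCP connections,
    # then execs the real command after the "--" separator
    command: ["./wait-for-it.sh", "rabbitmq:5672", "--", "python", "rabbit.py"]
  rabbitmq:
    image: rabbitmq:3-management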
Using restart: unless-stopped or restart: always may solve this problem.
If the worker container stops while rabbitMQ is not yet ready, it will be restarted until it is (a minimal sketch follows).
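A minimal sketch of that restart-based idea, assuming the worker simply exits with a non-zero code whenever it cannot reach RabbitMQ yet:
version: '2'
services:
  worker:
    build: myapp/.
    restart: unless-stopped   # keep restarting until rabbitmq is reachable
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3-management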
You can also just add it to the command option, e.g.:
command: bash -c "sleep 5; start.sh"
https://github.com/docker/compose/issues/374#issuecomment-156546513
to wait on a port you can also use something like this
command: bash -c "while ! curl -s rabbitmq:5672 > /dev/null; do echo waiting for xxx; sleep 3; done; start.sh"
To increase the waiting time you can hack a bit more:
command: bash -c "for i in {1..100} ; do if ! curl -s rabbitmq:5672 > /dev/null ; then echo waiting on rabbitmq for $i seconds; sleep $i; fi; done; start.sh"
If you want to start a service only after another service has completed successfully (for example a migration or data-population job), docker-compose version 1.29 comes with built-in functionality for this: service_completed_successfully.
depends_on:
<service-name>:
condition: service_completed_successfully
According to the specification:
service_completed_successfully - specifies that a dependency is expected to run to successful completion before starting a dependent service
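As an illustrative sketch (service names and image are made up), a one-shot migrate job that must finish before app starts could look like this:
services:
  migrate:
    image: myapp:latest
    command: ./run-migrations.sh   # exits 0 once the migration is done
  app:
    image: myapp:latest
    depends_on:
      migrate:
        condition: service_completed_successfully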
restart: on-failure
did the trick for me. See below:
---
version: '2.1'
services:
consumer:
image: golang:alpine
volumes:
- ./:/go/src/srv-consumer
working_dir: /go/src/srv-consumer
environment:
AMQP_DSN: "amqp://guest:guest@rabbitmq:5672"
command: go run cmd/main.go
links:
- rabbitmq
restart: on-failure
rabbitmq:
image: rabbitmq:3.7-management-alpine
ports:
- "15672:15672"
- "5672:5672"
For container start ordering, use
depends_on:
For waiting until a previous container is actually ready, use a script:
entrypoint: ./wait-for-it.sh db:5432
This article will help you:
https://docs.docker.com/compose/startup-order/
I tried many different ways, but liked the simplicity of this: https://github.com/ufoscout/docker-compose-wait
The idea is that you can use env vars in the docker-compose file to submit a list of service hosts (with ports) which should be "awaited", like this: WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017.
So let's say you have the following docker-compose.yml file (copy/paste from the repo README):
version: "3"
services:
mongo:
image: mongo:3.4
hostname: mongo
ports:
- "27017:27017"
postgres:
image: "postgres:9.4"
hostname: postgres
ports:
- "5432:5432"
mysql:
image: "mysql:5.7"
hostname: mysql
ports:
- "3306:3306"
mySuperApp:
image: "mySuperApp:latest"
hostname: mySuperApp
environment:
WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017
Next, in order for services to wait, you need to add the following two lines to the Dockerfile of each service that should wait for other services to start:
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.5.0/wait /wait
RUN chmod +x /wait
A complete example of such a Dockerfile (again from the project repo README):
FROM alpine
## Add your application to the docker image
ADD MySuperApp.sh /MySuperApp.sh
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.5.0/wait /wait
RUN chmod +x /wait
## Launch the wait tool and then your application
CMD /wait && /MySuperApp.sh
For other details about possible usage, see the README.
You can also solve this by setting an entrypoint which waits for the service to be up using netcat (the docker-wait script approach). I like this approach as you still have a clean command section in your docker-compose.yml and you don't need to add docker-specific code to your application:
version: '2'
services:
db:
image: postgres
django:
build: .
command: python manage.py runserver 0.0.0.0:8000
entrypoint: ./docker-entrypoint.sh db 5432
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Then your docker-entrypoint.sh:
#!/bin/sh
postgres_host=$1
postgres_port=$2
shift 2
cmd="$@"
# wait for the postgres docker to be running
while ! nc -z "$postgres_host" "$postgres_port"; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up - executing command"
# run the command
exec $cmd
This is nowadays documented in the official docker documentation.
PS: You should install netcat in your Docker image if it is not available. To do so, add this to your Dockerfile:
RUN apt-get update && apt-get install netcat-openbsd -y
There is a ready-to-use utility called "docker-wait" that can be used for waiting.
Based on this blog post https://8thlight.com/blog/dariusz-pasciak/2016/10/17/docker-compose-wait-for-dependencies.html
I configured my docker-compose.yml as shown below:
version: "3.1"
services:
rabbitmq:
image: rabbitmq:3.7.2-management-alpine
restart: always
environment:
RABBITMQ_HIPE_COMPILE: 1
RABBITMQ_MANAGEMENT: 1
RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 0.2
RABBITMQ_DEFAULT_USER: "rabbitmq"
RABBITMQ_DEFAULT_PASS: "rabbitmq"
ports:
- "15672:15672"
- "5672:5672"
volumes:
- data:/var/lib/rabbitmq:rw
start_dependencies:
image: alpine:latest
links:
- rabbitmq
command: >
/bin/sh -c "
echo Waiting for rabbitmq service start...;
while ! nc -z rabbitmq 5672;
do
sleep 1;
done;
echo Connected!;
"
volumes:
data: {}
Then, to run it:
docker-compose up start_dependencies
The rabbitmq service will start in daemon mode; start_dependencies will do the waiting and then exit.
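As a usage sketch (assuming the worker service from the question is defined in the same compose file), the two steps can simply be chained so the worker only comes up after the wait container has exited:
docker-compose up start_dependencies
docker-compose up -d worker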
In version 3 of a Docker Compose file, you can use restart.
For example:
docker-compose.yml
worker:
build: myapp/.
volumes:
- myapp/.:/usr/src/app:ro
restart: on-failure
depends_on:
- rabbitmq
rabbitmq:
image: rabbitmq:3-management
Note that I used depends_on instead of links, since the latter is deprecated in version 3.
Even though it works, it might not be the ideal solution, since the docker container is restarted on every failure.
Have a look at restart_policy as well; it lets you fine-tune the restart policy (sketch below).
When you use Compose in production, it is actually best practice to use a restart policy:
Specifying a restart policy like restart: always to avoid downtime
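A sketch of the fine-grained form mentioned above; restart_policy lives under deploy, which applies when deploying to swarm mode (plain docker-compose largely ignores the deploy section):
worker:
  build: myapp/.
  deploy:
    restart_policy:
      condition: on-failure   # only restart on a non-zero exit code
      delay: 5s               # wait 5s between restart attempts
      max_attempts: 10
      window: 60s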
Not recommended for serious deployments, but here is essentially a "wait x seconds" command.
With docker-compose version 3.4 a start_period instruction has been added to healthcheck. This means we can do the following:
docker-compose.yml:
version: "3.4"
services:
# your server docker container
zmq_server:
build:
context: ./server_router_router
dockerfile: Dockerfile
# container that has to wait
zmq_client:
build:
context: ./client_dealer/
dockerfile: Dockerfile
depends_on:
- zmq_server
healthcheck:
test: "sh status.sh"
start_period: 5s
status.sh:
#!/bin/sh
exit 0
What happens here is that the healthcheck is only invoked after 5 seconds. It calls the status.sh script, which always exits with code 0 ("no problem").
We just made the zmq_client container wait 5 seconds before starting!
Note: It's important that you have version: "3.4". If the .4 is not there, docker-compose complains.
An alternative is to use a container orchestration solution like Kubernetes. Kubernetes has support for init containers, which run to completion before other containers can start. You can find an example with a SQL Server 2017 Linux container here, where an API container uses an init container to initialise a database:
https://www.handsonarchitect.com/2018/08/understand-kubernetes-object-init.html
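As an illustrative sketch (not taken from the linked article), a Pod could use a busybox init container that blocks until rabbitmq answers on its port before the worker container starts:
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  initContainers:
    - name: wait-for-rabbitmq
      image: busybox
      command: ['sh', '-c', 'until nc -z rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done']
  containers:
    - name: worker
      image: myapp:latest   # hypothetical worker image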
I currently also have the requirement of waiting for some services to be up and running before others start. I have read the suggestions here and in some other places, but most of them require the docker-compose.yml to be changed somehow.
So I started working on a solution which I consider an orchestration layer around docker-compose itself, and I finally came up with a shell script which I called docker-compose-profile.
It can wait for a TCP connection to a certain container even if the service does not expose any port to the host directly. The trick I am using is to start another docker container inside the stack, and from there I can (usually) connect to every service (as long as no other network configuration is applied).
There is also a waiting method that watches for a certain log message.
Services can be grouped together to be started in a single step before another step is triggered.
You can also exclude some services without listing all the other services to start (like a collection of available services minus some excluded services).
This kind of configuration can be bundled into a profile.
There is a YAML configuration file called dcp.yml which (for now) has to be placed alongside your docker-compose.yml file.
For your question this would look like:
command:
aliases:
upd:
command: "up -d"
description: |
Create and start container. Detach afterward.
profiles:
default:
description: |
Wait for rabbitmq before starting worker.
command: upd
steps:
- label: only-rabbitmq
only: [ rabbitmq ]
wait:
- 5#tcp://rabbitmq:5672
- label: all-others
You could now start your stack by invoking
dcp -p default upd
or even simply by
dcp
as there is only a default profile to run up -d on.
There is a tiny problem: my current version does not (yet) support special waiting conditions like the one you actually need, so there is no test that sends a message to rabbit.
I have already been thinking about a further waiting method to run a certain command on the host or as a docker container.
Then we could extend the tool with something like
...
wait:
- service: rabbitmq
method: container
timeout: 5
image: python-test-rabbit
...
having a docker image called python-test-rabbit that does your check.
The benefit would be that there is no longer any need to put the waiting part into your worker.
It would be isolated and stay inside the orchestration layer.
Maybe someone finds this helpful. Any suggestions are very welcome.
You can find this tool at https://gitlab.com/michapoe/docker-compose-profile
After trying several approaches, IMO the simplest and most elegant option is using the jwilder/dockerize utility image (mentioned by @Henrik Sachse, but he did not show a concrete example) with its -wait flag. Here is a simple example where I need RabbitMQ to be ready before starting my app:
version: "3.8"
services:
# Start RabbitMQ.
rabbit:
image: rabbitmq
# Wait for RabbitMQ to be joinable.
check-rabbit-started:
image: jwilder/dockerize:0.6.1
depends_on:
- rabbit
command: 'dockerize -wait=tcp://rabbit:5672'
# Only start myapp once RabbitMQ is joinable.
myapp:
image: myapp:latest
depends_on:
- check-rabbit-started
Here is an example where the main container waits for the worker until it starts responding to pings:
version: '3'
services:
main:
image: bash
depends_on:
- worker
command: bash -c "sleep 2 && until ping -qc1 worker; do sleep 1; done &>/dev/null"
networks:
intra:
ipv4_address: 172.10.0.254
worker:
image: bash
hostname: test01
command: bash -c "ip route && sleep 10"
networks:
intra:
ipv4_address: 172.10.0.11
networks:
intra:
driver: bridge
ipam:
config:
- subnet: 172.10.0.0/24
However, the proper way is to use healthcheck (>=2.1).
I guess the docker people really want us to wait on services using code in our own images. I still want to configure the services to wait for in docker-compose.yml. Here's one way if you're willing to use an entrypoint script.
Add this loop to your entrypoint script, using your choice of wait-for-it tool included in the image. I am using https://github.com/vishnubob/wait-for-it/. If you pass no services, the loop does nothing.
for service in "$@"; do
echo "$0: wait for service $service"
if ! wait-for-it "$service"; then
echo "$0: failed on service $service"
exit 1
fi
done
Pass required services with this entry for the container in docker-compose.yml:
command: ["my-data-svc:5000"]
This relies on the behavior that docker commands are passed as arguments to the entrypoint script. You can probably make a convincing argument that I'm abusing the intent of the docker command here. I'm not gonna die on that hill, it just works for me.
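For completeness, a sketch of how this might look in the compose file, assuming a hypothetical entrypoint.sh that contains the loop above and then execs the real process:
worker:
  build: myapp/.
  entrypoint: ["./entrypoint.sh"]
  # these arguments become "$@" inside entrypoint.sh
  command: ["my-data-svc:5000", "rabbitmq:5672"]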
I just have 2 compose files and start one first and the second one later. My script looks like this:
#!/bin/bash
# before this I build my docker files
# when done, I start my build docker-compose
docker-compose -f docker-compose.build.yaml up
# now I start the other docker-compose, which needs the image from the first
docker-compose -f docker-compose.prod.yml up
