I run many services from my docker-compose.yml. I'd like to display a message to let the user know when docker-compose up is done.
I tried to echo a message with command, but my container exited with code 0:
command: bash -c "echo Congratulations! You can use your containers now"
Is there any way to let the user know when docker-compose up is done?
Many thanks!
Anything you do in CMD will only be visible in the logs, and only per service, so you may as well just watch the logs, which I assume is what you're trying to avoid. If you want this to work from the local command line, you're going to need to wrap the docker-compose up command.
The following is NOT fully tested; it is for guidance only. I use the getContainerHealth and waitContainer scripts myself, so I can vouch for them. I'm fairly sure I found them on this site, but can't remember where.
Assuming you're waiting for your services to be in a usable state, you could use something similar to this compose file:
docker-compose.yml
services:
  serviceOne:
    # ... the usual stuff
    container_name: serviceOne
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 10s
      timeout: 10s
      retries: 3
      start_period: 1s
  serviceTwo:
    # ... the usual stuff
    container_name: serviceTwo
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 10s
      start_period: 1s
The command in test is whatever you use to determine that the service is up and running. More info here: https://docs.docker.com/compose/compose-file/#healthcheck
You will probably want different health checks (or at least different intervals, timeouts, start_periods, etc.) for production and development.
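One way to handle that (a minimal sketch, assuming the standard multiple-compose-file override mechanism; the file name docker-compose.prod.yml and the looser timings are just an illustration) is to keep the production tuning in an override file:

# docker-compose.prod.yml (hypothetical override file)
services:
  serviceOne:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 5s
      retries: 5
      start_period: 30s

and start with both files: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up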
Then a bash script that you use instead of docker-compose up:
getContainerHealth () {
  # prints the container's health status, JSON-quoted, e.g. "healthy"
  docker inspect --format "{{json .State.Health.Status }}" "$1"
}

waitContainer () {
  # poll until the container reports healthy; bail out if it turns unhealthy
  while STATUS=$(getContainerHealth "$1"); [ "$STATUS" != "\"healthy\"" ]; do
    if [ "$STATUS" = "\"unhealthy\"" ]; then
      echo "Failed!"
      exit 1
    fi
    printf .
    sleep 1
  done
  printf '\n'
}

waitContainers () {
  waitContainer serviceOne
  waitContainer serviceTwo
  # ... for all services
}

docker-compose up -d   # -d (detached) so the script continues past this line
echo "In progress. Please wait"
waitContainers
echo "Done. Ready to roll."
Create a shell file echo.sh that contains:
#!/bin/sh
echo "Your message"
In the Dockerfile add:
CMD ["/dockerRepo/echo.sh"]
Have you tried running something like the following directly in the Dockerfile itself?
RUN echo -e "Your message"
I was adapting the Docker Compose example from Elasticsearch to set up a default cluster, and have stumbled over this health check config which I don't understand:
healthcheck:
  test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
  interval: 1s
  timeout: 5s
  retries: 120
I can guess that this command checks for the existence of the file at config/certs/es01/es01.crt. But how does it do so with the script
[ -f config/certs/es01/es01.crt ]
?
More specifically: is some unix command inferred before the -f option? And what are the square brackets for?
[ is a shell command; it is the same as test(1). Checking on macOS and Ubuntu systems, in both cases there is a /bin/[: an ordinary executable of that name in a normal search-path directory.
In your example, [ -f filename ] is the same as test -f filename, which returns true (exit code 0) if the file exists.
In a broader Bourne shell context, if [ -f filename ]; then ...; fi isn't special syntax. It runs [, and if that command returns true (exit code 0) then it runs the then block. You can put any command in the if statement and not just [.
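You can see the same thing at a shell prompt (a quick illustration; /etc/hosts is just a file that typically exists):

$ [ -f /etc/hosts ]; echo $?
0
$ test -f /no/such/file; echo $?
1

Docker's HEALTHCHECK looks only at that exit status: 0 counts as healthy, non-zero as a failed check.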
Recently, we had an outage due to Redis being unable to write to a file system (not sure why; it's Amazon EFS). Anyway, I noted that there was no actual HEALTHCHECK set up for the Docker service to make sure it is running correctly. Redis is up, so I can't simply use nc -z to check if the port is open.
Is there a command I can execute in the redis:6-alpine (or non-alpine) image that I can put in the healthcheck block of the docker-compose.yml file?
Note I am looking for a command that is available internally in the image, not an external healthcheck.
If I remember correctly, that image includes redis-cli, so maybe something along these lines:
...
healthcheck:
  test: ["CMD", "redis-cli", "ping"]
Although the ping operation from @nitrin0's answer generally works, it does not handle the case where a write operation would actually fail. So instead I perform a change that just increments a key I don't plan to use:
image: redis:6
healthcheck:
  test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
I've just noticed that there is a phase in which Redis is still starting up and loading data. In this phase, redis-cli ping shows the error
LOADING Redis is loading the dataset in memory
but still returns exit code 0, which would make Redis report as healthy too early.
Also, redis-cli --raw incr ping returns 0 in this phase without actually incrementing the key successfully.
As a workaround, I'm checking whether redis-cli ping actually prints a PONG, which it only does after the LOADING phase has finished:
services:
  redis:
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      interval: 1s
      timeout: 3s
      retries: 5
This works because grep returns 0 only when the string ("PONG") is found.
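To illustrate (a sketch of what the two phases look like; the exact wording of the LOADING error may differ):

$ redis-cli ping | grep PONG    # while the dataset is loading: no match
$ echo $?
1
$ redis-cli ping | grep PONG    # after loading has finished
PONG
$ echo $?
0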
You can also add it inside the Dockerfile, if you're using a Redis image that contains the redis-cli:
Linux Docker
HEALTHCHECK CMD redis-cli ping || exit 1
Windows Docker
HEALTHCHECK CMD pwsh.exe -command \
    try { \
        $response = ./redis-cli ping; \
        if ($response -eq 'PONG') { return 0 } else { return 1 }; \
    } catch { return 1 }
I have a couple of containers running in sequence.
I am using depends_on to make sure the next one only starts after the current one is running.
I realize one of the containers has a cron job that needs to finish,
so that the next container has the proper data to import...
In this case, I cannot just rely on the depends_on parameter.
How do I delay the next container's start? Say, wait for 5 minutes.
Sample docker-compose:
test1:
  networks:
    - test
  image: test1
  ports:
    - "8115:8115"
  container_name: test1
test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
You can use an entrypoint script, something like this (you need to install netcat in the image):
# wait until test1 is accepting connections on port 8115
until nc -w 1 -z test1 8115; do
  >&2 echo "Service is unavailable - sleeping"
  sleep 1
done
sleep 2
>&2 echo "Service is up - executing command"
And execute it via the command instruction in the service (in the docker-compose file) or in the Dockerfile (CMD directive).
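For example (a minimal sketch, assuming the script above is saved as /wait-for-test1.sh inside the test2 image, and my-app stands in for whatever test2 actually runs; both names are hypothetical):

test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
  command: sh -c "/wait-for-test1.sh && exec my-app"

This way Compose still starts test2 as soon as test1 is up, but its real process only begins once the script sees the port open.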
I added this in the Dockerfile (since it was just for a quick test):
CMD sleep 60 && node server.js
A 60-second sleep did the trick, since the Node.js part was starting before a database dump init script could finish executing.
I have a customized rabbitmq image that I am using with docker-compose (3.7) to launch a docker cluster. This is necessary because of some peculiar issues when trying to deploy a cluster in docker swarm. The image has a shell script which runs on the primary and secondary nodes and makes the modifications needed to run a cluster. This involves stopping rabbitmq and running rabbitmqctl commands to create the cluster between the two nodes. This configuration works flawlessly until I try to add in a healthcheck. I have tried adding it into the image and adding it into the compose file. Both cause the image to crash and constantly restart. I have the following shell script which gets copied into the image:
#!/bin/bash
set -eo pipefail
# A RabbitMQ node is considered healthy if all the below are true:
# * the rabbit app finished booting & it's running
# * there are no alarms
# * there is at least 1 active listener
rabbitmqctl eval '
{ true, rabbit_app_booted_and_running } = { rabbit:is_booted(node()), rabbit_app_booted_and_running },
{ [], no_alarms } = { rabbit:alarms(), no_alarms },
[] /= rabbit_networking:active_listeners(),
rabbitmq_node_is_healthy.
' || exit 1
Run in an already-running container, this works and produces the correct result.
I tried the following in the compose file:
healthcheck:
  interval: 60s
  timeout: 60s
  retries: 10
  start_period: 600s
  test: ["CMD", "docker-healthcheck"]
It seems that the start_period is completely ignored. I can see the health status with an error right away. I have also tried the following native rabbitmq diagnostics command:
rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms
This oddly fails with an "unable to find rabbitmq-diagnostics" error, despite the fact that the program is definitely in the path. I can execute the command successfully in an already running container.
If I create the container without the healthcheck and then add it in after the fact from the command line with:
docker service update --health-cmd docker-healthcheck --health-interval 60s --health-timeout 60s --health-retries 10 [container id]
it marks the container healthy. So it works, just not in a startup configuration. It seems to me that the healthcheck should not begin until 10 minutes have passed, but no matter how long I let everything start up using the start_period parameter, the container still fails.
Is this a bug or is there something mysterious about the way start_period works?
Anyone else ever have this problem?
I know one of the ways to check health for a Docker container is using the command
HEALTHCHECK CMD curl --fail http://localhost:3000/ || exit 1
But in the case of workers there is no such URL to hit, so how do you check the container's health in that case?
The celery inspect ping command comes in handy, as it does a whole round trip: it sends a "ping" task on the broker, workers respond, and celery fetches the responses.
Assuming your app is named tasks.add, you may ping all your workers:
/app $ celery inspect ping -A tasks.add
-> celery@aa7c21dd0e96: OK
    pong
-> celery@57615db15d80: OK
    pong
With aa7c21dd0e96 being the Docker hostname, and thus available in $HOSTNAME.
To ping a single node, you would have to run:
celery inspect ping -A tasks.add -d celery@$HOSTNAME
Here, -d stands for destination.
The line to add to your Dockerfile:
HEALTHCHECK CMD celery inspect ping -A tasks.add -d celery@$HOSTNAME
Sample outputs:
/app $ celery inspect ping -A tasks.add -d fake_node
Error: No nodes replied within time constraint.
/app $ echo $?
69
Unhealthy if the node does not exist or does not reply.
/app $ celery inspect ping -A tasks.add -d celery@$HOSTNAME
-> celery@d39b3d31cc13: OK
    pong
/app $ echo $?
0
Healthy when the node replies pong.
/app $ celery inspect ping -d celery@$HOSTNAME
Traceback (most recent call last):
...
raise socket.error(last_err)
OSError: [Errno 111] Connection refused
/app $ echo $?
1
Unhealthy when the broker is not available. I removed the app, so it tries to connect to a local AMQP broker and fails.
This might not suit your needs, since here it is the broker that is unhealthy, not the worker.
The below example snippet, derived from the one posted by @PunKeel, is applicable for those looking to implement a health check in docker-compose.yml, which could be used through docker-compose or docker stack deploy.
worker:
  build:
    context: .
    dockerfile: Dockerfile
  image: myimage
  links:
    - rabbitmq
  restart: always
  command: celery worker --hostname=%h --broker=amqp://rabbitmq:5672
  healthcheck:
    test: celery -b amqp://rabbitmq:5672 inspect ping -d celery@$$HOSTNAME
    interval: 30s
    timeout: 10s
    retries: 3
Notice the extra $ in the command, so that $HOSTNAME actually gets passed into the container. I also didn't use the -A flag.
Ideally, rabbitmq should also have its own health check, perhaps with curl guest:guest@localhost:15672/api/overview, since Docker wouldn't be able to discern whether the worker is down or the broker is down with celery inspect ping alone.
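That could look something like this (a minimal sketch, assuming the management plugin is enabled on port 15672 with the default guest credentials, and that curl is available in the image):

rabbitmq:
  image: rabbitmq:3-management
  healthcheck:
    test: curl -f http://guest:guest@localhost:15672/api/overview || exit 1
    interval: 30s
    timeout: 10s
    retries: 3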
For Celery 5.2.3 I used celery -A [celery app name] status for the health check. This is what my docker-compose file looks like:
worker:
  build: .
  healthcheck:
    test: celery -A app.celery_app status
    interval: 10s
    timeout: 10s
    retries: 10
  volumes:
    - ./app:/app
  depends_on:
    - broker
    - redis
    - database
Landed on this question looking for a health check for Celery workers as part of an Airflow setup (Airflow 2.3.4, Celery 5.2.7), which I eventually figured out. This is a very specific use case of the original question, but might still be useful for some:
# docker-compose.yml
worker:
  image: ...
  hostname: local-worker
  entrypoint: airflow celery worker
  ...
  healthcheck:
    test: ["CMD-SHELL", 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"']
    interval: 5s
    timeout: 10s
    retries: 10
  restart: always
  ...
I got inspiration from Airflow's quick-start Docker Compose.