How to stop a docker container from starting if app initialization has failed?

We have a Flask service application which connects to a MySQL database for data. This Flask app is served via gunicorn in a docker container. We are using docker-compose for the same.
When the application starts we make the connection to the database. If the connection to the database fails (after 3 attempts), the application fails to initialize and exits. But I am noticing that the container still starts. How can I cause the container to fail to start as well when my app fails to start?

First you have to tell docker-compose that you want all containers to stop when your main service exits. This is done with the --abort-on-container-exit command-line argument. Let's say you have 2 services:
docker-compose.yml
version: '3'
services:
  db:
    ...
  flask:
    ...
then the command line will look something like:
docker-compose up --exit-code-from flask --abort-on-container-exit
This tells Compose that your flask service is the main one and that you don't want to continue when it exits.
Second, configure your flask main process (PID 1) to exit (preferably with a non-zero exit code) if it fails to connect to the database. That's it.
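For illustration, a minimal entrypoint sketch that makes PID 1 exit non-zero when the database is unreachable (the DB_HOST/DB_PORT variables, the nc-based check and the gunicorn invocation are assumptions, not taken from the question):
#!/bin/sh
# entrypoint.sh -- sketch: fail the container (non-zero exit) if the DB is unreachable
for attempt in 1 2 3; do
    if nc -z "${DB_HOST:-db}" "${DB_PORT:-3306}"; then
        # Database reachable: replace this shell with gunicorn, which becomes PID 1
        exec gunicorn --bind 0.0.0.0:8000 app:app
    fi
    echo "Database not reachable (attempt $attempt/3), retrying in 5s..." >&2
    sleep 5
done
echo "Could not reach the database after 3 attempts, exiting." >&2
exit 1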

Use restart: "no".
There are four options for the restart policy:
- restart: "no"
- restart: always
- restart: on-failure
- restart: unless-stopped
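For example, a minimal sketch of how this looks in a compose file (the service definition is illustrative, not taken from the question):
services:
  flask:
    build: .
    restart: "no"   # quote "no" so YAML does not parse it as the boolean false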

If it really starts (and is not stuck in a permanent restart loop), try adding trap 'exit' ERR at the top of your entrypoint script.
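A minimal sketch of such an entrypoint (the initialization command is hypothetical; note that the ERR trap is a bash feature, not plain sh):
#!/bin/bash
# entrypoint.sh -- sketch: exit immediately, propagating the failing command's exit code
trap 'exit' ERR
python init_db_connection.py   # hypothetical initialization step; a failure here triggers the trap
exec gunicorn app:app          # only reached if initialization succeeded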

Related

How to avoid service dependencies from being stopped in Docker Compose?

Given the following Docker Compose file:
version: '3.8'
services:
  producer:
    image: producer
    container_name: producer
    depends_on: [db]
    build:
      context: ./producer
      dockerfile: ./Dockerfile
  db:
    image: some-db-image
    container_name: db
When I do docker-compose up producer, the db service obviously gets started too. When I hit CTRL+C, both services are stopped. This is expected and fine.
But sometimes the db service was started beforehand, in a different shell, so docker-compose up producer sees that db is running and only starts producer. But when I hit CTRL+C, both producer and db are stopped, even though db was not started as part of this docker-compose up command.
Is there a way to avoid getting the dependency services stopped when stopping their "parent"?
When running just docker-compose up, the CTRL+C command always stops all running services in the current compose scope. It doesn't care about depends_on.
You would need to spin it up with the detach option -d, like:
docker-compose up -d producer
Then you can do
docker stop producer
And db service should still be running.
As I understand your question: You want to stop a container A which depends on another container B. But when stopping A, you don't want docker-compose to stop B.
Docker-compose stops the dependent containers ('B' in this case) when 'A' is stopped.
How I would approach this:
Split up the docker-compose files into A and B.
In the docker-compose file for A, create a health check testing (and waiting) for container B to be alive.
Since this is a database, you could do this with a dummy query, as in the sketch below.
Then you still have the dependency, but not the docker-compose behaviour of stopping dependent containers.
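For illustration, a healthcheck sketch for a MySQL-style db service (the mysqladmin command and the timings are assumptions, not from the question):
services:
  db:
    image: some-db-image
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]   # dummy query/ping; adjust to your database engine
      interval: 10s
      timeout: 5s
      retries: 5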
You can't simply do that with CTRL+C.
Your docker-compose file and the services defined in it are treated as a project. You may notice that all containers, networks and volumes are prefixed with the name of the directory where the docker-compose file is located by default. This is the project name. It can be changed via an environment variable or the -p flag of the docker-compose command.
What docker-compose does is it keeps track of all the resources for a given project.
In your case there are two services: db and producer. Whenever you run docker-compose up, both of them start up. They both end up being part of the same project. The same applies when you only start one of the services (e.g. with docker-compose up db). You can later start the other service and it will still be part of the same project.
One more thing to note here: Whenever you run docker-compose without the -d (detached) flag, you get attached to the whole project, meaning whenever you hit CTRL+C, you'll stop all services. It does not matter if the last compose command started only one of the services or if they depend on each other. Attaching to the project and hitting CTRL+C will stop them.
A possible solution to your problem would be the following:
Start up your services via docker-compose up -d (both db and producer will get created). They are now in detached mode. If you still want to check the logs in real time (kinda like attaching), use docker-compose logs -f. Now, however, if you want to stop only one of the services you can simply do docker-compose stop $SVC_NAME (where $SVC_NAME is either db or producer) and this will keep the other one running. This way, whatever happens to your terminal session, your services won't stop, unless you explicitly tell them to.
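In command form, the workflow described above looks like this:
docker-compose up -d              # start both db and producer, detached
docker-compose logs -f            # optionally follow the logs of the whole project
docker-compose stop producer      # stop only producer; db keeps running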
Is there a way to avoid getting the dependency services stopped when stopping their "parent"?
Yes.
Using the new docker compose (v2) command instead of docker-compose might solve your problem (Reference).
Simple example
Assuming now you are using the new version, your process could be something like this.
docker-compose.yml
version: "3.8"
services:
db:
build: .
producer:
build: .
depends_on: [db]
extra:
build: .
Dockerfile
FROM node:alpine
WORKDIR /app
COPY . .
ENTRYPOINT [ "/bin/sh", "script.sh" ]
script.sh
while :; do sleep 1; done
Suppose db has started before with:
$ docker compose up -d db
Then later:
$ docker compose up -d producer
Now you can stop only producer with:
$ docker compose stop producer
You can check if db is still running with:
$ docker compose ps
Notice the use of the -d flag for detached mode, as pointed out in another answer, so you don't need to kill the process with CTRL+C. Also, using the detached flag allows you to check the services that are running with docker compose ps.
A similar issue to yours was reported and fixed a while ago, as you can see here.
I was not able to reproduce the behavior you observe with a complete minimal example. Namely, when running docker compose stop producer, the underlying db is not stopped AFAICT.
Anyway, you may be interested in an alternative command that is a bit more flexible than docker compose up, regarding how to run "one-off commands": docker compose run.
The typical use cases are as follows:
docker compose run db bash → run the db service, replacing the default CMD with bash
docker compose run -d db → run the db service in the background (detach mode)
docker compose run --service-ports producer → run the service producer and its dependencies (unless they were run with docker compose up), enabling the ports mapping.
So for your specific use case, you could run:
docker compose up -d db
docker compose run --service-ports producer

How to auto restart a docker container with compose after reboot or failed attempt?

I want my docker container deconz to restart after a reboot or after the container failed to start.
My compose file is
version: "3.3"
services:
deconz:
image: marthoc/deconz
container_name: deconz
network_mode: host
restart: "always"
volumes:
- /sharedfolders/media/AppData/deconz:/root/.local/share/dresden-elektronik/deCONZ
devices:
- /dev/ttyACM0
environment:
- DECONZ_WEB_PORT=8083
- DECONZ_WS_PORT=8443
- DEBUG_INFO=1
- DEBUG_APS=0
- DEBUG_ZCL=0
- DEBUG_ZDP=0
- DEBUG_OTAU=0
I use the command docker-compose up -d to start the container. Now I assume that after a reboot the container starts before the USB device is recognized. I want docker to keep trying to restart until it is successful. I assumed that restart: always or restart: unless-stopped does this, but apparently I am mistaken.
Docker (docker-compose) will not help you directly with this task. The only thing the docker orchestrator does is recognize that the container has failed and create a new container to replace it.
Other orchestrators like Kubernetes have improved handling of the lifecycle, by allowing the orchestrator to recognize the internal state of the containers. Based on the internal state, the orchestrator will manage the lifecycle of that container and also the lifecycle of the related containers.
In your particular case, even moving to Kubernetes will not really help you, since it is the container's task to recognize whether it has all the resources ready to start working.
What you need to do is create a startup script for the container that recognizes that all of the required resources are ready before it proceeds with the start. When you prepare the script, you can choose to exit from the script after waiting a certain time (in which case Docker will detect it as a container failure and will handle it based on the restart rules) or to wait forever, until the resources are ready. I prefer the approach of waiting for a while and then failing if the resources are still not ready. This makes it easier for the administrator to recognize that the container is not healthy.
The most trivial example of such a script would be:
#!/bin/sh
testfile="/dev/usbdrive/Iamthedrive.txt"
while :
do
  if [ -e "$testfile" ]
  then
    echo "drive is mounted."
    start_the_container_main_process.sh
    exit 0
  fi
  echo "drive is still not ready, waiting 10s"
  sleep 10
done
Make sure you sleep for a certain amount of time to go easy on the system resources.
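If you go with the variant that gives up after a timeout (rather than looping forever like the script above), a sketch of the matching compose configuration would be (the entrypoint path is hypothetical):
services:
  deconz:
    image: marthoc/deconz
    entrypoint: /wait-for-usb.sh   # hypothetical wrapper script along the lines of the one above
    restart: on-failure            # Docker restarts the container whenever the script exits non-zero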

Test restart policy - how to crash a container such that it restarts

I have a docker-compose file that creates 3 Hello World applications and uses nginx to load balance traffic across the different containers.
The docker-compose code is as follows:
version: '3.2'
services:
  backend1:
    image: rafaelmarques7/hello-node:latest
    restart: always
  backend2:
    image: rafaelmarques7/hello-node:latest
    restart: always
  backend3:
    image: rafaelmarques7/hello-node:latest
    restart: always
  loadbalancer:
    image: nginx:latest
    restart: always
    links:
      - backend1
      - backend2
      - backend3
    ports:
      - '80:80'
    volumes:
      - ./container-balancer/nginx.conf:/etc/nginx/nginx.conf:ro
I would like to verify that the restart: always policy actually works.
The approach I tried is as follows:
First, I run my application with docker-compose up;
I identify the container IDs with docker container ps;
I kill/stop one of the containers with docker stop ID_Container or docker kill ID_Container.
I was expecting that after the 3rd step (stopping/killing the container, which makes it exit with code 137), the restart policy would kick in and create a new container again.
However, this does not happen. I have read that this is intentional, as to have a way to be able to manually stop containers that have a restart policy.
Despite this, I would like to know how I can kill a container in such a way that it triggers the restart policy so that I can actually verify that it is working.
Thank you for your help.
If you run ps on the host you will be able to see the actual processes in all of your Docker containers. Once you find a container's main process ID, you can sudo kill it (you will have to be root). That will look more like a "crash", especially if you kill -SEGV it (signal 11) to simulate a segmentation fault.
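A sketch of that from the host (backend1 is one of the services from the compose file above; the inspect format string is standard Docker):
# Find the main PID of the backend1 container as seen from the host, then simulate a crash
PID=$(docker inspect --format '{{.State.Pid}}' "$(docker-compose ps -q backend1)")
sudo kill -SEGV "$PID"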
It is very occasionally useful for validation scenarios like this to have an endpoint that crashes your application, which you can enable in test builds, and other similar silly things. Just make sure you have a gate so that those endpoints don't exist in production builds. (In old-school C, an #ifdef TEST would do the job; some languages have equivalents but many don't.)
You can docker exec into the running container and kill processes. If your entrypoint process (pid 1) starts a sub process, find it and kill it
docker exec -it backend3 /bin/sh
ps -ef
Find the process whose parent is pid 1 and kill -9 it.
If your entrypoint is the only process (pid 1), it cannot be killed by the kill command. Consider replacing your entrypoint with a script that calls your actual process, which will allow you to use the idea suggested above (see the wrapper sketch below).
This should simulate a crashing container and should kick off the restart process.
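For illustration, a wrapper along those lines (the node command and path are assumptions based on the hello-node image name):
#!/bin/sh
# wrapper.sh -- run the real server as a child process instead of as PID 1,
# so it can be killed from inside the container to simulate a crash
node /app/server.js &
wait $!
exit $?   # exit with the child's status (e.g. 137 after kill -9)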
NOTES:
See explanation in https://unix.stackexchange.com/questions/457649/unable-to-kill-process-with-pid-1-in-docker-container
See why not run NodeJS as pid 1 in https://www.elastic.io/nodejs-as-pid-1-under-docker-images/

docker-compose restart interval

I have a docker-compose.yml file with a following:
services:
  kafka_listener:
    build: .
    command: bundle exec ./kafka foreground
    restart: always
  # other services
Then I start containers with: docker-compose up -d
On my amazon instance the kafka server (for example) sometimes fails to start, so the ./kafka foreground script fails. When typing docker ps I see the message: Restarting (1) 11 minutes ago. I thought docker should restart the failed container instantly, but it seems it doesn't. After all, the container was restarted about 30 minutes after the first failed attempt.
Is there any way to tell Docker-Compose to restart container instantly after failure?
You can use this policy:
on-failure
The on-failure policy is a bit interesting, as it allows you to tell Docker to restart a container if the exit code indicates an error but not if the exit code indicates success. You can also specify a maximum number of times Docker will automatically restart the container, like on-failure:3, which will retry 3 times.
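Applied to the compose file from the question, a sketch would look like this (the retry count is just an example):
services:
  kafka_listener:
    build: .
    command: bundle exec ./kafka foreground
    restart: on-failure:3   # restart on a non-zero exit code, at most 3 times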
unless-stopped
The unless-stopped restart policy behaves the same as always with one exception. When a container is stopped and the server is rebooted or the Docker service is restarted, the container will not be restarted.
Hope this will help you with this problem.
Thank you!

How does one close a dependent container with docker-compose?

I have two containers that are spun up using docker-compose:
web:
  image: personal/webserver
  depends_on:
    - database
  entrypoint: /usr/bin/runmytests.sh
database:
  image: personal/database
In this example, runmytests.sh is a script that runs for a few seconds, then returns with either a zero or non-zero exit code.
When I run this setup with docker-compose, web_1 runs the script and exits. database_1 remains open, because the process running the database is still running.
I'd like to trigger a graceful exit on database_1 when web_1's tasks have been completed.
You can pass the --abort-on-container-exit flag to docker-compose up to have the other containers stop when one exits.
What you're describing is called a Pod in Kubernetes or a Task in AWS. It's a grouping of containers that form a unit. Docker doesn't have that notion currently (Swarm mode has "tasks" which come close but they only support one container per task at this point).
There is a hacky workaround besides scripting it as @BMitch described. You could mount the Docker daemon socket from the host. E.g.:
web:
  image: personal/webserver
  depends_on:
    - database
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  entrypoint: /usr/bin/runmytests.sh
and add the Docker client to your personal/webserver image. That would allow your runmytests.sh script to use the Docker CLI to shut down the database first, e.g. docker kill database, as in the sketch below.
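A minimal sketch of such a script (the test command itself is hypothetical):
#!/bin/sh
# runmytests.sh -- run the tests, then shut down the database container
# (works because /var/run/docker.sock is mounted and the Docker CLI is installed)
./run_tests.sh            # hypothetical test runner
status=$?
docker kill database
exit $status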
Edit:
Third option. If you want to stop all containers when one fails, you can use the --abort-on-container-exit option to docker-compose, as @dnephin mentions in another answer.
I don't believe docker-compose supports this use case. However, making a simple shell script would easily resolve this:
#!/bin/sh
# Start the database in the background, run the tests, then clean up the database container
docker run -d --name=database personal/database
docker run --rm -it --entrypoint=/usr/bin/runmytests.sh personal/webserver
docker stop database
docker rm database
