I have two containers running on Ubuntu Server 22.04 LTS.
One of them runs Selenium Grid, and the second is a Python container that connects to the Selenium container mentioned above.
How can I get these two containers correctly restarted after system poweroff or reboot?
I tried this:
docker update --restart {on-failure|always|unless-stopped} container_grid
docker update --restart {on-failure|always|unless-stopped} container_python
The Selenium Grid container restarts correctly, but Python container keeps restarting in a loop.
My guess is that for some reason it cannot establish a connection to the Selenium container, exits with code 1, and keeps restarting.
How can I avoid this? Maybe there is a solution that adds a delay or sets the order in which containers restart after the system boots? Or should I simply add a retry delay in the Python code because there is no simpler solution?
I am not a software developer but an automation engineer, so could somebody help me with a solution? Maybe it would be Docker Compose or something else.
Thanks in advance.
Solved this problem via crontab.
Selenium container starts in accordance with "--restart on-failure" option.
My Python container starts with a delay in accordance with crontab command:
@reboot sleep 20 && docker start [python_container]
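Since the question mentions Docker Compose: the startup order can also be expressed there. Below is a minimal sketch; the service names, image names, and the healthcheck endpoint are placeholders I made up, not taken from the actual setup.

```yaml
services:
  grid:
    image: selenium/standalone-chrome        # placeholder image
    restart: unless-stopped
    healthcheck:
      # endpoint path may differ between Selenium Grid versions
      test: ["CMD-SHELL", "curl -sf http://localhost:4444/status || exit 1"]
      interval: 10s
      retries: 6
  python_client:
    image: my-python-client                  # placeholder image
    restart: unless-stopped
    depends_on:
      grid:
        condition: service_healthy           # start only once the Grid answers
```

One caveat: depends_on ordering applies when Compose itself starts the services. After a host reboot the Docker daemon restarts the containers on its own, without ordering, so a small retry loop around the connection attempt in the Python code is still the most robust fix.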
I have problems with a misbehaving Docker container... I tried to follow this tutorial to set up an OpenVPN server on my new Raspberry Pi (the first one in my life)... and I think I did something really wrong. I tried to run it with restart policy "always".
The container hits the same error each time it tries to run:
standard_init_linux.go:211: exec user process caused "exec format error"
It tries to run every 10 seconds, lasts about 3 seconds each time, and always comes back with a different container ID and a different PID.
I've tried some of the solutions I found on the Internet to stop this madness...
It seems you are using a systemd unit to manage the container.
You should try this command:
systemctl stop docker-openvpn@NAME.service
Replace NAME with whatever name you gave your service.
It is stated in their documentation:
In the event the service dies (crashes, or is killed) systemd will attempt to restart the service every 10 seconds until the service is stopped with systemctl stop docker-openvpn@NAME.service
Check out the following link.
In case you forgot your service name, you can run this command to look it up:
systemctl --type=service
I have a docker-compose setup, which is deployed in three steps:
Build all the containers and dc up -d (dc is an alias for docker-compose)
Create database with: dc run web /usr/local/bin/python create_db.py
Populate database with: dc run -d web /usr/local/bin/python -u manage.py populateDB
Steps 2 and 3 create new containers (see the first two):
~/Documents/Project » docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS                              PORTS                    NAMES
2ead532ea58b   myproject_web      "/usr/local/bin/pytho"   8 minutes ago    Up 8 minutes                        8000/tcp                 myproject_web_run_2
64e1f81ecd1a   myproject_web      "/usr/local/bin/pytho"   9 minutes ago    Restarting (0) About a minute ago   8000/tcp                 myproject_web_run_1
9f5c670d4d7f   myproject_nginx    "/usr/sbin/nginx"        40 minutes ago   Up 40 minutes                       0.0.0.0:80->80/tcp       myproject_nginx_1
46d3e8c09c03   myproject_web      "/usr/local/bin/gunic"   40 minutes ago   Up 40 minutes                       8000/tcp                 myproject_web_1
ea876e68c8c6   postgres:latest    "/docker-entrypoint.s"   40 minutes ago   Up 40 minutes                       0.0.0.0:5432->5432/tcp   myproject_postgres_1
Which is all well and good, except they don't exit when their job is finished.
For example, the create_db script, as you can see, keeps restarting after it has created the database. And once myproject_web_run_2 has finished populating the database, it will add a second copy of each record, then a third, and so on, forever.
On GitHub, it seems this was requested as a Docker feature, and the docker run --rm flag handles it. But --rm and -d are incompatible, which I don't understand.
Do you know how to kill containers which have finished executing their functions? Specifically, how to get dc run web /usr/local/bin/python create_db.py to exit once create_db.py calls exit()? Or is there a better way?
I think you may be conflating two things here.
The --rm flag
This exists to clean up after a container is finished, so it doesn't hang around in the dead containers pool. As you already found, it is not compatible with -d. But in this case, you don't need it anyway.
The --restart flag
(Also available in docker-compose as the restart property.)
This flag sets the restart policy. By default it is set to no, but you can set it to a few other values, including always. I would suspect you have it set to always currently, which would force the container to restart every time it stops on its own.
If you manually stop the container (docker stop ...) then the auto-restart would not engage. But if the process exits on its own, or crashes, then the container will be restarted. This is available for the obvious reason, so your service will start up again if it crashes.
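For reference, here is the same setting in docker-compose syntax, as a sketch (one YAML gotcha: the value no must be quoted, otherwise YAML parses it as the boolean false):

```yaml
services:
  web:
    # choose exactly one value:
    restart: "no"            # default: never restart automatically
    # restart: on-failure    # restart only after a non-zero exit code
    # restart: unless-stopped
    # restart: always
```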
How to proceed
I would say what you need is to use exec instead of run for these tasks.
First, run your container normally (i.e. docker-compose up -d).
Instead of using run to execute create_db.py, use exec.
docker-compose exec web /usr/local/bin/python create_db.py
This will use your already-running container, execute the script one time, and when the script exits, you're done. Since you did not create a new container (like run was doing), there is no cleanup to do afterward.
Note that you do not need the -it flag that is often used with docker exec. docker-compose emulates a tty on exec by default.
Trying to stop a container from this image with either of the commands mentioned results in Docker waiting indefinitely. The container can still be seen in the docker ps output.
Sorry for a newbie question, but how does one stop containers properly?
This container was first run according to the instructions on hub.docker.com, halted with Ctrl+C, and then started again with docker start <container-name>. After it was restarted, though, it never worked as expected.
Your test worked for me:
→ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                    NAMES
853e36b8a952   jleight/opentsdb   "/usr/bin/supervisord"   9 minutes ago   Up 9 minutes   0.0.0.0:4242->4242/tcp   fervent_hypatia
→ docker stop fervent_hypatia
fervent_hypatia
→ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
It took a while, but I think that is because the image runs a supervisor process: SIGTERM (which is what docker stop sends first) doesn't kill the container, but the SIGKILL that is sent by default after 10 seconds does (my wait time was about 10 seconds).
Just in case your default is messed up for some reason, try specifying the timeout explicitly:
docker stop --time=2 <container-name>
docker stop <container-name> is the proper way to stop your container. It's possible something is going on inside; you could try using docker logs <container-name> to get more information about what's running in the container.
This probably isn't the best way, but if nothing else works, restarting the Docker daemon itself would eventually do the trick.
I've seen a bunch of tutorials that seem to do the same thing I'm trying to do, but for some reason my Docker containers exit. Basically, I'm setting up a web server and a few daemons inside a Docker container. I do the final parts of this through a bash script called run-all.sh that I run through CMD in my Dockerfile. run-all.sh looks like this:
service supervisor start
service nginx start
And I start it inside of my Dockerfile as follows:
CMD ["sh", "/root/credentialize_and_run.sh"]
I can see that the services all start up correctly when I run things manually (i.e. getting on to the image with -i -t /bin/bash), and everything looks like it runs correctly when I run the image, but it exits once it finishes starting up my processes. I'd like the processes to run indefinitely, and as far as I understand, the container has to keep running for this to happen. Nevertheless, when I run docker ps -a, I see:
➜ docker_test docker ps -a
CONTAINER ID   IMAGE                      COMMAND                 CREATED         STATUS                     PORTS   NAMES
c7706edc4189   some_name/some_repo:blah   "sh /root/run-all.sh"   8 minutes ago   Exited (0) 8 minutes ago           grave_jones
What gives? Why is it exiting? I know I could just put a while loop at the end of my bash script to keep it up, but what's the right way to keep it from exiting?
If you are using a Dockerfile, try:
ENTRYPOINT ["tail", "-f", "/dev/null"]
(Obviously this is for dev purposes only; you shouldn't need to keep a container alive unless it's running a process, e.g. nginx...)
I just had the same problem and I found out that if you are running your container with the -t and -d flag, it keeps running.
docker run -td <image>
Here is what the flags do (according to docker run --help):
-d, --detach=false Run container in background and print container ID
-t, --tty=false Allocate a pseudo-TTY
The most important one is the -t flag. -d just lets you run the container in the background.
This is not really how you should design your Docker containers.
When designing a Docker container, you're supposed to build it such that there is only one process running (i.e. you should have one container for Nginx, and one for supervisord or the app it's running); additionally, that process should run in the foreground.
The container will "exit" when the process itself exits (in your case, that process is your bash script).
However, if you really need (or want) to run multiple services in your Docker container, consider starting from a "Docker Base Image" such as phusion's, which uses runit as a pseudo-init process: runit stays in the foreground while Nginx, Supervisor, and your other processes do their thing.
They have substantial docs, so you should be able to achieve what you're trying to do reasonably easily.
You can run plain cat without any arguments, as mentioned by @Sa'ad, to simply keep the container alive [actually doing nothing but waiting for user input] (Jenkins' Docker plugin does the same thing).
The reason it exits is that the shell script runs as PID 1, and when it completes, PID 1 is gone; Docker only keeps the container running while PID 1 exists.
You can use supervisor to do everything: when run with the -n flag it is told not to daemonize, so it stays as the first process:
CMD ["/usr/bin/supervisord", "-n"]
And your supervisord.conf:
[supervisord]
nodaemon=true
[program:startup]
priority=1
command=/root/credentialize_and_run.sh
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
startsecs=0
[program:nginx]
priority=10
command=nginx -g "daemon off;"
stdout_logfile=/var/log/supervisor/nginx.log
stderr_logfile=/var/log/supervisor/nginx.log
autorestart=true
Then you can have as many other processes as you want and supervisor will handle the restarting of them if needed.
That way you can use supervisord in cases where you need nginx and php5-fpm together and it doesn't make much sense to keep them apart.
Motivation:
There is nothing wrong with running multiple processes inside a docker container. If one likes to use docker as a lightweight VM, so be it. Others like to split their applications into microservices. Methinks: a LAMP stack in one container? Just great.
The answer:
Stick with a good base image like the phusion base image. There may be others. Please comment.
And this is yet another plea for supervisor, because the phusion base image provides supervisor along with some other things like cron and locale setup, stuff you like to have set up when running such a lightweight VM. For what it's worth, it also provides ssh connections into the container.
The phusion image itself will just start and keep running if you issue this basic docker run statement:
moin@stretchDEV:~$ docker run -d phusion/baseimage
521e8a12f6ff844fb142d0e2587ed33cdc82b70aa64cce07ed6c0226d857b367
moin@stretchDEV:~$ docker ps
CONTAINER ID   IMAGE               COMMAND           CREATED          STATUS
521e8a12f6ff   phusion/baseimage   "/sbin/my_init"   12 seconds ago   Up 11 seconds
Or dead simple:
If a base image is not for you... For a quick CMD to keep the container running, I would suggest something like this for bash:
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
Or this for busybox:
CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"
This is nice, because it will exit immediately on a docker stop.
Just plain sleep or cat will take a few seconds before the container is forcefully killed by docker.
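The pattern can be tried outside Docker as well. Here is the same trap-and-wait idea as a standalone sh script (a sketch; the 1000-second sleep just stands in for "run forever"):

```shell
#!/bin/sh
# Keep-alive sketch: a long sleep runs in the background, `wait` blocks on it,
# and the trap turns SIGTERM/SIGINT into an immediate clean exit. Without the
# trap, the script would sit in the sleep until it is forcefully killed.
trap 'exit 0' TERM INT
sleep 1000 &
wait $!
```

Send the process a TERM signal (as docker stop does) and it exits right away instead of waiting out the sleep.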
Updates
As response to Charles Desbiens concerning running multiple processes in one container:
This is an opinion, and the docs point in the same direction. A quote: "It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application." It is, for sure, much more powerful to divide your complex service into multiple containers. But there are situations where it can be beneficial to go the one-container route, especially for appliances. The GitLab Docker image is my favourite example of a multi-process container: it makes deployment of this complex system easy, there is no way to misconfigure it, and GitLab retains full control over their appliance. Win-win.
Make sure that you add daemon off; to your nginx.conf, or run it with CMD ["nginx", "-g", "daemon off;"] as per the official nginx image.
Then use the following to run supervisor as a service and nginx as the foreground process, which will prevent the container from exiting:
service supervisor start && nginx
In some cases you will need to have more than one process in your container, so forcing the container to have exactly one process won't work and can create more problems in deployment.
So you need to understand the trade-offs and make your decision accordingly.
Since Docker Engine v1.25 there is an option called --init.
docker-compose supports it (as the init property) as of file format version 3.7.
So my current CMD for a container that should run indefinitely is:
CMD ["sleep", "infinity"]
and then run it using:
docker build -t app .
docker run --rm --init app
cf.:
rm docs and init docs
Capture the PID of the nginx process in a variable (for example $NGINX_PID) and at the end of the entrypoint file do
wait $NGINX_PID
That way, your container runs as long as nginx is alive; when nginx stops, the container stops as well.
Along with having something along the lines of ENTRYPOINT ["tail", "-f", "/dev/null"] in your Dockerfile, you should also run the container with the -td option. This is particularly useful when the container runs on a remote machine. Think of it as having ssh'ed into a remote machine that has the image, and starting the container there. In that case, when you exit the ssh session, the container will get killed unless it was started with the -td option. A sample command for running your image would be: docker run -td <any other additional options> <image name>
This holds good for docker version 20.10.2
There are some cases during development when there is no service yet but you want to simulate it and keep the container alive.
It is very easy to write a bash placeholder that simulates a running service:
while true; do
sleep 100
done
You can replace this with something more serious as development progresses.
How about using the supervise form of service if available?
service YOUR_SERVICE supervise
Once supervise is successfully running, it will not exit unless it is
killed or specifically asked to exit.
Saves having to create a supervisord.conf