I have two containers running on Ubuntu Server 22.04 LTS.
One of them is Selenium Grid, and the second is a Python container that works in conjunction with the aforementioned Selenium container.
How can I get these two containers correctly restarted after a system poweroff or reboot?
I tried this:
docker update --restart [on-failure|always|unless-stopped] container_grid
docker update --restart [on-failure|always|unless-stopped] container_python
The Selenium Grid container restarts correctly, but the Python container keeps restarting in a loop.
I suppose that for some reason it cannot establish a connection to the Selenium container, exits with code 1, and keeps restarting.
How can I avoid this? Maybe there is a solution that adds a delay or sets the order in which the containers restart after the system comes back up? Or should I simply add some delay in the Python code, because there is no simple solution to this?
I am not a software developer but an automation engineer, so could somebody help me with a solution? Maybe it would be Docker Compose or something else.
Thanks in advance.
So, I solved this problem via crontab.
The Selenium container starts in accordance with the "--restart on-failure" option.
My Python container starts with a delay, via this crontab entry:
@reboot sleep 20 && docker start [python_container]
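For what it's worth, the Docker Compose route can express this ordering declaratively. A minimal sketch, where the image names, the service names, and the healthcheck URL are my assumptions (it also assumes curl exists in the grid image):

# docker-compose.yml
services:
  grid:
    image: selenium/standalone-chrome
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4444/wd/hub/status"]
      interval: 5s
      timeout: 5s
      retries: 12
  python_worker:
    image: my-python-image        # hypothetical image with the Python code
    restart: on-failure
    depends_on:
      grid:
        condition: service_healthy   # only start once the grid reports healthy

Note that depends_on only orders startup when you bring the stack up with docker compose up; after that, the restart policies cover crashes.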
I would like to stop my running docker container after a specific time, let's say 2 hrs after startup. So far my research has led to the following solutions. I just wanted to know if there were better ways to do it.
Use a cron job to stop the container by calling the docker stop command.
Use an entry point like sleep 5000, but this does not suit my use case.
Using --stop-timeout in the docker run command; I believe this is just the maximum timeout given for the container to gracefully shut down. Am I missing something here?
You can use the timeout command that's part of the coreutils package, which is already installed in the Debian images (and probably many others).
This will run the container for 30 seconds and then stop it:
docker run debian timeout 30 tail -f /dev/null
Basically, add timeout 7200 in front of the command you want to run in the container, and it'll be killed after 2 hours.
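Applied to the two-hour case it might look like this, where the trailing command is just a placeholder for whatever your container actually runs:

docker run -d debian timeout 7200 tail -f /dev/null

One thing worth knowing: timeout exits with status 124 when it has to kill the command, which matters if you combine this with a --restart on-failure policy.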
I have to analyze tons of files which require different tools. Right now I have several steps which are each in their own separate Docker container. Each container takes a folder as input and produces a folder of output files, and everything is working fine. Now I want to automate this as a chain of these containers. How can one container start the next one, though? Do I have to use Docker in Docker, as a container cannot start a new one on the host system?
Bonus:
What if the run command differs, as it depends on the output of the previous container?
Thank you so much, couldn't find a working solution yet.
If you want to simply start the containers one after another, you can do it with a shell script:
#!/bin/bash
echo "Running first container"
docker run -it --name sleep1 --rm sleep
echo "Running second container"
docker run -it --name sleep2 --rm sleep
Example Dockerfile (docker build -t sleep .):
FROM alpine:3.3
ENTRYPOINT ["/bin/ash", "-c", "sleep 5"]
Maybe this helps you get what you want. If not, please describe a bit more what you want to achieve, maybe with code examples or something.
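For the bonus question: since this script runs on the host, it can inspect the previous container's output folder and vary the next run command. A sketch, where the folder layout, the marker file, and the image names are all hypothetical:

#!/bin/bash
# Run the first analysis step; it writes its results into ./data
docker run --rm -v "$PWD/data:/data" step1-image

# Branch on the first step's output (manifest.txt is a made-up marker file)
if grep -q "needs-ocr" data/manifest.txt; then
  docker run --rm -v "$PWD/data:/data" ocr-image
else
  docker run --rm -v "$PWD/data:/data" parse-image
fi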
Edit:
Based on the new information from the asker, I see only two possibilities to chain the start of multiple containers from inside a container:
1. Run Docker in Docker (nested containers), but this is not really recommended.
2. Run the script from above via SSH from the container on the host machine.
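Option 2 could look roughly like this from inside the orchestrating container, assuming the container holds an SSH key that is authorized on the host (user, host, and script path are placeholders):

ssh user@host-machine 'bash /path/to/chain-script.sh'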
I would like to do "starting a long-running worker process" as in this article:
https://docs.docker.com/articles/basics/
I don't understand: why sleep 1? Why not sleep 86400 (one day), or one year?
# Start a very useful long-running process
$ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
# Collect the output of the job so far
$ sudo docker logs $JOB
# Kill the job
$ sudo docker kill $JOB
What's the "best" way to make it run as a background process for Apache, Nginx, MySQL, etc.?
Why do you need the echo? Is that necessary?
This job writes Hello world to stdout every second, I guess for demonstration purposes only. If you want to do something else at a different interval, you have to change it accordingly.
The key thing is that the -d flag makes Docker run it in the background ('detached'), and docker logs lets you examine the logs as many times as you like after that point.
For Apache, get hold of an Apache image; for Nginx use an Nginx image, and so on.
I found this image with Nginx and PHP, and the official Docker mysql image worked for me. You'll need to do a bit more reading to see how to integrate your data, web content, config, etc.
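For example, the official mysql image can be started detached like this, per that image's documentation (the container name and password are placeholders):

docker run -d --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql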
I've seen a bunch of tutorials that seem to do the same thing I'm trying to do, but for some reason my Docker containers exit. Basically, I'm setting up a web server and a few daemons inside a Docker container. I do the final parts of this through a bash script called run-all.sh that I run through CMD in my Dockerfile. run-all.sh looks like this:
service supervisor start
service nginx start
And I start it inside of my Dockerfile as follows:
CMD ["sh", "/root/credentialize_and_run.sh"]
I can see that the services all start up correctly when I run things manually (i.e. getting on to the image with -i -t /bin/bash), and everything looks like it runs correctly when I run the image, but it exits once it finishes starting up my processes. I'd like the processes to run indefinitely, and as far as I understand, the container has to keep running for this to happen. Nevertheless, when I run docker ps -a, I see:
➜ docker_test docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7706edc4189 some_name/some_repo:blah "sh /root/run-all.sh 8 minutes ago Exited (0) 8 minutes ago grave_jones
What gives? Why is it exiting? I know I could just put a while loop at the end of my bash script to keep it up, but what's the right way to keep it from exiting?
If you are using a Dockerfile, try:
ENTRYPOINT ["tail", "-f", "/dev/null"]
(Obviously this is for dev purposes only; you shouldn't need to keep a container alive unless it's running a process, e.g. nginx...)
I just had the same problem, and I found out that if you are running your container with the -t and -d flags, it keeps running.
docker run -td <image>
Here is what the flags do (according to docker run --help):
-d, --detach=false Run container in background and print container ID
-t, --tty=false Allocate a pseudo-TTY
The most important one is the -t flag. -d just lets you run the container in the background.
This is not really how you should design your Docker containers.
When designing a Docker container, you're supposed to build it such that there is only one process running (i.e. you should have one container for Nginx, and one for supervisord or the app it's running); additionally, that process should run in the foreground.
The container will "exit" when the process itself exits (in your case, that process is your bash script).
However, if you really need (or want) to run multiple services in your Docker container, consider starting from "baseimage-docker", which uses runit as a pseudo-init process that stays in the foreground while Nginx, Supervisor, and your other processes do their thing.
They have substantial docs, so you should be able to achieve what you're trying to do reasonably easily.
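A minimal sketch of that layout, going by the baseimage docs; the tag is a placeholder, and nginx-run.sh is a hypothetical script that must exec nginx in the foreground:

FROM phusion/baseimage:<release-tag>   # substitute a real tag from their releases

# runit picks up any executable /etc/service/<name>/run script and supervises it
RUN mkdir -p /etc/service/nginx
COPY nginx-run.sh /etc/service/nginx/run
RUN chmod +x /etc/service/nginx/run

# baseimage's init stays in the foreground and keeps the services alive
CMD ["/sbin/my_init"]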
You can run plain cat without any arguments, as mentioned by @Sa'ad, to simply keep the container alive (actually doing nothing but waiting for user input); Jenkins' Docker plugin does the same thing.
The reason it exits is that the shell script runs first as PID 1, and when it completes, PID 1 is gone; Docker only keeps the container running while PID 1 exists.
You can use supervisor to do everything; if run with the "-n" flag it's told not to daemonize, so it will stay as the first process:
CMD ["/usr/bin/supervisord", "-n"]
And your supervisord.conf:
[supervisord]
nodaemon=true
[program:startup]
priority=1
command=/root/credentialize_and_run.sh
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
startsecs=0
[program:nginx]
priority=10
command=nginx -g "daemon off;"
stdout_logfile=/var/log/supervisor/nginx.log
stderr_logfile=/var/log/supervisor/nginx.log
autorestart=true
Then you can have as many other processes as you want and supervisor will handle the restarting of them if needed.
That way you could use supervisord in cases where you might need nginx and php5-fpm and it doesn't make much sense to have them apart.
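For that nginx plus php5-fpm case, it would just be one more program block in the same supervisord.conf; a sketch, assuming the php-fpm binary name matches your PHP version (-F keeps it in the foreground):

[program:php-fpm]
priority=5
command=php-fpm -F
stdout_logfile=/var/log/supervisor/php-fpm.log
stderr_logfile=/var/log/supervisor/php-fpm.log
autorestart=true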
Motivation:
There is nothing wrong with running multiple processes inside a docker container. If one likes to use Docker as a lightweight VM, so be it. Others like to split their applications into microservices. Methinks: a LAMP stack in one container? Just great.
The answer:
Stick with a good base image like the phusion base image. There may be others. Please comment.
And this is yet another plea for a supervisor, because the phusion base image provides a process supervisor (runit) besides some other things like cron and locale setup; stuff you'd like to have set up when running such a lightweight VM. For what it's worth, it also provides SSH connections into the container.
The phusion image itself will just start and keep running if you issue this basic docker run statement:
moin@stretchDEV:~$ docker run -d phusion/baseimage
521e8a12f6ff844fb142d0e2587ed33cdc82b70aa64cce07ed6c0226d857b367
moin@stretchDEV:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
521e8a12f6ff phusion/baseimage "/sbin/my_init" 12 seconds ago Up 11 seconds
Or dead simple:
If a base image is not for you: for a quick CMD to keep it running, I would suggest something like this for bash:
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
Or this for busybox:
CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"
This is nice, because it will exit immediately on a docker stop.
Just plain sleep or cat will take a few seconds before the container is forcefully killed by docker.
Updates
In response to Charles Desbiens concerning running multiple processes in one container:
This is an opinion. And the docs point in this direction. A quote: "It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application." For sure, it is obviously much more powerful to divide your complex service into multiple containers. But there are situations where it can be beneficial to go the one-container route, especially for appliances. The GitLab Docker image is my favourite example of a multi-process container. It makes deployment of this complex system easy. There is no way for misconfiguration. GitLab retains all control over their appliance. Win-win.
Make sure that you add daemon off; to your nginx.conf, or run it with CMD ["nginx", "-g", "daemon off;"] as per the official nginx image.
Then use the following to run both supervisor as a service and nginx as a foreground process, which will prevent the container from exiting:
service supervisor start && nginx
In some cases you will need to have more than one process in your container, so forcing the container to have exactly one process won't work and can create more problems in deployment.
So you need to understand the trade-offs and make your decision accordingly.
Since Docker Engine v1.25 there has been an option called init.
Docker Compose has included this option as of file format version 3.7.
So my current CMD when running a container that should run forever:
CMD ["sleep", "infinity"]
and then run it using:
docker build -t app .
docker run --rm --init app
cf.: rm docs and init docs
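In a compose file (format 3.7 or later), the equivalent is the init key; a minimal sketch where the service name is arbitrary:

version: "3.7"
services:
  app:
    build: .
    init: true    # run Docker's built-in init as PID 1 to forward signals and reap zombies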
Capture the PID of the nginx process in a variable (for example $NGINX_PID), and at the end of the entrypoint file do
wait $NGINX_PID
That way, your container runs as long as nginx is alive; when nginx stops, the container stops as well.
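A sketch of such an entrypoint script; it keeps nginx non-daemonized so that $! really is the nginx PID:

#!/bin/bash
# start nginx in the background and remember its PID
nginx -g "daemon off;" &
NGINX_PID=$!

# ...any other startup work could go here...

# block until nginx exits; the container lives exactly as long as nginx does
wait $NGINX_PID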
Along with having something along the lines of ENTRYPOINT ["tail", "-f", "/dev/null"] in your Dockerfile, you should also run the container with the -td option. This is particularly useful when the container runs on a remote machine. Think of it more like you have SSH'ed into a remote machine that has the image and started the container. In this case, when you exit the SSH session, the container will get killed unless it was started with the -td option. A sample command for running your image would be: docker run -td <any other additional options> <image name>
This holds for Docker version 20.10.2.
There are some cases during development when there is no service yet but you want to simulate it and keep the container alive.
It is very easy to write a bash placeholder that simulates a running service:
while true; do
sleep 100
done
You can replace this with something more serious as development progresses.
How about using the supervise form of service if available?
service YOUR_SERVICE supervise
Once supervise is successfully running, it will not exit unless it is killed or specifically asked to exit.
This saves having to create a supervisord.conf.