Prevent container from exiting, conditionally - docker

I have this entrypoint in a Dockerfile:
ENTRYPOINT ["r2g", "run"]
and I run the resulting image with:
docker run --name "$container" "$tag"
Most of the time I want the container to exit when it's done - the r2g process is not a server, but a command-line testing tool. So my question is: if I want to conditionally keep the container from exiting, is there a flag I can pass to docker run to keep the container alive? Or can I add something to the ENTRYPOINT to keep it alive?

The only way to keep a Docker container running is to have it run a command that does not exit.
In your case, when you don't want the container to exit, you can override the image's ENTRYPOINT and chain a sleep after the command, something like this:
docker run --name "$container" --entrypoint sh "$tag" -c "r2g run && sleep infinity"
(The --entrypoint flag is needed here because the image already sets ENTRYPOINT ["r2g", "run"]; without it, sh -c "..." would just be appended as arguments to r2g run.) This way, once the r2g command finishes, the container waits indefinitely and keeps running.
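Alternatively, if you want to keep the ENTRYPOINT and toggle the behaviour per run, you can point it at a small wrapper script. This is only a sketch; the KEEP_ALIVE variable name is an assumption, passed in with docker run -e KEEP_ALIVE=1, and the ENTRYPOINT would change to ["/entrypoint.sh"]:
#!/bin/sh
# entrypoint.sh - run r2g, then conditionally keep the container alive
r2g run "$@"
status=$?
# KEEP_ALIVE is a hypothetical opt-in flag for debugging/inspection
if [ -n "$KEEP_ALIVE" ]; then
  sleep infinity
fi
exit $status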

Related

How can I make a docker container run a script every time the container restarts?

I know I can use the Dockerfile's CMD, RUN, and ENTRYPOINT instructions to run a script when the container starts, but how can I make the container run a script every time it restarts on failure?
The entrypoint runs every time a container starts, or restarts. It's common practice to put startup configuration in a shell script that then execs the application's "true" entrypoint at the end. (See What purpose does using exec in docker entrypoint scripts serve? for why exec is important.)
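A minimal sketch of that pattern (the configuration step is a placeholder):
#!/bin/sh
# docker-entrypoint.sh - one-time startup work, then hand off
echo "applying startup configuration..."   # placeholder for real setup work
# exec replaces this shell with the real application, so the app becomes
# PID 1, receives signals directly, and is re-run on every restart
exec "$@"
With ENTRYPOINT ["/docker-entrypoint.sh"] and the application command in CMD, the script runs again before the app on every restart.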
Remember, Docker is really just a wrapper around filesystem, process, and network namespacing. It can't restart your container in any way other than rerunning the same process it started in the first place.
You can try it yourself with an invocation something like this:
docker run -d --restart=always --entrypoint=sh alpine -c "sleep 5; echo Exiting; exit"
If you docker logs -f that container, you'll see Exiting printed every 5 seconds. Note that when the container stops, the log following stops too, so you'll have to run it again to see the next restart.

How to create a Dockerfile so that the container runs without exiting immediately

Official Docker images like MySQL can be run like this:
docker run -d --name mysql_test mysql/mysql-server:8.0.13
And it can run indefinitely in the background.
I want to create an image that does the same, specifically a Flask development server (just for testing). But my container exits immediately. My Dockerfile looks like this:
FROM debian:buster
ENV TERM xterm
RUN XXXX # some apt-get and Python installation stuff
ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ]
EXPOSE 80
EXPOSE 5000
USER myuser
WORKDIR /home/myuser
However it exited immediately as soon as it was run. I also tried "bash" as an entrypoint, just to make sure it wasn't a Flask configuration issue, and it also exited.
How do I make it so that it runs as THE process in the container?
EDIT
OK, someone posted below (but later deleted) that the command to test with is tail -f /dev/null, and with that the container does run indefinitely. I still don't understand why bash doesn't work as a process that doesn't exit (or does it?). But my Flask configuration is probably off.
EDIT 2
I see that running without the -d flag prints out stdout (and stderr), so I can diagnose the problem.
Let's clear things up.
In general, a container exits as soon as its entrypoint process finishes.
In your case, ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ] would normally be enough to keep the container alive, but there is a configuration error in it: --host takes only an address, and the port belongs in a separate --port option, so flask most likely exits with an error before the server ever starts. You can confirm that the container exited abnormally by running docker ps -a and inspecting the exit code (it will be non-zero, likely 1).
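If that is the problem, the conventional form separates the two options:
# --host takes only an address; the port is a separate option
ENTRYPOINT [ "flask", "run", "--host=0.0.0.0", "--port=5000" ]
With a valid host and port, the development server stays in the foreground and the container keeps running.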
Let's now discuss the questions in your edits.
The key part of your misunderstanding derives from the -d flag.
You are right to think that setting bash as the entrypoint would be enough to keep the container alive, but you need to attach to that shell.
When running in detached mode (-d), the container will execute the bash command, but since nothing is attached to that shell, bash exits immediately and the container stops. In addition, using this flag prevents you from watching the container's output live (though you can still use docker logs container_id to debug), which is very useful when you are in the early phase of setting things up. So I recommend using this flag only once you are sure that everything works as intended.
To attach to the bash shell and keep the container alive, use the -it flag, so that the bash shell is attached to the terminal invoking the docker run command.
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
Please also consult official documentation about foreground vs background mode.
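A quick way to see the difference for yourself (using the stock debian image as a stand-in):
docker run -d --name bg debian bash    # bash exits at once: stdin closed, no TTY
docker ps -a --filter name=bg          # STATUS: Exited (0) ...
docker run -it --name fg debian bash   # your terminal attaches to a live shell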
The answer to your edit is: when you do docker run <image> bash, it will literally call bash, which exits 0 immediately, because with no terminal attached bash has nothing to do. In this invocation bash is just another command that runs and finishes, not an interactive shell.
If you ran docker run -it <image> tail -f /dev/null and then docker exec -it <container> /bin/bash, you'd drop into the shell, because that's the command you ran.
Your Dockerfile doesn't have a command that runs persistently in the foreground; in MySQL's case, the image runs mysqld, which starts a server as PID 1.
When PID 1 exits, the container stops.
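You can verify which process is PID 1 in a running container, e.g. with the mysql_test container from above:
docker exec mysql_test cat /proc/1/comm    # prints the name of PID 1, e.g. mysqld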
Your entrypoint is most likely failing to start, or starting and exiting because of how your command is running.
I would try changing your entrypoint to a shell script that launches Flask and prints its output, so you can see exactly where it fails.

What's the difference between `docker run -d` and `docker run -dit`?

If I want to use it as a development environment for Node.js, is it alright to just use docker run -d?
Do I really need the flags below?
--interactive , -i Keep STDIN open even if not attached
--tty , -t Allocate a pseudo-TTY
In a normal scenario, there is only one difference:
-dit runs the container in the background,
-it runs the container in the foreground; both keep STDIN open and allocate a pseudo-terminal.
But what if the entrypoint is bash, as in the case of the official ubuntu Dockerfile? There, the maintainers expect users to override the CMD as needed in a dependent Dockerfile:
# overwrite this with 'CMD []' in a dependent Dockerfile
CMD ["/bin/bash"]
So in this case, when you only specify -d, your container stops as soon as it starts, because bash exits without an attached terminal; what you need is to allocate a pseudo-terminal by adding -dit.
With only -d, docker ps shows no running container, and checking the stopped containers with docker ps -a shows it exited a minute ago. Run it again with -dit and docker ps shows the container up and running. The same goes for alpine: if you run alpine with -d it will also stop.
docker run -d alpine
This will exit as soon as it starts; -dit fixes that by allocating a pseudo-TTY and keeping STDIN open, as described in the documentation.
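A reproducible check (the container names are arbitrary):
docker run -d --name test-d ubuntu
docker ps -a --filter name=test-d     # STATUS: Exited (0) ... seconds ago
docker run -dit --name test-dit ubuntu
docker ps --filter name=test-dit      # STATUS: Up ...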

Using custom shell script as docker entrypoint

I’m trying to use docker-compose to bring up a container. As an ENTRYPOINT, that container has a simple bash script that I wrote. Now, when I try to bring up the container in any of the below ways:
docker-compose up
docker-compose up foo
it doesn't complete. I.e., trying to attach (docker exec -i -t $1 /bin/bash) to the running container fails with:
Error response from daemon: Container xyz is restarting, wait until the container is running.
I tried playing around with putting commands in the background. That didn’t work.
my-script.sh
cmd1 &
cmd2 &
...
cmdn &
I also tried i) with and without entrypoint: /bin/sh /usr/local/bin/my-script.sh and ii) with and without the tty: true option. No dice.
docker-compose.yml
version: '2'
services:
  foo:
    build:
      context: .
      dockerfile: Dockerfile.foo
    ...
    tty: true
    entrypoint: /bin/sh /usr/local/bin/my-script.sh
I also tried just a manual docker build / run cycle, and (without launching /bin/sh in the ENTRYPOINT) the run just exits.
$ docker build ... .
$ docker run -it ...
... shell echoes commands here, then exits
$
I'm sure it's something simple. What's the solution here?
Your entrypoint in your docker-compose.yml only needs to be
entrypoint: /usr/local/bin/my-script.sh
Just add #! /bin/sh to the top of the script to specify the shell you want to use.
You also need to add exec "$@" to the bottom of your entrypoint script, or else the script will exit immediately, which terminates the container.
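Putting both pieces together, my-script.sh would look something like this (cmd1 and cmd2 stand in for the question's placeholder commands):
#!/bin/sh
# start the helper processes in the background
cmd1 &
cmd2 &
# hand off to the image's CMD (or whatever command is passed to docker run),
# so a long-lived foreground process keeps the container alive as PID 1
exec "$@"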
First of all, you need to run something that does not exit in order to keep your container running in the background - for example tail -f application.log, or anything like it - so that even if you exit from your container's shell, the container keeps running.
You do not need cmd1 & cmd2 & ... cmdn &; just place one command like touch 1.txt && tail -f 1.txt as the last step in your my-script.sh. It will keep your container running.
One thing you also need to change is docker run -it to docker run -d; -d starts the container in background mode. If you want to go inside your container, use docker exec -it container_name/id bash, debug the issue, and exit. The container will keep running until you stop it with docker stop container_id/name.
Hope this helps.
Thank you!

How to keep an infinite loop running in order to not close a container in docker

I want to keep a docker container running even after executing the run command (containers exit immediately after docker run). I know that the command:
while :; do
  sleep 300
done
passed during docker run will keep it alive, but how do I edit the Dockerfile itself in order to keep it running?
You can do this by putting the commands you want to execute into a script, and setting the script to be the command Docker runs when it starts a container:
FROM sixeyed/ubuntu-with-utils
RUN echo 'ping localhost &' > /bootstrap.sh
RUN echo 'sleep infinity' >> /bootstrap.sh
RUN chmod +x /bootstrap.sh
CMD /bootstrap.sh
When you build an image from this Dockerfile and run a container from the image, it will start ping in the background and sleep in the foreground, so you can daemonize the container with docker run -d and it will keep running.
This is not ideal though - Docker only monitors the last process it started when it ran the container, so it will be checking on sleep rather than ping. If the ping command fails, the container will keep running anyway. Typically, you want the real application to be the only thing you start in the CMD.
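Following that advice, a variant of the same image where the real process is the one Docker monitors:
FROM sixeyed/ubuntu-with-utils
# ping runs in the foreground as PID 1, so the container's lifetime is
# tied to the process you actually care about
CMD ["ping", "localhost"]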
