Is it possible to run a command when stopping a Docker container? - docker

I'm using docker-compose to organize containers for a JS application.
The source container sports command: npm start, pretty standard, to spin up the live application. However, that times out when I ask it to stop.
I was wondering if it's possible to have docker-compose stop run a command inside the container that could properly terminate the application.

docker-compose stop just sends a SIGTERM to your container, and if it does not stop within 10 seconds (configurable), a SIGKILL follows. So if you want to customize this behavior, you should handle the signal inside your entrypoint (if you have one).
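As a minimal sketch of that (entrypoint.sh is a hypothetical name, and the app is assumed to be started with npm start), the entrypoint can trap the SIGTERM sent by docker-compose stop and run a graceful-shutdown step before exiting:
#!/bin/sh
# hypothetical entrypoint.sh: start the app, catch SIGTERM, shut down cleanly
npm start &
APP_PID=$!

term_handler() {
  # replace this with whatever terminates your application cleanly
  kill -TERM "$APP_PID" 2>/dev/null
  wait "$APP_PID"
  exit 0
}

trap term_handler TERM INT
wait "$APP_PID"
If your application already handles SIGTERM itself, an even simpler option is to exec the process directly (e.g. exec npm start) so the signal reaches it without a shell in between.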

Related

How to keep a docker container running some background processes

I have a python script that launches multiple programs in different screen sessions (gnu screen: terminal multiplexer) and exits.
The screen sessions keep running in the background.
Everything works fine when I run the script directly on my host machine.
Now I have dockerized the entire thing, but the Docker container exits after the script finishes.
docker run -d myimage
After reading a bit, I realized that this is the normal behavior of Docker: if no foreground process is running, the container will exit.
But I want to keep the docker container up, what should I do?
I saw that running with the -it flag keeps it up, but I want to detach from the container and still keep it up.
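One common workaround (a sketch; /launch.py stands in for your script) is to give the container a long-lived foreground process so the detached screen sessions can keep running behind it:
docker run -d myimage sh -c "/launch.py && tail -f /dev/null"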

How can I run script automatically after Docker container startup without altering main process of container

I have a Docker container which runs a web service. After the container process is started, I need to run a single command. How can I do this automatically, either by using Docker Compose or Docker?
I'm looking for a solution that does not require me to substitute the original container process with a Bash script that runs sleep infinity etc. Is this even possible?
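One possible sketch with Docker Compose (web and /post-start.sh are hypothetical names): bring the stack up detached, then run the one-off command inside the already-running container, leaving the main process untouched.
docker compose up -d
docker compose exec web /post-start.sh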

docker container lifecycle confusion

I am new to Docker, and I find the definitions of containers' lifecycle differ a lot.
here is what "Manning.Docker.in.Action.2016.3" shows:
here is what google gives me:
https://medium.com/@nagarwal/lifecycle-of-docker-container-d2da9f85959
here is what the official document says:
status: One of created, restarting, running, removing, paused, exited, or dead
https://docs.docker.com/engine/reference/commandline/ps/
So what's going on here? I guess some new states (and renaming) were introduced in newer versions of Docker?
Thanks in advance
Your linked diagram separates docker create from docker start, includes "die" as a state transition, and shows how to get to the "restarting" state. That's all valid, though it leads to a more complicated state machine.
(docker create wasn't in the very first versions of Docker but it appeared in Docker 1.3.0 in 2014, which should predate your diagram.)
Practically I might suggest an even simpler state machine:
         run                 stop             rm
-------> running -+-------> stopped -------->
                  |
                  \-------> exited --------->
                      process exits        rm
That is, never try to restart a container or make changes inside a running container; if you need to tweak anything, delete the existing container and create a new one. This gives you a consistent environment (when the main container process starts you always know what's in its filesystem, up to mounted data). It also matches what happens in cluster environments like Kubernetes, where the cluster manager will routinely create and delete containers for you.
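A concrete walk-through of that simpler lifecycle (nginx is just an example image):
docker run -d --name web nginx    # running
docker stop web                   # stopped
docker rm web                     # removed; to change anything, create a fresh container
docker run -d --name web nginx    # new container with a known-good filesystem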
When you get into a situation where the internet gives you different answers, you should consider trying it yourself, especially with technologies like Docker, where it is pretty simple to run tests. For example:
I want to run a container (I will use nginx):
docker run -d nginx
docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS     NAMES
258cd2edbed8   nginx   "nginx -g 'daemon of…"   3 seconds ago   Up 2 seconds   80/tcp    jolly_golick
Note: Docker will keep a container running only if there is a process running in it.
If you start a debian container (for example), you will see it stop immediately, as there is nothing running in it. So you could do
docker run -d debian sleep 10
and see that the container is up for 10 seconds.
When a container is running, you can do some things with it, but not others: for example, you can't remove it. To remove a container, you need to stop it first (or kill it), or force its removal.
Note: You would get all this information from Docker itself if you played around with it. For example, if you try to remove a running container, you get this error:
Error response from daemon: You cannot remove a running container 258cd2edbed85bed23ab543312968bd893c1fbd9ba81de40366337f434daedff. Stop the container before attempting removal or force remove
I can't cover all possible combinations here. You would get a similar error if you tried removing a paused container. Just play with it, and you will get a clear picture of how it works.
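For example, to get rid of the running nginx container from above, you could stop it and then remove it, or force the removal in one step:
docker stop 258cd2edbed8 && docker rm 258cd2edbed8
docker rm -f 258cd2edbed8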

How to turn a docker container to a zombie

A few years ago, when I had just started playing with Docker, I remember some blog posts mentioning that if you don't handle your pid(1) process well, you will create a zombie Docker container. At the time I chose to just follow the suggestion and start using an init tool called dumb-init, and I never really saw a zombie container get created.
But I am still curious why it's a problem. If I remember correctly, docker stop xxx by default sends SIGTERM to the container's pid(1) process, and if that process cannot gracefully stop within 10s (the default), Docker force-kills it by sending SIGKILL to the pid(1) process. I also know that the pid(1) process is special in a Linux system: it can ignore the SIGKILL signal (link). But even if a process's PID inside the Docker container is 1, that is only because namespaces are used to scope its processes; on the host machine the same process has another PID, which can be killed by the kernel.
So my questions are:
Why can't the Docker engine just kill the container at the host kernel level, so that no matter what, the user can be sure the container is killed properly?
How can I create a zombie process in a Docker container? (If someone can share a Gist, that would be great!)
Not zombie containers, but zombie processes. Write this zombie.py:
#!/usr/bin/env python3
import subprocess
import time

# Start a short-lived child process, but never wait() on it
p = subprocess.Popen(['/bin/sleep', '1'])
# By now the sleep has exited, but nothing has reaped it
time.sleep(2)
# List processes: the sleep shows up in state "Z" (zombie)
subprocess.run(['/bin/ps', '-ewl'])
Write this Dockerfile:
FROM python:3
COPY zombie.py /
CMD ["/zombie.py"]
Build and run it:
chmod +x zombie.py
docker build -t zombie .
docker run --rm zombie
What happens here is that the /bin/sleep command runs to completion. The parent process needs to use the wait call to clean up after it, but it doesn't, so when it runs ps, you'll see a "Z" zombie process.
But wait, there's more! Say your process does carefully clean up after itself. In this specific example, subprocess.run() includes the required wait call, for instance, and you might change the Popen call to run. If that subprocess launches another subprocess, and it exits (or crashes) without waiting for it, the init process with pid 1 becomes the new parent process of the zombie. (It's worked this way for 40 years.) In a Docker container, though, the main container process runs with pid 1, and if it's not expecting "extra" child processes, you could wind up with stale zombie processes for the life of the container.
This leads to the occasional suggestion that a Docker container should always run some sort of "real" init process, maybe something as minimal as tini, so that something picks up after zombie processes and your actual container job doesn't need to worry about it.
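A minimal sketch of that suggestion, reusing the zombie image above (this assumes the Debian-based python:3 image, where tini is available as a distro package): install tini and make it PID 1 so it reaps any leftover zombies.
FROM python:3
# tini is a tiny init whose only job is to reap orphaned/zombie processes
RUN apt-get update \
 && apt-get install -y --no-install-recommends tini \
 && rm -rf /var/lib/apt/lists/*
COPY zombie.py /
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["/zombie.py"]
Alternatively, docker run --init asks Docker to inject its own bundled init without changing the image.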

Docker Process Management

I have a deployed application running inside a Docker container, which is, in effect, a websocket client that runs forever. On every deploy I rebuild the container and start it with docker run, using the command set in the Dockerfile.
Now, I've noticed a few times that the process occasionally dies without restarting. When running docker ps, I can see that the container is up and has been up for 2 weeks; however, the process running inside it has died without the host being any the wiser.
Do I need to go so far as to have a process manager inside the Docker container to manage the containerized process?
EDIT:
Dockerfile: https://github.com/DVG/catpen-edi/blob/master/Dockerfile
We've developed a process-manager tailor-made for Docker containers and have been using it with quite a bit of success to solve exactly the problem you describe. The best starting point is to take a look at chaperone-docker on github. The readme on the first page contains a quick link to a minimal base image as well as a fully configured LAMP stack so you can try it out and see what a fully-configured image would look like. It's open-source and fully documented.
This is a very interesting problem related to PID 1 and the fact that Docker replaces PID 1 with the command specified in CMD or ENTRYPOINT. What's happening is that the child process isn't automagically adopted by anything if the parent dies and it becomes an orphan (since there is no PID 1 in the sense of a traditional init system like you're used to). Here is some excellent reading to give you a few ideas. You may get some mileage out of their baseimage-docker image, which comes with their simplified init system ("my_init"), which will solve some of this problem for you. However, I would strongly caution you against automatically adopting the Phusion mindset for all of your containers, as there exists some ideological friction in that space. I can't recall any discussion on Docker's GitHub about a potential minimal init system to solve this problem, but I can't imagine it will be a problem forever. Good luck!
If you have two Ruby processes, it sounds like the child hasn't exited; the application has just stopped working. It's likely the EventMachine reactor is sitting in the background.
Does the EDI app really need to spawn the additional Ruby process? It only adds another layer between Docker and your app. Run the server directly with CMD [ "ruby", "boot.rb" ]. If you find the problem still occurs with a single process, then you will need to find out what is causing your app to hang.
When a process is running as PID 1 in Docker, it will need to handle the SIGINT and SIGTERM signals too.
# Trap ^C
Signal.trap("INT") {
  shut_down
  exit
}

# Trap `kill`
Signal.trap("TERM") {
  shut_down
  exit
}
Docker also has restart policies for when the container does actually die.
docker run --restart=always

no
    Do not automatically restart the container when it exits. This is the default.

on-failure[:max-retries]
    Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.

always
    Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.

unless-stopped
    Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put to a stopped state before.
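For example (myimage stands in for your application image), to restart the container automatically on crashes with at most five retries:
docker run -d --restart=on-failure:5 myimage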
