Stop a running Docker container by sending SIGTERM - docker

I have a very very simple Go app listening on port 8080
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/plain") // headers must be set before the first Write
    w.WriteHeader(200)
    w.Write([]byte("Hello World!"))
})
log.Fatal(http.ListenAndServe(":8080", http.DefaultServeMux))
I install it in a Docker container and start it like so:
FROM golang:alpine
ADD . /go/src/github.com/myuser/myapp
RUN go install github.com/myuser/myapp
ENTRYPOINT ["/go/bin/myapp"]
EXPOSE 8080
I then run the container using docker run:
docker run --publish 8080:8080 first-app
I expect that, like most programs, I can send a SIGTERM to the process running docker run and this will cause the container to stop running. I observe that sending SIGTERM has no effect, and instead I need to use a command like docker kill or docker stop.
Is this intended behavior? I've asked in the forums and on IRC and gotten no answer.

Any process run with docker must handle signals itself.
Alternatively use the --init flag to run the tini init as PID 1
The sh shell can become the PID 1 process depending on how you specify a command (CMD).
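For example, with the image from the question, delegating signal handling to the built-in tini would look like this (first-app is the tag used later in the question):
docker run --init --publish 8080:8080 first-app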
Detail
A SIGTERM is propagated by the docker run command to the Docker daemon by default, but it will not take effect unless the signal is specifically handled in the main process being run by Docker.
The first process you run in a container gets PID 1 in that container's context. This is treated as a special process by the Linux kernel: it will not be sent a signal unless a handler is installed for that signal. It is also PID 1's job to forward signals on to its child processes.
docker run and the other docker commands are API clients for the Remote API hosted by the Docker daemon. The Docker daemon runs as a separate process and is the parent of the processes you run inside a container's context. This means there is no direct sending of signals between run and the daemon in the standard Unix manner.
The docker run and docker attach commands have a --sig-proxy flag that defaults to true, so signals are proxied to the container's main process. You can turn this off if you want.
docker exec does not proxy signals.
In a Dockerfile, be careful to use the "exec form" when specifying CMD and ENTRYPOINT defaults if you don't want sh to become the PID 1 process (Kevin Burke):
CMD [ "executable", "param1", "param2" ]
Signal Handling Go Example
Using the sample Go code here: https://gobyexample.com/signals
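The program there boils down to something like this (a trimmed-down sketch of the gobyexample code; in the output below it is compiled as /main inside the gosignal image):
package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    sigs := make(chan os.Signal, 1)
    // Registering a handler is what allows PID 1 to receive these signals at all.
    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
    fmt.Println("awaiting signal")
    sig := <-sigs
    fmt.Println(sig) // prints "terminated" for SIGTERM
    fmt.Println("exiting")
}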
Run both a regular process that doesn't handle signals and the Go daemon that traps signals, and put them both in the background. I'm using sleep as it's simple and doesn't install any signal handlers.
$ docker run busybox sleep 6000 &
$ docker run gosignal &
With a ps tool that has a "tree" view, you can see the two distinct process trees: one for the docker run processes under sshd, and the other for the actual container processes under the Docker daemon.
$ pstree -p
init(1)-+-VBoxService(1287)
|-docker(1356)---docker-containe(1369)-+-docker-containe(1511)---gitlab-ci-multi(1520)
| |-docker-containe(4069)---sleep(4078)
| `-docker-containe(4638)---main(4649)
`-sshd(1307)---sshd(1565)---sshd(1567)---sh(1568)-+-docker(4060)
|-docker(4632)
`-pstree(4671)
The details of the Docker host's processes:
$ ps -ef | grep "docker r\|sleep\|main"
docker 4060 1568 0 02:57 pts/0 00:00:00 docker run busybox sleep 6000
root 4078 4069 0 02:58 ? 00:00:00 sleep 6000
docker 4632 1568 0 03:10 pts/0 00:00:00 docker run gosignal
root 4649 4638 0 03:10 ? 00:00:00 /main
Killing
I can't kill the docker run busybox sleep command:
$ kill 4060
$ ps -ef | grep 4060
docker 4060 1568 0 02:57 pts/0 00:00:00 docker run busybox sleep 6000
I can kill the docker run gosignal command that has the trap handler:
$ kill 4632
$
terminated
exiting
[2]+ Done docker run gosignal
Signals via docker exec
If I docker exec a new sleep process in the already-running sleep container, I can send a ctrl-c and interrupt the docker exec itself, but that doesn't forward to the actual process:
$ docker exec 30b6652cfc04 sleep 600
^C
$ docker exec 30b6652cfc04 ps -ef
PID USER TIME COMMAND
1 root 0:00 sleep 6000 <- original
97 root 0:00 sleep 600 <- execed still running
102 root 0:00 ps -ef

So there are two factors at play here:
1) If you specify a string for an entrypoint, like this:
ENTRYPOINT /go/bin/myapp
Docker runs the command with /bin/sh -c 'command'. This intermediate shell gets the SIGTERM, but doesn't send it on to the running server app.
To avoid the intermediate layer, specify your entrypoint as an array of strings:
ENTRYPOINT ["/go/bin/myapp"]
2) I built the app I was trying to run with the following command:
docker build -t first-app .
This tagged the image with the name first-app. Unfortunately, when I tried to rebuild/rerun it, I ran:
docker build .
which didn't update the first-app tag, so my changes weren't being applied.
Once I did both of those things, I was able to kill the process with ctrl+c, and bring down the running container.

A very comprehensive description of this problem and the solutions can be found here:
https://vsupalov.com/docker-compose-stop-slow
In my case, my app expects a SIGTERM for graceful shutdown, but it didn't receive it because the process was started by a bash script called from the Dockerfile in this form: ENTRYPOINT ["/path/to/script.sh"],
so the script didn't propagate the SIGTERM to the app.
The solution was to use exec from the script to run the command that starts the app:
e.g. exec java -jar ...
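A minimal sketch of such a script (the java command and jar path are just stand-ins for whatever starts your app):
#!/bin/bash
# ...any setup work runs in the shell as usual...
# exec replaces the shell with the app, so the app takes over PID 1
# and receives the SIGTERM from docker stop directly
exec java -jar /path/to/app.jar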

Related

CRON Job to ping running container port and restart server in case of error

I have a Python script running inside a Docker container, with a REST API inside the container (accessible on port 8085).
Things work great (http://{host_ip_address}:8085), but once in a while the container stops responding to HTTP requests; restarting the container solves the issue.
I'd like to set up a CRON job, within the host running the container, to check that http://localhost:8085 is responding, and if not, restart the container.
It is OK for me to restart all running containers on the machine, so the failure command can be:
docker restart $(docker ps -a -q)
How can I achieve this?
Answering my own question:
Docker has the HEALTHCHECK instruction to poll the container via any command we want. Unfortunately, at the time of writing, the container will not be restarted automatically if it becomes unhealthy.
Here is my HEALTHCHECK instruction in Dockerfile (app is exposing port 8085):
HEALTHCHECK --interval=1m --timeout=30s --start-period=45s \
CMD curl -f --retry 6 --max-time 5 --retry-delay 10 --retry-max-time 60 "http://localhost:8085" || bash -c 'kill -s 15 -1 && (sleep 10; kill -s 9 -1)'
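Note that the kill trick above only leads to a restart if the container was started with a restart policy (e.g. docker run --restart unless-stopped ...): killing every process makes the container exit, and the policy brings it back up. If you prefer the host-side cron job from the question, a sketch could look like this (file location and schedule are assumptions):
# /etc/cron.d/container-watchdog -- runs on the host every minute
* * * * * root curl -fsS --max-time 5 http://localhost:8085 >/dev/null || docker restart $(docker ps -a -q)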

Enable systemctl in Docker container

I am trying to create my own Docker container with a custom service which I created for my work. This is my service file:
[1/1] /etc/systemd/system/qsinavAI.service
[Unit]
Description=uWSGI instance to serve Qsinav AI
After=network.target
[Service]
User=www-data
Group=www-data
WorkingDirectory=/root/AI/
Environment="PATH=/root/AI/bin"
ExecStart=/root/AI/bin/uwsgi --ini ai.ini
[Install]
WantedBy=multi-user.target
and when I try to run this service I get this error:
System has not been booted with systemd as init system (PID 1). Can't
operate. Failed to connect to bus: Host is down
I searched a lot to find a solution but could not. How can I enable systemctl in Docker?
This is the command that I am using to run the container:
docker run -dt -p 5000:5000 --name AIPython2 --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro --cap-add SYS_ADMIN last_python_image
If your application is only ever run inside a container, then you should create a docker-entrypoint.sh script with an "exec" at the end, so that your application ends up running as PID 1 in the container. That way cloud systems can see whether the application is alive, and they can send a SIGTERM to stop it.
#! /bin/bash
cd /root/AI
PATH=/root/AI/bin
exec /root/AI/bin/uwsgi --ini ai.ini
If your application shall also be able to run in a systemd environment outside of a container, then you can choose to reuse the systemd descriptor. It requires an init daemon on PID 1 and a service manager to check the "enabled" services. One example would be the systemctl-docker-replacement script.
Docker containers should have an "entrypoint" command that runs in the foreground to keep the container running. The basic idea behind a container is that it runs as long as the root process that started it keeps running. A systemctl start qsinavAI.service command would succeed, but once that command exits, the container stops.
By design, containers started in detached mode exit when the root process used to run the container exits, ...
See the reference about this, and about starting an nginx service, in the official documentation.
So instead of trying to run your application as a service, you should have an entrypoint statement at the end of your Dockerfile. Then when you start this container with docker run, you can specify -d to run it in "detached" mode.
Example, taking the command from ExecStart and assuming it runs in the foreground:
ENTRYPOINT ["/root/AI/bin/uwsgi", "--ini", "ai.ini"]
Example of how to create an image with systemd that boots like a real environment. A Dockerfile is required:
FROM ubuntu:22.04
RUN echo 'root:root' | chpasswd
RUN printf '#!/bin/sh\nexit 0' > /usr/sbin/policy-rc.d
RUN apt-get update
RUN apt-get install -y systemd systemd-sysv dbus dbus-user-session
ENTRYPOINT ["/sbin/init"]
/sbin/init is important: it boots systemd as PID 1, which enables systemctl.
Then build the image and run it:
docker build -t testimage -f Dockerfile .
docker run -it --privileged --cap-add=ALL testimage

'docker stop' for crond times out

I'm trying to understand why my Docker container does not stop gracefully and just times out. The container is running crond:
FROM alpine:latest
ADD crontab /etc/crontabs/root
RUN chmod 0644 /etc/crontabs/root
CMD ["crond", "-f"]
And the crontab file is:
* * * * * echo 'Working'
# this empty line required by cron
Built with docker build . -t periodic:latest
And run with docker run --rm --name periodic periodic:latest
This is all good, but when I try to docker stop periodic from another terminal, it doesn't stop gracefully; the timeout kicks in and it is killed abruptly. It's like crond isn't responding to the SIGTERM.
crond is definitely PID 1
/ # ps
PID USER TIME COMMAND
1 root 0:00 crond -f
6 root 0:00 ash
11 root 0:00 ps
However, if I do this:
docker run -it --rm --name shell alpine:latest ash and
docker exec -it shell crond -f in another terminal, I can kill crond from the first shell with SIGTERM so I know it can be stopped with SIGTERM.
Thanks for any help.
Adding an init process to the container (init: true in docker-compose.yml) solved the problem.
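In docker-compose terms, that is (a minimal sketch; the service name comes from the build commands above):
version: "3.7"
services:
  periodic:
    build: .
    init: true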
EDIT: I read this https://blog.thesparktree.com/cron-in-docker to understand the issues and solutions around running cron in Docker. From this article:
"Finally, as you’ve been playing around, you may have noticed that it’s difficult to kill the container running cron. You may have had to use docker kill or docker-compose kill to terminate the container, rather than using ctrl + C or docker stop.
Unfortunately, it seems like SIGINT is not always correctly handled by cron implementations when running in the foreground.
After researching a couple of alternatives, the only solution that seemed to work was using a process supervisor (like tini or s6-overlay). Since tini was merged into Docker 1.13, technically, you can use it transparently by passing --init to your docker run command. In practice you often can’t because your cluster manager doesn’t support it."
Since my original post and answer, I've migrated to Kubernetes, so init in docker-compose.yml won't work. My container is based on Debian Buster, so I've now installed tini in the Dockerfile, and changed the ENTRYPOINT to ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"] (my entrypoint.sh finally does exec cron -f)
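For anyone on a similar Debian-based image, a sketch of that Dockerfile (Debian packages tini, so no manual download is needed; entrypoint.sh is assumed to end with exec cron -f):
FROM debian:buster-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends cron tini && \
    rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# tini runs as PID 1, reaps children, and forwards SIGTERM to the exec'd cron
ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"]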
The key is that PID 1 in a container ignores signals it hasn't installed handlers for, so you can't simply kill it; and if it does die, the container stops (or is removed, if it was launched with --rm).
That's why, if you run -it ... ash, the shell has PID 1 and you can kill the other processes.
If you want your cron to be killable without stopping/killing the container, just launch another process as the entrypoint:
Launch cron after the Docker entrypoint. For example, keep the container alive with tail -F /dev/null as the CMD, then launch cron on top of it (docker run -d yourdocker followed by something like service cron start).
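One way to read that suggestion, assuming a Debian-based image with the cron package installed:
# sh/tail keeps the container alive as PID 1; cron runs as an ordinary child
docker run -d --name mycron yourimage sh -c "service cron start && tail -f /dev/null"
# cron can now be stopped without taking the container down
docker exec mycron service cron stop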

is there a way to know if a docker container is restarted within another container?

I need to know when another container has been restarted, so that I can run some command in my container. Is there a way to be aware of a container restarting from within another container?
You didn't give any specifics, so I assume both containers are running on the same host. In this case, a simple solution is to get the restart event from the Docker daemon running on the host and then send a signal to the other container. docker events can easily do that.
Run this on the host, using for example the docker-compose service name to filter the events notified for the restarting container:
docker events | \
grep --line-buffered 'container restart.*com.docker.compose.service=<compose_service_name>' | \
while read ; do docker kill --signal=SIGUSR1 my_ubuntu ; done
The docker kill --signal=SIGUSR1 my_ubuntu command sends the USR1 signal to the other container, where the command needs to be run. To test it, run ubuntu with a sigtrap for USR1:
docker run --rm --name my_ubuntu -it ubuntu /bin/bash \
-c "trap 'echo signal received' USR1; \
while :; do echo loop; sleep 10 & wait ${!}; done;"
Now restart the container, and the signal handler will execute the echo inside the other container; it can be replaced with the real command.
docker events is part of the Docker REST API (see "Monitor Docker's events"), so if the other container can connect to the Docker daemon running on the host, it can get the restart notification directly.
Hope it helps.

Docker ssh, back to container showing unexpected results

I'm studying the Docker documentation, but I'm having a hard time understanding the concept of creating a container, exiting it, and then getting back in.
I created a container with
docker run -ti ubuntu /bin/bash
Then, it starts the container and I can run commands. docker ps gives me
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e37da213a37 ubuntu "/bin/bash" About a minute ago Up About a minute keen_sammet
The issue is that after I exit the container I can't get back into it.
I tried docker attach, which gives me Error: No such container, and I tried docker exec -ti <container> /bin/bash, which gives me the same message: Error: No such container.
How do I run and ssh back to the container?
When you exit the bash process, the container exits (in general, a container will exit when the foreground process exits). The error message you are seeing is accurately describing the situation (the container is no longer running).
If you want to be able to docker exec into a container, you will want to run some sort of persistent command. For example, if you were to run:
docker run -ti -d --name mycontainer ubuntu bash
This would start a "detached" container. That means you've started bash, but it's just hanging around doing nothing. You could use docker exec to start a new process in this container:
$ docker exec -it mycontainer ps -fe
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 16:28 pts/0 00:00:00 bash
root 17 0 0 16:28 pts/1 00:00:00 ps -fe
Or:
$ docker exec -it mycontainer bash
There's really no reason to start bash as the main process in this case, since you're not interacting with it. You can just as easily run...
docker run -ti -d --name mycontainer ubuntu sleep inf
...and the behavior would be the same.
The most common use case for all of this is when your docker run command starts up some sort of persistent service (like a web server, or a database server, etc), and then you use docker exec to perform diagnostic or maintenance tasks.
The docker attach command will re-connect you with the primary console of a detached container. In other words, if we return to the initial example:
docker run -ti -d --name mycontainer ubuntu bash
You could connect to that bash process (instead of starting a new one) by running:
docker attach mycontainer
At this point, exit would cause the container to exit.
First, you don't ssh to a Docker container (unless you have an sshd process running in that container). But you can execute a command with docker exec -ti mycontainer bash -l.
You can only exec a command in a running container, though. If the container has already exited, you must use another approach: create an image from the container and run a new one.
Here is an example. First I create a container, create a file inside it, and then exit it.
$ docker run -ti debian:9-slim bash -l
root@09f889e80153:/# echo aaaaaaaaaa > /zzz
root@09f889e80153:/# cat /zzz
aaaaaaaaaa
root@09f889e80153:/# exit
logout
As you can see, the container has exited (Exited (0) 24 seconds ago):
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
09f889e80153 debian:9-slim "bash -l" 45 seconds ago Exited (0) 24 seconds ago thirsty_hodgkin
So I create a new image with docker commit
$ docker commit 09f889e80153 bla
sha256:6ceb88470326d2da4741099c144a11a00e7eb1f86310cfa745e8d3441ac9639e
So I can run a new container that contains the previous container's content:
$ docker run -ti bla bash -l
root@479a0af3d197:/# cat zzz
aaaaaaaaaa
