When I delete a GCE VM I need my docker container to get stopped gracefully before the VM shuts down.
I am using Compute Engine's Container-Optimized OS (COS) and would expect my containers to be managed properly, but this is not what I am experiencing.
I tried a shutdown-script calling docker stop $(docker ps -a -q) but it doesn't make a difference at all. I can see it runs, but it seems the container is already gone by then.
I've tried trapping SIGTERM in my application. In the VM it doesn't trap the signal, but on my local machine it does.
I am a bit lost and don't know what else to try. Any idea?
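For reference, a minimal sketch of such a shutdown-script. The 30-second grace period is an assumption; it has to fit inside GCE's shutdown window (roughly 90 seconds on a normal VM deletion):

```shell
#!/bin/bash
# Sketch of a COS shutdown-script (assumptions: Docker CLI on PATH,
# 30-second per-container grace period).

stop_all_containers() {
  local ids
  ids=$(docker ps -q)   # running containers only; "-a" would include stopped ones
  if [ -n "$ids" ]; then
    # docker stop sends SIGTERM, waits up to --time seconds, then SIGKILLs
    docker stop --time 30 $ids
  fi
}

# Only attempt this where Docker is actually present.
command -v docker >/dev/null 2>&1 && stop_all_containers || true
```

Separately, if SIGTERM never reaches the application inside the container, check that it runs as PID 1: a shell-form ENTRYPOINT (ENTRYPOINT ./app) wraps the process in /bin/sh, which does not forward signals, while the exec form (ENTRYPOINT ["./app"]) avoids this.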
Take a look at "Stopping Docker Containers Gracefully" and also "Gracefully Stopping Docker Containers".
I am running a Docker container that has access to the host's Docker socket. It uses this privilege to start new, short-lived containers on its own, using docker run -it some_image some_command. Though there's no strict link here, we can call the original container the parent, and the other, short-lived containers children. What I'd like to happen is for the child containers to die if the parent dies, for example, by sending a SIGKILL to the process started by docker run -it ....
The context is here to avoid the XY problem, but the same question is relevant in a non-nested scenario - if I run docker run -it image sleep 300, then kill this process, sleep 300 will carry on and the container will stay alive. What I want is for the docker run command to be "bound together" with the command it runs. I am looking to do this without access to the host, and without using docker compose. Is there any such way?
The options listed under the docker run docs don't seem to have anything like this. My best working approach would be to set up a trap for SIGTERM, run the children containers in detached mode (so I get their container IDs), and then docker kill them in the trap, but this seems more cumbersome than it should be (and I'm unsure if docker kill can be trapped).
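For what it's worth, the trap-based approach described in the last paragraph can be sketched like this (some_image and some_command are the placeholders from above):

```shell
#!/bin/bash
# Sketch of the trap idea from the question: run each child detached so we
# capture its ID, and kill all recorded children when this script receives
# SIGTERM/SIGINT or exits.

CHILDREN=()

cleanup() {
  if [ "${#CHILDREN[@]}" -gt 0 ]; then
    docker kill "${CHILDREN[@]}"   # SIGKILL; use "docker stop" to be graceful
  fi
}
trap cleanup TERM INT EXIT

run_child() {
  # -d instead of -it: docker run prints the container ID and returns
  local id
  id=$(docker run -d "$@") || return 1
  CHILDREN+=("$id")
}

# run_child some_image some_command
```

Note that no trap can fire on SIGKILL; a trap only helps for catchable signals such as SIGTERM and SIGINT, so killing the parent with kill -9 would still leak the children.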
I have set live-restore on most Docker hosts to support smooth minor version upgrades, but the documentation states that this feature is not suitable for major version upgrades. So the question is: how do I shut down dockerd and all containers, as if live-restore were not set?
Of course I can loop over all containers to shut them down one-by-one, but I would guess that dockerd uses a different procedure. Surely it can avoid starting new containers once it has received the signal to shutdown. The external loop cannot. Not to mention that the next Docker version might introduce new features/integrations that have to be taken into account. There has to be some "docker-style" way to do this.
I guess I figured it out myself:
1. Edit /etc/docker/daemon.json to set live-restore to false.
2. Run "systemctl reload docker" or send a SIGHUP to dockerd.
3. Run "systemctl stop docker docker.socket" or similar to shut down Docker as usual.
Correct me if I am wrong.
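Sketched as a script, with the host-specific commands left as comments. The jq edit is just one way to toggle the key (it assumes jq is installed); any edit that keeps the JSON valid works:

```shell
#!/bin/bash
# Sketch of the steps above for a systemd-managed Docker host.

disable_live_restore() {   # usage: disable_live_restore /etc/docker/daemon.json
  local cfg="$1"
  cp "$cfg" "$cfg.bak"                              # keep a backup
  jq '."live-restore" = false' "$cfg.bak" > "$cfg"
}

# disable_live_restore /etc/docker/daemon.json
# systemctl reload docker              # or: kill -HUP "$(pidof dockerd)"
# systemctl stop docker docker.socket  # now stops the containers as well
```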
I would like to have some sort of systemctl stopall docker that would stop the daemon and the containers even when live-restore is active. It certainly would be useful in some situations. Unfortunately there does not appear to be a way to opt in to non-live-restore behavior temporarily. Instead I use:
docker ps -q | xargs docker kill && systemctl stop docker
There is a very small window of time between killing all the containers and stopping Docker during which a new container could be started, so it's not perfect.
I am new to Docker containers, but we have containers being deployed, and due to some internal application network bugs the process running in the container hangs and the container is never terminated. While we debug this issue, I would like a way to find all such containers and set up a cron job to periodically check for and kill them.
So how would I determine from "docker ps -a" which containers should be dropped, and how would I go about it? Any ideas? We are eventually moving to Kubernetes, which will help with these issues.
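Until the underlying bug is fixed, one workable approach is to let Docker detect the hang itself: if you can add a HEALTHCHECK to the image, hung containers show up as "unhealthy" in docker ps, and a cron job can reap them. A sketch (the script path in the crontab line is an assumption):

```shell
#!/bin/bash
# Sketch of a reaper for hung containers. Assumption: the image has a
# HEALTHCHECK, so Docker marks a hung process as "unhealthy"; "docker ps"
# output alone cannot distinguish a hung process from a busy one.

kill_unhealthy() {
  local ids
  ids=$(docker ps -q --filter health=unhealthy)
  if [ -n "$ids" ]; then
    echo "killing unhealthy containers: $ids"
    docker rm -f $ids    # SIGKILL + remove; "docker stop" would be graceful
  fi
}

command -v docker >/dev/null 2>&1 && kill_unhealthy || true

# Example crontab entry (every 5 minutes):
# */5 * * * * /usr/local/bin/kill_unhealthy.sh
```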
Docker already has a command to clean up the Docker environment; you can run it manually or set up a job to run the following command:
$ docker system prune
Remove all unused containers, networks, images (both dangling and
unreferenced), and optionally, volumes.
Refer to the documentation for more details on advanced usage.
I see that the Docker daemon uses a lot of CPU. As I understand it, the kubelet and dockerd communicate with each other to maintain the state of the cluster. But does dockerd for some reason do extra runtime work after containers are started that would spike CPU? To gather information to report to the kubelet?
But does dockerd for some reason do extra runtime work after containers are started that would spike CPU?
Not really unless you have another container or process constantly calling the docker API or running a docker command from the CLI.
The kubelet talks to the Docker daemon through the dockershim to do everything it needs to run containers, so I would check whether the kubelet is doing some extra work, maybe scheduling and then evicting/stopping containers.
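One way to check whether something is hammering the daemon API, assuming a systemd host: enable debug logging (a documented daemon.json option that dockerd re-reads on SIGHUP) and watch the request log:

```shell
# /etc/docker/daemon.json (fragment, not a script):
#   { "debug": true }

# Ask dockerd to re-read its config without a restart:
# kill -HUP "$(pidof dockerd)"

# Follow the daemon log; API requests now show up at debug level:
# journalctl -u docker.service -f
```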
I have some containers running with the Docker server up. Now if the daemon crashes for some reason (I killed it using kill -9 $Pid_of_daemon to reproduce this behavior) and I then start the Docker server again, why does it kill the old running containers? The behavior I want is for it to carry on even if there are already running containers. The only reason I have found is that when the daemon crashes, it loses its stdin/stdout pipes to the containers, so it can no longer attach to them. But if my container does not need stdin, stdout or stderr, why does the daemon kill it during startup? Please help.
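For what it's worth, this behavior is configurable: the live-restore daemon option (the same one discussed in the earlier thread) keeps containers running while dockerd is down and re-attaches them when it comes back, instead of killing them on startup. A config sketch:

```shell
# /etc/docker/daemon.json (fragment, not a script):
#   {
#     "live-restore": true
#   }

# Restart the daemon; running containers survive and are re-attached:
# systemctl restart docker
```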