Docker container restarts instantly, despite the "-t" (timeout) option being set - docker

Here is my snippet:
docker restart -t 5 waitforit_
Then docker ps returns immediately:
status => up since 1s
How is this possible?
Any hint would be great,
thanks

I believe docker restart is equivalent to docker stop; docker start. The -t option isn’t a hard wait. Rather, it says that if the process doesn’t stop on its own after receiving SIGTERM, it will be sent SIGKILL (kill -9) after that many seconds.
If your process is well-behaved and exits promptly when it receives SIGTERM, then docker restart will in fact be pretty quick, regardless of the value you pass to -t.
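To illustrate, here is a minimal sketch of such a well-behaved process (the script name and the workload loop are made up for this example): it traps SIGTERM and exits at once, so the -t grace period never elapses.

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): exits promptly on SIGTERM, so
# `docker restart -t 5` returns well before the 5-second grace period.
term_handler() {
    echo "SIGTERM received, shutting down" >&2
    exit 0
}
trap term_handler TERM

# Simulated main workload: sleep in the background and wait on it,
# so the trap can fire even while the script is blocked.
while true; do
    sleep 1 &
    wait $!
done
```

The `sleep 1 & wait` pattern matters in POSIX sh: a trap is not delivered while a foreground command is running, but the wait builtin is interruptible, so the handler runs immediately.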

Related

How do I exit a command that runs inside a Docker container? [duplicate]

Having an issue with Docker at the moment; I'm using it to run an image that launches an ipython notebook on startup. I'm looking to make some edits to ipython notebook itself, so I need to close it after launch.
However, hitting CTRL+C in the terminal just inputs "^C" as a string. There seems to be no real way of using CTRL+C to actually close the ipython notebook instance.
Would anyone have any clues as to what can cause this, or know of any solutions for it?
Most likely the container image you use is not handling process signals properly.
If you are authoring the image, then change it as Roland Weber's answer suggests.
Otherwise try to run it with --init.
docker run -it --init ....
This fixes Ctrl+C for me.
Source: https://docs.docker.com/v17.09/engine/reference/run/#specify-an-init-process
The problem is that Ctrl-C sends a signal to the top-level process inside the container, but that process doesn't necessarily react as you would expect. The top-level process has ID 1 inside the container, which means that it doesn't get the default signal handlers that processes usually have. If the top-level process is a shell, then it can receive the signal through its own handler, but doesn't forward it to the command that is executed within the shell. Details are explained here. In both cases, the docker container acts as if it simply ignores Ctrl-C.
If you're building your own images, the solution is to run a minimal init process, such as tini or dumb-init, as the top-level process inside the container.
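If you are building the image yourself, a sketch of the tini approach might look like this Dockerfile (the base image and CMD are placeholders, and tini's package name and install path vary by distribution):

```dockerfile
FROM debian:bookworm-slim

# tini runs as PID 1, installs default signal handlers, and forwards
# signals (including the SIGINT from Ctrl-C) to its child process.
RUN apt-get update \
    && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["/usr/bin/tini", "--"]
# Placeholder: replace with your actual long-running command
CMD ["your-long-running-command"]
```

If you cannot change the image, `docker run --init` achieves the same effect by injecting Docker's bundled init as PID 1.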
This post proposes CTRL-Z as a workaround for sending the process to background and then killing the process by its process id:
Cannot kill Python script with Ctrl-C
Possible problems:
The program catches ctrl-c and does nothing, very unlikely.
There are background processes that are not managed correctly. Only the main process receives the signal and sub-processes hang. Very likely what's happening.
Proposed Solution:
Check the program's documentation on how it's properly started and stopped; Ctrl-C seems not to be the proper way.
Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch ctrl-c. This bash example should help: https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash
After catching ctrl-c invoke the proper shutdown method for ipython notebook.
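A sketch of such a docker-entrypoint.sh (the shutdown step is a placeholder; substitute the stop mechanism the program's documentation actually prescribes):

```shell
#!/bin/sh
# docker-entrypoint.sh (sketch): starts the real program in the
# background so this shell can catch Ctrl-C and run a clean shutdown.
shutdown() {
    echo "signal caught, shutting down" >&2
    # Placeholder: replace with the program's documented stop command
    # (e.g. a notebook shutdown call). Here we just TERM the child.
    kill -TERM "$child" 2>/dev/null
    wait "$child"
    exit 0
}
trap shutdown INT TERM

"$@" &            # run the real command, e.g. the notebook server
child=$!
wait "$child"
```

Running the real command in the background is what makes the trap responsive: while the shell sits in wait, a caught SIGINT or SIGTERM interrupts the wait and runs the handler.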
From this post on the Docker message boards:
Open a new shell and execute
$ docker ps # get the id of the running container
$ docker stop <container> # kill it (gracefully)
This worked well for me. CTRL-Z, CTRL-\, etc. only came up as strings, but this killed the Docker container and returned the tab to terminal input.
@maybeg's answer already explains very well why this might be happening.
Regarding stopping the unresponsive container, another solution is to simply issue a docker stop <container-id> in another terminal. As opposed to CTRL-C, docker stop does not send a SIGINT but a SIGTERM signal, to which the process might react differently.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop a running container by sending SIGTERM and then SIGKILL after a grace period
If that fails, use docker kill <container-id> which sends a SIGKILL immediately.
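As an illustration of that difference, nothing stops a process from handling SIGINT and SIGTERM differently; this throwaway script acknowledges Ctrl-C but only exits on SIGTERM:

```shell
#!/bin/sh
# Illustrative only: SIGINT is acknowledged but ignored, while
# SIGTERM triggers a clean exit -- so Ctrl-C appears to do nothing,
# yet `docker stop` (SIGTERM) succeeds.
trap 'echo "got SIGINT, carrying on" >&2' INT
trap 'echo "got SIGTERM, exiting" >&2; exit 0' TERM

while true; do
    sleep 1 &
    wait $!
done
```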

Stop Synology notification "Docker container stopped unexpectedly"

I have a container with one Node.js script which is launched with CMD npm start. The script runs, does some work, and exits. The node process exits because no work is pending. The npm start exits successfully. The container then stops.
I run this container on a Synology NAS from a cronjob via docker start xxxx. When it finishes, I get an alert Docker container xxxx stopped unexpectedly from their alert system. docker container ls -a shows its status as Exited (0) 5 hours ago. If I monitor docker events I see the event die with exitCode=0
It seems like I need to signal to the system that the exit is expected by producing a stop event instead of a die event. Is that something I can do in my image or on the docker start command line?
The Synology Docker package will generate the notification Docker container xxxx stopped unexpectedly when the following two conditions are met:
The container exits with a die docker event (you can see this happen by monitoring docker events when the container exits). This is any case where the main process in the container exits on its own. The exitCode does not matter.
The container is considered "enabled" by the Synology Docker GUI. This information is stored in /var/packages/Docker/etc/container_name.config:
{
   "enabled" : true,
   "exporting" : false,
   "id" : "dbee87466fb70ea26cd9845fd79af16d793dc64d9453e4eba43430594ab4fa9b",
   "image" : "busybox",
   "is_ddsm" : false,
   "is_package" : false,
   "name" : "musing_cori",
   "shortcut" : {
      "enable_shortcut" : false,
      "enable_status_page" : false,
      "enable_web_page" : false,
      "web_page_url" : ""
   }
}
How to enable/disable containers with Synology's Docker GUI
Containers are automatically enabled if you start them from the GUI. All of these things will cause the container to become "enabled" and start notifying on exit:
Sliding the "toggle switch" in the container view to "on"
Using Action start on the container.
Opening the container detail panel and clicking "start"
This is probably how your container ended up "enabled" and why it is now notifying whenever it exits. Containers created with docker run -d ... do not start out enabled, and will not initially warn on exit. This is probably why things like docker run -it --rm busybox and other ephemeral containers do not cause notifications.
Containers can be disabled if you stop them while they are running. There appears to be no way to disable a container which is currently stopped. So to disable a container you must start it and then stop it before it exits on its own:
Slide the toggle switch on then off as soon as it will let you.
Use Action start and then stop as soon as it will let you (this is hard because of the extra click if your container is very short-lived).
Open the container detail panel, click "start", and then, as soon as "stop" is not grayed out, click "stop".
Check your work by looking at /var/packages/Docker/etc/container_name.config.
Another option for stopping/starting the container without the notifications is to do it via the Synology Web API.
To stop a container:
synowebapi --exec api=SYNO.Docker.Container version=1 method=stop name="CONTAINER_NAME"
Then to restart it:
synowebapi --exec api=SYNO.Docker.Container version=1 method=start name="CONTAINER_NAME"
Notes:
The commands need to be run as root
You will get a warning [Line 255] Not a json value: CONTAINER_NAME but the commands work and give a response message indicating "success" : true
I don't really have any more information on it as I stumbled across it in a reddit post and there's not a lot to back it up, but it's working for me on DSM 7.1.1-42962 and I'm using it in a scheduled task.
Source and referenced links:
A Reddit post with the commands
Linked GitHub page showing the commands in use
Linked Synology Developer's Guide for DSM Login Web API
I'm not familiar with Synology so I'm not sure which component is raising the "alert" you mention, but I guess this is just a warning and not an error, because:
an exit status of 0 is perfectly fine from a POSIX perspective;
a "die" docker event also seems quite common, e.g. running docker events then docker run --rm -it debian bash -c "echo Hello" yields the same event (while a "kill" event would be more dubious).
So maybe you get one such warning just because Synology assumes a container should be running for a long time?
Anyway, here are a couple of remarks related to your question:
Is the image/container you run really ephemeral (regarding the data the container handles)? Because if that is the case, instead of doing docker start container_name, you might prefer using docker run --rm -i image_name … or docker run --rm -d -i image_name …. (In this case, thanks to --rm, container removal is automatically triggered when the container stops.)
Even if the setup you mention sounds quite reasonable for a cron job (namely, the fact that your container stops early and automatically), you might be interested in this other SO answer that gives further details on how to catch the signals raised by docker stop etc.

Neither "docker stop", "docker kill" nor "docker rm -f" works

Trying to stop a container from this image using any of the mentioned commands results in Docker waiting indefinitely. The container can still be observed in the docker ps output.
Sorry for a newbie question, but how does one stop containers properly?
This container was first run according to the instructions on hub.docker.com, halted by Ctrl+C, and then started again by docker start <container-name>. After it was started, it never worked as expected, though.
Your test worked for me:
→ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                    NAMES
853e36b8a952   jleight/opentsdb   "/usr/bin/supervisord"   9 minutes ago   Up 9 minutes   0.0.0.0:4242->4242/tcp   fervent_hypatia
→ docker stop fervent_hypatia
fervent_hypatia
→ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                    NAMES
It took a while, but I think that is because the Docker image uses a supervisor process, so SIGTERM (which is what docker stop sends first) doesn't kill the container; the SIGKILL that is sent by default after 10 seconds should (my wait time was ~10 seconds).
Just in case your default is messed up for some reason, try indicating the timeout explicitly:
docker stop --time=2 <container-name>
docker stop <container-name> is the proper way to stop your container. It's possible there is something going on inside; you could try using docker logs <container-name> to get more information about what's running inside.
This probably isn't the best way, but if nothing else works, restarting the Docker daemon would eventually do the trick.

Sending sigstop and sigcont to docker containers

I am attempting to send SIGSTOP, and then later, SIGKILL to a container. This line leads me to believe that it will behave as I expect: https://github.com/docker/docker/issues/5948#issuecomment-43684471
However, it is going ahead and actually removing the containers. The commands are:
docker kill -s STOP container
docker kill -s CONT container
(Equivalent through the dockerode API I am using, but I just went to the command line when that wasn't working.) Are there some options I'm missing?
I think you're actually looking for the commands docker pause and docker unpause. Using the STOP signal is likely to be error-prone and dependent on how the process handles the signal.
I guess what's happening in this case is that Docker thinks the process has terminated and stops the container (it shouldn't be removed however, you can restart it with docker start).

Which one should I use? docker kill or docker stop?

Will docker stop fail if processes running inside the container fail to stop? If I use docker kill, can unsaved data inside the container be preserved? Is docker stop time-consuming compared to docker kill? I want to shut down the container without losing any data (and without high latency to complete the kill or stop process).
From the documentation:
docker stop: Stop a running container (send SIGTERM, and then SIGKILL
after grace period) [...] The main process inside the container will
receive SIGTERM, and after a grace period, SIGKILL. [emphasis mine]
docker kill: Kill a running container (send SIGKILL, or specified
signal) [...] The main process inside the container will be sent
SIGKILL, or any signal specified with option --signal. [emphasis mine]
You can get more info from this post: https://superuser.com/questions/756999/whats-the-difference-between-docker-stop-and-docker-kill
Docker stop:
When you issue a docker stop command, a signal is sent to the main process inside that container. In the case of docker stop, it is SIGTERM, short for "terminate signal": a message telling the process to shut itself down on its own time.
SIGTERM is used any time you want to stop a process inside your container and shut the container down while giving that process a little time to shut itself down and do some cleanup.
Many programming languages let you listen for these signals in your code base; as soon as you get the signal, you can attempt a bit of cleanup, such as saving a file or emitting a message.
The docker kill command, on the other hand, issues a SIGKILL, or "kill signal", to the primary running process inside the container. Kill essentially means "you have to shut down right now, and you do not get to do any additional work."
So ideally we always stop a container with the docker stop command to give the running process inside it a little time to shut itself down. If it feels like the container has locked up and is not responding to docker stop, we can issue docker kill instead.
One little oddity about docker stop: if the container does not stop on its own within 10 seconds, Docker automatically falls back to issuing docker kill.
So docker stop is us being nice, but the process only gets 10 seconds to actually shut down.
A good example is the ping command.
sudo docker run busybox ping google.com
Now if you stop the container with docker stop container_id, you will see it takes 10 seconds before shutting down, because the ping command does not properly respond to a SIGTERM message. In other words, ping doesn't really have the ability to say "oh yeah, I understand you want me to shut down."
So after those 10 seconds, the kill signal is eventually sent, telling ping it is done and must shut itself down.
But if you use docker kill container_id, you will see it is instantly dead.
You should use docker stop since it stops the container gracefully, like shutting down your laptop, instead of killing it, like forcibly unplugging the laptop from its battery.
But Docker will force the shutdown (kill the processes) if it takes more than 10 seconds to stop them gracefully.
docker stop sends SIGTERM (the terminate signal) to the process, which then has 10 seconds to clean up, for example saving files or emitting messages.
Use docker kill when the container is locked up and not responding.
