Unable to kill a Docker container that restarts every 10 seconds - docker

I have problems with a heretic Docker container... I tried to follow this tutorial, trying to build an OpenVPN server on my new Raspberry Pi (the first one in my life)... and I think I did something really wrong... I ran it with the restart policy "always".
This container hits an error each time it tries to run:
standard_init_linux.go:211: exec user process caused "exec format error"
It tries to run every 10 seconds, lives for about 3 seconds, and always comes back with a different Docker container ID. It runs with a different PID too...
I've tried some solutions I've found on the Internet, trying to stop this madness...

It seems you are using a systemd unit from that tutorial.
You should try this command:
systemctl stop docker-openvpn@NAME.service
replacing NAME with whatever name you have given to your service.
As stated in their documentation:
In the event the service dies (crashes, or is killed) systemd will attempt to restart the service every 10 seconds until the service is stopped with systemctl stop docker-openvpn@NAME.service.
In case you forgot your service name, you can list all services with:
systemctl --type=service
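If the unit was also enabled to start at boot, you may want to disable it as well so it does not come back after a reboot (a sketch, assuming the same template unit name as above):
systemctl disable docker-openvpn@NAME.service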

Related

How to stop a running docker container 'X' minutes from startup?

I would like to stop my running docker container after a specific time, let's say 2 hrs after startup. So far my research has led to the following solutions. I just wanted to know if there were better ways to do it.
Use a cron job to stop the container by calling the docker stop command.
Use an entry point like sleep 5000, but this does not suit my use case.
Using --stop-timeout in the docker run command; I believe this is just the maximum timeout given for the container to gracefully shut down. Am I missing something here?
You can use the timeout command that's part of the coreutils package, which is already installed in the Debian images (and probably many others).
This will run the container for 30 seconds and then stop it:
docker run debian timeout 30 tail -f /dev/null
Basically, add timeout 7200 in front of the command you want to run in the container, and it'll be killed after 2 hours.
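For the 2-hour case from the question, that might look like this (image and command names here are placeholders):
docker run -d some-image timeout 7200 some-long-running-command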

Stop Synology notification "Docker container stopped unexpectedly"

I have a container with one Node.js script which is launched with CMD npm start. The script runs, does some work, and exits. The node process exits because no work is pending. The npm start exits successfully. The container then stops.
I run this container on a Synology NAS from a cronjob via docker start xxxx. When it finishes, I get an alert Docker container xxxx stopped unexpectedly from their alert system. docker container ls -a shows its status as Exited (0) 5 hours ago. If I monitor docker events I see the event die with exitCode=0
It seems like I need to signal to the system that the exit is expected by producing a stop event instead of a die event. Is that something I can do in my image or on the docker start command line?
The Synology Docker package will generate the notification Docker container xxxx stopped unexpectedly when the following two conditions are met:
The container exits with a die docker event (you can see this happen by monitoring docker events when the container exits). This is any case where the main process in the container exits on its own. The exitCode does not matter.
The container is considered "enabled" by the Synology Docker GUI. This information is stored in /var/packages/Docker/etc/container_name.config:
{
    "enabled" : true,
    "exporting" : false,
    "id" : "dbee87466fb70ea26cd9845fd79af16d793dc64d9453e4eba43430594ab4fa9b",
    "image" : "busybox",
    "is_ddsm" : false,
    "is_package" : false,
    "name" : "musing_cori",
    "shortcut" : {
        "enable_shortcut" : false,
        "enable_status_page" : false,
        "enable_web_page" : false,
        "web_page_url" : ""
    }
}
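To watch for the die event from the first condition, docker events can be filtered to a single container (filter syntax from the docker events documentation; the container name is the one from the example above):
docker events --filter container=musing_cori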
How to enable/disable containers with Synology's Docker GUI
Containers are automatically enabled if you start them from the GUI. All of these things will cause the container to become "enabled" and start notifying on exit:
Sliding the "toggle switch" in the container view to "on"
Using Action start on the container.
Opening the container detail panel and clicking "start"
This is probably how your container ended up "enabled" and why it is now notifying whenever it exits. Containers created with docker run -d ... do not start out enabled, and will not initially warn on exit. This is probably why things like docker run -it --rm busybox and other ephemeral containers do not cause notifications.
Containers can be disabled if you stop them while they are running. There appears to be no way to disable a container which is currently stopped. So to disable a container you must start it and then stop it before it exits on its own:
Slide the toggle switch on then off as soon as it will let you.
Use Action start and then stop as soon as it will let you (this is hard because of the extra click if your container is very short-lived).
Open the container detail panel, click start, and then as soon as "stop" is not grayed out, click "stop".
Check your work by looking at /var/packages/Docker/etc/container_name.config.
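A quick way to check the flag (the file is owned by root, so use sudo; substitute your own container name in the path):
sudo grep '"enabled"' /var/packages/Docker/etc/container_name.config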
Another option for stopping/starting the container without the notifications is to do it via the Synology Web API.
To stop a container:
synowebapi --exec api=SYNO.Docker.Container version=1 method=stop name="CONTAINER_NAME"
Then to restart it:
synowebapi --exec api=SYNO.Docker.Container version=1 method=start name="CONTAINER_NAME"
Notes:
The commands need to be run as root
You will get a warning [Line 255] Not a json value: CONTAINER_NAME but the commands work and give a response message indicating "success" : true
I don't really have any more information on it, as I stumbled across it in a Reddit post and there's not a lot to back it up, but it's working for me on DSM 7.1.1-42962 and I'm using it in a scheduled task.
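Putting the two commands together, a scheduled task that restarts a container without triggering the notification could be as simple as this sketch (run as root; CONTAINER_NAME is a placeholder as above):
#!/bin/sh
# stop and start via the Synology Web API instead of docker stop/start
synowebapi --exec api=SYNO.Docker.Container version=1 method=stop name="CONTAINER_NAME"
synowebapi --exec api=SYNO.Docker.Container version=1 method=start name="CONTAINER_NAME"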
Source and referenced links:
A Reddit post with the commands
Linked GitHub page showing the commands in use
Linked Synology Developer's Guide for DSM Login Web API
I'm not familiar with Synology so I'm not sure which component is raising the "alert" you mention, but I guess this is just a warning and not an error, because:
an exit status of 0 is perfectly fine from a POSIX perspective;
a "die" docker event also seems quite common, e.g. running docker events then docker run --rm -it debian bash -c "echo Hello" yields the same event (while a "kill" event would be more dubious).
So maybe you get one such warning just because Synology assumes a container should be running for a long time?
Anyway, here are a couple of remarks related to your question:
Is the image/container you run really ephemeral (regarding the data the container handles)? Because if this is the case, instead of doing docker start container_name, you might prefer using docker run --rm -i image_name … or docker run --rm -d -i image_name …. (In this case, thanks to --rm, the container removal will be automatically triggered when the container stops.)
Even if the setup you mention sounds quite reasonable for a cron job (namely, the fact that your container stops early and automatically), you might be interested in this other SO answer that gives further details on how to catch the signals raised by docker stop etc.
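For reference, a minimal shell entrypoint that catches the SIGTERM sent by docker stop might look like this sketch (this is not from the linked answer; the work loop is a placeholder):
#!/bin/sh
# exit cleanly when docker stop sends SIGTERM
trap 'echo "caught SIGTERM, exiting"; exit 0' TERM
# placeholder workload: sleep in the background and wait, so the trap fires promptly
while :; do sleep 1 & wait $!; done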

What is the proper way to start a container after it has exited?

I have a container called sqlcontainer1.
The image is "microsoft/mssql-server-linux:2017-latest".
I restored a .bak file to it and now I can use it for local development.
And I can even see if from SSMS (SQL Server Management Studio). Great!
The problem is after I reboot it the container status says "Exited".
The only way I can see to restart is to type:
docker start -ai sqlcontainer1
Then no command prompt is ever returned so I have to open another command prompt and retype:
docker ps -a
to see the status is now "UP 7 minutes".
OK, I'm glad it's up and I can now connect back with SSMS and work from there (although I am wondering why it says 7 minutes. I've only had it up seconds).
Good.
But there has to be a better way.
I just want two commands like this:
docker start containerName
docker stop containerName
Is there anything like this?
If I can get that far then I would like to look into a proper restart policy.
You can set a container's restart policy to always when you create it, or afterwards you can update it with:
docker update --restart=always <container>
Then the container will always be started when your computer (and with it the Docker daemon) starts up.
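For what it's worth, the two commands asked for do exist exactly as written: docker start without the -ai flags starts the container detached and returns your prompt immediately, and docker stop stops it.
docker start sqlcontainer1
docker stop sqlcontainer1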

How to pull layers one by one in Docker to avoid connection timeout?

I keep getting connection timeout while pulling an image:
First, it starts downloading the first 3 layers; once one of them finishes, the 4th layer tries to start downloading. Now the problem is that it won't actually start until the two remaining layers finish their download process, and before that happens (I think) the fourth layer fails to start downloading and aborts the whole process.
So I was thinking: would downloading the layers one by one solve this problem?
Or maybe there is a better way/option to solve this issue, which may occur when you don't have a very fast internet connection.
The Docker daemon has a --max-concurrent-downloads option.
According to the documentation, it sets the max concurrent downloads for each pull.
So you can start the daemon with dockerd --max-concurrent-downloads 1 to get the desired effect.
See the dockerd documentation for how to set daemon options on startup.
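If you prefer a persistent setting over a command-line flag, the same option can go in the daemon configuration file, typically /etc/docker/daemon.json on Linux (the next answer uses the same mechanism):
{
    "max-concurrent-downloads": 1
}
followed by a daemon restart.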
If Docker is already running (on Ubuntu), follow these steps:
sudo service docker stop
sudo dockerd --max-concurrent-downloads 1
Download your images; after that, close this terminal session and start the daemon again as it was before:
sudo service docker start
There are 2 ways:
Permanent change: add a Docker settings file:
sudo vim /etc/docker/daemon.json
with the following JSON content:
{
    "max-concurrent-uploads": 1,
    "max-concurrent-downloads": 4
}
After adding the file, run:
sudo service docker restart
Temporary change:
stop Docker with
sudo service docker stop
then run
sudo dockerd --max-concurrent-uploads 1
At this point, start the push in another terminal; it will transfer the layers one by one. When you are finished, restart the service or the computer.
Building on the previous answers: in my case I couldn't do a service stop, and I also wanted to make sure I would restart the Docker daemon in the same state, so I followed these steps:
Record the command line used to start the docker daemon:
ps aux | grep dockerd
Stop the docker daemon:
sudo kill <process id retrieved from previous command>
Restart docker daemon with max-concurrent-downloads option: Use the command retrieved at the first step, and add --max-concurrent-downloads 1
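For example, if the first step showed the daemon running as /usr/bin/dockerd -H fd:// (a common invocation on systemd-based systems; yours may differ), the restart would be:
sudo /usr/bin/dockerd -H fd:// --max-concurrent-downloads 1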
Additionally
Even with a single download at a time, you might still run into a problem where your pull gets aborted at some point and the layers that were already downloaded are erased. It's a bug, but it was my case.
A solution in that case is to make sure to keep already downloaded layers, voluntarily.
The way to do that is to regularly abort the pull manually, but NOT by killing the docker command, but BY KILLING THE DOCKER DAEMON.
Actually, it's the daemon that erases already downloaded layers when the pull fails. Thus, by killing it, it can't erase these layers. The docker pull command does terminate, but once you restart the docker daemon, and then relaunch your docker pull command, downloaded layers are still here.

Neither "docker stop", "docker kill" nor "docker -f rm" works

Trying to stop a container from this image using any of the mentioned commands results in Docker waiting indefinitely. The container can still be observed in the docker ps output.
Sorry for a newbie question, but how does one stop containers properly?
This container was first run according to the instructions on hub.docker.com, halted with Ctrl+C, and then started again with docker start <container-name>. After it was started, it never worked as expected though.
Your test worked for me:
→ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                    NAMES
853e36b8a952   jleight/opentsdb   "/usr/bin/supervisord"   9 minutes ago   Up 9 minutes   0.0.0.0:4242->4242/tcp   fervent_hypatia
→ docker stop fervent_hypatia
fervent_hypatia
→ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                    NAMES
It took a bit long, but I think that is because the Docker image uses a supervisor process, so the SIGTERM (which is what docker stop sends first) doesn't kill the container, but the SIGKILL, which is sent by default after 10 seconds, should (my wait time was ~10 seconds).
Just in case your default is messed up for some reason, try indicating the timeout explicitly:
docker stop --time=2 <container-name>
docker stop <container-name> is the proper way to stop your container. It's possible there is something going on inside; you could try using docker logs <container-name> to get more information about what's running inside.
This probably isn't the best way, but if nothing else works, restarting the Docker daemon would eventually do the trick.
