I have an application that uses shared memory. While the application is running it allocates some shared memory, and when the process is killed it releases that memory, as it is supposed to.
What I am wondering is what happens when I run this application inside a Docker container and the container itself is killed, say with docker kill container_id. I have seen that the process's signal handler is sometimes not called and the container is simply killed and removed.
I want to understand what happens to the shared memory my application allocated if the container is removed without the app shutting down properly.
Will this result in a memory leak, or does Docker take care of it and release the memory? Is there any way I can handle this if a leak is actually happening?
docker kill sends a signal to the main process running in a container, the same way as the normal Unix kill(1) command. Note, though, that docker kill normally sends SIGKILL, the same as shell kill -9; a process can't observe SIGKILL or react to it in any way, it's just immediately killed off.
So, yes, it's possible that docker kill container_id will cause a leak of system resources, in the same way that kill -9 process_id would without Docker.
It's more common to docker stop a container. When you do,
The main process inside the container will receive SIGTERM, and after a grace period, SIGKILL.
So long as your process handles SIGTERM and does its cleanup promptly (the default grace period is 10 seconds), this should avoid the resource leak.
You can also specify an alternate signal with docker kill, though it wouldn't necessarily be my default approach here:
docker kill --signal=SIGINT container_id
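To make the cleanup path concrete, here is a minimal sketch of a shell entrypoint that traps SIGTERM (what docker stop sends to PID 1) and releases its resources before exiting. Everything here is hypothetical: the marker file merely stands in for a real shared-memory segment, which your application would release in its own signal handler (e.g. via shmctl(IPC_RMID) or shm_unlink).

```shell
#!/bin/sh
# Write a tiny entrypoint that traps SIGTERM, then show that sending
# SIGTERM (what `docker stop` delivers to PID 1) triggers its cleanup.
# The marker file is a stand-in for a real shared-memory segment.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
SHM_MARKER=/tmp/myapp_shm_in_use
cleanup() {
    kill "$child" 2>/dev/null   # stop the worker we spawned
    rm -f "$SHM_MARKER"         # stand-in for releasing shared memory
    echo "cleaned up"
    exit 0
}
trap cleanup TERM INT
touch "$SHM_MARKER"             # stand-in for allocating shared memory
sleep 30 & child=$!             # keep the "main" process alive
wait "$child"                   # wait is interruptible, so the trap fires promptly
EOF

sh /tmp/entrypoint.sh > /tmp/entrypoint.out &
pid=$!
sleep 1                         # let it start and create the marker
kill -TERM "$pid"               # simulate `docker stop`
wait "$pid"
cat /tmp/entrypoint.out         # prints: cleaned up
```

Note that none of this helps against docker kill's default SIGKILL, which cannot be trapped; the cleanup only runs for catchable signals like SIGTERM or SIGINT.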
I'm running 3 applications together using docker-compose:
Standard Nginx image
Java/Spark API server
Node.js app (backend + frontend)
I can bring the composed service up with docker-compose up no problem, and it runs for a period of time with no issues. At some point something kills the two non-nginx containers with code 137, and the service goes down.
My docker-compose.yml has restart: always on each container, but as I understand it this will not restart the containers if they're getting killed in this way. I verified this with docker kill $CONTAINER on each one, and they are not restarted.
When the application exits, all I see at the end of my logs is:
nginx exited with code 0
java_app exited with code 143
node_app exited with code 137
How can I debug why the host is killing these containers, and either stop this from happening or make them restart on failure?
You probably don't have enough memory, or your applications have a memory leak. Exit code 137 means the process was killed with SIGKILL, which on a memory-starved host is usually the kernel's OOM killer. You can set a memory limit on each container, and you can also create swap space if the host doesn't have enough memory.
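The exit codes in the question's log follow the usual Unix convention: a code above 128 means the process died from signal (code - 128), so 137 is SIGKILL (9, what the OOM killer and docker kill use) and 143 is SIGTERM (15, the graceful-stop signal). A quick sketch outside Docker shows the same arithmetic:

```shell
#!/bin/sh
# Exit code 128 + N means "terminated by signal N".
sleep 30 & pid=$!
kill -KILL "$pid"; wait "$pid"
echo "after SIGKILL: $?"        # prints: after SIGKILL: 137  (128 + 9)

sleep 30 & pid=$!
kill -TERM "$pid"; wait "$pid"
echo "after SIGTERM: $?"        # prints: after SIGTERM: 143  (128 + 15)
```

To confirm an OOM kill specifically, docker inspect -f '{{.State.OOMKilled}}' container_id reports whether the kernel killed that container for exceeding its memory limit.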
Is it possible to suspend a running container and resume it later? This would be similar to putting a computer into sleep mode. FYI: I'm asking this in the context of managing containers using Kubernetes.
The reason for asking is that we would like to run many interactive jobs and want to suspend these jobs when users are not actively working on them so that resources can be released and used by other users.
Yes, you can use the docker stop and docker start commands; it is somewhat like suspending a PC.
All data produced inside the container will be preserved, but the main process will receive a SIGTERM signal on docker stop. After docker start, that process is started again.
Also take a look at docker pause command.
The docker pause command suspends all processes in the specified containers. On Linux, this uses the cgroups freezer. Traditionally, when suspending a process the SIGSTOP signal is used, which is observable by the process being suspended. With the cgroups freezer the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. On Windows, only Hyper-V containers can be paused.
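For contrast with the freezer, here is what the traditional SIGSTOP/SIGCONT route looks like outside Docker; the suspension is visible from the outside as state T in /proc (a Linux-only sketch, not something docker pause itself does, since the freezer delivers no signal at all):

```shell
#!/bin/sh
# Suspend and resume an ordinary process with SIGSTOP/SIGCONT (Linux).
sleep 30 & pid=$!

kill -STOP "$pid"                    # suspend the process
sleep 1                              # give the kernel a moment
awk '{print $3}' "/proc/$pid/stat"   # prints: T  (stopped)

kill -CONT "$pid"                    # resume it
sleep 1
awk '{print $3}' "/proc/$pid/stat"   # prints: S  (sleeping again)

kill -KILL "$pid"                    # clean up the demo process
```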
I am running a docker container which contains a node server. I want to attach to the container, kill the running server, and restart it (for development). However, when I kill the node server it kills the entire container (presumably because I am killing the process the container was started with).
Is this possible? This answer helped, but it doesn't explain how to kill the container's default process without killing the container (if possible).
If what I am trying to do isn't possible, what is the best way around this problem? Adding command: bash -c "while true; do echo 'Hit CTRL+C'; sleep 1; done" to each image in my docker-compose, as suggested in the comments of the linked answer, doesn't seem like the ideal solution, since it forces me to attach to my containers after they are up and run the command manually.
This is by design in Docker. Each container is supposed to be a stateless instance of a service: if that service is interrupted, the container is destroyed, and if that service is requested/started, it is created. At least, that is the model if you're using an orchestration platform like k8s, Swarm, Mesos, Cattle, etc.
There are applications that exist to represent PID 1 rather than the service itself. But this goes against the design philosophy of microservices and containers. Here is an example of an init system that can run as PID 1 instead and allow you to kill and spawn processes within your container at will: https://github.com/Yelp/dumb-init
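For illustration, a Dockerfile wiring in dumb-init might look like the following sketch; the base image, package install step, and server.js are assumptions, not details from the question:

```dockerfile
# Sketch only: base image and app file are placeholders.
FROM node:18-slim
RUN apt-get update && apt-get install -y dumb-init \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . /app
# dumb-init runs as PID 1, forwards signals to the app, and reaps
# zombie children, instead of the app itself having to act as init.
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "server.js"]
```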
Why do you want to reboot the node server? To apply changes from a config file or something? If so, you're looking for a solution in the wrong direction. You should instead define a persistent volume so that when the container respawns, the service rereads said config file.
https://docs.docker.com/engine/admin/volumes/volumes/
If you need to restart the process that's running the container, then simply run a:
docker restart $container_name_or_id
Exec'ing into a container shouldn't be needed for normal operations, consider that a debugging tool.
Rather than changing the script that gets run to automatically restart, I'd move that out to the docker engine so it's visible if your container is crashing:
docker run --restart=unless-stopped ...
When a container is run with the above option, docker will restart it for you, unless you intentionally run a docker stop on the container.
As for why killing pid 1 in the container shuts it down, it's the same as killing pid 1 on a linux server. If you kill init/systemd, the box will go down. Inside the namespace of the container, similar rules apply and cannot be changed.
I am running prometheus inside a docker container on centos. I wanted to know if there is a way to stop prometheus gracefully (without data corruption). Will running docker stop work? I could not find any docs on this and I am new to linux, docker and prometheus.
Yes, this should work.
docker stop sends a SIGTERM signal to the process at PID 1, which is Prometheus if you're using the official image. If the process has not exited after a grace period (10 seconds by default), docker then sends SIGKILL and kills the process.
Prometheus is expected to shut down cleanly when receiving a SIGTERM; however, this may take longer than 10 seconds. See
https://prometheus.io/docs/introduction/faq/#troubleshooting
You might have to extend the time docker waits before sending a SIGKILL.
e.g.: docker stop --time=50 container_id
You'll have to find the correct setting here depending on your Prometheus setup.
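If Prometheus runs under docker-compose, the same grace period can also be set declaratively with stop_grace_period, which controls how long compose waits after SIGTERM before escalating to SIGKILL. The service definition below is only a sketch; adjust it to your setup:

```yaml
# Sketch: service name and image tag are assumptions.
services:
  prometheus:
    image: prom/prometheus
    # Wait up to 50s after SIGTERM before sending SIGKILL
    # (default is 10s), giving Prometheus time to flush and exit.
    stop_grace_period: 50s
```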
I have some containers running under the Docker daemon. If the daemon crashes for some reason (I killed it with kill -9 $pid_of_daemon to reproduce this behavior) and I then start the daemon again, why does it kill the old running containers? The behavior I want is for startup to proceed even if there are already running containers. The only explanation I have found is that when the daemon crashes, it loses its stdin/stdout pipes to the containers, so it can no longer attach to them. But if my container does not need stdin, stdout, or stderr, why does the daemon kill it during startup? Please help.