Is there a functional difference here? I can docker start either one to make it go again. What's the difference?
It is quite different.
A stopped container can be restarted, unlike an exited container.
Suppose you have a stopped container, which has an id of 21F123 (that is enough to identify it).
docker start 21F123
may succeed.
If your container exits, you can try to launch it again, but it will have a new, different pid in
docker ps
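If you want to see how Docker itself records the difference, you can inspect the container's state directly. A minimal sketch, assuming the container id 21F123 from above:

docker inspect 21F123 --format 'status={{.State.Status}} pid={{.State.Pid}} exit={{.State.ExitCode}}'
# a running container reports status=running and a non-zero pid
# an exited/stopped container reports status=exited and pid=0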
I have correctly deployed a Docker container which runs a Python script that grabs some data from the internet and slaps it in BigQuery. The container works well on my machine and on a GCE instance that I've provisioned.
Now, everything works well for the most part but I am failing to understand why the docker container always restarts after exiting (apparently correctly). Logs, in this case, seem to be fairly useless as there is no error whatsoever. My current hunch is that something is failing silently, forcing the instance to restart.
Is there any way to find out the reboot reason for a given Docker container?
Things tried so far
I've tried to print the exit code of the container in the following way. The result is always 0, no matter how many restart cycles occur.
while true
do
  docker inspect my_container --format='{{.State.ExitCode}}'
  sleep 1
done
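If the exit code alone doesn't explain the restarts, watching lifecycle events may help. A minimal sketch, assuming the container name my_container from the loop above; the exact output format may vary by Docker version:

docker events --filter container=my_container --filter event=die
# each restart prints a line roughly like:
# 2021-05-01T12:00:00.000000000Z container die <id> (exitCode=0, name=my_container, ...)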
The Google Cloud documentation describes different ways in which you can review your container-related logs, including container starts and stops.
In any case, I think there is no problem with your container: by default Compute Engine will restart a container on exit, although you can specify a different restart policy if you need to. Please see the relevant documentation.
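For illustration, a hedged sketch of what changing that policy can look like with gcloud (the instance name my-vm is hypothetical; check the current documentation for the exact flag and values):

gcloud compute instances update-container my-vm \
    --container-restart-policy=never   # alternatives: on-failure, always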
I succeeded in connecting through vscode to a remote server configured with Docker. However, the Remote Explorer in vscode fetched a list of containers from the past. If you look at this list of containers, they are obviously containers made from images I downloaded a few days ago. I don't know why this is happening.
Presumably, it is a problem with the settings.json file or with some log.
I pressed F1 in vscode and selected Remote-Containers: Attach to Running Container...
Then the docker command was entered automatically in the terminal. Here, a container (b25ee2cb9162) appeared, and I do not know where it came from.
After running this container, a new window opens with the message Starting Dev Container.
This is the list of containers I mentioned, created from images downloaded a few days ago. This is what vscode showed me.
What's the reason that this happened?
Those containers are the same ones you would see if you ran docker container ls -a. The containers you are seeing have exited and are not automatically cleaned up by Docker unless you pass the --rm option on the CLI.
The docs for the --rm option explain the reason for this nicely:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag:
From this answer about these non-running containers taking up system resources: you don't have to be concerned about them taking up much space beyond minimal disk space.
To remove those containers, you have a few options:
[Preemptive] Use --rm flag when running container
You can pass the --rm flag when you run a container with the Docker CLI to remove the container after it has exited, so old containers don't accumulate.
As the docs mention, the downside is that after the container exits, it's difficult to debug why it exited if something failed inside it.
See the docs here if using docker run: https://docs.docker.com/engine/reference/run/#clean-up---rm
See this answer if using docker-compose run
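As a small illustration of the preemptive option (the debian image is just an example):

docker run --rm -it debian echo hello
docker ps -a   # no leftover container from the run above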
Clean up existing containers from the command line
Use the docker container prune command to remove all stopped containers.
See the docs here: https://docs.docker.com/engine/reference/commandline/container_prune/
See this related SO answer if you're looking for other options:
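For example, the prune command from this section in action (-f skips the confirmation prompt; the until filter is optional):

docker container prune -f
docker container prune -f --filter "until=24h"   # only prune containers created more than 24h ago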
Clean up containers from VSCode
The VSCode Docker Containers extension lets you clean up containers: open the command palette and enter Docker Containers: Remove.
Or you can simply right-click those containers.
I use the dotnet3.5 image to run containers on win10 with docker desktop 2.1.0.1(37199). Sadly, I found that after I had created a container, did nothing to it, and left it alone for 4 days, the container automatically became unstoppable. The snapshot tells the story.
The container still seemed to exist when running docker ps -a, but I cannot get into it by docker exec. And since I cannot stop it (the docker stop process hangs after I use docker stop container2), I cannot rm the container.
The only way to resolve this issue is to restore docker desktop's factory setting.
By the way, although in the snapshot the running image is aspnet:3.5-windowsservercore-10.0.14393.953, this issue also happens with the aspnet:3.5 image.
Does anyone have good ideas to the unstoppable container? Any suggestions are welcome.
The command used above is incorrect. There is a difference between the commands and the options we use. docker ps or docker container ls will give you the list of currently running processes, i.e. active containers.
Whereas adding -a will give you the list of all containers used to date, which includes both active and exited ones.
In your case, the container is no longer there, and you are trying to access one that is non-existing, which is why it is stuck.
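You can see the difference yourself with the two commands from above:

docker ps      # running containers only
docker ps -a   # all containers, including exited ones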
I am new to Docker, and I find that definitions of the container lifecycle differ a lot.
here is what "Manning.Docker.in.Action.2016.3" shows:
here is what google gives me:
https://medium.com/#nagarwal/lifecycle-of-docker-container-d2da9f85959
here is what the official document says:
status: One of created, restarting, running, removing, paused, exited, or dead
https://docs.docker.com/engine/reference/commandline/ps/
So what's going on here? I guess some new states (and renaming) were introduced in newer versions of Docker?
Thanks in advance
Your linked diagram separates docker create from docker start, it includes "die" as a state transition, and it shows how to get to the "restarting" state. That's all valid, though it leads to a more complicated state machine.
(docker create wasn't in the very first versions of Docker but it appeared in Docker 1.3.0 in 2014, which should predate your diagram.)
Practically I might suggest an even simpler state machine:
-------> running -+------> stopped ------>
  run             |  stop           rm
                  \------> exited ------>
                    process exits   rm
That is, never try to restart a container or make changes inside a running container; if you need to tweak anything, delete the existing container and create a new one. This gives you a consistent environment (when the main container process starts you always know what's in its filesystem, up to mounted data). It also matches what happens in cluster environments like Kubernetes, where the cluster manager will routinely create and delete containers for you.
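A hedged sketch of that delete-and-recreate workflow (the name web and the nginx image are just examples):

docker rm -f web 2>/dev/null                  # drop the old container, if any
docker run -d --name web -p 8080:80 nginx    # recreate from a known image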
When you get into a situation where the internet gives you different answers, you should consider trying it yourself. Especially with technologies like docker, where it is pretty simple to run tests. For example:
I want to run a container (I will use nginx):
docker run -d nginx
docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS    NAMES
258cd2edbed8   nginx   "nginx -g 'daemon of…"   3 seconds ago   Up 2 seconds   80/tcp   jolly_golick
Note: docker will keep a container running only if there is a process running in it.
If you started a debian container (for example), you would see it stop immediately, as there is nothing running in it. So you could do
docker run -d debian sleep 10
and see that the container is up for 10 seconds.
When a container is running, you can do some things on it. You can't do other things, like removing it. To remove a container, you need to stop it first (or kill it), or force container removal.
Note: You would get all this info from docker itself if you played around with it, as it returns this info. For example, if you tried to remove a running container, you would get this error:
Error response from daemon: You cannot remove a running container 258cd2edbed85bed23ab543312968bd893c1fbd9ba81de40366337f434daedff. Stop the container before attempting removal or force remove
I can't show all possible combinations here. You would get a similar error if you tried removing a paused container. Just play with it, and you will get a clear picture of how it works.
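Continuing the experiment, removal of the nginx container above could look like this:

docker stop 258cd2edbed8 && docker rm 258cd2edbed8   # stop first, then remove
docker rm -f 258cd2edbed8                            # or force removal in one step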
I have a deployed application running inside a Docker container, which is, in effect, a websocket client that runs forever. Every deploy I rebuild the container and start it with docker run, using the command set in the Dockerfile.
Now, I've noticed a few times that the process occasionally dies without restarting. When running docker ps, I can see that the container is up, and has been up for 2 weeks, yet the process running inside of it has died without the host being any the wiser.
Do I need to go so far as to have a process manager inside of the docker container to manage the containerized process?
EDIT:
Dockerfile: https://github.com/DVG/catpen-edi/blob/master/Dockerfile
We've developed a process-manager tailor-made for Docker containers and have been using it with quite a bit of success to solve exactly the problem you describe. The best starting point is to take a look at chaperone-docker on github. The readme on the first page contains a quick link to a minimal base image as well as a fully configured LAMP stack so you can try it out and see what a fully-configured image would look like. It's open-source and fully documented.
This is a very interesting problem here related to PID1 and the fact that docker replaces PID1 with the command specified in CMD or ENTRYPOINT. What's happening is that the child process isn't automagically adopted by anything if the parent dies and it becomes an orphan (since there is no PID1 in the sense of a traditional init system like you're used to). Here is some excellent reading to give you a few ideas. You may get some mileage out of their baseimage-docker image, which comes with their simplified init system ("my_init") and will solve some of this problem for you. However, I would strongly caution you against automatically adopting the Phusion mindset for all of your containers, as there exists some ideological friction in that space. I can't recall any discussion on Docker's Github about a potential minimal init system to solve this problem, but I can't imagine it will be a problem forever. Good luck!
If you have two ruby processes it sounds like the child hasn't exited, the application has just stopped working. It's likely the EventMachine reactor is sitting in the background.
Does the EDI app really need to spawn the additional Ruby process? This only adds another layer between Docker and your app. Run the server directly with CMD [ "ruby", "boot.rb" ]. If you find the problem still occurs with a single process then you will need to find what is causing your app to hang.
When a process is running as PID 1 in docker, it will need to handle the SIGINT and SIGTERM signals too.
# Trap ^C (SIGINT)
Signal.trap("INT") {
  shut_down   # shut_down is your app's own cleanup method
  exit
}

# Trap `kill` (SIGTERM)
Signal.trap("TERM") {
  shut_down
  exit
}
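Note that these traps only matter if the ruby process really is PID 1, which depends on using the exec form of CMD in the Dockerfile, as in the earlier answer; the shell form would make /bin/sh PID 1 and it would receive the signals instead:

CMD ["ruby", "boot.rb"]   # exec form: ruby runs as PID 1 and receives SIGTERM
# CMD ruby boot.rb        # shell form: /bin/sh -c becomes PID 1 instead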
Docker also has restart policies for when the container does actually die.
docker run --restart=always
no
    Do not automatically restart the container when it exits. This is the default.

on-failure[:max-retries]
    Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.

always
    Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.

unless-stopped
    Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put to a stopped state before.
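For example (the image name my-image and container name my_container are placeholders):

docker run -d --restart=on-failure:5 my-image         # retry at most 5 times on failure
docker update --restart=unless-stopped my_container   # change the policy on an existing container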