What's the difference between "docker stop" and "docker rm"?

I initially thought that docker stop was equivalent to vagrant halt, and docker rm to vagrant destroy.
But fundamentally, Docker containers are stateless, except for directories declared with a VOLUME statement, which AFAIK survive docker rm unless it is called with -v.
So, what is the difference?

docker stop preserves the container in the docker ps -a list (which gives you the opportunity to commit it if you want to save its state in a new image).
It sends SIGTERM first, then, after a grace period, SIGKILL.
docker rm removes the container from the docker ps -a list, losing its "state" (the layered filesystems written on top of the image filesystem). It cannot remove a running container (unless called with -f, in which case it sends SIGKILL directly).
In terms of lifecycle, you are supposed to stop the container first, then remove it. That gives the container's PID 1 a chance to collect zombie processes.
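The SIGTERM-then-SIGKILL behavior of docker stop can be sketched in plain shell against an ordinary background process. The sleep below stands in for a container's PID 1, and the one-second grace period is shortened from Docker's ten-second default:

```shell
# Stand-in for a container's PID 1: a long-running process.
sleep 60 &
pid=$!

kill -TERM "$pid"                        # what `docker stop` sends first
sleep 1                                  # grace period (Docker defaults to 10s)
kill -KILL "$pid" 2>/dev/null || true    # fallback, if it's still alive
wait "$pid" 2>/dev/null || true          # reap it

if kill -0 "$pid" 2>/dev/null; then
  echo "still running"
else
  echo "stopped"
fi
```

docker stop -t <seconds> adjusts that grace period; a PID 1 that ignores SIGTERM (as some shells do) will always eat the full timeout before being SIGKILLed.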

docker rm removes the container and its writable filesystem layer from your storage location (e.g. on Debian: /var/lib/docker/containers/), whereas
docker stop simply halts the container.

Related

Docker - Change existing containers settings without RUN command

Is it possible to change the settings of a docker container, like the entrypoint, ports, or memory limits, without having to delete the container and run it again with docker run? For example: docker stop <container_id>, change the settings, and then docker start <container_id>?
When I use docker run -d image_name, some images try to initialize from scratch and as a result I can't use the same volume.
Is it possible to change the settings by stopping the container instead of re-running it?
You need to stop, delete, and recreate the container.
# this is absolutely totally 100% normal and routine
docker stop my_container
docker rm my_container
# docker build -t image_name .
docker run -d -p 12345:8000 --name my_container image_name
This isn't specific to Docker. If you run any command in any Unix-like environment and you want to change its command-line parameters or environment variables, you need to stop the process and create a new one. A Docker container is a wrapper around a process with some additional isolation features, and for a great many routine things you're required to delete the container. In cluster container environments like Kubernetes, this is routine enough that changing any property of a Deployment object will cause all of the associated containers (Kubernetes Pods) to get recreated automatically.
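The same constraint is easy to see with a bare Unix process; the MODE variable below is just an illustrative name. A process's environment is fixed when it starts, so "changing" it means starting a fresh process:

```shell
# First process: its environment is fixed at startup.
MODE=dev sh -c 'echo "running with MODE=$MODE"'

# There is no way to flip MODE inside that (now-exited) process.
# You start a new one instead, which is exactly what
# delete-and-recreate does for a container.
MODE=prod sh -c 'echo "running with MODE=$MODE"'
```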
There are a handful of Docker commands that exist but are almost never used in normal operation. docker start is among these; just skip over it in the documentation.
When I use docker run -d image_name, some images try to initialize from scratch and as a result I can't use the same volume.
In fact, the normal behavior of docker run is that you're always beginning the program from a known "clean" initial state; this is easier to set up as an application developer than trying to recover from whatever state the previous run of the application might have been left in.
If you need to debug the image startup, an easy thing to do is to tell the container to run an interactive shell instead of its default command:
docker run --rm -it image_name /bin/sh
(Some images may have bash available which will be more comfortable to work in; some images may require an awkward docker run --entrypoint option.) From this shell you can try to manually run the container startup commands and see what happens. You don't need to worry about damaging the container code in any particular way, since anything you change in this shell will get lost as soon as the container exits.
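For images whose Dockerfile sets an ENTRYPOINT, the trailing /bin/sh above would be passed to that entrypoint as an argument rather than replacing it; the override form (image_name is a placeholder, and this needs a running daemon) looks like:

```shell
# Replace the image's ENTRYPOINT with an interactive shell.
docker run --rm -it --entrypoint /bin/sh image_name
```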

How to start a container that exits immediately after startup in docker?

As everyone knows, we can use docker start [dockerID] to start a stopped container.
But what if the container exits immediately after startup? What should I do?
For example, I have a MySQL container that runs without any problems. But then the system went down. The next time I started the container, it told me a file was corrupt, and the container exited immediately.
Now I want to delete this file, but the container cannot be started, so I can't enter it to delete the file. What should I do?
And if I want to open a bash shell in a container in this state, what should I do?
Delete the container and launch a new one.
docker rm dockerID
docker run --name dockerID ... mysql:5.7
Containers are generally treated as disposable; there are times you're required to delete and recreate a container (to change some networking or environment options; to upgrade to a newer version of the underlying image). The flip side of this is that containers' state is generally stored outside the container filesystem itself (you probably have a docker run -v or Docker Compose volumes: option) so it will survive deleting and recreating the container. I almost never use docker start.
Creating a new container gets you around the limitations of docker start:
If the container exits immediately but you don't know why, docker run it (or docker-compose up it) without the -d option, so that it prints its logs to the console.
If you want to run a different command (like an interactive shell) as the main container command, you can do it the same as with any other container:
docker run --rm -it -v ...:/var/lib/mysql/data mysql:5.6 sh
docker-compose run db sh
If the actual problem can be fixed with an environment variable or other setting, you can add that to the startup-time configuration, since you're already recreating the container anyway.

what's the difference between docker stop and docker container stop?

We can use docker stop xxx or docker container stop xxx to stop a container, but what's the difference between them? I looked at the official docs, and the two commands have the same description and usage format.
Also, what does parent command mean in docker? Thanks!
Both are the same. Some frequently used commands, like docker container stop, have shortened aliases directly under the top-level docker command.
The parent command is just the command one level up. For docker container stop, stop is a subcommand of docker container, and docker container is called the parent command.

how to kill lots of docker container processes effectively and faster?

We are using Jenkins and Docker in combination. We have set up Jenkins in a master/slave model, and containers are spun up on the slave agents.
Sometimes, due to a bug in the Jenkins Docker plugin or for some unknown reason, containers are left dangling.
Killing them all takes time, about 5 seconds per container, and we have about 15000 of them, so the cleanup job would take ~24 hours to finish. How can I remove a bunch of containers at once, or more efficiently, so that it takes less time?
Will uninstalling the docker client remove the containers?
Is there a volume where these container processes are kept that could be removed (bad idea)?
Any threading/parallelism to remove them faster?
I am going to run a weekly cron job to work around these bugs, but right now I don't have a whole day to get them removed.
Try this:
Uninstall docker-engine
Reboot host
rm -rf /var/lib/docker
Rebooting effectively stops all of the containers, and uninstalling Docker prevents them from coming back on reboot (in case they have restart=always set).
If you are only interested in killing the processes, since they are not exiting properly (my assessment of what you mean; correct me if I'm wrong), there is a way to walk the running container processes and kill them using the Pid information from each container's metadata. Since it appears you don't care about clean process shutdown at this point (which is why docker kill takes so long per container: the container may not respond to the right signals, so the engine waits patiently before killing the process), a kill -9 is a much swifter and more drastic way to end these containers and clean up.
A quick test using the latest docker release shows I can kill ~100 containers in 11.5 seconds on a relatively modern laptop:
$ time docker ps --no-trunc --format '{{.ID}}' | xargs -n 1 docker inspect --format '{{.State.Pid}}' | xargs -n 1 sudo kill -9
real 0m11.584s
user 0m2.844s
sys 0m0.436s
A clear explanation of what's happening:
I'm asking the docker engine for a "full container ID only" list of all running containers (the docker ps)
I'm passing that through docker inspect one by one, asking to output only the process ID (.State.Pid), which
I then pass to the kill -9 to have the system directly kill the container process; much quicker than waiting for the engine to do so.
Again, this is not recommended for general use as it does not allow for standard (clean) exit processing for the containerized process, but in your case it sounds like that is not an important criterion.
If there is leftover container metadata for these exited containers you can clean that out by using:
docker rm $(docker ps -q -a --filter status=exited)
This will remove all exited containers from the engine's metadata store (the /var/lib/docker content) and should be relatively quick per container.
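On Docker 1.25 or newer, the same cleanup of exited containers is also available as a single built-in subcommand (it needs a running daemon, so it is shown here as a bare command):

```shell
# Removes all stopped containers without listing them yourself;
# -f skips the interactive confirmation prompt.
docker container prune -f
```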
So,
docker kill $(docker ps -a -q)
isn't what you need?
EDIT: obviously it isn't. My next take then:
A) somehow create a list of all containers that you want to stop.
B) Partition that list (maybe by just slicing it into n parts).
C) Kick off n jobs in parallel, each one working on one of those list slices.
D) Hope that docker is robust enough to handle n processes, each sending kill requests in sequence, in parallel.
E) If that really works: maybe start experimenting to determine the optimum setting for n.
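That partition-and-parallelize idea can be sketched with GNU xargs, which does the slicing (-n) and the parallel workers (-P) in one step. The docker line needs a daemon, so the runnable part below substitutes echo for docker rm -f to show the batching:

```shell
# With a daemon available, this would remove containers 10 at a time,
# with 8 workers running in parallel:
#   docker ps -aq | xargs -n 10 -P 8 docker rm -f

# The same xargs mechanics, with echo standing in for docker:
# four IDs, sliced into batches of 2, handled by 2 parallel workers.
printf 'c1\nc2\nc3\nc4\n' | xargs -n 2 -P 2 echo rm -f
```

Tuning -n (batch size) and -P (worker count) is the "experiment to find the optimum n" step; the bottleneck is usually the engine, not xargs.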

Do docker containers retain file changes?

This is a very basic question, but I'm struggling a bit and would like to make sure I understand properly.
After a container is started from an image and some changes are made to files within it (e.g. some data stored in the DB of a web app running in the container), what's the appropriate way to keep working with the same data between a container stop and restart?
Is my understanding correct that once the container is stopped/finished (e.g. after exiting an interactive session), that container is gone, together with all file changes?
So if I want to keep some file changes, do I have to commit the state of the container into a new image / a new version of the image?
Is my understanding correct that once the container is stopped/finished (e.g. after exiting an interactive session), that container is gone, together with all file changes?
No, a container persists after it exits, unless you started it using the --rm argument to docker run. Consider this:
$ docker run -it busybox sh
/ # date > example_file
/ # exit
Since we exited our shell, the container is no longer running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
But if we add the -a option, we can see it:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
79aee3e2774e busybox:latest "sh" About a minute ago Exited (0) 54 seconds ago loving_fermat
And we can restart it and re-attach to it:
$ docker start 79aee3e2774e
$ docker attach 79aee3e2774e
<i press RETURN>
/ #
And the file we created earlier is still there:
/ # cat example_file
Wed Feb 18 01:51:38 UTC 2015
/ #
You can use the docker commit command to save the contents of the container into a new image, which you can then use to start new containers, or share with someone else, etc. Note, however, that if you find yourself regularly using docker commit you are probably doing yourself a disservice. In general, it is more manageable to consider containers to be read-only and generate new images using a Dockerfile and docker build.
Using this model, data is typically kept external to the container,
either through host volume mounts or using a data-only container.
You can see finished containers with docker ps -a
You can save a finished container, with the filesystem changes, into an image using docker commit container_name new_image_name
You can also extract data files from the finished container with: docker cp containerID:/path/to/find/files /path/to/put/copy
Note that you can also "plan ahead" and avoid trapping data you'll need permanently within a temporary container by having the container mount a directory from the host, e.g.
docker run -v /dir/on/host:/dir/on/container -it ubuntu:14.04
