I'm testing lots of services as containers, usually through docker-compose.
Since I'm new to Docker, this question may seem noobish.
At some point I had a docker-compose stack (a few containers with volumes) running. I had to stop the containers to re-use the same ports; I did that with the docker stop command.
When I was ready to start my containers again I did:
$ docker-compose start
Starting jenkins ... done
Starting gitlab-ce ... done
ERROR: No containers to start
I checked for containers and was surprised to see this:
$ docker-compose ps
Name Command State Ports
------------------------------
So I ran:
$ docker-compose up
Creating volume "dockerpipelinejenkins_jenkins_home" with default driver
Creating volume "dockerpipelinejenkins_gitlab_logs" with default driver
Creating volume "dockerpipelinejenkins_gitlab_data" with default driver
Creating volume "dockerpipelinejenkins_gitlab_config" with default driver
Creating dockerpipelinejenkins_jenkins_1 ... done
Creating dockerpipelinejenkins_gitlab-ce_1 ... done
Attaching to dockerpipelinejenkins_jenkins_1, dockerpipelinejenkins_gitlab-ce_1
... and I was shocked to see that my volumes had been recreated, effectively erasing my data.
Why did docker-compose erase the volumes, and will this happen every time I stop containers with docker stop?
A stop will not remove volumes. In fact, it cannot, since the container still exists and is therefore holding the volume in use:
$ docker run --name test-vol -v test-vol:/data busybox sleep 10
$ docker volume rm test-vol
Error response from daemon: unable to remove volume: remove test-vol: volume is in use - [1de0add7a8dd6e083326888bc02d9954bfc7b889310ac238c34ac0a5b2f16fbf]
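Once the container holding the volume is removed, the volume can be deleted; a quick follow-up sketch continuing the example above (the container has exited by now, but it still exists):
$ docker rm test-vol          # remove the container first
$ docker volume rm test-vol   # the volume is no longer in use, so this succeeds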
A docker-compose down will remove the containers, but it will not remove the volumes unless you explicitly pass that option:
$ docker-compose down --help
Stops containers and removes containers, networks, volumes, and images
created by `up`.
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used
Networks and volumes defined as `external` are never removed.
Usage: down [options]

Options:
    --rmi type          Remove images. Type must be one of:
                          'all': Remove all images used by any service.
                          'local': Remove only images that don't have a
                          custom tag set by the `image` field.
    -v, --volumes       Remove named volumes declared in the `volumes`
                        section of the Compose file and anonymous volumes
                        attached to containers.
    --remove-orphans    Remove containers for services not defined in the
                        Compose file
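For example, tearing down this stack and deleting its named volumes in one go would look like this (destructive; it is the -v flag that removes the volumes):
$ docker-compose down -v   # removes containers, networks, AND the named volumes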
If you have stopped all running containers and then run a prune, passing the option to also delete volumes, that would remove the volumes:
$ docker system prune --help

Usage:  docker system prune [OPTIONS]

Remove unused data

Options:
  -a, --all             Remove all unused images not just dangling ones
      --filter filter   Provide filter values (e.g. 'label=<key>=<value>')
  -f, --force           Do not prompt for confirmation
      --volumes         Prune volumes
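For example, one destructive invocation that removes unused volumes along with everything else unused:
$ docker system prune --volumes -f   # no confirmation prompt; unused volumes are deleted too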
In short, what you have described is not the default behavior of Docker; you must have run some command, not included in the question, that removed these volumes.
Related
I have multiple containers running on my machine. Each of them belongs to a docker-compose.yaml file, and they create different volumes.
Now I'd like to delete the volumes of the containers created from a specific docker-compose.yaml. I cannot find anything online.
docker ps -a, docker ps -aq, docker volume ls => none of these shows an association.
I'd like to delete the volumes [...] created from a specific docker-compose.yaml
docker-compose down -v deletes containers, networks, and volumes. (Without -v it keeps volumes for future use, since they're presumed to contain user data that's hard to recreate.) You don't need to know the specific Docker volume ID for this to work.
There are equivalent docker-compose commands to run most common tasks. These are generally driven off of the service name (block heading) in the docker-compose.yml, so you don't need to know or specify the Docker name of a container or network.
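For instance, a minimal sketch for targeting one specific project (the compose file path is a placeholder):
$ docker-compose -f /path/to/docker-compose.yaml down -v   # removes that project's containers, networks, and named volumes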
I was wondering (though my search seems to indicate otherwise) whether there is a docker command to stop a single service and remove its assigned volumes too. The equivalent of running:
1 - docker-compose stop <service name>
2 - docker volume rm <volumes_attached_to_service>
I don't mind if it removes the image for the service.
Answer:
Based on your comment:
I want to stop a single docker service and remove its assigned volumes.
This command should do the trick:
docker-compose rm -f -s -v <service>
or
docker-compose rm -fsv <service>
where -f forces removal without asking for confirmation, -s stops the container before removing it, and -v removes the anonymous volumes attached to the container.
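For example, with a hypothetical db service:
$ docker-compose rm -fsv db   # stop db, remove its container, and drop its anonymous volumes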
If the container is stopped but still exists, no. You have to delete all containers using a volume before deleting the volume itself. Even docker volume rm --force will not do the trick.
I could not find a specific mention in the docs, but this issue discusses the lack of documentation on the subject and some workarounds.
You can, however, stop AND delete a container with its related volumes in certain situations:
- docker-compose down --volumes, when all volumes to be deleted are declared in the docker-compose.yml file(s) used
- If using anonymous volumes, running the container with docker run --rm or docker-compose run --rm will cause the container and its attached anonymous volumes to be deleted when the container stops
In other situations you would have to remove the container and remove its volumes with separate commands.
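For example, a sketch with hypothetical service and volume names:
$ docker-compose stop web                # stop the service
$ docker-compose rm -f web               # remove its container
$ docker volume rm myproject_web_data    # then remove the named volume(s) it used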
The Docker documentation says the following:
docker volume prune: "Remove all unused local volumes"
To test that, I set up a MongoDB container with the official latest image from Docker Hub. Doing so created 2 volumes behind the scenes, which are presumably needed by the container to store its data.
When running docker volume ls now, I can see the two volumes with random names.
So let's say I had multiple containers with volumes using random names. It would then be difficult to know which of these are still in use, and I was expecting docker volume prune to help out here.
So I executed the command, expecting docker volume prune to delete nothing, as my container was up and running with MongoDB.
But what actually happened is that all my volumes got removed. After that, my container shut down and could not be restarted.
I tried recreating this multiple times, and every time, even though my container was running, the command just deleted all volumes.
Can anyone explain that behavior?
Update:
The following command, given the container id of my MongoDB container, shows me the 2 volumes:
docker inspect -f '{{ .Mounts }}' *CONTAINER_ID*
So from my understanding docker knows that these volumes and the container belong together.
When I ask docker to show me the dangling volumes, it shows me the same volumes again:
docker volume ls --filter dangling=true
So when they are dangling, it makes sense to me that the prune command removes them. But the volumes are clearly in use by my container, so that's not clear to me.
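As a side note, one way to cross-check which containers actually reference a given volume (the volume name below is a placeholder):
$ docker ps -a --filter volume=my_volume_name   # lists containers that mount this volume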
You can remove all existing containers and then remove all volumes.
docker rm -vf $(docker ps -aq) && docker volume prune -f
Only unused volumes
docker volume prune -f
or
docker volume rm $(docker volume ls -qf dangling=true)
Is it possible to remove containers that aren't running?
I know that, for example, this will remove containers with exited status:
docker rm `docker ps -q -f status=exited`
But I would like to know how I can remove all of those that are not running.
Use the docker container prune command; it will remove all stopped containers. You can read more about this command in the official docs here: https://docs.docker.com/engine/reference/commandline/container_prune/.
Similarly, Docker has docker network prune, docker image prune and docker volume prune commands to prune networks, images, and volumes.
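For example, to skip the confirmation prompt:
$ docker container prune -f   # remove all stopped containers without prompting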
I use docker system prune most of the time; it cleans unused containers, plus networks and dangling images.
If I want to clean volumes along with the other system components, I use docker system prune --volumes. In this case unused volumes will be removed too, so be careful: you may lose data.
I just inspected my /var/lib/docker/volumes folder and discovered that it is bursting with folders named as Docker UUIDs, each of which contains a config.json file with contents along the lines of
{"ID":"UUID","Path":"/path/to/mounted/volume","IsBindMount":true,"Writable":true}
where
/path/to/mounted/volume
is the path to a folder on the host that was mounted onto a docker container with the -v switch at some point. I have such folders dating back to the start of my experiments with Docker, i.e. about 3 weeks ago.
The containers in question were stopped and docker rm'ed a long time ago, so as far as I can tell those entries are well past their sell-by date. This raises the question: is the leftover I am seeing a bug, or does one need to manually discard such entries from /var/lib/docker/volumes?
For Docker 1.9 and up there's a native way:
List all orphaned volumes with
$ docker volume ls -qf dangling=true
Eliminate all of them with
$ docker volume rm $(docker volume ls -qf dangling=true)
From the Docker user guide:
If you remove containers that mount volumes, including the initial dbdata container, or the subsequent containers db1 and db2, the volumes will not be deleted. To delete the volume from disk, you must explicitly call docker rm -v against the last container with a reference to the volume. This allows you to upgrade, or effectively migrate data volumes between containers. - source
This is intentional behavior to avoid accidental data loss. You can use a tool like docker-cleanup-volumes to clean out unused volumes.
For Docker 1.13+ and the 17.xx CE/EE releases, use the volume prune command:
docker volume prune
Unlike the dangling=true query, this will not remove "remote" driver-based volumes.
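A minimal usage sketch:
$ docker volume prune -f   # remove all unused local volumes without a confirmation prompt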