How to remove docker containers that are not running?

Is it possible to remove containers that aren't running?
I know that, for example, this will remove containers that have exited:
docker rm `docker ps -q -f status=exited`
But I would like to know how I can remove all containers that are not running, not only the exited ones.

Use the docker container prune command; it removes all stopped containers. You can read more about this command in the official docs here: https://docs.docker.com/engine/reference/commandline/container_prune/.
Similarly, Docker has docker network prune, docker image prune and docker volume prune commands to prune networks, images and volumes.
I use docker system prune most of the time; it cleans unused containers, plus networks and dangling images.
If I want to clean volumes along with the system components, then I use docker system prune --volumes. In this case unused volumes will be removed too, so be careful: you may lose data.
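The prune family can be tried step by step, from least to most destructive. Below is a hedged sketch of that ladder as a small POSIX shell script; the DRY_RUN guard and the run helper are hypothetical additions (not part of docker) so you can review each command before anything is actually deleted:

```shell
#!/bin/sh
# Hypothetical cleanup helper: echo each prune step and execute it only
# when DRY_RUN=0 is set. All subcommands are standard docker prune commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    echo "+ $*"                  # show the command that would run
    [ "$DRY_RUN" = 1 ] || "$@"   # execute only outside dry-run mode
}

run docker container prune -f            # stopped containers
run docker network prune -f              # unused networks
run docker image prune -f                # dangling images only
run docker system prune -f               # all of the above in one go
# run docker system prune -f --volumes   # adds unused volumes: data loss risk
```

Run it as-is to see the plan, then run it again with DRY_RUN=0 to actually prune.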


Can I run docker system prune -a without downtime

I want to execute docker system prune -a to clean up space. Could someone please tell me whether I can execute this command without running docker-compose down first (i.e. without removing the containers and without downtime)?
Docker documentation https://docs.docker.com/engine/reference/commandline/system_prune/ says:
Remove all unused containers, networks, images (both dangling and unreferenced), and optionally, volumes.
About -a option it says:
Remove all unused images not just dangling ones
So this command definitely does not produce any downtime: running containers, and the images and networks they use, count as "in use" and are left alone.
You can execute it without "docker-compose down".
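Before running it on a busy host, you can check what is currently in use and how much space the prune would reclaim; a minimal sketch, assuming only the standard docker CLI:

```shell
docker ps --format '{{.Names}}'   # running containers: these survive the prune
docker system df                  # current disk usage, incl. reclaimable space
docker system prune -a -f         # then prune; -f skips the confirmation prompt
```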

Old files remain in new docker container

I have a basic Hyperledger Fabric network where the nodes uses docker containers.
Inside each node I have done some file creation and editing. However, I now want to restart the network with clean containers.
I have tried to shut down all containers, images and networks, then run the docker prune command.
I have also tried to delete all volumes.
However, once I re-create the Fabric network and bash into a container, the old files that I created are still there. I never created those files on the host machine, only inside that container, so I do not understand how those files can still exist. I even tried to delete the images.
The system is Ubuntu 18.04.
Can anybody spot a potential fix for this?
Delete the volumes of the containers after stopping and removing the containers with the commands below:
docker kill $(docker ps -aq)
docker rm $(docker ps -aq)
Then remove the volumes of the containers with this command:
docker system prune --volumes -f
This removes all the unused networks and volumes.
Hope this helps.
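If the Fabric network is brought up with docker-compose, a hedged alternative is to let compose tear down its own resources, including the named volumes declared in its compose file (the -v flag is what removes them):

```shell
docker-compose down -v   # stop and remove the containers plus their named volumes
docker volume ls         # verify the old fabric volumes are gone
```

Stale files that reappear in "fresh" containers usually live in such a named volume rather than in the image, which is why deleting images alone does not help.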

How to use `docker volume prune`? (`docker volume prune` removing all volumes even when container is running)

On the docker documentation, it says the following:
docker volume prune === Remove all unused local volumes
To test that, I've set up a MongoDB container with the official latest image from Docker Hub. Doing so created 2 volumes behind the scenes, which are probably needed by the container to store its data.
When running docker volume ls now, I can see the two volumes with random names.
So let's say I would have multiple containers with volumes using random names. Now it would get difficult to know which of these are still in use and so I was expecting docker volume prune to help out here.
So I executed the command, expecting docker volume prune to delete nothing as my container is up and running with MongoDb.
But what actually happened is that all my volumes got removed. After that my container shut down and could not be restarted.
I tried recreating this multiple times, and every time, even though my container is running, the command just deletes all volumes.
Anyone can explain that behavior?
Update:
The following command, with the container id of my MongoDB container, shows me the 2 volumes:
docker inspect -f '{{ .Mounts }}' *CONTAINER_ID*
So from my understanding docker knows that these volumes and the container belong together.
When I ask docker to show me the dangling volumes, it shows me the same volumes again:
docker volume ls --filter dangling=true
So when they are dangling it makes sense to me that the prune command removes them. But I clearly have the volumes in use with my container, so that's not clear to me.
You can remove all existing containers and then remove all volumes:
docker rm -vf $(docker ps -aq) && docker volume prune -f
To remove only unused volumes:
docker volume prune -f
or
docker volume rm $(docker volume ls -qf dangling=true)
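Before pruning, you can also check whether a particular volume is really referenced by any container; `mydata` below is a hypothetical volume name, and the `volume` filter is a standard `docker ps` filter:

```shell
# List every container (running or stopped) that mounts the volume "mydata";
# an empty result means no container references it, i.e. it is dangling.
docker ps -a --filter volume=mydata --format '{{.Names}} {{.Status}}'
```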

How to clean up Docker

I've just noticed that I ran out of disk space on my laptop. Quite a lot is used by Docker as found by mate-disk-usage-analyzer:
The docker/aufs/diff folder contains 152 folders ending in -removing.
I already ran the following commands to clean up
Kill all running containers:
# docker kill $(docker ps -q)
Delete all stopped containers
# docker rm $(docker ps -a -q)
Delete all images
# docker rmi $(docker images -q)
Remove unused data
# docker system prune
And some more
# docker system prune -af
But the screenshot was taken after I executed those commands.
What is docker/aufs/diff, why does it consume that much space and how do I clean it up?
I have Docker version 17.06.1-ce, build 874a737. It happened after a cleanup, so this is definitely still a problem.
The following is a radical solution. IT DELETES ALL YOUR DOCKER STUFF. INCLUDING VOLUMES.
$ sudo su
# service docker stop
# cd /var/lib/docker
# rm -rf *
# service docker start
See https://github.com/moby/moby/issues/22207#issuecomment-295754078 for details
It might not be /var/lib/docker
The docker location might be different in your case. You can use a disk usage analyzer (such as mate-disk-usage-analyzer) to find the folders which need most space.
See Where are Docker images stored on the host machine?
This dir is where container rootfs layers are stored when using the AUFS storage driver (default if the AUFS kernel modules are loaded).
If you have a bunch of *-removing dirs, this is caused by a failed removal attempt. This can happen for various reasons, the most common is that an unmount failed due to device or resource busy.
Before Docker 17.06, if you used docker rm -f to remove a container all container metadata would be removed even if there was some error somewhere in the cleanup of the container (e.g., failing to remove the rootfs layer).
In 17.06 it will no longer remove the container metadata and instead flag the container with a Dead status so you can attempt to remove it again.
You can safely remove these directories, but I would stop docker first, then remove, then start docker back up.
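That suggestion can be sketched as the following sequence, assuming the default /var/lib/docker root and the service init wrapper used elsewhere in this thread:

```shell
service docker stop
rm -rf /var/lib/docker/aufs/diff/*-removing   # leftovers from failed removals
service docker start
```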
Docker takes up a lot of gigabytes in three main areas: images, volumes and logs.
First, check for downloaded and built images.
Clean unused and dead images by running the command below:
docker image prune -a
Docker also creates a lot of volumes; some of them come from dead containers and are no longer used.
Clean the volumes and reclaim the space using:
docker system prune -af && \
docker image prune -af && \
docker system prune -af --volumes && \
docker system df
Docker container logs are also notorious for generating GBs of data.
overlay2 storage for container layers is another place where GBs get eaten up.
A better way is to calculate the size of the docker image and then cap the container's storage and logs with the options below.
These features require Docker v19 and above.
docker run -it --storage-opt size=2G --log-opt mode=non-blocking --log-opt max-buffer-size=4m fedora /bin/bash
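Rather than repeating the log options on every docker run, the limits can be made the daemon-wide default. A sketch of /etc/docker/daemon.json with the standard json-file log options (restart the docker daemon after editing it):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Note that this applies only to containers created after the change.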
Note that this is actually a known, yet still pending, issue: https://github.com/moby/moby/issues/37724
If you have the same issue, I recommend to "Thumbs Up" the issue on GitHub so that it gets addressed soon.
I had the same issue.
In my case the solution was:
View all images:
docker images
Remove old unused images:
docker rmi IMAGE_ID
Possibly you will also need to prune stopped containers:
docker container prune
p.s. docker --help is a good resource :)

Orphaned Docker mounted host volumes?

I just inspected my /var/lib/docker/volumes folder and discovered that is bursting with folders named as Docker UUIDs each of which contain a config.json file with contents along the lines of
{"ID":"UUID","Path":"/path/to/mounted/volume","IsBindMount":true,"Writable":true}
where
/path/to/mounted/volume
is the path to the folder on the host that was mounted on to a docker container with the -v switch at some point. I have such folders dating back to the start of my experiments with Docker, i.e. about 3 weeks ago.
The containers in question were stopped and docker rm'ed a long time ago, so as far as I can tell those entries are well past their sell-by date. This begs the question: is the leftover data I am seeing a bug, or does one need to manually discard such entries from /var/lib/docker/volumes?
For Docker 1.9 and up there's a native way:
List all orphaned volumes with
$ docker volume ls -qf dangling=true
Eliminate all of them with
$ docker volume rm $(docker volume ls -qf dangling=true)
From the Docker user guide:
If you remove containers that mount volumes, including the initial dbdata container, or the subsequent containers db1 and db2, the volumes will not be deleted. To delete the volume from disk, you must explicitly call docker rm -v against the last container with a reference to the volume. This allows you to upgrade, or effectively migrate data volumes between containers. - source
This is intentional behavior to avoid accidental data loss. You can use a tool like docker-cleanup-volumes to clean out unused volumes.
For Docker 1.13+ and the ce/ee 17+ release numbers, use the volume prune command
docker volume prune
Unlike the dangling=true query, this will not remove "remote" driver-based volumes.
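If some volumes should survive pruning, they can be labeled at creation time and excluded via the standard prune label filter; `keep` here is a hypothetical label name:

```shell
docker volume create --label keep=true important-data   # label at creation time
docker volume prune -f --filter "label!=keep"           # prune only unlabeled volumes
```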
