We have some Docker servers built, but when an old image/container gets renewed or removed, its old subvolumes are not deleted. We now end up with 10 running images/containers but 50+ subvolumes, which fills up the filesystem pretty quickly.
We would like to delete the subvolumes that are no longer needed, but how do we find the subvolumes that are no longer related to any image/container?
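A minimal sketch of how one might look into this, assuming the btrfs storage driver (which is what creates subvolumes under /var/lib/docker); docker system prune is the supported cleanup path, while the btrfs listing is only for inspection:

# Enumerate the subvolumes Docker has created (requires a btrfs-backed /var/lib/docker):
sudo btrfs subvolume list /var/lib/docker

# The supported way to drop layers no longer referenced by any container or image:
docker system prune        # add -a to also remove unused images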
thanks
Eric
Related
My Raspberry Pi suddenly had no more free space.
By looking at the folder sizes with the following command:
sudo du -h --max-depth=3
I noticed that a Docker folder eats an incredible amount of hard disk space. It's the folder
/var/lib/docker/containers/*
The folder seems to contain data for the currently running Docker containers. The first characters of each directory name correspond to the Docker container IDs. One folder grew dramatically fast. After stopping and removing the affected container, the related folder disappeared, so it apparently belonged to that container.
Problem solved.
I now wonder why this folder grows so much, and what the best way is to avoid running into the same problem again later.
I could write a bash script that removes the affected container at boot and runs it again, but better ideas are very welcome.
The container IDs are directories, so you can look inside to see what is using the space. The two main causes are:
Logs from stdout/stderr. These can be limited with logging options (see the example after this list). You can view them with docker logs.
Filesystem changes. The underlying image filesystem is never modified, so any write triggers a copy-on-write into the directory for that container ID. You can view these changes with docker diff.
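For example, a sketch of capping log growth and inspecting the writable layer; the max-size/max-file options belong to the json-file logging driver, and the 10m/3 values and nginx image are only illustrative:

# Cap one container's logs at 3 rotated files of 10 MB each:
docker run -d --log-opt max-size=10m --log-opt max-file=3 nginx

# Or set it daemon-wide in /etc/docker/daemon.json (then restart the daemon):
# { "log-driver": "json-file", "log-opts": { "max-size": "10m", "max-file": "3" } }

# See what the writable layer and logs of a container have accumulated:
docker diff <container>
docker logs --tail 100 <container>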
We frequently spin up some quick exploratory docker-based projects for a day or two that we'd like to quickly and easily discard when done, without disturbing our primary ongoing containers and images.
At any point in time we have quite a few 'stable' docker images and containers that we do NOT want to rebuild all the time.
How can one remove all the images and containers associated with the current directory's Dockerfile and docker-compose.yml file, without disturbing other projects' images and containers?
(All the Docker documentation I see shows how to discard them ALL, or requires finding and listing a bunch of IDs manually and discarding them manually. This seems primitive and time-wasting... In a project folder, the Dockerfile and docker-compose.yml file have all the info needed to identify "all images and containers that were created when building and running THIS project", so it seems there should be a quick command to remove that project's Docker dregs when done.)
As an example, right now I have rarely-revised Docker images and containers for several production Rails 5 apps that I'd like to keep untouched, but also a half-dozen folders of short-term Rails 6 experiments representing dozens of images and containers I'd like to discard.
Is there any way to tell Docker... "here's a Dockerfile and a docker-compose.yml file, stop and remove any/all containers and images that were created by them"?
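A sketch of the per-project teardown that Compose itself offers; run it from the project directory, since Compose scopes resources by project name (the flags below are the standard docker-compose down options):

docker-compose down --rmi local --volumes
# --rmi local   remove only images built by this project ("all" also removes pulled images)
# --volumes     remove the project's named volumes as well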
Suppose there is a machine that runs various Docker projects. Each Docker container is regularly replaced/stopped/started as soon as newer versions arrive from the build system.
What does a backup concept for such a machine look like?
Looking at similar questions [1], the correct path to a working backup/restore procedure is not immediately clear to me. My current understanding is something like:
Backup
Use scripts to create images and containers. Store/back up the scripts in your favorite version control system. Use version tags to pull Docker images; don't use the latest tag.
Exclude /var/lib/docker/overlay2 from backup (to prevent backing up dangling and temporary stuff)
Use named volumes only. Volumes can be saved and restored from backup. For databases, extra work has to be done. Possibly consider tarring volumes to a separate folder [2] (see the sketch after this list).
Run docker system prune daily to remove dangling stuff.
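A minimal sketch of the volume-to-tar step mentioned above, assuming a named volume called appdata and a host backup directory /srv/backup (both are just placeholders):

# Stream the named volume into a tarball on the host via a throwaway container:
docker run --rm \
  -v appdata:/data:ro \
  -v /srv/backup:/backup \
  alpine tar czf /backup/appdata.tar.gz -C /data .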
Restore
Make sure all named volumes are back in place (see the sketch after this list).
Fetch scripts from version control to recreate images as needed. Use docker run to recreate containers.
Application-specific tasks: restore databases from dumps, etc.
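And the matching restore sketch, again using the hypothetical appdata volume and /srv/backup path from above:

# Recreate the empty named volume, then unpack the tarball into it:
docker volume create appdata
docker run --rm \
  -v appdata:/data \
  -v /srv/backup:/backup:ro \
  alpine tar xzf /backup/appdata.tar.gz -C /data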
[1] How can I backup a Docker-container with its data-volumes?
[2] https://stackoverflow.com/a/48112996/1485527
Don't use the latest tag for your images. Set explicit tags (like v0.0.1, v0.0.2, etc.) and you can keep all of your versions in a Docker registry.
You should prefer to use stateless containers.
What about Docker volumes? You can use them: https://docs.docker.com/storage/volumes/
If you use a bind-mount volume, you can manually save your files in an archive for backup.
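For a bind mount the data already lives in an ordinary host directory, so a plain tar of that directory is enough; the paths below are placeholders:

# The container bind-mounts /srv/app/data from the host, so archive the host path directly:
tar czf /srv/backup/app-data-$(date +%F).tar.gz -C /srv/app data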
The Docker docs state:
Warning: Do not directly manipulate any files or directories within /var/lib/docker/. These files and directories are managed by Docker.
Let's say someone hadn't read that hint and deleted some files from /var/lib/docker/aufs/diff to free up some disk space. These files didn't live in a Docker volume and are not part of the original Docker image, but have been created in the container's writable layer. Restarting the given container frees up the disk space, but are there any known side effects?
And for next time: does removing that kind of file or directory from within the container (via docker exec .. rm ..) result in proper removal, or are they only marked as deleted? The documentation currently doesn't describe this special case.
Restarting the given container frees up the disk space but are there any known side effects?
As you stated in your question, you should not "manipulate any files or directories within /var/lib/docker/": any side effect may appear, and no documentation traces any of this. It's internal Docker plumbing which may change significantly between Docker versions; it's not supposed to be exposed to end users nor tampered with. You could read the Docker code for your Docker version and all its dependencies to understand what happened, but it's not really practical :-)
are there any known side effects?
There may be side effects. I insist on the may, as anything can happen depending on your Docker version and configuration. Even if everything seems to be working, some things may be broken.
A well-known side effect is Docker installation corruption, which may present itself in various fashions: random container crashes, data loss, unexplained bugs, etc.
Best-case scenario: you just discarded some data in your container and everything will work fine in the future.
Not-so-good scenario: you actually broke something in your installation and corrupted it; you'd be better off re-installing Docker entirely.
Does removing that kind of files or directories from within the container (via docker exec .. rm ..) result in a proper removal or are they only marked as deleted?
Deleting a file in the container will not always remove it from the system; it depends on the storage driver you are using. The documentation has a section on how each driver handles writes and deletes (see the sketch after this list for how to check which driver is in use):
AUFS - it seems implied that the file is deleted from the container layer; AUFS copies the file up from the image layer to work on it, and the deletion should then remove that copy:
When a file is deleted within a container, a whiteout file is created in the container layer. The version of the file in the image layer is not deleted [...] Subsequent writes to the same file operate against the copy of the file already copied up to the container.
BTRFS - deleted and space reclaimed, doc is quite clear:
If a container creates a file and then deletes it, this operation is performed in the Btrfs filesystem itself and the space is reclaimed.
devicemapper - may not be deleted depending on config:
if you are using direct-lvm, the blocks are freed. If you use loop-lvm, the blocks may not be freed
OverlayFS - it seems implied that the file is deleted, but the image-layer file is kept:
When a file is deleted within a container, a whiteout file is created in the container (upperdir). The version of the file in the image layer (lowerdir) is not deleted
ZFS - deleted:
If you create and then delete a file or directory within the container’s writable layer, the blocks are reclaimed by the zpool.
VFS - uses a full copy of the previous layer and works directly in a directory representing that layer; a deletion in the container should therefore delete the file from the related directory on the host machine.
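Since the behaviour depends on the driver, a quick sketch of how to check which storage driver a daemon uses and how much space is reclaimable (both are standard Docker CLI commands):

# Which storage driver is this daemon running on?
docker info --format '{{.Driver}}'

# Per-image / per-container / per-volume disk usage, including reclaimable space:
docker system df -v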
The documentation currently doesn't describe this special case.
Yes, and it probably won't ;)
I'm having a problem with Docker's space usage. I have
/var/lib/docker/aufs/diff/e20ed0ec78d30267e8cf855c6311b02089b6086ea149c21997a3e6cb9757ecd4/tmp/registry-dev/docker/registry/v2/blobs# du -sh
4.6G .
Can I find which container this folder belongs to? I have a Docker registry running, but inside it I have
/var/lib/registry/docker/registry/v2/blobs# du -sh
465M .
I suspect a Docker upgrade (I used the migration tool, https://docs.docker.com/engine/migration/) could have left it behind; or, since I was building the Docker registry myself before, moving to the pre-compiled registry left this.
Can I somehow check which container it belongs to? Or maybe it doesn't belong to any?
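One way to investigate, strictly a sketch: on Docker versions that keep layer metadata under /var/lib/docker/image/aufs/layerdb (an internal layout that can change between releases), the directory name under aufs/diff is a cache-id that can sometimes be grepped back to a container or image layer:

DIFF_ID=e20ed0ec78d30267e8cf855c6311b02089b6086ea149c21997a3e6cb9757ecd4

# Is it the writable layer of a (running or stopped) container?
grep -l "$DIFF_ID" /var/lib/docker/image/aufs/layerdb/mounts/*/mount-id 2>/dev/null
# A match's parent directory name is the full container ID (compare with docker ps -a --no-trunc).

# Is it a read-only image layer?
grep -rl "$DIFF_ID" /var/lib/docker/image/aufs/layerdb/sha256/*/cache-id 2>/dev/null

# If neither matches, the directory is likely an orphan left behind, e.g. by the migration.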
I had the same issue and spotify/docker-gc fixed it. Clone it, then follow "Running as a Docker Image".
spotify/docker-gc alone is not going to fix it, but it makes turning things around much easier. The first thing you need to do is stop committing onto the same image; as I realized, this builds up a huge diff dependency chain. What I did was commit all my running containers into different image names and tags, then stop and restart the containers. After that, I deleted the old images manually and ran spotify/docker-gc, saving about 20% of disk space. When I had run spotify/docker-gc before committing to new images, nothing happened.
If you use spotify/docker-gc, please do a DRY_RUN first.
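A sketch of such a dry run, roughly following the "Running as a Docker Image" section of the spotify/docker-gc README (check the README for the exact invocation your version expects):

# Report what would be removed without deleting anything:
docker run --rm \
  -e DRY_RUN=1 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  spotify/docker-gc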