How do I clean docker? - docker

An error occurred because there was not enough disk space.
I decided to check how much was free and came across this miracle (see the screenshot).
I cleaned up via docker system prune -a and then:
docker container prune -f
docker image prune -f
docker system prune -f
But only 9 GB was cleared.

Prune removes stopped containers and images that haven't been used for a while. I would suggest you run docker ps -a and then stop and remove all the containers you don't want with docker stop <container-id> and docker rm <container-id>. Then move on to the images: list them with docker images and remove the ones you don't need with docker rmi <image-name>.
Once you have stopped/removed all the unwanted containers, run docker system prune --volumes to remove the unused volumes, build cache and dangling images.
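For reference, a minimal sketch of that sequence (the container and image names are placeholders):
docker ps -a                       # list all containers, running and stopped
docker stop <container-id>         # stop any container you no longer need
docker rm <container-id>           # remove it once stopped
docker images                      # list images
docker rmi <image-name>            # remove images you no longer need
docker system prune --volumes      # remove stopped containers, unused networks, dangling images, build cache and unused volumes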

Don't forget to prune the volumes! Those almost always take up way more space than images and containers. docker volume prune. Be careful if you have anything of value stored in them though.

It could be the logging of your running containers. I've seen Docker logging fill entire disks; by default, containers can write unlimited logs.
I always add a logging configuration to my docker-compose files to restrict the total size. See the Docker logging configuration documentation.
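As an illustration, a minimal compose-level limit of the kind meant here might look like the following (the service name, image and size values are just examples):
services:
  web:
    image: nginx
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate each log file at 10 MB
        max-file: "3"     # keep at most three rotated files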

From the screenshot I think there's some confusion on what's taking up how much space. Overlay filesystems are assembled from directories on the parent filesystem. In this case, that parent filesystem is within /var/lib/docker which is part of / in your example. So the df output for each overlay filesystem is showing how much disk space is used/available within /dev/vda2. Each container isn't using 84G, that's just how much is used in your entire / filesystem.
To see how much space is being used by docker containers, images, volumes, etc, you can use docker system df. If you have running containers (likely from the above screenshot), docker will not clean those up, you need to stop them before they are eligible for pruning. Once containers have been deleted, you can then prune images and volumes. Note that deleting volumes deletes any data you were storing in that volume, so be sure it's not data you wanted to save.
It's not uncommon for docker to use a lot of disk space from downloading lots of images (docker makes it easy to try new things) and those are the easiest to prune when the containers are stopped. However what's harder to see are the logs that containers are outputting which will slowly fill the drive within the containers directory. For more details on how to clean the logs automatically, see this answer.

If you want, you could dig deeper at a granular level and pinpoint the file(s) causing this much disk usage:
du -sh /var/lib/docker/overlay2/<container hash>/merged/* | sort -h
This should help you reach a conclusion much more easily.

Related

Reduce the disk space Docker uses [duplicate]

(Post created on Oct 05 '16)
I noticed that every time I run an image and delete it, my system doesn't return to the original amount of available space.
The lifecycle I'm applying to my containers is:
> docker build ...
> docker run CONTAINER_TAG
> docker stop CONTAINER_TAG
> docker rm CONTAINER_ID
> docker rmi IMAGE_ID
[ running on a default mac terminal ]
The containers were in fact created from custom images, running Node and a standard Redis. My OS is OS X 10.11.6.
At the end of the day I see I keep losing MBs of free space. How can I tackle this problem?
EDITED POST
2020 and the problem persists, leaving this update for the community:
Today running:
macOS 10.13.6
Docker Engine 18.9.2
Docker Desktop CLI 2.0.0.3
The easiest way to work around the problem is to prune the system with the Docker utilities:
docker system prune -a --volumes
WARNING:
By default, volumes are not removed, to prevent important data from being deleted if there is currently no container using the volume. Use the --volumes flag when running the command to prune volumes as well.
Docker now has a single command to do that:
docker system prune -a --volumes
See the Docker system prune docs
There are three areas of Docker storage that can mount up, because Docker is cautious - it doesn't automatically remove any of them: exited containers, unused container volumes, unused image layers. In a dev environment with lots of building and running, that can be a lot of disk space.
These three commands clear down anything not being used:
docker rm $(docker ps -f status=exited -aq) - remove stopped containers
docker rmi $(docker images -f "dangling=true" -q) - remove image layers that are not used in any images
docker volume rm $(docker volume ls -qf dangling=true) - remove volumes that are not used by any containers.
These are safe to run: they won't delete image layers that are referenced by images, or data volumes that are used by containers. You can alias them, and/or put them in a cron job to regularly clean up the local disk.
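For example, they could be wrapped in a shell alias (purely illustrative; the alias name is arbitrary):
alias docker-clean='docker rm $(docker ps -f status=exited -aq); docker rmi $(docker images -f "dangling=true" -q); docker volume rm $(docker volume ls -qf dangling=true)'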
It is also worth mentioning that file size of docker.qcow2 (or Docker.raw on High Sierra with Apple Filesystem) can seem very large (~64GiB), larger than it actually is, when using the following command:
ls -klsh Docker.raw
This can be somewhat misleading because it outputs the logical size of the file rather than its physical size.
To see the physical size of the file you can use this command:
du -h Docker.raw
Source: https://docs.docker.com/docker-for-mac/faqs/#disk-usage
Why does the file keep growing?
If Docker is used regularly, the size of the Docker.raw (or Docker.qcow2) can keep growing, even when files are deleted.
To demonstrate the effect, first check the current size of the file on the host:
$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
$ ls -s Docker.raw
9964528 Docker.raw
Note the use of -s which displays the number of filesystem blocks actually used by the file. The number of blocks used is not necessarily the same as the file “size”, as the file can be sparse.
Next start a container in a separate terminal and create a 1GiB file in it:
$ docker run -it alpine sh
# and then inside the container:
/ # dd if=/dev/zero of=1GiB bs=1048576 count=1024
1024+0 records in
1024+0 records out
/ # sync
Back on the host check the file size again:
$ ls -s Docker.raw
12061704 Docker.raw
Note the increase in size from 9964528 to 12061704, where the increase of 2097176 512-byte sectors is approximately 1GiB, as expected. If you switch back to the alpine container terminal and delete the file:
/ # rm -f 1GiB
/ # sync
then check the file on the host:
$ ls -s Docker.raw
12059672 Docker.raw
The file has not got any smaller! Whatever has happened to the file inside the VM, the host doesn’t seem to know about it.
Next if you re-create the “same” 1GiB file in the container again and then check the size again you will see:
$ ls -s Docker.raw
14109456 Docker.raw
It’s got even bigger! It seems that if you create and destroy files in a loop, the size of the Docker.raw (or Docker.qcow2) will increase up to the upper limit (currently set to 64 GiB), even if the filesystem inside the VM is relatively empty.
The explanation for this odd behaviour lies with how filesystems typically manage blocks. When a file is to be created or extended, the filesystem will find a free block and add it to the file. When a file is removed, the blocks become “free” from the filesystem’s point of view, but no-one tells the disk device. Making matters worse, the newly-freed blocks might not be re-used straight away – it’s completely up to the filesystem’s block allocation algorithm. For example, the algorithm might be designed to favour allocating blocks contiguously for a file: recently-freed blocks are unlikely to be in the ideal place for the file being extended.
Since the block allocator in practice tends to favour unused blocks, the result is that the Docker.raw (or Docker.qcow2) will constantly accumulate new blocks, many of which contain stale data. The file on the host gets larger and larger, even though the filesystem inside the VM still reports plenty of free space.
TRIM
A TRIM command (or a DISCARD or UNMAP) allows a filesystem to signal to a disk that a range of sectors contain stale data and they can be forgotten. This allows:
an SSD drive to erase and reuse the space, rather than spend time shuffling it around; and
Docker for Mac to deallocate the blocks in the host filesystem, shrinking the file.
So how do we make this work?
Automatic TRIM in Docker for Mac
In Docker for Mac 17.11 there is a containerd “task” called trim-after-delete listening for Docker image deletion events. It can be seen via the ctr command:
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n ctr t ls
TASK PID STATUS
vsudd 1741 RUNNING
acpid 871 RUNNING
diagnose 913 RUNNING
docker-ce 958 RUNNING
host-timesync-daemon 1046 RUNNING
ntpd 1109 RUNNING
trim-after-delete 1339 RUNNING
vpnkit-forwarder 1550 RUNNING
When an image deletion event is received, the process waits for a few seconds (in case other images are being deleted, for example as part of a docker system prune) and then runs fstrim on the filesystem.
Returning to the example in the previous section, if you delete the 1 GiB file inside the alpine container
/ # rm -f 1GiB
then run fstrim manually from a terminal in the host:
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n fstrim /var/lib/docker
then check the file size:
$ ls -s Docker.raw
9965016 Docker.raw
The file is back to (approximately) its original size – the space has finally been freed!
Hopefully this is helpful; also check out the following macOS Docker utility scripts for this problem:
https://github.com/wanliqun/macos_docker_toolkit
Docker on Mac has an additional problem that is hurting a lot of people: the docker.qcow2 file can grow out of proportion (up to 64 GB) and won't ever shrink back down on its own.
https://github.com/docker/for-mac/issues/371
As stated in one of the replies by djs55, a fix is being planned, but it's not a quick fix. Quote:
The .qcow2 is exposed to the VM as a block device with a maximum size
of 64GiB. As new files are created in the filesystem by containers,
new sectors are written to the block device. These new sectors are
appended to the .qcow2 file causing it to grow in size, until it
eventually becomes fully allocated. It stops growing when it hits this
maximum size.
...
We're hoping to fix this in several stages: (note this is still at the
planning / design stage, but I hope it gives you an idea)
1) we'll switch to a connection protocol which supports TRIM, and
implement free-block tracking in a metadata file next to the qcow2.
We'll create a compaction tool which can be run offline to shrink the
disk (a bit like the qemu-img convert but without the dd if=/dev/zero
and it should be fast because it will already know where the empty
space is)
2) we'll automate running of the compaction tool over VM reboots,
assuming it's quick enough
3) we'll switch to an online compactor (which is a bit like a GC in a
programming language)
We're also looking at making the maximum size of the .qcow2
configurable. Perhaps 64GiB is too large for some environments and a
smaller cap would help?
Update 2019: many updates have been done to Docker for Mac since this answer was posted to help mitigate problems (notably: supporting a different filesystem).
Cleanup is still not fully automatic though, you may need to prune from time to time. For a single command that can help to cleanup disk space, see zhongjiajie's answer.
docker container prune
docker system prune
docker image prune
docker volume prune
Since nothing here was working for me, here's what I did. Check the file size:
ls -lhks ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
Then, in Docker Desktop, simply reduce the disk image size (I was using the raw format). It will warn that it will delete everything, but by the time you are reading this post, you probably already have. That creates a fresh, empty file.
I'm not sure if it is related to the current topic, but this has been a solution for me personally:
open Docker settings -> Resources -> Disk image size -> 16 GB
There are several options on how to limit docker diskspace, I'd start by limiting/rotating the logs: Docker container logs taking all my disk space
E.g. if you have a recent docker version, you can start it with an --log-opt max-size=50m option per container.
Also, if you've got old, unused containers, consider having a look at the Docker logs, which are located at /var/lib/docker/containers/*/*-json.log
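For example, to see which container logs are the largest (assuming the default json-file logging driver):
sudo sh -c 'du -sh /var/lib/docker/containers/*/*-json.log | sort -h'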
$ sudo docker system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache

Docker: "You don't have enough free space in /var/cache/apt/archives/"

I have a Dockerfile which, when I try to build it, results in the error
E: You don't have enough free space in /var/cache/apt/archives/
Note that the image sets up a somewhat complex project with several dependencies that require quite a lot of space; for example, the list includes Qt. This is only an issue during the construction of the image, and in the end I expect it to have a size of maybe 300 MB.
Now I found this: https://unix.stackexchange.com/questions/578536/how-to-fix-e-you-dont-have-enough-free-space-in-var-cache-apt-archives
Given that, what I tried so far is:
Freeing the space used by docker images so far by calling docker system prune
Removing unneeded installation files by calling sudo apt autoremove and sudo apt autoclean
There was also the suggestion to remove data in /var/log, which currently has a size of 3 GB. However, I am not the system administrator and am therefore wary of doing such a thing.
Is there any other way to increase that space?
And, preferably, is there a more sustainable solution, allowing me to build several images without having to search for spots where I can clean up the system?
Try this suggestion. You might have a lot of unused images that need to be deleted.
https://github.com/onyx-platform/onyx-starter/issues/5#issuecomment-276562225
Converting #Dre's suggestion into code, you might want to use the Docker prune commands for containers, images & volumes:
docker container prune
docker image prune
docker volume prune
You can use these commands in sequence:
docker container prune; docker image prune; docker volume prune
Free Space without removing your latest images
Use the following command to see the different types of reclaimable storage (the -v verbose option provides more detail):
docker system df
docker system df -v
Clear the build cache (the -a option will remove unused build cache):
docker builder prune -a
Remove dangling images (untagged image layers left over from old and previous builds):
docker rmi -f $(docker images -f "dangling=true" -q)
Increase Disk image size using Docker UI
Docker > Preferences > Resources > Advanced > adjust Disk image size > Apply & Restart
TLDR;
run
docker system prune -a --volumes
I tried to increase the disk space and prune the images, containers and volumes manually, but I kept running into the issue again and again. When I checked the disk consumption on my machine, I found a lot of space consumed by the ~/Library/Containers/com.docker.docker location. A system prune cleaned up a lot of space and docker builds started working again.
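For reference, the size of that directory can be checked with:
du -sh ~/Library/Containers/com.docker.docker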

How can I reinit docker layers?

Using Docker under Kubuntu 18, I ran out of free space on the device.
I run commands to clear space:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker-compose down --remove-orphans
docker system prune --force --volumes
As I was still out of free space, I opened the /var/lib/docker/overlay2/ directory and deleted a lot of subdirectories under it.
After that I got error :
$ docker-compose up -d --build
Creating network "master_default" with the default driver
ERROR: stat /var/lib/docker/overlay2/36af81b800ebb595a24b6c724318c1126932d2bfae61e2c98bfc65a203b2b928: no such file or directory
It looks like that was not a good way to free space. Which way is good in my case?
Is there a way to reinit my docker apps?
Thanks!
As I was still out of free space, I opened the /var/lib/docker/overlay2/ directory and deleted a lot of subdirectories under it.
At this point, the docker filesystem has been corrupted. To repair, the best you can do is backup anything you want to save, particularly any volumes, stop the docker engine (systemctl stop docker), delete the entire docker filesystem (rm -rf /var/lib/docker), and restart docker (systemctl start docker).
At that point the engine will be completely empty without any images, containers, etc. You'll need to pull/rebuild your images and recreate the containers you were running. Hopefully that's as easy as a docker-compose up.
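Sketched out (assuming a systemd-based host such as Kubuntu; back up any volume data you care about first):
docker volume ls                      # note any volumes whose data you want to back up beforehand
sudo systemctl stop docker            # stop the engine
sudo rm -rf /var/lib/docker           # delete the corrupted docker filesystem (removes ALL images, containers, volumes)
sudo systemctl start docker           # restart with a clean, empty state
docker-compose up -d --build          # rebuild images and recreate your containers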
/var/lib/docker/overlay2/ is where docker stores the image layers.
Now, docker system prune -a removes all the unused images, stopped containers and the build cache if I'm not wrong.
One piece of advice: since you are building images, check out Docker BuildKit. I know docker-compose added support for it, but I don't know if your version supports it. In short, building your images will be much faster.
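If you want to try it, BuildKit can be switched on per build via environment variables (a sketch; myimage is a placeholder tag, and the compose variable only helps if your docker-compose version supports it):
DOCKER_BUILDKIT=1 docker build -t myimage .
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build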

Docker taking much more space than sum of containers, images and volumes

Docker running on Ubuntu is taking 18G of disk space (on a partition of 20G), causing server crashes. Commands below show that there is a serious mismatch between "official" image, container and volume sizes and the docker folder size.
What causes this and how can I clean up?
I already tried docker system prune, which doesn't help.
du -sh /var/lib/docker
docker system df
du -sh /var/lib/docker/*
du -sh /var/lib/docker/containers/*
I was having the same problem. I solved my problem by deleting log files
sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
Link: How to clear the logs properly for a Docker container?
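To keep the logs from filling the disk again, a global size limit can be set in /etc/docker/daemon.json (a sketch; the values are examples, the daemon must be restarted afterwards, and the limit only applies to newly created containers):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}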
You have two containers that are eating your storage. Those containers must be running, because you said you already ran docker system prune. Otherwise /var/lib/docker/containers would be empty.
So check why are those two consuming so much. Probably they are logging too much to stdout.
docker system prune should clean up old unused layers, I have managed to get lots of disk space back several times with this. Hope it helps.

How to check the configuration of Docker (volume, devicemapper, ..) after deleting many files ?

I had a problem with storage, so I deleted many files, including some files in the Docker folders (/var/lib/docker).
Is there a way to check the Docker installation, the links between containers and volumes, etc.?
This is not the way to deal with storage. I would start from scratch to be sure everything is ok.
To properly deal with storage problems:
How to clean docker devicemapper folder properly ?
If you want to increase capacity:
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#increase-capacity-on-a-running-device
For earlier versions with the devicemapper driver, I used to do the following:
Remove untagged & dangling images using a simple shell/awk command (see the sketch after this list).
Always use the docker run --rm parameter if you don't wish to review the stopped container afterwards. This prevents it from taking up additional storage and cache. This is about efficient use of the container life cycle, and applies to all versions.
Make sure to use the -v parameter with docker rm to remove the volume associated with a stopped container.
Make sure to have a separate mount for /var/lib on the Docker host for more enterprise robustness.
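For illustration, the first three points could look roughly like this (a sketch; <none> is how untagged images show up in docker images, and <container-id> is a placeholder):
docker rmi $(docker images | awk '$1 == "<none>" {print $3}')    # remove images whose repository is <none> (untagged)
docker run --rm alpine echo hello                                # --rm deletes the container automatically when it exits
docker rm -v <container-id>                                      # -v also removes the anonymous volumes attached to the container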
I remember one incident, where even after deleting all volumes, I had to run xfs_fsr to reclaim all storage space on the /var/lib mount.
The Red Hat/Fedora family was affected the most; the only solid solution was to remove Docker, remove /var/lib/docker & reinstall.
Newer versions of Docker:
All operations are combined in the wrapper docker system prune -a, which cleans up containers, images, networks and the build cache (add --volumes to also remove unused volumes). (Caution: it will remove everything that is not associated with anything.)
