Reduce the disk space Docker uses [duplicate] - docker

(Post created on Oct 05 '16)
I noticed that every time I run an image and delete it, my system doesn't return to the original amount of available space.
The lifecycle I'm applying to my containers is:
> docker build ...
> docker run CONTAINER_TAG
> docker stop CONTAINER_TAG
> docker rm CONTAINER_ID
> docker rmi IMAGE_ID
[ running on a default mac terminal ]
The containers were in fact created from custom images, running Node and a standard Redis. My OS is OS X 10.11.6.
At the end of the day I see I keep losing megabytes of free space. How can I address this problem?
EDITED POST
2020 and the problem persists, leaving this update for the community:
Today running:
macOS 10.13.6
Docker Engine 18.09.2
Docker Desktop CLI 2.0.0.3
The easiest way to work around the problem is to prune the system with the Docker utilities.
docker system prune -a --volumes

WARNING:
By default, volumes are not removed to prevent important data from being deleted if there is currently no container using the volume. Use the --volumes flag when running the command to prune volumes as well.
Docker now has a single command to do that:
docker system prune -a --volumes
See the Docker system prune docs

There are three areas of Docker storage that can mount up, because Docker is cautious - it doesn't automatically remove any of them: exited containers, unused container volumes, unused image layers. In a dev environment with lots of building and running, that can be a lot of disk space.
These three commands clear down anything not being used:
docker rm $(docker ps -f status=exited -aq) - remove stopped containers
docker rmi $(docker images -f "dangling=true" -q) - remove image layers that are not used in any images
docker volume rm $(docker volume ls -qf dangling=true) - remove volumes that are not used by any containers.
These are safe to run; they won't delete image layers that are referenced by images, or data volumes that are used by containers. You can alias them, and/or put them in a cron job to regularly clean up the local disk, as sketched below.
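For example, here is a minimal sketch of wrapping those three commands in shell aliases and a nightly cron entry; the alias names and the 03:00 schedule are just illustrative assumptions:
# ~/.bashrc (or ~/.zshrc): aliases for the three cleanup commands
alias docker-clean-containers='docker rm $(docker ps -f status=exited -aq)'
alias docker-clean-images='docker rmi $(docker images -f "dangling=true" -q)'
alias docker-clean-volumes='docker volume rm $(docker volume ls -qf dangling=true)'
# /etc/cron.d/docker-cleanup: run the same cleanup every night at 03:00
0 3 * * * root docker rm $(docker ps -f status=exited -aq); docker rmi $(docker images -f "dangling=true" -q); docker volume rm $(docker volume ls -qf dangling=true)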

It is also worth mentioning that file size of docker.qcow2 (or Docker.raw on High Sierra with Apple Filesystem) can seem very large (~64GiB), larger than it actually is, when using the following command:
ls -klsh Docker.raw
This can be somewhat misleading because it will output the logical size of the file rather than its physical size.
To see the physical size of the file you can use this command:
du -h Docker.raw
Source: https://docs.docker.com/docker-for-mac/faqs/#disk-usage

Why does the file keep growing?
If Docker is used regularly, the size of the Docker.raw (or Docker.qcow2) can keep growing, even when files are deleted.
To demonstrate the effect, first check the current size of the file on the host:
$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
$ ls -s Docker.raw
9964528 Docker.raw
Note the use of -s which displays the number of filesystem blocks actually used by the file. The number of blocks used is not necessarily the same as the file “size”, as the file can be sparse.
Next start a container in a separate terminal and create a 1GiB file in it:
$ docker run -it alpine sh
# and then inside the container:
/ # dd if=/dev/zero of=1GiB bs=1048576 count=1024
1024+0 records in
1024+0 records out
/ # sync
Back on the host check the file size again:
$ ls -s Docker.raw
12061704 Docker.raw
Note the increase in size from 9964528 to 12061704, where the increase of 2097176 512-byte sectors is approximately 1GiB, as expected. If you switch back to the alpine container terminal and delete the file:
/ # rm -f 1GiB
/ # sync
then check the file on the host:
$ ls -s Docker.raw
12059672 Docker.raw
The file has not got any smaller! Whatever has happened to the file inside the VM, the host doesn’t seem to know about it.
Next, if you re-create the “same” 1GiB file in the container and then check the size again, you will see:
$ ls -s Docker.raw
14109456 Docker.raw
It’s got even bigger! It seems that if you create and destroy files in a loop, the size of the Docker.raw (or Docker.qcow2) will increase up to the upper limit (currently set to 64 GiB), even if the filesystem inside the VM is relatively empty.
The explanation for this odd behaviour lies with how filesystems typically manage blocks. When a file is to be created or extended, the filesystem will find a free block and add it to the file. When a file is removed, the blocks become “free” from the filesystem’s point of view, but no-one tells the disk device. Making matters worse, the newly-freed blocks might not be re-used straight away – it’s completely up to the filesystem’s block allocation algorithm. For example, the algorithm might be designed to favour allocating blocks contiguously for a file: recently-freed blocks are unlikely to be in the ideal place for the file being extended.
Since the block allocator in practice tends to favour unused blocks, the result is that the Docker.raw (or Docker.qcow2) will constantly accumulate new blocks, many of which contain stale data. The file on the host gets larger and larger, even though the filesystem inside the VM still reports plenty of free space.
TRIM
A TRIM command (or a DISCARD or UNMAP) allows a filesystem to signal to a disk that a range of sectors contain stale data and they can be forgotten. This allows:
an SSD drive to erase and reuse the space, rather than spend time shuffling it around; and
Docker for Mac to deallocate the blocks in the host filesystem, shrinking the file.
So how do we make this work?
Automatic TRIM in Docker for Mac
In Docker for Mac 17.11 there is a containerd “task” called trim-after-delete listening for Docker image deletion events. It can be seen via the ctr command:
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n ctr t ls
TASK PID STATUS
vsudd 1741 RUNNING
acpid 871 RUNNING
diagnose 913 RUNNING
docker-ce 958 RUNNING
host-timesync-daemon 1046 RUNNING
ntpd 1109 RUNNING
trim-after-delete 1339 RUNNING
vpnkit-forwarder 1550 RUNNING
When an image deletion event is received, the process waits for a few seconds (in case other images are being deleted, for example as part of a docker system prune) and then runs fstrim on the filesystem.
Returning to the example in the previous section, if you delete the 1 GiB file inside the alpine container
/ # rm -f 1GiB
then run fstrim manually from a terminal in the host:
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n fstrim /var/lib/docker
then check the file size:
$ ls -s Docker.raw
9965016 Docker.raw
The file is back to (approximately) its original size – the space has finally been freed!
Hopefully this blog post will be helpful. Also check out the following macOS Docker utility scripts for this problem:
https://github.com/wanliqun/macos_docker_toolkit

Docker on Mac has an additional problem that is hurting a lot of people: the docker.qcow2 file can grow out of proportion (up to 64 GB) and won't ever shrink back down on its own.
https://github.com/docker/for-mac/issues/371
As stated in one of the replies by djs55, this is planned to be fixed, but it's not a quick fix. Quote:
The .qcow2 is exposed to the VM as a block device with a maximum size
of 64GiB. As new files are created in the filesystem by containers,
new sectors are written to the block device. These new sectors are
appended to the .qcow2 file causing it to grow in size, until it
eventually becomes fully allocated. It stops growing when it hits this
maximum size.
...
We're hoping to fix this in several stages: (note this is still at the
planning / design stage, but I hope it gives you an idea)
1) we'll switch to a connection protocol which supports TRIM, and
implement free-block tracking in a metadata file next to the qcow2.
We'll create a compaction tool which can be run offline to shrink the
disk (a bit like the qemu-img convert but without the dd if=/dev/zero
and it should be fast because it will already know where the empty
space is)
2) we'll automate running of the compaction tool over VM reboots,
assuming it's quick enough
3) we'll switch to an online compactor (which is a bit like a GC in a
programming language)
We're also looking at making the maximum size of the .qcow2
configurable. Perhaps 64GiB is too large for some environments and a
smaller cap would help?
Update 2019: many updates have been done to Docker for Mac since this answer was posted to help mitigate problems (notably: supporting a different filesystem).
Cleanup is still not fully automatic though, you may need to prune from time to time. For a single command that can help to cleanup disk space, see zhongjiajie's answer.

docker container prune
docker system prune
docker image prune
docker volume prune

Since nothing here was working for me, here's what I did. Check file size:
ls -lhks ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
Then in Docker Desktop simply reduce the disk image size (I was using the raw format). It will warn that it will delete everything, but if you are reading this post, you probably already have nothing left to lose. That creates a fresh, empty file.

I'm not sure if it is related to the current topic, but this has been a solution for me personally:
open Docker settings -> Resources -> Disk image size -> 16 GB

There are several options for limiting Docker disk space; I'd start by limiting/rotating the logs: Docker container logs taking all my disk space
E.g., if you have a recent Docker version, you can start containers with a --log-opt max-size=50m option (per container).
Also, if you've got old, unused containers, consider having a look at the docker logs, which are located at /var/lib/docker/containers/*/*-json.log
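If you'd rather set the limit once for all containers, a minimal sketch of the equivalent daemon configuration (the path is the standard Linux location; the values are just examples, and they only apply to containers created after a daemon restart):
# /etc/docker/daemon.json - rotate each container's json-file log at 50 MB, keep 3 files
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}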

$ sudo docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache

Related

Docker: "You don't have enough free space in /var/cache/apt/archives/"

I have a Dockerfile which, when I try to build it, results in the error
E: You don't have enough free space in /var/cache/apt/archives/
Note that the image sets up a somewhat complex project with several dependencies that require quite a lot of space. For example, the list includes Qt. This is only a thing during the construction of the image, and in the end, I expect it to have a size of maybe 300 MB.
Now I found this: https://unix.stackexchange.com/questions/578536/how-to-fix-e-you-dont-have-enough-free-space-in-var-cache-apt-archives
Given that, what I tried so far is:
Freeing the space used by docker images so far by calling docker system prune
Removing unneeded installation files by calling sudo apt autoremove and sudo apt autoclean
There was also the suggestion to remove data in /var/log, which currently has a size of 3 GB. However, I am not the system administrator and am thus wary of doing such a thing.
Is there any other way to increase that space?
And, preferably, is there a more sustainable solution, allowing me to build several images without having to search for spots where I can clean up the system?
Try this suggestion. You might have a lot of unused images that need to be deleted.
https://github.com/onyx-platform/onyx-starter/issues/5#issuecomment-276562225
Converting @Dre's suggestion into code, you might want to use the Docker prune commands for containers, images & volumes:
docker container prune
docker image prune
docker volume prune
You can use these commands in sequence:
docker container prune; docker image prune; docker volume prune
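If you want to run that sequence non-interactively (for example from a script), each prune command accepts a -f/--force flag to skip the confirmation prompt; a minimal sketch:
docker container prune -f && docker image prune -f && docker volume prune -f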
Free Space without removing your latest images
Use the following command to see the different types of reclaimable storage (the -v verbose option provides more detail):
docker system df
docker system df -v
Clear the build cache (the -a option will remove unused build cache):
docker builder prune -a
Remove dangling images (untagged images, old and previous image builds):
docker rmi -f $(docker images -f "dangling=true" -q)
Increase Disk image size using Docker UI
Docker > Preferences > Resources > Advanced > adjust Disk image size > Apply & Restart
TLDR;
run
docker system prune -a --volumes
I tried to increase the disk space and prune the images, containers and volumes manually, but kept facing the issue again and again. When I checked the disk consumption on my machine, I found a lot of space consumed by the ~/Library/Containers/com.docker.docker location. Doing a system prune cleaned up a lot of space, and docker builds started working again.

Docker Cleanup - Can I delete old directories under /var/lib/docker/containers

I have 16 docker containers running on my system which store a lot of data under /var/lib/docker/overlay2/<id> (around 30 GB per directory, so 30*16 GB in total). Space is primarily consumed by the diff and merged directories inside them.
Every time I do a docker-compose down followed by docker-compose up, it creates another set of 16 directories and starts storing data under those. But it does not clean up the old overlay2 directories. This leads to a space crunch.
Please let me know if I can rm -rf the old directories under /var/lib/docker/overlay2/, or how else I can free up space.
Do I need to wait for a couple of hours after docker-compose down to reclaim space?
Note: I did a docker system prune -a also.
1) Please let me know if I can rm -rf the old directories under /var/lib/docker/overlay2/, or how else I can free up space.
Only you can decide that.
When I suspect unused data in a /var/lib/docker/overlay2/foo123 folder, I first inspect the content of that folder. It generally contains image content. Based on the modification dates and contents of the files, I have a high probability of determining whether the folder is unused.
When I am sure that it is unused, I delete it with rm -rf .... Note that the delete may fail because of mounted files; in that case I identify the mounts and unmount them first.
If I am not sure that it is unused, I make a backup first, such as cp -a /var/lib/docker/overlay2/foo123 /var/lib/docker/overlay2/backup-foo123, before deleting.
2) Do I need to wait for a couple of hours after docker-compose down to reclaim space?
With just Docker, not at all. Data has to be explicitly removed (for example with docker container rm FOO, docker system prune, docker system prune -a, docker image rm FOO, and so on).
But Docker is not perfect, so sometimes you may have stale data.
Generally, from time to time, I inspect Docker's big folders with du -sh */ | sort -h.

Reclaim disk space after removing file from Docker container

I've successfully copied a large database backup file (35 GB) into the Docker container and restored my database locally (following this walkthrough). I want to now delete that .bak file from the Docker container to reclaim the space. I did that by running sudo docker exec sql_server rm -rf /var/opt/mssql/backup/example.bak but this didn’t reclaim the space - my Docker.raw file remains about 76 GB. When I run docker system df it says my containers are 45 GB though. I tried docker system prune -a but this reclaimed 0B. Restarting Docker didn't do the trick. How do I shrink that now that the file is removed in order to gain that space back?
I did that by running sudo docker exec sql_server rm -rf /var/opt/mssql/backup/example.bak but this didn’t reclaim the space
Whether this will free up space depends on whether the file exists only in the container or if it exists in your image. Once a file exists in the image, deleting it in the container doesn't modify the image itself. Instead only the container filesystem is updated with an indication that the file is deleted from the view of that container. This is how the layered filesystem works under the covers.
When I run docker system df it says my containers are 45 GB though
You can examine this a bit deeper. For any specific container, you can run a docker container diff command on the container id to see the files that have been modified inside that container.
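For illustration, you could run it against the container from the question; the output below is purely hypothetical and only shows the format (A = added, C = changed, D = deleted):
$ docker container diff sql_server
C /var/opt/mssql
A /var/opt/mssql/data/example.mdf
D /var/opt/mssql/backup/example.bak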
I tried docker system prune -a but this reclaimed 0B.
This will not reclaim space from a running container. If the container is stopped, it will be deleted, and the image that started that container may also be deleted if nothing else points to it. Otherwise docker leaves running containers alone; there is no pruning it can do on the files inside a running container.
my Docker.raw file remains about 76 GB
This is a very key point; it suggests that you are running Docker on a Mac. All of the above steps may reduce disk space of the Linux environment that Docker runs on top of. However, the VM that Docker uses on Mac and Windows is mapped to a file that grows on demand as the VM needs it. From the Docker for Mac FAQ, the disk space reported for this file may not be accurate because of how sparse files work on Mac:
Docker.raw consumes an insane amount of disk space!
This is an illusion. Docker uses the raw format on Macs running the
Apple Filesystem (APFS). APFS supports sparse files, which compress
long runs of zeroes representing unused space. The output of ls is
misleading, because it lists the logical size of the file rather than
its physical size. To see the physical size, add the -ks switch; to
see the logical size in human readable form, add -lh:
$ cd ~/Library/Containers/com.docker.docker/Data/vms/0
$ ls -klsh Docker.raw
2333548 -rw-r--r--@ 1 akim staff 64G Dec 13 17:42 Docker.raw
In this listing, the logical size is 64GB, but the physical size is
only 2.3GB.
Alternatively, you may use du (disk usage):
$ du -h Docker.raw
2,2G Docker.raw
I'd also recommend looking at how much disk space is used inside the Docker VM with:
sudo docker run --rm -v /var/lib/docker:/host/var/lib/docker:ro \
busybox df -h /host/var/lib/docker
According to this article, you can flatten a container with:
# export the container to a tarball
docker export <CONTAINER ID> > /home/export.tar
# import it back
cat /home/export.tar | docker import - some-name:latest
docker export exports the container’s filesystem as a tar archive and docker import imports the contents from a tarball to create a filesystem image.
You can avoid using a temporary file by piping directly from docker export to docker import with:
docker export <CONTAINER ID> | docker import - flatten-container:latest

Is it safe to clean docker/overlay2/

I got some docker containers running on AWS EC2, the /var/lib/docker/overlay2 folder grows very fast in disk size.
I'm wondering if it is safe to delete its content?
or if docker has some kind of command to free up some disk usage.
UPDATE:
I actually tried docker system prune -a already, which reclaimed 0 KB.
Also my /docker/overlay2 disk size is much larger than the output from docker system df
After reading docker documentation and BMitch's answer, I believe it is a stupid idea to touch this folder and I will try other ways to reclaim my disk space.
Docker uses /var/lib/docker to store your images, containers, and local named volumes. Deleting this can result in data loss and possibly stop the engine from running. The overlay2 subdirectory specifically contains the various filesystem layers for images and containers.
To cleanup unused containers and images, see docker system prune. There are also options to remove volumes and even tagged images, but they aren't enabled by default due to the possibility of data loss:
$ docker system prune --help

Usage:  docker system prune [OPTIONS]

Remove unused data

Options:
  -a, --all             Remove all unused images not just dangling ones
      --filter filter   Provide filter values (e.g. 'label=<key>=<value>')
  -f, --force           Do not prompt for confirmation
      --volumes         Prune volumes
What a prune will never delete includes:
running containers (list them with docker ps)
logs on those containers (see this post for details on limiting the size of logs)
filesystem changes made by those containers (visible with docker diff)
Additionally, anything created outside of the normal docker folders may not be seen by docker during this garbage collection. This could be from some other app writing to this directory, or a previous configuration of the docker engine (e.g. switching from AUFS to overlay2, or possibly after enabling user namespaces).
What would happen if this advice is ignored and you deleted a single folder like overlay2 out from this filesystem? The container filesystems are assembled from a collection of filesystem layers, and the overlay2 folder is where docker is performing some of these mounts (you'll see them in the output of mount when a container is running). Deleting some of these when they are in use would delete chunks of the filesystem out from a running container, and likely break the ability to start a new container from an impacted image. See this question for one of many possible results.
To completely refresh docker to a clean state, you can delete the entire directory, not just sub-directories like overlay2:
# danger, read the entire text around this code before running
# you will lose data
sudo -s
systemctl stop docker
rm -rf /var/lib/docker
systemctl start docker
exit
The engine will restart in a completely empty state, which means you will lose all:
images
containers
named volumes
user created networks
swarm state
I found this worked best for me:
docker image prune --all
By default Docker will not remove named images, even if they are unused. This command will remove unused images.
Note each layer in an image is a folder inside the /var/lib/docker/overlay2/ folder.
I had this issue... It was the log that was huge. Logs are here:
/var/lib/docker/containers/<container id>/<container id>-json.log
You can manage this in the run command line or in the compose file. See there : Configure logging drivers
I personally added these 3 lines to my docker-compose.yml file:
my_container:
  logging:
    options:
      max-size: 10m
I also had problems with a rapidly growing overlay2.
/var/lib/docker/overlay2 is the folder where docker stores the writable layers for your containers.
docker system prune -a may work only if the container is stopped and removed.
In my case I was able to figure out what consumes space by going into overlay2 and investigating.
That folder contains other, hash-named folders. Each of those has several subfolders, including a diff folder.
The diff folder contains the actual differences written by a container, with the same folder structure as your container (at least it did in my case - Ubuntu 18...).
So I used du -hsc /var/lib/docker/overlay2/LONGHASHHHHHHH/diff/tmp to figure out that /tmp inside my container is the folder which gets polluted.
As a workaround, I used the -v /tmp/container-data/tmp:/tmp parameter on the docker run command to map the inner /tmp folder to the host (see the sketch after the cron steps below) and set up a cron job on the host to clean up that folder.
The cron task was simple:
sudo nano /etc/crontab
*/30 * * * * root rm -rf /tmp/container-data/tmp/*
save and exit
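For completeness, a minimal sketch of the docker run invocation with that bind mount; the image and container names are hypothetical:
docker run -d --name my-app -v /tmp/container-data/tmp:/tmp my-app-image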
NOTE: overlay2 is a system docker folder, and its structure may change at any time. Everything above is based on what I saw in there. I only had to dig into the docker folder structure because the system was completely out of space and wouldn't even allow me to ssh into the docker container.
Background
The blame for the issue can be split between our misconfiguration of container volumes and a problem with docker leaking (failing to release) temporary data written to those volumes. We should be mapping (either to host folders or other persistent storage claims) all of our containers' temporary / log / scratch folders where our apps write frequently and/or heavily. Docker does not take responsibility for the cleanup of the automatically created so-called EmptyDirs located by default in /var/lib/docker/overlay2/*/diff/*. The contents of these "non-persistent" folders should be purged automatically by docker after a container is stopped, but apparently are not (they may even be impossible to purge from the host side while the container is still running - and it can be running for months at a time).
Workaround
A workaround requires careful manual cleanup, and while already described elsewhere, you still may find some hints from my case study, which I tried to make as instructive and generalizable as possible.
So what happened is that the culprit app (in my case clair-scanner) managed to write, over a few months, hundreds of gigabytes of data to the /diff/tmp subfolder of docker's overlay2:
du -sch /var/lib/docker/overlay2/<long random folder name seen as bloated in df -haT>/diff/tmp
271G total
So as all those subfolders in /diff/tmp were pretty self-explanatory (all were of the form clair-scanner-* and had obsolete creation dates), I stopped the associated container (docker stop clair) and carefully removed these obsolete subfolders from diff/tmp, starting prudently with a single (oldest) one, and testing the impact on docker engine (which did require restart [systemctl restart docker] to reclaim disk space):
rm -rf $(ls -at /var/lib/docker/overlay2/<long random folder name seen as bloated in df -haT>/diff/tmp | grep clair-scanner | tail -1)
I reclaimed hundreds of gigs of disk space without the need to reinstall docker or purge its entire folders. All running containers did have to be stopped at one point, because a docker daemon restart was required to reclaim disk space (so make sure first that your failover containers are running correctly on other nodes). I wish though that the docker prune command could cover the obsolete /diff/tmp (or even /diff/*) data as well (via yet another switch).
It's a 3-year-old issue now, you can read its rich and colorful history on Docker forums, where a variant aimed at application logs of the above solution was proposed in 2019 and seems to have worked in several setups: https://forums.docker.com/t/some-way-to-clean-up-identify-contents-of-var-lib-docker-overlay/30604
Friends, to keep everything clean you can use these commands:
docker system prune -a && docker volume prune
WARNING: DO NOT USE IN A PRODUCTION SYSTEM
/# df
...
/dev/xvda1 51467016 39384516 9886300 80% /
...
Ok, let's first try system prune
/# docker system prune --volumes
...
/# df
...
/dev/xvda1 51467016 38613596 10657220 79% /
...
Not so great; it seems like it only cleaned up a few hundred megabytes. Let's go crazy now:
/# sudo su
/# service docker stop
/# cd /var/lib/docker
/var/lib/docker# rm -rf *
/# service docker start
/var/lib/docker# df
...
/dev/xvda1 51467016 8086924 41183892 17% /
...
Nice!
Just remember that this is NOT recommended in anything but a throw-away server. At this point Docker's internal database won't be able to find any of these overlays and it may cause unintended consequences.
Adding to the comments above, which suggest pruning the system (clearing dangling volumes and images, exited containers, etc.): sometimes your app is the culprit. It may generate too many logs in a short time, and if you are using an empty-directory volume (a local volume), this fills up the /var partition. In that case I found the command below very useful for figuring out what is consuming space on my /var partition disk.
du -ahx /var/lib | sort -rh | head -n 30
This command lists the top 30 items consuming the most space on a single disk. Note that if you are using external storage with your containers, running du can take a lot of time; this command does not count mounted volumes and is therefore much faster. You will get the exact directories/files that are consuming space. You can then go to those directories and check which files are useful. If the files are required, you can move them to persistent storage (by changing the app to use persistent storage for that location, or by changing the location of those files); the rest you can clear.
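As a hedged illustration of the "move it to persistent storage" idea (the volume, path and image names below are hypothetical), you can mount a named volume over the directory the app writes to heavily:
# create a named volume and mount it over the app's heavy-write directory
docker volume create app-logs
docker run -d --name my-app -v app-logs:/var/log/myapp my-app-image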
If your system is also used for building images, you might want to have a look at cleaning up the garbage created by the builders, using:
docker buildx prune --all
and
docker builder prune --all
DON'T DO THIS IN PRODUCTION
The answer given by @ravi-luthra technically works, but it has some issues!
In my case, I was just trying to recover disk space. The lib/docker/overlay folder was taking 30 GB of space and I only run a few containers regularly. It looks like docker has some issue with data leakage, and some of the temporary data is not cleared when the container stops.
So I went ahead and deleted all the contents of the lib/docker/overlay folder. After that, my docker instance became unusable. When I tried to run or build any container, it gave me this error:
failed to create rwlayer: symlink ../04578d9f8e428b693174c6eb9a80111c907724cc22129761ce14a4c8cb4f1d7c/diff /var/lib/docker/overlay2/l/C3F33OLORAASNIYB3ZDATH2HJ7: no such file or directory
Then with some trial and error, I solved this issue by running
(WARNING: This will delete all your data inside docker volumes)
docker system prune --volumes -a
So it is not recommended to do such dirty cleanups unless you completely understand how the system works.
The "official" answer, cleaning with the "prune" commands, does not actually clean up garbage in the overlay2 folder.
So, to answer the original question, what can be done is:
Disclaimer: Be careful when applying this. It may result in breaking your Docker objects!
List folder names (hashes) in overlay2
Inspect your Docker objects (images, containers, ...) that you need (a stopped container, or an image currently not used by any container, does not mean that you do not need it).
When you inspect them, you will see that the output gives you the hashes that are related to your object, including overlay2's folders.
Grep the inspect output for each of overlay2's folder names.
Note all folders that are found with grep.
Now you can delete the overlay2 folders that are not referred to by any Docker object that you need.
Example:
Let's say there are these folders inside your overlay2 directory:
a1b28095041cc0a5ded909a20fed6dbfbcc08e1968fa265bc6f3abcc835378b5
021500fad32558a613122070616963c6644c6a57b2e1ed61cb6c32787a86f048
And what you only have is one image with ID c777cf06a6e3.
Then, do this:
docker inspect c777cf06a6e3 | grep a1b2809
docker inspect c777cf06a6e3 | grep 021500
Imagine that the first command found something, whereas the second found nothing.
Then, you can delete the 0215... folder of overlay2:
rm -r 021500fad32558a613122070616963c6644c6a57b2e1ed61cb6c32787a86f048
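A minimal sketch that automates this check across all overlay2 folders; it assumes the standard /var/lib/docker layout, must run as root, and its output should be treated as a hint to review manually, not as a deletion list:
#!/bin/bash
# gather the inspect output of every container and image once
objects=$(docker ps -aq; docker images -q)
inspect_all=$(docker inspect $objects 2>/dev/null)
cd /var/lib/docker/overlay2
for dir in */; do
  hash="${dir%/}"
  [ "$hash" = "l" ] && continue   # skip the short-link directory
  if grep -q "$hash" <<< "$inspect_all"; then
    echo "referenced:   $hash"
  else
    echo "unreferenced: $hash   (candidate for manual review)"
  fi
done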
To answer the title of the question:
Yes, it is safe to directly delete an overlay2 folder if you find out that it is not in use.
No, it is not safe to delete it directly if you find out that it is in use, or if you are not sure.
In my case, systemctl stop docker followed by systemctl start docker somehow automatically freed space in /var/lib/docker/*.
I had the same problem. In my instance it was because the /var/lib/docker directory was mounted into a running container (in my case google/cadvisor), and therefore it blocked docker prune from cleaning the folder. Stopping the container, running docker prune, and then rerunning the container solved the problem.
Based on Mert Mertce's answer I wrote the following script complete with spinners and progress bars.
Since writing the script, however, I noticed that the extra directories on our build servers are transient - that is, Docker appears to be cleaning up, albeit slowly. I don't know if Docker will get upset if there is contention for removing directories. Our current solution is to use docuum with a lot of extra overhead (150+ GB).
#!/bin/bash
[[ $(id -u) -eq 0 ]] || exec sudo /bin/bash -c "$(printf '%q ' "$BASH_SOURCE" "$@")"
progname=$(basename $0)
quiet=false
no_dry_run=false
while getopts ":qn" opt
do
    case "$opt" in
    q)
        quiet=true
        ;;
    n)
        no_dry_run=true
        ;;
    ?)
        echo "unexpected option ${opt}"
        echo "usage: ${progname} [-q|--quiet]"
        echo "    -q: no output"
        echo "    -n: no dry run (will remove unused directories)"
        exit 1
        ;;
    esac
done
shift "$(($OPTIND -1))"
[[ ${quiet} = false ]] || exec /bin/bash -c "$(printf '%q ' "$BASH_SOURCE" "$@")" > /dev/null
echo "Running as: $(id -un)"
progress_bar() {
    local w=80 p=$1; shift
    # create a string of spaces, then change them to dots
    printf -v dots "%*s" "$(( $p*$w/100 ))" ""; dots=${dots// /.};
    # print those dots on a fixed-width space plus the percentage etc.
    printf "\r\e[K|%-*s| %3d %% %s" "$w" "$dots" "$p" "$*";
}
cd /var/lib/docker/overlay2
echo cleaning in ${PWD}
i=1
spi=1
sp="/-\|"
directories=( $(find . -mindepth 1 -maxdepth 1 -type d | cut -d/ -f2) )
images=( $(docker image ls --all --format "{{.ID}}") )
total=$((${#directories[@]} * ${#images[@]}))
used=()
for d in "${directories[@]}"
do
    for id in ${images[@]}
    do
        ((++i))
        progress_bar "$(( ${i} * 100 / ${total}))" "scanning for used directories ${sp:spi++%${#sp}:1} "
        docker inspect $id | grep -q $d
        if [ $? -eq 0 ]
        then
            used+=("$d")
            i=$(( $i + $(( ${#images[@]} - $(( $i % ${#images[@]} )) )) ))
            break
        fi
    done
done
echo -e "\b\b " # get rid of spinner
i=1
used=($(printf '%s\n' "${used[@]}" | sort -u))
unused=( $(find . -mindepth 1 -maxdepth 1 -type d | cut -d/ -f2) )
for d in "${used[@]}"
do
    ((++i))
    progress_bar "$(( ${i} * 100 / ${#used[@]}))" "scanning for unused directories ${sp:spi++%${#sp}:1} "
    for uni in "${!unused[@]}"
    do
        if [[ ${unused[uni]} = $d ]]
        then
            unset 'unused[uni]'
            break;
        fi
    done
done
echo -e "\b\b " # get rid of spinner
if [ ${#unused[@]} -gt 0 ]
then
    [[ ${no_dry_run} = true ]] || echo "Could remove: (to automatically remove, use the -n, "'"'"no-dry-run"'"'" flag)"
    for d in "${unused[@]}"
    do
        if [[ ${no_dry_run} = true ]]
        then
            echo "Removing $(realpath ${d})"
            rm -rf ${d}
        else
            echo "   $(realpath ${d})"
        fi
    done
    echo Done
else
    echo "All directories are used, nothing to clean up."
fi
I navigated to the folder containing overlay2. Using du -shc overlay2/*, I found that there was 25G of junk in overlay2. Running docker system prune -af said Total Reclaimed Space: 1.687MB, so I thought it had failed to clean it up. However, I then ran du -shc overlay2/* again only to see that overlay2 had only 80K in it, so it did work.
Be careful, docker lies :).
Everything in /var/lib/docker is container filesystem data. If you stop all your containers and prune them, you should end up with the folder being empty. You probably don't really want that, so don't go randomly deleting stuff in there. Do not delete things in /var/lib/docker directly. You may get away with it sometimes, but it's inadvisable for so many reasons.
Do this instead:
sudo bash
cd /var/lib/docker
find . -type f | xargs du -b | sort -n
What you will see is the largest files shown at the bottom. If you want, figure out what containers those files are in, enter those containers with docker exec -ti containername -- /bin/sh and delete some files.
You can also put docker system prune -a -f on a daily/weekly cron job as long as you aren't leaving stopped containers and volumes around that you care about. It's better to figure out the reasons why it's growing, and correct them at the container level.
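A minimal sketch of such a cron entry; the schedule and file name are just examples:
# /etc/cron.d/docker-prune - prune unused Docker data every night at 02:30
30 2 * * * root docker system prune -a -f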
Docker apparently keeps image layers of old versions of an image for running containers. It may happen if you update your running container's image (same tag) without stopping it, for example:
docker-compose pull
docker-compose up -d
Running docker-compose down before updating solved it; the downtime is not an issue in my case.
I recently had a similar issue: overlay2 grew bigger and bigger, but I couldn't figure out what consumed the bulk of the space.
df showed me that overlay2 was about 24GB in size.
With du I tried to figure out what occupied the space… and failed.
The difference came from the fact that deleted files (mostly log files in my case) were still being held open by a process (Docker). Thus the files don't show up with du, but the space they occupy shows with df.
A reboot of the host machine helped. Restarting the docker container would probably have helped already…
This article on linuxquestions.org helped me to figure that out.
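A hedged way to confirm this situation before rebooting, assuming lsof is installed on the host: list files that are deleted but still held open, and look for Docker-related entries:
# +L1 lists open files with a link count below 1, i.e. deleted but still open
sudo lsof +L1 | grep -i docker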
Maybe this folder is not your problem; don't rely on the result of df -h with docker.
Use the command below to see the size of each of your folders:
echo; pwd; echo; ls -AlhF; echo; du -h --max-depth=1; echo; du -sh
docker system prune -af && docker image prune -af
I used "docker system prune -a"; it cleaned all files under volumes and overlay2:
[root@jasontest volumes]# docker system prune -a
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: ubuntu:12.04
untagged: ubuntu@sha256:18305429afa14ea462f810146ba44d4363ae76e4c8dfc38288cf73aa07485005
deleted: sha256:5b117edd0b767986092e9f721ba2364951b0a271f53f1f41aff9dd1861c2d4fe
deleted: sha256:8c7f3d7534c80107e3a4155989c3be30b431624c61973d142822b12b0001ece8
deleted: sha256:969d5a4e73ab4e4b89222136eeef2b09e711653b38266ef99d4e7a1f6ea984f4
deleted: sha256:871522beabc173098da87018264cf3e63481628c5080bd728b90f268793d9840
deleted: sha256:f13e8e542cae571644e2f4af25668fadfe094c0854176a725ebf4fdec7dae981
deleted: sha256:58bcc73dcf4050a4955916a0dcb7e5f9c331bf547d31e22052f1b5fa16cf63f8
untagged: osixia/openldap:1.2.1
untagged: osixia/openldap@sha256:6ceb347feb37d421fcabd80f73e3dc6578022d59220cab717172ea69c38582ec
deleted: sha256:a562f6fd60c7ef2adbea30d6271af8058c859804b2f36c270055344739c06d64
deleted: sha256:90efa8a88d923fb1723bea8f1082d4741b588f7fbcf3359f38e8583efa53827d
deleted: sha256:8d77930b93c88d2cdfdab0880f3f0b6b8be191c23b04c61fa1a6960cbeef3fe6
deleted: sha256:dd9f76264bf3efd36f11c6231a0e1801c80d6b4ca698cd6fa2ff66dbd44c3683
deleted: sha256:00efc4fb5e8a8e3ce0cb0047e4c697646c88b68388221a6bd7aa697529267554
deleted: sha256:e64e6259fd63679a3b9ac25728f250c3afe49dbe457a1a80550b7f1ccf68458a
deleted: sha256:da7d34d626d2758a01afe816a9434e85dffbafbd96eb04b62ec69029dae9665d
deleted: sha256:b132dace06fa7e22346de5ca1ae0c2bf9acfb49fe9dbec4290a127b80380fe5a
deleted: sha256:d626a8ad97a1f9c1f2c4db3814751ada64f60aed927764a3f994fcd88363b659
untagged: centos:centos7
untagged: centos@sha256:2671f7a3eea36ce43609e9fe7435ade83094291055f1c96d9d1d1d7c0b986a5d
deleted: sha256:ff426288ea903fcf8d91aca97460c613348f7a27195606b45f19ae91776ca23d
deleted: sha256:e15afa4858b655f8a5da4c4a41e05b908229f6fab8543434db79207478511ff7
Total reclaimed space: 533.3MB
[root@jasontest volumes]# ls -alth
total 32K
-rw------- 1 root root 32K May 23 21:14 metadata.db
drwx------ 2 root root 4.0K May 23 21:14 .
drwx--x--x 14 root root 4.0K May 21 20:26 ..

how to increase docker build's volume size

One of my steps in Dockerfile requires more than 10G space on disk. It really does. However, all the intermediate containers in docker build are created with 10G volumes.
What I did:
started dockerd with --storage-opt dm.basesize=25G (docker info says: Base Device Size: 26.84 GB)
disabled cache while building
re-pulled the base images
stopped docker, removed everything from the docker directory, and started it again
It's no good: df -h in an intermediate container still shows a 10G disk, and docker inspect of it shows "DeviceSize": "10737418240".
What have I missed? How do I increase the base volume size?
To grant containers access to more space, we need to take care of two things:
Make sure that dockerd is started with: --storage-opt dm.basesize=25G
Make sure that we pull a clean version of the image after increasing the basesize.
Example:
Start dockerd with:
--storage-opt dm.basesize=25G
Restart docker daemon
Checking the container size here will display the older value of 10G:
docker run -it --rm ubuntu:xenial df -h
Delete the image and repull it
docker rmi ubuntu:xenial
docker pull ubuntu:xenial
Confirm changes took place with the expected value of 25G:
docker run -it --rm ubuntu:xenial df -h
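As an alternative to passing the flag on the dockerd command line, the same devicemapper option can be set in the daemon configuration file; a minimal sketch, with the value being just an example:
# /etc/docker/daemon.json
{
  "storage-opts": [
    "dm.basesize=25G"
  ]
}
Restart the daemon afterwards; as described above, the base image still needs to be re-pulled for the new size to take effect.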
I am not sure if this problem has been resolved in the meantime or not. But if anyone stumbles across this in 2019 (or possibly later), the clean solution to this kind of problem is to switch to another storage backend.
To do this, copy all keepworthy Docker data to a safe location. Stop the Docker daemon. Delete /var/lib/docker (or move it away to allow a rollback if anything goes wrong). Then re-create an empty /var/lib/docker and add a daemon.json file (at /etc/docker/daemon.json on most Linux systems) with the following content:
{
"storage-driver": "overlay2"
}
Then, restart the Docker daemon and the artificial 10G limit is gone.
See the documentation for further details: https://docs.docker.com/storage/storagedriver/overlayfs-driver/
In case there is really no way around the DeviceSize thing, I remember once creating it by hand (in the sense of a dd command with the expected device size) and starting the Docker daemon afterwards. However, as of today, the necessity for doing this should be gone.
