Not enough space during build - docker

I am currently trying to create a Docker image from Python files and a lot of extra packages listed in requirements.txt.
While I am running the command "sudo docker build -t XXX ." the packages are downloaded and then installed one by one until I receive the error:
"Could not install packages due to an EnvironmentError: [Errno 28] No space left on device"
I have already tried the drastic option of "sudo docker system prune", and all the past Docker images are deleted.
Moreover, "sudo docker info" shows that I have 15 GB allocated to Docker, and while my unsuccessful Docker image is about 1 GB, that is still well below the total allocation.
None of the options mentioned here: https://unix.stackexchange.com/questions/203168/docker-says-no-space-left-on-device-but-system-has-plenty-of-space
or here: Docker error : no space left on device
worked. I can create several "failed" images of ~1 GB each, with a total size of more than 20 GB, so it is not a lack of space on my HDD or VM.
So I would be grateful for some more ideas.

The disk partition used by Docker is filling up during the build. You can see the available and used space on your partitions using df -h. You can either add more space to that partition or free up more space on it.
The docker system prune command only removes unused data (dangling images, unreferenced volumes, ...). You can free more space by deleting images you no longer need: list them with docker image ls and explicitly delete unneeded ones using docker image rm <image>.
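For illustration, a minimal check-and-clean sequence along those lines (the image name below is a placeholder; on Linux the data directory is usually /var/lib/docker):
$ df -h /var/lib/docker              # which partition holds Docker's data, and how full it is
$ docker image ls                    # review what is taking up space
$ docker image rm old-image:tag      # hypothetical image that is no longer needed
$ docker builder prune               # optionally also drop unused build cache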

If you are using Docker Desktop on Mac, go to Preferences and increase the disk image size. In my case it was showing that the disk image was full: Disk image size: 59.6 GB (59.6 GB used).

Reduce the disk space Docker uses [duplicate]

(Post created on Oct 05 '16)
I noticed that every time I run an image and delete it, my system doesn't return to the original amount of available space.
The lifecycle I'm applying to my containers is:
> docker build ...
> docker run CONTAINER_TAG
> docker stop CONTAINER_TAG
> docker rm CONTAINER_ID
> docker rmi IMAGE_ID
[ running on a default mac terminal ]
The containers were in fact created from custom images, running Node and a standard Redis. My OS is OS X 10.11.6.
At the end of the day I see I keep losing MBs of free space. How can I deal with this problem?
EDITED POST
2020 and the problem persists, leaving this update for the community:
Today running:
macOS 10.13.6
Docker Engine 18.9.2
Docker Desktop CLI 2.0.0.3
The easiest way to work around the problem is to prune the system with the Docker utilities.
docker system prune -a --volumes
WARNING:
By default, volumes are not removed to prevent important data from being deleted if there is currently no container using the volume. Use the --volumes flag when running the command to prune volumes as well:
Docker now has a single command to do that:
docker system prune -a --volumes
See the Docker system prune docs
There are three areas of Docker storage that can mount up, because Docker is cautious - it doesn't automatically remove any of them: exited containers, unused container volumes, unused image layers. In a dev environment with lots of building and running, that can be a lot of disk space.
These three commands clear down anything not being used:
docker rm $(docker ps -f status=exited -aq) - remove stopped containers
docker rmi $(docker images -f "dangling=true" -q) - remove image layers that are not used in any images
docker volume rm $(docker volume ls -qf dangling=true) - remove volumes that are not used by any containers.
These are safe to run: they won't delete image layers that are referenced by images, or data volumes that are used by containers. You can alias them and/or put them in a cron job to regularly clean up the local disk.
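For example, a minimal cleanup script built from those three commands, which you could alias or schedule (the file name and cron schedule below are only placeholders):
#!/bin/sh
# docker-clean.sh - remove stopped containers, dangling image layers and dangling volumes
docker rm $(docker ps -f status=exited -aq) 2>/dev/null
docker rmi $(docker images -f "dangling=true" -q) 2>/dev/null
docker volume rm $(docker volume ls -qf dangling=true) 2>/dev/null
A crontab entry such as 0 3 * * * /usr/local/bin/docker-clean.sh would then run it nightly.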
It is also worth mentioning that the file size of docker.qcow2 (or Docker.raw on High Sierra with the Apple File System) can seem very large (~64 GiB), larger than it actually is, when using the following command:
ls -klsh Docker.raw
This can be somewhat misleading, because it outputs the logical size of the file rather than its physical size.
To see the physical size of the file you can use this command:
du -h Docker.raw
Source: https://docs.docker.com/docker-for-mac/faqs/#disk-usage
Why does the file keep growing?
If Docker is used regularly, the size of the Docker.raw (or Docker.qcow2) can keep growing, even when files are deleted.
To demonstrate the effect, first check the current size of the file on the host:
$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
$ ls -s Docker.raw
9964528 Docker.raw
Note the use of -s which displays the number of filesystem blocks actually used by the file. The number of blocks used is not necessarily the same as the file “size”, as the file can be sparse.
Next start a container in a separate terminal and create a 1GiB file in it:
$ docker run -it alpine sh
# and then inside the container:
/ # dd if=/dev/zero of=1GiB bs=1048576 count=1024
1024+0 records in
1024+0 records out
/ # sync
Back on the host check the file size again:
$ ls -s Docker.raw
12061704 Docker.raw
Note the increase in size from 9964528 to 12061704, where the increase of 2097176 512-byte sectors is approximately 1GiB, as expected. If you switch back to the alpine container terminal and delete the file:
/ # rm -f 1GiB
/ # sync
then check the file on the host:
$ ls -s Docker.raw
12059672 Docker.raw
The file has not got any smaller! Whatever has happened to the file inside the VM, the host doesn’t seem to know about it.
Next, if you re-create the “same” 1GiB file in the container and then check the size again, you will see:
$ ls -s Docker.raw
14109456 Docker.raw
It’s got even bigger! It seems that if you create and destroy files in a loop, the size of the Docker.raw (or Docker.qcow2) will increase up to the upper limit (currently set to 64 GiB), even if the filesystem inside the VM is relatively empty.
The explanation for this odd behaviour lies with how filesystems typically manage blocks. When a file is to be created or extended, the filesystem will find a free block and add it to the file. When a file is removed, the blocks become “free” from the filesystem’s point of view, but no-one tells the disk device. Making matters worse, the newly-freed blocks might not be re-used straight away – it’s completely up to the filesystem’s block allocation algorithm. For example, the algorithm might be designed to favour allocating blocks contiguously for a file: recently-freed blocks are unlikely to be in the ideal place for the file being extended.
Since the block allocator in practice tends to favour unused blocks, the result is that the Docker.raw (or Docker.qcow2) will constantly accumulate new blocks, many of which contain stale data. The file on the host gets larger and larger, even though the filesystem inside the VM still reports plenty of free space.
TRIM
A TRIM command (or a DISCARD or UNMAP) allows a filesystem to signal to a disk that a range of sectors contain stale data and they can be forgotten. This allows:
an SSD drive to erase and reuse the space, rather than spend time shuffling it around; and
Docker for Mac to deallocate the blocks in the host filesystem, shrinking the file.
So how do we make this work?
Automatic TRIM in Docker for Mac
In Docker for Mac 17.11 there is a containerd “task” called trim-after-delete listening for Docker image deletion events. It can be seen via the ctr command:
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n ctr t ls
TASK                    PID     STATUS
vsudd                   1741    RUNNING
acpid                   871     RUNNING
diagnose                913     RUNNING
docker-ce               958     RUNNING
host-timesync-daemon    1046    RUNNING
ntpd                    1109    RUNNING
trim-after-delete       1339    RUNNING
vpnkit-forwarder        1550    RUNNING
When an image deletion event is received, the process waits for a few seconds (in case other images are being deleted, for example as part of a docker system prune) and then runs fstrim on the filesystem.
Returning to the example in the previous section, if you delete the 1 GiB file inside the alpine container
/ # rm -f 1GiB
then run fstrim manually from a terminal in the host:
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n fstrim /var/lib/docker
then check the file size:
$ ls -s Docker.raw
9965016 Docker.raw
The file is back to (approximately) its original size – the space has finally been freed!
Hopefully this post is helpful; also check out the following macOS Docker utility scripts for this problem:
https://github.com/wanliqun/macos_docker_toolkit
Docker on Mac has an additional problem that is hurting a lot of people: the docker.qcow2 file can grow out of proportion (up to 64 GB) and will never shrink back down on its own.
https://github.com/docker/for-mac/issues/371
As stated in one of the replies by djs55, a fix is being planned, but it is not a quick one. Quote:
The .qcow2 is exposed to the VM as a block device with a maximum size
of 64GiB. As new files are created in the filesystem by containers,
new sectors are written to the block device. These new sectors are
appended to the .qcow2 file causing it to grow in size, until it
eventually becomes fully allocated. It stops growing when it hits this
maximum size.
...
We're hoping to fix this in several stages: (note this is still at the
planning / design stage, but I hope it gives you an idea)
1) we'll switch to a connection protocol which supports TRIM, and
implement free-block tracking in a metadata file next to the qcow2.
We'll create a compaction tool which can be run offline to shrink the
disk (a bit like the qemu-img convert but without the dd if=/dev/zero
and it should be fast because it will already know where the empty
space is)
2) we'll automate running of the compaction tool over VM reboots,
assuming it's quick enough
3) we'll switch to an online compactor (which is a bit like a GC in a
programming language)
We're also looking at making the maximum size of the .qcow2
configurable. Perhaps 64GiB is too large for some environments and a
smaller cap would help?
Update 2019: many updates have been done to Docker for Mac since this answer was posted to help mitigate problems (notably: supporting a different filesystem).
Cleanup is still not fully automatic though; you may need to prune from time to time. For a single command that can help clean up disk space, see zhongjiajie's answer.
docker container prune
docker system prune
docker image prune
docker volume prune
Since nothing here was working for me, here's what I did. Check file size:
ls -lhks ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
Then in Docker Desktop simply reduce the disk image size (I was using the raw format). It will warn that it will delete everything, but by the time you are reading this post you probably already have. So that creates a fresh, new, empty file.
I'm not sure if it is related to the current topic, but this has been a solution for me personally:
open Docker settings -> Resources -> Disk image size -> 16 GB
There are several options for limiting Docker disk space; I'd start by limiting/rotating the logs: Docker container logs taking all my disk space
E.g., with a recent Docker version you can start a container with the --log-opt max-size=50m option.
Also, if you've got old, unused containers, consider having a look at the Docker logs, which are located at /var/lib/docker/containers/*/*-json.log
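If you prefer to make log rotation the default for every new container instead of passing --log-opt each time, here is a sketch of a daemon-wide setting (the size and file-count values are just examples; restart the Docker daemon afterwards, and note it only affects containers created after the change):
$ cat /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}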
$ sudo docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache

Docker: "You don't have enough free space in /var/cache/apt/archives/"

I have a Dockerfile which, when I try to build it, results in the error
E: You don't have enough free space in /var/cache/apt/archives/
Note that the image sets up a somewhat complex project with several dependencies that require quite a lot of space; for example, the list includes Qt. This is only an issue during construction of the image, and in the end I expect it to have a size of maybe 300 MB.
Now I found this: https://unix.stackexchange.com/questions/578536/how-to-fix-e-you-dont-have-enough-free-space-in-var-cache-apt-archives
Given that, what I tried so far is:
Freeing the space used by docker images so far by calling docker system prune
Removing unneeded installation files by calling sudo apt autoremove and sudo apt autoclean
There was also the suggestion to remove data in /var/log, which currently has a size of 3 GB. However, I am not the system administrator and am thus wary of doing such a thing.
Is there any other way to increase that space?
And, preferably, is there a more sustainable solution, allowing me to build several images without having to search for spots where I can clean up the system?
Try this suggestion. You might have a lot of unused images that need to be deleted.
https://github.com/onyx-platform/onyx-starter/issues/5#issuecomment-276562225
Converting @Dre's suggestion into code, you might want to use the Docker prune commands for containers, images & volumes:
docker container prune
docker image prune
docker volume prune
You can use these commands in sequence:
docker container prune; docker image prune; docker volume prune
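As a more sustainable complement (not from the answers above, just a common Dockerfile pattern): installing packages and cleaning the apt cache in the same RUN layer keeps the downloaded .deb archives out of the image layers, so less junk accumulates across repeated builds. A minimal sketch, with build-essential standing in for the real dependency list (Qt etc.):
FROM ubuntu:20.04
# install dependencies and clean the apt cache in the same layer,
# so the downloaded packages never persist in an image layer
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential \
 && rm -rf /var/lib/apt/lists/*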
Free Space without removing your latest images
Use the following command to see the different types of reclaimable storage (the -v verbose option provides more detail):
docker system df
docker system df -v
Clear the build cache (the -a option removes all unused build cache, not just dangling entries):
docker builder prune -a
Remove dangling images (untagged images, old and previous image builds):
docker rmi -f $(docker images -f "dangling=true" -q)
Increase Disk image size using Docker UI
Docker > Preferences > Resources > Advanced > adjust Disk image size > Apply & Restart
TLDR;
run
docker system prune -a --volumes
I tried to increase the disk space and prune the images, containers and volumes manually, but kept facing the issue again and again. When I checked the disk consumption on my machine, I found a lot of space consumed by the ~/Library/Containers/com.docker.docker location. A system prune cleaned up a lot of space and Docker builds started working again.

What is build cache in `docker system df`

Running docker system df displays a row for Build Cache. What does this mean? On my machine this line always shows 0 for all fields.
$ sudo docker system df
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          5       3        352.9MB   352.9MB (100%)
Containers      7       0        26.29MB   26.29MB (100%)
Local Volumes   1       1        0B        0B
Build Cache     0       0        0B        0B
The Build Cache lines refer to the cache used by BuildKit which is included with 18.09 and newer versions of docker. It is not enabled by default, so unless you have switched it on, you can expect this to read 0. This is the cache used when building and rebuilding images to speed up builds and reuse shared layers between images. It also reduces the size of the images pushed to a registry when layers are reused from prior builds.
The cache from BuildKit is buried since it runs from containerd rather than directly in docker, so you can view the disk used for this cache and then prune it with commands like:
docker builder prune
If you run builds without BuildKit, the cache for these will be cleaned up when you prune images on the host.
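For example (a sketch; the tag and the 24-hour filter are arbitrary), you can opt in to BuildKit per build and later trim only older cache entries:
$ DOCKER_BUILDKIT=1 docker build -t myapp .   # myapp is a placeholder tag
$ docker builder prune --filter until=24h     # remove build cache entries older than 24 hours
$ docker builder prune -a                     # or remove all unused build cache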
The command docker system df shows the Docker disk usage.
Images shows the disk usage of the images you have pulled or built.
Containers shows the disk usage of your containers (their writable layers).
Local Volumes shows the disk usage of the volumes used by your containers.
And recently a new section called Build Cache was added, which shows the disk usage of the cache Docker uses while building images.
It was not there before; it was added on May 18, 2018, but it was missing from the documentation, so you couldn't see it listed in the system df docs.
I've just sent a PR so you can see it in the example output, and I hope they merge it soon.
Edit:
The PR was merged; you can now find the examples in the official documentation.

Reclaim disk space after removing file from Docker container

I've successfully copied a large database backup file (35 GB) into the Docker container and restored my database locally (following this walkthrough). I want to now delete that .bak file from the Docker container to reclaim the space. I did that by running sudo docker exec sql_server rm -rf /var/opt/mssql/backup/example.bak but this didn’t reclaim the space - my Docker.raw file remains about 76 GB. When I run docker system df it says my containers are 45 GB though. I tried docker system prune -a but this reclaimed 0B. Restarting Docker didn't do the trick. How do I shrink that now that the file is removed in order to gain that space back?
I did that by running sudo docker exec sql_server rm -rf /var/opt/mssql/backup/example.bak but this didn’t reclaim the space
Whether this will free up space depends on whether the file exists only in the container or if it exists in your image. Once a file exists in the image, deleting it in the container doesn't modify the image itself. Instead only the container filesystem is updated with an indication that the file is deleted from the view of that container. This is how the layered filesystem works under the covers.
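A small sketch of the same effect at image-build time (hypothetical Dockerfile; the 1 GiB file is only for illustration): deleting a file in a later layer only hides it, the data still exists in the earlier layer, so the image stays large; deleting it within the same RUN avoids storing it at all.
FROM alpine
# layer 1: creates a 1 GiB file, which is stored in this layer
RUN dd if=/dev/zero of=/bigfile bs=1048576 count=1024
# layer 2: records the deletion, but the image is still ~1 GiB larger
RUN rm /bigfile
# in contrast, creating and removing in one layer stores nothing:
# RUN dd if=/dev/zero of=/bigfile bs=1048576 count=1024 && rm /bigfile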
When I run docker system df it says my containers are 45 GB though
You can examine this a bit deeper. For any specific container, you can run a docker container diff command on the container id to see the files that have been modified inside that container.
I tried docker system prune -a but this reclaimed 0B.
This will not reclaim space from a running container. If the container is stopped, it will be deleted, and the image that started that container may also be deleted if nothing else points to it. Otherwise, Docker does not touch running containers, and there is no pruning it can do on the files inside a running container.
my Docker.raw file remains about 76 GB
This is a very key point; it suggests that you are running Docker on a Mac. All of the above steps may reduce the disk usage of the Linux environment that Docker runs on top of. However, the VM that Docker uses on Mac and Windows is mapped to a file that grows on demand as the VM needs it. From the Docker for Mac FAQ, the disk space reported for this file may not be accurate because of how sparse files work on Mac:
Docker.raw consumes an insane amount of disk space!
This is an illusion. Docker uses the raw format on Macs running the
Apple Filesystem (APFS). APFS supports sparse files, which compress
long runs of zeroes representing unused space. The output of ls is
misleading, because it lists the logical size of the file rather than
its physical size. To see the physical size, add the -ks switch; to
see the logical size in human readable form, add -lh:
$ cd ~/Library/Containers/com.docker.docker/Data/vms/0
$ ls -klsh Docker.raw
2333548 -rw-r--r--# 1 akim staff 64G Dec 13 17:42 Docker.raw
In this listing, the logical size is 64GB, but the physical size is
only 2.3GB.
Alternatively, you may use du (disk usage):
$ du -h Docker.raw
2,2G Docker.raw
I'd also recommend looking at how much disk space is used inside the Docker VM with:
sudo docker run --rm -v /var/lib/docker:/host/var/lib/docker:ro \
busybox df -h /host/var/lib/docker
According to this article, you can flatten a container with:
# export the container to a tarball
docker export <CONTAINER ID> > /home/export.tar
# import it back
cat /home/export.tar | docker import - some-name:latest
docker export exports the container’s filesystem as a tar archive and docker import imports the contents from a tarball to create a filesystem image.
You can avoid using a temporary file by piping directly from docker export to docker import with:
docker export <CONTAINER ID> | docker import - flatten-container:latest

The size of docker.qcow2 on mac is much larger than the images that I have

I am using a Mac, and I have only one image, ubuntu:14.04, which is 187.9 MB. I have deleted all containers (running and exited). However, when I check the size of Docker.qcow2 (~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2), it shows 1.63 GB. As far as I know, this is the file where all the images are stored on a Mac, so its size should be 187 MB or less. Any idea why it is so big?
I was facing the same issue; the size of my Docker.qcow2 was 13 GB. So I connected to the Docker host VM directly and rebooted it from the CLI. After that the size went down to 2.2 GB.
Try screen $HOME/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty and you should be able to connect directly to the Docker host.
There are a number of cleanups you can do using the docker command and tools for Docker. Make sure they do what you want them to do and don't remove too much.
First run system prune (note that it removes stuff, so please read the message it displays to make sure it won't remove too much): docker system prune -a
Then run this third-party cleaning image: docker run --privileged --pid=host docker/desktop-reclaim-space. It uses fstrim; for more details see the link. There is no confirmation message when you run the command, so make sure you know what you're doing :)
My results: before (18 GB) and after (3 GB).
