One of the steps in my Dockerfile requires more than 10G of disk space. It really does. However, all the intermediate containers in docker build are created with 10G volumes.
What I did:
started dockerd with --storage-opt dm.basesize=25G (docker info says: Base Device Size: 26.84 GB)
disabled cache while building
re-pulled the base images
stopped docker, removed everything from the docker directory, and started it again
It's no good: df -h in an intermediate container still shows a 10G disk, and docker inspect of it shows "DeviceSize": "10737418240".
What have I missed? How do I increase the base volume size?
To grant containers access to more space, we need to take care of two things:
Make sure that dockerd is started with: --storage-opt dm.basesize=25G
Make sure that we pull a clean version of the image after increasing the basesize.
Example:
Start dockerd with:
--storage-opt dm.basesize=25G
Restart docker daemon
Checking the container size at this point will still display the old value of 10G:
docker run -it --rm ubuntu:xenial df -h
Delete the image and repull it
docker rmi ubuntu:xenial
docker pull ubuntu:xenial
Confirm changes took place with the expected value of 25G:
docker run -it --rm ubuntu:xenial df -h
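If you prefer to keep this setting in the daemon's configuration file instead of passing a command-line flag, the same option can go into /etc/docker/daemon.json (a minimal sketch, assuming the devicemapper storage driver; restart the daemon afterwards):
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=25G"
  ]
}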
I am not sure whether this problem has been resolved in the meantime. But if anyone stumbles across this in 2019 (or possibly later), the clean solution to this kind of problem is to switch to another storage backend.
To do this, copy any Docker data worth keeping to a safe location. Stop the Docker daemon. Delete /var/lib/docker (or move it away to allow a rollback if anything goes wrong). Then re-create an empty /var/lib/docker and add a daemon.json file (typically /etc/docker/daemon.json) with the following content:
{
"storage-driver": "overlay2"
}
Then, restart the Docker daemon and the artificial 10G limit is gone.
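A minimal sketch of that migration, assuming a systemd-based Linux host and the default /var/lib/docker location:
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker.bak        # keep a copy to allow a rollback
sudo mkdir /var/lib/docker
echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker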
See the documentation for further details: https://docs.docker.com/storage/storagedriver/overlayfs-driver/
In case there is really no way around the DeviceSize thing, I remember once creating it by hand (in the sense of a dd command with the expected device size) and starting the Docker daemon afterwards. However, as of today, there should be no need to do this.
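For reference, a rough sketch of the kind of dd invocation meant here: creating a sparse file of the desired device size up front. The target path is an assumption based on the default devicemapper loopback layout, and this should rarely, if ever, be necessary nowadays.
sudo systemctl stop docker
sudo mkdir -p /var/lib/docker/devicemapper/devicemapper
# create a 25G sparse file (no data is actually written)
sudo dd if=/dev/zero of=/var/lib/docker/devicemapper/devicemapper/data bs=1G count=0 seek=25
sudo systemctl start docker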
Related
(Post created on Oct 05 '16)
I noticed that every time I run an image and delete it, my system doesn't return to the original amount of available space.
The lifecycle I'm applying to my containers is:
> docker build ...
> docker run CONTAINER_TAG
> docker stop CONTAINER_TAG
> docker rm CONTAINER_ID
> docker rmi IMAGE_ID
[ running on a default mac terminal ]
The containers were in fact created from custom images, running Node and a standard Redis. My OS is OS X 10.11.6.
At the end of the day I keep losing MBs of free space. How can I address this problem?
EDITED POST
It's 2020 and the problem persists, so I'm leaving this update for the community:
Today running:
macOS 10.13.6
Docker Engine 18.9.2
Docker Desktop CLI 2.0.0.3
The easiest way to work around the problem is to prune the system with the Docker utilities.
docker system prune -a --volumes
WARNING:
By default, volumes are not removed to prevent important data from being deleted if there is currently no container using the volume. Use the --volumes flag when running the command to prune volumes as well.
Docker now has a single command to do that:
docker system prune -a --volumes
See the Docker system prune docs
There are three areas of Docker storage that can mount up, because Docker is cautious - it doesn't automatically remove any of them: exited containers, unused container volumes, unused image layers. In a dev environment with lots of building and running, that can be a lot of disk space.
These three commands clear down anything not being used:
docker rm $(docker ps -f status=exited -aq) - remove stopped containers
docker rmi $(docker images -f "dangling=true" -q) - remove image layers that are not used in any images
docker volume rm $(docker volume ls -qf dangling=true) - remove volumes that are not used by any containers.
These are safe to run: they won't delete image layers that are referenced by images, or data volumes that are used by containers. You can alias them, and/or put them in a cron job to regularly clean up the local disk.
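For example, a possible set of aliases (the names are just a suggestion):
alias docker-clean-containers='docker rm $(docker ps -f status=exited -aq)'
alias docker-clean-images='docker rmi $(docker images -f "dangling=true" -q)'
alias docker-clean-volumes='docker volume rm $(docker volume ls -qf dangling=true)'
The same commands can also be dropped into a small script that cron runs overnight if you want the cleanup to happen unattended.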
It is also worth mentioning that the file size of docker.qcow2 (or Docker.raw on High Sierra with the Apple File System) can seem very large (~64 GiB), larger than it actually is, when using the following command:
ls -klsh Docker.raw
This can be somewhat misleading, because it outputs the logical size of the file rather than its physical size.
To see the physical size of the file you can use this command:
du -h Docker.raw
Source: https://docs.docker.com/docker-for-mac/faqs/#disk-usage
Why does the file keep growing?
If Docker is used regularly, the size of the Docker.raw (or Docker.qcow2) can keep growing, even when files are deleted.
To demonstrate the effect, first check the current size of the file on the host:
$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
$ ls -s Docker.raw
9964528 Docker.raw
Note the use of -s which displays the number of filesystem blocks actually used by the file. The number of blocks used is not necessarily the same as the file “size”, as the file can be sparse.
Next start a container in a separate terminal and create a 1GiB file in it:
$ docker run -it alpine sh
# and then inside the container:
/ # dd if=/dev/zero of=1GiB bs=1048576 count=1024
1024+0 records in
1024+0 records out
/ # sync
Back on the host check the file size again:
$ ls -s Docker.raw
12061704 Docker.raw
Note the increase in size from 9964528 to 12061704, where the increase of 2097176 512-byte sectors is approximately 1GiB, as expected. If you switch back to the alpine container terminal and delete the file:
/ # rm -f 1GiB
/ # sync
then check the file on the host:
$ ls -s Docker.raw
12059672 Docker.raw
The file has not got any smaller! Whatever has happened to the file inside the VM, the host doesn’t seem to know about it.
Next, if you re-create the “same” 1 GiB file in the container and then check the size again, you will see:
$ ls -s Docker.raw
14109456 Docker.raw
It’s got even bigger! It seems that if you create and destroy files in a loop, the size of the Docker.raw (or Docker.qcow2) will increase up to the upper limit (currently set to 64 GiB), even if the filesystem inside the VM is relatively empty.
The explanation for this odd behaviour lies with how filesystems typically manage blocks. When a file is to be created or extended, the filesystem will find a free block and add it to the file. When a file is removed, the blocks become “free” from the filesystem’s point of view, but no-one tells the disk device. Making matters worse, the newly-freed blocks might not be re-used straight away – it’s completely up to the filesystem’s block allocation algorithm. For example, the algorithm might be designed to favour allocating blocks contiguously for a file: recently-freed blocks are unlikely to be in the ideal place for the file being extended.
Since the block allocator in practice tends to favour unused blocks, the result is that the Docker.raw (or Docker.qcow2) will constantly accumulate new blocks, many of which contain stale data. The file on the host gets larger and larger, even though the filesystem inside the VM still reports plenty of free space.
TRIM
A TRIM command (or a DISCARD or UNMAP) allows a filesystem to signal to a disk that a range of sectors contain stale data and they can be forgotten. This allows:
an SSD drive to erase and reuse the space, rather than spend time shuffling it around; and
Docker for Mac to deallocate the blocks in the host filesystem, shrinking the file.
So how do we make this work?
Automatic TRIM in Docker for Mac
In Docker for Mac 17.11 there is a containerd “task” called trim-after-delete listening for Docker image deletion events. It can be seen via the ctr command:
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n ctr t ls
TASK PID STATUS
vsudd 1741 RUNNING
acpid 871 RUNNING
diagnose 913 RUNNING
docker-ce 958 RUNNING
host-timesync-daemon 1046 RUNNING
ntpd 1109 RUNNING
trim-after-delete 1339 RUNNING
vpnkit-forwarder 1550 RUNNING
When an image deletion event is received, the process waits for a few seconds (in case other images are being deleted, for example as part of a docker system prune) and then runs fstrim on the filesystem.
Returning to the example in the previous section, if you delete the 1 GiB file inside the alpine container
/ # rm -f 1GiB
then run fstrim manually from a terminal in the host:
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n fstrim /var/lib/docker
then check the file size:
$ ls -s Docker.raw
9965016 Docker.raw
The file is back to (approximately) its original size – the space has finally been freed!
Hopefully this is helpful. Also check out the following macOS Docker utility scripts for this problem:
https://github.com/wanliqun/macos_docker_toolkit
Docker on Mac has an additional problem that is hurting a lot of people: the docker.qcow2 file can grow out of proportion (up to 64 GB) and won't ever shrink back down on its own.
https://github.com/docker/for-mac/issues/371
As stated in one of the replies by djs55, a fix is being planned, but it's not a quick fix. Quote:
The .qcow2 is exposed to the VM as a block device with a maximum size
of 64GiB. As new files are created in the filesystem by containers,
new sectors are written to the block device. These new sectors are
appended to the .qcow2 file causing it to grow in size, until it
eventually becomes fully allocated. It stops growing when it hits this
maximum size.
...
We're hoping to fix this in several stages: (note this is still at the
planning / design stage, but I hope it gives you an idea)
1) we'll switch to a connection protocol which supports TRIM, and
implement free-block tracking in a metadata file next to the qcow2.
We'll create a compaction tool which can be run offline to shrink the
disk (a bit like the qemu-img convert but without the dd if=/dev/zero
and it should be fast because it will already know where the empty
space is)
2) we'll automate running of the compaction tool over VM reboots,
assuming it's quick enough
3) we'll switch to an online compactor (which is a bit like a GC in a
programming language)
We're also looking at making the maximum size of the .qcow2
configurable. Perhaps 64GiB is too large for some environments and a
smaller cap would help?
Update 2019: many updates have been done to Docker for Mac since this answer was posted to help mitigate problems (notably: supporting a different filesystem).
Cleanup is still not fully automatic though, you may need to prune from time to time. For a single command that can help to cleanup disk space, see zhongjiajie's answer.
docker container prune
docker system prune
docker image prune
docker volume prune
Since nothing here was working for me, here's what I did. Check file size:
ls -lhks ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
Then, in Docker Desktop, simply reduce the disk image size (I was using the raw format). It will say it will delete everything, but by the time you are reading this post, you probably already have. That creates a fresh, empty file.
I'm not sure if it is related to the current topic, but this has been a solution for me personally:
Open Docker settings -> Resources -> Disk image size -> 16 GB
There are several options for limiting Docker disk space; I'd start by limiting/rotating the logs: Docker container logs taking all my disk space
E.g. if you have a recent Docker version, you can start containers with a --log-opt max-size=50m option per container.
Also, if you've got old, unused containers, consider having a look at the Docker logs, which are located at /var/lib/docker/containers/*/*-json.log
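If you'd rather apply a limit to every container instead of passing flags per container, the json-file log driver can also be configured globally in /etc/docker/daemon.json (a minimal sketch; it only affects containers created after the daemon is restarted):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}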
$ sudo docker system prune
WARNING! This will remove:
all stopped containers
all networks not used by at least one container
all dangling images
all dangling build cache
I want to put a Docker image onto a USB HD and then be able to plug that into any [Linux] machine that has Docker and run the image. How would I go about doing that?
So far, I've discovered that you can "export" a Docker image into a flat file, but it appears you can't do anything with it until you "import" it again. That's no good. My ultimate goal is to run this stuff from a boot CD, which obviously won't have any writable storage to "import" the data into.
So remember that Docker is a service running on your [Linux] machine.
You have the following options:
Build and run the Dockerfile located on your USB Drive
docker build -t my_image --file /path/to/Dockerfile/on/usb/drive . && docker container run -d my_image
Create a docker-compose.yml next to the Dockerfile on your USB drive and run Compose against it
docker-compose -f /path/on/usb/drive/docker-compose.yml up -d --build
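A minimal docker-compose.yml to go with that command (the service name and build path are placeholders):
version: "3"
services:
  my_service:
    build:
      context: /path/to/build/context/on/usb/drive
      dockerfile: Dockerfile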
In the end, the container will always run on the host machine, but you can take that USB drive to any machine and run the Dockerfile anywhere.
OK, so it appears there are two main locations that the Docker daemon uses:
/var/lib/docker holds all the Docker images.
/var/run/docker holds... actually I'm not sure.
The solution I came up with is this:
Delete (!!) the /var/lib/docker folder.
Create a symlink named /var/lib/docker which points to where you actually want the data stored.
Then (and only then) start the Docker daemon.
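A minimal sketch of those steps, assuming the external drive is already mounted at /mnt/usb (adjust to your own mount point):
sudo service docker stop                      # make sure the daemon is not running
sudo rm -rf /var/lib/docker                   # WARNING: deletes all existing Docker data
sudo mkdir -p /mnt/usb/docker                 # hypothetical location on the USB device
sudo ln -s /mnt/usb/docker /var/lib/docker
sudo service docker start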
This seems to result in Docker storing its data where you tell it to. In particular, if you symlink to a folder on an external USB device, Docker will store its state there. You can then repeat this procedure on another machine (maybe one without Internet access) and access the image(s).
Mind you, this stores the entire state of the Docker daemon, not just one image. But I haven't yet found a way around that.
You also wouldn't want to do this to a "real" computer; I want this for a boot CD, where next time you reboot, all the changes to the filesystem will just disappear again.
Another possibility: It's possible to run two Docker daemons on the same host, and to pass images between them. So you could start one daemon running on USB storage, load the necessary image(s) into it, and then on another machine start Docker running on the same USB device.
To run an alternative Docker daemon, you need the following incantations:
containerd \
--state-dir /mnt/Docker/containerd \
--listen unix:///mnt/Docker/containerd.sock
dockerd \
--pidfile /mnt/Docker/dockerd.pid \
--data-root /mnt/Docker/Data \
--exec-root /mnt/Docker/Exec \
--containerd /mnt/Docker/containerd.sock \
--host unix:///mnt/Docker/dockerd.sock
For this to work, the directory /mnt/Docker needs to already exist. The other files, sockets and directories appear to get created automatically.
Both containerd and dockerd accept a --debug option that makes them output a lot more info to the console. Both of these are daemons, so the commands above never return.
Once the new dockerd is running, you can talk to it as normal if you manually specify the socket:
docker --host unix:///mnt/Docker/dockerd.sock info
You might want to define that as a shell alias to save some typing.
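For example (the alias name is arbitrary):
alias dockerusb='docker --host unix:///mnt/Docker/dockerd.sock'
dockerusb info
dockerusb images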
You can copy an image from the "normal" Docker daemon to the new one you just created like so:
docker save ubuntu:latest | docker --host unix:///mnt/Docker/dockerd.sock load
I've recently learned that there is a disk limit for Docker containers; on my system it is 50 GB. I wonder if there is a way to bump up the disk limit for the same container, without creating a new one.
I created the container as this:
nvidia-docker run -dit -v host_dir:docker_local_dir -p 5000:8080 --name Test_Container --privileged Test_Image /bin/bash
After detaching from the container (probably a bad idea!), I wasn't able to attach to it anymore:
$ nvidia-docker exec -it Test_Container /bin/bash
Error response from daemon: Container ada1..230032 is not running
I really don't want to create a new container and redo lots of logistics.
Any ideas?
Thanks!
It seems that you can't expand the size further. Quoting the documentation:
$ sudo dockerd --storage-opt dm.basesize=50G
This will increase the base device size to 50G. The Docker daemon will
throw an error if existing base device size is larger than 50G. A user
can use this option to expand the base device size however shrinking
is not permitted.
To overcome this, map the particular folder of the container that you know will grow large to a location in the host OS (as you already seem to be doing in your command). You can do it like this:
docker run -v /host/path/tmp:/container/path/tmp
This says that everything that would be saved in /container/path/tmp should instead be saved in /host/path/tmp. In this way you don't have any limits other than the physical ones of the machine. With this approach, however, you must recreate the container.
Not a solution by Docker per se; that doesn't seem feasible yet.
But to avoid recreating the container, I found that as long as the 'host_dir' is not local (e.g., a GPFS directory, etc.), you have a good chance of being able to mount it from within Docker:
mount -v -t cifs [network folder address/path] [mount folder] -o user=[user id/name],sec=ntlm
Adjust the mount options accordingly.
I've just noticed that I ran out of disk space on my laptop. Quite a lot is used by Docker as found by mate-disk-usage-analyzer:
The docker/aufs/diff folder contains 152 folders ending in -removing.
I already ran the following commands to clean up
Kill all running containers:
# docker kill $(docker ps -q)
Delete all stopped containers
# docker rm $(docker ps -a -q)
Delete all images
# docker rmi $(docker images -q)
Remove unused data
# docker system prune
And some more
# docker system prune -af
But the screenshot was taken after I executed those commands.
What is docker/aufs/diff, why does it consume that much space and how do I clean it up?
I have Docker version 17.06.1-ce, build 874a737. It happened after a cleanup, so this is definitely still a problem.
The following is a radical solution. IT DELETES ALL YOUR DOCKER STUFF. INCLUDING VOLUMES.
$ sudo su
# service docker stop
# cd /var/lib/docker
# rm -rf *
# service docker start
See https://github.com/moby/moby/issues/22207#issuecomment-295754078 for details
It might not be /var/lib/docker
The docker location might be different in your case. You can use a disk usage analyzer (such as mate-disk-usage-analyzer) to find the folders which need most space.
See Where are Docker images stored on the host machine?
This dir is where container rootfs layers are stored when using the AUFS storage driver (default if the AUFS kernel modules are loaded).
If you have a bunch of *-removing dirs, this is caused by a failed removal attempt. This can happen for various reasons, the most common is that an unmount failed due to device or resource busy.
Before Docker 17.06, if you used docker rm -f to remove a container, all container metadata would be removed even if there was some error somewhere in the cleanup of the container (e.g., failing to remove the rootfs layer).
In 17.06 it will no longer remove the container metadata and instead flag the container with a Dead status so you can attempt to remove it again.
You can safely remove these directories, but I would stop Docker first, then remove them, then start Docker back up.
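A minimal sketch of that sequence, assuming the default /var/lib/docker location (double-check the path before deleting anything):
sudo service docker stop
sudo sh -c 'rm -rf /var/lib/docker/aufs/diff/*-removing'
sudo service docker start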
Docker eats up gigabytes in a few main areas:
Check the downloaded and built images.
Clean unused and dead images by running the command below:
docker image prune -a
Docker creates lots of volumes; some of them belong to dead containers that are no longer used.
Clean the volumes and reclaim the space using:
docker system prune -af && \
docker image prune -af && \
docker system prune -af --volumes && \
docker system df
Docker container logs are also notorious for generating GBs of data.
The overlay2 storage for container layers is another source of GBs being eaten up.
A better way is to calculate the size of the Docker image and then restrict the container with an upper cap on storage and logs, using the instructions below.
These features require Docker v19 and above.
docker run -it --storage-opt size=2G --log-opt mode=non-blocking --log-opt max-buffer-size=4m fedora /bin/bash
Note that this is actually a known, yet still pending, issue: https://github.com/moby/moby/issues/37724
If you have the same issue, I recommend giving the issue a "Thumbs Up" on GitHub so that it gets addressed soon.
I had the same issue.
In my case the solution was:
View all images:
docker images
Remove old unused images:
docker rmi IMAGE_ID
You may also need to prune stopped containers:
docker container prune
P.S. docker --help is a good resource :)
I am trying to add a directory to the container I just created, but I can't. The following are the steps I have taken.
docker images
isbhatt/prefixman v1 cbeed3545d24 About an hour ago 1.044 GB
Then
docker run -v /media/sf_MY_WINDOWS/GitRepo/SDS/SDSNG/:/tmp/SDSNG --name "prefixman_v1" isbhatt/prefixman:v1
Then I committed that container:
docker commit -m "prefixman_v1" 35fb30be015c
which gave me an ID, and I tagged the resulting image with
docker tag b9873e80b6d0d68bf605b1ead34ba08f2c044b6cea03f7f57553a97f89845fbe prefixman_v1
Then I started a container from the fresh image by running:
docker run -it prefixman_v1 /bin/bash
What I can see is that the SDSNG directory exists in /tmp in the container, but it is empty.
Where am I going wrong??
To elaborate on what larsks said, you should read my answer to Can Docker containers (NOT Docker images) be moved?
A Docker container is an isolated process with a network interface and, by default, 10 GB of disk space. This 10 GB should be quite enough for some code and some config files. If you need to deal with data, Docker offers volumes.
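For example, a minimal sketch of both styles of volume (the image name and paths are placeholders):
# bind-mount a host directory into the container
docker run -v /host/data:/data my_image
# or use a named volume managed by Docker
docker volume create mydata
docker run -v mydata:/data my_image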
A must-read is
https://docs.docker.com/userguide/dockervolumes/
or
http://container-solutions.com/understanding-volumes-docker/