Disk space taken by Docker Bind Mount and how to delete it?

After learning and using Docker with bind mounts, I have realized that a lot of extra space has been taken up on my disk...
I have been able to remove all the volumes from Docker Desktop (not much difference), but I have no idea where to find the bind mounts I have created, how to check the space they take, and finally, how to delete them.
I have run docker system df and got:
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          0       0       0B        0B
Containers      0       0       0B        0B
Local Volumes   0       0       0B        0B
Build Cache     500     0       16.09GB   16.09GB
What is that Build Cache, and how do I deal with it?
I have checked also the Docker Engine config file:
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "features": {
    "buildkit": true
  }
}
Does that 20GB defaultKeepStorage mean that the space has already been taken from my disk for Docker?
I deleted ALL my containers, images and volumes, so I only have that Cache.

You can use the docker builder prune -a command to remove all unused build cache; the defaultKeepStorage setting is only a garbage-collection threshold, not space reserved up front. HERE you can find a nice script that periodically checks space usage and runs the cleanup.
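A minimal sketch of such a cleanup script, assuming a weekly retention window and a nightly cron schedule (both are illustrative, not from the linked script):

#!/bin/sh
# prune-docker-cache.sh - drop build cache that hasn't been used for 7 days
# run from cron, e.g.: 0 3 * * * /usr/local/bin/prune-docker-cache.sh
docker builder prune --force --filter "until=168h"
# or remove ALL build cache, including entries still shared with images:
# docker builder prune --all --force

The until filter keeps recently used cache so your next build stays fast; --all is the aggressive variant that reclaims everything.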

Related

no space left on device on docker desktop for macos and skaffold

I had tried following the advice here, specifically:
ran docker system prune, which freed about 6GB
increased the Disk image size in Docker Desktop preferences to 64 GB (43 GB used)
but am still seeing this when running skaffold: exiting dev mode because first build failed: couldn't build "user/orders": docker build: Error response from daemon: Error processing tar file(exit status 1): write /tsconfig.json: no space left on device. Another run of skaffold gave me this on another occasion:
exiting dev mode because first build failed: couldn't build "user/orders": unable to stream build output: failed to create rwlayer: mkdir /var/lib/docker/overlay2/7c6618702ad15fe0fa7d4655109aa6326fb4f954df00d2621f62d66d7b328ed9/diff: no space left on device
Also, when running docker system df, I see this:
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          10      0       28.86GB   28.86GB (100%)
Containers      0       0       0B        0B
Local Volumes   30      0       15.62GB   15.62GB (100%)
Build Cache     0       0       0B        0B
I also have about 200GB of physical hard drive space available.
I'm hoping I don't have to manually run rm * as proposed here, which was for a linux distro.
If you're running on Mac and already have 200GB free, will increasing the disk image size actually help you?
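Since the docker system df output above shows both Images and Local Volumes as 100% reclaimable, a hedged sketch of the commands that should free that ~44GB before resorting to resizing (note that pruning volumes deletes their data):

docker image prune -a      # remove all images not referenced by a container
docker volume prune        # remove all volumes not used by a container
# or in one shot, also removing stopped containers and unused networks:
docker system prune -a --volumes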

Docker overlay2 eating Disk Space

Below is the overlay2 file system eating disk space, on Ubuntu Linux 18.04 LTS. The server has 125GB of disk space:
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/9ac0eb938cd2a50bb87e8ed13605d3f09214fdd9c8967f18dfc3f9432701fea7/merged
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/397b099799212060ee7a4718660aa13aba8aa1fbb92f4d88d86fbad94e572847/merged
shm 64M 0 64M 0% /var/lib/docker/containers/7ffb129016d187a61a31c33f9e468b98d0ac7ab1771b87631f6caade5b84adc6/mounts/shm
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/df7c4acee73f7aa2536d2a8929a48241bc8e92a5f7b9cb63ab70cea731b52cec/merged
Another solution, if the above doesn't work, is to set up log rotation.
nano /etc/docker/daemon.json
If the file is not found, create it:
cat > /etc/docker/daemon.json
Add the following lines to the file:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Restart the docker daemon: systemctl restart docker
Please refer to: How to set up log rotation post installation
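To verify that container logs are actually what is eating the space, a quick check, assuming the default json-file log driver:

# list each container's JSON log file, biggest first
sudo du -h /var/lib/docker/containers/*/*-json.log | sort -rh | head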
In case someone else runs into this, here's what's happening:
Your container may be writing data (logs, deployables, downloads...) to its local filesystem, and overlay2 will create a diff on each append/create/delete, so the container's filesystem will keep growing until it fills all available space on the host.
There are a few workarounds that won't require changing the storage driver:
first of all, make sure the data saved by the container can be discarded (you probably don't want to delete your database or anything similar)
periodically stop the container, prune the system with docker system prune, and restart the container
make sure the container doesn't write to its local filesystem, but if you can't:
replace any directories the container writes to with volumes or mounts, as sketched below.
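For example, if a container writes logs to /app/logs (the image name and paths here are illustrative), a sketch of both options:

# named volume: writes go to /var/lib/docker/volumes, not the overlay2 diff layer
docker run -d --name myapp -v myapp-logs:/app/logs myimage
# bind mount: writes go to a host directory you can monitor and clean yourself
docker run -d --name myapp -v /srv/myapp/logs:/app/logs myimage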
Follow these steps if your server is Ubuntu Linux 18.04 LTS (they should work for others too).
Docker info for Overlay2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
if you get the following lines when you check usage with du -h /var/lib/docker/overlay2:
19M /var/lib/docker/overlay2/00d82017328c49c661c78ce14550c4073c50a550fe5004911bd3488b085aea76/diff
5.9M /var/lib/docker/overlay2/00e3e4fa0cbff7c242c38cfc9501ef1a523158d69b50779e08a773e7e22a01f1/diff
44M /var/lib/docker/overlay2/0e8e7e893b2c8aa17b4875d421670e058e4d97de066c970bbeab6cba566a44ba/diff
28K /var/lib/docker/overlay2/12a4c4e4877d35e9db657e4acff32e513042cb44119cca5c43fc19ad81c3915f/diff
............
............
then make the changes as follows:
First, stop docker: sudo systemctl stop docker
Next, go to /etc/docker
Check for the file daemon.json; if it is not found, create it:
cat > daemon.json
and enter the following inside:
{
  "storage-driver": "aufs"
}
then save and close the file.
Finally restart docker : sudo systemctl start docker
Check that the change has been applied with docker info:
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Dirperm1 Supported: true
Changing the storage driver can help you resolve this issue.
Please check whether your Docker version supports aufs, and which storage drivers your Linux distribution supports, in the Docker documentation.
I had a similar issue with the docker swarm.
docker system prune --volumes, restarting the server, and removing and recreating the swarm stack did not help.
In my case, I was hosting RabbitMQ, where the docker-compose config was:
services:
  rabbitmq:
    image: rabbitmq:.....
    ....
    volumes:
      - "${PWD}/queues/data:/var/lib/rabbitmq"
In such a case, each container restart, each server reboot, anything that leads to restarting the rabbitmq container, takes up more and more hard drive space.
Initial value:
$ ls -ltrh queues/data/mnesia/ | wc -l
61
$ du -sch queues/data/mnesia/
7.8G    queues/data/mnesia/
7.8G    total
After restart:
$ ls -ltrh queues/data/mnesia/ | wc -l
62
$ du -sch queues/data/mnesia/
8.3G    queues/data/mnesia/
8.3G    total
My solution was to stop the rabbitmq and remove directories in queues/data/mnesia/. Then restart the rabbitmq.
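A hedged sketch of that cleanup, assuming the compose file above and that losing the queue state kept in mnesia is acceptable:

docker-compose stop rabbitmq
# mnesia holds RabbitMQ's persistent state; deleting it wipes queues and messages
sudo rm -rf queues/data/mnesia/*
docker-compose start rabbitmq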
Maybe something is wrong with my config... But if you have such an issue, it is worth checking whether your containers' volumes are leaving some trash behind.
If you are troubled by the /var/lib/docker/overlay2 directory taking too much space (use the du command to check space usage), then the answer below may be suitable for you.
docker xxx prune commands will clean up unused things: all stopped containers (in /var/lib/docker/containers), files in the virtual filesystems of stopped containers (in /var/lib/docker/overlay2), unmounted volumes (in /var/lib/docker/volumes), and images that have no related containers (in /var/lib/docker/image). But none of this will touch containers that are running.
Limiting the size of logs in the configuration will limit the size of /var/lib/docker/containers/*/*-json.log, but it doesn't involve the overlay2 directory.
You can find two folders called merged and diff in /var/lib/docker/overlay2/<hash>/. If these folders are big, that means there is high disk usage inside the containers themselves, not on the docker host. In this case, you have to attach a terminal to the relevant containers, find the high-usage locations inside them, and take your own measures, just like Nick M said.
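A hedged sketch for mapping a large /var/lib/docker/overlay2/<hash> directory back to the container that owns it, so you know which container to attach to:

# biggest overlay2 directories first
sudo du -sh /var/lib/docker/overlay2/* | sort -rh | head
# print each container's name next to its overlay2 merged directory
docker ps -aq | xargs docker inspect --format '{{.Name}} {{.GraphDriver.Data.MergedDir}}'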

Docker `No space left on device`

I'm having an issue with running docker on a very powerful AWS linux (ubuntu) instance.
When I attempt to pull a Docker image and it is extracted, I get the following error:
docker: failed to register layer: Error processing tar file(exit status 1): write /opt/conda/lib/libmkl_mc3.so: no space left on device..
I'd like to increase the volume of space that Docker is running on in order to allow this file to download (there's plenty of space on the machine as a whole), but I'm unsure how to do this. I've trawled through a number of similar problems on here and none of the provided solutions have proven successful for me.
Any advice would be appreciated.
Output of docker system df:
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          0       0       0B        0B
Containers      0       0       0B        0B
Local Volumes   0       0       0B        0B
Build Cache     0       0       0B        0B

pvcreate not able to initialize physical volume

I have an application which calls pvcreate each time it runs.
I can see the volumes in my VM as follows:
$ pvscan
PV /dev/vda5 VG ubuntu-vg lvm2 [99.52 GiB / 0 free]
Total: 1 [99.52 GiB] / in use: 1 [99.52 GiB] / in no VG: 0 [0 ]
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5'
Can't initialize physical volume "/dev/vda5" of volume group "ubuntu-vg" without -ff
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5' -ff
Really INITIALIZE physical volume "/dev/vda5" of volume group "ubuntu-vg" [y/n]? y
Can't open /dev/vda5 exclusively. Mounted filesystem?
I have also tried wipefs and observed the same result for the above commands:
$ wipefs -af /dev/vda5
/dev/vda5: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
How can I get pvcreate to execute successfully?
Does anything need to be added to my VM?
It seems your disk (/dev/vda5) is already being used by your ubuntu-vg volume group. I think you cannot initialize the same partition as a PV while a volume group is using it, or add it again.
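A hedged sketch for confirming why pvcreate cannot open the device exclusively (standard util-linux/LVM tools):

# show the LVM signature and any LVs stacked on the partition
lsblk -f /dev/vda5
# show whether the partition, or an LV on top of it, is currently mounted
findmnt | grep -E 'vda5|ubuntu--vg'

The PV can only be re-initialized once nothing on it is mounted and the volume group no longer uses it, which destroys the data on it.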

Docker increase disk space

I have Docker running and it gives me a disk space warning. How can I increase the Docker space and start again (with the same container)?
Let's say I want to give it something like 15GB.
You can also increase disk space through the docker GUI
I assume you are talking about disk space to run your containers.
Make sure that you have enough space on whatever disk drive you are using for /var/lib/docker, which is the default used by Docker. You can change it with the -g daemon option.
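On current Docker versions the -g flag has been replaced by the data-root option; a minimal /etc/docker/daemon.json sketch, assuming the new location is /mnt/docker-data (illustrative):

{
  "data-root": "/mnt/docker-data"
}

Restart the daemon afterwards (systemctl restart docker) so it starts using the new directory.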
If you don't have enough space you may have to repartition your OS drives so that you have over 15GB. If you are using boot2docker or docker-machine you will have to grow the volume on your Virtual Machine. It will vary depending on what you are using for virtualization (e.g. VirtualBox, VMware, etc.).
For example if you are using VirtualBox and docker-machine you can start with something like this for a 40GB VM.
docker-machine create --driver virtualbox --virtualbox-disk-size "40000" default
I ran into similar problem with my docker-vm (which is 'alpine-linux' on VMware Fusion in OS X):
write error: no space left on device alpinevm:/mnt/hgfs
failed to build: .. no space left on device
.. eventually this guide helped me to resize/expand my docker volume.
TL;DR:
1 - Check size of partition containing /var/lib/docker
> df -h
/dev/sda3 17.6G 4.1G 12.6G 25% /var/lib/docker
look for '/dev/sdaN', where N is your partition for '/var/lib/docker', in my case /dev/sda3
2 - Shut down your VM, open VM Settings > Hard Disk(s) > change size of your 'virtual_disk.vmdk' (or whatever is your machine's virtual disk), then click Apply (see this guide).
3 - Install cfdisk and e2fsprogs-extra which contains resize2fs
> apk add cfdisk
> apk add e2fsprogs-extra
4 - Run cfdisk and resize/expand /dev/sda3
> cfdisk
Device       Boot    Start      End       Sectors    Size    Id  Type
/dev/sda1    *       2048       206847    204800     100M    83  Linux
/dev/sda2            206848     4241407   4034560    1.9G    82  Linux swap / Solaris
/dev/sda3            4241408    83886079  79644672   12.6G   83  Linux
[Bootable] [ Delete ] [ Resize ] [ Quit ] [ Type ] [ Help ] [ Write ] [ Dump ]
.. press down/up to select '/dev/sda3'
.. press left/right/enter to select 'Resize' -> 'Write' -> 'Quit'
5 - Run resize2fs to expand the file system of /dev/sda3
> resize2fs /dev/sda3
6 - Verify resized volume
> df -h
/dev/sda3 37.3G 4.1G 31.4G 12% /var/lib/docker
To increase the space available for Docker you will have to increase your docker-pool size. If you run
lvs
you will see the docker-pool logical volume and its size. If your docker-pool sits on a volume group that has free space, you can simply extend the docker-pool LV with
lvextend -l +100%FREE <path_to_lv>
# An example using this may look like this:
# lvextend -l +100%FREE /dev/VolGroup00/docker-pool
You can check out more docker disk space tips here
Thanks
Docker stores all layers/images in its storage-driver format (e.g. aufs) under the default /var/lib/docker directory.
If you are getting a disk space warning because of Docker, then there are probably a lot of Docker images, and you need to clean them up.
If you have the option to add disk space, you can create a separate, bigger partition and mount /var/lib/docker there, which will keep the root partition from filling up.
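A hedged sketch of that move, assuming the new partition is already mounted at /mnt/docker-data (the path is illustrative):

sudo systemctl stop docker
# copy the existing data, preserving permissions and attributes
sudo rsync -aP /var/lib/docker/ /mnt/docker-data/
# point the daemon at the new location (merge into daemon.json if it already exists)
echo '{ "data-root": "/mnt/docker-data" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker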
Some extra information on managing disk space for Docker can be found here:
http://www.scmtechblog.net/2016/06/clean-up-docker-images-from-local-to.html
