Docker increase disk space

I have a Docker container running and it gives me a disk space warning. How can I increase the Docker disk space and start the same container again?
Let's say I want to give it something like 15 GB.

You can also increase the available disk space through the Docker GUI.

I assume you are talking about disk space to run your containers.
Make sure that you have enough space on whatever disk drive you are using for /var/lib/docker, which is the default location used by Docker. You can change it with the -g daemon option (on newer versions, --data-root).
If you don't have enough space you may have to repartition your OS drives so that you have over 15 GB. If you are using boot2docker or docker-machine you will have to grow the volume on your virtual machine. How you do this will vary depending on what you are using for virtualization (i.e. VirtualBox, VMware, etc.).
For example, if you are using VirtualBox and docker-machine, you can start with something like this for a 40 GB VM:
docker-machine create --driver virtualbox --virtualbox-disk-size "40000" default
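On newer Docker versions you could instead point the data directory at a bigger disk via daemon.json; a minimal sketch, assuming a larger disk is mounted at /mnt/bigdisk (a hypothetical path):
# stop the daemon, point data-root at the larger disk, then restart
sudo systemctl stop docker
sudo mkdir -p /mnt/bigdisk/docker
echo '{ "data-root": "/mnt/bigdisk/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
# note: existing images stay under /var/lib/docker unless you copy them to the new location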

I ran into a similar problem with my docker-vm (which is Alpine Linux on VMware Fusion in OS X):
write error: no space left on device alpinevm:/mnt/hgfs
failed to build: .. no space left on device
.. eventually this guide helped me to resize/expand my docker volume.
TL;DR:
1 - Check size of partition containing /var/lib/docker
> df -h
/dev/sda3 17.6G 4.1G 12.6G 25% /var/lib/docker
look for '/dev/sdaN', where N is the number of the partition holding '/var/lib/docker'; in my case it is /dev/sda3
2 - Shut down your VM, open VM Settings > Hard Disk(s) > change size of your 'virtual_disk.vmdk' (or whatever is your machine's virtual disk), then click Apply (see this guide).
3 - Install cfdisk and e2fsprogs-extra which contains resize2fs
> apk add cfdisk
> apk add e2fsprogs-extra
4 - Run cfdisk and resize/expand /dev/sda3
> cfdisk
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 206847 204800 100M 83 Linux
/dev/sda2 206848 4241407 4034560 1.9G 82 Linux swap / Solaris
/dev/sda3 4241408 83886079 79644672 12.6G 83 Linux
[Bootable] [ Delete ] [ Resize ] [ Quit ] [ Type ] [ Help ] [ Write ] [ Dump ]
.. press down/up to select '/dev/sda3'
.. press left/right/enter to select 'Resize' -> 'Write' -> 'Quit'
5 - Run resize2fs to expand the file system of /dev/sda3
> resize2fs /dev/sda3
6 - Verify resized volume
> df -h
/dev/sda3 37.3G 4.1G 31.4G 12% /var/lib/docker

To increase the space available for Docker you will have to increase your docker-pool size (docker-pool is the devicemapper thin pool). If you run
lvs
you will see the docker-pool logical volume and its size. If your docker-pool sits on a volume group that has free space, you can simply grow the docker-pool LV with:
lvextend -l +100%FREE <path_to_lv>
# An example using this may look like this:
# lvextend -l +100%FREE /dev/VolGroup00/docker-pool
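If the volume group itself has no free space left, a rough sketch of adding a new disk to it first (the device name /dev/sdc and the volume group name VolGroup00 are assumptions for illustration) would be:
pvcreate /dev/sdc                                   # initialize the new disk for LVM
vgextend VolGroup00 /dev/sdc                        # add it to the volume group
lvextend -l +100%FREE /dev/VolGroup00/docker-pool   # then grow the docker-pool LV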
You can check out more Docker disk space tips here.
Thanks

Docker stores all layers/images in its file format (e.g. aufs) under the default /var/lib/docker directory.
If you are getting a disk space warning because of Docker, there are probably a lot of Docker images and you need to clean them up.
If you have the option to add disk space, you can create a separate, bigger partition and mount /var/lib/docker there, which will keep the root partition from filling up.
Some extra information on managing disk space for Docker can be found here:
http://www.scmtechblog.net/2016/06/clean-up-docker-images-from-local-to.html
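A typical cleanup pass with the standard Docker CLI looks roughly like this (review what will be deleted before confirming):
docker system df            # see how much space images, containers and volumes use
docker image prune -a       # remove images not used by any container
docker system prune         # remove stopped containers, dangling images, unused networks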

Related

Why /var/lib/docker/overlay2 grows too large and a restart solved it

I am running a code optimizer for gzip.c in Docker, and in this process the overlay2 directory grows infinitely large and eats up my disk.
root@id17:/var/lib/docker/overlay2/fe6987bf6e686e771ba7b08cda40aa477979512e182ad30120db037024638aa0# df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/sda5 245G 245G 0 100% /
...
overlay 245G 245G 0 100% /var/lib/docker/overlay2/fe6987bf6e686e771ba7b08cda40aa477979512e182ad30120db037024638aa0/merged
By using du -h --max-depth=1 I find that it is diff and merged that consumed my disk (is it?):
root@id17:/var/lib/docker/overlay2/fe6987bf6e686e771ba7b08cda40aa477979512e182ad30120db037024638aa0# du -h --max-depth=1
125G ./diff
129G ./merged
8.0K ./work
254G .
However, when I restart Docker (systemctl restart docker), it returns to normal:
root@eb9bf52aa3a3:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 245G 190G 43G 82% /
...
/dev/sda5 245G 190G 43G 82% /etc/hosts
...
root@id17:/var/lib/docker/overlay2/fe6987bf6e686e771ba7b08cda40aa477979512e182ad30120db037024638aa0# du -h --max-depth=1
125G ./diff
129G ./merged
8.0K ./work
254G .
It has happened several times and I cannot continue my work, so I really wonder how I can get out of this problem. Thank you :-)
If the docker filesystem is growing, that often indicates container logs, or filesystem changes in the container. Logs you can see with docker logs and filesystem changes are shown with docker diff. Since you see a large diff folder, it's going to be the latter.
Those filesystem changes will survive a restart of the container; they get cleaned up when the container is removed and replaced with a new container. So if restarting the container resolves it, my suspicion is that your application is deleting the files on disk but still has the file handles open to the kernel, possibly still writing to those file handles.
The other option is that the stop or start of your application is deleting the files.
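To confirm which of the two it is, you could inspect the container with the commands mentioned above (the container name my_app is just a placeholder):
docker logs --tail 100 my_app        # check how much the container is logging
docker diff my_app | head -n 50      # list filesystem changes inside the container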

Docker overlay2 eating Disk Space

Below is the filesystem output showing overlay2 eating disk space, on Ubuntu Linux 18.04 LTS.
Disk space of the server: 125 GB
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/9ac0eb938cd2a50bb87e8ed13605d3f09214fdd9c8967f18dfc3f9432701fea7/merged
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/397b099799212060ee7a4718660aa13aba8aa1fbb92f4d88d86fbad94e572847/merged
shm 64M 0 64M 0% /var/lib/docker/containers/7ffb129016d187a61a31c33f9e468b98d0ac7ab1771b87631f6caade5b84adc6/mounts/shm
overlay 124G 6.0G 113G 6% /var/lib/docker/overlay2/df7c4acee73f7aa2536d2a8929a48241bc8e92a5f7b9cb63ab70cea731b52cec/merged
Another solution, if the above doesn't work, is to set up log rotation.
nano /etc/docker/daemon.json
If the file is not found, create it:
cat > daemon.json
Add the following lines to the file:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Restart the docker daemon: systemctl restart docker
Please refer: How to setup log rotation post installation
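Note that these log options only apply to containers created after the daemon restart. For an existing oversized JSON log you could also find it and truncate it in place (the container ID below is a placeholder):
du -sh /var/lib/docker/containers/*/*-json.log           # find the biggest logs
sudo truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log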
In case someone else runs into this, here's what's happening:
Your container may be writing data (logs, deployables, downloads...) to its local filesystem, and overlay2 will create a diff on each append/create/delete, so the container's filesystem will keep growing until it fills all available space on the host.
There are a few workarounds that won't require changing the storage driver:
first of all, make sure the data saved by the container may be discarded (you probably don't want to delete your database or anything similar)
periodically stop the container, prune the system (docker system prune) and restart the container
make sure the container doesn't write to its local filesystem, but if you can't:
replace any directories the container writes to with volumes or bind mounts (see the sketch below).
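For example, a named volume for a directory the container writes to heavily might look like this (the image name, container name and the /app/logs path are assumptions for illustration):
docker volume create applogs
docker run -d --name my_app -v applogs:/app/logs my_image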
Follow these steps if your server is Linux Ubuntu 18.04 LTS (should work for others too).
Docker info for Overlay2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
If you get the following lines when you enter df -h --total:
19M /var/lib/docker/overlay2/00d82017328c49c661c78ce14550c4073c50a550fe5004911bd3488b085aea76/diff
5.9M /var/lib/docker/overlay2/00e3e4fa0cbff7c242c38cfc9501ef1a523158d69b50779e08a773e7e22a01f1/diff
44M /var/lib/docker/overlay2/0e8e7e893b2c8aa17b4875d421670e058e4d97de066c970bbeab6cba566a44ba/diff
28K /var/lib/docker/overlay2/12a4c4e4877d35e9db657e4acff32e513042cb44119cca5c43fc19ad81c3915f/diff
............
............
then do the changes as follows:
First, stop Docker: sudo systemctl stop docker
Next: go to the path /etc/docker
Check for the file daemon.json; if not found, create it:
cat > daemon.json
and enter the following inside:
{
  "storage-driver": "aufs"
}
and save and close the file.
Finally restart docker : sudo systemctl start docker
Check if the changes have been made:
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Dirperm1 Supported: true
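One way to verify the active driver after restarting is:
docker info | grep -i "storage driver"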
Changing the storage driver can help you resolve this issue.
Please check whether your Docker version supports aufs here:
Please also check which storage drivers your Linux distribution supports here:
I had a similar issue with Docker swarm.
docker system prune --volumes, restarting the server, and removing and recreating the swarm stack did not help.
In my case, I was hosting RabbitMQ where docker-compose config was:
services:
  rabbitmq:
    image: rabbitmq:.....
    ....
    volumes:
      - "${PWD}/queues/data:/var/lib/rabbitmq"
In such a case, each container restart, each server reboot, basically everything that leads to restarting the rabbitmq container, takes more and more hard drive space.
Initial value:
ls -ltrh queues/data/mnesia/ | wc -l
61
du -sch queues/data/mnesia/
7.8G queues/data/mnesia/
7.8G total
After restart:
ls -ltrh queues/data/mnesia/ | wc -l
62
du -sch queues/data/mnesia/
8.3G queues/data/mnesia/
8.3G total
My solution was to stop rabbitmq, remove the directories in queues/data/mnesia/, and then restart rabbitmq.
Maybe something is wrong with my config... But if you have such an issue, it is worth checking the volumes of your containers to see whether they leave some trash behind.
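A quick way to see which volumes are growing, and to check a bind-mounted directory directly (the path below is the one from this example):
docker system df -v            # per-image, per-container and per-volume disk usage
du -sh queues/data/mnesia/     # size of the bind-mounted data directory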
If you are troubled by the /var/lib/docker/overlay2 directory taking too much space (use the du command to check space usage), then the answer below may be suitable for you.
The docker xxx prune commands will clean up unused things, such as all stopped containers (in /var/lib/docker/containers), files in the virtual filesystems of stopped containers (in /var/lib/docker/overlay2), unused volumes (in /var/lib/docker/volumes) and images that don't have related containers (in /var/lib/docker/image). But none of this will touch containers that are running.
Limiting the size of logs in the configuration will limit the size of /var/lib/docker/containers/*/*-json.log, but it doesn't involve the overlay2 directory.
You can find two folders called merged and diff in /var/lib/docker/overlay2/<hash>/. If these folders are big, that means there is high disk usage inside the containers themselves, not on the Docker host. In this case, you have to attach a terminal to the relevant containers, find the locations with high usage inside them, and apply your own solutions.
Just like Nick M said.
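For that last case, a minimal inspection sketch (the container name my_app is a placeholder) would be:
docker exec -it my_app sh            # attach a shell to the container (or bash, depending on the image)
du -h --max-depth=1 / | sort -h      # run inside the container to find the biggest directories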

Check that Docker container has enough disk space

My hard disk is getting full and I suspect that my Docker container may not have enough disk space.
How can I check that the system allocated enough free disk space for Docker?
My OS is OS X.
Docker for Mac's data is all stored in a VM which uses a thin provisioned qcow2 disk image. This image will grow with usage, but never automatically shrink. (which may be fixed in 1.13)
The image file is stored in your home directory's Library area:
mac$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux
mac$ ls -l Docker.qcow2
-rw-r--r-- 1 user staff 46671265792 31 Jan 22:24 Docker.qcow2
Inside the VM
Attach to the VM's tty with screen (brew install screen if you don't have it)
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
If you get a login prompt, the user is root with no password. Otherwise just press enter. Then you can run df commands in the Linux VM.
/ # df -h /var/lib/docker
Filesystem Size Used Available Use% Mounted on
/dev/vda2 59.0G 14.9G 41.1G 27% /var
Note that this matches the df output inside a container (when using aufs or overlay)
mac$ docker run debian df -h
Filesystem Size Used Avail Use% Mounted on
overlay 60G 15G 42G 27% /
tmpfs 1.5G 0 1.5G 0% /dev
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/vda2 60G 15G 42G 27% /etc/hosts
shm 64M 0 64M 0% /dev/shm
Also note that while the VM is only using 14.9G of the 60G, the file size is 43G.
mac$ du -h Docker.qcow2
43G Docker.qcow2
The easiest way to fix the size is to back up any volume data, "Reset" Docker from the Preferences menu and start again. It appears 1.13 has resolved the issue and will run a compaction on shutdown.
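Backing up volume data before a reset can be done with a throwaway container; a rough sketch (the volume and archive names are assumptions):
# copy the contents of the named volume into a tarball in the current directory
docker run --rm -v my_volume:/data -v "$PWD":/backup debian tar czf /backup/my_volume.tgz -C /data .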
screen notes
Exit the screen session with ctrl-a then d
The Docker VM's tty gets messed up after I exit screen and I have to restart Docker to get a functional terminal back for a new session.

Files deleted inside docker container not freeing space

I have a container running and by default it uses 10 GB of space. Last night the container's space was filled by log files generated by the system. Since a log file grew to 8 GB, I emptied it, but my container is still 100% full. It never released the 8 GB cleared from the log file. Any idea?
root@c7:/app# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-264176-9aff6 10G 10G 20K 100% /
root@c7:/app# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/docker-202:1-264176-9aff6 68368 67605 763 99% /
Thanks,
Manish Joshi
Maybe you can try running this command on the host:
fstrim /proc/$(docker inspect --format='{{ .State.Pid }}' <cid>)/root
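For context, fstrim asks the filesystem to discard unused blocks, which lets thin-provisioned devicemapper storage reclaim them. A usage sketch with a placeholder container name:
CID=$(docker ps -q --filter name=my_app)    # my_app is a placeholder container name
sudo fstrim "/proc/$(docker inspect --format='{{ .State.Pid }}' "$CID")/root"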

LVM2 : Failing to pvcreate a block device

I'm trying to make use of the LVM2 functionality in Linux (CentOS 6.0).
When trying to make the first step of defining a PV on a specific block device, I get the following error message:
[root@localhost /] pvcreate /dev/sdb
Can't open /dev/sdb exclusively. Mounted filesystem?
/dev/sdb is not mounted and its partition table was deleted.
I should also mention that /dev/sdb used to represent a larger block device (about 4 times larger) and was reduced by reconfiguring the hardware RAID (I split the disk into 4 in the RAID controller).
Has anyone ever encountered this error before and knows how to take it from here?
Maybe device-mapper is 'stealing' this device. Try this:
[root@host ~]# dmsetup ls
sdb (253, 2)
VolGroup00-LogVol01 (253, 1)
VolGroup00-LogVol00 (253, 0)
If you find the sdb device listed as in the example above, remove it using dmsetup and then create the physical volume:
[root@host ~]# dmsetup remove sdb
[root@host ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
Running
[root@localhost /] pvcreate -vvvvv /dev/sdb
could output more details. You could also use lsof -L to check whether the block device is opened by another process.
