Why does /var/lib/docker/overlay2 grow too large, and why did a restart solve it? - docker

I am running a code optimizer for gzip.c in Docker, and during this process the overlay directory grows without bound and eats up my disk.
root@id17:/var/lib/docker/overlay2/fe6987bf6e686e771ba7b08cda40aa477979512e182ad30120db037024638aa0# df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/sda5 245G 245G 0 100% /
...
overlay 245G 245G 0 100% /var/lib/docker/overlay2/fe6987bf6e686e771ba7b08cda40aa477979512e182ad30120db037024638aa0/merged
Using du -h --max-depth=1, I find it is diff and merged that consumed my disk (is that right?):
root@id17:/var/lib/docker/overlay2/fe6987bf6e686e771ba7b08cda40aa477979512e182ad30120db037024638aa0# du -h --max-depth=1
125G ./diff
129G ./merged
8.0K ./work
254G .
However, when I restart Docker with systemctl restart docker, it returns to normal.
root@eb9bf52aa3a3:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 245G 190G 43G 82% /
...
/dev/sda5 245G 190G 43G 82% /etc/hosts
...
This has happened several times and I cannot continue my work, so I really wonder how I can get out of this problem. Thank you :-)

If the docker filesystem is growing, that often indicates container logs or filesystem changes in the container. Logs can be seen with docker logs, and filesystem changes are shown with docker diff. Since you see a large diff folder, it's going to be the latter.
Those filesystem changes survive a restart of the container; they are only cleaned up when the container is removed and replaced with a new one. So if restarting the container resolves it, my suspicion is that your application is deleting files on disk but still holding the file handles open with the kernel, possibly still writing to those handles.
The other possibility is that stopping or starting your application is what deletes the files.
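A minimal, runnable sketch of that suspicion, with temp paths and this shell standing in for the application: a deleted file keeps consuming disk space for as long as any process holds an open handle to it, which is why df and du can disagree until the process (or daemon) restarts.

```shell
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/big" bs=1M count=10 2>/dev/null
exec 3<"$tmpdir/big"      # some process (here: this shell) keeps the file open
rm "$tmpdir/big"          # directory entry gone, but the space is NOT freed
ls -l /proc/$$/fd/3       # on Linux, the link target ends in "(deleted)"
exec 3<&-                 # only closing the handle releases the space
rmdir "$tmpdir"
```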

Related

Rancher (Docker) disk usage cleanup

My Rancher system started to use a heavy amount of disk space. Kubernetes was set up by Rancher's RKE.
Disk usage is already over 5 TB, although I only have 10-12 ReplicaSets; their real data is bound to PVs that use NFS (which is only 10 GB in size).
df -h --total clearly shows which entries take up so much space:
Filesystem Size Used Avail Use% Mounted on
overlay 98G 78G 16G 84% /var/lib/docker/overlay2/84db..somehash/merged
I have ~50-60 entries like this.
How can I clean these up? Is there a maintenance feature in Rancher for this? I couldn't find any.
Kubernetes's garbage collection should be cleaning up your nodes.
This looks a lot like an issue I saw with some log collectors, such as Splunk and Datadog.
If the following usage numbers do not match up, then use the script below to release the file descriptors:
df -h /var/lib/docker
docker system df
Workaround:
ps aux | grep dockerd                  # note the dockerd PID
cd /proc/"$(pgrep -o dockerd)"/fd      # or substitute the PID by hand
ls -l | grep var.log.journal | grep 'deleted)$' | awk '{print $9}' | while read x; do : > "$x"; done
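The truncation trick works because opening /proc/&lt;pid&gt;/fd/&lt;n&gt; with O_TRUNC reclaims a deleted file's space without restarting the process that holds it open. A self-contained sketch of the same mechanism, with this shell standing in for dockerd:

```shell
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=5 2>/dev/null
exec 4<>"$f"                 # this shell holds the "log" open, like dockerd would
rm "$f"                      # deleted, but 5 MB are still held
: > /proc/$$/fd/4            # reopen via /proc with O_TRUNC, as the loop above does
stat -Lc %s /proc/$$/fd/4    # the held file is now 0 bytes
exec 4<&-
```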

Move docker volume to different partition

I have a server where I run some containers with volumes. All my volumes are in /var/lib/docker/volumes/ because Docker manages them. I use docker-compose to start my containers.
Recently, I tried to stop one of my containers, but it was impossible:
$ docker-compose down
[17849] INTERNAL ERROR: cannot create temporary directory!
So, I checked how the data is mounted on the server :
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7,8G 0 7,8G 0% /dev
tmpfs 1,6G 1,9M 1,6G 1% /run
/dev/md3 20G 19G 0 100% /
tmpfs 7,9G 0 7,9G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 7,9G 0 7,9G 0% /sys/fs/cgroup
/dev/md2 487M 147M 311M 33% /boot
/dev/md4 1,8T 1,7G 1,7T 1% /home
tmpfs 1,6G 0 1,6G 0% /run/user/1000
As you can see, / is only 20 GB, so it is full and I can't stop my containers using docker-compose.
My questions are:
Is there a simple solution to increase the available space on /, using /dev/md4?
Or can I move the volumes somewhere else without losing data?
This part of the Docker daemon is configurable. Best practice would have you change the data root; while this can be done with OS-level tricks like a symlink, it's better to actually configure the Docker daemon to store its data elsewhere!
You can do that by editing the Docker command line (e.g. the systemd unit that starts the Docker daemon), or by changing /etc/docker/daemon.json.
The file should have this content:
{
  "data-root": "/path/to/your/docker"
}
If you add a new hard drive, partition, or mount point you can add it here and docker will store its data there.
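A hedged sketch of the whole change (the /mnt/bigdisk/docker path is just an example). The file is staged under /tmp so the snippet is safe to run as-is; the privileged migration steps are left as comments:

```shell
new_root=/mnt/bigdisk/docker      # example path; use your new partition
cat > /tmp/daemon.json <<EOF
{
  "data-root": "$new_root"
}
EOF
# On a real system, the full sequence would be roughly:
#   systemctl stop docker
#   rsync -aP /var/lib/docker/ "$new_root"/
#   cp /tmp/daemon.json /etc/docker/daemon.json
#   systemctl start docker
#   docker info --format '{{ .DockerRootDir }}'   # should print the new path
cat /tmp/daemon.json
```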
I landed here as I had the very same issue. Even though some sources suggest you can do it with a symbolic link, this causes all kinds of issues.
Depending on the OS and Docker version, I got malformed images, weird errors, or the Docker daemon refusing to start.
Here is a solution, though it seems to vary a little from version to version. For me it was:
Open
/lib/systemd/system/docker.service
And change this line
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to:
ExecStart=/usr/bin/dockerd -g /mnt/WHATEVERYOUR/PARTITIONIS/docker --containerd=/run/containerd/containerd.sock
(On newer Docker versions, -g is the deprecated spelling of --data-root.)
I solved it by creating a symbolic link to a partition with more space:
ln -s /scratch/docker_meta /var/lib/docker
/scratch/docker_meta is a folder that lives on the bigger partition.
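A runnable sketch of those move-then-symlink steps, with temporary directories standing in for /scratch and /var/lib (on a real system, stop Docker before moving the directory):

```shell
scratch=$(mktemp -d)                 # stands in for /scratch
fake_var=$(mktemp -d)                # stands in for /var/lib
mkdir -p "$fake_var/docker"
echo x > "$fake_var/docker/data"     # pretend Docker state
mv "$fake_var/docker" "$scratch/docker_meta"
ln -s "$scratch/docker_meta" "$fake_var/docker"
cat "$fake_var/docker/data"          # still readable through the link
```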
Do a bind mount.
For example, to move /docker/volumes to /mnt/large:
Append this line to /etc/fstab:
/mnt/large /docker/volumes none bind 0 0
And then:
mv /docker/volumes/* /mnt/large/
mount /docker/volumes
Don't forget to chown and chmod /mnt/large first if you are running Docker as non-root.

Check that Docker container has enough disk space

My hard disk is getting full, and I suspect that my Docker container may not have enough disk space.
How can I check that the system allocated enough free disk space for Docker?
My OS is OSX.
Docker for Mac's data is all stored in a VM which uses a thin-provisioned qcow2 disk image. This image grows with usage, but never automatically shrinks (which may be fixed in 1.13).
The image file is stored in your home directory's Library area:
mac$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux
mac$ ls -l Docker.qcow2
-rw-r--r-- 1 user staff 46671265792 31 Jan 22:24 Docker.qcow2
Inside the VM
Attach to the VM's tty with screen (brew install screen if you don't have it)
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
If you get a login prompt, user is root with no password. Otherwise just press enter. Then you can run the df commands on the Linux VM.
/ # df -h /var/lib/docker
Filesystem Size Used Available Use% Mounted on
/dev/vda2 59.0G 14.9G 41.1G 27% /var
Note that this matches the df output inside a container (when using aufs or overlay)
mac$ docker run debian df -h
Filesystem Size Used Avail Use% Mounted on
overlay 60G 15G 42G 27% /
tmpfs 1.5G 0 1.5G 0% /dev
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/vda2 60G 15G 42G 27% /etc/hosts
shm 64M 0 64M 0% /dev/shm
Also note that while the VM is only using 14.9G of the 60G, the file size is 43G.
mac$ du -h Docker.qcow2
43G Docker.qcow2
The easiest way to fix the size is to back up any volume data, "Reset" Docker from the Preferences menu, and start again. It appears 1.13 has resolved the issue and will run a compaction on shutdown.
screen notes
Exit the screen session with ctrl-a then d
The Docker VM's tty gets messed up after I exit screen, and I have to restart Docker to get a functional terminal back for a new session.

tar fills up my HDD

I'm trying to tar a pretty big folder (~11 GB), and while tarring, my VM crashes because its disk is full. But... I still have plenty of room available on all disks except /:
$ sudo df -h
Filesystem Size Used Avail Use% Mounted on
udev 3,9G 0 3,9G 0% /dev
tmpfs 799M 9,3M 790M 2% /run
/dev/sda1 9,1G 3,1G 5,6G 36% /
/dev/sda2 69G 37G 29G 57% /home
/dev/sdb1 197G 87G 100G 47% /docker
I assume tar is buffering somewhere on / and fills it up before my OS crashes, but I have no idea how to prevent this. Do you guys have any idea?
Cheers,
Olivier
Tar normally builds the archive in the current directory, as a hidden file. Try cd'ing to one of your larger partitions' mount points and tarring from there to see if it makes a difference. You may also be running out of inodes:
No Space Left on Device, Running out of Inodes
I ran into a similar problem with a server because of too many small files. Even if you have plenty of free space left, you might run into this issue.
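A runnable sketch of the tip above, with temporary directories standing in for the real paths: pointing -f at the larger partition keeps the archive off / entirely.

```shell
src=$(mktemp -d)     # stands in for the ~11 GB folder
dest=$(mktemp -d)    # stands in for the roomy /docker partition
echo data > "$src/file.txt"
tar -czf "$dest/backup.tar.gz" -C "$src" .   # archive written to the big disk
tar -tzf "$dest/backup.tar.gz"               # list the contents to verify
```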

Files deleted inside docker container not freeing space

I have a container running, and by default it uses 10 GB of space. Last night the container's space was filled up by log files generated by the system. The log file grew to 8 GB; I emptied it, but my container is still 100% full. It never released the 8 GB cleared from the log file. Any idea?
root@c7:/app# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-264176-9aff6 10G 10G 20K 100% /
root@c7:/app# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/docker-202:1-264176-9aff6 68368 67605 763 99% /
Thanks,
Manish Joshi
Maybe you can try running this command on the host:
fstrim /proc/$(docker inspect --format='{{ .State.Pid }}' <cid>)/root
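Given the df -i output in the question (inodes at 99%), inode exhaustion is another possible culprit. A small sketch (the directory arguments are examples) that counts filesystem entries under each given directory, to show where the inodes went:

```shell
count_entries() {
  # One line per directory: "<entry count><TAB><path>", largest first.
  for d in "$@"; do
    printf '%s\t%s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
  done | sort -rn
}
count_entries /var/log /tmp
```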
