docker disk space grows faster than container's - docker

Docker containers that modify, add, and delete files extensively (leveldb) grow disk usage faster than the container itself reports, and eventually use up all the disk.
Here's one snapshot of df, and a second one taken later. You'll note that disk space has increased considerably (300 MBytes) from the host's perspective, but the container's self-reported disk usage has only increased by 17 MBytes. As this continues, the host runs out of disk.
Stock Ubuntu 14.04, Docker version 1.10.2, build c3959b1.
Is there some sort of trim-like issue going on here?
root@9e7a93cbcb02:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-136171-d4[...] 9.8G 667M 8.6G 8% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/disk/by-uuid/0a76513a-37fc-43df-9833-34f8f9598ada 7.8G 2.9G 4.5G 39% /etc/hosts
shm 64M 0 64M 0% /dev/shm
And later on:
root@9e7a93cbcb02:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-136171-d4[...] 9.8G 684M 8.6G 8% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/disk/by-uuid/0a76513a-37fc-43df-9833-34f8f9598ada 7.8G 3.2G 4.2G 43% /etc/hosts
shm 64M 0 64M 0% /dev/shm

This is happening because of a kernel bug fix that has not been propagated to many mainstream OS distros. It's actually quite bad for newbie Docker users who naively fire up docker on the default Amazon AMI, as I did.
Stick with CoreOS Stable and you won't have this issue. I have zero affiliation with CoreOS and frankly am greatly annoyed to have to deal with Yet Another Distro. On CoreOS, or any other Linux with a correctly working kernel, the disk usage of container and host track each other up and down correctly as the container frees or uses space. I'll note that Docker on OS X and other VirtualBox-based setups use CoreOS and thus work correctly.
Here's a long writeup on a very similar issue; the root cause is a trim/discard issue in devicemapper. You need a fairly recent Linux kernel to handle this properly. I'd go so far as to say that Docker is unfit for purpose unless you have the correct Linux kernel. See that article for a discussion of which version of your distro to use.
Note that the above article only deals with the management of docker containers and images, but AFAICT the same problem also affects attempts by the container itself to free up disk space during the normal addition and removal of files or blocks.
Be careful about which distro your cloud provider is using for cloud container management.
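If you want to confirm that you are hitting this trim/discard problem, a couple of checks are a reasonable starting point (a minimal sketch; the exact output depends on your Docker and kernel versions):
uname -r        # on the host: kernel version (the trim/discard fixes only exist in newer kernels)
docker info     # storage driver details; with devicemapper this shows the thin pool, data file and space usage
fstrim -v /     # as root inside the container: ask the filesystem to discard freed blocks; if this fails with "operation not supported", discard isn't reaching the thin pool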

Related

how to convert centos server to docker base image

We have a server which hosts a number of apps. I am exploring the possibility of creating and uploading a base image every time we deploy a new app on the server. Is this a valid approach, and is it possible?
root@sl2o2app301:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg01-root 58G 4.6G 54G 8% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 28K 7.8G 1% /dev/shm
tmpfs 7.8G 835M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda1 247M 152M 96M 62% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/1518
root@sl2o2app301:/etc$ cat redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
This is an invalid approach if you want to use docker.
A Docker container is more like an isolated running app than a whole operating system.
But Docker is not a silver bullet, and in some cases you do need golden images for server management.
To create golden server images you can use Packer for production and/or Vagrant for local development images.
However, you may still want a dockerized approach. In that case you need to split your server into a set of docker containers: one application plus its dependencies per container (a sketch of such a Dockerfile follows). That is a valid use of the tool.
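As an illustration only (the app, base image and file names here are hypothetical, not taken from the question), a per-application image could look like this:
# Dockerfile for one hypothetical Node.js app and its dependencies
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]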
If you want to run all of them together with one command, it is worth evaluating docker-compose; a minimal example follows.
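A minimal docker-compose.yml along those lines might look like the following; the service names, images and ports are assumptions for illustration, not details from the question:
version: "3.8"
services:
  app1:
    build: ./app1          # the hypothetical Dockerfile above
    ports:
      - "3000:3000"
  app2:
    image: nginx:1.25      # another app, pulled as a prebuilt image
    ports:
      - "8080:80"
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
Running docker-compose up -d then builds and starts all of them with a single command.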

Docker Host on Ubuntu taking all the space on VM

Current Setup:
Machine OS: Windows 7
Vmware: VMWare workstation 8.0.2-591240
VM: Ubuntu LTS 16.04
Docker on Ubuntu: Docker Engine Community version 19.03.5
I have set up docker containers to run Bamboo agents recently. The VM keeps running out of space. Can anyone please suggest mounting options, or any other tips, to keep the disk usage down?
P.S. I had a similar setup before and it was all good until the VM got corrupted and I needed to set up a new VM.
root@ubuntu:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 5.8G 0 5.8G 0% /dev
tmpfs 1.2G 113M 1.1G 10% /run
/dev/sda1 12G 12G 0 100% /
tmpfs 5.8G 0 5.8G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 1.2G 0 1.2G 0% /run/user/1000
overlay 12G 12G 0 100% /var/lib/docker/overlay2/e0e78a7d84da9c2a1e1c9f91ee16bc6515d8660e1a2db5e207504469f9e496ae/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/8f3a73cd0b201f4a8a92ded0cfab869441edfbc2199574c225adbf78a2393129/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/3d947960c28e834aa422b5ea16c261739d06bf22fe0f33f9e0248d233f2a84d1/merged
12G is quite little space if you also want to keep cached images around to speed up the build process. So, assuming you don't want to expand the root partition of that VM, what you can do is clean up images after every build, or every X builds.
I follow the second approach, for example: I run a cleanup job every night on my Jenkins agents to keep the disk from filling up; see the sketch below.
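A nightly cleanup can be as simple as the following cron entry (the schedule and the 24h retention window are arbitrary choices for illustration, not part of the original answer):
# /etc/cron.d/docker-cleanup - every night at 02:00, remove stopped containers,
# unused networks, dangling build cache and images not used in the last 24 hours
0 2 * * * root docker system prune -af --filter "until=24h" >> /var/log/docker-prune.log 2>&1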
By default, Docker stores its data under /var. Cleaning up unused containers will buy you some time, but it stops helping once there is really nothing left to delete. The lasting fix is to point the daemon's data-root at a disk with more available space, by setting the data-root parameter in your daemon.json file:
{
  "data-root": "/new/path/to/docker-data"
}
Once you have done that, restart the Docker daemon (for example with systemctl restart docker) so the configuration change takes effect. Note that Docker does not move existing data for you; copy the contents of /var/lib/docker to the new path before starting the daemon again if you want to keep your existing images and volumes (a sketch of the full sequence follows). This will resolve your space issue permanently. If you wish not to kill your running containers during the restart, you must have configured the live-restore property in your daemon.json file. Hope this helps.
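The sequence referred to above, assuming a systemd host and the target path from the snippet (adjust to taste):
systemctl stop docker
mkdir -p /new/path/to/docker-data
rsync -a /var/lib/docker/ /new/path/to/docker-data/   # keep existing images, containers and volumes
# add "data-root": "/new/path/to/docker-data" to /etc/docker/daemon.json as shown above
systemctl start docker
docker info | grep "Docker Root Dir"                  # should now report the new path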

Running Docker on Ubuntu VM - keeps failing because of disk space

I have been struggling to build an application on my Ubuntu VM. On this VM, I have cloned a git repository, which contains an application (frontend, backend, database). When running the make command, it ultimately fails somewhere in the building process, because of no space left on device. Having increased the RAM and hard-disk size several times now, I am still wondering what exactly causes this error.
Is it the RAM size, or the hard-disk size?
Let me give some more information:
OS: Ubuntu 19.04
RAM allocated: 9.2 GB
Processors (CPU): 6
Hard disk space: 43 GB
The Ubuntu VM is a rather clean install, with only Docker, Docker Compose, and NodeJS installed on it. The VM runs via VMWare.
The following repository is cloned, which is meant to be built on the VM:
git@github.com:reactioncommerce/reaction-platform.git
For more information on the requirements they pose, which I seem to meet: https://docs.reactioncommerce.com/docs/installation-reaction-platform
After having increased RAM, CPU count, and hard disk space iteratively, I still end up with the 'no space left on device' error. When checking disk space via df -h, I get the following:
Filesystem Size Used Avail Use% Mounted on
udev 4.2G 0 4.2G 0% /dev
tmpfs 853M 1.8M 852M 1% /run
/dev/sr0 1.6G 1.6G 0 100% /cdrom
/dev/loop0 1.5G 1.5G 0 100% /rofs
/cow 4.2G 3.7G 523M 88% /
tmpfs 4.2G 38M 4.2G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 4.2G 0 4.2G 0% /sys/fs/cgroup
tmpfs 4.2G 584K 4.2G 1% /tmp
tmpfs 853M 12K 853M 1% /run/user/999
Now this makes me wonder: it seems that /dev/sr0, /dev/loop0 and /cow are the partitions that are used when building the application. However, I do not quite understand whether I am constrained by RAM or by actual disk space at the moment.
Other Docker issues made me look at the inodes as well, as they could be problematic, and these also seem to be maxed out; however, I think the issue resides in the above.
I saw a similar question on SuperUser, found here, but I could not really map that situation onto mine.
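For what it's worth, a few commands make it easy to see whether disk space or inodes on the filesystem backing Docker are the actual limit (these assume the default data-root of /var/lib/docker; they are diagnostic suggestions, not part of the original question):
df -h /var/lib/docker     # which filesystem Docker writes to, and how full it is
df -i /var/lib/docker     # inode usage on that filesystem
docker system df          # how much space images, containers, volumes and build cache use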

Docker host & no space left on device

I'm using Rancher to manage some EC2 hosts (4 nodes in an auto-scaling group) and to orchestrate containers. Everything works fine.
But at some point I have a recurrent disk space problem, even though I remove unused and untagged images with this command:
docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi
Like I said, even though I run the command above, my hosts are continuously running out of space:
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 12K 7.9G 1% /dev
tmpfs 1.6G 1.4M 1.6G 1% /run
/dev/xvda1 79G 77G 0 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.9G 7.5M 7.9G 1% /run/shm
none 100M 0 100M 0% /run/user
I'm using Rancher 1.1.4 and my hosts are running Docker 1.12.5 under Ubuntu 14.04.4 LTS.
Is there something I'm missing? What are the best practices for configuring docker on production hosts in order to avoid this problem?
Thank you for your help.
Do you use volume mounts (docker run -v /local/path:/container/path) for the persistent data of your containers?
If not, data written by your containers (databases, logs, ...) keeps growing the container's writable layer on top of the image you run (see the volume sketch at the end of this answer).
To see the real size of your currently running containers:
docker ps -s
You can also use tools such as https://www.diskreport.net to analyse your disk space and see what has grown between two measurements.
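A minimal sketch of the volume approach from the first paragraph (the volume name, image and path are hypothetical, not taken from the question):
# keep persistent data out of the container's writable layer
docker volume create app-data
docker run -d --name db -v app-data:/var/lib/mysql mysql:8
# the writable layer now stays small, which you can confirm with:
docker ps -s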

Docker: How to store images and metadata on another filesystem?

This issue has been really giving me grief and I would appreciate some help.
Running docker 1.10.3 on a vanilla CentOS 7.1 box, I have two filesystems: a 15 GB /dev/vda1 where my root and /var/lib are, and a 35 GB /dev/vdc1 mounted on /mnt where I would like to put my docker volumes/image data and metadata. This is for administration and management purposes, as I am expecting the number of containers to grow.
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 15G 1.5G 13G 11% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/vdc1 35G 49M 33G 1% /mnt/vdc1
tmpfs 385M 0 385M 0% /run/user/0
Despite all my attempts, docker keeps installing and defaulting to placing the data space and metadata space on the 15 GB root volume. I have tried many solutions, including http://collabnix.com/archives/5881, How to change the docker image installation directory?, and more, all with no luck: basically, either the docker instance does not start at all, or it starts with its default settings.
I would like some help with either the settings required for the data and metadata to be stored on /mnt/vdc1, or with installing docker as a whole on that drive.
Thanks in advance, bf!
--graph is only one flag. There are also --exec-root and $DOCKER_TMPDIR, which are used to store files as well:
DIR=/mnt/vdc1
export DOCKER_TMPDIR=$DIR/tmp
dockerd -D -g $DIR --exec-root=$DIR
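On newer Docker releases the same thing is usually configured persistently in /etc/docker/daemon.json rather than with command-line flags; a sketch (data-root replaced the old graph option, and the paths here simply mirror the question's mount point):
{
  "data-root": "/mnt/vdc1/docker",
  "exec-root": "/mnt/vdc1/docker-exec"
}
After editing the file, restart the daemon (for example systemctl restart docker) so it picks up the new paths.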
