Docker Host on Ubuntu taking all the space on VM - docker

Current Setup:
Machine OS: Windows 7
VMware: VMware Workstation 8.0.2-591240
VM: Ubuntu LTS 16.04
Docker on Ubuntu: Docker Engine Community version 19.03.5
I have recently set up Docker containers to run Bamboo agents, and the VM keeps running out of space. Can anyone suggest mounting options or any other tips to keep disk usage down?
P.S. I had a similar setup before and it was all good until the VM got corrupted and I had to set up a new VM.
root@ubuntu:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 5.8G 0 5.8G 0% /dev
tmpfs 1.2G 113M 1.1G 10% /run
/dev/sda1 12G 12G 0 100% /
tmpfs 5.8G 0 5.8G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 1.2G 0 1.2G 0% /run/user/1000
overlay 12G 12G 0 100% /var/lib/docker/overlay2/e0e78a7d84da9c2a1e1c9f91ee16bc6515d8660e1a2db5e207504469f9e496ae/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/8f3a73cd0b201f4a8a92ded0cfab869441edfbc2199574c225adbf78a2393129/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/3d947960c28e834aa422b5ea16c261739d06bf22fe0f33f9e0248d233f2a84d1/merged

12G is quite little space if you want to leverage cached images to speed up the build process. So, assuming you don't want to expand the root partition of that VM, what you can do is clean up images after every build, or after every X builds.
I follow the second approach: I run a cleanup job every night on my Jenkins agents to keep the disk from filling up.
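As a minimal sketch of such a nightly job (the schedule, the cron file path, and the 24h filter are my own assumptions, not something from the original answer):
# /etc/cron.d/docker-nightly-prune -- hypothetical cron entry
# Every night at 02:30, remove stopped containers, unused networks,
# dangling images and build cache older than 24 hours.
30 2 * * * root /usr/bin/docker system prune --force --filter "until=24h" >> /var/log/docker-prune.log 2>&1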

A default Docker installation stores all of its data under /var. Cleaning up unused containers will work for a while, but it stops helping once there is nothing left to delete. The lasting fix is to point the daemon's data-root at a disk with more free space, which you can do by setting the data-root parameter in your daemon.json file:
{
  "data-root": "/new/path/to/docker-data"
}
Note that Docker will not move your existing data for you: stop the Docker service, copy the contents of /var/lib/docker to the new path, then restart the service (systemctl restart docker) so it picks up the new configuration. If you do not want your running containers to be killed while the daemon restarts, you must have the live-restore property configured in your daemon.json file. Hope this helps.
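A rough sketch of that migration, assuming the new disk is mounted at /new/path/to/docker-data, rsync is available, and no daemon.json exists yet (all of these are assumptions about your host):
sudo systemctl stop docker
# Copy the existing Docker data to the new location, preserving permissions and attributes
sudo rsync -aHAX /var/lib/docker/ /new/path/to/docker-data/
# Point the daemon at the new location (this overwrites any existing daemon.json)
echo '{ "data-root": "/new/path/to/docker-data" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
# Verify the daemon is using the new path before removing the old directory
docker info --format '{{ .DockerRootDir }}'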

Related

how to convert centos server to docker base image

We have a server which hosts a number of apps. I am exploring a possibility where we could create and upload a base image every time we deploy a new app on the server. Is this a valid approach, and is it possible?
root@sl2o2app301:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg01-root 58G 4.6G 54G 8% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 28K 7.8G 1% /dev/shm
tmpfs 7.8G 835M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda1 247M 152M 96M 62% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/1518
root@sl2o2app301:/etc$ cat redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
This is an invalid approach if you want to use Docker.
Docker is built around isolated running applications (containers), not whole operating systems.
But Docker is not a silver bullet, and in some cases you do need golden images for server management.
For creating golden server images you can use Packer for production and/or Vagrant for local development images.
However, you may still want a dockerized approach. In that case you need to split your server into a set of Docker containers, one application plus its dependencies per container. That is a valid usage of the tool.
If you want to run them all together with one command, it is worth evaluating docker-compose, as in the sketch below.
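A minimal sketch of what that split might look like, assuming docker-compose is installed; the service names and images (webapp, postgres) are made-up placeholders, not anything from the question:
# Write a docker-compose.yml for illustration; replace the services with your own apps
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  webapp:
    image: mycompany/webapp:latest   # hypothetical application image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
EOF
# Start the whole set with one command
docker-compose up -d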

Give docker more diskspace for containers

I have a question. Our Docker server was out of space for its containers, so I grew its disk from 500 GB to 1 TB (it's a VM) and Ubuntu sees this correctly. If I run the command vgs I get this output:
VG #PV #LV #SN Attr VSize VFree
Docker-vg 1 2 0 wz--n- 999.52g 500.00g
But Docker still thinks it's out of space. I have rebooted the Docker VM, but it still reports being out of space. If I use the df -h command this is the output:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 792M 8.6M 783M 2% /run
/dev/mapper/Docker--vg-root 490G 465G 0 100% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvda1 472M 468M 0 100% /boot
As you can see, Docker--vg-root still thinks it's 490G.
I don't know where to look. Can someone help me?
You still need to extend your logical volume and resize the filesystem to use the larger logical volume.
First, extend the logical volume with lvextend (I'm not sure it accepts the /dev/mapper name; if not, run lvdisplay to list your logical volumes and use the LV path it shows):
lvextend -l +100%FREE /dev/mapper/Docker--vg-root
With ext*fs you can then run a resize:
resize2fs /dev/mapper/Docker--vg-root
The command is similar for xfs:
xfs_growfs /dev/mapper/Docker--vg-root
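Putting it together, the full sequence might look like this (the LV path is taken from the df output above; verify yours with lvdisplay first):
# Confirm the logical volume path and the free space in the volume group
lvdisplay
vgs
# Grow the LV into all remaining free space in the volume group
lvextend -l +100%FREE /dev/mapper/Docker--vg-root
# Grow the filesystem (resize2fs for ext4, xfs_growfs for XFS)
resize2fs /dev/mapper/Docker--vg-root
# Check the result
df -h /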
With "docker system prune" you clean some space removing old images and other stuff.
If you want your container to be aware of the disk size change, you have to:
docker rmi <image>
docker pull <image>

Docker host & no space left on device

I'm using Rancher to manage some EC2 hosts (4 nodes in an auto-scaling group) and to orchestrate containers. Everything works fine.
But at some point I hit a recurring disk space problem, even though I remove unused and untagged images with this command:
docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi
Like I said, even though I run the command above, my hosts continuously run out of space:
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 12K 7.9G 1% /dev
tmpfs 1.6G 1.4M 1.6G 1% /run
/dev/xvda1 79G 77G 0 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.9G 7.5M 7.9G 1% /run/shm
none 100M 0 100M 0% /run/user
I'm using Rancher 1.1.4 and my hosts are running Docker 1.12.5 under Ubuntu 14.04.4 LTS.
Is there something I'm missing? What are the best practices for configuring Docker on production hosts to avoid this problem?
Thank you for your help.
Do you use volume mounts (docker run -v /local/path:/container/path) for your containers' persistent data?
If not, data written by your containers (databases, logs, ...) keeps growing the writable layer of the running container; see the sketch below for keeping that data outside the layer.
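For illustration only (the image and paths are placeholders I picked, not anything from the question), persistent data can be kept out of the container's writable layer with either a bind mount or a named volume:
# Bind mount: database files live on the host at /data/pg, not in the container's writable layer
docker run -d --name db -v /data/pg:/var/lib/postgresql/data postgres:9.6
# Or use a named volume managed by Docker
docker volume create dbdata
docker run -d --name db2 -v dbdata:/var/lib/postgresql/data postgres:9.6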
To see the real size of your current running containers :
docker ps -s
You can also use tools such as https://www.diskreport.net to analyse your disk space and see what has grown between two measurements.

docker disk space grows faster than container's

Docker containers that are modifying, adding, and deleting files extensively (leveldb) grow disk usage faster than the container itself reports, and eventually use up all the disk.
Here's one snapshot of df, and a second one taken later. You'll note that disk space has increased considerably (about 300 MB) from the host's perspective, but the container's self-reported disk usage has only increased by 17 MB. As this continues, the host runs out of disk.
Ubuntu stock 14.04, Docker version 1.10.2, build c3959b1.
Is there some sort of trim-like issue going on here?
root@9e7a93cbcb02:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-136171-d4[...] 9.8G 667M 8.6G 8% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/disk/by-uuid/0a76513a-37fc-43df-9833-34f8f9598ada 7.8G 2.9G 4.5G 39% /etc/hosts
shm 64M 0 64M 0% /dev/shm
And later on:
root@9e7a93cbcb02:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-136171-d4[...] 9.8G 684M 8.6G 8% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/disk/by-uuid/0a76513a-37fc-43df-9833-34f8f9598ada 7.8G 3.2G 4.2G 43% /etc/hosts
shm 64M 0 64M 0% /dev/shm
This is happening because of a kernel bug fix that has not been propagated to many mainstream OS distros. It's actually quite bad for newbie Docker users who naively fire up docker on the default Amazon AMI as I did.
Stick with CoreOS Stable and you won't have this issue. I have zero affiliation with CoreOS and frankly am greatly annoyed to have to deal with yet another distro. On CoreOS, or any other distro with a correctly working Linux kernel, the disk usage of the container and the host track each other up and down correctly as the container frees or uses space. I'll note that OS X and other VirtualBox-based setups use CoreOS and thus work correctly.
Here's a long writeup on a very similar issue, but the root cause is a trim/discard issue in devicemapper. You need a fairly recent version of the Linux kernel to handle this properly. I'd go so far as to say that Docker is unfit for purpose unless you have the correct Linux kernel. See that article for a discussion on which version of your distro to use.
Note that the above article only deals with management of Docker containers and images, but AFAICT the issue also affects attempts by the container itself to free up disk space during normal addition/removal of files or blocks.
Be careful of what distro your cloud provider is using for cloud container management.
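As a quick, hedged check for whether you are on a devicemapper setup with an older kernel (the exact output depends entirely on your distro):
# Which storage driver the daemon is using (devicemapper vs aufs/overlay)
docker info | grep -i 'storage driver'
# Kernel version of the host; the trim/discard fixes discussed above need a recent kernel
uname -r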

Docker container drive does not match available hard drive space on host

I have loaded a new custom image into a remote RedHat 7 Docker host instance. When running a new container, the container does not use the entire disk. The following is the output of df -h in the container:
rootfs 9.8G 9.3G 0 100% /
/dev/mapper/docker-253:0-67515990-5700c262a29a5bb39d9747532360bf6a346853b0ab1ca6e5e988d7c8191c2573 9.8G 9.3G 0 100% /
tmpfs 1.9G 0 1.9G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/mapper/vg_root-lv_root 49G 25G 25G 51% /etc/resolv.conf
/dev/mapper/vg_root-lv_root 49G 25G 25G 51% /etc/hostname
/dev/mapper/vg_root-lv_root 49G 25G 25G 51% /etc/hosts
tmpfs 1.9G 0 1.9G 0% /proc/kcore
tmpfs 1.9G 0 1.9G 0% /proc/timer_stats
But the host system has much more space:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root 49G 25G 25G 51% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/vg_root-lv_home 9.8G 73M 9.7G 1% /home
/dev/sda1 497M 96M 402M 20% /boot
It seems as if Docker is assigning the 9.8 GB of the /home mapping as the entire drive of the container. Is there a reason I am seeing this?
The Problem
I was able to resolve this problem. The issue was not related to the volume being mounted into the container (i.e. it was not mounting the home volume as the root volume of the container). The problem occurred because Docker uses devicemapper on RedHat to manage its containers' filesystems, and by default those containers start with 10 GB of space. On most Debian-based versions of Linux, Docker uses AUFS to manage container filesystems, but RedHat uses devicemapper instead.
The Solution
Luckily, the devicemapper base size is configurable in Docker. First, I had to stop the service and remove all of my images/containers. (NOTE: there is no coming back from this, so back up all images as needed.)
sudo service docker stop && sudo rm -rf /var/lib/docker
Then, start up the docker instance manually with the desired size parameters:
sudo docker -d --storage-opt dm.basesize=[DESIRED_SIZE]
In my case, I increased my container size to 13G:
sudo docker -d --storage-opt dm.basesize=13G
Then with docker still running, pull/reload the desired image, start a container, and the size should now match the desired size.
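As a quick sanity check (the image name here is only an example, use whichever image you reloaded):
sudo docker pull centos:7
# The container's root filesystem should now report the new base size (13G in this case)
sudo docker run --rm centos:7 df -h /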
Next, I set my docker systemd service file to startup with the desired container size. This is required so that the docker service will start the containers up with the desired size. I edited the OPTIONS variable in the /etc/sysconfig/docker file. It now looks like this:
OPTIONS='--selinux-enabled --storage-opt dm.basesize=13G'
Finally, restart the docker service:
sudo service docker restart
References
[1] https://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/ - This is how I discovered that RedHat uses devicemapper, and that devicemapper has a 10 GB limit.
[2] https://docs.docker.com/reference/commandline/cli/ - Found the storage options in Docker's documentation.
