How is the rootfs size of a docker container decided? - docker

On one system, the disk size of the Docker container is like this:
root@b65c6518f583:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-202764498-b65c6518f5837667e7021971a97aebd382dddca6b3ecf4167472ebe17f16aace 99G 268M 94G 1% /
tmpfs 5.8G 0 5.8G 0% /dev
shm 64M 0 64M 0% /dev/shm
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 5.8G 96K 5.8G 1% /run/secrets
/dev/mapper/rhel-root 50G 20G 31G 40% /etc/hosts
We can see the rootfs size is 99G, while on another system the disk size of the Docker container looks like this:
53ac740bd09b:/ # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-8:8-4202821-2a6f330df1b7b37d55a96b098863f81e4a7f1c39fcca3f5fa03b57998cb33427 9.8G 4.4G 4.9G 48% /
tmpfs 126G 0 126G 0% /dev
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/sda8 97G 11G 82G 12% /data
shm 64M 0 64M 0% /dev/shm
The rootfs size is only 9.8G.
How is the rootfs size of a docker container decided? How can I modify the rootfs size?

The default size for a container is 10 GB, and you can change it.
Here is an excerpt from:
https://docs.docker.com/engine/reference/commandline/daemon/
dm.basesize
Specifies the size to use when creating the base device, which limits
the size of images and containers. The default value is 10G. Note,
thin devices are inherently “sparse”, so a 10G device which is mostly
empty doesn’t use 10 GB of space on the pool. However, the filesystem
will use more space for the empty case the larger the device is.
The base device size can be increased at daemon restart which will
allow all future images and containers (based on those new images) to
be of the new base device size.
Examples:
$ docker daemon --storage-opt dm.basesize=50G
This will increase the base device size to 50G. The Docker daemon will
throw an error if existing base device size is larger than 50G. A user
can use this option to expand the base device size however shrinking
is not permitted.
This value affects the system-wide “base” empty filesystem that may
already be initialized and inherited by pulled images. Typically, a
change to this value requires additional steps to take effect:
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start
Example use:
$ docker daemon --storage-opt dm.basesize=20G
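After the daemon is restarted with the new value, one way to confirm it took effect (assuming the devicemapper storage driver is in use; the reported size below is illustrative) is to check the daemon info:
$ docker info | grep "Base Device Size"
Base Device Size: 21.47 GB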

If you would like the docker container to be larger than the default size, this tutorial accomplishes the task. I know it works for CentOS 7. With CentOS 6 you only need to change xfs_growfs to resize2fs.
I was using Docker 1.7.
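As a minimal illustration of the distinction the answer is drawing (the paths below are placeholders, not values from the tutorial, and the container's thin device must already have been enlarged, e.g. with dmsetup, before the filesystem can be grown):
$ sudo xfs_growfs /var/lib/docker/devicemapper/mnt/<container-id>   # CentOS 7: XFS, grown via the mounted path
$ sudo resize2fs /dev/mapper/docker-253:0-12345-<container-id>      # CentOS 6: ext4, grown via the device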

Related

VSCode Remote Container - Error: ENOSPC: No space left on device

I have been using the VSCode Remote Container Plugin for some time without issue. But today when I tried to open my project the remote container failed to open with the following error:
Command failed: docker exec -w /home/vscode/.vscode-server/bin/9833dd88 24d0faab /bin/sh -c echo 34503 >.devport
rejected promise not handled within 1 second: Error: ENOSPC: no space left on device, mkdir '/home/vscode/.vscode-server/data/logs/20191209T160810
It looks like the container is out of disk space but I'm not sure how to add more.
Upon further inspection I am a bit confused. When I run df from inside the container it shows that I have used 60G of disk space, but the size of my root directory is only ~9G.
$ df
Filesystem Size Used Avail Use% Mounted on
overlay 63G 61G 0 100% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda1 63G 61G 0 100% /etc/hosts
tmpfs 7.4G 0 7.4G 0% /proc/acpi
tmpfs 7.4G 0 7.4G 0% /sys/firmware
$ du -h --max-depth=1 /
9.2G /
What is the best way to resolve this issue?
Try docker system prune --all if you don't see any containers or images with docker ps and docker images, but be careful: it removes all cached and unused containers, images and networks. docker ps -a and docker images -a show you all the containers and images, including ones that are currently not running or not in use.
Check the docs if problem persists: Clean unused docker resources
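Before pruning, a quick non-destructive way to see what is actually using the space (a general Docker check, not specific to the VS Code setup) is docker system df:
$ docker system df      # summary of space used by images, containers, local volumes and build cache
$ docker system df -v   # verbose, per-image and per-container breakdown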
It looks like all docker containers on your system share the same disk space. I found two solutions:
1. Go into Docker Desktop's settings and increase the amount of disk space available.
2. Run docker container prune to free disk space being used by stopped containers.
In my case I had a bunch of stopped docker containers from months back taking up all of the disk space allocated to Docker.

Give docker more diskspace for containers

I have a question. Our docker server was out of space for its containers, so I gave it a bigger disk, from 500GB to 1TB (it's a VM). Ubuntu sees this correctly. If I run the command vgs I get this output:
VG #PV #LV #SN Attr VSize VFree
Docker-vg 1 2 0 wz--n- 999.52g 500.00g
But Docker still thinks it's out of space. I have rebooted the docker VM, but it still thinks it's out of space. If I use the df -h command this is the output:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 792M 8.6M 783M 2% /run
/dev/mapper/Docker--vg-root 490G 465G 0 100% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvda1 472M 468M 0 100% /boot
As you can see, the Docker-vg root volume still shows 490G.
I don't know where to look. Can someone help me?
You still need to extend your logical volume and resize the filesystem to use the larger logical volume.
First, extend the logical volume with lvextend. I'm not sure if it works with the /dev/mapper path; if not, you can run lvdisplay to list your logical volumes and use the LV path it reports:
lvextend -l +100%FREE /dev/mapper/Docker--vg-root
With ext*fs you can then run a resize:
resize2fs /dev/mapper/Docker--vg-root
The command is similar for xfs (xfs_growfs works on the mounted filesystem, so pass the mount point; recent versions also accept the device path):
xfs_growfs /
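After extending and resizing, a quick sanity check (assuming the same volume and mount point as above):
$ sudo lvs     # the root LV should now show the extra ~500G
$ df -h /      # the root filesystem should report the new size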
With "docker system prune" you clean some space removing old images and other stuff.
If you want your container to be aware of the disk size change, you have to:
docker rmi <image>
docker pull <image>

Dokku/Docker out of disk space - How to enter app

I have an errant Rails app deployed using Dokku with the default Digital Ocean setup. This Rails app has eaten all of the disk space, as I did not set up anything to clean out the /tmp directory.
So the output of df is:
Filesystem 1K-blocks Used Available Use% Mounted on
udev 1506176 0 1506176 0% /dev
tmpfs 307356 27488 279868 9% /run
/dev/vda1 60795672 60779288 0 100% /
tmpfs 1536772 0 1536772 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 1536772 0 1536772 0% /sys/fs/cgroup
/dev/vda15 106858 3419 103439 4% /boot/efi
tmpfs 307352 0 307352 0% /run/user/0
So I am out of disk space, but I don't know how to enter the container to clean it. Any dokku **** command returns /home/dokku/.basher/bash: main: command not found or Access denied, which I have found out is because I am completely out of HD space.
So 2 questions.
1: How do I get into the container to clear the tmp directory
2: Is there a way to set a max disk size limit so Dokku doesn't eat the entire HD again?
Thanks
Dokku uses docker to deploy your application; you are probably accumulating a bunch of stale docker images, which over time can take up all of your disk space.
Try running this:
docker image ls
Then try removing unused images:
docker system prune -a
For more details, see: https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes
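Once some space has been freed, you should be able to get a shell in the app again; on a stock Dokku install that is typically done with dokku enter (the app name below is a placeholder):
$ dokku enter <your-app>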

Ambiguity in disk space allocation for docker containers

I have two physical machines with Docker 1.11.3 installed on Ubuntu. Following is the configuration of the machines:
1. Machine 1 - RAM 4 GB, Hard disk - 500 GB, quad core
2. Machine 2 - RAM 8 GB, Hard disk - 1 TB, octa core
I created containers on both machines. When I check the disk space of the individual containers, here are some stats that I am not able to explain.
1. Container on Machine 1
root@e1t2j3k45432:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 37G 27G 8.2G 77% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda9 37G 27G 8.2G 77% /etc/hosts
shm 64M 0 64M 0% /dev/shm
I have nothing installed in the above container, yet it is still showing 27 GB used. How did this container get 37 GB of space?
2. Container on Machine 2
root@0af8ac09b89c:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 184G 11G 164G 6% /
tmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda5 184G 11G 164G 6% /etc/hosts
shm 64M 0 64M 0% /dev/shm
Why is only 11 GB of disk space shown as used in this container, even though this is also an empty container with no packages installed? How did this container get 184 GB of disk space?
The disk usage reported inside docker is the host disk usage of /var/lib/docker (my /var/lib/docker in the example below is symlinked to my /home where I have more disk space):
bash$ df -k /var/lib/docker/.
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/... 720798904 311706176 372455240 46% /home
bash$ docker run --rm -it busybox df -k
Filesystem 1K-blocks Used Available Use% Mounted on
none 720798904 311706268 372455148 46% /
...
So if you run the df command in the same container on different hosts, a different result is to be expected.
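Which of the two behaviours you see depends on the storage driver: with overlay/aufs the container's / reports the backing host filesystem, while with devicemapper it reports the per-container base device. A quick way to check which driver each host is using:
$ docker info | grep -i 'storage driver'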

Docker container drive does not match available hard drive space on host

I have loaded a new custom image into a remote RedHat 7 docker host instance. When running a new container, the container does not attempt to use the entire disk. The following is the output of df -h in the container:
rootfs 9.8G 9.3G 0 100% /
/dev/mapper/docker-253:0-67515990-5700c262a29a5bb39d9747532360bf6a346853b0ab1ca6e5e988d7c8191c2573
9.8G 9.3G 0 100% /
tmpfs 1.9G 0 1.9G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/resolv.conf
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/hostname
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/hosts
tmpfs 1.9G 0 1.9G 0% /proc/kcore
tmpfs 1.9G 0 1.9G 0% /proc/timer_stats
But the host system has much more space:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root 49G 25G 25G 51% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/vg_root-lv_home 9.8G 73M 9.7G 1% /home
/dev/sda1 497M 96M 402M 20% /boot
It seems as if docker is assigning the 9.8 GB of the /home mapping as the entire drive of the container. So I am wondering why I am seeing this.
The Problem
I was able to resolve this problem. The issue was not related to the volume that was being mounted to the container (i.e. it was not mounting the home volume as the root volume of the container). The problem occurred because docker uses device-mapper on RedHat to manage the file systems of its containers. By default, the containers will start with 10G of space. In general, docker will use AUFS to manage the file systems of the containers; this is the case on most Debian-based versions of Linux, but RedHat uses device-mapper instead.
The Solution
Luckily, the device-mapper size is configurable in docker. First, I had to stop my service and remove all of my images/containers. (NOTE: There is no coming back from this, so back up all images as needed.)
sudo service docker stop && sudo rm -rf /var/lib/docker
Then, start up the docker instance manually with the desired size parameters:
sudo docker -d --storage-opt dm.basesize=[DESIRED_SIZE]
In my case, I increased my container size to 13G:
sudo docker -d --storage-opt dm.basesize=13G
Then with docker still running, pull/reload the desired image, start a container, and the size should now match the desired size.
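A quick way to confirm the new size took effect, once the image has been re-pulled, is to start a throwaway container and check its root filesystem (a generic check, not part of the original steps):
$ docker run --rm -it <image> df -h /   # the / line should now show roughly the dm.basesize value (13G here)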
Next, I set my docker systemd service file to startup with the desired container size. This is required so that the docker service will start the containers up with the desired size. I edited the OPTIONS variable in the /etc/sysconfig/docker file. It now looks like this:
OPTIONS='--selinux-enabled --storage-opt dm.basesize=13G'
Finally, restart the docker service:
sudo service docker restart
References
[1] https://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/ - This is how I discovered RedHat uses device-mapper, and that device-mapper has a 10G limit.
[2] https://docs.docker.com/reference/commandline/cli/ - Found the storage options in dockers documentation.
