My project runs clients' (Docker) containers with Kubernetes, and the command df -h inside a Docker container shows the host's / usage like this:
root@aggregator-demo-vgpovusabzqf-578f547cc5-rc8g6:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 1.8T 227G 1.5T 14% /
tmpfs 64M 0 64M 0% /dev
tmpfs 252G 0 252G 0% /sys/fs/cgroup
/dev/mapper/ubuntu--vg-ubuntu--lv 1.8T 227G 1.5T 14% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 504G 12K 504G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 252G 0 252G 0% /proc/acpi
tmpfs 252G 0 252G 0% /proc/scsi
tmpfs 252G 0 252G 0% /sys/firmware
which is not very helpful for users trying to understand how much storage they have already used. I have seen some container platforms that show only the actual per-container usage, like this:
root@container-bc574b56:~# df -lh
Filesystem Size Used Avail Use% Mounted on
overlay 25G 574M 25G 3% /
tmpfs 64M 0 64M 0% /dev
tmpfs 378G 0 378G 0% /sys/fs/cgroup
shm 40G 0 40G 0% /dev/shm
/dev/sdb 150M 4.0K 150M 1% /init
tmpfs 378G 12K 378G 1% /proc/driver/nvidia
/dev/sda2 219G 26G 184G 13% /usr/bin/nvidia-smi
udev 378G 0 378G 0% /dev/nvidiactl
tmpfs 378G 0 378G 0% /proc/asound
tmpfs 378G 0 378G 0% /proc/acpi
tmpfs 378G 0 378G 0% /proc/scsi
tmpfs 378G 0 378G 0% /sys/firmware
Here the / filesystem shows Avail as 25G, which is the actual container limit, and Used as 574M, which is the usage of the container's overlay upper directory. How can I implement something like this? Maybe this is not a capability of Docker itself and those platforms use some other mechanism?
df simply queries the filesystem for available space.
If you're using the standard overlay2 driver, the container filesystem is just a view onto the host filesystem; there's no way for df to provide a container-specific value because there is no separate filesystem.
If you're using something like the devicemapper, zfs, or btrfs storage drivers, then each container has its own unique filesystem, so df can provide information that is specific to the container.
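One nuance worth adding as a hedged sketch, not part of the answer above: overlay2 can also surface a per-container limit to df, but only when the Docker data directory is backed by xfs mounted with project quotas (pquota). The example below assumes such a host; on other overlay2 setups the daemon rejects the flag.
# Assumes /var/lib/docker sits on xfs mounted with project quotas
# (an fstab entry for it ending in ",pquota"); otherwise docker run rejects --storage-opt.
docker run --rm --storage-opt size=25G ubuntu:22.04 df -h /
# df inside the container then reports the 25G quota for / instead of the host's figures,
# which matches the second df output shown in the question.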
We have a docker-ckan installation on a cloud server. Recently we have been having issues with Docker running out of disk space. However, when we run "df -h" it says that we still have 23 GB left. I'm not sure why this is happening, and any suggestions would help!
Log error:
OSError: [Errno 28] No space left on device: '/tmp/default//sessions/container_file_lock/
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 394M 600K 393M 1% /run
/dev/vda1 137G 108G 23G 83% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
tmpfs 394M 0 394M 0% /run/user/1002
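Not part of the original report, but two hedged checks that are often useful when "No space left on device" appears while df -h still shows free space: block space is not the only resource that can run out (inodes can be exhausted too), and the df above looks like the host's view rather than the container's.
# Check inode usage on the host; ENOSPC can occur with free blocks if inodes run out.
df -i
# Check space and inodes as the failing container sees them ("ckan" is a hypothetical container name).
docker exec ckan df -h /tmp
docker exec ckan df -i /tmp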
I ran into this problem myself.
I need to limit the disk space available inside the container.
At the moment I am using the "--storage-opt size=4G" option to limit disk space, but when I then run "df -h" inside the container, I see that the hosts file is mounted from the host system and shows the host's full disk space, which is not equal to the space allocated by the "--storage-opt size=4G" option.
root@b74761f5e0bf:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 4.0G 16K 4.0G 1% /
tmpfs 64M 0 64M 0% /dev
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
> /dev/sdb1 16G 447M 16G 3% /etc/hosts
tmpfs 2.0G 0 2.0G 0% /proc/acpi
tmpfs 2.0G 0 2.0G 0% /proc/scsi
tmpfs 2.0G 0 2.0G 0% /sys/firmware
The problem is that when I start writing to this hosts file, I can fill 100% of the host system's disk space.
For example:
cat /dev/urandom > /etc/hosts
Result of the executed command:
root@b74761f5e0bf:/# cat /dev/urandom > /etc/hosts
cat: write error: No space left on device
root@b74761f5e0bf:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 4.0G 20K 4.0G 1% /
tmpfs 64M 0 64M 0% /dev
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
> /dev/sdb1 16G 16G 84K 100% /etc/hosts
tmpfs 2.0G 0 2.0G 0% /proc/acpi
tmpfs 2.0G 0 2.0G 0% /proc/scsi
tmpfs 2.0G 0 2.0G 0% /sys/firmware
I noticed that when I run
# docker inspect container_name
the output includes:
"ResolvConfPath": "/mnt/docker/containers/b74761f5e0bf334d9e1c77f1423a57272dede65bb5ab37ca81e91c645f3096ff/resolv.conf",
"HostnamePath": "/mnt/docker/containers/b74761f5e0bf334d9e1c77f1423a57272dede65bb5ab37ca81e91c645f3096ff/hostname",
"HostsPath": "/mnt/docker/containers/b74761f5e0bf334d9e1c77f1423a57272dede65bb5ab37ca81e91c645f3096ff/hosts",
I see the fields ResolvConfPath, HostnamePath, and HostsPath, which are responsible for this mounting.
But I can't find a way to avoid these mounts in the documentation or in similar questions.
Does anyone know how I can manage this mount to avoid the described problem?
One option I could use quickly, although it does not really suit me, is to mount my own hosts file from the host system in read-only mode.
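A minimal sketch of that quick workaround, assuming a copy of the desired hosts content kept on the host at /opt/container-hosts (a hypothetical path): a user-supplied bind mount replaces Docker's default writable hosts mount, and the :ro flag makes writes fail instead of filling the host disk.
# /opt/container-hosts is a hypothetical host-side path; any copy of the hosts content works.
cp /etc/hosts /opt/container-hosts
docker run -it --storage-opt size=4G \
    -v /opt/container-hosts:/etc/hosts:ro \
    ubuntu:20.04 bash
# Inside the container, writes to /etc/hosts now fail with "Read-only file system".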
System information:
Docker version 20.10.12, build e91ed57
Ubuntu 20.04.3 LTS (Focal Fossa)
Filesystem type: xfs
If I run
df -h
inside the Dockerfile of an image I want to build, it shows:
Filesystem Size Used Avail Use% Mounted on
overlay 118G 112G 0 100% /
tmpfs 64M 0 64M 0% /dev
tmpfs 16G 0 16G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda1 118G 112G 0 100% /etc/hosts
tmpfs 16G 0 16G 0% /proc/asound
tmpfs 16G 0 16G 0% /proc/acpi
tmpfs 16G 0 16G 0% /proc/scsi
tmpfs 16G 0 16G 0% /sys/firmware
but if I run it outside the container, it shows:
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 355M 2.8G 12% /run
/dev/sda1 118G 112G 0 100% /
tmpfs 16G 300M 16G 2% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda3 308G 4.8G 288G 2% /mnt/ssd
/dev/sdb1 1.8T 1.2T 605G 66% /home
tmpfs 3.2G 16K 3.2G 1% /run/user/1002
tmpfs 3.2G 4.0K 3.2G 1% /run/user/1038
tmpfs 3.2G 0 3.2G 0% /run/user/1037
/dev/loop1 97M 97M 0 100% /snap/core/9436
/dev/loop0 97M 97M 0 100% /snap/core/9665
/dev/loop3 163M 163M 0 100% /snap/blender/42
/dev/loop2 163M 163M 0 100% /snap/blender/43
tmpfs 3.2G 0 3.2G 0% /run/user/1039
I get the error "No space left on device", and I would like to use my /dev/sdb1 disk, which has about 600 GB free. Why doesn't Docker see it? How can I make it use that space?
Unless explicitly configured otherwise, Docker stores container data in /var/lib/docker. If you want to expand the space available to your Docker containers, one solution would be to mount a filesystem from /dev/sdb onto /var/lib/docker.
Alternately, you can configure Docker to store container data in a different location by setting the data-root option in your /etc/docker/daemon.json file, as described in the dockerd documentation.
For example:
{
"data-root": "/home/docker"
}
In either case, you may want to copy the existing files from /var/lib/docker into the new location first; otherwise Docker will not see any of your existing images, containers, volumes, etc. after you make the change.
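A hedged sketch of that migration, assuming the new location is /home/docker as in the example above; the commands are ordinary systemd and rsync usage rather than anything Docker-specific.
sudo systemctl stop docker                      # stop the daemon before touching its data
sudo mkdir -p /home/docker
sudo rsync -aP /var/lib/docker/ /home/docker/   # copy existing images, containers, volumes
# Put {"data-root": "/home/docker"} into /etc/docker/daemon.json, then:
sudo systemctl start docker
docker info | grep "Docker Root Dir"            # should now report /home/docker
# Once everything works, the old /var/lib/docker can be removed to free space.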
I am running a private GitLab group runner on EC2 (Ubuntu 18.04). It has recently and frequently started failing build jobs at various stages, but all with the same root cause: no space left on device.
On logging in to the EC2 instance, I can see
System load: 0.0
Usage of /: 99.5% of 29.02GB
Memory usage: 14%
Processes: 109
=> / is using 99.5% of 29.02GB
=> There are 3 zombie processes.
Disk free space shows / and /var/lib/docker/overlay2 at 100% usage:
/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 395M 928K 394M 1% /run
/dev/xvda1 30G 29G 140M 100% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
overlay 30G 29G 140M 100% /var/lib/docker/overlay2/ed591...60f1/merged
shm 64M 0 64M 0% /var/lib/docker/containers/e9de...f8ed/mounts/shm
overlay 30G 29G 140M 100% /var/lib/docker/overlay2/0956c...e51f/merged
shm 64M 0 64M 0% /var/lib/docker/containers/4cab...0ba8/mounts/shm
/dev/loop1 18M 18M 0 100% /snap/amazon-ssm-agent/1566
/dev/loop3 29M 29M 0 100% /snap/amazon-ssm-agent/2012
/dev/loop2 97M 97M 0 100% /snap/core/9436
/dev/loop4 97M 97M 0 100% /snap/core/9665
tmpfs 395M 0 395M 0% /run/user/1000
Docker disk usage shows ~21 GB, apparently unreclaimable:
/# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 2 2 746MB 0B (0%)
Containers 2 2 8.989MB 0B (0%)
Local Volumes 3 3 20.4GB 0B (0%)
Build Cache 0 0 0B 0B
Pruning does nothing:
/# docker system prune
Total reclaimed space: 0B
How can I identify what is using this disk space and ultimately reclaim it?
Executing docker system prune with default options reclaims only the space used by stopped containers, unused networks, and dangling images; it does not remove unused tagged images.
In order to remove all unused images too, you need to run:
docker system prune -a
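To address the "how can I identify what is using this disk space" part, a hedged sketch of the usual inspection steps. Note that docker system prune, even with -a, leaves volumes alone unless --volumes is passed, and the docker system df output above shows the 20.4 GB sitting in local volumes that are still attached to running containers, which is why nothing is reclaimable.
# Break the usage down per image, container, and volume rather than per type.
docker system df -v
# See which directories under the Docker root actually hold the data (-x stays on one filesystem).
sudo du -xh --max-depth=1 /var/lib/docker | sort -h
# Unused volumes can be pruned explicitly; volumes attached to running containers are kept.
docker volume prune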
I have a 2018 Samsung Chromebook Pro on which I've installed crouton. I have only one chroot installed using crouton. Everything is going well, except that I appear to be out of space on the rootfs. Here is the output of sudo df -h:
chronos#localhost / $ sudo df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.7G 1.7G 41M 98% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmp 1.9G 3.0M 1.9G 1% /tmp
run 1.9G 688K 1.9G 1% /run
shmfs 1.9G 29M 1.9G 2% /dev/shm
/dev/mmcblk0p1 53G 8.7G 41G 18% /mnt/stateful_partition
/dev/mmcblk0p8 12M 28K 12M 1% /usr/share/oem
/dev/mapper/encstateful 16G 81M 16G 1% /mnt/stateful_partition/encrypted
media 1.9G 0 1.9G 0% /media
none 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/loop1 450M 450M 0 100% /opt/google/containers/android/rootfs/root
/dev/loop2 4.0K 4.0K 0 100% /opt/google/containers/arc-removable-media/mountpoints/container-root
/dev/loop3 4.0K 4.0K 0 100% /opt/google/containers/arc-sdcard/mountpoints/container-root
/dev/loop4 4.0K 4.0K 0 100% /opt/google/containers/arc-obb-mounter/mountpoints/container-root
imageloader 1.9G 0 1.9G 0% /run/imageloader
tmpfs 1.9G 4.0K 1.9G 1% /run/arc/oem
tmpfs 1.9G 0 1.9G 0% /run/arc/sdcard
tmpfs 1.9G 0 1.9G 0% /run/arc/obb
tmpfs 1.9G 0 1.9G 0% /run/arc/media
tmpfs 1.9G 0 1.9G 0% /run/arc/adbd
passthrough 1.9G 0 1.9G 0% /run/arc/media/removable
/dev/fuse 53G 8.7G 41G 18% /run/arc/sdcard/default/emulated
/dev/fuse 53G 8.7G 41G 18% /run/arc/sdcard/read/emulated
/dev/fuse 53G 8.7G 41G 18% /run/arc/sdcard/write/emulated
tmpfs 128K 12K 116K 10% /var/run/crw
As you can see, my rootfs is nearly full, and there is a whole bunch of other stuff that is apparently normal for ChromeOS. I've read up on similar questions, but some of my confusions are still unanswered.
This is my current understanding (please correct me if I'm wrong):
ChromeOS mounts Downloads on the stateful_partition, meaning that a Google user is not writing to the rootfs when downloading files.
This implies that the rootfs is really just for the kernel files, and therefore should be small.
Crouton installs chroots to the stateful_partition, which means that a chroot does not take up any partition space in the rootfs.
Outstanding questions:
What is /mnt/stateful_partition really for? Specifically, why does it have to be in /mnt?
Why don't I have a home partition?
Does my disk usage look normal?
Weird thing: Within the chroot, I can only wget sufficiently large files if I first free up space. Is this a space constraint imposed by crouton? Or is the chroot somehow writing to the full rootfs?
What are these extra partitions for? My storage capacity is 32GB, but the SD slot seems to have capacity for 53G * 3. Is this just a partition scheme that is ready to accept and mount a variably sized SD?
Here is sudo df -h from within the chroot:
Filesystem Size Used Avail Use% Mounted on
/dev/mmcblk0p1 53G 8.7G 41G 18% /
devtmpfs 1.9G 0 1.9G 0% /dev
shmfs 1.9G 36M 1.9G 2% /dev/shm
tmp 1.9G 3.0M 1.9G 1% /tmp
tmpfs 385M 12K 385M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
run 1.9G 688K 1.9G 1% /var/host/dbus
/dev/mapper/encstateful 16G 81M 16G 1% /var/host/timezone
/dev/root 1.7G 1.7G 41M 98% /lib/modules/3.18.0-17866-g4dfef3905aba
media 1.9G 0 1.9G 0% /var/host/media
none 1.9G 0 1.9G 0% /sys/fs/cgroup
none 1.9G 4.0K 1.9G 1% /sys/fs/selinux
Why is mmcblk0p1 53 GB when I only have 32 GB of storage available?
The /dev/root is mounted on /lib/modules/3.18.... This appears to be the rootfs in ChromeOS. Why does crouton use this, and what is it for?
The rootfs is read-only, which means it is never written to, so it being almost full all the time is normal and not a problem. The majority of the storage is in the stateful partition by design.
Crouton is normally installed in /usr/local, which is on the stateful partition, which means it has access to all of the available storage.
df output shows mounts, not partitions. If you want to view partitions, you need to run something like cgpt show /dev/mmcblk0.
df output can be confusing when bind mounts are involved: you'll see the originating device and not the subpath that was bind mounted. That's why you see /dev/root where a bind mount was created specifically for the /lib/modules/... subdirectory.
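A short hedged sketch of the commands this implies, run from a ChromeOS shell (crosh, then shell); the device name matches the df output above.
# List the real partitions on the internal storage rather than the mounts df shows.
sudo cgpt show /dev/mmcblk0
# Check the space actually available to crouton, which lives under /usr/local on the stateful partition.
df -h /usr/local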