Docker tmpfs volume and increasing size

I am starting a Jenkins container like this, from a local registry:
docker run -d -p 8080:8080 -v /home/docker:/var/jenkins_home 192.168.99.101:5000/jenkins
Inside the container, the filesystem usage looks like this:
Filesystem Size Used Avail Use% Mounted on
none 19G 3.7G 14G 22% /
tmpfs 499M 0 499M 0% /dev
shm 64M 0 64M 0% /dev/shm
tmpfs 499M 0 499M 0% /sys/fs/cgroup
tmpfs 897M 801M 97M 90% /var/jenkins_home
/dev/sda1 19G 3.7G 14G 22% /etc/hosts
As can be seen, /var/jenkins_home is getting full. How can I increase the size of "tmpfs"?

When you start the container, /var/jenkins_home is bind-mounted from /home/docker on your local host, because of this option:
-v /home/docker:/var/jenkins_home
The space reported for /var/jenkins_home is therefore the space available at that host path, so if you free up space in /home/docker on the host, the container will see more available space.
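To see what is actually consuming that space, you can inspect the host directory directly (run on the Docker host; /home/docker is the bind-mount source from the question):

```shell
# Largest items inside the bind-mounted directory:
du -sh /home/docker/* | sort -h | tail -5
# Free space on the filesystem backing it -- this is what the
# container reports for /var/jenkins_home:
df -h /home/docker
```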

Related

Show only upper directory usage inside docker running in kubernetes

My project runs clients' (Docker) containers with Kubernetes, and the command df -h inside a container shows the host's / usage, like this:
root@aggregator-demo-vgpovusabzqf-578f547cc5-rc8g6:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 1.8T 227G 1.5T 14% /
tmpfs 64M 0 64M 0% /dev
tmpfs 252G 0 252G 0% /sys/fs/cgroup
/dev/mapper/ubuntu--vg-ubuntu--lv 1.8T 227G 1.5T 14% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 504G 12K 504G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 252G 0 252G 0% /proc/acpi
tmpfs 252G 0 252G 0% /proc/scsi
tmpfs 252G 0 252G 0% /sys/firmware
which is not very helpful for users trying to understand how much storage they have actually used. I have seen container platforms that show only the user's actual usage, like this:
root@container-bc574b56:~# df -lh
Filesystem Size Used Avail Use% Mounted on
overlay 25G 574M 25G 3% /
tmpfs 64M 0 64M 0% /dev
tmpfs 378G 0 378G 0% /sys/fs/cgroup
shm 40G 0 40G 0% /dev/shm
/dev/sdb 150M 4.0K 150M 1% /init
tmpfs 378G 12K 378G 1% /proc/driver/nvidia
/dev/sda2 219G 26G 184G 13% /usr/bin/nvidia-smi
udev 378G 0 378G 0% /dev/nvidiactl
tmpfs 378G 0 378G 0% /proc/asound
tmpfs 378G 0 378G 0% /proc/acpi
tmpfs 378G 0 378G 0% /proc/scsi
tmpfs 378G 0 378G 0% /sys/firmware
Here / shows Avail as 25G, which is the actual container limit, and Used as 574M, which is the Docker upper-directory usage. How can I implement something like this? Perhaps it is not a Docker capability at all and relies on some other mechanism?
df simply queries the filesystem for available space.
If you're using the standard overlay2 driver, the container filesystem is just a view onto the host filesystem; there's no way for df to report a container-specific value because there is no separate filesystem.
If you're using the devicemapper, zfs, or btrfs storage drivers, each container has its own unique filesystem, so df can report information that is specific to the container.
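So whether df can be container-specific depends on the storage driver. A quick way to check which one the daemon uses (assuming the Docker CLI is available and the daemon is running):

```shell
# Prints the storage driver, e.g. overlay2, zfs, btrfs, devicemapper.
# With overlay2, df in a container shows the host filesystem's numbers;
# with zfs/btrfs/devicemapper it can show per-container values.
docker info --format '{{.Driver}}'
```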

Docker mounts /etc/hosts files from host system to container

I ran into this problem myself.
I need to limit the disk space available inside a container.
At the moment I use the --storage-opt size=4G option to limit disk space, but when I run df -h inside the container, I see that the hosts file is mounted from the host system and shows the host's full disk space, which is not the amount allocated by --storage-opt size=4G:
root@b74761f5e0bf:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 4.0G 16K 4.0G 1% /
tmpfs 64M 0 64M 0% /dev
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sdb1 16G 447M 16G 3% /etc/hosts
tmpfs 2.0G 0 2.0G 0% /proc/acpi
tmpfs 2.0G 0 2.0G 0% /proc/scsi
tmpfs 2.0G 0 2.0G 0% /sys/firmware
The problem is that by writing to this hosts file, I can fill 100% of the host system's disk space.
For example:
cat /dev/urandom > /etc/hosts
Result of the executed command:
root@b74761f5e0bf:/# cat /dev/urandom > /etc/hosts
cat: write error: No space left on device
root@b74761f5e0bf:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 4.0G 20K 4.0G 1% /
tmpfs 64M 0 64M 0% /dev
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sdb1 16G 16G 84K 100% /etc/hosts
tmpfs 2.0G 0 2.0G 0% /proc/acpi
tmpfs 2.0G 0 2.0G 0% /proc/scsi
tmpfs 2.0G 0 2.0G 0% /sys/firmware
When I run docker inspect on the container:
# docker inspect container_name
I see the following in the output:
"ResolvConfPath": "/mnt/docker/containers/b74761f5e0bf334d9e1c77f1423a57272dede65bb5ab37ca81e91c645f3096ff/resolv.conf",
"HostnamePath": "/mnt/docker/containers/b74761f5e0bf334d9e1c77f1423a57272dede65bb5ab37ca81e91c645f3096ff/hostname",
"HostsPath": "/mnt/docker/containers/b74761f5e0bf334d9e1c77f1423a57272dede65bb5ab37ca81e91c645f3096ff/hosts",
The fields ResolvConfPath, HostnamePath, and HostsPath are responsible for these mounts.
But I can't find anything in the documentation, or in similar questions, about how to avoid this mounting.
Does anyone know how I can manage these mounts to avoid the problem described above?
A quick option, though one that does not suit me, is to mount my own hosts file from the host system in read-only mode.
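For reference, that workaround would look something like this (a sketch; the image name and host path are illustrative, and :ro makes the bind mount unwritable from inside the container):

```shell
# Bind-mount a hosts file prepared on the host as read-only, so a
# container process cannot fill the host disk through /etc/hosts.
# (Illustrative path and image; keep the --storage-opt limit as before.)
docker run -d --storage-opt size=4G \
  -v /srv/container-hosts:/etc/hosts:ro \
  some-image
```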
System information:
Docker version 20.10.12, build e91ed57
Ubuntu 20.04.3 LTS (Focal Fossa)
Filesystem type: xfs

How to increase the size of /dev/root on a docker image on a Raspberry Pi

I'm using the https://github.com/lukechilds/dockerpi project to recreate a Raspberry Pi locally with Docker. However, the default disk space is very small and I quickly fill it up:
pi#raspberrypi:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.8G 1.2G 533M 69% /
devtmpfs 124M 0 124M 0% /dev
tmpfs 124M 0 124M 0% /dev/shm
tmpfs 124M 1.9M 122M 2% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 124M 0 124M 0% /sys/fs/cgroup
/dev/sda1 253M 52M 201M 21% /boot
tmpfs 25M 0 25M 0% /run/user/1000
How can I give more space to the Pi? I saw this issue, but I don't understand how that solution is implemented, or whether it is relevant.
To increase the disk size, you need to extend the partition of the qemu disk used inside the container.
First start the container, which unpacks the root filesystem and mounts it at a host path:
docker run --rm -v $HOME/.dockerpi:/sdcard -it lukechilds/dockerpi
Once the virtualized Raspberry Pi is up, stop it by running sudo poweroff from its prompt.
You now have the qemu disk in $HOME/.dockerpi/filesystem.img.
It can be extended with:
sudo qemu-img resize -f raw $HOME/.dockerpi/filesystem.img 10G
startsector=$(fdisk -u -l $HOME/.dockerpi/filesystem.img | grep filesystem.img2 | awk '{print $2}')
sudo parted $HOME/.dockerpi/filesystem.img --script rm 2
sudo parted $HOME/.dockerpi/filesystem.img --script "mkpart primary ext2 ${startsector}s -1s"
Restart the Raspberry Pi, which will now use the resized qemu disk:
docker run --rm -v $HOME/.dockerpi:/sdcard -it lukechilds/dockerpi
From the Pi's prompt, extend the root filesystem:
sudo resize2fs /dev/sda2 8G
The root filesystem is now larger; df -h gives:
Filesystem Size Used Avail Use% Mounted on
/dev/root 7.9G 1.2G 6.4G 16% /
devtmpfs 124M 0 124M 0% /dev
tmpfs 124M 0 124M 0% /dev/shm
tmpfs 124M 1.9M 122M 2% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 124M 0 124M 0% /sys/fs/cgroup
/dev/sda1 253M 52M 201M 21% /boot
tmpfs 25M 0 25M 0% /run/user/1000
If the solution is indeed to resize /dev/root, you can follow this thread, which concludes:
Using the gparted live distro, I struggled for a little while until I realised that the /dev/root partition was within another partition.
Resizing the latter, then the former, everything works. I just gave the /dev/root partition everything remaining on the disk, the other partitions I left at their original sizes.

How to avoid docker mount /etc/hosts /etc/hostname /etc/resolv.conf automatically

I was trying to share my GPGPU workstation with some friends. Since they use different environments, I considered using Docker containers.
When running df -h in docker container, I find:
Filesystem Size Used Avail Use% Mounted on
overlay 10G 24K 10G 1% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.8G 8.1M 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /tmp
tmpfs 3.8G 0 3.8G 0% /run/lock
/dev/sda3 465G 6.2G 459G 2% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
I was surprised to find /dev/sda3 mounted on /etc/hosts, with the host's disk size visible; I would rather that not be exposed.
It can be unmounted by an rc.local script or some service, but is there any way to prevent Docker from mounting it automatically, so that these files are managed entirely by the container, just like in a normal virtual machine?
Docker version is 18.09.6, build 481bc77
The container was started with:
docker run -it --net=macvlan -d --cap-add=SYS_ADMIN --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:ro -e "container=docker" --device /dev/fuse -v /dev/hugepages:/dev/hugepages --runtime=runc --storage-opt "size=10G" ubuntu:16.04 /sbin/init
The image is fetched from Docker Hub without any changes:
https://hub.docker.com/_/ubuntu
https://github.com/tianon/docker-brew-ubuntu-core/blob/010bf9649b1d10e2c34b159a9a9b338d0fdd4939/xenial/Dockerfile
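The unmount workaround mentioned in the question (via rc.local or a service) could be sketched like this; it is hypothetical, not an official Docker option, and relies on the CAP_SYS_ADMIN capability that the run command above already grants:

```shell
# Run inside the container at startup (e.g. from rc.local): detach the
# three host-managed files. Without CAP_SYS_ADMIN, umount will fail.
for f in /etc/hosts /etc/hostname /etc/resolv.conf; do
    umount "$f" 2>/dev/null || echo "could not unmount $f" >&2
done
```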

Docker: How to increase the size of tmpfs volumes?

I observed that the size of the tmpfs volumes created by docker is roughly half the size of the machine's physical memory.
For example, on a machine with 22GB of RAM, I got this:
Filesystem Size Used Avail Use% Mounted on
overlay 970G 130G 840G 14% /
tmpfs 64M 0 64M 0% /dev
tmpfs 12G 0 12G 0% /sys/fs/cgroup
tmpfs 20G 0 20G 0% /ramdisk
/dev/sda1 970G 130G 840G 14% /etc/hosts
tmpfs 12G 180K 12G 1% /dev/shm
tmpfs 12G 0 12G 0% /proc/acpi
tmpfs 12G 0 12G 0% /proc/scsi
tmpfs 12G 0 12G 0% /sys/firmware
I would like to increase this size. Could anybody please tell me how to do that?
Thank you very much in advance for your help!
Update: Let me add some context to this question.
My container has a /ramdisk volume whose size is large (20GB here) because one of my programs needs it:
nvidia-docker run --ipc=host -h $HOSTNAME --mount type=tmpfs,destination=/ramdisk,tmpfs-mode=1770,tmpfs-size=21474836480
When the program's usage of the ramdisk surpasses 12GB, it crashes (even though /ramdisk still has 8GB left). Note that 12GB is the size of the other tmpfs system volumes.
So, one solution I could think of is to increase the size of those other volumes, which is my question.
Per the Docker docs (20GiB, as in the question's example):
docker run -d \
-it \
--name tmptest \
--mount type=tmpfs,destination=/app,tmpfs-mode=1770,tmpfs-size=21474836480 \
nginx:latest
PS: the docs note that tmpfs mounts have unlimited size by default, so in practice values may be capped by the amount of free memory on the host.
SRC: https://docs.docker.com/storage/tmpfs/#specify-tmpfs-options
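Since tmpfs-size is given in bytes, it can be safer to compute the value than to hard-code it; the 21474836480 used above is exactly 20 GiB:

```shell
# 20 GiB in bytes, matching the tmpfs-size in the example above.
SIZE=$((20 * 1024 * 1024 * 1024))
echo "$SIZE"   # 21474836480
# Then pass it along (assuming a running Docker daemon):
# docker run -d --name tmptest \
#   --mount type=tmpfs,destination=/app,tmpfs-size="$SIZE" nginx:latest
```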
