We set up a Bitnami Kafka cluster (2 brokers, 1 ZooKeeper) on Google Cloud's Compute Engine.
After restarting the brokers, one broker's Bitnami Kafka drive was no longer mounted.
Working broker:
kafka-cluster-demo-kafka-0:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 5.0M 365M 2% /run
/dev/sda1 9.7G 2.7G 6.5G 30% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda15 124M 7.9M 116M 7% /boot/efi
**/dev/sdb 49G 53M 47G 1% /bitnami**
tmpfs 370M 0 370M 0% /run/user/1582053158
lumo_gftdevgcp_com@kafka-cluster-demo-kafka-0:~$ cd /bitnami/
lumo_gftdevgcp_com@kafka-cluster-demo-kafka-0:/bitnami$ ls
**kafka** lost+found
lumo_gftdevgcp_com@kafka-cluster-demo-kafka-0:/bitnami$
Issue broker:
kafka-cluster-demo-kafka-1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 256K 370M 1% /run
/dev/sda1 9.7G 4.5G 4.8G 49% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 740M 0 740M 0% /dev/shm
/dev/sda15 124M 7.9M 116M 7% /boot/efi
As you can see, there is no /bitnami mount.
Does anyone know how to remount the drive, and why it disappeared?
Could you check your PVCs and confirm whether they are bound? It's really strange; I haven't seen that before.
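If the VM still sees the disk and it simply wasn't remounted after the reboot, a minimal sketch to get it back (assuming the data disk is /dev/sdb and formatted ext4, as on the working broker):

```shell
# Check whether the kernel still sees the data disk at all
lsblk -f

# If /dev/sdb shows up with a filesystem but no mountpoint, remount it
sudo mount /dev/sdb /bitnami

# Persist the mount across reboots, keyed by UUID so a device rename is harmless
uuid=$(sudo blkid -s UUID -o value /dev/sdb)
echo "UUID=$uuid /bitnami ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
```

If /dev/sdb doesn't appear in lsblk at all, the problem is at the Compute Engine level: check in the Cloud Console whether the persistent disk is still attached to the instance.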
Related
The project I'm working on has a docker-compose environment for development, and all of a sudden multiple containers started to throw I/O errors like this one:
Error: EIO: i/o error, open '/usr/app/src/components/common/FilterComponent.tsx'
NGINX crashes for the same reason:
nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (5: Input/output error)
Docker has enough disk space (112GB avail/32GB used), everything else on my Mac seems to work fine. The only way to make it work again is a Docker factory reset, until it happens again (~1-2 days).
This is the docker-compose.yml file, pretty basic if you ask me:
x-env-files: &env-files
  env_file:
    - docker.env

version: '3.3'
services:
  mongo-store:
    image: mongo:latest
    volumes:
      - /data/db
    ports:
      - 27017:27017
    networks:
      - backend
    command: ['mongod', '--bind_ip', '0.0.0.0']
  application-ui:
    build:
      context: .
      dockerfile: Dockerfile-dev
    image: application-ui:dev
    ports:
      - 5000:5000
    networks:
      - frontend
    stdin_open: true
    volumes:
      - ./src/:/usr/app/src/
    command: ['npm', 'run', 'start']
  application-service:
    build:
      context: ../application-service
      dockerfile: Dockerfile-local
    image: application-service:dev
    ports:
      - 9000:9000
    networks:
      - backend
      - frontend
    environment:
      - 'GITLAB_TOKEN=${GITLAB_TOKEN}'
    <<: *env-files
    volumes:
      - ../application-service/src/:/usr/app/src/
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.local.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
      - 443:443
    networks:
      - backend
      - frontend

networks:
  backend:
  frontend:
This is the output of docker run --rm -v /:/host busybox df -h:
Filesystem Size Used Available Use% Mounted on
overlay 149.4G 34.8G 106.9G 25% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
shm 64.0M 0 64.0M 0% /dev/shm
overlay 3.9G 316.0K 3.9G 0% /host
dev 3.8G 0 3.8G 0% /host/dev
shm 3.9G 0 3.9G 0% /host/dev/shm
/dev/vda1 149.4G 34.8G 106.9G 25% /host/etc/cni/net.d
/dev/vda1 149.4G 34.8G 106.9G 25% /host/etc/kubernetes
tmpfs 796.0M 480.0K 795.5M 0% /host/etc/resolv.conf
tmpfs 796.0M 480.0K 795.5M 0% /host/run/config
tmpfs 796.0M 480.0K 795.5M 0% /host/run/desktop
tmpfs 796.0M 480.0K 795.5M 0% /host/run/guest-services
tmpfs 796.0M 480.0K 795.5M 0% /host/run/host-services
cgroup_root 10.0M 0 10.0M 0% /host/sys/fs/cgroup
/dev/vda1 149.4G 34.8G 106.9G 25% /host/usr/libexec/kubernetes/kubelet-plugins
/dev/vda1 149.4G 34.8G 106.9G 25% /host/var/lib
/dev/vda1 149.4G 34.8G 106.9G 25% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/docker
tmpfs 3.9G 316.0K 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/docker/tmp
overlay 3.9G 316.0K 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/docker/rootfs
tmpfs 3.9G 4.0K 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/acpid/tmp
overlay 3.9G 4.0K 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/acpid/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/binfmt/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/binfmt/rootfs
tmpfs 3.9G 8.0K 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/dhcpcd/tmp
overlay 3.9G 8.0K 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/dhcpcd/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/diagnose/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/diagnose/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/dns-forwarder/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/dns-forwarder/rootfs
tmpfs 3.9G 316.0K 3.9G 0% /host/var/lib/mount-docker-cache/entries/docker.tar/70918e63c378be8683de4fc10c553798322a67941c518075202eab14ca0a8c55/containers/services/docker/tmp
overlay 3.9G 316.0K 3.9G 0% /host/var/lib/mount-docker-cache/entries/docker.tar/70918e63c378be8683de4fc10c553798322a67941c518075202eab14ca0a8c55/containers/services/docker/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/http-proxy/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/http-proxy/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/kmsg/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/kmsg/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/procd/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/procd/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/sntpc/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/sntpc/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/socks/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/socks/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/trim-after-delete/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/trim-after-delete/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/volume-contents/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/volume-contents/rootfs
tmpfs 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/vpnkit-forwarder/tmp
overlay 3.9G 0 3.9G 0% /host/var/lib/mount-docker-cache/entries/services.tar/de9df97d36780dceb06110a4e27244d44c593b57e18ad39cc450365999e5a4a3/containers/services/vpnkit-forwarder/rootfs
overlay 149.4G 34.8G 106.9G 25% /host/var/lib/docker/overlay2/9f12278ac636d7b67274d5ddb0af91405db4034136b4fa978e6882c884413a41/merged
overlay 149.4G 34.8G 106.9G 25% /host/var/lib/docker/overlay2/9f12278ac636d7b67274d5ddb0af91405db4034136b4fa978e6882c884413a41/merged
tmpfs 64.0M 0 64.0M 0% /host/var/lib/docker/overlay2/9f12278ac636d7b67274d5ddb0af91405db4034136b4fa978e6882c884413a41/merged/dev
shm 64.0M 0 64.0M 0% /host/var/lib/docker/overlay2/9f12278ac636d7b67274d5ddb0af91405db4034136b4fa978e6882c884413a41/merged/dev/shm
tmpfs 3.9G 0 3.9G 0% /host/var/lib/docker/overlay2/9f12278ac636d7b67274d5ddb0af91405db4034136b4fa978e6882c884413a41/merged/sys/fs/cgroup
tmpfs 3.9G 0 3.9G 0% /host/var/log
tmpfs 796.0M 480.0K 795.5M 0% /host/var/run/linuxkit-containerd/containerd.sock
tmpfs 796.0M 480.0K 795.5M 0% /host/var/run/linuxkit-external-logging.sock
grpcfuse 931.5G 14.3G 768.1G 2% /host/host_mnt
/dev/vda1 149.4G 34.8G 106.9G 25% /etc/resolv.conf
/dev/vda1 149.4G 34.8G 106.9G 25% /etc/hostname
/dev/vda1 149.4G 34.8G 106.9G 25% /etc/hosts
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs 3.9G 0 3.9G 0% /sys/firmware
I've never seen this happen before; it started just a couple of weeks ago. It doesn't happen all the time: everything works fine for hours, sometimes days, and then it crashes. None of my colleagues working on the same project have had the same problem.
Docker Desktop for Mac: v3.6.0 (67351)
Docker Engine: v20.10.8
Compose: v1.29.2
MacOS: v11.5.1 (20G80)
Anyone else experiencing this issue? Any suggestion on how I can further investigate the problem?
It seems that the issue disappeared when I deactivated the option called "Use gRPC FUSE for file sharing". I don't know whether this points to a problem with my disk or my machine, but I'm glad it works now.
I need to put a limit on block IO operations speed for a number of docker containers.
To achieve this, I need to do something like:
docker run -it --device-read-bps /dev/sda:1mb ubuntu, according to the docs.
My question is: how do I get the correct device per container to set the limit for? Is there any way to get this info with docker inspect?
docker inspect my_container | grep DeviceName returns nothing.
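One way to recover the device without a DeviceName field, given your df output (a sketch; it assumes the overlay2 storage driver, where every container's writes land on whatever device backs /var/lib/docker):

```shell
# With overlay2, container reads/writes hit the device backing /var/lib/docker,
# so that is the device to throttle
dev=$(df --output=source /var/lib/docker | tail -n 1)
echo "$dev"

# Throttle reads from that device for the new container
docker run -it --device-read-bps "$dev":1mb ubuntu
```

On your host this resolves to /dev/mapper/vg0-data; note that bind mounts on other filesystems (e.g. /mnt/data on /dev/sdb1) would need their own limit flags.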
The output of df -h is:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 0 7.9G 0% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 7.9G 1.4M 7.9G 1% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/mapper/vg0-data 31G 6.9G 23G 24% /
tmpfs 7.9G 0 7.9G 0% /tmp
/dev/sda3 283M 27M 238M 11% /boot
/dev/sda2 10M 2.2M 7.9M 22% /boot/efi
/dev/sdb1 63G 198M 60G 1% /mnt/data
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/0c917cf591efb40f75a450b6ad93bf9cf06c91f7f625e1f073887031d152f444/merged
shm 64M 0 64M 0% /var/lib/docker/containers/b21b4b5d27bd57f04204d3a54f11930a532bdc8c56cabfe903f34b955f3c81f1/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/8b198c4829d4eb13c21e7c9d1be99aa00986d64f13f50a454abc539aed37e759/merged
shm 64M 0 64M 0% /var/lib/docker/containers/1667192b0b8026eb517894fdf72f71c6aca5a0ff78648447c12175c96b76990c/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/72efa8bcdec2c529ca3ebde224f8d14e22780636c614fe45a0228eb957a99351/merged
shm 64M 0 64M 0% /var/lib/docker/containers/a7123dfebcc42a675b6ccb0435df1cc24bcd0a39847fb4cb5a3fdcaf2d38089f/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/65ede56c537f5de766f616f13df90aae46f287b9e28703496f90af9f74f4c463/merged
shm 64M 0 64M 0% /var/lib/docker/containers/6a24ef7116078d48fde87fc47a8efd194ef541ffb7d85ae4bec34e5086e46d4b/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/2b444d99740719500285bd94eb389815e631cd183cf3637e64fa40999ccf2530/merged
shm 64M 0 64M 0% /var/lib/docker/containers/8c7300dcd9981878ce46f4b805d65b72bf3afbf87d9510312ba5110b95ae8cf4/mounts/shm
overlay 31G 6.9G 23G 24% /var/lib/docker/overlay2/9ea1ad005bcbcdb0e52d6f2b05568268e3a6e318f8d30986e0fac56523408e89/merged
shm 64M 0 64M 0% /var/lib/docker/containers/5eb12be6805976d230f5ec17bda65100745bebeccea4ab7c2bcf2260405ecb96/mounts/shm
I came across many threads asking this question, like this one, but no definitive answer was given.
Docker does not provide a direct way to find out which block device a container's read/write operations go through.
Successful workaround:
1. Get all the block devices of the OS (OS-dependent command).
2. Put the limit on all the devices. No side effects observed.
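The two steps above can be sketched like this on Linux (lsblk availability and the 1mb figure are assumptions; adjust to your distribution and target rate):

```shell
# 1. Collect all disk-type block devices
devices=$(lsblk -ndo NAME,TYPE | awk '$2 == "disk" {print "/dev/" $1}')

# 2. Build one --device-read-bps flag per device and apply them all
flags=""
for d in $devices; do
  flags="$flags --device-read-bps $d:1mb"
done
docker run -it $flags ubuntu
```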
Is there a way to change where the container's root directory is mounted? The reason is that currently it is mounted on /dev/mapper/sysvg-root.vol, which does not have much space left.
root@f967e2f116fe:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 10G 6.9G 3.2G 69% /
tmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/sysvg-root.vol 10G 6.9G 3.2G 69% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 32G 0 32G 0% /proc/acpi
tmpfs 32G 0 32G 0% /proc/scsi
tmpfs 32G 0 32G 0% /sys/firmware
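One common approach is to move Docker's whole data root off the small volume rather than remounting a single container. A sketch, not verified on this host (it assumes a larger filesystem mounted at /mnt/data and a systemd-managed Docker daemon):

```shell
# Stop Docker, point its data root at the bigger filesystem, and restart
sudo systemctl stop docker
sudo mkdir -p /mnt/data/docker
printf '{\n  "data-root": "/mnt/data/docker"\n}\n' | sudo tee /etc/docker/daemon.json
# Optionally copy existing images/containers over first:
# sudo rsync -a /var/lib/docker/ /mnt/data/docker/
sudo systemctl start docker
```

After the restart, new container filesystems (the overlay mounted at /) live on the new device instead of /dev/mapper/sysvg-root.vol.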
On a CentOS 7 server I'm running out of space due to "unknown" docker volumes, which I'm not able to link to their corresponding containers in order to evaluate whether I can delete them or not.
By running df -h I found that a lot of space is being used under /var/lib/docker/overlay2
[root@dev /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_sys-lv_root 20G 19G 0 100% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 50M 3.8G 2% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 190M 147M 30M 84% /boot
tmpfs 783M 0 783M 0% /run/user/0
overlay 20G 19G 0 100% /var/lib/docker/overlay2/df91b034e8daa4cfd70f43d1b430ef3d071921b53c2c272e2607176e229588d0/merged
shm 64M 0 64M 0% /var/lib/docker/containers/9a282dc54d83ed5b218e4c395a3d199b16eb032335cd5c310b7db35052186b7b/mounts/shm
overlay 20G 19G 0 100% /var/lib/docker/overlay2/14ca1651be01e15c51b4caa311ae4c90da45c976da466cad2daf1871bd8b8694/merged
shm 64M 4.0K 64M 1% /var/lib/docker/containers/f7fa1323897ddc1adadeb97cac83850b51f79797e375cbc1d9bb4cc1c439fa13/mounts/shm
overlay 20G 19G 0 100% /var/lib/docker/overlay2/80501e50123d4a300e3f48973215614b0b7f5ae7d6c251959ebd3d60c7e6d667/merged
shm 64M 0 64M 0% /var/lib/docker/containers/d6a1e107bb912d2a62134127cb7635bc6ed6cb0660e2c38d1ca9f8b991a37e59/mounts/shm
overlay 20G 19G 0 100% /var/lib/docker/overlay2/a1f83fb07c08ae0221c73aa2de0d510874f8f55f00bf50edf882f4ebf6ce5811/merged
shm 64M 0 64M 0% /var/lib/docker/containers/ea3aa1c7d7174aa75d46e4a5ec2f392abdf5f8aa768677464871c969c8c1c433/mounts/shm
overlay 20G 19G 0 100% /var/lib/docker/overlay2/1188055ab49d75b016e8c4ad95cde2ad6bc04d354ff7f1a662fdd468a87cb143/merged
shm 64M 0 64M 0% /var/lib/docker/containers/72bac475e028076bd43a75a0bb2e948e39fda486f86a481ed8ba96b4f4988204/mounts/shm
overlay 20G 19G 0 100% /var/lib/docker/overlay2/af8a522ad7fceea7ed91ad1408d3b1c99f7744fee60d2537bc47984a0fee240a/merged
shm 64M 0 64M 0% /var/lib/docker/containers/d5d4bd791a1eb99b60ee6e160bacde5563dac31e9bfaedc6dfe0a1f398f4f8e5/mounts/shm
How can I safely free up some space?
How can I link those directories to their corresponding containers?
I've already run:
docker system prune
and
docker volume prune
But no space has been released.
May this be related to having made some changes to the docker-compose.yml files and then restarting/recreating the containers?
UPDATE
I'm also thinking that all the space is just allocated but not really used, since for example with the first overlay row I get:
cd /var/lib/docker/overlay2/df91b034e8daa4cfd70f43d1b430ef3d071921b53c2c272e2607176e229588d0/
[root@dev df91b034e8daa4cfd70f43d1b430ef3d071921b53c2c272e2607176e229588d0]# du -csh .
494M .
494M total
instead of the 19GB which df -h shows as used.
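To map each overlay2 hash back to a container name, something like this should work (the hash shown by df is the directory one level above "merged", matched here against docker inspect; containers with no match belong to images, build cache, or removed containers):

```shell
# Print "<container name> <overlay2 hash>" for every container, running or stopped
for id in $(docker ps -aq); do
  merged=$(docker inspect -f '{{ .GraphDriver.Data.MergedDir }}' "$id")
  name=$(docker inspect -f '{{ .Name }}' "$id")
  echo "$name $(basename "$(dirname "$merged")")"
done
```

As for the 19G vs 494M discrepancy: for overlay mounts, df reports the usage of the whole backing filesystem (/ here) on every row, not per-directory usage, so du is the number to trust for each directory.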
Where are the docker containers stored on my fedora host machine?
Below is my disk space with 3 containers started. The total disk space shown is 1.3T. When I start another container, the total grows to 1.4T, and so on. My SSD is nowhere near that large, so what is going on? When another 99G is added, none of the other numbers go down.
[root@almond docker]# df --total -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 379M 7.5G 5% /dev/shm
tmpfs 7.8G 2.4M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 50G 37G 11G 79% /
tmpfs 7.8G 2.2M 7.8G 1% /tmp
/dev/sda1 477M 176M 273M 40% /boot
/dev/mapper/fedora-home 860G 45G 772G 6% /home
tmpfs 1.6G 16K 1.6G 1% /run/user/42
tmpfs 1.6G 5.2M 1.6G 1% /run/user/1000
tmpfs 1.6G 0 1.6G 0% /run/user/0
tmpfs 1.6G 0 1.6G 0% /run/user/26
/dev/dm-4 99G 668M 93G 1% /var/lib/docker/devicemapper/mnt/9b57a4483a4358dbd75e4a0bb3524da4b0db394f63875f90818daf4f47ab60c6
shm 64M 0 64M 0% /var/lib/docker/containers/d3a82943e6d0cf286395a3195eb191430e03e1d48571525a0c40238b0f6c6b1e/shm
/dev/dm-5 99G 640M 93G 1% /var/lib/docker/devicemapper/mnt/0a902b9a051b8718caae02c87a688f3646bb01e8906efbd60b03a76085613d9a
shm 64M 0 64M 0% /var/lib/docker/containers/0da780053f389c67e8054487643f4e61faec83228ab0e130e736b28876619b27/shm
/dev/dm-6 99G 640M 93G 1% /var/lib/docker/devicemapper/mnt/21bcf6fc14430e5e5d817a5de069e0aeaceb7c32dc125b3b1c2056060e5a79c7
shm 64M 0 64M 0% /var/lib/docker/containers/96ff5e4e5d58296821da90e78bb94b714268f661b7b4163272148ca2f8e3a06e/shm
total 1.3T 84G 1.1T 7% -
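This looks like the devicemapper storage driver: each container gets its own sparse 99G virtual device (/dev/dm-4, dm-5, ...), and df --total naively sums those virtual sizes, so the 1.3T is mostly space that was never actually allocated on the SSD. A sketch to confirm (the loop-lvm data file path is the default location, an assumption):

```shell
# Confirm which storage driver is in use
docker info 2>/dev/null | grep -i 'storage driver'

# With loop-lvm, real on-disk usage is the allocated size of the sparse data file
ls -lsh /var/lib/docker/devicemapper/devicemapper/data
```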