No Space on CentOS with Docker

I had been using Docker on my CentOS machine for a while and had accumulated a lot of images and containers (around 4 GB). My machine has 8 GB of storage, and I kept getting an error from devicemapper whenever I tried to remove a Docker container or image with docker rm or docker rmi. The error was: Error response from daemon: Driver devicemapper failed to remove root filesystem. So I stopped the Docker service and tried restarting it, but that also failed due to devicemapper. After that I uninstalled Docker and removed all images, containers, and volumes by running the following command: rm -rf /var/lib/docker. However, after running that it does not seem like any space was freed up:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 7.7G 346M 96% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 193M 1.6G 11% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
tmpfs 361M 0 361M 0% /run/user/1000
$ du -ch -d 1 | sort -hr
3.6G total
3.6G .
1.7G ./usr
903M ./var
433M ./home
228M ./opt
193M ./run
118M ./boot
17M ./etc
6.4M ./tmp
4.0K ./root
0 ./sys
0 ./srv
0 ./proc
0 ./mnt
0 ./media
0 ./dev
Why does df tell me I am using 7.7G, while du tells me I am using 3.6G? The figure that du gives (3.6G) should be the correct one, since I deleted everything in /var/lib/docker.

I had a similar issue. This ticket was helpful.
Depending on the file system you are using, you will want to either run fstrim or zerofree, or attach the drive to another machine and run xfs_repair on it.
If your file system is XFS and you used xfs_repair, then after running that command there should be a lost+found directory at the root of the drive containing all the data that was taking up space but had become unreachable.
You can then delete it, and the freed space will actually be reflected by the system.
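As a rough sketch, you can first check which filesystem backs the root device before picking a tool (the device name /dev/xvda1 comes from the df output in the question and is an assumption for your system; xfs_repair needs the device unmounted, while fstrim works on a mounted filesystem):

```shell
# Identify the filesystem type of / to decide which recovery tool applies.
fstype=$(awk '$2 == "/" {print $3}' /proc/mounts)
echo "root filesystem: $fstype"

case "$fstype" in
  xfs)       echo "attach the disk to another machine and run: xfs_repair /dev/xvda1" ;;
  ext3|ext4) echo "try: fstrim -v /   (or zerofree on the unmounted device)" ;;
  *)         echo "check the recovery options for $fstype" ;;
esac
```

Both xfs_repair and fstrim need root privileges, so prefix them with sudo where appropriate.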

Related

VSCode Remote Container - Error: ENOSPC: No space left on device

I have been using the VSCode Remote Container Plugin for some time without issue. But today when I tried to open my project the remote container failed to open with the following error:
Command failed: docker exec -w /home/vscode/.vscode-server/bin/9833dd88 24d0faab /bin/sh -c echo 34503 >.devport
rejected promise not handled within 1 second: Error: ENOSPC: no space left on device, mkdir '/home/vscode/.vscode-server/data/logs/20191209T160810
It looks like the container is out of disk space but I'm not sure how to add more.
Upon further inspection I am a bit confused. When I run df from inside the container, it shows that I have used 60G of disk space, but the size of my root directory is only ~9G.
$ df
Filesystem Size Used Avail Use% Mounted on
overlay 63G 61G 0 100% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda1 63G 61G 0 100% /etc/hosts
tmpfs 7.4G 0 7.4G 0% /proc/acpi
tmpfs 7.4G 0 7.4G 0% /sys/firmware
$ du -h --max-depth=1 /
9.2G /
What is the best way to resolve this issue?
Try docker system prune --all if you don't see any containers or images with docker ps and docker images, but be careful: it removes all build cache as well as unused containers, images, and networks. docker ps -a and docker images -a show you all the containers and images, including ones that are not currently running or in use.
Check the docs if the problem persists: Clean unused docker resources
It looks like all Docker containers on your system share the same disk space allocation. I found two solutions:
Go into Docker Desktop's settings and increase the amount of disk space available.
Run docker container prune to free disk space being used by stopped containers.
In my case I had a bunch of stopped Docker containers from months back taking up all of the disk space allocated to Docker.

Docker containers taking up too much disk storage

I used df -hl to check the status of my VPS, and it looks like the storage is being counted against Docker several times over (I only have one WordPress site on this VPS; there are no other projects).
Today I received an email from Linode telling me my storage is full:
Total: 25600 MB
Used: 25600 MB
I have a WordPress site on this VPS, built with Docker and WordPress.
Here is the output from my VPS:
root@localhost:~# df -hl
Filesystem Size Used Avail Use% Mounted on
udev 463M 0 463M 0% /dev
tmpfs 99M 5.9M 93M 6% /run
/dev/sda 25G 5.0G 19G 22% /
tmpfs 493M 0 493M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 493M 0 493M 0% /sys/fs/cgroup
overlay 25G 5.0G 19G 22% /var/lib/docker/overlay2/2ebf8af06fccd1e3a455746e257c990e6d85f848832eaadd636f48d56e6fbefb/merged
overlay 25G 5.0G 19G 22% /var/lib/docker/overlay2/28044ad06cc4b50d58a331cd644a254c7c90480ad04c1686f2974503da1c98de/merged
shm 64M 0 64M 0% /var/lib/docker/containers/932928ba7b7ccbbb4dd9f05263fadda8c6764ec7185deefc37c0fc555a2c32d5/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/67d10956ef387af8327570b7013cc113114a48ccf3654f9ee01041e88e740192/mounts/shm
overlay 25G 5.0G 19G 22% /var/lib/docker/overlay2/b81fd707a47702b060b462fbb1424bf024c4e593071b0782f4c817ca46a188e2/merged
shm 64M 0 64M 0% /var/lib/docker/containers/ce2422fff8741ede110a730d1283e0f43792de05a14b2ae9bdb59874fefa5fc2/mounts/shm
tmpfs 99M 0 99M 0% /run/user/0
root@localhost:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
932928ba7b7c wordpress:latest "docker-entrypoint.s…" 7 weeks ago Up 2 weeks 0.0.0.0:1994->80/tcp jujuzone_site
67d10956ef38 phpmyadmin/phpmyadmin "/run.sh supervisord…" 7 weeks ago Up 2 weeks 9000/tcp, 0.0.0.0:8081->80/tcp phpmyadmin
ce2422fff874 mysql:5.7 "docker-entrypoint.s…" 7 weeks ago Up 4 hours 3306/tcp, 33060/tcp db_jujuzone
root@localhost:~# docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
It seems you've deleted some files that are still held open by the system.
Keep in mind that in this case the df command can show a different size from the du command.
You can check this more precisely by running du -hc on the same directories and seeing whether its total differs from what df reports.
You can also run lsof | grep '(deleted)' to find files that were deleted but are still held open by a file descriptor.
In that case, you can kill the process and restart the responsible daemon to release the space.
After all that, consider running docker system prune with the -a flag to also clear unused images and maybe reclaim a little more space.
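The df/du gap caused by deleted-but-open files is easy to reproduce with a throwaway file (a minimal sketch; mktemp supplies the file name):

```shell
# A deleted file whose descriptor is still open keeps consuming disk
# space: du no longer counts it, but df still does until the fd closes.
tmp=$(mktemp)
printf 'hello' > "$tmp"
exec 3< "$tmp"        # hold the file open on descriptor 3
rm "$tmp"             # du (and ls) no longer see the file...
cat <&3               # ...but the data is still readable -> prints: hello
exec 3<&-             # closing the descriptor finally releases the space
```

This is exactly why killing or restarting the daemon that holds the descriptor makes the space reappear in df.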
I was experiencing this same issue recently on Ubuntu 20.04 with Docker v19.03.13. I searched through the Docker docs and found that this may be because of a new type of filesystem they introduced. You can read more about it here. The way I fixed it was by editing the /etc/docker/daemon.json file (creating it if not already present) and adding the following lines:
{
"storage-driver": "overlay"
}
Then restarting docker using the following command:
sudo systemctl restart docker
I have answered a similar question here.

Docker host & no space left on device

I'm using Rancher to manage some EC2 hosts (4 nodes in an auto-scaling group) and to orchestrate containers. Everything works fine.
But at some point I have a recurring disk space problem, even though I remove unused and untagged images with this command:
docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi
Like I said, even though I run the command above, my hosts continuously run out of space:
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 12K 7.9G 1% /dev
tmpfs 1.6G 1.4M 1.6G 1% /run
/dev/xvda1 79G 77G 0 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.9G 7.5M 7.9G 1% /run/shm
none 100M 0 100M 0% /run/user
I'm using Rancher 1.1.4 and my hosts are running Docker 1.12.5 under Ubuntu 14.04.4 LTS.
Is there something I'm missing? What are the best practices for configuring Docker on production hosts in order to avoid this problem?
Thank you for your help.
Do you use volume mounts (docker run -v /local/path:/container/path) for your containers' persistent data?
If not, data written by your containers (databases, logs, ...) will keep growing the topmost writable layer of the running container.
To see the real size of your currently running containers:
docker ps -s
You can also use tools such as https://www.diskreport.net to analyse your disk usage and see what has grown between two measurements.
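To track down what is actually growing on the host, a simple du sweep of the top-level directories also works (run it as root so /var/lib/docker is readable; -x stays on one filesystem so tmpfs mounts don't skew the numbers):

```shell
# Summarize the largest top-level directories on one filesystem.
# $1 defaults to / ; pass another path to scan a subtree instead.
target=${1:-/}
du -xh --max-depth=1 "$target" 2>/dev/null | sort -hr | head -10
```

Running this twice, some time apart, shows which directory is responsible for the growth.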

Ambiguity in disk space allocation for docker containers

I have two physical machines with Docker 1.11.3 installed on Ubuntu. Following is the configuration of the machines:
1. Machine 1 - RAM 4 GB, Hard disk - 500 GB, quad core
2. Machine 2 - RAM 8 GB, Hard disk - 1 TB, octa core
I created containers on both machines. When I check the disk space of the individual containers, here are some stats I am not able to understand the reason behind.
1. Container on Machine 1
root@e1t2j3k45432:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 37G 27G 8.2G 77% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda9 37G 27G 8.2G 77% /etc/hosts
shm 64M 0 64M 0% /dev/shm
I have nothing installed in the above container, yet it shows 27 GB used.
How did this container get 37 GB of space?
2. Container on Machine 2
root@0af8ac09b89c:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 184G 11G 164G 6% /
tmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda5 184G 11G 164G 6% /etc/hosts
shm 64M 0 64M 0% /dev/shm
Why is only 11 GB of disk space shown as used in this container, even though this is also an empty container with no packages installed?
How did this container get 184 GB of disk space?
The disk usage reported inside Docker is the host's disk usage of /var/lib/docker (in the example below my /var/lib/docker is symlinked to /home, where I have more disk space):
bash$ df -k /var/lib/docker/.
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/... 720798904 311706176 372455240 46% /home
bash$ docker run --rm -it busybox df -k
Filesystem 1K-blocks Used Available Use% Mounted on
none 720798904 311706268 372455148 46% /
...
So if you run the df command in the same container on different hosts, a different result is expected.
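A quick way to see this on your own machines is to compare df on the host's /var/lib/docker with df inside a container; both resolve to the same backing filesystem (a sketch; the docker lines are shown as comments since they need a running daemon):

```shell
# df reports the filesystem backing a path, not the directory itself,
# so a container's overlay root mirrors the host filesystem that
# holds /var/lib/docker.
df -P / | tail -1
# Hypothetical comparison on a Docker host (requires the daemon):
#   df -P /var/lib/docker/.
#   docker run --rm busybox df -P /
```

The block totals in the two df outputs should match, which explains the 37 GB vs 184 GB difference between the two machines above.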

Docker error: System error: no space left on device

I am getting this error whenever I try to build my image. I searched on the internet and found some links, but none of them solved my problem.
Error:
System error: write /cgroup/docker/5dba72d862bf8171d36aa022d1929455af6589af9fb7ba6220b01842c7a7dee6/cgroup.procs: no space left on device.
This is the output of 'df -h':
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 56G 24G 29G 46% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 1.9G 4.0K 1.9G 1% /dev
tmpfs 385M 1.3M 384M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1.9G 324M 1.6G 17% /run/shm
none 100M 20K 100M 1% /run/user
/dev/sda8 761M 3.4M 758M 1% /boot/efi
/dev/sda3 55G 49G 3.8G 93% /home
/dev/sda4 275G 48G 213G 19% /opt/drive2
/dev/sda5 184G 74G 102G 43% /opt/drive3
/dev/sda6 215G 157G 48G 77% /opt/drive4
/dev/sda7 129G 23G 99G 19% /opt/drive1
You have no space on your host machine, where you are trying to build the image. First of all, you have to free some space. If you are using a Debian-based system, this will help you:
df -h (show free space)
apt-get autoclean
apt-get autoremove
BleachBit (a CCleaner-like tool for Linux)
Moreover, you can optimize your image builds by following these Dockerfile tips.
This is a common problem if you are using docker-machine: the disk inside the VM is smaller. Try cleaning up your Docker resources using these commands:
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q -f dangling=true)
The reason may be the Docker devicemapper Base Device Size.
Please consider this, or my answer on Stack Overflow.
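If the devicemapper base device size is indeed the limit, one commonly cited fix is raising dm.basesize in /etc/docker/daemon.json (a sketch, assuming the devicemapper storage driver; existing images and containers must be removed and the daemon restarted before the new size takes effect):

```json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=20G"
  ]
}
```

followed by sudo systemctl restart docker.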
