Disk space still full after pruning - docker

I have a problem when pruning Docker. After building images, I run "docker system prune --volumes -a -f", but it is not releasing space from "/var/lib/docker/overlay2". See below:
Before building the image, disk space & /var/lib/docker/overlay2 size:
ubuntu@xxx:~/tmp/app$ df -hv
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 390M 5.4M 384M 2% /run
/dev/nvme0n1p1 68G 20G 49G 29% /
tmpfs 2.0G 8.0K 2.0G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
tmpfs 390M 0 390M 0% /run/user/1000
ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
8.0K /var/lib/docker/overlay2
Building the image
ubuntu@xxx:~/tmp/app$ docker build -f ./Dockerfile .
Sending build context to Docker daemon 1.027MB
Step 1/12 : FROM mhart/alpine-node:9 as base
9: Pulling from mhart/alpine-node
ff3a5c916c92: Pull complete
c77918da3c72: Pull complete
Digest: sha256:3c3f7e30beb78b26a602f12da483d4fa0132e6d2b625c3c1b752c8a8f0fbd359
Status: Downloaded newer image for mhart/alpine-node:9
---> bd69a82c390b
.....
....
Successfully built d56be87e90a4
Sizes after the image is built:
ubuntu@xxx:~/tmp/app$ df -hv
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 390M 5.4M 384M 2% /run
/dev/nvme0n1p1 68G 21G 48G 30% /
tmpfs 2.0G 8.0K 2.0G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
tmpfs 390M 0 390M 0% /run/user/1000
ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
3.9G /var/lib/docker/overlay2
ubuntu@xxx:~/tmp/app$ docker system prune -af --volumes
Deleted Images:
deleted: sha256:ef4973a39ce03d2cc3de36d8394ee221b2c23ed457ffd35f90ebb28093b40881
deleted: sha256:c3a0682422b4f388c501e29b446ed7a0448ac6d9d28a1b20e336d572ef4ec9a8
deleted: sha256:6988f1bf347999f73b7e505df6b0d40267dc58bbdccc820cdfcecdaa1cb2c274
deleted: sha256:50aaadb4b332c8c1fafbe30c20c8d6f44148cae7094e50a75f6113f27041a880
untagged: alpine:3.6
untagged: alpine@sha256:ee0c0e7b6b20b175f5ffb1bbd48b41d94891b0b1074f2721acb008aafdf25417
deleted: sha256:d56be87e90a44c42d8f1c9deb188172056727eb79521a3702e7791dfd5bfa7b6
deleted: sha256:067da84a69e4a9f8aa825c617c06e8132996eef1573b090baa52cff7546b266d
deleted: sha256:72d4f65fefdf8c9f979bfb7bce56b9ba14bb9e1f7ca676e1186066686bb49291
deleted: sha256:037b7c3cb5390cbed80dfa511ed000c7cf3e48c30fb00adadbc64f724cf5523a
deleted: sha256:796fd2c67a7bc4e64ebaf321b2184daa97d7a24c4976b64db6a245aa5b1a3056
deleted: sha256:7ac06e12664b627d75cd9e43ef590c54523f53b2d116135da9227225f0e2e6a8
deleted: sha256:40993237c00a6d392ca366e5eaa27fcf6f17b652a2a65f3afe33c399fff1fb44
deleted: sha256:bafcf3176fe572fb88f86752e174927f46616a7cf97f2e011f6527a5c1dd68a4
deleted: sha256:bbcc764a2c14c13ddbe14aeb98815cd4f40626e19fb2b6d18d7d85cc86b65048
deleted: sha256:c69cad93cc00af6cc39480846d9dfc3300c580253957324872014bbc6c80e263
deleted: sha256:97a19d85898cf5cba6d2e733e2128c0c3b8ae548d89336b9eea065af19eb7159
deleted: sha256:43773d1dba76c4d537b494a8454558a41729b92aa2ad0feb23521c3e58cd0440
deleted: sha256:721384ec99e56bc06202a738722bcb4b8254b9bbd71c43ab7ad0d9e773ced7ac
untagged: mhart/alpine-node:9
untagged: mhart/alpine-node@sha256:3c3f7e30beb78b26a602f12da483d4fa0132e6d2b625c3c1b752c8a8f0fbd359
deleted: sha256:bd69a82c390b85bfa0c4e646b1a932d4a92c75a7f9fae147fdc92a63962130ff
Total reclaimed space: 122.2MB
It's releasing only 122.2 MB. Sizes after prune:
ubuntu@xxx:~/tmp/app$ df -hv
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 390M 5.4M 384M 2% /run
/dev/nvme0n1p1 68G 20G 48G 30% /
tmpfs 2.0G 8.0K 2.0G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
tmpfs 390M 0 390M 0% /run/user/1000
ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
3.7G /var/lib/docker/overlay2
As you can see, there are 0 containers/images:
ubuntu@xxx:~/tmp/app$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ubuntu@xxx:~/tmp/app$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
But the size of "/var/lib/docker/overlay2" has only decreased from 3.9G to 3.7G, and if I build more than one image, it increases every time. This is the Dockerfile I'm building:
FROM mhart/alpine-node:9 as base
RUN apk add --no-cache make gcc g++ python
WORKDIR /app
COPY package.json /app
RUN npm install --silent
# Only copy over the node pieces we need from the above image
FROM alpine:3.6
COPY --from=base /usr/bin/node /usr/bin/
COPY --from=base /usr/lib/libgcc* /usr/lib/libstdc* /usr/lib/
WORKDIR /app
COPY --from=base /app .
COPY . .
CMD ["node", "server.js"]
Why isn't it cleaning up the overlay2 folder? How can I handle this? Is there a solution? Is it a known bug?

The problem is probably logs or other unneeded files in the overlay2 folder, not Docker images.
Try sudo du /var/lib/docker/overlay2
The following worked for me to find the exact culprit folders:
sudo -s
cd /
df -h
cd into the culprit folder(s), then rm *
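If you prefer to see which directories are actually big before deleting anything, a rough sketch (read-only, it deletes nothing) that ranks the overlay2 subdirectories by size:
sudo du -sh /var/lib/docker/overlay2/* | sort -h | tail -n 10    # ten largest layer directories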

A bare docker system prune will not delete:
running containers
tagged images
volumes
The big things it does delete are stopped containers and untagged images. You can pass flags to docker system prune to delete images and volumes; just realize that images may have been built locally and would need to be recreated, and volumes may contain data you want to back up or save (an example with a filter follows the help output below):
$ docker system prune --help
Usage:  docker system prune [OPTIONS]
Remove unused data
Options:
  -a, --all             Remove all unused images not just dangling ones
      --filter filter   Provide filter values (e.g. 'label=<key>=<value>')
  -f, --force           Do not prompt for confirmation
      --volumes         Prune volumes
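For example, a hedged sketch (not something the original answer runs) that uses the until filter so anything created in the last 24 hours is kept while older unused images, stopped containers, and networks are pruned; volumes are deliberately left out here:
docker system prune -a --filter "until=24h"    # add -f to skip the confirmation prompt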
What this still doesn't prune are:
running containers
images used by those containers
volumes used by those containers
Other storage associated with a running container includes container logs (docker logs on a container shows these) and filesystem changes made by the container (docker diff shows what has been changed in the container filesystem). To clean up logs, see this answer on how you can configure a default limit for all new containers, and the risks of manually deleting logs in a running container.
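As a minimal sketch of that kind of limit (assuming the default json-file logging driver; note this overwrites any existing /etc/docker/daemon.json and only affects containers created after the restart):
# WARNING: overwrites any existing /etc/docker/daemon.json
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker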
In this case, it looks like there are still files in overlay2 even when all containers are stopped and deleted. First, realize these are just directories of files; you can dig into each one and see what files are there. They are layers of the overlay filesystem, so deleting them can result in a broken environment, since they are referenced from other parts of the docker filesystem. There are several possible causes I can think of for this:
Corruption in the docker engine: perhaps folders were deleted manually outside of docker, resulting in it losing track of various overlay directories in use. Or perhaps, as the hard drive filled up, the engine started to create a layer and lost track of it. Restarting the docker engine may help it resync these folders.
You are looking at a different docker engine. E.g. if you are running rootless containers, those are stored under your user's home directory rather than /var/lib/docker. Or, if you have configured a docker context or set $DOCKER_HOST, you may be running commands against a remote docker engine and not pruning your local directories (a quick check is shown below).
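A hedged way to confirm which engine the client is talking to and where its data root actually is (docker context requires a reasonably recent client):
docker context ls                             # the entry marked with * is the engine your commands go to
docker info --format '{{ .DockerRootDir }}'   # prints the data root, normally /var/lib/docker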
Since you have deleted all containers already, and have no other data to preserve like volumes, it's safe to completely reset docker. This can be done with:
# DANGER, this will reset docker, deleting all containers, images, and volumes
sudo -s
systemctl stop docker
rm -rf /var/lib/docker
systemctl start docker
exit
Importantly, you should not delete individual files and directories from overlay2. See this answer for the issues that occur if you do that. Instead, the above is a complete wipe of the docker folder returning to an initial empty state.

On Docker Desktop for Mac, I was suddenly bumping into this all the time. Resizing the image in ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw to be bigger (the default was something like 54 GB; I upped it to 128 GB) allowed me to proceed, at least for the time being.
The guidance I could find mainly suggested reducing its size if you are running up against the size limits of your hard drive, but I have plenty of space there.

Related

VSCode Remote Container - Error: ENOSPC: No space left on device

I have been using the VSCode Remote Container Plugin for some time without issue. But today when I tried to open my project the remote container failed to open with the following error:
Command failed: docker exec -w /home/vscode/.vscode-server/bin/9833dd88 24d0faab /bin/sh -c echo 34503 >.devport
rejected promise not handled within 1 second: Error: ENOSPC: no space left on device, mkdir '/home/vscode/.vscode-server/data/logs/20191209T160810
It looks like the container is out of disk space but I'm not sure how to add more.
Upon further inspection I am a bit confused. When I run df from inside the container, it shows that I have used 60G of disk space, but the size of my root directory is only ~9G.
$ df
Filesystem Size Used Avail Use% Mounted on
overlay 63G 61G 0 100% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda1 63G 61G 0 100% /etc/hosts
tmpfs 7.4G 0 7.4G 0% /proc/acpi
tmpfs 7.4G 0 7.4G 0% /sys/firmware
$ du -h --max-depth=1 /
9.2G /
What is the best way to resolve this issue?
Try docker system prune --all if you don't see any containers or images with docker ps and docker images, but be careful: it removes all cache and unused containers, images, and networks. docker ps -a and docker images -a show you all the containers and images, including ones that are currently not running or not in use.
Check the docs if problem persists: Clean unused docker resources
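Before pruning, it can also help to see where the space actually goes; a small sketch using Docker's built-in accounting:
docker system df       # summary of space used by images, containers, local volumes and build cache
docker system df -v    # verbose breakdown per image, container and volume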
It looks like all docker containers on your system share the same disk space. I found two solutions:
Go into Docker Desktop's settings and increase the amount of disk space available.
Run docker container prune to free disk space being used by stopped containers.
In my case I had a bunch of stopped docker containers from months back taking up all of the disk space allocated to Docker.
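If you want to review those old containers before deleting anything, a sketch (not part of the original answer) that lists stopped containers along with their sizes:
docker ps -a -s --filter status=exited    # -s adds a SIZE column per container
docker container prune                    # then remove the stopped ones you no longer need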

Docker host & no space left on device

I'm using Rancher to manage some EC2 hosts (4 nodes in an auto-scaling group) and to orchestrate containers. Everything works fine.
But, at some point, I have a recurring disk space problem, even if I remove unused and untagged images with this command:
docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi
Like I said, even if I run the command above, my hosts continuously run out of space:
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 12K 7.9G 1% /dev
tmpfs 1.6G 1.4M 1.6G 1% /run
/dev/xvda1 79G 77G 0 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.9G 7.5M 7.9G 1% /run/shm
none 100M 0 100M 0% /run/user
I'm using Rancher 1.1.4 and my hosts are running Docker 1.12.5 under Ubuntu 14.04.4 LTS.
Is there something I'm missing? What are the best practices for configuring Docker on production hosts in order to avoid this problem?
Thank you for your help.
Do you use volume mounts (docker run -v /local/path:/container/path) for your containers' persistent data?
If not, data written by your containers (databases, logs, ...) will keep growing the writable layer of the running container.
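For example, a minimal sketch (the image name, volume name, and path are hypothetical) that keeps such data out of the container's writable layer:
docker volume create appdata
docker run -d --name myapp -v appdata:/var/lib/myapp myorg/myapp:latest   # data lands in the volume, not the container layer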
To see the real size of your currently running containers:
docker ps -s
You can also use tools such as https://www.diskreport.net to analyse your disk space and see what has grown between two measures.

docker image error downloading package

I am trying to build a docker image (using my Dockerfile) and I get a very strange error about insufficient space in the download directory:
Total download size: 208 k
Installed size: 760 k
Downloading packages:
Error downloading packages:
libyaml-0.1.4-11.el7_0.x86_64: Insufficient space in download directory /var/cache/yum/x86_64/7/centos/packages
* free 0
* needed 55 k
PyYAML-3.10-11.el7.x86_64: Insufficient space in download directory /var/cache/yum/x86_64/7/centos/packages
* free 0
* needed 153 k
The command '/bin/sh -c yum -y install python-yaml' returned a non-zero code: 1
I am using a CentOS 7 base image.
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 106M 1.5G 7% /run
/dev/sda1 118G 112G 0 100% /
tmpfs 7.9G 648K 7.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sdb1 92G 206M 87G 1% /boot
tmpfs 1.6G 56K 1.6G 1% /run/user/1001
The following docker command was the trick to fix the underlying error for me:
$ docker rm $(docker ps -qa)
For me, running:
docker image prune
did the trick. It turned out I had lots of garbage (a.k.a. dangling) images taking up space. The prune docs can be found here.
Check and make sure the /var directory has sufficient space, as that is where Docker stores its images.
To do so: df -h /var
If it is 100% full, you might want to clear up some space.
Use docker ps -a to list all of the containers (including stopped and exited ones), then docker rm {CONTAINER_ID} to free up some space.
Alternatively, run docker images to list images and docker rmi {IMAGE_ID} to remove the unused ones.
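Under the assumption that you only want to drop exited containers and dangling images rather than everything, a hedged sketch of the same cleanup in two commands:
docker rm $(docker ps -aq -f status=exited)         # remove only exited containers
docker rmi $(docker images -q -f dangling=true)     # remove only dangling (untagged) images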
First, you need to check where the space is actually being used:
du -h /var | grep -E '^[0-9.]*[MG]'
If any specific directory is using too much space, work out how to remove its contents safely, and do that.
Have you ever left Docker containers or images around without removing them?
That is very often the root cause of insufficient-space issues.
Check it with the following command:
du -hs /var/lib/docker
If that directory is using too much space, the Docker commands below will help.
Remove all containers:
docker rm $(docker ps -qa)
Remove all Docker images:
docker rmi $(docker image ls -qa)
But the cause may not be Docker at all; it could be large log files, an RPM cache, or other big files. In that case, find those files and remove them instead.
I hope this helps.

No Space on CentOS with Docker

I was using Docker on my CentOS machine for a while and had a lot of images and containers (around 4 GB). My machine has 8 GB of storage, and I kept getting an error from devicemapper whenever I tried to remove a Docker container or image with docker rm or docker rmi. The error was: Error response from daemon: Driver devicemapper failed to remove root filesystem. So I stopped the Docker service and tried restarting it, but that failed due to devicemapper. After that I uninstalled Docker and removed all images, containers, and volumes by running the following command: rm -rf /var/lib/docker. However, after running that, it does not seem like any space was freed up:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 7.7G 346M 96% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 193M 1.6G 11% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
tmpfs 361M 0 361M 0% /run/user/1000
$ du -ch -d 1 | sort -hr
3.6G total
3.6G .
1.7G ./usr
903M ./var
433M ./home
228M ./opt
193M ./run
118M ./boot
17M ./etc
6.4M ./tmp
4.0K ./root
0 ./sys
0 ./srv
0 ./proc
0 ./mnt
0 ./media
0 ./dev
Why does df tell me I am using 7.7G whereas du tells me I am using 3.6G? The figure that du gives (3.6G) should be the correct one since I deleted everything in /var/lib/docker.
I had a similar issue. This ticket was helpful.
Depending on the file system you are using, you will want to either use fstrim or zerofree, or add the drive to another machine and use xfs_repair.
If your file system is xfs and you used xfs_repair, then after running that command there should be a lost+found directory at the root of the drive containing all the data that was taking up space but was unreachable.
You can then delete that and it will actually be reflected in du.
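As a sketch of the fstrim route mentioned above (assuming the filesystem and the underlying device support discard, e.g. a thin-provisioned or SSD-backed volume):
sudo fstrim -av    # trim all mounted filesystems that support discard; -v reports how much was trimmed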

Docker error: System Error: no space left on device

I am getting this error whenever I try to build my image. I searched on the internet and found some links, but none of them solved my problem.
Error:
System error: write /cgroup/docker/5dba72d862bf8171d36aa022d1929455af6589af9fb7ba6220b01842c7a7dee6/cgroup.procs: no space left on device.
This is the output of 'df -h':
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 56G 24G 29G 46% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 1.9G 4.0K 1.9G 1% /dev
tmpfs 385M 1.3M 384M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1.9G 324M 1.6G 17% /run/shm
none 100M 20K 100M 1% /run/user
/dev/sda8 761M 3.4M 758M 1% /boot/efi
/dev/sda3 55G 49G 3.8G 93% /home
/dev/sda4 275G 48G 213G 19% /opt/drive2
/dev/sda5 184G 74G 102G 43% /opt/drive3
/dev/sda6 215G 157G 48G 77% /opt/drive4
/dev/sda7 129G 23G 99G 19% /opt/drive1
You have no space on your host machine, where you are trying to build the image. First of all, you have to free some space. If you are using a Debian-based system, this will help you:
df -h (shows free space)
apt-get autoclean
apt-get autoremove
BleachBit (CCleaner-like for Linux).
Moreover, you can optimize your image builds by following these Dockerfile tips (a minimal example is sketched below).
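As a sketch of one such tip, using the instruction that failed in the question (the package is just illustrative): install and clean the yum cache in the same RUN so the cache never persists in a layer:
FROM centos:7
RUN yum -y install python-yaml && \
    yum clean all && \
    rm -rf /var/cache/yum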
This is a common problem if you are using docker-machine, because the disk in the VM is smaller. Try cleaning up your Docker resources using these commands:
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q -f dangling=true)
The reason may be the Docker devicemapper Base Device Size (see the check sketched below).
Please consider this or my answer on Stack Overflow.
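A hedged way to check the current limit, and (for the devicemapper storage driver only) the daemon option that raises it for newly created storage:
docker info | grep -i "base device size"    # only shown when the devicemapper driver is in use
# /etc/docker/daemon.json (devicemapper only), then restart the daemon:
# { "storage-driver": "devicemapper", "storage-opts": [ "dm.basesize=20G" ] }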
