Docker error: System error: no space left on device - docker

I am getting this error whenever I try to build my image. I searched the internet and found some links, but none of them solved my problem.
Error:
System error: write /cgroup/docker/5dba72d862bf8171d36aa022d1929455af6589af9fb7ba6220b01842c7a7dee6/cgroup.procs: no space left on device.
This is the output of 'df -h':
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 56G 24G 29G 46% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 1.9G 4.0K 1.9G 1% /dev
tmpfs 385M 1.3M 384M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1.9G 324M 1.6G 17% /run/shm
none 100M 20K 100M 1% /run/user
/dev/sda8 761M 3.4M 758M 1% /boot/efi
/dev/sda3 55G 49G 3.8G 93% /home
/dev/sda4 275G 48G 213G 19% /opt/drive2
/dev/sda5 184G 74G 102G 43% /opt/drive3
/dev/sda6 215G 157G 48G 77% /opt/drive4
/dev/sda7 129G 23G 99G 19% /opt/drive1

You have no space on your host machine, where you are trying to build the image. First of all, you have to free some space. If you are using a Debian-based system, this will help you:
df -h (show free space)
apt-get autoclean
apt-get autoremove
BleachBit (CCleaner-like for Linux).
Moreover, you can optimize your image builds by following these Dockerfile tips.
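It can also help to see how much of the disk Docker itself is occupying; a minimal check, assuming Docker 1.13 or newer where docker system df is available:
docker system df      # summary of space used by images, containers, and local volumes
docker system df -v   # verbose per-image and per-container breakdown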

This is a common problem if you are using docker-machine. The disk inside the VM is smaller. Try cleaning up your Docker installation with these commands:
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q -f dangling=true)
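You can also check how full the VM's disk really is before deciding what to delete; a hedged example, assuming the machine is named "default" (adjust to whatever docker-machine ls shows):
# Run df inside the docker-machine VM, where the images actually live
docker-machine ssh default df -h /var/lib/docker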

The reason may be the Docker devicemapper Base Device Size.
Please consider this or my answer on Stack Overflow.
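If that is the cause, here is a hedged sketch of how to inspect and raise the base size; the 20G value is only an example, it applies only when the daemon uses the devicemapper storage driver, and existing images and containers must be recreated afterwards:
# Show the current base device size
docker info | grep "Base Device Size"
# Start the daemon with a larger base size for newly created containers
dockerd --storage-opt dm.basesize=20G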

Related

How to increase the size of /dev/root on a docker image on a Raspberry Pi

I'm using the https://github.com/lukechilds/dockerpi project to recreate a Raspberry Pi locally with Docker. However, the default disk space is very small and I quickly fill it up:
pi@raspberrypi:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.8G 1.2G 533M 69% /
devtmpfs 124M 0 124M 0% /dev
tmpfs 124M 0 124M 0% /dev/shm
tmpfs 124M 1.9M 122M 2% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 124M 0 124M 0% /sys/fs/cgroup
/dev/sda1 253M 52M 201M 21% /boot
tmpfs 25M 0 25M 0% /run/user/1000
How can I give more space to the RPi? I saw this issue, but I don't understand how that solution is implemented, or whether it is relevant.
To increase the disk size, you need to extend the partition of the qemu disk used inside the container.
Start the container so that it unzips the rootfs and mounts it to a host path:
docker run --rm -v $HOME/.dockerpi:/sdcard -it lukechilds/dockerpi
When the virtualized Raspberry Pi is up, you can stop it by running sudo poweroff from its prompt.
Then you have the qemu disk in $HOME/.dockerpi/filesystem.img.
It can be extended with:
sudo qemu-img resize -f raw $HOME/.dockerpi/filesystem.img 10G
startsector=$(fdisk -u -l $HOME/.dockerpi/filesystem.img | grep filesystem.img2 | awk '{print $2}')
sudo parted $HOME/.dockerpi/filesystem.img --script rm 2
sudo parted $HOME/.dockerpi/filesystem.img --script "mkpart primary ext2 ${startsector}s -1s"
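Before restarting, you can optionally confirm that the second partition now spans the resized image, reusing the same fdisk invocation as above:
# filesystem.img2 should now end at the last sector of the 10G image
fdisk -u -l $HOME/.dockerpi/filesystem.img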
Restart the Raspberry Pi, which will use the resized qemu disk, with:
docker run --rm -v $HOME/.dockerpi:/sdcard -it lukechilds/dockerpi
From the guest prompt, you can extend the root filesystem with:
sudo resize2fs /dev/sda2 8G
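If you would rather use all of the newly available space, resize2fs can also be run without an explicit size, in which case it grows the filesystem to fill the whole partition (a hedged variant of the command above):
sudo resize2fs /dev/sda2   # grow to the full size of the resized partition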
Finally, the root filesystem is larger.
After this, df -h gives:
Filesystem Size Used Avail Use% Mounted on
/dev/root 7.9G 1.2G 6.4G 16% /
devtmpfs 124M 0 124M 0% /dev
tmpfs 124M 0 124M 0% /dev/shm
tmpfs 124M 1.9M 122M 2% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 124M 0 124M 0% /sys/fs/cgroup
/dev/sda1 253M 52M 201M 21% /boot
tmpfs 25M 0 25M 0% /run/user/1000
If the solution is indeed to resize /dev/root, you can follow this thread, which concludes:
Using the gparted live distro, I struggled for a little while until I realised that the /dev/root partition was within another partition.
Resizing the latter, then the former, everything works. I just gave the /dev/root partition everything remaining on the disk, the other partitions I left at their original sizes.

Give Docker more disk space for containers

I have a question. Our Docker server was out of space for its containers, so I gave it a bigger disk, from 500GB to 1TB (it's a VM). Ubuntu sees this correctly. If I run the command vgs I get this output:
VG #PV #LV #SN Attr VSize VFree
Docker-vg 1 2 0 wz--n- 999.52g 500.00g
But Docker still thinks it's out of space. I have rebooted the Docker VM, but it still thinks it's out of space. If I use the df -h command this is the output:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 792M 8.6M 783M 2% /run
/dev/mapper/Docker--vg-root 490G 465G 0 100% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvda1 472M 468M 0 100% /boot
As you can see, Docker-vg still thinks it's 490GB.
I don't know where to look. Can someone help me?
You still need to extend your logical volume and resize the filesystem to use the larger logical volume.
First, extend the logical volume with lvextend. I'm not sure whether it accepts the /dev/mapper path; if not, you can run lvdisplay to list your logical volumes:
lvextend -l +100%FREE /dev/mapper/Docker--vg-root
With ext*fs you can then run a resize:
resize2fs /dev/mapper/Docker--vg-root
The command is similar for xfs:
xfs_growfs /dev/mapper/Docker--vg-root
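As a hedged shortcut on reasonably recent LVM versions, lvextend can grow the filesystem in the same step with -r; note that /dev/mapper/Docker--vg-root and /dev/Docker-vg/root refer to the same logical volume:
# Extend the LV over all free extents and resize the ext4/xfs filesystem in one go
lvextend -r -l +100%FREE /dev/Docker-vg/root
df -h /   # verify the root filesystem now reports the larger size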
With "docker system prune" you clean some space removing old images and other stuff.
If you want your container to be aware of the disk size change, you have to:
docker rmi <image>
docker pull <image>

Docker host & no space left on device

I'm using Rancher to manage some EC2 hosts (4 nodes in an auto-scaling group) and to orchestrate containers. Everything works fine.
But at some point I have a recurring disk-space problem, even though I remove unused and untagged images with this command:
docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi
As I said, even though I run the command above, my hosts continuously run out of space:
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 12K 7.9G 1% /dev
tmpfs 1.6G 1.4M 1.6G 1% /run
/dev/xvda1 79G 77G 0 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.9G 7.5M 7.9G 1% /run/shm
none 100M 0 100M 0% /run/user
I'm using Rancher 1.1.4 and my hosts are running Docker 1.12.5 under Ubuntu 14.04.4 LTS.
Is there something I'm missing? What are the best practices for configuring Docker on production hosts in order to avoid this problem?
Thank you for your help.
Do you use volume mounts (docker run -v /local/path:/container/path) for your containers' persistent data?
If not, data written by your containers (databases, logs, ...) will keep growing the writable layer of the running container.
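For example, a hedged sketch of keeping a database's data on the host instead of in the container's writable layer (the image tag, password, and host path are purely illustrative):
# MySQL writes its data to the host path, so the container's writable layer stops growing
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=change-me \
  -v /opt/data/mysql:/var/lib/mysql \
  mysql:5.7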
To see the real size of your current running containers:
docker ps -s
You can also use tools such as https://www.diskreport.net to analyse your disk space and see what has grown between two measurements.

docker image error downloading package

I am trying to build a docker image (using my Dockerfile) and I get a very strange error about insufficient space in the download directory:
Total download size: 208 k
Installed size: 760 k
Downloading packages:
Error downloading packages:
libyaml-0.1.4-11.el7_0.x86_64: Insufficient space in download directory /var/cache/yum/x86_64/7/centos/packages
* free 0
* needed 55 k
PyYAML-3.10-11.el7.x86_64: Insufficient space in download directory /var/cache/yum/x86_64/7/centos/packages
* free 0
* needed 153 k
The command '/bin/sh -c yum -y install python-yaml' returned a non-zero code: 1
I am using a centos7 base image.
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 106M 1.5G 7% /run
/dev/sda1 118G 112G 0 100% /
tmpfs 7.9G 648K 7.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sdb1 92G 206M 87G 1% /boot
tmpfs 1.6G 56K 1.6G 1% /run/user/1001
The following docker command fixed the underlying error for me:
$ docker rm $(docker ps -qa)
For me, running:
docker image prune
did the trick. It turned out I had lots of garbage (a.k.a. dangling) images taking up space. The prune docs can be found here.
Check and make sure the /var directory has sufficient space, as that is where Docker stores its images.
To do so: df -h /var
If it is 100% full you might want to clear up some space.
docker ps -a - to list all of the containers (including stopped and exited ones). Use docker rm {CONTAINER_ID} to free up some space.
Alternatively, run docker images to list images and docker rmi {IMAGE_ID} to remove the unused ones.
First, you need to check where the space is being used.
du -h /var | grep -E '^[0-9.]*[M|G]'
If any specific directory is using too much space, check how to remove its contents properly, and then do it.
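An equivalent, hedged way to rank the largest directories, assuming GNU du and sort:
# Stay on the /var filesystem (-x), two directory levels deep, biggest entries last
du -xh --max-depth=2 /var | sort -h | tail -20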
Have you ever left old Docker containers or images unremoved?
That is very often the root cause of insufficient-space issues.
Check it with the following command.
du -hs /var/lib/docker
If the directory is taking up too much space, the Docker commands below will help.
Remove all containers:
docker rm $(docker ps -qa)
Remove all Docker images:
docker rmi $(docker image ls -qa)
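On Docker 1.13 and newer, a single hedged shortcut covers both steps (prune flags vary slightly between versions):
# Remove stopped containers, unused networks, dangling images, and (with -a) all unused images
docker system prune -a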
But the cause may not be Docker at all; it could be big log files, an rpm cache, or other large files. In that case, you can remove those files.
I hope this helps.

No Space on CentOS with Docker

I was using Docker on my CentOS machine for a while and had a lot of images and containers (around 4GB). My machine has 8GB of storage and I kept getting an error from devicemapper whenever I tried to remove a Docker container or image with docker rm or docker rmi. The error was: Error response from daemon: Driver devicemapper failed to remove root filesystem. So I stopped the Docker service and tried restarting it, but that failed due to devicemapper. After that I uninstalled Docker and removed all images, containers, and volumes by running the following command: rm -rf /var/lib/docker. However, after running it, it does not seem like any space was freed up:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 7.7G 346M 96% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 193M 1.6G 11% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
tmpfs 361M 0 361M 0% /run/user/1000
$ du -ch -d 1 | sort -hr
3.6G total
3.6G .
1.7G ./usr
903M ./var
433M ./home
228M ./opt
193M ./run
118M ./boot
17M ./etc
6.4M ./tmp
4.0K ./root
0 ./sys
0 ./srv
0 ./proc
0 ./mnt
0 ./media
0 ./dev
Why does df tell me I am using 7.7G whereas du tells me I am using 3.6G? The figure that du gives (3.6G) should be the correct one since I deleted everything in /var/lib/docker.
I had a similar issue. This ticket was helpful.
Depending on the file system you are using, you will want to either run fstrim or zerofree, or attach the drive to another machine and use xfs_repair.
If your file system is xfs and you used xfs_repair, then after running that command there should be a lost+found directory at the root of the drive containing all the data that was taking up space but was unreachable.
You can then delete that and it will actually be reflected in du.
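For example, if the filesystem and underlying device support discard, a hedged way to hand the freed blocks back (run it against whichever mount point holds /var/lib/docker):
# Report how many bytes were trimmed on the root filesystem
sudo fstrim -v /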
