VSCode Remote Container - Error: ENOSPC: No space left on device - docker

I have been using the VSCode Remote Container Plugin for some time without issue. But today when I tried to open my project the remote container failed to open with the following error:
Command failed: docker exec -w /home/vscode/.vscode-server/bin/9833dd88 24d0faab /bin/sh -c echo 34503 >.devport
rejected promise not handled within 1 second: Error: ENOSPC: no space left on device, mkdir '/home/vscode/.vscode-server/data/logs/20191209T160810
It looks like the container is out of disk space but I'm not sure how to add more.
Upon further inspection I am a bit confused. When I run df from inside the container it shows that I have used 60G of disk space, but the size of my root directory is only ~9G.
$ df
Filesystem Size Used Avail Use% Mounted on
overlay 63G 61G 0 100% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda1 63G 61G 0 100% /etc/hosts
tmpfs 7.4G 0 7.4G 0% /proc/acpi
tmpfs 7.4G 0 7.4G 0% /sys/firmware
$ du -h --max-depth=1 /
9.2G /
What is the best way to resolve this issue?

Try docker system prune --all if you don't see any containers or images with docker ps and docker images, but be careful: it removes the build cache as well as all unused containers, images, and networks. docker ps -a and docker images -a show you all containers and images, including ones that are currently not running or not in use.
Check the docs if the problem persists: Clean unused Docker resources
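If you want to see what is actually taking the space before pruning, docker system df (not mentioned above, but part of the same CLI in recent Docker versions) gives a breakdown. A minimal sketch:
$ docker system df          # summary of space used by images, containers, volumes and build cache
$ docker system df -v       # detailed per-image and per-container breakdown
$ docker system prune --all # then reclaim everything unused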

It looks like all docker containers on your system share the same disk space. I found two solutions:
Go into Docker Desktop's settings and increase the amount of disk space available.
Run docker container prune to free disk space being used by stopped containers.
In my case I had a bunch of stopped Docker containers from months back taking up all of the disk space allocated to Docker.
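For example, to see which stopped containers are holding the space before removing them (a sketch):
$ docker ps -a --filter "status=exited"   # stopped containers that still occupy disk space
$ docker container prune                  # removes all stopped containers after a confirmation prompt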

Related

Docker Host on Ubuntu taking all the space on VM

Current Setup:
Machine OS: Windows 7
VMware: VMware Workstation 8.0.2-591240
VM: Ubuntu 16.04 LTS
Docker on Ubuntu: Docker Engine Community version 19.03.5
I have set up Docker containers to run Bamboo agents recently. It keeps running out of space. Can anyone please suggest mounting options or any other tips to keep the disk usage down?
P.S. I had a similar setup before and it was all good until the VM got corrupted and I had to set up a new VM.
root@ubuntu:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 5.8G 0 5.8G 0% /dev
tmpfs 1.2G 113M 1.1G 10% /run
/dev/sda1 12G 12G 0 100% /
tmpfs 5.8G 0 5.8G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 1.2G 0 1.2G 0% /run/user/1000
overlay 12G 12G 0 100% /var/lib/docker/overlay2/e0e78a7d84da9c2a1e1c9f91ee16bc6515d8660e1a2db5e207504469f9e496ae/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/8f3a73cd0b201f4a8a92ded0cfab869441edfbc2199574c225adbf78a2393129/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/3d947960c28e834aa422b5ea16c261739d06bf22fe0f33f9e0248d233f2a84d1/merged
12G is quite a small amount of space if you want to leverage cached images to speed up the building process. So, assuming you don't want to expand the root partition of that VM, what you can do is clean up images after every build, or after every X builds.
For example, I follow the second approach: I run a cleanup job every night on my Jenkins agents to prevent the disk from running out of space.
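A nightly cron entry along these lines can do that (a sketch; the schedule, the until=24h filter and the log path are assumptions to adapt to your builds):
# /etc/cron.d/docker-cleanup
0 3 * * * root docker system prune -af --filter "until=24h" >> /var/log/docker-cleanup.log 2>&1
This removes stopped containers, unused networks and unused images older than 24 hours, so layers from the most recent builds stay cached.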
A default Docker installation stores everything under /var. Cleaning up unused containers will work for a while, but it stops helping once there is nothing left to delete. The lasting fix is to point the daemon's data root at a disk with more available space. You can do that by setting the data-root parameter in your daemon.json file:
{
"data-root": "/new/path/to/docker-data"
}
Once you have done that, restart the Docker daemon (e.g. systemctl restart docker) so the configuration change takes effect. Note that Docker does not move existing data automatically; copy the contents of /var/lib/docker to the new path first if you still need them, as sketched below. This will resolve your space issue permanently. If you don't want your running containers to be killed when the daemon restarts, you must have the live-restore property configured in your daemon.json file. Hope this helps.
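A minimal sketch of the whole move on a systemd-based host (the target path is the one from the example above; adjust it to your disk):
$ sudo systemctl stop docker
$ sudo rsync -aP /var/lib/docker/ /new/path/to/docker-data/   # copy existing images, containers and volumes
$ sudo vi /etc/docker/daemon.json                             # add the data-root entry shown above
$ sudo systemctl start docker
$ docker info --format '{{ .DockerRootDir }}'                 # should now print the new path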

Move docker volume to different partition

I have a server where I run some containers with volumes. All my volumes are in /var/lib/docker/volumes/ because docker is managing it. I use docker-compose to start my containers.
Recently, I tried to stop one of my containers, but it was impossible:
$ docker-compose down
[17849] INTERNAL ERROR: cannot create temporary directory!
So, I checked how the data is mounted on the server:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7,8G 0 7,8G 0% /dev
tmpfs 1,6G 1,9M 1,6G 1% /run
/dev/md3 20G 19G 0 100% /
tmpfs 7,9G 0 7,9G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 7,9G 0 7,9G 0% /sys/fs/cgroup
/dev/md2 487M 147M 311M 33% /boot
/dev/md4 1,8T 1,7G 1,7T 1% /home
tmpfs 1,6G 0 1,6G 0% /run/user/1000
As you can see, / is only 20 GB, so it is full and I can't stop my containers using docker-compose.
My questions are:
Is there a simple solution to increase the available space in /, using /dev/md4?
Or can I move the volumes to another place without losing data?
This part of the Docker daemon is configurable. You could change the data folder with OS-level tricks like a symlink, but I would say it's better to actually configure the Docker daemon to store its data elsewhere!
You can do that by editing the Docker command line (e.g. the systemd unit that starts the Docker daemon), or by changing /etc/docker/daemon.json.
The file should have this content:
{
"data-root": "/path/to/your/docker"
}
If you add a new hard drive, partition, or mount point you can add it here and docker will store its data there.
I landed here as I had the very same issue. Even though some sources suggest you could do it with a symbolic link, this will cause all kinds of issues.
Depending on the OS and Docker version I got malformed images, weird errors, or the Docker daemon refused to start.
Here is a solution, but it seems to vary a little from version to version. For me the solution was:
Open
/lib/systemd/system/docker.service
And change this line
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to:
ExecStart=/usr/bin/dockerd -g /mnt/WHATEVERYOUR/PARTITIONIS/docker --containerd=/run/containerd/containerd.sock
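After editing the unit file, something like this should apply the change (a sketch; note that -g is the legacy flag, newer Docker releases use --data-root for the same thing):
$ sudo systemctl daemon-reload                  # systemd re-reads the edited unit file
$ sudo systemctl restart docker
$ docker info | grep "Docker Root Dir"          # verify Docker now uses the new location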
I solved it by creating a symbolic link to a partition with a bigger size:
ln -s /scratch/docker_meta /var/lib/docker
/scratch/docker_meta is a folder that I have on a bigger partition.
Do a bind mount.
For example, moving /docker/volumes to /mnt/large.
Append this line to /etc/fstab:
/mnt/large /docker/volumes none bind 0 0
And then:
mv /docker/volumes/* /mnt/large/
mount /docker/volumes
Do not forget to chown and chmod /mnt/large first, if you are running Docker as a non-root user.
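The same approach applied to the default volume location from the question above (a sketch; /home/docker-volumes is just an assumed target directory on the large /dev/md4 partition, and the daemon should be stopped while the files are moved):
$ sudo systemctl stop docker
$ sudo mkdir -p /home/docker-volumes
$ sudo mv /var/lib/docker/volumes/* /home/docker-volumes/
Then add this line to /etc/fstab:
/home/docker-volumes /var/lib/docker/volumes none bind 0 0
And finally:
$ sudo mount /var/lib/docker/volumes
$ sudo systemctl start docker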

Docker host & no space left on device

I'm using Rancher to manage some EC2 hosts (4 nodes in an auto-scaling group) and to orchestrate containers. Everything works fine.
But at some point I have a recurring disk-space problem, even though I remove unused and untagged images with this command:
docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi
Like I said, even though I run the command above, my hosts continuously run out of space:
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 12K 7.9G 1% /dev
tmpfs 1.6G 1.4M 1.6G 1% /run
/dev/xvda1 79G 77G 0 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.9G 7.5M 7.9G 1% /run/shm
none 100M 0 100M 0% /run/user
I'm using Rancher 1.1.4 and my hosts are running Docker 1.12.5 under Ubuntu 14.04.4 LTS.
Is there something I'm missing? What are the best practices for configuring Docker on production hosts in order to avoid this problem?
Thank you for your help.
Do you use volume mounts (docker run -v /local/path:/container/path) for your containers' persistent data?
If not, data written by your containers (databases, logs, ...) will keep growing the container's writable layer, as the example below shows.
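A sketch of the same container started without and with a volume mount (the image name and paths are placeholders):
$ docker run -d myimage                              # everything the app writes (logs, a database, uploads) grows the container's writable layer
$ docker run -d -v /srv/app-data:/app/data myimage   # the same writes now land in /srv/app-data on the host instead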
To see the real size of your current running containers :
docker ps -s
You can also use tools such as https://www.diskreport.net to analyse your disk space and see what has grown between two measures.

docker image error downloading package

I am trying to build a docker image (using my Dockerfile) and I get a very strange error about insufficient space in the download directory:
Total download size: 208 k
Installed size: 760 k
Downloading packages:
Error downloading packages:
libyaml-0.1.4-11.el7_0.x86_64: Insufficient space in download directory /var/cache/yum/x86_64/7/centos/packages
* free 0
* needed 55 k
PyYAML-3.10-11.el7.x86_64: Insufficient space in download directory /var/cache/yum/x86_64/7/centos/packages
* free 0
* needed 153 k
The command '/bin/sh -c yum -y install python-yaml' returned a non-zero code: 1
I am using a centos7 base image
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 106M 1.5G 7% /run
/dev/sda1 118G 112G 0 100% /
tmpfs 7.9G 648K 7.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sdb1 92G 206M 87G 1% /boot
tmpfs 1.6G 56K 1.6G 1% /run/user/1001
The following docker command was the trick to fix the underlying error for me:
$ docker rm $(docker ps -qa)
For me, running:
docker image prune
did the trick. It turned out I had lots of garbage (a.k.a. dangling) images taking up space. The prune docs can be found here.
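Note the difference between the two prune variants (a sketch):
$ docker image prune        # removes only dangling images (untagged <none>:<none> layers)
$ docker image prune -a     # also removes any image not referenced by at least one container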
Check and make sure the /var directory has sufficient space as that is where docker stores its images.
To do so: df -h /var
If it is 100% full you might want to clear up some space.
Run docker ps -a to list all of the containers (including stopped and exited ones), then use docker rm {CONTAINER_ID} to free up some space.
Alternatively, run docker images to list the images and remove unused ones with docker rmi {IMAGE_ID}.
You would need to check where the space is being used first.
du -h /var | grep -E '^[0-9.]*[MG]'
If any specific directory is using too much space, check how to clean it up properly, and do it.
Have you ever left old Docker containers or images unremoved?
That is very often the root cause of insufficient-space issues.
Check it with the following command:
du -hs /var/lib/docker
If that directory is using too much space, the Docker commands below should help.
To remove all containers:
docker rm $(docker ps -qa)
To remove all Docker images:
docker rmi $(docker image ls -qa)
But the cause may not be Docker at all; it could be big log files, an rpm cache, or other large files. In that case, remove those files instead, for example as sketched below.
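A sketch for checking the usual non-Docker suspects (the paths and the 200M journal size are assumptions):
$ du -ah /var/log | sort -rh | head -n 10     # the ten biggest items under /var/log
$ sudo journalctl --vacuum-size=200M          # shrink the systemd journal if it is the culprit
$ sudo yum clean all                          # drop the yum/rpm package cache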
I hope this helps.

How is the rootfs size of a docker container decided?

On one system, the disk size of the Docker container is like this:
root@b65c6518f583:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-202764498-b65c6518f5837667e7021971a97aebd382dddca6b3ecf4167472ebe17f16aace 99G 268M 94G 1% /
tmpfs 5.8G 0 5.8G 0% /dev
shm 64M 0 64M 0% /dev/shm
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 5.8G 96K 5.8G 1% /run/secrets
/dev/mapper/rhel-root 50G 20G 31G 40% /etc/hosts
We can see the rootfs size is 99G, while on another system the disk size of the Docker container is like this:
53ac740bd09b:/ # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-8:8-4202821-2a6f330df1b7b37d55a96b098863f81e4a7f1c39fcca3f5fa03b57998cb33427 9.8G 4.4G 4.9G 48% /
tmpfs 126G 0 126G 0% /dev
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/sda8 97G 11G 82G 12% /data
shm 64M 0 64M 0% /dev/shm
The rootfs size is only 9.8G.
How is the rootfs size of a docker container decided? How can I modify the value of rootfs size?
The default size for a container is 10 GB, and you can change it.
Here is an excerpt from:
https://docs.docker.com/engine/reference/commandline/daemon/
dm.basesize
Specifies the size to use when creating the base device, which limits
the size of images and containers. The default value is 10G. Note,
thin devices are inherently “sparse”, so a 10G device which is mostly
empty doesn’t use 10 GB of space on the pool. However, the filesystem
will use more space for the empty case the larger the device is.
The base device size can be increased at daemon restart which will
allow all future images and containers (based on those new images) to
be of the new base device size.
Examples:
$ docker daemon --storage-opt dm.basesize=50G
This will increase the base device size to 50G. The Docker daemon will
throw an error if existing base device size is larger than 50G. A user
can use this option to expand the base device size however shrinking
is not permitted.
This value affects the system-wide “base” empty filesystem that may
already be initialized and inherited by pulled images. Typically, a
change to this value requires additional steps to take effect:
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start
Example use:
$ docker daemon --storage-opt dm.basesize=20G
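On more recent Docker versions the same option can usually be set in /etc/docker/daemon.json instead of on the daemon command line, followed by a daemon restart (a sketch, assuming the devicemapper storage driver is in use):
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.basesize=50G"
]
}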
If you would like the Docker container to be larger than the default size, this tutorial accomplished the task. I know it will work for CentOS 7. With CentOS 6 you only need to change xfs_growfs to resize2fs.
I was using Docker 1.7.
