My hard disk is getting full and I suspect that my Docker containers may not have enough disk space.
How can I check that the system allocated enough free disk space for Docker?
My OS is OSX.
Docker for Mac stores all its data in a VM that uses a thin-provisioned qcow2 disk image. This image grows with usage but never automatically shrinks (which may be fixed in 1.13).
The image file is stored in your home directory's Library area:
mac$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux
mac$ ls -l Docker.qcow2
-rw-r--r-- 1 user staff 46671265792 31 Jan 22:24 Docker.qcow2
Inside the VM
Attach to the VM's tty with screen (brew install screen if you don't have it)
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
If you get a login prompt, the user is root with no password. Otherwise, just press Enter. Then you can run df commands on the Linux VM.
/ # df -h /var/lib/docker
Filesystem Size Used Available Use% Mounted on
/dev/vda2 59.0G 14.9G 41.1G 27% /var
Note that this matches the df output inside a container (when using aufs or overlay):
mac$ docker run debian df -h
Filesystem Size Used Avail Use% Mounted on
overlay 60G 15G 42G 27% /
tmpfs 1.5G 0 1.5G 0% /dev
tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup
/dev/vda2 60G 15G 42G 27% /etc/hosts
shm 64M 0 64M 0% /dev/shm
Also note that while the VM is only using 14.9G of the 60G, the size of the image file on disk is 43G:
mac$ du -h Docker.qcow2
43G Docker.qcow2
The easiest way to fix the size is to back up any volume data, "Reset" Docker from the Preferences menu, and start again. It appears 1.13 has resolved the issue and will run a compaction on shutdown.
screen notes
Exit the screen session with ctrl-a then d
The Docker VM's tty gets messed up after I exit screen, and I have to restart Docker to get a functional terminal back for a new session.
Related
I have a server where I run some containers with volumes. All my volumes are in /var/lib/docker/volumes/ because Docker manages them. I use docker-compose to start my containers.
Recently, I tried to stop one of my containers but it was impossible:
$ docker-compose down
[17849] INTERNAL ERROR: cannot create temporary directory!
So, I checked how the data is mounted on the server:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7,8G 0 7,8G 0% /dev
tmpfs 1,6G 1,9M 1,6G 1% /run
/dev/md3 20G 19G 0 100% /
tmpfs 7,9G 0 7,9G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 7,9G 0 7,9G 0% /sys/fs/cgroup
/dev/md2 487M 147M 311M 33% /boot
/dev/md4 1,8T 1,7G 1,7T 1% /home
tmpfs 1,6G 0 1,6G 0% /run/user/1000
As you can see, / is only 20 GB, so it is full and I can't stop my containers using docker-compose.
My questions are:
Is there a simple solution to increase the available space on /, using /dev/md4?
Or can I move the volumes to another place without losing data?
This part of the Docker daemon is configurable. You could change the data folder with OS-level Linux tricks like a symlink, but I would say it's better to actually configure the Docker daemon to store the data elsewhere!
You can do that by editing the Docker command line (e.g. the systemd unit that starts the Docker daemon), or by changing /etc/docker/daemon.json.
The file should have this content:
{
"data-root": "/path/to/your/docker"
}
If you add a new hard drive, partition, or mount point, you can point data-root at it and Docker will store its data there.
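If you also need to keep your existing images and containers, a minimal sketch of the migration, assuming a systemd-based host and that /path/to/your/docker is the new location, would be:
$ sudo systemctl stop docker
$ sudo rsync -aHP /var/lib/docker/ /path/to/your/docker/   # -a preserves ownership/permissions, -H keeps hard links used by the storage drivers
# edit /etc/docker/daemon.json to set "data-root" as shown above
$ sudo systemctl start docker
$ docker info | grep "Docker Root Dir"   # should now print the new path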
I landed here as I had the very same issue. Even though some sources suggest you could do it with a symbolic link, this will cause all kinds of issues.
Depending on the OS and Docker version, I got malformed images, weird errors, or the Docker daemon refused to start.
Here is a solution, though it seems to vary a little from version to version. For me it was:
Open
/lib/systemd/system/docker.service
And change this line
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to:
ExecStart=/usr/bin/dockerd -g /mnt/WHATEVERYOUR/PARTITIONIS/docker --containerd=/run/containerd/containerd.sock
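After editing the unit file, systemd has to re-read it and the daemon has to be restarted before the new path is used (standard systemd commands):
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker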
I solved it by creating a symbolic link to a partition with more space:
ln -s /scratch/docker_meta /var/lib/docker
/scratch/docker_meta is the folder I have on the bigger partition.
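Note that for this to work cleanly, Docker must be stopped and the existing data moved before the link is created. A rough sketch, assuming a systemd-managed daemon and the same paths as above:
$ sudo systemctl stop docker
$ sudo mv /var/lib/docker /scratch/docker_meta    # move existing data to the bigger partition (target must not already exist)
$ sudo ln -s /scratch/docker_meta /var/lib/docker # the old path now points at the new location
$ sudo systemctl start docker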
Do a bind mount.
For example, to move /docker/volumes to /mnt/large, append this line to /etc/fstab:
/mnt/large /docker/volumes none bind 0 0
And then:
mv /docker/volumes/* /mnt/large/
mount /docker/volumes
Do not forget to chown and chmod /mnt/large first if you are running Docker as a non-root user.
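The permission fix might look something like this; UID/GID 1000 is just an example and should match whatever user your containers run as:
$ sudo chown -R 1000:1000 /mnt/large   # give the non-root docker user ownership of the new location
$ findmnt /docker/volumes              # confirm the bind mount is active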
I have been using the VS Code Remote Containers extension for some time without issue. But today when I tried to open my project, the remote container failed to open with the following error:
Command failed: docker exec -w /home/vscode/.vscode-server/bin/9833dd88 24d0faab /bin/sh -c echo 34503 >.devport
rejected promise not handled within 1 second: Error: ENOSPC: no space left on device, mkdir '/home/vscode/.vscode-server/data/logs/20191209T160810
It looks like the container is out of disk space, but I'm not sure how to add more.
Upon further inspection I am a bit confused. When I run df from inside the container, it shows that I have used 60G of disk space, but the size of my root directory is only ~9G.
$ df
Filesystem Size Used Avail Use% Mounted on
overlay 63G 61G 0 100% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda1 63G 61G 0 100% /etc/hosts
tmpfs 7.4G 0 7.4G 0% /proc/acpi
tmpfs 7.4G 0 7.4G 0% /sys/firmware
$ du -h --max-depth=1 /
9.2G /
What is the best way to resolve this issue?
Try docker system prune --all if you don't see any containers or images with docker ps and docker images, but be careful: it removes all build cache and unused containers, images, and networks. docker ps -a and docker images -a show you all the containers and images, including ones that are currently not running or not in use.
Check the docs if the problem persists: Clean unused Docker resources
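It can also help to see where the space is actually going before removing anything. docker system df is a standard Docker CLI command that summarizes usage per resource type:
$ docker system df      # space used by images, containers, local volumes and build cache
$ docker system df -v   # verbose per-image / per-container breakdown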
It looks like all Docker containers on your system share the same disk space. I found two solutions:
Go into Docker Desktop's settings and increase the amount of disk space available.
Run docker container prune to free disk space being used by stopped containers.
In my case I had a bunch of stopped Docker containers from months back taking up all of the disk space allocated to Docker.
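To review what would be removed first, something like this should work (standard docker ps filters):
$ docker ps -a --filter "status=exited"   # list stopped containers
$ docker container prune                  # remove all stopped containers (asks for confirmation)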
I changed Docker's storage base directory from /var/lib/docker to /home/docker by changing DOCKER_OPTS in /etc/default/docker, as explained in this other question. After that, I rsynced the old /var/lib/docker to the new place.
Here is my Docker configuration file:
# Docker Upstart and SysVinit configuration file
# ....
# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -g /home/docker"
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"
Everything was working fine after I rebooted. However, I started getting a "no space left on device" error in my containers from time to time. When this error happens, if my container is up, I can't even do a mkdir. If the container is down and I try to start it, I get the following:
Error response from daemon: rpc error: code = 2 desc = "oci runtime
error: could not synchronise with container process: can't create
pivot_root dir , error mkdir .pivot_root: no space left on device"
However, I have space:
Filesystem Size Used Avail Use% Mounted on
udev 32G 4,0K 32G 1% /dev
tmpfs 6,3G 1,6M 6,3G 1% /run
/dev/sda1 92G 56G 32G 64% /
none 4,0K 0 4,0K 0% /sys/fs/cgroup
none 5,0M 0 5,0M 0% /run/lock
none 32G 472K 32G 1% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda5 1,6T 790G 762G 51% /home
I suspect that perhaps I haven't done the storage migration correctly. Does anyone know what might be happening?
Running out of disk space can also mean running out of inodes. You can check those with df -i. This post on Unix.SE walks you through the steps required to increase the number of inodes available. Short of that, you can delete files to free up inodes.
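For example, to check inode usage on the filesystem that holds Docker's data (adjust the path if you moved the data directory):
$ df -i /var/lib/docker   # an IUse% near 100% means you are out of inodes, not bytes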
You can try cleaning up images that aren't in use. This fixed the problem for me:
docker images -aq -f 'dangling=true' | xargs docker rmi
The same goes for volumes. This will remove dangling volumes:
docker volume ls -q -f 'dangling=true' | xargs docker volume rm
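On Docker 1.13 and later, the built-in prune subcommands do the same job:
$ docker image prune    # remove dangling images
$ docker volume prune   # remove local volumes not used by any container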
https://success.docker.com/article/error-message-no-space-left-on-device-in-default-machine
On one system, the disk size of the Docker container is like this:
root#b65c6518f583:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-202764498-b65c6518f5837667e7021971a97aebd382dddca6b3ecf4167472ebe17f16aace 99G 268M 94G 1% /
tmpfs 5.8G 0 5.8G 0% /dev
shm 64M 0 64M 0% /dev/shm
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 5.8G 96K 5.8G 1% /run/secrets
/dev/mapper/rhel-root 50G 20G 31G 40% /etc/hosts
We can see the rootfs size is 99G. On another system, the disk size of the Docker container looks like this:
53ac740bd09b:/ # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-8:8-4202821-2a6f330df1b7b37d55a96b098863f81e4a7f1c39fcca3f5fa03b57998cb33427 9.8G 4.4G 4.9G 48% /
tmpfs 126G 0 126G 0% /dev
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/sda8 97G 11G 82G 12% /data
shm 64M 0 64M 0% /dev/shm
The rootfs size is only 9.8G.
How is the rootfs size of a Docker container decided? How can I modify it?
The default size for a container with the devicemapper storage driver is 10 GB, and you can change it.
Here is an excerpt from:
https://docs.docker.com/engine/reference/commandline/daemon/
dm.basesize
Specifies the size to use when creating the base device, which limits
the size of images and containers. The default value is 10G. Note,
thin devices are inherently “sparse”, so a 10G device which is mostly
empty doesn’t use 10 GB of space on the pool. However, the filesystem
will use more space for the empty case the larger the device is.
The base device size can be increased at daemon restart which will
allow all future images and containers (based on those new images) to
be of the new base device size.
Examples:
$ docker daemon --storage-opt dm.basesize=50G
This will increase the base device size to 50G. The Docker daemon will
throw an error if existing base device size is larger than 50G. A user
can use this option to expand the base device size however shrinking
is not permitted.
This value affects the system-wide “base” empty filesystem that may
already be initialized and inherited by pulled images. Typically, a
change to this value requires additional steps to take effect:
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start
Example use:
$ docker daemon --storage-opt dm.basesize=20G
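The same option can also be set persistently in /etc/docker/daemon.json instead of on the command line. A sketch, relevant only when the devicemapper storage driver is in use:
{
"storage-driver": "devicemapper",
"storage-opts": ["dm.basesize=20G"]
}
Restart the daemon after changing it, and remember the note above about wiping /var/lib/docker for the change to apply to the base filesystem.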
If you would like the Docker container to be larger than the default size, this tutorial accomplishes the task. I know it works for CentOS 7. With CentOS 6 you only need to change xfs_growfs to resize2fs.
I was using Docker 1.7.
I'm playing with volume containers on boot2docker to run Docker on MacOS X.
boot2docker version
Client version: v1.2.0
Git commit: a551732
I'm trying to perform the backup/restore process which is mentioned in Docker's documentation.
I'm trying to backup a MySQL database which is over 2 GB. When I run the backup command:
docker run --volumes-from data_volume -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql
...it fails with this error:
tar: /backup/backup.tar: Wrote only 4096 of 10240 bytes
tar: Error is not recoverable: exiting now
It seems tar is out of disk space. So I went into a container and looked at the host bind mount, and its size is only 1.8 GB.
docker run -t -i -v $HOME:/demo ubuntu /bin/bash
root#bb3921a48ba4:/# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 19G 8.3G 9.1G 48% /
none 19G 8.3G 9.1G 48% /
tmpfs 1005M 0 1005M 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/sda1 19G 8.3G 9.1G 48% /etc/hosts
tmpfs 1.8G 1.8G 0 100% /demo
tmpfs 1005M 0 1005M 0% /proc/kcore
You can see that /demo is only 1.8G...
I don't know how to extend this size so that I can make larger backups...
Any ideas? Thanks!
I have a sneaking feeling that you're running out of memory, as 2 GB is the default amount of RAM we allocate.
Rather than writing to a file on a virtual filesystem attached to your OS X box's filesystem, I'd suggest having tar write to STDOUT and piping that to a file on your local box.
i.e.
docker run --rm ubuntu tar cf - /etc > test.tar
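Applied to the backup from the question, that would look something like this (container name and path taken from the question; z added just to compress on the fly):
$ docker run --rm --volumes-from data_volume ubuntu tar czf - /var/lib/mysql > backup.tar.gz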