Is it possible to mount docker volume to a different filesystem? - docker

I'm mounting a folder from my host machine which has about 20GB of mongodb files. Mongo is unable to start because it says there isn't enough space. It appears that the volume is being mounted into tmpfs instead of using the hard disk. Is there any way to change the filesystem for a volume?
docker-compose:
mongo:
  image: mongo:2.4
  volumes:
    - /data/db:/data/db
Docker output
mongo_1 | Wed May 4 20:55:12.591 [initandlisten] ERROR: Insufficient free space for journal files
Filesystem usage inside the container:
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 1025388 241788 783600 24% /data/db
/dev/vda2 61886452 1128580 57591152 2% /data/configdb
/dev/vda2 61886452 1128580 57591152 2% /etc/resolv.conf
/dev/vda2 61886452 1128580 57591152 2% /etc/hostname
/dev/vda2 61886452 1128580 57591152 2% /etc/hosts

This may be an issue with the Docker beta on your Mac.
Running Docker on Centos 7.x, I did:
mkdir ~/data
docker run --rm -it -v ~/data:/data/db mongo:2.4 bash
From inside the container I then checked
# df
And got
...
/dev/mapper/vg_root-root 29939424 6139352 23800072 21% /data/db
...
As you would expect. It's not mapped to tmpfs.
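If you want to confirm where a given bind mount actually lands on your own setup, a quick check from a throwaway container will show the backing filesystem (this assumes the same host path /data/db as in the compose file above):
docker run --rm -v /data/db:/data/db mongo:2.4 df -h /data/db
If the Filesystem column shows tmpfs rather than a block device such as /dev/vda2, the data really is going into memory.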

Related

Digital Ocean: How to docker compose build in a volume

I have purchased a volume for my droplet on DigitalOcean, and when I do docker compose build it takes up space on my current setup, so I am not able to build my images.
My current setup is on
`/dev/vda1 25227048 25191932 18732 100% /`
The full Ubuntu df output is:
udev 2013884 0 2013884 0% /dev
tmpfs 404632 5672 398960 2% /run
/dev/vda1 25227048 25191932 18732 100% /
tmpfs 2023160 0 2023160 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 2023160 0 2023160 0% /sys/fs/cgroup
/dev/vda15 106858 3437 103421 4% /boot/efi
tmpfs 404632 0 404632 0% /run/user/0
/dev/sda 103081248 93980 97728004 1% /mnt/volume_lon1_01
How do I build so that it builds on my new volume?
`/dev/sda 103081248 93980 97728004 1% /mnt/volume_lon1_01`
It now fails with this error:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:01 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
If you want to use your new disk only for docker, you need to mount it in the docker base directory: /var/lib/docker.
But before doing it, you need to:
Stop the docker daemon completely: sudo systemctl stop docker
Sync everything in the current directory to the new disk: sudo rsync -aqxP /var/lib/docker/ /mnt/volume_lon1_01
Delete the old content: sudo rm -rf /var/lib/docker/*
Mount the new volume in the right place: sudo mount /dev/sda /var/lib/docker
Start the docker daemon: sudo systemctl start docker
Check that everything works properly: verify that you still have your volumes listed with docker volume ls, your local images with docker image ls, and that you can start a new container with docker run -ti alpine
Add the new mount definition into /etc/fstab*
You could also change the default directory of docker to use /mnt/volume_lon1_01.
If you want the second option, I recommend you to read https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux
*To modify the fstab, if you are not familiar with it, you need a few pieces of information: the filesystem type used by the partition, the partition's path, and where you want to mount it.
After that, edit the file /etc/fstab and check whether a line already exists with the partition path (/dev/sda for you). If not, add a new line; if yes, just edit it to change the mount path to the new one.
To find the filesystem type of a partition that is already mounted, run mount.
This returns one line per partition, and you need to check the type of the partition.
Example: for rootfs on / type lxfs (rw,noatime), the partition type is lxfs.
If you need to add a new line, it will be something like that:
/dev/sda /var/lib/docker <fs type> defaults 0 0
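Putting the steps above together, the whole sequence looks roughly like this (a sketch only; it assumes /dev/sda is formatted with ext4, so substitute whatever filesystem type mount reports for your volume):
sudo systemctl stop docker
sudo rsync -aqxP /var/lib/docker/ /mnt/volume_lon1_01
sudo rm -rf /var/lib/docker/*
sudo mount /dev/sda /var/lib/docker
echo '/dev/sda /var/lib/docker ext4 defaults 0 0' | sudo tee -a /etc/fstab
sudo systemctl start docker
docker volume ls && docker image ls && docker run -ti --rm alpine true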

docker is full, all inodes are used

I've got a huge problem: all my inodes seem to be used.
I've cleaned all unused volumes
and cleaned all containers and images
with the command docker prune,
but it still seems to stay full:
Filesystem Inodes IUsed IFree IUse% Mounted on
none 3200000 3198742 1258 100% /
tmpfs 873942 16 873926 1% /dev
tmpfs 873942 13 873929 1% /sys/fs/cgroup
/dev/sda1 3200000 3198742 1258 100% /images
shm 873942 1 873941 1% /dev/shm
tmpfs 873942 1 873941 1% /sys/firmware
docker info
Containers: 5
Running: 3
Paused: 0
Stopped: 2
Images: 23
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 53
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 6.668GiB
Name: serveur-1
ID: CW7J:FJAH:S4GR:4CGD:ZRWI:EDBY:AYBX:H2SD:TWZO:STZU:GSCX:TRIC
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
The only thing I think could cause this is a build I'm doing on this machine.
This build runs an npm install with many files.
Can these files stay on the server?
Is there any chance that I have to delete these temporary files?
Are there any dangling volumes left in the system? If you have dangling volumes, they may fill up your disk space.
List all dangling volumes
docker volume ls -q -f dangling=true
Remove all dangling volumes
docker volume rm `docker volume ls -q -f dangling=true`
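On Docker 1.13 and newer, docker volume prune achieves the same thing in a single step:
docker volume prune
It removes every local volume that is not referenced by at least one container, after asking for confirmation.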
Found the error:
this seems to be a Docker 17.06.1-ce bug.
This version does not correctly delete images and keeps files in /var/lib/docker/aufs/mnt/.
So just upgrade to a newer Docker version and this will be fine.
Now df shows me:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 51558236 3821696 45595452 8% /
udev 10240 0 10240 0% /dev
tmpfs 1398308 57696 1340612 5% /run
tmpfs 3495768 0 3495768 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 3495768 0 3495768 0% /sys/fs/cgroup
This is better :)
I had the same problem. Had Jenkins running inside Docker with a volume attached to it. After a few weeks Jenkins told me "npm WARN tar ENOSPC: no space left on device". After some googling I found out that all inodes are taken with sudo df -ih. With sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n I could locate the folder using up all the inodes and it was a certain build with npm. Deleted that folder and now I'm good to go again.
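For reference, that inode hunt boils down to two commands (run the find from the mount point that df -ih reports as full):
sudo df -ih
cd / && sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
The last lines of the second command's output are the top-level directories holding the most files.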
This can also be an effect of a lot of stopped containers, for example if there's a cron job running that uses a container, and the docker run command line used does not include --rm - in that case, every cron invocation will leave a stopped container on the filesystem.
In this case, the output of docker info will show a high number under Server -> Containers -> Stopped.
To cure this:
docker container prune
Add --rm to your docker run command line in the cron job.
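For example, a crontab entry along these lines (nightly-backup is just a placeholder image name) no longer leaves a stopped container behind after every run:
0 3 * * * docker run --rm nightly-backup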
In my case it was the dangling build cache, because removing dangling images did not solve the issue.
This cache can be removed with the following command: docker system prune --all --force, but be careful: maybe you still need some volumes or images.
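If it is only the build cache you want to drop, Docker 18.09 and newer also have a more targeted command that leaves images and volumes alone:
docker builder prune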
In my case, pods with ingress-nginx and ModSecurity active were creating a lot of directories and files on container volumes in the ModSecurity structure (/var/lib/docker/overlay2/...) after more than 80 days of execution.
Restarting the pods removed the problem.
This case can be generalized to other situations where a container's internal storage is not controlled.

"no space left on device" even after removing all containers

While experimenting with Docker and Docker Compose I suddenly ran into "no space left on device" errors. I've tried to remove everything using methods suggested in similar questions, but to no avail.
Things I ran:
$ docker-compose rm -v
$ docker volume rm $(docker volume ls -qf dangling=true)
$ docker rmi $(docker images | grep "^<none>" | awk "{print $3}")
$ docker system prune
$ docker container prune
$ docker rm $(docker stop -t=1 $(docker ps -q))
$ docker rmi -f $(docker images -q)
As far as I'm aware there really shouldn't be anything left now. And it looks that way:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
Same for volumes:
$ docker volume ls
DRIVER VOLUME NAME
And for containers:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Unfortunately, I still get errors like this one:
$ docker-compose up
Pulling adminer (adminer:latest)...
latest: Pulling from library/adminer
90f4dba627d6: Pulling fs layer
19ae35d04742: Pulling fs layer
6d34c9ec1436: Download complete
729ea35b870d: Waiting
bb4802913059: Waiting
51f40f34172f: Waiting
8c152ed10b66: Waiting
8578cddcaa07: Waiting
e68a921e4706: Waiting
c88c5cb37765: Waiting
7e3078f18512: Waiting
42c465c756f0: Waiting
0236c7f70fcb: Waiting
6c063322fbb8: Waiting
ERROR: open /var/lib/docker/tmp/GetImageBlob865563210: no space left on device
Some data about my Docker installation:
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 15
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.10.0-32-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.685GiB
Name: engelbert
ID: UO4E:FFNC:2V25:PNAA:S23T:7WBT:XLY7:O3KU:VBNV:WBSB:G4RS:SNBH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
And my disk info:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 3,9G 0 3,9G 0% /dev
tmpfs 787M 10M 778M 2% /run
/dev/nvme0n1p3 33G 25G 6,3G 80% /
tmpfs 3,9G 46M 3,8G 2% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 3,9G 0 3,9G 0% /sys/fs/cgroup
/dev/loop0 81M 81M 0 100% /snap/core/2462
/dev/loop1 80M 80M 0 100% /snap/core/2312
/dev/nvme0n1p1 596M 51M 546M 9% /boot/efi
/dev/nvme0n1p5 184G 52G 123G 30% /home
tmpfs 787M 12K 787M 1% /run/user/121
tmpfs 787M 24K 787M 1% /run/user/1000
And:
$ df -hi /var/lib/docker
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p3 2,1M 2,0M 68K 97% /
As said, I'm still experimenting, so I'm not sure if I've posted all relevant info - let me know if you need more.
Does anyone have any idea what else could be the issue?
The problem is that /var/lib/docker is on the / filesystem, which is running out of inodes. You can check this by running df -i /var/lib/docker
Since /home's filesystem has sufficient inodes and disk space, moving Docker's working directory there should get it going again.
(Note that this assumes there is nothing valuable in the current Docker install.)
First stop the Docker daemon. On Ubuntu, run
sudo service docker stop
Then move the old /var/lib/docker out of the way:
sudo mv /var/lib/docker /var/lib/docker~
Now create a directory on /home:
sudo mkdir /home/docker
and set the required permissions:
sudo chmod 0711 /home/docker
Link the /var/lib/docker directory to the new working directory:
sudo ln -s /home/docker /var/lib/docker
Then restart the Docker daemon:
sudo service docker start
Then it should work again.
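Once the daemon is back up, a quick sanity check (not part of the original steps) confirms the new location is being used and has free inodes:
df -i /home/docker
docker run --rm hello-world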
For future reference, if you have removed all the containers you can also try docker system prune which will remove dangling images, containers and anything else.
This may not directly answer the question but it can be useful in general if the Dockerfile used to create the image is available.
In particular, make sure to limit the number of layers that will be generated; hence, when writing the Dockerfile, avoid doing this:
RUN apt-get update && apt-get install -y package1
RUN apt-get update && apt-get install -y package2
RUN apt-get update && apt-get install -y package3
and do this instead:
RUN apt-get update && apt-get install -y \
    package1 \
    package2 \
    package3
Doing so drastically reduced the size of the image as well as the inode usage, since fewer layers are generated. This helped address the issue in my case (where the inodes would all get used up).
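Along the same lines, removing the apt package lists inside the same RUN instruction keeps those files out of the layer entirely (a common optimization, not taken from the answer above):
RUN apt-get update && apt-get install -y \
    package1 \
    package2 \
    package3 \
 && rm -rf /var/lib/apt/lists/*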
Make sure to remove any intermediate image that was generated by the failed build to free up space: docker rmi <IMAGE_ID>.
For more tips, you can check out this site about Optimizing Docker images.

'No space left on device' after I changed Docker's storage base directory with DOCKER_OPTIONS

I changed Docker's storage base directory from /var/lib/docker to /home/docker by changing DOCKER_OPTIONS in /etc/default/docker as explained in this other question. After that, I rsynced the old /var/lib/docker to the new place.
Here is my Docker configuration file:
# Docker Upstart and SysVinit configuration file
# ....
# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -g /home/docker"
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"
Everything was working fine after I rebooted. However, I started getting a "no space left on device" in my containers from time to time. When this error happens, if my container is up, I can't even do a mkdir. If the container is down and I try to start it, I get the following:
Error response from daemon: rpc error: code = 2 desc = "oci runtime
error: could not synchronise with container process: can't create
pivot_root dir , error mkdir .pivot_root: no space left on device"
However, I have space:
Filesystem Size Used Avail Use% Mounted on
udev 32G 4,0K 32G 1% /dev
tmpfs 6,3G 1,6M 6,3G 1% /run
/dev/sda1 92G 56G 32G 64% /
none 4,0K 0 4,0K 0% /sys/fs/cgroup
none 5,0M 0 5,0M 0% /run/lock
none 32G 472K 32G 1% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda5 1,6T 790G 762G 51% /home
I'm suspecting that perhaps I haven't done the storage migration correctly. Does someone know what might be happening?
Running out of disk space can also include inode limits. You can check those with df -i. This post on Unix.SE walks you through the steps required to increase the number of inodes available. Short of that, you can delete files to free up the inodes.
You can try cleaning up images that aren't in use. This fixed the problem for me:
docker images -aq -f 'dangling=true' | xargs docker rmi
As well as volumes. This will remove dangling volumes:
docker volume ls -q -f 'dangling=true' | xargs docker volume rm
https://success.docker.com/article/error-message-no-space-left-on-device-in-default-machine
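As a side note, on current Docker releases the supported way to relocate the storage directory is the data-root key in /etc/docker/daemon.json rather than the -g daemon flag, which is deprecated. A minimal example, assuming the same target directory as in the question:
{
  "data-root": "/home/docker"
}
Restart the daemon after changing it, and rsync or move the old directory over first, just as described in the question.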

boot2docker host bind mount volume size limited to 1.8 GB

I'm playing with volume containers on boot2docker to run Docker on MacOS X.
boot2docker version
Client version: v1.2.0
Git commit: a551732
I'm trying to perform the backup/restore process which is mentioned in Docker's documentation.
I'm trying to backup a MySQL database which is over 2 GB. When I run the backup command:
docker run --volumes-from data_volume -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql
...it fails with this error:
tar: /backup/backup.tar: Wrote only 4096 of 10240 bytes
tar: Error is not recoverable: exiting now
It seems tar is out of disk space. So I got into my container and looked at the host bind mount and its size is 1.8 GB.
docker run -t -i -v $HOME:/demo ubuntu /bin/bash
root@bb3921a48ba4:/# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 19G 8.3G 9.1G 48% /
none 19G 8.3G 9.1G 48% /
tmpfs 1005M 0 1005M 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/sda1 19G 8.3G 9.1G 48% /etc/hosts
tmpfs 1.8G 1.8G 0 100% /demo
tmpfs 1005M 0 1005M 0% /proc/kcore
You can see that /demo is only 1.8G...
I don't know how to extend this size so I would be able to make large backups...
Any idea? Thanks!
I have this sneaking feeling that you're running out of memory - as 2 GB is the default amount of RAM we allocate.
Rather than writing to a file mapped to a virtual filesystem that is attached to your OS X box's FS, I'd suggest running tar to output to STDOUT and then piping that to your local box, i.e.:
docker run --rm ubuntu tar cf - /etc > test.tar
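Applied to the backup command from the question, that would look something like this (streaming the archive straight to the host instead of writing it inside the size-limited bind mount):
docker run --rm --volumes-from data_volume ubuntu tar cf - /var/lib/mysql > backup.tar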
