Docker devicemapper (direct-lvm) metadata accumulates a huge number of folders - docker

Hi, I have a 3-node Docker Swarm setup (1 manager, 2 worker nodes) on CentOS 7, using the devicemapper storage driver on all three VMs. The problem is that after some time the folders
/var/lib/docker/devicemapper/metadata and ../mnt
contain so many files and folders that they cannot even be listed with ls. docker system prune and docker volume prune do not clear anything.
Any ideas why this happens and how to fix it?
Server Version: 17.06.0-ce
Storage Driver: devicemapper
Pool Name: docker-thinpool
Pool Blocksize: 524.3kB
Base Device Size: 10.74GB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 3.055GB
Data Space Total: 20.4GB
Data Space Available: 17.34GB
Metadata Space Used: 120.5MB
Metadata Space Total: 213.9MB
Metadata Space Available: 93.4MB
Thin Pool Minimum Free Space: 2.039GB
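A hedged diagnostic sketch (paths as in the question): plain ls reads and sorts every entry before printing, which is why it appears to hang on huge directories, while ls -f streams entries unsorted; dmsetup can then show whether thin devices are piling up in device-mapper itself:
# count entries without sorting (plain ls sorts everything in memory first)
ls -f /var/lib/docker/devicemapper/metadata | wc -l
ls -f /var/lib/docker/devicemapper/mnt | wc -l
# compare against the thin devices device-mapper still tracks for Docker
sudo dmsetup ls | grep docker | wc -l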

Related

Why do I get "cannot start container: Error getting container from driver devicemapper: Error mounting: invalid argument" every time I start a container

All my docker containers suddenly exited. I try to restart a container and get the following error:
Error response from daemon: Cannot start container 1559: Error getting container 1559fdbc6c2ab8f12d9efe1a066880ddedb2c424d3a3ed8a1f8a2eb181e1c3ba from driver devicemapper: Error mounting '/dev/mapper/docker-253:2-33554560-1559fdbc6c2ab8f12d9efe1a066880ddedb2c424d3a3ed8a1f8a2eb181e1c3ba' on '/data/docker/devicemapper/mnt/1559fdbc6c2ab8f12d9efe1a066880ddedb2c424d3a3ed8a1f8a2eb181e1c3ba': invalid argument
Here is the docker info output:
[root@localhost ~]# docker info
Containers: 9
Images: 189
Storage Driver: devicemapper
Pool Name: docker-253:2-33554560-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 7.367 GB
Data Space Total: 107.4 GB
Data Space Available: 54.15 GB
Metadata Space Used: 11.58 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.136 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Data loop file: /data/docker/devicemapper/devicemapper/data
Metadata loop file: /data/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-10-14)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 1
Total Memory: 3.703 GiB
Name: localhost.localdomain
ID: UBGK:AERA:AYMM:XB6P:XCOG:MUGB:NKZM:GSIY:AH25:UGN7:FUF3:ID44
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
I tried restarting Docker and it still didn't work.
I cannot remove all the containers or images.
The disk it uses has run out of space.
My Docker version is 1.8.
What can I do to recover my container?
Thank you for your help!
Can you post your docker info command output?
If your Docker root dir -
/var/lib/docker (default)
- is on the same disk that is running out of space, then you won't be able to pull images or run containers. Try to free up some space; then you may be able to recover your container. BTW, when you say recover - are you writing any data into the container?
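A quick check, sketched under the assumption of the default root dir; note that docker system prune only exists on 1.13+, so on a 1.8 engine the cleanup has to be done piecemeal:
# check free space on the filesystem holding the Docker root dir
df -h /var/lib/docker
# remove stopped containers and dangling images (works on old engines)
docker rm $(docker ps -aq -f status=exited)
docker rmi $(docker images -q -f dangling=true)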
Any specific reason to be on an older version of Docker?
The recommended storage driver for Docker is overlay2, and FYI the devicemapper storage driver is deprecated in the 18.09 release and will be removed in a future release.
Read more about storage drivers here

Data Space Used not matching docker images output

The output for docker info shows that I'm using 515.1 GB of 622.8 GB
$ docker info
.
.
Server Version: 1.13.0
Storage Driver: devicemapper
Pool Name: vg-thinpool
Pool Blocksize: 524.3 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 515.1 GB
Data Space Total: 622.8 GB
Data Space Available: 107.7 GB
Metadata Space Used: 161.5 MB
Metadata Space Total: 6.442 GB
Metadata Space Available: 6.281 GB
Thin Pool Minimum Free Space: 62.28 GB
.
.
.
However, docker images, docker volume ls, and docker ps show that I don't have anything stored locally. Is there any reason this could be happening?
$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker volume ls
DRIVER VOLUME NAME
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
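A hedged cross-check, assuming the direct-lvm setup the vg-thinpool name suggests: ask LVM what the pool itself has allocated, and count the thin devices Docker still tracks; if these agree with docker info while the CLI lists nothing, the space is being held by leftover thin devices rather than by images or volumes:
# lvs reports the thin pool's real usage in the Data% / Meta% columns
sudo lvs -a
# thin devices Docker still tracks, even with no images or containers listed
sudo ls /var/lib/docker/devicemapper/metadata | wc -l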

How to increase the base size of one docker container

I have tried increasing the default size of docker containers using the --storage-opt option, and it works.
But in my case I have one container which needs more disk space than the others, so I am interested in how to configure the base size per container.
docker details:
Server Version: 1.11.2
Storage Driver: devicemapper
Pool Blocksize: 65.54 kB
Base Device Size: 21.47 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.776 GB
Data Space Total: 107.4 GB
Data Space Available: 49.43 GB
Metadata Space Used: 3.228 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.144 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
Thanks in advance.
The runtime constraints on resources for docker run cover only CPU, memory, and I/O, not (yet) disk space; see the sketch below.
You would need device-mapper support (not yet implemented), or at least a container data volume manager like Flocker (available), to try to set limits per container (or at least per container cluster).
But as mentioned in 2013, storage limits are not straightforward, as cgroups don't have a hook for that.
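For contrast, a minimal sketch of the per-container limits docker run did support on an engine of that vintage (all three flags exist in 1.11; the image is just an example):
# CPU, memory and block-IO can be constrained; a disk-space quota cannot
docker run -d \
  --cpu-shares 512 \
  --memory 512m \
  --blkio-weight 500 \
  nginx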
It's an old post, but I would like to share the solution I found.
sed -i '/STORAGE_OPTIONS=/s/$/"--storage-opt dm.basesize=20G"/' /etc/sysconfig/docker-storage
You need to change STORAGE_OPTIONS in /etc/sysconfig/docker-storage to change the base size, then restart Docker for it to take effect.
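A hedged follow-up sketch to apply the change (assuming systemd, as on CentOS/RHEL 7); note that dm.basesize is daemon-wide, affecting every newly created container rather than a single one, and it can only be increased, never shrunk:
# restart the daemon so the new storage option is read
sudo systemctl restart docker
# confirm the daemon picked it up
docker info | grep 'Base Device Size'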

Docker undo rm container

I accidentally removed my container with "docker rm [CONTAINER_ID]".
Is there any way to undo this, or to restore the data in the container?
It was a CentOS image.
$ docker info
Containers: 1
Images: 30
Server Version: 1.9.1
Storage Driver: devicemapper
Pool Name: docker-202:16-1179651-pool
Pool Blocksize: 65.54 kB
Base Device Size: 107.4 GB
Backing Filesystem:
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 8.998 GB
Data Space Total: 107.4 GB
Data Space Available: 43.66 GB
Metadata Space Used: 11.45 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.136 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /srv/docker/devicemapper/devicemapper/data
Metadata loop file: /srv/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.17-22.30.amzn1.x86_64
Operating System: Amazon Linux AMI 2016.03
CPUs: 2
Total Memory: 3.862 GiB
Name: ip-172-31-14-126
ID: QXWS:5VBE:CIZV:NF57:KNTZ:ZOIV:HIZZ:PXKW:44LT:KVFZ:ECQI:FPIX
Username: hogehoge
Registry: https://index.docker.io/v1/
Similar question:
how to retrieve volume from a removed Docker container?
The above question asks how to retrieve data from a "data volume container", but mine was not one; I stored everything inside the CentOS image.
The answer is probably no.
According to a comment I received on the Docker forum:
docker rm is just like rm on the host, there is no going back
Personal story:
I lost 3 weeks of my business data and am in big trouble because of it.
A good lesson to teach myself: never rm unless you are 100% sure what you are doing, and back up your data!
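For the future, a hedged sketch of the standard safeguard: keep data in a named volume, which survives docker rm as long as -v is not passed (the volume and container names here are illustrative):
# data written under /data goes into the volume, not the container layer
docker run -d --name app -v mydata:/data centos sleep infinity
# removing the container (without -v) leaves the volume and its data intact
docker rm -f app
docker volume ls   # 'mydata' is still listed; mount it into a new container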

How to fix server error while pushing an image to the Docker hub?

I've built an image from a Dockerfile, committed it and am now trying to push it to the Hub. The command I run:
sudo docker push lisahelm/mongo:v2
What prints out:
The push refers to a repository [lisahelm/mongo] (len: 1)
7b6d0719b415: Image already exists
975e0be2d43f: Image already exists
ee08822aa3f9: Image already exists
96f2191238d5: Image already exists
07f8e8c5e660: Image already exists
37bea4ee0c81: Image already exists
a82efea989f9: Image already exists
e9e06b06e14c: Image already exists
FATA[0015] Error pushing to registry: Server error: 400 trying to push lisahelm/mongo:v2 manifest
Info I've seen people ask for in other questions:
Docker version 1.6.0, build 4749651/1.6.0
Containers: 4
Images: 22
Storage Driver: devicemapper
Pool Name: docker-202:1-263695-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 2.222 GB
Data Space Total: 107.4 GB
Data Space Available: 4.88 GB
Metadata Space Used: 2.58 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Udev Sync Supported: true
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.89-RHEL6 (2014-09-01)
Execution Driver: native-0.2
Kernel Version: 3.14.35-28.38.amzn1.x86_64
Operating System: Amazon Linux AMI 2015.03
CPUs: 1
Total Memory: 1.957 GiB
Name: ip-172-31-11-134
ID: WVRW:3RM3:L4KL:YABF:JGSK:S6ML:U2CH:5Z5G:67CY:24BF:3DIE:E6TA
Username: lisahelm
Registry: https://index.docker.io/v1/
You are using an Amazon AMI running a RHEL-based system.
The Docker build shipped by Amazon on these machines is defective.
The solution is either to switch to Ubuntu, or to update Docker manually on the instance.
You can read here for context and an answer from an AMZ dev: https://github.com/docker/docker/issues/13143#issuecomment-102522728
And here for another user coming up with an update solution: https://github.com/docker/distribution/issues/538#issuecomment-104241554
Finally here for the Amazon support forum about this bug: https://forums.aws.amazon.com/thread.jspa?messageID=622774
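If you stay on the AMI, a hedged sketch of the manual-update route (package name as in the Amazon Linux repos of that era):
# check the installed build - Amazon's 1.6.0 packages exhibited this 400 error
docker version
# pull a newer engine from the Amazon Linux repos and restart it
sudo yum update docker
sudo service docker restart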
