I have two physical machines with Docker 1.11.3 installed on Ubuntu. Here is the configuration of the machines:
1. Machine 1 - RAM 4 GB, Hard disk - 500 GB, quad core
2. Machine 2 - RAM 8 GB, Hard disk - 1 TB, octa core
I created containers on both machines. When I check the disk space of the individual containers, here are some stats whose cause I am not able to understand.
1. Container on Machine 1
root@e1t2j3k45432:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 37G 27G 8.2G 77% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda9 37G 27G 8.2G 77% /etc/hosts
shm 64M 0 64M 0% /dev/shm
I have installed nothing in the above container, yet it still shows 27 GB used.
How did this container get 37 GB of space?
2. Container on Machine 2
root@0af8ac09b89c:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 184G 11G 164G 6% /
tmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda5 184G 11G 164G 6% /etc/hosts
shm 64M 0 64M 0% /dev/shm
Why is only 11 GB of disk space shown as used in this container, even though this is also an empty container with no packages installed?
How did this container get 184 GB of disk space?
The disk usage reported inside Docker is the host's disk usage of /var/lib/docker (in the example below, my /var/lib/docker is symlinked to /home where I have more disk space):
bash$ df -k /var/lib/docker/.
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/... 720798904 311706176 372455240 46% /home
bash$ docker run --rm -it busybox df -k
Filesystem 1K-blocks Used Available Use% Mounted on
none 720798904 311706268 372455148 46% /
...
So if you run the df command in the same container on different hosts, expect different results.
Related
I use Docker to do something, but the inodes were exhausted after about 15 days of running. The output of df -i inside the container was:
Filesystem Inodes IUsed IFree IUse% Mounted on
overlay 3276800 1965849 1310951 60% /
tmpfs 16428916 17 16428899 1% /dev
tmpfs 16428916 15 16428901 1% /sys/fs/cgroup
shm 16428916 1 16428915 1% /dev/shm
/dev/vda1 3276800 1965849 1310951 60% /etc/hosts
tmpfs 16428916 1 16428915 1% /proc/acpi
tmpfs 16428916 1 16428915 1% /proc/scsi
tmpfs 16428916 1 16428915 1% /sys/firmware
The hosts file content:
127.0.0.1 xxx xxx
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
::1 xxx xxx
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
Why can the hosts file use so many inodes? How do I recover them?
Inside the container, that file is a bind mount. The mount statistics come from the underlying filesystem where the file actually lives, in this case /dev/vda1. They are not statistics for the single file; it's just the way mount reports this data for a bind mount. The same happens for the overlay filesystem, since it is also backed by a different underlying filesystem. Because that backing filesystem is the same for each mount, you see the exact same mount statistics for each.
Therefore you are exhausting the inodes on your host filesystem, most likely the /var/lib/docker filesystem, which, if you have not configured a separate mount, will be the / (root) filesystem. Why you are using so many inodes on that filesystem will require debugging on your side to find what is creating so many files. Often you'll want to separate Docker from the root filesystem by making /var/lib/docker a separate partition, or by symlinking it to another drive where you have more space.
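That debugging usually means counting files per directory. A minimal sketch, shown here against a throwaway directory so the commands are safe to copy; on a real host you would point target at /var/lib/docker (or /):

```shell
# Rough sketch: count entries per subdirectory to find the inode hog.
# On a real host, set target=/var/lib/docker instead; here we build a
# throwaway tree so the example is self-contained.
target=$(mktemp -d)
mkdir -p "$target/overlay2" "$target/containers"
touch "$target/overlay2/a" "$target/overlay2/b" "$target/containers/c"

# For each subdirectory, count everything beneath it (-xdev stays on
# one filesystem) and sort descending, so the worst offender is on top.
for d in "$target"/*/; do
  printf '%s %s\n' "$(find "$d" -xdev | wc -l)" "$d"
done | sort -rn

rm -r "$target"
```

The directory with the largest count is where to dig further.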
As another example to show that these are all the same:
$ df -i /var/lib/docker/.
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/bmitch--t490--vg-home 57098240 3697772 53400468 7% /home
$ docker run -it --rm busybox df -i
Filesystem Inodes Used Available Use% Mounted on
overlay 57098240 3697814 53400426 6% /
tmpfs 4085684 17 4085667 0% /dev
tmpfs 4085684 16 4085668 0% /sys/fs/cgroup
shm 4085684 1 4085683 0% /dev/shm
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/resolv.conf
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/hostname
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/hosts
tmpfs 4085684 1 4085683 0% /proc/asound
tmpfs 4085684 1 4085683 0% /proc/acpi
tmpfs 4085684 17 4085667 0% /proc/kcore
tmpfs 4085684 17 4085667 0% /proc/keys
tmpfs 4085684 17 4085667 0% /proc/timer_list
tmpfs 4085684 17 4085667 0% /proc/sched_debug
tmpfs 4085684 1 4085683 0% /sys/firmware
From there you can see /etc/resolv.conf, /etc/hostname, and /etc/hosts are each bind mounts going back to the /var/lib/docker filesystem because docker creates and maintains these for each container.
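You can see the same behavior without Docker: df against a single file reports the statistics of the filesystem containing it, not of the file itself, which is exactly what the bind-mounted /etc/hosts shows inside a container. A small sketch using a temporary file:

```shell
# df against a single file reports the containing filesystem's stats,
# not the file's own size -- the same way the bind-mounted /etc/hosts
# inside a container shows the host filesystem that backs it.
f=$(mktemp)
df -kP "$f"              # same numbers as the filesystem holding it...
df -kP "$(dirname "$f")" # ...because df resolves the file to its filesystem
rm "$f"
```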
If removing the container frees up a large number of inodes, check whether you are modifying or creating files in the container filesystem; these are all deleted as part of the container removal. You can list the currently created files (which won't capture files created, then deleted, but still held open by a process) with: docker diff $container_id
I have a linux server (RHEL7) that has the following configuration:
# df -v
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/rhel-root 52403200 21925464 30477736 42% /
devtmpfs 8093108 0 8093108 0% /dev
tmpfs 8109016 64584 8044432 1% /dev/shm
tmpfs 8109016 12460 8096556 1% /run
tmpfs 8109016 0 8109016 0% /sys/fs/cgroup
/dev/sda1 1038336 299108 739228 29% /boot
/dev/mapper/rhel-home 1498653736 160833412 1337820324 11% /home
There are 2 virtual disks (set up and managed by a PERC H310 controller). One is a 500 GB RAID3 and the other is a 1 TB RAID1.
/dev/sda1 is the 500 GB one.
/dev/sdb1 is the 1 TB one.
When RHEL7 was installed, I set the root filesystem on the 1st partition of the 500 GB virtual disk; the home filesystem was set up on the 1 TB virtual disk.
The problem is that when I run vgscan it only sees the rhel volume group. This prevents me from resizing the root filesystem, which is set up with only 50 GB of the 500 GB that this virtual disk has.
What am I missing here?
I have a question. Our Docker server was out of space for its containers, so I gave it a bigger disk, growing it from 500 GB to 1 TB (it's a VM). Ubuntu sees this correctly. If I run the command vgs I get this output:
VG #PV #LV #SN Attr VSize VFree
Docker-vg 1 2 0 wz--n- 999.52g 500.00g
But Docker still thinks it's out of space. I have rebooted the Docker VM, but it still thinks it's out of space. If I use the df -h command, this is the output:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 792M 8.6M 783M 2% /run
/dev/mapper/Docker--vg-root 490G 465G 0 100% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvda1 472M 468M 0 100% /boot
As you can see, Docker-vg still thinks it's 490 GB.
I don't know where to look. Can someone help me?
You still need to extend your logical volume and resize the filesystem to use the larger logical volume.
First, extend the logical volume with lvextend. I'm not sure whether it works with the /dev/mapper path; if not, run lvdisplay to list your logical volumes and use the LV path it shows:
lvextend -l +100%FREE /dev/mapper/Docker--vg-root
With an ext2/3/4 filesystem you can then run a resize:
resize2fs /dev/mapper/Docker--vg-root
The command is similar for xfs:
xfs_growfs /dev/mapper/Docker--vg-root
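Putting the steps together, a sketch of a full session, assuming the Docker-vg/root names from the question and an ext4 root (run as root on the affected VM, and substitute your own names from lvdisplay):

```shell
# Sketch only: LV and VG names are taken from the question above.
lvdisplay                                   # confirm the LV path and current size
lvextend -l +100%FREE /dev/Docker-vg/root   # grow the LV into the 500G of free space
resize2fs /dev/mapper/Docker--vg-root       # ext2/3/4: grow the filesystem online
# xfs_growfs /                              # XFS instead: pass the mount point
df -h /                                     # verify the new size
```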
With docker system prune you can free some space by removing stopped containers, unused images, and other leftovers.
If you want your container to be aware of the disk size change, you have to:
docker rmi <image>
docker pull <image>
I am using Vagrant with Docker provision.
The issue is that when I run my Docker Compose setup, I fill up my VM's disk space.
Here is what my file system looks like:
Filesystem Size Used Avail Use% Mounted on
udev 476M 0 476M 0% /dev
tmpfs 97M 3.1M 94M 4% /run
/dev/sda1 9.7G 2.2G 7.5G 23% /
tmpfs 483M 0 483M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 483M 0 483M 0% /sys/fs/cgroup
tmpfs 97M 0 97M 0% /run/user/1000
vagrant_ 384G 39G 345G 11% /vagrant
vagrant_www_ 384G 39G 345G 11% /vagrant/www
How can I configure Docker or Vagrant to use the /vagrant directory?
(By the way, I have not loaded Docker yet; this is why it's not at 100% disk usage.)
You can try to reconfigure the Docker daemon as documented at https://docs.docker.com/engine/reference/commandline/dockerd/#options. Use the -g parameter to change the root runtime path of the Docker daemon.
--graph, -g /var/lib/docker Root of the Docker runtime
As long as you are working on a local disk or SAN, this is a proper way to change the location of the Docker data, including the images. But be aware: do not use NFS or another type of network share, because that won't work due to the heavy use of file locks. There is an issue about this somewhere on GitHub.
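As a sketch, on newer Docker releases the same setting is exposed as the data-root key in /etc/docker/daemon.json (the path /vagrant/docker below is just an example, and a systemd host is assumed):

```shell
# Assumption: a newer Docker daemon that reads "data-root" from
# daemon.json; older releases use the -g/--graph flag instead.
sudo mkdir -p /vagrant/docker
echo '{ "data-root": "/vagrant/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
docker info --format '{{ .DockerRootDir }}'   # should now print /vagrant/docker
```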
I have loaded a new custom image into a remote RedHat 7 Docker host. When I run a new container, the container does not attempt to use the entire disk. The following is the output of df -h in the container:
rootfs 9.8G 9.3G 0 100% /
/dev/mapper/docker-253:0-67515990-5700c262a29a5bb39d9747532360bf6a346853b0ab1ca6e5e988d7c8191c2573
9.8G 9.3G 0 100% /
tmpfs 1.9G 0 1.9G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/resolv.conf
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/hostname
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/hosts
tmpfs 1.9G 0 1.9G 0% /proc/kcore
tmpfs 1.9G 0 1.9G 0% /proc/timer_stats
But the host system has much more space:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root 49G 25G 25G 51% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/vg_root-lv_home 9.8G 73M 9.7G 1% /home
/dev/sda1 497M 96M 402M 20% /boot
It seems as if Docker is assigning the 9.8 GB of the /home mapping as the entire drive of the container. Is there a reason I am seeing this?
The Problem
I was able to resolve this problem. The issue was not related to the volume being mounted into the container (i.e., it was not mounting the home volume as the container's root volume). The problem occurred because Docker uses device-mapper on RedHat to manage its containers' filesystems. By default, containers start with 10 GB of space. In general, Docker uses AUFS to manage container filesystems; that is the case on most Debian-based versions of Linux, but RedHat uses device-mapper instead.
The Solution
Luckily, the device-mapper size is configurable in Docker. First, I had to stop my service and remove all of my images/containers. (NOTE: there is no coming back from this, so back up all images as needed.)
sudo service docker stop && sudo rm -rf /var/lib/docker
Then, start up the docker instance manually with the desired size parameters:
sudo docker -d --storage-opt dm.basesize=[DESIRED_SIZE]
In my case, I increased my container size to 13G:
sudo docker -d --storage-opt dm.basesize=13G
Then with docker still running, pull/reload the desired image, start a container, and the size should now match the desired size.
Next, I set my Docker service configuration to start up with the desired container size. This is required so that the Docker service starts containers with the desired size. I edited the OPTIONS variable in the /etc/sysconfig/docker file. It now looks like this:
OPTIONS='--selinux-enabled --storage-opt dm.basesize=13G'
Finally, restart the docker service:
sudo service docker restart
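To confirm the new base size took effect after the restart, docker info reports the devicemapper settings (the exact field names vary with Docker version):

```shell
# With the devicemapper storage driver, docker info lists the base
# device size that each new container filesystem starts with.
docker info | grep -i 'base device size'
```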
References
[1] https://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/ - This is how I discovered RedHat uses device-mapper, and that device-mapper has a 10G limit.
[2] https://docs.docker.com/reference/commandline/cli/ - Found the storage options in dockers documentation.