Initially I increased the size of "/dev/mapper/rhel-var" from 2G to 9G.
When trying to load a Docker image, I get an error.
Filesystem:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.9G 4.0K 3.9G 1% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 9.0M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mapper/rhel-root 50G 8.8G 42G 18% /
/dev/sda1 1014M 150M 865M 15% /boot
/dev/mapper/rhel-home 21G 13G 7.9G 62% /home
/dev/mapper/rhel-tmp 5.0G 35M 5.0G 1% /tmp
/dev/mapper/rhel-var 9.0G 152M 8.9G 2% /var
/dev/mapper/rhel-var_log 9.0G 53M 9.0G 1% /var/log
/dev/mapper/rhel-var_tmp 2.0G 74M 2.0G 4% /var/tmp
/dev/mapper/rhel-var_log_audit 3.0G 2.9G 105M 97% /var/log/audit
tmpfs 783M 0 783M 0% /run/user/1000
Is there any other volume that needs additional storage space?
The Docker image (I mean the .tar file) is around 4.8G:
$ ll
total 4634988
-rwxr-x---. 1 admin admin 4746224640 Oct 28 14:51 daas2.tar*
Thanks
Cosmin
Later edit:
I got this crash of the machine:
$ sudo docker image load -i daas2.tar
7ea4455e747e: Loading layer [==================================================>] 80.31MB/80.31MB
b3b741e72ab9: Loading layer [==================================================>] 46.68MB/46.68MB
9c79dfcaa270: Loading layer [==================================================>] 3.584kB/3.584kB
2335e0a013ff: Loading layer [==================================================>] 4.608kB/4.608kB
5dabc97cd0ea: Loading layer [======================> ] 812.7MB/1.772GB
Remote side unexpectedly closed network connection
Session stopped
The problem was:
/dev/mapper/rhel-var_log_audit 3.0G 2.9G 105M 97% /var/log/audit
I checked, following this comment: "Start loading it, while it is happening, open a second terminal, run df repeatedly and observe which usage changes over time. – KamilCuk Oct 28 at 17:14"
and this volume filled up, and then the machine crashed.
Conclusion:
When loading a Docker image, it writes data at least in:
/var/lib/docker
/var/run/docker
/var/log/audit
I cleared /var/log/audit and the load worked.
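KamilCuk's suggestion of watching df in a second terminal can be scripted; this is a rough sketch (mount points taken from the df output above):

```shell
#!/bin/sh
# Run this in a second terminal while `docker image load -i daas2.tar`
# runs in the first, and watch which filesystem's Use% climbs.
# Mount points are taken from the df output above.
MOUNTS="/var /var/log /var/log/audit"
for i in 1 2 3; do              # raise the count for a longer watch
    date
    for m in $MOUNTS; do
        [ -d "$m" ] && df -h "$m" | tail -1
    done
    sleep 1
done
```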
Thanks!
Related
I'm using the https://github.com/lukechilds/dockerpi project to recreate a Raspberry Pi locally with Docker. However, the default disk space is very small and I quickly fill it up:
pi#raspberrypi:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.8G 1.2G 533M 69% /
devtmpfs 124M 0 124M 0% /dev
tmpfs 124M 0 124M 0% /dev/shm
tmpfs 124M 1.9M 122M 2% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 124M 0 124M 0% /sys/fs/cgroup
/dev/sda1 253M 52M 201M 21% /boot
tmpfs 25M 0 25M 0% /run/user/1000
How can I give more space to the RPi? I saw this issue, but I don't understand how that solution is implemented, or whether it is relevant.
To increase the disk size, you need to extend the partition of the qemu disk used inside the container.
Start the container once so that it unpacks the root filesystem, mounted to a host path:
docker run --rm -v $HOME/.dockerpi:/sdcard -it lukechilds/dockerpi
When the virtualized Raspberry Pi is up, stop it by running sudo poweroff from its prompt.
Then you have the qemu disk in $HOME/.dockerpi/filesystem.img.
It can be extended with:
sudo qemu-img resize -f raw $HOME/.dockerpi/filesystem.img 10G
startsector=$(fdisk -u -l $HOME/.dockerpi/filesystem.img | grep filesystem.img2 | awk '{print $2}')
sudo parted $HOME/.dockerpi/filesystem.img --script rm 2
sudo parted $HOME/.dockerpi/filesystem.img --script "mkpart primary ext2 ${startsector}s -1s"
Restart the Raspberry Pi, which will now use the resized qemu disk, with:
docker run --rm -v $HOME/.dockerpi:/sdcard -it lukechilds/dockerpi
From the prompt inside the container, you can extend the root filesystem with:
sudo resize2fs /dev/sda2 8G
The root filesystem is now larger. After this, df -h gives:
Filesystem Size Used Avail Use% Mounted on
/dev/root 7.9G 1.2G 6.4G 16% /
devtmpfs 124M 0 124M 0% /dev
tmpfs 124M 0 124M 0% /dev/shm
tmpfs 124M 1.9M 122M 2% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 124M 0 124M 0% /sys/fs/cgroup
/dev/sda1 253M 52M 201M 21% /boot
tmpfs 25M 0 25M 0% /run/user/1000
If the solution is indeed to resize /dev/root, you can follow this thread, which concludes:
Using the gparted live distro, I struggled for a little while until I realised that the /dev/root partition was within another partition.
Resizing the latter, then the former, everything works. I just gave the /dev/root partition everything remaining on the disk, the other partitions I left at their original sizes.
I use Docker to run a job, but the inodes were exhausted after it ran for about 15 days. The output of df -i inside the container was:
Filesystem Inodes IUsed IFree IUse% Mounted on
overlay 3276800 1965849 1310951 60% /
tmpfs 16428916 17 16428899 1% /dev
tmpfs 16428916 15 16428901 1% /sys/fs/cgroup
shm 16428916 1 16428915 1% /dev/shm
/dev/vda1 3276800 1965849 1310951 60% /etc/hosts
tmpfs 16428916 1 16428915 1% /proc/acpi
tmpfs 16428916 1 16428915 1% /proc/scsi
tmpfs 16428916 1 16428915 1% /sys/firmware
The hosts file content:
127.0.0.1 xxx xxx
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
::1 xxx xxx
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
Why can the hosts file use so many inodes? How can I recover them?
Inside the container, that file is a bind mount. The mount statistics come from the underlying filesystem where the file is originally located, in this case /dev/vda1. They are not statistics for the single file; that is just how mount reports data for a bind mount. The same happens for the overlay filesystem, since it is also based on a different underlying filesystem. Since that underlying filesystem is the same for each, you see the exact same mount statistics for each.
Therefore you are exhausting the inodes on your host filesystem, likely the /var/lib/docker filesystem, which if you have not configured a separate mount, will be the / (root) filesystem. Why you are using so many inodes on that filesystem is going to require debugging on your side to see what is creating so many files. Often you'll want to separate docker from the root filesystem by making /var/lib/docker a separate partition, or symlinking it to another drive where you have more space.
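Counting inodes per directory is the quickest way to find the culprit. A rough sketch (the --inodes flag assumes GNU coreutils du 8.22+; /var/lib/docker is the usual suspect and typically needs root to read):

```shell
#!/bin/sh
# Show which first-level subdirectories use the most inodes,
# biggest consumers last. Pass the tree to inspect as $1;
# /var/lib/docker is the usual suspect (read it as root).
DIR="${1:-/var/lib/docker}"
if du --inodes -d0 . >/dev/null 2>&1; then
    # GNU coreutils 8.22+ can count inodes directly
    du --inodes -x -d1 "$DIR" 2>/dev/null | sort -n | tail
else
    # Portable fallback: a plain file count for the whole tree
    find "$DIR" -xdev 2>/dev/null | wc -l
fi
```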
As another example to show that these are all the same:
$ df -i /var/lib/docker/.
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/bmitch--t490--vg-home 57098240 3697772 53400468 7% /home
$ docker run -it --rm busybox df -i
Filesystem Inodes Used Available Use% Mounted on
overlay 57098240 3697814 53400426 6% /
tmpfs 4085684 17 4085667 0% /dev
tmpfs 4085684 16 4085668 0% /sys/fs/cgroup
shm 4085684 1 4085683 0% /dev/shm
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/resolv.conf
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/hostname
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/hosts
tmpfs 4085684 1 4085683 0% /proc/asound
tmpfs 4085684 1 4085683 0% /proc/acpi
tmpfs 4085684 17 4085667 0% /proc/kcore
tmpfs 4085684 17 4085667 0% /proc/keys
tmpfs 4085684 17 4085667 0% /proc/timer_list
tmpfs 4085684 17 4085667 0% /proc/sched_debug
tmpfs 4085684 1 4085683 0% /sys/firmware
From there you can see /etc/resolv.conf, /etc/hostname, and /etc/hosts are each bind mounts going back to the /var/lib/docker filesystem because docker creates and maintains these for each container.
If removing the container frees up a large number of inodes, then check your container to see if you are modifying/creating files in the container filesystem. These will all be deleted as part of the container removal. You can see currently created files (which won't capture files created and then deleted but still held open by a process) with: docker diff $container_id
I am using Vagrant with Docker provision.
The issue is that when I run my Docker Compose setup, I fill up my VM's disk space.
Here is what my file system looks like:
Filesystem Size Used Avail Use% Mounted on
udev 476M 0 476M 0% /dev
tmpfs 97M 3.1M 94M 4% /run
/dev/sda1 9.7G 2.2G 7.5G 23% /
tmpfs 483M 0 483M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 483M 0 483M 0% /sys/fs/cgroup
tmpfs 97M 0 97M 0% /run/user/1000
vagrant_ 384G 39G 345G 11% /vagrant
vagrant_www_ 384G 39G 345G 11% /vagrant/www
How can I configure Docker or Vagrant to use /vagrant directory?
(By the way, I have not loaded Docker yet... this is why it's not at 100% disk usage.)
You can try to reconfigure the Docker daemon as documented here -> https://docs.docker.com/engine/reference/commandline/dockerd/#options. Use the -g parameter to change the root runtime path of the Docker daemon.
--graph, -g /var/lib/docker Root of the Docker runtime
As long as you are working on a local disk or SAN, this is a proper way to change the location of the Docker data, including the images. But be aware: do not use NFS or another type of network share, because it won't work due to the heavy use of file locks. There is an issue about this somewhere on GitHub.
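Note that --graph/-g has since been deprecated; on current Docker releases the same relocation is done with "data-root" in /etc/docker/daemon.json. A sketch, with /vagrant/docker as an assumed target path for this setup (the script only prints the fragment; write it to /etc/docker/daemon.json as root on the real box):

```shell
# Print the daemon.json fragment; "data-root" replaces the deprecated
# --graph/-g flag, and /vagrant/docker is an assumed target path.
cat <<'EOF'
{
  "data-root": "/vagrant/docker"
}
EOF
# On the real box: write this to /etc/docker/daemon.json as root,
# stop Docker, optionally rsync the old /var/lib/docker across,
# then `sudo systemctl restart docker`.
```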
This issue has been really giving me grief and I would appreciate some help.
Running Docker 1.10.3 on a vanilla CentOS 7.1 box, I have two file systems: a 15 GB /dev/vda1 where my root and /var/lib are, and a 35 GB /dev/vdc1 mounted on /mnt, where I would like to put my Docker volume/image data and metadata. This is for administration and management purposes, as I expect the number of containers to grow.
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 15G 1.5G 13G 11% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/vdc1 35G 49M 33G 1% /mnt/vdc1
tmpfs 385M 0 385M 0% /run/user/0
Despite all my attempts, Docker keeps defaulting to placing the data space and metadata space on the 15 GB root volume. I have tried many solutions, including http://collabnix.com/archives/5881, How to change the docker image installation directory?, and more, all with no luck... basically, either the Docker instance does not start at all, or it starts with its default settings.
I would like some help with either the settings required for the data and metadata to be stored on /mnt/vdc1, or with installing Docker as a whole on that drive.
Thanks in advance, bf!
--graph is only one flag. There is also --exec-root and $DOCKER_TMPDIR which are used to store files as well.
DIR=/mnt/vdc1
export DOCKER_TMPDIR=$DIR/tmp
dockerd -D -g $DIR --exec-root=$DIR
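On a systemd host you would normally persist these flags in a drop-in unit rather than running dockerd by hand. A sketch, assuming the /mnt/vdc1 mount from the question (the script only prints the fragment; install it as root):

```shell
# Print a systemd drop-in fragment; the empty ExecStart= clears the
# packaged one, and /mnt/vdc1 is the mount from the question.
cat <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -g /mnt/vdc1 --exec-root=/mnt/vdc1
EOF
# Install as /etc/systemd/system/docker.service.d/override.conf, then:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```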
I have two physical machines with Docker 1.11.3 installed on Ubuntu. Following is the configuration of the machines:
1. Machine 1 - RAM 4 GB, Hard disk - 500 GB, quad core
2. Machine 2 - RAM 8 GB, Hard disk - 1 TB, octa core
I created containers on both machines. When I check the disk space of the individual containers, I get some stats I am not able to understand the reason behind.
1. Container on Machine 1
root#e1t2j3k45432#df -h
Filesystem Size Used Avail Use% Mounted on
none 37G 27G 8.2G 77% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda9 37G 27G 8.2G 77% /etc/hosts
shm 64M 0 64M 0% /dev/shm
I have nothing installed in the above container, yet it shows 27 GB used. How did this container get 37 GB of space?
2. Container on Machine 2
root#0af8ac09b89c:/# df -h
Filesystem Size Used Avail Use% Mounted on
none 184G 11G 164G 6% /
tmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda5 184G 11G 164G 6% /etc/hosts
shm 64M 0 64M 0% /dev/shm
Why is only 11 GB of disk space shown as used in this container, even though it is also an empty container with no packages installed? And how did this container get 184 GB of disk space?
The disk usage reported inside docker is the host disk usage of /var/lib/docker (my /var/lib/docker in the example below is symlinked to my /home where I have more disk space):
bash$ df -k /var/lib/docker/.
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/... 720798904 311706176 372455240 46% /home
bash$ docker run --rm -it busybox df -k
Filesystem 1K-blocks Used Available Use% Mounted on
none 720798904 311706268 372455148 46% /
...
So if you run the df command in the same container on different hosts, a different result is expected.