Running Docker on Ubuntu VM - keeps failing because of disk space

I have been struggling to build an application on my Ubuntu VM. On this VM I have cloned a git repository that contains an application (frontend, backend, database). When I run the make command, the build ultimately fails partway through with a 'no space left on device' error. Having increased the RAM and hard-disk size several times now, I am still wondering what exactly causes this error.
Is it the RAM size, or the hard-disk size?
Let me give some more information:
OS: Ubuntu 19.04
RAM allocated: 9.2 GB
Processors (CPU): 6
Hard disk space: 43 GB
The Ubuntu VM is a rather clean install, with only Docker, Docker Compose, and Node.js installed on it. The VM runs via VMware.
The following repository is cloned, which is meant to be built on the VM:
git@github.com:reactioncommerce/reaction-platform.git
For more information on the requirements they pose, which I seem to meet: https://docs.reactioncommerce.com/docs/installation-reaction-platform
After having increased RAM, CPU count, and hard disk space iteratively, I still end up with the 'no space left on device' error. When checking the disk space via df -h, I get the following:
Filesystem Size Used Avail Use% Mounted on
udev 4.2G 0 4.2G 0% /dev
tmpfs 853M 1.8M 852M 1% /run
/dev/sr0 1.6G 1.6G 0 100% /cdrom
/dev/loop0 1.5G 1.5G 0 100% /rofs
/cow 4.2G 3.7G 523M 88% /
tmpfs 4.2G 38M 4.2G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 4.2G 0 4.2G 0% /sys/fs/cgroup
tmpfs 4.2G 584K 4.2G 1% /tmp
tmpfs 853M 12K 853M 1% /run/user/999
This makes me wonder: it seems that /dev/sr0, /dev/loop0 and /cow are the partitions used when building the application. However, I do not quite understand whether I am constrained by RAM or by actual disk space at the moment.
Other Docker issues made me look at the inodes as well, as they could be problematic, and these also seem to be maxed out; however, I think the issue resides in the above.
I saw a similar question on Super User, but I could not really map that situation onto mine.
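For reference, a minimal set of checks to tell RAM, disk space and inode exhaustion apart (this assumes a standard Docker install; docker system df needs a reasonably recent Docker CLI):
free -h            # RAM and swap pressure
df -h              # disk space per filesystem
df -i              # inode usage per filesystem
docker system df   # what Docker itself holds: images, containers, volumes, build cache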

Related

Docker Host on Ubuntu taking all the space on VM

Current Setup:
Machine OS: Windows 7
VMware: VMware Workstation 8.0.2-591240
VM: Ubuntu 16.04 LTS
Docker on Ubuntu: Docker Engine - Community version 19.03.5
I recently set up Docker containers to run Bamboo agents. They keep running out of space after a while. Can anyone suggest mount options or any other tips to keep the volume usage down?
P.S. I had a similar setup before and it was all fine until the VM got corrupted and I had to set up a new one.
root@ubuntu:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 5.8G 0 5.8G 0% /dev
tmpfs 1.2G 113M 1.1G 10% /run
/dev/sda1 12G 12G 0 100% /
tmpfs 5.8G 0 5.8G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 1.2G 0 1.2G 0% /run/user/1000
overlay 12G 12G 0 100% /var/lib/docker/overlay2/e0e78a7d84da9c2a1e1c9f91ee16bc6515d8660e1a2db5e207504469f9e496ae/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/8f3a73cd0b201f4a8a92ded0cfab869441edfbc2199574c225adbf78a2393129/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/3d947960c28e834aa422b5ea16c261739d06bf22fe0f33f9e0248d233f2a84d1/merged
12 GB is not much space if you also want to keep cached images around to speed up the build process. So, assuming you don't want to expand the root partition of that VM, what you can do is clean up images after every build, or after every X builds.
I follow the second approach, for example: I run a cleanup job every night on my Jenkins agents to prevent the disk from filling up.
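As a rough sketch of that second approach (the schedule and the prune filter are only examples, not necessarily what the answer above runs), a nightly cron entry on the agent could look like this:
# /etc/cron.d/docker-cleanup -- example cleanup job, adjust schedule and filter to taste
# At 03:00 every night, remove stopped containers, unused networks, the build cache,
# and images that have been unused for more than a week (168 hours).
0 3 * * * root docker system prune -a -f --filter "until=168h"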
By default Docker stores its data under /var. Cleaning up unused containers will buy you some time, but it stops helping once there is nothing left to delete. The longer-term fix is to point the daemon's data directory at a disk with more available space, which you do by setting the data-root parameter in your daemon.json file:
{
  "data-root": "/new/path/to/docker-data"
}
Once you have done that, restart the Docker daemon (for example with systemctl restart docker) so it picks up the new configuration. Note that Docker does not migrate existing data automatically: stop the daemon first and copy the contents of /var/lib/docker to the new path yourself, otherwise you start with an empty data directory. If you do not want running containers to be killed during a daemon restart, you need to have the live-restore option configured in daemon.json beforehand. Moving Docker's storage off the root partition this way should resolve the space issue permanently. Hope this helps.
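A minimal sketch of the whole move, assuming a systemd-based install and /data/docker as the (example) new location:
# stop the daemon so nothing writes to the data directory while it is copied
sudo systemctl stop docker
# copy the existing data, preserving permissions and ownership
sudo rsync -aP /var/lib/docker/ /data/docker/
# point the daemon at the new directory (merge this into any existing daemon.json
# rather than overwriting it)
echo '{ "data-root": "/data/docker" }' | sudo tee /etc/docker/daemon.json
# restart and verify the new root directory is in use
sudo systemctl start docker
docker info --format '{{.DockerRootDir}}'
# once everything checks out, the old /var/lib/docker can be removed to reclaim space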

Dokku/Docker out of disk space - How to enter app

I have an errant Rails app deployed using Dokku with the default DigitalOcean setup. This Rails app has eaten all of the disk space because I did not set up anything to clean out the /tmp directory.
So the output of df is:
Filesystem 1K-blocks Used Available Use% Mounted on
udev 1506176 0 1506176 0% /dev
tmpfs 307356 27488 279868 9% /run
/dev/vda1 60795672 60779288 0 100% /
tmpfs 1536772 0 1536772 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 1536772 0 1536772 0% /sys/fs/cgroup
/dev/vda15 106858 3419 103439 4% /boot/efi
tmpfs 307352 0 307352 0% /run/user/0
So I am out of disk space, but I don't know how to enter the container to clean it. Any dokku **** command returns /home/dokku/.basher/bash: main: command not found or Access denied, which I have found out is because I am completely out of disk space.
So, two questions:
1: How do I get into the container to clear the /tmp directory?
2: Is there a way to set a maximum disk size limit so Dokku doesn't eat the entire hard drive again?
Thanks
Dokku uses Docker to deploy your application, so you are probably accumulating a bunch of stale Docker images, which over time can take over all of your disk space.
Try running this:
docker image ls
Then remove unused images (along with stopped containers, unused networks, and the build cache):
docker system prune -a
For more details, see: https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes
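If you want something a bit more surgical than prune -a, and to answer the "how do I get in" part, the following standard Docker commands may help (the container ID is a placeholder):
docker system df                                # see what images, containers and volumes are using
docker system prune -a --filter "until=168h"    # only remove images unused for over a week
docker volume prune                             # reclaim space from unused volumes
docker ps                                       # find the app container's ID or name
docker exec -it <container-id> /bin/bash        # open a shell inside it to clean /tmp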

docker disk space grows faster than container's

Docker containers that are modifying, adding, and deleting files extensively (leveldb) grow disk usage faster than the container itself reports, and eventually use up all the disk.
Here is one snapshot of df, and a second one further down. You'll note that disk space has increased considerably (300 MB) from the host's perspective, but the container's self-reported usage has only increased by 17 MB. As this continues, the host runs out of disk.
Stock Ubuntu 14.04, Docker version 1.10.2, build c3959b1.
Is there some sort of trim-like issue going on here?
root@9e7a93cbcb02:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-136171-d4[...] 9.8G 667M 8.6G 8% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/disk/by-uuid/0a76513a-37fc-43df-9833-34f8f9598ada 7.8G 2.9G 4.5G 39% /etc/hosts
shm 64M 0 64M 0% /dev/shm
And later on:
root@9e7a93cbcb02:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-202:1-136171-d4[...] 9.8G 684M 8.6G 8% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/disk/by-uuid/0a76513a-37fc-43df-9833-34f8f9598ada 7.8G 3.2G 4.2G 43% /etc/hosts
shm 64M 0 64M 0% /dev/shm
This is happening because of a kernel bug fix that has not been propagated to many mainstream OS distros. It's actually quite bad for newbie Docker users who naively fire up Docker on the default Amazon AMI, as I did.
Stick with CoreOS Stable and you won't have this issue. I have zero affiliation with CoreOS and frankly am greatly annoyed to have to deal with yet another distro. On CoreOS, or any other correctly working Linux kernel, the disk usage of the container and the host track each other up and down correctly as the container frees or uses space. I'll note that OS X and other VirtualBox-based setups use CoreOS and thus work correctly.
Here's a long write-up on a very similar issue; the root cause is a trim/discard issue in devicemapper. You need a fairly recent Linux kernel to handle this properly. I'd go so far as to say that Docker is unfit for purpose unless you have the correct kernel. See that article for a discussion on which version of your distro to use.
Note that the article above only deals with management of Docker containers and images, but AFAICT the problem also affects attempts by the container itself to free up disk space during normal addition/removal of files or blocks.
Be careful about which distro your cloud provider is using for cloud container management.
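A quick way to check whether a given host is in the affected configuration is to look at the storage driver and kernel version (plain docker info and uname output, nothing version-specific assumed):
docker info | grep -i 'storage driver'   # devicemapper is the driver affected by the trim/discard issue
docker info | grep -i 'data space'       # on devicemapper, shows thin-pool used/available space
uname -r                                 # the discard/trim fix needs a reasonably recent kernel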

Ruby on Rails: Cannot allocate memory issues when application size is big

I have a Linux 14.04 server (a DigitalOcean droplet) with 2 GB of RAM and a 40 GB hard drive.
My Ruby on Rails application is almost 10 GB in size (because of stored images, which I think is not a good idea).
I am now constantly getting memory allocation problems. I cannot run any rake task that writes data to the database, and sometimes the application also stops working.
Here are my server's hard drive details:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 40G 16G 22G 42% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 991M 4.0K 991M 1% /dev
tmpfs 201M 364K 200M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1001M 0 1001M 0% /run/shm
none 100M 4.0K 100M 1% /run/user
Is this memory allocation problem occurring because my Ruby on Rails application is so big?
If so, is there any workaround, since I still have some space available on the hard drive?
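For what it's worth, a quick way to see which resource is actually running out (plain stock utilities; the uploads path is just an example of where the images might live):
free -m                              # physical memory and swap -- "Cannot allocate memory" points here
df -h                                # disk space per filesystem
du -sh /path/to/app/public/uploads   # example path: how much of the 10 GB is the stored images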

Deploy says disk full when the disk is not actually full

I'm trying to deploy an update to our Rails app on a DigitalOcean box. When I run cap deploy I get the errors:
error: file write error (No space left on device)
fatal: unable to write sha1 file
fatal: unpack-objects failed
When I run df I see that we are only using 15% of our disk space:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 41151808 5500720 33537664 15% /
none 4 0 4 0% /sys/fs/cgroup
udev 1014128 4 1014124 1% /dev
tmpfs 205000 360 204640 1% /run
none 5120 0 5120 0% /run/lock
none 1024980 0 1024980 0% /run/shm
none 102400 0 102400 0% /run/user
df -i reveals:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vda1 2621440 176278 2445162 7% /
none 256245 2 256243 1% /sys/fs/cgroup
udev 253532 402 253130 1% /dev
tmpfs 256245 325 255920 1% /run
none 256245 1 256244 1% /run/lock
none 256245 1 256244 1% /run/shm
none 256245 3 256242 1% /run/user
I've tried deleting log files and rebooting the box with no luck. Any ideas on why it thinks our disk is full?
Turns out the error was actually coming from the database server - which was full.
I just ran into this same issue after several failed attempts to deploy with Capistrano 3.
df -i did indeed indicate high inode usage on the app server in my case, but I was able to resolve it by clearing out old releases with cap ENVIRONMENT deploy:clean.
This answer may be helpful for others who end up here wondering where their space has gone: especially if something has gone haywire with previous deployments, they may be taking up an egregious amount of space, or more likely inodes.
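To see where the space or inodes actually went, a sketch like the following usually narrows it down (du --inodes needs a reasonably recent GNU coreutils; the find fallback works anywhere):
sudo du --inodes -x -d1 / 2>/dev/null | sort -n | tail                        # directories using the most inodes
sudo find / -xdev -type f | cut -d/ -f2 | sort | uniq -c | sort -rn | head    # fallback: file count per top-level dir
sudo du -xh -d1 / 2>/dev/null | sort -h | tail                                # largest directories by size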
