gcc: No space left on device?

I'm trying to build some C code with a simple gcc command on Ubuntu 10, but for some reason I keep getting this error:
Cannot create temporary file in /tmp/: No space left on device
The thing is, though, I have plenty of space on the disk. Here is the output of df -h:
Filesystem      Size  Used Avail Use% Mounted on
/               3.7G  2.4G  1.1G  70% /
devtmpfs        312M  112K  312M   1% /dev
none            312M   24K  312M   1% /dev/shm
none            312M   80K  312M   1% /var/run
none            312M     0  312M   0% /var/lock
none            312M     0  312M   0% /lib/init/rw
And df -i, in case you are wondering about the inodes:
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/              240960 195198  45762   82% /
devtmpfs        79775    609  79166    1% /dev
none            79798      3  79795    1% /dev/shm
none            79798     41  79757    1% /var/run
none            79798      2  79796    1% /var/lock
none            79798      1  79797    1% /lib/init/rw
I can also touch /tmp/test successfully, so I know I have space on the drive. Any ideas as to why gcc has suddenly decided to throw a fit? (It was working earlier.) Thanks in advance.

It looks to me like your /tmp directory is mounted as devtmpfs, which, if I remember correctly, is backed by your computer's RAM.
You can always reboot to see if that helps, increase your virtual memory (swap) partition, or close running programs. You can also delete unnecessary files from /tmp, since they only persist for the life of the session anyway.
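Before trying any of that, it is worth confirming how /tmp is actually mounted (a quick check, not from the original answer; if nothing is mounted on /tmp itself, it simply lives on /):
mount | grep /tmp
df -h /tmp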

The intermediate files are too big for /tmp, so using another temporary directory (TMPDIR=/var/tmp g++ ...) may help.
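For example, you can point the compiler at a roomier temporary directory for a single compile or for the whole shell session (a sketch; hello.c stands in for whatever you are actually building):
TMPDIR=/var/tmp gcc -o hello hello.c   # one-off
export TMPDIR=/var/tmp                 # for the rest of the session
gcc -o hello hello.c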

Related

Running Docker on Ubuntu VM - keeps failing because of disk space

I have been struggling to build an application on my Ubuntu VM. On this VM, I have cloned a git repository which contains an application (frontend, backend, database). When running the make command, it ultimately fails somewhere in the build process because of 'no space left on device'. Having increased the RAM and hard-disk size several times now, I am still wondering what exactly causes this error.
Is it the RAM size, or the hard-disk size?
Let me give some more information:
OS: Ubuntu 19.04
RAM allocated: 9.2 GB
Processors (CPU): 6
Hard disk space: 43 GB
The Ubuntu VM is a rather clean install, with only Docker, Docker Compose, and NodeJS installed on it. The VM runs via VMware.
The following repository is cloned, which is meant to be built on the VM:
git@github.com:reactioncommerce/reaction-platform.git
For more information on the requirements they pose, which I seem to meet: https://docs.reactioncommerce.com/docs/installation-reaction-platform
After having increased the RAM, CPU count, and hard disk space iteratively, I still end up with the 'no space left on device' error. When checking the disk space via df -h, I get the following:
Filesystem      Size  Used Avail Use% Mounted on
udev            4.2G     0  4.2G   0% /dev
tmpfs           853M  1.8M  852M   1% /run
/dev/sr0        1.6G  1.6G     0 100% /cdrom
/dev/loop0      1.5G  1.5G     0 100% /rofs
/cow            4.2G  3.7G  523M  88% /
tmpfs           4.2G   38M  4.2G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           4.2G     0  4.2G   0% /sys/fs/cgroup
tmpfs           4.2G  584K  4.2G   1% /tmp
tmpfs           853M   12K  853M   1% /run/user/999
Now this makes me wonder: it seems that /dev/sr0, /dev/loop0, and /cow are the partitions being used when building the application. However, I do not quite understand whether I am constrained by RAM or by actual disk space at the moment.
Other Docker issues made me look at the inodes as well, as they can be problematic. These also seem maxed out; however, I think the issue resides in the above.
I saw a similar question on Super User, but I could not really map that situation onto mine.
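One way to narrow this down (a diagnostic sketch, not from the original question) is to ask Docker how much space its images, containers, and volumes are using, and to compare that with the filesystem backing /:
docker system df   # space used by images, containers, local volumes
df -h /            # the /cow overlay that / is mounted on
df -i /            # inode usage on the same filesystem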

Dokku/Docker out of disk space - How to enter app

So my question is: I have an errant Rails app deployed using Dokku with the default DigitalOcean setup. This Rails app has eaten all of the disk space, as I did not set up anything to clean out the /tmp directory.
So the output of df is:
Filesystem     1K-blocks     Used Available Use% Mounted on
udev             1506176        0   1506176   0% /dev
tmpfs             307356    27488    279868   9% /run
/dev/vda1       60795672 60779288         0 100% /
tmpfs            1536772        0   1536772   0% /dev/shm
tmpfs               5120        0      5120   0% /run/lock
tmpfs            1536772        0   1536772   0% /sys/fs/cgroup
/dev/vda15        106858     3419    103439   4% /boot/efi
tmpfs             307352        0    307352   0% /run/user/0
So I am out of disk space, but I don't know how to enter the container to clean it. Any dokku **** command returns:
/home/dokku/.basher/bash: main: command not found
Access denied
which, I have found out, is because I am completely out of HD space.
So, two questions:
1: How do I get into the container to clear the /tmp directory?
2: Is there a way to set a max disk size limit so Dokku doesn't eat the entire HD again?
Thanks
Dokku uses Docker to deploy your application, so you are probably accumulating a bunch of stale Docker images, which over time can take over all of your disk space.
Try running this:
docker image ls
Then prune everything Docker no longer uses, including all unused images:
docker system prune -a
For more details, see: https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes
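Before and after pruning, docker system df shows how much space images, containers, and local volumes occupy, which is a quick way to confirm the cleanup worked (not part of the original answer, just a handy companion command):
docker system df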

tar fills up my HDD

I'm trying to tar a pretty big folder (~11GB), and while tarring, my VM crashes because its disk is full. But... I still have plenty of room available on every disk except /:
$ sudo df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3,9G     0  3,9G   0% /dev
tmpfs           799M  9,3M  790M   2% /run
/dev/sda1       9,1G  3,1G  5,6G  36% /
/dev/sda2        69G   37G   29G  57% /home
/dev/sdb1       197G   87G  100G  47% /docker
I assume tar is buffering somewhere on / and filling it up before my OS crashes, but I have no idea how to prevent this. Do you guys have any idea?
Cheers,
Olivier
Tar normally builds the archive relative to the current directory, so try cd'ing to one of your larger partitions' mount points and tarring from there to see if it makes a difference. You may also be running out of inodes:
No Space Left on Device, Running out of Inodes
I ran into a similar problem on a server because of too many small files: even when you have plenty of free space left, you can still hit this issue.
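Given the df output above, another option is to write the archive straight onto one of the larger partitions instead of /; a minimal sketch, where the paths are placeholders for the actual folder and destination:
tar -czf /docker/big-folder.tar.gz -C /home/olivier big-folder
The -C flag makes tar change directory first, so only the folder name is stored in the archive, while -f sends the output to /docker, which has about 100G free.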

Ruby on Rails: Cannot allocate memory issues when application size is big

I have an Ubuntu 14.04 server (a DigitalOcean droplet) with 2GB of RAM and a 40GB hard drive.
And my Ruby on Rails application is almost 10GB (because of some stored images, which I think is not good).
I am now constantly getting memory allocation problems. I cannot run any rake task that writes data to the database, and sometimes the application also stops working.
Here are my server's hard-drive details:
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G   16G   22G  42% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            991M  4.0K  991M   1% /dev
tmpfs           201M  364K  200M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1001M     0 1001M   0% /run/shm
none            100M  4.0K  100M   1% /run/user
Is this memory allocation problem occurring because my Ruby on Rails application is so big?
If so, is there any workaround, given that I still have some space available on the hard drive?
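Since the question explicitly asks about trading spare disk space for memory, one common workaround on small droplets is a swap file; a minimal sketch, assuming a 2GB file at /swapfile:
sudo fallocate -l 2G /swapfile   # reserve 2GB on the 40GB disk
sudo chmod 600 /swapfile         # restrict permissions, as recommended for swap files
sudo mkswap /swapfile            # format the file as swap space
sudo swapon /swapfile            # enable it immediately
Note that 'Cannot allocate memory' is a RAM error, not a disk error: the 10GB of images consumes disk, but it is the 2GB of RAM that the rake tasks are exhausting, and swap gives the kernel somewhere to spill.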

Deploy says disk full when the disk is not actually full

I'm trying to deploy an update to our Rails app on a DigitalOcean box. When I run cap deploy, I get these errors:
error: file write error (No space left on device)
fatal: unable to write sha1 file
fatal: unpack-objects failed
When I run df I see that we are only using 15% of our disk space:
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/vda1       41151808 5500720  33537664  15% /
none                   4       0         4   0% /sys/fs/cgroup
udev             1014128       4   1014124   1% /dev
tmpfs             205000     360    204640   1% /run
none                5120       0      5120   0% /run/lock
none             1024980       0   1024980   0% /run/shm
none              102400       0    102400   0% /run/user
df -i reveals:
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/vda1      2621440 176278 2445162    7% /
none            256245      2  256243    1% /sys/fs/cgroup
udev            253532    402  253130    1% /dev
tmpfs           256245    325  255920    1% /run
none            256245      1  256244    1% /run/lock
none            256245      1  256244    1% /run/shm
none            256245      3  256242    1% /run/user
I've tried deleting log files and rebooting the box with no luck. Any ideas on why it thinks our disk is full?
Turns out the error was actually coming from the database server, which was full.
I just ran into this same issue after several failed attempts to deploy with Capistrano 3.
df -i did indeed indicate high inode usage on the app server in my case, but I was able to resolve it by clearing out old releases with cap ENVIRONMENT deploy:clean.
This answer may be helpful for others who end up here wondering where their space has gone: if something has gone haywire with previous deployments, the old releases may be taking up an egregious amount of space, or, more likely, inodes.
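To keep old releases from piling up again, standard Capistrano 3 configuration lets you cap how many are retained (a sketch for config/deploy.rb; 5 is Capistrano's usual default):
set :keep_releases, 5   # deploy:cleanup keeps only the 5 newest releases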
