I am using Docker over HTTPS: https://x.x.198.38:2376/v1.40/images/load
I started getting this error when running Docker on CentOS; it was not an issue on Ubuntu.
The image in question is 1.1 GB in size.
Error Message:
Error processing tar file(exit status 1): open /root/.cache/node-gyp/12.21.0/include/node/v8-testing.h: no space left on device
I ran into a similar issue some time back.
The image might have a lot of small files and you might be falling short on disk space or inodes.
I was only able to catch it by running "watch df -hi": it showed inode usage spiking up to 100% before Docker cleaned up and it dropped back to 3%. Check this screenshot.
Further analysis showed that the attached volume was very small: just 5 GB, of which 2.9 GB was already used by unused images and stopped or exited containers.
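For reference, Docker's own accounting of what is reclaimable can be checked alongside the filesystem numbers:
df -hi             # filesystem disk space and inode usage
docker system df   # per-type usage: images, containers, local volumes, build cache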
Hence as a quick fix
sudo docker system prune -a
And this increased the free inodes from 96K to 2.5M.
As a long-term fix, I increased the AWS EBS volume to 50 GB, as we had plans to use Windows images in the future.
HTH
#bjethwan, you found a very useful command. It solved my problem, thank you. I am using Red Hat. I want to add something.
The watch command uses a 2-second interval by default. At the default interval, it couldn't catch the problematic inodes.
I ran watch with a 0.5-second interval, and that caught the guilty volume :)
watch -n 0.5 df -hi
After identifying the affected volume, you need to increase its size.
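On AWS, note that enlarging the EBS volume alone is usually not enough; the partition and filesystem also have to be grown. A rough sketch (device and partition names are examples and will differ per instance):
sudo growpart /dev/xvda 1    # grow partition 1 into the new volume size
sudo resize2fs /dev/xvda1    # for ext4 filesystems
sudo xfs_growfs /            # or, for XFS, grow the filesystem mounted at /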
Related
I am running Docker on Windows, and even though I run docker system prune, it keeps using more and more space somewhere on my hard disk.
Often, after restarting the laptop and running prune again, I can reclaim a bit more, but it is still less than what Docker actually takes.
I know Docker is using this space because the free space on my HDD drops when building new images and running containers, but pruning always gives back much less than was consumed.
It has eaten over 50 GB of my 256 GB SSD.
I would appreciate any help in finding and efficiently locating all the files Docker leaves behind when building and running containers.
I have tried many of the commands from here and most work, but I always fail to reclaim all the space, and given that I have a very small SSD I really need every bit of it back.
Many thanks in advance!
I suggest adding the --all flag to docker system prune, because of:
-a, --all : Remove all unused images, not just dangling ones
I use this to free up all the disk space I no longer need.
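For example (add --volumes only if you are sure nothing in your unused volumes is still needed):
docker system prune --all             # also removes unused, not just dangling, images
docker system prune --all --volumes   # additionally removes unused volumes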
I'm running a Flask API inside a Docker Container.
My application has to download files from Google Cloud, and sometimes, after some minutes of execution, my container exits with the following message:
pm-api exited with code 247
I am not sure, but I think it might be related to the size of the data I'm trying to download from GCP, because I'm using a query and I don't seem to have any problem when limiting the number of rows I get from it.
Could it be data-size related? And if so, how can I configure my Docker container so it doesn't break when downloading/saving large files?
In order to solve this problem I had to increase the memory available to Docker. This is found under Settings -> Resources. Increasing the memory removed the issue.
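If you are not running Docker Desktop, the limit can also be raised per container from the CLI; a rough sketch (the 8g value and the placeholders are illustrative):
docker run --memory=8g <image>                           # start a container with an 8 GB memory limit
docker update --memory=8g --memory-swap=8g <container>   # or raise the limit on an existing container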
Yes, possibly due to the large file size.
You may have to set FILE_UPLOAD_MAX_MEMORY_SIZE to something larger than the file you are downloading. See: What is code 247
Also refer to: max-memory-vs-data-upload-max-memory
I had a similar issue: a Docker container (Python code processing some data) exited with
command terminated with exit code 247
My Docker container was running on k8s.
The issue was caused by the k8s resources memory limit being set to 4 GB while the Python container needed more than 4 GB of memory. After I increased the memory limit to 8 GB, the issue was solved.
As for why the exit code is 247, I couldn't find an answer; it isn't among Docker's documented exit codes.
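For anyone hitting the same thing on Kubernetes, the limit can be raised in the pod spec or with kubectl; a sketch, assuming a Deployment (the name and sizes are illustrative):
kubectl set resources deployment <deployment> --limits=memory=8Gi --requests=memory=4Gi
kubectl describe pod <pod-name> | grep -A 3 'Last State'   # an OOMKilled reason here points at a memory kill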
My free space before running docker system prune -a was 900 MB, and running it gave me 65 GB of free space, although the command reported that it reclaimed only 14.5 GB.
Is the report just wrong, or am I missing something here?
The docs don't say anything new, and it would be normal if it cleared only 14.5 GB, which leaves me with only one conclusion: I'm doing it the wrong way. Any thoughts?
This will remove the following content from the host machine where Docker is running:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache
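To sanity-check the reported number, you can compare Docker's own accounting before and after the prune:
docker system df       # note the RECLAIMABLE column
docker system prune -a
docker system df       # compare with the values from before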
When I try to build the Docker image I get an out-of-disk-space error, and after investigating I find the following:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 4G 3.8G 0 100% /
How do I fix this out of space error?
docker system prune
https://docs.docker.com/engine/reference/commandline/system_prune/
This will clean up stopped containers, unused networks, dangling images, and build cache (add --volumes if you also want unused volumes removed). We generally try to clean up old images when creating a new one, but you could also run this as a scheduled task on your Docker server every day.
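A sketch of such a scheduled task as a cron entry (the -f flag skips the confirmation prompt; schedule, flags, and log path are just examples):
# /etc/cron.d/docker-prune: prune unused Docker data every night at 02:00
0 2 * * * root docker system prune -af > /var/log/docker-prune.log 2>&1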
Use the command: docker system prune -a
This will clean up the reclaimable space reported for images, networks, and stopped containers (unused volumes need the --volumes flag); it removes all image data that is not associated with a running container.
Run the docker system df command to view the reclaimable space.
If there is still reclaimable space and the command above does not work on the first pass, run it a second time and it should get cleaned up.
I have been seeing this behavior almost daily.
I am planning to report this bug to the Docker community, but first I want to reproduce it on the latest release to see whether it has already been fixed.
Open up the Docker settings -> Resources -> Advanced and increase the amount of hard drive space it can use under "Disk image size".
If you are using Linux, then most probably Docker is filling up the directory /var/lib/docker/containers, because it writes container logs to a <CONTAINER_ID>-json.log file under that directory. You can clear such a file with the command cat /dev/null > <CONTAINER_ID>-json.log, or you can set a maximum log file size by editing /etc/sysconfig/docker. More information can be found in the Red Hat documentation. In my case, I created a crontab entry to clear the contents of the file every day at midnight (a sketch follows the commands below). Hope this helps!
NB:
You can list the Docker containers with their full IDs using the following command:
sudo docker ps --no-trunc
You can check the size of the log file using the command:
du -sh $(docker inspect --format='{{.LogPath}}' CONTAINER_ID_FOUND_IN_LAST_STEP)
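The crontab entry I mentioned looks roughly like this (the path pattern assumes the default json-file logging driver; adjust it to your setup):
# clear all container log files every day at midnight
0 0 * * * truncate -s 0 /var/lib/docker/containers/*/*-json.log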
Nothing else worked for me. I changed the disk image max size in the Docker settings, and right after that it freed up a huge amount of space.
Going to leave this here since I couldn't find the answer.
Go to the Docker GUI -> Preferences -> Reset -> Uninstall
Completely uninstall Docker.
Then install it fresh using this link
My Docker was using 20 GB of space when building an image; after a fresh install, it uses 3-4 GB max. Definitely helps!
Also, if you are using a MacBook, have a look at ~/Library/Containers/docker*
For me this folder was 60 GB and was eating up all the space on my Mac! Even though this may not be directly relevant to the question, I think it is worth leaving here.
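You can check how big that folder currently is with du before deciding what to delete:
du -sh ~/Library/Containers/docker*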
I am on Docker version 1.11.2. I am trying to docker save an image but I get an error.
I ran docker images to see the size of the image, and the result is this:
myimage 0.0.1-SNAPSHOT e0f04657b1e9 10 months ago 1.373 GB
The server I am on is low on space, but it has 2.2 GB available. However, when I run docker save myimage:0.0.1-SNAPSHOT > img.tar I get
write /dev/stdout: no space left on device
I removed all exited containers and dangling volumes in hopes of making it work but nothing helped.
You don't have enough space left on the device, so free some more space or try gzipping on the fly:
docker save myimage:0.0.1-SNAPSHOT | gzip > img.tar.gz
To restore it, Docker automatically detects that it is gzipped:
docker load < img.tar.gz
In a situation where you can't free enough space locally, you might want to use storage available over a network connection. NFS or Samba work but are a little more difficult to set up.
The easiest approach could be piping the output through netcat, but keep in mind that this is, at least by default, unencrypted.
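A rough sketch of the netcat approach, assuming a second host with enough free space is reachable on the network (host name, port, and flags are examples; netcat option syntax differs between implementations):
# on the receiving host with free space:
nc -l -p 1234 > img.tar
# on the low-space Docker host:
docker save myimage:0.0.1-SNAPSHOT | nc otherhost 1234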
But as long as your production server is that low on space, you are vulnerable to a bunch of other problems.
Until you can provide more free space I wouldn't create files locally, zipped or not; you could bring important services down when you run out of free space.