Mac docker no space left on device when building images?

I've seen this issue a number of times and usually use docker system prune to solve it temporarily, but I don't understand why it says there is no space on the device.
The main drive on my Mac currently has 170 GB free, and I also have a second drive with 900 GB free. The images I'm building take up a total of 900 MB when built, so what is Docker talking about? I have plenty of storage space!

Since you specified that the platform is Mac, your Docker runtime is running inside a VM, which has its own resources allocated.
Assuming you are using Docker for Mac, you should increase the disk space allocated to the Docker VM, which you can do in Docker Desktop's preferences under Resources.

In case you don't want to increase the Docker engine's storage allocation as answered here, you can free some space by running:
docker image prune
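If you're unsure what is actually filling the VM's disk, you can first check Docker's own accounting (standard CLI commands, shown here as a quick diagnostic):
docker system df
docker system df -v
The first gives a summary of the space used by images, containers, local volumes, and build cache; the -v variant breaks it down per image, container, and volume.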

Related

How to find and get rid of docker residual data?

I am running Docker on Windows, and even though I run docker system prune it keeps using more and more space somewhere on my hard disk.
Often, after restarting the laptop and running prune again, I can reclaim some more, but it's always less than Docker actually takes.
I know Docker is using this space because the free space on my HDD decreases when I build new images and run containers, but pruning always gives back much less than was consumed.
It's eaten over 50 GB of my 256 GB SSD.
I'd appreciate any help in finding and efficiently locating all the files Docker leaves behind when building and running containers.
I tried many of the commands from here, and most work, but I always fail to reclaim all the space, and given that I have a very small SSD I really need all the space I can get back.
Many thanks in advance!
I suggest adding the --all flag to docker system prune, because of:
-a, --all : Remove all unused images, not just dangling ones
I use this to free up all the disk space I no longer need.
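For reference, a fuller cleanup might look like this (be careful: --volumes also deletes unused named volumes, including any data in them):
docker system prune --all
docker system prune --all --volumes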

Docker taking up a lot of disk space

I am using Docker Desktop for Windows on Windows 10.
I was experiencing issues with the system SSD always being full, so I moved the 'docker-desktop-data' distro (which is used to store Docker images and other data) off the system drive to drive D:, which is an HDD, using this guide.
Finally, I was happy to have a lot of space on my SSD... but Docker containers started to work more slowly. I guess this happens because HDD read/write operations are slower than on an SSD.
Is there a better way to solve the problem of the continuously growing size of Docker distros without impacting how fast containers work and images are built?
Actually, only by design. As you know, a Docker image is layered, so it might be feasible to check whether you can create something like a base image from which your actual image is derived.
Also, it might be sensible to check whether your base distro is small enough. I have often seen containers created from full-blown Debian or Ubuntu distros. That's not the best idea. Try to derive from an Alpine version, or check for even smaller approaches, as shown below.
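As a rough illustration (the tags below are just examples; use whatever versions suit you), pulling and listing both bases shows the size difference directly, with Ubuntu typically weighing in at several times the size of Alpine:
docker pull ubuntu:22.04
docker pull alpine:3.19
docker images ubuntu:22.04
docker images alpine:3.19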

Docker doesn't release / display space after running system prune and reporting 16 GB reclaimed, on Windows 10 Home edition

I'm really new to Docker, and my friend told me that docker system prune, run from an elevated command prompt, is supposed to clean pretty much everything. After running it, a message about "reclaiming 16.24 gb" was displayed, but File Explorer doesn't show any change on drive C:. Restarting Docker or the host machine didn't help, and pruning volumes yields the same result. How do I make Docker release the space, or display it correctly (as I don't really know which is the case)?
I'm not super familiar with the internals of Docker for Windows, but fairly recently it worked by having a small virtual machine with a virtual disk image. The reclaimed disk space is inside that virtual disk image, but the "file" for that image will still remain the same size on your physical disk. If you want to reclaim the physical disk space, there should be a "Reset Docker" button somewhere in the Docker for Windows control panel, which will essentially delete that disk image and create a new, empty one.
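If your installation uses the newer WSL 2 backend instead, a commonly suggested alternative to a full reset is compacting the virtual disk file with diskpart. This is only a sketch; the ext4.vhdx path below is the usual default and may differ on your machine:
wsl --shutdown
diskpart
Then, inside diskpart:
select vdisk file="C:\Users\<you>\AppData\Local\Docker\wsl\data\ext4.vhdx"
compact vdisk
exit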

How do I predefine the maximum runtime disk space for Containers launched from a Docker image?

With docker build, how do I specify the maximum disk space to be allocated to the runtime container?
This StackOverflow question mentions runtime constraints, and I am aware of --storage-opt, but those concern runtime parameters passed to dockerd or docker run; in contrast, I want to specify the limit in advance, at image build time.
(Note that I am not talking about specifying the disk footprint of the image, but rather about specifying the maximum disk space for the container.)
You can't do this at build time. A container's maximum disk space can only be limited at runtime; a sketch follows below.
Now, if you are concerned that your disk might get full due to Docker data (images, logs, etc.), what you can do is mount /var/lib/docker on a partition separate from the main system partition; that way, running out of space in Docker won't crash your system. In the case of Docker for Mac, there is a disk limit in the preferences.
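To illustrate the runtime-only limit: the --storage-opt flag below only works with storage drivers that support per-container size quotas (for example overlay2 on an xfs backing filesystem mounted with pquota, or the older devicemapper driver); on other setups the daemon rejects it:
docker run --rm --storage-opt size=10G alpine df -h /
And for the separate-partition approach, the data-root setting in /etc/docker/daemon.json can point Docker's storage at another mount; the path here is only an example:
{ "data-root": "/mnt/bigdisk/docker" }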

Docker save issue

I am on Docker version 1.11.2. I am trying to docker save an image, but I get an error.
I ran docker images to see the size of the image, and the result is this:
myimage 0.0.1-SNAPSHOT e0f04657b1e9 10 months ago 1.373 GB
The server I am on is low on space, but it has 2.2 GB available. When I run docker save myimage:0.0.1-SNAPSHOT > img.tar, I get:
write /dev/stdout: no space left on device
I removed all exited containers and dangling volumes in hopes of making it work, but nothing helped.
You don't have enough space left on the device. Free some more space, or gzip on the fly:
docker save myimage:0.0.1-SNAPSHOT | gzip > img.tar.gz
To restore it, Docker automatically detects that the archive is gzipped:
docker load < img.tar.gz
In a situation where you can't free enough space locally, you might want to use storage available over a network connection. NFS or Samba are a little more difficult to set up.
The easiest approach could be piping the output through netcat, but keep in mind that this is unencrypted, at least by default.
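A sketch of the netcat approach (hostname and port are placeholders, and flag syntax differs between netcat implementations):
On the receiving machine:
nc -l -p 9000 > img.tar.gz
On the Docker host:
docker save myimage:0.0.1-SNAPSHOT | gzip | nc receiving-host 9000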
But as long as your production server is that low on space, you are vulnerable to a bunch of other problems.
Until you can provide more free space, I wouldn't create files locally, zipped or not; you could bring important services down when you run out of free space.
