Accidentally removed docker images

I was running into disk space issues due to the /var/lib/docker/overlay2 directory. I was clever enough to remove some images that had been created a while ago, figuring that would be harmless.
It wasn't, and this was not a smart thing to do.
Nevertheless, here we are, and when I try to perform a docker build for my project now, I run into the following error:
error creating overlay mount to /var/lib/docker/overlay2/<id>/merged: no such file or directory
I would very much appreciate some help on how to fix this. Please let me know if you need further information or clarification, I don't work with docker a whole lot as you might surmise from the question.
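A hedged first attempt, before resorting to the full /var/lib/docker reset described in the "caches" thread below: restart the daemon, prune anything that may still reference the deleted layers, and rebuild without the cache. Whether this recovers your particular layer store is not guaranteed, and the myproject tag is a placeholder.
sudo systemctl restart docker
# Removes stopped containers, unused images, and networks (and build cache on newer versions):
docker system prune -af
docker build --no-cache -t myproject .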

Related

/var/lib/docker/containers/* eats my hard disk space

My raspberrypi suddenly had no more free space.
By looking at the folder sizes with the following command:
sudo du -h --max-depth=3
I noticed that a docker folder eats an incredible amount of hard disk space. It's the folder
/var/lib/docker/containers/*
The folder seems to contain some data for the currently running Docker containers. The directory names start with the corresponding Docker container IDs. One folder seemed to grow dramatically fast. After stopping the affected container and removing it, the related folder disappeared, so the folder evidently belonged to it.
Problem solved.
I now wonder what could cause this folder to grow so much, and what the best way is to avoid running into the same problem again later.
I could write a bash script that removes and re-creates the related container at boot, but better ideas are very welcome.
The container ids are directories, so you can look inside to see what is using space in there. The two main reasons are:
Logs from stdout/stderr. These can be limited with logging options (see the sketch after this list). You can view these with docker logs.
Filesystem changes. The underlying image filesystem is not changed, so any writes trigger a copy-on-write to a directory within each container id. You can view these with docker diff.
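As a sketch of the first point: the json-file logging driver accepts max-size and max-file options that rotate and cap each container's log; the image name my-image is a placeholder.
# Cap this container's log at 3 rotated files of 10 MB each:
docker run -d --log-opt max-size=10m --log-opt max-file=3 my-image
# Inspect the two usual space consumers for a given container:
docker logs <container-id>   # stdout/stderr log contents
docker diff <container-id>   # copy-on-write filesystem changes
The same options can also be set daemon-wide via the log-opts key in /etc/docker/daemon.json.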

Creating a volume size limit in Docker that enforces the limit - without first downloading the whole huge file and only afterwards saying the download failed?

I'm trying to create a container disk size limit in Docker. Specifically, I have a container that downloads data, and I want this data to stay under a limit that I can set beforehand.
So far, what I've created works on the surface level (it prevents the file from actually being saved onto the computer). However, I can watch the container doing its work, and I can see the download complete to 100% before it says 'Download failed.' So it seems to be downloading to a temporary location and only checking the file size before moving it to the final location. (Or not.)
This doesn't fully resolve the issue I was trying to fix, because the download obviously still consumes a lot of resources. I'm not sure what exactly I am missing here.
This is what creates the above behavior:
sudo zfs create new-pool/zfsvol1
sudo zfs set quota=1G new-pool/zfsvol1
docker run -e "TASK=download" -e "AZURE_SAS_TOKEN= ... " -v /newpool/zfsvol1:/data containerName azureFileToDownload
I got the same behavior while running the container interactively without volumes and downloading into the container. I tried changing the storage driver (as reported by docker info) from overlay to zfs, and it didn't help. I looked into Docker plugins, but they didn't seem like they would resolve the issue.
This is all run inside an Ubuntu VM; I made a zfs pool to test all of this. I'm pretty sure this is not supposed to happen because it's not very useful. Would anyone have an idea why this is happening?
OK, so I actually figured out what was going on, and as @hmm suggested, the problem wasn't caused by Docker. It was buffering into memory before writing to disk, and that was the issue. It seems that azcopy (Azure's copy command) first downloads to memory before saving to disk, which is not great at all, but there is nothing to be done about it in this case. I think my approach itself works completely.
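If the in-memory buffering itself needs containing, one hedged mitigation is to cap the container's memory alongside the ZFS quota; whether azcopy fails gracefully rather than being OOM-killed under this limit is an assumption to test.
# Cap RAM at 512 MB with no extra swap; the zfs quota still caps the disk side.
docker run -m 512m --memory-swap 512m -e "TASK=download" -e "AZURE_SAS_TOKEN= ... " -v /newpool/zfsvol1:/data containerName azureFileToDownload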

Do the various caches in Docker on Mac get corrupted?

I am trying to troubleshoot a Dockerfile I found on the web. As it is failing in a weird way, I am wondering whether failed docker builds or docker runs from various subsets of that file or other files that I have been experimenting with might corrupt some part of Docker's own state.
In other words, would it possibly help to restart Docker itself, reboot the computer, or run some other Docker command to eliminate that possibility?
Sometimes just rebooting things helps, and it's not wrong to try restarting Docker for Mac or doing a full reboot, but I can't think of a specific symptom it would fix, and it's not something I need to do routinely.
I've only really run into two classes of problems that sound like what you're describing.
If you have a Dockerfile step that consistently succeeds, but produces inconsistent results:
RUN curl http://may.not.exist.example.com/ || true
You can wind up in a situation where the underlying command failed or produced the wrong output, but the RUN step as a whole succeeded and got cached. docker build --no-cache will re-run the build ignoring that cache, and an extremely aggressive docker rmi sequence (deleting every build, current and past, of the image in question) will clean it up too; both are sketched below.
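A sketch of both cleanups; myapp is a placeholder image name.
# Rebuild from scratch, ignoring every cached layer:
docker build --no-cache -t myapp .
# Or delete every build of the image, current and past, then clear dangling layers:
docker rmi -f $(docker images -q myapp)
docker image prune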
The other class of problem I've encountered involves some level of corruption in /var/lib/docker. This has very obvious symptoms, generally "file not found" or "failed mounting directory" type errors on a setup that you otherwise know works. I've encountered it more on native Linux than on Docker for Mac, probably because the DfM Linux installation is more controlled and optimized for Docker (it definitely isn't running a 3-year-old kernel with arbitrary vendor patches). On Linux you can work around this by stopping Docker, deleting everything in /var/lib/docker, and starting Docker again. In Docker for Mac, the preferences window has a "Reset" page with various destructive cleanup options, and "Reset to factory defaults" is the closest equivalent.
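On Linux, a sketch of that reset, assuming systemd manages the daemon; this deletes every local image, container, and volume.
sudo systemctl stop docker
# WARNING: irreversibly removes all images, containers, volumes, and networks.
sudo rm -rf /var/lib/docker
sudo systemctl start docker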
I would first attempt using Docker's 'Diagnose and Feedback' option. This runs tests on the health of Docker and the Docker engine.
Docker desktop also has options for various troubleshooting scenarios under 'Preferences' > 'Reset' (if you're using Docker Desktop) which have helped me in the past.
A brief look through previous Docker release notes suggests it has certainly been possible to corrupt the Docker engine in the past, and that the engine has been iteratively fixed since.

Docker for Mac — Extremely slow request times

My .dockerignore is setup to ignore busy directories, but altering a single file seems to have a huge impact on the run performance.
If I make a change to a single, non-dependent file (for example .php or .jpg) in the origin directory, the performance of the next request is really slow.
Subsequent requests are fast, until I make a change to any file in the origin directory and then request times return to ~10s.
Neither :cached nor :delegated makes any difference.
Is there any way to speed this up? It seems like Docker is doing a lot in the background, considering only one file has changed.
The .dockerignore file does not affect volume mounts. It is only used when sending context to the Docker daemon during image builds. So that is not a factor here.
Poor performance in some situations is a longstanding known issue in Docker for Mac. They discuss this topic in the documentation. In my experience, the worst performance happens with fs event scanners, i.e. you are watching some directory for changes and reloading an app server in response. My way of dealing with that is to disable the fs event watcher and restart the app server manually when that's needed. (May or may not be practical for your situation.)
The short answer is that you can try third party solutions, or you can accept the poor performance in development, realizing it won't follow you to production (which is probably not going to be on the Mac platform).
I ran into a similar issue, but on Windows. The way I got around it was to use Vagrant, which has great support for provisioning using Docker. In your Vagrantfile, set up the shared directory to use rsync (see the sketch after the links below). This copies the directories onto the VM, where Docker can access them at local-disk speed.
This is a great article that helped me come to this conclusion: http://blog.zenika.com/2014/10/07/setting-up-a-development-environment-using-docker-and-vagrant/
More information on provisioning vagrant using docker: https://www.vagrantup.com/docs/provisioning/docker.html
More information on vagrant rsync: https://www.vagrantup.com/docs/synced-folders/rsync.html
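A minimal sketch of such a Vagrantfile, assuming a stock Ubuntu box; the box name and paths are placeholders.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  # rsync the project into the VM instead of using a slow shared folder
  config.vm.synced_folder ".", "/vagrant", type: "rsync"
  # let Vagrant install and manage Docker inside the VM
  config.vm.provision "docker"
end
EOF
vagrant up
vagrant rsync-auto   # keep pushing local file changes into the VM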
I hope this helps you as much as it did me.

Docker's aufs diff folder is growing huge in terms of size

I'm having a problem with Docker's space usage. I have
/var/lib/docker/aufs/diff/e20ed0ec78d30267e8cf855c6311b02089b6086ea149c21997a3e6cb9757ecd4/tmp/registry-dev/docker/registry/v2/blobs# du -sh
4.6G .
Can I find out which container this folder belongs to? I have a Docker registry running, but inside it I have
/var/lib/registry/docker/registry/v2/blobs# du -sh
465M .
I suspect the Docker upgrade could have left it behind (I used the migration tool, https://docs.docker.com/engine/migration/), or, since I was building the Docker registry myself before, moving to the pre-compiled registry left it behind.
Can I somehow check which container it belongs to? Or maybe it doesn't belong to any?
I had the same issue and spotify/docker-gc fixed it. Clone it, then follow "Running as a Docker Image".
spotify/docker-gc alone is not going to fix it, but it makes turning things around much easier. The first thing you need to do is stop committing onto the same image: as I realized, that builds up a huge chain of diff dependencies. What I did was commit all my running containers to new image names and tags, then stop and restart the containers. After that, I deleted the old images manually and ran spotify/docker-gc, which saved me about 20% of disk space. If I ran spotify/docker-gc before committing to new images, nothing happened.
If you use spotify/docker-gc, please do a DRY_RUN first (see the sketch below).
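A sketch of that dry run, based on the socket mount and DRY_RUN environment variable from the project's docs; confirm the exact invocation against the spotify/docker-gc README.
# Report what would be removed, without deleting anything:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -e DRY_RUN=1 spotify/docker-gc
# Run it for real once the report looks right:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock spotify/docker-gc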
