How can I reinit docker layers?

Using docker under Kubuntu 18 I ran out of free space on the device.
I ran these commands to clear space:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker-compose down --remove-orphans
docker system prune --force --volumes
As I was still out of free space, I opened the /var/lib/docker/overlay2/ directory and deleted a lot of subdirectories under it.
After that I got this error:
$ docker-compose up -d --build
Creating network "master_default" with the default driver
ERROR: stat /var/lib/docker/overlay2/36af81b800ebb595a24b6c724318c1126932d2bfae61e2c98bfc65a203b2b928: no such file or directory
Looks like that was not a good way to free space. Which way is good in my case?
Is there a way to reinit my docker apps?
Thanks!

As I was still out of free space, I opened the /var/lib/docker/overlay2/ directory and deleted a lot of subdirectories under it.
At this point, the docker filesystem has been corrupted. To repair it, the best you can do is back up anything you want to save, particularly any volumes, stop the docker engine (systemctl stop docker), delete the entire docker filesystem (rm -rf /var/lib/docker), and restart docker (systemctl start docker).
At that point the engine will be completely empty without any images, containers, etc. You'll need to pull/rebuild your images and recreate the containers you were running. Hopefully that's as easy as a docker-compose up.
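Put together, and assuming a default Linux install layout, the sequence looks something like this (back up any volume data first):
systemctl stop docker
rm -rf /var/lib/docker
systemctl start docker
docker-compose up -d --build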

/var/lib/docker/overlay2/ is where docker stores the image layers.
Now, docker system prune -a removes all unused images, stopped containers, and the build cache, if I'm not wrong.
One piece of advice: since you are building images, check out Docker BuildKit. I know docker-compose added support for it, but I don't know if your version supports it. To make it short, building your images will be way faster.
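If you want to try it, BuildKit can be switched on per invocation via environment variables (these are the standard toggles; whether your docker-compose release honors COMPOSE_DOCKER_CLI_BUILD depends on its version):
DOCKER_BUILDKIT=1 docker build .
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build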

Related

Docker: "You don't have enough free space in /var/cache/apt/archives/"

I have a Dockerfile which, when I build it, results in the error
E: You don't have enough free space in /var/cache/apt/archives/
Note that the image sets up a somewhat complex project with several dependencies that require quite a lot of space. For example, the list includes Qt. This is only a thing during the construction of the image, and in the end, I expect it to have a size of maybe 300 MB.
Now I found this: https://unix.stackexchange.com/questions/578536/how-to-fix-e-you-dont-have-enough-free-space-in-var-cache-apt-archives
Given that, what I tried so far is:
Freeing the space used by docker images so far by calling docker system prune
Removing unneeded installation files by calling sudo apt autoremove and sudo apt autoclean
There was also the suggestion to remove data in /var/log, which currently has a size of 3 GB. However, I am not the system administrator and thus wary to do such a thing.
Is there any other way to increase that space?
And, preferably, is there a more sustainable solution, allowing me to build several images without having to search for spots where I can clean up the system?
Try this suggestion. You might have a lot of unused images that need to be deleted.
https://github.com/onyx-platform/onyx-starter/issues/5#issuecomment-276562225
Converting @Dre's suggestion into code, you might want to use the Docker prune commands for containers, images & volumes:
docker container prune
docker image prune
docker volume prune
You can use these commands in sequence:
docker container prune; docker image prune; docker volume prune
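For the question's "more sustainable" angle, a common Dockerfile pattern (a sketch; the Qt package name is just an example) is to install and clean up in the same RUN layer, so the apt cache and package lists never persist into an image layer:
RUN apt-get update && \
    apt-get install -y --no-install-recommends qtbase5-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*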
Free Space without removing your latest images
Use the following command to see the different types of reclaimable storage (the -v verbose option provides more detail):
docker system df
docker system df -v
Clear the build cache (the -a option will remove unused build cache):
docker builder prune -a
Remove dangling images (untagged layers left behind by old and previous image builds):
docker rmi -f $(docker images -f "dangling=true" -q)
Increase the disk image size using the Docker Desktop UI
Docker > Preferences > Resources > Advanced > adjust Disk image size > Apply & Restart
TL;DR:
run
docker system prune -a --volumes
I tried to increase the disk space and prune the images, containers and volumes manually but kept facing the issue again and again. When I checked the disk consumption on my machine, I found a lot of space consumed by the ~/Library/Containers/com.docker.docker directory (Docker Desktop on macOS). A system prune cleaned up a lot of space and docker builds started working again.

Can I run docker system prune -a without downtime

I want to execute docker system prune -a to clean up space. Could someone please tell me if I can execute this command without docker-compose down (i.e., without removing the running containers and without downtime)?
Docker documentation https://docs.docker.com/engine/reference/commandline/system_prune/ says:
Remove all unused containers, networks, images (both dangling and unreferenced), and optionally, volumes.
About -a option it says:
Remove all unused images not just dangling ones
So this command does not cause any downtime: running containers and the images they use are never considered unused.
You can execute it without "docker-compose down".
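A quick sanity check, assuming your compose stack is currently up:
docker ps                # note the running containers
docker system prune -a   # only unused resources are removed
docker ps                # the same containers are still running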

How do I clean docker?

I got an error because there is not enough disk space.
I decided to check how much was free and came across this (see the screenshot).
I cleaned up via docker system prune -a and
docker container prune -f
docker image prune -f
docker system prune -f
But only 9GB was cleared
Prune removes containers/images that have not been used for a while or are stopped. I would suggest you do a docker ps -a and then stop/remove all the containers that you don't want with docker stop <container-id> and docker rm <container-id>, then list the remaining images with docker images and remove them with docker rmi <image-name>.
Once you have stopped/removed all the unwanted containers, run docker system prune --volumes to remove all the volumes, build cache, and dangling images.
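Put together (destructive: this sketch assumes you want to stop and remove every container and image on the host):
docker ps -a
docker stop $(docker ps -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)
docker system prune --volumes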
Don't forget to prune the volumes! Those almost always take up way more space than images and containers. docker volume prune. Be careful if you have anything of value stored in them though.
It could be the logging of the running containers. I've seen Docker logging fill disks with log output. By default Docker containers can write unlimited logs.
I always add a logging configuration to my docker-compose to restrict the total size. See the Docker Logging Configuration docs.
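For example, a per-service cap in docker-compose.yml can look like this (a minimal sketch; the service name, image, and limits are placeholders):
services:
  web:
    image: nginx
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"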
From the screenshot I think there's some confusion on what's taking up how much space. Overlay filesystems are assembled from directories on the parent filesystem. In this case, that parent filesystem is within /var/lib/docker which is part of / in your example. So the df output for each overlay filesystem is showing how much disk space is used/available within /dev/vda2. Each container isn't using 84G, that's just how much is used in your entire / filesystem.
To see how much space is being used by docker containers, images, volumes, etc, you can use docker system df. If you have running containers (likely from the above screenshot), docker will not clean those up, you need to stop them before they are eligible for pruning. Once containers have been deleted, you can then prune images and volumes. Note that deleting volumes deletes any data you were storing in that volume, so be sure it's not data you wanted to save.
It's not uncommon for docker to use a lot of disk space from downloading lots of images (docker makes it easy to try new things) and those are the easiest to prune when the containers are stopped. However what's harder to see are the logs that containers are outputting which will slowly fill the drive within the containers directory. For more details on how to clean the logs automatically, see this answer.
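The usual daemon-wide fix (assuming Linux with the default json-file driver) is log rotation in /etc/docker/daemon.json; the daemon needs a restart afterwards, and the settings only apply to newly created containers:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}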
If you want, you could dig deep at a granular level and pinpoint the file(s) causing this much disk usage:
du -sh /var/lib/docker/overlay2/<container hash>/merged/* | sort -h
This would help you come to a conclusion much more easily.

Old files remain in new docker container

I have a basic Hyperledger Fabric network where the nodes uses docker containers.
Inside each node I have done some file creation and editing. However, I now want to restart the network with clean containers.
I have tried shutting down all containers, removing all containers, images and networks, and then running the docker prune command.
I have also tried to delete all volumes.
However, once I re-create the Fabric network and bash into a container, the old files that I custom created are still there. I never created those files on the host machine, only inside that container. I do not understand how it is possible that those files still exist. I even tried to delete the images.
The system is Ubuntu 18.04.
Can anybody spot a potential fix for this?
Delete the volumes of the containers after stopping and removing the containers with the commands below:
docker kill $(docker ps -aq)
docker rm $(docker ps -aq)
Then remove the volumes of the containers with the command below:
docker system prune --volumes -f
This removes all stopped containers, unused networks, dangling images, and unused volumes.
Hope this helps.
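If the files still reappear, the data may live in named volumes created by compose; assuming you start the network with a compose file, this removes them explicitly:
docker-compose down --volumes
docker volume ls    # verify no leftover volumes remain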

How to clean up Docker

I've just noticed that I ran out of disk space on my laptop. Quite a lot is used by Docker as found by mate-disk-usage-analyzer:
The docker/aufs/diff folder contains 152 folders ending in -removing.
I already ran the following commands to clean up:
Kill all running containers:
# docker kill $(docker ps -q)
Delete all stopped containers
# docker rm $(docker ps -a -q)
Delete all images
# docker rmi $(docker images -q)
Remove unused data
# docker system prune
And some more
# docker system prune -af
But the screenshot was taken after I executed those commands.
What is docker/aufs/diff, why does it consume that much space and how do I clean it up?
I have Docker version 17.06.1-ce, build 874a737. It happened after a cleanup, so this is definitely still a problem.
The following is a radical solution. IT DELETES ALL YOUR DOCKER STUFF. INCLUDING VOLUMES.
$ sudo su
# service docker stop
# cd /var/lib/docker
# rm -rf *
# service docker start
See https://github.com/moby/moby/issues/22207#issuecomment-295754078 for details
It might not be /var/lib/docker
The docker location might be different in your case. You can use a disk usage analyzer (such as mate-disk-usage-analyzer) to find the folders which need most space.
See Where are Docker images stored on the host machine?
This dir is where container rootfs layers are stored when using the AUFS storage driver (default if the AUFS kernel modules are loaded).
If you have a bunch of *-removing dirs, this is caused by a failed removal attempt. This can happen for various reasons, the most common is that an unmount failed due to device or resource busy.
Before Docker 17.06, if you used docker rm -f to remove a container, all container metadata would be removed even if there was some error somewhere in the cleanup of the container (e.g., failing to remove the rootfs layer).
In 17.06 it will no longer remove the container metadata and instead flag the container with a Dead status so you can attempt to remove it again.
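To spot those, the status filter works (status=dead is a standard docker ps filter):
docker ps -a --filter status=dead
docker rm <container-id>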
You can safely remove these directories, but I would stop docker first, then remove, then start docker back up.
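A sketch of that, assuming the default /var/lib/docker location:
systemctl stop docker
rm -rf /var/lib/docker/aufs/diff/*-removing
systemctl start docker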
Docker takes up a lot of gigabytes in three main areas:
1. Downloaded and compiled images. Clean unused and dead images by running:
docker image prune -a
2. Volumes. Docker creates a lot of volumes, and some of them belong to dead containers that are no longer used. Clean the volumes and reclaim the space using:
docker system prune -af && \
docker image prune -af && \
docker system prune -af --volumes && \
docker system df
3. Container logs. Docker container logs are also notorious for generating GBs of data, and overlay2 storage for container layers is another source of eaten-up GBs.
A better way is to calculate the size of the docker image and then restrict the container with upper caps for storage and logs, as in the command below. These options require Docker v19 or above.
docker run -it --storage-opt size=2G --log-opt mode=non-blocking --log-opt max-buffer-size=4m fedora /bin/bash
Note that this is actually a known, yet still pending, issue: https://github.com/moby/moby/issues/37724
If you have the same issue, I recommend giving it a "thumbs up" on GitHub so that it gets addressed soon.
I had the same issue.
In my case the solution was:
View all images:
docker images
Remove old unused images:
docker rmi IMAGE_ID
Possibly you will also need to prune stopped containers:
docker container prune
P.S. docker --help is a good resource :)
