How to delete unused docker images in swarm?

We have a system where users may install Docker containers. We don't have a limit on what they can install. After some time, we need to clean up - delete all the images that are not in use in the swarm.
What would be the solution for that using the Docker remote API?
Our idea is to have a background image-garbage-collector thread that:
lists all the images
tries to delete each one
ignores any failures and moves on
Would this make sense? Would this affect the swarm somehow?
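For concreteness, a minimal sketch of the loop we have in mind, written against the Engine remote API over the local socket (our assumptions: curl built with --unix-socket support and jq available for JSON parsing):

#!/bin/sh
# List every image ID via the remote API, then try to delete each one.
# The daemon refuses to delete images still used by containers (HTTP 409);
# curl -f turns that into a non-zero exit status, which we deliberately ignore.
SOCK=/var/run/docker.sock
curl -s --unix-socket "$SOCK" http://localhost/images/json | jq -r '.[].Id' |
while read -r id; do
  curl -sf -X DELETE --unix-socket "$SOCK" "http://localhost/images/$id" >/dev/null || true
done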

Cleaner way to list and (try to) remove all images
The command docker rmi $(docker images -q) would do the same as the answer by @tpbowden but in a cleaner way. The -q|--quiet flag lists only the image IDs.
It may delete frequently used images (ones not running at cleaning time)
If you do this, when the user tries to swarm run deleted-image it will:
Either pull the image (< insert network consumption warning here />)
Or just block, as the pull action is not automatic in Swarm if I remember right (< insert frequent support request warning here about misunderstood Swarm behavior />).
"dangling=true" filter:
A useful option is the --filter "dangling=true". Executing docker images -q --filter "dangling=true" will display dangling images, i.e. untagged images whose layers no longer back any tagged image.
Tough challenge
Your issue reminds me of memory management in a computer. Your real issue is:
How to remove images that won't be used in the future?
Which is really hard and really depends on your policy. If your policy is that old images are to be deleted, the command that could help is docker images --format='{{.CreatedSince}}:{{.ID}}'. But then the hack starts... you may need to grep "months" and then cut -d ':' -f 2.
The whole command would result as:
docker rmi $(docker images --format='{{.CreatedSince}}:{{.ID}}' | grep months | cut -d ':' -f 2)
Note that this command will need to be run on every Swarm agent, not only on the Swarm manager.
Swarm and registry
Be aware that a swarm pull image:tag will not pull the image onto Swarm agents! Each Swarm agent must pull the image itself. Thus deleting still-used images will result in network load.
I hope this answer helps. At this time there is no means to query "images not used since a month", AFAIK.

All you need is 'prune'
$ docker image prune --filter until=72h --force --all

docker images | tail -n+2 | awk '{print $3}' | xargs docker rmi
This will list all images, strip the top line with column headings, grab the 3rd column (image ID hash) and then attempt to remove them all. Docker will prevent you from removing any images that are currently used by running containers.
If you want to do this in a slightly less 'hacky' way, you could use Docker's API to get images which aren't being used and delete them that way.
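For example, a rough sketch of that API route (assuming curl with --unix-socket support and jq; the filters parameter is the URL-encoded JSON {"dangling":["true"]}):

# List dangling (untagged) images via the Engine API and try to delete each
# one, ignoring failures for images that are still in use.
curl -s --unix-socket /var/run/docker.sock \
  'http://localhost/images/json?filters=%7B%22dangling%22%3A%5B%22true%22%5D%7D' \
| jq -r '.[].Id' \
| while read -r id; do
    curl -sf -X DELETE --unix-socket /var/run/docker.sock \
      "http://localhost/images/$id" >/dev/null || true
  done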

Related

How to prune old docker images, but only for a selected container?

We know that docker image prune has a --filter argument that can be used to select (and remove) images older than a given number of hours (e.g. --filter "until=7*24h").
We also know that docker images has a similar --filter argument that supports a before key (e.g. docker images --filter "before=ubuntu:22.04"), but that can only filter images created before a given image or reference (not a date).
But pruning as described above would apply to all images, which is rather too broad. What if we wanted to prune the "old" images more selectively, restricting the pruning to the images of just a single repository (e.g. to spare older base images, etc.)?
I've come up with something looking ugly, but apparently rather effective.
The example below forcefully removes all images of the mirekphd/ml-cache container that are older than 2 weeks (the shortest period for which this implementation works - it can be tweaked to any period though). Caution: as a special case it can remove all images of this container:
$ MAX_WEEK_NUM=2 && REPO=mirekphd && CONTAINER=ml-cache && \
  docker images --format "{{.ID}} {{.CreatedSince}}" --filter=reference="$REPO/$CONTAINER" \
  | grep "[$MAX_WEEK_NUM-9999] weeks\|[1-999] months\|[1-99] years" \
  | awk '{print $1}' \
  | xargs docker rmi -f

Remove multiple Docker containers based on creation time

I have many Docker containers both running and exited. I only want to keep the containers that were created before today/some specified time -- I would like to remove all containers created today. Easiest way to approach this?
Out of the box, on any OS, you can remove all containers newer than a given one:
docker rm -f $(docker ps -a --filter 'since=<containername>' --format "{{.ID}}")
The container given in since will be kept, but all newer ones will be removed. Maybe that suits your use case.
If you really need a cutoff based on a period of time, some bash magic can do that (see the sketch after the breakdown below). But specify your needs exactly then...
In detail:
docker rm: removes one or more containers
-f: forces the removal of running containers
docker ps -a: lists all containers
--filter 'since=..': keeps only the containers created after the given one
--format "{{.ID}}": prints only the ID column
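If "created today" is literally the requirement, here is a rough sketch of that bash magic (assuming GNU date, GNU xargs for the -r flag, and the default {{.CreatedAt}} output, which begins with the YYYY-MM-DD date):

# Remove every container (running or exited) whose creation date is today.
# CreatedAt prints e.g. "2023-05-01 12:00:00 +0000 UTC", so matching the
# date surrounded by spaces is unambiguous.
docker ps -a --format '{{.ID}} {{.CreatedAt}}' \
| grep " $(date +%Y-%m-%d) " \
| awk '{print $1}' \
| xargs -r docker rm -f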

Docker: "You don't have enough free space in /var/cache/apt/archives/"

I have a Dockerfile which, when I try to build it, results in the error
E: You don't have enough free space in /var/cache/apt/archives/
Note that the image sets up a somewhat complex project with several dependencies that require quite a lot of space; for example, the list includes Qt. The space is only needed during the construction of the image, and in the end I expect the image to have a size of maybe 300 MB.
Now I found this: https://unix.stackexchange.com/questions/578536/how-to-fix-e-you-dont-have-enough-free-space-in-var-cache-apt-archives
Given that, what I tried so far is:
Freeing the space used by docker images so far by calling docker system prune
Removing unneeded installation files by calling sudo apt autoremove and sudo apt autoclean
There was also the suggestion to remove data in /var/log, which currently has a size of 3 GB. However, I am not the system administrator and am thus wary of doing such a thing.
Is there any other way to increase that space?
And, preferably, is there a more sustainable solution, allowing me to build several images without having to search for spots where I can clean up the system?
Try this suggestion. You might have a lot of unused images that need to be deleted.
https://github.com/onyx-platform/onyx-starter/issues/5#issuecomment-276562225
Converting @Dre's suggestion into code, you might want to use Docker's prune command for containers, images & volumes:
docker container prune
docker image prune
docker volume prune
You can use these commands in sequence:
docker container prune; docker image prune; docker volume prune
Free Space without removing your latest images
Use the following command to see the different types of reclaimable storage (the -v verbose option provides more detail):
docker system df
docker system df -v
Clear the build cache (the -a option will remove unused build cache):
docker builder prune -a
Remove dangling images (untagged images left over from old and previous image builds):
docker rmi -f $(docker images -f "dangling=true" -q)
Increase Disk image size using Docker UI
Docker > Preferences > Resources > Advanced > adjust Disk image size > Apply & Restart
TLDR;
run
docker system prune -a --volumes
I tried to increase the disk space and prune the images, containers and volumes manually but was facing the issue again and again. When I checked the disk usage on my machine, I found a lot of space consumed by the ~/Library/Containers/com.docker.docker directory (Docker Desktop on macOS). Doing a system prune cleaned up a lot of space and docker builds started working again.

Remove older docker images but not layers that are still referenced

Say I have a build machine which builds many docker images myimage:myversion. I consume about 1GB disk space per 100 images created and I certainly don't need all of them. I'd like to keep, say, the most recent 10 images (and delete everything older) but I want to make sure I have all of the cached layers from the 10 builds/image. If I have all of the layers cached, then I'm more likely to have a fast build on my next run.
The problem is all of the images (very old and brand new) share a lot of layers so I can't blindly delete the old ones as there is a ton of overlap with the new ones.
I don't want to use docker image prune (https://docs.docker.com/config/pruning/#prune-images) as what it keeps depends on which containers exist (regardless of state), and I am deleting the containers, so prune would end up deleting way too much stuff.
Is there a simple command I can run periodically to achieve the state I described above?
Simple, no, but some shell wizardry is possible. I think this shell script will do what you want:
#!/bin/sh
docker images --filter 'reference=myimage:*' \
--format '{{ .CreatedAt }}/{{ .ID }}/{{ .Repository }}:{{ .Tag }}' \
| sort -r \
| tail -n +11 \
| cut -d / -f 2 \
| xargs docker rmi
(You might try running this one step at a time to see what comes out.)
In smaller pieces:
List all of the myimage:* images in a format that starts with their date. (If you're using a private registry you must include the registry name as a separate part and you must explicitly include the tag; for instance to list all of your GCR images you need -f 'reference=gcr.io/*/*:*'.)
Sort them, by the date, newest first.
Skip the first 10 lines and start printing at the 11th.
Take only the second slash-separated field (which from the --format option is the hex image ID).
Convert that to command-line arguments to docker rmi.
The extended docker images documentation lists all of the valid --format options.

Cached Docker image?

I created my own image and pushed it to my repo on Docker Hub. I deleted all the images on my local box with docker rmi -f ...... Now docker images shows an empty list.
But when I do docker run xxxx/yyyy:zzzz it doesn't pull from my remote repo and starts a container right away.
Is there any cache or something else? If so, what is the way to clean it all?
Thank you
I know this is old now but thought I'd share still.
Docker keeps all those old image layers in its build cache unless you specifically build with --no-cache. To clear the cache down, you can simply run docker system prune -a -f and it should clear everything, including the cache.
Note: this will clear everything down, including stopped containers.
You forced removal of the image with -f. Since you used -f I'm assuming that the normal rmi failed because containers based on that image already existed. What this does is just untag the image. The data still exists as a diff for the container.
If you do a docker ps -a you should see containers based on that image. If you start more containers based on that same previous ID, the image diff still exists so you don't need to download anything. But once you remove all those containers, the diff will disappear and the image will be gone.
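A quick sketch to see that in action (xxxx/yyyy:zzzz stands for the question's placeholder tag; the container IDs come from the docker ps -a output):

docker ps -a --format '{{.ID}} {{.Image}}'   # after rmi -f, the Image column shows the bare image ID
docker rm <those-container-ids>              # remove the containers that pin the image data
docker image prune -f                        # the now-unreferenced layers actually disappear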
