gitlab docker volumes not cleaned after artifact expiration date

We have some GitLab runners that build our application in docker containers, and each of my artifacts expires after 1 day.
But after the expiry date my runners' volumes are still there as dangling volumes.
After some time my docker environment is full and I have to delete them all manually.
I have created a script that cleans up volumes, and it goes like this:
docker volume ls -qf dangling=true | xargs --no-run-if-empty docker volume rm
But I'm scared that my artifacts and npm cache will get deleted.
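One way to protect those caches, sketched here under the assumption that the cache volumes have known, stable names (npm-cache and runner-cache are hypothetical names), is to filter them out before removing the rest:
docker volume ls -qf dangling=true | grep -vE '^(npm-cache|runner-cache)$' | xargs --no-run-if-empty docker volume rm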

Related

remove a volume if it exists

As part of my CI/CD deployment, there is a volume my_volume that gets created on docker-compose build/up and needs deleting on every deployment.
Therefore the CI/CD script calls docker volume rm my_volume before docker-compose build/up.
But if a build fails, subsequent builds will error out on docker volume rm my_volume, because the volume doesn't exist.
How can I remove this volume only if it exists?
You can ignore the errors:
docker volume rm ${some_volume} || true
Or you can start by making sure the project is completely down:
docker-compose down -v
docker-compose up -d
Or you can start labeling your volumes and containers when testing, and prune those labels:
docker volume prune -f --filter 'label=ci-test'
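For that label filter to match anything, the volumes have to carry the label when they are created. A minimal sketch, assuming the volume is created explicitly rather than by compose (the ci-test label key mirrors the filter above):
docker volume create --label ci-test=true my_volume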
In order to ignore a failure while calling docker volume rm my_volume, run the commands in this order:
set +e
docker volume rm my_volume
docker-compose build
docker-compose up -d
true
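Another way to express "only if it exists" is to check the exit code of docker volume inspect; a minimal sketch:
if docker volume inspect my_volume >/dev/null 2>&1; then docker volume rm my_volume; fi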

How to delete a docker image?

My project includes a Dockerfile to build an image. I've made modifications to that Dockerfile and want to see if it works. So I closed my program and listed all docker images in PowerShell using docker image ls -a. I deleted all unused images using docker rmi -f $(docker images -a -q) and docker image prune -a.
But my image does not actually get deleted, it simply gets 'untagged':
Untagged: my_image:latest
Untagged: my_image@sha256:16eb92a476be0...
All containers are stopped and deleted before trying to remove the image.
When I restart my application to build a new image, the old one keeps getting used:
> docker image ls -a
REPOSITORY TAG IMAGE ID CREATED
/my_image latest 0d79904a74b0 2 months ago
How do I actually, physically, remove the old image so my application can build a new one?
First, you need to stop and delete any container that is using the image you want to remove.
docker ps -a: to list all containers on your machine, both running and stopped.
docker stop <container_id>: to stop a running container.
docker rm <container_id>: to remove/delete a container (only if it is stopped).
docker image ls: to list all available images with their tag, image id, creation time and size.
docker rmi <image_id>: to delete a specific image.
docker rmi -f <image_id>: to delete a docker image forcefully.
docker rm -f $(docker ps -aq): to delete all containers on your machine.
docker image rm <image_name>: to delete a specific image by name.
To remove the image, you have to remove or stop all the containers which are using it.
docker system prune -a: to clean the docker environment, removing all stopped containers and all images not used by a running container.
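If an image still refuses to go away, some container (possibly a stopped one) was most likely created from it. A small sketch for finding and removing those containers before retrying rmi, reusing the old image id 0d79904a74b0 from the listing above:
docker ps -aq --filter ancestor=0d79904a74b0 | xargs -r docker rm -f
docker rmi 0d79904a74b0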

reset a docker container to its initial state every 24 hours

I need to reset a Moodle docker container to its initial state every 24 hours. This container will be running a demo site where users can log in and make various setting changes, and the site needs to reset itself every day. Does docker provide any such feature?
I searched for a docker reset command but it doesn't seem to exist yet.
Will such a process of removing and re-initiating the docker container work?
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up -d
I should be able to do this programmatically of course, preferably using a shell script.
Yes. You do not need a reset; just recreating the container is enough. But if you bind-mount volumes from the host, recreating alone will not work, because docker-compose up will pick up whatever is still in the host's persistent storage.
Write a bash script and schedule it with cron to run every day at midnight (or whatever time you want) to create a fresh container.
0 0 * * * create_container.sh
create_container.sh
#!/bin/bash
docker-compose rm -f
docker-compose up -d
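Registering the script with cron could look like the sketch below; the path /opt/scripts/create_container.sh is an assumption for illustration. Note that docker-compose has to find the compose file, so the script should cd into the project directory first or pass -f with the full path to docker-compose.yml.
chmod +x /opt/scripts/create_container.sh
(crontab -l 2>/dev/null; echo "0 0 * * * /opt/scripts/create_container.sh") | crontab -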
Or you can use your own script, but if there are bind volumes then clear those files before creating the container:
rm -rf /path/to_host_shared_volume
docker rm -f $(docker ps -a -q)
.
.
.
As the behaviour of -v is to create the host directory if it does not exist, it is safe to remove that directory; it will be recreated on the next up.
Or, if you want to remove everything, you can use docker system prune:
#!/bin/bash
docker system prune -f -a --volumes
docker-compose up -d
Remove all unused containers, networks, images (both dangling and unreferenced), and volumes.
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
- all images without at least one container associated to them
- all build cache
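If other workloads share the same host, pruning everything may be too aggressive. A narrower variant that reuses the docker-compose down -v approach shown earlier (the project name moodle is an assumption) would be:
docker-compose -p moodle down -v
docker-compose -p moodle up -d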

Why does docker build from a Dockerfile create containers when the build exits with an error?

I'm using docker to build images from a Dockerfile. In the process an error happened, so the build exited with an error code.
When I run docker images I can see an untagged image, so I tried to remove it with docker rmi xxxxx. But it always fails, saying the image can't be removed because it's used by a stopped container.
So I dug a little deeper. I ran docker ps -a, and now I can see a long list of stopped containers which were created when the build process failed.
Why are these containers created? I thought an image is like the concept of a "class" in programming, and a container is an instance of the class. Before the image is successfully built, why are instances created? How can I build an image without all those stopped containers?
Each line of your Dockerfile creates an intermediate container to execute the Dockerfile directive on that line.
If the directive succeeds, that creates an intermediate image, which will be the base for the next container to be launched (to execute the next line of your Dockerfile).
If said directive fails, that can leave a container in an Exited state, which in turn will block the intermediate image it was created from.
Simply clean up all the containers, then all the images, and try again.
If you have repeatedly tried to build your Dockerfile, you end up with a collection of intermediate images and containers.
That is why, when my build (finally) succeeds, I always clean up extra containers, images and (with docker 1.10+) volumes in my build script:
cmdb="docker build${proxy}${f} -t $1 $2"
# echo "cmdb='${cmdb}"
if eval ${cmdb}; then
docker rm $(docker ps -qa --no-trunc --filter "status=exited" 2>/dev/null) 2>/dev/null
docker rmi $(docker images --filter "dangling=true" -q --no-trunc 2>/dev/null) 2>/dev/null
docker volume rm $(docker volume ls -qf dangling=true 2>/dev/null) 2>/dev/null
exit 0
fi
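On docker 1.13 and later the same kind of cleanup can be expressed with the prune subcommands; a minimal sketch:
docker container prune -f
docker image prune -f
docker volume prune -f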
The build process creates an intermediate container for each step in the Dockerfile, as you can see in this example:
...
Removing intermediate container e07079f73a9f
Step 3 : RUN cd /opt/
---> Running in 78d480a57cca
---> 324e9006d642
Removing intermediate container 78d480a57cca
Step 4 : RUN unzip /opt/mule-ee-distribution-standalone-3.7.3.zip -d /opt/
---> Running in 81aa445c770c
...
---> e702e1cff4ee
Removing intermediate container 81aa445c770c
To speed up the build process the intermediate containers are used as a cache. Those containers are deleted after a successful build, but when you try to build multiple times, the 'old' intermediate containers and images will still be on your system.
If you wish to keep the intermediate containers after the build is complete, you must use --rm=false. By default it is true, so after a successful build the intermediate containers are deleted. When a build fails while using an intermediate container, that container exits. The intermediate image for this container is tagged <none>:<none> and can't be deleted normally because there is an (exited) container created from it. You can delete the image by using the -f (force) flag: docker rmi -f image-id
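A small sketch that removes the exited build containers first and then the dangling <none> images, so the force flag is usually not needed; note that this removes every exited container on the host, not just the build leftovers:
docker ps -aq --filter "status=exited" | xargs -r docker rm
docker image prune -f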

How to remove all docker volumes?

If I do a docker volume ls, my list of volumes is like this:
DRIVER VOLUME NAME
local 305eda2bfd9618266093921031e6e341cf3811f2ad2b75dd7af5376d037a566a
local 226197f60c92df08a7a5643f5e94b37947c56bdd4b532d4ee10d4cf21b27b319
...
...
local 209efa69f1679224ab6b2e7dc0d9ec204e3628a1635fa3410c44a4af3056c301
and I want to remove all of my volumes at once. How can I do it?
The official command to remove all unused data (including volumes without containers) is available since docker 1.13:
docker system prune
If you want to limit to volumes alone, removing only unused volumes:
docker volume prune
You also have docker image prune, docker container prune, etc:
See more at "Prune unused Docker objects".
See commit 86de7c0 and PR 26108.
You can see it in action in play-with-docker.com:
/ # docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1296a5e47ef3 hello-world "/hello" 7 seconds ago Exited (0) 6 seconds ago prickly_poincare
/ # docker system prune
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
Are you sure you want to continue? [y/N] y
Deleted Containers:
1296a5e47ef3ab021458c92ad711ad03c7f19dc52f0e353f56f062201aa03a35
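For scripted use the interactive confirmation can be skipped, and volumes included explicitly; a minimal sketch:
docker system prune -f --volumes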
The pre-docker 1.13 way of managing volumes was introduced with PR 14242 and the docker volume command, whose comment from July 2015 documents:
docker volume rm $(docker volume ls -q --filter dangling=true)
Edited in 2017:
This answer was given on Apr 16 '16 and is now outdated; it is correct only for docker versions prior to 1.13.
Please use the answer from VonC, which is now marked as correct.
To delete unused volumes you can use the built-in docker volume rm command. The rm command also deletes any directory in /var/lib/docker/volumes that is not a volume, so make sure you didn't put anything in there you want to save:
Command to list volumes, a little more precise than yours:
$ docker volume ls -qf dangling=true
Cleanup:
$ docker volume rm $(docker volume ls -qf dangling=true)
More details about docker volume ls and docker volume rm are in the Docker documentation.
This is what I've found to be useful: https://github.com/chadoe/docker-cleanup-volumes
Shellscript to delete orphaned docker volumes in /var/lib/docker/volumes and /var/lib/docker/vfs/dir
Docker version 1.4.1 up to 1.11.x
It basically does a cleanup of any orphaned/dangling volumes, includes a --dry-run option, and also makes note of some built-in docker commands (which are referenced in the previous answers).
Note about Docker 1.9 and up
To delete orphaned volumes in Docker 1.9 and up you can also use the built-in docker volume commands instead of this docker-cleanup-volumes script. The built-in command also deletes any directory in /var/lib/docker/volumes that is not a volume so make sure you didn't put anything in there you want to save:
List:
$ docker volume ls -qf dangling=true
Cleanup:
$ docker volume rm $(docker volume ls -qf dangling=true)
Or, handling a no-op better but Linux specific:
$ docker volume ls -qf dangling=true | xargs -r docker volume rm
To answer the question and borrowing from Marc, this works:
$ docker volume rm $(docker volume ls -qf dangling=true | xargs)
