As part of my CI/CD deployment, there is a volume my_volume that gets created on docker-compose build/up and needs deleting on every deployment.
Therefore the CI/CD script calls docker volume rm my_volume before docker-compose build/up.
But if a build fails, subsequent builds will error out on docker volume rm my_volume, because the volume doesn't exist.
How can I remove this volume only if it exists?
You can ignore the errors:
docker volume rm ${some_volume} || true
Or you can start by making sure the project is completely down, which also removes the named and anonymous volumes declared for it:
docker-compose down -v
docker-compose up -d
Or you can start labeling your volumes and containers when testing, and prune everything carrying those labels:
docker volume prune -f --filter 'label=ci-test'
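For the labeling approach, volumes can be labeled in the compose file itself. A minimal sketch, with the label key chosen to match the filter above:
volumes:
  my_volume:
    labels:
      ci-test: "true"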
To ignore a failure when calling "docker volume rm my_volume", run the commands in this order:
set +e                       # stop aborting the script on errors
docker volume rm my_volume   # may fail if the volume doesn't exist
docker-compose build
docker-compose up -d
true                         # reset the exit status so the script ends successfully
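Alternatively, you can test whether the volume exists before removing it; docker volume inspect exits non-zero when the volume is missing:
if docker volume inspect my_volume > /dev/null 2>&1; then
    docker volume rm my_volume
fi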
For a deployment pipeline I need to remove a Docker container without user interaction.
When removing a Docker container using
$ docker compose rm myapp
docker compose asks for confirmation and only continues once y is entered.
How do I tell docker compose to remove the container without me typing anything?
My Docker version is 20.10.21
There's an option to do that:
Usage: docker compose rm [OPTIONS] [SERVICE...]
Removes stopped service containers
[...]
Options:
-f, --force Don't ask to confirm removal
-s, --stop Stop the containers, if required, before removing
-v, --volumes Remove any anonymous volumes attached to containers
So the solution is:
$ docker compose rm myapp -f
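If the service is still running and has anonymous volumes attached, the -s and -v flags from the usage text above can be combined with -f (service name as in the question):
$ docker compose rm -s -f -v myapp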
I need to reset a Moodle Docker container to its initial state every 24 hours. This container will be running a demo site where users can log in and make various setting changes, and the site needs to reset itself every day. Does Docker provide any such feature?
I searched for a docker reset command but it doesn't seem to be there yet.
Will the following process of removing and recreating the containers work?
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up -d
I should be able to do this programmatically, of course, preferably using a shell script.
Yes, you do not need a reset; recreating the container is enough. But note that if you bind-mount volumes from the host, this will not work, because anything the application picks up from the host's persistent storage will survive docker-compose up.
Write a bash script and schedule it with cron to run at 1:00 AM, or whatever time you want, to create a fresh container:
0 1 * * * create_container.sh
create_container.sh
#!/bin/bash
docker-compose rm -f
docker-compose up -d
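Note that cron runs jobs with a minimal environment, so in practice the crontab entry should use absolute paths; for example (paths assumed here):
0 1 * * * /usr/local/bin/create_container.sh >> /var/log/create_container.log 2>&1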
Or you can use your own script as well, but if there are bind-mounted volumes, clear those files before creating the container:
rm -rf /path/to_host_shared_volume
docker rm -f $(docker ps -a -q)
...
Note that the behaviour of -v is different here: it will create the host directory if it does not exist.
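A small sketch of that cleanup, emptying the bind-mounted directory while keeping the directory itself (path taken from the snippet above, everything else assumed):
#!/bin/bash
# delete everything inside the bind-mounted directory, dotfiles included
find /path/to_host_shared_volume -mindepth 1 -delete
docker rm -f $(docker ps -a -q)
docker-compose up -d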
Or, if you want to remove everything, you can use docker system prune:
#!/bin/bash
docker system prune -f -a --volumes
docker-compose up -d
Remove all unused containers, networks, images (both dangling and unreferenced), and volumes.
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
- all images without at least one container associated to them
- all build cache
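If wiping everything on the host is too aggressive, prune also accepts label filters, so you can restrict it to labeled test resources (label name assumed):
docker system prune -f --volumes --filter label=ci-test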
This is my Dockerfile for a Spring Boot app:
FROM openjdk:8u151-jdk-alpine
VOLUME /tmp
ADD target/classes/application.properties /workdir/application.properties
ADD target/foo-app-2.0.0.jar /workdir/app.jar
WORKDIR /workdir
ENV JAVA_OPTS=""
EXPOSE 8080
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar app.jar" ]
It creates a volume for /tmp when the container is started, which is stored in the /var/lib/docker folder of the host.
Is that volume deleted when the container is stopped, i.e. is everything for that volume cleaned up on the host machine? If not, how do I configure that?
Adding to @Whites11's answer:
No, volumes are not deleted automatically or by default (even if the container gets deleted), and you can list them using the docker volume ls command.
However, once your containers are stopped, you can remove the volumes associated with them by using docker rm -v:
$ docker stop $CONTAINER_ID && docker rm -v $CONTAINER_ID
Manual:
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
Options:
-f, --force      Force the removal of a running container (uses SIGKILL)
-l, --link       Remove the specified link
-v, --volumes    Remove the volumes associated with the container
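For the Dockerfile above you can see this in action; a quick sketch (image and container names assumed):
docker run -d --name demo-app my-spring-image
docker volume ls                                 # an anonymous volume now backs /tmp
docker stop demo-app && docker rm -v demo-app
docker volume ls                                 # the anonymous volume is gone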
No, volumes are not deleted automatically (even if the container gets deleted) and you can list them using the docker volume ls command.
This can leave a high number of dangling volumes (volumes not associated with any container) on the host. To remove them, you can use this command, for example:
docker volume rm `docker volume ls -q -f dangling=true`
(Taken from https://coderwall.com/p/hdsfpq/docker-remove-all-dangling-volumes)
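On Docker 1.13 and later, the built-in prune command does the same cleanup (note that on recent engine versions it only removes anonymous volumes unless you also pass -a):
docker volume prune -f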
I use docker-compose and ran into the following problem:
When I change my code and want to rebuild the containers, I use
docker-compose stop
docker-compose build
And then I want to run the system with:
docker-compose up
But the old containers are run instead of ones with the new code. What should I do?
You could use docker-compose up --build, or docker-compose up --build --force-recreate.
I have a helper function to nuke everything so that our Continuous blah cycle can be tested, erm... continuously. Basically it boils down to the following:
To clear containers:
docker rm -f $(docker ps -a -q)
To clear images:
docker rmi -f $(docker images -a -q)
To clear volumes:
docker volume rm $(docker volume ls -q)
To clear networks:
docker network rm $(docker network ls | tail -n+2 | awk '{if($2 !~ /bridge|none|host/){ print $1 }}')
I generally don't require old containers, volumes and networks, so to clear them all I made a bash script which runs before each build to clean up the Docker environment (a sketch is shown below). To rebuild with the updated code, I use docker-compose up --build.
Credits to marcelmfs; borrowed from Source.
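A minimal sketch of that cleanup script, combining the commands above with guards so that empty lists don't make the individual commands fail:
#!/bin/bash
# remove all containers, images, volumes and non-default networks;
# variables are intentionally left unquoted so each ID becomes a separate argument
containers=$(docker ps -a -q)
[ -n "$containers" ] && docker rm -f $containers
images=$(docker images -a -q)
[ -n "$images" ] && docker rmi -f $images
volumes=$(docker volume ls -q)
[ -n "$volumes" ] && docker volume rm $volumes
networks=$(docker network ls | tail -n+2 | awk '{if($2 !~ /bridge|none|host/){ print $1 }}')
[ -n "$networks" ] && docker network rm $networks
exit 0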
In this case we should first remove the old containers (with rm -f), so we can deploy the new code by:
docker-compose build
docker-compose stop
docker-compose rm -f
docker-compose up
The above sequence is no coincidence: while the first command builds the image, the old containers keep running; only when the build is finished are the old containers stopped, deleted, and exchanged for the newly built ones.
I put the above commands into a handy copy-paste one-liner:
docker-compose build && docker-compose stop && docker-compose rm -f && docker-compose up
I created a Docker container and have an application running inside it. I created a second Docker container (on the same host) with the same application running inside it. I need to create a few more containers this way. However, when I remove a container, I need to ensure that the dependencies it creates on the host are completely removed. How can this be achieved?
Thanks,
Check out the documentation of the docker rm command:
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
Options:
-f, --force Force the removal of a running container (uses SIGKILL)
--help Print usage
-l, --link Remove the specified link
-v, --volumes Remove the volumes associated with the container
So use the "-v" option.
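For example, to remove a container together with its anonymous volumes (container name assumed):
docker rm -v my_container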
Update
You can also use this command to clean up volumes with no associated containers:
docker volume rm $(docker volume ls -qf dangling=true)
Credit: sceada