Docker system prune: only current directory - docker

I'm working on 2 projects that both use Docker, in separate directories.
In the 2nd project, for a new local build, the first command given (of a series of commands) is the following:
docker container stop $(docker container ls -a -q) && docker system prune -a -f --volumes
However, as a side effect, this kills the containers in the 1st project, destroying the databases associated with it as well.
This is annoying because I have to constantly rebuild and re-seed the database in the 1st project.
How can I edit the first command so that it only affects the project in the current directory?
Note that this project also uses docker-compose, which I know keys off the current directory, so maybe we could make use of docker-compose instead.
The full list of commands given for a new local build is:
docker container stop $(docker container ls -a -q) && docker system prune -a -f --volumes
docker stack rm up
docker-compose -f docker-compose.local.yml build
docker stack deploy up --compose-file docker-compose.local.yml
Thank you very much in advance for any help.
-Michael
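A sketch of one way to scope the cleanup to just this project (an assumption on my part, not a verified drop-in replacement): containers and volumes created by docker stack deploy are labelled with com.docker.stack.namespace=<stack name>, so with the stack name up from the commands above you could remove only that stack's resources instead of pruning the whole host:
# Remove only the "up" stack's services (the stack name comes from the deploy command above)
docker stack rm up
# Remove any leftover containers and volumes carrying this stack's label
docker container rm -f $(docker container ls -aq --filter "label=com.docker.stack.namespace=up")
docker volume rm $(docker volume ls -q --filter "label=com.docker.stack.namespace=up")
This avoids docker system prune -a entirely, so the first project's containers, images, and databases are left untouched.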

Related

How to kill and remove docker containers using puppet

I would like to add a scheduled job (fortnightly) to a machine using puppet to remove all containers on that machine.
Currently I need to do sudo docker rm -f $(sudo docker ps -a -q) manually after sshing to that machine, which I want to automate.
Preferably using module: https://forge.puppet.com/puppetlabs/docker.
Can't see any option to kill and remove containers (I'm also new to puppet). Even using docker-compose via puppet would be fine.
Any ideas? Thanks.
The docs you linked say:
To remove a running container, add the following code to the manifest file. This also removes the systemd service file associated with the container.
docker::run { 'helloworld':
  ensure => absent,
}
Regarding the docker command sudo docker rm -f $(sudo docker ps -a -q) that you run over ssh to remove containers, there is a better option:
$ docker container prune --help
Usage:  docker container prune [OPTIONS]
Remove all stopped containers
Options:
      --filter filter   Provide filter values (e.g. 'until=<timestamp>')
  -f, --force           Do not prompt for confirmation
So the equivalent would be:
docker container prune --force
And you can automate this ssh command via puppet; there is no need to manually ssh into the machine. Check their docs on running shell commands without installing an agent, or use Bolt if you already have an agent installed on the remote host.
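If the goal is simply the end result rather than the puppet module itself, a plain cron entry on the machine would also work; the schedule below (03:00 on the 1st and 15th of each month) is only an illustrative approximation of "fortnightly":
# m h dom mon dow  command
0 3 1,15 * *  /usr/bin/docker container prune --force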

What is the command line equivalent of "Clean / Purge Data" in docker?

When I want to remove everything (running or not), I can just go to Troubleshoot and hit the Clean / Purge data button. This would remove all docker data, without resetting settings to factory defaults. Is there a single line command to achieve same thing?
P.S.: I know about docker system prune, but it is not exactly the same. I want to reset everything, not just the unused.
You can use the combination of docker rm to force-remove all containers (running or not) and docker system prune to delete everything else:
docker rm -f $(docker ps -a -q); docker system prune --volumes -a -f
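A quick way to confirm the purge really removed everything (just a check, not part of the cleanup itself):
docker ps -a        # should list no containers
docker images -a    # should list no images
docker volume ls    # should list no volumes
docker network ls   # only the default bridge/host/none networks should remain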

reset a docker container to its initial state every 24 hours

I need to reset a moodle docker to its initial state every 24 hours. This docker will be a running a demo site where users can login and carry out various setting changes and the site needs to reset itself every day. Does docker provide any such feature?
I searched for a docker reset command but it doesn't seem to be there yet.
Will such a process of removing and reinitiating docker container work?
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up -d
I should be able to do this programmatically, of course, preferably using a shell script.
Yes; you do not need a reset feature, recreating the container is enough. But if you bind-mount volumes from the host, recreating alone will not work, because docker-compose up will pick up whatever is still in the host's persistent storage.
Write a bash script that creates a fresh container at whatever time you want (the cron entry below runs at midnight):
0 0 * * * create_container.sh
create_container.sh
#!/bin/bash
docker-compose rm -f
docker-compose up -d
Or you can use your own script as well, but if there are bind-mounted volumes, clear those files before recreating the container:
rm -rf /path/to_host_shared_volume
docker rm -f $(docker ps -a -q)
.
.
.
The behaviour of -v bind mounts is different: Docker will create the host directory if it does not exist.
Or, if you want to remove everything, you can use docker system prune:
#!/bin/bash
docker system prune -f -a --volumes
docker-compose up -d
Remove all unused containers, networks, images (both dangling and unreferenced), and volumes.
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
- all images without at least one container associated to them
- all build cache
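Putting the pieces above together, a minimal consolidated create_container.sh could look like this; the /opt/moodle-demo path and its data subdirectory are placeholders for wherever your compose file and bind-mounted data actually live:
#!/bin/bash
set -e
cd /opt/moodle-demo            # directory containing docker-compose.yml (placeholder path)
docker-compose down --volumes  # stop and remove this project's containers and named volumes
rm -rf ./data/*                # clear bind-mounted host data, if any (placeholder path)
docker-compose up -d           # recreate fresh containers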

After docker-compose build, docker-compose up runs old, not updated, containers

I use docker-compose and find following problem:
When I change my code and want to rebuild the containers, I use
docker-compose stop
docker-compose build
And then I want to run system by:
docker-compose up
But the new version of the code/containers is not run; the old ones are. What should I do?
You could use docker-compose up --build or docker-compose up --build --force-recreate.
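For a detached rebuild-and-restart in one step, the same idea is often written as:
docker-compose up -d --build --force-recreate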
I have a helper function to nuke everything so that our Continuous blah, cycle can be tested, erm... continuously. Basically it boils down to the following:
To clear containers:
docker rm -f $(docker ps -a -q)
To clear images:
docker rmi -f $(docker images -a -q)
To clear volumes:
docker volume rm $(docker volume ls -q)
To clear networks:
docker network rm $(docker network ls | tail -n+2 | awk '{if($2 !~ /bridge|none|host/){ print $1 }}')
I generally don't need the old containers, volumes, and networks, so to clear them all I made a bash script that cleans up the docker environment before each build (a consolidated sketch follows below). And to rebuild with updated code, I use docker-compose up --build.
Credits to marcelmfs and borrowed from Source
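For reference, those four steps combined into the kind of cleanup script described above (a sketch; the network command simply skips Docker's built-in bridge/none/host networks):
#!/bin/bash
# Nuke containers, images, volumes, and non-default networks before a rebuild.
docker rm -f $(docker ps -a -q)
docker rmi -f $(docker images -a -q)
docker volume rm $(docker volume ls -q)
docker network rm $(docker network ls | tail -n +2 | awk '{if($2 !~ /bridge|none|host/){ print $1 }}')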
In this case, we should first remove the old containers (with rm -f). Then we can deploy new code with:
docker-compose build
docker-compose stop
docker-compose rm -f
docker-compose up
The order above is not a coincidence: while the first command builds the new image, the old containers keep running; only when the build is finished are the old containers stopped, removed, and exchanged for the newly built ones.
I put the above commands into a handy copy-paste one-liner:
docker-compose build && docker-compose stop && docker-compose rm -f && docker-compose up

Rebuild and rerun a Docker container

I'm experimenting with Docker, and I set up a Node app.
The app is in a Git repo in my Gogs container.
I want to keep all the code inside my container, so at the app root I have my Dockerfile.
I want to create a shell script to automatically rebuild my container and rerun it.
This script will be called later by a "webhook container" during a Git push.
The Docker CLI has only a build and a run command, but both fail if an image or a container with that name already exists.
What is the best practice to handle this?
Remark: I don't want to keep my app sources on the host, update only the source there, and restart the container!
I like the idea that my entire app is a container.
You can remove docker containers and images before running build or run commands.
to remove all containers:
docker rm $(docker ps -a -q)
to remove all images:
docker rmi $(docker images -q)
to remove a specific container:
docker rm -f containerName
After executing the relevant commands above, run your script. Your script will typically build, run, or pull as required.
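A minimal sketch of such a rebuild-and-rerun script, to be triggered by the webhook; the image/container name myapp and the port mapping are placeholders, not taken from the question:
#!/bin/bash
set -e
docker rm -f myapp 2>/dev/null || true          # remove the old container if it exists
docker build -t myapp .                         # rebuild the image from the Dockerfile in the repo root
docker run -d --name myapp -p 3000:3000 myapp   # start a fresh container from the new image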
