How to kill and remove Docker containers using Puppet

I would like to use Puppet to add a scheduled job (fortnightly) to a machine that removes all containers on that machine.
Currently I have to ssh to the machine and run sudo docker rm -f $(sudo docker ps -a -q) manually, which I want to automate.
Preferably using this module: https://forge.puppet.com/puppetlabs/docker.
I can't see any option to kill and remove containers (I'm also new to Puppet). Using docker-compose via Puppet would be fine too.
Any ideas? Thanks.

The docs you linked say:
To remove a running container, add the following code to the manifest file. This also removes the systemd service file associated with the container.
docker::run { 'helloworld':
  ensure => absent,
}
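If you just want to test this outside a full agent run, an ad-hoc apply works too (a sketch: the module must already be on the node, and depending on the module version docker::run may also require the image parameter, so the image name here is a placeholder):

# One-off apply on the node itself; 'ubuntu' is a hypothetical image name.
puppet apply -e "docker::run { 'helloworld': image => 'ubuntu', ensure => absent }"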
Regarding the docker command sudo docker rm -f $(sudo docker ps -a -q) that you run over ssh to remove containers, there is a better one:
$ docker container prune --help

Usage:  docker container prune [OPTIONS]

Remove all stopped containers

Options:
      --filter filter   Provide filter values (e.g. 'until=<timestamp>')
  -f, --force           Do not prompt for confirmation
So the equivalent would be:
docker container prune --force
Note that prune only removes stopped containers, so if you also need to get rid of running ones, stop or kill them first.
And you can automate this ssh command via Puppet; there is no need to ssh into the machine manually. Check the Puppet docs on running shell commands without installing an agent (Puppet Bolt's bolt command run works over plain ssh), or manage it through your normal Puppet runs if an agent is already installed on the remote host.
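For the fortnightly schedule itself, here is a minimal sketch using Puppet's built-in cron resource (on Puppet 6 and later the type ships in the puppetlabs/cron_core module). Cron has no native "every two weeks" interval, so running on the 1st and 15th is a common approximation; the resource name, binary path, and times below are assumptions:

cron { 'docker-container-prune':
  ensure   => present,
  command  => '/usr/bin/docker container prune --force',
  user     => 'root',
  monthday => [1, 15],  # roughly fortnightly; adjust as needed
  hour     => 3,
  minute   => 0,
}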

Related

Docker compose: remove container without interaction

For a deployment pipeline I need to remove a Docker container without user interaction.
When removing a Docker container using
$ docker compose rm myapp
docker compose asks for confirmation and only continues when y is entered.
How can I tell docker compose to remove the container (and its volumes) without having to type anything?
My Docker version is 20.10.21.
There's an option to do that:
Usage:  docker compose rm [OPTIONS] [SERVICE...]

Removes stopped service containers

[...]

Options:
  -f, --force     Don't ask to confirm removal
  -s, --stop      Stop the containers, if required, before removing
  -v, --volumes   Remove any anonymous volumes attached to containers
So the solution is
$ docker compose rm myapp -f
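Since the question also mentions volumes, and the service may still be running, combining the documented flags should cover both cases as well:

$ docker compose rm -f -s -v myapp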

Docker system prune: only current directory

I'm working on 2 projects that both use Docker, in separate directories.
In the 2nd project, for a new local build, the first command given (of a series of commands) is the following:
docker container stop $(docker container ls -a -q) && docker system prune -a -f --volumes
However, as a side effect, this kills the containers in the 1st project and destroys the databases associated with it as well.
This is annoying because I constantly have to rebuild and re-seed the database in the 1st project.
How can I edit the first command so that it only affects the project in the current directory?
Note that this project also uses docker-compose, which I know is aware of the current directory, so maybe we could make use of docker-compose instead.
The full list of commands given for a new local build are:
docker container stop $(docker container ls -a -q) && docker system prune -a -f --volumes
docker stack rm up
docker-compose -f docker-compose.local.yml build
docker stack deploy up --compose-file docker-compose.local.yml
Thank you very much in advance for any help.
-Michael
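(No answer is quoted in the thread, but for reference: both tools scope what they create to a project. docker stack rm up, already in the list above, removes only that stack's containers, and a docker-compose-managed project can be torn down in a similarly scoped way. A sketch, assuming the project is defined by docker-compose.local.yml in the current directory:

docker-compose -f docker-compose.local.yml down --volumes --rmi local

Here down removes only this project's containers and networks, --volumes also removes its volumes, and --rmi local removes images built locally for it, leaving other projects' containers and databases untouched.)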

How to ensure dependencies on host are removed when we remove a docker container

I created a Docker container and have an application running inside it. I created a second Docker container (on the same host) with the same application running inside it. I need to create a few more containers this way. However, when I remove a container, I need to ensure that the dependencies it creates on the host are completely removed. How can this be achieved?
Thanks.
Check out the documentation of the docker rm command:
Usage:  docker rm [OPTIONS] CONTAINER [CONTAINER...]

Remove one or more containers

Options:
  -f, --force     Force the removal of a running container (uses SIGKILL)
      --help      Print usage
  -l, --link      Remove the specified link
  -v, --volumes   Remove the volumes associated with the container
So use the "-v" option.
Update
You can also use this command to clean up volumes that have no associated container:
docker volume rm $(docker volume ls -qf dangling=true)
Credit: sceada
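On newer Docker releases the same cleanup exists as a built-in subcommand:

docker volume prune

which removes unused local volumes after a confirmation prompt (pass -f to skip it; on recent versions only anonymous volumes are removed by default, and --all extends this to every unused volume).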

Rebuild and re-run a Docker container

I'm experimenting with Docker, and I set up a Node app.
The app is in a Git repo in my Gogs container.
I want to keep all the code inside my container, so the Dockerfile sits at the app root.
I want to create a shell script that automatically rebuilds my image and re-runs the container.
This script will later be called by a "webhook container" during a Git push.
The Docker CLI has only a build and a run command for this, but both fail if an image or a container with that name already exists.
What is the best practice to handle this?
Remark: I don't want to keep my app sources on the host, only update the source there, and restart the container!
I like the idea that my entire app is a container.
You can remove Docker containers and images before running the build or run commands.
To remove all containers:
docker rm $(docker ps -a -q)
To remove all images:
docker rmi $(docker images -q)
To remove a specific container:
docker rm -f containerName
After executing whichever of the commands above is relevant, run your script; it will typically build, run, or pull as required.
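Putting that together, a minimal sketch of such a rebuild-and-rerun script (the image and container name myapp is an assumption; adjust it to your app):

#!/bin/sh
set -e
# Hypothetical names; change to match your setup.
IMAGE=myapp
CONTAINER=myapp

# Remove the old container if one exists (ignore the error if it doesn't).
docker rm -f "$CONTAINER" 2>/dev/null || true

# Rebuild the image from the Dockerfile in the current directory.
docker build -t "$IMAGE" .

# Start a fresh container from the new image.
docker run -d --name "$CONTAINER" "$IMAGE"

The webhook handler can then simply invoke this script on every push.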

docker ps shows empty list

I built a Docker image from a Dockerfile, and the build said it succeeded. But when I try to list Docker containers with docker ps (I also tried docker ps -a), it shows an empty list. What is weird is that I'm still somehow able to push my Docker image to Docker Hub by calling docker push "container name".
I wonder what's going on? I'm on Windows 7 and just installed the newest version of Docker Toolbox.
docker ps shows (running) containers; docker images shows images.
A successfully built Docker image will appear in the list that docker images generates, but only a running container (which is an instance of an image) will appear in the output of docker ps (use docker ps -a to also see stopped containers). To start a container from your image, use docker run.
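To illustrate the distinction (myimage and web are hypothetical names):

$ docker images                      # the freshly built image shows up here
$ docker run -d --name web myimage   # create and start a container from the image
$ docker ps                          # only now does a container appear here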
For me, docker ps -a and docker images both returned an empty list even though I had many Docker containers running. I tried rebooting the system with no luck. A quick sudo systemctl restart docker fixed this "bug".
Try restarting the Docker socket and daemon:
sudo systemctl restart docker.socket
sudo systemctl restart docker
You can run the container without the -d option so that its output is displayed.
It may be that the application failed to start.
For me, the only thing that resolved the issue was reinstalling Docker. Also, make sure the disk is not full.
This is the command I use, but it may vary depending on the version of Docker already installed:
apt-get install --reinstall docker.io
If prompted, choose "yes" to automatically restart the Docker daemon.
For Linux:
First, list all running containers:
sudo docker ps
Try restarting the daemon:
sudo systemctl restart docker
Remove the previous container with the same name, if there is one:
sudo docker rm docker_container_id
Then run it again:
sudo docker run -d --name container_name image_name
This should work.
Or uninstall Docker and install it again.
In the Dockerfile, make sure the strings in the CMD instruction are in double quotes, not single quotes.
For example, this is a mistake:
CMD [ "node", 'index.js' ]
The correct form is:
CMD [ "node", "index.js" ]
This mistake will make the container start and then exit immediately.
