Old files remain in new Docker container

I have a basic Hyperledger Fabric network where the nodes run as Docker containers.
Inside each node I have created and edited some files. However, I now want to restart the network with clean containers.
I have tried shutting down all containers, removing the images and networks, and then running the docker prune commands.
I have also tried to delete all volumes.
However, once I re-create the Fabric network and bash into a container, the old files I created are still there. I never created those files on the host machine, only inside the container, so I do not understand how they can still exist. I even tried deleting the images.
The system is Ubuntu 18.04.
Can anybody spot a potential fix for this?

Delete the containers' volumes after stopping and removing the containers with the commands below:
docker kill $(docker ps -aq)
docker rm $(docker ps -aq)
Then remove the volumes with:
docker system prune --volumes -f
This removes all unused networks and volumes.
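If the old files still show up after that, it is worth double-checking that nothing actually survived before re-creating the network. A quick sanity check (a sketch; the expectations in the comments assume a fully cleaned host):
docker ps -aq          # should print nothing: no containers left
docker volume ls -q    # should print nothing: no volumes left
docker images -q       # should print nothing if you also removed the images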
Hope this helps.

Related

How can I reinit docker layers?

Using Docker under Kubuntu 18, I ran out of free space on the device.
I ran these commands to clear space:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker-compose down --remove-orphans
docker system prune --force --volumes
As I was still out of free space, I opened the /var/lib/docker/overlay2/ directory and
deleted a lot of subdirectories under it.
After that I got this error:
$ docker-compose up -d --build
Creating network "master_default" with the default driver
ERROR: stat /var/lib/docker/overlay2/36af81b800ebb595a24b6c724318c1126932d2bfae61e2c98bfc65a203b2b928: no such file or directory
Looks like that is not a good way to free space. Which way is good in my case?
Is there a way to reinit my Docker apps?
Thanks!
"As I was still out of free space, I opened the /var/lib/docker/overlay2/ directory and deleted a lot of subdirectories under it."
At this point, the docker filesystem has been corrupted. To repair it, the best you can do is back up anything you want to save, particularly any volumes, stop the docker engine (systemctl stop docker), delete the entire docker filesystem (rm -rf /var/lib/docker), and restart docker (systemctl start docker).
At that point the engine will be completely empty without any images, containers, etc. You'll need to pull/rebuild your images and recreate the containers you were running. Hopefully that's as easy as a docker-compose up.
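Roughly, the sequence looks like this (a sketch; the backup path is just an example, and you should double-check you have copies of everything you need before the rm -rf):
systemctl stop docker
cp -a /var/lib/docker/volumes /root/docker-volumes-backup   # back up volume data first (example path)
rm -rf /var/lib/docker
systemctl start docker
docker-compose up -d --build   # re-pull/rebuild images and recreate the containers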
/var/lib/docker/overlay2/ is where docker stores the image layers.
Now, docker system prune -a removes all unused images, stopped containers and the build cache, if I'm not wrong.
One piece of advice: since you are building images, check out Docker BuildKit. I know docker-compose added support for it, but I don't know whether your version has it. In short, building your images will be much faster.
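If you want to try BuildKit, it can usually be switched on per invocation through environment variables (a sketch; whether docker-compose honors it depends on your docker-compose version):
# Enable BuildKit for a plain docker build
DOCKER_BUILDKIT=1 docker build .
# Newer docker-compose versions can delegate builds to the docker CLI with BuildKit
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build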

How to use `docker volume prune`? (`docker volume prune` removes all volumes even when a container is running)

The Docker documentation says the following:
docker volume prune === Remove all unused local volumes
To test that, I've set up a MongoDB container with the official latest image from Docker Hub. Doing so created 2 volumes behind the scenes, which are probably needed by the container to store its data.
When running docker volume ls now, I can see the two volumes with random names.
So let's say I had multiple containers with volumes using random names. It would then be difficult to know which of these are still in use, so I was expecting docker volume prune to help out here.
So I executed the command, expecting docker volume prune to delete nothing, as my container is up and running with MongoDB.
But what actually happened is that all my volumes got removed. After that my container shut down and could not be restarted.
I tried recreating this multiple times, and every time, even though my container is running, the command just deletes all volumes.
Anyone can explain that behavior?
Update:
The following command, with the container ID of my MongoDB container, shows me the 2 volumes:
docker inspect -f '{{ .Mounts }}' CONTAINER_ID
So from my understanding docker knows that these volumes and the container belong together.
When I ask docker to show me the dangling volumes, it shows me the same volumes again:
docker volume ls --filter dangling=true
So if they are dangling, it makes sense to me that the prune command removes them. But the volumes are clearly in use by my container, so that's what is not clear to me.
You can remove all existing containers and then remove all volumes:
docker rm -vf $(docker ps -aq) && docker volume prune -f
To remove only unused volumes:
docker volume prune -f
or
docker volume rm $(docker volume ls -qf dangling=true)
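Before pruning, you can also check which container, if any, is still using a given volume by filtering docker ps on the volume name (VOLUME_NAME is a placeholder for a name from docker volume ls):
docker ps -a --filter volume=VOLUME_NAME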

How to remove docker containers that are not running?

Is it possible to remove containers that aren't running?
I know that, for example, this will remove containers with the exited status:
docker rm `docker ps -q -f status=exited`
But I would like to know how I can remove all those that are not running.
Use the docker container prune command, it will remove all stopped containers. You can read more about this command in the official docs here: https://docs.docker.com/engine/reference/commandline/container_prune/.
Similarly Docker has commands for docker network prune, docker image prune and docker volume prune to prune networks, images and volumes.
I use docker system prune most of the time, it cleans unused containers, plus networks and dangling images.
If I want to clean volumes along with the system components, then I use docker system prune --volumes. In this case unused volumes will be removed too, so be careful: you may lose data.
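The prune commands also accept filters, which is useful when you only want to clean up older objects. For example (the 24h cutoff is just an illustration):
# Remove only stopped containers that are older than 24 hours
docker container prune --filter "until=24h"
# Or remove containers by status without touching anything else
docker rm $(docker ps -q -f status=exited -f status=created)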

How to clean up Docker

I've just noticed that I ran out of disk space on my laptop. Quite a lot is used by Docker, as found by mate-disk-usage-analyzer.
The docker/aufs/diff folder contains 152 folders ending in -removing.
I already ran the following commands to clean up
Kill all running containers:
# docker kill $(docker ps -q)
Delete all stopped containers
# docker rm $(docker ps -a -q)
Delete all images
# docker rmi $(docker images -q)
Remove unused data
# docker system prune
And some more
# docker system prune -af
But the screenshot was taken after I executed those commands.
What is docker/aufs/diff, why does it consume that much space and how do I clean it up?
I have Docker version 17.06.1-ce, build 874a737. It happened after a cleanup, so this is definitely still a problem.
The following is a radical solution. IT DELETES ALL YOUR DOCKER STUFF. INCLUDING VOLUMES.
$ sudo su
# service docker stop
# cd /var/lib/docker
# rm -rf *
# service docker start
See https://github.com/moby/moby/issues/22207#issuecomment-295754078 for details
It might not be /var/lib/docker
The docker location might be different in your case. You can use a disk usage analyzer (such as mate-disk-usage-analyzer) to find the folders which need most space.
See Where are Docker images stored on the host machine?
This dir is where container rootfs layers are stored when using the AUFS storage driver (default if the AUFS kernel modules are loaded).
If you have a bunch of *-removing dirs, this is caused by a failed removal attempt. This can happen for various reasons, the most common is that an unmount failed due to device or resource busy.
Before Docker 17.06, if you used docker rm -f to remove a container all container metadata would be removed even if there was some error somewhere in the cleanup of the container (e.g., failing to remove the rootfs layer).
In 17.06 it will no longer remove the container metadata and instead flag the container with a Dead status so you can attempt to remove it again.
You can safely remove these directories, but I would stop docker first, then remove, then start docker back up.
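In practice, that looks something like this (a sketch; the path assumes the default /var/lib/docker location and the AUFS storage driver):
systemctl stop docker
rm -rf /var/lib/docker/aufs/diff/*-removing
systemctl start docker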
Docker takes up a lot of gigabytes, mainly in these areas:
Check for downloaded and built images.
Clean unused and dead images by running the command below:
docker image prune -a
Docker creates a lot of volumes; some of them belong to dead containers and are no longer used.
Clean the volumes and reclaim the space using:
docker system prune -af && \
docker image prune -af && \
docker system prune -af --volumes && \
docker system df
Container logs are also notorious for generating gigabytes of data.
The overlay2 storage for container layers is another place where gigabytes get eaten up.
A better approach is to calculate the size your containers need and then start them with upper caps on storage and logs, using the instructions below.
These features require Docker v19 or later.
docker run -it --storage-opt size=2G --log-opt mode=non-blocking --log-opt max-buffer-size=4m fedora /bin/bash
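For the log growth specifically, the json-file log driver can also be capped per container so logs rotate instead of growing without bound (a sketch; the sizes are arbitrary examples):
# Keep at most 3 log files of 10 MB each for this container
docker run -it --log-opt max-size=10m --log-opt max-file=3 fedora /bin/bash
# Truncate the log of an already-running container (CONTAINER_NAME is a placeholder)
truncate -s 0 "$(docker inspect --format='{{.LogPath}}' CONTAINER_NAME)"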
Note that this is actually a known, still-pending issue: https://github.com/moby/moby/issues/37724
If you have the same issue, I recommend giving it a "Thumbs Up" on GitHub so that it gets addressed sooner.
I had the same issue.
In my case the solution was:
View all images:
docker images
Remove old unused images:
docker rmi IMAGE_ID
Possibly you will also need to prune stopped containers:
docker container prune
P.S. docker --help is a good starting point :)

SonarQube Docker container won't start due to backelite jar. How to remove it?

I installed backelite-sonar-swift-plugin.jar, and because of it my SonarQube Docker container won't start anymore. I cannot docker exec into it because the container won't start, so is there a way to delete the file without starting it?
I am using a docker-compose.yml to run my containers, and I've tried removing the SonarQube container and images. When I docker-compose up, the SonarQube image is downloaded again, but when the container starts, the backelite-sonar-swift-plugin.jar is still in there. Why is it in there? Isn't it supposed to be gone, since a new image has been downloaded?
Check whether the container is reusing cached data or existing volumes from the previous container. Remove the previous volumes along with the container. Individual volumes can be listed and removed with docker volume ls and docker volume rm; I prefer removing all Docker volumes at once with:
docker volume rm $(docker volume ls -q)
If this does not help, please provide the errors that are shown before the container exits.
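With docker-compose specifically, the project's containers, its named and anonymous volumes, and its images can all be dropped in one go, so the next up starts from a genuinely fresh state (a sketch; --rmi all forces the image to be re-pulled):
docker-compose down --volumes --rmi all
docker-compose up -d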
