How to remove a Docker container even if its root filesystem does not exist? - docker

I have one container that is dead, but I can't remove it, as you can see below.
How can I remove it? Or how can I clean up my system manually to get rid of it?
:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78b0dcaffa89 ubuntu:latest "bash -c 'while tr..." 30 hours ago Dead leo.1.bkbjt6w08vgeo39rt1nmi7ock
:~$ docker rm --force 78b0dcaffa89
Error response from daemon: driver "aufs" failed to remove root filesystem for 78b0dcaffa89ac1e532748d44c9b2f57b940def0e34f1f0d26bf7ea1a10c222b: no such file or directory

It's possible Docker needs to be restarted.
I just ran into the same error message when trying to remove a container, and restarting Docker helped.
I'm running Version 17.12.0-ce-mac49 (21995).
To restart Docker for Mac, go to "Preferences" and click on the little bomb icon in the upper right hand corner.
In my situation I run Docker off an expansion drive on my MacBook. After the machine came out of sleep mode, the expansion drive had been automatically ejected (undesirable). Even after mounting the drive again, Docker needed to be restarted in order to initialize everything again. At that point I was able to remove containers (docker rm -f).
Maybe it's not the same situation, but restarting Docker is a useful thing to try.
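On a Linux host the equivalent is restarting the daemon through the service manager; a minimal sketch, assuming a systemd-based distribution:
sudo systemctl restart docker
# on older init systems:
sudo service docker restart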

While browsing related issues I found something similar, "Driver aufs failed to remove root filesystem" with "device or resource busy", and about 80% of the way down the thread there was a solution that said to run docker stop cadvisor, then docker rm [dead container].
Edit 1: that is docker stop cadvisor, not docker stop deadContainerId.
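Putting that together with the container ID from the question, the sequence would look something like this (assuming a cAdvisor container named cadvisor is what is pinning the mount; cAdvisor typically bind-mounts /var/lib/docker, which can hold container mounts open):
docker stop cadvisor     # release the mounts cAdvisor holds open
docker rm 78b0dcaffa89   # the dead container can now be removed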

As the error message states, Docker was configured to use aufs as its storage driver, but the maintainers recommend using overlay2 instead, as you can read at this link:
https://github.com/moby/moby/issues/21704#issuecomment-312934372
So I changed my configuration to use overlay2 as the Docker storage driver. Be aware that doing so makes Docker start with a fresh storage directory, so EVERYTHING from the old storage driver is gone, which means my "Dead" container was gone as well.
It is not exactly a solution to my original question, but the desired result was accomplished.
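For reference, the storage driver is switched in /etc/docker/daemon.json; a minimal sketch of that change (remember that Docker then starts with a fresh storage directory, so existing containers and images become inaccessible):
{
  "storage-driver": "overlay2"
}
After saving the file, restart the daemon (e.g. sudo systemctl restart docker).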

Let me share how I got here. The disk on my host was filling up while I was working with Docker containers, and I eventually hit the same failed to remove root filesystem error myself. I burned some time before I realized the disk was full, and even after freeing up space and restarting Docker, nothing worked; only shutting everything down and rebooting the machine fixed it. I hope this saves you some time.
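If you suspect the same cause, a quick check before resorting to a reboot (a sketch, assuming Docker's default data root):
df -h /var/lib/docker    # check free space on the Docker data root
docker system prune      # reclaim stopped containers, dangling images, unused networks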

Related

Cannot start Cassandra container, getting "CommitLogReadException: Could not read commit log descriptor in file"

JVMStabilityInspector.java:196 - Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: \
Could not read commit log descriptor in file /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
I ran the Cassandra container in Docker, and it stops with the error above.
It worked well before, but it no longer starts after I deleted and recreated the Cassandra container.
I think I need to clear the /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log file.
However, I am not used to working with Docker.
How do I erase this file?
I'm also not sure whether erasing the file will fix the error.
I asked ChatGPT about this problem as well, but after an hour of questions it told me to try again next time, so I haven't solved it yet. So I'm posting on Stack Overflow.
This error likely means that the specified commit log file is corrupted. I would definitely try deleting it.
If it's on a running Docker container, you could try something like this:
Run docker ps to get the container ID.
Remove the file using docker exec. If my container ID is f6b29860bbe5:
docker exec f6b29860bbe5 rm -rf /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
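After deleting the file, it may be worth restarting the container so Cassandra retries startup without the corrupt commit log (same hypothetical container ID as above):
docker restart f6b29860bbe5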
Your question is missing a lot of crucial information, such as which Docker image you're running, the full docker command you ran to start the container, and other relevant settings you've configured, so I'm going to make several assumptions.
The official Cassandra Docker image (see the Quickstart Guide on the Cassandra website) that we (the Cassandra project) publish stores the commit logs in /var/lib/cassandra/commitlog/, but your deployment stores them somewhere else:
Could not read commit log descriptor in file /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
Assuming that you're using the official image, this indicates to me that you have probably mounted the container directories on a persistent volume on the host. If so, you will need to manually clean up all the Cassandra directories when you delete and recreate the container.
The directories you need to empty are:
data/
commitlog/
saved_caches/
In your case, it might be just as easy to delete the contents of /opt/cassandra/, as sketched below.
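A minimal sketch of that host-side cleanup, assuming the container is named cassandra and /opt/cassandra is a bind mount on the host; note this wipes all Cassandra data, not just the corrupt commit log:
docker stop cassandra
sudo rm -rf /opt/cassandra/data/* /opt/cassandra/commitlog/* /opt/cassandra/saved_caches/*
docker start cassandra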
If those directories are not persisted on the Docker host, then you can instead open an interactive bash session into the Cassandra container. For example, if you've named your container cassandra:
$ docker exec -it cassandra bash
For details, see the docker exec manual on the Docker Docs website. Cheers!

Docker driver "overlay2" failed to remove root filesystem: unlinkat - device or resource busy

When trying to remove a Docker container (for example when running docker-compose down) I always get these errors:
ERROR: for <my_container> container d8424f80ef124c2f3dd8f22a8fe8273f294e8e63954e7f318db93993458bac27: driver "overlay2" failed to remove root filesystem: unlinkat /var/lib/docker/overlay2/64311f2ee553a5d42291afa316db7aa392a29687ffa61971b4454a9be026b3c4/merged: device or resource busy
Common advice like restarting the Docker service, pruning, or force-removing the containers doesn't work. The only thing I've found that works is manually unmounting with sudo umount /home/virtfs/xxxxx/var/lib/docker/overlay2/<container_id>/merged; after that I'm able to remove the container.
My OS is CentOS Linux release 7.9.2009 (Core) with kernel version 3.10.0-1127.19.1.el7.x86_64. I thought this might be overlay2 clashing with CentOS, but according to this page my CentOS/kernel version should work. It would be great to find a solution, because ideally I want docker-compose down to work without having to use elevated privileges to umount beforehand.
From the error log you can see that a mount is involved.
Run the following commands to find out which process is holding it; the PID appears in the path of each matching /proc/<pid>/mountinfo file:
grep 64311f2ee553a5d42291afa316db7aa392a29687ffa61971b4454a9be026b3c4 /proc/*/mountinfo
ps -ef | grep <PID from the grep output above>
Stop the process that is holding the mount.
Then delete the container.
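As a hedged one-liner, you can extract the PIDs straight from the paths of the matching /proc/<pid>/mountinfo files and list the offending processes:
for pid in $(grep -l 64311f2ee553a5d42291afa316db7aa392a29687ffa61971b4454a9be026b3c4 /proc/*/mountinfo | cut -d/ -f3); do
    ps -p "$pid" -o pid,comm,args    # show each process holding the overlay mount
done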
Often this happens when no process is listed as blocking the mount; then you know it's a kernel module holding it.
The most likely culprits are nfs (not sure why you'd run that in Docker) or files inside Docker that are bind-mounted, sometimes automatically, such as ones created by systemd-networkd.
Overlay2 was phased out by Ubuntu for a reason. CentOS is at its end of life, so this problem may already be resolved on your most likely upgrade path, Rocky Linux. Alternatively, you can enter the jungle of migrating your Docker storage engine.
Or you can just get rid of the package or software hogging the mount in the first place, if you can. But you'll have to share more info on what it is to get help with that.

Docker error: Cannot start service ...: network 7808732465bd529e6f20e4071115218b2826f198f8cb10c3899de527c3b637e6 not found

When starting a docker container (not developed by me), docker says a network has not been found.
Does this mean the problem is within the container itself (so only the developer can fix it), or is it possible to change some network configuration to fix this?
I'm assuming you're using docker-compose and seeing this error. I'd recommend
docker-compose up --force-recreate <name>
That should recreate the containers as well as supporting services such as the network in question (it will likely create a new network).
Shut down properly first, then restart:
docker-compose down
docker-compose up
I was facing a similar issue and this worked for me:
Run docker container ls -a, then remove the stale container by its ID, e.g. docker container rm ca877071ac10 (that was my container ID).
The problem was that some old container instances had not been removed. Once all the old terminated instances were removed, I could start the containers from the docker-compose file.
This can be caused by some old service that has not been killed. First add the
--remove-orphans flag when bringing down your containers to remove any undead services still running, then bring the containers back up:
docker-compose down --remove-orphans
docker-compose up
This is based on this answer.
In my case, the steps that produced the error were:
The server restarted, and the containers from a docker-compose stack remained stopped.
A network prune ran, so the networks associated with the stack's containers were deleted.
Running docker-compose --project-name "my-project" up -d failed with the error described in this topic.
It was solved simply by adding --force-recreate, like this:
docker-compose --project-name "my-project" up -d --force-recreate
This presumably works because the containers are recreated and linked to the also-recreated network (which had previously been pruned, as described in the preconditions above).
Apparently a VPN was causing this. Turning off the VPN and resetting Docker to factory settings solved the problem on two computers in our company. A third, personal computer that did not have the VPN never showed the problem.
Among other things, docker system prune will remove 'all networks not used by at least one container', allowing them to be recreated on the next docker-compose up.
More precisely, docker network prune can also be used to remove only the unused networks.
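In command form, assuming the stack itself is otherwise intact:
docker network prune    # removes only networks not used by any container
docker-compose up -d    # recreates the project network along with the containers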

Is there a way to remove a name from a Docker container?

We found that running docker rm myprocess takes quite a bit of time, much longer than docker run takes to start a fresh copy.
Is there a way we can make a container give up its name, so that we can first free up the name to be able to docker run again, and then do the time-consuming cleanup of the old container later?
That would make the stop/start cycle when updating to newer versions of the underlying image faster.
You can rename a container that already exists, or you could deploy under a new name and then rename it afterwards:
docker rename myprocess myprocess-old
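A minimal sketch of the deferred-cleanup cycle the question asks for, assuming a hypothetical image named myimage:
docker rename myprocess myprocess-old     # free up the name immediately
docker run -d --name myprocess myimage    # start the fresh copy right away
docker rm myprocess-old                   # do the slow cleanup whenever convenient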
There have been multiple reports of that problem.
Issue 16281 mentions (about the devicemapper or dm):
Switching the dm.basesize to 10GB seems to be fixing the issue so far, maybe it would be worth reverting the default to 10GB instead of 100GB or even specify this option at the creation of the container as requested in issue 14678
See the docker daemon storage driver options:
docker daemon --storage-opt dm.basesize=10G
Switching to thinpool can help too:
docker daemon --storage-opt dm.thinpooldev=/dev/mapper/thin-pool
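On current Docker releases these daemon flags are usually set in /etc/docker/daemon.json instead; a sketch combining both options (devicemapper-specific, so only applicable if you actually use that driver):
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=10G",
    "dm.thinpooldev=/dev/mapper/thin-pool"
  ]
}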

"device or resource busy" error when trying to push image with docker

When pushing to the official registry, I get the following error:
Failed to generate layer archive: Error mounting '/dev/mapper/docker-202:1-399203-ed78b67d527d993117331d27627fd622ffb874dc2b439037fb120a45cd3cb9be' on '/var/lib/docker/devicemapper/mnt/ed78b67d527d993117331d27627fd622ffb874dc2b439037fb120a45cd3cb9be': device or resource busy
The first time I tried to push the image, I ran out of space on my hard drive. After that I cleaned up and should now have enough space to push it, but the first try somehow locked the image. How can I free it again?
I have stopped and removed the container running the image, but that didn't help.
I have restarted the docker service, without any results
This looks like it might be related to the issue mentioned here: https://github.com/dotcloud/docker/issues/4767
It sounds like you've tried stopping and removing the container. Have you tried restarting the docker daemon and/or restarting the host?
I changed the docker-compose.yml volumes section and gave the mysql volume a new name, like this:
volumes:
  - ${DATA_PATH_HOST}/mysql:/var/lib/mysql_new
  - ${MYSQL_ENTRYPOINT_INITDB}:/docker-entrypoint-initdb.d
Then start the container and everything works fine:
docker-compose up -d --build mysql
