I have two linked Docker containers. The first one is a MariaDB database and the other one is a MediaWiki instance. I don't need them anymore, so I would like to stop and remove them.
The problem is that I can't.
I tried a lot of things already:
Executing docker update --restart=no "container", but the container keeps restarting after I stop it with docker container stop "container".
Tried to remove the images, with no joy, as they are still in use by the containers (even if I kill a container and then quickly try to delete its image).
Restarted the entire Docker service with systemctl restart docker.
I even restarted my entire Server.
All of these with no positive result.
I'm kinda frustrated.
I have two more containers (pyload and netdata) running very well; no problems at all with them.
As I'm new to the Docker world, please tell me what you need to help me :)
Thank you in advance!
Killing a container does not delete it. You have to remove the stopped containers before you can remove the images.
docker ps shows you the list of currently running containers; docker ps -a shows all containers (stopped and running).
First clean up all the containers that are not running via docker rm $(docker ps -aq) (running containers are simply skipped with an error).
Then delete the images that you don't want with docker rmi $(docker images -q)
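For example, if the two containers were named something like mediawiki and mariadb (check the actual names with docker ps -a; the names and image tags here are just placeholders), the full sequence might look like this:
docker stop mediawiki mariadb     # stop both containers
docker rm mediawiki mariadb       # remove the stopped containers
docker rmi mediawiki mariadb      # now the images are no longer in use and can be removed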
Try resetting to factory defaults by clicking the bug icon in Docker Desktop and choosing "Reset to factory defaults". It will remove everything. It worked for me. I spent days figuring this out and finally got it. Hopefully it helps you too!
I succeeded in connecting to a remote server configured with Docker through VSCode. However, a list of containers from the past was fetched in VSCode's Remote Explorer. Looking at this list, these are obviously containers made from images I downloaded a few days ago. I don't know why this is happening.
Presumably it is a problem with the settings.json file or with some log.
I pressed F1 in VSCode and selected Remote-Containers: Attach to Running Container...
Then the docker command was entered automatically in the terminal. Here a container (b25ee2cb9162) appeared whose origin I don't know.
After running this container, a new window opens with the message Starting Dev Container.
This is the list of containers I mentioned, created a few days ago; this is what VSCode showed me.
What's the reason that this happened?
The containers you are seeing are the same ones you would get from docker container ls -a. They have exited and are not automatically cleaned up by Docker unless you pass the --rm option on the CLI.
The docs for the --rm option explain the reason for this nicely:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag:
According to this answer about non-running containers taking up system resources, you don't have to be concerned about them taking up much space beyond minimal disk space.
To remove those containers, you have a few options:
[Preemptive] Use the --rm flag when running a container
You can pass the --rm flag when you run a container with the Docker CLI to remove the container after it has exited, so old containers don't accumulate.
As the docs mention, the downside is that after the container exits, it's difficult to debug why something failed inside it.
See the docs here if using docker run: https://docs.docker.com/engine/reference/run/#clean-up---rm
See this answer if using docker-compose run
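As a minimal illustration (the image and command here are just examples), a container started like this is cleaned up as soon as it exits:
docker run --rm ubuntu echo "gone after exit"
docker ps -a    # the exited container no longer shows up here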
Clean up existing containers from the command line
Use the docker container prune command to remove all stopped containers.
See the docs here: https://docs.docker.com/engine/reference/commandline/container_prune/
See this related SO answer if you're looking for other options.
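For example, to remove every stopped container in one go (pass -f to skip the confirmation prompt):
docker container prune       # asks for confirmation, then removes all stopped containers
docker container prune -f    # same, without the prompt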
Clean up containers from VSCode
The VSCode Docker Containers extension lets you clean up containers if you open the command palette and enter Docker Containers: Remove.
Or you can simply right-click those containers.
I want to know what containers I have available, and it seems like the command docker ps does it, but this command also appears to be running the containers.
You can see in the picture that the status of the containers is "Up less than a second", meaning they have just been started by the docker ps command.
What command can I run to just see the containers without running them?
Best regards.
docker ps shouldn't run the containers. In the unlikely event that it does, I would go through the standard steps of reporting Docker bugs: Reproduction steps, Docker version, etc. If it really is a bug, you could roll back to an older Docker version and surely there are a bunch where docker ps does not contain such critical bugs.
Most likely the problem is specific to your environment. The easy way to confirm this is to try the same commands on a different machine or VM. On my machine, for instance, docker ps does not run containers - once you find a machine that also has correct docker ps behavior you can then start comparing them to find the difference.
Maybe you have docker ps aliased to something else or something like that? There are other ways to check container status, such as Portainer and ctop. I think these probably rely on the same logic as docker ps, but you should see if they have the same issue in any case.
By the way, the status is just the status of the container. It could be that the container is failing a few seconds after launch, and being restarted by Docker, which is why you see the message. Try running a standard container like ubuntu or hello-world with simple parameters (definitely without --restart=always or --rm), and see if that also gets "restarted". My bet is it won't, unless you have a serious misconfiguration/Docker bug (in which case do a clean install of older Docker version).
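If you want to rule out a restart policy or a crash loop, a quick check is (the container name here is just an example):
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' my-container
docker inspect -f '{{.State.Status}} / restarts: {{.RestartCount}}' my-container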
To directly answer your question:
What command can I run to just see the containers without running them?
You can also run the command docker container list,
which lists the containers without running anything.
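If you also want to see stopped containers (still without starting anything), add --all, which is equivalent to docker ps -a:
docker container list --all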
I use the dotnet3.5 image to run containers on Windows 10 with Docker Desktop 2.1.0.1 (37199). Sadly, I found that after I had created a container, did nothing with it, and left it alone for 4 days, the container automatically became unstoppable. The snapshot tells the story.
The container still shows up in docker ps -a, but I cannot get into it with docker exec. And since I cannot stop it (docker stop container2 just hangs), I cannot rm the container either.
The only way to resolve this issue is to restore docker desktop's factory setting.
By the way, although in the snapshot the running image is aspnet:3.5-windowsservercore-10.0.14393.953, this issue also happens with the aspnet:3.5 image.
Does anyone have good ideas to the unstoppable container? Any suggestions are welcome.
The command used above is incorrect; there is a difference between the commands and options we use. docker ps or docker container ls will give you the list of currently running (active) containers.
Adding -a will list all containers created to date, i.e. both running and stopped (exited) ones.
In your case, the container is no longer running and you are trying to access one that isn't there anymore, which is why the command hangs.
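In other words, the two listings differ like this:
docker ps        # currently running containers only
docker ps -a     # every container, including stopped/exited ones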
When starting a docker container (not developed by me), docker says a network has not been found.
Does this mean the problem is within the container itself (so only the developer can fix it), or is it possible to change some network configuration to fix this?
I'm assuming you're using docker-compose and seeing this error. I'd recommend
docker-compose up --force-recreate <name>
That should recreate the containers as well as supporting services such as the network in question (it will likely create a new network).
Shut down properly first, then restart:
docker-compose down
docker-compose up
I was facing a similar issue and this worked for me:
Run docker container ls -a and remove the stale containers by ID with docker container rm ca877071ac10 (that is the container ID).
The problem was that there were some old container instances which had not been removed. Once all the old terminated instances are removed, you can start the containers with the docker-compose file.
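Put together, the cleanup looks roughly like this (the container ID is the example one from above; use your own IDs):
docker container ls -a             # find the leftover container IDs
docker container rm ca877071ac10   # remove each stale container
docker-compose up -d               # recreate the stack, including its network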
This can be caused by some old service that has not been killed. First add the
--remove-orphans flag when bringing down your containers, to remove any leftover services still running, then bring the containers back up:
docker-compose down --remove-orphans
docker-compose up
This is based on this answer.
In my case the steps that produced the error were:
The server restarted, and the containers from a docker-compose stack remained stopped.
A network prune ran, so the network associated with the stack's containers was deleted.
Running docker-compose --project-name "my-project" up -d failed with the error described in this topic.
Solved by simply adding --force-recreate, like this:
docker-compose --project-name "my-project" up -d --force-recreate
This presumably works because the containers are recreated and linked to the freshly recreated network (the previous one having been pruned, as described in the preconditions).
Apparently a VPN was causing this. Turning off the VPN and resetting Docker to factory settings solved the problem on two computers in our company. A third, personal computer without the VPN never showed the problem.
Amongst other things, docker system prune will remove 'all networks not used by at least one container', allowing them to be recreated on the next docker-compose up.
More precisely, docker network prune can also be used to remove just the unused networks.
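For example (note that this removes every unused network, not just the one belonging to this stack, so check the list first):
docker network ls       # see which networks currently exist
docker network prune    # remove all networks not used by any container (asks for confirmation)
docker-compose up -d    # the networks defined in the compose file are recreated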
I have one container that is dead, but I can't remove it, as you can see below.
How can I remove it? Or how can I clean my system manually to remove it?
:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78b0dcaffa89 ubuntu:latest "bash -c 'while tr..." 30 hours ago Dead leo.1.bkbjt6w08vgeo39rt1nmi7ock
:~$ docker rm --force 78b0dcaffa89
Error response from daemon: driver "aufs" failed to remove root filesystem for 78b0dcaffa89ac1e532748d44c9b2f57b940def0e34f1f0d26bf7ea1a10c222b: no such file or directory
It's possible Docker needs to be restarted.
I just ran into the same error message when trying to remove a container, and restarting Docker helped.
I'm running Version 17.12.0-ce-mac49 (21995)
To restart Docker, go to "Preferences" and click on the little bomb in the upper right hand corner.
In my situation I have Docker running off of an expansion drive on my MacBook. After coming out of sleep mode, the expansion drive was automatically ejected (undesirable). After mounting the drive again, I realized Docker needed to be restarted in order to initialize everything again. At this point I was able to remove containers (docker rm -f).
Maybe it's not the same situation, but restarting Docker is a useful thing to try.
While browsing related issues, I found something similar ("Driver aufs failed to remove root filesystem", "device or resource busy"), and about 80% of the way down there was a solution which said to use docker stop cadvisor; then docker rm [dead container]
Edit 1: docker stop cadvisor instead of docker stop deadContainerId
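If you do run cAdvisor (it holds references to container filesystems), the sequence for the container from this question would be:
docker stop cadvisor
docker rm 78b0dcaffa89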
As the error message suggests, Docker was configured to use AUFS as the storage driver, but the maintainers recommend using overlay2 instead, as you can read at this link:
https://github.com/moby/moby/issues/21704#issuecomment-312934372
So I changed my configuration to use overlay2 as the Docker storage driver. Doing that removes EVERYTHING from the old storage driver, which means my "Dead" container was gone as well.
It is not exactly a solution to my original question, but the desired result was accomplished.
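If you want to try the same switch on Linux, the storage driver is usually set in /etc/docker/daemon.json; be aware that containers and images stored under the old driver become inaccessible, so back up anything you need first:
sudo nano /etc/docker/daemon.json       # add: { "storage-driver": "overlay2" }
sudo systemctl restart docker
docker info | grep "Storage Driver"     # should now report overlay2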
Let me share how I got here. The disk on my host was getting full while working with Docker containers, and I ended up getting "failed to remove root filesystem" myself as well. I burned some time before realizing that my disk was full, and then more time, after freeing up some space, trying to restart Docker. Nothing worked; only closing everything and rebooting the machine did. I hope this saves you some time.