From a cache perspective, what is the difference between docker restart and docker-compose restart?

Apologies in advance if these questions are very basic, but I failed to find on-point answers:
In terms of clearing the container cache (not policies and not SIGTERM):
1- What is the difference between running the following commands:
docker restart
docker-compose restart
docker-compose down followed by docker-compose up
2- When restarting a container, in what order do the following commands get executed:
docker stop
docker kill
docker rm
docker start
docker run
UPDATE: Explaining the Problem
According to this issue in the Docker registry repository, the garbage collection command doesn't clear the cache related to deleted image blobs. This causes an error when pushing any previously deleted image to the registry again. One solution is to restart the container: it clears the container cache, including the garbage collection blob cache. I tested it and it works.
But I am confused, because I used to run docker-compose down followed by docker-compose up and the cache didn't get cleared. Only when restarting the container, either with docker restart, docker-compose restart, or by restarting the server itself, does the push of previously deleted images work again.
(Note that pushing new images always works).
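For reference, the workaround I am describing looks roughly like this (the container name registry and the config path are placeholders for my setup):
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
docker restart registry
After the restart, pushes of previously deleted images succeed again; after docker-compose down and docker-compose up they do not.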
That is why I would appreciate it if someone explain to me the difference between the above commands from a container cache point of view.
Thank you

Related

Cannot start Cassandra container, getting "CommitLogReadException: Could not read commit log descriptor in file"

JVMStabilityInspector.java:196 - Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: \
Could not read commit log descriptor in file /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
I ran the Cassandra container in Docker, the above error appears, and the container stops.
It worked well before, but it stopped working after I deleted and recreated the Cassandra container.
I think we need to clear the /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log file.
However, I am not used to working with Docker.
How do I erase this file?
I'm not sure if erasing the file will fix the error.
I also asked ChatGPT about this problem, but after an hour of questions it told me to try again later, so I haven't solved it yet. That's why I'm posting on Stack Overflow.
So this error likely means that the commitlog file specified is corrupted. I would definitely try deleting it.
If it's on a running docker container, you could try something like this:
Run a docker ps to get the container ID.
Remove the file using docker exec. If my container ID is f6b29860bbe5:
docker exec f6b29860bbe5 rm -rf /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
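Then, assuming the crash was caused by that commit log segment, restart the container so Cassandra can replay the remaining logs cleanly (same container ID as above):
docker restart f6b29860bbe5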
Your question is missing a lot of crucial information, such as which Docker image you're running, the full command you used to start the container, and other relevant settings you've configured, so I'm going to make several assumptions.
The official Cassandra Docker image (see the Quickstart Guide on the Cassandra website) that we (the Cassandra project) publish stores the commit logs in /var/lib/cassandra/commitlog/ but your deployment stores them somewhere else:
Could not read commit log descriptor in file /opt/cassandra/data/commitlog/CommitLog-7-1676434400779.log
Assuming that you're using the official image, it indicates to me that you have possibly mounted the container directories on a persistent volume on the host. If so, you will need to do a manual cleanup of all the Cassandra directories when you delete the container and recreate it.
The list of directories you need to empty includes:
data/
commitlog/
saved_caches/
In your case, it might be just as easy to delete the contents of /opt/cassandra/.
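For example, assuming those directories are bind-mounted from the host (the host path below is only a placeholder for wherever your volume actually lives):
sudo rm -rf /path/on/host/cassandra/data/* \
            /path/on/host/cassandra/commitlog/* \
            /path/on/host/cassandra/saved_caches/*
Stop the container first, and only do this if you are prepared to lose the data.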
If those directories are not persisted on the Docker host then you can open an interactive bash session into the Cassandra container. For example if you've named your container cassandra:
$ docker exec -it cassandra bash
For details, see the docker exec manual on the Docker Docs website. Cheers!

Why does vscode's remote explorer get a list of old containers? (Docker)

I succeeded in connecting to a remote server configured with Docker through VS Code. However, VS Code's Remote Explorer fetched a list of containers from the past. If you look at this list of containers, they are clearly containers created from images I downloaded a few days ago. I don't know why this is happening.
Presumably, it is a problem with the settings.json file or a problem with some log.
I pressed f1 in vscode and select Remote-Containers: Attach to Running Container...
Then the docker command was entered automatically in the terminal. Here, a container (b25ee2cb9162) appeared whose origin I don't know.
After running this container, a new window opens with the message Starting Dev Container.
This is the list of containers (from the images I said I downloaded a few days ago). This is what VS Code showed me.
What's the reason that this happened?
The containers you are seeing are the same ones you would see if you ran docker container ls -a. They have exited and are not automatically cleaned up by Docker unless you pass the --rm option when starting them.
The docs for the --rm option explain the reason for this nicely:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag:
As explained in this answer about non-running containers taking up system resources, you don't have to be concerned about them taking up much space beyond minimal disk space.
To remove those containers, you have a few options:
[Preemptive] Use --rm flag when running container
You can pass the --rm flag when you run a container with the Docker CLI so that the container is removed after it exits and old containers don't accumulate (see the example after the links below).
As the docs mention, the downside is that after the container exits, it's difficult to debug why it exited if something failed inside it.
See the docs here if using docker run: https://docs.docker.com/engine/reference/run/#clean-up---rm
See this answer if using docker-compose run
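For example (a rough sketch; the image and service names here are placeholders):
docker run --rm -it ubuntu:22.04 bash
docker-compose run --rm web bash
In both cases the container is removed automatically once the process exits.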
Clean up existing containers from the command line
Use the docker container prune command to remove all stopped containers.
See the docs here: https://docs.docker.com/engine/reference/commandline/container_prune/
See this related SO answer if you're looking for other options.
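As a quick example of the prune command (the -f flag simply skips the confirmation prompt):
docker container prune
docker container prune -f
Note that this removes all stopped containers on the host, not just the ones VS Code created.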
Clean up containers from VSCode
With the VS Code Docker extension, you can clean up containers by opening the command palette and entering Docker Containers: Remove.
Or you can simply right-click those containers.

Rancher configuration lost

I have restarted the rancher host a few times while configuring rancher.
Nothing was lost, even though containers had been started and stopped several times during these reboots.
I had to stop and run the container again to set a specific IP for the UI, so I could use the other IP addresses available in the host as HostPorts for containers.
This is the command I had to execute again:
docker run -d --restart=unless-stopped -p 1.2.3.4:80:80 -p 1.2.3.4:443:443 rancher/rancher
After running this, Rancher started up as a clean installation, asking me for a password, to set up a cluster, and to do everything from scratch, even though I see a lot of containers running.
I tried rerunning the command that rancher showed on the first installation (including the old token and ca-checksum). Still nothing.
Why is this happening? Is there a way to restore the data, or should I do the configuration and container creation again?
What is the proper way of cleaning up, if I need to start from scratch? docker rm all containers and do the setup again?
UPDATE
I just found some information from another member in a related question, since this problem happened after following a suggestion from another user.
Apparently there is an upgrade process that needs to be followed, but I am missing what needs to be done exactly. I can see my old, stopped container here: https://snag.gy/h2sSpH.jpg
I believe I need to do something with that container so the new rancher container becomes online with the previous data.
Should I be running this?
docker run -d --volumes-from stoic_newton --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
Ok, I can confirm that this process works.
I have followed the guide here: https://rancher.com/docs/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/#completing-the-upgrade
I just had to stop the new Rancher container which was lacking the data, copy the data from the original container to create a backup, and then restart the new container with the volumes from the data container created in the process.
I could probably have launched the new rancher container with the volumes from the old rancher container, but I preferred playing it safe and following every step of the guide, and as a plus I ended up with a backup :)
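For anyone following along, the steps from that guide looked roughly like this in my case (stoic_newton was my old container; the other names and version tags are placeholders for my setup, so treat this as a sketch rather than a copy-paste recipe):
docker stop <new_rancher_container>
docker create --volumes-from stoic_newton --name rancher-data rancher/rancher:<old_version>
docker run --volumes-from rancher-data -v "$PWD:/backup" busybox tar zcvf /backup/rancher-data-backup.tar.gz /var/lib/rancher
docker run -d --volumes-from rancher-data --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest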

In Docker, how to change a shared folder while the container is running

I run my Docker container with the command below.
docker run -v /host/folder:/docker/container/folder my_image
If I want to change the shared folder, do I have to restart the container?
This has been requested many times, and it is currently not possible. Containers should be ephemeral: you should be able to restart the container without loss of data.
Check the following issue for more reasons why this is not possible.
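The usual workaround is to recreate the container with the new bind mount. A minimal sketch, reusing the image and container path from your question (the container name is a placeholder; add back any other flags you normally pass):
docker stop my_container
docker rm my_container
docker run -v /new/host/folder:/docker/container/folder my_image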

How to remove docker container even if root filesystem does not exists?

I have one container that is dead, but I can't remove it, as you can see below.
How can I remove it? Or how can I clean my system manually to remove it?
:~$ docker ps -a
CONTAINER ID   IMAGE           COMMAND                  CREATED        STATUS   PORTS   NAMES
78b0dcaffa89   ubuntu:latest   "bash -c 'while tr..."   30 hours ago   Dead             leo.1.bkbjt6w08vgeo39rt1nmi7ock
:~$ docker rm --force 78b0dcaffa89
Error response from daemon: driver "aufs" failed to remove root filesystem for 78b0dcaffa89ac1e532748d44c9b2f57b940def0e34f1f0d26bf7ea1a10c222b: no such file or directory
It's possible Docker needs to be restarted.
I just ran into the same error message when trying to remove a container, and restarting Docker helped.
I'm running Version 17.12.0-ce-mac49 (21995)
To restart Docker, go to "Preferences" and click on the little bomb in the upper right hand corner.
In my situation, I have Docker running off of an expansion drive on my MacBook. After coming out of sleep mode, the expansion drive was automatically ejected (undesirable). But after mounting the drive again, I realized Docker needed to be restarted in order to initialize everything again. At this point I was able to remove containers (docker rm -f).
Maybe it's not the same situation, but restarting Docker is a useful thing to try.
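If you're on Linux rather than Docker Desktop (assuming a systemd-based distribution), the equivalent would be something like:
sudo systemctl restart docker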
While browsing related issues, I found a similar one, "Driver aufs failed to remove root filesystem", "device or resource busy", and at around 80% of the way down there was a solution which said to use docker stop cadvisor and then docker rm [dead container].
Edit 1: docker stop cadvisor instead of docker stop deadContainerId
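In concrete terms, with the container ID from the question (the first command only applies if you are actually running cAdvisor):
docker stop cadvisor
docker rm 78b0dcaffa89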
As the error message states, Docker was configured to use AUFS as the storage driver, but the Docker maintainers recommend using overlay2 instead, as you can read at this link:
https://github.com/moby/moby/issues/21704#issuecomment-312934372
So I changed my configuration to use overlay2 as the Docker storage driver. When you do that, everything stored under the old storage driver becomes inaccessible, which means my "Dead" container was gone as well.
It is not exactly a solution to my original question, but it accomplished the result.
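For reference, the switch itself is just a daemon configuration change (standard Linux paths; back up any images you care about first, because everything under the old storage driver becomes unreachable). In /etc/docker/daemon.json:
{
  "storage-driver": "overlay2"
}
Then restart the daemon, for example with sudo systemctl restart docker.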
Let me share how I got here. My disk on the host was getting full while working with Docker containers, and I ended up hitting "failed to remove root filesystem" myself as well. I burned some time before I realized my disk was full, and then more time, even after freeing up space, trying to restart Docker. Nothing worked except closing everything and rebooting the machine. I hope this saves you some time.
