This is probably because I don't understand something about how Docker works, but I'm trying to get all the data from a container cleared. In this case the stack is for a Jellyfin server.
I've tried removing the container and recreating it, and I've tried removing the stack and recreating it, but no matter what I do, as soon as I navigate to the Jellyfin server in a browser, all the accounts and metadata are still there!
All access is being done through Portainer. To delete a container, I stop it and then click Remove. To delete a stack, I first delete any containers it has, then stop the stack and delete it.
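For what it's worth, those Portainer steps correspond roughly to the CLI commands below, and by default none of them remove volumes, which is where a Jellyfin stack typically keeps its accounts and metadata (a hedged sketch; jellyfin is a placeholder container name):
docker stop jellyfin        # stop the container
docker rm jellyfin          # remove the container, but not its volumes
docker volume ls            # named volumes from the stack still show up here
docker volume rm <volume>   # removing the volume is what actually clears the data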
Apologies in advance if these questions are very basic, but I failed to find answers that are on-point:
In terms of clearing the container cache (not restart policies, and not SIGTERM handling):
1- What is the difference between running the following commands:
docker restart
docker-compose restart
docker-compose down followed by docker-compose up
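For reference, a rough sketch of what each of these does, per the Docker docs (web is a placeholder service/container name):
docker restart web          # stops and starts the same container; its filesystem is kept
docker-compose restart web  # restarts the existing containers; does not pick up compose-file changes
docker-compose down         # stops and removes the containers (and the default network)
docker-compose up -d        # creates brand-new containers from the images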
2- When restarting a container, in what order do the following commands get executed:
docker stop
docker kill
docker rm
docker start
docker run
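A restart never involves all five of these; a sketch of how they relate, assuming default signal settings (web is a placeholder name):
docker restart web   # equivalent to docker stop web followed by docker start web
docker stop web      # sends SIGTERM, then SIGKILL after a grace period (10s by default)
docker kill web      # sends SIGKILL immediately, with no graceful shutdown
docker rm web        # removes the stopped container and its writable layer
docker run ...       # only rm followed by run gives you a truly fresh container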
UPDATE: Explaining the Problem
According to this issue on the Docker registry, the garbage-collection command doesn't clear the cache for deleted image blobs. This causes an error when pushing any previously deleted image to the registry again. One solution is to restart the container: it does the job of clearing the container cache, including the garbage-collection blob cache. I tested it and it works.
But I am confused, because I used to run docker-compose down followed by docker-compose up and the cache didn't get cleared. Only when restarting the container (via docker restart, docker-compose restart, or restarting the server itself) does pushing previously deleted images work again.
(Note that pushing new images always works).
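For context, the workaround looks roughly like this (a sketch; registry is a placeholder container name, and the config path is the registry:2 image's default):
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml   # deletes unreferenced blobs
docker restart registry   # clears the in-memory blob cache; pushes of previously deleted images then succeed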
That is why I would appreciate it if someone could explain to me the difference between the above commands from a container-cache point of view.
Thank you
I have been using docker-compose to set up some Docker containers.
I am aware that the logs can be viewed using docker logs <container-name>.
All logs are printed to STDOUT and STDERR when the containers are run; there is no log 'file' being generated in the containers.
But these logs (obtained from the docker logs command) are removed when their respective containers are removed by commands like docker-compose down or docker-compose rm.
When the containers are created and started again, there is a fresh set of logs. No logs from the previous 'run' are present.
I am curious if there is a way to prevent the logs from being removed along with their containers.
Ideally I would like to keep all my previous logs even when the container is removed.
I believe you have two ways you can go:
Make the containers log to files
You can reconfigure the applications inside the container to write to log files rather than stdout/stderr. As you put it, you'd like to keep the logs even when the container is removed, so make sure the files are stored in a (bind-)mounted volume.
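For example (a sketch; the paths, image, and container name are placeholders), assuming the app inside is configured to write its log files under /var/log/myapp:
docker run -d --name myapp -v "$PWD/logs:/var/log/myapp" myimage
docker rm -f myapp   # the files under ./logs on the host survive this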
Reconfigure Docker to store the logs
Reconfigure Docker to use a different logging driver. This can be especially helpful, as it saves you from changing each and every container.
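For instance, the journald driver hands log lines off to the host's journal, where they outlive the container (a sketch; myapp and myimage are placeholders):
docker run -d --name myapp --log-driver=journald myimage
journalctl CONTAINER_NAME=myapp   # still works after the container is removed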
Can I update labels on a container using docker-compose without restarting the container?
Ideal scenario:
- change labels in docker-compose.yml
- save docker-compose.yml
- run a command to update the labels without restarting the container
As a general rule, changing the settings or code running inside a container involves deleting and recreating the container. This is totally normal, and docker-compose up will do it for you when necessary. (Remember to make sure any data you care about is stored outside the container.)
At the Docker API level, there is only a limited set of things that can be changed via the "Update a container" call, and labels aren't among them. That means anything that manages a container, whether direct docker commands or Docker Compose, must delete and recreate the container to change its labels.
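You can see that limited surface with docker update, which covers resource limits and the restart policy but has no flag for labels (web is a placeholder name):
docker update --cpus 2 web                   # resource limits can change in place
docker update --restart unless-stopped web   # so can the restart policy
docker rm -f web                             # but labels require removing the container...
docker run --label foo=bar ...               # ...and recreating it (or just let docker-compose up do this)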
I'm deploying a container using docker stack with a restart-policy. One of the container's attached volumes had some corrupted data, causing the container to repeatedly crash and eventually hit the max_restarts limit. I manually fixed the volume, and now want to restart the container. I have been unable to find a graceful way to ask Docker "please reset the max_restarts on this container". Does one exist?
I was able to proceed by doing a service rm and then re-creating the service, but this seems a bit heavy-handed just for resetting a flag.
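One possibly lighter option, though I have not verified that it resets the restart counter (treat that as an assumption), is to force Swarm to replace the service's tasks in place:
docker service update --force myservice   # reschedules the tasks without changing the service spec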
Background
I have a container; it's running a lot of stuff, including a frontend that is exposed to other developers.
Users are going to be uploading some of their shell/Python scripts onto my container to be run.
To keep my container working, my plan is to send each script to a sibling container, which will run it and send me back the response. The users' scripts should be able to download external packages, etc.
Then I want the sibling container to be "cleaned".
Question
Can I have that sibling container restart itself from its source image once it is done running the user's script? That way, users always get a consistently clean container to run their scripts on.
Note
If I am completely barking up the wrong tree with this solution, please let me know. I am trying to get some weird functionalities going and could be approaching this from the wrong angle.
EDIT 1 (other approaches and why I don't think I like them)
Two alternatives I have thought of are having the frontend container run containers inside itself, or having the sibling container run Docker containers inside itself. But these two solutions run into the difficulty of Docker-in-Docker. Another option may be to raise my frontend container's permissions until it can create sibling containers on the fly for running scripts, but I am worried that this may result in giving my frontend container unnecessarily high permissions.
EDIT 2 (all the documentation I have found on having a container restart itself)
I am aware of the documentation on autorestart, but I don't believe that this will clean the container's contents; for instance, a file that was downloaded onto it would still be there after a restart.
My answer has some severe security implications.
You could control your sibling containers from your main container if you map the Docker socket from the host into your main container:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now you have complete control over the Docker engine from inside your main container. You can start, stop, and remove your sibling containers, and spawn new (clean) siblings.
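For example, each user script can get a brand-new, throwaway sibling (a sketch; the python:3 image and the script path are placeholders, and it assumes the docker CLI is installed in the main container):
docker run --rm -i python:3 python < /tmp/user_script.py   # --rm discards the sibling on exit, so anything the script downloaded is gone and the next run starts from the clean image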
But remember that this is effectively granting host root rights to your main container.