how to manage colima <> docker effectively - docker

I'm trying to learn more about Docker with Colima, especially on Apple Silicon.
So far... I understand the basics and how the two actually work together.
But I still have some questions about how to manage them, namely:
- stopping colima directly without losing data/containers
(colima stop before or after docker stop [c-id], is it needed?)
- restarting colima easily without having to set everything up again (colima actually drains a lot of RAM)
(right now, whenever I stop colima and restart it, my past containers are no longer running; I think colima stops them too)
(especially for different architectures, I want to know if there's a way to stop colima and then restart it without having to care about the inner containers' states)
- how to switch the default daemon on colima
(how to change which daemon is used whenever we run colima start)
- how to manage daemon names
(I haven't seen any way to rename existing daemons without recreating them from scratch)
- how to run colima on boot with a specific daemon
- change colima context without affecting docker
- getting stats for the running colima daemon (actual CPU & RAM usage, not just what is allocated)
Thank you for your help. If any question isn't clear or doesn't make sense, feel free to quote and correct it.

stopping colima directly without losing data/containers
When you execute docker stop [c-id], only the container whose ID you specify is stopped. The colima stop command, on the other hand, stops the VM that colima runs Docker in. If you want your containers to start again when the daemon comes back up, you need to set a docker restart policy on them.
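As a sketch, with a hypothetical container named web, a restart policy can be set either at creation time or on an existing container:
docker run -d --restart unless-stopped --name web nginx   # set the policy at creation time
docker update --restart unless-stopped web                # or change it on an existing container
With unless-stopped, the container comes back whenever the daemon does, unless you stopped it yourself.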
restarting colima easily without having to set everything up again (colima actually drains a lot of RAM)
You can't skip the restart itself, and as mentioned above, restarting the containers is something you need to configure yourself via a restart policy. You do not lose any data with colima, though.
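A minimal sketch of that workflow, assuming you are fine with every container auto-starting (the --cpu/--memory values are illustrative):
docker update --restart=always $(docker ps -q)   # mark all running containers to come back up
colima stop                                      # shuts down the VM; disk contents persist
colima start --cpu 2 --memory 4                  # bring the VM back with a smaller allocation
After the VM is up again, the marked containers are restarted by the daemon without any manual setup.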
how to switch the default daemon on colima
You can either change the configuration by running colima start --edit or pass the arguments on the command line. Colima defaults to docker, but you can switch at any time: run colima stop; colima start --runtime docker to use docker, or colima stop; colima start --runtime containerd to use containerd.
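On the command line that looks like:
colima start --edit                  # open the saved VM configuration in your editor
colima stop
colima start --runtime containerd    # or --runtime docker, the default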
how to manage daemon names
I don't understand this question.
how to run colima on boot with a specific daemon
I saw a feature request for this a while ago, but I don't think it has been released yet.
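One workaround people use, assuming colima was installed through Homebrew, is to register it as a login service; note this starts the default profile, and I'm not aware of a supported way to pick a specific daemon/profile this way:
brew services start colima    # start colima now and on every login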
change colima context without affecting docker
What do you mean by changing the colima context? Are you referring to the runtime? If so, you can't do that without impacting the running containers. Keep in mind that colima runs the docker engine for you, and you can't use both docker and containerd at the same time.
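If by context you instead mean the Docker CLI context: colima registers itself as a context named colima (see also the colima ssh question below), so you can point the docker CLI at a different daemon without touching colima at all:
docker context ls             # list the contexts the docker CLI knows about
docker context use colima     # send docker commands to the colima VM
docker context use default    # switch back to another daemon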
getting stats for the running colima daemon (actual CPU & RAM usage, not just what is allocated)
Run colima status to see the resources the colima VM is using.
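colima list is also worth knowing, though keep in mind both commands report what is allocated to the VM rather than live usage; for live CPU/RAM figures you would have to look inside the VM itself (e.g. via colima ssh and top):
colima status    # runtime, architecture, and socket of the running VM
colima list      # per-profile CPU, memory, and disk allocations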
The colima team did a good job documenting the most important parts of the tool.

Related

Reboot Docker container from inside

I'm working with a Docker container with Debian 11 inside, running a server.
I need to update this server and do other things on a regular basis. I've written several scripts that can do it, but I've run into a serious problem:
if I want to update the server and other packages, I need to reboot the container.
I'm obviously able to do so from the computer Docker is installed on (in my case Docker Desktop running with WSL2 on Windows 10); I can reboot the container easily, but I need to automate it.
The simplest way would be to add a shutdown command to the scripts I've written. I read about it but found nothing. Is there any way to reboot this container from the Debian inside it? If not, how can it be achieved, and how complicated is it?
I tried invoking the standard Linux commands to shut down or reboot the system on the Debian inside the container.
I expect a guide, if it's possible and worth the effort.
The only way to trigger a restart of a container from within the container is to first set a restart policy on the container, such as --restart=on-failure, and then simply stop the container, i.e., let the main process terminate itself. The Docker engine will then restart the container.
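A sketch of that approach, with hypothetical image and container names:
docker run -d --restart on-failure --name app my-image
Inside the container, the update script then only needs to end the main process with a non-zero code:
exit 1           # if the script itself is PID 1, a plain non-zero exit suffices
kill -TERM 1     # otherwise, signal PID 1 (works only if PID 1 handles the signal)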
This, however, is not the way Docker is intended to be used! Docker containers are not VMs and instead are meant to be ephemeral:
By "ephemeral", we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
This means you shouldn't be updating the server within a running container; instead, you should update/rebuild the image and start a new container from it!
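In practice that update flow looks something like this (image and container names are hypothetical):
docker build -t my-server:latest .               # bake the updates into a new image
docker stop my-server && docker rm my-server     # retire the old container
docker run -d --name my-server my-server:latest  # start a fresh one from the new image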

When I do colima ssh I am still in my current working directory with just a colima prompt

I don't know what is happening here; I thought I would be inside the VM.
colima is a docker-for-mac replacement, not a VM (there is a VM inside both colima and docker-for-mac, but that is not something you need to go into).
colima will add itself as another docker context on your mac, and you can run docker commands as you are accustomed to.
If you find the need to ssh into colima, it's described here, but note: THIS IS NOT SOMETHING YOU NEED TO DO NORMALLY!

Does restarting the docker service kill all containers?

I'm having trouble with docker: docker ps won't return and is stuck.
I found that doing a restart of the docker service, i.e. something like
sudo service docker restart (https://forums.docker.com/t/what-to-do-when-all-docker-commands-hang/28103/4)
might help. However, I'm worried it will kill all the running containers. (I guess the service is what lets docker containers run?)
In the default configuration, your assumption is correct: if the docker daemon is stopped, all running containers are shut down. But, as outlined in the link, this behaviour can be changed on docker >= 1.12 by adding
{
"live-restore": true
}
to /etc/docker/daemon.json. The crux: the daemon must be restarted for this change to take effect. Please take note of the limitations of live restore, e.g. only patch version upgrades are supported, not major version upgrades.
Another possibility is to define a restart policy when starting a container. To do so, pass one of the following values via the command line argument --restart when starting the container with docker run:
- no: Do not automatically restart the container (the default).
- on-failure: Restart the container if it exits due to an error, which manifests as a non-zero exit code.
- always: Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted (see the second bullet listed in the restart policy details).
- unless-stopped: Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts.
For your specific situation, this would mean that you could:
- restart all containers with --restart always (more on that further below)
- re-configure the docker daemon to allow for live restore
- restart the docker daemon (which is not yet configured for live restore, but will be after this restart)
This restart will shut down and then restart all your containers once. But from then on, you should be free to stop the docker daemon without your containers terminating.
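Putting those three steps together, a sketch for a systemd-based host (if /etc/docker/daemon.json already has other settings, merge the key in by hand instead of overwriting the file):
docker update --restart=always $(docker ps -q)                        # step 1
echo '{ "live-restore": true }' | sudo tee /etc/docker/daemon.json    # step 2
sudo systemctl restart docker                                         # step 3: containers bounce once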
Handling major version upgrades
As mentioned above, live restore cannot handle major version upgrades. For a major version upgrade, one has to tear down all running containers. With a restart policy of always, however, the containers will be restarted after the docker daemon is restarted following the upgrade.

How to disable/leave docker swarm mode when starting docker daemon?

Is there any way to disable/leave the swarm mode of docker when starting the daemon manually, e.g. dockerd --leave-swarm, instead of starting the daemon and leaving swarm mode afterwards, e.g. with docker swarm leave?
Many thanks in advance,
Aljoscha
I don't think this is anticipated by the docker developers. When a node leaves the swarm, it needs to notify the swarm managers that it will not be available anymore.
Leaving the swarm is a one-time action, and passing it as a configuration option to the daemon would be odd. You may try suggesting that on docker's GitHub, but I don't think it will find many supporters.
Perhaps a more intuitive option would be the ability to start dockerd in a way that suspends communication with the swarm manager, so your dockerd runs only locally; if you then start it without that flag (--local?), it would reconnect to the swarm it was attached to before.

How do you kill a docker container's default command without killing the entire container?

I am running a docker container which contains a node server. I want to attach to the container, kill the running server, and restart it (for development). However, when I kill the node server it kills the entire container (presumably because I am killing the process the container was started with).
Is this possible? This answer helped, but it doesn't explain how to kill the container's default process without killing the container (if possible).
If what I am trying to do isn't possible, what is the best way around this problem? Adding command: bash -c "while true; do echo 'Hit CTRL+C'; sleep 1; done" to each image in my docker-compose, as suggested in the comments of the linked answer, doesn't seem like the ideal solution, since it forces me to attach to my containers after they are up and run the command manually.
This is by design in Docker. Each container is supposed to be a stateless instance of a service: if that service is interrupted, the container is destroyed; if that service is requested/started, it is created. At least, that is the model if you're using an orchestration platform like k8s, swarm, mesos, cattle, etc.
There are applications that exist to serve as PID 1 rather than the service itself, though this goes against the design philosophy of microservices and containers. Here is an example of a minimal init system that can run as PID 1 instead, forwarding signals to the processes within your container: https://github.com/Yelp/dumb-init
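A sketch of wiring it in at run time, assuming dumb-init is installed in the (hypothetical) image; note that dumb-init supervises a single child and exits with it, so this buys you proper signal forwarding and zombie reaping rather than free-form process respawning:
docker run -d --entrypoint dumb-init my-node-image -- node server.js
Docker also ships a built-in variant of this idea: docker run --init, which injects tini rather than dumb-init as PID 1.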
Why do you want to restart the node server? To apply changes from a config file or something? If so, you're looking for a solution in the wrong direction. You should instead define a persistent volume, so that when the container respawns, the service rereads said config file.
https://docs.docker.com/engine/admin/volumes/volumes/
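A sketch of that, with a hypothetical host config directory:
docker run -d --name app -v "$(pwd)/config:/app/config" my-node-image
Edits to ./config on the host survive container respawns, so a restarted container picks up the new configuration.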
If you need to restart the process that's running the container, then simply run:
docker restart $container_name_or_id
Exec'ing into a container shouldn't be needed for normal operations; consider it a debugging tool.
Rather than changing the script to restart itself automatically, I'd move that out to the docker engine so it's visible if your container is crashing:
docker run --restart=unless-stopped ...
When a container is run with the above option, docker will restart it for you, unless you intentionally run a docker stop on the container.
As for why killing PID 1 in the container shuts it down: it's the same as killing PID 1 on a Linux server. If you kill init/systemd, the box goes down. Inside the namespace of the container, similar rules apply and cannot be changed.
