What happens when we restart a running CA server container?

If I restart my running CA server container, will my network stop working after the restart, and will I lose any data? Is it okay to do this or not?
I just don't want to lose any data, because my network is in a production environment.

docker stop preserves the container, so no data is lost.
What you need to watch out for are the docker rm and docker-compose down commands.
These commands destroy the container and its writable layer (named volumes, however, survive docker-compose down unless you also pass -v/--volumes). If the container's state is important, you can back up an existing container to an image using the docker commit command.
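A minimal sketch of the safe sequence, assuming a container named ca-server (the name is a placeholder):

docker stop ca-server                          # container and its writable layer are preserved
docker start ca-server                         # the same container comes back with its data intact
docker commit ca-server ca-backup:pre-restart  # optional: snapshot the container to an image first

Note that docker commit captures the container's filesystem but not the contents of mounted volumes, so back those up separately.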

Related

Reboot Docker container from inside

I'm working with a Docker container running Debian 11, with a server inside it.
I need to update this server and do other things on a regular basis. I've written several scripts that can do it, but I've run into a serious problem:
if I want to update the server and other packages, I need to reboot the container.
I can obviously do so from the computer Docker is installed on (in my case Docker Desktop running with WSL2 on Windows 10), but I need to automate it.
The simplest way would be to add a shutdown command to the scripts I've written. I was reading about it, but found nothing. Is there any way to reboot this container from the Debian inside it? If not, how can it be achieved and how complicated is it?
I've tried invoking the standard Linux commands to shut down or reboot the system on the Debian inside the container.
I'd appreciate a guide if this is possible and worth the effort.
The only way to trigger a restart of a container from within the container is to first set a restart policy on the container such as --restart=on-failure and then simply stop the container, i.e., let the main process terminate itself. The Docker engine would then restart the container.
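A minimal sketch of that pattern (the image name and the script are placeholders):

# Start the container with a restart policy:
docker run -d --restart on-failure my-server-image

# Inside the container, the update script can trigger a restart by making
# the main process exit with a non-zero code, e.g. as the last step of the
# entrypoint script:
exit 1

With on-failure, the daemon only restarts the container on a non-zero exit, so a deliberate clean shutdown still stays down.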
This approach, however, is not how Docker is intended to be used! Docker containers are not VMs; they are meant to be ephemeral:
By "ephemeral", we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
This means you shouldn't be updating the server within a running container; instead, you should update/rebuild the image and start a new container from it!

Don't restart a Docker container on host reboot; restart the container only when it crashes due to an error

I want to restart a Docker container only when the container crashes due to an error, and I don't want to restart the container if the host reboots.
Which restart_policy will work for this case?
Start containers automatically
on-failure[:max-retries]
Restart the container if it exits due to an error, which manifests as a non-zero exit code. Optionally, limit the number of times the Docker daemon attempts to restart the container using the :max-retries option.
docker run -d --restart on-failure[:max-retries] IMAGE
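For example (the image name is a placeholder):

docker run -d --restart on-failure:5 my-ca-server-image
# The container is restarted only when it exits with a non-zero status,
# and the daemon gives up after 5 attempts.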
UPDATE
A Docker host is a physical computer system or virtual machine running Linux. This can be your laptop, server or virtual machine in your data center, or computing resource provided by a cloud provider. The component on the host that does the work of building and running containers is the Docker Daemon.
Keep containers alive during daemon downtime
By default, when the Docker daemon terminates, it shuts down running containers. You can configure the daemon so that containers remain running if the daemon becomes unavailable. This functionality is called live restore. The live restore option helps reduce container downtime due to daemon crashes, planned outages, or upgrades.
Enable live restore
There are two ways to enable the live restore setting to keep containers alive when the daemon becomes unavailable. Only do one of the following.
Add the configuration to the daemon configuration file. On Linux, this defaults to /etc/docker/daemon.json. On Docker Desktop for Mac or Docker Desktop for Windows, select the Docker icon from the task bar, then click Preferences -> Daemon -> Advanced.
Use the following JSON to enable live-restore.
{
  "live-restore": true
}
Restart the Docker daemon. On Linux, you can avoid a restart (and avoid any downtime for your containers) by reloading the Docker daemon. If you use systemd, then use the command systemctl reload docker. Otherwise, send a SIGHUP signal to the dockerd process.
If you prefer, you can start the dockerd process manually with the --live-restore flag. This approach is not recommended because it does not set up the environment that systemd or another process manager would use when starting the Docker process. This can cause unexpected behavior.
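To verify that the setting took effect, one option is to query the daemon (the LiveRestoreEnabled field should be available on reasonably recent Docker engines):

docker info --format '{{ .LiveRestoreEnabled }}'   # prints "true" once live restore is active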

Docker container Ubuntu SSH access

I installed a new Docker container (the standard Ubuntu latest version).
I would like to connect to it through SSH. I followed the instructions in this excellent link: https://linuxconfig.org/how-to-connect-to-docker-container-via-ssh
Once I stop my container and restart it, the SSH service is not available anymore.
I have to start it manually every time.
I also tried the command systemctl enable ssh to configure SSH as permanent.
The result is as follows:
"Synchronizing state of ssh.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ssh"
So everything seems to be OK, but when I stop the container and restart it, the problem is still present: no SSH service is started on the Ubuntu.
Does anyone know how to configure SSH access as permanent in this case?
Thank you all in advance for your help :)
You have to write a customized Dockerfile and, inside it, set up the SSH configuration so that each time you run the container it has a working SSH daemon.
The reason the container loses the SSH configuration when you rerun it is that you are creating a new container from the original (non-SSH-configured) Ubuntu image. If you want to run the container you already configured, get the container list with docker container ls --all, copy the ID, and then start that same container with docker start {{ID}}.
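A minimal sketch of such a Dockerfile (the user name and password are placeholders; in practice, prefer key-based login):

FROM ubuntu:latest

# Install the SSH server and create the runtime directory sshd expects
RUN apt-get update && \
    apt-get install -y openssh-server && \
    mkdir -p /run/sshd

# Placeholder login user; replace with your own user and SSH keys
RUN useradd -m -s /bin/bash demo && \
    echo 'demo:change-me' | chpasswd

EXPOSE 22

# Run sshd in the foreground so it is the container's main process
CMD ["/usr/sbin/sshd", "-D"]

Built with docker build -t ubuntu-ssh . and started with docker run -d -p 2222:22 ubuntu-ssh, the daemon comes up on every docker start because it is the container's main process. This also explains why systemctl enable ssh has no effect here: a standard container doesn't boot systemd, so enabled units are never started.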

Issue commands from a gitlab-runner inside docker container

I have a machine with multiple Docker containers for a project that I am developing, and I just set up a new Docker container running GitLab Runner inside it.
I need to run a few commands on all the other containers whenever a commit is issued. Is there any way for the runner inside the GitLab Runner container to access the other containers and tell them to execute commands, or even restart them?
We currently don't use SSH keys to access the server that hosts all the Docker containers; we use a username and password.
The safe way (and easier than passwords too) is to start using SSH keys and access the containers over the network, or at least issue commands to the host over SSH from the gitlab-runner.
Also, a Stack Overflow search returned this: manage containers from another container, docker.
Looks legit.
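One common pattern from that thread is to mount the host's Docker socket into the runner container. A hedged sketch (container and script names are placeholders, the docker CLI must be available inside the runner image, and note that socket access effectively grants root on the host):

# Give the runner container access to the host's Docker daemon
docker run -d --name gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

# A CI job on that runner can then drive its sibling containers:
docker exec app-container ./deploy.sh   # run a command in another container
docker restart app-container            # or restart it outright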

Deploying changes to Docker and its containers on the fly

Brand spanking new to Docker here. I have Docker running on a remote VM and am running a single dummy container on it (I can verify the container is running by issuing a docker ps command).
I'd like to set up my Docker installation so that a non-root user can run Docker commands:
sudo usermod -aG docker myuser
But I'm afraid to muck around with Docker while any containers are running in case "hot deploys" create problems. So this has me wondering, in general: if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Same goes for the containers themselves. Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Usually, all containers are stopped first.
That typically happens when I upgrade Docker itself: I find all my containers stopped (except the data containers, which are only ever created, and remain so).
Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
That depends on the nature and requirements of your app: for a completely stateless app, you could even run 1.0.5 alongside 1.0.4 (with different host ports mapped to your app's exposed port), test it a bit, and stop 1.0.4 when you think 1.0.5 is ready.
But for an app with any kind of shared state or resource (mounted volumes, a shared data container, ...), you would need to stop and rm 1.0.4 before starting the new container from the 1.0.5 image.
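A hedged sketch of that side-by-side cutover for the stateless case (image names, ports, and the health endpoint are placeholders):

# Run the new version on a second host port while 1.0.4 keeps serving
docker run -d --name myapp-105 -p 8081:8080 myapp:1.0.5

# Smoke-test the new container before cutting over
curl -f http://localhost:8081/health

# Once satisfied, retire the old version
docker stop myapp-104 && docker rm myapp-104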
(1) Why don't you stop them [the data containers] when upgrading Docker?
Because... they were never started in the first place.
In the lifecycle of a container, you can create, then start, then run a container. But a data container, by definition, has no process to run: it just exposes VOLUMEs for other containers to mount (--volumes-from).
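A minimal sketch of that pattern (names are placeholders; on modern Docker you would typically use named volumes instead):

# Create, but never start, a data container that only declares a volume
docker create -v /data --name app-data busybox

# Other containers mount its volume; they are the ones that actually run
docker run -d --volumes-from app-data myapp:1.0.4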
(2) What's the difference between a data/volume container, and a Docker container running, say a full bore MySQL server?
The difference is, again, that a data container doesn't run any process, so it doesn't exit when said process stops. That never happens, since there is no process to run.
The MySQL server container would be running as long as the server process doesn't stop.
