How to update server settings without recreating docker containers

I followed the official installation guide for AzerothCore using Docker containers and I would like to know if there is a way to update the server settings without recreating the docker containers.
If the containers have to be recreated to apply new settings, how can I prevent my characters from being deleted when recreating the docker containers?

As the official documentation says in the FAQ section, you don't have to recreate your containers.
You can simply start/stop them using:
docker-compose stop to stop your containers
docker-compose start to start your containers again
Those are different from down and up, which destroy and recreate the containers.
If you simply change the content of worldserver.conf, then stopping and starting the containers is enough. If you want to change the location of such configuration files, then you have to recompile (i.e. destroy and recreate the worldserver and authserver containers).
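As a concrete sketch of the two workflows (assuming a compose project in the current directory):

# Apply a changed worldserver.conf without recreating anything:
docker-compose stop
docker-compose start

# By contrast, this destroys and recreates the containers:
docker-compose down
docker-compose up -d

Note that docker-compose down only removes named volumes if you pass -v, so data stored in volumes survives recreation anyway; still, stop/start is the simpler and safer path here.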

Related

Backup docker volume on a broken system. Volume recovery

I have a server that's been running in docker on coreos. For some reason containerd has stopped running and the docker daemon has stopped working correctly. My efforts to debug haven't gotten far. I'd like to just boot a new instance and migrate, but I'm not sure I can backup my volume without a working docker service. Is it possible to backup my volume without using docker?
Most search results assume a running docker system, and don't work in this case.
By default, docker volumes are stored in /var/lib/docker/volumes. Since you don't have a working docker setup, you might have to dig into the subfolders to figure out which volume you're concerned with, but that should at least give you a start. For reference, in a working docker environment you can run docker volume inspect to get all the information you would need to carry this out.
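As a sketch, assuming you can still read the filesystem and have identified the volume's subfolder (the name below is a placeholder):

sudo ls /var/lib/docker/volumes
# Each volume keeps its files under a _data subdirectory:
sudo tar -czf volume-backup.tar.gz \
    -C /var/lib/docker/volumes/<volume-name>/_data .

The tarball can then be extracted into a fresh volume's _data directory (or bind-mounted into a container) on the new instance.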

Docker Swarm - Should I remove a stack before deploying a stack?

I am not new to Docker, but I am new to Docker Swarm.
Our deployments typically consist of building a new docker image with the latest code, pushing that to our registry and then running docker stack deploy against a compose file.
My question is, do I need to run docker stack rm $STACK_NAME before running the deploy?
I'm not sure if the deploy command for swarm is smart enough to figure out that a docker image has changed and that it needs to do something.
You can redeploy the same stack name without deleting the old stack. If you have removed services from your compose file, you'll want to include the --prune option so they are deleted from the stack. Any unchanged service is left unmodified, but for any service with changes, including a new image on the registry server, you will see a rolling update performed according to the update config you specify in the compose file.
When you use the default VIP to connect to a service, the VIP keeps the same IP address for as long as the service exists, even across rolling updates, so other containers connecting to your service can do so without worrying about a stale DNS reference. And with a replicated service, the rolling update can avoid any visible outage. The combination of the two gives you high availability that you would not have when deleting and recreating your swarm stack.
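In practice the deploy step is simply repeated on every release, e.g.:

# Redeploy in place; swarm rolls only the services that changed:
docker stack deploy --prune --with-registry-auth -c docker-compose.yml $STACK_NAME

--with-registry-auth is worth adding when the image lives in a private registry, so that worker nodes receive credentials to pull the new tag.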

Is there a Better way to run a command or shell on Docker swarm

Let's say I want to edit a config file for an NGINX Docker service that is replicated across 3 nodes.
Currently I list the services using docker service ls.
Then get the details to find a node running a container for that service using docker service ps servicename.
Then ssh to a node where one of the containers is running.
Finally, docker exec -it containername bash. Then I edit the config file.
Two questions:
Is there a better way to do this rather than ssh to a node running a container? Maybe there is a swarm or service command to do so?
If I were to edit that config file on one container would that change be replicated to the other 2 containers in the swarm?
The purpose of this exercise would be to edit configuration without shutting down a service.
You should not be exec'ing into containers to change their configuration, and so docker has not created an easy way to do this within Swarm Mode. You could use classic swarm to avoid the need to ssh into the other host, but I still don't recommend this.
The correct way to do this is to migrate your configuration file into a docker config entry. Version your config name. Then when you want to update it, you create a new version with the desired changes, and do a rolling update of your service to use that new configuration.
Unless the config is mounted from an external source like NFS, changes made inside one container will not apply to containers running on other nodes. If the config is stored locally inside the container as part of its internal copy-on-write filesystem, then no changes from one container will be visible in any other container.
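A minimal sketch of the versioned-config flow described above (all names here are hypothetical):

# Create v2 of the config alongside the v1 still in use:
docker config create nginx_conf_v2 ./nginx.conf
# Swap it in via a rolling update of the service:
docker service update \
    --config-rm nginx_conf_v1 \
    --config-add source=nginx_conf_v2,target=/etc/nginx/nginx.conf \
    nginx

Because swarm configs are immutable once created, versioning the name is what makes the rolling update (and a quick rollback to v1) possible.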

Deploying changes to Docker and its containers on the fly

Brand spanking new to Docker here. I have Docker running on a remote VM and am running a single dummy container on it (I can verify the container is running by issuing a docker ps command).
I'd like to set up my Docker installation so that my non-root user can run docker commands:
sudo usermod -aG docker myuser
But I'm afraid to muck around with Docker while any containers are running in case "hot deploys" create problems. So this has me wondering, in general: if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Same goes for the containers themselves. Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Usually, all containers are stopped first.
That typically happens when I upgrade docker itself: I find all my containers stopped (except the data containers, which were only ever created, and remain in that state).
Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
That depends on the nature and requirements of your app: for a completely stateless app, you could even run 1.0.5 alongside 1.0.4 (with different host ports mapped to your app's exposed port), test it a bit, and stop 1.0.4 when you think 1.0.5 is ready.
But for an app with any kind of shared state or resource (mounted volumes, a shared data container, ...), you would need to stop and rm the 1.0.4 container before starting a new one from the 1.0.5 image.
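A sketch of that stateless side-by-side approach (names, tags, and ports are illustrative):

# 1.0.4 is already serving on host port 8080; bring 1.0.5 up on 8081:
docker run -d --name myapp-105 -p 8081:8080 myapp:1.0.5
# Once 1.0.5 checks out, retire the old version:
docker stop myapp-104 && docker rm myapp-104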
(1) why don't you stop them [the data containers] when upgrading Docker?
Because... they were never started in the first place.
In the lifecycle of a container, you can create, then start, then run a container. But a data container, by definition, has no process to run: it just exposes VOLUME(s) for other containers to mount (--volumes-from).
(2) What's the difference between a data/volume container, and a Docker container running, say a full bore MySQL server?
The difference is, again, that a data container doesn't run any process, so there is no process whose exit would stop it; it is simply never started at all.
The MySQL server container would be running as long as the server process doesn't stop.
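A minimal sketch of the data-container pattern being discussed (image and names are illustrative):

# Created but never started; it exists only to own the volume:
docker create -v /var/lib/mysql --name mysql-data mysql:5.7 /bin/true
# The actual server mounts the data container's volume:
docker run -d --name mysql --volumes-from mysql-data mysql:5.7

(On current Docker versions, named volumes created with docker volume create have largely replaced data containers.)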

Docker Container management in production environment

Maybe I missed something in the Docker documentation, but I'm curious and can't find an answer:
What mechanism is used to restart docker containers if they should error/close/etc?
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which together make up an application without using docker compose? (as they say it is not production ready)
What mechanism is used to restart docker containers if they should error/close/etc?
Docker restart policies, as set with the --restart option to docker run. From the docker-run(1) man page:
--restart=""
Restart policy to apply when a container exits (no, on-fail‐
ure[:max-retry], always)
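In practice (the image name is a placeholder):

# Retry a crashing container up to 5 times:
docker run -d --restart=on-failure:5 myimage
# Or keep it running across crashes and daemon restarts:
docker run -d --restart=always myimage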
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which together make up an application without using docker compose?
Well, you can of course use docker-compose if that is the best match for your requirements, even if it is not labelled as "production ready".
You can investigate larger container management solutions like Kubernetes or even OpenStack (although I would not recommend the latter unless you are already familiar with OpenStack).
You could craft individual systemd unit files for each container.
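A bare-bones sketch of such a unit file (every name and path is illustrative):

# /etc/systemd/system/myapp.service
[Unit]
Description=myapp container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp myimage
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now myapp and systemd takes over the restart duties for that container.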
