Docker Swarm - Should I remove a stack before deploying a stack?

I am not new to Docker, but I am new to Docker Swarm.
Our deployments typically consist of building a new docker image with the latest code, pushing that to our registry and then running docker stack deploy against a compose file.
My question is, do I need to run docker stack rm $STACK_NAME before running the deploy?
I'm not sure if the deploy command for swarm is smart enough to figure out that a docker image has changed and that it needs to do something.

You can redeploy the same stack name without deleting the old stack. If you expect services to have been removed from your compose file, then you'll want to include the --prune option. For any unchanged service, swarm will leave it unmodified. But for any service with changes, including a new image on the registry server, you will see a rolling update performed according to the update config you specify in the compose file.
When you use the default VIP to connect to a service, the VIP keeps the same IP address for as long as the service exists, even across rolling updates, so other containers connecting to your service can do so without worrying about a stale DNS reference. And with a replicated service, the rolling update can prevent any visible outage. The combination of the two gives you high availability that you would not have if you deleted and recreated your swarm stack.
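As a rough sketch of what drives that rolling update (service name, image, and timings here are placeholders, assuming a Compose v3 file), the behaviour is controlled by the deploy section of the service:

version: "3.8"
services:
  web:
    image: registry.example.com/myapp:latest   # pushed with the latest code before each deploy
    deploy:
      replicas: 3
      update_config:
        parallelism: 1      # replace one task at a time
        delay: 10s          # wait between task replacements
        order: start-first  # start the new task before stopping the old one

Redeploying is then just the usual command, with no docker stack rm beforehand:

docker stack deploy --compose-file docker-compose.yml --with-registry-auth $STACK_NAME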

Related

Extract docker-compose.yml from an existing Docker Swarm

I have an existing stack in a Swarm that I'd like to add some templated variables to. The Swarm is currently managed by, but was not created in, Portainer. I no longer have access to the original YML that created the stack and many edits have been made to the services since it was used anyway.
Portainer easily lets me add and remove services, but it seems the ability to associate a service with a stack requires the original YML.
Is there an automated way to extract a YML file from an existing stack? If not, is there a way to associate a new service with an existing stack without using docker stack deploy?
docker stack deploy can easily add new services to an existing stack namespace. Just don't use the --prune flag.
We use this a lot in our production and staging CD pipelines, as we use stacks as environments and publish multiple microservices into a single stack: either by passing multiple --compose-file directives, or by doing single --compose-file stack deploys when only one service is affected, as shown below.
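For illustration (the file and stack names here are made up), both variants look like this; neither uses --prune, so services already in the stack but absent from the file are left alone:

docker stack deploy --compose-file web.yml --compose-file worker.yml staging
docker stack deploy --compose-file worker.yml staging

The second command only updates the worker service; the web service deployed earlier stays in the staging stack untouched.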
Docker's DNS resolution is even more interesting. Docker has no awareness of the stack namespace your services are deployed to: DNS is resolved via the networks your service is attached to and the network aliases that have been implicitly and automatically assigned to your service.
Network aliases can be explicitly controlled at the service level, with no stacks at all, via docker service create --network name=my-network,alias=web1, and there is a similar alias syntax for the networks section of services in compose files.
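A minimal sketch of that compose-file syntax (the network and alias names are just examples) looks like this:

services:
  web:
    image: nginx
    networks:
      my-network:
        aliases:
          - web1
networks:
  my-network:
    external: true

Any container attached to my-network can then reach this service as web1, regardless of which stack (if any) it was deployed from.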

How to update a database without taking down your service?

I'm using ansible to set up my docker-swarm.
In my docker swarm I run: Web server, database, and a cache.
My question is: how can I update my database (e.g docker image, etc.) without making the service unavailable?
Should I mirror the existing swarm, and run two identical swarms?
How should I then make sure the update of these is automatic and flawless?
Docker Swarm only permits updating services with zero downtime by using parallelism, and only when the scale is greater than 1.
You can use parallelism with the database, and downtime should be as low as possible, but a few seconds are to be expected.
Check the Docker Swarm rolling update and Ansible docker_swarm_service documentation.
Blue-green deployment is definitely not an option for the database.
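As a rough sketch of what that looks like in a Compose v3 file (the image name and timings are placeholders, and this assumes your database image can actually tolerate running more than one replica):

services:
  db:
    image: my-database:latest   # placeholder image
    deploy:
      replicas: 2               # parallelism only avoids downtime when scale > 1
      update_config:
        parallelism: 1          # update one replica at a time
        delay: 30s              # give the updated replica time to become healthy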

How to update server settings without recreating docker containers

I followed the official installation guide for AzerothCore using Docker containers and I would like to know if there is a way to update the server settings without recreating the docker containers.
If the containers have to be recreated to apply new settings, how can I prevent my characters from being deleted when recreating the docker containers?
As the official documentation says in the FAQ section, you don't have to recreate your containers.
You can simply start/stop them using:
docker-compose stop to stop your containers
docker-compose start to start your containers again
Those are different than using down and up which will destroy/recreate them.
If you simply change the content of the worldserver.conf configuration, it's enough to stop/start the containers. If you want to change the location of such configuration files, then you have to recompile (i.e. destroy and recreate the worldserver and authserver containers).

Docker usage in compose/swarm mode

I am quite new to docker and I need some help about distributing my application.
Consider this:
I have a pool of physical machines, each of them running the latest version of docker.
My "Application A" has several containers. To be clear in this definition, an application would be a database running in a container, 4 messaging containers and a master container. All 6 containers need to communicate between each other. The database, the messaging and etc containers would be the "services".
I can also have "Application B", "Application C" and "Application N...", that are slightly different in size and configuration from "Application A". Applications do not communicate between each other and are completely independent.
Requirements:
All applications "A,B,C..N" must use the same pool of physical machines.
Each service of each application must run on a different physical machine, if needed.
You may want to restrict how each service is allocated to each physical machine
I need to create applications "on the fly"
My first thought would be to use a docker-compose to define an application and several dockerfiles to define the services inside it. But if I do that, each application would be running in the same docker engine and therefore, the same physical machine.
I have read that you could deploy a docker compose into a docker swarm. In this case, docker swarm would act as a docker engine. However, I could not find any examples on how to do that and I am not sure of the limitations.
My second thought would be to use swarm mode. I would create a swarm and run services on it. However, I would lose the concept of "application". There would be a bunch of services thrown into the swarm and I could not manage how each of them communicates with the others.
So, given this problem:
Is there any assumption or statement I got wrong?
What is the recommended docker tools usage in the scenario?
It is possible to use Docker Compose with Docker Swarm Mode (Docker 1.12), but it is currently not completely compatible with it. Have a look at Docker Stacks and Bundles.
In the next version of Docker (1.13) there will also be the new Compose file format v3, which Docker can deploy directly without Docker Compose. This will make it possible to deploy your Docker Compose file like this:
docker deploy --compose-file docker-compose.yml AppA
This is currently experimental but works quite well with Docker 1.13-rc5. (Docker Releases)
A more detailed explanation of this can be found in this article.
For your requirement to have them all run on different hosts, this is possible by defining constraints in docker service create (or in the Compose v3 file); see Docker Service Create - Constraints. But why do you need to have them run on different hosts?
It is possible to limit the CPU and memory usage that each service is able to use with --limit-cpu and --limit-memory.
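For example (the node hostname, image and limit values below are placeholders), a placement constraint and resource limits can be set either on the command line:

docker service create --name appA_db \
  --constraint 'node.hostname == worker-1' \
  --limit-cpu 0.5 --limit-memory 512M \
  my-db-image:latest

or in the deploy section of a Compose v3 file:

deploy:
  placement:
    constraints:
      - node.hostname == worker-1
  resources:
    limits:
      cpus: "0.5"
      memory: 512M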
If you want to play with Docker Swarm Mode, you can create a swarm with Docker Machine on your local host. (Attention: do not use the old standalone Docker Swarm.)

Recommended way to run a Docker Compose stack in production?

I have a couple of compose files (docker-compose.yml) describing a simple Django application (five containers, three images).
I want to run this stack in production - to have the whole stack begin on boot, and for containers to restart or be recreated if they crash. There aren't any volumes I care about and the containers won't hold any important state and can be recycled at will.
I haven't found much information on using specifically docker-compose in production in such a way. The documentation is helpful but doesn't mention anything about starting on boot, and I am using Amazon Linux so don't (currently) have access to Docker Machine. I'm used to using supervisord to babysit processes and ensure they start on boot up, but I don't think this is the way to do it with Docker containers, as they end up being ultimately supervised by the Docker daemon?
As a simple start I am thinking to just put restart: always on all my services and make an init script to do docker-compose up -d on boot. Is there a recommended way to manage a docker-compose stack in production in a robust way?
EDIT: I'm looking for a 'simple' way to run the equivalent of docker-compose up for my container stack in a robust way. I know upfront that all the containers declared in the stack can reside on the same machine; in this case I don't have need to orchestrate containers from the same stack across multiple instances, but that would be helpful to know as well.
Compose is a client-side tool, but when you run docker-compose up -d all the container options are sent to the Engine and stored. If you specify restart as always (or preferably unless-stopped, to give you more flexibility), then you don't need to run docker-compose up every time your host boots.
When the host starts, provided you have configured the Docker daemon to start on boot, Docker will start all the containers that are flagged to be restarted. So you only need to run docker-compose up -d once and Docker takes care of the rest.
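As a minimal sketch (assuming a systemd-based host; the service and image names are placeholders), that boils down to a restart policy in the compose file:

services:
  web:
    image: myapp:latest
    restart: unless-stopped   # restarted by the daemon after crashes and reboots, until explicitly stopped

plus two one-off commands on the host:

sudo systemctl enable docker   # make sure the Docker daemon itself starts on boot
docker-compose up -d           # run once; afterwards Docker restarts the containers by itself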
As to orchestrating containers across multiple nodes in a Swarm - the preferred approach will be to use Distributed Application Bundles, but that's currently (as of Docker 1.12) experimental. You'll basically create a bundle from a local Compose file which represents your distributed system, and then deploy that remotely to a Swarm. Docker moves fast, so I would expect that functionality to be available soon.
You can find more information about using docker-compose in production in their documentation. But, as they mention, Compose is primarily aimed at development and testing environments.
If you want to run your containers in production, I would suggest using a dedicated container orchestration tool, such as Kubernetes.
If you can organize your Django application as a swarmkit service (docker 1.11+), you can orchestrate the execution of your application with tasks.
Swarmkit has a restart policy (see swarmctl flags)
Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy.
The operator can define restart conditions, delays and limits (maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.
Even if your "cluster" has only one node, the orchestration layer will make sure your containers are always up and running.
You say that you use AWS, so why not use ECS, which is built for exactly what you are asking? You create an application which is the pack of your 5 containers. You configure which EC2 instances, and how many, you want in your cluster.
You just have to convert your docker-compose.yml to the specific Dockerrun.aws.json, which is not hard.
AWS will start your containers when you deploy and also restart them in case of a crash.
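As a rough sketch of the version 2 Dockerrun.aws.json format (the image names, memory values and ports are placeholders, and only two of the five containers are shown for brevity):

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "myrepo/django-web:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 8000 }
      ]
    },
    {
      "name": "worker",
      "image": "myrepo/worker:latest",
      "essential": false,
      "memory": 256
    }
  ]
}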
