Docker swarm strategy

Can anyone share their experience of changing the Docker Swarm scheduling strategy? There are three (spread, binpack, and random). Spread is the default strategy used by Docker Swarm, and I want to change it to binpack.

The Swarm scheduling strategies you've listed are for Classic Swarm, which is implemented as a standalone container acting as a reverse proxy in front of multiple docker engines. Almost everyone now uses the newer Swarm Mode instead, and little development effort goes into Classic Swarm.
The newer Swarm Mode includes a single scheduling algorithm that can be tuned: an HA spread algorithm. When you have multiple replicas of a single service, it first seeks to spread those replicas across the nodes that meet the required criteria. Among the nodes with the fewest replicas of that service, it picks the node with the fewest other scheduled containers first.
The tuning of this algorithm consists of constraints and placement preferences. Constraints let you require that the service run on nodes with specific labels or platforms. Placement preferences let you spread the workload across different values of a given label, which is useful to ensure all replicas are not running within the same AZ.
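For example, a minimal sketch that spreads replicas across a datacenter label (the label values, node names, service name, and image are placeholders; --label-add, --constraint, and --placement-pref are the real flags):

$ docker node update --label-add datacenter=us-east-1a node1
$ docker node update --label-add datacenter=us-east-1b node2

$ docker service create \
    --name web \
    --replicas 4 \
    --constraint 'node.platform.os == linux' \
    --placement-pref 'spread=node.labels.datacenter' \
    nginx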
None of the scheduler configurations in Swarm Mode include a binpacking option. If you wish to reduce the number of nodes in your swarm cluster, you can update the node state to drain workload from the node. This will gracefully stop all swarm-managed containers on that node and migrate them to other nodes. Alternatively, you can pause new workloads from being scheduled on the node, which will gradually remove replicas as services are updated and rescheduled onto other nodes, but will not preemptively stop replicas already running there. These two options are controlled by docker node update --availability:
$ docker node update --help

Usage:  docker node update [OPTIONS] NODE

Update a node

Options:
      --availability string   Availability of the node ("active"|"pause"|"drain")
      --label-add list         Add or update a node label (key=value)
      --label-rm list          Remove a node label if exists
      --role string            Role of the node ("worker"|"manager")
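For example, to gracefully drain a node and later return it to service (the node name is a placeholder):

$ docker node update --availability drain worker-2    # stop and migrate swarm-managed containers
$ docker node update --availability active worker-2   # allow new tasks to be scheduled again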
For more details on constraints and placement preferences, see: https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint

Related

Do I need to install docker in all my nodes inside the swarm mode?

I know this is a basic question, but I'm new to Docker and have this query.
Do I need to install Docker on all the nodes that are part of my swarm?
If so, what are the ways to install Docker on all my nodes in one shot?
Of course you need to install Docker and its dependencies on each node. On one of the manager nodes, you initialize the swarm with docker swarm init and then join the other machines either as manager nodes or worker nodes.
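A minimal sketch of that bootstrap (the IP address is a placeholder; the commands are the standard ones):

# on the first manager
$ docker swarm init --advertise-addr 192.168.1.10

# on a manager: print the join command for workers
$ docker swarm join-token worker

# on each worker: run the printed command, e.g.
$ docker swarm join --token SWMTKN-1-<token> 192.168.1.10:2377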
The number of manager nodes depends on how many node losses you need to be able to compensate for:
1 manager node: requires 1 healthy node
3 manager nodes: require 2 healthy nodes for quorum, can compensate for 1 unhealthy node
5 manager nodes: require 3 healthy nodes for quorum, can compensate for 2 unhealthy nodes
7 manager nodes: require 4 healthy nodes for quorum, can compensate for 3 unhealthy nodes
More than 7 is not recommended due to the coordination overhead.
Using an even number does not provide more reliability; quite the opposite. If you have 2 manager nodes, the loss of either one renders the cluster headless. If the cluster is not able to build a quorum (which requires a majority of the manager nodes to be healthy), it is headless and cannot be controlled: running containers continue to run, but no new containers can be deployed, failed containers won't be redeployed, and so on.
People usually deploy a swarm configuration with a configuration management tool like Ansible, Puppet, Chef, or Salt.
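If you just need Docker on every node "in one shot" without a full configuration management setup, a crude but workable sketch is an SSH loop over Docker's convenience install script (the hostnames are placeholders; get.docker.com is the official convenience script, generally not recommended for production):

$ for host in node1 node2 node3; do
>     ssh "$host" 'curl -fsSL https://get.docker.com | sh'
> done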

Can Docker-Swarm run in fail-over-mode only?

I am facing a situation where I need to run about 100 different applications in Docker containers. It is not reasonable to run all 100 containers on one machine, so I need to spread the applications over several machines.
As far as I understood, docker-swarm is for scaling only, which means that when I run my containers in a swarm, all 100 containers will automatically be deployed and started on every node of my docker-swarm. But this is not what I am searching for. I want to split the applications and, for example, run 50 on node1 and 50 on node2.
Question 1:
Is there a way to configure docker-swarm in a way where my applications will be automatically dispatched to the node which has the most idle resources?
Question 2:
Is there a kind of fail-over mode in docker swarm which can stop a container on one node and try to start it on another in case something goes wrong?
all 100 containers will automatically be deployed and started on every node of my docker-swarm
This is not true. When you deploy 100 containers in a swarm, the containers will be distributed across the available nodes in the swarm. You will mostly get an even distribution of containers on all nodes.
Question 1: Is there a way to configure docker-swarm in a way where my applications will be automatically dispatched to the node which has the most idle resources?
Docker swarm does not measure actual resource usage (memory, CPU, ...) on the nodes before deploying a container on them. The distribution of containers is balanced per node, without taking the current load on each node into account. (It does, however, respect resource reservations such as --reserve-memory if you set them on a service.)
You can, however, build a strategy for distributing containers across the nodes. You can use placement constraints to restrict where a container may be deployed. For instance, you can label nodes that have a lot of resources and constrain heavy containers to run only on those nodes.
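A minimal sketch of that approach (the label, node name, service name, and image are placeholders; --label-add, --constraint, and --reserve-memory are real flags):

$ docker node update --label-add highmem=true node1

$ docker service create \
    --name heavy-app \
    --constraint 'node.labels.highmem == true' \
    --reserve-memory 4g \
    myorg/heavy-app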
Question 2: Is there a kind of fail-over-mode in docker swarm which can stop a container on one node and try to start it on on another in case something goes wrong?
If a container crashes, docker swarm will ensure that a new container is started. Again, the decision of what node to deploy the new container on cannot be predetermined.

Difference between Docker container and service

I'm wondering whether there are any differences between the following docker setups.
Administering two separate docker engines via the remote api.
Administering two docker swarm nodes via a single docker engine.
I'm wondering: if you can administer a swarm and still run a container on a specific node, are there any use cases for having separate docker engines?
The difference between the two is swarm mode. When a docker engine is running services in swarm mode you get:
Orchestration from the manager to continuously try to correct any differences between the current state and the target state. This can also include HA using the quorum model (as long as a majority of the managers are reachable to make decisions).
Overlay networking, which allows containers on different hosts to talk to each other on their own container network. That can also involve IPsec for encryption.
Mesh networking for published ports, and a VIP for each service that doesn't change the way container IPs do. The VIP prevents problems from DNS caching; the mesh has every node in the swarm publish the port and route traffic to a container providing the service.
Rolling upgrades to avoid any downtime with replicated services.
Load balancing across multiple nodes when scaling up a service.
More details on swarm mode are available from docker's documentation.
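To make a few of these concrete, here is a sketch of a replicated service with a published port and rolling-update settings (the service name and image are placeholders; the flags are standard docker service create options):

$ docker service create \
    --name web \
    --replicas 3 \
    --publish 8080:80 \
    --update-parallelism 1 \
    --update-delay 10s \
    nginx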
The downside of swarm mode is that you are one layer removed from the containers when they run on a remote node. You can't run docker exec against a service task directly; you need to find the node the task is running on and exec into the container there. Docker also removed some options from services, like --volumes-from, which don't apply when containers may be running on different machines.
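For example, to get a shell inside a task's container (the service name is a placeholder):

$ docker service ps web               # shows which node each task is running on
# then, on that node:
$ docker ps --filter name=web         # find the task's container ID
$ docker exec -it <container-id> sh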
If you think you may grow beyond running containers on a single node, need the containers on different nodes to communicate with each other, or simply want the orchestration features like rolling upgrades, then I would recommend swarm mode. I'd only manage containers directly on the hosts if there is a specific requirement that prevents swarm mode from being an option. And you can always do both: manage some containers directly and others as a service or stack inside of swarm, on the same nodes.

Docker Swarm Mode service anti-affinity

We are experimenting with docker v1.12 swarm mode using docker service and trying to find a way to ensure containers do not run on the same node. We have three containers and wanted to run them across 3 docker-engine hosts. When I initially brought them up with 2 replicas, one of the services ended up running both of its containers on the same node.
For now I'm getting around this by making the services global, but I was hoping to find another way. I've seen that you can put labels on nodes, create multiple services for the same container, and use constraints, but was wondering if there is an easier way.
You can use node labels and service constraints to influence task scheduling to some extent, but for now swarm mode's scheduler capabilities are limited.
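A sketch of the two workarounds mentioned above (service and image names are placeholders):

# option 1: global mode schedules exactly one task per node
$ docker service create --mode global --name myapp myorg/myapp

# option 2: pin single-replica services to differently labeled nodes
$ docker node update --label-add slot=a node1
$ docker service create --name myapp-a --constraint 'node.labels.slot == a' myorg/myapp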
There is an open issue regarding your question without a solution:
https://github.com/docker/docker/issues/26259

How does Docker Swarm load balance?

I have a cluster of 10 Swarm nodes started via the docker swarm join command.
If I want to scale a service to 15 replicas via
docker service create --replicas 15 <image>
how does docker swarm know where to start the containers?
Is it round-robin, or does it take compute resources into consideration (how much cpu/mem is being used)?
When you create a service or scale it in Swarm mode, the scheduler on the elected leader (one of the managers) will choose a node to run each task on. Classic Swarm offered three scheduling strategies:
spread
binpack
random
The spread and binpack strategies compute rank according to a node’s available CPU, its RAM, and the number of containers it has. The random strategy uses no computation. It selects a node at random and is primarily intended for debugging.
Under the spread strategy, Swarm optimizes for the node with the least number of containers. The binpack strategy causes Swarm to optimize for the node which is most packed.
Swarm uses spread by default.
Keep in mind you can also constrain containers to run on specific nodes.
It's not possible to set these strategies in swarm mode as of docker version 1.12.1 (the latest release at the time of posting); swarm mode always uses its spread-style algorithm.
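For completeness, with Classic Swarm the strategy was chosen when starting the manager, e.g. (a sketch; the port and discovery backend address are placeholders):

$ docker run -d -p 4000:4000 swarm manage --strategy binpack consul://192.168.1.5:8500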
