Docker container deployment to multiple nodes in a swarm cluster

I have a swarm cluster of 6 worker nodes and 3 master nodes so a total of 9 nodes.
The machines in my swarm cluster are of different sizes, so I need to deploy certain services (containers) on particular worker nodes according to the size of the node.
I am aware we can have placement constraints in the docker-compose file and can specify the hostname.
Since I will be running 2 replicas of the service, swarm will create both replicas on the same worker node I set the constraint to, but I don't want the replicas running on the same worker node.
Is there an option to specify multiple hostnames when setting the placement constraint? Please guide.

You can combine several nodes under a shared label.
docker node update --label-add ssd=fat_machine hostname1
docker node update --label-add ssd=fat_machine hostname2
and then reference the label in docker-compose:
constraints: [node.labels.ssd == fat_machine]
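For context, a minimal compose sketch of where that constraint lives (the service and image names are placeholders); on compose file version 3.8+ you can additionally set max_replicas_per_node: 1 to guarantee the two replicas never share a worker:
version: "3.8"
services:
  web:
    image: your_image:latest
    deploy:
      replicas: 2
      placement:
        # never put two replicas of this service on the same node
        max_replicas_per_node: 1
        constraints:
          - node.labels.ssd == fat_machine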

Related

How to add/remove replicas on a specific node in docker swarm?

In my cluster, my servers have different computing power and bandwidth, so sometimes I want to decide which service replicas run on which node. I know we can choose the placement with the docker service create command, but how do I update it after the service is created and running? In the official docs, the update command only allows changing the number of replicas.
...I want to decide which service replicas run on which node.
You can modify a live service's constraints with --constraint-rm and --constraint-add. The example below presumes the nodes are labeled with a key named "type":
docker service update --constraint-rm node.labels.type==small --constraint-add node.labels.type==large my-redis
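If you want to verify which constraints are currently applied, one quick check (against the example's my-redis service):
docker service inspect --format '{{ .Spec.TaskTemplate.Placement.Constraints }}' my-redis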

Do I need to install docker in all my nodes inside the swarm mode?

I know this is a basic question. But I'm new to docker and have this query.
Do I need to install Docker on all the nodes that are part of my swarm?
If so, what are the ways to install Docker on all my nodes in one shot?
Of course you need to install Docker and its dependencies on each node. On one of the manager nodes, you need to initiate the swarm with docker swarm init and then you join the other machines either as manager nodes or worker nodes.
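A minimal sketch of that bootstrap sequence (the IP address and token are placeholders):
# on the first manager
docker swarm init --advertise-addr 192.168.1.10
# print the join command for workers (use "manager" for additional managers)
docker swarm join-token worker
# on each remaining node, run the printed command, e.g.:
docker swarm join --token SWMTKN-1-<token> 192.168.1.10:2377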
The number of manager nodes depends on how many node losses you need to be able to compensate for:
1 manager node: requires 1 healthy node
3 manager nodes: require 2 healthy nodes for quorum, can compensate 1 unhealthy node
5 manager nodes: require 3 healthy nodes for quorum, can compensate 2 unhealthy nodes
7 manager nodes: require 4 healthy nodes for quorum, can compensate 3 unhealthy nodes
more than 7 is not recommended due to overhead
Using an even number does not provide more reliability; quite the opposite. If you have 2 manager nodes, the loss of either one of them renders the cluster headless. If the cluster is not able to build a quorum (which requires the majority of manager nodes to be healthy), it is headless and cannot be controlled: running containers continue to run, but no new containers can be deployed, failed containers won't be redeployed, and so on.
People usually deploy a swarm configuration with a configuration management tool like Ansible, Puppet, Chef or Salt.
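If you don't want a full configuration management setup, a rough sketch that installs Docker on every node over SSH using Docker's convenience script (hostnames are placeholders, and the script is aimed at dev/test rather than production):
for host in node1 node2 node3; do
  ssh "$host" 'curl -fsSL https://get.docker.com | sh'
done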

Docker swarm strategy

Can anyone share their experience of changing the docker swarm scheduling strategy? There are three (spread, binpack and random); spread is the default strategy used by docker swarm, and I want to change it to binpack.
The Swarm scheduling strategies you've listed are for Classic Swarm, which is implemented as a standalone container acting as a reverse proxy in front of various docker engines. Almost everyone uses the newer Swarm Mode instead, and little development effort goes into Classic Swarm.
The newer Swarm Mode has a single scheduling algorithm, an HA spread, with a few tuning options. When you have multiple replicas of a single service, it first seeks to spread those replicas across the nodes that meet the required criteria, and among the nodes with the fewest replicas of that service it picks the nodes with the fewest other scheduled containers first.
The tuning of this algorithm includes constraints and placement preferences. Constraints allow you to require the service run on nodes with specific labels or platforms. And the placement preferences allow you to spread the workload across different values of a given label, which is useful to ensure all replicas are not running within the same AZ.
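As an illustration of combining the two (the label keys type and az are assumptions, not anything Swarm defines):
docker service create --replicas 6 \
  --constraint 'node.labels.type == ssd' \
  --placement-pref 'spread=node.labels.az' \
  --name my-web nginx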
None of these configurations in Swarm Mode include a binpacking option. If you wish to reduce the number of nodes in your swarm cluster, then you can update the node state to drain workload from the node. This will gracefully stop all swarm managed containers on that node and migrate them to other nodes. Or you can simply pause new workloads from being scheduled on the node which will gradually remove replicas as services are updated and scheduled on other nodes, but not preemptively stop running replicas on that node. These two options are controlled by docker node update --availability:
$ docker node update --help

Usage:  docker node update [OPTIONS] NODE

Update a node

Options:
      --availability string   Availability of the node ("active"|"pause"|"drain")
      --label-add list        Add or update a node label (key=value)
      --label-rm list         Remove a node label if exists
      --role string           Role of the node ("worker"|"manager")
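For example (the node name worker-3 is a placeholder):
docker node update --availability drain worker-3    # migrate all swarm tasks off the node
docker node update --availability pause worker-3    # keep running tasks, schedule nothing new
docker node update --availability active worker-3   # return the node to normal scheduling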
For more details on constraints and placement preferences, see: https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint

How to specify a manager leader in a docker swarm constraint

I am aware I can manually add a constraint for a specific node. But there may already be a built-in way to specify the leader node in a swarm.
Basically, I need to prevent containers from running on the leader node. They can run anywhere else except the leader. Is there a built-in way to specify the leader node in a constraint?
To prevent containers from running on a node, you can do this for all containers using:
docker node update --availability drain $your_node_name
To do this for a single service, you can add a constraint on the node type:
docker service create --constraint 'node.role==worker' --name $your_service $image_name
I don't think there's any way to do this on only the leader with a group of managers, it's all or none. You may be able to script something external that checks the current leader and updates node labels.
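A rough sketch of that external approach (the label key "leader" is purely hypothetical, and this would need to be re-run whenever leadership changes):
# find the hostname of the current leader among the managers
leader=$(docker node ls --filter role=manager \
  --format '{{.Hostname}} {{.ManagerStatus}}' | awk '$2 == "Leader" {print $1}')
# label it, then keep the service off it with a negated constraint
docker node update --label-add leader=true "$leader"
docker service create --constraint 'node.labels.leader != true' --name $your_service $image_name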

Deploying Docker Swarm mode services across multiple failure domains

I am new to docker swarm mode (and I am specifically talking about swarm mode in docker v1.12, not the older non-integrated 'docker swarm').
I am attempting to evaluate its suitability for building a large distributed containerised platform for a new software project (I'm comparing it to similar technologies such as Mesosphere, Kubernetes et al.).
My understanding of the older non-integrated docker swarm (not swarm mode) is that you could target nodes to deploy to multiple failure domains by using filters. Is there an equivalent in Docker Swarm mode?
For example in a test environment, I have 6 VM's - all running docker.
I start up VM 1 and 2 and call that my failure domain1
I start up VM 3 and 4 and call that my failure domain2
I start up VM 5 and 6 and call that my failure domain3
All failure domains consist of one swarm manager and one swarm worker. In effect, I have 2 nodes per domain that can host service containers.
I tell docker to create a new service and run 3 containers based upon an image containing a simple web service. Docker does its thing and spins up 3 containers, and my service is running; I can access my load-balanced web service without any problems. Hurrah!
However, I'd like to specifically tell docker to distribute my 3 containers across domain1, domain2 and domain3.
How can I do this? (Also, am I posting on the correct site, or should this be on one of the other Stack Exchange sites?)
You can continue to use engine labels as you did before, or with the new swarm you can define node labels on the swarm nodes. Then you would define a constraint on your service and create 3 separate services, each constrained to run in one of your failure domains.
For node labels, you'd use docker node update --label-add az=1 vm1, which adds the label az=1 to your vm1 node. Repeat this process for your other AZs (availability zone is the term I tend to use) and VMs.
Now when scheduling your job, you add a constraint like
docker service create --constraint node.labels.az==1 \
--name AppAZ1 yourimage
for a node label or for an engine label:
docker service create --constraint engine.labels.az==1 \
--name AppAZ1 yourimage
repeating this for each of your AZ's.
Unfortunately I can't think of a way to force a spread across each of the AZs automatically when you use something like --replicas 3 that includes failover to the second node in each VM cluster. However, if you select a single VM per AZ for each task, you could label each of them with the same label (e.g. --label-add vm=a), and then use --mode global --constraint node.labels.vm==a to run one replica on each of your "a" nodes.
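Worth noting: engine versions newer than the 1.12 release this question targets (17.04+) added placement preferences, which do this kind of spread automatically; a sketch reusing the az label from above:
docker service create --replicas 3 \
  --placement-pref 'spread=node.labels.az' \
  --name App yourimage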