I am aware I can manually add a constraint for a leader node. But there may already be a built-in way to specify the leader node in a swarm.
Basically, I need to prevent containers from running on the leader node. They can run anywhere else except the leader. Is there a built-in way to refer to the leader node in a constraint?
To prevent containers from running on a node, you can do this for all containers using:
docker node update --availability drain $your_node_name
To do this for a single service, you can add a constraint on the node type:
docker service create --constraint 'node.role==worker' --name $your_service $image_name
I don't think there's any way to do this for only the leader within a group of managers; it's all or none. You may be able to script something external that checks the current leader and updates node labels, as sketched below.
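A rough sketch of that external-script idea, run periodically from any manager (the leader label name and the service placeholders are my own assumptions, not a built-in feature):
# find the hostname of the current leader
leader=$(docker node ls --filter role=manager --format '{{.Hostname}} {{.ManagerStatus}}' | awk '$2 == "Leader" {print $1}')
# clear the label everywhere, then mark only the leader
for node in $(docker node ls --format '{{.Hostname}}'); do
  docker node update --label-rm leader "$node" 2>/dev/null || true
done
docker node update --label-add leader=true "$leader"
# create the service with a label constraint instead of node.role
docker service create --constraint 'node.labels.leader != true' --name $your_service $image_name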
I have a Docker swarm environment with three managers, named swarm1, swarm2 and swarm3. Due to different circumstances (e.g. network and resources), swarm1 was set as the leader and should stay the leader. However, after a resource upgrade, swarm1 was rebooted. As a result, swarm2 is now the leader and swarm1 has the status Reachable. How can I make swarm1 the leader again?
With swarm managers, it's bad practice to have a "special" node that needs to be the leader at all times. All of your manager nodes should be as identical as possible. But, to answer your question, there is no way to manually set the swarm leader. However, what you can do is docker node demote the leader (swarm2), and the other manager (swarm3). Once the managers are demoted to workers, swarm1 by default becomes the leader. Then all you have to do is docker node promote swarm2 and swarm3.
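A minimal sketch of that sequence, run from swarm1 and assuming swarm1 is healthy and reachable the whole time:
# demote the other managers so swarm1 is the only manager left, and therefore the leader
docker node demote swarm2 swarm3
# promote them back so the swarm is not left with a single manager
docker node promote swarm2 swarm3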
I'll write out the whole scenario, so everything about changing the leader and reverting is in one place.
Scenario: the three nodes are swarm1, swarm2 and swarm3. The current leader, which needs the resource upgrade, is swarm1.
Step 1
Promote swarm2 so it can take over as leader
docker node promote swarm2
docker node ls
Make sure that the swarm2 Manager status is Reachable (Not Down)
Step 2
Now demote swarm1 to a worker
docker node demote swarm1
docker node ls
Ensure Manager status for swarm1 is now blank.
You may now remove the node from the swarm.
From the swarm1 node, issue:
docker swarm leave
Do the necessary upgrades to the swarm1 node and join it back to the swarm.
Step 3
Rejoin the swarm and change the leader.
From swarm2, issue:
docker swarm join-token manager
From the swarm1 node, issue the join command, e.g.:
docker swarm join --token <token> <ip:port>
Step 4
Now re-elect the leader.
From swarm1 issue:
docker node demote swarm2
docker node ls
Now you should see your old configuration with swarm1 as leader.
I wish to understand the difference between a Docker swarm node running as a Leader and one running as a Manager.
I also understand that there can be several Docker managers, but can there be multiple Docker swarm Leader nodes, and what would be the reasons for that?
Note: I'm aware of what a Docker worker node is.
Docker swarm has the following terminology:
Manager Node (Can be a leader or manager)
Worker Node
In simple Docker swarm mode there is a single manager and the other nodes are workers. In this case that one manager is the leader.
It is possible to have more than one manager node, for example two managers (though an odd number such as 1, 3 or 5 is preferred). In that case only one of them is the leader, and the leader is responsible for scheduling tasks on the worker nodes. The manager nodes also talk to each other to maintain the cluster state. For a highly available environment, scheduling should not stop when the manager that is currently the leader goes down; at that moment another manager is automatically promoted to leader and takes over the responsibility of scheduling tasks (containers) on the worker nodes.
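If you want to see which manager currently holds the leader role, a quick check from any manager could look like this (the node name manager1 is just an example):
# lists each node with its manager status; the current leader shows "Leader"
docker node ls --format '{{.Hostname}}: {{.ManagerStatus}}'
# or ask about a specific manager directly; prints "true" only for the leader
docker node inspect manager1 --format '{{.ManagerStatus.Leader}}'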
I have set up a three-node quorum network using Docker. If my network crashes and I only have the information from one of the nodes, do I need to recover that node using the binary? Also, the blocks in the new network should be in sync with the others. Please guide me on how this is possible.
I assume you're using Docker swarm. The clustering facility within the swarm is maintained by the manager nodes. If for any reason the leader becomes unavailable and there are not enough remaining managers to reach quorum and elect a new leader, quorum is lost and no manager node is able to control the swarm.
In this kind of situation, it may be necessary to re-initialize the swarm and force the creation of a new cluster by running the following command on the former leader once it is brought back online:
# docker swarm init --force-new-cluster
This removes all managers except the manager the command was run from. The good thing is that worker nodes will continue to function normally and the other manager nodes should resume functionality once the swarm has been re-initialized.
Sometimes it might be necessary to remove manager nodes from the swarm and rejoin them to the swarm.
But note that when a node rejoins the swarm, it must join the swarm via a manager node.
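For example, a minimal sketch using the same placeholders as elsewhere in this thread (2377 is the default swarm management port):
# on a healthy manager: print the join command for a rejoining manager
docker swarm join-token manager
# on the node that is rejoining:
docker swarm join --token <token> <manager-ip>:2377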
You can always monitor the health of manager nodes by querying the Docker nodes API (the /nodes HTTP endpoint), for example via docker node inspect:
# docker node inspect manager1 --format "{{ .ManagerStatus.Reachability }}"
# docker node inspect manager1 --format "{{ .Status.State }}"
Also, make it a practice to automate backups of the Docker swarm state directory /var/lib/docker/swarm so you can recover from a disaster more easily.
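A rough backup sketch (the destination path and scheduling are my own assumptions; the documented approach is to stop the engine briefly on a non-leader manager so the Raft state on disk is consistent):
# run on a manager node; /backup is a made-up destination
systemctl stop docker
tar -czf /backup/swarm-state-$(date +%F).tar.gz -C /var/lib/docker swarm
systemctl start docker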
In Docker swarm mode I can run docker node ls to list swarm nodes, but it does not work on worker nodes. I need a similar function. I know worker nodes do not have a strongly consistent view of the cluster, but there should be a way to get the current leader or a reachable leader.
So, is there a way to get the current leader/manager from a worker node in Docker swarm mode 1.12.1?
You can get manager addresses by running docker info from a worker.
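For example, from a worker (a small sketch; the exact fields can vary between Docker versions):
# plain `docker info` shows the manager addresses under the Swarm section;
# with --format you can pull them out directly
docker info --format '{{range .Swarm.RemoteManagers}}{{.Addr}} {{end}}'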
The docs and the error message from a worker node mention that you have to be on a manager node to execute swarm commands or view cluster state:
Error message from a worker node: "This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager."
After further thought:
One way you could crack this nut is to use an external key/value store like etcd or any other key/value store that Swarm supports and store the elected node there so that it can be queried by all the nodes. You can see examples of that in the Shipyard Docker management / UI project: http://shipyard-project.com/
Another simple way would be to run a redis service on the cluster and another service to announce the elected leader. This announcement service would have a constraint to only run on the manager node(s): --constraint node.role == manager
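A rough sketch of that idea (the network, service names and the announcer image are made up; it is not a ready-made solution):
# an overlay network plus a redis service reachable from the swarm
docker network create --driver overlay my-overlay
docker service create --name redis --network my-overlay redis:alpine
# an announcer that runs only on managers, reads `docker node ls` through the
# mounted docker socket, and writes the current leader's hostname into redis
docker service create --name leader-announcer \
  --constraint 'node.role == manager' \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  --network my-overlay \
  your-announcer-image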
I added three nodes to a swarm cluster with static file mode. I want to remove host1 from the cluster. But I don't find a docker swarm remove command:
Usage: swarm [OPTIONS] COMMAND [arg...]
Commands:
  create, c   Create a cluster
  list, l     List nodes in a cluster
  manage, m   Manage a docker cluster
  join, j     join a docker cluster
  help, h     Shows a list of commands or help for one command
How can I remove the node from the swarm?
Using Docker Version: 1.12.0, docker help offers:
➜ docker help swarm
Usage: docker swarm COMMAND
Manage Docker Swarm
Options:
      --help  Print usage

Commands:
  init        Initialize a swarm
  join        Join a swarm as a node and/or manager
  join-token  Manage join tokens
  update      Update the swarm
  leave       Leave a swarm
Run 'docker swarm COMMAND --help' for more information on a command.
So, next try:
➜ docker swarm leave --help
Usage: docker swarm leave [OPTIONS]
Leave a swarm
Options:
      --force  Force leave ignoring warnings.
      --help   Print usage
Using the swarm mode introduced in the docker engine version 1.12, you can directly do docker swarm leave.
The reference to "static file mode" implies the container-based standalone swarm that predated the current Swarm Mode that most people now know as Swarm. These are two completely different "Swarm" products from Docker and are managed with completely different methods.
The other answers here focused on Swarm Mode. With Swarm Mode, docker swarm leave on the target node will cause the node to leave the swarm. Once the engine is no longer talking to the managers, docker node rm for that node, run on an active manager, will clean up any lingering references inside the cluster.
With the container-based classic swarm, you would recreate the manager container with an updated static list. If you find yourself doing this a lot, an external DB for discovery would make more sense (e.g. consul, etcd, or zookeeper). Given that classic swarm is deprecated and no longer maintained, I'd suggest using either Swarm Mode or Kubernetes for any new projects.
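For reference, a rough sketch of what "recreate the manager container with an updated static list" could look like with classic swarm (the container name, port mapping and file path are my own assumptions):
# edit the static cluster file so it no longer lists host1, then recreate the manager
docker rm -f swarm-manager
docker run -d --name swarm-manager -p 4000:2375 \
  -v /etc/swarm/cluster:/etc/swarm/cluster \
  swarm manage file:///etc/swarm/cluster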
Try this:
docker node list # to get a list of nodes in the swarm
docker node rm <node-id>
Using the Docker CLI
I work with Docker Swarm clusters, and there are two options to remove a node from the cluster.
It depends on where you want to run the command: on the node you want to remove, or on a manager node other than the node to be removed.
The important thing is that the desired node must be drained before being removed to maintain cluster integrity.
First option:
So I think the best thing to do is follow these steps (as in the official documentation):
SSH into one of the nodes with manager status;
Optionally, list your cluster nodes;
Change the availability of the node you want to remove to drain;
And remove it;
# step 1
ssh user@node1cluster3
# step 2, list the nodes in your cluster
docker node ls
# step 3, drain one of them
docker node update --availability drain node4cluster3
# step 4, remove the drained node
docker node rm node4cluster3
Second option:
The second option needs two terminal logins, one on a manager node and one on the node you want to remove.
Perform the 3 initial steps described in the first option to drain the desired node.
Afterwards, log in to the node you want to remove and run the docker swarm leave command.
# remove from swarm using leave
docker swarm leave
# OR, if the desired node is a manager, you can use force (be careful*)
docker swarm leave --force
*For information about maintaining a quorum and disaster recovery, refer to the Swarm administration guide.
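Note that after the node has left, a manager will still list it with status Down; from a manager you can then remove the stale entry (reusing the example node name from the first option):
docker node rm node4cluster3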
My environment information
I use Ubuntu 20.04 for nodes inside VMs;
With Docker version 20.10.9;
Swarm: active;