Docker Swarm failover behavior seems a bit underwhelming

I am currently trying to use Docker Swarm to set up our application (consisting of both stateless and stateful services) in a highly available fashion on a three-node cluster. By "highly available" I mean "can survive the failure of one of the three nodes".
We have been doing such installations for quite a while now (using other means, not Docker, let alone Docker Swarm) with good success, including acceptable failover behavior, so our application itself (or rather the services that constitute it) has proven that it can be made highly available in such a three-node setup.
With Swarm, I get the application up and running successfully (with all three nodes up) and have taken care that each service is configured redundantly: more than one instance exists for each of them, they are properly configured for HA, and not all instances of a service are located on the same Swarm node. Of course, I also made sure that all my Swarm nodes joined the swarm as manager nodes, so that any one of them can become leader if the original leader node fails.
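To illustrate (a minimal sketch, not our real stack file; the service name, image and ports are placeholders), one of the redundant services is declared roughly like this:
version: "3.8"
services:
  api:
    image: example/api:latest          # placeholder image
    deploy:
      replicas: 2                      # more than one instance per service
      placement:
        max_replicas_per_node: 1       # keep the instances on different Swarm nodes
    ports:
      - "8080:8080"                    # published through Swarm's Ingress routing mesh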
In this "good" state, I can reach the services on their exposed ports on any of the nodes, thanks to Swarm's Ingress networking.
Very cool.
In a production environment, we could now put a highly available load balancer in front of our swarm worker nodes, so that clients have a single IP address to connect to and would not even notice if one of the nodes goes down.
So now it is time to test failover behavior...
I would expect that killing one Swarm node (i.e., hard shutdown of the VM) would leave my application running, albeit in "degraded" mode, of course.
Alas, after the shutdown, I cannot reach ANY of my services via their ports published via Ingress anymore for a considerable time. Some do become reachable again and indeed have recovered successfully (e.g., a three-node Elasticsearch cluster can be accessed again, now of course lacking one node, but back in "green" state). But others (and alas, this includes our internal LB...) remain unreachable via their published ports.
"docker node ls" shows one node as unreachable
$ docker node ls
ID                           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
kma44tewzpya80a58boxn9k4s *  manager1   Ready    Active         Reachable
uhz0y2xkd7fkztfuofq3uqufp    manager2   Ready    Active         Leader
x4bggf8cu371qhi0fva5ucpxo    manager3   Down     Active         Unreachable
as expected.
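For completeness, this is how I check what actually got rescheduled after the failover (the service name is a placeholder for one of ours):
$ docker service ls                          # desired vs. actually running replicas per service
$ docker service ps my_stack_lb --no-trunc   # task history: where tasks ran and where they were restarted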
What could I be doing wrong regarding my Swarm setup that causes these effects? Am I just expecting too much here?

Related

Deploying couchbase in a docker swarm environment

I'm trying to deploy Couchbase Community Edition in a Docker Swarm environment. I followed the steps suggested by Arun Gupta, though I'm not sure if a master-worker model is desired, as Couchbase doesn't have the notion of a master/slave model.
Following are the problems I encountered. I'm wondering if anyone has been able to run Couchbase successfully in swarm mode.
Docker swarm assigns a different IP address each time the service is restarted. Sometimes docker moves the service to a new node, which again assigns a different IP address. It appears that Couchbase doesn't start if it finds a new IP address (the log says "address on which the service is configured is not up. Waiting for the interface to be brought up"). I'm using a host-mounted volume as the data folder (/opt/couchbase/var) to persist data across restarts.
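A sketch of that part of my compose file (image tag and host path are placeholders):
services:
  couchbase:
    image: couchbase:community               # placeholder tag
    volumes:
      - /data/couchbase:/opt/couchbase/var   # host-mounted data folder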
I tried to read the overlay network address used internally and update the ip and ip_start files in a run script within the container. This doesn't help either: the server comes up as a new instance without loading the old data. This is a real problem, as production data can be lost if docker swarm moves services around.
Docker swarm's internal router assigns an address from the overlay network in addition to the other interfaces. I tried using localhost, master.overlaynet, the IP address of the overlay network, the private address assigned by docker to the container, etc. as the server address in the Couchbase cluster configuration. While the cluster servers are able to communicate with each other, this created another problem with client connections. A client normally connects to an address/port exposed by the swarm cluster, which is different from the cluster node address. In the case of a Python client, it reads the Couchbase cluster server addresses and tries to connect to them, so if the overlay address was given as the server address when joining the cluster, the client times out because that address is not reachable.
I might be able to add a network address constraint to the yaml file to ensure that the master node will come up with the same address. For example:
networks:
  default:
    ipv4_address: 172.20.x.xx
The above approach may not work for worker nodes, as that would impact the ability to scale worker nodes based on load/growth.
In this model (master/worker), how does a worker get elected as leader if the master node goes down? Is master/worker the right approach for a Couchbase cluster in a swarm environment?
It would be helpful if I could get some references to a Couchbase swarm-mode setup, or some suggestions on how to handle the IP address change.
We ran into the same problem (couchbase server 5.1.1) and our temporary solution is to use fixed IPs on a new docker bridge network.
networks:
  default:
    ipv4_address: 172.19.0.x
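For completeness (a sketch with example subnet and addresses, not our exact values), the bridge network itself is declared with a matching subnet so the fixed addresses are valid:
services:
  couchbase:
    networks:
      default:
        ipv4_address: 172.19.0.10     # example fixed address
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/16       # example subnet containing the fixed addresses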
Although this works, it is not a good solution, as we lose auto-scaling as mentioned above. We had some learnings during setup, just to let you know:
You can run a single-node Couchbase setup with a dynamic IP. You can stop/restart this container and update the couchbase-server version with no limitations.
When you add a second node, this initially works with a dynamic IP as well during setup. You can add the server and rebalance the cluster. But when you stop/restart/scale 0/1 a Couchbase container, it won't start up anymore due to a new IP provided by docker (10.0.0.x with the default network).
Changing the "ip" or "ip_start" files (/opt/couchbase/var/lib/couchbase/config) to update the IP does NOT work. Server starts up as "new" server, when changing the ip in "ip" and "ip_start" but it still has all the data. So you can backup your data, if you need now. So even after you "switched" to fixed IP you can't re-start the server directly, but need to cbbackup and cbrestore.
The documentation at https://docs.couchbase.com/server/5.1/install/hostnames.html for using hostnames is a little misleading, as it only documents how to "find" a new server while configuring a cluster. Even if you specify hostnames, Couchbase still configures all nodes with the static IPs.
Starting your docker swarm with the host network might be a solution, but we run multiple instances of other containers on a single host, so we would like to avoid that.
So always have a backup of the node/cluster. We always make a file backup and a cluster backup with cbbackup, as restoring from a file backup is much faster.
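The cluster backup/restore commands look roughly like this (host, credentials, paths and bucket name are placeholders):
/opt/couchbase/bin/cbbackup http://127.0.0.1:8091 /backups/cb -u Administrator -p password
/opt/couchbase/bin/cbrestore /backups/cb http://127.0.0.1:8091 -u Administrator -p password -b mybucket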
There is a discussion of this issue at https://github.com/couchbase/docker/issues/82, but it involves using AWS for static IPs, which we don't use.
I am aware of the Couchbase Autonomous Operator for Kubernetes, but for now we would like to stay with docker swarm. If anybody has a nicer solution for this, e.g. how to configure Couchbase to use hostnames, please share.

How to handle port traffic only to Docker Swarm nodes that have the service that exposes that port?

Imagine that the Docker Swarm consists of nodes A, B and C.
Imagine further that we run a Docker Stack with a single service (for the sake of example), scaled to 2 instances, and that this service exposes port 80 of the host machine.
How do I make sure that any traffic that hits:
http://A:80
http://B:80
http://C:80
always lands on a live Docker instance?
Given that there are 2 instances of the service and 3 nodes total, there will always be at least one node that doesn't have the service on it, so it will not expose port 80 (I assume).
One benefit of using orchestration with e.g. swarm mode is that you do not need to know anything about the individual nodes in your swarm. Instead, swarm works on a higher level of abstraction than nodes: on services.
So you tell swarm which nodes it consists of, which services you have and how many container instances you want to run inside the swarm for each service. After that, it is swarm's job to decide/know which container runs on which node. Again: you don't care about the individual nodes.
So the question is not how to make
http://A:80
http://B:80
http://C:80
(re)route to a correct/valid node (one running a corresponding container with the exposed port),
because the only thing you need to know is the name of your service. So you will only call
http://myservice:80
And then swarm mode will decide to which node the request is forwarded (http://A:80, http://B:80 or http://C:80). And if you have 3 nodes, 1 service and 2 replicas of that service, swarm will ensure that no request is forwarded to the node on which no container is running, because it knows there are only 2 replicas and it knows on which nodes those instances run.
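As a concrete (hedged) sketch, with the service name and image chosen only as examples, this is what such a setup looks like; the published port is handled by the routing mesh on every node:
docker service create --name myservice --replicas 2 --publish published=80,target=80 nginx
# Swarm publishes port 80 on all three nodes and forwards each incoming request
# to one of the 2 replicas, wherever they happen to be scheduled.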

Docker Swarm, multiple hosts not in same local network but reachable over IP

I see a lot of examples running multiple Docker nodes in swarm mode, but they all mention that the nodes share a local/private network. I was wondering: is it possible to connect two hosts in a swarm that are not on a private network but can still reach each other over IP, with the correct ports set up?
This would not be for a production setup.
Are there any Swarm mechanisms that prevent such an architecture?
Thank you for your time!
You can connect swarm nodes over the public internet. What's needed is:
Routable IP addresses for each node; this may require a VPN between nodes
Firewall rules to allow 2377/tcp, 7946/tcp+udp, 4789/udp between each node (see the ufw sketch below)
Low latency; if the heartbeat timeout is exceeded, nodes will be flagged as down and the workload will migrate
Because of the last requirement, people will typically install nodes in the same region but across multiple AZs. And when you get to multiple regions, you typically see multiple clusters, to keep the latency within each cluster down.
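For example, with ufw as the firewall (just one possible tool), the per-node rules could look like:
sudo ufw allow 2377/tcp    # cluster management traffic
sudo ufw allow 7946/tcp    # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp    # overlay network (VXLAN) traffic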
Running this command helped me have all nodes available across all continents:
sudo docker swarm update --dispatcher-heartbeat 120s

Docker Swarm --advertise-addr changing

I have a 3-node swarm, each node with a static IP address. I have a leader node-0 on IP 192.168.2.100, a backup manager node-1 on 192.168.2.101, and a worker node-2 on 192.168.2.102. node-0 is the leader that initialized the swarm, so its --advertise-addr is 192.168.2.100. I can deploy services that can land on any node, and node-0 handles the load balancing. So, if I have a database on node-2 (192.168.2.102:3306), it is still reachable from node-0 at 192.168.2.100:3306, even though the service is not directly running on node-0.
However, when I reboot node-0 (let's say it loses power), the next manager in line, node-1, assumes the leader role, as expected.
But now, if I want to access a service, let's say an API or database, from a client (a computer that's not in the swarm), I have to use 192.168.2.101:3306 as my entry-point IP, because node-1 is handling the load balancing. So, essentially, from the outside world (other computers on the network) the IP address of the swarm has changed, and this is unacceptable and impractical.
Is there a way to resolve this such that a given manager has priority over another manager? Otherwise, how is this sort of issue resolved such that the entry point ip of the swarm is not dependent on the acting leader?
Make all three of your nodes managers and use some sort of load-balanced DNS to point to all three of them. If one of the managers goes down, your DNS will route to one of the other two (seamlessly or slightly less seamlessly, depending on how sophisticated your DNS routing/health-check/failover setup is). When you come to scale out with more nodes, nodes 4, 5, 6, etc. can all be worker nodes, but you will benefit from having three managers rather than one.
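A sketch of the promotion step, using the node names from the question (assuming node-0 is already a manager):
docker node promote node-1 node-2   # make the remaining nodes managers
docker node ls                      # all three nodes should now show a MANAGER STATUS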

Understanding docker swarm in terms of high availability

I am currently trying to understand what would be necessary to create a docker swarm to make some service highly available. I read through a lot of the docker swarm documentation, but if my understanding is correct, docker swarm will just execute a service on any host. What would happen if a host fails? Would the swarm manager restart the service(s) running on that host/node on another one? Is there any better explanation of this than in the original documentation found here?
Nothing more complex than that really. Like it says, Swarm (and kubernetes, and most other tooling in this space) is declarative, which means that you tell it the state that you want (i.e. 'I want 4 instances of redis') and Swarm will converge the system to that state. If you have 3 nodes, then it will schedule 1 redis on Node 1, 1 on Node 2, and 2 on Node 3. If Node 2 dies, then the system is now not 'compliant' with your declared state, and Swarm will schedule another redis on Node 1 or 3 (depending on strategy, etc...).
Now this dynamism of container/task/instance scheduling brings another problem: discovery. Swarm deals with this by maintaining an internal DNS registry and by creating a VIP (virtual IP) for each service. Instead of having to address or keep track of each redis instance, I can point to a service alias and Swarm will automatically route traffic to where it needs to go.
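A minimal sketch of both points (network and image names are examples only): declare the desired state once, and let other services reach it by name:
docker network create --driver overlay mynet
docker service create --name redis --replicas 4 --network mynet redis:alpine
# other services attached to "mynet" simply connect to redis:6379; Swarm's internal
# DNS resolves "redis" to the service VIP and balances across whatever tasks exist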
Of course there are also other considerations:
Can your service support multiple backend instances? Is it stateless? Sessions? Cache? Etc...
What is 'HA'? Multi-node? Multi-AZ? Multi-region? Etc...
