I have a 3-node swarm, each node with a static IP address: a leader node-0 on 192.168.2.100, a backup manager node-1 on 192.168.2.101, and a worker node-2 on 192.168.2.102. node-0 is the leader that initialized the swarm, so the --advertise-addr is 192.168.2.100. I can deploy services that can land on any node, and node-0 handles the load balancing. So, if I have a database on node-2 (192.168.2.102:3306), it is still reachable from node-0 at 192.168.2.100:3306, even though the service is not running directly on node-0.
However, when I reboot node-0 (say it loses power), the next manager in line, node-1, assumes the leader role, as expected.
But now, if I want to access a service, say an API or database, from a client (a computer that is not in the swarm), I have to use 192.168.2.101:3306 as my entry-point IP, because node-1 is now handling the load balancing. So, essentially, from the outside world (other computers on the network) the IP address of the swarm has changed, which is unacceptable and impractical.
Is there a way to resolve this such that a given manager has priority over another manager? Otherwise, how is this sort of issue resolved such that the entry point ip of the swarm is not dependent on the acting leader?
Make all three of your nodes managers and use some sort of load balanced DNS to point to all three of your manager nodes. If one of the managers goes down, your DNS will route to one of the other two managers (seamlessly or slightly less seamlessly depending on how sophisticated your DNS routing/health-check/failover setup is). When you come to scale out with more nodes, nodes 4, 5, 6 etc can all be worker nodes but you will benefit from having three managers rather than one.
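For example, a minimal sketch of that setup, reusing the node names and addresses from the question (the DNS part depends entirely on your DNS server):

docker node promote node-1 node-2        # all three nodes are now managers
# then publish one DNS name with round-robin A records, e.g.:
#   swarm.example.lan  ->  192.168.2.100, 192.168.2.101, 192.168.2.102
# clients connect to swarm.example.lan:3306 regardless of which node is currently the leader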
Related
I'm trying to deploy Couchbase Community Edition in a docker swarm environment. I followed the steps suggested by Arun Gupta, though I'm not sure if a master-worker model is desired, as Couchbase doesn't have the notion of a master/slave model.
Following are the problems I encountered. I'm wondering if anyone is able to run Couchbase successfully in a swarm mode.
Docker swarm assigns a different IP address each time the service is restarted. Sometimes docker moves the service to a new node, which again assigns a different IP address. It appears that Couchbase doesn't start if it finds a new IP address (the log says "address on which the service is configured is not up. Waiting for the interface to be brought up"). I'm using a host-mounted volume as the data folder (/opt/couchbase/var) to persist data across restarts.
I tried to read the overlay network address used internally and update the ip and ip_start files in a run script within the container. This doesn't help either: the server comes up as a new instance without loading the old data. This is a real problem, as production data can be lost if docker swarm moves services around.
Docker swarm's internal router assigns an address from the overlay network in addition to the other interfaces. I tried using localhost, master.overlaynet, the IP address of the overlay network, the private address assigned by docker to the container, etc. as the server address in the Couchbase cluster configuration. While the cluster servers are able to communicate with each other, this created another problem with client connections. A client normally connects to an address/port exposed by the swarm cluster, which is different from the cluster node address. In the case of a Python client, it reads the Couchbase cluster server addresses and tries to connect to them, so if the overlay address is given as the server address when joining the cluster, the client times out because that address is not reachable.
I might be able to add a network address constraint to the yaml file to ensure that the master node will come up with the same address. For example:
networks:
  default:
    ipv4_address: 172.20.x.xx
The above approach may not work for worker nodes, as that would impact the ability to scale worker nodes based on load/growth.
In this model (master/worker), how does a worker get elected as leader if the master node goes down? Is master/worker the right approach for a Couchbase cluster in a swarm environment?
It will be helpful if I can get some references to Couchbase swarm mode setup or some suggestions on how to handle IP address change.
We ran into the same problem (couchbase server 5.1.1) and our temporary solution is to use fixed IPs on a new docker bridge network.
networks:
  default:
    ipv4_address: 172.19.0.x
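For context, a fuller sketch of this workaround as a compose file might look like the following (subnet, image tag, concrete address and ports are assumptions; this is plain docker-compose on a user-defined bridge network, not a swarm stack):

version: "2.4"
services:
  couchbase:
    image: couchbase:5.1.1                        # image tag is an assumption
    ports:
      - "8091:8091"                               # admin console (plus other Couchbase ports as needed)
    volumes:
      - /opt/couchbase/var:/opt/couchbase/var     # host-mounted data dir so data survives restarts
    networks:
      default:
        ipv4_address: 172.19.0.2                  # one fixed address per node, corresponding to 172.19.0.x above
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/24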
Although this works, it is not a good solution, as we lose auto-scaling, as mentioned above. We had some learnings during setup. Just to let you know:
You can run a single-node couchbase setup with a dynamic IP. You can stop/restart this container and update the couchbase-server version with no limitations.
When you add a second node, this initially works with a dynamic IP as well during setup: you can add the server and rebalance the cluster. But when you stop/restart/scale 0/1 a couchbase container, it won't start up anymore due to the new IP provided by docker (10.0.0.x with the default network).
Changing the "ip" or "ip_start" files (/opt/couchbase/var/lib/couchbase/config) to update the IP does NOT work. Server starts up as "new" server, when changing the ip in "ip" and "ip_start" but it still has all the data. So you can backup your data, if you need now. So even after you "switched" to fixed IP you can't re-start the server directly, but need to cbbackup and cbrestore.
The documentation on using hostnames (https://docs.couchbase.com/server/5.1/install/hostnames.html) is a little misleading, as it only documents how to "find" a new server while configuring a cluster. Even if you specify hostnames, Couchbase still configures all nodes with the static IPs.
Starting your docker swarm with the host network might be a solution, but we run multiple instances of other containers on a single host, so we would like to avoid that.
So always have a backup of the node/cluster. We always make both a file backup and a cluster backup with cbbackup, as restoring from a file backup is much faster.
There is a discussion at https://github.com/couchbase/docker/issues/82 on this issue, but that involves using AWS for static IPs, which we don't use.
I am aware of the Couchbase Autonomous Operator for Kubernetes, but for now we would like to stay with docker swarm. If anybody has a nicer solution for this, e.g. how to configure Couchbase to use hostnames, please share.
I am trying to implement a cluster of containerised applications in production using Docker in swarm mode.
Let me describe a very minimalist scenario.
All I have is 5 AWS EC2 instances.
None of these nodes has a public IP assigned; all have private IPs that are part of a subnet.
For example,
Manager Nodes
172.16.50.1
172.16.50.2
Worker Nodes
172.16.50.3
172.16.50.4
172.16.50.5
With the above infrastructure, I have created a docker swarm with the first node's IP (172.16.50.1) as the --advertise-addr, so that the other 4 nodes join the swarm as managers or workers with their respective tokens.
I didn't want to overload the manager nodes by making them take on the role of worker nodes too. (Is this a good idea, or is it resource under-utilization?)
Since the nodes have 4 cores each, I am hosting 9 replicas of my web application, distributed across the 3 worker nodes, each running 3 containers of the web app.
Now, with this setup in hand, how should I go about exposing the entire docker swarm cluster to the external world for consumption via a VIP (virtual IP)?
Please validate my thoughts below:
1. Should I have a classic load-balancer setup, such as an httpd, nginx or haproxy based reverse proxy with a public IP assigned, and make it balance the load across the above 5 nodes where our docker swarm is deployed? (A rough sketch of this option follows at the end of this post.)
One downside I see here is that this reverse proxy would be a single point of failure. Any ideas how it could be made fault-tolerant/highly available? Should I try an Anycast solution?
2. Going for an AWS ALB/ELB which would route the traffic to the above 5 nodes where our swarm is.
3. If keeping a separate load balancer is the way to go, then what are docker swarm's load balancing and service discovery really all about?
What is docker swarm's answer for exposing one virtual IP or hostname to external clients so they can access services in the swarm cluster?
Docker swarm touts a lot about overlay networks, but I'm not sure how that relates to my issue of exposing the cluster via a VIP to clients on the internet. Should we always keep the load balancer aware of the IP addresses of nodes that join the docker swarm later?
please shed some light!
On further reading, I understand that the overlay network we create on the swarm manager node only serves inter-container communication.
The only difference from the other networking modes like bridge, host and macvlan is that those enable communication among containers within a single host, while the overlay network also facilitates communication among containers deployed in different subnets, i.e. multi-host container communication.
With this knowledge as the background, the plan is to expose the swarm to the world via a single public IP assigned to a load balancer, which would distribute requests to all the swarm nodes. This is just my understanding at a high level.
This is where I need your inputs and thoughts, please: what is the industry standard for handling this?
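For reference, a minimal sketch of what option 1 above could look like with nginx (assuming the swarm service publishes port 80 through the routing mesh; the proxy itself still needs to be made highly available, which is exactly the concern raised in option 1):

# e.g. /etc/nginx/conf.d/swarm.conf on the reverse-proxy host
upstream swarm_nodes {
    # every swarm node answers on the published port thanks to the ingress routing mesh
    server 172.16.50.1:80;
    server 172.16.50.2:80;
    server 172.16.50.3:80;
    server 172.16.50.4:80;
    server 172.16.50.5:80;
}
server {
    listen 80;                      # the single public entry point
    location / {
        proxy_pass http://swarm_nodes;
    }
}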
If we imagine that the Docker Swarm consists of node A, B and C.
If we imagine that we run a Docker Stack of a single service (for the sake of example), scaled to 2 instances and that service exposes port 80 of the host machine.
How do I make sure that any traffic that hits:
http://A:80
http://B:80
http://C:80
always lands on a live Docker instance?
Given that there are 2 instances of the service and 3 nodes total, there will always be at least one node that doesn't have the service on it, so it will not expose port 80 (I assume).
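For concreteness, such a stack could be described by a minimal compose file like this sketch (service and image names are placeholders):

version: "3.7"
services:
  myservice:
    image: nginx:alpine          # placeholder image for the single service
    deploy:
      replicas: 2
    ports:
      - "80:80"                  # published through the ingress routing mesh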
One benefit of using orchestration with e.g. swarm mode is that you do not need to know anything about the single nodes in your swarm. Instead, swarm works on a higher level of abstraction than nodes: on services.
So you tell swarm which nodes it consists of, which services you have, and how many container instances you want to run inside the swarm for each service. After configuring that, it is swarm's job to decide/know which container runs on which node. Again: you don't care about the single nodes.
So the question is not how to make
http://A:80
http://B:80
http://C:80
(re)route to a correct/valid node (one running a corresponding container with the exposed port),
because the only thing you need to know is the name of your service. So you will only call
http://myservice:80
Swarm mode will then decide which node the request is forwarded to (http://A:80, http://B:80 or http://C:80). And if you have 3 nodes, 1 service and 2 replicas of that service, swarm will ensure that no request is forwarded to the node on which no container is running, because it knows there are only 2 replicas and it knows on which nodes those instances run.
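As a small hedged illustration of this (network, service and image names are placeholders), the service is created once and then addressed only by its name from inside the swarm network:

docker network create --driver overlay --attachable appnet
docker service create --name myservice --replicas 2 --network appnet --publish 80:80 nginx:alpine
# from any container attached to appnet, the service name is all a client needs:
#   curl http://myservice:80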
I am currently trying to use Docker Swarm to set up our application (consisting of both stateless and stateful services) in a highly available fashion on a three-node cluster. By "highly available" I mean "can survive the failure of one of the three nodes".
We have been doing such installations (using other means, not Docker, let alone Docker Swarm) for quite a while now with good success, including acceptable failover behavior, so our application itself (resp. the services that constitute it) has proven that it can be made highly available in such a three-node setup.
With Swarm, I get the application up and running successfully (with all three nodes up) and have taken care that each service is configured redundantly, i.e., more than one instance exists for each of them, they are properly configured for HA, and not all instances of a service are located on the same Swarm node. Of course, I also took care that all my Swarm nodes joined the Swarm as manager nodes, so that any one of them can become leader of the swarm if the original leader node fails.
In this "good" state, I can reach the services on their exposed ports on any of the nodes, thanks to Swarm's Ingress networking.
Very cool.
In a production environment, we could now put a highly-available loadbalancer in front of our swarm worker nodes, so that clients have a single IP address to connect to and would not even notice if one of the nodes goes down.
So now it is time to test failover behavior...
I would expect that killing one Swarm node (i.e., hard shutdown of the VM) would leave my application running, albeit in "degraded" mode, of course.
Alas, after doing the shutdown, I cannot reach ANY of my services via their exposed (via Ingress) ports anymore for a considerable time. Some do become reachable again and indeed have recovered successfully (e.g., a three node Elasticsearch cluster can be accessed again, of course now lacking one node, but back in "green" state). But others (alas, this includes our internal LB...) remain unreachable via their published ports.
"docker node ls" shows one node as unreachable
$ docker node ls
ID                           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
kma44tewzpya80a58boxn9k4s *  manager1   Ready    Active         Reachable
uhz0y2xkd7fkztfuofq3uqufp    manager2   Ready    Active         Leader
x4bggf8cu371qhi0fva5ucpxo    manager3   Down     Active         Unreachable
as expected.
What could I be doing wrong regarding my Swarm setup that causes these effects? Am I just expecting too much here?
Elasticsearch is designed to run in cluster mode; all I have to do is define the relevant node IPs in the cluster via an environment variable, and as long as network connectivity is available it will connect to and join the other nodes in the cluster.
I have 3 nodes, 1 is acting as the docker swarm manager and the other two are workers. I have initialized the manager and joined the worker nodes and everything looks ok from that standpoint.
Now I'm trying to run the elasticsearch container in a way that will allow me to join all nodes to the same elasticsearch cluster. However, I want the nodes to join using their overlay network interface, and that means I need to know the containers' internal IP addresses at the time of running the docker service create command. How can I do this? Do I have to use something like Consul to achieve this?
Some clarifications:
I need to know, at the time of service creation, the IP addresses (or DNS names) of all Elasticsearch participants so I can start the cluster correctly. This has to be at the time of creation and not afterwards. Also, as I understand it, I could expose ports 9200/9300 for all services and work with the external machine IPs to get it to work, but I would like to use the overlay network for all this communication (I thought this is what swarm mode is for).
Only a partial solution here.
So, when attaching your services to a custom overlay network, you indeed have access to Docker's service discovery feature. I'll detail the networking features of Docker Swarm mode before trying to tie them to your problem.
I'll be using the two distinct terms services and tasks, where a service could be elasticsearch, whereas a task is a single instance of that elasticsearch service.
Docker networking
The idea is that for each service you create, docker assigns a virtual IP (VIP) and a custom DNS alias. You can retrieve this VIP using the docker service inspect myservice command.
But there are two modes for attaching a service to an overlay network: dnsrr and vip. You can select between them using the --endpoint-mode option of docker service create.
The vip mode (I believe it is the default one, or at least the most used) binds the virtual IP to the service's DNS alias. This means that doing an nslookup servicename would return a single VIP which, behind the scenes, is linked to one of your containers in a round-robin fashion. But there is also a special DNS alias that lets you access all of your instances' IPs (all of your tasks' IPs): tasks.myservice.
So in vip mode you can retrieve all of your tasks' IPs using a simple nslookup tasks.myservice, where myservice is a service name.
The other mode is dnsrr. This mode simply gets rid of the VIP and connects the DNS alias to the different tasks (= service instances) in a round-robin way. This way, you simply have to do an nslookup myservice to retrieve the different service instances' IPs.
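As a hedged illustration of the two modes (service, network and image names are placeholders):

# vip mode (the default):
docker service create --name myservice --network mynet --endpoint-mode vip myimage
# or dnsrr mode instead:
docker service create --name myservice --network mynet --endpoint-mode dnsrr myimage
# from any container attached to mynet:
#   nslookup myservice        -> the single VIP (vip mode) or the task IPs directly (dnsrr mode)
#   nslookup tasks.myservice  -> the individual task IPs (vip mode)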
Elasticsearch clustering
Ok, so first of all I'm not really familiar with the way elasticsearch lets you cluster. From what I understood from your question, when running the elasticsearch binary you need to give it, as a parameter, the addresses of all of the other nodes it needs to cluster with.
So what I would do is create a custom Elasticsearch image, probably based on the one from the default library, to which I would add a custom entrypoint that would first run a script to retrieve the other tasks' IPs.
I believe that staying in vip mode is suitable for you, since there is the tasks.myservice DNS alias. You'll then need to parse the output to retrieve the tasks' IPs (and probably remove your own). Then you'll be able to save them in a config file or environment variable, or use them as a runtime option for your elasticsearch binary, for example along the lines of the sketch below.
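A very rough sketch of what such an entrypoint script could do (the service name "elasticsearch", a glibc-based image and the zen discovery setting are all assumptions; adjust to your image and Elasticsearch version):

#!/bin/sh
# resolve all task IPs of the service, drop our own, and hand the rest to elasticsearch
OWN_IP=$(hostname -i | awk '{print $1}')
PEER_IPS=$(getent ahostsv4 tasks.elasticsearch | awk '{print $1}' | sort -u | grep -v "^${OWN_IP}$" | paste -sd, -)
exec elasticsearch -E discovery.zen.ping.unicast.hosts="${PEER_IPS}"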
Edit: To create a custom overlay network, you will need to use the docker network create command, and then use the --network option of docker service create, for example:
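(network and image names below are placeholders)

docker network create --driver overlay es-net
docker service create --name elasticsearch --network es-net --replicas 3 my-custom-es-image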
This answer is mainly based on the Swarm mode networking documentation.