Cluster Docker Swarm Hosts

This is probably a simple question, but I am wondering how others do this.
I have a Docker Swarm Mode cluster with 3 managers. Let's say they have the IPs
192.168.1.10
192.168.1.20
192.168.1.30
I can access my web container through any of these IPs via the internal routing mesh. But in case the host whose IP I'm using goes down, I obviously won't get routed to the container any more. So I need some kind of single DNS record or something that points me to an active address in my cluster.
How do you do this? Should I add all three IPs as A records to the DNS? Or put a load balancer / reverse proxy in front of the cluster?
I suppose there is more than one solution; it would be nice to hear how others solved this :-)
Thanks a lot

Assuming your web server was started something like this:
docker service create --replicas=1 -p 80:80 nginx:alpine
The cluster will attempt to reschedule that one replica should it go down for any reason. With only 1 replica, you may experience some downtime between the old task going away and the new task coming up somewhere else.
Part of the design goal of the routing mesh feature was indeed to simplify load-balancing configuration. Because all members of the cluster listen on port 80 in my example, your load balancer can simply be configured to listen on all three IPs. Assuming the load balancer can properly detect the one node going down, the task will get rescheduled somewhere in the cluster, and either of the two remaining nodes will be able to route traffic via the routing mesh.
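For example, a minimal HAProxy sketch of such a front load balancer (the section names and the health-check path are illustrative assumptions; the server lines use the IPs from the question):
frontend web
    bind *:80
    mode http
    default_backend swarm_nodes

backend swarm_nodes
    mode http
    balance roundrobin
    # active health checks take a dead node out of rotation;
    # the routing mesh on the surviving nodes still reaches the task
    option httpchk GET /
    server mgr1 192.168.1.10:80 check
    server mgr2 192.168.1.20:80 check
    server mgr3 192.168.1.30:80 check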

Related

Tweak load balancer for docker swarm mode

I want a lead on the problem below.
My understanding:
Docker Swarm incorporates an ingress network and a DNS server that resolves services by their names. It also incorporates built-in load balancers on every node in the cluster.
We can hit any service running on the different nodes participating in Docker Swarm mode using any machine's IP address. If a machine does not host the service, the load balancer will route the request to a different machine that does.
As a best practice, we can put a load balancer container (NGINX/HAProxy) in front as a reverse proxy to route requests on the basis of some predefined algorithm (round-robin, hash, IP hash, least connections, least bandwidth, etc.).
Problem statement:
I want to make a cluster of two or three different machines where I will deploy all the technical services that are required: a mini QA environment.
As a service is identified by its name, I cannot create another service with the same name. Being a developer, I want to have a service up and running on my localhost, which is also part of the Docker Swarm cluster. Obviously, I cannot give it the same name, so let's say I name it myIP_serviceName. The DNS entry which Docker Swarm keeps will then be based on this name.
I want a mechanism where, if I make a call to any service using my IP address as the host, the load balancer looks for a service registered in DNS as myIP_serviceName. If it finds a service with such a name, the call should be routed to it; if it doesn't, the call should follow the regular path. This should hold true for every consecutive request that is part of a round-trip journey.
I have not explored Kubernetes yet; please suggest whether Kubernetes could be used to achieve this goal more elegantly.
Please correct my understanding if I am wrong and do provide valuable suggestions.
HAProxy have written about HAProxy on Docker Swarm: Load Balancing and DNS Service Discovery; maybe this will point you in the right direction.
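One common pattern along those lines is to run HAProxy on the same overlay network as your services and let it discover tasks through Swarm's built-in DNS. A minimal sketch, assuming a service named my_service reachable on that network (the slot count and timeouts are placeholders):
resolvers docker
    # Docker's embedded DNS, available inside the overlay network
    nameserver dns1 127.0.0.11:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry   1s
    hold valid      10s

backend app
    mode http
    balance roundrobin
    # tasks.<service> resolves to the IPs of all running tasks of the service
    server-template srv 5 tasks.my_service:80 check resolvers docker init-addr none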

Docker swarm and cloudflare dns

I have a Docker Swarm set up in DigitalOcean.
For now there's only 1 master node, but more will be added soon.
I use Cloudflare DNS for this one, and provided the master node's IP as the A record.
It does work; I am not really sure that this is the correct way, though.
Furthermore, I am wondering which IP I should provide to Cloudflare when I have multiple master nodes.
Any advice on this will be much appreciated, thanks.
You can use any IP you want and Swarm will do the routing and load balancing for you. Because of the routing mesh, your service is automatically published on every node that is part of the swarm network.
As your services evolve you may need more custom routing rules. In that case you can introduce a new layer of load balancing done by nginx, HAProxy, Traefik or any similar tool.
The simplest setup after the default swarm routing is to use nginx as a load balancer.
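A minimal nginx sketch of that setup (the node IPs are placeholders; open-source nginx only does passive failure detection via max_fails/fail_timeout, so a dead node is skipped after a few failed requests):
upstream swarm_nodes {
    # every swarm node publishes the service port via the routing mesh
    server 10.0.0.1:80 max_fails=2 fail_timeout=10s;
    server 10.0.0.2:80 max_fails=2 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://swarm_nodes;
        proxy_set_header Host $host;
    }
}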

Rancher behind haproxy

I have a small cluster managed with Rancher. There are only two workers: node1 and node2.
I can add a stack1 and add a load balancer for this stack (or a global one), and it works fine. But I have a problem with DNS.
I can point stack1.domain.com to node1.domain.com, for example. My load balancer is running on node1 (or even on all the nodes of my cluster), so it works.
But if one day I need to shut down node1, I have to quickly repoint the DNS for stack1.domain.com to node2.domain.com.
Not a good idea.
My first thought was to use a small HAProxy server in front of my Rancher cluster.
So I point stack1.domain.com to haproxy.domain.com and then HAProxy backends it to node1 and node2.
But it does not work.
I could put something like this:
frontend http
    bind *:80
    mode http
    acl stack1 hdr(host) -i stack1.domain.com
    use_backend bck_s1 if stack1

backend bck_s1
    mode http
    balance roundrobin
    server n1 node1.domain.com:80 check
    server n2 node2.domain.com:80 check
It could probably work. But if I need to add a stack2 that listens on port 80 as well, I cannot use this scheme.
I could add a bck_s2, but it would point to the same node1/node2, so how would Rancher know whether I want stack1 or stack2?
It would be possible to solve this using different ports, but that does not seem like a good idea. Certainly I could listen for stack1 on port 80 and stack2 on 8080, but with stack3, stack4, ... it becomes too complex.
I had an idea to add a path to the backend servers, like this:
backend bck_s1
    mode http
    balance roundrobin
    server n1 node1.domain.com:80/s1 check
    server n2 node2.domain.com:80/s1 check
In this case I could add a load balancer on the Rancher side based on the rules /s1, /s2, etc.
But it seems that this is not possible with HAProxy. Am I right?
So, the questions:
1) Is it possible to achieve this with HAProxy, and how?
2) Are there other solutions that I could use?
Instead of using specific entries per stack in HAProxy, you could configure a wildcard entry for the subdomains, point it at both nodes, and configure a health check on the backend. That way, when you take down node1, HAProxy can detect it and stop directing traffic to that node. Things would be more dynamic this way on the HAProxy side and you wouldn't need to make DNS changes.
References:
- Wildcard in subdomain for ACL in HAPROXY
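A rough sketch of that idea (the health-check path is an assumption; the Host header is passed through unchanged, so the Rancher load balancer on each node can still decide between stack1, stack2, and so on):
frontend http
    bind *:80
    mode http
    # wildcard ACL: match any stack subdomain
    acl any_stack hdr_end(host) -i .domain.com
    use_backend rancher_nodes if any_stack

backend rancher_nodes
    mode http
    balance roundrobin
    # take a node out of rotation when it stops answering
    option httpchk GET /
    server n1 node1.domain.com:80 check
    server n2 node2.domain.com:80 check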

Docker swarm prevent node from participating in ingress network

Quite possibly a very trivial question, but I can't find anything in the documentation about a feature like this. As we know from the routing mesh documentation:
All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.
However, I do not wish some nodes to participate in the routing mesh, but I still want them to participate in hosting the service.
The configuration I'm trying to achieve looks a bit like this:
I have a single service, hello-world, with three instances, one on each node.
I would like, in this example, only node-1 and node-2 to participate in exposing the ingress network. However, when I visit 10.0.0.3 (the third node), it still exposes ports 80 and 443, since it still has to be on the ingress network in order to run the hello-world container, and I would like this not to be the case.
In essence, I'd like to be able to run containers for a service that publishes ports 80 & 443 on 10.0.0.3 without being able to access them by visiting 10.0.0.3 in a web browser. Is there any way to configure this? Even if there's no container running on the node, it will still forward traffic to a container that is running.
Thank you!
The short answer to your specific question is no, there is no supported way to selectively enable/disable the ingress network on specific nodes for specific overlay networks.
But based on what you're asking to do, the expected model for using only specific nodes for incoming traffic is to control which nodes receive the traffic, not to shut off ports on specific nodes.
In a typical 6-node swarm you'd separate out your managers to be protected in a different subnet from the DMZ (e.g. a subnet behind the workers). You'd use placement constraints to ensure your app workloads are only assigned to worker nodes, and those nodes are the only ones in the VLAN/security group/etc. that is reachable by user/client traffic.
Most production designs for Swarm recommend protecting your managers (which manage the orchestration and scheduling of containers, store secrets, etc.) from external traffic.
Why not put your proxies on the workers in a client-accessible network, and make those nodes the only ones in the DMZ / behind the external load balancer?
Note that if you only allow firewall/LB access to some nodes (e.g. just 3 workers), then the other nodes that don't receive external incoming traffic are effectively not using their ingress networks, which achieves your desired result. The node that receives the external connection uses the service VIP to route the traffic directly to a node that runs the published container port.
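A sketch of the placement-constraint part (the node names, the label, and the image are placeholder assumptions):
# label the worker nodes that should run the workload and receive traffic
docker node update --label-add tier=app worker1
docker node update --label-add tier=app worker2
docker node update --label-add tier=app worker3

# constrain the service to those labelled nodes
docker service create --name hello-world \
  --constraint 'node.labels.tier == app' \
  --replicas 3 \
  -p 80:80 \
  nginx:alpine
Combined with a firewall or external load balancer that only exposes those workers, the remaining nodes never receive client traffic even though their published ports are technically open.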

DNS failover with Docker Swarm 1.12

I want to set up failover for a web service I've written.
Everywhere I read that Docker Swarm 1.12 does automatic failover, but I think only for failed containers.
How should I configure public DNS for my service?
How does failover work if a host is down?
With normal DNS round robin over the IPs of the nodes it won't work: every nth request will fail when one of the n servers is down. The Docker routing mesh doesn't help if one host is down. Or am I missing something here?
FYI: currently I'm not using Docker Swarm, but I'm planning to do so.
You have a couple of options:
If you are using a cloud provider like Amazon AWS, you can use an ELB or whatever load balancer your cloud provider offers; they usually support TCP mode and health checks. Then you point your DNS at the load balancer. This way, whenever a node fails it is removed from the load balancer via the health checks and comes back once it is healthy again.
You can use your own load balancer. HAProxy and nginx support TCP mode and you can set health-check rules: whenever a node fails it is removed from the load balancer via the health checks and comes back once it is healthy again. Make sure high availability is provided via tools like keepalived so the load balancer won't itself become a single point of failure.
Please note this applies to TCP traffic, as many load balancers don't support UDP traffic.
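A minimal keepalived sketch for that last point (the interface name, priority, and the virtual IP are placeholder assumptions; the public DNS record points at the virtual IP, which floats between the two load balancer hosts):
vrrp_instance VI_1 {
    state MASTER            # the standby host uses "state BACKUP" and a lower priority
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100
    }
}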
