Rancher behind haproxy - docker

I have a small cluster managed with Rancher. There are only two workers: node1 and node2.
I can add a stack1, add a load balancer for this stack (or a global one), and it works fine. But I have a problem with DNS.
I can point stack1.domain.com to node1.domain.com, for example. My load balancer is running on node1 (or even on all the nodes of my cluster), so it works.
But if one day I need to shut down node1, I have to quickly repoint the DNS for stack1.domain.com to node2.domain.com.
Not a good idea.
My first thought was to put a small haproxy server in front of my Rancher cluster.
So I point stack1.domain.com to haproxy.domain.com, and haproxy then balances it across node1 and node2.
But this runs into a problem.
I could put something like this:
frontend http
    bind *:80
    mode http
    acl stack1 hdr(host) -i stack1.domain.com
    use_backend bck_s1 if stack1

backend bck_s1
    mode http
    balance roundrobin
    server n1 node1.domain.com:80 check
    server n2 node2.domain.com:80 check
It would probably work. But if I need to add a stack2 that also listens on port 80, I can't use this scheme.
I could add a bck_s2, but it would point to the same node1/node2. So how would Rancher know whether I want stack1 or stack2?
It would be possible to solve this with different ports, but that doesn't seem like a good idea. I could certainly have stack1 listen on port 80 and stack2 on 8080, but with a stack3, stack4, and so on it becomes too complex.
I had an idea to add a path to the backend, like this:
backend bck_s1
    mode http
    balance roundrobin
    server n1 node1.domain.com:80/s1 check
    server n2 node2.domain.com:80/s1 check
In that case I could set up a load balancer in Rancher with rules based on /s1, /s2, etc.
But it seems that this is not possible with haproxy. Am I right?
So, my questions:
1) Is it possible to do this with haproxy, and if so, how?
2) Are there other solutions I could use?

Instead of using specific entries in haproxy.domain.com, you could configure a wildcard entry that points to both nodes, along with a health check for the backend. That way, when you take down node1, HAProxy can detect it and stop directing traffic to that node. Things are more dynamic this way on the HAProxy side, and you don't need to make DNS changes.
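A minimal sketch of that idea, assuming all stacks live under *.domain.com (the ACL name and health check path are illustrative):

frontend http
    bind *:80
    mode http
    # send anything under *.domain.com to the Rancher nodes
    acl rancher_stacks hdr_end(host) -i .domain.com
    use_backend bck_rancher if rancher_stacks

backend bck_rancher
    mode http
    balance roundrobin
    option httpchk GET /
    # a node that fails its checks is taken out of rotation automatically
    server n1 node1.domain.com:80 check
    server n2 node2.domain.com:80 check

Since HAProxy in http mode forwards the Host header unchanged, the Rancher load balancer on the nodes can still route stack1.domain.com and stack2.domain.com to the right stacks.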
References:
- Wildcard in subdomain for ACL in HAPROXY

Related

Docker swarm and cloudflare dns

I have a Docker Swarm set up in DigitalOcean.
For now there is only one master node, but more will be added soon.
I use Cloudflare DNS for this one, and provided the master node's IP as the A record.
It does work, but I am not really sure this is the correct way.
Furthermore, I am wondering which IP I should provide to Cloudflare when there are multiple master nodes.
Any advice on this will be much appreciated, thanks.
You can use any node's IP you want and Swarm will do routing and load balancing for you. Because of mesh routing, your service is automatically published on every node that is part of the swarm network.
As your services evolve, you may need more custom routing rules. In that case you can introduce a new layer of load balancing with nginx, HAProxy, Traefik, or any similar tool.
The simplest setup after default swarm routing is to use nginx as the load balancer.
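A minimal sketch of that nginx layer, assuming two swarm nodes that publish the service on port 80 via the routing mesh; the hostnames are placeholders:

upstream swarm_nodes {
    # every swarm node exposes the service on port 80 through the routing mesh
    server node1.example.com:80 max_fails=2 fail_timeout=10s;
    server node2.example.com:80 max_fails=2 fail_timeout=10s;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://swarm_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Your DNS record then points at this nginx machine instead of at an individual swarm node.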

Two Nginx instances, listening to different ports, only one reachable by domain

I use docker-compose stacks to run things on my personal VPS. Right now, I have a stack that's composed of:
Nginx (exposed port 443)
Ghost (blogging)
MySQL (for Ghost)
Adminer (for MySQL, exposed port 8080)
I wanted to try out Matomo analytics software, but I didn't want to add that to my current stack until I was happy with it, so I decided to create a second docker-compose stack for just the Matomo analytics:
Nginx (exposed port 444)
Matomo
MariaDB (for Matomo)
Adminer (for MariaDB, exposed port 8081)
With both stacks running, I can access everything at its appropriate port, but only by IP address. If I try to use my domain, it can only connect to the first Nginx, the one exposing port 443. If I try https://www.example.com:444 in my browser, it isn't able to connect. If I try https://myip:444 in my browser, it connects to the second Nginx instance exposing port 444, warning me that the SSL certificate has issues (since I'm connecting to my IP, not my domain), and then lets me through.
I was wondering if anyone knew why this behavior was happening. I'm admittedly new to setting up VPSs, using Nginx to proxy to other hosted services, etc. If it turns out Nginx cannot be used this way, I'd love to hear recommendations on how else I could arrange this. (Do I have to run only one Nginx instance at a time and proxy through it to everything else? etc.)
Thank you!
I was able to fix this by troubleshooting my Cloudflare setup. I posted this question while waiting for my domain to switch from Cloudflare's name servers to my VPS's name servers. When that finished, I tested again and did get through at https://example.com:444. This proved it was Cloudflare blocking me.
I found this page which explained that the free Cloudflare plans only support a few ports, which does not include port 444. If I upgraded to a pro plan, I would have that option.
Therefore, I can conclude that the solution to my problem is to either upgrade my Cloudflare plan or merge the two docker-compose stacks so that I can accept requests for everything on just port 443.
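If you merge the stacks, a rough sketch of a single Nginx on port 443 could look like this, with one server block per hostname; the hostnames, certificate paths, and upstream service names are assumptions (Ghost listens on 2368 by default, the Matomo image on 80):

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        proxy_pass http://ghost:2368;   # Ghost container in the compose network
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name analytics.example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        proxy_pass http://matomo:80;    # Matomo container in the compose network
        proxy_set_header Host $host;
    }
}

This keeps all traffic on port 443, which the free Cloudflare plan does proxy.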

Run nginx in one container to proxy pass Gitlab in another container

I want to run multiple services such as GitLab and Racktables on the same host, with HTTPS enabled, in different containers. How can I achieve this?
You achieve this by running a reverse proxy (nginx or apache) that forwards traffic to the different containers based on different virtual hosts:
gitlab.foo.bar -> gitlab container
racktables.foo.bar -> racktables container
etc
The reverse proxy container maps ports 80 and 443 to the host. None of the other containers needs a port mapping, since all traffic goes through the reverse proxy.
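A hand-written nginx virtual host for one of those mappings might look roughly like this; the container name, internal port, and certificate paths are assumptions for illustration (the official GitLab image serves HTTP on port 80 internally):

server {
    listen 443 ssl;
    server_name gitlab.foo.bar;

    ssl_certificate     /etc/nginx/certs/gitlab.foo.bar.crt;
    ssl_certificate_key /etc/nginx/certs/gitlab.foo.bar.key;

    location / {
        # "gitlab" resolves to the GitLab container on the shared Docker network
        proxy_pass http://gitlab:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

A second, near-identical server block with server_name racktables.foo.bar and a different proxy_pass target would handle the Racktables container.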
I think the quickest way to get this working is to use jwilder/nginx-proxy. It's extremely newbie friendly, as it automates almost everything for you. You can also learn a lot by looking at the generated config files in the container. Even getting TLS to work is not that complicated, and you get set up with an A+ rating from ssllabs by default.
I've used this for my hobby projects for almost a year and it works great (with Let's Encrypt).
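Roughly, the nginx-proxy setup looks like this; the hostnames, image tags, and certificate path are placeholders:

# the proxy watches the Docker socket and regenerates its nginx config
# whenever containers start or stop
docker run -d -p 80:80 -p 443:443 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    -v /path/to/certs:/etc/nginx/certs:ro \
    jwilder/nginx-proxy

# any container started with VIRTUAL_HOST gets its own virtual host entry;
# VIRTUAL_PORT picks the internal port when the image exposes several
docker run -d -e VIRTUAL_HOST=gitlab.foo.bar -e VIRTUAL_PORT=80 gitlab/gitlab-ce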
You can of course also manually configure everything, but it's a lot of work with so many pitfalls.
The really bad way to do this is to run the reverse proxy on the host and map lots of ports from all the containers to the host. Please don't do that.

Cluster Docker Swarm Hosts

This is probably a simple question, but I am wondering how others do this.
I have a Docker Swarm mode cluster with 3 managers. Let's say they have the IPs
192.168.1.10
192.168.1.20
192.168.1.30
I can access my web container through any of these IPs, thanks to the internal routing mesh. But if the host whose IP I use to reach my web container goes down, I obviously won't get routed to the container any more. So I need some kind of single DNS record or something that points me to an active address in my cluster.
How do you do this? Should I add all three IPs as A records to the DNS? Or put a load balancer / reverse proxy in front of the cluster?
I suppose there is more than one solution; it would be nice to hear how others solved this :-)
Thanks a lot
Assuming your web server was started something like this:
docker service create --replicas=1 -p 80:80 nginx:alpine
The cluster will attempt to reschedule that one replica should it go down for any reason. With only one replica, you may experience some downtime between the old task going away and the new task coming up somewhere else.
Part of the design goal of the routing mesh feature was indeed to simplify load-balancing configuration. Because all members of the cluster listen on port 80 in my example, your load balancer can simply be configured with all three IPs. Assuming the load balancer can properly detect the one node going down, the task will get rescheduled somewhere in the cluster, and either of the two remaining nodes will be able to route traffic via the routing mesh.
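For example, a minimal HAProxy sketch that fronts the three managers from the question and relies on health checks to take a dead node out of rotation (the health check path is an assumption):

frontend web
    bind *:80
    mode http
    default_backend swarm_nodes

backend swarm_nodes
    mode http
    balance roundrobin
    option httpchk GET /
    # the routing mesh makes the service reachable on every node,
    # so any healthy node can take the traffic
    server mgr1 192.168.1.10:80 check
    server mgr2 192.168.1.20:80 check
    server mgr3 192.168.1.30:80 check

A single DNS record then points at the load balancer itself (or at a virtual IP shared by a redundant pair of load balancers).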

DNS failover with Docker Swarm 1.12

I want to setup a failover for a webservice I've written.
Everywhere I read that Docker Swarm 1.12 does automatic failover, but I think only for failed containers.
How should I configure public DNS for my service?
How does failover work if a host is down?
With normal DNS round robin and the IPs of the nodes it won't work: every nth request will fail for n servers. The Docker routing mesh doesn't help if one host is down. Or am I missing something here?
FYI: I'm not using Docker Swarm yet, but I'm planning to do so.
You've got a couple of options:
If you are using a cloud provider like AWS, you can use ELB or whatever load balancer your cloud provider offers. They usually support TCP mode and health checks. You then point your DNS to the load balancer. This way, whenever a node fails, the health checks remove it from the load balancer, and it comes back when it is healthy again.
You can run your own load balancer. HAProxy and nginx support TCP mode and let you set health check rules: whenever a node fails, the health checks remove it from the load balancer, and it comes back when it is healthy again. Make sure high availability is provided via tools like keepalived so the load balancer doesn't become a single point of failure.
Please note this applies to TCP traffic, as many load balancers don't support UDP traffic.
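For the keepalived part, a minimal sketch for two load balancer machines sharing a virtual IP; the interface name, password, and addresses are placeholders:

vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the second load balancer
    interface eth0
    virtual_router_id 51
    priority 100              # use a lower priority on the BACKUP node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.100         # the VIP your DNS record points to
    }
}

Your public DNS record points at the virtual IP; if the active load balancer dies, the backup takes over the address and no DNS change is needed.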
