I am trying to implement sticky sessions with HAProxy.
I have a HAProxy instance that routes to two different Node.js servers, each running socket.io. I am connecting to these socket servers (via the HAProxy server) using an iOS app (https://github.com/pkyeck/socket.IO-objc).
Unlike when using a web browser, the sticky sessions do not work: it seems the client is not handling the cookie properly, so the HAProxy server just routes each request wherever it likes. Below is my HAProxy config (I have removed the IP addresses):
listen webfarm xxx.xxx.xxx.xxx:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Haproxy\ Statistics
    stats auth haproxy:stats
    balance roundrobin
    # replace XXXX with customer site name
    cookie SERVERID insert indirect nocache
    option httpclose
    option forwardfor
    # replace with web node private ip
    server web01 yyy.yyy.yyy.yyy:8000 cookie server1 weight 1 maxconn 1024 check
    # replace with web node private ip
    server web02 zzz.zzz.zzz.zzz:8000 cookie server2 weight 1 maxconn 1024 check
This causes a problem with the socket.io handshake: the initial handshake is routed to server1, but subsequent heartbeats from the client go to server2. server2 then rejects the client because, as far as it is concerned, the socket session ID is invalid, when really all requests from the client should go to the same server.
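If the client really never sends the cookie back, cookie-based stickiness can't work at all; a cookie-free fallback (a sketch, untested with this setup) would be to stick on the client's source address instead, replacing the balance line in the config above:

    balance source          # pick the server from a hash of the client's source IP
    hash-type consistent    # optional: keeps most clients on the same server when one node drops out

The caveat is that all clients behind a single NAT would land on the same server.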
Update the HAProxy config file /etc/haproxy/haproxy.cfg as follows:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers
    option forwardfor

backend servers
    cookie SRVNAME insert
    balance leastconn
    option forwardfor
    server node1 127.0.0.1:3001 cookie node1 check
    server node2 127.0.0.1:3002 cookie node2 check
    server node3 127.0.0.1:3003 cookie node3 check
    server node4 127.0.0.1:3004 cookie node4 check
    server node5 127.0.0.1:3005 cookie node5 check
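To check that the cookie is actually being set and replayed, a quick test with curl (a sketch; assumes HAProxy is reachable on localhost):

    # First request: HAProxy picks a node and returns a Set-Cookie: SRVNAME=nodeX header
    curl -s -c cookies.txt -o /dev/null http://localhost/
    grep SRVNAME cookies.txt

    # Replay the stored cookie; every request carrying it should hit the same node
    curl -s -b cookies.txt http://localhost/

If the iOS client does not persist and resend the cookie the same way, cookie-based stickiness cannot work, which matches the behaviour described in the question.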
I'm trying to put an HAProxy load balancer in front of my RabbitMQ cluster (which is set up with nodes in separate Docker containers). I cannot find many examples of an HAProxy config for this setup:
global
    debug

defaults
    log global
    mode tcp
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend main
    bind *:8089
    default_backend app

backend app
    balance roundrobin
    mode http
    server rabbit-1 172.18.0.2:8084
    server rabbit-2 172.18.0.3:8085
    server rabbit-3 172.18.0.4:8086
In this example, what should I put in place of the IP addresses of the Docker containers?
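If the containers are attached to the same user-defined Docker network (for example via docker-compose), HAProxy can address them by service name instead of a hard-coded IP, since Docker's embedded DNS resolves service names. A sketch, assuming the services are named rabbit-1/rabbit-2/rabbit-3 and the brokers listen on the default AMQP port 5672 inside the network (note that AMQP is not HTTP, so the backend should stay in mode tcp like the defaults, rather than mode http as above):

    backend app
        mode tcp
        balance roundrobin
        server rabbit-1 rabbit-1:5672 check
        server rabbit-2 rabbit-2:5672 check
        server rabbit-3 rabbit-3:5672 check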
I am using HAProxy to route the domains and subdomains, and it is deployed on port 80. I want all the domains to be served over HTTPS using an SSL certificate.
global
    log xx.xx.90.28 local0
    log xx.xx.90.28 local1 notice
    maxconn 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option redispatch
    option forwardfor
    option http-server-close
    retries 3
    timeout connect 5000
    timeout client 10000
    timeout server 10000

frontend balancer
    bind *:80
    mode http
    stats enable
    stats uri /stats
    stats refresh 15s
    stats show-node
    stats auth admin:admin
    acl domain hdr_dom(host) -i www.example.com
    acl subdomain hdr_dom(host) -i app.example.com
    acl subdomain1 hdr_dom(host) -i example.com
    use_backend go_app_1 if domain
    use_backend go_app_2 if subdomain
    use_backend go_app_3 if subdomain1

backend go_app_1
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8081 check

backend go_app_2
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8082 check

backend go_app_3
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8081 check
And here is the Dockerfile:
FROM haproxy:2.1
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Now I want to use Let's Encrypt to secure these URLs. Please guide me on how I can do this.
I think we need to set up Let's Encrypt locally first, then inject the resulting certificates into the container via a volume mount.
Example command from Docker Hub:
docker run -d --name my-running-haproxy -v /path/to/etc/haproxy:/usr/local/etc/haproxy:ro haproxy:1.7
Something like below:
-v /my/letsencrypt/setting/:/etc/ssl/
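One detail worth knowing: HAProxy expects the certificate chain and the private key concatenated into a single PEM file, which is not how certbot lays out its output by default. A sketch, assuming certbot's standard directory layout and example.com as the domain:

    # Obtain the certificate on the host (standalone mode needs port 80 free)
    certbot certonly --standalone -d example.com

    # Concatenate chain + key into the single PEM file HAProxy expects
    cat /etc/letsencrypt/live/example.com/fullchain.pem \
        /etc/letsencrypt/live/example.com/privkey.pem \
        > /my/letsencrypt/setting/example.com.pem

With the volume mount above, the file appears in the container under /etc/ssl, so the frontend would bind it with something like:

    bind *:443 ssl crt /etc/ssl/example.com.pem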
I'm expecting the following config to receive HTTPS requests, do the SSL offloading, and send plain HTTP requests to my backends; however, with HTTPS I get "503 Service Unavailable".
- All ACLs work correctly over HTTP, and the stats page shows the backends as online.
- The stats page works correctly over HTTPS.
- Everything runs from a docker-compose file, and Docker is resolving the service names to internal IPs correctly.
Perhaps I'm missing something obvious? Quite new to attempting this so any help is appreciated.
global
    tune.ssl.default-dh-param 2048

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

backend certbot
    option httpchk GET /
    default-server init-addr libc,none
    server certbot_server certbot check port 80

backend client
    option httpchk HEAD /
    server client_server client check port 80

backend api
    option httpchk OPTIONS /api/healthcheck
    server api_server api check port 80

frontend app
    bind *:80
    bind *:443 ssl crt /certs/productpedia.co.uk.pem
    use_backend certbot if { path_beg -i /.well-known/acme-challenge/ }
    use_backend api if { path_beg /api }
    default_backend client
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if LOCALHOST
EDIT:
Attached is a Wireshark trace of the 503 request. It looks like the server is resetting the connection, but I'm not sure where I can go from here or what would be causing this.
After a full day of debugging, it looks like simply specifying port 80 on the server lines did the trick. I had expected the default port to be 80, but when a server line has no explicit port, HAProxy forwards to the same port the client connected on, so the HTTPS requests were being passed to port 443 on the backends. After this change I could also get rid of the "port 80" after "check", which was the original hint that something was off there.
backend certbot
    option httpchk GET /
    default-server init-addr libc,none
    server certbot_server certbot:80 check

backend client
    option httpchk HEAD /
    server client_server client:80 check

backend api
    option httpchk OPTIONS /api/healthcheck
    server api_server api:80 check
We are using haproxy to switch between a local MQTT broker and a cloud broker based on availability (with preference to the local server). haproxy.cfg looks something like this:
global
    log 127.0.0.1 local1
    maxconn 1000
    daemon
    debug
    #quiet
    tune.bufsize 1024576
    stats socket /var/run/haproxy.sock mode 600 level admin

defaults
    log global
    mode tcp
    option tcplog
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

# Listen to all MQTT requests (port 1883)
listen mqtt
    bind *:1883
    mode tcp
    balance first    # Connect to first available
    timeout client 3h
    timeout server 3h
    option clitcpka
    option srvtcpka
    # MQTT server 1 - local wifi
    server wifi_broker localserver.local:1883 init-addr libc,last,none check inter 3s rise 5 fall 2 maxconn 1000 on-marked-up shutdown-backup-sessions on-marked-down shutdown-sessions
    # MQTT server 2 - cloud
    server aws_iot xxxxx.amazonaws.com:8883 backup check ssl verify none ca-file ./root-CA.crt crt ./cert.pem inter 5s rise 3 fall 2

listen stats
    bind :9000
    mode http
    stats enable          # Enable stats page
    stats hide-version    # Hide HAProxy version
    stats realm Haproxy\ Statistics    # Title text for popup window
    stats uri /haproxy_stats    # Stats URI
Everything works fine if the local broker is available when haproxy starts up. However, if the wifi connection to the local machine is down when haproxy starts up, init-addr none still allows it to start using the backup server (aws_iot). The local server is marked as "Down for Maintenance" and no more health checks are performed. Even after the network is up and running, haproxy is unaware of it and does not switch back from the cloud server.
Is there any way to make it treat an unresolved domain name the same as a normal "down" condition?
One alternative I see right now is to have a script polling the domain name in the background and sending an "enable server" command to the haproxy control socket once it is up. This seems overly roundabout for something that should be really simple!
Update:
Running the command echo "enable server mqtt/wifi_server" | socat /var/run/haproxy.sock stdio doesn't switch the backends after the local connection is up and running. haproxy just never switches back to the local server with anything short of restarting it.
Update 2:
Changed init-addr none to init-addr libc,last,none
You are using "init-addr none", so the server starts without any valid IP address and is put in a down state. In addition, your current config only allows HAProxy to resolve hostnames at startup, as mentioned here.
So to make HAProxy resolve localserver.local after startup, obtain the right IP, and send health checks, you need to configure a resolvers section in HAProxy.
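For example, a minimal resolvers sketch (the nameserver address is an assumption; it must be a DNS server that can actually resolve localserver.local, which a plain mDNS name may not be resolvable through):

    resolvers localdns
        nameserver dns1 192.168.1.1:53
        resolve_retries 3
        timeout resolve 1s
        timeout retry 1s
        hold valid 10s

    listen mqtt
        ...
        server wifi_broker localserver.local:1883 resolvers localdns init-addr libc,last,none check inter 3s rise 5 fall 2

With "resolvers localdns" on the server line, HAProxy re-resolves the name at runtime, so a server that could not be resolved at startup can come back up once DNS starts answering.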
I'm using HAProxy 1.6.4 and want to enable the stats. (/haproxy?stats)
Here is my cfg:
global
    log 127.0.0.1 local2
    daemon
    maxconn 256

defaults
    log global
    timeout connect 5000
    timeout client 10000
    timeout server 10000

frontend http-in
    bind *:8080
    default_backend testb

backend testb
    balance roundrobin
    server s1 123.456.789.0:443 maxconn 32
    server s2 123.456.789.1:443 maxconn 32

listen statistics
    bind *:8080
    mode http
    stats enable
If I run the statistics on a port other than 8080 it works, but how can I run them on the same port as my frontend (8080), which is running in the default mode, tcp?
You can do it by redirecting to yourself and using an access list, like this:
global
    log 127.0.0.1 local2
    daemon
    maxconn 256

defaults
    log global
    mode http    # required: the Host-header ACLs below only work in HTTP mode
    timeout connect 5000
    timeout client 10000
    timeout server 10000

listen stats :1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth myUser:myPassword

frontend http-in
    bind *:8080
    acl is_www hdr_end(host) -i www.mysite.com
    acl is_stat hdr_end(host) -i stat.mysite.com
    use_backend srv_www if is_www
    use_backend srv_stat if is_stat

backend srv_www
    balance roundrobin
    server s1 123.456.789.0:443 maxconn 32
    server s2 123.456.789.1:443 maxconn 32

backend srv_stat
    server Local 127.0.0.1:1936
When you reach the server with the www hostname, it takes you to the web server.
But with the stat hostname, it forwards you from your input port 8080 to 1936, where the stats listener is running.
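You can verify the Host-based routing from the command line before touching DNS (assuming HAProxy runs on localhost):

    # Routed to srv_www
    curl -H "Host: www.mysite.com" http://localhost:8080/

    # Routed to srv_stat, which proxies through to the stats listener on 1936
    curl -H "Host: stat.mysite.com" http://localhost:8080/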
This is just an educated guess. You can't serve the stats page in tcp mode because HAProxy is proxying at layer 4: in this mode it only knows the IPs and ports of incoming packets and routes them according to the defined rules.
In http mode (layer 7) it has more information to work with, such as the HTTP headers where the path is available, and it uses that to know when to serve the /haproxy?stats page.
If you are happy to serve the stats on a path of your choosing, it is really easy.
This should work:
global
    log 127.0.0.1 local2
    daemon
    maxconn 256

defaults
    log global
    mode http    # stats pages are only served in HTTP mode
    timeout connect 5000
    timeout client 10000
    timeout server 10000

frontend http-in
    bind *:8080
    stats enable
    stats uri /stats
    default_backend testb

backend testb
    balance roundrobin
    server s1 123.456.789.0:443 maxconn 32
    server s2 123.456.789.1:443 maxconn 32
You can then access the HAProxy stats at
http://<hostname>:8080/stats
(tested on HAProxy 2.5.5)