haproxy suspends health checks if DNS resolution fails - mqtt

We are using haproxy to switch between a local MQTT broker and a cloud broker based on availability (with preference to the local server). haproxy.cfg looks something like this:
global
    log 127.0.0.1 local1
    maxconn 1000
    daemon
    debug
    #quiet
    tune.bufsize 1024576
    stats socket /var/run/haproxy.sock mode 600 level admin

defaults
    log global
    mode tcp
    option tcplog
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

# Listen for all MQTT requests (port 1883)
listen mqtt
    bind *:1883
    mode tcp
    balance first   # Connect to first available
    timeout client 3h
    timeout server 3h
    option clitcpka
    option srvtcpka
    # MQTT server 1 - local wifi
    server wifi_broker localserver.local:1883 init-addr libc,last,none check inter 3s rise 5 fall 2 maxconn 1000 on-marked-up shutdown-backup-sessions on-marked-down shutdown-sessions
    # MQTT server 2 - cloud
    server aws_iot xxxxx.amazonaws.com:8883 backup ssl verify none ca-file ./root-CA.crt crt ./cert.pem check inter 5s rise 3 fall 2

listen stats
    bind :9000
    mode http
    stats enable          # Enable stats page
    stats hide-version    # Hide HAProxy version
    stats realm Haproxy\ Statistics   # Title text for popup window
    stats uri /haproxy_stats          # Stats URI
Everything works fine if the local broker is available when haproxy starts up. However, if the wifi connection to the local machine is down when haproxy starts, init-addr none still lets it start and use the backup server (aws_iot), but the local server is marked "Down for Maintenance" and no further health checks are performed. Even after the network comes back up, haproxy is unaware of it and does not switch back from the cloud server.
Is there any way to make it treat an unresolved domain name the same as a normal "down" condition?
One alternative I see right now is to have a script polling the domain name in the background and sending an "enable server" command to the haproxy control socket once it resolves. This seems overly roundabout for something that should be really simple!
Update:
Running the command echo "enable server mqtt/wifi_broker" | socat /var/run/haproxy.sock stdio doesn't switch the backends after the local connection is up and running. haproxy just never switches back to the local server with anything short of restarting it.
Update 2:
Changed init-addr none to init-addr libc,last,none

You are using init-addr none, so the server starts without any valid IP address when it is in a down state. In addition, your current config lets HAProxy resolve hostnames only at startup, as mentioned here.
So to make HAProxy resolve localserver.local after startup, pick up the right IP, and resume health checks, you need to configure a resolvers section in HAProxy.
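A minimal sketch of such a resolvers section (the nameserver address is an assumption; point it at whatever DNS server can answer for localserver.local — note that .local names are normally served by mDNS, so a regular resolver such as dnsmasq may need to be configured to answer for that name):

```
resolvers mydns
    nameserver dns1 127.0.0.1:53   # assumed: a DNS server able to resolve localserver.local
    resolve_retries 3
    timeout retry 1s
    hold valid 10s

# then reference it on the server line:
# server wifi_broker localserver.local:1883 resolvers mydns init-addr libc,last,none check inter 3s rise 5 fall 2
```

With resolvers attached, HAProxy re-queries DNS at runtime, so a name that fails to resolve behaves like a down server and recovers once resolution succeeds.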

Related

HAProxy config for TCP load balancing in docker container

I'm trying to put an HAProxy load balancer in front of my RabbitMQ cluster (which is set up with nodes in separate Docker containers). I cannot find many examples of an haproxy config for this setup.
global
    debug

defaults
    log global
    mode tcp
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend main
    bind *:8089
    default_backend app

backend app
    balance roundrobin
    mode http
    server rabbit-1 172.18.0.2:8084
    server rabbit-2 172.18.0.3:8085
    server rabbit-3 172.18.0.4:8086
In this example, what should I put in place of the IP addresses of the docker containers?
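One sketch of an approach (the container names rabbitmq-1..3 are assumptions, not from the question): if the containers are attached to the same user-defined Docker network, Docker's embedded DNS resolves container names, so the backend can reference names instead of hard-coded IPs; 5672 is RabbitMQ's default AMQP port:

```
backend app
    balance roundrobin
    mode tcp
    server rabbit-1 rabbitmq-1:5672 check
    server rabbit-2 rabbitmq-2:5672 check
    server rabbit-3 rabbitmq-3:5672 check
```

This only works on a user-defined network (e.g. one created with docker network create), not on the default bridge, where container-name DNS is unavailable.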

Port Forwarding for compute engine google cloud platform

I'm trying to open ports TCP 28016 and UDP 28015 for a game server on my Compute Engine VM running Microsoft Windows Server 2016.
I've tried opening the ports inside the server over RDP, going into the Windows Firewall settings and creating new inbound rules for both TCP 28016 and UDP 28015.
I've also added firewall rules for both ports in my Cloud Platform Firewall Rules.
When I run my game server application, netstat doesn't show either port as being used or listening; they don't even show up. What did I do wrong?
Edit: the ports now show up in netstat -a -b, but not as LISTENING.
If it doesn't show as LISTENING, it's not a firewall or "port forwarding" issue; rather, the application either isn't running, or is running but isn't configured to listen for connections on that port.
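A quick way to verify this on the VM itself, using standard Windows tools (the port number is taken from the question):

```
rem Does anything show the port in a LISTENING state?
netstat -ano | findstr :28016

rem PowerShell alternative, which also attempts a TCP connect:
Test-NetConnection -ComputerName localhost -Port 28016
```

Note that Test-NetConnection only tests TCP; for the UDP port you can only confirm that the process has the socket bound (e.g. via netstat -ano -p udp).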

Stopping the IP connection between a client and the server for 30 seconds and re-establishing the link after that period

Here is my configuration:
On the server, a DHCP server hands out IP addresses to clients (connected on eth1) so that the clients can reach the internet (via eth0).
For a particular operational need, I would like to cut the IP connection between a client and the server for 30 seconds and bring the link back up after that period. So far I have tried blacklisting the client IP with iptables:
command 1: sudo iptables -I INPUT -s '.$ip.' -j DROP
then removing the rule to resume the link:
command 2: sudo iptables -D INPUT -s '.$ip.' -j DROP
Both commands are wrapped in a PHP program stored on an Ubuntu server and launched from a Windows workstation. Both commands work perfectly but, unfortunately, I never get the internet connection back. From a Windows command prompt I can monitor the behaviour of the line with ping. [screenshots: results of command 1, "stop the network", and command 2, "start the network"]
Here is my question: can someone tell me how to bring the local IP socket back up so that the TCP connection is re-established after command 2?
Another way of doing this might be to stop using iptables and instead dynamically shorten the lease time of the client's IP address in the DHCP service: stopping the connection by forcing the lease to expire early, then re-establishing it by issuing an HTTP request, which opens a new TCP connection under a new lease.
Can someone tell me how to overcome this?
Thanks very much.
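One possibility worth checking (an assumption on my part, not established from the question): the kernel's connection-tracking table may still hold stale state for the client even after the DROP rule is removed. The conntrack tool (from the conntrack-tools package) can flush those entries:

```
# 192.0.2.10 is a placeholder for the client's IP address; requires root
sudo conntrack -D -s 192.0.2.10
```

If ping (ICMP) also fails after command 2, the cause is likely elsewhere, since ICMP echo does not depend on an established TCP connection.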

Port 5432 is closed on Google Compute Engine

Currently I need to establish a remote connection to my server (Ubuntu 16.04 LTS).
I installed PostgreSQL and made the following settings:
/etc/postgresql/9.5/main/postgresql.conf:
listen_addresses = '*'
/etc/postgresql/9.5/main/pg_hba.conf:
host all all 0.0.0.0/0 md5
If I run netstat -anpt | grep LISTEN, it shows the port is listening.
But when I try to establish the connection, I get an error, and a port-checking tool tells me that the port is closed.
Allowing connections in the PostgreSQL server configuration alone is not enough. You also need to add a firewall rule in Google Compute Engine:
Firewall rules control incoming or outgoing traffic to an instance. By default, incoming traffic from outside your network is blocked.
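For example, with the gcloud CLI (the rule name is a placeholder, and 0.0.0.0/0 opens the port to everyone, so narrow the source range if you can):

```
gcloud compute firewall-rules create allow-postgres \
    --allow=tcp:5432 \
    --source-ranges=0.0.0.0/0
```

After the rule is created, the external port check should report 5432 as open, assuming PostgreSQL is listening on all interfaces as configured above.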

HAProxy Sticky Sessions Node.js iOS Socket.io

I am trying to implement sticky sessions with HAProxy.
I have a HAProxy instance that routes to two different Node.js servers, each running socket.io. I am connecting to these socket servers (via the HAProxy server) using an iOS app (https://github.com/pkyeck/socket.IO-objc).
Unlike when using a web browser, the sticky sessions do not work; it is as if the client is not handling the cookie properly, so the HAProxy server just routes each request wherever it likes. Below is my HAProxy config (IP addresses removed):
listen webfarm xxx.xxx.xxx.xxx:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Haproxy\ Statistics
    stats auth haproxy:stats
    balance roundrobin
    #replace XXXX with customer site name
    cookie SERVERID insert indirect nocache
    option httpclose
    option forwardfor
    #replace with web node private ip
    server web01 yyy.yyy.yyy.yyy:8000 cookie server1 weight 1 maxconn 1024 check
    #replace with web node private ip
    server web02 zzz.zzz.zzz.zzz:8000 cookie server2 weight 1 maxconn 1024 check
This causes a problem with the socket.io handshake: the initial handshake is routed to server1, then subsequent heartbeats from the client go to server2, which rejects the client because the socket session ID is invalid as far as server2 is concerned, when really all requests from one client should go to the same server.
Update the haproxy config file /etc/haproxy/haproxy.cfg as follows:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers
    option forwardfor

backend servers
    cookie SRVNAME insert
    balance leastconn
    option forwardfor
    server node1 127.0.0.1:3001 cookie node1 check
    server node2 127.0.0.1:3002 cookie node2 check
    server node3 127.0.0.1:3003 cookie node3 check
    server node4 127.0.0.1:3004 cookie node4 check
    server node5 127.0.0.1:3005 cookie node5 check
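If the iOS client turns out not to send the SERVERID/SRVNAME cookie back at all, a commonly used fallback (an alternative to the cookie approach, not a fix from this thread) is source-IP stickiness, which needs no client cooperation:

```
backend servers
    balance source
    hash-type consistent   # keeps most clients pinned when the server set changes
    server node1 127.0.0.1:3001 check
    server node2 127.0.0.1:3002 check
```

The trade-off is that all clients behind the same NAT share one source IP and therefore land on the same server.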
