Nginx load balancer & FreeRADIUS Docker - reply packet is from the Docker container

I'm doing a POC with an Nginx load balancer and a FreeRADIUS Docker container. When the request packet is sent directly from the proxy to the FreeRADIUS container, the reply is received by the proxy and all is well.
So the containers are working 100% as expected.
However, when the proxy sends the request to the FreeRADIUS container through the Nginx load balancer, FreeRADIUS gets the Access-Request packet, but the Access-Accept reply comes back from the container's IP.
So the proxy sends another request, and the FreeRADIUS container again replies from the container's IP.
The proxy sends the request to RADIUS:
Sent Accounting-Request Id 90 from xxx.xxx.xxx.xxx:55039 to xxx.xxx.xxx.xxx:1813 length 467
RADIUS replies:
Sent Accounting-Response Id 1 from 172.18.0.5:1813 to xxx.xxx.xxx.xxx:37741 length 25
The proxy never gets the response:
(29) No proxy response, giving up on request and marking it done
(29) ERROR: Failing proxied request for user "vlan1106/lab-1-1070-1106", due to lack of any response from home server 154.119.32.156 port 1813
The Nginx load balancer is configured in transparent mode, as I need to know the source IP the request is made from. When not in transparent mode it works, because the reply packet then has the correct IP in the header.
Is there anything that I'm missing here? I've been butting heads with this problem since Saturday.
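For context, the relevant part of the Nginx stream config looks roughly like this (a sketch with placeholder names and addresses, not the exact config):

stream {
    upstream radius_acct {
        server 172.18.0.5:1813;               # FreeRADIUS container (placeholder)
    }
    server {
        listen 1813 udp;
        proxy_bind $remote_addr transparent;  # keep the original client source IP
        proxy_responses 1;                    # one reply expected per RADIUS request
        proxy_pass radius_acct;
    }
}

If I'm reading the Nginx docs on IP transparency correctly, proxy_bind ... transparent also expects return traffic from the upstream to be routed back through the Nginx host (routing rules/iptables on the Docker host), and I'm not sure the Docker networking here satisfies that.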

Related

How do server programs work on Docker when *only* the listening port is mapped to the Docker host?

This is just a conceptual question that I have been thinking about recently.
Say I'm running an Nginx container on Docker on a host. Normally, for this to work, we have to map ports like 80 and 443 from the host to the container. This is because these are listening ports, and connections from the outside world to port 80 of the host should be forwarded to port 80 of the container. So far so good.
But also: port 80 is just the listening socket, right? The listening socket only accepts the connection; after this any communication done between a client and the Nginx server is supposedly done on a different socket with a random port number (on the server side). This is to allow multiple connections, and to keep the listening port free to establish more connections, etc. This is where my issue comes in.
Say I'm a client and I connect to this Nginx server. As far as I understand, I first send TCP packets to port 80 of the host that is hosting this Nginx Docker container. But during the establishment of the connection, the server changes its port to another number, say 45670. (Not sure how, but I am guessing the packets that are sent back suddenly mention this port, and our client will continue the rest of the exchange with this port number instead.)
But now as I send packets (e.g. HTTP requests) to the host on port 45670, how will the Nginx docker container see those packets?
I am struggling to understand how server processes can run on Docker with only one port exposed / published for mapping.
Thanks!
But also: port 80 is just the listening socket, right? The listening socket only accepts the connection; after this any communication done between a client and the Nginx server is supposedly done on a different socket with a random port number (on the server side).
Nope. When a connection is established, the client side is a random port number (usually) and the server side is the same port that the server listens on.
In TCP there aren't actually listening sockets - they're an operating system thing - and a connection is identified by the combination of both the port numbers and both the IP addresses. The client sends a SYN ("new connection please") from its port 49621 (for example) to port 80 on the server; the server sends a SYN/ACK ("okay") from its port 80 to port 49621 on the client.
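To see this concretely, here's a quick check (a sketch; it assumes an nginx container published with -p 80:80 and that ss is available inside the image):

# Run nginx with only port 80 published (example setup)
docker run -d --name web -p 80:80 nginx

# In one terminal, hold a raw TCP connection to port 80 open
nc localhost 80

# In another terminal, list connections on port 80 inside the container
docker exec web ss -tn 'sport = :80'
# ESTAB  0  0  172.17.0.2:80  172.17.0.1:49621
# The server side of the established connection is still port 80;
# only the client side uses a random high port.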

Why is a docker container only connecting using 0.0.0.0?

I have a docker-compose file containing two services:
a Python uWSGI app on port 5000 and
an Nginx reverse proxy on port 443 (with a self-signed certificate).
The latter is set up to proxy requests to the service running in the first container, which it does fine.
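Roughly, the compose file looks like the following sketch (service names, paths and the build context are illustrative, not the exact file):

version: "3.8"
services:
  app:                                  # Python uWSGI service
    build: ./app
    expose:
      - "5000"
  nginx:                                # reverse proxy with the self-signed certificate
    image: nginx
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - app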
The Python service has an endpoint that receives a POST request with a JSON payload (an object) containing a $schema attribute that allows the service to validate it:
"$schema": "https://localhost/schemas/schema.json"
The interesting part is that this schema has been set up on the service itself, which means that the attribute passed in - as the URL indicates - is being served by it. The problem happens when the service tries to validate a payload against a schema that it cannot GET (requesting the URL above).
My conclusions
I can understand why fetching from localhost fails, as there isn't anything listening on localhost:443 in that container:
jsonschema.exceptions.RefResolutionError: HTTPSConnectionPool(host='localhost', port=443):
Max retries exceeded with url: /schemas/schema.json
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f7c26d9f5b0>:
Failed to establish a new connection: [Errno 111] Connection refused'))
As an alternative, I tried to use the Nginx container's service name (https://nginx/schemas/schema.json), which actually finds the container, but fails SSL verification because of the self-signed certificate:
jsonschema.exceptions.RefResolutionError: HTTPSConnectionPool(host='nginx', port=443):
Max retries exceeded with url: /schemas/schema.json
(Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1108)')))
In a third attempt, I used 0.0.0.0, which routes the request correctly and the object validates.
However, I couldn't understand why it worked, or why the service is able to make the request through the proxy while I cannot curl it myself from inside the Python service container simply using curl --insecure https://0.0.0.0/...
Any help in figuring out the real cause, or a better understanding of how networking should work here, would be much appreciated.
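For what it's worth, the equivalent checks from inside the app container against the Nginx service name would be something like this (the certificate path is just an example):

# Reachability only, ignoring certificate verification
curl -k https://nginx/schemas/schema.json

# Trusting the self-signed certificate explicitly instead of disabling verification
# (this only verifies if the certificate's names actually include "nginx")
curl --cacert /path/to/selfsigned.crt https://nginx/schemas/schema.json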

Docker nginx service won't accept connections while individual replicas do

My Docker swarm has an API service that uses an Nginx proxy to access a Report service. The Nginx proxy only does proxy_pass and runs in 2 replicas. I have a problem accessing the Nginx proxy by service name or service IP. Sometimes it works, but mostly I get this response:
# curl 'http://nginx-proxy:8000/v1/report?id=662867'
curl: (7) Couldn't connect to server
# curl 'http://10.0.17.13:8000/v1/report?id=662867'
curl: (7) Couldn't connect to server
On the other hand, if I access individual replicas by IP or from outside the Docker network, it works just fine:
# curl 'http://10.0.17.14:8000/v1/report?id=662867'
0 662867 0 10 6 6 0.0 194 3 5 437
# curl 'http://10.0.17.18:8000/v1/report?id=662867'
0 662867 0 10 6 6 0.0 194 3 5 437
It feels like the internal Docker balancer gets overwhelmed by the number of requests and stops accepting new connections. There are no errors in the Nginx logs - every request has a 200 status. But the API logs show this:
INFO Cannot get online report: Get http://nginx-proxy:8000/v1/report?id=732743: dial tcp 10.0.17.13:8000: connect: cannot assign requested address caller=/go/src/api/src/reader.go:300 (*ReaderCursor).readOnline
INFO Cannot get online report: Get http://nginx-proxy:8000/v1/report?id=732703: dial tcp 10.0.17.13:8000: connect: cannot assign requested address caller=/go/src/api/src/reader.go:300 (*ReaderCursor).readOnline
I'm using the official Nginx image; the only modification is changing worker_processes from 1 to auto.
Any ideas what could be wrong or where to look?
The problem wasn't in Docker at all. It was in the API code, which was running multiple processes with the default of 2 idle connections per host. Lots of requests used up all the local ports available for client connections.
Fixed the problem with http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = 100
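For anyone hitting the same "cannot assign requested address" symptom, a quick way to confirm ephemeral-port exhaustion from inside the dialing container is something like this (a sketch; assumes ss is available):

# Range of local ports available for outgoing connections
cat /proc/sys/net/ipv4/ip_local_port_range

# Count client-side sockets towards the proxy stuck in TIME_WAIT
ss -tan state time-wait '( dport = :8000 )' | wc -l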

iOS - How to send all DNS traffic through the VPN-configured internal DNS server?

When a VPN connection is established:
The DNS servers configured by the ISP are in the 1st resolver list (each list contains domains and name servers).
The DNS servers configured by the VPN connection are in the 2nd resolver list.
If I configure xyz.com as a domain suffix, then DNS requests for any abc.xyz.com get resolved. But when the user tries to access other internal sites like pqr.com (a domain that isn't configured), they don't get resolved.
Is there a proper API to update the DNS server list (without root privileges) and to send all DNS traffic to the VPN-configured DNS server?

HAProxy Sticky Sessions Node.js iOS Socket.io

I am trying to implement sticky sessions with HAProxy.
I have an HAProxy instance that routes to two different Node.js servers, each running socket.io. I am connecting to these socket servers (via the HAProxy server) using an iOS app (https://github.com/pkyeck/socket.IO-objc).
Unlike when using a web browser, sticky sessions do not work; it is as if the client is not handling the cookie properly, so HAProxy just routes requests wherever it likes. Below you can see my HAProxy config (I have removed the IP addresses):
listen webfarm xxx.xxx.xxx.xxx:80
mode http
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth haproxy:stats
balance roundrobin
#replace XXXX with customer site name
cookie SERVERID insert indirect nocache
option httpclose
option forwardfor
#replace with web node private ip
server web01 yyy.yyy.yyy.yyy:8000 cookie server1 weight 1 maxconn 1024 check
#replace with web node private ip
server web02 zzz.zzz.zzz.zzz:8000 cookie server2 weight 1 maxconn 1024 check
This is causing a problem with the socket.io handshake, because the initial handshake routes to server1, then subsequent heartbeats from the client go to server2. This causes server2 to reject the client because the socket session ID is invalid as far as server2 is concerned, when really all requests from the client should go to the same server.
Update the HAProxy config file /etc/haproxy/haproxy.cfg with the following:
global
daemon
maxconn 256
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http-in
bind *:80
default_backend servers
option forwardfor
backend servers
cookie SRVNAME insert
balance leastconn
option forwardfor
server node1 127.0.0.1:3001 cookie node1 check
server node2 127.0.0.1:3002 cookie node2 check
server node3 127.0.0.1:3003 cookie node3 check
server node4 127.0.0.1:3004 cookie node4 check
server node5 127.0.0.1:3005 cookie node5 check
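If the iOS client genuinely never returns the cookie, cookie-based stickiness has nothing to key on. A common fallback in that case is source-IP stickiness, along these lines (a sketch, using the same backend layout):

backend servers
    balance source          # hash the client IP so each client sticks to one server
    hash-type consistent    # keep the mapping stable when servers come and go
    server node1 127.0.0.1:3001 check
    server node2 127.0.0.1:3002 check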
