NGINX Ingress Controller's Load Balancer is hiding the real client IP - docker

Setup
I'm playing around with K8s and I set up a small, single-node, bare-metal cluster. For this cluster I pulled the NGINX Ingress Controller config from here, which comes from the official getting started guide.
Progress
OK, so pulling this set up a bunch of things, including a LoadBalancer in front. I like that.
For my app (a single pod that returns the caller's IP) I created a bunch of things to play around with. I now have SSL enabled and an Ingress, which I pointed to my app's service, which in turn points to the deployed pod. This all works perfectly; I can browse the page over HTTPS.
BUT...
My app is not receiving the original client IP. All client requests appear to come from 10.42.0.99... here's the controller config from kubectl describe:
Debugging
I tried some 50 solutions proposed online and none of them worked (ConfigMaps, annotations, proxy mode, etc.). I also debugged in depth: there's no X-Forwarded-For or any similar header in the request that reaches the pod. I previously tested the same app on Apache directly, and also in a Docker setup; it works there without any issues.
It's also worth mentioning that I looked into the ingress controller's pod and already saw the same internal IP there. I don't know how to debug the controller's pod any further.
Happy to share more information and config if it helps.
UPDATE 2021-12-15
I think I know what the issue is... I didn't mention how I installed the cluster, assuming it was irrelevant. Now I think it's the most important detail 😬
I set it up using K3s, which ships its own load balancer. Through debugging, I can now see that all of my requests in NGINX carry the IP of the load balancer's pod...
I still don't know how to make this Klipper LB pass through the source IP address, though.
UPDATE 2021-12-17
Opened an issue with the Klipper LB.

Make sure your NGINX ingress ConfigMap has real-ip-header: proxy_protocol enabled to get the real user IP; try updating the ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "false"
  real-ip-header: proxy_protocol
If that still doesn't work, you can inject this config as an annotation on your Ingress and test again:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-For $http_x_forwarded_for";

@milosmns - one of the approaches I have been using is to not install servicelb (--no-deploy=servicelb) and to remove Traefik (--no-deploy=traefik).
Instead, deploy the HAProxy ingress (https://forums.rancher.com/t/how-to-access-rancher-when-k3s-is-installed-with-no-deploy-servicelb/17941/3) and enable proxy protocol. When you do this, all requests that hit the HAProxy ingress are wrapped in proxy protocol, and no matter how they are routed you can pick up the source address anywhere. You can also get HAProxy to inject X-Real-IP headers.
The important thing is that HAProxy should run on all master nodes. Since there is no servicelb, your HAProxy will always see the correct IP address.
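For reference, the same flags can be set declaratively via the K3s server config file - a sketch, assuming a reasonably recent K3s release (newer versions call the option disable rather than no-deploy):

```yaml
# /etc/rancher/k3s/config.yaml - read by the k3s server on startup;
# equivalent to passing --disable=servicelb --disable=traefik
disable:
  - servicelb
  - traefik
```

With servicelb gone, the LoadBalancer service stays pending and you front the cluster with HAProxy on the nodes instead, as described above.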

Just set externalTrafficPolicy to "Local" if you're using GCP.
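On a Service of type LoadBalancer that would look roughly like this (a sketch; the service name and ports are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserve the client source IP
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```

Note that with Local, traffic is only delivered to nodes that actually run a pod of the service, so health checks will fail on the other nodes.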

Related

Get visitor IP or a custom header in Jaeger docker behind docker traefik (v2.x)

We are experimenting with Jaeger as a tracing tool for our Traefik routing environment. We also use an encapsulated Docker network.
The goal is to accumulate requests on our APIs per department, plus some other monitoring.
We are using Traefik 2.8 as a Docker service. All our services run behind this Traefik instance.
We added a basic tracing configuration to our .toml file and started a Jaeger instance, also as a Docker service. On our websecure entrypoint we added forwardedHeaders.insecure = true.
Jaeger is working fine, but we only get the Docker-internal host IP of the service, not the visitor IP of the user accessing a client with a browser or app.
I googled around and I am not sure, but it seems that this is a problem with our setup and can't be fixed - except by using network="host". Unfortunately, that's not an option.
But I want to be sure, so I hope someone here has a tip for configuring Docker/Jaeger correctly, or knows whether it is even possible.
A different tracing-tool suggestion (for example something like Tideways, but more Python-, Wasm- and C++-compatible) is also appreciated.
Thanks

Cannot access Keycloak account-console in Kubernetes (403)

I have found a strange behavior in Keycloak when deployed in Kubernetes, that I can't wrap my head around.
Use-case:
login as admin:admin (created by default)
click on Manage account
(manage account dialog screenshot)
I have compared how the (same) image (quay.io/keycloak/keycloak:17.0.0) behaves if it runs on Docker or in Kubernetes (K3S).
If I run it from Docker, the account console loads. In other words, I get a success (204) for the request
GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=account-console
From the same image deployed in Kubernetes, the same request fails with error 403. However, on this same application, I get a success (204) for the request
GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=security-admin-console
Since I can call security-admin-console, this does not look like an issue with the Kubernetes Ingress gateway nor with anything related to routing.
I've then thought about a Keycloak access-control configuration issue, but in both cases I use the default image without any change. I cross-checked to be sure, it appears that the admin user and the account-console client are configured exactly in the same way in both the docker and k8s applications.
I have no more idea about what could be the problem, do you have any suggestion?
Try setting ssl_required = NONE for your realm (master) in the realm table of the Keycloak database.
So we found that it was the nginx ingress controller causing a lot of issues. We were able to get it working with nginx via X-Forwarded-Proto etc., but it was a bit complicated and convoluted. Moving to haproxy instead resolved this problem. Also, make sure you are interfacing with the ingress controller over HTTPS, or that may cause issues with Keycloak.
annotations:
  kubernetes.io/ingress.class: haproxy
  ...
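One more thing worth checking (an assumption on my side, not something confirmed in the thread): on the Quarkus-based Keycloak 17.x images, Keycloak itself must be told it sits behind a TLS-terminating reverse proxy, via the KC_PROXY environment variable. A sketch of the container spec fragment:

```yaml
# Keycloak Deployment container fragment: "edge" means the proxy terminates
# TLS and Keycloak should trust the X-Forwarded-* headers it forwards
env:
  - name: KC_PROXY
    value: "edge"
```

Without this, Keycloak may see requests as plain HTTP from an internal IP, which can trigger exactly this kind of 403 behind an ingress.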

ingress-nginx logs - lots of weird entries showing up

I'm trying to get my logz.io setup running, and during that process I noticed that my ingress controller pod is spitting out a lot of logs. I don't know if it's normal or not, but I see a TON of entries in the logs that look like this:
[06/Sep/2019:21:27:14 +0000] TCP 200 0 0 4.999
[06/Sep/2019:21:27:17 +0000] TCP 200 0 0 5.000
[06/Sep/2019:21:27:19 +0000] TCP 200 0 0 5.001
[06/Sep/2019:21:27:22 +0000] TCP 200 0 0 4.999
[06/Sep/2019:21:27:24 +0000] TCP 200 0 0 5.001
[06/Sep/2019:21:27:27 +0000] TCP 200 0 0 5.000
...
Is this normal? Is my ingress configured wrong? I don't want to see thousands of these entries in my logz.io instance.
Usually, these log records reflect the TCP/UDP services configuration for upstream backends: when the NGINX Ingress controller applies stream proxying for TCP/UDP connections from the relevant ConfigMap, it adds a dedicated block to the nginx configuration inside the controller Pod, and each proxied connection is logged in this format.
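For context, the ConfigMap in question typically looks like this (a sketch; the port, namespace, and service name below are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<exposed-port>": "<namespace>/<service>:<service-port>"
  "9000": "default/example-service:8080"
```

Each entry generates a stream{} proxy block in the controller's nginx.conf, and every TCP connection handled by it produces one of these access-log lines.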
Find related discussion in #3612 Github thread.

How can I use vhosts with the same port in kubernetes pod?

I have an existing web application with a frontend and a backend which run on the same port (HTTPS/443) on different subdomains. Do I really need a load balancer in my pod to handle all incoming web traffic, or does Kubernetes have something built in which I've missed so far?
I would encourage getting familiar with the concepts of Ingress and IngressController: http://kubernetes.io/docs/user-guide/ingress/
Simplifying things a bit, you can look at an ingress as a sort of vhost/path service router/reverse proxy.
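To make that concrete, here is a sketch of host-based routing with a networking.k8s.io/v1 Ingress; the host names and service names are made up for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vhost-routing            # hypothetical
spec:
  rules:
    - host: app.example.com      # frontend subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend   # hypothetical Service
                port:
                  number: 443
    - host: api.example.com      # backend subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend    # hypothetical Service
                port:
                  number: 443
```

An ingress controller (nginx, HAProxy, Traefik, ...) then terminates port 443 once and dispatches by Host header, so no extra load balancer inside the pod is needed.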

What are possible ways of country filtering?

I'm currently using GKE (Kubernetes) with an nginx container to proxy different services. My goal is to block some countries. I'm used to doing that with nginx and its useful geoip module, but as of now, Kubernetes doesn't forward the real customer IP to the containers, so I can't use it.
What would be the simplest/cheapest solution for filtering out countries until Kubernetes actually forwards the real IP?
An external service?
A simple Google server running only nginx, filtering countries and forwarding to Kubernetes (not great in terms of price and reliability)?
Modifying kube-proxy (as I've seen suggested here and there, but it seems a bit odd)?
Frontend geoip filtering (hmm, by far the worst idea)?
Thank you!
You can use a custom nginx image and use a map to create a filter:
# this goes in the http section
map $geoip_country_code $allowed_country {
    default yes;
    UY no;
    CL no;
}
and
# this goes inside any location where you want to apply the filter
if ($allowed_country = no) {
    return 403;
}
First on GKE if you're using the nginx ingress controller, you should turn off the default GCE controller: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc, otherwise they'll fight.
"kubernetes doesn't forward the real customer ip to the containers"
That's only true if you're going through kube-proxy with a service of type NodePort and/or LoadBalancer. With the nginx ingress controller you're running with hostPort, so it's actually the docker daemon that's hiding the source ip. I think later versions of docker default to the iptables mode, which shows you the source ip once again.
In the meanwhile you can get source ip by running the nginx controller like: https://gist.github.com/bprashanth/a4b06004a0f9c19f9bd41a1dcd0da0c8
That, however, uses host networking, which is not the greatest option. Instead, you can use the proxy protocol to get the source IP: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#proxy-protocol
Also (in case you didn't already realize) the nginx controller has the geoip module enabled by default: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#nginx-status-page
Please open an issue if you need more help.
EDIT: proxy protocol is possible through the ssl proxy which is in alpha currently: https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/#proxy_protocol_for_retaining_client_connection_information
