ingress-nginx logs - lots of weird entries showing up - docker

I'm trying to get my logz.io setup up and running, and during that process I'm noticing that my ingress controller pod is spitting out a lot of logs. I don't know if it's normal or not, but I see a TON of entries in the logs that look like this:
[06/Sep/2019:21:27:14 +0000] TCP 200 0 0 4.999
[06/Sep/2019:21:27:17 +0000] TCP 200 0 0 5.000
[06/Sep/2019:21:27:19 +0000] TCP 200 0 0 5.001
[06/Sep/2019:21:27:22 +0000] TCP 200 0 0 4.999
[06/Sep/2019:21:27:24 +0000] TCP 200 0 0 5.001
[06/Sep/2019:21:27:27 +0000] TCP 200 0 0 5.000
...
Is this normal? Is my ingress configured wrong? I don't want to see thousands of these entries in my logz.io instance.

Usually, these log records reflect the TCP/UDP services configuration for upstream backends: when the NGINX Ingress controller exposes TCP/UDP connections defined in the relevant ConfigMap, it applies a proxy stream configuration (adding a specific Lua block to the nested NGINX config in the ingress controller Pod), and every proxied TCP/UDP session is logged in this format.
See the related discussion in GitHub issue #3612.
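As a rough illustration of where these lines come from, here is a minimal sketch, assuming a hypothetical service name and port, of a tcp-services ConfigMap that makes the controller proxy a raw TCP port (each proxied session produces one stream log line), together with the log-format-stream key on the controller ConfigMap that lets you adjust the format if the volume is too noisy for logz.io:
# Hypothetical example: expose TCP port 9000 through ingress-nginx.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": "default/my-tcp-service:9000"   # <namespace>/<service>:<port> (assumed names)
---
# Controller ConfigMap: customize the stream (TCP/UDP) access-log format.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  log-format-stream: '[$time_local] $protocol $status $bytes_sent $bytes_received $session_time'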

Related

Cannot access Keycloak account-console in Kubernetes (403)

I have found a strange behavior in Keycloak when deployed in Kubernetes, that I can't wrap my head around.
Use-case:
login as admin:admin (created by default)
click on Manage account
(manage account dialog screenshot)
I have compared how the (same) image (quay.io/keycloak/keycloak:17.0.0) behaves if it runs on Docker or in Kubernetes (K3S).
If I run it from Docker, the account console loads. In other words, I get a success (204) for the request
GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=account-console
From the same image deployed in Kubernetes, the same request fails with error 403. However, on this same application, I get a success (204) for the request
GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=security-admin-console
Since I can call security-admin-console, this does not look like an issue with the Kubernetes Ingress gateway nor with anything related to routing.
I then thought about a Keycloak access-control configuration issue, but in both cases I use the default image without any change. I cross-checked to be sure, and it appears that the admin user and the account-console client are configured exactly the same way in both the Docker and K8s applications.
I have no more ideas about what the problem could be. Do you have any suggestions?
Try setting ssl_required = NONE in the realm table of the Keycloak database for your realm (master).
So we found that it was the NGINX ingress controller causing a lot of issues. We were able to get it working with NGINX via X-Forwarded-Proto and related headers, but it was a bit complicated and convoluted. Moving to HAProxy instead resolved this problem. Also, make sure you are interfacing with the ingress controller over HTTPS, or that may cause issues with Keycloak.
annotations:
  kubernetes.io/ingress.class: haproxy
  ...
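If you do stay on NGINX, a minimal sketch of the forwarded-headers approach hinted at above, assuming Keycloak 17's Quarkus distribution and a hypothetical Deployment name: KC_PROXY=edge tells Keycloak to trust the X-Forwarded-* headers set by the TLS-terminating ingress.
# Sketch only: Keycloak 17 behind a TLS-terminating ingress (names assumed).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:17.0.0
          args: ["start-dev"]
          env:
            - name: KC_PROXY
              value: "edge"   # trust X-Forwarded-For / X-Forwarded-Proto from the proxy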

NGINX Ingress Controller's Load Balancer is hiding the real client IP

Setup
I'm playing around with K8s and I set up a small, single-node, bare-metal cluster. For this cluster I pulled the NGINX Ingress Controller config from here, which comes from the official getting started guide.
Progress
Ok, so pulling this set up a bunch of things, including a LoadBalancer in front. I like that.
For my app (a single pod that returns the caller's IP) I created a bunch of things to play around with. I now have SSL enabled and an Ingress pointed to my app's Service, which then points to the deployed pod. This all works perfectly; I can browse the page with https.
BUT...
My app is not getting the original client IP. All client requests end up coming from 10.42.0.99... here's the controller config from describe:
Debugging
I tried something like 50 solutions that were proposed online, and none of them worked (ConfigMaps, annotations, proxy mode, etc.). I also debugged in depth: there is no X-Forwarded-For or any similar header in the request that reaches the pod. I previously tested the same app on Apache directly, and also in a Docker setup, and it worked without any issues.
It's also worth mentioning that I looked into the ingress controller's pod and I already saw the same internal IP in there. I don't know how to debug the controller's pod further.
Happy to share more information and config if it helps.
UPDATE 2021-12-15
I think I know what the issue is... I didn't mention how I installed the cluster, assuming it's irrelevant. Now I think it's the most important thing 😬
I set it up using K3S, which has its own LoadBalancer. And through debugging, I see now that all of my requests in NGINX have the IP of the load balancer's pod...
I still don't know how to make this Klipper LB give the source IP address though.
UPDATE 2021-12-17
Opened an issue with the Klipper LB.
Make sure your NGINX ingress ConfigMap has the real client IP enabled via real-ip-header: proxy_protocol. Try updating your ConfigMap along these lines:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "false"
  real-ip-header: proxy_protocol
If that still doesn't work, you can inject this config as an annotation on your Ingress resource and test again:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-For $http_x_forwarded_for";
@milosmns - one of the ways I have been trying is to not install servicelb (--no-deploy=servicelb) and to remove Traefik (--no-deploy=traefik).
Instead, deploy the HAProxy ingress (https://forums.rancher.com/t/how-to-access-rancher-when-k3s-is-installed-with-no-deploy-servicelb/17941/3) and enable the proxy protocol. When you do this, all requests that hit the HAProxy ingress will be injected with the proxy protocol, and no matter how they are routed you will be able to pick the client IP up from anywhere. You can also get HAProxy to inject X-Real-IP headers.
The important thing is that HAProxy should be running on all master nodes. Since there is no servicelb, your HAProxy will always get the correct IP address.
Just set externalTrafficPolicy to "Local" if you are using GCP.
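For reference, here is a minimal sketch of that setting on the controller's Service (the name, namespace, and selector labels follow the common ingress-nginx defaults, so adjust to your install); externalTrafficPolicy: Local preserves the client source IP by only routing to node-local endpoints:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed default name from the official manifests
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # keep the original client IP instead of SNAT-ing it
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https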

K8s loadbalance between different deployment/replicasets

We have a system that exposes 2 endpoints based on geo-location, e.g. (east_url, west_url).
One of our applications needs to load balance between those 2 URLs. In the consumer application, I created 2 Deployments with the same image but different environment variables, such as "url=east_url" and "url=west_url".
After the deployment, I have the following running Pods; each of them has the labels "app=consumer-app" and "region=east" or "region=west":
consumer-east-pod-1
consumer-east-pod-2
consumer-west-pod-1
consumer-west-pod-2
When I create a ClusterIP Service with selector app=consumer-app, somehow it only picks up one ReplicaSet. I am just curious whether it is actually possible in Kubernetes to have a Service backed by different Deployments?
Another way of doing this I can think of is to create 2 Services and have the ingress controller load balance between them; is this possible? We are using Kong as the ingress controller. I am looking for something like OpenShift, which can have "alternativeBackends" to serve the Route: https://docs.openshift.com/container-platform/4.1/applications/deployments/route-based-deployment-strategies.html
I was missing a label for the east ReplicaSets; after I added the app: consumer-app label, it works fine now.
Thanks
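To illustrate, here is a minimal sketch (names roughly follow the question; the image and ports are hypothetical): as long as both Deployments' Pod templates carry the same app: consumer-app label, a single Service selecting only that label will spread traffic across both ReplicaSets. The west Deployment would look identical with region: west and url=west_url.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer-east
spec:
  replicas: 2
  selector:
    matchLabels:
      app: consumer-app
      region: east
  template:
    metadata:
      labels:
        app: consumer-app        # shared label the Service selects on
        region: east
    spec:
      containers:
        - name: consumer
          image: example/consumer:latest   # hypothetical image
          env:
            - name: url
              value: east_url
---
apiVersion: v1
kind: Service
metadata:
  name: consumer-app
spec:
  selector:
    app: consumer-app            # matches Pods from BOTH Deployments
  ports:
    - port: 80
      targetPort: 8080           # hypothetical container port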
TL;DR: use ISTIO
With ISTIO you can create Virtual Services:
A VirtualService defines a set of traffic routing rules to apply when
a host is addressed. Each routing rule defines matching criteria for
traffic of a specific protocol. If the traffic is matched, then it is
sent to a named destination service (or subset/version of it) defined
in the registry.
The VirtualService will let you send traffic to different backends based on the URI.
Now, if you plan to perform something like an A/B test, you can use Istio's DestinationRule (https://istio.io/docs/reference/config/networking/destination-rule/).
DestinationRule defines policies that apply to traffic intended for a
service after routing has occurred.
Version specific policies can be specified by defining a named subset
and overriding the settings specified at the service level
1.- If you are using GKE, the process to install Istio can be found here
2.- If you are using K8s running on a Virtual Machine, the installation process can be found here
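As a hedged sketch of what that could look like for the east/west case above (the service name and subset labels are assumptions), a DestinationRule defines the two subsets and a VirtualService splits traffic between them:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: consumer-app
spec:
  host: consumer-app               # the Kubernetes Service name (assumed)
  subsets:
    - name: east
      labels:
        region: east
    - name: west
      labels:
        region: west
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: consumer-app
spec:
  hosts:
    - consumer-app
  http:
    - route:
        - destination:
            host: consumer-app
            subset: east
          weight: 50               # 50/50 split; shift the weights for A/B testing
        - destination:
            host: consumer-app
            subset: west
          weight: 50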

GCP Load Balancer: 502 Server Error, "failed_to_connect_to_backend"

I have a dockerized Go application running on two GCP instances. Everything works fine when using them with their individual external IPs, but when put through the load balancer, they're either slow to answer or they return a 502 server error. The health checks seem to be OK, so I really don't understand.
In the logs, the error thrown is
failed_to_connect_to_backend
I've already seen other answers to this question, but none of them seems to provide an answer for my case. I cannot modify the way the application is served, so it doesn't seem to be a timeout thing.
To troubleshoot a 502 response from the Load Balancer due to "failed_to_connect_to_backend", I would check the following:
1) Usually, the "failed_to_connect_to_backend" error message indicates that the load balancer is failing to connect to backends, so investigating the URL map rules is a good place to start. I would suggest reviewing your Load Balancer's URL map to make sure that host rules, path matcher, and path rules are correctly defined and comply with the descriptions in this article.
2) Also check whether the backend instances are exhausting their resources. If a backend server is overwhelmed, it will refuse incoming requests, potentially causing the load balancer to give up on it and return the specific 502 error you're experiencing. For Apache you could use this link, and for nginx this link. Also, check how many established connections are present at any one time using 'netstat' under the 'watch' command.
3) I would also recommend testing again with an HTTP(S) request sent directly to the instance, requesting the same URL that is reporting 502. You might do this test from another VM instance in your VPC network.
Check whether your backend blocks Google's Cloud CDN IP addresses or not. Those addresses can be found here: https://cloud.google.com/compute/docs/faq#find_ip_range
This happened to me more than once. I was using Apache on my servers, and the issue was not CPU but configuration.
I am using Apache mpm_event in combination with php-fpm, and there are many settings that limit the maximum number of requests that Apache and FPM will allow.
In my case I increased MaxRequestWorkers in the Apache MPM config from the default 150 to 600, and pm.max_children in the PHP-FPM config to 80 (I don't remember what the default was here).
This worked as expected, hope this helps you to extrapolate to your own stack.
Just encountered 502 errors myself on access to a Prometheus pod running on my GKE Standard cluster (exposed through IAP).
The issue was that the configured External HTTP(S) Load Balancer's health check was coming back unhealthy, despite the Prometheus pod running as expected. After digging into the issue I found out that the GCP auto-generated health check was faulty: it was checking the URL / instead of /-/ready. When I deleted the Prometheus k8s Ingress resource (which auto-generates GCP's LB and health check) and recreated it, the issue was resolved (after a few minutes of resource propagation).
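If the auto-generated health check keeps coming back wrong, one way to pin it explicitly on GKE is a BackendConfig with a custom request path; here is a minimal sketch, assuming hypothetical resource names and Prometheus's default port, with /-/ready as the readiness endpoint:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: prometheus-backendconfig    # assumed name
spec:
  healthCheck:
    type: HTTP
    requestPath: /-/ready           # Prometheus readiness endpoint
    port: 9090
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus                  # assumed name
  annotations:
    cloud.google.com/backend-config: '{"default": "prometheus-backendconfig"}'
spec:
  selector:
    app: prometheus                 # assumed Pod label
  ports:
    - port: 9090
      targetPort: 9090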

How can I use vhosts with the same port in kubernetes pod?

I have an existing web application with a frontend and a backend which run on the same port (HTTPS/443) on different subdomains. Do I really need a load balancer in my pod to handle all incoming web traffic, or does Kubernetes have something already built in which I missed so far?
I would encourage getting familiar with the concept of Ingress and IngressController: http://kubernetes.io/docs/user-guide/ingress/
Simplifying things a bit, you can look at Ingress as a sort of vhost/path service router/reverse proxy.
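A minimal sketch of that idea for the frontend/backend split in the question (host names, service names, and the TLS secret are hypothetical): a single Ingress doing vhost-style routing on port 443.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app                      # assumed name
spec:
  tls:
    - hosts:
        - app.example.com
        - api.example.com
      secretName: web-app-tls        # hypothetical TLS certificate secret
  rules:
    - host: app.example.com          # frontend vhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 443
    - host: api.example.com          # backend vhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 443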
