Cannot access Keycloak account-console in Kubernetes (403)

I have found some strange behavior in Keycloak when deployed in Kubernetes that I can't wrap my head around.
Use case:
login as admin:admin (created by default)
click on Manage account
(manage account dialog screenshot)
I have compared how the same image (quay.io/keycloak/keycloak:17.0.0) behaves when it runs on Docker versus in Kubernetes (K3S).
If I run it from Docker, the account console loads. In other words, I get a success (204) for the request
GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=account-console
From the same image deployed in Kubernetes, the same request fails with a 403 error. However, from this same deployment, I get a success (204) for the request
GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=security-admin-console
Since I can call security-admin-console, this does not look like an issue with the Kubernetes Ingress gateway or with anything related to routing.
I then suspected a Keycloak access-control configuration issue, but in both cases I use the default image without any changes. I cross-checked to be sure: the admin user and the account-console client are configured in exactly the same way in both the Docker and Kubernetes deployments.
I have run out of ideas about what the problem could be. Do you have any suggestions?

Try setting ssl_required = NONE in the realm table of the Keycloak database for your realm (master).
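For reference, against Keycloak's database schema that change would be roughly the following SQL (a sketch: the table and column names match Keycloak 17's schema, valid values are ALL/EXTERNAL/NONE, and Keycloak may need a restart to pick up the change):

-- Sketch: disable the SSL requirement for the master realm (back up the database first)
UPDATE realm SET ssl_required = 'NONE' WHERE name = 'master';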

We found that the nginx ingress controller was causing a lot of issues. We were able to get it working with nginx via X-Forwarded-Proto etc., but it was complicated and convoluted; moving to haproxy instead resolved the problem. Also, make sure you are interfacing with the ingress controller over HTTPS, or that may cause issues with Keycloak.
annotations:
  kubernetes.io/ingress.class: haproxy
  ...
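For context, a minimal Ingress using that annotation would look roughly like this (a sketch: the host, service name, and port are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
    - host: keycloak.example.com    # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak    # placeholder service name
                port:
                  number: 8080    # placeholder port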

Related

How to edit bad gateway page on traefik

I want to edit the Bad Gateway page from traefik to issue a command like
docker restart redis
Does anyone have an idea on how to do this?
A bit of background:
I have a somewhat broken setup of Traefik v2.5 and Authelia on my development server, where I sometimes get a Bad Gateway error when accessing a page. Usually this is fixed by clearing all sessions from Redis. I tried to locate the bug, but the error logs aren't helpful, and I don't have the time or skills to make the bug reproducible or to find the broken configuration. So instead I always ssh into the machine and reset Redis manually.
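Traefik can't execute a command like docker restart redis by itself, but its errors middleware can route error responses to a custom service, which could in turn trigger the reset. A minimal sketch of the dynamic configuration (the error-handler service and /reset path are hypothetical and would have to be implemented separately, e.g. with access to Redis or the Docker socket):

# Traefik v2 dynamic configuration (file provider) - a sketch
http:
  middlewares:
    redis-reset-on-502:
      errors:
        status:
          - "502"
        service: error-handler    # hypothetical service that performs the reset
        query: "/reset"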

Get Visitor IP or a Custom header in Jaeger docker behind docker traefik (v2.x)

We are experimenting with Jaeger as a tracing tool for our Traefik routing environment. We also use an encapsulated Docker network.
The goal is to aggregate requests on our APIs per department, plus some other monitoring.
We are using Traefik 2.8 as a Docker service, and all our services run behind this Traefik instance.
We added a basic tracing configuration to our .toml file and started a Jaeger instance, also as a Docker service. On our websecure entry point we added forwardedHeaders.insecure = true
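For reference, that entry point setting in the static .toml looks roughly like this (a sketch; the address value is assumed):

[entryPoints.websecure]
  address = ":443"
  [entryPoints.websecure.forwardedHeaders]
    insecure = true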
Jaeger is working fine, but we only get the Docker-internal host IP of the service, not the visitor IP of the user accessing a client with a browser or app.
I googled around and I am not sure, but it seems that this is a problem with our setup and can't be fixed, except by using network="host". Unfortunately that's not an option.
But I want to be sure, so I hope someone here has a tip on how to configure Docker/Jaeger correctly, or knows whether it is even possible.
A suggestion for a different tracing tool (for example something like Tideways, but more Python, Wasm, and C++ compatible) would also be appreciated.
Thanks

NGINX Ingress Controller's Load Balancer is hiding the real client IP

Setup
I'm playing around with K8s and set up a small, single-node, bare-metal cluster. For this cluster I pulled the NGINX Ingress Controller config from here, which comes from the official getting started guide.
Progress
Ok, so pulling this set up a bunch of things, including a LoadBalancer in front. I like that.
For my app (a single pod that returns the caller IP) I created a bunch of things to play around with. I now have SSL enabled and another ingress, which I pointed to my app's service, which then points to the deployed pod. This all works perfectly; I can browse the page over HTTPS.
BUT...
My app is not getting the original client IP. All client requests end up coming from 10.42.0.99... here's the controller config from describe:
Debugging
I tried something like 50 solutions that were proposed online, and none of them worked (ConfigMaps, annotations, proxy mode, etc.). I debugged in depth; there is no X-Forwarded-For or any similar header in the request that reaches the pod. Previously I tested the same app on Apache directly, and also in a Docker setup, and it works without any issues.
It's also worth mentioning that I looked into the ingress controller's pod and I already saw the same internal IP in there. I don't know how to debug the controller's pod further.
Happy to share more information and config if it helps.
UPDATE 2021-12-15
I think I know what the issue is... I didn't mention how I installed the cluster, assuming it was irrelevant. Now I think it's the most important thing 😬
I set it up using K3S, which has its own LoadBalancer. And through debugging, I see now that all of my requests in NGINX have the IP of the load balancer's pod...
I still don't know how to make this Klipper LB pass through the source IP address, though.
UPDATE 2021-12-17
Opened an issue with the Klipper LB.
Make sure your NGINX ingress ConfigMap has the real client IP enabled with real-ip-header: proxy_protocol; try updating your ConfigMap along these lines:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "false"
  real-ip-header: proxy_protocol
If that still doesn't work, you can inject this config as an annotation on your Ingress and test again:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-For $http_x_forwarded_for";
@milosmns - one of the approaches I have been using is to not install servicelb (--no-deploy=servicelb) and to remove traefik (--no-deploy=traefik).
Instead, deploy the haproxy ingress (https://forums.rancher.com/t/how-to-access-rancher-when-k3s-is-installed-with-no-deploy-servicelb/17941/3) and enable proxy protocol. When you do this, all requests that hit the haproxy ingress will be injected with proxy protocol, and no matter how they are routed you will be able to pick them up from anywhere. You can also get haproxy to inject X-Real-IP headers.
The important thing is that haproxy should be running on all master nodes. Since there is no servicelb, your haproxy will always get the correct IP address.
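For reference, a sketch of a K3s install with those two components disabled (the --no-deploy spelling is taken from this answer; newer K3s releases use --disable instead):

curl -sfL https://get.k3s.io | sh -s - --no-deploy=servicelb --no-deploy=traefik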
Just set externalTrafficPolicy to "Local" if using GCP
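That is a one-field change on the controller's LoadBalancer Service, roughly like this (a sketch; the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller    # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserves the client source IP
  selector:
    app.kubernetes.io/name: ingress-nginx    # placeholder selector
  ports:
    - port: 443
      targetPort: 443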

Nextcloud in docker behind traefik on unraid

I'm running Traefik as a reverse proxy on my unraid (6.6.6).
Apps like Sonarr/Radarr, NZBGet, and Organizr all work fine, but that's mostly because these are super easy to set up. You only need a handful of Traefik-specific labels and that's it:
traefik.enable=true
traefik.backend=radarr
traefik.frontend.rule=PathPrefix: /radarr
traefik.port=7878
traefik.frontend.auth.basic.users=username:password
So far so good, everything is using ssl and working great. 
But as soon as I have to configure some extra stuff for the containers to work behind a reverse proxy, I get lost. I've read dozens of guides regarding Nextcloud, but I can't get it to work.
Currently I'm using the linuxserver/nextcloud Docker image, and from my internal network it's working great. I got everything set up, added users and SMB shares, and everybody can connect fine. But I can't get it to work behind Traefik using a subdirectory. It's probably just some Traefik labels I need to add to the Nextcloud container, but I'm simply too much of a newb to know which ones I need.
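For illustration, the labels in question would follow the same pattern as above. This is a sketch only; the PathPrefixStrip rule and the HTTPS backend values are guesses, not a verified configuration:

traefik.enable=true
traefik.backend=nextcloud
traefik.frontend.rule=PathPrefixStrip: /nextcloud
traefik.port=443
traefik.protocol=https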
My first issue was that Nextcloud forces HTTPS, which Traefik doesn't like unless you configure some stuff. So for now I'm just using the traefik.frontend.auth.forward.tls.insecureSkipVerify=true label to work around this. I know it's potentially a security issue, but if I'm not mistaken it only opens up the possibility of a man-in-the-middle attack, which shouldn't be too much of an issue since both Traefik and Nextcloud are running on the same machine (and besides, everything else is going over HTTP).
Now that I have that working, I get an Error 500 message when I try to open mydomain.tld/nextcloud.
The traefik log says "Error calling . Cause: Get : unsupported protocol scheme \"\""
I tried adding some labels I found in a guide (https://www.smarthomebeginner.com/traefik-reverse-proxy-tutorial-for-docker/#NextCloud_Your_Own_Cloud_Storage)
"traefik.frontend.headers.SSLRedirect=true"
"traefik.frontend.headers.STSSeconds=315360000"
"traefik.frontend.headers.browserXSSFilter=true"
"traefik.frontend.headers.contentTypeNosniff=true"
"traefik.frontend.headers.forceSTSHeader=true"
"traefik.frontend.headers.SSLHost=mydomain.tld"
"traefik.frontend.headers.STSPreload=true"
"traefik.frontend.headers.frameDeny=true"
I just thought I'd try it; maybe I'd get lucky.
Sadly I didn't. Still Error 500.
Enable debug logging in your Traefik configuration:
loglevel = "DEBUG"
More info here: https://docs.traefik.io/configuration/logs/
After doing this I realized that my docker label was not correctly applying the InsecureSkipVerify = true line in my config. The error I was able to see in the logs was:
500 Internal Server Error' caused by: x509: cannot validate certificate for 172.17.0.x because it doesn't contain any IP SANs"
To work around this, I had to add InsecureSkipVerify = true directly to the traefik.toml file.
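For reference, that option sits at the top level of traefik.toml in Traefik v1 (a minimal sketch):

# traefik.toml (Traefik v1) - global section
InsecureSkipVerify = true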

OpenShift WSO2 API Manager redirect error

I am currently trying to set up WSO2 API Manager on OpenShift. The problem I am running into is that when I try to browse the URL created by the OpenShift route, the application redirects me to the internally created IP address of the publisher app. However, when I launch the container without OpenShift, the application directs me to its intended API login page, which is the management console URL.
I suspect this has to do with how the HAProxy embedded load balancer is behaving. I was able to hack around the configuration by changing the default ports to 443, but that created a new set of issues, because changing the ports also required hard-coding container hostnames in carbon.xml. Hard-coding settings in the configuration files prevents me from being able to scale the containers up.
Any assistance on this will be much appreciated.
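For anyone hitting the same redirect: the carbon.xml entries involved are HostName and MgtHostName (a sketch; the hostname is a placeholder, and as noted above this is still static configuration rather than a scalable fix):

<HostName>apim.apps.example.com</HostName>
<MgtHostName>apim.apps.example.com</MgtHostName>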
