Set up Keycloak, OAuth2 Proxy and JupyterHub with OAuth

I have deployed JupyterHub and Keycloak instances with Helm charts. I'm trying to authenticate users with an OpenID Connect identity provider from Keycloak, but I'm pretty confused about the settings. I have followed instructions from here saying I should use a GenericOAuthenticator when implementing Keycloak.
To configure the OpenID Connect client I followed this.
I also created group membership and audience mappers and added them to the JupyterHub "jhub" client, as well as a group like this, and created two test users, adding one of them to that group.
My problem is: when I try to log in, I get a 403 Forbidden error and a URL similar to this:
https://jhub.compana.com/hub/oauth_callback?state=eyJzdGF0ZV9pZCI6ICJmYzE4NzA0ZmVmZTk0MGExOGU3ZWMysdfsdfsghfgh9LHKGJHDViLyJ9&session_state=ffg334-444f-b510-1f15d1444790&code=d8e977770a-1asdfasdf664-a790-asdfasdf.a6aac533-c75d-d555f-b510-asdasd.aaaaasdf73353-ce76-4aa9-894e-123asdafs
My questions are:
Am I right about using OAuth2 Proxy? Do I need it at all if I'm using Keycloak? According to the JupyterHub docs, there are two authentication flows, so I'm using OAuth2 Proxy as the external authenticator, but I'm not positive about the way I'm doing that:
JupyterHub is often deployed with oauthenticator, where an external
identity provider, such as GitHub or KeyCloak, is used to authenticate
users. When this is the case, there are two nested oauth flows: an
internal oauth flow where JupyterHub is the provider, and an external
oauth flow, where JupyterHub is a client.
Does Keycloak already have a default OIDC identity provider? The menu doesn't show any after the installation. Should I have created one for each client, since it asks for an authorization URL, or is that redundant?
I tried to find this out, but it only offers the possibility to define my own default identity provider, according to this.
Is there a way to test the OAuth flow from the terminal or with Postman, in a way that lets me inspect the responses?
I could get an ID token with:
curl -k -X POST https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token \
  -d grant_type=password -d username=myuser -d password=mypassword \
  -d client_id=my-client -d scope=openid -d response_type=id_token -d client_secret=myclientsecret
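To inspect what Keycloak actually puts in that token (for instance whether preferred_username and the group claims show up), I can decode its payload from the shell. A minimal sketch, assuming jq and python3 are available; the claim names depend on the realm's mappers:
TOKEN=$(curl -sk -X POST https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token \
  -d grant_type=password -d username=myuser -d password=mypassword \
  -d client_id=my-client -d scope=openid -d client_secret=myclientsecret | jq -r '.id_token')
# the JWT payload is base64url without padding, so re-pad before decoding
echo "$TOKEN" | cut -d. -f2 \
  | python3 -c "import base64,sys; t=sys.stdin.read().strip(); print(base64.urlsafe_b64decode(t + '=' * (-len(t) % 4)).decode())" \
  | jq .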
But how can I try to log in from the console?
Keycloak console screenshot: identity provider list
Relevant files:
Jupyterhub-values.yaml:
hub:
  config:
    Authenticator:
      enable_auth_state: true
    JupyterHub:
      authenticator_class: generic-oauth
    GenericOAuthenticator:
      client_id: jhubclient
      client_secret: abcsecret
      oauth_callback_url: https://jhub.company.com/hub/oauth_callback
      authorize_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
      token_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
      userdata_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
      login_service: keycloak
      username_key: preferred_username
      userdata_params:
        state: state
  extraEnv:
    OAUTH2_AUTHORIZE_URL: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
    OAUTH2_TOKEN_URL: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
    OAUTH_CALLBACK_URL: https://keycloak.company.com/hub/company
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
  hosts:
    - jhub.company.com
keycloak-values.yaml:
Mostly default values, but I added the following for HTTPS:
extraEnvVars:
  - name: KEYCLOAK_PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KEYCLOAK_ENABLE_TLS
    value: "true"
  - name: KEYCLOAK_FRONTEND_URL
    value: "https://keycloak.company.com/auth"
ingress:
  enabled: true
  servicePort: https
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.org/redirect-to-https: "true"
    nginx.org/server-snippets: |
      location /auth {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $host;
        proxy_set_header X-Forwarded-Proto $scheme;
      }
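To confirm that Keycloak is reachable through the ingress and advertises the endpoints I expect, I can also query the standard OIDC discovery document (the /auth context path is an assumption based on this chart's frontend URL; jq assumed available):
curl -k https://keycloak.company.com/auth/realms/company/.well-known/openid-configuration \
  | jq '{authorization_endpoint, token_endpoint, userinfo_endpoint}'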

I could make it work with this configuration:
hub:
  config:
    Authenticator:
      enable_auth_state: true
      admin_users:
        - admin
      allowed_users:
        - testuser1
    GenericOAuthenticator:
      client_id: jhub
      client_secret: nrjNivxuJk2YokEpHB2bQ3o97Y03ziA0
      oauth_callback_url: https://jupyter.company.com/hub/oauth_callback
      authorize_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
      token_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
      userdata_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
      login_service: keycloak
      username_key: preferred_username
      userdata_params:
        state: state
    JupyterHub:
      authenticator_class: generic-oauth
I created the ingress myself like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jhub-ingress
  namespace: jhub
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - jupyter.company.com
      secretName: letsencrypt-cert-tls-jhub
  rules:
    - host: jupyter.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy-http
                port:
                  number: 8000
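As a quick sanity check that this ingress actually fronts the hub, I could hit the login page from the shell (host taken from the config above; -k only because of my certificate chain):
curl -ks -o /dev/null -w '%{http_code}\n' https://jupyter.company.com/hub/login
# expect 200 for the hub login page (or a 302 if auto_login is enabled)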
I also removed the OAuth2 Proxy deployment, since this is already handled by Keycloak and it was actually redundant.
Then I created a regular user, plus admin roles and groups, in Keycloak.
It turned out the users didn't have the proper permissions in Keycloak.
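For reference, the same user/group setup can be scripted with Keycloak's bundled admin CLI instead of the console. A sketch assuming a pre-Quarkus (JBoss-based) image where kcadm.sh lives under /opt/jboss/keycloak/bin; the realm, group and user names are just illustrations:
# authenticate the CLI against the master realm (run inside the Keycloak pod)
/opt/jboss/keycloak/bin/kcadm.sh config credentials \
  --server http://localhost:8080/auth --realm master \
  --user admin --password "$KEYCLOAK_ADMIN_PASSWORD"
# create a group and a test user in the application realm
/opt/jboss/keycloak/bin/kcadm.sh create groups -r company -s name=jupyterhub-users
/opt/jboss/keycloak/bin/kcadm.sh create users -r company -s username=testuser1 -s enabled=true
/opt/jboss/keycloak/bin/kcadm.sh set-password -r company --username testuser1 --new-password changeme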

Related

502 Bad Gateway error thrown in AKS nginx ingress after redirect back from external service

We have a working Azure Kubernetes Service cluster with a dotnet 6.0 web app. The pods are running on port 80, but the public URL is served over HTTPS with a cert handled by an nginx ingress controller with a cert secret. All this is working well.
We are adding some new functionality (an integration with an external service). When signing into our app with this new functionality, there's a brief redirect to the external service's page. Once the user's requests have completed, the external service redirects back to our site using a preconfigured redirect URL, to which some data is posted (a custom header and query string). At this point, our site errors with 502 Bad Gateway.
When I review the logs on the nginx ingress controller pod, I can see some additional errors:
[error] 664#664: *17279861 upstream prematurely closed connection while reading response header from upstream, client: 10.240.0.5, server: www-dev.application.com, request: "GET /details/c2beac1c-b220-45fa-8fd5-08da12dced76/Success?id=ID-MJCX43A4FJ032551T752200W&token=EC-0G826951TM357702S&SenderID=4FHGRLJDXPXUU HTTP/2.0", upstream: "http://10.244.1.66:80/details/c2beac1c-b220-45fa-8fd5-08da12dced76/Success?id=ID-MJCX43A4FJ032551T752200W&token=EC-0G826951TM357702S&SenderID=4FHGRLJDXPXUU", host: "www-dev.application.com", referrer: "https://www.external.service.com/"
10.244.1.66 is the internal IP of one of the application pods.
At first I thought this was an error related to the annotation:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
because the referrer is an https:// site making the request. However, adding that annotation makes the site unusable (probably because the dotnet app pods are listening on port 80).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-web
  namespace: application
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
    - hosts:
        - www-dev.application.com
      secretName: application-ingress-tls
  rules:
    - host: www-dev.application.com
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: applicationwebsvc
                port:
                  number: 80
That's the application ingress YAML.
Anyway, does anyone have any idea what the problem could be here? Thanks!
The ingress class has moved from an annotation to the ingressClassName field, and you don't need to specify HTTPS for the backend here. Can you please try this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-web
  namespace: application
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - www-dev.application.com
      secretName: application-ingress-tls
  rules:
    - host: www-dev.application.com
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: applicationwebsvc
                port:
                  number: 80
Please also check the Ingress documentation.
This ended up being a resource limits issue. There was one particular request that was causing memory usage to spike, which caused the container to be OOMKilled. That is what was leading to the 502 Bad Gateway error message (because once the container was killed, it was no longer there to service the request).
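For anyone hitting the same symptom: the OOMKill shows up in the pod's last state, and the fix is raising the container's memory limit. A sketch with placeholder names and sizes:
kubectl -n application describe pod <app-pod> | grep -A 4 'Last State'
#   Last State:  Terminated
#     Reason:    OOMKilled
Then raise the limit in the deployment's container spec, e.g.:
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"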

Kubernetes SSO Github OAuth for multiple applications

So here is the deal. I am using Kubernetes and I want to protect the applications inside the cluster. Therefore I added an oauth2-proxy and, in case the user is not logged in, they are redirected to GitHub. After the login is done, the user is redirected to the app (Login Diagram). For now, I have two dummy deployments of an echo-http server (echo1 and echo2), plus Jenkins. I am doing everything locally with minikube, so please don't mind the domain names.
In Jenkins, I installed the GitHub OAuth plugin and configured it as described in the multiple posts I found (e.g., Jenkins GitHub OAuth). I also created the GitHub OAuth application and set the callback. Since I want SSO for multiple applications besides Jenkins, I set the callback to https://auth.int.example.com/oauth2/callback instead of https://jenkins.int.example.com/securityRealm/finishLogin. After login on GitHub, I get redirected to the Jenkins webpage, but as a guest. If I try to log in, I end up with an error.
I used Helm to set up the oauth2-proxy (k8s-at-home/oauth2-proxy).
Am I missing something?
These are the ingress configurations of the oauth2-proxy and the ingress controller that I am using.
Nginx Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: "https://auth.int.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.int.example.com/oauth2/start?rd=https%3A%2F%2F$host$request_uri"
spec:
  tls:
    - hosts:
        - echo1.int.example.com
        - echo2.int.example.com
        - jenkins.int.example.com
      secretName: letsencrypt-prod
  rules:
    - host: echo1.int.example.com
      http:
        paths:
          - backend:
              serviceName: echo1
              servicePort: 80
    - host: echo2.int.example.com
      http:
        paths:
          - backend:
              serviceName: echo2
              servicePort: 80
    - host: jenkins.int.example.com
      http:
        paths:
          - path:
            backend:
              serviceName: jenkins-service
              servicePort: 8080
          - path: /securityRealm/finishLogin
            backend:
              serviceName: jenkins-service
              servicePort: 8080
OAuth2-proxy Configuration
config:
  existingSecret: oauth2-proxy-creds
extraArgs:
  whitelist-domain: .int.example.com
  cookie-domain: .int.example.com
  provider: github
authenticatedEmailsFile:
  enabled: true
  restricted_access: |-
    my_email@my_email.com
ingress:
  enabled: true
  path: /
  hosts:
    - auth.int.example.com
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
  tls:
    - secretName: oauth2-proxy-https-cert
      hosts:
        - auth.int.example.com
Nice auth architecture you are building there!
I would say that you may have overlooked the fact that Jenkins has its own authentication. You also need to configure Jenkins itself to allow OAuth2 access via GitHub.
So what is really going on? Your OAuth proxy solution is great. You can build apps in your k8s cluster without having to worry about user management or authentication directly in your app.
However, this is useful only for apps that don't have their own authentication mechanisms.
The OAuth proxy is simply protecting access to the backend webserver. Once you are allowed through by the proxy, you interact directly with the app, so if the app requires authentication, so will you as the end user.
My advice would be to use the OAuth proxy for apps that don't have user management mechanisms, and leave open access to apps that have their own authentication, like Jenkins. Otherwise you can end up with double authentication (the proxy and Jenkins in this case), which is not so great.
Then, to keep the high-level concept of accessing your cluster with GitHub accounts, you need to configure those user-based apps to also make use of GitHub OAuth2. This way, access to the cluster is homogeneous (you just need your GitHub account), but the actual integration has two different types: apps that don't require user management (protected by the OAuth proxy), and apps with authentication, which are configured with GitHub's OAuth2 independently.
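For completeness, here is roughly what that independent GitHub integration can look like on the Jenkins side, using the GitHub OAuth plugin with Configuration as Code. This is a sketch, not something from the question; the client ID/secret belong to a second GitHub OAuth app whose callback is https://jenkins.int.example.com/securityRealm/finishLogin:
jenkins:
  securityRealm:
    github:
      githubWebUri: "https://github.com"
      githubApiUri: "https://api.github.com"
      clientID: "<github-oauth-app-client-id>"
      clientSecret: "<github-oauth-app-client-secret>"
      oauthScopes: "read:org,user:email"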

HTTPS is not working with TLS enabled in GKE Ingress

I have deployed Jenkins in GKE using Helm, and now I am trying to configure DNS for it. I am using Cloudflare for DNS and have created a TLS secret using my Cloudflare certificates. The ingress I created works fine for HTTP, but HTTPS is not working. Following is the ingress that I used.
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-forwarded-headers: "true"
    nginx.ingress.kubernetes.io/use-proxy-protocol: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - jenkins url
      secretName: secret-name
  rules:
    - host: jenkins url
      http:
        paths:
          - path: /jenkins/*
            backend:
              serviceName: jenkins
              servicePort: 80
The ingress that you have provided does not specify any service or service port for 443 to serve HTTPS requests; it only has port 80, which is for HTTP.
To enable HTTPS or gRPC over SSL when connecting to the endpoints of services, you need to add the nginx.org/ssl-services annotation to your Ingress resource definition. [1]
[1] https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/ssl-services
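With that controller, the annotation takes a comma-separated list of services whose upstream connections should use SSL. A sketch reusing the service name from the question (the jenkins service would also need to expose a TLS port for this to work):
metadata:
  annotations:
    nginx.org/ssl-services: "jenkins"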

Configuring WSO2 API Manager to Work With Traefik for HTTPS

I am trying to configure Traefik and WSO2 API Manager. Basically, I want to configure Traefik to handle https.
labels:
  - "traefik.enable=true"
  - "traefik.http.middlewares.service-am-https.redirectscheme.scheme=https"
  - "traefik.http.routers.service-am-http.entrypoints=web"
  - "traefik.http.routers.service-am-http.rule=Host(`xx.xx.xx`) && Path(`/apim/admin`)"
  - "traefik.http.routers.service-am-http.middlewares=service-am-https@docker"
  - "traefik.http.routers.service-am.tls=true"
  - "traefik.http.routers.service-am.rule=Host(`xx.xx.xx`) && Path(`/apim/admin`)"
  - "traefik.http.routers.service-am.entrypoints=web-secure"
  - "traefik.http.services.service-am.loadbalancer.server.port=9443"
I also included this in the deployment.toml file for API Manager.
[catalina.valves.valve.properties]
className = "org.apache.catalina.valves.RemoteIpValve"
internalProxies = "*"
remoteIpHeader = "x-forwarded-for"
proxiesHeader = "x-forwarded-by"
trustedProxies = "*"
When I try to access the service, https://xx.xx.xx/apim/admin, I get this error:
Bad Request
This combination of host and port requires TLS.
Traefik is successfully handling the https part but when it comes to WSO2 API Manager, this issue comes up. Any ideas on how to resolve this?
I just had this problem and solved it by including
annotations:
  ingress.kubernetes.io/protocol: https
in my Ingress.
The full configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wso2-ingress
  namespace: <namespace>
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    ingress.kubernetes.io/protocol: https
spec:
  rules:
    - host: <hostname>
      http:
        paths:
          - path: /
            backend:
              serviceName: <service-name>
              servicePort: 9443
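Note that the question itself uses Docker labels rather than a Kubernetes Ingress. If I read the Traefik v2 docs correctly, the equivalent fix there is telling Traefik to speak HTTPS to the backend, e.g.:
- "traefik.http.services.service-am.loadbalancer.server.scheme=https"
(With a self-signed backend certificate you may also need serversTransport.insecureSkipVerify in Traefik's static configuration.)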

Configuring end to end SSL using Nginx Ingress controller

I am running a local deployment and trying to redirect HTTPS traffic to my backend pods.
I don't want SSL termination at the Ingress level, which is why I didn't use any tls secrets.
I am creating a self-signed cert within the container, and Tomcat starts up, picks it up, and exposes it on 8443.
Here is my Ingress Spec
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    #nginx.ingress.kubernetes.io/service-upstream: "false"
    kubernetes.io/ingress.class: {{ .Values.global.ingressClass }}
    nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
  rules:
    - http:
        paths:
          - path: /myserver
            backend:
              serviceName: myserver
              servicePort: 8443
I used the above annotations in different combinations, but I still can't reach my pod.
My service route:
# service information for myserver
service:
  type: ClusterIP
  port: 8443
  targetPort: 8443
  protocol: TCP
I did see a few answers suggesting annotations for this, but those didn't seem to work for me. Thanks in advance!
Edit: The only thing that remotely worked was when I overrode the ingress values as
nginx-ingress:
  controller:
    publishService:
      enabled: true
    service:
      type: NodePort
      nodePorts:
        https: "40000"
This does enable HTTPS, but it picks up the Kubernetes fake certificate rather than my cert from the container.
Edit 2:
For some reason, SSL passthrough is not working. I enforced it as
nginx-ingress:
  controller:
    extraArgs:
      enable-ssl-passthrough: ""
When I describe the deployment I can see it in the args, but when I check with kubectl ingress-nginx backends as described in https://kubernetes.github.io/ingress-nginx/kubectl-plugin/#backends, it says "sslPassThrough: false".
SSL passthrough requires a specific flag to be passed to the nginx controller at startup, since it is disabled by default:
SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag.
Since ssl-passthrough works on layer 4 of the OSI model and not on layer 7 (HTTP), using it will invalidate all the other annotations that you set on the ingress object.
So at your deployment level you have to specify this flag under args:
containers:
  - name: controller
    image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
            - /wait-shutdown
    args:
      - /nginx-ingress-controller
      - --enable-ssl-passthrough
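After the controller restarts, one way to confirm the flag actually landed (deployment name and namespace are assumptions; adjust for your install):
kubectl -n ingress-nginx get deploy ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'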
There are several things you need to set up if you want to use ssl-passthrough.
The first is to set a proper host name:
spec:
  rules:
    - host: example.com   # <- HERE
      http:
      ...
It's mentioned in the documentation:
SSL Passthrough leverages SNI [Server Name Indication] and reads the virtual domain from the
TLS negotiation, which requires compatible clients. After a connection
has been accepted by the TLS listener, it is handled by the controller
itself and piped back and forth between the backend and the client.
If there is no hostname matching the requested host name, the request
is handed over to NGINX on the configured passthrough proxy port
(default: 442), which proxies the request to the default backend.
The second thing is setting the --enable-ssl-passthrough flag, as already mentioned in the separate answer by @thomas.
Just edit the nginx ingress deployment and add this line to the args list:
- --enable-ssl-passthrough
The third thing that has to be done is to use the following annotations in your ingress object definition:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
IMPORTANT: In docs you can read:
Attention
Because SSL Passthrough works on layer 4 of the OSI model (TCP) and
not on the layer 7 (HTTP), using SSL Passthrough invalidates all the
other annotations set on an Ingress object.
This means that all other annotations are useless from now on. This applies to annotations like force-ssl-redirect and affinity, and also to any paths you defined (e.g. path: /myserver). Since the traffic is end-to-end encrypted, all the ingress sees is gibberish, and all it can do is pass this data to the application based on the DNS name (SNI).
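One way to verify passthrough end to end is to hit the ingress with the right SNI and look at which certificate comes back (IP and host are placeholders):
curl -vk --resolve example.com:443:<ingress-external-ip> https://example.com/
# the verbose output should show the backend's self-signed certificate,
# not the controller's default "Kubernetes Ingress Controller Fake Certificate"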
Looks like this is an open issue:
https://github.com/kubernetes/ingress-nginx/issues/5686
So I had to revert to using my certs as the default certificates and mounting them as TLS secrets.
