I currently run a frontend with a backend API pool in my Kubernetes cluster. Both services are secured via an oauth2-proxy.
For the frontend, the auth workflow is straightforward: users enter their credentials, and every frontend-to-backend communication is therefore secured.
Additionally, an automated service (CI/CD) must also connect to the API. I have read that OAuth can also handle basic-auth client username/secret authentication, but I cannot get the flow to work. I have the credentials of my SSO provider and retrieve the access_token like this:
curl --request POST \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --header 'Authorization: Basic <base64(client_id:client_secret)>' \
  'https://sso.provider.com/as/token.oauth2?grant_type=client_credentials'
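Incidentally, curl can also build the Basic header itself from the raw client credentials; a hedged equivalent of the call above (same assumed endpoint, credentials in shell variables):
# curl derives the Authorization: Basic header from --user and sets the
# form content type automatically when --data is used
curl --request POST \
  --user "$CLIENT_ID:$CLIENT_SECRET" \
  --data 'grant_type=client_credentials' \
  'https://sso.provider.com/as/token.oauth2'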
My setup looks like this:
api-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sd-ingress
  annotations:
    # nginx.ingress.kubernetes.io/auth-signin: https://example.net/oauth2/start?rd=$scheme://$host$escaped_request_uri
    nginx.ingress.kubernetes.io/auth-url: https://example.net/oauth2/auth
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-forwarded-headers: "false"
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
  rules:
    - host: example.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: sd-svc
                port:
                  number: 8080
oauth2-config.yaml (excerpt)
...
args:
  - --skip-jwt-bearer-tokens=true
  - --show-debug-on-error=true
  - --http-address=0.0.0.0:4180
  - --provider=oidc
  - --oidc-issuer-url=https://sso.provider.com
  - --metrics-address=0.0.0.0:44180
  - --acr-values=gas:strong
  - --cookie-domain=example.net
  - --oidc-email-claim=sub
  - --whitelist-domain=.example.net
  - --config=/etc/oauth2_proxy/oauth2_proxy.cfg
env:
  - name: OAUTH2_PROXY_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: oauth2-secret
        key: client-id
  - name: OAUTH2_PROXY_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: oauth2-secret
        key: client-secret
  - name: OAUTH2_PROXY_COOKIE_SECRET
    valueFrom:
      secretKeyRef:
        name: oauth2-secret
        key: cookie-secret
...
It does not work when curl-ing the API:
curl --location --request GET 'https://example.net/api/get_queue' \
  --header 'Authorization: Bearer <Token>' \
  --header 'Content-Type: application/json; charset=utf-8'
The oauth2-proxy logs:
[2022/10/25 11:32:03] [jwt_session.go:51] Error retrieving session from token in Authorization header: no valid bearer token found in authorization header
Version used: oauth2-proxy v7.3.0
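For what it's worth, with --skip-jwt-bearer-tokens the proxy tries to verify the bearer token as a JWT against the configured OIDC issuer, so a useful first debugging step is to decode the token and compare its claims with the proxy config. A minimal sketch, assuming the token is actually a JWT and that cut/tr/base64/jq are available:
TOKEN='<Token>'
# Extract the payload (second dot-separated segment) and undo base64url encoding
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Re-pad to a multiple of 4 so base64 -d accepts it
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
printf '%s\n' "$PAYLOAD" | base64 -d | jq '{iss, aud, exp}'
The iss claim should match --oidc-issuer-url and the aud claim should contain the proxy's client ID; a mismatch there, or an opaque (non-JWT) access token from the provider, typically surfaces as exactly the "no valid bearer token found" error above.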
I have deployed JupyterHub and Keycloak instances with Helm charts. I'm trying to authenticate users with an OpenID Connect identity provider from Keycloak, but I'm pretty confused about the settings. I have followed instructions from here saying I should use a GenericOAuthenticator when implementing Keycloak.
To configure the OpenID Connect client I followed this.
I also created group membership and audience mappers and added them to the mappers of the JupyterHub "jhub" client, as well as a group like this, and created two test users, adding one of them to that group.
My problem is: when I try to log in I get a 403 Forbidden error and a URL similar to this:
https://jhub.compana.com/hub/oauth_callback?state=eyJzdGF0ZV9pZCI6ICJmYzE4NzA0ZmVmZTk0MGExOGU3ZWMysdfsdfsghfgh9LHKGJHDViLyJ9&session_state=ffg334-444f-b510-1f15d1444790&code=d8e977770a-1asdfasdf664-a790-asdfasdf.a6aac533-c75d-d555f-b510-asdasd.aaaaasdf73353-ce76-4aa9-894e-123asdafs
My questions are:
Am I right about using OAuth proxy? Do I need it if I'm using Keycloak? According to the JupyterHub docs, there are two authentication flows, so I'm using oauth-proxy as an external authenticator, but I'm not positive about the way I'm doing that.
JupyterHub is often deployed with oauthenticator, where an external identity provider, such as GitHub or KeyCloak, is used to authenticate users. When this is the case, there are two nested oauth flows: an internal oauth flow where JupyterHub is the provider, and an external oauth flow, where JupyterHub is a client.
Does Keycloak already have a default OIDC identity provider? The menu doesn't show any after the installation. Should I have done this for each client, since it's asking for an authorization URL, or is it redundant?
I tried to find this out, but it only offers the possibility to define my own default identity provider, according to this.
Is there a way to test the OAuth flow from the terminal or with Postman, in a way that lets me inspect the responses?
I could get an ID token with:
curl -k -X POST https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token \
  -d grant_type=password -d username=myuser -d password=mypassword \
  -d client_id=my-client -d scope=openid -d response_type=id_token -d client_secret=myclientsecret
But how can I try to log in from the console?
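One way to exercise more of the flow from the terminal is to take the access token from that response and call the realm's userinfo endpoint with it; a sketch assuming jq is installed and reusing the client from the question:
ACCESS_TOKEN=$(curl -sk -X POST \
  https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token \
  -d grant_type=password -d username=myuser -d password=mypassword \
  -d client_id=my-client -d scope=openid -d client_secret=myclientsecret \
  | jq -r '.access_token')
# If the token and mappers are set up correctly, this returns the user's claims
curl -sk -H "Authorization: Bearer $ACCESS_TOKEN" \
  https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo | jq
The browser-based authorization-code flow itself can't be fully reproduced this way, but a failing userinfo call usually points at client or mapper misconfiguration.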
Keycloak console screenshots:
identity provider list
Relevant files:
Jupyterhub-values.yaml:
hub:
  config:
    Authenticator:
      enable_auth_state: true
    JupyterHub:
      authenticator_class: generic-oauth
    GenericOAuthenticator:
      client_id: jhubclient
      client_secret: abcsecret
      oauth_callback_url: https://jhub.company.com/hub/oauth_callback
      authorize_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
      token_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
      userdata_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
      login_service: keycloak
      username_key: preferred_username
      userdata_params:
        state: state
  extraEnv:
    OAUTH2_AUTHORIZE_URL: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
    OAUTH2_TOKEN_URL: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
    OAUTH_CALLBACK_URL: https://keycloak.company.com/hub/company
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
  hosts:
    - jhub.company.com
keycloak-values.yaml:
Mostly default values, but I added the following for HTTPS:
extraEnvVars:
  - name: KEYCLOAK_PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KEYCLOAK_ENABLE_TLS
    value: "true"
  - name: KEYCLOAK_FRONTEND_URL
    value: "https://keycloak.company.com/auth"
ingress:
  enabled: true
  servicePort: https
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.org/redirect-to-https: "true"
    nginx.org/server-snippets: |
      location /auth {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $host;
        proxy_set_header X-Forwarded-Proto $scheme;
      }
I could make it work with this configuration:
hub:
  config:
    Authenticator:
      enable_auth_state: true
      admin_users:
        - admin
      allowed_users:
        - testuser1
    GenericOAuthenticator:
      client_id: jhub
      client_secret: nrjNivxuJk2YokEpHB2bQ3o97Y03ziA0
      oauth_callback_url: https://jupyter.company.com/hub/oauth_callback
      authorize_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
      token_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
      userdata_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
      login_service: keycloak
      username_key: preferred_username
      userdata_params:
        state: state
    JupyterHub:
      authenticator_class: generic-oauth
Creating the ingress myself like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jhub-ingress
  namespace: jhub
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - jupyter.company.com
      secretName: letsencrypt-cert-tls-jhub
  rules:
    - host: jupyter.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy-http
                port:
                  number: 8000
I also removed the oauth2-proxy deployment, since this appears to be handled by Keycloak already and is actually redundant.
Then I created a regular user plus admin roles and groups in Keycloak.
It appears the users didn't have the proper permissions in Keycloak.
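For reference, the same kind of user and group setup can also be scripted with Keycloak's admin CLI; a rough sketch (realm and usernames taken from the question, the kcadm.sh path depends on the Keycloak image):
# Authenticate the admin CLI against the master realm first
/opt/jboss/keycloak/bin/kcadm.sh config credentials \
  --server https://keycloak.company.com/auth --realm master \
  --user admin --password <admin-password>
# Create a group and an enabled test user in the company realm
/opt/jboss/keycloak/bin/kcadm.sh create groups -r company -s name=jupyter-users
/opt/jboss/keycloak/bin/kcadm.sh create users -r company -s username=testuser1 -s enabled=true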
Jenkins is installed through the Helm chart with a custom installation path:
helm install argo-jenkins -f jenkins-volume.yaml jenkinsci/jenkins -n jenkins --set controller.jenkinsUriPrefix='/jenkinsargo'
We have a front-end Istio ingress gateway for all the browser requests.
GW.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: jenkins-gw
  namespace: jenkins
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
VS.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: jenkins-vs
  namespace: jenkins
spec:
  gateways:
    - jenkins-gw.jenkins
  hosts:
    - '*'
  http:
    - match:
        - uri:
            prefix: /jenkinsargo
      route:
        - destination:
            host: argojenkins.jenkins.svc.cluster.local
            port:
              number: 8080
I'm able to access the Jenkins home page, but when trying to configure Jenkins security I see the error shown in the attached screenshots.
Please find attached the initial home page and the error page.
Error page after redirecting
Jenkins home page
This error means that an HTTPS request reached an HTTP (plaintext) listener. Check the listeners; some of your services may have an incorrect port name (i.e. http) which does not follow the naming convention: https://istio.io/latest/docs/reference/config/analysis/ist0118/
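For reference, the convention is that the port name's prefix tells Istio which protocol to assume. A hedged sketch of a correctly named Service port (the names and labels here are illustrative, not taken from the thread):
apiVersion: v1
kind: Service
metadata:
  name: argojenkins
  namespace: jenkins
spec:
  selector:
    app: jenkins        # assumed label, adjust to your deployment
  ports:
    - name: http-web    # the "http-" prefix marks this port as plaintext HTTP
      port: 8080
      targetPort: 8080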
I suggest using $ istioctl pc listeners --address
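For example (pod name and namespace are placeholders, not from the thread):
# Dump the ingress gateway's Envoy listeners to see which ports are
# treated as HTTP and which as TLS/TCP
istioctl proxy-config listeners istio-ingressgateway-<pod-id> -n istio-system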
@Arnau Senserrich,
If it were an http naming issue, it shouldn't even open the Jenkins login page in the first place.
As you can see in the above image, when accessing Jenkins through the LB I'm able to enter credentials and log in. But when I click Manage Jenkins -> Configure Security, I see the above error.
I also tried the GW with both HTTP and HTTPS:
GW.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: jenkins-gw
  namespace: jenkins
spec:
  selector:
    custom: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: PASSTHROUGH
      hosts:
        - "*"
VS1.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: jenkins-vs
  namespace: jenkins
spec:
  gateways:
    - jenkins-gw.jenkins
  hosts:
    - '*'
  http:
    - match:
        - uri:
            prefix: /jenkinsargo
      route:
        - destination:
            host: argojenkins.jenkins.svc.cluster.local
            port:
              number: 8080
VS2.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: jenkins-vs1
  namespace: jenkins
spec:
  gateways:
    - jenkins-gw.jenkins
  hosts:
    - '*'
  tls:
    - match:
        - port: 443
          sniHosts:
            - "oitat.xyz.com"
      route:
        - destination:
            host: argojenkins.jenkins.svc.cluster.local
            port:
              number: 8080
No luck, same issue.
The envoy container fails at startup with the below error:
Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021: Unknown field in: {"static_resources":{"listeners":[{"address":{"socket_address":{"address":"0.0.0.0","port_value":443}},"filter_chains":[{"tls_context":{"common_tls_context":{"tls_certificates":[{"private_key":{"filename":"/etc/ssl/private.key"},"certificate_chain":{"filename":"/etc/ssl/keychain.crt"}}]}},"filters":[{"typed_config":{"route_config":{"name":"local_route","virtual_hosts":[{"domains":["*"],"routes":[{"match":{"prefix":"/"},"route":{"host_rewrite_literal":"127.0.0.1","cluster":"service_envoyproxy_io"}}],"name":"local_service"}]},"@type":"type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager","http_filters":[{"name":"envoy.filters.http.router"}],"access_log":[{"typed_config":{"@type":"type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog","path":"/dev/stdout"},"name":"envoy.access_loggers.file"}],"stat_prefix":"ingress_http"},"name":"envoy.filters.network.http_connection_manager"}]}],"name":"listener_0"}],"clusters":[{"load_assignment":{"cluster_name":"service_envoyproxy_io","endpoints":[{"lb_endpoints":[{"endpoint":{"address":{"socket_address":{"port_value":8080,"address":"127.0.0.1"}}}}]}]},"connect_timeout":"30s","name":"service_envoyproxy_io","dns_lookup_family":"V4_ONLY","transport_socket":{"name":"envoy.transport_sockets.tls","typed_config":{"@type":"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext","sni":"www.envoyproxy.io"}},"type":"LOGICAL_DNS"}]}}
Here's my envoy.yaml file:
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 443
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                access_log:
                  - name: envoy.access_loggers.file
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                      path: /dev/stdout
                http_filters:
                  - name: envoy.filters.http.router
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            host_rewrite_literal: 127.0.0.1
                            cluster: service_envoyproxy_io
          tls_context:
            common_tls_context:
              tls_certificates:
                - certificate_chain:
                    filename: "/etc/ssl/keychain.crt"
                  private_key:
                    filename: "/etc/ssl/private.key"
  clusters:
    - name: service_envoyproxy_io
      connect_timeout: 30s
      type: LOGICAL_DNS
      # Comment out the following line to test on v6 networks
      dns_lookup_family: V4_ONLY
      load_assignment:
        cluster_name: service_envoyproxy_io
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 8080
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
          sni: www.envoyproxy.io
Am I doing something wrong here?
The error message states it plainly: Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021. The v2 xDS APIs are deprecated and will be removed from Envoy in Q1 2021, as per the API versioning policy.
According to the official docs, you have the following options:
In the interim, you can continue to use the v2 API for the transitional period by:
Setting --bootstrap-version 2 on the CLI for a v2 bootstrap file.
Enabling the runtime envoy.reloadable_features.enable_deprecated_v2_api feature. This is implicitly enabled if a v2 --bootstrap-version is set.
Or configure Envoy to use the v3 API.
More details can be found in the linked docs.
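In the config above, the deprecated v2 field appears to be the listener-level tls_context; in the v3 API it becomes a transport_socket with a DownstreamTlsContext. A sketch of that replacement, reusing the certificate paths from the question (a starting point, not a verified config):
filter_chains:
  - filters:
      - name: envoy.filters.network.http_connection_manager
        # ... http_connection_manager config unchanged from the question ...
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
        common_tls_context:
          tls_certificates:
            - certificate_chain:
                filename: "/etc/ssl/keychain.crt"
              private_key:
                filename: "/etc/ssl/private.key"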
I need to verify that a custom header is provided with a correct value. If not, I want to deny access to the service and produce a 401 with a message.
I've been able to create an Istio AuthorizationPolicy for that, but it gives me 403, which isn't totally wrong, but I want to be correct and give 401.
This is what I've tried so far, and unfortunately it doesn't have any impact on the requests that I'm sending (I'm no envoy nor lua expert, so bear with me please):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: custom-filter
  namespace: dev
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: NETWORK_FILTER # http connection manager is a filter in Envoy
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
      patch:
        operation: MERGE
        value: # lua filter specification
          name: envoy.lua
          typed_config:
            "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
            inlineCode: |
              function envoy_on_request(request_handle)
                if request_handle:headers():get("auth_token") ~= "xxx" then
                  request_handle:respond({[":status"] = "401"}, "nope")
                end
              end
Working AuthorizationPolicy that produces 403, and which I would like to replace with the above:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-method-get
  namespace: dev
spec:
  selector:
    matchLabels:
      app: myapp
  action: DENY
  rules:
    - when:
        - key: request.headers[auth_token]
          notValues: ["xxx"]
From my understanding, the response from istio is correct:
From RFC Standards:
The 401 (Unauthorized) status code indicates that the request has not been applied because it lacks valid authentication credentials for the target resource...The user agent MAY repeat the request with a new or replaced Authorization header field.
The 403 (Forbidden) status code indicates that the server understood the request but refuses to authorize it...If authentication credentials were provided in the request, the server considers them insufficient to grant access.
So your request with a token is understood but forbidden (403), while a request without a token doesn't provide credentials at all, so it's unauthorized (401).
Regarding your filter:
Apply the filter to HTTP_FILTER instead of NETWORK_FILTER, and it should be applied before any other filter, so set operation to INSERT_BEFORE.
Update
To apply the filter to a single pod instead of all pods, change the context from GATEWAY to SIDECAR_INBOUND and set a workloadSelector:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: custom-filter
  namespace: dev
spec:
  workloadSelector:
    labels:
      app: my-app
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value: # lua filter specification
          name: envoy.lua
          typed_config:
            "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
            inlineCode: |
              function envoy_on_request(request_handle)
                if request_handle:headers():get("auth_token") ~= "my_secret_token" then
                  request_handle:respond({[":status"] = "401"}, "nope")
                end
              end
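A quick way to sanity-check the filter after applying it (the hostname is a placeholder):
# Without the header (or with a wrong value) the Lua filter should
# short-circuit with a 401 and the body "nope"; with the right value
# the request reaches the app.
curl -i http://my-app.example.com/
curl -i -H 'auth_token: my_secret_token' http://my-app.example.com/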
I have a frontend application built with React and backend on nodejs.
Both have a separate Docker image and therefore a separate deployment on k8s (gce).
Each deployment has a corresponding k8s service, let's say fe-service and be-service.
I am trying to setup an Ingress so that both services are exposed on a single domain in the following manner:
/api/* - are routed to be-service
everything else is routed to fe-service
Here is my yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my-host
      http:
        paths:
          - path: /*
            backend:
              serviceName: fe-service
              servicePort: 80
          - path: /api/*
            backend:
              serviceName: be-service
              servicePort: 5000
Here is what I get with curl:
curl [ip] --header "Host: my-host" -> React app (as expected)
curl [ip]/foo --header "Host: my-host" -> nginx 404 (why?)
curl [ip]/api --header "Host: my-host" -> nginx 404 (why?)
curl [ip]/api/ --header "Host: my-host" -> nodejs app
curl [ip]/api/foo --header "Host: my-host" -> nodejs app
As far as I can see, the /api/ part works fine, but I can't figure out everything else; I tried different combinations with and without wildcards, but it still does not work the way I want it to.
What am I missing? Is this even possible?
Thanks in advance!
I can't explain why /foo is not working, but /api/* does not cover /api; it covers only anything after /api/.
I don't think the issue here is your ingress, but rather your nginx setup (without having seen it!). Since React apps are single-page applications, you need to tell the nginx server to always fall back to index.html instead of going to e.g. /usr/share/nginx/html/foo, where there's probably nothing to find.
I think you'll find relevant information e.g. here. I wish you good luck @Andrey, and let me know if this was at all helpful!
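For illustration, the usual SPA fallback in the frontend image's nginx config looks something like this (the root path is an assumption about the image layout):
server {
    listen 80;
    root /usr/share/nginx/html;   # assumed location of the React build
    location / {
        # Serve the requested file if it exists, otherwise fall back to
        # index.html so client-side routes like /foo still load the app
        try_files $uri $uri/ /index.html;
    }
}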
I hope I am not too late to respond. Please use the kubernetes use-regex annotation to ensure that the paths map to your service endpoints, and make sure that the React app service mapping comes after all the other paths. Use the following YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: my-host
      http:
        paths:
          - path: /api/?(.*)
            backend:
              serviceName: be-service
              servicePort: 5000
          - path: /?(.*)
            backend:
              serviceName: fe-service
              servicePort: 80
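If this works as intended, the earlier curl checks should then behave like this (a hedged expectation, reusing the question's [ip]/Host convention):
curl [ip]/foo --header "Host: my-host"      # -> fe-service (React app)
curl [ip]/api --header "Host: my-host"      # -> be-service (the /? makes the trailing slash optional)
curl [ip]/api/foo --header "Host: my-host"  # -> be-service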