x-request-id header propagation in Keycloak - oauth-2.0

I am using Keycloak to implement the OAuth2 authorization code flow in a Kubernetes cluster governed by the Ambassador API gateway. I am using the Istio service mesh to add traceability and mTLS features to my cluster. One of those features is Jaeger, which requires all services to forward the x-request-id header in order to link spans into a specific trace.
When a request is sent, the Istio proxy attached to Ambassador generates the x-request-id and forwards the request to Keycloak for authorization. When the result is sent back to Ambassador, the header is dropped, and the Istio proxy of Keycloak therefore generates a new x-request-id.
Here is a photo of the trace where I lost the x-request-id:
Is there a way I can force Keycloak to forward the x-request-id header if passed to it?
Update
here is the environment variables (ConfigMap) associated with Keycloak:
kind: ConfigMap
apiVersion: v1
metadata:
  name: keycloak-envars
data:
  KEYCLOAK_ADMIN: "admin"
  KC_PROXY: "edge"
  KC_DB: "postgres"
  KC_DB_USERNAME: "test"
  KC_DB_DATABASE: "keycloak"
  PROXY_ADDRESS_FORWARDING: "true"

You may need to restart your Keycloak Docker container with the environment variable PROXY_ADDRESS_FORWARDING=true, e.g.:
docker run -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak

Related

Unable to export traces to OpenTelemetry Collector on Kubernetes

I am using the opentelemetry-ruby OTLP exporter for auto-instrumentation:
https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp
The otel collector was installed as a daemonset:
https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector
I am trying to get the OpenTelemetry collector to collect traces from the Rails application. Both are running in the same cluster, but in different namespaces.
We have enabled auto-instrumentation in the app, but the rails logs are currently showing these errors:
E, [2022-04-05T22:37:47.838197 #6] ERROR -- : OpenTelemetry error: Unable to export 499 spans
I set the following env variables within the app:
OTEL_LOG_LEVEL=debug
OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4318
I can't confirm that the application can communicate with the collector pods on this port.
Curling this address from the Rails/Ruby app returns "Connection refused". However, I am able to curl http://<OTEL_POD_IP>:4318, which returns "404 page not found".
From inside a pod:
# curl http://localhost:4318/
curl: (7) Failed to connect to localhost port 4318: Connection refused
# curl http://10.1.0.66:4318/
404 page not found
This Helm chart created a DaemonSet, but there is no Service running. Is there some setting I need to enable to get this to work?
I confirmed that otel-collector is running on every node in the cluster and that the DaemonSet has hostPort set to 4318.
The problem is with this setting:
OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4318
Think of your pod as a stripped-down host of its own: localhost or 0.0.0.0 refers to the pod itself, and you don't have a collector deployed inside your pod.
You need to use your collector's address. Looking at the examples in the repo you shared, the agent-and-standalone and standalone-only setups also include a k8s resource of type Service.
With that, you can use the full service name (including the namespace) to configure your environment variable.
Also, the environment variable for traces is now called OTEL_EXPORTER_OTLP_TRACES_ENDPOINT, so you will need something like this:
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=<service-name>.<namespace>.svc.cluster.local:<service-port>
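For illustration, such a Service could look roughly like the following sketch; the name, namespace, and selector labels here are assumptions and must match whatever your Helm release actually creates:

```yaml
# Hypothetical Service exposing the collector's OTLP/HTTP port.
# Adjust metadata and selector to match your actual collector pods.
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: observability
spec:
  selector:
    app.kubernetes.io/name: opentelemetry-collector
  ports:
    - name: otlp-http
      port: 4318
      targetPort: 4318
```

With these (assumed) names, the endpoint would be http://otel-collector.observability.svc.cluster.local:4318.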
The correct solution is to use the Kubernetes Downward API to fetch the node IP address, which will allow you to export the traces directly to the daemonset pod within the same node:
containers:
  - name: my-app
    image: my-image
    env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://$(HOST_IP):4318
Note that using the deployment's service as the endpoint (<service-name>.<namespace>.svc.cluster.local) is incorrect, as it effectively bypasses the daemonset and sends the traces directly to the deployment, which makes the daemonset useless.

Securing the route on OPENSHIFT

I deployed an application on OpenShift and created a Route for it. I want the Route to be accessible only to a few users, not everyone, and I want to control which users can access it. In other words, I want to provide an authentication scheme for the Route in OpenShift. How can I achieve this? Please help me in this regard.
Unfortunately, OpenShift Routes do not have any authentication mechanisms built-in. There are the usual TLS / subdomain / path-based routing features, but no authentication.
So your most straight-forward path on OpenShift would be to deploy an additional reverse proxy as part of your application such as "nginx", "traefik" or "haproxy":
                       +-------------+       +------------------+
                       |  reverse    |       |                  |
 ---incoming traffic-->+  proxy      +------>+ your application |
    from your Route    |             |       |                  |
                       +-------------+       +------------------+
For authentication methods, you then have multiple options, depending on which solution you choose to deploy (the simplest ones being Basic Auth or Digest Auth).
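As a side note on what Basic Auth actually does on the wire: the proxy simply checks an Authorization header carrying base64("user:password"). A minimal sketch, with made-up credentials:

```shell
# Basic Auth is just base64("user:password") sent in the Authorization
# header on every request; base64 is encoding, not encryption, so it
# only makes sense over TLS.
creds=$(printf '%s' 'alice:s3cret' | base64)
echo "Authorization: Basic $creds"
# prints: Authorization: Basic YWxpY2U6czNjcmV0
```

This is why Basic Auth should only ever be exposed behind the TLS termination of your Route.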
While Kubernetes ingress controllers usually embed such functionality, this is not the case with OpenShift. The recommended way is usually to rely on an oauth-proxy, which integrates with the OpenShift API, allowing for easy integration with OpenShift users -- while it could also be used to integrate with third-party providers (KeyCloak, LemonLDAP-NG, ...).
OAuth
Integrating with the OpenShift API, we would first create a ServiceAccount, such as the following:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.my-app: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"my-route-name"}}'
  name: my-app
  namespace: my-project
That annotation tells OpenShift how to send your users back to your application upon successful login: look for a Route named my-route-name.
We will also create a certificate, signed by the OpenShift PKI, that we'll later use when setting up our OAuth proxy. This can be done by creating a Service such as the following:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: my-app-tls
  name: my-app
  namespace: my-project
spec:
  ports:
    - port: 8443
      name: oauth
  selector:
    [ your deployment selector placeholder ]
We may then add a sidecar container into our application deployment:
[...]
containers:
  - args:
      - -provider=openshift
      - -proxy-prefix=/oauth2
      - -skip-auth-regex=^/public/
      - -login-url=https://console.example.com/oauth/authorize
      - -redeem-url=https://kubernetes.default.svc/oauth/token
      - -https-address=:8443
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:3000
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-id=system:serviceaccount:my-project:my-app
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-refresh=0
      - -cookie-expire=24h0m0s
      - -cookie-name=_oauth2_mycookiename
    name: oauth-proxy
    image: docker.io/openshift/oauth-proxy:latest
    [ resource requests/limits & liveness/readiness placeholder ]
    volumeMounts:
      - name: cert
        mountPath: /etc/tls/private
[ your application container placeholder ]
serviceAccount: my-app
serviceAccountName: my-app
volumes:
  - name: cert
    secret:
      secretName: my-app-tls
Assuming:
console.example.com is your OpenShift public API FQDN
^/public/ is an optional path regexp that should not be subject to authentication
http://localhost:3000 is the URL the OAuth proxy connects to; fix the port to whatever your application binds to. Also consider reconfiguring your application to bind on its loopback interface only, preventing accesses that would bypass your OAuth proxy
my-project / my-app are your project & app names
Make sure your Route's TLS termination is set to Passthrough and that it sends its traffic to port 8443.
Check out https://github.com/openshift/oauth-proxy for an exhaustive list of options. The openshift-sar one could be interesting for filtering which users may log in, based on their permissions over OpenShift objects.
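As an illustration of the openshift-sar option, an extra arg like the following sketch would gate login on a SubjectAccessReview; the exact SAR attributes here are assumptions, so check the oauth-proxy README for the current syntax:

```yaml
# Hypothetical: only let users log in if they may list pods in my-project.
- -openshift-sar={"namespace":"my-project","resource":"pods","verb":"list"}
```

This delegates the "who may access this app" decision entirely to OpenShift RBAC.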
Obviously, in some cases you would be deploying applications that could be configured to integrate with some OAuth provider: then there's no reason to use a proxy.
Basic Auth
In cases where you do not want to integrate with OpenShift users, nor any kind of third-party provider, you may want to use some nginx sidecar container - there are plenty available on Docker Hub.
Custom Ingress
Also note that with OpenShift 3.x, cluster admins may customize the template used to generate HAProxy configuration, patching their ingress deployment. This is a relatively common way to fix some headers, handle some custom annotations for your users to set in their Routes, ... Look for TEMPLATE_FILE in https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#env-variables.
As of right now, I do not think OpenShift 4 ingress controllers allow for this, as they are driven by an operator with limited configuration options; see https://docs.openshift.com/container-platform/4.5/networking/ingress-operator.html.
Although, as with OpenShift 3, you may very well deploy your own ingress controllers should you have to handle such edge cases.

Spinnaker GateWay EndPoint

I'm working with Spinnaker to create a new CD pipeline.
I've deployed Halyard in a Docker container on my computer, and from it deployed Spinnaker to Google Kubernetes Engine.
After all of that, I prepared a new Ingress YAML file, shown below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-cloud
  namespace: spinnaker
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: spin-deck
              servicePort: 9000
After accessing the Spinnaker UI via a public IP, I got the error shown below.
Error fetching applications. Check that your gate endpoint is accessible.
After that, I checked the docs and ran the commands shown below.
I checked the Service data on my K8s cluster:
spin-deck NodePort 10.11.245.236 <none> 9000:32111/TCP 1h
spin-gate NodePort 10.11.251.78 <none> 8084:31686/TCP 1h
For UI
hal config security ui edit --override-base-url "http://spin-deck.spinnaker:9000"
For API
hal config security api edit --override-base-url "http://spin-gate.spinnaker:8084"
After running these commands and redeploying Spinnaker, the error persisted.
How can I solve the problem of accessing the Spinnaker gate from the UI?
--override-base-url should be set without the port.

Kubernetes certbot standalone not working

I'm trying to generate an SSL certificate with the certbot/certbot Docker container in Kubernetes. I am using a Job controller for this purpose, which looks like the most suitable option. When I run the standalone option, I get the following error:
Failed authorization procedure. staging.ishankhare.com (http-01):
urn:ietf:params:acme:error:connection :: The server could not connect
to the client to verify the domain :: Fetching
http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8:
Timeout during connect (likely firewall problem)
I've made sure that this isn't due to misconfigured DNS entries by running a simple nginx container, and it resolves properly. The following is my Job file:
apiVersion: batch/v1
kind: Job
metadata:
  #labels:
  #  app: certbot-generator
  name: certbot
spec:
  template:
    metadata:
      labels:
        app: certbot-generate
    spec:
      volumes:
        - name: certs
      containers:
        - name: certbot
          image: certbot/certbot
          command: ["certbot"]
          #command: ["yes"]
          args: ["certonly", "--noninteractive", "--agree-tos", "--staging", "--standalone", "-d", "staging.ishankhare.com", "-m", "me@ishankhare.com"]
          volumeMounts:
            - name: certs
              mountPath: "/etc/letsencrypt/"
            #- name: certs
            #  mountPath: "/opt/"
          ports:
            - containerPort: 80
            - containerPort: 443
      restartPolicy: "OnFailure"
and my service:
apiVersion: v1
kind: Service
metadata:
  name: certbot-lb
  labels:
    app: certbot-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 35.189.170.149
  ports:
    - port: 80
      name: "http"
      protocol: TCP
    - port: 443
      name: "tls"
      protocol: TCP
  selector:
    app: certbot-generator
The full error message is something like this:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for staging.ishankhare.com
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. staging.ishankhare.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8: Timeout during connect (likely firewall problem)
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: staging.ishankhare.com
Type: connection
Detail: Fetching
http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8:
Timeout during connect (likely firewall problem)
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
I've also tried running this as a simple Pod, but to no avail. Although I still feel running it as a Job to completion is the way to go.
First, be aware that your Job definition is valid, but the spec.template.metadata.labels.app: certbot-generate value does not match your Service definition's spec.selector.app: certbot-generator: one is certbot-generate, the other is certbot-generator. So the pod run by the Job controller is never added as an endpoint to the Service.
Adjust one or the other, but they have to match, and that might just work :)
Although, I'm not sure using a Service with a selector targeting short-lived pods from a Job controller would work, nor with a simple Pod as you tested. The certbot-randomId pod created by the Job (or whatever simple pod you create) takes about 15 seconds total to run/fail, and the HTTP validation challenge is triggered after just a few seconds of the pod's life: it's not clear to me that would be enough time for Kubernetes proxying to start working between the Service and the pod.
We can safely assume that the Service itself is working, since you mentioned that you tested DNS resolution. So you can easily rule out a timing issue by adding a sleep 10 (or more!) to give the pod time to be added as an endpoint to the Service and proxied appropriately before certbot triggers the HTTP challenge. Just change your Job's command and args to:
command: ["/bin/sh"]
args: ["-c", "sleep 10 && certbot certonly --noninteractive --agree-tos --staging --standalone -d staging.ishankhare.com -m me@ishankhare.com"]
And here too, that might just work :)
That being said, I'd warmly recommend you use cert-manager, which you can install easily through its stable Helm chart: the Certificate custom resource it introduces will store your certificate in a Secret, which makes it straightforward to reuse from whatever K8s resource, and it takes care of renewal automatically so you can just forget about it all.
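For illustration, a cert-manager Certificate could look roughly like this sketch; the issuer name, apiVersion, and resource names are assumptions that depend on your cert-manager version and ACME issuer setup:

```yaml
# Hypothetical cert-manager Certificate; assumes a ClusterIssuer named
# "letsencrypt-staging" has already been configured for ACME challenges.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging-ishankhare-com
spec:
  secretName: staging-ishankhare-com-tls  # cert + key land in this Secret
  dnsNames:
    - staging.ishankhare.com
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
```

Once the Certificate is Ready, any Ingress or pod can mount or reference the resulting Secret, and renewal happens without further intervention.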

How to connect to kubernetes-api from a browser?

I set up a Kubernetes cluster following this tutorial: https://coreos.com/kubernetes/docs/latest/deploy-master.html
When I open https://my_ip in the browser,
I get Unauthorized.
What do I need to do to access the API?
Output of kubectl config view:
apiVersion: v1
clusters:
  - cluster:
      certificate-authority: /home/hhh/ca.pem
      server: https://192.168.0.139
    name: default-cluster
contexts:
  - context:
      cluster: hhh-cluster
      user: hhh
    name: default-system
current-context: default-system
kind: Config
preferences: {}
users:
  - name: cluster-hhh
    user:
      password: admin
      username: admin
  - name: default-admin
    user:
      client-certificate: /home/hhh/admin.pem
      client-key: /home/hhh/admin-key.pem
Basic auth does not work.
Does basic auth work when using kubectl? (It's unclear from your output which client credentials are being used when connecting to your cluster's apiserver.)
Are you passing --basic-auth-file to your kube-apiserver process when starting it (see https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/server.go#L218)? If not, basic auth will not work when connecting to your apiserver. If so, you can verify that it is working by running curl -k --user admin:admin https://192.168.0.139.
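For reference, the file passed to --basic-auth-file is a CSV with one password,user,uid entry per line (this is the legacy static-password mechanism; basic auth was later removed from kube-apiserver entirely). A sketch with made-up values:

```shell
# Legacy kube-apiserver static password file: "password,user,uid" per line.
# Values below are placeholders for illustration only.
cat > basic_auth.csv <<'EOF'
admin,admin,1000
EOF
# The apiserver would then be started with --basic-auth-file=basic_auth.csv
cat basic_auth.csv
# prints: admin,admin,1000
```

Client certificates or an OIDC provider are the more durable alternatives if you are on a recent Kubernetes version.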
If you want (or need) to use client certificates from your browser, take a look at the instructions I put into this github issue about making it easier to configure.
