Difference between Kubernetes Service Account Tokens from secret and projected volume

When I do kubectl get secret my-sa-token-lr928 -o yaml, there is a base64 string (JWT A) as the value of data.token. There are other fields too, like data.ca.crt, in this returned secret.
When I use a projected volume with source serviceAccountToken and read the file, there is another, non-base64 string (JWT B).
cat /var/run/secrets/some.directory/serviceaccount/token
Why are the JWT A and JWT B strings different? The most notable difference is the iss claim: in JWT B it is my issuer URL (--service-account-issuer), while in JWT A it is kubernetes/serviceaccount.
Aren't they both JWT service account tokens? If not, what Kubernetes API objects do they actually represent?
Following is my Kubernetes Pod spec (edited for brevity):
apiVersion: v1
kind: Pod
metadata:
  annotations:
  labels:
    app: sample-app
  name: sample-pod-gwrcf
spec:
  containers:
  - image: someImage
    name: sample-app-container
    resources: {}
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: my-sa-token-lr928
      readOnly: true
    - mountPath: /var/run/secrets/some.directory/serviceaccount
      name: good-token
      readOnly: true
  serviceAccount: my-sa
  serviceAccountName: my-sa
  terminationGracePeriodSeconds: 30
  volumes:
  - name: good-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: my.audience.com
          expirationSeconds: 86400
          path: token
  - name: my-sa-token-lr928
    secret:
      defaultMode: 420
      secretName: my-sa-token-lr928

Aren't they both JWT service account tokens?
Yes, they are both JWT tokens.
The one you mentioned as JWT A, stored in my-sa-token-lr928, is base64 encoded like all data in every Kubernetes Secret.
When Kubernetes mounts secret data into a pod, the data is decoded and stored as a file, e.g. the token file in this case.
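As a quick check, a minimal sketch of decoding that token from outside the pod (secret name taken from the question):
kubectl get secret my-sa-token-lr928 -o jsonpath='{.data.token}' | base64 --decode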
JWT B, obtained via Service Account Token Volume Projection, is requested by the kubelet and gives you more flexibility, for example setting an expiration time, in contrast to regular service account tokens, which once issued stay the same (unless recreated) and do not expire.
If you exec into your pod and look at the contents of these files, you will see actual JWT tokens. You can decode the data from these tokens with any JWT decoder, e.g. jwt.io.
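If you only have a shell inside the pod, a rough sketch of inspecting the payload without external tools (assuming cut and base64 exist in the image; missing base64url padding may trigger a harmless decode warning):
# a JWT is header.payload.signature; the second dot-separated part is the payload
TOKEN=$(cat /var/run/secrets/some.directory/serviceaccount/token)
echo "$TOKEN" | cut -d '.' -f2 | base64 --decode 2>/dev/null
Compare the iss, aud and exp claims between the two tokens.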
Why are the JWT A and JWT B strings different?
Because they contain different data.

Related

Setup Keycloak, Oauth-proxy and Jupyterhub

I have deployed JupyterHub and Keycloak instances with Helm charts. I'm trying to authenticate users with an OpenID Connect identity provider from Keycloak, but I'm pretty confused about the settings. I have followed instructions from here saying I should use a GenericOAuthenticator when implementing Keycloak.
To configure OpenId Connect Client I followed this.
I also created a group membership and audience mapper and added them to the mappers of the JupyterHub "jhub" client, as well as a group like this, and created two test users and added one of them to that group.
My problem is: when I try to log in I get a 403 Forbidden error and a URL similar to this:
https://jhub.compana.com/hub/oauth_callback?state=eyJzdGF0ZV9pZCI6ICJmYzE4NzA0ZmVmZTk0MGExOGU3ZWMysdfsdfsghfgh9LHKGJHDViLyJ9&session_state=ffg334-444f-b510-1f15d1444790&code=d8e977770a-1asdfasdf664-a790-asdfasdf.a6aac533-c75d-d555f-b510-asdasd.aaaaasdf73353-ce76-4aa9-894e-123asdafs
My questions are:
Am I right about using OAuth proxy? Do I need it if I'm using Keycloak? According to the JupyterHub docs, there are two authentication flows, so I'm using Oauth-proxy as an external authenticator, but I'm not positive about the way I'm doing that.
JupyterHub is often deployed with oauthenticator, where an external
identity provider, such as GitHub or KeyCloak, is used to authenticate
users. When this is the case, there are two nested oauth flows: an
internal oauth flow where JupyterHub is the provider, and an external
oauth flow, where JupyterHub is a client.
Does Keycloak already have a default OIDC identity provider? The menu doesn't show any after the installation. Should I have done this for each client, since it's asking for an Authorization URL, or is it redundant?
I tried to find this out, but it only offers the possibility to define my own default identity provider, according to this.
Is there a way to test the Oauth flow from the terminal or with Postman in a way that I can inspect the responses?
I could get an Id token with:
curl -k -X POST https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token -d grant_type=password -d username=myuser -d password=mypassword -d client_id=my-client -d scope=openid -d response_type=id_token -d client_secret=myclientsecret
But how can I try to log in from the console?
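For reference, one rough way to poke at the flow from the terminal (a sketch, assuming the same Keycloak realm endpoints as above and that jq is available; client and user names are the placeholders already used above):
# Hypothetical check: get an access token with the password grant, then call the standard
# OpenID Connect userinfo endpoint with it to confirm the token is accepted
ACCESS_TOKEN=$(curl -sk -X POST https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token \
  -d grant_type=password -d username=myuser -d password=mypassword \
  -d client_id=my-client -d client_secret=myclientsecret -d scope=openid | jq -r .access_token)
curl -sk -H "Authorization: Bearer $ACCESS_TOKEN" \
  https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
If the userinfo call returns the user's claims, the token exchange itself works and the 403 may point at an authorization (group/role) issue rather than the OAuth flow.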
Keycloak console screenshots:
identity provider list
Relevant files:
Jupyterhub-values.yaml:
hub:
  config:
    Authenticator:
      enable_auth_state: true
    JupyterHub:
      authenticator_class: generic-oauth
    GenericOAuthenticator:
      client_id: jhubclient
      client_secret: abcsecret
      oauth_callback_url: https://jhub.company.com/hub/oauth_callback
      authorize_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
      token_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
      userdata_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
      login_service: keycloak
      username_key: preferred_username
      userdata_params:
        state: state
  extraEnv:
    OAUTH2_AUTHORIZE_URL: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
    OAUTH2_TOKEN_URL: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
    OAUTH_CALLBACK_URL: https://keycloak.company.com/hub/company
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
  hosts:
    - jhub.company.com
keycloak-values.yaml:
Mostly default values, but I added the following for HTTPS:
extraEnvVars:
  - name: KEYCLOAK_PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KEYCLOAK_ENABLE_TLS
    value: "true"
  - name: KEYCLOAK_FRONTEND_URL
    value: "https://keycloak.company.com/auth"
ingress:
  enabled: true
  servicePort: https
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.org/redirect-to-https: "true"
    nginx.org/server-snippets: |
      location /auth {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $host;
        proxy_set_header X-Forwarded-Proto $scheme;
      }
I could make it work with this configuration:
hub:
  config:
    Authenticator:
      enable_auth_state: true
      admin_users:
        - admin
      allowed_users:
        - testuser1
    GenericOAuthenticator:
      client_id: jhub
      client_secret: nrjNivxuJk2YokEpHB2bQ3o97Y03ziA0
      oauth_callback_url: https://jupyter.company.com/hub/oauth_callback
      authorize_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
      token_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
      userdata_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
      login_service: keycloak
      username_key: preferred_username
      userdata_params:
        state: state
    JupyterHub:
      authenticator_class: generic-oauth
Creating the ingress myself like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jhub-ingress
  namespace: jhub
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - jupyter.company.com
      secretName: letsencrypt-cert-tls-jhub
  rules:
    - host: jupyter.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy-http
                port:
                  number: 8000
I also removed the Oauth-proxy deployment, since this appears to already be handled by Keycloak and is actually redundant.
Then I created a regular user plus admin roles and groups in Keycloak.
It appears the users didn't have the proper permissions in Keycloak.

How does AKS handle the .env file in a container?

Assume there is a backend application with a private key stored in a .env file.
For the project file structure:
|-App files
|-Dockerfile
|-.env
If I run the Docker image locally, the application can be reached normally by using a valid public key during the API request. However, if I deploy the container into an AKS cluster using the same Docker image, the application fails.
I am wondering how a container in an AKS cluster handles the .env file. What should I do to solve this problem?
Moving this out of comments for better visibility.
First and most important: Docker is not the same as Kubernetes. What works in Docker won't necessarily work directly in Kubernetes. Docker is a container runtime, while Kubernetes is a container orchestration tool that sits on top of a container runtime (not always Docker nowadays; containerd is used as well).
There are many resources on the internet which describe the key differences, for example this one from the Microsoft docs.
First, configmaps and secrets should be created:
Creating and managing configmaps and creating and managing secrets
There are different types of secrets which can be created.
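As a rough sketch of creating them from literals (the names and values here are illustrative and only loosely match the env example below; note that Kubernetes object names cannot contain underscores):
kubectl create secret generic admin-password --from-literal=admin='S3cr3tPassw0rd'
kubectl create configmap db-config --from-literal=db_config='mysql://user@mysql.default.svc:3306/mydb'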
Use configmaps/secrets as environment variables.
Referring to configmaps and secrets as environment variables looks like this (configmaps and secrets have the same structure):
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - ...
    env:
    - name: ADMIN_PASS
      valueFrom:
        secretKeyRef: # a secret reference is used here because this is sensitive data
          key: admin
          name: admin-password
    - name: MYSQL_DB_STRING
      valueFrom:
        configMapKeyRef: # this is not sensitive data, so a configmap can be used
          key: db_config
          name: connection_string
...
Use configmaps/secrets as volumes (the data will be presented as files).
Below is an example of using secrets as files mounted in a specific directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
  - ...
    volumeMounts:
    - name: secrets-files
      mountPath: "/mnt/secret.file1" # the "secret.file1" file will be created in the "/mnt" directory
      subPath: secret.file1
  volumes:
  - name: secrets-files
    secret:
      secretName: my-secret # name of the Secret
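Coming back to the original .env question, a minimal sketch (the secret name app-env is illustrative) is to keep the private key out of the image and create the secret straight from the file:
kubectl create secret generic app-env --from-file=.env
Then mount it with the same volumes/volumeMounts pattern as above, using the mountPath (and subPath) where the application expects to find the .env file.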
There's a good article which explains and shows use cases of secrets, as well as their limitations, e.g. the size is limited to 1MiB.

Kubernetes SSO Github OAuth for multiple applications

So here is the deal. I am using Kubernetes and I want to protect the applications inside the cluster. Therefore I added an oauth2-proxy and, in case the user is not logged in, they are redirected to GitHub. After the login is done, the user is redirected to the app (Login Diagram). For now, I have two dummy deployments of an echo-http server (echo1 and echo2) and Jenkins. I am doing everything locally with minikube, so please don't mind the domain names.
In Jenkins, I installed the GitHub OAuth plugin and configured it as described in the multiple posts I found (e.g., Jenkins GitHub OAuth). I also created the GitHub OAuth application and set the callback. Since I want to have SSO for multiple applications besides Jenkins, I set the callback to https://auth.int.example.com/oauth2/callback instead of https://jenkins.int.example.com/securityRealm/finishLogin. Therefore, after logging in on GitHub, I get redirected to the Jenkins webpage but as a guest. If I try to log in, I end up with an error.
I used Helm to setup the oauth2-proxy (k8s-at-home/oauth2-proxy)
Am I missing something?
These are the ingress configurations of the oauth2-proxy and the ingress controller that I am using.
Nginx Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: "https://auth.int.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.int.example.com/oauth2/start?rd=https%3A%2F%2F$host$request_uri"
spec:
  tls:
  - hosts:
    - echo1.int.example.com
    - echo2.int.example.com
    - jenkins.int.example.com
    secretName: letsencrypt-prod
  rules:
  - host: echo1.int.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.int.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80
  - host: jenkins.int.example.com
    http:
      paths:
      - path:
        backend:
          serviceName: jenkins-service
          servicePort: 8080
      - path: /securityRealm/finishLogin
        backend:
          serviceName: jenkins-service
          servicePort: 8080
OAuth2-proxy Configuration
config:
  existingSecret: oauth2-proxy-creds
extraArgs:
  whitelist-domain: .int.example.com
  cookie-domain: .int.example.com
  provider: github
authenticatedEmailsFile:
  enabled: true
  restricted_access: |-
    my_email#my_email.com
ingress:
  enabled: true
  path: /
  hosts:
    - auth.int.example.com
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
  tls:
    - secretName: oauth2-proxy-https-cert
      hosts:
        - auth.int.example.com
Nice auth architecture you are building there!
I would say that you may have overlooked the fact that Jenkins has its own authentication. You also need to configure Jenkins itself to allow OAuth2 access via GitHub.
So what is really going on? Your Oauth proxy solution is great. You can build apps in your k8s cluster, without having to worry about user management or authentication directly from your app.
However, this is useful only for apps that don't have their own authentication mechanisms.
The Oauth proxy is simply protecting the access to the backend webserver. Once you are allowed by the proxy, you interact directly with the app, so if the app requires authentication, so will you as end user.
My advice would be to use the Oauth proxy for apps that don't have user management mechanisms, and leave open access to apps that have authentication mechanisms, like Jenkins. Otherwise you could end up with double authentication (proxy and Jenkins in this case), which is not so great.
Then, to keep the high-level concept of accessing your cluster with GitHub accounts, you need to configure those user-based apps to also make use of GitHub OAuth2. This way access to the cluster is homogeneous (you just need your GitHub account), but the actual integration has two different types: apps that don't require user management (they are protected by the OAuth proxy), and apps with authentication, which are then configured with GitHub's OAuth2 independently.
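A minimal sketch of that split, reusing the hostnames and service names from the question: keep the auth-url/auth-signin annotations on the echo ingress only, and give Jenkins its own ingress without them, so its GitHub OAuth plugin handles login directly:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    # note: no auth-url / auth-signin annotations here; Jenkins authenticates users itself
spec:
  tls:
  - hosts:
    - jenkins.int.example.com
    secretName: letsencrypt-prod
  rules:
  - host: jenkins.int.example.com
    http:
      paths:
      - backend:
          serviceName: jenkins-service
          servicePort: 8080
With this split the Jenkins GitHub OAuth plugin would use its own callback (the .../securityRealm/finishLogin URL mentioned in the question) rather than the shared oauth2-proxy callback.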

Kubernetes / Docker - SSL certificates for web service use

I have a Python web service that collects data from frontend clients. Every few seconds, it creates a Pulsar producer on our topic and sends the collected data. I have also set up a dockerfile to build an image and am working on deploying it to our organization's Kubernetes cluster.
The Pulsar code relies on certificate and key .pem files for TLS authentication, which are loaded via file paths in the test code. However, if the .pem files are included in the built Docker image, it will result in an obvious compliance violation from the Twistlock scan on our Kubernetes instance.
I am pretty inexperienced with Docker, Kubernetes, and security with certificates in general. What would be the best way to store and load the .pem files for use with this web service?
You can mount certificates into the Pod with a Kubernetes Secret.
First, you need to create a Kubernetes secret:
(Copy your certificate to a machine where kubectl is configured for your Kubernetes cluster, for example the file mykey.pem into the /opt/certs folder.)
kubectl create secret generic mykey-pem --from-file=/opt/certs/
Confirm it was created correctly:
kubectl describe secret mykey-pem
Mount your secret in your deployment (for example nginx deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: "/etc/nginx/ssl"
          name: nginx-ssl
          readOnly: true
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-ssl
        secret:
          secretName: mykey-pem
      restartPolicy: Always
After that, the .pem files will be available inside the container and you don't need to include them in the Docker image.
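As a quick sanity check, a sketch of confirming the mount from outside (substitute the pod name with one of the pods from the deployment above):
kubectl get pods -l app=nginx
kubectl exec <nginx-pod-name> -- ls -l /etc/nginx/ssl
Your application can then read the certificate and key over the same file paths it already uses in the test code, just pointing at the mount directory.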

Cannot pull image from remote Gitlab registry to Kubernetes

I've been trying to create a deployment of a Docker image in a Kubernetes cluster without luck; my deployment.yaml looks like:
apiVersion: v1
kind: Pod
metadata:
  name: application-deployment
  labels:
    app: application
spec:
  serviceAccountName: gitlab
  automountServiceAccountToken: false
  containers:
    - name: application
      image: example.org:port1/foo/bar:latest
      ports:
        - containerPort: port2
  volumes:
    - name: foo
      secret:
        secretName: regcred
But it fails to get the image.
Failed to pull image "example.org:port1/foo/bar:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://example.org:port1/v2/foo/bar/manifests/latest: denied: access forbidden
The secret used in deployment.yaml was created like this:
kubectl create secret docker-registry regcred --docker-server=${CI_REGISTRY} --docker-username=${CI_REGISTRY_USER} --docker-password=${CI_REGISTRY_PASSWORD} --docker-email=${GITLAB_USER_EMAIL}
Attempt #1: adding imagePullSecrets
...
imagePullSecrets:
- name: regcred
results in:
Failed to pull image "example.org:port1/foo/bar:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://example.org:port1/v2/foo/bar/manifests/latest: unauthorized: HTTP Basic: Access denied
Solution:
I created a deploy token under Settings > Repository > Deploy Tokens (with the read_registry scope), added the given values to environment variables, and the appropriate line now looks like:
kubectl create secret docker-registry regcred --docker-server=${CI_REGISTRY} --docker-username=${CI_DEPLOY_USER} --docker-password=${CI_DEPLOY_PASSWORD}
I had got the problematic line from tutorials and the GitLab docs, which describe deploy tokens but then use the problematic line in their examples.
I reproduced your issue and the problem is with the password you used while creating the registry secret. When creating a secret for a GitLab registry you have to use a personal token created in GitLab instead of a password.
You can create a token by going to Settings -> Access Tokens. Then you have to pick a name for your token, an expiration date and the token's scope.
Then create a secret as previously by running
kubectl create secret docker-registry regcred --docker-server=$docker_server --docker-username=$docker_username --docker-password=$personal_token
While creating a pod, you have to include:
imagePullSecrets:
- name: regcred
You need to add the imagePullSecrets to your deployment, so your pod will be:
apiVersion: v1
kind: Pod
metadata:
  name: application-deployment
  labels:
    app: application
spec:
  serviceAccountName: gitlab
  automountServiceAccountToken: false
  containers:
    - name: application
      image: example.org:port1/foo/bar:latest
      ports:
        - containerPort: port2
  imagePullSecrets:
    - name: regcred
Be sure that the secret and the pod are running in the same namespace.
Also make sure that the image you are pulling exists and has the right tag.
I notice you are trying to run the command in a GitLab CI pipeline; after running the create secret command, check that your secret is right (with the variables replaced).
You can verify whether you can log in to the registry and pull the image manually on some other Linux machine, to be sure that the credentials are right.
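A rough sketch of those checks (placeholders such as port1 and foo/bar are kept from the question; use the deploy token or personal token credentials):
# inspect what actually ended up in the secret
kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
# try the credentials manually from a machine with Docker installed
docker login example.org:port1 -u "${CI_DEPLOY_USER}" -p "${CI_DEPLOY_PASSWORD}"
docker pull example.org:port1/foo/bar:latest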
Creating a secret didn't work for me at first; I had to specify the namespace for the secret, and then it worked.
kubectl delete secret -n ${NAMESPACE} regcred --ignore-not-found
kubectl create secret -n ${NAMESPACE} docker-registry regcred --docker-server=${CI_REGISTRY} --docker-username=${CI_DEPLOY_USERNAME} --docker-password=${CI_DEPLOY_PASSWORD} --docker-email=${GITLAB_USER_EMAIL}
