I have configured the Kubernetes plugin to spin up slaves, but I am having problems with access control.
I get the following error when the master tries to spin up new pods (slaves):
Unexpected exception encountered while provisioning agent Kubernetes Pod Template
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/npd-test/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:315)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:266)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:237)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:230)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:208)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:643)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:300)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:636)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:581)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I have checked the access of the default service account (the token at /var/run/secrets/kubernetes.io/serviceaccount/token) and tried to create a pod at https://kubernetes.default/api/v1/namespaces/npd-test/pods using that token, and it works.
Not sure why the plugin is complaining that the service account does not have access.
I have tried configuring the Kubernetes plugin with both - none - credentials and a Kubernetes Service Account credential (there is no way to specify the account), but neither works.
It is odd that the service account worked when you tested it manually but doesn't work from Jenkins. In my setup, I had to add a RoleBinding giving the service account the edit role (my namespace is actually jenkins, but I changed it here to match yours):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: npd-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: default
  namespace: npd-test
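You can verify the binding by impersonating the service account (a quick check, assuming kubectl access to the cluster):
kubectl auth can-i create pods -n npd-test --as=system:serviceaccount:npd-test:default
# should print "yes" once the RoleBinding is applied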
After I did that, I configured the Kubernetes Cloud plugin like this, and it works for me:
Kubernetes URL: https://kubernetes.default.svc.cluster.local
Kubernetes server certificate key:
Disable https certificate check: off
Kubernetes Namespace: npd-test
Credentials: - none -
The following creates a ServiceAccount that can build from the namespace jenkins into the namespace build. I omitted the rules, but if you need them I can add them too (a rough sketch follows the manifests below).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: build
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
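As a rough sketch of those omitted rules (an assumption based on the Kubernetes plugin's documented requirements, not the answer's actual rules), the Role would need something like:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: build
rules:
# create and manage agent pods
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "get", "list", "watch"]
# attach to agent pods
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "get"]
# read agent pod logs
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "watch"]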
I'm fairly new to the k8s/GitHub deployment story, but I'm trying to set up a GitHub Action to deploy a Docker image to the k8s cluster. So far the image is getting built and pushed to the registry. However, the deploy job always fails with the error:
error: You must be logged in to the server (the server has asked for the client to provide credentials)
This is what the ci.yaml file looks like:
deploy-to-k8s:
  needs: push-api-image
  runs-on: ubuntu-latest
  steps:
    - name: Checkout source code
      uses: actions/checkout@v3
    - name: Set the Kubernetes context
      uses: azure/k8s-set-context@v3
      with:
        method: service-account
        k8s-url: https://------/v3   # the API endpoint
        k8s-secret: ${{ secrets.KUBERNETES_SECRET }}
    - name: Deploy to the k8s cluster
      uses: azure/k8s-deploy@v4.9
      with:
        namespace: staging
        skip-tls-verify: true
        manifests: |
          k8s/deployment.yaml
          k8s/service.yaml
          k8s/ingress.yaml
I created the service account (named github-deployment-action) in k8s and created the ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: github-deployment-action
rules:
- apiGroups: ["", "apps", "networking.k8s.io", "extensions"] # "" indicates the core API group
  resources: ["deployments", "services", "configmaps", "secrets", "ingresses"]
  verbs: ["get", "watch", "list", "patch", "update", "delete"]
Together with the ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: github-deployment-action
subjects:
- kind: ServiceAccount
  name: github-deployment-action
  namespace: staging
roleRef:
  kind: ClusterRole
  name: github-deployment-action
  apiGroup: rbac.authorization.k8s.io
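As a sanity check of the RBAC above (separate from the authentication failure itself), the bindings can be verified by impersonating the service account:
kubectl auth can-i update deployments -n staging --as=system:serviceaccount:staging:github-deployment-action
# prints "yes" or "no" depending on whether the ClusterRoleBinding took effect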
And the secret:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: github-deployment-action-token
  namespace: staging
  annotations:
    kubernetes.io/service-account.name: github-deployment-action
Then I copied the secret from k8s and added it to the secrets in GitHub. I generated the secret YAML using this command:
kubectl get secret github-deployment-action-token --namespace=staging -o yaml
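If you only need the raw token (for example, to test it directly with curl against the API server), it can be decoded from the same secret:
kubectl get secret github-deployment-action-token --namespace=staging -o jsonpath='{.data.token}' | base64 -d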
I'm not sure what's wrong here; maybe the k8s-url is wrong. If I run the command
kubectl config view
it gives back a local IP address (http://127.0.0.1:8001), which doesn't really make sense to put in a GitHub Action.
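For what it's worth, the API server URL of the current context can be read directly; on Rancher this may still be a local proxy rather than the externally reachable endpoint:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'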
The k8s version is 1.24 and the Rancher version is 2.7.0.
Your advice is highly appreciated.
I wrote a Go app that lists all the constraint violations in the cluster. When I built it as a Docker image and ran it in my pod, I got this error.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: opa
  labels:
    name: opa
spec:
  containers:
  - name: opa
    image: sathya0803/opa-task:latest
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 8000
ERROR:
revaa@revaa-Lenovo-E41-25:~/opa$ kubectl logs opa
2021/07/30 05:50:12 Using incluster K8S client
2021/07/30 05:50:12 Using incluster K8S client
2021/07/30 05:50:12 err: k8srequiredlabels.constraints.gatekeeper.sh is forbidden: User "system:serviceaccount:default:opa" cannot list resource "k8srequiredlabels" in API group "constraints.gatekeeper.sh" at the cluster scope
2021/07/30 05:50:12 listing constraints violations...
2021/07/30 05:50:12 data: null
As mentioned by @Ferdy Pruis, the service account you are using does not have the necessary privileges to perform the task via the Kubernetes API. Use the RBAC below to provision your own service account with the appropriate permissions.
This will grant the default service account view permissions. A more secure approach would be to create a new service account, grant it the view permissions, and assign that service account to the deployment configuration (see the sketch after the manifest below).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
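A minimal sketch of that more secure variant (all names here are hypothetical). Note that the built-in view role does not cover custom resources such as the Gatekeeper constraints unless they are aggregated into it, so an explicit rule is added for them:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opa-lister
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: constraint-viewer
rules:
# read access to Gatekeeper constraint resources, which "view" does not include by default
- apiGroups: ["constraints.gatekeeper.sh"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opa-lister-constraint-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: constraint-viewer
subjects:
- kind: ServiceAccount
  name: opa-lister
  namespace: default
The pod would then reference it via serviceAccountName: opa-lister in its spec.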
How to assign RBAC permissions to system:anonymous service account in Kubernetes?
To understand Kubernetes, I want to assign permissions to the system:anonymous service account to review permissions using kubectl auth can-i --list.
I have created the following role and rolebinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-review-access
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- kind: ServiceAccount
  name: anonymous
  namespace: default
After running kubectl apply -f ... on the above, I'm still not allowed to review access permissions anonymously:
$ kubectl auth can-i --list --as=system:anonymous -n default
Error from server (Forbidden): selfsubjectrulesreviews.authorization.k8s.io is forbidden: User "system:anonymous" cannot create resource "selfsubjectrulesreviews" in API group "authorization.k8s.io" at the cluster scope
How can I create a proper role and rolebinding to view permissions as the system:anonymous service account?
system:anonymous is not a service account. Requests that are not rejected by other configured authentication methods are treated as anonymous requests and are given a username of system:anonymous and a group of system:unauthenticated. The binding therefore needs a User subject rather than a ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-review-access
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- kind: User
  name: system:anonymous
  apiGroup: rbac.authorization.k8s.io
$ kubectl auth can-i --list --as=system:anonymous -n default
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
[SOLVED] See my answer below.
So I am trying to integrate KeyVault with an existing AKS Cluster using AAD Pod Identity.
I have closely followed the documentation for integrating it into the cluster, but for some reason I am getting a 403 when trying to access the Key Vault from the pod that has the aadpodidbinding.
My cluster has RBAC enabled, so I have used the YAML from the AAD Pod Identity GitHub repo for RBAC-enabled clusters.
Here is what my aadpodidentity.yml looks like:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
name: aks-kv-identity
spec:
type: 0
ResourceID: <full-resource-id-of-managed-id>
ClientID: <client-id-of-aks-cluster-sp>
My aadpodidentitybinding.yaml looks like:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
name: azure-identity-binding
spec:
AzureIdentity: aks-kv-identity
Selector: kv_selector
The deployment YAML for the pod I'd like to bind to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-access
  labels:
    app: data-access
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-access
  template:
    metadata:
      labels:
        app: data-access
        aadpodidbinding: kv_selector
    spec:
      containers:
      - name: data-access
        image: data-access:2020.02.22.0121
        ports:
        - containerPort: 80
My AKS SP also has the 'Reader' role assigned on the Key Vault.
I was able to solve my issue.
This is how I solved it:
1) In the aadpodidentity.yml I was using the client ID of the AKS SP when I needed to use the client ID of the Managed Identity.
2) I needed to ensure that my Managed Identity had the Reader role in my Resource Group.
3) I needed to assign the correct access policies to the Managed Identity in the Key Vault (see the CLI sketch after this list).
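For the third item, the Key Vault access policy can be granted with the Azure CLI; a sketch with placeholder names:
# grant the managed identity read access to secrets in the vault
az keyvault set-policy --name <your-keyvault> \
  --object-id <managed-identity-object-id> \
  --secret-permissions get list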
This article was actually super helpful. I just followed the steps here, and it worked perfectly.
Note: I added my Managed Identity in the resource group that has the AKS cluster, not the resource group of the resources for the AKS cluster (i.e. the one that starts with MC_).
So I am deploying a Jenkins instance inside my K8S cluster using Helm.
Here is the flow that I am following:
1) Create Namespace called jenkins-pipeline.
kubectl get ns jenkins-pipeline -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: 2018-09-15T16:58:33Z
  name: jenkins-pipeline
  resourceVersion: "25596"
  selfLink: /api/v1/namespaces/jenkins-pipeline
  uid: 9449b9e7-b908-11e8-a915-080027bfdbf9
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
2) Create ServiceAccount called jenkins-admin INSIDE namespace jenkins-pipeline.
kubectl get serviceaccounts -n jenkins-pipeline jenkins-admin -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-09-15T17:02:25Z
  name: jenkins-admin
  namespace: jenkins-pipeline
  resourceVersion: "25886"
  selfLink: /api/v1/namespaces/jenkins-pipeline/serviceaccounts/jenkins-admin
  uid: 1e921d43-b909-11e8-a915-080027bfdbf9
secrets:
- name: jenkins-admin-token-bhvdd
3) Create ClusterRoleBinding linking my ServiceAccount jenkins-admin to ClusterRole cluster-admin. (I know it is not best practice to give my deployment that much privilege, but I'm just testing locally for now.)
kubectl get clusterrolebindings.rbac.authorization.k8s.io jenkins-cluster-role-binding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-09-15T16:58:33Z
  name: jenkins-cluster-role-binding
  resourceVersion: "25597"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/jenkins-cluster-role-binding
  uid: 944a4c18-b908-11e8-a915-080027bfdbf9
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: jenkins-pipeline
4) Deploy my pod in namespace jenkins-pipeline.
5) Expose deployment using service in namespace jenkins-pipeline.
Jenkins comes up perfectly fine, but when I try to test my Kubernetes connection, it fails stating:
Error testing connection https://192.168.99.100:8443: Failure executing: GET at: https://192.168.99.100:8443/api/v1/namespaces/jenkins-pipeline/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:jenkins-pipeline:default" cannot list pods in the namespace "jenkins-pipeline".
I've configured this to the best of my knowledge. I created the service account in the namespace and gave this service account SUPER privileges. And yet it cannot list pods in its own namespace. Any help will be appreciated.
I tried to change the namespace in the Jenkins UI, but I have a feeling it defaults to jenkins-pipeline even if I don't state it.
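The binding itself can be sanity-checked by impersonating the service account (a quick check, assuming kubectl access to the cluster):
kubectl auth can-i list pods -n jenkins-pipeline --as=system:serviceaccount:jenkins-pipeline:jenkins-admin
If that prints yes while Jenkins still fails, note that the error above mentions system:serviceaccount:jenkins-pipeline:default, i.e. the Jenkins pod is not running as jenkins-admin at all.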
Thanks to David Maze's comment, I got it figured out. I was missing a crucial piece: making my deployment use the newly created ServiceAccount.
I needed to add it to the deployment file under spec.template.spec:
spec:
  containers:
  - env:
    - name: JAVA_OPTS
      value: -Djenkins.install.runSetupWizard=false
    image: jeunii/my-jenkins-base:1.0
    imagePullPolicy: IfNotPresent
    .
    .
  serviceAccount: jenkins-admin  # serviceAccountName is the preferred field in current API versions
  .
  .
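After redeploying, you can confirm which service account the pod actually runs as (the pod name is a placeholder):
kubectl -n jenkins-pipeline get pod <jenkins-pod-name> -o jsonpath='{.spec.serviceAccountName}'
# should print "jenkins-admin" rather than "default"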