How to assign RBAC permissions to system:anonymous service account in Kubernetes?
To understand Kubernetes RBAC, I want to grant the system:anonymous service account permission to review its own access using kubectl auth can-i --list.
I have created the following role and rolebinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-review-access
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- kind: ServiceAccount
  name: anonymous
  namespace: default
After applying the above with kubectl apply -f ..., I'm still not allowed to review access permissions anonymously:
$ kubectl auth can-i --list --as=system:anonymous -n default
Error from server (Forbidden): selfsubjectrulesreviews.authorization.k8s.io is forbidden: User "system:anonymous" cannot create resource "selfsubjectrulesreviews" in API group "authorization.k8s.io" at the cluster scope
How can I create a proper role and rolebinding to view permissions as the system:anonymous service account?

system:anonymous is not a service account. Requests that are not rejected by other configured authentication methods are treated as anonymous requests and are given a username of system:anonymous and a group of system:unauthenticated. Bind the ClusterRole to that user instead:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-review-access
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous
$ kubectl auth can-i --list --as=system:anonymous -n default
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
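Since anonymous requests also carry the group system:unauthenticated, an alternative (a sketch, not part of the original answer) is to bind the ClusterRole to that group instead of the single user:
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
Note that plain --as=system:anonymous impersonation does not add that group, so verifying the group-based binding needs kubectl auth can-i --list --as=system:anonymous --as-group=system:unauthenticated.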

Related

kubernetes plugin in jenkins is not working, Test connection is failing

Environment running:
My Jenkins master is running on an EC2 instance. Jenkins version: 2.249.1.
My Kubernetes cluster is running in EKS.
I have installed the Kubernetes plugin (1.27.7) and am trying to create a Jenkins agent in K8s. I have created a service account, role, and role binding. My YAML file looks as below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins
  namespace: default
  labels:
    "app.kubernetes.io/name": 'jenkins'
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: default
In place of the default namespace, I have given a specific namespace.
Below are the attached screenshots of my configuration.
I have added the credentials as secret text.
The error it is throwing:
Error testing connection https://xxxxxxxx.us-east-1.eks.amazonaws.com: java.io.FileNotFoundException: /var/lib/jenkins/.kube/config (No such file or directory)
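The FileNotFoundException shows the plugin falling back to a kubeconfig file instead of the configured credential. For a secret-text credential, the value is usually the service account's bearer token; assuming the jenkins-admin account from the YAML above lives in default and the cluster still auto-creates token secrets (pre-1.24 behavior), it can be extracted along these lines:
kubectl -n default get secret \
  "$(kubectl -n default get sa jenkins-admin -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 --decode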

Unable to load in-cluster configuration

I wrote a Go app which lists all the constraint violations in the cluster. When I built it as a Docker image and ran it in my pod, I got this error.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: opa
  labels:
    name: opa
spec:
  containers:
  - name: opa
    image: sathya0803/opa-task:latest
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 8000
ERROR:
revaa@revaa-Lenovo-E41-25:~/opa$ kubectl logs opa
2021/07/30 05:50:12 Using incluster K8S client
2021/07/30 05:50:12 Using incluster K8S client
2021/07/30 05:50:12 err:k8srequiredlabels.constraints.gatekeeper.sh is forbidden: User "system:serviceaccount:default:opa" cannot list resource "k8srequiredlabels" in API group "constraints.gatekeeper.sh" at the cluster scope
2021/07/30 05:50:12 listing constraints violations...
2021/07/30 05:50:12 data: null
As mentioned by @Ferdy Pruis, the service account you are using does not have the necessary privileges to perform the task via the Kubernetes API. Use the RBAC below to provision your own service account with the appropriate permissions.
This grants the default service account view permissions. A more secure approach would probably be to create a new service account, grant it the view permissions, and assign that service account to the deployment configuration (a sketch of that variant follows the binding below).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
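The more secure variant, sketched with an illustrative account name (opa-viewer is not from the original post), would be:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opa-viewer   # illustrative name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opa-viewer-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: opa-viewer
  namespace: default
with serviceAccountName: opa-viewer set in the pod spec.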
Refer to these links for creating and managing service accounts and IAM.
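Note also that the built-in view ClusterRole only aggregates resources labeled into it, so it may not cover Gatekeeper's constraint CRDs at all. If listing still fails, a dedicated ClusterRole scoped to the API group from the error is a reasonable sketch (the name is illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: constraint-viewer   # illustrative name
rules:
- apiGroups: ["constraints.gatekeeper.sh"]
  resources: ["k8srequiredlabels"]   # or "*" to cover every constraint kind
  verbs: ["get", "list", "watch"]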

Wrong subject name of SA for prometheus clusterrolebinding?

Since the SA name is "prometheus" in the file prometheus-serviceaccount.yaml,
kind: ServiceAccount
apiVersion: v1
metadata:
  name: prometheus
  labels:
    app: prometheus
  namespace: default
do you think the subject name in the file prometheus-clusterrolebinding.yaml should be "prometheus" rather than the current "default"?
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prometheus
  labels:
    app: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
There are similar problems for prometheus-proxy.
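If the intent is for Prometheus to run under the prometheus ServiceAccount, then presumably yes; the corrected subjects block would look like:
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default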

Authorize Jenkins to list pods in its own K8S namespace

So I am deploying a Jenkins instance inside my K8S cluster using Helm.
Here is the flow that I am following:
1) Create a Namespace called jenkins-pipeline.
kubectl get ns jenkins-pipeline -o yaml

apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: 2018-09-15T16:58:33Z
  name: jenkins-pipeline
  resourceVersion: "25596"
  selfLink: /api/v1/namespaces/jenkins-pipeline
  uid: 9449b9e7-b908-11e8-a915-080027bfdbf9
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
2) Create a ServiceAccount called jenkins-admin INSIDE the namespace jenkins-pipeline.
kubectl get serviceaccounts -n jenkins-pipeline jenkins-admin -o yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-09-15T17:02:25Z
  name: jenkins-admin
  namespace: jenkins-pipeline
  resourceVersion: "25886"
  selfLink: /api/v1/namespaces/jenkins-pipeline/serviceaccounts/jenkins-admin
  uid: 1e921d43-b909-11e8-a915-080027bfdbf9
secrets:
- name: jenkins-admin-token-bhvdd
3) Create a ClusterRoleBinding linking my ServiceAccount jenkins-admin to the ClusterRole cluster-admin. (I know it's not best practice to assign my deployment that much privilege, but I'm just testing locally for now.)
kubectl get clusterrolebindings.rbac.authorization.k8s.io jenkins-cluster-role-binding -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-09-15T16:58:33Z
  name: jenkins-cluster-role-binding
  resourceVersion: "25597"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/jenkins-cluster-role-binding
  uid: 944a4c18-b908-11e8-a915-080027bfdbf9
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: jenkins-pipeline
4) Deploy my pod in the namespace jenkins-pipeline.
5) Expose the deployment using a service in the namespace jenkins-pipeline.
Jenkins comes up perfectly fine, but when I try to test my Kubernetes connection, it fails stating:
Error testing connection https://192.168.99.100:8443: Failure executing: GET at: https://192.168.99.100:8443/api/v1/namespaces/jenkins-pipeline/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:jenkins-pipeline:default" cannot list pods in the namespace "jenkins-pipeline".
A snippet of the UI looks like:
I've configured this to the best of my knowledge. I created the serviceaccount in the namespace and gave this serviceaccount SUPER privileges. And yet it cannot list pods in its own namespace. Any help will be appreciated.
I tried to change the namespace in the Jenkins UI, but I have a feeling it defaults to jenkins-pipeline even if I don't state it.
Thanks to David Maze's comment, I got it figured out. I was missing a crucial piece: making my deployment use the newly created ServiceAccount.
It needed to be added to the deployment file under spec.template.spec:
spec:
  containers:
  - env:
    - name: JAVA_OPTS
      value: -Djenkins.install.runSetupWizard=false
    image: jeunii/my-jenkins-base:1.0
    imagePullPolicy: IfNotPresent
    .
    .
  serviceAccount: jenkins-admin
  .
  .
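As an aside, serviceAccount is the deprecated alias in PodSpec; current manifests usually set serviceAccountName: jenkins-admin in the same place, and either works.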

Add admin user for Jenkins to kubeadm kubernetes cluster

I created a single-master multi-node Kubernetes cluster using kubeadm, following the official Kubernetes guide.
I currently connect my laptop to the cluster via this command:
kubectl get nodes --username kubernetes-admin --kubeconfig ~/.kube/config
However, I now want to add a separate user (or the same actual user but a different name) for our Jenkins to run commands. I just want a separate username for access/logging purposes.
How can I easily add another "jenkins" username (possibly with its own cert) in the config file? Kubeadm automatically uses --authorization-mode=Node (or at least mine did).
Background info: Only people who may make any changes on our cluster currently have/need access, so I don't need to only give users access to certain namespaces etc. Also, keep in mind we will have a cluster per environment: dev, UAT, production, etc.
It's suitable to use a Kubernetes ServiceAccount and instruct your Jenkins deployment to use that account (with a bound Role):
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
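To confirm the binding took effect before wiring up Jenkins, impersonating the service account is a quick check (assuming everything above was applied in the default namespace):
kubectl auth can-i list pods --as=system:serviceaccount:default:jenkins -n default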
