Kubernetes Cluster Context with Multiple Namespaces - jenkins

I have a large pipeline with several developer groups that need different permission levels (using the Jenkins Kubernetes plugin).
For example, the QA team and the developer team have different service accounts in the Kubernetes cluster.
So I need to create several connections to the cluster, and for every connection I change the cluster context to a different namespace.
I want to use multiple namespaces in one Kubernetes context.
This is my current kubeconfig context:
- context:
    cluster: minikube
    namespace: user3
    user: minikube
How can I handle this with Kubernetes API calls or in the YAML files?
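For reference, a kubeconfig context can only carry a single namespace, so at the kubeconfig level the usual pattern is one context per namespace against the same cluster and user; a sketch (the context names here are hypothetical):

contexts:
- context:
    cluster: minikube
    namespace: user3
    user: minikube
  name: minikube-user3
- context:
    cluster: minikube
    namespace: qa
    user: minikube
  name: minikube-qa

You can then switch with kubectl config use-context minikube-qa.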
This is my example service account YAML file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev
subjects:
- kind: ServiceAccount
  name: dev

If you want one Jenkins instance to talk to the Kubernetes API with different service accounts, you need to create multiple Jenkins "clouds" in the configuration, each with its own credentials. Then, in your pipeline, set the cloud option to choose the right one, as sketched below.
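A minimal pipeline sketch, assuming two clouds named qa-cloud and dev-cloud have already been defined under Manage Jenkins (the cloud, container, and image names here are assumptions, not part of the original answer):

// Run this build's agent pod through the "qa-cloud" cloud, i.e. with the
// QA team's credentials and namespace as configured on that cloud.
podTemplate(cloud: 'qa-cloud', containers: [
    containerTemplate(name: 'kubectl', image: 'bitnami/kubectl', ttyEnabled: true, command: 'cat')
]) {
    node(POD_LABEL) {
        container('kubectl') {
            sh 'kubectl get pods'  // runs against the QA namespace
        }
    }
}

A second podTemplate with cloud: 'dev-cloud' gives the developer team's builds their own service account in the same way.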

Related

kubernetes plugin in jenkins is not working, Test connection is failing

Environment:
My Jenkins master is running on an EC2 instance, Jenkins version 2.249.1.
My Kubernetes cluster is running in EKS.
I have installed the Kubernetes plugin (1.27.7) and am trying to create a Jenkins agent in k8s. I have created a service account, role, and role binding. My YAML file looks as below.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins
  namespace: default
  labels:
    app.kubernetes.io/name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: default
In place of the default namespace, I have given a specific namespace.
Screenshots of my configuration are attached.
I have added the credentials as secret text.
The error it throws:
Error testing connection https://xxxxxxxx.us-east-1.eks.amazonaws.com: java.io.FileNotFoundException: /var/lib/jenkins/.kube/config (No such file or directory)

Deploy Jenkins in EKS cluster

I have a running EKS cluster with the web application deployed in the default namespace, and now I'm trying to install Jenkins using a k8s manifest file.
Here is the list of files I deployed. When I try to configure the Kubernetes cloud in Manage Jenkins - Configure System, I'm not able to validate the test connection.
Note: I'm trying to configure Jenkins using the service account method.
rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create","delete","get","list","patch","update"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create","delete","get","list","patch","update"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create","delete","get","list","patch","update"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create","delete","get","list","patch","update"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["create","delete","get","list","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
Can someone please help me?
For service account "system:serviceaccount:default:jenkins" to have access to list resource "pods" in API group "" in the namespace "jenkins", change your RoleBinding to:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: default
You can use the kubectl auth can-i command to test whether the service account can perform a required action after applying the RoleBinding.
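For example, using the names from this question (the service account lives in default, the Role in the jenkins namespace):

kubectl auth can-i list pods \
  --namespace jenkins \
  --as system:serviceaccount:default:jenkins
# prints "yes" once the RoleBinding above is applied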

Jenkins agent for connecting to Amazon EKS does not work

I am trying to configure a Kubernetes agent in my Jenkins to deploy microservices using a Jenkins pipeline.
I created an Amazon EKS cluster using the eksctl command. After cluster creation, I created a kubeconfig file to configure a secret file credential in Jenkins.
When I try to connect my Kubernetes agent to my cluster, I get an error:
Error testing connection https://<CLUSTER>.sk1.eu-west-3.eks.amazonaws.com: Failure executing: GET at: https://<CLUSTER>.sk1.eu-west-3.eks.amazonaws.com/api/v1/namespaces/default/pods. Message: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" in the namespace "default". Received status: Status(apiVersion=v1, code=403, details=StatusDetails(causes=[], group=null, kind=pods, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" in the namespace "default", metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Forbidden, status=Failure, additionalProperties={}).
Your config secret does not have enough permissions to perform basic tasks. Bind the role below to the service account whose token you used in the config secret:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: jenkins-master
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-master
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-master
subjects:
- kind: ServiceAccount
  name: jenkins-master  # replace with your service account name
For more details follow this article.
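If you still need the token for the Jenkins secret credential, one way to obtain it is sketched below (assuming the service account is named jenkins-master in the default namespace; on clusters before 1.24 a token Secret is created automatically instead):

kubectl create serviceaccount jenkins-master --namespace default
# Kubernetes >= 1.24 no longer auto-creates token Secrets; request one explicitly:
kubectl create token jenkins-master --namespace default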

How to deploy a deployment in another namespace in Kubernetes?

I'm using Jenkins deployed on Kubernetes. The Jenkins pods are deployed in the 'kubernetes-plugin' namespace and use the service account 'jenkins', which is defined below:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
But when I use kubectl apply -f web-api-deploy.yaml -n default in the Jenkins pipeline, it reports the following error:
deployments.extensions "news-app-web-api-dev" is forbidden: User "system:serviceaccount:kubernetes-plugin:jenkins" cannot get deployments.extensions in the namespace "default"
which means: you cannot deploy to namespace 'default' when using the service account 'jenkins' from namespace 'kubernetes-plugin'.
So is there a way to deploy a Deployment in another namespace? How?
If I'm not mistaken, this GitHub project gives steps to run in a different namespace. It all boils down to this: you need to create a ServiceAccount, Role, and RoleBinding in the other namespace and use it as noted in the documentation. Here is the relevant part:
Ensure you create the namespaces and roles with the following commands, then run the tests in namespace kubernetes-plugin with the service account jenkins (edit src/test/kubernetes/service-account.yml to use a different service account):
kubectl create namespace kubernetes-plugin-test
kubectl create namespace kubernetes-plugin-test-overridden-namespace
kubectl create namespace kubernetes-plugin-test-overridden-namespace2
kubectl apply -n kubernetes-plugin-test -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace2 -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test -f src/test/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace -f src/test/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace2 -f src/test/kubernetes/service-account.yml
Also applicable to your situation: create a new Role and RoleBinding in the default namespace referencing the jenkins ServiceAccount from the kubernetes-plugin namespace, like so:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-jenkins-default
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: roleb-jenkins-default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-jenkins-default
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: kubernetes-plugin
Note that the role- and roleb- prefixes as well as the -default suffix are added to the names for clarity. The same goes for explicitly listing namespace default, for easier bookkeeping.
This change should get you past the error mentioned in your question.
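As in the earlier answer, kubectl auth can-i can confirm the cross-namespace grant took effect (names taken from this question):

kubectl auth can-i get deployments \
  --namespace default \
  --as system:serviceaccount:kubernetes-plugin:jenkins
# should print "yes" after the Role and RoleBinding above are applied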

Add admin user for Jenkins to kubeadm kubernetes cluster

I created a single-master, multi-node Kubernetes cluster using kubeadm, following the official Kubernetes guide.
I currently connect my laptop to the cluster via this command:
kubectl get nodes --username kubernetes-admin --kubeconfig ~/.kube/config
However, I now want to add a separate user (or the same actual user under a different name) for our Jenkins to run commands. I just want a separate username for access/logging purposes.
How can I easily add another "jenkins" username (possibly with its own cert) to the config file? Kubeadm automatically uses --authorization-mode=Node (or at least mine did).
Background info: only people who may make changes to our cluster currently have/need access, so I don't need to restrict users to certain namespaces, etc. Also, keep in mind we will have one cluster per environment: dev, UAT, production, etc.
It's suitable to use a Kubernetes ServiceAccount and instruct your Jenkins deployment to use that account (with a bound Role):
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts  # image assumed; the original snippet omitted the container spec
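With serviceAccountName set, the pod automatically mounts that service account's credentials, and its API calls are attributed to system:serviceaccount:default:jenkins (here in the default namespace, since none is set) in audit logs, which gives you the separate identity you asked for. From inside the pod the standard mount can be inspected like this:

# Standard in-pod paths for the mounted service account credentials:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace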
