Unable to load in-cluster configuration - docker

I wrote a Go app that lists all the constraint violations in the cluster. When I build it as a Docker image and run it in my pod, I get this error.
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: opa
  labels:
    name: opa
spec:
  containers:
  - name: opa
    image: sathya0803/opa-task:latest
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 8000
ERROR:
revaa@revaa-Lenovo-E41-25:~/opa$ kubectl logs opa
2021/07/30 05:50:12 Using incluster K8S client
2021/07/30 05:50:12 Using incluster K8S client
2021/07/30 05:50:12 err: k8srequiredlabels.constraints.gatekeeper.sh is forbidden: User "system:serviceaccount:default:opa" cannot list resource "k8srequiredlabels" in API group "constraints.gatekeeper.sh" at the cluster scope
2021/07/30 05:50:12 listing constraints violations...
2021/07/30 05:50:12 data: null

As mentioned by Ferdy Pruis, the service account you are using does not have the necessary privileges to perform the task via the Kubernetes API. Use the RBAC below to provision your own service account with the appropriate permissions.
This grants the default service account view permissions. A more secure approach would be to create a new service account, grant it the view permissions, and assign that service account to the deployment configuration (a sketch of that approach follows the binding below).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
Refer to these links for creating and managing service accounts and IAM.
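Note that the built-in view ClusterRole does not necessarily cover custom resources such as the Gatekeeper constraints in the error above. A more targeted sketch uses a dedicated service account and a ClusterRole scoped to the constraints.gatekeeper.sh API group (the group and resource names below are taken from the error message; the account and role names are placeholders):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opa-viewer          # placeholder name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: constraint-viewer   # placeholder name
rules:
- apiGroups: ["constraints.gatekeeper.sh"]
  resources: ["k8srequiredlabels"]   # from the error; list every constraint kind you query
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: constraint-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: constraint-viewer
subjects:
- kind: ServiceAccount
  name: opa-viewer
  namespace: default
The pod would then set serviceAccountName: opa-viewer in its spec.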

Related

How to assign RBAC permissions to system:anonymous service account in Kubernetes?
To understand Kubernetes, I want to assign permissions to the system:anonymous service account to review permissions using kubectl auth can-i --list.
I have created the following role and rolebinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-review-access
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- kind: ServiceAccount
  name: anonymous
  namespace: default
After kubectl apply -f ... the above, I'm still not allowed to review access permissions anonymously:
$ kubectl auth can-i --list --as=system:anonymous -n default
Error from server (Forbidden): selfsubjectrulesreviews.authorization.k8s.io is forbidden: User "system:anonymous" cannot create resource "selfsubjectrulesreviews" in API group "authorization.k8s.io" at the cluster scope
How can I create proper role and rolebinding to view permissions as system:anonymous service account?
system:anonymous is not a service account. Requests that are not rejected by other configured authentication methods are treated as anonymous requests and given a username of system:anonymous and a group of system:unauthenticated.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-review-access
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: system:anonymous
$ kubectl auth can-i --list --as=system:anonymous -n default
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
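Since anonymous requests also carry the system:unauthenticated group (as noted above), an equivalent binding could target the group rather than the individual user. A minimal sketch (the binding name is a placeholder):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-review-access-group  # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-review-access
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:unauthenticated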

AKS AAD Pod Identity Throwing Forbidden (403)

[SOLVED] See my answer below.
So I am trying to integrate Key Vault with an existing AKS cluster using AAD Pod Identity.
I have closely followed the documentation for integrating it into the cluster, but for some reason I am getting a 403 when trying to access the Key Vault from the pod that carries the identity binding.
My cluster has RBAC, so I have used the YAML from the AAD Pod Identity GitHub repo for RBAC.
Here is what my aadpodidentity.yml looks like:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: aks-kv-identity
spec:
  type: 0
  ResourceID: <full-resource-id-of-managed-id>
  ClientID: <client-id-of-aks-cluster-sp>
My aadpodidentitybinding.yaml looks like:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: azure-identity-binding
spec:
  AzureIdentity: aks-kv-identity
  Selector: kv_selector
The pod's yaml that I'd like to bind to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-access
  labels:
    app: data-access
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-access
  template:
    metadata:
      labels:
        app: data-access
        aadpodidbinding: kv_selector
    spec:
      containers:
      - name: data-access
        image: data-access:2020.02.22.0121
        ports:
        - containerPort: 80
My AKS SP also has the 'Reader' role assigned on the Key Vault.
I was able to solve my issue. This is how I solved it:
1. In the aadpodidentity.yml I was using the client ID of the AKS SP when I needed to use the client ID of the Managed Identity.
2. I needed to ensure that my Managed Identity had the Reader role in my resource group.
3. I needed to assign the correct access policies to the Managed Identity in the Key Vault.
This article was actually super helpful. I just followed the steps here, and it worked perfectly.
Note: I added my Managed Identity in the resource group that had the AKS cluster, not the resource group of the resources for the AKS cluster (i.e. the one that starts with MC_).
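For steps 2 and 3, a rough Azure CLI sketch (all names and IDs below are placeholders, and the exact scopes and permissions depend on your setup):
# Grant the Managed Identity the Reader role on the resource group
az role assignment create \
  --assignee <managed-identity-client-id> \
  --role Reader \
  --resource-group <resource-group-with-aks-cluster>
# Allow the Managed Identity to read secrets from the Key Vault
az keyvault set-policy \
  --name <key-vault-name> \
  --object-id <managed-identity-object-id> \
  --secret-permissions get list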

Add admin user for Jenkins to kubeadm kubernetes cluster

I created a single-master, multi-node Kubernetes cluster using kubeadm, following the official Kubernetes guide.
I currently connect my laptop to the cluster via this command:
kubectl get nodes --username kubernetes-admin --kubeconfig ~/.kube/config
However, I now want to add a separate user (or same actual user but different name) for our Jenkins to run commands. I just want a separate username for access/logging purposes.
How can I easily add another "jenkins" username (possibly with its own cert) to the config file? Kubeadm automatically uses --authorization-mode=Node (or at least mine did).
Background info: Only people who may make any changes on our cluster currently have/need access, so I don't need to only give users access to certain namespaces etc. Also, keep in mind we will have a cluster per environment: dev, UAT, production, etc.
It's suitable to use a Kubernetes ServiceAccount and instruct your Jenkins deployment to use that account (with a bound Role):
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
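With this in place, API requests from the Jenkins pod are attributed to the user system:serviceaccount:<namespace>:jenkins, which gives you the separate identity for access/logging purposes that you asked about. If you also want a standalone credential for that account (e.g. for a kubeconfig), on Kubernetes 1.24+ you can mint a token for it; a sketch:
kubectl create token jenkins --duration=24h
The printed token can then be used with kubectl --token=... or placed in a kubeconfig user entry.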

Kubernetes Service External IP not being assigned

I have the following deployment yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: authentication
  labels:
    name: authentication
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: authentication-deployment
  namespace: authentication
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: authentication
    spec:
      containers:
      - name: authentication
        image: blueapp/authentication:0.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: authentication-deployment
    type: LoadBalancer
    externalName: authentication
I'm pretty new to Kubernetes, but my understanding of what I'm trying to do is: create a namespace, create a deployment of 2 pods in that namespace, and then create a load balancer to distribute traffic to those pods.
When I run
$ kubectl create -f deployment.yaml
everything creates fine, but the service never gets assigned an external IP.
Is there anything obvious that may be causing this?
Your service is of type NodePort.
To get a load balancer assigned to your service, use the LoadBalancer service type:
type: LoadBalancer
See documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
External IPs get assigned only in supported cloud environments, providing that your cloud provider is configured correctly.
Observe the error messages in the kube-controller-manager logs when you create your service.
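Putting both answers together, a corrected Service might look like the sketch below. Note that type: LoadBalancer belongs at the top level of spec, and the selector is changed to app: authentication so that it actually matches the pod template's labels (in the original YAML, the type and externalName lines were nested under selector, and the selector key name: authentication-deployment matched no pods):
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: authentication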

Default service account not working with kubernetes plugin on jenkins

I have configured the Kubernetes plugin to spin up slaves.
However, I am having problems with access control.
I get an error when the master tries to spin up new pods (slaves):
Unexpected exception encountered while provisioning agent Kubernetes Pod Template
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/npd-test/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:315)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:266)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:237)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:230)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:208)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:643)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:300)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:636)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:581)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I have checked the access of the default service account token located at /var/run/secrets/kubernetes.io/serviceaccount/token and tried to create a pod at https://kubernetes.default/api/v1/namespaces/npd-test/pods using the token, and it works.
Not sure why the plugin is complaining that the service account does not have access.
I have tried configuring the Kubernetes plugin with None credentials and a Kubernetes Service Account Credential (no way to specify account), but neither works.
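A sketch of that manual check, assuming the standard in-pod token and CA paths and a pod manifest saved as pod.json (the file name is a placeholder):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST https://kubernetes.default/api/v1/namespaces/npd-test/pods \
  -d @pod.json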
It is odd that the service account worked for you normally but didn't work in Jenkins. In my setup, I had to add a RoleBinding to give the service account the edit role (my namespace is actually jenkins but I changed it here to match your namespace).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: npd-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: default
  namespace: npd-test
After I did that, I configured the Kubernetes Cloud plugin like this and it works for me.
Kubernetes URL: https://kubernetes.default.svc.cluster.local
Kubernetes server certificate key:
Disable https certificate check: off
Kubernetes Namespace: npd-test
Credentials: - none -
The following creates a ServiceAccount in the namespace jenkins that builds into the namespace build. I omitted the rules, but if you need them I can add them too (a sketch of plausible rules follows the YAML below).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: build
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
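The omitted rules would likely resemble the pod-management Role shown in the kubeadm Jenkins answer earlier on this page; a sketch (the resources and verbs are assumptions, so trim them to what your builds actually need):
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]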
