AKS AAD Pod Identity Throwing Forbidden (403) - azure-aks

[SOLVED] See my answer below.
So I am trying to integrate KeyVault with an existing AKS Cluster using AAD Pod Identity.
I have closely followed the documentation for integrating it into the cluster, but for some reason I am getting a 403 when trying to access the Key Vault from the pod that carries the identity binding.
My cluster has RBAC enabled, so I have used the YAML from the AAD Pod Identity GitHub repo for RBAC-enabled clusters.
Here is what my aadpodidentity.yml looks like:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
name: aks-kv-identity
spec:
type: 0
ResourceID: <full-resource-id-of-managed-id>
ClientID: <client-id-of-aks-cluster-sp>
My aadpodidentitybinding.yaml looks like:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
name: azure-identity-binding
spec:
AzureIdentity: aks-kv-identity
Selector: kv_selector
The pod's yaml that I'd like to bind to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-access
  labels:
    app: data-access
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-access
  template:
    metadata:
      labels:
        app: data-access
        aadpodidbinding: kv_selector
    spec:
      containers:
      - name: data-access
        image: data-access:2020.02.22.0121
        ports:
        - containerPort: 80
My AKS SP also has the 'Reader' role assigned on the Key Vault.

I was able to solve my issue. This is how I solved it:
1) In aadpodidentity.yml I was using the client ID of the AKS SP when I needed to use the client ID of the Managed Identity.
2) I needed to ensure that my Managed Identity had the Reader role on my Resource Group.
3) I needed to assign the correct access policies to the Managed Identity in the Key Vault.
This article was actually super helpful. I just followed the steps there, and it worked perfectly.
Note: I added my Managed Identity in the Resource Group that holds the AKS cluster itself, not the Resource Group of the cluster's generated resources (i.e. the one whose name starts with MC_).
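For reference, the role assignment and Key Vault policy steps can also be scripted with the Azure CLI. A minimal sketch, assuming a user-assigned managed identity named aks-kv-identity in my-resource-group and a vault named my-key-vault (all hypothetical names):

# Look up the client ID of the managed identity.
CLIENT_ID=$(az identity show -g my-resource-group -n aks-kv-identity --query clientId -o tsv)
# Grant the identity the Reader role on the resource group.
az role assignment create --assignee $CLIENT_ID --role Reader \
  --scope /subscriptions/<subscription-id>/resourceGroups/my-resource-group
# Allow the identity to read secrets from the vault.
az keyvault set-policy -n my-key-vault --spn $CLIENT_ID --secret-permissions get list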

Related

How can I restrict outbound/egress pod traffic with Azure Container Instances/ACI via AKS

I have a pod for which I want to restrict most outbound/egress traffic, apart from to another k8s service and DataDog. I am doing this with a k8s NetworkPolicy in AKS and it seems to work fine.
I'd like to move the pod to running in Azure Container Instances/ACI via an AKS virtual node, but ACI doesn't support Kubernetes NetworkPolicies.
It's unclear to me how I could implement the same NetworkPolicy some other way, perhaps using Network Security Groups (but can I whitelist what I need to?) or Azure Firewall, or perhaps it's just not possible.
The network policy I want to implement is:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      "app.kubernetes.io/name": foo
  policyTypes:
  - Egress
  egress:
  - to: # Allow access to bar pods.
    - podSelector:
        matchLabels:
          "app.kubernetes.io/name": bar
  - to: # Allow access to DataDog for reporting to the agent.
    - namespaceSelector:
        matchLabels:
          name: datadog
  - to: # Allow access for kube-dns - needed for the pod to work.
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
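For the NSG route, a rough sketch of what the DNS and Datadog parts could look like as outbound security rules, assuming the ACI containers run in a subnet whose NSG you control and that Datadog's endpoint IPs are known (all resource names here are hypothetical; pod-level selectors like the bar rule cannot be expressed with NSGs, only IPs and ports):

# Allow DNS (UDP 53) out of the subnet.
az network nsg rule create -g my-rg --nsg-name aci-subnet-nsg -n allow-dns \
  --priority 100 --direction Outbound --access Allow --protocol Udp \
  --destination-port-ranges 53
# Allow HTTPS to the Datadog intake/agent addresses.
az network nsg rule create -g my-rg --nsg-name aci-subnet-nsg -n allow-datadog \
  --priority 110 --direction Outbound --access Allow --protocol Tcp \
  --destination-port-ranges 443 --destination-address-prefixes <datadog-ip-ranges>
# Deny everything else outbound.
az network nsg rule create -g my-rg --nsg-name aci-subnet-nsg -n deny-all-out \
  --priority 4096 --direction Outbound --access Deny --protocol '*' \
  --destination-port-ranges '*'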

Unable to load in-cluster configuration

I wrote a Go app which lists all the constraint violations in the cluster. When I tried to build it as a Docker image and run it in my pod, I got this error.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: opa
  labels:
    name: opa
spec:
  containers:
  - name: opa
    image: sathya0803/opa-task:latest
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 8000
ERROR:
revaa#revaa-Lenovo-E41-25:~/opa$ kubectl logs opa
2021/07/30 05:50:12 Using incluster K8S client
2021/07/30 05:50:12 Using incluster K8S client
2021/07/30 05:50:12 err: k8srequiredlabels.constraints.gatekeeper.sh is forbidden: User "system:serviceaccount:default:opa" cannot list resource "k8srequiredlabels" in API group "constraints.gatekeeper.sh" at the cluster scope
2021/07/30 05:50:12 listing constraints violations...
2021/07/30 05:50:12 data: null
As mentioned by @Ferdy Pruis, the service account you are using does not have the necessary privileges to perform the task via the Kubernetes API. Use the RBAC below to provision your own service account with the appropriate permissions.
This grants the default service account view permissions. A more secure approach would probably be to create a new service account, grant it the view permissions, and then assign that service account to the deployment configuration; a sketch of that variant follows the binding below.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
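For the more secure variant mentioned above, a sketch (the opa-lister name is hypothetical): create a dedicated service account, bind it to view, and reference it from the pod spec via serviceAccountName.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: opa-lister
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opa-lister-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: opa-lister
  namespace: default

Then set serviceAccountName: opa-lister in the pod spec. Note that the built-in view role only covers custom resources that opt into its aggregation, so you may still need a small ClusterRole that lists k8srequiredlabels explicitly if the forbidden error persists.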
Refer to these links for creating and managing service accounts and IAM.

Authorize Jenkins to list pods in its own K8S namespace

So I am deploying a Jenkins instance inside my K8S cluster using Helm.
Here is the flow that I am following:
1) Create Namespace called jenkins-pipeline.
kubectl get ns jenkins-pipeline -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: 2018-09-15T16:58:33Z
  name: jenkins-pipeline
  resourceVersion: "25596"
  selfLink: /api/v1/namespaces/jenkins-pipeline
  uid: 9449b9e7-b908-11e8-a915-080027bfdbf9
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
2) Create ServiceAccount called jenkins-admin INSIDE namespace jenkins-pipeline.
kubectl get serviceaccounts -n jenkins-pipeline jenkins-admin -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-09-15T17:02:25Z
  name: jenkins-admin
  namespace: jenkins-pipeline
  resourceVersion: "25886"
  selfLink: /api/v1/namespaces/jenkins-pipeline/serviceaccounts/jenkins-admin
  uid: 1e921d43-b909-11e8-a915-080027bfdbf9
secrets:
- name: jenkins-admin-token-bhvdd
3) Create ClusterRoleBinding linking my ServiceAccount jenkins-admin to the ClusterRole cluster-admin. (I know it is not best practice to assign my deployment that much privilege, but I'm just testing locally for now.)
kubectl get clusterrolebindings.rbac.authorization.k8s.io jenkins-cluster-role-binding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-09-15T16:58:33Z
  name: jenkins-cluster-role-binding
  resourceVersion: "25597"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/jenkins-cluster-role-binding
  uid: 944a4c18-b908-11e8-a915-080027bfdbf9
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: jenkins-pipeline
4) Deploy my pod in namespace jenkins-pipeline.
5) Expose deployment using service in namespace jenkins-pipeline.
Jenkins comes up perfectly fine, but when I try to test my Kubernetes connection, it fails stating:
Error testing connection https://192.168.99.100:8443: Failure executing: GET at: https://192.168.99.100:8443/api/v1/namespaces/jenkins-pipeline/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:jenkins-pipeline:default" cannot list pods in the namespace "jenkins-pipeline".
I've configured this to the best of my knowledge: I created the service account in the namespace and gave it superuser privileges, and yet it cannot list pods in its own namespace. Any help will be appreciated.
I tried changing the namespace in the Jenkins UI, but I have a feeling it defaults to jenkins-pipeline even if I don't state it.
Thanks to David Maze's pointer, I got it figured out. I was missing a crucial piece: making my deployment actually use the newly created ServiceAccount.
I needed to add it to the deployment file under spec.template.spec:
spec:
  containers:
  - env:
    - name: JAVA_OPTS
      value: -Djenkins.install.runSetupWizard=false
    image: jeunii/my-jenkins-base:1.0
    imagePullPolicy: IfNotPresent
    .
    .
  serviceAccountName: jenkins-admin
  .
  .
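To verify the binding before restarting Jenkins, you can impersonate the service account with kubectl:

kubectl auth can-i list pods \
  --as=system:serviceaccount:jenkins-pipeline:jenkins-admin \
  -n jenkins-pipeline
# Expected output: yes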

Kubernetes Service External IP not being assigned

I have the following deployment yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: authentication
  labels:
    name: authentication
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: authentication-deployment
  namespace: authentication
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: authentication
    spec:
      containers:
      - name: authentication
        image: blueapp/authentication:0.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: authentication-deployment
  type: LoadBalancer
  externalName: authentication
I'm pretty new to Kubernetes, but my understanding of what I'm trying to do is: create a namespace, create a deployment of two pods in that namespace, and then create a load balancer to distribute traffic to those pods.
When I run
$ kubectl create -f deployment.yaml
everything creates fine, but the service never gets assigned an external IP.
Is there anything obvious that may be causing this?
Your service is of type NodePort.
To get a load balancer assigned to your service, you should use the LoadBalancer service type:
type: LoadBalancer
See documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
External IPs are assigned only in supported cloud environments, provided that your cloud provider is configured correctly.
Observe the error messages in the kube-controller-manager logs when you create your service.
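Putting the two answers together, a corrected Service might look like the sketch below. Note that the selector must also match the pod labels, which in the Deployment template above are app: authentication, not name: authentication-deployment:

apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: authentication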

Default service account not working with kubernetes plugin on jenkins

I have configured the Kubernetes plugin to spin up slaves.
However, I am having problems with access control.
I get the following error when the master tries to spin up new pods (slaves):
Unexpected exception encountered while provisioning agent Kubernetes Pod Template
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/npd-test/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:315)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:266)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:237)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:230)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:208)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:643)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:300)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:636)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:581)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I have checked the access of the default service account located at /var/run/secrets/kubernetes.io/serviceaccount/token and tried to create a pod at https://kubernetes.default/api/v1/namespaces/npd-test/pods using the token, and it works.
Not sure why the plugin is complaining that the service account does not have access.
I have tried configuring the Kubernetes plugin with None credentials and a Kubernetes Service Account Credential (no way to specify account), but neither works.
It is odd that the service account worked for you normally but didn't work in Jenkins. In my setup, I had to add a RoleBinding to give the service account the edit role (my namespace is actually jenkins but I changed it here to match your namespace).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: npd-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: default
  namespace: npd-test
After I did that, I configured the Kubernetes Cloud plugin like this and it works for me.
Kubernetes URL: https://kubernetes.default.svc.cluster.local
Kubernetes server certificate key:
Disable https certificate check: off
Kubernetes Namespace: npd-test
Credentials: - none -
The following creates a service account to build from the namespace jenkins into the namespace build. I omitted the rules, but if you need them I can add them too (an illustrative example follows below).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: build
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
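The rules were omitted above. Purely as an illustration (an assumption; adjust the verbs and resources to what your builds actually need), a Role that lets Jenkins manage agent pods in the build namespace often looks something like:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: build
rules:
# Manage agent pods, attach to them, and read their logs.
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["get", "list", "watch", "create", "delete"]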
