AKS with managed identity: need a service principal to automate deployment using Bitbucket Pipelines

I have an AKS (Kubernetes) cluster that was created with a managed identity in the Azure portal.
I want to automate deployment to the cluster using Bitbucket Pipelines. For this, it seems I need a service principal:
script:
  - pipe: microsoft/azure-aks-deploy:1.0.2
    variables:
      AZURE_APP_ID: $AZURE_APP_ID
      AZURE_PASSWORD: $AZURE_PASSWORD
      AZURE_TENANT_ID: $AZURE_TENANT_ID
Is there a way to get this from the managed identity? Do I need to delete the cluster and re-create it with service principal? Are there any other alternatives?
Thanks!

Unfortunately, a managed identity can only be used from within Azure resources. The Bitbucket pipeline runs outside Azure, so it first needs a service principal with sufficient permissions to authenticate to Azure; only then can it manage Azure resources. Also, for AKS you cannot switch a cluster that was created with a managed identity over to a service principal.
So you do need to delete the existing AKS cluster and recreate it with a service principal. You can then use that same service principal in the pipeline to authenticate to Azure and manage the AKS cluster.
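As a minimal sketch, assuming the Azure CLI and placeholder names (myAKSClusterSP, myResourceGroup, myAKSCluster), creating the service principal and the new cluster could look like this; the appId/password it prints are what go into the pipeline variables:

# Create a service principal; note the appId (AZURE_APP_ID) and password (AZURE_PASSWORD) it outputs
az ad sp create-for-rbac --name myAKSClusterSP

# Recreate the cluster using that service principal instead of a managed identity
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --service-principal <appId> --client-secret <password>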

I wanted to post this for anyone looking.
The OP asked about retrieving the service principal details for a managed identity. While it is possible to retrieve the Azure resource ID and also the "username" of the service principal, as @charles-xu mentioned, using a managed identity for anything outside of Azure is not possible, because there is no way to access the password (also known as the client secret).
That said, you can find the command to retrieve your managed identity's service principal name in case you need it, for example to insert it into another Azure resource being created by Terraform. The command is documented here: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-cli
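For reference, a quick sketch of that lookup with the Azure CLI ("myVM" is a placeholder for the resource the identity is assigned to):

# List the service principal backing the managed identity by its display name
az ad sp list --display-name myVM --query "[].{displayName:displayName, appId:appId}" -o table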

Related

Spring Cloud Data Flow in AKS using ACR with a managed identity: metadata

We have a Spring Cloud Data Flow (SCDF) server deployed in Azure Kubernetes Service (AKS).
We defined a user-assigned managed identity for this AKS cluster.
This identity has the AcrPull role on an Azure Container Registry (ACR) that doesn't have an admin user.
In the SCDF documentation, only the authorization type 'basicauth' is described:
- spring.cloud.dataflow.container.registry-configurations[myazurecr].registry-host=tzolovazureregistry.azurecr.io
- spring.cloud.dataflow.container.registry-configurations[myazurecr].authorization-type=basicauth
- spring.cloud.dataflow.container.registry-configurations[myazurecr].user=[your Azure registry username]
- spring.cloud.dataflow.container.registry-configurations[myazurecr].secret=[your Azure registry access password]
But as we use a user-assigned managed identity, we don't have a user/secret.
We tried the authorization type 'anonymous' without success.
How can we configure SCDF so it is authorized to fetch the metadata of the container application?
It works by defining a service principal and using its application ID and secret:
- spring.cloud.dataflow.container.registry-configurations[myazurecr].registry-host=myazureacr.azurecr.io
- spring.cloud.dataflow.container.registry-configurations[myazurecr].authorization-type=basicauth
- spring.cloud.dataflow.container.registry-configurations[myazurecr].user=[service principal application ID]
- spring.cloud.dataflow.container.registry-configurations[myazurecr].secret=[service principal secret]
If you are always using the same private registry, you can configure an image-pull secret directly in AKS and reference it using:
spring.cloud.deployer.kubernetes.imagePullSecret
or
spring.cloud.deployer.kubernetes.imagePullSecrets
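A minimal sketch of that setup, assuming placeholder names (scdf-acr-pull, myazureacr, myazureacr-secret): create a service principal with pull rights on the registry, then store its credentials as an image-pull secret in the cluster:

# Service principal scoped to AcrPull on the registry; note the appId/password it prints
az ad sp create-for-rbac --name scdf-acr-pull --role AcrPull \
  --scopes $(az acr show --name myazureacr --query id -o tsv)

# Image-pull secret that spring.cloud.deployer.kubernetes.imagePullSecret can reference
kubectl create secret docker-registry myazureacr-secret \
  --docker-server=myazureacr.azurecr.io \
  --docker-username=<appId> \
  --docker-password=<password>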

Azure RBAC and AKS not working as expected

I have created an AKS cluster with AKS-managed Azure Active Directory and role-based access control (RBAC) enabled.
If I try to connect to the cluster using one of the accounts included in the Admin Azure AD groups, everything works as it should.
I am having some difficulties when I try to do this with a user which is not a member of the Admin Azure AD groups. What I did is the following:
Created a new user.
Assigned the roles Azure Kubernetes Service Cluster User Role and Azure Kubernetes Service RBAC Reader to this user.
Executed the following command: az aks get-credentials --resource-group RG1 --name aksttest
When I then execute kubectl get pods -n test, I get the following error: Error from server (Forbidden): pods is forbidden: User "aksthree@tenantname.onmicrosoft.com" cannot list resource "pods" in API group "" in the namespace "test"
I haven't done any RoleBinding in the cluster. According to the Microsoft documentation, no additional task should be needed in the cluster (such as creating a Role definition and RoleBinding).
My expectation is that when a user has the above two roles assigned, they should have read rights in the cluster. Am I doing something wrong?
Please let me know what you think,
Thanks in advance,
Mike
When you use AKS-managed Azure Active Directory, it enables authentication as an AD user, but authorization happens in Kubernetes RBAC only, so you have to configure Azure IAM and Kubernetes RBAC separately. For example, AKS adds the aks-cluster-admin-binding-aad ClusterRoleBinding, which grants access to the accounts included in the Admin Azure AD groups.
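If you stay with plain Kubernetes RBAC, a minimal sketch of granting that user read access in the test namespace (user name taken from the question) could be:

# Bind the built-in "view" ClusterRole to the AD user in the test namespace
kubectl create rolebinding aksthree-read-test \
  --clusterrole=view \
  --user=aksthree@tenantname.onmicrosoft.com \
  --namespace=test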
The Azure Kubernetes Service RBAC Reader role applies to Azure RBAC for Kubernetes Authorization, which is a feature on top of AKS-managed Azure Active Directory where both authentication and authorization happen through AD and Azure RBAC. It uses webhook token authentication at the API server to verify tokens.
You can enable Azure RBAC for Kubernetes Authorization on existing cluster which already has AAD integration:
az aks update -g <myResourceGroup> -n <myAKSCluster> --enable-azure-rbac
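With Azure RBAC enabled, the role assignment itself can be scoped to a single namespace; a sketch using the names from the question:

# Grant the reader role only on the "test" namespace of the cluster
az role assignment create \
  --assignee aksthree@tenantname.onmicrosoft.com \
  --role "Azure Kubernetes Service RBAC Reader" \
  --scope $(az aks show -g RG1 -n aksttest --query id -o tsv)/namespaces/test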

Access KeyVault from Azure Container Instance deployed in VNET

My Azure Container Instance is deployed in a VNET, and I want to store my keys and other sensitive variables in Key Vault and somehow access them. I found in the documentation that using managed identities is currently not supported when the ACI is in a VNET.
Is there another way to work around this limitation and still use Key Vault?
I'm trying to avoid environment variables and secret volumes, because this container will be scheduled to run every day, which means there will be a script with access to all the secrets, and I don't want to expose them in the script.
To access Azure Key Vault you will need a token. Are you OK with storing this token in a Kubernetes secret?
If you are, then any SDK or curl command can be used against the Key Vault REST API to retrieve the secret at run time: https://learn.microsoft.com/en-us/rest/api/keyvault/
If you don't want to use secrets/volumes to store the token for Key Vault, it would be best to bake the token into your container image and perhaps rebuild the image every day with a new token, managing its access in AKS at the same time within your CI process.
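A minimal sketch of that run-time retrieval, assuming you already hold a valid bearer token in $KV_TOKEN and that myvault/mysecret are placeholders:

# Fetch a secret from Key Vault over its REST API using a bearer token
curl -s -H "Authorization: Bearer $KV_TOKEN" \
  "https://myvault.vault.azure.net/secrets/mysecret?api-version=7.4"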

Running a pod as a service account to connect to a database with Integrated Security

I have a .NET Core service running on Azure Kubernetes Service in a Linux Docker image. It needs to connect to an on-premises database with Integrated Security. One of the service accounts in my on-premises AD has access to this database.
My question is: is it possible to run a pod under a specific service account so the service can connect to the database? (The other approach I took was to impersonate the call with WindowsIdentity.RunImpersonated, but that requires the DLL advapi32.dll, and I couldn't find a way to deploy it to the Linux container and make it run.)
A pod can run with the permissions of an Azure Active Directory service account if you install and implement AAD Pod Identity components in your cluster.
You'll need to set up an AzureIdentity and an AzureIdentityBinding resource in your cluster then add a label to the pod(s) that will use permissions associated with the service account.
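A minimal sketch of that pair, applied via kubectl (the identity name, resource ID, client ID, and selector are all placeholders):

kubectl apply -f - <<'EOF'
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: sql-access-identity
spec:
  type: 0          # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>
  clientID: <client-id>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: sql-access-binding
spec:
  azureIdentity: sql-access-identity
  selector: sql-access   # pods labeled aadpodidbinding: sql-access get this identity
EOF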
Please note that this approach relies on the managed identity or service principal associated with your cluster having the "Managed Identity Operator" role on the identity used to access SQL Server (the service account must exist in Azure Active Directory).
I suspect you may actually need the pods to take on the identity of a group managed service account (gMSA) that exists only in your local AD. I don't think this is supported in Linux containers (Windows nodes recently gained GA support for gMSAs).

Equivalent command for Azure vs. gcloud

I was wondering what the Azure equivalent of this command would be:
gcloud auth configure-docker
I'm trying to use docker push against Azure Kubernetes, but I can't because it asks for authentication.
I believe the az acr login command is the equivalent. There are different ways to authenticate, as documented.
You will likely want to go the service principal route, but if you are using AAD Pod Identity, then you could go the managed identity route, which is usually better since service principal passwords expire after one year.
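For reference, a quick sketch with a placeholder registry name, using your current az login context:

# Log in to the registry; afterwards docker push works against it
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:latest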
