Delete an Azure Key Vault-backed Secret Scope in Databricks

dbutils.secrets does not seem to have a method for deleting an existing Azure Key Vault-backed secret scope in Databricks.
Here is the documentation for creating and managing secret scopes in Databricks:
https://learn.microsoft.com/en-us/azure/databricks/security/secrets/secret-scopes#akv-ss
The documentation does list a method to delete a Databricks-backed secret scope, but none for a Key Vault-backed one.

Note: There is no dbutils.secrets command to delete secret scopes; you need to use the Databricks CLI instead.
You can use the same command documented for Databricks-backed scopes to delete Azure Key Vault-backed scopes:
databricks secrets delete-scope --scope <scope-name>
The command is the same whether the scope is Databricks-backed or Azure Key Vault-backed, as sketched below.
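For illustration, a minimal sequence with the Databricks CLI; the scope name my-akv-scope is a hypothetical placeholder:
# List the existing scopes to confirm the name, then delete the Key Vault-backed scope
databricks secrets list-scopes
databricks secrets delete-scope --scope my-akv-scope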
Hope this helps. Do let us know if you have any further queries.

Related

Accessing a secret from Google Secret Manager

I put a serviceAccount.json in Google Secret Manager, and I want to build an API service with FastAPI, a Python web framework.
I mounted the secret as a volume and want to read it as a file, but it replies "no such file". Can anyone please help?
Never store JSON service account keys in Google Secret Manager. If your workload is running in Cloud Run, you should use the service identity to grant permissions instead: https://cloud.google.com/run/docs/securing/service-identity
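As a sketch of that service-identity approach (the project, service, region, and service-account names here are hypothetical): grant the identity the Cloud Run service runs as direct access to secrets, so no key file needs to be stored or mounted at all.
# Find the service account the Cloud Run service runs as
gcloud run services describe my-service --region us-central1 \
  --format='value(spec.template.spec.serviceAccountName)'
# Grant that identity permission to read secrets in the project
gcloud projects add-iam-policy-binding my-project \
  --member='serviceAccount:my-sa@my-project.iam.gserviceaccount.com' \
  --role='roles/secretmanager.secretAccessor'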

Azure RBAC and AKS not working as expected

I have created an AKS cluster with AKS-managed Azure Active Directory and role-based access control (RBAC) enabled.
If I try to connect to the cluster using one of the accounts included in the admin Azure AD groups, everything works as it should.
I am having some difficulties when I try to do this with a user which is not a member of the admin Azure AD groups. What I did is the following:
created a new user
assigned the roles Azure Kubernetes Service Cluster User Role and Azure Kubernetes Service RBAC Reader to this user.
executed the following command: az aks get-credentials --resource-group RG1 --name aksttest
When I then execute the following command: kubectl get pods -n test I get the following error: Error from server (Forbidden): pods is forbidden: User "aksthree@tenantname.onmicrosoft.com" cannot list resource "pods" in API group "" in the namespace "test"
In the cluster I haven't done any RoleBinding. According to the documentation from Microsoft, there is no additional task that should be done in the cluster (like, for example, a Role definition and a RoleBinding).
My expectation is that when a user has the above two roles assigned, he should have read rights in the cluster. Am I doing something wrong?
Please let me know what you think,
Thanks in advance,
Mike
When you use AKS-managed Azure Active Directory, it enables authentication as an AD user, but authorization happens in Kubernetes RBAC only, so you have to configure Azure IAM and Kubernetes RBAC separately. For example, the integration adds the aks-cluster-admin-binding-aad ClusterRoleBinding, which provides access to the accounts included in the admin Azure AD groups; any other user needs a RoleBinding of their own.
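A minimal sketch of such a RoleBinding, created with kubectl for the user from the question (whether Kubernetes sees the user by UPN or by AAD object ID depends on your integration settings):
# Grant the AAD user read access in the test namespace via the built-in "view" ClusterRole
kubectl create rolebinding test-reader \
  --clusterrole=view \
  --user='aksthree@tenantname.onmicrosoft.com' \
  --namespace=test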
The Azure Kubernetes Service RBAC Reader role is applicable to Azure RBAC for Kubernetes Authorization, which is a feature on top of AKS-managed Azure Active Directory where both authentication and authorization happen with AD and Azure RBAC. It uses webhook token authentication at the API server to verify tokens.
You can enable Azure RBAC for Kubernetes Authorization on an existing cluster that already has AAD integration:
az aks update -g <myResourceGroup> -n <myAKSCluster> --enable-azure-rbac
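After enabling it, the Azure role assignment from the question takes effect, provided it is scoped to the cluster (or a namespace within it). A sketch, with placeholder IDs:
# Assign the built-in reader role at cluster scope
az role assignment create \
  --role "Azure Kubernetes Service RBAC Reader" \
  --assignee <user-object-id> \
  --scope /subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.ContainerService/managedClusters/aksttest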

AKS with managed identity. Need Service Principal to automate deployment using bitbucket pipeline

I have an AKS (Kubernetes) cluster created with a managed identity in the Azure portal.
I want to automate deployment to the cluster using Bitbucket Pipelines. For this, it seems I need a service principal.
script:
  - pipe: microsoft/azure-aks-deploy:1.0.2
    variables:
      AZURE_APP_ID: $AZURE_APP_ID
      AZURE_PASSWORD: $AZURE_PASSWORD
      AZURE_TENANT_ID: $AZURE_TENANT_ID
Is there a way to get this from the managed identity? Do I need to delete the cluster and re-create it with a service principal? Are there any other alternatives?
Thanks!
Unfortunately, a managed identity can only be used from inside Azure resources. The Bitbucket pipeline needs a service principal with enough permissions to access Azure before it can manage Azure resources. And for AKS, you can't change the managed identity that you enabled at creation into a service principal.
So finally, you need to delete the existing AKS cluster and recreate it with a service principal. Then you can use that same service principal to access Azure and manage the AKS cluster.
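A sketch of creating such a service principal with the Azure CLI (name and scope are placeholders); the appId, password, and tenant fields in its output map to the AZURE_APP_ID, AZURE_PASSWORD, and AZURE_TENANT_ID pipeline variables:
# Create a service principal scoped to the cluster's resource group
az ad sp create-for-rbac --name bitbucket-aks-deploy \
  --role Contributor \
  --scopes /subscriptions/<sub-id>/resourceGroups/<resource-group>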
I wanted to post this for anyone looking.
The OP asked here about retrieving the service principal details for a managed identity. While it is possible to retrieve the Azure resource ID and also the "username" of the service principal, as @charles-xu mentioned, using a managed identity for anything outside of Azure is not possible, because there is no way to access the password (also known as the client secret).
That being said, you can find the command needed to retrieve your managed identity's service principal name in case you need it, for example in order to insert it into another Azure resource being created by Terraform. The command is documented here: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-cli
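For reference, a sketch of that lookup with the Azure CLI (the display name is a placeholder; it typically matches the managed identity's name):
# Find the service principal backing a managed identity
az ad sp list --display-name <managed-identity-name> --query "[].appId" -o tsv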

Access KeyVault from Azure Container Instance deployed in VNET

An Azure Container Instance is deployed in a VNET, and I want to store my keys and other sensitive variables in Key Vault and somehow access them. I found in the documentation that using managed identities is currently a limitation once the ACI is in a VNET.
Is there another way to work around this restriction and still use Key Vault?
I'm trying to avoid environment variables and secret volumes, because this container will be scheduled to run every day, which means there will be some script with access to all the secrets, and I don't want to expose them in the script.
To access Azure Key Vault you will need a token; are you OK with storing this token in a k8s secret?
If you are, then any SDK or curl command can be used against the Key Vault REST API to retrieve the secret at run time: https://learn.microsoft.com/en-us/rest/api/keyvault/
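A minimal sketch of the curl approach (vault and secret names are placeholders; the token is obtained here via the Azure CLI, but any valid AAD token works):
# Obtain an AAD access token for the Key Vault resource
TOKEN=$(az account get-access-token --resource https://vault.azure.net --query accessToken -o tsv)
# Fetch the latest version of a secret via the REST API
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://<vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.4"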
If you don't want to use secrets/volumes to store the token for AKV, it would be best to bake the token into your container image, and maybe rebuild the image every day with a new token whose access in AKV you manage at the same time within your CI process.

Docker secrets and refresh tokens

I'm looking for a way to use Docker secrets. For every case where I don't need to update the stored value, that would be a perfect fit, but my app has multiple services that perform 3-legged OAuth authorization. After successfully obtaining all the tokens, a script collects them, creates secrets out of them, and applies the config of my docker-compose.yml file, with the containers using those secrets. The problem is when the tokens have to be refreshed and stored again as secrets: Docker does not allow updating secrets. What would be a possible workaround or better approach?
You do not update a secret or config in place. They are immutable. Instead, include a version number in your secret name. When you need to change the secret, create a new one with a new name, and then update your service with the new secret version. This will trigger a rolling update of your service.
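A sketch of that rotation for a swarm service (secret and service names are illustrative):
# Create the next version of the token as a new secret, reading the value from stdin
printf '%s' "$NEW_TOKEN" | docker secret create oauth_token_v2 -
# Swap the service over to the new version; Swarm performs a rolling update
docker service update \
  --secret-rm oauth_token_v1 \
  --secret-add source=oauth_token_v2,target=oauth_token \
  my_app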
