I have a Dockerfile that is used to build a Node project and run the "az login --service-principal" command. The Node project retrieves a secret value from Azure Key Vault.
When I run this Docker image locally, it successfully returns the secret I set in Azure Key Vault. However, after I deploy the same Docker image to AKS, it returns a 403 Forbidden error. Why does this happen?
I understand that this may not be the recommended way to authenticate to Azure Key Vault, but why does it fail?
A 403 Forbidden error means that the request was authenticated (Key Vault knows the requesting identity) but the identity does not have permission to access the requested resource. There are two common causes:
There is no access policy for the identity.
The IP address of the requesting resource is not approved in the key vault's firewall settings.
Since you are able to access the key vault from your local machine, the error must be caused by the key vault's firewall settings.
Check your Azure Key Vault networking settings. If you allow access only from selected networks, make sure to add the virtual network of the AKS node pool (VMSS) to the selected networks. A rough CLI sketch is below.
After that, you should be able to access key vault secrets from your AKS pod.
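This is only a sketch, assuming the vault restricts access to selected networks; the subscription, resource group, VNet, subnet, and vault names are placeholders you need to replace.
# Enable the Key Vault service endpoint on the AKS node subnet.
az network vnet subnet update \
  --resource-group myAksVnetResourceGroup \
  --vnet-name myAksVnet \
  --name myAksSubnet \
  --service-endpoints Microsoft.KeyVault
# Allow that subnet through the key vault firewall (passing the subnet as a full resource ID).
az keyvault network-rule add \
  --name myKeyVault \
  --subnet "/subscriptions/<subscription-id>/resourceGroups/myAksVnetResourceGroup/providers/Microsoft.Network/virtualNetworks/myAksVnet/subnets/myAksSubnet"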
Related
I have a Docker container that accesses Azure Key Vault. This works when I run it locally.
I set up an Azure Web App to host my container, and it cannot access the key vault:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 51.142.174.224 Caller:
I followed the suggestion from https://www.youtube.com/watch?v=QIXbyInGXd8 and:
I went to the web app in the portal and set the status to On.
Created an access policy.
And then I received the same error with a different IP:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 4.234.201.129 Caller:
My web app's IP address changes every time an update is made, so are there any suggestions on how to overcome this?
It might depend on your exact use case and what you want to achieve with your tests, but you could consider using a test double instead of the real Azure Key Vault while running your app locally or on CI.
If you are interested, feel free to check out Lowkey Vault.
I found a solution by setting up a virtual network and then whitelisting it in the key vault's network access settings. A rough sketch of the CLI steps is below.
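This is only a sketch, assuming regional VNet integration is available for your App Service plan; every resource name here is a placeholder, and the vault and VNet are assumed to be in the same resource group.
# Integrate the web app with a subnet.
az webapp vnet-integration add \
  --resource-group myResourceGroup \
  --name myWebApp \
  --vnet myVnet \
  --subnet myWebAppSubnet
# Enable the Key Vault service endpoint on that subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name myWebAppSubnet \
  --service-endpoints Microsoft.KeyVault
# Allow the subnet through the key vault firewall.
az keyvault network-rule add \
  --name myKeyVault \
  --vnet-name myVnet \
  --subnet myWebAppSubnet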
I am trying to use the Dapr secret store component with Azure Key Vault in my Azure Kubernetes cluster. I set it up exactly following https://docs.dapr.io/reference/components-reference/supported-secret-stores/azure-keyvault/ but I am not able to retrieve the secrets. When I change the secret store to a local file or Kubernetes secrets, everything works fine. With Azure Key Vault I get the following error:
{
"errorCode": "ERR_SECRET_GET",
"message": "failed getting secret with key {keyName} from secret store {storename}: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://{vault url}/secrets/{secret key}/?api-version=2016-10-01: StatusCode=404 -- Original Error: adal: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod {podname} in CREATED state failed after 16 attempts, retry duration [5]s. Error: <nil>\n"
}
I verified that the client secret I am using is correct. Can anyone please point me in the right direction?
The error indicates that the service principal does not have access to get the secrets from the key vault.
You can use a system-assigned managed identity for the AKS pod and add an access policy that allows it to read the key vault secrets.
Alternatively, you can use a service principal with an access policy to read the key vault secrets, or assign it the Key Vault Secrets User RBAC role so that it can fetch the secrets. A CLI sketch for the service principal approach is below.
Reference: Azure Key Vault secret store | Dapr Docs
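As a rough sketch (assuming a vault that uses access policies; the names below are placeholders), granting a service principal read access to secrets looks like this:
# Create a service principal for Dapr to authenticate with (skip if you already have one).
az ad sp create-for-rbac --name dapr-keyvault-sp
# Grant that service principal permission to read secrets in the vault.
az keyvault set-policy \
  --name myKeyVault \
  --spn <appId-from-the-previous-command> \
  --secret-permissions get list
The appId, tenant, and password from the first command would then go into the Dapr component's metadata, as described in the linked docs.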
I am running an application inside Docker that requires me to use google-bigquery. When I run it outside of Docker, I just have to go to the link below (redacted) and authorize. However, the link doesn't work when I copy-paste it from the Docker terminal. I have tried port mapping as well, with no luck either.
Code:
from google.oauth2 import service_account
from google.cloud import bigquery

# Load credentials from a service account key file.
credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
# Make clients.
client = bigquery.Client(credentials=credentials, project=credentials.project_id)
Response:
requests_oauthlib.oauth2_session - DEBUG - Generated new state
Please visit this URL to authorize this application:
Please see the available solutions on this page; it's constantly updated:
gcloud credential helper
Standalone Docker credential helper
Access token
Service account key
In short, you need to use a service account key file. Make sure you either use a secret manager or issue a service account key file specifically for the Docker image.
You need to place the service account key file into the Docker container either at build time or at runtime. A rough example of the runtime approach is below.
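This is just a sketch of mounting a key file at runtime; the paths, file names, and image name are placeholders, and it assumes your code reads the path from key_path or GOOGLE_APPLICATION_CREDENTIALS.
# Mount the key file read-only and tell the app where to find it.
docker run \
  -v /local/path/sa-key.json:/secrets/sa-key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa-key.json \
  my-bigquery-app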
I have an AKS (Kubernetes) cluster created with a managed identity in the Azure portal.
I want to automate deployment to the cluster using Bitbucket Pipelines. For this, it seems I need a service principal:
script:
  - pipe: microsoft/azure-aks-deploy:1.0.2
    variables:
      AZURE_APP_ID: $AZURE_APP_ID
      AZURE_PASSWORD: $AZURE_PASSWORD
      AZURE_TENANT_ID: $AZURE_TENANT_ID
Is there a way to get this from the managed identity? Do I need to delete the cluster and re-create it with a service principal? Are there any other alternatives?
Thanks!
Unfortunately, a managed identity can only be used from inside Azure resources. The Bitbucket pipeline first needs a service principal with enough permissions to access Azure before it can manage Azure resources. And for AKS, you can't switch a cluster that was created with a managed identity over to a service principal.
So you need to delete the existing AKS cluster and recreate it with a service principal. Then you can use that same service principal to access Azure and manage the AKS cluster. A rough sketch of the commands is below.
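As a sketch only (the names and resource group are placeholders):
# Create a service principal for the pipeline and the cluster.
az ad sp create-for-rbac --name bitbucket-aks-sp
# Recreate the cluster with that service principal.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --service-principal <appId> \
  --client-secret <password>
The same appId and password would then be supplied to the pipe as AZURE_APP_ID and AZURE_PASSWORD.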
I wanted to post this for anyone looking.
The OP asked here about retrieving the service principal details for a managed identity. While it is possible to retrieve the Azure resource ID and also the "username" of the service principal, as @charles-xu mentioned, using a managed identity for anything outside of Azure is not possible, because there is no way to access its password (also known as the client secret).
That being said, you can find the command necessary to retrieve your managed identity's service principal name in case you need it, for example to insert it into another Azure resource being created by Terraform. The command is documented here, and a short sketch is below: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-cli
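As a sketch (the identity name and resource group are placeholders):
# Show the managed identity's client and object IDs.
az identity show --name myManagedIdentity --resource-group myResourceGroup \
  --query "{clientId: clientId, principalId: principalId}"
# Or look up the backing service principal in Azure AD by display name.
az ad sp list --display-name myManagedIdentity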
My Azure Container Instance is deployed in a VNet, and I want to store my keys and other sensitive variables in Key Vault and somehow access them. I found in the documentation that using managed identities is currently a limitation once ACI is deployed in a VNet.
Is there another way to work around this limitation and still use Key Vault?
I'm trying to avoid environment variables and secret volumes, because this container will be scheduled to run every day, which means some script would have access to all the secrets, and I don't want to expose them in a script.
To access Azure Key Vault you will need a token. Are you OK with storing this token in a Kubernetes secret?
If you are, then any SDK or curl command can be used to call the Key Vault REST API and retrieve the secret at runtime (a curl sketch is below): https://learn.microsoft.com/en-us/rest/api/keyvault/
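As a sketch only (the vault name and secret name are placeholders, and how you obtain $TOKEN is up to you):
# Read a secret via the Key Vault REST API using a bearer token.
curl -s \
  -H "Authorization: Bearer $TOKEN" \
  "https://<vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.4"
# The JSON response contains the secret in its "value" field.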
If you don't want to use secrets/volumes to store the token for Key Vault, another option is to bake the token into your container image and rebuild the image every day with a new token, managing its access in AKS at the same time within your CI process.