We are using AKS in Azure, and we have RBAC set up so our admins can log in to the cluster.
We log in to the cluster and try something simple, listing the deployments with:
az aks get-credentials --resource-group groupname --name aksname --subscription subid
kubectl get deployments --all-namespaces=true
When running kubectl get deployments --all-namespaces=true we are asked to open a browser and enter a code; however, an error comes back in the browser and the login fails:
Request Id: 8d0fd182-1b31-45ee-8798-c7f799662c00
Correlation Id: ed808bbb-76b4-4cd7-b62b-da4807acc4ae
Timestamp: 2021-06-16T00:22:55.668Z
App name: Azure Kubernetes Service AAD Client
App id: 80faf920-1908-4b52-b5ef-a8e7bedfc67a
IP address: <IP>
Device identifier: Not available
Device platform: Windows 10
Device state: Unregistered
We have a Conditional Access policy that says users connecting from outside our office LAN must 1. use MFA, and 2. have a Hybrid Joined device.
The reported Device state: Unregistered seems incorrect, because our devices are Hybrid Joined. Possibly the Azure Kubernetes Service AAD Client is not able to pick up the device state?
We tried to add the Azure Kubernetes Service AAD Client as an exception in the CA policy, but it's not available in the app list.
Any suggestions? Could it be that the Azure Kubernetes Service AAD Client has a bug?
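One possible workaround (an assumption on my part, using the resource names from the question as placeholders): the device-code flow that AKS uses by default cannot present device state to Conditional Access, so converting the kubeconfig with kubelogin to the Azure CLI login mode, where the token comes from a normal az login session on the hybrid-joined machine, may avoid the failure. A sketch, assuming kubelogin is installed:

```shell
# Fetch credentials as before (resource group / cluster names are the
# placeholders from the question)
az aks get-credentials --resource-group groupname --name aksname --subscription subid

# Convert the kubeconfig's exec plugin from the device-code flow to the
# Azure CLI login mode, which reuses the existing `az login` session
kubelogin convert-kubeconfig -l azurecli

# Now kubectl uses the Azure CLI token instead of prompting for a device code
kubectl get deployments --all-namespaces
```

Since the browser-based `az login` runs directly on the hybrid-joined device, Conditional Access can evaluate the device state there rather than in the device-code flow.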
I have a Docker container that accesses Azure Key Vault; this works when I run it locally.
I set up an Azure web app to host my container, and it cannot access the Key Vault:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 51.142.174.224 Caller:
I followed the suggestion from https://www.youtube.com/watch?v=QIXbyInGXd8:
I went to the web app in the portal and set the status to On,
created an access policy,
and then received the same error with a different IP:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 4.234.201.129 Caller:
My web app's IP address changes every time an update is made, so are there any suggestions for how to overcome this?
It might depend on your exact use case and what you want to achieve with your tests, but you could consider using a test double instead of the real Azure Key Vault while running your app locally or on CI.
If you are interested, feel free to check out Lowkey Vault.
I found a solution: setting up a virtual network
and then whitelisting it in the Key Vault's network access rules.
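Scripted, that virtual-network approach might look like the following; a sketch with placeholder resource names (my-rg, my-vnet, my-subnet, my-webapp, my-keyvault are all assumptions, not from the question):

```shell
# Enable the Key Vault service endpoint on the subnet so the vault's
# firewall can recognize traffic from it
az network vnet subnet update \
  --resource-group my-rg --vnet-name my-vnet --name my-subnet \
  --service-endpoints Microsoft.KeyVault

# Integrate the web app with that subnet so its outbound calls go through it
az webapp vnet-integration add \
  --resource-group my-rg --name my-webapp \
  --vnet my-vnet --subnet my-subnet

# Allow the subnet through the Key Vault firewall, instead of whitelisting
# the web app's ever-changing outbound IPs
az keyvault network-rule add \
  --resource-group my-rg --name my-keyvault \
  --vnet-name my-vnet --subnet my-subnet
```

This sidesteps the changing-IP problem entirely, because the vault rule matches the subnet rather than individual client addresses.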
Context:
I have the JupyterHub Helm chart deployed on AWS EKS, following the instructions here: https://zero-to-jupyterhub.readthedocs.io/en/latest/kubernetes/index.html
For SSO across all our apps, we use Azure Gov AD.
In the Azure App Registration, as required for apps authenticated on Azure Gov, the authority endpoints are set up correctly to hit https://login.microsoftonline.us/<tenant_id>/oauth2/token, as shown in the attached pic.
However, when I try SSO into JupyterHub, the connection in Auth0 hits https://login.microsoftonline.com instead, as shown in the attached logs from the pod running JupyterHub, resulting in a 500 status error.
What could be causing the Auth0 connection to hit a different, wrong endpoint that is not specified in the AAD App Registration?
Did anyone face similar issues trying to authenticate an app on Azure Gov?
Is this an error on the Azure AD side, or in how OAuthenticator is configured on JupyterHub?
AAD Endpoints
k logs hub-78c6c9ff4f-znbxp -n jupyterhub2 --follow
[E 2022-03-03 02:15:58.325 JupyterHub oauth2:389] Error fetching 400 POST https://login.microsoftonline.com/<tenant_id>/oauth2/token: {
"correlation_id": <id>,
"error": "invalid_request",
"error_codes": [
900432
],
"error_description": "AADSTS900432: Confidential Client is not supported in Cross Cloud request.\r\nTrace ID: 9898f82e-b503-4c47-8ae4-859d8d54b500\r\nCorrelation ID: a86756e7-8aaa-4eb9-876a-1db5d145889d\r\nTimestamp: 2022-03-03 02:15:58Z",
"timestamp": "2022-03-03 02:15:58Z",
"trace_id": "9898f82e-b503-4c47-8ae4-859d8d54b500"
}
I got this resolved. TL;DR: the Azure AD OAuthenticator from JupyterHub isn't built for Azure Gov apps; it defaults to an endpoint that only works for non-Gov Azure apps.
So I had to create my own authenticator with the correct configuration from the Azure AD app registration.
Also, I was using the client secret's ID instead of the secret's value.
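For reference, one way to point JupyterHub directly at the Azure Government endpoints is via oauthenticator's GenericOAuthenticator, configured through the zero-to-jupyterhub chart's hub.config mechanism. A sketch; all IDs, secrets, and hostnames below are placeholders, and the exact scopes/claims depend on your app registration:

```yaml
# values.yaml fragment (zero-to-jupyterhub chart); <tenant_id> and the
# client credentials are placeholders for your Azure Gov app registration
hub:
  config:
    JupyterHub:
      authenticator_class: generic-oauth
    GenericOAuthenticator:
      client_id: "<client_id>"
      client_secret: "<client_secret_value>"   # the secret VALUE, not its ID
      authorize_url: "https://login.microsoftonline.us/<tenant_id>/oauth2/v2.0/authorize"
      token_url: "https://login.microsoftonline.us/<tenant_id>/oauth2/v2.0/token"
      userdata_url: "https://graph.microsoft.us/v1.0/me"
      oauth_callback_url: "https://<jupyterhub-host>/hub/oauth_callback"
      scope: ["openid", "profile", "email"]
      username_claim: "email"
```

Because every endpoint is spelled out explicitly, nothing falls back to the hard-coded public-cloud login.microsoftonline.com URL.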
I have a parent project that has an artifact registry configured for docker.
A child project has a cloud run service that needs to pull its image from the parent.
The child project also has a service account that is authorized to access the repository via the IAM role roles/artifactregistry.writer.
When I try to start my service I get an error message:
Google Cloud Run Service Agent must have permission to read the image,
europe-west1-docker.pkg.dev/test-parent-project/docker-webank-private/node:custom-1.
Ensure that the provided container image URL is correct and that the
above account has permission to access the image. If you just enabled
the Cloud Run API, the permissions might take a few minutes to
propagate. Note that the image is from project [test-parent-project], which
is not the same as this project [test-child-project]. Permission must be
granted to the Google Cloud Run Service Agent from this project.
I have tested connecting manually with docker login using the service account's private key, and the docker pull command works perfectly from my PC.
cat $GOOGLE_APPLICATION_CREDENTIALS | docker login -u _json_key --password-stdin https://europe-west1-docker.pkg.dev
> Login succeeded
docker pull europe-west1-docker.pkg.dev/bfb-cicd-inno0/docker-webank-private/node:custom-1
> OK
The service account is also attached to the Cloud Run service:
There are two types of service accounts used in Cloud Run:
The Google Cloud Run API service account
The runtime service account
In your explanation and your screenshot, you talk about the runtime service account: the identity that the service uses when it runs and calls Google Cloud APIs.
BUT before running, the service must be deployed. This time it's an internal Google Cloud Run process that runs to pull the container, create a revision, and do all the required internal work. For that job a service account also exists; it's named the "service agent".
In the IAM console you can find it; the format is the following:
service-<PROJECT_NUMBER>@serverless-robot-prod.iam.gserviceaccount.com
Don't forget to tick the checkbox in the upper right corner to include Google-managed service accounts.
If you want this deployment service account to be able to pull images from another project, grant the correct permission to it, not to the runtime service account.
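That grant can be sketched with gcloud; the parent project name is taken from the question, while the child project number is a placeholder you'd look up in the child project's settings:

```shell
# Grant the CHILD project's Cloud Run service agent read access to the
# PARENT project's Artifact Registry (project number is a placeholder)
gcloud projects add-iam-policy-binding test-parent-project \
  --member="serviceAccount:service-<CHILD_PROJECT_NUMBER>@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"
```

Note that roles/artifactregistry.reader is enough for pulling; the writer role granted to the runtime service account does not help the deployment step.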
I have created an AKS cluster with AKS-managed Azure Active Directory and role-based access control (RBAC) enabled.
If I try to connect with the Cluster by using one of the accounts which are included in the Admin Azure AD groups everything works as it should.
I am having some difficulties when I try to do this with a user that is not a member of the Admin Azure AD groups. What I did is the following:
created a new user
assigned the roles Azure Kubernetes Service Cluster User Role and Azure Kubernetes Service RBAC Reader to this user.
executed the following command: az aks get-credentials --resource-group RG1 --name aksttest
When I then execute the following command: kubectl get pods -n test, I get the following error: Error from server (Forbidden): pods is forbidden: User "aksthree@tenantname.onmicrosoft.com" cannot list resource "pods" in API group "" in the namespace "test"
In the cluster I haven't done any RoleBinding. According to the documentation from Microsoft, there is no additional task to be done in the cluster (such as a Role definition and RoleBinding).
My expectation is that when a user has the above two roles assigned, they should have read rights in the cluster. Am I doing something wrong?
Please let me know what you think,
Thanks in advance,
Mike
When you use AKS-managed Azure Active Directory, it enables authentication as an AD user, but authorization happens in Kubernetes RBAC
only, so you have to configure Azure IAM and Kubernetes RBAC separately. For example, it adds the aks-cluster-admin-binding-aad ClusterRoleBinding, which provides access to accounts that are included in the Admin Azure AD groups.
The Azure Kubernetes Service RBAC Reader role is applicable to Azure RBAC for Kubernetes Authorization, which is a feature on top of AKS-managed Azure Active Directory, where both authentication and authorization happen with AD and Azure RBAC. It uses the webhook token authentication technique at the API server to verify tokens.
You can enable Azure RBAC for Kubernetes Authorization on an existing cluster that already has AAD integration:
az aks update -g <myResourceGroup> -n <myAKSCluster> --enable-azure-rbac
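Alternatively, if you stay on plain Kubernetes RBAC (without --enable-azure-rbac), the AD user needs an explicit RoleBinding in the cluster. A sketch; the binding name is an assumption, while the namespace and username come from the question:

```yaml
# Grant the AAD user read-only access in the "test" namespace by binding
# the built-in "view" ClusterRole to them
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: aad-reader-binding
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view           # built-in read-only ClusterRole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: aksthree@tenantname.onmicrosoft.com
```

For groups, you would use kind: Group with the AAD group's object ID instead of the user principal name.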
I have a .NET Core service running on Azure Kubernetes Service and a Linux Docker image. It needs to connect to an on-premise database with Integrated Security. One of the service accounts in my on-premise AD has access to this database.
My question is: is it possible to run a pod under a specific service account so the service can connect to the database? (Another approach I took was to impersonate the call with WindowsIdentity.RunImpersonated; however, that requires the DLL "advapi32.dll" and I couldn't find a way to deploy it to the Linux container and make it run.)
A pod can run with the permissions of an Azure Active Directory service account if you install and implement AAD Pod Identity components in your cluster.
You'll need to set up an AzureIdentity and an AzureIdentityBinding resource in your cluster then add a label to the pod(s) that will use permissions associated with the service account.
Please note that this approach relies on the managed identity or service principal associated with your cluster having the "Managed Identity Operator" role granted against the identity used to access SQL Server (the service account must exist in Azure Active Directory).
I suspect you may have a requirement for the pods to take on the identity of a "group managed service account" that exists only in your local AD. I don't think this is supported in Linux containers (recently, Windows nodes gained GMSA support as a GA feature).
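The AzureIdentity and AzureIdentityBinding resources mentioned above could look roughly like this; a sketch using the aadpodidentity.k8s.io/v1 API, where the subscription, resource group, and identity names are placeholders for a user-assigned managed identity that has access to the database:

```yaml
# User-assigned managed identity the pods should assume
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: sql-access-identity
spec:
  type: 0                        # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
  clientID: <identity-client-id>
---
# Binding that matches pods via the aadpodidbinding label
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: sql-access-identity-binding
spec:
  azureIdentity: sql-access-identity
  selector: sql-access           # pods labeled aadpodidbinding: sql-access
```

Any pod carrying the label aadpodidbinding: sql-access would then obtain tokens as that identity.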