aks not showing the list of namespaces and workloads - azure-aks

Trying to learn AKS using Terraform. I created an AKS cluster using these Terraform configuration files.
After applying, I see that AKS is not showing the namespaces or workloads on the Azure portal, and it shows me this message.
namespaces is forbidden: User "1456657a-34b8-4930-b7af-94f462729cfk" cannot list resource "namespaces" in API group "" at the cluster scope: User does not have access to the resource in Azure. Update role assignment to allow access.. 'vivekx8dm#outlook.com' does not have the required Kubernetes permissions to view this resource. Ensure you have the correct role/role binding for this user or group.
Similarly for workloads as well.
But when I run the following commands from the console, I get the expected list of namespaces, pods, and services.
kubectl get pods
kubectl get ns
kubectl get svc
Any idea what's happening?
Update
I think I found the cause of the issue.
In the azurerm_kubernetes_cluster resource, I have the azure_active_directory_role_based_access_control argument. Because of this, Azure AD integration is enabled. We can observe this in the AKS overview tab, which shows Azure AD authentication with Azure RBAC.
I removed it by commenting it out in my code and ran the apply again. This time I see the following.
As you can see, the contrast is: Local accounts with Kubernetes RBAC.
Now it's showing the namespaces and workloads as expected.
The workloads show up now as well.
I need to understand this Azure AD integration more.
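For what it's worth, the error message itself hints at the other way out: keep the Azure AD integration and grant the signed-in user an Azure RBAC role on the cluster instead of disabling AAD. A minimal sketch, assuming the built-in "Azure Kubernetes Service RBAC Reader" role and placeholder names:
# Sketch of the alternative fix: keep Azure AD + Azure RBAC enabled and grant
# the portal user read access instead. RG_NAME, AKS_CLUSTER_NAME, and the
# assignee address are placeholders.
AKS_ID=$(az aks show -g RG_NAME -n AKS_CLUSTER_NAME --query id -o tsv)
az role assignment create \
  --assignee "user@example.com" \
  --role "Azure Kubernetes Service RBAC Reader" \
  --scope "$AKS_ID"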

When I tried to reproduce the issue, I noted that we have different kinds of authentication and authorization mechanisms:
AUTHENTICATION: LOCAL ACCOUNTS FOR USER & ADMIN ACCESS / AUTHORIZATION: KUBERNETES RBAC
az aks create -g RG_NAME -n AKS_CLUSTER_NAME
AUTHENTICATION: LOCAL ACCOUNTS FOR USER & ADMIN ACCESS / AUTHORIZATION: RBAC DISABLED
az aks create -g RG_NAME -n AKS_CLUSTER_NAME --disable-rbac
AUTHENTICATION: AZURE AD / AUTHORIZATION: KUBERNETES RBAC ONLY
az aks get-credentials --resource-group RG_NAME --name AKS_CLUSTER_NAME --admin
az aks get-credentials --resource-group RG_NAME --name AKS_CLUSTER_NAME --overwrite-existing
AUTHENTICATION: AZURE AD / AUTHORIZATION: KUBERNETES RBAC & AZURE RBAC / LOCAL ACCOUNTS DISABLED
az aks create -g RG_NAME -n AKS_CLUSTER_NAME \
  --enable-aad \
  --enable-azure-rbac \
  --disable-local-accounts
Kubernetes does not provide a built-in identity management mechanism, so this service offers the ability to integrate with Azure AD; the options above cover the different authentication and authorization combinations available with Kubernetes and Azure AD.
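To verify which combination a given cluster ended up with, the relevant fields can be read back from az aks show; a small sketch with placeholder names:
# Check a cluster's authentication/authorization mode. aadProfile is null
# for local-account clusters; disableLocalAccounts marks the AAD-only mode.
az aks show -g RG_NAME -n AKS_CLUSTER_NAME \
  --query "{aad: aadProfile, localAccountsDisabled: disableLocalAccounts}" -o json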

Related

How to build and deploy kubernetes cluster to Google Cloud using Cloud Build and Skaffold?

I am new to microservices technologies and having trouble with Google Cloud Build.
I am using Docker, Kubernetes, Ingress NGINX, and Skaffold, and my deployment works fine on my local machine.
Now I want to develop locally but build and run remotely using the Cloud Platform, so here's what I have done:
Set up a Kubernetes cluster in Google Cloud
Set local kubectl context to cloud cluster
Set up an Ingress Nginx load balancer
Enabled Cloud Build API (no trigger setup)
Here's what my deployment and skaffold YAML files look like:
When I run skaffold dev, it logs Some taggers failed. Rerun with -vdebug for errors., then it takes some time and a fair amount of my network bandwidth.
The image does get pushed to the Cloud Container Registry and I can access the app using the load balancer's IP address, but the Cloud Build history is still empty. What am I missing?
Note: right now I am not pushing my code to any online repository like GitHub.
Sorry if the information I provide is insufficient; I am new to these technologies.
Cloud Build started working:
First, in the Cloud Build settings, I enabled Kubernetes Engine, Compute Engine, and Service Accounts.
Then, I executed these 2 commands:
gcloud auth application-default login: as Google describes it, this will acquire new user credentials to use for Application Default Credentials.
As mentioned in the ingress-nginx deploy documentation for GCE-GKE, the following will initialize your user as a cluster-admin:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
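One more piece that may matter here (my assumption, not something the poster confirmed): Skaffold builds with the local Docker daemon unless told otherwise, and only builds submitted to Cloud Build show up in its history. A hedged sketch of the skaffold.yaml change, with a placeholder project ID:
# Hypothetical: route Skaffold builds through Cloud Build so they appear in
# the Cloud Build history. Add a build section like this to skaffold.yaml
# (my-gcp-project is a placeholder):
#
#   build:
#     googleCloudBuild:
#       projectId: my-gcp-project
#
# Then run the dev loop as usual:
skaffold dev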

Does az ml model deploy --overwrite or az ml service update temporarily put the service into a transitioning/unhealthy state?

When we update a webservice endpoint using az ml model deploy ... --overwrite or az ml service update ..., does it temporarily go out of service?
Since AKS is a managed Kubernetes cluster, does Azure ensure zero downtime by managing the pods and updating them so the service is always up and running?

Enable the Application Gateway Ingress Controller (AGIC) add-on for an AKS cluster using custom subnet

I followed this tutorial https://learn.microsoft.com/en-us/azure/application-gateway/tutorial-ingress-controller-add-on-new to install AGIC for an AKS cluster.
Per the docs, the default VNet is 10.0.0.0/8 and the subnet is 10.240.0.0/16. I don't want to use /8 and /16, so I changed the VNet to /20 and the subnet to /23.
After running this command, I see the k8s cluster but there is no App Gateway.
az aks create -n test-k8s -g "infra-k8s-test" --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name test-appgw --appgw-subnet-cidr "10.0.12.0/23" --generate-ssh-keys --location southcentralus --service-cidr "10.0.4.0/23" --dns-service-ip 10.0.4.10 --vnet-subnet-id "/subscriptions/efxxxxc9/resourceGroups/infra-test-k8s/providers/Microsoft.Network/virtualNetworks/infra-k8s-test-vnet/subnets/infra-k8s-test-subnet"
Waiting for AAD role to propagate[################################ ] 90.0000%
Could not create a role assignment for virtual network: subscriptions/efxxxxc9/resourceGroups/infra-k8s-test/providers/Microsoft.Network/virtualNetworks/infra-k8s-test-vnet specified in ingressApplicationGateway addon. Are you an Owner on this subscription?
I see there are many issues on GitHub related to custom subnets with AKS. Do we have a solution for this setup?
Set a variable to the ID of the existing subnet using the following command:
APPGW_SUBNET_ID=$(az network vnet subnet show -g $VNET_RG --vnet-name $VNET_NAME -n $SUBNET_NAME --query id -o tsv --subscription $SUBSCRIPTION)
Then replace --appgw-subnet-cidr with the option below:
--appgw-subnet-id $APPGW_SUBNET_ID
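Putting the two together, a hedged sketch of what the full create command might look like with the existing subnet (this simply swaps the flag in the command from the question; all names are the ones used above):
# Sketch: point the AGIC add-on at an existing, dedicated subnet by ID
# instead of asking it to carve out a new CIDR. The subnet must be
# dedicated to the Application Gateway, separate from the node subnet.
APPGW_SUBNET_ID=$(az network vnet subnet show -g $VNET_RG --vnet-name $VNET_NAME \
  -n $SUBNET_NAME --query id -o tsv --subscription $SUBSCRIPTION)
az aks create -n test-k8s -g "infra-k8s-test" \
  --network-plugin azure \
  --enable-managed-identity \
  -a ingress-appgw \
  --appgw-name test-appgw \
  --appgw-subnet-id "$APPGW_SUBNET_ID" \
  --generate-ssh-keys \
  --location southcentralus \
  --service-cidr "10.0.4.0/23" \
  --dns-service-ip 10.0.4.10 \
  --vnet-subnet-id "/subscriptions/efxxxxc9/resourceGroups/infra-test-k8s/providers/Microsoft.Network/virtualNetworks/infra-k8s-test-vnet/subnets/infra-k8s-test-subnet"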

Get kubectl credentials in Azure

I have created a K8s service (cluster) in the Azure portal.
I can retrieve my credentials with this command (works fine):
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
But I want to know how I can download the credentials from the Azure portal web interface (UI). Is there a way to do that?
Thanks
K8s credentials are not available from the portal UI. Ref: https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal#connect-to-the-cluster
You can connect to the cluster via the Cloud Shell CLI rather than your local terminal.
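If the goal is just to obtain the kubeconfig file itself rather than a portal download, az aks get-credentials can print it to stdout; a small sketch:
# Print the kubeconfig to stdout instead of merging it into ~/.kube/config;
# the output can then be saved wherever it is needed.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --file -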

How do I find out which service principal an azure kubernetes cluster uses?

We have created an AKS (Azure Kubernetes) cluster. How can I find out which service principal is assigned to the cluster?
It does not seem to be shown anywhere in the GUI.
The easiest way is opening up Cloud Shell and running:
az aks list
and looking at the result; it will show the ID of the service principal.
An alternative would be:
az aks show -n aks_name -g rg_name --query servicePrincipalProfile
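One caveat, in case the cluster was created with a managed identity rather than a service principal (my addition, not from the answer above): servicePrincipalProfile.clientId then just reads "msi", and the identity block is the place to look:
# On managed-identity clusters servicePrincipalProfile.clientId is "msi";
# the actual identity is reported here instead.
az aks show -n aks_name -g rg_name --query identity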
