I have created a K8s service (cluster) in the Azure Portal.
I can retrieve my credentials with this command (works fine):
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
But I want to know how I can download the credentials from the Azure Portal web interface (UI). Is there a way to do that?
Thanks
K8s credentials are not available from the portal UI. Ref: https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal#connect-to-the-cluster
You can, however, connect to the cluster via the Cloud Shell in the portal rather than your local terminal.
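For example, from the portal's built-in Cloud Shell you can run the same command and use kubectl straight away (a minimal sketch, reusing the resource names from the question):
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster   # merges credentials into ~/.kube/config
kubectl get nodes   # quick sanity check that the credentials work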
Related
Trying to learn AKS using Terraform. I created an AKS cluster using these Terraform configuration files.
After applying, I see that AKS is not showing the namespaces or workloads in the Azure Portal, and it shows me this message:
namespaces is forbidden: User "1456657a-34b8-4930-b7af-94f462729cfk" cannot list resource "namespaces" in API group "" at the cluster scope: User does not have access to the resource in Azure. Update role assignment to allow access.. 'vivekx8dm#outlook.com' does not have the required Kubernetes permissions to view this resource. Ensure you have the correct role/role binding for this user or group.
The same happens for workloads.
But when I run the following commands from the console, I get the expected lists of namespaces, pods, and services:
kubectl get pods
kubectl get ns
kubectl get svc
Any idea what's happening?
Update
I think I found the cause for the issue.
In the azurerm_kubernetes_cluster resource, I have the azure_active_directory_role_based_access_control argument. Because of this, some Azure AD integration is happening. We can observe this in the AKS Overview tab, which shows "Azure AD authentication with Azure RBAC".
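A quick way to confirm whether this AD integration is active (a hedged check; RG_NAME and AKS_CLUSTER_NAME are placeholders) is to query the cluster's AAD profile:
az aks show -g RG_NAME -n AKS_CLUSTER_NAME --query aadProfile   # non-null output means Azure AD integration is enabled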
I removed it by commenting out that block in my code and ran the apply again. This time I see the following: in contrast, it now shows "Local accounts with Kubernetes RBAC".
Now it's showing the namespaces and workloads as expected.
I need to understand this AD integration better.
When I tried to reproduce the issue, I noted that we have different kinds of authentication and authorization mechanisms:
AUTHENTICATION: LOCAL ACCOUNTS FOR USER & ADMIN ACCESS / AUTHORIZATION: KUBERNETES RBAC
az aks create -g RG_NAME -n AKS_CLUSTER_NAME
AUTHENTICATION: LOCAL ACCOUNTS FOR USER & ADMIN ACCESS / AUTHORIZATION: RBAC DISABLED
az aks create -g RG_NAME -n AKS_CLUSTER_NAME --disable-rbac
AUTHENTICATION: AZURE AD / AUTHORIZATION: KUBERNETES RBAC ONLY
az aks get-credentials --resource-group RG_NAME --name AKS_CLUSTER_NAME --admin
az aks get-credentials --resource-group RG_NAME --name AKS_CLUSTER_NAME --overwrite-existing
AUTHENTICATION: AZURE AD / AUTHORIZATION: KUBERNETES RBAC & AZURE RBAC / LOCAL ACCOUNTS DISABLED
az aks create -g RG_NAME -n AKS_CLUSTER_NAME \
  --enable-aad \
  --enable-azure-rbac \
  --disable-local-accounts
Kubernetes itself does not provide a built-in identity management solution, so AKS offers the ability to integrate with Azure AD; as shown above, there are different authentication and authorization options combining local accounts, Azure AD, Kubernetes RBAC, and Azure RBAC.
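If you want to keep Azure AD integration and instead fix the original "Update role assignment to allow access" error, one option (a hedged sketch; the role, user, and resource names are illustrative assumptions) is to grant the portal user an Azure RBAC role scoped to the cluster:
az role assignment create \
  --role "Azure Kubernetes Service RBAC Reader" \
  --assignee user@example.com \
  --scope $(az aks show -g RG_NAME -n AKS_CLUSTER_NAME --query id -o tsv)   # built-in read-only role at cluster scope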
I want to access a private Google Cloud Composer 2 environment directly from my local machine and execute Airflow CLI commands. The documentation mentions multiple ways to connect to a private environment, but nothing for my particular use case. I either have to log in to a GCE instance in the same VPC or allow public endpoint access.
What I am currently trying to do is create an SSH tunnel or SOCKS5 proxy to a VM instance (bastion-host) in the same VPC as my Composer environment, then export the PROXY variables in my shell and run the CLI command with gcloud:
gcloud compute ssh bastion-host -- -ND 8888
export {HTTP,HTTPS}_PROXY=socks5://localhost:8888
gcloud composer environments run composer --location europe-west1 dags list
But I am receiving the following error:
ERROR: gcloud crashed (ProxyError): HTTPSConnectionPool(host='composer.googleapis.com', port=443): Max retries exceeded with url: /v1/projects/my-project/locations/europe-west1/environments/composer?alt=json (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response')))
How can I resolve this issue?
It looks like gcloud composer environments run is not only accessing the GKE cluster via kubectl, but also making some Google API calls. As a workaround, I also had to set the NO_PROXY variable:
export NO_PROXY=googleapis.com
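Putting the workaround together, the full sequence looks roughly like this (bastion-host, region, and environment name are the ones from the question; running the tunnel in the background is my addition):
gcloud compute ssh bastion-host -- -ND 8888 &   # SOCKS5 tunnel in the background
export {HTTP,HTTPS}_PROXY=socks5://localhost:8888
export NO_PROXY=googleapis.com                  # let the Google API calls bypass the tunnel
gcloud composer environments run composer --location europe-west1 dags list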
I am new to microservices technologies and having trouble with Google Cloud Build.
I am using Docker, Kubernetes, Ingress NGINX, and Skaffold, and my deployment works fine on my local machine.
Now I want to develop locally but build and run remotely using Google Cloud, so here's what I have done:
In Google Cloud, I have set up a Kubernetes cluster
Set my local kubectl context to the cloud cluster (see the sketch after this list)
Set up an Ingress NGINX load balancer
Enabled Cloud Build API (no trigger setup)
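For reference, a rough sketch of what steps 1-2 look like from the CLI (cluster name and zone are assumptions):
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 2   # step 1: create the cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-a        # step 2: point local kubectl at it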
Here's what my deployment and skaffold YAML files look like:
When I run skaffold dev, it logs Some taggers failed. Rerun with -vdebug for errors., then it takes some time and consumes my network bandwidth.
The image does get pushed to the Cloud Container Registry and I can access the app using the load balancer's IP address, but the Cloud Build history is still empty. What am I missing?
Note: right now I am not pushing my code to any online repository like GitHub.
Sorry if the information I provided is insufficient; I am new to these technologies.
Cloud Build started working:
First, in the Cloud Build settings, I enabled Kubernetes Engine, Compute Engine, and Service Accounts.
Then, I executed these 2 commands:
gcloud auth application-default login: as Google describes it, this "will acquire new user credentials to use for Application Default Credentials".
As mentioned in the Ingress NGINX -> Deploy -> GCE-GKE documentation, the following will "initialize your user as a cluster-admin":
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
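After that, a quick way to confirm that builds are actually landing in Cloud Build (a hypothetical check, not from the original post):
gcloud builds list --limit 5   # recent Cloud Build runs should now show up here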
I'm trying to create a CI/CD pipeline in Azure DevOps for my application. With the build pipeline section I have no problem: it's a React app which gets built and hosted (nginx) via a Docker container, so building the Docker container and pushing it to my private Docker Hub works fine. But I actually can't figure out how to configure the release pipeline to deploy to an Azure Web App for Containers with a private Docker Hub, because there is no option to configure a service connection or login credentials. It would be nice if someone could help me and give a hint.
Thanks
But I actually can't figure out how to configure the release pipeline to deploy to an Azure Web App for Containers with a private Docker Hub, because there is no option to configure a service connection or login credentials.
You don't need to provide a service connection or login credentials for your private Docker Hub within the Azure Web App for Containers task or the Azure App Service Deploy task.
Instead, you should configure the credentials in the Azure Portal. Go to Azure Portal => App Service => Settings => Container settings and you'll see:
You can enter your credentials there, or you can configure them when creating the Azure App Service.
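If you prefer scripting it, the same Container settings can be set from the CLI; a hedged sketch with placeholder names:
az webapp config container set \
  --name my-webapp \
  --resource-group my-rg \
  --docker-custom-image-name mydockerhubuser/my-react-app:latest \
  --docker-registry-server-url https://index.docker.io \
  --docker-registry-server-user mydockerhubuser \
  --docker-registry-server-password MY_PASSWORD   # stores the private registry credentials on the App Service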
Since your Azure App Service can access your private Docker Hub, and the deployment tasks in the Azure DevOps pipeline can access the App Service via the Azure subscription input, the Azure Web App for Containers task can automatically pull from the private Docker Hub registry. (That's why you don't have/need an option to provide private Docker Hub credentials!)
We have created an AKS (Azure Kubernetes) cluster. How can I find out which service principal is assigned to the cluster?
It seems to be not shown anywhere in the GUI.
The easiest way is opening up Cloud Shell and doing this:
az aks list
and looking at the result; it will show the ID of the service principal.
An alternative would be doing this:
az aks show -n aks_name -g rg_name --query servicePrincipalProfile
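Note that newer clusters often use a managed identity instead of a service principal; in that case servicePrincipalProfile.clientId just shows "msi", and you can query the identity instead (same placeholder names):
az aks show -n aks_name -g rg_name --query identity   # shows the managed identity, if one is used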