Run kubectl command on AKS

I have deployed a container image onto AKS successfully.
Now I want to run a kubectl command with a JSON file against AKS from the pipeline once the container image is deployed.

First of all, you need to install the Azure CLI and kubectl on your system.
Install Azure Cli
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
Install Kubectl
https://kubernetes.io/docs/tasks/tools/
Once kubectl is installed, verify its version:
kubectl version --client --short
Client Version: v1.23.1
The version in your case might be different.
Now it is time to get the AKS credentials (kubeconfig) file to interact with the AKS cluster.
az login
Provide your Azure AD credentials.
az account set --subscription {subscription_id}
az aks get-credentials --resource-group MyAKSResourceGroup --name MyAksCluster
Verify that the cluster is connected:
kubectl config current-context
MyAksCluster
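As a quick sanity check that the credentials actually work, you can also list the nodes (the output will vary with your cluster):
kubectl get nodes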
You can play around with AKS and run all the commands you want. Here is a cheat sheet for kubectl.
Kubectl Cheat-Sheet
https://www.bluematador.com/learn/kubectl-cheatsheet
To run these commands using Azure DevOps, you need to create a service connection in Azure DevOps to authenticate it with AKS.
Project Settings --> Service Connections --> New Kubernetes Service Connection --> Azure Subscription
Now you can run Kubernetes commands against this AKS cluster using the built-in Kubernetes task, or with bash/PowerShell commands inside your pipeline.
Hope that helps. For example:
- task: Kubernetes@1
  inputs:
    connectionType: 'Kubernetes Service Connection'
    kubernetesServiceEndpoint: '12345'
    namespace: 'default'
    command: 'apply'
    useConfigurationFile: true
    configurationType: 'inline'
    inline: 'abcd'
    secretType: 'dockerRegistry'
    containerRegistryType: 'Azure Container Registry'
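If you prefer the bash route instead, a minimal sketch using the Azure CLI task could look like this; the service connection name and the manifest file name are assumptions, not values from the question:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-azure-service-connection'   # assumed service connection name
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # fetch kubeconfig for the cluster, then apply the JSON manifest
      az aks get-credentials --resource-group MyAKSResourceGroup --name MyAksCluster
      kubectl apply -f deployment.json   # assumed manifest file from the question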

Related

Run TFS Agent as Service in Docker Container

I'm trying to run a TFS agent as a service in a Windows Server Docker container. I am able to get the agent running if I use run.cmd, but when attempting to configure the agent to run as a service I get the error below.
I have ensured the account is a local administrator, and I have also tried the local system account with the same result. Thanks.
Exit code -1073741502 returned from process: file name 'C:\TFSAgent\bin\AgentService.exe', arguments 'init'.
Command I'm using:
.\config.cmd --unattended --url https://tfsurl --auth Negotiate --username username --password password --pool Sandbox --agent dockeragent --runasservice --windowslogonaccount username --windowslogonpassword password --replace
According to the document Define container jobs, you need to make sure that:
The agent must have permission to access the Docker daemon
To run a self-hosted agent in Docker, you can refer to the following documents:
Run a self-hosted agent in Docker
Running Azure DevOps private agents as docker containers
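Once an agent image is built per those documents, starting the container typically looks like this; the image name is an assumption, the URL, pool, and agent name reuse the question's values, and the PAT is a placeholder:

docker run -e AZP_URL=https://tfsurl -e AZP_TOKEN=<pat> -e AZP_POOL=Sandbox -e AZP_AGENT_NAME=dockeragent azp-agent:windows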

Jenkins uses the Kubernetes plugin to deploy slaves, but kubectl cannot be used in the slave containers

I use the Jenkins Kubernetes plugin to deploy slave nodes, but in the slave container kubectl cannot be used; the error is:
User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"
Previously I did a similar thing by running a slave with docker run, and that was fine; those docker containers could use kubectl. Why?
Thanks!
You need to create a cluster-admin role binding for your user:
kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts
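Note that this binding is extremely permissive: it grants cluster-admin to every service account in the cluster. A narrower alternative that covers exactly the error above would be a Role scoped to pods in the default namespace; the resource names here are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # illustrative name
  namespace: default
rules:
- apiGroups: [""]             # core API group, matching the error message
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding    # illustrative name
  namespace: default
subjects:
- kind: ServiceAccount
  name: default               # the service account from the error message
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io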

Openshift & docker - which registry can I use for Minishift?

It is easy to work with OpenShift as a Container-as-a-Service; see the detailed steps. So, via the docker client I can work with OpenShift.
I would like to work on my laptop with Minishift, the local version of OpenShift for your laptop.
Which docker registry should I use in combination with Minishift? Minishift doesn't have its own registry, I guess.
So, I would like to do:
$ mvn clean install -- building the application
$ oc login -- to your Minishift environment
$ docker build -t myproject/mynewapplication:latest .
$ docker tag -- ?? normally to an OpenShift docker registry entry
$ docker push -- ?? to a local docker registry?
$ on first run: oc new-app mynewapplication
$ on updates: oc rollout latest dc/mynewapplication -n myproject
I just use docker and oc cluster up, which is very similar. The internal registry that is deployed has an address in the 172.30.0.0/16 space (i.e. the default service network).
$ oc login -u system:admin
$ oc get svc -n default | grep registry
docker-registry ClusterIP 172.30.1.1 <none> 5000/TCP 14m
Now, this service IP is internal to the cluster, but it can be exposed on the router:
$ oc expose svc docker-registry -n default
$ oc get route -n default | grep registry
docker-registry docker-registry-default.127.0.0.1.nip.io docker-registry 5000-tcp None
In my example, the route was docker-registry-default.127.0.0.1.nip.io
With this route, you can log in with your developer account and your token:
$ oc login -u developer
$ docker login docker-registry-default.127.0.0.1.nip.io -p $(oc whoami -t) -u developer
Login Succeeded
Note: oc cluster up is ephemeral by default; the docs can provide instructions on how to make this setup persistent.
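With the login in place, the tag and push steps from the question would look roughly like this; the image name follows the question's example, and the internal registry expects the project name as the first path segment:

$ docker tag myproject/mynewapplication:latest docker-registry-default.127.0.0.1.nip.io/myproject/mynewapplication:latest
$ docker push docker-registry-default.127.0.0.1.nip.io/myproject/mynewapplication:latest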
One additional note: if you want OpenShift to try to use some of its native builders, you can simply run oc new-app . --name <appname> from within your source code directory.
$ cat Dockerfile
FROM centos:latest
$ oc new-app . --name=app1
--> Found Docker image 49f7960 (5 days old) from Docker Hub for "centos:latest"
* An image stream will be created as "centos:latest" that will track the source image
* A Docker build using binary input will be created
* The resulting image will be pushed to image stream "app1:latest"
* A binary build was created, use 'start-build --from-dir' to trigger a new build
* This image will be deployed in deployment config "app1"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/app1 --port=[port]' later
* WARNING: Image "centos:latest" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "centos" created
imagestream "app1" created
buildconfig "app1" created
deploymentconfig "app1" created
--> Success
Build scheduled, use 'oc logs -f bc/app1' to track its progress.
Run 'oc status' to view your app.
There is an internal image registry. You log in to it and push images just like you suggest. You just need to know the address and what credentials you need. For details see:
http://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-push-an-image-to-the-internal-image-registry.html

Azure container service creation issue

I am trying to bring up K8s Cluster in Azure. This is the error I am getting:
az aks create --resource-group upf-infra-ResourceGroup --name upf-infra-K8sCluster-1 --node-count 1 --generate-ssh-keys
Deployment failed. Correlation ID: d90bed78-075f-4a07-81c0-271dac75e0ca. PutControlPlane error
Include the Kubernetes version when creating your cluster, e.g.:
az aks create --resource-group <resource-group> --name <cluster-name> --node-count 1 --generate-ssh-keys --kubernetes-version 1.8.7
https://github.com/Azure/AKS/issues/284
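To see which Kubernetes versions are available in your region before picking one, you can query the CLI; the location here is just an example:

az aks get-versions --location eastus --output table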

Accessing another cluster from a Kubernetes pod

I'm running Jenkins in GKE. A step of the build uses kubectl to deploy to another cluster. I have the gcloud SDK installed in the Jenkins container. The build step in question does this:
gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud config set project XXXX
gcloud config set account xxxx@xxx.iam.gserviceaccount.com
gcloud container clusters get-credentials ANOTHER_CLUSTER
However I get this error (it works as expected locally though):
kubectl get pod
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Note: I noticed that with no config at all (~/.kube is empty) I'm able to use kubectl and get access to the cluster where the pod is currently running.
I'm not sure how it does that; does it use /var/run/secrets/kubernetes.io/serviceaccount/ to access the cluster?
EDIT: Not tested whether it works yet, but adding a service account to the target cluster and using that in Jenkins might work:
http://kubernetes.io/docs/admin/authentication/ (search jenkins)
See this answer here: kubectl oauth2 authentication with container engine fails
What you need to do before running gcloud auth activate-service-account --key-file /etc/secrets/google-service-account is to set gcloud to the old auth mode:
CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True
gcloud config set container/use_client_certificate True
I have not succeeded, however, using the other env var: GOOGLE_APPLICATION_CREDENTIALS.
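Putting it together, the sequence inside the Jenkins container would look roughly like this; the project, account, and cluster names are the question's own placeholders:

export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True
gcloud config set container/use_client_certificate True
gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud container clusters get-credentials ANOTHER_CLUSTER
kubectl get pod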
