Accessing another cluster from a Kubernetes pod - jenkins

I'm running Jenkins in GKE. A step of the build uses kubectl to deploy to another cluster. I have the gcloud SDK installed in the Jenkins container. The build step in question does this:
gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud config set project XXXX
gcloud config set account xxxx@xxx.iam.gserviceaccount.com
gcloud container clusters get-credentials ANOTHER_CLUSTER
However I get this error (it works as expected locally though):
kubectl get pod
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Note: I noticed that with no config at all (~/.kube is empty) I'm able to use kubectl and get access to the cluster where the pod is currently running.
I'm not sure how it does that; does it use /var/run/secrets/kubernetes.io/serviceaccount/ to access the cluster?
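Yes: when no kubeconfig is present, kubectl falls back to the in-cluster configuration, i.e. the service account token mounted at /var/run/secrets/kubernetes.io/serviceaccount/ plus the KUBERNETES_SERVICE_HOST/PORT environment variables. A minimal sketch of what that fallback amounts to, using curl (the namespace and resource are illustrative):
# What kubectl's in-cluster fallback boils down to:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/default/pods"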
EDIT: I haven't tested whether it works yet, but adding a service account to the target cluster and using that in Jenkins might work:
http://kubernetes.io/docs/admin/authentication/ (search jenkins)

See this answer here: kubectl oauth2 authentication with container engine fails
What you need to do before running gcloud auth activate-service-account --key-file /etc/secrets/google-service-account is to set gcloud to the old auth mode:
CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True
gcloud config set container/use_client_certificate True
I have not succeeded, however, using the other environment variable: GOOGLE_APPLICATION_CREDENTIALS.
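Putting the pieces together, the working order in the Jenkins step is roughly this (ZONE is a placeholder for the target cluster's zone):
export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True
gcloud config set container/use_client_certificate True
gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud container clusters get-credentials ANOTHER_CLUSTER --zone ZONE
kubectl get pod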

Related

Run Kubectl command on AKS

I have deployed a container image onto AKS successfully.
Now I want to run a command and apply a JSON file on AKS from the pipeline, once the container image is deployed.
First of all, you need to install the Azure CLI and kubectl on your system.
Install Azure Cli
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
Install Kubectl
https://kubernetes.io/docs/tasks/tools/
Once kubectl is installed, verify its version:
kubectl version --client --short
Client Version: v1.23.1
The version in your case might be different.
Now it's time to get the AKS credentials (the kubeconfig file) to interact with the AKS cluster:
az login
Provide the credentials for Azure AD.
az account set --subscription {subscription_id}
az aks get-credentials --resource-group MyAKSResourceGroup --name MyAksCluster
Verify that the cluster is connected:
kubectl config current-context
MyAksCluster
You can now play around with AKS and run any commands you want. Here is a cheat sheet for kubectl:
Kubectl Cheat-Sheet
https://www.bluematador.com/learn/kubectl-cheatsheet
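For the "run a command and apply a JSON file" part of the question, once the kubeconfig is in place this is just a kubectl call; a sketch, assuming the manifest is named deployment.json:
kubectl apply -f deployment.json
kubectl get pods --namespace default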
In order to run commands on AKS using Azure DevOps, you need to create a service connection in Azure DevOps to authenticate it with AKS:
Project Settings --> Service Connections --> New Kubernetes Service Connection --> Azure Subscription
Now you can run Kubernetes commands on this AKS cluster using the built-in Kubernetes task or bash|powershell commands inside your pipeline.
Hope that helps you.
E.g.:
- task: Kubernetes@1
  inputs:
    connectionType: 'Kubernetes Service Connection'
    kubernetesServiceEndpoint: '12345'
    namespace: 'default'
    command: 'apply'
    useConfigurationFile: true
    configurationType: 'inline'
    inline: 'abcd'
    secretType: 'dockerRegistry'
    containerRegistryType: 'Azure Container Registry'
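Alternatively, as mentioned above, the same can be scripted in a plain bash step of the pipeline; a sketch, assuming a service principal whose details are kept in the hypothetical variables SP_APP_ID, SP_PASSWORD and TENANT_ID:
# Non-interactive login with a service principal (variable names are illustrative)
az login --service-principal -u "$SP_APP_ID" -p "$SP_PASSWORD" --tenant "$TENANT_ID"
az aks get-credentials --resource-group MyAKSResourceGroup --name MyAksCluster
kubectl apply -f deployment.json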

Accessing Stored Jenkins credentials from Docker container

I am trying to trigger a gcloud CLI command in a Jenkins pipeline:
gcloud auth activate-service-account --key-file=user.json
I am currently using the Google Cloud SDK Docker image.
I have my private key stored as a credential on the Jenkins server. When running the command directly from the agent I can authenticate to the account; now I want to run the command inside a Docker container.
How can I access the private key stored in Jenkins from the Docker container?
I tried to access it directly and got the following error message:
ERROR: gcloud crashed (ValueError): No key could be detected.
Some assistance would be helpful.
I use a scripted pipeline.
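One sketch of how this could look, assuming the key is stored in Jenkins as a "Secret file" credential and bound to a variable KEY_FILE (e.g. via withCredentials in the scripted pipeline), so the file can be mounted into the container:
# KEY_FILE points at the secret file bound by Jenkins; mount it into the container
docker run --rm -v "$KEY_FILE":/tmp/key.json google/cloud-sdk \
  gcloud auth activate-service-account --key-file=/tmp/key.json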

Google Cloud Composer KubernetesPodOperator InvalidImage error

I am trying to run a docker image from private GCR using KubernetesPodOperator in Cloud Composer, but getting the following error:
ERROR: Pod launching failed : Pod took too long to start
I have tried the following till now:
At first I tried increasing the "startup_timeout_seconds" but it didn't help.
Looking at the Composer-created GKE cluster logs gave me the following error:
Failed to apply default image tag "docker pull us.gcr.io/my-proj-name/myimage-name:latest": couldn't parse image reference "docker pull us.gcr.io/my-proj-name/myimage-name:latest": invalid reference format: InvalidImageName
I tried pulling the same Docker image on my local machine from my private GCR and it worked fine, so I'm not sure where the issue is.
This link https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod tells me that:
"All pods in a cluster will have read access to images in this registry. The kubelet will authenticate to GCR using the instance’s Google service account. The service account on the instance will have a https://www.googleapis.com/auth/devstorage.read_only, so it can pull from the project’s GCR, but not push."
which means the pod should be able to pull the image from GCR. FYI, I am using a service account to provision my Composer environment, and it has sufficient permission to read from the GCS bucket.
Also, I did the following steps to add a secret:
gcloud container clusters get-credentials <cluster_name>
kubectl create secret generic gc-storage-rw-key --from-file=key.json=<path_to_serv_accnt_key>
secret_file = secret.Secret(
    deploy_type='volume',
    deploy_target='/tmp/secrets/google',
    secret='gc-storage-rw-key',
    key='<path of serv acct key file>.json')
I refer to it as secrets=[secret_file] inside the KubernetesPodOperator in my DAG.
I have added image_pull_policy='Always' in my DAG as well, but it's not working...
For reference, my CircleCI config.yml contains the following:
- run: echo ${GOOGLE_AUTH} > ${HOME}/gcp-key.json
- run: docker build --rm=false -t us.gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:latest .
- run: gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json
- run: gcloud --quiet config set project ${GCP_PROJECT}
- run: gcloud docker -- push us.gcr.io/${GCP_PROJECT}/${IMAGE_NAME}:latest
Could anyone please guide me?
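One detail worth noting in the quoted error: the reference the kubelet tried to parse literally contains the words "docker pull", which suggests the image argument given to the operator was the full pull command rather than the bare reference. Whatever value is used must parse as an image reference on its own; a sanity check from an authenticated machine:
# The operator's image value must be a bare reference, e.g.:
IMAGE="us.gcr.io/my-proj-name/myimage-name:latest"   # not "docker pull us.gcr.io/..."
gcloud auth configure-docker --quiet
docker pull "$IMAGE"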

docker push to gcr.io fails with "denied: Token exchange failed for project"

I've discovered a flow that works through GCP console but not through the gcloud CLI.
Minimal Repro
The following bash snippet creates a fresh GCP project and attempts to push an image to gcr.io, but fails with "access denied" even though the user is project owner:
gcloud auth login
PROJECT_ID="example-project-20181120"
gcloud projects create "$PROJECT_ID" --set-as-default
gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker --quiet
mkdir ~/docker-source && cd ~/docker-source
git clone https://github.com/mtlynch/docker-flask-upload-demo.git .
LOCAL_IMAGE_NAME="flask-demo-app"
GCR_IMAGE_PATH="gcr.io/${PROJECT_ID}/flask-demo-app"
docker build --tag "$LOCAL_IMAGE_NAME" .
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH"
docker push "$GCR_IMAGE_PATH"
Result
The push refers to repository [gcr.io/example-project-20181120/flask-demo-app]
02205dbcdc63: Preparing
06ade19a43a0: Preparing
38d9ac54a7b9: Preparing
f83363c693c0: Preparing
b0d071df1063: Preparing
90d1009ce6fe: Waiting
denied: Token exchange failed for project 'example-project-20181120'. Access denied.
The system is Ubuntu 16.04 with gcloud 225.0.0, the latest version as of this writing. The account I auth'ed with has the role roles/owner.
Inconsistency with GCP Console
I notice that if I follow the same flow through GCP Console, I can docker push successfully:
Create a new GCP project via GCP Console
Create a service account with roles/owner via GCP Console
Download JSON key for service account
Enable container registry API via GCP Console
gcloud auth activate-service-account --key-file key.json
gcloud config set project $PROJECT_ID
gcloud auth configure-docker --quiet
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH" && docker push "$GCR_IMAGE_PATH"
Result: Works as expected. Successfully pushes docker image to gcr.io.
Other attempts
I also tried using gcloud auth login as my @gmail.com account, then using that account to create a service account with gcloud, but that gets the same denied error:
SERVICE_ACCOUNT_NAME=test-service-account
gcloud iam service-accounts create "$SERVICE_ACCOUNT_NAME"
KEY_FILE="${HOME}/key.json"
gcloud iam service-accounts keys create "$KEY_FILE" \
--iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member "serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/owner
gcloud auth activate-service-account --key-file="${HOME}/key.json"
docker push "$GCR_IMAGE_PATH"
Result: denied: Token exchange failed for project 'example-project-20181120'. Access denied.
I tried to reproduce the same error using the bash snippet you provided; however, it successfully built the ‘flask-demo-app’ Container Registry image for me. I used the steps below to reproduce the issue:
Step 1: Used an account which has ‘role: roles/owner’ and ‘role: roles/editor’
Step 2: Created a bash script using your given snippet
Step 3: Added ‘gcloud auth activate-service-account --key-file skey.json’ in the script to authenticate the account
Step 4: Ran the bash script
Result: It created the ‘flask-demo-app’ Container Registry image
This leads me to believe that there might be an issue with your environment that is causing this error. To troubleshoot, you could try running your code on a different machine, on a different network, or even in Cloud Shell.
In my case, project IAM permissions were the issue. Make sure the proper permissions are granted and that the Cloud Resource Manager / Container Registry APIs are enabled.
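A quick way to verify both points from the CLI (YOUR_ACCOUNT is a placeholder for the member you authenticated as):
# Is the Container Registry API enabled?
gcloud services list --enabled | grep containerregistry
# Which roles does the account hold on the project?
gcloud projects get-iam-policy "$PROJECT_ID" \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:YOUR_ACCOUNT"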
GCP Access Control

GitLab CI ssh registry login

I have a GitLab project gitlab.com/my-group/my-project which has a CI pipeline that builds an image and pushes it to the project's GitLab registry registry.gitlab.com/my-group/my-project:tag. I want to deploy this image to Google Compute Engine, where I have a VM running docker.
Easy enough to do it manually by ssh'ing into the VM, then docker login registry.gitlab.com and docker run ... registry.gitlab.com/my-group/my-project:tag. Except the docker login command is interactive, which is a no-go for CI. It can accept a username and password on the command line, but that hardly feels like the right thing to do, even if my login info is in a secret variable (storing my GitLab login credentials in a GitLab secret variable?...)
This is the intended workflow on the Deploy stage of the pipeline:
Either install the gcloud tool or use an image with it preinstalled
gcloud compute ssh my-gce-vm-name --quiet --command \
"docker login registry.gitlab.com && docker run registry.gitlab.com/my-group/my-project:tag"
Since the gcloud command would be running within the GitLab CI Runner, it could have access to secret variables, but is that really the best way to log in to the GitLab Registry over ssh from GitLab?
I'll answer my own question in case anyone else stumbles upon it. GitLab creates ephemeral access tokens for each build of the pipeline that give the user gitlab-ci-token access to the GitLab Registry. The solution was to log in as the gitlab-ci-token user in the build.
.gitlab-ci.yml (excerpt):
deploy:
  stage: deploy
  before_script:
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
The docker login command creates a local configuration file at $HOME/.docker/config.json in which your credentials are stored; it looks like this (also see the documentation on this):
{
  "auths": {
    "<registry-url>": {
      "auth": "<credentials>"
    }
  }
}
As long as the config.json file is present on your host and your credentials (in this case simply being stored as base64("<username>:<password>")) do not change, there is no need to run docker login on every build or to store your credentials as variables for your CI job.
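For illustration, the auth value for the ephemeral-token login above could be produced like this (a sketch; docker login does this for you):
# base64("<username>:<password>") as stored under "auth" in config.json
echo -n "gitlab-ci-token:${CI_BUILD_TOKEN}" | base64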
My suggestion would be to simply ensure that the config.json file is present on your target machine (either by running docker login once manually or by deploying the file using whatever configuration management tool you like). This saves you from handling the login and managing credentials within your build pipeline.
Regarding the SSH login per se: this should work just fine. If you really want to eliminate the SSH login, you could set up the Docker engine on your target machine to listen on an external socket, configure authentication and encryption using TLS client certificates as described in the official documentation, and directly talk to the remote server's Docker API from within the build job:
variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag
We had the same "problem" with other hosting providers. Our solution is to use a custom script which runs on the target machine and can be called via a REST API endpoint (secured by Basic Auth or whatever).
So you could just trigger the remote host to do the docker login and upgrade your service without granting SSH access via gitlab-ci.
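From the CI job, triggering such an endpoint could then be a single call; the URL and credentials below are purely illustrative:
curl --request POST --user "deploy:${DEPLOY_SECRET}" \
  "https://my-vm.example.com/hooks/upgrade-service"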
