I have a GKE cluster with a running Jenkins master. I am trying to start a build using a pipeline with a slave configured by the Kubernetes plugin (pod templates). I have a custom image for my Jenkins slave published in GCR (private access), and I have added credentials (a Google service account) for my GCR to Jenkins. Nevertheless, Jenkins/Kubernetes fails to start a slave because the image can't be pulled from GCR. When I use public images (jnlp) there is no issue.
But when I try to use the image from GCR, Kubernetes says:
Failed to pull image "eu.gcr.io/<project-id>/<image name>:<tag>": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
This happens although the pod is running in the same project as the GCR registry. I would expect Jenkins to start the slave even when I use an image from GCR.
Even if the pod is running in a cluster in the same project, it is not authenticated by default.
You stated that you've already set up the service account, and I assume a key for it has been furnished to the Jenkins server.
If you're using the Google OAuth Credentials Plugin, you can then also use the Google Container Registry Auth Plugin to authenticate to the private GCR repository and pull the image.
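Alternatively, you can let Kubernetes itself handle the pull by attaching an image pull secret to the pod template, independent of any Jenkins plugin. A minimal sketch, assuming the service account key is in key.json and the agents run in the jenkins namespace (both placeholders):

# create a docker-registry secret from the service account key;
# _json_key is GCR's conventional username for key-file authentication
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=https://eu.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=unused@example.com \
  --namespace=jenkins

Then reference gcr-pull-secret in the pod template's ImagePullSecrets field in the Kubernetes plugin configuration.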
I have a Jenkins pipeline that builds a container image and pushes it to Google Artifact Registry successfully. I have another job that takes the image tag and can deploy it into the K8s cluster. For security reasons, however, I need to include in my pipeline a step that reviews the vulnerabilities from the Artifact Registry scan and prevents the deployment if there are high or critical vulnerabilities. What would be the best option to accomplish this?
I solved it using the gcloud SDK:
https://cloud.google.com/sdk/gcloud/reference/artifacts/docker/images/describe
Just used:
gcloud artifacts docker images describe IMAGE --show-package-vulnerability
Note that the service account used by Jenkins must have the appropriate permissions.
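To actually gate the deployment on that output, you can ask for JSON and fail the build when high or critical findings appear. A rough sketch; the JSON field path used by jq is an assumption, so verify it against the output of your gcloud version:

IMAGE="europe-docker.pkg.dev/my-project/my-repo/my-app:1.0.0"   # placeholder

# list the severities reported by the scan (field path is an assumption)
SEVERITIES=$(gcloud artifacts docker images describe "$IMAGE" \
  --show-package-vulnerability --format=json \
  | jq -r '.package_vulnerability_summary.vulnerabilities // {} | keys[]')

if echo "$SEVERITIES" | grep -Eq 'CRITICAL|HIGH'; then
  echo "High or critical vulnerabilities found; blocking deployment."
  exit 1
fi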
I have a series of build pipelines in Azure DevOps that use the Docker@2 task to buildAndPush an image from a Dockerfile into our Azure Container Registry. The pipeline needs a service connection for the ACR, but at the moment I also supply the FQDN for the same ACR as a variable, so that the image can be tagged and pushed correctly.
In the interests of DRY, is there any way to access the service connection and extract this information instead? I haven't found much information about what adding a service connection actually does on the build agent - presumably some file of credentials/properties gets created somewhere...
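For context, the setup being described looks roughly like this in pipeline YAML (connection, repository, and variable names are placeholders). Note that when containerRegistry names an ACR service connection, the task logs in to that registry itself, so the separately supplied FQDN would only matter for composing the full image tag:

- task: Docker@2
  inputs:
    command: buildAndPush
    containerRegistry: 'my-acr-connection'   # placeholder service connection name
    repository: 'my-app'
    tags: $(Build.BuildId)
  # the duplication in question: $(acrFqdn), e.g. myregistry.azurecr.io, supplied as a pipeline variable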
I have a Docker container image in our intranet-hosted GitLab registry. I can manually pull the image from our OpenShift installation and run up an arbitrary number of pods successfully. If I rebuild the image locally and push to GitLab, I can trigger a pod rebuild manually from OpenShift. All this is working well.
How can I trigger the pod rebuild automatically whenever I push a new image to the GitLab registry? I don't see anywhere to place hooks between OpenShift and GitLab, and all my reading about image streams hasn't resulted in a successful automated deployment pipeline. The deployed versions are below:
GitLab Community Edition: 9.4.6 23ec1ec
OpenShift Master: v3.5.5.15
Kubernetes Master: v1.5.2+43a9be4
Any help greatly appreciated
You possibly need to schedule periodic imports of the image tag and metadata:
https://docs.openshift.com/container-platform/3.6/dev_guide/managing_images.html#importing-tag-and-image-metadata
That requires that feature be enabled globally in the OpenShift cluster though.
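With that feature enabled, marking the image stream tag for periodic re-import looks roughly like this (the registry host and project names are placeholders):

# point an image stream tag at the external image and re-import it on a schedule
oc tag --source=docker registry.gitlab.example.com/mygroup/myapp:latest myproject/myapp:latest --scheduled=true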
A better option may be to push the image direct into the OpenShift internal registry. That can trigger the new deployment automatically.
https://docs.openshift.com/container-platform/3.6/dev_guide/managing_images.html#accessing-the-internal-registry
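Pushing directly amounts to logging in with your OpenShift token and pushing to an image stream; a sketch assuming the registry is reachable at docker-registry.default.svc:5000 and the project is myproject (both placeholders):

# authenticate to the internal registry with the current session's token
docker login -u $(oc whoami) -p $(oc whoami -t) docker-registry.default.svc:5000

# tag and push; an ImageChange trigger on the deployment config then rolls out new pods
docker tag myapp:latest docker-registry.default.svc:5000/myproject/myapp:latest
docker push docker-registry.default.svc:5000/myproject/myapp:latest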
Configuring a Mesos slave to use Amazon ECR gives the following error when it receives a job:
Unsupported auth-scheme: BASIC
Does this look familiar to anyone? I'm running the slave pointed at my user's Docker config JSON file, which I updated by issuing a docker login beforehand.
Turns out it was my bad: I hadn't modified the application that started this job to change the Docker image name to include the registry as well.
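In other words, the image reference must be fully qualified with the ECR registry host, otherwise the daemon resolves it against Docker Hub and the ECR credentials never apply. A hypothetical example (account ID and region are placeholders):

# fails: resolved against Docker Hub, where the ECR login doesn't apply
myapp:latest

# works: fully qualified ECR image name
123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest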
As part of a Jenkins pipeline to build and deploy an app to Google's Kubernetes service (GKE), I've created a script that carries out the following deployment to GKE:
check out code
set up authentication to gcloud
create the deployment and service using kubectl
Detailed steps implemented by the script are as follows:
a) Create the docker registry authentication file (.json)
b) Log in to the Google Docker registry using the authentication file
c) initialise a git repo in the current directory
d) add the remote origin in prep for code pull
e) pull the source code for the microservice container
f) Create a kubectl configuration file and directory to authenticate to the Kubernetes cluster in Gcloud
g) Create a keyfile for a Gcloud service account that needs to authenticate to the container service
h) Activate the service account
i) Get the credentials for the container cluster from Gcloud
j) Run kubectl apply to create the kubernetes services
Full, tested, script at: https://pastebin.com/sZPrQuzD
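For reference, a condensed sketch of steps h) to j); the key file and manifest names are placeholders, while the cluster, zone, and project values come from the error output below:

# h) activate the service account with its key file
gcloud auth activate-service-account --key-file=service-account-key.json

# i) fetch cluster credentials so kubectl can authenticate
gcloud container clusters get-credentials jenkins-cd --zone europe-west1-b --project noon-prod

# j) create the Kubernetes deployment and service
kubectl apply -f k8s-deployment.yaml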
If I put this sequence of steps in a script on an AWS EC2 instance and run it manually, it works. However, the Jenkins build step fails at the point kubectl is invoked to run the service, with the following error:
gcloud container clusters get-credentials jenkins-cd --zone europe-west1-b --project noon-prod
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
Build step 'Execute shell' marked build as failure
The full error dump from the Jenkins run is as follows:
https://pastebin.com/pSWPQ5Ei
My questions:
a) How to fix this? Surely it can't be that difficult to get authentication running from Jenkins?
b) Is this the correct way to authenticate to the gcloud container service from a Jenkins system which is not on Gcloud infrastructure at all?
Many thanks in advance for any help!
Traiano
We're working on an open source project called Jenkins X, a proposed sub-project of the Jenkins Foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
We worked around some of the issues you've been having by running the Jenkins pipelines inside the Kubernetes cluster, so there's no need to authenticate with GKE.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and Node.js apps (but we support many languages and frameworks).