I'm building a Docker image in a Jenkins pipeline (the pipeline script was auto-generated by JHipster). I want to push my final Docker image to the Google Container Registry.
Here's what I've done:
I've installed both the CloudBees Docker Custom Build Environment Plugin and the Google Container Registry Auth Plugin.
I've set up Google auth credentials in Jenkins following the instructions over here.
I've configured my build step to use the Google Registry tag format, like so: docker.build('us.gcr.io/[my-project-id]/[my-artifact-id]', 'target/docker')
I've referenced the id of my Google Auth credentials in my push step:
docker.withRegistry('https://us.gcr.io', '[my-credential-id]') {
    dockerImage.push 'latest'
}
But the build fails with:
ERROR: Could not find credentials matching [my-credential-id]
Finished: FAILURE
I'm basically at the point of believing that these plugins don't work in a pipelines world, but I thought I'd ask if anyone has accomplished this and could give me some pointers.
Try prefixing your credentials id with "gcr:".
Your credential id would then look like "gcr:[my-credential-id]".
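As far as I can tell, that gcr: prefix is what routes the credential lookup to the Google Container Registry Auth plugin instead of the standard Jenkins credential providers, which would explain why the bare id isn't found.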
Complete working example:
stage('Build image') {
    app = docker.build("[id-of-your-project-as-in-google-url]/[name-of-the-artifact]")
}

stage('Push image') {
    docker.withRegistry('https://eu.gcr.io', 'gcr:[credentials-id]') {
        app.push("${env.BUILD_NUMBER}")
        app.push("latest")
    }
}
Please note the name of the image. I was struggling to push the image even with working credentials until I named it using the [id-of-your-project-as-in-google-url]/[name-of-the-artifact] notation.
When you get a message that you need to enable the Google....... API, you probably got your [id-of-your-project-as-in-google-url] wrong.
Images can now be successfully referenced with a URL of the form eu.gcr.io/[id-of-your-project-as-in-google-url]/[name-of-the-artifact]:47.
The previous answers no longer seemed to work for me. This is the syntax that works now:
stage('Push Image') {
    steps {
        script {
            docker.withRegistry('https://gcr.io', 'gcr:my-credential-id') {
                dockerImage.push()
            }
        }
    }
}
Another way of setting up access to Google Container Registry is shown below: use the withCredentials step with a credential of the "Secret file" type.
withCredentials([file(credentialsId: 'gcr-private-repo-reader', variable: 'GC_KEY')]) {
    sh '''
        chmod 600 $GC_KEY
        cat $GC_KEY | docker login -u _json_key --password-stdin https://eu.gcr.io
        docker ps
        docker pull eu.gcr.io/<reponame>/<image>
    '''
}
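For the above to work, gcr-private-repo-reader should be a GCP service-account JSON key uploaded to Jenkins as a credential of kind "Secret file" (the credential id here is just an example name). The _json_key username is a Docker-login convention for GCR: it tells the registry that the password being piped in is the JSON key itself.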
Check that you have the https://plugins.jenkins.io/google-container-registry-auth/ plugin installed.
After the plugin is installed, use the gcr:credential-id syntax.
The answer below didn't completely work at the time, and the approach is apparently now deprecated. I'm leaving the text here for historical reasons.
I've ultimately bypassed the problem by using a gcloud script stage in place of a docker pipeline stage, like so:
stage('publish gcloud') {
    sh "gcloud docker -- push us.gcr.io/[my-project-id]/[my-artifact-id]"
}
The gcloud command is able to find the auth credentials that are set up using a gcloud init on the command line of the Jenkins server.
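Note that gcloud docker -- push has since been deprecated. A rough modern equivalent, sketched under the assumption that both gcloud and the Docker CLI are installed on the Jenkins agent and gcloud is already authenticated, is to register gcloud as a Docker credential helper once and then push with plain docker:

stage('publish gcloud') {
    // registers gcloud as a credential helper for *.gcr.io in ~/.docker/config.json
    sh "gcloud auth configure-docker --quiet"
    sh "docker push us.gcr.io/[my-project-id]/[my-artifact-id]"
}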
I'm a little bit of a newb with this CI/CD container stuff, so please correct me anywhere I'm wrong.
I can't seem to find out how to send my npm build files, created on my Jenkins instance (workspace), to a remote server. I have a pipeline that successfully pulls in my GitHub repo and does all my fun npm stuff (npm install, test, build). I see my build dir in my Jenkins instance's /workspace.
My environment is as follows. We have a server where Docker (with Portainer) is installed. Jenkins is running in a container with a volume mounted (my React build dir goes here). No issues with the pipeline or building etc. I just can't figure out how to push my artifacts from my Jenkins workspace directory to my 'remote' dev server.
I can successfully open a console in my Jenkins container (Portainer, as the jenkins user) and scp files from the workspace directory using my remote server creds (but a password is necessary).
I installed and used the "Publish Over SSH" Jenkins plugin and get a successful "Test Configuration" from my setup.
I created my RSA keys on the REMOTE machine (that I'm trying to push my build files to).
I then pasted the private key (created without a password) into the plugin at the 'Use password authentication, or use a different key' section. Again, I get a successful test connection.
In my pipeline, the last step is deploying, and I use this command:
sh 'scp -r build myusername@xx.xx.xx.xx:/var/files/react-tester'
I get a 'Permission denied (publickey,password).' error. There is no password associated with the RSA key. I tried both ways: creating the RSA key on the remote machine as my remote user, and on the Jenkins machine as the jenkins user. I've read examples of people creating the keys both ways, but I'm not sure on which user/machine combo to create the keys, or which section of the 'Publish Over SSH' plugin to paste them into.
I'm out of ideas.
First, go to "Manage Jenkins" > "Credentials", add a new SSH credential of type "SSH Username with private key" and fill the "Username" and your private key (generate one if you haven't done it yet) fields (you can also upload one). Don't forget that you have to copy the generated public key to the ${SSH_USERNAME}/.ssh/authorized_keys file on the remote server.
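A minimal sketch of that key setup, assuming a Linux remote server (the key file name is arbitrary):

# run on whichever machine you generate the key pair on
ssh-keygen -t rsa -b 4096 -f jenkins_deploy_key -N ""
# append the public key to the remote user's ~/.ssh/authorized_keys
ssh-copy-id -i jenkins_deploy_key.pub myusername@xx.xx.xx.xx

The private key file (jenkins_deploy_key) is what you then paste or upload into the Jenkins credential.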
I'm assuming you're using a declarative or scripted pipeline here. In your code, after you've built your application, you can push it to your server by adding this step to your pipeline:
pipeline {
    stages {
        stage("Pushing changes to remote server") {
            steps {
                script {
                    def remote_server = "1.2.3.4"
                    withCredentials([sshUserPrivateKey(credentialsId: 'my-key', keyFileVariable: 'SSH_KEY', passphraseVariable: '', usernameVariable: 'SSH_USERNAME')]) {
                        // -r because build/ is a directory; \$ defers expansion to the shell
                        sh "scp -r -i \${SSH_KEY} build/ \${SSH_USERNAME}@${remote_server}:/var/files/react-tester/"
                    }
                }
            }
        }
    }
}
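One more thing worth checking: the first connection from the Jenkins node will fail host key verification unless the remote server is already in the Jenkins user's known_hosts. You can add it with ssh-keyscan, or pass -o StrictHostKeyChecking=no to scp at the cost of man-in-the-middle protection.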
Best regards.
It might sound silly, but I was trying to store my Docker Hub password inside Manage Credentials in Jenkins as Secret text so that it could be accessed in the pipeline script.
Here is the secret which I have created.
Here is the pipeline script where I'm trying to access the password using the ID:
node {
    stage("Docker Login") {
        sh 'docker login -u rahulwagh17 -p ${DOCKER_HUB_PASSWORD}'
    }
}
But it always fails with -
You're looking for the withCredentials method of Jenkins' pipeline DSL.
Have a look here:
https://support.cloudbees.com/hc/en-us/articles/203802500-Injecting-Secrets-into-Jenkins-Build-Jobs
Every job has its Pipeline Syntax button available on its dashboard:
$JENKINS_URL/$YOUR_JOB/pipeline-syntax/.
You can generate an adequate withCredentials block there.
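For this question concretely, a minimal sketch of what the generated block could look like, assuming the Secret text credential's ID is docker-hub-password (use single quotes so the shell, not Groovy, expands the secret and it doesn't leak into the build log):

node {
    stage("Docker Login") {
        withCredentials([string(credentialsId: 'docker-hub-password', variable: 'DOCKER_HUB_PASSWORD')]) {
            // --password-stdin keeps the password out of the process list
            sh 'echo $DOCKER_HUB_PASSWORD | docker login -u rahulwagh17 --password-stdin'
        }
    }
}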
I have a Jenkins pipeline in which I need to log into two different docker repositories. I know how to authenticate to one repo using the following command
docker.withRegistry('https://registry.example.com', 'credentials-id')
but I don't know how to do it for more than one repo.
Nesting docker.withRegistry calls actually works as expected. Each call adds an entry to /home/jenkins/.dockercfg with provided credentials.
// Empty registry ('') means the default Docker Hub `https://index.docker.io/v1/`
docker.withRegistry('', 'dockerhub-credentials-id') {
    docker.withRegistry('https://private-registry.example.com', 'private-credentials-id') {
        // your build steps ...
    }
}
This allows you to pull base images from Docker Hub using provided credentials to avoid recently introduced pull limits, and push results into another docker registry.
This is a partial answer, only applicable when you use two registries but need credentials for just one. You can nest the calls, since they mostly just do a docker login that stays active for the scope of the closure and prepend the registry domain name to image names in docker pushes and such.
Use this in a scripted Jenkins pipeline or in a script { } section of a declarative Jenkins pipeline:
docker.withRegistry('https://registry.example1.com') { // no credentials here
    docker.withRegistry('https://registry.example2.com', 'credentials-id') { // credentials needed
        def container = docker.build('myImage')
        container.inside() {
            sh "ls -al" // example command to run inside the newly built image
        }
    }
}
Sometimes you can use two non-nested calls to docker.withRegistry() one after the other, but building is a case where you can't: for example, when the base image for the first FROM in the Dockerfile comes from one registry and the base image for a second FROM comes from the other.
I have signed up for one of the public docker registries, so I've been given a username and password. I'm writing a Jenkins job which pulls an image from this repository, so I'm using the following command in my Jenkins pipeline
docker.withRegistry('https://registry.example.com', 'credentials-id')
However, I don't know what I should put in as the credentials-id. How can I get it?
This credentials-id item is provided by the Jenkins Credentials plugin. It is documented here: https://jenkins.io/doc/book/using/using-credentials/
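In short, you create the credential yourself: go to Manage Jenkins > Credentials > Add Credentials, pick the kind "Username with password", enter the registry username and password you were given, and set (or note) the ID field. That ID is the string you pass as credentials-id. A sketch, assuming you gave the credential the ID my-registry-creds (the image name is a placeholder):

docker.withRegistry('https://registry.example.com', 'my-registry-creds') {
    // the plugin performs a docker login for the scope of this block
    docker.image('vendor/private-image:latest').pull()
}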
I'm trying to set up a Jenkins pipeline according to this article, but instead use Google Container Registry to push the Docker images to.
The Problem: The part which fails me is this jenkinsfile stage block
stage ('Push Docker Image To Container Registry') {
    docker.image('google/cloud-sdk:alpine').inside {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        sh 'gcloud auth activate-service-account --key-file=gcp-key.json'
    }
}
The Error:
ERROR: (gcloud.components.update) Could not create directory [/.config/gcloud]: Permission denied.
(Please verify that you have permissions to write to the parent directory.)
I can't run any gcloud command at all; the error above is what I get every time.
I tried creating the "/.config" directory manually (logged into the AWS instance) and opened up the permissions of the folder to everyone, but that didn't help either.
I also can't find anywhere how to properly set up Google Cloud for a Jenkins pipeline using Docker.
Any suggestions are greatly appreciated :)
It looks like it's trying to write data directly into your root file system directory.
The .config directory for gcloud would normally be in one of the following locations, for a regular user or the root user:
/home/yourusername/.config/gcloud
/root/.config/gcloud
It looks like, for some reason, Jenkins thinks the parent directory should be /.
I would try checking where your Cloud SDK config directories are on the machine you are running this on (and for the user the script runs as):
$ sudo find / -iname "gcloud"
And look for locations similar to those printed above.
Could it be that the Cloud SDK is installed in a non-standard location on the machine?
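One workaround worth trying, sketched here but not verified against your setup: docker.image(...).inside runs the container with Jenkins' uid/gid, so there is often no matching home directory inside the container, HOME ends up unset or /, and gcloud falls back to /.config. Pointing HOME at the writable workspace avoids that:

stage ('Push Docker Image To Container Registry') {
    // HOME must be somewhere the Jenkins uid can write; the workspace qualifies
    docker.image('google/cloud-sdk:alpine').inside("-e HOME=${env.WORKSPACE}") {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        sh 'gcloud auth activate-service-account --key-file=gcp-key.json'
    }
}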