Jenkins - export a scripted pipeline into a shared lib

We have a few similar apps that are deployed with a scripted pipeline which is basically copy-pasted across all apps. I would like to move the whole pipeline into a Jenkins shared library, as hinted in the Jenkins docs.
So let's suppose that I have the following "pipeline" in vars/standardSpringPipeline.groovy:
#!groovy
def call() {
    node {
        echo "${env.BRANCH_NAME}"
    }
}
Then, the Jenkinsfile:
@Library('my-jenkins-lib@master') _
standardSpringPipeline
echo "Bye!"
Unfortunately this does not work, for a reason that I do not understand. The Jenkins output looks like this:
> git fetch --no-tags --progress ssh://git@***.com:7999/log/my-jenkins-lib.git +refs/heads/*:refs/remotes/origin/*
Checking out Revision 28900d4ed5bcece9451655f6f1b9a41a76256629 (master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 28900d4ed5bcece9451655f6f1b9a41a76256629
Commit message: "NOJIRA: ...."
> git rev-list --no-walk 28900d4ed5bcece9451655f6f1b9a41a76256629 # timeout=10
[Pipeline] echo
Bye!
[Pipeline] End of Pipeline
Any clue why this does not work (see the output above) and what is the correct way to do that?

For no-arg methods, you cannot omit the parentheses. From the Groovy documentation (emphasis mine):
Method calls can omit the parentheses if there is at least one parameter and there is no ambiguity:
println 'Hello World'
def maximum = Math.max 5, 10
Parentheses are required for method calls without parameters or ambiguous method calls:
println()
println(Math.max(5, 10))
The standardSpringPipeline step behaves like a method because of how it is compiled. If you add an echo "$standardSpringPipeline", it becomes a bit clearer that it is a compiled class that can be invoked.
To address your issue, just add parentheses to the call:
standardSpringPipeline()
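With that change, the Jenkinsfile from the question becomes:
@Library('my-jenkins-lib@master') _
standardSpringPipeline()
echo "Bye!"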

Related

Jenkins `checkout scm` fails with Error code=18 on first invocation, succeeds on second call

I have been running into an issue where most, but not all, of the Jenkins jobs that build our branches fail, when connecting through a proxy, with:
> git fetch --tags --progress https://github.com/myorg/myrepo.git
> +refs/heads/*:refs/remotes/origin/* # timeout=60
> ERROR: Error cloning remote repo 'origin'
> hudson.plugins.git.GitException: Command "git fetch --tags --progress
> https://github.com/myorg/myrepo.git +refs/heads/*:refs/remotes/origin/*"
> returned status code 128:
> stdout:
> stderr: error: RPC failed; result=18, HTTP code = 200
> fatal: The remote end hung up unexpectedly
The Jenkins script that I have starts like so:
stage ( "Checkout" ) {
sh label: "Use proxy to access GitHub", script:
"""
git config --global http.proxy ${httpProxy}
"""
scmVars = checkout scm
... ^^ this call wraps the failing `git fetch --tags --progress` call
Reading various other Stack Overflow articles ("git bash: error: RPC failed; result = 18, HTP code = 200B | 1KiB/s", "Git clone return result=18 code=200 on a specific repository", etc.), the only thing I have found of note is to increase the postBuffer size, i.e. git config --global http.postBuffer 524288000 (500 MB).
This then looks like:
stage ( "Checkout" ) {
sh label: "Use proxy to access GitHub", script:
"""
git config --global http.postBuffer 524288000
git config --global http.proxy ${httpProxy}
"""
scmVars = checkout scm
...
This has no net effect. ~90%+ of the builds fail on fetch; ~10% will go through (independent of the branch being built).
This is where affairs become very hard to explain: after hours of banging my head against the wall, I eventually decided to invoke checkout scm twice, consecutively. First I call checkout scm in a try/catch; on the second call I assign scmVars (n.b. if the first call succeeds, the second is idempotent). This looks like:
stage ( "Checkout" ) {
sh label: "Use proxy to access GitHub", script:
"""
git config --global http.postBuffer 524288000
git config --global http.proxy ${httpProxy}
"""
try {
checkout scm
} catch (nil) {
println "Initial SCM CHECKOUT fails"
}
scmVars = checkout scm
Calling checkout scm twice, catching an error on the first call, succeeds every time, in over 50 runs.
Why?
Is git config --global http.postBuffer 524288000 being applied to both calls or only the second?
If http.postBuffer is only set on the second call, how do I correctly set the property for the first call?
In broad strokes, what is actually happening here?
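(For reference, the double-invocation workaround above can be written more compactly with Jenkins' built-in retry step; this is only a sketch of the same workaround, not an explanation of the underlying fetch failure:)
stage ( "Checkout" ) {
    sh label: "Use proxy to access GitHub", script:
    """
    git config --global http.postBuffer 524288000
    git config --global http.proxy ${httpProxy}
    """
    // retry(2) runs the body up to twice, so a failing first
    // `checkout scm` is simply attempted again instead of being
    // swallowed by a try/catch
    retry(2) {
        scmVars = checkout scm
    }
}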
Long-time listener, first-time caller; I am not a DevOps engineer, I just play one on TV. Thank you in advance for any and all insights.

Jenkins git checkout on agent not working

The Jenkinsfile in my GitHub repository is used in a Jenkins master/slave environment.
I need to execute a testing command on a remote Jenkins slave server.
In my declarative pipeline the agent is called like this:
stage("Testautomation") {
agent { label 'test-device' }
steps {
bat '''
#ECHO ON
ECHO %WORKSPACE%
... '''
}
}
Before Jenkins can even execute a remote command, it starts checking out from version control. The checkout on the Jenkins master is no problem and works fine, but on this Jenkins slave I always receive this error message:
using credential github-enterprise:...
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://...git # timeout=10
Fetching upstream changes from https://...git
> git --version # timeout=10
using GIT_ASKPASS to set credentials GitHub Enterprise Access Token
> git fetch --tags --force --progress --depth=1 -- https://...git +refs/heads/development:refs/remotes/origin/development # timeout=120
Checking out Revision ... (development)
> git config core.sparsecheckout # timeout=10
> git checkout -f ...
Could not checkout ...
A declarative pipeline performs an SCM checkout on every agent by default, so check whether Git is installed on the Jenkins slave.
Conversely, if you want the code to be checked out on master but not on the agent, disable the default checkout in the options directive and use the checkout scm step inside a stage:
pipeline {
    agent { label 'master' }
    options {
        skipDefaultCheckout(true)
    }
    stages {
        stage('Build') {
            steps {
                checkout scm
                // do other stuff on master
            }
        }
        stage("Testautomation") {
            agent { label 'test-device' }
            steps {
                bat '''
                    @ECHO ON
                    ECHO %WORKSPACE%
                '''
            }
        }
    }
}
You can further customize the checkout behavior as described in this answer https://stackoverflow.com/a/42293620/8895640.
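For example, the checkout step can also be called with the GitSCM class directly. The snippet below is only a sketch; the repository URL, branch, and credentials id are placeholders to adapt to your setup:
// Explicit checkout with a shallow clone and a longer fetch timeout
checkout([$class: 'GitSCM',
    branches: [[name: '*/development']],
    userRemoteConfigs: [[url: 'https://github.example.com/org/repo.git', credentialsId: 'github-enterprise']],
    extensions: [[$class: 'CloneOption', shallow: true, depth: 1, timeout: 120]]])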

Jenkins multibranch pipeline triggers pipeline on unrelated branches

I have a problem with a Jenkins multibranch pipeline using a Jenkinsfile and the Git plugin.
The problem is that every push to the staging branch triggers the pipeline for master as well.
The desired behavior is that a push to the staging branch only triggers the pipeline for staging, and a push to the master branch only triggers the pipeline for master.
This is my Jenkinsfile:
#!/usr/bin/env bash
pipeline {
agent any
triggers {
pollSCM('*/1 * * * *')
}
environment {
GCLOUD_PATH="/var/jenkins_home/GoogleCloudSDK/google-cloud-sdk/bin"
}
stages {
stage('Git Checkout'){
steps{
// Clean Workspace
cleanWs()
// Get source from Git
git branch: 'staging',
credentialsId: '****',
url: 'git@github.com:***/****.git'
}
}
stage('Update Staging') {
when {
branch 'staging'
}
environment{
INSTANCE="***"
}
steps {
sshagent(credentials : ['****']) {
sh 'ssh -tt -o StrictHostKeyChecking=no jenkins@"${INSTANCE}" sudo /opt/webapps/****/deploy.sh firstinstance'
}
}
}
stage('Update Production') {
when {
branch 'master'
}
environment{
gzone="us-central1-a"
}
steps {
sh '''
#!/bin/bash
echo "${BRANCH_NAME}"
export instances=$("${GCLOUD_PATH}"/gcloud compute instances list --filter="status:(running) AND tags.items=web" --format="value(name)")
FIRST=1
for instance in ${instances}
do
echo "### Running Instance: ${instance} ###"
if [[ $FIRST == 1 ]]; then
echo "first instance"
${GCLOUD_PATH}/gcloud compute ssh jenkins@${instance} --zone ${gzone} '--ssh-flag=-tt -i /root/.ssh/id_rsa -o StrictHostKeyChecking=no' --command="echo first"
else
${GCLOUD_PATH}/gcloud compute ssh jenkins@${instance} --zone ${gzone} '--ssh-flag=-tt -i /root/.ssh/id_rsa -o StrictHostKeyChecking=no' --command="sudo uptime"
fi
FIRST=0
done
'''
}
}
}
post {
success {
cleanWs()
}
}
}
I'll share some logs:
This is a log for the master branch:
http://34.69.57.212:8080/job/tinytap-server/job/master/2/pollingLog/ returns
Started on Dec 10, 2019 1:42:00 PM
Using strategy: Specific revision
[poll] Last Built Revision: Revision 12ecdbc8d2f7e7ff1f578b135ea0b23a28d7672d (master)
using credential ccb9a735-04d9-4aab-8bab-5c86fe0f363c
> git --version # timeout=10
using GIT_ASKPASS to set credentials
> git ls-remote -h -- https://github.com/tinytap/tinytap-web.git # timeout=10
Found 222 remote heads on https://github.com/tinytap/tinytap-web.git
[poll] Latest remote head revision on refs/heads/master is: 12ecdbc8d2f7e7ff1f578b135ea0b23a28d7672d - already built by 1
Using strategy: Default
[poll] Last Built Revision: Revision f693e358ce14bc5dfc6111e62ed88e6dd1d0dfc9 (refs/remotes/origin/staging)
using credential 17f45a89-da78-4969-b18f-cb270a526347
> git --version # timeout=10
using GIT_SSH to set credentials jenkins key
> git ls-remote -h -- git@github.com:tinytap/tinytap-web.git # timeout=10
Found 222 remote heads on git@github.com:tinytap/tinytap-web.git
[poll] Latest remote head revision on refs/heads/staging is: 907899a0e7e131e9416ee65aad041c8da111e2fe
Done. Took 1 sec
Changes found
This is a log for the master branch, but only staging had a new commit:
http://34.69.57.212:8080/job/tt-server/job/master/3/pollingLog/ returns
Started on Dec 10, 2019 1:55:00 PM
Using strategy: Specific revision
[poll] Last Built Revision: Revision 12ecdbc8d2f7e7ff1f578b135ea0b23a28d7672d (master)
using credential ****-****-****-****-5c86fe0f363c
> git --version # timeout=10
using GIT_ASKPASS to set credentials
> git ls-remote -h -- https://github.com/tt/tt-web.git # timeout=10
Found 222 remote heads on https://github.com/tt/tt-web.git
[poll] Latest remote head revision on refs/heads/master is: 12ecdbc8d2f7e7ff1f578b135ea0b23a28d7672d - already built by 2
Using strategy: Default
[poll] Last Built Revision: Revision 907899a0e7e131e9416ee65aad041c8da111e2fe (refs/remotes/origin/staging)
using credential ****-****-****-****-cb270a526347
> git --version # timeout=10
using GIT_SSH to set credentials jenkins key
> git ls-remote -h -- git@github.com:tt/tt-web.git # timeout=10
Found 222 remote heads on git@github.com:tt/tt-web.git
[poll] Latest remote head revision on refs/heads/staging is: eab6e8bc6d8586084e9fe9856dec7fd8b31dd098
Done. Took 0.98 sec
Changes found
Notice "changes found" even though head did not change on master branch
Jenkins ver. 2.190.1
Git plugin ver 4.0.0
Git client plugin ver 2.9.0
I use this plugin (https://github.com/lachie83/jenkins-pipeline) and it works fine for me. You need to have separate if blocks for each branch, each with the stage block inside it. Example below:
#!/usr/bin/groovy
@Library('https://github.com/lachie83/jenkins-pipeline@master')
def pipeline = new io.estrado.Pipeline()
def cloud = pipeline.getCloud(env.BRANCH_NAME)
def label = pipeline.getPodLabel(cloud)
// deploy only the staging branch
if (env.BRANCH_NAME == 'staging') {
    stage ('deploy to k8s staging') {
        //Deploy to staging
    }
}
// deploy only the master branch
if (env.BRANCH_NAME == 'master') {
    stage ('deploy to k8s production') {
        //Deploy to production
    }
}
I think you have some logical omissions in your Jenkinsfile. As it currently stands, you poll SCM for changes. If any change is detected, the first stage, 'Git Checkout', will always check out the staging branch. Then you have another stage which does something if the branch is 'staging' (which it always is, because the checkout above is hardcoded to that branch), and so on. This is the first thing to fix: if SCM changes are detected, check out the right branch. As for how, there are a few options. I usually use skipDefaultCheckout() in the options directive together with an explicit checkout in my first pipeline stage:
steps {
    sshagent(['github-creds']) {
        git branch: "${env.BRANCH_NAME}", credentialsId: 'github-creds', url: 'git@github.com:x/y.git'
    }
}
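Roughly, the skeleton then looks like this (the repository URL and credentials id are the same placeholders as above, and the remaining stages keep their when { branch ... } guards):
pipeline {
    agent any
    options {
        skipDefaultCheckout()
    }
    stages {
        stage('Git Checkout') {
            steps {
                sshagent(['github-creds']) {
                    git branch: "${env.BRANCH_NAME}", credentialsId: 'github-creds', url: 'git@github.com:x/y.git'
                }
            }
        }
        // ... the 'Update Staging' and 'Update Production' stages follow,
        // each guarded by `when { branch ... }` as in the original Jenkinsfile
    }
}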
The second thing is that you try to squeeze handling two different branches into a single Jenkinsfile. This is not how it should be done. Jenkins will use the Jenkinsfile from the given branch, so just make sure the Jenkinsfile on staging contains what you want it to contain, and the same for the Jenkinsfile on master.
Hope it helps.

How to use a shell regular expression in a Jenkinsfile for a Jenkins pipeline?

I am trying to replace the '/' in the Git branch name with '_' in my Jenkinsfile so that I can tag my Docker image with the branch name. In bash, the command below works fine:
echo "${git_branch_name//\//_}"
But when I use the above command in the Jenkinsfile as below, it throws an error.
#!/usr/bin/env groovy
def commit_id
def imagetag
def branch_name
def git_branch_name
node('Nodename') {
    stage('checkout') {
        checkout (scm).$Branch_Param
        sh "git rev-parse --short HEAD > .git/commit-id"
        commit_id = readFile('.git/commit-id').trim()
        sh "git rev-parse --abbrev-ref HEAD > .git/branch-name"
        git_branch_name = readFile('.git/branch-name').trim()
        branch_name = sh "echo ${git_branch_name//\//_}"
        sh "echo ${commit_id}"
        sh "echo ${branch_name}"
        sh "echo Current branch is ${branch_name}"
    }
}
WorkflowScript: 15: end of line reached within a simple string 'x' or "x" or /x/;
solution: for multi-line literals, use triple quotes '''x''' or """x""" or /x/ or $/x/$ # line 15, column 28.
sh "branch_name = echo ${git_branch_name//\//_}"
What am I doing wrong here? Should I use a Groovy regular expression instead of shell? Why is the shell command not being interpreted correctly?
Thank you
The issue is that you're asking Groovy itself to interpret the expression ${git_branch_name//\//_}, not the shell. Using double-quotes around the string you pass to the sh step is what causes that. So if you instead write the following, this first error will go away:
sh 'echo ${git_branch_name//\\//_}' // <- Note the single-quotes
Basically, always use single-quotes unless you specifically need to use groovy's string interpolation (see the very last echo at the bottom of this answer).
Interestingly, when I tested this I didn't need the shebang (#!/bin/bash) to force bash, as some comments suggest; the ${variable//x/y} replace syntax worked in an sh step as-is. I guess the shell spawned was bash. I don't know if that's always the case, or if our Jenkins box has been specifically set up that way.
Also note you need to escape the escape sequence ('\\/') because what you're passing to the sh step is a string literal in groovy code. If you don't add that extra backslash, the line passed to the shell to be interpreted by it will be echo ${git_branch_name////_}, which it won't understand.
But there are other issues as well. First, assigning the output of the sh step to branch_name as you do means branch_name will always equal null. To get the stdout from a line of shell code you need to pass the extra parameter returnStdout: true to sh:
branch_name = sh (
script: 'echo ${git_branch_name//\\//_}',
returnStdout: true
).trim () // You basically always need to use trim, because the
// stdout will have a newline at the end
For bonus points, we could wrap that sh call in a closure. I find myself using it often enough to make this a good idea.
// Get it? `sh` out, "shout!"
def shout = { cmd -> sh (script: cmd, returnStdout: true).trim () }
//...
branch_name = shout 'echo ${git_branch_name//\\//_}'
But finally, the major problem is that bash (or whatever shell is actually spawned) doesn't have access to groovy variables. As far as it knows, echo ${git_branch_name} outputs an empty string, and therefore so does echo ${git_branch_name//\//_}.
You have a couple of choices. You could skip the creation of .git/branch-name and just immediately output the string-replaced result of git rev-parse:
branch_name = shout 'name=$(git rev-parse --abbrev-ref HEAD) && echo ${name//\\//_}'
Or to simplify that further you could use groovy's string replace function rather than the bash syntax:
branch_name = shout ('git rev-parse --abbrev-ref HEAD').replace ('/', '_')
Personally, I find the latter quite a bit more readable. YMMV. So bringing it all together at last:
#!groovy
def shout = { cmd -> sh (script: cmd, returnStdout: true).trim () }
// Note that I'm not declaring any variables up here. They're not needed.
// But you can if you want, just to clearly declare the environment for
// future maintainers.
node ('Nodename') {
    stage ('checkout') {
        checkout (scm).$Branch_Param
        commit_id = shout 'git rev-parse --short HEAD'
        branch_name = shout ('git rev-parse --abbrev-ref HEAD').replace ('/', '_')
        echo commit_id
        echo branch_name
        echo "The most recent commit on branch ${branch_name} was ${commit_id}"
    }
}

Access workspace in Jenkins pipeline

I have a Jenkins pipeline job whose job config pulls the Jenkinsfile from a repo. Once the job runs and pulls the Jenkinsfile, it clones the repo and I can see it in the workspace icon for the job.
Now when in the Jenkinsfile I do cd ${workspace} and ls, it doesn't display anything. How do I access the workspace of the repo that contains the Jenkinsfile? Or does it just store the Jenkinsfile itself?
This is my Jenkinsfile:
node ("master"){
// Mark the code checkout 'Checkout'....
stage 'Checkout'
sh "pwd ; ls"
}
As I run it, I get the following log:
GitHub pull request #282 of commit 0045a729838aae0738966423ff19c250151ed636, no merge conflicts.
Setting status of 0045a729838aae0738966423ff19c250151ed636 to PENDING with url https://10.146.84.103/job/test1/ and message: 'Pull request builder for this security group pr started'
Using context: SG Terraform Validate/Plan
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.xxx.net/Terraform/djin-sg/ # timeout=10
Fetching upstream changes from https://github.xxx.net/Terraform/djin-sg/
> git --version # timeout=10
using GIT_ASKPASS to set credentials wsjbuild
> git fetch --tags --progress https://github.xxx.net/Terraform/djin-sg/ +refs/heads/*:refs/remotes/origin/*
> git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 9dd8491b7b18c47eac09cec5a4bff4f16df979bf (origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 9dd8491b7b18c47eac09cec5a4bff4f16df979bf
First time build. Skipping changelog.
[Pipeline] node
Running on master in /var/lib/jenkins/workspace/test1
[Pipeline] {
[Pipeline] stage (Checkout)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Checkout
Proceeding
[Pipeline] wrap
[Pipeline] {
[Pipeline] sh
[test1] Running shell script
+ cd /var/lib/jenkins/workspace/test1
+ ls
My question specifically is: to get the Jenkinsfile, Jenkins clones the djin-sg repo, and it is in the workspace as well. So when I do ls, why does it show no files?
When I go to the Jenkins job's pipeline steps and open the workspace in the console, I can see the full repo in the workspace, but I can't seem to access it in the job.
Try the declarative Jenkins pipeline syntax instead, like:
pipeline {
    agent { node { label 'master' } }
    stages {
        stage('After Checkout') {
            steps {
                sh 'pwd; ls'
            }
        }
    }
}
You can do checkout scm to actually check out the repository into the workspace, or you can find it under ../${env.JOB_NAME}@script (on the master only).
It's better to always run checkout scm manually, because slaves do not have a ../${env.JOB_NAME}@script folder.
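As a sketch, adding that step to the declarative example above makes the cloned repo show up in the workspace listing:
pipeline {
    agent { node { label 'master' } }
    stages {
        stage('After Checkout') {
            steps {
                // check out the repository backing this pipeline job
                checkout scm
                sh 'pwd; ls'
            }
        }
    }
}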
