How to execute a stage only if latest commit message does not contain certain text - jenkins

I am using declarative pipelines and would like to run a stage only when it matches a certain branch and the latest commit does not include certain text.
Matching the branch is the easy part:
when {
    branch 'master'
}
But for checking the latest commit message, I am unsure how to add in this check.
I have come across this answer, which is very much the kind of thing I am trying to achieve.
I am wondering if I need to use an expression clause containing logic similar to the linked answer, to query the git log and parse/check whether the latest commit message contains my supplied regex pattern.
Can anyone point me at any snippets that would help me here?
Thanks

I'm answering a bit late, but it could be useful for others.
The answer you pointed us to is for Scripted Pipeline and works well.
If you want to do this in a Declarative way, you can combine the changelog condition with the branch condition like this:
stage("Skip release commit") {
    when {
        changelog "^chore\\(release\\):.*"
        branch "master"
    }
    steps {
        script {
            // Abort the build to avoid a build loop
            currentBuild.result = "ABORTED"
            error('Last commit is from Jenkins release, cancel execution')
        }
    }
}
My point was to skip the build when Jenkins is committing to the master branch with a chore(release):... message.
If you have to check a more complex branch name, you can replace branch "master" with something like expression {env.BRANCH_NAME ==~ /^feat\/.+/} (to execute the stage only on branch names starting with "feat/").
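For the original question's direction (run a stage only when the last commit does not match a pattern), Declarative also lets you wrap changelog in a not condition. A minimal sketch, with illustrative patterns; not taken from the original answer:

```groovy
stage('Build') {
    when {
        // run only when the last commit is NOT a release commit
        not { changelog '^chore\\(release\\):.*' }
        // and only on feat/* branches (illustrative pattern)
        expression { env.BRANCH_NAME ==~ /^feat\/.+/ }
    }
    steps {
        echo 'building'
    }
}
```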
Unfortunately, if a contributor on your project does git commit --amend on a previous commit whose message matches the changelog condition, the stage will not be triggered, because the changelog will be Branch indexing for this amended commit.
In this case you probably need:
environment {
    CHORE_RELEASE = sh script: "git log -1 | grep 'chore(release):'", returnStatus: true
}
when {
    expression { "${CHORE_RELEASE}" == "0" }
    branch "master"
}
instead of
when {
    changelog "^chore\\(release\\):.*"
    branch "master"
}

Related

Multiple Jenkins pipelines for a single repo

At the moment I have two MultiJob Projects for a single repo:
First runs on develop branch
Second runs on all opened Pull Requests
Each has a lot of nested Freestyle jobs, and they are quite different.
I'm looking at switching to Pipeline-as-Code using a Jenkinsfile. So my question is: is there a way to switch the Jenkinsfile path/name based on, say, the branch name? I tried the MultiBranch Pipeline job type, but it only allows a single Jenkinsfile path and uses it across every branch, including pull requests.
Maybe there is a better way to achieve this? I'm open to discussion. Thank you.
You can do it in one Jenkinsfile by using a when expression; I assume your pipeline is not too big:
pipeline {
    agent any
    stages {
        stage("Set variables from external input") {
            when {
                branch "develop"
            }
            steps {
                // add the steps you want to execute when the branch is develop
                echo "running develop-only steps"
            }
        }
        stage("2 for Pull request") {
            when {
                expression { return !(env.GIT_BRANCH.contains('master') || env.GIT_BRANCH.contains('develop')) }
            }
            steps {
                // add the steps you want to execute when the build is for a pull request
                echo "running pull-request steps"
            }
        }
    }
}
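As an aside (not part of the original answer), a more idiomatic Declarative way to express "any branch except master or develop" is to combine the not and anyOf conditions:

```groovy
when {
    not {
        anyOf {
            branch 'master'
            branch 'develop'
        }
    }
}
```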

Multibranch Pipeline in Jenkins with SVN

I have SVN as my SCM. The SVN Root URL structure is as follows.
https://svn.domain.com/orgName
Under this, I have a folder called "test". Then I have tags, branches and trunk. For example,
https://svn.domain.com/orgName/test/trunk
https://svn.domain.com/orgName/test/branches
Under trunk and branches, I have various modules. One module is Platform which is the core module. The URL structure for my project under Platform is as follows.
https://svn.domain.com/orgName/test/trunk/Platform/MyProject
https://svn.domain.com/orgName/test/branches/Platform/1.0.0.0/MyProject
Not sure if the above structure is correct or not. But this is how it is structured in my organization and it can't be changed. Now, I have the following questions.
At what level should I maintain the Jenkinsfile?
How should I pass the branch name (including trunk) to this file?
It will be great if someone can provide some details (if possible step by step) on how to use Multibranch pipeline with SVN. Unfortunately, I could not find any tutorial or examples to achieve this.
I figured this out on my own. Here are the details, in case, someone needs help.
For trunk, add the Jenkinsfile inside trunk/Platform (in my case); for branches, add the Jenkinsfile inside the branches/Platform/ folder. For branches, it is better to keep the Jenkinsfile inside each version folder, since this approach creates a Jenkins job for each version.
In the Jenkins job (for a multibranch pipeline), add the base URL as the Project Repository Base; in my case it is https://svn.domain.com/orgName/test. In the Include branches field, add trunk/Platform, branches/Platform/* (in my case). In the Jenkinsfile, use the built-in variable $BRANCH_NAME to get the branch name. This gives trunk/Platform for trunk and branches/Platform/1.0.0.0 (for example) for branches.
The only challenge is that job names are created like Trunk/Platform and Branches/Platform/1.0.0.0, so the workspace gets created like Trunk%2FPlatform and Branches%2FPlatform%2F1.0.0.0, since "/" gets encoded as %2F. When using the job name in jobs, make sure it is appropriately modified using code like the below.
def cws = "${WORKSPACE_DIR}\\" + "${JOB_NAME}".replace("%2F", "_").replace("/", "\\")
echo "\u2600 workspace=${cws}"
def isTrunk = "${JOB_NAME}".toLowerCase().contains("trunk")
def version = ""
def verWithBldNum = ""
echo "\u2600 isTrunk=${isTrunk}"
if (!isTrunk) {
    version = "${JOB_NAME}".substring("${JOB_NAME}".lastIndexOf("%2F") + 3, "${JOB_NAME}".length())
    echo "\u2600 version=${version}"
    verWithBldNum = "${version}".substring(0, "${version}".lastIndexOf('.') + 1) + "${BUILD_NUMBER}"
    echo "\u2600 verWithBldNum=${verWithBldNum}"
} else {
    echo "\u2600 Branch is Trunk"
}
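The version-extraction logic above can be checked as plain Groovy outside Jenkins; the job name and build number here are hypothetical examples:

```groovy
// Hypothetical job name as generated by a multibranch SVN project
def jobName = "Branches%2FPlatform%2F1.0.0.0"
def buildNumber = "42"

// Everything after the last "%2F" is the version folder
def version = jobName.substring(jobName.lastIndexOf("%2F") + 3)
assert version == "1.0.0.0"

// Replace the last version component with the build number
def verWithBldNum = version.substring(0, version.lastIndexOf('.') + 1) + buildNumber
assert verWithBldNum == "1.0.0.42"
```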

jenkins declarative pipeline ignoring changelog of jenkinsfiles

I have apps and their code in Git repositories, and the Jenkinsfiles for building the apps in another repository. The problem is the build changelog: Jenkins adds Jenkinsfile changes to the build changesets, and I don't want that, because those changes are infrastructure-related and not relevant to the apps. How can I prevent this? I haven't found any workaround or solution.
If I understood your question correctly: I don't think you can remove Jenkinsfile changes from the change set that Jenkins reads from Git, but you can skip the build if the only changes are to the Jenkinsfile.
If it helps...
First, you need to read the change set:
def changedFiles = []
for (changeLogSet in currentBuild.changeSets) {
    for (entry in changeLogSet.getItems()) {
        for (file in entry.getAffectedFiles()) {
            changedFiles.add(file.getPath())
        }
    }
}
Then, you can check if it is only Jenkinsfile:
if (changedFiles.size() == 1 && changedFiles[0] == "Jenkinsfile") {
    println "Only Jenkinsfile has changed... Skipping build..."
    // stops the build prematurely and marks the remaining stages green
    currentBuild.getRawBuild().getExecutor().interrupt(Result.SUCCESS)
    sleep(1) // give the interrupt time to take effect
} else {
    println "There are changes other than the Jenkinsfile; the build will proceed."
}
P.S. You have several ways to terminate a job early without failing the build, but this one is the cleanest in my experience, even though you need an admin to approve some method signatures in Jenkins' script security (I've seen it in another thread here some time ago, can't find it now though).

Jenkins is re-using a pipeline workspace and I wish for each build to have a unique workspace

So, most of the questions and answers I've found on this subject are for people who want to use the SAME workspace for different runs. (Which baffles me, but then I require a clean slate each time I start a job. Leftover stuff will only break things.)
My issue is the EXACT opposite - I MUST have a separate workspace for each run (or I need to know how to create files with the same name in different runs that stay with that run only, and which are easily reachable from bash scripts started by the pipeline!)
So, my question is - how do I either force Jenkins to NOT use the same workspace for two concurrently-running jobs on different hosts, OR what variable can I use in the 'custom workspace' field to accomplish this?
After I responded to the question by @Joerg S I realized that the thing Joerg S says CAN'T happen is EXACTLY what I'm observing! Jenkins is using the SAME workspace for 2 different, concurrent jobs on 2 different hosts. Is this a Jenkins pipeline bug?
See below for a bewildering amount of information.
Given the way I have to go onto and off of nodes during the run, I've found that I can start 2 different builds on different hosts of the same job, and they SHARE the workspace dir! Since each job has shell scripts which are busy writing files into that directory, this is extremely bad.
In Custom workspace in jenkins we are told to use custom workspace, and I'm set up just like that
In Jenkins: how to run builds in unique directories we are told to use ${BUILD_NUMBER} in the above custom workspace field, so what I tried was:
${JENKINS_HOME}/workspace/${ITEM_FULLNAME}/${BUILD_NUMBER}
All that happens when I use that is that the workspace name is, you guessed it, "${BUILD_NUMBER}" (and I even got a "${BUILD_NUMBER}@2" just for good measure!)
I tried ${BUILD_ID}; same thing (it uses the text literally and does not substitute the number).
I have the 'allow concurrent builds' turned on.
I'm using pipelines exclusively.
All jobs here, as part of normal execution, cause the slave, non-master host to reboot into an OS that does not have the capability to run slave.jar (indeed, it has no network access at all), so I cannot run the entire pipeline on that host.
All jobs use the following construct somewhere inside them:
tests=Arrays.asList(tests.split("\\r?\n"))
shellerror=231
for( line in tests){
So let's call an example job 'foo' that loops through a list, as above, that I want to run on 2 different hosts. The pipeline for that job starts running on master (since the above for (line in tests) is REQUIRED to run on a node!). Then it goes back and forth between master and slave, often multiple times.
If I start this job on host A and host B at about the same time, they will BOTH use the workspace ${JENKINS_HOME}/workspace/${JOB_NAME}, or in my case /var/lib/jenkins/jenkins/workspace/job
Since they write different data to files with the same name in that directory, I'm clearly totally broken immediately.
So, how do I force Jenkins to use a unique workspace for EVERY SINGLE RUN?
Or, what???
Other things: pipeline build step version 2.5.1, Jenkins 2.46.2
I've been trying to get the workspace statement ('ws') to work, but that doesn't quite work as I expected either - some files are in the workspace I explicitly name, and some are still in the 'built-in' workspace (workspace/).
I was asked to provide code. The 'standard' pipeline I use is about 26K bytes, composing about 590 lines. So, I'm going to GREATLY reduce. That being said:
node("master") { // 1
    // ..... lots of stuff ....
} // this matches the node("master") above
node(HOST) {
    echo "on $HOST, check what os"
    if (isUnix()) {
        // ...some more stuff...
    }
} // end of node(HOST) above
if (isok == 0) {
    node("master") {
        echo "----------------- Running on MASTER 19 $shellerror waiting on boot out of windows ------------"
        sleep 120
        echo "----------------- Leaving MASTER ------------"
    }
}
... lots 'o code ...
node(HOST) {
    // ... etc ...
} // matches the latest node(HOST) above
node("master") { // 120
    // .... code ....
    for (line in tests) {
        // ...code...
    }
}
... and on and on and on, switching back and forth from one to the other
FWIW, when I tried to make the above use 'ws' so that I could make certain the ws name was unique, I simply added a 'ws wsname' block directly under (almost) every 'node' opening so it was
node(name) { ws (wsname) { ..stuff that was in node block before... } }
But then I've got two directories to worry about checking - both the 'default' workspace/jobname dir AND the new wsname one.
Try using the customWorkspace node common option:
pipeline {
    agent {
        node {
            label 'node(s)-defined-label'
            customWorkspace "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
        }
    }
    stages {
        // Your pipeline logic here
    }
}
customWorkspace: A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path.
Edit
Since this doesn't work for your complex pipeline, maybe try this silly solution:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    // caveat: each sh() step runs in its own shell, so this cd does not
    // persist; combine the cd with the commands that depend on it
    sh(script: "cd ${WORKSPACE}")
    // Do stuff here
}
or if dir() is accessible:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    dir(WORKSPACE) {
        // Do stuff here
    }
}
customWorkspace didn't work for me.
What worked:
stages {
    stage("SCM (For commit trigger)") {
        steps {
            ws('custom-workspace') { // because we don't want to switch from the pipeline checkout
                // Generated from http://lstool01:8080/job/Permanent%20Build/pipeline-syntax/
                checkout(xxx)
            }
        }
    }
}
'${SOMEVAR}' (in single quotes) will not get substituted, but "${SOMEVAR}" (in double quotes) will; this is how Groovy strings are handled (see Groovy string handling).
So if you have
ws("/some/path/somewhere/${BUILD_ID}") {
    // something
}
on your node in your pipeline Jenkinsfile, it should do the trick in this regard.
The problem with @2 workspaces can occur when you allow concurrent builds of the project. I had the exact same problem with a custom ws() getting an @2 suffix; simply disallow concurrent builds or work around it.
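Tying the suggestions together for a scripted pipeline like the one in the question: compute the workspace name once (in double quotes, so Groovy interpolates) and reuse it in every node block. A sketch, assuming HOST is defined as in the question and concurrent builds are allowed:

```groovy
// Unique per build; env.* values are interpolated by Groovy, and
// "/" or "%2F" in the job name is normalized so it stays one directory
def wsName = "ws-${env.JOB_NAME}-${env.BUILD_NUMBER}".replace("%2F", "_").replace("/", "_")

node('master') {
    ws(wsName) {
        // ... master-side steps ...
    }
}
node(HOST) {
    ws(wsName) {
        // ... host-side steps; same directory name, relative to this node's workspace root ...
    }
}
```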

Abort, rather than error, stage within a Jenkins declarative pipeline

Problem
Our source is a single large repository that contains multiple projects. We need to be able to avoid building all projects within the repository if a commit happens within specific areas. We are managing our build process using pipelines.
Research
The git plugin provides the ability to ignore commits from certain users, paths, and message content. However, as we are using the pipeline, we believe we are experiencing the issue described in JENKINS-36195. In one of the most recent comments, Jesse suggests examining the changeset and returning early if the changes look boring. He mentions that a return statement does not work inside a library, closure, etc., but he doesn't mention how a job could be aborted.
Potential Approaches
We have considered using the error step, but this would result in the job being marked as a failure, which would then need to be investigated.
While a job result could be marked as NOT_BUILT, the job is not aborted but continues to process all stages.
Question
How would you abort a job during an early step without marking it as a failure and processing all stages of the pipeline (and potentially additional pipelines)?
Can you try using a try/catch/finally block in the pipeline?
try {
    // stages that may need to be skipped
} catch (err) {
    // handle or rethrow the error
} finally {
    // always runs, even when the build is aborted
}
I suppose you can also use a post-build action in the pipeline.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'make check'
            }
        }
    }
    post {
        always {
            junit '**/target/*.xml'
        }
        failure {
            mail to: 'team@example.com', subject: 'The Pipeline failed :('
        }
    }
}
The documentation is here: https://jenkins.io/doc/book/pipeline/syntax/#post
You can also try using the snippet below outside of the build step, with your own conditions, as specified by Slawomir Demichowicz in the ticket.
if (stash.isJenkinsLastAuthor() && !params.SKIP_CHANGELOG_CHECK) {
    echo "No more new changes, stopping pipeline"
    currentBuild.result = "SUCCESS"
    return
}
I am not sure this could help you.
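If it is acceptable to skip the expensive stages rather than abort the whole job, each stage can also guard itself with a when condition, which marks the stage as skipped instead of failed. A sketch with an illustrative skip marker, not taken from the answers above:

```groovy
stage('Build everything') {
    when {
        // skip this stage (shown as skipped, not failed) for boring commits
        not { changelog '.*\\[skip ci\\].*' }
    }
    steps {
        sh 'make all'
    }
}
```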
