How to handle nightly build in Jenkins declarative pipeline

I have a multibranch pipeline with a Jenkinsfile in my repo, and I am able to run my CI workflow (build & unit tests -> deploy-dev -> approval -> deploy-QA -> approval -> deploy-prod) on every commit.
What I would like to do is add SonarQube analysis to the first phase (build & unit tests) on nightly builds only.
Since my build is triggered by GitLab, I have defined my pipeline triggers as follows:
pipeline {
    ...
    triggers {
        gitlab(triggerOnPush: true, triggerOnMergeRequest: true, branchFilterType: 'All')
    }
    ...
}
To set up my nightly build, I added:
triggers {
    ...
    cron('H H * * *')
}
But now, how do I execute the analysis steps only when the build was triggered by the cron expression at night?
My simplified build stage looks as follows:
stage('Build & Tests & Analysis') {
    // HERE THE BEGIN SONAR ANALYSIS (to be executed on nightly builds)
    bat 'msbuild.exe ...'
    bat 'mstest.exe ...'
    // HERE THE END SONAR ANALYSIS (to be executed on nightly builds)
}

There is a way to get the build trigger information; it is described here:
https://jenkins.io/doc/pipeline/examples/#get-build-cause
It is also worth checking this:
how to get $CAUSE in workflow
A very good reference for your case is https://hopstorawpointers.blogspot.com/2016/10/performing-nightly-build-steps-with.html. Here is the function from that source that exactly matches your need:
// check if the job was started by a timer
@NonCPS
def isJobStartedByTimer() {
    def startedByTimer = false
    try {
        def buildCauses = currentBuild.rawBuild.getCauses()
        for (buildCause in buildCauses) {
            if (buildCause != null) {
                def causeDescription = buildCause.getShortDescription()
                echo "shortDescription: ${causeDescription}"
                if (causeDescription.contains("Started by timer")) {
                    startedByTimer = true
                }
            }
        }
    } catch (theError) {
        echo "Error getting build cause"
    }
    return startedByTimer
}
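For example, this function could then gate the SonarQube steps inside your existing stage. A minimal sketch (the SonarScanner command lines are placeholders, not the exact commands):
stage('Build & Tests & Analysis') {
    steps {
        script {
            def nightly = isJobStartedByTimer()
            if (nightly) {
                bat 'SonarScanner.MSBuild.exe begin ...' // placeholder for your "begin analysis" command
            }
            bat 'msbuild.exe ...'
            bat 'mstest.exe ...'
            if (nightly) {
                bat 'SonarScanner.MSBuild.exe end ...' // placeholder for your "end analysis" command
            }
        }
    }
}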

This works in a declarative pipeline:
when {
    triggeredBy 'TimerTrigger'
}
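A minimal sketch of how that condition could be wired into a stage, assuming the cron trigger from the question (stage name and steps are illustrative):
pipeline {
    agent any
    triggers {
        cron('H H * * *') // nightly timer, as in the question
    }
    stages {
        stage('Nightly analysis') {
            when {
                triggeredBy 'TimerTrigger' // runs only for timer-triggered builds
            }
            steps {
                echo 'Cron-triggered build: run the SonarQube analysis here'
            }
        }
    }
}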

For me, the easiest way is to define a cron trigger and check the hour in the nightly stage using a when expression:
pipeline {
    agent any
    triggers {
        pollSCM('* * * * *') // runs this pipeline on every commit
        cron('30 23 * * *')  // runs at 23:30
    }
    stages {
        stage('nightly') {
            when { // runs only when the expression evaluates to true
                expression {
                    // true when the build runs via the cron trigger
                    // (also when there is a commit at night between 23:00 and 23:59)
                    return Calendar.instance.get(Calendar.HOUR_OF_DAY) == 23
                }
            }
            steps {
                echo "Running the nightly stage only at night..."
            }
        }
    }
}

You could check the build cause like so:
stage('Build & Tests & Analysis') {
    when {
        expression {
            for (Object currentBuildCause : currentBuild.rawBuild.getCauses()) {
                if (currentBuildCause.class.getName().contains('TimerTriggerCause')) {
                    return true
                }
            }
            return false
        }
    }
    steps {
        bat 'msbuild.exe ...'
        bat 'mstest.exe ...'
    }
}
However, this requires the following entries in script-approval.xml:
<approvedSignatures>
    <string>method hudson.model.Run getCauses</string>
    <string>method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild</string>
</approvedSignatures>
This can also be approved via https://YOURJENKINS/scriptApproval/.
Hopefully, this won't be necessary after JENKINS-41272 is fixed.
Until then, a workaround could be to check the hour of day in the when expression (keep in mind that these times refer to the timezone of Jenkins):
when { expression { return Calendar.instance.get(Calendar.HOUR_OF_DAY) in 0..3 } }

I've found a way that does not use currentBuild.rawBuild, which is restricted. Begin your pipeline with:
startedByTimer = false
def buildCauses = "${currentBuild.buildCauses}"
if (buildCauses != null) {
    if (buildCauses.contains("Started by timer")) {
        startedByTimer = true
    }
}
Test the boolean where you need it, for example:
stage('Clean') {
    when {
        anyOf {
            environment name: 'clean_build', value: 'Yes'
            expression { startedByTimer == true }
        }
    }
    steps {
        echo "Cleaning..."
        ...

Thanks to this, you can now do it without using the non-whitelisted currentBuild.getRawBuild().getCauses() function, which can give you "Scripts not permitted to use method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild" depending on your setup:
@NonCPS
def isJobStartedByTimer() {
    def startedByTimer = false
    try {
        def buildCauses = currentBuild.getBuildCauses()
        for (buildCause in buildCauses) {
            if (buildCause != null) {
                def causeDescription = buildCause.shortDescription
                echo "shortDescription: ${causeDescription}"
                if (causeDescription.contains("Started by timer")) {
                    startedByTimer = true
                }
            }
        }
    } catch (theError) {
        echo "Error getting build cause"
    }
    return startedByTimer
}
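As a side note, getBuildCauses() also accepts a cause class name, so a more compact variant is possible (a sketch):
def isJobStartedByTimer() {
    // True if any cause of the current build is a timer trigger cause.
    return !currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause').isEmpty()
}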

Related

How to get Jenkins build status without running build

How can I get the status of already completed builds?
The code below starts the build, but I just need to know the build status by name.
I need to find out the names of the failed builds in order to send them in one email.
Map buildResults = [:]
Boolean failedJobs = false

void nofify_email(Map results) {
    echo "TEST SIMULATE notify: ${results.toString()}"
}

Boolean buildJob(String jobName, Map results) {
    def jobBuild = build job: jobName, propagate: false
    def jobResult = jobBuild.getResult()
    echo "Build of '${jobName}' returned result: ${jobResult}"
    results[jobName] = jobResult
    return jobResult == 'SUCCESS'
}
pipeline {
    agent any
    stages {
        stage('Parallel Builds') {
            steps {
                parallel(
                    "testJob1": {
                        script {
                            if (!buildJob('testJob1', buildResults)) {
                                failedJobs = true
                            }
                        }
                    }
                )
            }
        }
    }
}
I couldn't find anything to use instead of the build job step.
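One possible direction (a sketch, not from the original thread): read a job's last completed build through the Jenkins API instead of triggering it. This uses non-whitelisted internals, so it typically needs script approval or an admin context, and the job name below is illustrative:
import jenkins.model.Jenkins

@NonCPS
def lastResultOf(String jobName) {
    def job = Jenkins.instance.getItemByFullName(jobName) // e.g. 'folder/testJob1'
    def lastBuild = job?.getLastCompletedBuild()
    return lastBuild?.getResult()?.toString() // e.g. 'SUCCESS', 'FAILURE', or null if never built
}

// Usage: collect statuses without starting anything.
// buildResults['testJob1'] = lastResultOf('testJob1')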

Jenkins stop previous job if new job is triggered

When a new merge is made, the previous job keeps building, keeping the new one in the queue.
What I want: when a new merge is made, I want Jenkins to focus on that job, stopping the previous build, using PowerShell or cmd.
Is that possible?
Screenshot of the problem
Thank you in advance.
You can use this sample to abort previous running builds:
import hudson.model.Result
import jenkins.model.CauseOfInterruption

pipeline {
    agent any
    stages {
        stage('Abort previous running builds') {
            steps {
                abortPreviousRunningBuilds()
            }
        }
        stage('Build') {
            steps {
                sleep(5)
            }
        }
        stage('Test') {
            steps {
                sleep(5)
            }
        }
    }
}

def abortPreviousRunningBuilds() {
    def previousBuild = currentBuild.getRawBuild().getPreviousBuildInProgress()
    while (previousBuild != null) {
        if (previousBuild.isInProgress()) {
            def executor = previousBuild.getExecutor()
            if (executor != null) {
                echo ">> Aborting older build #${previousBuild.number}"
                executor.interrupt(Result.ABORTED, new CauseOfInterruption.UserInterruption("Aborted by newer build #${currentBuild.number}"))
            }
        }
        previousBuild = previousBuild.getPreviousBuildInProgress()
    }
}
If you get a RejectedAccessException, you need to approve the required signatures via In-process Script Approval.
Update
Alternatively, you can install the Pipeline: Milestone Step plugin and write it like this (when a newer build passes a milestone, older builds that have not yet passed that milestone are aborted):
pipeline {
    agent any
    stages {
        stage('Abort previous running builds') {
            steps {
                abortPreviousRunningBuilds()
            }
        }
        stage('Build') {
            steps {
                sleep(5)
            }
        }
        stage('Test') {
            steps {
                sleep(5)
            }
        }
    }
}

def abortPreviousRunningBuilds() {
    def buildNumber = env.BUILD_NUMBER as int
    if (buildNumber > 1) milestone(buildNumber - 1)
    milestone(buildNumber)
}

how to schedule parameterized pipelines to run just once with one parameter and the rest with another one?

I have a pipeline to which I just added two parameter values for building release or debug. The pipeline uses cron syntax to check for changes in the SCM every 10 minutes and builds release (a C++ program) for every commit, but I would like to build debug once a day; let's say every commit pushed between 12:00 and 13:00 should be built in debug. All of this without me having to run the pipeline and change the parameter manually (it is set to release by default). Is there any way to do this? This is a very short version of what the pipeline looks like:
pipeline {
    stages {
        stage('Setup parameters') {
            steps {
                script {
                    properties([
                        parameters([
                            choice(
                                defaultValue: 'RELEASE',
                                choices: ['RELEASE', 'DEBUG'],
                                name: 'BUILD_CONFIG'
                            ),
                        ])
                    ])
                }
            }
        }
        stage('Build release') {
            when {
                expression {
                    return params.BUILD_CONFIG == 'RELEASE'
                }
            }
            steps {
                script {
                    def msbuild = tool name: 'MSBuild', type: 'hudson.plugins.msbuild.MsBuildInstallation'
                    bat "\"${msbuild}\" /Source/project-GRDK.sln /t:Rebuild /p:configuration=\"Release Steam D3D11\""
                }
            }
        }
        stage('Build debug') {
            when {
                expression {
                    return params.BUILD_CONFIG == 'DEBUG'
                }
            }
            steps {
                script {
                    def msbuild = tool name: 'MSBuild', type: 'hudson.plugins.msbuild.MsBuildInstallation'
                    bat "\"${msbuild}\" /Source/project-GRDK.sln /t:Rebuild /p:configuration=\"Debug Steam D3D11\""
                }
            }
        }
    }
}
The parameterizedCron trigger (from the Parameterized Scheduler plugin) does what you need:
pipeline {
    agent any
    parameters {
        // For a choice parameter, the first entry in 'choices' is the default, so RELEASE stays the default.
        choice(name: 'BUILD_CONFIG', choices: ['RELEASE', 'DEBUG'])
    }
    triggers {
        parameterizedCron('''
            1,2,3,4,5,6,7,8,9,10 * * * * % BUILD_CONFIG=RELEASE
            12 * * * * % BUILD_CONFIG=DEBUG
        ''')
    }
    // ... stages as in the question ...
}
Another option would be to create a second job, which triggers the build job with the right parameters:
pipeline {
    agent any
    triggers {
        cron('H 12 * * *')
    }
    stages {
        stage('Build xxx debug') {
            steps {
                // Choice parameters are passed to downstream jobs as string values.
                build job: "your-job-name-here", parameters: [
                    string(name: 'BUILD_CONFIG', value: 'DEBUG')
                ]
            }
        }
    }
}
It is possible to determine the cause of the build with currentBuild.rawBuild.getCause(Class<T> type). The type you are looking for is UserIdCause. The following would run a stage only if the job was not triggered by a user (that is, not started manually). The steps in this stage are the same as in the Build debug stage.
stage('Build debug if time triggered') {
    when {
        expression {
            return currentBuild.rawBuild.getCause(hudson.model.Cause$UserIdCause) == null
        }
    }
    steps {
        script {
            def msbuild = tool name: 'MSBuild', type: 'hudson.plugins.msbuild.MsBuildInstallation'
            bat "\"${msbuild}\" /Source/project-GRDK.sln /t:Rebuild /p:configuration=\"Debug Steam D3D11\""
        }
    }
}
You will also need to add an expression to the Build release and Build debug stages, in order to prevent them from building when the job is not triggered by a user (manually).
stage('Build release') {
    when {
        allOf {
            expression {
                return params.BUILD_CONFIG == 'RELEASE'
            }
            expression {
                return currentBuild.rawBuild.getCause(hudson.model.Cause$UserIdCause) != null
            }
        }
    }
    ...
Docu:
https://javadoc.jenkins-ci.org/hudson/model/Cause.html
https://javadoc.jenkins-ci.org/hudson/model/Run.html
How to differentiate build triggers in Jenkins Pipeline
EDIT
If you want to keep everything in one pipeline, then you need to create two new variables. The following code creates a Calendar object for 12:00 (noon) today and converts it to milliseconds:
Calendar date = new GregorianCalendar()
date.set(Calendar.HOUR_OF_DAY, 12);
date.set(Calendar.MINUTE, 0);
date.set(Calendar.SECOND, 0);
date.set(Calendar.MILLISECOND, 0);
def start = date.getTime().getTime()
In the same way, you could create a Calendar object for 13:00 today (e.g. end). With currentBuild.rawBuild.getTimestamp() you get a Calendar object for when the build was scheduled. If the scheduled time is between start and end, set for example a boolean variable and check it in the pipeline when block.
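For completeness, a sketch of that end bound:
Calendar endDate = new GregorianCalendar()
endDate.set(Calendar.HOUR_OF_DAY, 13);
endDate.set(Calendar.MINUTE, 0);
endDate.set(Calendar.SECOND, 0);
endDate.set(Calendar.MILLISECOND, 0);
def end = endDate.getTime().getTime() // milliseconds since the epoch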
def buildDebug = false
def scheduled = currentBuild.rawBuild.getTimestamp().getTime().getTime()
if (scheduled > start && scheduled < end)
    buildDebug = true
...
stage('Build release') {
    when {
        allOf {
            expression {
                return params.BUILD_CONFIG == 'RELEASE'
            }
            expression {
                return currentBuild.rawBuild.getCause(hudson.model.Cause$UserIdCause) == null
            }
            expression {
                return buildDebug == false
            }
        }
    }
    ...
stage('Build debug') {
    when {
        allOf {
            expression {
                return params.BUILD_CONFIG == 'RELEASE'
            }
            expression {
                return currentBuild.rawBuild.getCause(hudson.model.Cause$UserIdCause) == null
            }
            expression {
                return buildDebug == true
            }
        }
    }
How to create a Java Date object of midnight today and midnight tomorrow?

Dynamic number of parallel steps in declarative pipeline

I'm trying to create a declarative pipeline which runs a number of jobs (configurable via a parameter) in parallel, but I'm having trouble with the parallel part.
Basically, for some reason the pipeline below generates the error
Nothing to execute within stage "Testing" @ line .., column ..
and I cannot figure out why, or how to solve it.
import groovy.transform.Field

@Field def mayFinish = false

def getJob() {
    return {
        lock("finiteResource") {
            waitUntil {
                script {
                    mayFinish
                }
            }
        }
    }
}

def getFinalJob() {
    return {
        waitUntil {
            script {
                try {
                    echo "Start Job"
                    sleep 3 // Replace with something that might fail.
                    echo "Finished running"
                    mayFinish = true
                    true
                } catch (Exception e) {
                    echo e.toString()
                    echo "Failed :("
                }
            }
        }
    }
}

def getJobs(def NUM_JOBS) {
    def jobs = [:]
    for (int i = 0; i < (NUM_JOBS as Integer); i++) {
        jobs["job${i}"] = getJob()
    }
    jobs["finalJob"] = getFinalJob()
    return jobs
}
pipeline {
    agent any
    options {
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    parameters {
        string(
            name: "NUM_JOBS",
            description: "Set how many jobs to run in parallel"
        )
    }
    stages {
        stage('Setup') {
            steps {
                echo "Setting it up..."
            }
        }
        stage('Testing') {
            steps {
                parallel getJobs(params.NUM_JOBS)
            }
        }
    }
}
I've seen plenty of examples doing this in the old pipeline, but not declarative.
Anyone know what I'm doing wrong?
At the moment, it doesn't seem possible to dynamically provide the parallel branches when using a Declarative Pipeline.
Even if you have a prior stage where, in a script block, you call getJobs() and add the result to the binding, the same error message is thrown.
In this case you'd have to fall back to using a Scripted Pipeline.
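A minimal scripted-pipeline sketch of that fallback, assuming NUM_JOBS is still defined as a job parameter (the branch bodies are placeholders):
node {
    stage('Setup') {
        echo "Setting it up..."
    }
    stage('Testing') {
        def branches = [:]
        def numJobs = (params.NUM_JOBS ?: '2') as Integer
        for (int i = 0; i < numJobs; i++) {
            def idx = i // capture the loop variable for the closure
            branches["job${idx}"] = {
                echo "Running parallel branch ${idx}"
            }
        }
        parallel branches
    }
}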

Show a Jenkins pipeline stage as failed without failing the whole job

Here's the code I'm playing with
node {
    stage 'build'
    echo 'build'
    stage 'tests'
    echo 'tests'
    stage 'end-to-end-tests'
    def e2e = build job: 'end-to-end-tests', propagate: false
    result = e2e.result
    if (result.equals("SUCCESS")) {
        stage 'deploy'
        build 'deploy'
    } else {
        ?????? I want to just fail this stage
    }
}
Is there any way for me to mark the 'end-to-end-tests' stage as failed without failing the whole job? propagate: false always marks the stage as successful, which is not what I want, but propagate: true marks the job as failed, which I also don't want.
This is now possible, even with declarative pipelines:
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                sh 'exit 0'
            }
        }
        stage('2') {
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh "exit 1"
                }
            }
        }
        stage('3') {
            steps {
                sh 'exit 0'
            }
        }
    }
}
In the example above, all stages will execute and the pipeline will be successful, but stage 2 will show as failed.
As you might have guessed, you can freely choose the buildResult and stageResult, in case you want it to be unstable or anything else. You can even fail the build and continue the execution of the pipeline.
Just make sure your Jenkins is up to date, since this is a fairly new feature.
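For instance, inside a stage's steps, a variant that marks the overall build as failed but still lets the remaining stages run (a sketch):
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
    sh 'exit 1' // the stage and the build are marked FAILURE, but execution continues
}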
stage takes a block now, so wrap the stage in a try-catch; putting the try-catch inside the stage would make the stage show as successful.
The new feature mentioned earlier will be more powerful. In the meantime:
try {
    stage('end-to-end-tests') {
        node {
            def e2e = build job: 'end-to-end-tests', propagate: false
            result = e2e.result
            if (result.equals("SUCCESS")) {
            } else {
                sh "exit 1" // this fails the stage
            }
        }
    }
} catch (e) {
    result = "FAIL" // make sure other exceptions are recorded as failure too
}
stage('deploy') {
    if (result.equals("SUCCESS")) {
        build 'deploy'
    } else {
        echo "Cannot deploy without successful build" // it is important to have a deploy stage even here for the current visualization
    }
}
Sounds like JENKINS-26522. Currently the best you can do is set an overall result:
if (result.equals("SUCCESS")) {
    stage 'deploy'
    build 'deploy'
} else {
    currentBuild.result = e2e.result
    // but continue
}
I recently tried to use vaza's answer to
Show a Jenkins pipeline stage as failed without failing the whole job as a template for writing a function that executes a job in its own stage, named after the job. Surprisingly it worked, but maybe some Groovy experts can have a look at it :)
Here is how it looks if one of the jobs is aborted:
def BuildJob(projectName) {
    try {
        stage(projectName) {
            node {
                def e2e = build job: projectName, propagate: false
                result = e2e.result
                if (result.equals("SUCCESS")) {
                } else {
                    error 'FAIL' // sh "exit 1" // this fails the stage
                }
            }
        }
    } catch (e) {
        currentBuild.result = 'UNSTABLE'
        result = "FAIL" // make sure other exceptions are recorded as failure too
    }
}

node {
    BuildJob('job1')
    BuildJob('job2')
}
In order to show a successful build with a failed stage when a downstream job fails AND support a user being able to cancel a build (including all subsequent stages), I had to use a combination of various solutions, specifically when, try/catch, throw and catchError().
env.GLOBAL_BUILD_ABORTED = false // Set if the user aborts the build

pipeline {
    agent any
    stages {
        stage('First Stage') {
            when { expression { env.GLOBAL_BUILD_ABORTED.toBoolean() == false } }
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    myLocalBuildMethod('Stage #1, build #1')
                    myLocalBuildMethod('Stage #1, build #2')
                }
            }
        }
        stage('Second Stage') {
            when { expression { env.GLOBAL_BUILD_ABORTED.toBoolean() == false } }
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    myLocalBuildMethod('Stage #2, build #1')
                    myLocalBuildMethod('Stage #2, build #2')
                    myLocalBuildMethod('Stage #2, build #3')
                }
            }
        }
    }
}

def myLocalBuildMethod(myString) {
    /* Dummy method to show User Aborts vs Build Failures */
    echo "My Local Build Method: " + myString
    try {
        build(
            job: "Dummy_Downstream_Job"
        )
    } catch (e) {
        /* Build Aborted by user - Stop All Test Executions */
        if (e.getMessage().contains("was cancelled") || e.getMessage().contains("ABORTED")) {
            env.GLOBAL_BUILD_ABORTED = true
        }
        /* Throw the exception to be caught by catchError() to mark the stage failed. */
        throw (e)
    }
    // Do other stuff...
}
You could add an explicit failing step, such as sh "not exist command", in the stage:
if (result.equals("SUCCESS")) {
    stage 'deploy'
    build 'deploy'
} else {
    try {
        sh "not exist command"
    } catch (e) {
    }
}
Solution steps
You must emit an error in a stage to mark it as failed.
Outside the scope of the stage, handle the exception and choose the build status.
This achieves the effect desired by a couple of users here, including myself, @user3768904 and @Sviatlana.
Success with failed Step Example
node("node-name") {
    try {
        stage("Process") {
            error("This will fail")
        }
    } catch (Exception error) {
        currentBuild.result = 'SUCCESS'
        return
    }
    stage("Skipped") {
        // This stage will never run
    }
}
Aborted with failure Step Example
node("node-name") {
    try {
        stage("Process") {
            error("This will fail")
        }
    } catch (Exception error) {
        currentBuild.result = 'ABORTED'
        return
    }
    stage("Skipped") {
        // This stage will never run
    }
}
You can use the following code in your else statement:
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
    error "some err msg"
}
This could be a general pattern showing how to customize the stage result with nice messages using the built-in functions, and how to propagate the sub-job's result to the stage result. Marking the overall build unstable if a sub-job is not successful is just an implementation choice for this example.
def run_sub_job() {
    def jobBuild = build(job: 'foo', wait: true, propagate: false)
    def result = jobBuild.getResult()
    def msg = 'sub-job: ' + result
    if ('SUCCESS' == result) {
        println(msg)
    } else if ('UNSTABLE' == result) {
        unstable(msg) // will also set buildResult to UNSTABLE
    } else { // anything else (FAILURE, ABORTED ...) is considered an error
        catchError(
            buildResult: 'UNSTABLE',
            stageResult: result // propagate sub-job result
        ) {
            error(msg)
        }
    }
}
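For example, this could be called from a declarative stage (a sketch; the stage name is illustrative):
pipeline {
    agent any
    stages {
        stage('Trigger sub-job') {
            steps {
                script {
                    run_sub_job() // the stage result reflects the sub-job's result
                }
            }
        }
    }
}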
