How to block upstream/downstream build in Jenkins declarative pipeline? - jenkins

I have 3 downstream build jobs which are triggered when one upstream job, 'Project U', has been built successfully. Example:
triggers {
    pollSCM('H/5 * * * *')
    upstream(upstreamProjects: 'Project U', threshold: hudson.model.Result.SUCCESS)
}
This works as expected, however, if code changes are committed to all parts at the same time, the upstream and downstream builds start building simultaneously.
I want to avoid this, because the downstream builds will run twice, and the first run is quite useless as the upstream commit has not been built yet. So I would like to configure the downstream jobs to block their build while the upstream job is building.
I know how to configure this for a Jenkins Freestyle job in the user interface (also see this answer), but I cannot find how to do this in a Jenkins declarative pipeline.

This approach works:
waitUntil {
    def job = Jenkins.instance.getItemByFullName("Project U")
    !job.isBuilding() && !job.isInQueue()
}
When this downstream job starts, it checks whether the upstream job is either building or queued, and if so, waits until that build has finished.
I haven't been able to find out how to programmatically access the current job's upstream job(s), so the job names need to be copied. (There is a method getBuildingUpstream(), which would be much more convenient, but I haven't found a way to obtain the current Project object from the Job instance.)
I finally ended up creating this function in the Jenkins shared library:
/*
 vars/waitForJobs.groovy

 Wait until none of the upstream jobs is building or queued any more.

 Parameters:
   upstreamProjects  String with a comma-separated list of Jenkins jobs to check
                     (use the same format as in the upstream trigger)
*/
def call(Map params) {
    def projects = params['upstreamProjects'].split(', *')
    echo 'Checking the following upstream projects:'
    if (projects.size() == 0) {
        echo 'none'
    } else {
        projects.each { project ->
            echo "${project}"
        }
    }
    waitUntil {
        def running = false
        projects.each { project ->
            def job = Jenkins.instance.getItemByFullName(project)
            if (job == null) {
                error "Project '${project}' not found"
            }
            if (job.isBuilding()) {
                echo "Waiting for ${project} (executing)"
                running = true
            }
            if (job.isInQueue()) {
                echo "Waiting for ${project} (queued for execution)"
                running = true
            }
        }
        return !running
    }
}
The nice thing is that I can just copy the parameter from the upstream trigger, because it uses the exact same format. Here's an example of what it looks like:
pipeline {
    [...]
    triggers {
        pollSCM('H/5 * * * *')
        upstream(upstreamProjects: 'Project U1, Project U2, Project U3', threshold: hudson.model.Result.SUCCESS)
    }
    [...]
    stages {
        stage('Wait') {
            steps {
                script {
                    // Wait for upstream jobs to finish
                    waitForJobs(upstreamProjects: 'Project U1, Project U2, Project U3')
                }
            }
        }
        [...]
    }
}

Related

Prevent Jenkins job from building from another jenkins file

We have two Jenkins jobs, AA and BB.
Is there a way to allow BB to be triggered only by AA after it completes?
Basically, you can use build triggers to do this. More about build triggers can be found here. There are two ways to add build triggers: through the UI (go to Configure Job and add the trigger under Build Triggers) or in the pipeline itself.
In a declarative pipeline, you can add triggers as shown below.
pipeline {
    agent any
    triggers { upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.SUCCESS) }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World BB'
            }
        }
    }
}
Here you can specify the threshold for when to trigger the build based on the status of the upstream build. There are 4 different thresholds (see the snippet after this list for an example):
hudson.model.Result.ABORTED: The upstream build was manually aborted.
hudson.model.Result.FAILURE: The upstream build had a fatal error.
hudson.model.Result.SUCCESS: The upstream build had no errors.
hudson.model.Result.UNSTABLE: The upstream build had an unstable result.
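For example, to also trigger BB when AA ends up UNSTABLE, you could lower the threshold; as far as I understand, the threshold acts as a minimum result, so anything at least that good fires the trigger. A minimal sketch, assuming the same job names as above:
triggers {
    // trigger this job when 'AA' finishes with a result of UNSTABLE or better
    upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.UNSTABLE)
}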
Update 02
If you want to restrict all other jobs/users from triggering this job, you will have to restructure your Job. You can wrap your stages with a parent stage and conditionally check who triggered the Job. Note that the Job will still be triggered, but its stages will be skipped. Please refer to the following pipeline.
pipeline {
    agent any
    triggers { upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.SUCCESS) }
    stages {
        stage('Parent') {
            // We will restrict triggering this Job for everyone other than Job AA
            when {
                expression {
                    print("Checking if the Trigger is allowed to execute this job")
                    // the last statement must return the boolean result of the check
                    return 'AA' in currentBuild.buildCauses.upstreamProject
                }
            }
            stages {
                stage('Hello') {
                    steps {
                        echo 'Hello World BB'
                    }
                }
            }
        }
    }
}

Jenkins - How to run a stage/function before pipeline starts?

We are using a Jenkins multibranch pipeline with BitBucket to build pull request branches as part of our code review process.
We wanted to abort any queued or in-progress builds so that we only run and keep the latest build - I created a function for this:
// Imports needed when this lives in a shared library vars/ file
import hudson.model.Result
import jenkins.model.CauseOfInterruption

def call() {
    def jobName = env.JOB_NAME
    def buildNumber = env.BUILD_NUMBER.toInteger()
    def currentJob = Jenkins.instance.getItemByFullName(jobName)
    for (def build : currentJob.builds) {
        def exec = build.getExecutor()
        if (build.isBuilding() && build.number.toInteger() != buildNumber && exec != null) {
            exec.interrupt(
                Result.ABORTED,
                new CauseOfInterruption.UserInterruption("Job aborted by #${currentBuild.number}")
            )
            println("Job aborted previously running build #${build.number}")
        }
    }
}
Now in my pipeline, I want to run this function when the build is triggered by the creation or push to a PR branch.
It seems the only way I can do this is to set the agent to none and then set it to the correct node for each of the subsequent stages. This results in missing environment variables etc. since the 'environment' section runs on the master.
If I use agent { label 'mybuildnode' } then it won't run the stage until the agent is free from the currently in-progress/running build.
Is there a way I can get the 'cancelpreviousbuilds()' function to run before the agent is allocated as part of my pipeline?
This is roughly what I have currently, but it doesn't work because of environment variable issues. I had to use skipDefaultCheckout so I could do the checkout manually as part of my build:
pipeline {
    agent none
    environment {
        VER = '1.2'
        FULLVER = "${VER}.${BUILD_NUMBER}.0"
        PROJECTPATH = "<project path>"
        TOOLVER = "2017"
    }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Check Builds') {
            when {
                branch 'PR-*'
            }
            steps {
                // abort any queued/in-progress builds for PR branches
                cancelpreviousbuilds()
            }
        }
        stage('Checkout') {
            agent {
                label 'buildnode'
            }
            steps {
                checkout scm
            }
        }
    }
}
It works and aborts the build successfully, but I get errors related to missing environment variables because I had to start the pipeline with agent none and then set each stage to agent { label 'buildnode' }.
I would prefer that my entire pipeline ran on the correct agent so that workspaces / environment variables were set correctly but I need a way to trigger the cancelpreviousbuilds() function without requiring the buildnode agent to be allocated first.
You can try combining the scripted and the declarative pipeline, which is possible.
Example (note I haven't tested it):
// this is scripted pipeline
node('master') { // use whatever node name or label you want
    stage('Cancel older builds') {
        cancel_old_builds()
    }
}

// this is declarative pipeline
pipeline {
    agent { label 'buildnode' }
    ...
}
As a small side comment, you seem to use: build.number.toInteger() != buildNumber which would abort not only older builds but also newer ones. In our CI setup, we've decided that it's best to abort the current build if a newer build has already been scheduled.
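As another side note, depending on your Jenkins version you may not need the custom abort function at all: newer versions of the Pipeline job plugin let disableConcurrentBuilds abort the previous running build when a new one starts. A minimal sketch, assuming your Jenkins supports the abortPrevious flag:
pipeline {
    agent { label 'buildnode' }
    options {
        // when a new build starts, abort the one that is still running
        // instead of queuing behind it
        disableConcurrentBuilds(abortPrevious: true)
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
    }
}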

How to fetch Build ID of Job triggered from another job

I have the following situation in a Jenkinsfile of Job A:
...
... // Some execution
...
call Job B
// When Job B runs successfully
params.some_var_used_in_Job_C = BUILD ID of Job B
call Job C
I have to know the BUILD ID of Job B after it succeeds, and I need to pass it as a parameter to Job C. Can anyone suggest how I can do this?
Also is it possible that I can pass some variable from Job B to Job A (so that I can send that value to Job C later) ?
Should be as simple as this:
node {
    stage('Test') { // for display purposes
        def jb = build wait: true, job: 'JobB'
        println jb.fullDisplayName
        println jb.id
        // this will show everything available but needs admin privs to execute
        println jb.properties
    }
}
If you want to pass a simple string from Job B to Job A, then in Job B you can set an environment variable:
env.someVar = "some value"
then back in Job A:
println jb.buildVariables.someVar
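Putting this together for the original question, here is a minimal sketch (untested; job and parameter names are placeholders, and Job C would need to declare these parameters) that passes Job B's build ID and a variable set by Job B on to Job C:
node {
    stage('Chain jobs') {
        // run Job B and wait for it to finish
        def jb = build wait: true, job: 'JobB'

        // hand Job B's build ID (and a variable Job B set via env.someVar) to Job C
        build job: 'JobC', wait: true, parameters: [
            string(name: 'JOB_B_BUILD_ID', value: jb.id),
            string(name: 'SOME_VAR_FROM_B', value: jb.buildVariables?.someVar ?: '')
        ]
    }
}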
@Kaus Untwale's answer is correct. I've copied his answer into a declarative pipeline and added error handling.
From an upstream job:
pipeline {
    agent any
    stages {
        stage('Run job') {
            steps {
                // mark the build as unstable on error
                // remove this if not needed
                catchError(buildResult: 'UNSTABLE') {
                    script {
                        def jb = build wait: true, job: 'test2', propagate: false
                        println jb
                        println jb.fullDisplayName
                        println jb.id

                        // throw an error if the build failed
                        // this still allows you to get the job infos you need
                        if (jb.result == 'FAILURE') {
                            error('Downstream job failed')
                        }
                    }
                }
            }
        }
    }
}
Get the build within a downstream job:
// job: test2
pipeline {
    agent any
    stages {
        stage('Upstream') {
            steps {
                script {
                    // upstream build if available
                    def upstream = currentBuild.rawBuild.getCause(hudson.model.Cause$UpstreamCause)
                    echo upstream?.shortDescription

                    // the run of that cause holds more infos
                    def upstreamRun = upstream?.getUpstreamRun()
                    echo upstreamRun?.number.toString()
                }
            }
        }
    }
}
See the API docs for the Run class. You'll also need to approve some of the calls used in the downstream example (script approval) or disable the Groovy Sandbox on that job.

Run the Jenkins pipeline nightly and after every commit

We would like to run the Jenkins pipeline:
nightly, but only if there was any commit in the chosen branches (dev or master)
and after every commit
With the caveat that the nightly build should also trigger an additional step (long-running integration tests).
Is it possible? How can we configure this in Jenkinsfile?
So, your question boils down to several:
How can I run a nightly build?
How can I stop a build if no commits were made in the last 24 hours?
How can I run a "long-running" stage, but not always?
Let's address these one by one.
You can have a nightly build by using cron, or parameterizedCron if you install the Parameterized Scheduler plugin. As you seem to want to run your job on all the branches, this may look like e.g.:
pipeline {
    agent any
    triggers {
        // schedule a nightly build at a random time after 0am on the dev branch, after 1am on master
        cron(env.BRANCH_NAME == 'dev' ? 'H 0 * * *' : '')
        cron(env.BRANCH_NAME == 'master' ? 'H 1 * * *' : '')
    }
    ...
}
To address the other points, you need to know why your build is running. It may be triggered by a person, or cron, or a commit. Once you make sure it's cron that triggered the build, you may want to explore git log for the time of the latest commit.
def currentBuildReason() {
    def timerCause = currentBuild.rawBuild.getCause(hudson.triggers.TimerTrigger.TimerTriggerCause)
    if (timerCause) { echo "Build reason: Build was started by timer"; return "TIMER" }

    timerCause = currentBuild.rawBuild.getCause(org.jenkinsci.plugins.parameterizedscheduler.ParameterizedTimerTriggerCause)
    if (timerCause) { echo "Build reason: Build was started by parameterized timer"; return "TIMER" }

    def userCause = currentBuild.rawBuild.getCause(hudson.model.Cause$UserIdCause)
    if (userCause) { echo "Build reason: Build was started by user"; return "USER" }

    println "here are the causes"
    echo "${currentBuild.buildCauses}"
    return "UNKNOWN"
}
and later:
if (currentBuildReason() == "TIMER" && env.BRANCH_NAME == "master") {
    def gitLog = sh script: "git log", returnStdout: true
    // look for the date of the latest commit, and stop the build if it is too old
}
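To complete that idea, here is a rough sketch (untested; the helper name is made up) of how the "no commits in the last 24 hours" check could look:
// returns true if the checked-out branch has a commit newer than 24 hours
def hasRecentCommit() {
    // %ct prints the committer date of the newest commit as a Unix timestamp
    def lastCommit = sh(script: 'git log -1 --format=%ct', returnStdout: true).trim().toLong()
    def now = System.currentTimeMillis().intdiv(1000)
    return (now - lastCommit) < 24 * 60 * 60
}

// and later, inside a script block or scripted section:
if (currentBuildReason() == "TIMER" && !hasRecentCommit()) {
    // error() marks the build as failed and stops it
    error('No commits in the last 24 hours - stopping the nightly build')
}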
Finally, you can run steps conditionally. One way to do it is to define a parameter for your job indicating whether the integration tests should run. This also lets a person run them on demand, not only nightly, by selecting the checkbox. E.g.:
pipeline {
    parameters {
        booleanParam(name: 'RUN_LONG_TEST', defaultValue: false,
                     description: "Should long-running tests execute?")
    }
and later
stage('Integration tests') {
    agent { node { label "integration_test_slave" } }
    when {
        beforeAgent true
        expression { params.RUN_LONG_TEST }
    }
    steps {
        // run integration test
    }
}
To schedule the nightly with the param selected, you need the parameterizedCron plugin:
pipeline {
    agent any
    triggers {
        // schedule a nightly build at a random time after 0am on the dev branch, after 1am on master
        parameterizedCron(env.BRANCH_NAME == 'dev' ? 'H 0 * * * % RUN_LONG_TEST=true' : '')
        parameterizedCron(env.BRANCH_NAME == 'master' ? 'H 1 * * * % RUN_LONG_TEST=true' : '')
    }
    ...
}
Alternatively, you may disregard the parameter if only the nightlies should run these tests, and analyze the value of currentBuildReason() instead: when it equals TIMER, the tests should run.
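For instance, the integration test stage shown above could be guarded by the trigger reason instead of (or in addition to) the parameter; a small sketch:
stage('Integration tests') {
    agent { node { label "integration_test_slave" } }
    when {
        beforeAgent true
        // only run the long tests for timer-triggered (nightly) builds
        expression { currentBuildReason() == "TIMER" }
    }
    steps {
        // run integration test
    }
}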

How can I use the 'parallel' option in a Jenkins pipeline in the 'post' section?

I looked at many pipeline examples and at how to write the post-build section in a pipeline script, but never found the answer I was looking for.
I have 4 jobs - say Job A,B,C and D. I want job A to run first, and if successful it should trigger Job B,C,D in parallel. If Job A fails, it should trigger only Job B. Something like below:
pipeline {
    agent any
    stages {
        stage('Build_1') {
            steps {
                sh '''
                Build Job A
                '''
            }
        }
    }
    post {
        failure {
            sh '''
            Build Job B
            '''
        }
        success {
            sh '''
            Build Job B,C,D in parallel
            '''
        }
    }
}
I tried using 'parallel' option in post section but it gave me errors. Is there a way to build Job B,C,D in parallel, in the post 'success' section?
Thanks in advance!
The parallel keyword can actually work inside a post condition as long as it is encapsulated inside a script block, as the script block is just a fallback to the scripted pipeline, which allows you to run a parallel execution step wherever you want.
The following should work fine:
pipeline {
    agent any
    stages {
        stage('Build_1') {
            steps {
                // Build Job A
            }
        }
    }
    post {
        failure {
            // run job B
            build job: 'Job-B'
        }
        success {
            script {
                // run jobs B, C, D in parallel
                def jobs = ['Job-B', 'Job-C', 'Job-D']
                parallel jobs.collectEntries { job ->
                    ["Building ${job}" : {
                        build job: job
                    }]
                }
            }
        }
    }
}
This is just an example; specific parameters or configuration (for the build keyword) can be added to each job execution according to your needs, as sketched below.
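For example, if the downstream jobs take parameters or should not fail the upstream build, the entries could look like this (the parameter name is only a placeholder):
parallel jobs.collectEntries { job ->
    ["Building ${job}" : {
        build job: job,
              wait: true,
              propagate: false,   // do not fail this build if the downstream job fails
              parameters: [string(name: 'SOME_PARAM', value: 'some value')]
    }]
}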
The error message is quite clear about this:
Invalid step "parallel" used - not allowed in this context - The parallel step can only be used as the only top-level step in a stage's steps block
The more restrictive declarative syntax does not allow the usage of parallel in the post section at the moment.
If you don't want to switch to the scripted syntax, another option that should work: build jobs B, C, D in parallel in a second stage and move the failure condition into the post section of your first stage. As a result, jobs B, C, D will run if A is successful. If A is not successful, only job B will run.
pipeline {
    agent any
    stages {
        stage('one') {
            steps {
                // run job A
            }
            post {
                failure {
                    // run job B
                }
            }
        }
        stage('two') {
            steps {
                parallel(
                    // run job B, C, D
                )
            }
        }
    }
}
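If you prefer to stay fully declarative in stage 'two', nested parallel stages can replace the scripted parallel step; a sketch with placeholder job names:
stage('two') {
    parallel {
        stage('B') { steps { build job: 'Job-B' } }
        stage('C') { steps { build job: 'Job-C' } }
        stage('D') { steps { build job: 'Job-D' } }
    }
}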
