Jenkins Pipeline, getting runWrapper references even on failed runs

I'm trying to run multiple builds in parallel in a Jenkins pipeline and get the result of those builds. My code looks something like this:
runWrappers = []
script {
    def builds = [:]
    builds['a'] = { runWrappers += build job: 'jobA', parameters: /* params here */ }
    builds['b'] = { runWrappers += build job: 'jobB', parameters: /* params here */ }
    builds['c'] = { runWrappers += build job: 'jobC', parameters: /* params here */ }
    builds['d'] = { runWrappers += build job: 'jobD', parameters: /* params here */ }
    parallel builds
    // All the builds run in parallel and do not exit early if one fails.
    // Several of the builds could fail on this step.
}
If there are no failures, the pipeline continues on to other stages. If there is a failure, an exception is thrown and the following post-build code runs immediately:
post {
    always {
        script {
            def summary = ''
            for (int i = 0; i < runWrappers.size(); i++) {
                def result = runWrappers[i].getResult()
                def link = runWrappers[i].getAbsoluteUrl()
                summary += "Build at: " + link + " had result of: " + result + "\n"
            }
            /* Code to send summary to external location */
        }
    }
}
This works for the most part. The problem is that this code only prints out the result for the builds that end in SUCCESS, because a build that fails throws an exception before returning a reference to a RunWrapper.
Is there a way to get a reference to a runWrapper or similar that can give me information (mainly the url and result) on a failed build? Or is there a way for me to get such a reference before I start the build and cause an exception?

Try to use propagate: false:
build job: 'jobA', propagate: false, parameters: /* params here*/
But in this case your parallel won't fail anymore.
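If you still want the pipeline to stop at this stage when something fails, one option (a sketch of the idea, not part of the original answer) is to collect every RunWrapper with propagate: false and then re-raise the failure yourself once all the results are recorded, reusing the hypothetical jobA..jobD names from above:
runWrappers = []
script {
    def builds = [:]
    ['jobA', 'jobB', 'jobC', 'jobD'].each { name ->
        builds[name] = {
            // propagate: false makes the build step return a RunWrapper
            // even when the downstream job fails, instead of throwing
            runWrappers += build(job: name, propagate: false /*, parameters: ... */)
        }
    }
    parallel builds
    // Re-introduce the failure so later stages are skipped, while post { }
    // still sees every RunWrapper collected above
    def failed = runWrappers.findAll { it.getResult() != 'SUCCESS' }
    if (failed) {
        error('Downstream failures: ' + failed.collect { it.getAbsoluteUrl() }.join(', '))
    }
}
This way the post { always { ... } } block from the question can report the URL and result of every downstream build, including the failed ones.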

Related

Jenkins: howto prevent overwriting build results of parallel downstream jobs

I'm running a scripted pipeline which starts multiple downstream jobs in parallel.
In the main job, I'd like to collect the data and results of the parallel downstream jobs so that I can process them later.
My main job is like this:
def all_build_results = []
pipeline {
    stages {
        stage('building') {
            steps {
                script {
                    def build_list = [
                        ['PC':'01', 'number':'07891705'],
                        ['PC':'01', 'number':'00568100']
                    ]
                    parallel build_list.collectEntries { build_data ->
                        def br = [:]
                        ["Building With ${build_data}": {
                            br = build job: 'Downstream_Pipeline',
                                parameters: [
                                    string(name: 'build_data', value: "${build_data}")
                                ],
                                propagate: false,
                                wait: true
                            build_result = build_data + ['Ergebnis': br.getCurrentResult(), 'Name': br.getFullDisplayName(),
                                'Url': br.getAbsoluteUrl(), 'Dauer': br.getDurationString(), 'BuildVars': br.getBuildVariables()]
                            // Print result
                            print "${BuildResultToString(build_result)}"
                            // ->> everything singular
                            // save single result to result list
                            all_build_results = all_build_results + [build_result]
                        }]
                    }
                    // print build results
                    echo "$all_build_results"
                }
            }
        }
    }
}
Most of the time the different results are saved separately in the "all_build_results" list, everything as it should be.
But sometimes one build result is listed twice and the other not at all!
At the print "${BuildResultToString(build_result)}" line the two results are still printed separately, but in "all_build_results" one result is added twice and the other not at all!
Why?
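A likely cause (my assumption, not confirmed in the original post): build_result is never declared with def, so it is a single script-binding variable shared by both parallel branches, and one branch can overwrite it before the other branch has appended its own copy to the list. Declaring the variables inside each branch closure keeps them per-branch, roughly like this:
parallel build_list.collectEntries { build_data ->
    ["Building With ${build_data}": {
        // 'def' keeps br and build_result local to this branch instead of
        // sharing one binding variable across both parallel closures
        def br = build job: 'Downstream_Pipeline',
            parameters: [string(name: 'build_data', value: "${build_data}")],
            propagate: false,
            wait: true
        def build_result = build_data + ['Ergebnis': br.getCurrentResult(),
            'Name': br.getFullDisplayName(), 'Url': br.getAbsoluteUrl(),
            'Dauer': br.getDurationString(), 'BuildVars': br.getBuildVariables()]
        all_build_results << build_result
    }]
}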

Jenkins Pipeline - build same job multiple times in parallel

I am trying to create a pipeline in Jenkins which triggers the same job multiple times on different nodes (agents).
I have a "Create_Invoice" job in Jenkins, configured with "Execute concurrent builds if necessary".
If I click Build 10 times, it will run 10 times on different (available) agents/nodes.
Instead of me clicking 10 times, I want to create a parallel pipeline.
I created something like below - it triggers the job, but only once.
What am I missing, or is it even possible to trigger the same job more than once at the same time from a pipeline?
Thank you in advance
node {
    def notifyBuild = { String buildStatus ->
        // build status of null means successful
        buildStatus = buildStatus ?: 'SUCCESSFUL'
        // Default values
        def tasks = [:]
        try {
            tasks["Test-1"] = {
                stage("Test-1") {
                    b = build(job: "Create_Invoice", propagate: false).result
                }
            }
            tasks["Test-2"] = {
                stage("Test-2") {
                    b = build(job: "Create_Invoice", propagate: false).result
                }
            }
            parallel tasks
        } catch (e) {
            // If there was an exception thrown, the build failed
            currentBuild.result = "FAILED"
            throw e
        } finally {
            notifyBuild(currentBuild.result)
        }
    }
}
I had the same problem and solved it by passing different parameters to the same job. You should add parameters to your build steps, although you obviously don't need them. For example, I added a string parameter.
tasks["Test-1"] = {
stage ("Test-1") {
b = build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: "1")], propagate: false).result
}
}
tasks["Test-2"] = {
stage ("Test-2") {
b = build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: "2")], propagate: false).result
}
}
If the same parameters (or no parameters) are passed to the same job in parallel, the job is only triggered once.
See also this Jenkins issue, it describes the same problem:
https://issues.jenkins.io/browse/JENKINS-55748
I think you have to switch to a Declarative pipeline instead of a Scripted pipeline.
Declarative pipelines support parallel stages, which is what you want:
https://www.jenkins.io/blog/2017/09/25/declarative-1/
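For illustration, a minimal Declarative sketch of that idea (assuming the Create_Invoice job and the PARAM parameter from the previous answer):
pipeline {
    agent any
    stages {
        stage('Invoices') {
            parallel {
                stage('Test-1') {
                    steps {
                        build job: 'Create_Invoice', propagate: false,
                            parameters: [string(name: 'PARAM', value: '1')]
                    }
                }
                stage('Test-2') {
                    steps {
                        build job: 'Create_Invoice', propagate: false,
                            parameters: [string(name: 'PARAM', value: '2')]
                    }
                }
            }
        }
    }
}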
This example grabs the available agents from Jenkins, iterates over them, and runs the pipeline on every active agent.
With this approach you don't need to invoke this job from an upstream job multiple times to build on different agents; the job itself manages everything and runs all the stages defined below on every online node.
jenkins.model.Jenkins.instance.computers.each { c ->
    if (c.node.toComputer().online) {
        node(c.node.labelString) {
            stage('steps-one') {
                echo "Hello from Steps One"
            }
            stage('stage-two') {
                echo "Hello from Steps Two"
            }
        }
    } else {
        println "SKIP ${c.node.labelString} because the status is: ${c.node.toComputer().online}"
    }
}

How to run a job multiple times in parallel depending on the pipeline parameter value

I want to run a job "main_job" N times in parallel from a pipeline "main_pipeline", depending on the parameter N. But I cannot get around the Jenkins errors: it allows a "for" loop in script blocks but not in "parallel" blocks.
I have tried using all the mixture of script/parallel/stage blocks.
pipeline {
    agent any
    stages {
        stage("All jobs") {
            parallel {
                script {
                    def numJobs = "${N}" as Integer
                    for (def curJob = 1; curJob <= numJobs; curJob++) {
                        def param = "JOB-" + curJob
                        script {
                            build(job: "main_job",
                                parameters: [string(name: "PARAM", value: param)])
                        }
                    }
                }
            }
        }
    }
}
Using different combinations of script/stage/parallel I get different errors about which blocks are expected. One example of the error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: xx: Expected a stage @ line xx, column xx.
    script {
    ^
WorkflowScript: xx: Expected one of "steps", "stages", or "parallel" for stage "All jobs" @ line xx, column xx.
    stage("All jobs") {
Okay, I finally found what I needed to do, basically by trial and error: parallel can be called as a function by passing it a map describing all the jobs to run.
The final code looks like this:
pipeline {
    agent any
    parameters {
        string(
            name: 'N',
            defaultValue: "2",
            description: "The number of jobs to run"
        )
    }
    stages {
        stage("All jobs") {
            steps {
                script {
                    def numJobs = "${N}" as Integer
                    def allJobs = [:]
                    for (def curJob = 1; curJob <= numJobs; curJob++) {
                        def jobName = "JOB-" + curJob
                        allJobs[jobName] = {
                            build(job: "main_job",
                                parameters: [string(name: "PARAM", value: jobName)])
                        }
                    }
                    parallel(allJobs)
                }
            }
        }
    }
}

jenkins pipeline catch build_job info for a failed parallel build

Does anyone know how to catch the failed job's number in a parallel pipeline execution while still having the failFast feature working to short-circuit the builds in the event of a job failure? I know I can kind of make it work if I use "propagate: false" when running the build step, but that kills the failFast feature, and I need it.
For example, below is my code, and I want the value of the variable achild_job_info inside the catch block as well.
build_jobs = ["Build_A", "Build_B", "Build_C"]
def build_job_to_number_mappings = [:]
// in this hashmap we'll place the jobs that we wish to run
def branches = [:]
def achild_job_info = ""
def abuild_number = ""
for (x in build_jobs) {
    def abuild = x
    branches[abuild] = {
        stage(abuild) {
            retry(2) {
                try {
                    achild_job_info = build job: abuild
                    echo "${achild_job_info}" // -> this gives: org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper@232601dc
                    abuild_number = achild_job_info.getId()
                    build_job_to_number_mappings[abuild] = achild_job_info.getNumber()
                } catch (err) {
                    echo "achild_job_info: ${achild_job_info}" // -> This comes up empty. I want the RunWrapper here as well, just like in the try block.
                    abuild_number = achild_job_info.getId()
                    build_job_to_number_mappings[abuild] = achild_job_info.getNumber()
                } // try-catch
            } // retry
        } // stage
    } // branches
} // for
branches.failFast = true
parallel branches
The only way I could find for now is to use the value of the 'exception string' and split it to get the current build number and name. I am not sure that is the most robust way to do this, but it works for now. Posting this reply to help others.
You need to avoid throwing by turning off the exception propagation:
achild_job_info = build job: abuild, propagate: false
if(achild_job_info.result == "SUCCESS") { ...
P.S. A little late, but I just got here looking for this myself.
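If you also want to keep failFast short-circuiting the other branches, one possible approach (a sketch, not from the original answer) is to record the RunWrapper first and then fail the branch yourself:
branches[abuild] = {
    stage(abuild) {
        // propagate: false always hands back the RunWrapper, even on failure
        def achild_job_info = build job: abuild, propagate: false
        build_job_to_number_mappings[abuild] = achild_job_info.getNumber()
        if (achild_job_info.getResult() != 'SUCCESS') {
            // Re-raise the failure so branches.failFast = true can still
            // abort the sibling branches
            error("${abuild} #${achild_job_info.getNumber()} finished with ${achild_job_info.getResult()}")
        }
    }
}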

Get the build number of a failed build in a parallel build in jenkins workflow

I am executing 3 parallel jobs, each of which runs tests, from my job as follows:
def run_job(job, test_results) {
    output = build(job: job, parameters: parameters)
    def buildNumber = output.getNumber().toString()
    test_results[job] = '/job/' + job + '/' + buildNumber + '/artifact/test_result.xml'
}
def test_func_array = [:]
def test_results = [:]
test_func_array['Python_Tests'] = { run_job('Run_Python_Tests', test_results) }
test_func_array['JS_Tests'] = { run_job('Run_JS_Tests', test_results) }
test_func_array['Selenium_Tests'] = { run_job('Run_Selenium_Tests', test_results) }
parallel(test_func_array)
I am able to get the build number using the output.getNumber() call when each job succeeds. However, when a job fails, the build() call throws an exception, so I cannot get the build number.
However, failed builds still have build numbers and archived artifacts.
How do I get the build number of a failed build?
If you use propagate: false, you can't rely on a try-catch block, because the build step doesn't throw an exception when the job fails; you need to handle the result via the getResult() method instead, like this:
stage('build something') {
    def job_info = build job: 'build_something', propagate: false
    println "Build number: ${job_info.getNumber()}"
    currentBuild.result = job_info.getResult()
}
see also: https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
Use propagate: false. See Snippet Generator for details.
I think Jesse's answer is valid when you want to complete all the parallel jobs, even when one or more jobs have failed. So basically, it disables the failFast feature.
Does anyone know how to catch the failed job's number while still having the failFast feature working to short-circuit the builds in the event of a job failure? My code is the same as in the "jenkins pipeline catch build_job info for a failed parallel build" question above, and I want the value of the variable achild_job_info inside the catch block as well.

Resources