Jenkins: how to prevent overwriting build results of parallel downstream jobs

I'm running a scripted pipeline which starts multiple downstream jobs in parallel.
In the main job, I'd like to collect the data and results of the parallel downstream jobs so that I can process them later.
My main job looks like this:
def all_build_results = []
pipeline {
    stages {
        stage('building') {
            steps {
                script {
                    def build_list = [
                        ['PC':'01', 'number':'07891705'],
                        ['PC':'01', 'number':'00568100']
                    ]
                    parallel build_list.collectEntries { build_data ->
                        def br = [:]
                        ["Building With ${build_data}": {
                            br = build job: 'Downstream_Pipeline',
                                parameters: [
                                    string(name: 'build_data', value: "${build_data}")
                                ],
                                propagate: false,
                                wait: true
                            build_result = build_data + ['Ergebnis':br.getCurrentResult(), 'Name':br.getFullDisplayName(), 'Url':br.getAbsoluteUrl(),
                                'Dauer':br.getDurationString(), 'BuildVars':br.getBuildVariables()]
                            // Print result
                            print "${BuildResultToString(build_result)}"
                            // ->> everything singular
                            // save single result to result list
                            all_build_results = all_build_results + [build_result]
                        }]
                    }
                    // print build results
                    echo "$all_build_results"
                }
            }
        }
    }
}
Usually the different results are saved separately in the "all_build_results" list, just as they should be.
But sometimes one build result is listed twice and the other not at all!
At the print "${BuildResultToString(build_result)}" step the two results are still printed separately, but in "all_build_results" one result is added twice and the other is missing!
Why?
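A plausible explanation, not confirmed in this thread: build_result is assigned without def, so both parallel branches share the same script-level binding; if branch B overwrites it before branch A appends, the same map lands in the list twice. A minimal sketch of that fix, keeping each branch's result local and collecting into a map keyed per branch instead of a shared list (the 'Ergebnis' entry is shortened here for illustration):

```groovy
// Sketch only: 'def' makes build_result local to each parallel branch
// instead of a shared binding, and a per-branch map key avoids a shared list.
def results = [:]   // one entry per branch, written under distinct keys
parallel build_list.collectEntries { build_data ->
    ["Building With ${build_data}": {
        def br = build job: 'Downstream_Pipeline',
            parameters: [string(name: 'build_data', value: "${build_data}")],
            propagate: false, wait: true
        // local to this branch, cannot be overwritten by the other branch
        def build_result = build_data + ['Ergebnis': br.getCurrentResult()]
        results["${build_data}"] = build_result
    }]
}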

Related

Run same Jenkins job with different parameters sequentially

I have a Jenkins job A which takes, let's say, a parameter foo; the allowed values of foo are (1,2,3,4,5,6,7). Now I want to make a Jenkins job B which runs job A with param foo set to 1,2,3,4,5,6,7 sequentially, i.e. job B will run job A 7 times, once for each value in order.
You can do something like below.
pipeline {
    agent any
    stages {
        stage('Job B') {
            steps {
                script {
                    def foo = [1, 2, 3, 4, 5, 6, 7]
                    foo.each { val ->
                        // string parameters expect a String, hence the GString conversion
                        build job: 'JobA', parameters: [string(name: 'fooInJobA', value: "${val}")], wait: true
                    }
                }
            }
        }
    }
}

How to run for loop in batch of 4 for 20 items at once in jenkinsfile

I have to run a for loop in Groovy over 40 items, but I wish to run it for 4 items in parallel, then the next batch, and so on. I know of parallel deployments in a Jenkinsfile, but that triggers all 40 at once.
def i = 0
mslist.collate(4).each {
    build job: 'deploy', parameters: [string(name: 'PROJECT', value: "${it[i]}"), string(name: 'ENVIRONMENT', value: params.ENVIRONMENT)]
    i = i + 1
}
My Updated code:
stages {
    stage ('Parallel Deployments') {
        steps {
            script {
                def m = rc.ic()
                m = m.collect { "${it}" }
                println "$m"
                m.collate(4).each { batch ->
                    def deployments = [:]
                    batch.each {
                        deployments[it] = {
                            build job: 'jb', parameters: [string(name: 'pt', value: it), string(name: 'pl', value: params.gh), string(name: 'dc', value: params.nb)]
                        }
                    }
                    deployments['failFast'] = false
                    parallel deployments
                }
            }
        }
    }
}
It can be done like this:
node {
    def items = (1..40).collect { "item-${it}" }
    items.collate(4).each { List batch ->
        def n = [:]
        batch.each {
            n[it] = {
                stage(it) {
                    build job: 'x', parameters: [string(name: 'it', value: it)]
                }
            }
        }
        parallel n
    }
}
Job x Jenkinsfile content:
node {
    echo "Hello from Pipeline x"
    print params
}
This will invoke 4 jobs at a time and run them in parallel. Make sure you have more than 4 executors configured on Jenkins.
You can do something like:
def items = (1..40).collect { "item-${it}" }
items.collate(4).each { List batch ->
    // batch is now a list of 4 items, do something with it here
}
This uses the Groovy Iterable.collate method to split the items into batches of four and loop through the batches.
If you really want to do this in parallel, as in using multiple threads, then that is a different question.

Jenkins Pipeline - build same job multiple times in parallel

I am trying to create a pipeline in Jenkins which triggers the same job multiple times on different nodes (agents).
I have a "Create_Invoice" job in Jenkins, configured with "Execute concurrent builds if necessary".
If I click Build 10 times, it will run 10 times on different (available) agents/nodes.
Instead of clicking 10 times, I want to create a parallel pipeline.
I created something like below - it triggers the job, but only once.
What am I missing, or is it even possible to trigger the same test more than once at the same time from a pipeline?
Thank you in advance
node {
    def notifyBuild = { String buildStatus ->
        // build status of null means successful
        buildStatus = buildStatus ?: 'SUCCESSFUL'
    }
    // Default values
    def tasks = [:]
    try {
        tasks["Test-1"] = {
            stage ("Test-1") {
                b = build(job: "Create_Invoice", propagate: false).result
            }
        }
        tasks["Test-2"] = {
            stage ("Test-2") {
                b = build(job: "Create_Invoice", propagate: false).result
            }
        }
        parallel tasks
    } catch (e) {
        // If there was an exception thrown, the build failed
        currentBuild.result = "FAILED"
        throw e
    } finally {
        notifyBuild(currentBuild.result)
    }
}
I had the same problem and solved it by passing different parameters to the same job. You should add parameters to your build steps, although you obviously don't need them. For example, I added a string parameter.
tasks["Test-1"] = {
    stage ("Test-1") {
        b = build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: "1")], propagate: false).result
    }
}
tasks["Test-2"] = {
    stage ("Test-2") {
        b = build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: "2")], propagate: false).result
    }
}
As long as the same parameters (or no parameters) are passed to the same job, the job is only triggered once.
See also this Jenkins issue, it describes the same problem:
https://issues.jenkins.io/browse/JENKINS-55748
I think you have to switch to a Declarative pipeline instead of a Scripted pipeline.
Declarative pipelines have parallel stages support, which is your goal:
https://www.jenkins.io/blog/2017/09/25/declarative-1/
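For illustration, a minimal Declarative pipeline with parallel stages might look like the sketch below. The job name "Create_Invoice" is taken from the question; the rest of the skeleton is an assumption, not code from the thread:

```groovy
pipeline {
    agent any
    stages {
        stage('Run invoices in parallel') {
            // 'parallel' inside a stage runs the nested stages concurrently
            parallel {
                stage('Test-1') {
                    steps {
                        build job: 'Create_Invoice', propagate: false
                    }
                }
                stage('Test-2') {
                    steps {
                        build job: 'Create_Invoice', propagate: false
                    }
                }
            }
        }
    }
}
```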
This example will grab the available agents from Jenkins, then iterate and run the pipeline on all the active agents.
With this approach, you don't need to invoke this job from an upstream job many times to build on different agents. The job itself will manage everything and run all the stages defined on every online node.
jenkins.model.Jenkins.instance.computers.each { c ->
    if (c.node.toComputer().online) {
        node(c.node.labelString) {
            stage('steps-one') {
                echo "Hello from Steps One"
            }
            stage('stage-two') {
                echo "Hello from Steps Two"
            }
        }
    } else {
        println "SKIP ${c.node.labelString} because the status is: ${c.node.toComputer().online}"
    }
}

killing/stopping Jenkins job with a very long list

I'm new to Jenkins so I hope my terms are correct:
I have a Jenkins job that triggers another job. This second job tests a very long list of items (maybe 2000) it gets from the trigger.
Because it's such a long list, I pass it to the second job in groups of 20.
Unfortunately, this list turned out to take an extremely long time, and I can't stop it.
No matter what I tried, stop/kill only stops the current group of 20 and proceeds to the next one.
Waiting for it to finish, or doing this manually for each group, is not an option.
I guess the entire list was already passed to the second job, and it loads the next group whenever the current one ends.
What I tried:
Clicking the "stop" button next to the build, on both the trigger job and the second job
Using the "Purge Build Queue" add-on
Using the following script in script console:
def jobname = "Trigger Job"
def buildnum = 123
def job = Jenkins.instance.getItemByFullName(jobname)
for (build in job.builds) {
    if (buildnum == build.getNumber().toInteger()) {
        if (build.isBuilding()) {
            build.doStop();
            build.doKill();
        }
    }
}
Using the following script in script console:
String job = 'Job name';
List<Integer> build_list = [];
def result = jenkins.model.Jenkins.instance.getItem(job).getBuilds().findAll {
    it.isBuilding() == true && (!build_list || build_list.contains(it.id.toInteger()))
}.each { it.doStop() }.collect { it.id };
println new groovy.json.JsonBuilder(result).toPrettyString();
This is the Groovy part of my code that splits the list into groups of 20. Maybe I should put the parallel part outside the sub-list loop?
Is there a better way to divide the list into sub-lists for future use?
stages {
    stage('Execute tests') {
        steps {
            script {
                // Limit number of items to run
                def shortList = IDs.take(IDs.size()) // For testing purposes, can be removed if not needed
                println(Arrays.toString(shortList))
                // divide the list of items into small, equal sub-lists
                def colList = shortList.toList().collate(20)
                for (subList in colList) {
                    testStepsForParallel = subList.collectEntries {
                        ["Testing on ${it}": {
                            catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                                stage(it) {
                                    def buildWrapper = build job: "Job name",
                                        parameters: [
                                            string(name: 'param1', value: it.trim()),
                                            string(name: 'param2', value: "")
                                        ],
                                        propagate: false
                                    remoteBuildResult = buildWrapper.result
                                    println("Remote build results: ${remoteBuildResult}")
                                    if (remoteBuildResult == "FAILURE") {
                                        currentBuild.result = "FAILURE"
                                    }
                                    catchError(stageResult: 'UNSTABLE') {
                                        copyArtifacts projectName: "Job name", selector: specific("${buildWrapper.number}")
                                    }
                                }
                            }
                        }]
                    }
                    parallel testStepsForParallel
                }
            }
        }
    }
}
Thanks for your help!
I don't know what else to do to stop this run.
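One approach sometimes used from the Script Console for stubborn runs (an assumption, not confirmed in this thread) is to interrupt the build's executor directly, which is stronger than doStop() alone, and to clear anything still waiting in the queue. The job name and build number below are the placeholders from the question:

```groovy
// Sketch only: hard-interrupt a running build and clear the build queue.
// 'Trigger Job' and build number 123 are placeholders.
import hudson.model.Result

def job = Jenkins.instance.getItemByFullName('Trigger Job')
def run = job.getBuildByNumber(123)
// Interrupt the executor thread running this build
run?.getExecutor()?.interrupt(Result.ABORTED)
// Drop any queued items so the next group of 20 never starts
Jenkins.instance.queue.clear()
```

Note that clearing the queue affects all queued jobs on the controller, not just this one, so use it with care.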

Jenkins Pipeline, getting runWrapper references even on failed runs

I'm trying to run multiple builds in parallel in jenkins pipeline and get the result of those builds. My code looks something like
runWrappers = []
script {
    def builds = [:]
    builds['a'] = { runWrappers += build job: 'jobA', parameters: /* params here */ }
    builds['b'] = { runWrappers += build job: 'jobB', parameters: /* params here */ }
    builds['c'] = { runWrappers += build job: 'jobC', parameters: /* params here */ }
    builds['d'] = { runWrappers += build job: 'jobD', parameters: /* params here */ }
    parallel builds
    // All the builds are run in parallel and do not exit early if one fails
    // Multiple of the builds could fail on this step
}
If there are no failures, the pipeline continues on to other stages. If there is a failure, an exception is thrown and the following post-build code runs immediately:
post {
    always {
        script {
            def summary = ''
            for (int i = 0; i < runWrappers.size(); i++) {
                def result = runWrappers[i].getResult()
                def link = runWrappers[i].getAbsoluteUrl()
                summary += "Build at: " + link + " had result of: " + result
            }
            /* Code to send summary to external location */
        }
    }
}
This works for the most part. The problem is that this code only prints the result for builds that end in SUCCESS, because the builds that fail throw an exception before returning a reference to a runWrapper.
Is there a way to get a reference to a runWrapper or similar that can give me information (mainly the url and result) on a failed build? Or is there a way for me to get such a reference before I start the build and cause an exception?
Try using propagate: false:
build job: 'jobA', propagate: false, parameters: /* params here */
But in this case your parallel step won't fail anymore.
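One way to combine both behaviors (a sketch under the assumption that runWrappers is defined as in the question, not a confirmed solution from the thread): with propagate: false, build() returns a RunWrapper even for failed runs, so every reference can be recorded first and the failure re-introduced afterwards:

```groovy
// Sketch only: collect every RunWrapper, then fail the pipeline manually
// so the post block still sees all results, including failed builds.
def builds = [:]
['jobA', 'jobB', 'jobC', 'jobD'].each { name ->
    builds[name] = {
        // propagate: false means a RunWrapper is returned even on failure
        def rw = build job: name, propagate: false
        runWrappers += rw
    }
}
parallel builds
// Re-introduce the failure after all results are recorded
if (runWrappers.any { it.getResult() != 'SUCCESS' }) {
    error("At least one downstream build failed")
}
```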
