Jenkins Pipeline - Trigger jobs sequentially inside a loop

I have a Jenkins pipeline where I trigger a single job with different parameters. The number of parameters may also change, which changes the number of times the job needs to be triggered. This is why I have the build job in a for loop. Here's what the code looks like:
pipeline {
    stages {
        stage('Setup') {
            steps {
                script {
                    for (int i = 0; i < list_one.size(); i++) {
                        def index_i = i
                        for (int j = 0; j < list_two.size(); j++) {
                            def index_j = j
                            stage("${list_one[i]} ${list_two[j]}") {
                                sh "echo 'index_i: ${index_i}'"
                                sh "echo 'index_j: ${index_j}'"
                                build job: 'Downstream Job', parameters: [
                                    string(name: 'some_param', value: "${list_one[index_i]}")]
                            }
                        }
                    }
                }
            }
        }
    }
}
When I run this pipeline, it only runs once, for the first iteration of both loops. However, when I remove the build job line, the pipeline runs for all the values in the lists. I'm puzzled as to why this happens and would appreciate some help.

You can use something like propagate: false, so a failing downstream build does not abort the rest of the loop.
Is there a way to use propagate: false in a Jenkinsfile with declarative syntax, directly on a stage/step?
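For reference, propagate (and wait) are options of the build step itself, so in a declarative Jenkinsfile they still go on the build call inside the script block rather than on a stage or step directive. A minimal sketch of the outer loop above with propagate: false, so a failing downstream build no longer aborts the remaining iterations:

script {
    for (int i = 0; i < list_one.size(); i++) {
        def index_i = i
        stage("${list_one[index_i]}") {
            // propagate: false keeps this pipeline going even if the downstream build fails;
            // wait: true (the default) returns a RunWrapper whose result can be inspected
            def downstream = build job: 'Downstream Job',
                propagate: false,
                wait: true,
                parameters: [string(name: 'some_param', value: "${list_one[index_i]}")]
            echo "Downstream Job result for ${list_one[index_i]}: ${downstream.result}"
        }
    }
}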


How can I run each iteration of a for loop parallel in Jenkins pipeline?

I have a pipeline job which runs a sequence of actions (e.g. Build >> Run >> Report). I have put this sequence in a for loop because I receive a parameter telling me how many times to repeat the same sequence. Please find the sample code I have written below.
for (int i = 0; i < <param_val>; ++i) {
    node {
        stage('Build') {
            build 'Build'
        }
        stage('Run') {
            build 'Run'
        }
        stage('Reporting') {
            build 'Reporting'
        }
    }
}
Right now my code waits for one complete sequence to finish and only then starts the next one. That is time consuming. I have several slave agents and can run the sequences in parallel. How can I run each iteration of the for loop in parallel?
I have thought of a solution:
have one pipeline containing the actual sequence:
node {
    stage('Build') {
        build 'Build'
    }
    stage('Run') {
        build 'Run'
    }
    stage('Reporting') {
        build 'Reporting'
    }
}
have another pipeline with the for loop, which triggers the first pipeline with wait: false:
for (int i = 0; i < <param_val>; ++i) {
    build(job: 'pipeline-1', wait: false)
}
Does this work? Or is there a better option to do the same with a single pipeline?
Put the code inside the loop in a closure:
def oneNode = { c ->
    node {
        build job: 'Build', parameters: [string(name: 'Component', value: c)]
        build 'Run'
        build 'Reporting'
    }
}
Then make a map of all the closures you want to run simultaneously:
def jobs = [:]
def components = params.Components.split(",")
for (int i = 0; i < components.size(); ++i) {
    def component = components[i].trim()
    jobs[component] = {
        oneNode(component)
    }
}
And finally use the parallel step with the resulting map:
stage('Build, run, report') {
    <the code from the above steps>
    parallel jobs
}
This worked for me.
def jobs = [:]
def components = params.Components.split(",")
stage('Build, run, report') {
    for (int i = 0; i < components.size(); ++i) {
        def index = i
        def component = components[index].trim()
        jobs[i] = {
            node {
                build job: 'Build', parameters: [string(name: 'Component', value: component)]
                build 'Run'
                build 'Reporting'
            }
        }
    }
    parallel jobs
}

"java.lang.IllegalArgumentException: Expected a closure or failFast" Exception on running parallel builds

I have a pipeline which has two parallel runs in sequence: setup on many slaves in parallel, and after all the machine setup is done, a Build and Run stage as coded below. But when I try running the script I get the error java.lang.IllegalArgumentException: Expected a closure or failFast but found 0=org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper.
Code for reference:
def slaves = params.slaves
stage('Setup') {
    for (int i = 0; i < slaves.size(); ++i) {
        def slave = slaves[i]
        setup_builds[i] = build job: 'setup', parameters: [[$class: 'LabelParameterValue', name: 'TestMachine', label: slave]]
    }
    parallel setup_builds
}
stage('Build, run') {
    for (int i = 0; i < 4; ++i) {
        def index = i
        builds[i] = {
            stage('Build') {
                build job: 'Build'
            }
            stage('Run') {
                build job: 'Run', parameters: [string(name: 'index', value: index)]
            }
        }
    }
    parallel builds
}
I have tried using setup_builds.failFast = true and builds.failFast = true before parallel setup_builds and parallel builds. But even that didn't fix the issue.
I think one of the issues is that it expects a closure on the setup_builds level:
setup_builds[i] = { build job: 'setup' ... }
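A sketch of how the Setup stage might look once each map entry is a closure (assuming setup_builds starts as an empty map and using the slave labels as branch names):

def setup_builds = [:]
stage('Setup') {
    for (int i = 0; i < slaves.size(); ++i) {
        def slave = slaves[i]   // per-iteration copy captured by the closure
        setup_builds["setup ${slave}"] = {
            build job: 'setup',
                  parameters: [[$class: 'LabelParameterValue', name: 'TestMachine', label: slave]]
        }
    }
    parallel setup_builds
}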

Get all pipeline Jobs in Jenkins Groovy Script

I want to trigger several different pipeline jobs, depending on the input parameters of a Controller Pipeline job.
Within this job I build the names of the other pipelines I want to trigger from a list returned by a Python script.
node {
    stage('Get_Clusters_to_Build') {
        copyArtifacts filter: params.file_name_var_mapping, fingerprintArtifacts: true, projectName: 'UpdateConfig', selector: lastSuccessful()
        script {
            cmd_string = 'determine_ci_builds --jobname ' + env.JOB_NAME
            clusters = bat(script: cmd_string, returnStdout: true)
            output_array = clusters.split('\n')
            cluster_array = output_array[2].split(',')
        }
        echo "${clusters}"
    }
    jobs = Hudson.instance.getAllItems(AbstractProject.class)
    echo "$jobs"
    def builders = [:]
    for (i = 0; i < cluster_array.size(); i++) {
        def cluster = cluster_array[i]
        def job_to_build = "BuildCI_${cluster}".trim()
        echo "### branch${i}"
        echo "### ${job_to_build}"
        builders["${job_to_build}"] = {
            stage("${job_to_build}") {
                build "${job_to_build}"
            }
        }
    }
    parallel builders
    stage("TriggerTests") {
        echo "Done"
    }
}
My problem is that some of the jobs with the names I get from the stage Get_Clusters_to_Build may not exist. Therefore they cannot be triggered and my job fails.
Now to my question, is there a way to get the names of all pipeline jobs, and how can I use them to check if I can trigger a build?
I tried jobs = Hudson.instance.getAllItems(AbstractProject.class), but this only gives me the "normal" FreeStyleProject jobs.
I want to do something like this in the loop:
def builders = [:]
for (i = 0; i < cluster_array.size(); i++) {
    def cluster = cluster_array[i]
    def job_to_build = "BuildCI_${cluster}".trim()
    echo "### branch${i}"
    echo "### ${job_to_build}"
    // This part I only want to be executed if job_to_build is found in the jobs list, somehow like:
    if job_to_build in jobs:  // I know, this is not proper groovy syntax
        builders["${job_to_build}"] = {
            stage("${job_to_build}") {
                build "${job_to_build}"
            }
        }
}
parallel builders
All pipeline jobs are instances of org.jenkinsci.plugins.workflow.job.WorkflowJob, so you can get the names of all pipeline jobs using the following function:
@NonCPS
def getPipelineJobNames() {
    Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)*.fullName
}
Then you can use it this way
//...
def jobs = getPipelineJobNames()
if (job_to_build in jobs) {
    //....
}
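Put together with the loop from the question, the check might look like this (a minimal sketch reusing the helper above):

def jobs = getPipelineJobNames()
def builders = [:]
for (i = 0; i < cluster_array.size(); i++) {
    def cluster = cluster_array[i]
    def job_to_build = "BuildCI_${cluster}".trim()
    if (job_to_build in jobs) {
        builders[job_to_build] = {
            stage(job_to_build) {
                build job_to_build
            }
        }
    } else {
        echo "Skipping ${job_to_build}: no such pipeline job"
    }
}
parallel builders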
Try this syntax to get both standard and pipeline jobs:
def jobs = Hudson.instance.getAllItems(hudson.model.Job.class)
As @Vitalii Vitrenko wrote, this works fine:
for (job in Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)) {
    println job.fullName
}

Jenkins pipeline (parallel && dynamically)?

Question
I have a simple parallel pipeline (see code) which I use together with Jenkins 2.89.2. Additionally I use parameters and now want to be able to increase or decrease the number of deployVM A..Z stages automatically by providing a parameter before job execution.
How can I dynamically build my pipeline by providing a parameter?
Researched so far:
Jenkins pipeline script created dynamically - Not getting this to work with my Jenkins version
Can I create dynamically stages in a Jenkins pipeline? - Not working either
Code
The pseudo code of what I want - dynamic generation:
pipeline {
    agent any
    parameters {
        string(name: 'countTotal', defaultValue: '3')
    }
    stages {
        stage('deployVM') {
            def list = [:]
            for (int i = 0; i < countTotal.toInteger; i++) {
                list += stage("deployVM ${i}") {
                    steps {
                        script {
                            sh "echo p1; sleep 12s; echo phase${i}"
                        }
                    }
                }
            }
            failFast true
            parallel list
        }
    }
}
The code I have so far, which runs in parallel but is static:
pipeline {
    agent any
    stages {
        stage('deployVM') {
            failFast true
            parallel {
                stage('deployVM A') {
                    steps {
                        script {
                            sh "echo p1; sleep 12s; echo phase1"
                        }
                    }
                }
                stage('deployVM B') {
                    steps {
                        script {
                            sh "echo p1; sleep 20s; echo phase2"
                        }
                    }
                }
            }
        }
    }
}
Although the question assumes a declarative pipeline, I would suggest using a scripted pipeline because it is far more flexible.
Your task can be accomplished this way:
properties([
    parameters([
        string(name: 'countTotal', defaultValue: '3')
    ])
])

def stages = [failFast: true]
for (int i = 0; i < params.countTotal.toInteger(); i++) {
    def vmNumber = i // alias the loop variable to refer to it in the closure
    stages["deployVM ${vmNumber}"] = {
        stage("deployVM ${vmNumber}") {
            sh "echo p1; sleep 12s; echo phase${vmNumber}"
        }
    }
}
node() {
    parallel stages
}
Also take a look at the Snippet Generator, which allows you to generate scripted pipeline code.
You can also achieve this with a declarative pipeline; follow my answer HERE.
In the linked answer I used Var.collectEntries, but a plain map can also be used; a sketch of the collectEntries approach follows below.
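For illustration, a minimal scripted-style sketch of the collectEntries idea; the VM names here are hypothetical stand-ins for whatever the countTotal parameter produces (on older Jenkins versions, collection methods that take closures may need to live in a @NonCPS helper):

def vmNames = ['A', 'B', 'C']   // hypothetical input
def branches = vmNames.collectEntries { vm ->
    ["deployVM ${vm}".toString(), {
        stage("deployVM ${vm}") {
            sh "echo p1; sleep 12s; echo phase ${vm}"
        }
    }]
}
branches.failFast = true
parallel branches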
@Vitalii
I wrote a similar piece of code, but unfortunately all three elements being looped over show the last one. I'm not sure if it has something to do with Groovy / the Jenkinsfile itself, such as some closure / reference breaking with the wrong usage.
My purpose is to distribute tasks to specific worker nodes.
node_candidates = ["worker-1", "worker-2", "worker-3"]
def jobs = [:]
for (node_name in node_candidates) {
    jobs["run on $node_name"] = {   // good
        stage("run on $node_name") {   // all show the third
            node(node_name) {   // all show the third
                print "on $node_name"
                sh "hostname"
            }
        }
    }
}
parallel jobs
It works totally fine if I expand/unroll the loop instead of looping over it, like:
parallel worker_1: {
    stage("worker_1") {
        node("worker_1") {
            sh """hostname ; pwd """
            print "on worker_1"
        }
    }
}, worker_2: {
    stage("worker_2") {
        node("worker_2") {
            sh """hostname ; pwd """
            print "on worker_2"
        }
    }
}, worker_3: {
    stage("worker_3") {
        node("worker_3") {
            sh """hostname ; pwd """
            print "on worker_3"
        }
    }
}
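The likely cause is that node_name is declared in the for-in loop header, so it is a single variable shared by every closure in the map; by the time parallel runs the branches they all see its last value (the map keys look right because they are built eagerly, while the closure bodies are evaluated later). The usual fix, as in the answers above, is to copy the loop variable into one declared inside the loop body. A minimal sketch:

def node_candidates = ["worker-1", "worker-2", "worker-3"]
def jobs = [:]
for (node_name in node_candidates) {
    def target = node_name   // per-iteration copy; each closure captures its own value
    jobs["run on ${target}"] = {
        stage("run on ${target}") {
            node(target) {
                print "on ${target}"
                sh "hostname"
            }
        }
    }
}
parallel jobs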

Parameterized pipeline build to build more than one job in jenkins

A parameterized pipeline job has to take more than one job name as a parameter and start the parameterized jobs in parallel. I tried the code below but it isn't working:
def String[] jobs

stages {
    stage('stage1') {
        steps {
            script {
                jobs = jobnames.split(',')
                for (ii = 0; ii < jobs.size(); ii++) {
                    build job: "startjob_${jobs[ii]}", parameters: [string(name: 'BRANCH', value: String.valueOf(BRANCH)), string(name: 'CHANGENUM', value: String.valueOf(CHANGENUM))]
                }
            }
        }
    }
}
This code is working, but not the way I expected: I want to start all the jobs in parallel, but it is scheduling one job after the other.
Can anyone help me with this?
Try this
builds = [:]
for (ii = 0; ii < jobs.size(); ii++) {
    def jobName = "startjob_${jobs[ii]}"  // per-iteration copy, so each closure keeps its own job name
    builds << [
        (jobName): { ->
            build job: jobName, parameters: [string(name: 'BRANCH', value: String.valueOf(BRANCH)), string(name: 'CHANGENUM', value: String.valueOf(CHANGENUM))]
        }
    ]
}
parallel builds
