I'm running Jenkins 2.249.3 and am trying to create a pipeline that removes all old instances.
for (String Name : ClustersToRemove) {
    buildRemoveJob (Name, removeClusterBuilds, removeClusterBuildsResults)
    parallel removeClusterBuilds
}
and here is what the method does:
def buildRemoveJob (Name, removeClusterBuilds, removeClusterBuildsResults) {
    removeClusterBuilds[clusterName] = {
        //Random rnd = new Random()
        //sleep (Math.abs(rnd.nextInt(Integer.valueOf(rangeToRandom)) + Integer.valueOf(minimumRunInterval)))
        removeClusterBuildsResults[clusterName] = build job: 'Delete_Instance', wait: true, propagate: false, parameters: [
            [$class: 'StringParameterValue', name: 'Cluster_Name', value: clusterName],
        ]
    }
}
But... only one downstream job is launched.
I found this bug https://issues.jenkins.io/browse/JENKINS-55748, but it seems someone must have solved this issue by now, since it's a very common scenario.
Also here - How to run the same job multiple times in parallel with Jenkins? - I found some documentation, but it looks like it does not apply to running the same job.
The Build Pipeline plugin version is 1.5.8.
From the parallel command documentation:
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
    // do something
}, secondBranch: {
    // do something else
},
failFast: true|false
So you should first create a map of all executions and then run them all in parallel.
In your example, you should first iterate over the strings and create the executions map, then pass it to the parallel command. Something like this:
def executions = ClustersToRemove.collectEntries { cluster ->
    ["building ${cluster}": {
        stage("Build") {
            removeClusterBuildsResults[cluster] = build job: 'Delete_Instance', wait: true, propagate: false,
                parameters: [[$class: 'StringParameterValue', name: 'Cluster_Name', value: cluster]]
        }
    }]
}
parallel executions
or without the variable:
parallel ClustersToRemove.collectEntries {
...
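Filled in, that inline form could look roughly like this (a sketch reusing the job name 'Delete_Instance', the Cluster_Name parameter, and the removeClusterBuildsResults map from the question):
parallel ClustersToRemove.collectEntries { cluster ->
    ["building ${cluster}": {
        stage("Build") {
            // same build call as above, just without storing the map in a variable first
            removeClusterBuildsResults[cluster] = build(job: 'Delete_Instance', wait: true, propagate: false,
                parameters: [[$class: 'StringParameterValue', name: 'Cluster_Name', value: cluster]])
        }
    }]
}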
To put it plainly: it depends on the number of agents you have.
If you have a single agent, the other triggered builds go into the queue.
Hope that answers your question.
I have a requirement to trigger my job 3 times (example below) without waiting between triggers, but after all 3 jobs have been triggered, the pipeline has to wait until all 3 have finished, irrespective of pass or fail.
I am using wait: true, but that waits for each iteration, which is not what I want.
If I use wait: false, then once all iterations of the loop have completed, the pipeline does not wait for the downstream jobs to finish. I want the current job to wait until I have the results of all 3 downstream pipelines.
// job1 is a pipeline job which I am triggering multiple times with different params
stage('Trigger job1') {
    for (int cntr = 0; cntr < 3; cntr++) {
        build job: "job1",
              parameters: [string(name: 'param1', value: val[cntr])],
              wait: false
    }
}
I think what you actually want is to run them all in parallel and then wait until they all finish.
To do so you can use the parallel keyword:
parallel: Execute in parallel.
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
    // do something
}, secondBranch: {
    // do something else
},
failFast: true|false
In your case it can look something like:
stage('Build Jobs') {
    def values = ['value1', 'value2', 'value3']
    parallel values.collectEntries { value ->
        ["Building With ${value}": {
            build job: "job1",
                  parameters: [string(name: 'param1', value: value)],
                  wait: true
        }]
    }
}
Or if you want to use indexes instead of a constant list:
stage('Build Jobs') {
    def range = 0..2 // or range = [0, 1, 2]
    parallel range.collectEntries { num ->
        ["Iteration ${num}": {
            build job: "job1",
                  parameters: [string(name: 'param1', value: somefunc(num))],
                  wait: true
        }]
    }
}
This will execute all the jobs in parallel and then wait until they are all finished before progressing with the pipeline (don't forget to set the wait parameter of the build step to true).
You can find more examples for things like this here.
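If you also need the upstream pipeline to report which downstream builds failed without one failure aborting the other branches, one possible variation is to pass propagate: false and inspect the returned build results after parallel finishes (a sketch building on the example above; the results map and the error message are illustrative):
stage('Build Jobs') {
    def values = ['value1', 'value2', 'value3']
    def results = [:]   // downstream build result per value
    parallel values.collectEntries { value ->
        ["Building With ${value}": {
            // propagate: false keeps a failed downstream build from failing this branch immediately
            results[value] = build(job: 'job1',
                                   parameters: [string(name: 'param1', value: value)],
                                   wait: true,
                                   propagate: false)
        }]
    }
    def failed = values.findAll { results[it]?.result != 'SUCCESS' }
    if (failed) {
        error "Downstream builds failed for: ${failed.join(', ')}"
    }
}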
I have created a Jenkins pipeline to run a job (e.g. Pipeline A runs job B). Within job B there are multiple parameters. One of the parameters is a choice parameter that has multiple different choices. I need pipeline A to run job B with all of the different choices at once (Pipeline A runs Job B with all of the different choices in one build). I am not too familiar with the Jenkins declarative syntax, but I am guessing I would use some sort of for loop to iterate over all of the available choices?
I have searched and searched through Stack Overflow/Google for an answer but have not had much luck.
You can define the options in a separate file outside your jobs, in a shared library:
// vars/choices.groovy
def my_choices = [
    "Option A",
    "Option B", // etc.
]
You can then use these choices when defining the job:
// Job_1 Jenkinsfile
@Library('my-shared@master') _
properties([
    parameters([
        [$class: 'ChoiceParameterDefinition',
         name: 'MY_IMPORTANT_OPTION',
         choices: choices.my_choices as List,
         description: '',
        ],
        ...
pipeline {
    agent any
    stages {
        ...
In Job 2, you can iterate over the values:
// Job_2 Jenkinsfile
@Library('my-shared@master') _
pipeline {
    agent any
    stages {
        stage('Trigger Job_1') {
            steps {
                script {
                    for (String option : choices.my_choices) {
                        build job: "Job_1",
                              wait: false,
                              parameters: [string(name: 'MY_IMPORTANT_OPTION', value: option), // etc.
                              ]
                    }
                }
            }
        }
    }
}
When Job_2 is run, it will asynchronously trigger a number of runs of Job_1, each with a different parameter.
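If Job_2 should instead block until every triggered run of Job_1 has finished, the same loop can be turned into a map of branches and handed to parallel, as in the earlier answers (a sketch reusing choices.my_choices and the parameter name from the snippets above):
script {
    // one branch per choice, each waiting for its downstream run
    def branches = choices.my_choices.collectEntries { option ->
        ["Job_1 ${option}": {
            build job: 'Job_1',
                  wait: true,
                  parameters: [string(name: 'MY_IMPORTANT_OPTION', value: option)]
        }]
    }
    parallel branches
}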
I want to do a similar thing: call "some_job_pipeline" from a trigger pipeline, and have a parameter control whether it executes on the same node or on some specific Jenkins node.
If it should be executed on the same/master/parent job's Jenkins node, it should not consume a new executor, so that if I set the executor count of node "Node1" to 1, the job would still run successfully (it would not require a second executor).
For example, I have Trigger_Main_Job, which looks something like this:
node("${params.JenkinsNode}") {
stage("Stage 1") {
...
}
stage("some_job_pipeline") {
build job: 'some_job_pipeline', parameters: []
}
stage("Stage 3") {
...
}
...
}
and some_job_pipeline which looks something like this:
boolean runOnItsOwnNode = params.JenkinsNode?.trim()
properties([
    parameters([
        string(name: 'JenkinsNode', defaultValue: '', description: 'Node from which to deploy. If "JenkinsNode" is not passed - then it will use parent job node.')
    ])
])
if (runOnItsOwnNode) {
    node("${params.JenkinsNode}") {
        print "Deploying from node: '${params.JenkinsNode}'"
    }
}
else {
    print "Deploying from parent job node."
    ???? THIS_IS MISSING_PART ????
}
Note: there is a similar question, but it points out that the parent pipeline should be changed: Jenkins pipeline: how to trigger another job and wait for it without using an extra agent/executor. My question is whether it is possible to implement this, and how, without changing the "Trigger" job, so that I could create a "some_job_pipeline" whose execution depends only on the JenkinsNode parameter passed to it and not on the parent/calling job's implementation.
I tried different variants for the "???? THIS_IS MISSING_PART ????" code part, such as
node("master") {...}
and leaving out the "node" block entirely, and similar things. But no luck: "some_job_pipeline" still consumes/requires a new executor.
I have a pipeline that looks something like the one below.
stage('Build, run, report') {
    def builds = [:]
    for (int i = 0; i < components.size(); ++i) {
        def component = components[i]   // capture the element for use inside the closure
        builds[i] = {
            stage('Build') {
                build job: 'Build', parameters: [string(name: 'Component', value: component)]
            }
            stage('Run') {
                build job: 'Run', parameters: [string(name: 'Component', value: component)]
            }
            stage('Reporting') {
                build job: 'Reporting', parameters: [string(name: 'Component', value: component)]
            }
        }
    }
    parallel builds
}
Here "components" is a list coming from parameter of pipeline. I want to run the same flow according the number of component.
I have only one slave node with 4 executers. If I have 10 components 4 will start running immediately and the other 6 will be queued and will be waiting for the executer to be free.
I can get even more than 50 comonents in the list and having somany in the queue is not looking good and I don't feel this will be the right approach then.(I suspect there would be a limit of builds can be in queue also.)
Do we have a way to pause the parallel triggering till executers/slaves are available and resume one by one when the executer/slave is getting free ?
Or do we have a better way to handle it than parallel run in pipeline ?
I haven't tried it myself, but maybe you can consider using a quiet period for the build job after every 4 components.
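For illustration, the build step accepts a quietPeriod option (in seconds), so a rough, untested sketch of that idea is to give each batch of 4 components a longer quiet period, so that later batches sit out their quiet period instead of all piling into the executor queue at once (the batch size of 4 and the 60-second step are assumptions, not tested values):
def builds = [:]
components.eachWithIndex { component, i ->
    builds["Build ${component}"] = {
        // every batch of 4 gets an extra 60 s of quiet period: 0 s, 60 s, 120 s, ...
        build job: 'Build',
              quietPeriod: i.intdiv(4) * 60,
              parameters: [string(name: 'Component', value: component)]
    }
}
parallel builds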
I am building a pipeline workflow in Jenkins v2.8. What I would like to achieve is one step that triggers the same job multiple times at the same time with different parameters.
Example: I have a workflow called "Master" which has one step that reads my parameter "Number", a checkbox parameter with multiple selectable options. A user can trigger the workflow and select, for example, the values "1, 2, 3" for Number. What I would like is that when this step is executed, it calls my job "Master_Child" and triggers "Master_Child" with the 3 different parameter values at the same time.
I tried to do it in this way:
stage('MyStep') {
    steps {
        echo 'Deploying MyStep'
        script {
            env.NUMBER.split(',').each {
                build job: 'Master_Child', parameters: [string(name: 'NUMBER', value: "$it")]
            }
        }
    }
}
But with this, it reads the first parameter, triggers Master_Child with parameter 1, and waits until that job has finished; only then does it trigger the same job with parameter 2.
If I use wait: false on the job call, then the pipeline just triggers the jobs with the different parameters, but its result no longer depends on whether a sub-job fails.
Any ideas how to implement that ?
Thank you in advance.
I resolved my problem in this way.
stage('MyStage') {
    steps {
        echo 'Deploying MyStep'
        script {
            def numbers = [:]
            env.NUMBER.split(',').each { num ->   // capture the value explicitly for use inside the nested closure
                numbers["numbers${num}"] = {
                    build job: 'Master_Child', parameters: [string(name: 'NUMBER', value: num)]
                }
            }
            parallel numbers
        }
    }
}
Set wait in the build job step to false: wait: false
stage('MyStep') {
    steps {
        echo 'Deploying MyStep'
        script {
            env.NUMBER.split(',').each {
                build job: 'Master_Child', parameters: [string(name: 'NUMBER', value: "$it")], wait: false
            }
        }
    }
}