I have a Jenkins job A which takes, let's say, a parameter foo whose allowed values are (1,2,3,4,5,6,7). Now I want to make a Jenkins job B which runs job A with param foo set to 1,2,3,4,5,6,7 sequentially, i.e. Job B will run Job A 7 times, once for each value of foo, one after the other.
You can do something like below.
pipeline {
    agent any
    stages {
        stage('Job B') {
            steps {
                script {
                    def foo = [1, 2, 3, 4, 5, 6, 7]
                    foo.each { val ->
                        // the string() parameter expects a String value, so convert the number
                        build job: 'JobA', parameters: [string(name: 'fooInJobA', value: val.toString())], wait: true
                    }
                }
            }
        }
    }
}
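With wait: true and the default propagate: true, the loop runs Job A strictly one value after another and Job B fails as soon as one run of Job A fails. If Job B should instead keep going through the remaining values and just record each outcome, a small variation (a sketch; the parameter name fooInJobA is taken from the snippet above):
script {
    def results = [:]
    [1, 2, 3, 4, 5, 6, 7].each { val ->
        // propagate: false keeps Job B running even if this Job A run fails
        def run = build job: 'JobA', parameters: [string(name: 'fooInJobA', value: val.toString())], wait: true, propagate: false
        results[val] = run.result
    }
    echo "Job A results per foo value: ${results}"
}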
Related
I'm running a scripted pipeline which starts multiple downstream jobs in parallel.
In the main job, I'd like to collect the data and results of the parallel downstream builds so that I can process them later.
My main job looks like this:
def all_build_results = []

pipeline {
    stages {
        stage('building') {
            steps {
                script {
                    def build_list = [
                        ['PC': '01', 'number': '07891705'],
                        ['PC': '01', 'number': '00568100']
                    ]
                    parallel build_list.collectEntries { build_data ->
                        def br = [:]
                        ["Building With ${build_data}": {
                            br = build job: 'Downstream_Pipeline',
                                parameters: [
                                    string(name: 'build_data', value: "${build_data}")
                                ],
                                propagate: false,
                                wait: true
                            build_result = build_data + ['Ergebnis': br.getCurrentResult(), 'Name': br.getFullDisplayName(), 'Url': br.getAbsoluteUrl(),
                                'Dauer': br.getDurationString(), 'BuildVars': br.getBuildVariables()]
                            // Print result
                            print "${BuildResultToString(build_result)}"
                            // ->> everything singular
                            // save single result to result list
                            all_build_results = all_build_results + [build_result]
                        }]
                    }
                    // print build results
                    echo "$all_build_results"
                }
            }
        }
    }
}
Mostly the different results are saved separately in the "all_build_results" list, everything as it should be.
But sometimes one build result is listed twice and the other not at all!
At the print "${BuildResultToString(build_result)}" line the two results are still printed separately, but in "all_build_results" one result is added twice and the other is missing!
Why?
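A likely explanation, assuming nothing else touches these variables: build_result is assigned without def, so it is a single script-level variable shared by both parallel branches, and the branch that finishes last can overwrite it just before the other branch's result gets appended; appending to all_build_results from two branches at once is not atomic either. A minimal sketch of the usual workaround is to keep the result local to each branch:
parallel build_list.collectEntries { build_data ->
    ["Building With ${build_data}": {
        def br = build job: 'Downstream_Pipeline',
            parameters: [string(name: 'build_data', value: "${build_data}")],
            propagate: false, wait: true
        // 'def' makes this map local to the branch instead of a shared script variable
        def build_result = build_data + ['Ergebnis': br.getCurrentResult(), 'Name': br.getFullDisplayName()]
        print "${BuildResultToString(build_result)}"
        all_build_results << build_result   // still shared; merging per-branch results after 'parallel' avoids the race entirely
    }]
}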
I have to run a for loop in Groovy over 40 items, but I want to run it for 4 items in parallel, then the next batch, and so on. I know about parallel deployments in a Jenkinsfile, but that triggers all 40 at once.
def i = 0
mslist.collate(4).each {
    build job: 'deploy', parameters: [string(name: 'PROJECT', value: "${it[i]}"), string(name: 'ENVIRONMENT', value: params.ENVIRONMENT)]
    i = i + 1
}
My updated code:
stages {
    stage('Parallel Deployments') {
        steps {
            script {
                def m = rc.ic()
                m = m.collect { "${it}" }
                println "$m"
                m.collate(4).each { batch ->
                    def deployments = [:]
                    // use an explicit parameter name so the nested closure doesn't shadow it with its own implicit 'it'
                    batch.each { item ->
                        deployments[item] = {
                            build job: 'jb', parameters: [string(name: 'pt', value: item), string(name: 'pl', value: params.gh), string(name: 'dc', value: params.nb)]
                        }
                    }
                    deployments.failFast = false
                    parallel deployments
                }
            }
        }
    }
}
It can be done like this:
node {
    def items = (1..40).collect { "item-${it}" }
    items.collate(4).each { List batch ->
        def n = [:]
        // name the element explicitly: inside the nested closure an implicit 'it' would shadow the batch element
        batch.each { item ->
            n[item] = {
                stage(item) {
                    build job: 'x', parameters: [string(name: 'it', value: item)]
                }
            }
        }
        parallel n
    }
}
Job x Jenkinsfile content:
node {
    echo "Hello from Pipeline x"
    print params
}
This will invoke 4 jobs at a time and run them in parallel. Make sure you have more than 4 executors configured on Jenkins.
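If one failing build should also cancel the rest of its batch, the scripted parallel step accepts a failFast entry in the same map (a small variation of the snippet above):
items.collate(4).each { List batch ->
    def n = [:]
    batch.each { item ->
        n[item] = { build job: 'x', parameters: [string(name: 'it', value: item)] }
    }
    n.failFast = true   // abort the remaining branches of this batch as soon as one fails
    parallel n
}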
You can do something like:
def items = (1..40).collect { "item-${it}" }
items.collate(4).each { List batch ->
    // batch is now a list of 4 items, do something with it here
}
to use the Groovy Iterable.collate method to split the items into batches of four items and loop through the batches.
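For reference, collate keeps any remainder as a shorter final batch, for example:
def batches = (1..10).toList().collate(4)
assert batches == [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10]]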
If you really want to do this "in parallel", as in using multiple threads, then that is a different question.
I am trying to create a pipeline in Jenkins which triggers the same job multiple times on different nodes (agents).
I have a "Create_Invoice" job in Jenkins, configured with "Execute concurrent builds if necessary".
If I click on Build 10 times, it will run 10 times on different (available) agents/nodes.
Instead of me clicking 10 times, I want to create a parallel pipeline.
I created something like below - it triggers the job, but only once.
What am I missing, or is it even possible to trigger the same test more than once at the same time from a pipeline?
Thank you in advance
node {
    def notifyBuild = { String buildStatus ->
        // build status of null means successful
        buildStatus = buildStatus ?: 'SUCCESSFUL'
    }
    // Default values
    def tasks = [:]
    try {
        tasks["Test-1"] = {
            stage("Test-1") {
                b = build(job: "Create_Invoice", propagate: false).result
            }
        }
        tasks["Test-2"] = {
            stage("Test-2") {
                b = build(job: "Create_Invoice", propagate: false).result
            }
        }
        parallel tasks
    } catch (e) {
        // If there was an exception thrown, the build failed
        currentBuild.result = "FAILED"
        throw e
    } finally {
        notifyBuild(currentBuild.result)
    }
}
I had the same problem and solved it by passing different parameters to the same job. You should add parameters to your build steps, although you obviously don't need them. For example, I added a string parameter.
tasks["Test-1"] = {
stage ("Test-1") {
b = build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: "1")], propagate: false).result
}
}
tasks["Test-2"] = {
stage ("Test-2") {
b = build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: "2")], propagate: false).result
}
}
As long as the same parameters (or no parameters) are passed to the same job, the job is only triggered once.
See also this Jenkins issue, it describes the same problem:
https://issues.jenkins.io/browse/JENKINS-55748
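If you need more than two concurrent runs, the same trick scales by generating the branches in a loop, each with a unique parameter value so the queued builds are not collapsed (a sketch based on the answer above; the count of 10 is arbitrary):
def tasks = [:]
(1..10).each { i ->
    tasks["Test-${i}"] = {
        stage("Test-${i}") {
            build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: i.toString())], propagate: false)
        }
    }
}
parallel tasks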
I think you could switch to a Declarative pipeline instead of a Scripted pipeline.
Declarative pipelines have parallel stages support, which is what you are after:
https://www.jenkins.io/blog/2017/09/25/declarative-1/
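A minimal sketch of what that could look like here (assuming, as in the other answer, that each branch passes a distinct dummy parameter so the builds are not deduplicated):
pipeline {
    agent any
    stages {
        stage('Run Create_Invoice twice') {
            parallel {
                stage('Test-1') {
                    steps {
                        build job: 'Create_Invoice', parameters: [string(name: 'PARAM', value: '1')], propagate: false
                    }
                }
                stage('Test-2') {
                    steps {
                        build job: 'Create_Invoice', parameters: [string(name: 'PARAM', value: '2')], propagate: false
                    }
                }
            }
        }
    }
}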
This example grabs the available agents from Jenkins, iterates over them and runs the pipeline on every agent that is online.
With this approach you don't need to invoke this job from an upstream job many times to build on different agents. The job itself manages everything and runs all the defined stages on every online node.
jenkins.model.Jenkins.instance.computers.each { c ->
    if (c.node.toComputer().online) {
        node(c.node.labelString) {
            stage('steps-one') {
                echo "Hello from Steps One"
            }
            stage('stage-two') {
                echo "Hello from Steps Two"
            }
        }
    } else {
        println "SKIP ${c.node.labelString} because the status is: ${c.node.toComputer().online}"
    }
}
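Note that this loop visits the agents one after another. If the stages should run on all online agents at the same time, the node blocks can be collected into a map and handed to parallel (a sketch under the same assumptions as the snippet above):
def branches = [:]
jenkins.model.Jenkins.instance.computers.each { c ->
    if (c.node.toComputer().online) {
        def label = c.node.labelString   // capture into a local so each branch keeps its own label
        branches[label] = {
            node(label) {
                stage("steps-one on ${label}") {
                    echo "Hello from Steps One"
                }
            }
        }
    }
}
parallel branches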
In Jenkins I am currently configuring a pipeline job that runs based on a choice parameter's value; for each choice value there is a certain set of jobs that needs to run in parallel. For example, when I build with the parameter value Job1, only Job1's parallel jobs should be built. But with what I tried below, all of the jobs are built. Is there a way to build the jobs based on the parameter value?
Choice Parameter
Name: Param
Values: Job1, Job2
import jenkins.model.*
import hudson.model.*

node('') {
    stage('Parallel-Job1') {
        parallel(test1: {
            stage('Parallel-test1') {
                build job: 'test1', propagate: false
                def jobname1 = "test1"
            }
        }, test2: {
            stage('Parallel-test2') {
                build job: 'test2', propagate: false
                def jobname2 = "test2"
            }
        })
    }
    stage('Parallel-Job2') {
        parallel(test3: {
            stage('Parallel-test3') {
                build job: 'test3', propagate: false
                def jobname1 = "test3"
            }
        })
    }
}
if (param == "Job1") {
stage('Parallel-Job1') {steps ..}
PA: in this case you won't see the skipped pipeline stage on the general view
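Spelled out for the pipeline above, a sketch of the scripted variant (the choice parameter is exposed as params.Param):
node {
    if (params.Param == 'Job1') {
        stage('Parallel-Job1') {
            parallel(
                test1: { build job: 'test1', propagate: false },
                test2: { build job: 'test2', propagate: false }
            )
        }
    }
    if (params.Param == 'Job2') {
        stage('Parallel-Job2') {
            build job: 'test3', propagate: false
        }
    }
}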
Or:
stage('conditional stage') {
    agent { label 'my-node' }
    when {
        expression {
            return params.Param != 'Job1'
        }
    }
    steps {
        echo 'foo bar'
    }
}
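Putting the pieces together for the question above, a declarative sketch (parameter and job names taken from the question):
pipeline {
    agent any
    parameters {
        choice(name: 'Param', choices: ['Job1', 'Job2'], description: 'Which set of jobs to build')
    }
    stages {
        stage('Parallel-Job1') {
            when { expression { params.Param == 'Job1' } }
            parallel {
                stage('Parallel-test1') {
                    steps { build job: 'test1', propagate: false }
                }
                stage('Parallel-test2') {
                    steps { build job: 'test2', propagate: false }
                }
            }
        }
        stage('Parallel-Job2') {
            when { expression { params.Param == 'Job2' } }
            steps { build job: 'test3', propagate: false }
        }
    }
}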
I have a Jenkins Pipeline that executes Job A and Job B. I have 10 agents/nodes on which Job A is executed.
If I specify Agent1 when I build the Pipeline, then Job A should execute on Agent1.
Issue:
The Pipeline stage runs on Agent1, but JOB A gets picked up by any random available agent.
Script:
pipeline {
    agent none
    stages {
        stage('JOB A') {
            agent { label "${machine}" }
            steps {
                build job: 'JOB A', parameters: [a, b, c, d, e, f]
            }
        }
        stage('JOB B') {
            agent { label 'xyz' }
            steps {
                build job: 'JOB B', parameters: [a, b, c, d, e, f]
            }
        }
    }
}
I'm using a different label for every agent.
Can someone help me understand how and where the Pipeline and downstream jobs are running?
Thanks!
As rightly pointed out by @yong, I had "specified the agent label for the stage, not for JOB A".
So I declared a label parameter in JOB A and passed it downstream via the Pipeline. It now correctly executes on the specified agent.
pipeline {
    agent { label 'master' }
    stages {
        stage('JOB A') {
            steps {
                build job: 'JOB A', parameters: [a, [$class: 'LabelParameterValue', name: 'Agent', label: "${Agent}"], b, c, d]
            }
        }
        stage('JOB B') {
            steps {
                build job: 'JOB B', parameters: [x, y, z]
            }
        }
    }
}
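The LabelParameterValue used above comes from the NodeLabel Parameter plugin. On the JOB A side the passed label has to be consumed somewhere; if JOB A were itself a declarative pipeline with an 'Agent' parameter, that could look roughly like this (a sketch, not the exact setup from the answer):
pipeline {
    // 'Agent' is assumed to be a parameter of JOB A holding the target label
    agent { label "${params.Agent}" }
    stages {
        stage('Build') {
            steps {
                echo "Running JOB A on ${env.NODE_NAME}"
            }
        }
    }
}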