Jenkins Pipeline best way to use choice parameter for checking conditions

The pipeline executes two jobs: Job X and Job Y.
The pipeline has a choice parameter which is required by Job Y.
Choice parameter options: A, B and C.
Job Y has 3 conditions:
if Choice == A then do task 1
else if Choice == B then do task 2
else do task 3
I'm getting stuck declaring the choice conditions in the stage for Job Y.
P.S. I tried the Active Choices parameter but couldn't work through it.
Can anybody help out with the logic for this problem?

I am assuming that Job Y is called from your pipeline as a downstream job. Thus somewhere (probably at the end of your pipeline) you will have:
build job: 'CloudbeeFolder1/Path/To/JobY', propagate: false, wait: false, parameters: [[$class: 'StringParameterValue', name: 'MY_PARAM', value: "${env.SOME_VALUE}"]]
Then in JobY on the "other side" you have:
environment {
    PARAM_FROM_PIPELINE = "${params.MY_PARAM}"
}
This gets the value of your parameter into an environment variable in JobY.
Depending on what the tasks are you could perform them in a batch (or sh) file by passing PARAM_FROM_PIPELINE like so:
stages {
    stage("Do Tasks") {
        steps {
            bat "mybatchfile.bat ${env.PARAM_FROM_PIPELINE}"
        }
    }
}
Finally, in mybatchfile.bat you can read the value passed from ${env.PARAM_FROM_PIPELINE} as the first argument, like so:
@ECHO OFF
SET PARAM_VAL=%1
ECHO PARAM VALUE IS: %PARAM_VAL%
IF "%PARAM_VAL%"=="A" (
    REM DO TASK1
) ELSE (
    IF "%PARAM_VAL%"=="B" (
        REM DO TASK2
    ) ELSE (
        REM DO TASK3
    )
)
If you don't want to encapsulate the if-else logic in a batch file, you can use a script {...} block in your Jenkinsfile to embed scripted-pipeline logic, as in the sketch below.
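For example, a minimal sketch reworking the "Do Tasks" stage above; it assumes the choice parameter is named CHOICE, and the echo steps are placeholders for your actual tasks:
stage("Do Tasks") {
    steps {
        script {
            if (params.CHOICE == 'A') {
                echo 'Running task 1' // placeholder for task 1
            } else if (params.CHOICE == 'B') {
                echo 'Running task 2' // placeholder for task 2
            } else {
                echo 'Running task 3' // placeholder for task 3
            }
        }
    }
}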

Related

How to enforce jenkins job to wait until all jobs executed in loop

I have a requirement to trigger my job with 3 iterations (3 in the example below) without waiting on each one, but after all 3 jobs are triggered the pipeline has to wait until all 3 jobs have finished, irrespective of pass or fail.
I am using wait: true, but that waits on each iteration, which is not what I want.
If I use wait: false, it does not wait once all iterations in the loop have completed; it does not wait for the downstream jobs to finish. I want the current job to wait until I have the results of all 3 downstream pipelines.
//job1 is a pipeline job which I am triggering multiple times with different params
stage {
    for (int cntr = 0; cntr < 3; cntr++) {
        build job: "job1",
            parameters: [string(name: 'param1', value: val[cntr])],
            wait: false
    }
}
I think what you actually want is to run them all in parallel and then wait until they all finish.
To do so you can use the parallel keyword:
parallel: Execute in parallel.
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
// do something
}, secondBranch: {
// do something else
},
failFast: true|false
In your case it can look something like:
stage('Build Jobs') {
    def values = ['value1', 'value2', 'value3']
    parallel values.collectEntries { value ->
        ["Building With ${value}": {
            build job: "job1",
                parameters: [string(name: 'param1', value: value)],
                wait: true
        }]
    }
}
Or if you want to use indexes instead of a constant list:
stage('Build Jobs') {
    def range = 0..2 // or range = [0, 1, 2]
    parallel range.collectEntries { num ->
        ["Iteration ${num}": {
            build job: "job1",
                parameters: [string(name: 'param1', value: somefunc(num))],
                wait: true
        }]
    }
}
This will execute all the jobs in parallel and then wait until they are all finished before progressing with the pipeline (don't forget to set the wait parameter of the build step to true).
You can find more examples for things like this here.

Can Jenkins run the same job multiple times in parallel?

I'm running Jenkins version 2.249.3 and am trying to create a pipeline that removes all old instances.
for (String Name : ClustersToRemove) {
    buildRemoveJob(Name, removeClusterBuilds, removeClusterBuildsResults)
    parallel removeClusterBuilds
}
and what the method does is:
def buildRemoveJob(Name, removeClusterBuilds, removeClusterBuildsResults) {
    removeClusterBuilds[clusterName] = {
        //Random rnd = new Random()
        //sleep (Math.abs(rnd.nextInt(Integer.valueOf(rangeToRandom)) + Integer.valueOf(minimumRunInterval)))
        removeClusterBuildsResults[clusterName] = build job: 'Delete_Instance', wait: true, propagate: false, parameters: [
            [$class: 'StringParameterValue', name: 'Cluster_Name', value: clusterName],
        ]
    }
}
But... I get only one downstream job being launched.
I found this bug https://issues.jenkins.io/browse/JENKINS-55748, but it looks like someone must have solved this issue since it's a very common scenario.
Also here - How to run the same job multiple times in parallel with Jenkins? - I found documentation, but it looks like it does not apply to identical jobs.
The version of the Build Pipeline plugin is 1.5.8.
From the parallel command documentation:
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
// do something
}, secondBranch: {
// do something else
},
failFast: true|false
So you should first create a map of all executions and then run them all in parallel.
In your example, you should first iterate over the strings and create the executions map, then pass it to the parallel command. Something like this:
def executions = ClustersToRemove.collectEntries {
    ["building ${it}": {
        stage("Build") {
            removeClusterBuildsResults[it] = build job: 'Delete_Instance', wait: true, propagate: false,
                parameters: [[$class: 'StringParameterValue', name: 'Cluster_Name', value: it]]
        }
    }]
}
parallel executions
or without the variable:
parallel ClustersToRemove.collectEntries {
...
Yes, to be straightforward, it depends on the number of agents (executors) you have.
If you have a single executor, the other triggered builds go to the queue.
Hope that answers your question.

How to use multiple choices to run a job?

I have created a Jenkins pipeline to run a job (e.g. pipeline A runs job B). Within job B there are multiple parameters. One of the parameters is a choice parameter that has multiple different choices. I need pipeline A to run job B with all of the different choices at once (pipeline A runs job B with all of the different choices in one build). I am not too familiar with the Jenkins declarative syntax, but I am guessing I would use some sort of for loop to iterate over all of the available choices?
I have searched and searched through Stack Overflow/Google for an answer but have not had much luck.
You can define the options in a separate file outside your jobs, in a shared library:
// vars/choices.groovy
def my_choices = [
    "Option A",
    "Option B", // etc.
]
You can then use these choices when defining the job:
// Job_1 Jenkinsfile
@Library('my-shared#master') _
properties([
    parameters([
        [$class: 'ChoiceParameterDefinition',
            name: 'MY_IMPORTANT_OPTION',
            choices: choices.my_choices as List,
            description: '',
        ],
        ...
pipeline {
    agent any
    stages {
        ...
In Job 2, you can iterate over the values:
// Job_2 Jenkinsfile
@Library('my-shared#master') _
pipeline {
    agent any
    stages {
        stage('Trigger Job_1') {
            steps {
                script {
                    for (String option : choices.my_choices) {
                        build job: "Job_1",
                            wait: false,
                            parameters: [ string(name: 'MY_IMPORTANT_OPTION', value: option), // etc.
                            ]
                    }
                }
            }
        }
    }
}
Job_2, when it is run, will asynchronously trigger a number of runs of Job_1, each time with a different parameter.

How can I parameterize Jenkinsfile jobs

I have Jenkins Pipeline jobs where the only difference between the jobs is a parameter, a single "name" value. I could even use the multibranch job name (though not what it's passing as JOB_NAME, which is the BRANCH name; sadly none of the envs look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, after the build has run once, you will see the "Build with Parameters" button on the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam.
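For example, a minimal sketch assuming the myParam parameter defined above, in a declarative stage:
stage('Use Param') {
    steps {
        echo "myParam is: ${params.myParam}" // reads the value supplied at build time
    }
}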
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but actually the docs are quite nice, so I'll copy from there. First the myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but you can have any number of parameters of any type.
I use this approach and have one groovy script that takes 3 parameters (2 Strings and an int), with 15-20 Jenkinsfiles that use that script via the shared library, and it works perfectly. The motivation is of course one of the most basic rules in programming (not an exact quote, but it goes something like): if you have the "same code" in 2 different places, something is not right.
There is an option This project is parameterized in your pipeline job configuration. Enter a variable name and, if you wish, a default value. In the pipeline, access this variable with env.variable_name.
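For example, a one-line sketch assuming you named the parameter variable_name in the job configuration:
echo "variable_name is: ${env.variable_name}" // job parameters are exposed as environment variables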

Aggregating results of downstream parameterised jobs in Jenkins

I have a Jenkins Build job which triggers multiple Test jobs with the test name as a parameter using the Jenkins Parameterized Trigger Plugin. This kicks off a number of test builds on multiple executors which all run correctly.
I now want to aggregate the results using 'Aggregate downstream test results -> Automatically aggregate all downstream tests'. I have enabled this in the Build job and have set up fingerprinting so that these are recognised as downstream jobs. In the Build job's lastBuild page I can see that they are recognised as downstream builds:
Downstream Builds
Test #1-#3
When I click on "Aggregated Test Results" however it only shows the latest of these (Test #3). This may be good behaviour if the job always runs the same tests but mine all run different parts of my test suite.
Is there some way I can get this to aggregate all of the relevant downstream Test builds?
Additional:
Aggregated Test Results does work if you replicate the Test job. This is not ideal as I have a large number of test suites.
I'll outline the manual solution (as mentioned in the comments), and provide more details if you need them later:
Let P be the parent job and D be a downstream job (you can easily extend the approach to multiple downstream jobs).
An instance (build) of P invokes D via the Parameterized Trigger Plugin as a build step (not as a post-build step) and waits for D to finish. Along with other parameters, P passes to D a parameter, let's call it PARENT_ID, based on P's build's BUILD_ID.
D executes the tests and archives the results as artifacts (along with JUnit reports, if applicable).
P then executes an external Python (or internal Groovy) script that finds the appropriate build of D via PARENT_ID (you iterate over builds of D and examine the value of the PARENT_ID parameter). The script then copies the artifacts from D to P, and P publishes them.
If using Python (that's what I do), utilize the Python JenkinsAPI wrapper. If using Groovy, utilize the Groovy Plugin and run your script as a system script; you can then access Jenkins via its Java API.
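For illustration, a rough system-Groovy sketch of the lookup step (run via the Groovy Plugin's system script build step, which binds build to the current build of P; the job name 'D' and parameter name PARENT_ID follow the outline above, and the actual artifact copy is omitted):
import jenkins.model.Jenkins
import hudson.model.Job
import hudson.model.ParametersAction

// P's own build id, which was passed to D as PARENT_ID
def parentId = build.id
// Scan D's builds for the one whose PARENT_ID parameter matches
def d = Jenkins.instance.getItemByFullName('D', Job)
def match = d.builds.find { b ->
    b.getAction(ParametersAction)?.getParameter('PARENT_ID')?.value == parentId
}
println(match ? "Found downstream build: ${match.fullDisplayName}" : "No matching build of D")
// Next: copy match's artifacts into P's workspace and publish them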
I came up with the following solution using declarative pipelines.
It requires installation of the "copy artifact" plugin.
In the downstream job, set an "env" variable with the path (or pattern path) to the result file:
post {
    always {
        script {
            // Rem: Must be set BEFORE the execution that may fail
            env.RESULT_FILE = 'Devices\\resultsA.xml'
        }
        xunit([GoogleTest(
            pattern: env.RESULT_FILE,
        )])
    }
}
Note that I use xunit, but the same applies with junit.
In the parent job, save the build run objects, then in the post section aggregate the results with the following code:
def runs = []
pipeline {
    agent any
    stages {
        stage('Tests') {
            parallel {
                stage('test A') {
                    steps {
                        script {
                            runs << build(job: "test A", propagate: false)
                        }
                    }
                }
                stage('test B') {
                    steps {
                        script {
                            runs << build(job: "test B", propagate: false)
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                currentBuild.result = 'SUCCESS'
                def result_files = []
                runs.each {
                    if (it.result != 'SUCCESS') {
                        currentBuild.result = it.result
                    }
                    copyArtifacts(
                        filter: it.buildVariables.RESULT_FILE,
                        fingerprintArtifacts: true,
                        projectName: it.getProjectName(),
                        selector: specific(it.getNumber().toString())
                    )
                    result_files << it.buildVariables.RESULT_FILE
                }
                env.RESULT_FILE = result_files.join(',')
                println('Results aggregated from ' + env.RESULT_FILE)
            }
            archiveArtifacts env.RESULT_FILE
            xunit([GoogleTest(
                pattern: env.RESULT_FILE,
            )])
        }
    }
}
Note that the parent job also sets the "env" variable, so it can itself be aggregated by a parent job.
