How to create dynamic parameters in Jenkins

I want to create a Jenkins job that copies some file from one server to another. I have a list of servers. What I want is to define two parameters SOURCE and TARGET, and use Groovy scripts to create a dropdown list for each parameter, like for SOURCE, the list looks like this:
return [
    "server1",
    "server2",
    "server3",
    "server4",
    "server5"
]
For the second parameter, I want the list be the same as above, but remove the one that was chosen in the first parameter. So if server1 was chosen for SOURCE, the list for TARGET should be:
return [
    "server2",
    "server3",
    "server4",
    "server5"
]
If server3 was chosen for SOURCE, then the list for TARGET will be:
return [
    "server1",
    "server2",
    "server4",
    "server5"
]
I can use Groovy for the TARGET like:
if (SOURCE.equals("server1")) {
    return ["server2", "server3", "server4", "server5"]
}
else if (SOURCE.equals("server2")) {
....
But the list is over 50 long and I prefer not to have over 50 "if"s in the script.
Is there a better way to create the TARGET list = the SOURCE list - the SOURCE choice?
Thanks!

You can do something like the below.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    // Get input 1 here
                    def firstList = getListEnv()
                    echo "First List : $firstList"
                    // User selected server4
                    def selectedServer = "server4"
                    // Get the second List now
                    def nl = excludeAndGet(selectedServer)
                    echo "Second List : $nl"
                }
            }
        }
    }
}

def excludeAndGet(name) {
    def list = getListEnv()
    list.remove(list.indexOf(name))
    return list
}

def getListEnv() {
    return [
        "server1",
        "server2",
        "server3",
        "server4",
        "server5"
    ]
}
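If the dropdowns are built with the Active Choices plugin (a scripted Choice parameter for SOURCE and a Reactive parameter for TARGET that references SOURCE), the 50-branch if/else collapses into a single list subtraction. A minimal sketch, under the assumption that the plugin injects the selected SOURCE value into the TARGET script:

```groovy
// Script for the TARGET Active Choices Reactive Parameter.
// SOURCE is available here only because TARGET lists it as a referenced parameter.
def servers = ["server1", "server2", "server3", "server4", "server5"]

// Groovy's list subtraction returns a new list without the chosen entry
return servers - SOURCE
```

The same operator also simplifies the excludeAndGet helper above: `return getListEnv() - name`.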

Related

Define global variable in Jenkins Shared Library

I created a Jenkins Shared Library with many functions in /vars. Among them, there is a devopsProperties.groovy with many properties:
class devopsProperties {
    //HOSTS & SLAVES
    final String delivery_host = "****"
    final String yamale_slave = "****"
    //GIT
    final Map<String,String> git_orga_by_project = [
        "project1" : "orga1",
        "project2" : "orga2",
        "project3" : "orga3"
    ]
    ...
}
Other functions in my Shared Library use these parameters. For example, gitGetOrga.groovy :
def call(String project_name) {
    devopsProperties.git_orga_by_project.each {
        if (project_name.startsWith(it.key)) {
            orga_found = it.value
        }
    }
    return orga_found
}
But now, as we have many environments, we need to load the devopsProperties at the beginning of the pipeline. I created properties files in the resources:
+- resources
   +- properties
      +- properties-dev.yaml
      +- properties-val.yaml
      +- properties-prod.yaml
and created a function to load them:
def call(String environment = "PROD") {
    // load the specific environment properties file
    switch (environment.toUpperCase()) {
        case "DEV":
            def propsText = libraryResource 'properties/properties-dev.yaml'
            devopsProperties = readYaml text: propsText
            print "INFO : DEV properties loaded"
            break
        case "VAL":
            def propsText = libraryResource 'properties/properties-val.yaml'
            devopsProperties = readYaml text: propsText
            print "INFO : VAL properties loaded"
            break
        case "PROD":
            def propsText = libraryResource 'properties/properties-prod.yaml'
            devopsProperties = readYaml text: propsText
            print "INFO : PROD properties loaded"
            break
        default:
            print "ERROR : environment unknown, choose between DEV, VAL or PROD"
            break
    }
    return devopsProperties
}
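The switch can also be collapsed by interpolating the environment name into the resource path; a sketch, assuming the files keep the properties-<env>.yaml naming shown above:

```groovy
def call(String environment = "PROD") {
    def env = environment.toUpperCase()
    if (!(env in ["DEV", "VAL", "PROD"])) {
        error "environment unknown, choose between DEV, VAL or PROD"
    }
    // derive the resource path instead of switching on each environment
    def propsText = libraryResource "properties/properties-${env.toLowerCase()}.yaml"
    print "INFO : ${env} properties loaded"
    return readYaml(text: propsText)
}
```

Adding a new environment then only requires dropping a new YAML file into resources/properties and extending the allowed list.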
but when I try to use it in a pipeline :
@Library('Jenkins-SharedLibraries') _
devopsProperties = initProperties("DEV")
pipeline {
    agent none
    stages {
        stage("SLAVE JENKINS") {
            agent {
                node {
                    label ***
                }
            }
            stages {
                stage('Test') {
                    steps {
                        script {
                            print devopsProperties.delivery_host // THIS IS OK
                            print devopsProperties.git_orga_by_project["project1"] // THIS IS OK
                            print gitGetOrga("project1") // THIS IS NOT OK
                        }
                    }
                }
            }
        }
    }
}
The last print generates an error: groovy.lang.MissingPropertyException: No such property: devopsProperties for class: gitGetOrga
How can I use a global variable in all my Jenkins Shared Library functions? If possible, I'd prefer not to pass it as a parameter to every function.
EDITED
First, you need to place gitGetOrga.groovy into the 'src' directory, which sits at the same level as 'vars' and contains a Java-style package for your code.
After that, you need to import your class in gitGetOrga.groovy:
import com.you-company.project-name.devopsProperties
def call(String project_name) {
    devopsProperties.git_orga_by_project.each {
        if (project_name.startsWith(it.key)) {
            orga_found = it.value
        }
    }
    return orga_found
}
You can find more information in the Jenkins docs: https://www.jenkins.io/doc/book/pipeline/shared-libraries/#writing-libraries

Sort map by value in Groovy jenkins pipeline script

How do you do a custom sort of a Map, for example by value, in a Jenkins pipeline script?
This code doesn't quite work in a Jenkins pipeline script:
Map m = [james  : "silly boy",
         janny  : "Crazy girl",
         jimmy  : "funny man",
         georges: "massive fella"]
Map sorted = m.sort { a, b -> a.value <=> b.value }
The map is still not sorted.
I decided to create a separate question with a better name and tags, because many people were struggling to find an answer here:
Groovy custom sort a map by value
You will have to create a separate method with the @NonCPS annotation for that:
@NonCPS
def getSorted(def toBeSorted) {
    toBeSorted.sort() { a, b -> b.value <=> a.value }
}
And then call it from the pipeline script.
Map unsortedMap = [james  : "silly boy",
                   janny  : "Crazy girl",
                   jimmy  : "funny man",
                   georges: "massive fella"]
def sortedMap = getSorted(unsortedMap)
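One thing to watch: String comparison is case-sensitive, so uppercase letters sort before lowercase ones. With the descending b.value <=> a.value comparator above, the sample map ends up ordered like this (plain Groovy, runnable outside Jenkins):

```groovy
def m = [james: "silly boy", janny: "Crazy girl", jimmy: "funny man", georges: "massive fella"]
// descending sort by value; "Crazy girl" lands last because 'C' sorts before any lowercase letter
def sorted = m.sort { a, b -> b.value <=> a.value }
assert sorted.keySet().toList() == ["james", "georges", "jimmy", "janny"]
```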
Another example, for a job whose parameter names carry ordered prefixes such as 1.xx, 2.xx, ..., sorts the params by key before joining the values:
pipeline {
    agent {
        kubernetes {
            inheritFrom 'seunggabi-batch'
            defaultContainer 'seunggabi-batch'
        }
    }
    environment {
        COUNTRY = "kr"
        ENV = "prod"
        CLASS = "seunggabi.batch.job.SparkSubmitJob"
    }
    stages {
        stage('Run Job') {
            steps {
                script {
                    ARGS = sorted(params).collect { /$it.value/ }.join(",")
                }
                sh "/app/static/sh/emr.sh 1 20 ${COUNTRY} ${ENV} ${CLASS} \"${ARGS}\""
            }
        }
    }
}
@NonCPS
def sorted(def m) {
    m.sort { /$it.key/ }
}
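The sort-then-join chain can be checked in plain Groovy; the map literal below is illustrative, not from the original job:

```groovy
// params with positional prefixes like "1.", "2.", "3."
def params = ["2.region": "us", "1.env": "prod", "3.size": "10"]
// single-parameter closure: sort entries by key so the arguments come out in order
def args = params.sort { it.key }.collect { "$it.value" }.join(",")
assert args == "prod,us,10"
```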

Jenkins Job DSL Parameter from list

I'm trying to create a DSL job that will create jobs from a list of maps, iterating through them like the following:
def git_branch = "origin/awesomebranch"
def credential_id = "awesomerepocreds"
def jobs = [
    [
        title: "AwesomePipeline",
        description: "This pipeline is awesome.",
        directory: "awesomepath",
        repo: "ssh://git@bitbucket.XXX.XX.XX:XXXX/repo/repo.git"
    ]
]
jobs.each { i ->
    pipelineJob(i.title) {
        description("${i.description}\n\n__branch__: ${git_branch}")
        parameters {
            stringParam('branch', defaultValue='origin/develop', description='Branch to build')
        }
        definition {
            cpsScm {
                scm {
                    git {
                        branch('$branch')
                        remote {
                            url(i.repo)
                            credentials(credential_id)
                        }
                    }
                    scriptPath("jenkins/${i.directory}/Jenkinsfile")
                }
            }
        }
    }
}
For jobs without parameters this works great but I don't know how to pass a list into the map of a job that will be used by the parameters block, something like
....
def jobs = [
    [
        title: "AwesomePipeline",
        description: "This pipeline is awesome.",
        directory: "awesomepath",
        repo: "ssh://git@bitbucket.XXX.XX.XX:XXXX/repo/repo.git",
        params: [
            stringParam('branch', defaultValue='origin/develop', description='Branch to build'),
            stringParam('sawesomeparam', defaultValue='awesomevalue', description='awesomething')
        ]
    ]
]
...
That might then be consumed by some sort of each, but I'm not sure how to formulate this properly:
....
jobs.each { i ->
    pipelineJob(i.title) {
        description("${i.description}\n\n__branch__: ${git_branch}")
        parameters {
            i.params.each { p ->
                p
            }
        }
....
Thanks in advance
This is a relatively old question, and I came across it a few days ago. This is how I ended up solving it.
First we need a class with a static context method that (I think) gets called by Groovy. This context is then later reused to actually add the parameters. I suppose you should use the additionalClasspath feature of the job-dsl plugin to move the Params class into a separate file so you can re-use it.
import javaposse.jobdsl.dsl.helpers.BuildParametersContext

class Params {
    protected static Closure context(@DelegatesTo(BuildParametersContext) Closure params) {
        params.resolveStrategy = Closure.DELEGATE_FIRST
        return params
    }

    static Closure extend(Closure params) {
        return context(params)
    }
}
Then you can setup your job using a params attribute that contains a list of closures.
def jobs = [
    [
        name: "myjob",
        params: [
            {
                stringParam('ADDITIONAL_PARAM', 'default', 'description')
            }
        ]
    ]
]
Lastly, you can call the parameters property using the static extend method of the above class.
jobs.each { j ->
    pipelineJob(j.name) {
        // here go all the default parameters
        parameters {
            string {
                name('SOMEPARAM')
                defaultValue('')
                description('')
                trim(true)
            }
        }
        j.params.each { p ->
            parameters Params.extend(p)
        }
        // additional pipelineJob properties...
    }
}
The reason I made the params attribute of the map a list is that this way you can have additional methods that return Closures containing additional params. For example:
class DefaultParams {
    static Closure someParam(String valueDefault) {
        return {
            string {
                name('SOME_EXTRA_PARAM')
                defaultValue(valueDefault)
                description("It's a description bonanza")
                trim(true)
            }
        }
    }
}

def myjobs = [
    [
        name: "myjob",
        params: [
            DefaultParams.someParam('default'),
            {
                stringParam('ADDITIONAL_PARAM', 'default', 'description')
            }
        ]
    ]
]
You should be able to use all the different syntaxes shown on the job-dsl API page.
The simplest method is:
jobs.each { i ->
    pipelineJob(i.title) {
        description("${i.description}\n\n__branch__: ${git_branch}")
        parameters {
            i.params.each { p ->
                p.delegate = owner.delegate
                p()
            }
        }
    }
}
where owner.delegate is the delegate of parameters.
Or better, this variant, which does not create an empty parameters block:
jobs.each { i ->
    pipelineJob(i.title) {
        description("${i.description}\n\n__branch__: ${git_branch}")
        i.params.each { p ->
            parameters {
                p.delegate = getDelegate()
                p()
            }
        }
    }
}

Jenkins Pipeline groovy compareTo operator does not work

I had this code where I want to get the object with the oldest CreateDate in a list of json objects:
import groovy.json.JsonSlurperClassic

def result = """{
    "Metadata": [
        {
            "Status": "Active",
            "CreateDate": "2018-08-14T18:59:52Z"
        },
        {
            "Status": "Active",
            "CreateDate": "2018-05-18T16:11:45Z"
        }
    ]
}"""
def all = new JsonSlurperClassic().parseText(result)
def oldest = all.Metadata.min { a, b ->
    Date.parse("yyyy-M-d'T'H:m:s'Z'", a.CreateDate).getTime() <=>
    Date.parse("yyyy-M-d'T'H:m:s'Z'", b.CreateDate).getTime() }
print "oldest=" + oldest
This works fine in the Jenkins Script Console, i.e. it prints the output
oldest=[Status:Active, CreateDate:2018-05-18T16:11:45Z]
But when the same code is run under Pipeline, it prints
oldest=1
Why is this?
This is a Groovy CPS transformer bug. The difference between the Script Console and a Jenkins pipeline is that the Script Console executes the script in a vanilla Groovy environment, while a Jenkins pipeline is executed with groovy-cps. This means the pipeline's Groovy script runs in a Groovy shell that applies a CPS transformation - it modifies the code so that it supports continuation-passing style.
According to CpsDefaultGroovyMethodsTest, groovy-cps supports collection.min {} operation, but only when the closure with a single parameter is used. I've created a test case for a closure with two parameters, like:
[3,2,5,4,5].min { int a, int b -> a <=> b }
and instead of 2 I get -1 - it looks like the value of compareTo() method is being returned and not the actual min value from the given collection.
Solution
The easiest solution to bypass this problem is to extract
def oldest = all.Metadata.min { a, b ->
    Date.parse("yyyy-M-d'T'H:m:s'Z'", a.CreateDate).getTime() <=>
    Date.parse("yyyy-M-d'T'H:m:s'Z'", b.CreateDate).getTime() }
into a method annotated with @NonCPS - this annotation instructs the groovy-cps interpreter to skip CPS transformation and just run the method as is. Below you can find a working example:
import groovy.json.JsonSlurper

node {
    stage("Test") {
        def result = """{
            "Metadata": [
                {
                    "Status": "Active",
                    "CreateDate": "2018-08-14T18:59:52Z"
                },
                {
                    "Status": "Active",
                    "CreateDate": "2018-05-18T16:11:45Z"
                }
            ]
        }"""
        def all = new JsonSlurper().parseText(result)
        def oldest = getOldest(all)
        println "oldest = ${oldest}"
    }
}
@NonCPS
def getOldest(all) {
    return all.Metadata.min { a, b ->
        Date.parse("yyyy-M-d'T'H:m:s'Z'", a.CreateDate).getTime() <=>
        Date.parse("yyyy-M-d'T'H:m:s'Z'", b.CreateDate).getTime() }
}
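As an alternative for this particular data: ISO-8601 timestamps with a fixed format and the same 'Z' suffix compare correctly as plain strings, so a single-parameter closure (which groovy-cps does handle) avoids both the date parsing and the @NonCPS wrapper. A sketch, assuming all CreateDate values share that format:

```groovy
// lexicographic order of fixed-format ISO-8601 strings matches chronological order,
// and min with a one-parameter closure works under CPS
def oldest = all.Metadata.min { it.CreateDate }
```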

use groovy to add an additional parameter to a jenkins job

We've got a set of Groovy scripts that our users invoke in their Jenkinsfile to set some common job properties. However, we haven't been able to figure out how to preserve their existing parameters when we do this update.
A snippet of our Groovy code:
def newParamsList = []
def newbool = booleanParam(defaultValue: false, description: "deploy", name: "deploy_flag")
newParamsList.add(newbool)
def newParams = parameters(newParamsList)
properties([ // job property declaration
    jobProperties,
    disableConcurrentBuilds(),
    newParams,
    addSchedule,
])
However, this overwrites the parameter definitions, so if the user had specified a different parameter definition in their Jenkinsfile before invoking our Groovy code, it gets wiped out.
I can get access to the existing parameters using currentBuild.rawBuild.getAction(ParametersAction), but if I understand correctly, I need the ParameterDefinition not the ParameterValue in order to set the property. I tried currentBuild.rawBuild.getAction(ParametersDefinitionProperty.class) thinking I could use that like ParametersAction, but it returns null.
Is it possible to get the parameter definitions inside the groovy being called from a Jenkinsfile? Or is there a different way that would let us add an additional parameter to the job without wiping out the existing ones currently defined in the jenkinsfile?
So the way we do this is to treat it all as a simple list, then join the pieces together. Jenkinsfiles first get a list from the shared library, then add their own entries to it, and then they set the params themselves (not the shared library).
Repos' Jenkinsfiles do this:
#!groovy
@Library('shared') _

// Call shared library for common params
def paramList = jobParams.listParams([
    "var1": "value",
    "var2": "value2"
])

// Define repo specific params
def addtionalParams = [
    booleanParam(defaultValue: false, name: 'SOMETHING', description: 'description?'),
    booleanParam(defaultValue: false, name: 'SOMETHING_ELSE', description: 'description?'),
]

// Set Jenkins job properties, combining both
properties([
    buildDiscarder(logRotator(numToKeepStr: '20')),
    parameters(paramList + addtionalParams)
])

// Do repo stuff
// Do repo stuff
Our shared library looks like this:
List listParams(def body = [:]) {
    // return list of parameters
    config = BuildConfig.resolve(body)
    // Always common params
    def paramsList = [
        choice(name: 'ENV', choices: ['dev', 'tst'].join('\n'), description: 'Environment'),
        string(name: 'ENV_NO', defaultValue: "1", description: 'Environment number'),
    ]
    // Sometimes common params, switch based on jenkinsfile input
    def addtionalParams = []
    switch (config.var1) {
        case 'something':
        case 'something2':
            addtionalParams = [
                choice(name: 'AWS_REGION', choices: ['us-west-2'].join('\n'), description: 'AWS Region to build/deploy'),
            ]
            break
        case 'something3':
            addtionalParams = [
                string(name: 'DEBUG', defaultValue: '*', description: 'Namespaces for debug logging'),
            ]
            break
    }
    return paramsList + addtionalParams
}
We wrote the following Groovy code to retrieve the parameter definitions and add new parameters to the existing ones (we don't know in advance what parameters the user will define). If you have something simpler, I'll take it:
boolean isSupported = true
List tempParams = [] // holds the rebuilt parameter list (declaration missing from the original snippet)
// nParams is the List of new parameters to add
Map initParamsMap = this.initializeParamsMap(nParams)
currentBuild.rawBuild.getParent().getProperties().each { k, v ->
    if (v instanceof hudson.model.ParametersDefinitionProperty) {
        // get each parameter definition
        v.parameterDefinitions.each { ParameterDefinition paramDef ->
            String param_symbol_name = null
            // get the symbol name from the nested DescriptorImpl class
            paramDef.class.getDeclaredClasses().each {
                if (it.name.contains('DescriptorImpl')) {
                    param_symbol_name = it.getAnnotation(Symbol).value().first()
                }
            }
            // ... processing... //
            if (!initParamsMap.containsKey(paramDef.name)) {
                // Valid parameter types are booleanParam, choice, file, text, password, run, or string.
                if (param_symbol_name == 'choice') {
                    String defaultParamVal = paramDef.defaultParameterValue == null ? null : paramDef.defaultParameterValue.value
                    tempParams.add(
                        "$param_symbol_name"(name: paramDef.name,
                            defaultValue: defaultParamVal,
                            description: paramDef.description,
                            choices: paramDef.choices)
                    )
                } else if (param_symbol_name == 'run') {
                    logError { "buildParametersArray does not yet support an already existing RunParameterDefinition " +
                        "in the current job parameters list, so the job parameters will not be modified" }
                    isSupported = false
                } else {
                    tempParams.add(
                        "$param_symbol_name"(name: paramDef.name,
                            defaultValue: paramDef.defaultParameterValue.value,
                            description: paramDef.description)
                    )
                }
            }
        }
    }
}
if (isSupported) {
    properties([parameters(tempParams)])
}
I think you can also do something like this:
// Get existing ParameterDefinitions
existing = currentBuild.rawBuild.parent.properties
    .findAll { it.value instanceof hudson.model.ParametersDefinitionProperty }
    .collectMany { it.value.parameterDefinitions }

// Create new params and merge them with existing ones
jobParams = [
    booleanParam(name: 'boolean_param', defaultValue: false)
    /* other params */
] + existing

// Create properties
properties([
    parameters(jobParams)
])
Note: you should either run it in a non-sandboxed environment or use it with @NonCPS.
Here is an example of how to add an additional string parameter NEW_PARAM to a job named test:
job = Jenkins.instance.getJob("test")
ParametersDefinitionProperty params = job.getProperty(ParametersDefinitionProperty.class);
List<ParameterDefinition> newParams = new ArrayList<>();
newParams.addAll(params.getParameterDefinitions());
newParams.add(new StringParameterDefinition("NEW_PARAM", "default_value"));
job.removeProperty(params);
job.addProperty(new ParametersDefinitionProperty(newParams));
