Jenkins Pipeline groovy compareTo operator does not work

I had this code where I want to get the object with the oldest CreateDate in a list of json objects:
import groovy.json.JsonSlurperClassic
def result = """{
"Metadata": [
{
"Status": "Active",
"CreateDate": "2018-08-14T18:59:52Z",
},
{
"Status": "Active",
"CreateDate": "2018-05-18T16:11:45Z",
}
]
}"""
def all = new JsonSlurperClassic().parseText(result)
def oldest = all.Metadata.min { a, b ->
Date.parse("yyyy-M-d'T'H:m:s'Z'", a.CreateDate).getTime() <=>
Date.parse("yyyy-M-d'T'H:m:s'Z'", b.CreateDate).getTime() }
print "oldest=" + oldest
This works fine in the Jenkins Script Console, i.e. it prints the output
oldest=[Status:Active, CreateDate:2018-05-18T16:11:45Z]
But when the same code is run under Pipeline, it prints
oldest=1
Why is this?

This is a Groovy CPS transformer bug. The difference between the Script Console and a Jenkins pipeline is that the Script Console executes the script in a vanilla Groovy environment, while a Jenkins pipeline is executed with groovy-cps. This means the pipeline's Groovy script runs in a Groovy shell that applies a CPS transformation - it modifies the code so that it supports continuation-passing style.
According to CpsDefaultGroovyMethodsTest, groovy-cps supports the collection.min {} operation, but only when a closure with a single parameter is used. I've created a test case for a closure with two parameters, like:
[3,2,5,4,5].min { int a, int b -> a <=> b }
and instead of 2 I get -1 - it looks like the result of the compareTo() method is being returned instead of the actual minimum value from the given collection.
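For reference, in plain (non-CPS) Groovy the same two-parameter call returns the minimum element itself, which is easy to verify outside a pipeline:

```groovy
// Plain Groovy behaviour (e.g. in the Script Console): min {} with a
// two-parameter comparator closure returns the smallest element,
// not the result of the comparison.
assert [3, 2, 5, 4, 5].min { int a, int b -> a <=> b } == 2
```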
Solution
The easiest solution to bypass this problem is to extract
def oldest = all.Metadata.min { a, b ->
Date.parse("yyyy-M-d'T'H:m:s'Z'", a.CreateDate).getTime() <=>
Date.parse("yyyy-M-d'T'H:m:s'Z'", b.CreateDate).getTime() }
to a method annotated with @NonCPS - this annotation instructs the groovy-cps interpreter to skip CPS transformations and run the method as is. Below you can find a working example:
import groovy.json.JsonSlurper
node {
stage("Test") {
def result = """{
"Metadata": [
{
"Status": "Active",
"CreateDate": "2018-08-14T18:59:52Z",
},
{
"Status": "Active",
"CreateDate": "2018-05-18T16:11:45Z",
}
]
}"""
def all = new JsonSlurper().parseText(result)
def oldest = getOldest(all)
println "oldest = ${oldest}"
}
}
@NonCPS
def getOldest(all) {
return all.Metadata.min { a, b ->
Date.parse("yyyy-M-d'T'H:m:s'Z'", a.CreateDate).getTime() <=>
Date.parse("yyyy-M-d'T'H:m:s'Z'", b.CreateDate).getTime() }
}
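If you'd rather avoid the extra method, a possible alternative (a sketch based on the CpsDefaultGroovyMethodsTest observation above that the single-parameter form of min {} is CPS-supported) is to pass a closure that returns a sort key instead of a two-parameter comparator:

```groovy
// Single-parameter closure: return a comparable sort key (the epoch
// millis of CreateDate); min {} then yields the element with the
// smallest key, so no NonCPS-annotated helper method is needed.
def oldest = all.Metadata.min { m ->
    Date.parse("yyyy-M-d'T'H:m:s'Z'", m.CreateDate).getTime()
}
```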

Related

Jenkins: howto prevent overwriting build results of parallel downstream jobs

I'm running a scripted pipeline which starts multiple downstream jobs in parallel.
In the main job, I'd like to collect the data and results of the parallel-running downstream jobs so that I can process them later.
My main job is like this:
def all_build_results = []
pipeline {
stages {
stage('building') {
steps {
script {
def build_list = [
['PC':'01', 'number':'07891705'],
['PC':'01', 'number':'00568100']
]
parallel build_list.collectEntries {build_data ->
def br =[:]
["Building With ${build_data}": {
br = build job: 'Downstream_Pipeline',
parameters: [
string(name: 'build_data', value: "${build_data}")
],
propagate: false,
wait:true
build_result = build_data + ['Ergebnis':br.getCurrentResult(), 'Name': br.getFullDisplayName(), 'Url':br.getAbsoluteUrl(),
'Dauer': br.getDurationString(), 'BuildVars':br.getBuildVariables()]
// Print result
print "${BuildResultToString(build_result)}"
// ->> everything singular
// save single result to result list
all_build_results = all_build_results + [build_result]
}]
}
// print build results
echo "$all_build_results"
}
}
}
}
}
Mostly, the different results are saved separately in the "all_build_results" list - everything as it should be.
But sometimes one build result is listed twice and the other not at all!
At print "${BuildResultToString(build_result)}" the two results are still printed separately, but in "all_build_results" one result is added two times and the other not!
Why?
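A likely explanation (a sketch, not a verified fix) is that br and build_result are not declared with def inside the branch closures, so they live in the shared script binding and all parallel branches read and write the same two variables; between one branch computing build_result and appending it, another branch can overwrite it, producing the duplicated entry. Declaring them locally per branch, and serializing the append (the lock step here assumes the Lockable Resources plugin is installed), would look roughly like this:

```groovy
parallel build_list.collectEntries { build_data ->
    ["Building With ${build_data}": {
        // `def` makes these local to this branch's closure rather than
        // shared, script-global bindings:
        def br = build job: 'Downstream_Pipeline',
            parameters: [string(name: 'build_data', value: "${build_data}")],
            propagate: false,
            wait: true
        def build_result = build_data + ['Ergebnis': br.getCurrentResult()]
        // serialize the list append so concurrent branches can't lose updates
        lock('collect-results') {
            all_build_results = all_build_results + [build_result]
        }
    }]
}
```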

Groovy code in script block to replace general build step step()

Among the possible steps one can use in a Jenkins pipeline, there is one with the name step, subtitled General Build Step. https://www.jenkins.io/doc/pipeline/steps/workflow-basic-steps/#step-general-build-step . I need to iterate on calling this step based on the contents of a file. I have created a groovy script to read the file and perform the iteration, but I am not sure how to create the equivalent of my step() in the groovy script. Here is the general format of the step I am trying to perform:
stage ('title') {
steps {
step([
$class: 'UCDeployPublisher',
siteName: 'literal string',
deploy: [
$class: 'com.urbancode.jenkins.plugins.ucdeploy.DeployHelper$DeployBlock',
param1: 'another literal string',
param2: 'yet another string'
]
])
}
}
The script step I have developed looks like this:
steps {
script {
def content = readFile(file:'data.csv', encoding:'UTF-8');
def lines = content.split('\n');
for (line in lines) {
// want to insert equivalent groovy code for the basic build step here
}
}
}
I'm expecting there is probably a trivial answer here. I'm just out of my element in the groovy/java world and I am not sure how to proceed. I have done extensive research, looked at source code for Jenkins, looked at plugins, etc. I am stuck!
Check the following: simply move your UCDeployPublisher to a new function and call that from your loop.
steps {
script {
def content = readFile(file:'data.csv', encoding:'UTF-8');
def lines = content.split('\n');
for (line in lines) {
runUCD(line)
}
}
}
// Groovy function
def runUCD(def n) {
stage ("title $n") {
steps {
step([
$class: 'UCDeployPublisher',
siteName: 'literal string',
deploy: [
$class: 'com.urbancode.jenkins.plugins.ucdeploy.DeployHelper$DeployBlock',
param1: 'another literal string',
param2: 'yet another string'
]
])
}
}
}
This shows the code related to my comment on the accepted answer:
pipeline {
stages {
stage ('loop') {
steps {
script {
... groovy to read/parse file and call runUCD
}
}
}
}
}
def runUCD(def param1, def param2) {
stage ("title $param1") {
step([
....
])
}
}

Sort map by value in Groovy jenkins pipeline script

How do you do a custom sort of a Map, for example by value, in a Jenkins pipeline script?
This code doesn't quite work in a Jenkins pipeline script:
Map m =[ james :"silly boy",
janny :"Crazy girl",
jimmy :"funny man",
georges:"massive fella" ]
Map sorted = m.sort { a, b -> a.value <=> b.value }
The map is still not sorted.
I decided to create a separate question with a better name and tags, because many people were struggling to find an answer here:
Groovy custom sort a map by value
You will have to create a separate method with the @NonCPS annotation for that:
@NonCPS
def getSorted(def toBeSorted){
toBeSorted.sort(){ a, b -> b.value <=> a.value }
}
And then call it from the pipeline script.
Map unsortedMap =[ james :"silly boy",
janny :"Crazy girl",
jimmy :"funny man",
georges:"massive fella" ]
def sortedMap = getSorted(unsortedMap)
With build parameters named 1.xx, 2.xx, ... you can sort them by key and join their values:
pipeline {
agent {
kubernetes {
inheritFrom 'seunggabi-batch'
defaultContainer 'seunggabi-batch'
}
}
environment {
COUNTRY = "kr"
ENV = "prod"
CLASS = "seunggabi.batch.job.SparkSubmitJob"
}
stages {
stage('Run Job') {
steps {
script {
ARGS = sorted(params).collect { /$it.value/ }.join(",")
}
sh "/app/static/sh/emr.sh 1 20 ${COUNTRY} ${ENV} ${CLASS} \"${ARGS}\""
}
}
}
}
@NonCPS
def sorted(def m){
m.sort { /$it.key/ }
}

fetch source values from jenkins extended choice parameter

I have added an extended choice parameter. The source values are lin1, lin2, lin3, as listed in the screenshot.
Now when I run:
If I select lin1 then I get param3 = lin1.
If I select lin1 and lin2 then I get param3 = lin1,lin2 (the delimiter is a comma).
The question here is: inside a Jenkins pipeline, how can I get all the source values that were set when the parameter was created? In short, without selecting any of the checkboxes, I want to get the list of possible values, preferably in a list.
Eg:
list1 = some_method(param3)
// expected output >> list1 = [lin1,lin2,lin3]
Let me know if this description is not clear.
The user who runs this does not have configure access (we don't want to give configure access to an anonymous user), hence the job/config.xml idea will not work here.
As requested you can also get the values dynamically:
import hudson.model.*
import org.jenkinsci.plugins.workflow.job.*
import com.cwctravel.hudson.plugins.extended_choice_parameter.ExtendedChoiceParameterDefinition
def getJob(name) {
def hi = Hudson.instance
return hi.getItemByFullName(name, Job)
}
def getParam(WorkflowJob job, String paramName) {
def prop = job.getProperty(ParametersDefinitionProperty.class)
for (param in prop.getParameterDefinitions()) {
if (param.name == paramName) {
return param
}
}
return null
}
pipeline {
agent any
parameters {
choice(name: 'FOO', choices: ['1','2','3','4'])
}
stages {
stage('test') {
steps {
script {
def job = getJob(JOB_NAME)
def param = getParam(job, "FOO")
if (param instanceof ChoiceParameterDefinition) {
// for the standard choice parameter
print param.getChoices()
} else if (param instanceof ExtendedChoiceParameterDefinition) {
// for the extended choice parameter plugin
print param.getValue()
}
}
}
}
}
}
As you can see, it requires a lot of scripting, so you must either disable the Groovy sandbox or approve most of the calls on the script approval page.
I couldn't find any variable or method to get the parameter list. I guess it's somehow possible through an undocumented method on the param or currentBuild maps.
A possible solution to your problem is to define the choices outside of the pipeline and then use that variable like this:
def param3Choices = ['lin1', 'lin2', 'lin3']
pipeline {
agent any
parameters {
choice(name: 'PARAM3', choices: param3Choices, description: '')
}
stages {
stage('Debug') {
steps {
echo params.PARAM3
print param3Choices
}
}
}
}

Iterating Jenkins groovy map, with multiple sets

I'd like to ask for help with a Jenkins Groovy pipeline, copied from here:
Is it possible to create parallel Jenkins Declarative Pipeline stages in a loop?
I'd like several sets of vars to be passed in under a map, for several stages under a parallel run. However, only the last set (the square brackets at the bottom of the map) gets registered in my map.
When the parallel stage runs, the map iterates successfully, but only with the last set (currently install_Stage(it)), ignoring the other sets. That means I get a pipeline showing four "stage: install ${product}" stages in parallel, and that's it. I'd like to get four parallel branches with three stages each (network setup, revert, and install), as per my code below:
#!groovy
@Library('ci_builds')
def products = ["A", "B", "C", "D"]
def parallelStagesMap = products.collectEntries {
switch (it) {
case "A":
static_ip_address = "10.100.100.6"; static_vm_name = "install-vm1"; version = "14.1.60"
break
case "B":
static_ip_address = "10.100.100.7"; static_vm_name = "install-vm2"; version = "15.1"
break
case "C":
static_ip_address = "10.100.100.8"; static_vm_name = "install-vm3"; version = "15.1"
break
case "D":
static_ip_address = "10.100.100.9"; static_vm_name = "install-vm4"; version = "15.2"
break
default:
static_ip_address = "The product name is not on the switch list - please enter an ip address"
version = "The product name is not on the switch list - please enter a version"
break
}
["${it}" : network_reg(it)]
["${it}" : revert_to_snapshot_Stage(it)]
["${it}" : install_Stage(it)]
}
def network_reg(product) {
return {
stage("stage: setup network for ${product}") {
echo "setting network on ${static_vm_name} with ${static_ip_address}."
sh script: "sleep 15"
}
}
}
def revert_to_snapshot_Stage(product) {
return {
stage("stage: revert ${product}") {
echo "reverting ${static_vm_name} for ${product} on ${static_ip_address}."
sh script: "sleep 15"
}
}
}
def install_Stage(product) {
return {
stage("stage: install ${product}") {
echo "installing ${product} on ${static_ip_address}."
sh script: "sleep 15"
}
}
}
pipeline {
agent any
stages {
stage('non-parallel env check') {
steps {
echo 'This stage will be executed first.'
}
}
stage('parallel stage') {
steps {
script {
parallel parallelStagesMap
}
}
}
}
}
The network_reg and revert_to_snapshot_Stage stages won't run (unless I place them as the last set instead of ["${it}" : install_Stage(it)], in which case, again, only one kind of parallel stage is run).
I don't mind a different approach to running several map definitions, but others, such as How to define and iterate over map in Jenkinsfile, don't allow for a full multi-variable map (more than a key+value pair).
Any help would be appreciated. Thanks!
I assume you have an issue similar to the one I had trying to dynamically build the branches for parallel execution.
Two things were very important:
Make a copy of the loop variable (in your case: it) and use only that copy inside the parallel branch; if you don't, all branches (closures) will reference the very same variable, which of course will have the same value. That is particular to closures. See also: http://groovy-lang.org/closures.html.
Don't use collectEntries{}. Stick to java-style loops, as Groovy-style loops often do not work properly under CPS. Some .each{} constructs may already work, but if in doubt, switch to java-style loops. See also: Impossibility to iterate over a Map using Groovy within Jenkins Pipeline
The following stripped-down example works for me. I believe you'll be able to adjust it to your needs.
def products = ["A", "B", "C", "D"]
def parallelStagesMap = [:]
// use java-style loop
for (def product: products) {
// make a copy to ensure that each closure will get its own variable
def copyOfProduct = product
parallelStagesMap[product] = {echo "install_Stage($copyOfProduct)"}
}
echo parallelStagesMap.toString()
pipeline {
agent any
stages {
stage('parallel stage') {
steps {
script {
parallel parallelStagesMap
}
}
}
}
}
If it still doesn't work: check whether there's an upgrade for your Pipeline: Groovy plugin, as newer versions usually fix a lot of constructs that work in plain Groovy but won't in Pipeline.
You may also want to check the following related question, which contains a minimal example as well:
Currying groovy CPS closure for parallel execution