Jenkins Groovy for loops behaving differently

My Jenkins job looks like this:
class Device {
    String name
    Device(name) {
        this.name = name
    }
}

class globals {
    static List<Device> devices = new ArrayList<Device>()
}
pipeline {
    agent any // assumed here; Declarative Pipeline requires an agent section
    stages {
        stage('test for loop') {
            steps {
                script {
                    addDevices()
                    def testStages = [:]
                    echo "TESTING REGULAR LOOP"
                    for (index = 0; index < globals.devices.size(); index++) {
                        Device device = globals.devices.get(index)
                        echo "BEFORE TEST STAGES $device.name" // shows the correct name
                        testStages["$device.name"] = { // correct name here
                            echo "INSIDE TEST STAGE $device.name" // works correctly
                        }
                    }
                    parallel testStages
                }
            }
        }
        stage('test for each loop') {
            steps {
                script {
                    def testStages = [:]
                    echo "TESTING FOR EACH LOOP"
                    for (Device device : globals.devices) {
                        echo "BEFORE $device.name" // shows the correct name
                        testStages["$device.name"] = { // correct name here
                            echo "INSIDE TESTSTAGES: $device.name" // device name is always the last name in the list
                        }
                    }
                    parallel testStages
                }
            }
        }
    }
}

void addDevices() {
    globals.devices.add(new Device("a"))
    globals.devices.add(new Device("b"))
}

The first loop works fine: device.name is different on every parallel branch. The second loop, however, always prints the last item in the list, and I really don't understand why. I would be really grateful if anyone could provide some insight.

The core of the problem is this:

int x = 1
def cls = { y -> x + y }
x = 2
println cls(3)

Output:

5

The closure reflects the change when the variable x changes, because it captures the variable itself rather than a snapshot of its value. If you use a normal for loop (with a variable declared inside the body) or the each method, every iteration creates a new instance of the variable (device in your case). The for-each loop, however, uses an iterator internally and reuses one and the same variable across iterations, and that shared variable is what ends up in the created closures. So only the last value of device counts in this case.
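A minimal sketch of a fix, assuming the same Device/globals setup as in the question: re-bind the loop variable to a fresh local inside the body, so each closure captures its own copy.

for (Device device : globals.devices) {
    def captured = device // new local on every iteration; each closure gets its own
    testStages["$captured.name"] = {
        echo "INSIDE TESTSTAGES: $captured.name" // now prints "a" and "b" as expected
    }
}
parallel testStages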

Related

Jenkins multithreading functions

I need to run two defined functions in parallel in a Jenkins pipeline.
The parallel keyword, as defined in Jenkins, is used with jobs and doesn't seem to work with function calls.
What I've tried is:
def first_func(){
    echo "first function"
}
def second_func(){
    echo "second function"
}
node {
    task = [:]
    function_lists = ['first_func()', 'second_func()']
    stage ('build') {
        for (job in function_lists) {
            task[job] = { '${job}' }
        }
        parallel task
    }
}
This doesn't actually call the functions. Is there any way to do so in Jenkins?
Yes, this can be achieved as shown below:

def first_func(){
    echo "first function"
}
def second_func(){
    echo "second function"
}
node {
    def task = [:]
    stage ('build') {
        // Loop through the list of function names
        ['first_func', 'second_func'].each {
            def a = it // bind "it" to a local before building the closure
            task[a] = { "${a}"() } // "name"() resolves the method by name and calls it
        }
        parallel task
    }
}
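A possible variation (my sketch, not part of the original answer): Groovy's .& method-pointer operator captures each function directly, avoiding the run-time lookup by name. Method pointers can hit CPS/sandbox restrictions on some Jenkins setups, so treat this as an untested alternative.

node {
    def task = [:]
    stage ('build') {
        [this.&first_func, this.&second_func].each { fn ->
            task[fn.method] = { fn() } // fn.method is the name of the captured method
        }
        parallel task
    }
}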

Getting the same output from parallel stages in jenkins scripted pipelines

I'm trying to create parallel stages in a Jenkins scripted pipeline, for example with this:

node {
    stage('CI') {
        script {
            doDynamicParallelSteps()
        }
    }
}

def doDynamicParallelSteps(){
    tests = [:]
    for (f in ["Branch_1", "Branch_2", "Branch_3"]) {
        tests["${f}"] = {
            node {
                stage("${f}") {
                    echo "${f}"
                }
            }
        }
    }
    parallel tests
}

I'm expecting to see "Branch_1", "Branch_2", "Branch_3", and instead I'm getting "Branch_3", "Branch_3", "Branch_3".
I don't understand why. Can you please help?
Short answer: in the classic view, the stage names display the last value of the variable ${f}, and all the echo steps appear under that same name. You need to change the loop.
Long answer: Jenkins does not allow multiple stages with the same name, so this could never happen successfully :)
With your example you can see it fine in Blue Ocean, and in the console output the names are right too. It is only the Jenkins classic view that shows the last value of ${f} as every stage name.
Solution: Change your loop. This worked fine for me.
node {
    stage('CI') {
        script {
            doDynamicParallelSteps()
        }
    }
}

void doDynamicParallelSteps() {
    def branches = [:]
    for (int i = 0; i < 3; i++) {
        int branch = i + 1 // a fresh local per iteration, so each closure captures its own value
        branches["branch_${branch}"] = {
            stage("Branch_${branch}") {
                node {
                    sh "echo branch_${branch}"
                }
            }
        }
    }
    parallel branches
}
This has to do with how closures capture variables during iteration, but in the end this should fix it:

for (f in ["Branch_1", "Branch_2", "Branch_3"]) {
    def definitive_name = f // a fresh local per iteration
    tests[definitive_name] = {
        // ... same node/stage/echo body as above, using definitive_name
    }
}

Jenkins parallel script in loop using wrong variables

I'm trying to build a dynamic group of steps to run in parallel. The following example is what I came up with (based on examples at https://devops.stackexchange.com/questions/3073/how-to-properly-achieve-dynamic-parallel-action-with-a-declarative-pipeline), but I'm having trouble getting it to use the expected variables. The result always seems to use the variables from the last iteration of the loop.
In the following example, the echo output is always bdir2 for both tests:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def tests = [:]
                    def files
                    files = ['adir1/adir2/adir3','bdir1/bdir2/bdir3']
                    files.each { f ->
                        rolePath = new File(f).getParentFile()
                        roleName = rolePath.toString().split('/')[1]
                        tests[roleName] = {
                            echo roleName
                        }
                    }
                    parallel tests
                }
            }
        }
    }
}
I'm expecting one of the tests to output adir2 and the other to output bdir2. What am I missing here?
Just move the computation of the test section inside the closure, and it will work:

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def tests = [:]
                    def files
                    files = ['adir1/adir2/adir3','bdir1/bdir2/bdir3']
                    files.each { f ->
                        tests[f] = {
                            // f is a closure parameter, so each iteration gets its own copy;
                            // def keeps rolePath/roleName local to each parallel branch
                            def rolePath = new File(f).getParentFile()
                            def roleName = rolePath.toString().split('/')[1]
                            echo roleName
                        }
                    }
                    parallel tests
                }
            }
        }
    }
}
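An alternative sketch (my variation, not from the original answer): keep the computation outside the closure, but bind the results to per-iteration locals declared with def, so each parallel branch captures its own values and the map is still keyed by role name.

files.each { f ->
    def rolePath = new File(f).getParentFile()
    def roleName = rolePath.toString().split('/')[1]
    tests[roleName] = {
        echo roleName // each closure captures its own roleName
    }
}
parallel tests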

Implement matrix config in Jenkins pipeline

I've recently moved to the Pipeline plugin in Jenkins. I've successfully used freestyle jobs for my project before, but now I would like to try something new.
My project builds for Windows and Linux, in release and in debug mode, and uses a parameter called device to configure some C preprocessor macros: the globally #defined frame width and frame height differ depending on the device value.
Here is my Jenkinsfile:
def device_config(device) {
    def device_config = "";
    switch(device) {
        case ~/^dev_[Aa]$/:
            device_config = """-DGLOBAL_FRAME_WIDTH=640 \
-DGLOBAL_FRAME_HEIGHT=480"""
            break;
        case ~/^dev_[Ss]$/:
            device_config = """-DGLOBAL_FRAME_WIDTH=320 \
-DGLOBAL_FRAME_HEIGHT=240"""
            break;
        default:
            echo "warning: Unknown device \"$device\" using default config from CMake"
            break;
    }
    return device_config
}
pipeline {
    agent {
        label 'project_device_linux'
    }
    environment {
        device = 'dev_A'
    }
    stages {
        stage('Configure') {
            steps {
                script {
                    dc = device_config("${env.device}")
                }
                dir('build') {
                    sh """cmake .. -DOpenCV_DIR="/usr/local" -DCMAKE_BUILD_TYPE="Debug" \
-DCHECK_MEMORY_LEAKS=ON \
-DENABLE_DEVELOPER_MODE=OFF \
-DUNIT_TEST_RAW_PATH=${env.tmpfs_path} \
$dc"""
                }
            }
        }
        stage('Build') {
            steps {
                dir('build') {
                    sh "make -j 16"
                }
            }
        }
        stage('Test') {
            steps {
                dir('build') {
                    sh "make check"
                }
            }
        }
    }
}
Now I'd like to repeat all those stages for another device, dev_S, for the "Release" build type, and for Windows as well. There are also some minor differences depending on the parameters: for example, "Release" builds should include publishing of the compiled binaries and exclude the check for memory leaks. Also, if I've got it correctly, a Windows slave does not understand the sh build step and uses bat for that purpose.
How can I do this without copy-pasting code, and in parallel on two nodes, one running Linux and the other running Windows?
Obviously there should be several nested loops, but it is not clear to me what to emit on each loop iteration.
I forgot to mention: I'd like to run everything from the GitLab trigger on push events.
UPDATE
Currently I end up with something like the following:
#!/usr/bin/env groovy
def device_config(device) {
    def result = "";
    switch(device) {
        case ~/^dev_[Aa]$/:
            result = """-DFRAME_WIDTH=640 \
-DFRAME_HEIGHT=480"""
            break;
        case ~/^dev_[Ss]$/:
            result = """-DFRAME_WIDTH=320 \
-DFRAME_HEIGHT=240"""
            break;
        default:
            echo "warning: Unknown device \"$device\" using default config from CMake"
            break;
    }
    return result;
}

oses = ['linux', 'windows']
devices = ['dev_A', 'dev_S']
build_types = ['Debug', 'Release']

node {
    stage('Checkout') {
        checkout_steps = [:]
        for (os in oses) {
            for (device in devices) {
                for (build_type in build_types) {
                    def label = "co-${os}-${device}-${build_type}"
                    def node_label = "project && ${os}"
                    checkout_steps[label] = {
                        node(node_label) {
                            checkout scm
                        }
                    }
                }
            }
        }
        parallel checkout_steps
    }
    stage('Configure') {
        config_steps = [:]
        for (os in oses) {
            for (device in devices) {
                for (build_type in build_types) {
                    def label = "configure-${os}-${device}-${build_type}"
                    def node_label = "project && ${os}"
                    def dc = device_config("${device}")
                    cmake_parameters = """-DCMAKE_BUILD_TYPE="${build_type}" \
-DCHECK_MEMORY_LEAKS=ON \
$dc"""
                    if (os == 'linux') {
                        config_steps[label] = {
                            node(node_label) {
                                dir('build') {
                                    sh """cmake .. -DOpenCV_DIR=/usr/local ${cmake_parameters}"""
                                }
                            }
                        }
                    } else {
                        config_steps[label] = {
                            node(node_label) {
                                dir('build') {
                                    bat """cmake .. -G"Ninja" -DOpenCV_DIR=G:/opencv_2_4_11/build ${cmake_parameters}"""
                                }
                            }
                        }
                    }
                }
            }
        }
        parallel config_steps
    }
}
What I don't like is that some node-specific settings, like paths, are set in the Jenkinsfile. I hope to figure out how to set them in the node settings in Jenkins instead.
I also see in the logs that only the Release + dev_S configuration is applied; there is some kind of closure late binding at work. A search reveals that it is a known and already-fixed issue, but I still have to figure out how to deal with the closures.
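A hedged sketch of the closure fix, using the names from the update above (my addition, not from the original post): Groovy's for-in loop reuses a single variable per loop, so each closure must capture per-iteration locals instead; cmake_parameters also needs a def so it is not shared through the script binding.

stage('Configure') {
    config_steps = [:]
    for (os in oses) {
        for (device in devices) {
            for (build_type in build_types) {
                // Re-bind the shared loop variables to fresh locals so each
                // closure captures its own copies.
                def my_os = os
                def my_device = device
                def my_build_type = build_type
                def label = "configure-${my_os}-${my_device}-${my_build_type}"
                def cmake_parameters = """-DCMAKE_BUILD_TYPE="${my_build_type}" \
-DCHECK_MEMORY_LEAKS=ON \
${device_config(my_device)}"""
                config_steps[label] = {
                    node("project && ${my_os}") {
                        dir('build') {
                            if (my_os == 'linux') {
                                sh """cmake .. -DOpenCV_DIR=/usr/local ${cmake_parameters}"""
                            } else {
                                bat """cmake .. -G"Ninja" -DOpenCV_DIR=G:/opencv_2_4_11/build ${cmake_parameters}"""
                            }
                        }
                    }
                }
            }
        }
    }
    parallel config_steps
}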

Dynamic number of parallel steps in declarative pipeline

I'm trying to create a declarative pipeline which runs a number of jobs (configurable via a parameter) in parallel, but I'm having trouble with the parallel part.
Basically, for some reason the pipeline below generates the error
Nothing to execute within stage "Testing" @ line .., column ..
and I cannot figure out why, or how to solve it.
import groovy.transform.Field

@Field def mayFinish = false

def getJob() {
    return {
        lock("finiteResource") {
            waitUntil {
                script {
                    mayFinish
                }
            }
        }
    }
}

def getFinalJob() {
    return {
        waitUntil {
            script {
                try {
                    echo "Start Job"
                    sleep 3 // Replace with something that might fail.
                    echo "Finished running"
                    mayFinish = true
                    true
                } catch (Exception e) {
                    echo e.toString()
                    echo "Failed :("
                }
            }
        }
    }
}

def getJobs(def NUM_JOBS) {
    def jobs = [:]
    for (int i = 0; i < (NUM_JOBS as Integer); i++) {
        jobs["job${i}"] = getJob()
    }
    jobs["finalJob"] = getFinalJob()
    return jobs
}

pipeline {
    agent any
    options {
        buildDiscarder(logRotator(numToKeepStr:'5'))
    }
    parameters {
        string(
            name: "NUM_JOBS",
            description: "Set how many jobs to run in parallel"
        )
    }
    stages {
        stage('Setup') {
            steps {
                echo "Setting it up..."
            }
        }
        stage('Testing') {
            steps {
                parallel getJobs(params.NUM_JOBS)
            }
        }
    }
}
I've seen plenty of examples doing this with the old scripted pipeline syntax, but not with declarative.
Does anyone know what I'm doing wrong?
At the moment, it doesn't seem possible to provide the parallel branches dynamically when using a Declarative Pipeline.
Even if you add a prior stage that calls getJobs() in a script block and stores the result in the binding, the same error message is thrown.
In this case you have to fall back to using a Scripted Pipeline.
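A minimal sketch of that scripted fallback, reusing getJob(), getFinalJob() and getJobs() from the question (my sketch; it assumes the NUM_JOBS parameter is still defined, e.g. via a properties step or in the job configuration):

node {
    stage('Setup') {
        echo "Setting it up..."
    }
    stage('Testing') {
        parallel getJobs(params.NUM_JOBS) // dynamic branch maps are fine in scripted syntax
    }
}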
