Jenkins 2-dimensional array parallel stages

I am trying to implement Jenkins 2D parallel stages.
This is the example that I am testing.
I expect 12 different stages, A1/B1/C1/A2/B2/C2/A3/B3/C3/A4/B4/C4: 3 parallel builds, each running 4 sequential stages.
build_list = ['B1', 'B2', 'B3']
l1 = ['A1', 'A2', 'A3', 'A4']
l2 = ['B1', 'B2', 'B3', 'B4']
l3 = ['C1', 'C2', 'C3', 'C4']
array_build_list = [ l1, l2, l3 ]
index = 1
index = -index
def parallelStagesMap = build_list.collectEntries {
    index++
    ["${it}" : generateStage(it, index)]
}

def generateStage(job, index) {
    return {
        stage("${job}") {
            echo "This is ${job}"
            script {
                build_project_list = array_build_list[index]
                echo "build_project_list = ${build_project_list}"
                for (int i = 0; i < build_project_list.size(); i++) {
                    stage(build_project_list[i]) {
                        echo "${build_project_list[i]}"
                    }
                }
            }
        }
    }
}

pipeline {
    agent any
    stages {
        stage("start") {
            steps {
                echo "start"
            }
        }
        stage('parallel stage') {
            steps {
                script {
                    echo "Inside script"
                    parallel parallelStagesMap
                }
            }
        }
    }
}
But this is the build result (screenshots "Result" and "Blue Ocean Result" not shown): the resulting stages are C1/C1/C1/C2/C2/C2/C3/C3/C3/C4/C4/C4, which means only the last list is executed, repeated across all 3 parallel branches.
Do you have any suggestions or comments?
This is the console output:
[Pipeline] Start of Pipeline
[Pipeline] echo
After value B1 at index 0
[Pipeline] echo
After value B2 at index 1
[Pipeline] echo
After value B3 at index 2
[Pipeline] node
Running on Jenkins in /Users/.jenkins/workspace/2D_array_test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (start)
[Pipeline] echo
start
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (parallel stage)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
Inside script
[Pipeline] parallel
[Pipeline] { (Branch: B1)
[Pipeline] { (Branch: B2)
[Pipeline] { (Branch: B3)
[Pipeline] stage
[Pipeline] { (B1)
[Pipeline] stage
[Pipeline] { (B2)
[Pipeline] stage
[Pipeline] { (B3)
[Pipeline] echo
This is B1
[Pipeline] sh
[Pipeline] echo
This is B2
[Pipeline] sh
[Pipeline] echo
This is B3
[Pipeline] sh
[Pipeline] script
[Pipeline] {
[Pipeline] script
[Pipeline] {
[Pipeline] script
[Pipeline] {
[Pipeline] echo
build_project_list = [A1, A2, A3, A4]
[Pipeline] echo
build_project_list = [B1, B2, B3, B4]
[Pipeline] echo
build_project_list = [C1, C2, C3, C4]
[Pipeline] stage
[Pipeline] { (C1)
[Pipeline] stage
[Pipeline] { (C1)
[Pipeline] stage
[Pipeline] { (C1)
[Pipeline] echo
C1
[Pipeline] }
[Pipeline] echo
C1
[Pipeline] }
[Pipeline] echo
C1
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (C2)
[Pipeline] stage
[Pipeline] { (C2)
[Pipeline] stage
[Pipeline] { (C2)
[Pipeline] echo
C2
[Pipeline] }
[Pipeline] echo
C2
[Pipeline] }
[Pipeline] echo
C2
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (C3)
[Pipeline] stage
[Pipeline] { (C3)
[Pipeline] stage
[Pipeline] { (C3)
[Pipeline] echo
C3
[Pipeline] }
[Pipeline] echo
C3
[Pipeline] }
[Pipeline] echo
C3
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (C4)
[Pipeline] stage
[Pipeline] { (C4)
[Pipeline] stage
[Pipeline] { (C4)
[Pipeline] echo
C4
[Pipeline] }
[Pipeline] echo
C4
[Pipeline] }
[Pipeline] echo
C4
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // script
[Pipeline] // script
[Pipeline] // script
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

Try refactoring your code as shown below.
build_list = ['B1X', 'B2X', 'B3X']
l1 = ['A1', 'A2', 'A3', 'A4']
l2 = ['B1', 'B2', 'B3', 'B4']
l3 = ['C1', 'C2', 'C3', 'C4']
array_build_list = [ l1, l2, l3 ]
index = 1
index = -index
def parallelStagesMap = build_list.collectEntries {
    index++
    ["${it}" : generateStage(it, index)]
}

def generateStage(job, index) {
    return {
        stage("${job}") {
            echo "This is ${job}"
            script {
                build_project_list = array_build_list[index]
                echo "build_project_list = ${build_project_list}"
                runStages(build_project_list)
            }
        }
    }
}

def runStages(def build) {
    for (int i = 0; i < build.size(); i++) {
        stage(build[i]) {
            echo "Build ~~~~~ ${build[i]}"
        }
    }
}

pipeline {
    agent any
    stages {
        stage("start") {
            steps {
                echo "start"
            }
        }
        stage('parallel stage') {
            steps {
                script {
                    echo "Inside script"
                    parallel parallelStagesMap
                }
            }
        }
    }
}
Explanation
Basically, this has to do with variable scopes. In your pipeline, the culprit is this line: build_project_list = array_build_list[index]. Since you don't have def in front of the variable declaration, build_project_list becomes a global variable bound to the scope of the script, so when the branches execute in parallel they all update this one variable.
There are two ways to fix this. Method one is the one shown above: scope the variable by moving the loop into a new function. Method two is simply adding the def keyword to the variable build_project_list, like below.
def build_project_list = array_build_list[index]
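To see the gotcha in isolation, here is a minimal plain-Groovy sketch (my own illustration, not code from the pipeline above) contrasting a binding-scoped variable with a def-scoped one:
def closures = []
for (i in 0..2) {
    shared = i      // no 'def': lands in the script binding, shared by every closure
    def scoped = i  // 'def': a fresh variable is created on each iteration
    closures << { println "shared=${shared} scoped=${scoped}" }
}
closures.each { it() }
This prints shared=2 three times, while scoped prints 0, 1, 2: each closure captured its own def variable, but they all read the single shared one.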

Related

How to change the whole workspace of a project in Jenkins

With the customWorkspace option in the pipeline's agent block, the job still starts up in the default Jenkins workspace; logs below.
pipeline {
    agent {
        node {
            label ''
            customWorkspace '/Users/dabaoji/change_dir_test/test'
        }
    }
    stages {
        stage('pre') {
            steps {
                sh 'echo $pwd'
            }
        }
    }
}
Started by user admin
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /Users/dabaoji/.jenkins-lts/workspace/test
[Pipeline] {
[Pipeline] ws
Running in /Users/dabaoji/change_dir_test/test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (pre)
[Pipeline] sh
+ echo
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // ws
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
How can I make the job run in the custom workspace from startup?
With a freestyle project it is possible: https://youtu.be/elc7oGS7BTg?t=302
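Note that the empty + echo line in the log is a separate, smaller issue: $pwd is not a standard shell variable, so the step prints nothing even though, per the "Running in ..." line, it is already executing in the custom workspace. A minimal sketch to make the working directory visible (assuming nothing else about the setup):
steps {
    sh 'echo $PWD'  // $PWD (uppercase) is the shell's working-directory variable
    sh 'pwd'        // or simply run the pwd command
}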

Jenkins parallel/matrix execution on agents assigned to same label

My Jenkins pipeline and setup is quite simple.
All agents I want to use are assigned to the same label. (Currently 2 agents are available, both with the number of executors set to 1.)
After some recent changes I had to add some long running tasks for different projects.
To put my requirements in a nutshell:
- One 'Init' stage shall be present
- Multiple sequential stages shall be executed for several projects (they shall use the same workspace):
  - Release
  - Debug
- Some tasks need to be executed in the post step
The pipeline should look like this (screenshot not shown).
As I want to keep the runtime to a minimum, I want to execute the project-specific stages in parallel when multiple free agents are available; if only one agent is free, the stages shall be executed sequentially on this agent.
Here is a simple example based on my pipeline:
pipeline {
    agent {
        docker {
            label 'label-1'
            image 'docker_image:v1'
            registryUrl 'https://custom_registry.com/'
            registryCredentialsId 'xxx'
            reuseNode true
            alwaysPull true
        }
    }
    options {
        timestamps()
    }
    stages {
        stage('Init') {
            steps {
                sleep time: 2
                echo "Init: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE}"
            }
        }
        stage('Projects') {
            matrix {
                axes {
                    axis {
                        name 'PROJECT'
                        values 'Proj-1', 'Proj-2'
                    }
                }
                stages {
                    stage("Project") {
                        stages {
                            stage("Release") {
                                steps {
                                    sleep time: 1
                                    echo "Release: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE} PROJECT=${PROJECT}"
                                }
                            }
                            stage("Debug") {
                                steps {
                                    sleep time: 1
                                    echo "Release: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE} PROJECT=${PROJECT}"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            echo "Post: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE}"
        }
    }
}
With this implementation, Release and Debug are executed in parallel on the same agent using the same workspace. This is not what I want.
Here is the relevant console log for this problem:
[Pipeline] stage
[Pipeline] { (Init)
[Pipeline] sleep
[2023-01-30T08:07:13.893Z] Sleeping for 2 sec
[Pipeline] echo
[2023-01-30T08:07:15.919Z] Init: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Projects)
[Pipeline] parallel
[Pipeline] { (Branch: Matrix - PROJECT = 'Proj-1')
[Pipeline] { (Branch: Matrix - PROJECT = 'Proj-2')
[Pipeline] stage
[Pipeline] { (Matrix - PROJECT = 'Proj-1')
[Pipeline] stage
[Pipeline] { (Matrix - PROJECT = 'Proj-2')
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Project)
[Pipeline] stage
[Pipeline] { (Project)
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] sleep
[2023-01-30T08:07:16.588Z] Sleeping for 1 sec
[Pipeline] sleep
[2023-01-30T08:07:16.604Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:07:17.608Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-1
[Pipeline] }
[Pipeline] // stage
[Pipeline] echo
[2023-01-30T08:07:17.688Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-2
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] sleep
[2023-01-30T08:07:17.822Z] Sleeping for 1 sec
[Pipeline] sleep
[2023-01-30T08:07:17.837Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:07:18.839Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] echo
[2023-01-30T08:07:18.905Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-2
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
[2023-01-30T08:07:19.518Z] Post: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo
[Pipeline] }
[Pipeline] // stage
Then I tried to get it working somehow and found this solution:
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Init') {
            agent { label 'label-1' }
            steps {
                sleep time: 2
                echo "Init: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE}"
            }
        }
        stage('Projects') {
            matrix {
                axes {
                    axis {
                        name 'PROJECT'
                        values 'Proj-1', 'Proj-2'
                    }
                }
                stages {
                    stage("Project") {
                        agent {
                            docker {
                                label 'label-1'
                                image 'docker_image:v1'
                                registryUrl 'https://custom_registry.com/'
                                registryCredentialsId 'xxx'
                                reuseNode true
                                alwaysPull true
                            }
                        }
                        stages {
                            stage("Release") {
                                steps {
                                    sleep time: 1
                                    echo "Release: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE} PROJECT=${PROJECT}"
                                }
                            }
                            stage("Debug") {
                                steps {
                                    sleep time: 1
                                    echo "Release: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE} PROJECT=${PROJECT}"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            node('label-1') {
                echo "Post: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE}"
            }
        }
    }
}
Here is the relevant console log for this pipeline definition (some Docker-related logs have been removed):
[Pipeline] stage
[Pipeline] { (Init)
[Pipeline] node
[2023-01-30T08:12:05.782Z] Running on agent-01 in c:\jenkins\workspace\demo
[Pipeline] {
[Pipeline] sleep
[2023-01-30T08:12:05.844Z] Sleeping for 2 sec
[Pipeline] echo
[2023-01-30T08:12:07.873Z] Init: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Projects)
[Pipeline] parallel
[Pipeline] { (Branch: Matrix - PROJECT = 'Proj-1')
[Pipeline] { (Branch: Matrix - PROJECT = 'Proj-2')
[Pipeline] stage
[Pipeline] { (Matrix - PROJECT = 'Proj-1')
[Pipeline] stage
[Pipeline] { (Matrix - PROJECT = 'Proj-2')
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Project)
[Pipeline] stage
[Pipeline] { (Project)
[Pipeline] getContext
[Pipeline] getContext
[Pipeline] node
[2023-01-30T08:12:08.320Z] Running on agent-01 in c:\jenkins\workspace\demo
[Pipeline] node
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
[2023-01-30T08:12:08.634Z] Login Succeeded
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
[2023-01-30T08:12:09.015Z]
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
[2023-01-30T08:12:09.992Z]
[2023-01-30T08:12:09.992Z] .
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] sleep
[2023-01-30T08:12:13.631Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:12:14.644Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-1
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] sleep
[2023-01-30T08:12:14.741Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:12:15.762Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[2023-01-30T08:12:22.160Z] Running on agent-01 in c:\jenkins\workspace\demo
[Pipeline] // node
[Pipeline] }
[Pipeline] {
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
[2023-01-30T08:12:22.979Z]
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
[2023-01-30T08:12:23.422Z]
[2023-01-30T08:12:23.423Z] .
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
[2023-01-30T08:12:23.610Z] agent-01 does not seem to be running inside a container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] sleep
[2023-01-30T08:12:26.799Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:12:27.832Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-2
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] sleep
[2023-01-30T08:12:27.970Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:12:28.987Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-2
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] node
[2023-01-30T08:12:35.603Z] Running on agent-01 in c:\jenkins\workspace\demo
[Pipeline] {
[Pipeline] echo
[2023-01-30T08:12:35.634Z] Post: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
This actually seems to work. If multiple agents assigned to label-1 are available, both are used. If only one is available, the two project-specific stages Release and Debug are executed sequentially for each project, so Proj-1 and Proj-2 run one after the other.
The problem I have with this implementation is that the pipeline is configured to allow concurrent builds. Since I have assigned agent none to the root of the pipeline, waiting tasks could start in between the Debug stage and the post step, extending the runtime of this pipeline run significantly.
Is there an easy way to achieve what I want to have?

Jenkins - Not creating dedicated workspaces in a parallel stage

My Jenkins Setup is as follows:
I have multiple Jenkins slaves which share the same label, e.g. jenkins_node.
Now I wanted to parallelize my current pipeline which didn't have a parallel stage before, because there was only one node available.
The stages I want to run in parallel are for different CMake Build types (Debug and Release).
If those stages run on the same node, they must not share the same workspace; otherwise the compile process will fail.
If one of the executors is occupied with another build job, the pipeline (including the parallel stages) gets executed on the same node. In that case the workspace is shared and the build process fails.
I would expect Jenkins to take care of this by creating dedicated workspaces for parallel stages, e.g. by appending the @2 suffix as documented.
This is a sample pipeline to reproduce the problem. Release and Debug share the same workspace if executed on the same node. How can I enforce that dedicated workspaces are used for those stages?
pipeline {
    agent { label 'jenkins_node' }
    options {
        timeout(time: 3, unit: 'HOURS')
        timestamps()
    }
    stages {
        stage('create file') {
            steps {
                touch 'my_file_test_file.txt'
            }
        }
        stage('Builds') {
            parallel {
                stage('Release') {
                    steps {
                        // trigger release build here
                        touch 'release.txt'
                        echo pwd()
                        bat 'dir'
                    }
                }
                stage('Debug') {
                    steps {
                        // trigger debug build here
                        touch 'debug.txt'
                        echo pwd()
                        bat 'dir'
                    }
                }
            }
        }
    }
    post {
        cleanup {
            cleanWs()
        }
    }
}
Here's the log of one test run:
Started by user ***
[Pipeline] Start of Pipeline
[Pipeline] node
Running on *** in c:\jenkins\demo_playground
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 3 hr 0 min
[Pipeline] {
[Pipeline] timestamps
[Pipeline] {
[Pipeline] stage
[Pipeline] { (create file)
[Pipeline] touch
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Builds)
[Pipeline] parallel
[Pipeline] { (Branch: Release)
[Pipeline] { (Branch: Debug)
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] touch
[Pipeline] touch
[Pipeline] pwd
[Pipeline] echo
c:\jenkins\demo_playground
[Pipeline] bat
[Pipeline] pwd
[Pipeline] echo
c:\jenkins\demo_playground
[Pipeline] bat
c:\jenkins\demo_playground>dir
Volume in drive C is OSDisk
Volume Serial Number is 8ECE-9CEF
Directory of c:\jenkins\demo_playground
03/16/2022 02:12 PM <DIR> .
03/16/2022 02:12 PM <DIR> ..
03/16/2022 02:12 PM 0 debug.txt
03/16/2022 02:12 PM 0 my_file_test_file.txt
03/16/2022 02:12 PM 0 release.txt
3 File(s) 0 bytes
2 Dir(s) 33,649,197,056 bytes free
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
c:\jenkins\demo_playground>dir
Volume in drive C is OSDisk
Volume Serial Number is 8ECE-9CEF
Directory of c:\jenkins\demo_playground
03/16/2022 02:12 PM <DIR> .
03/16/2022 02:12 PM <DIR> ..
03/16/2022 02:12 PM 0 debug.txt
03/16/2022 02:12 PM 0 my_file_test_file.txt
03/16/2022 02:12 PM 0 release.txt
3 File(s) 0 bytes
2 Dir(s) 33,649,197,056 bytes free
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
As you can see, the workspace is shared between the Release and Debug stage.
Update 1 (2022-03-21):
As highlighted by @MaratC, I modified my script to allocate the agent per stage, but if it executes on one slave (because the other one is occupied), I still get the same result. So for me this issue is still open.
pipeline {
    agent none
    options {
        timeout(time: 3, unit: 'HOURS')
        timestamps()
    }
    stages {
        stage('create file') {
            agent {
                node {
                    label 'jenkins_node'
                }
            }
            steps {
                touch 'my_file_test_file.txt'
            }
        }
        stage('Builds') {
            parallel {
                stage('Release') {
                    agent {
                        node {
                            label 'jenkins_node'
                        }
                    }
                    steps {
                        // trigger release build here
                        touch 'release.txt'
                        echo pwd()
                        bat 'dir'
                    }
                }
                stage('Debug') {
                    agent {
                        node {
                            label 'jenkins_node'
                        }
                    }
                    steps {
                        // trigger debug build here
                        touch 'debug.txt'
                        echo pwd()
                        bat 'dir'
                    }
                }
            }
        }
    }
    post {
        cleanup {
            cleanWs()
        }
    }
}
Here's my version of your pipeline:
jenkins_node = "some_node_with_lots_of_executors"

pipeline {
    agent { node { label 'master' } }
    options {
        timeout(time: 3, unit: 'HOURS')
        timestamps()
    }
    stages {
        stage('create file') {
            agent { node { label "${jenkins_node}" } }
            steps { echo pwd() }
        }
        stage('Builds') {
            parallel {
                stage('Release') {
                    agent { node { label "${jenkins_node}" } }
                    steps { echo pwd() }
                }
                stage('Debug') {
                    agent { node { label "${jenkins_node}" } }
                    steps { echo pwd() }
                }
            }
        }
    }
    post { cleanup { cleanWs() } }
}
Here's the output:
[Pipeline] node
14:20:16 Running on <some_node> in /home/jenkins/workspace/test_pipeline
[Pipeline] {
[Pipeline] pwd
[Pipeline] echo
14:20:16 /home/jenkins/workspace/test_pipeline
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Builds)
[Pipeline] parallel
[Pipeline] { (Branch: Release)
[Pipeline] { (Branch: Debug)
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] node
[Pipeline] node
14:20:16 Running on <some_node> in /home/jenkins/workspace/test_pipeline
14:20:16 Running on <some_node> in /home/jenkins/workspace/test_pipeline@2
[Pipeline] {
[Pipeline] {
[Pipeline] pwd
[Pipeline] echo
14:20:16 /home/jenkins/workspace/test_pipeline
[Pipeline] }
[Pipeline] pwd
[Pipeline] echo
14:20:16 /home/jenkins/workspace/test_pipeline@2
[Pipeline] }
[Pipeline] // node
[Pipeline] // node
From what can be seen, Jenkins allocates two separate workspaces (test_pipeline, test_pipeline@2) for two parallel stages if they run on the same node at the same time.
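If you need deterministic directories rather than relying on the @2 suffix, one option is to pin each parallel branch to its own workspace with the customWorkspace option shown in an earlier question. A sketch (the path scheme here is only an example, not from the answer above):
stage('Release') {
    agent {
        node {
            label 'jenkins_node'
            // any per-branch path avoids collisions; keyed by stage name here
            customWorkspace "workspace/${env.JOB_BASE_NAME}/release"
        }
    }
    steps { echo pwd() }
}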

Jenkins Declarative Pipelines - Script is unable to pick up parameter as variable

I am new to Jenkins, trying to create a basic pipeline that uses choice-based parameters. Following is my script:
pipeline {
    agent {
        label 'agent'
    }
    parameters {
        choice choices: ['John', 'Stacy'], description: 'Choose one', name: 'Person'
    }
    stages {
        stage('Print') {
            steps {
                echo "Hello ${params.Person}"
                sh """if (${params.Person} = "John")
                then
                    echo "Person is male."
                else
                    echo "Person is female."
                fi"""
            }
        }
    }
}
Now my build completes successfully regardless of which option I choose. It always displays "Person is female."
Following is the result of one of my builds.
Started by user ****
[Pipeline] Start of Pipeline
[Pipeline] node
Running on agent in /home/temp/jenkins_agent/workspace/ChoiceBased PL
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Print)
[Pipeline] echo
Hello John
[Pipeline] sh
+ John = John
/home/temp/jenkins_agent/workspace/ChoiceBased PL@tmp/durable-b7e98c46/script.sh: 1: John: not found
+ echo Person is female.
Person is female.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Please suggest what I am missing.
I would change this to do the comparison in Groovy rather than in sh:
stage('Print') {
    steps {
        echo "Hello ${params.Person}"
        script {
            if (params.Person == "John") {
                echo "Person is male."
            } else {
                echo "Person is female."
            }
        }
    }
}
Then when you choose Stacy you will get:
[Pipeline] echo
Hello Stacy
[Pipeline] script
[Pipeline] {
[Pipeline] echo
Person is female.
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
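For completeness: the original sh block fails because if (John = "John") is not a valid shell test. The parentheses run John = "John" as a command in a subshell (hence the John: not found in the log), and its non-zero exit status always takes the else branch. If the comparison has to stay in the shell, a bracketed test with quoting should behave; a sketch:
sh """
if [ "${params.Person}" = "John" ]; then
    echo "Person is male."
else
    echo "Person is female."
fi
"""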

Executing parallel stages with declarative pipeline only using last item from list

I'm not sure what I'm doing wrong here: when I iterate over a list, the creation of the stages seems fine, but when executing the shell script, the value used is always the last item of the list.
Working pipeline:
pipeline {
    agent any
    stages {
        stage('set servers') {
            steps {
                script {
                    my_list = ['server1', 'server-2', 'server-3']
                }
            }
        }
        stage('Execute then') {
            parallel {
                stage('shouter') {
                    steps {
                        script {
                            shouter = [:]
                            script {
                                for (i in my_list) {
                                    shouter["${i}"] = {
                                        echo "standupandshout.sh ${i}"
                                    }
                                }
                            }
                            parallel shouter
                        }
                    }
                }
            }
        }
    }
}
Console output:
Replayed #4
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (set servers)
[Pipeline] script
[Pipeline] {
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Execute then)
[Pipeline] parallel
[Pipeline] [shouter] { (Branch: shouter)
[Pipeline] [shouter] stage
[Pipeline] [shouter] { (shouter)
[Pipeline] [shouter] script
[Pipeline] [shouter] {
[Pipeline] [shouter] script
[Pipeline] [shouter] {
[Pipeline] [shouter] }
[Pipeline] [shouter] // script
[Pipeline] [shouter] parallel
[Pipeline] [server1] { (Branch: server1)
[Pipeline] [server-2] { (Branch: server-2)
[Pipeline] [server-3] { (Branch: server-3)
[Pipeline] [server1] echo
[server1] standupandshout.sh server-3
[Pipeline] [server1] }
[Pipeline] [server-2] echo
[server-2] standupandshout.sh server-3
[Pipeline] [server-2] }
[Pipeline] [server-3] echo
[server-3] standupandshout.sh server-3
[Pipeline] [server-3] }
[Pipeline] [shouter] // parallel
[Pipeline] [shouter] }
[Pipeline] [shouter] // script
[Pipeline] [shouter] }
[Pipeline] [shouter] // stage
[Pipeline] [shouter] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Desired output:
[Pipeline] [server1] echo
[server1] standupandshout.sh server-1
[Pipeline] [server1] }
[Pipeline] [server-2] echo
[server-2] standupandshout.sh server-2
[Pipeline] [server-2] }
[Pipeline] [server-3] echo
[server-3] standupandshout.sh server-3
This is due to Groovy closures and when the code they contain gets evaluated.
http://blog.freeside.co/2013/03/29/groovy-gotcha-for-loops-and-closure-scope/
When the closures are run the value that is bound to the variable i is the value it had on the final iteration of the loop rather than the iteration where the closure was created. The closures' scopes have references to i and by the time any of the closures are executed i is 5.
Variables local to the loop body do not behave like this, obviously because each closure scope contains a reference to a different variable.
This is why your stage name is ok but your value is not.
What’s the solution? Should we always use .each rather than a for loop? Well, I kind of like for loops in many cases and there can be memory utilization differences (don’t take that to mean loops are "better" or "more efficient").
If you simply alias the loop variable and refer to that alias in the closure body, all will be well:
def fns = []
for (i in (1..5)) {
    def myi = i
    def isq = i * i
    fns << { ->
        println "$myi squared is $isq"
    }
}
fns.each { it() }
So this should work:
script {
    shouter = [:]
    for (i in my_list) {
        def val = i
        shouter["${i}"] = {
            echo "standupandshout.sh ${val}"
        }
    }
    parallel shouter
}
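Equivalently, you can sidestep the shared loop variable altogether by building the map with collectEntries, where each closure receives its own parameter. A sketch with the same behavior as the alias fix:
script {
    def shouter = my_list.collectEntries { server ->
        // 'server' is a closure parameter, so each branch captures its own copy
        [(server): { echo "standupandshout.sh ${server}" }]
    }
    parallel shouter
}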
