How to change the whole workspace of a project in Jenkins

With the customWorkspace option in the pipeline's agent block, the job still starts up in the default Jenkins workspace. Logs below:
pipeline {
    agent {
        node {
            label ''
            customWorkspace '/Users/dabaoji/change_dir_test/test'
        }
    }
    stages {
        stage ('pre') {
            steps {
                sh 'echo $pwd'
            }
        }
    }
}
Started by user admin
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /Users/dabaoji/.jenkins-lts/workspace/test
[Pipeline] {
[Pipeline] ws
Running in /Users/dabaoji/change_dir_test/test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (pre)
[Pipeline] sh
+ echo
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // ws
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
How can I make the job run in the custom workspace right from startup?
With a freestyle project this is possible: https://youtu.be/elc7oGS7BTg?t=302

Related

Jenkins parallel/matrix execution on agents assigned to same label

My Jenkins pipeline and setup are quite simple.
All agents I want to use are assigned to the same label. (Currently 2 agents are available; both have the number of executors set to 1.)
After some recent changes I had to add some long-running tasks for different projects.
To put my requirements in a nutshell:
One 'Init' stage shall be present
Multiple sequential stages shall get executed for several projects (they shall use the same workspace)
Release
Debug
Some tasks need to get executed in the post step
The pipeline should look like this:
As I want to keep the runtime to a minimum, I want to execute the project-specific stages in parallel when multiple free agents are available; if only one agent is free, the stages shall be executed sequentially on that agent.
Here is a simple example based on my pipeline:
pipeline {
    agent {
        docker {
            label 'label-1'
            image 'docker_image:v1'
            registryUrl 'https://custom_registry.com/'
            registryCredentialsId 'xxx'
            reuseNode true
            alwaysPull true
        }
    }
    options {
        timestamps()
    }
    stages {
        stage('Init') {
            steps {
                sleep time:2
                echo "Init: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE}"
            }
        }
        stage('Projects') {
            matrix {
                axes {
                    axis {
                        name 'PROJECT'
                        values 'Proj-1', 'Proj-2'
                    }
                }
                stages {
                    stage("Project") {
                        stages {
                            stage("Release") {
                                steps {
                                    sleep time:1
                                    echo "Release: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE} PROJECT=${PROJECT}"
                                }
                            }
                            stage("Debug") {
                                steps {
                                    sleep time:1
                                    echo "Release: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE} PROJECT=${PROJECT}"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            echo "Post: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE}"
        }
    }
}
With this implementation, Release and Debug are executed in parallel on the same agent using the same workspace. This is not what I want.
Here is the relevant console log for this problem:
[Pipeline] stage
[Pipeline] { (Init)
[Pipeline] sleep
[2023-01-30T08:07:13.893Z] Sleeping for 2 sec
[Pipeline] echo
[2023-01-30T08:07:15.919Z] Init: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Projects)
[Pipeline] parallel
[Pipeline] { (Branch: Matrix - PROJECT = 'Proj-1')
[Pipeline] { (Branch: Matrix - PROJECT = 'Proj-2')
[Pipeline] stage
[Pipeline] { (Matrix - PROJECT = 'Proj-1')
[Pipeline] stage
[Pipeline] { (Matrix - PROJECT = 'Proj-2')
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Project)
[Pipeline] stage
[Pipeline] { (Project)
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] sleep
[2023-01-30T08:07:16.588Z] Sleeping for 1 sec
[Pipeline] sleep
[2023-01-30T08:07:16.604Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:07:17.608Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-1
[Pipeline] }
[Pipeline] // stage
[Pipeline] echo
[2023-01-30T08:07:17.688Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-2
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] sleep
[2023-01-30T08:07:17.822Z] Sleeping for 1 sec
[Pipeline] sleep
[2023-01-30T08:07:17.837Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:07:18.839Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] echo
[2023-01-30T08:07:18.905Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-2
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
[2023-01-30T08:07:19.518Z] Post: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo
[Pipeline] }
[Pipeline] // stage
I then tried to get it working somehow and found this solution:
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Init') {
            agent { label 'label-1' }
            steps {
                sleep time:2
                echo "Init: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE}"
            }
        }
        stage('Projects') {
            matrix {
                axes {
                    axis {
                        name 'PROJECT'
                        values 'Proj-1', 'Proj-2'
                    }
                }
                stages {
                    stage("Project") {
                        agent {
                            docker {
                                label 'label-1'
                                image 'docker_image:v1'
                                registryUrl 'https://custom_registry.com/'
                                registryCredentialsId 'xxx'
                                reuseNode true
                                alwaysPull true
                            }
                        }
                        stages {
                            stage("Release") {
                                steps {
                                    sleep time:1
                                    echo "Release: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE} PROJECT=${PROJECT}"
                                }
                            }
                            stage("Debug") {
                                steps {
                                    sleep time:1
                                    echo "Release: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE} PROJECT=${PROJECT}"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            node ('label-1') {
                echo "Post: NODE_NAME=${env.NODE_NAME} WORKSPACE=${env.WORKSPACE}"
            }
        }
    }
}
Here is the relevant console log for this pipeline definition (some logs related to docker have been removed):
[Pipeline] stage
[Pipeline] { (Init)
[Pipeline] node
[2023-01-30T08:12:05.782Z] Running on agent-01 in c:\jenkins\workspace\demo
[Pipeline] {
[Pipeline] sleep
[2023-01-30T08:12:05.844Z] Sleeping for 2 sec
[Pipeline] echo
[2023-01-30T08:12:07.873Z] Init: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Projects)
[Pipeline] parallel
[Pipeline] { (Branch: Matrix - PROJECT = 'Proj-1')
[Pipeline] { (Branch: Matrix - PROJECT = 'Proj-2')
[Pipeline] stage
[Pipeline] { (Matrix - PROJECT = 'Proj-1')
[Pipeline] stage
[Pipeline] { (Matrix - PROJECT = 'Proj-2')
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Project)
[Pipeline] stage
[Pipeline] { (Project)
[Pipeline] getContext
[Pipeline] getContext
[Pipeline] node
[2023-01-30T08:12:08.320Z] Running on agent-01 in c:\jenkins\workspace\demo
[Pipeline] node
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
[2023-01-30T08:12:08.634Z] Login Succeeded
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
[2023-01-30T08:12:09.015Z]
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
[2023-01-30T08:12:09.992Z]
[2023-01-30T08:12:09.992Z] .
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] sleep
[2023-01-30T08:12:13.631Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:12:14.644Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-1
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] sleep
[2023-01-30T08:12:14.741Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:12:15.762Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[2023-01-30T08:12:22.160Z] Running on agent-01 in c:\jenkins\workspace\demo
[Pipeline] // node
[Pipeline] }
[Pipeline] {
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
[2023-01-30T08:12:22.979Z]
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] bat
[2023-01-30T08:12:23.422Z]
[2023-01-30T08:12:23.423Z] .
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
[2023-01-30T08:12:23.610Z] agent-01 does not seem to be running inside a container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] sleep
[2023-01-30T08:12:26.799Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:12:27.832Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-2
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] sleep
[2023-01-30T08:12:27.970Z] Sleeping for 1 sec
[Pipeline] echo
[2023-01-30T08:12:28.987Z] Release: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo PROJECT=Proj-2
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] node
[2023-01-30T08:12:35.603Z] Running on agent-01 in c:\jenkins\workspace\demo
[Pipeline] {
[Pipeline] echo
[2023-01-30T08:12:35.634Z] Post: NODE_NAME=agent-01 WORKSPACE=c:\jenkins\workspace\demo
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
This actually seems to work. If multiple agents assigned to label-1 are available, then both are used. If only one is available, then the two project-specific stages Release and Debug are executed sequentially for each project, so Proj-1 and Proj-2 run sequentially.
The problem I have with this implementation is that the pipeline is configured to run concurrent builds. So, since I have assigned agent none at the root of the pipeline, waiting tasks could start in between the Debug stage and the post step, extending the runtime of this pipeline run significantly.
Is there an easy way to achieve what I want to have?

Jenkins - Not creating dedicated workspaces in a parallel stage

My Jenkins Setup is as follows:
I have multiple Jenkins slaves which share the same label, e.g. jenkins_node.
Now I wanted to parallelize my current pipeline which didn't have a parallel stage before, because there was only one node available.
The stages I want to run in parallel are for different CMake Build types (Debug and Release).
If those stages are run on the same node, they must not share the same workspace. Otherwise the compile process will fail.
If one of the executors is occupied with another build job, the pipeline (including the parallel stages) gets executed on the same node. In that case the workspace is shared and the build process fails.
I would expect Jenkins to take care of this by creating dedicated workspaces for parallel stages, e.g. by using the @2 suffix as documented here.
This is a sample pipeline to reproduce the problem. Release and Debug share the same workspace if executed on the same node. How can I enforce that dedicated workspaces are used for those stages?
pipeline {
    agent { label 'jenkins_node' }
    options {
        timeout(time: 3, unit: 'HOURS')
        timestamps()
    }
    stages {
        stage('create file') {
            steps {
                touch 'my_file_test_file.txt'
            }
        }
        stage('Builds') {
            parallel {
                stage('Release') {
                    steps {
                        //trigger release build here
                        touch 'release.txt'
                        echo pwd()
                        bat 'dir'
                    }
                }
                stage('Debug') {
                    steps {
                        //trigger debug build here
                        touch 'debug.txt'
                        echo pwd()
                        bat 'dir'
                    }
                }
            }
        }
    }
    post {
        cleanup {
            cleanWs()
        }
    }
}
Here's the log of one test run:
Started by user ***
[Pipeline] Start of Pipeline
[Pipeline] node
Running on *** in c:\jenkins\demo_playground
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 3 hr 0 min
[Pipeline] {
[Pipeline] timestamps
[Pipeline] {
[Pipeline] stage
[Pipeline] { (create file)
[Pipeline] touch
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Builds)
[Pipeline] parallel
[Pipeline] { (Branch: Release)
[Pipeline] { (Branch: Debug)
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] touch
[Pipeline] touch
[Pipeline] pwd
[Pipeline] echo
c:\jenkins\demo_playground
[Pipeline] bat
[Pipeline] pwd
[Pipeline] echo
c:\jenkins\demo_playground
[Pipeline] bat
c:\jenkins\demo_playground>dir
Volume in drive C is OSDisk
Volume Serial Number is 8ECE-9CEF
Directory of c:\jenkins\demo_playground
03/16/2022 02:12 PM <DIR> .
03/16/2022 02:12 PM <DIR> ..
03/16/2022 02:12 PM 0 debug.txt
03/16/2022 02:12 PM 0 my_file_test_file.txt
03/16/2022 02:12 PM 0 release.txt
3 File(s) 0 bytes
2 Dir(s) 33,649,197,056 bytes free
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
c:\jenkins\demo_playground>dir
Volume in drive C is OSDisk
Volume Serial Number is 8ECE-9CEF
Directory of c:\jenkins\demo_playground
03/16/2022 02:12 PM <DIR> .
03/16/2022 02:12 PM <DIR> ..
03/16/2022 02:12 PM 0 debug.txt
03/16/2022 02:12 PM 0 my_file_test_file.txt
03/16/2022 02:12 PM 0 release.txt
3 File(s) 0 bytes
2 Dir(s) 33,649,197,056 bytes free
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
As you can see, the workspace is shared between the Release and Debug stage.
Update 1 (2022-03-21):
As highlighted by @MaratC, I modified my script to allocate the agent per stage, but the result is the same: if executed on one slave (because the other one is occupied), I still get a shared workspace. So for me this issue is still open.
pipeline {
    agent none
    options {
        timeout(time: 3, unit: 'HOURS')
        timestamps()
    }
    stages {
        stage('create file') {
            agent {
                node {
                    label 'jenkins_node'
                }
            }
            steps {
                touch 'my_file_test_file.txt'
            }
        }
        stage('Builds') {
            parallel {
                stage('Release') {
                    agent {
                        node {
                            label 'jenkins_node'
                        }
                    }
                    steps {
                        //trigger release build here
                        touch 'release.txt'
                        echo pwd()
                        bat 'dir'
                    }
                }
                stage('Debug') {
                    agent {
                        node {
                            label 'jenkins_node'
                        }
                    }
                    steps {
                        //trigger debug build here
                        touch 'debug.txt'
                        echo pwd()
                        bat 'dir'
                    }
                }
            }
        }
    }
    post {
        cleanup {
            cleanWs()
        }
    }
}
Here's my version of your pipeline:
jenkins_node = "some_node_with_lots_of_executors"

pipeline {
    agent { node { label 'master' } }
    options {
        timeout(time: 3, unit: 'HOURS')
        timestamps()
    }
    stages {
        stage('create file') {
            agent { node { label "${jenkins_node}" } }
            steps { echo pwd() }
        }
        stage('Builds') {
            parallel {
                stage('Release') {
                    agent { node { label "${jenkins_node}" } }
                    steps { echo pwd() }
                }
                stage('Debug') {
                    agent { node { label "${jenkins_node}" } }
                    steps { echo pwd() }
                }
            }
        }
    }
    post { cleanup { cleanWs() } }
}
Here's the output:
[Pipeline] node
14:20:16 Running on <some_node> in /home/jenkins/workspace/test_pipeline
[Pipeline] {
[Pipeline] pwd
[Pipeline] echo
14:20:16 /home/jenkins/workspace/test_pipeline
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Builds)
[Pipeline] parallel
[Pipeline] { (Branch: Release)
[Pipeline] { (Branch: Debug)
[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] stage
[Pipeline] { (Debug)
[Pipeline] node
[Pipeline] node
14:20:16 Running on <some_node> in /home/jenkins/workspace/test_pipeline
14:20:16 Running on <some_node> in /home/jenkins/workspace/test_pipeline@2
[Pipeline] {
[Pipeline] {
[Pipeline] pwd
[Pipeline] echo
14:20:16 /home/jenkins/workspace/test_pipeline
[Pipeline] }
[Pipeline] pwd
[Pipeline] echo
14:20:16 /home/jenkins/workspace/test_pipeline@2
[Pipeline] }
[Pipeline] // node
[Pipeline] // node
From what can be seen, Jenkins allocates two separate workspaces (test_pipeline, test_pipeline@2) for two parallel stages if they run on the same node at the same time.
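If Jenkins still hands both branches the same directory in your setup, a heavier-handed option (a sketch, not part of the original answer; the directory names are arbitrary) is to pin each parallel stage to its own customWorkspace, similar to the customWorkspace usage in the first question above:
stage('Release') {
    agent {
        node {
            label 'jenkins_node'
            // hypothetical per-branch directory, resolved relative to the node's workspace root
            customWorkspace "dedicated/${env.JOB_NAME}/release"
        }
    }
    steps {
        echo pwd()   // always ends in dedicated/<job name>/release, regardless of workspace reuse
    }
}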

Trigger a multibranch pipeline when there is a change in files in specific folder

I want to run a multibranch pipeline when some files in a folder are pushed to Bitbucket. I have tried 'Polling ignores commits in certain paths', but the pipeline is not triggering. Can anyone help solve this? How should the path be specified exactly inside the included region of 'Polling ignores commits in certain paths'?
I used the following Jenkinsfile in a Git repo that contains 1.txt and 2.txt:
pipeline {
    agent any
    stages {
        stage('1.txt in changelog') {
            when {
                // see https://www.jenkins.io/doc/book/pipeline/syntax/#built-in-conditions
                changelog '1.txt'
            }
            steps {
                echo '1.txt in changelog'
            }
        }
        stage('2.txt in changelog') {
            when {
                changelog '2.txt'
            }
            steps {
                echo '2.txt in changelog'
            }
        }
        stage('1.txt in changeset') {
            when {
                changeset '1.txt'
            }
            steps {
                echo '1.txt in changeset'
            }
        }
        stage('2.txt in changeset') {
            when {
                changeset '2.txt'
            }
            steps {
                echo '2.txt in changeset'
            }
        }
    }
}
After changing and pushing 2.txt, the console output of a corresponding multibranch pipeline project showed:
...
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (1.txt in changelog)
Stage "1.txt in changelog" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (2.txt in changelog)
[Pipeline] echo
2.txt in changelog
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (1.txt in changeset)
Stage "1.txt in changeset" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (2.txt in changeset)
[Pipeline] echo
2.txt in changeset
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
The same applies to the creation of 2.txt, but not to its deletion.
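If you only care about changes below a specific folder, as in the original question, changeset also accepts glob patterns (e.g. "**/*.js"). A minimal sketch (the folder name docs/ is just a placeholder):
stage('docs changed') {
    when {
        // matches any changed file below the docs/ directory
        changeset "docs/**"
    }
    steps {
        echo 'something under docs/ changed'
    }
}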

Executing parallel stages with declarative pipeline only using last item from list

I'm not sure what I'm doing wrong here. The creation of the stages from the list seems fine, but when the shell script is executed, the value used is always the last item of the list:
Working pipeline:
pipeline {
    agent any
    stages {
        stage('set servers') {
            steps {
                script {
                    my_list = ['server1','server-2','server-3']
                }
            }
        }
        stage('Execute then') {
            parallel {
                stage('shouter') {
                    steps {
                        script {
                            shouter = [:]
                            script {
                                for(i in my_list) {
                                    shouter["${i}"] = {
                                        echo "standupandshout.sh ${i}"
                                    }
                                }
                            }
                            parallel shouter
                        }
                    }
                }
            }
        }
    }
}
Console output:
Replayed #4
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (set servers)
[Pipeline] script
[Pipeline] {
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Execute then)
[Pipeline] parallel
[Pipeline] [shouter] { (Branch: shouter)
[Pipeline] [shouter] stage
[Pipeline] [shouter] { (shouter)
[Pipeline] [shouter] script
[Pipeline] [shouter] {
[Pipeline] [shouter] script
[Pipeline] [shouter] {
[Pipeline] [shouter] }
[Pipeline] [shouter] // script
[Pipeline] [shouter] parallel
[Pipeline] [server1] { (Branch: server1)
[Pipeline] [server-2] { (Branch: server-2)
[Pipeline] [server-3] { (Branch: server-3)
[Pipeline] [server1] echo
[server1] standupandshout.sh server-3
[Pipeline] [server1] }
[Pipeline] [server-2] echo
[server-2] standupandshout.sh server-3
[Pipeline] [server-2] }
[Pipeline] [server-3] echo
[server-3] standupandshout.sh server-3
[Pipeline] [server-3] }
[Pipeline] [shouter] // parallel
[Pipeline] [shouter] }
[Pipeline] [shouter] // script
[Pipeline] [shouter] }
[Pipeline] [shouter] // stage
[Pipeline] [shouter] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Desired output:
[Pipeline] [server1] echo
[server1] standupandshout.sh server-1
[Pipeline] [server1] }
[Pipeline] [server-2] echo
[server-2] standupandshout.sh server-2
[Pipeline] [server-2] }
[Pipeline] [server-3] echo
[server-3] standupandshout.sh server-3
This is due to Groovy closures and when the code they contain gets evaluated.
http://blog.freeside.co/2013/03/29/groovy-gotcha-for-loops-and-closure-scope/
When the closures are run the value that is bound to the variable i is the value it had on the final iteration of the loop rather than the iteration where the closure was created. The closures' scopes have references to i and by the time any of the closures are executed i is 5.
Variables local to the loop body do not behave like this, obviously because each closure scope contains a reference to a different variable
This is why your stage name is ok but your value is not.
What’s the solution? Should we always use .each rather than a for loop? Well, I kind of like for loops in many cases and there can be memory utilization differences (don’t take that to mean loops are "better" or "more efficient").
If you simply alias the loop variable and refer to that alias in the closure body, all will be well:
def fns = []
for (i in (1..5)) {
    def myi = i
    def isq = i * i
    fns << {->
        println "$myi squared is $isq"
    }
}
fns.each { it() }
So this should work:
script {
    shouter = [:]
    for(i in my_list) {
        def val = i
        shouter["${i}"] = {
            echo "standupandshout.sh ${val}"
        }
    }
    parallel shouter
}
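Alternatively (a sketch, not part of the original answer), you can avoid the aliasing issue altogether by building the map with collectEntries, since each closure captures its own parameter:
script {
    def shouter = my_list.collectEntries { server ->
        // 'server' is a fresh binding on every iteration, so no alias variable is needed
        [(server): { echo "standupandshout.sh ${server}" }]
    }
    parallel shouter
}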

Cannot create a file via Groovy code (or Java code) in the Jenkinsfile of a pipeline job on Jenkins

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
                echo "whoami".execute().text
                script {
                    File f = new File('/home/jenkins/test2.txt');
                    f.createNewFile();
                }
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
Jenkins console log (got exception):
Started by user Edgar Yu
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test2
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] echo
Building..
[Pipeline] echo
jenkins
[Pipeline] script
[Pipeline] {
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test)
Stage 'Test' skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
Stage 'Deploy' skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createNewFile(File.java:1012)
This is because Jenkins does not run Groovy directly but through an interpreter (CPS) - https://github.com/cloudbees/groovy-cps
To help deal with the complexities this introduces, a number of common steps are implemented to take the trouble out of tasks such as creating a file.
To use Jenkins pipeline steps out of the box, use writeFile:
https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-writefile-code-write-file-to-workspace
writeFile([file: 'file.txt', text: filetxt])
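Applied to the Build stage from the question, it might look like this (a sketch; the file name and content are placeholders, and the file ends up in the job's workspace rather than in /home/jenkins):
stage('Build') {
    steps {
        echo 'Building..'
        // writeFile is a built-in step, so no java.io.File and no permission problems on the agent
        writeFile file: 'test2.txt', text: 'hello from the pipeline'
    }
}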
If you're dead set on writing your own, I suggest splitting it out into a shared library; note this will probably cause ScriptSecurity alerts that will require approval:
final class PipelineUtils implements Serializable {
    // 'script' must be set to the pipeline script (i.e. 'this' in the Jenkinsfile)
    // before saveFile() is called, so that pipeline steps such as pwd() are available.
    private script = null
    private static final PipelineUtils instance = new PipelineUtils()

    @NonCPS
    String saveFile(String filename, String text) {
        String PWD = script.pwd()
        String filePath = "${PWD}/${filename}"
        File file = new File(filePath)
        file.text = text
    }
}
See https://github.com/jenkinsci/pipeline-plugin/blob/master/TUTORIAL.md for information regarding @NonCPS and nonserializable objects.
