Jenkins pipeline - How to iterate through a list

I'm required to read values from a file in my pipeline. I'm using split(), which puts them into an array. I need to put them into an ArrayList, so I'm using Arrays.asList(). The problem I'm having is that I'm unable to use the size() or length() methods, so I cannot make a for loop such as
    for (ii = 0; ii < var.length; ii++)
or
    for (ii = 0; ii < var.size; ii++)
because I get the error: unclassified field java.util.Arrays$ArrayList length
So I tried to use a for-each loop, but when I take some action (like an ls command, for example) in my finally block, it only iterates 1 time. But if I just run the 'echo' command, it iterates for each element like it's supposed to. Any advice on how to modify my code to get it to iterate for each element in the list when using any command?
Works correctly....
node {
    wrap([$class: 'ConfigFileBuildWrapper', managedFiles: [[fileId: 'dest_hosts.txt', targetLocation: '', variable: 'DEST_HOST']]]) {
        HOST = Arrays.asList(readFile(env.DEST_HOST).split("\\r?\\n"))
        deploy(HOST)
    }
}

@NonCPS
def deploy(host) {
    for (String target : host) {
        try {
            echo target
        }
        finally {
            echo target
        }
    }
}
OUTPUT (iterates for each element):
[Pipeline] node
Running on <obfuscated>
[Pipeline] {
[Pipeline] wrap
provisoning config files...
copy managed file [<obfuscated>] to file:/var/lib/jenkins/<obfuscated>
[Pipeline] {
[Pipeline] readFile
[Pipeline] echo
www.testhost.com
[Pipeline] echo
www.testhost.com
[Pipeline] echo
www.testhost2.com
[Pipeline] echo
www.testhost2.com
[Pipeline] }
Deleting 1 temporary files
[Pipeline] // wrap
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
But if I take some action such as 'ls -l', it only iterates 1 time:
node {
    wrap([$class: 'ConfigFileBuildWrapper', managedFiles: [[fileId: 'dest_hosts.txt', targetLocation: '', variable: 'DEST_HOST']]]) {
        HOST = Arrays.asList(readFile(env.DEST_HOST).split("\\r?\\n"))
        deploy(HOST)
    }
}

@NonCPS
def deploy(host) {
    for (String target : host) {
        try {
            echo target
        }
        finally {
            sh 'ls -l'
        }
    }
}
OUTPUT (only iterates 1 time):
[Pipeline] node
Running on <obfuscated>
[Pipeline] {
[Pipeline] wrap
provisoning config files...
copy managed file [<obfuscated>] to file:/var/lib/jenkins/<obfuscated>
[Pipeline] {
[Pipeline] readFile
[Pipeline] echo
www.testhost.com
[Pipeline] sh
[sandbox%2Fpipeline-test-new1] Running shell script
+ ls -l
total 8
-rw-r--r-- 1 jenkins jenkins 10 Jun 17 16:07 someFile
[Pipeline] }
Deleting 1 temporary files
[Pipeline] // wrap
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

ArrayList (and Lists in general) don't have a length or size field; they have a size() method. So use that in the for loop:
    for (ii = 0; ii < var.size(); ii++)
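For completeness, a counted loop like this is still CPS-transformed (no @NonCPS involved), so it can safely call pipeline steps on every iteration. A minimal sketch, assuming env.DEST_HOST points at a readable hosts file as in the question:

    // Sketch only: file contents and step bodies are placeholders.
    def hosts = readFile(env.DEST_HOST).split("\\r?\\n").toList()
    for (int ii = 0; ii < hosts.size(); ii++) {
        echo "Deploying to ${hosts[ii]}"
        sh 'ls -l'   // a real step runs here on each iteration
    }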

I prefer this solution:
node('master') {
    def abcs = ['a', 'b', 'c']   // example data
    stage('Test 1: loop of echo statements') {
        echo_all(abcs)
    }
}

@NonCPS // has to be NonCPS or the build breaks on the call to .each
def echo_all(list) {
    list.each { item ->
        echo "Hello ${item}"
    }
}
If you use a declarative pipeline, you have to wrap the call in a script block:
    stage('master') {
        steps {
            script {
                echo_all(abcs)
            }
        }
    }

As per this tutorial: https://github.com/jenkinsci/pipeline-plugin/blob/master/TUTORIAL.md#serializing-local-variables
...a method marked with the annotation @NonCPS... will be treated as "native" by the Pipeline engine, and its local variables never saved. However it may not make any calls to Pipeline steps.
In your case, the sh call is a pipeline step, which you apparently can't perform from within a @NonCPS-annotated method.
Regarding turning an array into a List: since we're in Groovy land, you could just use the .toList() method on the array.
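Putting both suggestions together: keep the parsing in a @NonCPS helper that returns plain data, and run the pipeline steps from ordinary (CPS-transformed) code. A sketch, with the helper name made up for illustration:

    @NonCPS
    def parseHosts(String text) {
        // Pure Groovy work only - no pipeline steps in here.
        return text.split("\\r?\\n").toList()
    }

    node {
        def hosts = parseHosts(readFile(env.DEST_HOST))
        for (String target : hosts) {
            echo target
            sh 'ls -l'   // fine: we're back in CPS-transformed code
        }
    }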

I cannot tell you PRECISELY why, as I've not figured out how to find useful information about Jenkins without spending hours googling, but I can tell you this:
For a moment I thought you could make it run fine by adding 'echo line' AFTER the sh 'echo $line', but that turned out to be caused by Jenkins running a PREVIOUS version of the script...
I tried all sorts of things and none of them worked, then I found this:
Why an each loop in a Jenkinsfile stops at first iteration
It's a known bug in Jenkins pipeline!
(The known bug is JENKINS-26481, which says "At least some closures are executed only once inside of Groovy CPS DSL scripts managed by the workflow plugin".)

Related

Jenkins Pipeline sleep(10) prevents function from being completed

I have found a strange issue with Groovy code in Jenkinsfile:
@NonCPS
def test() {
    println "Start"
    sleep(10)
    println "Stop"
}
The thing is, after sleeping for 10s the code never gets to println "Stop".
It just seems that sleep returns after 10 seconds and the next pipeline steps run.
Output is just:
[Pipeline] echo
Start
[Pipeline] sleep
Sleeping for 10 sec
[Pipeline] }
... next pipeline steps
Has anyone had the same problem?
When you call sleep(10) inside your Groovy script, the Workflow CPS plugin executes SleepStep instead of DefaultGroovyStaticMethods.sleep(Object self, long time). The problem in your case is caused by the @NonCPS function (thanks mkobit for the suggestion!). Consider the following example:
node {
    stage("Test") {
        test()
    }
}

@NonCPS
def test() {
    echo "Start"
    sleep(5)
    echo "Stop"
}
Output:
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test-pipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
Start
[Pipeline] sleep
Sleeping for 5 sec
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
There is no Stop echoed from the pipeline run. Now, if we just remove the @NonCPS annotation from the function we call in the pipeline:
node {
    stage("Test") {
        test()
    }
}

def test() {
    echo "Start"
    sleep(5)
    echo "Stop"
}
Things change, and Stop gets echoed as expected:
Started by user admin
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test-pipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
Start
[Pipeline] sleep
Sleeping for 5 sec
[Pipeline] echo
Stop
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
For more information about the usage of @NonCPS, please read the following Best Practices For Pipeline Code article.
This issue is well documented in https://wiki.jenkins.io/display/JENKINS/Pipeline+CPS+method+mismatches
Use of Pipeline steps from @NonCPS
Sometimes users will apply the @NonCPS annotation to a method definition, which bypasses the CPS transform inside that method. This can be done to work around limitations in Groovy language coverage (since the body of the method will execute using the native Groovy semantics), or to get better performance (the interpreter imposes a substantial overhead). However, such methods must not call CPS-transformed code such as Pipeline steps.
As suggested by Szymon, removing the @NonCPS annotation will indeed solve the issue.
However, if you can't remove the @NonCPS annotation because it is needed (to solve serialization issues, for example), you can work around this by using the Thread.sleep(long millis) Java method instead.
Notice - an administrator will need to approve this signature if running in the sandbox.
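A minimal sketch of that workaround (the method name and the post-sleep work are made up for illustration):

    @NonCPS
    def pauseThenJoin(List items) {
        Thread.sleep(1000)        // plain Java sleep (milliseconds), not the 'sleep' step
        return items.join(', ')   // both lines run, unlike after the 'sleep' step
    }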

Glob patterns in Jenkinsfile sh (shell) steps

I'm trying to set up a Jenkinsfile to run our CI pipeline. One of the steps will involve collecting files from across our directory tree and copying them into a single directory, for zipping up.
I'm attempting to do this using the Jenkins sh step and using glob patterns, but I can't seem to get this to work.
A simple example Jenkinsfile would be:
pipeline {
    agent any
    stages {
        stage('List with Glob') {
            steps {
                sh 'ls **/*.xml'
            }
        }
    }
}
I would expect that to list any .xml files in the workspace, but instead I receive:
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (List with Glob)
[Pipeline] sh
[jenkinsfile-pipeline] Running shell script
+ ls '**/*.xml'
ls: cannot access **/*.xml: No such file or directory
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 2
Finished: FAILURE
I think I'm missing something with Groovy string interpolation, but I need some help for this specific case (running in a Jenkins pipeline via a Jenkinsfile).
Any help much appreciated!
As far as I can tell, **/*.xml isn't a valid glob pattern (see this). What you have there is an Ant naming pattern which, as far as I know, isn't supported by bash (or sh). What you want to do instead is use find:
pipeline {
    agent any
    stages {
        stage('List with find') {
            steps {
                sh "find . -type f -name '*.xml'"
            }
        }
    }
}
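If you'd rather keep the ** pattern, bash (though not plain sh) can honour it once the globstar option is enabled. A sketch run against a throwaway directory tree with made-up file names:

```shell
# Build a small demo tree (hypothetical names)
mkdir -p demo/sub
touch demo/a.xml demo/sub/b.xml

# Portable in any sh: find does the recursion
find demo -type f -name '*.xml' | sort

# bash-only alternative: globstar makes ** match recursively
bash -c 'shopt -s globstar; ls demo/**/*.xml'
```

In a Jenkinsfile that would mean forcing bash, e.g. starting the sh step's script with a #!/bin/bash shebang and shopt -s globstar, which is why find is usually the simpler choice.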

@NonCPS stops after first build step

In a @NonCPS-annotated function, only code up to the very first Jenkins build step is executed. Does anyone have the same problem? Am I missing something? I am using Jenkins LTS... just sayin' (2.73.2).
This is my code:
@NonCPS
def hello() {
    println 'Output "hello":'
    sh 'echo Hello'
    println 'Output "World":'
    sh 'echo World'
}

node {
    stage('Test') {
        hello()
    }
}
I would expect this code to run properly, but the output is the following:
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/Sandbox/pipeline-test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
Output "hello":
[Pipeline] sh
[pipeline-test] Running shell script
+ echo Hello
Hello
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
You cannot run build steps inside @NonCPS methods. Pipeline scripts are considered "serializable", allowing them to be durable across system failures etc. Only a subset of the capabilities of Groovy used by pipeline scripts is serializable - for anything that is not, you use @NonCPS to execute it.
Essentially, your @NonCPS method needs to do its business and return data back to the "safe", serialized execution stack.
In your particular example code I see no reason why hello() has to be @NonCPS at all - I can only assume your real function is doing something more complex.
(Edit)
Having just looked at your question history and the original script; I don't know if this is still the case with the latest versions, but when I was writing our scripts ~6 months ago, each { thing -> ... } iteration was not serializable.
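The division of labour described above - non-serializable Groovy inside @NonCPS, pipeline steps outside - can be sketched like this (names are invented for the example):

    @NonCPS
    def buildGreetings(List names) {
        // Closures like collect are fine here; just return plain data.
        return names.collect { "Hello ${it}" }
    }

    node {
        stage('Test') {
            for (String line : buildGreetings(['world', 'Jenkins'])) {
                echo line   // steps happen back on the serialized stack
            }
        }
    }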

Jenkins variable, different executors

I'm not a Jenkins guru so please be patient. :-)
I have a pipeline, something nearly as simple as this:
def hash = ''
node {
    stage('Checkout') {
        …
    }
    stage('Build') {
        …
    }
    stage('Tests') {
        …
    }
}
stage('Ask deploy') {
    input 'Deploy?'
}
node {
    stage('Deploy') {
    }
}
I want to set the value of the hash variable in the first node and read it in the next if the manual input is positive. Is this possible and safe? Is this the correct approach?
Note that there are multiple executors and manual input involved. In the Jenkins docs it is hinted for a node that:
As soon as an executor is free on a node, the steps will run.
This means that the two nodes may run in different executors, correct? Do they still share the same global variables? Thanks in advance for any clarifications!
If you have multiple slaves in Jenkins, the pipeline will be launched on one of those slaves. Every slave is different.
Every stage in your pipeline is launched on the same slave, so if you declare the "hash" variable at the first line of your pipeline you won't have a problem reading it anywhere in that pipeline. But if you need to access the variable's value from a different build, you cannot.
If you need a global variable that can be read across builds, you can define one using the Global Variables String Parameter Plugin.
The hash variable is global and its value is available in the different executors, which seems logical to me. So it looks like what I do is OK, and it will work this way unless I'm missing something.
Here is how I've verified that (details skipped for brevity):
I've created a similar pipeline and killed the executor which ran the first node:
def gitHash
node {
    withCredentials(...) {
        // Step 1:
        // Check out from the SCM
        stage('Prepare') {
            echo "Checking out the project from source control.."
            scmInfo = checkout scm
            gitHash = scmInfo.GIT_COMMIT
            echo "Project checked out, the GIT hash of the last commit is: ${gitHash}"
        }
    }
}
stage('Ask deploy') {
    input 'Deploy?'
}
node {
    withCredentials(...) {
        stage('Deploy') {
            echo "TODO, hash ${gitHash}"
        }
    }
}
The output from Jenkins is the following (details skipped):
Obtained Jenkinsfile from 7adc4bb98524b31de93e0c1ae16bf967ca3df47c
Running on jnlp-13775fa128a47 in /root/workspace/...
[Pipeline] {
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Prepare)
[Pipeline] echo
Project checked out, the GIT hash of the last commit is: 7adc4bb98524b31de93e0c1ae16bf967ca3df47c
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] stage
[Pipeline] { (Ask deploy)
[Pipeline] input
Deploy?
Proceed or Abort
Approved by admin
[Pipeline] }
[Pipeline] // stage
[Pipeline] node
Running on jnlp-1383bdf520c9d in /root/workspace/...
[Pipeline] {
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] echo
TODO, hash 7adc4bb98524b31de93e0c1ae16bf967ca3df47c
[Pipeline] End of Pipeline
Finished: SUCCESS
As seen, the first node runs on executor jnlp-13775fa128a47 and the second on jnlp-1383bdf520c9d, but the value of the globally scoped variable can still be read there.

Using loaded script in Jenkinsfile

I had a script that was being executed as a Jenkins pipeline, and it was working fine. I wanted to reuse it for multiple environments, so I moved all the code into functions and load them from multiple files.
Library file - Healthcheck:
#!groovy
@NonCPS
def check(type) {
    stage "prepare"
    echo "TEST1"
    props = readProperties file: 'build.properties'
    echo "TEST2"
    stage "queues"
    checkQueues()
}

@NonCPS
def checkQueues() {
    txt = "http://ltxl0207.sgdcelab.sabre.com:8161/api/jolokia/read/org.apache.activemq:brokerName=localhost,destinationName=!/tss!/trip_source_updates,destinationType=Queue,type=Broker/QueueSize".toURL().getText(requestProperties: [Authorization: "Basic " + "admin:admin".getBytes().encodeBase64().toString()])
    json = new groovy.json.JsonSlurper().parseText(txt)
    echo "Got response: " + txt
}

return this
File that uses it - Healthcheck-dev:
#!groovy
node {
    checkout scm
    healthcheck = load 'Healthcheck'
    healthcheck.check('DEV')
}
And the trouble is that the script doesn't get past readProperties and the prepare stage - it just stops there, ignoring the queues stage:
[Pipeline] load
[Pipeline] { (Healthcheck)
[Pipeline] }
[Pipeline] // load
[Pipeline] stage (prepare)
Entering stage prepare
Proceeding
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
What am I doing wrong? When I move the code to a single file, it works correctly.
Have you tried to execute your pipeline with your two files ("Healthcheck" & "Healthcheck-dev") in the same repository? When you load a script from another SCM repo like you seem to be doing, Jenkins actually creates another directory, @script or such, for your loaded script.
You might need to do something like this to load your general script from the correct workspace:
node {
    checkout scm
    dir("${projectWorkspace}@script") {
        healthcheck = load 'Healthcheck'
        healthcheck.check('DEV')
    }
}
with ${projectWorkspace} being the original workspace for your build.
