I am trying to print all list items with the script below.
node {
    env.myJobs = [ "abc",
    ...
    "def"]
    stage('Regression') {
        echo "Test"
        build_jobs(myJobs)
    }
}
@NonCPS
def build_jobs(list) {
    list.tokenize(',').each { item ->
        echo "${item}"
    }
}
Below is the build log. For the first and last items it prints an extra bracket, and it adds a leading space to the other items.
How can I iterate through the list without the extra characters?
[Pipeline] echo
[abc
...
[Pipeline] echo
def]
Thanks to Szymon Stepniak:
Define the list as a string, with items separated by ", " (a comma and a space).
Get each item with a regex split().
node {
    env.myJobs = 'abc,\
    ...
    def'
    stage('Regression') {
        echo "Test"
        build_jobs(myJobs)
    }
}

@NonCPS
def build_jobs(list) {
    list.split(/, */).each { item ->
        echo "${item}"
    }
}
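What caused the brackets: assigning a List to env stores the list's toString() form, and tokenize(',') then preserves the bracket characters and the space after each comma. Both the symptom and the fix can be sketched in plain Groovy:

```groovy
// Assigning a List to an env variable stores its toString(), e.g. '[abc, def]'
def stored = ["abc", "def"].toString()
assert stored == '[abc, def]'

// tokenize(',') keeps the brackets and the space after each comma
assert stored.tokenize(',') == ['[abc', ' def]']

// Storing a plain comma-separated string and splitting on /, */ avoids both
def asString = 'abc, def'
assert asString.split(/, */).toList() == ['abc', 'def']
```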
(edited/updated from original post to attempt to address confusion about what the problem is)
The problem is: Values that are set in a Jenkinsfile environment section are not added to the object returned by env.getEnvironment()
The question is: How do I get a map of the complete environment, including values that were assigned in the environment section? Because env.getEnvironment() doesn't do that.
Example Jenkinsfile:
pipeline {
    agent any
    environment {
        // this is not included in env.getEnvironment()
        ONE = '1'
    }
    stages {
        stage('Init') {
            steps {
                script {
                    // this is included in env.getEnvironment()
                    env['TWO'] = '2'
                }
            }
        }
        stage('Test') {
            steps {
                script {
                    // get env values as a map (for passing to groovy methods)
                    def envObject = env.getEnvironment()
                    // see what env.getEnvironment() looks like
                    // notice ONE is not present in the output, but TWO is
                    // ONE is set using ONE = '1' in the environment section above
                    // TWO is set using env['TWO'] = '2' in the Init stage above
                    println envObject.toString()
                    // for good measure loop through the env.getEnvironment() map
                    // and print any value(s) named ONE or TWO
                    // only TWO: 2 is output
                    envObject.each { k, v ->
                        if (k == 'ONE' || k == 'TWO') {
                            println "${k}: ${v}"
                        }
                    }
                    // now show that both ONE and TWO are indeed in the environment
                    // by shelling out and using the env linux command
                    // this outputs ONE=1 and TWO=2
                    sh 'env | grep -E "ONE|TWO"'
                }
            }
        }
    }
}
Output (output of envObject.toString() shortened to ... except relevant part):
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Init)
[Pipeline] script
[Pipeline] {
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
[..., TWO:2]
[Pipeline] echo
TWO: 2
[Pipeline] sh
+ env
+ grep -E ONE|TWO
ONE=1
TWO=2
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Notice ONE is missing from the env.getEnvironment() object, but TWO is present.
Also notice that both ONE and TWO are set in the actual environment; I am not asking how to access the environment or how to iterate through the values returned by env.getEnvironment(). The issue is that env.getEnvironment() does not return all the values in the environment: it excludes any values that were set inside the environment section of the Jenkinsfile.
I don't have a "why" answer for you, but you can cheat and get a map by parsing the output from env via the readProperties step.
def envMap = readProperties(text: sh(script: 'env', returnStdout: true))
println(envMap.getClass())
println("${envMap}")
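Outside Jenkins (where the readProperties step comes from the Pipeline Utility Steps plugin), the same KEY=VALUE parsing can be sketched in plain Groovy; the sample text here is illustrative:

```groovy
// Parse KEY=VALUE lines, which is what readProperties does with the `env` output
def envText = 'ONE=1\nTWO=2\nPATH=/usr/bin'
def envMap = envText.readLines().collectEntries { line ->
    def (key, value) = line.split('=', 2).toList() // split on the first '=' only
    [(key): value]
}
assert envMap == [ONE: '1', TWO: '2', PATH: '/usr/bin']
```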
I would get the env output and convert it to a Map with the help of readProperties:
pipeline {
    agent any
    environment {
        // this is not included in env.getEnvironment()
        ONE = '1'
    }
    stages {
        stage('Init') {
            steps {
                script {
                    // this is included in env.getEnvironment()
                    env['TWO'] = '2'
                }
            }
        }
        stage('Test') {
            steps {
                script {
                    def envProp = readProperties text: sh(script: "env", returnStdout: true).trim()
                    Map envMapFromProp = envProp as Map
                    echo "ONE=${envMapFromProp.ONE}\nTWO=${envMapFromProp.TWO}"
                    // now show that both ONE and TWO are indeed in the environment
                    // by shelling out and using the env linux command
                    // this outputs ONE=1 and TWO=2
                    sh 'env | grep -E "ONE|TWO"'
                }
            }
        }
    }
}
The env.getEnvironment() method will not return a List or Map, hence it's difficult to iterate over it with each, but there are some workarounds you can use to make this work.
import groovy.json.JsonSlurper

pipeline {
    agent any
    environment {
        ONE = 1
        TWO = 2
    }
    stages {
        stage('debug') {
            steps {
                script {
                    def jsonSlurper = new JsonSlurper()
                    def object = jsonSlurper.parseText(env.getEnvironment().toString())
                    assert object instanceof Map
                    object.each { k, v ->
                        echo "Key: ${k}, Value: ${v}"
                    }
                }
            }
        }
    }
}
Note: env.getEnvironment().toString() will give you a JSON-like String. While parsing that string, if jsonSlurper.parseText encounters any special character it will throw an error.
You can also explore the env Jenkins API a little and look for an appropriate method that returns a Map or List directly, so that you can use each.
What I want to achieve is building a list of stages while avoiding when{}. I'm trying to run parallel pipelines.
Here is example code:
def stage_pull = {
    stage('pulling') {
        echo 'pulling'
    }
}

def stage_build = {
    stage('building') {
        echo 'building'
    }
}

def stage_deb = {
    stage('deb') {
        echo 'deb file'
    }
}
def transformIntoStages(stage1, stage2) {
    //return stage1 + stage2
    //return {stage1;stage2}
    return stage1 << stage2
}

def agent_list = ["agent1", "agent2"]
stepsForParallel = [:]
stepsForParallel['agent1'] = transformIntoStages(stage_pull, stage_build)
stepsForParallel['agent2'] = transformIntoStages(stage_pull, stage_deb)

pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('BUILD') {
            steps {
                script {
                    parallel stepsForParallel
                }
            }
        }
    }
}
This is a simplified version. In the real project, the number of stages used will differ for each agent.
I also have a version with closures inside methods ...
https://pastebin.com/gPJjPx59
But none of these work.
PS. I know about matrix{}; I use it often, but I don't want to use it in this particular case.
I think I managed to achieve the goal by using strings and the evaluate() function.
def stage_pull() {
    return """
    stage('pulling') {
        echo 'pulling'
    }
    """
}

def stage_build() {
    return """
    stage('building') {
        echo 'building'
    }
    """
}

def stage_deb() {
    return """
    stage('deb') {
        echo 'deb file'
    }
    """
}

def transformIntoStages(stage1, stage2) {
    echo "{" + stage1 + stage2 + "}"
    return { evaluate(stage1 + stage2) }
}
stepsForParallel = [:]
stepsForParallel['agent1'] = transformIntoStages(stage_pull(), stage_build())
stepsForParallel['agent2'] = transformIntoStages(stage_pull(), stage_deb())
stepsForParallel['agent3'] = transformIntoStages(stage_pull(), '')

pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('BUILD') {
            steps {
                script {
                    parallel stepsForParallel
                }
            }
        }
    }
}
However, I'm afraid that with more complicated stages/functions/structures using different kinds of parentheses it will start to become a mess. And Blue Ocean can't show this properly. But in the logs with timestamps, and above all in the "Pipeline Steps" section, I can see that it works as it should.
So I'm still open to suggestions.
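For what it's worth, the string/evaluate() step may be avoidable. The reason `stage1 << stage2` misbehaves is that `<<` on closures is function composition (one closure's result feeds the other), not sequencing. Returning a closure that simply calls each stage in turn composes cleanly. A plain-Groovy sketch, where a log list stands in for the Jenkins stage/echo steps:

```groovy
// Stand-ins for the stage closures; in Jenkins these would wrap stage { ... }
def log = []
def stagePull  = { log << 'pulling' }
def stageBuild = { log << 'building' }
def stageDeb   = { log << 'deb file' }

// Sequence any number of stage closures into one runnable branch
def transformIntoStages(Closure... stages) {
    return { stages.each { it() } }
}

transformIntoStages(stagePull, stageBuild)()
transformIntoStages(stagePull, stageDeb)()
assert log == ['pulling', 'building', 'pulling', 'deb file']
```

This also makes branches with different numbers of stages trivial, since the varargs method accepts any count.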
I have noticed this (to me) strange behaviour. I have this Jenkins declarative pipeline:
#!groovy
pipeline {
    agent {
        node {
            label 'mine-agent-pod'
        }
    }
    environment {
        MARKER = """run-${sh(
            returnStdout: true,
            script: "date -Ins | sed 's/[^a-zA-Z0-9-]/_/g'"
        ).trim()}"""
        STATUS_DATA = "status-data-${MARKER}.json"
    }
    stages {
        stage('Setup') {
            steps {
                sh("""echo MARKER=${MARKER}""")
                sh("""echo STATUS_DATA=${STATUS_DATA}""")
            }
        }
    }
}
I wanted MARKER to be a kind of ID I would use to mark all temporary stuff I create in a build (and I like it to be a date). But it looks like MARKER is evaluated every time it is used, as the build output shows (notice how the nanoseconds part of the string differs):
[Pipeline] sh
+ echo MARKER=run-2020-07-07T12_04_23_369785902_00_00
MARKER=run-2020-07-07T12_04_23_369785902_00_00
[Pipeline] sh
+ echo STATUS_DATA=status-data-run-2020-07-07T12_04_23_727188019_00_00.json
STATUS_DATA=status-data-run-2020-07-07T12_04_23_727188019_00_00.json
Why is that? How to achieve having "static" variable?
It's because Groovy closures have an interesting advantage over mere expressions: lazy evaluation. Using plain string concatenation instead of a GString makes the value evaluate only once:
environment {
    MARKER = 'run-' + sh(
        returnStdout: true,
        script: "date -Ins | sed 's/[^a-zA-Z0-9-]/_/g'").trim()
    STATUS_DATA = "status-data-${MARKER}.json"
}
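The laziness is easy to demonstrate in plain Groovy: a GString that embeds a closure (the ${-> ...} form) is re-evaluated on every toString(), while a plain embedded expression is captured once at creation:

```groovy
def counter = 0
def eager = "count=${counter}"     // expression: evaluated now, value captured
def lazy  = "count=${-> counter}"  // closure: evaluated at each toString()
counter = 5
assert eager.toString() == 'count=0'
assert lazy.toString()  == 'count=5'
```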
After a colleague's great advice, defining the variable outside of the pipeline helped:
#!groovy
def MARKER = """run-${ new Date().format("yyyy-MM-dd'T'HH:mm:ss.SZ") }"""

pipeline {
    agent {
        node {
            label 'sat-cpt'
        }
    }
    environment {
        STATUS_DATA = "status-data-${MARKER}.json"
    }
    stages {
        stage('Setup') {
            steps {
                sh("""echo MARKER=${MARKER}""")
                sh("""echo STATUS_DATA=${STATUS_DATA}""")
            }
        }
    }
}
This prints:
[Pipeline] sh
+ echo MARKER=run-2020-07-08T19:41:56.130+0000
MARKER=run-2020-07-08T19:41:56.130+0000
[Pipeline] sh
+ echo STATUS_DATA=status-data-run-2020-07-08T19:41:56.130+0000.json
STATUS_DATA=status-data-run-2020-07-08T19:41:56.130+0000.json
I am trying to split the URL http://localhost:8081/artifactory/api/storage/myrepo/sub1/file.zip in the Groovy DSL of Jenkins. It is a single-line string, but the following code does not work:
String[] arr= string_var.split('/');
String[] arr=string_var.split('\\/');
It does not split the string; it returns the whole thing in arr[0].
I am not sure if this is a bug. Please let me know if there is any other way in Groovy to get "sub1" from the URL string.
Are you sure you are writing the DSL script correctly? The Groovy code looks OK.
Try skipping the type declarations:
def url_str = 'http://localhost:8081/artifactory/api/storage/myrepo/sub1/file.zip'
def sub = url_str.split('/')[-2]
println(sub)
in one line:
println('http://localhost:8081/artifactory/api/storage/myrepo/sub1/file.zip'.split('/')[-2])
no split, indexes:
def url_str = 'http://localhost:8081/artifactory/api/storage/myrepo/sub1/file.zip'
int[] indexes = url_str.findIndexValues {it == "/"}
println url_str.substring(indexes[-2] + 1, indexes[-1])
Try enclosing your code inside a script block of the DSL, like the following piece of code:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def string_var = "http://localhost:8081/artifactory/api/storage/myrepo/sub1/file.zip"
                    String[] arr = string_var.split('/');
                    println "${arr[0]}"
                }
            }
        }
    }
}
Executing code above I get this result on the console:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
http:
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Thus giving the expected 'http:' String (element 0 of the split), which shows the split itself works fine.
Another Groovy way to get the string 'sub1' (regex):
String s = "http://localhost:8081/artifactory/api/storage/myrepo/sub1/file.zip"
def match = s =~ /(sub1)/
if (match.size() > 0) { // if url 's' contains the expected string 'sub1'
    println match[0][1]
    // Or do something with the match
}
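That pattern only matches because sub1 appears literally in the regex. A slightly more general sketch captures the second-to-last path segment of any such URL:

```groovy
def s = 'http://localhost:8081/artifactory/api/storage/myrepo/sub1/file.zip'
// match /<segment>/<last-segment> at the end of the string, capturing <segment>
def matcher = s =~ '/([^/]+)/[^/]+$'
assert matcher.find()
assert matcher.group(1) == 'sub1'
```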
How do I pass variables between stages in a declarative pipeline?
In a scripted pipeline, I gather the procedure is to write to a temporary file, then read the file into a variable.
How do I do this in a declarative pipeline?
E.g. I want to trigger a build of a different job, based on a variable created by a shell action.
stage("stage 1") {
    steps {
        sh "do_something > var.txt"
        // I want to get var.txt into VAR
    }
}
stage("stage 2") {
    steps {
        build job: "job2", parameters: [string(name: "var", value: "${VAR}")]
    }
}
If you want to use a file (since a script is the thing generating the value you need), you could use readFile as seen below. If not, use sh with the returnStdout option to capture the value directly.
// Define a groovy local variable, myVar.
// A global variable without the def, like myVar = 'initial_value',
// was required for me in older versions of jenkins. Your mileage
// may vary. Defining the variable here maybe adds a bit of clarity,
// showing that it is intended to be used across multiple stages.
def myVar = 'initial_value'

pipeline {
    agent { label 'docker' }
    stages {
        stage('one') {
            steps {
                echo "1.1. ${myVar}" // prints '1.1. initial_value'
                sh 'echo hotness > myfile.txt'
                script {
                    // OPTION 1: set variable by reading from file.
                    // FYI, trim removes leading and trailing whitespace from the string
                    myVar = readFile('myfile.txt').trim()
                }
                echo "1.2. ${myVar}" // prints '1.2. hotness'
            }
        }
        stage('two') {
            steps {
                echo "2.1 ${myVar}" // prints '2.1. hotness'
                sh "echo 2.2. sh ${myVar}, Sergio" // prints '2.2. sh hotness, Sergio'
            }
        }
        // this stage is skipped due to the when expression, so nothing is printed
        stage('three') {
            when {
                expression { myVar != 'hotness' }
            }
            steps {
                echo "three: ${myVar}"
            }
        }
    }
}
Simply:
pipeline {
    parameters {
        string(name: 'custom_var', defaultValue: '')
    }
    stage("make param global") {
        steps {
            tmp_param = sh(script: 'most amazing shell command', returnStdout: true).trim()
            env.custom_var = tmp_param
        }
    }
    stage("test if param was saved") {
        steps {
            echo "${env.custom_var}"
        }
    }
}
I had a similar problem, as I wanted one specific pipeline to provide variables and many other pipelines to consume them.
I created a my-set-env-variables pipeline:
script {
    env.my_dev_version = "0.0.4-SNAPSHOT"
    env.my_qa_version = "0.0.4-SNAPSHOT"
    env.my_pp_version = "0.0.2"
    env.my_prd_version = "0.0.2"
    echo "My versions [DEV:${env.my_dev_version}] [QA:${env.my_qa_version}] [PP:${env.my_pp_version}] [PRD:${env.my_prd_version}]"
}
I can then reuse these variables in another pipeline, my-set-env-variables-test:
script {
    env.dev_version = "NOT DEFINED DEV"
    env.qa_version = "NOT DEFINED QA"
    env.pp_version = "NOT DEFINED PP"
    env.prd_version = "NOT DEFINED PRD"
}
stage('inject variables') {
    echo "PRE DEV version = ${env.dev_version}"
    script {
        def variables = build job: 'my-set-env-variables'
        def vars = variables.getBuildVariables()
        //println "found variables" + vars
        env.dev_version = vars.my_dev_version
        env.qa_version = vars.my_qa_version
        env.pp_version = vars.my_pp_version
        env.prd_version = vars.my_prd_version
    }
}
stage('next job') {
    echo "NEXT JOB DEV version = ${env.dev_version}"
    echo "NEXT JOB QA version = ${env.qa_version}"
    echo "NEXT JOB PP version = ${env.pp_version}"
    echo "NEXT JOB PRD version = ${env.prd_version}"
}
There is no need for (hidden plugin) parameter definitions or temp-file access. Sharing variables across stages can be accomplished by using global Groovy variables in a Jenkinsfile, like so:
#!/usr/bin/env groovy
def MYVAR
def outputOf(cmd) { return sh(returnStdout: true, script: cmd).trim() }

pipeline {
    agent any
    stages {
        stage("stage 1") {
            steps {
                script {
                    MYVAR = outputOf('echo do_something')
                }
                sh "echo MYVAR has been set to: '${MYVAR}'"
            }
        }
        stage("stage 2") {
            steps {
                sh '''echo "...in multiline quotes: "''' + MYVAR + '''" ... '''
                build job: "job2", parameters: [string(name: "var", value: MYVAR)]
            }
        }
    }
}
I have enhanced the existing solution by correcting the syntax. I also used the Hidden Parameter plugin so that the variable does not show up as an extra parameter in the Jenkins UI. Works well :)
properties([parameters([[$class: 'WHideParameterDefinition', defaultValue: 'yoyo', name: 'hidden_var']])])

pipeline {
    agent any
    stages {
        stage("make param global") {
            steps {
                script {
                    env.hidden_var = "Hello"
                }
            }
        }
        stage("test if param was saved") {
            steps {
                echo "About to check result"
                echo "${env.hidden_var}"
            }
        }
    }
}