How to assign an array to an env variable in a Jenkinsfile - jenkins

I am trying to run a pipeline that targets several servers. When a choice parameter is selected, I want to perform some actions on several servers. My idea is to select a choice parameter 'APPLICATION' and execute some actions on 4 different servers sequentially (one server at a time). My approach is to assign the values of the servers to an environment variable as an array, and then read that environment variable to execute the actions.
pipeline {
    agent {
        node {
            label 'master'
        }
    }
    environment {
        APPLICATION = ['veappprdl001','veappprdl002','veappprdl003','veappprdl004']
        ROUTER = ['verouprdl001','verouprdl002']
    }
    parameters {
        choice(name: 'SERVER_NAME', choices: ['APPLICATION','ROUTER'], description: 'Select Server to Test')
    }
    stages {
        stage('Application Sync') {
            steps {
                script {
                    if (env.SERVER_NAME == 'APPLICATION') {
                        sh """
                            curl --location --request GET 'http://${SERVER_NAME}//configuration-api/localMemory/update/ACTION'
                        """
                    }
                }
            }
        }
    }
}
I want to execute the action on all the servers of the 'APPLICATION' variable if the 'APPLICATION' parameter is selected in 'Build with parameters'.
Any help would be appreciated.
Thanks

You can't store a value of an array type in an environment variable. Whatever you try to assign to an env variable gets automatically cast to the string type. (I explained this in more detail in the following blog post or this video.) So when you try to assign an array, what you actually assign is its toString() representation.
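A minimal sketch of what actually happens (reusing the server names from the question):
pipeline {
    agent any
    stages {
        stage('Demo') {
            steps {
                script {
                    // The list is implicitly converted to its toString() representation
                    env.APPLICATION = ['veappprdl001', 'veappprdl002']
                    echo env.APPLICATION // prints: [veappprdl001, veappprdl002]
                }
            }
        }
    }
}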
However, you can solve this problem differently. Instead of trying to assign an array, you can store a string of values separated by a common delimiter (a comma, for instance). Then, in the part that expects to work with a list of elements, you simply call the tokenize(",") method to produce a list of string elements. Having that, you can iterate and do things in sequence.
Consider the following example that illustrates this alternative solution.
pipeline {
    agent any
    environment {
        APPLICATION = "veappprdl001,veappprdl002,veappprdl003,veappprdl004"
    }
    stages {
        stage("Application Sync") {
            steps {
                script {
                    env.APPLICATION.tokenize(",").each { server ->
                        echo "Server is $server"
                    }
                }
            }
        }
    }
}
When you run such a pipeline, each server is processed in sequence and the console log shows something like:
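Server is veappprdl001
Server is veappprdl002
Server is veappprdl003
Server is veappprdl004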

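Applied back to the original question, the same pattern can drive the curl call for each server sequentially (a sketch, reusing the URL from the question):
pipeline {
    agent any
    environment {
        APPLICATION = "veappprdl001,veappprdl002,veappprdl003,veappprdl004"
    }
    stages {
        stage('Application Sync') {
            steps {
                script {
                    env.APPLICATION.tokenize(",").each { server ->
                        // server is interpolated by Groovy before the shell runs
                        sh "curl --location --request GET 'http://${server}//configuration-api/localMemory/update/ACTION'"
                    }
                }
            }
        }
    }
}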
Related

Create variable in shared library for Jenkinsfile

I'm new to shared libraries in Jenkins, and fairly new to Groovy as well.
I have several multibranch pipelines for different projects. I have set up email notifications for each job using an environment variable containing a list of email addresses, which works just fine. However, several jobs share the same email addresses (depending on the project they're for), and I'd like to create a shared library for a master email list, so I don't have to update the list in each job individually if, say, I want to add or remove someone. I'm having trouble defining a variable in a library that can be used later in the Jenkinsfile. This is a simplified version of what I've been trying:
Shared library (basically a copy-paste of the environment variables I was originally using in the individual Jenkinsfiles/jobs, which works):
Jenkinsfile-shared-libraries\vars\masterEmailList
def call() {
    environment {
        project1EmailList = "user1@xyz.com, user2@xyz.com, user3@xyz.com"
        project2EmailList = "user2@xyz.com, user4@xyz.com, user5@xyz.com"
    }
}
Jenkinsfile
@Library('Jenkinsfile-shared-libraries') _
pipeline {
    agent any
    stages {
        stage('email list for project 1') {
            steps {
                masterEmailList()
                echo env.project1EmailList
            }
        }
    }
}
The echo returns "null" rather than the project's email list as I would expect.
Any guidance would be much appreciated!
Cheers.
The "Defining global variables" section of https://www.jenkins.io/doc/book/pipeline/shared-libraries/#defining-global-variables helped solve this one.
shared library:
Jenkinsfile-shared-libraries\vars\masterEmailList
def project1EmailList() {
    return "user1@xyz.com, user2@xyz.com, user3@xyz.com"
}
def project2EmailList() {
    return "user2@xyz.com, user4@xyz.com, user5@xyz.com"
}
Jenkinsfile:
@Library('Jenkinsfile-shared-libraries') _
pipeline {
    agent any
    stages {
        stage('email list for project 1') {
            steps {
                script {
                    echo masterEmailList.project1EmailList()
                }
            }
        }
    }
}

Execute a stage if environment variable contains specific substring

I have a Jenkins declarative pipeline in which I want to be able to run a stage only if a specific environment variable contains a specific substring (not fully equal to it, just contains it).
Does anyone have any idea how I can implement it (maybe using the when condition, if possible)?
Thanks in advance,
Alon
As you mentioned, in a declarative pipeline you can use the when directive to establish a condition under which the stage will be executed.
Among the built-in condition options like triggeredBy, branch, and tag, there is the generic expression option, which allows you to run any Groovy code and calculate the relevant Boolean value according to your needs.
So for your case, for example, you can just use the Groovy contains method to achieve what you want, something like:
pipeline {
    agent any
    stages {
        stage('Conditional Stage') {
            when {
                expression { return env.MyParameter.contains('MySubstring') }
            }
            steps {
                echo "Running the conditional stage"
            }
        }
    }
}

Groovy/Jenkins: when are variables null, when are they empty strings, and when are they missing?

I'm trying to grok the rules surrounding variables in Groovy/Jenkinsfiles/declarative syntax.
The generic webhook trigger captures HTTP POST content and makes it available as variables in your Jenkinsfile. E.g.:
pipeline {
    agent any
    triggers {
        GenericTrigger(
            genericVariables: [
                [key: "POST_actor_name", value: "\$.actor.name"]
            ],
            token: "foo"
        )
    }
    stages {
        stage("Set up") {
            steps {
                script {
                    echo "env var ${env.actor_name}"
                    echo "global var ${actor_name}"
                }
            }
        }
    }
}
If the HTTP POST content contains a JSON object with an actor_name field valued "foo", then this prints:
env var foo
global var foo
If the HTTP POST content does not contain the JSON field actor_name, then this prints
env var null
...then asserts/aborts with a No such property error.
Jenkins jobs also have a "this project is parameterized" setting, which seems to introduce yet another way to inject variables into your Jenkinsfile. The following Jenkinsfile prints a populated, parameterized build variable, an unpopulated one, and an intentionally nonexistent variable:
pipeline {
    agent any
    stages {
        stage("Set up") {
            steps {
                script {
                    echo "1 [${env.populated_var}]"
                    echo "2 [${env.unpopulated_var}]"
                    echo "3 [${env.dontexist}]"
                    echo "4 [${params.populated_var}]"
                    echo "5 [${params.unpopulated_var}]"
                    echo "6 [${params.dontexist}]"
                    echo "7 [${populated_var}]"
                    echo "8 [${unpopulated_var}]"
                    echo "9 [${dontexist}]"
                }
            }
        }
    }
}
The result is:
1 [foo]
2 []
3 [null]
4 [foo]
5 []
6 [null]
7 [foo]
8 []
...then asserts/aborts with a No such property error.
The pattern I can ascertain is:
env.-scoped variables will be NULL if they come from unpopulated HTTP POST content.
env.-scoped variables will be empty strings if they come from unpopulated parameterized build variables.
env.-scoped variables will be NULL if they are nonexistent among parameterized build variables.
Referencing global-scoped variables will assert if they come from unpopulated HTTP POST content.
Referencing global-scoped variables will yield empty strings if they come from unpopulated parameterized build variables.
params.-scoped variables will be NULL if they are nonexistent among parameterized build variables.
params.-scoped variables will be empty strings if they come from unpopulated parameterized build variables.
I have a few questions about this - I believe they are reasonably related, so am including them in this one post:
1. What is the underlying pattern/logic behind when a variable is NULL and when it is an empty string?
2. Why are variables available in different "scopes": env., params., and globally, and what is their relationship (why are they not always 1:1)?
3. Is there a way for unpopulated values in parameterized builds to be null-valued variables in the Jenkinsfile instead of empty strings?
Context: in my first Jenkinsfile project, I made use of variables populated by HTTP POST content. Through this, I came to associate a value's absence with the corresponding env. variable's null-ness. Now, I'm working with variables coming from parameterized build values, and when a value is not populated, the corresponding env. variable isn't null -- it's an empty string. Therefore, I want to understand the pattern behind when and why these variables are null versus empty, so that I can write solid and simple code to handle absence/non-population of values from both HTTP POST content and parameterized build values.
The answer is a bit complicated.
For 1 and 2:
First of all, pipeline, stage, steps, etc. are Groovy classes. Everything in there is defined as an object/variable.
env is an object that holds pretty much everything,
params holds all parameters ;)
They are both Maps: if you access an empty value it's empty, and if you access a non-existing one it's null.
The globals are variables themselves, and if you try to access a non-existing one the compiler complains.
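A small sketch of the difference (the variable names are made up for illustration):
script {
    echo "${env.NO_SUCH_VAR}"      // env is Map-like: a missing key yields null
    echo "${params.NO_SUCH_PARAM}" // params too: a missing key yields null
    echo "${NO_SUCH_GLOBAL}"       // plain property access: throws MissingPropertyException
}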
For 3:
You can define "default" parameters:
pipeline {
    agent any
    stages {
        stage("Set up") {
            steps {
                script {
                    params = setConfig(params)
                }
            }
        }
    }
}
def merge(Map lhs, Map rhs) {
    return rhs.inject(lhs.clone()) { map, entry ->
        if (map[entry.key] instanceof Map && entry.value instanceof Map) {
            map[entry.key] = merge(map[entry.key], entry.value)
        } else {
            map[entry.key] = entry.value
        }
        return map
    }
}

def setConfig(givenConfig = [:]) {
    def defaultConfig = [
        "populated_var": "",
        "unpopulated_var": "",
        "dontexist": ""
    ]
    effectiveConfig = merge(defaultConfig, givenConfig)
    return effectiveConfig
}
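For illustration, calling setConfig with only one key supplied fills in the remaining defaults (the values here are hypothetical):
def effective = setConfig(["populated_var": "foo"])
// effective == [populated_var: "foo", unpopulated_var: "", dontexist: ""]
echo "${effective.dontexist}" // prints an empty string instead of null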

run 2 jobs with same jenkinsfile and different parameters

I want to configure 2 jobs in Jenkins that use the same Jenkinsfile, where the only difference is the parameters passed to these jobs.
For example:
Create 2 jobs named A and B, each of which gets a param X.
In job A, X is 1; in job B, X is 2.
I want to create it this way instead of one job with a multi-checkbox, because the jobs are independent and I don't want to leave any room for mistakes.
How can I achieve that only via the Jenkinsfile?
I read about loading a Jenkinsfile within another Jenkinsfile, but I can't find a way to pass parameters.
How can I achieve this?
If you use Build with Parameters, you can reference the declared parameters in the Jenkinsfile with the syntax ${PARAM}.
You could also declare a Choice Parameter named CHOICE; in a declarative pipeline it would look like this:
pipeline {
    agent any
    parameters {
        choice(name: 'CHOICE', choices: ['One', 'Two', 'Three'], description: 'Pick something')
    }
    stages {
        stage('Example') {
            steps {
                echo "Choice: ${params.CHOICE}"
            }
        }
    }
}
You would not need 2 jobs this way. It is the same job but it runs with different configurations for CHOICE.
This example is directly taken from the official docs:
https://www.jenkins.io/doc/book/pipeline/syntax/#parameters
My solution was to set the following in the Jenkinsfile:
environment {
    X = getX(env.JOB_NAME)
    Y = getY(env.JOB_NAME)
}
and to define the following functions at the end of the pipeline:
def getX(jobName) {
    if (jobName.contains("X1")) {
        return "X1"
    }
    return "X2"
}
def getY(jobName) {
    if (jobName.contains("BLA")) {
        return "BLA1"
    }
    return "BLA2"
}

Jenkins pipelineJob DSL not interpreting variables in pipeline script

I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but I cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences only in a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed interpolated to match the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can, in the pipeline script as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV', 'QA', 'UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
       additionalParameters: [
           project: projectName,
           environments: envs,
           repository: repositoryURL
       ],
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace - the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpolation of the variables.
I've made a number of attempts at getting the string interpolated, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which would be having the string contents of the file implicitly interpolated during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime rather than at the time you build the job. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
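For example, a pipeline.groovy template along these lines (the variable names are illustrative) distinguishes the two expansion times:
// pipeline.groovy, used as a template by SimpleTemplateEngine
pipeline {
    agent any
    environment {
        REPO = "${repository}" // expanded at job creation, from additionalParameters
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Build number is \${BUILD_NUMBER}" // escaped: expanded at runtime
            }
        }
    }
}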
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
They are a bit limited, because environment variables are strings, but it should work for basic stuff.
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
        // note the need to split the comma-separated string above
    }
}
You need to use the complete job name as a variable, without quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
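For completeness, a sketch of how JOBNAME could be supplied from the seed pipeline (the key name is illustrative; any additionalParameters entry becomes a variable visible inside the DSL script):
jobDsl targets: 'jobs/pipeline.groovy',
       additionalParameters: [
           JOBNAME: 'myProject pipeline'
       ]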
