Jenkins pipelineJob DSL not interpreting variables in pipeline script

I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences only in a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated to match the parameters to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV','QA','UAT']
def repositoryURL = 'myrepo.com'
jobDsl targets: ['jobs/*.groovy'].join('\n'),
    additionalParameters: [
        project: projectName,
        environments: envs,
        repository: repositoryURL
    ],
    removedJobAction: 'DELETE',
    removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace - the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any of the variables expanded.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments...I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace(pipeline.groovy)}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
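(For reference, a replaceAll() fallback along the lines mentioned above might look like this sketch - the @TOKEN@ placeholders are hypothetical and would have to appear literally in the template file:)

def scriptText = readFileFromWorkspace('pipeline.groovy')
// hypothetical placeholder tokens; each must exist verbatim in pipeline.groovy
[PROJECT: projectName, REPOSITORY: repositoryURL].each { token, value ->
    scriptText = scriptText.replaceAll("@${token}@", value)
}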

I've come up with a way to make this work. It's not what I'd prefer (which would be having the string contents of the file implicitly interpolated during job creation), but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
def template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables that should be resolved when the pipeline runs, like \${VARIABLE}, so that they are expanded at runtime, not at the time you build the job. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
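For illustration, a hedged sketch of what the question's pipeline.groovy might look like once rewritten as a SimpleTemplateEngine template under this scheme (the variable names match the additionalParameters above; the .inspect() call, which renders the list as a Groovy literal, is my assumption):

pipeline {
    agent any
    environment {
        REPO = "${repository}" // expanded at job creation
    }
    parameters {
        choice name: "ENVIRONMENT", choices: ${environments.inspect()} // becomes ['DEV', 'QA', 'UAT']
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying \${env.REPO} to \${params.ENVIRONMENT}..." // escaped, so expanded at runtime
            }
        }
    }
}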

You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
This is a bit limited because environment variables are strings, but it should work for basic cases.
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        // note the need to split the comma-separated string
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
    }
}

You need to use the complete job name as a variable, without quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
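A minimal sketch of how JOBNAME might be supplied, assuming the same jobDsl step shape as in the original question (projectName is the variable defined there):

jobDsl targets: 'jobs/*.groovy',
    additionalParameters: [JOBNAME: projectName + ' pipeline']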

Related

Jenkins Job DSL - can't load parameters from file

DSL job:
#!groovy
def file = readFileFromWorkspace('params.properties').trim()

job('app-adm') {
    label("adm")
    println("#" + file + "#")
    parameters {
        file
    }
    steps {
        shell(readFileFromWorkspace('script-adm.sh'))
    }
}

job('app-tst-mt') {
    parameters {
        booleanParam('FLAG', true)
    }
    steps {
        shell(readFileFromWorkspace('script-tst-mt.sh'))
    }
}
params.properties:
choiceParam('OPTION', ['option 1 (default)', 'option 2', 'option 3'])
I've tried:
Use files as input to Jenkins JobDSL
adding it through a single variable like x=<param> and parameters { x }
different formats
Nothing is working. Through a println inside the job I can clearly see the string that I want to put in parameters, but when I do so it doesn't register and I don't get any params.
Okay, the answer is stupidly obvious, but if anybody has the same problem: just add a shell job before the DSL job in the Jenkins build.
In this shell job you can easily modify files in the workspace, so place the whole DSL job (groovy script) there and just replace parts of the text with sed or envsubst.
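A minimal sketch of such a shell step, assuming a template file seed-job.groovy.tpl containing an ${OPTION_PARAM} placeholder (both names are hypothetical):

# substitute the placeholder before the Job DSL step reads the seed script
export OPTION_PARAM="choiceParam('OPTION', ['option 1 (default)', 'option 2', 'option 3'])"
envsubst '$OPTION_PARAM' < seed-job.groovy.tpl > seed-job.groovy  # only this variable is substituted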

Run 2 jobs with the same Jenkinsfile and different parameters

I want to configure 2 jobs in Jenkins that use the same Jenkinsfile, where the only difference is the parameters to these jobs.
For example:
create 2 jobs named A and B, where each of them gets a param X.
In A the job gets X as 1, and in B X is 2.
I want to create it this way instead of one job with a multi-checkbox, because the jobs are independent and I don't want to leave any room for mistakes.
How can I achieve that only via the Jenkinsfile?
I read about loading a Jenkinsfile within another Jenkinsfile, but can't find a way to pass parameters.
How can I achieve this?
If you use Build with Parameters you can reference these declared parameters in the Jenkinsfile with this syntax ${PARAM}.
You could also declare a Choice Parameter named CHOICE; in a declarative pipeline it would look like this:
pipeline {
    agent any
    parameters {
        choice(name: 'CHOICE', choices: ['One', 'Two', 'Three'], description: 'Pick something')
    }
    stages {
        stage('Example') {
            steps {
                echo "Choice: ${params.CHOICE}"
            }
        }
    }
}
You would not need 2 jobs this way. It is the same job but it runs with different configurations for CHOICE.
This example is directly taken from the official docs:
https://www.jenkins.io/doc/book/pipeline/syntax/#parameters
My solution was to set, in the Jenkinsfile:
environment {
    X = getX(env.JOB_NAME)
    Y = getY(env.JOB_NAME)
}
and to define the following functions at the end of the pipeline:
def getX(jobName) {
    if (jobName.contains("X1")) {
        return "X1"
    }
    return "X2"
}

def getY(jobName) {
    if (jobName.contains("BLA")) {
        return "BLA1"
    }
    return "BLA2"
}

Calculated-String as the parameter to Jenkins's Groovy "STAGE"

I put this in the script section of a Jenkins UI job's configuration -
pipeline {
    agent any
    stages {
        stage('Project') {
            ...
That works, however -
pipeline {
    agent any
    stages {
        stage('Project ' + 'Josh') {
            ...
throws an exception and displays a misleading error message, because the parser gets confused by the constructed string inside the stage.
Moreover,
String description = 'Project' + ' Josh'

pipeline {
    agent any
    stages {
        stage(description) {
            ...
does not fail, but displays the literal text 'description' as the stage's name.
Now, if you try to load a groovy PaC file with this in it:
node {
    stage('Project' + 'Josh') {
        ...
it works without a hitch.
Is it possible that there are two different Groovy parsers employed, one for the UI and another for loaded PaCs? That would mean the UI one has this really horrible bug in it...
Ideas?
Your example has nothing to do with the Jenkins UI. You have shown two different pipeline types - a declarative and a scripted one.
Declarative pipeline
A declarative pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // do something here
            }
        }
    }
}
introduces a more simplified, limited, and opinionated syntax. This type of pipeline sets boundaries for Groovy code execution - arbitrary Groovy is only available inside a dedicated script block, e.g.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    def name = 'Joe'
                    echo "My name is ${name}"
                }
            }
        }
    }
}
This is why the stage block expects a literal and not a variable or an expression.
Scripted pipeline
The second example you have shown is a scripted pipeline. This kind of pipeline is more powerful compared to a declarative pipeline - the whole pipeline script is more or less a Groovy script, so you can put almost any code anywhere. A scripted pipeline starts with a node block, and you can put any Groovy code inside this block. Consider the following example:
node {
    stage("Test") {
        echo "1,2,3"
    }

    for (int i = 0; i < 5; i++) {
        stage("Stage ${i}") {
            echo "This is ${i}"
        }
    }
}
This pipeline script generates 6 stages: the Test stage plus the five numbered ones. As you can see, there are effectively no limits on what kind of code you put inside a node block. Declarative pipeline does not allow you to do that - its syntax is strict and you have to follow it directly.
Differences
As a final note, I will quote the official Jenkins docs:
Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model.
Source: https://jenkins.io/doc/book/pipeline/syntax/#compare
The script you configured via the UI uses declarative pipeline syntax, while the other uses the scripted node syntax. I'd say that's probably where the other parser comes in, and I would agree that the declarative one has a bug.

Define your own global variable for a Jenkins job (not for ALL jobs!!)

I have a Jenkins job that has a string input parameter holding the build flags for the make command. My problem is that some users forget to change the parameter values when we have a release branch. So I want to overwrite the existing string input parameter (or create a new one) that should be used if the job is a release job.
This is the statement I want to add:
If branch "release" then ${params.build_flag} = 'DEBUGSKIP=TRUE'
and the code that is not working is:
pipeline {
    agent none
    parameters {
        string(name: 'build_flag', defaultValue: 'DEBUGSKIP=TRUE', description: 'Flags to pass to build')
        If {
            allOf {
                branch "*release*"
                expression {
                    ${params.build_flag} = 'DEBUGSKIP=TRUE'
                }
            }
        } else {
            ${params.build_flag} = 'DEBUGSKIP=FALSE'
        }
    }
}
The code above explains what I want to do, but I don't know how to do it.
If you can, see if you could use the Jenkins EnvInject Plugin with your pipeline, using the supported use-case:
Injection of EnvVars defined in the "Properties Content" field of the Job Property. These EnvVars are injected into the script environment and will be accessible via the "env" Pipeline global variable (as in here).
Or write the right values to a file, and use that file's content as the "Properties Content" of a downstream job (as shown there).
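For this question, the injected "Properties Content" might be as simple as one line (the value is assumed from the question):

# EnvInject "Properties Content": plain key=value properties
build_flag=DEBUGSKIP=TRUE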

How can I parameterize Jenkinsfile jobs

I have Jenkins Pipeline jobs where the only difference between the jobs is a parameter, a single "name" value. I could even use the multibranch job name (though not what it passes as JOB_NAME, which is the BRANCH name; sadly, none of the envs look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the build has run once, you will see the "build with parameters" button on the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam
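For example, in a scripted pipeline right after the properties block above (a minimal sketch):

node {
    // params is populated once the job has run with the declared parameters
    echo "myParam is: ${params.myParam}"
}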
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but actually the docs are quite nice, so I'll copy from there. First, the myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but it can be any number of parameters of any type.
I use this approach: one of my groovy scripts takes 3 parameters (2 Strings and an int), and 15-20 Jenkinsfiles use that script via the shared library; it's perfect. The motivation is of course one of the most basic rules in any programming (not a quote, but it goes something like): if you have the "same code" in 2 different places, something is not right.
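For context, the shared library repository layout this assumes is the standard one, with the pipeline step file under vars:

(shared library repo root)
└── vars/
    └── myFancyPipeline.groovy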
There is an option, "This project is parameterized", in your pipeline job configuration. Enter a variable name and a default value if you wish. In the pipeline, access this variable with env.variable_name.
