Custom Jenkins declarative pipeline DSL with named arguments

I've been trying for a while now to start moving our freestyle projects over to Pipeline. To do so, I feel it would be best to build up a shared library, since most of our builds are the same. I read through this blog post from Jenkins and came up with the following:
// vars/buildGitWebProject.groovy
def call(body) {
    def args = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()
    pipeline {
        agent {
            node {
                label 'master'
                customWorkspace "c:\\jenkins_repos\\${args.repositoryName}\\${args.branchName}"
            }
        }
        environment {
            REPOSITORY_NAME = "${args.repositoryName}"
            BRANCH_NAME = "${args.branchName}"
            SOLUTION_NAME = "${args.solutionName}"
        }
        options {
            buildDiscarder(logRotator(numToKeepStr: '3'))
            skipStagesAfterUnstable()
            timestamps()
        }
        stages {
            stage("checkout") {
                steps {
                    script {
                        assert REPOSITORY_NAME != null : "repositoryName is null. Please include it in configuration."
                        assert BRANCH_NAME != null : "branchName is null. Please include it in configuration."
                        assert SOLUTION_NAME != null : "solutionName is null. Please include it in configuration."
                    }
                    echo "building with ${REPOSITORY_NAME}"
                    echo "building with ${BRANCH_NAME}"
                    echo "building with ${SOLUTION_NAME}"
                    checkoutFromGitWeb(args)
                }
            }
            stage('build and test') {
                steps {
                    executeRake(
                        "set_assembly_to_current_version",
                        "build_solution[$args.solutionName, Release, Any CPU]",
                        "copy_to_deployment_folder",
                        "execute_dev_dropkick"
                    )
                }
            }
        }
        post {
            always {
                sendEmail(args)
            }
        }
    }
}
In my Pipeline project I configured the job to use "Pipeline script", and the script is as follows:
buildGitWebProject {
    repositoryName: 'my-git-repo'
    branchName: 'qa'
    solutionName: 'my_csharp_solution.sln'
    emailTo = 'testuser@domain.com'
}
I've tried with and without the environment block, but the result ends up being the same: the value is 'null' for each of those arguments. Oddly enough, the script portion of the code doesn't make the build fail either, so I'm not sure what's wrong with that. The echo steps show null as well. What am I doing wrong?

Your Closure body is not behaving the way you expect/believe it should.
At the beginning of your method you have:
def call(body) {
    def args = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()
Your call body is:
buildGitWebProject {
    repositoryName: 'my-git-repo'
    branchName: 'qa'
    solutionName: 'my_csharp_solution.sln'
    emailTo = 'testuser@domain.com'
}
Let's take a stab at debugging this.
If you add a println(args) after the body() in your call(body) method you will see something like this:
[emailTo:testuser@domain.com]
But, only one of the values got set. What is going on?
There are a few things to understand here:
What does setting a delegate of a Closure do?
Why does repositoryName:'my-git-repo' not do anything?
Why does emailTo='testuser@domain.com' set the property in the map?
What does setting a delegate of a Closure do?
This one is mostly straightforward, but I think it helps to understand. Closure is powerful and is the Swiss Army knife of Groovy. The delegate essentially sets what the this is in the body of the Closure. You are also using the resolveStrategy of Closure.DELEGATE_FIRST, so methods and properties from the delegate are checked first, and then from the enclosing scope (owner) - see the Javadoc for an in-depth explanation. If you call methods like size(), put(...), entrySet(), etc., they are all first called on the delegate. The same is true for property access.
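As a small illustration (plain Groovy, runnable outside Jenkins), DELEGATE_FIRST sends both method calls and property access to the delegate before the owner:

    def args = [:]
    def c = { put('answer', 42); size() }   // both calls resolve against the delegate (a Map) first
    c.resolveStrategy = Closure.DELEGATE_FIRST
    c.delegate = args
    assert c() == 1 && args.answer == 42    // the closure returns its last expression, size()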
Why does repositoryName:'my-git-repo' not do anything?
This may appear to be a Groovy map literal, but it is not. These are actually labeled statements. If you surrounded it with square brackets instead, like [repositoryName:'my-git-repo'], then it would be a map literal. But that is all you would be doing there: creating a map literal that is immediately discarded. We want to make sure that these objects are consumed in the Closure.
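You can reproduce the println(args) observation from above in a plain Groovy shell:

    def args = [:]
    def body = {
        repositoryName: 'my-git-repo'     // a labeled statement: evaluates the string, then discards it
        emailTo = 'testuser@domain.com'   // property assignment, resolved against the delegate
    }
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()
    assert args == [emailTo: 'testuser@domain.com']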
Why does emailTo='testuser@domain.com' set the property in the map?
This is using the map property notation feature of Groovy. As mentioned earlier, you have set the delegate of the Closure to args, which is a Map, and you set the resolveStrategy to Closure.DELEGATE_FIRST. This makes emailTo='testuser@domain.com' resolve against args, which is why the emailTo key is set to the value. It is equivalent to calling args.emailTo='testuser@domain.com'.
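For illustration, map property notation on its own:

    def args = [:]
    args.emailTo = 'testuser@domain.com'   // same as args.put('emailTo', 'testuser@domain.com')
    assert args['emailTo'] == 'testuser@domain.com'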
So, how do you fix this?
If you want to keep your Closure syntax approach, you could change the body of your call to anything that essentially stores values in the delegated args map:
buildGitWebProject {
    repositoryName = 'my-git-repo'
    branchName = 'qa'
    solutionName = 'my_csharp_solution.sln'
    emailTo = 'testuser@domain.com'
}

buildGitWebProject {
    put('repositoryName', 'my-git-repo')
    put('branchName', 'qa')
    put('solutionName', 'my_csharp_solution.sln')
    put('emailTo', 'testuser@domain.com')
}

buildGitWebProject {
    delegate.repositoryName = 'my-git-repo'
    delegate.branchName = 'qa'
    delegate.solutionName = 'my_csharp_solution.sln'
    delegate.emailTo = 'testuser@domain.com'
}

buildGitWebProject {
    // example of a Map literal where the square brackets are not needed
    putAll(
        repositoryName: 'my-git-repo',
        branchName: 'qa',
        solutionName: 'my_csharp_solution.sln',
        emailTo: 'testuser@domain.com'
    )
}
Another way would be to have your call take in a Map as an argument and remove the Closure entirely:
def call(Map args) {
    // no more delegate wiring needed
}

buildGitWebProject(
    repositoryName: 'my-git-repo',
    branchName: 'qa',
    solutionName: 'my_csharp_solution.sln',
    emailTo: 'testuser@domain.com'
)
There are also other ways you could model your API; it will depend on the UX you want to provide. One possible shape, with defaults and validation, is sketched below.
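For example, a hypothetical Map-based variant with defaults and up-front validation (the default branch name here is an assumption, not from the original post):

    // vars/buildGitWebProject.groovy -- sketch of a Map-based variant
    def call(Map args = [:]) {
        args = [branchName: 'master'] + args   // defaults first, so the caller's entries win
        ['repositoryName', 'solutionName'].each { key ->
            assert args[key] : "${key} is missing. Please include it in the configuration."
        }
        // ... same pipeline as before, reading values from args ...
    }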
Side note around declarative pipelines in shared library code:
It's worth keeping in mind the limitations of Declarative Pipelines in shared libraries. It looks like you are already doing this in vars, but I'm adding it here for completeness. At the very end of the documentation it is stated:
Only entire pipelines can be defined in shared libraries as of this time. This can only be done in vars/*.groovy, and only in a call method. Only one Declarative Pipeline can be executed in a single build, and if you attempt to execute a second one, your build will fail as a result.

Related

Understanding Jenkins Groovy scripted pipeline code

I am looking at a Jenkins Scripted Pipeline tutorial here: https://www.jenkins.io/blog/2019/12/02/matrix-building-with-scripted-pipeline/ and found that I need to learn some Groovy to understand it.
I have been reading through the Groovy documentation, but still am not understanding all of this code. I will list the areas in question.
1.
List getMatrixAxes(Map matrix_axes) {
    List axes = []
    matrix_axes.each { axis, values ->
        List axisList = []
        values.each { value ->
            axisList << [(axis): value]
        }
        axes << axisList
    }
    // calculate cartesian product
    axes.combinations()*.sum()
}
In most of the Groovy documentation I have seen, lists are defined like List axes = []. The syntax above looks more like a function that returns a List. If that is what this is, I don't see any return statement inside the curly brackets, which confuses me.
2.
node(nodeLabel) {
    withEnv(axisEnv) {
        stage("Build") {
            echo nodeLabel
            sh 'echo Do Build for ${PLATFORM} - ${BROWSER}'
        }
        stage("Test") {
            echo nodeLabel
            sh 'echo Do Build for ${PLATFORM} - ${BROWSER}'
        }
    }
}
I have seen this concept of node in Groovy scripts before, sometimes with a parameter, i.e. node(nodeLabel) {...}, and sometimes without, i.e. node {...}. Is this core Groovy or something specific to Jenkins? What does it mean, and where can I find documentation about it?
getMatrixAxes is a function. In Groovy, the return statement is optional: if you don't explicitly return something from a function, the last expression evaluated in the body of a method or closure is returned. In your case, the result of axes.combinations()*.sum() is returned, which here is a List. You can read more about it here.
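To see both the implicit return and the cartesian product at work, you can run the same logic in a plain Groovy shell with a small axes map (the values here are illustrative):

    def axes = [PLATFORM: ['linux', 'windows'], BROWSER: ['chrome', 'firefox']]
    List lists = axes.collect { axis, values -> values.collect { [(axis): it] } }
    // combinations() builds the cartesian product of the inner lists;
    // *.sum() merges each resulting pair of single-entry maps into one map
    def matrix = lists.combinations()*.sum()
    assert matrix.size() == 4
    assert matrix.contains([PLATFORM: 'linux', BROWSER: 'chrome'])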
These constructs are specific to Jenkins; the syntax shown is Jenkins Scripted Pipeline syntax. node {...} simply means run on any agent, while node(nodeLabel) {...} means run on an agent with the label nodeLabel. Jenkins also has a newer Declarative Pipeline syntax, which is generally preferred over the Scripted syntax. You can read more about both here.
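In Scripted Pipeline terms:

    node { sh 'hostname' }            // run on any available agent
    node('linux') { sh 'hostname' }   // run only on an agent labeled 'linux'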

Jenkins pipelineJob DSL not interpreting variables in pipeline script

I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences in only a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed interpolated to match the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the Job DSL from a pipeline step:
def projectName = "myProject"
def envs = ['DEV','QA','UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
    additionalParameters: [
        project: projectName,
        environments: envs,
        repository: repositoryURL
    ],
    removedJobAction: 'DELETE',
    removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed on to the pipeline script read from the workspace: the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpolation of the variables.
I've made a number of attempts at getting the string to interpolate, including many variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:

    script("${readFileFromWorkspace('pipeline.groovy')}")

Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just loop over the script text with string.replaceAll(), but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer (having the file's string contents implicitly interpolated during job creation), but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables in your Jenkinsfile script that should be expanded at runtime, like \${VARIABLE}, so that they are not expanded at the time you build the job. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
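For instance, a pipeline.groovy along these lines (a sketch; the escaping is the point) would have repository filled in at job creation, while env.REPO stays a runtime reference:

    pipeline {
        agent any
        environment {
            REPO = '${repository}'   // expanded by SimpleTemplateEngine at job creation
        }
        stages {
            stage('Deploy') {
                steps {
                    echo "Deploying \${env.REPO}..."   // escaped: left as ${env.REPO} for runtime
                }
            }
        }
    }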
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline. This is a bit limited, because environment variables are strings, but it should work for basic cases.
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        // note the need to split the comma-separated string
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
    }
}
You need to use the complete job name as a variable without the quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}

How to pass a variable name, but not its value, into a vars function as an argument?

I'm trying to divide a pipeline. Most of the parameters are passed successfully, but those containing variables are resolved before I need them.
Jenkins ver. 2.164.1
Jenkinsfile content:
stage('prebuild') {
    steps {
        script {
            VERSION = "temporary-value"
            POSTBUILDACTION = "make.exe \\some\\path\\file_${VERSION}"
        }
    }
}
stage('build') {
    steps {
        script {
            build(POSTBUILDACTION)
        }
    }
}
build.groovy content:
def call(String POSTBUILDACTION) {
    ...
    checkout somefile
    VERSION = readFile(somefile)
    bat "${POSTBUILDACTION}"
}
Here I expected that the version would be taken from the redefined VERSION variable, but POSTBUILDACTION was passed into the function as a plain string, so it is executed as is ("make.exe \some\path\file_temporary-value"). Given that somefile contains only one number, for example "5", the command I'd like to end up with is:

    make.exe \some\path\file_5

But what I actually get is:

    make.exe \some\path\file_temporary-value
Or, if I try to pass \${VERSION}, like:

    POSTBUILDACTION = "make.exe \\some\\path\\file_\${VERSION}"

it is transferred as is:

    make.exe \some\path\file_${VERSION}
I've checked the class of POSTBUILDACTION: in the prebuild stage it is "class org.codehaus.groovy.runtime.GStringImpl", but after being passed through to the build stage it has become a string: "class java.lang.String".
So how do I pass an argument that contains a variable, but not its value, into a function? Or, alternatively, how do I "breathe life" into a dry string like

    'make.exe \\some\\path\\file_${VERSION}'

so that the variables get resolved?
Option 1 - lazy evaluation (@NonCPS)
You can use a GString with lazy evaluation, but since Jenkins doesn't serialize lazy GStrings, you'll have to return it from a @NonCPS method, like so:
@NonCPS
def getPostBuildAction() {
    "make.exe \\some\\path\\file_${ -> VERSION }"
}

stage('prebuild') {
    ...
}
Then you set POSTBUILDACTION = getPostBuildAction() and can use POSTBUILDACTION as you wanted. Be aware that the object you have here is a groovy.lang.GString and not a String, so you'll want to change your parameter's declared class (or simply use def).
Option 2 - use a closure for every call
You can use an eager GString inside a method (or closure) that you re-evaluate on every call:

    String getPostBuildAction() {
        "make.exe \\some\\path\\file_$VERSION"
    }

But here you'll have to call getPostBuildAction() every time you want a fresh reading of VERSION, so you'll have to replace each use of POSTBUILDACTION with a call to it.
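A minimal sketch of the difference between the two options (plain Groovy, outside Jenkins):

    def VERSION = 'temporary'
    def eager = "file_${VERSION}"        // VERSION's value is captured immediately
    def lazy = "file_${ -> VERSION }"    // the closure is evaluated at toString() time
    VERSION = '5'
    assert eager.toString() == 'file_temporary'
    assert lazy.toString() == 'file_5'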

Scripted pipeline: wrap stage

I would like to be able to wrap a 'stage' in Jenkins, so I can execute custom code at the start and end of a stage; something like:
myStage('foo') {
}
I thought I could do this by using metaClass:
//Wrap stages to automatically trace
def originalMethod = this.metaClass.getMetaMethod("stage", null)
this.metaClass.myStage = { args ->
println "Beginning of stage"
println "Args: " + args
def result = originalMethod.invoke(delegate, args)
println "End of stage"
return result
}
But it appears the Groovy script itself is a Binding, which doesn't have a metaClass:
groovy.lang.MissingPropertyException: No such property: metaClass for class: groovy.lang.Binding
I'm still learning how Groovy and Jenkins Pipeline work, so perhaps I'm just missing something.
I am not familiar with the metaclass concept, but I think a simple solution to your problem is to define the wrapped stage as a function.
Here's an example of how you'd define such a function:
def wrappedStage(name, Closure closure) {
    stage(name) {
        echo "Beginning of stage"
        def result = closure.call()
        echo "End of stage"
        return result
    }
}
and this is how you would call it:
wrappedStage('myStage') {
    echo 'hi'
}
The return value of wrappedStage only makes sense when the body of your stage actually returns something. For example, if you call another job, e.g.:

    wrappedStage('myStage') {
        build job: 'myJob'
    }

you will get back an org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper, which you can use to access info about the job you ran, like its result, variables, etc. If you print something to the console, e.g.:

    wrappedStage('myStage') {
        echo 'hi'
    }

you will get back null.
Note that in my example I am not printing args, because as I understand stage, it only takes two arguments: the stage name and the closure it should run. The name of the stage will already be printed in the log, and I don't know how much value you'd get from printing the code you're about to execute, but if that's something you want to do, take a look at this.
If you have a more specific case in mind for what you'd want to wrap, you can add more params to the wrapper and print all the information you want through those extra parameters.
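For instance, a hypothetical variant that takes an extra tag and times the stage body (the names and the timing logic are illustrative, not from the original answer):

    // Sketch: a stage wrapper with one extra parameter
    def wrappedStage(String name, String tag, Closure body) {
        stage(name) {
            def start = System.currentTimeMillis()
            echo "[${tag}] entering stage ${name}"
            def result = body.call()
            echo "[${tag}] leaving stage ${name} after ${System.currentTimeMillis() - start} ms"
            return result
        }
    }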

Providing different values in Jenkins DSL configure block to create different jobs

I need my builds to time out at a specific time (a deadline), but currently the Jenkins DSL only supports the "absolute" strategy. So I tried to write a configure block, but I couldn't create jobs with different deadline values.
def settings = [
    [
        jobname: 'job1',
        ddl: '13:10:00'
    ],
    [
        jobname: 'job2',
        ddl: '14:05:00'
    ]
]

for (i in settings) {
    job(i.jobname) {
        configure {
            it / buildWrappers << 'hudson.plugins.build__timeout.BuildTimeoutWrapper' {
                strategy(class: 'hudson.plugins.build_timeout.impl.DeadlineTimeOutStrategy') {
                    deadlineTime(i.ddl)
                    deadlineToleranceInMinutes(1)
                }
            }
        }
        steps {
            // some stuff to do here
        }
    }
}
The above script gives me two jobs with the same deadline time (14:05:00):
<project>
    <actions></actions>
    <description></description>
    <keepDependencies>false</keepDependencies>
    <properties></properties>
    <scm class='hudson.scm.NullSCM'></scm>
    <canRoam>true</canRoam>
    <disabled>false</disabled>
    <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
    <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
    <triggers></triggers>
    <concurrentBuild>false</concurrentBuild>
    <builders></builders>
    <publishers></publishers>
    <buildWrappers>
        <hudson.plugins.build__timeout.BuildTimeoutWrapper>
            <strategy class='hudson.plugins.build_timeout.impl.DeadlineTimeOutStrategy'>
                <deadlineTime>14:05:00</deadlineTime>
                <deadlineToleranceInMinutes>1</deadlineToleranceInMinutes>
            </strategy>
        </hudson.plugins.build__timeout.BuildTimeoutWrapper>
    </buildWrappers>
</project>
I found this question but still couldn't get it to work.
You can use the Automatically Generated DSL. Its documentation notes:
The generated DSL is only supported when running in Jenkins, e.g. it is not available when running from the command line or in the Playground. Use The Configure Block to generate custom config elements when not running in Jenkins.
The generated DSL will not work for all plugins, e.g. if a plugin does not use the @DataBoundConstructor and @DataBoundSetter annotations to declare parameters. In that case The Configure Block can be used to generate the config XML.
Fortunately, the Build Timeout plugin supports @DataBoundConstructor:
@DataBoundConstructor
public DeadlineTimeOutStrategy(String deadlineTime, int deadlineToleranceInMinutes) {
    this.deadlineTime = deadlineTime;
    this.deadlineToleranceInMinutes = deadlineToleranceInMinutes <= MINIMUM_DEADLINE_TOLERANCE_IN_MINUTES
            ? MINIMUM_DEADLINE_TOLERANCE_IN_MINUTES
            : deadlineToleranceInMinutes;
}
So you should be able to do something like:

def settings = [
    [
        jobname: 'job1',
        ddl: '13:10:00'
    ],
    [
        jobname: 'job2',
        ddl: '14:05:00'
    ]
]

for (i in settings) {
    job(i.jobname) {
        wrappers {
            buildTimeoutWrapper {
                strategy {
                    deadlineTimeOutStrategy {
                        deadlineTime(i.ddl)
                        deadlineToleranceInMinutes(1)
                    }
                }
                timeoutEnvVar('WHAT_IS_THIS_FOR')
            }
        }
        steps {
            // some stuff to do here
        }
    }
}
There is an extra strategy layer in buildTimeoutWrapper which houses the different strategies. When using nested classes like this, you need to lowercase the first letter of the class name, which is why DeadlineTimeOutStrategy appears as deadlineTimeOutStrategy above.
EDIT
You can see this in your own Jenkins install by using the 'Job DSL API Reference' link on a job's page:
http://<your jenkins>/plugin/job-dsl/api-viewer/index.html#method/javaposse.jobdsl.dsl.helpers.wrapper.WrapperContext.buildTimeoutWrapper
I saw very similar behaviour to this in a Jenkins DSL Groovy script. I was looping over a List of Maps in a for-each loop, and I also had a configure closure like in your example. The behaviour I saw was that the Map object in the configure closure seemed to be the same for all iterations of the loop, similar to how you are seeing the same deadline time. I was referencing the same Map value both inside and outside the configure closure, and the DSL was outputting different values: outside the configure closure it was as expected, but inside it was the same value for all iterations.
My solution was to assign the Map value to a variable and reference that both inside and outside the configure closure; when I did that, the value was consistent.
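The underlying Groovy behaviour can be reproduced outside Jenkins; assuming the configure block defers evaluation of its closure, this is the classic loop-variable capture gotcha (a sketch):

    def closures = []
    for (i in ['13:10:00', '14:05:00']) {
        closures << { i }          // every closure captures the same loop variable
    }
    assert closures*.call() == ['14:05:00', '14:05:00']

    closures = []
    for (i in ['13:10:00', '14:05:00']) {
        def deadlineValue = i      // a fresh local per iteration
        closures << { deadlineValue }
    }
    assert closures*.call() == ['13:10:00', '14:05:00']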
For your example (just adding a deadlineValue variable, and setting it outside the configure closure):
for (i in settings) {
    def deadlineValue = i.ddl
    job(i.jobname) {
        configure {
            it / buildWrappers << 'hudson.plugins.build__timeout.BuildTimeoutWrapper' {
                strategy(class: 'hudson.plugins.build_timeout.impl.DeadlineTimeOutStrategy') {
                    deadlineTime(deadlineValue)
                    deadlineToleranceInMinutes(1)
                }
            }
        }
        steps {
            // some stuff to do here
        }
    }
}
I would not have expected this to make a difference, but it worked for me.
However, I agree with the other answer that it is better to use buildTimeoutWrapper where possible, so you can avoid the configure block.
See <Your Jenkins URL>/plugin/job-dsl/api-viewer/index.html#path/javaposse.jobdsl.dsl.DslFactory.job-wrappers-buildTimeoutWrapper-strategy-deadlineTimeOutStrategy for more details; you'll obviously need the Build Timeout plugin installed.
In my case I still needed the configure closure for the MultiJob plugin, where some parameters were still not configurable via the DSL API.
