Understanding Jenkins Groovy scripted pipeline code

I am looking at a Jenkins Scripted Pipeline tutorial here https://www.jenkins.io/blog/2019/12/02/matrix-building-with-scripted-pipeline/ and found that I need to learn some Groovy to understand this.
I have been reading through the Groovy documentation, but I am still not understanding all of this code. I will list the areas in question.
1.
List getMatrixAxes(Map matrix_axes) {
    List axes = []
    matrix_axes.each { axis, values ->
        List axisList = []
        values.each { value ->
            axisList << [(axis): value]
        }
        axes << axisList
    }
    // calculate cartesian product
    axes.combinations()*.sum()
}
In most of the Groovy documentation I have seen, lists are defined like List axes = []. The syntax above looks more like a function that would return a List. If that's what it is, I don't see any return statement inside the curly brackets, which confuses me.
2.
node(nodeLabel) {
    withEnv(axisEnv) {
        stage("Build") {
            echo nodeLabel
            sh 'echo Do Build for ${PLATFORM} - ${BROWSER}'
        }
        stage("Test") {
            echo nodeLabel
            sh 'echo Do Build for ${PLATFORM} - ${BROWSER}'
        }
    }
}
I have seen this concept of node in Groovy scripts before, sometimes with a parameter, i.e. node(nodeLabel) {...}, and sometimes without, i.e. node {...}. Is this core Groovy or something specific to Jenkins? What does it mean, and where can I find documentation about it?

getMatrixAxes is a function. In Groovy, the return statement is optional: if you don't explicitly return something, the value of the last expression evaluated in the body of a method or a closure is returned. In your case, the output of axes.combinations()*.sum() is returned, which in this example is a List. You can read more in the Groovy documentation on semantics.
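For illustration, here is a minimal plain-Groovy sketch (standalone, not Jenkins-specific; the names are mine, not from the tutorial) of both behaviors - the implicit return, and what axes.combinations()*.sum() produces for a small input:

// implicit return: the last expression evaluated is the return value
List firstTwo(List items) {
    items.take(2)   // no 'return' keyword needed
}
assert firstTwo(['a', 'b', 'c']) == ['a', 'b']

// combinations() builds the cartesian product of the nested lists, and the
// spread operator *. calls sum() on each combination, merging its maps into one
List axes = [[[PLATFORM: 'linux'], [PLATFORM: 'mac']],
             [[BROWSER: 'firefox'], [BROWSER: 'chrome']]]
assert axes.combinations()*.sum() == [
    [PLATFORM: 'linux', BROWSER: 'firefox'],
    [PLATFORM: 'mac', BROWSER: 'firefox'],
    [PLATFORM: 'linux', BROWSER: 'chrome'],
    [PLATFORM: 'mac', BROWSER: 'chrome']
]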
These constructs are specific to Jenkins; the syntax shown is Jenkins Scripted Pipeline syntax. node {...} simply means run on any agent, while node(nodeLabel) {...} means run on an agent with the label nodeLabel. Jenkins also has a newer job DSL, the Declarative syntax, which is preferred over the Scripted syntax. You can read more about both at https://www.jenkins.io/doc/book/pipeline/syntax/
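For example, a minimal scripted sketch (the 'linux' label here is just a placeholder for whatever labels your agents actually have):

node {   // run on any available agent
    stage('Anywhere') {
        echo "Running on ${env.NODE_NAME}"
    }
}

node('linux') {   // run only on an agent carrying the 'linux' label
    stage('Labeled') {
        echo 'Running on a linux-labeled agent'
    }
}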

Related

Execute a stage if environment variable contains specific substring

I have a Jenkins declarative pipeline in which I want to perform a stage only if a specific environment variable contains a specific substring (not fully equal to it, just contains it).
Does anyone have any idea how I can implement this (maybe using the when condition, if possible)?
Thanks in advance,
Alon
As you mentioned, in a declarative pipeline you can use the when directive to establish a condition under which the stage will be executed.
Among the built-in condition options like triggeredBy, branch, and tag, there is the generic expression option, which allows you to run any Groovy code and calculate the relevant Boolean value according to your needs.
So for your case, for example, you can just use the Groovy contains method to achieve what you want, something like:
pipeline {
    agent any
    stages {
        stage('Conditional Stage') {
            when {
                expression { return env.MyParameter.contains('MySubstring') }
            }
            steps {
                echo "Running the conditional stage"
            }
        }
    }
}

Retrieving an injected build parameter in Pipeline

I'm specifically working on the example below, but I'm guessing the question is a bit more general.
https://github.com/jenkinsci/poll-mailbox-trigger-plugin
It states that certain build parameters (e.g. pmt_content) are injected into the job. I tried:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                echo env.pmt_content
                echo ${pmt_content}
            }
        }
    }
}
The above methods don't seem to work for me.
How is it possible to retrieve an injected build parameter in a pipeline job?
As explained in the pipeline-plugin tutorial, you can use params to access the job's build parameters.
print params.pmt_content
print "Content is ${params.pmt_content}"

Calculated-String as the parameter to Jenkins's Groovy "STAGE"

I put this in the script section of a Jenkins UI job's configuration -
pipeline {
    agent any
    stages {
        stage('Project') {
            ...
That works, however -
pipeline {
    agent any
    stages {
        stage('Project ' + 'Josh') {
            ...
throws an error and displays a misleading message, because the parser gets confused by the constructed string inside the stage.
Moreover,
String description = 'Project' + ' Josh'
pipeline {
    agent any
    stages {
        stage(description) {
            ...
does not fail, but displays the literal string 'description' as the stage's name.
Now, if you try to load a groovy PaC file with this in it:
node {
    stage('Project' + 'Josh') {
        ...
it works without a hitch.
Is it possible that there are two different Groovy parsers employed, one for the UI and another for loaded PaCs? That would mean the UI one has this really horrible bug in it...
Ideas?
.a.
Your example has nothing to do with the Jenkins UI. You have shown two different pipeline types - a declarative one and a scripted one.
Declarative pipeline
A declarative pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // do something here
            }
        }
    }
}
introduces a more simplified, limited, and opinionated syntax. This type of pipeline sets boundaries for Groovy code execution - it is only available inside a dedicated script block, e.g.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    def name = 'Joe'
                    echo "My name is ${name}"
                }
            }
        }
    }
}
This is why the stage block expects a string literal and not a variable or an expression.
Scripted pipeline
The second example you have shown is a scripted pipeline. This kind of pipeline is more powerful compared to a declarative pipeline - the whole pipeline script is more or less a Groovy script, so you can put almost any code anywhere. A scripted pipeline starts with a node block, and you can put any Groovy code inside this block. Consider the following example:
node {
    stage("Test") {
        echo "1,2,3"
    }
    for (int i = 0; i < 5; i++) {
        stage("Stage ${i}") {
            echo "This is ${i}"
        }
    }
}
This pipeline script generates 6 stages: the "Test" stage plus "Stage 0" through "Stage 4".
As you can see, there are practically no limits on what you can put inside a node block. Declarative pipeline does not allow you to do that - its syntax is strict and you have to follow it exactly.
Differences
As a final note, I will quote the official Jenkins docs:
Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model.
Source: https://jenkins.io/doc/book/pipeline/syntax/#compare
The script you configured via the UI uses declarative pipeline syntax, while the other uses the scripted node syntax. I'd say that's probably where the other parser comes in, and I would agree that the declarative one has a bug here.

custom jenkins declarative pipeline dsl with named arguments

I've been trying for a while now to start working towards moving our freestyle projects over to pipeline. To do so, I feel it would be best to build up a shared library, since most of our builds are the same. I read through this blog post from Jenkins. I came up with the following:
// vars/buildGitWebProject.groovy
def call(body) {
    def args = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()
    pipeline {
        agent {
            node {
                label 'master'
                customWorkspace "c:\\jenkins_repos\\${args.repositoryName}\\${args.branchName}"
            }
        }
        environment {
            REPOSITORY_NAME = "${args.repositoryName}"
            BRANCH_NAME = "${args.branchName}"
            SOLUTION_NAME = "${args.solutionName}"
        }
        options {
            buildDiscarder(logRotator(numToKeepStr: '3'))
            skipStagesAfterUnstable()
            timestamps()
        }
        stages {
            stage("checkout") {
                steps {
                    script {
                        assert REPOSITORY_NAME != null : "repositoryName is null. Please include it in configuration."
                        assert BRANCH_NAME != null : "branchName is null. Please include it in configuration."
                        assert SOLUTION_NAME != null : "solutionName is null. Please include it in configuration."
                    }
                    echo "building with ${REPOSITORY_NAME}"
                    echo "building with ${BRANCH_NAME}"
                    echo "building with ${SOLUTION_NAME}"
                    checkoutFromGitWeb(args)
                }
            }
            stage('build and test') {
                steps {
                    executeRake(
                        "set_assembly_to_current_version",
                        "build_solution[$args.solutionName, Release, Any CPU]",
                        "copy_to_deployment_folder",
                        "execute_dev_dropkick"
                    )
                }
            }
        }
        post {
            always {
                sendEmail(args)
            }
        }
    }
}
in my pipeline project I configured the Pipeline to use Pipeline script and the script is as follows:
buildGitWebProject {
    repositoryName:'my-git-repo'
    branchName: 'qa'
    solutionName: 'my_csharp_solution.sln'
    emailTo='testuser@domain.com'
}
I've tried with and without the environment block, but the result ends up being the same: the value is 'null' for each of those arguments. Oddly enough, the script portion of the code doesn't make the build fail either... so I'm not sure what's wrong with that. The echo parts show null as well. What am I doing wrong?
Your Closure body is not behaving the way you expect/believe it should.
At the beginning of your method you have:
def call(body) {
    def args = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()
Your call body is:
buildGitWebProject {
    repositoryName:'my-git-repo'
    branchName: 'qa'
    solutionName: 'my_csharp_solution.sln'
    emailTo='testuser@domain.com'
}
Let's take a stab at debugging this.
If you add a println(args) after the body() in your call(body) method you will see something like this:
[emailTo:testuser@domain.com]
But, only one of the values got set. What is going on?
There are a few things to understand here:
What does setting a delegate of a Closure do?
Why does repositoryName:'my-git-repo' not do anything?
Why does emailTo='testuser@domain.com' set the property in the map?
What does setting a delegate of a Closure do?
This one is mostly straightforward, but I think it helps to understand. Closure is powerful and is the Swiss Army knife of Groovy. The delegate essentially sets what the this is in the body of the Closure. You are also using the resolveStrategy of Closure.DELEGATE_FIRST, so methods and properties from the delegate are checked first, and then from the enclosing scope (owner) - see the Javadoc for an in-depth explanation. If you call methods like size(), put(...), entrySet(), etc., they are all first called on the delegate. The same is true for property access.
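As a tiny standalone Groovy sketch of the delegate mechanics (runnable outside Jenkins):

def args = [:]
def body = {
    put('greeting', 'hello')   // resolved against the delegate first, so this is args.put(...)
}
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = args
body()
assert args == [greeting: 'hello']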
Why does repositoryName:'my-git-repo' not do anything?
This may appear to be a Groovy map literal, but it is not. These are actually labeled statements. If you surrounded it with square brackets instead, like [repositoryName:'my-git-repo'], then it would be a map literal - but all you would be doing there is creating a map literal and discarding it. We want to make sure that these values are actually consumed in the Closure.
Why does emailTo='testuser@domain.com' set the property in the map?
This is using the map property notation feature of Groovy. As mentioned earlier, you have set the delegate of the Closure to the args map (def args = [:]). You also set the resolveStrategy to Closure.DELEGATE_FIRST. This makes your emailTo='testuser@domain.com' resolve against args, which is why the emailTo key is set to that value. It is equivalent to calling args.emailTo='testuser@domain.com'.
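The three points above can be reproduced in a small standalone Groovy sketch:

def args = [:]
def body = {
    repositoryName: 'my-git-repo'    // labeled statement: evaluates the string and discards it
    emailTo = 'testuser@domain.com'  // map property notation: stores the value in the delegate
}
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = args
body()
println(args)   // prints [emailTo:testuser@domain.com]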
So, how do you fix this?
If you want to keep your Closure syntax approach, you could change the body of your call to anything that essentially stores values in the delegated args map:
buildGitWebProject {
    repositoryName = 'my-git-repo'
    branchName = 'qa'
    solutionName = 'my_csharp_solution.sln'
    emailTo = 'testuser@domain.com'
}
buildGitWebProject {
    put('repositoryName', 'my-git-repo')
    put('branchName', 'qa')
    put('solutionName', 'my_csharp_solution.sln')
    put('emailTo', 'testuser@domain.com')
}
buildGitWebProject {
    delegate.repositoryName = 'my-git-repo'
    delegate.branchName = 'qa'
    delegate.solutionName = 'my_csharp_solution.sln'
    delegate.emailTo = 'testuser@domain.com'
}
buildGitWebProject {
    // example of Map literal where the square brackets are not needed
    putAll(
        repositoryName: 'my-git-repo',
        branchName: 'qa',
        solutionName: 'my_csharp_solution.sln',
        emailTo: 'testuser@domain.com'
    )
}
Another way would be to have your call take in the Map as an argument and remove your Closure.
def call(Map args) {
    // no more args and delegates needed right now
}

buildGitWebProject(
    repositoryName: 'my-git-repo',
    branchName: 'qa',
    solutionName: 'my_csharp_solution.sln',
    emailTo: 'testuser@domain.com'
)
There are also some other ways you could model your API, it will depend on the UX you want to provide.
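For example, one hypothetical variant (illustrative only, not from the original post) is a Map parameter with defaults and a required-key check:

// vars/buildGitWebProject.groovy - sketch of an alternative API shape
def call(Map args = [:]) {
    // copy caller-supplied entries over the defaults (Map << Map merges in place)
    args = [branchName: 'master'] << args
    assert args.repositoryName : 'repositoryName is required'
    // ... the same pipeline block as before, reading values from args ...
}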
Side note around declarative pipelines in shared library code:
It's worth keeping in mind the limitations of declarative pipelines in shared libraries. It looks like you are already doing it in vars, but I'm just adding it here for completeness. At the very end of the documentation it is stated:
Only entire pipelines can be defined in shared libraries as of this time. This can only be done in vars/*.groovy, and only in a call method. Only one Declarative Pipeline can be executed in a single build, and if you attempt to execute a second one, your build will fail as a result.

How can I parameterize Jenkinsfile jobs

I have Jenkins Pipeline jobs where the only difference between the jobs is a single parameter, a "name" value. I could even use the multibranch job name (though that's not what it passes as JOB_NAME, which is the BRANCH name; sadly none of the env variables look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the build has run once, you will see the "Build with Parameters" button on the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam.
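Putting it together, a minimal scripted-pipeline sketch (myParam is the parameter name from the snippet above):

properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])

node {
    stage('Use the parameter') {
        echo "myParam is: ${params.myParam}"
    }
}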
Basically, you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but the docs are actually quite nice, so I'll adapt from there. First, the myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but it can be any number of parameters of any type.
I use this approach and have one Groovy script that takes 3 parameters (2 Strings and an int), and 15-20 Jenkinsfiles that use that script via the shared library, and it's perfect. The motivation is, of course, one of the most basic rules in programming (not a quote, but it goes something like): if you have the same code in 2 different places, something is not right.
There is an option This project is parameterized in your pipeline job configuration. Write the variable name and a default value if you wish. In the pipeline, access this variable with env.variable_name.
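For instance, a minimal sketch assuming you added a string parameter named variable_name under This project is parameterized:

node {
    stage('Read job parameter') {
        // string build parameters are also exposed as environment variables
        echo "variable_name is: ${env.variable_name}"
    }
}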
