How to pass and invoke method utilities in a Jenkins pipeline template?

I have this template:
def call(body) {
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()
    pipeline {
        agent any
        ....
        stages {
            stage('My stages') {
                steps {
                    script {
                        pipelineParams.stagesParams.each { k, v ->
                            stage("$k") {
                                $v
                            }
                        }
                    }
                }
            }
        }
        post { ... }
    }
}
Then I use the template in a pipeline:
@Library('pipeline-library') _
pipelineTemplateBasic {
    stagesParams = [
        'First stage': sh "do something...",
        'Second stage': myCustomCommand("foo","bar")
    ]
}
In stagesParams I pass the invocations of my commands (sh and myCustomCommand), and they land in the template as $v. How can I then execute them? Some sort of invokeMethod($v)?
At the moment I am getting this error:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
The problem with using node is that it doesn't work in situations like parallel:
parallelStages = [:]
v.each { k2, v2 ->
    parallelStages["$k2"] = {
        // node {
        stage("$k2") {
            notifySlackStartStage()
            $v2
            checkLog()
        }
        // }
    }
}

If you want to execute the sh step provided within a map, you need to store the map values as closures, e.g.
@Library('pipeline-library') _
pipelineTemplateBasic {
    stagesParams = [
        'First stage': {
            sh "do something..."
        },
        'Second stage': {
            myCustomCommand("foo","bar")
        }
    ]
}
Then in the script part of your pipeline stage you will need to execute the closure, but also set the delegate and delegation strategy to the workflow script, e.g.
script {
    pipelineParams.stagesParams.each { k, v ->
        stage("$k") {
            v.resolveStrategy = Closure.DELEGATE_FIRST
            v.delegate = this
            v.call()
        }
    }
}
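For the parallel case from the question, the same idea should apply: invoke each stored closure inside its parallel branch instead of interpolating it as $v2. Below is a minimal sketch reusing the question's own helpers (notifySlackStartStage, checkLog); whether a surrounding node is needed still depends on which steps the closures run.
script {
    def parallelStages = [:]
    pipelineParams.stagesParams.each { k, v ->
        parallelStages["$k"] = {
            stage("$k") {
                notifySlackStartStage()
                v.resolveStrategy = Closure.DELEGATE_FIRST
                v.delegate = this
                v.call()
                checkLog()
            }
        }
    }
    parallel parallelStages
}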

Related

Jenkins Job DSL Parameter from list

I'm trying to create a DSL job that will create jobs from a list of maps, iterating through them like the following:
def git_branch = "origin/awesomebranch"
def credential_id = "awesomerepocreds"
def jobs = [
    [
        title: "AwesomePipeline",
        description: "This pipeline is awesome.",
        directory: "awesomepath",
        repo: "ssh://git@bitbucket.XXX.XX.XX:XXXX/repo/repo.git"
    ]
]
jobs.each { i ->
    pipelineJob(i.title) {
        description("${i.description}\n\n__branch__: ${git_branch}")
        parameters {
            stringParam('branch', defaultValue='origin/develop', description='Branch to build')
        }
        definition {
            cpsScm {
                scm {
                    git {
                        branch('$branch')
                        remote {
                            url(i.repo)
                            credentials(credential_id)
                        }
                    }
                }
                scriptPath("jenkins/${i.directory}/Jenkinsfile")
            }
        }
    }
}
For jobs without parameters this works great, but I don't know how to pass a list into a job's map that can then be used by the parameters block, something like:
....
def jobs = [
    [
        title: "AwesomePipeline",
        description: "This pipeline is awesome.",
        directory: "awesomepath",
        repo: "ssh://git@bitbucket.XXX.XX.XX:XXXX/repo/repo.git",
        params: [
            stringParam('branch', defaultValue='origin/develop', description='Branch to build'),
            stringParam('sawesomeparam', defaultValue='awesomevalue', description='awesomething')
        ]
    ]
]
...
These might then be used in some sort of each loop, but I'm not sure how to formulate this properly.
....
jobs.each { i ->
    pipelineJob(i.title) {
        description("${i.description}\n\n__branch__: ${git_branch}")
        parameters {
            i.params.each { p ->
                p
            }
        }
....
Thanks in advance
This is a relatively old question, and I came across it a few days ago. This is how I ended up solving it.
First we need a class that has a static context method, which (I think) gets called by Groovy. This context is then reused later to actually add the parameters. I suppose you should use the additionalClasspath feature of the Job DSL plugin to put the Params class into a separate file so you can re-use it.
import javaposse.jobdsl.dsl.helpers.BuildParametersContext

class Params {
    protected static Closure context(@DelegatesTo(BuildParametersContext) Closure params) {
        params.resolveStrategy = Closure.DELEGATE_FIRST
        return params
    }

    static Closure extend(Closure params) {
        return context(params)
    }
}
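As a hedged illustration of the additionalClasspath idea above (the targets glob and the source folder path are placeholders, not from the original): if Params lives under a source folder in the seed repository, the Job DSL pipeline step can put that folder on the classpath so the job scripts can import the class.
// Hypothetical seed-job pipeline step; adjust 'jobs/**/*.groovy' and 'src/main/groovy' to your layout.
jobDsl targets: 'jobs/**/*.groovy',
       additionalClasspath: 'src/main/groovy'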
Then you can set up your job using a params attribute that contains a list of closures.
def jobs = [
    [
        name: "myjob",
        params: [
            {
                stringParam('ADDITIONAL_PARAM', 'default', 'description')
            }
        ]
    ]
]
Lastly, you can call the parameters property using the static extend method of the above class.
jobs.each { j ->
    pipelineJob(j.name) {
        // here go all the default parameters
        parameters {
            string {
                name('SOMEPARAM')
                defaultValue('')
                description('')
                trim(true)
            }
        }
        j.params.each { p ->
            parameters Params.extend(p)
        }
        // additional pipelineJob properties...
    }
}
The reason I made the params attribute of the map a list is that this way you can have additional methods that return a Closure containing additional params. For example:
class DefaultParams {
    static Closure someParam(String valueDefault) {
        return {
            string {
                name('SOME_EXTRA_PARAM')
                defaultValue(valueDefault)
                description("It's a description bonanza")
                trim(true)
            }
        }
    }
}
def myjobs = [
    [
        name: "myjob",
        params: [
            DefaultParams.someParam('default'),
            {
                stringParam('ADDITIONAL_PARAM', 'default', 'description')
            }
        ]
    ]
]
You should be able to use all the different syntaxes shown on the Job DSL API page.
The simplest method is:
jobs.each { i ->
    pipelineJob(i.title) {
        description("${i.description}\n\n__branch__: ${git_branch}")
        parameters {
            i.params.each { p ->
                p.delegate = owner.delegate
                p()
            }
        }
    }
}
where owner.delegate is the delegate of parameters.
Or better, the following, which does not create an empty parameters block:
jobs.each { i ->
    pipelineJob(i.title) {
        description("${i.description}\n\n__branch__: ${git_branch}")
        i.params.each { p ->
            parameters {
                p.delegate = getDelegate()
                p()
            }
        }
    }
}

Jenkins Pipeline Conditional Environmental Variables

I have a set of static environment variables in the environment directive of a declarative pipeline. These values are available to every stage in the pipeline.
I want the values to change based on an arbitrary condition.
Is there a way to do this?
pipeline {
    agent any
    environment {
        if ${params.condition} {
            var1 = '123'
            var2 = abc
        } else {
            var1 = '456'
            var2 = def
        }
    }
    stages {
        stage('One') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
        stage('Two') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
    }
}
Looking for the same thing, I found a nice answer in another question:
Basically, it is to use the ternary conditional operator:
pipeline {
    agent any
    environment {
        var1 = "${params.condition == true ? "123" : "456"}"
        var2 = "${params.condition == true ? abc : def}"
    }
}
Note: keep in mind that, in the way you wrote your question (and I wrote my answer), the numbers are Strings and the letters are variables.
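If abc and def are actually meant to be string literals rather than variables, a hedged variant is simply to quote them (the names are just the ones from the question):
environment {
    var1 = "${params.condition == true ? '123' : '456'}"
    var2 = "${params.condition == true ? 'abc' : 'def'}"
}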
I would suggest you create an 'Environment' stage and declare your variables according to the condition you want, something like below:
pipeline {
    agent any
    environment {
        // Declare variables which will remain the same throughout the build
    }
    stages {
        stage('Environment') {
            agent { node { label 'master' } }
            steps {
                script {
                    // Write the condition for the variables which need to change
                    if (params.condition) {
                        env.var1 = '123'
                        env.var2 = abc
                    } else {
                        env.var1 = '456'
                        env.var2 = def
                    }
                    sh "printenv"
                }
            }
        }
        stage('One') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
        stage('Two') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
    }
}
Suppose we want to use optional params for a downstream job when it is called from an upstream job, and default params when the downstream job is run by itself.
But for some reason we don't want "holder" params with default values in the downstream job.
This can be done via a Groovy function:
Upstream Jenkinsfile - the param CREDENTIALS_ID is passed downstream:
pipeline {
    stage {
        steps {
            build job: "my_downsteam_job_name",
                parameters: [string(name: 'CREDENTIALS_ID', value: 'other_credentials_id')]
        }
    }
}
Downstream Jenkinsfile - if the param CREDENTIALS_ID is not passed from upstream, the function returns a default value:
def getCredentialsId() {
    if (params.CREDENTIALS_ID) {
        return params.CREDENTIALS_ID;
    } else {
        return "default_credentials_id";
    }
}
pipeline {
    environment {
        TEST_PASSWORD = credentials("${getCredentialsId()}")
    }
}
You can get another level of flexibility using maps:
stage("set_env_vars") {
steps {
script {
def MY_MAP1 = [A: "123", B: "456", C: "789"]
def MY_MAP2 = [A: "abc", B: "def", C: "ghi"]
env.var1 = MY_MAP1."${env.switching_var}"
env.var2 = MY_MAP2."${env.switching_var}"
}
}
}
This way, more choices are possible.
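One hypothetical way to drive switching_var (the name is just the one used above) is a choice parameter; build parameters are also exposed as environment variables, so env.switching_var picks up the selected value:
parameters {
    choice(name: 'switching_var', choices: ['A', 'B', 'C'], description: 'Selects which entry of the maps to use')
}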

Mocking Jenkins pipeline steps

I have a class that I use in my Jenkinsfile; here is a simplified version of it:
class TestBuild {
    def build(jenkins) {
        jenkins.script {
            jenkins.sh(returnStdout: true, script: "echo build")
        }
    }
}
And I supply this as a jenkins parameter when using it in the Jenkinsfile. What would be the best way to mock the jenkins object here so that it has script and sh?
Thanks for your help
I had similar problems the other week, and I came up with this:
import org.jenkinsci.plugins.workflow.cps.CpsScript

def mockCpsScript() {
    return [
        'sh': { arg ->
            def script
            def returnStdout
            // depending on how sh is called, arg is either a map or a string vector with arguments
            if (arg.length == 1 && arg[0] instanceof Map) {
                script = arg[0]['script']
                returnStdout = arg[0]['returnStdout']
            } else {
                script = arg[0]
            }
            println "Calling sh with script: ${script}"
        },
        'script' : { arg ->
            arg[0]()
        },
    ] as CpsScript
}
And used together with your script (extended with a non-named sh call):
class TestBuild {
    def build(jenkins) {
        jenkins.script {
            jenkins.sh(returnStdout: true, script: "echo build")
            jenkins.sh("echo no named arguments")
        }
    }
}

def obj = new TestBuild()
obj.build(mockCpsScript())
it outputs:
[Pipeline] echo
Calling sh with script: echo build
[Pipeline] echo
Calling sh with script: echo no named arguments
Now this in itself isn't very useful, but it is easy to add logic which defines the behaviour of the mock methods. For example, this version controls the contents returned by readFile depending on which directory and file is being read:
import org.jenkinsci.plugins.workflow.cps.CpsScript

def mockCpsScript(Map<String, String> readFileMap) {
    def currentDir = null
    return [
        'dir' : { arg ->
            def dir = arg[0]
            def subClosure = arg[1]
            if (currentDir != null) {
                throw new IllegalStateException("Dir '${currentDir}' is already open, trying to open '${dir}'")
            }
            currentDir = dir
            try {
                subClosure()
            } finally {
                currentDir = null
            }
        },
        'echo': { arg ->
            println(arg[0])
        },
        'readFile' : { arg ->
            def file = arg[0]
            if (currentDir != null) {
                file = currentDir + '/' + file
            }
            def contents = readFileMap[file]
            if (contents == null) {
                throw new IllegalStateException("There is no mapped file '${file}'!")
            }
            return contents
        },
        'script' : { arg ->
            arg[0]()
        },
    ] as CpsScript
}
class TestBuild {
    def build(jenkins) {
        jenkins.script {
            jenkins.dir('a') {
                jenkins.echo(jenkins.readFile('some.file'))
            }
            jenkins.echo(jenkins.readFile('another.file'))
        }
    }
}

def obj = new TestBuild()
obj.build(mockCpsScript(['a/some.file' : 'Contents of first file', 'another.file' : 'Some other contents']))
This outputs:
[Pipeline] echo
Contents of first file
[Pipeline] echo
Some other contents
If you need to use currentBuild or similar properties, then you need to assign those after the closure coercion:
import org.jenkinsci.plugins.workflow.cps.CpsScript

def mockCpsScript() {
    def jenkins = [
        // same as above
    ] as CpsScript
    jenkins.currentBuild = [
        // Add attributes you need here. E.g. result:
        result: null,
    ]
    return jenkins
}

Jenkins pipeline: load properties from file

The pipeline code below works well:
pipeline {
    agent {
        label "test_agent"
    }
    stages {
        stage("test") {
            steps {
                script {
                    sh "echo 'number=${BUILD_NUMBER}' >log"
                    if (fileExists('log')) {
                        load 'log'
                        retVal = "${number}"
                    }
                    echo "${retVal}"
                }
            }
        }
    }
}
However, when I tried to put the file-reading logic into a library step (named getNumber.groovy) and call it in the pipeline like this:
getNumber.groovy
def call() {
    def retVal
    if (fileExists('log')) {
        load 'log'
        retVal = "${number}"
    }
    return retVal
}
This is how the pipeline (test.groovy) calls this lib:
@Library('lib') _
pipeline {
    agent {
        label "test_agent"
    }
    stages {
        stage("test") {
            steps {
                script {
                    sh "echo 'number=${BUILD_NUMBER}' >log"
                    def retVal = getNumber()
                    echo "${retVal}"
                }
            }
        }
    }
}
It always fails with the error below:
[Pipeline] End of Pipeline
groovy.lang.MissingPropertyException: No such property: number for class: getNumber
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:53)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:458)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.getProperty(DefaultInvoker.java:34)
at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
Any suggestions? How can I fix this if I want to encapsulate the logic in a lib?
[Edit]
If I change this segment
load 'log'
retVal = "${number}"
to this:
def matcher = readFile('log') =~ '^number=(.+)'
retVal=matcher ? matcher[0][1] : null
it works. But I'm just curious why the previous one doesn't work.
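As a hedged alternative sketch, assuming the Pipeline Utility Steps plugin is available, the library step could avoid load altogether and parse the file with readProperties:
// getNumber.groovy - hypothetical variant using the Pipeline Utility Steps plugin
def call() {
    if (fileExists('log')) {
        def props = readProperties file: 'log'
        return props['number']
    }
    return null
}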

Dynamic number of parallel steps in declarative pipeline

I'm trying to create a declarative pipeline which runs a number of jobs (configurable via a parameter) in parallel, but I'm having trouble with the parallel part.
Basically, for some reason the below pipeline generates the error
Nothing to execute within stage "Testing" @ line .., column ..
and I cannot figure out why, or how to solve it.
import groovy.transform.Field

@Field def mayFinish = false

def getJob() {
    return {
        lock("finiteResource") {
            waitUntil {
                script {
                    mayFinish
                }
            }
        }
    }
}

def getFinalJob() {
    return {
        waitUntil {
            script {
                try {
                    echo "Start Job"
                    sleep 3 // Replace with something that might fail.
                    echo "Finished running"
                    mayFinish = true
                    true
                } catch (Exception e) {
                    echo e.toString()
                    echo "Failed :("
                }
            }
        }
    }
}

def getJobs(def NUM_JOBS) {
    def jobs = [:]
    for (int i = 0; i < (NUM_JOBS as Integer); i++) {
        jobs["job${i}"] = getJob()
    }
    jobs["finalJob"] = getFinalJob()
    return jobs
}

pipeline {
    agent any
    options {
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    parameters {
        string(
            name: "NUM_JOBS",
            description: "Set how many jobs to run in parallel"
        )
    }
    stages {
        stage('Setup') {
            steps {
                echo "Setting it up..."
            }
        }
        stage('Testing') {
            steps {
                parallel getJobs(params.NUM_JOBS)
            }
        }
    }
}
I've seen plenty of examples doing this in the old pipeline, but not declarative.
Anyone know what I'm doing wrong?
At the moment, it doesn't seem possible to dynamically provide the parallel branches when using a Declarative Pipeline.
Even if you have a prior stage where, in a script block, you call getJobs() and add the result to the binding, the same error message is thrown.
In this case you'd have to fall back to using a Scripted Pipeline.
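For reference, a minimal hedged sketch of the scripted-pipeline fallback; the branch bodies are placeholders, and NUM_JOBS is assumed to already exist as a job parameter (e.g. defined via properties([parameters([...])]) or in the job configuration):
// Scripted pipeline: parallel accepts a dynamically built map of branches.
def branches = [:]
for (int i = 0; i < (params.NUM_JOBS as Integer); i++) {
    def name = "job${i}"   // local per iteration so each closure captures its own value
    branches[name] = {
        echo "Running ${name}"
    }
}
node {
    stage('Testing') {
        parallel branches
    }
}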

Resources