Jenkins Declarative Pipeline Include File

I am trying to use a separate file holding variables for a Jenkins pipeline, because it will be used by multiple pipelines. But I can't seem to find the proper way to include it, or whether there is any way to include it at all.
MapA:
def MapA = [
    ItemA: [
        Environment: 'envA',
        Name: 'ItemA',
        Version: '1.0.0.2',
    ],
    ItemB: [
        Environment: 'envB',
        Name: 'ItemB',
        Version: '2.0.0.1',
    ]
]
return this;
MainScript:
def NodeLabel = 'windows'
def CustomWorkSpace = "C:/Workspace"

// Tried loading it here (Location 1)
load 'MapA'

pipeline {
    agent {
        node {
            // Restrict Project Execution
            label NodeLabel
            // Use Custom Workspace
            customWorkspace CustomWorkSpace
            // Tried loading it here (Location 2)
            load 'MapA'
        }
    }
    stages {
        // Solution
        stage('Solution') {
            steps {
                script {
                    // Using it here
                    MapA.each { Solution ->
                        stage("Stage A") {
                            ...
                        }
                        stage("Stage B") {
                            ...
                        }
                        // Extract Commit Solution
                        stage("Stage C") {
                            ...
                            echo "${Solution.value.Environment}"
                            echo "${Solution.value.Name}"
                            echo "${Solution.value.Version}"
                        }
                    }
                }
            }
        }
    }
}
On Location 1 outside the pipeline and node section: it gave the below error
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
On Location 2 inside the node section: it gave the below error
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 7: Expected to find 'someKey "someValue"' @ line 7, column 14.
   load 'MapA'
   node {
   ^

You can achieve your scenario in two ways:
#1
If you want, you can hardcode the variable in the same Jenkinsfile and use it in your pipeline like the example below.
Jenkinsfile content
def MapA = [
    ItemA: [
        Environment: 'envA',
        Name: 'ItemA',
        Version: '1.0.0.2',
    ],
    ItemB: [
        Environment: 'envB',
        Name: 'ItemB',
        Version: '2.0.0.1',
    ]
]
pipeline {
    agent any;
    stages {
        stage('debug') {
            steps {
                script {
                    MapA.each { k, v ->
                        stage(k) {
                            v.each { k1, v1 ->
                                // do your actual task by accessing the map value like below
                                echo "${k} , ${k1} value is : ${v1}"
                            }
                        }
                    }
                }
            }
        }
    }
}
#2
If you would like to keep the variable in a separate Groovy file in a Git repo, it will look like below.
Git Repo file and folder structure
.
├── Jenkinsfile
├── README.md
└── var.groovy
var.groovy
def mapA() {
    return [
        ItemA: [
            Environment: 'envA',
            Name: 'ItemA',
            Version: '1.0.0.2',
        ],
        ItemB: [
            Environment: 'envB',
            Name: 'ItemB',
            Version: '2.0.0.1',
        ]
    ]
}

def helloWorld() {
    println "Hello World!"
}

return this;
Jenkinsfile
pipeline {
    agent any
    stages {
        stage("iterate") {
            steps {
                sh """
                    ls -al
                """
                script {
                    def x = load "${env.WORKSPACE}/var.groovy"
                    x.helloWorld()
                    x.mapA().each { k, v ->
                        stage(k) {
                            v.each { k1, v1 ->
                                echo "for ${k} value of ${k1} is ${v1}"
                            }
                        } //stage
                    } //each
                } //script
            } //steps
        } // stage
    }
}

Related

Jenkins file groovy issues

Hi, my Jenkinsfile code is as follows. I am basically trying to call a Python script and execute it. I have defined some variables in my code, and when I try to run it, it gives a "no such property" error at the beginning and I can't find the reason for it.
I would really appreciate any suggestions on this.
import groovy.json.*

pipeline {
    agent {
        label 'test'
    }
    parameters {
        choice(choices: '''\
env1
env2''', description: 'Environment to deploy', name: 'vpc-stack')
        choice(choices: '''\
node1
node2''', description: '(choose )', name: 'stack')
    }
    stages {
        stage('Tooling') {
            steps {
                script {
                    // set up terraform
                    def tfHome = tool name: 'Terraform 0.12.24'
                    env.PATH = "${tfHome}:${env.PATH}"
                    env.TFHOME = "${tfHome}"
                }
            }
        }
        stage('Build all modules') {
            steps {
                wrap([$class: 'BuildUser']) {
                    // build all modules
                    script {
                        if (params.refresh) {
                            echo "Jenkins refresh!"
                            currentBuild.result = 'ABORTED'
                            error('Jenkinsfile refresh! Aborting any real runs!')
                        }
                        sh(script: """pwd""")
                        def status_code = sh(script: """PYTHONUNBUFFERED=1 python3 scripts/test/test_script.py /$vpc-stack""", returnStatus: true)
                        if (status_code == 0) {
                            currentBuild.result = 'SUCCESS'
                        }
                        if (status_code == 1) {
                            currentBuild.result = 'FAILURE'
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            echo 'cleaning workspace'
            step([$class: 'WsCleanup'])
        }
    }
}
And this code gives me the following error:
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: vpc for class
Any suggestions on what can be done to resolve this?
Use another name for the choice variable, without the dash sign (-), e.g. vpc_stack or vpcstack, and replace the variable name in the Python call.
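The dash is the problem because of how Groovy parses the interpolation: inside a double-quoted string, `$vpc-stack` is read as the property `vpc` followed by the literal `-stack`, which is exactly why the error complains about a missing property named `vpc`. A sketch of the two fixes, shown as commented Jenkinsfile fragments (the parameter names are the ones from the question):

```groovy
// BAD: "$vpc-stack" parses as property `vpc` + literal "-stack"
// sh "python3 scripts/test/test_script.py /$vpc-stack"

// Option 1: rename the parameter so it has no dash
// sh "python3 scripts/test/test_script.py /${params.vpc_stack}"

// Option 2: keep the dash, but access it explicitly through the params map
// sh "python3 scripts/test/test_script.py /${params['vpc-stack']}"
```

Option 2 works without renaming anything, because the subscript syntax makes the full parameter name unambiguous to the parser.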

How to define and get/put the values in Jenkinsfile groovy map

I have this Jenkinsfile below. I am trying to get a key of a map, but I am getting "java.lang.NoSuchMethodError: No such DSL method 'get' found among steps". Can someone help me resolve this?
def country_capital = {
    [Australia : [best: 'xx1', good: 'xx2', bad: 'xx3'],
     America : [best: 'yy1', good: 'yy2', bad: 'yy3']]
}
pipeline {
    agent any
    stages {
        stage('Test Map') {
            steps {
                script {
                    echo country_capital.get('Australia')['best']
                }
            }
        }
    }
}
You can get the value this way:
def country_capital = [
    Australia: [
        best: 'xx1',
        good: 'xx2',
        bad: 'xx3'
    ],
    America: [
        best: 'yy1',
        good: 'yy2',
        bad: 'yy3'
    ]
]
pipeline {
    agent any
    stages {
        stage('Test Map') {
            steps {
                script {
                    echo country_capital['Australia'].best
                }
            }
        }
    }
}
// Output
xx1
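Note the other change besides the access syntax: the original `def country_capital = { ... }` uses braces, which define a Groovy closure rather than a map, so `.get(...)` on it cannot find a map method and Jenkins falls back to searching its DSL steps. A small plain-Groovy sketch of the difference:

```groovy
// Braces create a closure that merely *returns* a map when called:
def asClosure = { [Australia: [best: 'xx1']] }

// Square brackets create the map literal itself:
def asMap = [Australia: [best: 'xx1']]

assert asMap['Australia'].best == 'xx1'        // direct access works on the map
assert asClosure()['Australia'].best == 'xx1'  // the closure must be called first
```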
For the above example, one can also do:
country_capital.each { capital_key, capital_value ->
    try {
        echo "Testing ${capital_value.best}..."
    }
    catch (ex) {
        echo "Test failed: ${capital_value.bad}"
    }
}

Mocking jenkins pipeline steps

I have a class that I use in my Jenkinsfile; a simplified version of it is here:
class TestBuild {
    def build(jenkins) {
        jenkins.script {
            jenkins.sh(returnStdout: true, script: "echo build")
        }
    }
}
And I supply this as a jenkins parameter when using it in the Jenkinsfile. What would be the best way to mock the jenkins object here, with its script and sh methods?
Thanks for your help
I had similar problems the other week; I came up with this:
import org.jenkinsci.plugins.workflow.cps.CpsScript

def mockCpsScript() {
    return [
        'sh': { arg ->
            def script
            def returnStdout
            // depending on how sh is called, arg is either a map or a string vector with arguments
            if (arg.length == 1 && arg[0] instanceof Map) {
                script = arg[0]['script']
                returnStdout = arg[0]['returnStdout']
            } else {
                script = arg[0]
            }
            println "Calling sh with script: ${script}"
        },
        'script' : { arg ->
            arg[0]()
        },
    ] as CpsScript
}
and used it together with your script (extended with a non-named sh call):
class TestBuild {
    def build(jenkins) {
        jenkins.script {
            jenkins.sh(returnStdout: true, script: "echo build")
            jenkins.sh("echo no named arguments")
        }
    }
}

def obj = new TestBuild()
obj.build(mockCpsScript())
it outputs:
[Pipeline] echo
Calling sh with script: echo build
[Pipeline] echo
Calling sh with script: echo no named arguments
Now this in itself isn't very useful, but it is easy to add logic that defines the behaviour of the mock methods. For example, this version controls the contents returned by readFile depending on which directory and file is being read:
import org.jenkinsci.plugins.workflow.cps.CpsScript

def mockCpsScript(Map<String, String> readFileMap) {
    def currentDir = null
    return [
        'dir' : { arg ->
            def dir = arg[0]
            def subClosure = arg[1]
            if (currentDir != null) {
                throw new IllegalStateException("Dir '${currentDir}' is already open, trying to open '${dir}'")
            }
            currentDir = dir
            try {
                subClosure()
            } finally {
                currentDir = null
            }
        },
        'echo': { arg ->
            println(arg[0])
        },
        'readFile' : { arg ->
            def file = arg[0]
            if (currentDir != null) {
                file = currentDir + '/' + file
            }
            def contents = readFileMap[file]
            if (contents == null) {
                throw new IllegalStateException("There is no mapped file '${file}'!")
            }
            return contents
        },
        'script' : { arg ->
            arg[0]()
        },
    ] as CpsScript
}
class TestBuild {
    def build(jenkins) {
        jenkins.script {
            jenkins.dir('a') {
                jenkins.echo(jenkins.readFile('some.file'))
            }
            jenkins.echo(jenkins.readFile('another.file'))
        }
    }
}

def obj = new TestBuild()
obj.build(mockCpsScript(['a/some.file' : 'Contents of first file', 'another.file' : 'Some other contents']))
This outputs:
[Pipeline] echo
Contents of first file
[Pipeline] echo
Some other contents
If you need to use currentBuild or similar properties, you need to assign those after the closure coercion:
import org.jenkinsci.plugins.workflow.cps.CpsScript

def mockCpsScript() {
    def jenkins = [
        // same as above
    ] as CpsScript
    jenkins.currentBuild = [
        // Add attributes you need here. E.g. result:
        result: null,
    ]
    return jenkins
}
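The `as CpsScript` trick used throughout is ordinary Groovy map-to-type coercion: a map of closures can be coerced to a class or interface, and a method call is dispatched to the closure stored under that method's name. A minimal standalone illustration of the same mechanism (the `Shell` interface here is hypothetical, invented for the example; it stands in for the pipeline script type):

```groovy
// Hypothetical interface standing in for the pipeline script type:
interface Shell {
    def sh(String args)
}

// A map of closures coerced to that interface; calling sh() invokes the closure:
def mock = [sh: { args -> "mock ran: ${args}" }] as Shell

assert mock.sh('echo hi') == 'mock ran: echo hi'
```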

Dynamically defining parallel steps in declarative jenkins pipeline

I am trying to parallelize a dynamically defined set of functions as follows:
def somefunc() {
    echo 'echo1'
}

def somefunc2() {
    echo 'echo2'
}

running_set = [
    { somefunc() },
    { somefunc2() }
]

pipeline {
    agent none
    stages {
        stage('Run') {
            steps {
                parallel(running_set)
            }
        }
    }
}
And what I end up with is:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 17: No "steps" or "parallel" to execute within stage "Run" @ line 17, column 9.
   stage('Run') {
Although steps are defined within the stage 'Run'. Anyway, what I would like to achieve is running a dynamically defined set of functions in parallel.
If you want to use a dynamic parallel block with a declarative pipeline script, you have to apply two changes to your Jenkinsfile:
You have to define running_set as a Map, like ["task 1": { somefunc() }, "task 2": { somefunc2() }] - the keys of this map are used as the names of the parallel stages
You have to pass running_set to the parallel method inside a script {} block
Here is what the updated Jenkinsfile could look like:
def somefunc() {
    echo 'echo1'
}

def somefunc2() {
    echo 'echo2'
}

running_set = [
    "task1": {
        somefunc()
    },
    "task2": {
        somefunc2()
    }
]

pipeline {
    agent none
    stages {
        stage('Run') {
            steps {
                script {
                    parallel(running_set)
                }
            }
        }
    }
}
And here is what it looks like in Blue Ocean UI:
It is not obvious, but Szymon's way can be very straightforward:
pipeline {
    agent none
    stages {
        stage('Run') {
            steps {
                script {
                    parallel([
                        'parallelTask1_Name': {
                            // any code you like
                        },
                        'parallelTask2_Name': {
                            // any other code you like
                        },
                        // ... etc
                    ])
                }
            }
        }
    }
}
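When the set of branches really is dynamic (e.g. derived from a list of services), `collectEntries` is a convenient way to build the name-to-closure map that `parallel` expects. A sketch under the assumption of a hypothetical `services` list; the branch bodies are placeholders for whatever pipeline steps you need:

```groovy
// Build the parallel map from data instead of writing it by hand:
def services = ['frontend', 'backend', 'worker']   // hypothetical input
def branches = services.collectEntries { name ->
    ["build-${name}".toString(): {
        echo "building ${name}"   // any pipeline steps you like
    }]
}
// Inside a declarative pipeline this still goes in a script {} block:
// parallel(branches)
```

The `.toString()` on the key matters: GString keys and String keys hash differently, so forcing plain Strings avoids surprising lookup failures in the resulting map.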

Jenkins DSL - Parse Yaml for complex processing

I'm using Jenkins Job DSL to construct pipelines for multiple SOA style services. All these service pipelines are identical.
job('wibble') {
    publishers {
        downstreamParameterized {
            trigger("SOA_Pipeline_Builder") {
                condition('SUCCESS')
                parameters {
                    predefinedProp('PROJECT_NAME', "myproject-2")
                    predefinedProp('PROJECT_REPO', "myprojecttwo@gitrepo.com")
                }
            }
            trigger("SOA_Pipeline_Builder") {
                condition('SUCCESS')
                parameters {
                    predefinedProp('PROJECT_NAME', "myproject-1")
                    predefinedProp('PROJECT_REPO', "myprojectone@gitrepo.com")
                }
            }
        }
    }
}
Given that I'm adding new projects every day, I have to keep manipulating the DSL. I've decided I'd rather have all the config in a YAML file outside of the DSL. I know I can use Groovy to create arrays, do loops, etc., but I'm not having much luck.
I'm trying to do something like this...
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

List projects = new Yaml().load(("conf/projects.yml" as File).text)

job('wibble') {
    publishers {
        downstreamParameterized {
            projects.each {
                trigger("SOA_Pipeline_Builder") {
                    condition('SUCCESS')
                    parameters {
                        predefinedProp('PROJECT_NAME', it.name)
                        predefinedProp('PROJECT_REPO', it.repo)
                    }
                }
            }
        }
    }
}
conf/projects.yml
---
- name: myproject-1
  repo: myprojectone@gitrepo.com
- name: myproject-2
  repo: myprojecttwo@gitrepo.com
Does anyone have any experience with this sort of thing?
This is how I'm using snakeyaml with jobDSL to separate configuration from "application" code.
config.yml
services:
  - some-service-1
  - some-service-2
target_envs:
  - stage
  - prod
folder_path: "promotion-jobs"
seed_job.groovy
#!/usr/bin/groovy
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

def workDir = SEED_JOB.getWorkspace()
print("Loading config from ${workDir}/config.yml")
def config = new Yaml().load(("${workDir}/config.yml" as File).text)

for (service in config.services) {
    for (stage in config.target_envs) {
        folder("${config.folder_path}/to-${stage}") {
            displayName("Deploy to ${stage} jobs")
            description("Deploy ECS services to ${stage}")
        }
        if (stage == "stage") {
            stage_trigger = """
                pipelineTriggers([cron('1 1 * * 1')]),
            """
        } else {
            stage_trigger = ""
        }
        pipelineJob("${config.folder_path}/to-${stage}/${service}") {
            definition {
                cps {
                    sandbox()
                    script("""
                        node {
                            properties([
                                ${stage_trigger}
                                parameters([
                                    choice(
                                        choices: ['dev,stage'],
                                        description: 'The source environment to promote',
                                        name: 'sourceEnv'
                                    ),
                                    string(
                                        defaultValue: '',
                                        description: 'Specify a specific Docker image tag to deploy. This will override sourceEnv and should be left blank',
                                        name: 'sourceTag',
                                        trim: true
                                    )
                                ])
                            ])
                            properties([
                                disableConcurrentBuilds(),
                            ])
                            stage('init') {
                                dockerPromote(
                                    app="${service}",
                                    destinationEnv="${stage}"
                                )
                            }
                        }
                    """)
                }
            }
        }
    }
}
