Call timeout() & input() inside a shared Groovy script - Jenkins

I have a Groovy class shared between different Jenkins pipelines. I would like to move this part of the pipeline inside the shared Groovy script:
timeout(time: 15, unit: 'SECONDS') {
    input('Validation is required')
}
But it doesn't recognize input() or timeout(), so I have to pass them as parameters:
def requireValidation(Closure timeout, Closure input) {
    timeout(time: 15, unit: 'SECONDS') {
        input('Validation is required')
    }
}
Is there a way to import input & timeout inside the Groovy script so that I can have a function without parameters?
def requireValidation()

A normal Groovy class, G.groovy:
class G {
    def hello(s) {
        println("hello ${s}")
    }
    def timeout( ...
    def input( ...
}
and the script that needs to use it, main.groovy:
def requireValidation() {
    def script = new GroovyScriptEngine('.').with {
        loadScriptByName('G.groovy')
    }
    this.metaClass.mixin script
    hello('jon')
}
requireValidation()
which will print:
hello jon
The import occurs inside the requireValidation function (inspired by Python) using the GroovyScriptEngine, and the function can then be called directly thanks to the magic of this.metaClass.mixin script. A better approach in main.groovy would be to do the mixin once, at script level:
def script = new GroovyScriptEngine('.').with {
    loadScriptByName('G.groovy')
}
this.metaClass.mixin script

def requireValidation() {
    hello('jon')
}
requireValidation()
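In a Jenkins shared library specifically, a common alternative is to hand the pipeline's script object to the class once, so that steps like timeout and input resolve without any closure parameters. A minimal sketch (the package and class names here are hypothetical):

// src/org/example/Validator.groovy (hypothetical shared-library class)
package org.example

class Validator implements Serializable {
    def script  // the pipeline script object, passed in from the Jenkinsfile

    Validator(script) { this.script = script }

    // No parameters needed: timeout() and input() resolve through the script object
    def requireValidation() {
        script.timeout(time: 15, unit: 'SECONDS') {
            script.input('Validation is required')
        }
    }
}

From the Jenkinsfile you would then call new org.example.Validator(this).requireValidation().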

Related

Jenkins parallel call for external function loaded from file

I have an external function to call multiple times (with different variable values) from my Jenkins pipeline and was hoping to make this call parallel.
Here's my pipeline code:
stepsForParallel = [:]
parallel_list = ['apple','banana','orange']
def groovyFile = load "${path}/runFruit.groovy"
for (value in parallel_list) {
    stepsForParallel.put("Run for fruit-${value}", groovyFile.runForFruit(value))
}
parallel stepsForParallel
And my Groovy file, called 'runFruit.groovy', is as follows:
#!/usr/bin/env groovy
def runForFruit(fruitValue) {
    ..do something here..
}
return this
With this code the pipeline still runs sequentially.
Perhaps because in this line inside the for loop,
stepsForParallel.put("Run for fruit-${value}", groovyFile.runForFruit(value)), an actual call to the external function is being made?
Any suggestions on how to achieve this parallelization?
Take a look at the documentation for the parallel step: it takes a map from branch names (the names of the parallel executions) to closures, which contain the code that will be executed per parallel stage. In your case you are not passing a closure, but rather executing the function itself.
parallel firstBranch: {
    // do something
}, secondBranch: {
    // do something else
},
failFast: true|false
So you need to restructure your code to fit the parallel format by passing a closure that contains the code to execute as the map value.
In your case it will look something like this (using collectEntries):
parallel_list = ['apple','banana','orange']
def groovyFile = load "${path}/runFruit.groovy"
stepsForParallel = parallel_list.collectEntries { fruit ->
    ["Run for fruit-${fruit}": {
        groovyFile.runForFruit(fruit)
    }]
}
parallel stepsForParallel
Or if you prefer your format using the for loop:
stepsForParallel = [:]
parallel_list = ['apple','banana','orange']
def groovyFile = load "${path}/runFruit.groovy"
for (value in parallel_list) {
    def fruit = value  // capture a per-iteration copy so each closure gets its own value
    stepsForParallel.put("Run for fruit-${fruit}", { groovyFile.runForFruit(fruit) })
}
parallel stepsForParallel
You can also use it in the below way:
def fruit_list = ['apple','banana','orange']
def groovyFile
def stepsForParallel = [:]
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                script {
                    groovyFile = load "runFruit.groovy"
                    stepsForParallel = fruit_list.collectEntries { fruit ->
                        // wrap the call in a closure so it is deferred until 'parallel' runs
                        ["${fruit}": { groovyFile.runForFruit(fruit) }]
                    }
                    // 'parallel' must run inside the pipeline, e.g. in this script block
                    parallel stepsForParallel
                }
            }
        }
    }
}

How to pass parameters in a stage call in Jenkinsfile

Actually my Jenkinsfile looks like this:
@Library('my-libs') _
myPipeline {
    my_build_stage(project: 'projectvalue', tag: '1.0')
    my_deploy_stage()
}
I am trying to pass these two variables (project and tag) to my_build_stage.groovy, but it is not working.
What is the correct syntax to be able to use $params.project or $params.tag in my_build_stage.groovy?
Please see the below code, which will pass the parameters.
In your Jenkinsfile, write the below code:
// Global variable is used to get data from the Groovy file (shared library file)
def mylibrary
def PROJECT_VALUE = "projectvalue"
def TAG = 1
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    // Load the shared library Groovy file. Give the path of your mylibs file,
                    // which contains all your function definitions
                    mylibrary = load 'C:\\Jenkins\\mylibs'
                    // Call function my_build_stage and pass parameters
                    mylibrary.my_build_stage(PROJECT_VALUE, TAG)
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    // Call function my_deploy_stage
                    mylibrary.my_deploy_stage()
                }
            }
        }
    }
}
Create a file named mylibs (a Groovy file):
#!groovy
// Write or add functions (definitions of stages) which will be called from your Jenkinsfile
def my_build_stage(PROJECT_VALUE, TAG_VALUE) {
    echo "${PROJECT_VALUE} : ${TAG_VALUE}"
}
def my_deploy_stage() {
    echo "In deploy stage"
}
return this
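Since the original Jenkinsfile call uses Groovy named arguments (project:, tag:), another option is a shared-library step that accepts a Map; Groovy collects named arguments into a single Map parameter. A minimal sketch, assuming the standard vars/ layout of a shared library:

// vars/my_build_stage.groovy (hypothetical file in the library's vars/ directory)
def call(Map args = [:]) {
    // A call such as my_build_stage(project: 'projectvalue', tag: '1.0')
    // arrives here as args.project and args.tag
    echo "Building project ${args.project} at tag ${args.tag}"
}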

Groovy - readYaml() expecting java.util.LinkedHashMap instead of a file

As a part of our Jenkins solutions, we use Groovy in our pipelines.
In one of our Groovy files I want to update a docker-stack.yaml.
To do so I'm using readYaml():
stage("Write docker-stack.yaml") {
def dockerStackYamlToWrite = readFile 'docker-stack.yaml'
def dockerStackYaml = readYaml file: "docker-stack.yaml"
def imageOrigin = dockerStackYaml.services[domain].image
def versionSource = imageOrigin.substring(imageOrigin.lastIndexOf(":") + 1, imageOrigin.length())
def imageWithNewVersion = imageOrigin.replace(versionSource, imageTag)
dockerStackYamlToWrite = dockerStackYamlToWrite.replace(imageOrigin, imageWithNewVersion)
sh "rm docker-stack.yaml"
writeFile file: "docker-stack.yaml", text: dockerStackYamlToWrite
sh "git add docker-stack.yaml"
sh "git commit -m 'promote dockerStack to ${envname}'"
sh "git push origin ${envname}"
}
I am using a test to validate my code:
import org.junit.Before
import org.junit.Test

class TestUpdateVersionInDockerStack extends JenkinsfileBaseTest {
    @Before
    void setUp() throws Exception {
        helper.registerAllowedMethod("build", [Map.class], null)
        helper.registerAllowedMethod("steps", [Object.class], null)
        super.setUp()
    }

    @Test void success() throws Exception {
        def script = loadScript("src/test/jenkins/updateVersionInDockerStack/success.jenkins")
        script.execute()
    }
}
Here is the success.jenkins:
def execute() {
    node() {
        stage("Build") {
            def version = buildVersion()
            updateVersionInDockerStack([
                DOMAIN      : "security-package",
                IMAGE_TAG   : version,
                GITHUB_ORGA : "Bla",
                TARGET_ENV  : "int"
            ])
        }
    }
}
return this
When I run my test I get this message:
groovy.lang.MissingMethodException: No signature of method: updateVersionInDockerStack.readYaml() is applicable for argument types: (java.util.LinkedHashMap) values: [[file:docker-stack.yaml]]
At this point I'm lost. From what I understand of the documentation, readYaml() can take a file as an argument.
Can you help me understand why it is expecting a LinkedHashMap? Do I have to convert my value into a LinkedHashMap?
Thank you
Your pipeline unit test fails because there is no readYaml method registered in the pipeline's allowed methods. In your TestUpdateVersionInDockerStack test class, simply add the following line to the setUp method:
helper.registerAllowedMethod("readYaml", [Map.class], null)
This will instruct the Jenkins Pipeline Unit environment that the method readYaml, accepting a single argument of type Map, is allowed in the pipeline, and invocations of this method will be registered in the unit test call stack. You can add a printCallStack() call to your test method to see the stack of all steps executed during the test:
@Test void success() throws Exception {
    def script = loadScript("src/test/jenkins/updateVersionInDockerStack/success.jenkins")
    script.execute()
    printCallStack()
}

How to trigger multiple downstream jobs in Jenkins dynamically based on some input parameter

Scenario: I want to trigger a few downstream jobs (Job A, Job B, ...) dynamically based on the input parameter received by the current job.
import hudson.model.*

// Assumes 'configname' and 'sourceBranch' are variables bound into this
// system Groovy script (e.g. injected build parameters)
def values = configname.split(',')
def currentBuild = Thread.currentThread().executable
println configname
println sourceBranch
values.eachWithIndex { item, index ->
    println item
    println index
    def job = hudson.model.Hudson.instance.getJob(item)
    def params = new StringParameterValue('upstream_job', sourceBranch)
    def paramsAction = new ParametersAction(params)
    def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
    def causeAction = new hudson.model.CauseAction(cause)
    hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
}
How about something like this? I was getting a comma-separated list from the upstream system and split it into individual strings, which are internally job names, then made a call passing each individual string.
This Jenkinsfile would do that:
#!/usr/bin/env groovy
pipeline {
    agent { label 'docker' }
    parameters {
        string(name: 'myHotParam', defaultValue: '', description: 'What is your param, sir?')
    }
    stages {
        stage('build') {
            steps {
                script {
                    if (params.myHotParam == 'buildEverything') {
                        build 'mydir/jobA'
                        build 'mydir/jobB'
                    }
                }
            }
        }
    }
}
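If the downstream job names themselves arrive as a comma-separated build parameter, as in the original scenario, the two ideas can be combined with the parallel step from the earlier questions. A minimal sketch in a scripted pipeline, assuming string parameters named configname and sourceBranch are defined on the current job:

// Fan out one 'build' step per job name in the comma-separated parameter
def jobs = params.configname.split(',')
def branches = jobs.collectEntries { jobName ->
    ["trigger-${jobName}": {
        build job: jobName,
              parameters: [string(name: 'upstream_job', value: params.sourceBranch)]
    }]
}
parallel branches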

Jenkins pipeline script fails with "General error during class generation: Method code too large!"

When running a large Jenkins pipeline script, it can give the error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: General error during class generation: Method code too large!
java.lang.RuntimeException: Method code too large!
What is the reason for this error and how can it be fixed?
This is due to a JVM limitation, which Groovy inherits, requiring that the bytecode of a single method be no larger than 64 KB; it is not specific to the Jenkins Pipeline DSL.
To solve this, instead of using a single monolithic pipeline script, break it up into methods and call the methods.
For example, instead of having:
stage 'foo'
parallel([
    ... giant list of maps ...
])
Instead do:
stage 'foo'
def build_foo() {
    parallel([
        ... giant list of maps ...
    ])
}
build_foo()
If you are using a declarative pipeline with a shared library, you may need to refactor and externalize your global variables into the new methods. Here is a full example:
Jenkinsfile:
#Library("my-shared-library") _
myPipeline()
myPipeline.groovy:
def call() {
    String SOME_GLOBAL_VARIABLE
    String SOME_GLOBAL_FILENAME
    pipeline {
        agent any
        stages {
            stage('stage 1') {
                steps {
                    script {
                        SOME_GLOBAL_VARIABLE = 'hello'
                        SOME_GLOBAL_FILENAME = 'hello.txt'
                        ...
                    }
                }
            }
            stage('stage 2') {
                steps {
                    script {
                        doSomething(fileContent: SOME_GLOBAL_VARIABLE, filename: SOME_GLOBAL_FILENAME)
                        sh "cat $SOME_GLOBAL_FILENAME"
                    }
                }
            }
        }
    }
}

def doSomething(Map params) {
    String fileContent = params.fileContent
    String filename = params.filename
    sh "echo $fileContent > $filename"
}
