Pass value/data between pipeline steps - jenkins

Hi!
I need to pass data from one step to another in a Jenkins pipeline. Something like this:
node {
    // myPipelineStep is "my" own hello-world pipeline step based on the hello-world archetype,
    // and I want it to return a variable that I have inside the plugin
    def return_value = myPipelineStep inputVariable: value
    // Then I want to do something else, a new step, where I use this value
    sh 'echo $return_value'
    // But the problem is I don't know how to return something from my pipeline step
}
But in the empty-plugin archetype, the perform() function where the action should take place returns void, so it is not possible to return anything from there.
And the same goes for the hello-world archetype.
Anyone with any leads?

Can you modify what is in the method? If so, declare a variable outside the method and assign the value inside it. The variable will then hold the value, and you can use it in the other method.
E.g.
node {
    def isValid

    void perform(.....) {
        // do stuff
        isValid = true
    }

    @Override
    void perform(.....) {
        // do stuff
        if (isValid) {
            // use the value here
        }
    }
}
It should work :)
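A simpler route than a compiled plugin is a shared-library global variable: the `call()` method of a `vars/*.groovy` file can return a value directly. A sketch, where the step name, input parameter, and return value are illustrative:

```groovy
// vars/myPipelineStep.groovy in a shared library (names are illustrative)
def call(Map args = [:]) {
    def input = args.inputVariable
    // ... do the actual work here ...
    return "processed-${input}"
}
```

The pipeline can then capture the result and use it in a later step (note the double-quoted Groovy string, so the value is interpolated before the shell sees it):

```groovy
node {
    def returnValue = myPipelineStep inputVariable: 'value'
    sh "echo ${returnValue}"
}
```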

Related

How to use Jenkins Job DSL API in custom groovy classes

Is something like this possible, i.e. using the JobDSL API from a class outside the main DSL script?
//main_jobdsl_script.groovy:
new JobCreator().createJob()
//JobCreator.groovy:
job("new-job") {
    steps {
        batchFile("Hello World")
    }
}
When running it I get the error
13:03:18 ERROR: No signature of method: JobCreator.job() is applicable for argument types:
(org.codehaus.groovy.runtime.GStringImpl, StartJobCreator$_createJob_closure1)
values: ["new-job", de.dbh.jobcreation.StartJobCreator$_createStartJob_closure1#374d293]
I want to avoid that the main-script gets too big and cluttered and rather divide the code into several scripts / classes.
Yes, it is possible. The current script has access to all API methods, so you need to pass it to the custom class.
//main_jobdsl_script.groovy:
new JobCreator(this).createJob()
//JobCreator.groovy:
class JobCreator {
    private final Object context

    JobCreator(Object context) {
        this.context = context
    }

    void createJob() {
        context.job('new-job') {
            steps {
                batchFile('Hello World')
            }
        }
    }
}
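A variant of the same pattern: if the Job DSL classes are resolvable on the script classpath, the context can be typed as `javaposse.jobdsl.dsl.DslFactory` instead of a plain `Object`, which documents the dependency and gives IDE completion. A sketch under that assumption:

```groovy
// JobCreator.groovy — same pattern, typed against the Job DSL API
import javaposse.jobdsl.dsl.DslFactory

class JobCreator {
    private final DslFactory factory

    JobCreator(DslFactory factory) {
        this.factory = factory
    }

    void createJob() {
        // job() is delegated to the seed script's DSL context
        factory.job('new-job') {
            steps {
                batchFile('Hello World')
            }
        }
    }
}
```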

Common wrapper in Jenkinsfile

I want to add timestamps() and colorizeOutput() features to our pipeline libraries. I find the wrappers {} in Jenkins documentation:
job('example') {
    wrappers {
        colorizeOutput()
        timestamps()
    }
}
I don't understand how to add wrappers to our library, which looks like this:
// file ..src/helpers/Builder.groovy
package helpers.sw_main
def doSomething() {
// some Groovy stuff here
}
def doSomethingElse() {
// do something else
}
Our job pipeline looks like that:
#!/usr/bin/env groovy
// this is our library with custom methods
@Library('ext-lib')
def builder = new helpers.Builder()

node {
    try {
        stage('Some Stage') {
            builder.doSomething()
        }
    }
    catch (err) {
        throw err
    }
}
So, I want to add timestamps and ANSI colors to every function from the library. Of course, I can do it by wrapping every function body with
timestamps() {
    colorizeOutput() {
        // function body
    }
}
but that is clumsy. So can I easily wrap the whole pipeline or library?
One solution to your problem is to use Global Variables (/vars/xxxxx.groovy).
To create your own build step, add a Global Variable like /vars/myOwnStep.groovy:
def call(String stageName, Closure closure) {
    // do something
    // return something if you like to
}
which you can call like this in your pipeline script:
myOwnStep("Step-name") {
    // whatever you want to do
}
Another possibility is to "overwrite" the sh step. To do that, create a file called /vars/sh.groovy with this code:
def call(String script, String encoding = null, String label = null, boolean returnStatus = false, boolean returnStdout = false) {
    timestamps {
        return steps.sh(script: script, encoding: encoding, label: label, returnStatus: returnStatus, returnStdout: returnStdout)
    }
}

def call(Map params = [:]) {
    return call(params.script, params.get('encoding', null), params.get('label', null), params.get('returnStatus', false), params.get('returnStdout', false))
}
(This can be done for other steps too, but the parameters have to match.)
I just added a GitHub repository with some examples: https://github.com/datze/jenkins_shared_library (untested!)
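The two ideas can also be combined into one wrapper step in the library's `vars/` directory. A sketch, assuming the AnsiColor plugin is installed (it provides the `ansiColor` pipeline step; `colorizeOutput` is the Job DSL wrapper name, not a pipeline step):

```groovy
// vars/withWrappers.groovy (illustrative name)
def call(Closure body) {
    timestamps {
        ansiColor('xterm') {  // requires the AnsiColor plugin
            body()
        }
    }
}
```

Then any stage in the Jenkinsfile only needs one wrapper:

```groovy
withWrappers {
    builder.doSomething()
}
```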

Jenkins pipeline shared library - passing arguments

I am trying to build a function that accepts parameters to override defaults but I keep getting "null".
I have written a simple function:
// vars/Run.groovy
def test(String type, String parallel = 'yes') {
    println(type)
    println(parallel)
}
My pipeline looks like this:
node('master') {
    Run.test('unit')
    Run.test('unit', parallel = 'no')
}
The result I get is:
unit
yes
unit
null
What am I missing?
You just have to pass the value positionally; that will override the default:
Run.test('unit', 'no')
Groovy methods do not have Python-style keyword arguments, so `parallel = 'no'` is parsed as an assignment expression, not as a named parameter.
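If named overrides are really wanted, Groovy's map-parameter convention can emulate them. A sketch of an illustrative variant that would accept `Run.test(type: 'unit', parallel: 'no')`:

```groovy
// vars/Run.groovy — map-based variant with defaults (illustrative)
def test(Map args = [:]) {
    def type     = args.type
    def parallel = args.get('parallel', 'yes')  // default applied when the key is absent
    println(type)
    println(parallel)
}
```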

Jenkins pipeline job hangs on simple call to print simple object from shared library

I have a jenkins pipeline job that has been working fine. I have a small handful of similar pipelines, and I've been duplicating a small set of reusable utility methods into each one. So, I've started to construct a shared library to reduce that duplication.
I'm using the following page for guidance: https://jenkins.io/doc/book/pipeline/shared-libraries/ .
For each method that I move into the shared library, I create a "vars/methodname.groovy" file in the shared library, and change the method name to "call".
I've been doing these one at a time and verifying the pipeline job still works, and this is all working fine.
The original set of methods would reference several "global" variables, like "env.JOB_NAME" and "params.". In order for the method to work in the shared library, I would add references to those env vars and params as parameters to the methods. This also works fine.
However, I don't like the fact that I have to pass these "global" variables, that are essentially static from the start of the job, sometimes through a couple of levels of these methods that I've put into the shared library.
So, I've now created something like the "vars/acme.groovy" example from that doc page. I'm going to define instance variables to store all of those "global" variables, and move each of the single methods defined in each of the "vars/methodname.groovy" files as instance variables in this new class.
I also defined a "with" method in the class for each of the instance variables (setter that returns "this" for chaining).
I initially would configure it inside my "node" block with something like the following (the file in the library is called "vars/uslutils.groovy"):
uslutils.withCurrentBuild(currentBuild).with...
And then when I need to call any of the reused methods, I would just do "uslutils.methodname(optionalparameters)".
I also added a "toString()" method to the class, just for debugging (since debugging Jenkinsfiles is so easy :) ).
What's odd is that I'm finding that if I call this toString() method from the pipeline script, the job hangs forever, and I have to manually kill it. I imagine I'm hitting some sort of non-obvious recursion in some Groovy AST, but I don't see what I'm doing wrong.
Here is my "vars/uslutils.groovy" file in the shared library:
import hudson.model.Cause
import hudson.triggers.TimerTrigger
import hudson.triggers.SCMTrigger
import hudson.plugins.git.GitStatus
class uslutils implements Serializable {
    def currentBuild
    String mechIdCredentials
    String baseStashURL
    String jobName
    String codeBranch
    String buildURL
    String pullRequestURL
    String qBotUserID
    String qBotPassword

    def getCurrentBuild() { return currentBuild }
    String getMechIdCredentials() { return mechIdCredentials }
    String getBaseStashURL() { return baseStashURL }
    String getJobName() { return jobName }
    String getCodeBranch() { return codeBranch }
    String getBuildURL() { return buildURL }
    String getPullRequestURL() { return pullRequestURL }
    String getQBotUserID() { return qBotUserID }
    String getQBotPassword() { return qBotPassword }

    def withCurrentBuild(currentBuild) { this.currentBuild = currentBuild; return this }
    def withMechIdCredentials(String mechIdCredentials) { this.mechIdCredentials = mechIdCredentials; return this }
    def withBaseStashURL(String baseStashURL) { this.baseStashURL = baseStashURL; return this }
    def withJobName(String jobName) { this.jobName = jobName; return this }
    def withCodeBranch(String codeBranch) { this.codeBranch = codeBranch; return this }
    def withBuildURL(String buildURL) { this.buildURL = buildURL; return this }
    def withPullRequestURL(String pullRequestURL) { this.pullRequestURL = pullRequestURL; return this }
    def withQBotUserID(String qBotUserID) { this.qBotUserID = qBotUserID; return this }
    def withQBotPassword(String qBotPassword) { this.qBotPassword = qBotPassword; return this }

    public String toString() {
        // return "[currentBuild[${this.currentBuild}] mechIdCredentials[${this.mechIdCredentials}] " +
        //     "baseStashURL[${this.baseStashURL}] jobName[${this.jobName}] codeBranch[${this.codeBranch}] " +
        //     "buildURL[${this.buildURL}] pullRequestURL[${this.pullRequestURL}] qBotUserID[${this.qBotUserID}] " +
        //     "qBotPassword[${this.qBotPassword}]]"
        return this.mechIdCredentials
    }
}
Note that I've simplified the toString() method temporarily until I figure out what I'm doing wrong here.
This is what I added at the top of my "node" block:
uslutils.currentBuild = currentBuild
println "uslutils[${uslutils}]"
When I run the job, it prints information from lines that come before this, and then it just shows the rotating thing forever, until I kill the job. If I comment out the "println", it works fine.

How to get default value from groovy script variable in jenkins pipeline job

So the problem is, I am not able to get the default value for the controllerIP variable using the getControllerIP method without calling setControllerIP first. I tried similar Groovy code locally and it works, but it does not work on my Jenkins server. I also tried lots of other combinations in my Groovy script, but nothing worked.
Note that we are using Jenkins: pipeline shared groovy libraries plugin.
This is my pipeline job on Jenkins:
node {
    def controllerParameters = new com.company.project.controller.DeploymentParameters() as Object
    controllerParameters.setOSUsername('jenkins')
    controllerParameters.setOSPassword('jenkins123')
    controllerParameters.setBuildNumber(33)
    //controllerParameters.setControllerIP('192.44.44.44')
    //if I uncomment the line above everything works fine, but in one case I need the default value
    echo "I want the default value from the other file"
    controllerParameters.getControllerIP()
    echo "my code hangs on the line above"
}
This is my other file ../controller/DeploymentParameters.groovy
package com.company.project.controller
import groovy.transform.Field

def String osUsername
def String osPassword
@Field String controllerIP = "NotCreated" //tried a few combinations

//Open Stack username
def String setOSUsername(String osUsername) {
    this.osUsername = osUsername
}
def String getOSUsername() {
    return this.osUsername
}
//Open Stack password
void setOSPassword(String osPassword) {
    this.osPassword = osPassword
}
def String getOSPassword() {
    return this.osPassword
}
//Open Stack floating ip of master vm
void setControllerIP(String controllerIP) {
    this.controllerIP = controllerIP
}
def String getControllerIP() {
    return this.controllerIP
}
When Groovy executes lines like this.osUsername = osUsername or return this.osUsername, it actually calls the getters and setters instead of accessing the fields directly.
So this:
def String getOSPassword() {
    return this.osPassword
}
behaves like this:
def String getOSPassword() {
    return this.getOsPassword()
}
and your code enters infinite recursion (the same applies to the setter and the assignment).
Within your setters and getters you need to use Groovy's direct field access operator .@:
def String getOSPassword() {
    return this.@osPassword
}
