I'm trying to write a Jenkins plugin that provides a Step, myStep, which expects a block with a single parameter, per below:
myStep { someParameter -> <user code> }
I've found that BodyInvoker (retrieved from StepContext.newBodyInvoker()) provides no facilities to invoke the user-provided block with parameters.
Expanding the environment would not be ideal: even though the type of the parameter is serializable (to/from String), I'd have to provide additional helpers to carry out this serialization, e.g.:
myStep { deserialize "${env.value}" <user code> }
Do I have any other option to pass a non-String type into the provided block? Would type information of the parameter survive even if I did?
NB: I understand you can return a value from your Execution.run(), which becomes the return value of the step in the pipeline. It's just that in a related shared pipeline library I'm already leaning heavily into this pattern:
withFoo { computedFoo ->
    // something with computedFoo
    withBar(computedFoo) { computedBar ->
    }
}
I prefer this over:
def computedFoo = withFoo()
// something with computedFoo
withBar(computedFoo)
...then again, I couldn't find any plugins pulling this off.
No matter how closely I look at workflow-step-api-plugin, this doesn't seem possible today. The options are:
expand the environment context with a string value (a sketch of this option follows the list)
add a custom object to the context (requires access to the step context in the pipeline)
use a return value
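For reference, a minimal sketch of the first option (environment expansion) inside a StepExecution, using only workflow-step-api calls; ExpanderImpl, MY_VALUE and computeValue() are illustrative names I made up, not part of any real API:
import hudson.EnvVars
import org.jenkinsci.plugins.workflow.steps.BodyExecutionCallback
import org.jenkinsci.plugins.workflow.steps.EnvironmentExpander
import org.jenkinsci.plugins.workflow.steps.StepContext
import org.jenkinsci.plugins.workflow.steps.StepExecution

class MyStepExecution extends StepExecution {
    MyStepExecution(StepContext context) { super(context) }

    @Override
    boolean start() throws Exception {
        StepContext context = getContext()
        String serialized = computeValue()   // hypothetical helper: your value, serialized to a String
        context.newBodyInvoker()
            .withContext(EnvironmentExpander.merge(
                context.get(EnvironmentExpander.class),
                new ExpanderImpl(serialized)))
            .withCallback(BodyExecutionCallback.wrap(context))
            .start()
        return false   // body runs asynchronously
    }

    @Override
    void stop(Throwable cause) throws Exception {
        getContext().onFailure(cause)
    }

    private String computeValue() { 'some-serialized-value' }

    // Injects the serialized value as an environment variable visible to the block.
    private static class ExpanderImpl extends EnvironmentExpander {
        private final String value
        ExpanderImpl(String value) { this.value = value }
        @Override
        void expand(EnvVars env) {
            env.override('MY_VALUE', value)   // illustrative variable name
        }
    }
}
The block would then read env.MY_VALUE (a String) and deserialize it itself, which is exactly the extra-helper burden described above.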
Related
I am trying to use a global class that I've defined in a shared library to help organise job parameters. It's not working, and I'm not even sure if it is possible.
My job looks something like this:
pipelineJob('My-Job') {
    definition {
        // Job definition goes here
    }
    parameters {
        choiceParam('awsAccount', awsAccount.ALL)
    }
}
In a file in /vars/awsAccount.groovy I have the following code:
class awsAccount implements Serializable {
    static final String SANDPIT = "sandpit"
    static final String DEV = "dev"
    static final String PROD = "prod"
    static String[] ALL = [SANDPIT, DEV, PROD]
}
Global pipeline libraries are configured to load implicitly from my repository's master branch.
When attempting to update the DSL scripts I receive the error:
ERROR: (myJob.groovy, line 67) No such property: awsAccount for class: javaposse.jobdsl.dsl.helpers.BuildParametersContext
Why does it not find the class, and is it even possible to use shared library classes like this in a pipeline job?
Disclaimer: I know this works using a Jenkinsfile. Unfortunately, it is not tested using Declarative Pipelines - but since there are no other answers yet, it may be worth a try.
Regarding your first question: there are several reasons why a class from your shared-lib might not be found, starting from the library import, the library syntax, etc. But they definitely do work for DSL. To be more precise, additional information would be needed. But be sure that:
You have your Groovy class definition using exactly the directory structure described in the documentation (https://www.jenkins.io/doc/book/pipeline/shared-libraries/) - see the sketch after this list
You give the shared-lib a name in Jenkins as you configure it, and that name is exactly the one you use in the import
You use the import as described in the documentation (under Using Libraries)
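For reference, the layout the documentation describes looks like this (a sketch; the org/foo paths are examples):
(shared library repo root)
  src/         e.g. src/org/foo/Helpers.groovy (Groovy classes)
  vars/        e.g. vars/awsAccount.groovy (global variables)
  resources/   e.g. resources/org/foo/config.json (for libraryResource)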
Regarding your second question (the one that names this SO question): yes, you can include job parameters from information in your shared-lib. At least, using Jenkinsfiles. You can even define properties to be included in the pipeline. I got it working, though the syntax is a bit tricky due to various problems.
Again, I am using Jenkinsfile and this is what worked for me:
In my shared-lib class, I added a static function that introduces the build parameters. Notice the input parameters that function needs and its usage:
class awsAccount implements Serializable {
    //
    static giveMeParameters(script) {
        return [
            // Some params
            script.string(defaultValue: '', description: 'A default parameter', name: 'textParm'),
            script.booleanParam(defaultValue: false, description: 'If set to True, do whatever you need - otherwise, do not do it', name: 'boolOption'),
        ]
    }
}
To introduce those parameters in the pipeline, you need to place the returned value of the function into the parameters array:
properties([
    parameters(
        awsAccount.giveMeParameters(this)
    )
])
Again, notice the syntax when calling the function. Similarly, you can also define functions in the shared-lib that return properties and use them in multiple jobs (disableConcurrentBuilds, buildDiscarder, etc.).
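As a sketch of that idea (giveMeProperties and the logRotator settings are illustrative, not from the original post):
class awsAccount implements Serializable {
    static giveMeProperties(script) {
        return [
            script.disableConcurrentBuilds(),
            script.buildDiscarder(script.logRotator(numToKeepStr: '10')),
        ]
    }
}
which you could then combine with the parameters shown above:
properties(
    awsAccount.giveMeProperties(this) +
    [parameters(awsAccount.giveMeParameters(this))]
)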
I'm trying to divide a pipeline. Most of the parameters are passed successfully, but those containing variables are resolved earlier than I need.
Jenkins ver. 2.164.1
Jenkinsfile content:
stage('prebuild') {
    steps {
        script {
            VERSION = "temporary-value"
            POSTBUILDACTION = "make.exe \\some\\path\\file_${VERSION}"
        }
    }
}
stage('build') {
    steps {
        script {
            build(POSTBUILDACTION)
        }
    }
}
build.groovy content:
def call(String POSTBUILDACTION) {
    ...
    checkout somefile
    VERSION = readFile(somefile)
    bat "${POSTBUILDACTION}"
}
Here I expected that the version would be taken from the redefined VERSION variable, but POSTBUILDACTION is passed into the function as a plain String, so it's called as is ("make.exe \some\path\file_temporary-value"). In fact, the command I'd like to get is (somefile contains only one number, for example "5"):
make.exe \some\path\file_5
But now I have:
make.exe \some\path\file_temporary-value
Or, if I try to pass \${VERSION}, like:
POSTBUILDACTION="make.exe \\some\\path\\file_\${VERSION}"
it's transferred as is:
make.exe \some\path\file_${VERSION}
I've tried to inspect the class of POSTBUILDACTION: in the prebuild stage it equals "class org.codehaus.groovy.runtime.GStringImpl", and after being passed through to the build stage it becomes a String: "class java.lang.String".
So how do I pass an argument that contains a variable reference into a function, rather than its resolved value?
OR
to "breathe life" into a dry string like
'make.exe \\some\\path\\file_${VERSION}'
so the variables could be resolved?
Option 1 - lazy evaluation (@NonCPS)
You can use a GString with lazy evaluation, but since Jenkins doesn't serialize lazy GStrings you'll have to return it from a @NonCPS method, like so:
@NonCPS
def getPostBuildAction() {
    "make.exe \\some\\path\\file_${ -> VERSION }"
}
stage ('prebuild') {
...
}
Then you set POSTBUILDACTION = getPostBuildAction() and you can use POSTBUILDACTION as you wanted, but be aware that the object you have here is a groovy.lang.GString and not a String, so you'll want to change your parameter's declared type (or simply use def).
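A usage sketch, assuming the getPostBuildAction() above; per this option's claim, VERSION is only read when the GString is finally interpolated:
VERSION = 'temporary-value'
POSTBUILDACTION = getPostBuildAction()   // VERSION is not read here
VERSION = '5'                            // e.g. what readFile(somefile) returned
bat "${POSTBUILDACTION}"                 // resolves now: make.exe \some\path\file_5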
Option 2 - re-evaluate with a function call
You can use an eager GString inside a function:
String getPostBuildAction() {
    "make.exe \\some\\path\\file_$VERSION"
}
But here you'll have to call getPostBuildAction() every time you want a fresh reading of VERSION, so you'll have to replace every use of POSTBUILDACTION with a call to this function.
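Usage sketch for this variant; each call re-reads VERSION at call time:
VERSION = 'temporary-value'
echo getPostBuildAction()   // make.exe \some\path\file_temporary-value
VERSION = '5'
echo getPostBuildAction()   // make.exe \some\path\file_5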
I am new to Jenkins pipeline scripting. I am developing a Jenkins pipeline whose logic looks like this:
node {
    a = 'xyz'
    b = 'abc'
    // defined some global variables
    stage('verify') {
        verify("${a}", "${b}")
        abc("${a}", "${b}")
        echo "changed values of a and b are ${a} ${b}"
    }
}
def verify(String a, String b) {
    // Some logic where the initial values of a and b get changed at the end of this function
}
def abc(String a, String b) {
    // I need to get the changed value from the verify function and manipulate that value in this function
}
I need to pass the initial a and b values (there are multiple) to the verify function and pass the changed values on to the other function. I then need to manipulate the changed values and pass them to the stage in the pipeline, where echo will display them. How can I accomplish all this?
Ok, here's what I meant:
def String verify_a(String a) { /* stuff */ }
def String verify_b(String b) { /* stuff */ }
node {
    String a = 'xyz'
    String b = 'abc'
    stage('verify') {
        a = verify_a(a)
        b = verify_b(b)
        echo "changed values of a and b are $a $b"
    }
    stage('next stage') {
        echo "a and b retain their changed values: $a $b"
    }
}
The easiest way I have found to pass variables between stages is to just use Environment Variables. The one - admittedly major - restriction is that they can only be Strings. But I haven't found that to be a huge issue, especially with liberal use of the toBoolean() and toInteger() functions. If you need to be passing maps or more complex objects between stages, you might need to build something with external scripts or writing things to temporary files (make sure to stash what you need if there's a chance you'll switch agents). But env vars have served me well for almost all cases.
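A minimal sketch of that pattern in scripted-pipeline form (stage and variable names are illustrative):
stage('produce') {
    env.ARTIFACT_COUNT = '3'     // env vars can only hold Strings
    env.SHOULD_DEPLOY = 'true'
}
stage('consume') {
    int count = env.ARTIFACT_COUNT.toInteger()
    if (env.SHOULD_DEPLOY.toBoolean()) {
        echo "deploying ${count} artifacts"
    }
}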
This article is, as its title implies, the definitive guide on environment variables in Jenkins. You'll see a comment there from me; it has really helped me grok the intricacies of Jenkins env vars.
I'm setting up a Jenkins pipeline Jenkinsfile and I'd like to check whether a boolean parameter is set.
Here's the relevant portion of the file:
node ("master") {
stage 'Setup' (
[[$class: 'BooleanParameterValue', name: 'BUILD_SNAPSHOT', value: 'Boolean.valueOf(BUILD_SNAPSHOT)']],
As I understand it, that is the way to access the boolean parameter, but I'm not sure how to state the if statement itself.
I was thinking about doing something like:
if(BooleanParameterValue['BUILD_SNAPSHOT']){...
What is the correct way to write this statement please?
A boolean parameter is accessible to your pipeline script in 3 ways:
As a bare parameter, e.g.: isFoo
From the env map, e.g.: env.isFoo
From the params map, e.g.: params.isFoo
If you access isFoo using 1) or 2) it will have a String value (of either "true" or "false").
If you access isFoo using 3) it will have a Boolean value.
So the least confusing way (IMO) to test the isFoo parameter in your script is like this:
if (params.isFoo) {
....
}
Alternatively you can test it like this:
if (isFoo.toBoolean()) {
....
}
or
if (env.isFoo.toBoolean()) {
....
}
The toBoolean() call is required to convert the "true" String to the boolean true and the "false" String to the boolean false.
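To see why, note that any non-empty String is truthy in Groovy, so testing the String form directly gives the wrong answer when the parameter is false; a sketch:
if ('false') {
    echo 'this runs - a non-empty String is always truthy'
}
if ('false'.toBoolean()) {
    echo 'this does not run'
}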
The answer is actually way simpler than that!
According to the pipeline documentation, if you define a boolean parameter isFoo you can access it in your Groovy with just its name, so your script would actually look like:
node {
    stage 'Setup'
    echo "${isFoo}" // Usage inside a string
    if (isFoo) { // Very simple "if" usage
        echo "Param isFoo is true"
        ...
    }
}
And by the way, you probably shouldn't call your parameter BUILD_SNAPSHOT, but maybe buildSnapshot or isBuildSnapshot, because it is a parameter and not a constant.
Simply doing if(isFoo){...} will not guarantee it works :) To be safe, use if(isFoo.toString()=='true'){ ... }
I have a very long workflow for building and testing our application. So long, in fact, that when we try to load the main workflow script, we get this exception:
java.lang.ClassFormatError: Invalid method Code length 67768 in class file WorkflowScript
I am not proud of this. I'm trying to split the workflow into smaller scripts that we load from the main workflow script, but I'm running into an issue with variable scoping. For example:
def a = 'foo' //some variable referenced in multiple workflow stages
node {
    echo a
}
//... and then a whole bunch of other stages
might become
def a = 'foo' //some variable referenced in multiple workflow stages
node {
    git: ...
    load 'flowPartA.groovy'
}()
where flowPartA.groovy looks like:
{ ->
    node {
        echo a
    }
}
Based on my understanding of the documentation, where flowPartA.groovy is interpreted as a closure, I expected the variable 'a' to remain in scope, but instead I get an exception to the contrary:
groovy.lang.MissingPropertyException: No such property: a for class: groovy.lang.Binding
Am I missing something about the way workflow interprets the flow scripts? Is there a good way to take a huge workflow that uses many, many parameters and split it into smaller chunks?
You have to define a function in the external Groovy file and call it, passing all required parameters:
def a = 'foo'
node('slave') {
    git '…'
    def flow = load 'flowPartA.groovy'
    flow.echoFromA(a)
}
And flowPartA.groovy contains:
def echoFromA(String a) {
    echo a
}
return this
See the documentation for more information.
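If several values have to cross the load boundary, passing a map keeps the function signature stable as the flow grows; a sketch (the file name and keys are hypothetical):
// flowPartB.groovy
def run(Map args) {
    echo "${args.a} and ${args.b}"
}
return this
and in the main script:
def flow = load 'flowPartB.groovy'
flow.run(a: 'foo', b: 'bar')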