How does variable scoping work when splitting a workflow into smaller chunks? - jenkins

I have a very long workflow for building and testing our application. So long, in fact, that when we try to load the main workflow script, we get this exception:
java.lang.ClassFormatError: Invalid method Code length 67768 in class file WorkflowScript
I am not proud of this. I'm trying to split the workflow into smaller scripts that we load from the main workflow script, but am running into an issue with variable scoping. For example:
def a = 'foo' // some variable referenced in multiple workflow stages
node {
    echo a
}
//... and then a whole bunch of other stages
might become
def a = 'foo' // some variable referenced in multiple workflow stages
node {
    git: ...
    load 'flowPartA.groovy'
}()
where flowPartA.groovy looks like:
{ ->
    node {
        echo a
    }
}
Based on my understanding of the documentation, where flowPartA.groovy is interpreted as a closure, I expected the variable 'a' to remain in scope, but instead I get an exception to the contrary.
groovy.lang.MissingPropertyException: No such property: a for class: groovy.lang.Binding
Am I missing something about the way workflow interprets the flow scripts? Is there a good way to take a huge workflow that uses many, many parameters and split it into smaller chunks?

You have to define a function in the external Groovy file and call it, passing all required parameters:
def a = 'foo'
node('slave') {
    git '…'
    def flow = load 'flowPartA.groovy'
    flow.echoFromA(a)
}
And flowPartA.groovy contains:
def echoFromA(String a) {
    echo a
}
return this
See the documentation for more information.
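Since the question mentions a workflow with many, many parameters, one way to keep the call sites manageable (a sketch, not from the original answer; names like cfg and runPartA are illustrative) is to bundle the shared values into a single Map and pass that one argument:
def cfg = [a: 'foo', branch: 'master', runTests: true] // illustrative values
node('slave') {
    git '…'
    def flow = load 'flowPartA.groovy'
    flow.runPartA(cfg)
}
and flowPartA.groovy would then read every value it needs from the Map parameter:
// flowPartA.groovy: everything arrives through the single Map parameter
def runPartA(Map cfg) {
    echo cfg.a
}
return this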

Related

parallel steps on different nodes on declarative jenkins pipeline cannot access variables defined in global scope

Let me preface this by saying that I don't yet fully understand how the Jenkins DSL / Groovy handles namespaces, scope, variables, etc.
In order to keep my code DRY I put repeated command sequences into variables.
It turns out the variable script below is not readable by the code in doParallelStuff. Why is that? Is there a way to share global variables defined in the script (or elsewhere) among both the main pipeline steps and the doParallelStuff code?
def script = """\
#!/bin/bash
python xyz.py
"""
def doParallelStuff() {
    def tests = [:]
    tests["1"] = {
        node {
            stage('ps1') {
                sh script
            }
        }
    }
    tests["2"] = {
        node {
            stage('ps2') {
                sh script
            }
        }
    }
    parallel tests
}
pipeline {
    stages {
        stage("myStage") {
            steps {
                script {
                    sh script
                    doParallelStuff()
                }
            }
        }
    }
}
The actual steps are a bit more complicated, but this causes an error like the following to be thrown:
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: script for class: WorkflowScript
When you define a variable outside of the pipeline directive using the def keyword, you are defining it in the local scope of the main script. Because the pipeline keyword is actually a method that is executed in that same main script, it can access the variable: they are defined and executed in the same scope (the script is actually transformed into a separate class).
When you define a function outside of the pipeline directive, that function has its own variable scope, separate from the scope of the main script, and therefore it cannot access a variable defined with def at the top level.
To solve it, define the variable without the def keyword, which changes the scope in which the variable is created: without def (in a Groovy script, not a class) the variable is added to the global variables of the script (the Binding), which makes it accessible from any function or code within the Groovy script. You can read more in the following question: What is the difference between defining variables using def and without?
So in your case you want a variable that is available both to the pipeline code itself and to the defined functions, i.e. available anywhere in the script as a global variable. Just define it without the def keyword, and it should do the trick:
script = """\
#!/bin/bash
python xyz.py
"""

No such field found: field java.lang.String sinput error when accessing cppcheck plugin classes

I am a junior dev trying to learn about Jenkins; I have been learning on my own for a couple of months. Currently I have a pipeline (just for learning purposes) which runs static analysis on a folder and then publishes it. I have been able to send a report through email using Jelly templates, and from there I realized it is possible to instantiate the classes of a plugin to use its methods. So I went to the cppcheck javadoc and did some trial and error to get some values from my report and then do something else with them. I had something like this in my pipeline:
pipeline {
    agent any
    stages {
        stage('analysis') {
            steps {
                script {
                    bat 'cppcheck "E:/My_project/Source/" --xml --xml-version=2 . 2> cppcheck.xml'
                }
            }
        }
        stage('Test') {
            steps {
                script {
                    publishCppcheck pattern: 'cppcheck.xml'
                    for (action in currentBuild.rawBuild.getActions()) {
                        def name = action.getClass().getName()
                        if (name == 'org.jenkinsci.plugins.cppcheck.CppcheckBuildAction') {
                            def cppcheckaction = action
                            def totalErrors = cppcheckaction.getResult().report.getNumberTotal()
                            println totalErrors
                            def warnings = cppcheckaction.getResult().statistics.getNumberWarningSeverity()
                            println warnings
                        }
                    }
                }
            }
        }
    }
}
whose output is:
[Pipeline] echo
102
[Pipeline] echo
4
My logic (wrongly) tells me that if I can access the report and statistics classes like that and use their methods getNumberTotal() and getNumberWarningSeverity() respectively, then I should also be able to access the DiffState class in the same way and use the valueOf() method to get an enum of the new errors. But adding this to my pipeline:
def nueva = cppcheckaction.getResult().diffState.valueOf(NEW)
println nueva
Gives me an error:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: No such field found: field org.jenkinsci.plugins.cppcheck.CppcheckBuildAction diffState
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.unclassifiedField(SandboxInterceptor.java:425)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:409)
...
I can see in the javadoc there is a DiffState class with a valueOf() method, but I cannot access it. Is there any other way to get the new errors between the last build and the current one?
I see 2 issues that could be causing this:
CppcheckResult doesn't have a member variable diffState, so you obviously can't access it.
If you check the javadoc of CppcheckResult, the class does have:
private CppcheckReport report;
public CppcheckStatistics getReport()
and
private CppcheckStatistics statistics;
public CppcheckStatistics getStatistics()
but there is no member (and no getter method) for diffState, so maybe try to call:
/**
 * Get differences between current and previous statistics.
 *
 * @return the differences
 */
public CppcheckStatistics getDiff() {
My suggestion: cppcheckaction.getResult().getDiff().valueOf(NEW). Furthermore, CppcheckWorkspaceFile does have a method getDiffState().
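If getDiff() does return the delta between the current and previous statistics, a rough sketch of reading one of its counters from inside the question's existing loop might look like this (the exact getters on the returned object are an assumption, based on the ones the question already calls; the same sandbox/approval caveats as below apply):
def diff = cppcheckaction.getResult().getDiff()
// assumption: the diff object exposes the same counters as 'statistics' above
println diff.getNumberWarningSeverity()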
Please have a look at the script approval of your Jenkins (see here).
The error might appear because the Jenkins Groovy sandbox blocks the execution of a method it considers "unknown" and potentially dangerous.
Jenkins settings - Script Approval - Approve your blocked method

Using global shared libraries in Jenkins to define parameter options

I am trying to use a global class that I've defined in a shared library to help organise job parameters. It's not working, and I'm not even sure if it is possible.
My job looks something like this:
pipelineJob('My-Job') {
    definition {
        // Job definition goes here
    }
    parameters {
        choiceParam('awsAccount', awsAccount.ALL)
    }
}
In a file in /vars/awsAccount.groovy I have the following code:
class awsAccount implements Serializable {
    final String SANDPIT = "sandpit",
    final String DEV = "dev",
    final String PROD = "prod"
    static String[] ALL = [SANDPIT, DEV, PROD]
}
Global pipeline libraries are configured to load implicitly from the my repository's master branch.
When attempting to update the DSL scripts I receive the error:
ERROR: (myJob.groovy, line 67) No such property: awsAccount for class: javaposse.jobdsl.dsl.helpers.BuildParametersContext
Why does it not find the class, and is it even possible to use shared library classes like this in a pipeline job?
Disclaimer: I know this works using a Jenkinsfile. Unfortunately, I have not tested it with Declarative Pipelines, but since there are no answers yet, it may be worth a try.
Regarding your first question: there are several reasons why a class from your shared-lib might not be found, starting with the library import, the library syntax, etc. But shared libraries definitely do work with the DSL. To be more precise, additional information would be great, but make sure that:
Your Groovy class definition uses exactly the directory structure described in the documentation (https://www.jenkins.io/doc/book/pipeline/shared-libraries/)
The name you give the shared-lib when you configure it in Jenkins is exactly the name you use in the import
You use the import as described in the documentation (under Using Libraries); a minimal example follows below
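For reference, a minimal explicit import looks like this ('my-shared-lib' is a placeholder for whatever name you configured in Jenkins; the trailing underscore imports everything under vars/ into the script):
@Library('my-shared-lib') _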
Regarding your second question (the one that gives this SO question its title): yes, you can build job parameters from information in your shared-lib, at least when using Jenkinsfiles. You can even define properties to be included in the pipeline. I got it working with a slightly tricky syntax due to various problems.
Again, I am using Jenkinsfile and this is what worked for me:
In my shared-lib class, I added a static function that introduces the build parameters. Notice the input parameters that function needs and its usage:
class awsAccount implements Serializable {
    //
    static giveMeParameters(script) {
        return [
            // Some parms
            script.string(defaultValue: '', description: 'A default parameter', name: 'textParm'),
            script.booleanParam(defaultValue: false, description: 'If set to True, do whatever you need - otherwise, do not do it', name: 'boolOption'),
        ]
    }
}
To introduce those parameters into the pipeline, you need to place the returned value of the function into the parameters array:
properties(
    parameters(
        awsAccount.giveMeParameters(this)
    )
)
Again, notice the syntax when calling the function. In the same way, you can also define functions in the shared-lib that return properties and reuse them in multiple jobs (disableConcurrentBuilds, buildDiscarder, etc.); a sketch of that idea follows.
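A hedged sketch of that last idea, not from the original answer (the helper name giveMeProperties and the retention value are illustrative; disableConcurrentBuilds, buildDiscarder and logRotator are standard pipeline steps):
class awsAccount implements Serializable {
    // returns common job properties instead of parameters
    static giveMeProperties(script) {
        return [
            script.disableConcurrentBuilds(),
            script.buildDiscarder(script.logRotator(numToKeepStr: '10')),
        ]
    }
}
A Jenkinsfile would then apply them with properties(awsAccount.giveMeProperties(this)).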

Jenkins pipeline - how to load a Jenkinsfile without first calling node()?

I have a somewhat unique setup where I need to be able to dynamically load Jenkinsfiles that live outside of the src I'm building. The Jenkinsfiles themselves usually call node() and then some build steps. This causes multiple executors to be eaten up unnecessarily because I need to have already called node() in order to use the load step to run a Jenkinsfile, or to execute the groovy if I read the Jenkinsfile as a string and execute it.
What I have in the job UI today:
@Library(value='myGlobalLib#head', changelog=false) _
node {
    load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
The Jenkinsfile that's loaded usually also calls node(). For example:
node('agent-type-foo') {
    someBuildFlavor {
        buildProperty = "some value unique to this build"
        someConfig = ["VALUE1", "VALUE2", "VALUE3"]
        runTestTarget = true
    }
}
This causes 2 executors to be consumed during the pipeline run. Ideally, I would load the Jenkinsfiles without first calling node(), but whenever I try, I get an error message stating:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
Is there any way to load a Jenkinsfile or execute groovy without first having hudson.FilePath context? I can't seem to find anything in the doc. I'm at the point where I'm going to preprocess the Jenkinsfiles to remove their initial call to node() and call node() with the value the Jenkinsfile was using, then load the rest of the file, but, that's somewhat too brittle for me to be happy with.
When using the load step, Jenkins evaluates the file. You can wrap your Jenkinsfile's logic in a function (named run() in my example) so that it is loaded but not run automatically.
def run() {
    node('agent-type-foo') {
        someBuildFlavor {
            buildProperty = "some value unique to this build"
            someConfig = ["VALUE1", "VALUE2", "VALUE3"]
            runTestTarget = true
        }
    }
}
// This return statement at the end of the Jenkinsfile is important
return this
Call it from your job script like this:
def jenkinsfile
node {
    jenkinsfile = load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
jenkinsfile.run()
This way there are no more nested node blocks, because the first one is closed before the run() function is called.

How to invoke DSL Step in overriding Global Library Function

Through experimentation I have determined that I can mask built-in pipeline steps such as build by defining a global function of the same name in a shared library.
Example:
(root)
+- vars
   +- build.groovy
where build.groovy is:
def call(Map args) {
    echo "BUILD: ${args}"
}
If I load this library, then none of my calls to build actually do anything. They just echo that build was called and with what args. This is very useful for testing out pipeline scripts to ensure the script logic itself is correct while avoiding actually doing long running tasks.
But testing is only one use of this. What I really want to do is decorate build, node, stage and a few other steps to capture usage metrics. For example to record for every node that is ever allocated, what time of day it was allocated, and how long it was allocated for. This could be really useful for capacity analysis and planning.
Another application would be to enforce certain policies, such that nodes always be allocated by label and never by explicit node name.
To make any of this work though, the node.groovy decorator needs some way to invoke the real node step that it is masking. Any ideas how to do this?
Figured this out this evening. All the DSL steps are available as members of the steps variable, which allows me to write something like:
@Library('pipeline-utils')
import mycompany.analytics.AnalyticsClient
import mycompany.analytics.Utils

node('linux') { sh 'echo test' }

def node(String label, Closure nodeAction) {
    def executionTime
    def actualNode
    def allocationTime = Utils.startMeasureDuration()
    steps.node(label) {
        allocationTime.stop()
        actualNode = env.NODE_NAME
        executionTime = Utils.measureDuration(nodeAction)
    }
    def fact = [
        type: 'node_usage',
        job_name: currentBuild.getProjectName(),
        node_label: label,
        node_name: actualNode,
        ts: allocationTime.startTS,
        time_in_queue: allocationTime.durationMillis,
        execution_time: executionTime.durationMillis
    ]
    AnalyticsClient.recordFact(fact)
    if (!executionTime.success) throw executionTime.exception
}
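The same masking trick should also cover the other steps mentioned above. A rough sketch for stage, reusing the Utils and AnalyticsClient helpers from the answer (this part is not from the original answer; the fact fields are illustrative):
def stage(String name, Closure body) {
    def timing
    steps.stage(name) {
        // time only the body of the stage, using the answer's measurement helper
        timing = Utils.measureDuration(body)
    }
    AnalyticsClient.recordFact([type: 'stage_usage', stage_name: name, execution_time: timing.durationMillis])
    if (!timing.success) throw timing.exception
}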
