Jenkins: Method too large

After modifying my Jenkinsfile slightly, by adding one more environment variable to
environment {
    ...
    uuid = <256 char long uuid>
}
I get this error:
17:37:34 Library piper-lib-os#v1.221.0 is cached. Copying from home.
17:37:35 org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
17:37:35 General error during class generation: Method too large: WorkflowScript.___cps___18504794 ()Lcom/cloudbees/groovy/cps/impl/CpsFunction;
17:37:35
17:37:35 groovyjarjarasm.asm.MethodTooLargeException: Method too large: WorkflowScript.___cps___18504794 ()Lcom/cloudbees/groovy/cps/impl/CpsFunction;
I've searched for this but cannot find any related issue, as the only change was adding an environment variable.

Java has a 64KB bytecode limit per method. The whole pipeline block is compiled into a single method, so everything inside it, including the environment block, counts toward that limit.
You might need to break your pipeline into methods, as in the example below.
For environment you can create a custom method that returns the value you need:
pipeline {
    environment {
        ...
        MYENV = getEnvUUID()
        ...
    }
    ...
}

def getEnvUUID() {
    return 'really-long-uuid'
}
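For reference, here is a minimal, self-contained sketch of that pattern; the agent, stage and variable names are illustrative, not from the original Jenkinsfile:
pipeline {
    agent any
    environment {
        // Long literal is produced by a helper instead of being written inline
        MYENV = getEnvUUID()
    }
    stages {
        stage('Show') {
            steps {
                echo "MYENV is ${env.MYENV}"
            }
        }
    }
}

// Defined outside the pipeline block, so its body is compiled into its own
// method and does not count toward the size of the pipeline's method.
def getEnvUUID() {
    return 'really-long-uuid'
}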

Related

Getting java.lang.IllegalArgumentException: One or more variables have some issues with their values in jenkins pipeline

I have a Jenkins declarative pipeline wherein I am trying to store the value returned from a method into an environment variable, as shown below.
steps {
    script {
        def job = getJob(JOB_NAME)
        def param = getParam(job, "Ser")
        echo param.getValue()
    }
}
environment {
    p_values = param.getValue()
}
But while running the above script I am getting the error below.
java.lang.IllegalArgumentException: One or more variables have some issues with their values: p_values
Could you please assist me here to resolve this issue?
I think the environment block is evaluated before the script block.
You can try assigning the value to a new environment variable within the script block, rather than in the environment block, as follows:
script {
    def job = getJob(JOB_NAME)
    def param = getParam(job, "Ser")
    echo param.getValue()
    env.p_values = param.getValue()
}
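To show where that assignment fits, here is a minimal sketch of a full pipeline; getJob and getParam are the helpers from the question, and the stage names are illustrative. A variable set on env in one stage is visible in later stages:
pipeline {
    agent any
    stages {
        stage('Read parameter') {
            steps {
                script {
                    def job = getJob(JOB_NAME)        // helper from the question
                    def param = getParam(job, "Ser")  // helper from the question
                    env.p_values = param.getValue()   // set dynamically instead of in environment {}
                }
            }
        }
        stage('Use parameter') {
            steps {
                echo "p_values is ${env.p_values}"
            }
        }
    }
}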

parallel steps on different nodes on declarative jenkins pipeline cannot access variables defined in global scope

Let me preface this by saying that I don't yet fully understand how jenkins DSL / groovy handles namespace, scope, variables etc.
In order to keep my code DRY I put repeated command sequences into variables.
It turns out the variable script below is not readable by the code in doParallelStuff. Why is that? Is there a way to share global variables defined in the script (or elsewhere) among both the main pipeline steps and the doParallelStuff code?
def script = """\
#/bin/bash
python xyz.py
"""
def doParallelStuff() {
    tests["1"] = {
        node {
            stage('ps1') {
                sh script
            }
        }
    }
    tests["2"] = {
        node {
            stage('ps2') {
                sh script
            }
        }
    }
    parallel tests
}

pipeline {
    stages {
        stage("myStage") {
            steps {
                script {
                    sh script
                    doParallelStuff()
                }
            }
        }
    }
}
The actual steps are a bit more complicated, but this causes an error like the following to be thrown:
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: script for class: WorkflowScript
When you define a variable outside of the pipeline directive using the def keyword, you are defining it in the local scope of the main script. Because the pipeline keyword is actually a method executed in that same script, it can access the variable: they are defined and executed in the same scope (even though the pipeline is eventually transformed into a separate class).
When you define a function outside of the pipeline directive, that function has its own variable scope, separate from the scope of the main script, and therefore it cannot access a variable defined with def at the top level.
To solve it, define the variable without the def keyword, which changes the scope in which the variable is created: without def (in a Groovy script, not a class), the variable is added to the script's global variables (the Binding), which makes it accessible from any function or code within the Groovy script. You can read more in the following question: What is the difference between defining variables using def and without?
So in your case you want a variable that is available both to the pipeline code itself and to the defined functions, which means it needs to be a global variable available anywhere in the script. Just define it without the def keyword and it should do the trick:
script = """\
#/bin/bash
python xyz.py
"""

java.lang.IllegalArgumentException: Expected named arguments while executing jobs in parallel

I have a monorepo and I am trying to make the individual builds run in parallel.
def abc = findJenkinsfileToRun(modifiedFiles)
parallel {
    for (file in abc) {
        println("Building ${file.toString()}")
        load "${file.toString()}/Jenkinsfile"
    }
}
This results in the following error:
java.lang.IllegalArgumentException: Expected named arguments but got org.jenkinsci.plugins.workflow.cps.CpsClosure2#b7ccdc
Can anyone help me resolve this?
You are not using the parallel keyword correctly: it should receive a map with branch names as keys and execution code (closures) as values. See the documentation.
So in your case you should use something like:
def abc = findJenkinsfileToRun(modifiedFiles)
parallel abc.collectEntries { file ->
    ["Building ${file.toString()}": {
        // The code to run in parallel
        println("Building ${file.toString()}")
        load "${file.toString()}/Jenkinsfile"
    }]
}
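For illustration, assuming findJenkinsfileToRun returned something like ['serviceA', 'serviceB'] (hypothetical names), the collectEntries call above builds a map equivalent to the following, which is the shape parallel expects:
parallel(
    "Building serviceA": {
        println("Building serviceA")
        load "serviceA/Jenkinsfile"
    },
    "Building serviceB": {
        println("Building serviceB")
        load "serviceB/Jenkinsfile"
    }
)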

How to invoke DSL step in overriding global library function

Through experimentation I have determined that I can mask the built-in pipeline steps such as build by defining a global function of the same name in a shared library.
Example:
(root)
+- vars
   +- build.groovy
where build.groovy is:
def call(Map args) {
    echo "BUILD: ${args}"
}
If I load this library, then none of my calls to build actually do anything. They just echo that build was called and with what args. This is very useful for testing out pipeline scripts to ensure the script logic itself is correct while avoiding actually doing long running tasks.
But testing is only one use of this. What I really want to do is decorate build, node, stage and a few other steps to capture usage metrics. For example to record for every node that is ever allocated, what time of day it was allocated, and how long it was allocated for. This could be really useful for capacity analysis and planning.
Another application would be to enforce certain policies, such that nodes always be allocated by label and never by explicit node name.
To make any of this work though, the node.groovy decorator needs some way to invoke the real node step that it is masking. Any ideas how to do this?
Figured this out this evening. All the DSL steps are available as members of the steps variable, which allows me to write something like:
@Library('pipeline-utils')
import mycompany.analytics.AnalyticsClient
import mycompany.analytics.Utils

node('linux') { sh 'echo test' }

def node(String label, Closure nodeAction) {
    def executionTime
    def actualNode
    def allocationTime = Utils.startMeasureDuration()
    steps.node(label) {
        allocationTime.stop()
        actualNode = env.NODE_NAME
        executionTime = Utils.measureDuration(nodeAction)
    }

    def fact = [
        type: 'node_usage',
        job_name: currentBuild.getProjectName(),
        node_label: label,
        node_name: actualNode,
        ts: allocationTime.startTS,
        time_in_queue: allocationTime.durationMillis,
        execution_time: executionTime.durationMillis
    ]
    AnalyticsClient.recordFact(fact)

    if (!executionTime.success) throw executionTime.exception
}
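As a reference, here is a stripped-down sketch of the same masking idea without the custom analytics classes; the label and the timing echo are purely illustrative:
// Call site: resolves to the decorator defined below, not to the built-in step.
node('linux') { sh 'echo test' }

// Decorator: note how long node allocation took, then delegate to the real
// step through the 'steps' variable.
def node(String label, Closure body) {
    def requested = System.currentTimeMillis()
    steps.node(label) {
        echo "node ${env.NODE_NAME} allocated after ${System.currentTimeMillis() - requested} ms"
        body()
    }
}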

How does variable scoping work when splitting a workflow into smaller chunks?

I have a very long workflow for building and testing our application. So long, in fact, that when we try to load the main workflow script, we get this exception:
java.lang.ClassFormatError: Invalid method Code length 67768 in class file WorkflowScript
I am not proud of this. I'm trying to split the workflow into smaller scripts that we load from the main workflow script, but am running into an issue with variable scoping. For example:
def a = 'foo' // some variable referenced in multiple workflow stages
node {
    echo a
}
// ... and then a whole bunch of other stages
might become
def a = 'foo' // some variable referenced in multiple workflow stages
node {
    git: ...
    load 'flowPartA.groovy'
}()
where flowPartA.groovy looks like:
{ ->
    node {
        echo a
    }
}
Based on my understanding of the documentation, where flowPartA.groovy is interpreted as a closure, I expected the variable 'a' to remain in scope, but instead I get an exception to the contrary.
groovy.lang.MissingPropertyException: No such property: a for class: groovy.lang.Binding
Am I missing something about the way workflow interprets the flow scripts? Is there a good way to take a huge workflow that uses many, many parameters and split it into smaller chunks?
You have to define a function in the external Groovy script and call it, passing all required parameters:
def a = 'foo'
node('slave') {
    git '…'
    def flow = load 'flowPartA.groovy'
    flow.echoFromA(a)
}
And flowPartA.groovy contains:
def echoFromA(String a) {
    echo a
}

return this
See the documentation for more information.
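If the workflow really has many parameters, one variant of the same pattern (the names here are illustrative) is to pass them across the load boundary as a single map:
// Main workflow script
def config = [a: 'foo', branch: 'main', runTests: true]
node('slave') {
    def flow = load 'flowPartA.groovy'
    flow.run(config)
}
with flowPartA.groovy containing:
// flowPartA.groovy
def run(Map config) {
    echo config.a
    if (config.runTests) {
        echo "would run tests on ${config.branch}"
    }
}

return this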

Resources