I've got a piece of code that works perfectly in all of the Groovy interpreters I know of, including the Jenkins script console. Yet it behaves strangely in pipeline scripts.
def kvs = ['key1': 'value1', 'key2': 'value2']
println kvs
println kvs.inject('') { s, k, v -> s += "{'$k': '$v' } " }
First of all, the map is printed differently:
Expected: [key1:value1, key2:value2]
Got: {key1=value1, key2=value2}
Then, more of a problem, the yielded result differs drastically:
Expected: {'key1': 'value1' } {'key2': 'value2' }
Got: null
Both of these results were obtained with the following groovy version: 2.4.12.
(Though, outside of the pipeline script, I also tried versions 2.4.6 and 2.4.15 and always got the expected results)
Please note that I am not interested in workarounds. I only wish to understand why the behavior changed from normal groovy to pipeline script.
It is happening because Jenkins Pipeline does not actually run this Groovy code directly; it interprets it with its own parser in order to apply script security and keep the Jenkins system secure, amongst other things. To quote the scalability best-practices post (https://jenkins.io/blog/2017/02/01/pipeline-scalability-best-practice/): "Pipeline code is written as Groovy but the execution model is radically transformed at compile-time to Continuation Passing Style (CPS)." In short, don't write complex Groovy code in your pipelines; try to use the standard steps supplied by the pipeline DSL or plugins. Simple Groovy code in script sections can still be useful in some scenarios, however. Nowadays I put some of my more complex stuff in plugins that supply custom steps.
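To see that the CPS transformation really is the cause (this is meant as a demonstration, not a workaround recommendation): if you move the closure-taking code into a method annotated with @NonCPS, Jenkins skips the CPS transformation for that method and inject behaves exactly as in plain Groovy. A minimal sketch:

// @NonCPS methods run as ordinary Groovy, so closure-taking methods like inject work.
@NonCPS
def render(Map kvs) {
    kvs.inject('') { s, k, v -> s += "{'$k': '$v' } " }
}

node {
    def kvs = ['key1': 'value1', 'key2': 'value2']
    echo render(kvs)   // prints {'key1': 'value1' } {'key2': 'value2' }
}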
Related
I am trying to implement a Jenkins pipeline whereby the pipeline code itself is version-controlled in git.
To declare my parameters in a Declarative pipeline, I have to adhere to the following syntax:
...
pipeline {
    parameters {
        ...
    }
    ...
}
For the parameters section, how can I declare an Active Choices Reactive parameter, so that I can programmatically populate the choices using a Groovy script?
I know this is possible through the Jenkins UI via the job's Configure option; however, I am trying to understand how I can implement this behavior in Groovy code.
Thank you
Jenkins dynamic declarative pipeline parameters
Look at the solution in post #5, not the accepted one. It always works well.
For now this is the only way, because Active Choices (AC) was built for scripted pipelines and does not support declarative pipelines in a straightforward way.
I work a lot with AC in declarative pipelines, so my own approach is to always move all the properties (parameters), written in the scripted style, before the "pipeline" block; I even use them in Shared Libraries (as an init function) or load them from disk if needed, and write the rest of the pipeline declaratively, to get the advantages of both.
An additional benefit of that trick is that I can reuse a pipeline with different parameters, since they are loaded on the fly from Git or Shared Libraries.
A lifehack I found:
To build the Groovy code for the different AC types, go to the Pipeline Syntax snippet generator, select "properties: Set job properties" > "This project is parameterized" > add the AC parameter you need > fill in all the fields and choose the options you want (same as in the job UI) > generate the code. Then just copy the properties block, or only the AC part of it, and put it in your Jenkinsfile before "pipeline".
How do you automatically parse ansible warnings and errors in your jenkins pipeline jobs?
I greatly enjoy the power of leveraging Ansible in Jenkins when it works. Upon a failure, though, the hunt to locate the actual error can be challenging.
I use WarningsNG, which supports custom parsers (and allows their programmatic generation).
Do you know of any plugins or add-ons that already transform these logs into the kind of charts WarningsNG produces?
I figured I'd ask as I go off into deep regex land and make my own.
One good way to achieve this seems to be the following:
- Select an existing structured-output Ansible callback plugin (json, junit and yaml are all viable). I selected junit, as I can play with the format to get a really nice view into the playbook, with errors reported in a very obvious way.
- Fork that GPL file (yes, so be careful with that license) and augment it to:
  - store the output as a file
  - implement the missing callback methods (the three mentioned above do not implement the v2...item callbacks)
  - forward events to the default or debug callback, so operators see something when they execute the plan
  - add a secrets cleaner: if you use the Jenkins credentials-binding plugin, it will hide secrets from the console, but it will not hide secrets within stored files. You'll need to handle that in your playbook or via some Groovy code (if Groovy, try {...} finally { clean } seems a good pattern; see the sketch after the snippet below)
Snippet - forwarding to the default callback
from ansible.plugins.callback import CallbackBase  # import added; required by the class below
from ansible.plugins.callback.default import CallbackModule as CallbackModule_default
...

class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'json'

    def __init__(self, display=None):
        super(CallbackModule, self).__init__(display)
        # Keep a default callback instance so events can be forwarded to it.
        self.default_callback = CallbackModule_default()

    ...

    def v2_on_file_diff(self, result):
        self.default_callback.v2_on_file_diff(result)
        # ... do whatever you'd want to ensure the content appears in the json file
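On the Jenkins side, the pipeline then only has to select the callback and publish the report. A minimal sketch, assuming the forked callback is installed where Ansible can find it, that my_junit, site.yml and reports/ansible-junit.xml are placeholder names, and that scripts/scrub-secrets.sh stands in for your own (hypothetical) secrets cleaner:

node {
    try {
        // Select the custom stdout callback for this run (placeholder names).
        sh 'ANSIBLE_STDOUT_CALLBACK=my_junit ansible-playbook site.yml'
    } finally {
        // The try/finally pattern from above: scrub secrets from stored files,
        // then publish the report even if the playbook failed.
        sh 'scripts/scrub-secrets.sh reports/'
        junit 'reports/ansible-junit.xml'
    }
}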
Often when looking at scripted jenkins pipeline code, I see this pattern...
step([$class: 'GitHubSetCommitStatusBuilder',
      statusMessage: [content: 'Pipeline Started']])
I have not had any luck finding documentation on this technique and would love it if someone could explain what this is doing and when/why it is useful. I believe this is a way to instantiate and populate the members of an underlying groovy class - but more detail would be appreciated.
Also is this documented anywhere?
Here is a reference that briefly explains the syntax. Basically, you are providing the step() function with a map of arguments in the form of name-value pairs. The first argument, denoted by the special name $class, tells the function which class (plugin) to instantiate.
It also seems that this syntax is being deprecated in favor of a shorter syntax, also explained in the same link.
I am also struggling with this syntax, but unfortunately have not found any docs yet.
My guess is that this syntax is used to do instance initialization.
All step classes implement the BuildStep interface. After the script is loaded, all step instances are initialized, and their perform methods are then invoked during the build.
All of the above is my conjecture.
There's also another quick reference in the "Pipeline: Basic Steps" GitHub repo documentation:
https://github.com/jenkinsci/workflow-basic-steps-plugin/blob/master/CORE-STEPS.md
Syntax
As an example, you can write a Pipeline script:
node {
    sh 'make something'
    step([$class: 'ArtifactArchiver', artifacts: 'something'])
}
Here we are running the standard Archive the artifacts post-build action (hudson.tasks.ArtifactArchiver), and configuring the Files to archive property (artifacts) to archive our file something produced in an earlier step. The easiest way to see what class and field names to use is to use the Snippet Generator feature in the Pipeline configuration page.
See the compatibility list for the list of currently supported steps.
Simplified syntax
Build steps and post-build actions in Jenkins core as of 2.2, and in some plugins according to their individual changelogs, have defined symbols which allow for a more concise syntax. Snippet Generator will offer this when available. For example, the above script may also be written:
node {
    sh 'make something'
    archiveArtifacts 'something'
}
NB: The simplified syntax makes reference to plugin step parameters.
I'm attempting to write a global function script that uses groovy.sql.SQL.
When adding the annotation @GrabConfig(systemClassLoader=true), I get an exception when using the global function in a Jenkinsfile.
Here is the exception:
hudson.remoting.ProxyException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during conversion: No suitable ClassLoader found for grab
Here is my code:
@GrabResolver(name='nexus', root='http://internal.repo.com')
@GrabConfig(systemClassLoader=true)
@Grab('com.microsoft.sqlserver:sqljdbc4:4.0')
import groovy.sql.Sql
import com.microsoft.sqlserver.jdbc.SQLServerDriver

def call(name) {
    echo "Hello world, ${name}"
    Sql.newInstance("jdbc:sqlserver://ipaddress/dbname", "username", "password", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    // sql.execute "select count(*) from TableName"
}
Ensure that the "Use Groovy Sandbox" checkbox is unticked (it's below the Pipeline script text box).
As explained here, Pipeline "scripts" are not simple Groovy scripts, they are heavily transformed before running, some parts on master, some parts on slaves, with their state (variable values) serialized and passed to the next step. As such, not every Groovy feature is supported.
I'm not sure about @Grab support. It is discussed in JENKINS-26192 (which is declared as resolved, so maybe it works now).
Extract from a very interesting comment:
"If you need to perform some complex or expensive tasks with unrestricted Groovy physically running on a slave, it may be simplest and most effective to simply write that code in a *.groovy file in your workspace (for example, in an SCM checkout) and then use tool and sh/bat to run Groovy as an external process; or even put this stuff into a Gradle script, Groovy Maven plugin execution, etc. The workflow script itself should be limited to simple and extremely lightweight logical operations focused on orchestrating the overall flow of control and interacting with other Jenkins features—slave allocation, user input, and the like."
In short, if you can move that custom part that needs SQL to an external script and execute that in a separate process (called from your Pipeline script), that should work. But doing this in the Pipeline script itself is more complicated.
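A minimal sketch of that approach, assuming the SQL code lives in a (hypothetical) db-report.groovy in your SCM checkout and that a Groovy installation is configured in Jenkins under the placeholder tool name 'groovy-2.4':

node {
    checkout scm
    // Run the script as a plain external Groovy process: @Grab and
    // groovy.sql.Sql work there without the CPS/sandbox restrictions.
    def groovyHome = tool 'groovy-2.4'
    sh "${groovyHome}/bin/groovy db-report.groovy"
}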
So my google-fu has failed me and the Jenkins O'Reilly book isn't helping either.
I have a Jenkins setup with a master and 20-odd nodes. I make heavy use of custom environment variables on the nodes, as many of them perform similar tasks, only on slightly different platforms.
I'm now in a position that I have a job that runs on the master (by necessity), which needs to know certain node properties for any given node, including some of the environment variables that I've set up.
Is there any way to reference these? My alternative seems to be to define hundreds of environment variables on the master in the form node1_var1, node2_var1, node1_var2, node2_var2, etc., which just seems messy. The master clearly has knowledge of the variables, as that's where they are configured, but I just can't find a way to reference them in a job.
Any help (or ridicule and pointing out of obvious answers) much appreciated...
Here's a simple Groovy script that prints the list of environment variables for each slave:
// Iterate over all slaves and print the environment variables configured
// on each one (Node Properties -> Environment variables).
for (slave in jenkins.model.Jenkins.instance.slaves) {
    println(slave.name + ": ")
    def props = slave.nodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
    for (prop in props) {
        for (envvar in prop.envVars) {
            println envvar.key + " -> " + envvar.value
        }
    }
}
Warning: I am not an experienced Groovy programmer, so I may not be using the appropriate idioms of Groovy.
You can run this from the Jenkins script console in order to experiment. You can also run a "System Groovy Script" as a build step. Both of the above require the Groovy plugin. If you don't use Groovy in your job, you could use this script to write a properties file that you load in the part of your build that does the real work.
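If the job on the master only needs the variables of one specific node, you can also look that node up directly; a small sketch along the same lines, with 'node1' as a placeholder node name:

// Print the environment variables configured on a single named node.
def node = jenkins.model.Jenkins.instance.getNode('node1')
def props = node.nodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
props.each { prop ->
    prop.envVars.each { k, v ->
        println "${k}=${v}"
    }
}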