I'm attempting to write a global function script that uses groovy.sql.Sql.
When I add the annotation @GrabConfig(systemClassLoader=true), I get an exception when using the global function in a Jenkinsfile.
Here is the exception:
hudson.remoting.ProxyException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during conversion: No suitable ClassLoader found for grab
Here is my code:
@GrabResolver(name='nexus', root='http://internal.repo.com')
@GrabConfig(systemClassLoader=true)
@Grab('com.microsoft.sqlserver:sqljdbc4:4.0')
import groovy.sql.Sql
import com.microsoft.sqlserver.jdbc.SQLServerDriver
def call(name) {
    echo "Hello world, ${name}"
    def sql = Sql.newInstance("jdbc:sqlserver://ipaddress/dbname", "username", "password",
            "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    // sql.execute "select count(*) from TableName"
}
Ensure that the "Use groovy sandbox" checkbox is unticked (it's below the pipeline script text box).
As explained here, Pipeline "scripts" are not simple Groovy scripts; they are heavily transformed before running, some parts on the master, some parts on slaves, with their state (variable values) serialized and passed to the next step. As such, not every Groovy feature is supported.
I'm not sure about @Grab support. It is discussed in JENKINS-26192 (which is declared resolved, so maybe it works now).
Extract from a very interesting comment:
If you need to perform some complex or expensive tasks with unrestricted Groovy physically running on a slave, it may be simplest and most effective to simply write that code in a *.groovy file in your workspace (for example, in an SCM checkout) and then use tool and sh/bat to run Groovy as an external process; or even put this stuff into a Gradle script, Groovy Maven plugin execution, etc. The workflow script itself should be limited to simple and extremely lightweight logical operations focused on orchestrating the overall flow of control and interacting with other Jenkins features—slave allocation, user input, and the like.
In short, if you can move that custom part that needs SQL to an external script and execute that in a separate process (called from your Pipeline script), that should work. But doing this in the Pipeline script itself is more complicated.
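A minimal sketch of that pattern (the script name, and the assumption that a groovy executable is on the agent's PATH, are mine, not from the original answer):

node {
    // Write the JDBC logic to a plain Groovy file in the workspace
    // (in practice it would more likely come from the SCM checkout).
    writeFile file: 'query.groovy', text: '''
@GrabConfig(systemClassLoader=true)
@Grab('com.microsoft.sqlserver:sqljdbc4:4.0')
import groovy.sql.Sql

def sql = Sql.newInstance("jdbc:sqlserver://ipaddress/dbname", "username", "password",
        "com.microsoft.sqlserver.jdbc.SQLServerDriver")
println sql.firstRow("select count(*) as total from TableName").total
'''
    // Run it as an external process, outside the CPS-transformed pipeline,
    // where @Grab works normally.
    sh 'groovy query.groovy'
}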
Related
In Jenkins, I want to automatically run a function on load of the shared library, which is loaded implicitly at a global level. This would allow me to enforce certain functions in every pipeline.
This means a user would not have to define anything in the pipeline script itself to have it run.
What I tried:
//src/org/test/Always.groovy
#!/usr/bin/env groovy
package org.test

class Always implements Serializable {
    Always() {
        println "Always print me"
    }
}

Always()
This does not appear to do anything, however. I would expect it to always instantiate the Always class and print "Always print me".
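One reason nothing happens: files under src/ are compiled as library classes, not executed as scripts, so the trailing Always() line never runs (and it would need the new keyword anyway). The constructor only fires when something actually references the class, for example from a hypothetical vars/ step (a sketch; the step name is mine, for illustration only):

// vars/alwaysInit.groovy -- hypothetical global step; the constructor
// only runs when a pipeline actually invokes alwaysInit().
import org.test.Always

def call() {
    new Always()   // prints "Always print me"
}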
A global-pre-script-plugin exists that seems to fit your use case. It can execute a Groovy script before each job/build starts. I am not sure whether the script can load shared libraries and inject methods from them (maybe as a Closure variable?); this is something we would need to test :)
The plugin's last commit is from March 2020, though, so it looks rather unmaintained to me.
This plugin makes it possible to execute a groovy script at the start
of every job
Features:
Applies to all jobs/builds run in the server
Executes a groovy script when a build starts
Injects any number of variables in the build
Injects properties based on the content of another property
Executes a script when the build starts
Very light plugin
Failures in the script do not abort the build
I've got a piece of code that works perfectly in every Groovy interpreter I know of, including the Jenkins script console. Yet it behaves strangely in pipeline scripts.
def kvs = ['key1': 'value1', 'key2': 'value2']
println kvs
println kvs.inject(''){ s,k,v -> s+= "{'$k': '$v' } "}
First of all, the map is printed differently:
Expected: [key1:value1, key2:value2]
Got: {key1=value1, key2=value2}
Then, more of a problem, the yielded result differs drastically:
Expected: {'key1': 'value1' } {'key2': 'value2' }
Got: null
Both of these results were obtained with the following groovy version: 2.4.12.
(Though, outside of the pipeline script, I also tried versions 2.4.6 and 2.4.15 and always got the expected results)
Please note that I am not interested in workarounds. I only wish to understand why the behavior changed from normal groovy to pipeline script.
It is happening because Jenkins is not actually running this Groovy code directly; it interprets it with a parser, among other things to apply script security and keep the Jenkins system secure. To quote: "Pipeline code is written as Groovy but the execution model is radically transformed at compile-time to Continuation Passing Style (CPS)." - see the best practices at https://jenkins.io/blog/2017/02/01/pipeline-scalability-best-practice/. In short, don't write complex Groovy code in your pipelines; try to use the standard steps supplied by the pipeline DSL or plugins. Simple Groovy code in script sections can still be useful in some scenarios, however. Nowadays I put some of my more complex stuff in plugins that supply custom steps.
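To make the CPS boundary concrete (a sketch, not part of the original answer): a method annotated with @NonCPS is executed as ordinary Groovy, so the same inject call behaves exactly as it does in a normal interpreter:

@NonCPS
String renderMap(Map kvs) {
    // Non-CPS code runs as plain Groovy, so closure-based idioms
    // like inject keep their standard semantics.
    kvs.inject('') { s, k, v -> s += "{'$k': '$v' } " }
}

node {
    def kvs = ['key1': 'value1', 'key2': 'value2']
    echo renderMap(kvs)   // prints: {'key1': 'value1' } {'key2': 'value2' }
}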
I have a pipeline created that is a series of Powershell jobs in various parallel stages. Whilst the jobs are in stages, there is no dependency between them (I only split them into stages in order to avoid conflicts).
I want to gather a report from every job, but at pipeline level. Each job will output a single line of text, but the full report needs to be at pipeline level. The current pipeline console output just says that the job is starting and stopping; no additional output is brought in from the jobs. I've considered the following:
I have seen the stash/unstash option, but that seems to be at a file level and I'm not sure how to use that to generate a report.
I can see the echo command in pipeline, but can't see a way of passing a string/variable from the job to the pipeline.
I tried taking the pipeline 'WORKSPACE' variable to pass to the job so the job can write directly to a single file, but the variable didn't work (and I've no idea if this is violating some unwritten 'rule' of pipelines).
How can I get a single line of text, from each job in a pipeline, out to a single text file?
"I can see the echo command in pipeline, but can't see a way of passing a string/variable from the job to the pipeline."

If you use PowerShell, you can use Write-Host or -Verbose.

"I tried taking the pipeline 'WORKSPACE' variable to pass to the job so the job can write directly to a single file, but the variable didn't work (and I've no idea if this is violating some unwritten 'rule' of pipelines)."

If you want to use a Jenkins variable in a batch step, wrap it in "%" signs:
%WORKSPACE%
In a PowerShell step, use $env:WORKSPACE instead.
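For gathering the report itself, one pipeline-level pattern (the branch names, messages, and report file name below are hypothetical) is to have each parallel branch capture its one line of output in a shared map, then write the combined file once:

def results = [:]

node {
    parallel(
        jobA: {
            // returnStdout captures the single line the PowerShell job prints
            results.jobA = powershell(returnStdout: true, script: 'Write-Output "jobA finished OK"').trim()
        },
        jobB: {
            results.jobB = powershell(returnStdout: true, script: 'Write-Output "jobB finished OK"').trim()
        }
    )
    // Combine the per-job lines into one pipeline-level report.
    writeFile file: 'report.txt', text: results.collect { k, v -> "${k}: ${v}" }.join('\n')
    archiveArtifacts artifacts: 'report.txt'
}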
A custom plugin we wrote for an older version of Jenkins uses an EnvironmentContributingAction to provide environment variables to the execution so they could be used in future build steps and passed as parameters to downstream jobs.
While attempting to convert our build to workflow, I'm having trouble accessing these variables:
node {
// this step queries an API and puts the results in
// environment variables called FE1|BE1_INTERNAL_ADDRESS
step([$class: 'SomeClass', parameter: foo])
// this ends up echoing 'null and null'
echo "${env.FE1_INTERNAL_ADDRESS} and ${env.BE1_INTERNAL_ADDRESS}"
}
Is there a way to access the environment variable that was injected? Do I have to convert this functionality to a build wrapper instead?
EnvironmentContributingAction is currently limited to AbstractBuilds, which WorkflowRuns are not, so pending JENKINS-29537 which I just filed, your plugin would need to be modified somehow. Options include:
Have the builder add a plain Action instead, then register an EnvironmentContributor whose buildEnvironmentFor(Run, …) checks for its presence using Run.getAction(Class); see the sketch after this list.
Switch to a SimpleBuildWrapper which defines the environment variables within a scope, then invoke it from Workflow using the wrap step.
Depend on workflow-step-api and define a custom Workflow Step with comparable functionality but directly returning a List<String> or whatever makes sense in your context. (code sample)
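A rough sketch of that first option, assuming the action simply carries a map of the addresses the builder computed (class and field names are invented for illustration):

// Invisible marker action that the builder attaches to the build.
class AddressesAction extends hudson.model.InvisibleAction {
    final Map<String, String> addresses
    AddressesAction(Map<String, String> addresses) { this.addresses = addresses }
}

// Contributor that turns the action's payload into environment variables.
@hudson.Extension
class AddressesContributor extends hudson.model.EnvironmentContributor {
    @Override
    void buildEnvironmentFor(hudson.model.Run run, hudson.EnvVars envs, hudson.model.TaskListener listener) {
        // Only contribute variables when the marker action is present.
        def action = run.getAction(AddressesAction)
        if (action != null) {
            action.addresses.each { k, v -> envs.put(k, v) }
        }
    }
}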
Since PR-2975 is merged, you are able to use the new interface:

void buildEnvVars(@Nonnull Run<?, ?> run, @Nonnull EnvVars env, @CheckForNull Node node)

It will be used by the old type of builds as well.
So my google-fu has failed me and the Jenkins O'Reilly book isn't helping either.
I have a Jenkins setup with a master and 20-odd nodes. I make heavy use of custom environment variables on the nodes, as many of them perform similar tasks, only on slightly different platforms.
I'm now in a position that I have a job that runs on the master (by necessity), which needs to know certain node properties for any given node, including some of the environment variables that I've set up.
Is there any way to reference these? My alternative seems to be to have hundreds of environment variables in the master in the form node1_var1, node2_var1, node1_var2, node2_var2 etc., which just seems messy. The master clearly has knowledge of the variables, as that's where the configuration for them is done, but I just can't find a way to specify them in a job.
Any help (or ridicule and pointing out of obvious answers) much appreciated...
Here's a simple Groovy script that prints the list of environment variables for each slave:
for (slave in jenkins.model.Jenkins.instance.slaves) {
    println(slave.name + ": ")
    def props = slave.nodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
    for (prop in props) {
        for (envvar in prop.envVars) {
            println envvar.key + " -> " + envvar.value
        }
    }
}
Warning: I am not an experienced Groovy programmer, so I may not be using the appropriate idioms of Groovy.
You can run this from the Jenkins script console in order to experiment. You can also run a "System Groovy Script" as a build step. Both of the above require the Groovy plugin. If you don't use Groovy in your job, you could use this script to write a properties file that you load in the part of your build that does the real work.
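Building on that, here is a sketch of the properties-file variant mentioned above, written as a System Groovy build step (the file name and the node1.VAR key format are my own choices):

// System Groovy build step: runs on the master with a 'build' variable bound.
def sb = new StringBuilder()
for (slave in jenkins.model.Jenkins.instance.slaves) {
    def props = slave.nodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
    for (prop in props) {
        for (envvar in prop.envVars) {
            // One line per variable, prefixed with the node name: node1.VAR=value
            sb.append("${slave.name}.${envvar.key}=${envvar.value}\n")
        }
    }
}
// The job in question runs on the master, so a plain File write is fine here.
new File(build.workspace.toString(), 'node-env.properties').text = sb.toString()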