Often when looking at scripted Jenkins pipeline code, I see this pattern...
step([$class: 'GitHubSetCommitStatusBuilder',
      statusMessage: [content: 'Pipeline Started']])
I have not had any luck finding documentation on this technique and would love it if someone could explain what this is doing and when/why it is useful. I believe this is a way to instantiate and populate the members of an underlying Groovy class, but more detail would be appreciated.
Also is this documented anywhere?
Here is a reference that briefly explains the syntax. Basically, you are providing the step() function a map of arguments in the form of name-value pairs. The first argument, denoted by the special name $class, tells the function which class (plugin step) to instantiate.
It also seems that this syntax is being deprecated in favor of a shorter syntax, also explained in the same link.
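To make the binding concrete: every key in the map except $class is matched against the named class's constructor parameters and setters (Jenkins data binding). A rough sketch, using the ArtifactArchiver example discussed below (the fingerprint property is an extra illustration, not part of the original snippet):

step([$class: 'ArtifactArchiver',   // instantiates hudson.tasks.ArtifactArchiver
      artifacts: 'something',       // bound to the "artifacts" constructor parameter
      fingerprint: true])           // bound to the "fingerprint" setter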
I am also struggling with this syntax, but unfortunately I have not found any docs yet.
I guess this syntax is used to perform instance initialization.
All step classes implement the BuildStep interface. After the script is loaded, all step instances are initialized, and then their perform methods are invoked during the build.
All of the above is my conjecture.
There's also another quick reference in the "Pipeline: Basic Steps" GitHub repo documentation:
https://github.com/jenkinsci/workflow-basic-steps-plugin/blob/master/CORE-STEPS.md
Syntax
As an example, you can write a Pipeline script:
node {
    sh 'make something'
    step([$class: 'ArtifactArchiver', artifacts: 'something'])
}
Here we are running the standard Archive the artifacts post-build action (hudson.tasks.ArtifactArchiver), and configuring the Files to archive property (artifacts) to archive our file something produced in an earlier step. The easiest way to see what class and field names to use is to use the Snippet Generator feature in the Pipeline configuration page.
See the compatibility list for the list of currently supported steps.
Simplified syntax
Build steps and post-build actions in Jenkins core as of 2.2, and in some plugins according to their individual changelogs, have defined symbols which allow for a more concise syntax. Snippet Generator will offer this when available. For example, the above script may also be written:
node {
    sh 'make something'
    archiveArtifacts 'something'
}
NB: the simplified syntax still takes the plugin step's parameters.
Related
The Jenkins Job DSL Plugin documentation describes the following:
removeAction - "action to be taken for job that have been removed from DSL scripts"
removeViewAction - "action to be taken for views that have been removed from DSL scripts"
However the pipeline documentation for Job DSL lists slightly different names:
removedJobAction (extra d and Job)
removedViewAction (extra d)
They seem to have the same effect so why are there 2 subtly different spellings for the same thing?
Having resorted to the job-dsl-plugin source code (here), it turns out the shorter names (listed first above) are basically pass-through methods to their longer-named counterparts, but with extra checks on the values passed to ensure they are in the expected list of actions.
So you can use either, but removeAction and removeViewAction will throw helpful errors if you pass an invalid string.
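In a Pipeline, the longer names are the ones the jobDsl step accepts. A minimal sketch (the targets path is illustrative):

jobDsl targets: 'jobs/**/*.groovy',
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'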
I am trying to implement a Jenkins pipeline where the pipeline code itself is version-controlled in Git.
To declare my parameters in a Declarative pipeline, I have to adhere to the following syntax:
...
pipeline {
    parameters {
        ...
    }
    ...
}
For the parameters section, how can I declare an Active Choices Reactive parameter, so that I can programmatically populate the choices using a Groovy script?
I know this is possible via the Jenkins UI using the pipeline's Configure option, however I am trying to understand how I can implement this behavior in Groovy code.
Thank you
Jenkins dynamic declarative pipeline parameters
Look at the solution in post #5, not the accepted one. It always works well.
This is the only way for now, because Active Choices (AC) was built for scripted pipelines and does not straightforwardly support declarative pipelines.
I work a lot with AC in declarative pipelines, so my own approach is to always move all the properties (parameters), written in the scripted style, before the "pipeline" block. I even use this in Shared Libraries (as an init function) or load it from disk if needed, and write all the rest of the pipeline declaratively to get the advantages of both.
An additional benefit of this trick is that I can reuse a pipeline with different parameters, since they are loaded on the fly from Git or Shared Libraries.
Found a life hack:
To build the Groovy code for different AC types, go to the pipeline syntax generator, select "input" > "This build is parameterized", add the needed AC there, fill in all the fields and choose options as needed (same as in the job UI), then Generate code. After that, just copy the "properties" block (or the AC part of the code) and put it in your Jenkinsfile before "pipeline".
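Putting the trick together, here is a minimal sketch (assuming the Active Choices plugin is installed; the parameter names and choice values are illustrative):

// Scripted-style properties block placed before the declarative pipeline.
properties([
    parameters([
        choice(name: 'ENV', choices: ['dev', 'prod'], description: 'Environment'),
        // Reactive parameter: its Groovy script is re-evaluated whenever
        // the referenced parameter ENV changes.
        [$class: 'CascadeChoiceParameter',
         name: 'TARGET',
         choiceType: 'PT_SINGLE_SELECT',
         referencedParameters: 'ENV',
         script: [$class: 'GroovyScript',
                  script: [classpath: [], sandbox: true,
                           script: "return ENV == 'prod' ? ['blue', 'green'] : ['dev1', 'dev2']"],
                  fallbackScript: [classpath: [], sandbox: true,
                                   script: "return ['<error>']"]]]
    ])
])

pipeline {
    agent any
    stages {
        stage('Show') {
            steps { echo "Deploying to ${params.TARGET}" }
        }
    }
}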
I've got a piece of code that works perfectly in all of the Groovy interpreters I know of, including the Jenkins script console. Yet it has a weird behavior when it comes to pipeline scripts.
def kvs = ['key1': 'value1', 'key2': 'value2']
println kvs
println kvs.inject(''){ s,k,v -> s+= "{'$k': '$v' } "}
First of all, the map is printed differently:
Expected: [key1:value1, key2:value2]
Got: {key1=value1, key2=value2}
Then, more of a problem, the yielded result differs drastically:
Expected: {'key1': 'value1' } {'key2': 'value2' }
Got: null
Both of these results were obtained with Groovy version 2.4.12.
(Though, outside of the pipeline script, I also tried versions 2.4.6 and 2.4.15 and always got the expected results.)
Please note that I am not interested in workarounds. I only wish to understand why the behavior changed from normal groovy to pipeline script.
It is happening because Jenkins pipeline code does not actually run this Groovy code directly; it interprets it with a parser that applies script security to keep the Jenkins system secure, amongst other things. To quote: "Pipeline code is written as Groovy but the execution model is radically transformed at compile-time to Continuation Passing Style (CPS)"; see the best-practices post at https://jenkins.io/blog/2017/02/01/pipeline-scalability-best-practice/. In short, don't write complex Groovy code in your pipelines; try to use the standard steps supplied by the pipeline DSL or plugins. Simple Groovy code in script sections can still be useful in some scenarios, however. Nowadays I put some of my more complex logic in plugins that supply custom steps.
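To see the transformation at work (illustrating the mechanism, not proposing a workaround): methods annotated with @NonCPS are left untransformed and run as plain Groovy, so the same inject call behaves as expected inside one:

// @NonCPS tells the pipeline engine to skip the CPS transform for this
// method, so the closure passed to inject runs as ordinary Groovy.
@NonCPS
def renderMap(Map kvs) {
    return kvs.inject('') { s, k, v -> s + "{'$k': '$v' } " }
}

node {
    echo renderMap(['key1': 'value1', 'key2': 'value2'])
    // prints: {'key1': 'value1' } {'key2': 'value2' }
}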
I would like to add general functionality to my Jenkins pipeline script, similar to what you can do with built-in functions (timestamps, ansiColor):
options {
    timestamps()
    ansiColor 'xterm'
    // failureNotification() <- I want to add this
}
How do I write a function in the script so that it can be used as an option?
Currently I don't believe that's possible with the declarative syntax that you're using. You could write your own Jenkins plugin to do this, but that could get hairy.
If you're willing to use a slightly more complicated syntax, I would look at this blog post: https://jenkins.io/blog/2017/02/15/declarative-notifications/
Essentially, you'll need to create a shared Groovy library and use a step from it to manage your notification. There are a few steps to this:
Create a repository for your shared library. This should have a folder called "vars", which is where your steps and step documentation go.
Create a step in your shared library. Using camelCase and a .groovy extension, create a file named to describe your step. This is what you will call in your Jenkinsfile. Ex: sendFailureNotification.groovy
Within that file, create a function named call. You can use whatever parameters you want. Ex: def call() { }
That call function is your "step logic". In your case, it sounds like you would want to look at the build result and use whatever notification steps you feel are necessary.
Copying from the documentation... 'To set up a "Global Pipeline Library," I navigated to "Manage Jenkins" → "Configure System" in the Jenkins web UI. Once there, under "Global Pipeline Libraries", I added a new library.'
Import your library into your Jenkinsfile like so: @Library('<library name you picked here>') _
Now you should be able to call sendFailureNotification() at the end of your Jenkinsfile. Maybe in a post stage like so?:
post {
    failure {
        sendFailureNotification()
    }
}
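For completeness, here is roughly what vars/sendFailureNotification.groovy might contain. This is only a sketch; the recipient and message are illustrative, and mail is just one possible notification step:

// vars/sendFailureNotification.groovy
def call() {
    // env is available to library steps running inside a pipeline.
    mail to: 'team@example.com',
         subject: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
         body: "See ${env.BUILD_URL} for details."
}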
I'm attempting to write a global function script that uses groovy.sql.Sql.
When adding the annotation @GrabConfig(systemClassLoader=true), I get an exception when using the global function in a Jenkinsfile.
Here is the exception:
hudson.remoting.ProxyException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during conversion: No suitable ClassLoader found for grab
Here is my code:
@GrabResolver(name='nexus', root='http://internal.repo.com')
@GrabConfig(systemClassLoader=true)
@Grab('com.microsoft.sqlserver:sqljdbc4:4.0')
import groovy.sql.Sql
import com.microsoft.sqlserver.jdbc.SQLServerDriver

def call(name) {
    echo "Hello world, ${name}"
    def sql = Sql.newInstance("jdbc:sqlserver://ipaddress/dbname", "username", "password", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    // sql.execute "select count(*) from TableName"
}
Ensure that the "Use Groovy Sandbox" checkbox is unticked (it's below the Pipeline script text box).
As explained here, Pipeline "scripts" are not simple Groovy scripts, they are heavily transformed before running, some parts on master, some parts on slaves, with their state (variable values) serialized and passed to the next step. As such, not every Groovy feature is supported.
I'm not sure about #Grab support. It is discussed in JENKINS-26192 (which is declared as resolved, so maybe it works now).
Extract from a very interesting comment:
If you need to perform some complex or expensive tasks with unrestricted Groovy physically running on a slave, it may be simplest and most effective to simply write that code in a *.groovy file in your workspace (for example, in an SCM checkout) and then use tool and sh/bat to run Groovy as an external process; or even put this stuff into a Gradle script, Groovy Maven plugin execution, etc. The workflow script itself should be limited to simple and extremely lightweight logical operations focused on orchestrating the overall flow of control and interacting with other Jenkins features—slave allocation, user input, and the like.
In short, if you can move the custom part that needs SQL into an external script and execute it in a separate process (called from your Pipeline script), that should work. Doing this in the Pipeline script itself is more complicated.
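A minimal sketch of that suggestion (the tool name 'groovy' must match a Groovy installation configured in Jenkins, and the script path is illustrative):

node {
    checkout scm  // the *.groovy file lives in the SCM checkout
    def groovyHome = tool 'groovy'
    // Run the SQL-heavy logic as an external process, outside the
    // sandboxed, CPS-transformed pipeline script.
    sh "${groovyHome}/bin/groovy scripts/run-sql.groovy"
}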