All the tutorials that I have come across on writing a declarative pipeline suggest including the stages and steps in the Jenkinsfile.
But I have noticed one of my seniors writing it the opposite way: he uses the Jenkinsfile only for defining properties, i.e. his Jenkinsfile is just a properties file, nothing more, nothing less.
For defining the pipeline he uses the shared library concept, writing his pipeline code in a file in the vars folder. I cannot work out the wisdom behind this approach.
Nowhere on the internet have I come across anything similar.
Any guidance in this regard is highly appreciated. I am a beginner in the Jenkins world.
As illustrated in Extending with Shared Libraries, that approach (which I am using as well) allows you to:
keep the Jenkinsfile content to a minimum
enforce a standard way of doing a particular job (as coded in the shared library)
That shared library becomes a template of a process for which you provide only values in your Jenkinsfile before delegating the actual execution to the pre-defined library.
The OP Asif Kamran Malick notes that the documentation does include:
There is also a “builder pattern” trick using Groovy’s Closure.DELEGATE_FIRST, which permits Jenkinsfile to look slightly more like a configuration file than a program, but this is more complex and error-prone and is not recommended.
He then asks:
Why did the blogger prefer that way when it's actually discouraged in the official doc?
I checked, and we are also using Closure.DELEGATE_FIRST.
The reason is in the part "permits Jenkinsfile to look slightly more like a configuration file than a program".
This avoids us having to define a JSON block and keeps the parameters as a series of key = value lines, which is easier to read.
A call to a shared library is then:
#!/usr/bin/env groovy
#Library("MyLibraries") _
MyLibrary {
config1 = 'value1'
config2 = 'value2'
...
}
{
anotherConfigA = 'valueA'
anotherConfigB = 'valueB'...
astep(
...
)
}
Then your Jenkins pipeline template in MyLibraries/vars/MyLibrary.groovy can use those closure blocks:
def call(Closure configBlock, Closure body) {
    // gather the key = value settings from the first closure into a map
    def config = [:]
    configBlock.resolveStrategy = Closure.DELEGATE_FIRST
    configBlock.delegate = config
    configBlock()
    astep(
        ...
    ) {
        if (body) { body() }
    }
}
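To see what Closure.DELEGATE_FIRST is doing there, here is a minimal plain-Groovy sketch (no Jenkins required): property assignments inside the closure are resolved against the delegate, which is our config map, before the closure's owner.
// Minimal plain-Groovy illustration of the delegate-first trick.
def configBlock = {
    config1 = 'value1' // not a local variable: resolved against the delegate
    config2 = 'value2'
}
def config = [:]
configBlock.resolveStrategy = Closure.DELEGATE_FIRST
configBlock.delegate = config
configBlock() // executing the closure fills the map
assert config == [config1: 'value1', config2: 'value2']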
I am working on my 6th or 7th Jenkins script now, and I have already noticed they share a fair amount of code (essentially just the same Groovy subroutines over and over again). I would rather not continue like that and instead learn some best practices.
It seems that "Shared Libraries" are the thing to do. (Or is there a better way when you just want to share Groovy code, not script steps etc.?)
Those scripts are part of a larger repo (that contains the source of the entire project, including the other scripts), stored in a subfolder Jenkins/Library with this structure:
Jenkins/Library
+- vars
| common_code.groovy
There is only a vars folder, no src. The documentation said
For Shared Libraries which only define Global Variables (vars/), or a Jenkinsfile which only needs a Global Variable, the annotation pattern @Library('my-shared-library') _ may be useful for keeping code concise. In essence, instead of annotating an unnecessary import statement, the symbol _ is annotated.
so I concluded that I wouldn't need a src folder and can do with vars alone.
The library is made available via "Configure Jenkins" > "Global Pipeline Libraries" with SourcePath set to "/Jenkins/Library/", and it is brought in with the statement @Library('{name}') _ as the first line of the script.
However, when attempting to use the library, I get the error shown in the subject.
What's the problem? (I already searched around and found this instance of the problem, but that doesn't seem to fit for my issue - unless I misunderstood something.)
To specify the name of the library, you should set the same name in your Jenkins settings:
Name. An identifier you pick for this library, to be used in the @Library annotation. An environment variable library.THIS_NAME.version will also be set to the version loaded for a build (whether that comes from the Default version here, or from an annotation after the @ separator).
The '{name}' parameter inside @Library() is taken literally, so you need to register a library with exactly that name; it is not interpolated like "${name}", and name is not a built-in variable, so it would be undefined anyway.
If you wish to give your library the same name as your Jenkins pipeline, you could use the env.JOB_NAME variable, or check all the environment and pre-defined variables:
println env.getEnvironment()
Or check job parameters only:
println params
Now step-by-step instructions:
Create your library, for example from Git SCM.
Put your library code in the project, e.g. <project_root_folder>/vars/common_code.groovy. You don't need the additional path Jenkins/Library. Also, you have named your file in 'snake case' style, which is not usual for Groovy:
The vars directory hosts scripts that define global variables accessible from Pipeline. The basename of each *.groovy file should be a Groovy (~ Java) identifier, conventionally camelCased.
So in camel case your file should be named commonCode.groovy.
Write your library code:
// vars/commonCode.groovy

// Define your method
def call() {
    // do some stuff
    return 'Some message'
}
Write your pipeline. Example of scripted pipeline:
#!/usr/bin/env groovy
// yourPipeline.groovy file in your project
@Library('jenkins-shared-library') _
// Get the message from the method in your library
def messageText = commonCode() as String
println messageText
If you wish to define some global variables this answer also may help you.
PS: Using the vars folder loads everything in it at once. If you wish to load classes on demand, use import with the src folder, as in the sketch below.
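For example, a minimal sketch of the src variant (all names here are illustrative, not from the question):
// src/org/example/BuildUtils.groovy -- hypothetical class
package org.example

class BuildUtils implements Serializable {
    // shorten a full git hash to the conventional 7 characters
    static String shorten(String gitHash) { gitHash.take(7) }
}

// In the Jenkinsfile, import it explicitly after loading the library:
//   @Library('jenkins-shared-library') _
//   import org.example.BuildUtils
//   println BuildUtils.shorten('0123456789abcdef')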
It seems like it's really difficult to be able to store a bunch of variables for use in shared code in Jenkins/Groovy scripted pipelines. I've tried a bunch of methods and none of them seem to give the desired result.
This method looked the most promising, but the values all came back as null in the calling pipeline. Get Global Variables in jenkins pipeline.
My code is something like:
import org.blabla.project.JobHelper
println("env.NO_PROXY: -->${env.NO_PROXY}<--")
And in the JobHelper.groovy file, I've defined
package org.blabla.project
env.NO_PROXY = 'localhost,127.0.0.1,169.254.169.254'
The names have been changed a bit to protect the innocent, but you get the idea.
The script just prints null for the value.
Is there a simple way (or indeed any way) that I can pull in a bunch of variables from a shared library file? This feels like it should be a really simple exercise, but after spending many hours searching I'm none the wiser.
In general, env is only available once the pipeline has started, but groovy scripts are resolved much earlier.
I'm using static class members as global variables. Applied to your code sample, it would look like this:
JobHelper.groovy
package org.blabla.project
// Class must be named like the file that contains it.
class JobHelper {
    static String getNO_PROXY() { 'localhost,127.0.0.1,169.254.169.254' }
}
Elsewhere:
import org.blabla.project.JobHelper
println("NO_PROXY: -->${JobHelper.NO_PROXY}<--")
Note that Groovy automatically generates properties from get*() and set*() methods, so we can use the short form instead of having to write JobHelper.getNO_PROXY().
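As a plain-Groovy illustration of that property shorthand:
class Example {
    static String getGreeting() { 'hello' } // exposed as the property 'greeting'
}
assert Example.greeting == 'hello' // same call as Example.getGreeting()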
I recently switched my logback configuration file from logback.xml to logback.groovy. Using a DSL with Groovy is more versatile than XML for this sort of thing.
I need to analyse this file programmatically, as I analysed the previous XML file (with any of innumerable parsing tools). I realise that this will be imperfect: a DSL config file sits on top of an object which it configures and must be executed, so its results are inevitably dynamic, whereas an XML file is static.
If you want to include one Groovy file in another file there are solutions. This one worked for me.
But I'm struggling to find what I need from the results.
If I put a function like this in the DSL file ...
def greet() {
    println "hello world"
}
... not only can I execute it (config.greet() as below), but I can also see it listed when I run:
GroovyShell shell = new GroovyShell()
def config = shell.parse( logfileConfigPath.toFile() )
println "config.class.properties ${config.class.properties}"
But if I put a line like this in the DSL file...
def MY_CONSTANT = "XXX"
... I have no idea how to find it and get its value (it is absent from the confusing and copious output from config.class.properties).
PS printing out config.properties just gives this:
[class:class logback, binding:groovy.lang.Binding#564fa2b]
... and yes, I did look at config.binding.properties: there was nothing.
further thought
My question is, more broadly, about what if any tools are available for analysis of Groovy DSL configuration files. Given that such a file is pretty meaningless without the underlying object it is configuring (an object implementing org.gradle.api.Project in the case of Gradle; I don't know what class it may be in the case of logback), you would have thought there would need to be instrumentation to kind of hitch up such an object and then observe the effects of the config file in a controlled, observable way. If Groovy DSL config files are to be as versatile as their XML counterparts surely you need something along those lines? NB I have a suspicion that org.gradle.tooling.model.GradleProject or org.gradle.tooling.model.ProjectModel might serve that purpose. Unfortunately, at the current time I am unable to get GradleConnector working, as detailed here.
I presume there is nothing of this kind for logback, and at the moment I have no knowledge of its DSL or configurable object, or the latter's class or interface...
The use of def creates a local variable in the execution of the script that is not available in the binding of the script; see this. Even dropping def will not expose MY_CONSTANT in the binding because parsing the script via GroovyShell.parse() does not interpret/execute the code.
To expose MY_CONSTANT in config's binding, change def MY_CONSTANT = "XXX" to MY_CONSTANT = "XXX" and execute the config script via config.run().
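A minimal sketch of that fix, using an inline script string in place of the config file:
// without 'def', the assignment goes into the script's binding
def shell = new GroovyShell()
def config = shell.parse('MY_CONSTANT = "XXX"') // parse() only compiles
config.run() // run() executes the script and fills the binding
assert config.binding.getVariable('MY_CONSTANT') == 'XXX'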
How do you automatically parse ansible warnings and errors in your jenkins pipeline jobs?
I greatly enjoy the power of leveraging Ansible in Jenkins when it works. Upon a failure, the hunt to locate the actual error can be challenging.
I use WarningsNG, which supports custom parsers (and allows their programmatic generation).
Do you know of any plugins or add-ons that already transform these logs into the kind of charts WarningsNG produces?
I figured I'd ask as I go off into deep regex land to make my own.
One good way to achieve this seems to be the following:
select an existing structured-output Ansible callback plugin (json, junit and yaml are all viable). I selected junit, as I can play with the format to get a really nice view into the playbook, with errors reported in a very obvious way.
fork that GPL file (yes, so be careful with that license) to augment with the following:
store output as file
implement the missing callback methods (the three mentioned above do not implement the v2...item callbacks)
forward events to the default or debug callback to ensure operators see something when they execute the plan
add a secrets cleaner - if you use the Jenkins credentials-binding-plugin, it will hide secrets from the console but it will not hide secrets within stored files. You'll need to handle that in your playbook or via some Groovy code (if Groovy, try {...} finally { clean } seems a good pattern; see the sketch below)
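A hedged sketch of that Groovy pattern in a pipeline (the credential id, playbook and output file names are placeholders, not from the answer):
// always scrub the stored callback output, even when the play fails
withCredentials([string(credentialsId: 'my-secret', variable: 'SECRET')]) {
    try {
        sh 'ansible-playbook site.yml' // the callback plugin writes results.xml
    } finally {
        // single-quoted Groovy string so the shell, not Groovy, expands $SECRET
        sh 'sed -i "s/$SECRET/****/g" results.xml'
    }
}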
Snippet - forwarding to the default callback:
from ansible.plugins.callback.default import CallbackModule as CallbackModule_default
...

class CallbackModule(CallbackBase):

    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'json'

    def __init__(self, display=None):
        super(CallbackModule, self).__init__(display)
        self.default_callback = CallbackModule_default()

    ...

    def v2_on_file_diff(self, result):
        # forward to the default callback so operators see the diff live
        self.default_callback.v2_on_file_diff(result)
        # ... then do whatever you'd want to ensure the content appears in the json file
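On the Jenkins side, the stored file can then be surfaced with the standard junit step (the output file name is whatever your forked plugin writes; ansible-results.xml is a placeholder):
// Jenkinsfile sketch: chart the callback output in the build UI
sh 'ansible-playbook site.yml' // the forked junit callback stores ansible-results.xml
junit 'ansible-results.xml'    // the standard JUnit step picks it up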
I'm writing a somewhat complex global pipeline library. The library really just orchestrates a complex build, with a whole bunch of steps exposed as vars/* and a single src/com/myorg/pipeline/utils.groovy class that handles all common pieces of functionality. Each Jenkinsfile defines all 'build' specific config and passes it to a vars/myBuildFlavor.groovy step that then calls all steps required for that flavor of the build. The vars/myBuildFlavor.groovy step also reads a server config file that contains all config that is global to each Jenkins instance.
This setup works incredibly well. It allows users to either piece together their own builds from the steps I've exposed in the global library, or just set all build properties in their Jenkinsfile and call an existing flavor of a build that I've exposed as a step. What I'm struggling with is how I can access configuration values from both the 'build' and 'server' configuration, plus I have some random properties from steps early on in the build that I want to save and use later in the build. What is incredibly annoying is that I have to pass the entire context of the script around with 'this', or have extremely long method signatures to handle the juggling of all of these values.
What I'm thinking may be a good idea is to write a file in the workspace root that contains all build and server config values, plus any properties that I need later on in the build. Has anyone had to deal with this previously? Any major issues with my approach? Better ideas?
I haven't tried this, but you make me want to make sure this works. If you don't beat me to it, I'll give it a shot, so please report back...
The things in vars are created as singletons, so I think you should be able to do something like this:
// vars/customConfig.groovy
class customConfig implements Serializable {
    private String url
    private Map allTheThings = [:] // initialised so items can be set directly

    def setUrl(myUrl) {
        url = myUrl
    }

    def getUrl() {
        url
    }

    def setAllTheThings(Map configMap) {
        allTheThings = configMap
    }

    def getAllTheThings() {
        return allTheThings
    }

    def coolMethod(myVar) {
        echo "This method does something cool with ${myVar}"
    }
}
Then access these things like:
customConfig.url = 'https://www.google.com'
echo "${customConfig.url}"
customConfig.coolMethod "FOOBAR"
customConfig.allTheThings.configItem1 = "BAZ"
customConfig.allTheThings.configItem2 = 12345
echo "${customConfig.allTheThings.configItem2} is an Int"
Since it is a "global var" or a singleton, I think you can use it everywhere and the values are all shared.
Let me know if this does what I think it will do.