How to make options in Jenkinsfile reusable across other Jenkinsfiles - jenkins

I have 55 Jenkinsfiles, and every one of them contains the options block below. Is there any way to make this block reusable across all the Jenkinsfiles?
options {
    buildDiscarder(logRotator(daysToKeepStr: '30'))
    ansiColor('xterm')
}
What I tried (method 1):
I wrapped the options in a script block and read them from a Groovy function. That didn't work, because a script block is only permitted inside a steps block within a stage.
options {
    script {
        code = load './options_section.groovy'
        def res = code.options_section()
    }
}
options_section.groovy file:
def options_section() {
    buildDiscarder(logRotator(daysToKeepStr: '30'))
    ansiColor('xterm')
}
return this
Any suggestions or ideas for making the options section of a declarative-pipeline Jenkinsfile reusable? Thanks for your time!
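One approach that is known to work, and is shown in more detail in the related answers below, is to move the entire pipeline, options block included, into a shared-library custom step, so that each Jenkinsfile only supplies its project-specific steps. A minimal sketch, assuming a hypothetical shared library exposing vars/standardPipeline.groovy:

// vars/standardPipeline.groovy in the hypothetical shared library
def call(Closure body) {
    pipeline {
        agent any
        options {
            buildDiscarder(logRotator(daysToKeepStr: '30'))
            ansiColor('xterm')
        }
        stages {
            stage('Run') {
                steps {
                    script {
                        body() // run the caller-supplied steps
                    }
                }
            }
        }
    }
}

Each of the 55 Jenkinsfiles then shrinks to:

@Library('my-shared-library') _
standardPipeline {
    echo 'project-specific steps go here'
}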

Related

Jenkins - set options in a shared library for all pipelines that use the shared library

I have a bunch of repositories which use (parts of) the same Jenkins shared library for running tests, docker builds, etc. So far the shared library has greatly reduced the maintenance costs for these repos.
However, it turned out that basically all pipelines use the same set of options, e.g.:
#Library("myExample.jenkins.shared.library") _
import org.myExample.Constants
pipeline {
options {
disableConcurrentBuilds()
parallelsAlwaysFailFast()
}
agent {
label 'my-label'
}
stages {
stage {
runThisFromSharedLibrary(withParameter: "foo")
runThatFromSharedLibrary(withAnotherParameter: "bar")
...
...
In other words, I need to copy and paste the same options snippet into every new pipeline that I create.
Also, this means that I need to edit separately each Jenkinsfile (along with any peer-review processes we use internally) when I decide to change the set of options.
I'd very much like to remove this maintenance overhead somehow.
How can I delegate the option-setting to a shared library, or otherwise configure the required options for all pipelines at once?
Two options will help you the most:
1. Use global variables at the master/agent level: go to Jenkins --> Manage Jenkins --> Configure System --> Global properties, check the Environment variables box, and add a name and value for the variable. You will then be able to use it normally in your Jenkins pipelines, as in the snippet below.
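A minimal sketch of that usage, assuming a hypothetical global variable named DEPLOY_SERVER was added under Global properties:

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // DEPLOY_SERVER is the hypothetical global property defined above
                echo "Deploying to ${env.DEPLOY_SERVER}"
            }
        }
    }
}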
2. Wrap the whole pipeline in a function inside a shared library. The Jenkinsfile will then look like this:
@Library('shared-library') _
customServicePipeline(agent: 'staging',
                      timeout: 3,
                      server: 'DEV')
The shared-library function:
// vars/customServicePipeline.groovy
def call(Map pipelineParams = [:]) {
    pipeline {
        agent { label "${pipelineParams.agent}" }
        tools {
            maven 'Maven-3.8.6'
            jdk 'JDK 17'
        }
        options {
            timeout(time: pipelineParams.timeout, unit: 'MINUTES')
        }
        stages {
            stage('Prep') {
                steps {
                    echo 'prep started'
                    pingServer(pipelineParams.get('server'))
                }
            }
        }
    }
}

Is it possible to pass stages into a Jenkinsfile via pipeline parameters

I'm currently working with a Jenkinsfile I can't directly add code to, as it is not developed by my team. However, I was thinking there might be some way to get the owners of the Jenkinsfile (just another team in my company) to allow us to add "pre" and "post" type variables to it, where we could then pass in the stages and logic.
A sample Jenkinsfile today might look like
pipeline {
    stages {
        stage('Clean-Up WS') {
            steps {
                cleanWs()
            }
        }
        stage('Do more....
And the desired Jenkinsfile might look like
def x = stage('Clean-Up WS') {
    steps {
        cleanWs()
    }
}
pipeline {
    stages {
        x()
        stage('Do more....
Where x in the above example could be passed in via a Jenkins parameter.
I've played around with the above and tried similar syntax, but nothing seems to work.
Does anyone know if anything like this is possible using Jenkinsfiles?
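For reference, declarative pipeline will not accept a stage held in a variable, but scripted pipeline treats stages as ordinary code, so a stage can be wrapped in a closure and invoked wherever needed. A minimal sketch of that idea (preStage is a hypothetical name):

// Scripted pipeline: a stage wrapped in a closure can be passed
// around like any other value and invoked where needed.
def preStage = {
    stage('Clean-Up WS') {
        cleanWs()
    }
}

node {
    preStage() // injected "pre" stage
    stage('Do more') {
        echo 'main logic here'
    }
}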

Jenkins pipelineJob DSL not interpreting variables in pipeline script

I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences in only a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated to match the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV', 'QA', 'UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
       additionalParameters: [
           project: projectName,
           environments: envs,
           repository: repositoryURL
       ],
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed on to the pipeline script read from the workspace: the Jenkins configuration for the generated pipeline looks exactly like the file, without any interpolation of the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas on how I can make variables propagate down to the pipeline script? I know that I could use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpolated during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
def template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime rather than when the job is built. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
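For illustration, a fragment of what the escaped pipeline.groovy template might look like under this scheme (variable names taken from the question):

// pipeline.groovy, treated as a SimpleTemplateEngine template
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // ${repository} is expanded by the template engine at job creation;
                // \${params.ENVIRONMENT} survives templating and is expanded by Jenkins at runtime
                echo "Deploying ${repository} to \${params.ENVIRONMENT}..."
            }
        }
    }
}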
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
They are a bit limited because environment variables are strings, but this should work for basic stuff. For example:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        // note the need to split the comma-separated string
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
    }
}
You need to use the complete job name as a variable without the quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}

Calculated-String as the parameter to Jenkins's Groovy "STAGE"

I put this in the script section of a Jenkins UI job's configuration -
pipeline {
    agent any
    stages {
        stage('Project') {
            ...
That works, however -
pipeline {
    agent any
    stages {
        stage('Project ' + 'Josh') {
            ...
throws and displays an incorrect error message, because the parser gets confused by the constructed string inside the stage.
Moreover,
String description = 'Project' + ' Josh'
pipeline {
    agent any
    stages {
        stage(description) {
            ...
does not fail, but displays the literal text 'description' as the stage's name.
Now, if you try to load a Groovy PaC file with this in it:
node {
    stage('Project' + 'Josh') {
        ...
it works without a hitch.
Is it possible that there are two different Groovy parsers employed, one for the UI and another for loaded PaC's? This means that the UI one has this really horrible bug in it...
Ideas?
Your example has nothing to do with the Jenkins UI. You have shown two different pipeline types: a declarative and a scripted one.
Declarative pipeline
A declarative pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // do something here
            }
        }
    }
}
introduces a more simplified, limited, and opinionated syntax. This type of pipeline sets boundaries for Groovy code execution: arbitrary code is only available inside a dedicated script block, e.g.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    def name = 'Joe'
                    echo "My name is ${name}"
                }
            }
        }
    }
}
This is why the stage block expects a literal string and not a variable or expression.
Scripted pipeline
The second example you have shown is a scripted pipeline. This kind of pipeline is more powerful compared to a declarative pipeline: the whole pipeline script is more or less a Groovy script, so you can put almost any code almost anywhere. A scripted pipeline starts with a node block, and it allows you to put any Groovy code inside that block. Consider the following example:
node {
    stage("Test") {
        echo "1,2,3"
    }
    for (int i = 0; i < 5; i++) {
        stage("Stage ${i}") {
            echo "This is ${i}"
        }
    }
}
This pipeline script generates six stages. As you can see, there are practically no limits on what you can put inside a node block. A declarative pipeline does not allow that: its syntax is strict and you have to follow it exactly.
Differences
As a final note, I will quote the official Jenkins docs:
Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model.
Source: https://jenkins.io/doc/book/pipeline/syntax/#compare
The script you configured via the UI uses declarative pipeline syntax, while the other uses the scripted node syntax. I'd say that's probably where the other parser comes in, and I would agree that the declarative one has a bug.

Determine Failed Stage in Jenkins Scripted Pipeline

I am looking for a generic way to determine the name of the failed stage at the end of a Jenkins scripted Pipeline.
Please note that this is different from Determine Failed Stage in Jenkins Declarative Pipeline, which is about declarative pipelines.
Also note that the use of try/catch inside each stage is out of the question, because it would make the pipeline script impossible to read. We have 10-15 stages stored in multiple files, compiled with JJB to create the final pipeline script. They are already complex, so I need a clean approach to finding which stage failed.
You could also create a custom step in a shared library, a super_stage.
Quick example:
// vars/super_stage.groovy
def call(name, body) {
    try {
        stage(name) {
            body()
        }
    } catch (e) {
        register_failed_stage(name, e) // your own reporting helper
        throw e
    }
}
In that way you can 'reuse' the same exception handler.
In your scripted pipeline you would then use it like:
super_stage("mystage01") {
    // do stuff
}
Use a GraphListener:
import org.jenkinsci.plugins.workflow.cps.nodes.StepEndNode
import org.jenkinsci.plugins.workflow.cps.nodes.StepStartNode
import org.jenkinsci.plugins.workflow.flow.FlowExecution
import org.jenkinsci.plugins.workflow.flow.GraphListener
import org.jenkinsci.plugins.workflow.graph.FlowNode

def listener = new GraphListener.Synchronous() {
    @NonCPS
    void onNewHead(FlowNode node) {
        if (node instanceof StepStartNode) {
            // before step execution
        } else if (node instanceof StepEndNode) {
            // after step execution
        }
    }
}

def execution = (FlowExecution) currentBuild.getRawBuild().getExecution()
execution.addListener(listener)
You are going to need a few helper functions to make it work; for example, StepStartNode and StepEndNode each get called twice, so you have to filter for the one with the label. Also, variables like env are available inside the listener, so you can store anything there to be picked up later.
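As an illustration of the kind of helper meant here, a sketch that filters for the labelled start node and extracts the stage name (stageName is a hypothetical helper; the classes are from the standard workflow plugins):

import org.jenkinsci.plugins.workflow.actions.LabelAction
import org.jenkinsci.plugins.workflow.cps.nodes.StepStartNode
import org.jenkinsci.plugins.workflow.graph.FlowNode

// Each stage produces two start nodes; only the one carrying a
// LabelAction holds the stage name, so filter on that.
@NonCPS
def stageName(FlowNode node) {
    if (node instanceof StepStartNode && node.getAction(LabelAction) != null) {
        return node.getAction(LabelAction).displayName
    }
    return null
}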
This answer is pretty generic, but I've found it useful in many Stack Overflow questions about doing something before/after some stage (or all of them).
You cannot try/catch exceptions inside the pipeline, as this approach is not a wrapper for the stage but just a listener that gets executed once per instruction. However, you can record the stage at the beginning, and at the end check currentBuild.result to see whether the stage failed. You can do pretty much anything at that point.
At some point, through the FlowExecution, you have access to the pipeline script. I don't know if it's writable at that point, but it would be awesome to rewrite the pipeline to actually try/catch the stages. If you do something along these lines, please let me know ;)
