Can I build stages with a function in a Jenkinsfile? - jenkins

I'd like to use a function to build some of the stages of my Jenkinsfile. This is going to be a build with a number of repetitive stages/steps, and I'd rather not write everything out manually.
I was wondering if it's possible to do something like this:
def _make_stage() {
    stage("xx") {
        step("A") {
            echo "A"
        }
        step("B") {
            echo "B"
        }
    }
}

def _make_stages() {
    stages {
        _make_stage()
    }
}

// pipeline starts here!
pipeline {
    agent any
    _make_stages()
}
Unfortunately Jenkins doesn't like this - when I run I get the error:
WorkflowScript: 24: Undefined section "_make_stages" @ line 24, column 5.
    _make_stages()
    ^
WorkflowScript: 22: Missing required section "stages" @ line 22, column 1.
pipeline {
^
So what's going wrong here? The function _make_stages() really looks like it returns whatever the stages object returns. Why does it matter whether I put that in a function call or just inline it into the pipeline definition?

As explained here, Pipeline "scripts" are not simple Groovy scripts; they are heavily transformed before running, some parts on the master, some parts on the slaves, with their state (variable values) serialized and passed to the next step. As such, not every Groovy feature is supported, and what you see as simple functions really isn't.
It does not mean what you want to achieve is impossible. You can create stages programmatically, but apparently not with the declarative syntax. See also this question for good suggestions.
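For example, with the scripted (node-based) syntax the repetitive stages can be generated in a loop. A minimal sketch (the list of components and the stage names are made up):
// Scripted pipeline: stage is just a step here, so it can be called in a loop
node {
    def components = ['api', 'web', 'docs']   // hypothetical list of things to build
    for (name in components) {
        stage("Build ${name}") {
            echo "Building ${name}"
        }
    }
}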

You can define a declarative pipeline in a shared library, for example:
// vars/evenOrOdd.groovy
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
// Jenkinsfile
@Library('my-shared-library') _
evenOrOdd(currentBuild.getNumber())
See Defining Declarative Pipelines

Related

Can Jenkins pipelines have variable stages?

From my experience with Jenkins declarative-syntax pipelines, I'm aware that you can conditionally skip a stage with a when clause. E.g.:
run_one = true
run_two = false
run_three = true

pipeline {
    agent any
    stages {
        stage('one') {
            when {
                expression { run_one }
            }
            steps {
                echo 'one'
            }
        }
        stage('two') {
            when {
                expression { run_two }
            }
            steps {
                echo 'two'
            }
        }
        stage('three') {
            when {
                expression { run_three }
            }
            steps {
                echo 'three'
            }
        }
    }
}
...in the above code block, there are three stages, one, two, and three, each of whose execution is conditional on a boolean variable.
I.e. the paradigm is that there is a fixed superset of known stages, of which individual stages may be conditionally skipped.
Does Jenkins pipeline script support a model where there is no fixed superset of known stages, and stages can be "looked up" for conditional execution?
To phrase it as pseudocode, is something along the lines of the following possible:
my_list = list populated _somehow_, maybe reading a file, maybe Jenkins build params, etc.

pipeline {
    agent any
    stages {
        if (stage(my_list[0]) exists) {
            run(stage(my_list[0]))
        }
        if (stage(my_list[1]) exists) {
            run(stage(my_list[1]))
        }
        if (stage(my_list[2]) exists) {
            run(stage(my_list[2]))
        }
    }
}
?
I think another way to think about what I'm asking is: is there a way to dynamically build a pipeline from some dynamic assembly of stages?
For dynamic stages you could write either a fully scripted pipeline or use a declarative pipeline with a scripted section (e.g. by using the script {…} step or calling your own function). For an overview see Declarative versus Scripted Pipeline syntax and Pipeline syntax overview.
Declarative pipeline is better supported by Blue Ocean, so I personally would use that as a starting point. A disadvantage might be that you need a fixed root stage, but I usually name it "start" or "init" so it doesn't look too awkward.
In scripted sections you can call stage as a function, so it can be used completely dynamically.
pipeline {
    agent any
    stages {
        stage('start') {
            steps {
                createDynamicStages()
            }
        }
    }
}

void createDynamicStages() {
    // Stage list could be read from a file or whatever
    def stageList = ['foo', 'bar']
    for( stageName in stageList ) {
        stage( stageName ) {
            echo "Hello from stage $stageName"
        }
    }
}
This shows in Blue Ocean with each dynamically created stage visible in the run view (screenshot omitted).
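If the stage list really does come from a file, a sketch of what createDynamicStages() could look like (assuming a plain-text file named stages.txt in the workspace, one stage name per line):
void createDynamicStages() {
    // Read stage names from the workspace, one per line
    def stageList = readFile('stages.txt').trim().readLines()
    for (stageName in stageList) {
        stage(stageName) {
            echo "Hello from stage $stageName"
        }
    }
}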

Jenkins. Use a shared library on the options phase

So I have created a shared library in Jenkins with a listener that gets triggered each time the pipeline reads a FlowNode, so I can run Groovy code before and after each stage, step, etc.
I'm able to call the shared library in a steps block like this:
pipeline {
    agent any
    stages {
        stage('prepare') {
            steps {
                prepareStepsWrapper()
            }
        }
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
        stage('step2') {
            steps {
                echo 'step2'
            }
        }
        stage('step3') {
            steps {
                echo 'step3'
                // fail on purpose
                sh 'notfoundexecutablelol'
            }
        }
        stage('step4') {
            steps {
                echo 'step4'
            }
        }
    }
    post {
        always {
            println env.getEnvironment()
        }
    }
}
And it works pretty great!
With this approach, though, the 'prepare' stage needs to be filtered out, so I've switched to the options directive:
pipeline {
    agent any
    options {
        prepareStepsWrapper()
    }
    stages {
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
        ...
    }
}
But the pipeline fails with
WorkflowScript: 4: Invalid option type "prepareStepsWrapper"
tl;dr; How can I load a shared library within the options directive?
What does the options directive do?
The options directive allows configuring Pipeline-specific options from within the Pipeline itself.
You can't call the shared library from the options directive. This section should not be used to execute any logic; rather, it sets configuration for the pipeline. All available options and their documentation can be found here.
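For illustration, options is meant for configuration such as timeouts or build retention rather than executable steps; a minimal sketch:
pipeline {
    agent any
    options {
        // Abort the run if it takes longer than 30 minutes
        timeout(time: 30, unit: 'MINUTES')
        // Keep only the last 10 builds
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    stages {
        stage('build') {
            steps {
                echo 'Building...'
            }
        }
    }
}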
You could try to create a stage that simply calls your prepareStepsWrapper() and use locks to prevent other stages from executing before this stage.
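A rough sketch of that suggestion (prepareStepsWrapper() is the shared-library step from the question; the lock name is arbitrary and assumes the Lockable Resources plugin is installed):
pipeline {
    agent any
    stages {
        stage('prepare') {
            options {
                // Serializes this stage across concurrent builds of the job
                lock('prepare-steps')
            }
            steps {
                prepareStepsWrapper()
            }
        }
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
    }
}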

Jenkins Declarative Pipeline detect first run and fail when choice parameters present

I often write Declarative Pipeline jobs where I set up parameters such as "choice". The first time I run the job, it blindly executes using the first value in the list. I don't want that to happen. I want to detect this case and not continue the job if a user didn't select a real value.
I thought about using "SELECT_VALUE" as the first item in the list and then fail the job if that is the value. I know I can use a 'when' condition on each stage, but I'd rather not have to copy that expression to each stage in the pipeline. I'd like to fail the whole job with one check up front.
I don't like the UI for 'input' tasks because the controls are hidden until you hover over a running stage.
What is the best way to validate arguments with a Declarative Pipeline? Is there a better way to detect when the job is run for the first time and stop?
I've been trying to figure this out myself and it looks like the pipeline runs with a fully populated parameters list.
So the answer for your choice parameter is to make the first item a placeholder value like "please select option" and have your code use when to check for it.
For example
def paramset = true

pipeline {
    agent any
    parameters {
        choice(choices: ['select', 'test', 'proof', 'prod'], name: 'ENVI')
    }
    stages {
        stage ('check') {
            when { expression { return params.ENVI == 'select' } }
            steps {
                script {
                    echo "Missing parameters"
                    paramset = false
                }
            }
        }
        stage ('step 1') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 1"
                }
            }
        }
        stage ('step 2') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 2"
                }
            }
        }
        stage ('step 3') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 3"
                }
            }
        }
    }
}

How to abort a declarative pipeline

I'm trying the new declarative pipeline syntax.
I wonder: how can I abort all the stages and steps of a pipeline when, for example, a parameter has an invalid value?
I could add a when clause to every stage, but this isn't optimal for me. Is there a better way to do so?
This should work fine with a when directive, if you make use of the error step.
For example, you could do an up-front check and abort the build if the given parameter value is not acceptable, preventing subsequent stages from running:
pipeline {
    agent any
    parameters {
        string(name: 'targetEnv',
               defaultValue: 'dev',
               description: 'Must be "dev", "qa", or "staging"')
    }
    stages {
        stage('Validate parameters') {
            when {
                expression {
                    // Only run this stage if the targetEnv is invalid
                    !['dev', 'qa', 'staging'].contains(params.targetEnv)
                }
            }
            steps {
                // Abort the build, skipping subsequent stages
                error("Invalid target environment: ${params.targetEnv}")
            }
        }
        stage('Checkout') {
            steps {
                echo 'Checking out source code...'
            }
        }
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}
You could use the FlowInterruptedException, e.g.:
import hudson.model.Result
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException

pipeline {
    ...
    steps {
        script {
            throw new FlowInterruptedException(Result.ABORTED)
        }
    }
    ...
}
This will stop execution immediately like the error step, but with more control over the result.
Please note that it requires you to approve a signature:
new org.jenkinsci.plugins.workflow.steps.FlowInterruptedException hudson.model.Result
Apart from Result.ABORTED there's also: Result.SUCCESS, Result.UNSTABLE, Result.FAILURE and Result.NOT_BUILT.
Disclaimer: it's a bit of a hack.
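For reference, a self-contained sketch along those lines (the parameter name is made up, and the signature above still needs to be approved); it marks the run NOT_BUILT when no real value was selected:
import hudson.model.Result
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException

pipeline {
    agent any
    parameters {
        choice(choices: ['select', 'dev', 'prod'], name: 'TARGET')   // hypothetical parameter
    }
    stages {
        stage('Validate') {
            steps {
                script {
                    if (params.TARGET == 'select') {
                        // Stops the whole pipeline and marks the run as NOT_BUILT
                        throw new FlowInterruptedException(Result.NOT_BUILT)
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                echo "Deploying to ${params.TARGET}"
            }
        }
    }
}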

Jenkinsfile Pipeline errors: "expected a symbol" and "undefined section"

Can anyone explain why I get the following errors, and what can be a possible solution for them?
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 20: Expected a symbol @ line 20, column 4.
    environment {
WorkflowScript: 17: Undefined section "error" @ line 17, column 1.
pipeline {
The code in the Jenkinsfile is as follows:
#!groovy
def application, manifest, git, environment, artifactory, sonar

fileLoader.withGit('git#<reducted>', 'v1', 'ssh-key-credential-id-number') {
    application = fileLoader.load('<reducted>');
    manifest = fileLoader.load('<reducted>');
    git = fileLoader.load('<reducted>');
    environment = fileLoader.load('<reducted>');
}

pipeline {
    agent { label 'cf_slave' }
    environment {
        def projectName = null
        def githubOrg = null
        def gitCommit = null
    }
    options {
        skipDefaultCheckout()
    }
    stages {
        stage ("Checkout SCM") {
            steps {
                checkout scm
                script {
                    projectName = git.getGitRepositoryName()
                    githubOrg = git.getGitOrgName()
                    gitCommit = manifest.getGitCommit()
                }
            }
        }
        stage ("Unit tests") {
            steps {
                sh "./node_modules/.bin/mocha --reporter mocha-junit-reporter --reporter-options mochaFile=./testResults/results.xml"
                junit allowEmptyResults: true, testResults: 'testResults/results.xml'
            }
        }
        //stage ("SonarQube analysis") {
        //...
        //}
        // stage("Simple deploy") {
        //     steps {
        //         // Login
        //         sh "cf api <reducted>"
        //         sh "cf login -u <reducted> -p <....>"
        //
        //         // Deploy
        //         sh "cf push"
        //     }
        // }
    }
    post {
        // always {
        // }
        success {
            sh "echo 'Pipeline reached the finish line!'"
            // Notify in Slack
            slackSend color: 'yellow', message: "Pipeline operation completed successfully. Check <reducted>"
        }
        failure {
            sh "echo 'Pipeline failed'"
            // Notify in Slack
            slackSend color: 'red', message: "Pipeline operation failed!"
            // Clean the execution workspace
            // deleteDir()
        }
        unstable {
            sh "echo 'Pipeline unstable :-('"
        }
        // changed {
        //     sh "echo 'Pipeline was previously failing but is now successful.'"
        // }
    }
}
Your Pipeline is mostly fine; adding Scripted Pipeline elements before the Declarative pipeline block is generally not a problem.
However, at the very start, you're defining variables called environment (and git), which override the elements declared by the various Pipeline plugins.
I.e. when you attempt to do pipeline { environment { … } }, the environment refers to your variable declaration, which causes things to go wrong.
Rename those two variables, and you'll fix the first error message.
To fix the second error message, remove the attempts to declare variables from the environment block; this block is only intended for exporting environment variables for use during the build steps, e.g.:
environment {
    FOO = 'bar'
    BAR = 'baz'
}
The script block you have will work fine without these declarations. Alternatively, you can move those variable declarations to the top level of your script.
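A sketch of how the top of the Jenkinsfile could look after renaming the clashing variables and moving the declarations out of environment (gitUtils is a made-up replacement name):
def application, manifest, gitUtils           // renamed so nothing shadows 'git' or 'environment'
def projectName, githubOrg, gitCommit         // moved out of the environment block

fileLoader.withGit('git#<reducted>', 'v1', 'ssh-key-credential-id-number') {
    gitUtils = fileLoader.load('<reducted>')
    manifest = fileLoader.load('<reducted>')
}

pipeline {
    agent { label 'cf_slave' }
    stages {
        stage('Checkout SCM') {
            steps {
                checkout scm
                script {
                    // Plain script variables declared above, assigned here
                    projectName = gitUtils.getGitRepositoryName()
                    gitCommit = manifest.getGitCommit()
                }
            }
        }
    }
}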
If you're using Declarative Pipeline (which you are, given the outer pipeline block), then you may only declare the pipeline at the outer level, i.e. you can't mix in variable and function definitions. This is the downside of using Declarative Pipeline.
More info here
As I see it, you can solve this in the following ways:
Use scripted pipeline instead
Move the code at the beginning to a global pipeline library (might be tricky to solve variable scoping if a value is used in several places, but it should be doable).
Move the code at the beginning to a script step inside the pipeline and store the values as described here; a rough sketch follows.
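A rough sketch of that last option, with the fileLoader calls moved into a script block inside the first stage (the stage name is only illustrative):
pipeline {
    agent { label 'cf_slave' }
    stages {
        stage('Load helpers') {
            steps {
                script {
                    // Shared scripts are loaded inside the pipeline instead of before it
                    fileLoader.withGit('git#<reducted>', 'v1', 'ssh-key-credential-id-number') {
                        application = fileLoader.load('<reducted>')
                        manifest = fileLoader.load('<reducted>')
                    }
                }
            }
        }
    }
}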
