How to do simple if-statements inside a declarative pipeline in Jenkins

I'm trying to convert my Scripted pipeline to a Declarative Pipeline.
Wondering how to do a simple if-statement inside a steps {} block.
stage ('Deploy to Docker') {
    steps {
        parallel (
            "instance1" : {
                environment {
                    containerId = sh(script: "docker ps --quiet --filter name=${fullDockerImageName}", returnStdout: true).trim()
                }
                steps {
                    if (containerId.isEmpty()) {
                        docker.image('some/image').run("--name ${fullDockerImageName}")
                    }
                }
            }
        )
    }
}
This causes the following Exception:
WorkflowScript: 201: Expected a step # line 201, column 29.
if (containerId.isEmpty()) {
Since I'm not allowed to do a simple if(..) inside a steps {} block, any idea on how to do this?
It doesn't seem to make sense to make this a full stage with a when {}, since there are more steps that happen within this one stage (starting a stopped container if it exists, etc.).
What's the best way to do a simple if?

This should work
pipeline {
    agent any
    stages {
        stage ('Main Stage') {
            steps {
                script {
                    if (true) {
                        stage ('Stage 1') {
                            sh 'echo Stage 1'
                        }
                    }
                    if (false) {
                        stage ('Stage 2') {
                            sh 'echo Stage 2'
                        }
                    }
                }
            }
        }
    }
}

Unfortunately you have to wrap it within a script {} block, for now. As it says here:
Declarative Pipelines may use all the available steps documented in the Pipeline Steps reference, which contains a comprehensive list of steps, with the addition of the steps listed below which are only supported in Declarative Pipeline.
And if you look at the step reference, it simply lists all plugins which contribute pipeline steps. As far as I can see, there is no step supporting if/then/else. So the answer is: no, right now it is not possible, but it should be fairly simple to implement this as a step and add it to a plugin.
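Applied to the question above, the same check can be expressed by wrapping the shell call and the if in a script {} block. A minimal sketch, assuming fullDockerImageName is defined elsewhere (for example in an environment {} block) and the Docker Pipeline plugin is installed:
stage('Deploy to Docker') {
    steps {
        script {
            // ask Docker whether a container with this name is already running
            def containerId = sh(
                script: "docker ps --quiet --filter name=${fullDockerImageName}",
                returnStdout: true
            ).trim()
            if (containerId.isEmpty()) {
                // nothing running yet, so start a new container
                docker.image('some/image').run("--name ${fullDockerImageName}")
            }
        }
    }
}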

I think this is the most correct / best-practice way to use if/else or other control logic within your Jenkins Declarative Pipeline:
https://jenkins.io/doc/book/pipeline/syntax/#when
As for @IronSean's answer, it doesn't seem like you need that plugin (anymore).
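For reference, a minimal when {} sketch in that style (the branch name and deploy script here are placeholders, not from the question):
stage('Deploy') {
    when {
        // only run this stage on the master branch
        expression { env.BRANCH_NAME == 'master' }
    }
    steps {
        sh './deploy.sh'
    }
}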

Using the Conditional BuildStep plugin you can add a when {} step to process a conditional.
The following should work, barring syntax issues with the isEmpty() check within this context.
stage ('Deploy to Docker') {
    steps {
        parallel (
            "instance1" : {
                environment {
                    containerId = sh(script: "docker ps --quiet --filter name=${fullDockerImageName}", returnStdout: true).trim()
                }
                when {
                    expression {
                        return containerId.isEmpty()
                    }
                }
                step {
                    docker.image('some/image').run("--name ${fullDockerImageName}")
                }
            }
        )
    }
}
The related blog post is here.
EDIT: Sorry, the actual syntax seems to be closer to this, which doesn't have access to your needed conditional:
stage ('Deploy to Docker') {
    when {
        expression {
            return containerId.isEmpty()
        }
    }
    steps {
        parallel (
            "instance1" : {
                environment {
                    containerId = sh(script: "docker ps --quiet --filter name=${fullDockerImageName}", returnStdout: true).trim()
                }
                step {
                    docker.image('some/image').run("--name ${fullDockerImageName}")
                }
            }
        )
    }
}
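For completeness: newer versions of Declarative Pipeline allow nested stages inside a parallel {} block, so each instance can carry its own logic in a script {} step without the Conditional BuildStep plugin. A rough sketch, reusing fullDockerImageName and the image name from the question (the second branch is only a placeholder):
stage('Deploy to Docker') {
    parallel {
        stage('instance1') {
            steps {
                script {
                    def containerId = sh(
                        script: "docker ps --quiet --filter name=${fullDockerImageName}",
                        returnStdout: true
                    ).trim()
                    if (containerId.isEmpty()) {
                        docker.image('some/image').run("--name ${fullDockerImageName}")
                    }
                }
            }
        }
        stage('instance2') {
            steps {
                // the same pattern would go here for the second instance
                echo 'instance2 deployment placeholder'
            }
        }
    }
}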

Related

Jenkins pipeline: detect if a stage is started with the "Restart from stage" icon

Let's say I have a declarative pipeline, and I want to run a stage only when the "Restart from stage" icon is used. Is there a way to do this (a method, a variable...)?
stage('Test') {
    when {
        expression {
            // An expression to detect if Restart from this stage is used
        }
    }
    steps {
        sh 'echo 1'
    }
}
You can define a global variable that holds a Boolean indicating whether the pipeline was executed from the beginning, set it in your first stage, and use it later on in the when condition to determine whether a restart from a stage has occurred.
Something like:
RESTART = true

pipeline {
    agent any
    stages {
        stage('Setup') {
            steps {
                script {
                    // signaling pipeline was executed from the beginning (first stage)
                    RESTART = false
                }
                // other setup steps
            }
        }
        stage('Test') {
            when {
                expression { return RESTART }
            }
            steps {
                sh 'echo 1'
            }
        }
    }
}
Another nice option, based on @Pamela's answer about using a cause condition, is to use the built-in triggeredBy option in the when directive, thus avoiding the need to call getBuildCauses() and filter all the causes, and instead getting the condition out of the box.
Something like:
stage('Test') {
    when { triggeredBy 'RestartDeclarativePipelineCause' }
    steps {
        sh 'echo 1'
    }
}
You can use currentBuild.getBuildCauses(): https://www.jenkins.io/doc/pipeline/examples/#get-build-cause
Then, in your Test stage, add a when expression checking that the cause of the build matches the one you need.
stage('Test') {
    when {
        expression {
            return currentBuild.getBuildCauses().any { cause ->
                cause._class == 'org.jenkinsci.plugins.pipeline.modeldefinition.causes.RestartDeclarativePipelineCause'
            }
        }
    }
    steps {
        sh 'echo 1'
    }
}

Jenkins pipeline - skip next stage on conditional failure of pylint

I have a Jenkinsfile with two different stages: Pre-Build and Build. The Pre-Build stage executes pylint and uses the warnings-ng plugin to report the results back to Jenkins.
Something like that:
stages {
    stage('Pre-build') {
        steps {
            script {
                sh """#!/usr/bin/env bash
                    pip install .
                    pylint --exit-zero --output-format=parseable --reports=n myProject > reports/pylint.log
                    """
            }
        }
        post {
            always {
                recordIssues(
                    enabledForFailure: true,
                    tool: pyLint(pattern: '**/pylint.log'),
                    unstableTotalAll: 20,
                    failedTotalAll: 30,
                )
            }
            failure {
                cleanWs()
            }
        }
    }
    stage('Build') {
        steps {
            script {
                sh """#!/usr/bin/env bash
                    set -e
                    echo 'I AM STAGE TWO AND I SHOULD NOT BE EXECUTED'
                    """
            }
        }
        post {
            always {
                cleanWs()
            }
        }
    }
}
I'm running into a couple of issues here. Currently I'm setting pylint to --exit-zero, as I want the warnings-ng plugin decide if it is good to go or not, based on the report.
Currently this is set to fail at a total of 30 issues. Now, myProject has 45 issues and I want to prevent the next stage, Build, from being entered. But currently I can't seem to prevent this behaviour, as it always continues to the Build stage.
The build is flagged as failure, because of the results determined within recordIssues, but it doesn't abort the job.
I've found a ticket on https://issues.jenkins-ci.org (Ticket), but I can't seem to make sense out of all of this.
I've found a solution to your problem, and I think this is a bug in the pipeline workflow. warnings-ng correctly sets the build status to failed, but the next stages are started despite the status in the ${currentBuild.currentResult} variable.
You can use when { expression { return currentBuild.currentResult == "SUCCESS" } } to skip later stages, or throw an error. But I think this should be the default behaviour. Your file should then look like:
stages {
    stage('Pre-build') {
        steps {
            script {
                sh """#!/usr/bin/env bash
                    pip install .
                    pylint --exit-zero --output-format=parseable --reports=n myProject > reports/pylint.log
                    """
            }
        }
        post {
            always {
                recordIssues(
                    enabledForFailure: true,
                    tool: pyLint(pattern: '**/pylint.log'),
                    unstableTotalAll: 20,
                    failedTotalAll: 30,
                )
            }
        }
    }
    stage('Build') {
        when { expression { return currentBuild.currentResult == "SUCCESS" } }
        steps {
            script {
                echo "currentResult: ${currentBuild.currentResult}"
                sh """#!/usr/bin/env bash
                    set -e
                    echo 'I AM STAGE TWO AND I SHOULD NOT BE EXECUTED'
                    """
            }
        }
    }
}
post {
    always {
        cleanWs()
    }
}
I've created an issue in their Jira.
My environment:
Jenkins ver.: 2.222.1
warnings-ng ver.: 8.1
worfklow-api ver.: 2.40
You have used post twice, which is a wrong implementation, as post is designed to be executed only once after all stages are done. It should be written after all the stages, just before the end of the pipeline.
To stop or skip the execution of the second Build stage, you can create a global variable at the top, capture the pylint result in it, and use an if or when condition at the start of the stage. Something similar to:
def result

pipeline {
    agent any
    stages {
        stage('Pre-build') {
            steps {
                script {
                    sh """#!/usr/bin/env bash
                        pip install .
                        pylint --exit-zero --output-format=parseable --reports=n myProject > reports/pylint.log
                        """
                }
            }
        }
        stage('Pylint result') { // Not sure how recordIssues works. This is just an example.
            steps {
                script {
                    result = recordIssues(
                        enabledForFailure: true,
                        tool: pyLint(pattern: '**/pylint.log'),
                        unstableTotalAll: 20,
                        failedTotalAll: 30,
                    )
                }
            }
        }
        stage('Build') {
            when { expression { return result == "pass" } }
            steps {
                script {
                    sh """#!/usr/bin/env bash
                        set -e
                        echo 'I AM STAGE TWO AND I SHOULD NOT BE EXECUTED'
                        """
                }
            }
        }
    }
    post { // this should be used after stages
        always {
            cleanWs()
        }
        failure {
            cleanWs()
        }
    }
}
Also, stages are designed in such a way that if one fails, the next stage will not be executed, so it's a good idea to have pylint executed inside a stage instead of a post condition.
Note: The code above is just an example. Please modify it according to your need.
One option that you may consider is to fail the build explicitly with the following code:
post {
    always {
        recordIssues(
            enabledForFailure: true,
            tool: pyLint(pattern: '**/pylint.log'),
            unstableTotalAll: 20,
            failedTotalAll: 30
        )
        script {
            if (currentBuild.currentResult == 'FAILURE') {
                error('Ensure that the build fails if the quality gates fail')
            }
        }
    }
}
Here, after you record the issues, you also check if the value of currentBuild.currentResult is FAILURE and in that case you explicitly call the error() function which fails the build correctly.

Is it possible to create parallel Jenkins Declarative Pipeline stages in a loop?

I have a list of long running Gradle tasks on different sub projects in my project. I would like to run these in parallel using Jenkins declarative pipeline.
I was hoping something like this might work:
projects = [":a", ":b", ":c"]

pipeline {
    stage("Deploy") {
        parallel {
            for (project in projects) {
                stage(project) {
                    when {
                        expression {
                            someConditionalFunction(project)
                        }
                    }
                    steps {
                        sh "./gradlew ${project}:someLongrunningGradleTask"
                    }
                }
            }
        }
    }
}
Needless to say, that gives a compile error, since it was expecting stage instead of for. Any ideas on how to overcome this? Thanks.
I was trying to reduce duplicated code in my existing Jenkinsfile using declarative pipeline syntax. Finally I was able to wrap my head around the difference between scripted and declarative syntax.
It is possible to use scripted pipeline syntax in a declarative pipeline by wrapping it with a script {} block.
Check out my example below: you will see that all three parallel stages finish at the same time after waking up from the sleep command.
def jobs = ["JobA", "JobB", "JobC"]

def parallelStagesMap = jobs.collectEntries {
    ["${it}" : generateStage(it)]
}

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
            sh script: "sleep 15"
        }
    }
}
pipeline {
    agent any
    stages {
        stage('non-parallel stage') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('parallel stage') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
    }
}
Parallel wants a map structure. You are doing this a little inside-out. Build your map and then just pass it to parallel, rather than trying to iterate inside parallel.
Option 2 on this page shows you a way to do something similar to what you are trying.
At this link you can find a more complex way I did this, similar to a matrix/multi-config job:
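A rough sketch of that build-the-map-first idea applied to the question's projects list; someConditionalFunction and the Gradle task name are taken from the question, everything else is illustrative:
projects = [":a", ":b", ":c"]

def parallelDeploys = projects.collectEntries { project ->
    [(project): {
        stage(project) {
            // plain Groovy replaces the when {} directive here
            if (someConditionalFunction(project)) {
                sh "./gradlew ${project}:someLongrunningGradleTask"
            } else {
                echo "Skipping ${project}"
            }
        }
    }]
}

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                script {
                    parallel parallelDeploys
                }
            }
        }
    }
}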

Scripted jenkinsfile parallel stage

I am attempting to write a scripted Jenkinsfile using the groovy DSL which will have parallel steps within a set of stages.
Here is my jenkinsfile:
node {
    stage('Build') {
        sh 'echo "Build stage"'
    }
    stage('API Integration Tests') {
        parallel Database1APIIntegrationTest: {
            try {
                sh 'echo "Build Database1APIIntegrationTest parallel stage"'
            }
            finally {
                sh 'echo "Finished this stage"'
            }
        }, Database2APIIntegrationTest: {
            try {
                sh 'echo "Build Database2APIIntegrationTest parallel stage"'
            }
            finally {
                sh 'echo "Finished this stage"'
            }
        }, Database3APIIntegrationTest: {
            try {
                sh 'echo "Build Database3APIIntegrationTest parallel stage"'
            }
            finally {
                sh 'echo "Finished this stage"'
            }
        }
    }
    stage('System Tests') {
        parallel Database1APIIntegrationTest: {
            try {
                sh 'echo "Build Database1APIIntegrationTest parallel stage"'
            }
            finally {
                sh 'echo "Finished this stage"'
            }
        }, Database2APIIntegrationTest: {
            try {
                sh 'echo "Build Database2APIIntegrationTest parallel stage"'
            }
            finally {
                sh 'echo "Finished this stage"'
            }
        }, Database3APIIntegrationTest: {
            try {
                sh 'echo "Build Database3APIIntegrationTest parallel stage"'
            }
            finally {
                sh 'echo "Finished this stage"'
            }
        }
    }
}
I want to have 3 stages: Build; Integration Tests and System Tests.
Within the two test stages, I want to have 3 sets of the tests executed in parallel, each one against a different database.
I have 3 available executors. One on the master, and 2 agents and I want each parallel step to run on any available executor.
What I've noticed is that after running my pipeline, I only see the 3 stages, each marked out as green. I don't want to have to view the logs for that stage to determine whether any of the parallel steps within that stage were successful/unstable/failed.
I want to be seeing the 3 steps within my test stages - marked as either green, yellow or red (Success, unstable or failed).
I've considered expanding the tests out into their own stages, but have realised that parallel stages are not supported (Does anyone know whether this will ever be supported?), so I cannot do this as the pipeline would take far too long to complete.
Any insight would be much appreciated, thanks
In Jenkins scripted pipeline, parallel(...) takes a Map describing each stage to be built. Therefore you can programmatically construct your build stages up-front, a pattern which allows flexible serial/parallel switching.
I've used code similar to this, where prepareBuildStages returns a List of Maps; each List element is executed in sequence, whilst the Map describes the parallel stages at that point.
// main script block
// could use eg. params.parallel build parameter to choose parallel/serial
def runParallel = true
def buildStages

node('master') {
    stage('Initialise') {
        // Set up List<Map<String,Closure>> describing the builds
        buildStages = prepareBuildStages()
        println("Initialised pipeline.")
    }

    for (builds in buildStages) {
        if (runParallel) {
            parallel(builds)
        } else {
            // run serially (nb. Map is unordered! )
            for (build in builds.values()) {
                build.call()
            }
        }
    }

    stage('Finish') {
        println('Build complete.')
    }
}

// Create List of build stages to suit
def prepareBuildStages() {
    def buildStagesList = []
    for (i=1; i<5; i++) {
        def buildParallelMap = [:]
        for (name in [ 'one', 'two', 'three' ] ) {
            def n = "${name} ${i}"
            buildParallelMap.put(n, prepareOneBuildStage(n))
        }
        buildStagesList.add(buildParallelMap)
    }
    return buildStagesList
}

def prepareOneBuildStage(String name) {
    return {
        stage("Build stage:${name}") {
            println("Building ${name}")
            sh(script:'sleep 5', returnStatus:true)
        }
    }
}
The resulting pipeline appears as:
There are certain restrictions on what can be nested within a parallel block; refer to the pipeline documentation for exact details. Unfortunately much of the reference seems biased towards declarative pipeline, despite it being rather less flexible than scripted (IMHO).
The pipeline examples page was the most helpful.
Here's a simple example without loops or functions, based on @Ed Randall's post:
node('docker') {
    stage('unit test') {
        parallel([
            hello: {
                echo "hello"
            },
            world: {
                echo "world"
            }
        ])
    }
    stage('build') {
        def stages = [:]
        stages["mac"] = {
            echo "build for mac"
        }
        stages["linux"] = {
            echo "build for linux"
        }
        parallel(stages)
    }
}
...which yields this:
Note that the values of the Map don't need to be stages. You can give the steps directly.
Here is an example from their docs:
Parallel execution
The example in the section above runs tests across two different platforms in a linear series. In practice, if the make check execution takes 30 minutes to complete, the "Test" stage would now take 60 minutes to complete!
Fortunately, Pipeline has built-in functionality for executing portions of Scripted Pipeline in parallel, implemented in the aptly named parallel step.
Refactoring the example above to use the parallel step:
// Jenkinsfile (Scripted Pipeline)
stage('Build') {
    /* .. snip .. */
}
stage('Test') {
    parallel linux: {
        node('linux') {
            checkout scm
            try {
                unstash 'app'
                sh 'make check'
            }
            finally {
                junit '**/target/*.xml'
            }
        }
    },
    windows: {
        node('windows') {
            /* .. snip .. */
        }
    }
}
To simplify @Ed Randall's answer here.
Remember this is scripted Jenkinsfile syntax (not declarative):
stage("Some Stage") {
// Stuff ...
}
stage("Parallel Work Stage") {
// Prealocate dict/map of branchstages
def branchedStages = [:]
// Loop through all parallel branched stage names
for (STAGE_NAME in ["Branch_1", "Branch_2", "Branch_3"]) {
// Define and add to stages dict/map of parallel branch stages
branchedStages["${STAGE_NAME}"] = {
stage("Parallel Branch Stage: ${STAGE_NAME}") {
// Parallel stage work here
sh "sleep 10"
}
}
}
// Execute the stages in parallel
parallel branchedStages
}
stage("Some Other Stage") {
// Other stuff ...
}
Please pay attention to the curly braces.
This will result in the following result (with the BlueOcean Jenkins Plugin):
I was also trying something similar to execute parallel stages and display all of them in the stage view. You should write a stage inside each parallel branch, as shown in the following code block.
// Jenkinsfile (Scripted Pipeline)
stage('Build') {
    /* .. Your code/scripts .. */
}
stage('Test') {
    parallel 'linux': {
        stage('Linux') {
            /* .. Your code/scripts .. */
        }
    }, 'windows': {
        stage('Windows') {
            /* .. Your code/scripts .. */
        }
    }
}
The above example with a for loop is wrong, as the variable STAGE_NAME will be overwritten every time (the closures capture it lazily, so every branch ends up seeing the last value); I had the same problem as Wei Huang.
Found the solution here:
https://www.convalesco.org/notes/2020/05/26/parallel-stages-in-jenkins-scripted-pipelines.html
def branchedStages = [:]
def STAGE_NAMES = ["Branch_1", "Branch_2", "Branch_3"]

STAGE_NAMES.each { STAGE_NAME ->
    // Define and add to stages dict/map of parallel branch stages
    branchedStages["${STAGE_NAME}"] = {
        stage("Parallel Branch Stage: ${STAGE_NAME}") {
            // Parallel stage work here
            sh "sleep 10"
        }
    }
}

parallel branchedStages
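For what it's worth, the original for loop also works if the loop variable is first copied into a local def, so each closure captures its own value rather than the shared one. A sketch of that standard Groovy closure-capture fix, reusing the same stage names:
def branchedStages = [:]
for (name in ["Branch_1", "Branch_2", "Branch_3"]) {
    def stageName = name // local copy; each closure captures its own variable
    branchedStages[stageName] = {
        stage("Parallel Branch Stage: ${stageName}") {
            sh "sleep 10"
        }
    }
}
parallel branchedStages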
I have used it as below, where the three stages run in parallel.
def testCases() {
    stage('Test Cases') {
        def stages = [:] // declaring an empty map
        stages['Unit Testing'] = {
            sh "echo Unit Testing completed"
        }
        stages['Integration Testing'] = {
            sh "echo Integration Testing completed"
        }
        stages['Function Testing'] = {
            sh "echo Function Testing completed"
        }
        parallel(stages) // running the stages in parallel
    }
}
I have used stage{} in parallel blocks several times. Then each stage shows up in the Stage view. The parent stage that contains parallel doesn't include the timing for all the parallel stages, but each parallel stage shows up in stage view.
In blue ocean, the parallel stages appear separately instead of the stages showing. If there is a parent stage, it shows as the parent of the parallel stages.
If you don't have the same experience, maybe a plugin upgrade is due.

How do I assure that a Jenkins pipeline stage is always executed, even if a previous one failed?

I am looking for a Jenkinsfile example of having a step that is always executed, even if a previous step failed.
I want to assure that I archive some builds results in case of failure and I need to be able to have an always-running step at the end.
How can I achieve this?
We switched to using Jenkinsfile Declarative Pipelines, which lets us do things like this:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh './gradlew check'
            }
        }
    }
    post {
        always {
            junit 'build/reports/**/*.xml'
        }
    }
}
References:
Tests and Artifacts
Jenkins Pipeline Syntax
try {
    sh "false"   // a step that fails the build
} finally {
    // the finally block runs regardless of what happened above
    stage('finalize') {
        echo "I will always run!"
    }
}
Another possibility is to use a parallel section in combination with a lock. For example:
pipeline {
    agent any
    stages {
        stage('Parallel stages') {
            parallel {
                stage('Stage 1') {
                    steps {
                        lock('MY_LOCK') {
                            echo 'do stuff 1'
                        }
                    }
                }
                stage('Stage 2') {
                    steps {
                        lock('MY_LOCK') {
                            echo 'do stuff 2'
                        }
                    }
                }
            }
        }
    }
}
Parallel stages in a parallel section only abort other stages in the same parallel section if the fail fast option for the parallel section is set. See the docs.
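For reference, that fail fast option is set with failFast inside the stage that contains the parallel block. A minimal sketch (the stage names and shell scripts are placeholders):
stage('Tests') {
    failFast true // abort the sibling branches as soon as one of them fails
    parallel {
        stage('Unit') {
            steps {
                sh './run-unit-tests.sh'
            }
        }
        stage('Integration') {
            steps {
                sh './run-integration-tests.sh'
            }
        }
    }
}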
