How to detect which parallel stage failed in a Jenkins declarative pipeline?

My Jenkins pipeline runs several tasks in parallel. It appears that if one stage fails, every parallel stage runs its failure post block (whether it actually failed or not). I don't know if this is by design or if I'm doing something wrong.
Note: This pipeline runs on a Windows node, hence the bat('exit /b 1')
pipeline {
    agent any
    stages {
        stage('Parallel steps') {
            parallel {
                stage('Successful stage') {
                    steps {
                        script { sleep 10 }
                    }
                    post {
                        failure {
                            echo('detected failure: Successful stage')
                        }
                    }
                }
                stage('Failure stage') {
                    steps {
                        script { bat('exit /b 1') }
                    }
                    post {
                        failure {
                            echo('detected failure: Failure stage')
                        }
                    }
                }
            }
        }
    }
}
In the above pipeline, only 'Failure stage' fails, yet in the output I see the following, indicating that the failure conditional executed for both stages!
Started by user Doe, John
Running on WINDOWS_NODE in D:\workspace
[Successful stage] Sleeping for 10 sec
[Failure stage] [test] Running batch script
[Failure stage] D:\workspace>exit /b 1
Post stage
[Pipeline] [Failure stage] echo
[Failure stage] detected failure: Failure stage
[Failure stage] Failed in branch Failure stage
Post stage
[Pipeline] [Successful stage] echo
[Successful stage] detected failure: Successful stage
ERROR: script returned exit code 1
Finished: FAILURE
What's the best way for me to detect which parallel stage failed and report it to the overall pipeline?

It looks like this is a known bug with Declarative Pipelines. I had to give up using the built-in post->failure block and use try/catch instead, which has its own problems:
You have to catch and then re-throw the error in order to make the stage fail appropriately.
The UI can get a little confusing, as the step that failed is no longer highlighted in red (but the error message is still in the log).
The code is slightly less readable.
This code works correctly. Only the failing stage echoes "detected failure" instead of both.
pipeline {
    agent any
    stages {
        stage('Parallel steps') {
            parallel {
                stage('Successful stage') {
                    steps {
                        script {
                            try {
                                sleep 10
                            } catch (e) {
                                echo('detected failure: Successful stage')
                                throw(e)
                            }
                        }
                    }
                }
                stage('Failure stage') {
                    steps {
                        script {
                            try {
                                bat('exit /b 1')
                            } catch (e) {
                                echo('detected failure: Failure stage')
                                throw(e)
                            }
                        }
                    }
                }
            }
        }
    }
}
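To also report which branch failed to the overall pipeline (the second half of the question), one variation on the same try/catch pattern, not part of the original answer, is to record the branch name in a variable defined at the top of the Jenkinsfile and read it from the pipeline-level post block. A minimal sketch, where failedParallelStage is a hypothetical variable name and env.STAGE_NAME is assumed to hold the name of the enclosing parallel stage:
// Sketch only: remember which parallel branch failed and report it at the end.
def failedParallelStage = ''

pipeline {
    agent any
    stages {
        stage('Parallel steps') {
            parallel {
                stage('Failure stage') {
                    steps {
                        script {
                            try {
                                bat('exit /b 1')
                            } catch (e) {
                                // Record the failing branch before re-throwing
                                failedParallelStage = env.STAGE_NAME
                                throw(e)
                            }
                        }
                    }
                }
            }
        }
    }
    post {
        failure {
            echo("detected failure in: ${failedParallelStage}")
        }
    }
}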

Related

How to mark whole Jenkins pipeline build as SUCCESS, after a stage fails and the remaining stages don't run?

My first stage runs a shell script. Exit 0 marks it as success and exit 1 marks it as fail. How can I read this result into the pipeline and get the desired behavior:
Run stage 1
If stage 1 fails, don't run the remaining stages, but mark the whole pipeline as a success
If stage 1 succeeds, run the remaining stages
If any of them fail, mark the pipeline as a fail
If they all succeed, mark the pipeline as a success
I am doing this in a declarative pipeline, how can I enforce this behavior?
You can use something like this: catch the error and then change the currentBuild result:
pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                script {
                    try {
                        // do something that fails
                        sh "exit 1"
                    } catch (Exception err) {
                        currentBuild.result = 'SUCCESS'
                    }
                }
            }
        }
        stage('Stage 2') {
            steps {
                echo "Stage 2"
            }
        }
        stage('Stage 3') {
            steps {
                echo "Stage 3"
            }
        }
    }
}
If you need to change the result of a specific stage, have a look at this link, which explains how to do it.
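For reference, a minimal sketch of that approach, assuming a Jenkins version recent enough that the catchError step supports the stageResult parameter: the stage is shown as failed while the overall build result stays SUCCESS.
stage('Stage 1') {
    steps {
        // Mark only this stage as FAILURE while keeping the overall build result SUCCESS
        catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
            sh "exit 1"
        }
    }
}
Note that, unlike the try/catch above, the remaining stages will still run after catchError, so you would additionally need something like a when condition if you want them skipped.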

How to execute the next stage in sequential stages in spite of a previous stage failure in a Jenkins pipeline

At the moment I have two stages defined in Jenkins because they both need different agents.
Here is my Proof of concept code
stages {
    stage("Run Tests") {
        agent docker
        stage("Do setup") {}
        stage("Testing") {}
        post {
            always {
                echo "always"
            }
            failure {
                echo "failure"
            }
        }
    }
    stage("Generate Reports") {
        agent node-label
        stage("Generate") {
        }
    }
}
I need "Generate Reports" on a different agent since certain binaries are on the node and not inside the Docker container. Tests run inside Docker and share a volume with the node, so I get all the artifacts needed to generate the report on the node.
(I have tried to run "Generate Reports" in a post stage, but it seems to run inside the Docker container somehow.)
Now if "Run Tests" fails, "Generate Reports" is skipped due to the previous stage failure. Any idea how I can force the "Generate Reports" stage to run even if the previous stage fails?
Below is the pipeline.
pipeline {
    agent none
    stages {
        stage("Run Tests") {
            agent { label "agent1" }
            stages {
                stage("Do setup") {
                    steps {
                        sh 'exit 0'
                    }
                }
                stage("Testing") {
                    steps {
                        catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                            sh "exit 1"
                        } //catchError
                    } //steps
                    post {
                        always {
                            echo "always"
                        }
                        failure {
                            echo "failure"
                        }
                    } // post
                } // Testing
            } // stages
        } // Run Tests
        stage("Generate Reports") {
            agent { label "agent2" }
            steps {
                sh 'exit 0'
            }
        } // Reports
    }
}
The pipeline is successful, but the Testing stage is shown as failed. You can choose the buildResult and stageResult values depending on whether you want the result to be unstable or failed.
If you want the "Generate Reports" stage to always run, you could mark the earlier stages as unstable when there are failures. That way Jenkins will execute all stages and will not stop on an error in a particular stage.
Example:
stages {
    stage("Run Tests") {
        agent docker
        stage("Do setup") {}
        stage("Testing") {
            steps {
                script {
                    // To show an example, execute "set 1", which will return failure
                    def result = bat label: 'Check bat......', returnStatus: true, script: "set 1"
                    if (result != 0) {
                        // If the result status is not 0, mark this stage as unstable to continue ahead,
                        // and all later stages will be executed
                        unstable('Testing failed')
                    }
                }
            }
        }
    }
    stage("Generate Reports") {
        agent node-label
        stage("Generate") {
        }
    }
}
Option 2: if you don't want to handle it via the return status, you can use a try/catch block.
stage("Testing") {
steps {
script {
try {
// To show an example, Execute "set 1" which will return failure
bat "set 1"
}
catch (e){
unstable('Testing failed!')
}
}
}
}
Option 3: You can mark the complete build as a success irrespective of the stage failure.
stage("Testing") {
steps {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE'){
script {
// To show an example, Execute "set 1" which will return failure
bat "set 1"
}
}
}
NOTE: Option 3 has a drawback: it will not execute further steps in the same stage if any earlier step fails.
Example:
In this example, the message "Testing stage" will not be printed because the bat "set 1" command fails.
stage("Testing") {
steps {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE'){
script {
// To show an example, Execute "set 1" which will return failure
bat "set 1"
// Note : Below step will not be executed since there was failure in prev step
println "Testing stage"
}
}
}
Option 4: keep the Generate Reports stage in the post/always section of the build (which you already tried out) so that it always executes irrespective of any failure.
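For completeness, a sketch of what Option 4 could look like while still targeting the second agent. This is an assumption, not a verified fix for the "runs inside the Docker container" issue: the reporting node is allocated explicitly with a scripted node step inside the post block. The agent2 label and generate-reports.sh script are placeholders.
stage("Run Tests") {
    agent { label "agent1" }
    steps {
        sh "exit 1"   // tests may fail
    }
    post {
        always {
            script {
                // Explicitly allocate the reporting agent so the reporting step
                // does not run inside the container used for the tests
                node("agent2") {
                    sh './generate-reports.sh'   // placeholder reporting command
                }
            }
        }
    }
}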

How would I store all failed stages of my declarative Jenkins pipeline

In my Jenkins pipeline, I have 15 stages. I have a post block at the end of the Jenkinsfile to send me an email about whether the whole process failed or succeeded. I would like to include all of the failed stages in the email too. Using post in each stage is not a good idea, because I would receive 15 emails each time the job runs.
I am thinking of creating a list, saving each failed env.STAGE_NAME to it, and printing the list at the end, but the post block does not seem to allow me to do that.
I want to achieve something like:
pipeline {
    agent { label 'master' }
    stages {
        stage('1') {
            steps {
                echo 'make fail'
            }
        }
        stage('2') {
            steps {
                sh 'make fail'
            }
        }
        ...
        stage('15') {
            steps {
                sh 'make fail'
            }
        }
    }
    post {
        always {
            echo 'ok'
        }
        failure {
            echo "There are 3 stages that have failed the test, which are: '1', '2', '15'"
        }
    }
}
How would I do it?
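No answer was recorded here, but a minimal sketch of the approach the question describes (not a verified answer) would be to keep a global list defined at the top of the Jenkinsfile, append env.STAGE_NAME from a stage-level post { failure } block, and read the list in the pipeline-level post. Note that once a stage fails the later stages are skipped by default, so to actually collect several failures you would also need something like catchError in each stage. The failedStages variable is a hypothetical name.
// Sketch only: collect the names of failed stages and report them at the end.
def failedStages = []

pipeline {
    agent { label 'master' }
    stages {
        stage('1') {
            steps {
                sh 'make fail'
            }
            post {
                failure {
                    // Record this stage's name; a script block is needed for the Groovy code
                    script { failedStages << env.STAGE_NAME }
                }
            }
        }
        // ... the same stage-level post block repeated for the other stages ...
    }
    post {
        failure {
            echo "Failed stages: ${failedStages.join(', ')}"
            // the same string could be included in the notification email body
        }
    }
}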

How to mark build success when one of the stages is aborted?

I have a pipeline with stages where one of the stages intermittently takes longer than expected, hence I am using a timeout to abort it. But if the stage is aborted, the build is also marked as aborted. Following is the code for the pipeline:
pipeline {
    agent any
    stages {
        stage('First') {
            options {
                timeout(time: 10, unit: 'SECONDS')
            }
            steps {
                script {
                    catchError(buildResult: 'SUCCESS') {
                        echo "Executing stage I"
                        sleep 12
                    }
                }
            }
        }
        stage('Second') {
            steps {
                script {
                    echo "Executing stage II"
                }
            }
        }
    }
}
Even though the stage is marked as Aborted, I want to mark the build as Success. Can you please help me achieve this?
I would suggest one improvement to Michael's answer (which is correct, by the way). You can use catchError to mark the stage ABORTED (or UNSTABLE) and mark the build SUCCESS, but you need to wrap the code that may time out in a try/catch block to control the error. Consider the following example:
pipeline {
    agent any
    stages {
        stage('First') {
            options {
                timeout(time: 3, unit: 'SECONDS')
            }
            steps {
                script {
                    catchError(buildResult: 'SUCCESS', stageResult: 'ABORTED') {
                        try {
                            echo "Executing stage I"
                            sleep 4
                        } catch (org.jenkinsci.plugins.workflow.steps.FlowInterruptedException e) {
                            error "Stage interrupted with ${e.toString()}"
                        }
                    }
                }
            }
        }
        stage('Second') {
            steps {
                script {
                    echo "Executing stage II"
                }
            }
        }
    }
}
When you run this pipeline, the stage that timed out is marked as ABORTED, but the pipeline continues, and if there is no failure in the remaining stages, the build is marked as SUCCESS.
Michael's solution works as well, but it produces a slightly different result - the stage that times out is marked as SUCCESS, and this might be less intuitive. You need to click on the stage to check if it timed out or not.
pipeline {
    agent any
    stages {
        stage('First') {
            options {
                timeout(time: 3, unit: 'SECONDS')
            }
            steps {
                script {
                    try {
                        echo "Executing stage I"
                        sleep 4
                    } catch (Exception e) {
                        currentBuild.result = "SUCCESS"
                    }
                }
            }
        }
        stage('Second') {
            steps {
                script {
                    echo "Executing stage II"
                }
            }
        }
    }
}
Your catchError() won't work in your case. The documentation (source) says the following:
buildResult (optional)
If an error is caught, the overall build result will be set to this value. Note that the build result can only get worse, so you cannot change the result to SUCCESS if the current result is UNSTABLE or worse. Use SUCCESS or null to keep the build result from being set when an error is caught.
The current build status can be read from currentBuild.currentResult, which can have three values: SUCCESS, UNSTABLE, or FAILURE.
If you want to mark the build as SUCCESS on abortion, the aborted post condition (source) can be used:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
    post {
        aborted {
            // Executed only if the build is aborted; the Groovy assignment needs a script block
            script {
                currentBuild.result = 'SUCCESS'
            }
        }
    }
}

Jenkins Pipeline still executes following stages even though current stage failed

I'm implementing a try/catch block in most of the stages inside my Jenkins pipeline to skip all the following stages when the current stage fails. However, one of my stages returns an error but the following stages still execute.
I've tried using sh 'exit 1', currentStage.result = 'FAILED', and an if/else clause to check the stage result, but to no avail.
pipeline {
    agent none
    stages {
        stage ('one') {
            steps {
                echo 'Next stage should be skipped if this stage fails'
                script {
                    try {
                        sh '''#!/bin/bash -l
                        source ~/.nvm/nvm.sh
                        nvm use node
                        node somefile.js'''
                    }
                    catch (e) {
                        currentBuild.result = 'FAILURE'
                        throw e
                    }
                }
            }
        }
        stage ('two') {
            steps {
                echo 'This stage should be skipped if the prior stage throws an error'
            }
        }
    }
}
I expect stage two to be skipped as my somefile.js is throwing an error.
You can use the when-clause that Jenkins provides (Source).
stage ('two') {
    // Skip this stage if the pipeline has already failed
    when { not { equals expected: 'FAILURE', actual: currentBuild.result } }
    steps {
        echo 'This stage should be skipped if the prior stage throws an error'
    }
}
