Jenkins Pipeline post not honoring stage-level success when an earlier stage failed within a catchError step

I have a Jenkins Pipeline where I am doing deployment, running automated tests and then posting the results to the Test Management System.
If the Deploy stage fails, I don't want to proceed with the Run Tests and Post Results stages.
If Run Tests fails, I still want to go ahead and post both the passed and the failed test results to the Test Management System.
For each stage, I want to trigger an email when that stage fails.
pipeline {
    agent { label 'my-agent' }
    stages {
        stage('Deploy') {
            steps {
                // carry out deployment
            }
            post {
                failure {
                    // send email that deployment failed
                }
            }
        }
        stage('Run Tests') {
            steps {
                catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                    // carry out run
                }
            }
            post {
                failure {
                    // send email that run tests failed
                }
            }
        }
        stage('Post Results') {
            steps {
                catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                    // post the results to the test management system
                }
            }
            post {
                failure {
                    // send email that posting results encountered error
                }
            }
        }
    }
}
The problem:
The email triggers for Deploy and Run Tests are working fine. However, when Run Tests has failures, control enters the failure block of the post section for the Post Results stage, even though the results are successfully posted to the Test Management System.
I tried setting buildResult to SUCCESS and stageResult to FAILURE. With that, control does not enter the failure block even for the stage that actually failed.
What changes do I need to make so that the failure email for Post Results is not sent when that stage succeeds but the earlier Run Tests stage has failed?

This is not an ideal solution, but I have worked around the issue by moving the post-results error email trigger from the post section into a separate stage.
The Post Results stage now has a touch import_success statement at the end, which is executed only when posting the results succeeds.
The 'Post Results Error Email Trigger' stage has a when clause that checks for the existence of the import_success file and sends the email only when that file does not exist.
Here is the final pipeline:
pipeline {
    agent { label 'my-agent' }
    stages {
        stage('Deploy') {
            steps {
                // carry out deployment
            }
            post {
                failure {
                    // send email that deployment failed
                }
            }
        }
        stage('Run Tests') {
            steps {
                catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                    // carry out run
                }
            }
            post {
                failure {
                    // send email that run tests failed
                }
            }
        }
        stage('Post Results') {
            steps {
                catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                    // post the results to the test management system
                    sh 'touch import_success' // this is executed only when post results is successful
                }
            }
        }
        stage('Post Results Error Email Trigger') {
            when {
                not {
                    expression {
                        fileExists 'import_success'
                    }
                }
            }
            steps {
                // send email that posting results encountered error
            }
        }
    }
}

Related

Jenkins declarative pipeline conditional post action depending on stage (not pipeline) status

I have a Jenkins declarative pipeline in which some stages have a post action:
stage('1 unit tests') { ..... }
stage('2 other tests') {
steps { ..... }
post {
success { ...... }
}
}
It seems that if a unit test fails (the build becomes unstable in stage 1), then the conditional post action of stage 2 is not performed.
How can I make sure the post action is only skipped if the build status changes during the current stage?
There are some "options", I don't know what you might like or find acceptable. If a stage is skipped; it also skips all of its internals.
1: It's not exactly what you want but you can manipulate/mark a stage differently than other stages and continue the execution using something like the skipStagesAfterUnstable option or catchError. For more info see this post. But it may also be to heavy-handed and forcing you into a limited set of results or lead to unwanted execution.
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE')
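For reference, a minimal sketch of the skipStagesAfterUnstable variant; the stage contents are placeholders, and the idea is simply that once a stage marks the build UNSTABLE, the remaining stages are skipped:
pipeline {
    agent any
    options {
        // stop executing later stages once the build has been marked UNSTABLE
        skipStagesAfterUnstable()
    }
    stages {
        stage('1 unit tests') {
            steps {
                // if this marks the build UNSTABLE (e.g. via junit), stage 2 is skipped
                sh './run-unit-tests.sh'
            }
        }
        stage('2 other tests') {
            steps {
                sh './run-other-tests.sh'
            }
        }
    }
}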
2: You can move the stage to a separate pipeline/job and trigger it from (or after) the run of this pipeline.
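A minimal sketch of option 2, assuming the extracted stage lives in a hypothetical downstream job named 'post-actions'; propagate: false keeps a failure of that job from failing this build:
stage('Trigger follow-up job') {
    steps {
        // run the extracted stage as its own job; its result does not affect this build
        build job: 'post-actions', propagate: false
    }
}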
3: Another option might be something like the following pseudo-code, but this feels more like a hack, and adding ('global') state flags adds clutter:
failedOne = false
failedTwo = false
pipeline {
    agent any
    stages {
        stage('Test One') {
            steps {...}
            post {
                failure {
                    script {
                        failedOne = true
                        postFailureOne()
                    }
                }
            }
        }
        stage('Test Two') {
            steps {...}
            post {
                failure {
                    script {
                        failedTwo = true
                        postFailureTwo()
                    }
                }
            }
        }
    }
    post {
        success { .... }
        failure {
            script {
                if (!failedTwo) postFailureTwo()
            }
        }
    }
}
void postFailureOne() { echo 'Oeps 1' }
void postFailureTwo() { echo 'Oeps 2' }

Continuous Integration pipeline

I am looking to trigger the on_failure step in my pipeline. I have a very simple script:
2 resources and 1 job. The job has a run step in which I would like to trigger a failure manually. I have tried many things and they all led to an error.
Is there a shell script exit code that could make the task fail without being errored?
Triggering post → failure and not failing the build is not possible:
failure
Only run the steps in post if the current Pipeline’s or stage’s run has a "failed" status, typically denoted by red in the web UI.
However, you can do the following:
def status
pipeline {
agent any
stages {
stage('Failing stage') {
steps {
script {
status = sh script: 'exit 99', returnStatus: true
}
}
}
}
post {
always {
script {
if ( status == 99 )
echo 'Script failed...'
else
echo 'Script succeeded...'
}
}
}
}
Example post -> failure
post {
always {
cleanWs()
}
success {
sendEmail('SUCCESSFUL')
}
unstable {
sendEmail('UNSTABLE')
}
failure {
sendEmail('FAILED')
}
}
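Note that sendEmail(...) in this example is not a built-in step; it stands for a user-defined helper (for example from a shared library) that wraps mail or emailext.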

How would I store all failed stages of my declarative Jenkins pipeline

In my Jenkins pipeline, I have 15 stages. I have a post section at the end of the Jenkinsfile to send me an email about whether the whole process failed or succeeded. I would like the email to also list all the stages that failed. Using post in each stage is not a good idea, because I would receive 15 emails each time the job runs.
I am thinking of creating a list, saving every failed env.STAGE_NAME in it, and printing it at the end, but the post section would not let me do that directly.
I want to achieve something like:
pipeline {
agent { label 'master'}
stages {
stage('1') {
steps {
sh 'make fail'
}
}
stage('2') {
steps {
sh 'make fail'
}
}
...
stage('15') {
steps {
sh 'make fail'
}
}
}
post {
always {
echo 'ok'
}
failure {
"There are 3 stages have failed the test, which are: '1', '2' '15'"
}
}
}
How would I do it?
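One way to realize the idea from the question is a global list that each stage appends to when its work fails. This is only a rough sketch under the question's assumptions (the make targets and the notification are placeholders); the failure is caught per stage so the remaining stages still run:
def failedStages = []

pipeline {
    agent { label 'master' }
    stages {
        stage('1') {
            steps {
                script {
                    try {
                        sh 'make fail'
                    } catch (err) {
                        failedStages << env.STAGE_NAME   // remember this stage
                        currentBuild.result = 'FAILURE'  // mark the build, but keep going
                    }
                }
            }
        }
        // ... same pattern for stages '2' to '15' ...
    }
    post {
        failure {
            script {
                echo "Stages that failed: ${failedStages.join(', ')}"
                // include the list in the notification email here
            }
        }
    }
}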

How to mark build success when one of the stages is aborted?

I have a pipeline with stages where one of the stages intermittently takes longer than expected, so I am using a timeout to abort it. But if the stage is aborted, the build is also marked as aborted. Following is the code for the pipeline:
pipeline {
agent any
stages {
stage('First') {
options {
timeout(time: 10, unit: 'SECONDS')
}
steps {
script {
catchError(buildResult: 'SUCCESS') {
echo "Executing stage I"
sleep 12
}
}
}
}
stage('Second') {
steps {
script {
echo "Executing stage II"
}
}
}
}
}
Even though the stage is marked as Aborted, I want to mark the build as Success. Can you please help me understand how I can achieve this?
I would suggest one improvement to Michael's answer (which is correct, by the way). You can use catchError to mark the stage ABORTED (or UNSTABLE) and the build SUCCESS, but you need to wrap the code that may time out in a try-catch block to control the error. Consider the following example:
pipeline {
agent any
stages {
stage('First') {
options {
timeout(time: 3, unit: 'SECONDS')
}
steps {
script {
catchError(buildResult: 'SUCCESS', stageResult: 'ABORTED') {
try {
echo "Executing stage I"
sleep 4
} catch(org.jenkinsci.plugins.workflow.steps.FlowInterruptedException e) {
error "Stage interrupted with ${e.toString()}"
}
}
}
}
}
stage('Second') {
steps {
script {
echo "Executing stage II"
}
}
}
}
}
When you run this pipeline, the stage that timed out is marked as ABORTED, but the pipeline continues, and if there is no failure in the remaining stages, the build is marked as SUCCESS.
Michael's solution works as well, but it produces a slightly different result: the stage that times out is marked as SUCCESS, which might be less intuitive. You need to click on the stage to check whether it timed out or not.
pipeline {
agent any
stages {
stage('First') {
options {
timeout(time: 3, unit: 'SECONDS')
}
steps {
script {
try {
echo "Executing stage I"
sleep 4
} catch(Exception e) {
currentBuild.result = "SUCCESS"
}
}
}
}
stage('Second') {
steps {
script {
echo "Executing stage II"
}
}
}
}
}
Your catchError() won't work in your case. The documentation (Source) says the following:
buildResult (optional)
If an error is caught, the overall build result
will be set to this value. Note that the build result can only get
worse, so you cannot change the result to SUCCESS if the current
result is UNSTABLE or worse. Use SUCCESS or null to keep the build
result from being set when an error is caught.
The current build status can be read from currentBuild.currentResult, which can have one of three values: SUCCESS, UNSTABLE, or FAILURE.
If you want to mark the build as SUCCESS on abortion, the aborted post condition (Source) can be used:
pipeline {
agent any
stages {
stage('Example') {
steps {
echo 'Hello World'
}
}
}
post {
aborted {
// Executed only if the build is aborted
script {
currentBuild.result = 'SUCCESS'
}
}
}
}

Jenkins: Ignore failure in pipeline build step

With the Jenkins build flow plugin this was possible:
ignore(FAILURE){
build( "system-check-flow" )
}
How to do this with Declarative Pipeline syntax?
To ignore a failed step in a declarative pipeline you basically have two options:
Use a script step and a try-catch block (similar to the earlier proposition by R_K, but in declarative style)
stage('someStage') {
steps {
script {
try {
build job: 'system-check-flow'
} catch (err) {
echo err.getMessage()
}
}
echo currentBuild.result
}
}
Use catchError
stage('someStage') {
steps {
catchError {
build job: 'system-check-flow'
}
echo currentBuild.result
}
}
In both cases the build won't be aborted upon an exception in build job: 'system-check-flow', and in both cases the echo step (and any other steps that follow) will be executed.
But there's one important difference between these two options. In the first case, if the try section raises an exception, the overall build status won't be changed (so echo currentBuild.result prints SUCCESS). In the second case, the overall build will fail (so echo currentBuild.result prints FAILURE).
This is important, because in the first case you can always fail the overall build afterwards (by setting currentBuild.result = 'FAILURE'), but in the second case you can't repair the build (currentBuild.result = 'SUCCESS' won't work).
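For instance, a sketch of the first variant where the overall build is still failed explicitly from the catch block (the job name is taken from the example above):
stage('someStage') {
    steps {
        script {
            try {
                build job: 'system-check-flow'
            } catch (err) {
                echo err.getMessage()
                // swallow the error for the stage, but fail the overall build anyway
                currentBuild.result = 'FAILURE'
            }
        }
        echo currentBuild.result
    }
}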
In addition to simply making the stage pass, it is now also possible to fail the stage, but continue the pipeline and pass the build:
pipeline {
agent any
stages {
stage('1') {
steps {
sh 'exit 0'
}
}
stage('2') {
steps {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh "exit 1"
}
}
}
stage('3') {
steps {
sh 'exit 0'
}
}
}
}
In the example above, all stages will execute, the pipeline will be successful, but stage 2 will show as failed.
As you might have guessed, you can freely choose the buildResult and stageResult, in case you want it to be unstable or anything else. You can even fail the build and continue the execution of the pipeline.
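For example, a sketch of the unstable variant, where both the stage and the build end up UNSTABLE instead of FAILURE:
stage('2') {
    steps {
        catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE') {
            sh "exit 1"
        }
    }
}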
Just make sure your Jenkins is up to date, since this feature is only available in "Pipeline: Basic Steps" 2.16 (May 14, 2019) or newer. Before that, catchError is still available, but without parameters:
steps {
catchError {
sh "exit 1"
}
}
I was looking for an answer for a long time and I found a hack for it: put the try/catch block around the whole stage:
try {
stage('some-stage') {
//do something
}
} catch (Exception e) {
echo "Stage failed, but we continue"
}
try {
stage("some-other-stage") { // do something }
} catch (Exception e) {
echo "Stage failed, but we still continue"
}
This is still not ideal, but it gives the necessary results.
In recent versions it's possible to pass the propagate: false option to the build step.
link:
https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
example:
build job:"jobName", propagate:false
Try this example:
stage('StageName1')
{
steps
{
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE')
{
SomeCodeThatCanBeErrored
}
}
}
stage('StageName2')
{
steps
{
ContinueOtherCode
}
}
For my declarative pipeline I have found another solution:
stage('Deploy test')
{
steps
{
bat returnStatus: true, script: 'sc stop Tomcat9'
// The return value of the step will be the status code!
// evaluate return status yourself, or ignore it
}
}
The same works for the sh command to execute scripts on Unix platforms.
The example ignores the return status, because Tomcat might already be stopped due to a previously failed pipeline run.
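A small sketch of the sh variant, this time evaluating the status instead of ignoring it (the stop command is just an example):
stage('Deploy test') {
    steps {
        script {
            // returnStatus: true prevents a non-zero exit code from failing the stage
            def rc = sh returnStatus: true, script: 'systemctl stop tomcat9'
            if (rc != 0) {
                echo "Stop command returned ${rc}; the service was probably not running"
            }
        }
    }
}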
In a scripted pipeline, you can use try-catch to achieve this.
node{
try{
build job: 'system-check-flow'
}
catch (err){
echo "system-check-flow failed"
}
try{
build job: 'job2'
}
catch (err){
echo "job2 failed"
}
}
Here it will build the 'system-check-flow' job. If that fails, it will catch the error, ignore it, and move on to build 'job2'.
See this post for a full discussion.
pipeline {
agent any
stages {
stage('Stage') {
steps{
script{
jobresult = build(job: './failing-job',propagate:false).result
if(jobresult != 'SUCCESS'){
catchError(stageResult: jobresult, buildResult: 'UNSTABLE'){
error("Downstream job failing-job failed.")
}
}
}
}
}
}
}
For all those wondering how to propagate the result of a downstream job to the stage/build: this is not the most graceful solution, but it gets the job done. Funny thing is that if this stageResult variable were available as a global variable, or as a variable outside the catchError block, these kinds of solutions would not be needed. Sadly it isn't, and this is the only way I could think of to set the stage result in a pipeline. The error() call is needed, otherwise catchError will not set the stageResult/buildResult (the catchError block requires an error, of course).
Complementing the existing working solutions that use catchError as a step or in script, you can also use catchError as a stage option.
This is useful if you have multiple sub stages that you want to catch errors for in the parent stage:
pipeline {
agent any
stages {
stage('Tests') {
options {
catchError(message: "Test failed", stageResult: 'UNSTABLE', buildResult: 'UNSTABLE')
}
stages {
stage('Test 1') {
steps {
echo 'test 1 succeeded'
}
}
stage('Test 2') {
steps {
error 'test 2 failed'
}
}
}
}
}
}
This isn't explicitly documented, but there is a hint that you may use steps as options (emphasis mine):
However, the stage-level options can only contain steps like retry,
timeout, or timestamps, or Declarative options that are relevant to a
stage, like skipDefaultCheckout.
It names a few steps as examples, but not as the only possible steps to be used as options. Also, if you enter an invalid option, Jenkins lists all available options in the error message, which includes catchError.
The cleanest and latest way would be:
stage('Integration Tests') {
steps {
script {
warnError(message: "${STAGE_NAME} stage was unstable.", catchInterruptions: false) {
// your scripts
}
}
}
}
Reference: https://www.jenkins.io/doc/pipeline/steps/workflow-basic-steps/#warnerror-catch-error-and-set-build-and-stage-result-to-unstable
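For context, warnError(message) is essentially a shorthand for catchError with both buildResult and stageResult set to UNSTABLE; the given message is logged as a warning when an error is caught.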
You could put the step script inside the post section, if it's a teardown-like step.
Code as below:
post {
    always {
        script {
            try {
                echo 'put your always-run scripts here (e.g. a teardown-like step)'
            } catch (err) {
                echo 'here failed'
            }
            emailext (
                xxxx
            )
        }
    }
}
