How to kill a stage of Jenkins Pipeline? - jenkins

I have a Jenkins pipeline with multiple stages, and because of an issue one of the stages is likely to run much longer than necessary.
Instead of aborting the entire pipeline build and skipping the next stages, I want to kill that specific stage, on which the other stages do not depend.
Is there a way to kill a specific stage of a Jenkins pipeline?

There are ways to skip a stage, but I'm not sure there is an option to kill a long-running stage. I'd simply add a conditional expression that decides whether to run the stage, or wrap the long-running stage in a timeout inside a try/catch block so it is skipped and the pipeline proceeds to the other stages you want, as below.
pipeline {
    agent any
    stages {
        stage('stage1') {
            steps {
                script {
                    try {
                        // deliberately tiny timeout so the step is aborted almost immediately
                        timeout(time: 2, unit: 'NANOSECONDS') {
                            echo "do your stuff"
                        }
                    } catch (Exception e) {
                        echo "Ended the never-ending stage and proceeding to the next stage"
                    }
                }
            }
        }
        stage('stage2') {
            steps {
                script {
                    echo "Hi Stage2"
                }
            }
        }
    }
}
Or check the Jenkins Pipeline syntax documentation for conditional steps/stages.
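A related declarative approach, sketched here under the assumption that the `catchError` step is available (it ships with recent versions of the Pipeline: Basic Steps plugin), is to wrap a realistic timeout in `catchError`, so the stage is marked as failed but later stages still run:

```groovy
pipeline {
    agent any
    stages {
        stage('possibly-long-stage') {
            steps {
                // mark only this stage FAILURE on timeout; keep the build going
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    timeout(time: 10, unit: 'MINUTES') {
                        echo 'do your long-running stuff'
                    }
                }
            }
        }
        stage('next-stage') {
            steps {
                echo 'runs even if the previous stage timed out'
            }
        }
    }
}
```

Compared to try/catch, this variant also records the stage as failed in the stage view instead of silently swallowing the timeout.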

You can try using a try/catch block in a scripted pipeline. If you catch the exception (and don't re-throw it), Jenkins will continue to execute the next stage even if a particular stage fails.
node {
    stage('Example') {
        try {
            sh 'exit 1'
        }
        catch (exc) {
            echo 'Something failed, I should sound the klaxons!'
            // note: re-throwing will still fail the build;
            // omit the throw if you want later stages to run
            throw
        }
    }
}
You can refer to the documentation here: https://jenkins.io/doc/book/pipeline/syntax/

Related

How to run Python builds in parallel (Jenkins)?

I'm currently building using an execute-shell step in Jenkins.
At the moment the scripts are run in order; I want them to run in parallel.
Desired behavior: build action, then test1.py through test4.py executed in parallel.
Is there a way to build in parallel like this (with execute shell), or is there another strategy?
You have several options to run things in parallel within a Jenkins pipeline.
The first option is to use the static parallel directive, which lets you easily define parallel stages inside your declarative pipeline, something like:
pipeline {
    agent any
    stages {
        stage('Non-Parallel Stage') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('Parallel Stages') {
            parallel {
                stage('Test 1') {
                    steps {
                        sh "python3 $WORKSPACE/folder/test1.py"
                    }
                }
                stage('Test 2') {
                    steps {
                        sh "python3 $WORKSPACE/folder/test2.py"
                    }
                }
                // ... more parallel stages ...
            }
        }
    }
}
A second, more dynamic option is to use the built-in parallel step, which takes a map from branch names to closures:
parallel firstBranch: {
    // do something
}, secondBranch: {
    // do something else
},
failFast: true|false
and use it to dynamically create your parallel execution steps, something like:
tests = ['test1', 'test2', 'test3', 'test4']
parallel tests.collectEntries { test ->
    ["Running test ${test}": {
        sh "python3 $WORKSPACE/folder/${test}.py"
    }]
}
This code can reside anywhere in a scripted pipeline, or inside a script block in a declarative pipeline.
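Putting the dynamic approach together in a complete scripted pipeline, with failFast enabled (the test names and the folder path are illustrative):

```groovy
// Scripted sketch: build the branch map dynamically from a list of tests.
node {
    def tests = ['test1', 'test2', 'test3', 'test4']
    def branches = tests.collectEntries { t ->
        ["Running ${t}": {
            sh "python3 ${env.WORKSPACE}/folder/${t}.py"
        }]
    }
    // abort the remaining branches as soon as one fails
    branches.failFast = true
    stage('Parallel tests') {
        parallel branches
    }
}
```

The advantage over the static directive is that the list of tests can be computed at runtime, e.g. from the files present in the workspace.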

Jenkins post actions on stage level are influenced by buildResult

In my Jenkins pipeline I have several post-success actions in different stages that define environment variables depending on the stage results (in the success case). I am also using parallel and sequential stages, but I do not believe this affects the result. Below is a trimmed excerpt:
stage('stage1') {
    agent {
        label 'agent1'
    }
    when {
        expression {
            return Jenkins.instance.getItem("${'job1'}").isBuildable()
        }
        environment name: 'job0', value: 'success'
    }
    steps {
        catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
            build job: 'job1'
        }
    }
    post {
        success {
            script {
                env.job1 = "success"
                echo "job1: ${env.job1} !"
            }
        }
    }
}
Now, whenever the build result is set to FAILURE within a stage by catchError, the post actions in other stages evaluate the build result instead of the related stage result, which I do not want (and believe they should not do), referring to https://www.jenkins.io/doc/book/pipeline/syntax/#post :
The post section defines one or more additional steps that are run upon the completion of a Pipeline’s or stage’s run (depending on the location of the post section within the Pipeline)
But in my case, after the first error no more stage-level post-success actions are executed.
My workaround so far is to modify catchError, add a post-failure action that sets another variable, and evaluate that variable in the very last stage to set the build result:
stage('stage1') {
    steps {
        catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
            build job: 'job1'
        }
    }
    post {
        success { .. }
        failure {
            script {
                env.testFail = "true"
                echo "testFail: ${testFail} !"
            }
        }
    }
}
..
stage('set buildResult') {
    when {
        environment name: 'testFail', value: 'true'
    }
    steps {
        script {
            bat '''
            echo "testFail variable found. Changing buildresult to FAILURE."
            exit 1
            '''
        }
    }
}
Is there anything I missed in the docs, or can anybody confirm this behavior? I would like to remove my workaround and let catchError do all the work for me.
I have already searched through the official docs and the mailing list, including Jenkins issues, but have not found information about this anywhere.
This is my first time contributing to Stack Overflow, so please forgive me if I made mistakes or did not express myself clearly (leaving reader-only mode).
Jenkins - v2.204.2
Pipeline - v2.6
Pipeline Declarative - v1.8.4

How to run Jenkins scripted pipeline job on free slave?

I want to run a Jenkins scripted pipeline job on a specified slave at a moment when no other job is running on it.
After my job has started, no other jobs should run on this slave; they should have to wait until my job finishes.
All the tutorials I found let me run the job on the node when it is free, but did not protect me from other jobs being launched on that node afterwards.
Could you tell me how I can do this?
Because Pipeline has two syntaxes, there are two ways to achieve this. For scripted pipeline, see the second one.
Declarative
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'slave-node' }
            steps {
                echo 'Building..'
                sh '''
                '''
            }
        }
    }
    post {
        success {
            echo 'This will run only if successful'
        }
    }
}
Scripted
node('your-node') {
    try {
        stage('Build') {
            node('build-run-on-this-node') {
                sh ""
            }
        }
    } catch (Exception e) {
        throw e
    }
}
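Note that neither snippet by itself stops Jenkins from scheduling other jobs on the node while yours runs. One common approach, assuming the Lockable Resources plugin is installed and a resource has been configured (the node and resource names below are illustrative), is to wrap the work in a lock step:

```groovy
// Scripted sketch: serialize access to the node via a lockable resource.
// 'my-node' and 'my-node-exclusive' are hypothetical names.
node('my-node') {
    // blocks until the resource is free; other jobs taking the same
    // lock will queue behind this one
    lock(resource: 'my-node-exclusive') {
        stage('Build') {
            sh 'make build'   // illustrative command
        }
    }
}
```

The lock only serializes jobs that also acquire it; the hard guarantee is to configure the agent with a single executor so only one job can ever run on it at a time.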

Jenkins pipeline - try catch for particular stage and subsequent conditional step

I'm trying to replicate the equivalent of a conditional stage in a Jenkins pipeline by using a try/catch around a preceding stage, which sets a success variable that then triggers the conditional stage.
A try/catch block appears to be the way to go: it sets a success variable to SUCCESS or FAILED, which is used later in a when statement (as part of the conditional stage).
The code I am using is as follows:
pipeline {
    agent any
    stages {
        try {
            stage("Run unit tests") {
                steps {
                    sh '''
                    # Run unit tests without capturing stdout or logs, generates cobertura reports
                    cd ./python
                    nosetests3 --with-xcoverage --nocapture --with-xunit --nologcapture --cover-package=application
                    cd ..
                    '''
                    currentBuild.result = 'SUCCESS'
                }
            }
        } catch (Exception e) {
            // Do something with the exception
            currentBuild.result = 'SUCCESS'
        }
        stage ('Speak') {
            when {
                expression { currentBuild.result == 'SUCCESS' }
            }
            steps {
                echo "Hello, CONDITIONAL"
            }
        }
    }
}
The latest syntax error I am receiving is as follows:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup
failed:
WorkflowScript: 4: Expected a stage # line 4, column 9.
try{
I've also tried lots of variations.
Am I taking the wrong approach here? This seems like a fairly common requirement.
Thanks.
This might solve your problem, depending on what you are going for. Stages only run when the preceding stages succeed, so if you actually have two stages as in your example and you want the second to run only when the first succeeds, you should ensure that the first stage fails appropriately when the tests fail. Catching the exception would prevent that (desirable) failure; a finally block preserves the failure and can still be used to grab your test results.
So here, the second stage will only run when the tests pass, and the test results will be recorded regardless:
pipeline {
    agent any
    stages {
        stage("Run unit tests") {
            steps {
                script {
                    try {
                        sh '''
                        # Run unit tests without capturing stdout or logs, generates cobertura reports
                        cd ./python
                        nosetests3 --with-xcoverage --nocapture --with-xunit --nologcapture --cover-package=application
                        cd ..
                        '''
                    } finally {
                        junit 'nosetests.xml'
                    }
                }
            }
        }
        stage('Speak') {
            steps {
                echo "Hello, CONDITIONAL"
            }
        }
    }
}
Note that I'm actually using try in a declarative pipeline, but like StephenKing says, you can't just use try directly (you have to wrap arbitrary groovy code in the script step).
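If you do want the asker's original shape, a later stage explicitly gated on the earlier stage's outcome, one way to sketch it in declarative syntax is to record the result in an environment variable inside a script block and test it with when (the variable name TESTS_PASSED and the shell command are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Run unit tests') {
            steps {
                script {
                    try {
                        sh 'nosetests3 --with-xunit --cover-package=application'
                        env.TESTS_PASSED = 'true'
                    } catch (Exception e) {
                        // swallow the failure so the pipeline continues
                        env.TESTS_PASSED = 'false'
                    }
                }
            }
        }
        stage('Speak') {
            // only runs when the first stage recorded success
            when {
                expression { env.TESTS_PASSED == 'true' }
            }
            steps {
                echo 'Hello, CONDITIONAL'
            }
        }
    }
}
```

Because the exception is swallowed, the build itself stays green; combine this with catchError if the first stage should still show as failed.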

Scripted Jenkins pipeline: continue on fail

Is there a way to continue execution of the scripted pipeline even if the previous stage failed? I need to run specific commands (cleanup) when the build fails before the whole job fails.
The accepted answer wouldn't fail the stage or even mark it as unstable. It is now possible to fail a stage, continue the execution of the pipeline and choose the result of the build:
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                sh 'exit 0'
            }
        }
        stage('2') {
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh "exit 1"
                }
            }
        }
        stage('3') {
            steps {
                sh 'exit 0'
            }
        }
    }
}
In the example above, all stages will execute, the pipeline will be successful, but stage 2 will show as failed.
As you might have guessed, you can freely choose the buildResult and stageResult, in case you want it to be unstable or anything else. You can even fail the build and continue the execution of the pipeline.
Just make sure your Jenkins is up to date, since this is a fairly new feature.
The usual approach is to wrap your steps within a try block.
try {
    sh "..."
} catch (err) {
    echo "something failed"
}
// cleanup
sh "rm -rf *"
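If the cleanup must run whether or not the build fails, and the failure should still be reported, a finally block is the more direct fit. A minimal scripted sketch (the shell commands are illustrative):

```groovy
node {
    try {
        stage('Build') {
            sh 'make build'
        }
    } catch (err) {
        currentBuild.result = 'FAILURE'
        throw err   // re-throw so the build is still marked as failed
    } finally {
        // cleanup always runs, even after a failure or abort
        sh 'rm -rf build-output'
    }
}
```

This avoids the pitfall of the catch-only version, where a swallowed exception leaves the build green even though it failed.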
To ease the pain and make the pipeline code more readable, I've encapsulated this in another method here in my global library code.
Another approach, esp. created because of this very issue, are the declarative pipelines (blog, presentation).
post {
    always {
        cleanWs()
    }
}
This will always clean up the workspace, even if the rest of the pipeline fails.
