I am having a problem with my pipeline in Jenkins.
I perform a path search for files with a specified extension and then run php -l on the files that were found.
Everything works fine, but when php -l finds an error I would like the build and the stage to go to the UNSTABLE state and further execution of the pipeline to stop.
I managed to do it this way, but then the build and the stage end up in the FAILED state:
} catch (Exception e) {
    error("${e}")
Here is part of my pipeline code:
def check(){
    stage('Validate files') {
        try {
            sh "find . -type f -iregex '.*\\.\\(php\\)' | xargs -I % sh -c 'php -l \'%\''"
        } catch (Exception e) {
            error("${e}")
        }
    }
}
I hope someone smarter can direct me to a solution :)
Got an example to work, but maybe not exactly what you wanted. I used unstable() to mark the stage/build and then checked the exit code of the sh step to decide whether to return or continue the pipeline.
There are two ifs because you need to return outside of the stage; otherwise you only return from the stage closure.
#!/usr/bin/env groovy
try {
    node {
        def exitCode = 0
        exitCode = check()
        if (exitCode != 0) {
            return
        }
        somethingelse()
    }
} catch (Throwable err) { // catch all exceptions
    throw err
} finally {}

def check(){
    stage('Validate files') {
        exitCode = sh script: "exit 1", returnStatus: true
        if (exitCode != 0) {
            unstable('message')
        }
    }
    return exitCode
}

def somethingelse(){
    stage('Something'){
        echo "somethingelse"
    }
}
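Applied to the original php -l check, the sh line inside check() would look roughly like this (just a sketch; it relies on xargs exiting non-zero when any php -l invocation fails, and the unstable() message is only illustrative):

def check() {
    def exitCode = 0
    stage('Validate files') {
        // returnStatus keeps the sh step from throwing on failure;
        // xargs exits non-zero when any php -l call reports a syntax error
        exitCode = sh script: "find . -type f -iregex '.*\\.\\(php\\)' | xargs -I % sh -c 'php -l \"%\"'", returnStatus: true
        if (exitCode != 0) {
            unstable('php -l reported syntax errors')
        }
    }
    return exitCode
}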
I am trying to populate an environment variable with the exit code of a command and use it in the catch block to avoid certain exceptions.
I am not able to store the exit code value in the env variable in the Groovy pipeline. Should I take a different approach here?
environment {
    cmdStatus = 0
}
try {
    image.inside("--env-file ${PWD}/creds.env -v /config/.ssh:/config/.ssh") {
        sh """
            python -u /config/env-python/abc.py -u ${update}
            cmdStatus=\$?   # <-- the exit code value is not getting stored in the env variable
        """
    }
}
catch(Exception e) {
    if (env.cmdStatus == 0) {
        echo 'Inside Success'
    } else {
        echo 'Inside Failure'
    }
}
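For what it's worth, a common alternative is to skip the environment variable entirely and capture the exit code on the Groovy side with returnStatus: true, so the sh step returns the code instead of throwing. A minimal sketch based on the snippet above:

def cmdStatus = 0
image.inside("--env-file ${PWD}/creds.env -v /config/.ssh:/config/.ssh") {
    // returnStatus makes sh return the exit code instead of throwing on failure
    cmdStatus = sh(script: "python -u /config/env-python/abc.py -u ${update}", returnStatus: true)
}
if (cmdStatus == 0) {
    echo 'Inside Success'
} else {
    echo 'Inside Failure'
}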
My pipeline compiles on a Windows and a Linux machine in parallel.
Since I started using the parallel directive, the logRotator settings no longer work and I cannot find what is wrong.
All artifacts are being kept.
Here is a sample of my Jenkinsfile:
properties([
    gitLabConnection('numagit'),
    buildDiscarder(
        logRotator(
            numToKeepStr: '1', artifactNumToKeepStr: '1'
        )
    )
])
parallel(
    'linux': {
        node('linux64') {
            stage('Checkout sources') {
                echo 'Checkout..'
                checkout(scm)
            }
            gitlabBuilds(builds: [
                "Compiling linux64"
            ]) {
                try {
                    stage('Compiling linux64') {
                        gitlabCommitStatus("Compiling linux64") {
                            sh('rm -rf build64')
                            sh('mkdir -p build64')
                            dir('build64') {
                                ......
                            }
                            archiveArtifacts(artifacts: 'build64/TARGET/numalliance/MAJ/data.tgz', fingerprint: true)
                            archiveArtifacts(artifacts: 'build64/TARGET/numalliance/MAJ/machine.sh', fingerprint: true)
                        }
                    }
                } catch (e) {
                    currentBuild.result = "FAILURE" // make sure other exceptions are recorded as failure too
                    // in case of error, archive the CMake output
                    archiveArtifacts("build64/CMakeFiles/CMakeOutput.log")
                }
            }
            cleanWs()
        }
    },
    'windows': {
        // Windows compilation node
        node('win32') {
            def revision = ""
            stage('Checkout sources') {
                echo 'Checkout..'
                checkout(scm)
            }
            try {
                stage('Compilation windows') {
                    gitlabCommitStatus("Compilation windows") {
                        echo 'Building win32 version'
                        ....
                    }
                }
                stage('Packaging for win32') {
                    gitlabCommitStatus('Packaging for win32') {
                        ....
                        dir('win32/TARGET/numalliance/MACHINE') {
                            ...
                            archiveArtifacts(artifacts: '*.exe', fingerprint: true)
                        }
                    }
                }
            } catch (e) {
                currentBuild.result = "FAILURE" // make sure other exceptions are recorded as failure too
            }
            cleanWs()
        }
    }
)
The properties function changes the configuration of the job. It is the same as if you went to the job's configuration page and changed the log rotator settings manually. It seems you want to keep 5 development builds and one of everything else, but what actually happens is that the pipeline keeps 5 of anything when a develop job ran last, and keeps 1 when something else ran last.
The easiest fix would be to use separate jobs (with the same pipeline code). In that case you would have one job that only keeps the last build and another (the develop job) which keeps the last 5.
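For illustration only, the pattern described above presumably looks something like this in the pipeline (a hypothetical sketch, assuming a multibranch setup; whichever branch runs last rewrites the job-wide setting):

// hypothetical: per-branch retention applied via properties();
// the last run to execute this wins for the whole job
if (env.BRANCH_NAME == 'develop') {
    properties([buildDiscarder(logRotator(numToKeepStr: '5', artifactNumToKeepStr: '5'))])
} else {
    properties([buildDiscarder(logRotator(numToKeepStr: '1', artifactNumToKeepStr: '1'))])
}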
I am sure I am not the only one who is interested in how to handle something like this: a docker build stage in a Jenkins pipeline fails with "Unexpected EOF" (there can be many reasons; in my case the docker daemon was restarted on the slave):
appImage = docker.build ("${projectName}:${env.BRANCH_NAME}-${gitCommit}", "--build-arg APP_ENV=${appEnv} --build-arg SKIP_LINT=true .")
The deploy phase still kicks in, because the Unexpected EOF does not actually throw anything: there is no exception to catch, so the build status stays null.
I know it's not a regular situation, but how can we handle something like this so that the following stages do not run when the build is interrupted?
Additional details:
@JRichardsz, thanks for the answer! Usually currentBuild.result defaults to null (see https://issues.jenkins-ci.org/browse/JENKINS-46325), so unless you set it to SUCCESS explicitly after a successful stage execution, it will stay null. But all in all the same can be achieved with try/catch like this:
if (deployableBranches.contains(env.BRANCH_NAME)) {
    try {
        stage('Build image') {
            ansiColor('xterm') {
                appImage = docker.build("${projectName}:${env.BRANCH_NAME}-${gitCommit}",
                    "--build-arg SKIP_LINT=true .")
            }
        }
        stage('Push image') {
            docker.withRegistry("${registryUrl}", "${dockerCredsId}") {
                appImage.push()
                appImage.push "${env.BRANCH_NAME}-latest"
            }
        }
        stage('Deploy') {
            build job: 'kubernetes-deploy', parameters: [
                /////
            ]
        }
    } catch (e) {
        // A shell step returns with a nonzero exit code
        // when the pipeline is in a shell step and a user presses abort
        if (e.getMessage().contains('script returned exit code 143')) {
            currentBuild.result = "ABORTED"
        } else {
            currentBuild.result = "FAILURE"
        }
        throw e
    } finally {
        // Success, failure or abort: always send notifications
        stage('Send deployment status') {
            helpers.sendDeploymentStatus(projectName, currentBuild.result,
                helpers.getCommitHashShort())
        }
    }
}
But the issue is that stage('Build image') may exit without any error code at all, as it did in my case.
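One defensive option (a sketch, not verified against this exact failure mode) is to run the docker CLI through a plain sh step with returnStatus so the exit code is always checked explicitly, and fail the stage yourself:

stage('Build image') {
    // run the docker CLI directly and check its exit code instead of
    // relying on docker.build to throw
    def buildStatus = sh(
        script: "docker build --build-arg SKIP_LINT=true -t ${projectName}:${env.BRANCH_NAME}-${gitCommit} .",
        returnStatus: true
    )
    if (buildStatus != 0) {
        error "docker build exited with status ${buildStatus}"
    }
    appImage = docker.image("${projectName}:${env.BRANCH_NAME}-${gitCommit}")
}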
I had a similar requirement: "if some rule is executed in stage A, the following stages must not run".
This worked for me:
def flag;

node {
    stage('A') {
        flag = 1;
    }
    stage('B') {
        // exit this stage if flag == 1
        if (flag == 1) {
            return;
        }
        // start of stage B tasks
        ...
    }
}
You could also use a Jenkins variable like currentBuild.result instead of flag, like this:
node {
    stage('A') {
        // stage A tasks
        // this stage could modify the currentBuild.result variable
    }
    stage('B') {
        // exit this stage if currentBuild.result is null, empty, "FAILURE", etc.
        if (currentBuild.result == null ||
            currentBuild.result == "" ||
            currentBuild.result == "FAILURE") {
            return;
        }
        // stage B tasks
    }
}
Without Jenkins Pipeline, the Naginator plugin allows restarting a specific build on failure using regular expressions.
I like the retry option in Jenkins Pipeline, but I am not sure whether I can catch an error from the build in the catch block and do a retry.
Is there a way to do so?
E.g. I have a Jenkins build which runs make, and make fails with the error "pg_config.h missing". I want to catch this error and retry the build a couple of times.
How can I do that? Also, is it possible to catch multiple errors, similar to the regular expressions in Naginator, using pipelines?
retry("3"){
try {
sh "${cmd} 2>&1 > cmdOutput.txt"
sh "cat cmdOutput.txt"
} catch(FlowInterruptedException interruptEx) {
throw interruptEx
} catch(err) {
def cmdOutput = readFile('cmdOutput.txt').trim()
if (cmdOutput.contains("pg_config.h missing")) {
error "Command failed with error : ${err}. Retrying ...."
} else {
echo "Command failed with error other than `pg_config.h missing`"
}
}
}
I use the 'waitUntil' step and a counter to retry a shell command. I capture the output of the shell command so that I can run regex checks against the output and then continue or exit the loop.
// example pipeline
pipeline {
    agent {
        label ""
    }
    stages {
        // stage('clone repo') {
        //     steps {
        //         git url: 'https://github.com/your-account/project.git'
        //     }
        // }
        // stage('install') {
        //     steps {
        //         sh 'npm install'
        //     }
        // }
        stage('build') {
            steps {
                script {
                    // wrap with timeout so the job aborts if no activity
                    timeout(activity: true, time: 5, unit: 'MINUTES') {
                        // loop until the inner function returns true
                        waitUntil {
                            // set up or increment the "count" counter and max value
                            count = (binding.hasVariable('count')) ? count + 1 : 1
                            countMax = 3
                            println "try: $count"
                            // Note: you must include the "|| true" after your command,
                            // so that the exit code always returns as 0. The "sh" command is
                            // actually running '/bin/sh -xe'. The '-e' option forces the script
                            // to exit on a non-zero exit code. Prevent this by forcing a 0 exit
                            // code by adding "|| true".
                            // execute command and capture stdout
                            // Uncomment one of these 3 lines to test different conditions.
                            output = sh returnStdout: true, script: 'echo "Finished: SUCCESS" || true'
                            // output = sh returnStdout: true, script: 'echo "BUILD FAILED" || true'
                            // output = sh returnStdout: true, script: 'echo "something else happened" || true'
                            // show the output in the log
                            println output
                            // run different regex tests against the output to check the state of your build
                            buildOK = output ==~ /(?s).*Finished: SUCCESS.*/
                            buildERR = output ==~ /(?s).*BUILD FAILED.*/
                            // then check your conditions
                            if (buildOK) {
                                return true // success, so exit loop
                            } else if (buildERR) {
                                if (count >= countMax) {
                                    // count exceeds threshold, so throw an error (exits the pipeline)
                                    error "Retried $count times. Giving up..."
                                }
                                // wait a bit before retrying
                                sleep time: 5, unit: 'SECONDS'
                                return false // repeat loop
                            } else {
                                // throw an error (exits the pipeline)
                                error 'Unknown error - aborting build'
                            }
                        }
                    }
                }
            }
        }
    }
    // post {
    //     always {
    //         cleanWs notFailBuild: true
    //     }
    // }
}
Within a Jenkinsfile pipeline script, how do you query the running job's state to tell whether it has been aborted?
Normally a FlowInterruptedException or AbortException (if a script was running) will be raised, but these can be caught and ignored. Also, a script will not exit immediately if it has multiple statements.
I tried looking at currentBuild.result but it doesn't seem to be set until the build has completed. Something in currentBuild.rawBuild perhaps?
There is nothing that would automatically set the build status if the exception has been caught. If you want such exceptions to set a build status but let the script continue, you can write, for example:
try {
    somethingSlow()
} catch (InterruptedException x) {
    currentBuild.result = 'ABORTED'
    echo 'Ignoring abort attempt at this spot'
}
// proceed
You could implement a watchdog branch in a parallel step. It uses a shared state map to keep track of the watchdog state, which could be dangerous; I don't know whether accessing shared variables from parallel branches is thread-safe. It even works if 'bat' ignores the termination and doesn't raise an exception at all.
Code:
runWithAbortCheck { abortState ->
    // run all tests, print which failed
    node('windows') {
        for (int i = 0; i < 5; i++) {
            try {
                bat "ping 127.0.0.1 -n ${10-i}"
            } catch (e) {
                echo "${i} FAIL"
                currentBuild.result = "UNSTABLE"
                // continue with remaining tests
            }
            abortCheck(abortState) // sometimes bat doesn't even raise an exception, so check here
        }
    }
}
def runWithAbortCheck(closure) {
    def abortState = [complete: false, aborted: false]
    parallel(
        "_watchdog": {
            try {
                waitUntil { abortState.complete || abortState.aborted }
            } catch (e) {
                abortState.aborted = true
                echo "caught: ${e}"
                throw e
            } finally {
                abortState.complete = true
            }
        },
        "work": {
            try {
                closure.call(abortState)
            } finally {
                abortState.complete = true
            }
        },
        "failFast": true
    )
}
def _abortCheckInstant(abortState) {
    if (abortState.aborted) {
        echo "Job Aborted Detected"
        throw new org.jenkinsci.plugins.workflow.steps.FlowInterruptedException(Result.ABORTED)
    }
}

def abortCheck(abortState) {
    _abortCheckInstant(abortState)
    sleep time: 500, unit: "MILLISECONDS"
    _abortCheckInstant(abortState)
}