I am running some tests in my pipeline. My aim is that if an error file exists, the build should fail. But if for some reason the tests hit an exception and wrote neither an error file nor a success file, the pipeline should also fail. If neither failure condition is met, I would like an upstream job to execute.
I wrote it in a stage, and initially it looked like this:
stage('system tests') {
steps {
dir(project_root) {
def error_exists = sh(
script: 'ls error.txt', returnStatus: true
)
if (error_exists == 0) {
currentBuild.result = 'FAILED'
return
}
build job: 'my-job'
}
}
}
The above code works: when the tests wrote an error file, the pipeline failed. I then tried to modify the code to cater for the case where neither an error file nor a success file is written.
stage('system tests') {
steps {
dir(project_root) {
def error_exists = sh(
script: 'ls error.txt', returnStatus: true
)
def success_exists = sh(
script: 'ls success.txt', returnStatus: true
)
if (error_exists == 0) {
currentBuild.result = 'FAILED'
return
} else if (success_exists == 1 && error_exists == 1) {
currentBuild.result = 'FAILED'
return
}
build job: 'my-job'
}
}
}
I simulated a situation where neither file was written, and the pipeline didn't fail; instead it triggered the upstream build. Why am I not entering the else if clause when both shell scripts fail? I took the logical operators from here and I think the condition should be met (the output below is from the shell scripts in the new-job pipeline):
[new-job] Running shell script
+ ls error.txt
ls: cannot access error.txt: no such file or directory
[new-job] Running shell script
+ ls success.txt
ls: cannot access success.txt: no such file or directory
If these files do not exist, the sh step returns the exit code of ls, which is 2 here (not 1). You could rewrite your if condition like this:
success_exists == 2 && error_exists == 2
But I think that in your case this code is more suitable:
stage('system tests') {
steps {
dir(project_root) {
def error_exists = sh(
script: 'ls error.txt', returnStatus: true
)
def success_exists = sh(
script: 'ls success.txt', returnStatus: true
)
if (error_exists == 0) {
currentBuild.result = 'FAILED'
return
} else if (success_exists != 0 && error_exists != 0) {
currentBuild.result = 'FAILED'
return
}
build job: 'my-job'
}
}
}
Because there may be other reasons for not being able to find the file (lack of access, etc).
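As a side note, if you only care about whether the files are present, the same logic can be written with the built-in fileExists step, which returns a boolean and avoids inspecting ls exit codes. A minimal sketch, keeping the job and file names from the question and using a script block for the imperative part:
stage('system tests') {
    steps {
        dir(project_root) {
            script {
                def error_exists = fileExists 'error.txt'
                def success_exists = fileExists 'success.txt'
                // Fail when the tests reported an error, or when they wrote no result file at all
                if (error_exists || !success_exists) {
                    error('System tests wrote an error file or no result file')
                } else {
                    build job: 'my-job'
                }
            }
        }
    }
}
Here error marks the build as failed and stops the stage, which replaces the currentBuild.result = 'FAILED' / return pattern.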
I'm defining a Jenkins declarative pipeline and having a hard time configuring a stage to not execute if two strings are equal.
I've tried several things, but the string comparison doesn't work.
Here's my current state:
stages {
stage('Check if image has changed') {
steps {
script {
OLD_DIGEST = sh(returnStdout: true, script: "podman manifest inspect registry/myimage:11 2>/dev/null | jq .config.digest").trim()
NEW_DIGEST = sh(returnStdout: true, script: "podman inspect --format='sha256:{{.Id}}' myimage:11-tmp").trim()
}
sh "echo previous digest:${OLD_DIGEST}, new digest:${NEW_DIGEST}"
}
}
stage('Release') {
when {
allOf {
expression { env.RELEASE != null && env.RELEASE == "true" }
expression { env.OLD_DIGEST != env.NEW_DIGEST }
}
}
steps {
sh "echo Releasing image..."
sh "podman image push myimage:11-tmp registry/myimage:11.${DATE_TIME}"
sh "podman image push myimage:11-tmp registry/myimage:11"
}
}
}
More specifically, the issue lies in the when block:
allOf {
expression { env.RELEASE != null && env.RELEASE == "true" }
expression { env.OLD_DIGEST != env.NEW_DIGEST }
}
The first expression works fine, but I can't make the second one work: even if OLD_DIGEST and NEW_DIGEST are different, the stage is skipped.
Example output:
previous digest:sha256:736fd651afdffad2ee48a55a3fbab8de85552f183602d5bfedf0e74f90690e32, new digest:sha256:9003077f080f905d9b1a960b7cf933f04756df9560663196b65425beaf21203d
...
Stage "Release" skipped due to when conditional
I've also tried expression { OLD_DIGEST != NEW_DIGEST } (removing the env.), but now the result is the opposite: even when both strings are equal, the stage is NOT skipped.
Output in this case:
previous digest:sha256:8d966d43262b818073ea23127dedb61a43963a7fafc5cffdca85141bb4aada57, new digest:sha256:8d966d43262b818073ea23127dedb61a43963a7fafc5cffdca85141bb4aada57
...
Releasing image...
I'm wondering whether the issue lies in the expression or in the allOf.
According to my tests on the latest 2023 version, env variables are re-initialized for each stage, so the values set in a previous stage are overridden.
Note: inside when, the env vars still have their default values, ignoring the values set in the previous stage. Only afterwards, in the steps, do they have the expected (updated) values.
If you use global variables instead of env variables, it works. I simulated your podman output with echo.
def OLD_DIGEST
def NEW_DIGEST
pipeline {
agent any
environment {
RELEASE = "true"
}
stages {
stage('Check if image has changed') {
steps {
script {
OLD_DIGEST = sh(returnStdout: true, script: "echo '1'").trim()
NEW_DIGEST = sh(returnStdout: true, script: "echo '1'").trim()
}
sh "echo previous digest:${OLD_DIGEST}, new digest:${NEW_DIGEST}"
}
}
stage('Release') {
when {
allOf {
expression { env.RELEASE != null && env.RELEASE == "true" }
expression { OLD_DIGEST != NEW_DIGEST }
}
}
steps {
sh "echo Releasing image..."
}
}
}
}
When OLD_DIGEST and NEW_DIGEST are both 1, the stage is skipped; if they are different, the stage is executed.
The root cause of my issue was that the two strings being compared were indeed different: one was "xxx" while the other was xxx, but the Jenkins output doesn't show the double quotes.
The correct Jenkins comparison, as stated in the comments, is expression { OLD_DIGEST != NEW_DIGEST } (without env.).
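A small follow-up tweak that avoids the mismatch at the source: ask jq for raw output with -r, so the digest is printed without the surrounding double quotes and both values compare as plain strings. A sketch of the adjusted script block, reusing the commands from the question:
script {
    // jq -r prints the raw string, i.e. without the JSON double quotes
    OLD_DIGEST = sh(returnStdout: true, script: "podman manifest inspect registry/myimage:11 2>/dev/null | jq -r .config.digest").trim()
    NEW_DIGEST = sh(returnStdout: true, script: "podman inspect --format='sha256:{{.Id}}' myimage:11-tmp").trim()
}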
I'm attempting to validate that all Jenkins pipelines, at least within a single group/organization, have published their JUnit tests. Is there a way to do this programmatically? Also, would it be limited to Jenkinsfiles, or would it work on all pipelines? Thanks!
I could check this manually by looking for "Test Results" on a job's build page, which indicates that the job has published test results to the JUnit plugin.
If I were to write a Jenkinsfile, it might look something like this (though it is also possible to attach results to the JUnit plugin through manually configured jobs):
pipeline {
agent any
stages {
stage('Compile') {
steps {
// Login to Repository
configFileProvider([configFile(fileId: 'nexus_maven_configuration', variable: 'MAVEN_SETTINGS')]) {
sh 'mvn -s $MAVEN_SETTINGS compile'
}
}
}
stage('Test') {
steps {
configFileProvider([configFile(fileId: 'nexus_maven_configuration', variable: 'MAVEN_SETTINGS')]) {
sh 'mvn -s $MAVEN_SETTINGS test'
}
}
}
}
post {
always {
junit '**/target/surefire-reports/*.xml'
archive 'target/*.jar'
}
}
}
Here is a script you can use to check whether jobs in a specific subfolder have test results attached. You can run it either from a pipeline or in the Jenkins Script Console.
def subFolderToCheck = "folder1" // We will only check Jobs in a specific sub directory
Jenkins.instance.getAllItems(Job.class).each { jobitem ->
def jobName = jobitem.getFullName()
def jobInfo = Jenkins.instance.getItemByFullName(jobName)
// We will check if the last successful build has any tests attached.
if(jobName.contains(subFolderToCheck) && jobInfo.getLastSuccessfulBuild() != null) {
def results = jobInfo.getLastSuccessfulBuild().getActions(hudson.tasks.junit.TestResultAction.class).result
println("Job : " + jobName + " Tests " + results.size())
if(results == null || results.size() <= 0) {
print("Job " + jobName + " Does not have any tests!!!!!")
}
}
}
I have a series of steps in a stage that I want to run even if the first one fails. I want the stage result to fail and the build to get aborted, but only after all steps have run. For example,
pipeline {
agent any
stages {
stage('Run Test') {
steps {
sh "echo running unit-tests"
sh "echo running linting && false" // failure
sh "echo generating report" // This should still run (It currently doesn't)
publishCoverage adapters: [coberturaAdapter("coverage.xml")] // This should still run (It currently doesn't)
junit 'unit-test.xml' // This should still run (It currently doesn't)
}
}
stage('Deploy') {
steps {
echo "deploying" // This should NOT run
}
}
}
}
The result should be a failed build where the "Run Test" stage failed and the "Deploy" stage did not run. Is this possible?
P.S.
I am NOT asking for the same behavior as in Continue Jenkins pipeline past failed stage. I want to run the steps following the failure, but not any of the stages afterwards. I tried to enclose each of the test steps with catchError (buildResult: 'FAILURE', stageResult: 'FAILURE'), but the "Deploy" stage still runs.
EDIT:
I cannot combine all the steps into one big sh step and capture its return code because some of the steps are not shell commands, but instead jenkins steps like junit and publishCoverage.
A script with a non-zero exit code will normally cause the sh step, and hence the build, to fail. You can pass returnStatus: true so that Jenkins does not fail the step.
Additionally, considering your use case, you could use a post { always { ... } } block so that the reporting steps are always carried out.
Please see the reference example below:
stage('Run Test') {
steps {
script {
def unit_test_result = sh returnStatus: true, script: 'echo "running unit-tests"'
def lint_result = sh returnStatus: true, script: 'echo "running linting"'
if (unit_test_result != 0 || lint_result != 0) {
// If either status is non-zero, mark the build as unstable and continue,
// so that all later stages are still executed
unstable('Testing failed')
// You can instead mark the build as failed as below; then the later stages will not run:
// error('Testing failed')
}
}
}
post {
always {
// This block is always executed, in spite of any failure above
sh "echo generating report"
publishCoverage adapters: [coberturaAdapter("coverage.xml")]
junit 'unit-test.xml'
}
}
}
I found a slightly hacky way to get the behavior I want. The other answers didn't work for me, either because they require all the steps to be sh steps, or because they don't stop the Deploy stage from running. I used catchError to set the build and stage result, but to prevent the next stage from running I needed an explicit call to error if the stage had failed.
pipeline {
agent any
stages {
stage('Run Test') {
steps {
script {
// catchError sets the stageResult to FAILED, but does not stop next stages from running
catchError (buildResult: 'FAILURE', stageResult: 'FAILURE') {
sh "echo running unit-tests"
}
catchError (buildResult: 'FAILURE', stageResult: 'FAILURE') {
sh "echo running linting && false" // failure
}
catchError (buildResult: 'FAILURE', stageResult: 'FAILURE') {
sh "echo generating report" // This still runs
}
publishCoverage adapters: [coberturaAdapter("coverage.xml")] // This still runs
junit 'unit-test.xml' // This still runs
if (currentBuild.result == "FAILURE") { // This is needed to stop the next stage from running
error("Stage Failed")
}
}
}
}
stage('Deploy') {
steps {
echo "deploying" // This should NOT run
}
}
}
}
Theoretically you should be able to use sh "<command> || true". It ignores the error from the command and continues. However, Jenkins will then never fail, because the error is swallowed.
If you don't want Jenkins to ignore the error but want it to stop only at the end of the stage, you can do something like sh "<command> || error=true" and then fail the build based on that error flag; it is only set to true if the command fails, and you will likely need an if statement at the end of the stage. Keep in mind that a shell variable set in one sh step is not visible in a later sh step, so in practice the flag has to be tracked in the pipeline itself (or written to a file).
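A rough sketch of that flag idea, using a Groovy variable rather than the shell error variable described above (the step and file names are taken from the question; treat this as an illustration, not a drop-in solution):
script {
    def failed = false
    // Record the failure instead of aborting immediately
    if (sh(script: 'echo running linting && false', returnStatus: true) != 0) {
        failed = true
    }
    sh 'echo generating report'                                   // still runs
    publishCoverage adapters: [coberturaAdapter("coverage.xml")]  // still runs
    junit 'unit-test.xml'                                         // still runs
    if (failed) {
        error('Tests failed')  // fail the stage only after everything above has run
    }
}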
Another option is to wrap your build steps in a try/catch block. If there's an exception, i.e. the return code of the build is not 0, you can catch it, mark the build as unstable, and the rest of the pipeline continues on.
Here's an example:
pipeline {
agent {
node {
label 'linux'
}
}
options {
timestamps()
disableConcurrentBuilds()
buildDiscarder(logRotator(numToKeepStr: '3'))
}
tools {
maven 'Maven 3.6.3'
jdk 'jdk11'
}
stages {
stage('CleanWS') {
steps {
cleanWs()
}
}
stage('Build') {
steps {
withMaven(options: [artifactsPublisher(disabled: true)]) {
sh "export NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1 && mvn -f pom.xml clean install -DskipTests -Pregression-test -Dmaven.javadoc.skip=true"
}
}
}
stage('Test') {
steps {
script {
try {
withMaven(options: [artifactsPublisher(disabled: true)]) {
sh "export MAVEN_OPTS=\"-Xmx2048m\" && export NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1 && mvn -B verify -Dmaven.source.skip=true -Dmaven.javadoc.skip=true"
}
} catch (exc) {
currentBuild.result = 'UNSTABLE'
}
}
}
post {
always {
script {
junit "**/surefire-reports/*.xml"
}
}
}
}
stage('Sonar Analyse') {
steps {
script {
withMaven(options: [artifactsPublisher(disabled: true)]) {
withSonarQubeEnv("SonarQube") {
sh "export MAVEN_OPTS=\"-Xmx2048m\" && export NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1 && mvn sonar:sonar"
}
}
}
}
}
stage('Deploy to Nexus') {
steps {
sh "export NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1 && mvn -f pom.xml -B clean deploy -DdeployAtEnd=true -DskipTests"
}
}
}
post {
failure {
script {
emailext(
body: "Please go to ${env.BUILD_URL}/console for more details.",
to: emailextrecipients([developers(), requestor()]),
subject: "Nightly-Build-Pipeline Status is ${currentBuild.result}. ${env.BUILD_URL}"
)
}
}
unstable {
script {
emailext(
body: "Please go to ${env.BUILD_URL}/console for more details.",
to: emailextrecipients([developers(), requestor()]),
subject: "Nightly-Build-Pipeline Build Status is ${currentBuild.result}. ${env.BUILD_URL}"
)
}
}
}
}
I have a pipeline roughly like the one below (largely borrowed from this), and I need it to stop and be removed from the history if it aborts. I'm trying to avoid a plugin. Is there an easy way to delete it from the history?
node {
checkout scm
result = sh (script: "git log -1 | grep '\\[release\\]'", returnStatus: true)
if (result == 0) {
currentBuild.result = 'ABORTED'
}
}
You can add the Build History Manager plugin (https://plugins.jenkins.io/build-history-manager/) and do it with that.
Then add the code below to your pipeline. It deletes aborted builds numbered from 1 up to the current build number minus countBuildRemain, so the most recent countBuildRemain builds are kept.
def buildNum = BUILD_ID as Integer
def num = countBuildRemain as Integer
def result = (buildNum) - (num)
options {
buildDiscarder BuildHistoryManager([[actions: [DeleteBuild()],
conditions: [BuildResult(matchAborted: true),
BuildNumberRange(maxBuildNumber: "${result}", minBuildNumber: 1) ]]])
}
Use this script and the current build will be deleted.
(use with caution, not a best practice)
node {
checkout scm
result = sh (script: "git log -1 | grep '\\[release\\]'", returnStatus: true)
if (result == 0) {
currentBuild.result = 'ABORTED'
Run.fromExternalizableId(currentBuild.externalizableId).delete()
}
}
Or with declarative syntax:
post {
success {
}
aborted{
script{
Run.fromExternalizableId(currentBuild.externalizableId).delete()
}
}
}
Using the new Jenkins declarative pipeline syntax, I'd like to test the return status of an sh script execution. Is it possible without using a script step?
Scripted pipeline (working):
...
stage ('Check url') {
node {
timeout(15) {
waitUntil {
sleep 20
def r = sh script: "wget -q ${CHECK_URL} -O /dev/null", returnStatus: true
return (r == 0);
}
}
}
}
Declarative pipeline (attempt):
...
stage('Check url'){
steps {
timeout(15) {
waitUntil {
sleep 20
sh script: "wget -q ${CHECK_URL} -O /dev/null", returnStatus: true == 0
}
}
}
}
log : java.lang.ClassCastException: body return value null is not boolean
Since it's not possible without a script block, we end up with something like this:
...
stage('Check url'){
steps {
script {
timeout(15) {
waitUntil {
sleep 20
def r = sh script: "wget -q ${CHECK_URL} -O /dev/null", returnStatus: true
return r == 0
}
}
}
}
}