Jenkins declarative pipeline - setting a stable/unstable threshold

I am working on a project with a Jenkinsfile setup. This project runs a number of integration tests, some of which are expected to fail. We are fixing the tests (or the implementation) one by one, but in the meantime the jobs are marked as failed.
The relevant stage snippet is:
stage ('Run ITs') {
    steps {
        sh 'SHOW_LOGS=0 ./compose/scripts/up-testing.sh'
        sh 'sleep 60'
        timeout (720) {
            sh './testing/scripts/run-its.sh'
        }
    }
    post {
        always {
            sh './testing/scripts/summarize-it-results.sh'
            junit 'testing/failsafe-reports/*.xml'
            sh './compose/scripts/killall.sh'
        }
    }
}
I'd like to set a threshold (T) on the number of failures plus errors (F+E) and mark the build as UNSTABLE if F+E <= T, and as FAILED otherwise.
How can I do this with the Jenkins pipeline plugin?

I think this is currently not possible with the JUnit plugin out of the box. Here is the corresponding issue in the Jenkins issue tracker.
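As a workaround you can compute the threshold yourself in a script block: when called from script, the junit step returns a TestResultSummary, so you can read the failure count and escalate the result. A minimal sketch (the threshold T is illustrative, and it assumes Jenkins counts JUnit errors together with failures, so failCount covers F+E):

post {
    always {
        sh './testing/scripts/summarize-it-results.sh'
        script {
            int T = 10 // illustrative threshold
            // junit marks the build UNSTABLE on failures and returns a summary
            def summary = junit testResults: 'testing/failsafe-reports/*.xml', allowEmptyResults: true
            if (summary.failCount > T) {
                // escalate from UNSTABLE to FAILURE beyond the threshold
                currentBuild.result = 'FAILURE'
            }
        }
        sh './compose/scripts/killall.sh'
    }
}

With this, F+E <= T leaves the build UNSTABLE (set by the junit step itself), while anything above the threshold fails it.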

Related

How to create pass threshold for TestCafe Tests on Jenkins

We have TestCafe.js UI tests that run a regression suite in our Jenkins environment.
We're exploring a way to set a pass threshold for the test suite that determines whether the Jenkins job status is Pass or Fail.
i.e., if 98%+ of the tests pass, mark the job as passed.
For xUnit projects the same could be achieved using the xUnit plugin, etc.
Example reference: How can I have Jenkins fail a build only when the number of test failures changes?
How to fail a Jenkins job based on pass rate threshold of testng tests
How to not mark Jenkins job as FAILURE when pytest tests fail
Is something similar possible for TestCafe-based tests, either through TestCafe customization or through some Jenkins plugin?
Our Jenkinsfile:
#!groovy
pipeline {
    environment {
        CI = 'true'
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '50'))
        disableResume()
        ansiColor('xterm')
    }
    agent none
    // Define the stages of the pipeline:
    stages {
        stage('setup') {
            steps {
                script {
                    cicd.setupBuild()
                }
            }
        }
        // Use the make target to run tests:
        stage('Tests') {
            agent any
            steps {
                script {
                    cicd.withSecret(<keys>) {
                        cicd.runMake("test")
                    }
                }
            }
            post {
                cleanup {
                    archiveArtifacts artifacts: "screenshots/**", allowEmptyArchive: true
                }
            }
        }
    }
    post {
        success {
            script { cicd.buildSuccess() }
        }
        failure {
            script {
                slackSend channel: "#<test-notifications-channel>", color: 'bad', message: "Regression tests failed or unstable <${env.RUN_DISPLAY_URL}|${env.JOB_NAME}>"
                cicd.buildFailure()
            }
        }
    }
}
TestCafe provides a number of reporters that generate a report in a particular format. Once produced, the CI system (or a plugin therein) can parse the report and perform threshold checks based on the number of failed/passed tests. The TestCafe documentation includes an example of Jenkins integration. The Jenkins JUnit plugin used in that example doesn't support setting thresholds yet (see the issue linked above), but you can follow the same steps in the guide while using the Jenkins xUnit plugin instead.
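For example, the xUnit plugin's pipeline step accepts explicit thresholds. A sketch, assuming TestCafe is run with its xunit/JUnit reporter writing to res.xml (the file name and the numbers are placeholders):

post {
    always {
        // thresholdMode: 2 interprets the thresholds as percentages
        xunit thresholdMode: 2,
            thresholds: [
                failed(unstableThreshold: '0', failureThreshold: '2') // fail only above 2% failed tests
            ],
            tools: [
                JUnit(pattern: 'res.xml', skipNoTestFiles: false, stopProcessingIfError: true)
            ]
    }
}

With percentage mode, a failureThreshold of '2' roughly matches the "98%+ must pass" requirement.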

Jenkins cucumber reports

I'm using the Cucumber reports plugin in my declarative pipeline like this:
cucumber '**/cucumber.json'
I'm able to check whether some tests failed through the link in the sidebar, but do I need to do something to mark the stage containing the cucumber.json check as failed if some cucumber reports fail? The problem is that the build and the stage are both green and successful despite some failed cucumber reports.
Jenkins version is 2.176.3
Cucumber reports version is 4.10.0
The cucumber command you are using just generates the report, regardless of the test result.
So yes, you have to make your pipeline fail somehow: the problem you are facing is that your test command never returns a failure, so your pipeline never fails.
The way to go is to make the command that runs the tests return a non-zero exit code (exit 1) if something went wrong in your tests. That makes the pipeline stage go red.
If you run your tests with Maven, this is handled automatically by 'mvn test' (or whatever goal you use).
Otherwise, if you cannot do that, you will have to add something like an sh script
that returns the exit code (0 pass / 1 fail), or a Groovy function inside a 'script' block that sets the pipeline's currentBuild.result value:
def checkTestResult() {
    // Check some file to see if tests went fine or not
    return 'SUCCESS' // or 'FAILURE'
}
...
stage('Check test results') {
    steps {
        script {
            currentBuild.result = checkTestResult()
            if (currentBuild.result == 'FAILURE') {
                sh "exit 1" // Force pipeline exit with build result failed
            }
        }
    }
}
...
I recommend running the cucumber command in an 'always' post-build action of your declarative pipeline,
as it is a step you will likely want to execute at the end of the pipeline every time, whether it passes or fails. See the following example:
pipeline {
    agent any
    stages {
        stage('Get code') {
            steps {
                checkout scm // Whatever
            }
        }
        stage('Run tests') {
            steps {
                sh "mvn test" // run_tests.sh or groovy code
            }
        }
    }
    post {
        always {
            cucumber '**/cucumber.json'
        }
    }
}
It is possible to set buildStatus: 'FAILURE' to mark the build as failed if a report is marked as failed:
cucumber fileIncludePattern: '**/cucumber.json', buildStatus: 'FAILURE'

My jenkinsfile does not compile anymore when trying to add a post build action

My Jenkinsfile does not compile anymore when I try to add a post action. The post action's output should appear in the Jenkins console output at the end of the build.
Part I is my base Jenkinsfile code, for which builds work fine.
Part II is the patch added to Part I, with which every build fails.
I want to integrate Part I and Part II to get the expected output described below, but the integration fails however the insertion is made.
I have tried a lot of things and I'm stuck now, so any help will be appreciated.
// Part I : my base code
node {
    def mvnHome
    stage('Preparation') {
        git 'https://github.com/jglick/simple-maven-project-with-tests.git'
        // Get the Maven tool.
        // ** NOTE: This 'M3' Maven tool must be configured
        // ** in the global configuration.
        mvnHome = tool 'M3'
    }
    stage('Build') {
        // Run the maven build
        if (isUnix()) {
            sh "'${mvnHome}/bin/mvn' -Dmaven.test.failure.ignore clean package"
        } else {
            bat(/"${mvnHome}\bin\mvn" -Dmaven.test.failure.ignore clean package/)
        }
    }
    stage('Results') {
        junit '**/target/surefire-reports/TEST-*.xml'
        archiveArtifacts 'target/*.jar'
    }
}
// Part II : code to add to the previous code
post {
    always {
        echo 'I have finished and deleting workspace'
        // deleteDir()
    }
    success {
        echo 'Job succeeded!'
    }
    unstable {
        echo 'I am unstable :/'
    }
    failure {
        echo 'I failed :('
    }
    changed {
        echo 'Things were different before...'
    }
}
Expected output in the console: 'Job succeeded!', 'I am unstable :/', or 'I failed :(' ... depending on the Jenkins build status, and the workspace always cleaned before each new build.
The actual result is this error message in the console output:
java.lang.NoSuchMethodError: No such DSL method 'post' found among steps [archive, bat, build, catchError, checkout, deleteDir, dir ......
You are mixing up scripted and declarative pipeline syntax. post is part of declarative pipeline, but you are using the scripted variant (no pipeline block, just node and stages).
You have to use try/catch/finally instead.
See the documentation.
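A sketch of how Part II could be folded into Part I that way (stage bodies elided; the success/failure/unstable/always blocks are mapped onto the try/catch/finally structure):

node {
    try {
        stage('Preparation') { /* ... as in Part I ... */ }
        stage('Build') { /* ... as in Part I ... */ }
        stage('Results') { /* ... as in Part I ... */ }
        echo 'Job succeeded!'
    } catch (err) {
        currentBuild.result = 'FAILURE'
        echo 'I failed :('
        throw err
    } finally {
        if (currentBuild.result == 'UNSTABLE') {
            echo 'I am unstable :/'
        }
        echo 'I have finished and deleting workspace'
        // deleteDir()
    }
}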

Mark a stage in Jenkins Pipeline as eg "UNSTABLE" but proceed with future stages?

I'm going to use the Jenkins Pipeline plugin to test several binaries A, B, C on several nodes 1, 2, 3.
At the end of my tests I would like to have the result of every single combination, so my pipeline must not abort when a single stage fails; it should proceed.
eg: A1 green, A2 green, A3 red, B1 green, B2 red, ..., C3 green
But when the first binary returns a non-zero value ("binary not working on the system"), its stage is marked as FAILURE and all following stages are skipped.
Is there a possibility in Jenkins Pipeline to mark a stage as "UNSTABLE" but proceed with running the other tests?
According to Continue Jenkins job after failed stage while marking stage as failed, I can't mark this step as failed. The solution suggested there, running the tasks in parallel, does not work for my setup. So is it possible to safely mark it as something else? Is it possible to manipulate the result of a stage?
The question How to continue past a failing stage in Jenkins declarative pipeline syntax intends to use a scripted pipeline. I would like to avoid that if it is possible another way.
pipeline {
    agent { label 'master' }
    stages {
        stage('A1') {
            agent { label 'Node1' }
            steps {
                sh 'binA'
            }
        }
        stage('A2') {
            agent { label 'Node1' }
            steps {
                sh 'binB' // If this bin fails, all following stages are skipped
            }
        }
        // ...
        stage('C3') {
            agent { label 'Node3' }
            steps {
                sh 'binC'
            }
        }
    }
}
Declarative Pipeline: Though using currentBuild.result = 'UNSTABLE' works in declarative pipelines too, Blue Ocean displays all stages as unstable irrespective of which stage fails.
To mark only specific stages as unstable, use the step unstable(message: String) as described here within your stage and install/update the following plugins:
Pipeline: Basic Steps to 2.16 or newer
Pipeline: API Plugin to 2.34 or newer
Pipeline: Groovy to 2.70 or newer
Pipeline Graph Analysis to 1.10 or newer
Sample pipeline stage:
stage('Sign Code') {
    steps {
        script {
            try {
                pwd()
                sh "<YOUR SCRIPT HERE>"
            }
            catch (err) {
                unstable(message: "${STAGE_NAME} is unstable")
            }
        }
    }
}
Note: This also marks the overall build status as unstable.
There is now a more elegant solution that allows you to do more than just set the stage and the job result to unstable: using catchError, you can set any combination of stage and build result.
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                sh 'exit 0'
            }
        }
        stage('2') {
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh "exit 1"
                }
            }
        }
        stage('3') {
            steps {
                sh 'exit 0'
            }
        }
    }
}
In the example above, all stages will execute, the pipeline will be successful, but stage 2 will show as failed.
As mentioned above, you can freely choose the buildResult and stageResult. You can even fail the build and continue the execution of the pipeline.
Just make sure your Jenkins is up to date, since this is a fairly new feature. (Pipeline: Basic Steps needs to be 2.18 or newer)
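For instance, to fail the build but still continue executing the remaining stages, a small variant of stage 2 above would look like this (a sketch of that combination):

stage('2') {
    steps {
        // marks both the stage and the overall build as FAILURE,
        // but execution continues and stage 3 still runs
        catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
            sh 'exit 1'
        }
    }
}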
For a scripted pipeline, you can use try/catch blocks inside the stages and set currentBuild.result = 'UNSTABLE' in the exception handler.
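A minimal scripted sketch of that approach (binA and binB stand in for the real test commands):

node {
    stage('A1') {
        try {
            sh 'binA'
        } catch (err) {
            echo "binA failed: ${err}"
            currentBuild.result = 'UNSTABLE' // downgrade instead of aborting
        }
    }
    stage('A2') {
        // still runs even if the previous stage was caught and marked UNSTABLE
        sh 'binB'
    }
}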

Using waitForQualityGate in a Jenkins declarative pipeline

The following SonarQube (6.3) analysis stage in a declarative pipeline in Jenkins 2.50 is failing with this error in the console log: http://pastebin.com/t2ja23vC. More specifically:
SonarQube installation defined in this job (SonarGate) does not match any configured installation. Number of installations that can be configured: 1.
Update: after changing "SonarQube" to "SonarGate" in the Jenkins settings (under SonarQube servers, so it'll match the Jenkinsfile), I get a different error: http://pastebin.com/HZZ6fY6V
java.lang.IllegalStateException: Unable to get SonarQube task id and/or server name. Please use the 'withSonarQubeEnv' wrapper to run your analysis.
The stage is a modification of the example from the SonarQube docs: https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+Jenkins#AnalyzingwithSonarQubeScannerforJenkins-AnalyzinginaJenkinspipeline
stage ("SonarQube analysis") {
steps {
script {
STAGE_NAME = "SonarQube analysis"
if (BRANCH_NAME == "develop") {
echo "In 'develop' branch, don't analyze."
}
else { // this is a PR build, run sonar analysis
withSonarQubeEnv("SonarGate") {
sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
}
}
}
}
}
stage ("SonarQube Gatekeeper") {
steps {
script {
STAGE_NAME = "SonarQube Gatekeeper"
if (BRANCH_NAME == "develop") {
echo "In 'develop' branch, skip."
}
else { // this is a PR build, fail on threshold spill
def qualitygate = waitForQualityGate()
if (qualitygate.status != "OK") {
error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
}
}
}
}
}
I also created a webhook, sonarqube-webhook, with the URL http://****/sonarqube-webhook/. Should it be like that, or http://****/sonarqube/sonarqube-webhook? To access the server dashboard I use http://****/sonarqube.
In SonarQube's Quality Gates section I created a new quality gate (screenshot omitted).
I am not sure if the setting in SonarGate is correct. I do use jenkins-mocha to generate an lcov.info file that is used in Sonar to generate the coverage data.
Perhaps the quality gate is the wrong setting to use for this? The end goal is to fail the job in Jenkins if the coverage % is not met.
Finally, I am not sure if the following configurations in the Jenkins system configuration are required at all (two screenshots omitted; the port is 9000, not 900, the text is cut off in the screenshot).
The SonarQube Jenkins plugin scans the build output for two specific lines, which it uses to get the SonarQube report task properties and project URL. If your invocation of sonar-scanner does not output these lines, the waitForQualityGate() call won't have the task ID to look up the result, so you will have to figure out the correct settings to make the scanner output more verbose.
See the extractSonarProjectURLFromLogs and extractReportTask methods in the SonarUtils class of the plugin to understand how they work:
ANALYSIS SUCCESSFUL, you can browse <project URL> is used to add a link to the badge (in the build history)
Working dir: <dir with report-task.txt> is used to pass the task ID to the waitForQualityGate step
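Given those constraints, the usual shape of the two stages is to run the scanner inside withSonarQubeEnv (so the plugin can pick up the report-task.txt location from the log) and to wrap waitForQualityGate in a timeout in case the webhook never arrives. A sketch, assuming a plugin version that supports abortPipeline (the server name and timeout are illustrative):

stage ("SonarQube analysis") {
    steps {
        withSonarQubeEnv("SonarGate") { // must match the server name configured in Jenkins
            sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
        }
    }
}
stage ("SonarQube Gatekeeper") {
    steps {
        timeout(time: 10, unit: 'MINUTES') {
            // fails the build if the quality gate (e.g. coverage) is not met
            waitForQualityGate abortPipeline: true
        }
    }
}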
This was discovered to be a bug in the SonarQube scanner for Jenkins, when using a Jenkins slave for jobs (if the job is run on the master, it'd work). You can read more here: https://jira.sonarsource.com/browse/SONARJNKNS-282
I have tested this using a test build of v2.61 of the scanner plug-in and found it working.
The solution is to upgrade to v2.61 when released.
This stage will then work:
stage ("SonarQube analysis") {
steps {
withSonarQubeEnv('SonarQube') {
sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
}
def qualitygate = waitForQualityGate()
if (qualitygate.status != "OK") {
error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
}
}
}
If you're running SonarQube in a Docker container, check that its memory isn't exhausted. We were maxing out, which seemed to be the issue.
