I want my build to fail if the code coverage is below 90%.
To do that, I added a jacocoTestCoverageVerification task configuration to my build.gradle:
jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = 0.9
            }
        }
    }
}
And then I called it from my Jenkins pipeline:
stage('Integration tests coverage report - Windows') {
    when { expression { env.OS == 'BAT' } }
    steps {
        dir('') {
            bat 'gradlew.bat jacocoIntegrationReport'
            bat 'gradlew.bat jacocoTestCoverageVerification'
        }
    }
}
But my build is not failing. I also tried setting the minimum to 1.0, but the build was still successful.
I also tried adding check.dependsOn jacocoTestCoverageVerification, but the build didn't fail.
Why is it not failing?
Here is a complete build.gradle setup where the verification task is chained to the test and report tasks, so running the report also triggers the verification:
dependencies {
    implementation .....
    testImplementation(platform('org.junit:junit-bom:5.9.1'))
    testImplementation('org.junit.jupiter:junit-jupiter')
}

test {
    useJUnitPlatform()
    testLogging {
        events "passed", "skipped", "failed"
    }
}

test {
    finalizedBy jacocoTestReport // report is always generated after tests run
}

jacocoTestReport {
    dependsOn test // tests are required to run before generating the report
}

jacocoTestReport {
    finalizedBy jacocoTestCoverageVerification // verification always runs after the report
}

jacocoTestReport {
    reports {
        xml.required = false
        csv.required = false
        html.outputLocation = layout.buildDirectory.dir('jacocoHtml')
    }
}

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = 0.5
            }
        }
    }
}
Run on the command line:
$ ./gradlew :application:jacocoTestReport
> Task :application:test
MessageUtilsTest > testNothing() PASSED
> Task :application:jacocoTestCoverageVerification FAILED
[ant:jacocoReport] Rule violated for bundle application: instructions covered ratio is 0.0, but expected minimum is 0.5
FAILURE: Build failed with an exception.
I have a Groovy script to set up a scheduled job in Jenkins.
I want to execute some shell scripts on a failed build.
If I add the scripts manually after the job has been created or updated by the Groovy script, they run.
But the Groovy script does not add them:
job('TestingAnalysis') {
    triggers {
        cron('H 8 28 * *')
    }
    steps {
        shell('some jiberish to create error')
    }
    publishers {
        postBuildScripts {
            steps {
                shell('echo "fff"')
                shell('echo "FFDFDF"')
            }
            onlyIfBuildSucceeds(false)
            onlyIfBuildFails(true)
        }
        retryBuild {
            rerunIfUnstable()
            retryLimit(3)
            fixedDelay(600)
        }
    }
}
Everything works fine except:
postBuildScripts {
    steps {
        shell('echo "fff"')
        shell('echo "FFDFDF"')
    }
    onlyIfBuildSucceeds(false)
    onlyIfBuildFails(true)
}
This is my result: the post-build steps are not added to the job.
I tried postBuildSteps and also got an error.
I also tried the following, again with an error:
postBuildScripts {
    steps {
        sh' echo "ggg" '
    }
    onlyIfBuildSucceeds(false)
    onlyIfBuildFails(true)
}
Take a look at JENKINS-66189: it seems there is an issue with version 3.0 of the PostBuildScript plugin in which the old syntax (that you are using) is no longer supported. In order to use the new version in a Job DSL script you will need to use the Dynamic DSL syntax.
Use the following link in your own Jenkins instance to see the correct usage:
YOUR_JENKINS_URL/plugin/job-dsl/api-viewer/index.html#path/freeStyleJob-publishers-postBuildScript
It will help you build the correct command. In your case it will be:
job('TestingAnalysis') {
    triggers {
        cron('H 8 28 * *')
    }
    steps {
        shell('some jiberish to create error')
    }
    publishers {
        postBuildScript {
            buildSteps {
                postBuildStep {
                    stopOnFailure(false) // Mandatory setting
                    results(['FAILURE']) // Replaces onlyIfBuildFails(true)
                    buildSteps {
                        shell {
                            command('echo "fff"')
                        }
                        shell {
                            command('echo "FFDFDF"')
                        }
                    }
                }
            }
            markBuildUnstable(false) // Mandatory setting
        }
    }
}
Notice that instead of using functions like onlyIfBuildSucceeds and onlyIfBuildFails, you now just pass a list of the relevant build results to the results function (SUCCESS, UNSTABLE, FAILURE, NOT_BUILT, ABORTED).
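For example, a sketch (reusing one of the shell steps from above) that runs the post-build steps when the build either fails or is unstable:
postBuildStep {
    stopOnFailure(false)             // mandatory setting
    results(['FAILURE', 'UNSTABLE']) // run these steps for failed or unstable builds
    buildSteps {
        shell {
            command('echo "fff"')
        }
    }
}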
In a Jenkins pipeline I need to run multiple commands in a loop, each in a different directory:
preBuild = ['test\testSchemas\testSchemas', 'Common\Common']

stage('PreBuild') {
    when { expression { pipelineParams.preBuild != null } }
    steps {
        script {
            pipelineParams.preBuild.each {
                dir(pipelineParams.preBuild.${it}){
                    executeShellCommand("\"${env.NUGET}\" restore -source ${env.NEXUS_SERVER_URL}/repository/nuget-group/ -PackagesDirectory \"Packages\" ${it}")
                    executeShellCommand("\"${buildTool}\" ${pipelineParams.buildCommand} ${it}")
                }
            }
        }
    }
}
I am getting an error due to some syntax problem. Can you please help with this? I am new to Jenkinsfiles.
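A sketch of one way this could be written, assuming each list entry is itself the relative directory to run in (executeShellCommand, pipelineParams and buildTool are taken from the question; note the doubled backslashes so Groovy does not treat them as escape sequences):
preBuild = ['test\\testSchemas\\testSchemas', 'Common\\Common']

stage('PreBuild') {
    when { expression { pipelineParams.preBuild != null } }
    steps {
        script {
            pipelineParams.preBuild.each { subDir ->
                dir(subDir) { // the list entry is already the directory, no extra lookup needed
                    executeShellCommand("\"${env.NUGET}\" restore -source ${env.NEXUS_SERVER_URL}/repository/nuget-group/ -PackagesDirectory \"Packages\" ${subDir}")
                    executeShellCommand("\"${buildTool}\" ${pipelineParams.buildCommand} ${subDir}")
                }
            }
        }
    }
}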
I have a Jenkinsfile with a definition for parallel test execution, and the task is to grab the test results from both branches in order to process them in a post step somewhere.
The problem is: how do I do this? Searching for example code does not bring up anything useful: either the examples explain parallel, or they explain post with junit.
pipeline {
    agent { node { label 'swarm' } }
    stages {
        stage('Testing') {
            parallel {
                stage('Unittest') {
                    agent { node { label 'swarm' } }
                    steps {
                        sh 'unittest.sh'
                    }
                }
                stage('Integrationtest') {
                    agent { node { label 'swarm' } }
                    steps {
                        sh 'integrationtest.sh'
                    }
                }
            }
        }
    }
}
Defining a post { always { junit(...) } } step at both parallel stages yielded a positive reaction from the Blue Ocean GUI, but the test report recorded close to double the number of tests; very odd, some file must have been scanned twice. Adding this post step to the surrounding "Testing" stage gave an error.
I am missing an example detailing how to post-process test results that get created in a parallel block.
Just to record my solution for the internet:
I came up with stashing the test results in both parallel stages, and adding a final stage that unstashes the files and then post-processes them:
pipeline {
    agent { node { label 'swarm' } }
    stages {
        stage('Testing') {
            parallel {
                stage('Unittest') {
                    agent { node { label 'swarm' } }
                    steps {
                        sh 'rm build/*'
                        sh 'unittest.sh'
                    }
                    post {
                        always {
                            stash includes: 'build/**', name: 'testresult-unittest'
                        }
                    }
                }
                stage('Integrationtest') {
                    agent { node { label 'swarm' } }
                    steps {
                        sh 'rm build/*'
                        sh 'integrationtest.sh'
                    }
                    post {
                        always {
                            stash includes: 'build/**', name: 'testresult-integrationtest'
                        }
                    }
                }
            }
        }
        stage('Reporting') {
            steps {
                unstash 'testresult-unittest'
                unstash 'testresult-integrationtest'
            }
            post {
                always {
                    junit 'build/*.xml'
                }
            }
        }
    }
}
One observation though: you have to pay attention to cleaning up your workspace. Both test stages create one result file each, but on the next run both workspaces are inherited from the previous run and already contain the previously created test results in the build directory.
So you have to remove any remains of old test results before you start a new run, otherwise you would stash an outdated version of the test result from the "other" stage. I don't know if there is a better way to do this.
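One option (a sketch) is to delete just the build directory at the start of each test stage, using the built-in deleteDir step instead of rm:
steps {
    dir('build') {
        deleteDir() // remove any test results inherited from the previous run
    }
    sh 'unittest.sh'
}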
To ensure the stage('Reporting') will always be executed, put all of its steps in the post block (a fuller stage sketch follows the snippet):
post {
    always {
        unstash 'testresult-unittest'
        unstash 'testresult-integrationtest'
        junit 'build/*.xml'
    }
}
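Putting that together, the Reporting stage could look roughly like this (a sketch; a declarative stage still needs a steps block, so a placeholder echo is used here):
stage('Reporting') {
    steps {
        echo 'Collecting test results' // placeholder; the real work happens in post
    }
    post {
        always {
            unstash 'testresult-unittest'
            unstash 'testresult-integrationtest'
            junit 'build/*.xml'
        }
    }
}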
I'm creating a declarative Jenkins pipeline that looks like this:
pipeline {
    agent {
        label 'mylabel'
    }
    stages {
        stage('Install dependencies') {
            milestone()
            steps {
                sh "yarn install"
            }
        }
        stage('Lint files') {
            steps {
                sh "eslint src"
            }
        }
        stage('Create bundle') {
            steps {
                sh "yarn run build:server"
                sh "yarn run build:client"
            }
        }
        stage('Publish') {
            steps {
                timeout(time: 15, unit: 'SECONDS') {
                    input(message: 'Deploy this build to QA?')
                }
                // deployment steps
            }
        }
    }
}
It works great. However, if the timeout step fails (because we don't want to deploy this build, or nobody is there to push the button, and so on), the build is marked with status "aborted". Unfortunately this means that, for example, GitHub marks our pull requests as "checks failing".
Is there a way to give the build the status it had before the timeout() step? E.g. if the build was a success up until the timeout step, it should be marked as a success, even if the timeout happens.
Since all you want is to let the input step abort or time out without failing the build, you can just add a simple try/catch to your code:
stage('Publish') {
    steps {
        script {
            def proceed = true
            try {
                timeout(time: 15, unit: 'SECONDS') {
                    input(message: 'Deploy this build to QA?')
                }
            } catch (err) {
                proceed = false
            }
            if (proceed) {
                // deployment steps
            }
        }
    }
}
If a user aborts the build or it times out, the error is suppressed, the build is still a success, and the deployment steps won't execute.
We have a situation where we don't want to start a build if there are no user commits, because of a bug in the SCM trigger's prevention on message or user.
What we do then is fail the build with a NOT_BUILT result.
Maybe this will also work for your situation.
Try the following in your script:
try {
    stage('wait') {
        timeout(time: 15, unit: 'SECONDS') {
            input(message: 'Deploy this build to QA?')
        }
    }
} catch (err) {
    def user = err.getCauses()[0].getUser()
    if ('SYSTEM' == user.toString()) { // timeout
        currentBuild.result = "SUCCESS"
    }
}
If you don't want the job to be marked as "aborted" or "failed":
For a declarative pipeline (not a scripted one) you would do something like this:
stage('Foo') {
    steps {
        script {
            env.PROCEED_TO_DEPLOY = 1
            try {
                timeout(time: 2, unit: 'MINUTES') {
                    // ...
                }
            } catch (err) {
                env.PROCEED_TO_DEPLOY = 0
            }
        }
    }
}
stage('Bar') {
    when {
        expression {
            env.PROCEED_TO_DEPLOY == "1" // env values are stored as strings, so compare against "1"
        }
    }
    // rest of your pipeline
}
stage "Bar" would be skipped but as for the rest of the stages the job will be marked as passed (assuming nothing wrong happened before)
You could probably adjust currentBuild.result in the post section of the declarative pipeline, e.g.:
pipeline {
    stages {
        ...
    }
    post {
        aborted {
            script {
                currentBuild.result = 'SUCCESS'
            }
        }
    }
}
I have a multibranch pipeline with a Jenkinsfile in my repo and I am able to run my CI workflow (build & unit tests -> deploy-dev -> approval -> deploy-QA -> approval -> deploy-prod) on every commit.
What I would like to do is add SonarQube analysis to the first phase (build & unit tests), but only on nightly builds.
Since my build is triggered by GitLab, I have defined my pipeline triggers as follows:
pipeline {
    ...
    triggers {
        gitlab(triggerOnPush: true, triggerOnMergeRequest: true, branchFilterType: 'All')
    }
    ...
}
To set up my nightly build I have added:
triggers {
    ...
    cron('H H * * *')
}
But now, how do I execute the analysis steps only when the job is triggered by the cron expression at night?
My simplified build stage looks as follows:
stage('Build & Tests & Analysis') {
    // HERE THE BEGIN SONAR ANALYSIS (to be executed on nightly builds)
    bat 'msbuild.exe ...'
    bat 'mstest.exe ...'
    // HERE THE END SONAR ANALYSIS (to be executed on nightly builds)
}
There is a way to get the build trigger information. It is described here:
https://jenkins.io/doc/pipeline/examples/#get-build-cause
It is also worth checking this:
how to get $CAUSE in workflow
A very good reference for your case is https://hopstorawpointers.blogspot.com/2016/10/performing-nightly-build-steps-with.html. Here is the function from that source that exactly matches your need (a usage sketch follows the function):
// check if the job was started by a timer
@NonCPS
def isJobStartedByTimer() {
    def startedByTimer = false
    try {
        def buildCauses = currentBuild.rawBuild.getCauses()
        for (buildCause in buildCauses) {
            if (buildCause != null) {
                def causeDescription = buildCause.getShortDescription()
                echo "shortDescription: ${causeDescription}"
                if (causeDescription.contains("Started by timer")) {
                    startedByTimer = true
                }
            }
        }
    } catch (theError) {
        echo "Error getting build cause"
    }
    return startedByTimer
}
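A sketch of how it could be used in the build stage from the question (inside a script block, since the function call is plain Groovy; the Sonar begin/end steps are placeholders):
stage('Build & Tests & Analysis') {
    steps {
        script {
            def nightly = isJobStartedByTimer()
            if (nightly) {
                // HERE THE BEGIN SONAR ANALYSIS (nightly builds only)
            }
            bat 'msbuild.exe ...'
            bat 'mstest.exe ...'
            if (nightly) {
                // HERE THE END SONAR ANALYSIS (nightly builds only)
            }
        }
    }
}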
This works in a declarative pipeline (an example stage follows the snippet):
when {
    triggeredBy 'TimerTrigger'
}
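In context, a dedicated nightly stage gated by that condition could look like this (a sketch; the analysis steps themselves are placeholders):
stage('Nightly Sonar analysis') {
    when {
        triggeredBy 'TimerTrigger' // run only for the cron-triggered build
    }
    steps {
        echo 'Running SonarQube analysis for the nightly build'
        // SonarQube begin/end analysis steps would go here
    }
}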
For me the easiest way is to define a cron build trigger and verify the hour in the nightly stage using a when expression:
pipeline {
    agent any
    triggers {
        pollSCM('* * * * *') // runs this pipeline on every commit
        cron('30 23 * * *')  // run at 23:30:00
    }
    stages {
        stage('nightly') {
            when { // runs only when the expression evaluates to true
                expression { // will return true when the build runs via cron trigger (also when there is a commit at night between 23:00 and 23:59)
                    return Calendar.instance.get(Calendar.HOUR_OF_DAY) in 23
                }
            }
            steps {
                echo "Running the nightly stage only at night..."
            }
        }
    }
}
You could check the build cause like so:
stage('Build & Tests & Analysis') {
    when {
        expression {
            for (Object currentBuildCause : currentBuild.rawBuild.getCauses()) {
                return currentBuildCause.class.getName().contains('TimerTriggerCause')
            }
        }
    }
    steps {
        bat 'msbuild.exe ...'
        bat 'mstest.exe ...'
    }
}
However, this requires the following entries in script-approval.xml:
<approvedSignatures>
    <string>method hudson.model.Run getCauses</string>
    <string>method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild</string>
</approvedSignatures>
This can also be approved via https://YOURJENKINS/scriptApproval/.
Hopefully, this won't be necessary after JENKINS-41272 is fixed.
Until then, a workaround could be to check the hour of day in the when expression (keep in mind that these times refer to the timezone of Jenkins):
when { expression { return Calendar.instance.get(Calendar.HOUR_OF_DAY) in 0..3 } }
I've found a way which does not use "currentBuild.rawBuild", which is restricted. Begin your pipeline with:
startedByTimer = false
def buildCauses = "${currentBuild.buildCauses}"
if (buildCauses != null) {
    if (buildCauses.contains("Started by timer")) {
        startedByTimer = true
    }
}
Test the boolean where you need it, for example:
stage('Clean') {
    when {
        anyOf {
            environment name: 'clean_build', value: 'Yes'
            expression { (startedByTimer == true) }
        }
    }
    steps {
        echo "Cleaning..."
        ...
Thanks to this you can now do it without needing to use the non-whitelisted currentBuild.getRawBuild().getCauses() function, which can give you "Scripts not permitted to use method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild" depending on your setup (a usage sketch follows the function):
@NonCPS
def isJobStartedByTimer() {
    def startedByTimer = false
    try {
        def buildCauses = currentBuild.getBuildCauses()
        for (buildCause in buildCauses) {
            if (buildCause != null) {
                def causeDescription = buildCause.shortDescription
                echo "shortDescription: ${causeDescription}"
                if (causeDescription.contains("Started by timer")) {
                    startedByTimer = true
                }
            }
        }
    } catch (theError) {
        echo "Error getting build cause"
    }
    return startedByTimer
}
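The function can then be used to gate the nightly-only steps, for example in a when expression (a sketch):
stage('Nightly analysis') {
    when {
        expression { isJobStartedByTimer() }
    }
    steps {
        echo 'Started by timer, running the nightly-only analysis steps'
    }
}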