We have TestCafe.js UI tests that run a regression suite in a Jenkins environment.
We're exploring a way to set a pass threshold for the test suite that determines whether the Jenkins job is marked as Pass or Fail,
i.e. if 98%+ of the tests pass, then mark the test job as passed.
For xUnit projects the same could be achieved using the xUnit test plugin, etc.
Example references:
How can I have Jenkins fail a build only when the number of test failures changes?
How to fail a Jenkins job based on pass rate threshold of testng tests
How to not mark Jenkins job as FAILURE when pytest tests fail
Is something similar possible for TestCafe-based tests, either through TestCafe customization or through some Jenkins plugin?
Our Jenkinsfile:
#!groovy
pipeline {
    environment {
        CI = 'true'
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '50'))
        disableResume()
        ansiColor('xterm')
    }
    agent none
    // Define the stages of the pipeline:
    stages {
        stage('setup') {
            steps {
                script {
                    cicd.setupBuild()
                }
            }
        }
        // Use the make target to run tests:
        stage('Tests') {
            agent any
            steps {
                script {
                    cicd.withSecret(<keys>) {
                        cicd.runMake("test")
                    }
                }
            }
            post {
                cleanup {
                    archiveArtifacts artifacts: "screenshots/**", allowEmptyArchive: true
                }
            }
        }
    }
    post {
        success {
            script { cicd.buildSuccess() }
        }
        failure {
            script {
                slackSend channel: "#<test-notifications-channel>", color: 'bad', message: "Regression tests failed or unstable <${env.RUN_DISPLAY_URL}|${env.JOB_NAME}>"
                cicd.buildFailure()
            }
        }
    }
}
TestCafe provides a number of built-in reporters, each of which generates a report in a specific format. Once a report is produced, the CI system (or a plugin therein) can parse it and perform threshold checks based on the number of failed/passed tests. The TestCafe documentation includes an example of Jenkins integration. The Jenkins JUnit Plugin used in that example doesn't support setting thresholds yet (see this issue), but you can follow the steps in the guide in a similar way using the Jenkins xUnit Plugin instead.
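For example, TestCafe's built-in xunit reporter can write a JUnit-style XML report that the xUnit Plugin evaluates against thresholds. A rough sketch of such a test stage (the browser, report path, and threshold values are illustrative, not taken from the original setup):

stage('Tests') {
    steps {
        // TestCafe's built-in xunit reporter produces JUnit-style XML
        sh 'npx testcafe chromium tests/ --reporter xunit:res/report.xml'
    }
    post {
        always {
            // xUnit Plugin: thresholdMode 2 treats thresholds as percentages,
            // e.g. mark the build UNSTABLE above 2% failed tests (the 98% pass
            // rate from the question) and FAILED above 5%
            xunit(
                thresholdMode: 2,
                thresholds: [failed(unstableThreshold: '2', failureThreshold: '5')],
                tools: [JUnit(pattern: 'res/report.xml')]
            )
        }
    }
}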
We have two Jenkins jobs, AA and BB.
Is there a way to allow BB to be triggered only by AA, after AA completes?
Basically, you can use build triggers to do this; more about build triggers can be found here. There are two ways to add them: through the UI, by going to the job's Configure page and adding a trigger under Build Triggers, or in the pipeline definition itself.
In a declarative pipeline, you can add triggers as shown below.
pipeline {
    agent any
    triggers { upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.SUCCESS) }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World BB'
            }
        }
    }
}
Here you can specify the threshold for when to trigger the build, based on the status of the upstream build. There are four different thresholds:
hudson.model.Result.ABORTED: The upstream build was manually aborted.
hudson.model.Result.FAILURE: The upstream build had a fatal error.
hudson.model.Result.SUCCESS: The upstream build had no errors.
hudson.model.Result.UNSTABLE: The upstream build had an unstable result.
Update 02
If you want to restrict all other jobs/users from triggering this job, you will have to restructure your job: you can wrap your stages in a parent stage and conditionally check who triggered the job. Note that the job will still be triggered, but its stages will be skipped. Please refer to the following pipeline.
pipeline {
    agent any
    triggers { upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.SUCCESS) }
    stages {
        stage('Parent') {
            // We will restrict triggering this job to everyone other than Job AA
            when {
                expression {
                    print("Checking if the trigger is allowed to execute this job")
                    // Return the check itself, not the (null) result of print()
                    return 'AA' in currentBuild.buildCauses.upstreamProject
                }
            }
            stages {
                stage('Hello') {
                    steps {
                        echo 'Hello World BB'
                    }
                }
            }
        }
    }
}
I already wrote an example Jenkinsfile to check out, build, and deploy a single project. Is there a way to do all of this for multiple projects in different git repos at the same time, using just one Jenkinsfile? I know I can set these projects up as independent jobs and use a Jenkinsfile to call them, but I'm wondering if I can do this without independent jobs.
Thanks.
You can make use of the Job DSL Plugin to achieve this.
The Jenkins Job DSL API will help you write DSL scripts; there you can find all the built-in DSL methods needed to construct jobs.
Example pipeline script:
pipeline {
    agent any
    stages {
        stage('Job1') {
            steps {
                // Pipeline job
                jobDsl scriptText: '''pipelineJob("$job1") {
                    definition {
                        cpsScm {
                            scm {
                                git {
                                    remote {
                                        name('origin')
                                        url('https://github.com/satta19/user-node.git')
                                        credentials('git2-cred')
                                    }
                                    branch('master')
                                }
                            }
                            scriptPath('Jenkinsfile')
                        }
                    }
                }'''
            }
        }
        stage('Job2') {
            steps {
                // Freestyle job
                jobDsl scriptText: '''job("$job2") {
                    steps {
                        shell('echo Hello World!')
                    }
                }'''
            }
        }
    }
}
Note: I have taken the job names as string parameters, i.e. $job1 and $job2, in the above example pipeline script.
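Also note that a triple-single-quoted Groovy string is not interpolated by the pipeline itself, so one way to make $job1 and $job2 resolve is to pass them into the DSL script's binding through the jobDsl step's additionalParameters option. A minimal sketch (the job name value is hypothetical):

// $job2 stays literal inside the '''...''' script text and is resolved by
// the Job DSL engine from the binding supplied via additionalParameters.
jobDsl scriptText: '''job("$job2") {
    steps {
        shell('echo Hello World!')
    }
}''', additionalParameters: [job2: 'hello-world-freestyle'] // hypothetical name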
My use case: I want to set up a Jenkins configuration via the Jenkins Helm chart, using the JCasC plugin. I would also like to define a number of jobs via the Pipeline plugin in a series of Jenkinsfiles, so that the entire setup is configured in code, with a clean, complete installation able to be performed just by running helm install.
However, I'm having trouble loading my Pipeline scripts. In my JCasC, I have defined a Job DSL seed script as follows:
job('seedJob') {
    scm {
        git {
            remote {
                url 'ssh://git@foo/bar.git'
                credentials 'creds'
            }
        }
    }
    steps {
        dsl {
            external 'jenkins/jobs/*.groovy'
        }
    }
}
This successfully pulls the scripts from the repo, an example of which is:
pipeline {
    // hello.groovy
    // Do stuff
}
However, the job fails when parsing the Pipeline script with the following error:
ERROR: (hello.groovy, line 1) No signature of method: hello.pipeline() is applicable for argument types: (hello$_run_closure1) values: [hello$_run_closure1@4c6f43b6]
Possible solutions: pipelineJob(java.lang.String), pipelineJob(java.lang.String, groovy.lang.Closure)
Finished: FAILURE
My suspicion is that Pipeline scripts can't be read this way via Job DSL. If this is the case, is there a way I can achieve loading multiple Pipeline scripts from a single seed job?
Seed jobs should look like:
pipelineJob('job_name_here') {
    definition {
        cpsScm {
            scm {
                git {
                    branches('*/master')
                    branches('*/release')
                    remote {
                        credentials('credentials_id_from_jenkins_here')
                        name('name')
                        url('git@gitlab_repo_here.git')
                    }
                }
            }
        }
    }
    triggers {
        gitlab {
            ciSkip(true)
            triggerOnPush(true)
            triggerOnMergeRequest(false)
            triggerOnClosedMergeRequest(true)
            branchFilterType('RegexBasedFilter')
            targetBranchRegex('(.*master.*|.*release.*)')
            secretToken('paste_secret_token_for_webhook_here')
        }
    }
}
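In other words, the files picked up by external 'jenkins/jobs/*.groovy' must themselves be Job DSL scripts that define pipeline jobs, not raw Jenkinsfiles. A minimal sketch of what jenkins/jobs/hello.groovy could look like instead (the job name and script path are illustrative):

// jenkins/jobs/hello.groovy -- a Job DSL script that defines a pipeline job
// and points it at an actual Jenkinsfile kept elsewhere in the repository.
pipelineJob('hello') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('ssh://git@foo/bar.git') // same repo the seed job clones
                        credentials('creds')
                    }
                    branch('master')
                }
            }
            scriptPath('jenkins/pipelines/hello/Jenkinsfile') // hypothetical path
        }
    }
}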
I am working on a project with a Jenkinsfile setup. This project runs a number of integration tests, some of which are expected to fail. We are fixing the tests (or the implementation) one by one, but in the meantime the jobs are marked as failed.
The relevant stage snippet is:
stage('Run ITs') {
    steps {
        sh 'SHOW_LOGS=0 ./compose/scripts/up-testing.sh'
        sh 'sleep 60'
        timeout(720) {
            sh './testing/scripts/run-its.sh'
        }
    }
    post {
        always {
            sh './testing/scripts/summarize-it-results.sh'
            junit 'testing/failsafe-reports/*.xml'
            sh './compose/scripts/killall.sh'
        }
    }
}
I'd like to set a threshold (T) on the number of failures + errors (F+E) and mark the build as unstable if F+E <= T, and as failed otherwise.
How can I do this with the Jenkins pipeline plugin?
I think it is currently not possible with the JUnit plugin out of the box. Here is the corresponding issue in the Jenkins issue tracker.
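As a workaround, the junit pipeline step returns a TestResultSummary object, so a threshold check can be scripted by hand. A rough sketch under that assumption (the threshold value is illustrative; the unstable step needs a reasonably recent Pipeline: Basic Steps plugin):

script {
    // The junit step returns a TestResultSummary; failCount counts the
    // unsuccessful tests recorded in the JUnit XML reports.
    def summary = junit 'testing/failsafe-reports/*.xml'
    int T = 5 // illustrative threshold
    if (summary.failCount > T) {
        error "Too many failing tests: ${summary.failCount} > ${T}"
    } else if (summary.failCount > 0) {
        unstable "Tests failing, but within threshold (${summary.failCount} <= ${T})"
    }
}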
The following SonarQube (6.3) analysis stage in a declarative pipeline in Jenkins 2.50 is failing with this error in the console log: http://pastebin.com/t2ja23vC. More specifically:
SonarQube installation defined in this job (SonarGate) does not match any configured installation. Number of installations that can be configured: 1.
Update: after changing "SonarQube" to "SonarGate" in the Jenkins settings (under SonarQube servers, so it'll match the Jenkinsfile), I get a different error: http://pastebin.com/HZZ6fY6V
java.lang.IllegalStateException: Unable to get SonarQube task id and/or server name. Please use the 'withSonarQubeEnv' wrapper to run your analysis.
The stage is a modification of the example from the SonarQube docs: https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+Jenkins#AnalyzingwithSonarQubeScannerforJenkins-AnalyzinginaJenkinspipeline
stage ("SonarQube analysis") {
steps {
script {
STAGE_NAME = "SonarQube analysis"
if (BRANCH_NAME == "develop") {
echo "In 'develop' branch, don't analyze."
}
else { // this is a PR build, run sonar analysis
withSonarQubeEnv("SonarGate") {
sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
}
}
}
}
}
stage ("SonarQube Gatekeeper") {
steps {
script {
STAGE_NAME = "SonarQube Gatekeeper"
if (BRANCH_NAME == "develop") {
echo "In 'develop' branch, skip."
}
else { // this is a PR build, fail on threshold spill
def qualitygate = waitForQualityGate()
if (qualitygate.status != "OK") {
error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
}
}
}
}
}
I also created a webhook, sonarqube-webhook, with the URL http://****/sonarqube-webhook/. Should it be like that, or http://****/sonarqube/sonarqube-webhook? To access the server dashboard I use http://****/sonarqube.
In SonarQube's Quality Gates section I created a new quality gate (screenshot omitted).
I am not sure if the setting in SonarGate is correct. I use jenkins-mocha to generate an lcov.info file that Sonar uses to generate the coverage data.
Perhaps the quality gate is the wrong setting to use? The end goal is to fail the job in Jenkins if the coverage % is not met.
Finally, I am not sure whether the following configurations in the Jenkins system configuration are required at all (screenshots omitted; note that the port in the second one is 9000, not 900, the text was cut off in the screenshot).
The SonarQube Jenkins plugin scans the build output for two specific lines, which it uses to get the SonarQube report task properties and the project URL. If your invocation of sonar-scanner does not output these lines, the waitForQualityGate() call won't have the task ID to look them up, so you will have to figure out the correct settings to make the scanner output more verbose.
See the extractSonarProjectURLFromLogs and extractReportTask methods in the SonarUtils class of the plugin to understand how they work:
ANALYSIS SUCCESSFUL, you can browse <project URL> is used to add a link to the badge (in the build history)
Working dir: <dir with report-task.txt> is used to pass the task ID to the waitForQualityGate step
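For example, you could try raising the scanner's log verbosity with its debug flag (assuming the standard sonar-scanner CLI), so that those lines appear in the build log:

withSonarQubeEnv("SonarGate") {
    // -X enables sonar-scanner debug output; the "Working dir: ..." line
    // must appear in the log for waitForQualityGate() to find the task ID
    sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner -X"
}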
This was discovered to be a bug in the SonarQube Scanner for Jenkins when jobs run on a Jenkins slave (if the job runs on the master, it works). You can read more here: https://jira.sonarsource.com/browse/SONARJNKNS-282
I have tested this using a test build of v2.61 of the scanner plug-in and found it to work.
The solution is to upgrade to v2.61 once it is released.
This stage will then work:
stage ("SonarQube analysis") {
steps {
withSonarQubeEnv('SonarQube') {
sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
}
def qualitygate = waitForQualityGate()
if (qualitygate.status != "OK") {
error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
}
}
}
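Also note that waitForQualityGate() relies on a webhook from the SonarQube server to deliver the analysis result back to Jenkins: the webhook URL configured in SonarQube should be the Jenkins base URL followed by /sonarqube-webhook/, otherwise the step will block until it times out.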
If you're running SonarQube in a Docker container, check that the memory isn't exhausted. We were maxing out, which seemed to be the issue.