Jenkins waitForQualityGate wrong id

I'm trying to integrate SonarQube into my Jenkins pipeline; everything works fine until the quality gate check.
stage('Sonar') {
    steps {
        withSonarQubeEnv(installationName: 'Sonarqube', credentialsId: 'sonar') {
            sh "$SCANNER_HOME/bin/sonar-scanner -D'sonar.projectKey=$JOB_NAME'"
        }
    }
}
stage("Quality Gate") {
    steps {
        timeout(time: 1, unit: 'HOURS') {
            waitForQualityGate abortPipeline: true
        }
    }
}
withSonarQubeEnv reports a task id, for example AXyPGkHZtOM2BAFbSUcX. Using api/ce/task?id=AXyPGkHZtOM2BAFbSUcX I can see the task status and the analysisId (e.g. AXyPGkmqJtbgJ09MpQ6B).
The problem is that waitForQualityGate always reads api/qualitygates/project_status?analysisId=AXyLfj5JlX0w7MRERt_e, a different, stale analysisId, resulting in a 404.
I've been stuck on this for about five hours and can't figure it out.
Does anyone have any ideas?

The stages look fine. The HTTP 404 is most likely due to SonarQube not understanding the projectKey: above you have -D'sonar.projectKey=$JOB_NAME'. The property and variable are quoted incorrectly; it should be -Dsonar.projectKey='$JOB_NAME'.
That way the SonarQube server will be able to send the analysis result back to the Jenkins webhook with the correct key.
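For illustration, the corrected Sonar stage would look like this (a sketch based on the stage from the question, reusing its installation and credential names):
stage('Sonar') {
    steps {
        withSonarQubeEnv(installationName: 'Sonarqube', credentialsId: 'sonar') {
            // Quote only the value, so the scanner sees -Dsonar.projectKey=<value>
            sh "$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey='$JOB_NAME'"
        }
    }
}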

I found the problem.
I had moved the SonarQube temp files to another location. That change is reflected on the Jenkins side too, but the plugin can't handle it, so it was still looking for the file in the location used before the temp-dir change.
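If you do relocate the scanner's working files, one way to keep the plugin and the scanner in agreement (a sketch, not from the original answer; sonar.working.directory is the scanner property that controls where report-task.txt is written, defaulting to .scannerwork) is to pin the directory explicitly:
stage('Sonar') {
    steps {
        withSonarQubeEnv(installationName: 'Sonarqube', credentialsId: 'sonar') {
            // Pin the scanner working directory so report-task.txt is written
            // where waitForQualityGate will look for it; .scannerwork is the
            // scanner's default (assumption: default relative paths in use)
            sh "$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey='$JOB_NAME' -Dsonar.working.directory=.scannerwork"
        }
    }
}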

Related

Prevent Jenkins job from building from another Jenkins file

We have two Jenkins jobs, AA and BB.
Is there a way to allow BB to be triggered only by AA, after AA completes?
Basically, you can use build triggers to do this; more about build triggers can be found in the Jenkins documentation. There are two ways to add them: through the UI, by going to Configure Job and adding a trigger there, or directly in the pipeline.
In a declarative pipeline, you can add triggers as shown below.
pipeline {
    agent any
    triggers { upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.SUCCESS) }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World BB'
            }
        }
    }
}
Here you can specify a threshold that determines when the build is triggered, based on the status of the upstream build. There are four different thresholds:
hudson.model.Result.ABORTED: The upstream build was manually aborted.
hudson.model.Result.FAILURE: The upstream build had a fatal error.
hudson.model.Result.SUCCESS: The upstream build had no errors.
hudson.model.Result.UNSTABLE: The upstream build had an unstable result.
Update 02
If you want to restrict all other jobs/users from triggering this job, you will have to restructure it. You can wrap your stages in a parent stage and conditionally check who triggered the job. Note that the job will still trigger either way, but its stages will be skipped. Please refer to the following pipeline.
pipeline {
    agent any
    triggers { upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.SUCCESS) }
    stages {
        stage('Parent') {
            // Restrict this job so its stages only run when triggered by job AA
            when {
                expression {
                    echo "Checking if the trigger is allowed to execute this job"
                    // The expression must return the boolean itself; wrapping it
                    // in print() would return null and the stages would never run
                    return 'AA' in currentBuild.buildCauses.upstreamProject
                }
            }
            stages {
                stage('Hello') {
                    steps {
                        echo 'Hello World BB'
                    }
                }
            }
        }
    }
}

What does "No such DSL method 'readCSV'" mean in a Jenkins pipeline?

I need to read a simple CSV file. Going through the documentation, I found the readCSV method that comes with Jenkins. I put a sample file named test.csv in the workspace folder and used this simple test pipeline:
pipeline {
    agent any
    stages {
        stage('read csv') {
            steps {
                script {
                    def records = readCSV file: 'test.csv'
                    println records
                }
            }
        }
    }
}
But I keep getting the "No such DSL method 'readCSV'" error and I am not sure what it means. I have read here on SO that it usually means you are missing a plugin, but that did not seem to be the case.
In case anyone else faces the same issue: it turned out the Pipeline Utility Steps plugin (pipeline-utility-steps), which provides readCSV, was not installed. I had assumed it was installed by default.
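If installing the plugin is not an option, a minimal fallback (a sketch, assuming a simple comma-separated file with no quoted fields) is to parse the file with the core readFile step and plain Groovy:
script {
    // readFile is a core Pipeline step, so no extra plugin is required
    def records = readFile('test.csv').readLines().collect { line ->
        line.split(',') as List  // naive split: does not handle quoted commas
    }
    println records
}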

fodPollResults (Fortify on Demand) plugin is not working properly in either direct plugin or pipeline script mode in Jenkins

We are using the Fortify on Demand (FOD) platform to scan our source code for security vulnerabilities. We integrated FOD with Jenkins to automate the upload and scan process, opting for the pipeline script method of integration. Everything up to uploading and scanning runs fine, and we are capturing the policy scan status (passed or failed) as well, but the fodPollResults pipeline step fails to fail the build when the FOD policy scan fails. Irrespective of the policy scan result, the build succeeds.
Jenkins pipeline script:
stage('FOD POLL') {
    steps {
        fodPollResults bsiToken: '', personalAccessToken: 'fortify_personal_access_token', policyFailureBuildResultPreference: 2, pollingInterval: 3, releaseId: '******', tenantId: '', username: ''
    }
}
The plugin in question is Fortify on Demand Poll Results. The source code of this plugin is located here:
https://github.com/jenkinsci/fortify-on-demand-uploader-plugin/blob/master/src/main/java/org/jenkinsci/plugins/fodupload/steps/FortifyPollResults.java
and there is a bug ticket about this problem here:
https://github.com/jenkinsci/fortify-on-demand-uploader-plugin/issues/118
The following workaround seems to work (note that the manager object is provided by the Groovy Postbuild plugin):
steps {
    fodPollResults ...
    script {
        if (manager.logContains('.*Scan failed established policy check.*')) {
            error("Build failed because of negative fortify policy check.")
        }
    }
}

Jenkins declarative pipeline - setting a stable/unstable threshold

I am working on a project with a Jenkinsfile setup. This project runs a number of integration tests, some of which are expected to fail. We are fixing the tests (or the implementation) one by one, but in the meantime the jobs are marked as failed.
The relevant stage snippet is
stage('Run ITs') {
    steps {
        sh 'SHOW_LOGS=0 ./compose/scripts/up-testing.sh'
        sh 'sleep 60'
        timeout(720) {
            sh './testing/scripts/run-its.sh'
        }
    }
    post {
        always {
            sh './testing/scripts/summarize-it-results.sh'
            junit 'testing/failsafe-resports/*.xml'
            sh './compose/scripts/killall.sh'
        }
    }
}
I'd like to set a threshold (T) on the number of failures + errors (F+E) and mark the build as unstable if we get F+E <= T and failed otherwise.
How can I do this with the Jenkins pipeline plugin?
I think it is currently not possible with the JUnit plugin out of the box; there is a corresponding issue in the Jenkins issue tracker.
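That said, a rough approximation is possible in a script block (a sketch, assuming a JUnit plugin version where the junit step returns a TestResultSummary; the threshold value below is illustrative, not from the question):
post {
    always {
        script {
            int threshold = 5  // T: illustrative value, choose your own
            // junit still marks the build UNSTABLE on any failure and
            // returns a summary of the parsed results
            def summary = junit 'testing/failsafe-resports/*.xml'
            if (summary.failCount > threshold) {
                // F+E above the threshold: escalate UNSTABLE to FAILURE
                error "Too many failing tests: ${summary.failCount} > ${threshold}"
            }
        }
    }
}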

Using waitForQualityGate in a Jenkins declarative pipeline

The following SonarQube (6.3) analysis stage in a declarative pipeline in Jenkins 2.50 is failing with this error in the console log: http://pastebin.com/t2ja23vC. More specifically:
SonarQube installation defined in this job (SonarGate) does not match any configured installation. Number of installations that can be configured: 1.
Update: after changing "SonarQube" to "SonarGate" in the Jenkins settings (under SonarQube servers, so it'll match the Jenkinsfile), I get a different error: http://pastebin.com/HZZ6fY6V
java.lang.IllegalStateException: Unable to get SonarQube task id and/or server name. Please use the 'withSonarQubeEnv' wrapper to run your analysis.
The stage is a modification of the example from the SonarQube docs: https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+Jenkins#AnalyzingwithSonarQubeScannerforJenkins-AnalyzinginaJenkinspipeline
stage ("SonarQube analysis") {
steps {
script {
STAGE_NAME = "SonarQube analysis"
if (BRANCH_NAME == "develop") {
echo "In 'develop' branch, don't analyze."
}
else { // this is a PR build, run sonar analysis
withSonarQubeEnv("SonarGate") {
sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
}
}
}
}
}
stage ("SonarQube Gatekeeper") {
steps {
script {
STAGE_NAME = "SonarQube Gatekeeper"
if (BRANCH_NAME == "develop") {
echo "In 'develop' branch, skip."
}
else { // this is a PR build, fail on threshold spill
def qualitygate = waitForQualityGate()
if (qualitygate.status != "OK") {
error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
}
}
}
}
}
I also created a webhook, sonarqube-webhook, with the URL http://****/sonarqube-webhook/. Should it be like that, or http://****/sonarqube/sonarqube-webhook? To access the server dashboard I use http://****/sonarqube.
In SonarQube's Quality Gates section I created a new quality gate (screenshot omitted).
I am not sure if the setting in SonarGate is correct. I do use jenkins-mocha to generate an lcov.info file that Sonar uses to generate the coverage data.
Perhaps a quality gate is the wrong setting to use? The end goal is to fail the job in Jenkins if the coverage percentage is not met.
Finally, I am not sure whether the SonarQube-related settings in the Jenkins system configuration are required at all (two screenshots omitted; in the second, the server port is 9000, not 900, as the text is cut off in the screenshot).
The SonarQube Jenkins plugin scans the build output for two specific lines, which it uses to get the SonarQube report task properties and the project URL. If your invocation of sonar-scanner does not output these lines, the waitForQualityGate() call won't have the task ID to look it up, so you will have to figure out the correct settings to make the output more verbose.
See the extractSonarProjectURLFromLogs and extractReportTask methods in the SonarUtils class of the plugin to understand how they work:
ANALYSIS SUCCESSFUL, you can browse <project URL> is used to add a link to the badge (in the build history)
Working dir: <dir with report-task.txt> is used to pass the task ID to the waitForQualityGate step
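For example, if your scanner invocation suppresses these lines, running it in debug mode should bring them back (a sketch using the question's paths; -X is the sonar-scanner debug flag):
withSonarQubeEnv("SonarGate") {
    // -X enables debug output, so the "Working dir: ..." and
    // "ANALYSIS SUCCESSFUL ..." lines appear in the build log
    // for the plugin to parse
    sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner -X"
}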
This was discovered to be a bug in the SonarQube Scanner for Jenkins plugin when jobs run on a Jenkins slave (if the job runs on the master, it works). You can read more here: https://jira.sonarsource.com/browse/SONARJNKNS-282
I have tested this using a test build of v2.61 of the scanner plugin and found it to work.
The solution is to upgrade to v2.61 once it is released.
This stage will then work:
stage ("SonarQube analysis") {
steps {
withSonarQubeEnv('SonarQube') {
sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
}
def qualitygate = waitForQualityGate()
if (qualitygate.status != "OK") {
error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
}
}
}
If you're running SonarQube in a Docker container, check that the container's memory isn't exhausted. We were maxing out, which seemed to be the issue.
