I'm trying to use OWASP ZAP (v2.11.1) within a Jenkins pipeline.
I need to scan a simple URL, for this example: https://MyHost:MyPort/ANY_PATH
After installing the Jenkins ZAP plugin, I executed the build, but it seems that the scan doesn't start.
This is my Jenkins pipeline:
pipeline {
    agent any
    stages {
        stage('Zap Test') {
            steps {
                script {
                    startZap(host: "127.0.0.1", port: 8080, timeout: 30000, zapHome: "C:\\Program Files\\OWASP\\Zed Attack Proxy", allowedHosts: ['https://igfscg.netsw.it:18080/IGFS_CG_WEB/app/cc/main/show?referenceData=B6345DAEC2AFF5A1FD8E84F8FC745E50'])
                }
            }
        }
    }
    post {
        always {
            script {
                archiveZap(failAllAlerts: 10, failHighAlerts: 0, failMediumAlerts: 0, failLowAlerts: 0, falsePositivesFilePath: "zapFalsePositives.json")
            }
        }
    }
}
This is the Jenkins log, which shows that ZAP has started and that the plugin then attempts to connect to it:
10027 [ZAP-daemon] INFO org.zaproxy.addon.oast.services.callback.CallbackService - Started callback service on 0.0.0.0:62554
10031 [ZAP-daemon] INFO org.zaproxy.zap.extension.dynssl.ExtensionDynSSL - Creating new root CA certificate
11205 [ZAP-daemon] INFO org.zaproxy.zap.extension.dynssl.ExtensionDynSSL - New root CA certificate created
11209 [ZAP-daemon] INFO org.zaproxy.zap.DaemonBootstrap - ZAP is now listening on 127.0.0.1:8080
11624 [ZAP-cfu] INFO org.zaproxy.zap.extension.autoupdate.ExtensionAutoUpdate - There is/are 24 newer addons
zap: Attempting to connect to ZAP at 127.0.0.1:8080
zap: Archiving results...
zap: Checking results...
zap: Deleting temp directory: C:\Users\a.carnevali.jenkins\workspace\igfs_cg_testing\zaptemp.tmp3371682578955130815
zap: Failed to delete temp directory. Unable to delete 'C:\Users\xxxxxx.jenkins\workspace\igfs_cg_testing\zaptemp.tmp3371682578955130815'. Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 15 additional exceptions)
Moreover, there are some additional messages in the Windows console from which I start Jenkins:
INFO o.a.h.impl.execchain.RetryExec#execute: Retrying request to {}->http://127.0.0.1:8080
INFO o.a.h.impl.execchain.RetryExec#execute: I/O exception (java.net.SocketException) caught when processing request to {}->http://127.0.0.1:8080: Software caused connection abort: recv failed
Could you please help with this?
I've been trying to get our CI job in Jenkins to run on spot instances in EC2 (using the Amazon EC2 plugin), and I'm having trouble figuring out how to retry consistently when they get interrupted. The test run is parallelized across several Jenkins nodes that run on EC2 instances. This is the relevant script for the pipeline:
for (int i = 0; i < numNodes; i++) {
    int index = i
    def nodeDisplayName = "node_${i.toString().padLeft(2, '0')}"
    env["NODE_${index}_RETRY_COUNT"] = 0
    nodes[nodeDisplayName] = {
        retry(2) {
            timeout(time: 90, unit: 'MINUTES') {
                int retryCount = env["NODE_${index}_RETRY_COUNT"]
                nodeLabel = (retryCount == 0) ? "ec2-spot" : "ec2-on-demand"
                env["NODE_${index}_RETRY_COUNT"] = retryCount + 1
                node(nodeLabel) {
                    stage('Debug info') {
                        // ...
                    }
                    stage('Run tests') {
                        // ...
                    }
                }
            }
        }
    }
}
parallel nodes
Most of the time, this works. If a spot-based node gets interrupted, it retries. But occasionally, the retry just doesn't happen. I don't see anything in the logs (or anywhere else) about why it didn't retry. Here's an example of such a run:
One thing that I've noticed is that I always see the "Agent was removed" message on the build page the same number of times as there were successful retries. In other words, if 20 nodes were interrupted and 19 of them were retried, I will see that message 19 times. It seems that, for some reason, Jenkins is not always detecting that the agent disappeared.
Another clue is that, at the end of the logs from each node, there's a difference between what gets logged for nodes that retried versus ones that didn't. For the ones that retry, the log looks like this:
Cannot contact EC2 (ec2-spot) - Jenkins Agent Image (sir-688pdhsm): hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel#4b2fd30b:EC2 (ec2-spot) - Jenkins Agent Image (sir-688pdhsm)": Remote call on EC2 (ec2-spot) - Jenkins Agent Image (sir-688pdhsm) failed. The channel is closing down or has closed down
Could not connect to EC2 (ec2-spot) - Jenkins Agent Image (sir-688pdhsm) to send interrupt signal to process
For nodes that don't retry, the end of the log looks like this:
Cannot contact EC2 (ec2-spot) - Jenkins Agent Image (sir-24h6etnm): hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel#63450caa:EC2 (ec2-spot) - Jenkins Agent Image (sir-24h6etnm)": Remote call on EC2 (ec2-spot) - Jenkins Agent Image (sir-24h6etnm) failed. The channel is closing down or has closed down
Note that the final line from the first log does not appear. I'm not sure what this means, but I'm hoping someone else might have a clue.
We are using "fortify on-demand (FOD)" platform to scan our source code to find out any security vulnerabilities are present. We integrated the FOD with jenkins to automate the process of uploading and scanning. And we opted the pipeline script method for integration. All the process up to uploading and scanning is running fine and we are capturing policy scan status (passed or failed) also, but the pipeline script of fodPollResults is failing to fail the build when the FOD policy scan is failed. irrespective of the result of policy scan the build is getting success.
Jenkins pipeline script:
stage('FOD POLL') {
    steps {
        fodPollResults bsiToken: '', personalAccessToken: 'fortify_personal_access_token', policyFailureBuildResultPreference: 2, pollingInterval: 3, releaseId: '******', tenantId: '', username: ''
    }
}
Fortify on Demand Poll Results
The source code of this plugin is located here:
https://github.com/jenkinsci/fortify-on-demand-uploader-plugin/blob/master/src/main/java/org/jenkinsci/plugins/fodupload/steps/FortifyPollResults.java
and there is a bug ticket about this problem here:
https://github.com/jenkinsci/fortify-on-demand-uploader-plugin/issues/118
The following workaround seems to work:
steps {
    fodPollResults ...
    script {
        if (manager.logContains('.*Scan failed established policy check.*')) {
            error("Build failed because of negative fortify policy check.")
        }
    }
}
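For completeness, here is a minimal sketch of the stage from the question with the workaround folded in (same fodPollResults parameters as above; manager.logContains comes from the Groovy Postbuild plugin, so this assumes that plugin is installed):

stage('FOD POLL') {
    steps {
        // Poll Fortify on Demand for the scan result (parameters as in the question).
        fodPollResults bsiToken: '', personalAccessToken: 'fortify_personal_access_token', policyFailureBuildResultPreference: 2, pollingInterval: 3, releaseId: '******', tenantId: '', username: ''
        // Workaround: fail the build explicitly when the plugin logged a policy failure.
        script {
            if (manager.logContains('.*Scan failed established policy check.*')) {
                error('Build failed because of negative Fortify policy check.')
            }
        }
    }
}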
I am trying to run sonar-scanner and access the quality gate results, and I am somewhat stuck after trying various options suggested on forums. This is my first time posting, so please let me know if I am missing any details. I do see the JSON payload in the SonarQube server webhooks console, but it is in failed status (red cross mark). Ours is a shared CloudBees Jenkins (CBJ) and SonarQube server, with limited access for me on both of them. Any help/guidance is really appreciated. Thank you so much.
======================================
SonarQube Configuration
Project_Name > Administration > Webhooks
Name: Webhook_Name
URL: https://CloudBeesJenkins_Server_FQDN/dev-master/sonarqube-webhook/
Secret: 'webhook_secret_text'
======================================
CBJ Configuration
CredentialsID: 'SonarQubeToken': Value: Scope: Global credentials (unrestricted)
======================================
Jenkins Job - Pipeline Script
/* this stage succeeds */
stage('SonarQube Analysis') {
    def scannerHome = tool 'Sonar-Prod';
    withSonarQubeEnv('Sonar-Prod') {
        sh """${scannerHome}/bin/sonar-scanner -X \
            -Dsonar.projectKey=ProjKey \
            -Dsonar.sources=src \
            -Dsonar.host.url=https://sonarqube_server_fqdn \
            -Dsonar.login=sonar_project_secret_text"""
    }
}
/* fails at waitForQualityGate */
stage("Quality Gate Status Check") {
    timeout(time: 1, unit: 'HOURS') { // just in case something goes wrong, the pipeline will be killed after a timeout
        // had previously tried waitForQualityGate() and waitForQualityGate(webhookSecretId: 'webhook_secret_text') with the same result
        def qg = waitForQualityGate(webhookSecretId: 'webhook_secret_text', credentialsId: 'sonar_project_secret_text') // reuse taskId previously collected by withSonarQubeEnv
        if (qg.status != 'OK') {
            error "Pipeline aborted due to quality gate failure: ${qg.status}"
        }
    }
}
=====================================
Logs from Jenkins Server - Job Running sonar-scanner and Quality Gate Check
SonarQube Scanner 4.2.0.1873
Java 1.8.0_242 Oracle Corporation (64-bit)
Linux 2.6.32-754.27.1.el6.x86_64 amd64
SonarQube server 7.9.1 - Community 7.9.1.27448
[CloudBees Jenkins Enterprise 2.204.3.7-rolling]
09:40:13.671 DEBUG: Upload report
09:40:13.931 DEBUG: POST 200 https://sonarqube_server_fqdn/api/ce/submit?projectKey=ProjKey | time=256ms
09:40:13.935 INFO: Analysis report uploaded in 264ms
09:40:13.938 INFO: ANALYSIS SUCCESSFUL, you can browse https://sonarqube_server_fqdn/dashboard?id=ProjKey
09:40:13.938 INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
09:40:13.938 INFO: More about the report processing at https://sonarqube_server_fqdn/api/ce/task?id=AXDt34Wae-uSoUyAgrS-
[Pipeline] waitForQualityGate
Checking status of SonarQube task 'AXDt34Wae-uSoUyAgrS-' on server 'Sonar-Prod'
org.sonarqube.ws.client.HttpException: Error 401 on https://sonarqube_server_fqdn/api/ce/task?id=AXDt34Wae-uSoUyAgrS-
It was a firewall issue. Communication from Jenkins to the SonarQube server was open, but not the other way round. This issue can be closed.
I created a pipeline which should trigger a job on a different Jenkins server.
I use the Remote Trigger Plugin and I am able to trigger the job with the following statement (currently this is the only statement in my pipeline):
triggerRemoteJob enhancedLogging: true, job: 'myJob', maxConn: 1, remoteJenkinsName: 'MyJenkins'
But after the job is triggered, the pipeline tries to connect to a job running on localhost, which obviously fails.
I tried to disable some options and found that it works if I disable blockBuildUntilComplete.
From the log I got the following with the option enabled:
################################################################################################################
Parameterized Remote Trigger Configuration:
- job: myJob
- remoteJenkinsName: myJenkins
- parameters:
- blockBuildUntilComplete: true
- connectionRetryLimit: 5
################################################################################################################
Triggering non-parameterized remote job 'http://x.x.x.x:8080/job/myJob'
Using globally defined 'Credentials Authentication' as user 'myUser' (Credentials ID 'myCredentials')
Triggering remote job now.
CSRF protection is disabled on the remote server.
Remote job queue number: 47
Remote build started!
Remote build URL: http://localhost:8080/job/myJob /8/
Remote build number: 8
Blocking local job until remote job completes.
calling remote without locking...
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #1 out of 5
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #2 out of 5
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #3 out of 5
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #4 out of 5
Connection to remote server failed , waiting for to retry - 10 seconds until next attempt. URL: http://localhost:8080/job/myJob /8/api/json/, parameters:
Retry attempt #5 out of 5
Max number of connection retries have been exeeded.
I changed the names and the IP address of my Jenkins server.
I must do some steps after my remote job finishes which depend on its results, so I must wait until the job is done.
Is there a way to do this without the block option, or what must I do to get the option working?
I checked the releases of the plugin and found an improvement in release 3.0.8 called "Extend POST timeout & avoid re-POST after timeout".
I reviewed the change and, since it looked relevant to our problem, I updated our plugin (from v3.0.7) to the current version.
Now the error no longer appears.
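For reference, a minimal scripted-pipeline sketch of the blocking call, with the parameters from the original snippet and the option from the log spelled out explicitly; the returned handle and its getBuildResult() method are assumptions to verify against the Parameterized Remote Trigger plugin's README for your version:

// Assumes a remote Jenkins named 'MyJenkins' is configured globally for the
// Parameterized Remote Trigger plugin, as in the question.
def handle = triggerRemoteJob(
    remoteJenkinsName: 'MyJenkins',
    job: 'myJob',
    maxConn: 1,
    enhancedLogging: true,
    blockBuildUntilComplete: true  // wait for the remote job to finish before continuing
)
// Assumption: the handle exposes the remote result (check the plugin README for your version),
// so follow-up steps can branch on it.
echo "Remote job finished with result: ${handle.getBuildResult()}"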
I am trying to run Sonar tests with Maven in my Jenkins pipeline project. The documentation says that if Sonar is configured globally and you use the withSonarQubeEnv step, environment variables with the globally configured Sonar properties are injected. So far so good.
http://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+Jenkins#AnalyzingwithSonarQubeScannerforJenkins-AnalyzingwithSonarQubeScannerforMaven
My pipeline config looks like:
def stash = '********'
def branch = 'dev'
stage('git') {
    node {
        git branch: branch, credentialsId: 'Buildserver-Private.key', url: stash
    }
}
stage('build') {
    node {
        //....
    }
}
stage('sonar') {
    node {
        withSonarQubeEnv('Sonar') {
            sh 'mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar'
        }
    }
}
The build fails because the Sonar plugin tries to connect to the default H2 database instead of the configured one. If I check the log, there are no Sonar properties passed to Maven.
Injecting SonarQube environment variables using the configuration: Sonar
[Pipeline] {
[Pipeline] tool
[Pipeline] sh
[***********] Running shell script
+ cd .
+ /var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3_3_9/bin/mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar
[INFO] Scanning for projects...
[...]
[INFO] --- sonar-maven-plugin:3.2:sonar (default-cli) @ *******.project.build ---
[INFO] User cache: /var/lib/jenkins/.sonar/cache
[INFO] SonarQube version: 4.5.6
[INFO] Default locale: "en_US", source code encoding: "UTF-8" (analysis is platform dependent)
12:23:17.971 INFO - Load global referentials...
12:23:18.071 INFO - Load global referentials done: 102 ms
12:23:18.102 INFO - User cache: /var/lib/jenkins/.sonar/cache
12:23:18.109 INFO - Install plugins
12:23:18.176 INFO - Install JDBC driver
12:23:18.183 INFO - Create JDBC datasource for jdbc:h2:tcp://localhost/sonar
Why is my config ignored? And what does the documentation mean when it says:
Since version 2.5 of the SonarQube Scanner for Jenkins, there is an
official support of Jenkins pipeline. We provide a 'withSonarQubeEnv'
block that allow to select the SonarQube server you want to interact
with. Connection details you have configured in Jenkins global
configuration will be automatically passed to the scanner.
It seems they are not ...
Does anybody have an idea what I am missing?
You are using an old version of SonarQube (4.5.6, the previous LTS) that requires passing DB connection parameters (URL, login, password) to the scanners, which is a security issue. withSonarQubeEnv does not propagate those settings, precisely to fix this flaw.
Since SonarQube 5.2, these parameters are no longer required, so you have to use a more recent version. I suggest you upgrade to the latest LTS version of SonarQube (5.6).
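After the upgrade, the 'sonar' stage from the question should work unchanged, because withSonarQubeEnv then only needs to hand the server URL and an authentication token to the scanner. A quick way to see what actually gets injected is a sketch like this (SONAR_HOST_URL is the variable name exported by recent versions of the SonarQube Scanner for Jenkins plugin, so treat it as an assumption to verify on your installation):

stage('sonar') {
    node {
        withSonarQubeEnv('Sonar') {
            // Assumed variable name; verify which variables your plugin version exports.
            sh 'echo "Injected SonarQube server URL: $SONAR_HOST_URL"'
            // With SonarQube >= 5.2 the Maven scanner no longer needs DB credentials,
            // only the connection details provided by the Jenkins global configuration.
            sh 'mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar'
        }
    }
}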