JHipster CI (Jenkins 2 + SonarQube) -> Memory Heap - Docker

I am trying to scan JHipster-generated code with SonarQube during a Jenkins build.
My configuration is the following:
GitLab as a Docker container
Jenkins 2 as a Docker container
SonarQube 5.4 (higher is not permitted with MariaDB, right?) as a Docker container
This is a fresh install and all systems communicate with each other.
When running builds, Jenkins warns about duplicate references in the 'bower_components' directory.
WARN: Too many duplication references on file
[moduleKey=Challenge1:0.1-HENDRIX,
relative=src/main/webapp/bower_components/angular-i18n/angular-locale_vun-tz.js,
basedir=/var/jenkins_home/workspace/Challenge-1] for block at line 20.
Keep only the first 100 references.
My problem is that I do not understand why it warns about this, since exclusions have been set at several levels:
1) Within pom.xml
<sonar.exclusions>src/main/webapp/content/**/*.*, src/main/webapp/bower_components/**/*.*, target/www/**/*.*</sonar.exclusions>
2) Within Jenkinsfile
node {
...
stage 'scan'
sh "${sonarqubeScannerHome}/bin/sonar-scanner -e -Dsonar.host.url=http://sonarqube:9000 -Dsonar.projectKey=Challenge1:'0.1-HENDRIX' -Dsonar.projectName='Challenge 1' -Dsonar.projectVersion='0.1-HENDRIX' -Dsonar.sources='src/' -Dsonar.coverage.exclusions=**/bower_components/**"
}
3) In the SonarQube UI itself (Analysis Scope).
EDIT:
This "non-exclusion" was originally leading to a memory dump that I could solved by extending the memory on sonarqube (sonar.properties).
sonar.web.javaOpts=-Xmx2048m -Xms512m -XX:MaxPermSize=160m -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true
However, I am still afraid that increasing memory isn't the appropriate solution, and I should be able to exclude some parts of the code. What should I do to remove the Bower components from the scan analysis? (I probably did something wrong.)

My mistake: I was using -Dsonar.coverage.exclusions instead of -Dsonar.global.exclusions
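For reference, a sketch of the corrected scan stage (same project values as above, only the exclusion property changed):

node {
    ...
    stage 'scan'
    // sonar.global.exclusions removes bower_components from the whole analysis,
    // not just from coverage measurement as sonar.coverage.exclusions did.
    sh "${sonarqubeScannerHome}/bin/sonar-scanner -e -Dsonar.host.url=http://sonarqube:9000 -Dsonar.projectKey=Challenge1:'0.1-HENDRIX' -Dsonar.projectName='Challenge 1' -Dsonar.projectVersion='0.1-HENDRIX' -Dsonar.sources='src/' -Dsonar.global.exclusions=**/bower_components/**"
}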

Related

Jenkins pipeline - how to read a file from outside the workspace

I have a script that should run on both Linux and Windows agents.
The script reads a config file sitting on a network drive.
It gets worse: we have two different Jenkins masters, one on Docker Ubuntu and one on Windows. They run different jobs, but with the same script.
So now:
Using script.readFile is out of the question because the file is outside the workspace.
Using Groovy's File(path).text is also problematic because the path (the mounts) differs between Windows and Linux (the Jenkins masters).
There is a shared environment variable across all machines that points to the right mount, but with Groovy's File this doesn't work: "${SOME_ENV_VAR}/file" is not expanded.
Is there a way to use the Jenkins pipeline to read a file outside the workspace? That would be the best solution.
Or is there some other solution you can think of?
Thanks
using script.readFile is out of the question because the file is outside of workspace.
Not really. Assuming you are referring to the Jenkins readFile step, you can still use it. It just takes a whole lot of dots:
def config = readFile "../../../../mnt/config/my_config.txt"
You'd have to figure out the exact number of dots yourself.
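Alternatively, a sketch (not from the original answer) that uses the shared SOME_ENV_VAR mount variable from the question, assuming it is defined on every agent and that readFile accepts the resulting absolute path:

node {
    // The pipeline resolves env.SOME_ENV_VAR on whichever agent runs this
    // block, unlike new File("${SOME_ENV_VAR}/...") in plain Groovy.
    // my_config.txt is a placeholder file name.
    def config = readFile "${env.SOME_ENV_VAR}/my_config.txt"
    echo "read ${config.length()} characters of configuration"
}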

Coverity scan while building in Docker container

I have a custom Docker container in which I build and test a project; it is integrated with Travis CI. Now I want to run the Coverity Scan analysis from within Travis CI as well, but the tricky part is (if I understand the Coverity docs correctly) that I need to run the build. The build, however, runs inside the container.
Now, according to cov-build --help:
The cov-build or cov-build-sbox command intercepts all calls to the
compiler invoked by the build system and captures source code from the
file system.
What I've tried:
cov-build --dir=./cov docker exec -ti container_name sh -c "<custom build commands>"
With this approach, however, Coverity apparently does not catch the calls to the compiler (which is quite understandable given the Docker philosophy) and emits no files.
What I do not want (at least while there is hope for a better solution):
to install locally all the stuff needed to build inside the container, only to be able to run the Coverity scan;
to run cov-build from within the container, since:
I believe this would increase the Docker image size significantly;
I use the Travis CI addon for Coverity Scan and this would complicate things a lot.
The Travis CI part is just FWIW; I tried all of that locally and it doesn't work either.
I would be thrilled to get any suggestions for this problem. Thank you.
Okay, I sort of solved the issue.
1) I downloaded and modified (just a few modifications to fit my environment) the script that Travis uses to download and run the Coverity scan.
2) Then I installed Coverity on the host machine (in my case the Travis CI machine).
3) I ran the Docker container and mounted the directory where Coverity is installed using docker run -dit -v <coverity-dir>:<container-dir>:ro .... This way I avoided increasing the Docker image size.
4) I executed the cov-build command and uploaded the analysis using another part of the script, directly from the Docker container.
Hope this helps someone struggling with a similar issue.
If you're amenable to adjusting your build, you can change your "compiler" to be cov-translate <args> --run-compile <original compiler command line>. This is effectively what cov-build does under the hood (minus the run-compile since your compiler is already running), and should result in a build capture.
Here is the solution I use:
In "script", "after_script" or whichever phase of the Travis job lifecycle you want:
1) Download the Coverity tool archive using wget (the complete command to use can be found in your Coverity Scan account)
2) Untar the archive into a coverity_tool directory
3) Start your Docker container as usual, without needing to mount the coverity_tool directory as a volume (in case you've created coverity_tool inside the directory from which the Docker container is started)
4) Build the project using the cov-build tool inside Docker
5) Archive the generated cov-int directory
6) Send the result to Coverity using a curl command
Step 6 should be feasible inside the container, but I usually do it outside.
Also, don't forget that COVERITY_SCAN_TOKEN needs to be encrypted and exported as an environment variable.
A concrete example is often more understandable than a long text; here is a commit that applies the above steps to build and send the results to Coverity Scan:
https://github.com/BoubacarDiene/NetworkService/commit/960d4633d7ec786d471fc62efb85afb5af2bed7c

Artifact Deployer Plugin Alternative

We are using the Artifact Deployer plugin in a Jenkins freestyle job, and recently Jenkins has been displaying warnings that the plugin is no longer safe to use.
The plugin is no longer being distributed, according to their wiki site here.
Does anyone know of any alternative plugins, or ways in a freestyle job to copy content from one location to another (on the same node)?
Thanks
To copy all the contents of one directory to another directory on a Linux system, use the following command:
cp -a /path/to/source_dir/. /path/to/dest_dir/
You can add an Execute shell step in your job configuration, in the Build section, and put the above command into it.
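If the job is ever migrated to a pipeline, the same copy is a single sh step (a sketch reusing the example paths above):

node {
    // Copy the directory contents, preserving attributes, on the same node.
    sh 'cp -a /path/to/source_dir/. /path/to/dest_dir/'
}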

Jenkins Gradle "Could not reserve enough space for object heap"

I'm trying to run a Gradle build task on Jenkins, but Gradle fails to start.
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
The -Xmx2048m parameter is apparently being used by the Java process that runs Gradle.
If this parameter is the problem, where should I change it? (Jenkins itself is configured with -Xmx1024m.) I'm running this on a device with 1 GB of RAM (about 500-700 MB free before running the task).
Full log: http://pastebin.com/BBsjp5pZ
I had to modify the gradle.properties file in the project folder.
Original settings:
org.gradle.jvmargs=-Xmx2048m
New settings:
org.gradle.jvmargs=-Xmx512m -Xms100m
References:
Where should I put gradle.properties in Jenkins
https://docs.gradle.org/current/userguide/build_environment.html
The Jenkins manual talks about GRADLE_OPTS:
Gradle build steps: You can set -Xmx or -XX:MaxPermSize by adding a GRADLE_OPTS global environment variable in the Jenkins global configuration. To do this, click Manage Jenkins, then Configure System. In the Global properties section, check the Environment variables checkbox, then add a new environment variable called GRADLE_OPTS with the value set appropriately, similar to the screen shot above regarding MAVEN_OPTS.

configuring system properties for Jenkins service

Background
I have the following Jenkins configuration:
Ubuntu machine
Jenkins installed using apt-get and started as a service (service jenkins start).
To this point I have not made any modifications to Jenkins config.
We have several Ant projects for which I want to publish Javadocs using Jenkins.
After configuring the Javadoc plugin, I quickly hit this issue where only the Javadoc frames are displayed, without any content.
Some reading (here and here) told me that I need to configure Jenkins' Content Security Policy, and that this is done by modifying system properties passed to Jenkins.
However, despite digging around I have not found clear docs on how to pass these system properties to the Jenkins service. How do I do that?
Answering my own question.
To set system properties for the Jenkins service:
Steps
Stop Jenkins (service jenkins stop). You will need root privileges.
Edit the /etc/default/jenkins file.
Add an additional line for the JAVA_ARGS that you want to pass.
JAVA_ARGS="-Dhudson.model.DirectoryBrowserSupport.CSP=\"your CSP configuration here\""
Start Jenkins (service jenkins start).
Explanation
Look at /etc/init.d/jenkins for lines similar to:
NAME=jenkins
SCRIPTNAME=/etc/init.d/$NAME
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
These tell us that the Jenkins daemon will look for a file named /etc/default/jenkins and, if present, source it (the "." command).
If you set $JAVA_ARGS in /etc/default/jenkins, it will be substituted into the following line, which appears later in the /etc/init.d/jenkins file:
$SU -l $JENKINS_USER --shell=/bin/bash -c "$DAEMON $DAEMON_ARGS -- $JAVA $JAVA_ARGS -jar $JENKINS_WAR $JENKINS_ARGS" || return 2
Notes
Even after you do the above, the Javadoc may not load properly. Try doing a hard refresh (Ctrl-Shift-R on Chrome).
As detailed in the Jenkins docs (https://wiki.jenkins-ci.org/display/JENKINS/Configuring+Content+Security+Policy), there is a temporary way to do this as well. Read that page and try to understand the implications well.
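For reference, the temporary approach described on that page amounts to running something like the following in the Script Console (Manage Jenkins -> Script Console); it only lasts until the next Jenkins restart:

// Relax the CSP header until the next restart. An empty value removes the
// header entirely, which is unsafe on a publicly reachable Jenkins.
System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "")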
Changing the Content Security Policy has serious implications, especially if your Jenkins is public. It's worth the effort to understand exactly which policies you are modifying.
