Single Jenkins job for SONAR analysis of multiple projects

I have a number of projects that need to be analysed by SONAR from Jenkins. These projects include Ant and Maven projects. I have created a separate job for each SONAR analysis in Jenkins.
Is it possible to have a single Jenkins job to which I can pass some parameters from each individual Sonar job and then see the dashboard?
If so, how do I go about it?

This solution is for Subversion and Maven.
Install the Parameterized Trigger Plugin
Create a Maven job for the SonarQube analysis, e.g. _common-sonar, with these settings:
Source Code Management: "Subversion", Repository URL: $PREVIOUS_SVN_URL, Check-out Strategy: "Always check out a fresh copy"
Build: Goals and options: install
Post-build Actions: "Sonar"
For the job you want to run analysis on add a Post-build Action "Trigger parameterized build on other projects" with these settings:
Projects to build: _common-sonar
Add Predefined parameters: Parameters: PREVIOUS_SVN_URL=${SVN_URL}
Now when the job-to-analyse completes it triggers the analysis job. The analysis job checks out the same SVN URL which was used by the first job.
This solution works without scripting or copying workspaces, but there are quite obvious limitations and non-ideal features:
the build command is always only mvn install
the SVN checkout may be from different revision than original build
checkout and build are always done from scratch
I didn't consider Ant at all here.
Improvement ideas are quite welcome!
Late improvement edit:
Instead of using a Maven build (in _common-sonar), you may also use SonarQube directly by invoking a Standalone SonarQube analysis.
In addition to the SVN URL, you can add parameters for the build tag and the project name to use in Sonar. Simply add
NAME=YOUR_PROJECT_NAME
BUILDTAG=$BUILD_TAG
beneath the PREVIOUS_SVN_URL parameter.
In your _common-sonar you can use it with ${NAME} and ${BUILDTAG}.
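A minimal sketch of the standalone analysis step in _common-sonar using those parameters (the sonar.* values and the src path are assumptions to adapt):
# Standalone SonarQube analysis driven by the parameters from the triggering job
sonar-runner \
  -Dsonar.projectKey=${NAME} \
  -Dsonar.projectName=${NAME} \
  -Dsonar.projectVersion=${BUILDTAG} \
  -Dsonar.sources=src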

For a similar need I once had, I created a single job which pulled the sources of several projects (each into its own sub-folder in the job's workspace).
Then I wrote a simple shell script that looped over all the directories and ran the Sonar analysis in each.
The job had the Sonar post-build plugin, which showed an aggregated report.
Unfortunately, I don't have an example as this was some years ago, but you can make it work.
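A rough sketch of what such a script could look like, assuming each project sits in its own sub-folder with a sonar-project.properties and the standalone runner is on the PATH:
#!/bin/sh
# Run a standalone SonarQube analysis in every project sub-folder of the workspace
for dir in */ ; do
    if [ -f "$dir/sonar-project.properties" ]; then
        ( cd "$dir" && sonar-runner )
    fi
done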
I hope this helps.

Related

Output sonarqube result to different server locations

Is there a way to output SonarQube results to 2 different server locations through a Jenkins configuration, using a single Jenkins build for each SonarQube output?
I know Jenkins has a concept of parameterized build where the build could be parameterized by the Sonar Server name.
I guess that you are talking about the parameterized plugin:
https://wiki.jenkins.io/display/JENKINS/Parameterized+Trigger+Plugin
This plugin lets you provide data when you trigger the build. This is a great plugin when your builds trigger each other and you need data from a previous build executed on another slave.
If you want a single build, and the Sonar Server Name is determined inside the build, you will need to find your way using Shell.
Get it at some point:
SONAR_NAME=$( .... )
and re-use it within the same build:
ssh $SONAR_NAME#....
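For example, a hedged sketch of that shell approach (sonar.host.url is the standard scanner property; where SONAR_NAME comes from is left to your build):
# Determine the target SonarQube server during the build...
SONAR_NAME=$(cat sonar_target.txt)   # hypothetical source of the name
# ...then point the analysis at it
sonar-runner -Dsonar.host.url="https://$SONAR_NAME"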

Pipeline to use artifacts from 2 projects associated by the same git branch name

The company where I work is evaluating Jenkins 2.71, in particular the pipeline and Blue Ocean plugins. We have also tested GoCD, and we need, as in GoCD, a way for a pipeline to automatically fetch the artifacts from 2 other pipelines (taking the last successful result of each of them). Here is our case.
We have these initial pipelines (build & run tests), which reflect 2 projects:
frontend, ~15 minutes
backend, ~10 minutes
I created a pipeline called configure (~1 minute) with, e.g., a parameter called customer-name, which takes the backend and frontend files and puts them together, then applies customer-specific configurations and customizations and produces deployable artifacts. Instead of "customer-name" I could also parallelize this job to create the artifacts for all customers at once, separated into different directories.
The next pipeline would deploy them to different test servers, separated per customer. This could also be part of the same configure pipeline; we still have to see how to put things together in Jenkins...
Ideally, I need the configure pipeline to be triggered automatically (or on demand) after each frontend or backend success, taking as input the last successful artifacts from those 2 pipelines. But it is not enough to take the last successful build; we need the git branch name as the dependency.
E.g. we have:
backend branches:
master
release/2017.2
frontend branches:
master
release/2017.2
In the pipeline editor, I found a Build Triggers option and set it as follows: Build after other projects are built > Projects to watch: frontend, backend > check "Trigger only if build is stable" (or, in my test environment full of failures, "Trigger even if the build is unstable").
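In the triggers directive of a declarative pipeline script, this setting appears to correspond roughly to an upstream trigger (project names as above; sketch only):
// Fire this pipeline when frontend or backend finishes
triggers {
    upstream(upstreamProjects: 'frontend,backend', threshold: hudson.model.Result.UNSTABLE)
}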
Searching further, I found Copy Artifact Plugin
But now the big question, how to fetch the last successful artifacts from these pipelines with the same git branch name?
Because we don't want to mix, e.g., a backend build of "release/2017.2" with frontend "master", it has to find the last successful build having the same relationship or parameter or whatever you want to call it; in our case the association is the git branch name.
Is it possible to achieve this? If yes, how?
The Copy Artifact plugin seems to work in a freestyle project. Would it work in a pipeline? That's also a concern...
Thanks
Yes, the Copy Artifact plugin does work in both freestyle and pipeline projects; pipeline uses the copyArtifacts step that I referenced in my comment. Note that if you go to the Pipeline Syntax link, it's kind of hidden: you have to first select "step: General Build Step" from the drop-down, then it will give you the Copy Artifact pipeline command builder.
I'm going to assume that your frontend and backend projects are built as multi-branch pipelines, as that would probably be easiest to maintain so that you don't have to keep creating new projects for every release. You can reference these projects from other projects by referencing <project name>/<branch name> (sometimes I've had to replace the / with %2f instead, I think mostly on freestyle projects).
You could then set up your configure project as a parameterized build (either pipeline or freestyle), say with a string parameter of PROJECT_BRANCH_NAME. Then put the following in your frontend/backend project pipeline scripts to trigger a build of your configure project:
// env.BRANCH_NAME is set automatically in multi-branch pipeline jobs
build job: 'configure', parameters: [[$class: 'StringParameterValue', name: 'PROJECT_BRANCH_NAME', value: env.BRANCH_NAME]]
Then you should just be able to make your configure project reference the frontend/%PROJECT_BRANCH_NAME% and backend/%PROJECT_BRANCH_NAME% (or ${env.PROJECT_BRANCH_NAME} in a pipeline script) when copying the artifacts.
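For reference, a sketch of the copy step inside the configure pipeline script (the copyArtifacts step is provided by the Copy Artifact plugin; project and parameter names follow the setup above):
node {
    // Copy the last successful artifacts from the matching branch of each project
    copyArtifacts(projectName: "frontend/${env.PROJECT_BRANCH_NAME}", selector: lastSuccessful())
    copyArtifacts(projectName: "backend/${env.PROJECT_BRANCH_NAME}", selector: lastSuccessful())
}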
Also, is there a particular reason why you're evaluating specifically Jenkins 2.7? 2.7 is a year old now, and there have been a few new LTS releases since then. I'd recommend staying reasonably up-to-date unless you know there's a specific reason you want 2.7.

what did jenkins actually build?

I created a freestyle project in Jenkins, in which I chose Git as the source code management (screenshot of the configuration omitted here).
That's pretty much my whole config. The repo you see in there is a public repo. Then I save the config and click Build Now.
It seems to work, based on the notification on screen, which says 'success'. But I have no idea what the heck Jenkins produced. I didn't instruct it what to build or how to build it. How does it know what I want? And let's say it did build something: where does it store the build? I didn't instruct it where to store the built file either. Can someone explain what is going on?
To actually build something you need to add something to the Build section in the project configuration. For a JavaScript project it might look something like:
npm install
npm run test-coverage
npm run linter
npm run complexity
where each item after run is a script in your package.json. Then you can add plugins to read the outputs of those actions, for example:
Clover test coverage publisher
TAP (Test) results publisher
HTML Publisher for publishing static analysis results
Checkstyle publisher for linting results
This allows you to pass and fail builds based on certain test criteria, and is where continuous integration starts to shine.
In a Jenkins job you have several sections: pre-build actions to prepare the environment, SCM to check out from source control, a Build section to run your build steps, and post-build actions to run after the build section.
If you defined only the SCM section, all your job did was check out your sources from the source control you provided; the status of that action is SUCCESS.
Don't forget to check the console output of the job that ran to see which steps were executed.

Can I store Jenkins configuration in the project repo (like Travis CI)?

How do you maintain the Jenkins job configuration in SCM alongside the source code?
As source code evolves, so does the job configuration. It would be ideal to be able to keep the job configuration in SCM, for the following benefits:
easy to see a history of the changes, including the author and a description
able to rebuild an old branch/tag by checking out that revision, with the build just working
not having to scroll through the UI to find the appropriate section and make a change
I see there is a Jenkins Job Builder plugin. I prefer a solution along the lines of Travis CI, where the job configuration is maintained in a YAML file (.travis.yml). Any good suggestions?
Note: Most of our projects are using Java & Maven.
Update 2016: Jenkins now provides a Jenkinsfile, which is exactly this. It is supported by the core Jenkins developers and actively developed.
Benefits:
Creating a Jenkinsfile, which is checked into source control, provides a number of immediate benefits:
Code review/iteration on the Pipeline
Audit trail for the Pipeline
Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.
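For illustration, a minimal declarative Jenkinsfile for a Maven project (matching the Java & Maven note in the question) could be as small as:
// Jenkinsfile, checked in at the repository root
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean install'
            }
        }
    }
}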
I've written a plugin that does this!
Other than my plugin, you have some (limited) options with existing Jenkins plugins:
Use a single test script
If you configure your Jenkins to simply run:
$ bash run_tests.sh
You can then check a run_tests.sh file into your SCM repo, and you're now tracking changes to how you run tests. However, this won't track the configuration of any plugins.
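where run_tests.sh itself might be no more than (illustrative, given the Maven projects mentioned in the question):
#!/bin/sh
# Fail the build as soon as any step fails
set -e
mvn -B clean verify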
Similarly, if you're using Maven, the Maven Project Plugin simply runs a specified goal for your repo.
The Literate Plugin does allow Jenkins to run the commands in your README.md, but it hasn't yet been released.
Track changes to Jenkins configuration
You can use the SCM Sync configuration plugin to write configuration changes to SCM, so you at least have a persistent record. This is global, across all projects on your Jenkins instance.
There's also the job config history plugin, which stores config history on the filesystem.
Write Jenkins configuration from SCM
The Jenkins job builder project you mentioned lets you check config changes into SCM and have them applied to your Jenkins instance. Again, this is across all projects on your Jenkins instance.
Write Jenkins configuration from another job
You can use the Job DSL Plugin with a repo of groovy scripts. Jenkins then polls that repo, executes the groovy scripts, which create job configurations.
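A tiny illustrative Job DSL seed script (job name and repository URL are made up):
// Defines a freestyle job that checks out a repo and runs a Maven goal
job('example-build') {
    scm {
        git('https://example.com/example.git')
    }
    steps {
        maven('clean install')
    }
}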
Discussions
Issue 996 (now closed) discusses this, and it has also been discussed on the mailing list: 'Keeping track of Hudson's configuration changes', and 'save hudson config in svn'.
You can do this all, and a lot more, with the Workflow plugin. Workflow is one of the most advanced techniques for using Jenkins and it has very strong support.
It is based on a Groovy DSL and allows you to keep the whole configuration in the SCM of your choice (e.g. Git, SVN...).

What is the advantage of the Jenkins SonarQube plugin over adding sonar:sonar to maven build step

I am using Maven as a build tool and Jenkins as a CI tool. Currently I have a Jenkins job configured with a Maven build step.
I started using SonarQube and was wondering what is the advantage of using the Jenkins SonarQube plugin and configuring the SonarQube analysis as a post-build-action over simply adding sonar:sonar to the goals of my existing Maven build step.
Thanks and best regards,
Ronald
You can save a lot of configuration. If you use the Jenkins Sonar plugin you can centralize the database and Sonar credentials; if you decide to execute sonar:sonar in each Jenkins job instead, you will have to configure each job with the same credentials.
I just found: Why use sonar plugin for Jenkins rather than simply use maven goal "sonar:sonar"?
And to add one reason: using the Jenkins SonarQube plugin one can specify "Skip if triggered by SCM Changes". This is nice if you trigger your Jenkins job for each commit but only want to do a SonarQube analysis at a scheduled time, e.g. once per night.
And here is a summary of the points made by "emelendez":
Centralize database credentials and Sonar credentials
Use the Jenkins Sonar plugin, configuring SonarRunner, for non-Java projects
I've just changed to maven-sonar-plugin from the Jenkins SonarQube plugin to avoid divergence of information between the pom.xml and sonar-project.properties.
For example, developers elsewhere had bumped the project version number in the pom.xml, but they don't use the Jenkins builds and didn't care about the sonar-project.properties (or probably understand it). By switching to the maven plugin instead, the project version is defined once and referenced in the sonar property set within the pom.
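For illustration, the Maven route then needs no separate properties file; the analysis is launched with the standard goal (the host URL here is an assumption):
mvn sonar:sonar -Dsonar.host.url=https://sonar.example.com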
The downside is that I no longer have the SonarQube link from the project's Jenkins page.
I'm not sure where the responsibility might lie for adding this link back for projects using maven-sonar-plugin... The link is "owned" by the Jenkins SonarQube plugin, but that plugin is not being used here. Meanwhile, the maven-sonar-plugin component integrates with Maven, not Jenkins.
Something would need to observe the build and extract the SonarQube link, which is emitted as an "[INFO] ANALYSIS SUCCESSFUL, you can browse http://..." line in the log.
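A crude sketch of such an extraction from the console log (log path and exact line wording are assumptions):
# Pull the dashboard URL out of the analysis output
grep -o 'ANALYSIS SUCCESSFUL, you can browse http[^ ]*' build.log | grep -o 'http[^ ]*'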
