Creating views of sonar analysis that cut across multiple modules/packages? - code-coverage

I have a jenkins build that runs sonar analysis on my code base, which is a multi-module maven project. The sonar results allow me to view coverage and issues by drilling down from the project as a whole to the modules, then the packages in those modules, then the classes in those packages.
Is there a way to create different views of the sonar analysis that span different aspects of the project, e.g. "show me the results for packages A,B,C in module M1 and packages X,Y,Z in module M2"?
All this information is (I assume) stored in the database for the project. There may be a plugin that already does this, or maybe I need to write a plugin of my own that queries the database.

I believe the Views plugin should answer your need: http://www.sonarsource.com/products/plugins/governance/portfolio-management/.
If you want an example of how to use it, here is a post on my blog: http://qualilogy.com/en/your-own-quality-model/.
Not recent, but you'll get the idea.
Regards.

Related

Jenkins - aggregate test results (SonarQube/JUnit) from multiple projects

I am currently working on a project that consists of several applications which, in our infrastructure, are tested with SonarQube and JUnit. At the moment I receive test reports for each subproject (application) separately.
My question is: what tools or solutions do you use to generate a report across multiple independent Maven projects, aggregating the test results generated by JUnit/Maven/Surefire?
So far I have come across the SonarQube Governance plugin, but I would like to know the alternatives.
The Governance plugin ($) is the only solution to aggregate projects. However, if you're talking about aggregating multiple test reports within a single project, that functionality is native in SonarQube 6.2. Further, for projects with sub-projects, the aggregation of sub-project data should happen naturally in the parent project.
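To illustrate that last point with a minimal sketch (the coordinates and module names are placeholders, not taken from the question): if the applications can be gathered as modules under one aggregator POM, a single analysis run from the parent should pick up all of their test results together.
<!-- aggregator pom.xml; placeholder coordinates, adjust to your applications -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>apps-aggregator</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <!-- each existing application becomes a module of the aggregator -->
    <module>app-one</module>
    <module>app-two</module>
  </modules>
</project>
Running something like mvn clean verify sonar:sonar from that parent then analyses everything as one project, with the sub-project data rolled up as described above.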

Jenkins with many solutions in one repository

I am trying to set up Jenkins to automate a build. We have one enormous repository with approximately 100 solution files. To build this repository we have a build program which finds all the solutions and builds them in a specified order.
I would like to change that to use Jenkins. Is there a way to set up MSBuild and specify a build order for all of these solution files?
p.s. I am trying to avoid creating one mammoth solution file which contains all of the projects.
If in fact all of your projects use ProjectReferences to other VS projects, you should be able to use MSBuild to extract all those references via the ResolveProjectReferences MSBuild task. You can then write an MSBuild script that builds those dependent projects in order. There's a pretty good example of this here. The example given there goes so far as to construct a specialized MSBuild task that builds a dependency graph, which I must say is pretty cool.
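As a rough sketch of that idea (the driver file name and project layout are invented for illustration, not a drop-in script): a small MSBuild project can collect every project file and hand the list to the MSBuild task, letting each project's ProjectReferences pull in its dependencies in the right order.
<!-- build-all.proj: hypothetical driver script; adjust the include pattern to your repository -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="BuildAll">
  <ItemGroup>
    <!-- collect every project file in the tree -->
    <AllProjects Include="**\*.csproj" />
  </ItemGroup>
  <Target Name="BuildAll">
    <!-- building each project also builds whatever its ProjectReferences point to -->
    <MSBuild Projects="@(AllProjects)" Targets="Build" BuildInParallel="true" />
  </Target>
</Project>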

Combine cobertura code coverage reports of three separate projects

I have three projects which are stored in three separate repositories. Each of them is an individual Maven project. I wonder whether there is a way to aggregate the three reports into one?
I took a look at the Cobertura aggregate function, but it seems it can only handle sub-modules of a project.
Does anybody have any suggestions?
The Maven plugin goal cobertura:cobertura supports an aggregate parameter that would, I suppose, work for all the projects in the reactor.
But you seem to suggest the projects might not be in the same structure/reactor, and I wouldn't know how to do it with Maven per se. However, you can easily do it with a little Ant script that can be integrated into your Maven structure.
The Cobertura Ant library has a merge task that can merge a number of .ser files (generated by the runtime execution of your instrumented code). This produces a combined .ser file from which you can generate an XML or HTML report.
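A minimal sketch of that Ant approach (the classpath, .ser locations and source directories are assumptions to adapt):
<!-- merge-coverage.xml: sketch only -->
<project name="merge-coverage" default="report">
  <path id="cobertura.classpath">
    <fileset dir="tools/cobertura" includes="*.jar"/>
  </path>
  <taskdef classpathref="cobertura.classpath" resource="tasks.properties"/>

  <target name="merge">
    <!-- combine the per-project coverage data files into one -->
    <cobertura-merge datafile="target/combined.ser">
      <fileset dir="." includes="**/cobertura.ser"/>
    </cobertura-merge>
  </target>

  <target name="report" depends="merge">
    <!-- generate a single HTML report from the merged data -->
    <cobertura-report format="html" datafile="target/combined.ser" destdir="target/coverage-report">
      <fileset dir="project-a/src/main/java" includes="**/*.java"/>
      <fileset dir="project-b/src/main/java" includes="**/*.java"/>
    </cobertura-report>
  </target>
</project>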
Let me know if you need more pointers.
In another question, a responder gave a link to a Python script they had written that did what you are asking. I moved that "xml combiner" to a gist that is located here.

Multiple classifiers in Maven

Being a Maven newbie, I want to know if it's possible to use multiple classifiers at once; in my case it would be for generating different jars in a single run. I use this command to build my project:
mvn -Dclassifier=bootstrap package
Logically I would think that this is possible:
mvn -Dclassifier=bootstrap,api package
I am using Maven 3.0.4
Your project seems like a candidate for refactoring into a couple of what Maven calls "modules". This involves splitting the code into separate projects within a single directory tree, where the topmost level is normally a parent or aggregator POM with <packaging>pom</packaging> and a <modules/> list containing the sub-project directory names.
Then, I'd advise putting the API interfaces/exceptions/whatnot into an api/ subdirectory with its own pom.xml, and putting the bootstrap classes into a bootstrap/ subdirectory with its own pom.xml. The top-level pom.xml would then list the modules like this:
<modules>
  <module>api</module>
  <module>bootstrap</module>
</modules>
Once you've refactored the project, you will probably want to add a dependency from the bootstrap module to the api module, since I'm guessing the bootstrap will depend on interfaces/etc. from the api.
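For instance, the bootstrap/pom.xml would declare something like this (the group IDs and versions are placeholders):
<!-- bootstrap/pom.xml, abbreviated; placeholder coordinates -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>my-app-parent</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <artifactId>bootstrap</artifactId>

  <dependencies>
    <!-- bootstrap uses the interfaces/exceptions from the api module -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>api</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</project>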
Now, you should be able to go into the top level of the directory structure and simply call:
mvn clean install
This approach is good because it forces you to think about how different use cases are supported in your code, and it makes dependency cycles between classes harder to miss.
If you want an example to follow, have a look at one of my github projects: Aprox.
NOTE: If you have many modules dependent on the api module, you might want to list it in the top-level pom.xml in the <dependencyManagement/> section, so you can leave off the version in submodule dependency declarations (see Introduction to the Dependency Mechanism).
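That section would look roughly like this in the top-level pom.xml (placeholder coordinates again):
<!-- top-level pom.xml, fragment -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>api</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
Submodules can then depend on api with just the groupId and artifactId and inherit the version from the parent.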
UPDATE: Legacy Considerations
If you can't refactor the codebase for legacy reasons, etc. then you basically have two options:
Construct a series of pom.xml files in an empty multimodule structure, and use the build-helper-maven-plugin along with source includes/excludes to fragment the codebase and allocate the classes to different modules out of a single source tree.
Maybe use a plugin like the assembly plugin to carve up the target/classes directory (${project.build.directory}) and allocate classes to the different jars. In this scenario, each assembly descriptor requires an <id/> and by default this value becomes the classifier for that assembly jar. Under this plan, the "main" jar output will still be the monolithic one created by the Maven build. If you don't want this, you can use a separate execution of the assembly plugin, and in the configuration use <appendAssemblyId>false</appendAssemblyId>. If the output of that assembly is a jar, then it will effectively replace the old output from the jar plugin. If you decide to pursue this approach, you might want to read the assembly plugin documents to get as much exposure to different examples as you can.
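To make the second option a bit more concrete, here is a sketch of such an assembly descriptor; its <id/> becomes the jar's classifier (the package path is invented for illustration):
<!-- src/main/assembly/api.xml: produces a jar with the "api" classifier -->
<assembly>
  <id>api</id>
  <formats>
    <format>jar</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <fileSet>
      <!-- pick only the api classes out of target/classes -->
      <directory>${project.build.outputDirectory}</directory>
      <outputDirectory>/</outputDirectory>
      <includes>
        <include>com/example/api/**</include>
      </includes>
    </fileSet>
  </fileSets>
</assembly>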
Also, I should note that in both cases you would be stuck with manipulating the list of things produced by using a set of profiles in the pom in order to turn on/off different parts of the build. I'd highly recommend making the default, un-qualified build one that produces everything. This makes it more likely for things like the release plugin to catch everything you want to release, and up-rev versions, etc. appropriately.
These solutions are usually what I promote as migration steps when you can't refactor the codebase all at once. They are especially useful when migrating from something like an Ant build that produces multiple jars out of a single source tree.

Ant/Ivy for project building

I am considering switching a Maven project that I manage to Apache-Ant/Ivy. I need more control over the build process and am getting very frustrated with Maven. Please no comments about how great Maven is. My question is about Ivy.
I would like to set up a "standard" Ant build template that can later be used for other projects with minimal changes.
I will set up a central "enterprise" repository where we can place third-party libraries that are not available in the public Maven repositories (e.g. commercial libraries, Sun libraries, proprietary libraries, etc.). This enterprise repository will be available on our local LAN, but may not be available from outside the office.
Each developer will have a private repository in ~/.ivy/repository. I would like the Ant build to automatically update this private repository with changed versions of libraries from the enterprise repository.
In ~/.ivy/ant, I plan on placing "standard" modules for including in the individual project build.xml files, using the include task in Ant 1.8. These modules will provide things like Scala and Clojure compilation targets with different versions for different Scala and Clojure versions (e.g.: scala-compile-2.9.1.xml, clojure-compile-1.3.xml, etc.) The build modules will be available in the enterprise repository and should be updated automatically in the private repositories if they change.
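That part of the plan might look roughly like this in a project's build.xml (the file names follow the question; the target names inside the shared modules are assumptions):
<!-- project build.xml: sketch of pulling in the shared modules with Ant 1.8's include task -->
<project name="my-project" default="compile">
  <include file="${user.home}/.ivy/ant/scala-compile-2.9.1.xml" as="scala"/>
  <include file="${user.home}/.ivy/ant/clojure-compile-1.3.xml" as="clojure"/>

  <!-- assuming each shared module defines a "compile" target, reachable under its prefix -->
  <target name="compile" depends="scala.compile, clojure.compile"/>
</project>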
Each project will follow a standard Maven directory structure: ${project}/src/main/java, ${project}/target/classes, etc.
In the past, I tried using Ivy but the Ant build files got to be very large (> 500 lines for the template build file) and hard to manage/edit. I am hoping that by putting standard targets in their own build modules in the ~/.ivy/ant directory, I can avoid that code bloat.
Can this be done? Am I way off base? The only documentation I can find on Ivy is at the Apache web site (http://ant.apache.org/ivy). Is there any other documentation available, including books?
Dividing the template build file into includable helper files is quite a sensible idea. Personally, I am currently switching a really large project from plain Ant (no dependency management at all, only copying files from FTP) to an Ant/Ivy solution. Here is the approach I took: I have one file with milestone targets, e.g. ready-to-compile, compiled, ready-to-archive, archived, and so on; I think you get the idea. I have configured dependencies between all of these targets (dependencies in Ant terms, don't get me wrong): compiled depends on ready-to-compile, ready-to-compile depends on initialized, and so forth. These targets have no body; they are meant to be included in every build file of every module of your multi-module project. Their sole purpose is to maintain the STATE of the build, because with Ant's import/include mechanism things become rather tricky and it is hard to know which target was overridden and when it will run. With this file I can easily hook into the build at every sensible milestone. Say I want one module to compile help files with an external exe: no problem, in that project ready-to-archive simply depends on the target that compiles the help. And since these milestone targets are included, I can override only some of them; all the others preserve the default way of building the project.
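A sketch of such a milestones file, with the target names used above and intentionally empty bodies:
<!-- milestones.xml: empty "state" targets, included by every module's build file -->
<project name="milestones">
  <target name="initialized"/>
  <target name="ready-to-compile" depends="initialized"/>
  <target name="compiled" depends="ready-to-compile"/>
  <target name="ready-to-archive" depends="compiled"/>
  <target name="archived" depends="ready-to-archive"/>
</project>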
Another part of my strategy is mixin build files, one for every specific area. For example, I have a file for Ivy where I put the initializing, resolving and publishing targets, and so on. When I want to use Ivy, I just include this file and hook the dependency management into my milestone targets. If the build is a typical one, I only include this file and get convention-over-configuration behaviour out of the box. How? Simply by combining it with other mixins; mixins may include other mixins they depend on. So each mixin is a reusable part of my build strategy, a single-concern unit, much like in OOP. In your case it would be a Scala mixin with targets specific to the Scala stuff.
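As an illustration of such a mixin (the settings file location and retrieve pattern are assumptions), the Ivy file might simply wrap the standard Ivy Ant tasks in targets that the milestone targets can depend on:
<!-- ivy-mixin.xml: sketch; assumes ivy.jar is available to Ant -->
<project name="ivy-mixin" xmlns:ivy="antlib:org.apache.ivy.ant">
  <target name="ivy-init">
    <ivy:settings file="${user.home}/.ivy/ivysettings.xml"/>
  </target>

  <target name="ivy-resolve" depends="ivy-init">
    <!-- resolve the module's ivy.xml and copy the artifacts locally -->
    <ivy:resolve/>
    <ivy:retrieve pattern="lib/[conf]/[artifact]-[revision].[ext]"/>
  </target>
  <!-- publishing targets and so on would live here as well -->
</project>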
Then I have a delegate.xml that delegates common build activities to the child projects. It has dist, all, test and whatever else you want for a multi-module project. The build order is computed with the Ant/Ivy buildlist task.
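Roughly, the buildlist part of such a delegate.xml looks like this (module locations assumed; the Ivy namespace is set up as in the mixin above):
<!-- delegate.xml fragment: let Ivy compute the child build order, then delegate -->
<target name="dist">
  <ivy:buildlist reference="ordered-builds">
    <fileset dir="." includes="*/build.xml"/>
  </ivy:buildlist>
  <!-- run the dist target of every child project, in dependency order -->
  <subant target="dist" buildpathref="ordered-builds"/>
</target>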
There are also some other files, but these are the strategically basic ones that let me have a reusable and maintainable build for this BIG and VERY conservative project. So, if you are interested in the details, don't be shy and contact me. I will be very pleased to help, because the Ivy docs are really complicated and incomplete.
EDIT: About books: Ant in Action may help you. I took several ideas from that book and really highly recommend that everyone read it; it covers Ivy as well. As for the Ivy docs, sorry, they are all that is available. But when I was struggling with this cumbersome Ivy+Ant setup I found several interesting articles on private blogs, so those may fill the gap to some extent.
