I've been vetting static analysis tools and have recently come across both Atlassian Clover and SonarQube. The two products seem remarkably similar and virtually identical from a server perspective, yet I can't find a good comparison of them online.
I've also been vetting their IntelliJ plugins, and this is where I see vast differences between the two. Clover's IDE integration is amazing, pointing out the exact lines of code that aren't covered by unit tests. However, the SonarQube server has this functionality as well, and I'm not sure the $300/person cost of Clover is worth the IDE convenience alone.
Sonar's plugin simply seems to point out code issues in the IDE, which is good, but IntelliJ already has functionality for this. Also, does Clover lack this in its plugin, or do I simply not see it because I haven't put the Clover plugin in front of a server yet?
Lastly, I've also seen that Sonar can consume reports generated by Clover. Does anyone have any experience with this? Does the SonarQube server sufficiently replace the Clover server by utilizing these reports? If not, what does Sonar lack?
For reference: http://docs.codehaus.org/display/SONAR/Clover+Plugin
Some background: The product being analyzed is a Java web project being built with Maven. Both tools seem to have appropriate Maven integration.
Disclaimer: I'm a Clover developer at Atlassian.
But I'll try to be as objective as possible, of course.
Let me emphasize one difference between Clover and Sonar first:
Clover is mainly a code coverage tool. It tracks both total coverage and per-test coverage. It has some code metrics in addition to this, but it's not a typical static code analysis tool like Checkstyle, FindBugs or PMD.
Sonar (from my perspective) is mainly a data aggregation tool, which can collect various kinds of data (like code coverage, static analysis results, code metrics) from various tools and present them in one place.
What the two tools have in common is that both of them can create rich reports.
Having said this, let me answer your questions.
Clover's IDE integration is amazing, pointing out exact lines of code that aren't covered by unit tests [...] but I'm not sure the $300/person cost of Clover is worth this IDE convenience.
You have to answer that yourself :-) A few things worth considering:
How do your developers run unit tests - do they run them in the IDE before committing? Do you have a "green build" policy? If both answers are yes, then having Clover in IDEA may be worth it.
How long does running the unit tests in the IDE take? How frequently are they launched? If they take a long time and are launched frequently, then Clover's Test Optimization feature in IDEA could be interesting.
Do you have your tests split into several build plans running on a CI server? Running in specific environment configurations? In that case, server reports could be more valuable than the ones in the IDE.
Do your developers prefer to see code coverage directly in their IDEs, or would they rather click through the HTML report in a browser?
Do you expect a productivity boost for your team from having coverage highlighting on source code in the IDE? How much? Does 'time saved * salary > Clover license price' hold?
Sonar's plugin simply seems to point out code issues in the IDE, which is good, but IntelliJ has functionality for this already. Also, does Clover lack this in their plugin [...]?
Clover does not perform static analysis, and thus it does not show code bugs. Neither in its HTML report nor IDE plugins (IDEA/Eclipse).
Lastly, I've also seen that Sonar can consume reports generated by Clover. [...] Does the SonarQube server sufficiently replace the Clover server by utilizing these reports? If not, what does Sonar lack?
I'm not 100% sure (please correct me if I'm wrong), but I think the Sonar Clover Plugin parses Clover's XML report file (at least the Clover plugins for Jenkins, Hudson and Bamboo work this way) to get some numbers to display, which means that you won't see Clover's HTML report in Sonar with detailed source line coloring, per-test coverage, tag clouds, etc.
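To give a rough idea of what "parsing Clover's XML report to get some numbers" amounts to, here is a minimal Python sketch that pulls project-level totals out of a clover.xml file. It assumes the usual Clover layout (a <project>/<metrics> element carrying elements/coveredelements attributes) and the default Maven output path; it is an illustration of the report format, not the Sonar plugin's actual code.

import xml.etree.ElementTree as ET

def clover_summary(path):
    # Assumes the common clover.xml layout: <coverage><project><metrics ...>.
    root = ET.parse(path).getroot()
    metrics = root.find("./project/metrics")
    if metrics is None:
        raise ValueError("no <project>/<metrics> element - is this a clover.xml report?")
    covered = int(metrics.get("coveredelements", 0))
    total = int(metrics.get("elements", 0))
    return {
        "covered_elements": covered,
        "total_elements": total,
        "coverage_pct": 100.0 * covered / total if total else 0.0,
    }

if __name__ == "__main__":
    # target/site/clover/clover.xml is the typical Maven location (an assumption; adjust to your build).
    print(clover_summary("target/site/clover/clover.xml"))

Aggregate numbers like these are what end up in Sonar; the per-line coloring and per-test data live in Clover's own report output, which is why, as noted above, they don't carry over.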
Cheers
Marek
Having just switched to VS2019 I’m exploring whether to use code analysis. In the project properties, “code analysis” tab, there are numerous built-in Microsoft rule sets, and I can see the editor squiggles when my code violates one of these rules. I can customise these rule sets and “save as” to create my own.
I have also seen code analyser NuGet packages such as “Roslynator” and “StyleCop.Analyzers”. What’s the difference between these and the built-in MS rules? Is it really just down to more comprehensive sets of rules/more choice?
If I wanted to stick with the built-in MS rules, are there any limitations? E.g. will they still get run and be reported on during a TFS/Azure DevOps build?
What's the difference between legacy FxCop and FxCop analyzers?
Legacy FxCop runs post-build analysis on a compiled assembly. It runs as a separate executable called FxCopCmd.exe. FxCopCmd.exe loads the compiled assembly, runs code analysis, and then reports the results (or diagnostics).
FxCop analyzers are based on the .NET Compiler Platform ("Roslyn"). You install them as a NuGet package that's referenced by the project or solution. FxCop analyzers run source-code based analysis during compiler execution. FxCop analyzers are hosted within the compiler process, either csc.exe or vbc.exe, and run analysis when the project is built. Analyzer results are reported along with compiler results.
Note
You can also install FxCop analyzers as a Visual Studio extension. In this case, the analyzers execute as you type in the code editor, but they don't execute at build time. If you want to run FxCop analyzers as part of continuous integration (CI), install them as a NuGet package instead.
https://learn.microsoft.com/en-us/visualstudio/code-quality/fxcop-analyzers-faq?view=vs-2019
So, the built-in legacy FxCop analysis and the NuGet-installed analyzers run only at build time, while the extension-installed analyzers run live in the editor as you type (but not at build time). Also, you have to explicitly tell Visual Studio to run legacy code analysis on build, whereas the NuGet analyzers run on build simply because they are installed. And analyzers installed as NuGet packages or extensions won't run when you go to the menu option "Run Code Analysis".
At least, that's what I get out of that page.
There's a link near the bottom of that page that lists which code analysis rules have been ported to the new analyzers, including rules that are now deprecated.
https://learn.microsoft.com/en-us/visualstudio/code-quality/fxcop-rule-port-status?view=vs-2019
The different analyzers attempt to cover different coding styles and things Microsoft didn't cover when they built FxCop. With the little research I just did on this, there's a whole rabbit hole to follow, Alice, that would take more time than I have right now to devote to it. And it seems to be filled with lots of arcane knowledge and OCD style code nitpicks that make Wonderland seem normal. But that's just my opinion.
There's lots of personal and professional opinion about the various rules in these and in the basic Microsoft rules, so there's plenty of room to use what you want and disable what you don't. For a beginner, I'd suggest turning on only a few rules at a time, so you aren't inundated with more warnings and errors than you have lines of code. Ok, that might be a bit of an exaggeration, but there are so many rules that really are nitpicks, especially on legacy code, that they aren't worth enabling, since you likely won't have time to fix them all. You will also want to do basic research and use "common sense" when you decide what to enable. ("Do I really need to worry about variable-capitalization style consistency in an app that's been ported across 4 different languages over 15+ years and has 10k files?") This is both personal and professional opinion here, so follow it or not.
And don't forget the rules that contradict each other. Those are fun to deal with.......
I am setting up a C# BDD test automation framework using the following basic components:
Specflow
NUnit / SpecRun (test runner - see below)
Selenium
I have successfully set up a framework which executes tests and generates a nice HTML report (to do this, I used SpecRun as the test runner: http://www.specflow.org/plus/runner/).
I am now trying to set my tests up to execute in Sauce labs to do cross-browser and device testing. Jenkins has a nice Sauce plugin which allows you to specify the platforms and the tests are then run across each selected platform.
I have also identified Saucery (http://fullcirclesolutions.com.au/) as a potential time saver in setting this integration up; however, this would mean I would need to use NUnit as my test runner instead of SpecRun.
If I go down the NUnit route, does anyone know of any decent HTML reporting solutions which I can integrate into the test run? A lot of googling has returned very little in the way of answers here.
Thanks!
You can use the specflow.exe program to create reports; it comes with the SpecFlow package. How it works in detail can be found on the SpecFlow GitHub. Summarized:
In order to generate this report you have to execute the acceptance tests with the nunit-console runner. This tool generates an XML summary about the test executions. To have the detailed scenario execution traces visible, you also need to capture the test output using the /out and the /labels options, as can be seen in the following example.

nunit-console.exe /labels /out=TestResult.txt /xml=TestResult.xml bin\Debug\BookShop.AcceptanceTests.dll

The two generated files can be used to invoke the SpecFlow report generation. If you use the output file names shown above, it is enough to specify the project file path containing the feature files.

specflow.exe nunitexecutionreport BookShop.AcceptanceTests.csproj /out:MyResult.html
Another option is Allure (https://github.com/allure-framework/allure2). It works with NUnit, SpecFlow, or any other framework.
I am trying to run sonar analysis for Erlang. I have downloaded the plug-ins and with 60+ rules, it is able to tell me which part of the source code is not compliant.
However, I cannot get the SQALE rating to work correctly; in particular, the technical debt always shows 0.0 days. How do I configure this?
It is not configurable; the plug-in simply does not support the SQALE feature. In fact, the most recent version of SonarQube does not use SQALE anymore.
I'm setting up automated regression testing for an FPGA project, almost exactly as described here:
Continuous integration of complex reconfigurable systems
Now I want to get test results (from VHDL REPORT statements in ModelSim simulation) to appear in Jenkins test reports. My understanding is that Jenkins only natively supports the JUnit format, and I looked for plugins supporting non-XML formats but didn't see any.
Generating valid XML from VHDL REPORT statements would be very difficult, since the simulation may immediately terminate depending on the severity. Which means that the closing tags would have to be duplicated in every single possible exit path for every single test -- not the most maintainable approach.
So, do you know of any straightforward way to convert plain text into JUnit format (or another format, if supported by Jenkins)? If something doesn't already exist, is there an advantage to writing a Jenkins plugin vs just throwing together a Perl script? Any other suggestions?
You should take a look at the xUnit Plugin. The plugin reads test results from a number of tools and seems adaptable to custom formats. From the documentation, the plugin is able to read not only XML, but also CSV and TXT. For a custom format you need to specify a style sheet for the transformation; I am not quite sure whether that will go all the way for you. But even if it does not, I suppose the plugin should be easy to extend for your own format.
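If the style-sheet route doesn't go all the way and you do end up "just throwing together a script" as the question suggests, the conversion itself is small. Here is a minimal Python sketch that turns plain-text results into JUnit XML for Jenkins; the "PASS <name>" / "FAIL <name>: <message>" line format and the file names are assumptions for illustration, not anything ModelSim actually emits.

import sys
import xml.etree.ElementTree as ET

def text_to_junit(lines, suite_name="vhdl_sim"):
    # Convert lines like "PASS my_test" / "FAIL my_test: message" into a JUnit testsuite.
    suite = ET.Element("testsuite", name=suite_name)
    tests = failures = 0
    for line in lines:
        line = line.strip()
        if not line:
            continue
        status, _, rest = line.partition(" ")
        name, _, message = rest.partition(":")
        case = ET.SubElement(suite, "testcase", name=name.strip())
        tests += 1
        if status.upper() == "FAIL":
            failures += 1
            ET.SubElement(case, "failure", message=message.strip() or "failed")
    suite.set("tests", str(tests))
    suite.set("failures", str(failures))
    return ET.ElementTree(suite)

if __name__ == "__main__":
    # Usage: python text_to_junit.py sim_output.txt results.xml
    with open(sys.argv[1]) as f:
        tree = text_to_junit(f)
    tree.write(sys.argv[2], encoding="utf-8", xml_declaration=True)

Because the XML is produced after the run rather than from inside the testbench, an aborted simulation just shows up as missing test cases instead of invalid XML, which sidesteps the closing-tag problem mentioned in the question.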
Old post, but there is now a unit testing framework for VHDL that we've developed. It solves the problem by generating a report in the JUnit format. It also handles the case where the simulation stops due to a severe error. The tool is free and open source and can be found at https://github.com/LarsAsplund/vunit
I'm looking into setting up a CI environment for our flex projects. I have very little experience in setting up an environment like this, but have read a lot about it and think we could benefit a lot from this in our projects. I do have experience with ANT and we're currently using it for our building. I've been looking at Hudson for a while and it looks really nice and simple while still having the power to support a proper CI environment.
So basically, my question is whether anyone has experience setting up Flex projects with Hudson. If so, please share some info on issues, cost/benefit, as well as what kind of effort is required per project to get up and running with Hudson. I've googled for a while and can proudly say that I know more about both the Fast lexical analyzer and the Hudson River, but little more about the actual topic of this post =)
Just about anything that can be executed from a command line can be executed via Hudson. If your Flex app can be built via Ant from the command line, it will work just fine in Hudson.
This might be helpful:
http://www.subotnik.com/blog/?p=100