What is the difference between reportgenerator and BIRTreportgenerator? - fortify

Is BIRTReportGenerator just a newer approach to ReportGenerator?
Or are there differences in what they report and in their intended use?

ReportGenerator is all about code coverage. If you want to visualize the code coverage of your tests, then ReportGenerator might be relevant for you.
BIRTReportGenerator, on the other hand, appears to be a Fortify tool for generating reports from static code analysis results.

Related

Generating an in-memory coverage report using Clang Source-based Code Coverage

I followed the Clang manual and used __llvm_profile_write_buffer to collect coverage profile data inside the instrumented program.
This works well, but to actually generate a coverage report the recommended way is to use the llvm-cov tool like this:
llvm-cov show ./foo -instr-profile=foo.profdata
This tool needs access to the binary, which does not play well with __llvm_profile_write_buffer.
Is there a way to generate a coverage report similar to what llvm-cov produces, but inside the process, from the buffer updated by __llvm_profile_write_buffer?
I guess this would involve accessing the symbol table from within the process, which I think is doable?
Use case: I would like to upload the coverage report from within the process to a remote server without having to execute an external tool.
Thanks for your help,
Antoine
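For context, this is roughly what the in-process capture described in the question looks like in code. The two __llvm_profile_* declarations are the runtime functions the question refers to; the surrounding helper is only an illustrative sketch:
#include <cstdint>
#include <vector>

extern "C" {
    // Provided by the compiler-rt profile runtime when building with
    // -fprofile-instr-generate -fcoverage-mapping.
    uint64_t __llvm_profile_get_size_for_buffer(void);
    int __llvm_profile_write_buffer(char *Buffer);
}

// Capture the raw profile (the bytes that would normally end up in a
// default.profraw file) in memory instead of on disk.
std::vector<char> capture_raw_profile() {
    std::vector<char> buf(__llvm_profile_get_size_for_buffer());
    if (__llvm_profile_write_buffer(buf.data()) != 0)
        buf.clear();  // the runtime reported a failure
    return buf;
}
Note that what __llvm_profile_write_buffer produces is only the raw counter data (the .profraw payload); mapping those counters back to source lines is what llvm-cov does using the coverage mapping stored in the binary, which is exactly the dependency the question is asking how to avoid.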

how to see line coverage in Bullseye

Recently I started using BullseyeCoverage.
I'm going through the steps: compiling with BullseyeCoverage, running some test cases on the binaries created, generating a coverage report.
The coverage report contains function coverage and condition/decision coverage, but no line coverage. I tried to find a way of generating line coverage statistics, without success. I thought of using covbr for this, but I need something that covers all of my sources at once.
Thanks for your help!
Bullseye does not support line coverage (which is also called statement coverage). For the reasoning behind that, see http://www.bullseye.com/statementCoverage.html
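To see why a tool might favour condition/decision coverage over plain line/statement coverage, consider a small, purely illustrative C++ example (the function and names below are hypothetical): a single test run can execute every statement, yet still leave parts of the decision logic unexercised.
#include <iostream>

int grant(bool a, bool b) {
    int level = 0;          // executed
    if (a || b)             // decision is true; b is short-circuited away
        level = 1;          // executed
    return level;           // executed
}

int main() {
    // This single call executes every statement in grant(), so a
    // line/statement coverage tool would report it as fully covered.
    // Condition/decision coverage would not: the decision (a || b) never
    // evaluates to false, and b is never evaluated at all.
    std::cout << grant(true, false) << "\n";
    return 0;
}
A statement coverage tool would call this fully covered after one run, while a condition/decision tool like Bullseye would still flag the untested false outcome.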

Ruby test coverage tool

I need a tool which measures test coverage under Ruby.
I tried rcov, but couldn't install it under Windows, Cygwin, or Ubuntu.
Which programs (not necessarily free) calculate the branch and/or line coverage of tests in Ruby and work with Ruby on Rails?
SimpleCov. Rcov doesn't work well under Ruby 1.9.x, but SimpleCov does.
There is also deep-cover, which aims to be more complete than the tools mentioned above.
From the Readme:
Deep Cover aims to be the best coverage tool for Ruby code:
more accurate line coverage
branch coverage
can be used as a drop-in replacement for the built-in Coverage library.
Use Ruby's built-in Coverage module if you want something very simple. It is what SimpleCov uses under the hood.

Test code coverage without source code?

What tools are out there that can perform code coverage analysis at the machine code level rather than the source code level? I'm looking for a possible solution to perform fuzz testing on software to which I do not have source code access.
I think the IBM Rational test coverage tools instrument object code.
Assuming you had such a tool, but no access to the source, what exactly would code coverage mean, other than 100%?
If you didn't have 100% coverage, you'd know you hadn't exercised something.
But you would have no way of knowing what.
For compiled code (not Java), try Valgrind.
Old post... but my two cents.
If you have a bunch of jars and if you know what classes/methods you are using, you can instrument the jars with Emma and run your sample application against those jars.
In my case, I have jars which are actually proprietary components (for generating HTML code) that our company uses to build its web pages. We have a sample application that utilizes these components and a bunch of tests that are run against the sample app. I wrote an Ant task to copy the Maven dependencies to a directory, instrument them, and run the tests against these instrumented jars. This task is invoked from the Maven POM and is hence part of the build process.
Also, as part of the build process, we process the Emma coverage data to produce a report. This report shows the classes and methods in the jars for which we do not have the source code! Hope this helps.
If you know the number of entry points (public methods), you can measure coverage against those, though I don't know of any tool that does this.
Otherwise you would have to measure coverage at the assembly level, and I don't know whether that is possible.

Measuring code coverage in Delphi

Is there any way to measure code coverage with DUnit? Or are there any free tools that accomplish this? What do you use for that? What level of code coverage do you usually aim for?
Jim McKeeth: Thanks for the detailed answer. I am talking about unit testing in the sense of a TDD approach, not only about unit tests written after a failure occurred. I'm interested in the code coverage I can achieve with some basic prewritten unit tests.
I have just created a new open source project on Google Code with a basic code coverage tool for Delphi 2010. https://sourceforge.net/projects/delphicodecoverage/
Right now it can measure line coverage but I'm planning to add class and method coverage too.
It generates HTML reports with a summary, as well as marked-up source showing which lines are covered (green), which are not (red), and the remaining lines for which no code was generated.
Update:
As of version 0.3 of Delphi Code Coverage you can generate XML reports compatible with the Hudson EMMA plugin to display code coverage trends within Hudson.
Update:
Version 0.5 brings bug fixes, increased configurability and cleaned up reports
Update:
Version 1.0 brings support for EMMA output, coverage of classes and methods, and coverage of DLLs and BPLs
I don't know of any free tools. AQTime is almost the de facto standard for profiling Delphi. I haven't used it, but a quick search found Discover for Delphi, which is now open source but only does code coverage.
Either of these tools should give you an idea of how much code coverage your unit tests are getting.
Are you referring to code coverage from unit tests, or stale code? Generally I think only testable code that has had a failure should be covered with a unit test (yes, I realize that may be starting a holy war, but that is where I stand). So that would be a pretty low percentage.
Now stale code, on the other hand, is a different story. Stale code is code that doesn't get used. You most likely don't need a tool to tell you this for a lot of your code; just look for the little blue dots after you compile in Delphi. Anything without a blue dot is stale. Generally, if code is not being used then it should be removed, so that would be 100% code coverage.
There are other scenarios for stale code, such as special code to handle the date ever landing on the 31st of February. The compiler doesn't know that can't happen, so it compiles the code in and gives it a blue dot. You can write a unit test for it, and it might even pass, but then you have just wasted your time a second time (first writing the code, second testing it).
There are tools to track which code paths get used when the program runs, but that is only semi-reliable, since not all code paths will be used on every run. Take the special code you have to handle leap years: it only runs every four years, but if you take it out, your program will be broken every four years.
I guess I didn't really answer your question about DUnit and code coverage, but I may have left you with more questions than you started with. What kind of code coverage are you looking for?
UPDATE: If you are taking a TDD approach then no code is written until you write a test for it, so by nature you have 100% test coverage. Of course, just because each method is exercised by a test does not mean that its entire range of behaviors is exercised. SmartInspect provides a really easy way to measure which methods are called, along with timing, etc. It does a little less than AQTime, but it is not free either. With some more work on your part you can add instrumentation to measure every code path (branches of "if" statements, etc.). Of course you can also just add your own logging to your methods to build a coverage report, as sketched below, and that is free (well, except for your time, which is probably worth more than the tools). If you use JEDI Debug then you can get a call stack too.
TDD really cannot easily be applied retroactively to existing code without a lot of refactoring. That said, the newer Delphi IDEs can generate unit test stubs for each public method, which then gives you 100% coverage of your public methods. What you put in those stubs determines how effective that coverage is.
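As a rough illustration of that do-it-yourself logging idea (shown in C++ here rather than Delphi, and entirely hypothetical): each instrumented method records that it ran, and whatever was never recorded is, by omission, uncovered.
#include <cstdio>
#include <mutex>
#include <set>
#include <string>

struct Coverage {
    std::set<std::string> hits;
    std::mutex mtx;

    void record(const char* name) {
        std::lock_guard<std::mutex> guard(mtx);
        hits.insert(name);
    }
    ~Coverage() {                       // runs at program exit
        for (const auto& name : hits)
            std::printf("covered: %s\n", name.c_str());
    }
};

Coverage& coverage() {                  // constructed on first use
    static Coverage c;
    return c;
}

// Put this at the top of every method you want tracked.
#define COVERAGE_POINT() coverage().record(__func__)

void load_config() { COVERAGE_POINT(); /* ... */ }
void save_report() { COVERAGE_POINT(); /* ... */ }

int main() {
    COVERAGE_POINT();
    load_config();   // save_report() is never called, so it will not
    return 0;        // show up in the "covered:" list printed at exit.
}
The Delphi equivalent would simply be a one-line log call at the top of each method, with the report built from whatever the log contains after a test run.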
I use Discover for Delphi and it does the job, both for unit testing with DUnit and for functional testing with TestComplete.
Discover can be configured to run from the command line for automation.
As in:
Discover.exe Project.dpr -s -c -m
Discover works great for me. It hardly slows down your application, unlike AQTime. This may not be a problem for you anyway, of course. I think the recent versions of AQTime perform better in this respect.
I've been using Discover for years; it worked excellently up to and including BDS2006 (which was the last pre-XE* version of Delphi I used and still use), but in its current open-sourced state it's unclear how to make it work with the XE* versions of Delphi. A shame really, because I loved this tool: fast and convenient in almost every way.
So now I'm moving to delphi-code-coverage...

Resources