What tools are out there that can perform code coverage analysis at the machine code level rather than the source code level? I'm looking for a possible solution to perform fuzz testing on software to which I do not have source code access.
I think the IBM Rational test coverage tools instrument object code.
Assuming you had such a tool, but no access to the source, what exactly would code coverage mean, other than 100%?
If you didn't have 100% coverage, you'd know you hadn't exercised something.
But you would have no way of knowing what.
For compiled code (not Java), try Valgrind.
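For example (a rough sketch; the binary name and arguments are placeholders), the callgrind tool records which parts of the machine code actually ran, and callgrind_annotate summarizes the result:

    valgrind --tool=callgrind ./target_binary arg1 arg2
    callgrind_annotate callgrind.out.<pid>

That doesn't give you a percentage against "all code in the binary", but it does show which functions and instructions were exercised by a given fuzzing run.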
Old post... but my two cents.
If you have a bunch of jars and if you know what classes/methods you are using, you can instrument the jars with Emma and run your sample application against those jars.
In my case, I have jars which are actually proprietary components (to generate html code) which our company uses to build its web pages. We have a sample application that utilizes these components and a bunch of tests that are run against the sample app. I wrote an ant task to copy the maven dependencies to a directory, instrument them and run the tests against these instrumented jars. This task is invoked from the maven POM and is hence part of the build process.
Also, as part of the build process, we process the emma coverage data to produce a report. This report shows the classes and methods in the jar for which we do not have the source code! Hope this helps.
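For reference, the instrumentation step of that Ant task looked roughly like the sketch below; the paths, jar names and the emma.lib path reference are placeholders, and it assumes EMMA's emma.jar and emma_ant.jar are available:

    <taskdef resource="emma_ant.properties" classpathref="emma.lib"/>
    <emma>
      <!-- copy the proprietary jar aside and instrument the copy -->
      <instr instrpath="deps/component.jar"
             destdir="instrumented"
             metadatafile="coverage/metadata.emma"
             mode="copy"/>
    </emma>

The tests then run with the instrumented jars (plus emma.jar) first on the classpath, and EMMA's report task turns the collected metadata and runtime coverage files into the HTML report mentioned above.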
If you have the number of entry points (public methods), you can test the coverage for that. I don't know any tool for that though.
Otherwise you would have to test the assembly code coverage, and I don't know if it is possible.
By default, Maven standard directory layout has two Java source folders:
src/main/java
src/test/java
For my purposes, I need a third one src/junit/java which should be packaged into a JAR with the classifier junit.
If possible, the new source folder should have its own classpath (compile + everything with scope junit).
My guess is that for this, I will have to modify at least the resource and compile plugins.
Or is there an easier way?
I have a workaround as explained here but for that, I have to put things like Mockito and JUnit on the compile classpath which violates my sense of purity.
For all people who doubt the wisdom of my approach: I have support code that helps to write unit tests when you work with code from src/main/java. Since I'm using the same support code in the tests for the project itself, this code needs to be compiled after src/main/java and before src/test/java.
Specifically, my support code needs to import code from src/main/java and the tests need to be able to import the support code.
I've seen a couple of Maven setups which bundle test code in a Maven module of its own. You could then create a simple main-module <- support-module <- test-module dependency chain with that, as sketched below. But then main-module would still compile fine if you build it on its own without test-module. Of course you could aggregate them together with a reactor POM and just build the project via that POM.
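A rough sketch of that layout; the module names and group id are made up:

    <!-- parent/reactor pom.xml -->
    <modules>
      <module>main-module</module>
      <module>support-module</module>
      <module>test-module</module>
    </modules>

    <!-- support-module/pom.xml: the test-support code, compiled against the production code -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>main-module</artifactId>
      <version>${project.version}</version>
    </dependency>

    <!-- test-module/pom.xml: the actual tests; JUnit and Mockito live only here -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>support-module</artifactId>
      <version>${project.version}</version>
    </dependency>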
Edit:
If you have problems with this setup regarding code coverage, you can use the Jacoco Maven plugin to aggregate the test coverage generated by test-module to main-module. See this for further information: http://www.petrikainulainen.net/programming/maven/creating-code-coverage-reports-for-unit-and-integration-tests-with-the-jacoco-maven-plugin/
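One way to wire that up is a prepare-agent execution in the modules that actually run tests, plus a report-aggregate execution in test-module (or a dedicated report module), since report-aggregate reports on the code of the modules it depends on. A sketch, assuming a jacoco-maven-plugin version that has the report-aggregate goal (0.7.7 or newer; the version number below is only illustrative, and the linked article shows a slightly different approach):

    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>0.8.8</version>
      <executions>
        <execution>
          <goals>
            <goal>prepare-agent</goal>
          </goals>
        </execution>
        <execution>
          <id>report-aggregate</id>
          <phase>verify</phase>
          <goals>
            <goal>report-aggregate</goal>
          </goals>
        </execution>
      </executions>
    </plugin>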
What is the best way to test a pub library package before deploying it, as if I had downloaded it via pub install? (I'm not talking about unit tests.)
You can use path packages. Instead of going through a pub server this will fetch the package from the local filesystem.
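For example, in the app you use to exercise the package, the pubspec.yaml entry looks like this (the package name and path are made up):

    dependencies:
      my_package:            # hypothetical package name
        path: ../my_package  # resolved from the local filesystem instead of a pub server

After running pub get (or pub install on older SDKs), imports of package:my_package/... resolve to your local checkout, so you can try the package exactly as a consumer would.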
It very much depends on the type of package you are looking to use.
If the pub package is primarily a non-UI library then you should be able to exercise its API via a UnitTest script, a small script that has a main to kick off a bunch of unit tests (grouped or otherwise).
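For illustration, such a script is typically only a few lines; the package and function names below are made up, and it assumes the unittest package most pub projects of that era used:

    import 'package:unittest/unittest.dart';
    import 'package:my_package/my_package.dart'; // hypothetical package under test

    void main() {
      group('my_package public API', () {
        test('frobnicate doubles its input', () {
          expect(frobnicate(2), equals(4)); // hypothetical function
        });
      });
    }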
Another option for a non-UI package is to find the source project (usually noted in the package's page on pub.dartlang.org) and download it, where if you're lucky there will be a test directory with a unit test script in it.
Some UI providing packages do include unit tests in their project too.
A lot of projects include an example or two you can run to see how it works and pick up some tips from their source code, so I encourage you to check out the original source of the project you're interested in.
But generally (especially for UI-providing packages), you're going to get the best results by creating a small skeleton app for the purpose of playing with the package and then applying what you learn to your main application.
Hope that helps.
I've been doing some research on the maven source and javadoc plugins, and I wanted to inquire a bit about the usage of each.
I understand conceptually how the plugins work, and what they do.
What I'm confused about is why you would want to bundle sources or javadoc along with your artifact. Doesn't the javadoc get published when you do site:deploy? If I am creating a JAR library that will be used as a dependency of another project in Eclipse, will attaching the javadoc or sources let me see the javadoc in Eclipse when using functions from that library, whereas if I don't use the javadoc plugin it won't be available?
What is "forked-path" and "jar-no-fork"? They seem to be relevant to this. Like I said I've done a lot of researching, I just can't tie it all together. Thanks!
Eclipse and other tools know how to download source and javadoc artifacts and use them to show you doc and source of your dependencies.
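For example, on the consuming side you can pull them down explicitly with documented goals and flags of the maven-dependency-plugin and the old maven-eclipse-plugin:

    mvn dependency:sources
    mvn dependency:resolve -Dclassifier=javadoc
    mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true

m2e inside Eclipse can also fetch them on demand or automatically via its preferences.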
Forked-path and jar-no-fork are about whether a second, forked build lifecycle gets run: the source plugin's plain jar goal forks the lifecycle (it re-runs generate-sources before executing), while jar-no-fork reuses the current build, which avoids executing parts of the build twice when the goal is bound to the normal lifecycle or run during a release.
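To attach the sources jar on every build, the commonly used configuration is along these lines (the version number is only illustrative):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-source-plugin</artifactId>
      <version>3.2.1</version>
      <executions>
        <execution>
          <id>attach-sources</id>
          <goals>
            <goal>jar-no-fork</goal> <!-- produces and attaches the -sources.jar -->
          </goals>
        </execution>
      </executions>
    </plugin>

The maven-javadoc-plugin's jar goal is wired up the same way to attach the -javadoc jar.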
The Delphi Linker strips out any functions that aren't actually used, thus reducing the executable size.
Is there any way to stop the Delphi Linker doing this? e.g. a compiler switch?
To those wondering "why?"...
I am trying to use the delphi-code-coverage tool, but it only reports on code that is actually compiled into the executable. Which makes it not very useful. If I could get Delphi to include all code, I'm hoping I could then get some useful code coverage statistics.
I should mention that I have DUnit tests in a separate project to my application. So even though the code is "unused" in the DUnit project, it is used in the actual application.
See here for more details.
Your code-coverage tool is measuring the wrong thing. It works off the map file instead of the source code, so it will only report on live code instead of on all code in a project. The linker already filters out the dead code, and in a blank unit-test project, all code is dead code. There is no way to tell Delphi to include dead code in an EXE.
Run the code-coverage tool on your application to get a list of functions that need testing. Then, write code in your unit-test project that mentions all those functions. (It doesn't have to call everything yet, and it certainly doesn't have to test it all. We're just making sure it's linked into the unit-test project.) Now the coverage tool can get an accurate measurement of what's been executed and what hasn't.
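A minimal sketch of what that "mentioning" can look like; the unit and routine names are made up, and taking a routine's address is enough to keep the linker from dropping it even though nothing calls it yet:

    unit ForceLinkForCoverage;
    // Sketch only: the referenced units and routines are hypothetical examples
    // from the application under test.

    interface

    implementation

    uses
      MyBusinessUnit, MyOtherUnit;

    procedure TouchApplicationCode;
    var
      Touched: array[0..1] of Pointer;
    begin
      // Taking a routine's address forces the linker to include it, so the
      // coverage tool sees it in the map file and can report it as unexecuted.
      Touched[0] := @MyBusinessUnit.CalculateTotal;
      Touched[1] := @MyOtherUnit.FormatInvoice;
      Assert(Touched[0] <> Touched[1]); // keeps the compiler from hinting about unused values
    end;

    initialization
      TouchApplicationCode;

    end.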
Is there any way to measure code coverage with DUnit? Or are there any free tools accomplishing that? What do you use for that? What code coverage do you usually go for?
Jim McKeeth: Thanks for the detailed answer. I am talking about unit testing in the sense of a TDD approach, not only about unit tests after a failure occurred. I'm interested in the code coverage I can achieve with some basic prewritten unit tests.
I have just created a new open source project on Google Code with a basic code coverage tool for Delphi 2010. https://sourceforge.net/projects/delphicodecoverage/
Right now it can measure line coverage but I'm planning to add class and method coverage too.
It generates html reports with a summary as well as marked up source showing you what lines are covered (green), which were not (red) and the rest of the lines that didn't have any code generated for them.
Update:
As of version 0.3 of Delphi Code Coverage you can generate XML reports compatible with the Hudson EMMA plugin to display code coverage trends within Hudson.
Update:
Version 0.5 brings bug fixes, increased configurability and cleaned up reports
Update:
Version 1.0 brings support for EMMA output, coverage of classes and methods, and coverage of DLLs and BPLs
I don't know of any free tools. AQTime is almost the de facto standard for profiling Delphi. I haven't used it, but a quick search found Discover for Delphi, which is now open source, but just does code coverage.
Either of these tools should give you an idea of how much code coverage your unit tests are getting.
Are you referring to code coverage from unit tests or stale code? Generally I think only testable code that has a failure should be covered with a unit test (yes I realize that may be starting a holy war, but that is where I stand). So that would be a pretty low percentage.
Now stale code on the other hand is a different story. Stale code is code that doesn't get used. You most likely don't need a tool to tell you this for a lot of your code, just look for the little Blue Dots after you compile in Delphi. Anything without a blue dot is stale. Generally if code is not being used then it should be removed. So that would be 100% code coverage.
There are other scenarios for stale code, like if you have special code to handle if the date ever lands on the 31st of February. The compiler doesn't know it can't happen, so it compiles it in and gives it a blue dot. Now you can write a unit test for that, and test it and it might work, but then you just wasted your time a second time (first for writing the code, second for testing it).
There are tools to track what code paths get used when the program runs, but that is only semi-reliable since not all code paths will get used every time. Like that special code you have to handle leap years: it will only run every four years. So if you take it out then your program will be broken every four years.
I guess I didn't really answer your question about DUnit and Code Coverage, but I think I may have left you with more questions than you started with. What kind of code coverage are you looking for?
UPDATE: If you are taking a TDD approach then no code is written until you write a test for it, so by nature you have 100% test coverage. Of course, just because each method is exercised by a test does not mean that its entire range of behaviors is exercised. SmartInspect provides a really easy method to measure which methods are called, along with timing, etc. It is a little less than AQTime, but not free. With some more work on your part you can add instrumentation to measure every code path (branches of "if" statements, etc.). Of course you can also just add your own logging to your methods to achieve a coverage report, and that is free (well, except for your time, which is probably worth more than the tools). If you use JEDI Debug then you can get a call stack too.
TDD really cannot easily be applied retroactively to existing code without a lot of refactoring. Although the newer Delphi IDEs have the ability to generate unit test stubs for each public method, which then gives you 100% coverage of your public methods. What you put in those stubs determines how effective that coverage is.
I use Discover for Delphi and it does the job, for unit testing with DUnit and functional testing with TestComplete.
Discover can be configured to run from the command line for automation.
As in:
Discover.exe Project.dpr -s -c -m
Discover works great for me. It hardly slows down your application, unlike AQTime. This may not be a problem for you anyway, of course. I think the recent versions of AQTime perform better in this respect.
I've been using Discover for years; it worked excellently up to and including BDS2006 (which was the last pre-XE* version of Delphi I used and still use), but in its current open-sourced state it's unclear how to make it work with the XE* versions of Delphi. A shame really, because I loved this tool: fast and convenient in almost every way.
So now I'm moving to delphi-code-coverage...