How to find out what caused the coverage drop in simplecov - ruby-on-rails

We are using simplecov as our code coverage tool and it's great. One thing that would make it even better would be an output saying which file changes caused a drop in coverage.
One possible approach would be to store the .resultset.json files and compare them with the newest results. Any other ideas? Has anyone done something similar?
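If you go the .resultset.json route, a small script can diff two snapshots per file and print only the files whose percentage dropped. The sketch below is just one rough way to do it; the snapshot file names are placeholders, and the exact JSON layout depends on your SimpleCov version (older versions map each file straight to an array of hit counts, newer ones nest it under a "lines" key, which the sketch tries to handle).

require "json"

# Per-file line coverage (in %) from a SimpleCov .resultset.json snapshot.
def per_file_coverage(path)
  resultset = JSON.parse(File.read(path))
  resultset.each_value.with_object({}) do |suite, acc|
    suite["coverage"].each do |file, data|
      lines = data.is_a?(Hash) ? data["lines"] : data # newer SimpleCov nests under "lines"
      relevant = lines.compact                        # nil entries are irrelevant lines
      next if relevant.empty?
      covered = relevant.count { |hits| hits > 0 }
      acc[file] = (100.0 * covered / relevant.size).round(2)
    end
  end
end

# Placeholder names: a stored snapshot from the previous build and the fresh one.
old_cov = per_file_coverage("old_resultset.json")
new_cov = per_file_coverage("new_resultset.json")

(old_cov.keys | new_cov.keys).sort.each do |file|
  before = old_cov.fetch(file, 0.0)
  after  = new_cov.fetch(file, 0.0)
  puts format("%-60s %6.2f%% -> %6.2f%%", file, before, after) if after < before
end

Files that were deleted or added between the two runs show up as dropping to or from 0%, which is usually exactly the "which file changes caused this" signal you are after.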

Related

Junit5 with ant

I am new to Ant and JUnit 5. I went through several Ant/JUnit 4 examples, but I did not find a decent example pairing Ant with JUnit 5; with Ant and JUnit 4 everything works fine. I downloaded the files from the official JUnit 5 site, https://junit.org/junit5/docs/current/user-guide/ ("For Ant, check out the junit5-jupiter-starter-ant project."), but even they give an error at the very beginning (screenshot attached). I will try to fix the errors one by one, but I don't think the official repo was committed with errors, so maybe I'm doing something wrong. Or maybe the gurus can suggest some other simple Ant/JUnit 5 sample.
Thank you in advance.
I tried to run the original sources from https://github.com/junit-team/junit5-samples and expected them to at least compile, but I get plenty of errors.
So this post has two questions:
the errors in the screenshot
Ant + JUnit 5 integration in general
To reproduce the exact issues one would need the same environment, which means at least the same OS, Java version and Ant version. The example itself seems to be using the JUnit 5 ConsoleLauncher, which is one of the ways to run the tests. Looking at the errors, the issue appears to be in the project itself, because if the compiler cannot find the symbol @Test then JUnit (5?) is simply not present on the classpath. Maybe this is a hint to the author to dig a bit more into the issue, particularly into which dependencies (JARs) are included.
Now, going back to how to run JUnit 5 tests with Ant, I can recall the junitlauncher task that Apache suggests: https://ant.apache.org/manual/Tasks/junitlauncher.html . Note that you need to pay attention to the dependencies here as well; there are a number of JARs to include (opentest4j, junit-platform-xyz). This also depends on the Ant installation in the environment: if you get a NoClassDefFoundError for JUnitLauncher, it can be solved by using an up-to-date version of Ant: https://ant.apache.org/bindownload.cgi . On Linux you can place these JARs in the /home/your_username/.ant/lib directory, where they will be picked up automatically.
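For illustration, a minimal junitlauncher target might look like the sketch below. Treat it as a sketch only: the property names, directory layout and the test.classpath reference are assumptions, and that classpath (or ~/.ant/lib) must contain the junit-platform-*, junit-jupiter-* and opentest4j JARs.

<!-- Sketch: run already-compiled JUnit 5 tests with Ant's junitlauncher task (recent Ant 1.10.x). -->
<target name="test" depends="compile-tests">
  <junitlauncher printSummary="true" haltOnFailure="true">
    <classpath refid="test.classpath"/>
    <classpath location="${build.test.dir}"/>
    <testclasses outputdir="${reports.dir}">
      <fileset dir="${build.test.dir}">
        <include name="**/*Test.class"/>
      </fileset>
      <listener type="legacy-xml" sendSysOut="true" sendSysErr="true"/>
    </testclasses>
  </junitlauncher>
</target>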

Unreliable code coverage with quarkus-jacoco extension

We are using the quarkus-jacoco extension (version 2.4.2) to collect code coverage during tests. The problem we are facing is that it works unreliably. Most of the time the coverage is recorded correctly, but sometimes there is 0% coverage and the jacoco.xml files are not created at all; they are simply missing. No warnings or errors are printed. If you run the very same process again, coverage collection works perfectly.
Has anyone experienced a similar issue? Is there any debugging we can turn on to track down the problem? Unfortunately the Maven output gives no indication of JaCoCo operations at all.

how to see line coverage in Bullseye

Recently I started using BullseyeCoverage.
I'm going through the steps: compiling with BullseyeCoverage, running some test cases on the binaries created, generating a coverage report.
The coverage report contains function coverage and condition/decision coverage, but no line coverage. I tried, without success, to find a way of generating line coverage statistics. I thought of using covbr for this, but I need something that covers all of my sources in a single report.
Thanks for your help!
Bullseye does not support line coverage (also called statement coverage). For the rationale, see http://www.bullseye.com/statementCoverage.html

Finding out rails' test coverage

I've just seen this issue on rails' issue tracker:
https://github.com/rails/rails/issues/2667
And I'd like to find out which parts of the code aren't covered. I couldn't find a coverage tool in the Rakefiles, and searching the web for one is a bit frustrating, since it returns far more results about test coverage of your Rails app than about test coverage of the framework itself.
Has anybody set up a code coverage tool? Is there any documentation on how to do it?
Rcov (or SimpleCov on Ruby 1.9) is the standard tool for Ruby code coverage. It should be fairly straightforward to get one of them running against the Rails tests.
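As a purely illustrative sketch, wiring SimpleCov in could be as simple as starting it at the very top of the test helper the framework's own suite uses, before any framework code is required; the helper path and the root/filter settings below are assumptions about the Rails source layout, not something the Rails Rakefiles provide.

# Speculative: put this at the top of the component's test helper
# (e.g. activesupport/test/abstract_unit.rb) before anything else is required.
require "simplecov"

SimpleCov.start do
  root File.expand_path("..", __dir__) # hypothetical: treat the component checkout as the root
  add_filter "/test/"                  # don't count the test files themselves
end

Running that component's tests as usual should then leave SimpleCov's report in a coverage/ directory under that root.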

Measuring code coverage in Delphi

Is there any way to measure code coverage with DUnit? Or are there any free tools accomplishing that? What do you use for that? What code coverage do you usually go for?
Jim McKeeth: Thanks for the detailed answer. I am talking about unit testing in the sense of a TDD approach, not only about unit tests written after a failure occurred. I'm interested in the code coverage I can achieve with some basic prewritten unit tests.
I have just created a new open source project with a basic code coverage tool for Delphi 2010: https://sourceforge.net/projects/delphicodecoverage/
Right now it can measure line coverage but I'm planning to add class and method coverage too.
It generates HTML reports with a summary, as well as marked-up source showing which lines are covered (green), which are not (red), and which lines had no code generated for them.
Update:
As of version 0.3 of Delphi Code Coverage you can generate XML reports compatible with the Hudson EMMA plugin to display code coverage trends within Hudson.
Update:
Version 0.5 brings bug fixes, increased configurability and cleaned-up reports.
Update:
Version 1.0 brings support for EMMA output, coverage of classes and methods, and coverage of DLLs and BPLs.
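To give a feel for how the tool is driven, an invocation could look something like the line below. This is illustrative only: the executable, map file and unit names are placeholders, the project has to be linked with a detailed MAP file, and the switch names are recalled from the project's documentation, so check the tool's own help output for your version.

CodeCoverage.exe -e MyDUnitTests.exe -m MyDUnitTests.map -u CustomerLogic OrderLogic -html -xml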
I don't know of any free tools. AQTime is almost the de facto standard for profiling Delphi. I haven't used it, but a quick search found Discover for Delphi, which is now open source but only does code coverage.
Either of these tools should give you an idea of how much code coverage your unit tests are getting.
Are you referring to code coverage from unit tests or stale code? Generally I think only testable code that has a failure should be covered with a unit test (yes I realize that may be starting a holy war, but that is where I stand). So that would be a pretty low percentage.
Now stale code on the other hand is a different story. Stale code is code that doesn't get used. You most likely don't need a tool to tell you this for a lot of your code, just look for the little Blue Dots after you compile in Delphi. Anything without a blue dot is stale. Generally if code is not being used then it should be removed. So that would be 100% code coverage.
There are other scenarios for stale code, like if you have special code to handle if the date ever lands on the 31st of February. The compiler doesn't know it can't happen, so it compiles it in and gives it a blue dot. Now you can write a unit test for that, and test it and it might work, but then you just wasted your time a second time (first for writing the code, second for testing it).
There are tools to track which code paths get used when the program runs, but that is only semi-reliable, since not all code paths will get used every time. Like the special code you have to handle leap years: it will only run every four years, so if you take it out your program will be broken every four years.
I guess I didn't really answer your question about DUnit and code coverage, but I think I may have left you with more questions than you started with. What kind of code coverage are you looking for?
UPDATE: If you are taking a TDD approach then no code is written until you write a test for it, so by nature you have 100% test coverage. Of course, just because each method is exercised by a test does not mean that its entire range of behaviors is exercised. SmartInspect provides a really easy way to measure which methods are called, along with timing, etc. It is a little less than AQTime, but not free. With some more work on your part you can add instrumentation to measure every code path (branches of "if" statements, etc.). Of course you can also just add your own logging to your methods to produce a coverage report, and that is free (well, except for your time, which is probably worth more than the tools). If you use JEDI Debug then you can get a call stack too.
TDD really cannot easily be applied retroactively to existing code without a lot of refactoring. Although the newer Delphi IDEs have the ability to generate unit test stubs for each public method, which then gives you 100% coverage of your public methods. What you put in those stubs determines how effective that coverage is.
I use Discover for Delphi and it does the job, for unit testing with DUnit and functional testing with TestComplete.
Discover can be configured to run from the command line for automation.
As in:
Discover.exe Project.dpr -s -c -m
Discover works great for me. It hardly slows down your application, unlike AQTime. This may not be a problem for you anyway, of course. I think the recent versions of AQTime perform better in this respect.
I've been using Discover for years; it worked excellently up to and including BDS 2006 (which was the last pre-XE* version of Delphi I used and still use), but in its current open-sourced state it's unclear how to make it work with the XE* versions of Delphi. A shame really, because I loved this tool, fast and convenient in almost every way.
So now I'm moving to delphi-code-coverage...
