How to include test case descriptions in lcov/genhtml code coverage output

I'm using lcov to generate code coverage reports for a C code base. I would like to integrate test descriptions into the final output (using lcov's gendesc utility).
However, I have no clue how to do that, and documentation on gendesc seems rather sparse (as far as good old Google has been able to tell me).
The gendesc info at LTP describes how to create the input test case description files (as expected by genhtml). And the genhtml info provides the --show-descriptions and --description-file options for inputting such test case description files.
However, I don't know how to reference the test cases so that they get included in the final report. genhtml sees them as unused test cases and thus keeps them out of the generated HTML output. I can use --keep-descriptions, but that doesn't tell me which test cases were run (obviously, because I do not know how to make the reference from code to test description).
So, how do we tell lcov/genhtml which tests were run in the final output? Any ideas?

To associate a test case name with coverage data, specify that name while collecting coverage data using lcov's --test-name option:
lcov --capture --directory project-dir --output-file coverage.info --test-name "test01"
Then continue with the steps that you already mentioned, that is, create a test case description file "tests.txt" (the test name starts at the beginning of a line; description lines must be indented):
test01
    Some test
Convert it into the format expected by genhtml:
gendesc tests.txt --output-filename tests.desc
Finally specify the descriptions file to genhtml:
genhtml coverage.info --output-directory out --description-file tests.desc --show-details
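Putting it together, a minimal end-to-end sketch (the test runner commands ./run_test01 and ./run_test02 are hypothetical placeholders for however you execute your tests):
# reset counters, run a test, capture its data under its own name
lcov --zerocounters --directory project-dir
./run_test01
lcov --capture --directory project-dir --output-file test01.info --test-name "test01"
lcov --zerocounters --directory project-dir
./run_test02
lcov --capture --directory project-dir --output-file test02.info --test-name "test02"
# combine the per-test tracefiles, then generate the report with descriptions
lcov --add-tracefile test01.info --add-tracefile test02.info --output-file coverage.info
gendesc tests.txt --output-filename tests.desc
genhtml coverage.info --output-directory out --description-file tests.desc --show-details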

Related

gcovr generates empty report with --add-tracefile --html-details

I was trying to generate a code coverage report from trace1.json. The trace1.json was not generated by gcovr; it was generated using Lauterbach software from real hardware trace data. According to the Lauterbach spec, it can export coverage information about functions and lines to a file in a JSON format compatible with gcov. So after I got the JSON file, I tried to use gcovr to generate the code coverage report:
gcovr --add-tracefile result.json --html-details result.html --verbose
I got an empty report, and the gcovr log shows "Gathered coveraged data for 0 files".
So I'm wondering: after I get the JSON file, do I still need to compile the source with --coverage? Even if I compile the source with coverage flags, the executable runs on separate real hardware, which will not be able to collect any .gcda files.
As of gcovr 5.2, there is no support for the gcov JSON format. Gcovr's internal JSON format is closely based on the gcov JSON format, but differs in some details. Importantly, gcovr validates that the input JSON document has a matching gcovr/format_version. This means gcovr should die with an error in your scenario, and shouldn't even get to the “Gathered coveraged data for 0 files” message.
This suggests that your result.json is an empty JSON report generated by a previous gcovr --json result.json run, and NOT the trace1.json generated by your Lauterbach tool!
You may be able to write a script to modify the JSON file to conform to gcovr's expected format, but you're on your own there. The JSON format is documented in the Gcovr User Guide, though there's a pending update.
Assuming things work as expected, the GCC --coverage flag is unnecessary. While --coverage is necessary for producing .gcda and .gcno files that are needed by gcov to work, gcovr's --add-tracefile mode only consumes the JSON file (and perhaps the source code files) and no other data.
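For reference, the intended round-trip with gcovr's own JSON format looks roughly like this (file names are illustrative):
# on a machine where the .gcda/.gcno files exist, export gcovr's JSON format
gcovr --json -o trace1.json
# later (or elsewhere), consume it without needing gcov data again
gcovr --add-tracefile trace1.json --html-details result.html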

How to show all gtest cases with bazel without the "test" cmd

I want to query all gtest cases with bazel.
The "--gtest_filter" parameter can only be used with the "bazel test" cmd,
and I have tried "bazel query //xxx:all", but that only shows the test targets defined in the BUILD file. I want to get the list of test cases from the xxx.cc files.
This is not a job that bazel query can do. Query operates on the graph structure of targets. A fundamental design decision of Bazel is that this graph can be computed by looking only at BUILD files and the .bzl files they depend on. In particular, parsing source files is not allowed.
(The argument to --test_filter is simply passed through to the test runner; Bazel does not know what it represents.)
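As a workaround outside of bazel query, you could run the test binary itself and let googletest list its own cases (a sketch, assuming //xxx:my_test is the cc_test target in question):
bazel run //xxx:my_test -- --gtest_list_tests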
If you use CLion with the Bazel plugin, you get a view listing the individual googletest tests.
This works even with Catch2 (though for Catch2 the view is not as nice). I guess there's some IDE magic at work here; nevertheless, it gives you what you want. I assume you could also come up with some kind of Bazel Aspect that produces this information for you.
I also tested this with Lavender (with minor modifications) and Visual Studio, which likewise shows a list of all tests in its test overview.

Ranorex custom re-run component

Since Ranorex does not provide re-run functionality out of the box, I have to write my own. Before I start, I just want to ask for advice from people who've done it, or about possible existing solutions on the market.
The goal:
At the end of the run, re-run the failed test cases.
Requirements:
The number of re-run iterations should be configurable.
If data binding is used, only the data-binding iterations that failed should be included.
I would use Ranorex's command line argument possibilities to achieve this. The main thing would be to structure the suite so that each test case can be run separately.
During the test run I would log the failed test cases to a text file, a database, or anything else you can read the data back from later (you can even parse them from the XML result if you want to).
From that data you then pass the test case name as a command line argument while running the suite again:
testSuite.exe /testcase:TestCaseName
or
testSuite.exe /tc:TestCaseName
The full command line args reference can be found here:
https://www.ranorex.com/help/latest/lesson-4-ranorex-test-suite
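A minimal sketch of the resulting re-run pass (shell syntax for illustration; it assumes the failed test case names were logged one per line to a hypothetical failed_tests.txt):
# re-run each previously failed test case by name
while read -r tc; do
    ./testSuite.exe /tc:"$tc"
done < failed_tests.txt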
Possible solutions:
1a. Based on the report XML: parse the report and collect info about all failed test cases.
Cons:
Parsing will be tricky.
or:
1b. Or create a list of failed test cases at runtime: if a failure occurs during tear-down, add that iteration to the re-run list (could be a file or a DB table).
Using for example:
string testCaseName = TestCaseNode.Current.Name;
int testCaseIndex = TestSuite.Current.GetTestCase(testCaseName).DataContext.CurrentRowIndex;
then:
2a. Based on the list, run the executable with parameters, looping through each record.
like this:
testSuite.exe /tc:testCaseName /tcdr:testCaseIndex
or:
2b. Or generate a new test suite file (.rxtst) and recompile the solution to create an updated executable.
and last part:
3a. Finally, repeat the whole process from a CI script that runs the executable, checking that failedTestCases == 0 || currentRerunIterations < expectedRerunIterations (see the sketch after this list).
or:
3b. Or wrap the whole test suite in a re-run test module, do the same failedTestCases == 0 || currentRerunIterations < expectedRerunIterations check there, and run Ranorex from that test module.
Please let me know what you think about it.
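A rough sketch of the outer loop from 3a, extending the shell example above (failed_tests.txt and the /tc argument are the same assumptions as before):
max_reruns=3   # the configurable iteration limit
i=0
# keep going while there are failed test cases and re-run iterations left
while [ -s failed_tests.txt ] && [ "$i" -lt "$max_reruns" ]; do
    cp failed_tests.txt rerun.txt
    : > failed_tests.txt   # the next pass appends any new failures here
    while read -r tc; do ./testSuite.exe /tc:"$tc"; done < rerun.txt
    i=$((i+1))
done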

gcov freezes when given the -a option?

When I run gcov . there are no problems. However, when I run gcov -a ., gcov freezes. The last few lines of the output are:
File '/usr/include/boost/archive/detail/iserializer.hpp'
Lines executed:78.18% of 55
/usr/include/boost/archive/detail/iserializer.hpp:creating 'iserializer.hpp.gcov'
File '/usr/include/boost/serialization/extended_type_info_typeid.hpp'
Lines executed:40.74% of 27
/usr/include/boost/serialization/extended_type_info_typeid.hpp:creating 'extended_type_info_typeid.hpp.gcov'
Do you know why that is happening? The reason I need "-a" is that when I use lcov, it passes that option to gcov. I could hack geninfo to ignore the option, but I prefer not to since I'll eventually run lcov on a public system.
Thank you for any input!
I also have code that uses boost::serialization. The lcov process isn't /frozen/; it just takes a very, very long time to run. I have had it complete successfully after several hours, and I finally do get a nice lcov report.
It would be lovely to be able to exclude processing of the boost serialization code when running lcov -c, but I have not been able to figure out exactly how to do that yet. (Of course, I /want/ to get coverage over the code that uses boost serialization, just not the boost headers themselves.) Even putting // LCOV_EXCL_START and // LCOV_EXCL_STOP around the majority of the serialization code doesn't work, as I think those exclusion markers are only used when genhtml is called, not by lcov -c.
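For what it's worth, you can at least keep the boost headers out of the final report by filtering the captured tracefile (a sketch; the filtering happens after capture, so it won't speed up the slow gcov -a step itself):
lcov --capture --directory . --output-file coverage.info
# drop everything captured from system headers, including boost
lcov --remove coverage.info '/usr/include/*' --output-file coverage-filtered.info
genhtml coverage-filtered.info --output-directory out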

PartCover and multiple TargetArgs

I need to build a coverage report from multiple test sources, but if I set multiple DLLs (two of them test the same class) in TargetArgs, the coverage data is overwritten with the results of the last DLL.
How can I add the results from multiple DLLs testing the same class?
Here is an example of my PartCover config file:
<PartCoverSettings>
  <Target>c:\NUnit\nunit-console.exe</Target>
  <TargetWorkDir>c:\MyProject\Testing</TargetWorkDir>
  <TargetArgs>ApplicationServices.Test.dll Integration.Test.dll</TargetArgs>
  <Rule>+[MyProject.*]*</Rule>
  <Rule>-[*.Test]*</Rule>
</PartCoverSettings>
Thanks in advance.
That should actually work. I do the same: run multiple test assemblies and get combined coverage output. I do it from the command line rather than a config file, though.
Have you double-checked that your rules are correct?
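For comparison, a command-line equivalent of the config above might look like this (a sketch; double-check the flag names against your PartCover version):
PartCover.exe --target "c:\NUnit\nunit-console.exe" --target-work-dir "c:\MyProject\Testing" --target-args "ApplicationServices.Test.dll Integration.Test.dll" --include [MyProject.*]* --exclude [*.Test]* --output coverage.xml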
