PartCover and multiple TargetArgs

I need to load coverage reports from multiple test sources, but if I list multiple DLLs in TargetArgs (two of them test the same class), the coverage data is overwritten with the results of the last DLL.
How can I combine the results from multiple DLLs testing the same class?
Here is an example of my PartCover config file:
<PartCoverSettings>
  <Target>c:\NUnit\nunit-console.exe</Target>
  <TargetWorkDir>c:\MyProject\Testing</TargetWorkDir>
  <TargetArgs>ApplicationServices.Test.dll Integration.Test.dll</TargetArgs>
  <Rule>+[MyProject.*]*</Rule>
  <Rule>-[*.Test]*</Rule>
</PartCoverSettings>
Thanks In Advance

That should actually work. I do the same: run multiple test assemblies and get combined coverage output. I do it from the command line rather than a config file, though.
Have you double checked that your rules are correct?
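For comparison, the command-line equivalent of that config would look roughly like this (same paths and rules as above; the exact flag syntax may vary between PartCover versions):
PartCover.exe --target "c:\NUnit\nunit-console.exe" --target-work-dir "c:\MyProject\Testing" --target-args "ApplicationServices.Test.dll Integration.Test.dll" --include [MyProject.*]* --exclude [*.Test]* --output coverage.xml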

Related

gcov is creating coverage data for gtest sources and unit tests. How can I avoid this?

I am working on creating a Jenkins pipeline for unit testing, probably with GTest.
My plan is to use the following tools:
GTest for unit testing, gcov for generating .gcda and .gcno files, and gcovr for XML or HTML output of the unit-testing results.
It has been working well so far with help from the internet, and Stack Overflow in particular.
But I am struggling with three issues:
1. gcov creates .gcda and .gcno files for the gtest sources and my unit tests, gcovr picks them up, and they show up in the HTML files. How can I avoid this? I only want my production code in the HTML files.
2. I can only see code coverage for template classes if gcov generates .gcda and .gcno files for my unit tests, so I need a simple idea for 1). Maybe I can use an exclude flag in gcovr?
3. Unused functions in template classes (inline functions) are not covered. Code coverage is always 100%, although I tried different flags, and nothing helped:
-fprofile-abs-path --coverage -fno-inline -fno-inline-small-functions -fno-default-inline -fkeep-inline-functions
I added a picture to show what I am talking about. The UnitTests and GTest coverage results should not appear in the gcovr HTML...
You can filter out unwanted coverage data, but you can't create data that doesn't exist.
1. gcov creates .gcda and .gcno files for the gtest sources and my unit tests, gcovr picks them up, and they show up in the HTML files. How can I avoid this? I only want my production code in the HTML files.
Use gcovr --exclude GoogleTest/ --exclude UnitTests/
Gcovr has a per-file filtering system that allows you to specify which source code files to include/exclude. For a file to be included in the coverage report,
any --filter pattern must match, and
no --exclude pattern must match.
Or phrased in reverse: a file is excluded if it doesn't match any --filter or if it matches any --exclude pattern.
If you don't provide an explicit --filter, then the default filter is the --root directory, which in turn defaults to the current working directory.
These patterns are regexes. Usually, these are used to match paths relative to the current working directory. For example, you can limit the reports to a src/ directory with gcovr --filter src/. Or you can exclude the GoogleTest/ directory with gcovr --exclude GoogleTest/.
Gcovr also has a way to filter gcda/gcno files (search_paths and --gcov-filter), but that is mostly useful as a performance optimization.
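Putting the per-file filters together, an invocation for this layout might look like the following (the directory names src/, GoogleTest/, and UnitTests/ are taken from the question and are assumptions about your tree):
gcovr --root . --filter 'src/' --exclude 'GoogleTest/' --exclude 'UnitTests/' --html-details -o coverage.html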
2. I can only see code coverage for template classes if gcov generates .gcda and .gcno files for my unit tests, so I need a simple idea for 1). Maybe I can use an exclude flag in gcovr?
This is by design. As explained above, you can solve this via gcovr's exclude flag.
You get one gcda/gcno file per compilation unit. Header files are included into multiple compilation units, so their coverage information is essentially split across all compilation units that include them.
So, if you want coverage for code in header files, and you include these headers into your unit tests, then gcovr must also process the gcda/gcno files from those unit tests.
3. Unused functions in template classes (inline functions) are not covered. Code coverage is always 100%, although I tried different flags, and nothing helped: -fprofile-abs-path --coverage -fno-inline -fno-inline-small-functions -fno-default-inline -fkeep-inline-functions
The gcov coverage data model works on an assembly-code level. Counters are inserted by the compiler itself, but only for functions for which the compiler actually generates machine code. Thus, as far as gcov is concerned, inline functions, optimized-out code, and non-instantiated templates simply do not exist.
This is quite annoying, but it's potentially difficult to work around.
The cleanest workaround is to make sure that all functions for which you want coverage data are referenced in your unit tests. It is not necessary to actually invoke a function; merely referencing it should be sufficient. For example, I'd write a function to ignore() arbitrary values despite optimizations, then:
ignore(&some_inline_function);
Possible implementation: template<class T> void ignore(T const& t) { volatile T sinkhole = t; }
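As a self-contained sketch of that trick (the names answer, twice, and force_instantiation are made up for illustration):
// Discard a value in a way the optimizer cannot drop: the write to a
// volatile must be kept, so the argument stays referenced.
template <class T>
void ignore(T const& t) { volatile T sinkhole = t; }

// An inline function that no test actually calls.
inline int answer() { return 42; }

// A template that would otherwise never be instantiated.
template <class T>
T twice(T x) { return x + x; }

// Somewhere in the unit tests: taking the address forces the compiler to
// emit (and instrument) the code without ever invoking it.
void force_instantiation() {
    ignore(&answer);
    ignore(&twice<int>);  // address-of also forces this instantiation
}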
Your suggested options like -fno-inline do not work because the code for these functions isn't generated in the first place.
With GCC, and when using C++ (but not C), -fkeep-inline-functions should work, but only for non-templated inline functions.
If a non-templated inline function is only used within one file and isn't provided in a header to multiple files, it should instead be declared static (in C) or placed in an anonymous namespace (in C++11 or later), so that -Wunused-function or -Wall notifies you if it isn't referenced.
Templates are trickier in general. Each distinct instantiation of a template results in separate functions. Gcovr does aggregate coverage data across them, but for the template to appear in the coverage data at all, it must be instantiated at least once. You will have to do this manually.
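One way to do that manually is an explicit instantiation definition in a test source file (the class name Stack here is hypothetical):
template class Stack<int>;  // instantiates every member of Stack<T> for T = int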

How to show all gtest cases with Bazel without the "test" cmd

I want to query all gtest cases with Bazel.
The --gtest_filter parameter can only be used with the bazel test command,
and I tried bazel query //xxx:all, but that only shows the test targets defined in the BUILD file; I want to get the list of test cases from the xxx.cc files.
This is not a job that bazel query can do. Query operates on the graph structure of targets. A fundamental design decision of Bazel is that this graph can be computed by looking only at BUILD files and the .bzl files they depend on. In particular, parsing source files is not allowed.
(The argument to --test_filter is simply passed through to the test runner; Bazel does not know what it represents.)
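Since the filter is simply handed over to the test binary, one pragmatic way to list the cases defined in the .cc files is to ask the binary itself via googletest's listing flag (the target label //xxx:some_test is a placeholder):
bazel run //xxx:some_test -- --gtest_list_tests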
If you use CLion with the Bazel plugin, you get a test-tree view of all googletest tests.
This works even with Catch2 (though for Catch2 the view is not as nice). I guess there is some IDE magic at work here; nevertheless, it gives you what you want. I assume you could also come up with some kind of Bazel Aspect that produces this information for you.
I also tested this with Lavender (with minor modifications) and with Visual Studio, which likewise shows a list of all tests in the test overview.

Determine if files are part of any package

Given a list of files, e.g. foo/src/main.cpp, foo/src/bar.cpp, foo/README.md, is it possible to determine which of those files are part of a Bazel package?
In my example, the output would be foo/src/main.cpp, foo/src/bar.cpp, since the README.md is not part of the build.
One way to do this would be to call bazel query on each file and see if it produces any output, but that is quite inefficient, so I was wondering if there is an easier way.
Background: I am trying to determine whether changes in a set of files have an impact on a target, and I want to use bazel query somepath(//some/target, set($FILES)) for that, but this will fail if any of the files in $FILES is not part of a BUILD file.
How about flipping it around and querying for all the source files of the target with:
bazel query 'kind("source file", deps(//some:target))'
and then checking whether the result contains any of the files in your set?
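A rough sketch of that check (files.txt is a placeholder for your list of workspace-relative paths, one per line; the sed rewrite turns labels like //foo/src:main.cpp into foo/src/main.cpp):
bazel query 'kind("source file", deps(//some:target))' | sed 's|^//||; s|:|/|' | sort > target_srcs.txt
sort files.txt | comm -12 - target_srcs.txt   # lines common to both = files that affect the target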

JMeter doesn't save response data or headers

I'm building some simple load testing for my API, and to make sure everything is on the up and up I'd also like to review the response headers and data. But when I run my test from the command line and then re-open the GUI, add a View Results Tree listener, and load the created file, the response headers and response data are empty.
I entered the following values into user.properties (I also tried uncommenting those values in jmeter.properties and changing them there; same result):
jmeter.save.saveservice.output_format=csv (tried xml, omitting it, jtl)
jmeter.save.saveservice.data_type=false
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.response_data.on_error=true
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=false
jmeter.save.saveservice.assertions=false
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.hostname=true
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.sample_count=true
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.assertion_results_failure_message=true
jmeter.save.saveservice.timestamp_format=HH:mm:ss
jmeter.save.saveservice.default_delimiter=;
jmeter.save.saveservice.print_field_names=true
But still no luck when opening the result file. I tried naming the file after the -l flag results.csv, .jtl, even .xml, but none of them show me the headers and data.
I'm running it locally on Mac OS X 10.10 using the following command; the JMeter version is 2.12:
java -jar ApacheJMeter.jar -n -t /Users/[username]/Documents/API_test.jmx -l results_15.jtl
I don't know whether it's not even saving that data, whether the listeners can't read it, or whether I've been cursed, but any help is appreciated.
It works fine if I add a listener and run it using the GUI, but if I try to run my larger tests that way, well, things don't end well for anyone.
So my question is:
How do I save the response header and data to a file when using the command line, and how do I then view said file in jmeter?
Add a Simple Data Writer (under Listeners) and output to a file (NB: a different file than your log). Under the 'Configure' button there are all sorts of options for what to save. One of the checkboxes is Save Response Header.
This file can get huge if you're saving a bunch of things for every request; one strategy is to check everything but only save it for errors. But you can do whatever works for you.
You can also turn on "Functional Test Mode", which produces a large file but contains pretty much anything you might need to debug your test.
Beware, this can create a very large JTL file, so don't forget to turn it off for your large test runs! See JMeter Maven mojo throws IllegalArgumentException with large JTL file.
Alternatively, use a View Results Tree listener in the GUI for a small sample of the requests and check the request/response in the GUI (including headers) to debug or check your test.
Add the lines below to your user.properties file:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
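Since the question also asks about response headers, the matching property from the same save-service family would presumably be needed as well:
jmeter.save.saveservice.responseHeaders=true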
Restart the command prompt; the properties are only read when JMeter starts.

How to include test case descriptions in lcov/genhtml code coverage output

I'm using lcov to generate code coverage reports for a C code base. I would like to integrate test descriptions into the final output (using lcov's gendesc utility).
However, I have no clue how to do that, and documentation on gendesc seems rather sparse (as far as good old Google has been able to tell me).
The gendesc page at LTP describes how to create the input test case description files (as expected by genhtml), and the genhtml page provides --show-descriptions and --description-file for passing in such test case description files.
However, I don't know how to reference the test cases so that they get included in the final report. genhtml sees them as unused test cases and thus keeps them out of the generated HTML output. I can use --keep-descriptions, but that doesn't tell me which test cases were run (obviously, because I do not know how to make the reference from the coverage data to the test descriptions).
So, how do we tell lcov/genhtml which tests were run in the final output? Any ideas?
To associate a test case name with coverage data, specify that name while collecting coverage data using lcov's --test-name option:
lcov --capture --directory project-dir --output-file coverage.info --test-name "test01"
Then continue with the steps you already mentioned, that is, create a test case description file "tests.txt" (the description line must be indented):
test01
        Some test
Convert it into the format expected by genhtml:
gendesc tests.txt --output-filename tests.desc
Finally specify the descriptions file to genhtml:
genhtml coverage.info --output-directory out --description-file tests.desc --show-details
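If several tests are run, each can be captured under its own --test-name and the tracefiles merged before calling genhtml (a sketch; it assumes the captures were written to test01.info and test02.info rather than a single coverage.info):
lcov --capture --directory project-dir --output-file test02.info --test-name "test02"
lcov --add-tracefile test01.info --add-tracefile test02.info --output-file coverage.info
Each test name then gets its own entry in the description file.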
