Setting threshold for Coverage in Bazel Java Project - code-coverage

I am using the following command to run coverage for a complete Java project module.
bazel coverage ... --compilation_mode=dbg --subcommands --announce_rc --verbose_failures --jobs=auto --sandbox_debug --build_runfile_links --combined_report=lcov --coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main
Then I run the following command to get an HTML view. The HTML report is generated in the output directory we specify.
genhtml -o <output-directory-name> bazel-out/_coverage/_coverage_report.dat
I want to set an overall coverage threshold for a project module: an error or warning should be raised when coverage falls below this threshold. Is there some flag we can use to set this coverage threshold?
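In case there is no such flag, a workaround I am considering is to check the combined lcov report after the coverage run. A minimal sketch, assuming lcov is installed and the command is run from the workspace root (the 80% threshold is illustrative):

# fail when line coverage in the combined report is below 80%
# (lcov output may go to stdout or stderr depending on version, hence 2>&1)
lcov --summary bazel-out/_coverage/_coverage_report.dat 2>&1 \
  | awk '/lines\.+:/ { gsub(/%/, "", $2); if ($2 + 0 < 80) exit 1 }' \
  || { echo "ERROR: line coverage is below 80%" >&2; exit 1; }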

Related

How to generate a detailed report of functional coverage in Questasim?

How do I generate a detailed functional coverage report? I am using the following command to simulate my code:
vlog -64 -work work -vopt +notimingchecks +cover +fcover -f pcie_jammer.f
vsim -novopt -c <CODE SPECIFIC ARGS> -t ps work.tb_top work.glbl -vopt -do "set WildcardFilter None;coverage save -onexit -directive -cvg -codeAll pcie_cov_${1}_gen${speed}_X${width} ; add log -r /*;coverage report -file pcie_cov_${1}_gen${speed}_X${width}.txt -byfile -detail -noannotate -option -directive -cvg -details -verbose;coverage report -directive -cvg -details -verbose;run -all;exit" > transcript_${tname}_gen${speed}_X${width}.txt
vcover report -html pcie_cov_${1}_gen${speed}_X${width} -verbose
I am not able to see the details of the covergroup in the report.
After some research I was able to solve the above question. Please find the solution below.
To generate a detailed function coverage report:
1. First, compile and simulate your code using the script below:
vlog -work work -O0 +fcover +acc -f pcie_jammer.f
vsim -cvgperinstance -c <ARGUMENTS> work.tb_top work.glbl -do " coverage save -onexit <Name_of_File>.ucdb; run -all;exit"
Save the coverage results of the simulation in a UCDB file (refer to the Questa User Manual for details about UCDB files).
2. To get an HTML or text report, reload the generated UCDB file and use coverage report as follows:
vsim -cvgperinstance -viewcov merged.ucdb -do "coverage report -file final_report.txt -byfile -detail -noannotate -option -cvg"
One can also use the Questa GUI to generate the report.
This approach is also quite useful for merging the functional coverage reports of multiple testcases.
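For example, the UCDB files from several testcases could be merged into the merged.ucdb used above (a sketch; the input file names are illustrative):

# merge per-test UCDB files into a single database for reporting
vcover merge merged.ucdb test1.ucdb test2.ucdb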
After creating the .ucdb file, go to the console (cmd) and type the following commands:
vcover report -details -html result.ucdb
(this generates an HTML report with details.)
vcover report -details result.ucdb
(this prints a detailed text report.)

Using Bazel to generate coverage report

I am using the genhtml command to generate an HTML coverage report from the Bazel-generated coverage.dat file:
genhtml bazel-testlogs/path/to/TestTarget/coverage.dat --output-directory coverage
The problem with using genhtml is that I have to provide the paths to the coverage.dat files (which are generated in bazel-testlogs/..). Is it possible to fetch those coverage.dat files as an output of another rule?
I would like not to have to call the genhtml command directly, but have Bazel handle everything.
I was not able to find a way to get the coverage.dat files as an output of a Bazel rule. However, I was able to collect the locations of all the .dat files as srcs of a filegroup in the workspace root:
filegroup(
    name = "coverage_files",
    srcs = glob(["bazel-out/**/coverage.dat"]),
)
and then use that filegroup in a custom .bzl rule that wraps the genhtml command to generate the HTML coverage report (a sketch is shown below). So now I only have to call the
bazel coverage //path/... --instrumentation_filter=/path[/:]
command to generate the coverage.dat files, generate the HTML report, and zip it up. Thus, Bazel handles everything.
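A minimal sketch of such a wrapper, written here as a genrule rather than a full .bzl rule (it assumes genhtml and zip are available on the PATH; the target names are illustrative, not the exact rule from this answer):

# BUILD file next to the coverage_files filegroup
genrule(
    name = "coverage_report",
    srcs = [":coverage_files"],
    outs = ["coverage_report.zip"],
    cmd = "genhtml -o coverage_html $(SRCS) && zip -qr $@ coverage_html",
)

Running bazel build //:coverage_report after the coverage run would then produce the zipped HTML report in one step.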
Bazel added support for C++ coverage (though I couldn't find much documentation for it).
I was able to generate a combined coverage.dat file with
bazel coverage -s \
--instrument_test_targets \
--experimental_cc_coverage \
--combined_report=lcov \
--coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main \
//...
The coverage file gets added to bazel-out/_coverage/_coverage_report.dat
For a Java-based project we can get code coverage in the following way.
To get coverage for a complete module ->
Run the following command:
bazel coverage ... --compilation_mode=dbg --subcommands --announce_rc --verbose_failures --jobs=auto --sandbox_debug --build_runfile_links --combined_report=lcov --coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main
Then run the following command from the parent project directory to get an HTML view. The HTML report is generated in the output directory we specified; open index.html from there to see the coverage report.
genhtml -o <output-directory-name> bazel-out/_coverage/_coverage_report.dat
The bazel-out directory usually gets created in the project parent directory (i.e. where the Bazel WORKSPACE file is present).
To get coverage for a specific IT/Test in a module ->
Run the following command from the project/sub-project directory:
bazel coverage <class-name-of-Test-or-IT> --compilation_mode=dbg --subcommands --announce_rc --verbose_failures --jobs=auto --sandbox_debug --build_runfile_links --combined_report=lcov --coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main
Then run the following command from the parent project directory to get an HTML view. The HTML report is generated in the output directory we specified; open index.html from there to see the coverage report.
genhtml -o <output-directory-name> bazel-out/_coverage/_coverage_report.dat

Exhausted Virtual Memory Installing SyntaxNet Using Docker Toolbox

I exhausted my virtual memory when trying to install SyntaxNet from this Dockerfile using Docker Toolbox. I received this message while building from the Dockerfile:
ERROR: /root/.cache/bazel/_bazel_root/5b21cea144c0077ae150bf0330ff61a0/external/org_tensorflow/tensorflow/core/kernels/BUILD:1921:1: C++ compilation of rule '@org_tensorflow//tensorflow/core/kernels:svd_op' failed: gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wl,-z,-relro,-z,now -B/usr/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-canonical-system-headers ... (remaining 115 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
virtual memory exhausted: Cannot allocate memory
Building complete.
Elapsed time: 8548.364s, Critical Path: 8051.91s
I have a feeling this could be resolved by changing Bazel's default jobs limit with (for example) --jobs=1; however, I'm not sure where I would put that in the Dockerfile.
There are two possibilities. You could either modify the Dockerfile so that it creates a ~/.bazelrc containing the following text:
build --jobs=1
Note that this works, even though the Dockerfile runs bazel test (as opposed to bazel build), because build flags in the .bazelrc also apply to Bazel's test command.
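For example, a line like this could be added to the Dockerfile before the bazel invocation (assuming the build runs as root, so ~ is /root):

# write the flag into root's .bazelrc so every later bazel build/test picks it up
RUN echo "build --jobs=1" >> /root/.bazelrc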
The other possibility would be to modify the RUN command in the Dockerfile to include the --jobs=1 parameter, e.g. RUN [...] && bazel test --jobs=1 --genrule_strategy=standalone [...].
Bazel should then spawn no more than a single child process during the build. You can verify this by running "ps axuf" on your host and looking at the process tree of your container. If you modified the RUN cmd, you should also see the --jobs=1 parameter on Bazel's command line.

GCOV: why sample.gcda and sample.gcno may be different

At first I get the message sample.gcda: stamp mismatch with graph file, even though the order of compilation and running is correct.
hexdump -e '"%x\n"' -s8 -n4 sample.gcno -> aaa1aaaa
hexdump -e '"%x\n"' -s8 -n4 sample.gcda -> bbb2bbbb
The message "stamp mismatch with graph file" means that the graph file (.gcno) was compiled again after the binary was built.
If the compilation order is correct, check whether sample.cpp is compiled twice somewhere in your build rules.
For example, we might have something like this:
g++ ... sample.cpp -o sample
g++ ... -shared sample.cpp -o sample2.o
So one file is compiled twice. This causes the .gcno file to be rewritten with a new stamp that no longer matches the .gcda file.
Suppose you tested your product or application thoroughly and manually, spending a lot of effort on it, and your objective is to get a code coverage report using lcov and gcov, but you deleted the .gcno files by mistake. You can regenerate the .gcno files by recompiling the code, but they will be generated with a new stamp, gcov will report "stamp mismatch with graph file", and no coverage report will be generated. All your testing effort would be wasted.
There is a shortcut to still generate the code coverage report. This is just a workaround and should not be relied upon all the time; it is recommended to preserve the *.gcno files until your testing completes.
Note down your gcc version (gcc -v) and download its source code from one of the mirror sites, e.g.
ftp://gd.tuwien.ac.at/gnu/sourceware/gcc/releases/gcc-4.4.6/gcc-4.4.6.tar.bz2
After extracting the downloaded file, the folder structure will be as follows:
gcc-4.4.6
gcc-4.4.6/gcc
If you go directly inside gcc-4.4.6/gcc and try to run ./configure and make from there, you will encounter the problem below:
build/genmodes -h > tmp-modes.h
/bin/sh: build/genmodes: No such file or directory
The solution is to run ./configure and make from gcc-4.4.6; then no genmodes-related errors will be shown. This compiles all modules, including gcc. You may have to install the mpfr and gmp libraries, which gcc needs, if ./configure reports errors.
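For example (run from the directory where the tarball was extracted):

# configure and build from the top-level source directory, not from gcc-4.4.6/gcc
cd gcc-4.4.6
./configure
make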
Go to gcc-4.4.6/gcc/gcov.c, comment out the lines below, and then recompile with the above commands:
/* if (tag != bbg_stamp)
     {
       fnotice (stderr, "%s:stamp mismatch with graph file\n", da_file_name);
       goto cleanup;
     } */
An example path of the new gcov binary after compilation is gcc-4.4.6/host-x86_64-unknown-linux-gnu/gcc/gcov.
Place this binary in /usr/bin and regenerate the code coverage report with a command like the one in the example below:
lcov --capture --directory ./ --output-file coverage.info ; genhtml coverage.info --output-directory /var/www/html/coverage
Now you should not get the "stamp mismatch with graph file" error, and the code coverage report will be generated properly.

Rcov coverage changes drastically with -xrefs

My current Ruby on Rails project does testing via rcov (specifically, relevance rcov), and we have a pretty high standard (we fail the build if we have < 95% code coverage).
We use the following command to test this:
rcov_cmd = "rcov --rails --text-summary \
--include #{included_dirs} \
--exclude #{excluded_dirs} \
--aggregate #{coverage_dir}/coverage.data \
--output #{coverage_dir} \
Today I found some code that registers green (as having run) in the rcov reports. However, I can prove that this code isn't getting run (I raise an exception at the beginning of the function, and my unit tests still pass).
I did some research and found the --xrefs flag for rcov, which I thought would add all the callers for each line in the rcov reports.
I changed the rcov command to:
rcov_cmd = "rcov --rails --text-summary --xrefs \
--include #{included_dirs} \
--exclude #{excluded_dirs} \
--aggregate #{coverage_dir}/coverage.data \
--output #{coverage_dir} \
(notice the added --xrefs flag).
Instead of getting additional callsite information, my test coverage drops from 96% to 48%.
Does --xrefs change the kind of analysis rcov does? (I thought it would just gather callsite information.) How is this different from, or better than, the first command?
(I've seen the unit test coverage drop if there's a failing unit test, and I know that the coverage percentage can drop if there's an error in the run, but this run looks fine to me.)
From the rcov manual:
--[no-]callsites
Show callsites in generated XHTML report. (somewhat slower; disabled by default)
--[no-]xrefs
Generate fully cross-referenced report. (includes --callsites)
From the Rcov CallSiteAnalyzer class:
A CallSiteAnalyzer can be used to obtain information about:
* where a method is defined ("defsite")
* where a method was called from ("callsite")
With this analysis, rcov can provide more accurate coverage information, at the cost of longer execution.
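If only callsite information is needed, one thing to try is replacing --xrefs with --callsites in the original command (a sketch; the trailing test-file pattern is illustrative, and the effect on the reported percentage would need to be verified):

rcov_cmd = "rcov --rails --text-summary --callsites \
  --include #{included_dirs} \
  --exclude #{excluded_dirs} \
  --aggregate #{coverage_dir}/coverage.data \
  --output #{coverage_dir} \
  test/**/*_test.rb"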
