How to generate a detailed report of functional coverage in Questasim? - code-coverage

How do I generate a detailed report of functional coverage? I am using the following commands to simulate my code:
vlog -64 -work work -vopt +notimingchecks +cover +fcover -f pcie_jammer.f
vsim -novopt -c <CODE SPECIFIC ARGS> -t ps work.tb_top work.glbl -vopt -do "set WildcardFilter None; coverage save -onexit -directive -cvg -codeAll pcie_cov_${1}_gen${speed}_X${width}; add log -r /*; coverage report -file pcie_cov_${1}_gen${speed}_X${width}.txt -byfile -detail -noannotate -option -directive -cvg -details -verbose; coverage report -directive -cvg -details -verbose; run -all; exit" > transcript_${tname}_gen${speed}_X${width}.txt
vcover report -html pcie_cov_${1}_gen${speed}_X${width} -verbose
I am not able to see the details of the covergroup in the report.

After some research I was able to solve the question above. Please find the solution below.
To generate a detailed functional coverage report:
1. First compile and simulate your code using a script like the one below:
vlog -work work -O0 +fcover +acc -f pcie_jammer.f
vsim -cvgperinstance -c <ARGUMENTS> work.tb_top work.glbl -do " coverage save -onexit <Name_of_File>.ucdb; run -all;exit"
This saves the coverage results of the simulation in a UCDB file (refer to the Questa User Manual for details about UCDB files).
2. To get an HTML or text report, reload the generated UCDB file and use the coverage report command as follows:
vsim -cvgperinstance -viewcov merged.ucdb -do "coverage report -file final_report.txt -byfile -detail -noannotate -option -cvg"
One can also use the Questa GUI to generate the report.
This approach is particularly useful for merging the functional coverage reports of multiple test cases.
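For example, assuming two test runs saved their coverage as test1.ucdb and test2.ucdb (hypothetical file names), the UCDB files can be combined with vcover merge before generating the merged report as shown above:
vcover merge merged.ucdb test1.ucdb test2.ucdb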

After creating the .ucdb file, go to the console (cmd) and type the following commands:
vcover report -details -html result.ucdb
(this generates a detailed HTML report)
vcover report -details result.ucdb
(this generates a detailed plain-text report)

Related

Setting threshold for Coverage in Bazel Java Project

I am using the following command to run coverage for a complete Java project module:
bazel coverage ... --compilation_mode=dbg --subcommands --announce_rc --verbose_failures --jobs=auto --sandbox_debug --build_runfile_links --combined_report=lcov --coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main
Then I run the following command to get an HTML view. The HTML report is generated in the output directory we specify.
genhtml -o <output-directory-name> bazel-out/_coverage/_coverage_report.dat
I want to set an overall coverage threshold for the project module: an error or warning should be raised when coverage falls below this threshold. Is there some flag we can use to set it?
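No such flag is mentioned here; one workaround is a small wrapper script that parses the lcov summary of the combined report and fails when line coverage drops below the target. A minimal sketch, assuming the report path from above and an illustrative 95% threshold:
#!/bin/bash
# Fail the build if line coverage is below THRESHOLD (illustrative value).
THRESHOLD=95
REPORT=bazel-out/_coverage/_coverage_report.dat

# "lcov --summary" prints a line such as "  lines......: 81.5% (163 of 200 lines)"
pct=$(lcov --summary "$REPORT" 2>&1 | awk -F'[:%]' '/lines/ {gsub(/ /, "", $2); print $2}')

if awk -v p="$pct" -v t="$THRESHOLD" 'BEGIN {exit !(p < t)}'; then
    echo "ERROR: line coverage ${pct}% is below threshold ${THRESHOLD}%" >&2
    exit 1
fi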

Using Bazel to generate coverage report

I am using the genhtml command to generate an HTML coverage report from the Bazel-generated coverage.dat file:
genhtml bazel-testlogs/path/to/TestTarget/coverage.dat --output-directory coverage
The problem with using genhtml is that I have to provide the paths to the coverage.dat files (which are generated in bazel-testlogs/..). Is it possible to fetch those coverage.dat files as the output of another rule?
I would like not to have to call the genhtml command directly, but to have Bazel handle everything.
I was not able to find a way to get the coverage.dat files as the output of a Bazel rule. However, I was able to wrap the locations of all the .dat files as the srcs of a filegroup in the workspace root directory:
filegroup(
    name = "coverage_files",
    srcs = glob(["bazel-out/**/coverage.dat"]),
)
and then use that filegroup in a custom .bzl rule that wraps the genhtml command to generate the HTML coverage report (a sketch follows below). So now I only have to call the
bazel coverage //path/... --instrumentation_filter=/path[/:]
command to generate the coverage.dat files, generate the HTML report, and zip it up. Thus, Bazel handles everything.
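A minimal sketch of such a wrapper as a genrule (the rule and output names are illustrative, and genhtml and zip are assumed to be available on the build machine's PATH):
genrule(
    name = "coverage_report",
    srcs = [":coverage_files"],
    outs = ["coverage_report.zip"],
    cmd = "genhtml $(SRCS) --output-directory coverage_html && zip -qr $@ coverage_html",
)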
Bazel added support for C++ coverage (though I couldn't find much documentation for it).
I was able to generate a combined coverage.dat file with
bazel coverage -s \
--instrument_test_targets \
--experimental_cc_coverage \
--combined_report=lcov \
--coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main \
//...
The coverage file gets added to bazel-out/_coverage/_coverage_report.dat
For a Java-based project we can get code coverage in the following way.
To get coverage for a complete module:
Run the following command for the complete project module:
bazel coverage ... --compilation_mode=dbg --subcommands --announce_rc --verbose_failures --jobs=auto --sandbox_debug --build_runfile_links --combined_report=lcov --coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main
Then run the following command from the parent project directory to get an HTML view. The HTML report is generated in the output directory we specify; open index.html from there to see the coverage report.
genhtml -o <output-directory-name> bazel-out/_coverage/_coverage_report.dat
The bazel-out directory usually gets created in the project parent directory (e.g. where the Bazel WORKSPACE file is present).
To get coverage for a specific IT/Test in a module:
Run the following command from the project/sub-project directory:
bazel coverage <class-name-of-Test-or-IT> --compilation_mode=dbg --subcommands --announce_rc --verbose_failures --jobs=auto --sandbox_debug --build_runfile_links --combined_report=lcov --coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main
Then, as before, run the following command from the parent project directory to get an HTML view; open index.html from the output directory to see the coverage report.
genhtml -o <output-directory-name> bazel-out/_coverage/_coverage_report.dat

Interpreting Fortify results file (.fpr) through command line

As part of automating the process of running secure code analysis, I have a Jenkins job which uses the sourceanalyzer command-line tool to generate an .fpr results file. At the moment I open this results file in the Audit Workbench application to view the results, check whether any new issues were introduced, etc., and generate a report from there in PDF/XML format.
Does anyone know whether it is possible to invoke Audit Workbench through the command line to generate a report on the issues, which we could then drive from a Jenkins script and mail out? Looking online, the command-line usage seems to stop at the FPR generation stage.
Thanks in advance!
There is a command-line utility to generate a report from the FPR file.
Currently there are two report generators: Legacy and BIRT. The BIRT report engine was introduced into Audit Workbench with version 4.40.
Here is an example using the BIRT report engine to generate a DISA STIG report:
BIRTReportGenerator -template "DISA STIG" -source HelloWorld_second.fpr \
    -output BirtReport.pdf -format PDF -showSuppressed --Version "DISA STIG 3.9" \
    -UseFortifyPriorityOrder
Using the legacy one is a little more involved. The command is:
ReportGenerator -format pdf -f LegacyReport.pdf -source HelloWorld_second.fpr \
    -template DisaStig3.10.xml -showSuppressed -showHidden
You can either use one of the predefined template reports located in the <SCA Install Dir>/Core/config/reports directory, or generate one using the Report Wizard and save the template, which gets stored in the C:\Users\<USER>\AppData\Local\Fortify\config\AWB-XX.XX\reports\ directory on Windows.
On Linux/Mac, look in the configuration file <SCA Install Dir>/Core/config/fortify.properties for the com.fortify.WorkingDirectory property; this is where the reports will be stored.
@SBurris,
If you don't want to show suppressed/hidden issues, is it just -hideSuppressed and -hideHidden?
Also, is there a way to add custom filters to hide things like "<none>" entries from the STIG/SANS/OWASP categories, as you can in the AWB GUI?
Basically, I need a command (or commands) to merge two FPRs and then compare them based on what is newly found in the scanned code versus the old FPR.
Merge should be:
FPRUtility -merge -project <newest_scan.fpr> -source <previous_scan.fpr> -f <BUILDXX_MergedWith_BUILDXY.fpr>
The custom filter I need after the merge is:
"[OWASP Top 10 2013]:!<none> OR [SANS Top 25 2011]:!<none> OR [STIG 3.9]:!<none> AND [Detected On]:!/^/"
Where the Detected On field is a custom tag that I need to carry through from the previous FPR file into the newly merged one.
AND THEN output the report from that newly merged FPR in PDF and XML format to a location/filename I specify. Something along the lines of:
~AWB_Installation_Dir/bin/ReportGenerator -format pdf -f [BUILDXX_MergedWith_BUILDXY].pdf -source output.fpr \
    -template DisaStig3.10.xml -hideSuppressed -hideHidden
Obviously this can be a multitude of commands, as long as we can get the results back to Bamboo. Any help would be greatly appreciated. Thanks.
FPRUtility interprets the space-separated conditions in the -information -search -query ... parameter by applying the boolean AND operator. To obtain a union of two conditions A || B, I figured I could instead intersect the negations of complementary conditions, !C && !D (where A || B || C || D always holds true). I.e., to find all high and critical issues, I use
FORTIFY_ROOT\jre\bin\java -d64 -Xmx4096M -jar FORTIFY_ROOT\Core\lib\exe\fpr-utility-exe.jar -project APP_VER_DATE.fpr -information -search -query "[OWASP Top 10 2017]:A [fortify priority order]:!low [fortify priority order]:!medium" -categoryIssueCounts -listIssues > issues.txt
In the case of an audit, I found I needed the older report-generation utility to include suppressed issues (and their comments):
sed -e 's/\(IssueListing limit=\)"[^"]\+"/\1"-1"/' -i "FORTIFY_ROOT/Core/config/reports/DeveloperWorkbook.xml"
cmd /c call ReportGenerator -template DeveloperWorkbookAll.xml -format pdf -source APP_VER_DATE.fpr -showSuppressed -f "APP_VER_DATE_with_suppressed.pdf"

GCOV: why sample.gcda and sample.gcno may be different

At first I get the message sample.gcda: stamp mismatch with graph file, even though the order of compilation and running was respected:
hexdump -e '"%x\n"' -s8 -n4 sample.gcno -> aaa1aaaa
hexdump -e '"%x\n"' -s8 -n4 sample.gcda -> bbb2bbbb
"stamp mismatch with graph file" means that the graph file (.gcno) was compiled again after the binaries were built.
If the compilation order is correct, check whether sample.cpp is compiled twice somewhere in the build rules.
For example, suppose we have something like this:
g++ ... sample.cpp -o sample
g++ ... -shared sample.cpp -o sample2.o
Here one file is compiled twice, so the .gcno file is updated with a new timestamp that no longer matches the .gcda file.
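A quick way to spot such duplicates is to search the build rules or build log for repeated compilations of the same source file (the file names below are just the ones from this example):
grep -n "sample.cpp" Makefile build.log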
Suppose you have tested your product or application thoroughly and manually, spending a lot of effort on it, and your objective is to get a code coverage report using lcov and gcov, but you deleted the .gcno files by mistake. You can regenerate the .gcno files by recompiling the code, but they will carry a new timestamp, so gcov reports "stamp mismatch with graph file" and no coverage report is generated, wasting all your testing effort.
There is a shortcut to still generate the code coverage report. This is just a workaround and should not be relied upon all the time; it is recommended to preserve the *.gcno files until your testing completes.
Note down your gcc version (gcc -v) and download its source code from one of the mirror sites,
e.g. ftp://gd.tuwien.ac.at/gnu/sourceware/gcc/releases/gcc-4.4.6/gcc-4.4.6.tar.bz2
After extracting the downloaded file, the folder structure will be as follows:
gcc-4.4.6
gcc-4.4.6/gcc
If you go directly inside gcc-4.4.6/gcc and try to run ./configure and compile (make) from there, you will encounter the following problem:
build/genmodes -h > tmp-modes.h
/bin/sh: build/genmodes: No such file or directory
The solution is to run ./configure and make from gcc-4.4.6; then no errors related to genmodes will be shown. This compiles all modules, including gcc. You may have to install the mpfr and gmp packages, which gcc needs, if ./configure reports an error.
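A sketch of the whole sequence under these assumptions (version 4.4.6, archive already downloaded to the current directory):
tar xjf gcc-4.4.6.tar.bz2
cd gcc-4.4.6
./configure
make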
Then go to gcc-4.4.6/gcc/gcov.c, comment out the lines below, and recompile with the above command:
/* if (tag != bbg_stamp)
     {
       fnotice (stderr, "%s:stamp mismatch with graph file\n", da_file_name);
       goto cleanup;
     } */
An example path of the new gcov binary after compilation is gcc-4.4.6/host-x86_64-unknown-linux-gnu/gcc/gcov.
Place this binary in /usr/bin and regenerate the code coverage report with a command like the following:
lcov --capture --directory ./ --output-file coverage.info ; genhtml coverage.info --output-directory /var/www/html/coverage
Now you should no longer get the "stamp mismatch with graph file" error, and the code coverage report will be generated properly.

Rcov coverage changes drastically with -xrefs

My current Ruby on Rails project does testing via rcov (specifically, relevance rcov), and we hold ourselves to a pretty high standard (we fail the build if we have < 95% code coverage).
We use the following command to test this:
rcov_cmd = "rcov --rails --text-summary \
--include #{included_dirs} \
--exclude #{excluded_dirs} \
--aggregate #{coverage_dir}/coverage.data \
--output #{coverage_dir} \
Today I found some code that registers green (as having run) in the rcov reports. However, I can prove that this code isn't getting run (I raise an exception at the beginning of the function, and my unit tests pass).
I did some research and found the --xrefs flag for rcov, which I thought would add all the callers for each line to the rcov reports.
I changed the rcov command to:
rcov_cmd = "rcov --rails --text-summary --xrefs \
--include #{included_dirs} \
--exclude #{excluded_dirs} \
--aggregate #{coverage_dir}/coverage.data \
--output #{coverage_dir} \
(notice the added --xrefs flag).
Instead of getting additional callsite information, my test coverage went from 96% to 48%.
Does --xrefs change the kind of analysis rcov does? (I thought it would just gather callsite information.) How is this different from, or better than, the first command?
(I've seen unit test coverage drop when there's a failing unit test, and I know the coverage percentage can drop if there's an error in the run, but this run looks good to me.)
From the rcov manual:
--[no-]callsites
Show callsites in generated XHTML report. (somewhat slower; disabled by default)
--[no-]xrefs
Generate fully cross-referenced report. (includes --callsites)
From the Rcov CallSiteAnalyzer class documentation:
A CallSiteAnalyzer can be used to obtain information about:
* where a method is defined ("defsite")
* where a method was called from ("callsite")
With this analysis, rcov can provide more accurate coverage information, at the cost of longer execution time.
