Gcov: consider filename and function name as input when computing code coverage

Currently in our project, our client's team and our team work in parallel, and building the code requires both teams' modules. We use gcov for code coverage, but the generated report covers both teams' code. Is there any way to generate gcov coverage only for our module? (Our module doesn't build on its own.)
Is there any way to run it only for our files, or is it possible to run it based on function name?

Yes, there are two main ways to control the coverage reports that are generated:
(1) telling your report generator (e.g. lcov, gcovr) to include/exclude certain file patterns;
(2) compiling only your source files with coverage enabled.
(1) is much easier than (2). For example, if you are using lcov, consider the --extract option:
Use this switch if you want to extract coverage data for only a particular set of files from a tracefile. Additional command line parameters will be interpreted as shell wildcard patterns (note that they may need to be escaped accordingly to prevent the shell from expanding them first). Every file entry in tracefile which matches at least one of those patterns will be extracted.
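For example, a minimal sketch of that workflow, assuming your module's sources live under src/our-module/ (the path is a placeholder):
# Capture coverage data for the whole build, then keep only our module's files.
lcov --capture --directory . --output-file all.info
lcov --extract all.info '*/src/our-module/*' --output-file module.info
genhtml module.info --output-directory coverage-html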
(2) could be difficult (or impossible) depending on your build system. To do so, you will need to:
Compile only your module with --coverage (equivalent to -ftest-coverage -fprofile-arcs for GNU compilers)
Link your library with -lgcov (or pass --coverage at link time)
This will produce the *.gcno 'notes' files, which tell coverage generators about your source files, only for the files compiled with the --coverage flag. Then, when you run your test suite, *.gcda files should be generated only for that same set of files, and the final coverage report/HTML generator will produce a report for your module alone.
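For reference, outside of any build system the raw compiler invocations would look roughly like this (the file names match the CMake sketch below; the simple direct link is just for illustration):
g++ --coverage -c covered.cxx -o covered.o             # emits covered.gcno
g++ -c not-covered.cxx -o not-covered.o                # left uninstrumented
g++ -c main.cxx -o main.o
g++ main.o covered.o not-covered.o --coverage -o app   # --coverage at link time pulls in -lgcov
./app                                                  # running the tests writes covered.gcda only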
To illustrate, here's a simple CMake file that only generates coverage information for covered.cxx. Notice the extra target_compile_options and target_link_libraries for the covered library.
add_executable(${PROJECT_NAME} main.cxx)
add_library(not-covered SHARED not-covered.cxx)
add_library(covered SHARED covered.cxx)
# Instrument only the 'covered' library: compiling with --coverage emits the
# .gcno notes files, and linking with --coverage pulls in the gcov runtime.
target_compile_options(covered PRIVATE --coverage)
target_link_libraries(covered PRIVATE --coverage)
target_link_libraries(${PROJECT_NAME} covered not-covered)
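Similarly, if you use gcovr for option (1), its filter option restricts the report to matching paths. A sketch, again assuming the placeholder path src/our-module/:
gcovr --root . --filter 'src/our-module/' --html --html-details -o coverage.html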

Related

How to enable specific coverage, like only branch coverage, using clang source-based coverage

I am using clang source-based coverage for one of my projects, which generates metrics for line, function, and region coverage.
The generated profile files are huge; is there a way to optimize them?
Also, is there a compiler option to limit instrumentation based on coverage type?
I am using the -fprofile-instr-generate -fcoverage-mapping options for compilation and linking.
This creates very large profile files, and merging them takes so much time and memory that the merge runs out of memory.
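For reference, a sketch of the workflow described above (the binary name app and the file names are placeholders); llvm-profdata's -sparse flag at least shrinks the merged output by dropping functions with zero counts:
clang++ -fprofile-instr-generate -fcoverage-mapping test.cpp -o app
LLVM_PROFILE_FILE="app-%p.profraw" ./app                   # %p keeps per-process profiles separate
llvm-profdata merge -sparse app-*.profraw -o app.profdata
llvm-cov report ./app -instr-profile=app.profdata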

Can a Bazel package depend on a source file in another package?

A few years ago I wrote a set of wrappers for Bazel that enabled me to use it to build FPGA code. The FPGA bit is only relevant because the full clean build takes many CPU days so I really care about caching and minimizing rebuilds.
Using Bazel v0.28 I never found a way to have my Bazel package depend on a single source file from somewhere else in the git repo. It felt like this wasn't something Bazel was designed for.
We want to do this because we have a library of VHDL source files that are parameterized and the parameters are set in the instantiating VHDL source. (VHDL generics). If we declare this library as a Bazel package in its own right then a change to one library file would rebuild everything (at huge time cost) when in practice only a couple of steps might need to be rebuilt.
I worked around this with a Python script that copies all the individual source files into a subdirectory and then generates the BUILD file to reference these copies. The resulting build process is:
call python preparation script
bazel build //:allfpgas
call python result extractor
This is clearly quite ugly but the benefits were huge so we live with it.
Now we want to leverage Bazel to build our Java, C++, etc., so I wanted to revisit this and try to make everything work with Bazel alone.
In the latest Bazel, is there a way to have a BUILD package depend on individual source files outside of the package directory? If Bazel can't do this, would Buck, Pants, or please.build work better for our use case?
The Bazel rules for most languages support doing something like this already. For example, the Python rules bundle source files from multiple packages together, and the C++ rules manage include files from other packages. Somehow the rule has to pass the source files around in providers, so that another rule can generate actions which use them. Hard to be more specific without knowing which rules you're using.
If you just want to copy the files, you can do that in bazel with a genrule. In the package with the source file:
exports_files(["templated1.vhd", "templated2.vhd"])
In the package that uses it:
genrule(
    name = "copy_templates",
    srcs = [
        "//somewhere:templated1.vhd",
        "//somewhere:templated2.vhd",
    ],
    outs = ["templated1.vhd", "templated2.vhd"],
    cmd = "cp $(SRCS) $(RULEDIR)",
)
some_library(
    name = "templates",  # a name attribute is required; "templates" is a placeholder
    srcs = ["templated1.vhd", "templated2.vhd", "other.vhd"],
)
If you want to deduplicate that across multiple packages that use it, put the filenames in a list and write a macro to create the genrule.

Bazel Starlark: how can I generate a BUILD file procedurally?

After downloading an archive through http_archive, I'd like to run a script to generate a BUILD file from the folder structure and the CMake files in it (I currently do that by hand, and it is easy enough that it could be scripted). I don't find anything on how to open, read, and write files in the Starlark documentation, but since http_archive itself is loaded from a .bzl file (I haven't found the source of that file yet though...) and generates BUILD files (by unpacking them from archives), I guess it must be possible to write a wrapper for http_archive that also generates the BUILD file?
This is a perfect use case for a custom repository rule. That lets you run arbitrary commands to generate the files for the repository, along with some helpers for common operations like downloading a file over HTTP using the repository cache (if configured). A repository rule is conceptually similar to a normal rule, but with much less infrastructure, because it runs during the loading phase, when most of the Bazel infrastructure doesn't apply yet.
The starlark implementation of http_archive is in http.bzl. The core of it is a single call to ctx.download_and_extract. Your custom rule should do that too. http_archive then calls workspace_and_buildfile and patch from util.bzl, which do what they sound like. Instead of workspace_and_buildfile, you should call ctx.execute to run your command to generate the BUILD file. You could call patch if you want, or skip that functionality if you're not going to use it.
The repository_ctx page in the documentation is the top-level reference for everything your repository rule's implementation function can do, if you want to extend it further.
When using http_archive, you can use the build_file argument to create a BUILD file. To generate it dynamically, I think you can use the patch_cmds argument to run external commands.

How to create one batch file to scan all code languages supported by Fortify?

Currently, I am scanning the Java code in a repository using a Fortify batch file, and scanning the C/C++ code in the same repository from the command line with the help of the Visual Studio integration.
Is it possible to scan both the Java and the C/C++ code in a repository using a single batch file?
Also, are there command-line options to scan both languages at once?
Yes, but you probably shouldn't.
One scan (one FPR file) should represent one codebase. Unless you have one application that's part Java and part C/C++, you want to produce 2 separate FPRs, one for Java and one for C/C++.
Instead, take your 2 scan scripts, write another very short script that calls them, and voila, a script that scans both your applications.
If you do have one application with both languages, here's what you do:
Fortify first translates your source code into its intermediate language (NST files), then it scans those NST files. The translation is the sourceanalyzer command where you point it to your code, and the scan step is the sourceanalyzer command with -scan in it. It uses the build ID to keep track of those intermediate files (that's the argument after -b).
To scan the whole codebase together, first translate one set of files, then translate the other set of files (using the exact same build ID), and then do the scan step (same build ID again); it will scan all of the code together. But only do this if it really is one application.
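As a sketch, assuming a placeholder build ID MyApp and placeholder paths:
sourceanalyzer -b MyApp -cp "lib/*.jar" "src/**/*.java"   # translate the Java sources
sourceanalyzer -b MyApp gcc -c src/native/*.c             # translate the C sources by wrapping the build command
sourceanalyzer -b MyApp -scan -f MyApp.fpr                # one scan over everything under the same build ID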

Using gcovr to show zero coverage

We are trying to use gcovr to generate a coverage report for our C++ project in Jenkins.
I was able to get it working, but I'm stuck on one problem: gcovr doesn't show any statistics for files with zero coverage. They have only .gcno files, no .gcda files are produced, and gcovr doesn't show them in the results.
So I have 80% coverage for the whole project, but only 2 tests were written, and it's actually 80% coverage only for the source files involved in the tests.
For a large project such a statistic is of course useless.
I found this changeset, https://software.sandia.gov/trac/fast/changeset/2766, as a solution for this ticket, https://software.sandia.gov/trac/fast/ticket/3887, but it doesn't seem to be working.
Did I miss something?
P.S. I'm using gcovr 3.1-prerelease.
