I am using gcov to find code coverage of my application while running functional test cases.
Issue: .gcda files are not being created for some classes, even though those classes are executed
Steps:
Compile the application code with the gcov flags and verify that .gcno files are generated for all classes
Create a binary image of the compiled code and deploy the server using that binary
Use cross profiling while deploying the server. The home directory for my source code was "/proj/QQ/scm/tools/jenkins/db_ws/FunctionTestCoverage/ccode/" and I used the following gcov parameters for cross profiling:
GCOV_PREFIX=/automation/testCoverage
GCOV_PREFIX_STRIP=7
Run the functional test cases to exercise the application code
.gcda files get created for only a few classes; they are not created for all of the classes that are executed.
The .gcda files are generated under the directory structure "/automation/testCoverage/ccode".
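For reference, this is how I understand the prefix mapping (a minimal sketch; the test-driver name is hypothetical):
# Compile-time location of a counter file:
#   /proj/QQ/scm/tools/jenkins/db_ws/FunctionTestCoverage/ccode/foo.gcda
# GCOV_PREFIX_STRIP=7 drops the first 7 path components (proj ... FunctionTestCoverage),
# leaving ccode/foo.gcda, and GCOV_PREFIX prepends the new root:
#   /automation/testCoverage/ccode/foo.gcda
export GCOV_PREFIX=/automation/testCoverage
export GCOV_PREFIX_STRIP=7
./run_functional_tests.sh   # hypothetical driver; .gcda files land under $GCOV_PREFIX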
I believe this is not an issue with cross profiling: if I have 5 directories in parallel, .gcda files are generated for only 2 of them.
What could be the root cause of .gcda files being generated for only a few files, and how can I resolve this?
I am trying to get bazel coverage //my:test to output coverage data files, building with custom C rules and using a custom clang toolchain.
For Bazel's native C rules this is a solved problem: I can produce coverage output from the native cc_library and cc_test rules by running the following command with these environment variables set:
export BAZEL_USE_LLVM_NATIVE_COVERAGE=1
export GCOV=/path/to/llvm-profdata
export BAZEL_LLVM_COV=/path/to/llvm-cov
export CC=/path/to/clang
bazel coverage //my:test --experimental_generate_llvm_lcov --combined_report=lcov
The test target outputs a coverage.dat file, and there is a combined .dat report file too. I have noticed that the cc_library target returns an InstrumentedFilesInfo provider whose "metadata files" attribute is populated with the .gcno files produced during compilation.
I am using the cc_common Starlark library to build custom C rules, and my compile action is set up via cc_common.compile(). While *.gcno files are outputs that Bazel expects from this action [0], the compile function does not return any *.gcno File objects in the compilation context or compilation outputs, so using them as inputs to another action, returning them in a provider, or adding them to the target's runfiles is not possible.
I understand that the .dat files are produced from the *.gcno compile outputs and the *.gcda outputs of the sandboxed test execution, which are combined in the collect_cc_coverage.sh script. Something in the plumbing of my rule implementation is missing that is not fixed by returning a provider constructed with coverage_common.instrumented_files_info(), and declaring extra outputs of cc_common.compile() is currently not possible.
[0]: When running under coverage rather than test, the toolchain feature is enabled where the coverage flags are added to the compile; the .gcno files are output and appear in bazel-out.
My questions:
Has anyone had any experience implementing code coverage for custom C rules?
How do I get my test executable to take in the .gcno files, generate the .gcda files, and combine the two using my toolchain to produce the .dat files that are expected with the native C rules? (This question does not require .gcno; solutions involving profraw/profdata are equally valid.)
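For context, this is the out-of-Bazel pipeline I would expect such a solution to reproduce (a minimal sketch; the file names are illustrative):
# Compile the test with LLVM source-based coverage (the profraw/profdata flow):
clang -fprofile-instr-generate -fcoverage-mapping my_test.c -o my_test
# Each run writes a raw profile to the path given in LLVM_PROFILE_FILE:
LLVM_PROFILE_FILE=my_test.profraw ./my_test
# Merge raw profiles into an indexed profile:
llvm-profdata merge -sparse my_test.profraw -o my_test.profdata
# Export an lcov-format tracefile, the format the coverage.dat files use:
llvm-cov export -format=lcov ./my_test -instr-profile=my_test.profdata > coverage.dat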
Good morning,
I've got a problem when using GCOV within my working environment.
Gcov works very well when I run some test cases (up to 1000), but no .gcda files are generated when I run more tests.
This is how I use it:
I compile my code with gcov flags correctly set
I boot a test server containing the gcov libs, with the variables GCOV_PREFIX and GCOV_PREFIX_STRIP set
I launch my regression on this server
Once finished, I stop the server, and only then are all the .gcda files generated
I use lcov and genhtml to generate the test coverage data and the report.
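For that last step, the commands I run are roughly these (a sketch with the standard options, not my exact invocation):
# Capture the counters from the .gcno/.gcda files into a tracefile:
lcov --capture --directory . --output-file coverage.info
# Render an HTML report from the tracefile:
genhtml coverage.info --output-directory coverage_report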
This works very well when I have few tests to launch (up to 1000 cases, I guess), but if I run more tests, I don't get any .gcda files anymore...
I could not find any documentation on this part: is there a buffer somewhere in which all the .gcda data is stored, waiting for the server to be released?
Is it possible to configure this behaviour?
Is there any documentation on this subject somewhere?
Thanks a lot for your help.
Regards,
Thomas
As part of our efforts to create a Bazel-Maven transition interop tool (one that creates Maven-sized jars from more granular Bazel jars), we have written an aspect that runs on a bazel build of the entire repo and writes important information to txt file outputs (e.g. jar file paths, compile-deps targets, runtime-deps targets, etc.).
We ran across an issue where the repo's code was changed such that some of the txt files were no longer written. But the old txt files from previous runs (before the code change) remained!
Is there a way to know that these txt files are no longer relevant?
You should be able to run with --build_event_json_file=file.json and use it to locate the generated artifacts. For example, we use it on ci.bazel.io to locate the actual test.xml files that were generated: https://github.com/bazelbuild/continuous-integration/blob/09975cbb487a84a62ca1e43aa43e7c6fe078f058/jenkins/lib/src/build/bazel/ci/BazelUtils.groovy#L218
The definition of the protocol can be found in build_event_stream.proto
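A minimal sketch of that approach (the jq filter reflects my reading of the JSON field names; check build_event_stream.proto for the authoritative schema):
# Emit the Build Event Protocol as newline-delimited JSON alongside the build:
bazel build //... --build_event_json_file=bep.json
# Each line of bep.json is one build event; named-set events list the files produced:
jq -r 'select(.id.namedSet != null) | .namedSetOfFiles.files[].uri' bep.json
Any txt file on disk that does not show up in the current run's events is presumably stale.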
When I build my project for coverage testing with "--coverage -fprofile-arcs -ftest-coverage" and then move the build and source to another user's directory to execute the tests, I get many problems such as "xxx/cc/cc/getopt_log.c: cannot open source file".
The details are as below:
Processing cs/CMakeFiles/cfa/__/src/base/fault_injection.c.gcda
/home/cov/build/xfcq/src/base/fault_injection.c:cannot open source file
The path "/home/cov/build/xfcq/src/base/fault_injection.c" is the path from the build environment. How can I change it to a relative path or to a path I specify?
I tried using GCOV_PREFIX and GCOV_PREFIX_STRIP, but these did not work well for me.
I also tried adding the -b option to lcov, but that did not work well for me either, e.g.:
lcov --gcov-tool=/bin/gcov -d . -b xx/src -t "xfcq" -o test_cov.info
Do you have any ideas on how to resolve this?
Well, for the gcov coverage process you should never move the files after building your project; instead, you should modify your automated build scripts to build everything in the desired location.
When you compile your project with the specified options, a *.gcno file is generated for each source file; it essentially holds the flow-graph-like details of that source file.
The object files are instrumented in such a way that they trigger functions (added by the compiler to generate coverage info) whenever a statement is executed, producing *.gcda files with all the execution information.
Note: I can see that you have specified three options in the question (--coverage -fprofile-arcs -ftest-coverage), which is wrong, as --coverage is a replacement for the other two.
If you specify only --coverage, it will take care of both compilation and linking (just remember to use it in both places).
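A minimal sketch of the full cycle (the file names are illustrative):
gcc --coverage -c foo.c -o foo.o   # compile: emits foo.gcno next to the object file
gcc --coverage foo.o -o foo        # link: pulls in the gcov runtime
./foo                              # run: writes foo.gcda at process exit
gcov foo.c                         # combines foo.gcno + foo.gcda into an annotated foo.c.gcov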
Apologies if this question has been asked before. I have found variants on this theme but nothing that seems to fit our particular configuration.
We have developed a custom GulpJS task which parses a .json file located inside our folder assets/javascript. This json file contains an array of relative paths to javascript files (both our own and libraries) in a specific order for minification. They are then concatenated and output to the folder assets/javascript/build. The javascript source files are in the project, but the minified and concatenated versions of the scripts, in fact the entire build folder itself, are not included in the Visual Studio project.
Ideally, I would like to have a step in the MSDeploy configuration which would copy all the files in the javascript build folder to the destination. Otherwise, I could potentially include another step in TeamCity to do so.
Has anyone successfully instituted a similar build configuration and could share some insight? I tried using the MSBuild copy task, but that didn't seem to copy the files to the output location. One option I am considering is including the minified scripts in the project file, but this might trip up other developers who don't have Gulp running in their development environments (hilarious as that might be).