I am trying to generate a coverage report for my project and running into a problem.
I understand that to get the coverage info, I need .gcno, .gcda and source files.
My current project directory structure is:
/root/proj/src -> top-level Makefile and main.c
/root/proj/src/module1
/root/proj/src/module2
..... -> the module directories contain all the .c/.h files and a makefile
/root/proj/build/obj -> contains all the .o, .gcno, .gcda files after compilation
/root/proj/build/exe -> contains the executable
(copying minimal lines below to show the problem)
cd /root/proj/build/obj
When I run
lcov -b ../../src/ --directory . --capture --output-file app.info
Processing module1.gcda
module1.c:cannot open source file
......
Finished .info-file creation
Then:
genhtml --legend -o ./latest_code_cov/ app.info
Reading data file app.info
Found 5 entries.
Found common filename prefix "/root/proj/src"
Writing .css and .png files.
Generating output.
Processing file src/module1.c
genhtml: ERROR: cannot read /root/proj/src/module1.c
bash-4.1$
1) Do I need to change my makefile to dump .gcno/.gcda files in the same folders as the source?
2) Is there a way (some flag) to set the source file path in the .gcno/.gcda files?
Any suggestions?
gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
lcov: LCOV version 1.13
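One diagnostic that may help here (my own sketch, not part of the original question) is to dump the source path that the compiler recorded in a .gcno, since that is the path lcov/genhtml will try to open:
# sketch: print the source-file strings embedded in module1.gcno
cd /root/proj/build/obj
strings module1.gcno | grep '\.c'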
I'm using Jenkins to generate an automatic build every 15 minutes for a test project.
Compiling from Delphi itself does not give a problem, but when I run the build through Jenkins it gives the following error:
C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\Bin\CodeGear.Delphi.Targets(136,3): error : Project1.dpr(1) Fatal: F1027 Unit not found: 'System.pas' or binary equivalents (.dcu)
I have already tried running it from the Windows command line this way:
cd "C:\Users\carlos.santos\Desktop\teste"
call build.bat
And build.bat looks like this:
call "C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\bin\rsvars.bat"
msbuild "Project1.dproj" /p:config=Release /p:Warn=0
I have already tried running it through the RAD Studio plugin for Jenkins and that did not work either.
I have seen in some forums that having many paths, or not having the default Delphi paths in the Library path, can be a problem, but I have tested this too and nothing changed.
Here are the paths I have:
$(BDS)\lib;$(BDS)\Imports;$(BDS)\Lib\Indy10;$(BDSCOMMONDIR)\Dcp;$(BDS)\include;$(BDS)\RaveReports\Lib;$(BDSCOMMONDIR)\Bpl;$(BDS)\lib\debug;$(BDS)\bin;$(DELPHI)\Bin;
The content of rsvars.bat is:
@SET BDS=C:\Program Files (x86)\Embarcadero\RAD Studio\7.0
@SET BDSCOMMONDIR=C:\Users\Public\Documents\RAD Studio\7.0
@SET FrameworkDir=C:\Windows\Microsoft.NET\Framework\v2.0.50727
@SET FrameworkVersion=v2.0.50727
@SET FrameworkSDKDir=
@SET PATH=%FrameworkDir%;%FrameworkSDKDir%;%PATH%
@SET LANGDIR=EN
I have third-party components installed in my Delphi, but I do not know whether that influences anything. I will be grateful to anyone who can help.
The differences between the properties when the build is executed by Jenkins and when it is executed from cmd are these:
It was resolved by passing the _EnvLibraryPath parameter with my Delphi library path on the msbuild call, for example:
call "C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\bin\rsvars.bat"
msbuild "C:\Users\carlos.santos\Desktop\teste\Project1.dproj" /p:_EnvLibraryPath="C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\lib\EN;C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\lib"
There may be another cause for that same problem.
I have just experienced it and solved it.
In my case, it was related to the fact that I had another configuration besides Release and Debug, namely ReleaseConsumer.
My build command included specifying the configuration:
DoJenkinsTasks "US2D_GUI" "Config=ReleaseConsumer;Platform=Win64"
Now, I got the same error as mentioned above.
The cause is that it can't find a ReleaseConsumer folder with .dcu files.
I solved it by creating a new folder releaseConsumer under c:\Program Files (x86)\Embarcadero\Studio\19.0\lib\win64\ and copying all the .dcu files from release into it.
Run msbuild with the /v:diagnostic option, and you will see that none of the default environment paths are set:
Initial Properties:
___ResourcePath =
__ObjectPath =
__ResourcePath =
_EnvDCPOutput =
_EnvLibraryPath =
_EnvNamespace =
_EnvPackageOutput =
_ObjectPath =
_OutputDRCFiles = false
_ProjectFiles = @(DelphiCompile)
_ResolveGENTLBBindingsTarget =
This is because you are missing the %APPDATA%\CodeGear directory on your build server (under the user that is running the builds).
In particular, you need the CodeGear\BDS\7.0\EnvOptions.proj settings file, which contains all the environment variables.
Put that in place and you'll see that msbuild's diagnostics show properties like this one:
Initial Properties:
___ResourcePath = C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\lib;\Imports;C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\Imports;C:\Users\Public\Documents\RAD Studio\7.0\Dcp;C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\include;C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\lib;C:\Program Files (x86)\Embarcadero\RAD Studio\7.0\Lib\Indy10;
I am using the genhtml command to generate an HTML coverage report from a Bazel-generated coverage.dat file:
genhtml bazel-testlogs/path/to/TestTarget/coverage.dat --output-directory coverage
The problem with using genhtml is that I have to provide the paths to the coverage.dat files (which are generated in bazel-testlogs/..). Is it possible to fetch those coverage.dat files as an output from another rule?
I would like to not have to call the genhtml command directly, but have Bazel handle everything.
I was not able to find a way to get the coverage.dat files as an output of a Bazel rule. However, I was able to wrap the locations of all the .dat files as the srcs of a filegroup in the WORKSPACE directory:
filegroup(
    name = "coverage_files",
    srcs = glob(["bazel-out/**/coverage.dat"]),
)
and then use that filegroup in a custom .bzl rule that wraps the genhtml command to generate the HTML coverage report. So now I only have to call
bazel coverage //path/... --instrumentation_filter=/path[/:]
command to generate the coverage.dat files, generate the HTML report, and zip it up. Thus, Bazel handles everything.
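The custom .bzl rule itself isn't shown here; as a rough sketch, the command it wraps amounts to something like the following (the output paths and the zip step are my assumptions, not from the original):
# sketch: collect every coverage.dat produced by `bazel coverage`,
# render the HTML report, and zip it up
find bazel-out -name coverage.dat -print0 | xargs -0 genhtml --output-directory coverage_html
zip -r coverage_html.zip coverage_html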
Bazel added support for C++ coverage (though I couldn't find much documentation for it).
I was able to generate a combined coverage.dat file with
bazel coverage -s \
--instrument_test_targets \
--experimental_cc_coverage \
--combined_report=lcov \
--coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main \
//...
The coverage file gets added to bazel-out/_coverage/_coverage_report.dat
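From there the report can be rendered like any other lcov tracefile, for example (the output directory name is arbitrary):
genhtml -o coverage-html bazel-out/_coverage/_coverage_report.dat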
For a Java-based project we can get code coverage in the following way.
To get coverage for a complete module:
Run the following command to run coverage for the complete project module:
bazel coverage ... --compilation_mode=dbg --subcommands --announce_rc --verbose_failures --jobs=auto --sandbox_debug --build_runfile_links --combined_report=lcov --coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main
Then run the following command from the parent project directory to get the HTML view. The HTML report is generated in the output directory you specify; open index.html from it to see the coverage report.
genhtml -o <output-directory-name> bazel-out/_coverage/_coverage_report.dat
The bazel-out directory usually gets created in the project parent directory (i.e. where the Bazel WORKSPACE file is present).
To get coverage for a specific IT/test in a module:
Run the following command from the project/sub-project directory:
bazel coverage <class-name-of-Test-or-IT> --compilation_mode=dbg --subcommands --announce_rc --verbose_failures --jobs=auto --sandbox_debug --build_runfile_links --combined_report=lcov --coverage_report_generator=@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main
Then run the following command from the parent project directory to get the HTML view. The HTML report is generated in the output directory you specify; open index.html from it to see the coverage report.
genhtml -o <output-directory-name> bazel-out/_coverage/_coverage_report.dat
mkdir -p /tmp/build &&
cd /tmp/build &&
mkdir -p /tmp/src &&
echo "int main(){return 0;}" > /tmp/src/prog.c &&
gcc --coverage -o prog /tmp/src/prog.c &&
./prog &&
gcovr -v -r .
will output an empty report.
Scanning directory . for gcda/gcno files...
Found 2 files (and will process 1)
Processing file: /tmp/build/prog.gcda
Running gcov: 'gcov /tmp/build/prog.gcda --branch-counts --branch-probabilities --preserve-paths --object-directory /tmp/build' in '/tmp/build'
Finding source file corresponding to a gcov data file
currdir /tmp/build
gcov_fname #tmp#src#prog.c.gcov
[' -', ' 0', 'Source', '/tmp/src/prog.c\n']
source_fname /tmp/build/prog.gcda
root /tmp/build
fname /tmp/src/prog.c
Parsing coverage data for file /tmp/src/prog.c
Filtering coverage data for file /tmp/src/prog.c
Gathered coveraged data for 0 files
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
------------------------------------------------------------------------------
TOTAL 0 0 --%
------------------------------------------------------------------------------
However, if I manually run
gcov /tmp/build/prog.gcda --branch-counts --branch-probabilities --preserve-paths --object-directory /tmp/build
I get correct results:
File '/tmp/src/prog.c'
Lines executed:100.00% of 1
No branches
No calls
Creating '#tmp#src#prog.c.gcov'
It seems that gcovr did not extract the coverage from the otherwise correct gcov output. This only happens if the source file is outside the current directory (same as the build directory, the output directory, and the run directory), and gcc is called with an absolute path to the source file.
How can I fix this?
Edit
Fixed in upstream gcovr for relative paths, but looks like a bug for absolute paths.
See https://github.com/gcovr/gcovr/issues/169.
What I understood from your code above is that you built everything and ran the program, but you are still inside the build directory where the object files reside.
So, what you need to understand is:
gcovr -v -r <path>
The -r flag takes the root directory, meaning the parent directory inside which the source and object directories reside, so that gcovr can trace them both and generate the coverage data and whatever else you want it to generate.
Try doing that and it will work.
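For instance (a hedged example using the layout from the question, with sources in /tmp/src and the build directory /tmp/build, so the common parent /tmp is the root):
# run from the build directory, pointing -r at the common parent directory
cd /tmp/build
gcovr -v -r ..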
For your understanding:
The .gcno file that gets generated after compilation is just flowchart-like data for that particular source file.
Later, when you execute the program, a .gcda file gets generated for each source file. This file contains the real coverage data, but for gcovr all three files are necessary (.gcno, .gcda, source file).
Hope it helped. :)
update:
Back with the workaround.
You can supply your coverage data location as a pure arg (no option) and point the root to your sources.
gcovr .../path/To/GCDA -r .../path/to/src/ [rest desired flags]
This will solve your problem for sure.
Worked for me in covering my projects.
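Applied to the layout from the question, that workaround would look something like this (my concrete reading of it, not from the original answer):
# .gcda/.gcno files live in /tmp/build, sources in /tmp/src
gcovr /tmp/build -r /tmp/src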
Gcovr only generates reports for source files within your project. This is intended to exclude coverage from library headers etc.
The question is, which files are in your project? This is determined by the -r root path.
If you are in /tmp/build and root is . aka /tmp/build and the source file is /tmp/src/prog.c, then that source file is clearly outside of your project. In the verbose output, gcovr will report Filtering coverage data for file /tmp/src/prog.c.
If you are in /tmp/build and root is .. aka /tmp and the source file is /tmp/src/prog.c, then that source file is within the project.
If you are in /tmp/build and root is . aka /tmp/build and the source file is ../src/prog.c, then gcovr seems to do something questionable: It joins the file name with the current directory and checks that. So we actually see /tmp/build/../src/prog.c. As far as gcovr is concerned, that's within your project. It seems this behaviour is necessary to include code that is symlinked into a project.
You can disable this “is the source within the project?” filter by providing your own, better filter. For example, you can ask gcovr to only report coverage for sources under /tmp/src:
gcovr -r . -f /tmp/src
Hi, I am using gcov (GCC) 4.1.2 20080704 (Red Hat 4.1.2-51).
When I run gcov I am getting errors like "cannot open graph file". (My .gcno and .gcda files are created with names like abc.pic.gcda and abc.pic.gcno.) But when I rename these files by removing "pic" (abc.gcda and abc.gcno), gcov works fine. My question is: how do I make gcov read files that are named like abc.pic.gcda and abc.pic.gcno?
It looks like you have a file like abc.pic.cpp, and when you compile it:
$ g++ --coverage abc.pic.cpp
$ ls
abc.pic.cpp abc.pic.gcno a.out
The file abc.pic.gcno is created, as you can see. Next, run the binary:
$ ./a.out
$ ls
abc.pic.cpp abc.pic.gcda abc.pic.gcno a.out
And run gcov:
$ gcov abc.pic
abc.gcno:cannot open graph file
$ gcov abc
abc.gcno:cannot open graph file
As you can see, there is an error. To make it work you should provide the full filename, like:
$ gcov abc.pic.cpp
File 'abc.pic.cpp'
Lines executed:100.00% of 6
abc.pic.cpp:creating 'abc.pic.cpp.gcov'
First, I get the message sample.gcda:stamp mismatch with graph file, even though the order of compilation and running is observed:
hexdump -e '"%x\n"' -s8 -n4 sample.gcno -> aaa1aaaa
hexdump -e '"%x\n"' -s8 -n4 sample.gcda -> bbb2bbbb
stamp mismatch with graph file
means that the graph file has been compiled again after the binaries were built.
If the compilation order is correct, you could check whether sample.cpp is compiled twice somewhere in the build rules.
For example, we have something like this:
g++ ... sample.cpp -o sample
g++ ... -shared sample.cpp -o sample2.o
So one file is compiled twice. This causes the .gcno file to be updated with a new timestamp that no longer matches the .gcda file.
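A small sketch (assuming the .gcno/.gcda pairs sit in the current directory) that applies the hexdump check from above to every pair at once:
# compare the 4-byte stamp at offset 8 of each .gcda with its matching .gcno;
# a mismatch means the source was recompiled after the run that wrote the .gcda
for gcda in *.gcda; do
    gcno=${gcda%.gcda}.gcno
    a=$(hexdump -e '"%x\n"' -s8 -n4 "$gcno")
    b=$(hexdump -e '"%x\n"' -s8 -n4 "$gcda")
    [ "$a" = "$b" ] || echo "stamp mismatch: $gcno ($a) vs $gcda ($b)"
done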
Suppose you performed your product or application testing thoroughly and manually, spent a lot of effort on it, and your objective is to get a code coverage report using lcov and gcov, but you deleted the .gcno files by mistake. You can regenerate the .gcno files by recompiling the code, but they will be generated with a new timestamp, gcov will report the error "stamp mismatch with graph file", and no code coverage report will be generated. This would mean all your testing effort is wasted.
There is a shortcut to still generate the code coverage report. This is just a workaround and should not be relied upon all the time. It is recommended to preserve the *.gcno files until your testing completes.
Note down your gcc version (gcc -v) and download its source code from one of the mirror sites,
e.g. ftp://gd.tuwien.ac.at/gnu/sourceware/gcc/releases/gcc-4.4.6/gcc-4.4.6.tar.bz2
After extracting the downloaded file, the gcc folder structure will be as follows:
gcc-4.4.6
gcc-4.4.6/gcc
If you go directly inside gcc-4.4.6/gcc and try to run ./configure and compile (make) from there, you will encounter the problem below:
build/genmodes -h > tmp-modes.h
/bin/sh: build/genmodes: No such file or directory
The solution is to run ./configure and make from gcc-4.4.6; no errors related to genmodes will be shown then. This compiles all modules, including gcc. You may have to install the mpfr and gmp libraries, which are needed by gcc, if ./configure shows any errors.
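As a rough sketch of the build steps just described (assuming the extracted tree from above):
# configure and build from the top-level gcc-4.4.6 directory, not from gcc-4.4.6/gcc
cd gcc-4.4.6
./configure
make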
Go to gcc-4.4.6/gcc/gcov.c, comment out the lines below, and then recompile with the above command:
/* if (tag != bbg_stamp)
     {
       fnotice (stderr, "%s:stamp mismatch with graph file\n", da_file_name);
       goto cleanup;
     } */
An example path of the new gcov binary after compilation is gcc-4.4.6/host-x86_64-unknown-linux-gnu/gcc/gcov.
Place this binary in /usr/bin and regenerate the code coverage report with a command as shown in the example below:
lcov --capture --directory ./ --output-file coverage.info ; genhtml coverage.info --output-directory /var/www/html/coverage
Now you should not get the "stamp mismatch with graph file" error, and you will get the code coverage report properly.