We are using gcov and gcovr.py to get coverage reports for our tests. Not all source files are exercised by the tests, so those files aren't mentioned in the gcovr report at all. I'd nevertheless like to calculate overall coverage for the whole code base.
From the reports I can get the lines covered, but I'd also need the number of C code lines in the files that aren't tested. What would be the possibilities to get the lines of C code in the files inside the code directory?
Have a look at cloc, which will count lines of code in files or process a directory: https://github.com/AlDanial/cloc.
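For example, to count only the C code under your source tree (a sketch assuming your sources live in src/):

cloc --include-lang=C src/

The overall figure is then the executed lines from gcovr divided by the sum of the lines gcovr reports plus the cloc counts for the files gcovr never saw.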
As far as I know, when you generate a coverage report using gcovr, it prints this kind of report to the console:
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: ...../src/
------------------------------------------------------------------------------
File                       Lines    Exec   Cover   Missing
------------------------------------------------------------------------------
src/A/A1/xyz.cpp            1609       2      0%   97,99,101....
src/A/A2/abcg.cpp            271       4      1%   .......
src/B/B1/mnop.cpp             74       2      2%   34,42,56-.....
src/B/B2/wrds.cpp           1533       6      0%   76,83,85-.....
src/C/C1/abcdefg.cpp        1079       8      0%   143,150,152.....
This has all the line numbers that were not executed, for each source file.
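For reference, a report like the one above is what you get when running gcovr from the build directory; a sketch (the root path is an assumption about your layout):

gcovr -r ../src .

Adding -s / --print-summary also prints the overall totals at the end.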
Hope it helped :)
After having performed test coverage on my product using lcov (for C++ development), I'd like to draw a matrix showing the correspondence between each test name and the files it covers.
The idea is to have a quick view of the code covered by one test file.
e.g.:
xxxx   | file 1 | file 2 | file 3 | file 4 | file 5 |
test 1 | YES    | NO     | YES    | YES    | YES    |
test 2 | YES    | NO     | NO     | NO     | NO     |
test 3 | YES    | YES    | NO     | NO     | YES    |
In my project, I need to run thousands of tests to check the coverage of thousands of files, so the matrix will be huge.
Unfortunately, it seems that by design GCOV does not work this way: we only have one set of gcda files covering the whole code, and it seems impossible to determine which test covers which part of the code.
The only solution I could imagine is the following one:
for current_test in "${all_tests[@]}"; do
    lcov --zerocounters --directory .                                  # reset counters so runs stay isolated
    "./$current_test"                                                  # run 1 current_test
    lcov --capture --directory . --output-file "$current_test.info"   # retrieve gcda -> .info file
    # extract the names of the covered code files (SF: records) and
    # append current_test / code filename pairs to the matrix:
    grep '^SF:' "$current_test.info" | sed "s|^SF:|$current_test |" >> matrix.txt
done
The problem is that this will be extremely slow: one test takes around 5 minutes, so I'd be waiting for weeks...
Any idea would be very welcomed.
Thanks a lot for your help.
Regards,
Thomas
Unfortunately the gcov data does not include test names, and they must be added in post-processing. Therefore, your sequential loop is the sensible approach if you stay within gcov-based coverage collection.
Workarounds you can try:
Run your tests with an appropriate GCOV_PREFIX variable so that the coverage is written into a different directory, rather than next to your object files.
Use a different coverage tool. E.g. kcov performs runtime instrumentation and writes the coverage results into a directory you specify (see the sketch after this list). However, its coverage data formats are not usable with gcov-based tools.
Distribute your tests across multiple machines.
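For the kcov route, each run writes into its own output directory, so per-test file lists fall out naturally; a sketch, assuming a test binary named ./test_foo:

kcov /tmp/kcov/test_foo ./test_foo

The covered files can then be read from the report kcov generates under /tmp/kcov/test_foo.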
My guess is that GCOV_PREFIX is likely to work in your scenario so that you can easily run your tests in parallel. This variable is a bit fiddly because you need to know the absolute paths of your object files, but it's probably easier to figure that out than it is to wait multiple days for your coverage matrix.
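A minimal sketch of that idea (the test names, output prefix, and strip depth of 4 are all hypothetical; GCOV_PREFIX_STRIP removes that many leading components from the object paths before GCOV_PREFIX is prepended):

export GCOV_PREFIX_STRIP=4
for t in test_foo test_bar; do
    GCOV_PREFIX="/tmp/cov/$t" "./$t" &    # each test writes its .gcda files under its own tree
done
wait
# then capture each tree separately (you may need the .gcno files alongside), e.g.:
# lcov --capture --directory /tmp/cov/test_foo --output-file test_foo.info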
Here's output from a Duplicity backup that I run every night on a server:
--------------[ Backup Statistics ]--------------
StartTime 1503561610.92 (Thu Aug 24 02:00:10 2017)
EndTime 1503561711.66 (Thu Aug 24 02:01:51 2017)
ElapsedTime 100.74 (1 minute 40.74 seconds)
SourceFiles 171773
SourceFileSize 83407342647 (77.7 GB)
NewFiles 15
NewFileSize 58450408 (55.7 MB)
DeletedFiles 4
ChangedFiles 6
ChangedFileSize 182407535 (174 MB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 25
RawDeltaSize 59265398 (56.5 MB)
TotalDestinationSizeChange 11743577 (11.2 MB)
Errors 0
-------------------------------------------------
I don't know if I'm reading this right, but what it seems to be saying is that:
- I started with 77.7 GB
- I added 15 files totaling 55.7 MB
- I deleted or changed files whose sum total was 174 MB
- My deltas after taking all changes into account totaled 56.5 MB
- The total disk space on the remote server that I pushed the deltas to was 11.2 MB
It seems to me that I only pushed 11.2 MB, but I probably should've pushed at least 55.7 MB because of those new files (you can't really make a small delta of a file that didn't exist before), plus whatever other disk space the deltas would've taken.
I get confused when I see these reports. Can someone help clarify? I've tried digging for documentation but am not seeing much in the way of clear, concise, plain-English explanations of these values.
Disclaimer: I couldn't find a proper resource that explained the difference nor something in the duplicity docs that supports this theory.
ChangedDeltaSize, DeltaEntries and RawDeltaSize do not refer to changes in the actual files; they refer to differences between sequential versions of the data. Duplicity uses the rsync algorithm to create your backups, which in turn is a type of delta encoding.
Delta encoding is a way of storing data in the form of differences rather than complete files. The delta sizes you see listed are therefore changes in those pieces of data, and can be smaller than the files themselves. In fact, I think they should be smaller, as they are just small snippets of changed data.
Some sources:
- http://duplicity.nongnu.org/ "Encrypted bandwidth-efficient backup using the rsync algorithm"
- https://en.wikipedia.org/wiki/Rsync "The rsync algorithm is a type of delta encoding..."
- https://en.wikipedia.org/wiki/Delta_encoding
Lcov has a genhtml tool which converts the lcov coverage info file into an HTML report. It is possible to color-code the results table, indicating low, medium, and high coverage, with the following lcov configuration file options:
genhtml_hi_limit
genhtml_med_limit
However, these limits seem to apply globally to all types of coverage metrics, i.e. line, function, and branch. Is there a way to set individual limits for the line, function, and branch coverage metrics? Or can this be achieved with CSS somehow?
Although the genhtml documentation describes only the global limits, inspecting the source shows the following limits that can be set in the lcovrc file to set colors specific to coverage types:
genhtml_branch_hi_limit, genhtml_branch_med_limit
genhtml_function_hi_limit, genhtml_function_med_limit
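For example, in your lcovrc (the threshold percentages here are just illustrative):

genhtml_hi_limit = 90
genhtml_med_limit = 75
genhtml_branch_hi_limit = 85
genhtml_branch_med_limit = 70
genhtml_function_hi_limit = 90
genhtml_function_med_limit = 75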
I tried many things to make the opencv_createsamples command work, with no results. I am trying to create an OpenCV-format training set using photos. I specify the file names and rectangle coordinates in the descriptive info file, and I always get this parse error. I simplified it down to one single example; it still does not work. The num parameter is there. Windows 7 environment.
My command:
C:\lib\opencv\build\x64\vc14\bin>opencv_createsamples -info C:\opencv_ws\info.txt -bg C:\opencv_ws\bg.txt -num 1 -w 360 -h 640 -vec platesCl.vec
My info file:
pos/test_img01.jpg 1 10 10 24 24
I tried using "\", tabs, absolute paths, two lines with all the line-ending characters, smaller image dimensions, and I'm still stuck.
Maybe there is something I am missing, I don't know; still, it seems in accordance with the documentation...
Okay, actually, the opencv_createsamples.exe tool (Windows) does not like Windows line-ending characters (CRLF). I edited the info descriptor (positive samples) on Linux and fed it to the tool on Windows, and it worked like a charm.
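So if you hit the same parse error, converting the descriptor from CRLF to LF line endings should be enough, e.g. with dos2unix (assuming it is available):

dos2unix info.txt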
I'm currently doing dynamic memory analysis for our Eclipse-based application using JProbe. After starting the Eclipse application and JProbe, when I try to profile the Eclipse application, it closes abruptly with a fatal error, and a fatal error log file is generated. In the log file, I can see that the PermGen space seems to be full. Below is a sample heap summary from the log file:
Heap
def new generation total 960K, used 8K [0x07b20000, 0x07c20000, 0x08000000)
eden space 896K, 0% used [0x07b20000, 0x07b22328, 0x07c00000)
from space 64K, 0% used [0x07c00000, 0x07c00000, 0x07c10000)
to space 64K, 0% used [0x07c10000, 0x07c10000, 0x07c20000)
tenured generation total 9324K, used 5606K [0x08000000, 0x0891b000, 0x0bb20000)
the space 9324K, 60% used [0x08000000, 0x08579918, 0x08579a00, 0x0891b000)
compacting perm gen total 31744K, used 31723K [0x0bb20000, 0x0da20000, 0x2bb20000)
the space 31744K, 99% used [0x0bb20000, 0x0da1af00, 0x0da1b000, 0x0da20000)
ro space 8192K, 66% used [0x2bb20000, 0x2c069920, 0x2c069a00, 0x2c320000)
rw space 12288K, 52% used [0x2c320000, 0x2c966130, 0x2c966200, 0x2cf20000)
I tried to increase the PermGen space using the option -XX:MaxPermSize=512m, but that doesn't seem to work. I would like to know how to increase the PermGen size via the command prompt. Do I have to go to the Java location on my computer and pass the above option there, or should I increase the PermGen space specifically for the Eclipse application or for JProbe? Please advise.
Any help on this is much appreciated.
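A hedged pointer for the Eclipse side: JVM options such as this are normally placed in eclipse.ini after the -vmargs marker (everything before -vmargs goes to the launcher, everything after it goes to the JVM); the surrounding values here are illustrative:

-vmargs
-Xms256m
-Xmx1024m
-XX:MaxPermSize=512m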