Group gcov coverage results per folder - code-coverage

I'm wondering if there is any way to get gcov coverage results while keeping the source code tree structure, so that we can see coverage results for entire folders?
I'm currently using lcov to visualize the results, but I don't mind switching to another tool if it provides that feature.
Lcov output (top view)
src/folder1/subfolder1/ 12%
src/folder1/subfolder2/ 39%
src/folder2/subfolder1/ 76%
src/folder2/subfolder2/ 100%
What I'm looking for
src/ 58%
|- folder1/ 22%
|  |- subfolder1 12%
|  `- subfolder2 39%
`- folder2/ 94%
   |- subfolder1 76%
   `- subfolder2 100%
I'm working on a large codebase, and I'd like to have a quick overview of which parts of the code are well covered and which are not. It would be even better if I could collapse and expand the subfolders so that I only see down to a certain depth :)

Try generating an HTML report based on the gcov/lcov results. It can help you visualize the results and sort them in different ways.
For example:
lcov --capture --rc lcov_branch_coverage=1 --directory . --output-file coverage.info
genhtml --branch-coverage --output-directory ./report_dir/ coverage.info
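For context, here is a minimal sketch of the steps that come before the capture step above (gcc and the run_tests binary name are placeholders for your own build):

gcc --coverage -O0 -o run_tests tests.c   # --coverage instruments the build (-fprofile-arcs -ftest-coverage)
./run_tests                               # executing the tests writes the .gcda data files
                                          # then run the lcov/genhtml commands shown above

The genhtml index page already aggregates coverage per directory. Newer lcov releases (2.0+) also document a genhtml --hierarchical option that mirrors the source tree in the report, which is close to the nested per-folder rollup asked about above; check genhtml --help for your version.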

Related

lcov remove option is not removing coverage data as expected

I'm using lcov to generate coverage reports. I have a tracefile (broker.info) with this content (relevant fragment shown):
$ lcov -l broker.info
...
[/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/orionTypes/]
EntityTypeResponse_test.cpp | 100% 11| 100% 6| - 0
...
[/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/parse/]
CompoundValueNode_test.cpp | 100% 82| 100% 18| - 0
...
[/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/rest/]
OrionError_test.cpp |92.1% 38| 100% 6| - 0
...
[/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/serviceRoutines/]
badVerbAllFour_test.cpp | 100% 24| 100% 7| - 0
...
I want to remove all the info corresponding to test/unittest files.
I have attempted to use the -r option which, according to the man page, is:
-r tracefile pattern
--remove tracefile pattern
Remove data from tracefile.
Use this switch if you want to remove coverage data for a particular set of files from a tracefile. Additional command line parameters will be interpreted as
shell wildcard patterns (note that they may need to be escaped accordingly to prevent the shell from expanding them first). Every file entry in tracefile
which matches at least one of those patterns will be removed.
The result of the remove operation will be written to stdout or the tracefile specified with -o.
Only one of -z, -c, -a, -e, -r, -l, --diff or --summary may be specified at a time.
Thus, I'm using
$ lcov -r broker.info 'test/unittests/*' -o broker.info2
As far as I understand, test/unittests/* matches the files under test/unittests. However, it's not working (note Deleted 0 files below):
Reading tracefile broker.info
Deleted 0 files
Writing data to broker.info2
Summary coverage rate:
lines......: 92.6% (58313 of 62978 lines)
functions..: 96.0% (6451 of 6718 functions)
branches...: no data found
I have also tried these variants (same result):
$ lcov -r broker.info "test/unittests/*" -o broker.info2
$ lcov -r broker.info "test/unittests/\*" -o broker.info2
$ lcov -r broker.info "test/unittests" -o broker.info2
So, maybe I'm doing something wrong?
I'm using lcov version 1.13 (just in case the data is relevant)
Thanks!
I have been testing other options, and the following one seems to work, using a wildcard in the prefix as well:
$ lcov -r broker.info "*/test/unittests/*" -o broker.info2
Maybe it is something new in version 1.13, because version 1.11 seems to work without the wildcard in the prefix...
The lcov command below also works fine, even with wildcard characters (lcov 1.14):
lcov --remove meson-logs/coverage.info '/home/builduser/external/*' '/home/builduser/unittest/*' -o meson-logs/sourcecoverage.info
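A quick way to confirm the removal actually matched something is to list the resulting tracefile and check that the unwanted paths are gone (a minimal sketch, using the broker.info2 file from above):

$ lcov -r broker.info "*/test/unittests/*" -o broker.info2
$ lcov -l broker.info2 | grep unittests   # should print nothing if the pattern matched

Since the tracefile records absolute paths (as in the /var/lib/jenkins/... entries above), the leading */ is what lets the pattern match them.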

Apple app store rejection; How to find which lib has string referencing "transition:didComplete:"

We got rejected by Apple. Happens all the time, right? But this time we are a bit stumped. The old ways of sussing this out aren't producing any clues.
From Apple:
5 PERFORMANCE: SOFTWARE REQUIREMENTS
Performance - 2.5.1
Your app uses or references the following non-public APIs:
transition:didComplete:
The use of non-public APIs is not permitted on the App Store because
it can lead to a poor user experience should these APIs change.
This app has been around half a decade, and over the years, mostly due to business needs, it has accumulated a lot of references to 3rd party SDKs. This is where we are focusing our attention, but the trail dries up quickly and is turning into a massive removal of everything until we find the pieces that include this old code.
What we know is that it is not a symbol; otool and nm don't find anything. strings does locate a match (1 time in the debug build and 2 times in our final release build, if that is a clue or makes a difference). This would appear to be a call into UIKit, so I'm assuming that would not be the case.
Does anyone have any suggestions on how to proceed?
We're going through every archive/lib/binary we can find referenced in the project and doing string searches. If that fails, we are about to rip every SDK out and do a destructive binary search to find the guilty party... If there's a hot tip on how to solve this, I'm all ears!
Here is the command line output (strings, otool, and nm):
Dev-MBP:helloworld.app codemonkey$ otool -ov helloworld | grep -C 10 "transition:didComplete"
Dev-MBP:helloworld.app codemonkey$ nm helloworld | grep -C 10 "transition:didComplete"
Dev-MBP:helloworld.app codemonkey$ strings helloworld | grep -C 3 "transition:didComplete"
destinationLayout
prepareTransition:
performTransition:
transition:didComplete:
destinationViewController
sourceViewController
isViewTransition
--
--
destinationLayout
prepareTransition:
performTransition:
transition:didComplete:
destinationViewController
sourceViewController
isViewTransition
Dev-MBP:helloworld.app codemonkey$ strings helloworld | grep "transition:didComplete"
transition:didComplete:
transition:didComplete:
Dev-MBP:helloworld.app codemonkey$
The lib containing the string "transition:didComplete:" was coming from the 4.x Google Play Games libs for C++. We also found it in the latest 5.1.1.
The command/script I ran to find this string inside a file is probably the most useful part of this answer:
Dev-MBP:helloworldproj codemonkey$ find . -type f | while read i; do grep 'transition:didComplete' "$i" >/dev/null; [ $? == 0 ] && echo $i; done
Run this from the root of your iOS project (assuming the frameworks you added are all under that directory).
We can now deliberate on the most efficient way to write this command. I've already had one suggestion:
From a friend:
Naturally, it could be improved upon. The -l option to grep does the
same thing, so ...
find . -type f |
while read i
do
grep -l 'transition:didComplete' "$i"
done
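For completeness, the same search can also be written without a loop at all; these are standard grep/find idioms (run from the project root, as above):

grep -rl 'transition:didComplete' .                            # recursive grep, print matching file names
find . -type f -exec grep -l 'transition:didComplete' {} +     # same result via find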

Why doesn't grep work on this file?

I'm trying to grep this file. Here is a sample of the file (Note: my problem is obviously not present if you just copy/paste this sample and run grep)
'startTime': 1415066802,
'timeout': 6,
'totalRequests': 9201823,
'write': 0}]}
INFO:root:Running setup module stop (cwd=/home/techempower/FrameworkBenchmarks/frameworks/Java)
benchmark: 3% |# | Rough ETA: 17:27:56
--------------------------------------------------------------------------------
Running Test: activeweb-raw
--------------------------------------------------------------------------------
INFO:root:Running setup module start (cwd=/home/techempower/FrameworkBenchmarks/frameworks/Java)
INFO:root:Called setup.py start
INFO:root:Sleeping 60 seconds to ensure framework is ready
I'd like to extract lines like these:
benchmark: 1% | | Rough ETA: 00:00:01
Here's the output I get when I run grep:
$ cat NhHR | grep Rough
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
It appears that I'm detecting the text, but the lines being returned do not include the detected text (as though it's not printing in my terminal?). Printing contextual lines doesn't provide any further clues to me.
Does anyone know how I can get grep to work on this file, or why it's not working currently?
It looks to me like the matched lines contain a carriage return just before the long dashes; when printed to stdout, the carriage return makes the non-dashed part of the line get overwritten. Try piping grep's output to a file and opening the file in an editor; you should see the matched part.
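A quick way to confirm and work around this with standard tools (NhHR is the file from the question):

$ grep Rough NhHR | cat -v        # cat -v makes carriage returns visible as ^M
$ tr -d '\r' < NhHR | grep Rough  # strip the carriage returns before grepping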

tar pre-run to evaluate expected size or number of files

The problem:
I have a back-end process that at some point collects files and builds a big tar archive.
The tar command receives a few directories and an exclude file.
The process can take up to a few minutes, and I want my front-end process (GUI) to report the progress of the tarring (this is a big issue for a user who presses the download button and then sees nothing happening...).
I know I can use -v -R in the tar command and count files and sizes as it runs, but I am looking for some kind of tar pre-run / dry-run mode to help me estimate either the expected number of files or the expected tar size.
The command I am using: tar -jcf 'FILE.tgz' 'exclude_files' 'include_dirs_and_files'
Thanks to everyone who is willing to assist.
You can pipe the output to the wc tool instead of actually making a file.
With file listing (verbose):
[git@server]$ tar czvf - ./test-dir | wc -c
./test-dir/
./test-dir/test.pdf
./test-dir/test2.pdf
2734080
Without:
[git@server]$ tar czf - ./test-dir | wc -c
2734080
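If you need the expected file count rather than the byte count, note that with GNU tar the verbose listing goes to stderr when the archive goes to stdout, so you can count the listing lines while discarding the archive itself (a sketch):

tar czvf - ./test-dir 2>&1 >/dev/null | wc -l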
Why don't you run a
DIRS=("./test-dir" "./other-dir-to-test")
find "${DIRS[@]}" -type f | wc -l
beforehand? This gets all the files (-type f), one per line, and counts the number of files. DIRS is an array in bash, so you can store the folders in a variable.
If you want to know the size of all the stored files, you can use du
DIRS=("./test-dir" "./other-dir-to-test")
du -c -d 0 "${DIRS[@]}" | tail -1 | awk -F ' ' '{print $1}'
This prints the disk usage with du, calculates a grand total (the -c flag), takes the last line (e.g. 4378921 total), and extracts just the first column with awk.
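Once you have an expected size, you can even turn it into a live progress bar if the pv utility is available (a sketch, not the exact command from the question; du -sbc is GNU du and the directory name is a placeholder):

SIZE=$(du -sbc ./test-dir | tail -1 | awk '{print $1}')   # apparent size in bytes
tar -cf - ./test-dir | pv -s "$SIZE" | bzip2 > FILE.tgz   # pv measures the uncompressed stream

pv sits before the compressor so that what it measures is the raw tar stream, whose size roughly matches the du total plus header overhead; putting it after the compressor would make the estimate useless.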

Creating a tar stream of arbitrary data and size

I need to create an arbitrarily large tarfile for testing but don't want it to hit the disk.
What's the easiest way to do this?
You can easily use Python to generate such a tarfile:
mktar.py:
#!/usr/bin/env python3
import sys
import tarfile
import time

# Stream a tar archive to stdout; mode="w|" writes an uncompressed tar stream
tar = tarfile.open(fileobj=sys.stdout.buffer, mode="w|")
info = tarfile.TarInfo(name="fizzbuzz.data")
info.mode = 0o644
info.size = 1048576 * 16  # 16 MiB; addfile reads exactly this many bytes
info.mtime = int(time.time())
with open('/dev/urandom', 'rb') as rand:
    tar.addfile(info, rand)
tar.close()
michael@challenger:~$ ./mktar.py | tar tvf -
-rw-r--r-- 0/0 16777216 2012-08-02 13:39 fizzbuzz.data
You can use tar with the -O option, like this: tar -xOzf foo.tgz bigfile | process
https://www.gnu.org/software/tar/manual/html_node/Writing-to-Standard-Output.html
PS: However, you may not get the benefit you intend, as tar seems to start writing to stdout only after it has read through the entire compressed file. You can demonstrate this behavior by starting a large file extraction and watching the output file size over time; it stays at zero for most of the processing time and starts growing at a very late stage. On the other hand, I haven't researched this extensively; there might be a workaround, or I might be plain wrong, based on my first-hand out-of-memory experience.
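If you want to check this behavior on your own archive, one way is to watch the output file grow (a sketch; foo.tgz and bigfile are the placeholders from above, and stat -c is GNU coreutils):

tar -xOzf foo.tgz bigfile > out.bin &
watch -n 1 'stat -c %s out.bin'   # observe whether the size grows steadily or only near the end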
