Generating an in-memory coverage report using Clang Source-based Code Coverage - clang

I followed the Clang manual and used __llvm_profile_write_buffer to collect coverage profile data inside the instrumented program.
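Roughly, the collection code looks like this (a sketch; the __llvm_profile_* declarations come from the compiler-rt profile runtime, while the wrapper around them is my own):

#include <cstdint>
#include <vector>

// Declared by the compiler-rt profile runtime linked in when building with
// -fprofile-instr-generate -fcoverage-mapping.
extern "C" uint64_t __llvm_profile_get_size_for_buffer(void);
extern "C" int __llvm_profile_write_buffer(char *Buffer);

// Serialize the raw profile (the same bytes a .profraw file would contain)
// into an in-memory buffer; collectRawProfile is just my own wrapper name.
std::vector<char> collectRawProfile() {
    std::vector<char> buffer(__llvm_profile_get_size_for_buffer());
    if (__llvm_profile_write_buffer(buffer.data()) != 0)
        buffer.clear(); // the runtime reported a failure
    return buffer;
}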
This works well, but the recommended way to actually generate a coverage report is to use the llvm-cov tool, like this:
llvm-cov show ./foo -instr-profile=foo.profdata
This tool needs access to the binary, which does not play well with __llvm_profile_write_buffer.
Is there a way to generate a coverage report similar to what llvm-cov produces, but inside the process, from the buffer filled by __llvm_profile_write_buffer?
I guess this would involve accessing the symbol table from within the process, which I think is doable?
Use case: I would like to upload the coverage report to a remote server from within the process, without having to execute an external tool.
Thanks for your help,
Antoine

Related

Instrument code coverage using bazel and testwell ctc++

I am trying to instrument code coverage using Bazel and Testwell CTC++.
According to Testwell, we just need to prepend the ctcwrap utility to the build command, and it will create the MON.sym and MON.dat files.
But the command is not working: no files are created.
My question is more about knowing the best way to instrument third party coverage tools using bazel.

Getting llvm-cov to talk to codecov.io

I'm in the process of (finally!) setting up code coverage monitoring for my brand-new C++ project. Because I need some advanced C++20 features (read: coroutines), I am using clang 6 as the compiler.
Now, I followed this guide on how to do basic code coverage for your project, and everything worked like magic. If I do:
clang++ -fprofile-instr-generate -fcoverage-mapping test.cpp -o test.out
LLVM_PROFILE_FILE="coverage/test.profraw" ./test.out
llvm-profdata merge -sparse coverage/test.profraw -o coverage/test.profdata
llvm-cov show ./test.out -instr-profile=coverage/test.profdata
I get a nice, colored report on my terminal that tells me what is covered and what is not.
So far so good! I thought I was close to what I wanted, but then the pain started when I tried to get the report uploaded to codecov.io.
I have tried a few things, including:
Running their https://codecov.io/bash script on my coverage folder, in the hope that it would pick up my test.profdata. No dice, and it makes sense, since even llvm-cov needs the path to the executable file to run.
Using the export functionality: when running llvm-cov export --instr-profile=coverage/test.profdata ./test.out I get a good-looking JSON document printed to the terminal. I tried redirecting the output to a coverage.json file, which actually got uploaded, but then codecov just says that there was an error parsing it, with no further information.
I'm feeling completely lost. Everything seems so black-box-ish on their website that I just don't understand how to get anything done that doesn't by chance perfectly fit the cases that they can manage.
How can I get this working with codecov? If codecov can't handle my reports, is there any other equivalent online code coverage that I can use to get this to work?
It looks like the bash script codecov uses to upload coverage data to their site searches for files matching a wide range of patterns associated with formats it understands. These patterns are poorly documented, but you can at least see which ones are recognized by looking at the script on GitHub. Of course, this doesn't tell you what codecov expects the contents of a file matching a given pattern to look like, as you discovered when your coverage.json file was rejected.
Through trial and error I have found that the following produces a file that codecov will interpret correctly when you run the bash script:
llvm-cov show ./test.out -instr-profile=default.profdata > coverage.txt
I haven't extensively tested which file names are allowed, but it seems that you can put whatever additional characters you want between coverage and .txt in the name of the file you redirect the coverage data to (e.g. you could call it coverage_my_file_name.txt).
EDIT: In case this is helpful to anyone, an important corollary to the above is that you must avoid giving anything that isn't a coverage report a name matching this pattern. I just dealt with a scenario where a bunch of executables named coverage_[more_text_here].out were being uploaded along with the reports, and it turns out that attempting to parse assembly code as a coverage report can cause codecov to fail mysteriously without any useful errors.
Another option is to use GCOV profiling, which is a little less precise than source-based coverage, but it is supported by codecov.io. You need the "--coverage" compiler flag to enable it.
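For example (the file names here are just placeholders), a minimal GCOV-instrumented build and run looks like this:
clang++ --coverage test.cpp -o test.out
./test.out
Compiling emits the .gcno notes files and each run of the program writes the matching .gcda counter files next to them.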
You can use grcov (which you can also download from https://github.com/mozilla/grcov/releases) to parse the gcno/gcda files and upload them via the codecov.io bash uploader:
grcov OBJ_DIR -s SRC_DIR -t lcov --branch > lcov.info
bash codecov.sh -f "lcov.info"
I'm planning to add support for source-based reports to grcov, which will make it easier to support the format on codecov too.

Generating code coverage report using theIntern

I am using Intern for unit testing my JavaScript framework. My tests run fine using Node.
However, I am not able to generate a proper code coverage report. I tried the options provided in the documentation, and I managed to print code coverage information to the console while testing through the Selenium WebDriver, but that gives only a summary.
How can I generate a more extensive code coverage report using reporters other than the console?
I provided the "reporters" option, but it doesn't print the report. Any help would be appreciated.
The lcov reporter generates an lcov.info file that can then be passed to the lcov genhtml utility to output a complete set of HTML coverage reports (the simplest invocation is just genhtml lcov.info).
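For example (the output directory name is arbitrary):
genhtml lcov.info -o coverage-html
This writes the HTML report into coverage-html/, with coverage-html/index.html as the entry point.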
In Intern 1.2, however, there is a bug in the generated lcov.info files (fixed for Intern 1.3) that may cause genhtml to fail to find any coverage data inside a generated lcov.info file. The patch for this issue is very simple, and you should be able to apply it cleanly to Intern 1.2 until the new version is released in the next couple of weeks.

VHDL test results into jUnit (or other Jenkins-recognized) format

I'm setting up automated regression testing for an FPGA project, almost exactly as described here:
Continuous integration of complex reconfigurable systems
Now I want to get test results (from VHDL REPORT statements in ModelSim simulation) to appear in Jenkins testing reports. My understanding is that Jenkins only natively supports jUnit format, and I looked for plugins supporting non-XML formats but didn't see any.
Generating valid XML from VHDL REPORT statements would be very difficult, since the simulation may terminate immediately depending on the severity, which means that the closing tags would have to be duplicated in every single possible exit path for every single test -- not the most maintainable approach.
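For reference, the shape of the file Jenkins expects is roughly the following (a minimal sketch of the jUnit format; suite and test names are placeholders):
<testsuite name="my_testbench" tests="2" failures="1">
  <testcase classname="my_testbench" name="test_reset"/>
  <testcase classname="my_testbench" name="test_overflow">
    <failure message="assertion failure at 100 ns"/>
  </testcase>
</testsuite>
So every early exit of the simulation would have to make sure the enclosing testsuite element still gets closed.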
So, do you know of any straightforward way to convert plain text into jUnit (or another format, if supported by Jenkins)? If something doesn't already exist, is there an advantage to writing a Jenkins plugin vs just throwing together a perl script? Any other suggestions?
You should take a look at the xUnit Plugin. The plugin reads test results from a number of tools and seems adaptable to custom formats. According to the documentation, it can read not only XML but also CSV and TXT. For a custom format you need to supply a style sheet for the transformation; I am not quite sure whether this will go all the way for you, but even if it does not, the plugin should be easy to extend for your own format.
Old post, but today there is a unit testing framework for VHDL that we've developed. It solves the problem by generating a report in the JUnit format, and it also handles the case where the simulation stops due to a severe error. The tool is free and open source and can be found at https://github.com/LarsAsplund/vunit

Test code coverage without source code?

What tools are out there that can perform code coverage analysis at the machine code level rather than the source code level? I'm looking for a possible solution for fuzz testing software that I do not have source code access to.
I think the IBM Rational test coverage tools instrument object code.
Assuming you had such a tool, but no access to the source, what exactly would code coverage mean, other than 100%? If you didn't have 100% coverage, you'd know you hadn't exercised something, but you would have no way of knowing what.
For compiled code (not Java), try Valgrind.
Old post... but my two cents.
If you have a bunch of jars and if you know what classes/methods you are using, you can instrument the jars with Emma and run your sample application against those jars.
In my case, I have jars which are actually proprietary components (for generating HTML code) that our company uses to build its web pages. We have a sample application that uses these components and a bunch of tests that are run against the sample app. I wrote an Ant task to copy the Maven dependencies to a directory, instrument them, and run the tests against the instrumented jars. This task is invoked from the Maven POM and is hence part of the build process.
Also, as part of the build process, we process the Emma coverage data to produce a report. This report shows the classes and methods in the jars for which we do not have the source code! Hope this helps.
If you know the number of entry points (public methods), you can measure coverage of those, though I don't know of any tool that does it.
Otherwise you would have to measure coverage of the assembly code, and I don't know whether that is possible.
