Coveralls branches [[0,0]] missed meaning? - code-coverage

I'm just starting out with Coveralls and test coverage, and I see info such as this. What do the arrays mean? How should I interpret them? I can't find anything in the docs or online about what these mean.

The first index is the block number and the second is the branch number, both assigned by GCC.
I'm basing this on the lcov documentation for branch coverage information:
BRDA:<line number>,<block number>,<branch number>,<taken>
Block number and branch number are gcc internal IDs for the branch.
Taken is either '-' if the basic block containing the branch was never
executed or a number indicating how often that branch was taken.
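For example, a hypothetical fragment of an lcov tracefile for a line 12 that contains one conditional (two branches in block 0):
DA:12,1
BRDA:12,0,0,1
BRDA:12,0,1,0
Here branch [0,1] (block 0, branch 1) was taken 0 times, which is what a missed-branch entry such as [[0,1]] would be pointing at.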


Getting different statement coverage for the same piece of code using gcov and gcovr

I am new to using gcov and gcovr and I wanted to get the statement coverage of a given function. It is coded in C, compiled with MinGW and called from MATLAB (which I use to later process the coverage information given by gcov).
I am executing the code in two different ways. In the first, I use Simulink, where the function's inputs are given by the outputs of other functions that make up the dynamic process I modelled in Simulink. In the second, I use the MATLAB editor and define the inputs to the function directly.
Because the Simulink-executed code depends on secondary functions whose output I cannot control (unlike the second way), I expected the statement coverage of the first execution to be worse than the second, but with the same number of statement lines (since it is exactly the same code). However, I found that:
For some function calls inside the function, the second method counts each of the call's lines (the first line plus the continuation lines when the input and output variables are too long to fit on a single line), adding statements that in reality don't exist.
The first method doesn't take into account some variable definitions at the beginning of the code, not counting them as line statements (for instance, setting input variables to 0).
Has anybody also encountered this discrepancy when getting the statement coverage of the same function? Do you know why this may be?
Thank you very much in advance!

More metrics for CodeCoverage Elixir

Background
I have a test suite and I need to know the coverage of the project.
I have played around with mix test --cover but I find Erlang's native coverage analysis tool to be insufficient at best.
The native coverage tool doesn't report branch coverage or function coverage. Its only metric seems to be relevant lines, and I have no idea how those are calculated. For all I know, this is just the most basic form of test coverage: checking whether a given text line was executed.
What have you tried?
I have tried Coverex but the result was disastrous. Not only does it suffer from the same issues the native tool does, it also seems not to produce correct results, as it counts imported modules as untested.
Or maybe it is doing a great job and my code is poorly tested, but I can't know for sure because it doesn't tell me how it is evaluating my code. Have 40% coverage in a file? What am I missing? I can't know; the tool won't tell me.
I am now using ExCoveralls. It is considerably better than the previous options and lets me easily configure which folders to ignore, but it uses the native coverage tool, so it suffers from pretty much the same issues.
What do you want?
I was hoping to find something along the lines of Istanbul, or in this case nyc:
https://github.com/istanbuljs/nyc
Its test coverage analysis tells me everything I need to know, metrics and all:
Branches, Functions, Lines, Statements, everything you need to know is there.
Questions
Is there any tool that uses Istanbul for code coverage metrics with Elixir, instead of the native Erlang one?
If not, is there a way to configure the native coverage tool to give me more information?
Which metrics does the native coverage tool use?
The native coverage tool inserts "bump" calls on every line of the source code, recording module, function, arity, clause number and line number:
bump_call(Vars, Line) ->
    A = erl_anno:new(0),
    {call,A,{remote,A,{atom,A,ets},{atom,A,update_counter}},
     [{atom,A,?COVER_TABLE},
      {tuple,A,[{atom,A,?BUMP_REC_NAME},
                {atom,A,Vars#vars.module},
                {atom,A,Vars#vars.function},
                {integer,A,Vars#vars.arity},
                {integer,A,Vars#vars.clause},
                {integer,A,Line}]},
      {integer,A,1}]}.
(from cover.erl)
The code inserted by the function above is:
ets:update_counter(?COVER_TABLE,
                   {?BUMP_REC_NAME, Module, Function, Arity, Clause, Line}, 1)
That is, increment the entry for the given module / function / line in question by 1. After all tests have finished, cover will use the data in this table and show how many times a given line was executed.
As mentioned in the cover documentation, you can get coverage for modules, functions, function clauses and lines. It looks like ExCoveralls only uses line coverage in its reports, but there is no reason it couldn't do all four types of coverage.
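For example, a minimal sketch of querying cover directly from IEx after exercising a module (the module name MyApp.SomeModule is hypothetical):
:cover.start()
:cover.compile_beam(MyApp.SomeModule)
# ... run the tests or code paths you care about ...
{:ok, per_function} = :cover.analyse(MyApp.SomeModule, :coverage, :function)
{:ok, per_clause} = :cover.analyse(MyApp.SomeModule, :coverage, :clause)
{:ok, per_line} = :cover.analyse(MyApp.SomeModule, :coverage, :line)
Each result is a list of {item, {covered, not_covered}} tuples, where item identifies the function, clause or line, depending on the level requested.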
Branch coverage is not supported. Seems like supporting branch coverage would require expanding the "bump" record and updating cover.erl to record that information. Until someone does that, coverage information is only accurate when branches appear on different lines. For example:
case always_false() of
    true ->
        %% this line shows up as not covered
        do_something();
    false ->
        ok
end.

%% this line shows up as covered, even though do_something is never called
always_false() andalso do_something()
To add to legoscia's excellent response, I also want to clarify why cover does not do statement-level evaluation. According to this discussion in the official forum:
https://elixirforum.com/t/code-coverage-tools-for-elixir/18102/10
The code is first compiled into Erlang and then from Erlang into a modified binary (no .beam file is created) that is automatically loaded into memory and executed.
Because of the way Erlang code works, a single statement can map to several instructions, and a single line can result in multiple VM "statements". For example:
Integer.to_string(a + 1)
Will result in two instructions (each preceded by a {line,...} annotation):
{line,[{location,"lib/tasks.ex",6}]}.
{gc_bif,'+',{f,0},1,[{x,0},{integer,1}],{x,0}}.
{line,[{location,"lib/tasks.ex",6}]}.
{call_ext_only,1,{extfunc,erlang,integer_to_binary,1}}.
Therefore it is rather tricky for an automatic analysis tool to provide statement coverage because it is hard to match statements to instructions, especially as in theory a compiler is free to reorder commands as it pleases as long as the result is the same.

can bazel test if two rules are identical

I'd like to compare two rules to see whether or not they are identical (in particular, I'd like to be able to test a Bazel target before and after a commit to see if it has changed).
Is there a way to accomplish this, perhaps with bazel query?
You can try bazel query with --output=build to have bazel print out the rule with everything expanded (e.g. macros evaluated, globs expanded, expressions evaluated, etc) before and after the change, and compare the results. See https://docs.bazel.build/versions/master/query.html#output-build for more information.
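For example, a rough sketch of that workflow (the target label is hypothetical):
bazel query //my/pkg:my_target --output=build > before.txt
# apply the commit / check out the new revision, then:
bazel query //my/pkg:my_target --output=build > after.txt
diff before.txt after.txt
If diff prints nothing, the fully expanded rule definition is the same before and after the change.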

Is there a solution for transpiling Lua labels to ECMAScript3?

I'm re-building a Lua to ES3 transpiler (a tool for converting Lua to cross-browser JavaScript). Before I spend more effort on this transpiler, I want to ask whether it's possible to convert Lua labels to ECMAScript 3. For example:
goto label;
:: label ::
print "skipped";
My first idea was to split each body of statements into parts, e.g. when there's a label, the statements that follow it are stored as an entire next part:
some body
label (& statements)
other label (& statements)
and so on. Every statement that has a body (or the program chunk) gets a list of parts like this. Each part belonging to a label should have the label's name stored somewhere (e.g. in its own part object, as a property).
Each part would be a function, or would store a function on itself, to be executed sequentially in relation to the others.
A goto statement would look up its target label, run that label's statements, and invoke an ES return statement to stop execution of the current part.
The limitation of separating the body statements this way is accessing the variables and functions defined in different parts... So, is there an idea or answer for this? Is it impossible to have reliable labels when converting them to ECMAScript?
I can't quite follow your idea, but it seems someone already solved the problem: JavaScript allows labelled continues, which, combined with dummy while loops, permit emulating goto within a function. (And unless I forgot something, that should be all you need for Lua.)
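For example, a minimal sketch of that pattern (the label names and the condition variable are hypothetical, just to show the shape of the generated ES3 code, not a full transpilation scheme):
// forward jump: goto skip ... ::skip::
skip: do {
    if (condition) {
        break skip;       // emulates "goto skip"
    }
    // statements that the goto jumps over
} while (false);
// execution continues here, i.e. at ::skip::

// backward jump: ::top:: ... goto top
top: while (true) {
    // statements after ::top::
    if (condition) {
        continue top;     // emulates "goto top"
    }
    break;                // normal exit from the labelled block
}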
Compare pages 72-74 of the ECMAScript spec ed. #3 of 2000-03-24 to see that it should work in ES3, or just look at e.g. this answer to a question about goto in JS. As usual on the 'net, the URLs referenced there are dead but you can get summerofgoto.com [archived] at the awesome Internet Archive. (Outgoing GitHub link is also dead, but the scripts are also archived: parseScripts.js, goto.min.js or goto.js.)
I hope that's enough to get things running, good luck!

meaning of llvm[n] when compiling llvm, where n is an integer

I'm compiling LLVM as well as clang. I noticed that the output of compilation has llvm[1]: or llvm[2]: or llvm[3]: prefixed to each line. What do those integers in brackets mean?
Apparently, it's not connected to the number of the compilation job (this can easily be checked via make -j 1). The autoconf-based build system indicates the "level" of the makefile inside the source tree. To be precise, it's the value of make's MAKELEVEL variable.
The currently accepted answer is not correct. Furthermore, this is really a GNU Make question, not a LLVM question.
What you're seeing is the current value of the MAKELEVEL variable echoed by make to the command line. This value is set as a result of recursive execution. From the GNU make manual:
As a special feature, the variable MAKELEVEL is changed when it is passed down from level to level. This variable’s value is a string which is the depth of the level as a decimal number. The value is ‘0’ for the top-level make; ‘1’ for a sub-make, ‘2’ for a sub-sub-make, and so on. The incrementation happens when make sets up the environment for a recipe.
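For example, a minimal sketch (the subdir directory and its Makefile are hypothetical; recipe lines must start with a tab):
# top-level Makefile
all:
	@echo "MAKELEVEL is $(MAKELEVEL)"   # prints 0 at the top level
	$(MAKE) -C subdir                   # recipes run by that sub-make see MAKELEVEL=1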
If you have a copy of the GNU Make source code on hand, you can see your output message being generated in output.c with the void message(...) function. In GNU make v4.2, this happens on line 626. Specifically, the program argument is set to the string "llvm" and the makelevel argument is set as noted above.
Since it was erroneously brought up, it is not the number of the compilation job. The -j [jobs] or --jobs[=jobs] options enable parallel execution of up to jobs number of recipes simultaneously. If -j or --jobs is selected, but jobs is not set, GNU Make attempts to execute as many recipes simultaneously as possible. See this section of the GNU Make manual for more information.
It is possible to have recursive execution without parallel execution, and parallel execution without recursive execution. This is the main reason that the currently accepted answer is not correct.
It's the number of the compilation job (make -j). Helpful to trace compilation errors.
