More metrics for code coverage in Elixir/Erlang

Background
I have a test suite and I need to know the coverage of the project.
I have played around with mix test --cover, but I find Erlang's native coverage analysis tool to be insufficient at best.
The native coverage tool doesn't tell you about branch coverage or function coverage. Its only metric seems to be relevant lines, and I have no idea how those are calculated. For all I know, this is just the most basic form of test coverage: checking whether a given text line was executed.
What have you tried?
I have tried Coverex, but the result was disastrous. Not only does it suffer from the same issues as the native tool, it also doesn't seem to produce correct results, as it counts imported modules as untested.
Or maybe it is doing a great job and my code is poorly tested, but I can't know for sure, because it doesn't tell me how it is evaluating my code. Have 40% coverage in a file? What am I missing? I can't know; the tool won't tell me.
I am now using ExCoveralls. It is considerably better than the previous options, and it lets me easily configure which folders to ignore, but it uses the native coverage tool underneath, so it suffers from pretty much the same issues.
What do you want?
I was hoping to find something along the lines of Istanbul, or in this case nyc:
https://github.com/istanbuljs/nyc
Its test coverage analysis tells me everything I need to know, metrics and all:
Branches, Functions, Lines, Statements, everything you need to know is there.
Questions
Is there any tool that uses Istanbul for code coverage metrics with Elixir instead of the native erlang one?
If not, is there a way to configure the native coverage tool to give me more information?
Which metrics does the native coverage tool use?

The native coverage tool inserts "bump" calls on every line of the source code, recording module, function, arity, clause number and line number:
bump_call(Vars, Line) ->
    A = erl_anno:new(0),
    {call,A,{remote,A,{atom,A,ets},{atom,A,update_counter}},
     [{atom,A,?COVER_TABLE},
      {tuple,A,[{atom,A,?BUMP_REC_NAME},
                {atom,A,Vars#vars.module},
                {atom,A,Vars#vars.function},
                {integer,A,Vars#vars.arity},
                {integer,A,Vars#vars.clause},
                {integer,A,Line}]},
      {integer,A,1}]}.
(from cover.erl)
The code inserted by the function above is:
ets:update_counter(?COVER_TABLE,
                   {?BUMP_REC_NAME, Module, Function, Arity, Clause, Line}, 1)
That is, it increments the counter for the module / function / clause / line in question by 1. After all tests have finished, cover uses the data in this table to show how many times a given line was executed.
As mentioned in the cover documentation, you can get coverage for modules, functions, function clauses and lines. It looks like ExCoveralls only uses line coverage in its reports, but there is no reason it couldn't do all four types of coverage.
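For example, here is a minimal sketch of querying all four levels from an Erlang shell (the module name and test entry point are illustrative, and the .beam must have been compiled with debug_info):

cover:start(),
{ok, my_mod} = cover:compile_beam(my_mod),   %% instrument an existing .beam
my_mod:run_tests(),                          %% hypothetical test entry point
{ok, ModCov}    = cover:analyse(my_mod, coverage, module),
{ok, FunCov}    = cover:analyse(my_mod, coverage, function),
{ok, ClauseCov} = cover:analyse(my_mod, coverage, clause),
{ok, LineCov}   = cover:analyse(my_mod, coverage, line).
%% each 'coverage' answer pairs an item with {Covered, NotCovered} line counts

Passing calls instead of coverage as the second argument returns execution counts rather than covered/uncovered pairs.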
Branch coverage is not supported. It seems supporting branch coverage would require expanding the "bump" record and updating cover.erl to record that information. Until someone does that, coverage information is only accurate when branches appear on different lines. For example:
case always_false() of
    true ->
        %% this line shows up as not covered
        do_something();
    false ->
        ok
end.

%% this line shows up as covered, even though do_something is never called
always_false() andalso do_something()

To add to @legoscia's excellent response, I also want to clarify why cover does not do statement-level evaluation. According to this discussion in the official forum:
https://elixirforum.com/t/code-coverage-tools-for-elixir/18102/10
The code is first compiled into Erlang, and then from Erlang into a modified binary (no .beam file is created) that is automatically loaded into memory and executed.
Because of the way Erlang code works, a single statement can produce several instructions: a single line can result in multiple VM "statements". For example:
Integer.to_string(a + 1)
will result in two instructions (each preceded by its line annotation):
{line,[{location,"lib/tasks.ex",6}]}.
{gc_bif,'+',{f,0},1,[{x,0},{integer,1}],{x,0}}.
{line,[{location,"lib/tasks.ex",6}]}.
{call_ext_only,1,{extfunc,erlang,integer_to_binary,1}}.
Therefore it is rather tricky for an automatic analysis tool to provide statement coverage, because it is hard to match statements to instructions, especially as, in theory, the compiler is free to reorder instructions as it pleases as long as the result is the same.
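To see such a listing yourself (my suggestion, not from the thread), you can disassemble any compiled module with beam_disasm from the compiler application:

{beam_file, _Mod, _Exports, _Attrs, _CompileInfo, Code} =
    beam_disasm:file(code:which('Elixir.Tasks')).   %% illustrative module
%% Code is a list of {function, Name, Arity, Entry, Instructions} tuples;
%% the Instructions contain {line,[{location,...}]} markers like those above.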

Related

Getting different statement coverage for the same piece of code using gcov and gcovr

I am new to gcov and gcovr, and I want to get the statement coverage of a given function. It is coded in C, compiled with MinGW, and called from Matlab (which I use to later process the coverage information given by gcov).
I am executing the code in two different ways: for the first, I use Simulink, where the function inputs are given by the outputs of other functions that make up the dynamic process I modelled in Simulink. For the second, I use the Matlab editor and define the inputs to the function directly.
Because the Simulink-executed code depends on secondary functions whose output I cannot control (contrary to the second way), I expected the statement coverage of the first execution to be worse than the second, but with the same number of statement lines (since it is exactly the same code). However, I found that:
For some function calls inside the function, the second method counts the few lines of the call (like the first line, and the following lines when the input and output variables are too long to fit on a single line), adding up statements that in reality don't exist.
The first method doesn't take some variable definitions at the beginning of the code into account, not counting them as line statements (for instance, setting input variables to 0).
Has anybody also encountered this discrepancy when getting the statement coverage of the same function? Do you know why this may be?
Thank you very much in advance!

How to parse only user defined source files with clang tools

I am writing a clang tool, yet I am quite new to it, so I came across a problem that I couldn't find in the docs (yet).
I am using the great Matchers API to find some nodes that I will later want to manipulate in the AST. The problem is that the clang tool will actually parse eeeverything that belongs to the source file, including headers like iostream etc.
Since my manipulation will probably include some refactoring, I definitely do not want to touch each and every thing the parser finds.
Right now I am dealing with this by comparing the source files of matched nodes against the arguments in argv, but needless to say, this feels wrong, since it still parses through ALL the iostream code - it just ignores it whilst doing so. I just can't believe there is no way to tell the ClangTool something like:
"only match nodes whose location's source file is something the user fed to this tool"
Thinking about it, this only makes sense if it's possible to create ASTs for each source file individually, but I do need them to be aware of each other or share contextual knowledge, and I haven't figured out a way to do that either.
I feel like I am missing something very obvious here.
thanks in advance :)
There are several narrowing matchers that might help: isExpansionInMainFile and isExpansionInSystemHeader. For example, one could combine the latter with unless to limit matches to AST nodes that are not in system files.
There are several examples of using these in the Code Analysis and Refactoring with Clang Tools repository. For example, see the file lib/callsite_expander.h around line 34, where unless(isExpansionInSystemHeader)) is used to exclude call expressions that are in system headers. Another example is at line 27 of lib/function_signature_expander.h, where the same is used to exclude function declarations in system headers that would otherwise match.
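As a minimal sketch (not from the repository above; the tool name and matcher choice are illustrative), a standalone tool that only reports function declarations spelled in the main file might look like this:

#include "clang/ASTMatchers/ASTMatchFinder.h"
#include "clang/ASTMatchers/ASTMatchers.h"
#include "clang/Tooling/CommonOptionsParser.h"
#include "clang/Tooling/Tooling.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/raw_ostream.h"

using namespace clang;
using namespace clang::ast_matchers;
using namespace clang::tooling;

static llvm::cl::OptionCategory ToolCategory("my-tool options");

class Reporter : public MatchFinder::MatchCallback {
public:
  void run(const MatchFinder::MatchResult &Result) override {
    if (const auto *FD = Result.Nodes.getNodeAs<FunctionDecl>("fn"))
      llvm::outs() << FD->getQualifiedNameAsString() << "\n";
  }
};

int main(int argc, const char **argv) {
  // CommonOptionsParser::create is the LLVM 12+ spelling.
  auto Options = CommonOptionsParser::create(argc, argv, ToolCategory);
  if (!Options) {
    llvm::errs() << llvm::toString(Options.takeError());
    return 1;
  }
  ClangTool Tool(Options->getCompilations(), Options->getSourcePathList());

  MatchFinder Finder;
  Reporter R;
  // isExpansionInMainFile() narrows matches to nodes spelled in the file
  // passed on the command line, so declarations pulled in from <iostream>
  // and friends are never reported.
  Finder.addMatcher(functionDecl(isExpansionInMainFile()).bind("fn"), &R);
  return Tool.run(newFrontendActionFactory(&Finder).get());
}

Note that the headers are still parsed; the matcher only filters what is matched, which is the same trade-off as comparing against argv, just without the manual bookkeeping.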

Is there a way to see Rails test coverage as 'methods covered / methods not covered,' rather than line by line?

Using a gem like SimpleCov I can see my test coverage on a line-by-line basis for all the files I specify. Is there a way to see test coverage on a method-by-method basis? For example, if my tests engaged a method at all, that method would be considered 'covered.'
if my tests engaged a method at all, that method would be considered 'covered.'
That's a very arbitrary definition of a covered method. What if the method has 100 lines but returns on the first guard clause?
Because of this (I imagine), it is not feasible to implement: it would be hard even to agree on what a covered method is.
Such a metric would lie (reporting higher coverage). If you instead assumed a method is covered only when all its lines are covered, it would lie too, reporting lower coverage.
Note: line coverage also lies a bit (check ternary operators, for example), but it's the smallest liar...
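To illustrate the ternary caveat (my example, not from the answer): a single line holding two branches is reported as covered after any call, even if one branch never runs:

def sign(x)
  # Line coverage marks this line covered after sign(1),
  # even though the :neg branch was never taken.
  x >= 0 ? :nonneg : :neg
end

sign(1)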

Is there a solution for transpiling Lua labels to ECMAScript3?

I'm rebuilding a Lua to ES3 transpiler (a tool for converting Lua to cross-browser JavaScript). Before I spend more ideas on this transpiler, I want to ask if it's possible to convert Lua labels to ECMAScript 3. For example:
goto label;
:: label ::
print "skipped";
My first idea was to separate each body of statements into parts, e.g., when there's a label, its following statements must be stored as an entire next part:
some body
label (& statements)
other label (& statements)
and so on. Every statement that has a body (or the program chunk) gets a list of parts like this. Each part of a label should have its name stored somewhere (e.g., in its own part object, inside a property).
Each part would be a function, or would store a function on itself, to be executed sequentially in relation to the others.
A goto statement would look up its specific label to run its statements, and invoke an ES return statement to stop the current statement execution.
The limitation of separating the body statements this way is accessing the variables and functions defined in different parts... So, is there an idea or answer for this? Is it impossible to have stable labels when converting them to ECMAScript?
I can't quite follow your idea, but it seems someone has already solved the problem: JavaScript allows labelled continues which, combined with dummy while loops, permit emulating goto within a function. (And unless I forgot something, that should be all you need for Lua.)
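For example, a minimal sketch of both jump directions in plain ES3 (my illustration; print is assumed to exist, as in the Lua snippet above):

// Forward jump ("goto label; ... ::label::"): break out of a labelled block.
fwd: {
    break fwd;
    // statements between the goto and ::label:: would be emitted here
}
print("skipped");            // code after ::label:: resumes here

// Backward jump: continue a labelled dummy loop to return to its head.
var passes = 0;
back: while (true) {
    passes = passes + 1;     // statements following "::back::"
    if (passes < 3) {
        continue back;       // "goto back"
    }
    break back;              // fall through once the goto is not taken
}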
Compare pages 72-74 of the ECMAScript spec ed. #3 of 2000-03-24 to see that it should work in ES3, or just look at e.g. this answer to a question about goto in JS. As usual on the 'net, the URLs referenced there are dead but you can get summerofgoto.com [archived] at the awesome Internet Archive. (Outgoing GitHub link is also dead, but the scripts are also archived: parseScripts.js, goto.min.js or goto.js.)
I hope that's enough to get things running, good luck!

How to profile an Antlr grammar

I have an Antlr grammar that is currently about 1200 lines. It parses the language I want, but for at least one construct it is prohibitively slow, even for smaller input files. The execution time seems to grow exponentially with each added element of the construct.
I want to know if there are any good guidelines for debugging/profiling such performance problems.
I have already tried VisualVM, and that gave me the names of two methods, closureCheckingStopState and closure_, but that does not bring me much closer to figuring out what is wrong with the grammar.
There is a Profiler option in the JetBrains IDEA plugin
see:
https://github.com/antlr/intellij-plugin-v4/blob/master/README.md
Right-click on any rule to test it and you'll get tabs for:
Parse tree
Hierarchy
Profiler
See the example screenshots below.
The ambiguity lines in the Profiler tab help find ambiguous parsing rules. If you click on such a red line, the rule is highlighted.
(Screenshots: Profiler tab, Parse Tree tab.)
I rely on two primary items to analyze and improve the performance of a grammar.
The latest release of ANTLRWorks 2 includes advanced profiling capabilities. Current limitations include the following:
The profiler doesn't support languages which require a custom CharStream or TokenStream (e.g. for preprocessing the input).
The profiler doesn't execute custom embedded actions in the lexer or parser, so your grammar needs to be able to produce a parse tree without relying on these operations. Standard lexer commands such as -> skip or -> channel(HIDDEN) do not pose a problem.
The output of the profiler is tables of numbers which are not easily understood by most ANTLR users, especially in terms of knowing what you should do in response to the numbers.
I use a fork of the primary release which includes a number of optimizations not present in the reference release of ANTLR 4. Note that these features are "sparingly" documented, as their only purpose to date has been supporting the in-house development of ANTLRWorks and GoWorks. For most grammars, this fork performs roughly the same as the reference release; however, for some known grammars the "optimized" release performs over 200x faster than the reference release.
If you could provide the grammar and an input that is particularly slow, I could run the analysis and try to interpret the key pieces of the results.
The latest release of ANTLRWorks is distributed through the official NetBeans Update Center. Simply run Tools → Plugins, go to Available Plugins and locate ANTLRWorks Editor.
To run the profiler, use the Run → Interpret Parser... command. The results window is available after the parsing operation by choosing Window → Parser Debugger Controller.
As Wolfgang Fahl said, IDEA has a great plugin, but that of course just displays the information collected by your parser.
So in case you cannot use IDEA, or for example want to do profiling live, you can do it programmatically, like this:
public void parseAndProfile(GmmlSaneParser parser) {
    parser.setProfile(true);
    // do the actual parsing
    ParseInfo parseInfo = parser.getParseInfo();
    ATN atn = parser.getATN();
    for (DecisionInfo di : parseInfo.getDecisionInfo()) {
        DecisionState ds = atn.decisionToState.get(di.decision);
        String ruleName = GmmlParser.ruleNames[ds.ruleIndex];
        System.out.println(ruleName + " -> " + di.toString());
    }
}
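When reading the output, the DecisionInfo fields worth scanning first are likely timeInPrediction, invocations and ambiguities (public fields on org.antlr.v4.runtime.atn.DecisionInfo); a decision whose prediction time is large relative to its invocation count is a good first suspect.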
If you already have Android Studio, you can use the built-in ANTLR v4 plugin to access the ANTLR profiler.
The tutorial at this link works for me:
http://blog.dgunia.de/2017/10/26/creating-and-testing-an-antlr-parser-with-intellij-idea-or-android-studio/
Android Studio version used for testing: 2.3.1
