Getting test results from EUnit in Erlang

I am working with Erlang and EUnit to do unit tests, and I would like to write a test runner to automate running my unit tests. The problem is that eunit:test/1 seems to return only "ok" or "error", not a list of the tests that ran and whether each one passed or failed.
So is there a way to run tests and get back some form of a data structure of what tests ran and their pass/fail state?

If you are using rebar you don't have to implement your own runner. You can simply run:
rebar eunit
Rebar will compile and run all tests in the test directory (as well as EUnit tests inside your modules). Furthermore, rebar allows you to set the same options in rebar.config as in the shell:
{eunit_opts, [verbose, {report,{eunit_surefire,[{dir,"."}]}}]}.
You can use these options also in the shell:
> eunit:test([foo], [verbose, {report,{eunit_surefire,[{dir,"."}]}}]).
See also the EUnit documentation for the verbose option and for structured reports.
An alternative would be to use Common Test instead of EUnit. Common Test comes with a runner (the ct_run command) and gives you more flexibility in your test setup, but it is also a little more complex to use. Common Test offers fewer assertion macros than EUnit but produces very comprehensible HTML reports.
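For reference, a minimal command-line run looks something like this (the directory names here are just placeholders):
ct_run -dir test -logdir logs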

There is no easy or documented way, but there are currently two ways you can do this. One is to pass the option event_log when you run the tests:
eunit:test(my_module, [event_log])
(this is undocumented and was really only meant for debugging). The resulting file "eunit-events.log" is a text file that can be read by Erlang using file:consult(Filename).
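For example, you can load that log back in the shell (using the default file name mentioned above); Events is then an ordinary list of Erlang terms describing the run:
> {ok, Events} = file:consult("eunit-events.log").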
The more powerful way (and not really all that difficult) is to implement a custom event listener and give it as an option to eunit:
eunit:test(my_module, [{report, my_listener_module}])
This isn't documented yet, but it ought to be. A listener module implements the eunit_listener behaviour (see src/eunit_listener.erl). There are only five callback functions to implement. Look at src/eunit_tty.erl and src/eunit_surefire.erl for examples.
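To give an idea of the shape of such a module, here is a minimal sketch modelled on eunit_tty.erl. The module name is made up, and the exact keys in the Data proplist are an assumption; check src/eunit_listener.erl in your OTP release before relying on them.
-module(my_listener_module).
-behaviour(eunit_listener).

-export([start/0, start/1]).
-export([init/1, handle_begin/3, handle_end/3, handle_cancel/3, terminate/2]).

start() -> start([]).

%% EUnit starts the listener when given {report, my_listener_module}.
start(Options) -> eunit_listener:start(?MODULE, Options).

init(_Options) -> [].   % state: a list of {Source, Status} results

handle_begin(_Kind, _Data, St) -> St.

%% Collect one entry per finished test; Data is a proplist
%% (the 'source' and 'status' keys are assumptions based on eunit_tty.erl).
handle_end(test, Data, St) ->
    [{proplists:get_value(source, Data), proplists:get_value(status, Data)} | St];
handle_end(_Kind, _Data, St) -> St.

handle_cancel(_Kind, _Data, St) -> St.

%% Called once at the end of the run; dump what we collected.
terminate({ok, _Summary}, St) ->
    io:format("~p~n", [lists:reverse(St)]);
terminate(_Other, _St) -> ok.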

I've just pushed to GitHub a very trivial listener which stores the EUnit results in a DETS table. This can be useful if you need to process the results further, since they're stored as Erlang terms in the DETS table.
https://github.com/prof3ta/eunit_terms
Example of usage:
> eunit:test([fact_test], [{report,{eunit_terms,[]}}]).
All 3 tests passed.
ok
> {ok, Ref} = dets:open_file(results).
{ok,#Ref<0.0.0.114>}
> dets:lookup(Ref, testsuite).
[{testsuite,<<"module 'fact_test'">>,8,<<>>,3,0,0,0,
[{testcase,{fact_test,fact_zero_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_neg_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_pos_test,0,0},[],ok,0,<<>>}]}]
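If you need those numbers programmatically, a hedged sketch along these lines should work. The tuple layout is assumed purely from the output above (in particular, the meaning of the four counters is a guess), so check the record definitions in eunit_terms before relying on it:
summarize() ->
    {ok, Ref} = dets:open_file(results),
    %% Counter order (Pass, Fail, Abort, Skip) is an assumption.
    [{testsuite, Name, _Time, _Output, Pass, Fail, Abort, Skip, Cases}] =
        dets:lookup(Ref, testsuite),
    ok = dets:close(Ref),
    %% One {Module, Function, Status} per test case.
    Results = [{M, F, Status} || {testcase, {M, F, _, _}, _, Status, _, _} <- Cases],
    {Name, Pass, Fail, Abort, Skip, Results}.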
Hope this helps.

Related

how to show all gtest case by bazel without "test" cmd

I want to query all gtest cases with Bazel.
The parameter "--gtest_filter" can only be used with the "bazel test" command.
I tried "bazel query //xxx:all", but that only shows the test targets defined in the BUILD file; I want to get the list of test cases from the xxx.cc files.
This is not a job that bazel query can do. Query operates on the graph structure of targets. A fundamental design decision of Bazel is that this graph can be computed by looking only at BUILD files and the .bzl files they depend on. In particular, parsing source files is not allowed.
(The argument to --test_filter is simply passed through to the test runner; Bazel does not know what it represents.)
If you use CLion with the Bazel plugin, you get a tree view of all googletest tests in the IDE.
This works even with Catch2 (though for Catch2 the view is not as nice). I guess there is some IDE magic at work here; nevertheless, it gives you what you want. I assume you could also come up with some kind of Bazel aspect that produces this information for you.
I also tested this with Lavender (with minor modifications) and Visual Studio, which likewise shows a list of all tests in its test overview.

Dets leaves open process when test fails with EUnit

I have been playing with EUnit; it is nice, but I'm running into an issue with DETS: when my test fails and hasn't properly closed the DETS file, the file is still open in my shell and I cannot close it, because it was created by another process (the one that ran the tests).
Have you run into the same issue? Can I use try/catch in EUnit effectively?
Thanks for any input!
Use Common Test. EUnit is suitable for testing small functions without side effects.
EUnit is perfectly suitable for testing multiple processes and DETS, don't worry.
I think the only way a failing test can leave the DETS file open is if you are not using a fixture.
That is, a code like this:
wrong_test() ->
    setup(),
    ?assert(false),
    cleanup().
will NOT call cleanup(), because the line with the ?assert() will throw an exception. This is expected behavior. So if the cleanup() is supposed to close the DETS file, it will not close it.
The EUnit documentation explains that the way to be sure that the cleanup function is always executed, no matter what happens to the tests, is to use a "fixture", either setup or foreach. For example:
correct_test_() ->
    {setup,
     % Setup: open the DETS file and hand the table reference to the tests
     fun() ->
         {ok, Table} = dets:open_file("hello", []),
         Table
     end,
     % Cleanup: always executed, even when a test fails
     fun(Table) ->
         ok = dets:close(Table)
     end,
     % Tests
     [
      % This will fail, but the cleanup WILL be called
      ?_assert(false)
     ]}.
So, there is no need to "catch" an exception in Erlang for this particular case. You obtain the same using a fixture.
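For completeness, a foreach fixture works the same way but runs the setup/cleanup pair around each test in the list. A hedged sketch, reusing the "hello" file name from above and assuming eunit.hrl is included:
foreach_test_() ->
    {foreach,
     fun() -> {ok, Table} = dets:open_file("hello", []), Table end,
     fun(Table) -> ok = dets:close(Table) end,
     [
      % Each instantiator receives the setup result and returns a test.
      fun(Table) -> ?_assertEqual(ok, dets:insert(Table, {key, value})) end,
      % This one fails, but the cleanup is still called for it.
      fun(_Table) -> ?_assert(false) end
     ]}.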
Regarding the fact that you couldn't close the DETS file from the shell: this will not happen with a fixture. And even with your buggy test it is not a real problem, because the file will be closed properly when you exit the Erlang shell. The only time a DETS file is not closed properly is when the Erlang runtime system itself crashes.
Other helpful sources of documentation, easier to understand than the very terse official one mentioned above, are the LYSE chapter on EUnit and two presentations about EUnit that you can find on the Erlang Factory web site.

How to avoid init_per_suite and end_per_suite to be counted as test cases in Common Test?

I have a test suite which has the init and end functions implemented in it.
When I run the suite it produces some html outputs to show the results of the test cases (pass and fail etc.) from the suite.
But init_per_suite and end_per_suite are also counted as test cases, and their results are shown in the log. Is there a way to avoid this? I guess there might be a flag in Erlang Common Test that can be used to disable this.
No, you can't disable it. Besides, whether init_per_suite/end_per_suite succeeds or fails may be important information.
Also, note that init_per_suite/end_per_suite are not included in the general numbering of test cases in the resulting HTML; that may help if you want to parse the HTML output. You can also sort cases by their numbers so that the unnumbered ones end up at the top or bottom.
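For reference, even a minimal suite like this (module and function names are made up) will show init_per_suite and end_per_suite in the HTML log, just without a test case number:
-module(example_SUITE).
-include_lib("common_test/include/ct.hrl").

-export([all/0, init_per_suite/1, end_per_suite/1, simple_case/1]).

all() -> [simple_case].

%% Logged in the HTML report, but not numbered as a test case.
init_per_suite(Config) -> Config.

end_per_suite(_Config) -> ok.

simple_case(_Config) ->
    1 = 1,
    ok.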

Is it possible to display the call "graph" of an ant extension-point?

I have an extension-point defined in ant :
<extension-point name="foo"/>
A lot of tasks contribute to this point in several imported ant files :
<bindtargets targets="bar" extensionPoint="foo" />
However, I'm somewhat lost as to exactly which targets are contributing. Is there a way to have Ant report the targets that would be triggered by a given extension point? More generally, is there a way to display the "call graph" (or simply the list of dependencies) of an Ant target?
I tried using Ant's verbose options (-v and such), with no luck.
Thanks
First of all, you can try to debug the ANT process in your IDE using remote debugging by adding some parameters to ANT_OPTS (mine is set in ~/.profile):
http://blog.dahanne.net/2010/06/03/debugging-any-java-application/
Profiling may also help. I found the Antro project on the Ant wiki:
http://sourceforge.net/projects/antro
Maybe you can try it out. The project is said to be designed specifically for Ant, which looks promising for your problem.
You can also use the YourKit Java Profiler to do CPU profiling. YJP can show the call graph of a Java application, but I'm not sure whether you can tell from it which calls correspond to Ant targets.
The following document shows how to start a java application with YJP agent.
http://www.yourkit.com/docs/95/help/agent.jsp
I know of 2 ways to get this information:
You can get the effective target/extension-point invocation sequence from Ant's console logger. To do this, place Ant's logging facility into verbose mode by passing -verbose on the command line to Ant. There are two lines, one after the other, that dump to the console immediately before most targets as they are invoked in your build script:
A line that shows a summary of the targets in the call sequence starting with the text, Build sequence for target(s) 'artifact' is [...].
A line showing the detailed call sequence (nested targets and antcalls included). This line starts with the text, Complete build sequence is [...]. This listing considers, as much as reasonably possible, the evaluation of any if and unless attributes of each target listed at the point the line is logged to the console.
Simply invoke your Ant build as you would normally with the -verbose option and your console should have the information you're looking for.
You can get a pictorial representation of the call sequence using a tool called Grand. However, it hasn't been updated for quite some time and thus doesn't support extension-points (which is what you're concerned with here). It interprets antcall, ant, and depends relationships. It doesn't evaluate the if and unless attributes but simply identifies the potential execution sequence - more of a dependency hierarchy than an actual call graph. The project is on GitHub, so adding support for extension-points may not be too difficult.
The graphic is rendered using Graphviz.
For an actual call sequence, use option 1.
This is pretty sloppy, but it works. Ant is actually pretty easy to script, and if you are using at least Java 6 (or it might be Java 7), JavaScript support is built in and can be used right out of the box. This defines a task that echoes the dependencies of any target in call order:
<scriptdef name="listdepends" language="javascript">
    <attribute name="target"/>
    <![CDATA[
        var done = [];
        var echo = project.createTask("echo");

        // Visit a target's dependencies recursively, then echo the target
        // name itself, so targets are listed in call order.
        function listdepend(t) {
            done.push(t.getName());
            var depends = t.getDependencies();
            while (depends.hasMoreElements()) {
                var t2 = depends.nextElement();
                if (done.indexOf(t2) == -1) listdepend(project.getTargets().get(t2));
            }
            echo.setMessage(t.getName());
            echo.perform();
        }

        var t = attributes.get("target");
        if (t != null) {
            var targ = project.getTargets().get(t);
            listdepend(targ);
        }
    ]]>
</scriptdef>
In your case, you can create a new target (or not) and call it like so:
<target name="listfoo">
<listdepends target="foo"/>
</target>
As I said, this is somewhat sloppy. It probably isn't very fast (although unless your target triggers thousands of others, it probably isn't noticeably slow). It won't handle antcall tasks (although it could easily be modified to do so) or respond to if and unless attributes. If dependencies nest too deep, it may hit a recursion depth limit (but I doubt any project nests them that deep).
The array is used to make sure that each dependency is listed once (ant would only run them once).

How to execute a exe file using fitnesse

I want to call an .exe file in my FitNesse test case.
How can I call an .exe file from my test cases?
With FitNesse, you'll need to write a fixture to run the EXE (and/or find a FitNesse plugin that does it for you). The easiest way is to write a simple fixture and just run
Runtime.getRuntime().exec(<cmd>);
Steven Mastandrea's answer is right, but it does require you to write a Java class extending one of the fixtures provided by FitNesse, compile it, put the class files on the FitNesse classpath, and then use it.
There is a much simpler way of doing it if you use Generic Fixture like this:
!| Generic Fixture |
| exec | mycommand.exe | | expected output |
Disclaimer: Generic Fixture was written and distributed by me as open source 2 years ago on sourceforge.
With fitSharp on Windows, you can write this:
|with|type|System.Diagnostics.Process|
|with|start|C:\dev\myFileImporter.exe||-f c:\dev\data\file.txt|
|wait for exit|
I would suggest taking the CommandLineFixture as a baseline, and expand it from there. The CommandLineFixture has a lot of functionality and is well commented and easily extended should you wish to do so.
This fixture incorporates Steven's code, but has a lot more functionality than simply exec, including being able to asynchronously spawn processes, search output for expected results, etc.
Post a comment if you feel some examples of how to use it would be helpful!
