I have been playing with EUnit. It is nice, but I'm running into an issue with DETS: when my test fails and hasn't properly closed the DETS file, the file is still open in my shell and I cannot close it, because it was created by another process (the one that ran the tests).
Have you run into the same issue? Can I use try/catch in EUnit effectively?
Thanks for any input!
Use Common Test. EUnit is suitable for testing small functions without side effects.
EUnit is perfectly suitable for testing multiple processes and DETS, don't worry.
I think the only way a test failure can leave the DETS file open is if you are not using a fixture.
That is, code like this:
wrong_test() ->
    setup(),
    ?assert(false),
    cleanup().
will NOT call cleanup(), because the line with the ?assert() will throw an exception. This is expected behavior. So if cleanup() is supposed to close the DETS file, the file will not be closed.
The EUnit documentation explains that the way to be sure that the cleanup function is always executed, no matter what happens to the tests, is to use a "fixture", either setup or foreach. For example:
correct_test_() ->
    {setup,
     % Setup: open the DETS file and hand the table to the tests
     fun() ->
         {ok, Table} = dets:open_file("hello", []),
         Table
     end,
     % Cleanup: always executed, even when a test fails
     fun(Table) ->
         ?assertMatch(ok, dets:close(Table))
     end,
     % Tests
     [
      % This will fail, but the cleanup WILL be called
      ?_assert(false)
     ]}.
So, there is no need to "catch" an exception in Erlang for this particular case. You obtain the same result using a fixture.
Regarding the fact that you couldn't close the DETS file from the shell: this will not happen with a fixture. Moreover, even with your buggy test it is not a real problem, because the file will be closed properly when you exit the Erlang shell. The only time a DETS file is not closed properly is when the Erlang runtime system itself crashes.
Other helpful sources of documentation, easier to understand than the very terse official one mentioned above, are the LYSE chapter on EUnit and two presentations about EUnit that you can find on the Erlang Factory web site.
I recently started taking an interest in Lua programming with a Minecraft addon called Computercraft, which involves console-based GUIs to control computers and other things with Lua. However, I seem to be randomly getting an error where the code requires something called an "eof". I have searched multiple manuals and how-tos, and none mention this particular error. In fact, I am having trouble finding anything with an error list. I am fairly new to programming, but have had basic Python experience. Could anyone explain what "eof" is?
An "eof" is the end of the file: the error means Lua expected the file to end at that point but found more code instead.
Usually eof means you have too many (or not enough) end statements. Paste code maybe?
Suppose I create a file with one too many end statements:
> edit test.lua
for i = 1, 10 do
    print(i)
end
end
When I run it and Lua encounters that extra end statement on the last line, where there is no code block still open, I get an error like:
> test.lua
bios: 337: [string "test.lua"]: 4: '<eof>' expected
(from a quick test in CCDesk pr7.1.1)
Problems with the basic structure of the blocks of Lua code show up with bios on the left rather than your file name; the part of bios that loads Lua files will usually tell you where it was in the file when it couldn't make sense of the code anymore (like 4: here). Sometimes it might be a bit of a puzzle to work your way from the scene of the accident back to where things got off track. =)
Using ESPlorer there is a difference between "uploading" a script and "sending" a script. Use "Upload".
This is not the same issue as the one reported, but as searching landed me here...
I am working with Erlang and EUnit to do unit tests, and I would like to write a test runner to automate the running of my unit tests. The problem is that eunit:test/1 seems to return only ok or error, and not a list of the tests that ran and whether each passed or failed.
So is there a way to run tests and get back some form of a data structure of what tests ran and their pass/fail state?
If you are using rebar you don't have to implement your own runner. You can simply run:
rebar eunit
Rebar will compile and run all tests in the test directory (as well as EUnit tests inside your modules). Furthermore, rebar allows you to set the same options in rebar.config as in the shell:
{eunit_opts, [verbose, {report,{eunit_surefire,[{dir,"."}]}}]}.
You can use these options also in the shell:
> eunit:test([foo], [verbose, {report,{eunit_surefire,[{dir,"."}]}}]).
See also the documentation for the verbose option and for structured reporting.
An alternative would be to use Common Test instead of EUnit. Common Test comes with a runner (the ct_run command) and gives you more flexibility in your test setup, but it is also a little more complex to use. Common Test lacks EUnit's assertion macros, but it produces very comprehensible HTML reports.
There is no easy or documented way, but there are currently two ways you can do this. One is to pass the event_log option when you run the tests:
eunit:test(my_module, [event_log])
(This is undocumented and was really only meant for debugging.) The resulting file "eunit-events.log" is a text file that can be read back into Erlang using file:consult(Filename).
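For example, after a run with event_log:
> {ok, Events} = file:consult("eunit-events.log").
Events is then a list of Erlang terms, one per recorded event.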
The more powerful way (and not really all that difficult) is to implement a custom event listener and give it as an option to eunit:
eunit:test(my_module, [{report, my_listener_module}])
This isn't documented yet, but it ought to be. A listener module implements the eunit_listener behaviour (see src/eunit_listener.erl). There are only five callback functions to implement. Look at src/eunit_tty.erl and src/eunit_surefire.erl for examples.
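For illustration, here is a minimal sketch of such a listener. It collects each test's id and status in the state and prints them at the end. The callback names come from the eunit_listener behaviour; the keys in the Data proplist (id, status) are the ones used by eunit_tty and eunit_surefire, so double-check them against your OTP version:
-module(my_listener_module).
-behaviour(eunit_listener).

-export([start/0, start/1]).
-export([init/1, handle_begin/3, handle_end/3, handle_cancel/3, terminate/2]).

start() -> start([]).
start(Options) -> eunit_listener:start(?MODULE, Options).

% The state is simply a list of {TestId, Status} pairs.
init(_Options) -> [].

handle_begin(_Kind, _Data, St) -> St.

% Record the outcome of every single test; ignore group events.
handle_end(test, Data, St) ->
    [{proplists:get_value(id, Data), proplists:get_value(status, Data)} | St];
handle_end(group, _Data, St) ->
    St.

handle_cancel(_Kind, _Data, St) -> St.

% Called once the run is over; dump everything we collected.
terminate({ok, _Summary}, St) ->
    io:format("results: ~p~n", [lists:reverse(St)]);
terminate({error, Reason}, _St) ->
    io:format("run aborted: ~p~n", [Reason]).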
I've just pushed to GitHub a very trivial listener, which stores the EUnit results in a DETS table. This can be useful if you need to process those data further, since they are stored as Erlang terms in the DETS table.
https://github.com/prof3ta/eunit_terms
Example of usage:
> eunit:test([fact_test], [{report,{eunit_terms,[]}}]).
All 3 tests passed.
ok
> {ok, Ref} = dets:open_file(results).
{ok,#Ref<0.0.0.114>}
> dets:lookup(Ref, testsuite).
[{testsuite,<<"module 'fact_test'">>,8,<<>>,3,0,0,0,
[{testcase,{fact_test,fact_zero_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_neg_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_pos_test,0,0},[],ok,0,<<>>}]}]
Hope this helps.
io:format calls in common_test modules don't come to the user console, although error_log messages do. I can't figure out where io:format calls DO go, either. Running ack in my repo on relevant strings turns up nothing. Does anyone know where they go?
Define {logdir, "logs"}. in your test spec, and the io:format output goes to the log. Common Test captures the IO output from your tests by setting the group leader via erlang:group_leader/2.
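For example, a minimal test spec could look like this (the file name my.spec and the suite location are placeholders for your own setup):
%% my.spec -- run with: ct_run -spec my.spec
{logdir, "logs"}.
{suites, ".", all}.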
The actual output is underneath the respective test case in the log output of that test case. logs/index.html is your starting point to peruse.
Finally, it is somewhat loosely documented in http://www.erlang.org/doc/apps/common_test/run_test_chapter.html section 6.9.
Is it possible to check if a Lua script contains errors without executing it? I have the following code:
if(luaL_loadbuffer(L, data, size, name))
{
fprintf (stderr, "%s", lua_tostring (L, -1));
lua_pop (L, 1);
}
if(lua_pcall(L, 0, 0, 0))
{
fprintf (stderr, "%s", lua_tostring (L, -1));
lua_pop (L, 1);
}
But if the script contains errors, it passes the first if and is executed anyway. I want to know whether it contains errors when I load it, not when I execute it. Is this possible?
You can use the Lua compiler, luac. It will only compile your file to bytecode without executing it.
Your program will also have the advantage of loading faster if it is precompiled.
You can even use the -p option to perform only a syntax check, according to the linked man page:
-p  load files but do not generate any output file. Used mainly for syntax checking or testing precompiled chunks: corrupted files will probably generate errors when loaded. For a thorough integrity test, use -t.
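For example, to syntax-check a script without producing any output file (myscript.lua stands in for your own file):
luac -p myscript.lua
On success it prints nothing and exits with status 0; on a syntax error it prints the compiler message and exits with a non-zero status.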
(This was originally meant as a reply to the first comment to Krtek's question, but I ran out of space there and to be honest it works as an answer just fine.)
Functions are essentially values, and thus a named function is actually a variable of that name. Variables, by their very definition, can change as a script is executed. Hell, someone might accidentally redefine one of those functions. Is that bad? To sum up my thoughts: depending on the script, the parameters passed, and/or the actual implementations of those pre-defined functions you speak of (one might unset itself or others, for example), it is not possible to guarantee things work unless you are willing to narrow down some of your demands. Lua is too dynamic for what you are looking for. :)
If you want a flawless test: create a dummy environment with all bells and whistles in place, and see if it crashes anywhere along the way (loading, executing, etc). This is basically a sort of unit test, and as such would be pretty heavy.
If you want a basic check to see if a script has valid syntax: Krtek gave an answer for that already. I am quite sure (but not 100%) that the Lua equivalent is loadfile or loadstring, and the respective C equivalent is to try to lua_load() the code. Each of these converts a readable script to bytecode, which you would already need to do before you could actually execute the code in your normal all-is-well use case. (And if the script contains function definitions, those need to be executed later on for the code inside them to run.)
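As a rough sketch of that Lua-side check (loadstring is the Lua 5.1 name; in Lua 5.2 and later use load):
-- Compile the chunk without running it; the extra 'end' makes it invalid.
local f, err = loadstring("print('hi') end")
if not f then
    print("syntax error: " .. err)
end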
However, these are the extent of your options with regards to pre-empting errors before they actually happen. Lua is a very dynamic language, and what is a great strength also makes for a weakness when you want to prove correctness. There are simply too many variables involved for a perfect solution.
In general it is not possible, as Lua is a dynamic language, and most errors happen at runtime.
If you want to check for syntax errors, use the luac -p option. I use it as part of my pre-commit hook, for example.
Other common errors are triggered by misusing global variables. You can analyze the output of luac -l to catch these cases. See here: http://lua-users.org/wiki/DetectingUndefinedVariables.
If you want something more advanced, there are several more-or-less functional static analysis tools for Lua code. Start with LuaInspect.
In any case, you are advised to write unit tests instead of just relying on static code checks. Less pain, more gain.
I've got an application where I cast a message to a gen_server to start an operation, then call the gen_server every second to gather intermediate results until the operation completes. In production it usually takes a couple of minutes, but it is only limited by the input size, and I'd like to test hour-long operations too.
I want to always make sure this operation still works by running a test as needed. Ideally, I'd like to run this test multiple times with different inputs, also.
I use EUnit right now, but it doesn't seem to have a purpose-built way of exercising this scenario. Does Common Test provide for this? Is there an elegant way of testing it, or should I just hack something up? In general, I'm having trouble wrapping my head around how to systematically test stateful, asynchronous operations in Erlang.
Yes, common test will do this.
This is a cut-down version of the Common Test suite skeleton that our Erlang Emacs mode provides (you can use the normal Erlang one or the Erlware one):
-module(junk_SUITE).
%% Note: export_all should only be used in test suites.
-compile(export_all).
-include_lib("common_test/include/ct.hrl").
%%
%% set up for the suite...
%%
init_per_suite(Config) ->
    Config.
end_per_suite(_Config) ->
    ok.
%%
%% setup for each case in the suite - it knows which test case it is in
%%
init_per_testcase(_TestCase, Config) ->
    Config.
end_per_testcase(_TestCase, _Config) ->
    ok.
%%
%% allows the suite to be programmatically managed;
%% describe the main purpose of this suite here
%%
all() ->
    [test_case].
%% Test cases start here.
%%--------------------------------------------------------------------
%% Describe the main purpose of the test case here.
test_case(Config) when is_list(Config) ->
    ok.
There are 2 basic ways you could do it.
First up, start the gen_server in init_per_suite/1, then have a large number of atomic tests that act on that long-running server, and finally tear the gen_server down in end_per_suite/1. This is the preferred way - your gen_server should be long-running and persistent over many transactions, blah-blah...
The other way is to make a singleton test: start the gen_server in init_per_testcase/2 and tear it down in end_per_testcase/2.
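As a rough sketch of the first approach (my_server, start_op, and get_progress are placeholder names for your gen_server and its protocol; the timetrap raises Common Test's default 30-minute limit so hour-long runs are not killed):
suite() ->
    [{timetrap, {hours, 2}}].

init_per_suite(Config) ->
    {ok, Pid} = my_server:start_link(),
    [{server, Pid} | Config].

end_per_suite(Config) ->
    ok = gen_server:stop(?config(server, Config)).

%% Cast to kick off the operation, then poll once a second until it is done.
long_running_test_case(Config) ->
    Pid = ?config(server, Config),
    ok = gen_server:cast(Pid, {start_op, some_input}),
    done = wait_until_done(Pid).

wait_until_done(Pid) ->
    case gen_server:call(Pid, get_progress) of
        done ->
            done;
        {in_progress, _Partial} ->
            timer:sleep(1000),
            wait_until_done(Pid)
    end.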
Testing Stateful Asynchronous operations is hard in any language, Erlang or otherwise.
I would actually recommend using etap and having the asynchronous tests run a callback that then calls etap:end_tests().
Since etap uses a running test server and waits for the end_tests call, you have a little more control over asynchronous tests.