Erlang EUnit test module that depends on a library application

I have a medium-sized release with a handful of applications. I recently refactored some common functionality out into a library application within the release. This made my EUnit tests fail with undef messages whenever testing anything that required the library application.
The setup is something like this:
% In apps/utils/src/utils.erl
-module(utils).
-export([foo/0]).
foo() -> "OH HAI".
Then
% In apps/some_app/src/some_app.erl
-module(some_app).
-export([bar/0]).
bar() -> io:format("foo: ~s~n", [utils:foo()]).
% unit tests for bar()
Then the unit tests for some_app:bar() fail. I'm running them with rebar eunit skip_deps=true. I'm using skip_deps=true because my release uses some 3rd party applications (SQL, etc).
I assume that the tests start failing because EUnit is invoking the app under test without its dependencies? Is there any way to fix this? I have configured the .app file to explicitly declare the dependency. It works fine in the release, and it's been deployed for about a day now with no problem, but I'll feel a lot better if I can get the tests to pass again :)
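For reference, the relevant part of the .app file looks something like this (a sketch using the names above; the key is listing utils in the applications tuple):
% In apps/some_app/src/some_app.app.src
{application, some_app,
 [{description, "Some app"},
  {vsn, "1.0.0"},
  {applications, [kernel, stdlib, utils]}]}.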
(I could use a mocking app to stub out utils:foo/0, and I can see where that would be ideal idiomatically, but that seems like overkill in this case because utils:foo/0 (read: its real-world counterpart) does some really simple stuff.)

I was able to get this to work by doing rebar compile eunit skip_deps=true.
The key is to have the compile step in there, and I have no idea why. I'm guessing that the compile step gets all of the modules built and onto the code path. I'd love to hear a good explanation.

I think you could have one of your applications load the utility by including it in the application portion of your .app file, as in:
{application, yourapp,
 [{description, "A description"},
  {vsn, "1.0.0"},
  {modules, [mod1, mod2, utils]},
  SNIP
or in some other way add it to the path of the Erlang node... maybe using the -pa flag when starting the node.
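For example (a sketch, assuming the default layout from the question):
erl -pa apps/utils/ebin -pa apps/some_app/ebin
This puts both applications' compiled beams on the code path before anything calls utils:foo/0.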

Related

What is the workflow for compiling one file using rebar3?

rebar3 seems to recompile everything every time.
Often I am only modifying one file. That's the file I want to recompile, I know everything else is fine.
What is the workflow for doing this? Ideally I could do it from the Erlang shell. Rebar3 already knows my include paths and build directory with the beams in it, how can I take advantage of Rebar's knowledge so I don't have to type it all into the shell over again as arguments to c(File, Opts)?
Keep in mind that rebar3 avoids recompiling up-to-date modules (although it checks them).
That being said, I think the most popular option (and suitable for your needs) is using this plugin.
In my case, I have a set of scripts to set up inotifywait and bring the whole release down and up again. I also often create shell functions if I need to compile often:
4> C = fun() -> c('my_awesome_module', []) end.
#Fun<erl_eval.45.97283095>
5> C().
{error,non_existing}
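Another option, if your rebar3 is recent enough to provide it (treat this as an assumption to verify): rebar3 shell injects an agent module named r3 that reuses rebar's own include paths and build options, so you don't have to retype them:
$ rebar3 shell
1> r3:compile().   % recompile the project with rebar3's settings and reload changed modules
2> r3:do(compile). % the same thing via the generic task runner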

Dealing with shared helpers in Common Test suites?

I've got an Erlang project comprising a bunch of different applications. I'm using Common Test to do some of the testing.
apps/foo/suites/foo_SUITE.erl
apps/bar/suites/bar_SUITE.erl
I'm starting to see duplication of utility code in those suites.
Where should I put my utility code so that it can be shared between the two suites?
I've considered adding another application:
apps/test_stuff
...but I can't make the CT suites depend on this without making the application under test depend on this (or can I?). I don't want to do that, because test_stuff is only needed when testing.
I have a similar problem with my eunit tests, both between applications (apps/foo/test vs. apps/bar/test), and where I'm using similar functionality between the eunit and CT tests in the same application (apps/bar/suites vs apps/bar/test). Can I use the same solution for this case as well? Or do I need to ask another question about that?
Do you think ct:require/1,2 could help, so that foo_SUITE and bar_SUITE would require test_stuff before they get executed? For more information, see http://www.erlang.org/doc/man/ct.html#require-1
It depends on how you are packaging your final releases. For example, I use rebar for release management. I have Cowboy fetched along with other dependencies for testing purposes, but in my reltool.config I omit it, so it doesn't get packaged with the final product. I use rebar to run Common Test, and it's able to add Cowboy to the path without having it bundled as a lib with everything else or added as a dependency to the app I'm testing.
However, if you have another process which infers your release configuration from your dependencies, you'll have to find a way to exclude your test code when you generate a release.
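As an illustration of that approach (a sketch; the paths assume the layout above, and test_stuff must be compiled first): since ct_run passes -pa through to the Erlang node, you can make test_stuff visible to the suites without listing it as a dependency of any application:
ct_run -pa apps/test_stuff/ebin -dir apps/foo/suites apps/bar/suites
Because test_stuff only ever appears on the code path, no release tool will infer it as a runtime dependency.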

EUnit vs. Common Test

I am new to Erlang. It has two test frameworks: EUnit and Common Test. I am confused about when to use one over the other. Can someone explain to me the advantages of EUnit over Common Test, and vice versa? It seems Common Test can do everything EUnit can do and much more, so I'm not sure why I should use EUnit. Thanks!
Learn You Some Erlang (one of the best online resources for Erlang, besides the official documentation) explains both methods quite well:
EUnit
Common Test
As Pascal pointed out, EUnit is best suited for white-box testing: internal, function-by-function unit tests and light integration tests.
Common Test does the more heavy lifting: integration & system testing, black-box kind of stuff. It's also more complex, of course, and much more powerful.
While you're at it, you might try Dialyzer, another integrated testing tool in Erlang/OTP, which is great at locating dead or unreachable code and logical & type errors (it's a static type analyser). Again, Learn You Some Erlang provides a nice introduction to it: Dialyzer.
Oh, by the way, if you choose to put the EUnit tests in separate files (which is perfectly possible), you won't be able to test unexported functions (that's expected). What's also expected is that Common Test does not test unexported functions: otherwise it wouldn't be a black-box testing tool (or maybe a cheating one).
EUnit is really simple and fits very well for module or library tests at the white-box level. It is also integrated with rebar.
Common Test is more oriented towards black-box testing and application- or system-level tests. It also provides test coverage very easily.
Edit (after Andy comment):
It is true that you can use Common Test for unit-level white-box tests, just as you can leverage EUnit for some application tests using fixtures.
However, EUnit is very handy for simple unit tests: you write a function myFun, then in your test module you add a test function myFun_test or a test generator myFun_test_ (useful for exercising many patterns even if some test fails in the middle), and that's it. You can run it as often as you want (no test history is kept).
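For instance, a minimal test module for a hypothetical my_module:myFun/0 might look like this (a sketch using the standard eunit assert macros):
% my_module_tests.erl (hypothetical)
-module(my_module_tests).
-include_lib("eunit/include/eunit.hrl").

% plain test function: runs as a single test
myFun_test() ->
    ?assertEqual(ok, my_module:myFun()).

% test generator: returns a list of tests, and every one of them
% runs even if an earlier one fails
myFun_test_() ->
    [?_assertEqual(ok, my_module:myFun()),
     ?_assert(my_module:myFun() =/= undefined)].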
Common Test asks you to list each test case in the all/0 function or in a group. As far as I know it has no test generators, so it is less easy to run through all the test patterns of each function; that is why I think it is less suited to unit-level white-box tests. On the other hand, init_per_testcase, init_per_group and friends are much more flexible than EUnit fixtures for organizing tests that need some application context to run. Common Test also keeps a history of all test runs in the log directory. That is nice, but I suggest limiting the number of runs kept so it stays useful.
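For comparison, a minimal suite showing the pieces mentioned above (all/0 and init_per_testcase/2) could look like this sketch:
% my_SUITE.erl (hypothetical)
-module(my_SUITE).
-include_lib("common_test/include/ct.hrl").
-export([all/0, init_per_testcase/2, end_per_testcase/2]).
-export([my_first_case/1]).

% every test case must be listed here (or in a group)
all() -> [my_first_case].

init_per_testcase(_Case, Config) ->
    % set up the application context the case needs
    Config.

end_per_testcase(_Case, _Config) ->
    ok.

my_first_case(_Config) ->
    42 = 6 * 7.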
EDIT:
To avoid the problem of unexported functions, for both EUnit and Common Test, it is possible to use defines. For example, in rebar.config (because I use separate files for EUnit tests):
{eunit_compile_opts, [{d,'EUNIT_TEST',true}]}.
{erl_opts, [debug_info, warn_export_all]}.
and in a module, if it is necessary:
%% export all functions when used in eunit context
-ifdef(EUNIT_TEST).
-compile(export_all).
-endif.
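With that define in place, a separate test module can exercise functions that are normally unexported (internal_helper/2 is a hypothetical name here):
% test/help_num_tests.erl (sketch)
-module(help_num_tests).
-include_lib("eunit/include/eunit.hrl").

% callable only because EUNIT_TEST triggered -compile(export_all)
internal_helper_test() ->
    ?assertEqual(3, help_num:internal_helper(1, 2)).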
You can verify that it only modifies the code compiled for eunit:
D:\git\helper>rebar clean eunit compile
==> helper (clean)
==> helper (eunit)
Compiled test/help_list_tests.erl
Compiled test/help_ets_tests.erl
Compiled test/help_num_tests.erl
Compiled src/help_ets.erl
Compiled src/help_list.erl
Compiled src/helper.erl
src/help_num.erl:6: Warning: export_all flag enabled - all functions will be exported
Compiled src/help_num.erl
Compiled src/help_code.erl
All 31 tests passed.
Cover analysis: d:/git/helper/.eunit/index.html
==> helper (compile)
Compiled src/help_ets.erl
Compiled src/help_list.erl
Compiled src/helper.erl
Compiled src/help_num.erl
Compiled src/help_code.erl
From http://www.erlang.org/doc/apps/common_test/basics_chapter.html:
Common Test is also a very useful tool for white-box testing Erlang code (e.g. module testing), since the test programs can call exported Erlang functions directly and there's very little overhead required for implementing basic test suites and executing simple tests. For black-box testing Erlang software, Erlang RPC as well as standard O&M interfaces can for example be used.

Is it possible to run erlang without compilation?

Is there any VM for Erlang that allows you to compile on the fly instead of compiling beforehand?
There is a possibility to compile from the shell, thanks Martin.
Are there any pros or cons to doing this?
Will you still be able to hot swap code?
Is this the regular way of handling code?
What benefits does the compiler give you in the end then?
From the Erlang shell, you can compile a module on the fly using c("path/to/module.erl"). You can also access this functionality through the compile module, specifically the compile:file/{1,2} functions.
For example, suppose we have a file mymod.erl:
-module(mymod).
-export([myfun/0]).
myfun() -> io:format("Hello Joe~n").
Now, from the Erlang shell (or some other module!):
1> compile:file("mymod.erl").
{ok,mymod}
2> mymod:myfun().
Hello Joe
See Erldocs on the compile module for more information.
You can do a great deal with the Erlang compiler at runtime. For example, you can dynamically generate code for a module (use erl_syntax!) and then compile it, without even writing it to a file, using compile:forms/{1,2}.
(Insert standard speech on great power and great responsibility.)
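As a small illustration (a sketch that uses the abstract format directly instead of erl_syntax), here is a module built, compiled and loaded entirely in memory:
% dynamic_demo.erl (hypothetical)
-module(dynamic_demo).
-export([make_greeter/0]).

% Builds a module 'greeter' from abstract forms, compiles it with
% compile:forms/1 and loads the binary without writing a .beam file.
make_greeter() ->
    Forms =
        [{attribute, 1, module, greeter},
         {attribute, 2, export, [{hello, 0}]},
         {function, 3, hello, 0,
          [{clause, 3, [], [],
            [{call, 3, {remote, 3, {atom, 3, io}, {atom, 3, format}},
              [{string, 3, "Hello Joe~n"}]}]}]}],
    {ok, greeter, Bin} = compile:forms(Forms),
    {module, greeter} = code:load_binary(greeter, "greeter.erl", Bin),
    greeter:hello().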
Will you still be able to hot swap code?
Yes.
Is this the regular way of handling code?
No. Normally Erlang code is compiled ahead of time into BEAM bytecode. Depending on whether Erlang was started in embedded or interactive mode, the modules are either loaded on startup, or dynamically as they are referenced. If you are building a release, you basically have to compile ahead of time.
What benefits does the compiler give you in the end then?
Well, for one thing, we can build compact releases without unnecessary components like the compiler. Of course, we also get all the traditional benefits of ahead-of-time compilation, particularly that of not having to waste time compiling all the time.
To sum it up, unless you fully understand the implications and have a very good reason not to compile your code ahead of time, please follow the standard practices.
The Erlang VM can only run compiled code! If you want to interpret Erlang code then you need an interpreter. The module erl_eval implements an Erlang interpreter and is part of the standard Erlang/OTP distribution. It is used by the Erlang shell to interpret the expressions entered.
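For example, this is roughly what the shell does with each expression you type (a sketch):
% scan, parse and interpret "1 + 2 * 3." without compiling anything
{ok, Tokens, _} = erl_scan:string("1 + 2 * 3."),
{ok, [Expr]} = erl_parse:parse_exprs(Tokens),
{value, 7, _Bindings} = erl_eval:expr(Expr, erl_eval:new_bindings()).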
All code handling in the Erlang VM, whether compiling, loading or updating, is done at the module level, so it is impossible to compile or load just one function. The Erlang compiler is written in Erlang, is always available, and can compile to either a file or a binary which can be immediately loaded into the system. As @MartinTörnwall has pointed out, compiling a module from the shell using c(module) is in essence compiling on the fly.
So there would be no problems in automatically compiling code on the fly when it is used, at the module level. It is just that the current system is not designed to work that way and by default when it tries to load a module it only looks for the pre-compiled object file, the .beam file.
Erlang has an interpreter, escript. An entire Erlang program can be written as a script. Almost all features are available.
By default, the script will be interpreted. You can force it to be compiled by including -mode(compile). in the script.
Though it depends on the way you design your application, regular practice is to have .erl files which are compiled and run, rather than escript files.
So now you have many options.
Compile the .erl file to .beam using c(my_module); this automatically loads the .beam file, so the existing VM can run it on the fly. Alternatively, in code, you can use the compile module's functions (such as compile:file/1) together with code:purge/1 and code:load_file/1 to load and run it on the fly.
Compile and keep the .erl files using erlc, erl -make, rebar, etc. (Erlang has rich support for this) and then run the result. You can build archives, boot scripts, releases, etc. to manage running and releasing the Erlang software. This is usually the practice for production.
Use escript and run everything in interpreted mode.
Use escript with the -mode(compile). directive to tell the Erlang VM to compile the code when the escript starts and run the compiled code (in memory); see the sketch after this list.
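A minimal script showing option 4 might look like this (hello.escript is a hypothetical name):
#!/usr/bin/env escript
%% hello.escript - because of -mode(compile), the VM compiles the
%% script at startup instead of interpreting it.
-mode(compile).

main(_Args) ->
    io:format("Hello Joe~n").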
Are there any pros or cons to doing this?
Compiled code is faster than interpreted code. I don't see any other differences right now in Erlang, as pretty much everything is supported in both. Erlang even supports combining the two (calling compiled code from interpreted code).
Will you still be able to hot swap code?
Yes, in all cases. Your code should also be written to handle this.
Is this the regular way of handling code?
Option 2 for production. Option 1 for learning / simple development. Options 3 and 4 on an as-needed basis for specific requirements (maybe one-time runs).
What benefits does the compiler give you in the end then?
To make it clear: the erlc program provides a common way to run all compilers in the Erlang system, and the compile module gives an interface to the Erlang compiler. The compiler produces an intermediate binary .beam file, which helps Erlang code run faster than its interpreted counterpart. It also catches syntax errors (compilation errors) up front.

Organization of UnitTests in an existing library ProjectGroup

In our Delphi 2007 environment we have an SGLibrary groupproj which contains some 30 bpls. We're just starting out creating unit tests for these libraries and are not sure what the most convenient way of organizing the unit-test projects would be.
We are inclined to create a test executable for each bpl, as this will make compilation and running easy and fast. The test exe can be set as the active project, and compilation of the bpl can be forced by setting a dependency. It is also easy to run tests, e.g. by setting the test executable as the host application of the bpl.
But the downside is that the library project group will be expanded with another 30 items, making it a very large group (why can't we make subgroups in Delphi???).
The opposite arrangement would be to create one test executable containing all the unit tests, but that would create an executable with over a hundred units and lots of dependencies, all of which have to be compiled before a single test can be run.
So my question ... Does anybody have any suggestions, best practices, or other ideas on how to organize this into a manageable and fast running setup?
Extra consideration: we want to have the possibility to run all tests at once, and of course this will be easier if we put all tests in one executable.
There is a little-known feature of DUnit that supports running tests from a dll. You basically create a DUnit exe project that has no tests of its own; rather, it loads tests from dlls.
Each dll needs to export a single function:
library MyTests;

uses
  TestFramework{, add your test units};

function Test: ITest;
begin
  Result := RegisteredTests;
end;

exports
  Test;

end.
Then you just add test cases to the dll as normal. The tests are automatically registered in each unit's initialization section.
IMHO it's a pity this isn't promoted as the standard way of working with DUnit. Most unit-testing frameworks for other languages are organized this way: they provide a single test-runner executable which dynamically loads test cases from any number of loadable modules.
In addition to letting you break up your tests for easier organization it also allows you to run the same tests under multiple scenarios. Perhaps you want to run your tests using different compiler options for your debug and release builds (or even different versions of the compiler) so you are more confident that the code behaves consistently. You can build multiple dlls from the same source and run them in the same session.
I'd probably do both, so you end up with this:
- all your unit tests, grouped by BPL;
- a project for the unit tests of each BPL;
- a project with all the tests.
You can use the final project in your continuous integration system, and the per-BPL projects for testing things that are not yet checked in.
This is indeed a large number of projects, a price you pay for being able to improve the quality of your code.
--jeroen
