EUnit vs. Common Test - Erlang

I am new to Erlang. It has two test frameworks: EUnit and Common Test, and I am confused about when to use one over the other. Can someone explain the advantages of EUnit over Common Test, and vice versa? It seems that Common Test can do everything EUnit can do and much more, so I am not sure why I should use EUnit. Thanks!

Learn You Some Erlang (one of the best online resources for Erlang, besides the official documentation) explains both approaches quite well:
EUnit
Common Test
As Pascal pointed out, EUnit is best used for white-box testing: internal, function-by-function unit tests and light integration tests.
Common Test does the heavier lifting: integration and system testing, black-box kinds of tests. It is also more complex, of course, and much more powerful.
While you're at it, you might try Dialyzer, another analysis tool integrated into Erlang/OTP, which is great at locating dead or unreachable code as well as logical and type errors (it is a static analysis tool). Again, Learn You Some Erlang provides a nice introduction to it: Dialyzer.
Oh, and by the way: if you choose to put the EUnit tests in separate files (which is perfectly possible), you won't be able to test unexported functions; that is expected. What is also expected is that Common Test does not test unexported functions either: otherwise it would not be a black-box testing tool (or it would be a cheating one).

EUnit is really simple and fits module or library testing at the white-box level very well. It is integrated with rebar.
Common Test is more oriented toward black-box testing and testing of applications and systems of applications. It also gives you test coverage very easily.
Edit (after Andy's comment):
It is true that you can use Common Test for unitary white-box tests, just as it is true that you can leverage EUnit for some application tests using fixtures.
However, EUnit is very handy for simple unit tests: you write a function myFun, add a test function myFun_test or a test generator myFun_test_ (useful for exercising many patterns even if one test fails in the middle) to your test module, and that's it. You can run it as often as you want (there is no history of test runs).
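As a hedged illustration (my_math and double/1 are invented names, not taken from this thread), a test function and a test generator could look like this:
%% Sketch only: my_math and double/1 are hypothetical.
-module(my_math_tests).
-include_lib("eunit/include/eunit.hrl").

%% A plain test function: its name ends in _test.
double_test() ->
    ?assertEqual(4, my_math:double(2)).

%% A test generator: its name ends in _test_ and returns a list of test
%% objects, so every pattern is reported even if one of them fails.
double_many_test_() ->
    [?_assertEqual(Expected, my_math:double(In))
     || {In, Expected} <- [{0, 0}, {1, 2}, {-3, -6}]].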
Common Test asks you to list each test case in the all/0 function or in a group. As far as I know it has no test generators, so it is less easy to run through all the test patterns of each function; that is why I think it is less suited to unitary white-box tests. On the other hand, init_per_testcase, init_per_group and so on are much more flexible than EUnit fixtures for organizing tests that need some application context to run. Common Test also keeps a history of all test runs in the log directory. That is nice, but I suggest limiting the number of runs you keep so that it stays useful.
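For comparison, a minimal Common Test suite for the same hypothetical module might look like this sketch:
%% Sketch only: my_math is the same hypothetical module as above.
-module(my_math_SUITE).
-include_lib("common_test/include/ct.hrl").
-export([all/0, init_per_testcase/2, end_per_testcase/2, double_works/1]).

%% Every test case must be listed here (or in a group).
all() -> [double_works].

%% Runs before each test case; handy for setting up application context.
init_per_testcase(_Case, Config) ->
    Config.

end_per_testcase(_Case, _Config) ->
    ok.

double_works(_Config) ->
    4 = my_math:double(2).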
EDIT:
To work around the problem of unexported functions, for both EUnit and Common Test, it is possible to use defines. For example, in rebar.config (because I use separate files for my EUnit tests):
{eunit_compile_opts, [{d,'EUNIT_TEST',true}]}.
{erl_opts, [debug_info, warn_export_all]}.
and in a module, if it is necessary:
%% export all functions when used in eunit context
-ifdef(EUNIT_TEST).
-compile(export_all).
-endif.
You can verify that this only modifies the code compiled for EUnit:
D:\git\helper>rebar clean eunit compile
==> helper (clean)
==> helper (eunit)
Compiled test/help_list_tests.erl
Compiled test/help_ets_tests.erl
Compiled test/help_num_tests.erl
Compiled src/help_ets.erl
Compiled src/help_list.erl
Compiled src/helper.erl
src/help_num.erl:6: Warning: export_all flag enabled - all functions will be exported
Compiled src/help_num.erl
Compiled src/help_code.erl
All 31 tests passed.
Cover analysis: d:/git/helper/.eunit/index.html
==> helper (compile)
Compiled src/help_ets.erl
Compiled src/help_list.erl
Compiled src/helper.erl
Compiled src/help_num.erl
Compiled src/help_code.erl

From http://www.erlang.org/doc/apps/common_test/basics_chapter.html:
Common Test is also a very useful tool for white-box testing Erlang code (e.g. module testing), since the test programs can call exported Erlang functions directly and there's very little overhead required for implementing basic test suites and executing simple tests. For black-box testing Erlang software, Erlang RPC as well as standard O&M interfaces can for example be used.

Related

How to structure and organize tests for Erlang/OTP?

I came to Erlang/OTP from the Python world, where I use the unittest library. A typical test environment there consists of a TestSuite for the entire application and TestCases with test methods for the different modules in the application's subpackages.
My first Erlang application is a cowboy-based web application. It has some modules required by the cowboy framework and its behaviours, plus a set of my own custom modules, let's say: parsers.erl, encoders.erl, fetchers.erl.
At the beginning of development I wrote tests inside those modules (in functions named method_name_test) and ran them with EUnit, but I found that somewhat inconvenient. A week or so later I came across the common_test framework, and for a newcomer from the Python world, CT with its suites, grouping, setup functions, configs and execution-order model looked very familiar.
Considering my application, what is the proper way of writing test suites? Should I prepare separate suites for different modules (to me that creates some overhead), or introduce a single test suite for the application and put the test cases for the separate modules into different groups? It would be great to read about how tests are organized in real-world Erlang applications.
The stdlib in Erlang/OTP has a single Common Test suite per module in the library.

Dealing with shared helpers in Common Test suites?

I've got an Erlang project comprising a bunch of different applications. I'm using Common Test to do some of the testing.
apps/foo/suites/foo_SUITE.erl
apps/bar/suites/bar_SUITE.erl
I'm starting to see duplication of utility code in those suites.
Where should I put my utility code so that it can be shared between the two suites?
I've considered adding another application:
apps/test_stuff
...but I can't make the CT suites depend on this without making the application under test depend on this (or can I?). I don't want to do that, because test_stuff is only needed when testing.
I have a similar problem with my eunit tests, both between applications (apps/foo/test vs. apps/bar/test), and where I'm using similar functionality between the eunit and CT tests in the same application (apps/bar/suites vs apps/bar/test). Can I use the same solution for this case as well? Or do I need to ask another question about that?
Do you think ct:require/1,2 could help, so that the foo and bar suites would require test_stuff before they get executed? For more information: http://www.erlang.org/doc/man/ct.html#require-1
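A hedged sketch of what that could look like (test_stuff_path is a made-up config key, supplied through a Common Test config file, and whether this actually solves the shared-helper problem depends on how test_stuff is built and delivered):
%% Sketch only: declare the requirement so CT checks it before running.
suite() ->
    [{require, test_stuff_path}].

%% Pick up the configured path and make the shared helper modules loadable.
init_per_suite(Config) ->
    Path = ct:get_config(test_stuff_path),
    true = code:add_patha(Path),
    Config.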
It depends on how you are packaging your final releases. For example, I use rebar for release management. I have Cowboy fetched along with other dependencies for testing purposes, but in my reltool.config I omit it, so it doesn't get packaged with the final product. I use rebar to run Common Test, and it's able to add Cowboy to the path without having it bundled as a lib with everything else or added as a dependency to the app I'm testing.
However, if you have another process which infers your release configuration from your dependencies, you'll have to find a way to exclude your test code when you generate a release.
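For illustration, excluding a test-only dependency in reltool.config might look roughly like this (a sketch with invented release and application names, not taken from the answer's project):
{sys, [
       %% "myrel" and myapp are placeholders for your release and application.
       {rel, "myrel", "1", [kernel, stdlib, sasl, myapp]},
       %% cowboy is fetched for the tests but kept out of the release.
       {app, cowboy, [{incl_cond, exclude}]}
      ]}.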

Erlang EUnit test module that depends on a library application

I have a medium-sized release with a handful of applications. I recently refactored some common functionality out into a library application within the release. This made my EUnit tests fail with undef messages whenever testing anything that required the library application.
The set up is something like this:
% In apps/utils/src/utils.erl
-module(utils).
-export([foo/0]).
foo() -> "OH HAI".
Then
% In apps/some_app/src/some_app.erl
-module(some_app).
-export([bar/0]).
bar() -> io:format("foo: ~s~n", [utils:foo()]).
% unit tests for bar()
Then the unit tests for some_app:bar() fail. I'm running them with rebar eunit skip_deps=true. I'm using skip_deps=true because my release uses some 3rd party applications (SQL, etc).
I assume that the tests start failing because EUnit is invoking the app under test without its dependencies? Is there any way to fix this? I have configured the .app file to explicitly declare the dependency. It works fine in the release, and it's been deployed for about a day now with no problem, but I'll feel a lot better if I can get the tests to pass again :)
(I could use a mocking app to stub out utils:foo/0, and I can see where that would be ideal idiomatically, but that seems like overkill in this case because utils:foo/0 (read: its real-world counterpart) does some really simple stuff.)
I was able to get this to work by doing rebar compile eunit skip_deps=true.
The key is to have the compile in there and I have no idea why. I'm guessing that the compile step gets all of the modules into memory. I'd love to hear a good explanation.
I think you could have one of your applications load the utility by including it in the application portion of your .app file, as in:
{application, yourapp,
 [{description, "A description"},
  {vsn, "1.0.0"},
  {modules, [mod1, mod2, utils]},
  SNIP
or in some other way add it to the code path of the Erlang node, for example by using the -pa flag when starting the node.
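For example (the exact paths are illustrative, based on the directory layout shown in the question):
erl -pa apps/utils/ebin -pa apps/some_app/ebin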

How to deal with tangled uses dependencies in order to start unit testing?

I have a messy Delphi 7 legacy system to maintain and develop. I am already reading "Working Effectively with Legacy Code" and I like this book very much.
In order to start following the advice in the book, I created a test project and tried to write a single test. To do this I need to add some unit to the test project, but here lies the problem: the system under test has horrific uses dependencies. One unit uses some other unit, which uses some other unit, and so on. It seems that most units directly or indirectly use one particular unit, and this unit in turn has 170 dependencies in its uses clause. There are indirect circular dependencies as well.
Currently I am trying to add all of the legacy system's units into the test project, but I am running into all kinds of problems, like "unit xxx was compiled with a different version of xxx", and others.
So I wonder if I am doing something wrong. I have used unit testing before, but in my own projects, that were smaller and with better structure and modularization. What are the options I have in this situation? Am I missing something?
You will always have dependencies in your code. Well, as long as you have code re-use, you will have dependencies. Since you are testing a legacy system, wholesale re-structuring is out of the question.
So you simply need to accept the dependencies. The most convenient and practical approach is to have a single unit tests project. That project contains all your unit tests. Use the facilities of your runner program to run only specific tests at any one time.
This means your test project has the same list of units in its .dpr file as the main project. That's what you have currently tried, and it's the right approach.
It sounds like you are sharing the DCU directory (unit output directory) between the main project and the unit tests project, while the two projects have different compiler options. That's the most likely explanation for the error you report.
There are a couple of obvious solutions:
Align the compiler options for both projects. Then they can share DCUs.
Have separate DCU directories for the two projects.
Option 2 is much more robust and is best practice. However, you should try to understand why the compiler options differ. It's quite possible that the compiler options in your new unit tests project will need to be changed so that the units under test compile and function as desired. In modern Delphi I would use option sets to ensure consistency of compiler options.
Now, there may be other technical problems that you are facing, and my explanation of the error may not be quite right since I'm having to guess a little. But the bottom line is that having the same list of units in your .dpr files is the way to go.

Organization of UnitTests in a existing library ProjectGroup

In our Delphi 2007 environment we have an SGLibrary groupproj which contains some 30 BPLs. We're just starting to create unit tests for these libraries and are not sure what the most convenient way of organizing the unit-test projects would be.
We are inclined to create a test executable for each BPL, as this makes compilation and running easy and fast. The test exe can be set as the active project, and compilation of the BPL can be forced by setting a dependency. It is also easy to run tests, e.g. by setting the test executable as the host application of the BPL.
But the downside is that the library group project will be expanded with another 30 items, making it a very large group (why can't we make subgroups in Delphi?).
The opposite arrangement would be to create one test executable which contains all the unit tests, but that would create an executable with over a hundred units and lots of dependencies, all of which have to be compiled before a single test can be run.
So my question ... Does anybody have any suggestions, best practices, or other ideas on how to organize this into a manageable and fast running setup?
Extra consideration: we want to be able to run all tests at once, and of course this will be easier if we put all tests in one executable.
There is a little-known feature of DUnit that supports running tests from a DLL. You basically create a DUnit exe project that has no tests of its own; rather, it loads tests from DLLs.
Each DLL needs to export a single function:
library MyTests;

uses
  TestFramework{, add your test units};

function Test: ITest;
begin
  Result := RegisteredTests;
end;

exports
  Test;

end.
Then you just add test cases to the dll as normal. The tests are automatically registered in each unit's initialization section.
IMHO it's a pity this isn't promoted as the standard way of working with DUnit. Most unit-testing frameworks for other languages are organized this way: they provide a single test-runner executable which dynamically loads test cases from any number of loadable modules.
In addition to letting you break up your tests for easier organization it also allows you to run the same tests under multiple scenarios. Perhaps you want to run your tests using different compiler options for your debug and release builds (or even different versions of the compiler) so you are more confident that the code behaves consistently. You can build multiple dlls from the same source and run them in the same session.
I'd probably do both: group all your unit tests by BPL, so you end up with:
a project for each of the unit tests for each BPL;
a project with all the tests.
You can use the final project in your continuous integration system, and the former for testing things that are not yet checked in.
This is indeed a large number of projects, a price you pay for being able to improve the quality of your code.
--jeroen
