How do I use "rebar ct" with a two application node? - erlang

I'm using rebar to compile my application. Actually, it's two applications:
deps/
apps/A/
apps/B/
apps/B/suites
...where B depends on A. This is correctly configured in apps/B/src/B.app.src. However, when I attempt to run rebar ct, it fails to test B, reporting that it can't find A.app.
Running rebar ct in verbose mode shows that it sets the code search path (-pa) to include apps/B/ebin, deps/foo/ebin, deps/bar/ebin, and so on, but not apps/A/ebin.
How do I use Common Test to test an Erlang "application" that's made up of multiple applications?

Add this to apps/B/rebar.config:
{lib_dirs, [
    ".."
]}.
or:
{ct_extra_params, "-pa ../A/ebin"}.

IMO, if B depends on A, I would keep the tests separate: write one suite for A, mention A in the deps section of B's rebar config, and write separate test cases for B that you run only against B. That way rebar takes care of putting application A's modules on the code path automatically, as sketched below.
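As an illustration, that deps entry in B's rebar config might look roughly like this (the version regex and repository URL are placeholders, not from the original answer):
{deps, [
    {'A', ".*", {git, "https://example.com/A.git", {branch, "master"}}}
]}.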


Can I add static analysis to a py_binary or py_library rule?

I have a repo which uses bazel to build a bunch of Python code. I would like to introduce various flavors of static analysis into the build and have the build fail if these static analyses throw errors. What is the best way to do this?
For example, I'd like to declare something like:
py_library_with_static_analysis(
    name = "foo",
    srcs = ["foo.py"],
)
py_library_with_static_analysis(
    name = "bar",
    srcs = ["bar.py"],
    deps = [":foo"],
)
in a BUILD file and have it error out if there are mypy/flake8/etc. errors in foo.py. I would like to be able to do this gradually, converting libraries/binaries to static analysis one target at a time. I'm not sure whether I should do this via a new rule, a macro, an aspect, or something else.
Essentially, I think I'm asking how to run an additional command while building a py_binary/py_library and fail if that command fails.
I could create my own version of a py_library rule and have it run static analysis within the implementation but that seems like something which is really easy to get wrong (my guess is that native.py_library is quite complex?) and there doesn't seem to be a way to instantiate a native.py_library within a custom rule.
I've also played around with macros a bit, but haven't been able to get that to work either. I think my issue there is that a macro doesn't actually specify new commands, only new targets, and I can't figure out how to force the static analysis target to be built along with the py_library/py_binary I'm interested in.
A macro that adds implicit test targets is not such a bad idea: The test targets will be picked up automatically when you run bazel test //..., which you could do in a gating CI to prevent imperfect code from merging.
Bazel supports a BUILD prelude (which is underdocumented) that you could use to replace all py_binary, py_library, and even py_test with your test-adding wrapper macros with minimal changes to existing code.
If you instead make these checks fail the build itself, it becomes harder to quickly prototype things: sometimes you just want to try something out and don't care about any pydoc violations yet.
In case you do want to fail the build, you might be able to use the Validations Output Group of a rule that you implement to wrap or replace your py_library targets.
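To make the macro suggestion concrete, here is a rough sketch of such a wrapper (untested; the static_analysis_test.py driver and its argument convention are hypothetical, and a real version would also forward visibility and the like):
# defs.bzl -- hypothetical wrapper that pairs every library with an analysis test
load("@rules_python//python:defs.bzl", "py_library", "py_test")

def py_library_with_static_analysis(name, srcs = [], deps = [], **kwargs):
    # The real library target, unchanged.
    py_library(
        name = name,
        srcs = srcs,
        deps = deps,
        **kwargs
    )
    # An implicit companion test, picked up automatically by `bazel test //...`.
    py_test(
        name = name + "_static_analysis",
        srcs = ["static_analysis_test.py"],  # hypothetical driver that runs mypy/flake8
        main = "static_analysis_test.py",
        data = srcs,                         # expose the library sources to the driver
        args = ["$(location %s)" % s for s in srcs],
        deps = [":" + name],
    )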

How do I export all functions for common test only?

I have been trying to export all the functions in an Erlang module for use in a Common Test SUITE (not an EUnit module). So far it has not worked for me. I am using rebar to run the SUITE, and I came across this post (http://lists.basho.com/pipermail/rebar_lists.basho.com/2011-October/001141.html), which describes essentially exactly what I want to do, but the method does not work for me.
I have also added {plugins, [rebar_ct]}. to rebar.config, but it has made no difference. I should point out that all my tests pass when I export the functions normally; I just want to avoid doing that.
Any help would be great thanks.
The compiler will export all functions in a module if you add this directive to it:
-compile(export_all).
Or you could make it conditional on a define, like:
-ifdef(EXPORTALL).
-compile(export_all).
-endif.
That will only export everything if you have {d, 'EXPORTALL', true} in your rebar config erl_opts setting, e.g. something like:
{erl_opts, [
    {d, 'EXPORTALL', true}
]}.
If that doesn't work, make sure you don't have erl_opts twice in your rebar config.
With rebar3 you can define extra compile options in the config file that apply only to Common Test:
{ct_compile_opts, []}.
Here you can add export_all, which will then be in effect for Common Test only; I'm not sure an equivalent exists for rebar (2).
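For instance, a minimal rebar3 entry (my own illustration, not from the answer) would be:
%% rebar.config (rebar3): compile with export_all only for common test runs
{ct_compile_opts, [export_all]}.
This should leave the regular build's export lists untouched while making every function callable from your SUITEs.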

Getting test results from Eunit in Erlang

I am working with Erlang and EUnit to do unit tests, and I would like to write a test runner to automate the running of my unit tests. The problem is that eunit:test/1 seems to return only ok or error, not a list of the tests that ran and whether each one passed or failed.
So is there a way to run tests and get back some form of a data structure of what tests ran and their pass/fail state?
If you are using rebar you don't have to implement your own runner. You can simply run:
rebar eunit
Rebar will compile and run all tests in the test directory (as well as eunit tests inside your modules). Furthermore, rebar allows you to set the same options in rebar.config as in the shell:
{eunit_opts, [verbose, {report,{eunit_surefire,[{dir,"."}]}}]}.
You can use these options also in the shell:
> eunit:test([foo], [verbose, {report,{eunit_surefire,[{dir,"."}]}}]).
See also the documentation for the verbose option and structured reports.
An alternative would be to use Common Test instead of EUnit. Common Test comes with a runner (the ct_run command) and gives you more flexibility in your test setup, but it is also a little more complex to use. Common Test has fewer macros available but produces very comprehensible HTML reports.
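For reference, a typical Common Test invocation from the OS shell might look like this (the directory names are placeholders):
ct_run -pa ebin -dir test -logdir logs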
No easy or documented way, but there are currently two ways you can do this. One is to give the option 'event_log' when you run the tests:
eunit:test(my_module, [event_log])
(this is undocumented and was really only meant for debugging). The resulting file "eunit-events.log" is a text file that can be read by Erlang using file:consult(Filename).
The more powerful way (and not really all that difficult) is to implement a custom event listener and give it as an option to eunit:
eunit:test(my_module, [{report, my_listener_module}])
This isn't documented yet, but it ought to be. A listener module implements the eunit_listener behaviour (see src/eunit_listener.erl). There are only five callback functions to implement. Look at src/eunit_tty.erl and src/eunit_surefire.erl for examples.
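As a rough illustration of those five callbacks, a minimal listener that just collects per-test results might look like this (a sketch modeled on eunit_tty.erl, not a tested module):
-module(my_listener_module).
-behaviour(eunit_listener).

-export([start/0, start/1, init/1, handle_begin/3, handle_end/3,
         handle_cancel/3, terminate/2]).

start() -> start([]).
start(Options) -> eunit_listener:start(?MODULE, Options).

%% State is simply a list of finished test entries.
init(_Options) -> [].

%% Called when a group or test starts.
handle_begin(_Kind, _Data, St) -> St.

%% Data is a proplist describing the finished test (module, name, status, ...).
handle_end(test, Data, St) -> [Data | St];
handle_end(_Kind, _Data, St) -> St.

handle_cancel(_Kind, _Data, St) -> St.

terminate({ok, Summary}, St) ->
    io:format("~b results collected, summary: ~p~n", [length(St), Summary]);
terminate({error, Reason}, _St) ->
    io:format("eunit aborted: ~p~n", [Reason]).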
I've just pushed to GitHub a very trivial listener which stores the EUnit results in a DETS table. This can be useful if you need to process those data further, since they're stored as Erlang terms in the DETS table.
https://github.com/prof3ta/eunit_terms
Example of usage:
> eunit:test([fact_test], [{report,{eunit_terms,[]}}]).
All 3 tests passed.
ok
> {ok, Ref} = dets:open_file(results).
{ok,#Ref<0.0.0.114>}
> dets:lookup(Ref, testsuite).
[{testsuite,<<"module 'fact_test'">>,8,<<>>,3,0,0,0,
[{testcase,{fact_test,fact_zero_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_neg_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_pos_test,0,0},[],ok,0,<<>>}]}]
Hope this helps.

F# 'modular' scripting

What is the recommended way to load and reload .fsx files? Just experimenting... yes yes, right language for the right job, etc. etc.
I love how the following can be done in FSI:
#load "script.fsx";
open Script
> let p = Script.x 1
Error: This expression was expected to have type string but here has int...
(* edit script.fsx x to make it int -> int *)
>
> #load "script.fsx"
> let p = Script.x 1
val p : int = 2
But how do we do this for an application that we are running via fsi blah.fsx? Maybe something that is sitting in a while loop. It seems #load and #use may not appear inside a let or a module, i.e. you cannot write something like let reload script = #load script. I wonder why?
My original method was to have .fs files and recompile + relaunch each time I wanted to add/fix something. This method feels primitive.
Second method was to attempt to use the #load directive inside of a module, which turns out not to work (kind of makes sense in terms of scoping)...
module test1 =
    #load @"C:\users\pc\Desktop\test.fsx"
    open Test
module test2 =
    ...
Another way would be to create a new process for every module by loading fsi module.fsx with process diagnostics, but this seems horrible, inefficient and ugh.
I have a feeling deep in my heart that this will not be trivial inside .NET, but I would like to pose the question anyway; FSI does it... I wonder if I can leverage the FSI API or something (or at least copy their code)?
TL;DR I read the following about erlang and want it for myself in F#.
Erlang: Is there a way to reload changed modules into an already running node with rebar?
"...any time a module in your program changes on disk, the reloader will replace the running copy automatically."
I don't know if this would work in F#, but in ML you can load a master file that loads all the files in your project, then executes whatever code is needed to knit them together, and finally runs your application. To see an example of a massive app run from inside a REPL, look at the Isabelle/HOL site at the Cambridge Computer Laboratory: http://www.cl.cam.ac.uk/research/hvg/Isabelle/installation.html. After downloading the app, look in the src directory for any file called ROOT.ML; there will be half a dozen of them that control various levels of the implementation. This is recursive, because a top-level file can call a file in several sub-directories that loads that particular sub-feature. This allows targeting your application at various scenarios depending on which top-level file is executed.
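A minimal F# rendering of that master-file idea (all file and module names below are made up for illustration):
// main.fsx -- master script that loads the project and starts it
#load "util.fsx"
#load "core.fsx"
#load "app.fsx"

// after the #loads, the generated modules are in scope
App.run ()
Running fsi main.fsx loads everything in order; re-running it picks up the latest versions from disk.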
Typical .NET Framework applications cannot unload/reload assemblies unless those assemblies are in an AppDomain separate from the primary one that starts up with the application. This is essentially how most plugin systems are designed for applications that run on the full .NET Framework. Things may be changing post .NET Standard 2.0 in .NET Core with the collectible assemblies feature.
References:
https://github.com/dotnet/coreclr/issues/552
https://github.com/dotnet/corefx/issues/19773

Where should you put application properties in a rebar erlang application?

A newbie question: I wrote my first rebar-based erlang application. I want to configure some basic properties like server host etc. Where is the best place to put them, and how should I load them into the app?
The next step is to make a release and create a node from it. A node runs your application in a standalone Erlang VM. A good starting point for creating a release using rebar:
Erlang Application Management with Rebar
Once you have created a release, the configuration properties for all applications in your node can be added to
{your-app}/{release}/files/sys.config
You can read individual properties as follows (note that get_env returns {ok, Value} or undefined):
{ok, Val} = application:get_env(APP, KEY)
Alternatively, all properties for your application can be read with
Config = application:get_all_env(APP)
In sys.config, each application's properties are given as a proplist; the file as a whole is a list of {App, Proplist} tuples.
Example:
[
 {myapp, [
     {port, 1234},
     {pool_size, 5}
 ]}
].
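Assuming the node was booted with that sys.config (and myapp has been loaded), reading the values back in the shell would look like:
> application:get_env(myapp, port).
{ok,1234}
> application:get_all_env(myapp).
[{port,1234},{pool_size,5}]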
