Why do some examples (and templates in text editor) of gen_server have:
-define(SERVER, ?MODULE).
Is there any good reason for it?
This question was brought about by Inaka's guidelines, where they state the opposite:
Don't use macros for module or function names
Here is the code example they provide:
-module(macro_mod_names).
-define(SERVER, ?MODULE). % Oh, god! Why??
-define(TM, another_module).
-export([bad/1, good/1]).
bad(Arg) ->
Parsed = gen_server:call(?SERVER, {parse, Arg}),
?TM:handle(Parsed).
good(Arg) ->
Parsed = gen_server:call(?MODULE, {parse, Arg}),
another_module:handle(Parsed).
Why does every example (and templates in text editors) of gen_server always have this define?
Searching for "erlang gen_server example", none of the hits on the first page define this macro for me (and in fact I haven't seen it before). In particular, this includes the Erlang documentation's own http://erlang.org/doc/design_principles/gen_server_concepts.html, "Learn You Some Erlang", and the Erlang wikibook.
Is there any good reason for it?
The reason is clearly to use a more "descriptive" name; whether this is a good reason is a question of taste.
I think it is good practice to use -define to define and document relevant values for the module. This is especially true for values that are used in different places in the module and that you want to make configurable.
Actually, I think your question tackles this from the wrong side: the gen_server name is a module-wide configurable value (and hence it is best practice to define it), and for the sake of simplicity it became common practice to choose the server name equal to the module name. A gen_server's name is normally registered so you can send messages to it. Since the name is a critical value here (and there might even be cases where you would like to change it), it is normally -defined.
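For concreteness, a minimal sketch of the common template shape (nothing beyond the standard gen_server API is assumed here):

-define(SERVER, ?MODULE).

start_link() ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).

parse(Arg) ->
    %% ?SERVER stands for "the registered name", ?MODULE for "the callback module";
    %% today they have the same value, but they play different roles.
    gen_server:call(?SERVER, {parse, Arg}).

If you later decide to register the server under a different name (or not at all), only the SERVER define has to change.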
I also think that the guidelines you quote are speaking about a different use-case for macros.
Related
In all the samples of gen_server implementations I've seen, ?SERVER is assigned to ?MODULE.
Look down here:
-define(SERVER, ?MODULE).
...
gen_server:start_link({local, ?SERVER}, ?MODULE, [], [])
The idea, as I gather, is to run many server processes with different names but implemented in one module.
But when I tried, in my experiments, to run a server with a name different from the module name, I always got errors.
Can somebody please explain this subtlety to me?
The code you show does not and cannot implement multiple servers with different names, since the server name is defined to be the same as the module name. So if you try with this code to get multiple servers implemented in one module, your attempts will fail.
The reason for introducing a separate SERVER macro with the same value as MODULE is to make things more explicit. In the start_link call the two macros may have the same value, but they serve different purposes, so it is clearer to use two instead of one.
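If you actually want multiple named instances backed by one module, the registered name has to come from the caller rather than from ?MODULE. A minimal sketch (my_server and the instance names are made-up; the remaining callbacks are omitted):

%% In my_server.erl: take the registered name as an argument instead of
%% hard-coding ?MODULE, so one callback module can back several named servers.
start_link(Name) ->
    gen_server:start_link({local, Name}, ?MODULE, [], []).

%% Usage:
%%   my_server:start_link(worker_a),
%%   my_server:start_link(worker_b).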
I'm writing tests with EUnit and some of the Units Under Test need to read a data file via file:consult/1. My tests make assumptions about the data that will be available in /priv, but the data will be different in production. What's the best way to accomplish this?
I'm a total newbie in Erlang and I have thought of a few solutions that feel a bit ugly to me. For example,
Put both files in /priv and use a macro (e.g., "-ifdef(EUNIT)") to determine which one to pass to file:consult/1. This seems too fragile/error-prone to me.
Get Rebar to copy the right file to /priv.
Also please feel free to point out if I'm trying to do something that is fundamentally wrong. That may well be the case.
Is there a better way to do this?
I think both of your solutions would work. It is rather a question of maintaining such tests, and both of them rely on some external setup (the file existing and having the right data).
For me, the easiest way to keep the contents of such a file local to a given test is mocking, making file:consult/1 return the value you want.
7> meck:new(file, [unstick, passthrough]).
ok
8> meck:expect(file, consult, fun( _File ) -> {some, data} end).
ok
9> file:consult(any_file_at_all).
{some,data}
It will work, but there are two more things you could do.
First of all, you should not be testing file:consult/1 at all. It is part of the standard library, and you can assume it works all right. Rather than doing that, you should test the functions that use the data read from this file; and of course pass them some "in-test-created" data. That gives you a nice separation between the data source and the parsing (acting on the data). And later it might be simpler to replace file:consult with a call to an external service, or something like that.
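For example, a minimal EUnit sketch of this idea (my_mod:parse_entries/1 and the data shape are assumptions, not your actual code):

-module(my_mod_tests).
-include_lib("eunit/include/eunit.hrl").

parse_entries_test() ->
    %% Data is built right here in the test; no file or /priv involved.
    Data = [{user, "alice"}, {quota, 10}],
    ?assertMatch({ok, _}, my_mod:parse_entries(Data)).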
The other thing is that a problem with testing something should be a bad smell for you. You might think a little about redesigning your system. I'm not saying that you have to, but such problems are a good indicator that it might be justified. If you are testing some functionality X and you would like it to behave one way in production and another way in tests (read one file or another), then maybe this behaviour should be injected into it. Or in other words, maybe the file your function reads should be a parameter of that function. If you would still like to have some "default file to read" behaviour in your production code, you could use something like this:
function_with_file_consult(A, B, C) ->
    function_with_file_consult(A, B, C, "default_file.dat").

function_with_file_consult(A, B, C, File) ->
    [ ... actual function logic ... ]
It will allow you to use the shorter version in production, and the longer one just in your tests.
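A test can then point the longer arity at a fixture of its own. A sketch (the module name, fixture path, and return shape are assumptions; it assumes the eunit header is included as above):

consult_injection_test() ->
    %% The test controls exactly which file is read; production keeps its default.
    Result = my_mod:function_with_file_consult(a, b, c, "test/fixtures/test_data.dat"),
    ?assertMatch({ok, _}, Result).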
I find Erlang's module arity import /n where n is the number of arguments rather bizarre.
In Java and various other languages you can do something like:
import static com.stuff.Blah.myFunction;
Which will import all overloaded Blah.myFunction(..) regardless of parameters.
Besides being explicit, I guess, why did the language designers decide this was a good idea (I'm not trying to criticize the language... just curious)?
Does it have to do with code swapping?
Or does it have to do with hiding guard methods for recursion? If so why not allow arity on export but no need for arity on import?
Why would I want to be that explicit? That is, import the two-argument function but not the three-argument version of myFunction?
You should be aware of what importing functions in Erlang really does. It is a pure textual transformation. If I do an -import(foo, [bar/1,baz/2]). it means that when I write a call like bar(5) or baz(a, 3) the compiler transforms these to foo:bar(5) and foo:baz(a, 3). That is all it does, nothing else. It doesn't check anything:
It doesn't check if the module foo contains the functions bar/1 or baz/2.
It doesn't even check if the module foo exists.
Really all it does is hide that you are calling a function in another module. That is why the recommendation from experienced Erlangers is "don't use it". It was a mistake. Unfortunately it is much easier to add stupid things than to get rid of them so we were never able to remove it.
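To make the textual transformation described above concrete, a small sketch (foo and its functions are just the placeholder names from above):

-module(importer).
-import(foo, [bar/1, baz/2]).
-export([run/0]).

run() ->
    %% The compiler simply rewrites these to foo:bar(5) and foo:baz(a, 3);
    %% whether foo or those functions exist is only checked at runtime.
    bar(5),
    baz(a, 3).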
"Does it have to do with code swapping?"
Yes, sort of. The unit of all code handling in Erlang is the module. So you compile modules, load modules, purge and delete modules. This means that there are no inter-module dependencies at all in the system, and the compiler makes no assumptions about other modules when it is compiling a module. No assumptions are made that the environment in which a module is compiled will be the same one in which it is run. That is why it is at runtime that the system checks whether the function you are trying to call in another module exists, or even whether the module itself exists. That is why import is a purely textual transformation.
Erlang was originally developed in Prolog.
In Prolog, the arity adds meaning beyond what you would consider 'the arguments of a function' in a procedural programming language; that model does not apply here.
The so-called clauses 'married(X,Y).' and 'married(X,Y,Z).' imply a different kind of relationship 'married', which can be declared as married/2 and married/3.
In procedural programming, 'add(a,b)' or 'add(a,b,c)' are intended to produce the addition of a different number of arguments. That's not necessarily the case in Prolog, where it is possible to have the relationships 'a and b, added' and 'a, b and c, added' mean something else entirely. Needless to say, Prolog allows you to declare 'add' to behave as you would expect a function to. But it allows for more. More available meaning means more need to control it.
And as in any module system, selecting what you want to expose to external clients makes sense: hence the declaration of arity.
Does it have to do with code swapping?
Kind of. The modules in Erlang are compiled separately (which is part of what allows code swapping), unlike Java classes, so the compiler doesn't know how many versions of the imported function with different arities exist. It could assume that all calls of a function with the given name come from the same module, of course, but the designers likely decided it wasn't particularly useful.
In fact, you rarely want to use imports at all, at least in my experience, just as you rarely use static imports in Java. Just write module:function, like Class.staticMethod.
Or does it have to do with hiding guard methods for recursion?
No, since not importing functions doesn't hide them in any way.
While I'm learning a new language, I'll typically put in lots of silly printlns to see what values are where at specific times. It usually suffices because the languages typically have a toString equivalent available. In trying that same approach with Erlang, my webapp just "hangs" when there's an attempt to print a value that's not a list. This happens when the variable being printed is a tuple instead of a list. There's no error, no exception, nothing... it just doesn't respond. Now, I'm muddling through by being careful about what I'm writing out, and as I learn more, things are getting better. But I wonder, is there a way to more reliably [blindly] print a value to stdout?
Thanks,
--tim
In Erlang, as in other languages, you can print your variables, no matter if they are a list, a tuple or anything else.
My feeling is that, for printing, you're doing something like (just a guess):
io:format("The value is: ~p.", A).
This is wrong, because you're supposed to pass a list of arguments:
io:format("The value is: ~p.", [A]).
Where A can be anything.
I usually find it comfortable to use:
erlang:display/1
to print variables.
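For instance, a quick shell check (the exact formatting may differ slightly; erlang:display/1 writes the term directly to stdout and returns true):

1> erlang:display({some_tuple, [1,2,3]}).
{some_tuple,[1,2,3]}
true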
Also, tracing functions is usually a better way to debug an application, rather than using printouts. Please see:
http://aloiroberto.wordpress.com/2009/02/23/tracing-erlang-functions/
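If you want to try tracing without extra libraries, the standard dbg module is one option. A minimal sketch (my_mod:my_fun is a placeholder for whatever you want to trace):

dbg:tracer(),                  % start a tracer that prints to the shell
dbg:p(all, c),                 % trace function calls in all processes
dbg:tpl(my_mod, my_fun, x),    % trace my_mod:my_fun of any arity, showing calls and returns
my_mod:my_fun(some_arg),       % this call is now reported in the shell
dbg:stop_clear().              % stop tracing and clear all trace patterns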
When developing webapps I use the error_logger module
I usually define some macros like this
-ifdef(debug).
-define(idbg(FmtStr, Err),
error_logger:info_msg("~p (line ~p): " FmtStr "~n",
[?MODULE, ?LINE | Err])).
-define(rdbg(Term), error_logger:info_report(Term)).
-else.
-define(idbg(_FmtStr, _Err), void).
-define(rdbg(_Term), void).
-endif.
You call the macros with something like:
code...
?rdbg(ErlangTerm),
other code...
During development you compile your modules with:
erlc -Ddebug *.erl
and so you get info messages in your erlang console.
Also make sure that there is no process terminating without a link, which could then cause another process to wait on something and never time out - hence the strange hanging behaviour.
EDIT:
I'm not looking to use parameters as a general purpose way to construct Erlang programs--I'm still learning the traditional design principles. I'm also not looking to emulate OOP. My only point here is to make my gen_server calls consistent across server instances. This seems more like fixing a broken abstraction to me. I can imagine a world where the language or OTP made it convenient to use any gen_server instance's api, and that's a world I want to live in.
Thanks to Zed for showing that my primary objective is possible.
Can anyone figure out a way to use parameterized modules on gen_servers? In the following example, let's assume that test_child is a gen_server with one parameter. When I try to start it, all I get is:
42> {test_child, "hello"}:start_link().
** exception exit: undef
in function test_child:init/1
called as test_child:init([])
in call from gen_server:init_it/6
in call from proc_lib:init_p_do_apply/3
Ultimately, I'm trying to figure out a way to use multiple named instances of a gen_server. As far as I can tell, as soon as you start doing that, you can't use your pretty API anymore and have to throw messages at your instances with gen_server:call and gen_server:cast. If I could tell instances their names, this problem could be alleviated.
I just want to say two things:
archaelus explains it correctly. As he says, the final form he shows is the recommended way of doing it, and it does what you expect.
never, NEVER, NEVER, NEVER use the form you were trying! It is a left over from the old days which never meant what you intended and is strongly deprecated now.
There are two parts to this answer. The first is that you probably don't want to use parameterized modules until you're quite proficient with Erlang. All they give you is a different way to pass arguments around.
-module(test_module, [Param1]).
some_method() -> Param1.
is equivalent to
-module(test_non_paramatized_module).
some_method(Param1) -> Param1.
The former doesn't buy you much at all, and very little existing Erlang code uses that style.
It's more usual to pass the name argument (assuming you're creating a number of similar gen_servers registered under different names) to the start_link function.
start_link(Name) -> gen_server:start_link({local, Name}, ?MODULE, [Name], []).
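The public API can then take the instance name as its first argument, which keeps calls consistent across named instances (parse/2 and the message are made-up examples):

parse(Name, Arg) ->
    gen_server:call(Name, {parse, Arg}).

%% Usage:
%%   some_module:start_link(instance_a),
%%   some_module:parse(instance_a, Arg).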
The second part of the answer is that gen_server is compatible with parameterized modules:
-module(some_module, [Param1, Param2]).

start_link() ->
    PModule = ?MODULE:new(Param1, Param2),
    gen_server:start_link(PModule, [], []).
Param1 and Param2 will then be available in all the gen_server callback functions.
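For instance, a callback in that module could refer to them directly (a sketch; get_params is a made-up request):

handle_call(get_params, _From, State) ->
    {reply, {Param1, Param2}, State}.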
As Zed mentions, since start_link belongs to a parameterized module, you would need to do the following in order to call it:
Instance = some_module:new(Param1, Param2),
Instance:start_link().
I find this to be a particularly ugly style - the code that calls some_module:new/n must know the number and order of module parameters. The code that calls some_module:new/n also cannot live in some_module itself. This in turn makes a hot upgrade more difficult if the number or order of the module parameters change. You would have to coordinate loading two modules instead of one (some_module and its interface/constructor module) even if you could find a way to upgrade running some_module code. On a minor note, this style makes it somewhat more difficult to grep the codebase for some_module:start_link uses.
The recommended way to pass parameters to gen_servers is explicitly, via the gen_server:start_link/3,4 function arguments, and to store them in the state value you return from the ?MODULE:init/1 callback.
-module(good_style).
-record(state, {param1, param2}).

start_link(Param1, Param2) ->
    gen_server:start_link(?MODULE, [Param1, Param2], []).

init([Param1, Param2]) ->
    {ok, #state{param1 = Param1, param2 = Param2}}.
Using this style means that you won't be caught out by the various parts of OTP that don't yet fully support parameterized modules (a new and still experimental feature). Also, the state value can be changed while the gen_server instance is running, but module parameters cannot.
This style also supports hot upgrade via the code change mechanism. When the code_change/3 function is called, you can return a new state value. There is no corresponding way to return a new paramatized module instance to the gen_server code.
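As a sketch of that mechanism (migrate_param/1 is a hypothetical helper), the callback returns an updated state value during a hot upgrade:

code_change(_OldVsn, State = #state{param1 = P1}, _Extra) ->
    {ok, State#state{param1 = migrate_param(P1)}}.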
I think you shouldn't use this feature this way. It looks like you are going after an OO-like interface to your gen_servers. You are using locally registered names for this purpose - this adds a lot of shared state to your program, which is The Bad Thing. Only crucial and central servers should be registered with the register BIF - let all the others be unnamed and managed by some kind of manager on top of them (which should probably itself be registered under some name).
-module(zed, [Name]).
-behavior(gen_server).
-export([start_link/0, init/1, handle_cast/2]).
-export([increment/0]).
increment() ->
    gen_server:cast(Name, increment).

start_link() ->
    gen_server:start_link({local, Name}, {?MODULE, Name}, [], []).

init([]) ->
    {ok, 0}.

handle_cast(increment, Counter) ->
    NewCounter = Counter + 1,
    io:format("~p~n", [NewCounter]),
    {noreply, NewCounter}.
This module is working fine for me:
Eshell V5.7.2 (abort with ^G)
1> S1 = zed:new(s1).
{zed,s1}
2> S1:start_link().
{ok,<0.36.0>}
3> S1:increment().
1
ok
4> S1:increment().
2
ok