Erlang: Send message to module?

I was reading this section in "Learn You Some Erlang" and there's a piece of code that looks like:
start() ->
    register(?MODULE, Pid = spawn(?MODULE, init, [])),
    Pid.

start_link() ->
    register(?MODULE, Pid = spawn_link(?MODULE, init, [])),
    Pid.

terminate() ->
    ?MODULE ! shutdown.
I'm super confused by the terminate function. Does that say to send a message to the module itself? How does that work? What's going on?

TL;DR: shutdown is being sent to a process, not the module.
?MODULE is a macro that is replaced at compile time with the name of the current module (the file's name, as an atom).
What is specifically happening in this sample of code is that the spawned process is being registered with the VM under the name of the module, so that other processes can refer to it by that name. You could replace ?MODULE in that entire block of code with nearly any atom at all, as long as you used the same atom each time.
So when terminate() is invoked, the shutdown message is not sent to the module, but rather to the process that was spawned and registered under that name with the VM.
Using ?MODULE is merely a convenient approach for avoiding naming conflicts with other registered processes.
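For reference, a minimal sketch of what the receiving side might look like (the book's actual init/0 is not shown in the question, so this loop body is an assumption):

init() ->
    loop().

loop() ->
    receive
        shutdown ->
            exit(shutdown);   % stop the registered process
        _Other ->
            loop()            % ignore anything else and keep waiting
    end.

Because the process was registered, ?MODULE ! shutdown is just RegisteredName ! shutdown; the atom names the process, not the module.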


Erlang: spawn a process and wait for termination without using `receive`

In Erlang, can I call some function f (BIF or not) whose job is to spawn a process, run the function argf I provided, and not "return" until argf has "returned", and do this without using a receive clause? (The reason for this is that f will be invoked in a gen_server, and I don't want to pollute the gen_server's mailbox.)
A snippet would look like this:
%% some code omitted ...
F = fun() -> blah, blah, timer:sleep(10000) end,
f(F), %% like `spawn(F)`, but doesn't return until 10 seconds have passed
%% ...
The only way to communicate between processes is message passing (of course you could poll for a specific key in an ETS table or a file, but I don't like that).
If you use the spawn_monitor function in f/1 to start the F process and then have a receive block matching only the 'DOWN' message from this monitor:
f(F) ->
    {_Pid, MonitorRef} = spawn_monitor(F),
    receive
        {_Tag, MonitorRef, _Type, _Object, _Info} -> ok
    end.
you will not mess up your gen_server's mailbox. The example is the minimum code; you can add a timeout (fixed or passed as a parameter), execute some code on normal or error completion, and so on.
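For instance, a small variation of the same idea (my own sketch, not from the original answer) with an explicit timeout parameter:

f(F, Timeout) ->
    {_Pid, MonitorRef} = spawn_monitor(F),
    receive
        {'DOWN', MonitorRef, process, _DownPid, Reason} ->
            {ok, Reason}
    after Timeout ->
        erlang:demonitor(MonitorRef, [flush]),   % give up waiting and drop the monitor
        {error, timeout}
    end.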
You will not "pollute" the gen_server's mailbox if you spawn and then wait for the message before you return from the call or cast. A more serious problem is that you will block the gen_server while you are waiting for the other process to terminate. A way around this is to not wait explicitly, but to return from the call/cast and then, when the completion message arrives, handle it in handle_info/2 and do what is necessary.
If the spawning is done in a handle_call and you want to return the "result" of that process, you can delay replying to the original caller until the handle_info clause that handles the process termination message.
Note that however you do it, a gen_server:call has a timeout, either implicit or explicit, and if no reply is returned in time an error is generated in the calling process.
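A minimal sketch of that deferred-reply pattern (the message shape is my own, and the server state is assumed to be a map of monitor references to callers):

handle_call({run, F}, From, State) ->
    {_Pid, Ref} = spawn_monitor(F),
    %% remember who asked, keyed by the monitor reference; reply later
    {noreply, State#{Ref => From}}.

handle_info({'DOWN', Ref, process, _Pid, Reason}, State) ->
    case maps:take(Ref, State) of
        {From, NewState} ->
            gen_server:reply(From, {done, Reason}),   % answer the original caller
            {noreply, NewState};
        error ->
            {noreply, State}
    end.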
The main way to communicate with a process in the Erlang VM is message passing, with the erlang:send/2 or erlang:send/3 functions (or the ! operator). But there are other ways for processes to exchange information.
You can use erlang:link/1 to learn about the state of another process; it is mainly used to find out when a process dies or finishes, or when something goes wrong (an exception or a throw).
You can use erlang:monitor/2; this is similar to erlang:link/1, except that a message is delivered directly into the monitoring process's mailbox.
You can also hack around Erlang and use shared state (ETS/DETS/Mnesia tables) or external storage (a database or something like that). This is clearly not recommended and goes against the Erlang philosophy... but you can do it.
It seems your problem can be solved with the supervisor behaviour. A supervisor supports several restart strategies for controlling its supervised processes (a minimal init/1 sketch follows this list):
one_for_one: If one child process terminates and is to be restarted, only that child process is affected. This is the default restart strategy.
one_for_all: If one child process terminates and is to be restarted, all other child processes are terminated and then all child processes are restarted.
rest_for_one: If one child process terminates and is to be restarted, the 'rest' of the child processes (that is, the child processes after the terminated child process in the start order) are terminated. Then the terminated child process and all child processes after it are restarted.
simple_one_for_one: A simplified one_for_one supervisor, where all child processes are dynamically added instances of the same process type, that is, running the same code.
You can also create your own supervision strategy from scratch, or base it on supervisor_bridge.
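As mentioned above, here is a minimal sketch of a one_for_one supervisor; the module name and the my_worker child are assumptions for illustration:

-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    ChildSpecs = [#{id => my_worker,
                    start => {my_worker, start_link, []},   % assumed worker module
                    restart => permanent,
                    type => worker}],
    {ok, {SupFlags, ChildSpecs}}.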
So, to summarize, you need a process that waits for one or more other processes to terminate. This behaviour is supported natively by OTP, but you can also build your own model. To do that, you need to share the status of every started process, either via a cache or a database, or by reporting back to the parent when the process finishes. Something like this:
Parent = self(),   % capture the caller's pid here; self() inside the spawned
                   % fun would be the child's own pid
Fun = fun
          MyFun(ParentProcess, {result, Data})
            when is_pid(ParentProcess) ->
              ParentProcess ! {self(), Data};
          MyFun(ParentProcess, MyData)
            when is_pid(ParentProcess) ->
              % do something, producing MyData2
              MyData2 = MyData,
              MyFun(ParentProcess, MyData2)
      end,
spawn(fun() -> Fun(Parent, InitData) end).
EDIT: I forgot to add an example without send/receive. Here an ETS table is used to store each result from the lambda function. The ETS table is passed in when we spawn the process. To get a result, we can look it up in the table; note that the key of the row is the pid of the spawned process.
spawner(Ets, Fun, Args) when is_function(Fun) ->
    spawn(fun() -> Fun(Ets, Args) end).

Fun = fun
          F(Ets, {result, Data}) ->
              % store the result under this process's own pid
              ets:insert(Ets, {self(), Data});
          F(Ets, Data) ->
              % do something here, eventually producing {result, ...}
              Data2 = Data,
              F(Ets, Data2)
      end.
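A usage sketch (my own, not from the original answer): the table has to be created as public so the spawned process, which is not the owner, can write to it.

Tab = ets:new(results, [public]),
Pid = spawner(Tab, Fun, InitData),
%% ... later, once the work should be done ...
case ets:lookup(Tab, Pid) of
    [{Pid, Result}] -> Result;
    []              -> not_ready_yet
end.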

spawn_monitor registered name in different module

I'm trying to monitor a process with a registered name in a different module than where the monitor code is placed. This is an assignment for school, which is why I'm not going to post all of my code. However, here's the outline:
module1:start() spawns a process and registers its name:
register(name, Pid = spawn(?MODULE, loop, [])), Pid.
The loop waits for messages. If the message is of the wrong type it crashes.
module2:start() should start the registered process in module1 and monitor it, restarting it if it's crashed. I've been able to get it working using:
spawn(?MODULE, loop, [module1:start()]).
Then in the loop function I use erlang:monitor(process, Pid).
This way of solving the problem means the registered process can crash before the monitoring starts. I've been looking at spawn_monitor, but haven't been able to get the monitoring to work. The latest I've tried is:
spawn(?MODULE, loop, [spawn_monitor(name, start, [])]).
It starts the registered process. I can send messages to it, but I can't seem to detect anything. In the loop function I have a receive block, where I try to pattern match {'DOWN', Ref, process, Pid, _Why}. I've tried using spawn_monitor in module1 instead of simply spawn, but I noticed no change. I've also been trying to solve this using links (as in spawn_link), but I haven't gotten that to work either.
Any suggestions? What am I monitoring, if I'm not monitoring the registered process?
Since this is a homework assignment, I won't give you a complete answer.
Generally, you need two loops, one in module1 to do the work, and one in module2 to supervise the work. You already have a module1:start/0 function that calls spawn to execute the module1:loop/0 function to do the work, but as you've stated, this leaves a window of vulnerability between the spawning of the process and its monitoring by module2 that you're trying to close. As a hint, you could change the start function to call spawn_monitor instead:
start() ->
    {Pid, Ref} = spawn_monitor(?MODULE, loop, []),
    register(name, Pid),
    {Pid, Ref}.
and then your module2:start/0 function would just call it like this:
start() ->
    {Pid, Ref} = module1:start(),
    receive
        {'DOWN', Ref, process, Pid, _Why} ->
            %% restart the module1 pid
            %% details left out intentionally
    end.
Note that this implies that module2:start/0 needs a loop of some sort to spawn and monitor the module1 pid, and restart it when necessary. I leave that to your homework efforts.
Also, using spawn_link instead of spawn_monitor is definitely worth exploring.
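For the spawn_link route, a rough sketch (mine, not part of the assignment answer) of how the supervising loop can trap exits, so a crash arrives as a message instead of taking the supervisor down:

supervise() ->
    process_flag(trap_exit, true),
    Pid = spawn_link(module1, loop, []),
    register(name, Pid),
    receive
        {'EXIT', Pid, _Why} ->
            supervise()   % the worker died; start and register it again
    end.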

Pass some arguments to supervisor init function when app starts

I want to pass some arguments to the supervisor's init/1 function, and it is desirable that the application's interface look like this:
redis_pool:start() % start all instances
redis_pool:start(Names) % start only given instances
Here is the application:
-module(redis_pool).
-behaviour(application).

...

start() -> % start without params
    application:ensure_started(?APP_NAME, transient).

start(Names) -> % start with some params
    % I want to pass Names to supervisor init function
    % in order to do that I have to bypass application:ensure_started
    % which is not GOOD :(
    application:load(?APP_NAME),
    case start(normal, [Names]) of
        {ok, _Pid} -> ok;
        {error, {already_started, _Pid}} -> ok
    end.

start(_StartType, StartArgs) ->
    redis_pool_sup:start_link(StartArgs).
Here is the supervisor:
init([]) ->
    {ok, Config} = get_config(),
    Names = proplists:get_keys(Config),
    init([Names]);
init([Names]) ->
    {ok, Config} = get_config(),
    PoolSpecs = lists:map(fun(Name) ->
        PoolName = pool_utils:name_for(Name),
        {[Host, Port, Db], PoolSize} = proplists:get_value(Name, Config),
        PoolArgs = [{name, {local, PoolName}},
                    {worker_module, eredis},
                    {size, PoolSize},
                    {max_overflow, 0}],
        poolboy:child_spec(PoolName, PoolArgs, [Host, Port, Db])
    end, Names),
    {ok, {{one_for_one, 10000, 1}, PoolSpecs}}.
As you can see, the current implementation is ugly and may be buggy. The question is: how can I pass some arguments and start the application and supervisor (with the params that were given to start/1)?
One option is to start the application and run the redis pools in two separate phases.
redis_pool:start(),
redis_pool:run([] | Names).
But what if I want to run supervisor children (redis pool) when my app starts?
Thank you.
The application callback Module:start/2 is not an API to call in order to start the application. It is called when the application is started by application:start/1,2. This means that overloading it to provide differing parameters is probably the wrong thing to do.
In particular, application:start will be called directly if someone adds your application as a dependency of theirs (in the foo.app file). At this point, they have no control over the parameters, since they come from your .app file, in the {mod, {Mod, Args}} term.
Some possible solutions:
Application Configuration File
Require that the parameters be in the application configuration file; you can retrieve them with application:get_env/2,3.
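For example (the application name redis_pool and the env key pools are assumptions on my part), given a sys.config containing [{redis_pool, [{pools, [pool_a, pool_b]}]}], the supervisor can fetch the names itself and keep the rest of init/1 unchanged:

init([]) ->
    %% read the pool names from the application environment, [] as default
    Names = application:get_env(redis_pool, pools, []),
    init([Names]);
init([Names]) ->
    ...   % exactly as in the original supervisor above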
Don't start a supervisor
This means one of two things: becoming a library application (removing the {mod, Mod} term from your .app file) -- you don't need an application behaviour; or starting a dummy supervisor that does nothing.
Then, when someone wants to use your library, they can call an API to create the pool supervisor, and graft it into their supervision tree. This is what poolboy does with poolboy:child_spec.
Or, your application-level supervisor can be a normal supervisor, with no children by default, and you can provide an API to start children of that, via supervisor:start_child. This is (more or less) what cowboy does.
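A sketch of that last approach (the module names and the child spec are illustrative assumptions, not poolboy's or cowboy's actual API): the top-level supervisor starts with no children, and an exported run/1 adds one child per pool name on demand.

run(Names) ->
    [supervisor:start_child(redis_pool_sup,
                            #{id    => Name,
                              start => {redis_pool_worker, start_link, [Name]},   % assumed worker module
                              type  => worker})
     || Name <- Names].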
You can pass arguments in the AppDescr argument to application:load/1 (though it's a mighty big tuple already...) as {mod, {Module, StartArgs}} according to the docs ("according to the docs" as in, I don't recall ever doing it this way myself: http://www.erlang.org/doc/apps/kernel/application.html#load-1).
application:load({application, some_app, [{mod, {Module, [Stuff]}}]})
Without knowing anything about the internals of the application you're starting, it's hard to say which way is best, but a common way to do this is to start up the application and then send it a message containing the data you want it to know.
You could make receipt of that message trigger a configuration assertion procedure, so that the same message you send on startup is also the same sort of thing you would send to reconfigure it on the fly. I find this more useful than one-shotting arguments on startup.
In any case, it is usually better to think in terms of starting something and then asking it to do something for you, than to try telling it everything in init parameters. This can be as simple as having it start up and wait for some message that will tell the listener to then spin up the supervisor the way you're trying to here -- isolated one step from the application inclusion issues RL mentioned in his answer.
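A sketch of that idea (the registered name redis_pool_manager and the message shape are assumptions):

start_pools(Names) ->
    ok = application:ensure_started(redis_pool),
    %% the same message works at startup and for later reconfiguration
    redis_pool_manager ! {configure, Names},
    ok.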

Link two processes in Erlang?

To exchange data, it becomes important to link the processes first. The following code does the job of linking two processes.
start_link(Name) ->
    gen_fsm:start_link(?MODULE, [Name], []).
My question: which are the two processes being linked here?
In your example, the process that called start_link/1 and the process started by gen_fsm:start_link(?MODULE, [Name], []).
It is a mistake to think that two processes need to be linked to exchange data. Linking ties the fates of the two processes together: if one dies, the other dies, unless the process that set up the link is a system process (a "system process" here means one that is trapping exits). This probably isn't what you want. If you are trying to avoid a deadlock, or to do something other than just time out during synchronous messaging if the process you are sending a message to dies before responding, consider something like this:
ask(Proc, Request, Data, Timeout) ->
    Ref = monitor(process, Proc),
    Proc ! {self(), Ref, {ask, Request, Data}},
    receive
        {Ref, Res} ->
            demonitor(Ref, [flush]),
            Res;
        {'DOWN', Ref, process, Proc, Reason} ->
            some_cleanup_action(),
            {fail, Reason}
    after
        Timeout ->
            {fail, timeout}
    end.
If you are just trying to spawn a worker that needs to give you an answer, you might want to consider using spawn_monitor instead, and using its {pid(), reference()} return value to match the response message you're listening for.
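A sketch of that spawn_monitor worker pattern (my own illustration): the worker sends its answer tagged with its own pid, and the 'DOWN' message covers the case where it crashes before answering.

run(F, Timeout) ->
    Caller = self(),
    {Pid, Ref} = spawn_monitor(fun() -> Caller ! {self(), F()} end),
    receive
        {Pid, Result} ->
            demonitor(Ref, [flush]),
            {ok, Result};
        {'DOWN', Ref, process, Pid, Reason} ->
            {error, Reason}
    after Timeout ->
        demonitor(Ref, [flush]),
        exit(Pid, kill),
        {error, timeout}
    end.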
As I mentioned above, the process starting the link won't die if it is trapping exits, but you really want to avoid trapping exits in most cases. As a basic rule, use process_flag(trap_exit, true) as little as possible. Getting trap_exit-happy everywhere will eventually have structural effects you don't intend, and it's one of the few things in Erlang that is difficult to refactor away from later.
The link is bidirectional, between the process which is calling the function start_link(Name) and the new process created by gen_fsm:start_link(?MODULE, [Name], []).
A called function is executed in the context of the calling process.
A new process is created by a spawn function. You should find it in the gen_fsm:start_link/3 code.
When a link is created, if one process exits for a reason other than normal, the linked process will also die, unless it has set process_flag(trap_exit, true), in which case it will instead receive the message {'EXIT', FromPid, Reason}, where FromPid is the pid of the process that died and Reason is the reason for its termination.
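A small illustration of the trap_exit behaviour described above:

watch() ->
    process_flag(trap_exit, true),
    Pid = spawn_link(fun() -> exit(boom) end),
    receive
        {'EXIT', Pid, Reason} ->
            io:format("linked process ~p died: ~p~n", [Pid, Reason])
    end.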

What's the meaning of gen:start?

In the gproc project's file gen_leader.erl, a customized behaviour is created. But in the following statement, what is the module "gen"? I can't find this module in the Erlang documentation (http://www.erlang.org/erldoc). Could you give me some explanation?
behaviour_info(callbacks) ->
    [{init,1},
     {elected,2},
     {surrendered,3},
     {handle_leader_call,4},
     {handle_leader_cast,3},
     {handle_local_only, 4},
     {from_leader,3},
     {handle_call,3},
     {handle_cast,2},
     {handle_DOWN,3},
     {handle_info,2},
     {terminate,2},
     {code_change,4}];
behaviour_info(_Other) ->
    undefined.

start_link(Name, [_|_] = CandidateNodes, Workers,
           Mod, Arg, Options) when is_atom(Name) ->
    gen:start(?MODULE, link, {local,Name}, Mod, %<<++++++ What's the meaning?
              {CandidateNodes, Workers, Arg}, Options).
It looks like gen:start() is referring to gen.erl. According to the documentation in the file, gen.erl implements the generic parts of gen_server, gen_fsm and other OTP behaviours. In this case, it looks like gen:start handles spawning new processes. It checks whether a process has already been spawned with the given name. If it has, an error is returned. If it has not, a new process is spawned by calling the module's start or start_link function.
In other words, when you call gen_server:start or gen_fsm:start, it calls gen:start (which does basic sanity checking), and gen:start, in turn, calls the module's start or start_link. When you're creating custom OTP behaviours, you call gen:start directly so that you don't have to replicate the error-checking code in gen.erl.
