Is it bad to send a message to self() in init? - erlang

In this example, the author avoids a deadlock situation by doing:
self() ! {start_worker_supervisor, Sup, MFA}
in his gen_server's init function. I did something similar in one of my projects and was told this method was frowned upon, and that it was better to cause an immediate timeout instead. What is the accepted pattern?

Update for Erlang 19+
Consider using the new gen_statem behaviour. This behaviour supports generating events internal to the FSM:
The state function can insert events using the action() next_event and such an event is inserted as the next to present to the state function. That is, as if it is the oldest incoming event. A dedicated event_type() internal can be used for such events making them impossible to mistake for external events.
Inserting an event replaces the trick of calling your own state handling functions that you often would have to resort to in, for example, gen_fsm to force processing an inserted event before others.
Using the action functionality in that module, you can ensure your event is generated in init and always handled before any external events, specifically by creating a next_event action in your init function.
Example:
...
callback_mode() -> state_functions.
init(_Args) ->
    {ok, my_state, #data{}, [{next_event, internal, do_the_thing}]}.

my_state(internal, do_the_thing, Data) ->
    the_thing(),
    {keep_state, Data};
my_state({call, From}, Call, Data) ->
...
...
Old answer
When designing a gen_server you generally have the choice to perform actions in three different states:
When starting up, in init/1
When running, in any handle_* function
When stopping, in terminate/2
A good rule of thumb is to execute things in the handling functions when acting upon an event (call, cast, message etc). The stuff that gets executed in init should not wait for events; that's what the handle callbacks are for.
So, in this particular case, a kind of "fake" event is generated. I'd say it seems that the gen_server always wants to initiate the starting of the supervisor. Why not just do it directly in init/1? Is there really a requirement to be able to handle another message in-between (the effect of doing it in handle_info/2 instead)? That window is so incredibly small (the time between the start of the gen_server and the sending of the message to self()) that it's highly unlikely anything will happen in it at all.
As for the deadlock, I would really advise against calling your own supervisor in your init function. That's just bad practice. A good design pattern for starting worker processes would be one top level supervisor, with a manager and a worker supervisor beneath. The manager starts workers by calling the worker supervisor (a sketch follows the diagram):
[top_sup]
| \
| \
| \
man [work_sup]
/ | \
/ | \
/ | \
w1 ... wN
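With that layout the manager, not the gen_server's own supervisor, starts the workers. A minimal sketch, assuming work_sup is a simple_one_for_one supervisor registered under that name:

    %% In the manager (a gen_server); work_sup is a sibling under
    %% top_sup, so calling it from here cannot deadlock.
    handle_call({start_worker, Args}, _From, State) ->
        {ok, Pid} = supervisor:start_child(work_sup, [Args]),
        {reply, {ok, Pid}, State}.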

Just to complement what has already been said about splitting a server's initialisation into two parts: the first in the init/1 function and the second in either handle_cast/2 or handle_info/2. There is really only one reason to do this, and that is if the initialisation is expected to take a long time. Splitting it up allows gen_server:start_link to return faster, which can be important for servers started by supervisors, as supervisors "hang" while starting their children and one slow-starting child can delay the whole supervisor's startup.
In this case I don't think it is bad style to split the server initialisation.
It is important to be careful with errors. An error in init/1 will cause the supervisor to terminate, while an error in the second part will cause the supervisor to try to restart that child.
I personally think it is better style for the server to send a message to itself, either with an explicit ! or a gen_server:cast, since with a good descriptive message, for example init_phase_2, it is easier to see what is going on than with a more anonymous timeout. Especially if timeouts are used elsewhere as well.
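A sketch of that style (the #state{} record and the second-phase function are illustrative):

    init(Args) ->
        %% Sent before start_link returns, so for an unregistered
        %% server this is the first message handled.
        self() ! init_phase_2,
        {ok, #state{args = Args}}.

    handle_info(init_phase_2, State) ->
        NewState = expensive_second_phase(State),
        {noreply, NewState}.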

Calling your own supervisor sure does seem like a bad idea, but I do something similar all the time.
init(...) ->
    gen_server:cast(self(), startup),
    {ok, ...}.

handle_cast(startup, State) ->
    slow_initialisation_task_reading_from_disk_fetching_data_from_network_etc(),
    {noreply, State}.
I think this is clearer than using timeout and handle_info; it's pretty much guaranteed that no message can get ahead of the startup message (no one else has our pid until after we've sent that message), and it doesn't get in the way if I need to use timeouts for something else.
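For comparison, the timeout variant looks roughly like this (a minimal sketch with an illustrative #state{} record):

    init(Args) ->
        %% A timeout of 0 makes handle_info(timeout, ...) run as soon
        %% as init/1 returns -- unless some other message arrives
        %% first, which is exactly the caveat mentioned above.
        {ok, #state{args = Args}, 0}.

    handle_info(timeout, State) ->
        NewState = slow_second_phase(State),
        {noreply, NewState}.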

This may be a very efficient and simple solution, but I think it is not good Erlang style.
I am using timer:apply_after, which is better and does not give the impression of interacting with an external module/gen_*.
I think the best way would be to use state machines (gen_fsm). Most of our gen_servers are really state machines; however, because of the initial effort needed to set up gen_fsm, we end up with gen_server.
To conclude, I would use timer:apply_after to make the code clear and efficient, or gen_fsm to be pure Erlang style (even faster).
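A sketch of the timer:apply_after variant (init_phase_2 is again just an illustrative message name):

    init(Args) ->
        %% timer:apply_after/4 takes a delay in milliseconds and an
        %% M, F, A to apply; here it casts init_phase_2 back to us.
        {ok, _TRef} = timer:apply_after(0, gen_server, cast,
                                        [self(), init_phase_2]),
        {ok, #state{args = Args}}.

    handle_cast(init_phase_2, State) ->
        {noreply, finish_initialisation(State)}.

Note that, unlike sending to self() inside init/1, the timer message goes through the timer server, so an external message can arrive before it.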
I have just read the code snippets, but the example itself is somewhat broken -- I do not understand this construct of a gen_server manipulating its supervisor. Even if it is the manager of some pool of future children, that is all the more reason to do it explicitly, without counting on processes' mailbox magic. Debugging this in some bigger system would also be hell.

Frankly, I don't see the point in splitting initialization. Doing heavy lifting in init does hang the supervisor, but using timeout/handle_info, sending a message to self(), or adding an init check to every handler (another possibility, though not very convenient) will effectively hang the calling processes instead. So why do I need a "working" supervisor with a "not quite working" gen_server? A clean implementation should probably include a "not_ready" reply for any message during initialization (why not spawn the full initialization from init and send a message back to self() when complete, which would reset the "not_ready" status), but then the "not ready" reply has to be properly processed by the caller, and this adds a lot of complexity. Just suspending the reply is not a good idea.
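A sketch of that approach, assuming an illustrative #state{ready, data} record and hypothetical heavy_init/1 and do_request/2 helpers:

    init(Args) ->
        Self = self(),
        %% Do the slow part in a linked helper process and message
        %% ourselves when it finishes.
        spawn_link(fun() -> Self ! {init_done, heavy_init(Args)} end),
        {ok, #state{ready = false}}.

    handle_info({init_done, Result}, State) ->
        {noreply, State#state{ready = true, data = Result}}.

    handle_call(_Request, _From, #state{ready = false} = State) ->
        %% Every caller now has to cope with this reply -- the extra
        %% complexity the answer above warns about.
        {reply, {error, not_ready}, State};
    handle_call(Request, _From, #state{ready = true} = State) ->
        {reply, do_request(Request, State), State}.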

Related

Dialyzer Errors call to missing or unexported function gen_server:call/4

stop_link(UserDefined) ->
    gen_server:call({local, UserDefined}, terminate, [], []),
    ok
I am using dialyzer to fix warnings in Erlang code, and I came across this mistake, which reads "call to missing or unexported function gen_server:call/4".
I am not able to understand what is wrong with this. Can anyone please guide me on what the mistake is? I have just started with Erlang and would greatly appreciate a brief explanation.
There are many things wrong with this code. Here goes...
The reason the start_link function is called that is because it starts the process and links to it. Your stop function should just be called stop.
The documentation for gen_server:call/2,3 reveals two problems with this code:
You don't need the {local, Name} form with gen_server:call. You only need it when calling gen_server:start_link (and only then if you want a registered name for your process). For calling local names, just use Name. Or the process ID.
There isn't a variant of the function with arity 4 (i.e. 4 parameters). The 3-arity variant takes a timeout. You probably want the 2-arity one.
I suspect that you're trying to specify an arbitrary function in gen_server:call (i.e. you want to call the terminate function). That's not how this works.
gen_server:call(NameOrPid, Request) results in a call to handle_call(Request, From, State). See the documentation.
In that function, you can match the request and do the appropriate thing. Something like this:
handle_call(frob, _From, State) ->
    % do whatever 'frob' means.
    {reply, ok, NewState};
(that ; might be a ., depending on whether this is the final handle_call clause).
If you really want the server to stop, you should just do the following:
handle_call(terminate, _From, State) ->
    {stop, meh, State}.
That will result in a call to terminate.
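So the client-side function, using the registered name and the 2-arity call, would be something like:

    stop(Name) ->
        gen_server:call(Name, terminate).

Note that {stop, meh, State} stops the server without sending a reply, so the call above will exit; replying first, with {stop, normal, ok, State}, is the usual way to hand the caller a clean ok.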
Oh, and if you're only just learning Erlang, you probably don't want to be running dialyzer until you've got a bit more experience. It's a bit ... tricky ... for the uninitiated. Though it did find this mistake, which was nice.

Can an OTP supervisor monitor a process on a remote node?

I'd like to use erlang's OTP supervisor in a distributed application I'm building. But I'm having trouble figuring out how a supervisor of this kind can monitor a process running on a remote Node. Unlike erlang's start_link function, start_child has no parameters for specifying the Node on which the child will be spawned.
Is it possible for an OTP supervisor to monitor a remote child, and if not, how can I achieve this in Erlang?
supervisor:start_child/2 can be used across nodes.
The reason for your confusion is just a mix-up regarding the context of execution (which is admittedly a bit hard to keep straight sometimes). There are three processes involved in any OTP spawn:
The Requestor
The Supervisor
The Spawned Process
The context of the Requestor is the one in which supervisor:start_child/2 is called -- not the context of the supervisor itself. You would normally provide a supervisor interface by exporting a function that wraps the call to supervisor:start_child/2:
do_some_crashable_work(Data) ->
    supervisor:start_child(sooper_dooper_sup, [Data]).
That might be defined and exported from the supervisor module, be defined internally within a "manager" sort of process according to the "service manager/supervisor/workers" idiom, or whatever. In all cases, though, some process other than the supervisor is making this call.
Now look carefully at the Erlang docs for supervisor:start_child/2 again (here, and an R19.1 doc mirror since sometimes erlang.org has a hard time for some reason). Note that the type sup_ref() can be a registered name, a pid(), a {global, Name} or a {Name, Node}. The requestor may be on any node calling a supervisor on any other node when calling with a pid(), {global, Name} or a {Name, Node} tuple.
The supervisor doesn't just randomly kick things off, though. It has a child_spec() it is going off of, and the spec tells the supervisor what to call to start that new process. This first call into the child module is made in the context of the supervisor and is a custom function. Though we typically name it something like start_link/N, it can do whatever we want as a part of startup, including declare a specific node on which to spawn. So now we wind up with something like this:
%% Usually defined in the requestor or supervisor module
do_some_crashable_work(SupNode, WorkerNode, Data) ->
    supervisor:start_child({sooper_dooper_sup, SupNode}, [WorkerNode, Data]).
With a child spec of something like:
%% Usually in the supervisor code
SooperWorker = {sooper_worker,
                {sooper_worker, start_link, []},
                temporary,
                brutal_kill,
                worker,
                [sooper_worker]},
Which indicates that the first call would be to sooper_worker:start_link/2:
%% The exported start_link function in the worker module.
%% Called in the context of the supervisor.
start_link(Node, Data) ->
    Pid = proc_lib:spawn_link(Node, ?MODULE, init, [self(), Data]),
    {ok, Pid}.

%% The first thing the newly spawned process will execute
%% in its own context, assuming here it is going to be a gen_server.
init(_Parent, Data) ->
    Debug = sys:debug_options([]),
    {ok, State} = initialize_some_state(Data),
    %% enter_loop/3 takes the callback module, not the parent;
    %% proc_lib already tracks who the parent is.
    gen_server:enter_loop(?MODULE, Debug, State).
You might be wondering what all that mucking about with proc_lib was for. It turns out that while calling for a spawn from anywhere within a multi-node system to initiate a spawn anywhere else within that system is possible, it just isn't a very useful way of doing business, and so the gen_* behaviours and even proc_lib:start_link/N don't have a way of declaring the node on which to spawn a new process.
What you ideally want is nodes that know how to initialize themselves and join the cluster once they are running. Whatever services your system provides are usually best replicated on the other nodes within the cluster; then you only have to write a way of picking a node, which lets you factor out the business of startup entirely, as it is now node-local in every case. In this case whatever your ordinary manager/supervisor/worker code does doesn't have to change -- stuff just happens, and it doesn't matter that the requestor's PID happens to be on another node, even if that PID is the address to which results must be returned.
Stated another way: we don't really want to spawn workers on arbitrary nodes; what we really want is to step up to a higher level and request that some work get done by another node without caring about how that happens. Remember, to spawn a particular function based on an {M,F,A} call, the node you are calling must have access to the target module and function -- and if it already has a copy of the code, why isn't it a duplicate of the calling node?
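In that spirit, the requestor just picks a node and asks the service there to do the work; assuming a manager registered as manager on each node (an illustrative name), that is a one-liner:

    request_work(Node, Data) ->
        %% The {Name, Node} form addresses the locally registered
        %% manager on Node; it starts workers via its own local
        %% worker supervisor, so no cross-node spawning is needed.
        gen_server:call({manager, Node}, {do_work, Data}).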
Hopefully this answer explained more than it confused.

Erlang: Is it ok to write application without a supervisor?

I don't need a supervisor for some specific application I'm developing. Is it OK not to use one?
The doc says about start/2 that it
"should return {ok,Pid} or {ok,Pid,State} where Pid is the pid of the top supervisor"
so I'm not sure if it is OK not to start a supervisor and to return some invalid pid (I tried, and nothing bad happened).
Returning {ok, self()} or something similar works fine until you start doing release upgrades. At that point, you'll need to use a supervisor with an empty child list. (The application and supervisor behaviours don't have colliding callback functions, so you can put both in the same module; see the sketch below.)
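A minimal sketch of such a combined module (the module and supervisor names are illustrative):

    -module(my_app).
    -behaviour(application).
    -behaviour(supervisor).

    -export([start/2, stop/1]).
    -export([init/1]).

    %% application callbacks
    start(_Type, _Args) ->
        supervisor:start_link({local, my_app_sup}, ?MODULE, []).

    stop(_State) ->
        ok.

    %% supervisor callback: an empty child list, so the supervisor
    %% exists only to give release upgrades a real top supervisor.
    init([]) ->
        {ok, {{one_for_one, 1, 5}, []}}.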
Just to make sure: you are doing some kind of initialisation in your application module's start callback function, right? If not, you can just remove the mod directive from the .app file and the callback won't even be called, and thus there will be no supervisor, real or fake.

Erlang tracing (collect data from processes only in my modules)

While tracing my modules using dbg, I ran into the problem of how to collect messages such as spawn, exit, register, unregister, link, unlink, getting_linked and getting_unlinked, which are allowed by erlang:trace, but only for those processes which were spawned from my modules directly.
As an example, I don't need to know which processes the io module creates when I call io:format in some module function. Does anybody know how to solve this problem?
Short answer:
One way is to look at call messages followed by spawn messages.
Long answer:
I'm not an expert on dbg. The reason is that I've been using an (imho much better, safer and even handier) alternative: pan, from https://gist.github.com/gebi/jungerl/tree/master/lib/pan
The API is summarized in the html doc.
With pan:start you can trace specifying a callback module that receives all the trace messages. Then your callback module can process them, e.g. keep track of processes in ETS or a state data that is passed into every call.
The format of the trace messages is specified under pan:scan.
For examples of callback modules, you may look at src/cb_*.erl.
Now to your question:
With pan you can trace on process handling and calls in your favourite module like this:
    pan:start({ip, CallbackModule}, Node, all, [procs, call], {Module}).
where Module is the name of your module (in this case: sptest)
Then the callback module (in this case: cb_write) can look at the spawn messages that follow a call message within the same process, e.g.:
32 - {call,{<6761.194.0>,{'fun',{shell,<node>}}},{sptest,run,[[97,97,97]]},{1332,247999,200771}}
33 - {spawn,{<6761.194.0>,{'fun',{shell,<node>}}},{{<6761.197.0>,{io,fwrite,2}},{io,fwrite,[[77,101,115,115,97,103,101,58,32,126,115,126,110],[[97,97,97]]]}},{1332,247999,200805}}
As pan uses the same tracing back end as dbg, the trace messages (and the information in them) can be collected using the Erlang trace BIFs as well, but pan is much more secure.
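If you want to stay with plain dbg, a rough equivalent is a tracer process whose handler fun remembers which pids have called into your module and only keeps procs events for those pids and their descendants. A sketch, not a drop-in solution:

    start_trace(Mod) ->
        Handler =
            fun({trace, Pid, call, {M, _F, _A}}, Pids) when M =:= Mod ->
                    sets:add_element(Pid, Pids);
               ({trace, Pid, spawn, Child, _MFA} = Msg, Pids) ->
                    case sets:is_element(Pid, Pids) of
                        true ->
                            io:format("~p~n", [Msg]),
                            sets:add_element(Child, Pids);
                        false ->
                            Pids
                    end;
               (_Msg, Pids) ->
                    Pids
            end,
        {ok, _} = dbg:tracer(process, {Handler, sets:new()}),
        {ok, _} = dbg:p(all, [call, procs]),
        {ok, _} = dbg:tp(Mod, []).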

How to properly run a Symfony task in the background from an action?

$path = sfConfig::get('sf_app_module_dir')."/module/actions/MultiTheading.php";
foreach ($arr as $id) {
    if ($id) {
        passthru("php -q $path $id $pid &");
    }
}
When I run the action, the scripts run sequentially despite the "&".
Please help.
There are two common methods to achieve what you want.
Both involve creating a table in your database (kind of a to-do list). Your frontend saves work to do there.
The first one is easier, but it's only OK if you don't mind a slight latency. You start by creating a symfony task. When it wakes up (every 10/30/whatever minutes) it checks that table for anything to do, and simply exits if there is nothing. Otherwise it does what it needs to, then marks the rows as processed.
The second one is more work and more error-prone, but can work instantly. You create a task that daemonizes itself when started (forks, forks again, and sets the parent pid to zero), then goes to sleep. When there is work to do, you wake it up by sending a signal. Daemonizing and signal sending/receiving can be done with PHP's pcntl_* functions.
