I have a supervisor with two worker processes: a TCP client which handles connection to a remote server and an FSM which handles the connection protocol.
Handling TCP errors in the child process complicates its code significantly. So I'd prefer to "let it crash", but this has a different problem: when the server is unreachable, the maximum number of restarts is quickly reached and the supervisor crashes along with my entire application, which is quite undesirable in this case.
What I'd like is a restart strategy with back-off; failing that, it would be good enough if the supervisor were aware that it is being restarted due to a crash (i.e. had that passed as a parameter to the init function). I've found this mailing list thread, but is there a more official/better-tested solution?
You might find our supervisor cushion to be a good starting point. I use it to slow down the restart of things that must be running, but are failing quickly on startup (such as ports that are encountering a resource problem).
I've had this problem many times working with Erlang and have tried many solutions. I think the best I've found is to have an extra process that is started by the supervisor and that in turn starts the child that might crash.
It starts the child on start-up, awaits child exits and restarts the child (with a delay) or exits as appropriate. I think this is simpler than the back-off server (which you link to) as you only need to keep state regarding a single child.
Another solution that I've used is to start the child processes as transient and have a separate process that polls and issues restarts to any processes that have crashed.
So first you want to catch the early termination of the child by setting process_flag(trap_exit, true) in your init. Then you need to decide how long you want to delay the restart by, for example 10 seconds, and schedule it in the handle_info clause that receives the 'EXIT' message:

handle_info({'EXIT', _Pid, Reason}, State) ->
    erlang:send_after(10000, self(), {die, Reason}),
    {noreply, State};
Lastly, let the process die with:

handle_info({die, Reason}, State) ->
    {stop, Reason, State};
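Putting the pieces together, a minimal sketch of such an intermediary gen_server (the slow_restarter module name and the my_child:start_link/0 entry point are illustrative, not from the thread):

-module(slow_restarter).
-behaviour(gen_server).

-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

init([]) ->
    %% Trap exits so the child's crash arrives as an 'EXIT' message.
    process_flag(trap_exit, true),
    {ok, Child} = my_child:start_link(),
    {ok, Child}.

handle_call(_Request, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

%% When the child exits, wait 10 seconds before dying ourselves; the
%% supervisor then restarts us, and our init restarts the child.
handle_info({'EXIT', _Pid, Reason}, State) ->
    erlang:send_after(10000, self(), {die, Reason}),
    {noreply, State};
handle_info({die, Reason}, State) ->
    {stop, Reason, State}.

Because the supervisor only ever sees this one process, its restart intensity is never exceeded by a rapidly crashing child.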
I have what may be an unusual situation, an application that starts 2 top-level supervisors, e.g.,
...
-behavior(application).
...
start(_StartType, _StartArgs) ->
    sup1:start_link(),
    sup2:start_link().
They both have a {one_for_one, 0, 1} restart strategy. Their children implement a simple crash function that throws a bad_match error.
To my question: if I call sup1_child1:crash(), supervisor sup1 will terminate but the application will keep running (i.e., supervisor sup2 and its children are still available). If instead I call sup2_child1:crash(), then the entire application terminates. The latter behavior is what I expect in both cases. If I flip the order of the start_link() calls, i.e.,
...
    sup2:start_link(),
    sup1:start_link().
then crashing sup1 will cause the application to terminate but crashing sup2 will not. So it appears the order in which start_link() is called determines which supervisor's crash will cause the application to terminate. Is this expected? Or am I abusing the supervision tree capability by having 2 root supervisors?
Thanks,
Rich
It is entirely expected, and it is expected because you are abusing the supervision tree capability. There is a hidden process called the "application master". Your application's start function is supposed to return a SINGLE pid, which is then monitored by the application master. If that process crashes, the BEAM VM will also crash (depending, actually, on how the application is started; similar to worker processes, your applications can be permanent, transient, or temporary).
You should have one top-level supervisor (your application supervisor). If you need two supervisors at the top level, they should both be children of your application supervisor.
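For instance, a minimal sketch reusing the sup1/sup2 names from the question (the root_sup name and the restart intensity values are illustrative):

-module(root_sup).
-behaviour(supervisor).

-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Both former top-level supervisors become children of this one.
    {ok, {{one_for_one, 5, 10},
          [{sup1, {sup1, start_link, []}, permanent, infinity, supervisor, [sup1]},
           {sup2, {sup2, start_link, []}, permanent, infinity, supervisor, [sup2]}]}}.

The application callback then returns that single pid:

start(_StartType, _StartArgs) ->
    root_sup:start_link().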
I am just reading Manning's Erlang & OTP In Action. Very good book, I think. It contains a nice TCP server example, but I'd like to write a UDP server. This is how I've structured my app so far.
my_app % app behaviour
|-- my_sup % root supervisor
    |-- my_server % gen_server to open UDP socket and dispatch
    |-- my_worker_sup % simple_one_for_one supervisor to start workers
        |-- my_worker_server % gen_server worker
So, my_app starts my_sup, which in turn starts my_worker_sup and my_server. The UDP socket is opened in my_server in active mode, such that handle_info/2 is invoked on each new UDP message; in response I call my_worker_sup:start_child/2 to pass the message to a new worker process for processing. (The call to start_child/2 is in fact, as per the book's recommendation, wrapped in an API function to hide some of the details, but this is essentially what happens.)
Am I suffering from OTP fever? Should the my_worker_server really implement the gen_server behaviour? Do I need my_worker_sup at all?
I set it up like this so that I can use my_worker_sup as a factory via the start_child/2 call, but I only use the worker's init/1 and handle_info(timeout, State) functions: first to set up state and then to process the message before shutting the worker down.
Should I just spawn the worker directly? Is another behaviour better suited, perhaps?
Thanks,
HC
The key answer to this question is: "how do you want your application to crash?"
If a worker dies, then what should happen? If this should stop everything, including the UDP connection, then surely you can just spawn_link them under my_server directly, no supervisor tree needed. But if you want them to be able to gracefully restart or something else, then the above diagram is usually better. Perhaps add a monitor on the workers from my_server so it can keep track of who is alive.
In my utp Erlang library, I have almost the same construction. A master handles the UDP socket and forwards packets to workers based on a routing table kept in ETS. Each worker keeps its connection state and handles the incoming information.
Since you don't track state, your best bet is probably to run them via proc_lib:spawn_link and then hook them to the simple_one_for_one supervisor as transient processes. That way, too many crashes will still be propagated up the supervision tree, while exits with reason normal are allowed. This lets them run exactly once.
Note that you could also handle everything directly in my_server, but then you will not be able to process data concurrently. This may or may not be acceptable. The general rule is to spawn a new process when you have concurrent work that needs to run alongside other work, blocks, or otherwise needs to be isolated.
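To make the shape concrete, here is a minimal sketch using the names from the question (the port number, the start_link/start_child arities, and the process_packet/2 helper are assumptions, not from the book):

%% In my_server: open the socket in active mode and dispatch each
%% datagram to a fresh worker (port 5555 is an arbitrary example).
init([]) ->
    {ok, Socket} = gen_udp:open(5555, [binary, {active, true}]),
    {ok, Socket}.

handle_info({udp, _Socket, Ip, Port, Packet}, Socket) ->
    {ok, _Pid} = my_worker_sup:start_child({Ip, Port}, Packet),
    {noreply, Socket}.

%% In my_worker_sup: transient children under simple_one_for_one, so
%% abnormal exits are restarted/propagated but normal exits are ignored.
init([]) ->
    {ok, {{simple_one_for_one, 5, 10},
          [{my_worker_server, {my_worker_server, start_link, []},
            transient, 5000, worker, [my_worker_server]}]}}.

start_child(Peer, Packet) ->
    supervisor:start_child(?MODULE, [Peer, Packet]).

%% In my_worker_server: do the work after init via a zero timeout, then
%% stop normally, matching the pattern described in the question.
init([Peer, Packet]) ->
    {ok, {Peer, Packet}, 0}.

handle_info(timeout, {Peer, Packet} = State) ->
    ok = process_packet(Peer, Packet),  % illustrative helper
    {stop, normal, State}.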
I've set up a simple test case at https://github.com/bvdeenen/otp_super_nukes_all that shows that an OTP application:stop() actually kills all processes spawned by its children, even the ones that are not linked.
The test case consists of one gen_server (registered as par) spawning a plain Erlang process (registered as par_worker) and a gen_server (registered as reg_child), which in turn spawns a plain Erlang process (registered as child_worker). Calling application:stop(test_app) does a normal termination on the 'par' gen_server, but an exit(kill) on all the others!
Is this nominal behaviour? If so, where is it documented, and can I disable it? I want the processes I spawn (not link) from my gen_server to stay alive when the application terminates.
Thanks
Bart van Deenen
The application manual says (for the stop/1 function):
Last, the application master itself terminates. Note that all processes with the application master as group leader, i.e. processes spawned from a process belonging to the application, thus are terminated as well.
So I guess you can't modify this behavior.
EDIT: You might be able to change the group leader of the started process with group_leader(GroupLeader, Pid) -> true (see: http://www.erlang.org/doc/man/erlang.html#group_leader-2). Changing the group leader might allow you to avoid your process being killed when the application ends.
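A minimal sketch of that workaround (using the init process as the new group leader is just one example of a process outside the application):

%% Spawn a helper, then hand it a group leader that does not belong to
%% this application, so application:stop/1 should not take it down.
Pid = spawn(fun() -> receive stop -> ok end end),
true = erlang:group_leader(whereis(init), Pid).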
I made that mistake too, and found out that it must happen.
If the parent process dies, all child processes die, no matter whether they are registered or not.
If this did not happen, we would have to track all up-and-running processes and figure out which are orphaned and which are not; you can guess how difficult that would be. You can think of Unix pids and ppids: kill the parent and the children go down with it. So I think this must happen.
If you want to have processes independent of your application, you can send a message to another application and have it start them:
other_application_module:start_process(ProcessInfo).
I have several gen_server workers periodically requesting information from hardware sensors. Sensors may temporarily fail; that is normal. If a sensor fails, its worker terminates with an exception.
All workers are spawned from a supervisor with the simple_one_for_one strategy. I also have a control gen_server, which can start and stop workers and also receives 'DOWN' messages.
So now I have two problems:
If a worker is restarted by the supervisor, its state is lost, which is not acceptable to me. I need to recreate the worker with the same state.
If the worker fails several times within a period of time, something serious has happened with the sensor and it requires the operator's attention. Thus I need to give up restarting the worker and send a message to the event handlers. But the default behaviour of a supervisor is to terminate after the restart limit is exhausted.
I see two solutions:
Set the restart type of the processes in the supervisor to temporary and control and restart them from the control gen_server. But this is exactly what a supervisor should do, so I'd be reinventing the wheel.
Create a supervisor for each worker under the main supervisor. This solves my second problem exactly, but the state of the workers is still lost after a restart, so I'd need some storage, such as an ETS table, for the workers' states.
I am very new to Erlang, so I need some advice to my problem, as to which (if any) solution is the best. Thanks in advance.
If a worker is restarted by the supervisor, its state is lost, which is not acceptable to me. I need to recreate the worker with the same state.
If you need the process state to outlive the process itself, you need to store it elsewhere, for example in an ETS table.
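A minimal sketch of that idea, assuming workers are keyed by a sensor id (the table name and the new_state/1 and save_state/2 helpers are illustrative):

%% Created once by a long-lived process, e.g. whoever owns the supervisor:
ets:new(sensor_state, [named_table, public, set]).

%% In the worker: recover any saved state on (re)start...
init([SensorId]) ->
    State = case ets:lookup(sensor_state, SensorId) of
                [{SensorId, Saved}] -> Saved;
                []                  -> new_state(SensorId)  % illustrative
            end,
    {ok, State}.

%% ...and write it back whenever it changes:
save_state(SensorId, State) ->
    true = ets:insert(sensor_state, {SensorId, State}).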
If the worker fails several times within a period of time, something serious has happened with the sensor and it requires the operator's attention. Thus I need to give up restarting the worker and send a message to the event handlers. But the default behaviour of a supervisor is to terminate after the restart limit is exhausted.
Correct. Generally speaking, the less logic you put into your supervisor, the better. Supervisors should just supervise child processes, and that's it. But you could still monitor your supervisor and be notified whenever it gives up (just an idea). This way you avoid reinventing the wheel and still use the supervisor to manage the children.
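A minimal sketch of that idea from the control gen_server's side (worker_sup and notify_operator/1 are illustrative names):

init([]) ->
    %% Monitor the worker supervisor by its registered name.
    Ref = erlang:monitor(process, worker_sup),
    {ok, #{sup_ref => Ref}}.

%% A supervisor that exhausts its restart intensity terminates with
%% reason shutdown, so treat that as "give up, alert the operator".
handle_info({'DOWN', Ref, process, _Pid, shutdown}, #{sup_ref := Ref} = State) ->
    notify_operator(sensor_failure),  % illustrative helper
    {noreply, State}.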
In Erlang I have a supervision tree of processes, containing one that accepts TCP/IP connections. For each incoming connection I spawn a new process. Should this process be added to the supervision tree or not?
Regards,
Steve
Yes, you should add these processes to the supervision hierarchy, as you want them to be correctly/gracefully shut down when your application is stopped. (Otherwise you end up leaking connections that will fail as the application infrastructure they depend on has been shut down.)
You could create a simple_one_for_one strategy supervisor, say yourapp_client_sup, that has a child spec of {Id, {yourapp_client_connection, start_link_with_socket, []}, temporary, Shutdown, worker, [yourapp_client_connection]} (note that temporary goes in the Restart position of the spec). The temporary restart type here is important because there's normally no useful restart strategy for a connection handler - you can't connect out to the client to restart the connection. temporary here will cause the supervisor to report the connection handler exit but otherwise ignore it.
The process that does gen_tcp:accept will then create the connection handler process by calling supervisor:start_child(yourapp_client_sup, [Socket, Options, ...]) rather than yourapp_client_connection:start_link_with_socket(Socket, Options, ...) directly. Ensure that the yourapp_client_connection:start_link_with_socket function starts the child via the gen_server or proc_lib functions (a requirement of the supervisor module), and that the function transfers control of the socket to the child with gen_tcp:controlling_process, otherwise the child won't be able to use the socket.
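A minimal sketch of that accept loop (apart from the names taken from the answer, everything here, including how the child learns the socket is ready, is an assumption):

accept_loop(ListenSocket) ->
    {ok, Socket} = gen_tcp:accept(ListenSocket),
    {ok, Pid} = supervisor:start_child(yourapp_client_sup, [Socket]),
    %% Hand the socket over; until this returns, the child must not
    %% touch the socket, so it waits for a go-ahead message.
    ok = gen_tcp:controlling_process(Socket, Pid),
    gen_server:cast(Pid, socket_transferred),
    accept_loop(ListenSocket).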
An alternate approach is to create a dummy yourapp_client_sup process that the yourapp_client_connection handler processes can link to at startup. The yourapp_client_sup process will just exist to propagate EXIT messages from its parent to the connection handler processes. It will need to trap exits and ignore all EXIT messages other than those from its parent. On the whole, I prefer the simple_one_for_one supervisor approach.
If you expect these processes to be many, it could be a good idea to add a supervisor under your main supervisor so as to separate responsibilities (and maybe use the simple_one_for_one strategy to make things simpler, perhaps even simpler than your current setup).
The thing is, if you need to control these processes, it's always nice to have a supervisor. If it doesn't matter whether they succeed or not, then you might not need one. But then again, I always argue that that is sloppy coding. ;-)
The only thing I wouldn't do is add them to your existing tree, unless it is very obvious where they come from and they're fairly few.