Can't start supervisor from yaws runmod - erlang

I have a yaws runmod defined in yaws.conf as:
runmod = sg_app
The module contains an exported function:
start() ->
    io:format("~p start~n", [sg_sup:start_link()]).
When I start yaws I see a call to the runmod:
=INFO REPORT==== 29-Oct-2015::16:46:51 === sync call sg_app:start
{ok,<0.61.0>} start
But the supervisor is nonexistent:
1> whereis(sg_sup).
undefined
If I call the runmod:start manually, the supervisor hangs around.
2> sg_app:start().
{ok,<0.73.0>} start
ok
3> whereis(sg_sup).
<0.73.0>
What have I done wrong?

Your runmod's start/0 function starts the supervisor with start_link/0, which links the supervisor to the calling process. That calling process doesn't stick around; when it dies, the link takes your supervisor down with it. The runmod feature isn't designed for starting a supervision tree.
You might instead consider using a yapp, which allows your code to run as a regular Erlang application in the same Erlang node as Yaws and be registered to have Yaws dispatch requests into it.

Another option is to launch your application from a separately spawned process that never exits:
start() ->
    spawn(fun () ->
                  application:start(my_app, permanent),
                  receive after infinity -> ok end
          end).
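If the code isn't already packaged as an OTP application, a minimal application callback module wrapping sg_sup might look like the following (a sketch; the sg application/module name is an assumption, and it would also need a matching .app file with {mod, {sg, []}}):

-module(sg).
-behaviour(application).
-export([start/2, stop/1]).

%% application:start(sg, permanent) ends up here; the top-level
%% supervisor it returns is owned by the application master,
%% so sg_sup stays registered after startup.
start(_Type, _Args) ->
    sg_sup:start_link().

stop(_State) ->
    ok.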

Related

Erlang: starting gen_server on another node fails after init

I am stuck in a bit of a fix trying to run a gen_server on another node. I have a common gen_server module which looks like this:
start(FileName) ->
    start_link(node(), FileName).

start_link(ThisNode, FileName) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [ThisNode, FileName], []).

init([ThisNode, FileName]) ->
    process_flag(trap_exit, true),
    {ok, Terms} = file:consult(FileName),
    {A1, B1, C1} = lists:nth(1, Terms),
    place_objects(A1, B1, C1).
Now I want to start multiple nodes that will run the same gen_server and somehow communicate with each other, and use another node to orchestrate them. (All these nodes are started on my local machine.)
So I start a new node in one terminal using erl -sname bar, where I intend to run the gen_server, and compile the gen_server module on that node. Then I start another node called 'sup', which I intend to use as a coordinator for all the other nodes. If I run my_gen_server:start("config_bar.txt"). on bar it returns successfully, but when I run rpc:call('bar@My-MacBook-Pro', my_gen_server, start, ["config_bar.txt"]). on sup, it returns successfully from the init method (I checked this by adding logs), and then immediately afterwards I get this error:
{ok,<9098.166.0>}
(sup@My-MacBook-Pro)2> =ERROR REPORT==== 21-Feb-2022::11:12:30.443051 ===
** Generic server my_gen_server terminating
** Last message in was {'EXIT',<9098.165.0>,
{#Ref<0.3564861827.2990800899.137513>,return,
{ok,<9098.166.0>}}}
** When Server state == {10,10,#Ref<9098.1313723616.3973185546.82660>,
                         'bar@My-MacBook-Pro'}
** Reason for termination ==
** {#Ref<0.3564861827.2990800899.137513>,return,{ok,<9098.166.0>}}
=CRASH REPORT==== 21-Feb-2022::11:12:30.443074 ===
crasher:
initial call: my_gen_server:init/1
pid: <9098.166.0>
registered_name: my_gen_server
exception exit: {#Ref<0.3564861827.2990800899.137513>,return,
{ok,<9098.166.0>}}
in function gen_server:decode_msg/9 (gen_server.erl, line 481)
ancestors: [<9098.165.0>]
message_queue_len: 0
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 1598
stack_size: 29
reductions: 3483
neighbours:
I can't seem to figure out what causes the error and if there's anything I need to add to my gen_server code to fix it. Would really appreciate some help on this one!
The gen_server in the remote node is linked to an ephemeral process created for the rpc call.
As this ephemeral process exits with a term that's different from normal (the actual result of the rpc call), the exit signal propagates to the gen_server, killing it.
You can use gen_server:start instead of gen_server:start_link or, if you want the gen_server to be part of the supervision tree, instruct its supervisor to spawn it:
rpc:call('bar@My-MacBook-Pro', my_gen_sup, start_child, ["config_bar.txt"]).
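A my_gen_sup along those lines could be a simple_one_for_one supervisor (a minimal sketch; the module is hypothetical and only exists to illustrate the rpc:call above):

-module(my_gen_sup).
-behaviour(supervisor).
-export([start_link/0, start_child/1, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Starts one my_gen_server per config file under this supervisor,
%% so the worker ends up linked to the supervisor rather than to
%% the ephemeral rpc process.
start_child(FileName) ->
    supervisor:start_child(?MODULE, [FileName]).

init([]) ->
    Child = {my_gen_server,
             {my_gen_server, start, []},   %% start/1 takes the config file name
             permanent, 5000, worker, [my_gen_server]},
    {ok, {{simple_one_for_one, 3, 10}, [Child]}}.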

Call Python function from Erlang Code

I am working with the MQTT broker emqtt (http://www.emqtt.io), which is written in Erlang. I have a use case where I need to call one of my Python modules from the emqtt broker code.
I have already looked into erlport (http://erlport.org/), which is used for port communication between Erlang and Python. It works well in the Erlang shell, but when I use the same code in the emqtt Erlang code it does not work. It throws the error shown below:
17:22:40.073 <0.717.0> [error] gen_server <0.717.0> terminated with reason: call to undefined function python:start()
17:22:40.073 <0.717.0> [error] CRASH REPORT Process <0.717.0> with 1 neighbours exited with reason: call to undefined function python:start() in gen_server2:terminate/3 line 1151
17:22:40.073 <0.631.0> [error] Supervisor emqttd_session_sup had child session started with {emqttd_session,start_link,undefined} at <0.717.0> exit with reason call to undefined function python:start() in context child_terminated
17:22:40.073 <0.677.0> [error] Supervisor 'esockd_connection_sup - <0.677.0>' had child connection started with emqttd_client:start_link([{packet,[{max_clientid_len,512},{max_packet_size,65536}]},{client,[{idle_timeout,30}]},{session,...},...]) at <0.716.0> exit with reason call to undefined function python:start() in context connection_crashed
We are calling the Python module from the emqtt plugin code; the change is shown below:
on_message_acked(ClientId, Message, _Env) ->
    io:format("client ~s acked: ~s~n", [ClientId, emqttd_message:format(Message)]),
    io:format("client ~s", [python:start()]),
    {ok, Message}.
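For reference, the erlport calls that work from a plain Erlang shell look roughly like this (a sketch; the Python module and function names are placeholders):

%% start a Python instance, call into it, then shut it down
{ok, P} = python:start(),
Result = python:call(P, my_module, my_function, []),
python:stop(P).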
Please help us.

erlang dbg module does not work when using relx

I modified relx.config in a cowboy example, adding runtime_tools:
{release, {echo_get_example, "1"}, [runtime_tools, echo_get]}.
{extended_start_script, true}.
When I call dbg:start(), then dbg:tracer(), and so on, nothing is output when the traced functions are called.
Why?
Since you can actually call the dbg module, you have most likely succeeded in including it in the release.
Are you connecting with a remote node? In that case, you have to tell dbg to trace on the node you're connected to:
debugger@localhost> dbg:tracer().
{ok,<0.35.0>}
debugger@localhost> dbg:n(target@host).
{ok,target@host}
debugger@localhost> dbg:p(all, call).
{ok,[{target@host,33},{debugger@localhost,34}]}
debugger@localhost> dbg:tp(...)
More details can be found in the documentation for dbg.
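For completeness, a minimal local trace session for a single module looks like this (a sketch; my_mod and my_fun are placeholders):

dbg:tracer(),                  %% default tracer, prints to the shell
dbg:p(all, call),              %% trace function calls in all processes
dbg:tp(my_mod, my_fun, []),    %% match calls to the exported my_mod:my_fun
%% ... run the code you want to trace ...
dbg:stop_clear().              %% stop tracing and remove all trace patterns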

Error running the cowboy application

I am having trouble starting the cowboy application; it gives me the following error. For some reason ranch is not starting, although I have added code to start ranch in my application.
I see a new git repo, cowlib, being pulled, but I am still having trouble.
1> application:start(satomi).
{error,
{bad_return,
{{satomi_app,start,[normal,[]]},
{'EXIT',
{noproc,
{gen_server,call,
[ranch_sup,
{start_child,
{{ranch_listener_sup,http},
{ranch_listener_sup,start_link,
[http,100,ranch_tcp,
[{port,9090}],
cowboy_protocol,
[{...}]]},
permanent,5000,supervisor,
[ranch_listener_sup]}},
infinity]}}}}}}
=INFO REPORT==== 12-Sep-2013::11:42:46 ===
application: satomi
exited: {bad_return,
{{satomi_app,start,[normal,[]]},
{'EXIT',
{noproc,
{gen_server,call,
[ranch_sup,
{start_child,
{{ranch_listener_sup,http},
{ranch_listener_sup,start_link,
[http,100,ranch_tcp,
[{port,9090}],
cowboy_protocol,
[{env,
[{dispatch,
[{'_',[],[{[],[],toppage_handler,[]}]}]}]}]]},
permanent,5000,supervisor,
[ranch_listener_sup]}},
infinity]}}}}}
type: temporary
Following is my app.src
>cat satomi.app.src
{application, satomi,
 [
  {description, ""},
  {vsn, "1"},
  {registered, []},
  {applications, [
                  kernel,
                  stdlib,
                  cowboy
                 ]},
  {mod, {satomi_app, []}},
  {env, []}
 ]}.
>cat satomi.erl
-module(satomi).
-export([start/0]).

start() ->
    ok = application:start(crypto),
    ok = application:start(sasl),
    ok = application:start(ranch),
    ok = application:start(cowlib),
    ok = application:start(cowboy),
    ok = application:start(satomi).
I am trying to figure out what's going wrong here. Can anyone point me to a working sample of cowboy that I can use as a template? I am using rebar to compile the code; I don't think that should make any difference.
I am using the following command to start the application:
erl -pa ./ebin ./deps/*/ebin
When calling application:start(satomi) from the shell, the applications it depends on are not started automatically; they need to be started manually.
Your satomi:start/0 function does exactly that, so the solution is to call satomi:start() from the shell.
Note that application:start(satomi) does not call satomi:start(); the latter is just a convenience function for starting the application and its dependencies when the application is not part of an Erlang release.
UPDATE: Since Erlang R16B02, there is also application:ensure_all_started. It starts all the dependencies automatically.
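For example (a short sketch using the satomi application from the question):

%% starts satomi and everything it depends on, in dependency order;
%% returns {ok, StartedApps} on success
{ok, Started} = application:ensure_all_started(satomi).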

How to always log/show the error reason when a supervisor child returns error from start_link?

When starting gen_servers from a supervisor (which itself is started by an application), I have the problem that when the gen_server's start_link doesn't return {ok, ...} but {error, Reason}, the only error message I see is:
=INFO REPORT==== 20-Jan-2011::13:14:43 ===
application: foo
exited: {shutdown,{foo_app,start,[normal,[]]}}
type: temporary
The Reason for the termination is not shown/logged.
Is there a way to see/log these error returns to the supervisor?
The childspec I'm using is e.g.:
{ok, {{one_for_one, 3, 10}, ...
{usb_mux_1,
{usb_mux, start_link,
[Some_Params]},
permanent,
10000,
worker,
[usb_mux]}, ...
Edit: Clarification
I know about error_logger and am already using it. The question is not how to get something logged, but how to get the supervisor to log the reason for the termination, e.g. log which child terminated with an error return and what it returned.
And just to get this out of the way as well: yes, I start Erlang with SASL on:
-boot start_sasl
Just discovered the answer myself:
The supervisor really is logging the error exit as a crash report.
The problem is that the shell doesn't show these crash reports. Just to confuse me, it shows info/warning and error reports, but no progress reports or crash reports from the supervisor.
If I look in the on-disk log, there is a detailed crash report:
10> rb:show(4).
CRASH REPORT <0.53.0> 2011-01-20 17:33:52
===============================================================================
Crashing process
initial_call {usb_mux,init,['Argument__1']}
pid <0.53.0>
registered_name []
error_info
{exit,{undef,[{usb_port,get_gw_hw_spec,[<0.59.0>]},
...
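For reference, rb has to be pointed at the report directory before it can browse the reports, roughly like this (a sketch, assuming the ./log directory configured via error_logger_mf_dir below):

rb:start([{report_dir, "./log"}]),   %% read the mf log files from ./log
rb:list(),                           %% list all stored reports
rb:show(4).                          %% show report number 4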
The reason the SASL events were not shown on the screen was an omission in the -config file, which looked like this:
[{sasl, [
    {sasl_error_logger, false},           %% no SASL error logger installed
    {error_logger_mf_dir, "./log"},
    {error_logger_mf_maxbytes, 10485760}, % 10 MB
    {error_logger_mf_maxfiles, 10}
]}].
Meaning there was a multi-file error logger installed (all the error_logger_mf_* entries) but no on-screen logger for SASL events.
Changing the entry like this fixed it:
{sasl_error_logger, tty}, %% SASL reports to tty
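With that change, the complete config file becomes (just the entries shown above, combined):

[{sasl, [
    {sasl_error_logger, tty},             %% SASL reports to tty
    {error_logger_mf_dir, "./log"},
    {error_logger_mf_maxbytes, 10485760}, % 10 MB
    {error_logger_mf_maxfiles, 10}
]}].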
From the sasl manpage:
sasl_report_tty_h:
Formats and writes supervisor reports, crash reports and progress reports to stdio.
