Erlang newbie: why do I have to restart to load new code - erlang

I am trying to write a first program in Erlang that passes messages between a client and a server. In theory the server exits when it receives no message from the client, but every time I edit the client code and run the server again, it executes the old code. I have to ^G>q>erl>[re-enter command] to get it to see the new code.
-module(srvEsOne).
%%
%% export functions
%%
-export([start/0]).
%%function definition
start() ->
    io:format("Server: Starting at pid: ~p \n", [self()]),
    case lists:member(serverEsOne, registered()) of
        true ->
            unregister(serverEsOne); % if the token is present, remove it
        false ->
            ok
    end,
    register(serverEsOne, self()),
    Pid = spawn(esOne, start, [self()]),
    loop(false, false, Pid).
%
loop(Prec, Nrec, Pd) ->
    io:format("Server: I am waiting to hear from: ~p \n", [Pd]),
    case Prec of
        true ->
            case Nrec of
                true ->
                    io:format("Server: I reply to ~p \n", [Pd]),
                    Pd ! {reply, self()},
                    io:format("Server: I quit \n", []),
                    ok;
                false ->
                    receiveLoop(Prec, Nrec, Pd)
            end;
        false ->
            receiveLoop(Prec, Nrec, Pd)
    end.

receiveLoop(Prec, Nrec, Pid) ->
    receive
        {onPid, Pid} ->
            io:format("Server: I received a message to my pid from ~p \n", [Pid]),
            loop(true, Nrec, Pid);
        {onName, Pid} ->
            io:format("Server: I received a message to name from ~p \n", [Pid]),
            loop(Prec, true, Pid)
    after 5000 ->
        io:format("Server: I received no messages, i quit\n", []),
        ok
    end.
And the client code reads
-module(esOne).
-export([start/1, func/1]).
start(Par) ->
    io:format("Client: I am ~p, i was spawned by the server: ~p \n", [self(), Par]),
    spawn(esOne, func, [self()]),
    io:format("Client: Now I will try to send a message to: ~p \n", [Par]),
    Par ! {self(), hotbelgo},
    serverEsOne ! {self(), hotbelgo},
    ok.

func(Parent) ->
    io:format("Child: I am ~p, i was spawned from ~p \n", [self(), Parent]).
The server is failing to receive a message from the client, but I can't sensibly begin to debug that until I can try changes to the code in a more straightforward manner.

When you make a modification to a module you need to compile it.
If you do it in an Erlang shell with the command c(module) or c(module, [options]), the newly compiled version of the module is automatically loaded in that shell, and it will be used by all the new processes you launch.
For the processes that are already alive and already running the old code, it is more complex to explain, and I don't think that is what you are asking about.
If you have several Erlang shells running, only the one where you compiled the module loads the new version. That means that in the other shells, if the module was previously loaded (basically, if you already used the module in those shells), the new version is ignored, even if the corresponding processes have terminated.
The same thing happens if you compile with the erlc command.
In all these cases, you need to explicitly load the module with the command l(module) in the shell.
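For example, a typical edit-and-reload cycle in the shell looks roughly like this (the module name esOne is taken from the question; the prompts and return values are illustrative):
1> c(esOne).     % compile and load the new version in this shell
{ok,esOne}
Then, in any other shell that is already running, load the compiled version explicitly:
1> l(esOne).     % purge the old code and load the new beam file
{module,esOne}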

Your server loop contains only local function calls. Running code is replaced only when the process makes a remote (or external, i.e. fully qualified) function call. So you have to export your loop function first:
-export([loop/3]).
and then change the loop/3 calls in receiveLoop/3 to
?MODULE:loop(...)
Alternatively, you can do the same thing with receiveLoop/3 instead. Best practice for serious applications is to do hot code swapping on demand, i.e. make the loop/3 call remote/external only after receiving some special message.
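As a sketch using the names from the question, receiveLoop/3 would then call loop/3 through ?MODULE: so every iteration picks up the most recently loaded version of the module:
-export([start/0, loop/3]).

receiveLoop(Prec, Nrec, Pid) ->
    receive
        {onPid, Pid} ->
            io:format("Server: I received a message to my pid from ~p \n", [Pid]),
            ?MODULE:loop(true, Nrec, Pid);   % external call: switches to newly loaded code
        {onName, Pid} ->
            io:format("Server: I received a message to name from ~p \n", [Pid]),
            ?MODULE:loop(Prec, true, Pid)
    after 5000 ->
        io:format("Server: I received no messages, i quit\n", []),
        ok
    end.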

Related

How to implement general Erlang server that can become any kind of specific server

Currently I'm experimenting with Erlang and would like to implement a kind of universal server (like this one) described by Joe Armstrong. The general idea is to create a general server that we can later tell to become a specific one, like this:
universal_server() ->
    receive
        {become, F} ->
            F()
    end.
And some specific server:
factorial_server() ->
    receive
        {From, N} ->
            From ! factorial(N),
            factorial_server()
    end.

factorial(0) -> 1;
factorial(N) -> N * factorial(N-1).
And finally send a "become factorial server" message to the universal server:
test() ->
    Pid = spawn(fun universal_server/0),
    Pid ! {become, fun factorial_server/0},
    Pid ! {self(), 50},
    receive
        X -> X
    end.
What I would like to do is to implement a universal server that can accept multiple subsequent "become" messages (so that I could send a "become factorial server" message and then a "become other kind of specific server" message...).
A naive approach is to require that every specific server implementation include the {become, F} pattern in a receive clause. Maybe I could have a behavior that defines the general shape of all specific servers (containing the {become, F} clause) and forwards other messages to callbacks.
My question is: how can I implement such a thing in a clean, smart way?
Here is mine:
-module(myserver).
-export([start/0, init/0]).

start() ->
    erlang:spawn_link(?MODULE, init, []).

init() ->
    State = undefined, % You may want to do something at startup
    loop(State).
    % if something went wrong, comment the line above and uncomment the line below:
    % exit(element(2, catch loop(State))).

loop(MyState) ->
    Msg =
        receive
            Any ->
                Any
        end,
    handle_message(Msg, MyState).

% We got a message for becoming something:
handle_message({become, Mod, InitArgument}, _) ->
    % Also our callback may want to do something at startup:
    CallbackState = Mod:init(InitArgument),
    loop({Mod, CallbackState});
% We got a message and we have a callback:
handle_message(Other, {Mod, CallbackState}) ->
    case Mod:handle_message(Other, CallbackState) of
        stop ->
            loop(undefined);
        NewCallbackState ->
            loop({Mod, NewCallbackState})
    end;
% We got a message and we don't have a callback:
handle_message(Other, undefined) ->
    io:format("Don't have any callback for handling ~p~n", [Other]),
    loop(undefined).
Also I wrote a simple counter program for my server:
-module(counter).
-export([init/1, handle_message/2]).

init(Start) ->
    Start.

handle_message(inc, Number) ->
    Number + 1;
handle_message(dec, Number) ->
    Number - 1;
handle_message({From, what_is}, Number) ->
    From ! Number;
handle_message(stop, _) ->
    stop;
handle_message(Other, Number) ->
    io:format("counter got unknown message ~p~n", [Other]),
    Number.
Let's test them:
Eshell V10.1 (abort with ^G)
1> S = myserver:start().
<0.79.0>
2> S ! hello.
Don't have any callback for handling hello
hello
3> S ! {become, counter, 10}.
{become,counter,10}
4> S ! hi.
counter got unknown message hi
hi
5> S ! inc.
inc
6> S ! dec.
dec
7> S ! dec.
dec
8> S ! {self(), what_is}.
{<0.77.0>,what_is}
9> flush().
Shell got 9
ok
10> S ! stop.
stop
11> S ! inc.
Don't have any callback for handling inc
inc
What should we do to complete it?
As you can see, it's not production-ready code. We should:
Have a way to set a timeout for initialization.
Have a way to set process spawn options.
Have a way to register the process locally, globally, or in custom process registries.
Call callback functions inside try ... catch (see the sketch after this list).
Make sure that a reply belongs to the current request, not to some other message that our process sent earlier (this is what the gen module provides as call).
Kill ourselves when our starter process dies, rather than becoming a zombie process, if the starter is linked to us.
Call a function for each callback at the end and let it clean up if it needs to (you can name it terminate).
Be compatible with the OTP sys module, so we should define its callback functions (see the sys callback functions). Then we can switch our process to debug mode, see its I/O, change its state when reloading code, etc.
Note that the proc_lib and gen modules can help you do most of this.
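As an example of one of these points, here is a sketch of the clause that dispatches to the callback, wrapped in try ... catch so a crashing callback does not take the server down (keeping the old state on error is just an illustrative choice):
% Sketch: this clause replaces the {Mod, CallbackState} clause of handle_message/2.
handle_message(Other, {Mod, CallbackState}) ->
    try Mod:handle_message(Other, CallbackState) of
        stop ->
            loop(undefined);
        NewCallbackState ->
            loop({Mod, NewCallbackState})
    catch
        Class:Reason ->
            io:format("callback ~p crashed: ~p:~p~n", [Mod, Class, Reason]),
            loop({Mod, CallbackState})   % keep the previous state; a real server might stop instead
    end;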

how to create a keep-alive process in Erlang

I'm currently reading Programming Erlang! At the end of Chapter 13, we want to create a keep-alive process.
The example looks like:
on_exit(Pid, Fun) ->
    spawn(fun() ->
                  Ref = monitor(process, Pid),
                  receive
                      {'DOWN', Ref, process, Pid, Info} ->
                          Fun(Info)
                  end
          end).

keep_alive(Name, Fun) ->
    register(Name, Pid = spawn(Fun)),
    on_exit(Pid, fun(_Why) -> keep_alive(Name, Fun) end).
But the process may exit between register/2 and on_exit/2, so the monitor would fail. I changed keep_alive/2 like this:
keep_alive(Name, Fun) ->
    {Pid, Ref} = spawn_monitor(Fun),
    register(Name, Pid),
    receive
        {'DOWN', Ref, process, Pid, _Info} ->
            keep_alive(Name, Fun)
    end.
There is still a bug: between spawn_monitor/1 and register/2 the process may exit. How can I make this run correctly? Thanks.
I'm not sure that you have a problem that needs solving. monitor/2 will succeed even if your process exits after register/2: it will send a 'DOWN' message whose Info component is noproc. Per the documentation:
A 'DOWN' message will be sent to the monitoring process if Item dies, if Item does not exist, or if the connection is lost to the node which Item resides on. (see http://www.erlang.org/doc/man/erlang.html#monitor-2).
So, in your original code:
register associates Name with the Pid
Pid dies
on_exit is called and monitor/2 is executed
monitor immediately sends a 'DOWN' message, which is received by the function spawned by on_exit
the Fun(Info) in the receive statement is executed, calling keep_alive/2
I think all is good.
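A quick way to see the noproc case for yourself (a sketch; the sleep is only there to make sure the process is already dead before monitor/2 runs):
Pid = spawn(fun() -> ok end),        % this process exits almost immediately
timer:sleep(100),                    % make sure it is really gone
Ref = monitor(process, Pid),         % monitoring a dead process still succeeds
receive
    {'DOWN', Ref, process, Pid, Info} -> Info   % Info is noproc here
after 1000 -> timeout
end.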
Why didn't you want to use the Erlang supervisor behaviour? It provides useful functionality for creating and restarting keep-alive processes.
See the example here: http://www.erlang.org/doc/design_principles/sup_princ.html
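As a rough sketch (my_sup and my_worker are made-up names for illustration), a one_for_one supervisor that keeps a worker alive could look like this:
-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, my_sup}, ?MODULE, []).

init([]) ->
    %% restart the worker whenever it terminates
    {ok, {{one_for_one, 5, 10},
          [{my_worker,                      % child id (illustrative)
            {my_worker, start_link, []},    % {Module, Function, Args} used to start the child
            permanent, 5000, worker, [my_worker]}]}}.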
In your second example, if the process exits before registration, register will fail with badarg. The easiest way to get around that is to surround register with try ... catch and handle the error in the catch clause.
You can even leave the catch clause empty, because even if registration fails, the 'DOWN' message will still be sent.
On the other hand, I wouldn't do that in a production system. If your worker fails that fast, it is very likely that the problem is in its initialisation code, and I would want to know that it failed to register and stopped the system. Otherwise, it could fail and be respawned in an endless loop.
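For completeness, a sketch of that try ... catch variant applied to the keep_alive/2 from the question:
keep_alive(Name, Fun) ->
    {Pid, Ref} = spawn_monitor(Fun),
    try
        register(Name, Pid)
    catch
        error:badarg ->
            %% the process died (or the name was taken) before we could
            %% register it; the 'DOWN' message below will arrive anyway
            ok
    end,
    receive
        {'DOWN', Ref, process, Pid, _Info} ->
            keep_alive(Name, Fun)
    end.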

why there is not output from spawned process show using escript

Could you please enlighten me as to why the following code does not write to stdout when run using escript?
main(_) ->
    spawn(fun() -> io:fwrite("blah") end).
Thanks!
fwrite still writes to stdout when running in an escript; the problem here is that your program terminates before the spawned process has had a chance to run!
The escript terminates as soon as the main function returns; depending on how the virtual machine schedules your spawned process, you may or may not see the fwrite executed.
A simple workaround for your example is to add some synchronization:
main(_) ->
    MainPid = self(),
    spawn(fun() -> io:fwrite("blah"), MainPid ! done end),
    receive
        done ->
            ok
    end.
This makes the main process wait with termination until the spawned process has sent a message.

Erlang spawn simple process from erl .. no such process or port

When running this code in the Erlang console
Pid = spawn(fun() -> "foo" end),link(Pid),receive X -> X end.
I receive the following error.
** exception error: no such process or port
     in function  link/1
        called as link(<0.71.0>)
This happens because the process you spawn finishes very quickly: it only "returns" a string (and the return value goes nowhere, since it is the top-level function in the call stack of the new process), so it's very likely to finish before the emulator gets to the link call.
You can make it more likely to succeed by making the process sleep before exiting:
2> Pid = spawn(fun() -> timer:sleep(1000), "foo" end),link(Pid).
true
Note however that the receive expression in your example most likely won't receive anything, since the spawned process doesn't send any message, and the link won't generate any message either since the process exits normally, and the calling process most likely isn't trapping exits. You may want to do something like:
Parent = self(),
spawn(fun() -> Parent ! "foo" end),
receive X -> X end.
That returns "foo".

Query an Erlang process for its state?

A common pattern in Erlang is the recursive loop that maintains state:
loop(State) ->
    receive
        Msg ->
            NewState = whatever(Msg),
            loop(NewState)
    end.
Is there any way to query the state of a running process with a bif or tracing or something? Since crash messages say "...when state was..." and show the crashed process's state, I thought this would be easy, but I was disappointed that I haven't been able to find a bif to do this.
So, then, I figured using the dbg module's tracing would do it. Unfortunately, I believe because these loops are tail call optimized, dbg will only capture the first call to the function.
Any solution?
If your process is using OTP, it is enough to do sys:get_status(Pid).
The error message you mention is displayed by SASL. SASL is an error reporting daemon in OTP.
The state you are referring to in your example code is just an argument of a tail-recursive function. There is no way to extract it using anything except the tracing BIFs. I guess this would not be a proper solution in production code, since tracing is intended to be used only for debugging purposes.
The proper, industry-tested solution is to make extensive use of OTP in your project. Then you can take full advantage of SASL error reporting, the rb module to collect these reports, sys to inspect the state of a running OTP-compatible process, proc_lib to make short-lived processes OTP-compliant, etc.
It turns out there's a better answer than all of these, if you're using OTP:
sys:get_state/1
Probably it didn't exist at the time.
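Usage is straightforward; for a gen_server registered under a (made-up) name my_server:
%% my_server is a hypothetical registered gen_server process
State = sys:get_state(my_server).   % returns the server's current state term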
It looks like you're making a problem out of nothing. erlang:process_info/1 gives enough information for debugging purposes. If you REALLY need the loop function's arguments, why not give them back to the caller in response to one of the special messages that you define yourself?
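For instance, using the loop from the question, a clause for a self-defined state query could look like this (the get_state message tag is made up):
loop(State) ->
    receive
        {get_state, From} ->
            From ! {state, State},   % hand the current state back to the caller
            loop(State);
        Msg ->
            NewState = whatever(Msg),
            loop(NewState)
    end.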
UPDATE:
Just to clarify terminology: the closest thing to the 'state of the process' at the language level is the process dictionary, whose usage is highly discouraged. It can be queried with erlang:process_info/1 or erlang:process_info/2.
What you actually need is to trace the process's local function calls along with their arguments:
-module(ping).
-export([start/0, send/1, loop/1]).

start() ->
    spawn(?MODULE, loop, [0]).

send(Pid) ->
    Pid ! {self(), ping},
    receive
        pong ->
            pong
    end.

loop(S) ->
    receive
        {Pid, ping} ->
            Pid ! pong,
            loop(S + 1)
    end.
Console:
Erlang (BEAM) emulator version 5.6.5 [source] [smp:2] [async-threads:0] [kernel-poll:false]
Eshell V5.6.5 (abort with ^G)
1> l(ping).
{module,ping}
2> erlang:trace(all, true, [call]).
23
3> erlang:trace_pattern({ping, '_', '_'}, true, [local]).
5
4> Pid = ping:start().
<0.36.0>
5> ping:send(Pid).
pong
6> flush().
Shell got {trace,<0.36.0>,call,{ping,loop,[0]}}
Shell got {trace,<0.36.0>,call,{ping,loop,[1]}}
ok
7>
{status,Pid,_,[_,_,_,_,[_,_,{data,[{_,State}]}]]} = sys:get_status(Pid).
That's what I use to get the state of a gen_server. (Tried to add it as a comment to the reply above, but couldn't get formatting right.)
As far as I know you can't get the arguments passed to a locally called function. I would love for someone to prove me wrong.
-module(loop).
-export([start/0, loop/1]).

start() ->
    spawn_link(fun () -> loop([]) end).

loop(State) ->
    receive
        Msg ->
            loop([Msg|State])
    end.
If you want to trace this module, do the following in the shell.
dbg:tracer().
dbg:p(new,[c]).
dbg:tpl(loop, []).
Using this tracing setting you get to see local calls (the 'l' in tpl means that local calls will be traced as well, not only global ones).
5> Pid = loop:start().
(<0.39.0>) call loop:'-start/0-fun-0-'/0
(<0.39.0>) call loop:loop/1
<0.39.0>
6> Pid ! foo.
(<0.39.0>) call loop:loop/1
foo
As you see, just the calls are included. No arguments in sight.
My recommendation is to base correctness in debugging and testing on the messages sent rather than state kept in processes. I.e. if you send the process a bunch of messages, assert that it does the right thing, not that it has a certain set of values.
But of course, you could also sprinkle some erlang:display(State) calls in your code temporarily. Poor man's debugging.
This is a "one-liner" that can be used in the shell.
sys:get_status(list_to_pid("<0.1012.0>")).
It helps you convert a pid string into a Pid.
