I can understand why a callback module must provide init and handle_call functions. init is for creating the initial state, and handle_call is the main purpose for creating a server process: to serve requests.
But I don't understand why handle_cast is required. Couldn't gen_server module provide a default implementation, like it does for many other callbacks? It could be a noop like
handle_cast(_, State) -> {noreply, State}.
It seems to me that the majority of callback modules provide noops like this one anyway.
handle_cast is similar to handle_call, but is used for asynchronous requests to the gen_server you're running (calls are synchronous). It handles requests for you, just without sending a reply back the way a call does.
Like handle_call, it can alter the state of your gen_server (or leave it as is, depending on your needs and implementation). It can also stop your server, hibernate it, etc., just like your call handlers can; see Learn You Some Erlang for examples and a broader explanation.
It "can be a noop" as you said in the question, but in some cases it's better to implement and handle async calls to your server.
and handle_call is the main purpose for creating a server process: to serve requests.
The client-server architecture can be applied to a much wider range of problems than merely a web server that serves up documents. One example is the frequency server discussed in several Erlang books. A client can request a frequency from the server for making a phone call, and the client must wait for the server to return a specific frequency before a call can be made. That is a classic gen_server:call() situation: the client must wait for the server to return a frequency before the client can make a phone call.
However, when the client is done using the frequency the client sends a message to the server telling the server to deallocate the frequency. In that case, the client does not need to wait for a response from the server because the client doesn't even care what the server's response is. The client just needs to send the deallocate message, then the client can continue executing other code. It's the server's responsibility to process the deallocate message when it has time, then move the frequency from a "busy" list to a "free" list, so that the frequency is available for other clients to use. As a result, a client uses gen_server:cast() to send a deallocate message to the server.
Now, what is the "main purpose" of the frequency server? To allocate or deallocate frequencies? If the server doesn't deallocate frequencies, then after a certain number of client requests, there won't be any more frequencies to hand out and clients will get a message that says "no frequencies available". Therefore, for the system to work correctly the act of deallocating frequencies is essential. In other words, handle_call() is not the "main purpose" of the server--handle_cast() is equally important--and both handlers are needed to keep the system running as efficiently as possible.
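To make the call/cast split concrete, here is a hedged sketch of the client-side API. The registered name frequency and the message formats are assumptions for illustration, not the exact code from those books:

%% Synchronous: the client blocks until the server hands back a frequency.
allocate() ->
    gen_server:call(frequency, allocate).

%% Asynchronous: fire-and-forget, the client does not care about the reply.
deallocate(Freq) ->
    gen_server:cast(frequency, {deallocate, Freq}).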
Couldn't gen_server module provide a default implementation, like it
does for many other callbacks?
Why not create a gen_server template that has a default implementation of handle_cast() yourself? Here's the Emacs default gen_server template:
-behaviour(gen_server).
%% API
-export([start_link/0]).
%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
terminate/2, code_change/3]).
-define(SERVER, ?MODULE).
-record(state, {}).
%%%===================================================================
%%% API
%%%===================================================================
%%--------------------------------------------------------------------
%% @doc
%% Starts the server
%%
%% @spec start_link() -> {ok, Pid} | ignore | {error, Error}
%% @end
%%--------------------------------------------------------------------
start_link() ->
gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).
%%%===================================================================
%%% gen_server callbacks
%%%===================================================================
%%--------------------------------------------------------------------
%% @private
%% @doc
%% Initializes the server
%%
%% @spec init(Args) -> {ok, State} |
%%                     {ok, State, Timeout} |
%%                     ignore |
%%                     {stop, Reason}
%% @end
%%--------------------------------------------------------------------
init([]) ->
{ok, #state{}}.
%%--------------------------------------------------------------------
%% @private
%% @doc
%% Handling call messages
%%
%% @spec handle_call(Request, From, State) ->
%%                                   {reply, Reply, State} |
%%                                   {reply, Reply, State, Timeout} |
%%                                   {noreply, State} |
%%                                   {noreply, State, Timeout} |
%%                                   {stop, Reason, Reply, State} |
%%                                   {stop, Reason, State}
%% @end
%%--------------------------------------------------------------------
handle_call(_Request, _From, State) ->
Reply = ok,
{reply, Reply, State}.
%%--------------------------------------------------------------------
%% @private
%% @doc
%% Handling cast messages
%%
%% @spec handle_cast(Msg, State) -> {noreply, State} |
%%                                  {noreply, State, Timeout} |
%%                                  {stop, Reason, State}
%% @end
%%--------------------------------------------------------------------
handle_cast(_Msg, State) ->
{noreply, State}.
%%--------------------------------------------------------------------
%% @private
%% @doc
%% Handling all non call/cast messages
%%
%% @spec handle_info(Info, State) -> {noreply, State} |
%%                                   {noreply, State, Timeout} |
%%                                   {stop, Reason, State}
%% @end
%%--------------------------------------------------------------------
handle_info(_Info, State) ->
{noreply, State}.
%%--------------------------------------------------------------------
%% @private
%% @doc
%% This function is called by a gen_server when it is about to
%% terminate. It should be the opposite of Module:init/1 and do any
%% necessary cleaning up. When it returns, the gen_server terminates
%% with Reason. The return value is ignored.
%%
%% @spec terminate(Reason, State) -> void()
%% @end
%%--------------------------------------------------------------------
terminate(_Reason, _State) ->
ok.
%%--------------------------------------------------------------------
%% @private
%% @doc
%% Convert process state when code is changed
%%
%% @spec code_change(OldVsn, State, Extra) -> {ok, NewState}
%% @end
%%--------------------------------------------------------------------
code_change(_OldVsn, State, _Extra) ->
{ok, State}.
%%%===================================================================
%%% Internal functions
%%%===================================================================
Here is an example trace where I call erlang:monitor/2 several times on the same Pid:
1> Loop = fun F() -> F() end.
#Fun<erl_eval.30.99386804>
2> Pid = spawn(Loop).
<0.71.0>
3> erlang:monitor(process, Pid).
#Ref<0.2485499597.1470627842.126937>
4> erlang:monitor(process, Pid).
#Ref<0.2485499597.1470627842.126942>
5> erlang:monitor(process, Pid).
#Ref<0.2485499597.1470627842.126947>
The references returned by commands 4 and 5 are different from the one returned by command 3, meaning that it is possible to create multiple monitor references between the current process and Pid. Is there a practical case where you would need or use multiple monitor references to the same process?
I would expect this to return the same reference (returning a new one would perhaps imply that the old one had failed/crashed), following the same logic that exists for link/1.
Imagine you use a third-party library which does this (basically what the OTP *:call/* functions do):
call(Pid, Request) ->
call(Pid, Request, ?DEFAULT_TIMEOUT).
call(Pid, Request, Timeout) ->
MRef = erlang:monitor(process, Pid),
Pid ! {call, self(), MRef, Request},
receive
{answer, MRef, Result} ->
erlang:demonitor(MRef, [flush]),
{ok, Result};
{'DOWN', MRef, _, _, Info} ->
{error, Info}
after Timeout ->
erlang:demonitor(MRef, [flush]),
{error, timeout}
end.
and then you use it in your own code, where you also monitor the same process Pid and then call call/2,3:
my_fun1(Service) ->
MRef = erlang:monitor(process, Service),
ok = check_if_service_runs(MRef),
my_fun2(Service),
mind_my_stuf(),
ok = check_if_service_runs(MRef),
erlang:demonitor(MRef, [flush]),
return_some_result().
check_if_service_runs(MRef) ->
receive
{'DOWN', MRef, _, _, Info} -> {down, Info}
after 0 -> ok
end.
my_fun2(S) -> my_fun3(S).
% and many layers of other stuff and modules
my_fun3(S) -> call(S, hello).
What a nasty surprise it would be if erlang:monitor/2,3 always returned the same reference and erlang:demonitor/1,2 removed your previous monitor. It would be a source of ugly and hard-to-solve bugs. Keep in mind that there are libraries and other processes around, that your code is part of a bigger system, and that Erlang was made by experienced people who thought this through. Maintainability is key here.
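To see concretely that each reference is independent, here is a small sketch (not from the library above) that you can paste into a shell; demonitoring the first reference leaves the second monitor intact:

Pid = spawn(fun() -> receive stop -> ok end end),
M1 = erlang:monitor(process, Pid),
M2 = erlang:monitor(process, Pid),
true = erlang:demonitor(M1, [flush]),   % removes only the monitor behind M1
Pid ! stop,
receive
    %% the M2 monitor still fires when Pid exits
    {'DOWN', M2, process, Pid, normal} -> still_monitored_via_m2
after 1000 ->
    no_down_message
end.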
I'm trying to use OTP style in a project and have one OTP interface question. Which solution is more popular/idiomatic?
What I have:
a web server with mochiweb
one process that spawns many (1000-2000) children.
The children hold state (netflow speed). The process proxies messages to the children and creates new children if needed.
In mochiweb I have one page with the speed of all actors; here is how it's built:
nf_collector ! {get_abonents_speed, self()},
receive
{abonents_speed_count, AbonentsCount} ->
ok
end,
%% write http header, chunked
%% and while AbonentsCount != 0, receive speed and write http
This is not OTP style, as far as I understand. Possible solutions:
A synchronous API function that gathers all the speeds and returns them as a list. But I want to write each speed to the client as soon as it arrives.
Pass a callback as one argument of the API function:
nf_collector:get_all_speeds(fun (Speed) -> Resp:write_chunk(templater(Speed)) end)
Return an iterator:
One of the results of get_all_speeds will be a function with a receive block. Every call of it will return {ok, Speed}; at the end it returns {'end'}.
get_all_speeds() ->
nf_collector ! {get_abonents_speed, self()},
receive
{abonents_speed_count, AbonentsCount} ->
ok
end,
{ok, fun() ->
create_receive_fun(AbonentsCount)
end}.
create_receive_fun(0)->
{'end'};
create_receive_fun(Count)->
receive
{abonent_speed, Speed} ->
Speed
end,
{ok, Speed, create_receive_fun(Count-1)}.
Spawn your 'children' from a supervisor:
-module(ch_sup).
-behaviour(supervisor).
-export([start_link/0, init/1, start_child/1]).
start_link() -> supervisor:start_link({local, ?MODULE}, ?MODULE, []).
init([]) -> {ok, {{simple_one_for_one, 10, 10}, [{ch, {ch, start_link, []}, transient, 1000, worker, [ch]}]}}.
start_child(Data) -> supervisor:start_child(?MODULE, [Data]).
Start them with ch_sup:start_child/1 (Data is whatever).
Implement your children as a gen_server:
-module(ch).
-behaviour(gen_server).
-record(?MODULE, {speed}).
...
get_speed(Pid, Timeout) ->
try
gen_server:call(Pid, get, Timeout)
catch
exit:{timeout, _} -> timeout;
exit:{noproc, _} -> died
end
.
...
handle_call(get, _From, St) -> {reply, {ok, St#?MODULE.speed}, St}.
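For completeness, here is one possible sketch of the parts elided above. This is an assumption, not the original code: it treats the Data passed to ch_sup:start_child/1 as the initial speed and fills in the remaining callbacks.

%% goes inside the ch module shown above
-export([start_link/1, get_speed/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link(Data) ->
    gen_server:start_link(?MODULE, Data, []).

init(Speed) ->
    {ok, #?MODULE{speed = Speed}}.

handle_cast(_Msg, St)            -> {noreply, St}.
handle_info(_Info, St)           -> {noreply, St}.
terminate(_Reason, _St)          -> ok.
code_change(_OldVsn, St, _Extra) -> {ok, St}.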
You can now use the supervisor to get the list of running children and query them, though you have to accept the possibility of a child dying between getting the list of children and calling them, and obviously a child could for some reason be alive but not respond, or respond with an error, etc.
The get_speed/2 function above returns either {ok, Speed} or died or timeout. It remains for you to filter appropriately according to your application's needs; that's easy with a list comprehension. Here are a few:
Just the speeds:
[Speed || {ok, Speed} <- [ch:get_speed(Pid, 1000) || Pid <-
[Pid || {undefined, Pid, worker, [ch]} <-
supervisor:which_children(ch_sup)
]
]].
Pid and speed tuples:
[{Pid, Speed} || {Pid, {ok, Speed}} <-
[{Pid, ch:get_speed(Pid, 1000)} || Pid <-
[Pid || {undefined, Pid, worker, [ch]} <-
supervisor:which_children(ch_sup)]
]
].
All results, including timeouts and 'died' results for children that died before you got to them:
[{Pid, Any} || {Pid, Any} <-
[{Pid, ch:get_speed(Pid, 1000)} || Pid <-
[Pid || {undefined, Pid, worker, [ch]} <-
supervisor:which_children(ch_sup)]
]
].
In most situations you almost certainly don't want anything other than the speeds, because what are you going to do about deaths and timeouts? You want those that die to be respawned by the supervisor, so the problem is more or less fixed by the time you know about it, and timeouts, as with any fault, are a separate problem, to be dealt with in whatever way you see fit... There's no need to mix the fault fixing logic with the data retrieval logic though.
Now, the problem with all these, which I think you were getting at in your post (though I'm not quite sure), is that the timeout of 1000 is for each call, and each call is made synchronously one after the other, so for 1000 children with a 1 second timeout it could take 1000 seconds to produce no results. Making the timeout 1 ms might be the answer, but to do it properly is a bit more complicated:
get_speeds() ->
ReceiverPid = self(),
Ref = make_ref(),
Pids = [Pid || {undefined, Pid, worker, [ch]} <-
supervisor:which_children(ch_sup)],
lists:foreach(
fun(Pid) -> spawn(
fun() -> ReceiverPid ! {Ref, ch:get_speed(Pid, 1000)} end
) end,
Pids),
receive_speeds(Ref, length(Pids), os_milliseconds(), 1000)
.
receive_speeds(_Ref, 0, _StartTime, _Timeout) ->
[];
receive_speeds(Ref, Remaining, StartTime, Timeout) ->
Time = os_milliseconds(),
TimeLeft = max(0, Timeout - (Time - StartTime)),  % never pass a negative timeout to 'after'
receive
{Ref, acc_timeout} ->
[];
{Ref, {ok, Speed}} ->
[Speed | receive_speeds(Ref, Remaining-1, StartTime, Timeout)];
{Ref, _} ->
receive_speeds(Ref, Remaining-1, StartTime, Timeout)
after TimeLeft ->
[]
end
.
os_milliseconds() ->
{OsMegaSecs, OsSecs, OsMicroSecs} = os:timestamp(),
OsMegaSecs*1000000000 + OsSecs*1000 + OsMicroSecs div 1000
.
Here each call is spawned in a different process and the replies collected, until the 'master timeout' or they have all been received.
Code has largely been cut-n-pasted from various works I have lying round, and edited manually and by search replace, to anonymise it and remove surplus, so it's probably mostly compilable quality, but I don't promise I didn't break anything.
I am trying to implement a distributed ring in Erlang, in which each node will store data.
My idea was to create a gen_server module, node_ring, which holds the state of a node in the ring:
-record(nodestate, {id, hostname, previd, nextid, prevnodename, nextnodename, data}).
Next, I started the nodes via:
werl -sname node -setcookie cook
werl -sname node1 -setcookie cook
werl -sname node2 -setcookie cook
On the first node, node@Machine, I start the first item in the ring:
(node@Machine)1> node_ring:start_link()
Functions:
start_link() ->
{Hostname, Id} = {'node@Machine', 0},
{ok, Pid} = gen_server:start_link({local, ?MODULE}, ?MODULE, [first, Hostname, Id], []).
and:
init([first, Hostname, Id]) ->
State = #nodestate{id = Id, hostname = Hostname, previd = Id, nextid = Id, prevnodename = Hostname, nextnodename = Hostname, data = dict:new()},
{ok, State}.
On the next node, (node1@Machine), I want to start the same node_ring module,
but I have no idea how to link it with the previous item in the ring, or how the next node will know which node and which node_ring process are already started.
Can somebody explain to me how to make a distributed ring in Erlang? I know that there are existing systems like Riak. I looked into the source code, but I am really new to distributed Erlang programming and I do not understand it.
Distributed systems programming is hard. It's hard to understand. It's hard to implement correctly.
The source code for riak_core can be very hard to understand at first. Here are some resources that helped me better understand riak_core:
Where to Start with Riak Core (specifically, Try Try Try by Ryan Zezeski)
Any of the riak_core projects in project-fifo. howl is probably the smallest project built on top of riak_core that is fairly easy to understand.
Understand that at the heart of riak_core is a consistent hashing algorithm that allows it to distribute data and work across the ring using partitions in a uniform manner: Why Riak Just Works (a minimal sketch of the consistent hashing idea follows this list)
A while ago I wrote erlang-ryng which is a generic consistent hash algorithm handler for rings. It may be helpful for understanding the purpose of consistent hashing in the context of a ring.
Understanding how riak_pipe works also helped me better grasp how work can be distributed in a uniform manner.
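Since consistent hashing is the central idea, here is a hedged, minimal sketch of it. The module and function names are mine, and this is nowhere near riak_core's real implementation (which also uses vnodes/partitions): each node is placed at a point on a 2^32 ring, a key is hashed onto the same ring, and the owner is the first node clockwise from the key.

-module(tiny_ring).
-export([owner/2]).

%% Owner node for Key, given a list of Nodes (illustration only).
owner(Key, Nodes) when Nodes =/= [] ->
    Ring    = lists:sort([{erlang:phash2(N, 1 bsl 32), N} || N <- Nodes]),
    KeyHash = erlang:phash2(Key, 1 bsl 32),
    case [Node || {Point, Node} <- Ring, Point >= KeyHash] of
        [Owner | _] -> Owner;                 % first node clockwise from the key
        []          -> element(2, hd(Ring))   % wrap around to the start of the ring
    end.

Because a node's position on the ring is derived from its name, adding or removing one node only moves the keys in its neighbourhood, which is what makes this scheme attractive for rings like riak_core's.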
In regards to "It's hard to implement correctly", you can read the Jepsen posts by aphyr for examples and cases where major databases and distributed storage systems have or previously had issues in their own implementations.
That said, here is a very simplistic implementation of a ring in Erlang, however it still has many holes that are addressed below:
-module(node_ring).
-behaviour(gen_server).
% Public API
-export([start_link/0]).
-export([erase/1]).
-export([find/1]).
-export([store/2]).
% Ring API
-export([join/1]).
-export([nodes/0]).
-export([read/1]).
-export([write/1]).
-export([write/2]).
% gen_server
-export([init/1]).
-export([handle_call/3]).
-export([handle_cast/2]).
-export([handle_info/2]).
-export([terminate/2]).
-export([code_change/3]).
-record(state, {
node = node() :: node(),
ring = ordsets:new() :: ordsets:ordset(node()),
data = dict:new() :: dict:dict(term(), term())
}).
% Public API
start_link() ->
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
erase(Key) ->
write({erase, Key}).
find(Key) ->
read({find, Key}).
store(Key, Value) ->
write({store, Key, Value}).
% Ring API
join(Node) ->
gen_server:call(?MODULE, {join, Node}).
nodes() ->
gen_server:call(?MODULE, nodes).
read(Request) ->
gen_server:call(?MODULE, {read, Request}).
write(Request) ->
gen_server:call(?MODULE, {write, Request}).
write(Node, Request) ->
gen_server:call(?MODULE, {write, Node, Request}).
% gen_server
init([]) ->
State = #state{},
{ok, State}.
handle_call({join, Node}, _From, State=#state{node=Node}) ->
{reply, ok, State};
handle_call({join, Peer}, From, State=#state{node=Node, ring=Ring}) ->
case net_adm:ping(Peer) of
pong ->
case ordsets:is_element(Peer, Ring) of
true ->
{reply, ok, State};
false ->
monitor_node(Peer, true),
NewRing = ordsets:add_element(Peer, Ring),
spawn(fun() ->
rpc:multicall(Ring, ?MODULE, join, [Peer])
end),
spawn(fun() ->
Reply = rpc:call(Peer, ?MODULE, join, [Node]),
gen_server:reply(From, Reply)
end),
{noreply, State#state{ring=NewRing}}
end;
pang ->
{reply, {error, connection_failed}, State}
end;
handle_call(nodes, _From, State=#state{node=Node, ring=Ring}) ->
{reply, ordsets:add_element(Node, Ring), State};
handle_call({read, Request}, From, State) ->
handle_read(Request, From, State);
handle_call({write, Request}, From, State=#state{node=Node, ring=Ring}) ->
spawn(fun() ->
rpc:multicall(Ring, ?MODULE, write, [Node, Request])
end),
handle_write(Request, From, State);
handle_call({write, Node, _Request}, _From, State=#state{node=Node}) ->
{reply, ok, State};
handle_call({write, _Peer, Request}, From, State) ->
handle_write(Request, From, State);
handle_call(_Request, _From, State) ->
{reply, ignore, State}.
handle_cast(_Request, State) ->
{noreply, State}.
handle_info({nodedown, Peer}, State=#state{ring=Ring}) ->
NewRing = ordsets:del_element(Peer, Ring),
{noreply, State#state{ring=NewRing}};
handle_info(_Info, State) ->
{noreply, State}.
terminate(_Reason, _State) ->
ok.
code_change(_OldVsn, State, _Extra) ->
{ok, State}.
%% @private
handle_read({find, Key}, _From, State=#state{data=Data}) ->
{reply, dict:find(Key, Data), State}.
%% @private
handle_write({erase, Key}, _From, State=#state{data=Data}) ->
{reply, ok, State#state{data=dict:erase(Key, Data)}};
handle_write({store, Key, Value}, _From, State=#state{data=Data}) ->
{reply, ok, State#state{data=dict:store(Key, Value, Data)}}.
If we start 3 different nodes with the -sname set to node0, node1, and node2:
erl -sname node0 -setcookie cook -run node_ring start_link
erl -sname node1 -setcookie cook -run node_ring start_link
erl -sname node2 -setcookie cook -run node_ring start_link
Here's how we join a node to the ring:
(node0@localhost)1> node_ring:nodes().
['node0@localhost']
(node0@localhost)2> node_ring:join('node1@localhost').
ok
(node0@localhost)3> node_ring:nodes().
['node0@localhost', 'node1@localhost']
If we run node_ring:nodes() on node1 we get:
(node1@localhost)1> node_ring:nodes().
['node0@localhost', 'node1@localhost']
Now let's go to node2 and join one of the other two nodes:
(node2@localhost)1> node_ring:nodes().
['node2@localhost']
(node2@localhost)2> node_ring:join('node0@localhost').
ok
(node2@localhost)3> node_ring:nodes().
['node0@localhost', 'node1@localhost',
 'node2@localhost']
Notice how both node0 and node1 were added to node2, even though we only specified node0 on the join. This means if we had hundreds of nodes, we would only need to join one of them in order to join the entire ring.
Now we can use store(Key, Value) on any of the nodes and it will be replicated to the other two:
(node0@localhost)4> node_ring:store(mykey, myvalue).
ok
Let's try reading mykey from the other two, first node1:
(node1@localhost)2> node_ring:find(mykey).
{ok,myvalue}
Then node2:
(node2@localhost)4> node_ring:find(mykey).
{ok,myvalue}
Let's use erase(Key) on node2 and try to read the key again on the other nodes:
(node2@localhost)5> node_ring:erase(mykey).
ok
On node0:
(node0@localhost)5> node_ring:find(mykey).
error
On node1:
(node1@localhost)3> node_ring:find(mykey).
error
Awesome! We have a distributed decentralized ring that can act as a simple key/value store! That was easy, not hard at all! As long as we don't have any nodes go down, packet loss, network partitions, nodes added to the ring, or some other form of chaos, we have a near-perfect solution here. In reality, however, you have to account for all of those things in order to have a system that won't drive you crazy in the long run.
Here's a brief example of something our little node_ring can't handle:
node1 goes down
node0 stores key a and value 1
node1 comes back up and joins the ring
node1 tries to find key a
First, let's kill node1. If we check the nodes on node0:
(node0@localhost)6> node_ring:nodes().
['node0@localhost','node2@localhost']
And on node2:
(node2@localhost)6> node_ring:nodes().
['node0@localhost','node2@localhost']
We see that node1 has been removed from the ring automatically. Let's store something on node0:
(node0@localhost)7> node_ring:store(a, 1).
ok
And read it from node2:
(node2@localhost)7> node_ring:find(a).
{ok,1}
Let's start up node1 again and join the ring:
(node1@localhost)1> node_ring:join('node0@localhost').
ok
(node1@localhost)2> node_ring:nodes().
['node0@localhost','node1@localhost',
 'node2@localhost']
(node1@localhost)3> node_ring:find(a).
error
Whoops, we have inconsistent data across the ring. Further study of other distributed systems and CAP theorem is necessary before we can decide how we want our little node_ring to behave in these different situations (like whether we want it to behave like an AP or a CP system).
I'm making this call:
add(Login, Pass, Role) ->
gen_server:call(?SERVER, {add, Login, Pass, Role}).
and I expect it to match with:
handle_call(State, {add, Login, Pass, Role}) ->
io:format("add ~n"),
Db = State#state.db,
case lists:keyfind(Login, 1, Db) of
false->
io:format("add - reg new ~n"),
{reply, registered, State#state{db=[{Login, erlang:md5(Pass), Role, ""}|Db]}};
{Key, Result}->
{reply, invalid_params, Db}
end.
but it always goes to:
handle_call(_Request, _From, State) ->
io:format("undef ~n"),
Reply = ok,
{reply, Reply, State}.
What's wrong?
The behaviour declaration seems valid, but handle_call is expected to have the following spec:
-spec(handle_call(Request :: term(), From :: {pid(), Tag :: term()},
State :: #state{}) ->
{reply, Reply :: term(), NewState :: #state{}} |
{reply, Reply :: term(), NewState :: #state{}, timeout() | hibernate} |
{noreply, NewState :: #state{}} |
{noreply, NewState :: #state{}, timeout() | hibernate} |
{stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
{stop, Reason :: term(), NewState :: #state{}}).
Take a look here:
http://erlang.org/doc/man/gen_server.html#Module:handle_call-3
Also, for the OTP default behaviours it is best to start from templates. For gen_server, see e.g. https://gist.github.com/kevsmith/1211350
Cheers!
In a module using the gen_server behaviour, the handle_call callback function should take three arguments. However, you have defined two different functions, handle_call/2 and handle_call/3. (In Erlang, functions that have the same name but take different numbers of arguments are considered different functions.)
Since the gen_server module only looks for handle_call/3 and ignores handle_call/2, your "undef" function is always called.
To fix this, change the function to take an (ignored) second argument, and put the request first and the state last:
handle_call({add, Login, Pass, Role}, _From, State) ->
and change the end. to end; — . separates different functions, while ; separates different clauses of the same function.
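Putting both fixes together (and using a wildcard for the "found" branch, since lists:keyfind/3 returns the whole stored tuple, and keeping the full state in the reply), the two clauses from the question would become one function along these lines:

handle_call({add, Login, Pass, Role}, _From, State) ->
    io:format("add ~n"),
    Db = State#state.db,
    case lists:keyfind(Login, 1, Db) of
        false ->
            io:format("add - reg new ~n"),
            {reply, registered,
             State#state{db = [{Login, erlang:md5(Pass), Role, ""} | Db]}};
        _Found ->
            %% keep the full #state{} record here, not just Db
            {reply, invalid_params, State}
    end;
handle_call(_Request, _From, State) ->
    io:format("undef ~n"),
    {reply, ok, State}.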
When executing the code below, gen_server raises an exception:
-module(drop).
-behaviour(gen_server).
-export([start_link/0]).
-export([init/1,
handle_call/3,
handle_cast/2,
handle_info/2,
terminate/2,
code_change/3]).
-define(SERVER, ?MODULE).
-record(state, {count}).
start_link() ->
gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).
init([]) ->
{ok, #state{count=0}}.
handle_call(_Request, _From, State) ->
Distance = _Request,
Reply = {ok, fall_velocity(Distance)},
NewState=#state{ count= State#state.count+1},
{reply, Reply, NewState}.
handle_cast(_Msg, State) ->
io:format("so far, calculated ~w velocities.~n", [State#state.count]),
{noreply, State}.
handle_info(_Info, State) ->
{noreply, State}.
terminate(_Reason, _State) ->
ok.
code_change(_OldVsn, State, _Extra) ->
{ok, State}.
fall_velocity(Distance) -> math:sqrt(2 * 9.8 * Distance).
OUTPUT:
1> gen_server:call(drop, 60).
** exception exit: {noproc,{gen_server,call,[drop,60]}}
in function gen_server:call/2 (gen_server.erl, line 180).
What's wrong in the above code? Do we need to compile the gen_server module after compiling the drop module?
noproc means 'no process': you have not started your server.
gen_server is part of the OTP architecture. Typically you write an application that starts a supervisor, which in turn starts your drop server.
Then you can call it using gen_server:call.
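For example, here is a minimal supervisor sketch that starts the drop server (drop_sup is a made-up name, not something from the question); during development you can also just call drop:start_link() from the shell:

-module(drop_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% one_for_one: restart the drop server on its own if it crashes
    {ok, {{one_for_one, 5, 10},
          [{drop, {drop, start_link, []}, permanent, 5000, worker, [drop]}]}}.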
If you just need a function to calculate a velocity, you don't actually need OTP; you can simply export a function from the module and call it. Something like:
-module(drop).
-export([fall_velocity/1]).
.....
and then invoke it
drop:fall_velocity(60).
By the way, the gen_server module is already compiled as part of the Erlang libraries.
The code you are testing works fine. As already said, you need to start the gen_server. Here is a way to do it and then make some requests:
1> c(drop).
{ok,drop}
2> S = spawn(drop,start_link,[]).
<0.40.0>
3> registered().
[rex,net_sup,inet_db,kernel_sup,global_name_server,
code_server,file_server_2,init,kernel_safe_sup,
application_controller,user,error_logger,user_drv,
standard_error,global_group,standard_error_sup,drop,auth,
erl_epmd,net_kernel,erl_prim_loader]
4> gen_server:call(drop,25).
{ok,22.135943621178658}
5> gen_server:call(drop,13).
{ok,15.962455951387932}
6> gen_server:call(drop,20).
{ok,19.79898987322333}
7> gen_server:cast(drop,what).
so far, calculated 3 velocities.
ok
Command 1 compiles the module. There is no need to compile gen_server; it is already done in the Erlang libraries.
Command 2 starts the gen_server. Generally, in a module like drop, you add an interface function that hides this call, something like start() -> spawn(?MODULE, start_link, []). so you can start the server with a simple call to drop:start().
Command 3 shows that the new process was registered with the name drop.
Commands 4, 5 and 6 ask for a velocity evaluation. As for start, the usual practice is to have an interface function such as velocity(N) -> gen_server:call(?MODULE, N) so you can simply call drop:velocity(25). It is also usual to "decorate" the message so you will be able to add more functions later.
Command 7 uses a cast message to print the number of velocities evaluated so far. The same remark about interface functions and message decoration applies. Here is a version more compliant with common usage:
-module(drop).
-behaviour(gen_server).
%% interfaces
-export([start_link/0,velocity/1,so_far/0]).
-export([init/1,
handle_call/3,
handle_cast/2,
handle_info/2,
terminate/2,
code_change/3]).
-define(SERVER, ?MODULE).
-record(state, {count}).
%% interfaces
start_link() ->
spawn(gen_server, start_link, [{local, ?SERVER}, ?MODULE, [], []]).
velocity(N) ->
gen_server:call(?MODULE,{get_velocity,N}).
so_far() ->
gen_server:cast(?MODULE,so_far).
%% call back
init([]) ->
{ok, #state{count=0}}.
handle_call({get_velocity,Distance}, _From, State) ->
Reply = {ok, fall_velocity(Distance)},
NewState=#state{ count= State#state.count+1},
{reply, Reply, NewState};
handle_call(Request, _From, State) ->
Reply = io:format("unknown request ~p~n",[Request]),
{reply, Reply, State}.
handle_cast(so_far, State) ->
io:format("so far, calculated ~w velocities.~n", [State#state.count]),
{noreply, State};
handle_cast(Msg, State) ->
io:format("unknown request ~p~n", [Msg]),
{noreply, State}.
handle_info(_Info, State) ->
{noreply, State}.
terminate(_Reason, _State) ->
ok.
code_change(_OldVsn, State, _Extra) ->
{ok, State}.
fall_velocity(Distance) -> math:sqrt(2 * 9.8 * Distance).
and now the commands look simpler:
12> drop:start_link().
<0.60.0>
13> drop:velocity(25).
{ok,22.135943621178658}
14> drop:velocity(20).
{ok,19.79898987322333}
15> drop:velocity(13).
{ok,15.962455951387932}
16> drop:so_far().
so far, calculated 3 velocities.
ok
You need to start your server before being able to interact with it via gen_server:call/2.
drop:start_link().