I am trying to use Erlang/ets to store and update various pieces of information by pattern matching on received data. Here is the code:
start() ->
    S = ets:new(test, []),
    register(proc, spawn(fun() -> receive_data(S) end)).

receive_data(S) ->
    receive
        {see, A} -> ets:insert(S, {cycle, A});
        [[f,c], Fcd, Fca, _, _] -> ets:insert(S, {flag_c, Fcd, Fca});
        [[b], Bd, Ba, _, _] -> ets:insert(S, {ball, Bd, Ba})
    end,
    receive_data(S).
Here A is the cycle number, [f,c] is the center flag, [b] is the ball, and Fcd, Fca, Bd, Ba are the directions and angles of the flag and the ball relative to the player.
A sender process sends this information. The pattern matching itself works correctly, which I checked by printing the values of A, Fcd, Fca, etc., so I believe there is something wrong with my use of Erlang/ets.
When I run this code I get an error like this:
Error in process <0.48.0> with exit value: {badarg,[{ets,insert,[16400,{cycle,7}]},{single,receive_data,1}]
Can anybody tell me what's wrong with this code and how to correct this problem?
The problem is that the owner of the ets table is the process running the start/0 function, and the default access level for an ets table is protected: only the owner may write, while other processes may only read. Two solutions:
Create the ets table as public
S = ets:new(test,[public]).
Set the owner to your newly created process
Pid = spawn(fun() -> receive_data(S) end),
%% the table was not created as a named_table, so pass the table id S rather than the atom test
ets:give_away(S, Pid, gift),
register(proc, Pid).
Documentation for give_away/3
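Putting the second option together, a rough sketch of the corrected start/0 might look like this (give_away/3 has to be called by the current owner, and the new owner also receives an 'ETS-TRANSFER' message, which receive_data/1 will simply leave in its mailbox):
start() ->
    S = ets:new(test, []),
    Pid = spawn(fun() -> receive_data(S) end),
    %% ownership must be transferred by the current owner (this process);
    %% after this, the worker's ets:insert/2 calls are allowed
    ets:give_away(S, Pid, gift),
    register(proc, Pid).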
I am using observer in Elixir, and the following is a snapshot of an application [under the Applications tab]:
I need to exit these processes once their work is done, but somehow I am not able to figure out where some of the processes originate. Is there a way in Elixir/Erlang to figure out the module/function where a particular process was created?
Suggestions will be highly appreciated. Thanks.
First you need the process's PID or its registered name.
Process.info/2
will give you information about that process. You can find more documentation on how this function works in the Erlang function it delegates to:
erlang:process_info/2
There are also arity-1 variants: see the Process docs.
[erlang:process_info(Pid, initial_call) || Pid <- erlang:processes()].
But note that processes started via proc_lib (gen_server, supervisor, etc.) all report the same initial call ({proc_lib,init_p,N}), so you need to dig a little deeper.
The following is adapted from https://gist.github.com/rlipscombe/a8e87583d47799170f8b:
lists:map(
    fun(Pid) ->
        InitialCall =
            case erlang:process_info(Pid, initial_call) of
                %% proc_lib-spawned processes stash the real entry point
                %% in their process dictionary under '$initial_call'
                {initial_call, {proc_lib, init_p, A}} ->
                    case erlang:process_info(Pid, dictionary) of
                        {dictionary, D} ->
                            proplists:get_value('$initial_call', D, undefined);
                        _ ->
                            {proc_lib, init_p, A}
                    end;
                %% plain spawned funs show up as erlang:apply;
                %% fall back to the current function for a hint
                {initial_call, {erlang, apply, A}} ->
                    case erlang:process_info(Pid, current_function) of
                        {current_function, MFA} -> MFA;
                        _ -> {erlang, apply, A}
                    end;
                {initial_call, IC} ->
                    IC;
                Other ->
                    Other
            end,
        {Pid, InitialCall}
    end, erlang:processes()).
Using process_info/1 you get a list of process information in which initial_call and current_function tell you, respectively, the function the process was spawned with and the function it is currently executing.
process_info(Pid, initial_call) and process_info(Pid, current_function) can be used as shortcuts to fetch just those items.
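As a small shell-style illustration of the shortcut forms (the spawned fun here is just a placeholder that blocks in a receive):
%% Shell session sketch
Pid = spawn(fun() -> receive stop -> ok end end).
erlang:process_info(Pid).                    %% full property list
erlang:process_info(Pid, initial_call).      %% just the {initial_call, MFA} tuple
erlang:process_info(Pid, current_function).  %% just the {current_function, MFA} tuple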
I am a little confused by gproc and its pub/sub methods (https://github.com/uwiger/gproc#use-case-pubsub-patterns).
I can't understand how to receive messages from another process.
Example:
-module(ws_handler).
-export([init/2]).

init(Req, Opts) ->
    lager:info("WS: init ws handler"),
    gproc:reg({p, l, {?MODULE, WSNewMsgKey}}),
    {cowboy_websocket, Req, Opts}.

process_data(Data) ->
    lager:info("WS: start processing of json data"),
    gproc:send({p, l, WSNewMsgKey}, {self(), WSNewMsgKey, Data}).
There are two processes, and both of them are registered as subscribers. They should share incoming data with each other. I guess that I have to implement some interface/function, but the docs don't say which one exactly.
I've never used gproc for this, but it certainly seems like two things are missing: a definition of WSNewMsgKey (it's never in scope in your snippet above) and a receive clause somewhere to accept the messages sent:
-module(ws_handler).
-export([init/2]).

init(Req, Opts) ->
    %% the key used here must match the one used in gproc:send/2 below
    gproc:reg({p, l, ws_event}),
    {some_state_blah, Req, Opts}.

notify_peers(Event) ->
    gproc:send({p, l, ws_event}, {self(), ws_event, Event}).
...and elsewhere either
handle_info({From, ws_event, Event}, State) ->
    ok = handle_ws_event(From, Event).
or in your loop (if you wrote your process by hand):
loop(State) ->
    receive
        {From, ws_event, Event} ->
            ok = handle_ws_event(From, Event),
            loop(State);
        _Whatever ->
            %% other stuff...
            loop(State)
    end.
I'm not sure whether the message would arrive as a call, a cast, or a normal message (I'm assuming either an OTP generic cast or a normal message), but this seems to be what should happen. In all cases, though, you need a well-defined key to identify the category of message being sent, and here I've used the atom 'ws_event' to make this explicit.
As for the details of the snippet above: you appear to be broadcasting the same JSON message to a bunch of processes at once for some sort of processing? I'm not sure what this would do for you; I can't think of any case where broadcasting raw JSON would be beneficial (unless the goal is to push the JSON outside the system and you are broadcasting to a bunch of subscribed client socket handlers). So I'm confused about the context (what are you trying to achieve?).
This appears to be the way the docs intend this to be used -- but I'd have to actually play with it to be certain.
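To make the mechanics concrete, here is a minimal, self-contained sketch of the pattern outside of cowboy (the module and function names are made up for illustration, and it assumes the gproc application has been started):
-module(ws_pubsub_demo).
-export([start_subscriber/0, publish/1]).

-define(KEY, ws_event).

start_subscriber() ->
    spawn(fun() ->
                  %% each subscriber registers the same local (l) property
                  gproc:reg({p, l, ?KEY}),
                  subscriber_loop()
          end).

subscriber_loop() ->
    receive
        {From, ?KEY, Event} ->
            io:format("~p got ~p from ~p~n", [self(), Event, From]),
            subscriber_loop()
    end.

publish(Event) ->
    %% delivered as a plain message to every process holding the property
    gproc:send({p, l, ?KEY}, {self(), ?KEY, Event}).
Any process that has called start_subscriber/0 (or registered the same property itself) will receive each published event in its mailbox.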
E.g. suppose I have a list that looks roughly like this:
Handlers = [{foo, FooHandler}, {bar, BarHandler} | Etc()]
The best that I can come up with is this:
receive
    Message ->
        Handler = find_matching_handler(Message, Handlers),
        Handler(Message)
end
The problem with this is that if Message does not match anything in Handlers, it's too late: I've taken it out of the mailbox.
I guess if there's a way to put a message back into the mailbox (into the save queue) without reordering, then that would take care of it. Simply resending to self() would reorder. It would also not restart the receive, and even if it did, you might get stuck in a spin loop until a message of interest arrives. Is there a way to put a message into the mailbox's save queue?
Another near solution that I thought of was to use match guard, but IIUC, you can only use BIFs in guards, which seems to preclude using find_matching_handler (unless there is a BIF for that).
Another near solution: map matching:
receive
    M when Handlers#{M := Handler} -> Handler(M) % booyah?
end
Alas, I have not found an incantation that satisfies Erlang...
Match on the message:
loop() ->
    receive
        {foo, Data} ->
            handle_foo(Data),
            loop();
        {bar, Data} ->
            handle_bar(Data),
            loop()
    end.
This is the basic way of distinguishing between message forms.
You can also be less direct and match in a function head you pass all messages to:
loop() ->
    receive
        Message ->
            handle_message(Message),
            loop()
    end.

handle_message({foo, Data}) ->
    foo(Data),
    ok;
handle_message({bar, Data}) ->
    bar(Data),
    ok.
A combination of the first and second forms is roughly how gen_server-style callback modules are structured in OTP: the message handlers take a slightly more complex set of arguments and live in their own module (the part you write), while the actual receive happens in the generic gen_server module.
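If the set of handlers is dynamic, a rough sketch of the map idea from the question can be made to work with the is_map_key/2 guard (OTP 21 or later); Handlers here is assumed to be a map from tags to one-argument funs:
%% Handlers is assumed to look like #{foo => fun handle_foo/1, bar => fun handle_bar/1}
loop(Handlers) ->
    receive
        {Tag, Data} when is_map_key(Tag, Handlers) ->
            Handler = maps:get(Tag, Handlers),
            Handler(Data),
            loop(Handlers)
    end.
Messages whose tag is not a key in the map are simply left in the mailbox, so their order is preserved.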
You can use a selective receive pattern to periodically scan the mailbox for handler messages. Something like this:
check_msg_handlers(Handlers) ->
    [check_handler(X) || X <- Handlers],
    timer:sleep(500),
    check_msg_handlers(Handlers).

check_handler({M, F} = Handler) ->
    receive
        {Handler, Msg} ->
            M:F(Msg)
    after 0 ->
        no_msg
    end.
Note the receive X -> Y after N -> no_msg end construct: this is the selective receive. With a timeout of N = 0 it effectively becomes a scan of the mailbox to see whether an X message is present, i.e. it becomes a non-blocking receive. The order of the remaining messages is preserved after the scan, as required in your case.
The LYSE chapter More On Multiprocessing has a section on selective receives that is very good.
I have a function that sets a value in a process Pid, and a process can depend on another one, so if I set a value in a process I also have to set the value in the processes that depend on it. However, if there is a cycle between the processes, i.e. A depends on B and B depends on A, then I want to return an error message.
I try to do this by passing along a list of Pids whose values have already been changed, so that if I come across the same Pid twice (by checking whether it is a member of that list) the whole function stops. This is my code:
set_values(Pid, Value, PidSet, PidList) ->
    case lists:member(Pid, PidList) of
        false ->
            io:format("Setting Value~n"),
            lists:map(fun(Pid) ->
                          Pid ! {self(), set_value, Value, [Pid | PidList]}
                      end, PidSet);
        true ->
            io:format("circle_detected~n"),
            Pid ! {circle_detected}
    end.
When I run it, I get this error:
=ERROR REPORT==== 2-Nov-2014::17:47:45 ===
Error in process <0.888.0> with exit value: {badarg,[{lists,member,
[<0.888.0>,empty_list],[]},{process,set_viewer_values,4,[{file,"process.erl"},{line,56}]},
{process,looper,2,[{file,"process.erl"},{line,116}]}]}
From what I understand, I am giving bad arguments to the lists:member function.
What should I do?
Thanks
If you read your error message, you have {lists,member,[<0.888.0>,empty_list]} ..., where lists is the module, member is the function name, and [<0.888.0>,empty_list] are the (two) arguments presented as a list. So you are calling lists:member/2 with the PidList variable bound to the atom empty_list, and this gives you the error.
So you need to look into how your function is being called (preferred), or add a pattern match on PidList like
set_values(Pid, Value, PidSet, _PidList = empty_list) ->
...
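If you take the first option, a minimal sketch would be to start the recursion with an empty list instead of the atom empty_list, for example via a hypothetical set_values/3 wrapper:
%% hypothetical convenience wrapper; callers never see the accumulator
set_values(Pid, Value, PidSet) ->
    set_values(Pid, Value, PidSet, []).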
I'm getting started with Erlang, and could use a little help understanding the different results when passing the PID returned by spawn/3 to the process_info/1 function.
Given this simple code, where the exported function a/0 simply invokes b/0, which waits for a message:
-module(tester).
-export([a/0]).

a() ->
    b().

b() ->
    receive
        {Pid, test} ->
            Pid ! alrighty_then
    end.
...please help me understand the reason for the different output from the shell:
Example 1:
Here, current_function of Pid is shown as being tester:b/0:
Pid = spawn(tester, a, []).
process_info( Pid ).
> [{current_function,{tester,b,0}},
{initial_call,{tester,a,0}},
...
Example 2:
Here, current_function of process_info/1 is shown as being tester:a/0:
process_info( spawn(tester, a, []) ).
> [{current_function,{tester,a,0}},
{initial_call,{tester,a,0}},
...
Example 3:
Here, current_function of process_info/1 is shown as being tester:a/0, but the current_function of Pid is tester:b/0:
process_info( Pid = spawn(tester, a, []) ).
> [{current_function,{tester,a,0}},
{initial_call,{tester,a,0}},
...
process_info( Pid ).
> [{current_function,{tester,b,0}},
{initial_call,{tester,a,0}},
...
I assume there's some asynchronous code happening in the background when spawn/3 is invoked, but how does variable assignment and argument passing work (especially in the last example) such that Pid gets one value, and process_info/1 gets another?
Is there something special in Erlang that binds variable assignment in such cases, but no such binding is offered to argument passing?
EDIT:
If I use a function like this:
TestFunc = fun( P ) -> P ! {self(), test}, flush() end.
TestFunc( spawn(tester,a,[]) ).
...the message is returned properly from tester:b/0:
Shell got alrighty_then
ok
But if I use a function like this:
TestFunc2 = fun( P ) -> process_info( P ) end.
TestFunc2( spawn(tester,a,[]) ).
...the process_info/1 still shows tester:a/0:
[{current_function,{tester,a,0}},
{initial_call,{tester,a,0}},
...
Not sure what to make of all this. Perhaps I just need to accept it as being above my pay grade!
If you look at the docs for spawn it says it returns the newly created Pid and places the new process in the system scheduler queue. In other words, the process gets started but the caller keeps on executing.
Erlang is different from some other languages in that you don't have to explicitly yield control, but rather you rely on the process scheduler to determine when to execute which process. In the cases where you were making an assignment to Pid, the scheduler had ample time to switch over to the spawned process, which subsequently made the call to b/0.
It's really quite simple. The execution of the spawned process starts with a call to a(), which shortly afterwards calls b() and then just sits there and waits until it receives a specific message. In the examples where you manage to call process_info on the pid immediately, you catch the process while it is still executing a(). In the other cases, when some delay is involved, you catch it after it has called b(). What about this is confusing?
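One way to see this from the shell, as a rough sketch (the 10 ms sleep is an arbitrary figure, just long enough for the scheduler to run the new process up to the receive in b/0):
%% Shell session sketch
Pid = spawn(tester, a, []).
timer:sleep(10).
process_info(Pid, current_function).
%% now typically reports {current_function,{tester,b,0}}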