How to send and receive in a PROMELA process without timeout / deadlock?

I cannot wrap my head around this PROMELA problem: I have N processes ("pc") which may both send and receive messages over a channel ("to_pc"). Each process has its own channel over which it receives messages.
For a process to be able to receive, I have to keep it in a loop which checks the channel for incoming messages. As a second loop option the process sends a message to all other channels.
However, in simulation mode, this always causes a timeout, without anything being sent at all. My theory so far is that I created a deadlock where all processes want to send at once, causing them all to be unable to receive (since they are stuck in their "send" part of the code).
So far I have been unable to resolve this problem. I have tried to use a global variable as a semaphore to "forbid" sending, so that only one process may send at a time. However, this did not change the results. My only other idea is to use a timeout as the trigger for sending, but that does not seem right to me at all.
Any ideas? Thanks in advance!
#define N 4

mtype = {request, reply}

typedef message {
    mtype type;
    byte target;
    byte sender;
};

chan to_pc[N] = [0] of {message}

inline send() {
    byte j = 0;
    for (j : 0 .. N-1) {
        if
        :: j != address ->
            to_pc[j]!msg;
        :: else;
        fi
    }
}

active [N] proctype pc() {
    byte address = _pid;
    message msg;

    do
    :: to_pc[address]?msg ->    /* Here I am receiving a message. */
        if
        :: msg.type == request ->
            if
            :: msg.target == address ->
                d_step {
                    msg.target = msg.sender;
                    msg.sender = address;
                    msg.type = reply;
                }
                send();
            :: else
            fi
        :: msg.type == reply;
        :: else;
        fi
    ::                          /* Here I want to send a message! */
        d_step {
            msg.target = (address + 1) % N;
            msg.sender = address;
            msg.type = request;
        }
        send();
    od
}

I can write a full-fledged working version of your source code if you want, but perhaps it is sufficient to highlight the source of the issue you are dealing with and let you have fun solving it.
Branching Rules
any branch with an executable condition can be taken, non-deterministically
if there is no branch with an executable condition, the else branch is taken
if there is no branch with an executable condition and no else branch, then the process hangs until one of the conditions becomes true
Consider this
1: if
2: :: in?stuff -> ...
3: :: out!stuff -> ...
4: fi
where in and out are both synchronous channels (size is 0).
Then
if someone is sending on the other end of in then in?stuff is executable, otherwise it is not
if someone is receiving on the other end of out then out!stuff is executable, otherwise it is not
the process blocks at line 1: until at least one of the two conditions becomes executable.
Compare that code to this
1: if
2: :: true -> in?stuff; ...
3: :: true -> out!stuff; ...
4: fi
where in and out are again synchronous channels (size is 0).
Then
both branches have an executable condition (true)
the process immediately commits itself to either send or receive something, by non-deterministically choosing to execute a branch either at line 2: or 3:
if the process chooses 2: then it blocks if in?stuff is not executable, even when out!stuff would be executable
if the process chooses 3: then it blocks if out!stuff is not executable, even when in?stuff would be executable
Your code falls into the latter situation: all the instructions within d_step { } are executable, so your process commits to sending far too early.
To sum up: in order to fix your model, you should refactor your code so that it is always possible to jump from send to receive mode and vice versa. Hint: get rid of that inline code, and separate the decision to send from the actual sending.

Related

How do I access the value of the counter in a process?

In this program, I cannot for the life of me figure out how to access the value of the counter in a process.
-module(counter).
-export([start/0,loop/1,increment/1,value/1,stop/1]).

%% First the interface functions.
start() ->
    spawn(counter, loop, [0]).

increment(Counter) ->
    Counter ! increment.

value(Counter) ->
    Counter ! {self(),value},
    receive
        {Counter,Value} ->
            Value
    end.

stop(Counter) ->
    Counter ! stop.

%% The counter loop.
loop(Val) ->
    receive
        increment ->
            loop(Val + 1);
        {From,value} ->
            From ! {self(),Val},
            loop(Val);
        stop ->          % No recursive call here
            true;
        Other ->         % All other messages
            loop(Val)
    end.
I assume it's:
{From,value} ->
From ! {self(),Val},
loop(Val);
which returns the value of the counter, but every time I use PID ! {PID,value} or something similar, it just returns the thing after the !, e.g. {<0.57.0>, value}.
TL;DR
You shouldn't use the ! operator explicitly; it is considered an anti-pattern. You can run into problems with typos in atoms or with bad data, just like you did this time.
To ensure correct communication, one usually creates wrapper functions which handle the correct message format and the communication with the process, functions just like increment/1, value/1 and stop/1. In fact, if you used those, you would get the expected results; in your case, assuming that PID is your counter, just call counter:value(PID).
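For example, a shell session along these lines (a sketch; the printed pid will differ on your machine) shows the wrappers doing the sending and receiving for you:

1> Counter = counter:start().
<0.84.0>
2> counter:increment(Counter).
increment
3> counter:increment(Counter).
increment
4> counter:value(Counter).
2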
Let me explain
There are a few things you seem to be getting a little bit wrong.
First of all, ! sends a message to another process, and that's all it does. Since everything in Erlang is an expression (it needs to return something, to have a value), each call to ! returns the right-hand side of !. PID ! ok. will return ok, no matter what (there is a slight chance that it will fail, but let's not go there). You send your message and go on with your life, or execution.
Then, the process receiving your message might decide to send you a message back. In the case of {From, value} it will; in the case of increment it won't. If you are expecting to get a message back, you need to wait for it and retrieve it from your mailbox. A receive clause does both the waiting and the retrieving. So if you decide to use ! on your own, you should follow it with a receive with the correct pattern match. You can see that the value/1 function does just that.
The third thing is the correct use of process IDs. I guess you started your counter correctly and you have its Pid, and you can send messages to it with !. But if you expect it to send something back, it needs to know your process ID, your address if you will. So you should have called PID ! {MyPid, value}. How do you get MyPid? With the self() function. Again, just like in the value/1 function.
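If you do want to do it by hand from your own process, the sketch below is essentially the body of value/1 inlined, assuming Counter is bound to the counter process's pid:

Counter ! {self(), value},
receive
    {Counter, Value} -> Value
end.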
And the last thing many people get wrong at the beginning: the counter module is just a file with some functions; it is not a whole actor/process, and it is not an object. The fact that value/1 and stop/1 are implemented in it doesn't mean that they will run on the counter actor/process. They are functions like any other, and if you call them they will be evaluated in your actor/process, on your stack (the same goes for calling them from the shell, which is just another actor). You can spawn a new process and tell it to run the loop/1 function, but that's all it does. All increment/1, value/1 and stop/1 calls will be executed "on your side".
If this is somewhat confusing try to imagine some simpler function inside counter module, like
add(A, B) ->
    A + B.
You can execute it from the shell even without any counter process started. It will be evaluated in your process, on your stack; it will add two numbers and return the result.
This is important because when you call counter:value(Counter). it executes Counter ! {self(),value} "on your side", in your process, so self() will return the Pid of your process, not the Pid of the counter.
In theory you don't need to know this if you are just using those wrapper functions (the API, or interface if you will), but since you are learning Erlang I would guess you will soon have to write such a wrapper yourself. Understanding what happens where is crucial then. Just remember that there is no magic in modules, no secret binding or special execution. They are just plain old functions and they behave just like in any other language. Only spawn, receive and maybe ! are a little different.

Elixir: GenServer.call not initiating handle_call

I am implementing the Gossip Algorithm, in which multiple actors spread a gossip at the same time in parallel. The system stops when each of the actors has listened to the gossip 10 times.
Now, I have a scenario in which I am checking the listen count of the recipient actor before sending the gossip to it. If the listen count is already 10, the gossip is not sent to the recipient actor. I am doing this using a synchronous call to get the listen count.
def get_message(server, msg) do
  GenServer.call(server, {:get_message, msg})
end

def handle_call({:get_message, msg}, _from, state) do
  listen_count = hd(state)
  {:reply, listen_count, state}
end
The program runs well at the start, but after some time GenServer.call stops with a timeout error like the following. After some debugging, I realized that GenServer.call becomes dormant and can't initiate the corresponding handle_call. Is this behavior expected when using synchronous calls? Since all actors are independent, shouldn't the GenServer.call invocations run independently, without waiting for each other's responses?
02:28:05.634 [error] GenServer #PID<0.81.0> terminating
** (stop) exited in: GenServer.call(#PID<0.79.0>, {:get_message, []}, 5000)
** (EXIT) time out
(elixir) lib/gen_server.ex:774: GenServer.call/3
Edit: The following code can reproduce the error when running in iex shell.
defmodule RumourActor do
  use GenServer

  def start_link(opts) do
    {:ok, pid} = GenServer.start_link(__MODULE__, opts)
    {pid}
  end

  def set_message(server, msg, recipient) do
    GenServer.cast(server, {:set_message, msg, server, recipient})
  end

  def get_message(server, msg) do
    GenServer.call(server, :get_message)
  end

  def init(opts) do
    state = opts
    {:ok, state}
  end

  def handle_cast({:set_message, msg, server, recipient}, state) do
    :timer.sleep(5000)
    c = RumourActor.get_message(recipient, [])
    IO.inspect c
    {:noreply, state}
  end

  def handle_call(:get_message, _from, state) do
    count = tl(state)
    {:reply, count, state}
  end
end
Open an iex shell and load the above module. Start two processes using:
a = RumourActor.start_link(["", 3])
b = RumourActor.start_link(["", 5])
Produce the error by triggering the deadlock condition mentioned by Dogbert in the comments. Run the following without much time difference:
cb = RumourActor.set_message(elem(a,0), [], elem(b,0))
ca = RumourActor.set_message(elem(b,0), [], elem(a,0))
Wait for 5 seconds. Error will appear.
A gossip protocol is a way of dealing with asynchronous, unknown, unconfigured (random) networks that may be suffering intermittent outages and partitions and where no leader or default structure is present. (Note that this situation is somewhat unusual in the real world and out-of-band control is always imposed on systems in some way.)
With that in mind, let's change this to be an asynchronous system (using cast) so that we are following the spirit of the concept of chatty gossip style communication.
We need a digest of messages that counts how many times a given message has been received, a set of messages that are already over the magic number (so we don't re-send one if it arrives way late), and a list of processes enrolled in our system so we know to whom we are broadcasting:
(The following example is in Erlang because I just trip over Elixir syntax ever since I stopped using it...)
-module(rumor).

-record(s,
        {peers  = []         :: [pid()],
         digest = #{}        :: #{message_id() => non_neg_integer()},
         dead   = sets:new() :: sets:set(message_id())}).

-type message_id() :: zuuid:uuid().
Here I am using a UUID, but it could be whatever. An Erlang reference would be fine for a test case, but since gossip isn't useful within an Erlang cluster, and references are unsafe outside the originating system I'm just jumping to the assumption this is for a networked system.
We will need an interface function that allows us to tell a process to inject a new message into the system. We will also need an interface function that sends a message between two processes once it is already in the system. Then we will need an inner function that broadcasts messages to all the known (subscribed) peers. Ah, that means we need a greeting interface so that peer processes can notify each other of their presence.
We will also want a way to have a process tell itself to keep broadcasting over time. How long to set the interval on retransmission is not actually a simple decision -- it has everything to do with network topology, latency, variability, etc (you would actually probably occasionally ping peers and develop some heuristic based on the latency, drop peers that seem unresponsive, and so on -- but we're not going to get into that madness here). Here I'm just going to set it for 1 second because that is an easy to interpret interval for humans observing the system.
Note that everything below is asynchronous.
Interfaces...
insert(Pid, Message) ->
    gen_server:cast(Pid, {insert, Message}).

relay(Pid, ID, Message) ->
    gen_server:cast(Pid, {relay, ID, Message}).

greet(Pid) ->
    gen_server:cast(Pid, {greet, self()}).

make_introduction(Pid, PeerPid) ->
    gen_server:cast(Pid, {make_introduction, PeerPid}).
That last function is going to be our way as testers of the system to cause one of the processes to call greet/1 on some target Pid so they start to build a peer network. In the real world something slightly different usually goes on.
Inside our gen_server callback for receiving a cast we will get:
handle_cast({insert, Message}, State) ->
    NewState = do_insert(Message, State),
    {noreply, NewState};
handle_cast({relay, ID, Message}, State) ->
    NewState = do_relay(ID, Message, State),
    {noreply, NewState};
handle_cast({greet, Peer}, State) ->
    NewState = do_greet(Peer, State),
    {noreply, NewState};
handle_cast({make_introduction, Peer}, State) ->
    NewState = do_make_introduction(Peer, State),
    {noreply, NewState}.
Pretty simple stuff.
Above I mentioned that we would need a way for this thing to tell itself to resend after a delay. To do that we are going to send ourselves a naked message to "redo_relay" after a delay using erlang:send_after/3 so we are going to need a handle_info/2 to deal with it:
handle_info({redo_relay, ID, Message}, State) ->
    NewState = do_relay(ID, Message, State),
    {noreply, NewState}.
Implementation of the message bits is the fun part, but none of this is terribly tricky. Forgive the do_relay/3 below -- it could be more concise, but I'm writing this in a browser off the top of my head, so...
do_insert(Message, State = #s{peers = Peers, digest = Digest}) ->
    MessageID = zuuid:v1(),
    NewDigest = maps:put(MessageID, 1, Digest),
    ok = broadcast(MessageID, Message, Peers),
    ok = schedule_resend(MessageID, Message),
    State#s{digest = NewDigest}.

do_relay(ID,
         Message,
         State = #s{peers = Peers, digest = Digest, dead = Dead}) ->
    case maps:find(ID, Digest) of
        {ok, Count} when Count >= 10 ->
            NewDigest = maps:remove(ID, Digest),
            NewDead = sets:add_element(ID, Dead),
            ok = broadcast(ID, Message, Peers),
            State#s{digest = NewDigest, dead = NewDead};
        {ok, Count} ->
            NewDigest = maps:put(ID, Count + 1, Digest),
            ok = broadcast(ID, Message, Peers),
            ok = schedule_resend(ID, Message),
            State#s{digest = NewDigest};
        error ->
            case sets:is_element(ID, Dead) of
                true ->
                    State;
                false ->
                    NewDigest = maps:put(ID, 1, Digest),
                    ok = broadcast(ID, Message, Peers),
                    ok = schedule_resend(ID, Message),
                    State#s{digest = NewDigest}
            end
    end.

broadcast(ID, Message, Peers) ->
    Forward = fun(P) -> relay(P, ID, Message) end,
    lists:foreach(Forward, Peers).

schedule_resend(ID, Message) ->
    _ = erlang:send_after(1000, self(), {redo_relay, ID, Message}),
    ok.
And now we need the social bits...
do_greet(Peer, State = #s{peers = Peers}) ->
    case lists:member(Peer, Peers) of
        false -> State#s{peers = [Peer | Peers]};
        true  -> State
    end.

do_make_introduction(Peer, State = #s{peers = Peers}) ->
    ok = greet(Peer),
    do_greet(Peer, State).
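To actually run this you also need the server boilerplate the answer leaves out; a minimal assumption would be a start_link/0 and init/1 like the sketch below, after which two peers can be wired together from a test process (zuuid must be available for zuuid:v1/0 to work):

%% Assumed boilerplate, not shown in the answer above:
start_link() ->
    gen_server:start_link(?MODULE, [], []).

init([]) ->
    {ok, #s{peers = [], digest = #{}, dead = sets:new()}}.

%% Rough wiring-up sketch from a shell or driver process:
%% {ok, A} = rumor:start_link(),
%% {ok, B} = rumor:start_link(),
%% ok = rumor:make_introduction(A, B),      %% A and B now know each other
%% ok = rumor:insert(A, <<"some gossip">>). %% A assigns an ID and starts relaying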
So what did all of the horribly untypespecced stuff up there do?
It avoided any possibility of a deadlock. The reason deadlocks are so, well, deadly in peer systems is that anytime you have two identical processes (or actors, or whatever) communicating synchronously, you have created a textbook case of a potential deadlock.
Any time A has a synchronous message headed toward B and B has a synchronous message headed toward A at the same time, you have a deadlock. There is no way to create two identical processes that call each other synchronously without creating a potential deadlock. In massively concurrent systems, anything that might happen almost certainly will eventually, so you're going to run into this sooner or later.
Gossip is intended to be asynchronous for a reason: it is a sloppy, unreliable, inefficient way to deal with a sloppy, unreliable, inefficient network topology. Trying to make calls instead of casts not only defeats the purpose of gossip-style message relay, it also pushes you into impossible deadlock territory incident to changing the nature of the protocol from asynch to synch.
GenServer.call has a default timeout of 5000 milliseconds. So what is probably happening is that the message queue of the actor is filled with millions of messages, and by the time it gets to the call, the calling actor has already timed out.
You can handle timeout using a try...catch:
try do
  c = RumourActor.get_message(recipient, [])
catch
  :exit, reason ->
    # handle timeout
end
Now, the called actor will eventually get to the call message and respond, and that response will arrive as an unexpected message to the first process. You'll need to handle it using handle_info. So one way is to ignore the error in the catch block and send the rumour from handle_info.
Also, this will significantly degrade performance if there are many processes waiting to time out for 5 seconds before moving ahead. One could deliberately reduce the timeout and handle the reply in handle_info. This reduces to using cast and handling the reply from the other process.
Your blocking call needs to be broken into two non-blocking calls. So if A is making a blocking call to B, then instead of waiting for the reply, A can ask B to send its state to a given address (A's address) and move on.
Then A handles that message separately and replies if necessary.
A.fun1():
    body of A before blocking call
    result = blockingcall()
    do things based on result

needs to be divided into:

A.send():
    body of A before blocking call
    nonblockingcall(A.receive)   # A.receive is where B should send results
    do other things

A.receive(result):
    do things based on result
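The same split in gen_server terms, as an Erlang sketch (the function and message names are made up for illustration, and do_things_based_on/2 is a placeholder for the "do things based on result" step); since A and B are identical peer processes, both clauses live in the same handle_cast:

%% A's side: fire the request and move on (non-blocking).
request_state(B) ->
    gen_server:cast(B, {get_state, self()}).

%% B's side: reply with another cast instead of a call return.
handle_cast({get_state, From}, State) ->
    gen_server:cast(From, {state_reply, State}),
    {noreply, State};

%% A's side again: the reply arrives later as an ordinary message.
handle_cast({state_reply, Result}, State) ->
    {noreply, do_things_based_on(Result, State)}.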

Handling WebExceptions properly?

I have the following F# program that retrieves a webpage from the internet:
open System.Net

[<EntryPoint>]
let main argv =
    let mutable pageData : byte[] = [| |]
    let fullURI = "http://www.badaddress.xyz"
    let wc = new WebClient()
    try
        pageData <- wc.DownloadData(fullURI)
        ()
    with
    | :? System.Net.WebException as err -> printfn "Web error: \n%s" err.Message
    | exn -> printfn "Unknown exception:\n%s" exn.Message
    0 // return an integer exit code
This works fine if the URI is valid and the machine has an internet connection and the web server responds properly etc. In an ideal functional programming world the results of a function would not depend on external variables not passed as arguments (side effects).
What I would like to know is what is the appropriate F# design pattern to deal with operations which might require the function to deal with recoverable external errors. For example if the website is down one might want to wait 5 minutes and try again. Should parameters like how many times to retry and delays between retries be passed explicitly or is it OK to embed these variables in the function?
In F#, when you want to handle recoverable errors you almost universally want to use the option or the Choice<_,_> type. In practice the only difference between them is that Choice allows you to return some information about the error while option does not. In other words, option is best when it doesn't matter how or why something failed (only that it did fail); Choice<_,_> is used when having information about how or why something failed is important. For example, you might want to write the error information to a log; or perhaps you want to handle an error situation differently based on why something failed -- a great use case for this is providing accurate error messages to help users diagnose a problem.
With that in mind, here's how I'd refactor your code to handle failures in a clean, functional style:
open System
open System.Net

/// Retrieves the content at the given URI.
let retrievePage (client : WebClient) (uri : Uri) =
    // Preconditions
    checkNonNull "uri" uri
    if not <| uri.IsAbsoluteUri then
        invalidArg "uri" "The URI must be an absolute URI."

    try
        // If the data is retrieved successfully, return it.
        client.DownloadData uri
        |> Choice1Of2
    with
    | :? System.Net.WebException as webExn ->
        // Return the URI and WebException so they can be used to diagnose the problem.
        Choice2Of2 (uri, webExn)
    | _ ->
        // Reraise any other exceptions -- we don't want to handle them here.
        reraise ()

/// Retrieves the content at the given URI.
/// If a WebException is raised when retrieving the content, the request
/// will be retried up to a specified number of times.
let rec retrievePageRetry (retryWaitTime : TimeSpan) remainingRetries (client : WebClient) (uri : Uri) =
    // Preconditions
    checkNonNull "uri" uri
    if not <| uri.IsAbsoluteUri then
        invalidArg "uri" "The URI must be an absolute URI."
    elif remainingRetries = 0u then
        invalidArg "remainingRetries" "The number of retries must be greater than zero (0)."

    // Try to retrieve the page.
    match retrievePage client uri with
    | Choice1Of2 _ as result ->
        // Successfully retrieved the page. Return the result.
        result
    | Choice2Of2 _ as error ->
        // Decrement the number of retries.
        let retries = remainingRetries - 1u

        // If there are no retries left, return the error along with the URI
        // for diagnostic purposes; otherwise, wait a bit and try again.
        if retries = 0u then error
        else
            // NOTE : If this is modified to use 'async', you MUST
            // change this to use 'Async.Sleep' here instead!
            System.Threading.Thread.Sleep retryWaitTime

            // Try retrieving the page again.
            retrievePageRetry retryWaitTime retries client uri

[<EntryPoint>]
let main argv =
    /// WebClient used for retrieving content.
    use wc = new WebClient ()

    /// The amount of time to wait before re-attempting to fetch a page.
    let retryWaitTime = TimeSpan.FromSeconds 2.0

    /// The maximum number of times we'll try to fetch each page.
    let maxPageRetries = 3u

    /// The URI to fetch.
    let fullURI = Uri ("http://www.badaddress.xyz", UriKind.Absolute)

    // Fetch the page data.
    match retrievePageRetry retryWaitTime maxPageRetries wc fullURI with
    | Choice1Of2 pageData ->
        printfn "Retrieved %u bytes from: %O" (Array.length pageData) fullURI
        0   // Success
    | Choice2Of2 (uri, error) ->
        printfn "Unable to retrieve the content from: %O" uri
        printfn "HTTP Status: (%i) %O" (int error.Status) error.Status
        printfn "Message: %s" error.Message
        1   // Failure
Basically, I split your code out into two functions, plus the original main:
One function that attempts to retrieve the content from a specified URI.
One function containing the logic for retrying attempts; this 'wraps' the first function which performs the actual requests.
The original main function now only handles 'settings' (which you could easily pull from an app.config or web.config) and printing the final results. In other words, it's oblivious to the retrying logic -- you could modify the single line of code with the match statement and use the non-retrying request function instead if you wanted.
If you want to pull content from multiple URIs AND wait for a significant amount of time (e.g., 5 minutes) between retries, you should modify the retrying logic to use a priority queue or something instead of using Thread.Sleep or Async.Sleep.
Shameless plug: my ExtCore library contains some things to make your life significantly easier when building something like this, especially if you want to make it all asynchronous. Most importantly, it provides an asyncChoice workflow and collections functions designed to work with it.
As for your question about passing in parameters (like the retry timeout and number of retries) -- I don't think there's a hard-and-fast rule for deciding whether to pass them in or hard-code them within the function. In most cases, I prefer to pass them in, though if you have more than a few parameters to pass in, you're better off creating a record to hold them all and passing that instead. Another approach I've used is to make the parameters option values, where the defaults are pulled from a configuration file (though you'll want to pull them from the file once and assign them to some private field to avoid re-parsing the configuration file each time your function is called); this makes it easy to modify the default values you've used in your code, but also gives you the flexibility of overriding them when necessary.

Understanding the return value of spawn

I'm getting started with Erlang and could use a little help understanding the different results when passing the PID returned by spawn/3 to the process_info/1 function.
Given this simple code where the a/0 function is exported, which simply invokes b/0, which waits for a message:
-module(tester).
-export([a/0]).

a() ->
    b().

b() ->
    receive
        {Pid, test} ->
            Pid ! alrighty_then
    end.
...please help me understand the reason for the different output from the shell:
Example 1:
Here, current_function of Pid is shown as being tester:b/0:
Pid = spawn(tester, a, []).
process_info( Pid ).
> [{current_function,{tester,b,0}},
{initial_call,{tester,a,0}},
...
Example 2:
Here, current_function of process_info/1 is shown as being tester:a/0:
process_info( spawn(tester, a, []) ).
> [{current_function,{tester,a,0}},
{initial_call,{tester,a,0}},
...
Example 3:
Here, current_function of process_info/1 is shown as being tester:a/0, but the current_function of Pid is tester:b/0:
process_info( Pid = spawn(tester, a, []) ).
> [{current_function,{tester,a,0}},
{initial_call,{tester,a,0}},
...
process_info( Pid ).
> [{current_function,{tester,b,0}},
{initial_call,{tester,a,0}},
...
I assume there's some asynchronous code happening in the background when spawn/3 is invoked, but how does variable assignment and argument passing work (especially in the last example) such that Pid gets one value, and process_info/1 gets another?
Is there something special in Erlang that binds variable assignment in such cases, but no such binding is offered to argument passing?
EDIT:
If I use a function like this:
TestFunc = fun( P ) -> P ! {self(), test}, flush() end.
TestFunc( spawn(tester,a,[]) ).
...the message is returned properly from tester:b/0:
Shell got alrighty_then
ok
But if I use a function like this:
TestFunc2 = fun( P ) -> process_info( P ) end.
TestFunc2( spawn(tester,a,[]) ).
...the process_info/1 still shows tester:a/0:
[{current_function,{tester,a,0}},
{initial_call,{tester,a,0}},
...
Not sure what to make of all this. Perhaps I just need to accept it as being above my pay grade!
If you look at the docs for spawn it says it returns the newly created Pid and places the new process in the system scheduler queue. In other words, the process gets started but the caller keeps on executing.
Erlang is different from some other languages in that you don't have to explicitly yield control, but rather you rely on the process scheduler to determine when to execute which process. In the cases where you were making an assignment to Pid, the scheduler had ample time to switch over to the spawned process, which subsequently made the call to b/0.
It's really quite simple. The execution of the spawned process starts with a call to a() which at some point shortly afterwards will call b() and then just sits there and waits until it receives a specific message. In the examples where you manage to immediately call process_info on the pid, you catch it while the process is still executing a(). In the other cases, when some delay is involved, you catch it after it has called b(). What about this is confusing?
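You can watch the race directly by giving the scheduler a moment before asking (a shell sketch; the 100 ms pause is arbitrary):

P = spawn(tester, a, []).
process_info(P, current_function).   %% often still {current_function,{tester,a,0}}
timer:sleep(100).                    %% let the new process run into b/0's receive
process_info(P, current_function).   %% now {current_function,{tester,b,0}}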

Unable to use Erlang/ets in receive block

I am trying to use Erlang/ets to store/update various pieces of information by pattern matching received data. Here is the code:
start() ->
    S = ets:new(test, []),
    register(proc, spawn(fun() -> receive_data(S) end)).

receive_data(S) ->
    receive
        {see,A} -> ets:insert(S, {cycle,A});
        [[f,c],Fcd,Fca,_,_] -> ets:insert(S, {flag_c,Fcd,Fca});
        [[b],Bd,Ba,_,_] -> ets:insert(S, {ball,Bd,Ba})
    end,
    receive_data(S).
Here A is the cycle number, [f,c] is the center flag, [b] is the ball, and Fcd, Fca, Bd, Ba are the directions and angles of the flag and the ball relative to the player.
A sender process is sending this information. The pattern matching works correctly, which I checked by printing the values of A, Fcd, Fca, etc. I believe there is something wrong with my use of Erlang/ets.
When I run this code, I get an error like this:
Error in process <0.48.0> with exit value: {badarg,[{ets,insert,[16400,{cycle,7}]},{single,receive_data,1}]
Can anybody tell me what's wrong with this code and how to correct this problem?
The problem is that the owner of the ets table is the process running the start/0 function, and the default behavior for ets is to allow only the owner to write while other processes may read (aka protected). Two solutions:
Create the ets table as public
S = ets:new(test,[public]).
Set the owner to your newly created process:
Pid = spawn(fun() -> receive_data(S) end),
ets:give_away(S, Pid, gift),
register(proc, Pid)
Documentation for give_away/3
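Putting option 1 together, a corrected start/0 could look like this (a sketch; the rest of the module stays as in the question):

start() ->
    S = ets:new(test, [public]),   %% public: any process may read and write
    register(proc, spawn(fun() -> receive_data(S) end)).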
