I'm still in the learning phase of Erlang, so I might be wrong, but this is how I understand a process's message queue.
A process could be in its main receive loop, handling certain types of messages, and later be put into a waiting loop to deal with a different kind of message. If the process receives messages intended for the first loop while it is in the second loop, it leaves them in the queue, ignoring them for the time being, and only processes the messages it can match against in the current loop. When it enters the first receive loop again, it starts from the beginning of the queue and again processes the messages it can match against.
Now my question is: if this is how Erlang works and I have understood it correctly, what happens when a malicious process sends a bunch of messages that the receiving process will never process? Will the queue eventually overflow, crashing the process, or how should I deal with this? I'll type out an example to illustrate what I mean.
Now if a malicious program got hold of the Pid and repeatedly ran Pid ! {maliciousdata, LotsOfData}, would those messages be filtered out, since they can never be matched, or would they just stack up in the queue?
startproc() -> firstloop(InitValues).

firstloop(Values) ->
    receive
        retrieveinformation ->
            WaitingList = askforinformation(),
            retrieveloop(WaitingList);
        dostuff ->
            NewValues = doingstuff(),
            firstloop(NewValues);
        sendmeyourdata ->
            sendingdata(Values),
            firstloop(Values)
    end.

retrieveloop([], Values) ->
    firstloop(Values);
retrieveloop(WaitingList, Values) ->
    receive
        {hereismyinformation, Id, MyInfo} ->
            NewValues = dosomethingwithinfo(Id, MyInfo),
            retrieveloop(lists:delete(Id, WaitingList), NewValues)
    end.
There is not a hard limit on message counts, and there is not a fixed amount of memory you are limited to, but you can certainly run out of memory if you have billions of messages (or a few super huge ones, maybe).
Long before you OOM because of a huge mailbox, you will notice either selective receives taking a long time (not that "selective receive" is a good pattern to follow much of the time...), or you will innocently peek into a process's message queue and realize you've opened Pandora's box in your terminal.
This is usually treated as a throttling and monitoring issue in the Erlang world. If you aren't able to keep up and your problem is parallelizable then you need more workers. If you are maxing out your hardware then you need more efficiency. If you are still maxing out your hardware, can't get any more, and you're still overwhelmed then you need to decide how to implement pushback or load shedding.
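One crude building block for such load shedding (a sketch, not from the original post; the threshold and the drop policy are application decisions) is for a process to inspect its own mailbox length and react when it passes a limit:

```erlang
%% Sketch of self-monitoring for overload. The threshold is arbitrary;
%% what to drop, or whom to push back on, is up to the application.
overloaded(Threshold) ->
    {message_queue_len, Len} =
        erlang:process_info(self(), message_queue_len),
    Len > Threshold.
```

A receive loop could call this periodically and, for example, discard low-priority messages or stop accepting new work until the queue drains.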
Unfortunately there is no "message queue overflow"; the queue will keep growing until the VM crashes with a memory allocation error.
The solution is to drop any invalid messages in the main loop, because you are not supposed to receive any {hereismyinformation, _, _} messages there, nor the ones consumed in askforinformation(), due to the blocking nature of your process.
startproc() -> firstloop(InitValues).

firstloop(Values) ->
    receive
        retrieveinformation ->
            WaitingList = askforinformation(),
            retrieveloop(WaitingList, Values); % I assume you meant that
        dostuff ->
            NewValues = doingstuff(),
            firstloop(NewValues);
        sendmeyourdata ->
            sendingdata(Values),
            firstloop(Values);
        _ ->
            % You can't get {hereismyinformation, _, _} here,
            % so we can drop any invalid message.
            firstloop(Values)
    end.

retrieveloop([], Values) ->
    firstloop(Values);
retrieveloop(WaitingList, Values) ->
    receive
        {hereismyinformation, Id, MyInfo} ->
            NewValues = dosomethingwithinfo(Id, MyInfo),
            retrieveloop(lists:delete(Id, WaitingList), NewValues)
    end.
It's not really a problem with unexpected messages, because those are easily avoidable; the real problem is when the process queue grows faster than it is processed. For that specific problem there is a nice jobs framework for production systems.
When I have code like this:
receive do
{:hello, msg} -> msg
end
And let's say I have N messages in my mailbox. Is the performance of finding this particular message O(1), O(N), or something in between?
receive performs a linear scan of the mailbox and returns the first message that matches. There is one exception (since R14A):
OTP-8623 == compiler erts hipe stdlib ==
Receive statements that can only read out a newly created
reference are now specially optimized so that it will execute
in constant time regardless of the number of messages in the
receive queue for the process. That optimization will benefit
calls to gen_server:call(). (See gen:do_call/4 for an example
of a receive statement that will be optimized.)
So in your case it is an O(N) operation.
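As a sketch (the names here are mine, not from the question), this is the shape of code the optimization recognizes: a reference is created, sent along with the request, and every receive clause matches on it, so the runtime only scans messages that arrived after make_ref/0 was called:

```erlang
%% Request/reply tagged with a fresh reference. Because Ref is freshly
%% created and every clause matches on it, the receive runs in constant
%% time regardless of how many older messages sit in the mailbox.
call(Server, Request) ->
    Ref = make_ref(),
    Server ! {self(), Ref, Request},
    receive
        {Ref, Reply} -> Reply
    after 1000 ->
        timeout
    end.

%% A toy echo server to exercise it.
echo_server() ->
    spawn(fun Loop() ->
              receive
                  {From, Ref, Req} ->
                      From ! {Ref, {echo, Req}},
                      Loop()
              end
          end).
```

This is essentially what gen_server:call/2 does under the hood (see gen:do_call/4 in OTP).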
Messaging in Erlang, and hence in Elixir, is "first in, first out". Messages are examined one by one, and the first that meets any clause in receive is handled. In the worst-case scenario you can choke up your mailbox.
The performance will grow linearly and in direct proportion to the number of elements in the mailbox, thus being O(N).
I want to know the state of an Erlang process while it is running a receive with after:
receive
    X ->
        ok
after 1000 ->
    ok
end
1. Is the process state running or waiting?
2. Does this process use CPU scheduler time?
3. If I have 120,000 Erlang processes like this, and every process runs code like this:
receive
    X ->
        ok
after 1000 ->
    ok
end
So, will this code be a bottleneck?
The process is just moving along with whatever comes after the receive expression.
For example, let's say we inline a request/response:
ask_foo(SomePID) ->
    Ref = make_ref(),
    SomePID ! {self(), Ref, why},
    receive
        {Ref, Answer} ->
            io:format("The answer: ~tp~n", [Answer])
    after 1000 ->
        io:format("~p is too slow. Moving on...~n", [SomePID])
    end,
    io:format("I'll print this in any case, and then exit.").
receive blocks until it either receives a message that matches one of its receive clauses, or the timeout occurs -- whichever happens first. Then it continues on doing whatever else is in its code. Very often there is a single receive loop, but it is not uncommon to use a series of inline receive clauses for things that should block, like waiting on a fixed sequence of inputs from a user or something similar.
The "process's state" is not changing in terms of its state data at all. It is blocking -- which means it is suspended until a message or a timeout occurrs. But, unlike polling systems, this does not carry an overhead penalty with it because the VM is managing the scheduling (the process doesn't have to wake itself up, it can safely block on receive).
You asked if this will be a bottleneck: No. No other processes are blocking, only this one. All other processes are executing on their own schedule, and they have nothing to do with this one. So when blocking on a receive you are only holding up the rest of the things this particular process is supposed to do. Whether or not that is a bottleneck becomes, therefore, an architectural question.
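A quick way to convince yourself (a sketch, not from the answer above): spawn a large number of processes that all block in receive ... after. While blocked they are descheduled and cost only memory, not CPU:

```erlang
%% Each spawned process blocks for up to 1000 ms without busy-waiting;
%% none of them consume scheduler time while suspended.
spawn_sleepers(N) ->
    [spawn(fun() ->
               receive
                   stop -> ok
               after 1000 ->
                   ok
               end
           end)
     || _ <- lists:seq(1, N)].
```

Spawning 120,000 of these is routine for the BEAM; the schedulers simply have nothing to do for them until a message or the timeout arrives.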
I'm wondering if it's possible to send variables from a dying process to its calling process. I have a process A that spawned another process B through spawn_link. B is about to die by calling exit(killed). I can catch this in A through {'EXIT', From, killed}, but I'd like to pass some variables from B to A before it dies. I can do this by sending a message from B to A right before it dies, but I'm wondering if this is a 'bad' thing to do, because technically I'd be sending two messages from B to A. Right now, what I have looks like this:
B sends a message with values to A
A receives values and re-enters receive loop
B calls exit(killed)
A receives EXIT message and spawns another linked process
The idea is that B should always exist and when it gets killed, it should be 'resurrected' immediately. What seems like a better alternative in my opinion is to have something like exit(killed, [Variables]) and to catch it with {'EXIT', From, killed, [Variables]}. Is this possible? And if so, are there any reasons for not doing it? Having A store values for B when B hasn't even died yet seems like a bad move. I'd have to start implementing atomic actions to prevent problems with two linked processes dying at the same time. It also forces me to keep the variables in my receive loop.
What I mean is, if I could send values directly with the EXIT call, my loop would look like this:
loop() ->
    receive
        {'EXIT', From, killed, Variables} ->
            % spawn new linked process with Variables
            ok
    end.
But if I first need to receive a message, and then get into the loop again to receive the exit message, I get:
loop(Vars) ->
    receive
        {values, Variables} ->
            loop(Variables);
        {'EXIT', From, killed} ->
            % spawn new linked process with Vars
            ok
    end.
This means I keep the list of variables long after I don't need them anymore and I need to enter my loop twice for what could be considered one action.
To answer your question directly: the exit reason can be any term, which means it can also be a tuple like exit({killed, Values}). So instead of receiving {'EXIT', From, killed, Values}, you would receive {'EXIT', From, {killed, Values}}.
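A minimal sketch of that approach (the values here are made up): A traps exits, so B's exit reason arrives at A as an ordinary message:

```erlang
%% The parent traps exits; the child's exit reason carries the values out.
collect_from_child() ->
    process_flag(trap_exit, true),
    Child = spawn_link(fun() -> exit({killed, [some, values]}) end),
    receive
        {'EXIT', Child, {killed, Values}} ->
            Values
    end.
```

Note that trapping exits (process_flag(trap_exit, true)) is essential here; without it, the non-normal exit of the linked child would kill A instead of being delivered as a message.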
But!
The way you are doing it now is not wrong. It's not particularly ugly, either. Sending a message (especially an asynchronous one) isn't some major operation to be minimized as much as possible, and neither is spawning/killing processes. If your way works for you, fine.
But! (again!)
Why are you doing this in the first place? Consider what it is about this state that you need to shuttle between two processes, one of which you are terminating at that very moment. Should this value be a permanent entity held by the spawning process? Should it die with the worker? Should it be a quantity maintained by a third process and asked for as part of the worker's startup (a more general phrasing of what Łukasz Ptaszyński was getting at)?
I don't know the answers to those questions, because I don't know your program, but they are the things I would think about if I was finding it necessary to do this sort of work. In particular, if there is some base value that process A must seed process B with for it to work, and the next version of the base value is dependent on something process B does, then process B should be returning it as a part of its processing, not as a part of its shutdown.
This seems like a minor semantic difference, but it's important to think about. You may find that you shouldn't be terminating B at all, or that you really need A to manage a directory for several concurrent B's and they should seed themselves as they move along, or whatever. You might even find that this means A should be spawning B as a synchronous, monitored operation, not an asynchronous linked one, and the whole herd of processes should be spawned as a complex of multiple managed A-B pairs! I don't know the answers in your case, but these are the things that come to mind on reading what you are doing.
I think you can try this method:
main() ->
    % Without trapping exits, the exit signal from the child
    % would kill this process instead of arriving as a message.
    process_flag(trap_exit, true),
    ParentPid = self(),
    From = spawn_link(?MODULE, child, [ParentPid]),
    receive
        {'EXIT', From, Reason} ->
            Reason
    end.

child(ParentPid) ->
    Value = 2 * 2,
    exit(ParentPid, {killed, Value}).
Please read this link about erlang:exit/2
I use ODBC to query a table from a database:
getTable(Ref, SearchKey) ->
    Q = "SELECT * FROM TestDescription WHERE NProduct = " ++ SearchKey,
    case odbc:sql_query(Ref, Q) of
        {_, _, Data} ->
            %io:format("GetTable Query ok ~n"),
            {ok, Data};
        {error, _Reason} ->
            %io:format("Gettable Query error ~p ~n", [_Reason]),
            {stop, odbc_query_failed};
        _ ->
            io:format("Error Logic in getTable function ~n")
    end.
This function will return a tuple which includes all the db data. Sending this to another process:
OtherProcessPid ! {ok, Data}
It works fine with a small number of rows, but what about a very large number, say more than a million? Can Erlang still handle it?
The question isn't "Can Erlang handle very large messages?" (it can), it is rather "are you ready to deal with the consequences of very large messages?"
All messages are copied (with the exception of some larger binaries): this means you have to be prepared for slowdowns if you're passing a lot of large messages, memory use that is far less stable than with small messages, and so on.
In the case of distributed Erlang, a very large message that needs to be 'uploaded' to a remote node might block the heartbeats that make it possible to know whether a VM is alive, if the delays are too short or the messages too large for how often you send them.
In any case, the solution is to measure what you can or can't deal with. There is no hardcoded limit that I know of regarding message size. Know that smaller messages are usually preferable as a general rule of thumb, though.
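If measurement shows that one giant {ok, Data} message is too much, a common alternative (a sketch; the batch size and message protocol here are invented) is to stream the rows in bounded batches:

```erlang
%% Send Rows to Pid in batches of at most BatchSize, then a final marker,
%% so no single message copy is ever larger than one batch.
send_in_batches(Pid, Rows, BatchSize) when length(Rows) > BatchSize ->
    {Batch, Rest} = lists:split(BatchSize, Rows),
    Pid ! {batch, Batch},
    send_in_batches(Pid, Rest, BatchSize);
send_in_batches(Pid, Rows, _BatchSize) ->
    Pid ! {batch, Rows},
    Pid ! done.

%% Receiver side: accumulate batches until the marker arrives.
collect(Acc) ->
    receive
        {batch, Batch} -> collect(Acc ++ Batch);
        done -> Acc
    end.
```

The same idea applies at the query level: the odbc application can also page through a result set (odbc:select_count/2 plus odbc:select/3) instead of fetching everything with one sql_query.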
Are messages processed on a first-come-first-served basis, or are they sorted by timestamp or something like that?
The order of messages is preserved between one process and another. From the FAQ:
10.9 Is the order of message reception guaranteed?
Yes, but only within one process.
If there is a live process and you send it message A and then message
B, it's guaranteed that if message B arrived, message A arrived before
it.
On the other hand, imagine processes P, Q and R. P sends message A to
Q, and then message B to R. There is no guarantee that A arrives
before B. (Distributed Erlang would have a pretty tough time if this
was required!)
#knutin is right regarding how you can consume messages within a process. As an addition, note that you might use two subsequent receive statements to ensure that a certain message is consumed after another one:
receive
first ->
do_first()
end,
receive
second ->
do_second()
end
The receive statement is blocking, so this ensures that you never do_second() before you do_first(). The difference from #knutin's second solution is that there, if something unimportant arrives just before an important message, the important one still ends up queued behind it.
The mailbox is always kept in the order the messages arrived.
However, the order the messages are consumed is determined by your code.
If you have a plain process with a generic receive clause that matches anything, the order you get messages in is the same as the order they arrived in.
loop() ->
    receive
        Any ->
            do_something(Any),
            loop()
    end.
However, if you have a selective receive with match clauses, it will search the mailbox for messages of this specific type and consume the first matching message, effectively skipping non-matching messages. In the following example, if there are messages tagged as important in the queue, they will be processed before any other message. Note: matching like this searches all messages in the queue, which is a problem when there are many of them. There have been some developments in this area, but I'm not up to speed.
loop() ->
    receive
        {important, Stuff} ->
            do_something_important(Stuff),
            loop();
        Any ->
            do_something(Any),
            loop()
    end.
To further refine the answer, I would like to point out that, as stated above, messages that don't pattern-match are skipped, but in reality they are simply put aside and then reintroduced in order (so before any message that arrived after them) for the next receive pattern match.
This problem shows its worst when you have, for example, a gen_server behaviour module, because in that case, since the pattern-matching call scheme is always the same, messages not in scope will flood the message queue unless you define an (ugly and error-prone, IMHO) match-all pattern like:
receive
    ... -> ...;
    ... -> ...;
    MatchAllPattern -> ok
end
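For gen_server specifically, the usual place for that match-all is the handle_info/2 callback; a minimal sketch (the module name is mine):

```erlang
-module(drop_server).
-behaviour(gen_server).
-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

init([]) ->
    {ok, #{}}.

handle_call(ping, _From, State) ->
    {reply, pong, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

%% Any raw message the server does not understand ends up here and is
%% consumed, so it cannot accumulate in the mailbox.
handle_info(_Unexpected, State) ->
    {noreply, State}.
```

Sending the server an arbitrary term (Pid ! garbage) is then harmless: the message is dropped and a subsequent gen_server:call(Pid, ping) still replies pong.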