Trace event message ordering using erlang:trace_delivered/1 - erlang

I am trying to understand the exact semantics of erlang:trace_delivered/1 in order to determine whether this function can be suitably used to solve a problem I am currently facing. The problem is as follows.
Suppose there is a tracee process X, a tracer Y tracing X, and a third process Z. Process Z is tasked with stopping Y from tracing X by calling erlang:trace(X, false). This call is effected by Z at an arbitrary point while X is running. Thereafter, Z is also to issue a special message stopped to tracer Y, which signals to Y that Z has indeed stopped Y from tracing X.
I would like to guarantee that the special stopped message is delivered to Y's mailbox after all other trace messages have been delivered to Y. To my knowledge, Erlang does not guarantee the ordering of messages sent by different processes to a single process. Concretely, in my case, this means that tracer Y can, for instance, receive the stopped message issued by Z before the trace event messages due to tracee X. I read about erlang:trace_delivered/1, and was planning to address my problem using the following code inside the implementation of Z:
...
erlang:trace(X, false),
Ref = erlang:trace_delivered(X),
receive
    {trace_delivered, X, Ref} ->
        Y ! stopped
end.
...
A somewhat similar example is provided in the docs (quoted below):
Example: Process A is Tracee, port B is tracer, and process C is the port owner of B. C wants to close B when A exits. To ensure that the trace is not truncated, C can call erlang:trace_delivered(A) when A exits, and wait for message {trace_delivered, A, Ref} before closing B.
My example differs from the one in the docs in these two respects:
Process C knows the exact point of execution of A prior to invoking erlang:trace_delivered(A); in my case, I do not know the point X is in when erlang:trace_delivered(X) is invoked by Z.
In contrast to process C, my process Z switches off tracing on X before invoking erlang:trace_delivered(X).
What are the semantics of erlang:trace_delivered/1 in this case?
Does it still guarantee that {trace_delivered, X, Ref} is received by process Z after all trace messages have been delivered to tracer Y?
And does erlang:trace_delivered/1 automatically keep track of the point at which the execution of tracee X is at to be able to provide the aforementioned guarantee?
Your help is greatly appreciated!

You are correct about the ordering of messages. If process A sends multiple messages to process B, they are guaranteed to arrive in order. If A sends multiple messages to processes B and C, the message order is only guaranteed per receiving process. So for example:
A sends B message 1
A sends C message 2
A sends B message 3
A sends C message 4
The only guarantees in this scenario are that B will receive message 1 before message 3, and C will receive message 2 before message 4.
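This per-sender FIFO guarantee is easy to observe. A minimal sketch (module and message shapes are mine, not from the thread):

```erlang
-module(fifo_demo).
-export([run/0]).

%% A single sender's messages arrive in send order, so the
%% receiver below always sees 1 before 3.
run() ->
    Receiver = self(),
    spawn(fun() ->
                  Receiver ! {msg, 1},
                  Receiver ! {msg, 3}
          end),
    receive {msg, First} -> First end,
    receive {msg, Second} -> Second end,
    [First, Second].  %% always [1, 3]
```

Had a second sender been interleaved, its messages could land anywhere relative to the first sender's, which is exactly the gap the question is about.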
To answer your questions:
No. If erlang:trace_delivered/1 is invoked before other trace events are generated, those trace messages will arrive later. The docs only guarantee that prior trace messages are delivered before {trace_delivered, ...}:
When it is guaranteed that all trace messages are delivered to the tracer up to the point that Tracee reached at the time of the call to erlang:trace_delivered(Tracee), then a {trace_delivered, Tracee, Ref} message is sent to the caller of erlang:trace_delivered(Tracee).
The guarantees around erlang:trace_delivered/1 always hold true. But it doesn't guarantee that {trace_delivered, ...} will always be the last message.

Related

How to guarantee message queue/order in Erlang?

If one server receives multiple requests from one process by using Pid ! Msg, but the processing time for each request is different, then how can one guarantee that the sender receives the replies in order?
From the Erlang FAQ:
10.8 Is the order of message reception guaranteed?
Yes, but only within one process.
If there is a live process and you send it message A and then message B, it's guaranteed that if message B arrived, message A arrived before it.
On the other hand, imagine processes P, Q and R. P sends message A to Q, and then message B to R. There is no guarantee that A arrives before B. (Distributed Erlang would have a pretty tough time if this was required!)
That is, if the server processes the requests in the order they arrive, and sends the responses in the order the requests were processed, then the sender will receive the responses in order.
The Erlang receive clause can do pattern matching. So what you can do is create a reference for each message that you want to receive and then pattern match on that reference.
Check out this gist: if you look at line 26 you will see that the receive clause is waiting for a message with a specific pid. In this case the messages will arrive in an arbitrary order but, by virtue of this receive, they will be put into order.
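A minimal sketch of the same technique (module name and message shapes are mine): tag each request with a fresh reference, then selectively receive the replies in request order, whatever order they arrived in the mailbox:

```erlang
-module(ordered_replies).
-export([run/0]).

%% Send two tagged requests to an echo server, then collect the
%% replies in request order via selective receive on each reference.
run() ->
    Server = spawn(fun echo/0),
    Ref1 = make_ref(),
    Ref2 = make_ref(),
    Server ! {self(), Ref1, one},
    Server ! {self(), Ref2, two},
    %% Ref1 and Ref2 are already bound, so each pattern only
    %% matches the reply to that specific request.
    receive {Ref1, R1} -> R1 end,
    receive {Ref2, R2} -> R2 end,
    [R1, R2].  %% always [one, two]

echo() ->
    receive
        {From, Ref, Msg} -> From ! {Ref, Msg}, echo()
    end.
```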

Stop and start Erlang tracer without losing trace events

I have a question regarding tracers in Erlang, and how these can be switched on and off without losing any trace events. Suppose I have a process P1 which is being traced using the send and receive trace flags, like so:
erlang:trace(P1Pid, true, [set_on_spawn, send, 'receive', {tracer, T1Pid}])
Since the set_on_spawn flag is specified, once a (sub-)process P2 is spawned by P1, the same flags (i.e. set_on_spawn, send, 'receive') will apply to P2 as well. Now suppose I would like to create a new tracer on just P2, such that a tracer T1 handles traces from P1, and tracer T2 handles traces from P2. In order to do so, (since Erlang allows only one tracer per process), I would need to first unset the trace flags (i.e. set_on_spawn, send, 'receive') from P2 (since these are automatically inherited due to the set_on_spawn flag) and set them again on P2, as follows:
% Unset trace flags on P2.
erlang:trace(P2Pid, false, [set_on_spawn, send, 'receive']),
% We might lose trace events at this instant which were raised
% by process P2 while un-setting the tracer on P2 and setting
% it again.
% Now set again trace flags on P2, directing the trace to
% a new tracer T2.
erlang:trace(P2Pid, true, [set_on_spawn, send, 'receive', {tracer, T2Pid}]),
In between un-setting the tracer on P2 and setting it again, a number of trace events raised by process P2 might be lost due to a race condition.
My question is this: can this be achieved without losing trace events?
Does Erlang provide the means by which this 'tracer handover' (i.e. from T1 to T2) can be done in an atomic fashion?
Alternatively, is it possible to pause the Erlang VM and in doing so, pause tracing, thereby avoid losing trace events?
I have looked deeper into the problem and might have found a semi-desirable (see points below) partial workaround. After reading the Erlang documentation, I came across the erlang:suspend_process/1 and erlang:resume_process/1 BIFs. Using these two, I can achieve the desired behaviour like so:
% Suspend process P2. According to the Erlang docs, this function
% blocks the caller (i.e. the current tracer) until P2 is suspended.
% This way, we do not lose trace events.
erlang:suspend_process(P2Pid),
% Unset trace flags on P2.
erlang:trace(P2Pid, false, [set_on_spawn, send, 'receive']),
% We should not lose any trace events from P2, since it is
% currently suspended, and therefore cannot generate any.
% However, we can still lose receive trace events that are
% generated as a result of other processes sending messages
% to P2.
% Now set again trace flags on P2, directing the trace to
% a new tracer T2.
erlang:trace(P2Pid, true, [set_on_spawn, send, 'receive', {tracer, T2Pid}]),
% Finally, resume process P2, so that we can receive any trace
% messages generated by P2 on the new tracer T2.
erlang:resume_process(P2Pid).
My only three concerns using this method are the following:
The Erlang documentation for erlang:suspend_process/1 and erlang:resume_process/1 explicitly states that these are to be used for debugging purposes only. My question is: why can these not be used in production when, as illustrated in the example, unless process P2 is suspended, we face the risk of losing trace events (while switching from tracer T1 to tracer T2)?
We are actually messing around with the system (i.e. we're interfering with its scheduling). Is there a risk associated with this (apart from the fact that one can forget to call erlang:resume_process/1 on a previously suspended process)?
More importantly, even though we can prevent process P2 from taking any action, we cannot prevent other processes from sending messages to P2. These messages will result in {trace, Pid, receive, ...} trace events which might be lost while we are switching traces. Is there a way in which this can be avoided?
NB: A process P that was previously suspended by process P' is automatically resumed if P' (the one that invoked erlang:suspend_process/1) dies.
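On the "forgetting to resume" concern: as a sketch only (not an endorsement of suspend_process/1 in production), the handover can be wrapped in try ... after so the process is resumed even if re-tracing throws. The handover/2 helper name is mine:

```erlang
-module(handover_sketch).
-export([handover/2]).

%% Hand P2Pid over from its current tracer to T2Pid, guaranteeing
%% that resume_process/1 runs even if erlang:trace/3 raises.
%% Note: this does NOT solve the question's third concern; 'receive'
%% trace events caused by other processes sending to P2 can still
%% fall into the gap between the two trace/3 calls.
handover(P2Pid, T2Pid) ->
    true = erlang:suspend_process(P2Pid),
    try
        erlang:trace(P2Pid, false, [set_on_spawn, send, 'receive']),
        erlang:trace(P2Pid, true,
                     [set_on_spawn, send, 'receive', {tracer, T2Pid}])
    after
        true = erlang:resume_process(P2Pid)
    end.
```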

Erlang/OTP How to notify parent process that child processes are idle and no messages in their mailbox

I would like to design a process hierarchy where there is a parent process P which acts like a gatekeeper and delegates the work (messages/events from its client processes) to its children processes C1, C2, ..., Cn, which collaborate with each other and may send the result back to P. The children processes cannot talk to any process outside, only P.
The challenge is that though P may have multiple messages from its clients, it should accept only one message, delegate the work to C1..Cn and ONLY accept the next message from its clients
when all its children processes are done(or idle) and there are no more messages circulating between C1 to Cn.
P finishes accepting messages from C1..Cn so that it can return the result to its clients
Constraints:
Idle for me is when they are waiting with a receive (blocking) or simply exited.
C1 to Cn are finite state machines. Some or all of them may send messages back to P, or there may be no messages to be sent back to P. Even if no messages are sent back to P, P has to figure out that all of them are done with no messages between them.
If any of C1 to Cn has been pre-empted, then it is considered busy (this may be obvious but I thought I'll put it here for completion) and P will not accept the next message.
Is there an OTP pattern or library which will do this for me (before I hack something)? I know that process_info can let me know if the mailbox of a process is empty, and I could keep checking the children's mailboxes from P, but that would be unnecessary polling from P.
EDIT GENERAL: I am trying to implement a reactive variant of Flow Based Programming on the Erlang platform. This has the notion of 'hierarchical processes' or composites which themselves may contain composite processes until we reach some boxes of actual code...I am going to research(looking at monitor,process_info,process_flag) but I wanted to respond to your excellent answers
EDIT RECURSIVE PARENTS: Each of C1 to Cn can themselves be parent/composite processes. If I just spawn processes and let them exit immediately, I'll have to create the chain of composites every time, as C1..Cn may themselves be composites (which spawn composites... and so on). Finally, when we reach a leaf box (which is not a composite node), they are supposed to be finite state machines, so I'm not sure about spawning them and making them exit quickly if they are FSMs.
EDIT TKOWAL: Since I am trying to create a generic parent/composite process, it does not know 'when' the task ends. All it does is relay the messages it receives from its children to its siblings, with the 'constraint' that it will not accept the next message from its clients/siblings until its children are 'done'. The children C1..Cn may send not just one but many messages. I understand from your proposal that wait_for_task_finish will stop blocking the moment it gets the first message. But more messages may be emitted by P's children. P should wait for all messages. Also, having a task_end symbol will not work for the same reason (i.e. multiple messages are possible from the children).
Given how inexpensive it is to start up Erlang processes, your gatekeeper could start new children for each incoming task, and then wait for them all to exit normally once they complete their work.
But in general, it sounds like you're looking for a process pool. There are a few of these already available, such as poolboy and sidejob. Pools can be harder to get right than you think, so I advise using an existing proven pool implementation before attempting to write your own.
After the edits, this became an entirely different question, so I am posting a new answer.
If you are trying to write Flow Based Programming, then you are probably solving the wrong problem. FBP is effective because almost everything is asynchronous and you start processing the next request immediately after you have finished with the previous one.
So, the answer is - don't wait for children to finish:
In FBP, there is no time dependencies between the components. So if I
have a chunk of data, it should be able to flow from one end of the
diagram to the other regardless of how any other pieces of data are
being handled. In order to program an FBP system, you have to minimize
your dependencies.
source
When creating the parent and children, you know all the connections between blocks, so just configure the children to send processed data directly to the next block. For example: P1 has children C1 and C2. You send a message to P1, it delegates it to C1, the packet flows a couple of times between C1 and C2, and after that C1 or C2 sends it directly to P2.
Blocks should be stateless. Their output should not depend on previous requests, so even if C1 and C2 are processing data from two different requests to P1, that is OK. There could be situations where P1 gets data packet D1 and then D2, but outputs the answers in a different order: R2 and then R1. That is also OK. You can use an Erlang reference to tag messages and then check which response belongs to which request.
I don't think there is a ready library for that, but it is really easy to hack, unless I missed something. Your P process could look like this:
ready_for_next_task() ->
    receive
        {task, Task, CallerPid} ->
            send_task_to_workers(Task)
    end,
    wait_for_task_finish(CallerPid).

wait_for_task_finish(CallerPid) ->
    receive
        {task_end, Response} ->
            CallerPid ! Response
    end,
    ready_for_next_task().
In wait_for_task_finish/1 you have only one clause for receive, so it will not accept the next task until the current one is finished. If you are waiting for multiple responses from workers, you can simply add a second clause to the receive with some partial response and call wait_for_task_finish/1 recursively.
It is always better to have some indicator that the processing has ended, because you don't have guarantees on message delivery time. Otherwise you could observe that all processes are currently waiting in a receive and conclude that they have finished, when actually one of them has not started yet, or one has sent a message to another and you caught them before the second one had it in its mailbox.
If the processes C1..Cn each handle only a part of the actual work and don't know about the overall progress, then the gatekeeper P should know how many parts there were, receive all of them one by one, and only then call ready_for_next_task/0.
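A counting variant of this idea, as a sketch (module name and the {part, ...} message shape are mine): the gatekeeper knows it must collect N parts before becoming ready for the next task:

```erlang
-module(gatekeeper_sketch).
-export([collect/1]).

%% Block until exactly N {part, _} messages have arrived,
%% returning the parts in arrival order. Only after all N are in
%% would the gatekeeper loop back to accept the next task.
collect(N) -> collect(N, []).

collect(0, Acc) ->
    lists:reverse(Acc);
collect(N, Acc) ->
    receive
        {part, Part} -> collect(N - 1, [Part | Acc])
    end.
```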

What guarantees does erlang's "monitor" give?

While reading the ERTS user's guide, I found this section:
The only signal ordering guarantee given is the following. If an entity sends multiple signals to the same destination entity, the order will be preserved. That is, if A sends a signal S1 to B, and later sends the signal S2 to B, S1 is guaranteed not to arrive after S2.
I've also happened across this while doing further research googling:
Erlang Reference Manual, 13.5:
Message sending is asynchronous and safe, the message is guaranteed to eventually reach the recipient, provided that the recipient exists.
That seems very vague and I'd like to know what guarantees I can rely on in the following scenario:
A,B are processes on two different nodes.
Assume A does not crash and B was a valid node at some point.
A and B monitor each other.
A sends messages M1,M2,M3 to B
In the above scenario, is it possible that B receives M1,M3 (M2 is dropped),
without any sort of 'DOWN'/'EXIT'/heartbeat timeout being received at A?
There are no other guarantees other than the ordering guarantee. Note that by default you don't even know who the sender is, unless the sender encodes this in the message.
Your example could happen:
A sends M1 and M2
B receives M1
The node on which B resides gets disconnected
The node on which B resides comes up again
A sends M3 to B
B receives M3
M2 can be lost on the network link in this scenario. It is highly unlikely this happens, but it can happen. The usual trick is to have some kind of notion of such errors. Either by having a timeout trigger, or by monitoring the node or Pid which is the recipient of the message.
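The monitoring trick mentioned above can be sketched like this (function name, message shapes, and the timeout parameter are mine): monitor the recipient before sending, so the sender learns of a dead recipient via a 'DOWN' message rather than silence:

```erlang
-module(monitor_sketch).
-export([send_monitored/3]).

%% Send Msg to Pid while monitoring it. If a 'DOWN' for Pid arrives
%% within Timeout ms, report the failure; otherwise drop the monitor
%% (flushing any late 'DOWN') and report ok. This detects a dead
%% recipient, not a lost message; pairing it with an application-level
%% ack (or an idempotent protocol) covers the rest.
send_monitored(Pid, Msg, Timeout) ->
    Ref = erlang:monitor(process, Pid),
    Pid ! Msg,
    receive
        {'DOWN', Ref, process, Pid, Reason} ->
            {error, Reason}
    after Timeout ->
        erlang:demonitor(Ref, [flush]),
        ok
    end.
```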
Updated scenario:
In the updated scenario, provided I read it correctly, A would get a 'DOWN'-style message at some point, and likewise a message telling you that the node is up again, if you monitor the node.
Though often, such things are better modeled using an idempotent protocol if at all possible.
Reading through the erlang mailing-list and the academic faq, it seems like there are a few guarantees provided by the ERTS implementation, however I was not able to determine whether or not they are guaranteed at a language/specification level, too.
If you assume TCP is "reliable", then the current implementation guarantees that
given A,B are processes on different nodes (&hosts) and A monitors B, A sends to B, assuming A doesn't crash, any message delivery failures* between the two nodes or host/node/process failures on B will lead to A getting a 'DOWN' message (or 'EXIT' in the case of links). [ See 1 and 2 ]
*From what I have read on the mailing-list thread , this property is almost entirely based on the fact that TCP is used, so "message delivery failure" means any situation where TCP decides that a failure has occurred/the connection needs to be closed.
The academic faq talks about this like it's also a language/specification level guarantee, however I couldn't yet find anything to back that up.

erlang : ordering of trace messages originating from a single process

This is a simple question that I cannot find a clear answer to:
Can one assume that trace messages belonging to a single process are sent in the order in which the corresponding events occur?
(The icing on the cake would of course be the source where this is specified :))
thank you
Messages from a process A to a process B are guaranteed to always be ordered. It would be right to assume the trace events will also be ordered.
This guarantee doesn't hold when many processes message another one: if A and C both message B and A fires before C, there is no guarantee that A's message will be there first. Similarly, if A messages both B and C, there is no guarantee that C won't have its messages before B.
This could cause confusion if there is IO being done while tracing -- IO goes through a specific process (the group leader) that acts as a server, so outputting trace vs. stuff that is happening right now might give funny results.
