How do message loops in Erlang work? Are they synchronous when it comes to processing messages?
As far as I understand, the loop starts by "receive"ing a message, then performs something and hits another iteration of the loop.
So that has to be synchronous, right?
If multiple clients send messages to the same message loop, are all those messages queued and processed one after another?
To process multiple messages in parallel, you would have to spawn multiple message loops in different processes, right?
Or did I misunderstand all of it?
Sending a message is asynchronous. Processing a message is synchronous - one message is receive'd at a time - because each process has its own (and only one) mailbox.
From the manual (Erlang concurrency):
Each process has its own input queue for messages it receives. New messages received are put at the end of the queue. When a process executes a receive, the first message in the queue is matched against the first pattern in the receive; if this matches, the message is removed from the queue and the actions corresponding to the pattern are executed.
However, if the first pattern does not match, the second pattern is tested; if this matches, the message is removed from the queue and the actions corresponding to the second pattern are executed. If the second pattern does not match, the third is tried, and so on until there are no more patterns to test. If there are no more patterns to test, the first message is kept in the queue and we try the second message instead. If this matches any pattern, the appropriate actions are executed and the second message is removed from the queue (keeping the first message and any other messages in the queue). If the second message does not match, we try the third message, and so on until we reach the end of the queue. If we reach the end of the queue, the process blocks (stops execution) and waits until a new message is received, and this procedure is repeated.
Of course, the Erlang implementation is "clever" and minimizes the number of times each message is tested against the patterns in each receive.
So you could create priorities with pattern matching, but the concurrency is done via multiple processes.
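For example, a minimal sketch of such a priority receive (handle/2 is a hypothetical helper). The "after 0" makes the outer receive non-blocking, so urgent messages anywhere in the queue are drained before anything else is handled:

loop() ->
    receive
        {urgent, Msg} ->
            handle(urgent, Msg),
            loop()
    after 0 ->
        %% no urgent message waiting, accept anything
        receive
            {urgent, Msg} ->
                handle(urgent, Msg),
                loop();
            {normal, Msg} ->
                handle(normal, Msg),
                loop()
        end
    end.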
I would like to be able to catch messages going to my GenServer's handle_info in tests, to check that they are what I intend.
1/ Is there a way to somehow print every message coming through?
2/ Using assert_receive, is there a way to catch those messages? Should I set the assert_receive before or after the call to the external service that will trigger the handle_info? What syntax should I use?
I tried many combinations of assert_receive, and a receive do ... block to try to display incoming messages, with no success.
Both ExUnit.Assertions.assert_receive/3 and ExUnit.Assertions.assert_received/2 assert that messages arrived in the current process' mailbox. The former can be called either before or after the message is actually sent:
Asserts that a message matching pattern was or is going to be received within the timeout period, specified in milliseconds.
the latter is to be called after:
Asserts that a message matching pattern was received and is in the current process’ mailbox.
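For illustration, a minimal sketch of the difference (both assert against the test process's own mailbox; the messages are made up):

send(self(), :ping)
assert_received :ping          # :ping must already be in the mailbox

parent = self()
spawn(fn -> send(parent, :pong) end)
assert_receive :pong, 100      # waits up to 100 ms for :pong to arrive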
That said, both are unlikely to be a good fit for testing the existing GenServer. Messages arriving in the GenServer's mailbox is functionality provided by OTP, and you should not test it. If you need to log messages, add a call to Logger.log/3 to the handle_info/2 and check that the log actually happens with ExUnit.CaptureLog.capture_log/2 (sketched below). If it performs some action upon message arrival, test this action.
In general, you should test your code, not OTP.
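A minimal sketch of that approach, assuming a hypothetical MyServer whose handle_info/2 logs (via Logger) the messages it receives:

defmodule MyServerTest do
  use ExUnit.Case
  import ExUnit.CaptureLog

  test "handle_info/2 logs external messages" do
    {:ok, pid} = MyServer.start_link([])

    log =
      capture_log(fn ->
        send(pid, {:external, :payload})
        # a synchronous call ensures the info message has been handled first
        :sys.get_state(pid)
      end)

    assert log =~ "{:external, :payload}"
  end
end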
I'm working on a Twilio Programmable SMS chatbot that needs to provide a good chunk of information to a user at the outset of the first conversation. Currently, what we've written is about 562 characters. For some of our users, this gets broken up into chunks of 160 characters that do not necessarily show up in their SMS app in the right order.
To account for this, we're trying to break our message down into distinct messages of 160 characters or less that each send one after the other.
However, my teammates and I are currently unsure how to accomplish this. Our application is currently written to provide a TwiML response for each message that is received from a user. I've been unable to find a way to create a TwiML response that indicates a number of consecutive messages, and the theoretical solutions we've come up with feel hacky and flawed.
To demonstrate, currently our code looks like this. As you can see, when a new user sends in the keyword "start", we join 4 messages together in one long text response. However, we'd like each message to be sent individually, one after the other, about a second or two apart.
case @body
when "start"
  if !!@user
    CreateMessage::SubscriptionMessage.triage_subscribable_type(!!@user)
  else
    [
      CreateMessage::AlphaMessage.personalized_welcome(@conversation.from, true),
      CreateMessage::SubscriptionMessage.introduce_bcd,
      CreateMessage::SubscriptionMessage.for_example,
      CreateMessage::SubscriptionMessage.intvite_to_start
    ].join("\n\n")
  end
end
We'd like to avoid creating a background worker/cron job, if possible - but welcome any and all suggested solutions.
I think your question is more about how to design synchronous (webhook response) vs. asynchronous responses/messages. I have not used TwiML, but the concepts are the same.
If you don't want to use a background job, then send the first N-1 messages using the API with a time delay in between, and the last message as the response (see the sketch below).
If you are OK with using background jobs, then send the 1st message as the response and queue a job that sends the remaining messages using the API.
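For the first option, a rough sketch using the twilio-ruby REST client (the credentials, the twilio_number, and the messages_to_send array are placeholders for your own values):

require "twilio-ruby"

parts  = messages_to_send          # the bodies you currently join with "\n\n"
client = Twilio::REST::Client.new(account_sid, auth_token)

# send all but the last message directly through the REST API
parts[0..-2].each do |body|
  client.messages.create(from: twilio_number, to: @conversation.from, body: body)
  sleep 1.5  # rough ordering delay; keep N small, since this blocks the webhook
end

# reply to the webhook itself with the final message as TwiML
twiml = Twilio::TwiML::MessagingResponse.new do |response|
  response.message(body: parts.last)
end
twiml.to_s  # render this as the XML body of the webhook response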
In a BufferedOutput plugin, if write(chunk) throws an exception, or if the fluentd process dies while it is processing the chunk, the docs say the chunk will still stay in the queue. But does that mean the events/records processed before the crash will be processed again after fluentd restarts?
If that is the case, write(chunk) has to be atomic for "exactly once" processing. Then, is the method written here in the filter_stream-method section good for the purpose? I.e., are the events in the MultiEventStream being processed atomically?
write(chunk) may be retried if any error occurs in that method, so that method should be written to be idempotent (see the sketch after the list below).
I cannot quite follow what you're doing. Each method is designed for these purposes:
filter_stream in Filter: select/reject events, or enrich/shave fields of records (once per event, not retried)
format in Output: format events to string/binary, which will then be written into chunks (once per event, not retried)
write in Output: read data from a chunk and write/send it to the destination (at least once per chunk, retried on errors)
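As a sketch, one way to make write idempotent is to key the whole chunk by its unique id, so that a retry overwrites rather than duplicates; @client and its put method are hypothetical stand-ins for your destination:

def write(chunk)
  # hex-encoded id, stable across retries of the same chunk
  chunk_id = chunk.unique_id.unpack1("H*")
  records = []
  chunk.msgpack_each do |_time, record|
    records << record
  end
  # idempotent PUT: resending the same chunk writes to the same key
  @client.put("chunks/#{chunk_id}", records)
end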
I'm not entirely sure of the differences between a PID and a Reference, and when to use which.
If I spawn a new process with spawn/1, I get a pid. I can kill it with the PID, no? Why would I need a reference?
Likewise, I see monitor/1 receiving a message with a ref and a pid.
Thanks!
A Pid is a process identifier. You get one when you create a new process with spawn, or you can get your own with self(). It allows you to interact with the given process: especially to send messages to it with Pid ! Message, plus some other things, like killing it explicitly (which you should not do) or obtaining process information with erlang:process_info.
And you can create relations between processes with erlang:link(Pid) and erlang:monitor(process, Pid) (that is, between the process Pid and the process executing this function). In short, this gives you "notifications" when the other process dies.
A Reference is just an almost-unique value (of a different type). One might say that it gives you a reference to the here and now, which you can recognize later. For example, if we send a message to another process and expect a response, we want to make sure that the message we receive is associated with our request, and not just any message from someone else. The easiest way to do this is to tag the message with a unique value, and wait for a response carrying exactly the same tag:
Tag = make_ref(),
Pid ! {Tag, Message},
receive
    %% only a reply carrying our Tag matches this pattern
    {Tag, Response} ->
        ...
end
In this code, with the use of pattern matching, we make sure that (we wait in the receive until) Response is exactly for the Message we sent, no matter what other messages arrive from other processes. This is the most common use of references you will encounter.
And now back to monitor. When calling Ref = monitor(process, Pid), we make this special connection with the Pid process. The Ref that is returned is just a unique reference that we can use to demonitor the process. That is all.
One might ask: if we are able to create a monitor with a Pid, why do we need a Ref for demonitoring? Couldn't we just use the Pid again? In theory we could, but monitors are implemented in such a way that multiple monitors can be established between the same two processes. So when demonitoring, we have to remove only one of those connections. It is done this way to make monitoring more transparent: if you have a library function that creates and removes one monitor, you would not like it to interfere with other libraries, functions, and the monitors they might be using.
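A small sketch of that lifecycle; the flush option also removes a 'DOWN' message that may already have arrived before the demonitor:

wait_for(Pid, Timeout) ->
    Ref = erlang:monitor(process, Pid),
    receive
        {'DOWN', Ref, process, Pid, Reason} ->
            {down, Reason}
    after Timeout ->
        %% removes only the monitor identified by this Ref,
        %% leaving any other monitors on Pid in place
        erlang:demonitor(Ref, [flush]),
        timeout
    end.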
According to this page:
References are erlang objects with exactly two properties:
They can be created by a program (using make_ref/0), and,
They can be compared for equality.
You should use it whenever you need to bind a unique identifier to some "object". At any time you can generate a new one using erlang:make_ref/0. The documentation says:
make_ref() -> reference()
Returns an almost unique reference.
The returned reference will re-occur after approximately 2^82 calls;
therefore it is unique enough for practical purposes.
When you call the erlang:monitor/2 function, it returns a reference that gives you the ability to cancel the monitor (with the erlang:demonitor/1 function). This reference identifies only that particular call of erlang:monitor/2. If you need to operate on the process (kill it, for example), you still have to use the process pid.
Likewise, I see monitor/1 receiving a message with a ref and a pid.
Yep, a monitor sends messages like {'DOWN', Ref, process, Pid, Reason}. Whether to use the pid or the ref depends only on your application logic, but (IMO) in the most common cases it does not matter which you use.
I have a backout queue for my queue manager.
I want to build a message flow that reads this queue; when a message arrives, it should take the message, wrap it in a specially formatted XML message, and put it in the normal exception queue that receives the handled exceptions.
But the message coming into the backout queue can be in any format, and I have to make an XML message where that message is going to be a field.
So, what are the best settings for my flow (regarding MQMD properties like CCSID, Format, etc.), and which parser should I use (DFDL, BLOB, or MRM)?
Kindly advise.
Since you don't know what kind of message arrived in the backout queue, you should not parse it with specific parsers (like XMLNSC, etc.). The more generic the parameters you set on the MQInput node, the better you can determine further down the flow what is inside the message.
So I would start with the default message domain (BLOB) and leave the other parameters untouched as well. Connect a logging node (e.g. a Trace node) to the Catch and Failure terminals. Connect the Out terminal to a Compute node whose ESQL determines the error type and decides on further actions (e.g. route to label). Then in each label decide what part of the message should be mapped into the final exception message, and do the mapping (a rough sketch follows at the end of this answer).
If you need the MQMD properties of the message currently in the backout queue in your resulting message, just extract the values and put/concatenate them into the resulting message's XML part. I don't think you should copy the MQMD (and other) headers to the result message as is, because they might be the very reason the original message got into the backout queue, and your resulting message would end up there again. Construct the resulting message's headers from scratch.
If something bad happens while doing these transformations, you will see the problem in the Trace. Then modify the error-handling logic appropriately to avoid mishandling in the future.
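As an illustrative sketch only (the module name and the output XML layout are made up), the Compute node's ESQL could look roughly like this:

CREATE COMPUTE MODULE WrapBackoutMessage
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- the payload is raw bytes because the MQInput domain is BLOB
        DECLARE raw BLOB InputRoot.BLOB.BLOB;
        -- wrap the original payload as a field of the exception XML;
        -- CAST with the original CCSID if the bytes are known to be text
        SET OutputRoot.XMLNSC.ExceptionReport.OriginalPayload =
            CAST(raw AS CHARACTER CCSID InputRoot.Properties.CodedCharSetId);
        SET OutputRoot.XMLNSC.ExceptionReport.ReplyToQueue =
            InputRoot.MQMD.ReplyToQ;
        -- build the outgoing headers from scratch rather than copying MQMD
        RETURN TRUE;
    END;
END MODULE;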