sync call from process with many incoming msgs - erlang

I need to implement a synchronous call from a process which receives many incoming messages from other processes. The problem is distinguishing when the reply to the call has arrived. Do I need to spawn an additional process that extracts messages from the queue into a buffer until the reply message is encountered, sends the reply to the main process, and only then hands over everything else it accepted?

The trick is to use a reference as a token for replication:
replicate() ->
    {ok, Token} = db:ask_replicate(...),
    receive
        {replication_completed, Token} ->
            ok
    end.
where Token is created with a call to make_ref(). Since no other message will match Token, you are safe. Other messages will be placed in the mailbox for later scrutiny.
However, the above solution does not take process crashes into account. You need a monitor on the DB server as well. The simplest way to get the pattern right is to let the mediator be a gen_server. Alternatively, you can read the chapter in Learn You Some Erlang (http://learnyousomeerlang.com/what-is-otp#the-basic-server) and look at the synchronous call in the kitty_server.
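For reference, a minimal sketch of the monitored variant (untested; the registered name db_server and the exact message shapes are assumptions about your protocol):

replicate(Request) ->
    Token = make_ref(),
    %% Monitor the DB server so a crash does not leave us stuck in receive
    MRef = erlang:monitor(process, db_server),
    db_server ! {replicate, self(), Token, Request},
    receive
        {replication_completed, Token} ->
            erlang:demonitor(MRef, [flush]),
            ok;
        {'DOWN', MRef, process, _Pid, Reason} ->
            %% The DB server died before replying
            {error, {db_server_down, Reason}}
    end.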


How to explicitly acknowledge/fail Amazon SQS FIFO queue from the listener without throwing an exception?

My application only listens to a certain queue; the producer is a third-party application. I receive the messages, but sometimes, based on some logic, I need to send a fail message to the producer so that the message is resent to my listener until I decide to consume and acknowledge it. My current implementation of this process just throws a custom exception, but this is not a clean solution. Can anyone help me send a FAIL to the producer without throwing an exception?
My JMS Listener Factory settings:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactoryForQexpress(SQSErrorHandler errorHandler) {
    SQSConnectionFactory connectionFactory = SQSConnectionFactory.builder()
            .withRegion(RegionUtils.getRegion(StaticSystemConstants.getQexpressSqsRegion()))
            .withAWSCredentialsProvider(new ClasspathPropertiesFileCredentialsProvider(StaticSystemConstants.getQexpressSqsCredentials()))
            .build();
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    factory.setConcurrency("3-10");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setErrorHandler(errorHandler);
    return factory;
}
My Listener Settings:
@JmsListener(destination = StaticSystemConstants.QUEXPRESS_ORDER_STATUS_QUEUE, containerFactory = "jmsListenerContainerFactoryForQexpress")
public void receiveQExpressOrderStatusQueue(String text) throws JSONException {
    LOG.debug("Consumed QExpress status {}", text);
    // here I need to decide whether to acknowledge or fail
    ...
    if (success) {
        updateStatus();
    } else {
        // TODO: replace this with an explicit FAIL message
        throw new CustomException("Not right time to update status");
    }
}
Please share your experience with this. Thank you!
SQS -- internally speaking -- is fully asynchronous and completely decouples the producer from the consumer.
Once the producer successfully hands off a message to SQS and receives the message-id in response, the producer only knows that SQS has received and committed the message to its internal storage and that the message will be delivered to a consumer at least once.¹ There is no further feedback to the producer.
A consumer can "snooze" a message for later retry by simply not deleting it (see the setSessionAcknowledgeMode docs) or by actively resetting the visibility timeout on the message instead of deleting it, which tells SQS to leave the message in the in-flight status until the timer expires, at which point it will again deliver the message for the consumer to retry.
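For illustration, a sketch of both operations using the plain AWS SDK for Java rather than the JMS wrapper (the class name and the queueUrl/receiptHandle parameters are assumptions; the receipt handle comes from the received message):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.ChangeMessageVisibilityRequest;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;

public class SqsAckNack {
    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    // "Fail": make the message visible again immediately so SQS redelivers it.
    public void nack(String queueUrl, String receiptHandle) {
        sqs.changeMessageVisibility(
                new ChangeMessageVisibilityRequest(queueUrl, receiptHandle, 0));
    }

    // "Acknowledge": delete the message so it is never delivered again.
    public void ack(String queueUrl, String receiptHandle) {
        sqs.deleteMessage(new DeleteMessageRequest(queueUrl, receiptHandle));
    }
}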
Note, too, that a single SQS queue can have multiple producers and/or multiple consumers, as long as all the producers ask for and consumers provide identical services, but there is no intrinsic concept of which consumer or which producer. There is no consumer-to-producer backwards communication channel, and no mechanism for a producer to inquire about the status of an earlier message -- the design assumption is that once SQS has received a message, it will be delivered,² so no such mechanism should be needed.
¹at least once. Unless the queue is a FIFO queue, SQS will typically deliver the message exactly once, but there is not an absolute guarantee that the message will not be delivered more than once. Because SQS is a massive, distributed system that stores redundant copies of messages, it is possible in some edge case conditions for messages to be delivered more than once. FIFO queues avoid this possibility by leveraging stronger internal consistency guarantees, at a cost of reduced throughput of 300 TPS.
²it will be delivered assuming of course that you actually have a consumer running. SQS does not block the producer, and will allow you to enqueue an unbounded number of messages waiting for a consumer to arrive. It accepts messages from producers regardless of whether there are currently any consumers listening. The messages are held until consumed or until the MessageRetentionPeriod (default 4 days, max 14 days) timer expires for each message, whichever comes first.

How to design a connector in go

I am building a simple connector component in Go with these responsibilities:
Open, keep, and manage the connection to an external service (i.e. run in the background).
Parse incoming data into logical messages and pass these messages to a business logic component.
Send logical messages from the business logic to the external service.
I am undecided how to design the interface of the connector in Go.
Variant A) Channel for inbound, function call for outbound messages
// Listen for inbound messages.
// Inbound messages are delivered to the provided channel.
func Listen(msg chan *Message) {...}
// Deliver msg to service
func Send(msg *Message) {...}
Variant B) Channel for inbound and outbound messages
// Listen for inbound messages + send outbound messages.
// Inbound messages are delivered to the provided msgIn channel.
// To send a message, put a message into the msgOut channel.
func ListenAndSend(msgIn chan *Message, msgOut chan *Message) {...}
Variant B seems cleaner and more "Go-like" to me, but I am looking for answers to:
Is there an "idiomatic" way to do this in Go?
Alternatively, in which cases should variant A or B be preferred?
Any other notable variants for this kind of problem?
Both approaches allow for only one listener (unless you keep track of the number of listeners, which is a somewhat fragile approach), which is a limitation. It all depends on your programmatic preferences, but I'd probably go with callbacks for incoming messages and a send method:
func OnReceive(func(*Message) bool) // If callback returns false, unregister it.
func Send(*Message)
Other than that, both of your proposed models are completely valid. The second seems more "orthogonal". An advantage of using a send method is that you can make sure it never blocks, as opposed to a "bare" channel.
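A minimal sketch of the callback-plus-send approach (the Connector type, its buffer size, and the dispatch helper are assumptions, not a fixed API):

package connector

import "sync"

type Message struct{ /* payload fields elided */ }

type Connector struct {
	mu        sync.Mutex
	callbacks []func(*Message) bool
	out       chan *Message
}

func New() *Connector {
	// Buffered so Send can usually hand off without blocking.
	return &Connector{out: make(chan *Message, 64)}
}

// OnReceive registers a callback; it is unregistered once it returns false.
func (c *Connector) OnReceive(cb func(*Message) bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.callbacks = append(c.callbacks, cb)
}

// Send never blocks: if the outbound buffer is full, it reports failure
// instead of stalling the caller.
func (c *Connector) Send(msg *Message) bool {
	select {
	case c.out <- msg:
		return true
	default:
		return false
	}
}

// dispatch delivers an inbound message to all callbacks, dropping those
// that return false.
func (c *Connector) dispatch(msg *Message) {
	c.mu.Lock()
	defer c.mu.Unlock()
	kept := c.callbacks[:0]
	for _, cb := range c.callbacks {
		if cb(msg) {
			kept = append(kept, cb)
		}
	}
	c.callbacks = kept
}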

Simple chat system over websockets with reconnection feature

I have seen many examples of chat room systems over websockets implemented with Erlang and Cowboy.
Most of the examples I have seen use gproc: in practice, each websocket handler registers itself with gproc and then broadcasts/receives messages through it.
Since a user could accidentally close the webpage, I am thinking about attaching to the websocket handler a gen_fsm which actually broadcasts/receives all the messages from gproc. This way the gen_fsm could switch from a "connected" state to a "disconnected" state whenever the user exits, and still buffer all the messages. After a while, if the user is not back online, the gen_fsm will terminate.
Is this a good solution? How can I make the new websocket handler recover the gen_fsm process? Should I register the gen_fsm using the user name, or is there a better solution?
What I do is the following:
When a user connects to the site, I spawn a gen_server representing the user. The gen_server then registers itself in gproc as {n, l, {user, UserName}}. (It can also register properties like {p, l, {chat, ChannelID}} to listen to chat channels; see gproc pub/sub.)
The user's websocket connection then starts the Cowboy handler (I use Bullet). The handler asks gproc for the pid() of the user's gen_server and registers itself as a receiver of messages. From then on, when the user gen_server receives messages, it forwards them to the websocket handler.
When the websocket connection ends, the handler unregisters from the user gen_server, so the gen_server keeps messages until the next connection or the next timeout. At the timeout, you can simply terminate the server (the messages will be lost, but that is acceptable).
See (not tested):
-module(user_chat).
-behaviour(gen_server).

-record(state, {mailbox, receiver=undefined}).

%% API
-export([start_link/1, set_receiver/1, unset_receiver/1]).
%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(UserID) ->
    gen_server:start_link(?MODULE, [UserID], []).

set_receiver(UserID) ->
    set_receiver(UserID, self()).

unset_receiver(UserID) ->
    %% Just set the receiver to undefined
    set_receiver(UserID, undefined).

set_receiver(UserID, ReceiverPid) ->
    %% Look the user up under the same key it was registered with
    UserPid = gproc:where({n, l, {user, UserID}}),
    gen_server:call(UserPid, {set_receiver, ReceiverPid}).

%% Gen server internals
init([UserID]) ->
    gproc:reg({n, l, {user, UserID}}),
    {ok, #state{mailbox=[]}}.

handle_call({set_receiver, ReceiverPid}, _From, #state{mailbox=MB}=State) ->
    %% Flush the mailbox to the *new* receiver, not the old one
    NewState = State#state{receiver=ReceiverPid},
    NewMB = check_send(MB, NewState),
    {reply, ok, NewState#state{mailbox=NewMB}}.

handle_cast(_Msg, State) ->
    {noreply, State}.

handle_info({chat_msg, Message}, #state{mailbox=MB}=State) ->
    NewMB = check_send([Message|MB], State),
    {noreply, State#state{mailbox=NewMB}}.

%% Mailbox empty
check_send([], _) -> [];
%% Receiver undefined, keep messages
check_send(Mailbox, #state{receiver=undefined}) -> Mailbox;
%% Receiver is a pid
check_send(Mailbox, #state{receiver=Receiver}) when is_pid(Receiver) ->
    %% Send all messages (newest first), then return an empty mailbox
    Receiver ! {chat_messages, Mailbox},
    [].
With the solution you propose you may have many pending processes, and you will have to write a "process cleaner" for all the users that never come back. It also will not survive a shutdown of the chat server VM: all messages stored in living FSMs will vanish if the node goes down.
I think a better way would be to store all messages in a database like Mnesia, with sender, receiver, expiration date, etc., check for stored messages at connection time, and have a message-cleaner process destroy expired messages from time to time.
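A minimal sketch of that Mnesia approach (untested; the record fields and the expiry policy are assumptions):

-record(chat_msg, {receiver, sender, body, expires_at}).

init_store() ->
    mnesia:create_table(chat_msg,
                        [{type, bag},
                         {disc_copies, [node()]},
                         {attributes, record_info(fields, chat_msg)}]).

store(Receiver, Sender, Body, TTLSeconds) ->
    Expires = erlang:system_time(second) + TTLSeconds,
    mnesia:dirty_write(#chat_msg{receiver=Receiver, sender=Sender,
                                 body=Body, expires_at=Expires}).

%% Called when a user (re)connects: fetch pending messages, delete them
%% from the table, and drop the ones that have already expired.
fetch(Receiver) ->
    Now = erlang:system_time(second),
    Msgs = mnesia:dirty_read(chat_msg, Receiver),
    [mnesia:dirty_delete_object(M) || M <- Msgs],
    [M || M = #chat_msg{expires_at=E} <- Msgs, E > Now].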

How can I apply timeout function to LMAX Disruptor Queue?

To developers/users of the LMAX Disruptor (http://code.google.com/p/disruptor/):
My question:
Can anyone suggest an approach for applying a timeout function to the Disruptor, e.g. using an EventHandler?
Here is one scenario that came up in my line of work:
Outbox - messages sent to the Server over a network
Inbox - ACK messages received from the Server
ACK Handler - marks outbox messages as ACKed
Timeout Handler - marks outbox messages as NACKed (much needed, but where does it fit into the Disruptor design?)
Does anyone share the same opinion? Or can anyone point out why it is unnecessary?
I hope the ensuing debate will be brief.
Thank you.
To clarify: the timeout handler would "fire" after a certain period of time when a message could not be delivered?
The way it works with the Disruptor is that you have a ring buffer for inbound and a ring buffer for outbound messages. An email comes in, and you place it into the inbound ring buffer using an appropriate event. You then process the message (i.e. decode, analyze, log, store) and send it along to another system by placing it into the outbound ring buffer; another handler takes the message and stores it in a database or sends it to another server using SMTP. If an error or timeout occurs, you create an event in the inbound ring buffer signaling the error (NACK) and process that message. Does that make sense?
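A sketch of that idea (the event shape, field names, and the use of a scheduled timer are assumptions; only RingBuffer.next/get/publish are Disruptor API):

import java.util.concurrent.*;
import com.lmax.disruptor.RingBuffer;

class InboundEvent {
    enum Kind { MESSAGE, ACK, NACK }
    Kind kind;
    long messageId;
}

public class AckTimeoutTracker {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private final ConcurrentMap<Long, ScheduledFuture<?>> pending =
            new ConcurrentHashMap<>();
    private final RingBuffer<InboundEvent> inbound;

    public AckTimeoutTracker(RingBuffer<InboundEvent> inbound) {
        this.inbound = inbound;
    }

    // Call when an outbound message is sent: schedule a NACK unless an
    // ACK arrives within timeoutMillis.
    public void sent(long messageId, long timeoutMillis) {
        pending.put(messageId, timer.schedule(() -> {
            pending.remove(messageId);
            long seq = inbound.next();
            try {
                InboundEvent e = inbound.get(seq);
                e.kind = InboundEvent.Kind.NACK; // handled like any other event
                e.messageId = messageId;
            } finally {
                inbound.publish(seq);
            }
        }, timeoutMillis, TimeUnit.MILLISECONDS));
    }

    // Call when the ACK event is processed: cancel the pending timeout.
    public void acked(long messageId) {
        ScheduledFuture<?> f = pending.remove(messageId);
        if (f != null) {
            f.cancel(false);
        }
    }
}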

How to keep track of a process per browser window and access it at each event in Nitrogen?

In Nitrogen, the Erlang web framework, I have the following problem. I have a process that takes care of sending messages to and receiving messages from another process that acts as a hub. This process acts as the comet process that receives the messages and updates the page.
The problem is that when the user presses a button I get a call to event. How do I get hold of that Pid in an event?
The code that initiates the communication and sets up the receiving part looks like this. First I have an event which starts the client process by calling wf:comet:
event(start_chat) ->
    Client = wf:comet(fun() -> chat_client() end);
The code for the client process is the following, which gets and joins a room at the beginning and then goes into a loop sending and receiving messages to/from the room:
chat_client() ->
    Room = room_provider:get_room(),
    room:join(Room),
    chat_client(Room).

chat_client(Room) ->
    receive
        {send_message, Message} ->
            room:send_message(Room, Message);
        {message, From, Message} ->
            wf:insert_bottom(messages, [#p{}, #span{ text=Message }]),
            wf:comet_flush()
    end,
    chat_client(Room).
Now, here's the problem. I have another event, send_message:
event(send_message) ->
    Message = wf:q(message),
    ClientPid ! {send_message, Message}.
except that ClientPid is not defined there, and I can't see how to get ahold of it. Any ideas?
The related thread on the Nitrogen mailing list: http://groups.google.com/group/nitrogenweb/browse_thread/thread/c6d9927467e2a51a
Nitrogen provides a key-value storage per page instance called state. From the documentation:
Retrieve a page state value stored under the specified key. Page State is different from Session State in that Page State is scoped to a series of requests by one user to one Nitrogen Page:
wf:state(Key) -> Value
Store a page state variable for the current user. Page State is different from Session State in that Page State is scoped to a series of requests by one user to one Nitrogen Page:
wf:state(Key, Value) -> ok
Clear a user's page state:
wf:clear_state() -> ok
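Applied to the code above, that would look roughly like this (untested; assumes wf:comet returns the comet process pid, as in the question, and chat_client_pid is an arbitrary key):

event(start_chat) ->
    %% Remember the comet pid in page state when the client starts
    Client = wf:comet(fun() -> chat_client() end),
    wf:state(chat_client_pid, Client);

event(send_message) ->
    %% Fetch it back in any later event on the same page
    ClientPid = wf:state(chat_client_pid),
    Message = wf:q(message),
    ClientPid ! {send_message, Message}.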
Have an ETS table which maps session IDs to client Pids. Or, if Nitrogen provides any sort of session management, store the Pid as session data.
Everything that needs to be remembered needs a process, and it looks like your room provider isn't one.
room:join(Room) needs to be room:join(Room, self()): the room needs to know what your comet process's pid is.
To send a message to a client, you first send the message to the room, and the room then sends it to all clients in the room. But for that to work, every client joining the room needs to submit its comet pid, and the room needs to keep a list of all pids in the room.
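A minimal sketch of such a room process (untested; a bare process rather than a gen_server, and the message shapes are assumptions):

room_loop(Members) ->
    receive
        {join, Pid} ->
            room_loop([Pid | Members]);
        {leave, Pid} ->
            room_loop(lists:delete(Pid, Members));
        {send_message, From, Message} ->
            %% Broadcast to every comet process in the room
            [Pid ! {message, From, Message} || Pid <- Members],
            room_loop(Members)
    end.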
