If a Grain subscribes to an Orleans Stream, will it never get deactivated?

If a Grain subscribes to an Orleans Stream, does that mean that this Grain will never get deactivated? Or will it get deactivated, and just become active again when a message gets published to the Stream that it was subscribed to?

In both cases, implicit and explicit subscriptions, the grain will be deactivated if no new events arrive, and will be reactivated when a new event comes.

Related

How do Erlang/Akka etc. send messages under the hood? Why doesn't it lead to deadlock?

Message sending is a useful abstraction, but it seems to be a bit misleading because it is not like letters sent through a post box that are literally moving through the system.
Similarly in Kafka they talk about messages but really it's just reading/writing to a distributed, append-only log.
In Erlang/Akka you actually copy the data rather than 'send it' so how does this work?
I was imagining something like: Alice sends a message to Bob by
- acquiring a lock on Bob's queue (i.e., his mailbox)
- writing the message to the queue
- releasing the lock
- doing something else
Given that you can send a message to anyone, how does this not result in a massive deadlock, with processes all waiting to message Alice? It seems like it might be useful to have multiple intermediate mailboxes for popular actors, so you can write to one of those and then go do something else faster.
The receiver is not locking its mailbox when it is waiting for a message; only when it checks it, briefly. If there is no matching message, it releases the lock and goes to sleep, then gets woken up when new messages arrive. Likewise, senders also only need to acquire the lock while inserting the message. There is never any deadlock situation at this level.
Processes may still get deadlocked because of logical errors where both are expecting a message from the other at the same time, but that's a different matter, and the message passing style makes it less likely to end up in that situation, because there is no lock management to screw up on the user level.
As you mention, yes, it is useful to have intermediate mailboxes to reduce contention (a sender can add to the incoming side of the mailbox while a receiver is holding a lock to scan through the messages arrived so far), and that optimization is handled for you under the hood by the Erlang VM.
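The scheme described above can be sketched in a few lines of Python (a toy model with invented names, not how the Erlang VM actually implements mailboxes):

```python
import threading
from collections import deque

class Mailbox:
    """Toy actor mailbox: the lock is held only while enqueuing or
    checking, never while waiting, so senders cannot deadlock on a
    sleeping receiver."""
    def __init__(self):
        self._lock = threading.Lock()
        self._ready = threading.Condition(self._lock)
        self._messages = deque()

    def send(self, msg):
        # Sender takes the lock just long enough to append, then moves on.
        with self._ready:
            self._messages.append(msg)
            self._ready.notify()

    def receive(self):
        # wait() releases the lock while the receiver sleeps, and the
        # receiver is woken when a sender notifies the condition.
        with self._ready:
            while not self._messages:
                self._ready.wait()
            return self._messages.popleft()

box = Mailbox()
threading.Thread(target=lambda: box.send("hello")).start()
print(box.receive())
```

The key point is that the lock is held only inside send() and around the brief check in receive(); while idle, the receiver sleeps inside wait(), which releases the lock, so senders are never blocked on an idle receiver.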

Implementation of message passing?

What is the best way to implement message passing between two gen_servers? For example, take the following system.
As per the image, I have multiple session servers that will query the database for a list of players (player records are added to the DB as they join) that best fit what that session needs, e.g. player location, level, win ratio, etc.
The session will poll all the players that the DB returns and take the first replies until it has reached its maximum number of players; the session will then hand off the player records it has accumulated to a server, and the job is done.
What I'm asking is: what is the best way to handle the message passing with the least chance of error? What I am thinking so far is:
1. Session passes messages to all players.
2. Players receive the messages and return ok.
3. Session receives each player's ok, adds the player to its list of players, and returns a message so the player can stop receiving messages and change its state to "in session".
4. If a player gets no reply from the session, it checks its other received messages and replies to one of those (the session is probably full or has died, so continue on).
Is this the best implementation for this section?
Would cast be the best way, as I am not worried about getting a specific result for either player or session?
Even if my architecture is completely off, any criticism or change is welcome, as I am new to Erlang and OTP.
Also, "session" is probably a poor choice of name; I was thinking of changing it to "game instance".
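For what it's worth, the four steps above can be sketched outside of Erlang as well. Here is a rough Python model using plain queues in place of gen_servers (all names and timeouts are invented for illustration):

```python
import queue
import threading

def session(player_inboxes, session_inbox, max_players):
    """Step 1: invite every candidate, then collect the first replies
    until the roster is full (or replies stop arriving)."""
    for inbox in player_inboxes:
        inbox.put(("invite", session_inbox))
    roster = []
    while len(roster) < max_players:
        try:
            who, reply_to = session_inbox.get(timeout=1.0)
        except queue.Empty:
            break                      # no more replies coming
        roster.append(who)
        reply_to.put("joined")         # step 3: confirm the player
    return roster

def player(name, inbox):
    """Step 2: receive the invite and reply ok; step 4: if no
    confirmation arrives, assume the session is full or dead."""
    _msg, session_inbox = inbox.get()
    my_reply = queue.Queue()
    session_inbox.put((name, my_reply))
    try:
        my_reply.get(timeout=1.0)
        return "in_session"
    except queue.Empty:
        return "keep_looking"
```

In gen_server terms, the invite and the ok reply map naturally onto casts (fire-and-forget), with the timeout standing in for the player giving up when the session never answers.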

When a StreamSubscription is paused; are events buffered or dropped?

The StreamSubscription class has a pause() method. The docs don't indicate whether events are buffered while a stream is paused (and then all fired once resumed), or dropped; which is it?
A StreamSubscription is always expected to buffer events while it is paused.
It may pass the pause state on to its source to avoid being swamped, but even if it can't, it will buffer data until it runs out of memory.
For a broadcast stream, where events are typically not part of a greater whole, you might not want the events. In that case you can cancel the subscription and create a new one when you need events again. Broadcast streams should generally allow resubscribing after a cancel, but some may have been set up in such a way that it isn't possible, e.g., by dropping its resources after the last client cancels.
For a single subscription stream, where events are often a sequence of chunks of a bigger thing, dropping events should probably never happen.
The docs also include this text:
Currently DOM streams silently drop events when the stream is paused. This is a bug and will be fixed.
This suggests that the intention is for events to be buffered and then released once you unpause. If you do not wish to receive events during this period, you are best off cancelling and resubscribing.
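The buffer-then-flush behaviour described in this answer can be modelled in a few lines (a Python sketch of the semantics only, not Dart's actual StreamSubscription implementation):

```python
class BufferedSubscription:
    """Toy model of pause/resume semantics: events arriving while
    paused are buffered and delivered on resume, not dropped."""
    def __init__(self, on_data):
        self.on_data = on_data
        self.paused = False
        self.buffer = []

    def add(self, event):
        # Called by the stream source for each event.
        if self.paused:
            self.buffer.append(event)   # buffered, not dropped
        else:
            self.on_data(event)

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False
        pending, self.buffer = self.buffer, []
        for event in pending:           # flush everything buffered while paused
            self.on_data(event)
```

Cancelling and resubscribing, by contrast, corresponds to discarding the subscription object (and its buffer) entirely and creating a fresh one.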

A data buffer to subscribe and unsubscribe with real-time data using RabbitMQ

Basically I want to create a data buffer that a client could occasionally subscribe to, get all data from the recent past, keep listening on it for real-time data, then unsubscribe after some time, and repeat.
I'm thinking of using a RabbitMQ queue with a TTL so that it expires. The idea is for a client to occasionally subscribe to and unsubscribe from it. When the client subscribes to the queue, it should fetch all available messages on the queue. Then the client would stay on the channel to have real-time data pushed to it.
Is this a good way to go about this? I know how to pub/sub on RabbitMQ, but how do I make it push all the data on the queue every time a client subscribes?
It depends on how much data you are talking about. The drawback of your method is that the queue could fill up with a large amount of data if the data rate is high and the TTL is set to a long time. You also have to keep the queue alive, and you must have one queue alive from the start for every possible subscriber.
I would suggest the Recent History Exchange, perhaps modifying it so that it holds more messages.
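For illustration, the core idea of the Recent History Exchange (replay the last N messages to each new subscriber, then deliver live) looks roughly like this in-process sketch (Python, invented names; the real thing is RabbitMQ's recent-history exchange plugin, not application code):

```python
from collections import deque

class RecentHistoryBuffer:
    """Toy version of the recent-history idea: keep the last N messages
    and replay them to each new subscriber before live delivery."""
    def __init__(self, history_size=20):
        self.history = deque(maxlen=history_size)
        self.subscribers = []

    def publish(self, msg):
        self.history.append(msg)           # oldest messages fall off the end
        for deliver in self.subscribers:
            deliver(msg)

    def subscribe(self, deliver):
        for msg in self.history:           # replay recent history first
            deliver(msg)
        self.subscribers.append(deliver)   # then receive live messages
```

Unlike the TTL-queue approach, no per-subscriber queue needs to exist in advance: the history lives with the exchange, and each new subscriber gets it on bind.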

Amazon SQS End of Queue Detection

I was wondering if there was a best practice for detecting the end of an SQS queue. I am spawning a bunch of generic workers to consume data from a queue, and I want to notify them that they can stop processing once they detect that there are no more messages in the queue. Does SQS provide this type of feature?
By looking at the right_aws Ruby gem source code for SQS, I found that there is an ApproximateNumberOfMessages attribute on a queue, which you can request using a standard API call.
You can find more information including examples here:
http://docs.amazonwebservices.com/AWSSimpleQueueService/latest/APIReference/Query_QueryGetQueueAttributes.html
For more information on how to do this using the right_aws gem in ruby look at:
https://github.com/rightscale/right_aws/blob/master/lib/sqs/right_sqs_gen2_interface.rb#L187
https://github.com/rightscale/right_aws/blob/master/lib/sqs/right_sqs_gen2_interface.rb#L389
Do you mean "is there a way for the producer to notify consumers that it has finished sending messages?" If so, then no, there isn't. If a consumer calls ReceiveMessage and gets nothing back, or ApproximateNumberOfMessages returns zero, that's no guarantee that no more messages will be sent, or even that there are no messages in flight. And the producer can't send any kind of "end of stream" message, because only one consumer would receive it, and it might arrive out of order. Even if you used a separate notification mechanism such as an SNS topic to notify all consumers, there's no guarantee that the SNS notification won't arrive before all the messages have been delivered.
But if you just want your pool of workers to back off when there are no messages left in the queue, then consider setting the "ReceiveMessageWaitTimeSeconds" property on your queue to its maximum value of 20 seconds. When there are no more messages to process, a ReceiveMessage call will block for up to 20s to see if a message arrives instead of returning immediately.
You could have whatever is managing your thread pool query ApproximateNumberOfMessages regularly to scale your thread pool up or down if you're concerned about releasing resources. If you do, beware that the number you get back is approximate: you should always assume there may be one or more messages left on the queue even if ApproximateNumberOfMessages returns zero.
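The back-off advice above can be captured in a generic worker loop. Here is a Python sketch where receive() stands in for a long-polling call such as boto3's sqs.receive_message(..., WaitTimeSeconds=20); the stop-after-several-empty-polls threshold is an invented heuristic, not an SQS feature:

```python
def drain(receive, handle, max_empty_polls=3):
    """Worker loop for a long-polling queue: keep calling receive()
    (which blocks up to its wait time and returns a possibly empty
    batch) and stop only after several consecutive empty polls, since
    one empty batch does not guarantee the queue is truly drained."""
    empty_polls = 0
    while empty_polls < max_empty_polls:
        batch = receive()          # e.g. sqs.receive_message(..., WaitTimeSeconds=20)
        if not batch:
            empty_polls += 1       # could still be in-flight messages
            continue
        empty_polls = 0            # reset on any successful receive
        for msg in batch:
            handle(msg)
```

Because ApproximateNumberOfMessages is only approximate, requiring several empty long polls in a row before stopping is a cheap way to reduce the chance of abandoning in-flight messages.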
