Does TIBCO support "multicast"?
I guess another term for it is "worker queues" (as seen in the RabbitMQ link below).
See: http://www.rabbitmq.com/tutorials/tutorial-two-dotnet.html
I call them "fighters", as in, several processes can be wired to one queue, and when a message arrives in the queue, ONE of the several processes will get the message,but not all of them.
In EMS and most JMS-based messaging systems (supporting queues and topics), this is ALREADY the default behavior.
I would not call that "multicast" or "worker queues", but simply "load sharing" or "load balancing". ActiveMQ calls it "clustering" (I don't like the term, but the diagram is neat).
The official name for the pattern is "Competing Consumers" (an Enterprise Integration Pattern).
Whatever you call it, it's super easy to do in EMS. By default, queues accept multiple clients for reading (you can change this and make them exclusive; see the user doc). When a queue is read by two or more consumers and a message is sent to the queue, the message goes to exactly ONE of the consumers. Hence your expected behavior.
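For illustration, here is a minimal competing-consumers sketch in plain JMS. The EMS-specific factory class, URL, credentials, and queue name are assumptions to adapt to your setup; with another JMS provider you would obtain the ConnectionFactory from JNDI instead.

import javax.jms.*;

public class CompetingConsumers {
    public static void main(String[] args) throws JMSException {
        // Assumed EMS client class and server URL.
        ConnectionFactory factory =
                new com.tibco.tibjms.TibjmsConnectionFactory("tcp://emshost:7222");
        Connection connection = factory.createConnection("user", "password");

        // Two sessions, each with its own consumer on the SAME queue.
        for (int i = 0; i < 2; i++) {
            final int id = i;
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("work.queue"));
            // Each message is delivered to exactly ONE of the two consumers.
            consumer.setMessageListener(msg ->
                    System.out.println("consumer " + id + " received " + msg));
        }
        connection.start(); // begin delivery; the broker load-shares the queue
    }
}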
Please refer to the same document for another chapter (14, page 411) on "Multicast" with EMS. This is different: it is ACTUAL NETWORK-BASED multicast, meant to help lower network traffic when a topic publishes to a massive number of subscribers.
FYI, EMS is only one of three messaging solutions from TIBCO. The other two are Rendezvous (older, UDP-based) and FTL (newer, a low-latency solution).
I am using RabbitMQ to communicate between microservices written in Ruby on Rails. Each service subscribes to a topic. All services are scaled and run as multiple instances based on need.
On subscription, Bunny moves all the messages from the queue into the unacked state. This leaves the other scaled instances idle, since no messages remain in the ready state.
Is there a way to limit the number of messages a subscription can fetch, so that the other instances can take the remaining messages from the queue?
Based on the information you made available, I'm assuming you're using rubybunny. If this assumption is incorrect (there are other Ruby clients available for RabbitMQ), let me know and/or check the documentation related to your client.
Back to rubybunny: the link provided points to the necessary information. Quoting it:
For cases when multiple consumers share a queue, it is useful to be able to specify how many messages each consumer can be sent at once before sending the next acknowledgement. In AMQP 0.9.1 parlance this is known as QoS or message prefetching. Prefetching is configured on a per-channel basis.
To configure prefetching use the Bunny::Channel#prefetch method like so:
ch1 = connection1.create_channel # channel from an open Bunny connection
ch1.prefetch(10)                 # at most 10 unacknowledged messages in flight per consumer
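For comparison, the same per-channel QoS with the RabbitMQ Java client (5.x API) looks roughly like this; the host and queue name are placeholder assumptions. Note that prefetch only matters with manual acknowledgements: each ack frees a slot for the next delivery.

import com.rabbitmq.client.*;

public class PrefetchDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        ch.basicQos(10); // per-channel prefetch: at most 10 unacked messages in flight
        ch.queueDeclare("tasks", true, false, false, null); // assumed queue name

        // Manual acks (autoAck=false) are required for prefetch to take effect.
        ch.basicConsume("tasks", false,
                (consumerTag, delivery) -> {
                    // ... process delivery.getBody() ...
                    ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                },
                consumerTag -> { });
    }
}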
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-integrations.html says:
"ActorPublisher and ActorSubscriber cannot be used with remote actors, because if signals of the Reactive Streams protocol (e.g. request) are lost the the stream may deadlock."
Does this mean Akka Streams is not location-transparent? How do I use Akka Streams to design a backpressure-aware client-server system where the client and server are on different machines?
I must have misunderstood something. Thanks for any clarification.
They are strictly a local facility at this time.
You can connect it to a TCP sink/source, though, and it will apply back-pressure over TCP as well (that's what Akka HTTP does).
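For instance, here is a minimal echo-server sketch with the akka-stream javadsl. Signatures vary across Akka versions; this follows the newer 2.6-style API, where the ActorSystem itself acts as materializer, and the host/port are placeholders.

import akka.actor.ActorSystem;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Tcp;
import akka.util.ByteString;

public class TcpEchoServer {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("tcp-echo");
        // Each incoming connection is handled by a Flow<ByteString, ByteString, ?>;
        // here it simply echoes. TCP's flow-control window carries the stream's
        // back-pressure across the network, so a slow client slows this stage down.
        Tcp.get(system)
                .bind("127.0.0.1", 8888)
                .runForeach(conn ->
                        conn.handleWith(Flow.of(ByteString.class), system), system);
    }
}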
How do I use Akka Streams to design a backpressure-aware client-server system where the client and server are on different machines?
Check out streams in Artery (Dec. 2016, so 18 months later):
The new remoting implementation for actor messages was released in Akka 2.4.11 two months ago.
Artery is the code name for it. It’s a drop-in replacement to the old remoting in many cases, but the implementation is completely new and it comes with many important improvements.
(Remoting enables Actor systems on different hosts or JVMs to communicate with each other)
Regarding back-pressure, this is not a complete solution, but it can help:
What about back-pressure? Akka Streams is all about back-pressure but actor messaging is fire-and-forget without any back-pressure. How is that handled in this design?
We can’t magically add back-pressure to actor messaging. That must still be handled on the application level using techniques for message flow control, such as acknowledgments, work-pulling, throttling.
When a message is sent to a remote destination it’s added to a queue that the first stage, called SendQueue, is processing. This queue is bounded and if it overflows the messages will be dropped, which is in line with the actor messaging at-most-once delivery nature. Large amount of messages should not be sent without application level flow control. For example, if serialization of messages is slow and can’t keep up with the send rate this queue will overflow.
Aeron will propagate back-pressure from the receiving node to the sending node, i.e. the AeronSink in the outbound stream will not progress if the AeronSource at the other end is slower and the buffers have been filled up.
If messages are sent at a higher rate than what can be consumed by the receiving node the SendQueue will overflow and messages will be dropped. Aeron itself has large buffers to be able to handle bursts of messages.
The same thing will happen in the case of a network partition. When the Aeron buffers are full messages will be dropped by the SendQueue.
In the inbound stream the messages are in the end dispatched to the recipient actor. That is an ordinary actor tell that will enqueue the message in the actor’s mailbox. That is where the back-pressure ends on the receiving side. If the actor is slower than the incoming message rate the mailbox will fill up as usual.
Bottom line, flow control for actor messages must be implemented at the application level. Artery does not change that fact.
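To make one of those application-level techniques, work-pulling, concrete: below is a hedged sketch with classic Akka actors in Java. All class and message names are illustrative; this is not an Artery API, just the general shape of the pattern.

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;

import java.util.ArrayDeque;
import java.util.Queue;

// The worker PULLS one job at a time, so its mailbox can never be flooded.
class Worker extends AbstractActor {
    static final class GiveMeWork {}

    @Override
    public void preStart() {
        getContext().getParent().tell(new GiveMeWork(), getSelf());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, job -> {
                    // ... process the job ...
                    getSender().tell(new GiveMeWork(), getSelf()); // request the next job
                })
                .build();
    }
}

// The master hands out work only on request and buffers the rest.
class Master extends AbstractActor {
    private final Queue<String> pending = new ArrayDeque<>();
    private ActorRef idleWorker;

    @Override
    public void preStart() {
        getContext().actorOf(Props.create(Worker.class, Worker::new), "worker");
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, job -> {            // new job arriving from outside
                    if (idleWorker != null) {
                        idleWorker.tell(job, getSelf());
                        idleWorker = null;
                    } else {
                        pending.add(job);                // bound this in a real system
                    }
                })
                .match(Worker.GiveMeWork.class, req -> {
                    if (pending.isEmpty()) {
                        idleWorker = getSender();        // remember who to feed next
                    } else {
                        getSender().tell(pending.poll(), getSelf());
                    }
                })
                .build();
    }
}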
We are implementing (or rather, reimplementing) a distributed software system. We have different processes (possibly running on different computers) that should communicate with each other (let's call them clients). We don't want them to communicate with each other directly, but instead through some kind of message broker.
Since we would like to avoid implementing the message broker ourselves, we want to use an existing implementation. But we haven't found a protocol or system that fully fulfills our requirements.
MQTT with its publish-subscribe-mechanism seems nice and could even be used for point-to-point communication (where some specific topics are only subscribed by certain clients).
But it is (like JMS, STOMP, etc.) asynchronous. The sender sends a message to the broker and doesn't know whether it is ever delivered to its recipient. We want the sender to be informed about a successful delivery or an elapsed timeout (when no one is receiving the message).
Is there some protocol/implementation available that provides such synchronous messaging functionality?
(It would be nice, however, if asynchronous delivery were possible too.)
Messaging is (usually) asynchronous by default.
You can consider RabbitMQ; it has the following features:
Publisher confirms (asynchronous):
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
Transaction commit:
https://www.rabbitmq.com/semantics.html
Message TTL (to handle timeouts):
https://www.rabbitmq.com/ttl.html
With these features you can handle time-out situations and confirm successful delivery.
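As a minimal publisher-confirms sketch with the RabbitMQ Java client (5.x API; the host, queue name, and timeout below are assumptions): the sender blocks until the broker confirms the message or a timeout elapses, which is exactly the success/time-out signal asked about above.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class ConfirmedSend {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            ch.confirmSelect();       // enable publisher confirms on this channel
            ch.queueDeclare("jobs", true, false, false, null); // assumed queue
            ch.basicPublish("", "jobs", null,
                    "hello".getBytes(StandardCharsets.UTF_8));
            // Blocks until the broker confirms, or throws on nack/timeout.
            ch.waitForConfirmsOrDie(5_000);
        }
    }
}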
If this is not enough, you can use RPC:
https://www.rabbitmq.com/tutorials/tutorial-six-java.html
Let me know if you need more information.
I read in a forum that when implementing an application with AMQP, it is necessary to use few queues. So would I be completely wrong to assume that if I were cloning Twitter, I would have a unique, durable queue for each user signing up? It just seems the most natural approach, and if I shouldn't assign a unique queue to each user, how would one design something like that?
What is the most common approach for web messaging? I see RabbitHub and Rabbit WebHooks, but webhooks don't seem to be a scalable solution. I am working with Rails, and my AMQP server is running as a daemon.
In RabbitMQ, queues are quite cheap. They're effectively lightweight Erlang processes, and you can run tens to hundreds of thousands of queues on a single commodity machine (i.e. my laptop). Of course, each will consume a bit of RAM, but queues that haven't been used recently will hibernate, so they'll consume as little memory as possible. In addition, if Rabbit runs low on memory for messages, it will page old messages to disk.
The above only applies to a single machine. RabbitMQ supports a form of lightweight clustering. When you join several Rabbit nodes into a cluster, each can see the queues and exchanges on the other nodes but each runs only its own queues. So, you'll be able to have even more queues! (to the limit of Erlang clusters, which is usually a few hundred nodes) So, a cluster forms a logical broker distributed over several machines; clients connect to it and use it transparently through any of the nodes.
That said, having a single durable queue for each user seems a bit strange: in AMQP, you cannot browse messages while they're on the queue; you may only get/consume messages, which takes them off the queue, and publish, which adds them to the end of the queue. So, you can use AMQP as a message router, but you can't use it as a sort of message database.
Here is a thread that just talks about that: http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2009-February/003041.html
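As a minimal illustration of that point with the RabbitMQ Java client (the per-user queue name is hypothetical): the only way to look at a message is to take it off the queue.

import com.rabbitmq.client.*;

import java.nio.charset.StandardCharsets;

public class NoBrowse {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            ch.queueDeclare("user.123", true, false, false, null); // hypothetical per-user queue
            // basicGet with autoAck=true REMOVES the message from the queue:
            // AMQP acts as a router, not a browsable message database.
            GetResponse r = ch.basicGet("user.123", true);
            System.out.println(r == null
                    ? "queue empty"
                    : new String(r.getBody(), StandardCharsets.UTF_8));
        }
    }
}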
Based on this answer here, I need to put emails in a queue and have a background task run and send them. How do I do this with an architecture consisting of ASP.NET MVC and WCF?
How do I build a queue (SQL Server)?
How do I build a background task?
You can skin this cat many different ways. The key is that the actual sending of the emails is asynchronous to the queuing of the email.
Queue messages via a WCF service using the MSMQ binding, per this series of blog posts (which assumes IIS 7): MSMQ, WCF, and IIS: Getting Them to Play Nice.
Queue messages to MSMQ. MSMQ is a nice (sometimes underutilized) queue service built into Windows. You'll write a Windows service to receive messages from this queue. If you have IIS 7, check out Death to Windows Services, Long Live AppFabric. MSMQ is a breeze, but it has some quirky constraints (a 4 MB message size limit, and availability).
Queue messages to a 'SQL queue'. Create a table to hold basic queued-message information, then stored procedures to wrap the queue semantics (e.g. you don't want multiple consumers to receive the same message; see the sketch after this list). Not difficult, but a little time-consuming to get right.
Queue messages to Service Broker (or even MSMQ) and write a Windows service that receives messages from the Service Broker queue. Service Broker handles the queueing semantics (competing consumers) for you. The downside is that it's a pain in the ass to administer.
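For the 'SQL queue' option, here is a rough sketch of the dequeue, assuming SQL Server and a hypothetical EmailQueue(Id, Body) table, shown via JDBC (the same statement would live inside your stored procedure). READPAST skips rows locked by other consumers, so two workers never pop the same message.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SqlQueue {
    // Pop one message atomically; returns null when the queue is empty.
    public static String dequeue(Connection conn) throws SQLException {
        String sql = "DELETE TOP (1) FROM dbo.EmailQueue "
                   + "WITH (ROWLOCK, READPAST) "
                   + "OUTPUT deleted.Body";
        try (Statement st = conn.createStatement()) {
            if (st.execute(sql)) {                 // the OUTPUT clause yields a result set
                try (ResultSet rs = st.getResultSet()) {
                    if (rs.next()) return rs.getString(1);
                }
            }
            return null;
        }
    }
}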
HTH,
Z
I think your solution is independent of the fact that you're using MVC.
The way I've implemented this in the past is to persist the fact that an e-mail needs to be sent into the database, and then process it with a Windows service.
Another way to do this would be to utilize MSMQ as your storage medium. In general, MSMQ shouldn't be used to "store" data, only as a message transport mechanism, but it's certainly an option in this case.
In terms of developing a "queue", if the e-mails need ordered delivery for some reason, simply having a "RequestedDTTM" column in your database table would allow you to send them in the order they were requested.
Lastly, I would consider implementing a simple multi-threaded e-mail sender to maximize performance. Using the TPL in .NET 4.0 would make this pretty easy. Alternatively, you could use something like the SmartThreadPool library (available at codeplex.com) to manage your e-mail sender threads.
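Those suggestions are .NET-specific; purely as a language-neutral illustration of the pooled-sender shape, here is a hedged sketch in Java (the deliver method is a placeholder for whatever SMTP client you use):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelSender {
    public static void sendAll(List<String> queuedEmails) {
        ExecutorService pool = Executors.newFixedThreadPool(4); // tune to your SMTP limits
        for (String email : queuedEmails) {
            pool.submit(() -> deliver(email)); // each message is sent on a pool thread
        }
        pool.shutdown(); // stop accepting work; in-flight sends still complete
    }

    private static void deliver(String email) {
        // ... placeholder: hand the message to your SMTP client here ...
    }
}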
As was mentioned in the other answer you linked to, your UI shouldn't be doing this e-mail sending.