Message broker with synchronous delivery of messages

We are implementing (or rather reimplementing) a distributed software system. What we have are different processes (possibly running on different computers) that should communicate with each other (let's call these clients). We don't want them to communicate with each other directly, but instead go through some kind of message broker.
Since we would like to avoid implementing the message broker ourselves, we want to use an existing implementation. But we can't find a protocol or system that fully fulfils our requirements.
MQTT with its publish-subscribe mechanism seems nice and could even be used for point-to-point communication (where certain topics are subscribed to only by specific clients).
But it is (like JMS, STOMP, etc.) asynchronous: the sender sends a message to the broker and doesn't know whether it is ever delivered to its recipient. We want the sender to be informed about a successful delivery or an elapsed timeout (when no one receives the message).
Is there some protocol/implementation available that provides such synchronous messaging functionality?
(It would be nice, however, if asynchronous delivery were possible too.)

Messaging is by default (usually) asynchronous.
You can consider RabbitMQ; it offers the following features:
Publisher confirms (asynchronous):
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
Transaction Commit:
https://www.rabbitmq.com/semantics.html
Message TTL (to handle timeouts):
https://www.rabbitmq.com/ttl.html
With these features you can handle timeout situations and confirm successful delivery.
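For example, with publisher confirms the sender can block until the broker acknowledges the message or a timeout elapses. A minimal sketch using the RabbitMQ Java client, assuming a broker on localhost, a queue named "tasks" and a 5-second wait:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConfirmedPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.confirmSelect();                       // enable publisher confirms on this channel
            channel.queueDeclare("tasks", true, false, false, null);
            channel.basicPublish("", "tasks", null, "hello".getBytes("UTF-8"));
            // Block until the broker confirms the publish; throws TimeoutException after 5 s.
            if (channel.waitForConfirms(5000)) {
                System.out.println("Broker accepted the message");
            } else {
                System.out.println("Broker rejected (nacked) the message");
            }
        }
    }
}

Note that a confirm only tells you the broker has taken responsibility for the message; to know that a consumer actually processed it you need the RPC (request/reply) pattern linked below.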
If this is not enough, you can use RPC:
https://www.rabbitmq.com/tutorials/tutorial-six-java.html
Let me know if you need more information.

Related

Can MQTT (such as Mosquitto) be used so that a published topic is picked up by one, and only one, of the subscribers?

I have a system that relies on a message bus and broker to spread messages and tasks from producers to workers.
It benefits from being able to do true pub/sub-style communication for the messages.
However, it also needs to communicate tasks. These should be done by a worker and reported back to the broker when/if the worker is finished with the task.
Can MQTT be used to publish this task by a producer, so that it is picked up by a single worker?
In my mind the producer would publish the task with a topic "TASK_FOR_USER_A", and there are X workers subscribed to that topic.
The MQTT broker would then determine that it is a task and send it selectively to one of the workers.
Can this be done or is it outside the scope of MQTT brokers such as Mosquitto?
MQTT v5 has an optional extension called Shared Subscriptions, which delivers messages to a group of subscribers in a round-robin fashion, so each message is delivered to only one member of the group.
Mosquitto v1.6.x has implemented MQTT v5 and the shared subscription capability.
It's not clear what you mean by 1 message at a time. Messages will be delivered as they arrive and the broker will not wait for one subscriber to finish working on a message before delivering the next message to the next subscriber in the group.
If you have low-level enough control over the client, you can withhold the high-QoS acknowledgements so that the client never acknowledges the message and the broker only allows one message to be in flight at a time, which effectively throttles message delivery. You should only do this if message processing is very quick, otherwise the broker may decide delivery has failed and attempt to deliver the message to another client in the shared group.
Normally the broker will not do any routing above and beyond that based on the topic. As mentioned in a comment on this answer, Flespi has implemented "sticky sessions" so that messages from a specific publisher will be delivered to the same client in the shared subscription pool, but this is a custom add-on and not part of the spec.
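As a rough illustration, joining such a shared subscription group with the Eclipse Paho Java client could look roughly like this; the group name "workers", the broker address and QoS 1 are assumptions, and the broker must actually support shared subscriptions (MQTT v5, or a broker that also exposes the $share prefix to older clients):

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SharedWorker {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId());
        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) { }
            public void messageArrived(String topic, MqttMessage message) {
                // Each message published to TASK_FOR_USER_A is delivered to exactly
                // one member of the "workers" group, round robin across the workers.
                System.out.println("Got task: " + new String(message.getPayload()));
            }
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });
        client.connect();
        // $share/<group>/<topic> is the shared-subscription filter form.
        client.subscribe("$share/workers/TASK_FOR_USER_A", 1);
    }
}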
What you're looking for is a message broker for a producer/consumer scenario. MQTT is a lightweight messaging protocol based on the pub/sub model. If you start using any MQTT broker for this, you might face issues depending on your use case. A few issues to list:
You need ordering of the messages (the consumer must get messages in the same order the producer published them). While QoS 2 guarantees message order without shared subscriptions, shared subscriptions don't provide ordered delivery guarantees.
The consumer gets the message but fails before processing it, while the MQTT broker has already acknowledged delivery. In this case, the consumer has to specifically handle reprocessing of failed messages.
If you go with a single topic with multiple subscribers, your consumers must be idempotent.
I would suggest going with a message broker suited for this purpose, e.g. Kafka or RabbitMQ, to name a few.
As far as I know, MQTT is not meant for this purpose. It has no built-in mechanism to distribute tasks across workers (consumers). On the other hand, AMQP can be used here. One hack would be to configure the workers to accept only a particular type of task, but that requires the producers to send the task type as well, and you won't be able to scale well in that case.
It's better to explore other protocols for this type of use case.

Mosquitto: fire only one subscriber for each topic

I set up an MQTT message broker using Mosquitto on my network. I have one web app publishing things to the broker and several servers subscribed to the same topic, so I have a redundancy scenario.
My question is: using Mosquitto alone, is there any way to configure it to deliver data only to the first subscriber? Otherwise, all of them will do the same thing.
I don't think that is possible.
But you can do this.
Have the first subscriber respond with an ack on the channel as soon as it gets the message, and have the redundant subscriber look for that ack for a short time after the initial message.
If the ack is received, the redundant subscriber should not do anything.
So if the first subscriber gets and uses the message, the others won't do anything even if they receive the message.
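A rough sketch of the redundant (standby) subscriber with the Eclipse Paho Java client, purely to illustrate the idea; the topics "data/work" and "data/ack" and the 2-second grace period are invented for this example:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class StandbySubscriber {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        ConcurrentHashMap<String, Boolean> acked = new ConcurrentHashMap<>();

        MqttClient client = new MqttClient("tcp://localhost:1883", "standby");
        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) { }
            public void messageArrived(String topic, MqttMessage message) {
                String id = new String(message.getPayload());  // payload carries a message id
                if (topic.equals("data/ack")) {
                    acked.put(id, Boolean.TRUE);               // primary has handled it
                } else {
                    // Wait a grace period; act only if the primary never acked.
                    scheduler.schedule(() -> {
                        if (!acked.containsKey(id)) {
                            System.out.println("Standby processing " + id);
                        }
                    }, 2, TimeUnit.SECONDS);
                }
            }
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });
        client.connect();
        client.subscribe(new String[] { "data/work", "data/ack" });
    }
}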
No, this is not possible with Mosquitto at the moment (without communication between the two subscribers, as described in the other answer).
For the new release of the MQTT spec (v5)* there is a new mode called "Shared Subscriptions". This allows multiple clients to subscribe to a single topic, and messages will be delivered round-robin to each client. This is more for load balancing than master/slave failover.
*There are some brokers (HiveMQ, IBM MessageSight) that already support some version of Shared Subscriptions at MQTT v3.1.1, but they implement it in slightly different ways (different topic prefixes), so they are not cross-compatible.

Synchronous MQTT communication using Paho client

I have a scenario where a mobile app calls a REST API hosted by my application. Within this process, I need to send a message to a downstream system over MQTT and wait until I get the response for that message, and then reply back to the mobile app.
The challenge here is that messaging over MQTT is asynchronous, so the message I receive back arrives on a different thread (some listener class, listening in messageArrived()). How do I get back to the calling HTTP thread?
Does the Paho library support synchronous communication? Something like: I send a message, open some topic and wait on it until a message is received or a timeout elapses?
MQTT by its very nature is asynchronous, as are all pub/sub implementations. There is no concept of a reply to a message at the protocol level; you have no way of knowing if you will EVER get a response (or you may get many) to a published message, as you can't know whether there is even a subscriber to the topic you publish on.
It is possible to build a system that will work this way, but you need to maintain a state machine of all in-flight requests, implement a sensible timeout policy and work out what to do if you get more than one response.
You have not mentioned which of the different Paho libraries you are using, but I'm guessing Java from the method names. Without knowing what HTTP framework you are using, and a host of other factors, I'm not going to suggest a full solution, especially as it will involve a lot of polling and synchronisation.
Is there any reason why the mobile application can't publish and subscribe to MQTT topics directly? This would remove the need for this.
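That said, a bare-bones sketch of the pattern (assuming the Java Paho client): publish the request, subscribe to a per-request reply topic, and block the calling HTTP thread on a latch until the reply arrives or a timeout elapses. The topic names, the correlation-id convention and the 10-second timeout are assumptions, and only a single in-flight request is handled:

import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttRequestReply {
    // Blocks the caller until a reply arrives on the per-request topic or the timeout passes.
    public static String request(MqttClient client, String payload) throws Exception {
        String correlationId = UUID.randomUUID().toString();
        String replyTopic = "replies/" + correlationId;
        CountDownLatch latch = new CountDownLatch(1);
        AtomicReference<String> reply = new AtomicReference<>();

        // Simplification: this replaces any existing callback on the client.
        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) { }
            public void messageArrived(String topic, MqttMessage message) {
                reply.set(new String(message.getPayload()));
                latch.countDown();                         // wake up the waiting HTTP thread
            }
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });

        client.subscribe(replyTopic, 1);
        // The downstream system is expected to publish its answer on replyTopic.
        client.publish("requests", (correlationId + ":" + payload).getBytes(), 1, false);

        if (!latch.await(10, TimeUnit.SECONDS)) {
            client.unsubscribe(replyTopic);
            throw new RuntimeException("No reply within timeout");
        }
        client.unsubscribe(replyTopic);
        return reply.get();
    }
}

With more than one concurrent request you would need the map of in-flight requests mentioned above, keyed by correlation id.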

RabbitMQ subscription: limit the number of messages to prefetch

I am using RabbitMQ to communicate between microservices written in Ruby on Rails. Each service subscribes to a topic. All services are scaled and run as multiple instances based on need.
During subscription, Bunny moves all the messages from the queue into the unacked state. This leaves the other scaled instances idle, since there are no messages in the ready state.
Is there a way to limit the number of messages a subscription can fetch, so that other instances can take the remaining messages from the queue?
Based on the information you made available, I'm assuming you're using rubybunny. If this assumption is incorrect (there are other Ruby clients available for RabbitMQ), let me know and/or check the documentation for your client.
Back to rubybunny: the link provided points to the necessary information, quoting it:
For cases when multiple consumers share a queue, it is useful to be
able to specify how many messages each consumer can be sent at once
before sending the next acknowledgement.
In AMQP 0.9.1 parlance this is known as QoS or message prefetching.
Prefetching is configured on a per-channel basis.
To configure prefetching use the Bunny::Channel#prefetch method like so:
ch1 = connection1.create_channel
ch1.prefetch(10) # deliver at most 10 unacknowledged messages to this channel at a time

akka stream ActorSubscriber does not work with remote actors

http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-integrations.html says:
"ActorPublisher and ActorSubscriber cannot be used with remote actors, because if signals of the Reactive Streams protocol (e.g. request) are lost the the stream may deadlock."
Does this mean akka stream is not location transparent? How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
I must have misunderstood something. Thanks for any clarification.
They are strictly a local facility at this time.
You can connect it to a TCP sink/source though, and it will apply back-pressure via TCP as well (that's what Akka HTTP does).
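To sketch that idea with the Akka Streams Java DSL (the host, port and the upper-casing handler are invented for illustration): a TCP server whose per-connection stream back-pressures remote senders through TCP flow control:

import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Tcp;
import akka.util.ByteString;

public class TcpBackpressureServer {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("server");
        Materializer mat = ActorMaterializer.create(system);

        // Per-connection handler: if this flow is slow, TCP flow control
        // back-pressures the remote sender instead of buffering without bound.
        Flow<ByteString, ByteString, NotUsed> handler =
            Flow.of(ByteString.class)
                .map(bytes -> ByteString.fromString(bytes.utf8String().toUpperCase()));

        Tcp.get(system)
           .bind("127.0.0.1", 8888)
           .runForeach(connection -> connection.handleWith(handler, mat), mat);
    }
}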
How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
Check out streams in Artery (Dec. 2016, so 18 months later):
The new remoting implementation for actor messages was released in Akka 2.4.11 two months ago.
Artery is the code name for it. It’s a drop-in replacement to the old remoting in many cases, but the implementation is completely new and it comes with many important improvements.
(Remoting enables Actor systems on different hosts or JVMs to communicate with each other)
Regarding back-pressure, this is not a complete solution, but it can help:
What about back-pressure? Akka Streams is all about back-pressure but actor messaging is fire-and-forget without any back-pressure. How is that handled in this design?
We can’t magically add back-pressure to actor messaging. That must still be handled on the application level using techniques for message flow control, such as acknowledgments, work-pulling, throttling.
When a message is sent to a remote destination it’s added to a queue that the first stage, called SendQueue, is processing. This queue is bounded and if it overflows the messages will be dropped, which is in line with the actor messaging at-most-once delivery nature. Large amount of messages should not be sent without application level flow control. For example, if serialization of messages is slow and can’t keep up with the send rate this queue will overflow.
Aeron will propagate back-pressure from the receiving node to the sending node, i.e. the AeronSink in the outbound stream will not progress if the AeronSource at the other end is slower and the buffers have been filled up.
If messages are sent at a higher rate than what can be consumed by the receiving node the SendQueue will overflow and messages will be dropped. Aeron itself has large buffers to be able to handle bursts of messages.
The same thing will happen in the case of a network partition. When the Aeron buffers are full messages will be dropped by the SendQueue.
In the inbound stream the messages are in the end dispatched to the recipient actor. That is an ordinary actor tell that will enqueue the message in the actor’s mailbox. That is where the back-pressure ends on the receiving side. If the actor is slower than the incoming message rate the mailbox will fill up as usual.
Bottom line, flow control for actor messages must be implemented at the application level. Artery does not change that fact.

Resources