Difference between stream processing and message processing

What is the basic difference between stream processing and traditional message processing? People say that Kafka is a good choice for stream processing, but essentially Kafka is a messaging framework similar to ActiveMQ, RabbitMQ, etc.
Why do we generally not say that ActiveMQ is good for stream processing as well?
Is it the speed at which messages are consumed by the consumer that determines whether it is a stream?

In traditional message processing, you apply simple computations on the messages -- in most cases individually per message.
In stream processing, you apply complex operations on multiple input streams and multiple records (ie, messages) at the same time (like aggregations and joins).
Furthermore, traditional messaging systems cannot go "back in time" -- ie, they automatically delete messages after they have been delivered to all subscribed consumers. In contrast, Kafka keeps messages for a configurable amount of time, as it uses a pull-based model (ie, consumers pull data out of Kafka). This allows consumers to "rewind" and consume messages multiple times -- or, if you add a new consumer, it can read the complete history. This makes stream processing possible, because it allows for more complex applications. Furthermore, stream processing is not necessarily about real-time processing -- it's about processing infinite input streams (in contrast to batch processing, which is applied to finite inputs).
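As a minimal sketch of that "rewind" capability with the Java consumer API (topic name and group id are made up for illustration): the rebalance listener seeks every assigned partition back to the start of the retained log, so the consumer re-reads the full history.

```java
import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReplayDemo {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-demo"); // hypothetical group id
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(List.of("events"), new ConsumerRebalanceListener() { // hypothetical topic
      public void onPartitionsRevoked(Collection<TopicPartition> partitions) {}
      public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        consumer.seekToBeginning(partitions); // "rewind": re-read the whole retained log
      }
    });
    while (true) {
      consumer.poll(Duration.ofMillis(500)).forEach(r -> System.out.println(r.value()));
    }
  }
}
```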
And Kafka offers Kafka Connect and the Streams API -- so it is a stream-processing platform and not just a messaging/pub-sub system (even if it uses messaging at its core).

If you like splitting hairs:
Messaging is communication between two or more processes or components, whereas streaming is the passing of events as they occur. Messages carry raw data, whereas events contain information about the occurrence of an activity, such as an order.
So Kafka does both, messaging and streaming. A topic in Kafka can hold raw messages or an event log that is normally retained for hours or days. Events can further be aggregated into more complex events.

Although Rabbit supports streaming, it was actually not built for it (see Rabbit's website).
Rabbit is a message broker and Kafka is an event streaming platform.
Kafka can handle a huge number of 'messages' compared to Rabbit.
Kafka is a log while Rabbit is a queue, which means that once consumed, Rabbit's messages are no longer there in case you need them again.
Rabbit can, however, specify message priorities, which Kafka doesn't support.
It depends on your needs.

Message processing implies operations on and/or using individual messages. Stream processing encompasses operations on and/or using individual messages as well as operations on collections of messages as they flow into the system. For example, let's say transactions are coming in for a payment instrument -- stream processing can be used to continuously compute the hourly average spend. In this case, a sliding window can be imposed on the stream which picks up messages within the hour and computes the average of the amounts. Such figures can then be used as inputs to fraud detection systems.
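Here is a sketch of that idea with Kafka Streams (topic name and serdes are assumptions, and a real fraud pipeline would track a sum/count pair to get the average rather than the plain windowed total shown here): a one-hour window hops forward every five minutes and accumulates spend per payment instrument.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.TimeWindows;

public class HourlySpend {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "hourly-spend");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    StreamsBuilder builder = new StreamsBuilder();
    builder
        // key = payment instrument id, value = transaction amount
        .stream("transactions", Consumed.with(Serdes.String(), Serdes.Double()))
        .groupByKey()
        // one-hour window sliding forward in five-minute hops
        .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofHours(1))
            .advanceBy(Duration.ofMinutes(5)))
        .reduce(Double::sum) // total spend per instrument per window
        .toStream()
        .foreach((window, total) ->
            System.out.println(window.key() + " @ " + window.window().startTime() + " -> " + total));

    new KafkaStreams(builder.build(), props).start();
  }
}
```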

Apologies for the long answer, but I think a short answer would not do justice to the question.
Consider a queue system, like MQ, for:
Exactly-once delivery, and participation in two-phase-commit transactions
Asynchronous request/reply communication: the semantics of the communication are for one component to ask a second component to do something with its data. This is a command pattern with a delayed response.
Recall that messages in a queue are kept until the consumer(s) get them.
Consider a streaming system, like Kafka, as a pub/sub and persistence system, for:
Publishing events as immutable facts of what happened in an application
Getting continuous visibility of the data streams
Keeping data once consumed, for future consumers and replayability
Scaling message consumption horizontally (see the sketch after this list)
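A minimal sketch of that horizontal scaling (topic and group names are hypothetical): every running instance of this process that shares the same group.id is assigned its own subset of the topic's partitions, so adding instances spreads the load.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupedWorker {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // All instances sharing this group.id divide the partitions between them.
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "billing-workers");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(List.of("orders")); // hypothetical topic
      while (true) {
        consumer.poll(Duration.ofMillis(500)).forEach(r ->
            System.out.printf("partition %d: %s%n", r.partition(), r.value()));
      }
    }
  }
}
```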
What are Events and Messages
There is a long history of messaging in IT systems, and it is easy to view an event-driven solution and its events through the lens of messaging systems and messages. However, there are different characteristics worth considering:
Messaging: Messages transport a payload and messages are persisted until consumed. Message consumers are typically directly targeted and related to the producer who cares that the message has been delivered and processed.
Events: Events are persisted as a replayable stream history. Event consumers are not tied to the producer. An event is a record of something that has happened and so can't be changed. (You can't change history.)
Now, messaging versus event streaming.
Messaging is meant to support:
Transient data: data is only stored until a consumer has processed the message, or until it expires.
Request/reply, most of the time.
Targeted reliable delivery: targeted to the entity that will process the request or receive the response; reliable, with transaction support.
Time-coupled producers and consumers: consumers can subscribe to a queue, but messages can be removed after a certain time or once all subscribers have received them. The coupling is still loose at the data-model and interface-definition level.
Events are meant to support:
Stream history: consumers are interested in historic events, not just the most recent ones.
Scalable consumption: a single event is consumed by many consumers with limited impact as the number of consumers grows.
Immutable data
Loosely coupled / decoupled producers and consumers: strong time decoupling, as consumers may come at any time. There is some coupling at the message-definition level, but schema-management best practices and a schema registry reduce friction.
Hope this answer helps!

Basically, Kafka is a messaging framework similar to ActiveMQ or RabbitMQ. There have been some efforts to take Kafka towards streaming:
https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
Then why does Kafka come into the picture when talking about stream processing?
Stream processing frameworks differ in their input of data. In batch processing, you have files stored in a file system that you process and store in some database. In stream processing, frameworks like Spark, Storm, etc. get continuous input from sensor devices, API feeds, and so on, and Kafka is used there to feed the streaming engine.

Recently, I came across a very good document that describes the usage of "stream processing" and "message processing":
https://developer.ibm.com/articles/difference-between-events-and-messages/
Taking asynchronous processing in context:
Messaging:
Consider it when there is a "request for processing", i.e. a client makes a request for the server to process.
Event streaming:
Consider it when "accessing enterprise data" i.e. components within the enterprise can emit data that describe their current state. This data does not normally contain a direct instruction for another system to complete an action. Instead, components allow other systems to gain insight into their data and status.
To facilitate this evaluation, consider these key selection criteria when selecting the right technology for your solution:
Event history - Kafka
Fine-grained subscriptions - MQ
Scalable consumption - Kafka
Transactional behavior - MQ

Related

mqtt timestamp in the topic name: anti-pattern?

Would the MQTT community consider placing message information in the topic name an anti-pattern?
I have a client that has a vast library written around RabbitMQ, and I'm trying to tweak their client and server code to allow them to configure their services for Mosquitto instead. One central requirement for them is TTL: the clients can sometimes sit for hours publishing data before the server comes back online, and they do not want messages that are beyond their TTL to show up.
Their message envelope system is an elaborate JSON and (1) it would be painful to wrap or alter this JSON, and (2) I do not want to incur the expense of unmarshalling JSON to retrieve a timestamp.
The easiest thing to do is place the timestamp at the end of the topic and consume with wildcards: mytopic/mysubtopic/{timestamp} consumed by mytopic/mysubtopic/#
Are there any unintended consequences for this, and would this be considered an anti-pattern?
Whether this is an anti-pattern is a matter of opinion; the spec defines the topic as "The label attached to an Application Message..." so does not preclude your usage. I can think of a few potential "unintended consequences" to your approach (which may, or may not, apply to your specific situation):
Retain flag: As per your comment you will not be able to set the Retain flag to 1 (because all messages would be retained).
Latest Message only when comms re-established: A subscriber may only want the latest message when communications are re-established. This can be achieved by publishing messages with the retain flag set to 1 which results in your subscriber receiving the latest message (and only the latest message; subject to QOS/CleanSession) on each topic it subscribes to (docs). As per the above this will not work with your topic structure.
Order of delivery: the spec requires that "A Server MUST by default treat each Topic as an 'Ordered Topic'" but there is no such guarantee across topics. Note that ordered delivery is dependent upon settings (see the "Non normative comment" in the spec) so this may not be an issue.
Topic Alias: MQTT V5 introduces Topic Alias which can be used to reduce the amount of data transmitted. This will not provide a benefit with your structure.
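For reference, a rough sketch of the timestamp-in-topic approach using the Eclipse Paho Java (MQTT v3) client; the broker URL, topic names, and TTL value are all hypothetical. The subscriber recovers the timestamp from the topic itself and drops stale messages without unmarshalling the JSON payload.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TtlByTopic {
  static final long TTL_MS = 60 * 60 * 1000; // hypothetical one-hour TTL

  public static void main(String[] args) throws Exception {
    MqttClient client = new MqttClient("tcp://broker.example:1883", "ttl-demo");
    client.connect();

    // Subscriber side: parse the timestamp from the last topic segment.
    client.subscribe("mytopic/mysubtopic/#", (topic, msg) -> {
      long sentAt = Long.parseLong(topic.substring(topic.lastIndexOf('/') + 1));
      if (System.currentTimeMillis() - sentAt <= TTL_MS) {
        System.out.println("fresh: " + new String(msg.getPayload()));
      } // else: past its TTL, ignore without touching the payload
    });

    // Publisher side: append the send time to the topic.
    String payload = "{\"reading\": 42}";
    client.publish("mytopic/mysubtopic/" + System.currentTimeMillis(),
        new MqttMessage(payload.getBytes()));
  }
}
```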

Multiple unary RPC calls vs long-running bidirectional streaming in gRPC?

I have a use case where many clients need to keep sending a lot of metrics to the server (almost perpetually). The server needs to store these events, and process them later. I don't expect any kind of response from the server for these events.
I'm thinking of using gRPC for this. Initially, I thought client-side streaming would do (as Envoy does), but the issue is that client-side streaming cannot ensure reliable delivery at the application level (i.e. if the stream closed in between, how many of the messages that were sent were actually processed by the server?), and I can't afford this.
My thought process is, I should either go with bidi streaming, with acks in the server stream, or multiple unary rpc calls (perhaps with some batching of the events in a repeated field for performance).
Which of these would be better?
the issue is that client side streaming cannot ensure reliable delivery at application level (i.e. if the stream closed in between, how many messages that were sent were actually processed by the server) and I can't afford this
This implies you need a response. Even if the response is just an acknowledgement, it is still a response from gRPC's perspective.
The general approach should be "use unary," unless large enough problems can be solved by streaming to overcome their complexity costs. I discussed this at 2018 CloudNativeCon NA (there's a link to slides and YouTube for the video).
For example, if you have multiple backends then each unary RPC may be sent to a different backend. That may cause a high overhead for those various backends to synchronize themselves. A streaming RPC chooses a backend at the beginning and continues using the same backend. So streaming might reduce the frequency of backend synchronization and allow higher performance in the service implementation. But streaming adds complexity when errors occur, and in this case it will cause the RPCs to become long-lived which are more complicated to load balance. So you need to weigh whether the added complexity from streaming/long-lived RPCs provides a large enough benefit to your application.
We don't generally recommend using streaming RPCs for higher gRPC performance. It is true that sending a message on a stream is faster than a new unary RPC, but the improvement is fixed and has higher complexity. Instead, we recommend using streaming RPCs when it would provide higher application (your code) performance or lower application complexity.
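To make the batched-unary option from the question concrete, here is a sketch; it will not compile as-is because MetricsServiceGrpc, PushRequest, PushAck, and Event are assumed to be generated from a hypothetical proto (shown in the comment) with a unary rpc whose request carries a repeated field of events.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.List;

public class MetricsPusher {
  public static void main(String[] args) {
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("metrics.example", 443) // hypothetical endpoint
        .useTransportSecurity()
        .build();
    // Blocking stub generated from the hypothetical proto:
    //   rpc Push(PushRequest) returns (PushAck);
    //   message PushRequest { repeated Event events = 1; }
    MetricsServiceGrpc.MetricsServiceBlockingStub stub =
        MetricsServiceGrpc.newBlockingStub(channel);

    List<Event> batch = collectEvents(); // hypothetical: gather pending events
    PushRequest request = PushRequest.newBuilder().addAllEvents(batch).build();
    PushAck ack = stub.push(request); // unary call: the response doubles as the ack
    // Only discard the batch after a successful response; on failure, retry the
    // same batch (the server should deduplicate if exactly-once matters).
  }

  static List<Event> collectEvents() { /* application-specific */ return List.of(); }
}
```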
Streams ensure that messages are delivered in the order they were sent; this means that if there are concurrent messages, there will be some kind of bottleneck.
Google's gRPC team advises against using streams over unary calls for performance; nevertheless, there have been arguments that, theoretically, streams should have lower overhead. But that does not seem to be true.
For a lower number of concurrent requests, both seem to have comparable latencies. However, for higher loads, unary calls are much more performant.
There is no apparent reason we should prefer streams over unary calls, given that using streams comes with additional problems like:
Poor latency when we have concurrent requests
Complex implementation at the application level
Lack of load balancing: the client will connect with one server and ignore any new servers
Poor resilience to network interruptions (even a small interruption in the TCP connection will fail the stream)
Some benchmarks here: https://nshnt.medium.com/using-grpc-streams-for-unary-calls-cd64a1638c8a

RabbitMQ subscription: limit the number of messages to prefetch

I am using RabbitMQ to communicate between microservices written in Ruby on Rails. Each service subscribes to a topic. All services are scaled and run as multiple instances based on need.
During subscription, Bunny moves all the messages from the queue into the unacked state. This leaves the other scaled instances idle, since there are no messages in the ready state.
Is there a way to limit the number of messages a subscription can fetch, so that other instances can take the remaining messages from the queue?
Based on the information you made available, I'm assuming you're using rubybunny. If this assumption is incorrect (there are other ruby clients available for rabbitmq) let me know and/or check the documentation related to your client.
Back to rubybunny, the link provided points to the necessary information; quoting it:
For cases when multiple consumers share a queue, it is useful to be able to specify how many messages each consumer can be sent at once before sending the next acknowledgement.
In AMQP 0.9.1 parlance this is known as QoS or message prefetching.
Prefetching is configured on a per-channel basis.
To configure prefetching use the Bunny::Channel#prefetch method like so:
# Limit unacknowledged deliveries so other instances share the work:
ch1 = connection1.create_channel
ch1.prefetch(10) # at most 10 unacked messages in flight on this channel

Reduce MQTT traffic when there are no subscribers

In the context of the MQTT protocol, is there a way to make a client not send publish messages when there are no subscribers to that topic?
In other words, is there a standard way to perform subscriber-aware publishing, reducing network traffic from publishing clients to the broker?
This is important in applications where we have many sensors capable of producing huge amounts of data, but most of the time nobody will be interested in all of that data, only in a small subset, and we want to save battery or avoid network congestion.
In the upcoming MQTT v5 specification the broker can indicate to a client that there are no subscribers for a topic when the client publishes to that topic. This is only possible for QoS 1 or QoS 2 publishes because a QoS 0 message does not result in a reply.
No, the publisher has absolutely no idea how many subscribers to a given topic there are; there could be zero or thousands.
This is a key point of pub/sub messaging: the near-total decoupling of the information producer and consumer.
Presumably you can design your devices and applications so that, as well as publishing data to a 'data topic', each device also subscribes to a device-specific 'command topic' that controls its data publishing. If the application is interested in data from a specific device, it must know which device that is in order to know which data topic to subscribe to, so it can publish the 'please publish data now' command to the corresponding command topic.
I suppose there might be a somewhere-in-between solution where devices publish data less frequently when no apps are interested, and faster when at least one app is asking for data to be published.
Seems to me that one thing about MQTT is that you should ideally design the devices and applications as a system, not in isolation.
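A sketch of that device-side pattern with the Eclipse Paho Java client (all topic names and the broker URL are made up): the device publishes sensor data only while some application has asked for it via the command topic.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SensorDevice {
  public static void main(String[] args) throws Exception {
    AtomicBoolean publishing = new AtomicBoolean(false);
    MqttClient client = new MqttClient("tcp://broker.example:1883", "sensor-42");
    client.connect();

    // Device-specific command topic: applications turn publishing on and off here.
    client.subscribe("devices/sensor-42/cmd", (topic, msg) ->
        publishing.set("start".equals(new String(msg.getPayload()))));

    while (true) {
      if (publishing.get()) { // publish only while someone is interested
        client.publish("devices/sensor-42/data",
            new MqttMessage(readSensor().getBytes()));
      }
      Thread.sleep(1000);
    }
  }

  static String readSensor() { return "{\"temp\": 21.5}"; } // stand-in for real hardware
}
```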

Akka Stream ActorSubscriber does not work with remote actors

http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-integrations.html says:
"ActorPublisher and ActorSubscriber cannot be used with remote actors, because if signals of the Reactive Streams protocol (e.g. request) are lost the the stream may deadlock."
Does this mean akka stream is not location transparent? How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
I must have misunderstood something. Thanks for any clarification.
They are strictly a local facility at this time.
You can connect it to a TCP sink/source, though, and it will apply back-pressure over TCP as well (that's what Akka HTTP does).
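A minimal sketch of that TCP-based back-pressure with the Akka Streams Java DSL (host/port are arbitrary; assumes Akka 2.6+): the framed echo server below only pulls bytes from the socket as fast as its stages consume them, and TCP flow control pushes that back-pressure all the way to the remote client.

```java
import akka.actor.ActorSystem;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Framing;
import akka.stream.javadsl.FramingTruncation;
import akka.stream.javadsl.Tcp;
import akka.util.ByteString;

public class EchoServer {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("echo");
    Tcp.get(system)
        .bind("127.0.0.1", 8888)
        .runForeach(connection -> {
          // Per-connection flow: split the byte stream into lines, echo each back.
          Flow<ByteString, ByteString, ?> echo =
              Flow.<ByteString>create()
                  .via(Framing.delimiter(ByteString.fromString("\n"), 1024,
                      FramingTruncation.ALLOW))
                  .map(line -> line.concat(ByteString.fromString("\n")));
          // Demand flows upstream: bytes are read from the socket only as fast
          // as this flow consumes them; TCP carries the back-pressure remotely.
          connection.handleWith(echo, system);
        }, system);
  }
}
```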
How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
Check out streams in Artery (Dec. 2016, so 18 months later):
The new remoting implementation for actor messages was released in Akka 2.4.11 two months ago.
Artery is the code name for it. It’s a drop-in replacement to the old remoting in many cases, but the implementation is completely new and it comes with many important improvements.
(Remoting enables Actor systems on different hosts or JVMs to communicate with each other)
Regarding back-pressure, this is not a complete solution, but it can help:
What about back-pressure? Akka Streams is all about back-pressure but actor messaging is fire-and-forget without any back-pressure. How is that handled in this design?
We can’t magically add back-pressure to actor messaging. That must still be handled on the application level using techniques for message flow control, such as acknowledgments, work-pulling, throttling.
When a message is sent to a remote destination it’s added to a queue that the first stage, called SendQueue, is processing. This queue is bounded and if it overflows the messages will be dropped, which is in line with the actor messaging at-most-once delivery nature. Large amount of messages should not be sent without application level flow control. For example, if serialization of messages is slow and can’t keep up with the send rate this queue will overflow.
Aeron will propagate back-pressure from the receiving node to the sending node, i.e. the AeronSink in the outbound stream will not progress if the AeronSource at the other end is slower and the buffers have been filled up.
If messages are sent at a higher rate than what can be consumed by the receiving node the SendQueue will overflow and messages will be dropped. Aeron itself has large buffers to be able to handle bursts of messages.
The same thing will happen in the case of a network partition. When the Aeron buffers are full messages will be dropped by the SendQueue.
In the inbound stream the messages are in the end dispatched to the recipient actor. That is an ordinary actor tell that will enqueue the message in the actor’s mailbox. That is where the back-pressure ends on the receiving side. If the actor is slower than the incoming message rate the mailbox will fill up as usual.
Bottom line, flow control for actor messages must be implemented at the application level. Artery does not change that fact.
