Approaches to ingesting IoT data from a cloud gateway

I would like to hear your insights about an IoT data ingestion case. In AWS IoT, thing shadows are virtual representations of physical things. What I understood from the figure below is that whenever a thing sends data to the platform via the message broker, the thing shadow and rules engine portions receive the same sensor data concurrently and process it.
Are my conclusions correct?
The thing shadow system is subscribed to the message broker, receives the sensor data, and updates the corresponding shadow actors. The shadow side is also responsible for storing sensor data, as a kind of event-sourcing mechanism.
The thing shadow system does not evaluate any rules; it only performs event sourcing and keeps the last known state in the virtual thing actors.
The same sensor data is also inbound data for the rules engine. The rules engine is just an ECA (event-condition-action) type system that handles streaming data and decides what to do with it. This means every incoming message will eventually be processed by the rules engine portion.

Below are my comments on your conclusions.
What I understood from the figure below is that whenever a thing sends data to the platform via the message broker, the thing shadow and rules engine portions receive the same sensor data concurrently and process it.
Changes in the thing shadow can trigger an action registered in the rules engine. There are specific topics associated with a thing shadow that you can subscribe the rules engine to, in order to perform one or more actions in response.
The thing shadow system is subscribed to the message broker, receives the sensor data, and updates the corresponding shadow actors. The shadow side is also responsible for storing sensor data, as a kind of event-sourcing mechanism.
You can update the device shadow by using the REST API or by publishing on the dedicated MQTT shadow topics. The shadow does not constitute an event-sourcing system by itself; it is, as you said, a representation of the data model associated with a physical device.
You can, however, create a rule that listens for changes on one or more shadow instances and records those changes into DynamoDB, for instance, in a time-series manner. You then have an event-sourcing system that lets you store the previous states, or changes, sent by a device over an arbitrary amount of time.
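For illustration, such a rule could be registered with the AWS SDK for Python (boto3). This is only a sketch under assumptions: the rule name, DynamoDB table, and IAM role ARN are hypothetical, and the exact shape of the action payload should be double-checked against the boto3 documentation.

import boto3

iot = boto3.client("iot")

# Hypothetical rule: forward every accepted shadow update to a DynamoDB table,
# building a time-series history of shadow states (a simple event-sourcing store).
iot.create_topic_rule(
    ruleName="shadow_history",
    topicRulePayload={
        # Reserved shadow topic; '+' matches any thing name, topic(3) extracts it.
        "sql": "SELECT state.reported AS state, timestamp() AS ts, "
               "topic(3) AS thing_name "
               "FROM '$aws/things/+/shadow/update/accepted'",
        "actions": [
            {
                "dynamoDBv2": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-dynamodb-role",
                    "putItem": {"tableName": "ShadowHistory"},
                }
            }
        ],
        "ruleDisabled": False,
    },
)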
The thing shadow system does not evaluate any rules; it only performs event sourcing and keeps the last known state in the virtual thing actors.
The thing shadow keeps the desired and reported state of a physical device in the cloud. It does not execute rules, but emits messages on MQTT topics when events happen within the shadow. These messages can then be captured by the rules engine to execute actions.
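As a minimal sketch of those shadow topics (thing name, endpoint, and certificate paths below are placeholders), any MQTT client connected to the AWS IoT endpoint can observe shadow events by subscribing to the reserved topics; paho-mqtt is used here purely as an example client.

import paho.mqtt.client as mqtt

THING = "my-thing"                                      # hypothetical thing name
ENDPOINT = "xxxxxxxx-ats.iot.eu-west-1.amazonaws.com"   # your account's IoT endpoint

def on_message(client, userdata, msg):
    # .../update/accepted fires on every accepted update,
    # .../update/delta fires when desired and reported states diverge.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key")
client.on_message = on_message
client.connect(ENDPOINT, 8883)
client.subscribe([("$aws/things/" + THING + "/shadow/update/accepted", 1),
                  ("$aws/things/" + THING + "/shadow/update/delta", 1)])
client.loop_forever()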
The same sensor data is also inbound data for the rules engine. The rules engine is just an ECA (event-condition-action) type system that handles streaming data and decides what to do with it. This means every incoming message will eventually be processed by the rules engine portion.
The rules engine does not listen on any MQTT topic by default, and hence not on the data devices send to the Device Gateway. You must register in the rules engine the topics you'd like to listen to, along with their associated actions.
Other than that, the rules engine allows you to describe your rules in ANSI SQL, meaning that you are able to specify the origin of your data (the FROM in your SQL statement), the specific fields in a JSON payload you are interested in capturing (SELECT), and an optional condition specifying when the rule should be triggered (WHERE).
An example of a rule listening on the fictitious topic device/+/telemetry and capturing all the fields in the received payload would be:
SELECT * FROM 'device/+/telemetry'
Note how the + wildcard can be used as a placeholder for any device identifier, for instance.

Related

CAN Communication: Knowing Which Node Transmitted Data

I am new to CAN communication and one of my tasks is to use a CANalyzer to learn what message IDs are being used for a product and what data is being sent/received.
The product has multiple nodes that can send/receive CAN messages. I know CAN messages are broadcasted to all the nodes, but the part I'm having a hard time determining is which node transmitted the message and which nodes received it.
So, for example, if I have 3 CAN nodes, is there a way I can determine that Node 1 sent the message and Node 2/3 are receiving the message?
Thank you in advance.
Generally, you can't know this by listening to the CAN bus alone. The same old story whenever someone asks about data on the "CAN bus" is: what application-layer protocol is it using? "CAN bus" doesn't tell you jack; it's just the specification of the physical and data link layers. The concept of identifying individual nodes does not exist at the data link layer, only at the application layer.
There are two possible ways for you to tell:
If you know the application-layer protocol used on top of the physical CAN bus and know that it uses node IDs, then you can tell which node is sending what data by decoding the application-layer protocol (see the sketch after this list).
On each node, you can sniff the Tx signal between the MCU and the CAN transceiver with an oscilloscope. That signal only goes active when the node is transmitting or ACKing. Most modern scopes have a CAN frame decoder feature, saving you the headache of decoding the frames manually.
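As a hedged sketch of the first approach: if the product happens to use J1939 (a common application layer on 29-bit CAN), the transmitting node's source address is the lowest byte of the identifier, so a bus listener can attribute each frame to a node. The channel name and the J1939 assumption are placeholders for whatever your product actually uses.

import can  # python-can

# Assumption: the application layer is J1939, where the low 8 bits of the
# 29-bit identifier carry the source address of the transmitting node.
bus = can.Bus(channel="can0", interface="socketcan")

for msg in bus:
    if msg.is_extended_id:
        source_address = msg.arbitration_id & 0xFF
        print("node 0x%02X sent: %s" % (source_address, msg.data.hex()))

Note that this only identifies the transmitter; at the data link layer there is still no way to tell which nodes actually consumed the frame.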

How to process device messages in an Edge module and send them upstream while retaining their source, without setting message properties?

I am trying to "transparently" intercept and modify incoming device messages. Say, for example, three devices each send data to the Edge Hub at 20 messages per second. I want to apply a moving average to this data and then send it upstream at a rate of 1 message per second per device, while keeping the original sender information visible from the Hub and without using message properties or anything similar that would require further configuration on the hub's data entry points.
I would like to do this in a mostly transparent fashion, as if the device itself were directly connected to the IoT Hub, while retaining the module's sinks and outputs so that multiple modules of this kind can easily be stacked merely by adapting the routes; for example, linking the default input to module1's sink, module1's output to module2's sink, and finally module2's output to upstream.
How can this be achieved? Preferably using the Node SDKs.
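For what it's worth, the chaining described above is normally expressed in the $edgeHub routes of the deployment manifest. A minimal sketch (shown here as a Python dict; the module names module1/module2 are hypothetical, and the $connectionModuleId check used to select only device-originated messages is an assumption worth verifying):

routes = {
    # Device messages (no originating module) go to module1's input.
    "devicesToModule1":
        'FROM /messages/* WHERE NOT IS_DEFINED($connectionModuleId) '
        'INTO BrokeredEndpoint("/modules/module1/inputs/input1")',
    # module1's output feeds module2's input.
    "module1ToModule2":
        'FROM /messages/modules/module1/outputs/output1 '
        'INTO BrokeredEndpoint("/modules/module2/inputs/input1")',
    # module2's output goes upstream to IoT Hub.
    "module2ToUpstream":
        'FROM /messages/modules/module2/outputs/output1 INTO $upstream',
}

Stacking another module of the same kind then only requires adding one more route, as described in the question.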

How to block a particular id from a socketCAN virtual network?

I have a virtual socketCAN network. How do I block a particular ID from being sent on the network?
If a node is connected to a CAN bus, at the lowest level it cannot be prevented from sending any message externally.
However, there are 3 things that can be done:
Add a gateway: a device that separates the bus into multiple smaller buses and passes messages from each sub-bus to the others. It does not prevent any node from sending a message, but it will not forward it to the others. This solution has a few clear drawbacks: it requires a separate device with multiple CAN interfaces (up to the number of nodes on the bus), it adds a delay for each message, and it renders the ACK bit unusable.
Apply filters to the received messages in each node. Again, this will not prevent the message from being sent, but it will reduce the load on the nodes. Most CAN controllers have hardware support for filtering by ID or by an ID bit mask (see the sketch after this list).
Some CAN controllers can block the sending of specific messages; again, this requires adding such a controller and setting it up for each node on the CAN bus.
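A hedged sketch of option 2 on a virtual bus (vcan0 and the ID values are placeholders): socketCAN applies receive filters per socket, so with python-can you can whitelist only the IDs a node cares about; the unwanted ID is still transmitted on the bus, it just never reaches this application. At the raw socketCAN level a filter can also be inverted (CAN_INV_FILTER) to drop exactly one ID instead.

import can  # python-can

# Receive-side filtering: pass only standard ID 0x123, drop everything else.
bus = can.Bus(
    channel="vcan0",
    interface="socketcan",
    can_filters=[{"can_id": 0x123, "can_mask": 0x7FF, "extended": False}],
)

for msg in bus:
    print(msg)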

Difference between stream processing and message processing

What is the basic difference between stream processing and traditional message processing? People say that Kafka is a good choice for stream processing, but essentially Kafka is a messaging framework similar to ActiveMQ, RabbitMQ, etc.
Why do we generally not say that ActiveMQ is good for stream processing as well?
Is it the speed at which messages are consumed by the consumer that determines whether it is a stream?
In traditional message processing, you apply simple computations on the messages -- in most cases individually per message.
In stream processing, you apply complex operations on multiple input streams and multiple records (ie, messages) at the same time (like aggregations and joins).
Furthermore, traditional messaging systems cannot go "back in time" -- i.e., they automatically delete messages after they have been delivered to all subscribed consumers. In contrast, Kafka keeps the messages for a configurable amount of time and uses a pull-based model (i.e., consumers pull data out of Kafka). This allows consumers to "rewind" and consume messages multiple times -- or, if you add a new consumer, it can read the complete history. This makes stream processing possible, because it allows for more complex applications. Furthermore, stream processing is not necessarily about real-time processing -- it's about processing infinite input streams (in contrast to batch processing, which is applied to finite inputs).
Kafka also offers Kafka Connect and the Streams API -- so it is a stream-processing platform and not just a messaging/pub-sub system (even if it uses one at its core).
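A small sketch of that "rewind" property using the kafka-python client (topic name and broker address are made up): because the log is retained, a consumer can seek back to the beginning and re-read everything still inside the retention window.

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",  # hypothetical broker
    enable_auto_commit=False,
    consumer_timeout_ms=5000,            # stop iterating when no new records arrive
)

# Explicitly assign a partition so we can seek on it.
tp = TopicPartition("sensor-readings", 0)
consumer.assign([tp])

# "Go back in time": re-read every record still retained in the log.
consumer.seek_to_beginning(tp)
for record in consumer:
    print(record.offset, record.value)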
If you like splitting hairs:
Messaging is communication between two or more processes or components, whereas streaming is the passing of events from a log as they occur. Messages carry raw data, whereas events contain information about the occurrence of an activity, such as an order.
So Kafka does both messaging and streaming. A topic in Kafka can hold raw messages or an event log that is normally retained for hours or days. Events can further be aggregated into more complex events.
Although Rabbit supports streaming, it was actually not built for it (see Rabbit's web site).
Rabbit is a message broker and Kafka is an event streaming platform.
Kafka can handle a far larger number of 'messages' than Rabbit.
Kafka is a log while Rabbit is a queue, which means that once consumed, Rabbit's messages are no longer there in case you need them.
However, Rabbit can specify message priorities while Kafka can't.
It depends on your needs.
Message processing implies operations on and/or using individual messages. Stream processing encompasses operations on and/or using individual messages as well as operations on collections of messages as they flow into the system. For example, let's say transactions are coming in for a payment instrument: stream processing can be used to continuously compute the hourly average spend. In this case, a sliding window can be imposed on the stream which picks up messages within the hour and computes the average of the amounts. Such figures can then be used as inputs to fraud detection systems.
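A minimal sketch of that sliding-window computation in plain Python (no streaming framework; the transaction feed is hypothetical): keep the last hour of amounts and recompute the average as each transaction arrives.

from collections import deque

WINDOW_SECONDS = 3600  # one hour

class SlidingAverage:
    """Average of the amounts seen within the last WINDOW_SECONDS."""
    def __init__(self):
        self.events = deque()  # (timestamp, amount)
        self.total = 0.0

    def add(self, timestamp, amount):
        self.events.append((timestamp, amount))
        self.total += amount
        # Evict transactions that have fallen out of the window.
        while self.events and self.events[0][0] <= timestamp - WINDOW_SECONDS:
            _, old_amount = self.events.popleft()
            self.total -= old_amount
        return self.total / len(self.events)

# Hypothetical stream of (timestamp_in_seconds, amount) transactions:
window = SlidingAverage()
for ts, amount in [(0, 10.0), (1800, 30.0), (4000, 50.0)]:
    print("t=%ds hourly average spend: %.2f" % (ts, window.add(ts, amount)))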
Apologies for the long answer, but I think a short answer would not do justice to the question.
Consider a queue system, like MQ, for:
Exactly-once delivery, and participation in two-phase commit transactions
Asynchronous request/reply communication: the semantics of the communication are for one component to ask a second component to do something with its data. This is a command pattern with a delayed response.
Recall that messages in a queue are kept until the consumer(s) have received them.
Consider a streaming system, like Kafka, as a pub/sub and persistence system for:
Publishing events as immutable facts of what happened in an application
Getting continuous visibility of the data streams
Keeping data once consumed, for future consumers, for replayability
Scaling message consumption horizontally
What are Events and Messages
There is a long history of messaging in IT systems, and you can easily view an event-driven solution and events in the context of messaging systems and messages. However, there are different characteristics that are worth considering:
Messaging: Messages transport a payload, and messages are persisted until consumed. Message consumers are typically directly targeted and related to the producer, who cares that the message has been delivered and processed.
Events: Events are persisted as a replayable stream history. Event consumers are not tied to the producer. An event is a record of something that has happened and so can't be changed. (You can't change history.)
Now, messaging versus event streaming.
Messaging is there to support:
Transient Data: data is only stored until a consumer has processed the message, or it expires.
Request/reply, most of the time.
Targeted reliable delivery: targeted to the entity that will process the request or receive the response. Reliable, with transaction support.
Time-coupled producers and consumers: consumers can subscribe to a queue, but a message can be removed after a certain time or when all subscribers have received it. The coupling is still loose at the data model level and interface definition level.
Events are to support:
Stream History: consumers are interested in historic events, not just the most recent.
Scalable Consumption: A single event is consumed by many consumers with limited impact as the number of consumers grows.
Immutable Data
Loosely coupled / decoupled producers and consumers: strong time decoupling, as consumers may come at any time. There is some coupling at the message definition level, but schema management best practices and a schema registry reduce friction.
Hope this answer helps!
Basically, Kafka is a messaging framework similar to ActiveMQ or RabbitMQ. There have been some efforts to take Kafka towards streaming:
https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
Then why does Kafka come into the picture when talking about stream processing?
Stream processing frameworks differ in the way they take their input data. In batch processing, you have files stored in a file system that you want to process continuously and store in some database. In stream processing, frameworks like Spark, Storm, etc. get continuous input from sensor devices, API feeds, and so on, and Kafka is used there to feed the streaming engine.
Recently, I came across a very good document that describes the usage of "stream processing" and "message processing":
https://developer.ibm.com/articles/difference-between-events-and-messages/
Taking asynchronous processing in context:
Messaging:
Consider it when there is a "request for processing", i.e. a client makes a request for a server to process.
Event streaming:
Consider it when "accessing enterprise data", i.e. components within the enterprise can emit data that describes their current state. This data does not normally contain a direct instruction for another system to complete an action. Instead, components allow other systems to gain insight into their data and status.
To facilitate this evaluation, here are the key selection criteria to consider when selecting the right technology for your solution:
Event history - Kafka
Fine-grained subscriptions - MQ
Scalable consumption - Kafka
Transactional behavior - MQ

akka stream ActorSubscriber does not work with remote actors

http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-integrations.html says:
"ActorPublisher and ActorSubscriber cannot be used with remote actors, because if signals of the Reactive Streams protocol (e.g. request) are lost the the stream may deadlock."
Does this mean akka stream is not location transparent? How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
I must have misunderstood something. Thanks for any clarification.
They are strictly a local facility at this time.
You can connect it to a TCP sink/source, though, and it will apply back-pressure using TCP as well (that's what Akka HTTP does).
How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
Check out streams in Artery (Dec. 2016, so 18 months later):
The new remoting implementation for actor messages was released in Akka 2.4.11 two months ago.
Artery is the code name for it. It’s a drop-in replacement to the old remoting in many cases, but the implementation is completely new and it comes with many important improvements.
(Remoting enables Actor systems on different hosts or JVMs to communicate with each other)
Regarding back-pressure, this is not a complete solution, but it can help:
What about back-pressure? Akka Streams is all about back-pressure but actor messaging is fire-and-forget without any back-pressure. How is that handled in this design?
We can’t magically add back-pressure to actor messaging. That must still be handled on the application level using techniques for message flow control, such as acknowledgments, work-pulling, throttling.
When a message is sent to a remote destination it’s added to a queue that the first stage, called SendQueue, is processing. This queue is bounded and if it overflows the messages will be dropped, which is in line with the actor messaging at-most-once delivery nature. Large amount of messages should not be sent without application level flow control. For example, if serialization of messages is slow and can’t keep up with the send rate this queue will overflow.
Aeron will propagate back-pressure from the receiving node to the sending node, i.e. the AeronSink in the outbound stream will not progress if the AeronSource at the other end is slower and the buffers have been filled up.
If messages are sent at a higher rate than what can be consumed by the receiving node the SendQueue will overflow and messages will be dropped. Aeron itself has large buffers to be able to handle bursts of messages.
The same thing will happen in the case of a network partition. When the Aeron buffers are full messages will be dropped by the SendQueue.
In the inbound stream the messages are in the end dispatched to the recipient actor. That is an ordinary actor tell that will enqueue the message in the actor’s mailbox. That is where the back-pressure ends on the receiving side. If the actor is slower than the incoming message rate the mailbox will fill up as usual.
Bottom line, flow control for actor messages must be implemented at the application level. Artery does not change that fact.

Resources