What are the differences between system properties and application properties in azure-iot-hub message? - azure-iot-sdk

The linked documentation, Create and read IoT Hub messages, describes the Azure IoT Hub message format. I'm confused about application properties and system properties.
What's the difference between them?

The list of system properties is predetermined, and in some cases the values are not user-settable. Typically, system properties are used by IoT Hub as part of its standard message processing.
You can create and set any application properties for your own purposes - application properties may be used as part of any custom routing rules you create in your hub. You may also want to use your custom application properties in any downstream D2C message processing in your solution.
An example of a system property is iothub-connection-device-id - this property is set by IoT Hub on every D2C message. This property contains the id of the device that sent the message and cannot be changed.
An example of an application property might be severity. You could then use the values (such as info, warning, and error) to route messages to different endpoints.
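For illustration, here is roughly how a device might set that severity application property when sending a D2C message with the .NET device SDK (Microsoft.Azure.Devices.Client). This is a minimal sketch; the connection string, payload, and property value are placeholders.

using System.Text;
using Microsoft.Azure.Devices.Client;

string deviceConnectionString = "<device connection string>";
var deviceClient = DeviceClient.CreateFromConnectionString(deviceConnectionString, TransportType.Mqtt);

var message = new Message(Encoding.UTF8.GetBytes("{\"temperature\": 42}"));

// Application property chosen by you - available to IoT Hub routing queries.
message.Properties.Add("severity", "warning");

// System properties such as iothub-connection-device-id are stamped by IoT Hub, not by this code.
await deviceClient.SendEventAsync(message);

A route in the hub could then filter on that application property (for example, a query like severity = 'warning') to send those messages to a dedicated endpoint.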

Related

What is the preferred way for an IoT Edge module instance to discover the ids needed to call a method on another module?

In the new preview version of the IoT Edge gateways, one module can invoke methods on another via InvokeDeviceMethodAsync. This takes a device id and a module id as parameters, presumably to tell Edge how to route the call. When calling within the same gateway, the device id parameter should be the device id of the gateway instance in IoT Edge hub, and the module id should be the module id of the instance of the module pushed down to the gateway from IoT Edge. It is easy to hard-code those ids, but that would obviously not be desirable. You could place the hard-coded values in config files that the modules read on load, which would be less problematic but still not ideal. Is there a way to programmatically discover/populate the needed values? Do the deployment JSON configs support variable substitution or similar upon deployment to populate instance ids?
I don't think there is a preferred way currently. You have basically three options; I'll mention two of them.
Using Env in the createOptions section of the module's deployment manifest
Pushing the value to the module as a property via the module twin
I personally would choose option 1: you define the module id when creating the deployment manifest, so you can also inject it as an environment variable into the specific module's createOptions in the same manifest (see the sketch below).
I would choose option 2 if the module-to-module communication needed to change based on domain rules, but I couldn't find any use case in my projects where that is true.
BTW I would answer as comment, but missing reputation.
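To illustrate option 1, the module can read the ids at startup from its environment. A minimal sketch; TARGET_MODULE_ID is an assumed name for whatever you put under Env in the module's createOptions, while IOTEDGE_DEVICEID is, as far as I know, injected into every module by the IoT Edge runtime.

using System;

// Custom value set under "Env" in this module's createOptions in the deployment manifest.
string targetModuleId = Environment.GetEnvironmentVariable("TARGET_MODULE_ID");

// The IoT Edge runtime injects the gateway device's own id into each module.
string deviceId = Environment.GetEnvironmentVariable("IOTEDGE_DEVICEID");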
The typical scenario for method invocation by modules on an IoT Edge device is that a module receives telemetry messages from other modules on the same device, or from downstream devices connected to that IoT Edge gateway device, and based on the contents of a message decides to invoke a method on the sending module or device to indicate some change (for example, if the message indicates the device is running too hot, the module can invoke a method to slow down the fan speed).
In such a scenario, the module can get the device id and module id of the sender from the message itself. The message object has the following properties that provide this information:
ConnectionDeviceId
ConnectionModuleId
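For example, a module could use those two properties to call a direct method back on whichever module sent the message. A hedged sketch using Microsoft.Azure.Devices.Client; the input name "input1" and method name "SlowFanSpeed" are assumptions, not fixed names.

using System.Text;
using Microsoft.Azure.Devices.Client;

ModuleClient moduleClient = await ModuleClient.CreateFromEnvironmentAsync(TransportType.Amqp_Tcp_Only);
await moduleClient.OpenAsync();

await moduleClient.SetInputMessageHandlerAsync("input1", async (message, userContext) =>
{
    // The sender's identity travels with the message.
    string senderDeviceId = message.ConnectionDeviceId;
    string senderModuleId = message.ConnectionModuleId;

    // Invoke a direct method on the module that sent the message.
    var request = new MethodRequest("SlowFanSpeed", Encoding.UTF8.GetBytes("{\"speed\": 50}"));
    await moduleClient.InvokeMethodAsync(senderDeviceId, senderModuleId, request);

    return MessageResponse.Completed;
}, null);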

Can you dynamically change properties of node-RED nodes

I'm currently working on an application that uses MQTT to communicate between an android app and a LoRaWAN gateway. It uses node-RED on the gateway's side.
I can manually set the topic my MQTT output node publishes to. I was wondering if there is a way to make this property of the node change depending on the message it receives.
For example, would it be possible to send a topic attribute alongside my msg and payload and use that value to set the property inside the node?
Thanks for the help!
Yes, just leave the topic field blank in the mqtt-out node config and set the msg.topic property on the message to be the topic you want to publish that message to.

Spring Cloud DataFlow Rabbit Source: how to intercept and enrich messages in a Source

I have been successfully evaluating Spring Cloud DataFlow with a typically simple flow: source | processor | sink.
For deployment there will be multiple sources feeding into this pipeline which I can do using data flow labels. All well and good.
Each source is a different RabbitMQ instance, and because the processor needs to know where each message came from (it has to call back to the source system to get further information), the strategy I thought of was to enrich each message with header details about the source system, which are then transparently passed along to the processor.
Now, I'm well-versed in Spring, Spring Boot and Spring Integration but I cannot find out how to enrich each message in a dataflow source component.
The source component is bound to an org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration. The source uses the default Source.OUTPUT channel. How do I get hold of each message in the source to enrich it?
My processor component uses some Spring Integration DSL to do some of what it needs to do but then this processor component has both an INPUT and OUTPUT channel by definition. Not so with the RabbitSourceConfiguration source.
So, can this be done?
I think you need a custom MessageListener on the MessageListenerContainer in RabbitSourceConfiguration.
In the RabbitSourceConfiguration you can set a custom ChannelAwareMessageListener (you could possibly extend MessagingMessageListenerAdapter as well) on the MessageListenerContainer that does what you intend to do.
In the end what worked was subclassing org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration to:
override public SimpleMessageListenerContainer container() so that I could insert a custom health check before calling super.container(). My business logic enriches each message (see next bullet) with details of where the message came from (note, this is the publisher of the messages and not the rabbit queue). There's a health check needed to validate the additional enriching information (which is provided via configuration) to ensure that messages aren't consumed from the queue and enriched with the wrong information. If the validation fails, the source component fails to start and hence no messages are consumed.
override the creation of the AmqpInboundChannelAdapter bean so that a custom subclass of DefaultAmqpHeaderMapper can be set on the adapter. This custom mapper adds the enriched headers in public Map<String, Object> toHeadersFromRequest(final MessageProperties source).
For me, the inability of stream/dataflow to intercept and modify messages in Source components is problematic. I really shouldn't have to fiddle around with the underlying message broker API in the ways I did. I should be able to do it with e.g. Spring Integration. Indeed I can register a global message interceptor but I cannot change the headers of the message.
This ability would go on my WIBNI (wouldn't it be nice if) list. Perhaps I'll raise a request for this.

Recommended way to process different payload types in Asp.Net WebHooks for same sender

I'm setting up an Asp.Net WebHook receiver for an internal webhook from a different server in the same application. I'm therefore using the built-in CustomWebHookReceiver. The webhook needs to send several different payload types as JSON, which need to be de-serialized into different strong types by the receiver and be processed differently.
Since the only difference between invocations of the webhook are the payload, a single webhook receiver will be configured for a single id, following the pattern of placing the shared secret in the web.config AppSetting as:
MS_WebHookReceiverSecret_<receiver>
What is the best way to implement different webhook handling behavior for the different payload types? Creating separate receivers or separate ids for the different payload types does not seem appropriate, since the security models are identical and writing a new receiver seems like overkill.
Creating different handlers seems like the right direction, but the built-in settings appear to only allow a handler to specify the receiver and the priority. This leaves the option of inspecting the payload inside the handler's ExecuteAsync method and deciding how to process the message.
Is this correct, or is there a better way?
Tom, you can use the {id} part of the URI to receive as many WebHooks as you like using the same receiver, and then differentiate them based on the {id}. You can find more details about how to set this up here.
Hope this helps!
Henrik
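For illustration, a single handler scoped to the custom receiver can branch on context.Id (or on a field inside the payload) and deserialize to the matching strong type. A rough sketch with Microsoft.AspNet.WebHooks; the action id "order-created", the OrderCreatedPayload type, and the "kind" field are assumptions.

using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;
using Newtonsoft.Json.Linq;

public class InternalWebHookHandler : WebHookHandler
{
    public InternalWebHookHandler()
    {
        // Only handle hooks arriving at the built-in custom receiver.
        Receiver = CustomWebHookReceiver.ReceiverName;
    }

    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        JObject data = context.GetDataOrDefault<JObject>();

        if (context.Id == "order-created")
        {
            // Differentiate on the {id} from the URI and deserialize to a strong type.
            OrderCreatedPayload payload = data.ToObject<OrderCreatedPayload>();
            // ... process the order-created payload ...
        }
        else
        {
            // Alternatively, inspect a discriminator field in the payload itself.
            string kind = (string)data["kind"];
            // ... dispatch based on kind ...
        }

        return Task.FromResult(true);
    }
}

// Hypothetical strong type for one of the payloads.
public class OrderCreatedPayload
{
    public string OrderId { get; set; }
}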

How to peek at message while dependencies are being built?

I am building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system without any prior concept of a tenant).
Most messages in the system will be contexted by a tenant. However, there will be some infrastructure messages which will not be, particularly for the purpose of automating tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work with no tenant context, because the way I interact with the database differs depending on whether I have tenant context. The unit of work is built in the process of spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it, and specifically I need to be able to peek at it while the dependencies are being built. I intended to use a tag interface to mark tenant-management messages out from normal business domain messages, but any form of identifying the difference could work. If I am in a unit of work resulting from an HTTP request, I can look at WebApi's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous to this if I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage(), but I need a way of correlating it to the current unit of work I am spinning up, and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if (MessageContext.Message is ITenantInfrastructureMessage)
{
    database = new Database(...);
}
else
{
    tenantId = MessageContext.Headers.TenantId;
    database = new TenantDatabase(..., tenantId);
}
I am working in C#/.NET using MassTransit with RabbitMQ and Autofac with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register it in the container's child lifetime scope so that subsequent resolutions for the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope; the dependencies that use the tenant context get the proper value since it's registered as part of building the child container for the message scope.
It works extremely well, we even built up extension methods to make it easy for developers registering consumers to specify "tenant context providers" that go from a message type to the proper tenant id, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).
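To make the idea concrete, here is a rough sketch of a scoped consume filter with Autofac (a slightly different extension point than IConsumerFactory<T>, but the same principle). TenantContext, ITenantInfrastructureMessage, and the TenantId header come from the question and answer above; the exact wiring that makes the consumer resolve from the child scope is left to MassTransit's Autofac integration and is not shown.

using System.Threading.Tasks;
using Autofac;
using GreenPipes;
using MassTransit;

public interface ITenantInfrastructureMessage { }

public class TenantContext
{
    public string TenantId { get; set; }
}

public class TenantScopeFilter<T> : IFilter<ConsumeContext<T>> where T : class
{
    readonly ILifetimeScope _rootScope;

    public TenantScopeFilter(ILifetimeScope rootScope)
    {
        _rootScope = rootScope;
    }

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // Infrastructure messages carry no tenant; everything else has it in a header.
        string tenantId = null;
        if (!(context.Message is ITenantInfrastructureMessage) &&
            context.Headers.TryGetHeader("TenantId", out var value))
        {
            tenantId = value as string;
        }

        using (var messageScope = _rootScope.BeginLifetimeScope(builder =>
            builder.RegisterInstance(new TenantContext { TenantId = tenantId })))
        {
            // Dependencies resolved from this child scope (for example, the unit of work
            // choosing Database vs TenantDatabase) see the TenantContext registered above.
            await next.Send(context);
        }
    }

    public void Probe(ProbeContext context)
    {
        context.CreateFilterScope("tenantScope");
    }
}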
