Node-RED: Split/break message from MQTT broker (Mosca)

How do I break/split the message received from the MQTT broker (Mosca)? The whole message comes with packet, topic, messageId, payload, etc. I just need the payload {"T":"t"} displayed at the debug node. I tried the split and switch nodes, but they don't seem to work; there is no response at the output.

You should probably be using the MQTT-in node to subscribe to the topics you want, rather than the output of the Mosca broker node, which will include EVERY message sent to the broker (with all the internal detail that you don't want).
But you can move msg.packet.payload to msg.payload with the change node, then run that output through the JSON node, which will parse the string representation of the JSON object back into a proper object.
(If you use the MQTT-in node you will still need the JSON node.)

Related

How to send a message from Helix controller to participant?

I want to send a message to all participant nodes of my Helix cluster from the controller node. I tried the following piece of code to send a message to all registered participants of my cluster, but their registered participant message listeners are not receiving the message notification from Helix.
Message msg = new Message(factory.getMessageTypes().get(0), msgId);
msg.setMsgId(msgId);
msg.setSrcName(hostSrc);
msg.setTgtSessionId("*");
msg.setMsgState(MessageState.NEW);
msg.getRecord().setSimpleField("TestMessage", "Message from controller");
Criteria recipientCriteria = new Criteria();
recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
recipientCriteria.setInstanceName("%"); // to all recipients
recipientCriteria.setSessionSpecific(true); // to deliver only to live participants
recipientCriteria.setClusterName("DEV_CLUSTER"); // only to participants of this cluster
messagingService.send(recipientCriteria, msg);
Note that when I am sending this message, no resource exists in the cluster.
After debugging further, what I have observed is that the CriteriaEvaluator.evaluateCriteria(....) operation returns an empty list, which results in 0 messages being sent to the participant nodes.
Kindly let me know if I am missing anything here while defining my criteria for the participants.
Thanks!
Update 1: our observations on this issue are as follows.
On the participant side, the received message is read both by the participant message listener (say L1) and by the handler created through the MessageHandlerFactory (which internally creates a listener, the HelixTaskExecutor; say L2).
If the message is read by the HelixTaskExecutor (L2) first, it immediately deletes the znode in ZooKeeper, and the additionally configured message listener (L1) never receives the message.
If the message is first read by the additional message listener (L1), we don't face this problem, since that listener doesn't delete the znode from ZooKeeper.
We are still not sure how to handle this, as we want to use both the listener and the MessageHandler, but we keep running into the race described above.
Any inputs are appreciated.
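For reference, here is a minimal sketch of the two registration paths described in Update 1, assuming an already connected HelixManager; the method name, the factory variable, and the "TestMessage" type are illustrative only, not taken from the original code:

import org.apache.helix.HelixManager;
import org.apache.helix.messaging.handling.MessageHandlerFactory;

// Sketch of the two delivery paths; manager and factory are assumed to exist.
void registerBothPaths(HelixManager manager, MessageHandlerFactory factory) throws Exception {
    // L1: a raw message listener attached via the HelixManager.
    manager.addMessageListener(
        (instanceName, messages, changeContext) -> {
            // Only sees a message if HelixTaskExecutor (L2) has not already
            // consumed it and deleted its znode from ZooKeeper.
        },
        manager.getInstanceName());

    // L2: a handler factory registered with the messaging service; its handlers
    // run inside HelixTaskExecutor, which deletes the message znode once done.
    manager.getMessagingService().registerMessageHandlerFactory("TestMessage", factory);
}

Because L2 deletes the znode as soon as a handler completes, whichever path reads the message first wins, which matches the race described above.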

Spring Integration: publish to Kafka and update Couchbase after receiving a message from MQ

I am using Spring Integration's IntegrationFlows to define the message flow, and Jms.messageDrivenChannelAdapter to get the message from the MQ. Now I need to parse it, send it to Kafka, and update Couchbase.
IntegrationFlows
    .from(Jms.messageDrivenChannelAdapter(this.acarsMqListener)) // MQ listener with session transacted=true
    .wireTap(ACARS_WIRE_TAP_CHNL) // log the message
    .transform(agmTransformer, "parseXMLMessage") // parse the XML message
    .filter(acarsFilter, "filterMessageOnSmi") // filter the message based on condition
    .transform(agmTransformer, "populateImi") // parse and populate based on the message payload
    .filter(acarsFilter, "filterMessageOnSmiImi") // filter the message based on condition
    .handle(acarsProcessor, "processEvent") // create the message
    .handle(Kafka.outboundChannelAdapter(kafkaTemplate).messageKey(MESSAGE_KEY).topic(acarsKafkaTopic)) // send it to Kafka
    .handle(updateCouchbase, "saveToDB") // update Couchbase
    .get();
As per the application logic, the message should be stored in Kafka and Couchbase; if there is any exception while storing the message in Kafka or Couchbase, the message should be rolled back to the queue. Does the above message flow cater to the expected behavior? Can you please suggest any improvements?
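For context, here is a minimal sketch of the session-transacted JMS listener container that this.acarsMqListener (first line of the flow) refers to; the configuration class, queue name, and connection-factory wiring are assumptions:

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class AcarsMqConfig {

    // Hypothetical configuration for this.acarsMqListener.
    @Bean
    public DefaultMessageListenerContainer acarsMqListener(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("ACARS.IN"); // hypothetical queue name
        // With a transacted session, an exception thrown while the message
        // is being processed causes the broker to redeliver it.
        container.setSessionTransacted(true);
        return container;
    }
}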

Does Firebase always guarantee added events in order?

I am developing a messenger iOS app based on the Firebase Realtime Database.
I want all messages to be ordered by timestamp.
Consider the following scenario.
There are 3 clients: A, B, and C.
1)
All clients register the figure-1 listener to receive messages from the others.
<figure-1>
ref.queryOrdered(byChild: "timestamp")
   .queryStarting(atValue: startTime)
   .observe(.childAdded, with: { snapshot in
       // do work for the messages: print, save to storage, etc.
       // save startTime to storage for the next open.
       startTime = max(timeOfSnapshot, startTime)
       saveToStorage(startTime)
   })
2)
Client A writes message 1 to the server with ServerValue.timestamp().
Client B writes message 2 to the server with ServerValue.timestamp().
Client C writes message 3 to the server with ServerValue.timestamp().
They send their messages at almost exactly the same moment.
All clients have fast Wi-Fi.
So, finally, the server data is saved as in figure-2.
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As the listener's code shows, I keep the messages in storage along with the next listening timestamp, to avoid downloading duplicate messages.
In this case, does Firebase always guarantee to trigger the callbacks in order, as below?
Message 1
Message 2
Message 3
If it is not guaranteed, my strategy is absolutely wrong.
For example, suppose some client received the messages like this:
Message 3 // the highest timestamp.
// app crash or out of storage
Message 1
Message 2
The client no longer has a chance to get messages 1 and 2.
I think that if some nodes already exist, Firebase might trigger the events for those in order, because that is the role of the 'queryOrdered' functionality.
However, what happens when there are no nodes before the listener is registered, and new nodes are added after that?
I suppose Firebase might send 3 packets to the clients. (No matter how quickly a message arrives, Firebase has to send it out as soon as it arrives.)
Packet1 for message1
Packet2 for message2
Packet3 for message3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually, all the data is consistent, but the ordering is corrupted.
Does Firebase guarantee that events occur in order?
I have searched Stack Overflow and Google and read the official documents many times. However, I could not find a clear answer.
I have spent almost a week on this. Please give me a piece of advice.
The order in which the data for a query is returned is consistent, and determined by the server. So all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will receive it in the same order. The local client will see events for its own write operations right away though, before the data even reaches the database server.
In figure-2 it is actually quite simple: since each node has a unique timestamp, they will be returned in the order of that timestamp. But even if they had the same timestamp, they would be returned in the same order (timestamp first, then key) for each client.
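As an illustration, here is the same listener written against the Firebase Admin SDK for Java; the "messages" path, the "timestamp" field, and the startTime handling are names carried over from the question, not a prescribed schema:

import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.Query;

// Java equivalent of the figure-1 listener (path and field names assumed).
void listenForMessages(double startTime) {
    Query query = FirebaseDatabase.getInstance()
            .getReference("messages")
            .orderByChild("timestamp")
            .startAt(startTime);
    query.addChildEventListener(new ChildEventListener() {
        @Override
        public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
            // Fired in query order for both existing and newly added children;
            // previousChildName is the key of the sibling that precedes this one.
        }
        @Override public void onChildChanged(DataSnapshot snapshot, String previousChildName) {}
        @Override public void onChildRemoved(DataSnapshot snapshot) {}
        @Override public void onChildMoved(DataSnapshot snapshot, String previousChildName) {}
        @Override public void onCancelled(DatabaseError error) {}
    });
}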

How to change Mule's IMAP mail fetch order?

I have an IMAP connection to fetch emails using Mule. I'm running into an issue.
Here are my 2 simple requirements:
I want to fetch emails in reverse order (latest first).
Ignore SEEN messages but don't delete them.
I was looking at the code that Mule (3.3.1) uses:
org.mule.transport.email.RetrieveMessageReceiver.poll().
The code seems to be fetching messages from message 1.
348: Message[] messages = folder.getMessages(1, batchSize);
The messages fetched here are processed in a loop in:
org.mule.transport.email.RetrieveMessageReceiver.messagesAdded(MessageCountEvent)
142: if (!messages[i].getFlags().contains(Flags.Flag.DELETED)
143: && !messages[i].getFlags().contains(Flags.Flag.SEEN))
What this whole logic is doing is trying to read OLD unread messages. The code comes back to line 348, executes
folder.getMessages(1, batchSize);
again, gets the same messages, and keeps on waiting. How can I change the fetch order?
FYI: Using MS Exchange for IMAP
Not sure why you say that Mule tries to read "OLD unread messages"? It actually just tries to read unread messages, i.e. messages that are neither DELETED nor SEEN.
Anyway, theoretically the Mulesque way of sorting the messages would be to use a resequencer. Unfortunately, the mail message receivers do not set any of the required control properties to let Mule process the received messages as a single batch, so that won't work.
So the only solution I can think of is to extend org.mule.transport.email.RetrieveMessageReceiver and register your custom version on the IMAP connector with a <service-overrides /> child element.
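For the overridden poll logic itself, here is a plain javax.mail sketch of fetching the newest batch and walking it latest-first while skipping SEEN and DELETED messages; it is deliberately independent of Mule's internal field and constructor details, which a real RetrieveMessageReceiver subclass would still have to respect:

import java.util.ArrayList;
import java.util.List;
import javax.mail.Flags;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.MessagingException;

// Plain javax.mail sketch: fetch the tail of the folder, newest first.
Message[] fetchLatestUnseen(Folder folder, int batchSize) throws MessagingException {
    int total = folder.getMessageCount();
    int start = Math.max(1, total - batchSize + 1);
    Message[] batch = folder.getMessages(start, total); // the newest batch
    List<Message> result = new ArrayList<>();
    for (int i = batch.length - 1; i >= 0; i--) { // iterate latest-first
        Flags flags = batch[i].getFlags();
        if (!flags.contains(Flags.Flag.SEEN) && !flags.contains(Flags.Flag.DELETED)) {
            result.add(batch[i]); // unread and not deleted: keep it
        }
    }
    return result.toArray(new Message[0]);
}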

Mirth: How to send ACK message to sender host and port

I am receiving lab HL7 messages from a static host and a dynamic port. For each message received I need to send an ACK message back to this host and port.
I have a destination TCP Writer channel with the correct message in it, though the port number has to be fixed.
How do I tell Mirth to send this message to the sending host and port?
Thanks in advance
Abhi
You should configure your channel to use the LLP Listener instead, which has the option to reply with a custom HL7 ACK message. The message will be sent back on the same connection, so you don't have to keep track of the address of the sending system.
In Mirth you can also send a customized ACK message.
In Scripts, select the Postprocessor (this script executes once after a message has been processed) and write this code:
var ackString = ""; // build a JavaScript string for your custom ACK
var ackResponse = ResponseFactory.getSuccessResponse(ackString);
responseMap.put("Custom ACK", ackResponse);
Mirth will then parse the Postprocessor script and discover the responseMap code. On the Source tab, go to the Send ACK radio list; you can now select "Respond from" and choose "Custom ACK" from the options in the available dropdown list.
