I want to send a message to all Participant nodes of my Helix cluster from the Controller node. I tried the following piece of code to send a message to all registered participants of my cluster, but their registered Participant message listeners are not receiving the message notification from Helix.
Message msg = new Message(factory.getMessageTypes().get(0), msgId);
msg.setMsgId(msgId);
msg.setSrcName(hostSrc);
msg.setTgtSessionId("*");
msg.setMsgState(MessageState.NEW);
msg.getRecord().setSimpleField("TestMessage", "Message from controller");
Criteria recipientCriteria = new Criteria();
recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
recipientCriteria.setInstanceName("%"); // To all recipients
recipientCriteria.setSessionSpecific(true); // Deliver only to live participants
recipientCriteria.setClusterName("DEV_CLUSTER"); // Only to participants of this cluster
messagingService.send(recipientCriteria,msg);
Note that when I am sending this message, no resources exist in the cluster.
After debugging further, I observed that the CriteriaEvaluator.evaluateCriteria(....) operation returns an empty list, which in turn results in 0 messages being sent to the Participant nodes.
Kindly let me know if I am missing anything in how I am defining my criteria for the Participants.
Thanks!
Update-1: Our observations on this issue are as follows:
On the participant side, the received message is read both by the Participant message listener (say L1) and by the handler created through the MessageHandlerFactory (which internally registers a listener, the HelixTaskExecutor; say L2).
If the message is read by the HelixTaskExecutor (L2) first, it immediately deletes the message ZNode in ZooKeeper, and the additionally configured message listener (L1) never receives the message.
If the message is first read by the additional message listener (L1), we don't face this problem, as that listener does not delete the ZNode from ZooKeeper.
We are still not sure how to handle this, as we want to use both the listener and the MessageHandlers, but we keep running into the problem described above. Our participant-side registration is sketched below.
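For reference, this is roughly how we register both pieces on the participant side (a sketch; MyMessageHandlerFactory, MyMessageListener, and the "TEST" message type are our own names, the rest is the standard HelixManager API):

import org.apache.helix.HelixManager;
import org.apache.helix.HelixManagerFactory;
import org.apache.helix.InstanceType;

public class ParticipantSetup {
    public static HelixManager start(String zkAddress, String instanceName) throws Exception {
        HelixManager manager = HelixManagerFactory.getZKHelixManager(
                "DEV_CLUSTER", instanceName, InstanceType.PARTICIPANT, zkAddress);
        manager.connect();

        // L2: handler created through the MessageHandlerFactory; the HelixTaskExecutor
        // invokes it and then deletes the message ZNode.
        manager.getMessagingService()
               .registerMessageHandlerFactory("TEST", new MyMessageHandlerFactory());

        // L1: the additionally configured message listener; it only observes the
        // message ZNodes and never deletes them.
        manager.addMessageListener(new MyMessageListener(), instanceName);

        return manager;
    }
}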
Any inputs are appreciated.
I'm making an addon that has to send my interrupt cooldown to the raid.
The problem is that whenever I send a message to the raid, I am the only one who receives it.
This is the code that sends the message:
C_ChatInfo.SendAddonMessage("KickRotation",string.format( "%0.2f",remainingCd ), "RAID")
This is the event handler:
frame:RegisterEvent("PLAYER_ENTERING_WORLD")
frame:RegisterEvent("CHAT_MSG_ADDON")
frame:SetScript("OnEvent", function(self, event, ...)
local prefix, msg, msgType, sender = ...;
if event == "CHAT_MSG_ADDON" then
if prefix == "KickRotation" then
print("[KickRotation]" ..tostring(sender) .." potrĂ interrompere tra: " ..msg);
end
end
if event == "PLAYER_ENTERING_WORLD" then
print("[KickRotation] v0.1 by Galfrad")
end
end)
Basically, when the message is sent, it is printed only to me.
Network messages are handled and delivered to the recipient channel (in this case, the raid group) by the server. The reason you see the message locally while the other players do not is that the message is also handled on the local system (the sender), to avoid retransmitting the data.
The server, however, only accepts and relays messages whose prefixes have been registered with it.
Therefore, you must first register your add-on message prefix with the server so that the other players in the requested channel are able to receive your messages.
First, register your add-on message prefix with the name you have already chosen (but be sure to call the registration method only once per client):
local success = C_ChatInfo.RegisterAddonMessagePrefix("KickRotation") -- Message prefix.
Next, check whether your prefix was accepted and registered by the server. If success is false, you may want to show a proper warning or notification to the user. A failure means that either the server has disabled add-on messages or you have reached the limit of add-on message prefix registrations.
Finally, send your message and again check whether it failed:
if not C_ChatInfo.SendAddonMessage("KickRotation",string.format( "%0.2f",remainingCd ), "RAID") then
print("[KickRotation] Failed to send add-on message, message rejected by the server.")
end
I am using Spring Integration's IntegrationFlows to define the message flow, and I used Jms.messageDrivenChannelAdapter to get the message from the MQ. Now I need to parse it, send it to Kafka, and update Couchbase.
IntegrationFlows
.from(Jms.messageDrivenChannelAdapter(this.acarsMqListener)) //MQ Listener with session transacted=true
.wireTap(ACARS_WIRE_TAP_CHNL) // Logging the message
.transform(agmTransformer, "parseXMLMessage") // Parse the xml message
.filter(acarsFilter,"filterMessageOnSmi") // Filter the message based on condition
.transform(agmTransformer, "populateImi") // Parse and Populate based on the message payload
.filter(acarsFilter,"filterMessageOnSmiImi") // Filter the message based on condition
.handle(acarsProcessor, "processEvent") // Create the message
.handle(Kafka.outboundChannelAdapter(kafkaTemplate).messageKey(MESSAGE_KEY).topic(acarsKafkaTopic)) //send it to kafka
.handle(updateCouchbase, "saveToDB") // Update couchbase
.get();
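For reference, the listener container referenced above as acarsMqListener is configured with transacted sessions, roughly like this (a sketch; the configuration class and queue name are placeholders of ours):

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class AcarsMqConfig {

    // Listener container used by Jms.messageDrivenChannelAdapter above.
    @Bean
    public DefaultMessageListenerContainer acarsMqListener(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("ACARS.INBOUND.QUEUE"); // placeholder queue name
        // Transacted session: if the downstream flow throws, the JMS message is rolled back to the queue.
        container.setSessionTransacted(true);
        return container;
    }
}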
As per the application logic, the message should be stored in Kafka and Couchbase; if there is any exception while storing the message in Kafka or Couchbase, the message should be rolled back to the queue. Does the above message flow cater to the expected behavior? Can you please suggest any improvements?
How do I break/split the message received from the MQTT broker (Mosca)? The whole message comes with packet, topic, messageId, payload, etc. I just need the payload {"T":"t"} displayed at the debug node. I tried the split and switch nodes, but they don't seem to work; there is no response at the output.
mqtt device
mqtt broker
You should probably be using the MQTT-in node to subscribe to the topics you want, rather than the output of the Mosca broker node, which will include EVERY message sent to the broker (with all the internal detail that you don't want).
But you can move msg.packet.payload to msg.payload with a Change node. Then run that output through a JSON node, which will parse the String representation of the JSON object back into a proper object.
(If you use the MQTT-in node, you will still need the JSON node.)
I am developing a messenger iOS app based on Firebase Realtime Database.
I want all messages to be ordered based on their timestamp.
Consider the scenario below.
There are 3 clients: A, B, and C.
1)
All clients register the listener in figure-1 to receive messages from others.
<figure-1>
ref.queryOrdered(byChild: "timestamp").queryStarting(atValue: startTime)
    .observe(.childAdded, with: { snapshot in
        // ... do work for the messages: print, save to storage, etc. ...

        // save startTime to storage for the next open
        // (timeOfSnapshot is the "timestamp" value read from this snapshot)
        startTime = max(timeOfSnapshot, startTime)
        saveToStorage(startTime)
    })
2)
Client A writes message 1 to the server with ServerValue.timestamp().
Client B writes message 2 to the server with ServerValue.timestamp().
Client C writes message 3 to the server with ServerValue.timestamp().
They send their messages at almost exactly the same moment.
All clients have a fast Wi-Fi connection.
So, finally, the server data is saved as in figure-2.
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As shown in my listener code, I keep the messages in storage, along with the next listening timestamp, to avoid downloading duplicate messages.
In this case, does Firebase always guarantee to trigger the callbacks in the order below?
Message 1
Message 2
Message 3
If it is not guaranteed, my strategy is absolutely wrong.
For example, suppose some client received the messages as below.
Message 3 // the highest timestamp.
// app crash or out of storage
Message 1
Message 2
The client no longer has any chance to get messages 1 and 2.
I think that if some nodes already exist, Firebase will trigger the callbacks in order for those, because that is the role of the 'queryOrdered' functionality.
However, what happens when there are no nodes before the listener is registered, and new nodes are added afterwards?
I suppose Firebase might send 3 packets to the clients (no matter how quickly a message arrives, Firebase has to send it out as soon as it arrives):
Packet1 for message1
Packet2 for message2
Packet3 for message3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually, all the data is consistent, but the ordering is corrupted.
Does Firebase guarantee that events occur in order?
I have searched Stack Overflow and Google and read the official documents many times; however, I could not find a clear answer.
I have spent almost a week on this. Please give me some advice.
The order in which the data for a query is returned is consistent and determined by the server, so all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will receive it in the same order. The local client will see events for its own write operations right away, though, before the data even reaches the database server.
In figure-2 it is actually quite simple: each node has a unique timestamp, so the nodes will be returned in the order of those timestamps. But even if they had the same timestamp, they would be returned in the same order (timestamp first, then key) for each client.
To developers/users of LMAX Disruptor http://code.google.com/p/disruptor/ :
My question:
Can anyone suggest an approach for applying a timeout function to the Disruptor, e.g. using an EventHandler?
Here is one scenario that came up in my line of work:
Outbox - messages sent to the Server over a network
Inbox - ACK messages received from the Server
ACK Handler - marks outbox messages as ACKed
Timeout Handler - marks outbox messages as NACKed (much needed, but where does it fit into the Disruptor design?)
Is there anyone who shares the same opinion?
Or can anyone point out why it is unnecessary?
I hope the ensuing debate will be brief.
Thank you.
To clarify: the timeout handler would "fire" after a certain period of time when a message could not be delivered?
The way it works with the Disruptor is that you have one ring buffer for inbound and one for outbound messages. So an email comes in, and you place it into the inbound ring buffer using an appropriate event. Then you process the message (i.e. decode, analyze, log, store) and send it along to another system by placing it into the outbound ring buffer. Another handler takes the message and stores it in a database or sends it to another server via SMTP. If an error/timeout etc. occurs, you create an event in the inbound ring buffer signaling the error (NACK) and process that message. Does that make sense?
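To make that concrete, here is a rough sketch of the timeout/NACK path using the Disruptor 3.x API; the AckEvent and PendingMessage types, the outbox map, and the timeout value are illustrative assumptions rather than anything prescribed by the library:

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;

import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Event on the inbound ring buffer: either a real ACK from the server
// or a locally generated timeout (NACK).
class AckEvent {
    long messageId;
    boolean acked; // true = ACK received, false = timed out (NACK)
}

// Outbox entry for a message we sent and are still waiting to have acknowledged.
class PendingMessage {
    final long sentAtMillis = System.currentTimeMillis();
    volatile String state = "PENDING";
}

// Handler on the inbound ring buffer: marks outbox entries as ACKed or NACKed.
class AckHandler implements EventHandler<AckEvent> {
    private final ConcurrentMap<Long, PendingMessage> outbox;

    AckHandler(ConcurrentMap<Long, PendingMessage> outbox) {
        this.outbox = outbox;
    }

    @Override
    public void onEvent(AckEvent event, long sequence, boolean endOfBatch) {
        PendingMessage pending = outbox.remove(event.messageId);
        if (pending != null) {
            pending.state = event.acked ? "ACKED" : "NACKED";
        }
    }
}

// Periodic sweep that turns expired outbox entries into NACK events on the *inbound*
// ring buffer, so timeouts flow through the same path as real ACKs.
class TimeoutSweeper {
    private static final long ACK_TIMEOUT_MILLIS = 5_000; // illustrative value

    static void start(ConcurrentMap<Long, PendingMessage> outbox,
                      RingBuffer<AckEvent> inboundRingBuffer) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            outbox.forEach((id, pending) -> {
                if (now - pending.sentAtMillis > ACK_TIMEOUT_MILLIS) {
                    // Publish a NACK event; the AckHandler above marks the message.
                    inboundRingBuffer.publishEvent((event, seq) -> {
                        event.messageId = id;
                        event.acked = false;
                    });
                }
            });
        }, 1, 1, TimeUnit.SECONDS);
    }
}

The sweep publishes into the inbound ring buffer instead of mutating the outbox directly, so ACKs and timeouts are serialized through the same AckHandler.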