I am developing a messenger iOS app based on the Firebase Realtime Database.
I want all messages to be ordered by timestamp.
Here is the scenario:
There are 3 clients: A, B, and C.
1)
All clients register the 'figure-1' listener to receive messages from the others.
<figure-1>
ref.queryOrdered(byChild: "timestamp")
    .queryStarting(atValue: startTime)
    .observe(.childAdded, with: { snapshot in
        // ... do work for the message: print, save to storage, etc. ...

        // Save startTime to storage for the next launch
        // (timeOfSnapshot is the timestamp read from this snapshot).
        startTime = max(timeOfSnapshot, startTime)
        saveToStorage(startTime)
    })
2)
Client A writes message 1 to the server with ServerValue.timestamp().
Client B writes message 2 to the server with ServerValue.timestamp().
Client C writes message 3 to the server with ServerValue.timestamp().
They send their messages at almost exactly the same moment, and all clients are on fast Wi-Fi.
So, in the end, the server data is saved as in 'figure-2' (a sketch of such a write follows the figure):
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As shown in my listener code, I keep messages in storage along with the next listening timestamp, to avoid downloading duplicate messages.
In this case, does Firebase always guarantee to trigger the callbacks in the order below?
Message 1
Message 2
Message 3
If that is not guaranteed, my strategy is absolutely wrong.
For example, suppose some client received messages like this:
Message 3 // the highest timestamp.
// app crashes or runs out of storage
Message 1
Message 2
The client no longer has any chance to get messages 1 and 2.
I think that if some nodes already exist, Firebase will trigger events for those in order, because that is the role of the 'queryOrdered' functionality.
However, what happens when there are no nodes before the listener is registered, and new nodes are added after that?
I suppose Firebase might send 3 packets to the clients. (No matter how quickly a message arrives, Firebase has to send it out as soon as it arrives.)
Packet 1 for message 1
Packet 2 for message 2
Packet 3 for message 3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually, all the data is consistent, but the ordering is corrupted.
Does Firebase guarantee to fire events in order?
I have searched Stack Overflow and Google and read the official documents many times, but I could not find a clear answer.
I have spent almost a week on this. Please give me a piece of advice.
The order in which the data for a query is returned is consistent and determined by the server, so all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will receive it in the same order. The local client will see events for its own write operations right away though, before the data even reaches the database server.
Figure 2 is actually quite simple: each node has a unique timestamp, so the nodes will be returned in the order of that timestamp. But even if they had the same timestamp, they would be returned in the same order (timestamp first, then key) for each client.
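Given that guarantee, the crash scenario above is best handled by making the listener idempotent: persist each message under its key first, and only then advance startTime. A minimal sketch under that assumption (loadStartTimeFromStorage, saveToStorage, and saveMessage are hypothetical helpers standing in for the app's storage layer, not Firebase APIs):

import FirebaseDatabase

// Hypothetical local-persistence helpers, not part of the Firebase SDK.
func loadStartTimeFromStorage() -> TimeInterval { UserDefaults.standard.double(forKey: "startTime") }
func saveToStorage(_ t: TimeInterval) { UserDefaults.standard.set(t, forKey: "startTime") }
func saveMessage(_ key: String, _ message: [String: Any]) { /* write to disk / local database */ }

let ref = Database.database().reference(withPath: "messages")
var startTime = loadStartTimeFromStorage()

ref.queryOrdered(byChild: "timestamp")
    .queryStarting(atValue: startTime)
    .observe(.childAdded, with: { snapshot in
        guard let message = snapshot.value as? [String: Any],
              let timestamp = message["timestamp"] as? TimeInterval else { return }

        // 1. Persist the message first, keyed by snapshot.key, so a
        //    re-delivered child simply overwrites itself (deduplication).
        saveMessage(snapshot.key, message)

        // 2. Only then advance the resume point. A crash between the two
        //    steps at worst re-downloads a message that is already stored.
        startTime = max(timestamp, startTime)
        saveToStorage(startTime)
    })

Because the server delivers the children in order, startTime never advances past an unseen message; persisting before advancing only has to cover the crash window between receiving a message and saving it.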
Related
I am building a Slack application that schedules a message when someone posts a specific type of workflow in a channel.
It schedules a message, and if someone from a specific group of users replies before it has been sent, it deletes the scheduled message.
Unfortunately, these messages are still being sent, even though the list of scheduled messages is empty and the response when deleting the message is successful. I am also deleting the message within the 60-second limit that is noted in the API documentation.
Scheduling the message gives me a success response, and if I list the scheduled messages I get:
[
  {
    id: 'MESSAGE_ID',
    channel_id: 'CHANNEL_ID',
    post_at: 1620428096, // 2 minutes in the future for testing
    date_created: 1620428026,
    text: 'thread_ts: 1620428024.001300'
  }
]
Canceling the message:
async function cancelScheduledMessage(scheduled_message_id) {
  const response = await slackApi.post("/chat.deleteScheduledMessage", {
    channel: SLACK_CHANNEL,
    scheduled_message_id
  })
  return response.data
}
response.data returns { "ok": true }
If I use the list scheduled messages API to retrieve what is scheduled, I get an empty array: []
However, the message will still send to the thread.
Is there something I am missing? I have the proper scopes set up and the API calls appear to be working.
If it helps, I am using AWS Lambda, and DynamoDB to store/retrieve the thread_ts and message IDs.
Thanks all.
For messages due in 5 minutes or less, chat.deleteScheduledMessage has a bug (as of November 2021) [1]. Although the API call may return OK, the actual message will still be delivered because of the bug.
Note that for messages due within 60 seconds, this API does return a proper error code, as described in the documentation [2]. In the range between 60 seconds and roughly 5 minutes, the API call returns OK but fails behind the scenes.
Until this bug is fixed, the only workaround is to delete only messages scheduled 5 minutes or more in the future (the exact threshold may vary, according to Slack). This is not ideal and may not be feasible for some applications; a sketch of the guard is shown after the references.
[1] Private communication with Slack support.
[2] https://api.slack.com/methods/chat.deleteScheduledMessage
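A minimal sketch of that guard, written in Swift to match the rest of this page rather than the asker's Node.js Lambda; deleteScheduledMessage below is a hypothetical stand-in for the chat.deleteScheduledMessage HTTP call, not an SDK method:

import Foundation

// Hypothetical stand-in for the HTTP call to chat.deleteScheduledMessage.
func deleteScheduledMessage(channel: String, scheduledMessageId: String) {
    print("deleting \(scheduledMessageId) in \(channel)")
}

// Lead time below which deletion is unreliable (per Slack support; the
// exact threshold may vary).
let minimumLeadTime: TimeInterval = 5 * 60

// `postAt` is the Unix timestamp the message is scheduled to post at.
func canSafelyCancel(postAt: TimeInterval, now: Date = Date()) -> Bool {
    return postAt - now.timeIntervalSince1970 >= minimumLeadTime
}

// Only attempt the cancel when the guard passes; otherwise the delete may
// return ok: true while the message still sends anyway.
if canSafelyCancel(postAt: 1_620_428_096) {
    deleteScheduledMessage(channel: "CHANNEL_ID", scheduledMessageId: "MESSAGE_ID")
}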
I want to display, next to a chat channel, the number of messages in that channel that are unconsumed or unread (I assume this is what unconsumed means?).
Currently I send messages to a channel that two users are subscribed to, a private chat. Then, before opening the chat window, I check the channel for unconsumed messages, but it always says 0 messages, even if I call setNoMessagesConsumedWithCompletion.
I am using the Swift API. What do I need to do to find out how many messages in my channel have not been read yet? And at what point do they become read? (When the user opens a chat channel and requests getLastWithCount?)
I read in the docs that you have to set something called the consumption horizon to get unconsumed messages, but I don't know how to do that in the Swift API: https://www.twilio.com/docs/chat/consumption-horizon. That page is for the JavaScript API, so perhaps it is easier with the Swift API?
I figured out the solution. As per the documentation, you need to update the last consumed message index. So, for example, if the user has a chat window open, you need to record for that user (or instance of the Chat Client) the last message they saw before they close their chat. I am storing all the messages in a message array and updating the last consumed message index with the length of that array:
generalChannel?.messages?.setLastConsumedMessageIndex(NSNumber(value: self.messages.count), completion: { (result, count) in
    if !result.isSuccessful() {
        print(result.error.debugDescription)
    }
})
Then, if you send messages to that channel while the user is not in it, these will be recorded as unconsumed, and you can get the number by calling:
channel.getUnconsumedMessagesCount(completion: { (results, numberUnconsumed) in
    print(numberUnconsumed)
})
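To keep the horizon current while the chat window is actually open, the same update can be made whenever a message arrives. A sketch of a method on the same object that owns self.messages and the channel; the delegate callback name is an assumption based on the same SDK generation as the code above, so verify the exact signature against your TwilioChatClient version:

// Assumed TwilioChatClientDelegate callback; check your SDK version.
func chatClient(_ client: TwilioChatClient, channel: TCHChannel, messageAdded message: TCHMessage) {
    // Append to the local array backing the open chat window...
    self.messages.append(message)

    // ...and advance the consumption horizon so this message is not
    // counted as unconsumed for the user who is currently viewing it.
    channel.messages?.setLastConsumedMessageIndex(NSNumber(value: self.messages.count), completion: { (result, _) in
        if !result.isSuccessful() {
            print(result.error.debugDescription)
        }
    })
}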
When didReceiveStatus is called after subscribing to a channel, we are not able to retrieve the channel(s) that were just subscribed.
PNSubscribeStatus.data.subscribedChannel and PNSubscribeStatus.data.actualChannel are always null, and PNSubscribeStatus.subscribedChannels gives all currently subscribed channels, not just the ones that triggered the didReceiveStatus callback.
What are we doing wrong here?
In SDK 4.0, didReceiveStatus returns a PNStatus, which according to the class documentation doesn't contain that extra information unless there's an error condition. For our application, we use that handler to monitor connection status to the PubNub server.
PubNub Message Received Channel Name in iOS
You should be able to get the channel that you received the message on, but how you get it depends on whether you are subscribed to the channel itself or to a channel group that contains the channel. This is sample code from the PubNub Objective-C for iOS SDK subscribe API reference:
- (void)client:(PubNub *)client didReceiveMessage:(PNMessageResult *)message {
    // Handle new message stored in message.data.message
    if (message.data.actualChannel) {
        // Message has been received on channel group stored in
        // message.data.subscribedChannel
    }
    else {
        // Message has been received on channel stored in
        // message.data.subscribedChannel
    }
    NSLog(@"Received message: %@ on channel %@ at %@", message.data.message,
          message.data.subscribedChannel, message.data.timetoken);
}
If you need other channels that the client is subscribed to, you can call the where-now API.
If you need to be more dynamic about what the reply-to channel should be, then just include that channel name in the message when it is published, assuming the publisher has prior knowledge of which channel this is. Or you can do a just-in-time lookup on your server to determine which channel to reply to; see the sketch below.
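For example, a minimal sketch of the publish side (Swift, against the PubNub 4.x iOS SDK; the replyTo field is an application-level convention rather than a PubNub feature, and the keys and channel names are placeholders):

import PubNub

let configuration = PNConfiguration(publishKey: "demo", subscribeKey: "demo")
let client = PubNub.clientWithConfiguration(configuration)

// "replyTo" is an application-level convention: the subscriber reads it
// out of the payload to learn where to publish its response.
let payload: [String: Any] = [
    "text": "Anyone listening?",
    "replyTo": "room-42.replies"
]

client.publish(payload, toChannel: "room-42", withCompletion: { status in
    if status.isError {
        // Publish failed; status.category describes why (e.g. network issues).
        print("Publish failed, will retry")
    }
})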
Here is the PubNub support answer on this:
Actually, status.data.subscribedChannel and status.data.actualChannel are dedicated to the presence-event and message-receiving callbacks, where information about the source of the event is important.
In -client:didReceiveStatus: the client doesn't give information about the particular channel(s) to which it has just subscribed. If the client were to start tracking this information, there is no guarantee it would return the expected value (as the developer expects certain channels to be there).
In the previous version (3.x) all this information was tracked, but because it can be modified at any moment, the result was sometimes unpredictable.
Subscribing can be done via a sequence of method calls (one after another) like: subscribe A1, subscribe C1, subscribe B1 and B2, unsubscribe C1 and B1. This will end up producing a single call of -client:didReceiveStatus: with the resulting set of channels.
It is always best practice just to check whether your channels are in status.subscribedChannels.
My comments:
The point of having an asynchronous process is exactly not to think of this as a sequence of methods... We cannot guarantee that subscriptions complete in the same order as the subscription requests unless we block each subscription request until the previous one is done.
I have an IMAP connection to fetch emails using Mule. I'm running into an issue.
Here are my 2 simple requirements:
Fetch emails in reverse order (latest first).
Ignore SEEN messages but don't delete them.
I was looking at the code that Mule (3.3.1) uses:
org.mule.transport.email.RetrieveMessageReceiver.poll()
The code seems to fetch messages starting from message 1:
348: Message[] messages = folder.getMessages(1, batchSize);
The messages fetched there are processed in a loop in
org.mule.transport.email.RetrieveMessageReceiver.messagesAdded(MessageCountEvent):
142: if (!messages[i].getFlags().contains(Flags.Flag.DELETED)
143:     && !messages[i].getFlags().contains(Flags.Flag.SEEN))
The net effect of this logic is that it reads OLD unread messages. The code comes back to line 348, executes
folder.getMessages(1, batchSize);
again, gets the same messages, and keeps waiting. How can I change the fetch order?
FYI: Using MS Exchange for IMAP
I'm not sure why you say that Mule tries to read "OLD unread messages". It actually just tries to read unread messages, i.e. those flagged neither DELETED nor SEEN.
Anyway, theoretically the Mulesque way of sorting the messages would be to use a resequencer. Unfortunately, the mail message receivers do not set any of the control properties required to let Mule process the received messages as a single batch, so that won't work.
So the only solution I can think of is to extend org.mule.transport.email.RetrieveMessageReceiver and register your custom version on the IMAP connector with a <service-overrides /> child element.
To developers/users of the LMAX Disruptor http://code.google.com/p/disruptor/ :
My question:
Can anyone suggest an approach for applying a timeout function to the Disruptor, e.g. using an EventHandler?
Here is one scenario that came up in my line of work:
Outbox - messages sent to the Server over a network
Inbox - ACK messages received from the Server
ACK Handler - marks outbox messages as ACKed
Timeout Handler - marks outbox messages as NACKed (much needed, but where does it fit into the Disruptor design?)
Does anyone share the same opinion?
Or can anyone point out why it is unnecessary?
I hope the ensuing debate will be brief.
Thank you.
To clarify: the timeout handler would "fire" after a certain period of time if a message could not be delivered?
The way it works with the Disruptor is that you have a ring buffer for inbound messages and a ring buffer for outbound messages. So a message comes in, and you place it into the inbound ring buffer using an appropriate event. You then process the message (i.e. decode, analyze, log, store) and send it along to another system by placing it into the outbound ring buffer. Another handler takes the message and stores it in a database or sends it to another server using SMTP. If an error or timeout occurs, you create an event in the inbound ring buffer signaling the error (NACK) and process that message. Does that make sense?
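To make that flow concrete, here is a minimal sketch of the ACK/timeout pattern. It is written in Swift to match the rest of this page (the Disruptor itself is a Java library), with the inbound ring buffer modeled as a plain serial queue; all names are illustrative:

import Foundation

// Events that arrive on the inbound "ring buffer": real ACKs from the
// server, plus NACKs synthesized locally by the timeout handler.
enum InboundEvent {
    case ack(messageId: Int)
    case nack(messageId: Int)
}

final class Outbox {
    private var pending = Set<Int>()  // message IDs awaiting an ACK
    private let inbound = DispatchQueue(label: "inbound") // stands in for the inbound ring buffer
    private let timeout: TimeInterval = 2.0

    func send(messageId: Int) {
        inbound.async { self.pending.insert(messageId) }
        // ... hand the message to the outbound ring buffer / network here ...

        // Timeout handler: if the message is still pending after `timeout`,
        // publish a NACK into the inbound stream, exactly where a real ACK
        // would arrive, so a single handler covers both outcomes.
        inbound.asyncAfter(deadline: .now() + timeout) {
            if self.pending.contains(messageId) {
                self.handle(.nack(messageId: messageId))
            }
        }
    }

    func receiveAck(messageId: Int) {
        inbound.async { self.handle(.ack(messageId: messageId)) }
    }

    // Runs on the inbound queue, like an EventHandler on the ring buffer.
    private func handle(_ event: InboundEvent) {
        switch event {
        case .ack(let id):
            pending.remove(id)  // mark the outbox message as ACKed
        case .nack(let id):
            pending.remove(id)  // mark as NACKed: retry or surface the error
        }
    }
}

A real Disruptor setup would replace the queue with a ring buffer and the closures with EventHandler implementations, but the shape is the same: a timeout becomes just another inbound event.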