How to change Mule's IMAP mail fetch order?

I have an IMAP connection to fetch emails using Mule. I'm running into an issue.
Here are my 2 simple requirements:
I want to fetch emails in reverse order (latest first).
Ignore SEEN messages but don't delete them.
I was looking at the code that mule (3.3.1) uses:
org.mule.transport.email.RetrieveMessageReceiver.poll().
The code seems to be fetching messages from message 1.
348: Message[] messages = folder.getMessages(1, batchSize);
The messages fetched here are processed in a loop in:
org.mule.transport.email.RetrieveMessageReceiver.messagesAdded(MessageCountEvent)
142: if (!messages[i].getFlags().contains(Flags.Flag.DELETED)
143: && !messages[i].getFlags().contains(Flags.Flag.SEEN))
In effect, this logic reads OLD unread messages first. The code then comes back to line 348, executes
folder.getMessages(1, batchSize);
again, gets the same messages, and keeps on waiting. How can I change the fetch order?
FYI: Using MS Exchange for IMAP

Not sure why you say that Mule tries to read "OLD unread messages"? It actually just tries to read unread messages, i.e. those flagged neither DELETED nor SEEN.
Anyway, theoretically the Mulesque way of sorting the messages would be to use a resequencer. Unfortunately the mail message receivers do not set any of the control properties required for Mule to process the received messages as a single batch, so that won't work.
So the only solution I can think of is to extend org.mule.transport.email.RetrieveMessageReceiver, make it fetch from the end of the folder (for example folder.getMessages(messageCount - batchSize + 1, messageCount), iterated in reverse), and register your custom version on the IMAP connector with a <service-overrides /> child element.
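For illustration, the registration could look like this in the connector configuration; the receiver class name here is hypothetical, and the available service-overrides attributes should be checked against the Mule 3 transport reference:

<imap:connector name="imapConnector">
    <!-- hypothetical subclass of RetrieveMessageReceiver that fetches newest-first -->
    <service-overrides messageReceiver="com.acme.mail.NewestFirstMessageReceiver"/>
</imap:connector>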

Related

How to get Solace queue statistics from the Solclient API in C#?

I am looking to retrieve some Solace queue stats, e.g. the current number of spooled messages against the maximum limit, so that we can set a threshold and stop publishing more messages to the queue.
I would also like to subscribe to VPN events to track message discard rates.
By the time we receive errors such as MaxMsgUsageExceeded/SpoolOverQuota, it will be too late.
I can't seem to find any of these on the SolaceSystems.Solclient.Messaging API:
https://docs.solace.com/API-Developer-Online-Ref-Documentation/net/html/7f10bcf6-19f4-beff-0768-ced843e35168.htm
Would be great if someone could help (I'm using C# for this).
To poll for Solace queue stats from your C# application, you could use legacy SEMP over the message bus to make a SEMP request for the details that you want. SEMP (Solace Element Management Protocol) is a request/reply protocol that uses an XML schema to identify all managed objects available in a message broker. Applications can use SEMP to manage and monitor a message broker.
To allow for legacy SEMP to be used over the message bus, as opposed to the management interface, it first needs to be enabled on the Solace PubSub+ message broker at the VPN level.
To publish a SEMP request with the Solace .NET Messaging API, perform the following steps:
1. Create a Session.
2. Create the message topic, "#SEMP/<router name>/SHOW":
ITopic topic = ContextFactory.Instance.CreateTopic("#SEMP/<router name>/SHOW");
3. Create a request message and set its Destination to the topic from step 2:
IMessage requestMsg = ContextFactory.Instance.CreateMessage();
requestMsg.Destination = topic;
4. Set the SEMP request string as the binary attachment:
string SOLTR_VERSION = "8_4_0"; // change to the message broker's version
string SEMP_SHOW_QUEUE = "<rpc semp-version=\"soltr/" + SOLTR_VERSION + "\">" +
    "<show><queue><name>queueName</name><detail></detail></queue></show></rpc>";
requestMsg.BinaryAttachment = Encoding.UTF8.GetBytes(SEMP_SHOW_QUEUE);
5. Call the SendRequest(...) method on the Session:
IMessage replyMsg;
ReturnCode rc = session.SendRequest(requestMsg, out replyMsg, timeout);
The SEMP response is returned in replyMsg.
6. Obtain the binary attachment data from the reply message:
replyMsg.BinaryAttachment
The binary attachment contains the SEMP reply for the command topic in the publish request.
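Putting these steps together, a minimal end-to-end sketch might look like this, assuming using System.Text and SolaceSystems.Solclient.Messaging directives and an already-connected ISession named session; the router name, queue name, and SolTR version are placeholders:

using System.Text;
using SolaceSystems.Solclient.Messaging;

string routerName = "<router name>"; // placeholder
string queueName = "queueName";      // placeholder
string SOLTR_VERSION = "8_4_0";      // match the broker's SolTR version

ITopic topic = ContextFactory.Instance.CreateTopic("#SEMP/" + routerName + "/SHOW");
IMessage requestMsg = ContextFactory.Instance.CreateMessage();
requestMsg.Destination = topic;
string semp = "<rpc semp-version=\"soltr/" + SOLTR_VERSION + "\">" +
    "<show><queue><name>" + queueName + "</name><detail></detail></queue></show></rpc>";
requestMsg.BinaryAttachment = Encoding.UTF8.GetBytes(semp);

IMessage replyMsg;
ReturnCode rc = session.SendRequest(requestMsg, out replyMsg, 5000 /* timeout in ms */);
if (rc == ReturnCode.SOLCLIENT_OK)
{
    // the reply's binary attachment holds the SEMP XML response; parse it for
    // the spooled-message counts you want to threshold on
    string response = Encoding.UTF8.GetString(replyMsg.BinaryAttachment);
}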
The Solace PubSub+ message broker does raise an event when an egress message is discarded. However, the event is only sent out approximately once every 60 seconds for a given client, so it is not possible to derive exact discard rates from it.
It is possible for your .NET application to subscribe to VPN-level events over the message-bus. To do this, you must first enable the Solace PubSub+ message broker to publish the events. You can then subscribe to the special topic and receive the events as messages.
The topic to subscribe to is:
#LOG/<level>/VPN/<routerName>/<eventName>/<vpnName>
The different levels can use the * wildcard. For example, if you wish to subscribe to all VPN events of all levels for the VPN apple on router QA-NY1, the topic string would be:
#LOG/*/VPN/QA-NY1/*/apple
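From .NET, this is just a topic subscription on the session. A minimal sketch, assuming an already-connected ISession whose message event handler was supplied when the session was created:

// subscribe to VPN events published over the message bus; they arrive in the
// session's message event handler like ordinary messages
ITopic eventTopic = ContextFactory.Instance.CreateTopic("#LOG/*/VPN/QA-NY1/*/apple");
session.Subscribe(eventTopic, true /* wait for confirmation */);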
SEMP (starting in v2) is a RESTful API for configuring, monitoring, and administering a Solace PubSub+ broker.
1. The Swagger page link is SEMP V2 API.
2. The Swagger metadata definitions URL is located at http://{solace-server-url}/SEMP/v2/config/spec
3. From Visual Studio, add a REST API Client.
4. In the configuration dialog, pass the Swagger metadata URL (defined at step 2); for this code I chose SolaceSemp as the value for the client namespace input.
5. Once you click OK, VS will create the client along with the models under the SolaceSemp namespace.
6. Start using the client as follows:
using SolaceSemp;
using Microsoft.Rest;

// basic-auth credentials of a broker management user
var credentials = new BasicAuthenticationCredentials();
credentials.UserName = "place user name";
credentials.Password = "place password";

using (var client = new SolaceSempClient(credentials))
{
    // sanity check: fetch the API metadata
    var model = client.GetAboutApi();
}
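From there, the generated client exposes one method per SEMP v2 endpoint. The exact method names depend on the Swagger spec version the client was generated from, so the call below is hypothetical; browse the generated SolaceSemp namespace for the queue operations:

// hypothetical method name, generated from the getMsgVpnQueue SEMP v2 operation
var queue = client.GetMsgVpnQueue("default", "myQueue");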

Microsoft Azure Service Bus Message completed

In .NET Framework, Microsoft.ServiceBus.Messaging had a class used to receive messages from Service Bus: BrokeredMessage. In .NET Standard 2.0, however, messages are received as instances of the Message class from Microsoft.Azure.ServiceBus.Core.
BrokeredMessage has a CompleteAsync() method that completes the receive operation and indicates that the message should be marked as processed and deleted. I can't find a method on the Message class that does the same thing. Does anyone know how to mark a message as processed and deleted when using the Message class?
To complete a message in the queue when using Microsoft.Azure.ServiceBus, call the CompleteAsync method on the QueueClient through which the message was received.
Pass the message's lock token as the parameter to CompleteAsync.
Example: queueClient.CompleteAsync(message.SystemProperties.LockToken)
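A minimal sketch of the receive-and-complete flow, assuming the Microsoft.Azure.ServiceBus package and placeholder connection details; AutoComplete is disabled so the handler settles each message explicitly:

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

var queueClient = new QueueClient("<connection string>", "<queue name>");

// disable auto-completion so we decide when a message is settled
var options = new MessageHandlerOptions(args => Task.CompletedTask)
{
    AutoComplete = false,
    MaxConcurrentCalls = 1
};

queueClient.RegisterMessageHandler(async (message, cancellationToken) =>
{
    // ... process the message ...
    // mark it as processed so it is deleted from the queue
    await queueClient.CompleteAsync(message.SystemProperties.LockToken);
}, options);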

Does Firebase always guarantee added events in order?

I am developing a messenger iOS app based on the Firebase Realtime Database.
I want all messages to be ordered based on timestamp.
Consider the scenario below.
There are 3 clients. A, B and C.
1)
All clients register the listener in figure-1 to receive messages from others.
<figure-1>
ref.queryOrdered(byChild: "timestamp").queryStarting(atValue: startTime).observe(.childAdded, with: { snapshot in
    // do work for the message: print it, save it to storage, etc.
    // ...
    // save startTime to storage for the next launch, so already-handled
    // messages are not downloaded again
    startTime = max(timeOfSnapshot, startTime)
    saveToStorage(startTime)
})
2)
Client A write message 1 to server with ServerValue.timestamp().
Client B write message 2 to server with ServerValue.timestamp().
Client C write message 3 to server with ServerValue.timestamp().
They send their messages at virtually the same moment.
All clients are on fast Wi-Fi.
So finally the server data is saved as in figure-2:
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As shown in my listener code, I keep messages in storage and persist the next listening timestamp to avoid downloading duplicated messages.
In this case, does Firebase always guarantee to trigger the callbacks in order, as below?
Message 1
Message 2
Message 3
If this is not guaranteed, my strategy is absolutely wrong.
For example, suppose some client received messages like this:
Message 3 // the highest timestamp.
// app crash or out of storage
Message 1
Message 2
The client no longer has any chance to get messages 1 and 2.
I think that if some nodes already exist, Firebase will trigger the events for them in order, because that is the role of the queryOrdered functionality.
However, what happens when there are no nodes before the listener is registered and new nodes are added afterwards?
I suppose Firebase might send 3 packets to the clients. (No matter how quickly a message arrives, Firebase has to send it out as soon as it arrives.)
Packet1 for message1
Packet2 for message2
Packet3 for message3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually all the data is consistent, but the ordering is corrupted.
Does Firebase guarantee that events occur in order?
I have searched Stack Overflow and Google and read the official documents many times, but I could not find a clear answer.
I have spent almost a week on this. Please give me a piece of advice.
The order in which the data for a query is returned is consistent, and determined by the server. So all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will receive it in the same order. The local client will see events for its own write operations right away though, before the data even reaches the database server.
In figure-2 it is actually quite simple: each node has a unique timestamp, so they will be returned in the order of those timestamps. But even if they had the same timestamp, they would be returned in the same order (timestamp first, then key) for each client.

How many direct messages does Twitter store?

I've read the Twitter REST API docs; I know they say you can fetch 200 at a time up to a max of 800. However... I can't. I'm pulling 200, using the last message as max_id and then sending another request, but I only receive the last message from the first request, not the remaining messages from my supposed 800 limit.
So I did a little research and found that when I sent more direct messages from other accounts, my older direct messages were disappearing (i.e., if I had 200 received messages from an account called "sup" and I sent 5 messages from an account called "foo", "sup" would only show 195 direct messages and "foo" would show 5). Those 5 messages would disappear from "sup" both in the Twitter DM window and in the API calls.
I'm using Twython for this, but I don't believe switching back to requests would change anything, as I can visibly see the messages disappearing from the chat log. Does that mean Twitter only stores 200 total DMs? Or am I doing something completely wrong?
This is the code I was using to pull for direct messages. Keep in mind that I still don't know how to explain DM's disappearing in the twitter DM console.
test_m = twitter.get_direct_messages(count=200)
for i, x in enumerate(test_m):
    print('dm number = ' + str(i) + ' | dm id = ' + str(x['id']) + ' | text = ' + x['text'])
m_id = test_m[-1]['id']
test_m_2 = twitter.get_direct_messages(count=200, max_id=m_id)
This code will return test_m as an array of 200 items, and test_m_2 as an array of 1 item, containing the last element of test_m.
Edit: No response yet, but I should add that this method successfully returns more than 200 items for the other API calls I've made (user timeline, mentions timeline, retweets). From my testing I have to assume that only 200 incoming messages are stored by Twitter across all DM interactions. If I'm wrong, let me know!
Brian,
Twitter stores more than the last 200 messages. If you delete one of the direct messages using destroy_direct_message, then you can access one additional old direct message. Deleting 100 old direct messages will give you access to an additional 100 messages, and so on.
I could not make max_id or page work either; not sure if the bug is in Twython or Twitter ;-(
JJ
Currently, as the API stands, you can get up to the latest 3200 tweets of an account, but only the latest 200 received direct messages (direct_messages endpoint) from a conversation, or the latest 800 sent direct messages (direct_messages/sent endpoint).
To answer your question, I do not think there is a limitation of the number of direct messages "stored" by Twitter. Recently, I have been able to retrieve a complete conversation with more than 17000 direct messages (and all the uploaded media) using this tool that I have created for this purpose.

How can I apply a timeout function to the LMAX Disruptor queue?

To developers/users of LMAX Disruptor http://code.google.com/p/disruptor/ :
My question:
Can anyone suggest an approach for applying a timeout function to the Disruptor, e.g. using an EventHandler?
Here is one scenario that came up in my line of work:
Outbox - messages sent to the Server over a network
Inbox - ACK messages received from the Server
ACK Handler - marks outbox messages as ACKed
Timeout Handler - marks outbox message as NACKed (much needed, but where can it fit into the Disruptor design?)
Does anyone share the same opinion? Or can anyone point out why it is unnecessary?
I hope the ensuing debate will be brief.
Thank you.
To clarify: the timeout handler would "fire" after a certain period of time if a message could not be delivered?
The way it works with the Disruptor is that you have one ring buffer for inbound messages and one for outbound messages. A message (e.g. an email) comes in and you place it into the inbound ring buffer with an appropriate event. You then process the message (decode, analyze, log, store) and send it along to another system by placing it into the outbound ring buffer, where another handler takes the message and stores it in a database or sends it to another server using SMTP. If an error or timeout occurs, you create an event in the inbound ring buffer signalling the error (a NACK) and process that message like any other. Does that make sense?
