Looking at the Mailcore docs, I see a method to retrieve the sequence number of an email or emails by executing a fetch using the email UID. However, when looking at the Mailcore2 docs, I don't see any way to accomplish this. Is there a method for this in Mailcore2 that I am somehow not seeing, or a way to bubble up this information? I know it is possible from the command line, but I'd like to be able to access it from inside my iOS app.
EDIT:
Here is why I am looking for this functionality:
We have a native iOS client that fetches the 10 newest emails at a time and saves them. Additionally, the client will fetch the next 10 older emails at a time and save them, as well as the lowest UID it has seen (minUID).
So we need to be able to continually fetch the next 10 older emails that exist on the server that the client has not yet stored or seen. (Therein lies the challenge).
Initially, we did this by fetching emails by UID in groups of 10, using our saved minUID minus 1 as the starting point for each fetch and updating our minUID at the end of each fetch. However, since UIDs are not necessarily contiguous, the number of emails returned was inconsistent, and sometimes zero. To solve this, we thought it would be helpful, before each fetch of the next 10 older emails, to fetch the email with our stored minUID, check its current sequence number, and then fetch the next 10 older emails based on that sequence number.
To fetch messages that changed after a given modification sequence, you can use -[MCOIMAPSession syncMessagesWithFolder:requestKind:uids:modSeq:].
The example below fetches all the new/modified messages in the folder folderName whose modification sequence (MODSEQ) is above highestModSeq.
MCOIMAPFetchMessagesOperation *op =
    [self.imapSession syncMessagesWithFolder:folderName
                                 requestKind:requestKind
                                        uids:[MCOIndexSet indexSetWithRange:MCORangeMake(1, UINT64_MAX)]
                                      modSeq:highestModSeq];
[op start:^(NSError *error, NSArray *messages, MCOIndexSet *vanishedMessages) {
    // messages: the new/changed messages; vanishedMessages: UIDs removed since highestModSeq
}];
The documentation is not exactly the best place to find examples, but our wiki is increasingly becoming an excellent repository for that kind of information. What you're looking for is -[MCOIMAPSession fetchMessagesByUIDOperationWithFolder:requestKind:uids:], an example of which can be found under the IMAP examples wiki page.
Related
I am ideally looking for an API that returns all the messages posted (including replies) since a given timestamp.
conversations.history seems to be the one I should be using, but it does not return replies; it only returns the parent message, and only if the parent's timestamp satisfies the "oldest" param I supply. That is, if the "oldest" value in the query is later than the parent's ts but earlier than the replies' timestamps, neither the parent nor the replies will be returned.
Hence, I am trying to find out whether there is any other API that returns threads based on an "oldest" timestamp, i.e. all the threads that have replies since a given timestamp.
Note: I looked at conversations.replies, but it is only useful if you already know which thread's replies you are fetching.
Currently there is no API that does what you are trying to do.
The best workaround is to manually fetch all the thread data in memory and then apply the filter yourself.
Did you find an alternative solution to this question? I have the same use case, and when contacting Slack support I received the same response: we need to use the combination of conversations.history and conversations.replies. That will be an intensive and continuously growing number of calls if we need to call conversations.replies for every threaded message just to filter out the timestamps that fit the date range. This would be catastrophic in the long run.
Ideally, Slack needs to update the conversations.replies API to support getting all replies between the oldest and latest parameters, just like conversations.history.
Another alternative I am considering is to change the implementation to use the Events API instead of the Web API, with queueing to store all incoming messages. That would make sure all messages are captured and stored, and the required filters could then be applied.
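For what it's worth, here is a rough sketch of that history-plus-replies workaround using plain HTTP calls (the token, channel ID, timestamps, and the thread_ts value below are placeholders; JSON parsing and cursor pagination are left to whatever library the project already uses):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ThreadRepliesSince {

    // Small helper for GET calls against the Slack Web API.
    static String get(HttpClient client, String method, String query, String token) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://slack.com/api/" + method + "?" + query))
                .header("Authorization", "Bearer " + token)
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        String token   = "xoxb-your-token";    // placeholder
        String channel = "C0123456789";        // placeholder
        String oldest  = "1609459200.000000";  // placeholder: keep only replies with ts >= oldest

        HttpClient client = HttpClient.newHttpClient();

        // 1. Page through conversations.history WITHOUT `oldest`, so parents of older
        //    threads are not dropped, and collect every message carrying a thread_ts.
        String historyJson = get(client, "conversations.history", "channel=" + channel, token);

        // 2. For each collected thread_ts, fetch the whole thread with
        //    conversations.replies and filter its replies by ts >= oldest in memory.
        String threadTs = "1609459300.000100"; // placeholder parsed out of historyJson
        String repliesJson = get(client, "conversations.replies",
                "channel=" + channel + "&ts=" + threadTs, token);

        System.out.println(historyJson.length() + " / " + repliesJson.length());
    }
}

This is exactly the in-memory filtering approach described above, which is why the call volume grows with the number of threads.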
I have built a small email client using Mailcore and I need to know when something changes, even if I already know that email's UID and have 'seen' it. For example, if someone flags an email that was received 5 days ago, I need to be able to detect this without having to pull every email and compare it against what I have recorded locally.
Currently I'm polling and then looking through every email in the Inbox, but that doesn't seem right, as it has huge performance implications.
I am using the Twitter public streaming API to search for some keywords. I am writing my script in Java and therefore use twitter4j. I stumbled upon the following information about status deletion notices:
Status deletion notices (delete)
These messages indicate that a given Tweet has been deleted. Client code must honor these messages by clearing the referenced Tweet from memory and any storage or archive, even in the rare case where a deletion message arrives earlier in the stream than the Tweet it references.
https://dev.twitter.com/docs/streaming-apis/messages#Status_deletion_notices_delete
So I created methods to remove records from my database when such a notice arrives. Unfortunately, no such notice ever arrives. I searched for what I might be doing wrong and found some posts in the Twitter developer forums about the same problem:
https://dev.twitter.com/discussions/17393
https://dev.twitter.com/discussions/19943
https://dev.twitter.com/issues/1355
https://dev.twitter.com/discussions/12836
but unfortunately none of these discussions received an answer. So it seems that I made no mistake in my code, but twitter4j never sends me a deletion notice.
I want to respect the privacy of the Twitter users, at least for legal reasons. So my questions are:
What can I do to respect the privacy of the users?
What do I have to do to satisfy my legal duties?
One alternative seems to be to periodically iterate through all the Tweets saved in my database and request them from Twitter to see whether I get a result back (if not, they were deleted). But that doesn't seem practicable, because the data will keep growing and at some point I will run into limits (time, allowed Twitter requests, ...). So what should I do?
Thanks in advance! Your help is greatly appreciated.
Ludwig
twitter4j v.3.0.6
Given the sheer volume of tweets, it's unreasonable to expect you to check whether every stored tweet is still there. You should make sure that you properly act on a delete notice from Twitter; the onus is on them to actually send the delete notification.
That being said, I do receive delete notifications from Twitter. However, we aren't using the public stream; we are using Site Streams, which rely on authorizing specific social accounts and streaming all updates for those accounts (e.g. favorites, follows, blocks, tweets, retweets, etc.) to us in real time.
If you are running a filtered stream, for example, it's probably not feasible (or at least very taxing) for Twitter to run all deleted items through the same pipeline as new items, or to guess which ones you were sent based on the times your filter was running.
As noted in the issue you linked to, the public streaming API will not necessarily send them out. I'd endeavor to handle them, and possibly provide a tool to manually remove a tweet if a request comes in through another channel, but not worry too much about it, given that Twitter doesn't provide a proper facility to be notified of such instances.
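For completeness, a minimal sketch of acting on deletion notices with twitter4j's StatusListener (the tracked keyword and the database calls are placeholders; OAuth credentials are assumed to come from twitter4j.properties):

import twitter4j.*;

public class DeletionAwareStream {
    public static void main(String[] args) {
        TwitterStream twitterStream = new TwitterStreamFactory().getInstance();

        twitterStream.addListener(new StatusAdapter() {
            @Override
            public void onStatus(Status status) {
                // placeholder: insert the tweet into your database here
                System.out.println("store " + status.getId());
            }

            @Override
            public void onDeletionNotice(StatusDeletionNotice notice) {
                // honor the notice: delete the referenced tweet from your database,
                // even if you have not seen (or stored) the tweet itself yet
                System.out.println("delete " + notice.getStatusId()
                        + " (user " + notice.getUserId() + ")");
            }
        });

        // same kind of keyword filter the question describes
        twitterStream.filter(new FilterQuery().track(new String[]{"keyword"}));
    }
}

Whether a public filtered stream actually delivers the notices is up to Twitter, as discussed above, but with this in place your client at least honors every notice it does receive.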
I am working on a PHP project that should fetch emails from an IMAP server and store them in a local database.
The same IMAP server can also be used by other email clients, like Outlook and so on.
The problem is how to know which messages I have already fetched and which I haven't. I am thinking of searching by datetime, but is that reliable? (I would have a cron job that accesses the user's mailbox every minute and checks for new emails, but I'm not sure whether datetime could cause issues, for example when a short message and a message with a big attachment arrive at almost the same time.)
I was thinking about system flags, but the user can modify them via their email client, so I can't rely on them, and I don't want to modify them and confuse the client.
Next, I was thinking about custom flags, but not all IMAP servers support them (and our software needs to be as flexible as possible).
Any good ideas on how I could solve this problem?
Keep track of the highest UID you have synced for the folder you are syncing, and verify that the folder's UIDVALIDITY value still matches.
Unique identifiers are assigned in a strictly ascending fashion in the mailbox; as each message is added to the mailbox it is assigned a higher UID than the message(s) which were added previously. Unlike message sequence numbers, unique identifiers are not necessarily contiguous.
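The question is about PHP, but the approach is the same in any IMAP library. As a rough sketch of the sync loop (shown here with JavaMail; the host, credentials, and the persisted lastSeenUid/storedUidValidity values are placeholders):

import com.sun.mail.imap.IMAPFolder;
import javax.mail.*;
import java.util.Properties;

public class IncrementalSync {
    public static void main(String[] args) throws Exception {
        // Loaded from local storage; both are "unknown" on the very first sync.
        long storedUidValidity = 0L;   // placeholder
        long lastSeenUid = 0L;         // placeholder

        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("imaps");
        store.connect("imap.example.com", "user@example.com", "password"); // placeholders

        IMAPFolder inbox = (IMAPFolder) store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);

        long uidValidity = inbox.getUIDValidity();
        if (uidValidity != storedUidValidity) {
            // UIDVALIDITY changed: the cached UIDs are meaningless, resync from scratch.
            lastSeenUid = 0L;
            storedUidValidity = uidValidity;
        }

        // Everything with a UID higher than the highest one already stored.
        Message[] newMessages = inbox.getMessagesByUID(lastSeenUid + 1, UIDFolder.LASTUID);
        for (Message message : newMessages) {
            long uid = inbox.getUID(message);
            if (uid <= lastSeenUid) continue;  // some servers return the boundary message
            // placeholder: save the message to the local database here
            lastSeenUid = Math.max(lastSeenUid, uid);
        }

        // Persist storedUidValidity and lastSeenUid for the next cron run.
        inbox.close(false);
        store.close();
    }
}

New arrivals always receive higher UIDs (as the quoted RFC text says), so the loop never refetches a message that was already stored.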
I have asked this question in a different post here on SO:
How can a read receipt be suppressed?
I have been doing some research of my own to try to solve this problem, and accessing the email account via IMAP seems like a good solution. I have successfully been able to access my own inbox and mark messages as read with no issue.
I have been asked to perform the same task on an inbox that contains over 23,000 emails. I would like to run the test on a small number of emails from that inbox before letting it loose on all 23,000.
Here is the code I have been running via telnet:
a1 LOGIN user#mailserver.com password
a2 SELECT Inbox
a3 STORE 1:* +FLAGS (\Seen)
The last line adds the \Seen flag to (i.e. marks as read) every email in the folder.
So my question is: how can I execute that STORE command on a specific group of emails, say emails that are going to or coming from a specific account? Is there a way to chain the commands, like a FETCH and then the STORE? Or is there a better way, using only IMAP, to get a collection of emails based on certain criteria and then modify ONLY those emails?
Take a look at the IMAP SEARCH command. The syntax is really awful, but it will let you search by recipient or sender, or for certain words in the subject or body of messages. It returns a list of message numbers, and you can use those numbers in your call to STORE.
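If you end up scripting this rather than typing it into telnet, the same SEARCH-then-STORE idea looks roughly like the following with JavaMail (the host, credentials, and sender address are placeholders; the raw IMAP equivalent is noted in the comments):

import javax.mail.*;
import javax.mail.search.FromStringTerm;
import java.util.Properties;

public class MarkReadBySender {
    public static void main(String[] args) throws Exception {
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("imaps");
        store.connect("mailserver.com", "user", "password"); // placeholders

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_WRITE);

        // Raw IMAP equivalent:
        //   a1 SEARCH FROM "someone@example.com"   -> returns matching message numbers
        Message[] matches = inbox.search(new FromStringTerm("someone@example.com"));

        //   a2 STORE 4,12,18 +FLAGS (\Seen)        -> using the numbers returned above
        inbox.setFlags(matches, new Flags(Flags.Flag.SEEN), true);

        inbox.close(false);
        store.close();
    }
}

Running it first against a folder that holds copies of a handful of messages is an easy way to do the small-scale test before touching the full 23,000.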