We're using ejabberd as our XMPP server and the iOS XMPPFramework on the client side.
The problem is that when we receive offline messages, the timestamp written into the message is an actual date/time value, but the server's timezone differs from the clients' timezones, so things get messy at that point.
We use the same approach when querying a user's Last Activity (XEP-0012), but in that XEP the server returns the information as "how many seconds ago the user last logged in to the server", so we can apply the seconds difference to the client's local time and derive the message delivery date/time. There is no problem with the Last Activity query.
But with delayed delivery, ejabberd sends an exact date and time value, and the clients get confused about the conversion (the date and time of each client may differ considerably from one another).
Does anyone know how we can fix this problem? Is there any way to configure ejabberd to return "seconds elapsed" information for offline messages instead?
By the way, we're using the latest ejabberd version.
Thanks
XEP-0091 (Legacy Delayed Delivery) says:
The timezone is to be understood as UTC.
So you need to convert the time from UTC to the local time of the client in order to get the correct result.
The newer specification, XEP-0203 (Delayed Delivery), also says:
[...] MUST be expressed in UTC
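For illustration, a minimal Python sketch of that conversion (the question's client is iOS/XMPPFramework, where NSDateFormatter plays the same role; delay_stamp_to_local is a hypothetical helper name):

```python
from datetime import datetime, timezone

def delay_stamp_to_local(stamp: str) -> datetime:
    """Parse a XEP-0203 <delay/> stamp such as '2011-05-10T14:30:00Z' (always UTC)
    and convert it to the client's local timezone."""
    utc_time = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return utc_time.astimezone()  # no argument: convert to the system's local timezone

# A message stamped 14:30 UTC displays as 16:30 on a UTC+2 client:
print(delay_stamp_to_local("2011-05-10T14:30:00Z"))
```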
Related
I'm currently running a web application that uses the Microsoft Graph API, and today we encountered the following error, which severely impacted our application for a whole day:
"error": {
"code": "ErrorTooManyObjectsOpened",
"message": "Too many concurrent connections opened., The process failed to get the correct properties.",
"innerError": {
"request-id": "removed",
"date": "2017-12-13T17:01:14"
}
}
Please note that the request-id has been removed.
Let me summarize what our web application does.
Basically, we have 2 email folders that we are actively subscribed to, Junk and Folder A.
If anything hits Folder A, we strip the body of the email message and then move the message to Folder B. The subscription on our Junk folder also strips the body and sends them over to Folder B.
Sometimes the webhook subscription service skips messages that arrive at the same time, so we have 2 cron jobs on our server that run a script checking Junk/Folder A for new messages every 5 minutes; by my estimate the cron jobs run about 288*2 times per day. Not counting our subscriptions to the folders, we usually receive around 200-300 email messages per day.
Unfortunately, Microsoft's Graph error codes page does not provide any explanation for this code. I would really appreciate it if anyone could explain what it means and how to avoid it.
This is occurring because your application is exceeding the throttling thresholds.
There are several different throttling metrics that can affect Microsoft Graph requests. For a high-level overview, see the Microsoft Graph throttling guidance. Since in this case you're hitting Exchange Online via Graph, you can find more specific information from What throttling values do I need to take into consideration? in the Exchange documentation.
Architecturally, you are making a lot of unnecessary calls into the API. Rather than having both a subscription and a scheduled job, you should use just the webhook subscription and the /delta endpoint.
Each call to the /delta endpoint gives you a token that can be used to fetch any changes to a given resource since the token was originally issued. So whether 1 email came in or 1,000, you only get the new emails.
Once you're using the /delta endpoint to find your changes, you then use the webhook only as a "trigger". When you receive the webhook notification, you can ignore its contents and instead issue a request to /delta. This ensures that you capture every incoming email, even if you didn't receive separate webhook notifications for each one.
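Sketched in Python with the requests library (token acquisition and storage of the delta link between runs are assumed; fetch_new_messages is a hypothetical helper, not part of any SDK):

```python
from typing import Optional
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def fetch_new_messages(access_token: str, delta_link: Optional[str]) -> tuple:
    """Return (new_messages, delta_link): only messages changed since the last call."""
    url = delta_link or f"{GRAPH}/me/mailFolders/inbox/messages/delta"
    headers = {"Authorization": f"Bearer {access_token}"}
    messages = []
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        data = resp.json()
        messages.extend(data.get("value", []))
        url = data.get("@odata.nextLink")                      # more pages this round
        delta_link = data.get("@odata.deltaLink", delta_link)  # resume point for next round
    return messages, delta_link
```

Your webhook handler then ignores the notification body entirely and simply calls fetch_new_messages with the last saved delta link.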
There is a bug. After making 500 message move requests, a "cannot copy/move error" occurs. Subsequently, a "429: Too many concurrent connections opened" error occurs. Most applications miss the first error because you continually get the 429 error afterwards.
If you let the application "rest" for 30 minutes, the throttle resets itself and you can continue. I do not think there is a time limit for hitting the 500 moves; my application made its 500 moves over 6.5 hours and then we started getting the error.
And if you keep retrying your move call before the 30-minute rest period has elapsed, the throttle never resets. Also, the Retry-After value in the response is null, so that doesn't help you either.
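Until the bug is fixed, about the only defensive measure is a back-off wrapper; here is a Python sketch (the 30-minute fallback mirrors the observed reset window above, not any documented value):

```python
import time
import requests

FALLBACK_REST = 30 * 60  # seconds; observed reset window, not documented by Microsoft

def post_with_backoff(session: requests.Session, url: str, payload: dict) -> requests.Response:
    """Retry a Graph call on 429, honoring Retry-After when the server provides it."""
    while True:
        resp = session.post(url, json=payload)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        retry_after = resp.headers.get("Retry-After")
        # As noted above, Retry-After is often null here, so fall back to the full rest.
        time.sleep(int(retry_after) if retry_after else FALLBACK_REST)
```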
If you find a workaround, please let me know. We are trying a few things, like setting a category on the messages and then moving them manually. I am also investigating creating a rule that moves them for us, or some other job, but I cannot find a way to execute a rule from the Graph API.
See this link for more information; also, the more people who report this issue, the sooner it can hopefully be resolved: Outlook API Throttling documentation #144
We use Microsoft Graph to subscribe to webhooks from emails. Additionally, as a backup procedure we also crawl the messages directly.
We crawl around 5 million emails a day, and we consistently see that around 1%-2% of these emails each day are not delivered to us via the webhook, although the subscription for this principal is active (and other email notifications from this user are indeed sent).
Is there any logic on the Microsoft Graph side that skips webhook notifications for certain types of emails by design, or is it just a problem with the notification mechanism?
(Obviously crawling them, as we do now, is a viable backup option, but it also means the processing of the email is delayed.)
I currently have a similar webhook setup; we receive around 200-300 emails per day, and I notice that the subscription service usually misses 1-2 of them, since some emails arrive at the same time. I have also noticed that the notification payload is an array of objects when two or more emails arrive at the same time. What we have put in place is a cron-scheduled script that checks the mailbox at specific intervals, such as every 5 minutes or every 10 minutes (see the sketch below). This is the only solution that has captured every single email for my application.
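Roughly, that backup script boils down to this Python sketch (poll_recent_messages is a hypothetical helper; you still need to de-duplicate against messages the webhook already delivered):

```python
from datetime import datetime, timedelta, timezone
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def poll_recent_messages(access_token: str, folder: str, minutes: int = 5) -> list:
    """Backup sweep: fetch anything received in the last `minutes` minutes."""
    since = (datetime.now(timezone.utc) - timedelta(minutes=minutes)).strftime("%Y-%m-%dT%H:%M:%SZ")
    resp = requests.get(
        f"{GRAPH}/me/mailFolders/{folder}/messages",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"$filter": f"receivedDateTime ge {since}"},
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

# Cron entry point, e.g. for the Junk folder ("junkemail" is its well-known name):
# missed = poll_recent_messages(token, "junkemail")
```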
I have an application that automates email communications between my company's service desk and customers.
When the application needs to reply to an existing email, I use FindItems to get the email, load the properties I need, and then use CreateReply to build the response email I need to send.
The strange behavior is as follows:
The email I need to reply to has the correct SentDateTime and timezone (GMT+4), but when I create a reply from it, the sent date of the "replied to" email becomes UTC, which makes no sense!
I am specifying the timezone correctly when connecting to Exchange, and I can't seem to find a way to specify the timezone of the created reply message.
I really hope someone knows anything about this.
Just to point out: when I use Outlook to reply to emails manually, the replied-to email's sent date/time is correct. The problem only happens when I use Exchange Web Services.
Regards
Yazeed
This problem happens because the EWS Managed API omits the TimeZone SOAP header in most requests. For replies and forwards you do need to send the timezone header, otherwise the header information defaults to UTC. One workaround is to use the API's events to add the timezone header back in; see http://blogs.msdn.com/b/emeamsgdev/archive/2014/04/23/ews-missing-soap-headers-when-using-the-ews-managed-api.aspx . The source for the EWS Managed API is also available now, so you could patch the GetTimeZoneRequired method and recompile the library: https://github.com/OfficeDev/ews-managed-api/blob/31951f456519786e41232fa9ff6a3ab20b56cac3/Core/ServiceObjects/Items/Item.cs .
Cheers
Glen
I have an API that uses OAuth for authentication.
Everything had been working smoothly for months until last week, when I started noticing OAuth throwing 400 errors due to "Expired timestamp, yours X, ours Y".
I'm guessing that some change on the server (it's a shared hosting machine) reset or messed up the local time.
I can't figure out any other reason, but I'd like to know if there's any other debugging technique that might help me.
PS: Using PHP 5.2 and the OAuth 1.0a PHP implementation.
The OAuth server should always use UTC time as the basis for checking timestamps. Assuming that you are sending the correct UTC-based timestamp, check whether the 'ours' timestamp shows the correct time in UTC. If it doesn't, there is most likely a problem with the server time or with the OAuth timestamp calculation on the server.
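The check itself is simple; here it is sketched in Python for brevity (the PHP 5.2 equivalent is analogous, and the 300-second window is an illustrative choice):

```python
import time

ALLOWED_SKEW = 300  # seconds; use whatever window your OAuth implementation enforces

def timestamp_is_valid(oauth_timestamp: str) -> bool:
    """OAuth 1.0a timestamps are seconds since the Unix epoch, which is UTC by
    definition, so the comparison is timezone-independent on a correct clock."""
    ours = int(time.time())  # epoch seconds; wrong only if the server clock drifted
    return abs(ours - int(oauth_timestamp)) <= ALLOWED_SKEW
```

If this starts rejecting valid requests, compare the server's `date -u` output against a reliable NTP source; on shared hosting, a drifted system clock is the usual culprit.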
Our experiments have shown that GMail does not use UTC for the internal dates in its mailstore. Anyone know what offset it uses? We've narrowed it to between 3 and 7 hours behind UTC (exclusive), and we could figure it out with further experimentation, but maybe someone knows off-hand. UID SEARCH and the like aren't very accurate if you are assuming UTC and it's not the case. :-)
Further, we're wondering if it's consistent regardless of where you're connecting to gmail in the world.
Update: the first test showed UTC-4 or UTC-5, and a second test I ran (sending hourly emails) revealed my account to be UTC-7. We're wondering if it's set at registration time depending on your source IP (I also registered an account with a UTC-10 timezone, and the internal store was still UTC-7). Changing your account timezone later doesn't change the internal store date an IMAP client sees, wisely, I would think.
I wrote a Python IMAP client and confirmed that the Gmail mailstore timezone is UTC-07:00; I have confirmed it over and over again. So any search results you obtain (more precisely, the after:startDate and before:endDate queries) are shifted by that amount relative to your local timezone. This is because the mailstore uses the local time of the sending SMTP server, which in Gmail's case happens to be at UTC-07:00.
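If you need UID SEARCH to line up with that store, one workaround is to shift your UTC cutoff by the observed offset before building the search date. A small imaplib sketch, assuming the UTC-07:00 observation above holds for your account:

```python
import imaplib
from datetime import datetime, timedelta, timezone

MAILSTORE_OFFSET = timedelta(hours=-7)  # observed Gmail internal-date offset; not documented

def uids_since(conn: imaplib.IMAP4_SSL, cutoff_utc: datetime) -> list:
    """Shift a UTC cutoff into the mailstore's apparent timezone before searching.
    IMAP SINCE is date-granular, so the shift matters most for cutoffs near midnight."""
    store_local = cutoff_utc + MAILSTORE_OFFSET
    date_str = store_local.strftime("%d-%b-%Y")  # IMAP date syntax, e.g. 13-Dec-2017
    typ, data = conn.uid("SEARCH", None, f"SINCE {date_str}")
    return data[0].split() if typ == "OK" else []
```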