I am using the OneDrive /delta APIs from Microsoft Graph to sync files and folders for all the users in my organization.
According to the documentation:
There may be cases when the service can't provide a list of changes for a given token (for example, if a client tries to reuse an old token after being disconnected for a long time, or if server state has changed and a new token is required). In these cases, the service will return an HTTP 410 Gone error
The documentation gives no exact time-frame for when a delta token becomes too old or expires.
Is there a particular time-frame after which the token becomes unusable for a drive and we'll get the 410 error?
There is no defined time to live (TTL) for delta tokens, nor is age the only factor in determining whether a token is invalid; substantial changes to the tenant and/or drive can invalidate it as well.
So long as your code is set up to handle a possible 410, you shouldn't see much impact from this. My general guidance would be to optimize for a "full resync" by comparing file metadata and only pulling or pushing files that have changed (i.e. compare name, path, size, dates, etc.).
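For illustration, a minimal sketch of that pattern in TypeScript (the applyChanges helper and the metadata comparison inside it are placeholders, not something from the original answer):

// Hedged sketch: page through /delta and fall back to a full re-enumeration on 410 Gone.
async function syncDrive(accessToken: string, savedDeltaLink?: string): Promise<string> {
  const startUrl = "https://graph.microsoft.com/v1.0/me/drive/root/delta";
  let url = savedDeltaLink ?? startUrl;

  while (true) {
    const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });

    if (res.status === 410) {
      // Token too old or server state changed: drop it and enumerate everything again,
      // comparing name/path/size/dates locally so only changed files are transferred.
      url = startUrl;
      continue;
    }
    if (!res.ok) throw new Error(`Delta request failed: ${res.status}`);

    const page = await res.json();
    applyChanges(page.value); // hypothetical helper: upsert/delete items in a local index

    if (page["@odata.nextLink"]) {
      url = page["@odata.nextLink"];   // more pages in this round
    } else {
      return page["@odata.deltaLink"]; // persist this for the next sync
    }
  }
}

// Hypothetical placeholder for applying a page of changes to local state.
function applyChanges(items: unknown[]): void { /* compare metadata and pull/push as needed */ }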
In my project, there is a configuration value for OAuth that I do not understand:
auth: {
    maxIdTokenIatOffsetAllowedInSeconds: 600
},
Based on the document I am reading: https://nice-hill-002425310.azurestaticapps.net/docs/documentation/configuration
For maxIdTokenIatOffsetAllowedInSeconds it says:
The amount of offset allowed between the server creating the token, and the client app receiving the id_token.
What does the offset mean in this case? Is it some kind of timing unit?
I am assuming it means that each user can only receive one token every 600 seconds?
Can someone explain what the offset means, and what maxIdTokenIatOffsetAllowedInSeconds is doing to the token?
The docs you linked to are specifically for the angular-auth-oidc-client library so hopefully that's what you're using. In that case the maxIdTokenIatOffsetAllowedInSeconds is being used to determine how much clock skew is allowed between the issuing server and the consuming browser. In this case, 600 seconds would mean the clocks can be 10 minutes different from one another and the token will still be considered valid.
However, today I came across an issue where any value higher than 299 caused my token to be considered expired. I looked back through the changelog and found a recent-ish PR that added this check, along with a new configuration value that allows you to ignore it (disableIdTokenValidation).
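For reference, a rough sketch of how those two settings could sit together in an angular-auth-oidc-client configuration (only the two options discussed above come from this thread; the surrounding field names and values are illustrative):

// Illustrative config sketch; adjust to your own provider and client registration.
const authConfig = {
  authority: "https://login.example.com",   // assumed identity provider
  clientId: "your-client-id",               // assumed client registration
  // Allowed clock skew, in seconds, between the token's iat claim (set by the server)
  // and the clock of the browser validating it:
  maxIdTokenIatOffsetAllowedInSeconds: 299,
  // Escape hatch added alongside the stricter check, if you really need a larger skew:
  // disableIdTokenValidation: true,
};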
I am using Microsoft Graph API delta queries for downloading some information (messages, contacts, events) regularly. But sometimes I get this error:
{
    "error": {
        "code": "SyncStateNotFound",
        "innerError": {
            "date": "2018-06-01T06:31:24",
            "request-id": "47e918a9-ce5b-42b4-8a86-12b96c93121a"
        },
        "message": "The sync state generation is not found; generation=605;[highest=841][841][839][840]."
    }
}
I can't provide steps for reproducing it because I don't know how to reproduce it. It happens sometimes in our production environment.
So I have some questions:
What is a generation in the Microsoft Graph API? Is there any available documentation about it? I didn't find anything useful on the Internet.
Why do delta links expire? Do they expire after some amount of time or after some number of uses? Can I save my delta link in my database and use it for syncing again after, say, a year?
How can I avoid delta link expiration? Are there any lifehacks?
What should I do if I got this issue? Full resync and getting new delta link?
Is it a bug or feature?
Every time you sync, a new sync token is generated. We store the current sync token along with the two previous ones. This helps us in cases where we advance the sync on the server side, but something happens transmitting the data to the client so they don't get the new token value. In such cases, we can "fallback" to the previous sync token so that the client doesn't have to resync everything. But these three stored tokens change with each sync - the oldest one gets dropped and we advance. In your case, you are passing us a delta token that is around 230 generations old. That token is long gone.
Another thing to consider is that an "inactive" sync token will hang around for around 90 days at which point we consider it stale, pour gas on it and set it on fire (not really).
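For illustration, a hedged sketch of recovering from that error in TypeScript (the contacts endpoint in the comment is just one example of a delta resource; the recovery is simply a fresh enumeration):

// Hedged sketch: detect SyncStateNotFound and restart from a fresh delta enumeration.
async function fetchDeltaPage(url: string, startUrl: string, accessToken: string): Promise<any> {
  const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
  if (res.ok) return res.json();

  const body = await res.json().catch(() => undefined);
  if (body?.error?.code === "SyncStateNotFound") {
    // The saved deltaLink's generation has already been dropped on the server: full resync.
    return fetchDeltaPage(startUrl, startUrl, accessToken);
  }
  throw new Error(`Delta request failed with status ${res.status}`);
}

// Example: startUrl could be "https://graph.microsoft.com/v1.0/me/contacts/delta".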
I'm currently running a web application that uses the Microsoft Graph API, and we encountered the following message today, which severely impacted our application for a whole day:
"error": {
"code": "ErrorTooManyObjectsOpened",
"message": "Too many concurrent connections opened., The process failed to get the correct properties.",
"innerError": {
"request-id": "removed",
"date": "2017-12-13T17:01:14"
}
}
Please note that the request-id was removed.
Let me summarize what our web application does.
Basically, we have 2 email folders that we are actively subscribed to, Junk and Folder A.
If anything hits Folder A, we strip the body of the email message and then move the message to Folder B. The subscription on our Junk folder also strips the body and sends them over to Folder B.
Sometimes the webhook subscription service skips messages that arrive at the same time, so we also have 2 cron jobs on our server that run a script checking Junk/Folder A for any messages every 5 minutes; my assumption is therefore that the cron jobs run about 288*2 times per day. Not counting our subscription to the folders, we usually get around 200-300 email messages per day.
Unfortunately, Microsoft's Graph error codes page does not provide any explanation for this code. I would really appreciate it if anyone could explain what it means and how to avoid it.
This is occurring because your application is exceeding the throttling thresholds.
There are several different throttling metrics that can affect Microsoft Graph requests. For a high-level overview, see the Microsoft Graph throttling guidance. Since in this case you're hitting Exchange Online via Graph, you can find more specific information from What throttling values do I need to take into consideration? in the Exchange documentation.
Architecturally, you are making a lot of unnecessary calls into the API. Rather than having both a subscription and a scheduled job, you should use just the webhook subscription and the /delta endpoint.
Each call to the /delta endpoint gives you a token that can be used to fetch any changes to a given resource since the token was issued. So regardless of whether 1 email came in or 1,000, you only get the new emails.
Once you're using the /delta to find your changes, you then use a webhook only as a "trigger". When you receive the webhook, you can ignore the contents and instead issue a request to /delta. This ensures that you capture every incoming email even if you didn't necessarily receive separate webhook notifications.
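A hedged sketch of that shape, assuming an Express server and a hypothetical runDeltaSync() helper that pages the stored deltaLink:

import express from "express";

const app = express();
app.use(express.json());

// Webhook endpoint: the notification payload is ignored; it only triggers a delta sync.
app.post("/notifications", (req, res) => {
  // Subscription validation handshake: echo the validationToken back as plain text.
  const validationToken = req.query.validationToken as string | undefined;
  if (validationToken) {
    res.status(200).type("text/plain").send(validationToken);
    return;
  }

  // Acknowledge quickly, then pull changes via /delta so nothing is missed even if
  // individual notifications were skipped.
  res.sendStatus(202);
  void runDeltaSync();
});

// Hypothetical helper: page .../messages/delta with the saved deltaLink and process results.
async function runDeltaSync(): Promise<void> { /* ... */ }

app.listen(3000);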
There is a bug. After making 500 message move requests, a "cannot copy/move" error occurs. Subsequently, a "429: Too many concurrent connections opened" error occurs. Most applications miss the first error because you continually get the 429 error afterwards.
If you let the application "rest" for 30 minutes, the throttle resets itself and you can continue on. I do not think there is a time limit for hitting the 500 moves. My application did 500 moves after 6.5 hours and then we started getting the error.
And, if you keep trying your move call before the 30 min rest period, it never resets. Also, in the response, the retry-after is null... so, that doesn't help you.
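In the meantime, the blunt pause described above could be sketched like this (the 30-minute figure is only the observation from this answer, not a documented value):

// Hedged sketch: with Retry-After empty, pause roughly 30 minutes after a 429 and retry once.
async function moveWithPause(doMove: () => Promise<Response>): Promise<Response> {
  let res = await doMove();
  if (res.status === 429) {
    await new Promise((resolve) => setTimeout(resolve, 30 * 60 * 1000)); // ~30 minute rest
    res = await doMove();
  }
  return res;
}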
If you find a workaround, please let me know. We are trying a few things, like setting the category and then manually moving the messages. I am also investigating making a rule that moves them for us, or some other job. I cannot find a way to execute a rule from the Graph API.
See this link for more information; also, the more people who report this issue, the sooner it can hopefully be resolved: Outlook API Throttling documentation #144
As described in https://developer.apple.com/reference/cloudkit/ckserverchangetoken, the CloudKit servers return a change token as part of the CKFetchRecordZoneChangesOperation callback response. For what set of subsequent record fetches should I include the given change token in my fetch calls?
Only fetches to the zone we fetched from?
Or would it apply to any fetches to the DB that the zone is in? Or perhaps the whole container that the DB is in?
what about app extensions? (App extensions have the same iCloud user as the main app, but have a different "user" as returned by fetchUserRecordIDWithCompletionHandler:, at least in my testing) Would it be appropriate to supply a change token from the main app in a fetch call from, say, a Messages app extension? I assume not, but would love to have a documented official answer.
I, too, found the scope of CKServerChangeToken a little unclear. However, after reviewing the documentation, it appears that both CKFetchDatabaseChangesOperation and CKFetchRecordZoneChangesOperation provide and manage their own server change tokens.
This is particularly useful if you decide to follow the CloudKit workflow Dave Browning outlines in his 2017 WWDC talk when fetching changes (around the 8 minute mark).
The recommended approach is to:
1) Fetch changes for a database using CKFetchDatabaseChangesOperation. Upon receiving the updated token via changeTokenUpdatedBlock, persist this locally. This token is 'scoped' to either the private or shared CKDatabase the operation was added to. The public database doesn't offer change tokens.
2) If you receive zone IDs via the recordZoneWithIDChangedBlock in the previous operation, this indicates there are zones with changes you can fetch using CKFetchRecordZoneChangesOperation. This operation takes its own unique server change token via its rather cumbersome initializer parameter, CKFetchRecordZoneChangesOperation.ZoneConfiguration. This token is 'scoped' to that particular CKRecordZone. So, again, when you receive an updated token via recordZoneChangeTokensUpdatedBlock, it needs persisting locally (perhaps with a key which relates to its CKRecordZone.ID).
The benefit here is that it probably minimises the number of network calls. Fetching database changes first prevents making calls for each record zone if the database doesn't report any changed zone ids.
Here's a code sample from the CloudKit team which runs through this workflow. Admittedly a few of the APIs have since changed and the comments don't explicitly make it clear the 'scope' of the server change tokens.
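As a purely conceptual illustration of that scoping (written as TypeScript pseudocode with hypothetical fetchDatabaseChanges/fetchZoneChanges stand-ins, since the real operations are the Swift/Objective-C APIs above): one token is kept for the database, plus one token per record zone.

// Conceptual sketch only; the two declared helpers are hypothetical stand-ins for the operations.
declare function fetchDatabaseChanges(token?: string): Promise<{ newToken: string; changedZoneIds: string[] }>;
declare function fetchZoneChanges(zoneId: string, token?: string): Promise<{ newToken: string }>;

interface TokenStore {
  databaseToken?: string;            // one token per (private or shared) database
  zoneTokens: Map<string, string>;   // one token per record zone, keyed by zone ID
}

async function sync(store: TokenStore): Promise<void> {
  // Step 1: ask the database which zones changed since the stored database-scoped token.
  const dbResult = await fetchDatabaseChanges(store.databaseToken);
  store.databaseToken = dbResult.newToken; // persist locally

  // Step 2: fetch record changes only for zones that changed, each with its own zone-scoped token.
  for (const zoneId of dbResult.changedZoneIds) {
    const zoneResult = await fetchZoneChanges(zoneId, store.zoneTokens.get(zoneId));
    store.zoneTokens.set(zoneId, zoneResult.newToken); // persist per-zone
  }
}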
I am working on a project that requires an automated SSIS package to connect to the SurveyMonkey data store via API to incrementally download survey results for the day (or a specified time period) for custom reporting and low-scoring task assignment.
Via OAuth I can collect a long-lived access token, but due to the automated and indefinite nature of my project's lifespan, I cannot manually initiate OAuth2 token refreshes or complete manual re-authentication cycles.
Is there another method to automatically export this data upon a scheduled request?
Additionally, for clarification, how long is a long-lived access token valid? 60 days?
Miles from surveymonkey.com support responded to me with a great answer. I hope it can help someone down the line.
Hi Rob,
Currently our tokens should not expire - this is not guaranteed and may change in future, but we will send out an update well ahead of time if this does ever change. The token you receive on completion of OAuth lets you know how long the token will last for without user intervention; currently it returns 'null' in the 'expires_in' field.
There is no other automated way to schedule the data to be exported currently, however it sounds like our current setup should suit your needs.
In addition to Miles's reply: it is very straightforward to pull diffs from SurveyMonkey using modified dates. We keep a "last sync" timestamp per survey in our database, and update it after each successful data pull.
Use the REST api directly, or (if you're using PHP) try https://github.com/oori/php-surveymonkey. We run it in production.
*Note: actually, you're interested in setting the start_modified_date option for the "getRespondentList" function, but in general, see the API docs - the modified-date filter is available in more functions.
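To make the pattern concrete, here is a rough TypeScript sketch of the per-survey "last sync" bookkeeping. The endpoint path and request shape are assumptions based on the old v2 getRespondentList call mentioned above; check the current API docs before relying on them.

// Hedged sketch: keep a last-sync timestamp per survey and only request newer respondents.
interface SyncState { [surveyId: string]: string }   // last successful sync time per survey

async function pullNewRespondents(apiKey: string, accessToken: string,
                                  surveyId: string, state: SyncState): Promise<void> {
  const since = state[surveyId] ?? "1970-01-01 00:00:00";

  // Assumed v2-style endpoint and body; adjust to whichever API version you are on.
  const res = await fetch(`https://api.surveymonkey.net/v2/surveys/get_respondent_list?api_key=${apiKey}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
    body: JSON.stringify({ survey_id: surveyId, start_modified_date: since }),
  });
  const data = await res.json();
  // ... store the returned respondents in your reporting tables ...

  // Only advance the per-survey timestamp after a successful pull.
  state[surveyId] = new Date().toISOString().slice(0, 19).replace("T", " ");
}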