We have an application that creates a subscription on a calendar for changes.
POST https://graph.microsoft.com/v1.0/subscriptions
{
  "changeType": "created,updated,deleted",
  "notificationUrl": "https://example.com/notifications",
  "resource": "users('room-01#example.com')/events",
  "expirationDateTime": "2020-11-14T14:56:29.000Z",
  "clientState": "Test"
}
This works as expected.
But now we are interested in calendar events for multiple rooms.
We want to subscribe our application to many calendars at once, since we have many rooms (300+).
If we create a separate subscription for each room, we will likely hit Microsoft's subscription limit.
We were hoping to subscribe to all resources that are members of a group:
POST https://graph.microsoft.com/v1.0/subscriptions
{
  "changeType": "created,updated,deleted",
  "notificationUrl": "https://example.com/notifications",
  "resource": "groups('383efd01-29b6-4817-9a9c-8faf61a1e06a')/events",
  "expirationDateTime": "2020-11-14T14:56:29.000Z",
  "clientState": "Test"
}
This is not working...
Is there maybe another way to set the resource, for example a wildcard?
I hope someone can help us or has an alternative approach; thanks in advance.
To avoid this throttling situation, I would try subscribing in smaller batches (say, 50 rooms at a time) instead of all 300+ at once, as sketched below. The Microsoft Graph endpoint is normally highly performant: the infrastructure behind Microsoft 365 allocates computing resources based on demand to ensure that periods of high traffic do not result in degraded performance. In addition to dynamic scaling of resources, another mechanism Microsoft uses is throttling. If you make too many requests, you will start to get HTTP 429 (Too Many Requests) response codes back. Here is the throttling guidance.
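Here is a minimal sketch of that batching idea, assuming an app-only access token acquired elsewhere (e.g. via MSAL client credentials); the room addresses, batch size, and pauses are illustrative placeholders, not values from the question:

import time
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<app-only access token>"  # assumption: acquired elsewhere, e.g. via MSAL client credentials
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}

def create_room_subscription(room_address):
    """Create one change-notification subscription for a single room calendar."""
    body = {
        "changeType": "created,updated,deleted",
        "notificationUrl": "https://example.com/notifications",
        "resource": f"users('{room_address}')/events",
        "expirationDateTime": "2020-11-14T14:56:29.000Z",
        "clientState": "Test",
    }
    while True:
        resp = requests.post(f"{GRAPH}/subscriptions", headers=HEADERS, json=body)
        if resp.status_code == 429:
            # Honour Retry-After if present, otherwise pause for an arbitrary 30 seconds.
            time.sleep(int(resp.headers.get("Retry-After", "30")))
            continue
        resp.raise_for_status()
        return resp.json()

# Hypothetical room addresses; subscribe in batches of 50 with a pause between batches.
rooms = [f"room-{i:03d}@example.com" for i in range(1, 301)]
for start in range(0, len(rooms), 50):
    for room in rooms[start:start + 50]:
        create_room_subscription(room)
    time.sleep(10)  # illustrative pause between batches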
Along the same lines, you may want to consider Microsoft Graph Data Connect and see whether it fits your scenario. If so, it lets you get at the same data that is available through the Microsoft Graph APIs (currently only a limited subset is available), but in a scalable way.
Related
I am trying to pull Azure DevOps entity data (teams, projects, repositories, members, etc.) and process that data locally.
I cannot find any documentation regarding rate limiting and pagination;
does anyone have any experience with that?
There is some documentation for pagination on the members api:
https://learn.microsoft.com/en-us/rest/api/azure/devops/memberentitlementmanagement/members/get?view=azure-devops-rest-6.0
But that is the only one; I couldn't find any documentation for any of the Git entities,
e.g. repositories:
https://learn.microsoft.com/en-us/rest/api/azure/devops/git/repositories/list?view=azure-devops-rest-6.0
If someone could point me to the right documentation,
or shed some light on these subjects, it would be great.
Thanks.
I cannot find any documentation regarding rate limiting and pagination; does anyone have any experience with that?
There is a document on Service limits and rate limits, which describes the service limits and rate limits that all projects and organizations are subject to.
For rate limiting:
Azure DevOps Services, like many Software-as-a-Service solutions, uses
multi-tenancy to reduce costs and to enhance scalability and
performance. This leaves users vulnerable to performance issues and
even outages when other users of their shared resources have spikes in
their consumption. To combat these problems, Azure DevOps Services
limits the resources individuals can consume and the number of
requests they can make to certain commands. When these limits are
exceeded, subsequent requests may be either delayed or blocked.
You can refer to the Rate limits documentation for details.
For pagination: in general, the REST APIs return paginated responses, and the Azure DevOps REST APIs normally limit each response to 100 or 200 items per page (depending on the API). To retrieve the next page, read the x-ms-continuationtoken response header and pass its value as the continuationToken parameter of the next request.
But Microsoft does not document this very well; it should be mentioned on every API call that supports continuation tokens:
Builds - List:
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitions={definitions}&continuationToken={continuationToken}&maxBuildsPerDefinition={maxBuildsPerDefinition}&deletedFilter={deletedFilter}&queryOrder={queryOrder}&branchName={branchName}&buildIds={buildIds}&repositoryId={repositoryId}&repositoryType={repositoryType}&api-version=5.1
If I call the above REST API with $top=50, I get 50 builds back as expected, along with a response header called "x-ms-continuationtoken"; we can then loop through the results using the continuation token, as in the sketch below:
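A minimal sketch of that loop, assuming a personal access token passed as the basic-auth password; the organization and project names are placeholders:

import requests

ORGANIZATION = "your-org"        # placeholder
PROJECT = "your-project"         # placeholder
PAT = "<personal access token>"  # placeholder; used as the basic-auth password

def list_all_builds():
    """Page through Builds - List using $top and the x-ms-continuationtoken response header."""
    url = f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}/_apis/build/builds"
    params = {"api-version": "5.1", "$top": 50}
    builds = []
    while True:
        resp = requests.get(url, params=params, auth=("", PAT))
        resp.raise_for_status()
        builds.extend(resp.json()["value"])
        token = resp.headers.get("x-ms-continuationtoken")
        if not token:
            break                               # no header means this was the last page
        params["continuationToken"] = token     # ask for the next page
    return builds

print(len(list_all_builds()))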
You could check this similar thread for some more details.
I think for most of the APIs you have $top/$skip query parameters. You can use these parameters to do pagination. Let's say the default run gives 200 documents in the response; for the next run, skip those 200 by providing $skip=200 in the query parameters of the request to get the next 200 items. You can keep iterating until the count attribute of the response becomes 0 (see the sketch below).
For those APIs where you don't have these parameters, you can use the continuation token, as mentioned by Leo Liu-MSFT.
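A minimal sketch of the $top/$skip approach; the personal access token, endpoint, and api-version below are placeholders, so substitute an API that actually documents these parameters:

import requests

ORGANIZATION = "your-org"        # placeholder
PAT = "<personal access token>"  # placeholder

def fetch_all(url, page_size=200):
    """Page through an endpoint that supports $top/$skip, stopping when a page comes back empty."""
    items, skip = [], 0
    while True:
        params = {"api-version": "6.0", "$top": page_size, "$skip": skip}
        resp = requests.get(url, params=params, auth=("", PAT))
        resp.raise_for_status()
        page = resp.json().get("value", [])
        if not page:            # count is 0: nothing left to fetch
            break
        items.extend(page)
        skip += page_size
    return items

# Hypothetical usage; substitute an endpoint (and api-version) that actually documents $top/$skip.
teams = fetch_all(f"https://dev.azure.com/{ORGANIZATION}/_apis/teams")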
It looks like you can pass $top and continuationToken to list Azure Git Refs.
The documentation is here:
https://learn.microsoft.com/en-us/rest/api/azure/devops/git/refs/list?view=azure-devops-rest-6.0
I've been looking around for a bit now and can't seem to find anything related.
I'm trying to get a user's "presence" object, or something similar, for a full day.
I've tried to use delta on the presence call, but it doesn't seem to be supported at the moment.
This feature is under the /beta version of Microsoft Graph.
In Microsoft Graph, to get a user's presence information you need the delegated permissions Presence.Read and Presence.Read.All, and the HTTP GET request looks like this:
GET https://graph.microsoft.com/beta/users/66825e03-7ef5-42da-9069-724602c31f6b/presence
The output of the above query is shown below:
{
  "@odata.context": "https://graph.microsoft.com/beta/$metadata#users('66825e03-7ef5-42da-9069-724602c31f6b')/presence/$entity",
  "id": "3ec9bb05-dd2e-4b36-87b0-3a855f4b82ed",
  "availability": "Offline",
  "activity": "Offline"
}
Please refer to Microsoft documentation for more details.
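Since delta queries are not supported for presence, a full-day view has to be assembled client-side by sampling. Below is a minimal polling sketch (not an official pattern), assuming a helper that returns a fresh delegated token (such tokens expire after roughly an hour) and an illustrative 5-minute interval:

import time
import requests

USER_ID = "66825e03-7ef5-42da-9069-724602c31f6b"
URL = f"https://graph.microsoft.com/beta/users/{USER_ID}/presence"

def get_token():
    """Placeholder: return a current delegated token with Presence.Read.All,
    refreshed outside this sketch."""
    return "<access token>"

samples = []
for _ in range(24 * 12):  # one sample every 5 minutes for 24 hours (illustrative)
    resp = requests.get(URL, headers={"Authorization": f"Bearer {get_token()}"})
    if resp.status_code == 429:
        time.sleep(int(resp.headers.get("Retry-After", "30")))
        continue
    resp.raise_for_status()
    body = resp.json()
    samples.append((time.time(), body["availability"], body["activity"]))
    time.sleep(300)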
We have a registered AAD application marked as multi-tenant. We are using this App ID to generate a Token for Microsoft Graph.
The first user is a Global Admin in the Tenant where the app is registered.
The second user is part of another Tenant.
When the second user tries to use Microsoft Graph to get information from OneDrive, we sometimes get an HTTP 429 activityLimitReached error.
We read the guidance about throttling, and it says to repeat the request after the interval given in the Retry-After response header. But in our case there is no Retry-After field in the response.
We received this error even though we were executing only one request per day. Also, after receiving the 429, we can retry and get a successful result (after several attempts). This error appears only for OneDrive; the other services are fine.
What can we do to avoid 429 error? How can we check the current limit or increase it?
Example of request
GET https://graph.microsoft.com/v1.0/users/:userId/drives
Example of response
HTTP/1.1 429
Cache-Control: private
Transfer-Encoding: chunked
Content-Type: application/json
request-id: 377d2cdf-7be3-4286-819a-46060330365f
client-request-id: 377d2cdf-7be3-4286-819a-46060330365f
x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"West Europe","Slice":"SliceA","Ring":"4","ScaleUnit":"000","Host":"AGSFE_IN_13","ADSiteName":"AMS"}}
Duration: 170.5668
Strict-Transport-Security: max-age=31536000
Date: Wed, 23 May 2018 11:39:08 GMT
{
  "error": {
    "code": "activityLimitReached",
    "message": "The request has been throttled",
    "innerError": {
      "request-id": "377d2cdf-7be3-4286-819a-46060330365f",
      "date": "2018-05-23T11:39:09"
    }
  }
}
What can we do to avoid 429 error? How can we check the current limit or increase it?
To avoid the 429 error, we must control our requests and not make too many requests within a limited time. The limit is a known issue, and we cannot increase it at the moment.
Setting and publishing exact throttling limits sounds very straightforward, but in fact, it's not the best way to go. We continually monitor resource usage on SharePoint Online. Depending on usage, we fine-tune thresholds so users can consume the maximum number of resources without degrading the reliability and performance of SharePoint Online.
The quote above is from the Microsoft documentation about throttling and OneDrive for Business/SharePoint: https://learn.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
I would suggest going to UserVoice for Graph and suggesting an improvement (or upvoting an existing one). The feedback helps the Product Group prioritize future work based on the interest in those suggested improvements. But based on the official docs above, the best solution is still to control our requests rather than rely on a feature request; a retry sketch follows below.
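A minimal retry sketch, assuming the requests library and a token obtained elsewhere; since Retry-After is sometimes missing (as in the question), it falls back to exponential backoff with arbitrary delays:

import time
import requests

def get_with_backoff(url, headers, max_retries=5):
    """GET with retries: honour Retry-After when present, otherwise back off exponentially."""
    delay = 2
    for _ in range(max_retries):
        resp = requests.get(url, headers=headers)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Retry-After is not always returned (as in the question), so fall back to exponential backoff.
        time.sleep(int(resp.headers.get("Retry-After", delay)))
        delay *= 2
    raise RuntimeError("Still throttled after {} retries: {}".format(max_retries, url))

# Hypothetical usage, assuming access_token and user_id are obtained elsewhere:
# drives = get_with_backoff(
#     f"https://graph.microsoft.com/v1.0/users/{user_id}/drives",
#     {"Authorization": f"Bearer {access_token}"},
# )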
I do not have much experience with the OneDrive API, but I have certainly experienced throttling when using the OneNote API. This post on how OneNote throttles may be more useful than the generic Microsoft Graph link in your question. In particular, long calls against the API will result in throttling occurring much sooner than more targeted calls (and you need to let calls finish before issuing new calls; be very careful with queueing multiple curl requests). Once you have been throttled, trying the same call over and over will likely increase the length of time the throttling lasts (I haven't seen a full day, but I have seen several hours).
I presume (but am not sure) that all Microsoft Graph API calls count internally towards resource limits, but if you direct OneDrive calls against the OneDrive API itself, they will not count against the Microsoft Graph limits, which may allow the requests you need without throttling.
I would definitely recommend getting a second access token for the direct OneDrive API (you can use the same refresh token) and trying this approach:
GET https://www.onedrive.com/v1.0/users/:userId/drives
If it is one user in particular that has issues, maybe they have exceeded their tenant's resources?
I have a use case where I need to poll the OneNote API approximately every minute in order to respond to text added to pages by the user.
(Aside: I'd LOVE to use webhooks to get notifications only when something changes, but that's only supported for consumer notebooks at this time, as far as I can tell.)
Polling at this frequency works for a few users (5 or so), but with more users who have authorized the same Microsoft application, the app seems to hit an application-level rate limit and begins receiving 429 Too Many Requests responses.
How can I ensure polling will still work as the number of users grows? And are there any rate limits that can be made public or shared in confidence for valid use cases?
So it is possible to register for webhooks on the SharePoint notebooks as OneDrive items: as a notebook page gets updated, the notificationUrl fires, and you can then use delta calls to determine which sections (the .one section files) have been updated, as sketched at the end of this answer.
I would then use the onenote-api to get the pages in the updated notebook sections GET https://www.onenote.com/api/v1.0/me/notes/sections/{id}/pages
An alternative would be to treat the SharePoint drive as a WebDAV server and use the PROPFIND method with the getlastmodified property to poll the drive and determine which sections of the various notebooks have been updated.
But I agree it would be easier if OneNote webhooks were extended to SharePoint.
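A minimal sketch of the webhook-plus-delta idea above, assuming a suitable access token and the drive id of the SharePoint document library (both placeholders); mapping a changed .one file to a OneNote section id for the pages call is left out:

import requests

ACCESS_TOKEN = "<token with Files.Read.All / Notes.Read.All>"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
DRIVE_ID = "<drive id of the SharePoint document library>"     # placeholder

def changed_section_files(delta_link=None):
    """After a webhook notification fires, walk the drive delta feed and
    return the .one section files that changed, plus the new delta link."""
    url = delta_link or f"https://graph.microsoft.com/v1.0/drives/{DRIVE_ID}/root/delta"
    changed = []
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        changed += [item for item in data.get("value", []) if item.get("name", "").endswith(".one")]
        url = data.get("@odata.nextLink")
        delta_link = data.get("@odata.deltaLink", delta_link)
    # Persist delta_link so the next notification only returns new changes.
    return changed, delta_link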
I'm currently running a web application that uses the Microsoft Graph API, and today we encountered the following error, which severely impacted our application for a whole day:
"error": {
"code": "ErrorTooManyObjectsOpened",
"message": "Too many concurrent connections opened., The process failed to get the correct properties.",
"innerError": {
"request-id": "removed",
"date": "2017-12-13T17:01:14"
}
}
Please note that the request-id was removed.
Let me summarize what our web application does.
Basically, we have two email folders to which we are actively subscribed: Junk and Folder A.
If anything hits Folder A, we strip the body of the email message and then move the message to Folder B. The subscription on our Junk folder also strips the body and sends those messages over to Folder B.
Sometimes the webhook subscription service skips messages that arrive at the same time, so we have two cron jobs on our server that run a script and check Junk/Folder A for any messages every 5 minutes; my assumption is therefore that the cron jobs run about 288 * 2 times per day. Not counting our subscription to the folders, we usually receive around 200-300 email messages per day.
Unfortunately, Microsoft Graph's error codes page does not provide any explanation of this code. I would really appreciate it if anyone could explain what it means and how to avoid it.
This is occurring because your application is exceeding the throttling thresholds.
There are several different throttling metrics that can affect Microsoft Graph requests. For a high-level overview, see the Microsoft Graph throttling guidance. Since in this case you're hitting Exchange Online via Graph, you can find more specific information from What throttling values do I need to take into consideration? in the Exchange documentation.
Architecturally, you are making a lot of unnecessary calls into the API. Rather than having both a subscription and a scheduled job, you should use just the webhook subscription and the /delta endpoint.
Each call to the /delta endpoint gives you a token that can be used to fetch any changes to a given resource since the token was issued. So regardless of whether 1 email came in or 1,000, you only get the new emails.
Once you're using /delta to find your changes, you then use the webhook only as a "trigger". When you receive a notification, you can ignore its contents and instead issue a request to /delta. This ensures that you capture every incoming email, even if you didn't receive a separate webhook notification for each one. A sketch of this pattern follows.
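A minimal sketch of that pattern, assuming the requests library, a token acquired elsewhere, and placeholder folder ids; the body-stripping step is application-specific and omitted:

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token with Mail.ReadWrite>"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
FOLDER_A_ID = "<id of Folder A>"              # placeholder
FOLDER_B_ID = "<id of Folder B>"              # placeholder

def process_and_move(message):
    """Application-specific handling: strip the body (omitted), then move the message to Folder B."""
    requests.post(
        f"{GRAPH}/me/messages/{message['id']}/move",
        headers=HEADERS,
        json={"destinationId": FOLDER_B_ID},
    ).raise_for_status()

def on_notification(delta_link=None):
    """Called whenever a change notification arrives: ignore its contents and ask /delta what changed."""
    url = delta_link or f"{GRAPH}/me/mailFolders/{FOLDER_A_ID}/messages/delta"
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        for message in data.get("value", []):
            process_and_move(message)
        url = data.get("@odata.nextLink")
        delta_link = data.get("@odata.deltaLink", delta_link)
    return delta_link  # persist and pass back in on the next notification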
There is a bug: after making 500 message move requests, a "cannot copy/move" error occurs, and subsequently a "429: Too many concurrent connections opened" error occurs. Most applications miss the first error because you continually get the 429 error afterwards.
If you let the application "rest" for 30 minutes, the throttle resets itself and you can continue. I do not think there is a time window attached to the 500 moves; my application did 500 moves over 6.5 hours and then we started getting the error.
And if you keep retrying your move call before the 30-minute rest period has passed, it never resets. Also, in the response, the Retry-After value is null, so that doesn't help you.
If you find a workaround, please let me know. We are trying a few things, like setting a category and then manually moving the messages. I am also investigating creating a rule that moves them for us, or some other job; I cannot find a way to execute a rule from the Graph API.
See this link for more information: Outlook API Throttling documentation #144. Also, the more people who report having this issue, the sooner it can hopefully be resolved.