How to change Microsoft Graph requests to avoid throttling of Excel requests - microsoft-graph-api

I am calling the Microsoft Graph REST API from Node.js (JavaScript). A GET operation for a single, empty cell comes back with a 429 status code and the error "TooManyRequests - The server is busy. Please try again later." Another SO question [ Microsoft Graph throttling Excel updates ] has answers that point to MS documentation about making smaller requests. Unfortunately, the suggestions are rather vague.
My question is: does the size of the file in OneDrive have an impact on throttling? The file I am attempting to update is over 4 MB in size. However, the updates (PATCH) that I have attempted are only 251 bytes (12 cells), and I continue to get the error. Even a GET for a single cell receives it. This happened after 72 hours of inactivity. I am using a business account, and unfortunately MS support will not help, as they will only speak to admins.
Assuming this is an unrelated issue: since I have about 3,500 rows (of about 12 columns) to update, what is the best "chunk size" to update them in? Is 50 OK? Is 100? Thank you!
NOTE: This same throttling happens in the Graph Explorer, not just via code. Also, there is no Retry-After field returned in the Response Headers.
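In case it helps frame the question, here is a minimal sketch of the retry pattern I am experimenting with in the meantime (Node 18+ global fetch; the token and item ID are placeholders, and the backoff values are guesses since no Retry-After header is returned):

// Sketch: retry a Graph request with exponential backoff when a 429 comes back.
// ACCESS_TOKEN and ITEM_ID are placeholders.
const ACCESS_TOKEN = process.env.GRAPH_TOKEN;
const ITEM_ID = '{ITEM_ID}';

async function graphGetWithRetry(url, maxRetries = 5) {
  let delayMs = 2000; // starting delay, doubled after each 429
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
    });
    if (res.status !== 429) return res.json();
    // Honor Retry-After if it ever shows up; otherwise back off exponentially.
    const retryAfter = res.headers.get('retry-after');
    await new Promise((r) => setTimeout(r, retryAfter ? Number(retryAfter) * 1000 : delayMs));
    delayMs *= 2;
  }
  throw new Error('Still throttled after retries');
}

// Example: read the single cell that is currently being rejected.
graphGetWithRetry(
  `https://graph.microsoft.com/v1.0/me/drive/items/${ITEM_ID}/workbook/worksheets('Sheet1')/range(address='A2')`
).then(console.log).catch(console.error);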

Related

Quota exceeded for quota metric 'Requests' and limit 'Requests per minute' of service 'mybusinessbusinessinformation.googleapis.com' for consumer

I'm trying to collect and update data using the Business Information API.
In order to get the API calls to work, I am only trying to get information from my own business using GET requests. However, when calling several methods, I keep receiving the following error:
"Quota exceeded for quota metric 'Requests' and limit 'Requests per minute' ".
This happens both in Postman calls and in the OAuth 2.0 Playground (which, in my eyes, should be a sandbox ready for testing - very frustrating…).
When I look at my quota in the API settings, I am not even able to change the requests per minute to anything other than '0'. This makes it really hard to test/use the API.
I can't even find out which categories there are for a business location… 
For your information: I have already asked for an increase of the quota using the forms, but it seems Google isn't very responsive in this matter.
Can this be solved?
The API shall be used to update a group of 50 (or more) locations, instead of bulk editing with a CSV file.
Any help would be welcome.
Thanks in advance,
Kind Regards,
Seppe
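For context, the kind of read-only call that hits the per-minute quota looks roughly like this (a sketch only; the endpoint and readMask reflect my reading of the Business Information API docs, and the account ID and OAuth token are placeholders):

// Sketch: list locations for an account; the per-minute quota error surfaces as HTTP 429.
// ACCOUNT_ID and ACCESS_TOKEN are placeholders.
const ACCESS_TOKEN = process.env.GOOGLE_OAUTH_TOKEN;
const ACCOUNT_ID = 'accounts/1234567890';

async function listLocations() {
  const url = `https://mybusinessbusinessinformation.googleapis.com/v1/${ACCOUNT_ID}/locations?readMask=name,title`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  if (res.status === 429) {
    throw new Error("Quota exceeded for quota metric 'Requests'");
  }
  return res.json();
}

listLocations().then(console.log).catch(console.error);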
If the quota approval form was ignored, you might still have a chance via the API support (https://support.google.com/business/contact/api_default).
They might be reluctant to grant you a quota if your maximum location count is this low though - the API is designed for larger use cases.
Is it documented anywhere that it's meant for larger users? I got approved being very clear it was only for a handful of locations.
BUT even though I got approved and have access there are 3 specific quotas (all per-minute) that are set to zero, even though I have tonnes of allowance for all the non-per-minute quotas. Seems like a bug to me.
I can make 10000 "Update Location requests per day" but zero per minute.

Youtube Data API Wrongly Calculated, Quota Exceeded

I have a very simple setup using the v3 YouTube Data API to get the list of comments. I am just fetching the list of videos and then fetching the comments (at a frequency of 5 seconds) to get updated messages, using the page token as needed to minimize the load and computation.
Today, after some time of internally testing the application, I started getting the quota exceeded exception. I know YouTube provides 10,000 units by default, and since reading the comments (and videos as well) costs just 1 unit, I expected to see similar numbers.
However, the data is wrongly calculated.
Following are the request details:
As you can see, there are 2,895 total requests to LiveChatMessages -> List.
However, when I go to IAM -> Quotas, it showed 14k earlier, then 12.6k in quota usage.
There seems to be some problem either with the computation or with the documentation that defines the units for queries. Can someone help, please?
PS: I am just using the two APIs mentioned above in the screenshot. Both are list operations.
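For reference, the polling loop is essentially the following (a simplified sketch; the API key and live chat ID are placeholders, and auth is reduced to an API key here):

// Sketch: poll liveChatMessages.list with nextPageToken so only new messages are fetched,
// waiting for the server-suggested pollingIntervalMillis instead of a fixed 5 seconds.
const API_KEY = process.env.YT_API_KEY;
const LIVE_CHAT_ID = 'YOUR_LIVE_CHAT_ID';

async function pollMessages(pageToken) {
  const url = new URL('https://www.googleapis.com/youtube/v3/liveChat/messages');
  url.searchParams.set('liveChatId', LIVE_CHAT_ID);
  url.searchParams.set('part', 'snippet,authorDetails');
  url.searchParams.set('key', API_KEY);
  if (pageToken) url.searchParams.set('pageToken', pageToken);

  const res = await fetch(url);
  const data = await res.json();
  if (data.error) throw new Error(JSON.stringify(data.error));

  for (const item of data.items) {
    console.log(item.snippet.displayMessage);
  }
  const wait = data.pollingIntervalMillis || 5000;
  setTimeout(() => pollMessages(data.nextPageToken).catch(console.error), wait);
}

pollMessages().catch(console.error);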
As you can see, there are 2,895 total requests to LiveChatMessages -> List. However, when I go to IAM -> Quotas, it showed 14k earlier, then 12.6k in quota usage.
Yes, I can see that there are 2,895 requests, but how do you know what the quota costs are for those requests? You are using the YouTube Live Streaming API for those requests, not the YouTube Data API.
There is no documentation of the quota cost for the YouTube Live Streaming API calls. If Google says you used all your quota, then you probably have.
I would post an issue over on the issue forum asking them to document the quota cost for those calls: Issue forum.

Throttling of OneNote (Graph) API

We have developed an importing solution for one of our clients. It parses and converts data contained in many OneNote notebooks, to required proprietary data structures, for the client to store and use within another information system.
There is a substantial amount of data across many notebooks, requiring a considerable number of Graph API queries to be performed in order to retrieve all of the data.
In essence, we built a bulk-importing (batch process, essentially) solution, which goes through all OneNote notebooks under a client's account, parses sections and pages data of each, as well as downloads and stores all page content - including linked documents and images. The linked documents and images require the most amount of Graph API queries.
When performing these imports, the Graph API throttling issue arises. After a certain time, even though we are sending queries at a relatively low rate, we start getting 429 errors.
Regarding data volume, the average section size of a client notebook is 50-70 pages. Each page contains links to about 5 documents for download, on average. Thus, it can take up to 70 + 350 = 420 requests to retrieve all the page content and files of a single notebook section. And our client has many such sections in a notebook. In turn, there are many notebooks.
In total, there are approximately 150 such sections across several notebooks that we need to import for our client. Considering the stats above, this means that our import needs to make a total of 60000-65000 Graph API queries, estimated.
To avoid flooding the Graph API service and to stay within the throttling limits, we have experimented a lot and gradually decreased our request rate to just 1 query every 4 seconds. That is, at most 900 Graph API requests are made per hour.
This already makes each section import noticeably slow - but it is endurable, even though it means that our full import would take up to 72 continuous hours to complete.
However - even with our throttling logic implemented at this rate and proven working, we still get 429 "too many requests" errors from the Graph API after about 1 hr 10 min, roughly 1,100 consecutive queries. As a result, we are unable to proceed with our import on all the remaining, unfinished notebook sections. This lets us import only a few sections consecutively, after which we have to wait for some random while before we can manually attempt to continue the importing again.
So this is the problem we seek help with - especially from Microsoft representatives. Can Microsoft provide a way for us to perform this import of 60,000-65,000 pages and documents at a reasonably fast query rate, without getting throttled, so we could get the job done in a continuous batch process for our client? For example, a separate access point (dedicated service endpoint), perhaps time-constrained, e.g. configured for our use within a certain period - so that within that period we could perform all the necessary imports?
For additional information - we currently load the data using the following Graph API URLs (placeholders for the actual values are shown in uppercase letters between curly braces):
Pages under the notebook section:
https://graph.microsoft.com/v1.0/users/{USER}/onenote/sections/{SECTION_ID}/pages?...
Content of a page:
https://graph.microsoft.com/v1.0/users/{USER}/onenote/pages/{PAGE_ID}/content
A file (document or image) eg link from the page content:
https://graph.microsoft.com/v1.0/users/{USER}/onenote/resources/{RESOURCE_ID}/$value
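To illustrate the access pattern (not our production code), a rough sketch of the throttled, sequential retrieval of a single section - one request every 4 seconds - looks like this; the token, user and section ID are placeholders:

// Sketch: list a section's pages, then fetch each page's content, at ~1 request per 4 s.
// ACCESS_TOKEN, USER and SECTION_ID are placeholders.
const ACCESS_TOKEN = process.env.GRAPH_TOKEN;
const USER = 'user@contoso.com';
const SECTION_ID = '{SECTION_ID}';
const DELAY_MS = 4000;

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function graphGet(url) {
  const res = await fetch(url, { headers: { Authorization: `Bearer ${ACCESS_TOKEN}` } });
  if (res.status === 429) throw new Error('Throttled (429)');
  return res;
}

async function importSection() {
  // 1. List the pages of the section.
  const pagesRes = await graphGet(
    `https://graph.microsoft.com/v1.0/users/${USER}/onenote/sections/${SECTION_ID}/pages`
  );
  const pages = (await pagesRes.json()).value;

  // 2. Fetch each page's HTML content, one request every DELAY_MS.
  for (const page of pages) {
    await sleep(DELAY_MS);
    const contentRes = await graphGet(
      `https://graph.microsoft.com/v1.0/users/${USER}/onenote/pages/${page.id}/content`
    );
    const html = await contentRes.text();
    // 3. Resource links found in the HTML would be fetched here, each preceded by the same delay.
    console.log(page.title, html.length);
  }
}

importSection().catch(console.error);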
Which call is most likely to cause the throttling?
What can you retrieve before throttling - just page IDs (150 calls total) or page IDs + content (10,000 calls)? If the latter, can you store the results (e.g. in a SQL database) so that you don't have to make these calls again?
If you can get page IDs + content, can you then access the resources using preAuthenticated=true? (Maybe this is less likely to be throttled.) I don't actually download images for offline use, as I usually deal with ink or print.
I find the OneNote API is very sensitive to multiple calls being made without waiting for them to complete; I find more than 12 simultaneous calls via a curl multi technique problematic. Once you get throttled, if you don't back off immediately, you can be throttled for a long, long time. I usually have my scripts bail if I get too many 429s in a row (I have it set to 10 consecutive 429s, after which it bails for 10 minutes).
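A minimal sketch of that bail-out logic (the 10-consecutive-429 and 10-minute figures are the ones I use; everything else is illustrative):

// Sketch: count consecutive 429s and pause for 10 minutes once the limit is hit.
let consecutive429s = 0;
const MAX_429S = 10;
const PAUSE_MS = 10 * 60 * 1000;

async function fetchWithBail(url, headers) {
  const res = await fetch(url, { headers });
  if (res.status === 429) {
    consecutive429s++;
    if (consecutive429s >= MAX_429S) {
      console.warn('Too many 429s in a row - pausing for 10 minutes');
      await new Promise((r) => setTimeout(r, PAUSE_MS));
      consecutive429s = 0;
    }
    return null; // caller decides whether to retry later
  }
  consecutive429s = 0; // reset on any successful response
  return res;
}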
We now have the solution released and working in production. It turns out that adding ?preAuthenticated=true to the page content requests indeed returns the page content with resource links (for contained documents and images) in a different format. Querying these resource links then, as it seems, does not impact the API throttling counters - we have had no 429 errors since.
We even managed to bring the call rate down to one request every 2 seconds instead of 4, without any problems. So I have marked codeeye's answer as the accepted one.
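For anyone landing here later, the change amounts to roughly this (a sketch in an ES module, reusing the placeholder names from the question; preAuthenticated=true on the content request is what made the difference for us):

// Sketch: request page content with pre-authenticated resource links.
// USER, PAGE_ID and ACCESS_TOKEN are placeholders.
const contentUrl =
  `https://graph.microsoft.com/v1.0/users/${USER}/onenote/pages/${PAGE_ID}` +
  '/content?preAuthenticated=true';

const res = await fetch(contentUrl, {
  headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
});
// The returned HTML contains pre-authenticated resource URLs that can be downloaded
// directly, apparently without counting against the throttling limits.
const html = await res.text();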

Microsoft Graph throttling Excel updates

I am developing a Node.js app which connects to the Microsoft Graph API.
Often times, I get back a 429 status code, which is described as "Too Many Requests" in the error documentation.
Sometimes the message returned is:
TooManyRequests. Client application has been throttled and should not attempt to repeat the request until an amount of time has elapsed.
Other times, it returns:
"TooManyRequests. The server is busy. Please try again later.".
Unfortunately, it is not returning a Retry-After field in the headers, even though their best practices claims that it should do so.
This is entirely in development, and I have not been hitting the service much, as it has all been during debugging. I realize Microsoft is often changing how this works. I just find it difficult to develop an app around a service which does not even provide a Retry-After field, and seems to have a lot of problems (I am using the v1.0 endpoint).
When I wait 5 minutes (as I have seen recommended), the service still errors out. Here is an example response:
{
  "error": {
    "code": "TooManyRequests",
    "message": "The server is busy. Please try again later.",
    "innerError": {
      "request-id": "d963bb00-6bdf-4d6b-87f9-973ef00de211",
      "date": "2017-08-31T23:09:32"
    }
  }
}
Could this relate at all to the operation being carried out?
I am updating a range from A2:L3533. They are all text values. I am wondering if this could impact the throttling. I have not found any guidance regarding using "smaller" operation sets.
Without seeing your code, it is hard to diagnose exactly what is going on. That said, your Range here is enormous and will almost certainly result in issues.
From the documentation:
Large Range implies a Range of a size that is too large for a single API call. Many factors such as number of cells, values, numberFormat, and formulas contained in the range can make the response so large that it becomes unsuitable for API interaction. The API makes a best attempt to return or write to the requested data. However, the large size involved might result in an API error condition because of the large resource utilization.
To avoid this, we recommend that you read or write for large Range in multiple smaller range sizes.
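To make that concrete, here is one way the A2:L3533 update from the question might be split up (a sketch; the 100-row chunk size is arbitrary, and the item ID, sheet name and token are placeholders):

// Sketch: PATCH a large range in 100-row chunks instead of one huge request.
// rows is an array of row arrays (columns A..L); ACCESS_TOKEN and ITEM_ID are placeholders.
const ACCESS_TOKEN = process.env.GRAPH_TOKEN;
const ITEM_ID = '{ITEM_ID}';
const CHUNK_ROWS = 100;

async function updateInChunks(rows) {
  for (let i = 0; i < rows.length; i += CHUNK_ROWS) {
    const chunk = rows.slice(i, i + CHUNK_ROWS);
    const firstRow = 2 + i;                       // data starts at row 2
    const lastRow = firstRow + chunk.length - 1;
    const address = `A${firstRow}:L${lastRow}`;

    const res = await fetch(
      `https://graph.microsoft.com/v1.0/me/drive/items/${ITEM_ID}` +
        `/workbook/worksheets('Sheet1')/range(address='${address}')`,
      {
        method: 'PATCH',
        headers: {
          Authorization: `Bearer ${ACCESS_TOKEN}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ values: chunk }),
      }
    );
    if (res.status === 429) {
      // No Retry-After header is guaranteed, so wait a fixed 10 s and retry this chunk.
      await new Promise((r) => setTimeout(r, 10000));
      i -= CHUNK_ROWS;
    }
  }
}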

Fraction of Budget and Application request limit reached

I am a little confused on the Facebook rate limits and need some clarification.
To my knowledge, each application gets 100 million API calls per day, and each access token gets 600 calls per second.
According to Insights, I am currently making about 500K calls per day in total for my application; however, I am receiving a large number of "Application request limit reached" errors. Also in Insights, I see a table with a column called "Fraction of Budget". Four of the endpoints listed there are over 100% (one is around 3000%).
Does Facebook also limit per endpoint, and is there any way to make sure I don't receive these "Application request limit reached" errors? To my knowledge, I'm not even close to the 100M API calls per day per application that Facebook lists as the upper limit.
EDIT: As a clarification, I am receiving error code 4 (API Too Many Calls), not error code 17 (API User Too Many Calls). https://developers.facebook.com/docs/reference/api/errors/
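For completeness, the two codes can be told apart from the error object in the JSON response, roughly like this (a sketch; the path and token are placeholders, and the one-hour back-off is just an example):

// Sketch: check whether a Graph API failure is app-level (code 4) or user-level (code 17)
// throttling, and back off accordingly.
async function callGraph(path, accessToken) {
  const res = await fetch(`https://graph.facebook.com/${path}?access_token=${accessToken}`);
  const data = await res.json();
  if (data.error) {
    if (data.error.code === 4) {
      // Application request limit reached: every call made by this app is affected.
      console.warn('App-level throttling - backing off for an hour');
      await new Promise((r) => setTimeout(r, 60 * 60 * 1000));
    } else if (data.error.code === 17) {
      // User request limit reached: only this access token is affected.
      console.warn('User-level throttling for this token');
    }
    throw new Error(data.error.message);
  }
  return data;
}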
