I keep getting:
Response:
{"errors":[{"message":"You have made too many requests recently. Please, be chill."}]}
When trying to download all my tasks - is there a published QPS or other quota limit so I know how long I should pause between requests?
(I work at Asana)
As stated in the documentation, the current request limit is around 100 / minute. The error response you are getting back also contains a Retry-After header which contains the number of seconds you must wait until you can make a request again.
We may also institute a daily limit at some point in the future -- we think 100 / minute is a reasonable burst rate, but not a reasonable sustained rate throughout the day. However, we are not enforcing a daily limit yet.
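For example, a client-side retry that honors Retry-After could look roughly like this (a Python sketch; `fetch` is a stand-in for whatever HTTP call you make, not an Asana SDK function, and it's written to return the parsed status and header so the retry logic is testable):

```python
import time

def fetch_with_retry(fetch, sleep=time.sleep, max_attempts=5):
    """Call fetch() until it succeeds, sleeping for the server-suggested
    Retry-After interval whenever the API reports a rate limit.

    fetch() must return (status_code, retry_after_seconds, body).
    """
    for _ in range(max_attempts):
        status, retry_after, body = fetch()
        if status != 429:
            return body
        # Fall back to 60 s (~100 requests/minute) if no Retry-After was given.
        sleep(retry_after if retry_after is not None else 60)
    raise RuntimeError("still rate limited after %d attempts" % max_attempts)
```

Injecting `sleep` is just a convenience so the backoff behavior can be unit-tested without real waiting.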
Related
I receive this error while trying to export from my datagrid to Google Sheets. How can I solve it?
Don't make too many requests too quickly.
You are either exceeding your quota or you are making too many requests too quickly.
Also, look into batch requests:
https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/batchUpdate
You may be trying to make a call to the API for every single cell updated, which is an easy way to run into the above error.
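For illustration, here is a rough Python sketch of how per-cell writes could be grouped into a single `spreadsheets.values.batchUpdate` body (the field names follow the batchUpdate reference linked above; actually sending the request with a client library is left out):

```python
def build_batch_update(updates):
    """Group per-cell writes into one batchUpdate request body.

    updates: iterable of (a1_range, value) pairs, e.g. ("Sheet1!A1", 42).
    Returns a dict shaped like the batchUpdate request body, so one HTTP
    call replaces N per-cell calls.
    """
    return {
        "valueInputOption": "USER_ENTERED",
        "data": [
            {"range": a1_range, "values": [[value]]}
            for a1_range, value in updates
        ],
    }
```

One batched call like this counts as a single write request against the quota, regardless of how many cells it touches.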
If you must do it on a cell by cell basis, you would have to insert a small delay between requests. Bear in mind that although the usage page says:
This version of the Google Sheets API has a limit of 500 requests per 100 seconds per project, and 100 requests per 100 seconds per user. Limits for reads and writes are tracked separately. There is no daily usage limit.
This does not mean that you can make 100 requests in 1 second and then wait 99 seconds. This will give you a quota error like what you are running into. You would have to put in a one second delay between requests, for example.
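For example, a simple client-side pacer could space requests evenly instead of bursting (a hypothetical Python sketch; `clock` and `sleep` are injectable only so the logic can be tested without real waiting):

```python
import time

def paced(requests_per_100s, clock=time.monotonic, sleep=time.sleep):
    """Return a wait() function that spaces calls evenly so an
    'N requests per 100 seconds' quota is never burst through."""
    interval = 100.0 / requests_per_100s  # e.g. 1 s for 100 req / 100 s
    next_allowed = clock()

    def wait():
        nonlocal next_allowed
        now = clock()
        if now < next_allowed:
            sleep(next_allowed - now)
        next_allowed = max(now, next_allowed) + interval

    return wait
```

Calling `wait()` before each API request guarantees roughly one request per second for the 100-per-100-seconds user quota, which avoids the burst problem described above.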
I'm currently developing a chat bot for one specific YouTube channel, which can already fetch messages from the currently active livechat. However I noticed my quota usage shooting up, so I took the "liberty" to calculate my quota cost.
My API call currently looks like this https://www.googleapis.com/youtube/v3/liveChat/messages?liveChatId=some_livechat_id&part=snippet,authorDetails&pageToken=pageTokenIfProvided, which uses up 5 units. I checked this by running one API call and comparing the quota usage before and after (so apologies if this is inaccurate). The response contains pollingIntervalMillis set to 5086 milliseconds. Currently, my bot adds that interval to the current datetime and schedules the next fetch at that time (using Celery), so it currently fetches messages every 4-6 seconds. I'm gonna take the liberty and always wait for 6 seconds.
Calculating my API quota would result in a usage of 72,000 units per day:
10 requests per minute * 60 minutes * 24 hours = 14,400 requests per day
14,400 requests * 5 units per request = 72,000 units per day
This means that if I used the pollingIntervalMillis as a guideline for how often to request, I'd easily reach the maximum quota of 10,000 units by running the bot for 3 hours and 20 minutes. In order not to use up the quota just by fetching chat messages, I would need to run about 1 API call per minute (roughly 1.3889). This is very infeasible for a chatbot, since this only covers fetching messages, not even sending any messages to the chat.
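As a sanity check on the arithmetic above, a tiny helper (purely illustrative, just restating the numbers in the question):

```python
def seconds_until_quota_exhausted(daily_quota, cost_per_request, poll_interval_s):
    """How long a bot polling every poll_interval_s seconds can run
    before exhausting its daily unit quota."""
    requests_allowed = daily_quota / cost_per_request
    return requests_allowed * poll_interval_s

# 10,000 units / 5 units per call = 2,000 calls; at one call every
# 6 seconds that is 12,000 seconds = 3 h 20 min, matching the estimate.
```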
So my question is: Is there maybe a more efficient way to fetch chat messages which won't use up the quota so much? Or will I only get this resolved by applying for a quota extension? And if this is only resolved by a quota extension, how much would I need to ask for reliably? Around 100k units? Even more?
I am also asking myself how something like Streamlabs Chatbot (previously known as AnkhBot) accomplishes this without hitting the quota limit, despite thousands of users using their API client; their quota must be in the millions or billions.
And another question would be how I'd actually fill out the form, if the bot is still in this "early" state of development?
You pretty much hit the nail on the head. Services like Streamlabs are owned by larger companies, in their case Logitech. They not only have the money to throw around for things like increasing their API quota, but they also have professional relationships with companies like Google to decrease their per unit cost.
As for efficiency, the API costs are easily found in the documentation, but for live chat as you've found, you're going to be hitting the API for 5 units per hit. The only way to improve your overall daily cost with your calls is to perform them less frequently. While once per minute is clearly excessively long, once every 15-18 seconds could reduce the overall cost of your API quota increase, while making the chat bot adequately responsive.
Of course that all depends on your desired usage of the data, but still a recommendation if you're implementing the bot still in the realm of hobbyist usage.
My application allows users to "link" their YouTube accounts to our system and then we allow them to upload videos to their channels both automatically in some cases, and on an "upload" action on their part. We have hundreds of videos being uploaded because we have thousands of users that use our system.
Today I saw hundreds of errors in my application when our batch automatic upload job was running. The errors were for:
quotaExceeded, video upload limit reached.
My API quotas are very high (50,000,000 per day, 3,000,000 per 100 seconds, 300,000 per 100 seconds per user), so if a video costs 1,600 points, my quota allows 30,000+ uploads per day, 1,800 videos per 100 seconds, and 180+ per 100 seconds per user.
I have seen other questions out there hinting at some newly enforced limit by Google to just 50 videos and then 1 video every 15 minutes? This is a very low limit for my application which has such a large quota! To be clear, at most I have seen my application handle 1,000 videos in a single day (1,600,000 points of my total 50,000,000 quota).
Is there any way I can get this artificial limit of 50 videos and then 1 per 15 minutes removed? This is a major block to my users' functionality for a very popular web app. I could understand if it were 50 videos per user and then 1 every 15 minutes, but I highly doubt my errors this morning were from one user trying to upload 300+ videos at a time. My system only uploads their "newest" videos each day, which most people only have 1-10 videos at max. Hundreds would be a very rare edge case.
You can actually confirm in your Developers Console the available quota for your application. By default, the YouTube API has a quota allocation of 1 million units per day, as mentioned in Quota usage. If you see that your usage has reached your quota limit, you can request additional quota on the Quotas tab.
Note also that all API requests, including invalid requests, incur a quota cost of at least one point. You may use the Quota Calculator to get an estimate of the quota cost for an API query.
On the other hand, to work efficiently with your quota, and if you haven't done so, I suggest that you implement exponential backoff if you're encountering a high error ratio. See this sample code which shows an exponential backoff strategy to resume a failed upload. Also, if applicable, subscribe to Push Notifications, which is much more efficient than polling-based solutions.
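For illustration, the delays of an exponential backoff with jitter could be generated like this (a generic Python sketch, not the linked Google sample; the base, cap, and jitter amounts are arbitrary choices):

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=64.0, rand=random.random):
    """Yield exponentially growing retry delays with jitter:
    base * 2^attempt plus up to one extra second of randomness,
    capped at `cap` seconds."""
    for attempt in range(max_retries):
        yield min(cap, base * (2 ** attempt) + rand())
```

You would sleep for each yielded delay between retries of a failed call, which spreads retry traffic out instead of hammering the API while it is already rejecting you.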
Check the documentation for more information on how a PubSubHubbub callback server receives Atom feed notifications when a channel does any of the following activities:
uploads a video
updates a video's title
updates a video's description
Hope that helps!
YouTube imposes the following quota cost limits (default values listed):
Queries per day = 1,000,000 units
Queries per 100 seconds per user = 300,000 units
Queries per 100 seconds = 3,000,000 units
What is the meaning of the last limit? How can the quota per 100 seconds exceed the total quota per day?
Here are the meanings of the different quotas in the YouTube Data API:
QPD (quota per day) - the maximum number of requests a client ID can make to the API over a 24-hour period.
QPS (quota per second) - a global per-second quota for the application, i.e. how many calls per second the application as a whole can make.
Quota per second per user - the number of queries a single user of the application can make.
The quota of 3,000,000 per 100 seconds does not exceed the 1M QPD, because you need to divide the 3M by 100.
That means you only have 30,000 queries per second.
I hope this information helps you.
I believe the "Queries per 100 seconds = 3,000,000" is inaccurate/a leftover/a mistake from Google's old query limits. Clearly 3,000,000 in 100 seconds is 3x your total per day and makes no sense!
The "old" limits used to be much higher:
50,000,000 Queries per day
The per 100 seconds limits, however, did not change (or at least were not updated properly).
I was also looking for this, as the documentation from YouTube is not clear at all. Some answers, such as #nightsurgex2's, hinted at the fact that only the daily limit counts, but I wanted to be sure before sending this to production, so I wrote and ran some custom tests. I'm not going to speak about the "per user" limits, as the application we are developing does not use this.
The test application just sends a lot of dummy requests (each worth 1 quota point) and breaks when API returns an error. Please keep in mind that this will exhaust your application quota for that day, so use a dummy project if you want to try anything similar. The results were:
Finished YouTube Data: 03/17/2022 15:24:13
Took: 51759.2994 ms
Total requests: 10451
Finished YouTube Analytics: 03/17/2022 15:29:29
Took: 16080.7929 ms
Total requests: 892
Finished YouTube Analytics: 03/17/2022 18:29:57
Took: 10478830.055 ms
Total requests: 98927
The limit for YouTube Data v3 is 10k points per day. The number we got is very close to this. Trying to send further requests, even hours later, will fail. There is no per-minute limit.
YouTube Analytics v2 enforces both a per-minute and a per-day limit. These values are consistent with Google's documentation and with the test above: 720 requests per minute, or 100k per day (reset at midnight US West Coast). If you go steadily at the maximum per-minute rate, you will exhaust your daily quota in around 2 hours and 20 minutes.
YouTube Data has a much smaller quota; however, you are not likely to use it as often. You only use it to get playlist IDs, video IDs, and the like. The bulk of data (viewers, demographics, subscribers, etc.) will come from Analytics. Even then, you may have to space out your requests or cache their results, depending on your needs.
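For what it's worth, the probe described above boils down to something like this (a simplified Python sketch; `send_request` is a stand-in for an actual 1-unit API call, which is why it is passed in rather than hard-coded):

```python
import time

def probe_quota(send_request, max_requests=200_000):
    """Send cheap requests until the API starts failing, and report how
    many succeeded and how long the run took.

    send_request() should return True on success and False once the API
    rejects the call with a quota error. Warning: this burns through the
    project's daily quota, so only run it against a dummy project.
    """
    start = time.monotonic()
    count = 0
    while count < max_requests and send_request():
        count += 1
    return count, time.monotonic() - start
```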
I am not clear about what the Twitter rate limit, "350 requests per hour per access token/user", means. How are they limiting the requests? How much data can I get in one request?
The rate limits are based on request, not the amount of data (e.g. bytes) you receive. With that in mind, you can maximize requests by using the available parameters of the particular endpoint you're calling. I'll give you a couple examples to explain what I mean:
One way is to set count, if supported, to the highest available value. On statuses/home_timeline, you can max out count at 200. If you aren't using it now, you're getting the default of 20, which means that you would (theoretically) need to do 10 more queries to get the same amount of data. More queries mean you eat up your rate limit faster.
Using statuses/home_timeline again, notice that you can page through data using since_id and max_id, as described in Working with Timelines. Essentially, you keep track of the tweets you already requested so you can save on requests by only getting the newest tweets.
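For illustration, paging with since_id/max_id could be sketched like this in Python (`fetch_page` is a stand-in for your Twitter client call; the id arithmetic follows the Working with Timelines pattern of requesting everything strictly older than the last tweet seen):

```python
def fetch_all_new(fetch_page, since_id, count=200):
    """Page through a timeline newest-to-oldest using max_id, stopping at
    tweets we already have (since_id), so no request is wasted.

    fetch_page(since_id, max_id, count) returns a list of dicts with an
    'id' key, newest first, like statuses/home_timeline.
    """
    tweets, max_id = [], None
    while True:
        page = fetch_page(since_id=since_id, max_id=max_id, count=count)
        if not page:
            return tweets
        tweets.extend(page)
        # Next page: everything strictly older than the oldest tweet seen.
        max_id = page[-1]["id"] - 1
```

Because every request carries `count=200` and the loop stops as soon as a page comes back empty, you spend the minimum number of requests needed to catch up.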
Rate limits are applied in 15-minute windows, so you can pace your requests to minimize the chance of running out in any given window.
Use a combination of Streams and requests, which increases your rate limit.
There are more optimizations like this that help you conserve your request limit, some more subtle than others. Looking at the rate limits per API, studying the parameters, and thinking about how the API is used can help you minimize rate-limit usage.