How can I solve the "Read Request Limit Error"? - google-sheets

I receive this error while trying to export from my datagrid to Google Sheets. How can I solve it?

Don't make too many requests too quickly: you are either exceeding your quota or bursting requests faster than the rate limit allows.
Also, look into batch requests:
https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/batchUpdate
You may be making an API call for every single cell you update, which is an easy way to run into the above error.
If you must do it on a cell-by-cell basis, you would have to insert a small delay between requests. Bear in mind that although the usage page says:
This version of the Google Sheets API has a limit of 500 requests per 100 seconds per project, and 100 requests per 100 seconds per user. Limits for reads and writes are tracked separately. There is no daily usage limit.
This does not mean that you can make 100 requests in 1 second and then wait 99 seconds; doing so will give you a quota error like the one you are running into. You would have to put in, for example, a one-second delay between requests.
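For illustration, here is a minimal Python sketch of both approaches, assuming the google-api-python-client library; the spreadsheet ID, ranges, and credentials file below are placeholders:

```python
import time

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/spreadsheets"],
)
service = build("sheets", "v4", credentials=creds)

# Preferred: one batchUpdate call that writes many ranges at once.
body = {
    "valueInputOption": "RAW",
    "data": [
        {"range": "Sheet1!A1:B2", "values": [[1, 2], [3, 4]]},
        {"range": "Sheet1!D1", "values": [["exported"]]},
    ],
}
service.spreadsheets().values().batchUpdate(
    spreadsheetId="YOUR_SPREADSHEET_ID", body=body
).execute()

# Fallback: if you must write cell by cell, throttle the loop.
for cell, value in [("Sheet1!A1", "a"), ("Sheet1!A2", "b")]:
    service.spreadsheets().values().update(
        spreadsheetId="YOUR_SPREADSHEET_ID",
        range=cell,
        valueInputOption="RAW",
        body={"values": [[value]]},
    ).execute()
    time.sleep(1)  # ~1 write/second stays well under 100 writes per 100 seconds
```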

Locust RPS does not match user count

Why does Locust not report RPS greater than or equal to the user count? As you can see from the images below, despite having 100 users, RPS never gets close to 100.
Furthermore, there seem to be dips in the graph when running with a high user count (1 million)
You can reach RPS equal to user count only if response time is exactly 1 second:
if response time is 500 ms, you will get 200 RPS
if response time is 2000 ms, you will get 50 RPS
and so on
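The underlying arithmetic, as a quick sketch (this assumes each user fires its next request immediately after the previous response, i.e. zero wait_time):

```python
# RPS ≈ user_count / response_time_in_seconds
users = 100
for response_time_s in (0.5, 1.0, 2.0):
    print(f"{response_time_s}s response time -> ~{users / response_time_s:.0f} RPS")
# 0.5s -> ~200 RPS, 1.0s -> ~100 RPS, 2.0s -> ~50 RPS
```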
Check out How do I Correlate the Number of (Concurrent) Users with Hits Per Second for a more comprehensive explanation if needed.
If you want to generate a load of 100 RPS, take a look at Locust issue 646 and choose the workaround you like the most.
In addition to the response time Dmitri mentioned, your code will also play a factor in the RPS you'll be able to hit. wait_time in particular can limit RPS by increasing the amount of time between one user finishing its tasks and another being spawned to replace it.
This answer has more details about wait_time's effect on response time, but most of that also applies to hitting an RPS target.
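As a rough sketch of how wait_time interacts with an RPS target, here is a minimal locustfile; constant_throughput (available in recent Locust versions) makes each user aim for a fixed number of task runs per second:

```python
from locust import HttpUser, constant_throughput, task

class ApiUser(HttpUser):
    # Each user targets 1 task run per second; with 100 users that aims
    # for ~100 RPS. If responses take longer than 1 s, users can't keep
    # up and actual RPS falls below the target.
    wait_time = constant_throughput(1)

    @task
    def index(self):
        self.client.get("/")
```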
For your second graph: the dips you mentioned, the wild swings in RPS, the general downward trend in RPS, and the upward trend in response time are most likely due to the system you're testing being unable to consistently handle the load you're throwing at it, with a bit of worker overload thrown in for good measure, especially at the higher end of the user count. Depending on your code, Locust may not be able to generate the 250,000 users you're after; it looks like it started falling behind after you hit 50,000 users. Each worker may only be able to easily maintain around 10,000 users, so you may need to change your code or increase the number of workers to get better performance. See the Locust FAQ for more details.

Using up too much quota or will I be fine?

I am using a simple script to pull the total number of views of a particular video onto a webpage.
As I want it as 'realtime' as possible, I have a meta tag that automatically refreshes the page every 60 seconds.
My question here is: I guess every time the page refreshes, that is seen as a new call and counts against my quota. As this is running 24/7, does that mean I will exceed my quota fairly quickly, given I will soon reach the 10,000 mark at this rate?
Or does each page refresh not count as a call?
I want to firstly ensure I don't go over quota and end up disabled, but more importantly not look like I'm completely taking the mick and get seen as a spammer of some sort.
If a page refresh makes an API call, then that API call counts against your quota.
Now, you say that your page refreshes at a rate of one Videos.list API endpoint call per minute.
Therefore, during a full day (24 hours), you'll have 24 * 60 = 1440 calls to Videos.list.
According to the official specification of Videos.list, each call to this endpoint has a quota cost of 1 unit.
Consequently, accounting only for those 1440 calls to Videos.list, the quota cost of your page refreshing amounts to 1440 units. That's well below the allocated daily quota of 10,000 units.
This also implies that the API will by no means consider you a spammer.
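The same arithmetic as a quick sanity check, using the figures cited above:

```python
CALLS_PER_MINUTE = 1     # one page refresh per minute
COST_PER_CALL = 1        # Videos.list costs 1 unit per call
DAILY_QUOTA = 10_000     # default daily allocation

daily_calls = CALLS_PER_MINUTE * 60 * 24   # 1440
daily_units = daily_calls * COST_PER_CALL  # 1440
print(f"{daily_units} of {DAILY_QUOTA} daily units used")
```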

What's the most efficient way to handle quota for the YouTube Data API when developing a chat bot?

I'm currently developing a chat bot for one specific YouTube channel, which can already fetch messages from the currently active live chat. However, I noticed my quota usage shooting up, so I took the "liberty" to calculate my quota cost.
My API call currently looks like this: https://www.googleapis.com/youtube/v3/liveChat/messages?liveChatId=some_livechat_id&part=snippet,authorDetails&pageToken=pageTokenIfProvided, which uses up 5 units. I checked this by running one API call and comparing the quota usage before and after (so apologies if this is inaccurate). The response contains pollingIntervalMillis set to 5086 milliseconds. Currently, my bot adds that interval to the current datetime and schedules the next fetch at that time (using Celery), so it fetches messages every 4-6 seconds. To be safe, I'll always wait the full 6 seconds.
Calculating my API quota would result in a usage of 72,000 units per day:
10 requests per minute * 60 minutes * 24 hours = 14,400 requests per day
14,400 requests * 5 units per request = 72,000 units per day
This means that if I used pollingIntervalMillis as a guideline for how often to request, I'd reach the maximum quota of 10,000 units after running the bot for just 3 hours and 20 minutes. To avoid using up the quota just on fetching chat messages, I would need to limit myself to roughly 1 API call per minute (approximately 1.3889). This is infeasible for a chatbot, since this covers only fetching messages, not even sending any messages to the chat.
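Here is that math as a short sketch, using the figures above:

```python
UNITS_PER_CALL = 5     # measured cost of one liveChat/messages call
DAILY_QUOTA = 10_000
CALLS_PER_MINUTE = 10  # polling every 6 seconds

units_per_day = CALLS_PER_MINUTE * 60 * 24 * UNITS_PER_CALL  # 72,000

# 200 minutes, i.e. about 3 hours 20 minutes until the quota is gone
minutes_until_exhausted = DAILY_QUOTA / (CALLS_PER_MINUTE * UNITS_PER_CALL)

# ≈ 1.3889 calls per minute to make the quota last a full day
sustainable_calls_per_minute = DAILY_QUOTA / UNITS_PER_CALL / (24 * 60)
```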
So my question is: Is there maybe a more efficient way to fetch chat messages which won't use up the quota so much? Or will I only get this resolved by applying for a quota extension? And if this is only resolved by a quota extension, how much would I need to ask for reliably? Around 100k units? Even more?
I am also asking myself how something like Streamlabs Chatbot (previously known as AnkhBot) accomplishes this without hitting the quota limit, despite thousands of users using their API client; their quota must be in the millions, or even billions.
And another question would be how I'd actually fill out the form, if the bot is still in this "early" state of development?
You pretty much hit the nail on the head. Services like Streamlabs are owned by larger companies, in their case Logitech. They not only have the money to throw around for things like increasing their API quota, but they also have professional relationships with companies like Google to decrease their per unit cost.
As for efficiency, the API costs are easily found in the documentation, but for live chat, as you've found, you're going to be paying 5 units per hit. The only way to lower your overall daily cost is to make the calls less frequently. While once per minute is clearly too slow for a chat bot, polling once every 15-18 seconds could shrink the quota increase you need while keeping the bot adequately responsive.
Of course, that all depends on how you intend to use the data, but it's still a reasonable recommendation if the bot remains in the realm of hobbyist usage.
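To illustrate the suggested pacing, here is a hedged sketch of such a polling loop; the endpoint and parameters come from the question, while the 15-second floor, API-key auth, and message handling are assumptions:

```python
import time
import requests

API_URL = "https://www.googleapis.com/youtube/v3/liveChat/messages"
MIN_INTERVAL_S = 15  # the 15-18 second pacing suggested above

def poll_chat(live_chat_id: str, api_key: str) -> None:
    page_token = None
    while True:
        params = {
            "liveChatId": live_chat_id,
            "part": "snippet,authorDetails",
            "key": api_key,
        }
        if page_token:
            params["pageToken"] = page_token
        data = requests.get(API_URL, params=params).json()
        for item in data.get("items", []):
            print(item["snippet"].get("displayMessage", ""))
        page_token = data.get("nextPageToken")
        # Honor the server's suggested interval, but never poll faster
        # than the cost-saving floor.
        suggested_s = data.get("pollingIntervalMillis", 0) / 1000
        time.sleep(max(MIN_INTERVAL_S, suggested_s))
```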

Fraction of Budget and Application request limit reached

I am a little confused on the Facebook rate limits and need some clarification.
To my knowledge, each application gets 100 million API calls per day, and each access token gets 600 calls per second.
According to Insight, I am currently making about 500K calls per day in total for my application; however, I am receiving a large number of "Application request limit reached" errors. Also in Insight, I see a table that has a column called "Fraction of Budget". Four of the endpoints listed there are over 100% (one is around 3000%).
Does Facebook also limit per endpoint, and is there any way to make sure I don't receive these "Application request limit reached" errors? To my knowledge, I'm not even close to the 100M API calls per day per application that Facebook lists as the upper limit.
EDIT: As a clarification, I am receiving error code 4 (API Too many calls), not error code 17 (API User too many calls). https://developers.facebook.com/docs/reference/api/errors/
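For completeness, a small sketch of telling the two codes apart when handling a Graph API error response (the error envelope follows the linked errors doc; the cool-down length is an assumption, since Facebook doesn't publish an exact throttling window):

```python
import time
import requests

def call_graph(url: str) -> dict:
    while True:
        data = requests.get(url).json()
        error = data.get("error")
        if not error or error.get("code") not in (4, 17):
            return data
        # code 4 = app-level throttling, code 17 = user-level throttling
        time.sleep(600)  # assumed cool-down before retrying
```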

QPS/Call quota limit?

I keep getting:
Response:
{"errors":[{"message":"You have made too many requests recently. Please, be chill."}]}
When trying to download all my tasks - is there a published QPS or other quota limit so I know how long I should pause between requests?
(I work at Asana)
As stated in the documentation, the current request limit is around 100 per minute. The error response you get back also contains a Retry-After header with the number of seconds you must wait before making another request.
We may also institute a daily limit at some point in the future -- we think 100 per minute is a reasonable burst rate, but not a reasonable sustained rate throughout the day. However, we are not enforcing a daily limit yet.
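A minimal sketch of honoring that header (the endpoint and token handling are placeholders):

```python
import time
import requests

def get_with_retry(url: str, token: str) -> requests.Response:
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        resp = requests.get(url, headers=headers)
        if resp.status_code != 429:  # not rate limited
            return resp
        # Retry-After holds the number of seconds to wait, as noted above
        time.sleep(int(resp.headers.get("Retry-After", "60")))
```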
