I am using a simple script to pull the total number of views of a particular video onto a webpage.
As I want it to be as 'realtime' as possible, I have a meta tag that automatically refreshes the page every 60 seconds.
My question here is: I guess every time the page refreshes, that is seen as a new call and comes out of my quota. As this is running 24/7, does that mean I will exceed my quota fairly quickly, given that at this rate I will soon reach the 10,000 mark?
Or does each page refresh not count as a call?
Firstly, I want to ensure I don't go over quota and end up with my access disabled, but more importantly, I don't want to look like I'm completely taking the mick and get seen as a spammer of some sort.
If a page refresh makes an API call, then that API call is counted against your quota.
Now, you say that your page refreshes at a rate of one call to the Videos.list API endpoint per minute.
Therefore, during a full day (24 hours), you'll have 24 * 60 = 1440 calls to Videos.list.
According to the official specification of Videos.list, each call to this endpoint has a quota cost of 1 unit.
Consequently, accounting only for those 1440 calls to Videos.list, the quota cost of your page refreshing amounts to 1440 units per day. That's well below the allocated daily quota of 10,000 units.
This also implies that the API will by no means consider you a spammer.
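For illustration, here is a minimal sketch of the call your refreshing page presumably makes, using plain `requests`; `API_KEY` and `VIDEO_ID` are placeholders you'd substitute:

```python
import requests

API_KEY = "YOUR_API_KEY"    # placeholder
VIDEO_ID = "YOUR_VIDEO_ID"  # placeholder

# One call to videos.list with part=statistics costs 1 quota unit.
resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "statistics", "id": VIDEO_ID, "key": API_KEY},
)
resp.raise_for_status()
views = resp.json()["items"][0]["statistics"]["viewCount"]
print(views)
```

At one such call per minute, that is the 1440 units per day computed above.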
I receive this error while trying to export from my datagrid to Google Sheets. How can I solve it?
Don't make many requests too quickly.
You are either exceeding your quota or you are making too many requests too quickly.
Also, look into batch requests:
https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/batchUpdate
You may be making an API call for every single cell updated, which is an easy way to run into the above error.
If you must do it on a cell by cell basis, you would have to insert a small delay between requests. Bear in mind that although the usage page says:
This version of the Google Sheets API has a limit of 500 requests per 100 seconds per project, and 100 requests per 100 seconds per user. Limits for reads and writes are tracked separately. There is no daily usage limit.
This does not mean that you can make 100 requests in 1 second and then wait 99 seconds; that will give you a quota error like the one you are running into. You would have to put in a one-second delay between requests, for example.
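As a sketch of the batching approach, here is what collapsing many cell writes into a single request could look like with the google-api-python-client library; the service-account file name and spreadsheet ID are placeholders, and service-account auth is just one option:

```python
from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_service_account_file(
    "service-account.json",  # placeholder
    scopes=["https://www.googleapis.com/auth/spreadsheets"],
)
service = build("sheets", "v4", credentials=creds)

body = {
    "valueInputOption": "RAW",
    "data": [
        # A whole grid of cells still counts as ONE write request.
        {"range": "Sheet1!A1:C2", "values": [["a", "b", "c"], ["d", "e", "f"]]},
    ],
}
service.spreadsheets().values().batchUpdate(
    spreadsheetId="YOUR_SPREADSHEET_ID",  # placeholder
    body=body,
).execute()

# If you truly must write cell by cell, throttle the calls instead, e.g.
#     time.sleep(1)  # roughly one write per second stays under the per-user limit
```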
I'm currently developing a chat bot for one specific YouTube channel, which can already fetch messages from the currently active livechat. However, I noticed my quota usage shooting up, so I took the "liberty" of calculating my quota cost.
My API call currently looks like this: https://www.googleapis.com/youtube/v3/liveChat/messages?liveChatId=some_livechat_id&part=snippet,authorDetails&pageToken=pageTokenIfProvided, which uses up 5 units. I checked this by running one API call and comparing the quota usage before and after (so apologies if this is inaccurate). The response contains pollingIntervalMillis set to 5086 milliseconds.
Currently, my bot adds that interval to the current datetime and schedules the next fetch at that time (using Celery), so it fetches messages every 4-6 seconds. I'm going to take the liberty of always waiting 6 seconds.
Calculating my API quota would result in a usage of 72,000 units per day:
10 requests per minute * 60 minutes * 24 hours = 14,400 requests per day
14,400 requests * 5 units per request = 72,000 units per day
This means that if I used pollingIntervalMillis as a guideline for how often to poll, I'd burn through the maximum quota of 10,000 units after running the bot for just 3 hours and 20 minutes. To avoid using up the quota on fetching chat messages alone, I would have to limit the bot to roughly 1.39 API calls per minute (10,000 units / 5 units per request = 2,000 requests, spread over 1,440 minutes). This is very unfeasible for a chatbot, since this is only for fetching messages, not even sending any to the chat.
So my question is: Is there maybe a more efficient way to fetch chat messages which won't use up the quota so much? Or will I only get this resolved by applying for a quota extension? And if this is only resolved by a quota extension, how much would I need to ask for reliably? Around 100k units? Even more?
I am also asking myself how something like Streamlabs Chatbot (previously known as AnkhBot) accomplishes this without hitting the quota limit, despite thousands of users using their API client. Their quota must be in the millions, or even billions.
And another question would be how I'd actually fill out the form, if the bot is still in this "early" state of development?
You pretty much hit the nail on the head. Services like Streamlabs are owned by larger companies, in their case Logitech. They not only have the money to throw around for things like increasing their API quota, but they also have professional relationships with companies like Google to decrease their per unit cost.
As for efficiency, the API costs are easily found in the documentation, and for live chat, as you've found, you're going to be hitting the API for 5 units per call. The only way to reduce your overall daily cost is to call less frequently. While once per minute is clearly far too slow, once every 15-18 seconds could shrink the quota increase you'd need to request, while keeping the chat bot adequately responsive.
Of course, that all depends on your desired usage of the data, but it's still my recommendation if the bot is still in the realm of hobbyist usage. A rough sketch of that slower polling loop follows.
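This is only a sketch, assuming plain `requests` and that an API key suffices for a public stream's chat; `API_KEY` and `LIVE_CHAT_ID` are placeholders:

```python
import time
import requests

API_KEY = "YOUR_API_KEY"            # placeholder
LIVE_CHAT_ID = "YOUR_LIVE_CHAT_ID"  # placeholder
POLL_SECONDS = 15  # 86400/15 = 5760 requests/day * 5 units = 28,800 units/day

params = {
    "liveChatId": LIVE_CHAT_ID,
    "part": "snippet,authorDetails",
    "key": API_KEY,
}
next_token = None
while True:
    if next_token:
        params["pageToken"] = next_token  # only fetch messages we haven't seen yet
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/liveChat/messages", params=params
    )
    resp.raise_for_status()
    data = resp.json()
    for item in data.get("items", []):
        print(item["authorDetails"]["displayName"],
              item["snippet"].get("displayMessage", ""))
    next_token = data.get("nextPageToken")
    time.sleep(POLL_SECONDS)
```

Note that even at 15 seconds this works out to roughly 28,800 units per day, so it shrinks the extension you'd need to ask for rather than eliminating it.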
I have an application where I need to get complete, realtime search results from twitter (preferably polling every 500ms or less). Based on my understanding, doing this using the search API will run into rate limits very quickly. However, the streaming API doesn't seem to support getting complete anything (only a 5% sample).
More specifically, I have a search query term which typically comes up with <20 matching tweets per hour, and I would like to be informed of these new tweets within 1-2 seconds, and it is considered a failure if I am not notified within 5 seconds. Due to the relatively low frequency of posting, missing even one tweet is very undesirable.
Is there any way I can realistically do this using twitter API, or is my only choice to write a browser extension to repeatedly refresh the search page?
The answer is "yes". Although you are rate limited (the limit is closer to 1% than 5%), that is only a cutoff based on your query results. Very roughly, you can stream about 60 tweets per second max. In your case, you say you expect under 20 tweets per hour, so you should have no problem getting all those tweets.
You also require a latency less than 5 seconds. In my experience latency has always been a second or two. I think you should be fine.
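As a concrete sketch, this is roughly what a tracked stream looked like with tweepy against the v1.1 filtered stream this answer refers to (since retired in favor of newer API versions); all four credential strings are placeholders:

```python
import tweepy

class SearchStream(tweepy.Stream):
    def on_status(self, status):
        # Called within a second or two of a matching tweet being posted.
        print(status.id, status.text)

stream = SearchStream(
    "CONSUMER_KEY", "CONSUMER_SECRET",      # placeholders
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",  # placeholders
)
# `track` filters server-side; <20 matching tweets/hour is nowhere near the
# volume cap, so nothing should be sampled away.
stream.filter(track=["your search term"])
```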
I am not clear about what the Twitter rate limit, "350 requests per hour per access token/user", means. How are they limiting requests? How much data can I get in one request?
The rate limits are based on request, not the amount of data (e.g. bytes) you receive. With that in mind, you can maximize requests by using the available parameters of the particular endpoint you're calling. I'll give you a couple examples to explain what I mean:
One way is to set count, if supported, to the highest available value. On statuses/home_timeline, you can max out count at 200. If you aren't setting it, you're getting the default of 20, which means you would (theoretically) need 10 queries to get the same amount of data. More queries eat into your rate limit.
Using statuses/home_timeline again, notice that you can page through data using since_id and max_id, as described in Working with Timelines. Essentially, you keep track of the tweets you have already requested, so you only spend requests on tweets you haven't seen yet (see the sketch at the end of this answer).
Rate limits are applied in 15-minute windows, so you can pace your requests to minimize the chance of running out in any given window.
Use a combination of the Streaming API and REST requests; streamed tweets don't count against your REST rate limit, so this effectively increases it.
There are more optimizations like this that help you conserve your request limit, some more subtle than others. Looking at the rate limits per endpoint, studying the parameters, and thinking about how the API is used can help you minimize rate-limit usage.
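To make the count and paging points concrete, here is a sketch using tweepy's v1.1 wrapper; the credential strings are placeholders:

```python
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",      # placeholders
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",  # placeholders
)
api = tweepy.API(auth)

all_tweets = []
max_id = None
while True:
    kwargs = {"count": 200}  # max out count: 1 request instead of 10
    if max_id is not None:
        kwargs["max_id"] = max_id
    page = api.home_timeline(**kwargs)
    if not page:
        break
    all_tweets.extend(page)
    max_id = page[-1].id - 1  # next page starts just below the oldest tweet seen

# On later runs, pass since_id=<newest id you already have> so you only
# spend requests on genuinely new tweets.
```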
I am a little confused about the Facebook rate limits and need some clarification.
To my knowledge, each application gets 100 million API calls per day, and each access token gets 600 calls per second.
According to Insights, I am currently making about 500K calls per day in total for my application, yet I am receiving a large number of "Application request limit reached" errors. Also, in Insights I see a table with a column called "Fraction of Budget". Four of the endpoints listed there are over 100% (one is around 3000%).
Does Facebook also limit per endpoint, and is there any way to make sure I don't receive these "Application request limit reached" errors? To my knowledge, I'm not even close to the 100M API calls per day per application that Facebook lists as the upper limit.
EDIT: As a clarification, I am receiving error code 4 (API Too Many Calls), not error code 17 (API User Too Many Calls). https://developers.facebook.com/docs/reference/api/errors/