What is the quota cost of searching videos when you only need the ID returned? I tried using the quota calculator but it doesn't output anything (and I'm assuming the cost isn't actually 0 when you specify "part" as id only).
Search:list has a quota cost of 101 when id is specified for part, and 102 for snippet. You can verify this by running a single request with part set to id and then checking the quota count for the day.
To see the quota count, go to the Developer Console, select your project, then select APIs & Auth and then APIs. Select the YouTube Data API and then select Usage (or Quotas).
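Given those per-call costs (which come from the answer above, not official docs) and the default 10,000-unit daily quota mentioned elsewhere, the daily search budget works out roughly like this:

```python
# Rough daily search budget. The per-call costs (101 for part=id,
# 102 for snippet) are the values observed in the answer above.
DAILY_QUOTA = 10_000
COST_ID_ONLY = 101
COST_SNIPPET = 102

searches_id_only = DAILY_QUOTA // COST_ID_ONLY   # 99 searches per day
searches_snippet = DAILY_QUOTA // COST_SNIPPET   # 98 searches per day
```

So trimming part down to id buys you only one extra search per day; the bulk of the cost is the search itself.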
Need a clarification on this: per the docs, "By default, a search result set identifies matching video, channel, and playlist resources". How does this matching take place? Do they search comments as well? Any ideas?
Thanks!
The YouTube API operates under the same kind of rate limits as the other Google APIs.
There are project based limits and user based limits.
You can see the limits on google developer console.
My project can make a maximum of 1,800,000 requests per minute.
It also has a quota cost limit of 10,000, which is not really what it sounds like.
Each user can make a maximum of 180,000 requests per minute.
This is not related to the amount of data a user has on their account. It's strictly the number of requests, or the cost of those requests, that your application or a single user can make over a period of time.
You can request additional daily quota beyond the default 10,000 if you want. Just submit the form over on Google Cloud Console.
I am not aware of any increased limits on the YouTube APIs for big YouTube channels.
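Those per-minute caps are enforced server-side, but an application can avoid tripping them with a simple client-side sliding-window throttle. A minimal sketch (the limit value below is illustrative; plug in whatever your console shows):

```python
import time

class MinuteThrottle:
    """Allow at most `limit` calls per rolling 60-second window."""

    def __init__(self, limit):
        self.limit = limit
        self.calls = []  # timestamps of recent calls

    def acquire(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True   # OK to send the request
        return False      # caller should wait or queue the request

# e.g. throttle = MinuteThrottle(limit=180_000)  # per-user cap from the console
```

Callers that get `False` back can sleep briefly and retry, which keeps a burst-heavy client under the per-user cap without server-side errors.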
I am trying to use the YouTube API to download the captions of a video. None of my requests succeed, failing with a quotaExceeded error. However, I have not spent any quota other than a captions.list request to get the ID of the caption track.
import io
from googleapiclient.http import MediaIoBaseDownload

request = youtube.captions().download(
    id="O-jAeIynN9yCRz1el0-7JaFewbFekv8NUbhAZBwVajw="
)
# Write the downloaded caption track to a local file.
fh = io.FileIO("/Users/joehuangx/Desktop/test", "wb")
downloader = MediaIoBaseDownload(fh, request)
done = False
while not done:
    status, done = downloader.next_chunk()
Based on the documentation, downloading a caption costs 200 units of quota, which is well within the daily limit.
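Taking the 200-unit cost and the default 10,000-unit daily quota at face value, the daily budget works out like this:

```python
# Back-of-the-envelope budget, assuming the figures cited above:
# 10,000 units/day default quota, 200 units per captions.download call.
DAILY_QUOTA = 10_000
CAPTION_DOWNLOAD_COST = 200

downloads_per_day = DAILY_QUOTA // CAPTION_DOWNLOAD_COST  # 50 downloads/day
```

So a single list-plus-download pair should not come anywhere near the limit, which is why a quotaExceeded error on the first request suggests something else is wrong.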
The default quota limit for this API when you first create a project is 10,000 units. From time to time someone like yourself will start getting the quotaExceeded error before ever making any requests. This always turns out to be that their project quota is set to 0.
As you can see from your quota screenshot, your current quota is 0.
I have posted this as an issue a number of times and there has never been a solution. You have two options:
Request a quota extension.
Delete the project you just created, create a new one, enable the YouTube Data API again, and see if it gives you the default quota then.
I have yet to find any way of knowing what causes this to happen, and YouTube isn't telling.
The only information I have is this issue: #211012781
Hi. If you're seeing the Queries per day quota set to 0 and the API is indeed enabled, then this means that your project's access to the YouTube Data API Service has been disabled.
You should've received a notice via email regarding this action, which also contains the steps that need to be taken to regain the project's access. But just in case you missed it, please fill out and submit the exceptions form below:
https://support.google.com/youtube/contact/yt_api_form?hl=en
From Usage Limits help page:
This version of the Google Sheets API has a limit of 500 requests per 100 seconds per project, and 100 requests per 100 seconds per user.
Let’s take it apart:
500 requests per 100 seconds per project - This is applied to my project. I use my project credentials to make each request.
100 requests per 100 seconds per user - When I make a request, I also include the OAuth token of a user that permitted me to update their workbook.
Question about the per-user part:
Is there anything the user can do themselves (like reach out to Google) to increase the quota just for them? Or is it me who needs to talk to Google to increase the quota for all users of my project simultaneously?
Thanks!
500 requests per 100 seconds per project - This is applied to my project. I use my project credentials to make each request.
100 requests per 100 seconds per user - When I make a request, I also include the OAuth token of a user that permitted me to update their workbook.
As you can see, there are two types of quotas. Project-based quotas are applied to your project as a whole. User-based quotas are applied to the individual users of your project / application.
Project-based quotas can be extended: you can apply for an extension, and Google may grant it, which will increase the number of requests your project as a whole can make.
User-based quotas are more like flood protection: they ensure that a single user of your application cannot make too many requests at once and flood the server. User-based quotas cannot be extended.
Is there anything the user can do themselves (like reach out to Google) to increase the quota just for them? Or is it me who needs to talk to Google to increase the quota for all users of my project simultaneously?
To answer your question: there is nothing the user can do to increase the quota. This is your project, and only you can request an increase to the project-based quota.
There is nothing you can do to increase the user-based quotas.
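Since the user-based quota cannot be raised, the practical response to hitting it is to back off and retry. A minimal sketch, where `RateLimitError` is a hypothetical stand-in for whatever rate-limit exception (HTTP 429/403) your client library actually raises:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a per-user rate-limit error (e.g. HTTP 429)."""

def call_with_backoff(fn, max_tries=5, base=1.0):
    """Call fn(), retrying on RateLimitError with exponential backoff."""
    for attempt in range(max_tries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_tries - 1:
                raise  # give up after the final attempt
            # Wait base * 2^attempt seconds, plus jitter so that many
            # clients retrying at once don't all hit the server together.
            time.sleep(base * 2 ** attempt + random.random() * base)
```

Because the 100-requests-per-100-seconds window rolls over quickly, a few seconds of backoff is usually enough to get a retried request through.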
Goo.gl nicely mentions that you have a limit of 1,000,000 requests per day when using an API key.
https://developers.google.com/url-shortener/v1/getting_started
Quotas: By default, your registered project gets 1,000,000 requests per day for the URL Shortener API (see the Developers Console for more details).
I can't find what the quota limits are when you don't use an API key.
The reason is that I could use the API key, but the server(s) will be deployed among various clients, communicating in uncertain conditions where the API key could be sniffed. Aside from that, OAuth would require user interaction, which wouldn't be acceptable in an automated process (which may not even have a UI).
https://support.google.com/cloud/answer/6158857?hl=en
My expected usage would be maybe 10-20 requests per minute during peak hours, or at most 1,000 per day. Would this hit goo.gl's limits for keyless use?
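The keyless limits don't appear to be documented, but for a sense of scale: even if the stated peak rate were sustained around the clock, it would stay far below the keyed 1,000,000/day limit:

```python
# Worst-case projection of the stated usage against the keyed daily limit.
DAILY_KEYED_LIMIT = 1_000_000
peak_per_minute = 20

worst_case_per_day = peak_per_minute * 60 * 24  # 28,800 requests/day
headroom = DAILY_KEYED_LIMIT // worst_case_per_day  # ~34x under the limit
```

Whether the keyless ceiling is that generous is the open question, but the expected load of ~1,000/day is three orders of magnitude below the documented keyed quota.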
I've been trying to get all the tweets of some public (unlocked) Twitter users.
I'm using the REST API:
http://api.twitter.com/1/statuses/user_timeline.json?screen_name=andy_murray&count=200&page=1
I page through the 16 pages (via the page param) it allows, thus getting 3,200 tweets, which is OK.
BUT then I discovered the rate limit for such calls is 150 per hour(!), meaning fewer than 10 user queries per hour (16 pages each). (350 are allowed if you authenticate, still a very low number.)
Any ideas on how to solve this? The streaming/search APIs don't seem appropriate(?), and there are some web services out there that do seem to have this data.
Thanks
You can either queue up the requests and make them as the rate limit allows, or you can make authenticated requests as multiple users. Each user has 350 requests/hour.
One approach would be to use the streaming API (or perhaps the more specific user streams, if that's better suited to your application) to start collecting all tweets as they occur from your target user(s) without having to bother with the traditional rate limits, and then use the REST API to backfill those users' historical tweets.
Granted, you only have 350 authenticated requests per hour, but if you run your harvester around the clock, that's still 1,680,000 tweets per day (350 requests/hour * 24 hours/day * 200 tweets/request).
So, for example, if you decided to pull 1,000 tweets per user per day (5 API calls × 200 tweets per call), you could run through 1,680 user timelines per day (70 timelines per hour). Then, on the next day, begin where you left off by harvesting the next 1,000 tweets, using the oldest status ID per user as the max_id parameter in your statuses/user_timeline request.
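The arithmetic in the last two paragraphs can be checked directly:

```python
# Throughput under the authenticated REST limits cited above.
REQS_PER_HOUR = 350     # authenticated requests per hour
TWEETS_PER_REQ = 200    # max tweets per user_timeline call

tweets_per_day = REQS_PER_HOUR * 24 * TWEETS_PER_REQ        # 1,680,000 tweets/day
calls_per_user = 1_000 // TWEETS_PER_REQ                    # 5 calls for 1,000 tweets
timelines_per_day = (REQS_PER_HOUR * 24) // calls_per_user  # 1,680 timelines/day
timelines_per_hour = REQS_PER_HOUR // calls_per_user        # 70 timelines/hour
```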
The streaming API will keep you abreast of any new statuses your target users tweet, and the REST API calls will pretty quickly, in about four days, start running into Twitter's fetch limit for those users' historical tweets. After that, you can add additional users to fetch going forward from the streaming endpoint by adding them to the follow list, and you can stop fetching historical tweets for those users that have maxed out, and start fetching a new target group's tweets.
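The max_id backfill loop described above can be sketched like this, where `fetch_page` is a hypothetical stand-in for one statuses/user_timeline call returning tweets newest-first:

```python
def backfill(fetch_page, pages=5):
    """Collect up to `pages` batches of historical tweets, walking
    backwards by passing the oldest seen ID minus one as max_id."""
    tweets = []
    max_id = None
    for _ in range(pages):
        batch = fetch_page(max_id=max_id)
        if not batch:
            break  # no older tweets remain
        tweets.extend(batch)
        # Next page: everything strictly older than the oldest tweet seen.
        max_id = min(t["id"] for t in batch) - 1
    return tweets
```

Persisting the final `max_id` per user lets the next day's run resume exactly where this one stopped.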
The Search API would seem to be appropriate for your needs, since you can search on screen name. The Search API rate limit is higher than the REST API rate limit.