YouTube API Quota Exceeded Before Any Successful Request

I am trying to use the YouTube API to download the captions of a video. None of my requests were successful due to a quotaExceeded error. However, I have not spent any quota other than requesting the caption list to get the ID of the caption.
import io
from googleapiclient.http import MediaIoBaseDownload

request = youtube.captions().download(
    id="O-jAeIynN9yCRz1el0-7JaFewbFekv8NUbhAZBwVajw="
)
# Write the downloaded caption track to a local file.
fh = io.FileIO("/Users/joehuangx/Desktop/test", "wb")
downloader = MediaIoBaseDownload(fh, request)
done = False
while not done:
    _, done = downloader.next_chunk()
Based on the documentation, a caption download costs 200 quota units, which is well within the daily limit (at the default 10,000 units per day, that is about 50 downloads).

The default quota limit for this API when you first create a project is 10,000 units. From time to time someone like yourself will start getting the quotaExceeded error before ever making any requests. This always turns out to be that their project quota is set to 0.
As you can see from your quota screenshot, your current quota is 0.
I have posted this as an issue a number of times and there has never been a solution. You have two options:
1. Request a quota extension.
2. Delete the project you just created, create a new one, enable the YouTube Data API again, and see if it gives you the default quota then.
I have yet to find any way of knowing what causes this to happen, and YouTube isn't telling.
The only information I have is from this issue: #211012781
Hi. If you're seeing Queries per day quota set to 0 and the API is indeed enabled, then this means that your project’s access to YouTube Data API Service has been disabled.
You should’ve received a notice via email regarding this action, which also contains the steps that need to be taken to regain the project’s access. But just in case you missed it, please fill out and submit the exceptions form below:
https://support.google.com/youtube/contact/yt_api_form?hl=en

Related

YouTube search by keyword: internal workings

Need clarification on this:
Per the docs, "By default, a search result set identifies matching video, channel, and playlist resources." How does this matching take place? Do they search on comments as well? Any idea on this?
Thanks!
The YouTube API operates under the same rate limits as the other Google APIs.
There are project-based limits and user-based limits.
You can see the limits in the Google developer console.
My project can make a maximum of 1,800,000 requests per minute.
It also has a quota cost limit of 10,000, which is not really what it sounds like.
Then each user can make a maximum of 180,000 requests per minute.
This is not related to the amount of data a user has on their account. It's strictly related to the number of requests, or the cost of the requests, that your application or a user can make over a period of time.
You can request additional daily quota over the development 10k if you want. Just submit the form over in the Google Cloud console.
I am not aware of increased abilities with YouTube APIs for big YouTube channels.
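For illustration, a minimal sketch of how the user-based limits are attributed, assuming the google-api-python-client library and a placeholder API key (both the key and the user string below are hypothetical):

from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # placeholder: your project's key from the console

youtube = build("youtube", "v3", developerKey=API_KEY)

# quotaUser is an arbitrary string identifying the end user; Google uses it
# to apply the per-user rate limits per user instead of per project.
request = youtube.search().list(
    part="snippet",
    q="some keyword",
    quotaUser="end-user-1234",
)
response = request.execute()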

How do the userQuota limitations work on YouTube Data API v3?

I'm building an alternative client for browsing YouTube subscriptions (folder-based subscriptions with a corresponding feed generated for each), and I'm making a lot of requests to YouTube to aggregate that data.
I'm caching a lot of responses, since there is no need to refresh data that was already fetched on a previous day.
The fact is, current-day refreshes consume a lot, and I reach my quota pretty fast even though those requests are read-only.
I submitted that YouTube quota increase request form, but still, I'm quite afraid.
Am I missing something with the userIp & quotaUser parameters?
Shouldn't those requests, as they are pretty much the same as a normal user would make on the regular YouTube client, be counted against "Queries per 100 seconds per user"?
My main quota, "Queries per day", currently seems to absorb ALL the requests coming from my app, even though I added the quotaUser parameter to all requests made by a user on the frontend.
I think I am missing something, as my app should not be considered "data consuming": it sends almost nothing to YouTube in terms of data, and it just reads data that is also available in the main YouTube client, only not in the same format.
Thanks for your help.

Too many concurrent connections opened: Microsoft Graph API

I'm currently running a web application that uses Microsoft Graph's API, and we encountered the following message today, which severely impacted our application for a whole day:
"error": {
"code": "ErrorTooManyObjectsOpened",
"message": "Too many concurrent connections opened., The process failed to get the correct properties.",
"innerError": {
"request-id": "removed",
"date": "2017-12-13T17:01:14"
}
}
Please note that the request-id was removed.
Let me summarize what our web application does.
Basically, we have 2 email folders that we are actively subscribed to: Junk and Folder A.
If anything hits Folder A, we strip the body of the email message and then move the message to Folder B. The subscription on our Junk folder also strips the bodies and sends those messages over to Folder B.
Sometimes the webhook subscription service skips messages that arrive at the same time, so we have 2 cron jobs on our server that run a script and check Junk/Folder A for messages every 5 minutes; my assumption is therefore that the cron jobs run about 288 × 2 times per day (24 × 60 / 5 = 288 runs each). Not counting our subscription to the folders, we usually get around 200-300 email messages per day.
Unfortunately, Microsoft's Graph error codes page does not provide any explanation of this code. I would really appreciate it if anyone could explain what it means and how to avoid it.
This is occurring because your application is exceeding the throttling thresholds.
There are several different throttling metrics that can affect Microsoft Graph requests. For a high-level overview, see the Microsoft Graph throttling guidance. Since in this case you're hitting Exchange Online via Graph, you can find more specific information from What throttling values do I need to take into consideration? in the Exchange documentation.
Architecturally, you are making a lot of unnecessary calls into the API. Rather than having both a subscription and a scheduled job, you should use just the webhook subscription and the /delta endpoint.
Each call to the /delta endpoint gives you a token that can be used to fetch any changes to a given resource since the token was issued. So regardless of whether 1 email came in or 1,000, you only get the new emails.
Once you're using the /delta to find your changes, you then use a webhook only as a "trigger". When you receive the webhook, you can ignore the contents and instead issue a request to /delta. This ensures that you capture every incoming email even if you didn't necessarily receive separate webhook notifications.
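To make the pattern concrete, here is a minimal sketch, assuming the requests library, a placeholder ACCESS_TOKEN, and a hypothetical folder id FOLDER_A_ID; the webhook handler would just call fetch_changes and ignore the notification body:

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "..."  # placeholder: a valid OAuth access token
FOLDER_A_ID = "..."   # placeholder: the id of Folder A

# Start with the plain delta URL; after the first sync we keep the
# @odata.deltaLink it returns and reuse it on every webhook trigger.
delta_url = f"{GRAPH}/me/mailFolders/{FOLDER_A_ID}/messages/delta"

def fetch_changes():
    """Return messages changed since the last call and advance the delta token."""
    global delta_url
    messages = []
    url = delta_url
    while url:
        resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
        resp.raise_for_status()
        data = resp.json()
        messages.extend(data.get("value", []))
        if "@odata.deltaLink" in data:
            delta_url = data["@odata.deltaLink"]  # save for the next trigger
        url = data.get("@odata.nextLink")  # page through this sync, if any
    return messages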
There is a bug. After making 500 message move requests, a "cannot copy/move" error occurs. Subsequently, a "429: Too many concurrent connections opened" error occurs. Most applications miss the first error because you continually get the 429 error afterwards.
If you let the application "rest" for 30 minutes, the throttle resets itself and you can continue. I do not think there is a time window attached to the 500 moves; my application hit 500 moves after 6.5 hours and then we started getting the error.
And if you keep retrying your move call before the 30-minute rest period is up, it never resets. Also, in the response, the Retry-After is null... so that doesn't help you.
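A minimal sketch of that rest-and-retry workaround (the move helper below is hypothetical, and the 30-minute pause is the reset window observed above, not a documented value):

import time

def move_with_rest(session, message_id, destination_id):
    """Try a move; on 429 (with Retry-After null), rest 30 minutes and retry."""
    # session: a requests.Session with the Authorization header already set.
    url = f"https://graph.microsoft.com/v1.0/me/messages/{message_id}/move"
    while True:
        resp = session.post(url, json={"destinationId": destination_id})
        if resp.status_code != 429:
            return resp
        # Retry-After comes back null in this case, so fall back to the
        # observed 30-minute reset window before trying again.
        time.sleep(30 * 60)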
If you find a workaround, please let me know. We are trying a few things, like setting a category and then manually moving the messages. I am also investigating creating a rule that moves them for us, or some other job. I cannot find a way to execute a rule from the Graph API.
See this link for more information. Also, the more people who report this issue, the sooner it can hopefully be resolved: Outlook API Throttling documentation #144

How to reduce the returned content size from the Slack rtm.start API

The API doc can be found here.
I connect to the rtm.start API as a bot user:
https://slack.com/api/rtm.start?token=BOT_TOKEN, but as the doc describes:
This method returns lots of data about the current state of a team, along with a WebSocket Message Server URL
Actually, the only content I care about is the WebSocket Message Server URL.
Currently, I get about 19 MB of content from this API (as we are a big team with many channels and users).
It takes too long for my code to make this request and sometimes causes a timeout. I can increase the timeout, but as I only want the wss URL to make the WebSocket call, any idea how to decrease the content size returned by this API?
I know some parameters like simple_latest, no_unreads, and mpim_aware can be used. I've tried them with https://slack.com/api/rtm.start?token=BOT_TOKEN&simple_latest=true&no_unreads=1&mpim_aware=true or something like that, but they did not work.
I also want to know how to make these three optional parameters work.
no_unreads=1 (the numeric form, not true) is the correct way to use these three parameters.
And I found another way: after contacting the Slack team, I learned of an additional parameter that does not appear in the documentation:
cache_ts: a timestamp like 1479103245436, indicating the latest event timestamp the client has cached.
In my test, I set cache_ts to now and the response content decreased from 19 MB to 1 MB.
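Putting it together, a minimal sketch (assuming the requests library and a placeholder BOT_TOKEN) that fetches only the wss URL with all four parameters:

import time
import requests

BOT_TOKEN = "xoxb-..."  # placeholder: your bot user's token

params = {
    "token": BOT_TOKEN,
    "simple_latest": 1,
    "no_unreads": 1,
    "mpim_aware": 1,
    # Undocumented: claim everything up to "now" (in ms) is already cached,
    # which shrinks the team-state payload drastically.
    "cache_ts": int(time.time() * 1000),
}
resp = requests.get("https://slack.com/api/rtm.start", params=params)
wss_url = resp.json()["url"]  # the WebSocket Message Server URL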

YouTube Quota Exceeded exception when it's actually not

We're using the YouTube Data API v3 and have been for quite some time without any problems. Recently, we've been getting this 403 exception:
The request cannot be completed because you have exceeded your quota.
In the Google developer console, it says that we are still under the quota (currently it states "units/day 163,817 of 50,000,000").
Am I missing something about how quotas work?
You can create more API keys and use them all at random. It's a good approach; I'm also using it without any issue and have never hit the quota exceeded error. You need to create a separate project for each API key. In PHP you can use it like this:
$api = array("API key # 1", "API key # 2" ,"API key # 3");
$rand_keys = array_rand($api, 1);
$usage = $api[$rand_keys];
A new key will be used for each request, which helps avoid any downtime.
Quotas were reduced yesterday (2016-04-21) from 50 million units to just 1 million...
YouTube also has a quota of 3,000 requests per second. Perhaps you're hitting that.
