YouTube API quota quickly exhausted

I'm making a simple YouTube API request and quickly getting a 403 response code (quota limit). According to the YouTube API docs, the default quota is 10,000 units per day, and according to the same docs my request should cost 3-5 units. However, I can make no more than about 100 requests per day.
Here is a script I wrote that repeatedly makes the same request:
key=<My Youtube API key>
request="https://www.googleapis.com/youtube/v3/search?type=video&part=id,snippet&order=relevance&maxResults=10&key=$key&q=hello"
for i in {0..1000}
do
    echo "Try #$i"
    # Pull just the status code out of the response headers
    response=$(curl -s -i "$request" | grep HTTP/2 | awk '{print $2}')
    if [ "$response" == "403" ]
    then
        break
    fi
    echo "$response"
done
echo "$i tries succeeded"
It gives:
97 tries succeeded
In the Google console I can see that my script consumed almost all 10,000 units.

According to the docs' quota calculator, the cost of one invocation of the Search endpoint is not 3-5 units but 100 units. (This is also mentioned on the Search endpoint's own doc page.) That explains why roughly 100 calls to that endpoint exhaust your daily quota of 10,000 units.
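Since the API itself does not report how many units a call consumed, one workaround is to keep an estimated running total on the client and stop before the budget runs out. Below is a minimal sketch of that idea, assuming Node 18+ with the built-in fetch, an API key in a YT_API_KEY environment variable, and the 100-unit search cost from the quota calculator; the search function name is just for illustration:

const KEY = process.env.YT_API_KEY; // assumed: your API key
const SEARCH_COST = 100;            // per the quota calculator
const DAILY_QUOTA = 10000;          // default daily budget

let used = 0;

async function search(q) {
    // Refuse to fire a request that would exceed the estimated budget
    if (used + SEARCH_COST > DAILY_QUOTA) {
        throw new Error('Estimated daily quota exhausted; stopping early.');
    }
    const url = 'https://www.googleapis.com/youtube/v3/search' +
        `?type=video&part=id,snippet&maxResults=10&key=${KEY}&q=${encodeURIComponent(q)}`;
    const res = await fetch(url);
    used += SEARCH_COST; // count even failed calls, to stay conservative
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
}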

Related

Understand how k6 manages, at a low level, a large number of API calls in a short period of time

I'm new to k6 and I'm sorry if I'm asking something naive. I'm trying to understand how the tool manages network calls under the hood. Does it execute them at the maximum rate it can? Does it queue them based on the System Under Test's response time?
I need to understand this because I'm running a lot of tests using both k6 run and k6 cloud, but I can't make more than ~2,000 requests per second (according to the k6 results). I was wondering whether k6 implements some kind of back-pressure mechanism when it detects that my system is "slow", or whether there is some other reason why I can't get past that limit.
I read that it is possible to make 300,000 requests per second and that the cloud environment is already configured for that. I also tried manually configuring my machine, but nothing changed.
e.g. The following tests are identical; the only change is the number of VUs. I ran all tests on k6 cloud.
Shared parameters:
60 API calls (I have a single http.batch with 60 API calls)
Iterations: 100
Executor: per-vu-iterations
Here I got 547 reqs/s:
VUs: 10 (60,000 calls with an avg response time of 108 ms)
Here I got 1,051.67 reqs/s:
VUs: 20 (120,000 calls with an avg response time of 112 ms)
Here I got 1,794.33 reqs/s:
VUs: 40 (240,000 calls with an avg response time of 134 ms)
Here I got 2,060.33 reqs/s:
VUs: 80 (480,000 calls with an avg response time of 238 ms)
Here I got 2,223.33 reqs/s:
VUs: 160 (960,000 calls with an avg response time of 479 ms)
Here I got a peak of 2,102.83 reqs/s:
VUs: 200 (1,081,380 calls with an avg response time of 637 ms) // I reached the max duration here, which is why it stopped
What I was expecting is that if my system can't handle so many requests, I would see a lot of timeout errors, but I haven't seen any. What I'm seeing is that all the API calls are executed and no errors are returned. Can anyone help me?
As k6 - or more specifically, your VUs - execute code synchronously, the amount of throughput you can achieve is fully dependent on how quickly the system you're interacting with responds.
Let's take this script as an example:
import http from 'k6/http';

export default function () {
    http.get("https://httpbin.org/delay/1");
}
The endpoint here is purposefully designed to take 1 second to respond. There is no other code in the exported default function. Because each VU will wait for a response (or a timeout) before proceeding past the http.get statement, the maximum amount of throughput for each VU will be a very predictable 1 HTTP request/sec.
Often, response times (and/or errors, like timeouts) will increase as you increase the number of VUs. You will eventually reach a point where adding VUs does not result in higher throughput. In this situation, you've basically established the maximum throughput the System-Under-Test can handle. It simply can't keep up.
The only situation where that might not be the case is when the system running k6 runs out of hardware resources (usually CPU time). This is something that you must always pay attention to.
If you are using k6 OSS, you can scale to as many VUs (concurrent threads) as your system can handle. You could also use http.batch to fire off multiple requests concurrently within each VU (the statement will still block until all responses have been received), which might incur slightly less overhead than spinning up additional VUs.
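For illustration, here is a minimal sketch of the http.batch approach, reusing the deliberately slow httpbin endpoint from the example above. Each VU fires 10 requests concurrently and blocks until all 10 responses have arrived, so a single VU can get well above 1 req/s against the 1-second endpoint (how far above depends on k6's batch and batchPerHost connection limits):

import http from 'k6/http';

export default function () {
    // 10 identical GET requests, issued concurrently;
    // http.batch blocks until every response (or timeout) has come back.
    const urls = Array(10).fill('https://httpbin.org/delay/1');
    http.batch(urls);
}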

How to handle Google Ads API rate limit when calling REST API?

I am using the Google Ads REST API to pull Ads data. I am not using a client library.
One question: how do you programmatically check current API usage when making requests, so you can stop and wait before continuing? Other APIs like the Facebook Marketing API have a header in the response that tells you how many requests you have left, so I could stop and wait. Is there similar info in the Google Ads REST API?
Thank you for reading this.
I've seen nothing in the documentation so far to suggest that there is :(
(There is, separately, a RateExceeded error, which includes a retryAfterSeconds field, if you're going too fast / the API is overloaded.)
Ultimately, I tried this method. So far, I haven't reached the limit with it:
The basic developer token for the Google Ads API allows 15,000 requests per day as of this answer (see https://developers.google.com/google-ads/api/docs/access-levels). That's 15,000 / 24 = 625 requests every hour.
Further division shows that I can make 625 / 60 ≈ 10.4 requests every minute. So 1 request every 6 seconds ensures I won't reach the rate limit.
So my solution is:
Measure the time it takes to complete a request call and its subsequent processing.
If the total time is over 6 seconds, perform the next request immediately. Otherwise, wait until 6 seconds have passed, then perform the next request.
The code below is what I used to do this. Hope it helps you guys.
import time
from math import ceil

waiting_seconds = 6

start_time = time.time()

############### PERFORM API REQUEST HERE

# Measure how long the request took; each cycle should span at least
# 6 seconds to stay under the API limit
end_time = time.time()
elapsed = end_time - start_time
if elapsed < waiting_seconds:
    remaining = ceil(waiting_seconds - elapsed)
    time.sleep(remaining)

YouTube Data API, 10,000 quota reached with just a few hundred PUT updates

I have a YouTube channel with almost 800 videos. I'm using the YouTube Data API V3 to update the titles and descriptions of each video.
Here's a cURL example of the kind of update I'm doing:
curl --request PUT \
'https://www.googleapis.com/youtube/v3/videos?part=snippet' \
--header 'Authorization: Bearer ACCESS_TOKEN' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{"id":"xxxxxxxxxxx","snippet":{"description":"Updated description, often quite long","title":"Updated title","channelId":23}}' \
--compressed
This is (finally) working great. So I set about doing a bulk update, where I generate new titles and descriptions for each video and shoot off individual PUT requests.
The trouble is, I got to about 175 successful updates before I got the warning:
{
    "error": {
        "errors": [
            {
                "domain": "usageLimits",
                "reason": "dailyLimitExceeded",
                "message": "Daily Limit Exceeded. The quota will be reset at midnight Pacific Time (PT). You may monitor your quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/youtube.googleapis.com/quotas?project=xxxxxxxxxxxxx",
                "extendedHelp": "https://console.developers.google.com/apis/api/youtube.googleapis.com/quotas?project=xxxxxxxxxxxxx"
            }
        ],
        "code": 403,
        "message": "Daily Limit Exceeded. The quota will be reset at midnight Pacific Time (PT). You may monitor your quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/youtube.googleapis.com/quotas?project=xxxxxxxxxxxxx"
    }
}
I've used up my 10,000-unit daily quota in under 200 updates. How is this possible?
Is there perhaps a way I can update multiple video IDs in one PUT request? How is the quota count tallied? I can't seem to find any data on it.
The docs' quota calculator says that invoking the Videos.update endpoint on the snippet part has a quota cost of 53 units.
Consequently, with a daily quota of 10,000 units -- counting only the updates -- you cannot get more than floor(10,000 / 53) = 188 of your videos' snippet metadata updated on any given day.
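A practical consequence is that a bulk update of nearly 800 videos has to be spread over several days. Here is a minimal sketch of that budgeting, assuming Node 18+ with the built-in fetch, an OAuth access token you already hold, and the 53-unit cost quoted above; updateVideos and the shape of its videos argument are just for illustration:

const UPDATE_COST = 53;    // quota units per snippet update, per the calculator
const DAILY_QUOTA = 10000;
const MAX_PER_DAY = Math.floor(DAILY_QUOTA / UPDATE_COST); // 188

// Update as many videos as today's budget allows, then return the remainder
// to retry after the quota resets at midnight Pacific Time.
async function updateVideos(videos, accessToken) {
    for (const video of videos.slice(0, MAX_PER_DAY)) {
        await fetch('https://www.googleapis.com/youtube/v3/videos?part=snippet', {
            method: 'PUT',
            headers: {
                'Authorization': `Bearer ${accessToken}`,
                'Content-Type': 'application/json',
            },
            body: JSON.stringify({ id: video.id, snippet: video.snippet }),
        });
    }
    return videos.slice(MAX_PER_DAY); // leftovers for tomorrow
}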

Does minimizing request fields save more quota?

Since the YouTube Data API comes with a quota limitation, we can track our quota usage based on the requests we send and the costs given in the documentation.
But the documentation only goes down to the "part" level, not the "fields" level. So, for example...
YouTube_Service_Public.search().list(
    part="snippet",
    channelId=channel_id,
    maxResults=50,
    order="date",
    type="video",
    fields="items(etag,id/videoId,snippet(publishedAt,thumbnails/default,title))"
).execute()

YouTube_Service_Public.search().list(
    part="snippet",
    channelId=channel_id,
    maxResults=50,
    order="date",
    type="video"
).execute()
Does the first request cost less quota than the second, or do they both cost 100 units since they both request the "snippet" part?
Quota cost would be the same. The fields parameter is just to reduce bandwidth usage.

Rate Limit Twitter API

I'm kind of confused by the Twitter API guide on rate limiting, described here: https://dev.twitter.com/docs/rate-limiting/1.1
In their guide, Twitter mentions that the following fields are present in the response headers and can be used to determine the number of API calls allowed, how many are left, and when the limit will reset:
X-Rate-Limit-Limit: the rate limit ceiling for that given request
X-Rate-Limit-Remaining: the number of requests left for the 15 minute window
X-Rate-Limit-Reset: the remaining window before the rate limit resets, in UTC epoch seconds
They have also provided a rate limit status API to query against:
https://dev.twitter.com/docs/api/1.1/get/application/rate_limit_status
Now I'm confused about which of the above values I should follow to see how many API calls are available to me before the limit is reached.
Both return the same information. /get/application/rate_limit_status is an API call that returns the rate limits for all resources, while the X-Rate-Limit-* headers are set on the response for the resource you just called.
Use /get/application/rate_limit_status to cache the number of API calls remaining and refresh it at periodic intervals, rather than making a call and then parsing the header info to check whether you've exceeded the rate limit.
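If you do go the header route, the pattern is straightforward. Below is a minimal sketch, assuming Node 18+ with the built-in fetch and an OAuth Authorization header you have already built; the header names are the ones quoted from the Twitter docs above:

// Make a call, then back off until the window resets if no requests remain.
async function callTwitter(url, authHeader) {
    const res = await fetch(url, { headers: { Authorization: authHeader } });
    const remaining = Number(res.headers.get('x-rate-limit-remaining'));
    const reset = Number(res.headers.get('x-rate-limit-reset')); // UTC epoch seconds
    if (remaining === 0) {
        const waitMs = Math.max(0, reset * 1000 - Date.now());
        await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
    return res;
}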
