Does minimizing request fields save more quota? - youtube-api

Since the YouTube Data API comes with a quota limit, we can track our usage from the requests we send, using the per-request costs given in the documentation.
But the documentation only specifies costs down to the "part" level, not the "fields" level. So, for example...
YouTube_Service_Public.search().list(
    part="snippet",
    channelId=channel_id,
    maxResults=50,
    order="date",
    type="video",
    fields="items(etag,id/videoId,snippet(publishedAt,thumbnails/default,title))"
).execute()

YouTube_Service_Public.search().list(
    part="snippet",
    channelId=channel_id,
    maxResults=50,
    order="date",
    type="video"
).execute()
Does the first request cost less quota than the second?
Or do they both cost 100 units, since they both request the "snippet" part?

The quota cost would be the same. The fields parameter only reduces the size of the response (bandwidth), not the quota charged.
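To make the arithmetic concrete: the documented cost of search().list is 100 units per call regardless of fields, so paging through results is what drives quota, not payload size. A minimal sketch (the helper name and page-size math are mine, the 100-unit cost is from the quota table):

```python
import math

SEARCH_LIST_COST = 100  # documented units per search().list call, any 'fields'

def search_quota_cost(num_results, page_size=50):
    """Quota units needed to page through num_results search results."""
    pages = math.ceil(num_results / page_size)
    return pages * SEARCH_LIST_COST
```

So fetching 120 results at maxResults=50 costs 300 units whether or not you trim the response with fields.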

Related

SLO calculation for 90% of requests under 1000ms

I'm trying to figure out the PromQL for an SLO for latency, where we want 90% of all requests to be served in 1000ms or less.
I can get the 90th percentile of requests with this:
histogram_quantile( 0.90, sum by (le) ( rate(MyMetric_Request_Duration_bucket{instance="foo"}[1h]) ) )
And I can find what percentage of ALL requests were served in 1000ms or less with this:
((sum(rate(MyMetric_Request_Duration_bucket{le="1000",instance="foo"}[1h]))) / (sum (rate(MyMetric_Request_Duration_count{instance="foo"}[1h])))) *100
Is it possible to combine these into one query that tells me what percentage of requests in the 90th percentile were served in 1000ms or less?
I tried the most obvious (to me anyway) solution, but got no data back.
histogram_quantile( 0.90, sum by (le) ( rate(MyMetric_Request_Duration_bucket{le="1000",instance="foo"}[1h]) ) )
The goal is a measure that shows, for the 90th percentile of requests, how many of those requests were under 1000ms. It seems like this should be simple, but I can't find a PromQL query that allows me to do it.
Welcome to SO.
To find out how many requests, out of all of them, are served under 1000ms, I would divide the number of requests under 1000ms by the total number of requests. In my GCP world, it translates to a query like this (you are basically measuring your SLI, the fraction of requests under the threshold):
(sum(rate(istio_request_duration_milliseconds_bucket{reporter="destination",namespace="abcxyz",le="1000"}[1m]))/sum(rate(istio_request_duration_milliseconds_count{reporter="destination",namespace="abcxyz"}[1m])))*100
Once you have a graph set up with the above query in Grafana, you can set up an alert on anything below 93; that way you are alerted even before you reach your SLO of 90%.
Prometheus doesn't provide a function for calculating the share (aka the percentage) of requests served in under one second from histogram buckets. But such a function exists in VictoriaMetrics, a Prometheus-like monitoring system I work on: histogram_share(). For example, the following query returns the share of requests with durations smaller than one second served during the last hour:
histogram_share(1s, sum(rate(http_request_duration_seconds_bucket[1h])) by (le))
Then the following query can be used for alerting when the share of requests served in less than one second drops below 90%:
histogram_share(1s, sum(rate(http_request_duration_seconds_bucket[1h])) by (le)) < 0.9
Please note that all functions that work over histogram buckets return estimates; their accuracy depends heavily on the chosen bucket boundaries. See this article for details.

Header "x-ms-throttle-limit-percentage" not coming in response

My application makes a lot of calls to the Graph API to get the properties I need, and it is impossible to reduce the number of requests in my case. So I need to understand when the number of requests is approaching the limit, so that I can stop making them and avoid getting a 429.
The documentation says that the "x-ms-throttle-limit-percentage" header should appear in the response when request volume approaches the limit, starting from 0.8. As I understand it, 0.8 is a fraction of 1, where 1 is the upper limit:
https://learn.microsoft.com/en-us/graph/throttling?view=graph-rest-1.0#regular-responses-requests
But I don't get this header in the response, although I do get Retry-After with TooManyRequests.
How can I get this header in the response? Do I need to specify additional parameters, or configure the tenant somehow?
Or is there another way to view the throttle limit?
Thanks in advance for your reply.
If you haven't got the "x-ms-throttle-limit-percentage" header in the response, it means you haven't consumed more than 0.8 of the limit, as mentioned in the docs.
You can also check the service-specific throttle limits; please follow the docs.
We were curious to know: which service were you hitting?
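In other words, the absence of the header is itself a signal. A client-side sketch of reading both headers from a response (the header names are from the Graph docs; the helper itself and treating the headers as a plain dict are my assumptions):

```python
def throttle_status(headers):
    """Interpret Graph throttling headers from a dict of response headers.

    Returns (usage_fraction_or_None, retry_after_seconds_or_None).
    x-ms-throttle-limit-percentage only appears once usage exceeds
    roughly 0.8 of the limit, so None simply means "below that mark".
    """
    pct = headers.get("x-ms-throttle-limit-percentage")
    retry_after = headers.get("Retry-After")
    return (
        float(pct) if pct is not None else None,
        int(retry_after) if retry_after is not None else None,
    )
```

The caller can then slow down proactively when the fraction is present, and honor Retry-After when a 429 does arrive.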

GetWorkItemsAsync fails when it retrieves 1800 workitems

GetWorkItemsAsync fails when it retrieves 1800 workitems. Example:
int[] ids = (from WorkItem info in wlinks select info.Id).ToArray();
WorkItemTrackingHttpClient tfvcClient = _tfs.GetClient<WorkItemTrackingHttpClient>();
List<Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models.WorkItem> dworkitems = tfvcClient.GetWorkItemsAsync(ids).Result;
If I pass an array of IDs with 90 elements, it works fine.
Is there a limit such that it can only get n elements at a time, and how can we overcome this problem?
Yes, there is a limitation on the URL length; you will get this exception once the URL length is exceeded.
So, as a workaround, you can limit your calls to an allowed range at a time (e.g. 200 IDs per call), then make the query several times.
Unfortunately you’ve hit a limitation of the URL length. Once the URL
length has been exceeded, the server just gets the truncated version,
so odds are high that the truncated work item id is not valid.
I recommend limiting your calls to 200 ids at a time.
Source here :
https://github.com/Microsoft/vsts-dotnet-samples/issues/49
Reference this thread for the limitation of the URL length: What is the maximum length of a URL in different browsers?
This similar thread for your reference: Is there any restriction for number of characters in TFS REST API?
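The chunking workaround is language-agnostic; a minimal sketch (Python here for brevity, though the original code is C# — the helper name is mine):

```python
def chunked(ids, size=200):
    """Split a list of work item IDs into batches small enough for one URL."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

# Each batch would then be passed to GetWorkItemsAsync in turn
# and the per-batch results concatenated.
```

With 1800 IDs this yields nine calls of 200 IDs each, all safely under the URL length limit.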

Rate Limit Twitter API

I'm kind of confused by the Twitter API guide on rate limiting mentioned here: https://dev.twitter.com/docs/rate-limiting/1.1
In their guide, Twitter mentions that the following fields will be present in the response headers, which can be used to determine the number of API calls allowed, the number left, and when the limit will reset:
X-Rate-Limit-Limit: the rate limit ceiling for that given request
X-Rate-Limit-Remaining: the number of requests left for the 15-minute window
X-Rate-Limit-Reset: the remaining window before the rate limit resets, in UTC epoch seconds
They have also provided a rate limit status API to query against:
https://dev.twitter.com/docs/api/1.1/get/application/rate_limit_status
Now I'm confused about which of the above values I should follow to see how many API calls are available to me before the limit is reached.
Both return the same information. /get/application/rate_limit_status is an API call that returns the rate limits for all resources, while the X-Rate-Limit headers are set on the response for the resource you just called.
Use /get/application/rate_limit_status to cache the number of API calls remaining and refresh it at periodic intervals, rather than having to make a call and then parse the header info to check whether you've exceeded the rate limit.
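If you do go the header-parsing route, the logic is small: pause only when the window is exhausted, until the reset time. A sketch (header names are as documented; the helper and its defaults are my assumptions):

```python
def wait_seconds(headers, now_epoch):
    """Seconds to pause before the next call, from the last response headers."""
    remaining = int(headers.get("X-Rate-Limit-Remaining", 1))
    reset = int(headers.get("X-Rate-Limit-Reset", 0))
    if remaining > 0:
        return 0  # still inside the 15-minute window's budget
    return max(0, reset - now_epoch)  # sleep until the window resets
```

The same decision can be driven from a cached rate_limit_status response instead, which avoids spending a real request just to read the headers.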

Rate limit exceeded in Batch query

I have a client application that inserts files into Google Drive. Sometimes it needs to insert multiple files at once, so a batch query (GTLBatchQuery) is used to insert multiple files at a time. Sometimes during insert, the server throws a rate limit exceeded error:
"error" : {
"message" : "Rate Limit Exceeded",
"data" : [
{
"reason" : "rateLimitExceeded",
"message" : "Rate Limit Exceeded",
"domain" : "usageLimits"
}
],
"code" : 417
},
Please point me to the correct way to enable retry on this error. I have tried enabling retries on the service:
self.driveService.retryEnabled = YES;
self.driveService.maxRetryInterval = 60.0;
But it has no effect.
Is it possible to set retry for a batch query?
Should I set retryEnabled on GTMHTTPFetcher instead?
Any code snippet implementing exponential backoff in Objective-C is appreciated.
Standard exponential backoff as shown in the Google documentation is not the correct way to deal with rate limit errors. You will simply overload Drive with retries and make the problem worse.
Also, sending multiple updates in a batch is almost guaranteed to trigger rate limit errors if you have more than 20 or so updates, so I wouldn't do that either.
My suggestions are:
1. Don't use batch, or if you do, keep each batch below 20 updates.
2. If you get a rate limit error, back off for at least 5 seconds before retrying.
3. Try to avoid the rate limit errors altogether by keeping your updates below 20, or keeping the submission rate below one every 2 seconds.
These numbers are all undocumented and subject to change.
The reason for suggestion 3 is that there is (or was, who knows) a bug in Drive whereby even though an update returned a rate limit error, it actually succeeded, so you can end up inserting duplicate files. See "403 rate limit on insert sometimes succeeds".
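The retry policy above (at least 5 seconds before the first retry, rather than the sub-second starts in standard exponential backoff) can be sketched like this (Python rather than Objective-C; the doubling-with-a-cap shape is my assumption, only the 5-second floor comes from the answer):

```python
def backoff_delay(attempt, floor=5.0, cap=60.0):
    """Delay in seconds before retry number `attempt` (0-based).

    Starts at the 5-second floor suggested above, doubles on each
    subsequent rate limit error, and is capped to avoid unbounded waits.
    """
    return min(cap, floor * (2 ** attempt))
```

Combined with keeping each batch below 20 updates, this keeps the submission rate low enough that the rate limiter is rarely triggered in the first place.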
