Rate Limit Twitter API - twitter

I'm somewhat confused by the Twitter API guide on rate limiting here: https://dev.twitter.com/docs/rate-limiting/1.1
In that guide, Twitter says the following fields will be present in the response headers, and that they can be used to determine how many API calls are allowed, how many are left, and when the limit resets:
X-Rate-Limit-Limit: the rate limit ceiling for that given request
X-Rate-Limit-Remaining: the number of requests left for the 15-minute window
X-Rate-Limit-Reset: the remaining window before the rate limit resets, in UTC epoch seconds
They have also provided a rate limit status API to query against:
https://dev.twitter.com/docs/api/1.1/get/application/rate_limit_status
Which of these values should I follow to see how many API calls are still available to me before the limit is reached?

Both return the same information. The difference is that /get/application/rate_limit_status is an API call that returns the rate limits for all resources, while the X-Rate-Limit-* headers are set only for the resource you just called.
Use /get/application/rate_limit_status to cache the number of API calls remaining and refresh it at periodic intervals, rather than having to make a call and then parse the header info to check whether you've exceeded the rate limits.
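A minimal sketch of that cached approach in Python using the requests library; the bearer token is a placeholder, and the v1.1 endpoint and response shape are as documented by Twitter:

    import time
    import requests

    BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder; use your app's credentials
    STATUS_URL = "https://api.twitter.com/1.1/application/rate_limit_status.json"

    _cache = {"fetched_at": 0, "resources": {}}

    def rate_limit_info(resource_family, endpoint, max_age=60):
        """Return cached limit info for an endpoint, refreshing every max_age seconds."""
        if time.time() - _cache["fetched_at"] > max_age:
            resp = requests.get(
                STATUS_URL,
                headers={"Authorization": "Bearer " + BEARER_TOKEN},
                params={"resources": resource_family},
            )
            resp.raise_for_status()
            _cache["resources"] = resp.json()["resources"]
            _cache["fetched_at"] = time.time()
        return _cache["resources"][resource_family][endpoint]

    # Example: check the search endpoint before calling it
    info = rate_limit_info("search", "/search/tweets")
    if info["remaining"] == 0:
        time.sleep(max(0, info["reset"] - time.time()))  # wait out the window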

Related

Header "x-ms-throttle-limit-percentage" not coming in response

My application makes a lot of calls to the Graph API to get the properties I need, and it is impossible to reduce the number of requests in my case. Because of this, I need to know when the number of requests is approaching the limit, so I can stop making them and avoid a 429.
The documentation says that the "x-ms-throttle-limit-percentage" header should appear in the response when the number of requests approaches the limit, from 0.8 upward. As I understand it, 0.8 is a fraction of 1, where 1 is the upper limit:
https://learn.microsoft.com/en-us/graph/throttling?view=graph-rest-1.0#regular-responses-requests
But I never received this header, although I did get Retry-After with a TooManyRequests response.
How can I get this header in the response? Do I need to specify additional parameters, or configure the tenant for this?
Or is there another way to view the throttle limit?
Thanks in advance for your reply.
If you haven't received the "x-ms-throttle-limit-percentage" header in the response, it means you haven't consumed more than 0.8 of your limit; this is mentioned in the docs.
You can check the service-specific throttle limits by following the documentation.
Out of curiosity, which service were you hitting?
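To illustrate, a hedged Python sketch of reading the header when it does appear; the endpoint and token are placeholders, and per the docs the header is only sent once utilization reaches roughly 0.8 of the limit:

    import time
    import requests

    ACCESS_TOKEN = "YOUR_GRAPH_TOKEN"  # placeholder
    url = "https://graph.microsoft.com/v1.0/users"  # any Graph request you already make

    resp = requests.get(url, headers={"Authorization": "Bearer " + ACCESS_TOKEN})

    # Absence of the header means you are not yet close to the limit.
    pct = resp.headers.get("x-ms-throttle-limit-percentage")
    if pct is not None:
        print("Throttle utilization reported by Graph:", pct)
        time.sleep(2)  # arbitrary pause; tune to your workload

    if resp.status_code == 429:
        # Already throttled: honor Retry-After before the next call.
        time.sleep(int(resp.headers.get("Retry-After", "10")))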

Getting Locust to send a predefined distribution of requests per second

I previously asked this question about using Locust as the means of delivering a static, repeatable request load to the target server (n requests per second for five minutes, where n is predetermined for each second), and it was determined that it's not readily achievable.
So, I took a step back and reformulated the problem into something that you probably could do using a custom load shape, but I'm not sure how – hence this question.
As in the previous question, we have a 5-minute period of extracted Apache logs, where each second, anywhere from 1 to 36 GET requests were made to an Apache server. From those logs, I can get a distribution of how many times a certain requests-per-second rate appeared; e.g. there's a 1/4000 chance of 36 requests being processed on any given second, 1/50 for 18 requests to be processed on any given second, etc.
I can model the distribution of request rates as a simple Python list: the numbers between 1 and 36 appear in it an equal number of times as 1–36 requests per second were made in the 5-minute period captured in the Apache logs, and then just randomly get a number from it in the tick() method of a custom load shape to get a number that informs the (user count, spawn rate) calculation.
Additionally, by using a predetermined random seed, I can make the test runs repeatable to within an acceptable level of variation to be useful in testing my API server configuration changes, since the same random list elements should be retrieved each time.
The problem is that I'm not yet able to "think in Locust", to think in terms of user counts and spawn rates instead of rates of requests received by the server.
The question becomes this:
How do you implement the tick() method of a custom load shape in such a way that the (user count, spawn rate) tuple results in a roughly known distribution of requests per second to be sent, possibly with the help of other configuration options and plugins?
You need to create a Locust User with the tasks you want it to run (e.g. make your HTTP calls). You can define the wait time between tasks to roughly control the requests per second: if you have a task that makes a single HTTP call and define wait_time = constant(1), you get roughly 1 request per second per user. Locust's spawn_rate is a per-second unit. Since you already have the data you want to reproduce, and it's in 1-second intervals, you can then create a LoadTestShape class with a tick() method somewhat like this:
    from locust import LoadTestShape

    class MyShape(LoadTestShape):
        repro_data = […]  # your per-second request counts, e.g. from the Apache logs
        last_user_count = 0

        def tick(self):
            if len(self.repro_data) > 0:
                requests_per_second = self.repro_data.pop(0)
                # Spawn rate must cover the jump (up or down) within one second.
                requests_per_second_diff = abs(self.last_user_count - requests_per_second)
                self.last_user_count = requests_per_second
                return (requests_per_second, requests_per_second_diff)
            return None
If your first data point is 10 requests, you'd need requests_per_second=10 and requests_per_second_diff=10 to make Locust spin up all 10 users in a single second. If the next second is 25, you'd have requests_per_second=25 and requests_per_second_diff=15. In a Load Shape, spawn_rate also works for decreasing the number of users. So if next is 16, requests_per_second=16 and requests_per_second_diff=9.
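For completeness, a minimal User matching the description above; the host and endpoint are placeholders for your own server and the path from your Apache logs:

    from locust import HttpUser, constant, task

    class OneRequestPerSecondUser(HttpUser):
        host = "http://your-server-under-test"  # placeholder
        wait_time = constant(1)  # one task iteration per second per user

        @task
        def get_page(self):
            self.client.get("/")  # the endpoint from your logs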

How to handle Google Ads API rate limit when calling REST API?

I am using the Google Ads REST API to pull Ads data. I am not using the client library.
One question: how do you programmatically check current API usage when making requests, so you can stop and wait before continuing? Other APIs, like the Facebook Marketing API, include a header in the result that tells you how many requests you have left, so I could stop and wait. Is there similar info in the Google Ads REST API?
Thank you for reading this.
I've seen nothing in the documentation so far to suggest that there is :(
(There is, separately, a RateExceeded error, which includes a retryAfterSeconds field, if you're going too fast / the API is overloaded.)
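If you do hit that error, one hedged way to handle it in Python; the exact payload path for retryAfterSeconds is not shown here, so this sketch falls back to the Retry-After header or doubling delays:

    import time
    import requests

    def call_with_backoff(url, headers, params=None, max_attempts=5):
        # Sketch: retry a Google Ads REST call on 429 with exponential backoff.
        delay = 2
        for attempt in range(max_attempts):
            resp = requests.get(url, headers=headers, params=params)
            if resp.status_code == 429:
                retry_after = resp.headers.get("Retry-After")
                time.sleep(int(retry_after) if retry_after else delay)
                delay *= 2
                continue
            resp.raise_for_status()
            return resp
        raise RuntimeError("gave up after %d attempts" % max_attempts)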
Ultimately, I tried the method below. So far, I haven't hit the limit with it:
The basic developer token for the Google Ads API allows 15,000 requests per day as of this answer (link: https://developers.google.com/google-ads/api/docs/access-levels). That's 15,000 / 24 = 625 requests every hour.
Dividing further, that's 625 / 60 ≈ 10.4 requests every minute, so 1 request every 6 seconds ensures I won't hit the rate limit.
So my solution is:
Measure the time it takes to complete a request call and its subsequent processing.
If the total time is over 6 seconds, perform the next request immediately. Otherwise, wait until the total time reaches 6 seconds, then perform the next request.
The below code is what I used to perform this. Hope it helps you guys.
    import time
    from math import ceil

    waiting_seconds = 6

    start_time = time.time()

    ############### PERFORM API REQUEST HERE

    # Measure how long it took; the total should be at least 6 seconds
    # to stay under the API limit.
    end_time = time.time()
    elapsed = end_time - start_time
    if elapsed < waiting_seconds:
        remaining = ceil(waiting_seconds - elapsed)
        time.sleep(remaining)

Does minimizing request fields save more quota?

Since the YouTube Data API comes with quota limits, we can track our quota usage based on the requests we send, using the per-request costs given in the documentation.
But the documentation only goes down to the "part" level rather than the "fields" level. So, for example...
    YouTube_Service_Public.search().list(
        part="snippet",
        channelId=channel_id,
        maxResults=50,
        order="date",
        type="video",
        fields="items(etag,id/videoId,snippet(publishedAt,thumbnails/default,title))"
    ).execute()

    YouTube_Service_Public.search().list(
        part="snippet",
        channelId=channel_id,
        maxResults=50,
        order="date",
        type="video"
    ).execute()
Does the first request cost less quota than the latter,
or do they both cost 100 quota units since they both request the "snippet" part?
The quota cost would be the same. The fields parameter only reduces bandwidth usage; it does not change the quota charged.
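So if you want to budget your own usage, counting units per call is enough. A tiny sketch, assuming the documented 100-unit cost of search.list and the default 10,000-unit daily quota:

    SEARCH_LIST_COST = 100   # documented cost per search.list call
    DAILY_QUOTA = 10000      # default daily quota for a project

    units_used = 0

    def can_search():
        return units_used + SEARCH_LIST_COST <= DAILY_QUOTA

    def record_search():
        global units_used
        units_used += SEARCH_LIST_COST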

Rate limit exceeded in Batch query

I have a client application that inserts files into Google Drive. Sometimes it needs to insert multiple files at once, so a batch query (GTLBatchQuery) is used to insert multiple files at a time. Occasionally during an insert, the server throws a rate limit exceeded error:
"error" : {
"message" : "Rate Limit Exceeded",
"data" : [
{
"reason" : "rateLimitExceeded",
"message" : "Rate Limit Exceeded",
"domain" : "usageLimits"
}
],
"code" : 417
},
Please point me to the correct way to enable retry on this error. I have tried enabling retry on the service:
    self.driveService.retryEnabled = YES;
    self.driveService.maxRetryInterval = 60.0;
But it has no effect.
Is it possible to set retry for a batch query?
Should I set retry enabled on GTMHTTPFetcher instead?
Any code snippet implementing exponential backoff in Objective-C is appreciated.
Standard exponential backoff as shown in the Google documentation is not the correct way to deal with rate limit errors. You will simply overload Drive with retries and make the problem worse.
Also, sending multiple updates in a batch is almost guaranteed to trigger rate limit errors if you have more than 20 or so updates, so I wouldn't do that either.
My suggestions are:
1. Don't use batch, or if you do, keep each batch below 20 updates.
2. If you get a rate limit error, back off for at least 5 seconds before retrying.
3. Try to avoid the rate limit errors by keeping your updates below 20, or keeping the submission rate below one every 2 seconds.
These numbers are all undocumented and subject to change.
The reason for suggestion 3 is that there is (was, who knows) a bug in Drive whereby even though an update returned a rate limit error, it did actually succeed, so you can end up inserting duplicate files. See "403 rate limit on insert sometimes succeeds".
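The question is in Objective-C, but the strategy is language-independent. A hedged Python sketch of suggestions 2 and 3; insert_file and RateLimitError are placeholders for your actual Drive insert call and however your client surfaces rateLimitExceeded:

    import time

    class RateLimitError(Exception):
        pass  # placeholder for your client's rate limit error

    MIN_INTERVAL = 2.0  # at most one submission every 2 seconds (suggestion 3)
    BACKOFF = 5.0       # wait at least 5 seconds after a rate limit error (suggestion 2)

    def insert_all(files, insert_file, max_retries=3):
        last_submit = 0.0
        for f in files:
            for attempt in range(max_retries):
                # Pace submissions to at most one every MIN_INTERVAL seconds.
                wait = MIN_INTERVAL - (time.time() - last_submit)
                if wait > 0:
                    time.sleep(wait)
                last_submit = time.time()
                try:
                    insert_file(f)
                    break
                except RateLimitError:
                    # Back off; given the duplicate-insert bug described above,
                    # check whether the "failed" insert actually succeeded
                    # before blindly retrying.
                    time.sleep(BACKOFF)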
