Understanding POST statuses/filter Rate Limit - twitter

I need to do keyword-based data fetching on Twitter. I looked up the documentation and "POST statuses/filter" seemed like the best option. However, I do not understand how the rate limiting works. Does this mean that I can fire this request repeatedly? If yes, at what rate should I do so? Or do I have to fire the request only once and keep on getting data continuously? They have given clear explanations for the REST API. There's even a page showing the number of requests permitted in a 15-minute window for each REST API method. I was unable to find something similar for "POST statuses/filter".

From what I've been researching about the Streaming API, there aren't any rate limits because you make the request only once to open the connection; then you keep it open and you are sent a stream (hence the name) of tweets.
Once applications establish a connection to a streaming endpoint, they are delivered a feed of Tweets, without needing to worry about polling or REST API rate limits.
https://dev.twitter.com/docs/streaming-apis/streams/public
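As a rough sketch of what that looks like in Ruby with Net::HTTP (the OAuth Authorization header below is a placeholder; building and signing it is not shown):
require 'net/http'
require 'uri'
require 'json'
# One long-lived connection to POST statuses/filter. The Authorization
# value is a placeholder for a pre-signed OAuth 1.0a header (built with
# an OAuth library); request signing is not shown here.
uri = URI('https://stream.twitter.com/1.1/statuses/filter.json')
Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  request = Net::HTTP::Post.new(uri)
  request['Authorization'] = ENV.fetch('TWITTER_OAUTH_HEADER')
  request.set_form_data('track' => 'keyword1,keyword2')
  # The request is made once; Twitter keeps writing Tweets to the open
  # connection, which we consume chunk by chunk.
  http.request(request) do |response|
    buffer = +''
    response.read_body do |chunk|
      buffer << chunk
      # Individual Tweets are delimited by \r\n in the stream.
      while (i = buffer.index("\r\n"))
        line = buffer.slice!(0, i + 2).strip
        next if line.empty? # blank keep-alive lines
        tweet = JSON.parse(line)
        puts tweet['text']
      end
    end
  end
end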

Related

How do the userQuota limitations work on the YouTube Data API V3?

I'm building an alternative client for browsing YouTube subscriptions (folder-based subscriptions with a corresponding feed generated), and I'm making a lot of requests to YouTube to aggregate that data.
I'm caching a lot of responses, since there is no need to refresh data that was already fetched on a previous day.
The fact is, current-day refreshes consume a lot, and I reach my quota pretty fast even though those requests are read-only.
I submitted the YouTube quota increase request form, but I'm still quite worried.
Am I missing something with the userIp & quotaUser parameters?
Shouldn't those requests - as they are pretty much the same as what a normal user would do in the regular YouTube client - be counted against "Queries per 100 seconds per user"?
My main quota, "Queries per day", currently seems to absorb ALL the requests coming from my app, even though I added the quotaUser parameter on all the requests made by a user from the frontend.
I think I am missing something, as my app should not be considered "data consuming": it sends almost nothing to YouTube in terms of data, and it only reads data that is also available in the regular YouTube client, just not in the same format.
Thanks for your help.
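For context, quotaUser is attached as an ordinary query parameter on each request; a minimal sketch, with the channel id, API key and per-user string as placeholders:
require 'net/http'
require 'uri'
require 'json'
# The channel id, API key and per-user string are placeholders;
# quotaUser is just another query parameter on the request.
uri = URI('https://www.googleapis.com/youtube/v3/channels')
uri.query = URI.encode_www_form(
  part: 'snippet',
  id: 'CHANNEL_ID',                 # placeholder channel id
  key: ENV.fetch('YT_API_KEY'),     # placeholder API key
  quotaUser: 'user-42'              # any stable, opaque per-user identifier
)
response = Net::HTTP.get_response(uri)
items = JSON.parse(response.body)['items']
puts items&.first&.dig('snippet', 'title')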

Handle status 429 in Rails API

I built a Twitter clone using a Rails API + React, just for study purposes.
The request logic is quite simple: click on a user, then load their information and tweets by requesting the API. However, if I do this quickly, say 3 times, I receive status 429 (Too Many Requests) with the header Retry-After: 5.
Is there a way to increase the number of requests allowed in a given time? What would be the correct approach to handle this in such a common situation?
From my understanding, the error information you have shown is correct. It means the request cannot be served because the application's rate limit has been exhausted for that resource.
Rate limits are divided into 15 minute intervals. All endpoints require authentication, so there is no concept of unauthenticated calls and rate limits.
To overcome this situation, back off and retry once the interval given in the Retry-After header has elapsed; the documentation itself gives an example of this.
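A minimal sketch of that back-off pattern with a plain Net::HTTP client (the URL below is a placeholder, not Twitter's endpoint):
require 'net/http'
require 'uri'
# The URL is a placeholder; the point is the back-off on 429.
def get_with_backoff(url, attempts: 3)
  uri = URI(url)
  attempts.times do
    response = Net::HTTP.get_response(uri)
    return response unless response.code == '429'
    # Wait for the interval the server asked for (default to 5 seconds).
    sleep((response['Retry-After'] || 5).to_i)
  end
  raise 'rate limit still exceeded after retries'
end
response = get_with_backoff('https://example.com/api/tweets')
puts response.code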

API requests with Net::HTTP very slow in production

I am making a Google API request from my application using the RestClient library to get an address.
Sample request code:
require 'rest-client'
require 'json'
gmaps_api_href = "https://maps.googleapis.com/maps/api/geocode/json?latlng=18.56227673,73.76804232&language=ar"
response = RestClient.get gmaps_api_href
result = JSON.parse(response)['results']
This request works fine on my local machine and completes within 1-2 seconds. But on the production instance it takes 20 seconds to finish one request.
Due to some security measures, we cannot access the production instance directly, so I am unable to pinpoint this delay.
After doing some trial and error, we found that:
If we make the request using cURL, it takes 1 second on the server.
If we make the request using Net::HTTP, it takes 20 seconds to complete, the same as we observed with RestClient.
If we make the request using WebRequest in a small .NET app, the request completes within 1 second.
It's difficult for me to explain the difference between the above observations.
Please let me know why this is happening, and what changes I need to make for it to work in my Rails app.
Are you using a Google API key? Your example does not show the use of one. If not, I'd guess you are being rate-limited by Google. On your server you've probably already deployed a version of this app which made lots of requests to Google without an API key in the fairly recent past; Google noticed, and its rate-limiting software may be slowing down requests made from that server. Your local machine, on the other hand, hasn't made an enormous number of requests to the Google API, so it is not being rate-limited by Google's servers.
It's possible Google's rate-limiting pays some attention (for now!) to the User-Agent, and the different User-Agent sent by cURL somehow evades the rate-limiting that was triggered by the requests RestClient sent with its own User-Agent (RestClient may use Net::HTTP under the hood and send the same User-Agent as it).
While one would hope that if you were rate-limited you'd get a "429 Too Many Requests" error response instead of just a slow response, it's possible RestClient hides this from you (I haven't used RestClient), and I've also seen unpredictable behavior from Google's rate-limiting defenses, especially when not using an API key on a service that requires one for all but a few sample requests. I have seen things similar to what you report in that case.
My guess is you're being rate-limited because you are not using an API key. Get and use an API key from Google. Google still has rate limits when you use an API key, but they are clearly advertised (for free: 2,500 per day and no more than 10 per second; more if you pay) and should give clearer, more predictable error messages when exceeded. That's part of why Google requires the API key: so they can reliably rate-limit you in clear ways.
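For example, a minimal variant of the snippet above with the key parameter added (the key value is a placeholder read from the environment):
require 'rest-client'
require 'json'
# Same call as above, with a key parameter appended. GOOGLE_API_KEY is a
# placeholder for a key obtained from the Google Cloud console.
gmaps_api_href = 'https://maps.googleapis.com/maps/api/geocode/json' \
                 "?latlng=18.56227673,73.76804232&language=ar" \
                 "&key=#{ENV.fetch('GOOGLE_API_KEY')}"
response = RestClient.get gmaps_api_href
result = JSON.parse(response)['results']
puts result.first['formatted_address'] unless result.empty?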
https://developers.google.com/maps/documentation/geocoding/usage-limits
https://developers.google.com/maps/documentation/geocoding/intro#BYB

Twitter API Limits when using statuses/lookup endpoint

Simple question (I was not able to find the answer in the Twitter API docs):
The following GET request
https://api.twitter.com/1.1/statuses/lookup.json?include_entities=true&id=657208379442597888%2C657215510283730944
is a request to get 2 tweets by their ids.
From the point of view of the Twitter API limits, will executing this request be counted as 1 call or 2?
Regards,
It should be counted as a single request. Since you can ask for up to 100 tweets at a time, but the rate limit for app auth is only 60, that is the only interpretation that makes sense.
However, you can verify that by checking the rate limit response headers. If you make the same request twice (and no other app is making requests on the user's behalf, or you're the only one using the app's auth), you should see the X-Rate-Limit-Remaining header decrease by only one.
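A rough sketch of checking those headers with app-only auth (the bearer token is a placeholder):
require 'net/http'
require 'uri'
# The bearer token is a placeholder for app-only auth credentials.
uri = URI('https://api.twitter.com/1.1/statuses/lookup.json')
uri.query = URI.encode_www_form(
  include_entities: true,
  id: '657208379442597888,657215510283730944'
)
request = Net::HTTP::Get.new(uri)
request['Authorization'] = "Bearer #{ENV.fetch('TWITTER_BEARER_TOKEN')}"
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
# If the batch counts as a single call, x-rate-limit-remaining should
# drop by exactly one between two identical requests.
puts response['x-rate-limit-limit']
puts response['x-rate-limit-remaining']
puts response['x-rate-limit-reset']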

Twitter strategy: Streaming API vs. REST API

I'm working on a kind of Twitter wall. Users can log in with Twitter and create their own wall, which will display the tweets for certain terms/hashtags.
I'm still looking for the best strategy to get the data out of the Twitter APIs.
Here are some of my thoughts:
Strategy 1: Streaming API
Open a single stream (POST statuses/filter) for all walls
Each hashtag is added to the track parameter (see the sketch after this list)
When new tweets arrive, they will be processed and sent to the corresponding wall
("one account, one application, one open connection" cf. https://dev.twitter.com/discussions/14935)
Problems with the Streaming API
The Streaming API is limited to 400 keywords to track
What to do if there are more than 400 keywords to track?
The Streaming API is limited to 1% of the tweets of the firehose
It's very difficult to get above 1% of the firehose, but if you're tracking a term like "apple" it'd be pretty easy to exceed the 1%. (cf. https://dev.twitter.com/discussions/6349)
How can I handle such popular terms? Blacklist them?
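As a rough sketch of how the single shared connection's track parameter could be assembled under that cap (the walls structure is hypothetical):
# The `walls` structure mapping wall ids to hashtags is hypothetical.
MAX_TRACK_TERMS = 400
walls = {
  1 => ['#ruby', '#rails'],
  2 => ['#javascript']
}
terms = walls.values.flatten.uniq
if terms.size > MAX_TRACK_TERMS
  # Terms beyond the 400-keyword cap cannot go on this connection;
  # they would have to be dropped or served another way (e.g. strategy 2).
  terms = terms.first(MAX_TRACK_TERMS)
end
track = terms.join(',') # value for the track form field
puts track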
Strategy 2: REST Search API
Store user access tokens
Poll the Search API (GET search/tweets) on behalf of the user, respecting the rate limit of 180 queries per 15 minutes (see the polling sketch below)
(cf. https://dev.twitter.com/discussions/11141)
Problems with the REST Search API
Polling
Could get very expensive to poll the API for a lot of users.
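To make the polling cost concrete, here is a small sketch of the per-user budget (the term list is hypothetical):
# 180 requests per 15-minute window means at most one query every
# 5 seconds per user token; the term list below is hypothetical.
REQUESTS_PER_WINDOW = 180
WINDOW_SECONDS = 15 * 60
MIN_INTERVAL = WINDOW_SECONDS.fdiv(REQUESTS_PER_WINDOW) # => 5.0 seconds
terms = ['#ruby', '#rails', '#javascript']
refresh_interval = terms.size * MIN_INTERVAL
puts "each term can be refreshed at most once every #{refresh_interval} seconds"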
Do you have any suggestions/recommendations on which strategy would fit best? Are there already solutions to these problems?
Best regards
