I've been reading Twitter's documentation on this, but I'm a bit confused because I'm new to both the API and OAuth. If I get a user to log in to Twitter using OAuth, does that mean the rate limit would be 350 requests for their account and not for my application? In other words, is the rate limit applied to each account individually, or is it 350 for my application as a whole?
From the FAQ:
Are rate limits per user, per computer or per application?
Rate limits apply in different ways. Some methods are rate limited
whilst others are fair use limited. In the majority of methods GET
(read) requests are rate limited and POST (write) methods are not. You
should check the rate limited section of the documentation for the
method you want to use to make sure.
We apply requests to rate limits in the following ways:
Rate limits for authenticated requests are applied to the user.
Rate limits for unauthenticated requests are applied to the IP that
we see.
This means applications share the unauthenticated rate limit AND the
authenticated limit. The application being used makes no difference so
switching between multiple clients on the same IP offers no rate limit
advantage – they will all share the same remaining requests.
Multiple user accounts in a Twitter client each have their own user
rate limit but share the unauthenticated requests.
Search has its own rate limit and as all requests are anonymous it
applies to the IP we see. This means all users on the same IP share
the search rate limit.
This means that every request you make on behalf of an authenticated user (OAuth) counts against that user's rate limit, while any unauthenticated request you make counts against your application's per-IP rate limit. To summarize (a short sketch after this list shows how to observe these limits in practice):
JavaScript, client-side request - the user's rate limit.
Server-side request - your web application's per-IP rate limit.
Client application using the streaming API - no rate limit (technically not true, but it's a limit you won't need to worry about, because you're limited in another way based on the information you track instead, and the stream delivers updates at a rate below the cap).
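If it helps to see this in practice, here is a minimal Python sketch (assuming the requests and requests-oauthlib packages, and the x-rate-limit-* response headers of the v1.1-era API) showing that the counters are keyed to the authenticated user: swap in a different user's token and you get that user's remaining requests, not your application's.

```python
import requests
from requests_oauthlib import OAuth1  # assumption: requests-oauthlib is installed

# Placeholder credentials; the two resource-owner values identify *this user*,
# which is what the authenticated rate limit is keyed on.
auth = OAuth1("consumer_key", "consumer_secret", "user_token", "user_token_secret")

resp = requests.get(
    "https://api.twitter.com/1.1/statuses/home_timeline.json",
    auth=auth,
)

# Rate-limited endpoints echo your standing back on every response:
print(resp.headers.get("x-rate-limit-limit"))      # calls allowed per window
print(resp.headers.get("x-rate-limit-remaining"))  # calls left for THIS user
print(resp.headers.get("x-rate-limit-reset"))      # epoch seconds when the window resets
```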
For more, see the duplicate question, the FAQ linked above, and the rate-limiting doc:
https://dev.twitter.com/docs/rate-limiting
Need a clarification on this:
Per the docs, "By default, a search result set identifies matching video, channel, and playlist resources" - how does this matching take place? Do they search comments as well? Any idea on this?
Thanks!
The YouTube API operates under the same rate limits as the other Google APIs.
There are project-based limits and user-based limits.
You can see the limits in the Google developer console.
My project can make a maximum of 1,800,000 requests per minute.
It also has a quota cost limit of 10,000, which is not really what it sounds like.
Each user can then make a maximum of 180,000 requests per minute.
This is not related to the amount of data a user has on their account. It is strictly about the number of requests, or the cost of those requests, that your application or a single user can make over a period of time.
You can request additional daily quota beyond the default 10,000 if you want; just submit the form in the Google Cloud console.
I am not aware of any increased quotas on the YouTube APIs for big YouTube channels.
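To make the project-versus-user distinction actionable, here is a short Python sketch of telling the two failure modes apart (the API key is a placeholder, and the reason strings are the ones the Data API v3 has historically returned; treat both as assumptions):

```python
import requests

API_KEY = "your-api-key"  # placeholder

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/search",
    params={"part": "snippet", "q": "rate limits", "key": API_KEY},
)

if resp.status_code == 403:
    # Google APIs report quota problems as 403s with machine-readable reasons.
    reasons = [e.get("reason") for e in resp.json().get("error", {}).get("errors", [])]
    if "quotaExceeded" in reasons:
        print("Project-wide daily quota exhausted; wait for the reset or request more.")
    elif "userRateLimitExceeded" in reasons:
        print("One user hit the per-user limit; back off and retry shortly.")
else:
    resp.raise_for_status()
    print(len(resp.json().get("items", [])), "results")
```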
From the Usage Limits help page:
This version of the Google Sheets API has a limit of 500 requests per 100 seconds per project, and 100 requests per 100 seconds per user.
Let’s take it apart:
500 requests per 100 seconds per project - This is applied to my project. I use my project credentials to make each request.
100 requests per 100 seconds per user - When I make a request, I also include the OAuth token of a user that permitted me to update their workbook.
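For concreteness, here is roughly what one of my update calls looks like in Python (the spreadsheet ID and token are placeholders; the endpoint is the v4 values.update method):

```python
import requests

USER_TOKEN = "ya29.user-granted-oauth-token"  # placeholder: one user's token
SPREADSHEET_ID = "spreadsheet-id"             # placeholder

# The bearer token identifies the *user* (their 100/100s quota); the project
# that issued the token is charged against the 500/100s project quota.
resp = requests.put(
    f"https://sheets.googleapis.com/v4/spreadsheets/{SPREADSHEET_ID}/values/Sheet1!A1",
    params={"valueInputOption": "RAW"},
    headers={"Authorization": f"Bearer {USER_TOKEN}"},
    json={"values": [["hello"]]},
)
resp.raise_for_status()
```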
Question about the per-user part:
Is there anything the user can do themselves (like reach out to Google) to increase the quota just for them? Or is it me who needs to talk to Google to increase the quota for all users of my project simultaneously?
Thanks!
500 requests per 100 seconds per project - This is applied to my project. I use my project credentials to make each request.
100 requests per 100 seconds per user - When I make a request, I also include the OAuth token of a user that permitted me to update their workbook.
As you can see, there are two types of quotas. Project-based quotas are applied to your project as a whole, while user-based quotas are applied to the individual users of your project/application.
Project-based quotas can be extended: you can apply for an extension, and Google may grant it, which will increase the number of requests your project as a whole can make.
User-based quotas are more like flood protection: they ensure that a single user of your application cannot make too many requests at once and flood the server. User-based quotas cannot be extended.
Is there anything the user can do themselves (like reach out to Google) to increase the quota just for them? Or is it me who needs to talk to Google to increase the quota for all users of my project simultaneously?
To answer your question: there is nothing the user can do to increase the quota. This is your project, and only you can request an increase to the project-based quota.
There is nothing you can do to increase the user-based quotas.
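What you can do instead is throttle on your side so that no single user's token ever exceeds the advertised rate. A minimal sliding-window sketch in Python (the class and numbers are illustrative; the real limit is enforced server-side regardless):

```python
import time
from collections import deque

class PerUserThrottle:
    """Client-side guard for a "100 requests per 100 seconds per user" quota.

    Sketch only: it protects a single process; Google enforces the real
    quota server-side per OAuth user no matter what you do here.
    """

    def __init__(self, max_requests=100, window_seconds=100):
        self.max_requests = max_requests
        self.window = window_seconds
        self.sent = {}  # user_id -> deque of request timestamps

    def wait_for_slot(self, user_id):
        q = self.sent.setdefault(user_id, deque())
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            # Sleep until the oldest request leaves the window.
            time.sleep(self.window - (now - q[0]))
            q.popleft()
        q.append(time.monotonic())

throttle = PerUserThrottle()
# throttle.wait_for_slot("user-123")  # call before each per-user API request
```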
I did a Twitter clone using a Rails API + React, just for study purposes.
My request logic is quite simple: click on a user, and their information and tweets are loaded by requesting the API. However, if I do this quickly, say 3 times, I receive status 429 (Too Many Requests) with the header Retry-After: 5.
Is there a way to increase the number of requests allowed in a given time? What would be the correct approach to handling this in such a common situation?
From my understanding, the error information you have shown is correct: it means the request cannot be served because the application's rate limit has been reached for that resource.
Rate limits are divided into 15 minute intervals. All endpoints
require authentication, so there is no concept of unauthenticated
calls and rate limits.
To overcome this situation, do what the documentation's own example does: honor the Retry-After header and back off before retrying.
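A minimal sketch of that pattern, assuming Python's requests library (the helper name is illustrative):

```python
import time
import requests

def get_with_retry(url, max_attempts=5, **kwargs):
    """GET that honors a 429's Retry-After header. Illustrative sketch."""
    for attempt in range(max_attempts):
        resp = requests.get(url, **kwargs)
        if resp.status_code != 429:
            return resp
        # Prefer the server's hint; fall back to exponential backoff.
        delay = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    return resp  # still rate limited after max_attempts; let the caller decide
```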
Goo.gl nicely mentions that you have a limit of 1,000,000 requests per day when using an API key.
https://developers.google.com/url-shortener/v1/getting_started
// Quotas:
By default, your registered project gets 1,000,000 requests per day for the URL Shortener API (see the Developers console for more details).
I can't find what the quota limits are when you "don't" use an API key.
The reason for this is that I could use the API key, but the server(s) will be distributed among various clients, communicating under uncertain conditions where the API key could be sniffed. Aside from that, OAuth would require user interaction, which wouldn't be acceptable in an automated process (which may not even have a UI).
https://support.google.com/cloud/answer/6158857?hl=en
My expected usage would be maybe 10-20 requests per minute during peak hours, or at most 1,000 a day. Would this hit goo.gl's limits for requests made without an API key?
I've been trying to get all tweets of some public (unlocked) Twitter user.
I'm using the REST API:
http://api.twitter.com/1/statuses/user_timeline.json?screen_name=andy_murray&count=200&page=1
Going over the 16 pages (the page param) it allows, I can get 3,200 tweets, which is OK.
BUT then I discovered the rate limit for such calls is 150 per hour(!!!), meaning fewer than 10 user queries per hour (16 pages each). (350 are allowed if you authenticate; still a very low number.)
Any ideas on how to solve this? The streaming/search APIs don't seem appropriate(?), and there are some web services out there that do seem to have this data.
Thanks
You can either queue up the requests and make them as the rate limit allows, or you can make authenticated requests as multiple users. Each user has 350 requests/hour.
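A sketch of the multiple-users idea in Python, assuming requests-oauthlib and one OAuth 1.0a credential set per cooperating account (all values are placeholders):

```python
from itertools import cycle

import requests
from requests_oauthlib import OAuth1  # assumption: requests-oauthlib is installed

# One signer per cooperating account; each carries its own 350 requests/hour.
signers = cycle([
    OAuth1("consumer_key", "consumer_secret", "user_a_token", "user_a_secret"),
    OAuth1("consumer_key", "consumer_secret", "user_b_token", "user_b_secret"),
])

def fetch(url, **params):
    """Round-robin across user credentials so no single hourly limit is drained."""
    return requests.get(url, params=params, auth=next(signers))
```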
One approach would be to use the streaming API (or perhaps the more specific user streams, if that's better suited to your application) to start collecting all tweets as they occur from your target user(s) without having to bother with the traditional rate limits, and then use the REST API to backfill those users' historical tweets.
Granted, you only have 350 authenticated requests per hour, but if you run your harvester around the clock, that's still 1,680,000 tweets per day (350 requests/hour * 24 hours/day * 200 tweets/request).
So, for example, if you decided to pull 1,000 tweets per user per day (5 API calls at 200 tweets per call), you could run through 1,680 user timelines per day (70 timelines per hour). Then, on the next day, begin where you left off by harvesting the next 1,000 tweets, using the oldest status ID per user as the max_id parameter in your statuses/user_timeline request.
The streaming API will keep you abreast of any new statuses your target users tweet, and the REST API calls will pretty quickly, in about four days, start running into Twitter's fetch limit for those users' historical tweets. After that, you can add additional users to fetch going forward from the streaming endpoint by adding them to the follow list, and you can stop fetching historical tweets for those users that have maxed out, and start fetching a new target group's tweets.
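Here is what one day's backfill slice for a single user might look like in Python (the 1.1-style endpoint and requests-oauthlib are assumptions; the /1/ page parameter from the question works similarly):

```python
import requests
from requests_oauthlib import OAuth1  # assumption

auth = OAuth1("consumer_key", "consumer_secret", "user_token", "user_secret")  # placeholders
URL = "https://api.twitter.com/1.1/statuses/user_timeline.json"

def backfill(screen_name, pages=5, max_id=None):
    """Walk a timeline backwards: 5 calls x 200 tweets = 1,000 tweets per day.

    Persist the returned max_id and pass it back in tomorrow to resume.
    """
    tweets = []
    for _ in range(pages):
        params = {"screen_name": screen_name, "count": 200}
        if max_id is not None:
            params["max_id"] = max_id
        batch = requests.get(URL, params=params, auth=auth).json()
        if not batch:
            break  # hit the ~3,200-tweet history cap (or an empty timeline)
        tweets.extend(batch)
        # max_id is inclusive, so step just below the oldest tweet seen.
        max_id = batch[-1]["id"] - 1
    return tweets, max_id
```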
The Search API would seem to be appropriate for your needs, since you can search on screen name. The Search API rate limit is higher than the REST API rate limit.
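For instance, a from: query scoped to one screen name might look like this (1.1-style endpoint shown as an assumption; note that search only covers a recent window of tweets, not full history):

```python
import requests
from requests_oauthlib import OAuth1  # assumption

auth = OAuth1("consumer_key", "consumer_secret", "user_token", "user_secret")  # placeholders

resp = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    params={"q": "from:andy_murray", "count": 100},
    auth=auth,
)
for tweet in resp.json().get("statuses", []):
    print(tweet["id"], tweet["text"][:60])
```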