Twitter Search API rate limit

I'm using the Twitter search API (for example: http://search.twitter.com/search.rss?q=%23juventus&rpp=100&page=4)
I read the following here: http://search.twitter.com/api/
We do not rate limit the search API under ordinary circumstances, however we have put measures in place to limit the abuse of our API. If you find yourself encountering these limits, please contact us and describe your app's requirements.
The limit seems random: sometimes I can make 150 requests, sometimes 300; generally, after 5 minutes I can make requests again.
I was wondering whether it is possible to make more requests.

They'll detect floods and throttle accordingly rather than enforcing fixed published limits, which is why it appears random. It will also no doubt depend on load from other sources at the time.
If you need lots more, then they gave you the answer - contact them telling them why.
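Since the throttling is opaque, the practical client-side approach is to retry with backoff rather than assume a fixed limit. A minimal sketch in Python, assuming the requests library and the endpoint from the question (Twitter historically used HTTP 420 for throttling; the retry budget and delays here are illustrative guesses):

    import time

    import requests

    SEARCH_URL = "http://search.twitter.com/search.rss"  # endpoint from the question

    def search_with_backoff(query, page, max_retries=5):
        """Fetch one page of search results, backing off when throttled.

        The throttling is opaque, so rather than assuming a fixed limit we
        retry with exponential backoff on a throttling response (Twitter
        historically used HTTP 420; newer APIs use 429).
        """
        delay = 60  # seconds; the question suggests ~5 minutes clears the block
        for _ in range(max_retries):
            resp = requests.get(SEARCH_URL, params={"q": query, "rpp": 100, "page": page})
            if resp.status_code == 200:
                return resp.text
            if resp.status_code in (420, 429):
                time.sleep(int(resp.headers.get("Retry-After", delay)))
                delay *= 2  # back off harder each time we're throttled
            else:
                resp.raise_for_status()
        raise RuntimeError("still throttled after %d retries" % max_retries)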

Related

Quota exceeded for quota metric 'Requests' and limit 'Requests per minute' of service 'mybusinessbusinessinformation.googleapis.com' for consumer

I'm trying to collect and update data using the Business Information API.
In order to get the API calls to work, I'm only trying to get information about my own business using GET requests. However, when calling several methods, I keep receiving the following error:
"Quota exceeded for quota metric 'Requests' and limit 'Requests per minute'".
This happens both in Postman calls and in the OAuth 2.0 Playground (which, in my eyes, should be a sandbox ready for testing; very frustrating…).
When I look at my quota in the API settings, I'm not even able to change the requests per minute to anything other than '0'. This makes it really hard to test/use the API.
I can't even find out which categories there are for a business location… 
For your information: I've already asked for an increase of the quota using the forms, but it seems Google isn't really responsive in this matter.
Can this be solved?
The API is meant to update a group of 50 (or more) locations, instead of bulk-editing with a CSV file.
Any help would be welcome.
Thanks in advance,
Kind Regards,
Seppe
If the quota approval form was ignored, you might still have a chance via the API support (https://support.google.com/business/contact/api_default).
They might be reluctant to grant you a quota if your maximum location count is this low though - the API is designed for larger use cases.
Is it documented anywhere that it's meant for larger users? I got approved being very clear it was only for a handful of locations.
BUT even though I got approved and have access, there are 3 specific quotas (all per-minute) that are set to zero, even though I have tonnes of allowance for all the non-per-minute quotas. Seems like a bug to me.
I can make 10000 "Update Location requests per day" but zero per minute.
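On the category lookup mentioned in the question: the Business Information API does expose a categories list endpoint. A minimal sketch, assuming an OAuth token with the business.manage scope (the token and parameter values are placeholders); note it counts against the same per-minute quota, so it will fail with the same error while that quota is zero:

    import requests

    # Placeholder token from an OAuth 2.0 flow with the
    # https://www.googleapis.com/auth/business.manage scope.
    ACCESS_TOKEN = "ya29...."

    resp = requests.get(
        "https://mybusinessbusinessinformation.googleapis.com/v1/categories",
        params={"regionCode": "US", "languageCode": "en", "view": "BASIC"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()  # raises while the per-minute quota is still 0
    for category in resp.json().get("categories", []):
        print(category["name"], category["displayName"])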

Understanding the Youtube Data API Quota limitations

I was wondering if I could get some help understanding the YouTube Data API. Specifically, Google's website says: "Each project starts with 10,000 units per day, an amount sufficient for the overwhelming majority of our API users."
Is this 10,000 per person who signs in through your project, then? Because 10,000 does not in any way seem sufficient! A YouTube search alone costs 100 units if I understand the documentation, so 5 people each doing 20 searches and your project is done for the day. This seems to make building a robust YouTube browser all but impossible for a user base bigger than 2.
As you can tell I'm a bit frustrated and sure I must be missing something. Any clarifications you can offer would be appreciated.
If you check the quotas screen in the Google Cloud console, only one of the quotas says "per user"; the others are project-based quotas.
You have a quota of 10000 per day for the total project.
Quota cost is calculated by the request resource.
A YouTube search costs 100 units per request, so, as you say, 5 people each doing 20 searches and your project is done for the day.
10,000 (limit) / 100 (quota cost) = 100 search requests per day.
It sounds to me like you have understood the quota system perfectly. If you intend to make more requests than that you will need to request additional quota.
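To make the arithmetic concrete, here is a minimal sketch assuming the Google API Python client and a placeholder API key; the point is that the 100-unit cost is charged per request against the project's shared pool, regardless of which end user triggered it:

    from googleapiclient.discovery import build

    # Placeholder API key; search.list works with a plain key, no OAuth needed.
    youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

    # One search.list call costs 100 units out of the project-wide daily pool,
    # no matter which end user triggered it.
    response = youtube.search().list(part="snippet", q="quota example", maxResults=50).execute()

    DAILY_QUOTA = 10000
    SEARCH_COST = 100
    print(DAILY_QUOTA // SEARCH_COST, "searches per day for the whole project")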

Applying for Additional Quota for YouTube API as an Individual (without business info)

I recently began using the YouTube Data v3 API for a program that I'm writing which is purely for personal use. To give a brief summary of what it does: it checks the live chat from my most recent (usually ongoing) livestream and performs actions based on certain keywords entered in chat (essentially commands for people to use from live chat). In order to do that, however, I have to constantly send requests to get a refreshed live chat. As it is now, it sends requests at 1-second intervals. I recently did a livestream to test out my program and it only took about 25 minutes for me to reach the daily quota limit of 10,000 units/day.
The request is: youtube.liveChatMessages().list(liveChatId=liveChatId, part="snippet")
It seems like every request I make costs 6 units, according to the math. I want to be able to host livestreams of up to 3 hours, which would require a significant quota increase. I'm aware that there is an option to fill out a form to request additional quota. However, it asks for business information such as a business name, business website, business mailing address, etc. Like I said before, I'm doing this for my own use only; I'm in no way part of a business, and just made my program as a personal project. Does anyone know if there's any way to apply for additional quota as an individual/hobbyist? If not, do you think just putting n/a in those fields would be acceptable? I did find another post where someone else had the exact same problem, but no one was able to give a helpful answer. Any advice would be greatly appreciated.
Unfortunately, and although this is only tangentially related, it seems Google is in it for the money here. I tried to do something similar myself (a very basic chat bot just reading the chat messages), and although other users on the net got somewhat different results, they all have in common that, following the documented approach, they poll at an interval of about once a second (that's the timeout you get as part of the response when polling for new messages). I, along with a few others, got at most about 5 minutes out of polling once a second; some others, like you, got a few more minutes out of it. I increased the interval by hand in increments of 5 seconds: 5, 10, 15, and so on. I can't remember which value I finally settled on, but I was only able to get about 2 1/2 hours' worth with a rather long polling interval of once every 10 seconds or so; still plenty for a simple chat bot that just reads the chat. But replying as well would have at least doubled the usage and hence halved the time.
It's already a pain to get this working as an individual, as just setting up the required OAuth authentication requires you to provide at least basic information like a fixed callback URL and some legal and policy information. I always ended up having it rejected with the standard reply "Your project seems to be for internal use only." I even got G Suite working (before it required payment) to set up an "internal" project (only possible if the account belongs to a G Suite organization account), but after I set up the OAuth login I got an error that the private account I wanted to use the bot on was not part of the organization and hence couldn't be used. TL;DR: a useless waste of time.
Having been at this for several months now, I'd say there's just no way to get it done as a private individual for personal use. Yes, you can just set it up and have the required verification rejected (as it uses the YouTube Data API scopes), but you're still stuck with that 10,000 units/day quota. Building your own powerful tool capable of doing more than just polling once every 10 to 30 seconds, with even a minimum of interaction, doesn't get you further than a few minutes, maybe one or two hours if you're lucky. If you want more, you have to set up a business and pay for it. Plain and short: Google wants you to pay for that service.
As Mixer has officially been announced to shut down on July 22nd, you have exactly these two options:
Use one of the publicly available services like Streamlabs, Nightbot, etc. They're backed by their respective "businesses" and so don't seem to have those quota limits (although I just found some complaints about Streamlabs from April, about one month before you posted this question, where they admitted to having reached their limits; I don't know whether that's been solved yet).
Don't use YouTube for streaming but rather Twitch, as Twitch doesn't have these limits and anybody is free to set up an API token either on the main account or on a second bot account (which is also explicitly explained in their docs). The downsides are, of course, the objective sacrifices: a) viewers only get the streamer's source quality until you reach at least affiliate (no transcoding); b) capped at 1080p60 with only 6,000 kbit/s; c) only short-term VOD storage.
I myself wanted to use YouTube as my main platform (and currently do, but without my own tooling at the moment), along with my own bot and such, as streaming on YouTube has some advantages over Twitch. But as YouTube wants me to pay for what others (namely Twitch) offer me for free (although overall not at the same quality), it's an easy decision to make. Mixer looked promising, as it also offered quite a few neat features (overall better quality than Twitch, lower latency), but the requirements to get partner status were extremely high (2,000 followers along with another insanely high number to reach), and Mixer itself was just a small platform (for fun, I counted all the streamers and viewers: with only a few hundred streamers and a few tens of thousands of viewers, the whole platform had less than some big Twitch channels on their own). And now it's been announced that it will soon be dead anyway.
Hope this gives you some insight into what a small streamer has to consider and suffer through when choosing a platform. After everything I experienced, here's my advice: either do it like all the others, streaming on Twitch and using YouTube as an archive to export to from Twitch (although Twitch STILL doesn't have auto-export of the latest VOD implemented, I guess that could be done with a small script), or if you want to stay on YouTube, use an existing bot like Nightbot or one of the other services like Streamlabs.
If you get any other information on how to convince Google to increase the limit as an individual, please let us know.
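For what it's worth, the quota-stretching approach described above (honoring the server-suggested interval but enforcing a longer floor) might look like this in Python with the Google API client. This is a sketch, not a guaranteed-to-fit-your-quota implementation; handle_message is a hypothetical stand-in for the keyword-command dispatch from the question:

    import time

    def poll_live_chat(youtube, live_chat_id, min_interval_s=10.0):
        """Poll live chat while stretching the interval to save quota.

        `youtube` is an authorized googleapiclient resource. Every response
        carries a suggested `pollingIntervalMillis`; we sleep for the larger
        of that and our own floor, since ~10 s intervals were reportedly
        needed above to survive a multi-hour stream on 10,000 units/day.
        """
        page_token = None
        while True:
            response = youtube.liveChatMessages().list(
                liveChatId=live_chat_id,
                part="snippet",
                pageToken=page_token,
            ).execute()
            for item in response.get("items", []):
                handle_message(item)  # hypothetical keyword-command dispatch
            page_token = response.get("nextPageToken")
            suggested_s = response.get("pollingIntervalMillis", 0) / 1000.0
            time.sleep(max(suggested_s, min_interval_s))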

Ways to pull (potentially) large amounts of data from Twitter

I've been playing around with the Twitter API using Twitter4j. I am trying to pull data given a keyword and date; an example of a query I would run using the REST API would be
bagels since:2014-12-27
Which would give me all tweets containing the keyword 'bagels' since 2014-12-27.
This works in theory, but I quickly exceeded the rate limits, since each query returns up to 100 results and only 180 queries are allowed within a 15-minute interval. Many keywords return more than 18k results.
Is there a better way to pull large amounts of data from Twitter? I looked at the Streaming API but I don't know if I can pull data from a certain date range.
There are a few things you can do to improve your rates:
Make sure your count is maxed at 100, which it looks like you're doing.
Use Application-Only authorization - it increases your rate limit to 450.
Use the max_id and since_id parameters to page through data and avoid querying for results you've already received; see the Working with Timelines docs, and the sketch after this list.
Consider using Gnip if you're willing to pay to remove rate limits.
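Twitter4j is Java, but the paging idea is language-agnostic; here is a sketch against the v1.1 REST endpoint directly in Python (the bearer token is a placeholder for application-only auth). Each page asks only for tweets strictly older than the oldest one already received, so no result is fetched twice:

    import requests

    SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"
    HEADERS = {"Authorization": "Bearer YOUR_APP_ONLY_TOKEN"}  # placeholder

    def fetch_all(query, since_date):
        """Page backwards with max_id so no tweet is fetched twice."""
        max_id = None
        while True:
            params = {"q": "%s since:%s" % (query, since_date), "count": 100}
            if max_id is not None:
                params["max_id"] = max_id
            resp = requests.get(SEARCH_URL, headers=HEADERS, params=params)
            tweets = resp.json().get("statuses", [])
            if not tweets:
                return
            yield from tweets
            # Next page: everything strictly older than the oldest tweet seen.
            max_id = min(t["id"] for t in tweets) - 1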

How does Twitter Search API rate limit work?

I am not clear about what the Twitter rate limit, "350 requests per hour per access token/user", means. How are they limiting the requests? How much data can I get in one request?
The rate limits are based on request, not the amount of data (e.g. bytes) you receive. With that in mind, you can maximize requests by using the available parameters of the particular endpoint you're calling. I'll give you a couple examples to explain what I mean:
One way is to set count, if supported, to the highest available value. On statuses/home_timeline, you can max out count at 200. If you aren't using it now, you're getting the default of 20, which means that you would (theoretically) need to do 10 queries to get the same amount of data. More queries eat up your rate limit.
Using statuses/home_timeline again, notice that you can page through data using since_id and max_id, as described in Working with Timelines. Essentially, you keep track of the tweets you already requested so you can save on requests by only getting the newest tweets.
Rate limits are applied in 15-minute windows, so you can pace your requests to minimize the chance of running out in any given window.
Use a combination of Streams and requests, which increases your rate limit.
There are more optimizations like this that help you conserve rate limit, some more subtle than others. Looking at the rate limits per endpoint, studying the parameters, and thinking about how your app uses the API can help you minimize rate-limit usage. A short sketch combining the count and since_id ideas follows.
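A minimal sketch in Python, assuming the v1.1 home_timeline endpoint; the auth header is a placeholder (a real call needs user-context OAuth, not shown here):

    import requests

    TIMELINE_URL = "https://api.twitter.com/1.1/statuses/home_timeline.json"
    HEADERS = {"Authorization": "OAuth ..."}  # placeholder; needs user-context OAuth

    def fetch_new(since_id=None):
        """Fetch only tweets newer than ones we've seen, 200 at a time."""
        params = {"count": 200}  # maxed out: 1 request instead of 10 defaults
        if since_id is not None:
            params["since_id"] = since_id  # skip tweets we already have
        tweets = requests.get(TIMELINE_URL, headers=HEADERS, params=params).json()
        newest_id = max((t["id"] for t in tweets), default=since_id)
        return tweets, newest_id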
