After reading the YQL documentation on usage limits ( http://developer.yahoo.com/yql/guide/usage_info_limits.html ) and a similar topic, I still wonder about a certain issue:
Considering that YQL allows 1000 calls per IP per hour, is an HTTP 304 (Not Modified) response still considered a hit?
Meaning: are 304 responses counted as part of the 1000 calls per IP per hour?
thanks
EDIT:
I accepted spier's answer since I got no better answer, and it's been long enough since the question was asked :)
The following seems to indicate that cached 304 responses do not count against your limit.
http://www.yqlblog.net/blog/2010/03/12/avoiding-rate-limits-and-getting-banned-in-yql-and-pipes-caching-is-your-friend/
I don't know whether 304 responses count toward your hourly IP limit.
On the other hand, if your underlying issue is that the 1,000 calls are not enough:
What speaks against registering an application so that you get an access key, and then using OAuth to authenticate? That would give you 10k calls per hour and 100k calls per day, which is plenty for most use cases.
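If the limit does turn out to be a concern, encouraging cheap 304s with conditional requests plus a local cache can help. A minimal sketch, assuming a generic fetch-like function (the `fetchFn` wrapper and header handling here are illustrative, not a specific YQL client):

```javascript
// Minimal sketch of a conditional-GET cache (illustrative helper, not YQL-specific).
// It remembers the Last-Modified value per URL and replays it as If-Modified-Since,
// so the server can answer with a cheap 304 instead of a full response body.
function createConditionalCache(fetchFn) {
  const cache = new Map(); // url -> { lastModified, body }

  return async function get(url) {
    const entry = cache.get(url);
    const headers = {};
    if (entry && entry.lastModified) {
      headers['If-Modified-Since'] = entry.lastModified;
    }
    const res = await fetchFn(url, { headers });
    if (res.status === 304 && entry) {
      return entry.body; // not modified: serve the cached copy
    }
    cache.set(url, { lastModified: res.headers['last-modified'], body: res.body });
    return res.body;
  };
}
```

Whether a 304 counts against the quota is exactly the open question above, but either way the cache saves bandwidth and parsing on your side.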
Related
I am calling the Microsoft Graph REST API from Node.js (JavaScript). GET operations, even for a single empty cell, come back with a status code 429 "TooManyRequests - The server is busy. Please try again later." error. Another SO question [ Microsoft Graph throttling Excel updates ] has answers that point to MS documentation about making smaller requests. Unfortunately, the suggestions are rather vague.
My question is: does the size of the file in OneDrive have an impact on throttling? The file I am attempting to update is over 4 MB in size. However, the updates (PATCH) that I have attempted are only 251 bytes (12 cells), and I continue to get the error; even a GET for a single cell receives it. This started after 72 hours of inactivity. I am using a business account, and unfortunately MS support will not help, as they will only speak to admins.
Assuming this is an unrelated issue: since I have about 3500 rows (of about 12 columns) to update, what is the best "chunk size" to update them in? Is 50 OK? Is 100 OK? Thank you!
NOTE: This same throttling happens in the Graph Explorer, not just via code. Also, there is no Retry-After field returned in the Response Headers.
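For what it's worth, a common pattern for this kind of update is to pick a chunk size, retry on 429, and honor Retry-After when it is present. Here is a sketch under those assumptions: `patchFn` is a placeholder for whatever actually performs the Graph range update, and the 50-100 row starting point is an experiment, not an official number.

```javascript
// Split the rows into fixed-size chunks (hypothetical helper, not a Graph API call).
function chunkRows(rows, size) {
  const chunks = [];
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size));
  }
  return chunks;
}

// PATCH each chunk, retrying on 429 with Retry-After or exponential backoff.
async function patchAllRows(rows, patchFn, { size = 100, maxRetries = 5 } = {}) {
  for (const chunk of chunkRows(rows, size)) {
    let attempt = 0;
    for (;;) {
      const res = await patchFn(chunk);
      if (res.status !== 429) break;
      if (++attempt > maxRetries) throw new Error('Giving up after repeated 429s');
      // Honor Retry-After when present, else fall back to exponential backoff.
      const waitMs = res.retryAfterSeconds
        ? res.retryAfterSeconds * 1000
        : 2 ** attempt * 1000;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
}
```

With 3500 rows, a chunk size of 100 means 35 requests; if those still trip the throttle, halving the size and adding a small delay between chunks is the usual next step.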
So I'm currently using NodeXL to search for a particular Twitter hashtag, and I'm having trouble understanding how exactly the rate limiting works. I looked it up on Twitter's API Rate Limits page, and also in this SO post, but even after reading both I don't really understand. The API page says:
Search will be limited at 180 queries per 15 minute window for the time being.
and also
Rate limiting in version 1.1 of the API is primarily considered on a per-user basis — or more accurately described, per access token in your control. If a method allows for 15 requests per rate limit window, then it allows you to make 15 requests per window per leveraged access token.
But I'm totally confused... probably because I've never really worked with databases or social network analysis before.
When it says that it allows 180 queries per 15 minutes, what exactly constitutes a query? The way search works in NodeXL is that you limit the number of tweets you are searching for. So if I search once and set my tweet limit to 1000 tweets, is that only 1 query?
Sorry if this seems like a stupid or really elementary question, but I just don't have any experience with this stuff at all, and any help would be much appreciated, thanks!
When it says that it allows 180 queries per 15 minutes, what exactly
constitutes a query?
Whenever you make one request to Twitter, it's counted as one query. For the Search API, you can make 180 calls per 15 minutes.
So if I search once and set my tweet limit to 1000 tweets, is that
only 1 query?
Yes, but you can't set count to 1000, since the maximum number of tweets you can return per request is 100, as mentioned here.
You can retrieve the latest 100 tweets with a normal search query; for pagination, you should use max_id to retrieve the next (older) 100 tweets, and since_id to pick up fresh tweets.
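To make the arithmetic concrete, here is a toy model of max_id paging (mock data, no real API calls): fetching 1000 tweets at 100 per page costs 10 of your 180 queries, not 1.

```javascript
// Toy model of one Search API page: tweets come back newest-first, at most
// `count` of them, and max_id bounds the newest id that may be returned.
function searchPage(allTweets, { count = 100, maxId = Infinity } = {}) {
  return allTweets.filter((t) => t.id <= maxId).slice(0, count);
}

// Page backwards with max_id until we have `limit` tweets; each page is one query.
function collectTweets(allTweets, limit) {
  const collected = [];
  let maxId = Infinity;
  let queries = 0;
  while (collected.length < limit) {
    const page = searchPage(allTweets, { count: 100, maxId });
    queries += 1;
    if (page.length === 0) break; // nothing older left
    collected.push(...page);
    maxId = page[page.length - 1].id - 1; // step below the oldest id seen
  }
  return { tweets: collected.slice(0, limit), queries };
}
```

So a NodeXL "limit" of 1000 tweets translates into roughly 10 queries against the 180-per-window budget.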
The number of queries you can make per 15-minute window varies by API. For example, you can make 180 requests per 15-minute window with the Search API, but an API like GET friends/ids is limited to 15 queries per 15-minute window, i.e. you can call it only 15 times per 15 minutes.
Here's the rate limits chart, where you can find how many requests you can make per 15-minute window for each API.
I'm writing some software to do charting and analysis of intraday stock data, and so far the only free (or even affordable) feed I've found which gives 15 minute data for the past week or so is Google Finance. But something I've noticed, which I don't understand and has caused many headaches, is that the responses from the API for 15 minute intervals seem to be very inconsistent.
So far I haven't seen this problem with the 30-minute interval; in that case the response is always correct. But if I specify an interval of 15 minutes (900 seconds), I get anywhere from 70 to 200 or more quotes back. The data itself is correct, but the responses seem to pretty much ignore the number of days I'm specifying. This also happens for individual stocks, so it isn't a case of some stocks having missing data. Here's an example of an API request I'm sending:
https://www.google.com/finance/getprices?i=900&p=8d&f=d,o,h,l,c&q=INTC
If anyone could help I'd appreciate it, this API doesn't seem to be documented so it's been difficult to find any help with it.
Yes, Google is not consistent in providing stock data. For the same reason I switched over to the Yahoo API; their data is pretty consistent compared to Google's.
So I'm aware that twitter has a rate limit of 150 requests per hour.
But for some reason I keep getting the error from Twitter that I have reached my rate limit, which seems impossible considering the number of times I accessed it.
I started monitoring the changes in hits left per hour, and realized that the count decreases to 0 within half an hour or so. At that point I thought the problem was my website not having a dedicated IP, so I requested this change from my hosting company.
However, even after my website moved to a dedicated IP, the remaining hits per hour still decrease at the same rate without me using it. I honestly have no idea why this is happening.
And an interesting thing:
I tried using the javascript code supplied by:
http://code.google.com/p/twitterjs/
and found that even after the limit reached 0, it still seems to be able to load tweets.
Anyone know why this is happening?
Test page I was working on:
http://ice3studio.com/twitterTesting/
- The 1st section in the white box is JS with PHP caching (which cannot grab the Twitter feed after the limit is reached)
- The 2nd section is the JS code from Google Code
I am very new at this so I appreciate any help!
Thanks in advance :D
If the GET request is authenticated, then the rate limit applies to the user; otherwise it applies to the IP.
Only GET requests have a rate limit. POST requests have no rate limit.
Twitter JS can load tweets because it's running on the client's end, and every client has a different IP. If you use this library with the same account, it'll be rate-limited, since you are sending authenticated requests.
You can always whitelist your IP and account. It'll increase your rate limit greatly.
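As a sketch of the server-side caching idea your first section uses (here `fetchFn` stands in for whatever actually grabs the Twitter feed): with a short TTL, many page views share one upstream call, so the hourly limit is spent once per TTL period instead of once per visitor.

```javascript
// TTL cache sketch: re-fetch the feed at most once per `ttlMs`,
// and serve the cached copy to everyone else in between.
// `clock` is injectable for testing; it defaults to Date.now.
function createTtlCache(fetchFn, ttlMs, clock = Date.now) {
  let cached = null;
  let fetchedAt = -Infinity;
  return function get() {
    const now = clock();
    if (cached === null || now - fetchedAt >= ttlMs) {
      cached = fetchFn(); // one upstream call per TTL window
      fetchedAt = now;
    }
    return cached;
  };
}
```

With a 60-second TTL, the server makes at most 60 Twitter calls per hour regardless of traffic, which stays under a 150-per-hour cap.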
I don't really understand the expression "multiple outstanding requests".
Could you please give me an example?
What is an outstanding request? What is the opposite of an "outstanding request"?
An outstanding request is one which has not been served yet.
For instance, an application could make 30 concurrent requests to different web servers. 10 of them may come back with a response, while the other 20 have not yet been serviced. Those 20 are outstanding, since they are still waiting for a response.
The opposite would be one that has been served or completed, I guess.
You can use the analogy of a restaurant: if you ask a waiter for water and a spoon, and he only brings you the water, then your request for a spoon is outstanding. It isn't complete until he gives you your spoon or tells you that he doesn't have one to give you.
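The idea can be modeled as a simple counter: a request becomes outstanding when it is issued and stops being outstanding when it completes. A minimal sketch:

```javascript
// Toy tracker for outstanding requests: outstanding = issued - completed.
function createTracker() {
  let issued = 0;
  let completed = 0;
  return {
    start() { issued += 1; },     // a request goes out
    finish() { completed += 1; }, // a response (or failure) comes back
    outstanding() { return issued - completed; },
  };
}
```

Running the 30-requests example through it: after 30 starts and 10 finishes, 20 requests are outstanding.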