I'm reading DMs via the Twitter GET (REST) API, and every time I check for them I consume rate limit, even if no new messages are retrieved.
I've seen that it is possible to read DMs through the Streaming API, and before implementing it (it will take some time), I would like to know whether this consumes no rate limit, or at least uses a separate one. Also, are new DMs delivered quickly, or is there some kind of delay because it is streaming?
So, in short:
Does the Streaming API not consume my GET API rate limit?
Does the Streaming API deliver DMs "almost" instantly?
Thank you
Streaming API limits and REST API limits are completely separate entities. Consuming direct messages via a streaming API doesn't consume any REST API rate limits. And the DMs would arrive within a user stream almost instantaneously, provided that the access token used in the user stream connection had the appropriate permission level associated with it.
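For concreteness, here is a minimal sketch of what consuming DMs from the old user stream might look like with a legacy (3.x) Tweepy release; the on_data handler, the "direct_message" payload key, and all credential placeholders are assumptions about that era's API, not code taken from the answer above.

```python
import json
import tweepy  # sketch assumes a legacy Tweepy 3.x release that still exposes userstream()

# Placeholder credentials -- the access token must have the DM permission level.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")


class DMListener(tweepy.StreamListener):
    def on_data(self, raw_data):
        # User streams wrapped incoming DMs in a top-level "direct_message" object.
        payload = json.loads(raw_data)
        if "direct_message" in payload:
            dm = payload["direct_message"]
            print(dm["sender_screen_name"], ":", dm["text"])
        return True  # keep the connection open

    def on_error(self, status_code):
        print("Stream error:", status_code)
        return False  # disconnect on error


stream = tweepy.Stream(auth=auth, listener=DMListener())
stream.userstream()  # blocks; DMs arrive as they are sent, consuming no REST rate limit
```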
When trying to upload a YouTube video on our website, an error pops up about exceeding our quota. I went to the Google Cloud console to see what our quota was and how much we were using. I found our quotas, but there is no data showing how much we are using.
Is there a way to troubleshoot and figure out how much we are using, to see if we are exceeding our quota?
Go to the Google Cloud console for your project. Under Library, search for the YouTube Data API, then click Manage and Quotas to see how much quota you have.
You can check the metrics page to see an estimate of the quota you have used.
Is there a way to troubleshoot and figure out how much we are using, to see if we are exceeding our quota?
Once you know what your current quota is (it's probably only 10,000 unless you have requested an extension), you need to check Queries per day, which for YouTube actually means quota points per day.
Then you can check the YouTube Data API (v3) - Quota Calculator
This page will show you the quota cost for each request.
Videos.insert, for example, costs 1600 quota points, which will very quickly eat through the default development quota.
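As a rough illustration of how these costs add up, here is a small back-of-the-envelope calculator; the per-method costs reflect the quota calculator's published values, while the planned call counts are invented purely for the example.

```python
# Rough daily-quota estimator for the YouTube Data API.
# Costs per call come from the quota calculator; the planned call counts are made up.
QUOTA_COSTS = {
    "search.list": 100,
    "videos.list": 1,
    "videos.insert": 1600,
}

planned_calls = {
    "search.list": 300,    # e.g. 300 searches per day
    "videos.list": 500,
    "videos.insert": 5,
}

total = sum(QUOTA_COSTS[method] * count for method, count in planned_calls.items())
print(f"Estimated daily usage: {total} units")  # 300*100 + 500*1 + 5*1600 = 38500
```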
If you are running out of quota, the simplest thing to do is to request additional quota: YouTube Data API - Quota and Compliance Audits.
If your quota named "Queries per day" is 0
From issue tracker #208842985
Hi. If you're seeing Queries per day quota set to 0 and the API is indeed enabled, then this means that your project’s access to YouTube Data API Service has been disabled.
You should’ve received a notice via email regarding this action, which also contains the steps that need to be taken to regain the project’s access. But just in case you missed it, please fill out and submit the exceptions form below:
YouTube API Services - Audit and Quota Extension Form
For the last few weeks, the YouTube Data API (v3) has been showing a significant increase in requests. In fact, we hit the 50k quota limit every day, which means that requests in the afternoon/evening tend to fail with a quota exceeded error.
However, this usage count is not correct. We use a single API key for making requests to the YouTube API, and the Google Cloud API counter only shows ~2k uses per day for that API key.
All requests to our server endpoint that calls the YouTube API also pass through Cloudflare, which similarly shows <2k requests per day.
We just make a single request to www.googleapis.com/youtube/v3/search?part=snippet&maxResults=1&... – is it possible that these requests count as multiple queries? Or is there any other reason that could explain the incorrect query counts? Thanks!
This is a recurring issue with the YouTube Data API quota system, caused by the confusing terms used within Google's Cloud console.
Bear in mind that YouTube's Data API quota system does not count the number of queries you make. Instead, the API attaches a quota cost to each of its endpoints and counts the sum of the quota costs of all the endpoint calls you make.
Moreover, if you inspect the quota costs page mentioned above (or the official doc page of the endpoint, for that matter), you'll see that the Search.list endpoint is quite expensive: 100 units of quota per call.
Consequently, to reach a quota cost of 50,000 units, your app only has to issue 500 calls to Search.list, which averages out to roughly one call every three minutes over a full day.
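As a quick sanity check on that arithmetic, the sketch below works out how many Search.list calls fit into the daily budget and how far apart they have to be spaced; the 50,000-unit quota and 100-unit cost come from the discussion above, the rest is illustrative.

```python
# How many Search.list calls fit into a 50,000-unit daily quota,
# and how far apart they must be spread to last a whole day.
DAILY_QUOTA = 50_000          # units per day (from the question above)
SEARCH_LIST_COST = 100        # units per call (from the quota calculator)

max_calls_per_day = DAILY_QUOTA // SEARCH_LIST_COST      # 500 calls
min_interval_s = 24 * 60 * 60 / max_calls_per_day        # ~172.8 seconds

print(f"{max_calls_per_day} Search.list calls per day, "
      f"i.e. one call every ~{min_interval_s / 60:.1f} minutes")
```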
Currently we are on the Google My Business API v3, with 5 QPS rate limiting. I see the new release of the v4 API. In the changelog I see some minor upgrades to the API, but I don't see anything about rate limits.
Are they the same as the previous rate limits?
There are changes to the rate limits. Previously, every API call fell into the same 5 QPS bucket; now the rate limiting is split into separate buckets for different types of API calls.
You can find more details about the newer rate limits at the following link:
https://developers.google.com/my-business/content/limits
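Regardless of the exact server-side numbers, a client-side limiter kept per bucket can stop one type of call from starving the others. A minimal sketch is below; the bucket names and the 5-requests-per-second figure are placeholders, not documented limits.

```python
import time
from collections import defaultdict, deque


class PerBucketThrottle:
    """Client-side throttle: allow at most `max_calls` per `window` seconds per bucket."""

    def __init__(self, max_calls=5, window=1.0):
        self.max_calls = max_calls
        self.window = window
        self.history = defaultdict(deque)  # bucket name -> timestamps of recent calls

    def wait(self, bucket):
        calls = self.history[bucket]
        now = time.monotonic()
        # Drop timestamps that have fallen out of the window.
        while calls and now - calls[0] >= self.window:
            calls.popleft()
        if len(calls) >= self.max_calls:
            time.sleep(self.window - (now - calls[0]))
        calls.append(time.monotonic())


throttle = PerBucketThrottle(max_calls=5, window=1.0)
# throttle.wait("locations")  # call before each "locations" request
# throttle.wait("reviews")    # reviews get their own, independent budget
```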
I'm considering using the Twitter Streaming API (public streams) to keep track of the latest tweets for many users (up to 100k). Despite having read various sources regarding the different rate limits, I still have couple of questions:
According to the documentation: The default access level allows up to 400 track keywords, 5,000 follow userids. What are the best practices to follow more than 5k users? Creating, for example, 20 applications to get 20 different access tokens?
If I follow just a single user, does the rule of thumb "You get about 1% of all tweets" indeed apply? And how does this change as I add more users, up to 5k?
Might using the REST API be a reasonable alternative somehow, e.g., by polling the latest tweets of users on a minute-by-minute basis?
What are the best practices to follow more than 5k users? Creating, for example, 20 applications to get 20 different access tokens?
You don't want to use multiple applications. This response from a mod sums up the situation well. The Twitter Streaming API documentation also specifically calls out devs who attempt to do this:
Each account may create only one standing connection to the public endpoints, and connecting to a public stream more than once with the same account credentials will cause the oldest connection to be disconnected.
Clients which make excessive connection attempts (both successful and unsuccessful) run the risk of having their IP automatically banned.
A rate limit is a rate limit--you can't get more than Twitter allows.
If I follow just a single user, does the rule of thumb "You get about 1% of all tweets" indeed apply? And how does this change as I add more users, up to 5k?
The 1% rule still applies, but it is very unlikely, essentially impossible, for one user to be responsible for 1% of all tweet volume in a given time interval. More users means more tweets, but unless all 5k are very high-volume tweeters, you shouldn't have a problem.
Might using the REST API be a reasonable alternative somehow, e.g., by polling the latest tweets of users on a minute-by-minute basis?
Interesting idea, but probably not. You're also rate limited in the REST and Search APIs. For GET statuses/user_timeline, the rate limit is 180 queries per 15 minutes, and you can only get the tweets of one user per call with this endpoint. The regular GET search/tweets doesn't accept a user id as a parameter, so you can't take advantage of it either (it is also limited to 180 queries per 15 minutes).
The Twitter Streaming and REST API overviews are excellent and merit a thorough reading. Tweepy unfortunately has spotty documentation and Twython isn't much better, but they both wrap the Twitter APIs directly, so the overviews will give you a good understanding of how everything works. Good luck!
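For reference, wiring up a filtered public stream with a list of follow IDs looks roughly like this with a legacy (3.x) Tweepy release; the credentials and user IDs are placeholders, and the 5,000-ID cap noted in the comment is the documented limit discussed above.

```python
import tweepy  # sketch assumes a legacy Tweepy 3.x release (Stream/StreamListener API)

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")


class FollowListener(tweepy.StreamListener):
    def on_status(self, status):
        print(status.user.screen_name, ":", status.text)

    def on_error(self, status_code):
        # 420 means the connection is being rate limited -- back off instead of reconnecting at once.
        if status_code == 420:
            return False
        return True


stream = tweepy.Stream(auth=auth, listener=FollowListener())
# follow takes string user IDs; the public filter stream caps this list at 5,000.
stream.filter(follow=["12345", "67890"])
```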
To get past the 400 keywords and 5,000 follow user IDs, you need to apply for enterprise access.
Basic
400 keywords, 5,000 userids and 25 location boxes
One filter rule on one allowed connection, disconnection required to adjust rule
Enterprise
Up to 250,000 filters per stream, up to 2,048 characters each.
Thousands of rules on a single connection, no disconnection needed to add/remove rules using Rules API
https://developer.twitter.com/en/enterprise
Is there a maximum limit on the Valence API? I've made a number of calls, but I put some self-throttling in the program: it makes a call to the user page, loops through the data, and then makes another call, averaging about one call per second.
I'm looking at expanding some functionality, and I'm worried that we may hit a limit if we aren't careful about how we go about everything.
So, is there a limit to how often we can call the Valence API?
The back-end LMS can be configured to rate limit Valence Learning Framework API calls; however, this is not active by default. To be sure, you should consult with the administrators of your back-end LMS.
Update: Brightspace no longer supports the kind of rate limiting mentioned above. As Brightspace evolved, D2L found that the rate limiting was not providing the value that was originally intended, and as a result D2L deprecated the feature. D2L no longer rate limits the Brightspace APIs and instead depends on developer self-governance and on asynchronous APIs for the more resource-intensive operations (the APIs around importing courses, for example). When you use the Brightspace APIs, you should be mindful that you are using the same computing resources as those made available to end users interacting with the web UI, and if you over-stress these resources (as can easily be done through any API), you can have a negative impact on those end users.
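Given that the answer points to developer self-governance, one minimal way to pace calls from the client side is sketched below; the route in the usage comment and the one-call-per-second target are placeholders taken from the question, not documented limits.

```python
import time

import requests  # assumes the Valence calls are plain authenticated HTTP requests

MIN_INTERVAL = 1.0   # seconds between calls, matching the ~1 call/second pacing in the question
_last_call = 0.0


def paced_get(url, **kwargs):
    """Issue a GET, sleeping first so calls are at least MIN_INTERVAL apart."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    return requests.get(url, **kwargs)

# Hypothetical usage -- the route below is illustrative, not a real Valence path:
# response = paced_get("https://your-lms.example.com/d2l/api/lp/1.0/users/", headers=auth_headers)
```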