How do the quotaUser limitations work on the YouTube Data API V3?

I'm building an alternative client for browsing YouTube subscriptions (folder-based subscriptions, each with a corresponding generated feed), and I'm making a lot of requests to YouTube to aggregate that data.
I'm caching many responses, since anything fetched on a previous day doesn't need to be refreshed.
The fact is, current-day refreshes consume a lot, and I reach my quota pretty fast even though those requests are read-only.
I submitted the YouTube quota increase request form, but I'm still quite worried.
Am I missing something with the userIp & quotaUser parameters?
Shouldn't those requests, which are pretty much the same as a normal user would make on the regular YouTube client, be counted under "Queries per 100 seconds per user"?
My main quota, "Queries per day", currently seems to absorb ALL the requests coming from my app, even though I added the quotaUser parameter to every request made by a user on the frontend.
I think I am missing something, as my app should not be considered "data consuming": it sends almost nothing to YouTube in terms of data, and it just reads data that is also available on the regular YouTube client, only not in the same format.
Thanks for your help.

Related

YouTube search by keyword: internal working

I need clarification on this:
The docs say "By default, a search result set identifies matching video, channel, and playlist resources". How does this matching take place? Do they also search comments? Any idea on this?
Thanks!
The YouTube API operates under the same rate limits as the other Google APIs.
There are project-based limits and user-based limits.
You can see the limits in the Google developer console.
My project can make a maximum of 1,800,000 requests per minute.
It also has a quota cost limit of 10,000, which is not really what it sounds like.
Then each user can make a maximum of 180,000 requests per minute.
This is not related to the amount of data a user has on their account. It's strictly related to the number of requests, or the cost of those requests, that your application or a user can make over a period of time.
You can request additional daily quota over the default 10k if you want. Just submit the form over on Google Cloud Console.
I am not aware of increased allowances on the YouTube APIs for big YouTube channels.
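For what it's worth, quotaUser is just a query parameter you append to each call so Google can attribute the request to a distinct end user for the per-user limit. A minimal PHP sketch, assuming a placeholder API key and user id (the search endpoint is just an example):

    <?php
    // Attach quotaUser so the per-user rate limit applies to this caller.
    // YOUR_API_KEY and 'user-1234' are placeholders.
    $params = http_build_query([
        'part'      => 'snippet',
        'q'         => 'news',
        'key'       => 'YOUR_API_KEY',
        'quotaUser' => 'user-1234', // any string uniquely identifying the end user
    ]);
    $json = file_get_contents('https://www.googleapis.com/youtube/v3/search?' . $params);
    $data = json_decode($json, true);

Note that quotaUser only disambiguates callers for the per-user rate limit; every request still counts against the project's daily quota, which is consistent with the behaviour described in the question above.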

How does Parse calculate the total number of requests?

I sent around 30 query requests to test my app. After each query, Parse sent a push notification. But in my dashboard I only see "13 requests" in total.
Considering the query requests and the push requests, shouldn't it have been more?
Paul,
Virtually anything you do in Parse is counted against you as an API request. I think the only thing that isn't counted is cached data, which makes sense, since an API request was already used to obtain it. So if you're using kPFCachePolicyCacheElseNetwork, you won't spend an additional request, provided you have something cached (a minimal sketch of this idea follows below). This includes the new local datastore; some things you do with the local datastore count against you as well.
You can review their FAQ for a thorough breakdown of their allowances; see the section titled 'What is Considered An API Request'. It literally leaves no room for assumption or misinterpretation:
Anytime you make a network call to Parse on behalf of your app using one of the Parse SDKs or REST API, it counts as an API request. This does include things like queries, saves, logins, amongst other kinds of requests. It also includes requests to send push notifications, although this is seen as a single request regardless of how many recipients are targeted. Serving Parse files counts as an API request, including static assets served from Parse Hosting. Analytics requests do have a special exemption. You can send us your analytics events any time without being limited by your app's request limit.
It's in your best interest to assume everything you do will be counted against you. This will lead you to smarter infrastructure if your app is to be scalable.
But below that reference, the FAQ goes on to explain that Parse requests are actually calculated on a per-minute basis. I have an app that can burst to more than 30 requests per second, but it won't max out because it never reaches the 1,800-requests-per-minute limit.
REFERENCE: https://parse.com/plans/faq
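To make the "cached data is free" point concrete, here is a minimal PHP sketch of the same cache-else-network idea against Parse's REST API. APP_ID and REST_KEY are placeholders, GameScore is the example class from the Parse docs, and APCu stands in for whatever cache you use; only the cache-miss branch spends an API request:

    <?php
    // Cache-else-network, sketched outside the iOS SDK.
    function queryGameScores(): array {
        $key = 'GameScore:all';
        $cached = apcu_fetch($key, $hit);
        if ($hit) {
            return $cached; // served from cache: no API request is spent
        }
        $ctx = stream_context_create(['http' => ['header' =>
            "X-Parse-Application-Id: APP_ID\r\n" .
            "X-Parse-REST-API-Key: REST_KEY\r\n"]]);
        // This network call is the one that counts as an API request.
        $json = file_get_contents('https://api.parse.com/1/classes/GameScore', false, $ctx);
        $data = json_decode($json, true);
        apcu_store($key, $data, 60); // keep it for a minute
        return $data;
    }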

YouTube API V3 and ETag

I use the YouTube API v3 and I would like to understand how the ETag works. I would like to use it for caching purposes, but I do not know what to do in PHP.
Could you tell me the steps to follow once the ETag has been retrieved, please? Thanks for the help.
According to the YouTube docs (https://developers.google.com/youtube/v3/getting-started#etags), an ETag is basically used to determine whether a resource has changed. Use them for:
Optimization - Caching YouTube resources in your app can reduce bandwidth and latency. When caching, store the ETag so that you can include it when getting a resource. If the resource has not changed, you will get a 304 response code (NOT MODIFIED), which means you can use your cached version. Otherwise, you will get the updated version of the resource.
Quota usage - You can reduce how much you tap into your quota by caching YouTube data. The first time you get a resource, it will count against your quota. Before displaying the resource again, check whether your cached copy has changed, which costs only 1 quota unit. If the resource has not changed, YouTube returns a 304 response. If it has changed, you can fetch the resource again, costing a varying number of quota units depending on what you are getting. For more on quota: https://developers.google.com/youtube/v3/getting-started#quota.
Overwrite protection - If you are overwriting a resource, including the ETag ensures that you are not overwriting a newer version of the resource.
eTags are part of the HTTP 1.1 spec (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.19) and are used in the headers of the request/response. Here's a good article that talks about them at a low level: http://www.ibuildings.com/blog/2013/07/etags-uninitiated
As far as using ETags in PHP, I can only suggest a couple of things, since I've never done it. YouTube returns ETags for feeds AND for individual items within a feed, and I'm not sure how to use them for individual items. But to get the feed itself, you would essentially use cURL and add the ETag to the header of your request (PHP cURL custom headers). You might also want to check out http_cache_etag (http://www.php.net/manual/en/function.http-cache-etag.php).
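To make that concrete, here is a minimal PHP cURL sketch of a conditional request. The API key is a placeholder, and loadEtagFromCache()/loadBodyFromCache()/saveToCache() are hypothetical helpers for whatever storage you use:

    <?php
    // Conditional GET against videos.list using a stored ETag.
    $url = 'https://www.googleapis.com/youtube/v3/videos'
         . '?part=snippet&id=dQw4w9WgXcQ&key=YOUR_API_KEY';
    $cachedEtag = loadEtagFromCache($url); // hypothetical helper, null on first run

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if ($cachedEtag !== null) {
        // Send the stored ETag; YouTube answers 304 if nothing changed.
        curl_setopt($ch, CURLOPT_HTTPHEADER, ['If-None-Match: ' . $cachedEtag]);
    }
    $body   = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($status === 304) {
        $body = loadBodyFromCache($url);         // not modified: reuse the cached copy
    } elseif ($status === 200) {
        $data = json_decode($body, true);
        saveToCache($url, $body, $data['etag']); // store body + fresh ETag
    }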
I was looking for similar information, but I couldn't find a clear example on the YouTube website. On the other hand, it seems Facebook uses a similar approach (ETags to check whether a resource has changed), and these two links I found in the Facebook developers area might help:
https://developers.facebook.com/docs/reference/ads-api/etags-reference/ and https://developers.facebook.com/blog/post/627/
The first one explains in a simpler and more detailed way how ETags are used and provides some request/response examples.
The second link provides a PHP example of how to retrieve a resource, extract the ETag, and use it in a subsequent request.
Of course these links contain information related to the Facebook site, but for the most part it applies to YouTube as well.
I am not sure if anyone is still interested, but I have posted an answer on how to use ETags with the YouTube API here. The idea works not only for the YouTube API. The post is quite long, but I hope it can help.

Understanding POST statuses/filter Rate Limit

I need to do keyword-based data fetching on Twitter. I looked up the documentation, and "POST statuses/filter" seemed like the best option. However, I do not understand how the rate limiting works. Does this mean that I can fire this request repeatedly? If yes, at what rate should I do so? Or do I have to fire the request only once and keep receiving data continuously? They have given clear explanations for the REST API; there's even a page showing the number of requests permissible in a 15-minute window for each REST API method. I was unable to find something similar for "POST statuses/filter".
From what I've researched about the Streaming API, there aren't any rate limits: you make the request once to open the connection, then you keep it open and you are sent a stream (hence the name) of tweets.
Once applications establish a connection to a streaming endpoint, they are delivered a feed of Tweets, without needing to worry about polling or REST API rate limits.
https://dev.twitter.com/docs/streaming-apis/streams/public
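As an illustration of the "one long-lived request" idea, a PHP cURL sketch might look like the following. This is not a complete client: OAuth 1.0a signing is required in practice and is omitted here, and the keywords are placeholders.

    <?php
    // One long-lived POST to statuses/filter; tweets arrive on the open connection.
    // A real client must add OAuth 1.0a headers or Twitter will reject the request.
    $buffer = '';
    $ch = curl_init('https://stream.twitter.com/1.1/statuses/filter.json');
    curl_setopt_array($ch, [
        CURLOPT_POST       => true,
        CURLOPT_POSTFIELDS => http_build_query(['track' => 'keyword1,keyword2']),
        // Called repeatedly as data arrives; the connection stays open.
        CURLOPT_WRITEFUNCTION => function ($ch, $chunk) use (&$buffer) {
            $buffer .= $chunk;
            // The stream is newline-delimited JSON; a chunk may hold a partial tweet,
            // so buffer until a full line is available.
            while (($pos = strpos($buffer, "\n")) !== false) {
                $tweet  = json_decode(substr($buffer, 0, $pos), true);
                $buffer = substr($buffer, $pos + 1);
                if (isset($tweet['text'])) {
                    echo $tweet['text'] . PHP_EOL;
                }
            }
            return strlen($chunk); // tell curl the chunk was consumed
        },
    ]);
    curl_exec($ch); // blocks for as long as the stream stays open
    curl_close($ch);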

Caching (optimizing) Strategy with API live stream on Rails

So I built a website that uses the Twitch.tv API, a gaming live-stream site. The requests are long and slow, and I would like to cache them somehow. The problem is that there are a lot of dynamic attributes, such as whether a stream is still online or how many viewers it has. Since traffic to my website is low at the moment, expiring the cache early isn't going to help much. I also have a page that lists all the live streams and requests each one to see if it is online, so even if no one is online it still takes a while to load. Is there any way to retrieve the API faster without caching?
Here is the Twitch.tv API doc.
Since you don't own the Twitch.tv API, unfortunately I would say there is really nothing you can do to make their calls faster.
The good news is that you can cache the calls you make to them, which will make things appear faster to your users.
The way to cache the calls is to create a key and then cache the JSON returned from the API. For the key, I would just use the URL you are calling. Then give the cached value an expiration time of a few minutes; when it expires, you make another API call to re-populate the cache.
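A minimal PHP sketch of that pattern, assuming APCu as the cache and an illustrative Twitch endpoint (check the Twitch API doc for the real URLs):

    <?php
    // URL-keyed cache with a short TTL: the slow Twitch call happens only on a miss.
    function fetchJsonCached(string $url, int $ttl = 300): array {
        $data = apcu_fetch($url, $hit); // the request URL itself is the cache key
        if ($hit) {
            return $data;
        }
        $json = file_get_contents($url);   // the slow upstream call
        $data = json_decode($json, true);
        apcu_store($url, $data, $ttl);     // cache expires after $ttl seconds
        return $data;
    }

    $streams = fetchJsonCached('https://api.twitch.tv/kraken/streams?game=StarCraft');

In Rails itself, Rails.cache.fetch(url, expires_in: 5.minutes) { ... } expresses the same idea.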
Also, I'd look at Varnish (https://www.varnish-cache.org/), which does HTTP caching really well. It could work well for you, and it has the concept of a grace period that tries to hide the expensive calls made when the cache expires.
