I am new to the Twitter API, and I looked at their whitelisting policies and I am a little confused... I'm basically writing a Twitter aggregator that crawls the public tweets of a set of users (no more than 200) hourly. I wanted to apply for whitelisting, and they seem to offer account-based and IP-based whitelisting. Since I am on shared hosting, my outbound IP address might vary (and Twitter doesn't allow IP ranges for whitelisting), so I am considering account-based whitelisting.
However, when using OAuth, is it possible for me to use account-based whitelisting for a background process that crawls the API hourly?
You won't be able to whitelist each of the 200 accounts you want to crawl. However, assuming each of those 200 users has authorized your app via OAuth, you can use their access tokens to crawl their timelines. Those requests then count against each user's rate limit rather than your service's, though the obvious downside is that you are eating into your users' rate limits.
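To illustrate, here is a rough sketch in Python (using the requests-oauthlib package) of crawling a timeline with a user's own stored access token; the credential values and the shape of the user record are placeholders, not anything Twitter prescribes:

    # Sketch: crawl one user's timeline with *their* stored OAuth access
    # token, so the request counts against their rate limit, not yours.
    # Assumes you persisted each user's token/secret when they authorized.
    import requests
    from requests_oauthlib import OAuth1

    CONSUMER_KEY = "your-app-consumer-key"        # placeholder
    CONSUMER_SECRET = "your-app-consumer-secret"  # placeholder

    def fetch_timeline(user):
        # user is assumed to be a dict like {"token": ..., "token_secret": ...}
        auth = OAuth1(
            CONSUMER_KEY,
            CONSUMER_SECRET,
            user["token"],         # this user's access token
            user["token_secret"],  # this user's access token secret
        )
        resp = requests.get(
            "https://api.twitter.com/1.1/statuses/user_timeline.json",
            auth=auth,
            params={"count": 200},
        )
        resp.raise_for_status()
        return resp.json()

Your hourly background job would loop over the stored tokens and call fetch_timeline for each user.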
I know this isn't exactly a how-to question, but LinkedIn Support directed me to StackOverflow when I asked them this question, and I cannot find the answer anywhere after googling/searching the forums:
Per the LinkedIn APIs Terms of Use (https://developer.linkedin.com/documents/linkedin-apis-terms-use), section E.1, second bullet point:
Don’t try to exceed or circumvent your limitations on calls and use. This includes creating multiple Applications for identical, or largely similar, usage (e.g., having one Application per customer). If we believe that you have exceeded or circumvented our limitations, or if you have tried to, we may temporarily suspend or permanently block your access to the APIs, disable your developer account, or both.
It sounds like I'm not allowed to create multiple instances of an application. However, the nature of my software is that each of my clients gets a subdomain and runs an instance of my app on a server particular to that client. Each client thus needs their own OAuth redirect_uri, and the only solution that I can think of is to create an application for each of my clients (which are organizations and not individual users).
Does this practice violate the TOS, and if so, what's a viable alternative?
If this practice is allowed, what is the maximum number of applications (and API keys) I can create?
Thanks in advance.
Register a single client/app but add multiple redirect URIs for that one registration, one per customer/domain. The LinkedIn documentation allows this: add multiple URLs in the OAuth 2.0 Redirect URLs text area, separated by commas (a sketch of the per-customer flow follows the quote):
OAuth 2.0 Redirect URLs: Comma separated list of absolute URLs allowed for OAuth 2.0 redirections.
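As a minimal sketch of how each customer's subdomain might build its own authorization URL against the single registered app; the client ID, domain pattern, and callback path are placeholder assumptions, while the endpoint and parameter names follow LinkedIn's OAuth 2.0 documentation:

    # Sketch: one registered app, per-customer redirect URIs. Every URI
    # generated here must also appear in the app's OAuth 2.0 Redirect URLs.
    from urllib.parse import urlencode

    CLIENT_ID = "your-client-id"  # placeholder
    AUTH_ENDPOINT = "https://www.linkedin.com/oauth/v2/authorization"

    def authorization_url(customer_subdomain, state):
        params = {
            "response_type": "code",
            "client_id": CLIENT_ID,
            # Per-customer callback, e.g. https://acme.example.com/oauth/callback
            "redirect_uri": f"https://{customer_subdomain}.example.com/oauth/callback",
            "state": state,  # CSRF token; verify it again on the callback
        }
        return f"{AUTH_ENDPOINT}?{urlencode(params)}"

Each client instance sends its users to the URL built with its own subdomain, and the token exchange on the callback uses the same single set of API keys.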
Recently, YouTube decided to make video tags unavailable publicly. So to get the tags for a given video, I need to make an authenticated request to the API as the owner of the video. This is not a problem in my case as I'm fetching my own videos.
However, I'm confused about the authentication flow, since YouTube strongly recommends using OAuth2. Since I'm always going to authenticate as the same user (the owner of the videos, aka myself), I definitely don't need a browser page for the actual user of the app to do anything. I can see how I would have done it using ClientLogin (hardcoding the login and password into the app), but I'm not sure how to approach this using OAuth2.
One last detail, which is not necessarily relevant since a high-level answer would be enough: I'm developing on iOS. I also looked at https://developers.google.com/accounts/docs/OAuth2, particularly the web server case, which seems closest to mine, but I was not able to get a clear idea from it.
Thanks in advance for your help, and don't hesitate to ask if you need me to be more specific.
There is no OAuth flow that supports your use case.
In general, you should not be distributing your YouTube login as part of your application. Even if this were available via ClientLogin, after a certain number of logins, you would likely be presented with a challenge because the authentication servers would detect a strange usage pattern.
OAuth is not for distributing a single user's login to a large N, where N is the number of users of your application. OAuth is meant for your application to act on behalf of an end user, and because tags are no longer exposed to end users through the UI, it does not make sense to expose them to users via the API either. More details can be found here:
http://apiblog.youtube.com/2012/08/video-tags-just-for-uploaders.html
How many videos do you have? What is the purpose for needing the tag metadata? From a pragmatic perspective, here are a few alternative implementations that would be easier and would not require users to log in as you:
Store a single file mapping video IDs to tags on a server somewhere and fetch this periodically (see the sketch after this list). Google App Engine is a good place to do this.
Put the tag data in the description in a predictable format (you host the videos), and generate the metadata from this.
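As a rough illustration of the first option (in Python for brevity, since you only asked for a high-level answer), the app would periodically fetch a JSON mapping you maintain yourself; the URL and file format here are assumptions:

    # Sketch: the app fetches a JSON file you host (e.g. on App Engine)
    # that maps video IDs to tags, so no end user ever authenticates as you.
    import requests

    TAGS_URL = "https://your-app.appspot.com/video_tags.json"  # placeholder

    def load_video_tags():
        # Assumed format: {"<video-id>": ["tag1", "tag2"], ...}
        resp = requests.get(TAGS_URL, timeout=10)
        resp.raise_for_status()
        return resp.json()

    tags_by_video = load_video_tags()

You update the file whenever you upload or retag a video, and the app never needs your credentials at all.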
I'm using https://github.com/jnunemaker/twitter to tweet to a user's Twitter account when they post on their blog running on Ruby on Rails, e.g.
Tweet : "I just posted a blog - 'I love ruby on rails' http://link-to-blog.com"
My question is: as I'm making many sites for different people, do I have to create a new Twitter developer application, with individual consumer keys and secrets, for each blog, or is there a way to use the same Twitter application?
Thanks,
Alex
Technically, you can use the same application on a variety of websites; just use the keys/tokens Twitter gives you in all your sites.
Nonetheless, this is bad practice, since Twitter will not correctly attribute your API calls when they come from pages other than the one you specified in the Callback URL. Furthermore, your users will be returned to that (and only that) callback page after authorizing, which can be very misleading for those who started on a different site.
And finally, the two most important reasons:
You'll hit the request limit sooner than if you had several applications.
You'll hit the user limit sooner than if you had several applications.
The limits Twitter imposes are not very generous, so I can tell you from experience that the Twitter functionality will stop working if you get a good peak of visits (this happened to me twice), or if your site receives a lot of visits at a certain time. Whether or not you cache API responses, you'll end up hitting the limit.
Here is the Twitter documentation about this (a sketch of the caching pattern follows the quote):
Caching. We recommend that you cache API responses in your application or on your site if you expect high-volume usage. For example, don't try to call the Twitter API on every page load of your hugely popular website. Instead, call our API once a minute and save the response to your local server, displaying your cached version on your site. Refer to the Terms of Service for specific information about caching limitations.
Rate limiting by active user. If your site keeps track of many Twitter users (for example, fetching their current status or statistics about their Twitter usage), please consider only requesting data for users who have recently signed in to your site.
Scale your use of the API with the number of users you have. When using OAuth to authenticate requests with the API, the rate limit applied is specific to that user_token. This means, every user who authorizes your application to act on their behalf, has their own bucket of API requests for you to use.
Request only what you need, and only when you need it. For example, polling the REST API looking for new data is inefficient for both your application, and the Twitter API. Instead consider using one of the Streaming APIs as a signal of when to make REST API requests.
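To make the caching point concrete, here is a minimal sketch of the pattern in Python; fetch_from_twitter is a stand-in for whatever API call your site actually makes:

    # Sketch: hit the Twitter API at most once per minute and serve the
    # cached response to every page load in between.
    import time

    CACHE_TTL = 60  # seconds between real API calls
    _cache = {"data": None, "fetched_at": 0.0}

    def get_tweets(fetch_from_twitter):
        now = time.time()
        if _cache["data"] is None or now - _cache["fetched_at"] > CACHE_TTL:
            _cache["data"] = fetch_from_twitter()  # one real API call
            _cache["fetched_at"] = now
        return _cache["data"]

In a real deployment you would put this in memcached/Redis rather than process memory, but the principle is the same: page loads never map one-to-one to API calls.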
If you have any questions, don't hesitate to comment below. I had terrible experiences with this when my site got mentioned by a few important Twitter accounts.
I wonder if anyone can help me; I'm getting a little confused as to which API to use. If anyone can offer some guidance I would really appreciate it.
I'm trying to create a website where users can monitor Twitter for certain hashtags. The site will continually search Twitter for any new updates and store any tweet related to a particular hashtag. This process will run for up to 60 days.
As far as I can gather, my two options are:
Using the Search API
The problem with this API is that if I have 1000 users all monitoring different hashtags, I am quickly going to reach my API limit, since I will be making a fair few requests, potentially once every 2-3 minutes. Is there a way to use OAuth in conjunction with the Search API so that the limits are user-based and not application-based? That way, the limit would be user-specific and I wouldn't have to worry.
Using the Stream API
I thought this might be a better solution, but it seems you are limited in how many connections you can have open. The documentation is unclear as to how this works: is the connection limit per Twitter account or per server IP? For example, if my site had 1000 users and each of those users was monitoring a hashtag, would those 1000 Streaming API connections count against my server's IP or against each user?
You will want to use the Streaming API. You will open a single connection that tracks the terms for all of your users; when users add new terms to track, you restart the stream with the updated term list. The single stream should be authenticated as a bot Twitter account you create, not as your users' accounts.
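As a rough sketch of that approach, here is what the single stream could look like in Python with the tweepy library (3.x API); the credentials and the save_tweet helper are placeholders:

    # Sketch: one streaming connection, authenticated as a bot account,
    # tracking every user's hashtags at once.
    import tweepy

    CONSUMER_KEY = "..."       # your app's credentials (placeholders)
    CONSUMER_SECRET = "..."
    BOT_TOKEN = "..."          # access token for the bot account
    BOT_TOKEN_SECRET = "..."

    def save_tweet(status):
        pass  # persist the tweet for whichever users track its hashtag

    class HashtagListener(tweepy.StreamListener):
        def on_status(self, status):
            save_tweet(status)

        def on_error(self, status_code):
            if status_code == 420:   # rate limited: disconnect and back off
                return False

    def start_stream(hashtags):
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
        auth.set_access_token(BOT_TOKEN, BOT_TOKEN_SECRET)
        stream = tweepy.Stream(auth, HashtagListener())
        # All users' hashtags go through this one connection. To add a
        # term later, call stream.disconnect() and start a new stream
        # with the updated list.
        stream.filter(track=hashtags)

The filter call blocks, so run it in a background worker; restarting it with a new track list is how you pick up newly added hashtags.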
What is the best practice if I have a 20K-user Twitter base and I want to track users' specific keywords via statuses/filter?
Should I distribute the processing across multiple nodes, let's say opening one streaming connection tracking keywords for 5K users each (on different IPs, or on the same IP with different authenticating users)?
Or should I just apply for a bigger access level and use a single connection for the whole thing?
Thanks,
Alam Sher
Go with the bigger access level. Using multiple accounts to circumvent access limits is frowned upon by Twitter, and it complicates your processing.