Twitter - Constant search for term

I wonder if anyone can help me; I'm getting a little confused as to which API to use. If anyone can offer some guidance I would really appreciate it.
I'm trying to create a website where users can monitor Twitter for certain hashtags. The site will continually search Twitter for new updates and store any tweet related to a particular hashtag. This process will run for up to 60 days.
As far as I can gather, my two options are:
Using the Search API
The problem with this API is that if I have 1,000 users all monitoring different hashtags, I am quickly going to reach my API limit, since I will be making a fair few requests, potentially one every 2-3 minutes. Is there a way to use OAuth in conjunction with the Search API so that the limits are user based and not application based? That way, the limit would be user specific and I wouldn't have to worry.
Using the Stream API
I thought this might be a better solution, but it seems you are limited in how many connections you can have open. The documentation seems unclear as to how this works: is the connection limit per Twitter account or per server IP? For example, if my site had 1,000 users and each of those users was monitoring a hashtag, would those 1,000 Streaming API connections count against my server's IP or against each user?

You will want to use the Streaming API. You will open a single connection that tracks the terms for all of the users. When users add new terms to track, you will restart the stream with the new terms. The single stream will be authenticated as a bot Twitter account you create, not as your users' accounts.
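For concreteness, here is a minimal Python sketch of that single-connection approach, assuming the statuses/filter endpoint of that era's Streaming API and the requests/requests-oauthlib libraries; load_tracked_terms and store_tweet are hypothetical placeholders for your own term list and persistence layer, and the credentials belong to the bot account.

```python
import json

import requests
from requests_oauthlib import OAuth1  # pip install requests requests-oauthlib

# statuses/filter endpoint from the Twitter Streaming API v1.1 of the time.
STREAM_URL = "https://stream.twitter.com/1.1/statuses/filter.json"

# Placeholder credentials for the single bot account that owns the stream.
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")


def load_tracked_terms():
    """Hypothetical helper: collect every hashtag your users asked to monitor."""
    return ["#ruby", "#rails", "#python"]


def store_tweet(tweet):
    """Hypothetical helper: persist the tweet for whichever users track its hashtag."""
    print(tweet.get("id_str"), tweet.get("text"))


def run_stream():
    terms = load_tracked_terms()
    # One long-lived connection carries the combined track list for all users.
    with requests.post(
        STREAM_URL,
        auth=auth,
        data={"track": ",".join(terms)},
        stream=True,
    ) as response:
        response.raise_for_status()
        for line in response.iter_lines():
            if not line:
                continue  # keep-alive newline
            store_tweet(json.loads(line))
            # Sketch-level check: if the term list changed, drop the
            # connection so the outer loop reconnects with the new terms.
            if load_tracked_terms() != terms:
                return


if __name__ == "__main__":
    while True:  # reconnect whenever the stream is restarted with new terms
        run_stream()
```

The outer loop is the "restart the stream with the new terms" step described above: the connection is torn down and reopened with the updated combined track list.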

Related

Make authenticated requests to YouTube from iOS always with same user

Recently, YouTube decided to make video tags unavailable publicly. So to get the tags for a given video, I need to make an authenticated request to the API as the owner of the video. This is not a problem in my case as I'm fetching my own videos.
However, I'm confused about the authentication flow, since YouTube strongly recommends using OAuth2. Since I'm always going to authenticate as the same user (the owner of the video, aka myself), I definitely don't need to have any browser page for the actual user of the app to do anything. I see how I could have done it using ClientLogin (hardcoding login and password into the app), but I'm not sure how to approach this using OAuth2.
One last detail, which is not necessarily relevant since a high-level answer would be enough: I'm developing on iOS. I also looked at https://developers.google.com/accounts/docs/OAuth2, particularly the web server case, which seems closest to mine, but was not able to get a clear idea from it.
Thanks in advance for your help and don't hesitate if you need me to be more specific.
There is no OAuth flow that supports your use case.
In general, you should not be distributing your YouTube login as part of your application. Even if this were available via ClientLogin, after a certain number of logins, you would likely be presented with a challenge because the authentication servers would detect a strange usage pattern.
OAuth is not for distributing a single user's login to a large N, where N is the number of users of your application. OAuth is meant for your application to act on behalf of an end user, and because tags are no longer exposed to end users through the UI, it does not make sense to expose them to users via the API either. More details can be found here:
http://apiblog.youtube.com/2012/08/video-tags-just-for-uploaders.html
How many videos do you have, and why do you need the tag metadata? From a pragmatic perspective, here are a few alternative implementations that would be easier and would not require users to log in as you:
Store a single file mapping video IDs to tags on a server somewhere and fetch this periodically. Google App Engine is a good place to do this.
Put the tag data in the description in a predictable format (you host the videos), and generate the metadata from this, as in the sketch below.
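As a rough illustration of that second option, here is a small Python sketch that assumes a made-up convention of ending each description with a `tags:` line; the convention and the helper name are hypothetical, not anything YouTube defines.

```python
import re

# Hypothetical convention: the uploader ends each description with a line like
#   tags: travel, iceland, timelapse
TAG_LINE = re.compile(r"^tags:\s*(.+)$", re.IGNORECASE | re.MULTILINE)


def tags_from_description(description):
    """Extract the tag list from a video description that follows the convention."""
    match = TAG_LINE.search(description)
    if not match:
        return []
    return [tag.strip() for tag in match.group(1).split(",") if tag.strip()]


print(tags_from_description("My trip to Iceland.\ntags: travel, iceland, timelapse"))
# -> ['travel', 'iceland', 'timelapse']
```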

Ruby Twitter Applications

I'm using https://github.com/jnunemaker/twitter to tweet to a user's Twitter account when they post on their blog running on Ruby on Rails, e.g.
Tweet: "I just posted a blog - 'I love ruby on rails' http://link-to-blog.com"
My question is, as I'm making many sites for different people, do I have to create a new Twitter developer application, with individual consumer keys and secrets, for each blog, or is there a way to use the same Twitter application?
Thanks,
Alex
You technically can use the same application across a variety of websites. Just use the keys/tokens Twitter gives you in all of your sites.
Nonetheless, this is bad practice, since Twitter will not account for your API accesses coming from pages other than the one you specify as the callback URL. Furthermore, your users will be returned to that (and only that) page you specified as the callback URL, which can be very misleading for those who started on a different site.
Finally, the two most important reasons:
You'll hit the request limit sooner than if you had several applications
You'll hit the user limit sooner than if you had several applications
Twitter's limits are not very generous, so I can tell you the Twitter functionality will stop working if you get a good peak of visits (this happened to me twice), or it may stop working if your site receives a lot of visits at a certain time. Whether you cache your API responses or not, you'll end up hitting the limit.
Here is the twitter documentation about this:
Caching. We recommend that you cache API responses in your application or on your site if you expect high-volume usage. For example, don't try to call the Twitter API on every page load of your hugely popular website. Instead, call our API once a minute and save the response to your local server, displaying your cached version on your site. Refer to the Terms of Service for specific information about caching limitations.
Rate limiting by active user. If your site keeps track of many Twitter users (for example, fetching their current status or statistics about their Twitter usage), please consider only requesting data for users who have recently signed in to your site.
Scale your use of the API with the number of users you have. When using OAuth to authenticate requests with the API, the rate limit applied is specific to that user_token. This means, every user who authorizes your application to act on their behalf, has their own bucket of API requests for you to use.
Request only what you need, and only when you need it. For example, polling the REST API looking for new data is inefficient for both your application, and the Twitter API. Instead consider using one of the Streaming APIs as a signal of when to make REST API requests.
If you have any questions, don't hesitate to comment below. I had terrible experiences with this when my site got mentioned by a few important Twitter accounts.
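As a rough illustration of the "call our API once a minute and cache" advice quoted above, here is a Python sketch (the question is about Rails, but the same pattern applies in Ruby), assuming the statuses/user_timeline endpoint of API v1.1 and placeholder credentials; the in-memory cache is the simplest possible stand-in for whatever store your site uses.

```python
import time

import requests
from requests_oauthlib import OAuth1  # pip install requests requests-oauthlib

# Placeholder credentials for the single shared application.
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

CACHE_TTL = 60  # seconds: hit the API at most once a minute, as Twitter suggests
_cache = {}     # screen_name -> (fetched_at, payload)


def cached_user_timeline(screen_name):
    """Return recent tweets for an account, refreshing from the API at most once per minute."""
    now = time.time()
    fetched_at, payload = _cache.get(screen_name, (0.0, None))
    if payload is None or now - fetched_at > CACHE_TTL:
        response = requests.get(
            "https://api.twitter.com/1.1/statuses/user_timeline.json",
            auth=auth,
            params={"screen_name": screen_name, "count": 20},
        )
        response.raise_for_status()
        payload = response.json()
        _cache[screen_name] = (now, payload)
    return payload
```

Every page load reads from the cache; only the first request after the TTL expires actually touches the Twitter API.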

Steps to use twitter api - console application

I am trying to develop a very basic console application that will retrieve a user's home timeline (Twitter updates from people followed by the user) and save it as JSON. I've read a lot on the internet, but am still unsure whether I need to 'register' such an application, and if so, how I could do that for a console app.
I'd like a step-by-step rundown of how I should proceed with the development. It's just a tad complex for a noob like me in this field. I'm aware that off-the-shelf libraries for doing this job are aplenty, but I lack a general understanding of how I should approach this.
Much appreciated,
Abhi
The answer really depends on a few things.
If your application is not going to try to access information about protected users (users can opt to be protected so their information and tweets are kept private) your application will not need to be authorized by any user and will not need to be registered or deal with OAuth. Without using OAuth, you will be limited to making 150 requests per hour, per IP address.
If your application needs to make more than 150 requests an hour, or needs to access protected user information, then you will need to register your application and make requests on behalf of a user. This user could be your twitter account. This will give you up to 300 requests per hour per authorized user.
I can't give you much detail as to how best to write a console application with TweetSharp, but I am familiar with Twitterizer (I wrote it).
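For the registered-application route, here is a minimal Python sketch (rather than TweetSharp or Twitterizer) that fetches the authorized account's home timeline and saves the raw JSON to disk, assuming API v1.1's home_timeline endpoint and placeholder OAuth credentials obtained when you register the app and authorize it against your own account.

```python
import json

import requests
from requests_oauthlib import OAuth1  # pip install requests requests-oauthlib

# Placeholder credentials from a registered application, authorized for your own account.
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")


def save_home_timeline(path="home_timeline.json", count=50):
    """Fetch the authorized user's home timeline and save the raw JSON to disk."""
    response = requests.get(
        "https://api.twitter.com/1.1/statuses/home_timeline.json",
        auth=auth,
        params={"count": count},
    )
    response.raise_for_status()
    with open(path, "w") as fh:
        json.dump(response.json(), fh, indent=2)


if __name__ == "__main__":
    save_home_timeline()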

Twitter API and rate limiting - I am confused

I am new to the Twitter API, and I looked at their whitelisting policies and I am a little confused. I'm basically writing a Twitter aggregator that crawls the public tweets of a set of users (not more than 200) hourly. I wanted to apply for whitelisting, and they seem to offer account-based and IP-based whitelisting. Since I am on shared hosting, my outbound IP address might vary (and Twitter doesn't allow IP ranges for whitelisting), so I am considering account-based whitelisting.
However, while using OAuth, is it possible for me to use account-based whitelisting for a background process that crawls the API hourly?
You won't be able to whitelist each of the 200 accounts you want to crawl. However, assuming each of those 200 users has authorized your app via OAuth, you can use their access tokens to crawl their timelines. This counts against their rate limits rather than your service's, which is also the obvious downside.
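A rough Python sketch of that per-user-token approach, assuming API v1.1's user_timeline endpoint; the user_tokens store and its contents are hypothetical placeholders for however you persist each user's OAuth credentials after they authorize your app.

```python
import requests
from requests_oauthlib import OAuth1  # pip install requests requests-oauthlib

CONSUMER_KEY = "CONSUMER_KEY"        # your registered application's credentials
CONSUMER_SECRET = "CONSUMER_SECRET"

# Hypothetical store of the access tokens each of the ~200 users granted via OAuth.
user_tokens = {
    "alice": ("ALICE_TOKEN", "ALICE_SECRET"),
    "bob": ("BOB_TOKEN", "BOB_SECRET"),
}


def fetch_timeline(screen_name):
    """Fetch one user's timeline with *their* token, so it counts against their limit."""
    token, secret = user_tokens[screen_name]
    auth = OAuth1(CONSUMER_KEY, CONSUMER_SECRET, token, secret)
    response = requests.get(
        "https://api.twitter.com/1.1/statuses/user_timeline.json",
        auth=auth,
        params={"screen_name": screen_name, "count": 200},
    )
    response.raise_for_status()
    return response.json()


for name in user_tokens:
    tweets = fetch_timeline(name)
    print(name, len(tweets))
```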

Twitter Streaming API Best Practices(Multiple or Single connection)

What is the best practice if I have a 20K Twitter user base and I want to track users' specific keywords via statuses/filter?
Should I distribute the processing over multiple nodes, let's say opening a streaming connection tracking keywords for 5K users each (on different IPs, or the same IP with different authenticating users)?
Or just apply for a bigger access level and use a single connection to get the whole thing.
Thanks,
Alam Sher
Apply for the bigger access level. Using multiple accounts to circumvent access limits:
is frowned upon by Twitter
complicates your processing
