twitter stream api with rails - ruby-on-rails

I am working on a real-time data visualization project that consumes the Twitter Streaming API. Tweets are processed on the server side, which is built on the Rails framework.
With the twitter Ruby gem, I am able to fetch the streamed tweets:
topics = ["coffee", "tea"]
client.filter(:track => topics.join(",")) do |tweet|
  puts tweet.text
end
With this, I need to build a JSON API in Rails.
UPDATE: The JSON API needs to be integrated with AngularJS. For building the API in real time, do I need to store the tweets in a database, or is that not needed?

I suggest you consider Sinatra to build the API, but you can certainly do it in Rails. When a client makes a REST call to the endpoint defined in routes.rb, the controller action itself makes a REST call to Twitter, then transforms and serializes the result to JSON to return to your client.
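For instance, a minimal controller sketch (assuming a TweetsController wired up in routes.rb and a configured Twitter::REST::Client, which is separate from the streaming client above; the topics and selected fields are illustrative) might look like this:

class TweetsController < ApplicationController
  def index
    # REST client configured from environment variables
    client = Twitter::REST::Client.new do |config|
      config.consumer_key        = ENV["TWITTER_CONSUMER_KEY"]
      config.consumer_secret     = ENV["TWITTER_CONSUMER_SECRET"]
      config.access_token        = ENV["TWITTER_ACCESS_TOKEN"]
      config.access_token_secret = ENV["TWITTER_ACCESS_TOKEN_SECRET"]
    end

    # Fetch recent tweets for the tracked topics and serialize them to JSON
    tweets = client.search("coffee OR tea", result_type: "recent").take(20)
    render json: tweets.map { |t| { id: t.id, text: t.text } }
  end
end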
Just remember that your clients need to send the CSRF token with their requests to your services for Rails to let them through and maintain the session.
In JQuery it might look something like this:
$.ajaxSetup({
  beforeSend: function(xhr) {
    xhr.setRequestHeader('X-CSRF-Token', $('meta[name="csrf-token"]').attr('content'));
  }
});
As for memory concerns with the volume of data, that depends on how much data you are retrieving, what you are doing with it, the power of your machine, etc. I wouldn't worry if you aren't hitting the Firehose. Let's worry about a memory issue later if it happens. There are always things you can do like caching results, etc. without using a database.

Related

In a Rails API, is there a standard way to "paginate" or "chunk" large API requests responses?

Scenario:
I have a Rails API with a /tasks.json endpoint that serves all the Tasks.
For large Accounts this can sometimes be close to 1,000,000 records.
Sending 1,000,000 records over the wire in JSON generally doesn't work due to a network timeout, etc.
I can easily "paginate" the number of tasks being sent so the endpoint uses something like /tasks.json?limit=10000&page=1 but how does the client know to send this?
Is there a "standard" way to handle large requests like this and break them up naturally into chunks?
Does each client need to handle this on their end manually?
Thanks!
You should use the kaminari gem. It handles the pagination of your records, and it works both in Rails API apps and in standard Rails apps.
https://github.com/kaminari/kaminari
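As a rough sketch of how that can look in a controller (assuming Kaminari is installed; the per_page default and the meta key names are just illustrative), return paging metadata alongside the records so the client knows how to request the next chunk:

class TasksController < ApplicationController
  def index
    # Kaminari adds .page and .per to ActiveRecord scopes
    tasks = Task.page(params[:page]).per(params[:per_page] || 10_000)

    render json: {
      tasks: tasks,
      meta: {
        current_page: tasks.current_page,
        total_pages:  tasks.total_pages,
        total_count:  tasks.total_count
      }
    }
  end
end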

Rails Google analytics from the controller

Is there a way, or a gem, that can send information to Google Analytics without the page load occurring?
I have a URL shortener, that redirects to the original URL (obviously), but I would like to track who clicked it.
Is there a way to send Google Analytics the request/headers or whatever it needs, from the controller, just prior to the redirect, without them having to actually load a page?
You can use the Measurement Protocol to do that. There may be a gem that wraps the functionality, but it's pretty basic as is. Essentially, you're just sending HTTP hits to the GA servers with your data in the query parameters.
Here are the docs:
https://developers.google.com/analytics/devguides/collection/protocol/
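A rough sketch of a pageview hit sent from the controller before the redirect could look like the following; the tracking id, the helper name, and the parameter choices are placeholders rather than anything from the question:

require "net/http"
require "securerandom"

def track_redirect(original_url, client_ip, user_agent)
  params = {
    "v"   => "1",                    # Measurement Protocol version
    "tid" => "UA-XXXXXXX-1",         # your GA tracking id
    "cid" => SecureRandom.uuid,      # anonymous client id
    "t"   => "pageview",             # hit type
    "dl"  => original_url,           # document location
    "uip" => client_ip,              # IP override
    "ua"  => user_agent              # user agent override
  }
  Net::HTTP.post_form(URI("https://www.google-analytics.com/collect"), params)
end

You would call this from the redirect action just before redirect_to, or push it onto a background job so the redirect isn't delayed by the outbound request.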

Caching calls to an external API in a rails app

The Rails app (Rails 4) calls an external API using HTTParty. The API is read-only. Caching is required, as the data does not change often (every 24h) and the API allows only a limited number of calls per hour.
I guess I need some kind of hash based cache where I will use "params/sent/to/the/api" as key.
Rails' caching tools seem to be only for pages, fragments, or SQL.
What should I do to cache calls to an external API?
It'll be something like this. Basically, the Rails.cache.fetch call will wrap your API call. It won't hit the API unless the cache has expired.
class Results
  def get(url, params)
    # Cache the response keyed on the url and params so repeat requests
    # are served from the cache for an hour instead of hitting the API.
    Rails.cache.fetch([url, params], expires_in: 1.hour) do
      HTTParty.get(url, query: params)
    end
  end
end
Be sure you have a cache set in your environment. Memcache works great for this sort of thing.
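For example, a minimal setting, assuming the dalli gem is in your Gemfile and memcached is running locally (the host and port are just the defaults):

# In config/environments/production.rb (or whichever environment should cache)
config.cache_store = :mem_cache_store, "localhost:11211"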

Rails app with no database and continually updated models

I'm wondering what the best way to go about developing a rails application with the following features:
All of the data comes from a SOAP request to a 3rd party
A background task will make this SOAP request every ~10s
The background task will parse the response and then update an ActiveRecord model accordingly
The data isn't written to a database at all; if the app fails, the data will come from the SOAP request again when we start it back up
Users will make a request to the app which will simply show data in the model (i.e. from the soap request).
The idea is to avoid making the SOAP request for every single user as the data won't change that frequently. Not using a database avoids reading and writing of data that only ever comes from the request anyway.
I imagine that all of this can be accomplished quite simply with a few gems, but I've had a bit of trouble sorting through what meets my requirements and what doesn't.
Thanks
I'm not sure what benefit you're getting from using ActiveRecord in this case.
Perhaps consider some other type of persistence for the SOAP calls?
If the results from the web service are really not changing, I would recommend the Rails caching mechanism. Anywhere in your Rails app, you can do:
Rails.cache.fetch "a_unique_cache_key" do
  # ... do your SOAP request and return the result
end
This will do the work within the block just once and fetch its result from the Rails cache store in the future.
The cache store can be of various types (one of which is the memcache store). I usually go with the file store for medium-traffic sites, but you may choose another:
http://guides.rubyonrails.org/caching_with_rails.html
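Since the question mentions refreshing roughly every 10 seconds, one variant is to give the cache entry a short expiry so the first request after it lapses re-fetches the data. This sketch assumes Savon as the SOAP client; the WSDL URL and operation name are placeholders:

require "savon"

def latest_data
  # Refreshed at most once every 10 seconds; other requests read the cached copy
  Rails.cache.fetch("third_party_soap_data", expires_in: 10.seconds) do
    client = Savon.client(wsdl: "https://example.com/service?wsdl")
    client.call(:get_data).body
  end
end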

avoiding iframes, but having some iframe like activity in Rails

I have two sites, my main site and a help site with documentation. The main site is Rails, but the other is simply a WordPress-like blog. Currently I have it being pulled into the main site with an iframe, but is there a good way to pull in the HTML from the other site as some sort of cross-domain (sub-domain, actually) partial? Or should I just stick with what works?
If the data sources were all on the same domain, you would be able to utilize straight AJAX to fetch your supplemental content and graft it onto your page. But, since the data is from different domains, the same origin security policy built into web browsers will prevent that from working.
A popular workaround is called JSONP, which will let you fetch the data from any cooperating server. Your implementation might look something like this (using jQuery):
$.getJSON(
  "http://my.website.com/pageX?callback=?",
  function(data) {
    $("#help").append(data);
  }
);
The only hitch is that the data returned by your server must be wrapped as a JavaScript function call. For example, if your data was:
<h1>Topic Foo</h1>
Then your server must respond to the JSONP request like this:
callbackFunction("<h1>Topic Foo</h1>")
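If the cooperating endpoint happened to be a Rails action, one way to produce that wrapping is the :callback option to render json: (a minimal sketch; the controller name and content are placeholders):

class HelpController < ApplicationController
  def show
    content = "<h1>Topic Foo</h1>"   # the help HTML to hand back
    # When jQuery appends callback=?, params[:callback] holds the generated
    # function name and Rails wraps the JSON payload in that call.
    render json: content.to_json, callback: params[:callback]
  end
end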
