Aggregating feeds in a Rails application

I am thinking of writing a daemon to loop through feeds and then add them into the database as ActiveRecord objects.
Firstly, one problem I am facing is that I cannot reliably retrieve the author/user of a story using the feed-normalizer gem. It appears that sometimes it does not recognize the tag (I don't know if anyone else has faced this problem).
Secondly, I haven't seen anyone convert RSS feeds back into database entries. I need to do this as each entry will have associations with other ActiveRecord objects. I can't find any gems to do this specifically, but could I somehow hack something like acts_as_feed to do that?

Don't use SimpleRSS. It won't decode HTML entities for you, and it occasionally ignores the structure of the feed.
I've found it easiest to parse the feed as XML with XMLSimple, but you can use any XML parser.
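For example, here is a minimal sketch with the xml-simple gem (the feed URL and the RSS 2.0 structure are assumptions; Atom and RDF nest things differently):
require 'open-uri'
require 'xmlsimple'

# Fetch the raw feed and parse it as a plain XML document
xml  = open('http://example.com/feed.rss').read
feed = XmlSimple.xml_in(xml, 'ForceArray' => false)

# For a typical RSS 2.0 feed the entries live under channel/item
Array(feed['channel']['item']).each do |item|
  puts item['title']
  puts item['pubDate']
end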

SimpleRSS exposes a very simple API and works pretty well on most feeds. I recommend not looking at the implementation as its "parser" is a bunch of regexes (which is so wrong on so many levels), but it works well.
Daemons is a good gem for running it in the background.
If you are using ActiveRecord, you should follow the instructions for using AR outside of Rails and then define the model classes inline. This will cut down on bloat a bit.
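Something along these lines (a rough sketch assuming a SQLite database and an entries table that already exist):
require 'active_record'

# Connect without loading a full Rails app
ActiveRecord::Base.establish_connection(
  :adapter  => 'sqlite3',
  :database => 'feeds.sqlite3'
)

# Define only the models the daemon actually needs
class Entry < ActiveRecord::Base
end

Entry.create(:title => 'Example entry', :url => 'http://example.com/post')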
RSS feeds are pretty inconsistent; this is the fallback chain we use:
date = i[:pubDate] || i[:published] || i[:updated]
body = i[:description] || i[:content] || i[:summary] || ""
url = i[:guid] || i[:link]
Also, from experience, make sure you try to rescue everything (and remember that timeouts are not caught by normal rescue). It sucks to have to constantly bounce RSS daemons that get bad data.
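Something like this sketch of a fetch helper (the method name is made up; the explicit Timeout::Error rescue matters because, on older Rubies at least, a bare rescue only catches StandardError and misses timeouts):
require 'open-uri'
require 'timeout'

def fetch_feed(url)
  Timeout.timeout(30) do
    open(url).read
  end
rescue Timeout::Error => e
  warn "timed out fetching #{url}: #{e.message}"
  nil
rescue StandardError => e
  warn "failed to fetch #{url}: #{e.message}"
  nil
end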

The best approach is to use a Rails Engine connected to a Feed API like Superfeedr's.
Polling RSS feeds implies that you'll need to run your own asynchronous workers and/or a queue system, which can be fairly complex to build and maintain over time. You'll also have to handle hundreds of formats and inconsistencies. Here's a blog post that shows how to consume RSS feeds in a Rails application.

Related

How to scrape different URLs from database with Nokogiri with different requirements

I tried using Feedjira to assist with content analysis from newsfeeds, but it appears that RSS feeds now only link to content rather than including it in the feed, as I found out in "Feedjira not adding content and author". I plan to use Feedjira to get the URL for the article, but then use Nokogiri to scrape the article and pick out the relevant parts.
The problem is that each media outlet will have a different format for its pages, and I need to know the best way for Nokogiri to take the URL from the database (supplied by Feedjira) and, depending on the associated feed title (also in the database from the Feedjira sync), scrape the page in a specific way and save it to a separate table in the database. Does anyone have any suggestions?
I don't know your specific use case, but I'm also doing content analysis using news feeds.
Maybe have a look at Readability, which provides a generic content scraper.
The problem you've encountered is that every feed generator does it a bit differently, just as with HTML generators. You can assume certain fields are going to be in place in an RDF, RSS or ATOM feed, however the author of the feed could use optional tags that you could find very useful, so you have to write code to look for them.
I wrote several feed aggregators in the past, including one that was handling well over 1000 feeds daily. By sniffing out the feed type (Atom vs. RSS vs. RDF), I could make sensible checks for the fields that were interesting for that format and extract the data if it was available.
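The sniffing itself can be as simple as looking at the root element (a sketch with Nokogiri; the returned symbols are just illustrative):
require 'nokogiri'

def feed_type(xml)
  root = Nokogiri::XML(xml).root
  case root.name
  when 'feed' then :atom   # Atom feeds use a <feed> root
  when 'rss'  then :rss    # RSS 0.9x/2.0 use <rss>
  when 'RDF'  then :rdf    # RSS 1.0/RDF uses <rdf:RDF>
  else :unknown
  end
end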
Pre-canned parsers get it wrong too often, either grabbing data you don't want and making a mess of the output, or skipping data you do want leaving gaps in the output, so be prepared to write code if you want it done correctly.
You'll probably want to take advantage of a backing database too, to keep track of what you looked at last and when you're supposed to look at it again; that's part of being a good network citizen. You'll also want to keep track of whether a feed was down the last n times you looked, so you can trim out dead sites.

Get list of files from Apache index page using Ruby/Rails

I am attempting to create a radar animation using data from the National Weather Service. For static images they make it easy by always having the same filename. However, for the historical images, they are timestamped, and always change. Thus, to get the previous N images, you would have to know the filenames beforehand. They do, however, provide a directory which provides a listing for each site. See the example here:
http://radar.weather.gov/ridge/RadarImg/N0R/FWS/
What I need is a way for my Rails app to extract the last N images from that directory. Is that possible? I could imagine one option would be to download and then scrape that page, but I am assuming there is a better way?
Thanks!
Following on from the above, you could try something like what I just tried in the console:
require 'open-uri'
require 'nokogiri'

# Fetch the Apache index page and parse it as HTML
doc = Nokogiri::HTML(open('http://radar.weather.gov/ridge/RadarImg/N0R/FWS/'))

# Print the contents of each table cell in the directory listing
doc.xpath('//table/tr/td').each do |tabrow|
  puts tabrow.content
end
That's a pretty basic stab in the dark, but it should give you food for thought to get you on the way.
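If you want just the most recent frames, here's a slightly more targeted sketch that assumes the directory lists the images as .gif links in chronological order (which you'd want to verify for that page):
require 'open-uri'
require 'nokogiri'

url = 'http://radar.weather.gov/ridge/RadarImg/N0R/FWS/'
doc = Nokogiri::HTML(open(url))

# Grab every link's href, keep the ones that look like radar images,
# and take the last N of them
images = doc.css('a').map { |a| a['href'] }.grep(/\.gif\z/i)
images.last(10).each { |name| puts URI.join(url, name) }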
You'll have to download them using a library like curb, parse the page with something like Nokogiri and then combine the images using whatever tool works best for you.
Rails is designed to handle web requests, not to run background jobs, but there are tools that can facilitate this for you, or you can always write scripts for rails runner to execute in the Rails environment.

Is it possible to make a website with Ruby On Rails that scrapes data from another website and displays it

I am teaching myself Ruby on Rails. I would like to make a website that, whenever someone visits it, will scrape another website and display some data. Is this possible?
Yes, it's possible.
Just remember one thing: don't crawl data within your controller action. Crawling data might be a long-running process; the target website might be slow or down, and it will block your entire website. You should use a cron job or a job queue to crawl data and store it in your database. The Rails app then reads the data from the database, not from the other website directly.
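For example, a rough sketch of a rake task that cron could run (the Article model and the CSS selector here are hypothetical):
# lib/tasks/scrape.rake
require 'open-uri'
require 'nokogiri'

namespace :scrape do
  desc 'Fetch the remote page and store the bits we display'
  task :articles => :environment do
    doc = Nokogiri::HTML(open('http://example.com/news'))
    doc.css('.headline').each do |node|
      Article.find_or_create_by(:title => node.text.strip)
    end
  end
end
Cron then runs rake scrape:articles on whatever schedule fits, and the controller only ever reads from the database.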
Totally. You can use Nokogiri to take in the contents of a web page, parse it, and then display it on your site. It requires some knowledge of the site you are consuming, in the sense of the class/id of the elements you want.
Nokogiri gem
Yes. You should use Nokogiri or regular expressions to extract the data you want and then display it.
Here is a small code example to get you going:
require 'open-uri'
open('http://www.stackoverflow.com'){ |f| puts f.read }
This will print the HTML from that site to your terminal window. If you don't do so already, use the irb utility to see it work. Finally, here is a basic way to strip out much of the HTML if you need to:
include ActionView::Helpers::SanitizeHelper
open('http://www.stackoverflow.com'){ |f| puts strip_tags(f.read) }

Ruby RSS parser and event trigger

I'm using the RSS library so I can parse Atom and RSS in Ruby and Rails and store it in a model.
I've looked at the standard RSS library, but is there one library that will auto-detect that there is a new RSS feed so I can update my database?
What is the best practice for triggering an instruction in order to store the new RSS feed?
Should I use threads to handle that problem? Is it going to be slow?
Thank you for your help.
OK, here's the deal.
If you want a really fast feed parser, go for Feedzirra. It does not work on Windows. http://github.com/pauldix/feedzirra
Autodiscovery?
- There's truffle-hog if you don't want to do GET redirects. http://github.com/pauldix/truffle-hog
- There's feedbag if you want to do GET redirects to find feeds from given URLs. This is slower, though. http://github.com/damog/feedbag
Feedzirra is the best bet if you want to poll for new entries for your feed. But if you want a more non-polling solution to your problem, then I would suggest going through the PubSubHubbub spec. While parsing your feeds, make sure they are PubSubHubbub enabled: check for the link tag. If it points to pubsubhubbub.appspot.com or any other PubSubHubbub-enabled hub, then just subscribe to the feed by sending a subscription request to the hub. You can then define an endpoint in your app which will in turn receive updated entry pings for your feed subscription from the hub. Just read the raw POST data and store it in your database. Stats are that 95% of Blogger blogs are PubSubHubbub enabled. That is a lot of data in your hands already. :)
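The subscription request itself is just a form POST to the hub (a rough sketch; the hub, topic and callback URLs are placeholders):
require 'net/http'
require 'uri'

hub = URI('http://pubsubhubbub.appspot.com/')
Net::HTTP.post_form(hub,
  'hub.mode'     => 'subscribe',
  'hub.topic'    => 'http://example.com/feed.atom',
  'hub.callback' => 'http://myapp.example.com/push_endpoint',
  'hub.verify'   => 'async')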
If you are polling for changes, then you should check the Last-Modified or ETag from the headers rather than parse the entire feed again. It saves you from wasting resources. Feedzirra takes care of this for you.
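If you're rolling your own polling instead, a conditional GET looks roughly like this sketch (the saved ETag/Last-Modified values would come from your database in practice):
require 'net/http'
require 'uri'

saved_etag          = nil   # loaded from your database in practice
saved_last_modified = nil

uri = URI('http://example.com/feed.rss')
req = Net::HTTP::Get.new(uri)
req['If-None-Match']     = saved_etag if saved_etag
req['If-Modified-Since'] = saved_last_modified if saved_last_modified

res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
if res.is_a?(Net::HTTPNotModified)
  puts 'feed unchanged, skipping'
else
  puts res.body   # hand this to your parser and save the new ETag/Last-Modified
end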
I am not sure what you mean by "auto-detect" a new feed?
Are you looking for code that can discover when someone creates a new feed on a site? Or, do you mean discover when an existing feed has a new article?
The first is tough because your code needs to know what site to look at, so it needs some sort of auto-discovery of sites with new feeds. Searching Google for "new rss feeds" doesn't return anything that looks useful, at least not on the first page. If you, or your users, know of a new site, then you can have an interface to add new sites to search. Then you grab the page at that URL, look for the RSS/Atom auto-discovery links, and go from there. Auto-discovery links can open a can of worms because of duplicate content being served using different protocols (RDF, RSS and Atom), so you have to determine which to use, or deal with multiple feeds listing alternate content.
If you mean you want to discover when an existing feed has new articles, then you have to keep track of the last time your code looked at the feed, and the last article that was seen, then retrieve the feed and see if any articles were not in your list of previously seen articles. Your code needs to be sensitive to the time-to-live information in a lot of feeds too. Hitting the feed every fifteen minutes when they update once a week is bad form. Most aggregation code can do those things already but you might need to configure a database and tell the code how to find it.
Generally, for this sort of task I set up a crontab entry on a production Linux or Unix system and fire off the job periodically, looking in the database for feeds whose last-run-time plus the stored time-to-live value is in the past.
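The selection step is nothing fancy, something like this sketch (Feed, last_run_at and ttl in seconds are hypothetical columns, and refresh is a made-up helper):
due = Feed.all.select do |feed|
  feed.last_run_at.nil? || feed.last_run_at + feed.ttl < Time.now
end
due.each { |feed| refresh(feed) }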
Does that help any?
A very easy solution is to use dynamic attribute-based finders.
When you are filling your model with RSS feed data, instead of Model.create(...) use Model.find_or_create_by_column(value, :other_column => other_value).
You can specify a date or the RSS message title as the unique value (whatever you want).
I think this is pretty easy. You can set up a cron task to fill your model once per hour, for example. Only new entries will be added.
There is no way to get an "event" when the RSS feed is updated without downloading the whole feed again.
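A sketch of the fill step, using the entry's guid as the unique value (Entry, its columns and the item accessors are assumptions that depend on your parser; find_or_create_by is the modern equivalent of the dynamic finders above):
feed.items.each do |item|
  Entry.find_or_create_by(:guid => item.guid) do |entry|
    # This block only runs when the record doesn't exist yet
    entry.title        = item.title
    entry.url          = item.link
    entry.published_at = item.date
  end
end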

How do I add a twitter search feed to my Ruby on Rails application?

I want to add a twitter feed for a specific keyword search to my rails application. Where should I start?
You might start with one of the Twitter API libraries written for Ruby.
You may want to consider grabbing the RSS feed for the search and parsing that. I show that in Railscasts episode 168. If you need something fancier, the API is the way to go, as Dav mentioned.
But whichever solution you choose, it's important to cache the search results locally on your end. This way your site isn't hitting Twitter every time someone goes to the page. This improves performance and will make your site more stable (not breaking when Twitter breaks). You can have the cache auto-update every 10 minutes (or whatever fits) using a cron task.
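A sketch of that caching layer with Rails.cache (fetch_tweets is a made-up helper that hits the API or parses the feed):
def cached_tweets(keyword)
  Rails.cache.fetch("tweets/#{keyword}", :expires_in => 10.minutes) do
    fetch_tweets(keyword)
  end
end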
We download and store the tweets in a local database. I recently wrote a blog post about how I achieved this:
http://www.arctickiwi.com/blog/16-download-you-twitter-feed-using-ruby-on-rails-with-oauth
You can then use will_paginate to handle your pagination and go back as far as you want.
