Ruby RSS parser and event trigger - ruby-on-rails

I'm using the RSS library so I can parse Atom and RSS feeds in Ruby on Rails and store them in a model.
I've looked at the standard RSS library, but is there a library that will auto-detect that there is a new RSS feed so I can update my database?
What are the best practices for triggering an instruction to store the new RSS feed?
Should I use threads to handle that problem? Is it going to be slow?
Thank you for your help.

OK, here's the deal.
If you want a really fast feed parser, go for Feedzirra (it does not work on Windows): http://github.com/pauldix/feedzirra
Autodiscovery?
- There's truffle-hog if you don't want to do GET redirects: http://github.com/pauldix/truffle-hog
- There's Feedbag if you want to do GET redirects to find feeds from given URLs (this is slower, though): http://github.com/damog/feedbag (see the sketch below)
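For the autodiscovery route, Feedbag's interface is a single call. A minimal sketch (the URL is just a placeholder):

    # Gemfile: gem "feedbag"
    require "feedbag"

    # Feedbag fetches the page, follows redirects, and returns an array of the
    # feed URLs it discovers (empty if the page advertises none).
    feed_urls = Feedbag.find("http://example.com")
    feed_urls.each { |url| puts url }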
Feedzirra is the best bet if you want to poll for new entries in your feed. But if you want a non-polling solution to your problem, then I would suggest going through the PubSubHubbub spec. While parsing your feeds, make sure they are PubSubHubbub enabled: check for the link tag. If it points to pubsubhubbub.appspot.com or any other PubSub-enabled hub, just subscribe to the feed by sending a subscription request to the hub (see the sketch below). You can then define an endpoint in your app which will in turn receive updated-entry pings for your feed subscription from the hub. Just read the raw POST data and store it in your database. The stats are that 95% of Blogger blogs are PubSub enabled. That is a lot of data in your hands already. :)
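To make that concrete, here is a rough sketch (not the author's code) of checking a feed for a hub link and sending the subscription request; the feed and callback URLs are placeholders for your own:

    require "nokogiri"
    require "net/http"
    require "open-uri"

    feed_url = "http://example.com/feed.atom"              # placeholder feed
    doc      = Nokogiri::XML(URI.open(feed_url))

    # PubSubHubbub-enabled feeds advertise their hub in a <link rel="hub"> tag.
    hub_link = doc.at_xpath("//*[local-name()='link'][@rel='hub']")

    if hub_link
      # Standard PuSH subscription parameters; the hub will later POST updated
      # entries to the callback endpoint you expose in your app.
      Net::HTTP.post_form(URI(hub_link["href"]),
        "hub.mode"     => "subscribe",
        "hub.topic"    => feed_url,
        "hub.callback" => "http://yourapp.example.com/push/callback",  # placeholder
        "hub.verify"   => "async")
    end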
If you are polling for changes, then you should check the Last-Modified or ETag headers rather than parse the entire feed again; that saves you from wasting resources. Feedzirra takes care of this for you.
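A minimal polling sketch with Feedzirra, assuming a Rails 4-style Item model (the model and its columns are hypothetical):

    # Gemfile: gem "feedzirra"
    require "feedzirra"

    # First fetch: Feedzirra keeps the feed's etag/last-modified on the object.
    feed = Feedzirra::Feed.fetch_and_parse("http://example.com/feed.atom")

    # On each poll, update only re-downloads if the feed changed and exposes
    # the entries it hasn't seen before.
    updated = Feedzirra::Feed.update(feed)
    if updated.updated?
      updated.new_entries.each do |entry|
        Item.find_or_create_by(guid: entry.id) do |item|
          item.title        = entry.title
          item.url          = entry.url
          item.published_at = entry.published
        end
      end
    end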

I am not sure what you mean by "auto-detect" a new feed.
Are you looking for code that can discover when someone creates a new feed on a site? Or, do you mean discover when an existing feed has a new article?
The first is tough because your code needs to know what site to look at, so it needs some sort of auto-discovery of sites with new feeds. Searching Google for "new rss feeds" doesn't return anything that looks useful, at least not on the first page. If you, or your users, know of a new site, then you can have an interface to add new sites to search. Then you grab the page at that URL, look for the RSS/Atom auto-discovery links, and go from there. Auto-discovery links can open a can of worms because of duplicate content being served in different formats (RDF, RSS and Atom), so you have to determine which to use, or handle multiple feeds with alternate content.
If you mean you want to discover when an existing feed has new articles, then you have to keep track of the last time your code looked at the feed, and the last article that was seen, then retrieve the feed and see if any articles were not in your list of previously seen articles. Your code needs to be sensitive to the time-to-live information in a lot of feeds too. Hitting the feed every fifteen minutes when they update once a week is bad form. Most aggregation code can do those things already but you might need to configure a database and tell the code how to find it.
Generally, for this sort of task I set up a crontab entry on a production Linux or Unix system and fire off the job periodically, looking in the database for feeds whose last-run-time plus the stored time-to-live value is in the past.
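A sketch of that cron-driven job, assuming a Feed model with last_fetched_at and ttl (in minutes) columns and your own refresh! method for the fetch-and-store step:

    # lib/tasks/feeds.rake
    namespace :feeds do
      desc "Refresh feeds whose time-to-live has expired"
      task refresh: :environment do
        Feed.find_each do |feed|
          due = feed.last_fetched_at.nil? ||
                feed.last_fetched_at + feed.ttl.minutes <= Time.current
          next unless due

          feed.refresh!                                  # fetch, diff, store new entries
          feed.update(last_fetched_at: Time.current)
        end
      end
    end

    # crontab entry, e.g. every 15 minutes:
    # */15 * * * * cd /var/www/app && RAILS_ENV=production bundle exec rake feeds:refresh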
Does that help any?

A very easy solution is to use dynamic attribute-based finders.
When you are filling your model with RSS feed data, instead of Model.create(...) use Model.find_or_create_by_column(value, :other_column => other_value).
You can specify a date as the unique value, or the RSS message title ... (whatever you want).
I think this is pretty easy. You can set up a cron task to fill your model once per hour, for example; only new entries will be added.
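A sketch of that pattern against the standard library's parser output, using a hypothetical Entry model (this keeps the classic dynamic-finder style shown above; on Rails 4+ you would write Entry.find_or_create_by(link: ...) instead):

    require "rss"
    require "open-uri"

    xml  = URI.open("http://example.com/feed.rss").read
    feed = RSS::Parser.parse(xml, false)   # second argument disables strict validation

    feed.items.each do |item|
      # The link serves as the unique value; anything already stored is skipped.
      Entry.find_or_create_by_link(item.link,
        :title        => item.title,
        :published_at => item.pubDate)
    end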
There is no way to get an "event" when the RSS feed is updated without downloading the whole feed again.

Related

Import data from another source into Adobe Analytics

I’m trying to tie data from another product with my data inside of Adobe Analytics.
We have Adobe Analytics javascript on our website collecting data and we use a third party tool to track how users interact with certain parts of the website. We’re trying to use the Adobe API to tie the data together.
So far we’ve gone down the path of using the Data Insertion API, but it wasn’t quite right as it’s meant to be used as a replacement for the JS, from what I can tell.
We also explored the Data Sources API. The documentation for this suggests you can use a transaction ID to tie offline data to the data collected from the JS; we've tried this and it doesn't match the data up. We're now exploring using the visitor ID to tie the sessions together, but we're having problems uploading any rows with the visitor ID column: Adobe just returns the error “Column header: ‘visitorid’ is not a valid column header”. We've tried several different variations of visitor ID, such as “visitor_id”, “visitor-id”, “visitor id”, etc., and still no luck.
The end goal is for us to be able to upload data to Adobe that will update/add eVars for already existing sessions earlier that day. How would I go about doing this? Is there something I'm missing or doing wrong?
Edit: I managed to solve this problem by using the Adobe SAINT API. When a user arrives at the site, we push an eVar for that user with a unique ID and then the day after we use the SAINT API and the unique ID in the eVar we pushed previously to add the additional data we needed.
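For reference, the classification data in that approach ends up as a tab-delimited file keyed on the value pushed into the eVar. A rough, hypothetical sketch of building it in Ruby (the Visit model and the extra column names are invented; the exact header block comes from the SAINT export template, and the upload itself goes through the SAINT API):

    require "csv"

    CSV.open("classifications.tab", "w", col_sep: "\t") do |tsv|
      tsv << ["Key", "Campaign", "Segment"]               # Key = the unique ID stored in the eVar
      Visit.where(classified: false).find_each do |visit|
        tsv << [visit.evar_uid, visit.campaign, visit.segment]
      end
    end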
It could be a good idea to look back at the Data Insertion API and combine it with the visitorId approach, where you tie existing/old visitor IDs to new eVars and use the timestamp to "update" the dataset.
Although this is experimental, it might be worth a try.

How to scrape different URLs from database with Nokogiri with different requirements

I tried using Feedjira to assist with content analysis from newsfeeds, but it appears that RSS feeds now only link to content rather than including them with RSS as I found out in "Feedjira not adding content and author". I plan to use Feedjira to get the URL for the article, but then use Nokogiri to scrape the article and pick out the relevant parts.
The problem is that each media outlet will have a different format for its pages, and I need to know the best way for Nokogiri to take the URL from the database (supplied by Feedjira) and, depending on the associated feed title (also in the database from the Feedjira sync), scrape the page in a specific way and save it to a separate table in the database. Anyone got any suggestions?
I don't know your specific use case, but I'm also doing content analysis using news feeds.
Maybe have a look at Readability, which gives you a generic content scraper.
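With the ruby-readability gem the generic extraction is a couple of lines; a minimal sketch with a placeholder URL:

    # Gemfile: gem "ruby-readability"
    require "readability"
    require "open-uri"

    html    = URI.open("http://example.com/some-article").read
    article = Readability::Document.new(html)

    puts article.title
    puts article.content    # the main article body, with page boilerplate stripped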
The problem you've encountered is that every feed generator does it a bit differently, just as with HTML generators. You can assume certain fields are going to be in place in an RDF, RSS or ATOM feed, however the author of the feed could use optional tags that you could find very useful, so you have to write code to look for them.
I wrote several feed aggregators in the past, including one that was handling well over 1000 feeds daily. By sniffing out the feed type (Atom vs. RSS vs. RDF), I could make sensible checks for the fields that were interesting for that format and extract the data if it was available.
Pre-canned parsers get it wrong too often, either grabbing data you don't want and making a mess of the output, or skipping data you do want leaving gaps in the output, so be prepared to write code if you want it done correctly.
You'll probably want to take advantage of a backing database too, to keep track of what you looked at last and when you're supposed to look at it again; that's part of being a good network citizen. You'll also want to keep track of whether a feed was down the last n times you looked, so you can trim out dead sites.
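If you do write it yourself, one workable shape is a lookup table of per-outlet selectors keyed by the feed title Feedjira stored; a sketch, with hypothetical models and selectors:

    require "nokogiri"
    require "open-uri"

    # Each outlet gets its own CSS rules; real sites need their own entries.
    SELECTORS = {
      "Example Times"  => { body: "div.article-body", author: "span.byline" },
      "Sample Gazette" => { body: "article .content", author: ".author-name" }
    }

    FeedEntry.where(scraped: false).find_each do |entry|
      rules = SELECTORS[entry.feed_title]
      next unless rules

      doc = Nokogiri::HTML(URI.open(entry.url))

      Article.create!(
        feed_entry_id: entry.id,
        body:   doc.at_css(rules[:body])&.text,
        author: doc.at_css(rules[:author])&.text
      )
      entry.update(scraped: true)
    end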

How to check whether data sent to Omniture/Adobe Analytics is correct

I am a beginner with Omniture/Adobe web analytics. I want to know some things, like:
How can we track data coming into Omniture?
How do we know if the tags are firing as expected?
I installed the Omnibug extension and can see the parameters and their values being sent to Omniture, but I'm not sure how to verify, inside Omniture, the data that was sent.
Also, I tried to find unique visitors, visits, and pageviews based on pageName. Is it possible to filter unique visitors based on pageName? If yes, can anyone guide me by providing a list of instructions?
Thanks
What you need to do to truly verify that the expected data is landing in Adobe Analytics is look at the Click Stream feeds and map the results against the data you expect to be there. https://marketing.adobe.com/resources/help/en_US/sc/clickstream/
It is not trivial, but it is the deepest way of verifying the final result of page code, data collection, processing rules, VISTA, and finally pre/post results.
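A minimal sketch of reading a daily Clickstream data feed delivery, assuming the usual column_headers.tsv and hit_data.tsv files, so each hit can be compared against what the page code should have sent (the column names spot-checked here are just examples):

    headers = File.read("column_headers.tsv").chomp.split("\t")

    File.foreach("hit_data.tsv") do |line|
      hit = headers.zip(line.chomp.split("\t")).to_h
      # e.g. spot-check the page name and an eVar for each hit
      puts [hit["post_pagename"], hit["post_evar1"], hit["visit_num"]].inspect
    end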

How can I use the YouTube SUP API to retrieve recent uploads of some predefined users?

I wish to be able to check for the latest videos (in near realtime or at most a couple of minutes out) for a set of users (up to 200 or so) in a single call to the YouTube API and then store the IDs of uploaded videos in my own database. The only solution I believe there is for this is the YouTube SUP API but I'm not entirely clear on how it works and was wondering if someone could please explain it. I have read the entire API documentation on it but am still not completely clear.
I was assuming that one can call the SUP URL (http://gdata.youtube.com/sup) and check whether the user hash has had any activity recently and, if it has, do something with that. My issue is that I don't understand how to interpret the activity from ["b305e88","afd4"] in the SUP feed, and whether there is any way to specify a subset of users or you must search through the entire feed. It seems to take quite a few seconds to load the SUP feed.
On the SUP API page it also states that you can visit a URL such as https://gdata.youtube.com/feeds/api/users/bbc/events?v=2 to obtain the hash key for a user's feed, but as you can see if you try to visit it, the link appears to be broken. How else could I obtain the hash?
I'm currently wanting to do this in a Rails project while using the youtube_it gem but I don't believe this has support for it. Correct me if I'm wrong.
Edit
My mistake. The developer key is required to obtain the events of a user such as https://gdata.youtube.com/feeds/api/users/bbc/events?v=2&key=YOUR_DEVELOPER_KEY
Still no progress with the SUP method, although I'm considering using a channel and just automatically subscribing to each user; every minute I would then poll for the list of new videos by those users.
I'd suggest using PubSubHubbub: http://apiblog.youtube.com/2010/10/pubsubhubbub-for-youtube-activities.html
A handler in your web application will automatically receive a POST whenever one of the feeds you're watching is updated, and the content of the POST will be the updated feed itself, saving you the trouble of having to fetch it.
There isn't much documentation specific to using PuSH and the YouTube API beyond that blog post, but the general PuSH docs all apply: https://pubsubhubbub.appspot.com/
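A minimal sketch of such a handler in a Rails app (modern Rails syntax; the route names, the Video model, and the use of Feedzirra for parsing are my assumptions, not part of the YouTube docs):

    # config/routes.rb:
    #   get  "push/callback", to: "push_callbacks#verify"
    #   post "push/callback", to: "push_callbacks#notify"

    class PushCallbacksController < ApplicationController
      skip_before_action :verify_authenticity_token   # the hub can't send a CSRF token

      # The hub verifies the subscription with a GET carrying hub.challenge,
      # which must be echoed back.
      def verify
        render plain: params["hub.challenge"], status: :ok
      end

      # Each update arrives as a POST whose raw body is the updated Atom feed.
      def notify
        feed = Feedzirra::Feed.parse(request.raw_post)
        feed.entries.each do |entry|
          Video.find_or_create_by(youtube_id: entry.entry_id)   # hypothetical model
        end
        head :ok
      end
    end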
Failing that, SUP should still work, so we could try to debug that further if you'd rather use that.

How do I add a twitter search feed to my Ruby on Rails application?

I want to add a twitter feed for a specific keyword search to my rails application. Where should I start?
You might start with one of the Twitter API libraries written for Ruby.
You may want to consider grabbing the RSS feed for the search and parsing that. I show that in Railscasts episode 168. If you need something fancier, the API is the way to go, as Dav mentioned.
But whichever solution you choose, it's important to cache the search results locally on your end. That way your site isn't hitting Twitter every time someone visits the page. This improves performance and makes your site more stable (it won't break when Twitter breaks). You can have the cache auto-update every 10 minutes (or whatever fits) using a cron task.
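A sketch of that cron-refreshed cache using the twitter gem's search API (the old search RSS endpoint has since been retired); the Tweet model and credential environment variables are placeholders:

    require "twitter"

    client = Twitter::REST::Client.new do |config|
      config.consumer_key        = ENV["TWITTER_CONSUMER_KEY"]
      config.consumer_secret     = ENV["TWITTER_CONSUMER_SECRET"]
      config.access_token        = ENV["TWITTER_ACCESS_TOKEN"]
      config.access_token_secret = ENV["TWITTER_ACCESS_TOKEN_SECRET"]
    end

    # Store only tweets we haven't cached yet; run this from cron.
    client.search("your keyword", result_type: "recent").take(50).each do |status|
      Tweet.find_or_create_by(twitter_id: status.id) do |t|
        t.body      = status.text
        t.user_name = status.user.screen_name
        t.posted_at = status.created_at
      end
    end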
We download and store the tweets in a local database. I recently wrote a blog post about how I achieved this:
http://www.arctickiwi.com/blog/16-download-you-twitter-feed-using-ruby-on-rails-with-oauth
You can then use will_paginate to handle your pagination and go back as far as you want.
