Tracking Page Popularity in a Time Frame in Rails

I'm very new to web development and this seems like a basic question, so perhaps I just lack the correct terminology to search it on Google.
On my site I plan to have many dynamically generated pages, based on data in a MySQL server, and I would like to know which ones people have been visiting the most in, say, the last 24 hours, so that I can place these most popular pages on the front page of the site. How would I accomplish this in a Rails application?

What you're looking for is a web analytics solution to analyze your traffic, and possibly your marketing effectiveness. Here are some of the most prominent services you could use with your website:
Google Analytics
Chartbeat
Reinvigorate
HaveAMint
GetClicky
Piwik
Woopra
Personally, I use Google Analytics, as its setup is darn simple: configure your account, add a JavaScript snippet to each of the pages you want to track, and you're done.
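In a Rails app, that usually means pasting the snippet into the shared layout so every page gets it. A minimal sketch using the classic asynchronous ga.js snippet (the UA-XXXXXXX-X account ID is a placeholder; copy the exact snippet your GA admin console generates for you):

    <%# app/views/layouts/application.html.erb %>
    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXXX-X']); // placeholder account ID
      _gaq.push(['_trackPageview']);              // records the current URL
      (function() {
        // load ga.js asynchronously so it doesn't block page rendering
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>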
You could also look at self-hosted web analytics software. All in all, take a look at this Wikipedia page for more information.

Related

What is a good strategy for staying up-to-date with external APIs?

My project relies on several APIs, such as Twitter and YouTube. Recently, YouTube deprecated their old API, and it caused issues with my team's iPad app.
We could have stayed ahead of the change if we had been paying attention to YouTube's announcements of the upcoming deprecation. But alas, we were not, and the idea of staying up to date with all of our dependencies manually (browsing the web) seems exhausting and inefficient.
I have found a tool that helps notify you when external library dependencies change: https://libraries.io. However, it does not help with API dependencies.
Besides checking the API source webpages every so often, I was wondering if anyone had suggestions on how to stay notified and up-to-date with news regarding updates to a specified list of external APIs?
After some time looking at different options, I have found a solution that is not perfect, but seems to work best at fitting this need.
Solution Description
This solution uses a combination of Twitter, Google Apps Script, and the website blogtrottr.com. I created a Twitter list of reliable dev handles that often post updates about their APIs; for example, I made a list containing @twitterapi and @YouTubeDev. I then used Google Apps Script to create an online feed out of the Twitter list, and Blogtrottr to email me every time that feed gets a new posting. (A sketch of a do-it-yourself feed-generation step follows the steps below.)
Steps to Implement
Create a Twitter list of reliable handles that often post about updates to their APIs
Create an RSS Feed from that Twitter list. The details for how to do this can be found here.
Plug the URL that you get from Google Apps Script into Blogtrottr.
I did find some other ways to do this, but so far this is the only solution that is 100% free!
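If you'd rather skip the Google Apps Script step, here is a rough sketch of generating the RSS feed yourself with the twitter gem and Ruby's bundled rss library (the credentials, list owner, and list slug are placeholders):

    require 'twitter'  # gem install twitter
    require 'rss'

    client = Twitter::REST::Client.new do |config|
      config.consumer_key        = ENV['TWITTER_CONSUMER_KEY']     # placeholder credentials
      config.consumer_secret     = ENV['TWITTER_CONSUMER_SECRET']
      config.access_token        = ENV['TWITTER_ACCESS_TOKEN']
      config.access_token_secret = ENV['TWITTER_ACCESS_SECRET']
    end

    # Latest tweets from a hypothetical list named "api-announcements"
    tweets = client.list_timeline('your_handle', 'api-announcements', count: 50)

    feed = RSS::Maker.make('2.0') do |maker|
      maker.channel.title       = 'API announcements'
      maker.channel.link        = 'https://twitter.com/your_handle/lists/api-announcements'
      maker.channel.description = 'Tweets from API provider accounts'
      tweets.each do |tweet|
        maker.items.new_item do |item|
          item.title       = tweet.text
          item.link        = tweet.uri.to_s
          item.description = "Posted at #{tweet.created_at}"
        end
      end
    end

    puts feed  # serve or save this XML, then point Blogtrottr at its URL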

Tracking users' clicks and page visits in Rails

I would like to monitor users' page visits and clicks in my Rails app to make recommendations. My questions are:
Is there a Rails gem for this, or is Google Analytics the standard? If the latter is true, then how should I link a page visit to a particular user profile?
It is typical in Rails to have a section in application.html.erb which is shared across all pages. If I add the Google Analytics pageview tracking code to application.html.erb, will it be able to track all individual pages?
There are other ways, but the vast majority probably use Google Analytics. Several gems exist that help you integrate with GA to get at the data. See here: https://www.ruby-toolbox.com/categories/Web_Analytics.
Based on your first question, it seems you may want more insight than GA can provide. I've used ClickTale (http://www.clicktale.com) and Woopra (http://www.woopra.com) before, to good effect. This article lists several other alternatives, too - notice the high marks for Clicky: http://imimpact.com/web-stats-alternatives-to-google-analytics/.
Google Analytics (and almost all of these others) will take care of your second question automatically whenever the user loads a new page, since it is keyed by URL. That means that, although you put the GA script code in a single place, each unique page is tracked individually.
If you have AJAX requests that change the page without changing the URL, you'll need to dig into the GA script API. Essentially you'll need to push a new URL (possibly with a # in it) whenever you want to track an AJAX-driven link/button click. See here: http://davidwalsh.name/ajax-analytics
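With the classic ga.js API, that push is a one-liner in your AJAX success handler; a sketch (the virtual path and the jQuery call are illustrative, not from the article above):

    // in your Rails app's JavaScript, after an AJAX update:
    $.get('/products/42/panel', function(html) {
      $('#panel').html(html);
      // record a virtual pageview so GA counts this AJAX navigation;
      // '/virtual/products/42' is an arbitrary label you choose
      _gaq.push(['_trackPageview', '/virtual/products/42']);
    });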
I am biased, but I would recommend checking out impressionist, if you need to integrate the page views into the app in real-time. With analytics you will always have some lag time and you are also relying on an external dependency. Impressionist is good if you need this kind of control, but if you are just looking for simple metrics and don't need to pull them into the app, then analytics is probably the way to go.
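As a hedged sketch of how impressionist could feed the asker's "most popular in the last 24 hours" front page (the Page model and action names are placeholders):

    # app/models/page.rb
    class Page < ActiveRecord::Base
      is_impressionable  # impressionist: this model can receive impressions
    end

    # app/controllers/pages_controller.rb
    class PagesController < ApplicationController
      impressionist actions: [:show]  # log an impression on every #show

      def show
        @page = Page.find(params[:id])
      end
    end

    # Most-viewed pages in the last 24 hours, e.g. for the front page:
    counts = Impression.where(impressionable_type: 'Page')
                       .where('created_at > ?', 24.hours.ago)
                       .group(:impressionable_id)
                       .count                  # => { page_id => views, ... }
    top_ids = counts.sort_by { |_id, views| -views }.first(10).map(&:first)
    @popular_pages = Page.where(id: top_ids)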
Check out Ahoy, at https://github.com/ankane/ahoy. With just a few lines of code in your app, you can track page views and tie them to user accounts.
You can further customize Ahoy to track custom events, both the client (with JavaScript) and server.
Ahoy does not depend on any third-party services.
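A minimal sketch of Ahoy in use, following its README (the event name and the Page model are placeholders):

    # Gemfile
    gem 'ahoy_matey'
    # then: rails generate ahoy:install && rake db:migrate

    # app/controllers/pages_controller.rb
    class PagesController < ApplicationController
      def show
        @page = Page.find(params[:id])
        # custom server-side event; visits are recorded automatically
        # and carry a user id when someone is signed in
        ahoy.track 'Viewed page', page_id: @page.id
      end
    end

    # Page views in the last 24 hours, straight from your own database:
    recent = Ahoy::Event.where(name: 'Viewed page')
                        .where('time > ?', 24.hours.ago)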

How can I get an RoR image scraping / parsing tool to work w/ sites that require a password for entry?

I recently contracted a dev to build an image scraping tool, similar to Facebook's, and it works really well for any sites that don't need a password for entry, but in the near future I want to expand its utility to work across sites like ideeli.com, fab.com, or other sites that require a password for entry.
Also, I would assume that a user would already be logged in to one of these sites before they attempt to scrape any images from it.
Any ideas for how to go about creating this functionality?
Thanks for taking the time to answer!
I'd use the "mechanize" gem (https://github.com/tenderlove/mechanize), it's a great tool for automating interactions with websites.
Many sites, however, will ban you if you're caught automating the login process. I've had trouble with Google in the past.

Search Engine without crawling?

Is there a way to collect web content for use in a search engine without going through the web crawling phase? Is there any alternative to web crawling?
Thanks
No, to collect the content you have to...collect the content. :-)
Yes (and sort-of no).
:)
You can download existing data dumps from various websites (wikipedia, stackoverflow, etc.) and construct a partial index that way. It obviously won't be a complete index of the internet.
You could also use meta-search to construct your search engine. This is where you use the APIs of other search engines and use THEIR search results as the basis of your index. Examples include citosearch and opensearch. DuckDuckGo uses Yahoo's BOSS API (and Yahoo now uses Bing...) as part of their search engine.
There are also real-time streaming APIs that you could use instead of crawling the web. Look at DataSift as an example. There are lots more resources you could cleverly use to avoid or minimize crawling.
If you want to stay updated with the latest content on pages, you can use something like the PubSubHubbub protocol to get push notifications for subscribed links.
Or use a paid service like Superfeedr that makes use of the same protocol.
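The subscription itself is just a form-encoded POST to a hub; a rough sketch against a PubSubHubbub 0.3-style hub (the hub, topic, and callback URLs are placeholders, and Superfeedr additionally requires HTTP basic auth):

    require 'net/http'
    require 'uri'

    hub = URI('https://pubsubhubbub.appspot.com/')  # placeholder hub URL
    res = Net::HTTP.post_form(hub,
      'hub.mode'     => 'subscribe',
      'hub.topic'    => 'http://example.com/feed.xml',       # feed to watch
      'hub.callback' => 'http://yourapp.com/push_callback',  # your endpoint
      'hub.verify'   => 'async')

    # 202/204 means accepted; the hub then GETs your callback with a
    # hub.challenge parameter that your endpoint must echo back to confirm.
    puts res.code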
Directly or indirectly, you have to crawl the web in order to get the content.
Well, if you don't want to crawl, you can follow a wiki-like approach, where users submit links to sites (with title, description, and tags), so a collaborative link collection can be built.
To avoid spam, a +/- system can be involved, to vote useful sites or tags up and useless ones down.
To prevent spammers from mass-voting SERPs, you can weight votes by user reputation.
User reputation can be gained by submitting useful sites, or by somehow tracing usage patterns, while considering other abuse patterns too.
Well, you get the point, I think; a toy sketch of the weighted scoring follows.
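Just to make the idea concrete, a toy sketch of reputation-weighted scoring (the models and the log weighting are made up):

    class Link < ActiveRecord::Base
      has_many :votes  # each vote has value (+1/-1) and belongs to a user

      def score
        # a vote counts for more when cast by a high-reputation user;
        # the log keeps high-reputation users from dominating entirely
        votes.includes(:user).to_a.sum do |vote|
          vote.value * Math.log10(vote.user.reputation + 10)
        end
      end
    end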
As spammers gradually discover the weaknesses of traditional search engines (see Google bombs, content scraper sites, etc.), a community-based approach may work. But it would suffer severely from the cold-start effect, and while the community is small the system is easy to abuse and poison...
At least Wikipedia and Stack Exchange have not been spammed to useless levels so far...
PS: http://xkcd.com/810/

ASP.NET Tracking Code & Unique Visitors

I am trying to find a way to track and produce reports for my site (out of interest). Does anyone know of any articles/projects etc. that show how you can:
Track pages / unique visitors etc.
Track item 1 relative to timestamps etc.
in ASP.NET MVC or plain ASP.NET?
P.S. I know Google Analytics etc. is available, but I'm looking to create some basic stats for myself, out of interest in how web analytics works.
There are a couple of good ways to try and determine unique visitors, none of them are exact (which is why different analytics will report different numbers).
The first is to use a cookie. Create a cookie for the user for each time frame that you want to track uniques, so you could create one that expires in a day and one that expires in a month. You can then use both of those to track how many unique daily/monthly visitors you have. Of course this is not perfect since people can clear or refuse cookies, but it is pretty accurate.
The other way is to track uniques using a combination of the IP address and User-Agent of the requesting user. This is probably slightly less accurate, since if a company has a good IT group, lots of internal users will have the same User-Agent, and, since they are all coming from the same internal network, they could have the same IP address as well.
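Since this thread is otherwise Rails-flavored, here's a hedged sketch of both techniques as a Rails before-filter (the cookie name, the DailyUnique and Visit models, and the hashing scheme are all arbitrary choices):

    require 'digest/sha1'
    require 'securerandom'

    class ApplicationController < ActionController::Base
      before_filter :count_unique_visitors

      private

      def count_unique_visitors
        # Technique 1: a cookie per time frame. No cookie means a new
        # daily unique; set one that expires in a day.
        unless cookies[:daily_uid]
          cookies[:daily_uid] = { value: SecureRandom.uuid,
                                  expires: 1.day.from_now }
          DailyUnique.create!(recorded_on: Date.today)  # hypothetical model
        end

        # Technique 2: fingerprint from IP + User-Agent (coarser, as noted).
        print = Digest::SHA1.hexdigest(
          "#{request.remote_ip}|#{request.user_agent}")
        Visit.where(fingerprint: print, day: Date.today).first_or_create  # hypothetical model
      end
    end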
If you are interested in reading more about the different methods there is a great article about it here: http://www.google.com/support/urchin45/bin/answer.py?answer=28325
I blogged about a simple ASP.NET tracking module. You can check it out here:
http://ilkeraksu.com/post/2009/07/14/Very-very-simple-But-very-very-efficient-Aspnet-Tracking-module.aspx
I would recommend using Google Analytics instead of reinventing the wheel. All you have to do is stick a bit of JavaScript in your master page and you're done.
You can check out Piwik. It's an open-source web analytics package written in PHP and MySQL.
You can find a great article at http://www.codeproject.com/KB/aspnet/PageTracking.aspx, which is an upgraded version of http://www.15seconds.com/Issue/021119.htm. It uses a SessionTracker class that runs in Application_PreRequestHandlerExecute and mails reports on session end, along with lots of useful tips. Thanks to Wayne Plourde for all that stuff.
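For the Rails readers of this thread, the rough analogue of hooking Application_PreRequestHandlerExecute would be a small Rack middleware; a hedged sketch (where the data goes is up to you):

    # lib/page_tracker.rb -- sees every request before the app does,
    # roughly analogous to an ASP.NET module's pre-request handler.
    class PageTracker
      def initialize(app)
        @app = app
      end

      def call(env)
        req = Rack::Request.new(env)
        # swap this log line for a DB insert, counter, etc.
        Rails.logger.info "[tracker] #{req.ip} #{req.path} #{req.user_agent}"
        @app.call(env)
      end
    end

    # config/application.rb
    # config.middleware.use PageTracker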
