I would like to monitor users' page visits and clicks in my Rails app to make recommendations. My questions are:
Is there a Rails gem for this, or is Google Analytics the standard? If the latter, how should I link a page visit to a particular user profile?
It is typical in Rails to have a section in application.html.erb which is shared across all pages. If I add the Google Analytics pageview tracking code to application.html.erb, will it be able to track all individual pages?
There are other ways, but the vast majority probably use Google Analytics. Several gems exist that help you integrate with GA to get at the data. See here: https://www.ruby-toolbox.com/categories/Web_Analytics.
Based on your first question, it seems you may want more insight than GA can provide. I've used ClickTale (http://www.clicktale.com) and Woopra (http://www.woopra.com) before, to good effect. This article lists several other alternatives, too - notice the high marks for Clicky: http://imimpact.com/web-stats-alternatives-to-google-analytics/.
Google Analytics (and almost all of these others) will take care of your second question automatically whenever the user loads a new page, since tracking is keyed by URL. That means that, although you put the GA script code in a single place, each unique page is tracked individually.
If you have AJAX requests that change the page without changing the URL, you'll need to dig into the GA script API. Essentially you'll need to push a new URL (possibly with a # in it) whenever you want to track an AJAX-driven link/button click. See here: http://davidwalsh.name/ajax-analytics
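For reference, here is a minimal sketch of what that can look like in a Rails layout, assuming the classic asynchronous ga.js snippet (newer analytics.js/gtag.js setups use different calls, and UA-XXXXX-X is a placeholder for your own property ID):

<!-- app/views/layouts/application.html.erb -->
<html>
  <head>
    <title>MyApp</title>
    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXX-X']);  // your GA property ID
      _gaq.push(['_trackPageview']);             // records the current URL on every full page load

      (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript'; ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();

      // For AJAX-driven navigation, push a virtual pageview yourself:
      // _gaq.push(['_trackPageview', '/ajax/some-virtual-url']);
    </script>
  </head>
  <body>
    <%= yield %>
  </body>
</html>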
I am biased, but I would recommend checking out Impressionist if you need to integrate page views into the app in real time. With Google Analytics you will always have some lag, and you are also relying on an external dependency. Impressionist is good if you need this kind of control, but if you are just looking for simple metrics and don't need to pull them into the app, then analytics is probably the way to go.
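A rough sketch of how Impressionist is typically wired up, based on the gem's README (the model and controller names here are just examples; double-check the API against the version you install):

# app/models/article.rb
class Article < ActiveRecord::Base
  is_impressionable   # allow impressions to be recorded and counted for this model
end

# app/controllers/articles_controller.rb
class ArticlesController < ApplicationController
  def show
    @article = Article.find(params[:id])
    impressionist(@article)                      # record this request as an impression
    @view_count = @article.impressionist_count   # total views, available in real time inside the app
  end
end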
Check out Ahoy, at https://github.com/ankane/ahoy. With just a few lines of code in your app, you can track page views and tie them to user accounts.
You can further customize Ahoy to track custom events, both on the client (with JavaScript) and on the server.
Ahoy does not depend on any third-party services.
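A rough sketch of what that looks like, going by the Ahoy README (the controller and event names are made up; check the exact API against the version you install):

# Visits are tracked automatically; if your app defines current_user
# (e.g. via Devise), Ahoy ties each visit to that user.

# Server-side custom event from a controller:
class ProductsController < ApplicationController
  def show
    @product = Product.find(params[:id])
    ahoy.track "Viewed product", product_id: @product.id
  end
end

# Client-side custom event with ahoy.js:
# ahoy.track("Clicked signup button", {page: window.location.pathname});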
I recently decided to start taking advantage of rich snippets to improve my personal website's content for the search engines and, IMHO most importantly, the site readers – hi, Mam! ;-). One of these is Google Authorship. Personally, I think the idea behind Google Authorship is a sound one: it helps to bring a sense of identity, personality and – arguably, most importantly – credibility to what is still largely an anonymous web.
Normally, I would link my article to Google Authorship using the following line of HTML:
<A REL="author" HREF="https://plus.google.com/112431363835029530079?rel=author">Jordan Clark</A>
However, for a website that publishes articles written by multiple authors, manually entering each author's Google+ UID string quickly becomes a tiresome process.
Is it valid to do the following:
(a) Link to the author like so, using the script "author.php" (or another type of server-side script).
<A REL="author" HREF="/author.php?by=Alice&rel=author/[UID]?rel=author">Alice</A>
(b) The "author.php" script simply does a quick check for Alice's (or whoever's) user ID string provided by Google, and then uses a simple HTTP redirect header to pass this data to Google.
What I would like to know is:
Is it okay to use a local script to redirect to your Google+ user profile? (i.e. will it affect the PageRank of already-indexed pages or have any other unforeseen negative effects on new and indexed pages?)
Why do I not see more people linking with Google’s “prettified” version:
http://profiles.google.com/clarky.y2k?rel=author
Are there any drawbacks to using the “prettified” version of this method?
Ideally, I would like to use the intermediate PHP script, as I have already described above (see part 1). However, any tips, suggestions or other ways you may have implemented on your websites are very welcome!
For item (1), you can maintain your own app's profiles (author.php in your case) for your authors. On your own app's profile page (author.php), you would add a link from that page to Google and specify the rel="me" attribute on that link. So Alice's profile page might say something like "Find Alice on <a rel="me" href="https://plus.google.com/[UID]">Google+</a>".
This indirect authorship linking is supported. You also need the link from Alice's Google+ profile that lists her as a contributor to your site. Once the linking is set up in both directions, authorship can start to show up. Authorship won't always display in all cases, and it can take some time to start appearing because Google needs to reindex your pages.
For item (2), I don't think the profiles URL will enable authorship. Some people use that URL as a vanity URL, but as far as I know it isn't supported for use with things like authorship, badges, etc.
You should test if your redirects are followed using the Rich Snippets Testing Tool: http://www.google.com/webmasters/tools/richsnippets
rel="author" is no longer supported.
I'm building a service where people get notified (by email) when they follow a link with the format www.domain.com/this_is_a_hash. The people that use this service can share this link in different places like Twitter, Tumblr, Facebook and more...
The main problem I'm having is that as soon as the link is shared on any of these platforms, a lot of requests to www.domain.com/this_is_a_hash come in to my server. The problem with this is that each time one of these requests hits my server, a notification is sent to the owner of the this_is_a_hash, and of course this is not what I want. I just want to get notifications when real people visit this resource.
I found a very interesting article here that talks about the huge number of requests a server receives when a link is posted to Twitter...
So what I need is to keep search engines from hitting the "resource" URL... the www.mydomain.com/this_is_a_hash
Any ideas? I'm using Rails 3.
Thanks!
If you don’t want these pages to be indexed by search engines, you could use a robots.txt to block these URLs.
User-agent: *
Disallow: /
(That would block all URLs for all user-agents. You may want to group these URLs under a dedicated folder and block only that folder. Or you could add the forbidden URLs dynamically as they get created; however, some bots might cache the robots.txt for some time, so they might not recognize that a new URL should be blocked, too.)
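For example, if the generated links all live under a common path prefix (the /l/ prefix below is only an illustration), the rule can be scoped to that folder:
User-agent: *
Disallow: /l/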
It would, of course, only hold back those bots that are polite enough to follow the rules of your robots.txt.
If your users would copy&paste HTML, you could make use of the nofollow link relationship type:
<a href="http://www.domain.com/this_is_a_hash" rel="nofollow">cute cat</a>
However, this would not be very effective, as even some of those search engines that support this link type still visit the pages.
Alternatively, you could require JavaScript to be able to click the link, but that’s not very elegant, of course.
But I assume they only copy&paste the plain URL, so this wouldn’t work anyway.
So the only chance you have is to decide if it’s a bot or a human after the link got clicked.
You could check for user-agents. You could analyze the behaviour on the page (e.g. how long it takes for the first click). Or, if it’s really important to you, you could force the users to enter a CAPTCHA to be able to see the page content at all. Of course you can never catch all bots with such methods.
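As a very rough sketch of the user-agent check in a Rails 3 controller (the model, mailer and bot pattern are all made-up examples, and no pattern will ever catch every bot):

# app/controllers/links_controller.rb
class LinksController < ApplicationController
  BOT_PATTERN = /bot|crawl|spider|slurp|facebookexternalhit|preview/i

  def show
    @link = Link.find_by_token!(params[:id])   # the record behind this_is_a_hash

    unless bot_request?
      # Only notify the owner for what looks like a human visit
      NotificationMailer.visit_notification(@link).deliver
    end
  end

  private

  def bot_request?
    request.user_agent.blank? || request.user_agent =~ BOT_PATTERN
  end
end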
You could use analytics on the pages, like Piwik. They try to differentiate users from bots, so that only users show up in the statistics. I’m sure most analytics tools provide an API that would allow sending out mails for each registered visit.
I've been doing some programming off and on for my brother, who is a stock trader. I'm wondering if it is possible to receive a push notification when a site adds a page. For example, smallcapfortunes.com frequently adds pages that are simple extensions of the main URL, such as /neca/, /stev/, etc.
Are there existing methods to execute this? Or is this something I need to write myself? Has anyone here written anything like that?
I know there are existing sites to track basic updates to a single page. In my research, though, I haven't found anything like this.
Please let me know if there are any other details I need to provide.
Generally you can only get a push notification if a specific website offers that service.
Some websites publish a structured (XML) site map. If the one you're interested in does that, you could pull that sitemap on a regular basis and look for differences.
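A minimal sketch of that approach in plain Ruby, using only the standard library (the sitemap URL and the way seen URLs are stored are assumptions you would adapt, and you would run this from cron every so often):

require "net/http"
require "uri"
require "set"

SITEMAP_URL = URI("http://smallcapfortunes.com/sitemap.xml")  # assuming the site publishes one
SEEN_FILE   = "seen_urls.txt"

seen = File.exist?(SEEN_FILE) ? File.readlines(SEEN_FILE).map(&:strip).to_set : Set.new

xml  = Net::HTTP.get(SITEMAP_URL)
urls = xml.scan(%r{<loc>(.*?)</loc>}m).flatten   # crude extraction; a real XML parser (Nokogiri) is safer

new_urls = urls.reject { |u| seen.include?(u) }
new_urls.each { |u| puts "New page: #{u}" }      # replace with your own notification (email, push, etc.)

File.open(SEEN_FILE, "a") { |f| new_urls.each { |u| f.puts(u) } }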
You're most likely going to want to use http://scrapy.org/ to go through the site and find new /neca/, /stev/, etc. URLs, then just trigger the script every so often.
How do people track page views or analytics for partials in Ruby on Rails? For instance, we have a form rendered in a partial, but we would like to track and analyze user behavior relating to the form.
We use Google Analytics. As far as we know, we're not supposed to embed Google Analytics code in the partial view, or it compromises the data.
We started using Mixpanel, but it begins charging after 25,000 data points. We're looking for a free solution to what should be a pretty basic problem.
You can use Google Analytics event tracking. You can define an event for each partial, and GA will track them without interfering with the automatic page tracking.
See the Google Code documentation for event tracking.
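As a rough illustration using the classic ga.js _gaq API (the partial, form and event names below are made up, and newer analytics.js setups use ga('send', 'event', ...) instead):

<%# app/views/shared/_signup_form.html.erb %>
<%= form_for @signup do |f| %>
  <%= f.text_field :email %>
  <%= f.submit "Sign up", :id => "signup-submit" %>
<% end %>

<script type="text/javascript">
  // Fire an event when the partial is rendered...
  _gaq.push(['_trackEvent', 'SignupForm', 'rendered']);

  // ...and another when the form is submitted.
  document.getElementById('signup-submit').onclick = function() {
    _gaq.push(['_trackEvent', 'SignupForm', 'submitted']);
  };
</script>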
Personally, I found Mixpanel rather lacking and expensive when I tried it for work.
Recently search engines have been able to index dynamic content on social networking sites. I would like to understand how this is done. Are there static pages created by a site like Facebook that update semi-frequently? Does Google attempt to store every possible user name?
As I understand it, a page like www.facebook.com/username is not an actual file stored on disk but shorthand for a query like "select username from users", with the result displayed on the page. How does Google know about every user? This gets even more complicated when things like tweets are involved.
EDIT: I guess I didn't really ask what I wanted to know. Do I need to be as big as Twitter or Facebook for Google to build special ways to crawl my site? Will Google automatically find my users' profiles if I allow anyone to view them? If not, what do I have to do to make that work?
In the case of tweets in particular, Google isn't 'crawling' for them in the traditional sense; they've integrated with Twitter to provide the search results in real-time.
In the more general case of your question, dynamic content is not new to Facebook or Twitter, though it may seem to be. Google crawls a URL; the URL provides HTML data; Google indexes it. Whether it's a dynamic query that's rendering the page, or whether it's a cache of static HTML, makes little difference to the indexing process in theory. In practice, there's a lot more to it (see Michael B's comment below.)
And see Vartec's succinct post on how Google might find all those public Facebook profiles without actually logging in and poking around FB.
OK, that was vastly oversimplified, but let's see what else people have to say..
As far as I know, Google isn't able to read and store the actual contents of profiles, because the Googlebot doesn't have a Facebook account, and doing so would be a huge privacy breach.
The bot works by hitting facebook.com and then following every link it can find. Whatever content it sees on the page it hits, it stores. So even if it follows a dynamic url like www.facebook.com/username, it will just remember whatever it saw when it went there. Hopefully in that particular case, it isn't all the private data of said user.
Additionally, Facebook can and does provide special instructions that search bots can follow, so that Google results don't include a bunch of login pages.
Profiles can be linked from outside.
The site may provide a sitemap.
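To make the second point concrete for a Rails app, here is a minimal sketch of a dynamically generated sitemap that lists public profile URLs (the route, controller and model names are assumptions):

# config/routes.rb
match 'sitemap.xml' => 'sitemaps#show', :defaults => { :format => 'xml' }

# app/controllers/sitemaps_controller.rb
class SitemapsController < ApplicationController
  def show
    @users = User.where(:public_profile => true)
    respond_to do |format|
      format.xml   # renders app/views/sitemaps/show.xml.builder
    end
  end
end

# app/views/sitemaps/show.xml.builder
xml.instruct!
xml.urlset "xmlns" => "http://www.sitemaps.org/schemas/sitemap/0.9" do
  @users.each do |user|
    xml.url do
      xml.loc user_url(user)
      xml.changefreq "weekly"
    end
  end
end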