Why can't we use page views instead of events in Site Catalyst - adobe-analytics

We have a requirement to capture the number of users who successfully logged in or updated their profile.
Reading up on this, we see that events appear to be the right way to capture this metric.
Just wondering: why can't we use s.pageName to count the number of successful logins? We could set a specific page name in that variable, and the count of that page name would tell us the number of successful logins or profile updates.

As an alternative to capturing an event, you can create a calculated metric in Adobe Analytics based on the page views of the success page.

They are just different approaches with different advantages.
One reason events can be helpful is that you can track the same event across multiple domains, or track different types of registrations (popup vs. checkout). My employer runs dozens of websites, and events are very useful for tracking errors across domains or checkout events.
Using a calculated metric can be very powerful as well. Its biggest strength is that (hopefully) you have been tracking page names since day one. If you add tracking with an event, you will only get data from the day you tagged it; with a calculated metric, you can retroactively see data from years past.
Generally, we use a calculated metric in most cases where it doesn't cause any data issues.
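To make the trade-off concrete, here is a minimal sketch of both approaches using the AppMeasurement `s` object. This is not from the original answers: it assumes `event1` has been enabled in your report suite as the login-success event, and the page name is just an example.

```typescript
// Minimal sketch, assuming the AppMeasurement `s` object is loaded on the page
// and that `event1` has been configured in the report suite as the
// "login success" event. Page name and event number are illustrative only.
declare const s: {
  pageName: string;
  events: string;
  linkTrackVars: string;
  linkTrackEvents: string;
  t(): void;
  tl(linkObject: boolean | HTMLElement, linkType: string, linkName: string): void;
};

// Approach 1: rely on the page name of the success page.
// The page-view count for this pageName becomes your "successful logins" number.
function trackLoginSuccessPage(): void {
  s.pageName = "account:login:success";
  s.t(); // sends a page view
}

// Approach 2: fire a custom success event without an extra page view.
function trackLoginSuccessEvent(): void {
  s.linkTrackVars = "events";
  s.linkTrackEvents = "event1";
  s.events = "event1";
  s.tl(true, "o", "login success"); // sends a custom link call instead of a page view
}
```

Either way, the calculated-metric route works off whatever pageName you already send; the event route needs the extra tagging shown in the second function.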

Related

A/B testing (show new feature only for 50% of users)

I'm creating a new feature for my iOS app. After I publish the app I want to show the new feature to only 50% of the users, so I can test which version generates more orders. I have no idea how to do this without using a third party like Optimizely.
Also, is it possible to do this using Google Tag Manager (GTM)?
So can someone please help me figure this out?
Thank you very much for your time. :)
It’s hard to do on your own, though not impossible of course: the Optimizelys of the world are just programs. You’ll need to solve these problems:
Targeting: Some algorithm that assigns a user session to either control or (one of) the treatment(s). This has to be random, of course, or you may as well stop there.
Routing: Send sessions to the targeted experience.
Logging: You’ll need to intelligently log events from sessions as they traverse their targeted experience. These may be many, so be careful not to add latency to your app path. Your statistical analysis will be based on these.
Experience stability: How do you ensure (if you do) that a returning user sees the same experience they’ve already seen? (One way to handle this, and targeting, deterministically is sketched after this answer.)
Note as well that Optimizely will only help you if all your changes are on the device and not on the server. If you need to instrument server changes as well, you’ll have to look into Sitespect or Variant.
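Not part of the original answer, but as an illustration of the targeting and experience-stability points above, here is a minimal sketch of deterministic 50/50 assignment. The stable user identifier, the hash choice and the split are assumptions, not anyone's production code.

```typescript
// Minimal sketch of targeting + experience stability, assuming you already have
// a stable user identifier (e.g. an ID you generate and persist yourself).
import { createHash } from "crypto";

type Variant = "control" | "treatment";

// Hashing the user id together with an experiment name means the same user can
// land in different buckets for different experiments, but always in the same
// bucket for a given experiment (this is what gives experience stability).
function assignVariant(userId: string, experiment: string): Variant {
  const digest = createHash("sha256").update(`${experiment}:${userId}`).digest();
  const bucket = digest.readUInt32BE(0) / 0xffffffff; // roughly uniform in [0, 1]
  return bucket < 0.5 ? "treatment" : "control";
}

// Routing and logging then hang off the assignment:
const variant = assignVariant("user-123", "new-ordering-flow");
if (variant === "treatment") {
  // show the new feature
} else {
  // show the existing flow
}
// Log the assignment alongside order events so you can compare conversion later.
```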
I finally figured out how to do the A/B testing with Google Tag Manager (GTM).
In GTM you can create a variable called 'Google Analytics Content Experiment'. With this variable you can select what percentage of users will see each variation (your experiments). You can create up to 10 variations for a single experiment.
GTM is so cool and powerful. It contains so many features that can save a lot of time, and I totally recommend it to anyone who is going to do A/B testing.
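Not from the original answer, but to show how app code might consume the result: a rough sketch that assumes your GTM tag pushes the chosen variation into the dataLayer under a hypothetical `abVariant` key (GTM itself does the percentage split; this only reads the outcome).

```typescript
// Rough sketch: GTM is assumed to push { abVariant: 0 | 1 } into the dataLayer.
// The key name is hypothetical; 0 = original, 1 = variation 1.
type DataLayerEvent = Record<string, unknown>;

function currentVariant(dataLayer: DataLayerEvent[]): number {
  // Take the most recent push that carries the hypothetical abVariant key.
  const entry = [...dataLayer].reverse().find((e) => typeof e["abVariant"] === "number");
  return entry ? (entry["abVariant"] as number) : 0;
}

const variant = currentVariant(
  (window as unknown as { dataLayer?: DataLayerEvent[] }).dataLayer ?? []
);
if (variant === 1) {
  // render the new feature for the users GTM assigned to variation 1
} else {
  // render the original experience
}
```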

Save game/user statistics in a central database with Swift and SQL on iOS 8

I am making a game for iOS 8 using the Swift language. For each level I was planning to save data such as how long it takes to solve the level, at what (x,y) position a player dies, how many attempts it takes before solving the level, etc. I will then use this information to improve the game by adjusting the difficulty of levels.
So I figured I would have a simple SQLite DB stored locally with this information. I then wonder how I should upload this information to one central database. Any ideas?
For example, what kind of unique identifier can I use? I don't care about the individual data, just the average time to complete a level and the average number of attempts to solve a level.
But if I have in-app purchases, how can a user who deletes the app, or gets another iPhone, restore the purchases they made? This again comes down to a unique identifier that is connected to the user, not just the iPhone.
You really should be using analytics tools to do this. I would recommend Flurry. It's free to use and drops into your game very easily.
http://www.flurry.com/solutions/analytics
I have used it in a number of products. You can send in whatever anonymous data you want, based on events. So every time a level is completed, you can report it to Flurry as an event and pass parameters for level, score, and completion time.
You can look at a nice dashboard to see the results and averages.
It's easy to incorporate into a Swift app; another SO answer goes through this in detail.

Is there any way to "backdate" requests to google server-side analytics?

I have an iOS app which can be used offline. I need to do anonymous page view tracking, so our customers can tell which pages people are most interested in (to drive future investments). So when the user is offline, we save a timestamped page view list, and if the user happens to be online when they use the app, we send these historic records up, and also do real-time tracking.
I'm keeping some summary statistics in my GAE app, so I can report the page views with historic accuracy. However, I'm also feeding these views into google analytics, using some python code I ported from google's server-side samples.
That all works great (except for language tracking, which I may have solved thanks to a separate question here on SO). However, I'd love for google analytics to be able to understand the historical hits in context. Right now, if I connect up after looking at several pages offline, GA thinks I just popped through a bunch of pages over the course of a couple seconds.
There is no documented utm variable for timestamping. The google analytics SDK for iOS (which I'm not using) has this ominous note:
Known Issues
Possible inaccurate timestamps: timestamps are recorded at the time the application dispatches to Google Analytics, so if a user experiences long periods of offline use, the timestamps may not be 100% accurate.
That seems like a bit of an understatement. Wouldn't offline timestamps be 100% inaccurate?
Anyway, the fact that the SDK doesn't handle this right makes me think I'm not going to be able to solve this. But I figured some SO wizard might have an idea...
In fact, the timestamp is "relative" (client-side) information used by Analytics to compute things like "time on page".
The "absolute" date and time at which a page view is recorded is always the time you send the request.
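For what it's worth, the offline part of the question boils down to a timestamped queue that gets flushed when connectivity returns; a rough sketch of that pattern is below. The `/track` endpoint and payload shape are hypothetical (not a Google API), and the point of the answer above still applies: the receiving side dates each hit by when it arrives, not by `viewedAt`.

```typescript
// Rough sketch of the queue-and-flush pattern the question describes.
// The /track endpoint and payload are placeholders; whatever tracking backend
// receives these will timestamp them at receipt, not at viewedAt.
interface QueuedPageView {
  page: string;
  viewedAt: string; // ISO timestamp recorded while offline
}

const queue: QueuedPageView[] = [];

function recordPageView(page: string): void {
  queue.push({ page, viewedAt: new Date().toISOString() });
}

async function flushQueue(): Promise<void> {
  while (queue.length > 0) {
    const hit = queue[0];
    // The server sees this request "now"; the original viewedAt only survives
    // if your own backend (e.g. the GAE summary stats) stores it explicitly.
    await fetch("/track", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(hit),
    });
    queue.shift();
  }
}
```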

Best way to go about creating in-house analytics for my Rails 3 app?

I have a Rails 3 app that I'm looking to create in-house analytics for. The items I need to track are impressions (and unique impressions), clicks that come from those impressions, and conversions that come from those clicks. And these are all user-specific so each user can see how many impressions, clicks, and conversions they've received.
What is the best way to go about this? Should I create a separate rails app and call it with pixels? Or should I include all the analytics code in the same app?
Also, are there any analytics platforms already out there that I can customize to meet my needs?
Thanks!
Tim
Before you start re-inventing the wheel, Google Analytics provides a developer API (accessible via OAuth, among other options) that may give you the ability to do what you need (provide each user with a view of their own data).
http://code.google.com/apis/analytics/docs/gdata/home.html
Building your own, while it may seem like a basic thing to do at first, could have serious performance implications further down the line, and Google provides a very detailed view of the data.
If you really want to write your own, I would strongly urge you not to hit the database for each request you want to track. Keep the data in Redis, or one of its alternatives, and periodically persist it to the database via a background task.
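The answer is about a Rails app, but the buffer-in-Redis-then-persist idea is language-agnostic. Here is a rough sketch of the shape of it (in TypeScript with node-redis); the key names, the reset logic and `saveToDatabase` are placeholders, not the answerer's code.

```typescript
// Rough sketch of "count in Redis, persist later".
import { createClient } from "redis";

const redis = createClient();

// Hot path: one in-memory increment, no relational write per request.
async function trackImpression(userId: string): Promise<void> {
  await redis.incr(`impressions:${userId}`);
}

// Background task (cron/worker): drain the counters into durable storage.
// saveToDatabase is a placeholder for your own persistence call.
async function flushCounters(
  saveToDatabase: (userId: string, count: number) => Promise<void>
): Promise<void> {
  // A production version would use SCAN instead of KEYS and an atomic
  // GETDEL or Lua script to avoid losing increments between get and reset.
  const keys = await redis.keys("impressions:*");
  for (const key of keys) {
    const value = await redis.get(key);
    if (value === null || value === "0") continue;
    await redis.set(key, "0");
    await saveToDatabase(key.replace("impressions:", ""), Number(value));
  }
}

async function main(): Promise<void> {
  await redis.connect();
  await trackImpression("user-42");
}

main().catch(console.error);
```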
If, however, you don't want to put your data into the clutches of our Google Overlord :) then you might indeed consider rolling your own. I have twice before - and I'm doing it again right now: better this time, of course!
If your traffic is not very high and you're running on any decent server platform, then adding a tracking system is not going to tax your Rails app noticeably (I know that depends on what 'decent server platform' means, but this stuff is pretty cheap these days). Writing to a database is typically very fast; you'd have to have shedloads of clicks not to want to do this straightaway. You can probably bypass most if not all of your before_filters and so on to get a lightning response. One app that runs Rails 2.3.9 uses Metal to do this, for example.
In my new tracking system I have an STI table that goes with models derived from an Activity model; in here you can record both impressions and clicks. Impressions are recorded as the page is built and clicks are recorded using AJAX.
I'm not going to bother with fancy graphs and so on - I'm happy with raw numbers - but these could be added, of course.
At the moment my system is just in the usual app/ folder but I'll probably move it to an engine so I can re-use it more easily.
Hope that helps!
BTW I use Google Analytics as well for a range of sites and it's OK - I just like to do this bit myself.
Depending on how you are going to associate Google Analytics data with a specific user then you might need to double-check the privacy implications. Google doesn't allow their data to be associated with any identifying information about the users being tracked.
If there is a problem then you could try out Piwik, as it's open source and you can do what you like with it. It's written in PHP, not Ruby, so that might be an issue. As @d11wtq mentions, tracking systems can have performance issues if not built the right way, so you'd be better off starting from something that's already proven to work if possible.

Using ATOM, RSS or another syndication feed for paid content

I work for a publishing house and we're discussing different ways to sell our content over digital channels.
Besides the web, we're closely watching the development of content publishing on tablets (e.g. iPad) and smartphones (e.g. iPhone). Right now, it looks like there are four different approaches:
Conventional publishing houses release apps like The Daily, Wired or Time Magazine. Personally I call them Print-Content-Meets-Offline-Website magazines. Very nice to look at, but slow, very heavy in terms of data size, and often inconsistent on the usability side. Besides that, these magazines don't co-exist well in a world where Facebook and Twitter are where users spend most of their time and share content.
Plain and stupid PDF. More or less lightweight, but as interactive and shareable as a granite block. A model mostly used by conventional publishers and apps like Zinio.
Websites with customized views for different devices (like Die Zeit's tablet-enhanced website). Lightweight, but (at least until now) not able to really exploit a hardware platform as a native app can.
Apps like Flipboard, Reeder or Zite go a different way: relying on Twitter, Facebook and/or syndication feeds like RSS and Atom, they give the user a very personalized way to consume news and media. Besides that, the data behind them is as lightweight as possible, and the architecture for distributing the data is fast and has proven reliable for years.
Personally, I think #4 is the way to go. Unluckily, the apps mentioned only distribute free content, and as a publishing house we're also interested in distributing paid content.
I did some research, googled around, and came to the conclusion that there is no standardized way to protect and sell individual articles in a syndication feed.
My question:
Do you have any hints or ideas how this could be implemented in a platform-agnostic way? Or is there an existing solution I just haven't found yet?
Update:
This article explains exactly what we're looking for:
"What publishers and developers need is
a standard API that enables
distribution of content for authorized
purposes, monitors its use, offers
standard advertising units and
subscription requirements, and
provides a way to share revenues."
Just brainstorming, so take it for what it's worth:
Feed readers can't handle buying, but most of them at least let you authenticate against feeds, right? If your feed required authentication, you would be able to tie the retrieval of Atom entries to a given user account. The retrieval could check the user account against purchased articles and make sure those entries were populated with the full paid content.
For unpurchased content, the feed gets populated with a link that takes you to a Buy The Article page. You adjust that user account, and the next time the feed is updated it shows the full content. You could even offer "article tracks" or something like that, where someone can buy everything written by a given author or everything matching some search criteria. You could adjust rates accordingly.
You also want to be able to allow people to refer articles to others via social media sites and blogs and so forth. To facilitate this, the article URLs (and the atom entry ids) would need to be the same whether they are purchased or not. Only the content of the feed changes depending on the status of the account accessing the feed.
The trick, it seems to me, is providing enough enticement to get people to create an account. Presumably, you'd need interesting things to read and probably some percentage of it free so that it leaves people wanting more.
Another problem is preventing redistribution of paid content to free channels. I don't know that there is a way to completely prevent this. You'd need to monitor the usage of your feeds by account to look for access anomalies, but it's a hard problem.
Solution we're currently following:
We'll use the same Atom feed for paid and free content. A paid content entry in the feed will have no content (besides title, summary, etc.). If a user chooses to buy that content, the missing content is fetched from a webservice and inserted into the feed.
Downside: the buying process is not implemented in any existing feed reader.
Anyone got a better idea?
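For what it's worth, here is a rough sketch of the gated-feed idea described in the answer above and in this update: one feed with stable entry ids, full content only for entries the authenticated account has purchased, and a teaser plus a buy link otherwise. The `Article` shape, the `isPurchased` check and the URLs are hypothetical.

```typescript
// Rough sketch of a gated Atom feed: same entry ids and links for everyone,
// full content only when the authenticated account has bought the article.
// Article, isPurchased and the example.com URLs are placeholders.
interface Article {
  id: string;       // stable Atom entry id, identical for paid and free views
  title: string;
  summary: string;
  body: string;
  updated: string;  // RFC 3339 timestamp
}

function renderEntry(article: Article, purchased: boolean): string {
  const payload = purchased
    ? `<content type="html"><![CDATA[${article.body}]]></content>`
    : `<link rel="alternate" href="https://example.com/buy/${article.id}" title="Buy this article"/>`;
  return `
  <entry>
    <id>${article.id}</id>
    <title>${article.title}</title>
    <updated>${article.updated}</updated>
    <summary>${article.summary}</summary>
    ${payload}
  </entry>`;
}

function renderFeed(articles: Article[], isPurchased: (id: string) => boolean): string {
  const entries = articles.map((a) => renderEntry(a, isPurchased(a.id))).join("\n");
  return `<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example paid feed</title>
  <id>https://example.com/feed</id>
  <updated>${new Date().toISOString()}</updated>
${entries}
</feed>`;
}
```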
I was looking for something else, but I came across the Flattr RSS plugin for WordPress.
I didn't have time to look through it, but maybe you can find some useful ideas in it.
