Rails current visitor count - ruby-on-rails

How does one implement a current visitors count for individual pages in Rails?
For example, a property website has a list of properties, and each individual listing has a remark that says:
"there are 6 people currently looking at this property".
I'm aware of the impressionist gem, which can log unique impressions for each controller. I'm just wondering if there is a better way than running
impressions.where("created_at >= ?", 5.minutes.ago).count
for each object in the array.
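For concreteness, here's the shape of a single grouped query that would replace the per-object counts (a sketch only, assuming impressionist's polymorphic impressionable_type/impressionable_id columns and a Property model):

recent_counts = Impression
  .where(impressionable_type: "Property")
  .where("created_at >= ?", 5.minutes.ago)
  .group(:impressionable_id)
  .count
# => e.g. { 12 => 6, 15 => 2 } (property id => current viewers)

properties.each do |property|
  viewers = recent_counts.fetch(, 0)
  # "there are #{viewers} people currently looking at this property"
end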

Before you get downvoted, I'll give you an idea of how to do it.
Recording visitors is in the realm of analytics, of which Google Analytics is the most popular and widely recognized.
Analytics
Analytics systems work with 3 parts:
Capture
Processing
Display
The process of capturing and processing data is fundamentally the same everywhere: you put a JS widget on your site that sends a request to the server with the attached user data; processing then writes that data into your database.
Displaying The Data
The difference for many people is the display of the data they capture
Google Analytics displays the data in their dashboard
eBay displays the data as "x people bought in the past hour"
You want to show the number of people viewing an item
The way to do this is to hard-code the processing aspect of the data into your app
I can't explain the exact way to do this, because it's highly dependent on your stack, but this is the general way to do it
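As a rough illustration in Rails (a sketch only; the /track route, TrackingController, and PageView model are hypothetical names, not from any particular library):

# config/routes.rb
post "/track", to: "tracking#create"

# app/controllers/tracking_controller.rb
class TrackingController < ApplicationController
  skip_before_action :verify_authenticity_token # the JS beacon sends no CSRF token

  def create
    # "Processing": persist the raw hit so the display side can aggregate it
    PageView.create!(
      path:       params[:path],
      visitor_id: session.id.to_s.presence || request.remote_ip
    )
    head :no_content
  end
end

# "Display": distinct visitors seen on a given page in the last 5 minutes
PageView.where(path: some_path).where("created_at >= ?", 5.minutes.ago).distinct.count(:visitor_id)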

Related

Import data from another source into Adobe Analytics

I’m trying to tie data from another product with my data inside of Adobe Analytics.
We have the Adobe Analytics JavaScript on our website collecting data, and we use a third-party tool to track how users interact with certain parts of the website. We're trying to use the Adobe API to tie the data together.
So far we’ve gone down the path of using the Data Insertion API, but it wasn’t quite right as it’s meant to be used as a replacement for the JS, from what I can tell.
We also explored using the Data Sources API. The documentation suggests you can use a transaction ID to tie offline data to the data collected from the JS; we've tried this and it doesn't match the data up. We're now exploring using the Visitor ID to tie the sessions together, but we're having problems uploading any rows with the Visitor ID column: Adobe just returns the error "Column header: 'visitorid' is not a valid column header". We've tried several different variations of visitor id, such as "visitor_id", "visitor-id", "vistor id", etc., and still no luck.
The end goal is for us to be able to upload data to Adobe that will update/add eVars for sessions that already exist from earlier that day. How would I go about doing this? Is there something I'm missing or doing wrong?
Edit: I managed to solve this problem by using the Adobe SAINT API. When a user arrives at the site, we push an eVar for that user with a unique ID and then the day after we use the SAINT API and the unique ID in the eVar we pushed previously to add the additional data we needed.
It could be a good idea to look back at the Data Insertion API and combine it with the Visitor ID approach, where you tie existing/old visitor IDs to new eVars and use the timestamp to "update" the dataset.
Although this is experimental, it might be worth a try.

How to check whether data sent to Omniture/Adobe Analytics is correct

I am a beginner to Omniture/Adobe web analytics. I want to know a few things:
How can we track data coming into Omniture?
How do we know if the tags are firing as expected?
I installed the Omnibug extension and can see which parameters and values are being sent to Omniture, but I'm not sure how to verify in Omniture that the data actually arrived.
Also, I tried to find unique visitors, visits, and pageviews based on pageName. Is it possible to filter unique visitors based on pageName? If yes, can anyone guide me with a list of instructions?
What you need to do to truly verify that the expected data is landing in Adobe Analytics is look at the Clickstream data feeds and map the results against the data you expect to be there: https://marketing.adobe.com/resources/help/en_US/sc/clickstream/
It is not trivial, but it is the deepest way of verifying the final result of page code, data collection, processing rules, VISTA rules, and finally the pre/post-processed results.
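A rough sketch of what that mapping can look like, assuming a delivered feed with Adobe's usual column_headers.tsv and hit_data.tsv files (the exact column name depends on your feed configuration):

headers = File.read("column_headers.tsv").strip.split("\t")
page_name_idx = headers.index("pagename")

File.foreach("hit_data.tsv") do |line|
  fields = line.chomp.split("\t", -1)
  puts fields[page_name_idx] # compare against the pageName values your tags should have sent
end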

Not able to see time zone, place or geolocation of any tweets

I am following two tutorials right now and both are up and running and I've gotten plenty of tweets/sentiment scores from them:
1) Twitter Stream Analytics on Azure https://azure.microsoft.com/en-us/documentation/articles/stream-analytics-twitter-sentiment-analysis-trends/
2) Twitter Analysis with Spark Streaming http://ampcamp.berkeley.edu/3/exercises/realtime-processing-with-spark-streaming.html
I am using the free oauth tool provided from apps.twitter.com.
Problem
I've tried getPlace and getGeoLocation in the Spark Streaming app, and every tweet I get has a null value for both fields. I have also tried filtering for tweets that have values for getPlace and getGeoLocation, and I still get null for both (I ran the app for almost 20 minutes).
I've also tried getting TimeZone in the Azure app (so I can get some sort of geography data) and even then I kept getting null values for TimeZone.
Possible Obstacles
1) Does the free Twitter API filter out the place/geoLocation information, so that I'd have to buy a subscription to a better API?
2) Do I need to explicitly search for tweets that have geoLocation/Places, rather than getting all tweets and then filtering for the ones that do? If so, can I execute this search in Spark Streaming? This is the code that I have in Spark Streaming:
val stream = TwitterUtils.createStream(ssc, None, filters)
// getPlace() is null for most tweets, so drop those before mapping
val hashTags = stream.filter(_.getPlace() != null)
  .map(status => Tweet(status.getPlace().getName(), classifyTweet(status.getText())))
Thank you for the help!
I've personally used the free Twitter API to get locations and publish them on a map in Power BI, so you can rule out the first obstacle.
One thing to note is that the location field is only available if the client explicitly allows the application to access location, which makes it quite rare. The ratio of data with location in my sample was about 8%.
I don't have an answer for the Spark side; I just wanted to help you rule out the first possibility.
Hope this helps.

Displaying tweets from multiple users (similar to Embedded Timelines) without twitter-side user lists

I am new to Twitter and need some tips.
I need to display tweet feed from multiple users on some webpage.
The first thing I stumbled upon is Embedded Timelines. It allows displaying tweets from a list of users, but the gotcha is that the list has to be maintained on the Twitter side (i.e. I cannot specify #qwe and #asd only on my side and get a timeline without adding those users to a list on the Twitter side).
The thing is that the list of users to include in the timeline is dynamic, and managing those lists through the Twitter API will probably be painful. Not to mention that my website will probably generate tons of those lists, and I feel that I will violate some API quotas sooner or later.
So, my question is: am I stuck with Embedded Timelines that refer to a user list on the Twitter side, managing those lists through, say, the Twitter REST API, or is there a simpler way to do what I want?
It's pretty simple to display tweets for multiple users.
Links to start with
This post explains some of the search queries you can make
This post presents a simple library for making requests to the Twitter API that 'just works'
Your Query
Okay, so you want multiple users. The endpoint you're looking at using is the search/tweets one: https://api.twitter.com/1.1/search/tweets.json.
The query string uses the from: operator, and you can combine multiple from: clauses with OR (AND would never match, since a tweet has a single author).
An example query for the GET request:
?q=from:user1+OR+from:user2
Read more about the search API queries here.
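A minimal sketch of making that request in Ruby, assuming OAuth 1.0a credentials and the oauth gem (the uppercase constants are placeholders):

require "oauth"
require "json"
require "uri"

consumer = OAuth::Consumer.new(CONSUMER_KEY, CONSUMER_SECRET, site: "https://api.twitter.com")
token    = OAuth::AccessToken.new(consumer, ACCESS_TOKEN, ACCESS_SECRET)

# Build the ?q=from:user1+OR+from:user2 query and fetch matching tweets
query    = URI.encode_www_form(q: "from:user1 OR from:user2")
response = token.get("/1.1/search/tweets.json?#{query}")
JSON.parse(response.body)["statuses"].each do |t|
  puts "#{t["user"]["screen_name"]}: #{t["text"]}"
end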
Your "over-the-quote" issue
This is something you're going to need to figure out yourself - depending on the number of requests you expect to make, and the twitter imposed limits, maybe some sort of caching or saving information when you hit your limit, and only pull back from the cache whilst you're hitting your limit..

YouTube Analytics API returns no rows for demographic query - but does return views

When querying the YouTube Analytics API for demographics for a channel over a 1-day range (metrics: viewerPercentage; dimensions: ageGroup,gender), in some cases no rows are returned. The API is returning views for that day, however.
2 reasons for this come to mind:
1. The data is not available yet because it is still being processed.
2. There are no known demographics for that day (i.e. the gender and age of the users are not known)?
Am I safe to assume it's not (1) in this case, because a query for views did return results? If I can't assume that, then is it true that there's no difference in the response between "not processed" and "processed, but all users are of unknown demographics"?
In other words, if the API returned a row of zeros for each demographic whenever (2) were the case, that would enable us to interpret things correctly (but I'm pretty sure that's not how API queries with a dimension work).
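For reference, here is roughly the request I'm making (a sketch against the v1 reports endpoint; CHANNEL_ID and ACCESS_TOKEN are placeholders):

require "net/http"
require "json"
require "uri"

uri = URI("https://www.googleapis.com/youtube/analytics/v1/reports")
uri.query = URI.encode_www_form(
  "ids"        => "channel==#{CHANNEL_ID}",
  "start-date" => "2016-01-01",
  "end-date"   => "2016-01-01", # a 1-day range
  "metrics"    => "viewerPercentage",
  "dimensions" => "ageGroup,gender"
)
request = Net::HTTP::Get.new(uri)
request["Authorization"] = "Bearer #{ACCESS_TOKEN}"
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }

# "rows" can be absent entirely when no demographic data is available,
# which is exactly the ambiguity in question
puts JSON.parse(response.body).fetch("rows", []).inspect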
Thanks for any guidance!
I can't provide any hard-and-fast guidance about the YouTube Analytics data processing pipeline, i.e. whether demographic data will always become available at exactly the same time as the view count data in a report.
To get a more authoritative answer to this sort of specific question, I'd recommend going to the YouTube Analytics web interface (http://youtube.com/analytics) and trying to run an identical report there. The web interface normally warns you if you're requesting a report that relies on data that isn't yet available.
