Building an Average Load Time per Visit Metric in AA

I'm trying to build a per-visit average page load time metric in Adobe Analytics, for use in analyses such as average page load time versus conversion rate (example: users with an average load time of 0-1s convert at X rate, users at 1-2s convert at Y rate, users at 2-3s convert at Z rate, etc.).
We currently have page load time implemented as an event and an eVar on all pages, captured in milliseconds (e.g. on loading the home page, we'll see eVar10=1782 and event10=1782). The eVar is a text string set to expire on hit with most-recent allocation, while the event is an "up is bad", always-record numeric with participation enabled.
My first instinct was simply a calculated metric dividing the event by Page Views, but that aggregates at too high a level (the grand total of all load times divided by the grand total of page views). I tried throwing in various summation functions, but the results were all equally useless.
Is it possible to build Average-Per-Visit metrics in AA? Is my implementation even going in the right direction?

Could you do something like this? It might not be exactly what you want, but the calculated metric might help. If you divide your page load time metric by "page load time instances", you get an average. Then, if you have an eVar for session ID or username, you can find the average load time per session or per user. In my example I used Day as the dimension, but you could use whatever eVar you want, like username or session ID.
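To see the arithmetic outside Adobe Analytics, here's a rough pandas sketch of grand-total versus per-visit averaging; the hit-level export and the visit_id / load_ms / converted column names are made-up assumptions for illustration, not anything AA produces directly.

import pandas as pd

# Hypothetical hit-level data: one row per page view.
hits = pd.DataFrame({
    "visit_id":  ["v1", "v1", "v2", "v2", "v2", "v3"],
    "load_ms":   [1782, 950, 2400, 2100, 2600, 600],
    "converted": [1, 1, 0, 0, 0, 1],
})

# Grand-total average: what event10 / Page Views gives at the top level.
print(hits["load_ms"].sum() / len(hits))

# Per-visit average, then bucket visits by it and compare conversion rates.
per_visit = hits.groupby("visit_id").agg(
    avg_load_ms=("load_ms", "mean"),
    converted=("converted", "max"),
)
per_visit["bucket"] = pd.cut(per_visit["avg_load_ms"],
                             bins=[0, 1000, 2000, 3000],
                             labels=["0-1s", "1-2s", "2-3s"])
print(per_visit.groupby("bucket", observed=False)["converted"].mean())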


Came across this thread as I was looking for something similar. The way I went about this is:
Check the calculated metric along with the breakdown.
You can create a Page Load Time / Page Views metric for each ECID and then break it down further by Visits. This will give you the average page load time for a user for a visit.
You can then cross-tab this with Orders. In my case, visits which had orders typically


Prepping Data For Usage Clustering

Dataset: I'm given the number of minutes individual customers use a product each day and am trying to cluster this data in order to find common usage patterns.
My question: How can I format the data so that, for example, a power user with high levels of use for a year looks the same as a different power user who has only been able to use the device for a month before I ended data collection?
So far I've turned each customer into an array where each cell is the number of minutes used that day. The array starts when the user first uses the product and ends after the user's first year of use. All cell entries must be double values (e.g. 200.0 minutes used) for the clustering model. I've considered setting all cells/days after the last day of data collection to either -1.0 or NULL. Is either of these a valid approach? If not, what would you suggest?
For the problem where you want both users to look alike (one who used the product heavily every day for a year, the other heavily for just one month), create a new feature whose values are:
avg_usage per time_bin
time_bin can be a month, a day, or whatever bin best fits your needs.
This way, a user who uses the product for, say, 200 minutes per day for one year gets:
200 * 30 * 12 / 12 = 6000 minutes per month
and the other user, who joined just last month with the exact same usage, also gets:
200 * 30 * 1 / 1 = 6000 minutes per month.
This way it doesn't matter when a user started using the product; the only thing that matters is the usage rate.
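A minimal sketch of that normalization in Python, assuming each user's history is a plain list of daily minutes starting at their first day of use (the function name and 30-day bin are illustrative):

def avg_usage_per_bin(daily_minutes, bin_days=30):
    # Average minutes per bin, computed over the days actually observed,
    # so short histories are not penalized.
    if not daily_minutes:
        return 0.0
    n_bins = len(daily_minutes) / bin_days  # fractional bins are fine
    return sum(daily_minutes) / max(n_bins, 1.0)

year_user  = [200.0] * 360  # ~12 months of heavy use
month_user = [200.0] * 30   # 1 month of the same daily usage
print(avg_usage_per_bin(year_user))   # 6000.0
print(avg_usage_per_bin(month_user))  # 6000.0

Both users land on the same value, so the clustering sees the usage rate rather than the account age.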
One important thing to consider is that products may sit unused for a while: a computer, for example, while I'm away on vacation. The days I didn't use my computer arguably say nothing about my general usage of the product. So, based on your data, product, and intuition, you might consider removing gaps like that and not counting them in the calculation.
How long a user has owned your product can be a signal in itself, but if they only started recently and are still using it today, that's something you need to take into consideration, and this average-binning technique can help there.

How do I create multiple weighted edges between nodes

I want to keep all the browse history
to calculate the behaviour across browsed pages,
so I designed the following graph to show my idea.
As you can see, there are 4 edges between page A and page B.
So how could I create those kinds of relationships and nodes?
And how could I get the
average browsing time (20 min)
min browsing time
max browsing time
Any suggestions or ideas?
Thanks
I'm a bit confused. What does the relationship mean? Does it represent the amount of time spent on page A before the user browses to page B?
Just going from your model and your goals, maybe something like this would work?
MATCH (a:Page)-[r:browsed_to]->(b:Page)
RETURN avg(r.time_spent)
For min and max time, you could replace avg with min and max.
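If it helps to see it end to end, here's a sketch using the official neo4j Python driver; the URI, credentials, and the Page / browsed_to / time_spent names just follow the model above and are otherwise assumptions.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Record each browse event as its own relationship, so repeated
    # A -> B visits become parallel edges, each with its own time_spent.
    session.run(
        "MERGE (a:Page {name: $src}) "
        "MERGE (b:Page {name: $dst}) "
        "CREATE (a)-[:browsed_to {time_spent: $minutes}]->(b)",
        src="A", dst="B", minutes=20,
    )

    # Aggregate across all parallel edges in one query.
    record = session.run(
        "MATCH (:Page)-[r:browsed_to]->(:Page) "
        "RETURN avg(r.time_spent) AS avg_t, "
        "       min(r.time_spent) AS min_t, "
        "       max(r.time_spent) AS max_t"
    ).single()
    print(record["avg_t"], record["min_t"], record["max_t"])

driver.close()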

How can I measure the total time a user spends online with InfluxDB?

I'm measuring how long users are logged into a service. Every minute, for each user, their new total online time is sent to InfluxDB. I'd like to graph, in Grafana, the cumulative online time for all users.
What kind of query would I need to do that? I initially thought that I'd want sum(onlineTime) grouped by time(1m), but I realized that sums the values within that timeframe rather than summing the totals of all users, so when a user wasn't logged in the total would drop, because there were no data points for them.
I'm a bit confused about what I'm doing now. If I'm sending the wrong data, I can change that too.
So this depends on the time data you send back to InfluxDB.
If the value is the total time spent up to that instant (a running total):
in this case you would take the "last" value for each user and add those up across all users.
If the value is a small increment since the previous report:
in this case you would sum those incremental values over the period of time.
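For the running-total case, here's a minimal sketch with the influxdb Python client; the "metrics" database, "session" measurement, "onlineTime" field, and "user" tag are assumptions, not names from the question.

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="metrics")

# Inner query: the latest running total per user.
# Outer query: sum those per-user totals into one number.
result = client.query(
    'SELECT sum("last_total") FROM ('
    'SELECT last("onlineTime") AS "last_total" '
    'FROM "session" GROUP BY "user")'
)
print(list(result.get_points()))

To graph this over time in Grafana, you would add the dashboard's time filter and a GROUP BY time(...) clause to both the inner and outer query, with fill(previous) on the inner one so users who stop reporting keep their last total instead of dropping out.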

How to automatically measure/monitor the average of sums of consecutive samplers in JMeter?

I have the following JMeter test plan.
+ Test Plan
  + Login Thread Group
    HttpRequest1
    HttpRequest2
    HttpRequest3
Is there a way to automatically view/monitor the average of the sums of HttpRequest1, 2, and 3?
I couldn't find a way to do it in the "Summary Report" or "Aggregate Report".
Is it possible? Or do I have to do it manually?
Do you explicitly mean 'the average of sums', as in the average of the total sum for each request over the duration of the test run? If so, I'm not aware of any JMeter listener that will show you the sum of elapsed time for a sampler; it's not something typically required. Instead, you could probably get what you need fairly easily by reading the .jtl file at the command line.
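For example, with a CSV-format .jtl, something like this pandas sketch gets close (the file name and sampler labels are assumptions):

import pandas as pd

# A standard CSV .jtl includes 'label' and 'elapsed' (ms) columns.
df = pd.read_csv("results.jtl")

labels = ["HttpRequest1", "HttpRequest2", "HttpRequest3"]
per_sampler_avg = df[df["label"].isin(labels)].groupby("label")["elapsed"].mean()

# If every iteration runs all three samplers once, the average of the
# per-iteration sums equals the sum of the per-sampler averages.
print(per_sampler_avg)
print("average of sums:", per_sampler_avg.sum(), "ms")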
But perhaps you meant something else, in which case you might find that a Transaction Controller serves your requirements. It records and shows the total elapsed time for multiple requests. So in your example, you might wrap HttpRequest1, 2 & 3 in a Transaction Controller, which would give you the sum of all three requests. The Aggregate and Summary listeners would then show the average for this transaction as a separate line.

How to calculate user's info completeness in Rails

I have a problem in my new Rails project. I want to implement a feature that shows the user's profile completeness with a bar, like LinkedIn does.
I think I can use a variable to record the completeness, but I have no idea how to calculate it.
P.S. I have two models: a User model and an Info model.
This is, in fact, completely arbitrary. It's based entirely on which activities on the site you want to encourage.
A couple of mechanisms you can consider:
Model "accomplishments" with a completed/not completed status. Count up the ones you care about. Store the accomplishments based on activity either as they happen or at the end of the day in some batch job. For each user, calculate the percentage with the usual math (accomplishments completed/sum of available accomplishments) * 100 = percentage.
A variation of the same, but weighted based on what you consider more valuable contributions. In this case the math is basically sum of (weight_n * accomplishment_n) / total weight; see the sketch after this list.
The previous Careers.stackoverflow.com model made a geeky joke about Spinal Tap by making it possible to have counts greater than 100%. You can do that simply by undercounting the maximum accomplishments.
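As a rough sketch of the weighted version (the accomplishment names and weights are made up; in a Rails app this would live in the User model, but the arithmetic is the same in any language):

WEIGHTS = {"avatar": 1, "bio": 2, "work_history": 3, "skills": 2}

def completeness(done):
    # done: dict mapping accomplishment name -> True/False for one user.
    earned = sum(w for name, w in WEIGHTS.items() if done.get(name))
    return round(100.0 * earned / sum(WEIGHTS.values()))

print(completeness({"avatar": True, "work_history": True}))  # -> 50

Setting every weight to 1 reduces this to the plain completed/available percentage from the first mechanism.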
