I want to keep all of the browsing history so that I can analyse how users behave as they move between pages.
I designed the following graph to show my idea.
As you can see, there are 4 edges between page A and page B.
So how could I create that kind of relationship and those nodes?
And how could I get the:
average browsing time (20min)
min browsing time
max browsing time
Any suggestions or ideas?
Thanks
I'm a bit confused. What does the relationship mean? Does it represent the amount of time spent on page A before the user browses to page B?
Just going from your model and your goals, maybe something like this would work?
MATCH (a:Page)-[r:browsed_to]->(b:Page)
RETURN avg(r.time_spent)
For the min and max browsing times, you can replace avg with min or max (e.g. RETURN min(r.time_spent), max(r.time_spent)).
I'm trying to build a per-visit average page load time metric in Adobe Analytics for use in analysis such as average page load time versus conversion rate (Example: Users with avg load time between 0-1s convert at X rate, users between 1-2s convert at Y rate, users between 2-3s convert at Z rate, etc).
We currently have page load time implemented as an event and an eVar on all pages, being captured in ms (ex: On loading the home page, we'll see eVar10=1782 and event10=1782). The eVar is set as a text string set to expire on hit with most recent allocation, while the event is set as an "up is bad" always record numeric with participation enabled.
My first instinct was simply a calculated metric with the Event divided by Page Views, but that ends up aggregating at too high a level (the grand total of all load speeds is divided by the grand total of page views). I tried throwing in various summation functions, but it all ended up equally garbage.
Is it possible to build Average-Per-Visit metrics in AA? Is my implementation even going in the right direction?
Could you do something like this? It might not be exactly what you want, but the calculated metric might help. If you divide your page load time metric by "page load time instances", you get an average. Then, if you have an eVar for session ID or username, you can find the average load time per session or per user. In the example below I use Day as the dimension, but you could use whatever eVar you want, such as username or session ID.

Came across this thread as I was looking for something similar. The way I went about this is:
Check the calculated metric here along with the breakdown.
You can create a Page Load Time / Page Views calculated metric for each ECID and then break it down further by Visits. This will give you the average page load time for a user for a visit.
You can then cross-tab this with Orders. In my case, visits which had orders typically
Dataset: I'm given the number of minutes individual customers use a product each day and am trying to cluster this data in order to find common usage patterns.
My question: How can I format the data so that, for example, a power user with high levels of use for a year looks the same as a different power user who has only been able to use the device for a month before I ended data collection?
So far I've turned each customer into an array where each cell is the number of minutes used that day. This array starts when the user first uses the product and ends after the user's first year of use. All entries in the cells must be double values (e.g. 200.0 minutes used) for the clustering model. I've considered setting all cells/days after the last day of data collection to either -1.0 or NULL. Are either of these a valid approach? If not, what would you suggest?
For the problem where you want both users to look the same (one who used the product heavily every day for a year, and another who used it just as heavily for only a month), create a new feature whose values are:
avg_usage per time_bin
time_bin can be a month, a day, or whatever bin best fits your needs.
This way, a user who uses the product for, say, 200 minutes per day for one year will get:
200 * 30 * 12 / 12 = 6000 minutes per month
and the other user, who joined just last month with exactly the same usage, will also get:
200 * 30 * 1 / 1 = 6000 minutes per month.
This way, it doesn't matter when a user started using the product; the only thing that matters is the usage rate.
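Here is a minimal Python sketch of that binning, assuming each customer's history is a list of (date, minutes_used) pairs; all names are illustrative:
from collections import defaultdict

def avg_usage_per_month(history):
    """history: list of (date, minutes_used) pairs for one customer."""
    minutes_per_month = defaultdict(float)
    for day, minutes in history:
        minutes_per_month[(day.year, day.month)] += minutes
    # Average over the months the customer was actually observed, so a
    # one-month user and a one-year user with the same daily usage end
    # up with roughly the same value.
    return sum(minutes_per_month.values()) / len(minutes_per_month)
A customer using about 200 minutes a day comes out near 6000 minutes per month whether they were observed for one month or for twelve.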
An important thing you might take into consideration is that a product may be set aside for a while. For example, if I'm away on vacation, the days I didn't use my computer (probably) say nothing about my general usage of the product. So, based on your data, your product, and your intuition, you might consider removing gaps like that and not counting them in the calculation.
The amount of time a user has been using your product could be a signal of something, but if a user only started recently and is still using it today, that is something you may need to take into consideration, and for that this average-binning technique may help.
I am graphing with Grafana (2.6.0) and I have an InfluxDB (0.10.2) database with the following data in it:
> select * from "WattmeterMainskwh" where time > now() - 5m
name: WattmeterMainskwh
-----------------------
time value
1457579891000000000 15529.322
1457579956000000000 15529.411
1457580011000000000 15529.425
1457580072000000000 15529.460
1457580135000000000 15529.476
...etc...
This data tracks my household kilowatt-hour usage as measured by a kWh meter that steadily increments the usage value across months or years. I cannot easily reset the counter, nor do I wish to do so.
My goal is to create a graph that shows my daily kWh use over 24-hour periods starting at midnight, or at a minimum shows relative kWh over the interval displayed. This type of graph would be useful in many other circumstances as well, where I could imagine "errors across the day" or "visitors since opening time" or "BGP resets per calendar week" being useful but the collection counter not being reset to zero at the turn-over of the time interval. This kind of counting is actually quite common in my experience.
This graph works, but doesn't show me what I'm looking for:
SELECT derivative(mean("value")) FROM "WattmeterMainskwh" WHERE $timeFilter GROUP BY time($interval) fill(null)
That graph just shows the difference between one sample and the previous sample. What I want is a steadily increasing line starting from the left side of the graph and increasing towards the right side of the graph, with zero as the bottom of the Y axis, and the graph starting at zero at the farthest left X value.
This graph works too and shows me the correct curve, but it's off by fifteen thousand or so. So far, it's the closest to what I want but since this is an ever-increasing counter that can't be reset I need to subtract some from the Y axis. Ideally, I'd like to subtract whatever the value was at the previous midnight from each sample to get a relative number based on a day instead of an absolute based on all time.
SELECT sum("value") FROM "WattmeterMainskwh" WHERE $timeFilter GROUP BY time($interval) fill(null)
And here's the graph from that previous statement:
Graph that is off by 15k
This attempt didn't work - I apparently can't take a sum of a derivative group:
SELECT sum(derivative(mean("value"))) FROM "WattmeterMainskwh" WHERE $timeFilter GROUP BY time($interval) fill(null)
This doesn't work, either - I can't perform functions within "derivative":
SELECT derivative(sum("value")-first("value")) FROM "WattmeterMainskwh" WHERE $timeFilter GROUP BY time($interval) fill(null)
Of course, I could just create a new value that had calculations applied to it before I wrote it into InfluxDB, but that seems to me to be a data-redundant and sloppy way to solve this problem, as well as being quite inflexible if I want to look at other intervals on a whim. I'm hoping that there is some way to do this more elegantly within the combination of InfluxDB & Grafana, but I'm just not able to find it with the search terms I've used or the thinking I've put towards interpreting the documentation.
Is this type of graph even possible with InfluxDB/Grafana? As far as I can tell a continuous query is not a solution, and the lack of nested SELECTs makes even the hackish ways of doing this not obvious to me.
BONUS: It would be really great to have the graph show midnight every night as a "zero" location, instead of "zero" being the first point in the displayed interval, so looking at five days of normal data would show five distinct "waves" of increasing daily aggregate energy usage, with the wave Y value going back down to zero at 12:00:01 on each day. But I'll take whatever I can get.
Nested functions have only partial support. However, you can effectively nest functions by chaining Continuous Queries.
Use a CQ to calculate the derivative(mean(value)) and store that in a new measurement foo. Then for your graph you can query select sum(value) from foo.
(I know this answer is quite late, but it might help others. Oh, and please excuse all the Dutch in my graphs; I had to keep it in Dutch for the highest possible WAF.)
You could do what I do for my kWh calculations:
Which results in a simple query like this:
SELECT distinct("kwh_combined") FROM "smartmeter" WHERE $timeFilter GROUP BY time($__interval) fill(linear)
That gets you your total count. Or, if you want it in a nice graph like this one, the bars show the number of kWh used per hour and the yellow line (I normally run in dark mode, excuse the yellow) is my current power draw in watts:
This data (or at least your hourly usage in bars) can be retrieved by a query like this:
Which is this exact query (for B):
SELECT spread("kwh_combined") FROM "smartmeter" WHERE $timeFilter GROUP BY time(1h) fill(null)
... where the 'kwh_combined' is (still) my counter just counting up and up.
All this results in me being able to query InfluxDB for a certain time period, like "last 24 hours", and come up with a nice panel like this (ignore the encircled prices; that was for a question I posted just 10 minutes ago, see my PS):
I hope this helps you or anyone else; it took me some figuring out, but I'm happy to give something back to the community :)
PS: Don't be as stupid as I was and hardcode your electricity and gas prices into your dashboard; store them with your measurements, as they can change over time.
I had the same problem (same application even) and solved it here. In your case, the query should be roughly:
SELECT value-value_fill FROM
(SELECT first(value) as value_fill FROM WattmeterMainskwh WHERE time>now()-7d GROUP BY time(1d)),
(SELECT first(value) as value FROM WattmeterMainskwh WHERE time>now()-7d GROUP BY time(1h))
fill(previous)
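If it helps to see the same idea outside InfluxQL, here is a rough Python/pandas sketch of subtracting each day's first reading so the counter restarts at zero every midnight; the function and argument names are made up:
import pandas as pd

def daily_relative(series: pd.Series) -> pd.Series:
    """series: counter readings indexed by a DatetimeIndex."""
    # Subtract each day's first reading so the ever-increasing counter
    # becomes "kWh used so far today", dropping back to zero at midnight.
    first_of_day = series.groupby(series.index.date).transform("first")
    return series - first_of_day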
My goal here is to generate a system similar to that of the front page of reddit.
I have things, and for the sake of simplicity these things have votes. The best system I've come up with uses time decay. With a half-life of 7 days, if a vote is worth 20 points today, then in seven days it is worth 10 points, and in 14 days it will only be worth 5 points.
The problem is, that while this produces results I am very happy with, it doesn't scale. Every vote requires me to effectively recompute the value of every other vote.
So, I thought I might be able to reverse the idea. A vote today is worth 1 point. A vote seven days from now is worth 2 points, and 14 days from now is worth 4 points and so on. This works well because for each vote, I only have to update one row. The problem is that by the end of the year, I need a datatype that can hold fantastically huge numbers.
So, I tried using a linear growth which produced terrible rankings. I tried polynomial growth (squaring and cubing the number of days since site launch and submission) and it produced slightly better results. However, as I get slightly better results, I'm quickly re-approaching unmaintainable numbers.
So, I come to you, Stack Overflow. Who's got a genius idea, or a link to an idea, on how to model this system so it scales well for a web application?
I've been trying to do this as well. I found what looks like a solution, but unfortunately, I forgot how to do math, so I'm having trouble understanding it.
The idea is to store the log of your score and sort by that, so the numbers won't overflow.
This doc describes the math.
https://docs.google.com/View?id=dg7jwgdn_8cd9bprdr
And the comment where I found it is here:
http://blog.notdot.net/2009/12/Most-popular-metrics-in-App-Engine#comment-25910828
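A rough Python sketch of the idea, using the growing-weight scheme from the question (a vote cast t days after launch is worth 2^(t / half_life) points) and keeping only the base-2 log of the running total; the names are illustrative:
import math

HALF_LIFE_DAYS = 7.0

def add_vote(log2_score, days_since_launch):
    """Fold one vote worth 2**(days_since_launch / HALF_LIFE_DAYS) points
    into a score stored as a base-2 logarithm, so it never overflows."""
    vote_log2 = days_since_launch / HALF_LIFE_DAYS
    if log2_score is None:                      # first vote for this item
        return vote_log2
    # log2(2**a + 2**b) = a + log2(1 + 2**(b - a)) for a >= b,
    # computed without ever materialising the huge sums
    hi, lo = max(log2_score, vote_log2), min(log2_score, vote_log2)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))
Sorting items by their stored log2_score gives the same order as sorting by the astronomically large raw scores would.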
Okay, thought of one solution to do that on every vote. The catch is that it requires a linked list with atomic pop/push on both sides to store votes (e.g. Redis list, but you probably don't want it in RAM).
It also requires that the decay interval be constant (e.g. 1 hour).
It goes like this:
On every vote, update the score and push the next decay time of this vote to the tail of the list
Then pop the first vote from the head of the list
If it's not old enough to decay, push it back to the head
Otherwise, subtract the required amount from the total score and push the updated information to the tail
Repeat from step 2 until you hit a fresh enough vote (step 3)
You'll still have to check the heads in background to clear the posts that no one votes on anymore, of course.
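A rough Python sketch of those steps, using an in-memory deque where the answer suggests a Redis list; the constants are illustrative:
from collections import deque
import time

DECAY_INTERVAL = 3600.0                              # constant, e.g. one hour
HALF_LIFE = 7 * 24 * 3600.0                          # 7 days, from the question
DECAY_FACTOR = 0.5 ** (DECAY_INTERVAL / HALF_LIFE)   # per-interval multiplier
VOTE_VALUE = 20.0                                    # what a fresh vote is worth

class DecayingScore:
    def __init__(self):
        self.score = 0.0
        self.votes = deque()                 # (next_decay_time, current_value)

    def vote(self, now=None):
        now = time.time() if now is None else now
        # step 1: count the vote and schedule its first decay at the tail
        self.score += VOTE_VALUE
        self.votes.append((now + DECAY_INTERVAL, VOTE_VALUE))
        self.decay(now)

    def decay(self, now):
        # steps 2-5: pop from the head until we hit a vote that isn't due yet
        while self.votes:
            due_at, value = self.votes.popleft()
            if due_at > now:
                self.votes.appendleft((due_at, value))   # step 3: put it back
                break
            new_value = value * DECAY_FACTOR             # step 4: decay it
            self.score -= value - new_value
            self.votes.append((due_at + DECAY_INTERVAL, new_value))
As noted above, a background sweep is still needed for items that stop receiving votes.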
It's late here so I'm hoping someone can check my math. I think this is equivalent to exponential decay.
MySQL's unsigned BIGINT maxes out at 2^64 - 1
For simplicity, let's use 1 day as our time interval. Let n be the number of days since the site launched.
Create an integer variable. Let's call it X and start it at 0.
If an add operation would bring a score over 2^64, first update every score by dividing it by 2^(n - X), then set X equal to n.
On every vote, add 2^(n-X) to the score.
So, mentally, this makes better sense to me using base 10. As we add things up, our number gets longer and longer. We stop caring about the numbers in the lower digit places because the values we're incrementing scores by have a lot of digits. Which means that the lower digits kind of stop counting for very much. So if they don't count, why not just slide the decimal place over to a point that we care about and truncate the digits below the decimal place at some point. To do this, we need to slide the decimal place over on the amount we're adding each time as well.
I can't help but feel like there's something wrong with this.
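A rough Python sketch of that bookkeeping; all names are illustrative:
MAX_SCORE = 2 ** 64            # the BIGINT-style ceiling from above

scores = {}                    # item_id -> integer score
X = 0                          # current exponent offset, starts at 0

def add_vote(item_id, n):
    """n is the number of days since the site launched."""
    global X
    weight = 2 ** (n - X)
    if scores.get(item_id, 0) + weight >= MAX_SCORE:
        # Rescale: shift every score down by the same factor and rebase X,
        # so today's vote is worth 1 again. The truncated low bits are the
        # digits that had "stopped counting for very much".
        for key in scores:
            scores[key] //= weight
        X = n
        weight = 1
    scores[item_id] = scores.get(item_id, 0) + weight
Because every score is divided by the same amount, the relative ordering is preserved (up to the truncated low bits).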
Here are two possible pseudo-queries that you could use. I know that they don't really address scalability, but I think they do provide a way to calculate the rankings on the fly.
SELECT article.title AS title, SUM(vp.point) AS points
FROM article
LEFT JOIN (SELECT 1 / DATEDIFF(NOW(), vote.created_at) AS point, article_id
           FROM vote) AS vp
ON vp.article_id = article.id
GROUP BY article.id, article.title
or (not in a join, which will be a bit faster I think, but harder to hydrate),
SELECT SUM(1 / DATEDIFF(NOW(), created_at)) AS points, article_id
FROM vote
WHERE article_id IN (...) GROUP BY article_id
The benefit of these queries is that they can be run at any time with the same data and they will always return the same answers. They don't destroy any data.
If you need to, you can also run the queries in a background job and they will still give the same result.
I have a problem in my new Rails project. I want to implement a feature that shows the user's profile completeness with a bar, like LinkedIn does.
I think I can use a variable to record the completeness, but I don't have any idea how to calculate it.
P.S. I have two models: the User model and the Info model.
This is, in fact, completely arbitrary. It's based entirely on which activities on the site you want to encourage.
A couple of mechanisms you can consider:
Model "accomplishments" with a completed/not completed status. Count up the ones you care about. Store the accomplishments based on activity either as they happen or at the end of the day in some batch job. For each user, calculate the percentage with the usual math (accomplishments completed/sum of available accomplishments) * 100 = percentage.
A variation of the same, but weighted based on which contributions you consider more valuable. In this case the math is basically sum of (weight n * accomplishment n) / total weight (see the sketch below).
The previous Careers.stackoverflow.com model made a geeky joke about Spinal Tap by making it possible to have counts greater than 100%. You can do that simply by undercounting the maximum accomplishments.
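Here is a minimal Python sketch of the weighted variant; the accomplishment names and weights are made up:
# Weights reflect which activities you want to encourage most.
WEIGHTS = {
    "added_photo":     20,
    "filled_bio":      30,
    "added_skills":    30,
    "confirmed_email": 20,
}

def completeness(completed):
    """completed: set of accomplishment keys the user has finished."""
    earned = sum(weight for name, weight in WEIGHTS.items() if name in completed)
    return 100.0 * earned / sum(WEIGHTS.values())

# completeness({"added_photo", "filled_bio"}) => 50.0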