Sliding window aggregate in BigQuery: 15-minute aggregation (time series)

I have a table like this
Row time viewCount
1 00:00:00 31
2 00:00:01 44
3 00:00:02 78
4 00:00:03 71
5 00:00:04 72
6 00:00:05 73
7 00:00:06 64
8 00:00:07 70
I would like to aggregate this into
Row time viewCount
1 00:00:00 31
2 00:15:00 445
3 00:30:00 700
4 00:45:00 500
5 01:00:00 121
6 01:15:00 475
.
.
.
Please help. Thanks in advance

Supposing that you actually have a TIMESTAMP column, you can use an approach like this:
#standardSQL
SELECT
  TIMESTAMP_SECONDS(
    UNIX_SECONDS(timestamp) -
    MOD(UNIX_SECONDS(timestamp), 15 * 60)
  ) AS time,
  SUM(viewCount) AS viewCount
FROM `project.dataset.table`
GROUP BY time;
It relies on conversion to and from Unix seconds to compute the 15-minute intervals. Note, however, that unlike Mikhail's solution it will not produce a zero-count row for an empty 15-minute interval (it's not clear whether that matters to you).
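As a quick sanity check of the bucketing arithmetic, here is a minimal self-contained query (the timestamp literal is just an illustrative value, not from the question's data):
#standardSQL
-- 00:17:42 is 1062 seconds past midnight; 1062 - MOD(1062, 900) = 900,
-- which converts back to 00:15:00.
SELECT TIMESTAMP_SECONDS(
  UNIX_SECONDS(TIMESTAMP '2021-01-01 00:17:42 UTC') -
  MOD(UNIX_SECONDS(TIMESTAMP '2021-01-01 00:17:42 UTC'), 15 * 60)
) AS bucket;  -- 2021-01-01 00:15:00 UTC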

Below is for BigQuery Standard SQL
Note: you provided a simplified example of your data, and the query below follows it - instead of aggregating every 15 minutes, it aggregates every 2 seconds, so that you can easily test and play with it. It can be adjusted to 15 minutes by changing SECOND to MINUTE in three places and 2 to 15 in three places. Also, this example uses the TIME data type for the time field, as in your example, so it is limited to a 24-hour period - most likely your real data has DATETIME or TIMESTAMP, in which case you will also need to replace all TIME_* functions with the respective DATETIME_* or TIMESTAMP_* functions.
So, finally - the query is:
#standardSQL
WITH `project.dataset.table` AS (
  SELECT TIME '00:00:00' time, 31 viewCount UNION ALL
  SELECT TIME '00:00:01', 44 UNION ALL
  SELECT TIME '00:00:02', 78 UNION ALL
  SELECT TIME '00:00:03', 71 UNION ALL
  SELECT TIME '00:00:04', 72 UNION ALL
  SELECT TIME '00:00:05', 73 UNION ALL
  SELECT TIME '00:00:06', 64 UNION ALL
  SELECT TIME '00:00:07', 70
),
period AS (
  SELECT MIN(time) min_time, MAX(time) max_time,
    TIME_DIFF(MAX(time), MIN(time), SECOND) diff
  FROM `project.dataset.table`
),
checkpoints AS (
  SELECT TIME_ADD(min_time, INTERVAL step SECOND) start_time,
    TIME_ADD(min_time, INTERVAL step + 2 SECOND) end_time
  FROM period, UNNEST(GENERATE_ARRAY(0, diff + 2, 2)) step
)
SELECT start_time time, SUM(viewCount) viewCount
FROM checkpoints c
JOIN `project.dataset.table` t
ON t.time >= c.start_time AND t.time < c.end_time
GROUP BY start_time
ORDER BY start_time
and the result is:
Row time viewCount
1 00:00:00 75
2 00:00:02 149
3 00:00:04 145
4 00:00:06 134
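For reference, here is a sketch of the same query adjusted to 15-minute buckets per the note above, assuming a hypothetical table whose time field is a TIMESTAMP (so the TIME_* functions become their TIMESTAMP_* counterparts):
#standardSQL
WITH period AS (
  SELECT MIN(time) min_time, MAX(time) max_time,
    TIMESTAMP_DIFF(MAX(time), MIN(time), MINUTE) diff
  FROM `project.dataset.table`
),
checkpoints AS (
  SELECT TIMESTAMP_ADD(min_time, INTERVAL step MINUTE) start_time,
    TIMESTAMP_ADD(min_time, INTERVAL step + 15 MINUTE) end_time
  FROM period, UNNEST(GENERATE_ARRAY(0, diff + 15, 15)) step
)
SELECT start_time time, SUM(viewCount) viewCount
FROM checkpoints c
JOIN `project.dataset.table` t
ON t.time >= c.start_time AND t.time < c.end_time
GROUP BY start_time
ORDER BY start_time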

Related

Development of a feature per row or from today's date

I have a problem. When an order comes in, I want to predict in how many days the customer will place another order.
I have already created my target variable, next_purchase_in_days, which specifies in how many days the customer will place an order again. This is what I would like to predict.
Since I have too few features, I want to do feature engineering. I would like to capture how many orders the customer has placed in the last 90 days. So far, I have counted back from today's date how many orders the customer placed in the last 90 days.
Would it be better to state per row how many orders the customer had placed at that point? Please see the example below.
So does it make more sense to calculate this from today's date and include it as a feature, or should it be recalculated for each row?
customerId fromDate next_purchase_in_days
0 1 2021-02-22 24
1 1 2021-03-18 4
2 1 2021-03-22 109
3 1 2021-02-10 12
4 1 2021-09-07 133
8 3 2022-05-17 61
10 3 2021-02-22 133
11 3 2021-02-22 133
Example
# What I have
customerId fromDate next_purchase_in_days purchase_in_last_90_days
0 1 2021-02-22 24 0
1 1 2021-03-18 4 0
2 1 2021-03-22 109 0
3 1 2021-02-10 12 0
4 1 2021-09-07 133 0
8 3 2022-05-17 61 1
10 3 2021-02-22 133 1
11 3 2021-02-22 133 1
# Or does this make more sense?
customerId fromDate next_purchase_in_days purchase_in_last_90_days
0 1 2021-02-22 24 1
1 1 2021-03-18 4 2
2 1 2021-03-22 109 3
3 1 2021-02-10 12 0
4 1 2021-09-07 133 0
8 3 2022-05-17 61 1
10 3 2021-02-22 133 0
11 3 2021-02-22 133 0
You can address this in a number of ways, but something interesting to consider is the interaction between Date & Customer ID.
Dates have meaning to humans beyond just timekeeping. They carry emotional and cultural importance: holidays, weekends, seasons, anniversaries, etc. So there is a conditional relationship between the probability of a purchase and events: P(x|E)
Customer IDs theoretically represent a single person, or at the very least a single business with a limited number of people responsible for purchasing.
Certain people/corporations are just more likely to spend.
So here are a number of ways to address this:
Find a list of holidays relevant to the users. For instance, if they are US based, find a list of US-recognized holidays. Then create a feature based on each date: Date_Till_Next_Holiday (or DTNH for short).
Dates also have cyclical aspects that can encode probability: day of the year (1-365), day of the week (1-7), week number (1-52), month (1-12), quarter (1-4). I would create additional columns encoding each of these.
To address the customer interaction, keep a running total of past purchases. You could call it Purchases_to_date; it would be an integer (0...n) where n is the number of previous purchases. I made a notebook to show you how to do running totals.
Humans tend to share purchasing patterns with other humans. You could run a k-means clustering algorithm that splits customers into 3-4 groups based on all the previous info, and then use their cluster number as a feature. Sklearn-Kmeans
So based on all that you could engineer 8 different columns. I would then run Principal Component Analysis (PCA) to reduce them to 3-4 features.
You can use Sklearn-PCA to do PCA.
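To make the running-total and cyclical-encoding ideas concrete, here is a minimal pandas/scikit-learn sketch. The DataFrame contents and names like purchases_to_date are hypothetical, shaped after the question's table:
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical data shaped like the question's table.
df = pd.DataFrame({
    "customerId": [1, 1, 1, 1, 3, 3],
    "fromDate": pd.to_datetime(
        ["2021-02-10", "2021-02-22", "2021-03-18",
         "2021-03-22", "2021-02-22", "2022-05-17"]),
}).sort_values(["customerId", "fromDate"])

# Running total of *previous* purchases per customer (0...n).
df["purchases_to_date"] = df.groupby("customerId").cumcount()

# Cyclical / calendar encodings of the date.
df["day_of_year"] = df["fromDate"].dt.dayofyear                       # 1-365
df["day_of_week"] = df["fromDate"].dt.dayofweek + 1                   # 1-7
df["week_number"] = df["fromDate"].dt.isocalendar().week.astype(int)  # 1-52
df["month"] = df["fromDate"].dt.month                                 # 1-12
df["quarter"] = df["fromDate"].dt.quarter                             # 1-4

features = ["purchases_to_date", "day_of_year", "day_of_week",
            "week_number", "month", "quarter"]

# Cluster customers into a few groups and use the label as a feature
# (in practice you would scale the features first)...
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df[features])

# ...then compress the engineered columns with PCA.
components = PCA(n_components=3).fit_transform(df[features + ["cluster"]])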

List unique dates and add line at the beginning of a new month

I have a long (multiple thousand lines and growing) list of data in Sheets with a date and additional columns of data. Here's a simplified example of this list (=TAB1):
Date Number Product-ID
02.09.2021 123 1
02.09.2021 2 1
01.09.2021 15 1
01.09.2021 675 2
01.09.2021 45 2
01.09.2021 52 1
31.08.2021 2 1
31.08.2021 78 1
31.08.2021 44 1
31.08.2021 964 2
30.08.2021 1 2
29.08.2021 ...
...
Three remarks:
The date is formatted to the European standard DD.MM.YYYY
There definitely is more than one line per day per product (could be a big number depending on the day)
(for the formulas below) In the European locale, Sheets uses ; instead of , as in =IF(A;B;C)
In a different tab (=TAB2), I want to add up all the numbers for a unique date for Product-ID 1. So far I've done it like this:
Date Sum (if Product-ID=1)
=UNIQUE('TAB1'!A2:A) =ARRAYFORMULA(SUMIF('TAB1'!A:A&'TAB1'!C:C;A2:A&"1";'TAB1'!B:B))
02.09.2021 125
01.09.2021 67
31.08.2021 124
30.08.2021 1
29.08.2021 ...
...
This works fine so far. Here's what I want to do now:
For every month (here: August and September 2021) I need an additional line above the current date (in this case: above 02.09.2021) AND above each completed month, summing column B over the whole month. Here's how it should look:
Date Sum (if Product-ID=1)
September 2021 192
02.09.2021 125
01.09.2021 67
August 2021 125
31.08.2021 124
30.08.2021 1
29.08.2021 ...
Of course, the line for the next day (03.09.2021) should be added above 02.09.2021 and below the month sum when it's automatically added to TAB1 the next day.
I tried to play around with something like =IF(DAY(UNIQUE('TAB1'!A2:A))=1;...;...) but didn't get far.
Does anyone have an idea how to realize something like this?
You want to learn about QUERY(). Put this in cell A1 of an empty tab:
=QUERY('TAB1'!A2:C,"select A,SUM(B) where C = 1 group by A")
It makes a very big difference whether your product IDs are text or numbers. The above was written as if they are numbers, but you might have just been simplifying. If they are text, you would write it like this:
=QUERY('TAB1'!A2:C,"select A,SUM(B) where C = '1XYZ' group by A")
Note the single quotes.
If the IDs are a mix of text and numbers, then you need to force them all to text values in the original data by highlighting the ID column and choosing Format > Number > Plain Text from the menu bar.
UPDATE:
I understand the requirements better now: a cumulative month total is to be intermixed into the output. This may work:
=ARRAYFORMULA(QUERY({QUERY({EOMONTH('TAB1'!A2:A,0),'TAB1'!B2:C},"select 'Total',Col1,SUM(Col2) where Col3 = 1 group by 'Total',Col1 label 'Total''',SUM(Col2)''",0);QUERY('TAB1'!A2:C,"select '',A,SUM(B) where C = 1 group by '',A label '''',SUM(B)''",0)},"order by Col2,Col1",0))
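Roughly how it works: the first inner QUERY maps each date to its month end with EOMONTH and sums column B per month (adding a 'Total' marker column), the second inner QUERY produces the per-date sums, and the outer QUERY stacks both result sets and sorts them together by date so each month total lands next to that month's dates. You may still need to tweak the sort direction and the formatting of the total rows (e.g. showing "September 2021") to match the exact layout above.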

Query result in a set of interval ranges in PostgreSQL (Rails)

I have a timestamp column for which I have to calculate the time difference and divide it into a set of intervals.
For the time difference in hours I have written this query:
result = ActiveRecord::Base.connection.exec_query("SELECT id,(EXTRACT(EPOCH FROM CURRENT_TIMESTAMP - image_retouch_items.created_at)/3600)::INTEGER AS latency FROM image_retouch_items WHERE status= 0;");
The result of my query is
"id" "latency"
104 5928
106 5917
158 5751
162 5736
95 5940
85 5950
How do I get the result as a set of intervals (hours), such that each row whose time difference lies within a given range (e.g. 0-24 hr) increments that range's count?
i.e.
interval count
0-24 2
24-48 3
48-72 0
How do I get that in a single query?
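A minimal sketch of one way to do this in PostgreSQL, assuming the same image_retouch_items table: generate the 24-hour buckets with generate_series and LEFT JOIN the latencies onto them, so that empty ranges such as 48-72 still show a count of 0.
-- Sketch: bucket latencies into 24-hour ranges, keeping empty buckets.
WITH latencies AS (
  SELECT id,
    (EXTRACT(EPOCH FROM CURRENT_TIMESTAMP - created_at) / 3600)::INTEGER AS latency
  FROM image_retouch_items
  WHERE status = 0
),
buckets AS (
  SELECT n * 24 AS lo, (n + 1) * 24 AS hi
  FROM generate_series(0, (SELECT MAX(latency) FROM latencies) / 24) AS n
)
SELECT b.lo || '-' || b.hi AS "interval",
  COUNT(l.id) AS count
FROM buckets b
LEFT JOIN latencies l
  ON l.latency >= b.lo AND l.latency < b.hi
GROUP BY b.lo, b.hi
ORDER BY b.lo;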

InfluxDB - Subtracting value from previous row, group by time

Is it possible to get individual values from cumulative data?
The output of the following query is:
SELECT mean("value") FROM "statsd_value" WHERE "type_instance" = 'counts' AND time > now() - 5m GROUP BY time(10s) fill(none)
TimeStamp Value
1463393810 0
1463393820 10
1463393830 23
1463393840 34
1463393850 67
1463393860 90
1463393870 104
Basically, the above data is cumulative; I want to derive the individual per-interval values from it, similar to this:
TimeStamp Value
1463393820 10
1463393830 13
1463393840 11
1463393850 33
1463393860 23
1463393870 14
Is it possible to form a query to get the data in this way?
InfluxQL provides a difference function that will give you the functionality that you're looking for.
The query would look like this:
SELECT difference(mean("value")) FROM "statsd_value" WHERE "type_instance" = 'counts' AND time > now() - 5m GROUP BY time(10s) fill(none)
TimeStamp Value
1463393820 10
1463393830 13
1463393840 11
1463393850 33
1463393860 23
1463393870 14
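If the counter can reset (e.g. a restart dropping the cumulative value back to zero, which would show up as a large negative delta), InfluxQL also offers non_negative_difference(), which discards negative results. A sketch of the same query using it:
SELECT non_negative_difference(mean("value")) FROM "statsd_value" WHERE "type_instance" = 'counts' AND time > now() - 5m GROUP BY time(10s) fill(none)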

How to do a mean in an aggregate function with a condition in SPSS syntax

I need to compute a mean in an AGGREGATE function, by ID and year, with a condition. It should be simple - BUT I couldn't make it work.
An example:
ID year result
1 2011 50
1 2012 68
1 2012 45
2 2011 12
2 2011 80
2 2012 20
but I don't understand where to put the condition:
AGGREGATE
/OUTFILE='test'
/BREAK=CUSTOMER_ID CUSTOMERIDCD year
/test_mean_under60=MEAN(result) **IF result > 60**
/N_BREAK=N.
You can't do conditional statements in AGGREGATE. One way to accomplish your end goal, though, is to use TEMPORARY. and SELECT IF before the AGGREGATE. Example below:
DATA LIST FREE / Id year result.
BEGIN DATA
1 2011 50
1 2012 68
1 2012 45
2 2011 12
2 2011 80
2 2012 20
END DATA.
DATASET DECLARE test.
TEMPORARY.
SELECT IF result > 60.
AGGREGATE OUTFILE='test'
/BREAK = ID year
/test_mean_over60 = MEAN(result)
/N_BREAK=N.
EXECUTE.
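An alternative sketch (untested): build a conditional copy of result and aggregate that instead. Since MEAN() ignores missing values, this keeps every break group in the output instead of dropping the unselected rows. The dataset name test2 and variable result_over60 are made up for illustration:
* Copy result only where the condition holds; leave it missing otherwise.
DATASET DECLARE test2.
COMPUTE result_over60 = $SYSMIS.
IF result > 60 result_over60 = result.
AGGREGATE OUTFILE='test2'
  /BREAK = Id year
  /test_mean_over60 = MEAN(result_over60)
  /N_BREAK = N.
EXECUTE.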
