How to join two tables with a condition - join

TABLO1
Inst_count Inst_Type Card_Group Txn_Type Comm_rate
---------- --------- ---------- -------- ---------
2          N         -1         -1       1,70
2          N         36         -1       1,71
2          N         37         -1       1,72
2          V         -1         -1       1,73
2          V         36         -1       1,74
2          V         37         -1       1,75
TABLO2
Inst_count Inst_Type Card_Group Txn_Type Isk_rate Day
---------- --------- ---------- -------- -------- ---
2          N         -1         -1       1,0      10
2          N         36         -1       1,1      11
2          N         37         -1       1,2      12
2          V         -1         -1       1,3      13
2          V         36         -1       1,4      14
2          V         38         -1       1,5      15
Result_Table
Inst_count Inst_Type TABLO1.Card_Group TABLO2.Card_Group TABLO1.Txn_Type TABLO2.Txn_Type Isk_rate Day Comm_rate
---------- --------- ----------------- ----------------- --------------- --------------- -------- --- ---------
2          N         -1                -1                -1              -1              1,0      10  1,70
2          N         36                36                -1              -1              1,1      11  1,71
2          N         37                37                -1              -1              1,2      12  1,72
2          V         -1                -1                -1              -1              1,3      13  1,73
2          V         36                36                -1              -1              1,4      14  1,74
2          V         -1                38                -1              -1              1,5      15  1,73
2          V         37                -1                -1              -1              1,3      13  1,75
Inst_count and Inst_Type must always be equal.
But Card_Group and Txn_Type must match the corresponding row in the other table, falling back to the default value (-1) when no exact match exists.
How can I write this SQL?
Thanks.

Here we use a FULL OUTER JOIN, which combines the values of both tables into one result set (you could also use UNION ALL for this purpose). Then we add a LEFT JOIN against each table so that every column falls back to the default row when the other table has no match. Try this pattern with your own query. If you cannot reach your goal, I suggest putting your sample data into a dbfiddle link; then we can explain further.
SELECT DISTINCT NVL(A.COL1, A1.COL1) COL1A,
                NVL(B.COL1, B1.COL1) COL1B,
                NVL(A.COL2, A1.COL2) COL2,
                NVL(A.COL3, A1.COL3) COL3,
                NVL(B.COL3, B1.COL3) COL4
FROM TABLEA A
FULL OUTER JOIN TABLEB B ON B.COL1 = A.COL1
LEFT OUTER JOIN TABLEB B1 ON B1.COL1 = -1
LEFT OUTER JOIN TABLEA A1 ON A1.COL1 = -1
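The query above uses placeholder names (TABLEA, TABLEB, COL1, ...). As a rough, untested sketch of a related approach applied to the posted TABLO1/TABLO2 columns, you could join on "exact value or the -1 default" and keep the most specific match per row with ROW_NUMBER. This sketch is driven from TABLO2 only, so a symmetric query (or a UNION with one) starting from TABLO1 would still be needed for rows such as Card_Group 37 that exist only in TABLO1:
SELECT Inst_count, Inst_Type,
       t1_card_group, t2_card_group,
       t1_txn_type, t2_txn_type,
       Isk_rate, Day, Comm_rate
  FROM (SELECT t2.Inst_count, t2.Inst_Type,
               t1.Card_Group AS t1_card_group, t2.Card_Group AS t2_card_group,
               t1.Txn_Type   AS t1_txn_type,   t2.Txn_Type   AS t2_txn_type,
               t2.Isk_rate, t2.Day, t1.Comm_rate,
               -- prefer an exact Card_Group/Txn_Type match over the -1 default row
               ROW_NUMBER() OVER (
                 PARTITION BY t2.Inst_count, t2.Inst_Type, t2.Card_Group, t2.Txn_Type
                 ORDER BY CASE WHEN t1.Card_Group = t2.Card_Group THEN 0 ELSE 1 END,
                          CASE WHEN t1.Txn_Type   = t2.Txn_Type   THEN 0 ELSE 1 END) AS rn
          FROM TABLO2 t2
          LEFT OUTER JOIN TABLO1 t1
            ON t1.Inst_count = t2.Inst_count
           AND t1.Inst_Type  = t2.Inst_Type
           AND t1.Card_Group IN (t2.Card_Group, -1)
           AND t1.Txn_Type   IN (t2.Txn_Type, -1))
 WHERE rn = 1;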

Related

Calculate the longest continuous running time of a device

I have a table created with the following script:
n=15
ts=now()+1..n * 1000 * 100
status=rand(0 1 ,n)
val=rand(100,n)
t=table(ts,status,val)
select * from t order by ts
where
ts is the time, status indicates the device status (0: down; 1: running), and val indicates the running time.
Suppose I have the following data:
ts status val
2023.01.03T18:17:17.386 1 58
2023.01.03T18:18:57.386 0 93
2023.01.03T18:20:37.386 0 24
2023.01.03T18:22:17.386 1 87
2023.01.03T18:23:57.386 0 85
2023.01.03T18:25:37.386 1 9
2023.01.03T18:27:17.386 1 46
2023.01.03T18:28:57.386 1 3
2023.01.03T18:30:37.386 0 65
2023.01.03T18:32:17.386 1 66
2023.01.03T18:33:57.386 0 56
2023.01.03T18:35:37.386 0 42
2023.01.03T18:37:17.386 1 82
2023.01.03T18:38:57.386 1 95
2023.01.03T18:40:37.386 0 19
So how do I calculate the longest continuous running time? For example, the 7th and 8th records both have status 1, so I want to sum their val values; likewise for the 14th and 15th records.
You can use the built-in function segment to group consecutive identical values. The full script is as follows:
select first(ts), sum(iif(status==1, val, 0)) as total_val
from t
group by segment(status)
having sum(iif(status==1, val, 0)) > 0
The result:
segment_status first_ts total_val
0 2023.01.03T18:17:17.386 58
3 2023.01.03T18:22:17.386 87
5 2023.01.03T18:25:37.386 58
9 2023.01.03T18:32:17.386 66
12 2023.01.03T18:37:17.386 177
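This returns the total for every run of status 1. If you only need the single longest continuous running time, a small follow-up in the same style (an untested sketch; runs is just a variable name chosen here) can take the maximum of those per-segment sums:
runs = select sum(iif(status==1, val, 0)) as total_val from t group by segment(status)
// the largest per-run total is the longest continuous running time (177 in this sample)
select max(total_val) from runs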

Terrible performance using XGBoost H2O

Very different Model Performance using XGBoost on H2O
I am training an XGBoost model using 5-fold cross-validation on a very imbalanced binary classification problem. The dataset has 1200 columns (multi-document word2vec document embeddings).
The only parameters specified to train the XGBoost model were:
min_split_improvement = 1e-5
seed=1
nfolds = 5
The reported performance on train data was extremely high (probably overfitting!!!):
Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.2814398407936096:
A D Error Rate
----- ----- --- ------- -------------
A 16858 2 0.0001 (2.0/16860.0)
D 0 414 0 (0.0/414.0)
Total 16858 416 0.0001 (2.0/17274.0)
AUC: 0.9999991404060721
The performance on cross validation data was terrible:
Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.016815993119962513:
A D Error Rate
----- ----- --- ------- ----------------
A 16003 857 0.0508 (857.0/16860.0)
D 357 57 0.8623 (357.0/414.0)
Total 16360 914 0.0703 (1214.0/17274.0)
AUC: 0.6015883863129724
I know H2O cross-validation trains an extra model on all of the available data, and different performance is expected.
But what could be causing such poor performance in the resulting model?
PS: XGBoost on a multi-node H2O cluster with OMP.
Model Type: classifier
Model performance < XGBoost_model_python_1575650180928_617 >:
ModelMetricsBinomial: xgboost
** Reported on train data. **
MSE: 0.0008688085383330077
RMSE: 0.029475558320971762
LogLoss: 0.00836528606162877
Mean Per-Class Error: 5.931198102016033e-05
AUC: 0.9999991404060721
pr_auc: 0.9975495622569983
Gini: 0.9999982808121441
Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.2814398407936096:
A D Error Rate
----- ----- --- ------- -------------
A 16858 2 0.0001 (2.0/16860.0)
D 0 414 0 (0.0/414.0)
Total 16858 416 0.0001 (2.0/17274.0)
Maximum Metrics: Maximum metrics at their respective thresholds
metric threshold value idx
--------------------------- ----------- -------- -----
max f1 0.28144 0.99759 195
max f2 0.28144 0.999035 195
max f0point5 0.553885 0.998053 191
max accuracy 0.28144 0.999884 195
max precision 0.990297 1 0
max recall 0.28144 1 195
max specificity 0.990297 1 0
max absolute_mcc 0.28144 0.997534 195
max min_per_class_accuracy 0.28144 0.999881 195
max mean_per_class_accuracy 0.28144 0.999941 195
max tns 0.990297 16860 0
max fns 0.990297 413 0
max fps 0.000111383 16860 399
max tps 0.28144 414 195
max tnr 0.990297 1 0
max fnr 0.990297 0.997585 0
max fpr 0.000111383 1 399
max tpr 0.28144 1 195
Gains/Lift Table: Avg response rate: 2.40 %, avg score: 2.42 %
group cumulative_data_fraction lower_threshold lift cumulative_lift response_rate score cumulative_response_rate cumulative_score capture_rate cumulative_capture_rate gain cumulative_gain
-- ------- -------------------------- ----------------- ------- ----------------- --------------- ----------- -------------------------- ------------------ -------------- ------------------------- ------- -----------------
1 0.0100151 0.873526 41.7246 41.7246 1 0.907782 1 0.907782 0.417874 0.417874 4072.46 4072.46
2 0.0200301 0.776618 41.7246 41.7246 1 0.834968 1 0.871375 0.417874 0.835749 4072.46 4072.46
3 0.0300452 0.0326301 16.4004 33.2832 0.393064 0.303206 0.797688 0.681985 0.164251 1 1540.04 3228.32
4 0.0400023 0.0224876 0 24.9986 0 0.0263919 0.599132 0.518799 0 1 -100 2399.86
5 0.0500174 0.0180858 0 19.9931 0 0.0201498 0.479167 0.418953 0 1 -100 1899.31
6 0.100035 0.0107386 0 9.99653 0 0.0136044 0.239583 0.216279 0 1 -100 899.653
7 0.149994 0.00798337 0 6.66692 0 0.00922284 0.159784 0.147313 0 1 -100 566.692
8 0.200012 0.00629476 0 4.99971 0 0.00709438 0.119826 0.112249 0 1 -100 399.971
9 0.299988 0.00436827 0 3.33346 0 0.00522157 0.0798919 0.0765798 0 1 -100 233.346
10 0.400023 0.00311204 0 2.49986 0 0.00370085 0.0599132 0.0583548 0 1 -100 149.986
11 0.5 0.00227535 0 2 0 0.00267196 0.0479333 0.0472208 0 1 -100 100
12 0.599977 0.00170271 0 1.66673 0 0.00197515 0.039946 0.0396813 0 1 -100 66.6731
13 0.700012 0.00121528 0 1.42855 0 0.00145049 0.0342375 0.034218 0 1 -100 42.8548
14 0.799988 0.000837358 0 1.25002 0 0.00102069 0.0299588 0.0300692 0 1 -100 25.0018
15 0.899965 0.000507632 0 1.11115 0 0.000670878 0.0266306 0.0268033 0 1 -100 11.1154
16 1 3.35288e-05 0 1 0 0.00033002 0.0239667 0.0241551 0 1 -100 0
Cross-validation (xval) performance of model < XGBoost_model_python_1575650180928_617 >:
ModelMetricsBinomial: xgboost
** Reported on cross-validation data. **
MSE: 0.023504756648164406
RMSE: 0.15331261085822134
LogLoss: 0.14134815775808462
Mean Per-Class Error: 0.4160864407653825
AUC: 0.6015883863129724
pr_auc: 0.04991836222189148
Gini: 0.2031767726259448
Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.016815993119962513:
A D Error Rate
----- ----- --- ------- ----------------
A 16003 857 0.0508 (857.0/16860.0)
D 357 57 0.8623 (357.0/414.0)
Total 16360 914 0.0703 (1214.0/17274.0)
Maximum Metrics: Maximum metrics at their respective thresholds
metric threshold value idx
--------------------------- ----------- --------- -----
max f1 0.016816 0.0858434 209
max f2 0.00409934 0.138433 318
max f0point5 0.0422254 0.0914205 127
max accuracy 0.905155 0.976323 3
max precision 0.99221 1 0
max recall 9.60076e-05 1 399
max specificity 0.99221 1 0
max absolute_mcc 0.825434 0.109684 5
max min_per_class_accuracy 0.00238436 0.572464 345
max mean_per_class_accuracy 0.00262155 0.583914 341
max tns 0.99221 16860 0
max fns 0.99221 412 0
max fps 9.60076e-05 16860 399
max tps 9.60076e-05 414 399
max tnr 0.99221 1 0
max fnr 0.99221 0.995169 0
max fpr 9.60076e-05 1 399
max tpr 9.60076e-05 1 399
Gains/Lift Table: Avg response rate: 2.40 %, avg score: 0.54 %
group cumulative_data_fraction lower_threshold lift cumulative_lift response_rate score cumulative_response_rate cumulative_score capture_rate cumulative_capture_rate gain cumulative_gain
-- ------- -------------------------- ----------------- -------- ----------------- --------------- ----------- -------------------------- ------------------ -------------- ------------------------- --------- -----------------
1 0.0100151 0.0540408 4.34129 4.34129 0.104046 0.146278 0.104046 0.146278 0.0434783 0.0434783 334.129 334.129
2 0.0200301 0.033963 2.41183 3.37656 0.0578035 0.0424722 0.0809249 0.094375 0.0241546 0.0676329 141.183 237.656
3 0.0300452 0.0251807 2.17065 2.97459 0.0520231 0.0292894 0.0712909 0.0726798 0.0217391 0.089372 117.065 197.459
4 0.0400023 0.02038 2.18327 2.77762 0.0523256 0.0225741 0.0665702 0.0602078 0.0217391 0.111111 118.327 177.762
5 0.0500174 0.0174157 1.92946 2.60779 0.0462428 0.0188102 0.0625 0.0519187 0.0193237 0.130435 92.9463 160.779
6 0.100035 0.0103201 1.59365 2.10072 0.0381944 0.0132217 0.0503472 0.0325702 0.0797101 0.210145 59.3649 110.072
7 0.149994 0.00742152 1.06366 1.7553 0.0254925 0.00867473 0.0420687 0.0246112 0.0531401 0.263285 6.3664 75.5301
8 0.200012 0.00560037 1.11073 1.59411 0.0266204 0.00642966 0.0382055 0.0200645 0.0555556 0.318841 11.0725 59.4111
9 0.299988 0.00366149 1.30465 1.49764 0.0312681 0.00452583 0.0358935 0.0148859 0.130435 0.449275 30.465 49.7642
10 0.400023 0.00259159 1.13487 1.40692 0.0271991 0.00306994 0.0337192 0.0119311 0.113527 0.562802 13.4872 40.6923
11 0.5 0.00189 0.579844 1.24155 0.0138969 0.00220612 0.0297557 0.00998654 0.057971 0.620773 -42.0156 24.1546
12 0.599977 0.00136983 0.990568 1.19972 0.0237406 0.00161888 0.0287534 0.0085922 0.0990338 0.719807 -0.943246 19.9724
13 0.700012 0.000980029 0.676094 1.1249 0.0162037 0.00116698 0.02696 0.0075311 0.0676329 0.78744 -32.3906 12.4895
14 0.799988 0.00067366 0.797286 1.08395 0.0191083 0.000820365 0.0259787 0.00669244 0.0797101 0.86715 -20.2714 8.39529
15 0.899965 0.000409521 0.797286 1.05211 0.0191083 0.000540092 0.0252155 0.00600898 0.0797101 0.94686 -20.2714 5.21072
16 1 2.55768e-05 0.531216 1 0.0127315 0.000264023 0.0239667 0.00543429 0.0531401 1 -46.8784 0
For the non-cross-validation case, try splitting your data up front into training and validation frames.
I expect you will get a worse AUC for the validation case.
For highly imbalanced cases, though, sometimes you just need to go by the error rate for each class.
Since there are so many true negatives, they can dominate the AUC (the vast majority of predictions correctly predict "not interesting"). Some people will upsample the minority class in this situation, using row weights to make the model more sensitive to it.
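A minimal sketch of both suggestions (an explicit validation frame plus a row-weight column) with the H2O Python API; the file name, column names, and weight value below are placeholders, not taken from the question:
import h2o
from h2o.estimators.xgboost import H2OXGBoostEstimator

h2o.init()
df = h2o.import_file("embeddings.csv")   # hypothetical input file
df["label"] = df["label"].asfactor()     # binary target, e.g. A / D

# 1) hold out an explicit validation frame instead of judging on training data
train, valid = df.split_frame(ratios=[0.8], seed=1)

# 2) up-weight the rare positive class via a row-weight column
#    (40 is a placeholder, roughly the inverse class frequency from the post)
train["w"] = (train["label"] == "D").ifelse(40, 1)

model = H2OXGBoostEstimator(min_split_improvement=1e-5, seed=1)
model.train(y="label", training_frame=train, validation_frame=valid,
            weights_column="w")
print(model.auc(valid=True))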

How does Weka evaluate classifier model

I used random forest algorithm and got this result
=== Summary ===
Correctly Classified Instances       10547               97.0464 %
Incorrectly Classified Instances       321                2.9536 %
Kappa statistic                          0.9642
Mean absolute error                      0.0333
Root mean squared error                  0.0952
Relative absolute error                 18.1436 %
Root relative squared error             31.4285 %
Total Number of Instances            10868
=== Confusion Matrix ===
a b c d e f g h i <-- classified as
1518 1 3 1 0 14 0 0 4 | a = a
3 2446 0 0 0 1 1 27 0 | b = b
0 0 2942 0 0 0 0 0 0 | c = c
0 0 0 470 0 1 1 2 1 | d = d
9 0 0 9 2 19 0 3 0 | e = e
23 1 2 19 0 677 1 22 6 | f = f
4 0 2 0 0 13 379 0 0 | g = g
63 2 6 17 0 15 0 1122 3 | h = h
9 0 0 0 0 9 0 4 991 | i = i
I wonder how Weka evaluates errors (mean absolute error, root mean squared error, ...) on non-numerical class values ('a', 'b', ...).
I mapped each class to a number from 0 to 8 and computed the errors manually, but my results differed from Weka's.
How can I reimplement Weka's evaluation steps?

How can one flatten a cypher result to show summary data in columns?

I have the rank (say 1 to 103) and the count of nodes for each rank from a Cypher query.
day.name rank count
12/14/2016 1 1
12/14/2016 2 2
12/14/2016 3 3
12/14/2016 4 10
...
12/14/2016 11 20000
12/14/2016 12 20500
...
12/14/2016 21 15000
12/14/2016 22 15000
I'd like to display summary data in columns, such as the total count for ranks 1 to 3, ranks 4 to 10, etc.
In this example, the desired result would be
'rank_1_to_3' 'rank_4_to_10' 'rank_11_to_20' 'rank_21_to_100'
---------------------------------------------------------------------------
6 10 40500 30000
Here's the Cypher:
match (d:Domain {name:'hotelguides.com'})
match (c:Temp {name:"top30k_rank_1_to_100"})
match (day:Day {name:'2016-12-22'})
match (day)-[]-(kua:KeywordURLAssociation)-[]-(d)
match (k:Keyword)-[]-(kua)-[]-(r:Rank)
with day, kua, r, k return day.name, r.name, count(k)
I think this query can get you started; I will leave applying it to your particular domain up to you. Essentially, you can use filter to turn a collection of results into a smaller collection containing just a subset of them. You can then use reduce to summarize each of those smaller collections. Something like this:
with [
{name:'12/14/2016', rank:1, count:1},
{name:'12/14/2016', rank:2, count:2},
{name:'12/14/2016', rank:3, count:3},
{name:'12/14/2016', rank:4, count:10},
{name:'12/14/2016', rank:11, count:20000},
{name:'12/14/2016', rank:12, count:20500},
{name:'12/14/2016', rank:21, count:15000},
{name:'12/14/2016', rank:22, count:15000}
] as days
return reduce(sum_c=0, d in filter(d in days where d.rank < 4) | sum_c + d.count) as rank_1_3
, reduce(sum_c=0, d in filter(d in days where d.rank > 3 and d.rank < 11) | sum_c + d.count) as rank_4_to_10
, reduce(sum_c=0, d in filter(d in days where d.rank > 10 and d.rank < 21) | sum_c + d.count) as rank_11_to_20
, reduce(sum_c=0, d in filter(d in days where d.rank > 20 and d.rank < 101) | sum_c + d.count) as rank_21_100
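One caveat: newer Cypher versions drop the filter() function in favour of list comprehensions, so on those versions each term would need the equivalent form (same logic, sketched for the first bucket only):
return reduce(sum_c = 0, d in [d in days where d.rank < 4] | sum_c + d.count) as rank_1_3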

How to group a date column based on date range in Oracle

I have a table which contains feedback about a product. It has a feedback type (positive, negative) stored as a text column and the date on which the comment was made. I need to get the total count of positive and negative feedback for a particular time period. For example, if the date range is 30 days, I need the totals per week for 4 weeks; if the date range is 6 months, I need the totals per month. How do I group the counts based on date?
+------+------+----------+----------+---------------+--+--+--+
| Slno | User | Comments | type | commenteddate | | | |
+------+------+----------+----------+---------------+--+--+--+
| 1 | a | aaaa | positive | 22-jun-2016 | | | |
| 2 | b | bbb | positive | 1-jun-2016 | | | |
| 3 | c | qqq | negative | 2-jun-2016 | | | |
| 4 | d | ccc | neutral | 3-may-2016 | | | |
| 5 | e | www | positive | 2-apr-2016 | | | |
| 6 | f | s | negative | 11-nov-2015 | | | |
+------+------+----------+----------+---------------+--+--+--+
The query I tried is:
SELECT type, to_char(commenteddate,'DD-MM-YYYY'), Count(type) FROM comments GROUP BY type, to_char(commenteddate,'DD-MM-YYYY');
Here's a kick at the can...
Assumptions:
you want to be able to switch the groupings to weekly or monthly only
the start of the first period will be the first date in the feedback data; intervals will be calculated from this initial date
output will show feedback value, time period, count
time periods will not overlap so periods will be x -> x + interval - 1 day
time of day is not important (time for commented dates is always 00:00:00)
First, create some sample data (100 rows):
drop table product_feedback purge;
create table product_feedback
as
select rownum as slno
, chr(65 + MOD(rownum, 26)) as userid
, lpad(chr(65 + MOD(rownum, 26)), 5, chr(65 + MOD(rownum, 26))) as comments
, trunc(sysdate) + rownum + trunc(dbms_random.value * 10) as commented_date
, case mod(rownum * TRUNC(dbms_random.value * 10), 3)
when 0 then 'positive'
when 1 then 'negative'
when 2 then 'neutral' end as feedback
from dual
connect by level <= 100
;
Here's what my sample data looks like:
select *
from product_feedback
;
SLNO USERID COMMENTS COMMENTED_DATE FEEDBACK
1 B BBBBB 2016-08-06 neutral
2 C CCCCC 2016-08-06 negative
3 D DDDDD 2016-08-14 positive
4 E EEEEE 2016-08-16 negative
5 F FFFFF 2016-08-09 negative
6 G GGGGG 2016-08-14 positive
7 H HHHHH 2016-08-17 positive
8 I IIIII 2016-08-18 positive
9 J JJJJJ 2016-08-12 positive
10 K KKKKK 2016-08-15 neutral
11 L LLLLL 2016-08-23 neutral
12 M MMMMM 2016-08-19 positive
13 N NNNNN 2016-08-16 neutral
...
Now for the fun part. Here's the gist:
find out what the earliest and latest commented dates are in the data
include a query where you can set the time period (to "WEEKS" or "MONTHS")
generate all of the (weekly or monthly) time periods between the min/max dates
join the product feedback to the time periods (commented date between start and end) with an outer join in case you want to see all time periods whether or not there was any feedback
group the joined result by feedback, period start, and period end, and set up a column to count one of the 3 possible feedback values
with
min_max_dates -- get earliest and latest feedback dates
as
(select min(commented_date) min_date, max(commented_date) max_date
from product_feedback
)
, time_period_interval
as
(select 'MONTHS' as tp_interval -- set the interval/time period here
from dual
)
, -- generate all time periods between the start date and end date
time_periods (start_of_period, end_of_period, max_date, time_period) -- recursive with clause - fun stuff!
as
(select mmd.min_date as start_of_period
, CASE WHEN tpi.tp_interval = 'WEEKS'
THEN mmd.min_date + 7
WHEN tpi.tp_interval = 'MONTHS'
THEN ADD_MONTHS(mmd.min_date, 1)
ELSE NULL
END - 1 as end_of_period
, mmd.max_date
, tpi.tp_interval as time_period
from time_period_interval tpi
cross join
min_max_dates mmd
UNION ALL
select CASE WHEN time_period = 'WEEKS'
THEN start_of_period + 7 * (ROWNUM )
WHEN time_period = 'MONTHS'
THEN ADD_MONTHS(start_of_period, ROWNUM)
ELSE NULL
END as start_of_period
, CASE WHEN time_period = 'WEEKS'
THEN start_of_period + 7 * (ROWNUM + 1)
WHEN time_period = 'MONTHS'
THEN ADD_MONTHS(start_of_period, ROWNUM + 1)
ELSE NULL
END - 1 as end_of_period
, max_date
, time_period
from time_periods
where end_of_period <= max_date
)
-- now put it all together
select pf.feedback
, tp.start_of_period
, tp.end_of_period
, count(*) as feedback_count
from time_periods tp
left outer join
product_feedback pf
on pf.commented_date between tp.start_of_period and tp.end_of_period
group by tp.start_of_period
, tp.end_of_period
, pf.feedback
order by pf.feedback
, tp.start_of_period
;
Output:
negative 2016-08-06 2016-09-05 6
negative 2016-09-06 2016-10-05 7
negative 2016-10-06 2016-11-05 8
negative 2016-11-06 2016-12-05 1
neutral 2016-08-06 2016-09-05 6
neutral 2016-09-06 2016-10-05 5
neutral 2016-10-06 2016-11-05 11
neutral 2016-11-06 2016-12-05 2
positive 2016-08-06 2016-09-05 17
positive 2016-09-06 2016-10-05 16
positive 2016-10-06 2016-11-05 15
positive 2016-11-06 2016-12-05 6
-- EDIT --
New and improved, all in one easy-to-use procedure. (I will assume you can configure the procedure to make use of the query in whatever way you need.) I made some changes to simplify the CASE statements in a few places. Note that, for whatever reason, using a LEFT OUTER JOIN in the main SELECT results in an ORA-600 error for me, so I switched it to an INNER JOIN.
CREATE OR REPLACE PROCEDURE feedback_counts(p_days_chosen IN NUMBER, p_cursor OUT SYS_REFCURSOR)
AS
BEGIN
OPEN p_cursor FOR
with
min_max_dates -- get earliest and latest feedback dates
as
(select min(commented_date) min_date, max(commented_date) max_date
from product_feedback
)
, time_period_interval
as
(select CASE
WHEN p_days_chosen BETWEEN 1 AND 10 THEN 'DAYS'
WHEN p_days_chosen > 10 AND p_days_chosen <=31 THEN 'WEEKS'
WHEN p_days_chosen > 31 AND p_days_chosen <= 365 THEN 'MONTHS'
ELSE '3-MONTHS'
END as tp_interval -- set the interval/time period here
from dual --(SELECT p_days_chosen as days_chosen from dual)
)
, -- generate all time periods between the start date and end date
time_periods (start_of_period, end_of_period, max_date, tp_interval) -- recursive with clause - fun stuff!
as
(select mmd.min_date as start_of_period
, CASE tpi.tp_interval
WHEN 'DAYS'
THEN mmd.min_date + 1
WHEN 'WEEKS'
THEN mmd.min_date + 7
WHEN 'MONTHS'
THEN mmd.min_date + 30
WHEN '3-MONTHS'
THEN mmd.min_date + 90
ELSE NULL
END - 1 as end_of_period
, mmd.max_date
, tpi.tp_interval
from time_period_interval tpi
cross join
min_max_dates mmd
UNION ALL
select CASE tp_interval
WHEN 'DAYS'
THEN start_of_period + 1 * ROWNUM
WHEN 'WEEKS'
THEN start_of_period + 7 * ROWNUM
WHEN 'MONTHS'
THEN start_of_period + 30 * ROWNUM
WHEN '3-MONTHS'
THEN start_of_period + 90 * ROWNUM
ELSE NULL
END as start_of_period
, start_of_period
+ CASE tp_interval
WHEN 'DAYS'
THEN 1
WHEN 'WEEKS'
THEN 7
WHEN 'MONTHS'
THEN 30
WHEN '3-MONTHS'
THEN 90
ELSE NULL
END * (ROWNUM + 1)
- 1 as end_of_period
, max_date
, tp_interval
from time_periods
where end_of_period <= max_date
)
-- now put it all together
select pf.feedback
, tp.start_of_period
, tp.end_of_period
, count(*) as feedback_count
from time_periods tp
inner join -- currently a bug that prevents the procedure from compiling with a LEFT OUTER JOIN
product_feedback pf
on pf.commented_date between tp.start_of_period and tp.end_of_period
group by tp.start_of_period
, tp.end_of_period
, pf.feedback
order by tp.start_of_period
, pf.feedback
;
END;
Test the procedure (in something like SQL*Plus or SQL Developer):
var x refcursor
exec feedback_counts(10, :x)
print :x
