I'm having some difficulty with a MySQL query I want to run.
SELECT a.timestamp, name, count(b.name)
FROM time a, id b
WHERE a.user = b.user
AND a.id = b.id
AND b.name = 'John'
AND a.timestamp BETWEEN '2010-11-16 10:30:00' AND '2010-11-16 11:00:00'
GROUP BY a.timestamp
This is the current output of my statement:
timestamp name count(b.name)
------------------- ---- -------------
2010-11-16 10:32:22 John 2
2010-11-16 10:35:12 John 7
2010-11-16 10:36:34 John 1
2010-11-16 10:37:45 John 2
2010-11-16 10:48:26 John 8
2010-11-16 10:55:00 John 9
2010-11-16 10:58:08 John 2
How do I group the results into 5-minute intervals?
I want my output to look like this:
timestamp name count(b.name)
------------------- ---- -------------
2010-11-16 10:30:00 John 2
2010-11-16 10:35:00 John 10
2010-11-16 10:40:00 John 0
2010-11-16 10:45:00 John 8
2010-11-16 10:50:00 John 0
2010-11-16 10:55:00 John 11
This works for any interval.
PostgreSQL
SELECT
TIMESTAMP WITH TIME ZONE 'epoch' +
INTERVAL '1 second' * round(extract('epoch' from timestamp) / 300) * 300 as timestamp,
name,
count(b.name)
FROM time a, id b
WHERE …
GROUP BY
round(extract('epoch' from timestamp) / 300), name
MySQL
SELECT
timestamp, -- not sure about that
name,
count(b.name)
FROM time a, id b
WHERE …
GROUP BY
UNIX_TIMESTAMP(timestamp) DIV 300, name
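If you also want the SELECT list to show the start of each 5-minute bucket (instead of an arbitrary timestamp from the group), one option is to rebuild the timestamp from the same integer division. This is only a sketch against the question's schema, and the interval_start alias is just illustrative:
SELECT
    FROM_UNIXTIME(UNIX_TIMESTAMP(timestamp) DIV 300 * 300) AS interval_start,
    name,
    count(b.name)
FROM time a, id b
WHERE …
GROUP BY
    UNIX_TIMESTAMP(timestamp) DIV 300, name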
I came across the same issue.
I found that the easy way to group by any minute interval is
to divide the epoch by the interval length in seconds and then either round or floor the result to get rid of the remainder. So if you want a 5-minute interval, you would use 300 seconds.
SELECT COUNT(*) cnt,
to_timestamp(floor((extract('epoch' from timestamp_column) / 300 )) * 300)
AT TIME ZONE 'UTC' as interval_alias
FROM TABLE_NAME GROUP BY interval_alias
interval_alias cnt
------------------- ----
2010-11-16 10:30:00 2
2010-11-16 10:35:00 10
2010-11-16 10:45:00 8
2010-11-16 10:55:00 11
This will return the data correctly grouped by the selected minute interval; however, it will not return the intervals that don't contain any data. In order to get those empty intervals, we can use the function generate_series.
SELECT generate_series(MIN(date_trunc('hour',timestamp_column)),
max(date_trunc('minute',timestamp_column)),'5m') as interval_alias FROM
TABLE_NAME
Result:
interval_alias
-------------------
2010-11-16 10:30:00
2010-11-16 10:35:00
2010-11-16 10:40:00
2010-11-16 10:45:00
2010-11-16 10:50:00
2010-11-16 10:55:00
Now, to get the result including the intervals with zero occurrences, we just outer join both result sets.
SELECT series.minute as interval, coalesce(cnt.amnt,0) as count from
(
SELECT count(*) amnt,
to_timestamp(floor((extract('epoch' from timestamp_column) / 300 )) * 300)
AT TIME ZONE 'UTC' as interval_alias
from TABLE_NAME group by interval_alias
) cnt
RIGHT JOIN
(
SELECT generate_series(min(date_trunc('hour',timestamp_column)),
max(date_trunc('minute',timestamp_column)),'5m') as minute from TABLE_NAME
) series
on series.minute = cnt.interval_alias
The end result will include the series with all 5 minute intervals even those that have no values.
interval count
------------------- ----
2010-11-16 10:30:00 2
2010-11-16 10:35:00 10
2010-11-16 10:40:00 0
2010-11-16 10:45:00 8
2010-11-16 10:50:00 0
2010-11-16 10:55:00 11
The interval can be easily changed by adjusting the last parameter of generate_series. In our case we use '5m' but it could be any interval we want.
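For example, a sketch of the same series with a 15-minute step (same hypothetical TABLE_NAME and timestamp_column as above); remember to also change the 300-second divisor in the counting query to match (900 for 15 minutes):
SELECT generate_series(min(date_trunc('hour', timestamp_column)),
                       max(date_trunc('minute', timestamp_column)),
                       '15m') as interval_alias
FROM TABLE_NAME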
You should use GROUP BY UNIX_TIMESTAMP(time_stamp) DIV 300 rather than round(../300): because of the rounding, I found that some records were counted into two grouped result sets.
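To see why: a record in the middle of a bucket rounds up into the next one. A minimal illustration in MySQL, using a hypothetical literal timestamp:
-- 10:32:30 is 150 seconds into the 10:30 bucket;
-- ROUND pushes it into the 10:35 bucket, DIV (a floor) keeps it in 10:30.
SELECT
    FROM_UNIXTIME(ROUND(UNIX_TIMESTAMP('2010-11-16 10:32:30') / 300) * 300) AS rounded,  -- 10:35:00 bucket
    FROM_UNIXTIME(UNIX_TIMESTAMP('2010-11-16 10:32:30') DIV 300 * 300)      AS floored;  -- 10:30:00 bucket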
For Postgres, I found it easier and more accurate to use the date_trunc function, like:
select name, sum(count), date_trunc('minute',timestamp) as timestamp
FROM table
WHERE xxx
GROUP BY name,date_trunc('minute',timestamp)
ORDER BY timestamp
You can provide various resolutions like 'minute','hour','day' etc... to date_trunc.
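For instance, a sketch of the same query truncated to the hour instead of the minute (same placeholder table and filter). Note that date_trunc only handles whole units; for arbitrary widths such as 5 minutes, PostgreSQL 14+ also offers date_bin('5 minutes', timestamp, TIMESTAMP '2001-01-01'), otherwise you are back to the epoch arithmetic shown above.
select name, sum(count), date_trunc('hour',timestamp) as timestamp
FROM table
WHERE xxx
GROUP BY name,date_trunc('hour',timestamp)
ORDER BY timestamp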
The query will be something like:
SELECT
DATE_FORMAT(
MIN(timestamp),
'%d/%m/%Y %H:%i:00'
) AS tmstamp,
name,
COUNT(id) AS cnt
FROM
table
GROUP BY ROUND(UNIX_TIMESTAMP(timestamp) / 300), name
Not sure if you still need it.
SELECT FROM_UNIXTIME(FLOOR((UNIX_TIMESTAMP(timestamp))/300)*300) AS t,
       timestamp,
       count(1) as c
from users
GROUP BY t
ORDER BY t;
2016-10-29 19:35:00 | 2016-10-29 19:35:50 | 4 |
2016-10-29 19:40:00 | 2016-10-29 19:40:37 | 5 |
2016-10-29 19:45:00 | 2016-10-29 19:45:09 | 6 |
2016-10-29 19:50:00 | 2016-10-29 19:51:14 | 4 |
2016-10-29 19:55:00 | 2016-10-29 19:56:17 | 1 |
You're probably going to have to break up your timestamp into ymd:HM and use DIV 5 to split the minutes up into 5-minute bins -- something like
select year(a.timestamp),
month(a.timestamp),
hour(a.timestamp),
minute(a.timestamp) DIV 5,
name,
count(b.name)
FROM time a, id b
WHERE a.user = b.user AND a.id = b.id AND b.name = 'John'
AND a.timestamp BETWEEN '2010-11-16 10:30:00' AND '2010-11-16 11:00:00'
GROUP BY year(a.timestamp),
month(a.timestamp),
hour(a.timestamp),
minute(a.timestamp) DIV 5
...and then futz the output in client code to appear the way you like it. Or, you can build up the whole date string using the SQL concat operator instead of getting separate columns, if you like.
select concat(year(a.timestamp), "-", month(a.timestamp), "-" ,day(a.timestamp),
" " , lpad(hour(a.timestamp),2,'0'), ":",
lpad((minute(a.timestamp) DIV 5) * 5, 2, '0'))
...and then group on that
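Put together, it might look something like this (a sketch; the five_min_bucket alias is just illustrative, and MySQL allows grouping by a select alias):
select concat(year(a.timestamp), "-", month(a.timestamp), "-", day(a.timestamp),
       " ", lpad(hour(a.timestamp),2,'0'), ":",
       lpad((minute(a.timestamp) DIV 5) * 5, 2, '0')) as five_min_bucket,
       name,
       count(b.name)
FROM time a, id b
WHERE a.user = b.user AND a.id = b.id AND b.name = 'John'
  AND a.timestamp BETWEEN '2010-11-16 10:30:00' AND '2010-11-16 11:00:00'
GROUP BY five_min_bucket, name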
How about this one:
select
from_unixtime(unix_timestamp(timestamp) - unix_timestamp(timestamp) mod 300) as ts,
sum(value)
from group_interval
group by ts
order by ts
;
I found out that with MySQL the correct query is probably the following:
SELECT SUBSTRING( FROM_UNIXTIME( CEILING( timestamp /300 ) *300,
'%Y-%m-%d %H:%i:%S' ) , 1, 19 ) AS ts_CEILING,
SUM(value)
FROM group_interval
GROUP BY SUBSTRING( FROM_UNIXTIME( CEILING( timestamp /300 ) *300,
'%Y-%m-%d %H:%i:%S' ) , 1, 19 )
ORDER BY SUBSTRING( FROM_UNIXTIME( CEILING( timestamp /300 ) *300,
'%Y-%m-%d %H:%i:%S' ) , 1, 19 ) DESC
Let me know what you think.
select
    CONCAT(CAST(CREATEDATE AS DATE), ' ', DATEPART(HOUR, CREATEDATE), ':',
           ROUND(CAST(CAST(CAST(DATEPART(MINUTE, CREATEDATE) AS DECIMAL(18,4)) / 5 AS INT) AS DECIMAL(18,4)) / 12 * 60, 2)) AS '5MINDATE',
    count(something)
from TABLE
group by CONCAT(CAST(CREATEDATE AS DATE), ' ', DATEPART(HOUR, CREATEDATE), ':',
           ROUND(CAST(CAST(CAST(DATEPART(MINUTE, CREATEDATE) AS DECIMAL(18,4)) / 5 AS INT) AS DECIMAL(18,4)) / 12 * 60, 2))
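As an aside, a simpler sketch for SQL Server (assuming CREATEDATE is a datetime) is the DATEADD/DATEDIFF idiom, which floors each value to its 5-minute boundary and keeps the result as a datetime rather than a string:
select
    DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, CREATEDATE) / 5) * 5, 0) AS [5MINDATE],
    count(something)
from TABLE
group by DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, CREATEDATE) / 5) * 5, 0)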
This will do exactly what you want.
Replace:
dt - your datetime column
c - your call field
astro_transit1 - your table
300 - seconds for each time gap increase
SELECT
FROM_UNIXTIME(300 * ROUND(UNIX_TIMESTAMP(r.dt) / 300)) AS 5datetime,
(SELECT
r.c
FROM
astro_transit1 ra
WHERE
ra.dt = r.dt
ORDER BY ra.dt DESC
LIMIT 1) AS first_val
FROM
astro_transit1 r
GROUP BY UNIX_TIMESTAMP(r.dt) DIV 300
LIMIT 0 , 30
Based on @boecko's answer for MySQL, I used a CTE (common table expression) to speed up the query execution time.
So this:
SELECT
`timestamp`,
`name`,
count(b.`name`)
FROM `time` a, `id` b
WHERE …
GROUP BY
UNIX_TIMESTAMP(`timestamp`) DIV 300, name
becomes :
WITH cte AS (
SELECT
`timestamp`,
`name`,
count(b.`name`),
UNIX_TIMESTAMP(`timestamp`) DIV 300 AS `intervals`
FROM `time` a, `id` b
WHERE …
)
SELECT * FROM cte GROUP BY `intervals`
On a large amount of data, this speeds things up by a factor of more than 10!
As timestamp and time are reserved words in MySQL, don't forget to put backticks (`...`) around each table and column name!
Hope it will help some of you.
Related
Rails version: 7.0
PostgreSQL version: 14
What is the way to find the price by the quantity in the products table?
products table
min_quantity | max_quantity | price
1 | 4 | 200
5 | 9 | 185
10 | 24 | 175
25 | 34 | 150
35 | 999 | 100
1000 | null | 60
Expected result
3 ===> 200
50 ===> 100
2500 ===> 60
You can achieve that with a where condition checking that the given value is between min_quantity and max_quantity:
with products(min_quantity, max_quantity, price) as (
values
(1, 4, 200)
, (5, 9, 185)
, (10, 24, 175)
, (25, 34, 150)
, (35, 999, 100)
, (1000, null, 60)
)
select
price
from products
where
case
when max_quantity is null
then 3 >= min_quantity
else
3 between min_quantity and max_quantity
end
-- 200
But as you might have a null value for max_quantity (when min_quantity is 1000), you'll need a way to handle that, so you can use a case expression to only compare the input with min_quantity in that situation.
If the same applies to min_quantity and it can hold null values, then another branch in the case expression should suffice, as sketched below.
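For example, a sketch of that extended case expression, reusing the products CTE above and the literal 3 as the input (the null-min_quantity branches are hypothetical, since the sample data never has one):
select
    price
from products
where
    case
        when min_quantity is null and max_quantity is null then true
        when min_quantity is null then 3 <= max_quantity
        when max_quantity is null then 3 >= min_quantity
        else 3 between min_quantity and max_quantity
    end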
As Rails doesn't have specific support for these situations, you'll be better off going with just "raw" SQL in your where:
where(<<~SQL, input: input)
  case
    when max_quantity is null
      then :input >= min_quantity
    else
      :input between min_quantity and max_quantity
  end
SQL
I can't figure out how to attach the .csv or workbook. Can someone explain?
I feel like I am overcomplicating this. I have a simple line chart comparing 2 SUM values, in 3 measures, across an assigned day number. I've attached a .csv with sample data and a Tableau workbook. If I need to provide anything else, please let me know.
The viz also needs to be dynamic, so users can toggle between past months using a parameter [Selected Month]; in this example September is selected.
The [Contributing Day#] number is a field in the data provided by the user, not a calculation. I cannot use the workday feature, because weekend dates and holidays are sometimes considered a [Contributing Day#].
A date is matched to a Contributing_Day#. I can't compare date to date, they want to see it across Contributing_Day# instead.
I need to aggregate daily for the smaller number of days; in this example I would want to exclude Contributing_Day# 6 & 7 from the viz since one month only went to 5.
It needs to be MTD minus today, meaning if only Contributing_Day# <= 4 has passed, all 3 measures would be null after Contributing_Day# 4.
And if that wasn't enough, I need to exclude a dimension [Role] = 'Role' string type as well.
Measure_1 = SUM(Value_Sum) for [Selected Month]
Measure_2 = SUM(Value_Sum) for [Selected Month] - 1
Measure_3 = SUM(Value_Goal) for [Selected Month]
September Date | [Contributing_Day#] | Measure_1 | Measure_3
9/1/22 | 1 | 105 | 96
9/2/22 | 2 | 112 | 92
9/6/22 | 3 | 129 | 99
9/7/22 | 4 | 108 | 100
9/8/22 | 5 | 108 | 98
August Date | [Contributing_Day#] | Measure_2
8/1/22 | 1 | 105
8/2/22 | 2 | 112
8/3/22 | 3 | 129
8/4/22 | 4 | 108
8/5/22 | 5 | 108
8/8/22 | 6 | 112
8/9/22 | 7 | 92
Here is what I have tried; it was almost working but not quite. I also have a screenshot of my visual with September selected, and with October.
Measure_1
{Fixed [Date],[Role] : SUM(IF MONTH([Date]) = MONTH(TODAY()) AND DAY([Date]) < DAY(TODAY()) AND
[Role] <> 'Role' THEN [Value_1] END) }
Measure_2
{Fixed [Date],[Role] :
SUM(
IF ISNULL([Measure_1]) THEN NULL
ELSEIF MONTH([Date]) = MONTH(TODAY())-1
AND DAY ([Date]) < DAY(TODAY())
AND [Role] <> 'Role'
THEN [Value_1] END) }
Measure_3
{Fixed [Date],[Role] :
SUM(
IF MONTH([Date]) = MONTH(TODAY())
AND DAY ([Date]) < DAY(TODAY())
AND [Role] <> 'Role'
THEN [Value_2] END) }
Other calculations I've tried...
SUM(
{ FIXED [Date] :
MAX(IF DATEPART('year',[Date]) = DATEPART('year',[Select Month])
AND DATEPART('month',[Date]) <= DATEPART('month',[Select Month])
AND [Contributing_Day#] = 1
THEN 1 ELSE NULL END)
})
COUNTD(IF DATEPART('year',[Date]) = DATEPART('year',[Select Month])
AND [Contributing_Day#] = 1
THEN [Date] ELSE NULL END)
[% of Month Complete]
would be the count of [Contributing_Day#] passed / the total count of [Contributing_Day#],
but I haven't been able to get that to work.
[Role Flag]
{Fixed [Date],[Role] :
SUM(IF [Role] <> 'Role' THEN 0 ELSE 1 END) }
Measure_1
{Fixed [Date],[Role] : IF [% of Month Complete] = 1 THEN
SUM([Value_1]) ELSEIF MAX(MONTH([Date])) = MAX(MONTH([Select Month]))
AND MAX(DAY([Date])) < MAX(DAY([Select Month])) AND MAX([Role Flag])
<> 1 THEN SUM([Value_1]) END }
Measure_2
{Fixed [Date],[Role] :
IF MAX([Measure_1]) = 0 THEN NULL
ELSEIF [% of Month Complete] = 1
AND MAX(MONTH([Date])) = MAX(MONTH([Select Month]))-1
AND MAX([Role Flag]) <> 1
THEN SUM([Value_1])
ELSEIF MAX(MONTH([Date])) = MAX(MONTH([Select Month]))-1
AND MAX(DAY([Date])) < MAX(DAY([Select Month]))
AND MAX([Role Flag]) <> 1
THEN SUM([Value_1]) END }
I have an employees table with mutations to their contracts
EmpID | Start      | End        | Function | Hours | SalesPercentage
1     | 01-01-2020 | 31-12-2020 | FO Desk  | 40    | 1
1     | 01-01-2020 | 31-01-2021 | FO Desk  | 32    | 1
1     | 01-02-2021 |            | FO Desk  | 32    | 0.50
2     | 01-01-2021 | 31-01-2021 | BO       | 32    | 0
2     | 01-02-2021 |            | BO/FO    | 32    | 0.25
For dynamic calculation of the number of employees and their sales percentages, I need to turn this into a table with an entry per month:
Year Month EmpID Hours SalesPercentage
2020 1 1 40 1
2020 2 1 40 1
..
2020 12 1 40 1
2021 1 1 32 1
2021 1 2 32 0
2021 2 1 32 0.50
2021 2 2 32 0.25
I have a simple Year Month table that I would like to append the mutation data to, but joining on multiple columns is not possible as far as I can tell. Is there a way around this?
Try this below
It generates a list of all year/month combinations for each row, then expands it and removes extra columns
let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{ {"Start", type date}, {"End", type date}}),
#"Added Custom" = Table.AddColumn(
#"Changed Type",
"newcol",
each
let
begin = Date.StartOfMonth([Start]),
End2 = if [End] = null then [Start] else [End]
in
List.Accumulate(
{0..(Date.Year(End2)-Date.Year([Start]))*12+(Date.Month(End2)-Date.Month([Start]))},
{},
(s,c) => s&{Date.AddMonths(begin,c)}
)
),
#"Expanded newcol" = Table.ExpandListColumn(#"Added Custom", "newcol"),
#"Added Custom2" = Table.AddColumn(#"Expanded newcol", "Year", each Date.Year([newcol])),
#"Added Custom3" = Table.AddColumn(#"Added Custom2", "Month", each Date.Month([newcol])),
#"Removed Columns" = Table.RemoveColumns(#"Added Custom3",{"Start", "End", "Function", "newcol"})
in #"Removed Columns"
I have a table which contains feedback about a product. It has a feedback type (positive, negative), which is a text column, and the date on which the comment was made. I need to get the total count of positive and negative feedback for a particular time period. For example, if the date range is 30 days, I need the total count of positive and negative feedback for each of the 4 weeks; if the date range is 6 months, I need the total count of positive and negative feedback for each month. How do I group the count based on date?
+------+------+----------+----------+---------------+
| Slno | User | Comments | type     | commenteddate |
+------+------+----------+----------+---------------+
| 1    | a    | aaaa     | positive | 22-jun-2016   |
| 2    | b    | bbb      | positive | 1-jun-2016    |
| 3    | c    | qqq      | negative | 2-jun-2016    |
| 4    | d    | ccc      | neutral  | 3-may-2016    |
| 5    | e    | www      | positive | 2-apr-2016    |
| 6    | f    | s        | negative | 11-nov-2015   |
+------+------+----------+----------+---------------+
The query I tried is:
SELECT type, to_char(commenteddate,'DD-MM-YYYY'), Count(type) FROM comments GROUP BY type, to_char(commenteddate,'DD-MM-YYYY');
Here's a kick at the can...
Assumptions:
you want to be able to switch the groupings to weekly or monthly only
the start of the first period will be the first date in the feedback data; intervals will be calculated from this initial date
output will show feedback value, time period, count
time periods will not overlap so periods will be x -> x + interval - 1 day
time of day is not important (time for commented dates is always 00:00:00)
First, create some sample data (100 rows):
drop table product_feedback purge;
create table product_feedback
as
select rownum as slno
, chr(65 + MOD(rownum, 26)) as userid
, lpad(chr(65 + MOD(rownum, 26)), 5, chr(65 + MOD(rownum, 26))) as comments
, trunc(sysdate) + rownum + trunc(dbms_random.value * 10) as commented_date
, case mod(rownum * TRUNC(dbms_random.value * 10), 3)
when 0 then 'positive'
when 1 then 'negative'
when 2 then 'neutral' end as feedback
from dual
connect by level <= 100
;
Here's what my sample data looks like:
select *
from product_feedback
;
SLNO USERID COMMENTS COMMENTED_DATE FEEDBACK
1 B BBBBB 2016-08-06 neutral
2 C CCCCC 2016-08-06 negative
3 D DDDDD 2016-08-14 positive
4 E EEEEE 2016-08-16 negative
5 F FFFFF 2016-08-09 negative
6 G GGGGG 2016-08-14 positive
7 H HHHHH 2016-08-17 positive
8 I IIIII 2016-08-18 positive
9 J JJJJJ 2016-08-12 positive
10 K KKKKK 2016-08-15 neutral
11 L LLLLL 2016-08-23 neutral
12 M MMMMM 2016-08-19 positive
13 N NNNNN 2016-08-16 neutral
...
Now for the fun part. Here's the gist:
find out what the earliest and latest commented dates are in the data
include a query where you can set the time period (to "WEEKS" or "MONTHS")
generate all of the (weekly or monthly) time periods between the min/max dates
join the product feedback to the time periods (commented date between start and end) with an outer join in case you want to see all time periods whether or not there was any feedback
group the joined result by feedback, period start, and period end, and set up a column to count one of the 3 possible feedback values
with
min_max_dates -- get earliest and latest feedback dates
as
(select min(commented_date) min_date, max(commented_date) max_date
from product_feedback
)
, time_period_interval
as
(select 'MONTHS' as tp_interval -- set the interval/time period here
from dual
)
, -- generate all time periods between the start date and end date
time_periods (start_of_period, end_of_period, max_date, time_period) -- recursive with clause - fun stuff!
as
(select mmd.min_date as start_of_period
, CASE WHEN tpi.tp_interval = 'WEEKS'
THEN mmd.min_date + 7
WHEN tpi.tp_interval = 'MONTHS'
THEN ADD_MONTHS(mmd.min_date, 1)
ELSE NULL
END - 1 as end_of_period
, mmd.max_date
, tpi.tp_interval as time_period
from time_period_interval tpi
cross join
min_max_dates mmd
UNION ALL
select CASE WHEN time_period = 'WEEKS'
THEN start_of_period + 7 * (ROWNUM )
WHEN time_period = 'MONTHS'
THEN ADD_MONTHS(start_of_period, ROWNUM)
ELSE NULL
END as start_of_period
, CASE WHEN time_period = 'WEEKS'
THEN start_of_period + 7 * (ROWNUM + 1)
WHEN time_period = 'MONTHS'
THEN ADD_MONTHS(start_of_period, ROWNUM + 1)
ELSE NULL
END - 1 as end_of_period
, max_date
, time_period
from time_periods
where end_of_period <= max_date
)
-- now put it all together
select pf.feedback
, tp.start_of_period
, tp.end_of_period
, count(*) as feedback_count
from time_periods tp
left outer join
product_feedback pf
on pf.commented_date between tp.start_of_period and tp.end_of_period
group by tp.start_of_period
, tp.end_of_period
, pf.feedback
order by pf.feedback
, tp.start_of_period
;
Output:
negative 2016-08-06 2016-09-05 6
negative 2016-09-06 2016-10-05 7
negative 2016-10-06 2016-11-05 8
negative 2016-11-06 2016-12-05 1
neutral 2016-08-06 2016-09-05 6
neutral 2016-09-06 2016-10-05 5
neutral 2016-10-06 2016-11-05 11
neutral 2016-11-06 2016-12-05 2
positive 2016-08-06 2016-09-05 17
positive 2016-09-06 2016-10-05 16
positive 2016-10-06 2016-11-05 15
positive 2016-11-06 2016-12-05 6
-- EDIT --
New and improved, all in one easy-to-use procedure. (I will assume you can configure the procedure to make use of the query in whatever way you need.) I made some changes to simplify the CASE statements in a few places. Note that, for whatever reason, using a LEFT OUTER JOIN in the main SELECT results in an ORA-600 error for me, so I switched it to an INNER JOIN.
CREATE OR REPLACE PROCEDURE feedback_counts(p_days_chosen IN NUMBER, p_cursor OUT SYS_REFCURSOR)
AS
BEGIN
OPEN p_cursor FOR
with
min_max_dates -- get earliest and latest feedback dates
as
(select min(commented_date) min_date, max(commented_date) max_date
from product_feedback
)
, time_period_interval
as
(select CASE
WHEN p_days_chosen BETWEEN 1 AND 10 THEN 'DAYS'
WHEN p_days_chosen > 10 AND p_days_chosen <=31 THEN 'WEEKS'
WHEN p_days_chosen > 31 AND p_days_chosen <= 365 THEN 'MONTHS'
ELSE '3-MONTHS'
END as tp_interval -- set the interval/time period here
from dual --(SELECT p_days_chosen as days_chosen from dual)
)
, -- generate all time periods between the start date and end date
time_periods (start_of_period, end_of_period, max_date, tp_interval) -- recursive with clause - fun stuff!
as
(select mmd.min_date as start_of_period
, CASE tpi.tp_interval
WHEN 'DAYS'
THEN mmd.min_date + 1
WHEN 'WEEKS'
THEN mmd.min_date + 7
WHEN 'MONTHS'
THEN mmd.min_date + 30
WHEN '3-MONTHS'
THEN mmd.min_date + 90
ELSE NULL
END - 1 as end_of_period
, mmd.max_date
, tpi.tp_interval
from time_period_interval tpi
cross join
min_max_dates mmd
UNION ALL
select CASE tp_interval
WHEN 'DAYS'
THEN start_of_period + 1 * ROWNUM
WHEN 'WEEKS'
THEN start_of_period + 7 * ROWNUM
WHEN 'MONTHS'
THEN start_of_period + 30 * ROWNUM
WHEN '3-MONTHS'
THEN start_of_period + 90 * ROWNUM
ELSE NULL
END as start_of_period
, start_of_period
+ CASE tp_interval
WHEN 'DAYS'
THEN 1
WHEN 'WEEKS'
THEN 7
WHEN 'MONTHS'
THEN 30
WHEN '3-MONTHS'
THEN 90
ELSE NULL
END * (ROWNUM + 1)
- 1 as end_of_period
, max_date
, tp_interval
from time_periods
where end_of_period <= max_date
)
-- now put it all together
select pf.feedback
, tp.start_of_period
, tp.end_of_period
, count(*) as feedback_count
from time_periods tp
inner join -- currently a bug that prevents the procedure from compiling with a LEFT OUTER JOIN
product_feedback pf
on pf.commented_date between tp.start_of_period and tp.end_of_period
group by tp.start_of_period
, tp.end_of_period
, pf.feedback
order by tp.start_of_period
, pf.feedback
;
END;
Test the procedure (in something like SQLPlus or SQL Developer):
var x refcursor
exec feedback_counts(10, :x)
print :x
As per application requirements, I need to do the task below in a Power Pivot report (not in SQL).
I have two date columns in one table
/* Column Heads :*/ Activity| City | Start_Date | End_Date
/* Row 1 :*/ A1 | C1 | 01/01/2014 | 05/01/2014
/* Row 2 :*/ A1 | C1 | 06/01/2014 | 07/01/2014
/* Row 3 : */ A2 | C2 | 06/01/2014 | 07/01/2014
/* Row 4 : */ A3 | C3 | 03/01/2014 | 04/01/2014
Expected output, if the user selects the date range 02/01/2014 to 07/01/2014:
City | #StartCount | #EndCount
C1 | 1 | 2
C2 | 1 | 1
C3 | 1 | 1
Here #StartCount is:
Needs to be grouped by City.
It's a count of distinct Activity.
It should consider date boundaries as: all activities for which the Start Date is greater than or equal to (>=) the input Start Date and less than (<) the input End Date.
Here #EndCount is:
Needs to be grouped by City.
It's a count of distinct Activity.
It should consider date boundaries as: all activities for which the End Date is less than or equal to (<=) the input End Date and greater than (>) the input Start Date.
Could you please suggest an expression to use for such a case?
A calculated measure or DAX can be used.
The only way I can match your sample data is to ignore the "distinct activities" requirement in your question, since city C1 has only activity A1 and yet the final result counts to 2.
I'm also probably making a lot of assumptions that may be wrong because your sample data is quite limited. But here's my attempt:
declare #Activities table (Activity char(2),City char(2),Start_Date date,End_Date date)
insert into #Activities (Activity,City,Start_Date,End_Date) values
('A1','C1','20140101','20140105'),
('A1','C1','20140106','20140107'),
('A2','C2','20140106','20140107'),
('A3','C3','20140103','20140104')
declare #Start date
declare #End date
select #Start = '20140102', #End = '20140107'
select City,
SUM(CASE WHEN Start_Date < #Start THEN 0 ELSE 1 END) as StartCount,
SUM(CASE WHEN End_Date > #End THEN 0 ELSE 1 END) as EndCount
from #Activities where Start_Date < #End and End_Date > #Start
group by City
(Some of this relies on SQL Server syntax, but since you haven't tagged your question with a database system, I get to pick)
SQL Server syntax
select activity, city,
SUM(case when (startdate >= '2014/01/02' and startdate <= '2014/01/07') then 1 else 0 End) as IsStart,
SUM(case when (enddate >= '2014/01/02' and enddate <='2014/01/07') then 1 else 0 End) as IsEnd
from temp
Group by activity, city
demo: http://sqlfiddle.com/#!3/b2a64/1