How to prevent auto rounding in Oracle database? (sqlplus)

I have the values 33.75, 48.75 and 26.25.
How can I display them as they are, without rounding them to 34, 49 and 26?
SQL> SELECT p.price "Price in SAR"
2 FROM payment p;
Price in SAR
------------
49
49
34
26
26
26
34
49

I suspect it's due to the datatype of the column:
https://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT313
create table t1 (col1 number (5));
insert into t1 values (33.75);
insert into t1 values (48.75);
insert into t1 values (26.25);
insert into t1 values (10.50);
create table t2 (col1 number (5,2));
insert into t2 values (33.75);
insert into t2 values (48.75);
insert into t2 values (26.25);
insert into t2 values (10.50);
select * from t1;
COL1
----------
34
49
26
11
SQL> select * from t2;
COL1
----------
33.75
48.75
26.25
10.5
SQL> select to_char(col1, '99.99') from t2;
TO_CHA
------
33.75
48.75
26.25
10.50
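If the column really was created without a scale (like col1 number(5) above), one possible fix, sketched here with the payment.price names from the question and an assumed precision, is to widen the column so that new values keep their decimals. Note that values already rounded on insert cannot be recovered, and Oracle only lets you increase (not shrink) precision and scale on a populated column.
-- Hypothetical sketch: assumes payment.price is currently defined without a scale, e.g. NUMBER(5)
ALTER TABLE payment MODIFY (price NUMBER(7,2));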

Related

how to group a date column based on date range in oracle

I have a table which contains feedback about a product. It has a feedback type (positive, negative), which is a text column, and the date on which the comment was made. I need the total count of positive and negative feedback for a particular time period. For example, if the date range is 30 days, I need the total count of positive and negative feedback for each of the 4 weeks; if the date range is 6 months, I need the total count of positive and negative feedback for each month. How do I group the count based on date?
+------+------+----------+----------+---------------+
| Slno | User | Comments | type     | commenteddate |
+------+------+----------+----------+---------------+
| 1    | a    | aaaa     | positive | 22-jun-2016   |
| 2    | b    | bbb      | positive | 1-jun-2016    |
| 3    | c    | qqq      | negative | 2-jun-2016    |
| 4    | d    | ccc      | neutral  | 3-may-2016    |
| 5    | e    | www      | positive | 2-apr-2016    |
| 6    | f    | s        | negative | 11-nov-2015   |
+------+------+----------+----------+---------------+
The query I tried is:
SELECT type, to_char(commenteddate,'DD-MM-YYYY'), Count(type) FROM comments GROUP BY type, to_char(commenteddate,'DD-MM-YYYY');
Here's a kick at the can...
Assumptions:
you want to be able to switch the groupings to weekly or monthly only
the start of the first period will be the first date in the feedback data; intervals will be calculated from this initial date
output will show feedback value, time period, count
time periods will not overlap so periods will be x -> x + interval - 1 day
time of day is not important (time for commented dates is always 00:00:00)
First, create some sample data (100 rows):
drop table product_feedback purge;
create table product_feedback
as
select rownum as slno
, chr(65 + MOD(rownum, 26)) as userid
, lpad(chr(65 + MOD(rownum, 26)), 5, chr(65 + MOD(rownum, 26))) as comments
, trunc(sysdate) + rownum + trunc(dbms_random.value * 10) as commented_date
, case mod(rownum * TRUNC(dbms_random.value * 10), 3)
when 0 then 'positive'
when 1 then 'negative'
when 2 then 'neutral' end as feedback
from dual
connect by level <= 100
;
Here's what my sample data looks like:
select *
from product_feedback
;
SLNO USERID COMMENTS COMMENTED_DATE FEEDBACK
1 B BBBBB 2016-08-06 neutral
2 C CCCCC 2016-08-06 negative
3 D DDDDD 2016-08-14 positive
4 E EEEEE 2016-08-16 negative
5 F FFFFF 2016-08-09 negative
6 G GGGGG 2016-08-14 positive
7 H HHHHH 2016-08-17 positive
8 I IIIII 2016-08-18 positive
9 J JJJJJ 2016-08-12 positive
10 K KKKKK 2016-08-15 neutral
11 L LLLLL 2016-08-23 neutral
12 M MMMMM 2016-08-19 positive
13 N NNNNN 2016-08-16 neutral
...
Now for the fun part. Here's the gist:
find out what the earliest and latest commented dates are in the data
include a query where you can set the time period (to "WEEKS" or "MONTHS")
generate all of the (weekly or monthly) time periods between the min/max dates
join the product feedback to the time periods (commented date between start and end) with an outer join in case you want to see all time periods whether or not there was any feedback
group the joined result by feedback, period start, and period end, and set up a column to count one of the 3 possible feedback values
with
min_max_dates -- get earliest and latest feedback dates
as
(select min(commented_date) min_date, max(commented_date) max_date
from product_feedback
)
, time_period_interval
as
(select 'MONTHS' as tp_interval -- set the interval/time period here
from dual
)
, -- generate all time periods between the start date and end date
time_periods (start_of_period, end_of_period, max_date, time_period) -- recursive with clause - fun stuff!
as
(select mmd.min_date as start_of_period
, CASE WHEN tpi.tp_interval = 'WEEKS'
THEN mmd.min_date + 7
WHEN tpi.tp_interval = 'MONTHS'
THEN ADD_MONTHS(mmd.min_date, 1)
ELSE NULL
END - 1 as end_of_period
, mmd.max_date
, tpi.tp_interval as time_period
from time_period_interval tpi
cross join
min_max_dates mmd
UNION ALL
select CASE WHEN time_period = 'WEEKS'
THEN start_of_period + 7 * (ROWNUM )
WHEN time_period = 'MONTHS'
THEN ADD_MONTHS(start_of_period, ROWNUM)
ELSE NULL
END as start_of_period
, CASE WHEN time_period = 'WEEKS'
THEN start_of_period + 7 * (ROWNUM + 1)
WHEN time_period = 'MONTHS'
THEN ADD_MONTHS(start_of_period, ROWNUM + 1)
ELSE NULL
END - 1 as end_of_period
, max_date
, time_period
from time_periods
where end_of_period <= max_date
)
-- now put it all together
select pf.feedback
, tp.start_of_period
, tp.end_of_period
, count(*) as feedback_count
from time_periods tp
left outer join
product_feedback pf
on pf.commented_date between tp.start_of_period and tp.end_of_period
group by tp.start_of_period
, tp.end_of_period
, pf.feedback
order by pf.feedback
, tp.start_of_period
;
Output:
negative 2016-08-06 2016-09-05 6
negative 2016-09-06 2016-10-05 7
negative 2016-10-06 2016-11-05 8
negative 2016-11-06 2016-12-05 1
neutral 2016-08-06 2016-09-05 6
neutral 2016-09-06 2016-10-05 5
neutral 2016-10-06 2016-11-05 11
neutral 2016-11-06 2016-12-05 2
positive 2016-08-06 2016-09-05 17
positive 2016-09-06 2016-10-05 16
positive 2016-10-06 2016-11-05 15
positive 2016-11-06 2016-12-05 6
-- EDIT --
New and improved: all in one easy-to-use procedure. (I will assume you can configure the procedure to make use of the query in whatever way you need.) I made some changes to simplify the CASE statements in a few places. Note that, for whatever reason, using a LEFT OUTER JOIN in the main SELECT results in an ORA-600 error for me, so I switched it to an INNER JOIN.
CREATE OR REPLACE PROCEDURE feedback_counts(p_days_chosen IN NUMBER, p_cursor OUT SYS_REFCURSOR)
AS
BEGIN
OPEN p_cursor FOR
with
min_max_dates -- get earliest and latest feedback dates
as
(select min(commented_date) min_date, max(commented_date) max_date
from product_feedback
)
, time_period_interval
as
(select CASE
WHEN p_days_chosen BETWEEN 1 AND 10 THEN 'DAYS'
WHEN p_days_chosen > 10 AND p_days_chosen <=31 THEN 'WEEKS'
WHEN p_days_chosen > 31 AND p_days_chosen <= 365 THEN 'MONTHS'
ELSE '3-MONTHS'
END as tp_interval -- set the interval/time period here
from dual --(SELECT p_days_chosen as days_chosen from dual)
)
, -- generate all time periods between the start date and end date
time_periods (start_of_period, end_of_period, max_date, tp_interval) -- recursive with clause - fun stuff!
as
(select mmd.min_date as start_of_period
, CASE tpi.tp_interval
WHEN 'DAYS'
THEN mmd.min_date + 1
WHEN 'WEEKS'
THEN mmd.min_date + 7
WHEN 'MONTHS'
THEN mmd.min_date + 30
WHEN '3-MONTHS'
THEN mmd.min_date + 90
ELSE NULL
END - 1 as end_of_period
, mmd.max_date
, tpi.tp_interval
from time_period_interval tpi
cross join
min_max_dates mmd
UNION ALL
select CASE tp_interval
WHEN 'DAYS'
THEN start_of_period + 1 * ROWNUM
WHEN 'WEEKS'
THEN start_of_period + 7 * ROWNUM
WHEN 'MONTHS'
THEN start_of_period + 30 * ROWNUM
WHEN '3-MONTHS'
THEN start_of_period + 90 * ROWNUM
ELSE NULL
END as start_of_period
, start_of_period
+ CASE tp_interval
WHEN 'DAYS'
THEN 1
WHEN 'WEEKS'
THEN 7
WHEN 'MONTHS'
THEN 30
WHEN '3-MONTHS'
THEN 90
ELSE NULL
END * (ROWNUM + 1)
- 1 as end_of_period
, max_date
, tp_interval
from time_periods
where end_of_period <= max_date
)
-- now put it all together
select pf.feedback
, tp.start_of_period
, tp.end_of_period
, count(*) as feedback_count
from time_periods tp
inner join -- currently a bug that prevents the procedure from compiling with a LEFT OUTER JOIN
product_feedback pf
on pf.commented_date between tp.start_of_period and tp.end_of_period
group by tp.start_of_period
, tp.end_of_period
, pf.feedback
order by tp.start_of_period
, pf.feedback
;
END;
Test the procedure (in something like SQLPlus or SQL Developer):
var x refcursor
exec feedback_counts(10, :x)
print :x

QlikView Join: what am I doing wrong?

I want to join two tables and insert the newly calculated data into the first one; look at the example:
Table1:
Measure Value Date
Units 1.00 1
Dollar 25.00 1
Units 1.00 2
Dollar 25.00 2
Table2:
Date Rate
1 1.05
2 1.09
I would like to include in Table 1 these lines, which mean (LocalValue = Dollar Value * Rate on the same Date):
Measure Value Date
LocalValue 26.25 1
LocalValue 27.25 2
I tried to do it like this, but I keep having a problem:
JOIN (Table2)
LOAD
'LocalValue' as [Measure],
[Value]*[Rate] AS [Value]
RESIDENT Table1
WHERE [Measure] = 'Dollar'
but I'm getting this error message:
Error Field not found -
What am I doing wrong?
Example here:
Table1:
Load * inline
[
Measure,Value,Date
Units,1,1
Dollar,25,1
Units,1,2
Dollar,25,2
];
Table2:
Load * inline
[
Date,Rate
1,2
2,3
];
Table1:
JOIN (Table2)
LOAD
'LocalValue' as [Measure],
[Value]*[Rate] AS [Value]
RESIDENT Table1
WHERE [Measure] = 'Dollar'
At the point when you try "JOIN (Table2)", the fields Rate and Value do not exist in the same table you are loading from (Table1).
You need to have the fields Value and Rate in one table before joining in the LocalValue calculation.
Your script needs to look like this:
Table1:
Load * inline
[
Measure,Value,Date
Units,1,1
Dollar,25,1
Units,1,2
Dollar,25,2
];
join
Table2:
Load * inline [
Date,Rate
1,2
2,3
];
JOIN (Table1)
//Table1:
LOAD
'LocalValue' as [Measure],
[Value]*[Rate] AS [Value]
RESIDENT Table1
WHERE [Measure] = 'Dollar'
And the result table will be:
Measure Value Date Rate
Dollar 25 1 2
Units 1 1 2
Dollar 25 2 3
Units 1 2 3
LocalValue 50 - -
LocalValue 75 - -
Stefan

How to use group by and where clause with left outer join in Hive

I have to join two tables using a left outer join in Hive. Please find my table details and required output below.
Apologies for the big explanation; I gave all the information for more clarity on my doubts.
Table1 columns:
date1, srcid, claimno1
Date1 srcid claimno1
31-01-2015 ABC 432
31-01-2015 ABC 451
31-01-2015 ABC 435
01-02-2015 ABC 451
Table2 columns:
date2,claimno2,amount
Date2 claimno2 amount
31-01-2015 432 150.51
01-02-2015 451 250.61
31-02-2015 451 320.72
01-02-2015 432 381.65
Required Output:
Date1 srcid claimno1 amount
31-01-2015 ABC 432 381.65
31-01-2015 ABC 451 250.61
31-01-2015 ABC 432 381.65
01-02-2015 ABC 451 NULL
Conditions:
1) from table1 I need to select the records with date 31-01-2015
2) from table2 I need to select the records with date 01-02-2015
Assumptions:
1) claimno1 from table1 is not unique and occurs many times for the same date1
2) claimno2 from table2 is unique for a particular date2
Doubts:
1) Do I need to include all the selected columns in GROUP BY? (Otherwise I get a SemanticException error.)
2) What is the difference when I use the same condition in the WHERE clause versus the ON clause?
I have used the queries below but couldn't get the required output.
SELECT
a.date1, a.srcid, a.claimno1, b.amount
FROM
table1 a
LEFT OUTER JOIN
table2 b ON (a.claimno1 = b.claimno2)
WHERE
a.date1 = '31-01-2015'
AND b.date2 = '01-02-2015'
GROUP BY a.date1 , a.srcid;
FAILED: SemanticException [Error 10002]: Invalid column reference 'claimno1'
SELECT
a.date1, a.srcid, a.claimno1, b.amount
FROM
table1 a
LEFT OUTER JOIN
table2 b ON (a.claimno1 = b.claimno2)
WHERE
a.date1 = '31-01-2015'
AND b.date2 = '01-02-2015'
GROUP BY a.date1 , a.srcid , a.claimno1;
FAILED: SemanticException [Error 10002]: Invalid column reference 'amount'
SELECT
a.date1, a.srcid, a.claimno1, b.amount
FROM
table1 a
LEFT OUTER JOIN
table2 b ON (a.claimno1 = b.claimno2)
WHERE
a.date1 = '31-01-2015'
AND b.date2 = '01-02-2015'
GROUP BY a.date1 , a.srcid , a.claimno1 , b.amount;
Result: No records selected
SELECT
a.date1, a.srcid, a.claimno1, b.amount
FROM
table1 a
LEFT OUTER JOIN
table2 b ON (a.claimno1 = b.claimno2
AND a.date1 = '31-01-2015'
AND b.date2 = '01-02-2015')
GROUP BY a.date1 , a.srcid , a.claimno1 , b.amount;
Result: records selected but amount values are NULL
Thanks in advance.
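For what it's worth, here is one hedged sketch (not a verified answer) that follows conditions (1) and (2) literally. In a LEFT OUTER JOIN, a filter on the right table placed in the WHERE clause discards the NULL-extended rows, while the same predicate in the ON clause keeps them; filtering each table in its own subquery before the join avoids that trap, and since nothing is aggregated, no GROUP BY is needed:
SELECT a.date1, a.srcid, a.claimno1, b.amount
FROM (SELECT date1, srcid, claimno1 FROM table1 WHERE date1 = '31-01-2015') a
LEFT OUTER JOIN
     (SELECT claimno2, amount FROM table2 WHERE date2 = '01-02-2015') b
ON a.claimno1 = b.claimno2;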

Get elements by two hour range in ruby / rails [duplicate]

I am having some difficulty with the MySQL query I want to write.
SELECT a.timestamp, name, count(b.name)
FROM time a, id b
WHERE a.user = b.user
AND a.id = b.id
AND b.name = 'John'
AND a.timestamp BETWEEN '2010-11-16 10:30:00' AND '2010-11-16 11:00:00'
GROUP BY a.timestamp
This is the output of my current statement.
timestamp name count(b.name)
------------------- ---- -------------
2010-11-16 10:32:22 John 2
2010-11-16 10:35:12 John 7
2010-11-16 10:36:34 John 1
2010-11-16 10:37:45 John 2
2010-11-16 10:48:26 John 8
2010-11-16 10:55:00 John 9
2010-11-16 10:58:08 John 2
How do I group them into 5 minutes interval results?
I want my output to be like
timestamp name count(b.name)
------------------- ---- -------------
2010-11-16 10:30:00 John 2
2010-11-16 10:35:00 John 10
2010-11-16 10:40:00 John 0
2010-11-16 10:45:00 John 8
2010-11-16 10:50:00 John 0
2010-11-16 10:55:00 John 11
This works with every interval.
PostgreSQL
SELECT
TIMESTAMP WITH TIME ZONE 'epoch' +
INTERVAL '1 second' * round(extract('epoch' from timestamp) / 300) * 300 as timestamp,
name,
count(b.name)
FROM time a, id b
WHERE …
GROUP BY
round(extract('epoch' from timestamp) / 300), name
MySQL
SELECT
timestamp, -- not sure about that
name,
count(b.name)
FROM time a, id b
WHERE …
GROUP BY
UNIX_TIMESTAMP(timestamp) DIV 300, name
I came across the same issue.
I found that it is easy to group by any minute interval:
just divide the epoch by the interval in seconds and then either round or floor to get rid of the remainder. So if you want 5-minute intervals, you would use 300 seconds.
SELECT COUNT(*) cnt,
to_timestamp(floor((extract('epoch' from timestamp_column) / 300 )) * 300)
AT TIME ZONE 'UTC' as interval_alias
FROM TABLE_NAME GROUP BY interval_alias
interval_alias cnt
------------------- ----
2010-11-16 10:30:00 2
2010-11-16 10:35:00 10
2010-11-16 10:45:00 8
2010-11-16 10:55:00 11
This will return the data correctly grouped by the selected minute interval; however, it will not return the intervals that don't contain any data. In order to get those empty intervals, we can use the function generate_series.
SELECT generate_series(MIN(date_trunc('hour',timestamp_column)),
max(date_trunc('minute',timestamp_column)),'5m') as interval_alias FROM
TABLE_NAME
Result:
interval_alias
-------------------
2010-11-16 10:30:00
2010-11-16 10:35:00
2010-11-16 10:40:00
2010-11-16 10:45:00
2010-11-16 10:50:00
2010-11-16 10:55:00
Now, to get the intervals with zero occurrences as well, we just outer join both result sets.
SELECT series.minute as interval, coalesce(cnt.amnt,0) as count from
(
SELECT count(*) amnt,
to_timestamp(floor((extract('epoch' from timestamp_column) / 300 )) * 300)
AT TIME ZONE 'UTC' as interval_alias
from TABLE_NAME group by interval_alias
) cnt
RIGHT JOIN
(
SELECT generate_series(min(date_trunc('hour',timestamp_column)),
max(date_trunc('minute',timestamp_column)),'5m') as minute from TABLE_NAME
) series
on series.minute = cnt.interval_alias
The end result will include the series with all 5 minute intervals even those that have no values.
interval count
------------------- ----
2010-11-16 10:30:00 2
2010-11-16 10:35:00 10
2010-11-16 10:40:00 0
2010-11-16 10:45:00 8
2010-11-16 10:50:00 0
2010-11-16 10:55:00 11
The interval can be easily changed by adjusting the last parameter of generate_series. In our case we use '5m' but it could be any interval we want.
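For example, 15-minute buckets over a fixed hour (literal timestamps, just for illustration):
SELECT generate_series(timestamp '2010-11-16 10:00:00',
                       timestamp '2010-11-16 11:00:00',
                       '15 minutes') AS interval_alias;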
You should rather use GROUP BY UNIX_TIMESTAMP(time_stamp) DIV 300 instead of round(../300): because of the rounding, I found that some records end up counted in two different grouped result sets.
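A quick illustration of that effect (made-up timestamps): two records that belong to the same wall-clock 10:30-10:35 window fall into different bins when the epoch is rounded, but into the same bin when it is floored with DIV:
SELECT FROM_UNIXTIME(ROUND(UNIX_TIMESTAMP('2010-11-16 10:32:00') / 300) * 300) AS rounded_a, -- 2010-11-16 10:30:00
       FROM_UNIXTIME(ROUND(UNIX_TIMESTAMP('2010-11-16 10:33:00') / 300) * 300) AS rounded_b, -- 2010-11-16 10:35:00
       FROM_UNIXTIME((UNIX_TIMESTAMP('2010-11-16 10:32:00') DIV 300) * 300)    AS floored_a, -- 2010-11-16 10:30:00
       FROM_UNIXTIME((UNIX_TIMESTAMP('2010-11-16 10:33:00') DIV 300) * 300)    AS floored_b; -- 2010-11-16 10:30:00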
For Postgres, I found it easier and more accurate to use the date_trunc function, like:
select name, sum(count), date_trunc('minute',timestamp) as timestamp
FROM table
WHERE xxx
GROUP BY name,date_trunc('minute',timestamp)
ORDER BY timestamp
You can provide various resolutions like 'minute', 'hour', 'day', etc. to date_trunc.
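For instance, with a literal value, just to show what each resolution does:
SELECT date_trunc('minute', timestamp '2010-11-16 10:32:22'), -- 2010-11-16 10:32:00
       date_trunc('hour',   timestamp '2010-11-16 10:32:22'), -- 2010-11-16 10:00:00
       date_trunc('day',    timestamp '2010-11-16 10:32:22'); -- 2010-11-16 00:00:00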
The query will be something like:
SELECT
DATE_FORMAT(
MIN(timestamp),
'%d/%m/%Y %H:%i:00'
) AS tmstamp,
name,
COUNT(id) AS cnt
FROM
table
GROUP BY ROUND(UNIX_TIMESTAMP(timestamp) / 300), name
Not sure if you still need it.
SELECT FROM_UNIXTIME(FLOOR((UNIX_TIMESTAMP(timestamp))/300)*300) AS t,timestamp,count(1) as c from users GROUP BY t ORDER BY t;
2016-10-29 19:35:00 | 2016-10-29 19:35:50 | 4 |
2016-10-29 19:40:00 | 2016-10-29 19:40:37 | 5 |
2016-10-29 19:45:00 | 2016-10-29 19:45:09 | 6 |
2016-10-29 19:50:00 | 2016-10-29 19:51:14 | 4 |
2016-10-29 19:55:00 | 2016-10-29 19:56:17 | 1 |
You're probably going to have to break up your timestamp into ymd:HM and use DIV 5 to split the minutes up into 5-minute bins -- something like
select year(a.timestamp),
month(a.timestamp),
hour(a.timestamp),
minute(a.timestamp) DIV 5,
name,
count(b.name)
FROM time a, id b
WHERE a.user = b.user AND a.id = b.id AND b.name = 'John'
AND a.timestamp BETWEEN '2010-11-16 10:30:00' AND '2010-11-16 11:00:00'
GROUP BY year(a.timestamp),
month(a.timestamp),
hour(a.timestamp),
minute(a.timestamp) DIV 5
...and then futz the output in client code to appear the way you like it. Or, you can build up the whole date string using the SQL concat operator instead of getting separate columns, if you like.
select concat(year(a.timestamp), "-", month(a.timestamp), "-" ,day(a.timestamp),
" " , lpad(hour(a.timestamp),2,'0'), ":",
lpad((minute(a.timestamp) DIV 5) * 5, 2, '0'))
...and then group on that
How about this one:
select
from_unixtime(unix_timestamp(timestamp) - unix_timestamp(timestamp) mod 300) as ts,
sum(value)
from group_interval
group by ts
order by ts
;
I found out that with MySQL probably the correct query is the following:
SELECT SUBSTRING( FROM_UNIXTIME( CEILING( timestamp /300 ) *300,
'%Y-%m-%d %H:%i:%S' ) , 1, 19 ) AS ts_CEILING,
SUM(value)
FROM group_interval
GROUP BY SUBSTRING( FROM_UNIXTIME( CEILING( timestamp /300 ) *300,
'%Y-%m-%d %H:%i:%S' ) , 1, 19 )
ORDER BY SUBSTRING( FROM_UNIXTIME( CEILING( timestamp /300 ) *300,
'%Y-%m-%d %H:%i:%S' ) , 1, 19 ) DESC
Let me know what you think.
select
CONCAT(CAST(CREATEDATE AS DATE),' ',datepart(hour,createdate),':',ROUND(CAST((CAST((CAST(DATEPART(MINUTE,CREATEDATE) AS DECIMAL (18,4)))/5 AS INT)) AS DECIMAL (18,4))/12*60,2)) AS '5MINDATE'
,count(something)
from TABLE
group by CONCAT(CAST(CREATEDATE AS DATE),' ',datepart(hour,createdate),':',ROUND(CAST((CAST((CAST(DATEPART(MINUTE,CREATEDATE) AS DECIMAL (18,4)))/5 AS INT)) AS DECIMAL (18,4))/12*60,2))
This will do exactly what you want.
Replace
dt - your datetime
c - call field
astro_transit1 - your table
300 as seconds for each time gap increase
SELECT
FROM_UNIXTIME(300 * ROUND(UNIX_TIMESTAMP(r.dt) / 300)) AS 5datetime,
(SELECT
r.c
FROM
astro_transit1 ra
WHERE
ra.dt = r.dt
ORDER BY ra.dt DESC
LIMIT 1) AS first_val
FROM
astro_transit1 r
GROUP BY UNIX_TIMESTAMP(r.dt) DIV 300
LIMIT 0 , 30
Based on #boecko's answer for MySQL, I used a CTE (Common Table Expression) to accelerate the query execution time:
so this :
SELECT
`timestamp`,
`name`,
count(b.`name`)
FROM `time` a, `id` b
WHERE …
GROUP BY
UNIX_TIMESTAMP(`timestamp`) DIV 300, name
becomes:
WITH cte AS (
SELECT
`timestamp`,
`name`,
count(b.`name`),
UNIX_TIMESTAMP(`timestamp`) DIV 300 AS `intervals`
FROM `time` a, `id` b
WHERE …
)
SELECT * FROM cte GROUP BY `intervals`
On a large amount of data, this sped things up by more than 10x!
As timestamp and time are keywords in MySQL, don't forget to put backticks (`...`) around each such table and column name!
Hope it will help some of you.

stored procedure for fetching data from multiple tables

I have a stored procedure like this:
Select k.HBarcode, m.make,t.plateno ,v.vtype ,l.locname,mdl.model,c.Colname
from transaction_tbl t,
KHanger_tbl k,
make_tbl m,
vtype_tbl v,
Location_tbl l,
Model_tbl mdl,
Color_tbl C
where t.tbarcode=#carid and
t.mkid=m.mkid and
v.vtid=t.vtid and
t.locid=l.locid and
mdl.mdlid=t.mdlid and
t.colid=c.colid and
t.transactID=k.transactID
While executing this I am getting this output:
HBarcode make plateno vtype locname model Colname
34 BMW 44554 Normal Fashion Avenue 520 Red
I have two more tables. From the transaction table I can get transactid (in the example above, t.transactID), then I can get the corresponding
tid from "KHanger_tbl", and then I want to show the UniqueName for the corresponding tid from the "Terminals" table.
1 - KHanger_tbl
transactid   HBarcode   tid
----------   --------   ---
19           34         7
22           002        5
21           1          7
23           200005     6
2 - Terminals_tbl
tid   UniqueName
---   ------------
5     Key Room-1
6     Podium -1
7     Key Room - 2
Expected output
UniqueName HBarcode make plateno vtype locname model Colname
-------------------------------------------------------------------------------
KeyRoom-2 34 BMW 44554 Normal Fashion Avenue 520 Red
So how can I write a stored procedure for this? If anyone knows, please help me.
Something like this, maybe?
Select term.UniqueName ,
k.HBarcode, m.make,t.plateno ,v.vtype ,l.locname,mdl.model,c.Colname
from transaction_tbl t,
KHanger_tbl k,
make_tbl m,
vtype_tbl v,
Location_tbl l,
Model_tbl mdl,
Color_tbl C,
Terminals_tbl Term
where t.tbarcode=#carid and
t.mkid=m.mkid and
v.vtid=t.vtid and
t.locid=l.locid and
mdl.mdlid=t.mdlid and
t.colid=c.colid and
t.transactID=k.transactID
and
term.tid = k.tid
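For readability, the same query could also be written with explicit JOIN syntax; just a sketch over the same tables, keeping #carid as the parameter placeholder and taking tid from KHanger_tbl as the question describes:
Select term.UniqueName,
       k.HBarcode, m.make, t.plateno, v.vtype, l.locname, mdl.model, c.Colname
from transaction_tbl t
join KHanger_tbl k on k.transactID = t.transactID
join Terminals_tbl term on term.tid = k.tid
join make_tbl m on m.mkid = t.mkid
join vtype_tbl v on v.vtid = t.vtid
join Location_tbl l on l.locid = t.locid
join Model_tbl mdl on mdl.mdlid = t.mdlid
join Color_tbl c on c.colid = t.colid
where t.tbarcode = #carid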
