I'm using the following database:
CREATE TABLE datas (d_id INTEGER PRIMARY KEY, name_id numeric, countdata numeric);
INSERT INTO datas VALUES(1,1,20); -- or: VALUES(NULL,1,20);
INSERT INTO datas VALUES(2,1,47); -- or: VALUES(NULL,1,47);
INSERT INTO datas VALUES(3,2,36); -- or: VALUES(NULL,2,36);
INSERT INTO datas VALUES(4,2,58); -- or: VALUES(NULL,2,58);
INSERT INTO datas VALUES(5,2,87); -- or: VALUES(NULL,2,87);
CREATE TABLE names (n_id INTEGER PRIMARY KEY, name text);
INSERT INTO names VALUES(1,'nameA'); -- or: VALUES(NULL,'nameA');
INSERT INTO names VALUES(2,'nameB'); -- or: VALUES(NULL,'nameB');
What I would like to do is select all rows of names, with all columns of datas appended for the row where datas.countdata is the maximum for that n_id (and, of course, where name_id = n_id).
I can somewhat get there with the following query:
sqlite> .header ON
sqlite> SELECT * FROM names AS n1
LEFT OUTER JOIN (
SELECT d_id, name_id, countdata FROM datas AS d1
WHERE d1.countdata IN (
SELECT MAX(countdata) FROM datas
WHERE name_id=1
)
) AS p1 ON n_id=name_id;
n1.n_id|n1.name|p1.d_id|p1.name_id|p1.countdata
1|nameA|2|1|47
2|nameB|||
... however - obviously - it only works for a single row (the one explicitly selected by name_id=1).
The problem is, the SQL query fails whenever I try to somehow reference the "current" n_id:
sqlite> SELECT * FROM names AS n1
LEFT OUTER JOIN (
SELECT d_id, name_id, countdata FROM datas AS d1
WHERE d1.countdata IN (
SELECT MAX(countdata) FROM datas
WHERE name_id=n1.n_id
)
) AS p1 ON n_id=name_id;
SQL error: no such column: n1.n_id
Is there any way of achieving what I want in SQLite 2?
Thanks in advance,
Cheers!
Oh, well - that wasn't trivial at all, but here is a solution:
sqlite> SELECT * FROM names AS n1
LEFT OUTER JOIN (
SELECT d1.*
FROM datas AS d1, (
SELECT max(countdata) as countdata,name_id
FROM datas
GROUP BY name_id
) AS ttemp
WHERE d1.name_id = ttemp.name_id AND d1.countdata = ttemp.countdata
) AS p1 ON n1.n_id=p1.name_id;
n1.n n1.name p1.d_id p1.name_id p1.countdata
---- ------------ ---------- ---------- -----------------------------------
1 nameA 2 1 47
2 nameB 5 2 87
Well, hope this ends up helping someone, :)
Cheers!
Notes: note that just calling max(countdata) completely screws up d_id:
sqlite> select d_id,name_id,max(countdata) as countdata from datas group by name_id;
d_id name_id countdata
---- ------------ ----------
3 2 87
1 1 47
so to get the correct corresponding d_id, we must do max() on datas separately, and then perform a sort of intersect with the full datas (except that INTERSECT in sqlite requires an equal number of columns in both datasets, which is not the case here - and even if we made the column counts match, the d_id would be wrong as seen above, so INTERSECT will not work).
One way to do that is to use a sort of temporary table, and then a multiple-table SELECT query to set conditions between the full datas and the subset returned via max(countdata), as shown below:
sqlite> CREATE TABLE ttemp AS SELECT max(countdata) as countdata,name_id FROM datas GROUP BY name_id;
sqlite> SELECT d1.*, ttemp.* FROM datas AS d1, ttemp WHERE d1.name_id = ttemp.name_id AND d1.countdata = ttemp.countdata;
d1.d d1.name_id d1.countda ttemp.coun ttemp.name_id
---- ------------ ---------- ---------- -----------------------------------
2 1 47 47 1
5 2 87 87 2
sqlite> DROP TABLE ttemp;
or, we can rewrite the above so that a SELECT subquery (sub-select) is used instead, like this:
sqlite> SELECT d1.* FROM datas AS d1, (SELECT max(countdata) as countdata,name_id FROM datas GROUP BY name_id) AS ttemp WHERE d1.name_id = ttemp.name_id AND d1.countdata = ttemp.countdata;
d1.d d1.name_id d1.countda
---- ------------ ----------
2 1 47
5 2 87
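For what it's worth, on SQLite 3 the same greatest-per-group result can also be obtained with a correlated subquery in the join condition itself - a sketch, untested on SQLite 2, where correlated references fail as shown above:
SELECT n1.*, d1.*
FROM names AS n1
LEFT OUTER JOIN datas AS d1
  ON d1.name_id = n1.n_id
  AND d1.countdata = (
    -- per-name maximum, correlated on the outer names row
    SELECT MAX(d2.countdata) FROM datas AS d2 WHERE d2.name_id = n1.n_id
  );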
Related
I have a table like this:
TRX_NUMBER is an invoice number, and the table also stores the return number linked to an invoice.
I want to select from the table and join it to the same table, using CUSTOMER_TRX_ID and PREVIOUS_CUSTOMER_TRX_ID as the connection (ON).
And this is the result I want.
Can you help me with this?
Here's one option:
Sample data:
SQL> with invoice (customer_trx_id, trx_number, previous_customer_trx_id) as
       (select 81196, 'ARR05-09', 22089 from dual union all
        select 22089, 'IJU86-09', null from dual union all
        select 13931, 'IJU07-09', null from dual
       )
Query begins here:
     select a.trx_number, b.trx_number as retur
     from invoice a left join invoice b on a.customer_trx_id = b.previous_customer_trx_id
     where not exists (select null
                       from invoice c
                       where c.customer_trx_id = a.previous_customer_trx_id);
TRX_NUMBER RETUR
--------------- --------
IJU86-09 ARR05-09
IJU07-09
SQL>
Use aliases to reference the same table multiple times.
For example, given a table INVOICE:
SELECT t1.TRX_NUMBER AS TRX_NUMBER, t2.TRX_NUMBER AS RETUR
FROM INVOICE t1
LEFT JOIN INVOICE t2 ON t1.CUSTOMER_TRX_ID = t2.PREVIOUS_CUSTOMER_TRX_ID
If you want only one level, add a condition:
where not exists (select null
from invoice t3
where t3.customer_trx_id = t1.previous_customer_trx_id)
or exclude everything that has a previous number - meaning those rows are already one level lower:
where t1.PREVIOUS_CUSTOMER_TRX_ID is null
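Putting the alias join and the simpler IS NULL filter together (a sketch against the same INVOICE data as above):
SELECT t1.TRX_NUMBER AS TRX_NUMBER, t2.TRX_NUMBER AS RETUR
FROM INVOICE t1
LEFT JOIN INVOICE t2 ON t1.CUSTOMER_TRX_ID = t2.PREVIOUS_CUSTOMER_TRX_ID
WHERE t1.PREVIOUS_CUSTOMER_TRX_ID IS NULL;
With the sample rows above this returns the same two rows: IJU86-09 with ARR05-09 as its return, and IJU07-09 with none.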
I want to do a LIKE search on two tables. One table has a column of search terms and the other table has the column in which to perform the LIKE searches. Here are the tables:
create table #TableA
(
UserName Varchar(50)
)
create table #TableB
(
Department Varchar(50),
Keyword Varchar(50)
)
Insert Into #TableA VALUES('bob_sales')
Insert Into #TableA VALUES('mary_accounting')
Insert Into #TableA VALUES('sammi_accountant')
Insert Into #TableA VALUES('fred_bestSellerEver123')
Insert Into #TableB VALUES('Accounting', 'accounting')
Insert Into #TableB VALUES('Accounting', 'accountant')
Insert Into #TableB VALUES('Sales', 'sales')
Insert Into #TableB VALUES('Sales', 'seller')
I'd like to run a query that uses LIKE %keyword% and gives me:
bob_sales | Sales
mary_accounting | Accounting
sammi_accountant | Accounting
fred_bestSellerEver123 | Sales
Another method, without join, just for fun:
select department,
(select top 1 username from #tablea a
where a.username like '%' + b.keyword + '%') UserName
from #tableb b
Or, with a plain JOIN:
SELECT
ta.UserName
,tb.Department
FROM TableA ta
JOIN TableB tb
ON ta.UserName LIKE '%' + tb.[keyword] + '%'
/* If needed add COLLATE Latin1_General_CI_AS */
Remarks:
If your data can contain something like sammi_accountant_accounting, you should add DISTINCT to the SELECT statement to avoid duplicates.
For bob_sales_accounting, bob will appear twice because he belongs to 2 groups.
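For instance, a deduplicated version of the join might look like this (a sketch, using the temp-table names from the question's DDL):
SELECT DISTINCT
    ta.UserName
    ,tb.Department
FROM #TableA ta
JOIN #TableB tb
    ON ta.UserName LIKE '%' + tb.Keyword + '%'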
The database is Teradata.
I have two tables which I am trying to join. Below are the table structures. When I join these tables I expect two rows as output, but I get 4 rows. What is the reason for this behavior? A join on three keys should uniquely identify a row, yet I still get 4 rows as output. Any help is appreciated.
TableA
Weekkey|segment|type|users
201501|1|A|100
201501|1|B|100
TableB
Weekkey|segment|type|revenue
201501|1|A|200
201501|1|B|200
when I join these two tables using the following query, I get the following result:
select a.*, b.revenue
from tablea a left join tableb b on a.weekkey=b.weekkey
and a.segment=b.segment
and a.type=b.type
Weekkey|segment|type|users|revenue
201501|1|A|100|200
201501|1|B|100|200
201501|1|A|100|200
201501|1|B|100|200
Using SQL Server, here are DDL and sample data along with the query you posted. The output you state you are getting doesn't happen here:
create table #tablea
(
Weekkey int
, segment int
, type char(1)
, users int
)
insert #tablea
select 201501, 1, 'A', 100 union all
select 201501, 1, 'B', 100
create table #TableB
(
Weekkey int
, segment int
, type char(1)
, revenue int
)
insert #TableB
select 201501, 1, 'A', 200 union all
select 201501, 1, 'B', 200
select a.*
, b.revenue
from #tablea a
left join #tableb b on a.weekkey = b.weekkey
and a.segment = b.segment
and a.type = b.type
drop table #tablea
drop table #TableB
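The query above returns only the expected two rows here, so a likely cause on Teradata is duplicate rows in one of the real tables. That is easy to check with a grouped count - a sketch using the question's column names:
select weekkey, segment, type, count(*) as cnt
from tableb
group by weekkey, segment, type
having count(*) > 1;
If this returns any rows, the join keys are not unique in tableb, and each duplicate will multiply the matching rows from tablea.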
I have two tables in Hive:
Table1: uid, txid, amt, vendor
Table2: uid, txid
Now I need to join the tables on txid, which basically confirms that a transaction was finally recorded. Some transactions will be present only in Table1 and not in Table2.
I need to find the average transaction match rate per user (uid) per vendor. Then I need the average of these averages: add up all the per-user averages and divide them by the number of unique users per vendor.
Let's say I have the data:
Table1:
u1,120,44,vend1
u1,199,33,vend1
u1,100,23,vend1
u1,101,24,vend1
u2,200,34,vend1
u2,202,32,vend2
Table2:
u1,100
u1,101
u2,200
u2,202
Example for vendor vend1:
u1 -> avg transaction find rate = 2 (matches found in both Table1 and Table2) / 4 (total occurrences in Table1) = 0.5
u2 -> avg transaction find rate = 1 / 1 = 1
Avg of avgs = (0.5 + 1) (sum of avgs) / 2 (total unique users) = 0.75
Required output:
vend1,0.75
vend2,1
I can't seem to get both the match count and the total occurrence count from Table1, per user per vendor, in one Hive query. I have got as far as the query below and can't see how to take it further.
SELECT A.vendor, A.uid, count(*) as totalmatchesperuser FROM Table1 A JOIN Table2 B ON A.uid = B.uid AND B.txid = A.txid GROUP BY A.vendor, A.uid
Any help would be great.
I think you are running into trouble with your JOIN. When you join by txid and uid, you lose the total number of uids per group. If I were you, I would add a column of 1's to Table2, named something like success or transaction, and do a LEFT OUTER JOIN. In the joined result you will then have a column holding 1 where there was a completed transaction and NULL otherwise, and you can use a CASE statement to convert those NULLs to 0.
Query:
select vendor
,(SUM(avg_uid) / COUNT(uid)) as avg_of_avgs
from (
select vendor
,uid
,AVG(complete) as avg_uid
from (
select uid
,txid
,amt
,vendor
,case when success is null then 0
else success
end as complete
from (
select A.*
,B.success
from table1 as A
LEFT OUTER JOIN table2 as B
ON B.txid = A.txid
) x
) y
group by vendor, uid
) z
group by vendor
Output:
vend1 0.75
vend2 1.0
B.success in the innermost subquery is the column of 1's that I put into Table2 before the JOIN. If you are curious about CASE statements in Hive, see the Hive language manual.
Amazing and precise answer by GoBrewers14!! Thank you so much. I was looking at it from a wrong perspective.
I made small changes to the query to finally get things done.
I didn't need to add a "success" column to Table2: I checked B.txid in the above query instead of B.success. B.txid will be NULL when a match is not found and non-NULL when one is found, which captures the success and failure conditions by itself, without adding a new column. In the part above it I then mapped NULL to 0 and non-NULL to 1. I also renamed some columns, as Hive was finding the original names ambiguous.
The final query looks like :
select vendr
,(SUM(avg_uid) / COUNT(usrid)) as avg_of_avgs
from (
select vendr
,usrid
,AVG(complete) as avg_uid
from (
select usrid
,txnid
,amnt
,vendr
,case when success is null then 0
else 1
end as complete
from (
select A.uid as usrid,A.vendor as vendr,A.amt as amnt,A.txid as txnid
,B.txid as success
from Table1 as A
LEFT OUTER JOIN Table2 as B
ON B.txid = A.txid
) x
) y
group by vendr, usrid
) z
group by vendr;
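For reference, the same logic can be written with one less nesting level by computing the per-user rate directly in the grouped subquery; the outer AVG is equivalent to SUM(avg_uid) / COUNT(usrid) because there is exactly one row per (vendor, user). A sketch, not tested against the original data:
select vendr, AVG(rate) as avg_of_avgs
from (
  select A.vendor as vendr
        ,A.uid as usrid
        ,AVG(case when B.txid is null then 0 else 1 end) as rate -- per-user match rate
  from Table1 as A
  LEFT OUTER JOIN Table2 as B
    ON B.txid = A.txid
  group by A.vendor, A.uid
) per_user
group by vendr;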
I have a simple problem which gave me a headache.
I need to sort integers in a database table shown in a TDBGrid (it's an ABS Database from ComponentAce) in the following order:
0
1
11
111
121
2
21
211
22
221
and so on
which means every number starting with 1 should appear under 1:
1
11
111
5
55
can anyone help me?
thanks
This should work to get stuff in the right order:
Convert the original number to a string;
Right-pad with zeroes until you have a string of 3 characters wide;
(optional) Convert back to integer.
Then sorting should always work the way you want. Probably it's best to let the database do that for you. In MySQL you'd do something like this:
select RPAD(orderid,3,'0') as TheOrder
from MyTable
order by 1
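With the numbers from the question, the generated sort keys would come out as follows (original value -> padded key); sorting the keys yields exactly the requested order:
0   -> 000
1   -> 100
11  -> 110
111 -> 111
121 -> 121
2   -> 200
21  -> 210
211 -> 211
22  -> 220
221 -> 221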
I just ran this in SQL Server Management Studio - note I mixed up the rows in the input so they were not in sorted order:
create table #temp( ID Char(3));
insert into #temp (ID)
select '111' union
select '221' union
select '0' union
select '21' union
select '1' union
select '11' union
select '211' union
select '121' union
select '2' union
select '22';
select * from #temp order by ID;
I got the following output:
ID
----
0
1
11
111
121
2
21
211
22
221
(10 row(s) affected)
If you're getting different results, you're doing something wrong. However, it's hard to say what because you didn't post anything about how you're retrieving the data from the database.
Edit: Some clarification by the poster indicates that the display is a TDBGrid attached to a table using the ComponentAce ABS Database. If that is indeed the case, then the answer is to create an index on the indicated column, and then set the table's IndexName property to use that index.
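A sketch of that approach, with hypothetical table and column names (Absolute Database accepts standard CREATE INDEX DDL, but check the component's documentation):
-- MYTABLE and SORTKEY are placeholder names; SORTKEY would hold the
-- padded string key described above, not the raw integer, so that the
-- index order matches the requested one
CREATE INDEX idxSortKey ON MYTABLE (SORTKEY);
Then set the table component's IndexName property to idxSortKey.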
select cast(INT_FIELD as varchar(9)) as I
from TABxxx
order by 1
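This sorts correctly because varchar comparison is lexicographic, which is exactly the requested order. To display the original integer while sorting this way, you can order by the cast expression instead - a sketch against the same (hypothetical) table:
select INT_FIELD
from TABxxx
order by cast(INT_FIELD as varchar(9));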