Query from multiple tables using inner joins takes too long to execute

I have an SQL query below that is taking too long to execute. Kindly check the query and optimise it for me. I need to count the number of files from the file_actions table, combining it with three other tables using inner joins:
SELECT count(*) AS total
FROM (SELECT t1.cfid AS cfid, MAX(t1.timestamp) d
      FROM file_actions t1
      INNER JOIN case_files t2 ON t2.cfid = t1.cfid
      INNER JOIN case_file_allocations t3 ON t1.cfid = t3.cfid
      INNER JOIN cbeta_user t4
      WHERE t4.id = t1.user_id
        AND t4.team_leader = '$user'
        AND t2.closed <> 'yes'
        AND t2.deleted <> 1
        AND t3.reallocated <> 'yes'
      GROUP BY t1.cfid) a
WHERE d < '$yesterday'
I think it is the inner joins that cause the query to take so long to execute, which slows down the system.

Try moving the WHERE d < '$yesterday' filter into the subquery a. Since d is the aggregate MAX(t1.timestamp), it has to go into a HAVING clause rather than the WHERE. Also give the cbeta_user join a proper ON condition instead of filtering it in the WHERE. If your tables aren't indexed on the columns you use for conditions and joins, create indexes on them.
SELECT count(*) AS total
FROM (SELECT t1.cfid
      FROM file_actions t1
      INNER JOIN case_files t2 ON t2.cfid = t1.cfid
      INNER JOIN case_file_allocations t3 ON t1.cfid = t3.cfid
      INNER JOIN cbeta_user t4 ON t4.id = t1.user_id
      WHERE t4.team_leader = '$user'
        AND t2.closed <> 'yes'
        AND t2.deleted <> 1
        AND t3.reallocated <> 'yes'
      GROUP BY t1.cfid
      HAVING MAX(t1.timestamp) < '$yesterday') a
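For the indexes, something along these lines may help. This is only a sketch with made-up index names; pick the columns that match your actual join and filter conditions.
CREATE INDEX idx_file_actions_cfid ON file_actions (cfid, user_id, timestamp); -- cfid for the join/GROUP BY, user_id for the cbeta_user join, timestamp for MAX()
CREATE INDEX idx_cbeta_user_leader ON cbeta_user (team_leader, id);            -- team_leader filter plus the id join column
CREATE INDEX idx_cfa_cfid ON case_file_allocations (cfid);                     -- join column for case_file_allocations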
Recommended reading: http://www.code-fly.com/5-tips-to-make-your-sql-queries-faster/

Related

Hive-How to join tables with OR clause in ON statement

I've got the following problem. In my Oracle DB I have a query as follows:
select * from table1 t1
inner join table2 t2 on
(t1.id_1= t2.id_1 or t1.id_2 = t2.id_2)
and it works perfectly.
Now I need to rewrite the query in Hive. I've seen that an OR clause doesn't work in JOINs in Hive (error: 'OR not supported in JOIN').
Is there any workaround for this other than splitting the query into two separate queries and unioning them?
Another way is to union two joins, e.g.,
select * from table1 t1
inner join table2 t2 on
(t1.id_1= t2.id_1)
union all
select * from table1 t1
inner join table2 t2 on
(t1.id_2 = t2.id_2)
Hive does not support non-equi joins. A common approach is to move the join's ON condition to the WHERE clause. In the worst case it becomes a CROSS JOIN plus a WHERE filter, like this:
select *
from table1 t1
cross join table2 t2
where (t1.id_1= t2.id_1 or t1.id_2 = t2.id_2)
It may be slow because the CROSS JOIN multiplies the rows.
You can try two LEFT JOINs instead of the CROSS JOIN and filter out the cases where both conditions are false (to mimic the INNER JOIN in your query). This may perform faster than the cross join because it does not multiply all the rows. Columns selected from the second table can then be calculated using NVL() or COALESCE().
select t1.*,
       nvl(t2.col1, t3.col1) as t2_col1 -- take from t2; if NULL, take from t3
       -- ...calculate all other columns from the second table in the same way
from table1 t1
left join table2 t2 on t1.id_1 = t2.id_1
left join table2 t3 on t1.id_2 = t3.id_2
where (t1.id_1 = t2.id_1 OR t1.id_2 = t3.id_2) -- only joined records allowed, like in your INNER JOIN
As you asked, no UNION is necessary.

Join / Aggregate Function Query

I have the following code and output:
SELECT CustomerCategoryName, COUNT(a.CustomerID) AS CustomersInThisCategory
FROM Sales.Customers AS a
RIGHT JOIN Sales.CustomerCategories AS b on a.CustomerCategoryID = b.CustomerCategoryID
GROUP BY CustomerCategoryName
ORDER BY CustomersInThisCategory DESC
This generates the following output:
When I add the following COUNT aggregate function and INNER JOIN:
SELECT CustomerCategoryName, COUNT(a.CustomerID) AS CustomersInThisCategory, COUNT(c.OrderID) AS Orders
FROM Sales.Customers AS a
RIGHT JOIN Sales.CustomerCategories AS b on a.CustomerCategoryID = b.CustomerCategoryID
INNER JOIN Sales.Orders AS c ON a.CustomerID = c.CustomerID
GROUP BY CustomerCategoryName
ORDER BY CustomersInThisCategory DESC
The output changes to:
I am not sure why the CustomersInThisCategory column changes to the same values as the Orders column. I'm also not sure why the rows with 0 values in the first output are removed in the second query, since I still have the RIGHT JOIN present.
Any feedback would be much appreciated.
For your first query, COUNT(DISTINCT a.CustomerID) should give you the unique customer IDs in a category; once you join to Orders, each customer appears once per order, so the plain COUNT(a.CustomerID) ends up counting order rows.
Regarding your second question, the RIGHT JOIN is performed before the INNER JOIN, so the INNER JOIN then strips out the records for which no order match is found, which is why the categories with 0 customers disappear.
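If you want those empty categories back, one option (just a sketch, not the only way) is to combine COUNT(DISTINCT) with a LEFT JOIN to Orders:
SELECT CustomerCategoryName,
       COUNT(DISTINCT a.CustomerID) AS CustomersInThisCategory,
       COUNT(c.OrderID) AS Orders
FROM Sales.Customers AS a
RIGHT JOIN Sales.CustomerCategories AS b ON a.CustomerCategoryID = b.CustomerCategoryID
LEFT JOIN Sales.Orders AS c ON a.CustomerID = c.CustomerID -- LEFT JOIN keeps categories with no orders
GROUP BY CustomerCategoryName
ORDER BY CustomersInThisCategory DESC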
Hope my answer helps.

SQL Server Query Performance - Normal Join vs Subquery

I have two queries that return the same data.
Query 1, which uses a normal join, takes a long time to execute:
SELECT TOP 1000 bigtable.*, tbl1.name, tbl2.name
FROM bigtable
INNER JOIN tbl1 ON bigtable.id1 = tbl1.id1
INNER JOIN tbl2 ON tbl1.id1 = tbl2.id1
ORDER BY bigtable.id DESC
Query 2, which uses a subquery, returns fairly quickly:
SELECT subtable.*, tbl1.name, tbl2.name
FROM (SELECT TOP 1000 * FROM bigtable) subtable
INNER JOIN tbl1 ON subtable.id1 = tbl1.id1
INNER JOIN tbl2 ON tbl1.id1 = tbl2.id1
ORDER BY subtable.id DESC
bigtable contains 100k rows or so. tbl1 is a very small table (less than 10 rows). I would rather not use subqueries. If I skip the order by clause, both queries run quickly. I have tried adding indexes to the fields being joined, adding a DESC index on id etc. but nothing seems to help.
Any help is appreciated!
===> Update:
This turned out to be a non-issue. After creating another table similar to tbl1 with the same rows, I found that Query 1 ran in under a second (with the copied table). Rebuilding the stats on tbl1 fixed it.
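For anyone hitting the same thing, rebuilding the statistics can be as simple as the following (a sketch; adjust to your own table):
UPDATE STATISTICS tbl1 WITH FULLSCAN; -- rebuild stats on the small table with a full scan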
I think that the two queries are not equivalent; try writing the second one as:
SELECT subtable.*, tbl1.name, tbl2.name
FROM (SELECT TOP 1000 * FROM bigtable ORDER BY bigtable.id DESC) subtable
INNER JOIN tbl1 ON subtable.id1 = tbl1.id1
INNER JOIN tbl2 ON tbl1.id1 = tbl2.id1
ORDER BY subtable.id DESC
I expect the expensive operation to be the ordering of the big table, which is now present in both versions.

Redshift - Efficient JOIN clause with OR

I need to join a huge table (10 million plus rows) to a lookup table (15k plus rows) with an OR condition. Something like:
SELECT t1.a, t1.b, nvl(t1.c, t2.c), nvl(t1.d, t2.d)
FROM table1 t1
JOIN table2 t2 ON t1.c = t2.c OR t1.d = t2.d;
This is because table1 can have c or d as NULL, and I'd like to join on whichever is available, leaving out the rest. The query plan says there is a Nested Loop, which I realize is because of the OR condition. Is there a clean, efficient way of solving this problem? I'm using Redshift.
EDIT: I am trying to run this with a UNION, but it doesn't seem to be any faster than before.
If you have a preferred column you can NVL() (aka COALESCE()) them and join on that.
SELECT t1.a, t1.b, nvl(t1.c, t2.c), nvl(t1.d, t2.d)
FROM table1 t1
JOIN table2 t2
ON t1.c = NVL(t2.c,t2.d);
I'd also suggest that you should set the lookup table to DISTSTYLE ALL to ensure that the larger table is not redistributed.
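If the lookup table already exists, one way to make that change is an ALTER (a sketch, using the table name from the example):
ALTER TABLE table2 ALTER DISTSTYLE ALL; -- copy the small lookup table to every node so the big table is not redistributed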
[ Also, 10 million rows isn't big for Redshift. Not trying to be snotty, just saying that we get excellent performance on Redshift even when querying (and joining) tables with hundreds of billions of rows. ]
How about doing two (left) joins? With the small lookup table, performance shouldn't be too bad.
SELECT t1.a, t1.b, nvl(t1.c, t2.c), nvl(t1.d, t3.d)
FROM table1 t1
LEFT JOIN table2 t2 ON t1.d = t2.d and t1.c is null
LEFT JOIN table2 t3 ON t1.c = t3.c and t1.d is null
Your original query only returns rows that match at least one of c or d in the lookup table. If that's not guaranteed, you may need to add filters: for example, for rows in t1 where both c and d are NULL, or where the values are not present in table2.
You don't really need the NULL checks in the joins, but they might make it slightly faster.
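A sketch of such a filter on top of the two-LEFT-JOIN query above, keeping only the rows where one of the lookups actually matched:
SELECT t1.a, t1.b, nvl(t1.c, t2.c) AS c, nvl(t1.d, t3.d) AS d
FROM table1 t1
LEFT JOIN table2 t2 ON t1.d = t2.d AND t1.c IS NULL
LEFT JOIN table2 t3 ON t1.c = t3.c AND t1.d IS NULL
WHERE t2.d IS NOT NULL -- matched via d
   OR t3.c IS NOT NULL; -- matched via c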

PSQL - Select size of tables for both partitioned and normal

Thanks in advance for any help with this; it is highly appreciated.
So, basically, I have a Greenplum database and I want to select the table size for the top 10 largest tables. This isn't a problem using the below:
select
sotaidschemaname schema_name
,sotaidtablename table_name
,pg_size_pretty(sotaidtablesize) table_size
from gp_toolkit.gp_size_of_table_and_indexes_disk
order by 3 desc
limit 10
;
However, I have several partitioned tables in my database, and with the above SQL these show up as all their 'child tables' split up into small fragments (though I know they accumulate to make up the largest 2 tables). Is there a way of making a script that selects tables (partitioned or otherwise) and their total size?
Note: I'd be happy to include some sort of join where I specify the partitioned table name specifically, as there are only 2 partitioned tables. However, I would still need to take the top 10 (where I cannot assume the partitioned table(s) are up there), and I cannot specify any other table names since there are nearly a thousand of them.
Thanks again,
Vinny.
Your friend here is the pg_relation_size() function for getting the relation size; you can then query pg_class, pg_namespace and pg_partitions, joining them together like this:
select schemaname,
       tablename,
       sum(size_mb) as size_mb,
       sum(num_partitions) as num_partitions
from (
    -- map each partition back to its parent table; non-partitioned tables map to themselves
    select coalesce(p.schemaname, n.nspname) as schemaname,
           coalesce(p.tablename, c.relname) as tablename,
           1 as num_partitions,
           pg_relation_size(n.nspname || '.' || c.relname)/1000000. as size_mb
    from pg_class as c
    inner join pg_namespace as n on c.relnamespace = n.oid
    left join pg_partitions as p on c.relname = p.partitiontablename
                                and n.nspname = p.partitionschemaname
) as q
group by 1, 2
order by 3 desc
limit 10;
select * from
(
    -- non-partitioned tables: exclude child partitions and the parents of partitioned tables
    select schemaname, tablename,
           pg_relation_size(schemaname || '.' || tablename) as size_in_bytes
    from pg_tables
    where schemaname || '.' || tablename not in (select schemaname || '.' || partitiontablename from pg_partitions)
      and schemaname || '.' || tablename not in (select distinct schemaname || '.' || tablename from pg_partitions)
    union all
    -- partitioned tables: sum the sizes of all their child partitions
    select schemaname, tablename,
           sum(pg_relation_size(schemaname || '.' || partitiontablename)) as size_in_bytes
    from pg_partitions
    group by 1, 2
) as foo
where size_in_bytes >= 0
order by 3 desc;
