Access 'Not Equal To' Join

I have two tables, neither with a primary id. The same combination of fields uniquely identifies the records in each and makes the records in the two tables relatable (I think).
I need a query to combine all the records from one table and only the records from the second not already included from the first table. How do I do this using 'not equal to' joins on multiple fields? My results so far only give me the records of the first table, or no records at all.

Try the following:
SELECT ECDSlides.[Supplier Code], ECDSlides.[Supplier Name], ECDSlides.Commodity
FROM ECDSlides LEFT JOIN [Mit Task Details2] ON (ECDSlides.Commodity = [Mit Task Details2].Commodity) AND (ECDSlides.[Supplier Code] = [Mit Task Details2].[Supplier Code])
WHERE [Mit Task Details2].Commodity Is Null;
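That query returns only the ECDSlides rows with no match in [Mit Task Details2]. If the goal is all the rows from one table plus only the unmatched rows from the other, the two halves can be stitched together with UNION ALL; a sketch, assuming [Mit Task Details2] also carries Supplier Code, Supplier Name and Commodity columns:
SELECT [Supplier Code], [Supplier Name], Commodity
FROM ECDSlides
UNION ALL
SELECT M.[Supplier Code], M.[Supplier Name], M.Commodity
FROM [Mit Task Details2] AS M LEFT JOIN ECDSlides AS E
  ON (M.Commodity = E.Commodity) AND (M.[Supplier Code] = E.[Supplier Code])
WHERE E.Commodity Is Null;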

This might be what you are looking for
SELECT fieldA,fieldB FROM tableA
UNION
SELECT fieldA,fieldB FROM tableB
UNION should remove duplicates automatically; UNION ALL would not.
If, for some reason, you get perfect duplicates and they are not removed, you could try this:
SELECT DISTINCT * FROM (
SELECT fieldA,fieldB FROM tableA
UNION
SELECT fieldA,fieldB FROM tableB
) AS subquery
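As a quick illustration of the difference, run the two variants against an identical row (hypothetical literal values; Access would need the SELECTs to read FROM a real table):
SELECT 1 AS fieldA, 'x' AS fieldB
UNION
SELECT 1 AS fieldA, 'x' AS fieldB;   -- one row
SELECT 1 AS fieldA, 'x' AS fieldB
UNION ALL
SELECT 1 AS fieldA, 'x' AS fieldB;   -- two rows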

Related

How to fix DF-JOIN-002 Error in Azure Data Factory (Only two Join conditions allowed)

I have a data flow with a Union on two tables then joining the results of the Union to another table. I keep receiving the following error when I try debugging the pipeline or previewing the data.
DF-JOIN-002 at Join 'Join1'(Line 40/Col 26): Only 2 join condition(s) allowed
I'm basically trying to build a pipeline to automate this query:
SELECT DISTINCT k.acct_id, s.Id, Email, FirstName, LastName FROM table_3 s
INNER JOIN
( (SELECT acct_id, event_date FROM table_1)
UNION (SELECT acct_id, event_date FROM table_2)) k
ON k.acct_id = s.Archtics_acct_id__c
WHERE event_date = 'xxxx-xx-xx'
I figured it out after some time. I had to delete the Join activity and add it again; it was still linked to another source, even though only two sources were selected in the join settings.

Populating Fact Tables (Data Warehouse) and Querying

I am not sure how to query my fact tables (covid and vaccinations). I populated the dimensions with dummy data; am I supposed to leave the fact tables empty? As far as I know, they would get populated when I write the queries.
I am not sure how to query the tables; I have tried different things, but I get an empty result.
Below is a link to the schema.
I want to find out the "TotalDeathsUK" (fact table COVID) for the last year caused by each "Strain" (my strain table has 3 strains in total).
You can use MERGE to populate your fact table COVIDFact:
MERGE INTO factcovid
USING (
    SELECT centerid,
           dateid,
           patientid,
           strainid
    FROM   yourstagingfacttable
) AS f
ON factcovid.centerid = f.centerid AND factcovid.dateid = f.dateid ... -- the join columns
-- when matched: do nothing (simply omit the WHEN MATCHED clause)
WHEN NOT MATCHED THEN
    INSERT (centerid, dateid, patientid, strainid)
    VALUES (f.centerid, f.dateid, f.patientid, f.strainid);
And for VaccinationsFact:
MERGE INTO vaccinations
USING (
    SELECT centerid,
           dateid,
           patientid,
           vaccineid
    FROM   yourstagingfacttable
) AS f
ON vaccinations.centerid = f.centerid -- join condition(s)
-- when matched: do nothing (omit the WHEN MATCHED clause)
WHEN NOT MATCHED THEN
    INSERT (centerid, dateid, patientid, vaccineid)
    VALUES (f.centerid, f.dateid, f.patientid, f.vaccineid);
For the TotalDeathsUK measure:
SELECT S.[Name] AS Strain, COUNT(CF.PatientID) AS [Count of Deaths]
FROM CovidFact AS CF
LEFT JOIN Strain AS S ON S.StrainID = CF.StrainID
LEFT JOIN Time AS T ON CF.DateID = T.DateID
LEFT JOIN TreatmentCenter AS TR ON TR.CenterID = CF.CenterID
LEFT JOIN City AS C ON C.CityID = TR.CityID
WHERE C.Country LIKE 'UK' AND T.Year = 2020
AND Result LIKE 'Death' -- you should add a Result column to check whether the patient survived or died
GROUP BY S.[Name]

Join tables in Hive using LIKE

I am joining tbl_A to tbl_B, on column CustomerID in tbl_A and column Output in tbl_B, which contains the customer ID. However, tbl_B has other information in related rows that I do not want to lose when joining. I tried to join using LIKE, but I lost the rows that did not contain a customer ID in the Output column.
Here is my join query in Hive:
select a.*, b.Output from tbl_A a
left join tbl_B b
On b.Output like concat('%', a.CustomerID, '%')
However, I lose the other rows from the output.
You could also achieve the objective by a simple hive query like this :)
select a.*, b.Output
from tbl_A a, tbl_B b
where b.Output like concat('%', a.CustomerID, '%')
I would suggest first extracting all IDs from the free-form field (in your case the 'Output' column in table B) into a separate table. Then join that table of IDs back to table B, so each row is populated with its ID, and finally join this second, ID-enriched table B to table A.
Hope this helps.
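A sketch of that approach in Hive, assuming the customer ID is the first run of digits inside Output (the regex pattern and the staging table name tbl_B_with_id are made up for illustration):
-- extract the ID once into a staging table
CREATE TABLE tbl_B_with_id AS
SELECT b.*,
       regexp_extract(b.Output, '([0-9]+)', 1) AS extracted_customer_id
FROM tbl_B b;
-- then a plain equi-join, keeping every tbl_B row
SELECT b.*, a.*
FROM tbl_B_with_id b
LEFT JOIN tbl_A a
  ON a.CustomerID = b.extracted_customer_id;
Unlike the LIKE join, the second step is an equi-join, which Hive can execute efficiently.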

Hive join query to list columns from only one table

I am writing a hive query to join two tables; table1 and table2. In the result I just need all columns from table1 and no columns from table2.
I know the solution where I can select all the columns manually by specifying table1.column1, table1.column2, and so on in the SELECT statement. But I have about 22 columns in table1. Also, I have to do the same for multiple other tables, and it's a painful process.
I tried using "SELECT table1.*", but I get a parse exception.
Is there a better way to do it?
From Hive 0.13 onwards, the following query syntax works:
SELECT a.* FROM a JOIN b ON (a.id = b.id)
This query will select all columns from a. So instead of typing all the column names (making the query cumbersome), it is a better idea to use tablealias.*

DB2 joins difficulties

I have the following situation (simplified):
2 BiTemp Tables
basicdata (id, btmp_tsd, name, prename)
extendeddata (id, btmp_tsd, basicid, codename, codevalue)
In extendeddata, there can be multiple entries for one basicdata row, each with a different codename and value.
I have to create an SQL to select all rows which have changed since a specified time. For the basicdata table this is relatively simple:
SELECT ID, BTMP_TSD, NAME, PRENAME
FROM BASICDATA BD
WHERE BTMP_TSD =
(SELECT MAX(BTMP_TSD)
FROM BASICDATA BD2
WHERE BD2.ID = BD.ID
AND BD2.BTMP_TSD > :MINTSD
AND BD2.BTMP_TSD <= :MAXTSD
)
ORDER BY ID
WITH UR
Now I need to join on the second table to get the codevalue for the codename 'test'. The problem is that it may not exist; in that case, the row should be returned anyway. But if there is a row that is not within the time range, I should not get a result.
I hope I was able to explain my issue. Joins are one of the things I still don't see through...
Edit:
Okay here's a sample
basicdata:
id,btmp_tsd,name,prename
1,2013-05-25,test,user
2,2013-06-26,user,two
3,2013-06-26,peter,hans
1,2013-06-20,test,us3r
2,2013-10-30,us3r,two
extendeddata:
id,btmp_tsd,basicid,codename,codevalue
1,2013-05-25,1,superadmin,1
2,2013-06-26,3,admin,1
3,2013-11-25,1,superadmin,0
Okay, now having these entries, if I want all user ids which have had any changes since 2013-10-01, I should get:
User1 (because the extendeddata superadmin entry had a change)
User2 (had a name change, and I want him even though he has no entry in the extendeddata table)
not User3 (he has entries in both tables, but they are not in the specified range)
The following query should do what you want.
select *
from basicdata b left outer join extendeddata e on b.id=e.basicid
where b.btmp_tsd >= '2013-10-01'
or e.btmp_tsd >= '2013-10-01'
DISCLAIMER: I didn't test the sql. So syntax might not be 100% perfect.
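If the codevalue for codename 'test' should be carried along as well (and the row kept even when no such entry exists), one option is a second left join with that filter placed in the ON clause rather than the WHERE clause; a sketch, untested like the query above:
select distinct b.id, b.btmp_tsd, b.name, b.prename, t.codevalue as test_codevalue
from basicdata b
left outer join extendeddata e on b.id = e.basicid
left outer join extendeddata t on b.id = t.basicid
                              and t.codename = 'test'
where b.btmp_tsd >= '2013-10-01'
   or e.btmp_tsd >= '2013-10-01'
Putting t.codename = 'test' in the ON clause keeps basicdata rows that have no 'test' entry; putting it in the WHERE clause would turn the left join back into an inner join.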
