Want to create a new table comparing two tables using a stored procedure - stored-procedures

I have three tables:
1) First table -> illnessarea
ID  Area
1   Heart
2   Ear
2) Second table -> Specialisation
ID  Specialisation
12  Cardiovascular
3) Temp table
Areaname  Specialisationname
Heart     Cardiovascular
I want to produce a new table using a stored procedure:
AreaId  SpecializationID
1       12
I don't have much experience with stored procedures.
Please help me.
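One possible shape for such a procedure, as a minimal sketch only: it assumes SQL Server-style syntax, that the temp table is called TempTable, and that the new table can be named AreaSpecialisationMap (both names are placeholders). The join simply resolves the names stored in the temp table back to their IDs:
CREATE PROCEDURE BuildAreaSpecialisationMap
AS
BEGIN
    -- Look up each name pair from the temp table and keep only the IDs
    SELECT ia.ID AS AreaId,
           s.ID  AS SpecializationID
    INTO   AreaSpecialisationMap            -- the new table (placeholder name)
    FROM   TempTable t                      -- the temp table from the question
    JOIN   illnessarea ia   ON ia.Area = t.Areaname
    JOIN   Specialisation s ON s.Specialisation = t.Specialisationname;
END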

Related

Looking up data in an Oracle (12.1) table using keys from a text file

I have a table with approximately 8 million rows in it. It has a uniqueness constraint on a column called Customer_Identifier. This is a varchar(10) field; it is not the primary key, but it is unique.
I wish to retrieve some customer rows from this table using SQL Developer. I have been given a text file with each record containing a search key value in columns 1-10. This query will need to be reused a few times, with different customer_identifier values. Sometimes I will be given a few customer_identifier values (<1000 of them). Sometimes many (between 1000 and 10000 of them). For the times when I want fewer than 1000 values, it's pretty straightforward to use an IN clause. I can edit the text file to wrap the keys in quotes and insert commas as appropriate. But there is a hard limit of 1000 values in an IN clause.
I only have read rights to the database, so creating and managing a new physical table is out of the question :-(.
Is there a way that I can treat the text file as a table in Oracle 12.1, and thus use it to join to my customer table on the customer_identifier column?
Brgds
Chris
Yes, you can treat a text file as an external table. But you may need DBA assistance to create a new directory, if you don't have access to a directory defined in the database.
Thanks to Oracle Base
**Create a directory object pointing to the location of the files.**
CREATE OR REPLACE DIRECTORY ext_tab_data AS '/data';
**Create the external table using the CREATE TABLE..ORGANIZATION EXTERNAL syntax. This defines the metadata for the table describing how it should appear and how the data is loaded.**
CREATE TABLE countries_ext (
  country_code      VARCHAR2(5),
  country_name      VARCHAR2(50),
  country_language  VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_data
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      country_code      CHAR(5),
      country_name      CHAR(50),
      country_language  CHAR(50)
    )
  )
  LOCATION ('Countries1.txt','Countries2.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
**Once the external table is created, it can be queried like a regular table.**
SQL> SELECT *
2 FROM countries_ext
3 ORDER BY country_name;
COUNT COUNTRY_NAME COUNTRY_LANGUAGE
----- ---------------------------- -----------------------------
ENG England English
FRA France French
GER Germany German
IRE Ireland English
SCO Scotland English
USA Unites States of America English
WAL Wales Welsh
7 rows selected.
SQL>
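Applied to the original question, the same approach would look roughly like this. The customer table name (customers), the file name, and the external table name are placeholders; since the key occupies columns 1-10 of each record, a fixed POSITION clause is used. As noted above, creating the directory and the external table still requires privileges, so DBA help may be needed.
CREATE TABLE customer_keys_ext (
  customer_identifier VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_data
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS (
      customer_identifier POSITION(1:10) CHAR(10)
    )
  )
  LOCATION ('customer_keys.txt')
)
REJECT LIMIT UNLIMITED;
-- Join the keys from the file to the 8-million-row table
SELECT c.*
FROM   customers c
JOIN   customer_keys_ext k
  ON   k.customer_identifier = c.customer_identifier;
Reusing the query with a different set of keys then only means replacing the file in the directory; the external table definition and the join stay the same.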

Qlikview - join two tables

I need to join two tables in QlikView to get a result.
Table:
I need to join these two tables to get a result table like this
Any idea? Can I use a crosstable, and how?
For Table1 you can use the CrossTable functionality to "rotate" the table while keeping the first column.
For example:
CrossTable(Location, Quantity)
Load
Reason,
LocA,
LocB
From
[Data.xlsx] (ooxml, embedded labels, table is Table1)
;
The result table after this will be:
Location Reason Quantity
LocA R1 5
LocA R2 4
LocA R3 5
LocA R4 3
LocB R1 2
LocB R2 2
LocB R3 3
LocB R4 5
(you can learn more about CrossTable at Qlik's help site - CrossTable)
After having Table1 in this format you can create a composite key (as x3ja suggested). A composite key is basically two (or more) fields concatenated. In your case the join between the tables should be on two fields - Location and Reason.
// CrossTable the data to get it in correct format
Table1_Temp:
CrossTable(Location, Quantity)
Load
Reason,
LocA,
LocB
From
[Data.xlsx] (ooxml, embedded labels, table is Table1)
;
// Resident load to form the composite key
// based on Location and Reason fields
Table1:
Load
Location & '|' & Reason as Key,
Quantity
Resident
Table1_Temp
;
// We don't need the Table1_Temp table anymore
Drop Table Table1_Temp;
//Load the second table and create the same composite key
Table2:
Load
Location & '|' & Reason as Key,
Location,
Reason,
Answer
From
[Data.xlsx] (ooxml, embedded labels, table is Table2)
;
After the reload your data model will look like:
And the data:
Notice that the values for Answer, Location, Reason are null in the bottom two rows. This is because the data in Table2 (based on your screenshots) doesn't contain the combinations LocB/R2 and LocA/R4, but Table1 does.
If you want to keep only the combinations that are present in both tables then the approach is similar but with two differences:
Table2 should be loaded first
use the keep function to exclude the non-common records from being loaded into Table1
(keep at Qlik's help site - keep)
If you want to see the script in action just comment the first tab and uncomment the second one in the example qvw
There are a couple of ways you could do this.
1) Using association. Load Table 1 twice and concatenate, creating a composite key. You'd end up with fields ReasonLocation and Quantity. Then load Table 2 creating the same composite key, giving you ReasonLocation, Location, Reason & Answer. The tables would then associate on that composite key.
2) Using a join. Load Table 2, then left join in Table 1 based on Reason, and pick the quantity with an if statement like If([Location] = 'LocA', [LocA], [LocB]). That may need you to load it into a temp table first and do the if statement in a resident load (see the sketch below).
You could also combine the two and join the tables in #1 based on the ReasonLocation field.
Hope that helps - sorry it's not fully worked through...
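A rough load-script sketch of option 2, assuming the same Data.xlsx layout used in the answer above (Table2_Temp and Result are placeholder names):
// Load Table2 first: Location, Reason, Answer
Table2_Temp:
Load
Location,
Reason,
Answer
From
[Data.xlsx] (ooxml, embedded labels, table is Table2)
;
// Join Table1 in on Reason, bringing LocA and LocB onto each row
Left Join (Table2_Temp)
Load
Reason,
LocA,
LocB
From
[Data.xlsx] (ooxml, embedded labels, table is Table1)
;
// Pick the right quantity per row, then drop the working table
Result:
Load
Location,
Reason,
Answer,
If(Location = 'LocA', LocA, LocB) as Quantity
Resident Table2_Temp
;
Drop Table Table2_Temp;
Option 1 is essentially what the composite-key script shown earlier in this thread does.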

create new field based on multiple resident tables

Given multiple in-resident tables, I'd like to create a new field based on fields in different tables.
table1:
LOAD * INLINE [
id1,val1
a1,car1
a2,car1
];
table2:
LOAD * INLINE [
id2,id1,val2
b1,a1,type1
b2,a2,type2
];
table3:
LOAD * INLINE [
id3,id2,val3
c1,b1,mfr1
c2,b2,mfr2
];
For the sake of argument, assume table1 has ~1M rows, table2 ~1K rows, and table3 ~10 rows. I'd like to create a new field that is either added to table1 or perhaps in a new table linked by id1, resulting in:
id1 val1 newval
a1 car1 car1type1mfr1
a2 car2 car2type2mfr2
Efforts:
newtable:
load val1 & val2 & val3 as newval;
No errors but no newtable or newval.
newtable:
left join (table2)
load val1&val2 as newval resident table1;
Errs with Field not found - <val2>. (Obviously I want to extend this to include table3, but if I can't do it with 2 tables then 3 just won't work.)
The real data includes seven tables for this new field (lots of foreign keys). The data is being loaded from QVDs (the data is shared across multiple QVWs), closely mimicking a SQL database; none of the tables are row-wise redundant, so combining db tables into a single QVD table may be inefficient. (Plus refreshing the data is much easier one table at a time.) A colleague suggested I load-join each of the QVDs into one huge table, but that doesn't seem right (nor have I successfully chain-joined even a few tables).
Using QV 12.0 desktop on win10-x64 for deployment on QVS.
TheBudac's answer got part of the way there, but it only merged two of the three tables. Most of my problems stemmed from incorrect multi-table joins. My confusion was with the "join" syntax in Qlik; the docs make sense to me now that I see what's happening, but it wasn't as obvious to me initially.
Here's what eventually worked best for me:
temptable:
load id1 as id1a, val1 as val1a
resident table1;
left join (temptable)
load id2 as id2a, id1 as id1a, val2 as val2a
resident table2;
left join (temptable)
load id2 as id2a, val3 as val3a
resident table3;
newtable:
load id1a as id1,
val1a & val2a & val3a as newval
resident temptable;
drop table temptable;
This produced these tables:
and this tree:
Quick walk-through:
Because I'm using left join, I start with the largest table; other joins would dictate different starting requirements. In my case, table1 is the largest, so I start with that:
temptable:
load id1 as id1a, val1 as val1a
resident table1;
Each join should be against the temporary table we're working on. Renaming fields is important so that Qlik doesn't create unnecessary synthetic keys.
left join (temptable)
load id2 as id2a, id1 as id1a, val2 as val2a
resident table2;
The use of resident is important in that it does not re-query (SQL) or re-load (QVD or other file).
Repeat with the third and further tables, always joining against temptable with the new table.
Now we use that temporary table to create our new table. You can choose to augment table1 with this data instead (certainly feasible), but for me since I'm generating several new calculated fields (not shown here), it made sense to keep them logically separated.
newtable:
load id1a as id1,
val1a & val2a & val3a as newval
resident temptable;
drop table temptable;
Note that I rename the relevant key back to its original name so that this table correctly links to table1. Dropping the temporary table helps clean things up, but it does no harm to keep it around (and doing so helps in debugging/learning).
Your join is the wrong way round, and QlikView can only work with results after they have been joined, not in process, so you will have to do another resident load to get the values concatenated into NewVal. The drop table commands are important or you will end up with massive unintentional synthetic tables.
newtable:
load * resident table1;

left join (newtable)
load * resident table2;
drop table table2;

Resulttable:
load id1,
     val1 & val2 as NewVal
resident newtable;
drop table newtable;

Change Data Capture with table joins in ETL

In my ETL process I am using Change Data Capture (CDC) to discover only rows that have been changed in the source tables since the last extraction. Then I do the transformation only for these rows. The problem is when I have, for example, 2 tables which I want to join into one dimension, and only one of them has changed. For example I have tables Countries and Towns as follows:
Countries:
ID Name
1 France
Towns:
ID Name Country_ID
1 Lyon 1
Now let's say a new row is added to the Towns table:
ID Name Country_ID
1 Lyon 1
2 Paris 2
The Countries table has not been changed, so CDC for these tables shows me only the row from the Towns table. The problem is that when I do the join between Countries and Towns, there is no row in the Countries change set, so the join will result in an empty set.
Do you have an idea how to solve it? Of course there might be more difficult cases, involving 3 or more tables and successive joins.
This is a typical problem found when doing Realtime Change-Data-Capture, or even Incremental-only daily changes.
There are multiple ways to solve this.
One way would be to do your joins on the natural keys in the dimension or mapping table, to get the associated country (SELECT distinct country_name, [..other attributes..] from dim_table where country_id = X).
Another alternative would be to do the join as part of the change capture process - when a row is loaded to towns, a trigger goes off that loads the foreign key values into the associated staging tables (country, etc).
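A minimal sketch of that trigger idea, assuming Oracle syntax and hypothetical staging tables stg_towns and stg_countries:
CREATE OR REPLACE TRIGGER trg_towns_capture
AFTER INSERT OR UPDATE ON towns
FOR EACH ROW
BEGIN
  -- Stage the changed town row itself
  INSERT INTO stg_towns (id, name, country_id)
  VALUES (:NEW.id, :NEW.name, :NEW.country_id);
  -- Also stage the referenced country so the later join is not empty
  INSERT INTO stg_countries (id, name)
  SELECT c.id, c.name
  FROM   countries c
  WHERE  c.id = :NEW.country_id
  AND    NOT EXISTS (SELECT 1 FROM stg_countries s WHERE s.id = c.id);
END;
/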
There is a lot I could babble on about for more information, but I will be specific to what is in your question. I would suggest the following to get the results...
1st pass: everything that matches via the join...
Union All
2nd pass: all towns where there isn't a matching country
(a left outer join with a where condition that
requires the ID in the Countries table to be null/missing).
You would default the Country ID value in that unmatched join to something designated as an "Unmatched Value". Typically 0 or -1 is used, or a series of standard negative numbers that you can assign descriptions to later to identify why the data is bad; for your example, -1 could be "Found Town Without Country".
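A rough SQL sketch of that two-pass pattern; towns_changes stands for the CDC change set for the Towns table, and the column names are illustrative:
-- 1st pass: towns whose country can be matched
SELECT t.id   AS town_id,
       t.name AS town_name,
       c.id   AS country_id
FROM   towns_changes t
JOIN   countries c ON c.id = t.country_id
UNION ALL
-- 2nd pass: towns with no matching country, defaulted to the
-- "Unmatched Value" key (-1 = "Found Town Without Country")
SELECT t.id,
       t.name,
       -1 AS country_id
FROM   towns_changes t
LEFT OUTER JOIN countries c ON c.id = t.country_id
WHERE  c.id IS NULL;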

Stored procedure in Oracle with Case and When

I have a scenario with 5 different tables:
Table 1 - Product, Columns - ProductId, BatchNummer, Status, GroupId, OrderNummer
Table 2 - ProductGroup, Columns - GroupId, ProductType, Description
Table 3 - Electronics, Columns - EId, Description, BatchNummer, OrderNummer, OrderData
Table 4 - Manual, Columns - MId, Description, Status, OrderNummer, ProcessStep
Table 5 - ProcessedProduct, columns same as Product with one extra column of datetime
Now, according to the business flow, I need to read all the data from the Product table and check whether the underlying table (Electronics or Manual, which depends on the ProductType column of ProductGroup) has the OrderNummer value; if so, insert a record into table 5 "ProcessedProduct", else skip the record.
For this requirement, I want to create a procedure. But I am stuck on how to check which underlying table (Electronics/Manual) I have to refer to, and how it can be achieved.
Moreover, how should I write the loop for inserting the records?
Note: I cannot change the tables' schema.
With a PL/SQL procedure you can just switch within a LOOP, but you don't need an imperative algorithm if you just need to check whether OrderNummer is in either Electronics or Manuals.
Supposing the detail table is chosen by the ProductType value, either "Electronics" or "Manuals", you could:
INSERT INTO ProcessedProduct (ProductId, BatchNummer, Status, GroupId, OrderNummer, TS)
SELECT ProductId, BatchNummer, Status, GroupId, OrderNummer, SYSDATE
FROM Product p
INNER JOIN ProductGroup pg USING (GroupId)
WHERE EXISTS (
SELECT NULL FROM Electronics e
WHERE p.OrderNummer = e.OrderNummer
AND pg.ProductType = 'Electronics'
UNION
SELECT NULL FROM Manuals m
WHERE p.OrderNummer = m.OrderNummer
AND pg.ProductType = 'Manuals');
Plain SQL is generally faster than row-by-row PL/SQL, and a "WHERE EXISTS" condition is usually an efficient way to express this kind of check.
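If the result still has to be packaged as a stored procedure, a minimal sketch would simply wrap that statement (the procedure name is illustrative; transaction control is left to taste):
CREATE OR REPLACE PROCEDURE process_products AS
BEGIN
  INSERT INTO ProcessedProduct (ProductId, BatchNummer, Status, GroupId, OrderNummer, TS)
  SELECT ProductId, BatchNummer, Status, GroupId, OrderNummer, SYSDATE
  FROM Product p
  INNER JOIN ProductGroup pg USING (GroupId)
  WHERE EXISTS (
    SELECT NULL FROM Electronics e
    WHERE p.OrderNummer = e.OrderNummer
    AND pg.ProductType = 'Electronics'
    UNION
    SELECT NULL FROM Manuals m
    WHERE p.OrderNummer = m.OrderNummer
    AND pg.ProductType = 'Manuals');
  COMMIT;  -- or leave the commit to the caller
END process_products;
/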
