I have a stored procedure which returns multiple tables. I now need a way to get notified whenever a field returned by the SP changes.
The base tables are changed by a number of other sources, ranging from manual inserts to programs.
Do I need to put manual triggers on all tables which are used in the SP or is there a better, more elegant way?
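In most engines the per-table trigger approach is exactly what this comes down to: an audit trigger on each base table the SP reads, writing to a change log you can poll or hook a notification onto. A minimal sketch of the idea using SQLite (table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
CREATE TABLE change_log (
    table_name TEXT, row_id INTEGER,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- one trigger like this per base table used by the SP
CREATE TRIGGER orders_upd AFTER UPDATE ON orders
BEGIN
    INSERT INTO change_log (table_name, row_id) VALUES ('orders', NEW.id);
END;
""")
conn.execute("INSERT INTO orders (status) VALUES ('new')")
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")
rows = conn.execute("SELECT table_name, row_id FROM change_log").fetchall()
print(rows)  # [('orders', 1)]
```

A notifier then only has to watch `change_log` instead of every base table; some engines (SQL Server query notifications, Oracle CQN) can replace the polling step entirely.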
If you are using Teradata, you can use the snippet below after each of your DML statements:
SET lv_activity_count = activity_count;
SET lv_message = ' Number of rows merged in table1 is '|| lv_activity_count ;
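For comparison, the same "rows affected" check outside Teradata is the DB-API `rowcount` attribute; a small sketch with SQLite standing in (table name invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)", [(1, 'a'), (2, 'b')])

cur = conn.execute("UPDATE table1 SET val = 'x' WHERE id <= 2")
# cur.rowcount plays the role of Teradata's ACTIVITY_COUNT
print('Number of rows updated in table1 is', cur.rowcount)
```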
Problem statement: we want to restrict direct access to all the external STAGES for all USERS and DEVELOPERS. We want these STAGES to be accessed only through a stored procedure or table UDF using the LIST command.
We have the following two working solutions:
Solution 1
Write a stored procedure that returns the LIST command result set as a single array or varchar, with a column delimiter (e.g. | ) and a record delimiter (e.g. the newline character).
Run a view after calling the above stored procedure, using the RESULT_SCAN(), LAST_QUERY_ID() and LATERAL SPLIT_TO_TABLE() functions to show the result as flattened, row-level output similar to the original LIST command.
LIMITATIONS
As the view uses the LAST_QUERY_ID and RESULT_SCAN functions, it's mandatory to call the SP immediately before every SELECT on the view; otherwise the SELECT on the view will fail.
Solution 2
Write a stored procedure and create volatile table inside it to store the list command result.
Run the select query on the volatile table created from the above stored procedure.
Both of the above solutions work for us, but with the stored procedure approach the result can't be seen in tabular format with a single call, since Snowflake doesn't support a PL/SQL-style function like "dbms_output.put_line()".
It's always a 2-step solution:
Call stored procedure
View the result from that stored procedure output using the RESULT_SCAN() and LAST_QUERY_ID() functions, or create a table inside the stored procedure and query that table after the stored procedure execution completes.
Expected solution
We want to view the output of a stored procedure or UDF in a single call, with a representation similar to any other query output in the Snowflake Web UI worksheets.
We want to know if there is any way to use the LIST command in a Snowflake UDF or UDTF; any LANGUAGE (SQL or JAVASCRIPT) will work for us.
Alternatively, if we could call a stored procedure (as in the two solutions above) from inside a UDF, that would also meet our need.
Please let us know if there is a possible solution for any of the above.
I did not completely understand the requirement, but are you trying to do something like the below?
CREATE OR REPLACE PROCEDURE LIST_EXTERNAL_STAGE(stage_name varchar)
RETURNS VARIANT
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
var return_stage_rows = [];
var snowstmt = snowflake.createStatement({
    sqlText: 'LIST ' + STAGE_NAME
});
var query_result = snowstmt.execute();
// next() returns false once the last row has been consumed,
// so it both advances and bounds the loop - no row counter needed
while (query_result.next()) {
    // one sub-array per file keeps rows distinguishable in the returned VARIANT
    return_stage_rows.push([query_result.getColumnValue(1),
                            query_result.getColumnValue(2),
                            query_result.getColumnValue(3),
                            query_result.getColumnValue(4)]);
}
return return_stage_rows;
$$;
call LIST_EXTERNAL_STAGE('@EMPLOYEE_STAGE');
I'm using DB2 for z/OS as my database. I have written a stored procedure in DB2 that returns a result set. Currently I declare a cursor and call OPEN cur at the end of the stored procedure. I call the procedure from Java and get the result set using ResultSet resultSet = callableStatement.getResultSet();. My SP works for a few hundred records but fails when the table contains millions of rows:
Caused by: com.ibm.db2.jcc.am.SqlException: DB2 SQL Error:
SQLCODE=-904, SQLSTATE=57011, SQLERRMC=00C90084;00000100;DB2-MANAGED
SPACE WITHOUT SECONDARY ALLOCATION OR US, DRIVER=4.24.92
I want to know
Is it possible to return a CURSOR as an OUT parameter in my SP?
What is the difference between returning data via OPEN cursor and via a CURSOR OUT parameter?
How do I solve the issue when the data is huge?
Will a CURSOR OUT parameter solve the issue?
EDITED (SP detail):
DYNAMIC RESULT SETS 1
P1: BEGIN
-- Declare cursor
DECLARE cursor1 CURSOR WITH RETURN FOR
select a.TABLE_A_ID as TABLE_A_ID,
b.TABLE_B_ID as TABLE_B_ID
from TABLE_A a
left join TABLE_C c on
a.TABLE_A_ID = c.TABLE_A_ID
inner join TABLE_B b on
b.CONTXT_ID = a.CONTXT_ID
AND b.CONTXT_POINT_ID = a.CONTXT_POINT_ID
AND b.CONTXT_ART_ID = a.CONTXT_ART_ID
where c.TABLE_A_ID is null ;
OPEN cursor1;
END P1
Refer to the documentation here for suggestions for handling this specific condition. Consider each suggestion.
Talk with your DBA for Z/OS and decide on the best course of action in your specific circumstances.
As we cannot see your stored-procedure source code, more than one option might exist, especially if the queries in the stored-procedures are unoptimised.
While usually it's easier to allocate more temporary space for the relevant tablespace(s) at the Db2-server end, that may simply temporarily mask the issue rather than fix it. But if the stored-procedure has poor design or unoptimised queries, then fix that first.
An SQL PL procedure can return a CURSOR as an output parameter, but that cursor is usable by the calling SQL PL code. It may not be usable by Java.
You ask "how to solve issue when data is huge", but you don't quantify "huge"; it is a relative term. Code your SQL procedure properly, index every query in that procedure properly, and verify the access plans carefully. Return the smallest possible number of rows and columns in the result set.
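Whatever the server-side fix turns out to be, the Java client should also stream the result set rather than materialise it all at once (e.g. via the JDBC fetch size). The same batching idea, sketched in DB-API terms with SQLite standing in for Db2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])

cur = conn.execute("SELECT id FROM t")
total = 0
while True:
    batch = cur.fetchmany(1000)   # keep only one batch in client memory at a time
    if not batch:
        break
    total += len(batch)           # process the batch here
print(total)  # 10000
```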
Consider simple commands:
INSERT INTO table_name (fldX) VALUES (valueX)
UPDATE table_name SET fldX = valueX WHERE table_id = ? AND version_id = ?
More often we insert or update only some of a table's fields, but all the examples of CRUD stored procedures for INSERT/UPDATE take every updatable column, so to use these SPs we need to fill in all those parameters.
Problems with such SPs arise when:
- user wants to set only a subset (1+) of the fields initially, so he can't use an SP that inserts all fields
- user doesn't always know about all table fields, so he can't use an SP that updates all fields; I don't want to write the same SP for every possible subset of fields
- user shouldn't have access to an SP that updates all fields
- user/user action doesn't always have permission to change the values of all table fields
- user wants to update just one field but has to supply all table fields in CALL {table}_update( ... )
In these examples the user DOES have access to the PRIMARY KEY and VERSION (timestamp/numeric) columns of the record.
Possible solutions:
Solution 0: Keep using INSERT/UPDATE statements
Advantages of this approach:
- it works
Disadvantages of this approach:
- security concerns when allowing users direct access to tables
- no way to save the user's data when DELETEing a record
Solution 1: Send part of the DML statement as a parameter and have the SP use it to build a dynamic SQL statement:
CALL table_update(p_id AS INT, p_changes CHAR(1000))
-- Parameters: p_changes = "fld1 = 1, fld2 = '01.01.2019', fld3 = 'abc'"
st1 = 'UPDATE table SET ' || p_changes || ' WHERE id = ' || p_id;
EXECUTE st1;
Disadvantages of this approach:
- possible SQL injection
- no caching and optimization available on Dynamic SQL
- can't validate the input string, etc.
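The injection risk in Solution 1 can be reduced by whitelisting the column names and binding the values, so only known identifiers (never values) are spliced into the statement. A sketch of that idea in Python with SQLite; the table and columns are invented:

```python
import sqlite3

ALLOWED = {"fld1", "fld2", "fld3"}   # columns the caller may update

def table_update(conn, row_id, changes):
    bad = set(changes) - ALLOWED
    if bad:
        raise ValueError(f"columns not updatable: {bad}")
    cols = sorted(changes)
    set_clause = ", ".join(f"{c} = ?" for c in cols)  # identifiers come from the whitelist only
    params = [changes[c] for c in cols] + [row_id]    # values are bound, never concatenated
    conn.execute(f"UPDATE t SET {set_clause} WHERE id = ?", params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, fld1 INTEGER, fld2 TEXT, fld3 TEXT)")
conn.execute("INSERT INTO t (fld1, fld2, fld3) VALUES (0, '', '')")
table_update(conn, 1, {"fld1": 1, "fld2": "abc"})
print(conn.execute("SELECT fld1, fld2 FROM t WHERE id = 1").fetchone())  # (1, 'abc')
```

The statement text still varies per column subset, so the plan-caching concern remains, but there are only as many distinct statements as column combinations actually used.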
Solution 2: Send unused values as NULLs, and add a string parameter listing the columns whose parameter values (even if NULL) should actually be applied:
PROCEDURE table_update(upd_fields VARCHAR(1000), fld1 CHAR(30), fld2 CHAR(30))
-- If we want to update only fld1 we should execute
CALL table_update('fld1', value1, NULL)
Disadvantages of this approach:
- if the SP's parameter ordering changes in the future, this scheme breaks
- statements are again prepared dynamically, so no caching
- complexity of building the 'ins_fields' or 'upd_fields' parameter
Solution 3: Send updates using XML string with all changes.
PROCEDURE table_update(record_updates XML)
-- or even
PROCEDURE table_edit(table_changes XML) -- all INSERT/UPDATE/DELETE statements together
Advantages of this approach:
- can be used for INSERTs/UPDATEs/DELETEs in 1 SP call
- only 1 parameter to list all updated fields.
Disadvantages of this approach:
- transfers more data (because XML)
- lower performance because of need to parse XML on server
- increases Stored Procedure complexity.
- increases Client code complexity (to create XML string)
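To make Solution 3 concrete, here is a hypothetical change-set format and a sketch of applying it, with Python and SQLite standing in (a real Db2 SP would parse the XML server-side, e.g. with XMLTABLE; every name below is invented):

```python
import sqlite3
import xml.etree.ElementTree as ET

ALLOWED = {"fld1", "fld2"}           # updatable columns (hypothetical whitelist)

CHANGES = """
<changes>
  <update table="t" id="1">
    <col name="fld1">42</col>
  </update>
</changes>
"""

def apply_changes(conn, xml_text):
    for upd in ET.fromstring(xml_text).iter("update"):
        cols = {c.get("name"): c.text for c in upd.iter("col")}
        if not set(cols) <= ALLOWED:
            raise ValueError("column not updatable")
        set_clause = ", ".join(f"{c} = ?" for c in cols)
        # the table name also comes from the XML; in production it must be whitelisted too
        conn.execute(f"UPDATE {upd.get('table')} SET {set_clause} WHERE id = ?",
                     list(cols.values()) + [int(upd.get("id"))])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, fld1 INTEGER, fld2 TEXT)")
conn.execute("INSERT INTO t (fld1, fld2) VALUES (0, '')")
apply_changes(conn, CHANGES)
print(conn.execute("SELECT fld1 FROM t WHERE id = 1").fetchone()[0])  # 42
```

This shows where the complexity lands: the schema, the whitelist, and the parser all live with the SP, while the client only builds the XML string.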
So what solutions am I missing? Which are considered mainstream/best/universal?
How is this problem solved in modern ORMs? Do they use the 1st, 2nd or 3rd approach?
Personally I would like to use 3rd solution (with XML parameter) but I need examples:
1) Examples of schemas for such XML parameters.
2) Examples of Stored Procedures that parse XML parameter
Currently used environment: direct INSERT/UPDATE/DELETE statements executed via SQL pass-through (ODBC) from a Visual FoxPro application. DBMS: DB2 for z/OS v10.
Simple answer: DB2 allows procedure overloading:
PROCEDURE update_widgets(p_id INTEGER, p_color VARCHAR(40) )
PROCEDURE update_widgets(p_id INTEGER, p_quantity INTEGER )
PROCEDURE update_widgets(p_id INTEGER, p_price DECIMAL(9,2) )
PROCEDURE update_widgets(p_id INTEGER, p_quantity INTEGER, p_price DECIMAL(9,2) )
PROCEDURE update_widgets(p_id INTEGER, p_color VARCHAR(40) , p_quantity INTEGER, p_price DECIMAL(9,2) )
. . .
As long as your argument lists are not ambiguous, you can have as many variants as you like.
Another solution is to make all possible updatable columns nullable parameters, and use NULL as a no-update check:
UPDATE widgets SET price = :p_price WHERE id=:p_id AND :p_price IS NOT NULL;
UPDATE widgets SET color = :p_color WHERE id=:p_id AND :p_color IS NOT NULL;
. . .
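The same NULL-means-skip idea can often be collapsed into a single statement with COALESCE, so one prepared statement covers every column subset. A sketch with SQLite (the widgets table mirrors the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, color TEXT, price REAL)")
conn.execute("INSERT INTO widgets (color, price) VALUES ('red', 9.99)")

def update_widgets(conn, wid, color=None, price=None):
    # NULL (None) parameters leave the existing column value untouched
    conn.execute(
        "UPDATE widgets SET color = COALESCE(?, color), "
        "price = COALESCE(?, price) WHERE id = ?",
        (color, price, wid))

update_widgets(conn, 1, price=19.99)          # only price changes
print(conn.execute("SELECT color, price FROM widgets").fetchone())  # ('red', 19.99)
```

The caveat is that NULL can no longer be used to set a column to NULL; if that matters, the separate per-column statements above (or the upd_fields list from Solution 2) are needed.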
In T-SQL I can execute a stored procedure in Query Analyzer and view the contents of its resultset right there in the Query Analyzer window, without knowing anything about the query structure (tables, columns, ...):
--Tsql sample
exec myproc parm1, parm2, parm3
Now I am working with PL/SQL and Toad (which I am relatively new to). I need to view the resultset of a convoluted stored procedure without knowing the number of columns, let alone their data types (this proc is composed of several freaky subqueries, which I can view individually, but they get pivoted and the number of columns varies in the final resultset). How can I view this resultset in Toad when I execute the procedure?
Below is code I have put together for viewing the result set of stored procedures where I know the number of columns and their data types ahead of time. I use a sys_refcursor named x_out, and I also create a temporary table to store the resultset for additional viewing. Is there a way to do this when I don't know how many columns the resultset has? How can I do this with PL/SQL and Toad?
create global temporary table tmpResult (fld1 number, fld2 varchar(50), fld3 date);
declare
x_out sys_refcursor;
tmpfld1 number;
tmpfld2 varchar2(50);
tmpfld3 date;
BEGIN
myschema.mypkg.myproc(parm1, parm2, x_out);
LOOP
FETCH x_out INTO tmpfld1, tmpfld2, tmpfld3;
-- exit immediately after the fetch, otherwise the last row is processed twice
EXIT WHEN x_out%NOTFOUND;
DBMS_OUTPUT.Put_Line ('fld1:-- '||tmpfld1||': fld2:-- '||tmpfld2||': fld3:-- '||tmpfld3);
-- I also insert the result set into a temp table for additional viewing
Insert Into tmpResult values(tmpfld1, tmpfld2, tmpfld3);
END LOOP;
END;
Toad can automatically retrieve the cursor for you. You have a few options, #3 perhaps is the easiest if you just want to see the data.
If you have the myschema.mypkg loaded in the Editor you can hit F11 to execute it. In the dialog that shows select your package member to the left and select the Output Options tab. Check the option to fetch cursor results or use the DBMS Output options. Click OK and the package executes. Depending on your Toad version you'll see a grid at the bottom of Editor for your results or you'll see a PL/SQL results tab. If you see the latter double click the (CURSOR) value in the output column for your output argument. I suggest using the fetch option as long as your dataset isn't so large that it will cause Out of Memory errors.
Locate your package in the Schema Browser, right-click, and choose Execute Package. You'll see the same dialog as mentioned in #1; follow the remaining steps there.
Use a bind variable from an anonymous block. Using your example you'd want something like this...
declare
x_out sys_refcursor;
begin
myschema.mypkg.myproc(parm1, parm2, x_out);
:retval := x_out;
end;
Execute this with F9 in the Editor. In the Bind Variable popup set the datatype of retval to Cursor. Click OK. Your results are then shown in the data grid. Again if your dataset is very large you may run out of memory here.
Another way, shown as one complete anonymous block (cursor declaration first, then the body):
DECLARE
x_out SYS_REFCURSOR;
CURSOR get_columns IS
...
BEGIN
myschema.mypkg.myproc(parm1, parm2, x_out);
FOR rec_ IN get_columns LOOP
DBMS_OUTPUT.put_line(rec_.name || ': ' || rec_.VALUE);
END LOOP;
END;
Bind the cursor to ":data_grid" to show the SP result in Toad's Data Grid pane.
Call the stored procedure in a PL/SQL script:
Run it with F9, not F5.
Toad is the best DB IDE I know. Press F9 and bind it; that is all.
I am trying to do something like this:
merge MembershipTEST as T
using (select OrganisationID, Name From MembershipPending) as S
on T.OrganisationID = S.OrganisationID
and T.Name = S.Name
when not matched then
insert (MembershipID,OrganisationID, Name)
values(
(EXEC [dbo].[spGetNextIntKeyByTableName]
#PKColName = 'MembershipID',#TableName = 'FWBMembership'),
S.OrganisationID,
S.Name );
Basically the identity key comes from an SP. Is it possible?
Update 1: Answer is NO
read the online doc http://msdn.microsoft.com/en-us/library/bb510625%28v=sql.105%29.aspx
VALUES ( values_list) Is a comma-separated list of constants,
variables, or expressions that return values to insert into the target
table. Expressions cannot contain an EXECUTE statement.
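Since VALUES can't contain EXECUTE, the usual workaround is to fetch the generated keys (or stage the source rows) before the MERGE runs. A rough sketch of that two-step shape, with SQLite standing in for SQL Server (SQLite has no MERGE, so a NOT EXISTS insert plays that role, and `next_key` is a stand-in for spGetNextIntKeyByTableName):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MembershipTEST (MembershipID INTEGER, OrganisationID INTEGER, Name TEXT);
CREATE TABLE MembershipPending (OrganisationID INTEGER, Name TEXT);
INSERT INTO MembershipPending VALUES (10, 'Alice'), (20, 'Bob');
""")

_key = 100
def next_key():
    # stand-in for calling the key-generating stored procedure
    global _key
    _key += 1
    return _key

# Step 1: materialise the pending rows that the MERGE would classify as "not matched"
pending = conn.execute("""
    SELECT S.OrganisationID, S.Name FROM MembershipPending S
    WHERE NOT EXISTS (SELECT 1 FROM MembershipTEST T
                      WHERE T.OrganisationID = S.OrganisationID AND T.Name = S.Name)
    ORDER BY S.OrganisationID
""").fetchall()
# Step 2: insert them, generating each key outside the DML statement
conn.executemany("INSERT INTO MembershipTEST VALUES (?, ?, ?)",
                 [(next_key(), org, name) for org, name in pending])
rows = conn.execute("SELECT * FROM MembershipTEST ORDER BY MembershipID").fetchall()
print(rows)  # [(101, 10, 'Alice'), (102, 20, 'Bob')]
```

In T-SQL itself the equivalent would be to stage the keyed rows in a temp table (or rewrite the key generator as a function or SEQUENCE) and then MERGE from that staging table.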
Build an SSIS package: create a new Data Flow task and use two OLE DB sources, one executing the stored procedure and the other selecting from the table you want to MERGE with. Make sure both are ordered by the same columns; in the advanced settings of each OLE DB source, set IsSorted to true and set the sort key positions for the ORDER BY columns. Feed both sources into a MERGE JOIN, send its output to a Conditional Split with the condition ISNULL(OrganisationID), and route the resulting rows to an OLE DB destination.
Sorry for the lack of visuals; I will add them later on my lunch hour, too busy to add them now.