I have the following Informix procedure:
CREATE FUNCTION selectSollmandant(INOUT sollmandat SOLLMANDAT_ROW, inkassodatum INT8) RETURNING SMALLINT
DEFINE ret SMALLINT;
LET ret = 0;
TRACE "Entering select sollmandat " || sollmandat.vers_schein_nr;
PREPARE sollStmt FROM "SELECT s::SOLLMANDAT_ROW FROM sollmandat s WHERE vers_schein_nr = ? ORDER BY lfdnr desc";
DECLARE _sollmCsr CURSOR FOR sollStmt;
IF SQLCODE != 0 THEN
CALL print_to_proto("DECLARE letztZahlCsr " || SQLCODE);
RETURN 0;
END IF;
TRACE "log =========== 1";
OPEN _sollmCsr USING sollmandat.vers_schein_nr;
TRACE "log =========== 2" || SQLCODE;
IF SQLCODE != 0 THEN
TRACE "log =========== 3" || SQLCODE;
CALL print_to_proto("OPEN sollmandat " || SQLCODE);
RETURN 0;
END IF;
TRACE "sollmandant iban is =========== 4" || SQLCODE;
WHILE (1=1) LOOP .... end loop and return...
The problem is that my function returns before reaching the WHILE loop; it never hits log 2, log 3, or log 4.
Can you please help me? I don't see what I am missing.
Thanks for the help.
I managed to solve the issue, but before I get into how I did it, I want to clarify what the code posted above actually means.
CLARIFICATION
Ok, so SOLLMANDAT_ROW is a ROW TYPE that I defined (think of it as a struct-like data object for stored procedures). The above-mentioned function is part of a big UDR, and we use data rows for easier data manipulation. The function should select a row out of my sollmandat table and store it in my custom-defined SOLLMANDAT_ROW.
In order to store the selected rows in ROW TYPEs, the row must be explicitly cast to that specific row type, hence the syntax SELECT s::SOLLMANDAT_ROW FROM ...
ACTUAL SOLUTION
It turned out that the problem was cursor-related: in the context in which I was running the function, the queried table was a synonym, and my code was breaking at the OPEN cursor statement. What I did to get past the issue was to refer to my row data like this:
SELECT ROW([row columns here])::SOLLMANDAT_ROW [rest of select statement]
After making this change, the function is behaving as it should.
I don't really know why Informix does not "like" the syntax of my first SELECT when trying to store rows from synonym tables into specific custom row types. If anybody can provide an explanation, I would be very grateful.
I hope this post will be helpful to others, and I thank you for your time.
Related
I'm using DB2 for z/OS as my database. I have written a stored procedure in DB2 that returns a result set. Currently I declare a cursor and call OPEN cur at the end of the stored procedure. I'm calling my procedure from Java and getting the result set using ResultSet resultSet = callableStatement.getResultSet(); My SP works for a few hundred records but fails when the table contains millions of rows:
Caused by: com.ibm.db2.jcc.am.SqlException: DB2 SQL Error:
SQLCODE=-904, SQLSTATE=57011, SQLERRMC=00C90084;00000100;DB2-MANAGED
SPACE WITHOUT SECONDARY ALLOCATION OR US, DRIVER=4.24.92
I want to know
Is it possible to return Cursor as OUT parameter in my SP ?
What is the difference between fetching data via OPEN cursor and returning a CURSOR as an OUT parameter?
How to solve issue when data is huge ?
Will CURSOR as OUT parameter solve the issue ?
EDITED (SP detail):
DYNAMIC RESULT SETS 1
P1: BEGIN
-- Declare cursor
DECLARE cursor1 CURSOR WITH RETURN FOR
select a.TABLE_A_ID as TABLE_A_ID,
b.TABLE_B_ID as TABLE_B_ID
from TABLE_A a
left join TABLE_C c on
a.TABLE_A_ID = c.TABLE_A_ID
inner join TABLE_B b on
b.CONTXT_ID = a.CONTXT_ID
AND b.CONTXT_POINT_ID = a.CONTXT_POINT_ID
AND b.CONTXT_ART_ID = a.CONTXT_ART_ID
where c.TABLE_A_ID is null ;
OPEN cursor1;
Refer to the documentation here for suggestions for handling this specific condition. Consider each suggestion.
Talk with your DBA for Z/OS and decide on the best course of action in your specific circumstances.
As we cannot see your stored-procedure source code, more than one option might exist, especially if the queries in the stored-procedures are unoptimised.
While usually it's easier to allocate more temporary space for the relevant tablespace(s) at the Db2-server end, that may simply temporarily mask the issue rather than fix it. But if the stored-procedure has poor design or unoptimised queries, then fix that first.
An SQL PL procedure can return a CURSOR as an output parameter, but that cursor is usable only by calling SQL PL code; it may not be usable by Java.
You ask "how to solve the issue when data is huge", but you don't quantify what huge means; it is a relative term. Code your SQL procedure properly, index every query in that procedure properly, and verify the access plans carefully. Return the smallest possible number of rows and columns in the result set.
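Independent of the server-side fixes above, the client should consume a large result set incrementally rather than materializing it all at once. A minimal Python sketch of that pattern, using sqlite3 purely as a stand-in for the real Db2 driver (an assumption for illustration; the fetchmany idiom is the point):

```python
import sqlite3

# Illustrative sketch (not Db2): fetch a large result set in fixed-size
# batches so the client never holds all rows in memory at once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(10_000)])

cur = conn.execute("SELECT id FROM t ORDER BY id")
total = 0
while True:
    batch = cur.fetchmany(1000)   # bounded memory per round trip
    if not batch:
        break
    total += len(batch)           # process the batch here

print(total)  # 10000
```

The JDBC equivalent is controlling the fetch size and iterating the ResultSet, rather than buffering everything before processing.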
I'm trying to use CONTAINS() in a search procedure. The full-text indexes are created and working. The issue arises because you cannot use CONTAINS() on a NULL variable or parameter; it throws an error.
This takes 9 seconds to run (passing in a non-null param):
--Solution I saw on another post
IF @FirstName IS NULL OR @FirstName = '' SET @FirstName = '""'
...
SELECT * FROM [MyTable] m
WHERE
(@FirstName = '""' OR CONTAINS(m.[fname], @FirstName))
This runs instantly (passing in a non-null param):
IF @FirstName IS NULL OR @FirstName = '' SET @FirstName = '""'
...
SELECT * FROM [MyTable] m
WHERE
CONTAINS(m.[fname], @FirstName)
Just adding that extra OR in front of the CONTAINS completely changed the query plan. I have also tried using a CASE expression instead of the OR, to no avail; I still get the slow query.
Has anyone solved this problem of null parameters in full-text searching, or experienced my issue? Any thoughts would help, thanks.
I'm using SQL Server 2012
You are checking the value of a bind variable in SQL. Even worse, you do it in an OR with an access predicate. I am not an expert on SQL Server, but it is generally bad practice, and such predicates lead to full table scans.
If you really need to select all values from the table when @FirstName is null, then check it outside of the SQL query.
IF @FirstName IS NULL
<query-without-CONTAINS>
ELSE
<query-with-CONTAINS>
I believe that in the majority of cases @FirstName is not null. This way you will access the table using your full-text index most of the time. Getting all the rows from the table is a lost cause anyway.
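The branch-outside-the-query idea can be sketched in a few lines of Python; the two query strings here are hypothetical placeholders, and the point is that the statement is chosen before the database ever sees it, so the CONTAINS plan stays clean:

```python
# Hypothetical statements; choose one client-side based on the parameter.
QUERY_WITH_CONTAINS = "SELECT * FROM MyTable m WHERE CONTAINS(m.fname, @FirstName)"
QUERY_WITHOUT_CONTAINS = "SELECT * FROM MyTable m"

def pick_query(first_name):
    """Return the statement to execute for the given search parameter."""
    if first_name is None or first_name == "":
        return QUERY_WITHOUT_CONTAINS   # no predicate at all
    return QUERY_WITH_CONTAINS          # pure full-text predicate

print(pick_query(None) == QUERY_WITHOUT_CONTAINS)   # True
print(pick_query("Smith") == QUERY_WITH_CONTAINS)   # True
```

Inside a stored procedure the same split is an IF/ELSE over two SELECT statements, as shown above.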
From a logical standpoint, the first query takes longer to execute because it has to evaluate two conditions:
@FirstName = '""'
and, in case the first condition fails (which should be the majority of the time),
CONTAINS(m.[fname], @FirstName)
My guess is that in your table you don't have any null or empty FirstName; that's why the results are the same. Otherwise, you would have a few "" in the result set as FirstName.
Maybe you should try reversing the order to see if it makes any difference:
WHERE (CONTAINS(m.[fname], @FirstName) OR @FirstName = '""')
I am putting up some code that will copy billions of rows from one table to another, and we don't want the procedure to stop in case of an exception, so I set up the script like this (not 100% compilable syntax):
dml_errors exception;
errors number;
error_count number;
pragma exception_init (dml_errors, -24381);
---------
open cursor;
begin loop;
fetch cursor bulk collect into tbl_object limit batch_size;
exit when tbl_object.count = 0;
-- perform business logic
begin
forall in tbl_object save exceptions;
insert into table;
tbl_object.delete;
exception
when dml_errors then
errors := sql%bulk_exceptions.count;
error_count := error_count + errors;
insert into log_table (tstamp, line) values (sysdate, SUBSTR('[pr_procedure_name:'||r_guid||'] Batch # '||batch_number - 1||' had '||errors||' errors',1,300));
end;
end loop;
close cursor;
end procedure;
Now, based on this pseudo-code, I have 2 questions:
I am deleting my collection in the FORALL loop. If there is an exception and I decide to fetch some information from my collection in the dml_errors block, would I still have collection elements in there? If yes, would it be safe to delete them after logging?
Since I am keeping my FORALL in a begin-exception-end block, would it keep iterating?
Are you sure you need to use PL/SQL here? Unless you're doing a lot of processing in the business logic that you aren't showing us that can't be done in SQL, I would tend to use DML error logging. That will be more efficient, less code, and give you better logging.
DBMS_ERRLOG.CREATE_ERROR_LOG( 'DESTINATION_TABLE' );
INSERT INTO destination_table( <<columns>> )
SELECT <<columns>>
FROM source_table
LOG ERRORS
REJECT LIMIT UNLIMITED;
I don't see any reason to delete from your tbl_object collection. That doesn't seem to be gaining you anything. It's just costing some time. If your indentation is indicative of your expected control flow, you're thinking that the delete is part of the forall loop-- that would be incorrect. Only the insert is part of the forall, the delete is a separate operation.
If your second question is "If exceptions are raised in iteration N, will the loop still do the N+1th fetch", the answer is yes, it will.
One thing to note-- since error_count is not initialized, it will always be NULL. You'd need to initialize it to 0 in order for it to record the total number of errors.
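A language-agnostic sketch of the batch loop above, in Python rather than PL/SQL (all names here are illustrative): process fixed-size batches, count failures per batch without aborting, and note that the running error counter must start at 0:

```python
def copy_in_batches(rows, batch_size, insert_one):
    """Copy rows in batches, logging per-batch error counts."""
    error_count = 0                      # must be initialized, or it stays "NULL"
    log = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        errors = 0
        for row in batch:                # stands in for FORALL ... SAVE EXCEPTIONS
            try:
                insert_one(row)
            except ValueError:
                errors += 1              # per-row failure; keep going
        if errors:
            error_count += errors
            log.append(f"batch starting at {start} had {errors} errors")
    return error_count, log

# Toy run: every negative "row" fails to insert.
def insert_one(row):
    if row < 0:
        raise ValueError("bad row")

count, log = copy_in_batches([1, -2, 3, -4, 5, 6], 2, insert_one)
print(count)       # 2
print(len(log))    # 2
```

The structural point mirrors the answer: the loop advances to the next batch even when a batch raised errors, and the total is only meaningful if the counter starts at zero.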
It's well known that Model.find_or_create_by(X) actually does:
select by X
if nothing found -> create by X
return a record (found or created)
and there may be a race condition between steps 1 and 2. To avoid duplication of X in the database, one should use a unique index on the set of fields of X. But if you apply a unique index, then one of the competing transactions will fail with an exception (when trying to create a copy of X).
How can I implement 'a safe version' of #find_or_create_by which would never raise any exception and always work as expected?
The answer is in the doc:
Whether that is a problem or not depends on the logic of the application, but in the particular case in which rows have a UNIQUE constraint, an exception may be raised; just retry:
begin
CreditAccount.find_or_create_by(user_id: user.id)
rescue ActiveRecord::RecordNotUnique
retry
end
Solution 1
You could implement the following in your model(s), or in a Concern if you need to stay DRY
def self.find_or_create_by(*)
super
rescue ActiveRecord::RecordNotUnique
retry
end
Usage: Model.find_or_create_by(X)
Solution 2
Or, if you don't want to override find_or_create_by, you can add the following to your model(s):
def self.safe_find_or_create_by(*args, &block)
find_or_create_by *args, &block
rescue ActiveRecord::RecordNotUnique
retry
end
Usage: Model.safe_find_or_create_by(X)
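The retry idiom above can be sketched in plain Python; RecordNotUnique and the in-memory table below are stand-ins for ActiveRecord and a uniquely indexed database table:

```python
class RecordNotUnique(Exception):
    """Stand-in for ActiveRecord::RecordNotUnique."""

table = {}

def find(key):
    return table.get(key)

def create(key):
    if key in table:                 # the unique index rejects the duplicate
        raise RecordNotUnique(key)
    table[key] = {"id": key}
    return table[key]

def safe_find_or_create(key):
    while True:                      # the loop plays the role of Ruby's retry
        record = find(key)
        if record is not None:
            return record
        try:
            return create(key)
        except RecordNotUnique:
            continue                 # another writer won the race; find it now

print(safe_find_or_create("x") is safe_find_or_create("x"))  # True
```

The key property is that the unique-violation path loops back to the find, so the caller always gets the winning row instead of an exception.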
It's the recurring problem of "SELECT-or-INSERT", closely related to the popular UPSERT problem. The upcoming Postgres 9.5 supplies the new INSERT .. ON CONFLICT DO NOTHING | UPDATE to provide clean solutions for each.
Implementation for Postgres 9.4
For now, I suggest this bullet-proof implementation using two server-side plpgsql functions. Only the helper-function for the INSERT implements the more expensive error-trapping, and that's only called if the SELECT does not succeed.
This never raises an exception due to a unique violation and always returns a row.
Assumptions:
Assuming a table named tbl with a column x of data type text. Adapt to your case accordingly.
x is defined UNIQUE or PRIMARY KEY.
You want to return the whole row from the underlying table (return a record (found or created)).
In many cases the row is already there. (This does not have to be the majority of cases; SELECT is a lot cheaper than INSERT.) Otherwise it may be more efficient to try the INSERT first.
Helper function:
CREATE OR REPLACE FUNCTION f_insert_x(_x text)
RETURNS SETOF tbl AS
$func$
BEGIN
RETURN QUERY
INSERT INTO tbl(x) VALUES (_x) RETURNING *;
EXCEPTION WHEN UNIQUE_VIOLATION THEN -- catch exception, no row is returned
-- do nothing
END
$func$ LANGUAGE plpgsql;
Main function:
CREATE OR REPLACE FUNCTION f_x(_x text)
RETURNS SETOF tbl AS
$func$
BEGIN
LOOP
RETURN QUERY
SELECT * FROM tbl WHERE x = _x
UNION ALL
SELECT * FROM f_insert_x(_x) -- only executed if x not found
LIMIT 1;
EXIT WHEN FOUND; -- else keep looping
END LOOP;
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM f_x('foo');
SQL Fiddle demo.
The function is based on what I have worked out in this related answer:
Is SELECT or INSERT in a function prone to race conditions?
Detailed explanation and links there.
We could also create a generic function with polymorphic return type and dynamic SQL to work for any given column and table (but that's beyond the scope of this question):
Refactor a PL/pgSQL function to return the output of various SELECT queries
Basics for UPSERT in this related answer by Craig Ringer:
How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
There is a method called find_or_create_by in Rails.
This link will help you understand it better.
But personally I prefer to do a find first and, if nothing is found, then create (I think it gives more control).
Ex:
user = User.find_by(id: params[:id])
User.create(@attributes) unless user
HTH
while not TBLOrder.Eof do
begin
TBLOrder.Locate('OrderID', Total, []);
TBLOrder.Delete;
end;
This just deletes every single row in my Access Database, which is really annoying.
I'm trying to get the program to delete the selected row (which is Total).
From what I understand, It should locate the selected row, which is equal to Total. e.g. If Total = 3 it should find the row where OrderID = 3 and then delete that row.
Any help is appreciated.
Try this instead (Max's routine requires you to loop through the entire dataset, which is fine unless it's got many rows in it):
while (TblOrder.Locate('OrderID', Total, [])) do
TblOrder.Delete;
TDataSet.Locate returns a Boolean; if it's True, the found record is made the active record and you can then delete it. If it returns False (meaning the record is not found), the call to Delete is never made.
BTW, the problem with your original code is that you test for Eof but never check whether Locate finds the record: you call Locate, ignore its result, delete whatever row you're on, and then test for Eof again. This repeats until there are no more records, at which point Eof returns True and you break the loop.
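The corrected control flow can be sketched in Python terms (an illustrative stand-in for the TDataSet API, not Delphi): delete only while a lookup succeeds, instead of deleting the current row unconditionally until EOF:

```python
def delete_matching(rows, order_id):
    """Remove every row whose OrderID equals order_id; leave the rest."""
    def locate(value):
        # Stand-in for TDataSet.Locate: index of the first match, else None.
        for i, row in enumerate(rows):
            if row["OrderID"] == value:
                return i
        return None

    while (i := locate(order_id)) is not None:   # while Locate(...) succeeds
        del rows[i]                              # delete only the found row
    return rows

orders = [{"OrderID": 1}, {"OrderID": 3}, {"OrderID": 2}, {"OrderID": 3}]
print(delete_matching(orders, 3))  # [{'OrderID': 1}, {'OrderID': 2}]
```

The delete happens only when the lookup reports success, which is exactly what the Locate-in-the-while-condition version above guarantees.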
If there is just one row that contains an ORDERID equal to 3, you don't need the WHILE loop.
If you expect more than one row with an ORDERID equal to 3, do this:
TBLOrder.first; // you could also do the locate here if it's a big table
while not TBLOrder.Eof do
begin
if TBLOrder.FieldByName('OrderID').AsInteger = 3 then
TBLOrder.delete
else
TBLOrder.next;
end;
Otherwise, you could also use SQL.