In production, I am facing this problem.
A DELETE is taking a long time to execute and finally fails with SQL error -243.
I got the query using onstat -g.
Is there any way to find out what is causing it to take this long and finally error out?
The session uses COMMITTED READ isolation.
This is also causing high Informix CPU usage.
EDIT
Environment - Informix 9.2 on Solaris
I do not see any issue related to indexes or application logic, but I suspect some Informix corruption.
The session holds 8 locks on different tables while executing this DELETE query, but I do not see any locks on the table on which the DELETE is performed.
Could it be that Informix is unable to acquire a lock on the table?
DELETE doesn't care about your isolation level. You are getting -243 because another process is locking the rows or the table while you're trying to run your DELETE.
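You can usually confirm this from the command line while the DELETE is running. These onstat flags are standard, but the exact output format varies by Informix version, so treat this as a rough sketch:
onstat -k        # list active locks, including each owner's address
onstat -u        # list user threads; match the owner's address to a session
onstat -g sql 0  # show the current SQL statement for every session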
I would put your delete into an SP and commit every X records:
CREATE PROCEDURE tmp_delete_sp (
    p_commit_records INTEGER
)
RETURNING
    INTEGER,
    VARCHAR(64);

    DEFINE l_current_count INTEGER;
    DEFINE l_ref INTEGER; -- match this to the type of your key column

    SET LOCK MODE TO WAIT 5; -- wait up to 5 seconds if another process holds a lock
    LET l_current_count = 0; -- must be initialised before the loop

    BEGIN WORK;
    FOREACH del_cur WITH HOLD FOR -- hold cursor so it survives the COMMITs
        SELECT ref INTO l_ref FROM your_table WHERE ..... -- your selection criteria; your_table/ref are placeholders

        DELETE FROM your_table WHERE ref = l_ref;
        LET l_current_count = l_current_count + 1;

        IF (l_current_count >= p_commit_records) THEN
            COMMIT WORK; -- release locks and logical-log space every p_commit_records rows
            BEGIN WORK;
            LET l_current_count = 0;
        END IF;
    END FOREACH;
    COMMIT WORK;

    RETURN 0, 'Deleted records';
END PROCEDURE;
There may still be syntax to adapt for your schema, but it's a good starting block for you. Remember, inserts and updates get incrementally slower as you use more logical logs.
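To run it, committing every 1,000 rows (1,000 is just an example batch size; tune it to your logical-log configuration):
EXECUTE PROCEDURE tmp_delete_sp(1000);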
Informix was restarted ungracefully many times, which led to instability in the instance.
This was the root cause.
I'm using DB2 for z/OS as my database. I have written a stored procedure in DB2 that returns a result set. Currently I declare a cursor and call OPEN cur at the end of the stored procedure. I call the procedure from Java and get the result set using ResultSet resultSet = callableStatement.getResultSet(); My SP works for a few hundred records but fails when the table contains millions of rows:
Caused by: com.ibm.db2.jcc.am.SqlException: DB2 SQL Error:
SQLCODE=-904, SQLSTATE=57011, SQLERRMC=00C90084;00000100;DB2-MANAGED
SPACE WITHOUT SECONDARY ALLOCATION OR US, DRIVER=4.24.92
I want to know:
1. Is it possible to return a CURSOR as an OUT parameter in my SP?
2. What is the difference between getting data via OPEN cursor and via a CURSOR OUT parameter?
3. How do I solve the issue when the data is huge?
4. Will a CURSOR OUT parameter solve the issue?
EDITED (SP detail):
DYNAMIC RESULT SETS 1 -- note: "RESULT SETS", not "RESULT SET"
P1: BEGIN
    -- Declare cursor
    DECLARE cursor1 CURSOR WITH RETURN FOR
        select a.TABLE_A_ID as TABLE_A_ID,
               b.TABLE_B_ID as TABLE_B_ID
        from TABLE_A a
        left join TABLE_C c on
            a.TABLE_A_ID = c.TABLE_A_ID
        inner join TABLE_B b on
            b.CONTXT_ID = a.CONTXT_ID
            AND b.CONTXT_POINT_ID = a.CONTXT_POINT_ID
            AND b.CONTXT_ART_ID = a.CONTXT_ART_ID
        where c.TABLE_A_ID is null;

    OPEN cursor1;
END P1
Refer to the documentation here for suggestions on handling this specific condition, and consider each suggestion.
Talk with your DBA for z/OS and decide on the best course of action in your specific circumstances.
As we cannot see your full stored procedure source code, more than one option might exist, especially if the queries in the stored procedure are unoptimised.
While it is usually easier to allocate more temporary space for the relevant tablespace(s) at the Db2-server end, that may simply mask the issue temporarily rather than fix it. If the stored procedure has a poor design or unoptimised queries, fix those first.
An SQL PL procedure can return a CURSOR as an output parameter, but that cursor is usable only by calling SQL PL code; it may not be usable from Java.
You ask "how to solve issue when data is huge", although you don't define in numbers the meaning of huge. It is a relative term. Code your SQL procedure properly, index every query in that procedure properly and verify the access plans carefully. Return the smallest possible number of rows and columns in the result-set.
I have a stored procedure which returns multiple tables. I now need a way to get notified whenever a field contained in the SP's results changes.
The underlying tables are changed by a number of other sources, ranging from manual inserts to programs.
Do I need to put triggers manually on all tables used in the SP, or is there a better, more elegant way?
If you are using Teradata, you can use the snippet below after each of your DML statements (both variables must be DECLAREd in the procedure):
SET lv_activity_count = ACTIVITY_COUNT;
SET lv_message = 'Number of rows merged in table1 is ' || lv_activity_count;
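In context, that looks roughly like the sketch below. The procedure, column, and log-table names (sp_track_changes, last_changed, dirty_flag, audit_log) are made up for illustration:
REPLACE PROCEDURE sp_track_changes ()
BEGIN
    DECLARE lv_activity_count BIGINT;
    DECLARE lv_message VARCHAR(256);

    -- hypothetical DML; substitute your own INSERT/UPDATE/MERGE
    UPDATE table1 SET last_changed = CURRENT_TIMESTAMP WHERE dirty_flag = 'Y';

    -- ACTIVITY_COUNT reflects the statement immediately above, so read it right away
    SET lv_activity_count = ACTIVITY_COUNT;
    SET lv_message = 'Number of rows merged in table1 is '
                     || CAST(lv_activity_count AS VARCHAR(20));

    INSERT INTO audit_log (log_ts, log_text)
    VALUES (CURRENT_TIMESTAMP, lv_message);
END;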
I have a stored procedure which reads uncommitted data, in spite of the isolation level being set to read committed (*CS).
Below is my stored procedure.
CREATE PROCEDURE SP_TEST_DATA_GET ( IN P_PROCESSNM VARCHAR(17),
                                    IN P_Status CHAR(1) )
    RESULT SETS 1
    LANGUAGE SQL
    SET OPTION COMMIT = *CS
P1: BEGIN
    DECLARE CURSOR1 CURSOR WITH RETURN FOR
        SELECT DATA
        FROM IAS_TEST_DATA
        WHERE ( PROCESSNM IS NULL OR PROCESSNM = P_PROCESSNM )
          AND Status = P_Status;
    OPEN CURSOR1;
END P1
I am using DB2 for i V6 (iSeries).
How can I avoid reading uncommitted data? Specifying the isolation level in the stored procedure doesn't seem to work.
Please advise.
You seem to misunderstand how transaction isolation works. "Read committed" means just that: this unit of work can only read data committed by others, and it waits until locks on uncommitted changes are released. You may want to study the manual; it says in particular that "any row changed (or a row that is currently locked with an UPDATE row lock) by another activation group ... cannot be read until it is committed".
In DB2 for i V6 and later you can use the SKIP LOCKED DATA clause in the SELECT statement to accomplish what you seem to want.
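Applied to the cursor in your procedure, that would look something like this (a sketch; double-check the clause placement against the SQL reference for your release):
DECLARE CURSOR1 CURSOR WITH RETURN FOR
    SELECT DATA
    FROM IAS_TEST_DATA
    WHERE ( PROCESSNM IS NULL OR PROCESSNM = P_PROCESSNM )
      AND Status = P_Status
    WITH CS SKIP LOCKED DATA;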
I am putting up some code which will copy billions of rows from one table to another, and we don't want the procedure to stop in case of an exception. So I am structuring the script like this (not 100% compilable syntax):
dml_errors exception;
errors number;
error_count number;
pragma exception_init (dml_errors, -24381);
---------
open cursor;
loop
    fetch cursor bulk collect into tbl_object limit batch_size;
    exit when tbl_object.count = 0;
    -- perform business logic
    begin
        forall i in 1 .. tbl_object.count save exceptions
            insert into table values tbl_object(i);
            tbl_object.delete;
    exception
        when dml_errors then
            errors := sql%bulk_exceptions.count;
            error_count := error_count + errors;
            insert into log_table (tstamp, line)
            values (sysdate, substr('[pr_procedure_name:'||r_guid||'] Batch # '||(batch_number - 1)||' had '||errors||' errors', 1, 300));
    end;
end loop;
close cursor;
end procedure;
Now, based on this pseudo-code, I have 2 questions:
1. I am deleting my collection inside the forall block. If there is an exception and I decide to fetch some information from my collection in the dml_errors handler, will the collection elements still be there? If yes, is it safe to delete them after logging?
2. Since I am keeping my forall in a begin-exception-end block, will the loop keep iterating?
Are you sure you need to use PL/SQL here? Unless the business logic you aren't showing us does processing that can't be done in SQL, I would tend to use DML error logging. That will be more efficient, mean less code, and give you better logging.
DBMS_ERRLOG.CREATE_ERROR_LOG( 'DESTINATION_TABLE' );
INSERT INTO destination_table( <<columns>> )
SELECT <<columns>>
FROM source_table
LOG ERRORS
REJECT LIMIT UNLIMITED;
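Rows that fail end up in the generated error table (ERR$_DESTINATION_TABLE by default), so you can inspect the failures after the load:
SELECT ora_err_number$, ora_err_mesg$, <<columns>>
FROM err$_destination_table;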
I don't see any reason to delete from your tbl_object collection. That doesn't seem to gain you anything; it's just costing some time. If your indentation is indicative of your expected control flow, you're thinking that the delete is part of the forall loop; that would be incorrect. Only the insert is part of the forall, the delete is a separate operation.
If your second question is "If exceptions are raised in iteration N, will the loop still do the N+1th fetch", the answer is yes, it will.
One thing to note: since error_count is not initialized, it will always be NULL. You'd need to initialize it to 0 in order for it to record the total number of errors.
while not TBLOrder.Eof do
begin
  TBLOrder.Locate('OrderID', Total, []);
  TBLOrder.Delete;
end;
This just deletes every single row in my Access Database, which is really annoying.
I'm trying to get the program to delete the selected row (which is Total).
From what I understand, it should locate the selected row, which is equal to Total; e.g. if Total = 3, it should find the row where OrderID = 3 and then delete that row.
Any help is appreciated.
Try this instead (Max's routine requires you to loop through the entire dataset, which is fine unless it's got many rows in it):
while (TblOrder.Locate('OrderID', Total, [])) do
TblOrder.Delete;
TDataSet.Locate returns a Boolean; if it's True, the found record is made the active record and you can then delete it. If it returns False (meaning the record is not found), the call to Delete is never made.
BTW, the problem with your original code is that you test for Eof, but never check to see if the Locate finds the record; you just delete whatever record you're on, and then test for Eof again. If you're not at Eof, you call Locate, ignore whether or not it found the record, and delete whatever row you're on. This then repeats over and over again until there are no more records, at which point Eof returns true and you break the loop.
If there is just one row that contains an ORDERID equal to 3, you don't need the WHILE loop.
If you expect more than one row with an ORDERID equal to 3, do this:
TBLOrder.first; // you could also do the locate here if it's a big table
while not TBLOrder.Eof do
begin
  if TBLOrder.FieldByName('OrderID').AsInteger = 3 then
    TBLOrder.delete
  else
    TBLOrder.next;
end;
Otherwise, you could also use SQL.
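For example, a parameterised delete you could run through a query component instead of looping over the dataset (a sketch; Orders stands in for your actual table name, and :Total is filled from the Total variable):
DELETE FROM Orders
WHERE OrderID = :Total;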