I have 12 stored procedures in Netezza. I call these procedures using the nzsql command from a shell script. I want to run these procedures in parallel to increase throughput. How can I do this?
Serializable
If the stored procedures do not affect the same tables, then you can just fork the calls from bash:
nzsql -Atc "call sp1();" &
nzsql -Atc "call sp2();" &
nzsql -Atc "call sp3();" &
...
wait
See other answers about forking.
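With 12 procedures, a loop keeps the forking manageable. A minimal sketch, assuming the procedures are named sp1 through sp12 and the connection details are already configured:
#!/bin/bash
# Fork one nzsql session per procedure, then wait for all of them to finish
for sp in sp1 sp2 sp3 sp4 sp5 sp6 sp7 sp8 sp9 sp10 sp11 sp12; do
    nzsql -Atc "call ${sp}();" &
done
wait  # returns once every background call has completed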
Not Serializable
If the stored procedures affect the same tables, you'll need to set serializability off in the connection or transaction that is affected. I haven't done this in a stored procedure (and you may not be able to), but this should work:
nzsql -Atc "set serializable = false; call sp1();" &
nzsql -Atc "set serializable = false; call sp2();" &
nzsql -Atc "set serializable = false; call sp3();" &
...
wait
See the docs for more information on the serializable isolation level. You'll be responsible for making sure that the data the stored procedures are modifying does not collide in some fashion, as you'll get dirty reads.
To elaborate on @Jeremy Fortune's answer, there are three scenarios in which the system aborts a transaction to preserve serializability:
An update or delete statement is running concurrently with another update or delete statement on the same table.
Two concurrent transactions each perform a SELECT FROM and an INSERT INTO the same table. This could occur as a self-inserting statement or as multiple statements in any order. Note that up to 31 concurrent inserts into the same table are supported, provided that no more than one of them also selects from the same table.
Two concurrent transactions, the first of which selects from table1 and updates, inserts, or deletes from table2, while the second selects from table2 and updates, inserts, or deletes from table1.
You can read more about it here.
However, a serializable transaction can wait in a queue before failing, and the system automatically retries it until it times out after X minutes, where X is defined by the serialization_queue_timeout system variable.
However, this only applies to implicit transactions (transactions without a BEGIN and COMMIT block), and most stored procedure transactions are explicit transactions (which is also an advantage of using a stored procedure: everything gets rolled back if something fails, unless you have used the AUTOCOMMIT ON option somewhere inside the stored procedure), so they cannot take advantage of the serialization queue.
Related
Hi, and apologies in advance if this question has already been asked; I haven't been able to come across the answer.
I'm wondering if there is a table that holds a record of Oracle usernames that have executed a particular procedure or function.
I'm trying to create a procedure that can be called as a subprogram by another procedure. The procedure I'm looking to create will add a log entry every time the other procedure is executed. Example below:
User_Name = The Oracle user name of the person who executes the function.
Name = The name of the procedure or function.
LastCompileDT = The date/time the function or procedure was last compiled.
I'm a bit stuck on where to source the data from.
I've come across the ALL_SOURCE view, but it only gives me the owner of the procedure and not the executing user.
Any feedback would be greatly appreciated.
Thanks
There might be a couple of ways to do that. Maybe someone else can suggest a method of extracting all this data from one data dictionary view. However, my method would be like this:
User_Name: use the keyword USER. It returns the Oracle user that executed the procedure:
SELECT USER FROM DUAL;
However, if you are interested in the OS user who executed that procedure, then you can use the following:
SELECT sys_context( 'userenv', 'os_user' ) FROM DUAL;
More on this here. To my knowledge, this can be fetched on the fly only, and it is not logged anywhere by default. So you need to run it when you call the procedure.
Procedure Name: &
LastCompileDT : can be fetched from the view USER_OBJECTS
SELECT OBJECT_NAME, LAST_DDL_TIME
FROM USER_OBJECTS
WHERE OBJECT_TYPE = 'PROCEDURE'
AND OBJECT_NAME = '<YOUR PROCEDURE NAME>';
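Putting the pieces together, a minimal sketch of such a logging procedure; the log table and procedure names are hypothetical, and note that USER_OBJECTS only sees your own schema's objects (use ALL_OBJECTS for procedures owned elsewhere):
--Hypothetical log table
CREATE TABLE proc_execution_log (
  user_name    VARCHAR2(30),
  proc_name    VARCHAR2(30),
  last_compile DATE,
  executed_at  DATE
);
--Called as the first statement of the procedure being tracked
CREATE OR REPLACE PROCEDURE log_execution (p_proc_name IN VARCHAR2)
AS
BEGIN
  INSERT INTO proc_execution_log (user_name, proc_name, last_compile, executed_at)
  SELECT USER, OBJECT_NAME, LAST_DDL_TIME, SYSDATE
  FROM USER_OBJECTS
  WHERE OBJECT_TYPE IN ('PROCEDURE', 'FUNCTION')
  AND OBJECT_NAME = UPPER(p_proc_name);
END log_execution;
The tracked procedure would then simply start with log_execution('MY_PROC');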
Rather than rolling your own audit, you could use the built-in auditing facility provided.
See https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4007.htm
--Create a test procedure as an example
CREATE PROCEDURE my_test_proc
AS
BEGIN
NULL;
END my_test_proc;
--Turning on auditing executions of the proc
AUDIT EXECUTE ON my_test_proc BY ACCESS WHENEVER SUCCESSFUL;
--Run the proc
EXEC my_test_proc;
--check audit history
SELECT *
FROM dba_common_audit_trail cat
WHERE cat.object_name = 'MY_TEST_PROC';
The dba_common_audit_trail view has columns DB_USER and OBJECT_NAME for your User_Name/Name.
For the last compile time, see Hawk's answer; or, if you want to see a history of last DDL times, you can add this to the audit:
--Turn on auditing of creating procs
AUDIT CREATE PROCEDURE BY ACCESS;
I have clients-->|cascade rule|-->orders_table-->|cascade rule|-->order_details
In my order_details table I have an after-delete trigger that increments the quantity in my product table:
SET TERM ^ ;
CREATE OR ALTER TRIGGER TABLEAU_DETAIL_VENTES_AD0 FOR TABLEAU_DETAIL_VENTES
ACTIVE AFTER DELETE POSITION 0
AS
declare variable qte numeric(15,2);
begin
  /* Read the current stock for the deleted line's article */
  select qte_article from tableau_articles
  where id_article = old.id_article
  into :qte;
  /* Put the deleted quantity back into stock */
  qte = :qte + old.qte;
  update tableau_articles
  set qte_article = :qte
  where id_article = old.id_article;
end
^
SET TERM ; ^
If I delete a client, then all orders depending on it will be deleted, and the order_details rows as well.
The problem is that the order_details after-delete trigger will fire and increment the product quantity; I don't want that to happen.
My question: is there any way to know whether the trigger was fired by the cascade rule or by a SQL delete statement coming from the application?
I want to achieve something like:
If the trigger was fired by the cascade rule, then disable all triggers. Thanks in advance for your help.
You can try wrapping your delete code in a stored procedure that uses EXECUTE STATEMENT to deactivate and reactivate the triggers:
SET TERM ^ ;
CREATE PROCEDURE DeleteClient (
  ID INTEGER)
AS
begin
  /* Switch the detail trigger off so the cascade does not restock articles */
  execute statement 'alter trigger TABLEAU_DETAIL_VENTES_AD0 inactive';
  /*
  Your Delete statement here
  */
  /* Switch it back on afterwards */
  execute statement 'alter trigger TABLEAU_DETAIL_VENTES_AD0 active';
end^
SET TERM ; ^
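A hypothetical call would then be:
EXECUTE PROCEDURE DeleteClient(42);
Note that ALTER TRIGGER ... INACTIVE is database-wide, so while the trigger is off, deletes from other connections will also skip it.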
I ended up using context variables: in my clients table I added an after-delete trigger that sets a flag using rdb$set_context:
SET TERM ^ ;
CREATE OR ALTER TRIGGER TABLEAU_CLIENTS_AD0 FOR TABLEAU_CLIENTS
ACTIVE AFTER DELETE POSITION 0
AS
declare variable id integer;
begin
execute statement 'select rdb$set_context(''USER_SESSION'', ''myvar'', 100) from rdb$database' into :id;
end
^
SET TERM ; ^
In the order details trigger I check the flag with rdb$get_context and skip the trigger body if the flag exists with the associated value:
select rdb$get_context('USER_SESSION', 'myvar') from rdb$database into :i;
if (i = 100) then exit;
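Putting the two pieces together, a minimal sketch of the detail trigger with the flag check merged in (the restock update from the question is folded into a single statement; note the flag stays set for the session unless you reset myvar, e.g. to NULL, after the client delete finishes):
SET TERM ^ ;
CREATE OR ALTER TRIGGER TABLEAU_DETAIL_VENTES_AD0 FOR TABLEAU_DETAIL_VENTES
ACTIVE AFTER DELETE POSITION 0
AS
declare variable i integer;
begin
  /* Skip restocking when the delete cascades from a client delete */
  select rdb$get_context('USER_SESSION', 'myvar') from rdb$database into :i;
  if (i = 100) then exit;

  update tableau_articles
  set qte_article = qte_article + old.qte
  where id_article = old.id_article;
end
^
SET TERM ; ^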
You can't determine that, but you can determine whether your foreign key is still valid. Since Firebird's cascaded deletes are sequential (rows that are referenced by a foreign key are deleted first), you can check whether your old.id_article is still valid before updating the record.
I'm not sure you would achieve what you want like that. What if you just delete an order and its items? Wouldn't you want to increment quantities in that case?
Anyway... I wouldn't deactivate triggers from within triggers. That is bad design.
Use some sort of variable... update a flag in a support table. From within the client delete trigger you can set this variable. Then in the order_items delete trigger you can check it to see if you need to update quantities.
Another, better option is to analyze the situation more closely and determine why and when you actually want to update quantities. If you are deleting an old order which has already been fulfilled and delivered, you probably wouldn't want to. If you are canceling a new order, you probably would. So maybe updating the quantities depends more on the state of the order (or some other variable) than simply on the fact that you are deleting an order_items row.
Ok, so you say orders cannot be deleted, except when deleting the client. Then maybe you should flag the client or its order with a flag that states the client is being deleted. In the order_items delete trigger you update article quantities only if the client is not being deleted.
I'm wondering if someone can help me understand how a DB2 before-insert trigger behaves. I have a Grails app that inserts rows into a DB2 database. The table in question has a before-insert trigger that sets the date and user for the update:
CREATE TRIGGER WTESTP.SCSMA11I
NO CASCADE BEFORE INSERT ON WTESTP.SCSMA01T
REFERENCING NEW AS NEWROW
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
  SET NEWROW.LST_UPDT_TMSP = CURRENT_TIMESTAMP;
  SET NEWROW.USER_ID = RTRIM(USER);
END;
In my Grails application I set all the values, including the user id:
flatAdjustmentInstance.setUserID("TS37813")
We use a generic application ID and password via JNDI to make the connection to the database. For auditing purposes I need to set the value of the user to whoever is logged into the application. Is the only solution to remove the trigger entirely and just be really sure the value is set?
The DB2 USER variable (also called "special register") contains the authorization ID of the current database connection. If an application wishes to pass another user ID to DB2, it can do so by calling the API function sqleseti() or the stored procedure WLM_SET_CLIENT_INFO() -- more info here. The trigger can then reference another special register, CURRENT CLIENT_USERID.
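For example, a minimal sketch on DB2 for LUW; the user ID value is the one from the question, and the trigger change is only illustrative:
-- Tag the connection with the application-level user (other parameters left NULL)
CALL SYSPROC.WLM_SET_CLIENT_INFO('TS37813', NULL, NULL, NULL, NULL);

-- The trigger could then use the client user ID instead of USER:
-- SET NEWROW.USER_ID = RTRIM(CURRENT CLIENT_USERID);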
I want to find out when a stored procedure was last executed, so that I can delete the unused stored procedures. One way is to scan through the code and find the list of used stored procedures and delete the unused ones; since the number of stored procedures is in the thousands, I would like to know if there is an option in DB2 to find this more easily.
You don't say what platform or version of DB2 you are using.
If you are running on DB2 for Linux/UNIX/Windows and are on V9.7 or later, you can look at the LASTUSED column in SYSCAT.PACKAGES, which can be joined to SYSCAT.PROCEDURES via SYSCAT.ROUTINEDEP:
select
proc.procschema
,proc.procname
,pkg.lastused
from
syscat.procedures proc
,syscat.routinedep rd
,syscat.packages pkg
where
proc.specificname = rd.routinename
and rd.bname = pkg.pkgname
and pkg.lastused <> '01/01/0001'
order by
pkg.lastused desc;
If a procedure has never been executed, LASTUSED will have the value '01/01/0001'. The query above filters these out.
Also, please note that you may want to filter on PROCSCHEMA so you're not seeing all of the system stored procedures...
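For example, you might append a predicate along these lines to the query above (the exact schema pattern is an assumption; IBM-supplied routines typically live in SYS* schemas):
and proc.procschema not like 'SYS%'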
Without explicit logging or tracing, it is not possible to get this information for each and every stored procedure in the database. However, we can get this detail, along with much other relevant information, for stored procedures whose execution plans are currently cached on the server by using sys.dm_exec_procedure_stats, a system dynamic management view that returns aggregate performance statistics for cached stored procedures.
This view returns one row for each cached stored procedure plan, and the lifetime of the row is as long as the stored procedure remains cached. When a stored procedure is removed from the cache, the corresponding row is eliminated from this view.
USE DBName -- replace with your DB name
GO
SELECT
O.name,
PS.last_execution_time
FROM
sys.dm_exec_procedure_stats PS
INNER JOIN sys.objects O
ON O.[object_id] = PS.[object_id]
GO
The above script will return the names of all cached stored procedures in the current database with their last execution times.
For more details, please check here
In DB2, I think you can schedule a stored procedure to run at a particular time, at an interval, or when a specified event occurs. The administrative task scheduler manages these requests.
Procedure
To schedule execution of a stored procedure:
Add a task for the administrative task scheduler by using the ADMIN_TASK_ADD stored procedure. When you add your task, specify which stored procedure to run and when to run it. Use one of the following parameters or groups of parameters of ADMIN_TASK_ADD to control when the stored procedure is run:
interval - The stored procedure is to execute at the specified regular interval.
point-in-time - The stored procedure is to execute at the specified times.
trigger-task-name - The stored procedure is to execute when the specified task occurs.
trigger-task-name trigger-task-cond trigger-task-code - The stored procedure is to execute when the specified task and task result occur.
Optionally, you can also use one or more of the following parameters to control when the stored procedure runs:
begin-timestamp
Earliest permitted execution time
end-timestamp
Latest permitted execution time
max-invocations
Maximum number of executions
When the specified time or event occurs for the stored procedure to run, the administrative task scheduler calls the stored procedure in DB2®.
Optional: After the task finishes execution, check the status by using the ADMIN_TASK_STATUS function. This function returns a table with one row that indicates the last execution status for each scheduled task. If the scheduled task is a stored procedure, the JOB_ID, MAXRC, COMPLETION_TYPE, SYSTEM_ABENDCD, and USER_ABENDCD fields contain null values. In the case of a DB2 error, the SQLCODE, SQLSTATE, SQLERRMC, and SQLERRP fields contain the information that DB2 returned from calling the stored procedure.
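For example, a minimal status check, assuming DB2 for z/OS with the scheduler's objects created in the DSNADM schema (the task name is hypothetical):
-- One row per scheduled task with its last execution status
SELECT TASK_NAME, STATUS, SQLCODE, SQLSTATE
FROM TABLE (DSNADM.ADMIN_TASK_STATUS()) AS T
WHERE TASK_NAME = 'MY_SP_TASK';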
For more information about ADMIN_TASK_STATUS, click here.
Is there any way to manually track the changes made to a ClientDataSet's delta and apply the changes manually to the DB? I have dynamically created a ClientDataSet and, without a provider, I am able to load it with a TQuery. The user will do some insert, update, and delete operations on the data in the CDS, and at the final stage this (modified) data should be posted to the database using a TQuery (not ApplyUpdates).
After populating your dataset from the TQuery, call MergeChangeLog so that the records do not stand out as newly inserted, and be sure that LogChanges is set.
Then, at the final stage, before updating the query with the dataset, set StatusFilter so that only the records you want to act on are showing. For instance:
ClientDataSet1.StatusFilter := [usDeleted];
You can also use UpdateStatus on a record to see if it has been modified etc..
But be careful: there can be multiple versions of a record, and it is a bit difficult to understand how the change log keeps track of them. There can also be multiple actions on a record, like modifying it a few times and then deleting it.
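A minimal sketch of that approach, assuming a TClientDataSet named cds, a TQuery named qry, and a hypothetical table with an ID key field (repeat the idea with [usModified] and [usInserted] for updates and inserts):
// Replay the user's deletions through a TQuery
cds.StatusFilter := [usDeleted];   // show only records the user deleted
cds.First;
while not cds.Eof do
begin
  qry.SQL.Text := 'DELETE FROM my_table WHERE id = :id';  // hypothetical table
  qry.ParamByName('id').AsInteger := cds.FieldByName('ID').AsInteger;
  qry.ExecSQL;
  cds.Next;
end;
cds.StatusFilter := [];            // restore the normal view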
//TPacketDataSet is declared in the Provider unit (add it to your uses clause)
var
  Change: TPacketDataSet;
begin
  Change := TPacketDataSet.Create(nil);
  try
    Change.Data := YourClientDataSet.Delta;
    while not Change.Eof do
    begin
      Change.InitAltRecBuffers(False);
      if Change.UpdateStatus = usUnmodified then
        Change.InitAltRecBuffers(True);
      case Change.UpdateStatus of
        usModified: ;//your logic; read the code in Provider.pas for further hints
        usInserted: ;//your logic; read the code in Provider.pas for further hints
        usDeleted: ;//your logic; read the code in Provider.pas for further hints
      end;
      Change.Next;
    end;
  finally
    Change.Free;
  end;
end;
The above should work regardless of the number of modifications.
Cheers,
Pham