I'm curious about visibility ("sight") between nested procedures. I have a procedure, Proc1, which accepts a C#-modelled UDT that is defined at schema level.
Proc2 - insert: (called from inside Proc1) identifies records that are in the UDT but not in a table and creates new records.
Proc3 - update: (called from inside Proc1)
Would this proc be able to see (have sight of) the newly created records from Proc2? I.e., is there a "commit" when Proc2 finishes?
Proc4 - delete: (called from inside Proc1) deletes all properly identified records.
There are no implicit commits when a procedure finishes. All procedures in the same call stack run within the same transaction scope (ignoring the possibility that you have defined one of your procedures to use an autonomous transaction, and assuming you aren't explicitly ending the transaction by issuing a commit or rollback). So each procedure will see the uncommitted results of all the code run earlier in the same session.
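For example, here is a minimal PL/SQL sketch (the table, columns and procedure bodies are hypothetical, reduced to just the visibility question) showing that Proc3 sees Proc2's uncommitted insert:

create or replace procedure proc2 as
begin
  -- inserts but does not commit
  insert into target_table (id, status) values (1, 'NEW');
end;
/
create or replace procedure proc3 as
  l_count pls_integer;
begin
  -- same transaction, so proc2's uncommitted row is visible here
  select count(*) into l_count from target_table where id = 1;
  dbms_output.put_line('rows visible: ' || l_count);  -- prints 1
end;
/
create or replace procedure proc1 as
begin
  proc2;
  proc3;
  commit;  -- nothing is committed before this point
end;
/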
Related
We have a master stored procedure that loops through a table and, for each row, calls a stored procedure based on data from that table. There are likely to be a lot of rows, so we are worried about the master stored procedure timing out due to the time it would take to loop through all of them.
The stored procedures called do not have a dependency on each other so we would like to speed up the process by executing the stored procedures without waiting for the previous one to finish. Is there a way to achieve this in snowflake?
JavaScript stored procedures are single-threaded, so there is no way to get them to run parts of their code in parallel.
You might be able to improve performance by changing your design, for example by running instances of the SP via tasks, as tasks can be made to run in parallel.
We have a master stored procedure that loops through a table and for each row calls a stored procedure based on data from that table.
Having one stored procedure call others will be single-threaded. You can do this instead:
Create a task for each of your child stored procedures. Set its schedule to 1 minute; specify no overlapping execution, and do not enable it.
Have your main stored procedure read from the table that drives it.
Instead of having your main stored procedure call other stored procedures, have it alter their tasks to resume them.
The first thing each child stored procedure should do is run a SQL statement that alters its own task to suspend it.
This will allow asynchronous, parallel execution. The main stored procedure just needs to figure out what other stored procedures do or don't need to run and what their call parameters should be. It kicks off their execution by enabling the schedule for their task and moves on to the next code statement. A minute later, the child stored procedure starts and disables its schedule so when it completes it won't run again until the main stored procedure alters its task back to enabled.
Your task create statements for the child stored procedures should be something like this:
create or replace task MY_SP_TASK
warehouse = MY_WAREHOUSE
schedule = '1 MINUTE'
allow_overlapping_execution = false
as
call MY_SP('PARAM1', 'PARAM2');
If you need the main stored procedure to change the parameters in the call, you can have the main stored procedure run the "create or replace task" and change them. You'll then want to enable the task:
alter task MY_SP_TASK resume;
If you don't need to change the child's call parameters, you can just run the alter statement to resume it, and that's all. Then, in the child stored procedure, among the first things the code should do is suspend its own task:
alter task MY_SP_TASK suspend;
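To tie this together, here is a sketch of the main stored procedure in Snowflake Scripting (DRIVER_TABLE and its columns TASK_NAME, PARAM1 and PARAM2 are hypothetical stand-ins for your driving table):

create or replace procedure MAIN_SP()
returns varchar
language sql
as
$$
declare
  c cursor for select task_name, param1, param2 from DRIVER_TABLE;
  stmt varchar;
begin
  for rec in c do
    -- re-create the child task with this row's call parameters
    stmt := 'create or replace task ' || rec.task_name ||
            ' warehouse = MY_WAREHOUSE' ||
            ' schedule = ''1 MINUTE''' ||
            ' allow_overlapping_execution = false' ||
            ' as call MY_SP(''' || rec.param1 || ''', ''' || rec.param2 || ''')';
    execute immediate :stmt;
    -- enable the schedule; the child starts within a minute and suspends itself
    stmt := 'alter task ' || rec.task_name || ' resume';
    execute immediate :stmt;
  end for;
  return 'child tasks resumed';
end;
$$;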
I have a process whereby I have an NHibernate session which I use to run a query against the database. I then iterate through the collection of results, and for each iteration, using the same NHibernate session, I call a SQL Stored Procedure (using CreateSQLQuery() & ExecuteUpdate()), which ends up performing an update on a field for that entity.
When it has finished iterating over the list (and calling the SP x times), if I check the database directly in SSMS, I can see that the UPDATE for each row has been applied.
However, in my code, if I then immediately run the same initial query again, to retrieve that list of entities, it does not reflect the updates that the SP made for each row - the value is still NULL.
I haven't got any cache behavior specified in the configuration of NHibernate in my application, and I have experimented with different SetCacheMode() settings when calling the query, but nothing seems to make any difference - the values that I can see have been updated directly in the DB are not being brought back as updated when I re-query the database (using Session.QueryOver() on that same session).
By calling CreateSQLQuery (to update the database; whether it's a single row or multiple rows does not matter), you are actually performing a DML-style operation, which does not update the in-memory state.
Any call to CreateSQLQuery or CreateQuery will not use or update change tracking. These calls are considered out of the scope of the Unit of Work.
These operations directly affect the underlying database, bypassing any in-memory state.
14.3. DML-style operations
As already discussed, automatic and transparent object/relational mapping is concerned with the management of object state. This implies that the object state is available in memory, hence manipulating (using the SQL Data Manipulation Language (DML) statements: INSERT, UPDATE, DELETE) data directly in the database will not affect in-memory state. However, NHibernate provides methods for bulk SQL-style DML statement execution which are performed through the Hibernate Query Language (HQL). A Linq implementation is available too.
They (may) work on bulk data. They are necessary in some scenarios for performance reasons. With these, tracking does not work; so yes, the in-memory state becomes invalid. You have to use them carefully.
if I then immediately run the same initial query again, to retrieve that list of entities, it does not reflect the updates that the SP made for each row - the value is still NULL.
This is due to the first-level (session) cache. It is always enabled by default and cannot be disabled while working with an ISession.
When you first load the objects, it's a database hit. You get the objects from the database, loop through them, and execute commands that are outside the Unit of Work (as explained above). You then run the same initial query again to load the same objects under the same ISession instance. The second call does not hit the database at all.
It just returns the instances from memory. As your in-memory instances were never updated, you always get the original instances.
To get the updated instances, close the first session and reload the instances with a new session.
For more details, please refer to: How does Hibernate Query Cache work
I need to call a vendor procedure that searches the database for possible matches. The input parameters are entered in a global temp table, then a procedure needs to be called that fills another global temp table with possible matches. Any thoughts on the best way to do this with APEX?
This is a vendor database. I really can't change anything. The vendor procedure requires that I load parameters into their GTT, run their procedure, then get the results from their result GTT. I'm new to APEX and just trying to figure out the best way to handle that... What type of APEX object do I use to load the parameters into the parameter GTT? How do I call the procedure when the parameter row is saved? What APEX object should I use to display the result GTT - a report, a grid?
Data in a global temporary table (GTT) is "private", i.e. it can be accessed only within the same transaction or the same session (the latter would probably be your choice, so you'd create the GTT with ON COMMIT PRESERVE ROWS). As long as you do everything in the same session, that will work.
On the other hand, if several sessions are involved, you're probably out of luck and will have to change the approach. The most obvious alternative is to use a normal table (not a global temporary one) or - if possible - APEX collections.
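As a sketch of the single-session approach (the names VENDOR_PARAM_GTT, VENDOR_RESULT_GTT, VENDOR_PKG.FIND_MATCHES and the page item are hypothetical placeholders), an APEX page process could load the parameters, call the vendor procedure and copy the results out, all in one PL/SQL block so that everything runs in one database session:

-- e.g. a page process fired on submit
begin
  delete from vendor_param_gtt;  -- clear leftovers from this session
  insert into vendor_param_gtt (param_name, param_value)
  values ('LAST_NAME', :P1_LAST_NAME);

  vendor_pkg.find_matches;  -- vendor code fills vendor_result_gtt

  -- copy the results while still in the same database session,
  -- e.g. into an APEX collection a report region can query
  apex_collection.create_collection_from_query(
    p_collection_name    => 'MATCHES',
    p_query              => 'select * from vendor_result_gtt',
    p_truncate_if_exists => 'YES');
end;

A classic report on APEX_COLLECTIONS (where COLLECTION_NAME = 'MATCHES') can then display the matches regardless of which database session renders the page.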
I'm trying to create a trigger on Derby which simply calls a procedure. The stored procedure does not change anything and takes no parameters. It simply checks that the time is within an interval (for example between 08:00 and 16:00). On creation of the trigger I receive the following error:
"42Z9D: Procedures that modify SQL data are not allowed in BEFORE triggers."
But the procedure makes no changes.
When defining a procedure, one should specify whether the procedure modifies data, and whether it executes any SQL at all. As mentioned in the link provided above by Bryan, I should use one of the options:
{ NO SQL | MODIFIES SQL DATA | CONTAINS SQL | READS SQL DATA }
If you don't use one of these options, the default value, CONTAINS SQL, is assumed.
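For example (a sketch - the Java class, table and trigger names are hypothetical), declaring the procedure with NO SQL (or READS SQL DATA, if it needs to query) makes it acceptable in a BEFORE trigger:

create procedure check_business_hours()
language java
parameter style java
no sql
external name 'com.example.Triggers.checkBusinessHours';

create trigger orders_before_insert
no cascade before insert on orders
for each statement
call check_business_hours();

The Java method behind the procedure can throw an SQLException to abort the triggering statement when the current time falls outside the allowed interval.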
In Delphi, whenever I use a TQuery to perform a SELECT on a database, I follow the Query.Open with a try..finally, with Query.Close in the finally section. This makes sense to me, as the Query would still be storing data (using memory) unnecessarily otherwise.
But my question has to do with when I use a Query to perform an INSERT or DELETE, thus requiring the execution of the SQL with Query.ExecSQL.
My question is, must I use Query.Close after Query.ExecSQL?
My thoughts are that, because this is a command to be executed on the database which presumably does not return any data to the Query, there is no need to do a Query.Close.
But maybe someone out there has more in-depth knowledge of what, if anything, might be returned and stored in a Query after a Query.ExecSQL is called, for which a Query.Close would be beneficial?
Thank you.
No, it is not needed, as ExecSQL does not maintain a recordset.
from the documentation (emphasis mine):
Executes the SQL statement for the query. Call ExecSQL to execute the SQL statement currently assigned to the SQL property. Use ExecSQL to execute queries that do not return a cursor to data (such as INSERT, UPDATE, DELETE, and CREATE TABLE).
Note: For SELECT statements, call Open instead of ExecSQL.
ExecSQL prepares the statement in SQL property for execution if it has not already been prepared. To speed performance, an application should ordinarily call Prepare before calling ExecSQL for the first time.