I have the following problem. I have two stored procedures (debug messages double-indented):
CREATE PROC innerProc
AS
BEGIN
    SELECT 'innerProc 1', @@TRANCOUNT
    BEGIN TRAN
    SELECT 'innerProc 2', @@TRANCOUNT
    ROLLBACK
    SELECT 'innerProc 3', @@TRANCOUNT
END
GO -----------------------------------------
CREATE PROC outerProc
AS
BEGIN
    SELECT 'outerProc 1', @@TRANCOUNT
    BEGIN TRAN
    SELECT 'outerProc 2', @@TRANCOUNT
    EXEC innerProc
    SELECT 'outerProc 3', @@TRANCOUNT
    ROLLBACK
    SELECT 'outerProc 4', @@TRANCOUNT
END
GO -----------------------------------------
EXEC outerProc
What do they do?
outerProc begins a transaction (@@TRANCOUNT = 1)
executes innerProc (@@TRANCOUNT at the beginning of the proc = 1)
innerProc begins another transaction (@@TRANCOUNT = 2)
innerProc rolls back the transaction (@@TRANCOUNT = 0)
AND HERE IS THE PROBLEM: @@TRANCOUNT at the beginning of innerProc is not equal to @@TRANCOUNT at the end. What am I doing wrong? Is this a correct approach?
I believe you need to use savepoints (SAVE TRANSACTION); otherwise you're killing all open transactions when you roll back in the nested one, even if it looks scoped to just the inner sproc.
http://msdn.microsoft.com/en-us/library/ms188929.aspx
Further reading: http://msdn.microsoft.com/en-us/library/ms181299.aspx
ROLLBACK TRANSACTION without a savepoint_name or transaction_name rolls back to the beginning of the transaction. When nesting transactions, this same statement rolls back all inner transactions to the outermost BEGIN TRANSACTION statement. In both cases, ROLLBACK TRANSACTION decrements the @@TRANCOUNT system function to 0. ROLLBACK TRANSACTION savepoint_name does not decrement @@TRANCOUNT.
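To make that concrete, here is a minimal sketch of innerProc rewritten with a savepoint (the savepoint name and the @startedTran guard are illustrative, not from the original post). The inner proc only ever rolls back its own work, so @@TRANCOUNT is the same on exit as on entry:
CREATE PROC innerProc
AS
BEGIN
    DECLARE @startedTran bit
    SET @startedTran = 0
    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRAN               -- no caller transaction: open our own
        SET @startedTran = 1
    END
    SAVE TRAN innerProc_sp       -- savepoint inside the current transaction

    -- ... the procedure's real work goes here ...

    -- on failure, undo only this proc's work; @@TRANCOUNT is unchanged:
    -- ROLLBACK TRAN innerProc_sp

    IF @startedTran = 1
        COMMIT                   -- close only the transaction we opened
END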
Currently, I am encountering an issue on Azure Synapse Analytics. I have a parent_cust_industry table that is loaded with a full refresh, using the stored procedure below:
CREATE PROCEDURE [test].[test_proc] AS
BEGIN
    -- LOAD TYPE: Full refresh
    -- Drop any leftover load table, then rebuild it
    IF EXISTS (SELECT 1 FROM sys.tables WHERE SCHEMA_NAME(schema_id) = 'test' AND name = 'test_ld')
    BEGIN
        DROP TABLE [test].[test_ld]
    END

    CREATE TABLE [test].[test_ld]
    WITH
    (
        DISTRIBUTION = REPLICATE
      , CLUSTERED COLUMNSTORE INDEX
    )
    AS
    SELECT CAST(src.[test_code] AS varchar(5)) AS [test_code],
           CAST(NULLIF(src.[test_period], '') AS varchar(5)) AS [test_period],
           CAST(NULLIF(src.[test_id], '') AS varchar(8)) AS [test_id]
    FROM [test].[test_temp] AS src

    IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE SCHEMA_NAME(schema_id) = 'test' AND name = 'test_hd')
    BEGIN
        RENAME OBJECT [test].[test] TO [test_hd]
    END

    IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE SCHEMA_NAME(schema_id) = 'test' AND name = 'test')
    BEGIN
        RENAME OBJECT [test].[test_ld] TO [test]
    END

    IF EXISTS (SELECT 1 FROM sys.tables WHERE SCHEMA_NAME(schema_id) = 'test' AND name = 'test_hd')
    BEGIN
        DROP TABLE [test].[test_hd]
    END
END
;
The error happens when another stored procedure runs at the same time to load data into a different table; it reads from the [test].[test] table, and it fails with an "invalid object" error for [test].[test].
Normally, [test].[test_proc] finishes its data load before the other stored procs that depend on it run. But on rare occasions the data is considerably large, it takes more time to process, and that can cause the invalid object error.
Is there a locking mechanism I can apply to the stored procedure [test].[test_proc] so that if the two stored procs happen to run at the same time, [test].[test_proc] finishes first and the remaining stored procedure only then starts reading data from the [test].[test] table?
As you do not have access to traditional SQL Server locking procs like sp_getapplock, and the default transaction isolation level of Azure Synapse Analytics dedicated SQL pools is READ UNCOMMITTED, you have limited choices.
You could route all access to this proc through a single Synapse Pipeline and set its concurrency setting to 1. This would ensure only one pipeline execution could happen at once, causing subsequent calls to the same pipeline to queue up.
Set the pipeline concurrency via the Concurrency property in the pipeline's settings pane.
So you could have a single main pipeline that routes to the others, e.g. using the Switch or If activities, and ensure the proc cannot be called by other pipelines. That should work.
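Alternatively, here is a hedged sketch of a manual workaround (not something the Synapse docs prescribe): serialize the two procs yourself with a single-row "lock" table. Because the default isolation level is READ UNCOMMITTED, plain SELECTs will not block, so both procs must first UPDATE the lock row inside an explicit transaction; the second caller then queues behind the exclusive lock until the first commits. All object names below are made up, and you should verify that every statement in your proc (RENAME OBJECT in particular) is allowed inside a user transaction in your pool:
-- One-time setup: a single-row control table (hypothetical names)
CREATE TABLE [test].[load_lock] (lock_id int NOT NULL)
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP)

INSERT INTO [test].[load_lock] (lock_id) VALUES (1)

-- At the top of BOTH stored procedures:
BEGIN TRAN
    UPDATE [test].[load_lock] SET lock_id = lock_id  -- takes an exclusive lock; a concurrent caller blocks here
    -- ... the proc's real work goes here ...
COMMIT  -- releases the lock, letting the queued proc proceed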
I am using Delphi 2010 with FIB components (TpFIBDatabase, TpFIBTransaction and TpFIBDataset) against a Firebird database.
I have already set the TpFIBDataset's AutoCommit property to False, yet when I execute the statement below in a try..finally block and roll back the transaction, the data still gets posted.
Code:
FIBDataset.Post;
Below is the sample code.
Code:
try
  FIBDatabase.StartTransaction;
  ....
  Block of Code;
  ...
finally
  if saveALL then
    FIBDatabase.CommitRetaining
  else
    FIBDatabase.RollbackRetaining;
end;
The Transaction assigned to the dataset must also be checked and changed:
FIBDataset.AutoCommit := false;
You need to close the query as well; in this case:
FIBDataset.Close;
FIBDatabase.Rollback;
EDIT
I would also advise you to attach the one transaction component to all the datasets (rather than relying on the database component), and to use the transaction component's own Start, Commit and Rollback methods. Further, you must assign the transaction component before you perform any operations.
I'm trying to code a Sybase ASE (15) stored procedure which deletes a customer. It is possible that the DELETE fails due to a "foreign key constraint violation", in which case the stored procedure should roll back the transaction and return.
CREATE PROCEDURE dbo.spumb_deleteCustomer @customertodelete int AS
BEGIN
    DECLARE @rcnt int
    BEGIN TRANSACTION TRX_UMBDELCUSTOMER
    DELETE CREDITCARDS WHERE CUSTOMERID = @customertodelete
    DELETE CUSTOMER_SELECTION_MAP WHERE CUSTOMERID = @customertodelete
    DELETE CUSTOMERS WHERE ID = @customertodelete
    SELECT @rcnt = @@ROWCOUNT
    IF (@rcnt <> 1)
    BEGIN
        PRINT 'FAILED TO DELETE CUSTOMER'
        ROLLBACK TRANSACTION TRX_UMBDELCUSTOMER
        RETURN
    END
    COMMIT TRANSACTION TRX_UMBDELCUSTOMER
END
When running this SP from a cursor loop, execution aborts after the first failing DELETE. How can I make the cursor continue (or, rather, stop the SP from raising an error)?
Thanks, Simon
You should check @@error != 0 to detect an error rather than looking at @@rowcount, and handle those errors; otherwise the calling client may get unexpected messages back. If you do check @@rowcount, you need to save it to a variable immediately after every delete, because each command resets it.
If you're specifically looking to pick up foreign key violations, you can check @@error for 546 or 547, as these are returned in @@error when the constraint fails:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00729.1500/html/errMessageAdvRes/BABCCECF.htm
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00729.1500/html/errMessageAdvRes/BABHJIEC.htm
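Putting that together, here is a hedged sketch of the proc with the error handling moved to @@error; in the real proc you would repeat the same capture after each of the three DELETEs. The capture-both-at-once SELECT is the key idiom, and the RETURN codes are illustrative:
CREATE PROCEDURE dbo.spumb_deleteCustomer @customertodelete int AS
BEGIN
    DECLARE @err int, @rcnt int
    BEGIN TRANSACTION TRX_UMBDELCUSTOMER

    DELETE CUSTOMERS WHERE ID = @customertodelete
    -- capture both values in one statement, before the next command resets them
    SELECT @err = @@error, @rcnt = @@rowcount

    IF (@err <> 0 OR @rcnt <> 1)
    BEGIN
        ROLLBACK TRANSACTION TRX_UMBDELCUSTOMER
        RETURN 1   -- report failure to the caller instead of raising
    END

    COMMIT TRANSACTION TRX_UMBDELCUSTOMER
    RETURN 0
END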
content = Content.find(params[:content_id])
content.body.insert(start_index, span_open)
content.save!
content.body.insert(end_index + span_open.length, span_close)
content.save!
puts "=========================================="
c = Content.find(params[:content_id])
puts c.body
So the above is what I've been trying to do. Lots of saves... it should save, right?
in the console I see
===========================================
le modified text (body attr) here
I'm inserting spans into text, and the console output (above) shows the changes succeeded in the puts statement. But when I re-render the page, everything is back the way it was (inspect element shows no spans).
One thing I find weird is that the puts statement executes before the
"Processing NameOfController#action (for ....)"
line, with all the DB calls and such. I scroll down to where the Content.find would be (it's there twice, so this is easy) and I see this:
SHOW FIELDS FROM `contents`
Content Load (1.6ms) SELECT * FROM `contents` WHERE (`contents`.`id` = 328)
SQL (0.2ms) BEGIN
SQL (0.1ms) COMMIT
SQL (0.1ms) BEGIN
SQL (0.2ms) COMMIT
CACHE (0.0ms) SELECT * FROM `contents` WHERE (`contents`.`id` = 328)
SQL (0.1ms) BEGIN
SQL (0.1ms) COMMIT
Now it says it's loading the second call from the cache... what's up with that, given that I changed the record since the last .find()?
I'm using Ruby on Rails 2.3.8
UPDATE: incorporating Dan Seaver's suggestions:
content = Content.uncached_find(params[:content_id])
content.body = content.body.insert(start_index, span_open)
content.save!
content.body = content.body.insert(end_index + span_open.length, span_close)
content.save!
a = content.body
# ActiveRecord::Base.connection.update("
# UPDATE `contents`
# SET body = '#{content.body}'
# WHERE id = #{params[:content_id]}")
puts "=========================================="
content = Content.uncached_find(params[:content_id])
puts (a == content.body).inspect
output / terminal:
==========================================
false
Content Load (1.5ms) SELECT * FROM `contents` WHERE (`contents`.`id` = 351)
SQL (0.1ms) BEGIN
SQL (0.1ms) COMMIT
SQL (0.1ms) BEGIN
SQL (0.2ms) COMMIT
Content Load (0.3ms) SELECT * FROM `contents` WHERE (`contents`.`id` = 351)
The way that Rails SQL Caching works is that queries are cached within an action:
However, it’s important to note that query caches are created at the start of an action and destroyed at the end of that action and thus persist only for the duration of the action. If you’d like to store query results in a more persistent fashion, you can in Rails by using low level caching.
This article describes how you can avoid the cache with the following
Update: Not having previously used uncached, it seems it is defined within ActiveRecord, so you will have to add a class method to Content like the following:
def self.uncached_find(content_id)
  uncached do
    find(content_id)
  end
end
Then use Content.uncached_find(params[:content_id]) (documentation) where you would use Content.find
Update 2: I see the issue now!! Your inserts aren't registering as changes. String#insert mutates the string in place, so the body attribute is never reassigned, and ActiveRecord's dirty tracking sees nothing to save (hence the empty BEGIN/COMMIT pairs in your log). Go through the attribute writer instead:
content.body = content.body.insert(start_index, span_open)
Try the above line with your save, and it should work
To force the update:
ActiveRecord::Base.connection.update("
UPDATE `contents`
SET body = '#{content.body}'
WHERE id = #{params[:content_id]}")
We are developing a migration program. There are nearly 80 million records in the DB. The code is as follows:
static int mymigration(struct progargs *args)
{
    exec sql begin declare section;
    const char *selectQuery;
    const char *updateQuery;
    long cur_start;
    long cur_end;
    long serial;
    long number;
    char frequency[3];
    exec sql end declare section;

    selectQuery = "select * from mytable where number >= ? and number <= ? for update of frequency, status";
    updateQuery = "update mytable set frequency = ?, "
                  " status = ? "
                  " where current of my_select_cursor";
    cur_start = args->start;
    cur_end = args->end;

    exec sql prepare my_select_query from :selectQuery;
    /* Verify the sql code for error here */
    exec sql declare my_select_cursor cursor with hold for my_select_query;
    exec sql open my_select_cursor using :cur_start, :cur_end;
    /* Verify the sql code for error here */
    exec sql prepare my_update_query from :updateQuery;
    /* Verify the sql code for error here */

    while (1)
    {
        number = 0;
        serial = 0;
        memset(frequency, 0, sizeof(frequency));
        exec sql fetch my_select_cursor into :number, :serial, :frequency;
        if (sqlca.sqlcode != SQL_OK)
            break;
        exec sql execute my_update_query using :frequency, :frequency;
    }
    exec sql close my_select_cursor;
    return 0;
}
While implementing this, we are getting error -255. One solution we found is to add BEGIN WORK and COMMIT WORK. Since we have a large amount of data, this might clutter the transaction log.
Is there any other solution to this problem? The IBM Informix documentation shows the usage is correct.
Appreciate the help in advance.
Thanks,
Mathew Liju
Error -255 is "Not in transaction".
I see no BEGIN WORK (or COMMIT WORK or ROLLBACK WORK) statements.
You need to add BEGIN WORK before you open the cursor with the FOR UPDATE clause. You then need to decide whether to commit periodically to avoid overlong transactions. The fact that you use a WITH HOLD cursor shows that you had thought about using sub-transactions; if you were not going to do so, you would not use that clause.
Note that Informix has 3 primary database logging modes:
Unlogged (no transaction support)
Logged (by default, each statement is a singleton transaction; an explicit BEGIN WORK starts a multi-statement transaction terminated by COMMIT WORK or ROLLBACK WORK).
Logged MODE ANSI (slightly simplistically, you are automatically in a transaction; you need an explicit COMMIT or ROLLBACK to terminate a transaction, and may then, optionally, use an explicit BEGIN, but the BEGIN is not actually necessary).
From the symptoms you describe, you have a logged but not MODE ANSI database. Therefore, you must explicitly code the BEGIN WORK statements.
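To make that concrete, here is a minimal sketch of the statement flow (the 1000-row batch size is illustrative); WITH HOLD is what lets the cursor survive each COMMIT WORK:
BEGIN WORK;      -- before "open my_select_cursor"

-- ... the fetch/update loop runs here; then, after every 1000 updates or so:
COMMIT WORK;     -- ends the sub-transaction and releases its locks;
                 -- the WITH HOLD cursor stays open and keeps its position
BEGIN WORK;      -- immediately start the next sub-transaction

-- ... once the loop exits:
COMMIT WORK;     -- commit the final partial batch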