How to commit an EXECUTE PROCEDURE in Firebird from Asterisk? - stored-procedures

I have a stored procedure in Firebird that I call with EXECUTE PROCEDURE from Asterisk.
The transaction is not committed after the procedure completes, but after running asterisk$> reload the transaction is committed.
Thank you very much.

Data manipulation language (DML) statements are not committed automatically.
You must issue a COMMIT statement to make any DML changes permanent in the database.
If you only see the execution results committed after a reload, it is because the Asterisk transaction handler issues the commit statement as it shuts down for the reload.
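For example, if you run the same call by hand in Firebird's isql, the procedure's changes only become visible to other connections after an explicit commit. A minimal sketch (the database path, credentials and procedure name are placeholders, not taken from the question):
CONNECT '/var/lib/firebird/asterisk.fdb' USER 'SYSDBA' PASSWORD 'masterkey';  -- placeholder path and credentials
EXECUTE PROCEDURE LOG_CALL('1001', 'ANSWERED');  -- placeholder procedure; its DML runs here
COMMIT;  -- without this, the changes are rolled back when the connection closes
Whatever component Asterisk uses to reach Firebird (for example an ODBC connection) has to do the equivalent: either issue an explicit commit or run the connection in autocommit mode.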

Related

Rails: wrap two processes in a db transaction

I've never done this before and I don't know if it is possible. I've got two processes which I think should most likely be wrapped in a database transaction, to have the guarantee that either all changes from the transaction are made to the database or no changes are made at all.
process.campaign_code.update(state: 'used')
Campaign.find(process.campaign_code.campaign_id).increment!(:used_campaign_codes_amount, 1)
Try ActiveRecord::Base.transaction:
https://api.rubyonrails.org/classes/ActiveRecord/Transactions/ClassMethods.html
ActiveRecord::Base.transaction do
  process.campaign_code.update!(state: 'used')
  Campaign.find(process.campaign_code.campaign_id).increment!(:used_campaign_codes_amount, 1)
end
If any of the commands in the transaction raises an error, all of the SQL statements in the block will be rolled back.
EDIT: It's also worth changing update to update! to ensure an error is raised if the update is unsuccessful; otherwise it can fail silently and the block will continue.

Where do I put Rails transactions and how do I execute them?

I have a case where I need to create about 10000 entries in a table, and after some research I decided to use a transaction to do it.
My problem is that I haven't found any documentation or guide that tells me where to put a transaction or how to execute it.
This can be achieved very easily:
ActiveRecord::Base.transaction do
  ... your code ...
end
The code inside the block will run within a database transaction. If any error occurs during execution, all the changes will be rolled back.

What does the querying of the transaction actually do?

I am wondering: what does querying whether the connection is in a transaction or not actually do?
Example :
....
try
  if not DATA_MODULE.ACRDatabase1.InTransaction then
    DATA_MODULE.ACRDatabase1.StartTransaction;
  ....
  DATA_MODULE.ACRDatabase1.Commit();
except
  DATA_MODULE.ACRDatabase1.Rollback;
end;
Does it temporarily suspend the current transaction if it detects that another transaction is in progress, wait for the other transaction to complete and only then execute? Or does it just fail (roll back) if another transaction is detected?
Attempting to start a transaction that has already been started will raise an exception. The call to InTransaction simply determines whether a transaction has already been started and returns True or False.
I prefer the following: it protects you if you have any problems while editing, and it only rolls back if something goes wrong before or during the Commit. If any exception is raised after the StartTransaction, you will never reach the Commit. The finally block will always run and make sure you are not still in a transaction; if you are, it rolls back. I try not to use try..except, so I don't have to worry about the Raise.
try
  DATA_MODULE.ACRDatabase1.StartTransaction;
  ....
  DATA_MODULE.ACRDatabase1.Commit();
finally
  if DATA_MODULE.ACRDatabase1.InTransaction then
    DATA_MODULE.ACRDatabase1.Rollback;
end;
At the very least the code should be re-raising the exception, or your user will never know why the data is not being saved. See: ReRaising Exception.

Deadlock on concurrent update, but I can see no concurrency

What could trigger a deadlock-message on Firebird when there is only a single transaction writing to the DB?
I am building a webapp with a backend written in Delphi 2010 on top of a Firebird 2.1 database. I am getting a concurrent-update error that I cannot make sense of. Maybe someone can help me debug the issue or explain scenarios that may lead to the message.
I am trying an UPDATE to a single field on a single record.
UPDATE USERS SET passwdhash=? WHERE (RECID=?)
The message I am seeing is the standard:
deadlock
update conflicts with concurrent update
concurrent transaction number is 659718
deadlock
Error Code: 16
I understand what it tells me but I do not understand why I am seeing it here as there are no concurrent updates I know of.
Here is what I did to investigate.
I started my application server and checked the result of this query:
SELECT
A.MON$ATTACHMENT_ID,
A.MON$USER,
A.MON$REMOTE_ADDRESS,
A.MON$REMOTE_PROCESS,
T.MON$STATE,
T.MON$TIMESTAMP,
T.MON$TOP_TRANSACTION,
T.MON$OLDEST_TRANSACTION,
T.MON$OLDEST_ACTIVE,
T.MON$ISOLATION_MODE
FROM MON$ATTACHMENTS A
LEFT OUTER JOIN MON$TRANSACTIONS T
ON (T.MON$ATTACHMENT_ID = A.MON$ATTACHMENT_ID)
The result shows a number of connections, but only one of them has non-NULL values in the MON$TRANSACTIONS fields. That connection is the one I am using from IBExpert to query the monitoring tables.
Am I right to think that connections with no active transaction can be disregarded as not contributing to a deadlock situation?
Next I put a breakpoint on the line submitting the UPDATE statement in my application server and executed the request that triggers it. When the breakpoint stopped the application, I reran the monitor query above.
This time I could see another transaction active, just as I would expect.
Then I let my app server execute the UPDATE and got the error message shown above.
What can trigger the deadlock-message when there is only one writing transaction? Or are there more and I am misinterpreting the output? Any other suggestions on how to debug this?
Firebird uses MVCC (multiversion concurrency control) for its transaction model. One consequence is that - depending on the transaction isolation - you only see record versions that were committed when your transaction started (consistency and concurrency isolation levels) or when your statement started (read committed). A change to a record creates a new version of the record, which only becomes visible to other active transactions once it has been committed (and even then only to read committed transactions).
As a basic rule there can be only one uncommitted version of a record, so attempts by two transactions to update the same record will fail for one of those transactions. For historical reasons this type of error is grouped under the deadlock error family, even though it is not actually a deadlock in the usual concurrency sense.
The rule is actually a bit more restrictive depending on your transaction isolation: for the consistency and concurrency levels there must also be no newer committed version of the record that is not visible to your transaction.
My guess is that for you something like this happened:
Transaction 1 started
Transaction 2 started with concurrency or consistency isolation
Transaction 1 modifies record (new version created)
Transaction 1 commits
Transaction 2 attempts to modify same record
(Note: steps 1+3 and 2 could occur in a different order, e.g. 1,3,2 or 2,1,3.)
Step 5 fails, because the new version created in step 3 is not visible to transaction 2. If instead read committed had been used then step 5 would succeed as the new version would be visible to the transaction at that point.
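The same sequence can be reproduced by hand with two isql sessions against one database. This is only a sketch, using the USERS table from the question with made-up values:
-- Session A (isql's default transaction is SNAPSHOT, i.e. concurrency isolation)
UPDATE USERS SET PASSWDHASH = 'aaa' WHERE RECID = 1;  -- values are made up
-- Session B, opened while A's update is still uncommitted, so its snapshot predates A's commit
SELECT PASSWDHASH FROM USERS WHERE RECID = 1;  -- still shows the old value
-- Session A
COMMIT;
-- Session B
UPDATE USERS SET PASSWDHASH = 'bbb' WHERE RECID = 1;
-- fails with "update conflicts with concurrent update": A's committed version is
-- newer than the snapshot B started with; after a COMMIT or ROLLBACK in B,
-- retrying the UPDATE in the fresh transaction succeeds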

How to test for 2-phase commit behavior on a stored procedure?

This particular stored procedure does an archive purge:
1) select tokens from transactional db
2) put tokens in temp table
3) loop through tokens:
3.1) using tokens from the temp table, retrieve data from the transactional tables and insert it into tables in a separate archive db (via federation)
3.2) COMMIT the inserts.
3.3) then, using the same token, delete the data from the transactional db
3.4) COMMIT the deletes.
Two-phase commit allows us to have just one commit at the end of the loop.
My question is: how do I simulate scenarios that make the proc fail in the insert phase or the delete phase? This is to ensure that even if a run fails, the data retains integrity - no half-processed tokens or the like.
To force a run-time error, I usually put a SELECT 0/0 in the code. Just put it in before the COMMIT of your choice and watch the fireworks that result!
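As a sketch, assuming a DB2-style procedure like the one described (SYSIBM.SYSDUMMY1 is DB2's dummy table; the table name, v_token and v_dummy are made-up, with v_dummy declared as an INTEGER earlier in the procedure), the divide-by-zero goes just before the COMMIT you want to sabotage, here the delete-phase one:
-- inside the loop, delete phase
DELETE FROM CALL_DETAIL WHERE TOKEN = v_token;    -- hypothetical table and variable names
SELECT 0 / 0 INTO v_dummy FROM SYSIBM.SYSDUMMY1;  -- division by zero raises SQLSTATE 22012 here
COMMIT;                                           -- never reached; with 2PC the token's inserts should roll back as well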
If you have unique keys involved, you can put a record in place that would cause a duplicate key violation.
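A sketch of that variant, again with made-up table and column names: seed the archive table beforehand with a token the purge is about to insert, so the insert phase hits a duplicate-key error (SQLSTATE 23505):
-- run once before starting the purge; 12345 must be a token the purge will select (hypothetical names)
INSERT INTO ARCHIVE.CALL_DETAIL (TOKEN) VALUES (12345);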
Hope this will help somebody else!
I recently found that the best way was via signals.
In the middle of the delete phase, I put in an error signal so the process would fail on that token and exit the loop; it should then roll back whatever it had inserted in the insert phase for that token.
DECLARE rollback_on_token_101 CONDITION FOR SQLSTATE '99001';
Then, inside the loop, in the middle of the delete phase:
IF TOKEN_SUCCESS_COUNT = 100 THEN
  SIGNAL rollback_on_token_101
    SET MESSAGE_TEXT = 'rolling back on mid-delete phase on token # 101 ';
END IF;
