Cancel a long-running query or transaction in Firebird - Delphi

How can I safely abort/cancel a currently running query or transaction? Using gfix -shut may corrupt the database. I'm using Delphi and Firebird 2.5.
Thanks in advance
Reynaldi

gfix cannot corrupt the database. It terminates running queries safely and rolls back all active transactions.
You can cancel a given query by executing DELETE FROM MON$STATEMENTS WHERE MON$STATEMENT_ID = ...
You can shut down a whole attachment by executing DELETE FROM MON$ATTACHMENTS WHERE MON$ATTACHMENT_ID = ...
These queries should be run from a separate, parallel attachment.
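For example, a hedged sketch (run from a second attachment; the column names are those of the Firebird 2.5 monitoring tables, and the statement id is illustrative):

-- Find running statements and the SQL they are executing
SELECT MON$STATEMENT_ID, MON$ATTACHMENT_ID, MON$SQL_TEXT
FROM MON$STATEMENTS
WHERE MON$STATE = 1;  -- 1 = running

-- Cancel one of them by id
DELETE FROM MON$STATEMENTS WHERE MON$STATEMENT_ID = 123;  -- 123 is an example id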

In addition to the answer provided by Andrej, the Firebird 2.5 API also includes the fb_cancel_operation call, which cancels all running actions on a database handle. You would need to check whether your Delphi components support this.

Related

MySQL Error 2013: Lost connection to MySQL server during query

I've read every post with the same or a very similar headline, but I still can't find a proper solution or explanation for my problem.
I'm working with MySQL Workbench 6.3 CE. I have been able to create a database with several tables and to create a connection from Python to write data to it. Still, I had a problem with a varchar field that needed to hold more than 45 characters. When I try to raise its limit, e.g. to VARCHAR(70), I get the 2013 error saying my connection was closed during the query, no matter how many times I try or how high I set the timeout limits.
I'm using the above version of Workbench on Windows 10, and I'm trying to modify that field from Workbench. After that first attempt, I can't drop a table either, nor can I connect from Python.
What is happening?
OK, apparently what was happening is that I had a lock, and there were a lot of queries stuck in the state "waiting for table metadata lock".
I did the following in the Workbench console:
SELECT CONCAT('KILL ', id, ';') FROM information_schema.processlist WHERE user = 'root';
That generates a list of KILL statements for all those processes. I copied that list into a new tab and executed a mass kill. After that, it worked again.
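As a hedged diagnostic sketch (the processlist columns are standard, though the exact state text may vary by MySQL version), you can first check which sessions are actually stuck on the metadata lock before killing anything:

SELECT id, state, info
FROM information_schema.processlist
WHERE state LIKE '%metadata lock%';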
Can anybody explain how I got into that situation, and what precautions I should take in my Python scripts to avoid it?
Thank you

AS400 / SQL Server 2008 R2 Data Transfer Performance Improvement

We have recently converted our JD Edwards EnterpriseOne system from an AS400/DB2 platform to Windows & SQL Server. In the old system we had an RPG/CL program that would transfer data from an AS400 library to the accounting system for further processing. The end users needed to initiate this process, so it was executed via a menu command.
To replicate this behavior after the conversion, I created a stored procedure in SQL Server 2008 R2 that inserts records into the SQL Server database from the AS400 via a linked server, and then updates the records on the AS400 to indicate that they have been processed. To give the end users the ability to execute this process, I created an SSRS (2005) report that executes the stored procedure.
When the SSRS report is executed interactively, we intermittently get the error 'For security reasons DTD is prohibited in this XML document', which from my research is caused by SSRS running out of memory.
Does anyone know of another/better way to transfer the data?
The transfer/update part of the stored procedure is essentially:
INSERT INTO [SQL DEST TABLE]
SELECT *
FROM [AS400 Linked Server/Table];

UPDATE OPENQUERY (AS400_LINK, 'MY SELECT QUERY')
SET FLAG = PROCESSED;
You should get better database performance if you allow a server to perform work with its own data, where possible, rather than having to transmit it back and forth.
I will make a few guesses about the circumstances, and if you correct me, I will gladly adjust the answer to fit your situation. This sounds like you are extracting data from an accounting transaction table in DB2, and that when done, you want to update the flag in those same records. That might indicate that the records could stay in that table essentially forever, or perhaps that some other process clears them out. There is no WHERE on your SELECT, so I will assume they do get removed. I will assume that we don't know if more records might get added to the transaction table at any time, including any period between extracting the info and updating the flag.
I wonder if you could update the flag immediately upon extraction, before the records have actually been processed by SQL Server? Would this be allowed logistically, and within your business requirements?
Suppose you...
1. extract the unprocessed DB2 transaction data into a workfile,
2. transfer the workfile to SQL Server,
3. perform whatever processing you want to do in SQL Server,
4. tell DB2 to update the transaction table based on the workfile.
So in DB2, step 1 might look like:
INSERT INTO workfile
SELECT *
FROM transactions
WHERE flag = 'unprocessed';
During step 3, your SQL Server job could update the flag in the workfile to an error status, for any records that SQL Server cannot process properly.
Step 4 on DB2 could be
UPDATE transactions
SET flag = 'processed'
WHERE transid IN (SELECT transid
                  FROM workfile
                  WHERE flag <> 'error');
Hopefully, processing errors on SQL Server would be fairly rare. If that process updates transactions in the workfile only on an error, this should be faster than transmitting an update for each success. The UPDATE statement above should also run faster on DB2, since it is driven by the workfile on the same server, rather than by data transmitted back up to DB2.
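For step 2, a hedged sketch (the destination table is the one from your procedure; the library and file names inside OPENQUERY are illustrative, and OPENQUERY pushes the SELECT down to DB2):

INSERT INTO [SQL DEST TABLE]
SELECT *
FROM OPENQUERY(AS400_LINK, 'SELECT * FROM MYLIB.WORKFILE');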

Remove Garbage from Firebird Database

My Firebird 2.1.3 database seems to be accumulating a lot of garbage from uncompleted transactions. This causes the database to run very slowly until the garbage is removed via a database sweep or a server restart. My database size is 30 GB+.
Have you any idea what could be causing this?
Do any of the new stored procedures create excess garbage?
Please help me.
A Firebird database getting slow after a period of time is usually a sign of bad client transaction management. This can be easily checked by inspecting various transaction counters from the header page, which can be queried by running:
gstat -h <yourdatabase>
when your database becomes slow. For example: pretty much all access libraries, when running transactions in auto-commit mode (basically, when you don't bother starting explicit transactions in your client application), use COMMIT RETAINING, which blocks the OIT/OAT from moving forward.
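An illustrative excerpt of the relevant header-page counters (the values are made up; a large gap between the oldest active transaction and the next transaction is the warning sign):

Oldest transaction      201
Oldest active           202
Oldest snapshot         202
Next transaction        399476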
Besides the gstat command-line tool, in Firebird 2.1 you also have the monitoring tables, in particular MON$TRANSACTIONS, to identify long-running transactions.
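A hedged sketch against the monitoring tables (the column names are those of Firebird 2.1+; the oldest rows returned are the likely culprits):

SELECT MON$TRANSACTION_ID, MON$ATTACHMENT_ID, MON$TIMESTAMP, MON$STATE
FROM MON$TRANSACTIONS
ORDER BY MON$TIMESTAMP;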

Talend - Lock wait timeout exceeded

I use the ETL tool Talend Open Studio (TOS). I want to transfer database A into database B. I use a tMap component. When I use a tLogRow to look at the results, it's OK: TOS shows the data correctly. But when I perform the actual transfer, TOS reports "Lock wait timeout exceeded; try restarting transaction".
I don't understand this problem... Reading the data is OK, but there is a problem writing the data.
Can you help me, please ?
Try running your job using a single connection to MySQL (I assume you are using it, as the error is a MySQL error).
The error above can occur when you attempt to insert/update/delete from two or more connections concurrently.
To create a single connection and have all components share it, you will need a pair of components: tMysqlConnection and tMysqlCommit.
The Connection component should be placed before you attempt to query the database. Once it is in the job, you can link the tMysqlInput components to it by selecting "use existing connection".
The Commit component will issue the commit command and close the transaction.
You will need Connection components for each separate DB server you are working with.
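As a hedged diagnostic sketch (assumes InnoDB on MySQL 5.5+, where these information_schema tables exist), you can check which transaction is blocking which while the job hangs:

SELECT w.requesting_trx_id, w.blocking_trx_id, b.trx_query AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;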
Database A contains 300 articles. I think this problem is caused by Talend Open Studio: TOS can't process more than 100 articles at once. I tried to "cut" database A into three databases, then ran TOS again. The error went away. It's strange... but it works.

BDE, Delphi, ODBC, SQL Native Client & Dead lock

We have some Delphi code that uses the BDE to access SQL Server 2008 through the SQL Server Native Client ODBC driver (2005 version). Our issue is that we're experiencing some deadlock issues in a loop doing inserts to multiple tables.
The whole loop is done within a [TDatabase].StartTransaction. Looking at the SQL Server Profiler, we clearly see that at one point during the loop the SPID (session ID?) changes, and then we naturally end up with a deadlock (both SPIDs doing inserts to the same table).
It seems like the BDE at some point makes a second connection to the DB...
(Although I would love to skip the BDE, it's currently not possible.)
Anyone with experiences to share?
In case your app is multithreaded: BDE is not threadsafe. You have to use a separate BDE session (explicitly created instance of TSession) for each thread; the global Session created automatically for the main thread is not sufficient. Also, all database access components (TDatabase, TQuery, etc.) can only be used in the context of the thread where their corresponding instance of TSession has been created.
Verify in the ODBC configuration whether the SQL Server driver is set to do connection pooling.
It appears that the Native Client installation activates it by default... (at least, my installation had connection pooling active and I didn't activate it).
This probably comes too late for the asker, but maybe it helps others.
Every time there is a cursor that doesn't get closed, the BDE/ODBC combo will establish a new connection for successive queries. The "SPID change" is probably the result of a non-closed cursor.
To solve this problem you have to find the BDE component that caused the still-open cursor. Then you call a method that will eventually close the cursor (TTable.Close, TTable.Last, ...).
After that, the "SPID change" should be gone, and with it the deadlock.
Some tips to find that component:
During the lock, execute the following statement (for example using Management Studio):
EXEC sp_who2
Look in the BlkBy column. The blocked connection has a number in it.
This number is the SPID (server process ID) of the blocking connection.
Then execute DBCC INPUTBUFFER(spid).
In the EventInfo column you will find the SQL statement that was issued by your program.
With that information you should be able to find the BDE-component that causes your trouble.
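A minimal sketch of that diagnosis, run from Management Studio during the lock (the SPID value below is an example; substitute the number you see in BlkBy):

EXEC sp_who2;            -- find the blocked row; BlkBy holds the blocker's SPID
DBCC INPUTBUFFER(53);    -- 53 is an example SPID; shows the blocker's last statement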
