Oracle 9i database - ORA-00054 RESOURCE BUSY - oracle9i

Looking for some advice. Yesterday, a colleague tried to update a row in an Oracle 9i database via Toad. She killed Toad in Task Manager after it hung as "Not Responding". Now, when we try to update the row, we get ORA-00054: resource busy and acquire with NOWAIT specified. We access the database over a VPN connection, which has since been disconnected. I'm not a DBA, but I thought Oracle would clean up any old sessions/locks after some time?
Is there any way to fix this or should I contact a DBA to have a look?
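If it doesn't clear on its own (PMON will eventually clean up a dead session, but it can take a while), a DBA would typically look up the session holding the lock and kill it. A minimal sketch using the standard dictionary views (the SID and serial# for the kill come from the first query):
-- Find the sessions holding DML locks, likely including the abandoned Toad session
SELECT s.sid, s.serial#, s.username, s.program, o.object_name
FROM v$locked_object l
JOIN dba_objects o ON o.object_id = l.object_id
JOIN v$session s ON s.sid = l.session_id;
-- Kill the offending session (requires the ALTER SYSTEM privilege)
ALTER SYSTEM KILL SESSION '<sid>,<serial#>';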

Related

How to simulate "Mysql2::Error: MySQL client is not connected"

Can anyone suggest a way of forcing the above exception to occur, in the context of a Rails app?
I have a particular situation where it arises (involving scheduled database maintenance) and I need to be able to trigger it locally so I can test my application handles it correctly.
I would guess there's either something I could do to the DB itself, or else some method I could call on the ActiveRecord connection that would trigger this, but I haven't been able to figure it out.
You are probably getting this error because the MySQL connection is killed during maintenance while a SQL query is being made. (Here is a test case of this scenario https://github.com/brianmario/mysql2/blob/a8c96fbe277723e53985983415f9875e759e1d47/spec/mysql2/client_spec.rb#L597)
To reproduce this locally, you can run a long-running SQL query in Rails, e.g.
ActiveRecord::Base.connection.execute("select sleep(100)")
While that is running, find and kill the Rails SQL connections by running
SELECT id FROM INFORMATION_SCHEMA.PROCESSLIST WHERE `db` = '<your-database-name>';
kill <id>; -- Run for each id listed in the previous query.
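If the processlist shows several connections and you only want to kill the one running the sleep above, you can filter on the statement text (a small variation on the previous query; INFO holds the currently executing SQL):
SELECT id, info FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE `db` = '<your-database-name>' AND info LIKE 'select sleep%';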
Find the connection ID:
ActiveRecord::Base.connection.raw_connection.thread_id
# or
ActiveRecord::Base.connection_pool.connections.map { |conn| conn.raw_connection.thread_id }
or by SQL, as mentioned in Cameron's answer.
Then, from the mysql client, invoke
KILL <ID>; -- the ID obtained from #thread_id
Future attempts to query via this connection will fail with "Mysql2::Error: MySQL client is not connected"
Notes:
The reconnect: true option in database.yml leads to an immediate reconnect after a KILL. You can observe this by calling #thread_id again; it will return a new ID.
ActiveRecord::Base.connection uses a separate connection for the thread it is called from. Since we killed a single connection, other threads can still query MySQL without errors.
You can access all of the process's connections via
ActiveRecord::Base.connection_pool.connections
You may wonder why, with a pool size of, say, 5 (ActiveRecord::Base.connection_pool.size), the console shows ActiveRecord::Base.connection_pool.connections.count == 1. In that case you can check out more connections with
ActiveRecord::Base.connection_pool.checkout

Copying remote Firebird table to local database

I have a remote Firebird 3.0 server with a database. In this database there is a big table which clients query very often during their work. There are many clients and the internet connection is poor, so working with this table is terrible. I made a local copy of this table via IBExpert into a temporary database, which is distributed with the client application.
But now some values in this table need to change (new values added and some old ones edited), so I need some kind of synchronization: copying the modified remote table to the client's local database.
The client application was built with Delphi Berlin 10.1, so the synchronization should be done in Delphi code.
Can you give me an idea of how to correctly synchronize such a big table?
You could fire POST_EVENT from triggers on the master database (for insert, update, and delete) to notify client applications that there are changes.
Then your client would need to run a procedure (on the local DB) to do the sync. This could be done with EXECUTE STATEMENT ON EXTERNAL, e.g.
FOR EXECUTE STATEMENT ('SELECT ... WHERE CURRENT_TIMESTAMP >= tablename.modifiedon')
ON EXTERNAL 'SERVER/PORT:DBPATH'
You should include the date of insert/modification/deletion in the master DB.
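A rough sketch of both sides, assuming a hypothetical master table ITEMS with a MODIFIEDON timestamp column and a local SYNC_LOG table recording the last successful sync (all names are placeholders, not your actual schema; deletes would additionally need a tombstone/audit table, and in isql the DDL must be wrapped with SET TERM):
-- On the master database: notify connected clients whenever the table changes
CREATE TRIGGER ITEMS_NOTIFY FOR ITEMS
AFTER INSERT OR UPDATE OR DELETE
AS
BEGIN
  POST_EVENT 'ITEMS_CHANGED';
END

-- On the local database: pull rows modified since the last sync from the master
EXECUTE BLOCK AS
  DECLARE VARIABLE last_sync TIMESTAMP;
  DECLARE VARIABLE id INTEGER;
  DECLARE VARIABLE item_name VARCHAR(100);
BEGIN
  SELECT COALESCE(MAX(synced_at), CAST('2000-01-01' AS TIMESTAMP)) FROM sync_log INTO :last_sync;
  FOR EXECUTE STATEMENT ('SELECT id, item_name FROM items WHERE modifiedon >= :last_sync')
        (last_sync := :last_sync)
      ON EXTERNAL 'SERVER/PORT:DBPATH'  -- add AS USER / PASSWORD as needed
      INTO :id, :item_name
  DO
    UPDATE OR INSERT INTO items (id, item_name) VALUES (:id, :item_name) MATCHING (id);
END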

How to refresh a query on the client when the server has updates, without disconnecting and reconnecting

I developed a client/server database application using Firebird with IBDatabase and IBQuery. I need to know how to refresh the data on the server AND the client when one of them runs an update/insert query. The reason is that when I run a query on the client after inserting records into a table, the new records are not reflected in the query results until I disconnect and reconnect.
I'm using a Firebird DB with the InterBase VCL components, developing in Delphi XE2.
You don't have to disconnect the connection, but you will have to refresh (or close and reopen) the IBQuery. This is the case for most databases.
If you do not want this, you will have to send a notification from the database to all clients. I don't know whether this would be doable from Firebird, but it is not at all common for databases to do this.
The transaction type for your select query is probably snapshot. You can either start a new snapshot transaction each time you want to refresh, or use transaction type read committed.
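For reference, the difference comes down to the transaction's isolation level: a snapshot transaction keeps seeing the data as of the moment it started, while a read committed transaction sees rows committed by other connections when the query is re-opened. In plain Firebird SQL the two would be started roughly like this (in Delphi you would set the equivalent parameters, typically concurrency vs. read_committed/rec_version, on the IBTransaction component):
-- Snapshot: queries keep returning the data as of transaction start
SET TRANSACTION SNAPSHOT;

-- Read committed: a closed-and-reopened query sees newly committed rows
SET TRANSACTION READ COMMITTED RECORD_VERSION;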

Dealing with service dependencies that time out or fail

I have written a Windows service that overrides the Logon and Logoff methods of ISensLogon2 to detect when logon and logoff occur, and then inserts the log information into SQL Server on the server computer.
But there is a problem when I turn on the client computer just after the server.
In this situation my service cannot insert into SQL Server.
I think this is because SQL Server has not finished loading before the Windows service tries to access it.
So I want to find a way to programmatically check whether SQL Server is ready, and only then try to work with it.
Your service can't start until its dependencies, remote or otherwise, have also started. Checking SQL Server is easy: try to connect to it and retry until you succeed.
The only problem is that services have startup timeouts; you can't sit and repeat this indefinitely.
Things that cannot be reliably started in a reasonable timeframe should not be services, or they should fail as soon as possible. Otherwise you will end up with a lot of support requests about your service timing out.
Services are also usually not interactive, so a failure is worse because you can't directly tell the user that you're not up unless you add something like a tray icon.
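Once a connection attempt succeeds, it can also be worth confirming that the target database itself has finished recovery before inserting. A minimal probe (the database name here is a placeholder):
-- Returns 'ONLINE' once the database has finished recovery and is usable
SELECT DATABASEPROPERTYEX('YourLogDatabase', 'Status');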

Oracle ODP.Net and connection pooling

This is really two questions in one, I guess.
We've developed a .Net app that accesses an Oracle database, and have noticed that after changing the user's Oracle password, the app continues to work for a short time with the old password in the connection string. Presumably this has something to do with the way existing connections are pooled?
When first investigating this we tried turning off pooling in the connection string, however the app wouldn't work, throwing the error "Unable to enlist in a distributed transaction" at the point it tries to open a connection. While we probably wouldn't want to turn off connection pooling in a production app, I'm curious as to why MSDTC seems to need it?
We are using Oracle 11g (11.1.2) and latest ODP.Net (11.2 I think).
Thanks in advance
Andy
Please see some of the findings below:
For Question One (the application still connects with the old DB password):
If we connect to the database with connection pooling enabled, the connection pool manager creates and maintains a number of connection sessions the first time Open or Close is called on an OracleConnection object (the number of sessions depends on the "Min Pool Size" and "Max Pool Size" settings in the connection string). In Oracle, I think you could check the active sessions like this:
SELECT s.inst_id,
s.sid,
s.serial#,
p.spid,
s.username,
s.program
FROM gv$session s
JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE s.type != 'BACKGROUND';
And according to the Oracle documentation, the connection pooling service closes connections after 3 minutes of inactivity. [ http://docs.oracle.com/html/E10927_01/featConnecting.htm ]
So the most likely reason is that your application was still connected to the database through this pool, and remained connected for a short time even after you changed the database password.
There could also be a contribution from the "Oracle Client Cache" feature in ODP.Net, but I'm not quite sure; you can check [ http://www.oracle.com/technetwork/issue-archive/2008/08-jul/o48odpnet-098170.html ]
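One way to confirm this on the database side is to check when each pooled session was authenticated; sessions created before the password change keep working because the password is only verified at logon. A hedged variation on the query above:
-- Pooled sessions opened before the password change will show an older LOGON_TIME
SELECT s.sid, s.username, s.program, s.logon_time
FROM gv$session s
WHERE s.type != 'BACKGROUND'
ORDER BY s.logon_time;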
For Question Two (why MSDTC is needed):
If you are using nested database connections in your code, the transaction will be promoted to DTC. [ http://petermeinl.wordpress.com/2011/03/13/avoiding-unwanted-escalation-to-distributed-transactions/ ] There is actually an Oracle Services for Microsoft Transaction Server (OraMTS) component that acts as the bridge among ODP.Net, DTC, and the Oracle database.
But you didn't hit this problem (MSDTC) before disabling connection pooling. It seems your code reuses the same connection from the underlying connection pool, which can eliminate the need to promote to DTC. There was a similar question on Stack Overflow. [ Why isn't my transaction escalating to DTC? ]
