I'm using in-memory SQLite databases with AnyDac on Delphi XE2. I noticed that my queries return results even when I forget to connect to the database first after restarting the program, which is probably caused by the auto-connect capability of AnyDac. I guess this must also mean that the in-memory databases stay in memory even after the program itself has terminated, which is effectively a memory leak.
I looked through the AnyDac documentation and searched online, but I could not find how I am supposed to disconnect from a database correctly using AnyDac. I noticed that when I call the Close method of a TADConnection, the SQLite file seems to stay open. I guess the same happens with my in-memory databases.
Can anyone please tell me how to completely close, disconnect from, and remove an in-memory SQLite database in a correct and safe way?
ADConnection.Close completely removes the in-memory SQLite DB. With the next ADConnection.Open, whether explicit or implicit, a new, empty in-memory DB is created.
This can be easily confirmed with a simple test:
ADConnection1.Open;
ADConnection1.ExecSQL('create table TEST (A, B)');
ADConnection1.ExecSQL('insert into TEST values (1, 2)');
// show value of TEST.A
ShowMessage(VarToStr(ADConnection1.ExecSQLScalar('select A from TEST')));
ADConnection1.Close;
ADConnection1.Open;
// next statement generates exception - [FireDAC][Phys][SQLite] ERROR: no such table: TEST
ShowMessage(VarToStr(ADConnection1.ExecSQLScalar('select A from TEST')));
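If you want the teardown to be explicit instead of relying on auto-connect/disconnect, a minimal sketch (the form and handler names are just placeholders) is to close the connection when the owning form is destroyed:
procedure TForm1.FormDestroy(Sender: TObject);
begin
  // Close releases the in-memory database; the next Open creates a new, empty one
  ADConnection1.Close; // same effect as ADConnection1.Connected := False
end;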
I recently modified my iOS app to enable serialized mode for both a database encrypted using SQLCipher and a non-encrypted database (also SQLite). I also maintain a static sqlite3 connection for each database, and each is only opened once (by simply checking for null values) and shared throughout the lifetime of the app.
The app is required to have a sync-like behavior which downloads a ton of records from a remote database at regular intervals using a SOAP request and updates the contents of the local encrypted database. Of course, the person using the app may or may not be updating or reading from the database at the same time, depending on what they're doing, so I made the changes mentioned in the above paragraph.
In short-term testing there doesn't appear to be any issue with how things work, and I have yet to experience any problem myself.
However, some users are reporting that they've lost access to the encrypted database, and I'm trying to figure out why.
My thoughts are as follows: methods written by another developer declared all sqlite3_stmt's as static (I believe this code was in the problematic release). In the past I've noticed crashes when two threads run a particular method simultaneously: one thread finalizes, modifies or replaces a sqlite3_stmt while another thread is using it. A crash doesn't always occur, because he wrapped most of his SQLite code in try/catch blocks. If it's true that SQLite uses prepare and finalize to implement locking, could the orphaning of sqlite3_stmt's that occurs due to their static nature in this context put the database into an inoperable state? For example, when a statement that has acquired an exclusive lock after being stepped is replaced by an assignment in the same method running in another thread?
I realize that this doesn't necessarily mean that the database will become permanently unusable, but, consider this scenario:
At some point during the app's lifetime it will re-key the encrypted database and that key is stored in another database. Suppose that it successfully re-keys the encrypted database, but then the new key is not stored in the other database because of what I mentioned above.
Provided that the database hasn't become corrupted at some point (I'm not really counting on this being the case), this is the only explanation I can come up with for why the user may not be able to use the encrypted database after restarting the iOS app, seeing as the app would be the only one to access the database file.
Since I can't recreate this issue, I can only speculate about what the cause might be. What thoughts do you have? Does this seem like a plausible scenario for something that happens rarely? Do you have another idea of something to look into?
If the database is rekeyed but the new key is not successfully stored in the other database, then that could certainly cause the problem.
We are having a strange situation while trying to dbexport/dbimport an Informix database.
While importing the DB we got the error:
1213 - Character to numeric conversion error
I checked where the import stops.
I examined the corresponding file (sed -n '1745813,1745815p' table.unl) and found data that looks corrupt.
3.0]26.0]018102]0.0]20111001.0]0.0]77.38]20111012.0]978]04]0.0072]6.59]6.59]29.93]29.93]77.38]
3.0]26.0]018102]0.0]20111001.0]0.0]143.69]20111012.0]978]04]0.0144]6.59]6.59]48.79]48.79]143.69]
]0.000/]]-0.000000000000000000000000000000000000000000000000000044]8\00\00\07Ú\00\00Õ²\00\00\07P27\00\00\07Ú\00\00i]-0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000999995+']-49999992%(000000000000000000.0]-989074999997704800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]-999992%(0000000000000000000000.0]]]Ú\00\00]*00000015056480000000000000000000000000000000000000000000000000000000000.0]-92%'9999)).'000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]-;24944999992%(000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]-81%-999994;2475200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]]-97704751999992%(00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]
The first two lines are OK. The rest seems to be corrupt data.
I do not know how this data got here, since it does not show up in a SELECT statement.
I exported only the affected table and found that the same data is there.
I then found a filter that matches all the rows and used it in another export. This time the corrupt data was not there.
Any idea about what might be the reason behind this?
Best Regards
Arthur
Arthur,
Trying to answer the question of why the database is generating corrupted data:
You will need to investigate.
The common causes are:
A crash of your OS/hardware
A power failure
A crash of your database, or its processes being killed by some admin
After any of the problems above, your FS can become corrupt, and the recovery (fsck) probably messed up the database data.
You are probably working with a journaling FS, which ext3, ext4 and NTFS are...
If you don't know of any events like those described above, you need to investigate the online.log of your Informix database, looking for any start of the engine that was not preceded by a regular shutdown. Looking at your OS logs will also help you find any involuntary restart of the OS (power failure or crash).
Now, about the solutions.
Recover a backup
Then you can export just the corrupted table and replace it in your dbexport.
You can do this with archecker (requires an Informix version greater than 10.FC4).
This article may help if you need it: Table Level Restore - Pretty Useful Stuff
Export your table just as you described in the comments
But this will not recover the corrupt data; it will just "save" the "good" data and discard the "bad" data.
Create a new table as a copy of the first one.
Insert into table2 select * from table1 where (my filter which matched all rows)
Recreate the table indexes.
Rename the tables.
Depending on how bad the corrupted data is, you are sometimes not able to export all the "good" data with just one SELECT; you need to work around the "bad" data. Check this IBM article:
Unloading around table corruption
Ways to prevent this kind of problem or to make recovery easier
First, of course, there is no way to prevent every crash...
What you can do is try to minimize the damage after a crash.
Do not use journaling file systems!
(On Linux, use the ext2 FS or raw devices.)
Enable KAIO (for raw devices) or DIRECT_IO (for any FS) in the Informix configuration.
This will prevent the database from using the OS cache, making the process of writing data to your disk more reliable. In some situations this can slow down or speed up your database; it depends a lot on your hardware/storage.
Configure your backups and test/check them regularly.
I recommend configuring a full database backup plus logical log backups.
Depending on the Informix version and which license you have, you may have the right to configure a cold RSS server (a "cluster" secondary node) which works in active-passive mode on a different server and will dramatically reduce your chances of losing any data after a crash on the main server.
After any crash, run oncheck to detect whether any corruption occurred:
How to use oncheck to detect corruption
If an app crashes while writing into a SQLite db (or Core Data), sometimes the db file will be broken, after which the db may fail to open.
What I'm doing now is deleting the db file if it fails to open and copying in a fresh one to use.
I'm wondering what's the BEST WAY to deal with such a situation?
Due to the atomic commit nature of SQLite, you should never experience database corruption. If you are, it could be due to enabling features such as "Write Caching" within iOS or in the hard drive itself, or it could possibly even be caused by hardware failure.
SQLite maintains a journal file to roll back commits and return the database to a consistent state in the event of a power failure or other abrupt shutdown. If corruption occurs, it means that the OS told SQLite a write operation had completed when in actuality it hadn't yet been physically committed to the media. Please ensure Write Caching is disabled when using SQLite in your app. For more information, please see the SQLite Atomic Commit reference.
Otherwise, the common way people seem to "repair" a SQLite DB is to .dump the DB file into another one, like so: echo ".dump" | sqlite3 old.db | sqlite3 new.db
Hope this helps...
I have a very strange problem with transactions in Interbase 7.5 which seem to be stuck.
I can track the problem with IBConsole -> right click DB -> Performance Monitor -> Transactions
Usually this list should show only a few active transactions. But I get several hundred active transactions when I start my application (a web module for an Apache web server, using Delphi 7 Interbase components, e.g. IBQuery, IBTransaction, ...).
Transaction type is always listed as snapshot, if this is of relevance.
I have already triple checked all sql statements and cannot find anything that should produce such problems...
Is there any way to get the SQL statements of a specific transaction?
Any other suggestions on how to find such a problem would be very welcome.
Is there any way to get the SQL statements of a specific transaction?
Yes, you can SELECT from TMP$STATEMENTS WHERE TRANSACTION_ID = .... That's from memory, but should get you started.
In IB Performance Monitor, you can locate the transaction from the statements tab, using the button on the toolbar. Can't remember if you can go the other way in that app. It's been a long time since I wrote it!
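For what it's worth, here is a rough sketch of running that query from the Delphi 7 / IBX side (the component names and the hard-coded transaction ID are placeholders, and the TMP$STATEMENTS table/column names are the ones from the answer above, so verify them against your InterBase 7.5 documentation):
IBQuery1.Database := IBDatabase1;
IBQuery1.Transaction := IBTransaction1;
IBQuery1.SQL.Text := 'select * from TMP$STATEMENTS where TRANSACTION_ID = :TXN_ID';
IBQuery1.ParamByName('TXN_ID').AsInteger := 12345; // ID as shown in Performance Monitor
IBQuery1.Open;
// inspect the returned statement text to see what SQL that transaction ran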
Active IBX data-sets require an active transaction all the time. If you don't have active data-sets, just don't forget to commit all the active transactions.
If you do have active data-sets, you can configure all your components to use the same TIBTransaction object, and you can also configure that single TIBTransaction to commit or roll back after an idle time-out period via the IdleTimer and DefaultAction properties.
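As a rough sketch (component names are placeholders; the same settings can be made at design time), the shared-transaction setup could look like this:
IBDatabase1.DefaultTransaction := IBTransaction1;
IBTransaction1.DefaultDatabase := IBDatabase1;
IBTransaction1.DefaultAction := TACommit; // what to do when the idle timer fires
IBTransaction1.IdleTimer := 60;           // seconds of inactivity before DefaultAction runs
IBQuery1.Database := IBDatabase1;
IBQuery1.Transaction := IBTransaction1;   // repeat for every IBQuery/IBTable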
Terminating the transaction (by committing or rolling back, manually or automatically) will close all the linked datasets (TIBQuery, TIBTable and the like).
You may be tempted to use the CommitRetaining or RollbackRetaining methods to terminate the transaction without closing the related data-sets, but this may affect the performance of the server, and my advice is to avoid using them.
If you want to improve your application, you should consider changing your database connection layer or introducing an in-memory capable dataset on top of IBX, for example Delphi's TClientDataSet, which allows you to retrieve data and keep it in memory while closing all the underlying datasets (and transactions), and still lets you use the traditional Insert/Append/Edit/Delete methods to modify the data and then apply those changes to the database in a new, short transaction.
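A minimal sketch of the TClientDataSet approach (the wiring names are hypothetical; it assumes a TDataSetProvider named DataSetProvider1 with DataSet := IBQuery1, and a TClientDataSet whose ProviderName is 'DataSetProvider1'):
ClientDataSet1.Open;            // pulls the rows into memory through the provider
IBTransaction1.Commit;          // the underlying IBX dataset and transaction can now be closed
// ... the user edits ClientDataSet1 with Insert/Append/Edit/Delete ...
ClientDataSet1.ApplyUpdates(0); // writes the pending changes back in a new, short transaction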
I have a PowerShell 2.0 script on an XP OS. The purpose of the script is to extract data from an old database (Sybase) and populate a SQL Server 2008 database. The model I am using is to create OLEDB connections to the Sybase database. The script calls a series of stored procedures from the Sybase database, and the results are used to create an XML string. The XML string is queried for the input data for the SQL Server stored procedures. After each data element is created in the SQL Server database, the XML string is saved to a file. Every database connection is closed after execution is completed.
It is a simple model, but it uses staggering amounts of memory. For transferring only 1000 rows of data the script's memory footprint grows to 3 GB. When the script completes, the memory does not drop. In an attempt to rectify this problem I have added logic to free every variable when it is no longer used and to call garbage collection in every finally clause of every try block. I am aware that this is overkill, but I am trying to find anything that will reduce the memory usage.
I am in the process of looking for a memory trace tool, but I am also looking for expert opinions on how to start tracking down this critical issue. I know I am missing something obvious, so any advice would be appreciated.
I found the source of the leak. I am using an OLEDB connection to a Sybase Server instance to get data to load into SQL Server. 95% of the leak was isolated to a single function that invoked a Sybase stored procedure. Instead of returning the results in a result set, this procedure had the option of either returning the results as output parameters or as a result set, and I initially chose the output parameter option. For reasons that are not clear to me, this output parameter mechanism caused a massive memory leak. I changed the logic to use a result set, and that resolved the leak. It is not clear why output parameters were an issue, but I found an alternative approach that corrected the problem.