I have a Delphi application with 3 forms. I'm using Access 2003 with the Microsoft.Jet.OLEDB.4.0 provider, and I have an ADOConnection on the main form that all forms share.
I use two .mdb files, where my.mdb has links to the tables in org.mdb.
Everything works, but very slowly. So after long hours of searching I came to this.
I don't know why, but after I run this query, all other queries speed up dramatically (from over 10 seconds to under 1 second), even queries that don't include linked tables.
Table tb_odsotnost is in my.mdb.
Table Userinfo is linked.
with rQueries.ADOQuery1 do
begin
  Close;
  SQL.Clear;
  SQL.Add('SELECT DISTINCT tb_odsotnost.UserID, Userinfo.Name FROM tb_odsotnost');
  SQL.Add('LEFT JOIN Userinfo ON Userinfo.UserID = tb_odsotnost.UserID');
  // startDate/endDate must form valid Jet date literals, e.g. #01/31/2012#
  SQL.Add('WHERE datum BETWEEN ' + startDate + ' AND ' + endDate);
  SQL.Add('ORDER BY Userinfo.Name ASC');
  Open;
end;
I tried running my app on another computer with Win7 and MS Access 2007, and the result was the same.
OK, for now I just run this query in OnFormActivate, but this is not a permanent solution.
When you run a query against a linked table, Access (or Jet, or ADO) acquires a lock on the database, tracked in the .ldb file. If you close the query, that lock has to be reacquired the next time you query the linked table. The recommended way around this is to keep a background dataset open at all times, forcing the lock to remain in effect so it doesn't have to be obtained on every query.
See http://office.microsoft.com/en-us/access-help/improve-performance-of-an-access-database-HP005187453.aspx and look at the "Improve performance of linked tables" section.
If that doesn't help, look at your table definitions in Access to see if you have subdatasheets defined for your table fields in one-to-many relationships.
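The background-dataset workaround can be sketched in Delphi as follows; the dataset field, its placement in FormCreate, and the choice of Userinfo are illustrative, and any linked table will do:

```pascal
// Keep a dummy dataset open for the lifetime of the application so the
// lock on the back-end .mdb is acquired once instead of on every query.
procedure TMainForm.FormCreate(Sender: TObject);
begin
  FLockDataSet := TADODataSet.Create(Self);
  FLockDataSet.Connection := ADOConnection1;
  FLockDataSet.CommandText := 'SELECT TOP 1 UserID FROM Userinfo'; // any linked table
  FLockDataSet.Open; // leave it open; it is freed with the form on shutdown
end;
```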
For some reason I need to use an FDTable in a Delphi project to fetch a large number of records (InterBase database). Unfortunately, opening the FDTable takes too much time (up to 2 minutes, sometimes more), and it's even worse on ApplyUpdates. I tried everything possible by changing the fetch options (RecsMax, RowsetSize, Mode, etc.) as mentioned on some pages, like: https://docwiki.embarcadero.com/RADStudio/Sydney/en/Fetching_Rows_(FireDAC)
Setting the RecsMax option to a small value (50 or 100) helps a lot with performance, but then the table won't fetch a single record with a filter applied, even with FetchAll.
As I mentioned before, I need to do this with an FDTable; FDQuery is not an option, although we all know dealing with queries is better.
Is there a recommendation to smoothly open and fetch the data (100k+ records)?
Is it possible to fetch records with Filter + RecsMax?
The table must have a primary key. You can then configure the TFDTable as follows:
FDTable1.FetchOptions.Items := [fiMeta]; // at least
FDTable1.FetchOptions.Unidirectional := False;
FDTable1.FetchOptions.CursorKind := ckAutomatic; // or ckDynamic
FDTable1.CachedUpdates := False;
// LiveWindowParanoic: when it is False there are problems with Locate and RecNo
FDTable1.FetchOptions.LiveWindowParanoic := False; // play with True/False
This is the configuration for the best performance.
The FDTable component is there mainly for BDE compatibility.
An FDQuery with "select * from Table1" is exactly the same thing as using an FDTable, but it lets you filter the result set on the server side.
100k+ records is a lot of data to transfer, and blobs add further overhead, as they're usually fetched separately.
I suggest you rethink the design of this functionality.
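To illustrate the server-side filtering point, here is a minimal sketch; the table and column names are made up for the example:

```pascal
// Instead of opening the whole table and filtering on the client,
// let the server return only the rows you need.
FDQuery1.SQL.Text := 'select * from Table1 where CREATED_AT >= :FROM_DATE';
FDQuery1.ParamByName('FROM_DATE').AsDateTime := EncodeDate(2023, 1, 1);
FDQuery1.Open; // only the matching rows cross the wire
```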
I'm using the dbExpress components within the Embarcadero C++Builder XE environment.
I have a relatively large table with something between 20k and 100k records, which I display in a DBGrid.
I am using a DataSetProvider, which is connected to an SQLQuery, and a ClientDataSet, which is connected to the DataSetProvider.
I also need to analyze the data, and therefore I need to run through the whole table. For smaller tables I have always used code which is basically something like this:
Form1->ClientDataSet1->First();
while (!Form1->ClientDataSet1->Eof) {
    temp = Form1->ClientDataSet1->FieldByName("FailReason")->AsLargeInt;
    // do something with temp
    Form1->ClientDataSet1->Next();
}
Of course this works, but it is very slow when I need to run through the whole table; for some 50,000 records it can take up to several minutes. My suspicion is that most of the performance is lost because the DBGrid has to repaint each time the dataset moves to the next record.
Therefore I am looking for a method that lets me read the data without manipulating the actual ClientDataSet: maybe a method that copies the data of a column into a variable, or another, more efficient way to run through the records. I am sure that if I had a copy in a variable, the operation would take less than a few seconds.
I googled now for hours, but didn't find anything useful so far.
Best regards,
Bodo
If your CDS is connected to some DB-aware control(s) (via a TDataSource), then first of all consider using DisableControls() before the loop and EnableControls() after it.
Another option is to avoid calling FieldByName inside the loop: look the TField up once beforehand and reuse it.
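Combining both suggestions, the loop from the question could be rewritten like this sketch:

```cpp
// Look the field up once, and silence the DB-aware controls while looping,
// so the DBGrid repaints only once at the end.
TField *failReason = Form1->ClientDataSet1->FieldByName("FailReason");
Form1->ClientDataSet1->DisableControls();
try {
    Form1->ClientDataSet1->First();
    while (!Form1->ClientDataSet1->Eof) {
        __int64 temp = failReason->AsLargeInt;
        // do something with temp
        Form1->ClientDataSet1->Next();
    }
}
__finally {
    Form1->ClientDataSet1->EnableControls();
}
```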
I am using a Firebird DB, and for testing purposes I want to know how many times a specific table has been accessed, without doing it manually with some counter in the code.
Firebird does not keep a historical record of table access. You might be able to use the Firebird trace facility to track this yourself, but this requires the trace to be active the whole time (which can have an impact on performance); alternatively, you can use third-party (paid) tools like FBScanner.
You can also try the monitoring tables, specifically MON$RECORD_STATS, but those statistics are only maintained for as long as the database is open (i.e. has active connections); once the last connection is closed (and assuming database linger is off) the database gets closed, and those statistics are expunged.
MON$RECORD_STATS does not record table access itself, but things like the number of records read, inserted, deleted, etc. The associated tables can be found through MON$TABLE_STATS:
select t.MON$TABLE_NAME, r.MON$STAT_ID, r.MON$STAT_GROUP, r.MON$RECORD_SEQ_READS,
r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES,
r.MON$RECORD_DELETES, r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES,
r.MON$RECORD_EXPUNGES, r.MON$RECORD_LOCKS, r.MON$RECORD_WAITS,
r.MON$RECORD_CONFLICTS, r.MON$BACKVERSION_READS, r.MON$FRAGMENT_READS,
r.MON$RECORD_RPT_READS
from MON$TABLE_STATS t
inner join MON$RECORD_STATS r
on t.MON$STAT_GROUP = r.MON$STAT_GROUP and t.MON$RECORD_STAT_ID = r.MON$STAT_ID
For details see doc/README.monitoring_tables.txt in your Firebird install, or README.monitoring_tables.txt (Firebird 3).
The stored procedure was working before the DB defrag. After the successful defrag, one of the stored procedures stopped working (it runs very slowly without producing any output). The indexes are intact. The stored procedure is doing something wrong; I just can't nail it down. All other stored procedures are working fine. Any idea what could have gone wrong?
Try generating statistics for the tables used in the procedure:
update statistics TableName
or on the indexes:
update index statistics TableName
update index statistics TableName IndexName
Furthermore, you can see which statement in the SP is the problem from master..sysprocesses.
Find the process running the SP: sysprocesses has id and dbid columns, so you can use object_name(id, dbid) to identify yours; the same row gives you stmtnum and linenum.
Get a DBA or someone with sa_role to run sp_showplan on the running spid; that will show the query plan.
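Put together, the diagnosis could look like this sketch ('my_proc' and the spid value 42 are placeholders):

```sql
-- Find the session currently executing the procedure, and which
-- statement/line it is on.
select spid, stmtnum, linenum
from master..sysprocesses
where object_name(id, dbid) = 'my_proc'

-- Then (with sa_role) show the query plan for the spid found above:
exec sp_showplan 42, null, null, null
```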
If you've changed no indexes or data volumes at all, then the above answer must be right: the statistics need updating. If this is Sybase 15 you should normally run UPDATE INDEX STATISTICS, or it's quite likely you'll get some bad query plans (if you're allowing MERGE JOINs and HASH JOINs).
I'm working with ADO, connecting to SQL Server 2005.
My TADODataSet selects 1 million records. Using a TDBGrid and setting TADODataSet.CursorLocation to clUseServer works, but the TDBGrid chokes!
How can I select 1 million records, avoid paging, and still display records in the grid without fetching ALL the records to the client side, letting the grid read ahead as I scroll up and down?
SQL Enterprise Manager can execute a query and select 1 million records asynchronously without any problems (so can MS Access).
The grid is not your problem. Your problem is that the TADODataSet is trying to load all the records. If you must run a query that returns so many records, you should set ExecuteOptions; try eoAsyncExecute and eoAsyncFetch. It may also help to set CacheSize.
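A minimal sketch of those settings (the component name and the CacheSize value are illustrative):

```pascal
// Fetch asynchronously so the UI stays responsive while rows arrive.
ADODataSet1.ExecuteOptions := [eoAsyncExecute, eoAsyncFetch];
ADODataSet1.CacheSize := 500; // rows fetched per round trip; tune to taste
ADODataSet1.Open;
```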
Why do you need to fetch 1M records into a grid? No human being can look at so many records. Usually it is far better to reduce the number of records before loading them into the UI.
If you have a good reason to show so many records in a grid, you need a dataset that 1) doesn't load the whole recordset when opened and 2) doesn't cache previous records, or it could run out of memory (especially under 32-bit Windows) well before reaching the end of the recordset unless the record size is small. To obtain that, beyond CursorLocation you have to set CursorType and CacheSize properly.
You can use a TClientDataSet to implement incremental fetching, setting the ADO dataset's CursorType to ForwardOnly and CacheSize to a suitable value. Because TClientDataSet caches the records it reads, you want to stop the source dataset from holding on to all of them as well. A standard DB grid needs a bidirectional cursor, so it won't work with a unidirectional one. With so many records the client dataset's cache can exhaust memory anyway; I'd suggest using the Midas Speed Fix unit if you're on a Delphi version before 2010.
To avoid "out of memory" errors you may need to implement some kind of pagination. In any case, check whether the behaviour of other CursorType values helps you.
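The incremental-fetch chain described above could be wired up like this sketch (component names and packet sizes are illustrative):

```pascal
// ADO side: server-side, forward-only cursor with a small cache.
ADODataSet1.CursorLocation := clUseServer;
ADODataSet1.CursorType := ctOpenForwardOnly;
ADODataSet1.CacheSize := 100;

// Provider and client dataset: fetch records on demand in packets.
DataSetProvider1.DataSet := ADODataSet1;
ClientDataSet1.ProviderName := 'DataSetProvider1';
ClientDataSet1.PacketRecords := 100; // records per packet; -1 fetches all
ClientDataSet1.Open;
```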
You can try AnyDAC and TADTable. Its Live Data Window mode solves your issue and similar ones. The benefits are that it:
minimizes memory usage and allows working with large data volumes, similar to a unidirectional dataset;
enables bidirectional navigation, in contrast to a unidirectional dataset;
always gives fresh data, reducing the need to refresh the dataset;
does not wait until all result set records are fetched, as would otherwise be required to perform sorting, record location, jumping to the last record, etc.