For some reason I need to use TFDTable in a Delphi project to fetch a large number of records (InterBase database). Unfortunately, opening the TFDTable takes too much time (up to 2 minutes, sometimes more), and it is even worse on ApplyUpdates. I tried everything possible by changing the fetch options (RecsMax, RowsetSize, Mode, etc.) as mentioned on pages like: https://docwiki.embarcadero.com/RADStudio/Sydney/en/Fetching_Rows_(FireDAC)
Setting the RecsMax option to a small value (50 or 100) helps a lot with performance, but then it will not fetch a single record with a Filter applied, even with FetchAll.
As I mentioned before, I need to do this with TFDTable. TFDQuery is not an option, even though, as we all know, dealing with queries is better.
Is there a recommendation to smoothly open and fetch the data (100k+ records)?
Is it possible to fetch records with Filter + RecsMax?
The queried table must have a primary key.
You can configure the TFDTable as follows:
FDTable1.FetchOptions.Items := [fiMeta]; // at least
FDTable1.FetchOptions.Unidirectional := False;
FDTable1.FetchOptions.CursorKind := ckAutomatic; // or ckDynamic
FDTable1.CachedUpdates := False;
// LiveWindowParanoic:
// when it is False there are problems with Locate and RecNo
FDTable1.FetchOptions.LiveWindowParanoic := False; // play with True/False
This is the configuration for the best performance
The TFDTable component is there mainly for BDE compatibility.
A TFDQuery with "select * from Table1" is effectively the same as using TFDTable, but it also lets you filter the result set on the server side.
100k+ records is a lot of data to transfer, blobs add additional overhead, as they're usually fetched separately.
I suggest you rethink the functionality design.
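If server-side filtering with TFDQuery ever becomes acceptable, the approach above can be sketched as follows. This is a minimal sketch; the component names (FDConnection1, FDQuery1) and the table/column names are placeholders, not from the original question.

```delphi
// Filter on the server instead of client-side, so RecsMax can safely
// cap the rows that cross the wire.
FDQuery1.Connection := FDConnection1;
FDQuery1.SQL.Text := 'select * from CUSTOMERS where COUNTRY = :COUNTRY';
FDQuery1.ParamByName('COUNTRY').AsString := 'Brazil';
FDQuery1.FetchOptions.RecsMax := 100; // safe here: the WHERE clause already limits rows
FDQuery1.Open;
```

Because the WHERE clause runs on the server, RecsMax no longer conflicts with filtering the way a client-side Filter on TFDTable does.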
Well, I'm studying the PacketRecords property of TClientDataSet and I have a doubt about it. I will explain how I think it works; if I'm wrong, please correct me.
1 - If I set PacketRecords = 50 and run "SELECT * FROM history" against a history table with 200k rows, the TClientDataSet will do something like "SELECT * FROM history LIMIT 50", and when I need 50 more rows it will fetch another 50 from the database.
The PacketRecords property only makes sense if the TClientDataSet does not fetch all rows from the database, at least as I see it.
Am I correct?
It will probably execute the entire query and ask it to return just 50 records, but that is an implementation detail that I think is chosen not by the ClientDataSet but by the provider, the dataset or the driver.
But in general, that is more or less how it works, yes.
I did some browsing through the code. If the ClientDataSet is linked to a (local) TDataSetProvider, that provider simply opens the dataset it is connected to. After opening it, it sets the DataSet.BlockReadSize property to the number of records to retrieve (= PacketRecords).
So in the end it comes down to the implementation of BlockReadSize and the dsBlockRead state of the TDataSet that is used.
With a client-server setup this must work the same way. In fact, there doesn't even have to be a dataset, or even a database: there is also a TXMLTransformProvider, and people can implement custom providers too. TXMLTransformProvider ignores this value completely.
So, as I said above, there is no general rule for how this works, or even whether this property has any effect.
See TDataPacketWriter.WriteDataSet: whether or not the underlying dataset supports block read mode, it (the data packet writer) will stop writing the data packet as soon as the requested number of records has been processed (or the Eof state is reached).
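The behaviour discussed above can be sketched as follows. This is a minimal sketch; ClientDataSet1 and DataSetProvider1 are placeholder component names assumed to be wired to a source dataset.

```delphi
// Incremental fetching with TClientDataSet.PacketRecords.
ClientDataSet1.ProviderName := 'DataSetProvider1';
ClientDataSet1.PacketRecords := 50; // 50 records per data packet
ClientDataSet1.Open;                // the first packet of 50 arrives

// Scrolling past the last fetched record triggers the next packet
// automatically; you can also request one explicitly:
ClientDataSet1.GetNextPacket;

// PacketRecords = -1 means "fetch everything on Open".
```

Note that whether the source dataset itself stops reading after 50 records depends on the provider and driver, as the answers above explain.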
I usually use TADOQuery with persistent fields (1 for each table), but now I find myself in a conundrum:
I have to run multiple queries at the same time (read only).
I found lots of documentation on threading. This, however, implies a newly created TADOQuery for each operation, so now I'm looking for the best way of working with them.
Like I said, I usually use persistent fields, but in this case I'm not so sure they're best, since they have to be created for each TADOQuery instance, which has a very short life.
The way I see it, I have 4 options:
1 - Create a MyTADOQuery class with its own persistent fields for each table
2 - Manually add the persistent fields to each new TADOQuery
3 - The ADOQuery.FieldByName('field').Value approach
4 - The ADOQuery.Fields[i].Value approach
Option 1 seems overkill (I haven't actually tried it), and Option 3 is slow.
My common sense tells me Option 4 is the way to go, but I have to ask:
Which of the above (or other - please do tell) is the fastest and cheapest way of working with newly created TADOQuery instances?
Thank you
@MartynA gave a very good idea: use multiple recordsets with a single TADODataSet by calling a stored procedure that returns multiple recordsets. (Not all data providers support multiple recordsets; this cannot be done with MS Access, for example, since it does not support returning multiple recordsets.) You did not specify which DB you use.
With SQL Server you don't even have to use a stored procedure; you can return multiple recordsets as follows:
qry.SQL.Text := 'SELECT * FROM Table1; SELECT * FROM Table2';
You need to use the ADO qry.Recordset (_Recordset) directly. To move to the next recordset, use qry.NextRecordset, e.g.:
var
  RS: _Recordset;
  RecordsAffected: Integer;
  FieldValue: OleVariant;
  I: Integer;
begin
  qry.Open;
  RS := qry.Recordset;
  repeat
    while not RS.EOF do
    begin
      for I := 0 to RS.Fields.Count - 1 do
        FieldValue := RS.Fields[I].Value;
      // or access fields by name: RS.Fields['Field'].Value
      RS.MoveNext;
    end;
    RS := qry.NextRecordset(RecordsAffected);
  until RS = nil; // NextRecordset returns nil when there are no more recordsets
end;
This is IMHO the fastest approach.
In any case, I personally always try to avoid persistent fields.
The only case where I use persistent fields is when I need to add calculated/lookup fields to the TDataSet.
In that case I will dynamically populate the persistent fields (run-time) and then add the extra calculated/lookup fields dynamically.
If you use ADOQuery.FieldByName('Field') wisely, it will not be (relatively) slow: don't call it repeatedly inside the iteration loop; assign the result to a TField variable once before you iterate the TDataSet.
ADOQuery.Fields[i].Value is faster, but sometimes you must access the field by its name. It all depends on the scenario. ADOQuery.FieldByName is IMHO more readable and easier to maintain, because you know exactly which field you refer to.
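The "resolve the TField once" advice can be sketched as follows. The field name, component name and the ProcessName routine are placeholders for illustration.

```delphi
// Look up the TField once, outside the loop, instead of calling
// FieldByName on every iteration.
var
  NameField: TField;
begin
  NameField := ADOQuery1.FieldByName('Name'); // one lookup
  ADOQuery1.First;
  while not ADOQuery1.Eof do
  begin
    ProcessName(NameField.AsString); // ProcessName is a placeholder routine
    ADOQuery1.Next;
  end;
end;
```

The TField reference stays valid while the dataset is open, so the per-row cost is just a property read rather than a name lookup.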
I have a Delphi application with 3 forms. I'm using Access 2003 and Microsoft.Jet.OLEDB.4.0; I have an ADOConnection on the main form and use it in all forms.
I use 2 .mdb files, where my.mdb has links to org.mdb tables.
Everything works, but very slowly. So after long hours of searching I came to this.
I don't know why, but after I run this query, all other queries speed up dramatically (from 10 seconds to under 1 second), even queries that don't include linked tables.
Table tb_odsotnost is in my.mdb
Table Userinfo is linked.
with rQueries.ADOQuery1 do
begin
Close;
SQL.Clear;
SQL.Add('SELECT DISTINCT tb_odsotnost.UserID, Userinfo.Name FROM tb_odsotnost');
SQL.Add('LEFT JOIN Userinfo ON Userinfo.UserID = tb_odsotnost.UserID');
SQL.Add('WHERE datum BETWEEN ' + startDate + ' AND ' + endDate);
SQL.Add('ORDER BY Userinfo.Name ASC');
Open;
end;
I tried to run my app on another computer with win7 and MS Access 2007 and the result was the same.
OK, for now I just run this query in the form's OnActivate event, but this is not a permanent solution.
When you run a query against a linked table, Access (or Jet or ADO) acquires a lock on the database for the ldb file. If you close the query, that lock has to be reacquired the next time you query the linked table. The recommended method to get around this is to always keep a background dataset open so that the lock doesn't have to be obtained each time (forcing the lock to remain in effect).
See http://office.microsoft.com/en-us/access-help/improve-performance-of-an-access-database-HP005187453.aspx and look at the "Improve performance of linked tables" section.
If that doesn't help, look at your table definitions in Access to see if you have subdatasheets defined for your table fields in one-to-many relationships.
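The "keep a background dataset open" technique can be sketched as follows. This is a minimal sketch; TMainForm, LockQuery and the table name are placeholders, and any small query against a linked table would do.

```delphi
// Keep one dataset open for the lifetime of the application so the
// .ldb lock on the linked database is acquired only once.
procedure TMainForm.FormCreate(Sender: TObject);
begin
  LockQuery.Connection := ADOConnection1;
  LockQuery.SQL.Text := 'SELECT TOP 1 UserID FROM Userinfo';
  LockQuery.Open; // stays open until the application shuts down
end;
```

Because this dataset is never closed, subsequent queries against linked tables reuse the existing lock instead of reacquiring it, which is what makes them fast.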
I'm working with ADO connecting to SQL Server 2005.
My TADODataSet selects 1 million records. Using a TDBGrid and setting TADODataSet.CursorLocation to clUseServer works, but the TDBGrid chokes!
How can I select 1 million records, avoid paging, and still be able to display records in the grid without fetching ALL records to the client side, letting the Grid read ahead as I scroll up and down?
SQL enterprise manager can execute a query and select 1 million records asynchronously without any problems (also MS-ACCESS).
The TDBGrid is not your problem; your problem is that the TADODataSet is trying to load all the records. If you must run a query that returns so many records, you should set ExecuteOptions: try eoAsyncExecute and eoAsyncFetch. It may also help to set CacheSize.
Why do you need to fetch 1M records into a grid? No human being can look at that many records. It is usually far better to reduce the number of records before loading them into a UI.
If you have a good reason to show so many records in a grid, you need a dataset that 1) doesn't load the whole recordset when opened, and 2) doesn't cache previously read records, or it could run out of memory (especially under 32-bit Windows) long before reaching the end of the recordset if the record size is not small enough. To obtain such a result, beyond CursorLocation you have to set CursorType and CacheSize properly.
You can use a TClientDataSet to implement incremental fetching, setting the ADO dataset's CursorType to ForwardOnly and its CacheSize to a suitable value. Because TClientDataSet caches the records it reads, you also want to keep the source dataset from retaining all of them. A standard DB grid needs a bidirectional cursor, so it won't work with a unidirectional one. With so many records the client dataset cache can exhaust memory anyway; I'd suggest using the Midas Speed Fix unit if you're on a Delphi version before 2010.
To avoid "out of memory" errors you may need to implement some kind of pagination. In any case, check whether the behaviour of the other CursorType values can help you.
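The cursor settings described above can be sketched as follows. This is a minimal sketch; the component name and SQL are placeholders, and the exact ExecuteOptions mix is something to experiment with.

```delphi
// Server-side, forward-only cursor with a small cache, so ADO does not
// pull the whole million-row result set to the client at once.
ADODataSet1.CursorLocation := clUseServer;
ADODataSet1.CursorType := ctOpenForwardOnly; // unidirectional, cheapest cursor
ADODataSet1.CacheSize := 100;                // rows buffered per round trip
ADODataSet1.ExecuteOptions := [eoAsyncExecute, eoAsyncFetchNonBlocking];
ADODataSet1.CommandText := 'SELECT * FROM BigTable'; // placeholder query
ADODataSet1.Open;
```

Remember the caveat above: a forward-only cursor is unidirectional, so a standard TDBGrid cannot sit directly on this dataset; it needs an intermediary such as a TClientDataSet doing incremental fetches.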
You can try AnyDAC and TADTable. Its Live Data Window mode solves your issue and similar ones. The benefits are:
it minimizes memory usage and allows working with large data volumes, similar to a unidirectional dataset;
it enables bidirectional navigation, in contrast to a unidirectional dataset;
it always returns fresh data, reducing the need to refresh the dataset;
it does not have to fetch all result set records up front, as would otherwise be required for sorting, locating records, jumping to the last record, etc.
My question is very simple. I have a TClientDataSet that is linked to a TADOQuery via a TDataSetProvider. I can put data into the TClientDataSet from the TADOQuery, but how do I get data from the TClientDataSet back into the TADOQuery?
Data is automatically transferred from the TADOQuery to the TClientDataSet when I run a query and then set the TClientDataSet's Active property to True, but if I deactivate the TADOQuery and then activate it again, how can I get the data back from the TClientDataSet?
I am running the same query on several databases and using the TClientDataSet to concatenate the results. This is working fine. My problem now is that I need to get the concatenated result set back from the TClientDataSet into the TADOQuery so that I can use the TADOQuery's SaveToFile procedure (for compatibility reasons). How can I do this?
I don't use TADOQuery, as I use dbExpress, but I imagine one needs to use the same technique. After you have posted your changes to the TClientDataSet, call ApplyUpdates(0), which transfers the data from the clientdataset to its provider.
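The ApplyUpdates step can be sketched as follows. This is a minimal sketch; the component and field names are placeholders, and the edit shown is just an example change.

```delphi
// Push edits made in the TClientDataSet back through its provider to the
// underlying dataset. The argument 0 means "tolerate zero update errors".
ClientDataSet1.Edit;
ClientDataSet1.FieldByName('Name').AsString := 'New value'; // placeholder edit
ClientDataSet1.Post;
if ClientDataSet1.ChangeCount > 0 then
  ClientDataSet1.ApplyUpdates(0);
```

ApplyUpdates resolves the change log against the source dataset; it does not repopulate a closed and reopened TADOQuery with the concatenated contents of the TClientDataSet.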
You could always write the dataset back out to a temp table and then query it. Ouch!!
I've just about finished looking into this. My application allows the user to generate reports by querying their databases. I can get this to work, and it is very efficient for small result sets; however, as this is a reporting application, it is entirely possible for hundreds of thousands of records to be returned, and using a ClientDataSet then gives massive performance problems. Once you get above around 50,000 records (reasonable, given the customer base), processing time starts to increase exponentially, so this approach is now basically moot.