My question is very simple. I have a TClientDataSet that is linked to a TADOQuery via a TDataSetProvider. I can put data into the TClientDataSet from the TADOQuery, but how do I get data from the TClientDataSet back into the TADOQuery?
Data is automatically transferred from the TADOQuery to the TClientDataSet when I run a query and then set the TClientDataSet's Active property to True, but if I deactivate the TADOQuery and then activate it again, how can I get the data back from the TClientDataSet?
I am running the same query on several databases and using the TClientDataSet to concatenate the results. This is working fine. My problem now is that I need to get the concatenated result set back from the TClientDataSet into the TADOQuery so that I can use the TADOQuery's SaveToFile procedure (for compatibility reasons). How can I do this?
I don't use TADOQuery, as I use dbExpress, but I imagine one needs to use the same technique. After you have posted your changes to the TClientDataSet, call ApplyUpdates(0), which transfers the data from the client dataset to its provider.
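In Delphi terms, that would be something like this (a minimal sketch; the component name is illustrative and error handling is omitted):

if ClientDataSet1.State in dsEditModes then
  ClientDataSet1.Post; // post any pending edit first
if ClientDataSet1.ApplyUpdates(0) > 0 then // 0 = don't tolerate any errors
  ShowMessage('Some updates could not be applied');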
You could always write the dataset back out to a temp table and then query it. Ouch!!
I've just about finished looking into this. My application allows the user to generate reports by querying their databases. I can get this to work, and it is very efficient for small result sets. However, as this is a reporting application, it's entirely possible that hundreds of thousands of records will be returned, and using a TClientDataSet then causes massive performance problems. Once you get above around 50,000 records (reasonable, given the customer base), processing time starts to increase exponentially, so this approach is now basically moot.
I'm using the dbExpress components within the Embarcadero C++Builder XE environment.
I have a relatively large table with something between 20k and 100k records, which I display in a DBGrid.
I am using a DataSetProvider connected to a SQLQuery, and a ClientDataSet connected to the DataSetProvider.
I also need to analyze the data, and therefore I need to run through the whole table. For smaller tables I have always used code that is basically something like this:
__int64 temp; // declaration assumed; not shown in the original snippet
Form1->ClientDataSet1->First();
while (!Form1->ClientDataSet1->Eof) {
    temp = Form1->ClientDataSet1->FieldByName("FailReason")->AsLargeInt;
    // do something with temp
    Form1->ClientDataSet1->Next();
}
Of course this works, but it is very slow when I need to run through the whole table; for some 50,000 records it can take several minutes. My suspicion is that most of the performance is lost because the DBGrid needs to be repainted as the dataset moves from record to record.
Therefore I am looking for a way to read the data without manipulating the actual ClientDataSet: perhaps a method that copies a column's data into a variable, or another, more efficient way to run through the records. I am sure that if I had a copy in a variable, the operation would take less than a few seconds.
I have googled for hours now, but haven't found anything useful so far.
Best regards,
Bodo
If your CDS is connected to some db-aware control(s) (via a TDataSource), then first of all consider calling DisableControls before the loop and EnableControls after it.
Another option is to avoid calling FieldByName inside the loop; look the TField up once before the loop instead.
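A minimal sketch combining both suggestions (Delphi syntax; the same methods exist in C++Builder, and the names are taken from the question):

var
  FailReason: TField;
  Temp: Int64;
begin
  ClientDataSet1.DisableControls; // suspend DBGrid repaints while scanning
  try
    FailReason := ClientDataSet1.FieldByName('FailReason'); // look the field up once
    ClientDataSet1.First;
    while not ClientDataSet1.Eof do
    begin
      Temp := FailReason.AsLargeInt;
      // do something with Temp
      ClientDataSet1.Next;
    end;
  finally
    ClientDataSet1.EnableControls; // re-enable even if an exception occurs
  end;
end;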
Quick question (hopefully)
I have a large dataset (>100,000 records) that I would like to use as a lookup to determine the existence or non-existence of multiple keys. The purpose is to find FK violations before trying to commit the records to the database, to avoid the resulting EDatabaseError messing up my transaction.
I had been using TClientDataSet/TDataSetProvider with the FindKey method, as this allowed a client-side index to be set up and was faster (2s to scan each key rather than 10s for ADO). However, with larger datasets, populating the CDS is starting to take far more time than the local index saves.
I see that I have a few options for alternatives:
Client cursor with the TADOQuery.Locate method
ADO SELECT statements for each check (no client cache)
ADO SEEK method
Extend TADOQuery to mimic FindKey
The Locate method seems easiest and doesn't spam the server with the SELECT/SEEK methods. I like the idea of extending the TADOQuery, but was wondering whether anyone knew of any ready-made solutions for this rather than having to create my own?
I would create a temporary table in the database server and insert all 100,000 records into this temp table, doing bulk inserts of, say, 3,000 records at a time to minimise round trips to the server. Then run SELECT statements on this temp table to check for foreign key violations etc. If all is okay, do an INSERT from the temp table into the main table.
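A rough sketch of that approach (the temp table and field names are assumptions; the batch size is capped at 1000 because SQL Server allows at most 1000 rows in one VALUES list, so other servers or bigger batches may need a different batching scheme):

procedure BulkLoadKeys(Src: TDataSet; Qry: TADOQuery);
const
  BatchSize = 1000; // SQL Server's limit for a single VALUES list
var
  KeyField: TField;
  Values: string;
  Rows: Integer;

  procedure Flush;
  begin
    if Rows = 0 then
      Exit;
    Qry.SQL.Text := 'INSERT INTO #KeyCheck (KeyValue) VALUES ' + Values;
    Qry.ExecSQL;
    Values := '';
    Rows := 0;
  end;

begin
  Qry.SQL.Text := 'CREATE TABLE #KeyCheck (KeyValue INT NOT NULL)';
  Qry.ExecSQL;
  KeyField := Src.FieldByName('KeyValue');
  Values := '';
  Rows := 0;
  Src.First;
  while not Src.Eof do
  begin
    if Rows > 0 then
      Values := Values + ',';
    Values := Values + '(' + KeyField.AsString + ')';
    Inc(Rows);
    if Rows = BatchSize then
      Flush;
    Src.Next;
  end;
  Flush; // send the final partial batch
  // then SELECT against #KeyCheck (e.g. LEFT JOIN to the parent table)
  // to find the keys that would violate the foreign key
end;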
Suppose I have an application which fetches a custom XML packet from the server, representing a dataset. Then suppose I wish to execute a SQL statement on that data via a dataset. What can I use to do this? I don't necessarily need the code, just what to use to make this possible and a general explanation of how it works.
For example, I may fetch a list of customers in XML format from the server. Then, I can use any third-party parser to dump that XML data into some client dataset. Then, execute a query on that dataset, for example select * from customers where ZipCode = '12345' without fetching this data from the server again.
XML is not the only possibility; that's just an example. I might want to do the same with some application settings loaded from an INI file. Either way, the concept is that the original source of the data is unknown.
Whether the dataset stores its temporary data in memory or on disk doesn't matter, but it would be excellent if it could keep it on disk.
TXQuery (http://code.google.com/p/txquery/) is a component that provides a local SQL engine for executing SQL queries against one or more TDataSets. The only issue I have had with it is updating data via a TDBGrid on a query that joins multiple tables (TDataSets), specifically determining which table is being updated.
AnyDAC v6 (now FireDAC) also has a local SQL engine. http://www.da-soft.com/anydac/docu/frames.html?frmname=topic&frmfile=Local_SQL.html
Edit: For the example SQL in your question, because it only involves a single table, you can do this with just a Filter on the dataset. For example:
ADataSet.Filtered := False;
ADataSet.Filter := 'ZipCode=' + QuotedStr('12345');
ADataSet.Filtered := True;
Such a feature can be done using a local database. You just insert the TDataSet result into a local in-memory (or file-based) stand-alone database, then you can use regular SQL queries on it, including JOIN.
You can for instance use SQLite3, or the free edition of NexusDB.
NexusDB embedded has the benefit of being a native Delphi database, so it sticks to the DB.pas TDataSet paradigm.
Another option is to use the so-called Virtual Table mechanism of SQLite3, which allows you to expose any data (even from TDataSet, XML, JSON or in-memory objects) to the SQLite3 engine, just as if it were a regular table. Then you can run SQL statements on those "virtual" tables, including JOINs. With this approach, you do not need to INSERT the data into regular tables; the data remain in their original form. Of course, you will miss some performance features like indexes, which have to be handled on the virtual table provider side. We use this feature as the database core of our mORMot ORM/SOA framework, and it is pretty powerful.
The general process that you want to perform is complicated by the difference in data representation. SQL data is stored in tables made up of distinguishable records. XML is a structured representation of data, but in tree form rather than table/row form.
Each of these data forms may be qualified by a schema that provides a context for the data.
You have two general paths that you can follow:
Take the XML and, based on the schema, insert it into a set of interlinked tables, then perform the SQL query. If you have the schema, you can use code generators to make a parser, and then, based on the parse tree, insert into a local database with tables constructed on the fly. You can set up MySQL fairly easily (https://dev.mysql.com/doc/refman/5.7/en/installing.html), and then in your version of Delphi make a connection to the database, fill it in first, then query it. This would satisfy your desire to have the data stored on disk; unless you purge the tables when done, the data remain available in the local machine's database.
This seems like more work than:
Use XPath or XQuery and work directly on the XML. For this, a package like Saxon in your favorite environment, or Expat in Python, would work nicely.
Let me know if either of these paths seems as if it may be fruitful.
I'm working with ADO connecting to SQL Server 2005.
My TADODataSet selects 1 million records. Using a TDBGrid and setting the TADODataSet.CursorLocation to clUseServer works, but the TDBGrid chokes!
How can I select 1 million records, avoid paging, and still be able to display records in the grid without fetching ALL records to the client side, letting the Grid read ahead as I scroll up and down?
SQL Enterprise Manager can execute a query and select 1 million records asynchronously without any problems (and so can MS Access).
The grid is not your problem. Your problem is that TADODataSet is trying to load all the records. If you must run a query that returns so many records, you should set ExecuteOptions; try eoAsyncExecute and eoAsyncFetch. It may also help to set CacheSize.
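For example (component name illustrative):

ADODataSet1.ExecuteOptions := [eoAsyncExecute, eoAsyncFetch]; // fetch in the background
ADODataSet1.CacheSize := 500; // rows per fetch; the default is 1
ADODataSet1.Open;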
Why do you need to fetch 1M records into a grid? No human being can look at so many records. It is usually far better to reduce the number of records before loading them into a UI.
If you have a good reason to show so many records in a grid, you need a dataset that 1) doesn't load the whole recordset when opened, and 2) doesn't cache previously read records, or it could run out of memory (especially under 32-bit Windows) well before reaching the end of the recordset unless the record size is small. To obtain such a result, beyond CursorLocation you have to set CursorType and CacheSize properly.
You can use a TClientDataSet to implement incremental fetching by setting the ADO dataset's CursorType to ForwardOnly and CacheSize to a suitable value. Because TClientDataSet caches the records it reads, you want to keep the source dataset from holding on to all of them as well. A standard DB grid needs a bidirectional cursor, so it won't work with a unidirectional one. With so many records the client dataset cache can exhaust memory anyway; I'd suggest using the Midas Speed Fix unit if you're on a Delphi version before 2010.
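The settings described above would look something like this (component names are illustrative):

ADODataSet1.CursorLocation := clUseServer;
ADODataSet1.CursorType := ctOpenForwardOnly; // unidirectional server-side cursor
ADODataSet1.CacheSize := 100;                // rows fetched per round trip
ClientDataSet1.FetchOnDemand := True;        // pull packets only as they are needed
ClientDataSet1.PacketRecords := 100;         // records per packet from the provider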
To avoid "out of memory" errors you may need to implement some kind of pagination. Anyway, check if the behaviour of other CursorType can help you.
You can try AnyDAC and TADTable. Its Live Data Window mode solves your issue and similar ones. The benefits are:
minimizes memory usage and allows you to work with large data volumes, much like a unidirectional dataset;
enables bi-directional navigation, in contrast to a unidirectional dataset;
always gives fresh data, reducing the need to refresh the dataset;
does not wait to fetch the whole result set, as would otherwise be required for sorting, locating records, jumping to the last record, etc.
I've got a DB Express TSimpleDataset connected to a Firebird database. I've just added several thousand rows of data to the dataset, and now it's time to call ApplyUpdates.
Unfortunately, this results in several thousand database hits as it tries to INSERT each row individually. That's a bit disappointing. What I'd really like to see is the dataset generate a single transaction with a few thousand INSERT statements in it and send the whole thing at once. I could set that up myself if I had to, but first I'd like to know if there's any method for it built in to the dataset or the DBX framework.
I don't know if it is possible with a TSimpleDataSet (I've never used it), but you surely can if you use a TClientDataSet + TDataSetProvider + <put your db dataset here>. You can write a BeforeUpdateRecord handler to manage the apply process yourself. Basically, it allows you to bypass the standard apply process, access the dataset delta with the changes made to records, and then use your own code and components to apply the changes to the database. For example, you could call stored procedures to modify data, and so on.
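A sketch of such a handler (AddRowToBatch is a hypothetical helper standing in for whatever mechanism you use to apply the changes yourself):

procedure TForm1.DataSetProvider1BeforeUpdateRecord(Sender: TObject;
  SourceDS: TDataSet; DeltaDS: TCustomClientDataSet;
  UpdateKind: TUpdateKind; var Applied: Boolean);
begin
  if UpdateKind = ukInsert then
  begin
    AddRowToBatch(DeltaDS); // hypothetical: queue this row for a bulk INSERT
    Applied := True;        // tell the provider the record is already handled
  end;
end;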
However, there is a difference between a transaction and what is called "array DML", "bulk insert" or the like. Even if you use a single transaction (and an "apply" AFAIK happens in a single transaction), within that transaction you may still need to send "n" INSERTs. Some databases support a way of sending a single INSERT (or UPDATE, DELETE) with an array of parameters, reducing the number of individual statements, but that is very database-specific, and AFAIK dbExpress/DataSnap do not support it. You could still use the BeforeUpdateRecord event to take advantage of specific database capabilities.