As the title says, is it possible to check whether a DataSet has fetched all its rows?
I am using components that descend from TFDQuery.
I need this because I was sure my DataSets were fetching all rows every time, but I noticed that changing Connection.FetchOptions.Mode from fmOnDemand to fmAll increases the time they need to open by a factor of about 1.5.
If I understand your question correctly, the documentation answers it:
ProviderEOF is a shortcut for the TFDDataSet.SourceEOF property and allows you to specify whether all rows are fetched from a DB.
The property is for TClientDataSet compatibility.
http://docwiki.embarcadero.com/Libraries/Tokyo/en/FireDAC.Comp.Client.TFDCustomMemTable.ProviderEOF
I am obliged to @Victoria for pointing out that SourceEOF is the better way of checking; see
http://docwiki.embarcadero.com/Libraries/Tokyo/en/FireDAC.Comp.DataSet.TFDDataSet.SourceEOF
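A minimal sketch of such a check (assuming a TFDQuery that is already open; the function name is illustrative):

```delphi
uses
  FireDAC.Comp.Client;

// SourceEOF becomes True once FireDAC has fetched the last row
// from the server, so it answers "are all rows in yet?".
function AllRowsFetched(AQuery: TFDQuery): Boolean;
begin
  Result := AQuery.SourceEOF;
end;

// Alternatively, force the remaining rows in:
// AQuery.FetchAll;  // after this, SourceEOF is True and RecordCount is final
```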
Related
What's the point in adding fields to a TClientDataSet if you can do Cds.FieldByName('field').Value?
Is it faster to have the reference?
Is it 'clearer'?
The problem with
DataSet.FieldByName('field').Value
is a performance one. Each time this is executed, it causes a serial search through the fields collection of the dataset to locate the one with the required name. This search is not optimised in any way, for instance by using a binary search or hashing algorithm. So, if there are many fields and/or you are doing this access while iterating the records in the dataset, it can have a significant impact on performance.
That's one reason, but not the only one, to define "persistent" TFields using the Object Inspector. You can obtain a reference to a particular TField by using the symbolic name known to the compiler, and this resolution happens once only, at compile time. So yes, it is faster than FieldByName; it's up to you whether you find it clearer.
Other reasons to use persistent TFields include the ease with which calculated fields can be set up and, more importantly, the fact that the calculated field(s) do not need to be accessed via FieldByName in the OnCalcFields event. The performance hit of using FieldByName versus persistent fields is, of course, multiplied by the number of fields referenced in the OnCalcFields event, and OnCalcFields is called at least once for each record in the dataset, even if you do not iterate the dataset records in your own code.
The above is true of all TDataSet descendants, not just TClientDataSets.
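If you cannot use persistent fields, you can still avoid the per-record search by looking the field up once before the loop. A sketch (the field name 'CUSTOMER_NAME' and the ProcessName routine are illustrative, not from the original question):

```delphi
var
  NameField: TField;
begin
  NameField := DataSet.FieldByName('CUSTOMER_NAME'); // single lookup
  DataSet.DisableControls;
  try
    DataSet.First;
    while not DataSet.Eof do
    begin
      ProcessName(NameField.AsString); // no per-record name search
      DataSet.Next;
    end;
  finally
    DataSet.EnableControls;
  end;
end;
```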
Well, I'm studying the PacketRecords property (TClientDataSet) and I have a doubt about it. I will explain how I think this works; if I'm wrong, please correct me.
1 - If I set PacketRecords = 50 and I do "SELECT * FROM history", and the history table has 200k rows, the TClientDataSet will do something like "SELECT * FROM history LIMIT 50", and when I need 50 more rows the ClientDataSet will fetch another 50 from the database.
The PacketRecords property only makes sense if the TClientDataSet doesn't fetch all rows from the database, at least to me.
Am I correct?
It will probably execute the entire query and ask to return just 50 records, but that is an implementation detail that I think is chosen not by the ClientDataSet but by the provider, the dataset or the driver.
But in general, that is more or less how it works, yes.
Did some browsing through the code. If the ClientDataSet is linked to a (local) TDataSetProvider, that provider just opens the dataset it is connected to. After opening it, it sets the DataSet.BlockReadSize property to the number of records to retrieve (=packetRecords).
So in the end it comes down on the implementation of BlockReadSize and the dsBlockRead state of the TDataSet that is used.
With a client-server setup it must work much the same way. In fact, there doesn't even have to be a dataset or a database at all. There is also a TXMLTransformProvider, and people can implement custom providers too. TXMLTransformProvider ignores this value completely.
So, as I said above, there is no general rule on how this works, or even whether this property has any effect at all.
See TDataPacketWriter.WriteDataSet. No matter whether the underlying dataset supports block read mode or not, it (the data packet writer) will stop writing the data packet as soon as the requested number of records has been processed (or the Eof state is reached).
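For reference, a sketch of the property in use (component names are illustrative):

```delphi
ClientDataSet1.PacketRecords := 50; // fetch 50 records per packet; -1 = all
ClientDataSet1.Open;                // first packet arrives
// Scrolling past the last buffered record requests the next packet
// automatically; you can also request one explicitly:
ClientDataSet1.GetNextPacket;
```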
I have a Silverlight 3.0 project with a ListBox that is data-bound to a list of items. What I want to do is limit the number of items displayed in the ListBox to at most 10. I originally accomplished this by limiting the data bound to the list to 10 items, doing a .Take(10) on my original data and data-binding the result.
The problem with the .Take(10) approach is that the original data source may change, and since .Take() returns a reference to (or copy of, I'm not sure) the original data, I sometimes do not see changes in the data reflected in my UI.
I'm trying to figure out a better way of handling this than the .Take() approach. It seems you shouldn't 'filter' your data using LINQ functions if you have more than one UI element bound to the same data. My only thought on how to do this better is to make a custom container that limits the count, but that seems like a mountain of work to make a custom StackPanel or equivalent.
Take(10) does not make a copy, it just appends another step to the LINQ query. All execution is still deferred until someone enumerates the query's items.
If you were setting the items statically, a copy would indeed be created, by running the query once. But since you set the constructed query as the ItemsSource property of the list box, it can run and update it any time, so it is the right approach.
The real reason why you sometimes do not see changes in the data reflected in the UI is that the list box has no way to determine why the data returned by the query have changed and it surely doesn't want to keep constantly trying to refetch the data and maybe update itself. You need to let it know.
How can you let it know? The documentation for ItemsSource says that "you should set the ItemsSource to an object that implements the INotifyCollectionChanged interface so that changes in the collection will be reflected (...).". Apparently the default way of doing things by .Net itself does not work in your case.
So there are some examples of how to implement that yourself, e.g. in this SO answer. If even the top-level source collection (over which you run the LINQ query) does not support these notifications (which you would just forward), you might need to update the list box manually from the code that changes the underlying data.
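One common way to satisfy the INotifyCollectionChanged requirement is a sketch like the following, assuming your full data already lives in an ObservableCollection<T> (the names source, topTen, listBox and Item are illustrative): keep a small synchronized collection and bind to that instead of the raw query.

```csharp
using System.Linq;
using System.Collections.ObjectModel;

// source is an ObservableCollection<Item> holding the full data.
var topTen = new ObservableCollection<Item>(source.Take(10));
source.CollectionChanged += (s, e) =>
{
    // Rebuild the capped view whenever the source changes.
    topTen.Clear();
    foreach (var item in source.Take(10))
        topTen.Add(item);
};
listBox.ItemsSource = topTen; // the ListBox now receives change notifications
```

Rebuilding all ten items on every change is crude but simple; for large caps you could diff instead.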
I have two TDBLookupComboBox controls that I'd like to connect to the same dataset, but have each one display a different subset of the data. If I only needed one box, I'd use filtering on the dataset, but I need to be able to display both of them at the same time, and I'm not aware of any way to do that. Does anyone know if it can be done, and if so, how?
If you're using a TClientDataSet, you can clone the cursor (TClientDataSet.CloneCursor) into another TClientDataSet that doesn't have its ProviderName property set. Both ClientDataSets then point to the same data in memory but can have their own filters.
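A sketch of two filtered views over the same in-memory data (component names and filter expressions are illustrative; cdsClone must have no ProviderName set):

```delphi
cdsClone.CloneCursor(cdsMain, False); // share cdsMain's data buffer
cdsMain.Filter    := 'CATEGORY = ''A''';
cdsMain.Filtered  := True;
cdsClone.Filter   := 'CATEGORY = ''B''';
cdsClone.Filtered := True;
// Point each TDBLookupComboBox's ListSource at a TDataSource linked to
// cdsMain and cdsClone respectively.
```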
I have defined a Delphi TTable object with calculated fields, and it is used in a grid on a form. I would like to make a copy of the TTable object, including the calculated fields, open that copy, make some changes to the data with the copy, close the copy, and then refresh the original and thus the grid view. Is there an easy way to get a copy of a TTable object to be used in such a way?
The ideal answer would be one that solves the problem as generically as possible, i.e., a way of getting something like this:
newTable:=getACopyOf(existingTable);
You can use the TBatchMove component to copy a table and its structure.
Set the Mode property to specify the desired operation. The Source and Destination properties indicate the datasets whose records are added, deleted, or copied. The online help has additional details.
(Although I reckon you should investigate a TClientDataSet approach - it's certainly more scalable and faster).
Let me propose several things:
Let us suppose that you want to make changes programmatically. You could then use the DisableControls and EnableControls methods of the TTable to suppress screen updates during that time.
If you want to have two screens with the same data (e.g. to compare data during online changes), you could actually create the same screen twice, with the TTable object being on the screen itself. It will have the exact same configuration (but will not carry over changes already made on the first screen; it reads the data from the database). Changes made on one screen will not be automatically refreshed on the other.
Another way: Try using TDataSetProvider with TTable as Dataset (source) feeding a TClientDataSet. ApplyUpdates would feed back the changes to the TTable. Since the calculated fields are read only, they are not affected. (untested, but should work)
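A sketch of that provider approach (untested, as noted above; component names are illustrative):

```delphi
DataSetProvider1.DataSet    := Table1;            // the original TTable
ClientDataSet1.ProviderName := 'DataSetProvider1';
ClientDataSet1.Open;          // in-memory copy of the data and structure
// ... edit ClientDataSet1 here ...
ClientDataSet1.ApplyUpdates(0); // push changes back through the provider
Table1.Refresh;                 // make the grid show the new data
```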
I believe that the second approach (TClientDataSet) is probably the best method to use in this scenario. An alternative would be to use a memory table (kbmMemTable, for instance). Either way, you would clone your original table and then, after making your changes, loop through the memory version of your dataset and update your original table.
You should be able to select the table on the form, copy it using Ctrl-C, then paste it into any text editor. You will get the text version of the object's properties which you can then edit as needed. When you are done, select all the text again and you can copy it to the clipboard and paste it back onto a form.