Fast read of a Nexus database table - Delphi

I want to read the entire contents of a table into memory, as quickly as possible. I am using Nexus database, but there might be some techniques I could use that are applicable to all database types in Delphi.
The table I am looking at has 60,000 records with 20 columns. So not a huge data set.
From my profiling, I have found the following so far:
Accessing tables directly using TnxTable is neither faster nor slower than using a SQL query with 'SELECT * FROM TableName'.
The simple act of looping through the rows, without actually reading or copying any data, takes the majority of the time.
The performance I am getting is:
Looping through all records takes 3.5 seconds.
Looping through all the records, reading the values and storing them, takes 3.7 seconds (i.e. only 0.2 seconds more).
A sample of my code:
var
  query: TnxQuery;
begin
  query.SQL.Text := 'SELECT * FROM TableName';
  query.Active := True;
  while not query.Eof do
    query.Next;
end;
This takes 3.5 seconds on a 60,000 row table.
Does this performance sound reasonable? Are there other approaches I can take that would let me read the data faster?
I am currently reading data from a server on the same computer, but eventually this may be from another server on a LAN.

You should be using BlockRead mode with a TnxTable for optimal read speed:
nxTable.BlockReadOptions := [gboBlobs, gboBookmarks];
// leave out gboBlobs if you want to access blobs only as needed
// leave out gboBookmarks if no bookmark support is required
nxTable.BlockReadSize := 1024 * 1024; // 1 MB
// setting BlockReadSize performs an implicit First
// while block read mode is active, only calls to Next and First are allowed for navigation
try
  while not nxTable.Eof do
  begin
    // do something...
    nxTable.Next;
  end;
finally
  nxTable.BlockReadSize := 0;
end;
Also, if you don't need to set a range on a specific index, make sure to use the sequential access index for the fastest possible access.
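For completeness, a minimal sketch of the whole read loop under block read mode, resolving the field once and copying the values into memory (the field name and the destination TStringList are invented for the example; the TnxTable calls are the ones shown above):
var
  NameField: TField;
  Values: TStringList;
begin
  Values := TStringList.Create;
  NameField := nxTable.FieldByName('Name'); // resolve the field once, outside the loop
  nxTable.BlockReadOptions := [];           // no blobs or bookmarks needed for a plain scan
  nxTable.BlockReadSize := 1024 * 1024;     // implicit First
  try
    while not nxTable.Eof do
    begin
      Values.Add(NameField.AsString);       // copy the current row's value into memory
      nxTable.Next;
    end;
  finally
    nxTable.BlockReadSize := 0;             // leave block read mode
  end;
end;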

Related

Delphi: search string in a TStringList

I have a large text file (ipaddress.txt) with a lot of lines; each line is an IP address, e.g.:
44.XXX.XXX.XXX
45.XXX.XXX.XXX
46.XXX.XXX.XXX
47.XXX.XXX.XXX
48.XXX.XXX.XXX
I load this list into a TStringList, e.g.:
FIpAddressList := TStringList.Create();
FIpAddressList.Sorted := true;
FIpAddressList.LoadFromFile('ipaddress.txt');
and I want to check whether an IP address is in the list, e.g.:
function IsIPinList(const IPAddress : string): Boolean;
begin
Result := (FIpAddressList.IndexOf(IPAddress) <> -1);
end;
It works... but it is slow with a huge TStringList.
Is there any way to make this process faster?
UPDATE
The list is static, updated monthly, and has roughly 5,000 lines.
The function is invoked on each request to a server (e.g. 5 times per second).
The list is loaded only when the server service starts.
One way to make this quicker is to arrange for the list to be sorted so that the search can be performed using binary search, O(log n), rather than linear search, O(n).
If the list is static then the best thing to do is to sort the list outside your program so that you can sort it exactly once.
If the list is more dynamic then you will have to sort it, and keep any modifications ordered. In this scenario, sorting the list will only bring benefits if the number of searches you make is sufficient to overcome the extra cost of sorting and maintaining the order.
Another approach is to use a dictionary containing your items. Typically this will involve hashing each string. Whilst the lookup complexity is O(1), the cost of hashing can be significant.
Yet another way to speed things up is to convert the IP address strings to 32 bit integers. In fact this is sure to yield a huge performance benefit, and you should do this irrespective of any other concerns. It is always faster and clearer to work with a 32 bit integer than a string representation of an IP address.
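As a rough sketch of that last idea (assuming a reasonably recent Delphi with Generics.Collections and string helpers; the helper and variable names are invented), the file can be parsed once into a sorted array of 32 bit integers and each lookup done with a binary search:
uses
  System.Classes, System.SysUtils, System.Generics.Collections;

// Hypothetical helper: convert a dotted-quad IPv4 string to a 32 bit integer.
function IPv4ToCardinal(const S: string): Cardinal;
var
  Parts: TArray<string>;
begin
  Parts := S.Split(['.']);
  if Length(Parts) <> 4 then
    raise EConvertError.CreateFmt('Invalid IPv4 address: %s', [S]);
  Result := (Cardinal(StrToInt(Parts[0])) shl 24) or
            (Cardinal(StrToInt(Parts[1])) shl 16) or
            (Cardinal(StrToInt(Parts[2])) shl 8) or
             Cardinal(StrToInt(Parts[3]));
end;

var
  Addresses: TArray<Cardinal>; // filled once at service start, then kept sorted

procedure LoadAddresses(const FileName: string);
var
  Lines: TStringList;
  I: Integer;
begin
  Lines := TStringList.Create;
  try
    Lines.LoadFromFile(FileName);
    SetLength(Addresses, Lines.Count);
    for I := 0 to Lines.Count - 1 do
      Addresses[I] := IPv4ToCardinal(Lines[I]);
    TArray.Sort<Cardinal>(Addresses); // sort once so every lookup can binary search
  finally
    Lines.Free;
  end;
end;

function IsIPInList(const IPAddress: string): Boolean;
var
  Idx: Integer;
begin
  Result := TArray.BinarySearch<Cardinal>(Addresses, IPv4ToCardinal(IPAddress), Idx);
end;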
These are just some possible approaches, there are more. Which to choose depends very much on the usage trade offs.
Whilst you probably are just looking for some code to solve your problem, it is my view that this problem is more algorithmic than code based. You need to better understand the requirements, data size constraints, etc. before selecting an algorithm. Once you have decided on an algorithm, or narrowed the choice down to a small number to compare, implementation should be straightforward.
Another possibility is that you have misdiagnosed your problem. Even linear search over a list of 5000 IP addresses stored as strings should be quicker than you are reporting:
My computer can search such a list 2,000 times a second using linear search.
Once you sort the list and switch to binary search, then it can manage 1,000,000 searches a second.
Switch to storing an ordered array of integers, and it achieves over 10,000,000 searches a second.
A hash based dictionary of integers gives performance twice as fast again.
So, while the performance of your search could easily be improved by a factor of 20,000, I still don't think that your performance problems are down to what you believe. I wonder if your real problem is that you are reading the file from disk every time you search.

TClientDataSet and limit by memory

We have a system that creates reports out of our data, and we can be dealing with a lot of data; over 150,000 rows is not out of the question.
Unfortunately, our experience is that TClientDataSet has its limitations: it often results in an 'insufficient memory for this operation' error when the data gets too big.
So the question is this: Does there exist a generally available implementation of TDataSet that can handle a large amount of data (such as streaming directly to a file and not keeping the entire dataset in memory)?
I am willing to implement such a class myself. But as far as I understand TClientDataSet, it needs to hold all the data itself before it can save it to a file/stream. In addition, loading the data again should also be possible as a stream, rather than loading an entire TClientDataSet object, because then we wouldn't have solved the issue.
You can use either Firebird or InterBase in embedded mode.
Is there really any need to cache all the data on the client before reporting? If not, maybe rethink how you're querying and processing data to generate these reports and see if there's a way that involves less client-side data (which comes with a bonus of less data transmitted over the network).
If you've been down that road before and you really do need all this data client side, then you could look at custom data structures. A TList<T> of records (even if you need to build your own indexes) takes a lot less memory than a TClientDataSet does.
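As a rough illustration of that last point (the row layout, dataset and names here are invented for the example), a plain record list avoids the per-row overhead of a dataset:
uses
  Data.DB, System.Generics.Collections;

type
  TReportRow = record            // hypothetical row layout for the report
    CustomerID: Integer;
    OrderDate: TDateTime;
    Total: Currency;
  end;

procedure LoadRows(SourceQuery: TDataSet; Rows: TList<TReportRow>);
var
  Row: TReportRow;
begin
  while not SourceQuery.Eof do
  begin
    Row.CustomerID := SourceQuery.FieldByName('CUSTOMER_ID').AsInteger;
    Row.OrderDate  := SourceQuery.FieldByName('ORDER_DATE').AsDateTime;
    Row.Total      := SourceQuery.FieldByName('TOTAL').AsCurrency;
    Rows.Add(Row);               // a plain record copy: no dataset bookkeeping per row
    SourceQuery.Next;
  end;
end;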
kbmMemTable is a nice alternative to TClientDataSet:
http://www.components4programmers.com/products/kbmmemtable/
We have been using it for years and it is very useful and fast.
I want to underline that the capacity of TClientDataSet can be bigger than you might think.
A test on the limits of TClientDataSet: appending xxx,xxx records, posting the same single record repeatedly, to get an idea of the size it can hold.
// Begin Load Record to TClientDataSet for Reverse (on Reverse) Processing
dxMemData1.Append;
dxMemData1['NT_Rec_No'] := 1000;
dxMemData1['NT_User'] := 'DEV\Administrator';
dxMemData1['NT_Type'] := 'Information';
dxMemData1['Ora_Timestamp'] := '20170706033859.000';
dxMemData1['Ora_Host'] := 'DEV0001';
dxMemData1['Ora_SID'] := 'Oracle.orcl';
dxMemData1['Ora_Event_Id'] := '34';
dxMemData1['NT_Message'] := Memo1.Text;
dxMemData1.Post;
// End Load Record to TClientDataSet for Reverse (on Reverse) Processing
The string in Memo1 is about 100 characters (ANSI).
I did several tests and managed to get somewhere between 600,000 and 900,000 records without crashing.
Making the text in the memo bigger decreased the maximum number of records before the crash, which suggests it is not a matter of an exact maximum record count but of the total size consumed - my guess.
I tested the same with TdxMemData (DevExpress); this time I could reach almost double the number of records.

Optimizing looping through dataset

I have code something like this:
var
  dxMemOrdered: TdxMemData;
...
while not qrySandbox2.Eof do
begin
  dxMemOrdered.Append;
  dxMemOrderedTotal.AsCurrency := qrySandbox2.FieldByName('TOTAL').AsCurrency;
  dxMemOrdered.Post;
  qrySandbox2.Next;
end;
This code executes in a thread. When there are a huge number of records, say 400,000, it takes around 25 minutes to loop through them. Is there any way I can reduce the time by optimizing the loop? Any help would be appreciated.
Update
Based on the suggestions, I made the following changes:
var
  dxMemOrdered: TdxMemData;
...
qrySandbox2.DisableControls;
while not qrySandbox2.Recordset.EOF do
begin
  dxMemOrdered.Append;
  dxMemOrderedTotal.AsCurrency := qrySandbox2.Recordset.Fields['TOTAL'].Value;
  dxMemOrdered.Post;
  qrySandbox2.Next;
end;
qrySandbox2.EnableControls;
and my execution time has improved from 15 minutes to 2 minutes. Thank you, guys.
Without seeing more code, the only suggestion I can make is to make sure that any visual control that is using the memory table is disabled. Suppose you have a TcxGrid called Grid that is linked to your dxMemOrdered memory table:
var
dxMemOrdered: TdxMemData;
...
Grid.BeginUpdate;
try
  while not qrySandbox2.Eof do
  begin
    dxMemOrdered.Append;
    dxMemOrderedTotal.AsCurrency := qrySandbox2.FieldByName('TOTAL').AsCurrency;
    dxMemOrdered.Post;
    qrySandbox2.Next;
  end;
finally
  Grid.EndUpdate;
end;
Some ideas in order of performance gain vs work to do by you:
1) Check whether the SQL dialect you are using lets you write queries that SELECT from one source and INSERT into another directly. This depends on the database you're using.
2) If your datasets are coupled to visual controls, make sure you call DisableControls/EnableControls around this loop (see the sketch after this list).
3) Does this code have to run in the main program thread? Maybe you can send it off to a separate thread while the user/program continues doing something else.
4) When you have to deal with really large data, bulk insertion is the way to go. Many databases have options to bulk insert data from text files. Writing to a text file first and then bulk inserting is way faster than individual inserts. Again, this depends on your database type.
[Edit: I just saw that you added the info that it's TdxMemData, so some of these no longer apply. And you're already threading; missed that ;-). I'll leave these suggestions in for other readers with similar problems.]
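A minimal sketch of point 2, wrapping the copy loop in try/finally so the controls are re-enabled even if an exception is raised (dataset and field names are taken from the question):
qrySandbox2.DisableControls;
dxMemOrdered.DisableControls;
try
  while not qrySandbox2.Eof do
  begin
    dxMemOrdered.Append;
    dxMemOrderedTotal.AsCurrency := qrySandbox2.FieldByName('TOTAL').AsCurrency;
    dxMemOrdered.Post;
    qrySandbox2.Next;
  end;
finally
  dxMemOrdered.EnableControls;
  qrySandbox2.EnableControls;
end;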
It's much better to let SQL do the work instead of iterating through a loop in Delphi. Try a query such as:
insert into dxMemOrdered (total)
select total from qrySandbox2
Is 'total' the only field in dxMemOrdered? I hope it's not the primary key; otherwise you are likely to have collisions, meaning that rows will not be added.
There's actually a lot you could do to speed up your thread.
The first would be to look at the problem in a broader perspective:
Am I fetching data from a cached / fast disk, possibly moved in memory?
Am I doing the right thing when aggregating totals by hand? SQL engines are especially optimized for this; all you'd need to do is define an additional logical field in which to store the SQL-aggregated result.
Another little optimization that may bring an improvement over large amounts of looping is to not use constructs like:
Recordset.Fields['TOTAL'].Value
Recordset.FieldByName('TOTAL').Value
but to add the fields with the Fields editor and then access the right field directly. You save a whole scan through the fields collection that is otherwise performed for every field, on every record.
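A rough sketch of the difference (the local variable here is equivalent to a persistent field created in the Fields editor, which Delphi would typically name qrySandbox2TOTAL; names come from the question):
var
  TotalField: TField;
  Sum: Currency;
begin
  Sum := 0;
  TotalField := qrySandbox2.FieldByName('TOTAL'); // one name lookup, before the loop
  while not qrySandbox2.Eof do
  begin
    Sum := Sum + TotalField.AsCurrency;           // direct field access inside the loop
    qrySandbox2.Next;
  end;
end;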

How does AdoQuery handle blobs?

I am testing some databases components such as SDAC and others and I found out something interesting:
When I execute a query with TADOQuery that has a lot of blob fields and I fetch all rows (FetchAll), the memory of my application gets close to 1.8 GB and everything works fine.
Using other components, the same query executed on the same database throws an Out of Memory exception because it exceeds 1.8 GB of memory usage.
I know I should not return all those rows and that I should use pagination and so on, but I am curious how ADO manages to get all rows when other components can't.
I think that ADO is compressing the blobs in memory, but this is only a guess.
Does anyone know why memory usage in ADO is so good?
I cannot speak for SDAC, but I can for AnyDAC's TADQuery:
If you exclude fiBlobs from FetchOptions.Items, AnyDAC will not fetch BLOB values immediately, but will defer fetching until the application actually needs a BLOB value.
Setting FormatOptions.InlineDataSize to a smaller value will reduce memory usage when fetching a large result set with many character fields.
By specifying FormatOptions.MapRules, the application may choose a more compact data type representation.
There are also a few other techniques that reduce memory usage when fetching large result sets. To use them properly, a developer should know what kind of data will be returned; the price of some of these options may be slightly reduced fetch performance.
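A short sketch of the first two points, using the option names from the answer (ADQuery1 is an assumed component name, and the exact property spellings should be checked against your AnyDAC/FireDAC version):
// Defer BLOB fetching: BLOBs are read only when a field value is actually requested.
ADQuery1.FetchOptions.Items := ADQuery1.FetchOptions.Items - [fiBlobs];
// Use smaller inline buffers for character fields to shrink the per-row footprint.
ADQuery1.FormatOptions.InlineDataSize := 64;
ADQuery1.SQL.Text := 'SELECT * FROM SomeTable'; // hypothetical query
ADQuery1.Open;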

How to handle billions of objects without "Outofmemory" error

I have an application which may need to process billions of objects. Each object is of the TRange class type. These ranges are created at different parts of an algorithm, depending on certain conditions and other object properties. As a result, if you have 100 items, you can't directly create the 100th object without creating all the prior objects. If I create all the (billions of) objects and add them to the collection, the system will throw an Out of memory error. Now I want to iterate through each object, mainly for two purposes:
To apply an operation to each TRange object (e.g. output certain properties)
To get a cumulative sum of a certain property (e.g. each range has a weight property and I want to retrieve a total weight that is the sum of all the range weights).
How do I effectively create an iterator for these objects without raising an Out of memory error?
I have handled the first case by passing a function pointer to the algorithm function. For example:
procedure createRanges(aProc: TRangeProc); // aProc is a pointer to a procedure that takes a TRange
var
  range: TRange;
  rangerec: TRangeRec;
begin
  range := TRange.Create;
  try
    while canCreateRange do // certain conditions needed to create a range
    begin
      rangerec := ReturnRangeRec;
      range.Update(rangerec); // don't create a new object, reuse the same one
      if Assigned(aProc) then aProc(range);
    end;
  finally
    range.Free;
  end;
end;
But the problem with this approach is that to add new functionality, say retrieving the total weight I mentioned earlier, I either have to duplicate the algorithm function or pass an optional out parameter. Please suggest some ideas.
Thank you all in advance,
Pradeep
For such large amounts of data you need to keep only a portion of the data in memory. The other data should be serialized to the hard drive. I tackled such a problem like this:
I created an extended storage that can store a custom record either in memory or on the hard drive. This storage has a maximum number of records that can live simultaneously in memory.
Then I derived the record classes from the custom record class. These classes know how to store and load themselves from the hard drive (I use streams).
Every time you need a new or already existing record, you ask the extended storage for it. If the maximum number of objects is exceeded, the storage streams some of the least recently used records back to the hard drive.
This way the records are transparent: you always access them as if they are in memory, but they may get loaded from the hard drive first. It works really well. By the way, RAM works in a very similar way: it only holds a certain subset of all the data on your hard drive - your working set.
I did not post any code because it is beyond the scope of the question itself and would only confuse.
Look at TgsStream64. This class can handle huge amounts of data through file mapping:
http://code.google.com/p/gedemin/source/browse/trunk/Gedemin/Common/gsMMFStream.pas
But the problem with this approach is that to add new functionality, say retrieving the total weight I mentioned earlier, I either have to duplicate the algorithm function or pass an optional out parameter.
It's usually done like this: you write an enumerator function (like you did) which receives a callback function pointer (you did that too) and an untyped pointer ("Data: pointer"). You define the callback function to have its first parameter be the same untyped pointer:
type
  TRangeProc = procedure(Data: pointer; range: TRange);

procedure enumRanges(aProc: TRangeProc; Data: pointer);
begin
  {for each range}
  aProc(Data, range); // pass the untyped pointer through to the callback
end;
Then if you want to, say, sum all ranges, you do it like this:
type
  TSumRecord = record
    Sum: int64;
  end;
  PSumRecord = ^TSumRecord;

procedure SumProc(SumRecord: PSumRecord; range: TRange);
begin
  SumRecord.Sum := SumRecord.Sum + range.Value;
end;

function SumRanges(): int64;
var
  SumRec: TSumRecord;
begin
  SumRec.Sum := 0;
  enumRanges(TRangeProc(@SumProc), @SumRec);
  Result := SumRec.Sum;
end;
Anyway, if you need to create billions of ANYTHING you're probably doing it wrong (unless you're a scientist, modelling something extremely large scale and detailed). Even more so if you need to create billions of stuff every time you want one of those. This is never good. Try to think of alternative solutions.
"Runner" has a good answer how to handle this!
But I would like to known if you could do a quick fix: make smaller TRange objects.
Maybe you have a big ancestor? Can you take a look at the instance size of TRange object?
Maybe you better use packed records?
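Checking the instance size is a one-liner (a quick sketch; note that TObject.InstanceSize reports only the object's own footprint, not anything it allocates separately):
// Requires Vcl.Dialogs and System.SysUtils in the uses clause.
ShowMessage(Format('TRange instance size: %d bytes', [TRange.InstanceSize]));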
This part:
"As a result, if you have 100 items, you can't directly create the 100th object without creating all the prior objects."
sounds a bit like calculating Fibonacci numbers. Maybe you can reuse some of the TRange objects instead of creating redundant copies? Here is a C++ article describing this approach - it works by storing already calculated intermediate results in a hash map.
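In Delphi terms that caching idea might look roughly like this (a sketch; the integer key and BuildRange are invented placeholders, with TDictionary playing the role of the hash map):
uses
  System.Generics.Collections;

var
  RangeCache: TDictionary<Integer, TRange>; // created once; the key identifies a range

function GetRange(Key: Integer): TRange;
begin
  if not RangeCache.TryGetValue(Key, Result) then
  begin
    Result := BuildRange(Key);   // hypothetical expensive construction
    RangeCache.Add(Key, Result); // remember it so later requests reuse the object
  end;
end;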
Handling billions of objects is possible but you should avoid it as much as possible. Do this only if you absolutely have to...
I once created a system that needed to handle a huge amount of data. To do so, I made my objects "streamable" so I could read/write them to disk. A larger class around them decided when an object would be saved to disk and thus removed from memory. Basically, when I requested an object, this class would check whether it was loaded; if not, it would re-create the object from disk, put it on top of a stack, and then move/write the bottom object of this stack to disk. As a result, my stack had a fixed (maximum) size, and it allowed me to use an unlimited number of objects, with reasonably good performance too.
Unfortunately, I don't have that code available anymore; I wrote it for a previous employer about 7 years ago. I do know that you would need to write a bit of code for the streaming support, plus a bunch more for the stack controller which maintains all those objects. But it would technically allow you to create an unlimited number of objects, since you're trading RAM for disk space.
