Fading values general function/procedure - delphi
I am after a general function/procedure that would calculate fade times and values for me based on the data provided, something like this:
I have byte values saved in a byte array: these are the start values. Then I have some memorized values in another array: these are to be the new values. Then there is a time to be provided, which is the time needed for a start value to reach its new value.
I need to get updates on the values each time they change (accurate to 0.1 seconds). I know that if value A changes by 10 and value B changes by 100 in the same time, let's say 1 second, value A will be updated 10 times, while value B will be updated 100 times.
So far I have been planning on using a timer, with an interval of, let's say, 50 ms, which would constantly recalculate the difference based on the size of the change and the time needed, something like: change step := (difference between start and new value) / (fade time / timer interval).
But the fact that the value changes differ, the fade times differ as well, and that I could start another value fade before the first fade has ended, makes all of this confusing and difficult for me.
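To make the idea concrete, here is roughly the step calculation I have in mind (a sketch only; all names are placeholders):

```pascal
// Hypothetical helper computing the naive per-tick step described above.
// FadeTime and TimerInterval are in milliseconds; names are placeholders.
function ChangeStep(StartValue, NewValue: Byte;
  FadeTime, TimerInterval: Integer): Single;
begin
  // difference between start and new value, divided by the number of ticks
  Result := (NewValue - StartValue) / (FadeTime / TimerInterval);
end;
```

Each OnTimer tick would then do something like Current := Current + ChangeStep(...).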
So, what I would need is an option to, let's say, given values at index 1, 2, 3, 4, 5, 6 and 7 to be changed to their new values in 30 seconds, then at some point somewhere in between I could order the values at index 11, 13 and 17 to change to their new values in 9 seconds, etc...
Also, in case value A has a fade towards value B in progress, and another fade from A to C is ordered, I would like the second fade to be added to a queue, to be executed right after the first fade finishes. At that time, the B value from the first command becomes the A value in the second command. This is because the A in the example above should always be read at the very moment the fade starts; that way it is a starting value no matter what was done before the fade or between the fade command and the fade execution. Therefore, I could set Fade1 to Current -> B # 10s and queue a Fade2 for Current -> C # 10s, where the Current in the second case is actually the value otherwise saved as B; and let's assume the Current in Fade1 is the same as the value saved as C. This way the value would be in a loop, changing every 10 seconds. So basically, the command for adding a fade should only need something like SetNewFade: Dest := B; Time := 10;.
So I could add ->B#10s, ->C#10s, ->B#10s, ->C#10s, and it would just loop from B to C and back until the queue is empty. I hope I managed to make this clear enough to understand; I really can't describe what I need to achieve any better.
Also, as all of the fades would be listed in a Listbox, I would like to be able to delete fades from the queue as desired. But if the currently running fade is deleted, the value should jump to the new value as if the fade had already completed, and then normally start the next fade in the queue, if there is any.
What would be the easiest way to create this? Is using a Timer with a fixed interval a good idea? Would it cause any delays if a lot of values were pending for fade? Is using dynamic arrays for values and times (populating them on a StartFade event and releasing them after fading is complete) a shot in the dark or a good guess?
Here an example which I hope makes it clearer:
A: array [0..20] of Byte;
B: array [0..20] of Byte;
C: array [0..20] of Byte;
Current: array [0..20] of Byte;
Button1 applies the A values to the Current values, Button2 applies the B values, Button3 applies the C values, and so on...
So I set a time in an Edit box, let's say 5 seconds, and click Button1. With that, I added a fade from Current towards the values in array A with a time of 5 seconds. Since it's the first in the queue, it starts executing immediately. Before the fade is actually completed, I set the time to 20 seconds and press Button2. So I just added another fade to the queue: from Current towards the values in array B. Since I'm changing the same values (index 0..20), this starts executing right after the first fade completes. Note: the fading process is constantly updating the Current array, until it has the same values as the fade command's array! Therefore, the second fade will again fade from Current to B, with Current actually being the same as A.
Now where things get even more complicated is when I set just the values indexed 0, 1, 2, 3 and 4 to be faded to A in 5 seconds, and then apply the values indexed 5, 6, 7, 8 and 9 to be faded to the B values in 10 seconds: in that case, since the indexes being faded are different, both fade commands should execute right away.
In case one value is in both fades (such as if I added the value indexed 4 to the second fade), only this value would need to be queued. So the others fade right away, while the one already fading in the first fade waits for it to finish, and then starts to fade as per the second command.
Some additional information:
The lengths of the arrays are not fixed at the moment, but could be made fixed if this is important. It is in any case a multiple of 512, with a maximum of 2047.
The number of arrays is unknown and is to be modified at runtime as needed. They will probably be stored as records, such as StoredVals: array of record, with Index: array of Integer (indexes of the values; this tells which values are stored in this record) and Value: array of Byte (the actual values that are faded, based on Current[StoredVals[0].Index[0]] for example). Current keeps the data of all values, while the records of A, B, C etc. keep only the values indexed inside that record.
The lengths of the arrays are therefore not always equal, since they don't always change the same number of values.
The arrays are filled from the database at initialization. Since they can be created at runtime, they may also be filled from the Current values and stored as a new array, which is then written to the database as well. They are kind of memorized values, so that I can recall them with buttons. For that matter, I would like the option to recall those values immediately (as I already do now) or via the fading option. To avoid issues with a value in the queue, I was thinking of sending such an immediate change through the fading process as well, only with a time of 0 seconds. That way, values which are not queued would be applied immediately, while a value that is currently fading would be applied after that fade completes. That said, this fade process would be in the command flow all the time.
If there's any other extra clarification needed, please don't hesitate to ask!
I know this is really complicated, and that's why I'm looking for your help. Any partial help or suggestions would be appreciated.
I'm after a general function/procedure...
Actually, you seem to be after a complete program. You are thinking about solving it as a whole, and that is clouding your view, which is why you have so many questions. You need to learn to break this task up into smaller parts, and to summarize the requirements more clearly. The question in its current form is close to being off-topic, and it would probably fit better at SE Programmers. But since this fits right up my alley, I would like to step you through it.
Requirements
There is a set of values X of length N.
One or more values in this set can be assigned a new value.
The modification from an old value to the new value should be performed in steps within a specific duration.
This results in intermediate values during this transition.
This transition is value/index specific, i.e. the duration for X[0] could differ from that for X[1].
A transition has to be entirely completed before another new value can be assigned.
New values may be requested for assignment while a transition is in progress.
It follows that new requests should be stored in a queue, so that when a transition is completed, a new value request can be pulled from the queue, resulting in a new transition.
I am pretty sure this is a correct summary of your question.
Transitions
Your proposal to use a Timer to perform a piece of the total transition on every interval is sound. Now, there are two ways to calculate those pieces:
Divide the total transition into a fixed number of small transitions, set the Timer's interval to the total duration divided by that number, and handle the sum of all processed smaller transitions on every interval. This is what you propose with the calculation of the change step. The drawback of this method is twofold:
A Timer's interval is an approximation and will not be exact for various reasons, one of them being its dependence on the Windows messaging model, whose timing is affected by many processes, including yours,
Possibly rough or unsmooth progress as a result.
Recalculate the processed part of the transition at every interval. That way the progress will always be smooth, even if the next interval takes twice as long.
The second solution is preferred, and it translates into the following general routine you are looking for. Let's start simple by assuming a single item:
function InBetweenValue(BeginValue, EndValue: Byte; Duration,
  StartTick: Cardinal): Byte;
var
  Done: Single;
begin
  // Duration is assumed to be at least 1 ms to avoid division by zero
  Done := (GetTickCount - StartTick) / Duration;
  if Done >= 1.0 then
    Result := EndValue
  else
    Result := Round(BeginValue + (EndValue - BeginValue) * Done);
end;
Is using a Timer with fixed interval a good idea?
With this approach, the Timer's interval does not affect the calculation: at any given time the result of InBetweenValue will be correct. The only thing the Timer is needed for is driving the progress. If you want a 67 Hz refresh rate, set its interval to 15 milliseconds. If you want a refresh rate of 20 Hz, set the interval to 50 milliseconds.
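As an illustration of driving the progress this way, here is a minimal sketch for a single value; the form, timer, field and control names (TForm1, Timer1, ProgressBar1, FBeginValue and so on) are assumptions for the example, not part of the design worked out below:

```pascal
// Hypothetical single-value fade driven by a TTimer and InBetweenValue.
// FBeginValue, FCurrentValue, FEndValue, FDuration and FStartTick are
// assumed form fields, set once when the fade is started.
procedure TForm1.StartFade(NewValue: Byte; ADuration: Cardinal);
begin
  FBeginValue := FCurrentValue;
  FEndValue := NewValue;
  FDuration := ADuration;
  FStartTick := GetTickCount;  // remember when this fade began
  Timer1.Enabled := True;
end;

procedure TForm1.Timer1Timer(Sender: TObject);
begin
  // The interval only determines how often we refresh; the value itself
  // is always derived from the elapsed time, so it stays smooth.
  FCurrentValue := InBetweenValue(FBeginValue, FEndValue, FDuration, FStartTick);
  ProgressBar1.Position := FCurrentValue;   // any visual feedback will do
  Timer1.Enabled := FCurrentValue <> FEndValue;  // stop when done
end;
```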
Performance
Would it cause any delays if a lot of values would be pending for fade?
No, not for the implied reason. The time needed for all calculations may depend on the size of the queue, but that will certainly not be a significant factor. (If it were, you would have problems of a much more troubling caliber.) Possible "delays" will manifest as a lower refresh rate due to missed or merged Windows Timer messages, depending on how busy the computer is with everything it's doing.
Data storage
Is using dynamic arrays for values and times (and populating them on "StartFade" event and release them after fading is complete) a shot in the dark or a good guess?
Let's first analyze what data needs to be handled. There is a single set of in-between current values of arbitrary length, and each value has its own four attributes: begin value, end value, transition duration and transition start time. So you have the choice between:
Storing 5 sets: one set of current values and four sets of attributes, or
Storing 1 set: a single set of current values wherein each value has four attribute members.
The first solution invites trouble keeping all five sets synchronized. The second requires another dimension. I would prefer the latter.
Whether you use arrays or something else is up to you. Choose what you are most comfortable with, what fits the purpose, or what matches the input or required output best. Whether you choose static or dynamic arrays depends on the variability of the input and makes no measurable difference in performance. Dynamic arrays require runtime length management, where static arrays do not.
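As a sketch of the second option, one record per value carrying its four attribute members (the type and field names here are illustrative assumptions, not the final design):

```pascal
type
  // One element of the single set: the current value plus its four
  // transition attributes kept together.
  TFadingByte = record
    CurrentValue: Byte;
    BeginValue: Byte;
    EndValue: Byte;
    Duration: Cardinal;   // transition duration in milliseconds
    StartTick: Cardinal;  // GetTickCount at transition start
  end;

  // The single set of arbitrary length
  TFadingBytes = array of TFadingByte;
```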
But since you need a dynamic solution anyway, I suggest thinking outside the box. For example, the RTL offers no built-in management tools for arrays, but it does have collection classes that do, e.g. TList.
For the rest of this answer, I will assume the decision to use an Object for an element and a List for keeping track of them.
Design
Now that the two most pressing points have been addressed, the design can be worked out.
There is a List with items, and each item has its current value and four attributes: begin, end, duration and start time. Each item must be capable of receiving new attribute values. There is a formula for calculating the current value, based on the attributes. And there is a Timer which should automate all of these calculations.
Furthermore, multiple transition commands should be stored per Item. Since we have an Item with members already, let's add those commands as a member of the Item too.
Something missing? No. Let's go.
Implementation
We need:
A type for a Transition with two members: end value and duration,
A type holding multiple of these transitions, preferably with queue characteristics,
A type for an Item with six members: begin value, end value, duration, start time, current value and transitions,
A type for a List of such Items,
A routine for calculating the current value of an Item,
A routine for popping the next transition from the queue when the current value has reached the end value,
A routine for performing this calculation and popping on all Items,
A Timer to drive this overall routine,
A routine for updating an Item's attributes. Recapitulate: do we need the ability to set all the attributes? Doesn't a transition already have all the settings needed?
A type for an Object holding this all together.
This should help you set up the interface part of the code. Linger a while, and contain your eagerness to start coding the implementation.
Here is my try-out, which originated as described above:
unit Modulation;

interface

uses
  System.SysUtils, System.Classes, System.Generics.Collections, WinAPI.Windows,
  VCL.ExtCtrls;

type
  TTransition = record
    EndValue: Byte;
    Duration: Cardinal;
  end;

  TTransitions = class(TQueue<TTransition>);

  TByte = class(TObject)
  private
    FBeginValue: Byte;
    FCurrentValue: Byte;
    FEndValue: Byte;
    FDuration: Cardinal;
    FStartTick: Cardinal;
    FTransitions: TTransitions;
    procedure PopTransition;
  public
    procedure AddTransition(ATransition: TTransition);
    constructor Create;
    destructor Destroy; override;
    function HasTransition: Boolean;
    function InTransition: Boolean;
    procedure Recalculate;
    property CurrentValue: Byte read FCurrentValue;
  end;

  TBytes = class(TObjectList<TByte>);

  TByteModulator = class(TObject)
  private
    FItems: TBytes;
    FOnProgress: TNotifyEvent;
    FTimer: TTimer;
    function Finished: Boolean;
    function GetCurrentValue(Index: Integer): Byte;
    function GetItemCount: Integer;
    procedure SetItemCount(Value: Integer);
    procedure Proceed(Sender: TObject);
  protected
    procedure DoProgress;
  public
    procedure AddTransition(Index: Integer; ATransition: TTransition);
    constructor Create;
    destructor Destroy; override;
    property CurrentValues[Index: Integer]: Byte read GetCurrentValue; default;
    property ItemCount: Integer read GetItemCount write SetItemCount;
    property OnProgress: TNotifyEvent read FOnProgress write FOnProgress;
  end;

implementation

{ TByte }

procedure TByte.AddTransition(ATransition: TTransition);
begin
  if ATransition.Duration < 1 then
    ATransition.Duration := 1;
  FTransitions.Enqueue(ATransition);
  Recalculate;
end;

constructor TByte.Create;
begin
  inherited Create;
  FTransitions := TTransitions.Create;
  FDuration := 1;
end;

destructor TByte.Destroy;
begin
  FTransitions.Free;
  inherited Destroy;
end;

function TByte.HasTransition: Boolean;
begin
  Result := FTransitions.Count > 0;
end;

function TByte.InTransition: Boolean;
begin
  Result := FCurrentValue <> FEndValue;
end;

procedure TByte.PopTransition;
var
  Transition: TTransition;
begin
  Transition := FTransitions.Dequeue;
  FBeginValue := FCurrentValue;
  FEndValue := Transition.EndValue;
  FDuration := Transition.Duration;
  FStartTick := GetTickCount;
end;

procedure TByte.Recalculate;
var
  Done: Single;
begin
  Done := (GetTickCount - FStartTick) / FDuration;
  if Done >= 1.0 then
  begin
    FCurrentValue := FEndValue;
    if HasTransition then
      PopTransition;
  end
  else
    FCurrentValue := Round(FBeginValue + (FEndValue - FBeginValue) * Done);
end;

{ TByteModulator }

const
  RefreshFrequency = 25;

procedure TByteModulator.AddTransition(Index: Integer;
  ATransition: TTransition);
begin
  FItems[Index].AddTransition(ATransition);
  FTimer.Enabled := True;
end;

constructor TByteModulator.Create;
begin
  inherited Create;
  FItems := TBytes.Create(True);
  FTimer := TTimer.Create(nil);
  FTimer.Enabled := False;
  FTimer.Interval := MSecsPerSec div RefreshFrequency;
  FTimer.OnTimer := Proceed;
end;

destructor TByteModulator.Destroy;
begin
  FTimer.Free;
  FItems.Free;
  inherited Destroy;
end;

procedure TByteModulator.DoProgress;
begin
  if Assigned(FOnProgress) then
    FOnProgress(Self);
end;

function TByteModulator.Finished: Boolean;
var
  Item: TByte;
begin
  Result := True;
  for Item in FItems do
    if Item.InTransition or Item.HasTransition then
    begin
      Result := False;
      Break;
    end;
end;

function TByteModulator.GetCurrentValue(Index: Integer): Byte;
begin
  Result := FItems[Index].CurrentValue;
end;

function TByteModulator.GetItemCount: Integer;
begin
  Result := FItems.Count;
end;

procedure TByteModulator.Proceed(Sender: TObject);
var
  Item: TByte;
begin
  for Item in FItems do
    Item.Recalculate;
  DoProgress;
  FTimer.Enabled := not Finished;
end;

procedure TByteModulator.SetItemCount(Value: Integer);
var
  I: Integer;
begin
  for I := FItems.Count to Value - 1 do
    FItems.Add(TByte.Create);
  FItems.DeleteRange(Value, FItems.Count - Value);
end;

end.
And a tiny plug-and-play demonstration program (note that the labels only show the last request):
unit Unit2;

interface

uses
  System.SysUtils, System.Classes, Vcl.Controls, Vcl.Forms,
  VCL.ComCtrls, VCL.StdCtrls, Modulation;

type
  TForm2 = class(TForm)
  private
    FBars: array of TProgressBar;
    FLabels: array of TLabel;
    FByteModulator: TByteModulator;
    procedure FormClick(Sender: TObject);
    procedure Progress(Sender: TObject);
  public
    constructor Create(AOwner: TComponent); override;
    destructor Destroy; override;
  end;

var
  Form2: TForm2;

implementation

{$R *.dfm}

{ TForm2 }

const
  Count = 10;

constructor TForm2.Create(AOwner: TComponent);
var
  I: Integer;
begin
  inherited Create(AOwner);
  FByteModulator := TByteModulator.Create;
  FByteModulator.ItemCount := Count;
  FByteModulator.OnProgress := Progress;
  SetLength(FBars, Count);
  SetLength(FLabels, Count);
  for I := 0 to Count - 1 do
  begin
    FBars[I] := TProgressBar.Create(Self);
    FBars[I].SetBounds(10, 10 + 30 * I, 250, 25);
    FBars[I].Smooth := True;
    FBars[I].Max := High(Byte);
    FBars[I].Parent := Self;
    FLabels[I] := TLabel.Create(Self);
    FLabels[I].SetBounds(270, 15 + 30 * I, 50, 25);
    FLabels[I].Parent := Self;
  end;
  OnClick := FormClick;
end;

destructor TForm2.Destroy;
begin
  FByteModulator.Free;
  inherited Destroy;
end;

procedure TForm2.FormClick(Sender: TObject);
var
  Transition: TTransition;
  Index: Integer;
begin
  Transition.EndValue := Random(High(Byte) + 1);
  Transition.Duration := Random(3000);
  Index := Random(Count);
  FLabels[Index].Caption := Format('%d > %d # %f',
    [FByteModulator.CurrentValues[Index], Transition.EndValue,
    Transition.Duration / MSecsPerSec]);
  FByteModulator.AddTransition(Index, Transition);
end;

procedure TForm2.Progress(Sender: TObject);
var
  I: Integer;
begin
  for I := 0 to Count - 1 do
    FBars[I].Position := FByteModulator.CurrentValues[I];
end;

initialization
  Randomize;

end.
Success.
Related
Unexpected result from Max function in FreePascal
The running example is really simple to understand:

program Project1;

uses
  SysUtils, Math;

var
  fValue: double;
  fValueMax: double;
begin
  fValue := 7.0207503445953527;
  fValueMax := Max(0, fValue);
  writeln(fValue);
  writeln(fValueMax);
  readln;
end.

However the result is completely unexpected. For some reason, the Max function does not only return the larger of the two arguments but also changes its value. In the example code above, the expected value of fValueMax is exactly fValue, but instead fValueMax is bigger. The difference is approximately E-7, so small, but still unexpected, and it crashes my subsequent code (which is not published here to keep the question clear and simple).
I should state upfront that the last time I used Pascal was close to 25 years ago. But I pulled down Free Pascal out of curiosity and tried this:

program Project1;

uses
  SysUtils, Math;

var
  fValue: double;
  fValueMax: double;
  fSingle: single;
  fValue2: double;
  fValue2b: double;
  fValueMax2: double;
begin
  fValue := 7.0207503445953527;
  fSingle := 7.0207503445953527;
  fValueMax := Max(0, fValue);
  writeln(fValue);     // prints 7.0207503445953527E+000
  writeln(fValueMax);  // prints 7.0207505226135254E+000
  writeln(fSingle);    // prints 7.020750523E+00
  fValue2 := 7.0207503445953527;
  fValue2b := 0.0;
  fValueMax2 := Max(fValue2b, fValue2);
  writeln(fValue2);    // prints 7.0207503445953527E+000
  writeln(fValueMax2); // prints 7.0207503445953527E+000
  readln;
end.

My first two writeln commands show the same result that you reported seeing. I suspected that perhaps Max was returning a value with less precision than the double you expected to get back, so I created fSingle and assigned it the same literal as you assigned to fValue, and sure enough, its value looks very close to what you're getting back in fValueMax. So finally, instead of invoking Max with fValue and the literal 0, I called it with two variables of type double, one of which I had set to 0.0. In this case you can see that the input (fValue2) and the output (fValueMax2) have exactly the same value. So while I don't know exactly what Pascal's rules are for determining which overload to call, I wonder if your original call to Max was somehow resolving to the version that takes two single values and returns the same. While you may be aware of this, I feel compelled to throw in the usual caution that floating-point types like single and double won't always be able to exactly represent the values you want them to. Here's a good overview.
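If overload resolution is indeed the culprit, one way to keep the call on the Double overload without introducing an extra variable is a typed constant for the zero argument. A minimal sketch under that assumption:

```pascal
program MaxDoubleOverload;

uses
  SysUtils, Math;

const
  Zero: Double = 0.0; // typed constant, so both arguments are Double

var
  fValue: Double;
begin
  fValue := 7.0207503445953527;
  // With two Double arguments, the Double overload of Math.Max is chosen
  // and no intermediate single-precision conversion can occur.
  writeln(Max(Zero, fValue));
  readln;
end.
```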
Long delay when looping through a TList of big records
I use Delphi 10.1 Berlin on Windows 10. I have two records of different sizes. I wrote code to loop through two TList<T> of these records to test elapsed times. Looping through the list of the larger record runs much slower. Can anyone explain the reason, and provide a solution to make the loop run faster?

type
  tTestRecord1 = record
    Field1: array[0..4] of Integer;
    Field2: array[0..4] of Extended;
    Field3: string;
  end;

  tTestRecord2 = record
    Field1: array[0..4999] of Integer;
    Field2: array[0..4999] of Extended;
    Field3: string;
  end;

procedure TForm1.Button1Click(Sender: TObject);
var
  _List: TList<tTestRecord1>;
  _Record: tTestRecord1;
  _Time: TTime;
  i: Integer;
begin
  _List := TList<tTestRecord1>.Create;
  for i := 0 to 4999 do
  begin
    _List.Add(_Record);
  end;
  _Time := Time;
  for i := 0 to 4999 do
  begin
    if _List[i].Field3 = 'abcde' then
    begin
      Break;
    end;
  end;
  Button1.Caption := FormatDateTime('s.zzz', Time - _Time); // 0.000
  _List.Free;
end;

procedure TForm1.Button2Click(Sender: TObject);
var
  _List: TList<tTestRecord2>;
  _Record: tTestRecord2;
  _Time: TTime;
  i: Integer;
begin
  _List := TList<tTestRecord2>.Create;
  for i := 0 to 4999 do
  begin
    _List.Add(_Record);
  end;
  _Time := Time;
  for i := 0 to 4999 do
  begin
    if _List[i].Field3 = 'abcde' then
    begin
      Break;
    end;
  end;
  Button2.Caption := FormatDateTime('s.zzz', Time - _Time); // 0.045
  _List.Free;
end;
First of all, I want to consider the entire code, even the code that populates the list, which I do realise you have not timed. Because the second record is larger in size, more memory needs to be copied when you make an assignment of that record type. Further, when you read from the list, the larger record is less cache friendly than the smaller record, which impacts performance. This latter effect is likely less significant than the former. Related to this is that as you add items, the list's internal array of records has to be resized. Sometimes the resizing leads to a reallocation that cannot be performed in-place. When that happens, a new block of memory is allocated and the previous content is copied to the new block. That copy is clearly more expensive for the larger record. You can mitigate this by allocating the array once up front, if you know its length; the list's Capacity property is the mechanism to use. Of course, you will not always know the length ahead of time.

Your program does very little beyond memory allocation and memory access, hence the performance of these memory operations dominates. Now, your timing covers only the code that reads from the lists, so the memory copy performance difference on population is not part of the benchmark that you performed. Your timing differences are mainly down to excessive memory copying when reading, as I will explain below. Consider this code:

if _List[i].Field3 = 'abcde' then

Because _List[i] is a record, a value type, the entire record is copied to an implicit hidden local variable. The code is actually equivalent to:

var
  tmp: tTestRecord2;
...
  tmp := _List[i]; // copy of entire record
  if tmp.Field3 = 'abcde' then

There are a few ways to avoid this copy:

Change the underlying type to be a reference type. This changes the memory management requirements, and you may have good reason to want to use a value type.
Use a container class that can return the address of an item rather than a copy of an item.
Switch from TList<T> to a dynamic array TArray<T>. That simple change will allow the compiler to access individual fields directly without copying entire records.
Use TList<T>.List to obtain access to the list object's underlying array holding the data. That has the same effect as the previous item.

Item 4 above is the simplest change you could make to see a large difference. You would replace

if _List[i].Field3 = 'abcde' then

with

if _List.List[i].Field3 = 'abcde' then

and that should yield a very significant change in performance. Consider this program:

{$APPTYPE CONSOLE}

uses
  System.Diagnostics, System.Generics.Collections;

type
  tTestRecord2 = record
    Field1: array[0..4999] of Integer;
    Field2: array[0..4999] of Extended;
    Field3: string;
  end;

procedure Main;
const
  N = 100000;
var
  i: Integer;
  Stopwatch: TStopwatch;
  List: TList<tTestRecord2>;
  Rec: tTestRecord2;
begin
  List := TList<tTestRecord2>.Create;
  List.Capacity := N;
  for i := 0 to N-1 do
  begin
    List.Add(Rec);
  end;
  Stopwatch := TStopwatch.StartNew;
  for i := 0 to N-1 do
  begin
    if List[i].Field3 = 'abcde' then
    begin
      Break;
    end;
  end;
  Writeln(Stopwatch.ElapsedMilliseconds);
end;

begin
  Main;
  Readln;
end.

I had to compile it for 64 bit to avoid an out of memory condition. The output on my machine is around 700. Change List[i].Field3 to List.List[i].Field3 and the output is in single figures. The timing is rather crude, but I think this demonstrates the point.

The issue of the large record not being cache friendly remains. That is more complicated to deal with and would require a detailed analysis of how the real-world code operates on its data.

As an aside, if you care about performance then you won't use Extended. Because it has size 10, not a power of two, memory access is frequently mis-aligned. Use Double, or Real, which is an alias for Double.
Do dynamic arrays support a non-zero lower bound (for VarArrayCreate compatibility)?
I'm going to maintain and port to Delphi XE2 a bunch of very old Delphi code that is full of VarArrayCreate constructs to fake dynamic arrays having a lower bound that is not zero.

Drawbacks of using Variant types are:

quite a bit slower than native arrays (the code does a lot of complex financial calculations, so speed is important)
not type safe (especially when by accident a wrong var... constant is used, and the Variant system starts to do unwanted conversions or rounding)

Both could become moot if I could use dynamic arrays. The good thing about variant arrays is that they can have non-zero lower bounds. What I recollect is that dynamic arrays always used to start at a lower bound of zero. Is this still true? In other words: is it possible to have dynamic arrays start at a bound other than zero?

As an illustration, a before/after example for a specific case (single dimensional, but the code is full of multi-dimensional arrays, and besides varDouble, the code also uses various other varXXX data types that TVarData allows to use):

function CalculateVector(aSV: TStrings): Variant;
var
  I: Integer;
begin
  Result := VarArrayCreate([1, aSV.Count-1], varDouble);
  for I := 1 to aSV.Count-1 do
    Result[I] := CalculateItem(aSV, I);
end;

The CalculateItem function returns Double. Bounds are from 1 to aSV.Count-1. The current replacement is like this, trading the space of the zeroth element of Result for improved compile-time checking:

type
  TVector = array of Double;

function CalculateVector(aSV: TStrings): TVector;
var
  I: Integer;
begin
  SetLength(Result, aSV.Count); // lower bound is zero, we start at 1 so we ignore the zeroth element
  for I := 1 to aSV.Count-1 do
    Result[I] := CalculateItem(aSV, I);
end;
Dynamic arrays always have a lower bound of 0. So, low(A) equals 0 for all dynamic arrays. This is even true for empty dynamic arrays, i.e. nil. From the documentation: Dynamic arrays are always integer-indexed, always starting from 0.
Having answered your direct question already, I also offer you the beginnings of a generic class that you can use in your porting.

type
  TSpecifiedBoundsArray<T> = class
  private
    FValues: TArray<T>;
    FLow: Integer;
    function GetHigh: Integer;
    procedure SetHigh(Value: Integer);
    function GetLength: Integer;
    procedure SetLength(Value: Integer);
    function GetItem(Index: Integer): T;
    procedure SetItem(Index: Integer; const Value: T);
  public
    property Low: Integer read FLow write FLow;
    property High: Integer read GetHigh write SetHigh;
    property Length: Integer read GetLength write SetLength;
    property Items[Index: Integer]: T read GetItem write SetItem; default;
  end;

{ TSpecifiedBoundsArray<T> }

function TSpecifiedBoundsArray<T>.GetHigh: Integer;
begin
  Result := FLow + System.High(FValues);
end;

procedure TSpecifiedBoundsArray<T>.SetHigh(Value: Integer);
begin
  System.SetLength(FValues, 1 + Value - FLow);
end;

function TSpecifiedBoundsArray<T>.GetLength: Integer;
begin
  Result := System.Length(FValues);
end;

procedure TSpecifiedBoundsArray<T>.SetLength(Value: Integer);
begin
  System.SetLength(FValues, Value);
end;

function TSpecifiedBoundsArray<T>.GetItem(Index: Integer): T;
begin
  Result := FValues[Index - FLow];
end;

procedure TSpecifiedBoundsArray<T>.SetItem(Index: Integer; const Value: T);
begin
  FValues[Index - FLow] := Value;
end;

I think it's pretty obvious how this works. I contemplated using a record but I consider that to be unworkable. That's down to the mix between value type semantics for FLow and reference type semantics for FValues, so I think a class is best here. It also behaves rather weirdly when you modify Low. No doubt you'd want to extend this: you'd add a SetBounds, a copy-to, a copy-from and so on. But I think you may find it useful. It certainly shows how you can make an object that looks very much like an array with a non-zero lower bound.
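For what it's worth, a usage sketch, assuming the class is completed as above; the bounds and values chosen here are arbitrary:

```pascal
// Hypothetical use of TSpecifiedBoundsArray<T> as a 1-based vector.
var
  Vector: TSpecifiedBoundsArray<Double>;
  I: Integer;
begin
  Vector := TSpecifiedBoundsArray<Double>.Create;
  try
    Vector.Low := 1;    // set the lower bound first
    Vector.High := 5;   // allocates storage for logical indexes 1..5
    for I := Vector.Low to Vector.High do
      Vector[I] := I * 1.5;       // default Items property, 1-based access
    Writeln(Vector[3]:0:1);       // element at logical index 3
  finally
    Vector.Free;
  end;
end;
```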
What is the canonical way to write a hasher function for TEqualityComparer.Construct?
Consider the following record:

TMyRecord = record
  b: Boolean;
  // 3 bytes of padding in here with default record alignment settings
  i: Integer;
end;

I wish to implement IEqualityComparer<TMyRecord>. In order to do so I want to call TEqualityComparer<TMyRecord>.Construct. This needs to be supplied with a TEqualityComparison<TMyRecord>, which presents no problems to me. However, Construct also requires a THasher<TMyRecord>, and I would like to know the canonical method for implementing that. The function needs to have the following form:

function MyRecordHasher(const Value: TMyRecord): Integer;
begin
  Result := ???
end;

I expect that I need to call BobJenkinsHash on both fields of the record value and then combine them somehow. Is this the right approach, and how should I combine them?

The reason I don't use TEqualityComparer<TMyRecord>.Default is that it uses CompareMem and so will be incorrect due to the record's padding.
The Effective Java (by Joshua Bloch) section about overriding hashCode could be useful. It shows how the individual parts of the object (or record) can be combined to efficiently construct a hash code.

A good hash function tends to produce unequal hash codes for unequal objects. This is exactly what is meant by the third provision of the hashCode contract. Ideally, a hash function should distribute any reasonable collection of unequal instances uniformly across all possible hash values. Achieving this ideal can be extremely difficult. Luckily it is not too difficult to achieve a fair approximation. Here is a simple recipe:

Store some constant nonzero value, say 17, in an int variable called result.
For each significant field f in your object (each field taken into account by the equals method, that is), do the following:
a. Compute an int hash code c for the field: ..... details omitted ....
b. Combine the hash code c computed in step a into result as follows: result = 37*result + c;
Return result.
When you are done writing the hashCode method, ask yourself whether equal instances have equal hash codes. If not, figure out why and fix the problem.

This can be translated into Delphi code as follows:

{$IFOPT Q+}
  {$DEFINE OverflowChecksEnabled}
  {$Q-}
{$ENDIF}
function CombinedHash(const Values: array of Integer): Integer;
var
  Value: Integer;
begin
  Result := 17;
  for Value in Values do
  begin
    Result := Result*37 + Value;
  end;
end;
{$IFDEF OverflowChecksEnabled}
  {$Q+}
{$ENDIF}

This then allows the implementation of MyRecordHasher:

function MyRecordHasher(const Value: TMyRecord): Integer;
begin
  Result := CombinedHash([IfThen(Value.b, 0, 1), Value.i]);
end;
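For completeness, a sketch of wiring such a hasher into TEqualityComparer<TMyRecord>.Construct, together with a field-wise equality comparison. It assumes the TMyRecord and MyRecordHasher declared above; the dictionary at the end is just an assumed consumer of the comparer:

```pascal
uses
  System.Generics.Defaults, System.Generics.Collections;

var
  Comparer: IEqualityComparer<TMyRecord>;
  Dict: TDictionary<TMyRecord, string>;
begin
  Comparer := TEqualityComparer<TMyRecord>.Construct(
    function(const Left, Right: TMyRecord): Boolean
    begin
      // field-wise comparison, deliberately ignoring the padding bytes
      Result := (Left.b = Right.b) and (Left.i = Right.i);
    end,
    function(const Value: TMyRecord): Integer
    begin
      Result := MyRecordHasher(Value);
    end);
  // the comparer can now back any hashed container
  Dict := TDictionary<TMyRecord, string>.Create(Comparer);
end;
```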
TVirtualStringTree - resetting non-visual nodes and memory consumption
I have an app that loads records from a binary log file and displays them in a virtual TListView. There are potentially millions of records in a file, and the display can be filtered by the user, so I do not load all of the records in memory at one time, and the ListView item indexes are not in a 1-to-1 relation with the file record offsets (list item 1 may be file record 100, for instance). I use the ListView's OnDataHint event to load records for just the items the ListView is actually interested in. As the user scrolls around, the range specified by OnDataHint changes, allowing me to free records that are not in the new range and allocate new records as needed. This works fine, speed is tolerable, and the memory footprint is very low. I am currently evaluating TVirtualStringTree as a replacement for the TListView, mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straightforward as using a real tree). For the most part, I have been able to port the TListView logic and have everything work as I need. I notice that TVirtualStringTree's virtual paradigm is vastly different, though. It does not have the same kind of OnDataHint functionality that TListView does (I can use the OnScroll event to fake it, which allows my memory buffer logic to continue working), and I can use the OnInitNode event to associate nodes with records that are allocated. However, once a tree node is initialized, it seems that it remains initialized for the lifetime of the tree. That is not good for me. As the user scrolls around and I remove records from memory, I need to reset those non-visual nodes without removing them from the tree completely, or losing their expand/collapse states. When the user scrolls them back into view, I can re-allocate the records and re-initialize the nodes.
Basically, I want to make TVirtualStringTree act as much like TListView as possible, as far as its virtualization is concerned. I have seen that TVirtualStringTree has a ResetNode() method, but I encounter various errors whenever I try to use it, so I must be using it wrong. I also thought of just storing a data pointer inside each node to my record buffers and, as I allocate and free memory, updating those pointers accordingly. The end effect does not work so well, either. Worse, my largest test log file has ~5 million records in it. If I initialize the TVirtualStringTree with that many nodes at one time (when the log display is unfiltered), the tree's internal overhead for its nodes takes up a whopping 260MB of memory (without any records being allocated yet), whereas with the TListView, loading the same log file and all the memory logic behind it, I can get away with using just a few MBs. Any ideas?
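The buffering scheme the question describes (keep in memory only the records inside the currently hinted range, drop the rest, reload on demand) is independent of the control driving it. A rough Python sketch of that idea, with load_record standing in for the binary-file read (class and method names are mine, for illustration only):

```python
class RecordCache:
    """Keep only the records inside the currently hinted range in memory."""

    def __init__(self, load_record):
        self.load_record = load_record   # callback: index -> record data
        self.cache = {}                  # index -> loaded record

    def set_visible_range(self, first, last):
        # called from OnDataHint (or a faked equivalent via OnScroll)
        wanted = set(range(first, last + 1))
        # free records that scrolled out of the hinted range
        for idx in list(self.cache):
            if idx not in wanted:
                del self.cache[idx]
        # load records that scrolled into it (already-cached ones are kept)
        for idx in wanted:
            if idx not in self.cache:
                self.cache[idx] = self.load_record(idx)

    def get(self, idx):
        # returns None for records outside the hinted range
        return self.cache.get(idx)
```

In practice one would widen the hinted range by a margin on each side so that small scroll movements do not immediately trigger reloads, but the core bookkeeping stays the same.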
You probably shouldn't switch to VST unless you have a use for at least some of the nice features of VST that a standard listbox / listview doesn't have. But there is of course a large memory overhead compared to a flat list of items. I don't see a real benefit in using TVirtualStringTree only to be able to expand and collapse items that span multiple lines. You write: "mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straight forward as using a real tree)", but you can implement that easily without changing the item count. If you set the Style of the listbox to lbOwnerDrawVariable and implement the OnMeasureItem event, you can adjust the height as required to draw either only the first line or all lines. Drawing the expander triangle or the little plus symbol of a tree view manually should be easy. The Windows API functions DrawText() or DrawTextEx() can be used both to measure and draw the (optionally word-wrapped) text. Edit: Sorry, I completely missed the fact that you are using a listview right now, not a listbox. Indeed, there is no way to have rows with different heights in a listview, so that's no option. You could still use a listbox with a standard header control on top, but that may not support everything you are using now from the listview functionality, and it may itself be as much or even more work to get right than dynamically showing and hiding listview rows to simulate collapsing and expanding.
If I understand it correctly, the memory requirement of TVirtualStringTree should be: nodecount * (SizeOf(TVirtualNode) + YourNodeDataSize + DWORD-align-padding). To minimize the memory footprint, you could perhaps initialize the nodes with nothing but offsets into a memory-mapped file. Resetting nodes which have already been initialized doesn't seem necessary in this case; the memory footprint should be nodecount * (44 + 4 + 0) - for 5 million records, about 230 MB. IMHO you can't get any better with the tree, but using a memory-mapped file would allow you to read the data directly from the file without allocating even more memory and copying the data into it. You could also consider using a tree structure instead of a flat view to present the data. That way you could initialize the child nodes of a parent node on demand (when the parent node is expanded) and reset the parent node when it's collapsed (thereby freeing all its child nodes). In other words, try not to have too many nodes at the same level.
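The estimate above is easy to sanity-check. Note the 44-byte TVirtualNode size and 4-byte node data size are the figures quoted in the answer for a 32-bit build; they vary by VST version and platform:

```python
NODE_COUNT = 5_000_000
SIZEOF_TVIRTUALNODE = 44   # as quoted above; depends on VST version/platform
NODE_DATA_SIZE = 4         # a single pointer / file offset on 32-bit Delphi
PADDING = 0                # already DWORD-aligned

total_bytes = NODE_COUNT * (SIZEOF_TVIRTUALNODE + NODE_DATA_SIZE + PADDING)
print(total_bytes / (1024 * 1024))   # prints 228.8818359375, i.e. roughly the 230 MB cited
```

This also makes clear why the per-level-on-demand approach in the last paragraph helps: the cost is linear in the number of nodes that exist at any one time, not in the number of records in the file.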
To meet your requirement "to expand/collapse records that span multiple lines", I'd simply use a drawgrid. To check it out, drag a drawgrid onto a form, then plug in the following Delphi 6 code. You can collapse and expand 5,000,000 multiline records (or whatever quantity you want) with essentially no overhead. It's a simple technique, doesn't require much code, and works surprisingly well.

unit Unit1;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs, Grids, StdCtrls;

type
  TForm1 = class(TForm)
    DrawGrid1: TDrawGrid;
    procedure DrawGrid1DrawCell(Sender: TObject; ACol, ARow: Integer;
      Rect: TRect; State: TGridDrawState);
    procedure DrawGrid1SelectCell(Sender: TObject; ACol, ARow: Integer;
      var CanSelect: Boolean);
    procedure DrawGrid1TopLeftChanged(Sender: TObject);
    procedure DrawGrid1DblClick(Sender: TObject);
    procedure FormCreate(Sender: TObject);
  private
    procedure AdjustGrid;
  end;

var
  Form1: TForm1;

implementation

{$R *.dfm}

// Display a large number of multi-line records that can be expanded or
// collapsed, using minimal overhead. LinesInThisRecord() and RecordContents()
// are faked; change them to return actual data.

const
  TOTALRECORDS = 5000000; // arbitrary; a production implementation would probably determine this at run time

// keep track of whether each record is expanded or collapsed
var
  isExpanded: packed array[1..TOTALRECORDS] of boolean; // initially all FALSE

function LinesInThisRecord(const RecNum: integer): integer;
begin
  // how many lines (rows) does the record need to display when expanded?
  result := (RecNum mod 10) + 1; // make something up, so we don't have to use real data just for this demo
end;

function LinesDisplayedForRecord(const RecNum: integer): integer;
begin
  // how many lines (rows) of info are we currently displaying for the given record?
  if isExpanded[RecNum] then
    result := LinesInThisRecord(RecNum) // all lines show when expanded
  else
    result := 1; // show only 1 row when collapsed
end;

procedure GridRowToRecordAndLine(const RowNum: integer; var RecNum, LineNum: integer);
var
  LinesAbove: integer;
begin
  // for a given row number in the drawgrid, return the record and line numbers that appear in that row
  RecNum := Form1.DrawGrid1.TopRow; // for simplicity, TopRow always displays the record with that same number
  if RecNum > TOTALRECORDS then
    RecNum := 0; // avoid overflow
  LinesAbove := 0;
  while (RecNum > 0) and
        ((LinesDisplayedForRecord(RecNum) + LinesAbove) < (RowNum - Form1.DrawGrid1.TopRow + 1)) do
  begin
    // accumulate the tally of lines in expanded or collapsed records until we reach the row of interest
    inc(LinesAbove, LinesDisplayedForRecord(RecNum));
    inc(RecNum);
    if RecNum > TOTALRECORDS then
      RecNum := 0; // avoid overflow
  end;
  LineNum := RowNum - Form1.DrawGrid1.TopRow + 1 - LinesAbove;
end;

function RecordContents(const RowNum: integer): string;
var
  RecNum, LineNum: integer;
begin
  // display the data that goes in the grid row; for now, fake it
  GridRowToRecordAndLine(RowNum, RecNum, LineNum); // convert row number to record and line numbers
  if RecNum = 0 then
    result := '' // out of range
  else
  begin
    result := 'Record ' + IntToStr(RecNum);
    if isExpanded[RecNum] then // show line counts too
      result := result + ' line ' + IntToStr(LineNum) + ' of ' + IntToStr(LinesInThisRecord(RecNum));
  end;
end;

procedure TForm1.AdjustGrid;
begin
  // don't allow scrolling past last record
  if DrawGrid1.TopRow > TOTALRECORDS then
    DrawGrid1.TopRow := TOTALRECORDS;
  if RecordContents(DrawGrid1.Selection.Top) = '' then // move selection back on to a valid cell
    DrawGrid1.Selection := TGridRect(Rect(0, TOTALRECORDS, 0, TOTALRECORDS));
  DrawGrid1.Refresh;
end;

procedure TForm1.DrawGrid1DrawCell(Sender: TObject; ACol, ARow: Integer;
  Rect: TRect; State: TGridDrawState);
var
  s: string;
begin
  // time to draw one of the grid cells
  if ARow = 0 then
    s := 'Data' // we're in the top row, get the heading for the column
  else
    s := RecordContents(ARow); // painting a record, get the data for this cell from the appropriate record
  // draw the data in the cell
  ExtTextOut(DrawGrid1.Canvas.Handle, Rect.Left, Rect.Top,
    ETO_CLIPPED or ETO_OPAQUE, @Rect, pchar(s), length(s), nil);
end;

procedure TForm1.DrawGrid1SelectCell(Sender: TObject; ACol, ARow: Integer;
  var CanSelect: Boolean);
var
  RecNum, ignore: integer;
begin
  GridRowToRecordAndLine(ARow, RecNum, ignore); // convert selected row number to record number
  CanSelect := RecNum <> 0; // don't select unoccupied rows
end;

procedure TForm1.DrawGrid1TopLeftChanged(Sender: TObject);
begin
  AdjustGrid; // keep last page looking good
end;

procedure TForm1.DrawGrid1DblClick(Sender: TObject);
var
  RecNum, ignore, delta: integer;
begin
  // expand or collapse the currently selected record
  GridRowToRecordAndLine(DrawGrid1.Selection.Top, RecNum, ignore); // convert selected row number to record number
  isExpanded[RecNum] := not isExpanded[RecNum]; // mark record as expanded or collapsed; subsequent records might change their position in the grid
  delta := LinesInThisRecord(RecNum) - 1; // amount we grew or shrank (-1 since record already occupied 1 line)
  if not isExpanded[RecNum] then
    delta := -delta; // just shrank; otherwise we just grew
  DrawGrid1.RowCount := DrawGrid1.RowCount + delta; // keep rowcount in sync
  AdjustGrid; // keep last page looking good
end;

procedure TForm1.FormCreate(Sender: TObject);
begin
  Caption := FormatFloat('#,##0 records', TOTALRECORDS);
  DrawGrid1.RowCount := TOTALRECORDS + 1; // +1 for column heading
  DrawGrid1.ColCount := 1;
  DrawGrid1.DefaultColWidth := 300; // arbitrary
  DrawGrid1.DefaultRowHeight := 12; // arbitrary
  DrawGrid1.Options := DrawGrid1.Options - [goVertLine, goHorzLine, goRangeSelect]
    + [goDrawFocusSelected, goThumbTracking]; // change some defaults
end;

end.
You shouldn't use ResetNode, because that method invokes InvalidateNode and initializes the node again, leading to the opposite effect to the one you expect. I don't know if it's possible to induce VST to free the memory specified by NodeDataSize without actually removing the node. But why not set NodeDataSize to SizeOf(Pointer) (see Delphi, VirtualStringTree - classes (objects) instead of records) and manage the data yourself? Just an idea...
Give "DeleteChildren" a try. Here's what this procedure's comment says: // Removes all children and their children from memory without changing the vsHasChildren style by default. I've never used it, but as I read it, you can use it in the OnCollapsed event to free the memory allocated to nodes that have just become invisible, and then re-generate those nodes in OnExpanding so the user never knows the node went away from memory. But I can't be sure; I never had a need for such behaviour.