Virtual TListView Item->SubItems->Assign() during OnData triggers a refresh and hence never-ending updates - C++Builder

I'm maintaining an older project using C++Builder 2009.
In a TListView instance (ViewStyle = vsReport) set up to operate virtually (OwnerData = true), I wanted to improve speed as much as possible; every tiny bit helps. In the OnData event I noticed that Item->SubItems->Capacity is 0 to start with, and that it grows in increments of 4 as sub-items are added. I read in the docs that Capacity is read-only, yet I want to avoid TStrings' internal reallocations as much as possible. Since I also need to do caching, I figured I'd use a TStringList as a cache that has already grown to the required capacity. I assume TStrings::Assign() will then immediately allocate an array big enough to store the required number of strings?
Item->SubItems->Assign(Cache.SubItems);
While this works, I noticed it triggers the ListView to call OnData again, and again, and again... it never stops.
It is easily fixed by doing this instead:
for (int x = 0; x < Cache.SubItems->Count; x++)
    Item->SubItems->Add(Cache.SubItems->Strings[x]);
But of course the whole point was to be able to tell SubItems the number of strings from the start.
I realize I may be running into an old VCL issue that has long since been resolved? Or is there a point to this behavior that I don't understand right now?
Is there a way to 'enable' Capacity to accept input, so that it allocates just enough space for the strings that will be added?

The TListView::OnData event is triggered whenever the ListView needs data for a given list item, such as (but not limited to) drawing operations.
Note that when the OnData event is triggered, the TListItem::SubItems has already been Clear()'ed beforehand. TStringList::Clear() sets the Capacity to 0, freeing its current string array. That is why the Count and Capacity are always 0 when entering your OnData handler.
The SubItems property is implemented as a TSubItems object, which derives from TStringList. The TStrings::Capacity property setter is implemented in TStringList, and does what you expect: it pre-allocates the string array. But that is all it does - allocate VCL memory for the array, nothing more. There is still the aspect of updating the ListView subitems themselves at the Win32 API layer, and that has to be done individually as each string is added to the SubItems.
When your OnData handler calls SubItems->Assign(), you are calling TStrings::Assign() (as TStringList and TSubItems do not override it). However, TStrings::Assign() DOES NOT pre-allocate the string array to the size of the source TStrings object, as one would expect (at least, not in CB2009, I don't know if modern versions do or not). Internally, Assign() merely calls Clear() and then TStrings::AddStrings() (which neither TStringList nor TSubItems override to pre-allocate the array). AddStrings() merely calls TStrings::AddObject() in a loop (which both TStringList and TSubItems do override).
All of that clearing and adding logic is wrapped in a pair of TStrings::(Begin|End)Update() calls. This is important to note, because TSubItems reacts to the update counter. When the counter falls to 0, TSubItems triggers TListView to make some internal updates, which includes calling Invalidate() on itself, which triggers a whole repaint, and thus triggering a new series of OnData events for list items that need re-drawing.
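Internally (paraphrasing the CB2009-era VCL, which is written in Delphi; this is a simplified sketch of the behavior described above, not the literal source), Assign() boils down to something like this:
procedure TStrings.Assign(Source: TPersistent);
begin
  if Source is TStrings then
  begin
    BeginUpdate;                     // suspends change notifications
    try
      Clear;                         // for SubItems this drops Count and Capacity back to 0
      AddStrings(TStrings(Source));  // loops, calling AddObject() once per string
    finally
      EndUpdate;                     // counter hits 0 -> TSubItems invalidates the ListView
    end;
  end
  else
    inherited Assign(Source);
end;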
On the other hand, when you call SubItems->Add() in your own manual loop and omit the (Begin|End)Update() calls, you skip the repaint of the whole ListView. TSubItems overrides TStrings::Add()/AddObject() to (among other things) update only the specific ListView item that it is linked to. It does not repaint the whole ListView.
So, you should be able to set the Capacity before entering your manual loop, if you really want to:
Item->SubItems->Capacity = Cache.SubItems->Count;
for (int x = 0; x < Cache.SubItems->Count; ++x)
    Item->SubItems->Add(Cache.SubItems->Strings[x]);
In which case, you can use AddStrings() instead of a manual loop:
Item->SubItems->Capacity = Cache.SubItems->Count;
Item->SubItems->AddStrings(Cache.SubItems);

Related

Are the generic TObjectList operations Delete and Move type-safe for descendants?

I have TKeyframe as a class, and Keyframes: TObjectList<TKeyframe> in a base class TTrack, but in descendants of TTrack, Keyframes contains descendants of TKeyframe with additional fields and appropriate typecasting.
TTrack has methods that call Keyframes.Move and Keyframes.Delete, which generally seem to work properly with the descendants of TKeyframe. Deleting a keyframe that isn't at the end of the list also appears to work, except in one specific situation where one of the additional fields in the next keyframe gets set to a NaN by some subsequent operation that I haven't been able to isolate.
The source of TList<T>.Delete uses System.Move to move what are, for a class, just pointers, so it looks safe to me. So is it safe to use with descendants of T or not?
TList<T>.Move casts the item being moved to T, so it looks dodgier, but I haven't had any problems with it so far.
Yes, it is safe: there is nothing in TObjectList<T> that affects polymorphism, and System.Move just moves memory.
To find out what code sets a field to an unexpected value, I suggest using a data breakpoint.
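As a quick illustration of why this is safe (hypothetical class and procedure names, not the poster's actual code), a TObjectList<TKeyframe> holding descendant instances only ever shuffles references around during Move and Delete; the objects themselves are never copied or truncated:
uses
  Generics.Collections;

type
  TKeyframe = class
  public
    Time: Double;
  end;

  TExtendedKeyframe = class(TKeyframe)
  public
    Easing: Double; // extra field only present in the descendant
  end;

procedure MoveAndDeleteDemo;
var
  Keyframes: TObjectList<TKeyframe>;
begin
  Keyframes := TObjectList<TKeyframe>.Create; // OwnsObjects = True by default
  try
    Keyframes.Add(TExtendedKeyframe.Create);
    Keyframes.Add(TExtendedKeyframe.Create);
    Keyframes.Add(TExtendedKeyframe.Create);

    Keyframes.Move(0, 2); // shuffles references only; no copy, no "slicing"
    Keyframes.Delete(1);  // frees the removed instance and shifts the remaining pointers

    // The survivors are still the original descendant instances
    (Keyframes[0] as TExtendedKeyframe).Easing := 0.5;
  finally
    Keyframes.Free;
  end;
end;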

Alternative to calling the Loaded method explicitly for dynamically created instances of a VCL component in Delphi 6?

I have several custom VCL components that do important tasks in their override of the TComponent Loaded() method. This creates a nuisance when creating instances dynamically, since the Loaded() method is not called by the Delphi global loader at run-time the way it is for components placed on forms/frames at design-time. I also have to place the Loaded override in the public section of the class declaration so whatever code creates an instance of the component can call it. Finally, I have to remember to call Loaded() for dynamically created instances or subtle bugs will creep into the application, a problem that has bitten me several times already.
Is there a better solution or approach to this?
If you need to call Loaded in your code, you're doing it wrong. If you depend on a third-party control that does, then I would fix that person's control. See below for how.
Let me make up a hypothetical example: suppose I had 5 published properties which, once they are all loaded, can generate a complex curve or, even better, a fractal - something that takes a long time.
At design-time I want to preview this curve as soon as it's loaded, but I don't want the curve to be recalculated 5 times during DFM streaming, because each parameter P1 through P5 (type Double) has a setter (SetP1, and so on) that invokes a protected method called Changed, which rebuilds my curve. Instead I have the setters return immediately if csDesigning or csLoading is in the component state, and I then invoke Changed once, from Loaded. Clearly I can't rely on property setter methods alone, in all cases, to invoke all changes. So I need Loaded to tell me when to do the first generation of some expensive work, which I want done exactly once, not N times, where N is the number of DFM properties whose setters invoke a method named Changed or something like that.
In your case, at runtime, you should not be relying on Loaded getting invoked at all. You should instead have your property setters call Changed. If you need some way to change multiple properties at once and then do some expensive thing only once, then implement a TMyComponent.BeginUpdate/TMyComponent.EndUpdate pair of methods, and avoid the extra work.
I can think of NO useful place where doing something from Loaded makes any sense, except for cases like the ones above, which should be specific to design-time and DFM-based use. I would expect a properly designed TComponent or TControl to properly initialize itself merely by being created in code and by having its properties set.
So for my hypothetical TMyFractal component, I would do this when creating it in code, without DFM loading ever being used or Loaded ever being invoked:
cs := TMyFractal.Create(Self);
cs.Parent := Self; {Parent to a form}
cs.Align := alClient;
cs.BeginUpdate;
cs.P1 := 1.03; // does NOT trigger Regenerate
cs.P2 := 2.3;
cs.P3 := 2.4;
cs.P4 := 2.5;
cs.EndUpdate; // triggers the expensive Regenerate method.
cs.Show;
// later someone wants to tweak only one parameter and I don't want to make them
// call regenerate:
cs.P5 := 3.0; // Each param change regenerates the whole curve when not loading or in a beginupdate block.
In my TMyFractal.Changed method, I would invoke the expensive RegenerateCurve method once each time any coefficient P1-P4 is modified at runtime after initial setup, and exactly once when the component is streamed in from the DFM. Loaded is only there to handle the fact that I can hardly expect the DFM loader to do a BeginUpdate/EndUpdate around the streaming the way the code above does.
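A minimal sketch of the component side of this pattern (hypothetical names, only one parameter shown, only the csLoading/update-count checks included; assumes a unit that uses Classes and Controls). Everything funnels through Changed, which stays quiet while streaming or batching:
type
  TMyFractal = class(TCustomControl)
  private
    FP1: Double;
    FUpdateCount: Integer;
    procedure SetP1(const Value: Double);
  protected
    procedure Changed; virtual;
    procedure Loaded; override;
    procedure RegenerateCurve; virtual;
  public
    procedure BeginUpdate;
    procedure EndUpdate;
  published
    property P1: Double read FP1 write SetP1;
    // P2..P5 would follow the same pattern
  end;

procedure TMyFractal.SetP1(const Value: Double);
begin
  FP1 := Value;
  Changed; // every setter funnels through Changed
end;

procedure TMyFractal.Changed;
begin
  // Skip the expensive work while streaming or inside a Begin/EndUpdate block
  if (csLoading in ComponentState) or (FUpdateCount > 0) then
    Exit;
  RegenerateCurve;
end;

procedure TMyFractal.Loaded;
begin
  inherited; // clears csLoading
  Changed;   // all DFM properties are in place now: regenerate exactly once
end;

procedure TMyFractal.RegenerateCurve;
begin
  // expensive recalculation would go here
  Invalidate;
end;

procedure TMyFractal.BeginUpdate;
begin
  Inc(FUpdateCount);
end;

procedure TMyFractal.EndUpdate;
begin
  Dec(FUpdateCount);
  if FUpdateCount = 0 then
    Changed;
end;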

Adding the same Object twice to a TObjectDictionary frees the object

Look at this code:
dic := TObjectDictionary<Integer, TObject>.Create([doOwnsValues]);
testObject := TObject.Create;
dic.AddOrSetValue(1, testObject);
dic.AddOrSetValue(1, testObject);
The code
Creates a Dictionary that owns the contained values
Adds a value
Adds the same value again, using the same key
The surprising thing is that the object is freed when you add it the second time.
Is this intended behaviour? Or a bug in the Delphi libraries?
The documentation simply says "If the object is owned, when the entry is removed from the dictionary, the key and/or value is freed". So it seems a little odd to Free an object that I have just asked it to Add!
Is there any way to tell the TObjectDictionary to not do this? Currently, each time I add a value I have to check first if that Key-Value combination is already in the Dictionary.
Delphi 2010
[EDIT:
After reading all the comments, my conclusions (for what they are worth):
This seems to be the intended behaviour
There is no way of modifying this behaviour
Don't use TObjectDictionary (or any of the other similar classes) for anything other than the common "Add these objects to the container. Leave them there. Do some stuff. Free the container and all the objects you added" usage. If you are doing anything more complicated, it's better to manage the objects yourself.
The behaviour is poorly documented and you should read the source if you want to really know what's going on
[/EDIT]
TObjectDictionary<TKey,TValue> is in fact just a TDictionary<TKey,TValue> which has some extra code in the KeyNotify and ValueNotify methods:
procedure TObjectDictionary<TKey,TValue>.ValueNotify(const Value: TValue;
  Action: TCollectionNotification);
begin
  inherited;
  if (Action = cnRemoved) and (doOwnsValues in FOwnerships) then
    PObject(@Value)^.Free;
end;
This is, IMO, a rather simple-minded approach: in the ValueNotify method it is impossible to tell which key the value belongs to, so it simply frees the "old" value (there is no way to check whether the same value is being set again for the same key).
You can either write your own class (which is not trivial), deriving from TDictionary<K,V>, or simply not use doOwnsValues. You can also write a simple wrapper, e.g. TValueOwningDictionary<K,V> that uses TDictionary<K,V> to do the brunt of the work, but handles the ownership issues itself. I guess I would do the latter.
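A minimal sketch of what such a wrapper might look like (assumes Generics.Collections; TValueOwningDictionary and its members are illustrative, not library code). The only behavioral difference from TObjectDictionary is that the previous value is freed only when it is a different object:
uses
  Generics.Collections;

type
  TValueOwningDictionary<TKey; TValue: class> = class
  private
    FDict: TDictionary<TKey, TValue>;
  public
    constructor Create;
    destructor Destroy; override;
    procedure AddOrSetValue(const Key: TKey; const Value: TValue);
    function TryGetValue(const Key: TKey; out Value: TValue): Boolean;
    procedure Remove(const Key: TKey);
  end;

constructor TValueOwningDictionary<TKey, TValue>.Create;
begin
  inherited Create;
  FDict := TDictionary<TKey, TValue>.Create;
end;

destructor TValueOwningDictionary<TKey, TValue>.Destroy;
var
  Value: TValue;
begin
  for Value in FDict.Values do
    Value.Free;
  FDict.Free;
  inherited;
end;

procedure TValueOwningDictionary<TKey, TValue>.AddOrSetValue(const Key: TKey; const Value: TValue);
var
  Old: TValue;
  OldObj, NewObj: TObject;
begin
  if FDict.TryGetValue(Key, Old) then
  begin
    OldObj := Old;   // the class constraint makes TValue assignable to TObject
    NewObj := Value;
    // Free the previous value only if it is a different object, so that
    // re-adding the same instance under the same key is harmless.
    if OldObj <> NewObj then
      OldObj.Free;
  end;
  FDict.AddOrSetValue(Key, Value);
end;

function TValueOwningDictionary<TKey, TValue>.TryGetValue(const Key: TKey; out Value: TValue): Boolean;
begin
  Result := FDict.TryGetValue(Key, Value);
end;

procedure TValueOwningDictionary<TKey, TValue>.Remove(const Key: TKey);
var
  Old: TValue;
begin
  if FDict.TryGetValue(Key, Old) then
  begin
    FDict.Remove(Key);
    Old.Free;
  end;
end;
With a wrapper like this, the two AddOrSetValue(1, testObject) calls from the question would leave testObject alive.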
That's because by reusing the key you're replacing the object, and since the dictionary owns its objects, it frees the old one. The dictionary doesn't compare the values, only the keys, so it doesn't detect that the value (object) is the same. Not a bug; it works as designed (in other words, user error).
On second thought, perhaps the designer of the dictionary should have taken more care over the combination of doOwnsValues and AddOrSetValue()... one can argue both ways. I suggest you file it in QC, but I wouldn't hold my breath - it has been this way for at least two releases now, so it's unlikely to change.
This behaviour is by design and the design is sound.
Were the class to take responsibility for not freeing duplicates, it would have to iterate over the whole container every time a modification was made, both adding and removing. The iteration would check for any duplicate values and act accordingly.
It would be disastrous to impose this diabolical performance drain on all users of the class. If you wish to put duplicates in the container then you will have to come up with a bespoke lifetime management policy that suits your specific needs. In this case it is unreasonable to expect the general-purpose container to support your particular usage pattern.
In the comments to this answer, and many of the others, it has been suggested that a better design would have been to test in AddOrSetValue whether or not the value being set was already assigned to the specified key. If so, then AddOrSetValue could return immediately.
I think it's clear to anyone that checking for duplicates in full generality is too expensive to contemplate. However, I contend that there are good design reasons why checking for duplicate K and V in AddOrSetValue would also be poor design.
Remember that TObjectDictionary<K,V> is derived from TDictionary<K,V>. For the more general class, comparing equality of V is potentially an expensive operation because we have no constraints on what V is, it being generic. So for TDictionary<K,V> there are performance reasons why we should not include the putative AddOrSetValue test.
It could be argued that we make a special exception for TObjectDictionary<K,V>. That would certainly be possible. It would require a little re-engineering of the coupling between the two classes, but it is quite feasible. But now you have a situation where TDictionary<K,V> and TObjectDictionary<K,V> have different semantics. This is a clear downside and must be weighed against the potential benefit from the AddOrSetValue test.
These generic container classes are so fundamental that design decisions have to take into account a huge spread of use cases, consistency considerations and so on. It is not, in my view, reasonable to consider TObjectDictionary<K,V>.AddOrSetValue in isolation.
Since the Delphi TDictionary implementation doesn't allow duplicate keys, you could check out the excellent generic Collections library from Alex Ciobanu. It comes with a TMultiMap, or for your case a TObjectMultiMap, that allows multiple values per key.
Edit:
If you don't want multiple values per key, but rather want to avoid adding duplicates to the Dictionary then you can try TDistinctMultiMap or a TObjectDistinctMultiMap from the same Collections library.
So it seems a little odd to Free an object that I have just asked it to Add!
You didn't ask the dictionary to add - you called AddOrSetValue, and since the key was already found, your call was a 'set', not an 'add'. Regardless, I see nothing odd here in terms of Delphi's behavior: in Delphi, objects are only object references, and there is no reference counting or ownership for simple objects.
Since in this case the dictionary owns the objects, it is doing exactly what it's supposed to do: "If the object is owned, when the entry is removed from the dictionary, the key and/or value is freed". You removed the value when you overwrote entry[1] - therefore the object referred to in 'testObject' is immediately deleted and your reference to 'testObject' is invalid.
Currently, each time I add a value I have to check first if it's already in the Dictionary.
Why is that? The behavior you described should only occur if you overwrite a previously used key with a reference to the same object.
Edit:
Perhaps there is something 'odd' after all - try this test code:
procedure testObjectList;
var
  ol: TObjectList;
  o, o1: TObject;
begin
  ol := TObjectList.Create;
  ol.OwnsObjects := True; // default behavior, not really necessary
  try
    o := TObject.Create;
    ol.Add(o);
    ol[0] := o;
    ShowMessage(o.ClassName); // no AV - although ol[0] is overwritten with o again, o is not deleted
    o1 := TObject.Create;
    ol[0] := o1;
    ShowMessage(o.ClassName); // AV - when o is overwritten with o1, o is deleted
  finally
    ol.Free;
  end;
end;
This is in spite of what it says in the (Delphi 7) help: "TObjectList controls the memory of its objects, freeing an object when its index is reassigned".
I think it is a bug.
I ran into it a week ago.
I use TObjectDictionary for storing some real-time telemetry data, which is updated very frequently with new values.
for example:
type
  TTag = class
    updatetime: TDateTime;
    Value: string;
  end;

TTagDictionary := TObjectDictionary<string, TTag>.Create([doOwnsValues]);

procedure UpdateTags(key: string; newValue: string);
var
  tag: TTag;
begin
  if TTagDictionary.TryGetValue(key, tag) then
  begin
    // update the stored tag
    tag.Value := newValue;
    tag.updatetime := Now;
    TTagDictionary.AddOrSetValue(key, tag);
  end
  else
  begin
    tag := TTag.Create;
    tag.updatetime := Now;
    tag.Value := newValue;
    TTagDictionary.AddOrSetValue(key, tag);
  end;
end;
After several updates I ended up with some nasty access violations and a dictionary full of freed objects.
It is a very poorly designed container.
On an update it should check whether the new object is the same as the old one, and if it is, it must NOT free the object.
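For what it's worth, with the stock TObjectDictionary the usual way around this is simply not to call AddOrSetValue when the key is already present: the stored object is the very instance you just modified, so there is nothing to re-add. A sketch of that adjustment, using the same (hypothetical) TTag/TTagDictionary names as above:
procedure UpdateTags(key: string; newValue: string);
var
  tag: TTag;
begin
  if TTagDictionary.TryGetValue(key, tag) then
  begin
    // The dictionary already owns this instance under this key;
    // mutate it in place and do NOT call AddOrSetValue again.
    tag.Value := newValue;
    tag.updatetime := Now;
  end
  else
  begin
    tag := TTag.Create;
    tag.updatetime := Now;
    tag.Value := newValue;
    TTagDictionary.Add(key, tag);
  end;
end;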

Stop client code from freeing shared objects in Delphi

I have implemented the FlyWeight pattern in my Delphi application. Everything has worked great, everything is a lot faster and takes less memory, but there is one thing I am worried about.
My implementation will only work as long as client code never calls Free() on the shared objects. In the Flyweight pattern, the FlyweightFactory itself is supposed to "maintain a reference to flyweights" i.e. to the shared objects.
My problem is that there is no (obvious) way to stop other code from destroying the objects once they have a reference. I could live with this, but it would be a "big win" if I could pass these objects round freely without worrying about accidental freeing.
To show a (contrived) example:
flyweight1 := FlyweightFactory.GetFlyweight(42);
WriteLn('Description is ' + flyweight1.Description);
flyweight1.Free;
flyweight2 := FlyweightFactory.GetFlyweight(42);
WriteLn('Description is ' + flyweight2.Description);
// Object has already been Freed! Behaviour is undefined
I have considered overriding the destructor as shown here to stop the flyweight object being freed altogether. This is not an option in my case as
a) I only want to stop cached objects from being Freed, not objects that aren't part of the cache. There is a lot of legacy code that doesn't use the cache; they still need to create and free objects manually.
b) I do want the FlyweightFactory to Free the objects during finalization; I agree with Warren P that a "zero leaked memory" policy is best.
I'll leave with a quote from the Flyweight chapter of GoF
Sharability implies some form of reference counting or garbage collection to reclaim storage when it's no longer needed. However, neither is necessary if the number of flyweights is fixed and small. In that case, the flyweights are worth keeping around permanently.
In my case the flyweights are "fixed" and (sufficiently) small.
[UPDATE See my answer for details of how I solved this problem]
My answer to the question you link to still applies. The objects must know by means of a private boolean flag that they are cached objects. Then they can elect not to destroy themselves in Destroy and FreeInstance. There really is no alternative if you want to allow Free to be called.
To deal with finalization you would want to add the cached objects to a list of cached objects. That list of objects can be freed at finalization time. Of course the flag to disable freeing would have to be reset whilst you walked the list.
Having made this point regarding finalization, I would advise you to register an expected memory leak and just leak this memory. It makes the code much simpler and there's nothing to lose. Any memory you don't free will be reclaimed by the OS as soon as your executable closes. One word of caution: if your code is compiled into a DLL then leaking could be troublesome if your DLL is loaded, unloaded, loaded again etc.
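A minimal sketch of that suggestion, assuming the memory manager in use provides RegisterExpectedMemoryLeak (the RTL from Delphi 2006 onwards, or FastMM4), and assuming the factory keeps its cached instances in a hypothetical FCache list (any memory owned by the flyweights themselves would need the same treatment):
// Called from the cache unit's finalization instead of freeing the flyweights:
// the shared instances are deliberately leaked and reported as expected.
procedure TFlyweightFactory.MarkCacheAsExpectedLeaks;
var
  I: Integer;
begin
  for I := 0 to FCache.Count - 1 do
    RegisterExpectedMemoryLeak(Pointer(FCache[I]));
end;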
What all this is telling you is that you are swimming against the current. Is it possible that you could achieve your goals with a different solution that fitted better with the way Delphi is steering you?
I suggest adding a reference count so that you know whether your shared object is still in use.
Every client should use the AddRef / Release pattern (AddRef increases the count; Release decrements it; when the count reaches zero, Free is called).
AddRef may be called directly by your GetFlyweight method; Release has to be used instead of Free.
If you refactor your class and extract an interface from it, the AddRef/Release pattern is naturally implemented by the interface implementation. (You could derive from TInterfacedObject or implement IInterface yourself.)
Ideally you don't want two ways of using the same thing; it just complicates matters in the long run. In six months' time, you might not be sure whether a particular piece of code is using the new flyweight paradigm or the old one.
The best way to prevent someone calling Free or Destroy is to make sure it's not even there. And within the Delphi world, the only way to do that is to use interfaces.
To expand on your contrived example:
type
  TFlyweightObject = class
  public
    constructor Create(ANumber: Integer);
    function Description: string;
  end;

  TFlyweightFactory = class
  public
    function GetFlyweight(ANumber: Integer): TFlyweightObject;
  end;
Being a plain object, this can easily be destroyed by a rogue client. You could make the following changes:
type
  IFlyweight = interface
    //place guid here
    function Description: string;
  end;

  TFlyweightObject = class(TInterfacedObject, IFlyweight)
  public
    constructor Create(ANumber: Integer);
    function Description: string;
  end;

  TFlyweightFactory = class
  public
    function GetFlyweight(ANumber: Integer): IFlyweight;
  end;
Now any code that is updated to use the flyweight paradigm is forced to use it as intended. It's also easier to recognise the old code that still needs to be refactored because it doesn't use the interface. Old code would still construct the "flyweight" object directly.
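Hypothetical client code under this design (Factory being an instance of the TFlyweightFactory above): there is simply no Free to call on an interface reference, and as long as the factory keeps its own IFlyweight reference to each cached instance, the reference count never drops to zero:
var
  Flyweight: IFlyweight;
begin
  Flyweight := Factory.GetFlyweight(42);
  WriteLn('Description is ' + Flyweight.Description);
  // Flyweight.Free; // does not compile: IFlyweight exposes no Free
end; // the local reference is released here; the shared object lives on in the factory's cache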
You could also hide the destructor by making it protected or private; programmers won't see it outside the scope of the unit in which it is declared.
But I am posting this answer more as a curiosity, because this will not prevent the object from being freed via FreeAndNil or via a "protected hack".
I managed to get around the problems I cited in my original question using the following techniques, suggested by David Heffernan in his answer.
a) I only want to stop cached objects from being Freed, not objects that aren't part of the cache. There is a lot of legacy code that doesn't use the cache; they still need to create and free objects manually.
I fixed this by subclassing the Flyweight class and overriding Destroy, BeforeDestruction and FreeInstance in the subclass only. This left the parent class as is. The cache contains instances of the subclass (which can't be freed), whereas objects outside the cache can be freed as usual.
b) I do want the FlyweightFactory to Free the objects during finalization; I agree with Warren P that a "zero leaked memory" policy is best.
To solve this, I added a private boolean flag that has to be set to true before the object can be freed. This flag can only be set from the cache unit; it is not visible to other code, so it cannot be set by code outside the cache.
The destructor just looks like this:
destructor TCachedItem.Destroy;
begin
  if destroyAllowed then
    inherited;
end;
If client code tries to Free a cached object, the call will have no effect.
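For completeness, a sketch of how the other overrides mentioned above might look (the names and parent class are illustrative). FreeInstance is the important one, because the compiler calls it at the end of the outermost destructor even when the destructor body does nothing, and it is what actually releases the instance memory:
type
  TCachedItem = class(TFlyweightItem) // hypothetical parent flyweight class
  private
    destroyAllowed: Boolean; // set to True only by the cache unit during finalization
  public
    procedure BeforeDestruction; override;
    procedure FreeInstance; override;
    destructor Destroy; override; // guarded as shown above
  end;

procedure TCachedItem.BeforeDestruction;
begin
  if destroyAllowed then
    inherited;
end;

procedure TCachedItem.FreeInstance;
begin
  // This is what actually releases the instance memory, so guard it as well.
  if destroyAllowed then
    inherited;
end;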

Delphi 6 OleServer.pas Invoke memory leak

There's a bug in Delphi 6 (you can find some references to it online): when you import a TLB, the order of the parameters in an event invocation is reversed. It is reversed once in the imported header and once in TServerEventDispatch.Invoke.
You can find more information about it here:
http://cc.embarcadero.com/Item/16496
Somewhat related to this issue, there appears to be a memory leak in TServerEventDispatch.Invoke with a parameter that is a Variant holding a VarArray (maybe others, but this is the most obvious one I could see). The Invoke code copies the args into a VarArray to be passed to the event handler and then copies the VarArray back into the args after the call; the relevant code is pasted below:
// Set our array to appropriate length
SetLength(VarArray, ParamCount);
// Copy over data
for I := Low(VarArray) to High(VarArray) do
  VarArray[I] := OleVariant(TDispParams(Params).rgvarg^[I]);
// Invoke Server proxy class
if FServer <> nil then FServer.InvokeEvent(DispID, VarArray);
// Copy data back
for I := Low(VarArray) to High(VarArray) do
  OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];
// Clean array
SetLength(VarArray, 0);
There are some obvious workarounds in my case: if I skip the copying back for a VarArray parameter, it fixes the leak. To avoid changing the functionality, I thought I should copy the data in the array (rather than the Variant itself) back to the params, but that can get complicated, since the array can hold other Variants and it seems that would need to be done recursively.
Since a change in OleServer will have a ripple effect, I want to make sure my change here is strictly correct.
Can anyone shed some light on exactly why memory is being leaked here? I can't seem to look any further down the call stack than TServerEventDispatch.Invoke (why is that?).
I imagine that in the process of copying the Variant holding the VarArray back to the param list, a reference to the array is added, preventing it from being released as normal, but that's just a rough guess and I can't track down the code to back it up.
Maybe someone with a better understanding of all this could shed some light?
Interestingly enough, I think the solution was in the link I provided in the question, but I didn't understand the implication until digging into it a bit more.
A few things to clarify:
When a Variant containing an array is assigned from the VarArray back to the Params, a copy is made. This is explained in the Delphi help pages.
Assigning over an existing variant will definitely free the memory associated with the previous value of the Variant, so the array contained by the variant prior to the assignment would have been freed on assignment.
VarClear will free the memory associated with the Variant, and tests show that a VarClear on the Variant held in the Params after the assignment does in fact remove the memory leak.
It appears that the issue has to do with the indiscriminate write-back of param values. The particular event I'm dealing with does not have any parameters marked as var, so the COM object is not expecting changes to the invocation params and will not free any new memory that gets allocated into them.
Roughly: the COM object allocates an array, invokes the event, and then frees its own memory after the event. The OleServer, however, allocates some new memory when it copies the array param back into the param list, which the COM object wouldn't even know about, since it didn't pass anything by reference and is not expecting changes to its params. There must be some additional marshalling magic there that I'm neglecting; if anyone knows the details, I'd definitely be curious.
The TVariantArg's vt field has a flag indicating whether it is passed by value or by reference. As far as I can discern, we should only be copying the value back if the param is marked as being passed by reference.
Furthermore, it may be necessary to do more than just assign the Variant if the param is in fact passed by reference, although that might be taken care of by the marshalling; I'm still not sure about that part.
The solution for now is to change the copy-back loop so that it only writes back params that are passed by reference:
// Copy data back (only for params passed by reference)
for I := Low(VarArray) to High(VarArray) do
  if (TDispParams(Params).rgvarg^[I].vt and VT_BYREF) <> 0 then
    OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];
