Inheritance of TCollectionItem - Delphi

I'm planning to have a collection of items stored in a TCollection.
Each item will derive from TBaseItem, which in turn derives from TCollectionItem.
With this in mind, the collection will return TBaseItem when an item is requested.
Now each TBaseItem will have a Calculate function. In TBaseItem itself this will just return an internal variable, but in each of the derivations of TBaseItem the Calculate function requires a different set of parameters.
The collection will have a CalculateAll function which iterates through the collection items and calls each item's Calculate function; obviously it would need to pass the correct parameters to each function.
I can think of three ways of doing this:
1. Create a virtual/abstract Calculate method for each signature in the base class and override it in the derived class. This would mean no type casting is required when using the objects, but it would also mean creating lots of virtual methods and having a large if...else statement detecting the type and calling the correct Calculate method. It also means that calling a Calculate method is prone to error, as you would have to know when writing the code which one to call for which type, with the correct parameters, to avoid an error/EAbstractError.
2. Create a record structure containing all the possible parameters and use this as the parameter for the Calculate function. This has the added benefit that it can be passed to the CalculateAll function, as it can contain all the parameters required and avoids a potentially very long parameter list.
3. Just type cast the TBaseItem to access the correct Calculate method. This would tidy up TBaseItem quite a lot compared to the first option.
What would be the best way to handle this collection?

If they all have different method signatures, then you're not really gaining anything by having virtual methods - they might as well be static. I would be in favor of a "generic"/"canonical" set of parameters as in your case 2, and virtual/overridden Calculate methods, at least based on the description you've given so far.
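To make option 2 concrete, here is a minimal Delphi sketch of that combination. TCalcParams, TRateItem, TBaseItemCollection and the field names are invented for illustration, not taken from your code:

// requires Classes (or System.Classes) in the uses clause
type
  // Hypothetical "canonical" parameter record holding every value any item might need
  TCalcParams = record
    Rate: Double;
    Quantity: Integer;
  end;

  TBaseItem = class(TCollectionItem)
  protected
    FValue: Double;
  public
    // Base implementation just returns the stored value
    function Calculate(const Params: TCalcParams): Double; virtual;
  end;

  // Example derivation that only reads the parameters it cares about
  TRateItem = class(TBaseItem)
  public
    function Calculate(const Params: TCalcParams): Double; override;
  end;

  TBaseItemCollection = class(TCollection)
  public
    procedure CalculateAll(const Params: TCalcParams);
  end;

function TBaseItem.Calculate(const Params: TCalcParams): Double;
begin
  Result := FValue;
end;

function TRateItem.Calculate(const Params: TCalcParams): Double;
begin
  Result := FValue * Params.Rate;
end;

procedure TBaseItemCollection.CalculateAll(const Params: TCalcParams);
var
  I: Integer;
begin
  // Virtual dispatch picks the right override; no type checks or casts to
  // concrete item classes are needed here
  for I := 0 to Count - 1 do
    TBaseItem(Items[I]).Calculate(Params);   // store or aggregate the result as needed
end;

Each derivation reads only the fields of TCalcParams it needs, so CalculateAll stays a single loop and adding a new item type never touches the collection code.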

How the hashCode is calculated

When we write:
"exampleString".hashCode
Is there a general mathematical way, some algorithm that calculates it, or does it come from somewhere else in the Dart language?
And is the hashCode of a String value the same in other languages like Java or C++?
To answer your questions, hashcodes are Dart specific, and perhaps even execution-run specific. To see how the various hashcodes work, see the source code for any given class.
The Fine Manual says:
A hash code is a single integer which represents the state of the object that affects operator == comparisons.
All objects have hash codes. The default hash code implemented by Object represents only the identity of the object, the same way as the default operator == implementation only considers objects equal if they are identical (see identityHashCode).
If operator == is overridden to use the object state instead, the hash code must also be changed to represent that state, otherwise the object cannot be used in hash based data structures like the default Set and Map implementations.
Hash codes must be the same for objects that are equal to each other according to operator ==. The hash code of an object should only change if the object changes in a way that affects equality. There are no further requirements for the hash codes. They need not be consistent between executions of the same program and there are no distribution guarantees.
Objects that are not equal are allowed to have the same hash code. It is even technically allowed that all instances have the same hash code, but if clashes happen too often, it may reduce the efficiency of hash-based data structures like HashSet or HashMap.
If a subclass overrides hashCode, it should override the == operator as well to maintain consistency.
That last point is important. If two objects are considered == by whatever strategy you want, they must also always have the same hashcode. The inverse is not necessarily true.
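As a minimal sketch of that contract in Dart (the Point class is just an example, not anything from the SDK):

class Point {
  final int x;
  final int y;
  const Point(this.x, this.y);

  // Two points with the same coordinates are equal...
  @override
  bool operator ==(Object other) =>
      other is Point && other.x == x && other.y == y;

  // ...so they must also report the same hashCode
  @override
  int get hashCode => Object.hash(x, y);
}

With both overridden consistently, a Set<Point> or Map<Point, V> treats equal points as the same key; override only == and lookups in hash-based collections will silently fail.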

Trying to understand this profiling result from F# code

A quick screenshot with the point of interest:
There are 2 questions here.
This happens in a tight loop. The 12.8% code is this:
{ this with
    Side = side
    PositionPrice = position'
    StopLossPrice = sl'
    TakeProfitPrice = tp'
    Volume = this.Volume + this.Quantity * position' }
This object is passed around a lot and has 23 fields so it's not tiny. It looks like immutability is great for stable code, but it's horrible for performance.
Since this recursive loop is run in parallel, I need to store its context variables in an object.
I am looking for a general idea of what makes sense, not something specific to that code because I have a few tight loops with a bunch of math which I need to profile as well. I am sure I'll find the same pattern in several places.
The flaw here is that I store both the context for the calculations and its variables in a single type that gets passed through the loop. As the variable fields get updated, the whole object has to be recreated.
What would make sense here (in general for this type of situations)?
1. Make the fields that can change mutable. In this case, that means keeping the type as is (23 fields) and making some fields mutable (only 5 fields get regularly changed).
2. Move the mutable fields to their own type, so there is a general context object and one holding all the variables. In this case, that means a context with the remaining 18 fields and a separate 5-field type.
3. Make the mutable fields plain variables and move them out of the type. In this case, these 5 fields would be passed as arguments through the recursive loop.
and for the second question:
I have no idea what the 10.0% line with get_Tag is. I have nothing called 'Tag' in the code, so I assume that's a dotnet internal thing.
I have a type called Side and there is a field with the same name used in the loop, but what is the 'Tag' part?
What I would suggest is not to modify your existing immutable type at all. Instead, create a new type with mutable fields that is only used within your tight loop. If the type leaves that loop, convert it back to your immutable type (assuming you don't need a copy to go through the rest of your program with every iteration).
get_Tag in this case is likely the auto-generated get-only property on a discriminated union; it's just how the F# compiler represents this sort of type in the CLR. The property is most easily seen when looking at F# code from C#; here's a great page on F# decompiled:
https://fsharpforfunandprofit.com/posts/fsharp-decompiled/#unions
For the performance issues I can only offer some suggestions:
If you can constrain the context object to your code only, then try making a mutable version and see what effect it has.
You mention that the context object is quite large; is it possible to split it up?
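To make the "mutable copy inside the loop" idea concrete, here is a rough F# sketch. The Position record and its field names are guesses based on the snippet above (the real type has 23 fields); PositionState, toState, toPosition and step are made-up names:

// Immutable record used by the rest of the program
type Side = Buy | Sell

type Position =
    { Side: Side
      PositionPrice: float
      StopLossPrice: float
      TakeProfitPrice: float
      Volume: float
      Quantity: float }

// Mutable working copy used only inside the hot loop, so each iteration
// updates fields in place instead of allocating a new record
type PositionState =
    { mutable Side: Side
      mutable PositionPrice: float
      mutable StopLossPrice: float
      mutable TakeProfitPrice: float
      mutable Volume: float
      Quantity: float }

let toState (p: Position) : PositionState =
    { PositionState.Side = p.Side; PositionPrice = p.PositionPrice; StopLossPrice = p.StopLossPrice
      TakeProfitPrice = p.TakeProfitPrice; Volume = p.Volume; Quantity = p.Quantity }

let toPosition (s: PositionState) : Position =
    { Position.Side = s.Side; PositionPrice = s.PositionPrice; StopLossPrice = s.StopLossPrice
      TakeProfitPrice = s.TakeProfitPrice; Volume = s.Volume; Quantity = s.Quantity }

// Inside the loop: in-place updates, no per-iteration allocation
let step (s: PositionState) side position' sl' tp' =
    s.Side <- side
    s.PositionPrice <- position'
    s.StopLossPrice <- sl'
    s.TakeProfitPrice <- tp'
    s.Volume <- s.Volume + s.Quantity * position'

Convert with toState when entering the loop and back with toPosition when leaving it, so the rest of the program keeps seeing the immutable type. Since the loop runs in parallel, give each parallel branch its own PositionState so there is no shared mutable state.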

Can I call a spreadsheet function from a custom function in SpreadsheetGear

I am using SpreadsheetGear and have been given a spreadsheet with a VBA macro that I need to convert to a custom function (as SSG can't handle VBA code). The macro calls Excel functions (Match, Index) and I can't see how to call these functions from within my custom function. Any ideas if this is possible and how to do it?
Thanks
You can evaluate Excel formula strings by using the ISheet.EvaluateValue(...) method. Example:
// A1:A3 on "Sheet1" will be summed up.
object val = workbook.Worksheets["Sheet1"].EvaluateValue("SUM(A1:A3)");
// Ensure we actually get a numeric value back
if (val != null && val is double)
{
    double sum = (double)val;
    Console.WriteLine(sum);
}
However, it sounds like you will be using this method inside your custom function, so I should point out that you cannot use this method for any sheet that belongs to any workbook in the IWorkbookSet that is currently being calculated. Please review the rules that are laid out in the Remarks section for the Function.Evaluate(...) method, pasted below, particularly the ones I bolded:
The implementation of this method must follow a number of rules:
The method must be thread safe. Some versions of SpreadsheetGear call this method from multiple threads at the same time.
The method must not use any API in the workbook set which is being calculated except for IArguments.CurrentWorksheet.Name, IArguments.CurrentWorksheet.Index or IArguments.CurrentWorksheet.Workbook.Name.
All access to cells must be through arguments.
Accessing the IArguments indexer has the side effect of converting references to ranges or arrays to a single simple value. Use GetArrayDimensions and GetArrayValue to access the individual values of a range or array. For example, if the first custom function argument is the range A1:C3, setting a watch on arguments[0] in a debugger will convert this range to a single simple value.
No references to interfaces or objects passed to, or acquired during the execution of this method should be used after this method completes execution.
So if you call ISheet.EvaluateValue(...) within your custom function, it would have to be done on some sheet that belongs to an IWorkbookSet other than the one currently being calculated. It would also need to be done in a thread-safe manner, as per the first rule mentioned above.
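As a rough sketch of that approach (heavily hedged: the cell addresses and values are placeholders, the custom-function boilerplate is omitted, and in a real custom function the values would come from the arguments rather than literals):

// Scratch workbook set, separate from the workbook set currently being calculated,
// so evaluating a formula here does not break the rules quoted above.
SpreadsheetGear.IWorkbookSet scratchSet = SpreadsheetGear.Factory.GetWorkbookSet();
SpreadsheetGear.IWorksheet scratch = scratchSet.Workbooks.Add().Worksheets[0];

// Placeholder values; copy in whatever the Match/Index formula needs.
scratch.Cells["A1"].Value = 10.0;
scratch.Cells["A2"].Value = 20.0;
scratch.Cells["A3"].Value = 30.0;

object val = scratch.EvaluateValue("INDEX(A1:A3, MATCH(20, A1:A3, 0))");
if (val is double d)
    Console.WriteLine(d); // 20

Remember the thread-safety rule as well: either create the scratch workbook set per call or guard access to a shared one with a lock.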

String lifetime management, in records

I am working on getting rid of shortstring.
One of the many places shortstring is currently used within our programs is in records.
A lot of these records are kept in AVL trees.
The AVL tree used is a generic one, holding a pointer to a number of bytes (ElemSize), which has worked well so far.
The memory for each record in the AVL tree is allocated with GetMem, and copied with Move.
However, with string being a pointer to a reference-counted structure, copying the memory back to a record no longer works, as the string referenced is often freed (automatically, by reference counting).
With only a pointer and a size for the "data block", I assume it is not possible to have the reference count of the strings increased.
I'm looking for a way to get the reference count of the strings taken into account when storing the record in an AVL tree.
Can I pass the record type to the tree constructor, then cast the pointer to this type and thus get the references increased? Or a similar fix, where I can isolate the changes primarily to the AVL unit and the calls to its constructor.
Current code for allocation of space to store the record in AVL; XData is a pointer to the record to be stored:
New(RootPtr);                         { create new memory space }
GetMem(RootPtr^.TreeData, ElemSize);
WITH RootPtr^ DO BEGIN
  { copy data }
  Move(XData^, RootPtr^.TreeData^, ElemSize);
In essence the question you are asking is:
How can I allocate, copy and deallocate a record when all I know about its type is its size?
The simple answer is that you can use GetMem, Move and FreeMem provided that the record does not contain managed types. You wish to work with records that contain Delphi strings, which are managed. And so your current approach using GetMem and Move does not suffice.
There are plenty of ways to solve this. You could write your own code to do reference counting, so long as you knew where in the record the managed types were. I don't recommend this. You could make your user data be a class and use polymorphism to help.
The option I'd like to discuss continues to support records and indeed allows the user to choose whatever type they like. The reasoning is as follows:
If the type contains managed types, then operating on it requires knowledge of the type. If the tree is to be generic, then it cannot have that knowledge. Ergo, the knowledge must be supplied by the user of the tree.
This leads you to events. Let the tree offer events that the user can supply handlers for. The types would look like this:
type
  PTreeNodeUserData = type Pointer;

  TTreeNodeCreateUserDataEvent = function: PTreeNodeUserData of object;
  TTreeNodeDestroyUserDataEvent = procedure(Data: PTreeNodeUserData) of object;
  TTreeNodeCopyUserDataEvent = procedure(Source, Dest: PTreeNodeUserData) of object;
Then you can arrange for your tree to publish events with these types that the user can subscribe to.
The point being that this allows the user of the tree to supply the missing knowledge about the user data type.
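For illustration, a sketch of what the user-supplied handlers could look like; TMyData, PMyData and TMyDataOwner are hypothetical names, not part of the tree:

type
  TMyData = record
    Name: string;      // managed, reference-counted field
    Value: Integer;
  end;
  PMyData = ^TMyData;

  TMyDataOwner = class
  public
    function CreateUserData: PTreeNodeUserData;
    procedure DestroyUserData(Data: PTreeNodeUserData);
    procedure CopyUserData(Source, Dest: PTreeNodeUserData);
  end;

function TMyDataOwner.CreateUserData: PTreeNodeUserData;
var
  P: PMyData;
begin
  New(P);              // New/Dispose initialise and finalise the managed fields
  Result := PTreeNodeUserData(P);
end;

procedure TMyDataOwner.DestroyUserData(Data: PTreeNodeUserData);
begin
  Dispose(PMyData(Data));
end;

procedure TMyDataOwner.CopyUserData(Source, Dest: PTreeNodeUserData);
begin
  // A plain assignment copies the fields and updates the string reference counts
  PMyData(Dest)^ := PMyData(Source)^;
end;

The tree itself only ever stores and hands back PTreeNodeUserData pointers and fires these events; all knowledge of TMyData stays with its owner.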
One of the main benefits of using records is the simplicity with which they can be copied (without using Move). So your best solution is to simply replace Move with a normal assignment operator :=. This will correctly consider the reference counts for all managed types involved.
Is there a particular reason you're not using the normal assignment operator?
PS: You need to ensure that the memory for all managed types (including long strings) is correctly initialised and finalised. I suggest you do some additional reading on the Initialize and Finalize routines.
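As a sketch of what that looks like when the record type is known (reusing the hypothetical TMyData/PMyData declared in the sketch above, and assuming XData is an untyped pointer as in the question), allocation, copy and clean-up become:

var
  P: PMyData;
begin
  GetMem(P, SizeOf(TMyData));
  Initialize(P^);           { put the managed (string) fields into a defined, empty state }
  P^ := PMyData(XData)^;    { := copies the fields and updates the string reference counts }

  { ... later, when the node is removed: }
  Finalize(P^);             { releases the string references }
  FreeMem(P);
end;

New and Dispose perform the Initialize/Finalize steps for you, which is why the handler sketch above can get away with just New, Dispose and the assignment operator.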
The tree is general, it can hold a given lump of data. I hoped I could extend the functionality without making a new tree class per record.
In that case you need your "copy behaviour" to be variable depending on what it's working with. A couple of options:
If your tree is wrapped in a class you can easily modify it to use a callback event to perform the copy operation. (This option might be easiest even if you first have to work on encapsulating the tree in a class.)
Modify your nodes and/or data to be objects with polymorphic copy functionality. Then each subtype will know how to copy itself correctly, and you can write something along the lines of Root.TreeData := XData.CreateCopy;
If you are working at such a low level, and don't want the compiler to help you, then you need to use PChar strings instead of regular strings.

iOS: Object equality when enumerating

Say I have an NSArray, and each item is an NSDictionary with three keys keyA, keyB, and keyC - each referring to objects of unknown type (id).
If I wanted to write a method that found the given element with those three keys i.e.
-(NSDictionary *) itemThatContainsKeys:(id)objectA and:(id)objectB and:(id)objectC
would I run into trouble by simply enumerating through and testing object equality via if ([[i objectForKey:keyA] isEqual:objectA]) etc.? I would be passing in the actual objects that were set in the dictionary initialization - i.e. not strings with the same value but different locations.
Is this bad practice?
Is there a better way to do this without creating a database?
You can override isEqual: to define what equality means for your type. The same rules apply as in other languages:
If you provide an implementation of isEqual: you should also provide an implementation of hash.
Objects that are equal must have the same hash.
Equality should be transitive: if A equals B and B equals C, then A must equal C.
Equality should be symmetric: if A equals B, then B must equal A.
This will ensure predictable behavior in classes like NSSet that use hash for performance, falling back to isEqual: when there's a collision.
As Jason Whitehorn notes, Objective-C also has the convention of providing another isEqualToMyType method for convenience.
AppCode, EqualsBuilder, Boiler-plate code
It would be nice if there was something like Apache's 'EqualsBuilder' class, but in the meantime AppCode does a fine job of implementing these methods for you.
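A minimal sketch of those rules in practice; the Thing class and its three properties are invented stand-ins for whatever the dictionaries currently hold:

@interface Thing : NSObject
@property (nonatomic, strong) id keyA;
@property (nonatomic, strong) id keyB;
@property (nonatomic, strong) id keyC;
@end

@implementation Thing

- (BOOL)isEqual:(id)object {
    if (self == object) return YES;                   // identical pointers are trivially equal
    if (![object isKindOfClass:[Thing class]]) return NO;
    Thing *other = (Thing *)object;
    // Assumes all three properties are non-nil
    return [self.keyA isEqual:other.keyA] &&
           [self.keyB isEqual:other.keyB] &&
           [self.keyC isEqual:other.keyC];
}

- (NSUInteger)hash {
    // Equal objects must report the same hash
    return [self.keyA hash] ^ [self.keyB hash] ^ [self.keyC hash];
}

@end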
The isEqual: method compares object identity unless overridden by a subclass. Depending on what class the target is, this may or may not be what you want. What I prefer is to use a more class-specific comparison like isEqualToNumber:, simply because of its explicitness. But isEqual: should work, depending on the target.
Other than that, and not knowing more specifics of what you're doing, it's hard to say if there is a better way to accomplish what you're after. But here are my thoughts:
An array of dictionaries almost sounds like you might need a custom class to represent some construct in your app. Perhaps the dictionary could be replaced with a custom object on which you implement an isEqualToAnotherThing: method. This should simplify your logic.
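And, for completeness, a sketch of the lookup method from the question with the messaging brackets fixed up; self.items is an assumed NSArray property holding the dictionaries, and keyA, keyB, keyC are assumed to be the key constants used when the dictionaries were built:

- (NSDictionary *)itemThatContainsKeys:(id)objectA and:(id)objectB and:(id)objectC {
    for (NSDictionary *item in self.items) {
        // With the default isEqual: this is effectively a pointer comparison,
        // unless the stored classes override it.
        if ([[item objectForKey:keyA] isEqual:objectA] &&
            [[item objectForKey:keyB] isEqual:objectB] &&
            [[item objectForKey:keyC] isEqual:objectC]) {
            return item;
        }
    }
    return nil;
}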
