Why does this RTTI optimization make things slower?

I've got an operation that's being called repeatedly in a loop. With a TRttiField:
if (field.Name = '') or (field.Name[1] <> 'F') then
  continue;
Profiling shows that I'm spending a lot of time in UStrAsg and UStrClr because of this. Field.Name has to make a virtual call to TRttiInstanceFieldEx.GetName, which has to perform a UTF8-to-string conversion on the underlying RTTI structure's name. This is happening twice per loop iteration.
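Simply caching the property in a local string halves the conversions but doesn't eliminate them (fieldName here is just a local string variable):
fieldName := field.Name; // one conversion per iteration instead of two
if (fieldName = '') or (fieldName[1] <> 'F') then
  continue;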
I tried to cut all that out by bypassing the string conversions:
handle := PFieldExEntry(field.Handle);
if (handle.Name = '') or (handle.Name[1] <> 'F') then
  continue;
I expected to see about a 5% speed gain from this. Instead, the loop takes several seconds longer to execute, approximately 20-25% slower! I checked the generated ASM to make sure it wasn't doing anything screwy like making string copies from the RTTI structures to the local stack, but it isn't. I can't see any reason why this should have gotten slower. Anyone have any idea what might be going on here?

The field your new code reads is declared as a ShortString. As of Delphi 5, the compiler converts ShortStrings to long strings before generating the code for most string operations. (At least, that's the way it was with non-Unicode Delphi. Maybe Unicode Delphi restores some ShortString-related optimizations.)
Whereas the TRttiField wrapper might take advantage of the knowledge that it's populating a UTF-8 string with data that's already occupying one byte per character, I'd expect the ShortString-to-string code that your new loop employs might use a more general-purpose conversion routine, and you're paying the price for generality.
You might try foregoing string-conversion operations entirely. Instead, get a pointer to the first byte:
handle := PFieldExEntry(field.Handle);
NameP := PAnsiChar(@handle.Name);
if (NameP[0] = #0) or (NameP[1] <> 'F') then
  continue;
Note that although it's declared as a ShortString, it's not really one. It doesn't really occupy 256 bytes. Instead, it occupies the minimum amount of memory required to hold its length byte and its characters.

Related

creating a fast case statement (assembly)

I have a project that makes intensive use of the case statement, with many procedures called from it. I know you can place case statements in a two-tier arrangement: divide them into blocks of 10, with a second case statement to select the individual procedure. But I have a better idea, if I can pull it off.
I want to call it an assembly case.
Prolist: array [1..500] of Pointer =
  (@Procedure1, @Procedure2, @Procedure3, @Procedure4, @Procedure5);
Procedure ASMCase(Prolist: array of Pointer; No: Word; Var InRange: Boolean);
var
  Count: DWord;
  PTR: Pointer;
  Pro: Procedure;
begin
  Count := No * 4;
  InRange := boolean(Count <= SizeOf(Prolist));
  If not InRange then Exit;
  PTR := Pointer(DWord(@Prolist[1]) + Count);
  If PTR <> nil then Pro := @PTR else Exit;
  Pro; // run procedure
end;
The point is I'm creating a direct jump to the procedure.
In my case the procedures can have identical headers, and global data can be accessed for any odd information. Writing it in assembly would be faster, I think, but what I'm not sure about is running the procedures. Please do not ask why I'm doing this: I have 500 procedures with many calls through the case statement, and time is of the essence, even with a fast processor.
It's expensive to pass that array by value. Pass it by const.
I can't see the point of the InRange flag and test. Don't pass out of range indices. And if you have to test, do it right. Don't use SizeOf which measures byte size. Use high or perhaps Length, if you have to. Which I doubt.
The pointer assignment test (PTR <> nil) is bogus. That condition always evaluates true. And the array indexing is very weird. What's wrong with []?
On top of that, your array is 1-based (usually a bad choice) but open arrays are always 0-based. Likely that's going to trip you up.
In short, I'd throw away all of that code. It's both wrong and needless. I'd just write it like this:
ProList[No]();
In order for this to compile your array would need to be defined as an array of procedural type rather than array of Pointer. Adding some type safety would be a good move.
It's pretty hard to see asm making much difference here. The compiler is going to emit optimal code.
If you are concerned with out of bound access, enable range checking in debug mode. Disable it for release if performance is paramount.
Bear in mind that global data structures don't tend to scale well as you add complexity. Most experienced programmers go to some length to avoid global state. Are you sure that global state is the right choice for you?
If you do need to improve performance, first identify opportunity for improvement. Reading from an array and calling a function are not likely candidates. Look at the procedures that you call. The bottlenecks are surely there.
One final point. Try to forget that you ever learned to use @ with function pointers. Doing so yields an untyped pointer, of type Pointer, that can be assigned to any pointer type. And thus you completely abandon type checking. Your procedure could have the wrong signature altogether and the compiler would not be able to tell you. Declare your array of procedures with a type-safe procedural type.
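For illustration, a minimal type-safe sketch, assuming all 500 procedures share a parameterless signature (the procedure names come from the question):
type
  TSimpleProc = procedure; // the common signature; adjust if the real procedures take parameters

const
  ProList: array [0..4] of TSimpleProc = (Procedure1, Procedure2, Procedure3, Procedure4, Procedure5);

procedure Dispatch(No: Integer);
begin
  ProList[No](); // with range checking ($R+) enabled, a bad index raises ERangeError
end;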

Why can't I use an Int64 in a for loop?

I can write a for..do loop with an Integer counter,
but I can't write one with an Int64 counter.
For example:
var
i:int64;
begin
for i:=1 to 1000 do
end;
The compiler refuses to compile this, why does it refuse?
The Delphi compiler simply does not support Int64 loop counters yet.
Loop counters in a for loop have to be integers (or smaller).
This is an optimization to speed up the execution of a for loop.
Internally Delphi always uses an Int32, because on x86 this is the fastest datatype available.
This is documented somewhere deep in the manual, but I don't have a link handy right now.
If you must have a 64 bit loop counter, use a while..do or repeat..until loop.
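For example, a minimal while..do equivalent of the loop in the question:
var
  i: Int64;
begin
  i := 1;
  while i <= 1000 do
  begin
    // loop body
    Inc(i);
  end;
end;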
Even if the compiler did allow "int64" in a Delphi 7 for-loop (Delphi 7???), it probably wouldn't complete iterating through the full range until sometime after the heat death of the Sun.
So why can't you just use an "integer"?
If you must use an int64 value ... then simply use a "while" loop instead.
Problem solved :)
Why use an Int64 in a for loop?
Easy to answer:
You don't need a huge number of iterations to need an Int64; just loop from 5E9 to 5E9+2 (three iterations in total).
It is just that the values being iterated over are bigger than an Int32 can hold.
An example:
procedure Why_Int64_Would_Be_Great_On_For_Loop;
const
  StartValue = 5000000000; // Start from 5E9, five thousand million
  Quantity = 10;           // Do it ten times
var
  Index: Int64;
begin
  for Index := StartValue to StartValue + Quantity - 1 do
  begin
    // Do something really fast (only ten times)
  end;
end;
That code would take no time at all; it is just that the index values need to be beyond the 32-bit integer limit.
The solution is to do it with a while loop:
procedure Equivalent_For_Loop_With_Int64_Index;
const
  StartValue = 5000000000; // Start from 5E9, five thousand million
  Quantity = 10;           // Do it ten times
var
  Index: Int64;
begin
  Index := StartValue;
  while Index <= StartValue + Quantity - 1 do
  begin
    // Do something really fast (only ten times)
    Inc(Index);
  end;
end;
So why does the compiler refuse to compile the for loop? I see no real reason: any for loop can be automatically translated into a while loop, and the compiler could perform that transformation itself, just like the other optimizations it already does. The only reason I can see is that the people who wrote the compiler never thought of it.
If for is optimized so that it can only use a 32-bit index, then code that uses a 64-bit index cannot get that optimization; but why not let the compiler do the conversion for us? As it stands it just looks bad to programmers.
I do not want to make anyone angry; I am only pointing out something obvious.
By the way, not everyone starts a for loop at zero (or one); sometimes you need to start it at really huge values.
It is always said that if you need to do something a fixed number of times, you are better off with a for loop than a while loop.
One more observation: the two versions, the for loop and the while loop that uses Inc(Index), are equally fast. Writing the step as Index := Index + 1 looks like it should be slower, but in practice it is not, because the optimizer spots it and uses Inc(Index) instead. You can see this by doing the following:
// Start the loop from zero, but first do some arithmetic so the optimizer
// cannot convert Index := Index + Step into Inc(Index, Step), or better, Inc(Index):
Index := 2;
Step := Index - 1;  // Do not write Step := 1; or the optimizer will do the conversion to Inc()
Index := Step - 1;  // Now fix the start, so the loop begins at zero
while Index < 1000000 do // 1E6, one million iterations, from 0 to 999999
begin
  // Do something
  Index := Index + Step; // The optimizer will not change this into Inc(Index),
                         // since it sees that Step's value was changed beforehand
end;
The optimizer can see that a variable never changes its value, so it can treat it as a constant; then, for an increment assignment that adds a constant (variable := variable + constant), it will optimize it to Inc(variable, constant), and when it sees that the constant is 1 it will further optimize it to Inc(variable). Such optimizations are very noticeable at the machine-code level.
In low-level terms:
A plain add (variable := variable1 + variable2) implies two memory reads plus one addition plus one memory write: a lot of work.
But if it is (variable := variable + othervariable), it can be optimized by holding variable in the processor cache.
Likewise, (variable := variable1 + constant) can be optimized by holding the constant in the processor cache.
And if it is (variable := variable + constant), both are held in the processor cache, which is hugely faster than the other options because no access to RAM is needed.
On top of that, the optimizer performs another important optimization: for-loop index variables are held in processor registers, which are much faster still than the processor cache.
Most modern processors do an extra optimization as well, at the hardware level inside the processor: cache areas (32-bit variables, for us) that are seen to be used intensively are kept in special registers to speed up access, and for-loop and while-loop indexes are among them. As far as I know, modern AMD processors (the ones that use MP technology) do this, but I do not know of any Intel processor that does. Such an optimization matters more for multi-core and supercomputing work, so maybe that is why AMD has it and Intel does not.
I only wanted to show one "why"; there are plenty more. Another could be as simple as the index being stored in a database Int64 field, etc. There are a lot of reasons I know of, and a lot more I have not met yet.
I hope this helps to show the need to run a loop over an Int64 index, and also how to do it without losing speed, by converting the for loop into an efficient while loop.
Note: for x86 compilation (not 64-bit compilation), beware that an Int64 is managed internally as two Int32 parts, so modifying values takes extra code. For adds and subtracts the overhead is very small, but for multiplications or divisions it is noticeable. Still, if you really need Int64 you need it, so what else can you do? And imagine if you needed float or double, etc.!

TStringList of objects taking up tons of memory in Delphi XE

I'm working on a simulation program.
One of the first things the program does is read in a huge file (28 MB, about 79,000 lines), parse each line (about 150 fields), create an object for it, and add it to a TStringList.
It also reads in another file, which adds more objects during the run. By the end there are about 85,000 objects.
I was working with Delphi 2007, and the program used a lot of memory, but it ran OK. I upgraded to Delphi XE and migrated the program over, and now it uses a LOT more memory and runs out of memory halfway through the run.
So in Delphi 2007 it would end up using 1.4 GB after reading in the initial file, which is obviously a huge amount, but in XE it ends up using almost 1.8 GB, which is really huge and leads to running out of memory and getting the error.
So my questions are:
Why is it using so much memory?
Why is it using so much more memory in XE than 2007?
What can I do about this? I can't change how big or long the file is, and I do need to create an object for each line and to store it somewhere
Thanks
Just one idea which may save memory.
You could let the data stay on the original files, then just point to them from in-memory structures.
For instance, it's what we do for browsing big log files almost instantly: we memory-map the log file content, then parse it quickly to create indexes of useful information in memory, then read the content dynamically. No string is created during the reading: only pointers to each line beginning, together with dynamic arrays containing the needed indexes. Calling TStringList.LoadFromFile would be definitely much slower and more memory consuming.
The code is here - see the TSynLogFile class. The trick is to read the file only once, and make all indexes on the fly.
For instance, here is how we retrieve a line of text from the UTF-8 file content:
function TMemoryMapText.GetString(aIndex: integer): string;
begin
  if (self = nil) or (cardinal(aIndex) >= cardinal(fCount)) then
    result := ''
  else
    result := UTF8DecodeToString(fLines[aIndex], GetLineSize(fLines[aIndex], fMapEnd));
end;
We use the exact same trick to parse JSON content. Such a mixed approach is also used by the fastest XML access libraries.
To handle your high-level data, and query it fast, you may try to use dynamic arrays of records, and our optimized TDynArray and TDynArrayHashed wrappers (in the same unit). Arrays of records will consume less memory, and will be faster to search because the data won't be fragmented (even faster if you use ordered indexes or hashes), and you'll still have high-level access to the content (you can define custom functions to retrieve the data from the memory-mapped file, for instance). Dynamic arrays are not suited to fast deletion of items (or you'll have to use lookup tables), but you wrote that you are not deleting much data, so it won't be a problem in your case.
So you won't have any duplicated structure any more, only the logic in RAM and the data in memory-mapped file(s). I added the "(s)" because the same logic could perfectly well map several source data files (you need some "merge" and "live refresh", AFAIK).
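As a rough illustration of the array-of-records idea (the field names here are invented; the real data has about 150 fields per line):
type
  TLineRec = record
    Id: Integer;
    Name: RawUTF8; // or AnsiString: one byte per character instead of two
    Value: Double;
  end;
  TLineRecs = array of TLineRec;

var
  Lines: TLineRecs;
begin
  SetLength(Lines, 85000); // one contiguous block instead of 85,000 separate objects
  // fill Lines[i] while scanning the memory-mapped file, then index/hash it with TDynArray
end;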
It's hard to say why your 28 MB file is expanding to 1.4 GB worth of objects when you parse it out into objects without seeing the code and the class declarations. Also, you say you're storing it in a TStringList instead of a TList or TObjectList. That sounds like you're using it as some sort of string->object key/value mapping. If so, you might want to look at the TDictionary class in the Generics.Collections unit in XE.
As for why you're using more memory in XE, it's because the string type changed from an ANSI string to a UTF-16 string in Delphi 2009. If you don't need Unicode, you could use a TDictionary to save space.
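A hedged sketch of what that key/value approach could look like (TDataObject stands in for the question's per-line class, and the string key type is an assumption):
uses
  Generics.Collections;

var
  Map: TObjectDictionary<string, TDataObject>;
begin
  Map := TObjectDictionary<string, TDataObject>.Create([doOwnsValues]);
  try
    // for each parsed line:
    //   Map.Add(KeyOfLine, ParsedObject);
    // later:
    //   if Map.TryGetValue(SomeKey, Obj) then ...
  finally
    Map.Free; // with doOwnsValues the stored objects are freed as well
  end;
end;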
Also, to save even more memory, there's another trick you could use if you don't need all 79,000 of the objects right away: lazy loading. The idea goes something like this:
Read the file into a TStringList. (This will use about as much memory as the file size. Maybe twice as much if it gets converted into Unicode strings.) Don't create any data objects.
When you need a specific data object, call a routine that checks the string list and looks up the string key for that object.
Check if that string has an object associated with it. If not, create the object from the string and associate it with the string in the TStringList.
Return the object associated with the string.
This will keep both your memory usage and your load time down, but it's only helpful if you don't need all (or a large percentage) of the objects immediately after loading.
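A minimal sketch of that lazy-loading idea via TStringList.Objects (TDataObject and ParseLine are hypothetical names):
function GetDataObject(List: TStringList; Index: Integer): TDataObject;
begin
  Result := TDataObject(List.Objects[Index]);
  if Result = nil then
  begin
    Result := ParseLine(List[Index]); // create the object only on first access
    List.Objects[Index] := Result;    // cache it for subsequent lookups
  end;
end;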
In Delphi 2007 (and earlier), a string is an Ansi string, that is, every character occupies 1 byte of memory.
In Delphi 2009 (and later), a string is a Unicode string, that is, every character occupies 2 bytes of memory.
AFAIK, there is no way to make a Delphi 2009+ TStringList object use Ansi strings. Are you really using any of the features of the TStringList? If not, you could use an array of strings instead.
Then, naturally, you can choose between
type
TAnsiStringArray = array of AnsiString;
// or
TUnicodeStringArray = array of string; // In Delphi 2009+,
// string = UnicodeString
Reading through the comments, it sounds like you need to lift the data out of Delphi and into a database.
From there it is easy to match organ donors to recipients *)
SELECT pw.* FROM patients_waiting pw
INNER JOIN organs_available oa ON (pw.bloodtype = oa.bloodtype)
AND (pw.tissuetype = oa.tissuetype)
AND (pw.organ_needed = oa.organ_offered)
WHERE oa.id = '15484'
If you want to see the patients that might match against new organ-donor 15484.
In memory you only handle the few patients that match.
*) simplified beyond all recognition, but still.
In addition to Andreas' post:
Before Delphi 2009, a string header occupied 8 bytes. Starting with Delphi 2009, a string header takes 12 bytes. So every unique string uses 4 bytes more than before, plus the fact that each character takes twice the memory.
Also, starting with Delphi 2010 I believe, TObject started using 8 bytes instead of 4. So for each single object created by Delphi, 4 more bytes are now used. Those 4 bytes were added to support the TMonitor class, I believe.
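As a rough back-of-the-envelope illustration using the question's own figures: 79,000 lines with about 150 fields each is roughly 12 million values; if a large share of those end up as individual string fields, the 12-byte headers alone cost on the order of 100 MB before a single character is stored, every character then takes 2 bytes instead of 1, and each heap allocation adds its own allocator overhead on top. Small per-item costs multiply very quickly at this scale.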
If you're in desperate need to save memory, here's a little trick that could help if you have a lot of string values that repeat themselves.
var
  uUniqueStrings: TStringList; // must be created with Sorted := True for Find to work

function ReduceStringMemory(const S: string): string;
var
  idx: Integer;
begin
  if not uUniqueStrings.Find(S, idx) then
    idx := uUniqueStrings.Add(S);
  Result := uUniqueStrings[idx];
end;
Note that this will help ONLY if you have a lot of string values that repeat themselves. For example, this code uses 150 MB less on my system.
var
  sl: TStringList;
  I: Integer;
begin
  sl := TStringList.Create;
  try
    for I := 0 to 5000000 do
      sl.Add(ReduceStringMemory(StringOfChar('A', 5)));
  finally
    sl.Free;
  end;
end;
I also read in a lot of strings in my program that can approach a couple of GB for large files.
Short of waiting for 64-bit XE2, here is one idea that might help you:
I found storing individual strings in a stringlist to be slow and wasteful in terms of memory. I ended up blocking the strings together. My input file has logical records, which may contain between 5 and 100 lines. So instead of storing each line in the stringlist, I store each record. Processing a record to find the line I need adds very little time to my processing, so this is possible for me.
If you don't have logical records, you might just want to pick a blocking size, and store every (say) 10 or 100 strings together as one string (with a delimiter separating them).
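A hedged sketch of the blocking idea, assuming the blocks are joined with line breaks and BlockSize lines per block:
const
  BlockSize = 100; // tune to your data

function GetLine(Blocks: TStringList; LineIndex: Integer): string;
var
  Parts: TStringList;
begin
  Parts := TStringList.Create;
  try
    Parts.Text := Blocks[LineIndex div BlockSize]; // split one block back into lines
    Result := Parts[LineIndex mod BlockSize];
  finally
    Parts.Free;
  end;
end;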
The other alternative, is to store them in a fast and efficient on-disk file. The one I'd recommend is the open source Synopse Big Table by Arnaud Bouchez.
May I suggest you try using the JEDI Code Library (JCL) class TAnsiStringList, which is like the TStringList from Delphi 2007 in that it is made up of AnsiStrings.
Even then, as others have mentioned, XE will be using more memory than Delphi 2007.
I really don't see the value of loading the full text of a giant flat file into a stringlist. Others have suggested a big-table approach such as Arnaud Bouchez's, or using SQLite, or something like that, and I agree with them.
I think you could also write a simple class that will load the entire file you have into memory, and provide a way to add line-by-line object links to a giant in-memory AnsiChar buffer.
Starting with Delphi 2009, not only strings but also every TObject has doubled in size. (See Why Has the Size of TObject Doubled In Delphi 2009?). But this would not explain this increase if there are only 85,000 objects. Only if these objects contain many nested objects, their size could be a relevant part of the memory usage.
Are there many duplicate strings in your list? Maybe trying to store only unique strings will help reduce the memory size. See my question
about a string pool for a possible (but maybe too simple) answer.
Are you sure you don't suffer from a case of memory fragmentation?
Be sure to use the latest FastMM (currently 4.97), then take a look at the UsageTrackerDemo demo that contains a memory map form showing the actual usage of the Delphi memory.
Finally take a look at VMMap that shows you how your process memory is used.

Delphi constant bitwise expressions

Probably a stupid question, but it's an idle curiosity for me.
I've got a bit of Delphi code that looks like this;
const
KeyRepeatBit = 30;
...
// if bit 30 of lParam is set, mark this message as handled
if (Msg.lParam and (1 shl KeyRepeatBit) > 0) then
Handled:=true;
...
(the purpose of the code isn't really important)
Does the compiler see "(1 shl KeyRepeatBit)" as something that can be computed at compile time, and thus it becomes a constant? If not, would there be anything to gain by working it out as a number and replacing the expression with a number?
Yes, the compiler evaluates the expression at compile time and uses the result value as a constant. There's no gain in declaring another constant with the result value yourself.
EDIT: The_Fox is correct. Assignable typed constants (see {$J+} compiler directive) are not treated as constants and the expression is evaluated at runtime in that case.
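For illustration, a minimal sketch of the writable-typed-constant case The_Fox refers to (TweakableBit is a made-up name):
const
  KeyRepeatBit = 30; // true constant: 1 shl KeyRepeatBit is folded at compile time

{$J+} // writable typed constants
const
  TweakableBit: Integer = 30; // behaves like an initialized variable
{$J-}

// 1 shl TweakableBit cannot be folded, so the shift happens at run time:
// if (Msg.lParam and (1 shl TweakableBit)) > 0 then ...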
You can make sure like this, for readability alone:
const
  KeyRepeatBit = 30;
  KeyRepeatMask = 1 shl KeyRepeatBit;
It converts it to a constant at compile time.
However, even if it didn't, this would have no noticeable impact on your application's performance.
You might handle a few thousand messages per second if your app is busy. Your old Pentium I can do gazillions of shifts and ands per second.
Keep your code readable, and profile it to find bottlenecks that you then optimize - usually by looking at the algorithm, and not such a low level as whether you're shifting or not.
I doubt that using a literal number (1073741824, by the way) here would really improve performance. You seem to be in some Windows message handling context, and that will probably add more delay than a single and, which is lightning fast even if the number is not computed at compile time (anyway, I think it is).
The only exception I could imagine would be the case that this particular piece of code is run really often, but as I said I think this gets optimized at compile time and so even in this case it won't make a difference at all.
Maybe it's off-topic to your question, but I use a variant record for these kinds of things. Example:
type
  TlParamRecord = record
    case Integer of
      0: (
        RepeatCount: Word;
        ScanCode: Byte;
        Flags: set of (lpfExtended, lpfReserved = 4, lpfContextCode,
                       lpfPreviousKeyState, lpfTransitionState);
      );
      1: (lParam: LPARAM);
  end;
see article on my blog for more details
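For example, the question's bit-30 test could then be written as follows (a hedged usage sketch; Msg and Handled come from the question's message handler):
var
  lp: TlParamRecord;
begin
  lp.lParam := Msg.lParam;
  // bit 30 of lParam is the "previous key state" flag, set for auto-repeated keys
  if lpfPreviousKeyState in lp.Flags then
    Handled := True;
end;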

Why most Delphi examples use FillChar() to initialize records?

I just wondered, why most Delphi examples use FillChar() to initialize records.
type
  TFoo = record
    i: Integer;
    s: string; // not safe in a record; better use PChar instead
  end;

const
  EmptyFoo: TFoo = (i: 0; s: '');

procedure Test;
var
  Foo: TFoo;
  s2: string;
begin
  Foo := EmptyFoo; // initialize a record
  // Danger code starts
  FillChar(Foo, SizeOf(Foo), #0);
  s2 := Copy('Leak Test', 1, MaxInt); // The refcount of the string buffer = 1
  Foo.s := s2;                        // The refcount of s2 = 2
  FillChar(Foo, SizeOf(Foo), #0);     // The refcount is expected to be 1, but it is still 2
end;
// After exiting the procedure, the string buffer still holds 1 reference.
// This string buffer is regarded as a memory leak.
Here (http://stanleyxu2005.blogspot.com/2008/01/potential-memory-leak-by-initializing.html) is my note on this topic. IMO, declaring a constant with default values is a better way.
Historical reasons, mostly. FillChar() dates back to the Turbo Pascal days and was used for such purposes. The name is really a bit of a misnomer because while it says FillChar(), it is really FillByte(). The reason is that the last parameter can take a char or a byte. So FillChar(Foo, SizeOf(Foo), #0) and FillChar(Foo, SizeOf(Foo), 0) are equivalent. Another source of confusion is that as of Delphi 2009, FillChar still only fills bytes even though Char is equivalent to WideChar. While looking at the most common uses for FillChar in order to determine whether most folks use FillChar to actually fill memory with character data or just use it to initialize memory with some given byte value, we found that it was the latter case that dominated its use rather than the former. With that we decided to keep FillChar byte-centric.
It is true that clearing a record with FillChar that contains a field declared using one of the "managed" types (strings, Variant, Interface, dynamic arrays) can be unsafe if not used in the proper context. In the example you gave, however, it is actually safe to call FillChar on the locally declared record variable as long as it is the first thing you ever do to the record within that scope. The reason is that the compiler has generated code to initialize the string field in the record. This will have already set the string field to 0 (nil). Calling FillChar(Foo, SizeOf(Foo), 0) will just overwrite the whole record with 0 bytes, including the string field which is already 0. Using FillChar on the record variable after a value was assigned to the string field is not recommended. Using your initialized constant technique is a very good solution to this problem because the compiler can generate the proper code to ensure the existing record values are properly finalized during the assignment.
If you have Delphi 2009 and later, use the Default call to initialize a record.
Foo := Default(TFoo);
See David's answer to the question How to properly free records that contain various types in Delphi at once?.
Edit:
The advantage of using the Default(TSomeType) call is that the record is finalized before it is cleared. No memory leaks and no explicit, dangerous low-level call to FillChar or ZeroMem. When the records are complex, perhaps containing nested records etc., the risk of making mistakes is eliminated.
Your method to initialize the records can be made even simpler:
const EmptyFoo : TFoo = ();
...
Foo := EmptyFoo; // Initialize Foo
Sometimes you want a field to have a non-default value; then do it like this:
const PresetFoo : TFoo = (s : 'Non-Default'); // Only s has a non-default value
This will save some typing and the focus is set on the important stuff.
FillChar is fine to make sure you don't get any garbage in a new, uninitialized structure (record, buffer, array...).
It should not be used to "reset" the values without knowing what you are resetting.
No more than just writing MyObject := nil and expecting to avoid a memory leak.
In particular, all managed types are to be watched carefully.
See the Finalize function.
When you have the power to fiddle directly with the memory, there is always a way to shoot yourself in the foot.
FillChar is usually used to fill arrays or records containing only numeric types and arrays. You are correct that it shouldn't be used when there are strings (or any reference-counted fields) in the record.
Although your suggestion of using a const to initialize it would work, an issue comes into play when I have a variable length array that I want to initialize.
The question may also be asking:
why FillChar
and not ZeroMemory?
There is no ZeroMemory function in Windows. In the header files (winbase.h) it is a macro that, in the C world, turns around and calls memset:
memset(Destination, 0, Length);
ZeroMemory is the language-neutral term for "your platform's function that can be used to zero memory".
The Delphi equivalent of memset is FillChar.
Since Delphi doesn't have macros (and before the days of inlining), calling ZeroMemory meant you had to suffer the penalty of an extra function call before you actually got to FillChar.
So in many ways, calling FillChar is a performance micro-optimization - which no longer exists now that ZeroMemory is inlined:
procedure ZeroMemory(Destination: Pointer; Length: NativeUInt); inline;
Bonus Reading
Windows also contains the SecureZeroMemory function. It does the exact same thing as ZeroMemory. If it does the same thing as ZeroMemory, why does it exist?
Because some smart C/C++ compilers might recognize that setting memory to 0 before getting rid of the memory is a waste of time - and optimize away the call to ZeroMemory.
I don't think Delphi's compiler is as smart as many other compilers; so there's no need for a SecureFillChar.
Traditionally, a character is a single byte (no longer true as of Delphi 2009), so using FillChar with a #0 would initialize the allocated memory so that it contains only nulls: byte 0, or binary 00000000.
You should instead use the ZeroMemory function for compatibility; it has the same calling parameters as the old FillChar.
This question has a broader implication that has been on my mind for ages. I too was brought up on using FillChar for records. This is nice because we often add new fields to the (data) record, and of course FillChar(Rec, SizeOf(Rec), #0) takes care of such new fields. If we 'do it properly', we have to iterate through all the fields of the record, some of which are enumerated types, some of which may be records themselves, and the resulting code is less readable as well as possibly erroneous if we don't diligently add new record fields to it. String fields are common, so FillChar is a no-no now. A few months ago I went around and converted all my FillChars on records with string fields to field-by-field clearing, but I was not happy with the solution, and I wonder if there is a neat way of doing the 'Fill' on simple types (ordinal/float) and 'Finalize' on variants and strings?
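One possible answer, a hedged sketch using the question's TFoo: finalize the managed fields first, then zero the raw memory.
procedure ClearFoo(var Foo: TFoo);
begin
  Finalize(Foo);                 // releases managed fields (strings, dynamic arrays, interfaces)
  FillChar(Foo, SizeOf(Foo), 0); // now it is safe to zero everything, new fields included
end;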
Here is a better way to initialize stuff without using FillChar:
Record in record (Cannot initialize)
How to initialize a static array?
