Why is Delphi's "for" loop acting like that? - delphi

Delphi XE2, simple code:
function FastSwap(Value: uint16): uint16; register; overload;
asm
  bswap eax
  shr eax, 16
end;
...
type
  PPicEleHdr = ^TPicEleHdr;
  TPicEleHdr = packed record
    zero, size, count: word;
  end;
var
  count: integer;
  buf: TBytes;
begin
  ...
  peh := @buf[offs];
  count := integer(FastSwap(peh.count));
for i := 0 to count - 1 do begin
and here, at the for, I see this in the CPU window:
UnitExtract.pas.279: for i := 0 to count - 1 do begin
0051E459 8B45DC mov eax,[ebp-$24]
0051E45C 48 dec eax
0051E45D 85C0 test eax,eax
0051E45F 0F82CD000000 jb $0051e532
0051E465 40 inc eax
0051E466 8945AC mov [ebp-$54],eax
0051E469 C745F400000000 mov [ebp-$0c],$00000000
So when count is 0 nothing works properly: test eax, eax (eax = $FFFFFFFF after dec eax) does not affect the Carry flag, while jb acts on the Carry flag.
Is there something I don't understand about using for?

By a process of reverse engineering, I infer that i is an unsigned 32 bit integer, Cardinal. So the compiler performs the for loop arithmetic in an unsigned context. This means that Count-1 is interpreted as unsigned, and so your loop runs from 0 to high(i).
To flesh this out, this is what happens step by step:
Count is $00000000.
Count-1 is evaluated and has value $FFFFFFFF.
Interpreted as an unsigned integer, $FFFFFFFF is 2^32 - 1.
Your loop body executes for all values 0 <= i < 2^32.
The solution is to make your loop variable a signed integer, for example Integer.
When you switch i to be of type Integer, the following happens:
Count is $00000000.
Count-1 is evaluated and has value $FFFFFFFF.
Interpreted as a signed integer $FFFFFFFF is -1.
The loop body does not execute.
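To make the fix concrete, here is a minimal sketch (my own, since the declaration of i is not shown in the question; DoSomething is just a hypothetical placeholder for the loop body):

var
  i, count: Integer;   // signed index; with i: Cardinal the arithmetic wraps instead
begin
  count := 0;
  for i := 0 to count - 1 do
    DoSomething(i);    // with a signed index this correctly executes zero times
end;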

As written, this won't compile, since you don't have a declaration for i.
But my psychic debugging senses say that i is declared somewhere as a Cardinal (unsigned integer), and thus when it tries to evaluate 0 - 1, it gets High(Cardinal) instead of -1, because unsigned integers can't represent negative values.
You should never use unsigned integers as either the index variable or the bounding variables of a for loop if there's any chance at all that they can go negative; otherwise you get errors like this. In fact, you should probably avoid unsigned integers in general. They're not as useful as they look (if you need a value higher than the maximum signed value for a size, you'll likely end up needing a value higher than twice that at some point, so what you really need is the next larger integer size) and they tend to cause strange bugs like this one.
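To see the wrap-around in isolation, here is a tiny sketch (not from the answer) that subtracts 1 from an unsigned zero:

var
  c: Cardinal;
begin
  c := 0;
  Dec(c);        // a Cardinal cannot hold -1, so the value wraps
  Writeln(c);    // prints 4294967295, i.e. High(Cardinal) = $FFFFFFFF
end;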

Related

I want to create a Fibonacci sequence using a for loop, but the integers are not adding up

procedure TForm1.Button1Click(Sender: TObject);
var
  term1: integer;
  term2: integer;
  term3: integer;
  j: integer;
begin
  term1 := 0;
  term2 := 1;
  for j := 1 to 100 do;
  begin
    term3 := term1 + term2;
    Memo1.Text := inttostr(term3);
    term1 := term2;
    term2 := term3;
  end;
end;
end.
This is what I have so far, but term1 and term2 don't add up. I have tried several different things, but for some reason the integers never add up.
There are several problems with your code:
The semicolon after for j := 1 to 100 do prevents the code within the begin..end block from being run in the loop. Why? The code that is run in each cycle of a for loop is the code that follows the do, up to the first semicolon. Since you put a semicolon right after the do, an empty statement is run in the loop, and your begin..end block comes after that. Removing the semicolon after do will fix that.
You are using Memo1.Text := inttostr(term3); to write the result into the Memo. The problem is that this rewrites the entire text of the Memo every time, so you end up with only one line showing the last number. You should use Memo1.Lines.Add(inttostr(term3)); instead, so that a new line is added each time.
Lastly, you are using the Integer type for your variables. Since the numbers in the Fibonacci sequence grow very fast, you will quickly exceed the maximum value that can be stored in an Integer, which in Delphi is a signed 32-bit integer with a maximum value of 2147483647. You will have to use a bigger integer type, such as a 64-bit integer, and since you are only dealing with positive numbers you can use the unsigned 64-bit integer declared in Delphi as the UInt64 type. You can read more about Delphi's default integer types in the documentation. Unfortunately, not even UInt64 is big enough to hold all of the first 100 Fibonacci numbers, so to do this properly you will have to use one of the BigInteger libraries available for Delphi; there are several of them on the internet.
You have an erroneous ; on your loop that you need to remove:
for j := 1 to 100 do;
^
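Putting the fixes above together, a minimal corrected sketch might look like this (the switch to UInt64 and the shorter loop are my own choices; anything past roughly the 93rd Fibonacci number overflows even UInt64):

procedure TForm1.Button1Click(Sender: TObject);
var
  term1, term2, term3: UInt64;
  j: Integer;
begin
  term1 := 0;
  term2 := 1;
  for j := 1 to 90 do            // note: no semicolon after "do"; 90 terms still fit in UInt64
  begin
    term3 := term1 + term2;
    Memo1.Lines.Add(UIntToStr(term3));   // Add appends a new line instead of replacing the text
    term1 := term2;
    term2 := term3;
  end;
end;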

Delphi "for ... to" statement runs from end value to start value

I'm writing a simple app in Embarcadero Delphi 2010. Simple code with two loops:
procedure TForm1.Button1Click(Sender: TObject);
var
  a: array [0..255] of integer;
  i: integer;
  k, q: integer;
begin
  k := 0;
  for I := 0 to 255 do
  begin
    a[i] := i;
  end;
  for I := 0 to 255 do
  begin
    q := a[i];
    k := k + q;
  end;
  Label1.Caption := inttostr(k);
end;
According to the Watch List, in the second loop the variable "i" starts from the value 256 and goes down to 0 (256, 255, 254, ..., 0), but the array's elements are correct (0, 1, 2, 3, ...). The variable "i" is declared only locally; there are no global variables.
Why does this happen? Is it normal behaviour?
The short answer is because of compiler optimization. The long answer is:
In your Pascal code, the integer I has two (actually three) purposes. First, it is the loop's control variable (or loop counter), that is, it controls how many times the loop is run. Secondly, it acts as the index into the array a. And in the first loop it also acts as the value assigned to the array elements. When compiled to machine code, these roles are handled by different registers.
If optimization is enabled in the compiler settings, the compiler creates code that decrements the control variable from a start value down towards zero, if it can do so without changing the end result. It does this because a comparison against a non-zero value can be avoided, which is faster.
In the following disassembly of the first loop, you can see that the roles of variable I are handled as follows:
Register eax acts as the loop control variable and the value to be assigned to the array elements
Register edx is the pointer to the array element (incremented by 4 (bytes) per turn)
disassembly:
Unit25.pas.34: for I := 0 to 255 do
005DB695 33C0 xor eax,eax // init
005DB697 8D9500FCFFFF lea edx,[ebp-$00000400]
Unit25.pas.36: a[i]:=i;
005DB69D 8902 mov [edx],eax // value assignment
Unit25.pas.37: end;
005DB69F 40 inc eax // prepare for next turn
005DB6A0 83C204 add edx,$04 // same
Unit25.pas.34: for I := 0 to 255 do
005DB6A3 3D00010000 cmp eax,$00000100 // comparison with end of loop
005DB6A8 75F3 jnz $005db69d // if not, run next turn
Since eax has two roles, it must count upward. Note that it requires three instructions per iteration to manage the loop counting: inc eax, cmp eax,$00000100 and jnz $005db69d.
In the disassembly of the second loop, the roles of variable I are handled similarly to the first loop, except that I is not assigned to the elements. Therefore the loop control variable only acts as a loop counter and can be run downward.
Register eax is the loop control variable
Register edx is the pointer to the array element (incremented by 4 (bytes) per turn)
disassembly:
Unit25.pas.39: for I := 0 to 255 do
005DB6AA B800010000 mov eax,$00000100 // init loop counter
005DB6AF 8D9500FCFFFF lea edx,[ebp-$00000400]
Unit25.pas.41: q:= a[i];
005DB6B5 8B0A mov ecx,[edx]
Unit25.pas.42: k:=k+q;
005DB6B7 03D9 add ebx,ecx
Unit25.pas.43: end;
005DB6B9 83C204 add edx,$04 // prepare for next turn
Unit25.pas.39: for I := 0 to 255 do
005DB6BC 48 dec eax // decrement loop counter, includes intrinsic comparison with 0
005DB6BD 75F6 jnz $005db6b5 // jnz = jump if not zero
Note that in this case only two instructions are needed to manage the loop counting: dec eax and jnz $005db6b5.
In Delphi XE7, in the Watches window, variable I is shown during the first loop as incrementing values, but during the second loop it is shown as E2171 Variable 'i' inaccessible here due to optimization. In earlier versions I recall it showing decrementing values, which I believe is what you see.
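Following the answer's own logic, here is a sketch (mine, not from the answer) of how you could likely keep I counting upward in the second loop as well: make the body depend on the value of I, so the register cannot serve purely as a down-counter.

k := 0;
for I := 0 to 255 do
begin
  q := a[I];
  k := k + q + I;   // the value of I is now needed, so the counter has to run 0..255
end;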
I have copied your exact code and when I ran it the variable "i" counted normally in both for loops. Have you run the second loop step by step? The "i" really is 256 at the start of the second loop because of the first one, but as soon as the second loop starts "i" becomes 0 and counts normally up to 255.
I don't see how or why it would count from 256 down to 0.
UPDATE:
I hadn't even thought of this, but here's your explanation, I believe: http://www.delphigroups.info/2/45/418603.html
"It's a compiler optimization - you're not using "I" inside your loop, hence the compiler thought of a better way to count. Your loop count will still be accurate..."

Is it safe to always access odd-sized datastructures in multiples of 4 bytes?

If SizeOf(datastructure) is not a multiple of 4, it can be awkward to access the remaining few bytes.
I want to optimize this by reading the remaining 1, 2 or 3 bytes into a 4-byte variable and then masking off the bytes I don't need.
This should not give exceptions, because blocks should be allocated in dword-aligned (or bigger) chunks.
Let me give an example:
function MurmurHash3(data: PInteger; size: integer; seed: Cardinal): Cardinal;
const
  ....
{$ifdef purepascal}
var
  i: integer;
  k, hash: cardinal;
  remaining: cardinal;
begin
  hash := seed;
  for i := 0 to (size shr 2) - 1 do begin
    ... do murmur stuff
  end; {for i}
  remaining := data[size shr 2]; //access the remaining 1,2,3 bytes
  //mask off the bytes we don't need.
  remaining := remaining and ($ffffffff shr ((size and $3)*8));
  ....
Is this safe or will this land me in trouble?
This is safe if and only if it is valid to read beyond the end of the data to a multiple of 4. If the byte array is held in an integer array then you are fine. If it is actually held in a byte array then you could be reading off the end of the buffer.
What I mean by this is that although your function accepts PInteger, that might have been done for sake of convenience when addressing the array. If the caller of the function has cast PByte to PInteger at the call site, then it is at least plausible that you could be reading off the end of the array and potentially encounter a runtime memory fault.
You mention that a desire to optimise is driving this. I'm not sure that the final step of a hash calculation will need optimising. The loop is where you incur the cost. I doubt you'd suffer very much by using Move to perform a copy of the straggling bytes into an integer variable. If you really want to optimise, break 3 into 2 then 1, and then 2 and 1 can be handled by register access.
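A minimal sketch of the Move-based alternative the answer mentions (my own code, reusing the data and size parameters from the question): copy only the bytes that actually exist into a zeroed 4-byte variable, so nothing is ever read past the end of the buffer.

var
  remaining: Cardinal;
  tail: Integer;
  p: PByte;
begin
  remaining := 0;
  tail := size and $3;            // 0..3 straggling bytes
  if tail <> 0 then
  begin
    p := PByte(data);
    Inc(p, size and not 3);       // advance to the start of the tail
    Move(p^, remaining, tail);    // copies only the bytes that exist
  end;
  // remaining now holds the last 1..3 bytes; the unused high bytes are already zero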

Invalid floating point operation calling Trunc()

I'm getting a (repeatable) floating point exception when I try to Trunc() a Real value.
e.g.:
Trunc(1470724508.0318);
In reality the actual code is more complex:
ns: Real;
v: Int64;
ns := ((HighPerformanceTickCount*1.0)/g_HighResolutionTimerFrequency) * 1000000000;
v := Trunc(ns);
But in the end it still boils down to:
Trunc(ARealValue);
Now, I cannot repeat it anywhere else - just at this one spot, where it fails every time.
It's not voodoo
Fortunately computers are not magic. The Intel CPU performs very specific, observable actions. So I should be able to figure out why the floating point operation fails.
Going into the CPU window
v := Trunc(ns)
fld qword ptr [ebp-$10]
This loads the 8-byte floating point value at ebp-$10 into floating point register ST0.
The bytes at memory address [ebp-$10] are:
0018E9D0: 6702098C 41D5EA5E (as DWords)
0018E9D0: 41D5EA5E6702098C (as QWords)
0018E9D0: 1470724508.0318 (as Doubles)
The load succeeds, and the floating point register then contains the appropriate value:
Next is the actual call to the RTL Trunc function:
call #TRUNC
Next is the guts of Delphi RTL's Trunc function:
#TRUNC:
sub esp,$0c
wait
fstcw word ptr [esp] //Store Floating-Point Control Word on the stack
wait
fldcw word ptr [cwChop] //Load Floating-Point Control Word
fistp qword ptr [esp+$04] //Converts value in ST0 to signed integer
//stores the result in the destination operand
//and pops the stack (increments the stack pointer)
wait
fldcw word ptr [esp] //Restore the original Floating-Point Control Word
pop ecx
pop eax
pop edx
ret
Or I suppose I could have just pasted it from the RTL, rather than transcribing it from the CPU window:
const cwChop : Word = $1F32;
procedure _TRUNC;
asm
{ -> FST(0) Extended argument }
{ <- EDX:EAX Result }
SUB ESP,12
FSTCW [ESP] //Store the floating-point control word at [ESP]
FWAIT
FLDCW cwChop //Load new control word $1F32
FISTP qword ptr [ESP+4] //Convert ST0 to int, store in ESP+4, and pop the stack
FWAIT
FLDCW [ESP] //restore the FPCW
POP ECX
POP EAX
POP EDX
end;
The exception happens during the actual fistp operation.
fistp qword ptr [esp+$04]
At the moment of this call, the ST0 register contains the same floating point value:
Note: The careful observer will note that the value in the above screenshot doesn't match the first screenshot. That's because I took it on a different run. I'd rather not have to carefully redo all the constants in the question just to make them consistent - but trust me: it's the same when I reach the fistp instruction as it was after the fld instruction.
Leading up to it:
sub esp,$0c: I watch it push the stack down by 12 bytes
fstcw word ptr [esp]: I watch it store $027F at the current stack pointer
fldcw word ptr [cwChop]: I watch the floating point control flags change
fistp qword ptr [esp+$04]: and it's about to write the Int64 into the room it made on the stack
and then it crashes.
What can actually be going on here?
It happens with other values as well; it's not like there's something wrong with this particular floating point value. I even tried to set up the test case elsewhere.
Knowing that the 8-byte hex value of the float is $41D5EA5E6702098C, I tried to contrive the setup:
var
  ns: Real;
  nsOverlay: Int64 absolute ns;
  v: Int64;
begin
  nsOverlay := $41d62866a2f270dc;
  v := Trunc(ns);
end;
Which gives:
nsOverlay := $41d62866a2f270dc;
mov [ebp-$08],$a2f270dc
mov [ebp-$04],$41d62866
v := Trunc(ns)
fld qword ptr [ebp-$08]
call #TRUNC
And at the point of the call to #TRUNC, the floating point register ST0 contains a value:
But the call does not fail. It only fails, every time, in this one section of my code.
What could be possibly happening that is causing the CPU to throw an invalid floating point exception?
What is the value of cwChop before it loads the control word?
The value of cwChop looks to be correct before the load of the control word: $1F32. But after the load, the actual control word is wrong:
Bonus Chatter
The actual function that is failing is something to convert high-performance tick counts into nanoseconds:
function PerformanceTicksToNs(const HighPerformanceTickCount: Int64): Int64;
//Convert high-performance ticks into nanoseconds
var
  ns: Real;
  v: Int64;
begin
  Result := 0;
  if HighPerformanceTickCount = 0 then
    Exit;
  if g_HighResolutionTimerFrequency = 0 then
    Exit;
  ns := ((HighPerformanceTickCount*1.0)/g_HighResolutionTimerFrequency) * 1000000000;
  v := Trunc(ns);
  Result := v;
end;
I created all the intermediate temporary variables to try to track down where the failure is.
I even tried to use that as a template to try to reproduce it:
var
  i1, i2: Int64;
  ns: Real;
  v: Int64;
  vOver: Int64 absolute ns;
begin
  i1 := 5060170;
  i2 := 3429541;
  ns := ((i1*1.0)/i2) * 1000000000;
  //vOver := $41d62866a2f270dc;
  v := Trunc(ns);
But it works fine. There's something about when it's called during a DUnit unit test.
Floating Point control word flags
Delphi's standard control word: $1332
$1332 = 0001 00 11 00 110010
0 ;Don't allow invalid numbers
1 ;Allow denormals (very small numbers)
0 ;Don't allow divide by zero
0 ;Don't allow overflow
1 ;Allow underflow
1 ;Allow inexact precision
0 ;reserved exception mask
0 ;reserved
11 ;Precision Control - 11B (Double Extended Precision - 64 bits)
00 ;Rounding control -
1 ;Infinity control - 1 (not used)
The Windows API required value: $027F
$027F = 0000 00 10 01 111111
1 ;Allow invalid numbers
1 ;Allow denormals (very small numbers)
1 ;Allow divide by zero
1 ;Allow overflow
1 ;Allow underflow
1 ;Allow inexact precision
1 ;reserved exception mask
0 ;reserved
10 ;Precision Control - 10B (double precision)
00 ;Rounding control
0 ;Infinity control - 0 (not used)
The cwChop control word: $1F32
$1F32 = 0001 11 11 00 110010
0 ;Don't allow invalid numbers
1 ;Allow denormals (very small numbers)
0 ;Don't allow divide by zero
0 ;Don't allow overflow
1 ;Allow underflow
1 ;Allow inexact precision
0 ;reserved exception mask
0 ;unused
11 ;Precision Control - 11B (Double Extended Precision - 64 bits)
11 ;Rounding Control
1 ;Infinity control - 1 (not used)
000 ;unused
The CTRL flags after loading $1F32: $1F72
$1F72 = 0001 11 11 01 110010
0 ;Don't allow invalid numbers
1 ;Allow denormals (very small numbers)
0 ;Don't allow divide by zero
0 ;Don't allow overflow
1 ;Allow underflow
1 ;Allow inexact precision
1 ;reserved exception mask
0 ;unused
11 ;Precision Control - 11B (Double Extended Precision - 64 bits)
11 ;Rounding control
1 ;Infinity control - 1 (not used)
00011 ;unused
All the CPU is doing is turning on a reserved, unused, mask bit.
RaiseLastFloatingPointError()
If you're going to develop programs for Windows, you really need to accept the fact that floating point exceptions should be masked by the CPU, meaning you have to watch for them yourself. Like Win32Check or RaiseLastWin32Error, we'd like a RaiseLastFPError. The best I can come up with is:
procedure RaiseLastFPError();
var
statWord: Word;
const
ERROR_InvalidOperation = $01;
// ERROR_Denormalized = $02;
ERROR_ZeroDivide = $04;
ERROR_Overflow = $08;
// ERROR_Underflow = $10;
// ERROR_InexactResult = $20;
begin
{
Excellent reference of all the floating point instructions.
(Intel's architecture manuals have no organization whatsoever)
http://www.plantation-productions.com/Webster/www.artofasm.com/Linux/HTML/RealArithmetica2.html
Bits 0:5 are exception flags (Mask = $2F)
0: Invalid Operation
1: Denormalized - CPU handles correctly without a problem. Do not throw
2: Zero Divide
3: Overflow
4: Underflow - CPU handles as you'd expect. Do not throw.
5: Precision - Extraordinarily common. CPU does what you'd want. Do not throw
}
asm
fwait //Wait for pending operations
FSTSW statWord //Store the floating point status word in statWord.
//Waits for pending operations. (Use FNSTSW to not wait.)
fclex //clear all exception bits, the stack fault bit,
//and the busy flag in the FPU status register
end;
if (statWord and $0D) <> 0 then
begin
//if (statWord and ERROR_InexactResult) <> 0 then raise EInexactResult.Create(SInexactResult)
//else if (statWord and ERROR_Underflow) <> 0 then raise EUnderflow.Create(SUnderflow)}
if (statWord and ERROR_Overflow) <> 0 then raise EOverflow.Create(SOverflow)
else if (statWord and ERROR_ZeroDivide) <> 0 then raise EZeroDivide.Create(SZeroDivide)
//else if (statWord and ERROR_Denormalized) <> 0 then raise EUnderflow.Create(SUnderflow)
else if (statWord and ERROR_InvalidOperation) <> 0 then raise EInvalidOp.Create(SInvalidOp);
end;
end;
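A hypothetical usage sketch (my assumption of how the helper is meant to be used, not code from the question): with exceptions masked at the Windows-preferred $027F control word, nothing is raised at the faulting operation itself, so you call the helper right afterwards to surface it.

procedure MaskedDivideExample;
var
  x, y, q: Double;
begin
  Set8087CW($027F);      // mask FPU exceptions for the sake of the sketch
  x := 1.0;
  y := 0.0;
  q := x / y;            // no exception here; the FPU just sets its zero-divide flag
  RaiseLastFPError();    // raises EZeroDivide from the sticky status word
  Writeln(q);            // never reached in this sketch
end;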
A reproducible case!
I found a case, with Delphi's default floating point control word in effect, that causes an invalid floating point exception (although I never saw it before now because it was masked). Now that I'm seeing it: why is it happening? And it's reproducible:
procedure TForm1.Button1Click(Sender: TObject);
var
  d: Real;
  dover: Int64 absolute d;
begin
  d := 1.35715152325557E020;
  // dOver := $441d6db44ff62b68; //1.35715152325557E020
  d := Round(d); //<--floating point exception
  Self.Caption := FloatToStr(d);
end;
You can see that the ST0 register contains a valid floating point value. The floating point control word is $1372. The floating point exception flags are all clear:
And then, as soon as it executes, it's an invalid operation:
IE (Invalid operation) flag is set
ES (Exception) flag is set
I was tempted to ask this as another question, but it would be the exact same question - except this time calling Round().
The problem occurs elsewhere. When your code enters Trunc the control word is set to $027F which is, IIRC, the default Windows control word. This has all exceptions masked. That's a problem because Delphi's RTL expects exceptions to be unmasked.
And look at the FPU window: sure enough, there are errors. Both the IE and PE flags are set. It's IE that counts. That means that earlier in the code sequence there was a masked invalid operation.
Then you call Trunc, which modifies the control word to unmask the exceptions. Look at your second FPU window screenshot: IE is 1 but IM is 0. So boom, the earlier exception is raised and you are led to think that it was the fault of Trunc. It was not.
You'll need to trace back up the call stack to find out why the control word is not what it ought to be in a Delphi program. It ought to be $1332. Most likely you are calling into some third party library which modifies the control word and does not restore it. You'll have to locate the culprit and take charge of the control word whenever calls into that library return.
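A minimal sketch of "taking charge" (my suggestion, not part of the answer above); CallIntoThirdPartyLibrary is a hypothetical stand-in for whichever routine turns out to clobber the control word:

procedure CallLibrarySafely;
begin
  CallIntoThirdPartyLibrary;     // hypothetical culprit that changes the FPU control word
  Set8087CW(Default8087CW);      // restore Delphi's default ($1332); both live in the System unit
end;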
Once you get the control word back under control you'll find the real cause of this exception. Clearly there is an illegal FP operation. Once the control word unmasks the exceptions, the error will be raised at the right point.
Note that there's nothing to worry about in the discrepancy between $1372 and $1332, or $1F72 and $1F32. That's just an oddity of the control word: some of the bits are reserved and ignore your exhortations to clear them.
Your latest update essentially asks a different question. It asks about the exception raised by this code:
procedure foo;
var
  d: Real;
  i: Int64;
begin
  d := 1.35715152325557E020;
  i := Round(d);
end;
This code fails because the job of Round() is to round d to the nearest Int64 value. But your value of d is greater than the largest possible value that can be stored in an Int64, and hence the floating point unit traps.
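For reference, High(Int64) is 9223372036854775807, roughly 9.22E18, so 1.357E20 cannot be represented. A defensive sketch (my own, reusing d and i from the procedure above; ERangeError comes from SysUtils):

if (d > -9.2E18) and (d < 9.2E18) then   // conservative check against the Int64 range
  i := Round(d)
else
  raise ERangeError.CreateFmt('%g is outside the Int64 range', [d]);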

Char and Chr in Delphi

The difference between Chr and Char when used in converting types is that one is a function and the other is a cast.
So: Char(66) = Chr(66)
I don't think there is any performance difference (at least I've never noticed any; one probably calls the other)... I'm fairly sure someone will correct me on this!
EDIT Thanks to Ulrich for the test proving they are in fact identical.
EDIT 2 Can anyone think of a case where they might not be identical, e.g. you are pushed towards using one over the other due to the context?
Which do you use in your code and why?
I did a small test in D2007:
program CharChr;

{$APPTYPE CONSOLE}

uses
  Windows;

function GetSomeByte: Byte;
begin
  Result := Random(26) + 65;
end;

procedure DoTests;
var
  b: Byte;
  c: Char;
begin
  b := GetSomeByte;
  IsCharAlpha(Chr(b));
  b := GetSomeByte;
  IsCharAlpha(Char(b));
  b := GetSomeByte;
  c := Chr(b);
  b := GetSomeByte;
  c := Char(b);
end;

begin
  Randomize;
  DoTests;
end.
Both calls produce the same assembly code:
CharChr.dpr.19: IsCharAlpha(Chr(b));
00403AE0 8A45FF mov al,[ebp-$01]
00403AE3 50 push eax
00403AE4 E86FFFFFFF call IsCharAlpha
CharChr.dpr.21: IsCharAlpha(Char(b));
00403AF1 8A45FF mov al,[ebp-$01]
00403AF4 50 push eax
00403AF5 E85EFFFFFF call IsCharAlpha
CharChr.dpr.24: c := Chr(b);
00403B02 8A45FF mov al,[ebp-$01]
00403B05 8845FE mov [ebp-$02],al
CharChr.dpr.26: c := Char(b);
00403B10 8A45FF mov al,[ebp-$01]
00403B13 8845FE mov [ebp-$02],al
Edit: Modified sample to mitigate Nick's concerns.
Edit 2: Nick's wish is my command. ;-)
The help says: Chr returns the character with the ordinal value (ASCII value) of the byte-type expression, X.*
So, how is a character represented in a computer's memory? Guess what: as a byte*. Actually the Chr and Ord functions are only there because Pascal is a strictly typed language that prohibits the use of bytes* where characters are requested. For the computer, the resulting char is still represented as a byte* - to what should it convert, then? Actually there is no code emitted for this function call, just as there is no code emitted for a type cast. Ergo: no difference.
You may prefer Chr just to avoid a type cast.
Note: type casts shall not be confused with explicit type conversions! In Delphi 2010, writing something like Char(a) where a is an AnsiChar will actually do something.
* For Unicode, please replace byte with integer.
Edit:
Just an example to make it clear (assuming non-Unicode):
var
  a: Byte;
  c: char;
  b: Byte;
begin
  a := 60;
  c := Chr(60);
  c := Chr(a);
  b := a;
end;
produces similar code
ftest.pas.46: a := 60;
0045836D C645FB3C mov byte ptr [ebp-$05],$3c
ftest.pas.47: c := Chr(60);
00458371 C645FA3C mov byte ptr [ebp-$06],$3c
ftest.pas.48: c := Chr(a);
00458375 8A45FB mov al,[ebp-$05]
00458378 8845FA mov [ebp-$06],al
ftest.pas.49: b := a;
0045837B 8A45FB mov al,[ebp-$05]
0045837E 8845F9 mov [ebp-$07],al
Assigning byte to byte is actually the same as assigning byte to char via CHR().
Chr is a function, thus it returns a new value of type Char.
Char(x) is a cast, which means the actual x object is used, but as a different type.
Many system functions, like Inc, Dec, Chr, Ord, are inlined.
Both Char and Chr are fast. Use the one that is most appropriate each time and that better reflects what you want to do.
Chr is a function call, so it is a tiny bit more expensive than a type cast. But I think Chr is inlined by the compiler.
They are identical, but they don't have to be identical. There's no requirement that the internal representation of characters map 1-to-1 with their ordinal values. Nothing says that a Char variable holding the value 'A' must hold the numeric value 65. The requirement is that when you call Ord on that variable, the result must be 65 because that's the code point designated for the letter A in your program's character encoding.
Of course, the easiest implementation of that requirement is for the variable to hold the numeric value 65 as well. Because of this, the function calls and the type-casts are always identical.
If the implementation were different, then when you called Chr(65), the compiler would go look up what character is at code point 65 and use it as the result. When you write Char(65), the compiler wouldn't worry about what character it really represents, as long as the numeric result stored in memory was 65.
Is this splitting hairs? Yes, absolutely, because in all current implementations, they're identical. I liken this to the issue of whether the null pointer is necessarily zero. It's not, but under all implementations, it ends up that way anyway.
Chr is typesafe, Char isn't: try to code Chr(256) and you'll get a compiler error. Try to code Char(256) and you will either get the character with the ordinal value 0 or 1, depending on your computer's internal representation of integers.
I'll suffix the above by saying that this applies to pre-Unicode Delphi. I don't know if Chr and Char have been updated to take Unicode into account.
