I was curious how long a dynamic array could be so I tried
SetLength(dynArray, High(Int64));
That has a value of 9,223,372,036,854,775,807 and I figure that would be the largest number of indexes I could reference anyway. It gave me a:
ERangeError with message 'Range check error'.
So I tried:
SetLength(dynArray, MaxInt);
and got the same error!
Interestingly I could call it with
SetLength(dynArray, Trunc(Power(2, 32)));
Which is actually twice the size of MaxInt!
I tried
SetLength(dynArray, Trunc(Power(2, 63) - 1));
Which is the same as High(Int64), and that failed too.
Short of continued trial and error, does someone know the maximum size? Does it depend on the size of the elements in the array?
I am using Delphi 2009. Will it be different for different versions (obviously when Commodore comes out it should be greater!)
The answer is clear from System.DynArraySetLength procedure, from line 20628:
Inc(neededSize, Sizeof(Longint)*2);
if neededSize < 0 then
Error(reRangeError);
The maximum value you can allocate without raising a range check error is therefore theoretically Maxint - SizeOf(Longint) * 2. Practically, you will get an out-of-memory error depending on how much memory is available.
There's no point in speculating about the maximum theoretical length of a dynamic array, as the maximum practical length is much smaller.
The size of the data structure and the data contained in it has to be smaller than the maximum memory an application can allocate, minus the memory needed by the application code itself, the stack and the other data. On Windows (32 bit, the only version we mere mortals can target with Delphi right now) each application gets a virtual address range of 2 GByte, or 3 GByte with a special switch for the OS loader. I'm not sure though that a Delphi application can even handle the 3 GByte address space, as the last third would have negative values for offsets in all places where integers are used instead of LongWords.
So you could try to allocate a dynamic array of, say, 80% or 90% of MaxInt div SizeOf(array element), with the most probable result that the allocation of that memory block fails at runtime.
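As a hedged sketch of that trial-and-error idea (not a recommendation), a small helper could keep halving the request until the memory manager accepts it. SysUtils is assumed to be in the uses clause for the exception classes, and the Byte element type is an illustrative choice, not part of the answer above.

// Hedged sketch: shrink the requested length until SetLength succeeds.
function LargestAllocatableLength(Requested: Integer): Integer;
var
  a: array of Byte;
begin
  Result := Requested;
  while Result > 0 do
  begin
    try
      SetLength(a, Result);
      Exit; // success: Result is a length the memory manager could satisfy
    except
      on EOutOfMemory do
        Result := Result div 2; // not enough free address space, try a smaller block
      on ERangeError do
        Result := Result div 2; // hit the internal size check mentioned above
    end;
  end;
end;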
Also: giving an int64 length and getting no exception would not mean that the array has the intended length. Consider this code:
procedure TForm1.Button1Click(Sender: TObject);
var
  a: array of byte;
  l: int64;
begin
  l := $4000000000;
  SetLength(a, l);
  Caption := IntToStr(Length(a));
end;
If range checking is off this will compile without hints and warnings, and run without exceptions. It has only the slight problem that the array has length 0 after the call to SetLength(). So for the checks in your question you should definitely read back the length of the dynamic array after a successful SetLength(); it is the only way to be sure that the compiler and runtime did what you intended.
Note that AFAIK the element count is also limited; more than 2^31-1 is unlikely. It could be that the byte size has the same limit (to avoid signed/unsigned issues in the RTL); I doubt that more than 2 GB is possible even in /3GB mode.
Maths:
max_array_bytesize = 2^31 - 9
max_array_elements_number = [(2^31 - 9) / array_element_bytesize]
Code:
max_array_elements_number := (MaxInt-Sizeof(Longint)*2) div SizeOf(array_element);
Example:
type
  TFoo = <type_description>;
  TFooDynArray = array of TFoo;

const
  cMaxMemBuffSize = MaxInt - SizeOf(Longint)*2;

var
  A : TFooDynArray;
  B : array of int64;
  MaxElems_A : integer;
  MaxElems_B : integer;
begin
  MaxElems_A := cMaxMemBuffSize div SizeOf(TFoo);
  MaxElems_B := cMaxMemBuffSize div SizeOf(int64);
  ShowMessage('Max elements number for array:'#13#10+
              '1) A is '+IntToStr(MaxElems_A)+#13#10+
              '2) B is '+IntToStr(MaxElems_B)
             );
end;
The following code which attempts to convert a value well beyond the double precision range
StrToFloat('1e99999999')
correctly raises an error for the invalid floating point value in Delphi 10.2.3 with the Windows 32-bit compiler, but when compiled with the Windows 64-bit compiler, it silently returns 0 (zero).
Is there a way to have StrToFloat report an error when the floating point value is incorrect?
I have tried TArithmeticException.exOverflow, but this has no effect in that case.
I also tried TArithmeticException.exPrecision but it triggers in many usual approximation cases (for instance, it triggers when converting '1e9').
Issue was noticed with Delphi 10.2 update 3
Addendum: to work around the issue, I have started a clean-room alternative implementation of string-to-double conversion; an initial version with tests can be found in dwscript commit 2ba1d4a.
This is a defect that is present in all versions of Delphi that use the PUREPASCAL version of StrToFloat. That maps through to InternalTextToExtended which reads the exponent like this:
function ReadExponent: SmallInt;
var
  LSign: SmallInt;
begin
  LSign := ReadSign();
  Result := 0;
  while LCurrChar.IsDigit do
  begin
    Result := Result * 10;
    Result := Result + Ord(LCurrChar) - Ord('0');
    NextChar();
  end;
  if Result > CMaxExponent then
    Result := CMaxExponent;
  Result := Result * LSign;
end;
The problem is the location of
if Result > CMaxExponent then
This test is meant to be inside the loop, and in the asm x86 version of this code it is. As coded above, with the max exponent test outside the loop, the 16 bit signed integer result value is too small for your exponent of 99999999. As the exponent is read, the value in Result overflows, and becomes negative. So for your example it turns out that an exponent of -7937 is used rather than 99999999. Naturally this leads to a value of zero.
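To make the idea concrete, here is a hedged sketch of a loop that cannot wrap, using a wider accumulator. The helper names (ReadSign, LCurrChar, NextChar, CMaxExponent) are taken from the snippet above; this is only an illustration of where the clamp belongs, not the actual RTL code or the official fix.

function ReadExponent: SmallInt;
var
  LSign: SmallInt;
  LValue: Integer; // wider accumulator, so a long digit run cannot wrap negative
begin
  LSign := ReadSign();
  LValue := 0;
  while LCurrChar.IsDigit do
  begin
    if LValue <= CMaxExponent then // stop accumulating once the value is already out of range
      LValue := LValue * 10 + (Ord(LCurrChar) - Ord('0'));
    NextChar();
  end;
  if LValue > CMaxExponent then
    LValue := CMaxExponent; // clamp before narrowing back to SmallInt
  Result := LValue * LSign;
end;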
This is a clear bug and I have submitted a bug report: RSP-20333.
As for how to get around the problem, I'm not aware of another function in the Delphi RTL that performs this task. So I think you will need to do one of the following:
Roll your own StrToFloat.
Pre-process the string, and handle out of range exponents before they reach StrToFloat (a sketch of this option follows the list).
Use one of the functions from the C runtime library that performs the same task.
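For the second option, a hedged sketch of such a pre-processing wrapper might look like this. The cutoff of 5000 is an arbitrary bound chosen to be safely beyond any exponent a Double or Extended can represent, and the sketch also rejects extremely negative exponents, which a stricter implementation might instead flush to zero; SysUtils is assumed for CharInSet, EConvertError and StrToFloat.

function StrictStrToFloat(const S: string): Double;
var
  p, i, Exp: Integer;
begin
  p := Pos('E', UpperCase(S));
  if p > 0 then
  begin
    i := p + 1;
    if (i <= Length(S)) and CharInSet(S[i], ['+', '-']) then
      Inc(i);
    Exp := 0;
    while (i <= Length(S)) and CharInSet(S[i], ['0'..'9']) and (Exp <= 5000) do
    begin
      Exp := Exp * 10 + (Ord(S[i]) - Ord('0'));
      Inc(i);
    end;
    if Exp > 5000 then // exponent is far outside anything representable
      raise EConvertError.CreateFmt('''%s'' is not a valid floating point value', [S]);
  end;
  Result := StrToFloat(S);
end;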
Finally, I am grateful for you asking this question because I can see that my own program is affected by this defect and so I can now fix it!
Update:
You may also be interested to look at a related bug that I found when investigating: RSP-20334. It might surprise you to realise that StrToFloat('߀'), when using the PUREPASCAL version of StrToFloat, returns 1936.0. The trick is that the character being passed to StrToFloat is a non-Latin digit, in this case U+07C0.
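For reference, the call above can be written with the code point spelled out so the non-Latin digit survives copy and paste. This is only a hedged snippet; the 1936.0 result is the behaviour reported in RSP-20334 for the PUREPASCAL code path, not something this sketch guarantees.

var
  s: string;
  v: Extended;
begin
  s := #$07C0;    // NKO DIGIT ZERO, U+07C0
  v := StrToFloat(s);
  Writeln(v:0:1); // RSP-20334 reports 1936.0 from the PUREPASCAL StrToFloat
end;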
I'm porting a Delphi project to 64 bits and I have a problem with a line of code which has the IN operator.
The compiler raises this error
E2010 Incompatible types: 'Integer' and 'Int64'
I wrote this sample app to replicate the problem.
{$APPTYPE CONSOLE}
{$R *.res}
uses
  System.SysUtils;
Var
  I : Integer;
  L : Array of string;
begin
  try
    if I in [0, High(L)] then
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
  readln;
end.
This code works fine in 32 bits, but why doesn't it compile in Delphi XE2 64 bits? How can I fix this issue?
UPDATE
It seems my post caused a lot of confusion (sorry for that). Just to explain: the original code which I'm porting is more complex, and I only wrote this code as a sample to illustrate the issue. The original code uses the in operator to check whether a value (less than 255) belongs to a group of values (all less than or equal to 255), like so
i in [0,1,3,50,60,70,80,127,High(LArray)]
This code can't be compiled because the High function returns an 8-byte value, which cannot be used as a set element; the in operator can only be used with sets of ordinal values.
FYI, the size of the result returned by the High function differs depending on the parameter passed as argument.
Check this sample
Writeln(SizeOf(High(Byte)));
Writeln(SizeOf(High(Char)));
Writeln(SizeOf(High(Word)));
Writeln(SizeOf(High(Integer)));
Writeln(SizeOf(High(NativeInt)));
Writeln(SizeOf(High(TBytes)));
Finally, you can fix your code by casting the result of the High function to Integer.
if I in [0, Integer(High(L))] then
UPDATE
Check the additional info provided by David, and remember to be very careful when you use the in operator to check the membership of a value in a set with variable values. The in operator only checks the least significant byte of each element (in Delphi 32 bits).
Check this sample
i:=257;
Writeln( 1 in [i]);
This returns True because the low byte of 257 is 1.
And in Delphi 64 bits, values greater than 255 are removed from the set. So this code
i:=257;
Writeln( 1 in [i]);
will return False because it is equivalent to
Writeln( 1 in []);
What RRUZ says is quite correct.
To add a little bit more explanation: in 64 bit Delphi, dynamic array indices can be 64 bits wide. This is clearly needed, for example, when working with a large TBytes memory block. And so the high function must return a value of a type wide enough to hold all possible indices. So high, when applied to a dynamic array, returns a value of type Int64.
Once you start compiling 64 bit code the in operator is unsuited to the problem you are trying to solve. Whilst you could use the cast that RRUZ suggests, it may be clearer to write the code like this
if (I=low(L)) or (I=high(L)) then
Whilst the in operator makes for quite readable code, it is my opinion that a cast to Integer is not acceptable here. That will simply set a trap for you to fall into when you first have an array with more than high(Integer) elements. When that happens the code with the cast will stop working.
But in fact the problems run far deeper than this. The in version of the code fails long before you reach high(Integer) elements. It turns out that your code, whilst it compiles, does not really work. For example, consider this program:
program WeirdSets;
{$APPTYPE CONSOLE}
uses
SysUtils;
var
a: array of Integer;
begin
SetLength(a, 257);
Writeln(BoolToStr(Length(a) in [0, Length(a)], True));
end.
You would expect this program to output True but in fact it outputs False. If instead you were to write
Writeln(BoolToStr(Length(a) in [0, 257], True));
then the compiler reports:
[DCC Error] WeirdSets.dpr(9): E1012 Constant expression violates subrange bounds
The fundamental issue here is that sets are limited to 256 elements so as soon as you have an array with length greater than that, your code stops working.
Sadly, Delphi's support for sets is simply inadequate and is in urgent need of attention.
I also wonder whether you actually meant to write
if I in [0..High(L)] then
If so then I would recommend that you use the InRange function from Math.
if InRange(I, 0, High(L)) then
or even better
if InRange(I, low(L), High(L)) then
The most serious problem with the OP code is that in operator is limited to set size, i.e. [0..255]. Try this in any 32 bit version of Delphi to avoid the 64 bit issue:
var
I: Integer;
L: array of Integer;
begin
SetLength(L, 1000);
I:= 999;
Assert(I in [0, High(L)]); // fails !
end;
The OP is lucky if Length(L) <= 256 always, otherwise it is a bug you probably never thought of.
To find this bug switch range checking on:
{$R+}
procedure TForm1.Button2Click(Sender: TObject);
var
I: Integer;
A: array of Integer;
begin
SetLength(A, 1000);
I:= 999;
if I in [0, High(A)] then ShowMessage('OK!'); // Project .. raised exception
// class ERangeError with message 'Range check error'.
end;
First of all, I'm not a very experienced programmer. I'm using Delphi 2009 and have been working with sets, which seem to behave very strangely and even inconsistently to me. I guess it might be me, but the following looks like there's clearly something wrong:
unit test;

interface

uses
  Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms,
  Dialogs, StdCtrls;

type
  TForm1 = class(TForm)
    Button1: TButton;
    Edit1: TEdit;
    procedure Button1Click(Sender: TObject);
  private
    test: set of 1..2;
  end;

var
  Form1: TForm1;

implementation

{$R *.dfm}

procedure TForm1.Button1Click(Sender: TObject);
begin
  test := [3];
  if 3 in test then
    Edit1.Text := '3';
end;

end.
If you run the program and click the button, then, sure enough, it will display the string "3" in the text field. However, if you try the same thing with a number like 100, nothing will be displayed (as it should, in my opinion). Am I missing something or is this some kind of bug? Advice would be appreciated!
EDIT: So far, it seems that I'm not alone with my observation. If someone has some inside knowledge of this, I'd be very glad to hear about it. Also, if there are people with Delphi 2010 (or even Delphi XE), I would appreciate it if you could do some tests on this or even general set behavior (such as "test: set of 256..257") as it would be interesting to see if anything has changed in newer versions.
I was curious enough to take a look at the compiled code that gets produced, and I figured out the following about how sets work in Delphi 2010. It explains why you can do test := [8] when test: set of 1..2, and why Assert(8 in test) fails immediately after.
How much space is actually used?
A set of byte has one bit for every possible byte value, 256 bits in all, 32 bytes. A set of 1..2 requires 1 byte, but surprisingly a set of 100..101 also requires one byte, so Delphi's compiler is pretty smart about memory allocation. On the other hand, a set of 7..8 requires 2 bytes, and a set based on an enumeration that only includes the values 0 and 101 requires (gasp) 13 bytes!
Test code:
TTestEnumeration = (te0=0, te101=101);
TTestEnumeration2 = (tex58=58, tex101=101);

procedure Test;
var
  A: set of 1..2;
  B: set of 7..8;
  C: set of 100..101;
  D: set of TTestEnumeration;
  E: set of TTestEnumeration2;
begin
  ShowMessage(IntToStr(SizeOf(A))); // => 1
  ShowMessage(IntToStr(SizeOf(B))); // => 2
  ShowMessage(IntToStr(SizeOf(C))); // => 1
  ShowMessage(IntToStr(SizeOf(D))); // => 13
  ShowMessage(IntToStr(SizeOf(E))); // => 6
end;
Conclusions:
The basic model behind the set is the set of byte, with 256 possible bits, 32 bytes.
Delphi determines the required contiguous sub-range of the total 32-byte range and uses that. For the case set of 1..2 it probably only uses the first byte, so SizeOf() returns 1. For the set of 100..101 it probably only uses the 13th byte, so SizeOf() again returns 1. For the set of 7..8 it's probably using the first two bytes, so we get SizeOf()=2. This is an especially interesting case, because it shows us that bits are not shifted left or right to optimize storage (see the small sketch below). The other interesting case is the set of TTestEnumeration2: it uses 6 bytes, even though there are lots of unusable bits around there.
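A small illustrative sketch of that last conclusion (this is not compiler internals, just the arithmetic implied above: an element keeps the byte and bit position it would have in the full 0..255 bitmap; SysUtils is assumed for Format):

// Where an element lands if bits are not shifted:
// byte index = value div 8, bit index = value mod 8.
procedure ShowBitPosition(Value: Byte);
begin
  Writeln(Format('element %d -> byte %d, bit %d', [Value, Value div 8, Value mod 8]));
end;

// ShowBitPosition(7);   // byte 0, bit 7
// ShowBitPosition(8);   // byte 1, bit 0  -> which is why set of 7..8 needs two bytes
// ShowBitPosition(100); // byte 12, bit 4 -> set of 100..101 fits entirely in the 13th byte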
What kind of code is generated by the compiler?
Test 1, two sets, both using the "first byte".
procedure Test;
var A: set of 1..2;
B: set of 2..3;
begin
A := [1];
B := [1];
end;
For those who understand assembler, have a look at the generated code yourself. For those who don't, the generated code is equivalent to:
begin
A := CompilerGeneratedArray[1];
B := CompilerGeneratedArray[1];
end;
And that's not a typo: the compiler uses the same pre-compiled value for both assignments. CompilerGeneratedArray[1] = 2.
Here's another test:
procedure Test2;
var A: set of 1..2;
B: set of 100..101;
begin
A := [1];
B := [1];
end;
Again, in pseudo-code, the compiled code looks like this:
begin
A := CompilerGeneratedArray1[1];
B := CompilerGeneratedArray2[1];
end;
Again, no typo: this time the compiler uses different pre-compiled values for the two assignments, CompilerGeneratedArray1[1]=2 while CompilerGeneratedArray2[1]=0. The compiler-generated code is smart enough not to overwrite the bits in "B" with invalid values (because B holds information about bits 96..103), yet it uses very similar code for both assignments.
Conclusions
All set operations work perfectly well IF you test with values that are in the base set. For the set of 1..2, test with 1 and 2. For the set of 7..8, only test with 7 and 8. I don't consider the set type to be broken. It serves its purpose very well all over the VCL (and it has a place in my own code as well).
In my opinion the compiler generates sub-optimal code for set assignments. I don't think the table-lookups are required, the compiler could generate the values inline and the code would have the same size but better locality.
My opinion is that having the set of 1..2 behave the same as set of 0..7 is a side-effect of the lack of optimization in the compiler.
In the OP's case (var test: set of 1..2; test := [7]) the compiler should generate an error. I would not classify this as a bug, because I don't think the compiler's behavior is supposed to be defined in terms of "what to do on bad code by the programmer" but in terms of "what to do with good code by the programmer". Nonetheless the compiler should generate the Constant expression violates subrange bounds error, as it does if you try this code:
procedure Test;
var t: 1..2;
begin
t := 3;
end;
At runtime, if the code is compiled with {$R+}, the bad assignment should raise an error, as it does if you try this code:
procedure Test;
var t: 1..2;
i: Integer;
begin
{$R+}
for i:=1 to 3 do
t := i;
{$R-}
end;
According to the official documentation on sets (my emphasis):
The syntax for a set constructor is: [ item1, ..., itemn ] where each item is either an expression denoting an ordinal of the set's base type
Now, according to Subrange types:
When you use numeric or character constants to define a subrange, the base type is the smallest integer or character type that contains the specified range.
Therefore, if you specify
type
TNum = 1..2;
then the base type will be byte (most likely), and so, if
type
TSet = set of TNum;
var
test: TSet;
then
test := [255];
will work, but not
test := [256];
all according to the official specification.
I have no "inside knowledge", but the compiler logic seems rather transparent.
First, the compiler thinks that any set like set of 1..2 is a subset of set of 0..255. That is why set of 256..257 is not allowed.
Second, the compiler optimizes memory allocation, so it allocates only 1 byte for set of 1..2. The same 1 byte is allocated for set of 0..7, and there seems to be no difference between the two sets at the binary level. In short, the compiler allocates as little memory as possible, with alignment taken into account (that means, for example, that the compiler never allocates 3 bytes for a set: it allocates 4 bytes, even if the set fits into 3 bytes, like set of 1..20).
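A quick, hedged way to check that claim yourself; the expected numbers in the comments are what the paragraph above predicts, not a guarantee, since the exact sizes can depend on the compiler version.

procedure CheckSetSizes;
var
  SmallSet: set of 1..2;
  WideSet: set of 1..20; // spans three bytes of the underlying bitmap
begin
  Writeln(SizeOf(SmallSet)); // expected: 1
  Writeln(SizeOf(WideSet));  // expected: 4 rather than 3, per the alignment claim above
end;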
There is some inconsistency in a way the compiler treats sets, which can be demonstrated by the following code sample:
type
  TTestSet = set of 1..2;

  TTestRec = packed record
    FSet: TTestSet;
    FByte: Byte;
  end;

var
  Rec: TTestRec;

procedure TForm9.Button3Click(Sender: TObject);
begin
  Rec.FSet:= [];
  Rec.FByte:= 1;           // as a side effect we set 8-th element of FSet
                           // (FSet actually has no 8-th element - only 0..7)
  Assert(8 in Rec.FSet);   // The assert should fail, but it does not!
  if 8 in Rec.FSet then    // another display of the bug
    Edit1.Text := '8';
end;
A set is stored as a number and can actually hold values that are not in the enumeration on which the set is based. I would expect an error, at least when Range Checking is on in the compiler options, but this doesn't seem to be the case. I'm not sure if this is a bug or by design though.
[edit]
It is odd, though:
type
TNum = 1..2;
TSet = set of TNum;
var
test: TSet;
test2: TNum;
test2 := 4; // Not accepted
test := [4]; // Accepted
Off the top of my head, this was a side effect of allowing non-contiguous enumeration types.
The same holds for .NET bitflags: because in both cases the underlying types are compatible with integer, you can insert any integer in it (in Delphi limited to 0..255).
--jeroen
As far as I'm concerned, no bugs there.
For example, take the following code:
var aByte: Byte;
begin
aByte := 255;
aByte := aByte + 1;
if aByte = 0 then
ShowMessage('Is this a bug?');
end;
Now, you can get 2 results from this code. If you compiled with Range Checking TRUE, an exception will be raised on the 2nd line. If you did NOT compile with Range Checking, the code will execute without any error and display the message dialog.
The situation you encountered with the sets is similar, except that there is no compiler switch to force an exception to be raised in this situation (Well, as far as I know...).
Now, from your example:
private
test: set of 1..2;
That essentially declares a byte-sized set (if you call SizeOf(Test), it should return 1). A byte-sized set can only contain 8 elements; in this case, it can contain [0] to [7].
Now, some examples:
begin
test := [8]; //Here, we try to set the 9th bit of a Byte sized variable. It doesn't work
Test := [4]; //Here, we try to set the 5th bit of a Byte Sized variable. It works.
end;
Now, I need to admit I would kind of expect the "Constant expression violates subrange bounds" error on the first line (but not on the 2nd).
So yeah... there might be a small issue with the compiler.
As for your results being inconsistent... I'm pretty sure that using set values outside the set's subrange isn't guaranteed to give consistent results over different versions of Delphi (maybe not even over different compiles). So if your range is 1..2, stick with [1] and [2].
I would like to use the FFTW C library from Delphi 2009, and according to this documentation:
http://www.fftw.org/install/fftw_usage_from_delphi.txt
to increase the performance inside the FFTW library (so that it can use SIMD extensions), arrays of either Single (float) or Double (double) that are passed in need to be aligned at 4- or 8-byte boundaries. I found documentation talking about alignment of record structures, but nothing specific about arrays. Is there a way to do this in Delphi 2009?
So the code (copied from the above documentation) would look like this:
var
in, out : Array of Single; // Array aligned at 4 byte boundary
plan : Pointer;
{$APPTYPE CONSOLE}
begin
...
SetLength(in, N);
SetLength(out, N);
plan := _fftwf_plan_dft_1d(dataLength, @in[0], @out[0],
FFTW_FORWARD, FFTW_ESTIMATE);
Also, in the above documentation they talk about 8- and 16-byte boundaries, but it looks to me like it should be 4- and 8-byte boundaries; if anyone could clear that up too, that would be great.
Thanks,
Bruce
Note that you can create data structures with any custom alignment you might need. For example to align your FFT data on 128 byte boundaries:
procedure TForm1.Button1Click(Sender: TObject);
type
  TFFTData = array[0..63535] of double;
  PFFTData = ^TFFTData;
var
  Buffer: pointer;
  FFTDataPtr: PFFTData;
  i: integer;
const
  Alignment = 128; // needs to be power of 2
begin
  GetMem(Buffer, SizeOf(TFFTData) + Alignment);
  try
    FFTDataPtr := PFFTData((LongWord(Buffer) + Alignment - 1)
      and not (Alignment - 1));
    // use data...
    for i := Low(TFFTData) to High(TFFTData) do
      FFTDataPtr[i] := i * pi;
  finally
    FreeMem(Buffer);
  end;
end;
Edit:
Regarding the comment about twice the memory being allocated: the stack variable FFTDataPtr is of type PFFTData, not TFFTData, so it's a pointer. That's not obvious because of the syntax enhancement allowing the ^ to be omitted when dereferencing the pointer. The memory is allocated with GetMem(), and the typecast is employed to work with the proper type instead of the untyped memory block.
Delphi provides no way to control the alignment of any memory it allocates. You're left to either rely on the documented behavior for the memory manager currently installed, or allocate memory with some slack space and then align it yourself, as Mghie demonstrates.
If you're concerned that Delphi's memory manager is not providing the desired alignment for dynamic arrays, then you can go ahead and use the memory functions provided by the DLL. The note you cite mentions _fftwf_malloc and _fftwf_free, but then it gives some kind of warning that memory allocated from _fftwf_malloc "may not be accessed directly from Delphi." That can't be what the authors meant to say, though, because that's not how memory works in Windows. The authors probably meant to say that memory allocated by _fftwf_malloc cannot be freed by Delphi's FreeMem, and memory allocated by Delphi's GetMem cannot be freed by _fftwf_free. That's nothing special, though; you always need to keep your memory-management functions paired together.
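As a hedged sketch of that pairing rule, using only the routine names already mentioned in the FFTW note (N and the Single element type are assumptions made for illustration):

var
  p: Pointer;
begin
  p := _fftwf_malloc(N * SizeOf(Single)); // N: assumed element count
  try
    // ... hand p to the FFTW planning/execution routines ...
  finally
    _fftwf_free(p); // freed by the matching FFTW routine, never by FreeMem
  end;
end;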
If you use _fftwf_malloc to get your array, then you can access it through an ordinary pointer type. For example:
var
dataIn, dataOut: PDouble;
begin
dataIn := _fftwf_malloc(...);
dataOut := _fftwf_malloc(...);
_fftwf_plan_dft_1d(dataLength, dataIn, dataOut,
FFTW_FORWARD, FFTW_ESTIMATE);
As of Delphi 2009, you can even use array syntax on those pointers:
dataIn[0] := 3.5;
dataIn[2] := 7.3;
In order to enable that, use the {$POINTERMATH ON} compiler directive; it's not enabled by default except for the character-pointer types.
The disadvantage to manually allocating arrays like this is that you lose range checking. If you index beyond the end of an array, you won't get an easy-to-recognize ERangeError exception anymore. You'll get corrupted memory, access violations, or mysteriously crashing programs instead.
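If that worries you, one hedged way to get a bounds check back is to route element access through a small helper that knows the allocated count. DataLen is an assumed parameter holding the element count you allocated, and the pointer indexing relies on {$POINTERMATH ON} as described above; this is a sketch, not a complete wrapper.

procedure SetSample(Data: PDouble; DataLen, Index: Integer; const Value: Double);
begin
  if (Index < 0) or (Index >= DataLen) then
    raise ERangeError.CreateFmt('Index %d is outside 0..%d', [Index, DataLen - 1]);
  Data[Index] := Value; // PDouble indexing requires {$POINTERMATH ON}
end;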
Heap blocks are, IIRC, always aligned to 16-byte boundaries by FastMM (the old D7 memory manager aligned to 8). I don't know about ShareMem, since I don't use it.
And dynamic arrays are heap-based structures. OTOH, dynamic arrays could maybe become unaligned (from 16 to 8) because a length and a reference count are prefixed. Easiest is to simply print
ptruint(@in[0]) in hex and see if the end is 0 or 8. (*)
Note that there are fftw headers in FPC (packages/fftw); AFAIK it was recently fixed for 64-bit even.
I'm not aware of Stack alignment directives in Delphi. Maybe they are automatically "naturally" aligned though.
(*) ptruint is FPC speak for an unsigned integer type that is SizeOf(pointer) large: Cardinal on 32-bit, QWord on 64-bit.
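A small check along those lines in Delphi syntax (a sketch; on 32-bit Delphi, Cardinal is pointer-sized and stands in for FPC's ptruint, and SysUtils is assumed for IntToHex):

var
  data: array of Single;
begin
  SetLength(data, 1024);
  // A trailing hex digit of 0 means the data is 16-byte aligned; 8 means only 8-byte aligned.
  Writeln(IntToHex(Cardinal(@data[0]), 8));
end;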
This is another possible variant of mghie's solution:
procedure TForm1.Button1Click(Sender: TObject);
type
TFFTData = array [0..0] of Double;
PFFTData = ^TFFTData;
var
AllocatedBuffer: Pointer;
AlignedArray: PFFTData;
i: Integer;
const
cFFTDataSize=63536;
begin
GetMem(AllocatedBuffer, cFFTDataSize*SizeOf(Double) + 16); // e.g. 16-byte boundary alignment
try
AlignedArray := PFFTData((Integer(AllocatedBuffer) and $FFFFFFF0) + 16);
// use data...
for i := 0 to cFFTDataSize-1 do
AlignedArray[i] := i * Pi;
finally
FreeMem(AllocatedBuffer);
end;
end;
I've refactored the piece of code to make it more meaningful and to use a similar manual alignment fix-up technique.
I originally had an array[1..1000] that was defined as a global variable.
But now I need that to be n, not 1000 and I don't find out n until later.
I know what n is before I fill the array up but I need it to be global therefore need a way to define the size of a global array at run time.
Context is filling an array with a linear transformation of the bytes in a file.
I don't know how big the file is until someone wants to open it and the files can be of any size.
As of Delphi 4, Delphi supports dynamic arrays. You can modify their sizes at run time and they will retain the data you stored in other elements at the old size. They can hold elements of any homogeneous type, including records and other arrays. You can declare a dynamic array the same as you declare normal, "static" arrays, but simply omit the array bounds:
var
ArthurArray: array of TForm;
Although static arrays allow you to specify both the lower and upper bound, the low index of a dynamic array is always zero. The high index is given by the High function, which always returns one less than the length of the array. For any dynamic array x, High(x) = Length(x)-1.
A global variable can be accessed by any code, including local procedures.
A global variable of dynamic-array type will be initialized to be an empty array. Its length will be zero and High called on that array will be -1. Low on that array will still return zero.
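A tiny check of those statements; the assertions simply make the claims explicit.

var
  GlobalData: array of Integer; // a global dynamic array starts out empty (nil)
begin
  Assert(Length(GlobalData) = 0);
  Assert(High(GlobalData) = -1);
  Assert(Low(GlobalData) = 0);
end.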
At any time, you may resize a dynamic array. Use the SetLength function, just as you can do with strings:
var
NumElements: Integer;
begin
NumElements := GetNumberOfArthurForms();
SetLength(ArthurArray, NumElements);
end;
If you have a multidimensional array, you can set their lengths in a loop:
var
matrix: array of array of Double;
i: Integer;
begin
SetLength(matrix, height);
for i := 0 to height - 1 do
SetLength(matrix[i], width);
end;
There's a shortcut for that to set the lengths of all the inner arrays at once:
begin
SetLength(matrix, height, width);
end;
Like I mentioned, dynamic arrays keep their old values when you resize them:
var
data: array of string;
begin
SetLength(data, 2);
data[1] := 'foo';
SetLength(data, 20);
Assert(data[1] = 'foo');
end;
But if you shorten the array, any elements that resided beyond the new last element are gone forever:
begin
SetLength(data, 20);
data[15] := 'foo';
SetLength(data, 2);
// data[15] does not exist anymore.
SetLength(data, 16);
writeln(data[15]); // Should print an *empty* line.
end;
My demonstrations above used strings. Strings are special in Delphi; they're managed by the compiler through reference counts. Because of that, new dynamic-array elements of type string are initialized to be empty. But if I had used integers instead, there would be no guarantee of the values of new elements. They might be zero, but they might be anything else, too, just like the initial values of standalone local variables.
The Delphi 7 help files are very good, I'm told. Please read more about dynamic arrays there. You can find demonstrations of their use throughout the VCL and RTL source code provided in your Delphi installation, as well as in nearly any Delphi code example produced in the last 10 years.
First, here's a general answer to the first part of your question:
If your array is no longer static, you might want to consider using a TList, a TStringList or one of the many container classes in the Contnrs unit.
They may better represent what you are doing, provide additional capabilities you might need (e.g. sorting or name/value pairs), grow dynamically as you need them, and have been very well optimized.
Then you said:
"Context is filling an array with a linear transformation of the bytes in a file. I don't know how big the file is until someone wants to open it and the files can be of any size."
For your specific problem, I would load the bytes in a file using:
MyFileStream := TFileStream.Create(Filename, fmOpenRead or fmShareDenyWrite);
Size := MyFileStream.Size - MyFileStream.Position;
SetLength(Buffer, Size);
MyFileStream.Read(Buffer[0], Size);
Then you can easily use a PChar pointer to go through each character or even each byte in the Buffer one by one and transform them the way you need to.
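And here is a hedged sketch of that last step, assuming Buffer was declared as a dynamic array of Byte (for example TBytes) and filled by the TFileStream code above; the coefficients 3 and 7 are placeholders for whatever linear transformation you actually need.

var
  i: Integer;
begin
  for i := 0 to Length(Buffer) - 1 do
    Buffer[i] := Byte(3 * Buffer[i] + 7); // a*x + b, truncated to a byte (i.e. mod 256)
end;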