As a follow-up to a previous answer to another question, I became curious about how heap allocations work in a loop.
Take the following two scenarios for example:
Declaration:
SomeList: TObjectList<TSomething>;
Scenario 1:
begin
  for X := 1 to 10 do
    SomeList[X].DoSomething;
end;
Scenario 2:
var
  S: TSomething;
begin
  for X := 1 to 10 do begin
    S := SomeList[X];
    S.DoSomething;
  end;
end;
Now what I'm curious about is how heap allocations work in either scenario. Scenario 1 is directly calling the list item in each loop iteration, and I'm wondering if it adds to the heap and releases it each time the loop iterates. The second scenario, on the other hand, obviously has one heap allocation, simply by declaring a local variable.
What I'm wondering is which scenario puts the heavier load on heap allocation (heap allocation being a leading cause of performance issues)?
Now what I'm curious about is how heap allocations work in either scenario.
There are no heap allocations in your example (unless DoSomething() is allocating memory internally).
Scenario 1 is directly calling the list item in each loop iteration
So is Scenario 2.
I'm wondering if it adds to the heap and releases it each time the loop iterates.
Nothing is being added to the heap.
The second scenario, on the other hand, obviously has one heap allocation, simply by declaring a local variable.
Local variables are allocated on the stack, not on the heap. Variables can point at memory on the heap, though. Your S variable in Scenario 2 does, because TObject-derived classes are always allocated on the heap. S is just a local variable on the stack that points at the heap memory occupied by the TSomething object.
What I'm wondering is which scenario puts the heavier load on heap allocation (heap allocation being a leading cause of performance issues)?
Neither, because there is no heap allocation in your example.
Related
In the following procedure, will the array be allocated on the stack?
procedure One;
var
  arr: array[0..1023] of byte;
begin
end;
What is the largest item that can go on the stack?
Is there a speed difference between accessing variable on the stack and on the heap?
In the following procedure, will the array be allocated on the stack?
Yes, provided that the local variable is not captured by an anonymous method. Such local variables reside on the heap.
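As a minimal sketch of that exception (the procedure name is made up; TProc comes from System.SysUtils):

uses
  System.SysUtils;

procedure CapturedLocal;
var
  arr: array[0..1023] of byte;
  P: TProc;
begin
  // Because arr is captured by the anonymous method, the compiler moves it
  // into a hidden heap-allocated object instead of keeping it on the stack.
  P := procedure
       begin
         arr[0] := 1;
       end;
  P();
end;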
What is the largest item that can go on the stack?
It depends on how large the stack is, and how much of the stack has already been used, and how much of the stack is used by calls made by the function itself. The stack is a fixed size, determined when the thread is created. The stack overflows if it grows beyond that size. On Windows at least, the default stack size is 1MB, so I would not expect you to encounter problems with a 1KB array as can be seen here.
Is there a speed difference between accessing variable on the stack and on the heap?
By and large no, but again this depends. Variables on the stack are probably accessed more frequently, and so more likely to be in cache. But for a decently sized object, like the 1KB array we can see here, I would not expect there to be any difference in access time. In terms of the underlying memory architecture, there's no difference between stack and heap; it's all just memory.
Now, where there is a difference in performance is in allocation. Heap allocation is more expensive than stack allocation. And especially if you have a multi-threaded application, heap allocation can be a bottleneck. In particular, the default Delphi memory manager does not scale well in multi-threaded use.
Is there a maximum number of pointers that can be allocated? I'm working on a function that allocates various pointers to various records. After some amount (x) that I have not calculated, the AllocMem function allocates a pointer that overwrites the existing pointers. Does anyone have any tips?
function NewObject(ID: Integer): boolean;
var
  P: PNewObject;
begin
  P := AllocMem(SizeOf(TNewObject));
  P^.ID := ID;
  ...
Pointers that were allocated will only be released when the program closes!
There is no maximum number of pointers that can be allocated. Dynamic memory allocation may fail if the memory manager is unable to find a suitable block of memory. In that scenario EOutOfMemory is raised.
After some amount (x) that I have not calculated, the AllocMem function allocates a pointer that overwrites the existing pointers.
No, that is not the case. The dynamic memory allocation functions will never return a block of memory that is already in use.
It sounds as though your program allocates but never deallocates. That might be a tenable approach if you have a garbage collector at hand, but this is not the case for you. Perhaps you need to consider deallocating when you are done with the memory.
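As a sketch of that cleanup, memory obtained with AllocMem is returned with FreeMem (the helper name is made up; Finalize is only needed if TNewObject contains managed fields such as strings or dynamic arrays):

procedure ReleaseObject(P: PNewObject);
begin
  if P <> nil then
  begin
    Finalize(P^); // only needed if TNewObject has managed (reference-counted) fields
    FreeMem(P);   // pairs with the AllocMem call above
  end;
end;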
Let's go back to basics. Frankly, I have never used the New and Dispose functions before. However, after I read the New() documentation and the included examples on the Embarcadero Technologies website and the Delphi Basics explanation of New(), it leaves questions in my head:
What are the advantages of using System.New() instead of a local variable, other than just spare a tiny amount of memory?
Common code examples for New() are more or less as follows:
var
  pCustRec: ^TCustomer;
begin
  New(pCustRec);
  pCustRec^.Name := 'Her indoors';
  pCustRec^.Age := 55;
  Dispose(pCustRec);
end;
In what circumstances is the above code more appropriate than the code below?
var
  CustRec: TCustomer;
begin
  CustRec.Name := 'Her indoors';
  CustRec.Age := 55;
end;
If you can use a local variable, do so. That's a rule with practically no exceptions. This results in the cleanest and most efficient code.
If you need to allocate on the heap, use dynamic arrays, GetMem or New. Use New when allocating a record.
Examples of being unable to use the stack include structures whose size is not known at compile time, or very large structures. But for records, which are the primary use case for New, these concerns seldom apply.
So, if you are faced with a choice of stack vs heap for a record, invariably the stack is the correct choice.
From a different perspective:
Both can suffer from buffer overflow and can be exploited.
If a local variable overflows, you get stack corruption.
If a heap variable overflows, you get heap corruption.
Some say that stack corruptions are easier to exploit than heap corruptions, but that is not true in general.
Note there are various mechanisms in operating systems, processor architectures, libraries and languages that try to help prevent these kinds of exploits.
For instance there is DEP (Data Execution Prevention), ASLR (Address Space Layout Randomization) and more are mentioned at Wikipedia.
A statically sized local variable reserves space on the limited stack. Allocated memory is located on the heap, which is basically all of the available memory.
As mentioned, the stack space is limited, so you should avoid large local variables and also large parameters which are passed by value (absence of var/const in the parameter declaration).
A word on memory usage:
1. Simple value types (Integer, Char, Double, etc.) are located directly on the stack. The number of bytes used can be determined with the SizeOf() function. (For a string, only the reference lives on the stack; see below.)
2. The same applies to record variables and arrays.
3. Pointers and object references require 4 bytes (32-bit) or 8 bytes (64-bit).
Every object (that is, class instances) is always allocated on the heap.
Value structures (simple numerical types, and records containing only those types) can be allocated on the stack.
Dynamic array and string content is always allocated on the heap; only the reference pointer can live on the stack. If you write:
function MyFunc;
var s: string;
...
Here, 4/8 bytes are allocated on the stack, but the string content (the text characters) will always be allocated on the heap.
So using New()/Dispose() is of little benefit here. If the record contains no reference-counted types, you may use GetMem()/FreeMem() instead, since there are no internal pointers that need to be set to zero.
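For example, a sketch with a hypothetical record that holds no reference-counted fields:

type
  PPoint3D = ^TPoint3D;
  TPoint3D = record
    X, Y, Z: Double;
  end;

var
  P: PPoint3D;
begin
  GetMem(P, SizeOf(TPoint3D)); // no field initialization/finalization needed
  try
    P^.X := 1.0;
    P^.Y := 2.0;
    P^.Z := 3.0;
  finally
    FreeMem(P);
  end;
end;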
The main drawback of New()/Dispose() is that if an exception occurs, you need to use a try...finally block:
var
  pCustRec: ^TCustomer;
begin
  New(pCustRec);
  try
    pCustRec^.Name := 'Her indoors';
    pCustRec^.Age := 55;
  finally
    Dispose(pCustRec);
  end;
end;
Whereas allocating on the stack lets the compiler do it for you, in a hidden manner:
var
  CustRec: TCustomer;
begin // here a try... is generated
  CustRec.Name := 'Her indoors';
  CustRec.Age := 55;
end; // here a finally + CustRec cleanup is generated
That's why I almost never use New()/Dispose(), but allocate on stack, or even better within a class.
The usual case for heap allocation is when the object must outlive the function that created it:
It is being returned as a function result or via a var/out parameter, either directly or by returning some container (see the sketch after this list).
It's being stored in some object, struct or collection that is passed in or otherwise accessible inside the procedure (this includes being signaled/queued off to another thread).
In cases of limited stack space you might prefer allocation from the heap.
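A sketch of the first case, reusing the TCustomer record from the earlier example (PCustomer and the function name are made up):

type
  PCustomer = ^TCustomer;

function CreateCustomer(const AName: string; AAge: Integer): PCustomer;
begin
  New(Result);           // allocated on the heap, so it outlives this function
  Result^.Name := AName;
  Result^.Age := AAge;
end;
// The caller takes ownership and must call Dispose on the returned pointer.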
How can I solve this problem: I want to use a byte buffer of 1 MB or more. With a static array it is not possible because I get a stack overflow. I have thought about GetMem and FreeMem, or using TMemoryStream, but I have not understood exactly how to solve it. I need the buffer to copy a file using TFileStream with Read/Write.
I don't want to load the whole file into memory at once and then write it to disk in one go either; I have found a solution for that, but it is not what I need.
Thanks very much. Daniela.
If you have a stack overflow then your variable doesn't fit on the stack. You are clearly using a local variable.
Solve the problem by using the heap instead. Either GetMem or SetLength.
One easy solution is using a dynamic array. Its data is allocated on the heap, so you will avoid the stack overflow. The advantage over working directly with the memory allocation functions is that dynamic arrays are reference-counted and the memory they allocate is automatically freed once the last reference goes out of scope.
var
  buffer: array of byte;
begin
  SetLength(buffer, 100000);
  ...
  // Will be freed here as buffer goes out of scope
end;
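For the file-copy use case from the question, a sketch using TFileStream with a dynamic-array buffer (the file names are placeholders):

uses
  System.SysUtils, System.Classes;

procedure CopyFileBuffered(const Src, Dst: string);
var
  InStream, OutStream: TFileStream;
  Buffer: array of Byte;
  BytesRead: Integer;
begin
  SetLength(Buffer, 1024 * 1024); // 1 MiB buffer, allocated on the heap
  InStream := TFileStream.Create(Src, fmOpenRead or fmShareDenyWrite);
  try
    OutStream := TFileStream.Create(Dst, fmCreate);
    try
      repeat
        BytesRead := InStream.Read(Buffer[0], Length(Buffer));
        if BytesRead > 0 then
          OutStream.WriteBuffer(Buffer[0], BytesRead);
      until BytesRead = 0;
    finally
      OutStream.Free;
    end;
  finally
    InStream.Free;
  end;
end;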
Your buffer variable is allocated on the stack, and the default maximum stack size used by the Delphi compiler is 1 MiB. So one solution is to set a higher limit via the project options or the following global directive:
{$MAXSTACKSIZE 4194304} // eg. now maximum is 4 MiB
The other way is to use the heap instead of the stack, i.e. some form of dynamically allocated memory; in your case the best solution will probably be a dynamic array.
Performance note: stack allocation is faster than heap allocation.
I am a Delphi programmer.
In a program I have to generate bidimensional arrays with different "branch" lengths.
They are very big and the operation takes a few seconds (annoying).
For example:
var
  a: array of array of Word;
  i: Integer;
begin
  SetLength(a, 5000000);
  for i := 0 to 4999999 do
    SetLength(a[i], Diff_Values);
end;
I am aware of the command SetLength(a, dim1, dim2), but it is not applicable. Not even by setting a minimum value (> 0) for dim2 and continuing from there, because the minimum of dim2 is 0 (some "branches" can be empty).
So, is there a way to make it fast? Not just by 5..10% but really FAST...
Thank you.
When dealing with a large amount of data, there's a lot of work that has to be done, and this places a theoretical minimum on the amount of time it can be done in.
For each of 5 million iterations, you need to:
Determine the size of the "branch" somehow
Allocate a new array of the appropriate size from the memory manager
Zero out all the memory used by the new array (SetLength does this for you automatically)
Step 1 is completely under your control and can possibly be optimized. 2 and 3, though, are about as fast as they're gonna get if you're using a modern version of Delphi. (If you're on an old version, you might benefit from installing FastMM and FastCode, which can speed up these operations.)
The other thing you might do, if appropriate, is lazy initialization. Instead of trying to allocate all 5 million arrays at once, just do the SetLength(a, 5000000); at first. Then when you need to get at a "branch", first check if its length = 0. If so, it hasn't been initialized, so initialize it to the proper length. This doesn't save time overall, in fact it will take slightly longer in total, but it does spread out the initialization time so the user doesn't notice.
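As a sketch of that lazy check (GetDiffValues, j and SomeValue are placeholders; GetDiffValues would return the required length of branch i):

// Before touching a[i], allocate its branch on first use.
if Length(a[i]) = 0 then
  SetLength(a[i], GetDiffValues(i));
a[i][j] := SomeValue;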
If your initialization is already as fast as it will get, and your situation is such that lazy initialization can't be used here, then you're basically out of luck. That's the price of dealing with large amounts of data.
I just tested your exact code, with a constant for Diff_Values, timed it using GetTickCount() for rudimentary timing. If Diff_Values is 186 it takes 1466 milliseconds, if Diff_Values is 187 it fails with Out of Memory. You know, Out of Memory means Out of Address Space, not really Out of Memory.
In my opinion you're allocating so much data you run out of RAM and Windows starts paging, that's why it's slow. On my system I've got enough RAM for the process to allocate as much as it wants; And it does, until it fails.
Possible solutions
The obvious one: Don't allocate that much!
Figure out a way to allocate all data into one contiguous block of memory: this helps with address space fragmentation. Similar to how a bidimensional array with a fixed size for the "branches" is allocated, but if your "branches" have different sizes, you'll need to figure out a different mathematical formula, based on your data.
Look into other data structures, possibly ones that cache on disk (to break the 2 GB address space limit).
In addition to Mason's points, here are some more ideas to consider:
If the branch lengths never change after they are allocated, and you have an upper bound on the total number of items that will be stored in the array across all branches, then you might be able to save some time by allocating one huge chunk of memory and divvying up the "branches" within that chunk yourself. Your array would become a 1 dimensional array of pointers, and each entry in that array points to the start of the data for that branch. You keep track of the "end" of the used space in your big block with a single pointer variable, and when you need to reserve space for a new "branch" you take the current "end" pointer value as the start of the new branch and increment the "end" pointer by the amount of space that branch requires. Don't forget to round up to dword boundaries to avoid misalignment penalties.
This technique will require more use of pointers, but it offers the potential of eliminating all the heap allocation overhead, or at least replacing the general purpose heap allocation with a purpose-built very simple, very fast suballocator that matches your specific use pattern. It should be faster to execute, but it will require more time to write and test.
This technique will also avoid heap fragmentation and reduces the releasing of all the memory to a single deallocation (instead of millions of separate allocations in your present model).
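A sketch of such a suballocator, under the assumption that an upper bound on the total item count is known (all names are made up, and bounds checks are omitted):

var
  Pool: array of Word;     // one big contiguous block for all "branches"
  PoolUsed: Integer;       // index of the current "end" of the used space
  Branch: array of PWord;  // one pointer per row into the pool

procedure InitPool(TotalItems, RowCount: Integer);
begin
  SetLength(Pool, TotalItems);
  SetLength(Branch, RowCount);
  PoolUsed := 0;
end;

function ReserveBranch(Row, ItemCount: Integer): PWord;
begin
  Branch[Row] := @Pool[PoolUsed];
  Result := Branch[Row];
  // round the item count up to an even number of Words (4-byte alignment)
  Inc(PoolUsed, (ItemCount + 1) and not 1);
end;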
Another tip to consider: If the first thing you always do with the each newly allocated array "branch" is assign data into every slot, then you can eliminate step 3 in Mason's example - you don't need to zero out the memory if all you're going to do is immediately assign real data into it. This will cut your memory write operations by half.
Assuming you can fit the entire data structure into a contiguous block of memory, you can do the allocation in one shot and then take over the indexing.
Note: Even if you can't fit the data into a single contiguous block of memory, you can still use this technique by allocating multiple large blocks and then piecing them together.
First, form a helper array, colIndex, which is to contain the index of the first column of each row. Set the length of colIndex to RowCount + 1. You build it by setting colIndex[0] := 0 and then colIndex[i+1] := colIndex[i] + ColCount[i], where ColCount[i] is the number of columns in row i. Do this in a for loop with i running from 0 to RowCount - 1. The final entry, colIndex[RowCount], then stores the total number of elements.
Now set the length of a to be colIndex[RowCount]. This may take a little while, but it will be quicker than what you were doing before.
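A sketch of that setup (ColCount[i] stands for however you obtain the length of row i; RowCount would be 5000000 in the question):

SetLength(colIndex, RowCount + 1);
colIndex[0] := 0;
for i := 0 to RowCount - 1 do
  colIndex[i + 1] := colIndex[i] + ColCount[i];
// colIndex[RowCount] now holds the total number of elements
SetLength(a, colIndex[RowCount]);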
Now you need to write a couple of indexers. Put them in a class or a record.
The getter looks like this:
function GetItem(row, col: Integer): Word;
begin
  Result := a[colIndex[row] + col];
end;
The setter is obvious. You can inline these access methods for increased performance. Expose them as an indexed property for convenience to the object's clients.
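For completeness, a sketch of that setter (same a and colIndex as in the getter):

procedure SetItem(row, col: Integer; const Value: Word);
begin
  a[colIndex[row] + col] := Value;
end;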
You'll want to add some code to check for validity of row and col. You need to use colIndex for the latter. You can make this checking optional with {$IFOPT R+} if you want to mimic range checking for native indexing.
Of course, this is a total non-starter if you want to change any of your column counts after the initial instantiation!