I will have to create a multi-threaded project soon. I have seen experiments ( delphitools.info/2011/10/13/memory-manager-investigations ) showing that the default Delphi memory manager has problems with multi-threading.
So I have found this SynScaleMM. Can anybody give some feedback on it, or on a similar memory manager?
Thanks
Our SynScaleMM is still experimental.
EDIT: Take a look at the more stable ScaleMM2 and the brand new SAPMM. But my remarks below are still worth following: the less allocation you do, the better you scale!
But it worked as expected in a multi-threaded server environment. Scaling is much better than with FastMM4 in some critical tests.
But the memory manager is perhaps not the biggest bottleneck in multi-threaded applications. FastMM4 can work well, if you don't stress it.
Here is some advice (not dogmatic, just from experiment and knowledge of the low-level Delphi RTL) if you want to write FAST multi-threaded applications in Delphi:
Always use const for string or dynamic array parameters, as in MyFunc(const aString: String), to avoid allocating a temporary string for each call;
Avoid string concatenation (s := s+'Blabla'+IntToStr(i)); rely instead on buffered writing such as TStringBuilder, available in recent versions of Delphi;
TStringBuilder is not perfect either: for instance, it will create a lot of temporary strings for appending some numerical data, and will use the awfully slow SysUtils.IntToStr() function when you add some integer value - I had to rewrite a lot of low-level functions to avoid most string allocation in our TTextWriter class as defined in SynCommons.pas;
Don't abuse critical sections: keep them as small as possible, and rely on atomic modifiers if you need concurrent access - see e.g. InterlockedIncrement / InterlockedExchangeAdd;
InterlockedExchange (from SysUtils.pas) is a good way of updating a buffer or a shared object. You create an updated version of some content in your thread, then you exchange a shared pointer to the data (e.g. a TObject instance) in one low-level CPU operation. It will notify the change to the other threads, with very good multi-thread scaling - see the sketch after this list. You'll have to take care of the data integrity, but it works very well in practice.
Don't share data between threads; rather, make your own private copy, or rely on read-only buffers (the RCU pattern is the best for scaling);
Don't use indexed access to string characters, but rely on some optimized functions like PosEx() for instance;
Don't mix AnsiString/UnicodeString kind of variables/functions, and check the generated asm code via Alt-F2 to track any hidden unwanted conversion (e.g. call UStrFromPCharLen);
Prefer var parameters in a procedure over a function returning a string (a function returning a string adds a UStrAsg/LStrAsg call, which has a LOCK that will flush all CPU cores);
If you can, for your data or text parsing, use pointers and some static stack-allocated buffers instead of temporary strings or dynamic arrays;
Don't create a TMemoryStream each time you need one; rather, rely on a private instance in your class, already sized with enough memory, in which you write data using Position to track the end of the data, without changing its Size (which is the size of the memory block allocated by the MM);
Limit the number of class instances you create: try to reuse the same instance, and if you can, use some record/object pointers on already allocated memory buffers, mapping the data without copying it into temporary memory;
Always use test-driven development, with dedicated multi-threaded tests, trying to reach the worst-case limit (increase the number of threads and the data size, add some incoherent data, pause at random, try to stress network or disk access, benchmark with timing on real data...);
Never trust your instinct, but use accurate timing on real data and processes.
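Here is a minimal sketch of the InterlockedExchange pointer-swap pattern from the list above (32-bit code assumed; TMyConfig, PublishConfig and the deferred-release policy are hypothetical illustrations, not part of any library):

uses
  Windows;

type
  TMyConfig = class
    Value: Integer;
  end;

var
  SharedConfig: TMyConfig = nil; // shared pointer read by worker threads

procedure PublishConfig(NewValue: Integer);
var
  NewCfg, OldCfg: TMyConfig;
begin
  // build the updated content privately, in this thread only
  NewCfg := TMyConfig.Create;
  NewCfg.Value := NewValue;
  // publish it to all threads in one atomic CPU operation
  OldCfg := TMyConfig(InterlockedExchange(Integer(SharedConfig), Integer(NewCfg)));
  // data integrity is up to you: a reader may still hold the old
  // instance, so in real code defer its destruction instead
  OldCfg.Free; // placeholder for a deferred-release scheme
end;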
I tried to follow those rules in our Open Source framework, and if you take a look at our code, you'll find out a lot of real-world sample code.
If your app can accommodate GPL-licensed code, then I'd recommend Hoard. You'll have to write your own wrapper for it, but that is very easy. In my tests, I found nothing that matched this code. If your code cannot accommodate the GPL, then you can obtain a commercial licence of Hoard, for a significant fee.
Even if you can't use Hoard in an external release of your code you could compare its performance with that of FastMM to determine whether or not your app has problems with heap allocation scalability.
I have also found that the memory allocators in the versions of msvcrt.dll distributed with Windows Vista and later scale quite well under thread contention, certainly much better than FastMM does. I use these routines via the following Delphi MM.
unit msvcrtMM;

interface

implementation

type
  size_t = Cardinal;

const
  msvcrtDLL = 'msvcrt.dll';

function malloc(Size: size_t): Pointer; cdecl; external msvcrtDLL;
function realloc(P: Pointer; Size: size_t): Pointer; cdecl; external msvcrtDLL;
procedure free(P: Pointer); cdecl; external msvcrtDLL;

function GetMem(Size: Integer): Pointer;
begin
  Result := malloc(Size);
end;

function FreeMem(P: Pointer): Integer;
begin
  free(P);
  Result := 0;
end;

function ReallocMem(P: Pointer; Size: Integer): Pointer;
begin
  Result := realloc(P, Size);
end;

function AllocMem(Size: Cardinal): Pointer;
begin
  Result := GetMem(Size);
  if Assigned(Result) then begin
    FillChar(Result^, Size, 0);
  end;
end;

function RegisterUnregisterExpectedMemoryLeak(P: Pointer): Boolean;
begin
  Result := False;
end;

const
  MemoryManager: TMemoryManagerEx = (
    GetMem: GetMem;
    FreeMem: FreeMem;
    ReallocMem: ReallocMem;
    AllocMem: AllocMem;
    RegisterExpectedMemoryLeak: RegisterUnregisterExpectedMemoryLeak;
    UnregisterExpectedMemoryLeak: RegisterUnregisterExpectedMemoryLeak
  );

initialization
  SetMemoryManager(MemoryManager);

end.
It is worth pointing out that your app has to be hammering the heap allocator quite hard before thread contention in FastMM becomes a hindrance to performance. Typically in my experience this happens when your app does a lot of string processing.
My main piece of advice for anyone suffering from thread contention on heap allocation is to re-work the code to avoid hitting the heap. Not only do you avoid the contention, but you also avoid the expense of heap allocation – a classic twofer!
It is locking that makes the difference!
There are two issues to be aware of:
Use of the LOCK prefix by Delphi itself (System.dcu);
How FastMM4 handles thread contention, and what it does after it fails to acquire a lock.
Use of the LOCK prefix by Delphi itself
Borland Delphi 5, released in 1999, was the version that introduced the LOCK prefix in string operations. As you know, when you assign one string to another, it does not copy the whole string but merely increases the reference counter inside the string. If you modify the string, it is de-referenced: the reference counter is decreased and separate space is allocated for the modified string.
In Delphi 4 and earlier, the operations to increase and decrease the reference counter were normal memory operations. Programmers who used Delphi knew about this and, if they were using strings across threads, i.e. passing a string from one thread to another, used their own locking mechanism, only for the relevant strings. Programmers also used read-only string copies that did not modify the source string in any way and did not require locking, for example:
function AssignStringThreadSafe(const Src: string): string;
var
  L: Integer;
begin
  L := Length(Src);
  if L <= 0 then
    Result := ''
  else
  begin
    SetString(Result, nil, L);
    Move(PChar(Src)^, PChar(Result)^, L*SizeOf(Src[1]));
  end;
end;
But in Delphi 5, Borland added the LOCK prefix to the string operations, and they became very slow compared to Delphi 4, even for single-threaded applications.
To overcome this slowness, programmers began to use "single threaded" SYSTEM.PAS patch files with the LOCK prefixes commented out.
Please see https://synopse.info/forum/viewtopic.php?id=57&p=1 for more information.
FastMM4 Thread Contention
You can modify FastMM4 source code for a better locking mechanism, or use any existing FastMM4 fork, for example https://github.com/maximmasiutin/FastMM4
FastMM4 is not the fastest one for multicore operation, especially when the number of threads is higher than the number of physical sockets. This is because, by default, on thread contention (i.e. when one thread cannot acquire access to data locked by another thread) it calls the Windows API function Sleep(0), and then, if the lock is still not available, enters a loop calling Sleep(1) after each check of the lock.
Each call to Sleep(0) incurs the expensive cost of a context switch, which can be 10000+ cycles; it also suffers the cost of ring 3 to ring 0 transitions, which can be 1000+ cycles. As for Sleep(1) - besides the costs associated with Sleep(0) - it also delays execution by at least 1 millisecond, ceding control to other threads, and, if there are no threads waiting to be executed by a physical CPU core, puts the core to sleep, effectively reducing CPU usage and power consumption.
That's why, in multithreaded work with FastMM4, CPU usage never reaches 100% - because of the Sleep(1) issued by FastMM4. This way of acquiring locks is not optimal. A better way would have been a spin-lock of about 5000 pause instructions and, if the lock was still busy, a call to the SwitchToThread() API. If pause is not available (on very old processors with no SSE2 support) or the SwitchToThread() API is not available (on very old Windows versions, prior to Windows 2000), the best solution is to use EnterCriticalSection/LeaveCriticalSection, which doesn't have the latency associated with Sleep(1), and which also very effectively cedes control of the CPU core to other threads.
The fork that I've mentioned uses a new approach to waiting for a lock, recommended by Intel in its Optimization Manual for developers: a spin-loop of pause + SwitchToThread(), and, if either of these is not available, critical sections instead of Sleep(). With these options, Sleep() is never used; EnterCriticalSection/LeaveCriticalSection is used instead. Testing has shown that the approach of using critical sections instead of Sleep (which was used by default before in FastMM4) provides a significant gain in situations when the number of threads working with the memory manager is the same as or higher than the number of physical cores. The gain is even more evident on computers with multiple physical CPUs and Non-Uniform Memory Access (NUMA). I have implemented compile-time options that take away the original FastMM4 approach of using Sleep(InitialSleepTime) and then Sleep(AdditionalSleepTime) (or Sleep(0) and Sleep(1)) and replace them with EnterCriticalSection/LeaveCriticalSection, to save the valuable CPU cycles wasted by Sleep(0) and to improve speed (reduce latency), which was penalized by at least 1 millisecond on each Sleep(1); critical sections are much more CPU-friendly and have definitely lower latency than Sleep(1).
When these options are enabled, FastMM4-AVX checks: (1) whether the CPU supports SSE2 and thus the "pause" instruction, and (2) whether the operating system has the SwitchToThread() API call; if both conditions are met, it uses a "pause" spin-loop for 5000 iterations and then SwitchToThread() instead of critical sections. If the CPU doesn't have the "pause" instruction or Windows doesn't have the SwitchToThread() API function, it uses EnterCriticalSection/LeaveCriticalSection instead.
You can see the test results, including some made on a computer with multiple physical CPUs (sockets), in that fork.
See also the Long Duration Spin-wait Loops on Hyper-Threading Technology Enabled Intel Processors article. Here is what Intel writes about this issue - and it applies to FastMM4 very well:
The long duration spin-wait loop in this threading model seldom causes a performance problem on conventional multiprocessor systems. But it may introduce a severe penalty on a system with Hyper-Threading Technology because processor resources can be consumed by the master thread while it is waiting on the worker threads. Sleep(0) in the loop may suspend the execution of the master thread, but only when all available processors have been taken by worker threads during the entire waiting period. This condition requires all worker threads to complete their work at the same time. In other words, the workloads assigned to worker threads must be balanced. If one of the worker threads completes its work sooner than others and releases the processor, the master thread can still run on one processor.
On a conventional multiprocessor system this doesn't cause performance problems because no other thread uses the processor. But on a system with Hyper-Threading Technology the processor the master thread runs on is a logical one that shares processor resources with one of the other worker threads.
The nature of many applications makes it difficult to guarantee that workloads assigned to worker threads are balanced. A multithreaded 3D application, for example, may assign the tasks for transformation of a block of vertices from world coordinates to viewing coordinates to a team of worker threads. The amount of work for a worker thread is determined not only by the number of vertices but also by the clipped status of the vertex, which is not predictable when the master thread divides the workload for working threads.
A non-zero argument in the Sleep function forces the waiting thread to sleep N milliseconds, regardless of the processor availability. It may effectively block the waiting thread from consuming processor resources if the waiting period is set properly. But if the waiting period is unpredictable from workload to workload, then a large value of N may make the waiting thread sleep too long, and a smaller value of N may cause it to wake up too quickly.
Therefore the preferred solution to avoid wasting processor resources in a long duration spin-wait loop is to replace the loop with an operating system thread-blocking API, such as the Microsoft Windows* threading API WaitForMultipleObjects. This call causes the operating system to block the waiting thread from consuming processor resources.
It refers to Using Spin-Loops on Intel Pentium 4 Processor and Intel Xeon Processor application note.
You can also find a very good spin-loop implementation here at stackoverflow.
It also does normal (non-locked) loads to check the lock state before issuing a locked store, so as not to flood the CPU with locked operations in a loop, which would lock the bus.
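To illustrate, here is a minimal sketch of such a spin-loop in 32-bit Delphi inline assembler. This is an assumption-laden illustration, not the code of FastMM4 or of the linked answer: it emits the pause opcode as raw bytes (for older assemblers), expects the Lock variable to be initialized to 0, and assumes SwitchToThread is declared in your Windows unit (declare it manually on older Delphi versions):

function TryLock(var Lock: Integer): Boolean;
asm
  // plain (non-locked) read first: skip the locked operation
  // entirely while the lock looks busy
  cmp dword ptr [eax], 0
  jne @@Busy
  mov edx, 1
  xchg dword ptr [eax], edx  // xchg with memory is implicitly LOCKed
  test edx, edx
  setz al                    // True if the previous value was 0
  jmp @@Done
@@Busy:
  xor al, al                 // False: the lock is held
@@Done:
end;

procedure AcquireLock(var Lock: Integer);
var
  Spin: Integer;
begin
  repeat
    Spin := 5000;            // the spin count suggested above
    while Spin > 0 do
    begin
      if TryLock(Lock) then
        Exit;
      asm
        db $F3,$90           // "pause" (rep nop): the spin-loop hint
      end;
      Dec(Spin);
    end;
    SwitchToThread;          // cede the core instead of Sleep(1)
  until False;
end;

procedure ReleaseLock(var Lock: Integer);
begin
  Lock := 0;                 // a plain aligned store releases on x86
end;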
FastMM4 per se is very good. Just improve the locking and you will get an excellent multi-threaded memory manager.
Please also be aware that each small block type is locked separately in FastMM4.
You can put padding between the small block control areas, so that each area has its own cache line, not shared with other block sizes, and make sure it begins on a cache-line boundary. You can use CPUID to determine the size of the CPU cache line, as sketched below.
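A minimal sketch of that CPUID query in 32-bit Delphi inline assembler (it assumes a CPU that supports CPUID and reports the CLFLUSH line size; CPUID leaf 1 returns that size in 8-byte units in bits 15..8 of EBX, which matches the cache line size on common x86 CPUs):

function GetCacheLineSize: Integer;
var
  B: Cardinal;
begin
  asm
    push ebx        // EBX must be preserved in Delphi code
    mov eax, 1
    cpuid
    mov B, ebx
    pop ebx
  end;
  Result := ((B shr 8) and $FF) * 8; // CLFLUSH line size is in 8-byte units
end;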
So, with locking correctly implemented to suit your needs (i.e. whether you need NUMA or not, whether to use LOCKed releases, etc.), you may find that the memory allocation routines are several times faster and do not suffer so severely from thread contention.
FastMM deals with multi-threading just fine. It is the default memory manager for Delphi 2006 and up.
If you are using an older version of Delphi (Delphi 5 and up), you can still use FastMM. It's available on SourceForge.
You could use TopMM:
http://www.topsoftwaresite.nl/
You could also try ScaleMM2 (SynScaleMM is based on ScaleMM1), but I have to fix a bug regarding interthread memory, so it is not production ready yet :-(
http://code.google.com/p/scalemm/
The Delphi 6 memory manager is outdated and outright bad. We were using RecyclerMM both on a high-load production server and in a multi-threaded desktop application, and we had no issues with it: it's fast, reliable, and doesn't cause excess fragmentation. (Fragmentation was the stock Delphi memory manager's worst issue.)
The only drawback of RecyclerMM is that it isn't compatible with MemCheck out of the box. However, a small source alteration was enough to render it compatible.
Related
I am writing an algorithm in which all blocks read the same address. For example, we have a list = [1, 2, 3, 4], and all blocks read it and store it in their own shared memory... My tests show that the more blocks read it, the slower it gets... I guess no broadcast happens here? Any idea how I can make it faster? Thank you!!!
I learnt from a previous post that a read can be broadcast within one warp, but it seems this cannot happen across different warps... (Actually, in my case, the threads within one warp are not reading the same location...)
Once a list element is accessed by the first warp of an SM unit, the second warp in the same SM unit gets it from the cache and broadcasts it to all SIMT lanes. But a warp on another SM unit may not have it in its L1 cache, so it fetches from L2 to L1 first.
It is similar with __constant__ memory, but it requires the same address to be accessed by all threads. Its latency is closer to register access. __constant__ memory is like an instruction cache: you get more performance when all threads do the same thing.
For example, if you have a Gaussian filter that iterates over the same coefficient list on all threads, it is better to use constant memory. Using shared memory does not have much advantage there, as the filter array is not scanned randomly. Shared memory is better when the filter array content differs per block, or when it needs random access.
You can also combine constant memory and shared memory: get half of the list from constant memory, and the other half from shared memory. This should let 1024 threads hide the latency of one memory type behind the other.
If the list is small enough, you can use registers directly (the indices have to be compile-time known). But this increases register pressure and may decrease occupancy, so be careful about it.
Some old CUDA architectures (in the case of the FMA operation) required one operand to be fetched from constant memory and the other from a register to achieve better performance in compute-bottlenecked algorithms.
In a test with 12000 floats as a filter applied to all threads' inputs, the shared-memory version with 128 threads per block completed the work in 330 milliseconds, while the constant-memory version completed it in 260 milliseconds, and L1 access performance was the real bottleneck in both versions; so the real constant-memory performance is even better, as long as the index is similar across all threads.
I have code that creates a global array, and when I unset the array the memory is still busy.
I have tried this on Windows with Tcl 8.4 and 8.6:
console show
puts "allocating memory..."
update
for {set i 0} {$i < 10000} {incr i} {
    set a($i) $i
}
after 10000
puts "deallocating memory..."
update
foreach v [array names a] {
    unset a($v)
}
after 10000
exit
In a lot of programs, both written in Tcl and in other languages, past memory usage is a pretty good indicator of future memory usage. Thus, as a general heuristic, Tcl's implementation does not try to return memory to the OS (it can always page it out if it wants; the OS is always in charge). Indeed, each thread actually has its own memory pool (allowing memory handling to be largely lock-free), but this doesn't make much difference here where there's only one main thread (and a few workers behind the scenes that you can normally ignore). Also, the memory pools will tend to overallocate because it is much faster to work that way.
Whatever you are measuring with, if it is with a tool external to Tcl at all, it will not provide particularly good real memory usage tracking because of the way the pooling works. Tcl's internal tools for this (the memory command) provide much more accurate information but aren't there by default: they're a compile-time option when building the Tcl library, and are usually switched off because they have a lot of overhead. Also, on Windows some of their features only work at all if you build a console application (a consequence of how they're implemented).
I would like to allocate space (of dynamic size) for a byte array, get a pointer to that space, and free it later when I don't need it anymore.
I know about VirtualAlloc, VirtualAllocEx and LocalAlloc.
Which one is the best and how can I free the memory afterwards?
Thank you for your help.
I don't think it is a good idea to use the WinAPI for this instead of the native Pascal functions.
You can simply define an array of bytes as
var yourarray: array of byte;
then it can be allocated by
setlength(yourarray, yoursize);
and freed by
setlength(yourarray, 0);
Such an array is reference counted, and you can access individual bytes as yourarray[byteid].
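If you also need a raw pointer into that buffer, you can take the address of the first element. A minimal sketch, reusing the names above:

var
  yourarray: array of Byte;
  p: PByte;
begin
  SetLength(yourarray, yoursize);
  p := @yourarray[0];        // raw pointer to the first byte of the data
  // ... use p ...
  SetLength(yourarray, 0);   // or just let the array go out of scope
end;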
Or if you really want pointers, you can use:
var p: pointer;
GetMem(p, yoursize);
FreeMem(p);
You had better use GetMem/FreeMem, a dynamic array, or a RawByteString. Note that GetMem/FreeMem, dynamic arrays and RawByteString all use the heap, not the stack, for their allocation.
There is no point in using VirtualAlloc/VirtualFree instead of GetMem/FreeMem. For big blocks, the memory manager (which implements the heap) will call the VirtualAlloc/VirtualFree APIs anyway, but for smaller blocks, relying on the heap will be more optimized.
Since VirtualAlloc/VirtualFree is local to the current process, the only reason to use it is if you want to create a memory block able to execute code, e.g. for creating stubbing wrappers of classes or interfaces, via the VirtualAllocEx/VirtualFreeEx APIs (but I doubt that is your need).
If you want to use some memory global to all processes/programs, you have GlobalAlloc/GlobalFree API calls at hand.
VirtualAlloc is a page allocation function. It is the low-level user-space function for allocating memory. But you must understand that the memory returned from VirtualAlloc is aligned to a multiple of the page size.
On 32-bit Windows the page size is normally 4096 bytes. On other systems it may be larger.
So this makes VirtualAlloc useful when you need whole pages of memory. VirtualAlloc can allocate large ranges of pages. The pages are virtual and are thus actually mappings to the underlying system RAM, and at times they are swapped out to the swap file; this is why it is called VirtualAlloc, with emphasis on virtual.
Using VirtualAlloc and VirtualAllocEx you can also just reserve some pages of memory. Reserved pages are a range held in a reserved state until you are sure they will be used, at which point you can commit them, and the underlying resources needed for the pages will then be allocated.
Use VirtualFree to free the pages you allocated or reserved with VirtualAlloc.
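A minimal sketch of that reserve/commit/release cycle (the sizes are arbitrary examples; it assumes the Windows and SysUtils units):

procedure ReserveCommitRelease;
var
  P: Pointer;
begin
  // reserve a 1 MB range of address space; no physical memory yet
  P := VirtualAlloc(nil, 1024 * 1024, MEM_RESERVE, PAGE_NOACCESS);
  if P = nil then
    RaiseLastOSError;
  // commit the first page once you are sure it will be used
  if VirtualAlloc(P, 4096, MEM_COMMIT, PAGE_READWRITE) = nil then
    RaiseLastOSError;
  FillChar(P^, 4096, 0); // the committed page is now usable
  // release the whole range, reserved and committed alike
  if not VirtualFree(P, 0, MEM_RELEASE) then
    RaiseLastOSError;
end;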
The difference between VirtualAlloc and LocalAlloc is that LocalAlloc allocates from a heap, and a heap is a mechanism of allocating blocks of memory from much larger blocks of reserved pages. Internally, a heap allocates large sections of memory using VirtualAlloc, and then divides those pages up into smaller blocks that you see as buffers returned from functions like malloc, getmem and LocalAlloc.
LocalAlloc could be thought of as the Windows built-in version of malloc or GetMem. A call to LocalAlloc is similar to calling malloc in C++ or GetMem in Delphi. In fact you could override GetMem in Delphi to use LocalAlloc, and your Delphi application would probably run just the same.
Call LocalFree to free some memory allocated with LocalAlloc. Internally this will mark the block of memory as available to the next caller.
So the main consideration when deciding is overhead. If you need to allocate often, you should use LocalAlloc or GetMem, because committing and reserving virtual pages is a more time-consuming process.
In other words, use getmem or LocalAlloc unless you have a very special reason not to.
In all my tests with Delphi 5 versus C++ compilers, the Delphi 5 GetMem was faster, although that was five years ago. Since then, allocators like Hoard have become available that might be faster. But it is hard to say what is faster when there are so many variables.
But for sure all the heap functions like LocalAlloc, malloc and GetMem should be much faster than allocating and freeing with VirtualAlloc, which is normally used internally to reserve memory for heap functions like LocalAlloc and GetMem.
For Pascal programs, prefer getmem or SetLength because this is more portable. Or you can write your own wrapper function to LocalAlloc or whatever the OS heap function is.
The functions that you have listed are WinAPI functions, which are platform dependent. Obviously you should use the functions of the same API for deallocation that you used for allocation.
If you want to use the Delphi memory manager, then GetMemory and FreeMemory are the obvious choice; however, if you need your pointer to be aligned to the system page size (which is a requirement for some low-level libraries), or you are going to use large buffer sizes, then the Windows API virtual memory functions VirtualAlloc and VirtualFree are your best friends.
In a recent post ( My program never releases the memory back. Why? ) I showed that when using FastMM, the application does not release substantial amounts of memory back to the system.
Recently I created an artificial test program to make sure the issue is not a memory leak and that it only appears with FastMM.
In this program I create and destroy an object (same as the one used in the previous post) 500 times.
The memory requirements are ("Private working set"):
Without FastMM
Before running the loop: 1.2MB
After running the loop: 2.1MB
With FastMM (aggressive debug mode)
Before running the loop: 2.1MB
After running the loop: 25MB
With FastMM (release mode)
Before running the loop: 1.8MB
After running the loop: 3MB
If I run the loop several times, the memory requirement does not increase, which means that the unreleased memory is re-used. So this is not a memory leak (a memory leak would increase the memory footprint by several KB/MB at each run).
My questions are:
How can I disable this behavior in FastMM? Is it even possible? I know that if I release the program without FastMM, or with FastMM in release mode, it will only "waste" moderate amounts of RAM. But disabling this behavior on demand would help me (us?) identify memory leaks. Actually, in my first post (see link) many people suggested that I have a leak. The confusion was obviously created just because of this behavior. It is obvious there is no leak; it is just the memory manager refusing to release large amounts of memory.
Will it ever release the extra memory? When? What triggers this? Can the programmer trigger it? For example, when I know that I have finished a RAM-intensive task and the user may not use the program for a while (minimize it), can I flush the RAM back to the system? What happens when the user opens multiple instances of my program? Won't they compete for RAM?
You shouldn't think about it as "wasting" RAM, really. Think about it as "caching" unused RAM. The memory manager is holding onto the unused memory instead of releasing it back to the OS for a reason, and in fact you've hit upon that reason in your question.
You said that you keep re-running the same operations in a loop. When you do that, it still has the old memory available and it can assign it immediately, instead of having to ask Windows for a new chunk of heap. This is one of the tricks that puts the "Fast" in "FastMM," and if it didn't do that you'd find your program running a lot more slowly.
You don't need to worry about the FastMM debug mode figure. That's only for debugging, and you're not going to release a program compiled against FullDebugMode. And the difference between "without FastMM" and "with FastMM Release Mode" is about 1 MB, which is negligible on modern hardware. For the low cost of only 1 extra MB, you get a big performance boost. So don't worry about it.
Part of what makes FastMM fast is that it will allocate a large block of memory and carve smaller uniformly sized pieces out of it. If any part of the block is in use, none of it can be released back to the OS.
You're welcome to use a different memory manager. One approach would be to route all allocations directly to VirtualAlloc. Allocations will be rounded up to occupy an entire page at a time, so your program may suffer if you have lots of small allocations, but when you call VirtualFree, you can be confident that the memory definitely doesn't belong to your program anymore.
Another option is to route everything to the OS heap. Use HeapAlloc. You can even enable the low-fragmentation heap for your program (on by default as of Windows Vista), which will make the OS employ a strategy similar to the one used by FastMM, but it will allow you to use some debugging and analysis tools from Microsoft to track your program's memory usage over time. Beware, though, that after you call HeapFree, some metrics might still show the memory as belonging to your program.
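If you want to experiment with that route, here is a minimal sketch of a memory manager unit routed to the OS heap, modeled on the msvcrt unit shown elsewhere in this thread (it assumes the same Delphi-era TMemoryManagerEx signatures; an illustration, not production code):

unit HeapMM;

interface

implementation

uses
  Windows;

function HGetMem(Size: Integer): Pointer;
begin
  Result := HeapAlloc(GetProcessHeap, 0, Size);
end;

function HFreeMem(P: Pointer): Integer;
begin
  if HeapFree(GetProcessHeap, 0, P) then
    Result := 0
  else
    Result := 1; // non-zero reports failure to the RTL
end;

function HReallocMem(P: Pointer; Size: Integer): Pointer;
begin
  Result := HeapReAlloc(GetProcessHeap, 0, P, Size);
end;

function HAllocMem(Size: Cardinal): Pointer;
begin
  // HEAP_ZERO_MEMORY gives the zero-filled block AllocMem requires
  Result := HeapAlloc(GetProcessHeap, HEAP_ZERO_MEMORY, Size);
end;

function HNopLeak(P: Pointer): Boolean;
begin
  Result := False;
end;

const
  MemoryManager: TMemoryManagerEx = (
    GetMem: HGetMem;
    FreeMem: HFreeMem;
    ReallocMem: HReallocMem;
    AllocMem: HAllocMem;
    RegisterExpectedMemoryLeak: HNopLeak;
    UnregisterExpectedMemoryLeak: HNopLeak
  );

initialization
  SetMemoryManager(MemoryManager);

end.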
Besides, the working set refers to the memory that's currently in physical RAM. That you observed the number go up does not mean that your program has allocated any more memory. It can simply mean that your program touched some memory that it had previously allocated, but which had not yet been put into RAM. During your loop, you touched that memory, and the OS has not decided to page it back out to disk yet.
I use the following as a memory manager. I do so because it performs much better under thread contention than FastMM, which is actually rather poor. I know that a scalable manager such as Hoard would be better, but this works fine for my needs.
unit msvcrtMM;

interface

implementation

type
  size_t = Cardinal;

const
  msvcrtDLL = 'msvcrt.dll';

function malloc(Size: size_t): Pointer; cdecl; external msvcrtDLL;
function realloc(P: Pointer; Size: size_t): Pointer; cdecl; external msvcrtDLL;
procedure free(P: Pointer); cdecl; external msvcrtDLL;

function GetMem(Size: Integer): Pointer;
begin
  Result := malloc(Size);
end;

function FreeMem(P: Pointer): Integer;
begin
  free(P);
  Result := 0;
end;

function ReallocMem(P: Pointer; Size: Integer): Pointer;
begin
  Result := realloc(P, Size);
end;

function AllocMem(Size: Cardinal): Pointer;
begin
  Result := GetMem(Size);
  if Assigned(Result) then begin
    FillChar(Result^, Size, 0);
  end;
end;

function RegisterUnregisterExpectedMemoryLeak(P: Pointer): Boolean;
begin
  Result := False;
end;

const
  MemoryManager: TMemoryManagerEx = (
    GetMem: GetMem;
    FreeMem: FreeMem;
    ReallocMem: ReallocMem;
    AllocMem: AllocMem;
    RegisterExpectedMemoryLeak: RegisterUnregisterExpectedMemoryLeak;
    UnregisterExpectedMemoryLeak: RegisterUnregisterExpectedMemoryLeak
  );

initialization
  SetMemoryManager(MemoryManager);

end.
This isn't an answer to your question, but it's too long to fit into a comment and you may find it interesting to run your app against this MM. My guess is that it will perform the same way as FastMM.
SOLVED
As suggested by Barry Kelly, the memory will be released automatically by FastMM.
To confirm this, I created a second program that allocated A LOT of RAM. As soon as Windows ran out of RAM, my program's memory utilization returned to its original value.
Problem solved.
Thanks Barry.
GetMem allows you to allocate a buffer of arbitrary size. Somewhere, the size information is retained by the memory manager, because you don't need to tell it how big the buffer is when you pass the pointer to FreeMem.
Is that information for internal use only, or is there any way to retrieve the size of the buffer pointed to by a pointer?
It would seem that the size of a block referenced by a pointer returned by GetMem() must be available from somewhere, given that FreeMem() does not require that you identify the size of memory to be freed - the system must be able to determine that, so why not the application developer?
But, as others have said, the precise details of the memory management involved are NOT defined by the system per se. Delphi has always had a replaceable memory manager architecture, and the "interface" defined for compatible memory managers does not require that they provide this information for an arbitrary pointer.
The default memory manager will maintain the necessary information in whatever way suits it, but another memory manager will almost certainly use an entirely different, if superficially similar, mechanism. So even if you hack a solution based on intimate knowledge of one memory manager, if you change the memory manager (or if it is changed for you, e.g. by a change in the system-defined memory manager, which you are perhaps using by default, as occurred between Delphi 2005 and 2006, for example), then your solution will almost certainly break.
In general, it's not an unreasonable assumption on the part of the RTL/memory manager that the application should already know how big a piece of memory a GetMem() allocated pointer refers to, given that the application asked for it in the first place! :)
And if your application did NOT allocate the pointer, then your application's memory manager has absolutely no way of knowing how big the block it references may be. It may be a pointer into the middle of some larger block, for example; only the source of the pointer can possibly know how it relates to the memory it references!
But if your application really does need to maintain such information about its own pointers, then it could of course easily devise a means to achieve this with a simple singleton class or function library through which GetMem()/FreeMem() requests are routed, maintaining a record of the requested size for each currently allocated pointer. Such a mechanism could then easily expose this information as required, entirely reliably and independently of whatever memory manager is in use; a sketch follows below.
This may in fact be the only option if an "accurate" record is required, as a given memory manager implementation may allocate a larger block of memory for a given size of data than is actually requested. I do not know whether any memory manager actually does this, but it could in theory, for efficiency's sake.
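As a hedged illustration of that wrapper idea, here is a minimal sketch that records the requested size in a header stored in front of each block (the names are hypothetical; there is no thread-safety or alignment handling):

function SizedGetMem(Size: Integer): Pointer;
var
  Q: PInteger;
begin
  // over-allocate by one Integer and record the requested size in front
  GetMem(Q, Size + SizeOf(Integer));
  Q^ := Size;
  Inc(Q); // the caller sees only the bytes after the header
  Result := Q;
end;

procedure SizedFreeMem(P: Pointer);
var
  Q: PInteger;
begin
  Q := PInteger(P);
  Dec(Q); // step back to the header stored by SizedGetMem
  FreeMem(Q);
end;

function SizedBlockSize(P: Pointer): Integer;
var
  Q: PInteger;
begin
  Q := PInteger(P);
  Dec(Q);
  Result := Q^; // the size originally requested
end;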
It is for internal use, as it depends on the memory manager used. BTW, that's why you need to use the GetMem/FreeMem pair from the same memory manager: there is no canonical way of knowing how the memory has been reserved.
In Delphi, if you look at FastMM4, you can see that the memory is allocated in small, medium or large blocks:
the small blocks are allocated in pools of fixed-size blocks (the block size is defined at the pool level, in the block type):

TSmallBlockType = packed record
  {True = Block type is locked}
  BlockTypeLocked: Boolean;
  {Bitmap indicating which of the first 8 medium block groups contain blocks
   of a suitable size for a block pool.}
  AllowedGroupsForBlockPoolBitmap: byte;
  {The block size for this block type}
  BlockSize: Word;
the medium blocks are also allocated in pools but have a variable size:

{Medium block layout:
 Offset: -8 = Previous Block Size (only if the previous block is free)
 Offset: -4 = This block size and flags
 Offset: 0 = User data / Previous Free Block (if this block is free)
 Offset: 4 = Next Free Block (if this block is free)
 Offset: BlockSize - 8 = Size of this block (if this block is free)
 Offset: BlockSize - 4 = Size of the next block and flags}

{Get the block header}
LBlockHeader := PCardinal(Cardinal(APointer) - BlockHeaderSize)^;
{Get the medium block size}
LBlockSize := LBlockHeader and DropMediumAndLargeFlagsMask;
the large blocks are allocated individually with the required size:

TLargeBlockHeader = packed record
  {Points to the previous and next large blocks. This circular linked
   list is used to track memory leaks on program shutdown.}
  PreviousLargeBlockHeader: PLargeBlockHeader;
  NextLargeBlockHeader: PLargeBlockHeader;
  {The user allocated size of the Large block}
  UserAllocatedSize: Cardinal;
  {The size of this block plus the flags}
  BlockSizeAndFlags: Cardinal;
end;
Is that information for internal use only, or is there any way to retrieve the size of the buffer pointed to by a pointer?
Do these two "alternatives" contradict each other?
It's for internal use only.
There is some information stored just before the allocated area to hold meta information. This means that each time you allocate a piece of memory, a bigger piece is allocated and the first bytes are used for meta information. The returned pointer points to the block following this meta information.
I can imagine that the format changes with another version of the memory manager, so don't count on this.
That information is for internal use only.
Note that memory managers don't need to store the size as part of the memory returned; many memory managers will store it in an internal table, using the memory address of the start of the chunk given out as a lookup key into that table.