Historically we had problems with RAD Studio running out of memory, which no longer happens with 10 Seattle. We have a lot of our own components, which have never been tested for large-address awareness and do not need it when built into our applications, BUT we recently had an IDE fault caused by the design-time instance of a component being instantiated at an address above 2GB (which we have fixed).
I have a feeling I read somewhere that Embarcadero have a method for testing RAD Studio (a command line option?) for higher-memory compatibility, but I cannot find the reference anywhere. Does anyone know either how to force higher memory allocation addresses in the IDE, to verify our component set's design-time behavior, or alternatively a way of testing in an application, other than writing something that just steals all the lower memory?
I've tried the "allocate from the top" option in FastMM, but this just starts allocating from 2GB downwards, even when the executable is set for higher memory use.
The most effective way to test this is to force the system to allocate memory top down. How this is done is described here: https://msdn.microsoft.com/en-us/library/bb613473.aspx
To force allocations to allocate from higher addresses before lower addresses for testing purposes, specify MEM_TOP_DOWN when calling VirtualAlloc or set the following registry value to 0x100000:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Memory Management\AllocationPreference
Once you change the registry setting you will need to restart your machine.
Do not be surprised if your machine becomes unstable when you do this. A great many anti-malware products are incapable of operating under system-wide top down memory allocation. You might find it necessary to temporarily disable your anti-malware while performing top down allocation testing.
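A minimal console sketch of the per-allocation approach (not from the MSDN article; it assumes a 32-bit Delphi compiler, where the interesting boundary is 2GB). With the large-address-aware PE flag set, the MEM_TOP_DOWN allocation should come back above 2GB:

    program TopDownTest;
    {$APPTYPE CONSOLE}
    uses
      Windows, SysUtils;
    {$SETPEFLAGS IMAGE_FILE_LARGE_ADDRESS_AWARE}
    var
      P: Pointer;
    begin
      // MEM_TOP_DOWN makes this allocation come from the highest available
      // address; without the LAA flag the result stays below 2GB.
      P := VirtualAlloc(nil, 64 * 1024,
        MEM_COMMIT or MEM_RESERVE or MEM_TOP_DOWN, PAGE_READWRITE);
      if P = nil then
        RaiseLastOSError;
      Writeln(Format('Allocated at address %p', [P]));
      VirtualFree(P, 0, MEM_RELEASE);
    end.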
I am in the process of porting a Windows 7 library to an embedded platform. To do so, my employer has asked me for the amount of memory (and CPU, but let us concentrate on the memory for now) that my system will need once ported, so he can size the board to my needs.
I had a look on the internet and there does not seem to be much information about this question, hence my questions:
1. In order to get a rough idea of the memory footprint of the code in flash memory (code only, without memory for data), I read on the Internet that I should sum the sizes of all the DLLs I use. It seems that every compiler and platform gives a different size for the code footprint, but overall the size of the code (without data) is often very close. Can you confirm this?
2. In order to deal with the memory required by the data only (heap + stack but no code), I had a look at Task Manager (and Process Explorer). It seems the overall amount of data I use is given by the 'peak working set'. I have a few questions about it though:
2.a. Does the 'working set' include the heap + stack memory or does it correspond to the heap only?
2.b. Does the 'working set' include the size of the code as well (as I am on Windows 7, the code is also stored in RAM, not in flash as on embedded systems), or does it only correspond to the data?
2.c. It seems the 'peak working set' reflects the maximum amount of physical memory that was actually in RAM from the time the program was started, but it does not reflect the size the program could reach afterwards (if I happened to allocate memory at runtime - which would be bad ;) - the peak value would keep increasing). Can you confirm this?
2.d. Hence, can you also confirm that if I do not allocate memory at runtime, the 'peak working set' should be roughly the maximum amount of RAM my embedded system will need, give or take some difference due to the differences in system technology?
Thanks,
Antoine.
Unless you are intending to run your application on Windows Embedded, then looking at the code and data usage in Windows is not going to be much of an indicator of anything useful!
1) DLLs are libraries - not all the code within them will be used by your code. Most embedded systems are statically linked, and the linker will link only modules that are actually referenced by your code. So taking the sum of the DLL dependencies is likely to lead to a gross overestimation of the memory requirement.
2) Windows memory management is profligate with memory use - because it can be, and doing so generally improves the performance of typical desktop systems. For example, a thread stack in Windows is typically of the order of 2MB - you may seldom use that much, but Windows gives it to you anyway, because it can and because doing so errs on the side of safety. A thread stack in an embedded system will typically range from a few tens of bytes to a few tens of kilobytes - it depends on your application.
Windows Task Manager shows what Windows allocates to your process, which may not relate to what your process actually needs. Also, your application uses Windows services - the memory used for kernel and device services will not show up as part of your process, but your embedded system may still need equivalent functionality.
If you do use your Windows prototype code to assess the embedded system's requirements, then the best place to start is to get the linker to generate a map file, which will give a detailed description of memory usage in terms of statically allocated data and code size.
Code size depends not only on the performance of the compiler, but also on the efficiency of the instruction set: some architectures achieve higher code density than others. Windows application code size is never a good indicator of embedded code size, because the execution environment is likely to be so different. For example, a pre-emptive multitasking RTOS kernel on a 32-bit ARM can be implemented in less than 10KB of code, a file system in perhaps another 10, a network stack in anything from 10 to 30KB, and USB in another 10. As you can see, this is a different world from desktop code.
Data memory usage is perhaps more easily determined, but you do that through analysis of your application rather than by observing what Windows does. There is the data your application instantiates directly, and then there is the data instantiated by the libraries and device drivers you might call - in Windows, the latter is likely to be relatively large and out of your control. Typical embedded systems libraries for things such as network stacks, USB, file systems etc. are far smaller and far more deterministic in both performance and size.
Your better bet is to describe your application in terms of its general purpose, performance requirements, real-time constraints, and hardware requirements (display, networking, I/O, mass storage etc.), and then look at comparable solutions or at the libraries you will need to implement your solution. Most embedded systems are "bare board" and do not have the services you find in Windows unless you write them or use third-party solutions - Windows is seldom a comparable solution to an embedded system.
If it is just a library rather than an application, then build it for a likely target using a Windows-hosted GCC cross-compiler and see how big it ends up, for example as sketched below. You don't need hardware for that, or even to spend any money.
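For example, a sketch assuming the GNU Arm Embedded toolchain (the file name and CPU are placeholders):

    # Compile for a likely target and let the GNU size utility report the
    # code (text) and static data (data + bss) footprint.
    arm-none-eabi-gcc -mcpu=cortex-m4 -Os -c mylib.c -o mylib.o
    arm-none-eabi-size mylib.o
    # When linking a full image, adding -Wl,-Map=mylib.map produces the
    # detailed map file mentioned above.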
This is a follow-up to this question: What could explain the difference in memory usage reported by FastMM or GetProcessMemoryInfo?
My Delphi XE application is using a very large amount of memory, which sometimes leads to an out-of-memory exception. I'm trying to understand why, and what is causing this memory usage: while FastMM is reporting low memory usage, when querying TProcessMemoryCounters.PageFileUsage I can clearly see that a lot of memory is used by the application.
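A minimal sketch of reading that counter via the PsAPI unit (assuming the PsAPI unit shipped with Delphi XE; the wrapper name is illustrative):

    uses
      Windows, PsAPI;

    // Returns this process's committed (page-file-backed) bytes.
    function CurrentPageFileUsage: NativeUInt;
    var
      Counters: TProcessMemoryCounters;
    begin
      FillChar(Counters, SizeOf(Counters), 0);
      Counters.cb := SizeOf(Counters);
      if GetProcessMemoryInfo(GetCurrentProcess, @Counters, SizeOf(Counters)) then
        Result := Counters.PagefileUsage
      else
        Result := 0;
    end;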
I would like to understand what is causing this problem and would like some advice on how to handle it:
Is there a way to know what is contained in that memory and where it was allocated?
Is there some tool to track down memory usage by line/procedure in a Delphi application?
Any general advice on how to handle such a problem?
EDIT 1: Here are two screenshots of FastMMUsageTracker indicating that memory has been allocated by the system.
Before process starts:
After process ends:
Legend: Light red is FastMM allocated and dark gray is system allocated.
I'd like to understand what is causing the system to use that much memory, probably by understanding what is contained in that memory, or what line of code or procedure caused that allocation.
EDIT 2: I'd rather not use the full version of AQTime, for multiple reasons:
I'm using multiple virtual machines for development and their licensing system is a PITA (I'm already a registered user of TestComplete)
the LITE version doesn't provide enough information, and I won't spend money without being certain that the FULL version will give me valuable information
Any other suggestions ?
Another problem might be heap fragmentation. This means you have enough memory free, but all the free blocks are too small. You might see it visually by using the source version of FastMM and using FastMMUsageTracker.pas, as suggested here.
You need a profiler, but even that won't be enough in lots of places and cases. Also, in your case, you would need the full-featured AQTime, not the lite version that comes with Delphi XE and XE2. (AQTime is extremely expensive and annoyingly node-locked, so don't think I'm a shill for SmartBear software.)
The thing is that people often think of the AQTime Allocation Profiler as only a way to find leaks. It can also tell you where your memory goes, at least within the limits of the tool. While the application is running and consuming lots of memory, I click Run -> Get Results.
Here is one of my applications being profiled in AQTime, with its Allocation Profiler showing exactly which class is allocating how many instances on the heap and how much memory those use. Since you report low Delphi heap usage with FastMM, that tells me that most of AQTime's ability to analyze by Delphi class name will also be useless to you. However, by using AQTime's events and triggers, you might be able to figure out what areas of your application are causing you a "memory usage expense", and when those occur, what the expense is. AQTime's real-time instrumentation may be sufficient to help you narrow down the cause, even though it might not automatically find for you which function call is causing the most memory usage.
The columns include "Object Name", which covers things like this:
* All Delphi classes, with their instance counts and heap usage.
* Virtual Memory blocks allocated via Win32 calls.
It can detect Delphi and C/C++ library allocations on the heap, and can see certain Windows-API level memory allocations.
Note the live count of objects and the amount of heap memory used.
I usually try to figure out the memory cost of a particular operation by measuring heap memory use before, and just after, some expensive operation, but before the cleanup (freeing) of the memory from that expensive operation. I can set event points inside AQTime, and when a particular method gets hit or a flag gets turned on by me, I can measure the before and after values and then compare them.
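The same before/after idea can be sketched without AQTime, using FastMM4's usage summary (a hedged illustration; DoExpensiveOperation is a hypothetical stand-in for the operation under test):

    uses
      Windows, SysUtils, FastMM4;

    procedure ReportOperationCost;
    var
      BeforeOp, AfterOp: TMemoryManagerUsageSummary;
    begin
      GetMemoryManagerUsageSummary(BeforeOp);
      DoExpensiveOperation; // hypothetical operation under test
      GetMemoryManagerUsageSummary(AfterOp);
      // The delta is the memory still held by the operation before cleanup.
      OutputDebugString(PChar(Format('Operation cost: %d bytes',
        [Int64(AfterOp.AllocatedBytes) - Int64(BeforeOp.AllocatedBytes)])));
    end;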
FastMM alone cannot detect a non-Delphi allocation, or an allocation from a heap that is not managed by FastMM. AQTime is not limited in that way.
One old application started to consume a lot of memory after a server update. Memory usage seems to rise without limit until the program hangs.
According to FastMM4 and EurekaLog, there's no memory leak (except 28 bytes), so I assume all memory is freed when the application shuts down.
Are there any tools or strategies suitable for tracking this kind of memory problem?
Since September 2012, there is a very simple and convenient way to find this type of "run-time only" memory leak.
FastMM4991 introduced a new method, LogMemoryManagerStateToFile:
Added the LogMemoryManagerStateToFile call. This call logs a summary of
the memory manager state to file: The total allocated memory, overhead,
efficiency, and a breakdown of allocated memory by class and string type.
This call may be useful to catch objects that do not necessarily leak, but
do linger longer than they should.
To discover the leak at run time, you only need these steps:
1. Add a call to LogMemoryManagerStateToFile('memory.log', '') in a place where it will be called at intervals (see the sketch after this list).
2. Run the application.
3. Open the log file with a tail program (for example BareTail), which will auto-refresh when the file content changes.
4. Watch the first lines of the file; they will contain the memory allocations which occupy the highest amount of memory.
5. If you see that a class or memory type constantly has a growing number of instances, it can be the reason for your leak.
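A minimal sketch of step 1 (assuming FastMM4 is the first unit in the project's uses clause; calling it from a TTimer, and the log file name, are arbitrary illustrative choices):

    uses
      FastMM4;

    // Called periodically, e.g. from a TTimer's OnTimer event. Logs the
    // totals and the per-class / per-string-type breakdown described in
    // the changelog entry above.
    procedure LogMemorySnapshot;
    begin
      LogMemoryManagerStateToFile('memory.log', 'periodic snapshot');
    end;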
The growing memory consumption is an application issue. It is not a bug which FastMM4 or EurekaLog can discover; from their point of view, the application is just using the memory correctly.
Using AQTime, MemProof (hard to find; D7 is the last supported version(?)), SleuthQA (similar to MemProof) or similar memory profilers, you can track the memory usage outside of the application in real time.
Using FastMM4's GetMemoryManagerState / GetMemoryManagerUsageSummary, you can track memory usage from inside the application. Output this information into a trace file and analyze it after the run. Or make a simple wrapper function for one of the above procedures which returns the current memory usage, and call it from the IDE debugger's Evaluate/Modify, add it to the Watches, or pass it to OutputDebugString, and see the current memory usage.
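A hedged sketch of such a wrapper (assuming a recent FastMM4 as the project's memory manager; the function name is illustrative):

    uses
      FastMM4;

    // Returns the number of bytes currently allocated through FastMM4.
    // Call it from Evaluate/Modify, add it to the Watch list, or trace it
    // with OutputDebugString.
    function CurrentAllocatedBytes: NativeUInt;
    var
      Summary: TMemoryManagerUsageSummary;
    begin
      GetMemoryManagerUsageSummary(Summary);
      Result := Summary.AllocatedBytes;
    end;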
Note: if memory is eaten by some DLL, you may not see its memory usage using (3), the in-application tracking; use (2), the external profilers, instead.
By analyzing the memory usage and the tasks performed by the application, you may discover what leads to the increased memory usage.
AQTime (a commercial tool which is quite expensive) can report your memory usage, down to the line of source code that allocated each object. In very large memory usage scenarios, you might want the AQTime functionality that can show the number of objects and the size (total, plus individual instance size) for each object type. AQTime worked great for me, starting with Delphi 7, and in all later versions, including your version (2006) and the latest versions (XE and XE2).
As the program's memory usage grows, AQTime can be used to grab "snapshots" of the runtime heap, which you can use to understand the memory usage of your application: what is being created, and how many of each object exist. Even when no leaks exist, understanding the runtime behaviour of your application in terms of the objects it creates and manages is very important, and AQTime is the most powerful tool I know of for Delphi users.
If you are willing to upgrade to Delphi XE/XE2, you might already have an included light version of AQTime; if so, check it out. If not, I recommend you try their demo. I am unaware of any free or open source alternatives that can provide the same functionality.
Lesser functionality could be cobbled together manually by writing lots of trace messages, or by using FastMM's full debug mode. If you could write a complete dump of your memory usage into a very large file, you might be able to write some tools to parse it and create a summary. The problem I have with FastMM in this case is that you will be drowned in detail, without the ability to extract exactly the summary information that helps you understand your situation. So you can try to write your own tool to summarize the memory usage. In one application I had, which used a series of components that I knew would use a lot of memory, I wrote a dialog box into my application that showed the current memory usage by these large memory-blob-of-data objects.
Have you ever thought about the leak that the IDE itself is causing... it is huge!!!
In my case (2GB of RAM) I do the following...
1. Open the IDE
2. Leave it minimized for nearly six hours
3. See how physical memory is getting used
The result:
While the IDE is open (remember, I do the test with it minimized), it takes more and more RAM... till no more RAM is free.
It takes all 2GB of RAM plus all the page file hard disk space (I have it configured to a max of 4GB).
In less than six hours (doing nothing in the IDE) it tries to use more than 6GB.
That is called a memory leak caused by the IDE... I do not type a single letter in the IDE, do not compile anything, do not even open any project... I just open the IDE and minimize it... leave the computer without doing anything on it for about six hours, and the IDE is consuming 6GB of memory.
Of course, after that, the IDE starts showing annoying SystemOutOfMemory messages... and I must kill it... then all those 6GB are freed!!!
When the hell will this get fixed?
Please note I have all patches applied; I also tested without applying each patch/hotfix, etc...
The best I got was by disabling some options under Tools, like the one that underlines bad code, etc... So why the hell does that option have any influence... I am not typing anything in the IDE (in these tests)... and if I have it disabled, the memory leak gets reduced a lot...
Of course, if I use the IDE (write code in an open project) without even compiling/running it... things get much worse... a memory leak of up to 6GB can be reached in less than an hour; sometimes it occurs after 15 minutes of copying and pasting source code.
It seems there will not be a solution any time soon!!!
So I use the following workaround, which works perfectly:
- Close the IDE and reopen it every 15 minutes or less
Ugly solution, I know... but it works!!!
I have an MDI program. When it starts it takes 2-3MB of RAM. Then, in this program, I create about 260 MDI child windows (each has a TStringGrid, a bitmap and some other controls) and display some data. The application needs about 500MB to load all those windows. If I close each MDI child manually, the application still uses 160MB of RAM. Why doesn't it return to a few MB of RAM? Should I worry about this? 160MB is A LOT for a system that has only 1GB of RAM!!
Note: I use the WORKING SET column in Task Manager to see RAM statistics. Maybe I need a better tool to read the RAM utilization. (Private Working Set is just a bit smaller than Working Set).
This is not a leak!
FastMM (set on aggressive) indicates no memory leak when I close the program. See my Answer post for additional evidence that it isn't a leak.
I release stuff
Many people told me that closing a child window only hides it. I know that. I use Action := caFree to actually release the forms. Each form is responsible for releasing the controls it holds.
Answer
I have found that FastMM is responsible for this. See the answer I posted below.
Delphi 7, Win 7 32 bit
Similar posts:
Can memory be cleaned up?
When to call SetProcessWorkingSetSize? (Convincing the memory manager to release the memory)
Task Manager is not the right tool for detecting memory leaks. Delphi allocates large blocks of memory and keeps them for future use, so a certain increase in allocated memory is expected even after you release all resources. Any other results (and more detailed answers) can be obtained only by using specialized memory analysis tools. AQTime is the first that comes to mind; or, if you can find the old but useful MemProof, it will help you a lot (MemProof is free, and for memory analysis it's handier than AQTime).
It is very well possible that FastMM does not show memory leaks upon application termination (for instance because all objects are TComponents that are owned, and the owner frees them).
But in the meantime, while the application is running, those components can still be around, and not freed soon enough.
Did you use the FastMM unit that shows a form with the current memory usage?
< Edit >
This is the FastMMUsageTracker.pas in the directory ...\FastMM\Demos\Usage Tracker.
Use that unit, then call the ShowFastMMUsageTracker function in it.
You can refresh that form every once in a while to see how your memory consumption grows.
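A minimal wiring sketch (assuming the FastMM sources and the demo's FastMMUsageTracker.pas are on the project's search path; the form names are illustrative):

    program MyApp;

    uses
      FastMM4,             // must be the first unit so it installs itself
      Forms,
      FastMMUsageTracker,  // from the Demos\Usage Tracker directory
      MainFormUnit;        // hypothetical main form unit

    begin
      Application.Initialize;
      Application.CreateForm(TMainForm, MainForm);
      ShowFastMMUsageTracker; // non-modal form; refresh it to watch usage grow
      Application.Run;
    end.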
I have put a FastMMUsageTrackerProject sample online, including an update of FastMM4 that makes it easier to check and debug memory leaks:
the form in the FastMMUsageTracker unit is now resizable, and the controls in it anchor in the right way
there is a new FastMmBootstrapUnit unit making debugging specific memory leaks easier
Something I had at hand last week was a 3rd-party DLL which was not written in Delphi.
The DLL had a memory leak via Windows GlobalAlloc calls, which are not tracked by FastMM.
NB: I'm about to post an update to FastMM on
--jeroen
Answer:
I just removed FastMM from my project, and the program returned to a few MB after freeing all those child windows. Many may argue that this is not misbehavior and that FastMM is doing this in order to perform some kind of kinky memory optimization. They may be right. However, it may be good for MY application but not good for other running applications.
So, at least we know who causes this. I worried for a whole day that my program was leaking RAM like an old bucket. I am relieved now.
UPDATE:
To confirm that this behavior is generated by FastMM (as suggested by Barry Kelly), I created a second program that allocated A LOT of RAM. As soon as Windows ran out of RAM, my program's memory utilization returned to its original value.
(Note: I am not saying there is a bug in FastMM!)
My program is not leaking. Problem solved.
The main limitation of FastMM's memory leak tracing is that it can only run when you shut down the program. It could be that you're still holding references to objects or other data that gets cleaned up when you shut down the program, but stays around until then.
For example, when you close the MDI child windows, do you call Free or Release on them, or just make them disappear? If they're hidden but not freed, they'll still be in memory.
If you close an MDI form, it is not freed automatically. Use Action := caFree (google for that) in its OnClose handler to make sure the form is also freed.
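A minimal sketch of that handler on an MDI child form (the form name is illustrative):

    procedure TMDIChildForm.FormClose(Sender: TObject; var Action: TCloseAction);
    begin
      // Without this, closing an MDI child merely minimises it (caMinimize),
      // so the form and everything it owns stay in memory.
      Action := caFree;
    end;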
I set up a project and ran it, and looked at it in Process Explorer, and it turns out it's using about 5x more RAM than I would have guessed, just to start up. Now if my program's going too slowly, I hook it up to a profiler and have it tell me what's using all my cycles. Is there any similar tool I can hook it up to and have it tell me what's using all my RAM?
AQTime can help with that too.
What figures are you using from Process Explorer?
"Memory Use" in Windows is not a straightforward topic. Almost every application incorporates some form of memory manager that attempts to satisfy the memory needs of the application, about which the operating system has surprisingly little knowledge - the OS knows what memory the applications memory manager is using, but that is not always the same thing as what your application is actually using.
A simple way to see this is to watch the memory use reported by Task Manager: start up a Delphi application and note its "memory use" in Task Manager. Then minimise that application to the taskbar and you should see the memory use fall. Even restoring the application won't bring the memory use back up to the previous level.
In crude terms, when you minimise the application the memory manager takes that as a cue that it should return any unnecessarily "used" memory back to the OS. That is, memory that the memory manager is using to efficiently service your application but which your application itself is not actually using.
The memory manager should also return this memory to the system if the system requires it - due to low memory conditions, for example. The minimise-to-taskbar "trick" is simply a sensible optimisation: since a minimised app is typically not actively in use, it's an opportune time to do such "housekeeping" automatically.
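For completeness, here is a hedged sketch of requesting that trim explicitly via the Win32 API (SetProcessWorkingSetSize with -1 for both sizes means "trim the working set"; the SIZE_T casts are as declared in recent Delphi versions):

    uses
      Windows;

    // Hands unused working-set pages back to the OS; they are paged back
    // in on demand, which is roughly what happens on minimise.
    procedure TrimWorkingSet;
    begin
      SetProcessWorkingSetSize(GetCurrentProcess, SIZE_T(-1), SIZE_T(-1));
    end;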
(This is not "a bad thing", it's just something to be aware of when considering "memory use")
To make matters worse, in addition to memory that the memory manager is using but which your application is not, there is also the question of "commit charge", which won't necessarily show up as memory used by either your application OR its memory manager!
In a Delphi application (from Delphi 2006 onward) the memory manager is FastMM, and that has a built-in tool that will show you what your application's memory use is like from "the inside" (or at least it used to have such a tool - I've not used it in a while).
IIRC, using it was a question of simply adding a unit to your project and creating a form at runtime (via some "debug only" menu item on the Help menu, or whatever mechanism you choose) that would then give you a "map" of your memory usage.
If you are using a version of Delphi earlier than 2006 you can still use FastMM - it's free and open source. Just download it from SourceForge.
AQTime has been an amazing profiling tool for us. It works amazingly well and has allowed us to pinpoint bottlenecks in places where we never thought there were any, while sometimes showing us there was no bottleneck where we were sure there was one.
It is, along with FinalBuilder, Araxis Merge, and TestComplete, an indispensable tool!
In addition to the others: before I switched to D2006+ (and started using FastMM) I used the free MemProof (from the makers of AQTime). It has some issues, but it is workable.