StackWalk of other process in delphi? - delphi

Do you know how to read another process's stack in Delphi?

Yes.
You can enumerate threads with the Toolhelp functions, get the context with GetThreadContext(), and read the stack memory (using ESP from the context) with ReadProcessMemory(). The stack grows downwards in memory, so the data currently on the stack lies at addresses from ESP upwards; reading the memory above ESP walks down the call stack towards the callers.
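For illustration, a minimal 32-bit sketch of that sequence (DumpRawStack is a hypothetical helper, not code from any library; it assumes the process handle was opened with PROCESS_VM_READ and the thread handle with THREAD_GET_CONTEXT and THREAD_SUSPEND_RESUME; a 64-bit target would use Rsp from the 64-bit context instead):

uses
  Winapi.Windows, System.SysUtils;

procedure DumpRawStack(hProcess, hThread: THandle);
var
  Ctx: TContext;
  Stack: array[0..255] of DWORD;   // 1 KB of raw stack data, arbitrary size
  BytesRead: SIZE_T;               // DWORD in some older Delphi versions
  i: Integer;
begin
  SuspendThread(hThread);          // the context is only stable while suspended
  try
    FillChar(Ctx, SizeOf(Ctx), 0);
    Ctx.ContextFlags := CONTEXT_CONTROL;   // Eip, Esp, Ebp are enough here
    if GetThreadContext(hThread, Ctx) and
       ReadProcessMemory(hProcess, Pointer(Ctx.Esp), @Stack, SizeOf(Stack), BytesRead) then
      for i := 0 to Integer(BytesRead div SizeOf(DWORD)) - 1 do
        Writeln(Format('[ESP+%.4x] %.8x', [i * SizeOf(DWORD), Stack[i]]));
  finally
    ResumeThread(hThread);
  end;
end;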

You could take a look at the "TThreadSampler.MakeStackDump" procedure in the following unit of my sampling profiler: http://code.google.com/p/asmprofiler/source/browse/trunk/Sampling/mcThreadSampler.pas
This function can read from the same thread, the same process, or a different process (each with its own optimized function).
Btw: my sampling profiler reads Delphi debug symbols (.map, .jdbg, etc.) because there is still no good Delphi-to-PDB debug symbol converter (which would let you view the stack of a Delphi program in a Windows debugger, Process Explorer, Visual Studio, etc.). You can also use my sampling profiler to view the current stack of any process!
http://code.google.com/p/asmprofiler/wiki/AsmProfilerSamplingMode

Related

Calculating the Stack usage in RTOS application

I am currently working on a project to develop an application on an STM32 microcontroller using an RTOS (Micrium).
Are there any tools to calculate the stack usage of a particular thread in an RTOS application?
No tools I know of. However, two simple methods to estimate stack usage have always worked for me.
Fill all RAM with a value like 0x55 or 0xAA. Let the program run long enough while exercising all of the device's features to get the widest code-execution coverage. Stop (under a debugger) and examine RAM to see how far down the fill values have been overwritten. That should give you a good approximation. This works with or without an OS.
Modify the OS just a bit so that on each task switch you record, in a global array, the lowest stack pointer seen so far for each task (by comparing against the previously recorded value for that task). After running the app long enough as in [1], examine the recorded values. Although there is no guarantee that a task switch happens at the exact moment of maximum stack use for a task, statistically, after a long enough run and assuming preemptive switching, you will have recorded an accurate enough value.
If you are using GCC or Clang, the -fstack-usage compiler switch generates the stack frame size for each function. You need to combine that information with the call-graph information generated by the linker to find the deepest stack usage starting from a specific function. Starting at main(), each task entry point, and each ISR will then give you the worst-case usage for that thread.
Helpfully the work to create such a tool has been done for you as discussed here, using a Perl script from here.
ARM's armcc compiler v5 and earlier (v6 is clang/llvm) has this functionality built-in and can include detailed stack analysis in the link map, including the worst-case call path and warnings of non-deterministic stack usage (due to recursion or call-backs through function pointers for example). You may be using armcc if you are using Keil ARM MDK for example. Again for multi-threaded systems (tasks/ISRs) you need to look at the stack usage for the thread entry point.
Note also that on ARM Cortex-M, the "system stack" is shared by the main() thread and all ISRs, and if you use ISR preemption priorities, multiple interrupts may be active simultaneously. So in theory the worst-case stack usage is the sum of the stack usage of main() and of all ISRs that may be active concurrently. While it is good practice to keep ISRs short and simple, beware of third-party code: ST's USB library, for example, runs the entire USB device stack in ISR context!

Is program compiled to main memory or program memory when compiling?

Suppose there is some C++ code. It is compiled to binary code during compilation. My question is: where does my computer store the binary code, in main memory (DRAM) or in program memory (inside the CPU)?
I also want to know whether the content of program memory can be changed by the computer's user.
If you have a memory designed to hold X, that's where you need to put X.
If the CPU of your reference architecture fetches instructions from a dedicated program memory, that's where the instructions must be stored since the CPU will look for them only there.
It's worth saying that modern processors are von Neumann: they have a unified program and data memory (internally they are not, e.g. the caches are split), while microcontrollers are often Harvard.
I'll gloss over the advantages of each one and just say that every combination of attributes exists: you can have a CPU where the program memory is read-only and programmed in the factory, where it is programmable through an external interface, where it cannot be read by the program itself, where it can, and where it can even be written by the program itself.

Large memory aware testing in the IDE

Historically we have had problems with RAD Studio running out of memory, which no longer happens with XE10 Seattle. We have a lot of our own components which have never been tested for large-memory awareness and do not need it when built into our applications, BUT we have recently had an IDE fault due to the design-time instance of a component being instantiated at an address above 2 GB (which we have fixed).
I have a feeling I read somewhere that Embarcadero have a method for testing RAD Studio (a command line option??) for higher memory compatibility, but I cannot find the reference anywhere. Does anyone know either how to force higher memory position allocation in the IDE to verify our component set's design-time behaviour, or alternatively a way of testing in an application other than writing something that just steals all the lower memory?
I've tried the "allocate from the top" option in FastMM, but this just starts allocating from 2 GB downwards even when the executable is set for higher memory use.
The most effective way to test this is to force the system to allocate memory top down. How this is done is described here: https://msdn.microsoft.com/en-us/library/bb613473.aspx
To force allocations to allocate from higher addresses before lower addresses for testing purposes, specify MEM_TOP_DOWN when calling VirtualAlloc or set the following registry value to 0x100000:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Memory Management\AllocationPreference
Once you change the registry setting you will need to restart your machine.
Do not be surprised if your machine becomes unstable when you do this. A great many anti-malware products are incapable of operating under system-wide top down memory allocation. You might find it necessary to temporarily disable your anti-malware while performing top down allocation testing.
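The registry value affects the whole system; if you just want to see a single top-down allocation in action, MEM_TOP_DOWN can also be passed directly to VirtualAlloc. A minimal sketch for a 32-bit console test program built with {$SetPEFlags IMAGE_FILE_LARGE_ADDRESS_AWARE}; CheckTopDownAllocation is a hypothetical helper, not an Embarcadero-provided switch:

uses
  Winapi.Windows, System.SysUtils;

procedure CheckTopDownAllocation;
var
  P: Pointer;
begin
  // Ask for a 64 KB block from the top of the address space down.
  P := VirtualAlloc(nil, 64 * 1024, MEM_RESERVE or MEM_COMMIT or MEM_TOP_DOWN, PAGE_READWRITE);
  if P = nil then
    RaiseLastOSError;
  try
    Writeln(Format('Allocated at %p (above 2 GB: %s)',
      [P, BoolToStr(NativeUInt(P) >= $80000000, True)]));
  finally
    VirtualFree(P, 0, MEM_RELEASE);
  end;
end;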

Strategy or tools to find "non-leak" memory usage problems in Delphi?

One old application started to consume a lot of memory after a server update. Memory usage seems to rise without limit until the program hangs.
According to FastMM4 and EurekaLog, there's no memory leak (except 28 bytes), so I assume all memory is freed when the application is shut down.
Are there any tools or strategies suitable for tracking this kind of memory problem?
Since September 2012, there has been a very simple and convenient way to find this type of "run-time only" memory leak.
FastMM 4.991 introduced a new method, LogMemoryManagerStateToFile:
Added the LogMemoryManagerStateToFile call. This call logs a summary of
the memory manager state to file: The total allocated memory, overhead,
efficiency, and a breakdown of allocated memory by class and string type.
This call may be useful to catch objects that do not necessarily leak, but
do linger longer than they should.
To discover the leak at run time, you only need these steps:
1. add a call to LogMemoryManagerStateToFile('memory.log', '') in a place where it will be called at intervals (see the sketch after this list)
2. run the application
3. open the log file with a tail program (for example BareTail), which will auto-refresh when the file content changes
4. watch the first lines of the file; they will contain the memory allocations that occupy the highest amount of memory
5. if you see that a class or memory type constantly has a growing number of instances, this can be the cause of your leak
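A minimal sketch of step 1, assuming the FastMM4 unit is the first unit in the project's .dpr uses clause and the main form has a TTimer (here named LogTimer, a hypothetical name) with Interval set to, say, 30000 ms:

uses
  FastMM4;

procedure TMainForm.LogTimerTimer(Sender: TObject);
begin
  // Logs the memory manager state (total allocated, overhead, and a
  // breakdown by class and string type) to memory.log on every tick.
  LogMemoryManagerStateToFile('memory.log', '');
end;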
1. The growing memory consumption is an application issue. It is not a bug that FastMM4 or EurekaLog can discover; from their point of view the application is simply using the memory correctly.
2. Using AQTime, MemProof (hard to find; D7 is the last supported version?), SleuthQA (similar to MemProof) or similar memory profilers, you can track the memory usage from outside the application in real time.
3. Using FastMM4's GetMemoryManagerState / GetMemoryManagerUsageSummary, you can track memory usage from inside the application. Output this information to a trace file and analyze it after the run. Or write a simple wrapper function around one of the above procedures that returns the current memory usage (see the sketch below), and call it from the IDE debugger's Evaluate/Modify, add it to the Watches, or pass it to OutputDebugString to see the current memory usage.
Note: if the memory is eaten by some DLL, you may not see its memory usage using (3). Use (2) instead.
By analyzing the memory usage and the tasks performed by the application, you may discover what leads to the raised memory usage.
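As a sketch of the wrapper function suggested in (3), assuming the full FastMM4 unit is in the uses clause (CurrentAllocatedBytes and TraceMemoryUsage are illustrative names, not FastMM API):

uses
  Winapi.Windows, System.SysUtils, FastMM4;

// Returns the number of bytes currently allocated through the memory manager.
function CurrentAllocatedBytes: NativeUInt;
var
  Usage: TMemoryManagerUsageSummary;
begin
  GetMemoryManagerUsageSummary(Usage);
  Result := Usage.AllocatedBytes;
end;

// Call this at interesting points, or evaluate CurrentAllocatedBytes from the
// debugger; the output can be watched in the IDE's Event Log or in DebugView.
procedure TraceMemoryUsage(const AContext: string);
begin
  OutputDebugString(PChar(Format('%s: %d bytes allocated',
    [AContext, Int64(CurrentAllocatedBytes)])));
end;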
AQTime (a commercial tool which is quite expensive) can report your memory usage, down to the line of source code that allocated each object. In the case of very large memory usage scenarios, you might want the AQTime functionality that can show the number of objects and the size (total plus individual instance size) for each object. AQTime worked great for me, starting with Delphi 7, and all later versions, including your version (2006) and the latest versions (XE and XE2).
As the program's memory usage grows, AQTime can be used to grab "snapshots" of the runtime heap, which you can use to understand the memory usage of your application: what is being created, and how many of each object exist. Even when no leaks exist, understanding the runtime behaviour of your application in terms of the objects it creates and manages is very important, and AQTime is the most powerful tool I know of for Delphi users.
If you are willing to upgrade to Delphi XE/XE2, you might have an included light version of AQTime already, if so, check it out. If not, I recommend you try their demo. I am unaware of any free or open source alternatives that can provide the same functionality.
Lesser functionality could be cobbled together manually by writing lots of trace messages, or by using FastMM's full-debug mode. If you could write a complete dump of your memory usage into a very large file, you might be able to write some tools to parse it and produce a summary. The problem I have with FastMM in this case is that you will be drowned in detail without being able to extract exactly the summary information that helps you understand your situation. So you can try to write your own tool to summarize the memory usage. In one application that used a series of components I knew would use a lot of memory, I wrote a dialog box into my application that showed the current memory usage by these large memory-blob-of-data objects (see the sketch below).
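As an illustration of that kind of home-grown tracking (my own sketch, not the author's actual dialog; TBigBlob stands in for whatever heavyweight class dominates memory): give the class an instance counter and display it from a timer or status dialog.

type
  TBigBlob = class
  private
    class var FInstanceCount: Integer;  // plain Inc/Dec: fine for a main-thread diagnostic
  public
    constructor Create;
    destructor Destroy; override;
    class property InstanceCount: Integer read FInstanceCount;
  end;

constructor TBigBlob.Create;
begin
  inherited Create;
  Inc(FInstanceCount);
end;

destructor TBigBlob.Destroy;
begin
  Dec(FInstanceCount);
  inherited;
end;

// e.g. in a timer event on the status dialog:
//   LabelBlobs.Caption := Format('Blobs alive: %d', [TBigBlob.InstanceCount]);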
Have you ever thought about the leak caused by the IDE itself? It is huge!
In my case (2 GB of RAM) I did the following:
1. Open the IDE
2. Leave it minimized for nearly six hours
3. Watch how physical memory gets used
The result:
While the IDE is open (remember, I also ran the test with it minimized) it takes more and more RAM, until no RAM is free.
It takes all 2 GB of RAM plus all the page file space on disk (I have it configured to a maximum of 4 GB).
In less than six hours (doing nothing in the IDE) it tries to use more than 6 GB.
That is called a memory leak caused by the IDE. I did not type a single letter in the IDE, did not compile anything, did not even open any project; I just opened the IDE and minimized it, left the computer untouched for about six hours, and the IDE was consuming 6 GB of memory.
Of course, after that, the IDE starts showing annoying SystemOutOfMemory messages and I must kill it; then all that 6 GB is freed!
When on earth will this get fixed?
Please note I have all patches applied; I also tested without applying each patch/hotfix, etc.
The best result I got was by disabling some options under Tools, like the one that underlines bad code. Why on earth does that option have any influence? I am not typing anything in the IDE (in these tests), yet with it disabled the memory leak is reduced a lot.
Of course, if I actually use the IDE (write code in an open project) without even compiling or running it, things get much worse: the leak can reach 6 GB in less than an hour, sometimes after 15 minutes of copy/pasting source code.
It seems there will not be a solution any time soon!
So I use the following workaround, which works perfectly:
- Close the IDE and reopen it every 15 minutes or less
Ugly solution, I know, but it works!

Why does my Delphi program's memory continue to grow?

I am using Delphi 2009 which has the FastMM4 memory manager built into it.
My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program. It has no memory leaks at all.
Using the CurrentMemoryUsage routine given in spenwarr's answer to: How to get the Memory Used by a Delphi Program, I have displayed the memory used by FastMM4 during processing.
What seems to be happening is that memory use is growing after every process-and-release cycle, e.g.:
1,456 KB used after starting my program with no dataset.
218,455 KB used after loading a large dataset.
71,994 KB after clearing the dataset completely. If I exit at this point (or any point in my example), no memory leaks are reported.
271,905 KB used after loading the same dataset again.
125,443 KB after clearing the dataset completely.
325,519 KB used after loading the same dataset again.
179,059 KB after clearing the dataset completely.
378,752 KB used after loading the same dataset again.
It seems that my program's memory use is growing by about 53,400 KB upon each load/clear cycle. Task Manager confirms that this is actually happening.
I have heard that FastMM4 does not always release all of the program's memory back to the Operating system when objects are freed so that it can keep some memory around when it needs more. But this continual growing bothers me. Since no memory leaks are reported, I can't identify a problem.
Does anyone know why this is happening, if it is bad, and if there is anything I can or should do about it?
Thank you dthorpe and Mason for your answers. You got me thinking and trying things that made me realize I was missing something. So detailed debugging was required.
As it turns out, all my structures were being properly freed upon exit, but not everything was being released after each cycle during the run. Memory blocks were accumulating that would have shown up as a leak, detectable on exit, had my exit cleanup not been correct; but it was.
There were some StringLists and other structures I needed to clear between the cycles. I'm still not sure how my program worked correctly with the extra data still there from the earlier cycles but it did. I'll probably research that further.
This question has been answered. Thanks for your help.
The CurrentMemoryUsage utility you linked to reports your application's working set size. Working set is the total number of pages of virtual memory address space that are mapped to physical memory addresses. However, some or many of those pages may have very little actual data stored in them. The working set is thus the "upper bound" of how much memory your process is using. It indicates how much address space is reserved for use, but it does not indicate how much is actually committed (actually residing in physical memory) or how much of the pages that are committed are actually in use by your application.
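For reference, routines of that kind typically query the working set through GetProcessMemoryInfo; a minimal sketch (my own illustration, not necessarily identical to the linked code):

uses
  Winapi.Windows, Winapi.PsAPI;  // 'PsAPI' without the unit-scope prefix in older versions

// Returns the process working set size in KB, the same figure that
// Task Manager and routines like the linked CurrentMemoryUsage report.
function CurrentWorkingSetKB: Cardinal;
var
  Counters: TProcessMemoryCounters;
begin
  FillChar(Counters, SizeOf(Counters), 0);
  Counters.cb := SizeOf(Counters);
  if GetProcessMemoryInfo(GetCurrentProcess, @Counters, SizeOf(Counters)) then
    Result := Counters.WorkingSetSize div 1024
  else
    Result := 0;
end;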
Try this: after you see your working set size creep up after several test runs, minimize your application's main window. You will most likely see the working set size drop significantly. Why? Because Windows performs a SetProcessWorkingSetSize(-1) call when you minimize an application, which discards unused pages and shrinks the working set to the minimum. The OS doesn't do this while the app window is normal sized because reducing the working set size too often can make performance worse by forcing data to be reloaded from the swap file.
To get into it in more detail: Your Delphi application allocates memory in fairly small chunks - a string here, a class there. The average memory allocation for a program is typically less than a few hundred bytes. It's difficult to manage small allocations like this efficiently on a system-wide scale, so the operating system doesn't. It manages large memory blocks efficiently, particularly at the 4k virtual memory page size and 64k virtual memory address range minimum sizes.
This presents a problem for applications: applications typically allocate small chunks, but the OS doles out memory in rather large chunks. What to do? Answer: suballocate.
The Delphi runtime library's memory manager and the FastMM replacement memory manager (and the runtime libraries of just about every other language or toolset on the planet) both exist to do one thing: carve up big memory blocks from the OS into smaller blocks used by the application. Keeping track of where all the little blocks are, how big they are, and whether they've been "leaked" requires some memory as well - called overhead.
In situations of heavy memory allocation/deallocation, there can be situations in which you deallocate 99% of what you allocated, but the process's working set size only shrinks by, say, 50%. Why? Most often, this is caused by heap fragmentation: one small block of memory is still in use in one of the large blocks that the Delphi memory manager obtained from the OS and divvied up internally. The internal count of memory used is small (300 bytes, say) but since it's preventing the heap manager from releasing the big block that it's in back to the OS, the working set contribution of that little 300 byte chunk is more like 4k (or 64k depending on whether it's virtual pages or virtual address space - I can't recall).
In a heavy memory intensive operation involving megabytes of small memory allocations, heap fragmentation is very common - particularly if memory allocations for things not related to the memory intensive operation are going on at the same time as the big job. For example, if crunching through your 80MB database operation also outputs status to a listbox as it progresses, the strings used to report status will be scattered in the heap amongst the database memory blocks. When you release all the memory blocks used by the database computation, the listbox strings are still out there (in use, not lost) but they are scattered all over the place, potentially occupying an entire OS big block for each little string.
Try the minimize window trick to see if that reduces your working set. If it does, you can discount the apparent "severity" of the numbers returned by the working set counter. You could also add a call to SetProcessWorkingSetSize after your big compute operation to purge the pages that are no longer in use.
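For illustration, a minimal sketch of that call, intended to run right after the big compute operation; passing -1 for both sizes tells Windows to trim the working set as far as it can:

uses
  Winapi.Windows;

procedure TrimWorkingSet;
begin
  // Discard unused pages from the working set; they are paged back in
  // on demand if the application touches them again.
  SetProcessWorkingSetSize(GetCurrentProcess, SIZE_T(-1), SIZE_T(-1));
end;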
What sort of dataset are you using? If it's implemented completely in Delphi, (not calling out to other code with another memory manager, like Midas,) you could try deliberately leaking the dataset.
I assume that your dataset is on a form, and it's being freed automatically when the form clears its components. Try putting MyDataset := nil; in your form's OnDestroy. This will make sure that the dataset leaks, and also everything that the dataset owns. Try that after loading once and again after loading twice and compare the leak reports, and see if that gives you anything useful.
You are half-leaking memory; obviously. You are leaking memory while the program is running, but when you close the program, your dataset is properly freed so FastMM (rightfully) does not report it.
See this for details: My program never releases the memory back. Why?
You could use VMMap to trace the most-allocated bytes. It helped me in a similar scenario.
Download VMMap
Compile your application with a detailed map file
Convert the map file to .dbg, so VMMap can understand it, using the map2dbg tool
Configure the symbol (.dbg) path in VMMap: Options -> Configure Symbols -> Symbol paths
Configure the source paths in VMMap: Options -> Configure Symbols -> Source code paths. Hint: use "*" to include subfolders
In VMMap, go to File -> Select Process -> Launch and trace a new process. Configure the application and any parameters it needs, then click OK.
When the app opens, VMMap will trace all allocated and freed memory by hooking (detouring) the allocate/free routines. The Timeline button (at the bottom of VMMap) shows the memory timeline.
Click the Trace button. It shows all the allocation/deallocation operations in the traced period. Sort the Bytes column so the largest allocations come first and double-click an entry; it will show the call stack of that allocation. In my case, the first item revealed my problem.
Sample app:
private
  FList: TObjectList<TStringList>;  // needs Generics.Collections (System.Generics.Collections in newer versions) in the uses clause
...
procedure TForm1.Button1Click(Sender: TObject);
var
  i: Integer;
begin
  // a million string lists: owned by FList, so not reported as a leak,
  // but plainly visible in the VMMap trace
  for i := 0 to 1000000 do
    FList.Add(TStringList.Create);
end;

procedure TForm1.FormCreate(Sender: TObject);
var
  a: TStringList;
begin
  FList := TObjectList<TStringList>.Create; // not a leak: freed in FormDestroy
  a := TStringList.Create;                  // leak: never freed
end;

procedure TForm1.FormDestroy(Sender: TObject);
begin
  FList.Free;
end;
Clicking the button once and looking at the Trace in VMMap shows the allocations, and the call stack of the largest one.
In this case it did not point to the exact code, but Vcl.Controls.TControl.Click gives an idea. In my real scenario it helped more.
There are a lot of other features in VMMap that help with analysing memory problems.
