I have just used FastMM4 to detect leaks. I did not realise our application was using a DLL that was leaking event handles, so I fixed all the leaks FastMM4 reported, but not the handle leaks, since those were never reported.
My questions are: would FastMM4 have reported the leaking event handles? Would this require me to rebuild the DLLs with FastMM4 included? I also heard someone mention ShareMem; do I need to add that?
I am using Delphi 2007, which I think uses the Borland memory manager. If so, should I replace it with the FastMM4 one, and what are the steps to do that?
Sorry for asking so many questions; I am coming back to Delphi after a few years of doing .NET development.
JD.
No. FastMM is a memory manager, and it can only report leaks of memory that the application allocated via FastMM. Handles are opaque references to system objects that are allocated by Windows, so FastMM can't track them, and neither can any other Delphi memory manager.
And this isn't really a Delphi vs. .NET thing either, since .NET's garbage collection couldn't have solved this problem any better than FastMM can. Handles are non-memory resources and you have to keep them from leaking the same way you would in .NET: make sure that anything that allocates one releases it when you're done with it.
Do you know what type of handle you're leaking? If it's something less common than the ubiquitous HWND, that would be a good starting point in tracking down the problem: find where you're allocating that type of handle.
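If you're not sure where the handles go, one low-tech aid is to watch the process handle count while exercising the suspect code, since FastMM will never see these. A minimal sketch, using GetProcessHandleCount (Windows XP SP1 and later), which you may need to declare yourself on older Delphi versions; CurrentHandleCount is a made-up helper name:

    uses
      Windows;

    // Not declared in older Windows.pas versions, so declare it manually.
    function GetProcessHandleCount(hProcess: THandle;
      var pdwHandleCount: DWORD): BOOL; stdcall; external kernel32;

    function CurrentHandleCount: DWORD;
    begin
      if not GetProcessHandleCount(GetCurrentProcess, Result) then
        Result := 0;
    end;

Call CurrentHandleCount before and after the suspect operation; a count that climbs on every iteration points at a handle leak, and narrowing the instrumented region pins down where it happens.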
As for your other question, about Delphi 2007: it comes with FastMM built in, not the old BorlandMM, but it's a fairly basic version. For access to the FullDebugMode functionality, you need to download FastMM from SourceForge, add it at the top of your uses list, and rebuild with the FullDebugMode compiler define on.
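A minimal sketch of what the top of the .dpr then looks like (the unit and form names are placeholders; FullDebugMode is assumed to be set in the project's conditional defines or in FastMM4Options.inc, with FastMM_FullDebugMode.dll next to the EXE):

    program MyApp;

    uses
      // FastMM4 must be the very first unit so it installs itself
      // before any other unit allocates memory.
      FastMM4,
      Forms,
      MainFormU in 'MainFormU.pas' {MainForm};

    begin
      Application.Initialize;
      Application.CreateForm(TMainForm, MainForm);
      Application.Run;
      // With FullDebugMode on, any leaks are reported at shutdown,
      // complete with allocation call stacks.
    end.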
I understand from other posts here that IMAGE_FILE_LARGE_ADDRESS_AWARE may work to effectively expand memory availability in, e.g., Delphi 2007.
I can't get this to work in Delphi 6. Is this indeed the case, or should it work? Or is there an alternative command that does the same thing?
If not, I may need to migrate to a later version of Delphi. Does anyone know the most recent version of Delphi that would easily let me migrate my existing code (ideally, my existing code, which is fairly simple Turbo Pascal-style code, would just work as is) AND would support the IMAGE_FILE_LARGE_ADDRESS_AWARE 'trick' to expand memory?
Many thanks!
Remco
You can apply the IMAGE_FILE_LARGE_ADDRESS_AWARE PE flag to a Delphi 6 application, but you must beware of the following issues:
The default memory manager for Delphi 6, the Borland memory manager, does not support memory allocations at addresses above 2GB. You must replace it with a memory manager that does support large addresses, for instance FastMM.
Your code may well contain pointer truncation bugs that will need to be found and fixed.
The same goes for any third party software that you use, including the Borland RTL and VCL libraries. I did not encounter many problems with these libraries, but your program may exercise different parts of the runtime libraries that do have pointer truncation bugs.
In order to stress test your program under large address conditions, you should turn on top-down memory allocation. Do not be surprised if your anti-malware software (or indeed other system-level software) has to be disabled whilst you operate in top-down memory allocation mode; this type of software is notoriously poor at coping with it.
Finally, it is worth pointing out that large address aware cannot solve all out of memory problems. All it does is open up the top half of the 32 bit address space. Your program might require even more address space than that. In which case you'd need to either re-design your program, or move to a 64 bit compiler.
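For reference, here is how the flag is typically set from Delphi source rather than by patching the binary afterwards; a minimal sketch, assuming your compiler supports the {$SETPEFLAGS} directive (present in Delphi 6 and later, to the best of my knowledge) and that your Windows unit declares IMAGE_FILE_LARGE_ADDRESS_AWARE (otherwise use the literal $0020):

    program MyApp;

    uses
      Windows;  // declares IMAGE_FILE_LARGE_ADDRESS_AWARE ($0020)

    // Marks the EXE as large address aware at link time, so no
    // post-build patching of the PE header is needed.
    {$SETPEFLAGS IMAGE_FILE_LARGE_ADDRESS_AWARE}

    begin
      // normal application startup goes here
    end.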
And if so, how. I'm talking about this 4GB Patch.
On the face of it, it seems like a pretty nifty idea: on Windows, each 32-bit application normally only has access to 2GB of address space, but if you have 64-bit Windows, you can enable a little flag to allow a 32-bit application to access the full 4GB. The page gives some examples of applications that might benefit from it.
HOWEVER, most applications seem to assume that memory allocation always succeeds. Some applications do check whether allocations succeed, but even then can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications range from extremely rare to essentially non-existent in the realm of desktop computers. With this in mind, it would seem reasonable to assume that any such application would be programmed not to exceed 2GB of memory usage under normal conditions, and that the few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
So, have I made some incorrect assumptions? If not, how does this tool help in practice? I don't see how it could, yet I see quite a few people around the internet claiming it works (for some definition of 'works').
Your troublesome assumptions are these:
Some applications do check whether allocations succeed, but even then can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications range from extremely rare to essentially non-existent in the realm of desktop computers.
There do exist applications that do better than "quit gracefully" on failure. Yes, functionality will be impacted (after all, there wasn't enough memory to continue with the requested operation), but many apps will at least be able to stay running - so, for example, you may not be able to add any more text to your enormous document, but you can at least save the document in its current state (or make it smaller, etc.)
With this in mind, it would seem reasonable to assume that any such application would be programmed not to exceed 2GB of memory usage under normal conditions, and that the few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
The trouble with this assumption is that, in general, an application's memory usage is determined by what you do with it. As storage sizes have grown over the years, and memory sizes with them, the sizes of files that people want to operate on have also grown, so an application that worked fine when 1GB files were unheard of may struggle now that (for example) high-definition video can be shot on many consumer cameras.
Putting that another way: applications that used to fit comfortably within 2GB of memory no longer do, because people want to do more with them now.
I think the following extract from the 4GB Patch page you link to pretty much explains how and why it works.
Why things are this way on x64 is easy to explain. On x86 applications have 2GB of virtual memory out of 4GB (the other 2GB are reserved for the system). On x64 these two other GB can now be accessed by 32bit applications. In order to achieve this, a flag has to be set in the file's internal format. This is, of course, very easy for insiders who do it every day with the CFF Explorer. This tool was written because not everybody is an insider, and most probably a lot of people don't even know that this can be achieved. Even I wouldn't have written this tool if someone didn't explicitly ask me to.
And to expand on CFF,
The CFF Explorer was designed to make PE editing as easy as possible, but without losing sight on the portable executable's internal structure. This application includes a series of tools which might help not only reverse engineers but also programmers. It offers a multi-file environment and a switchable interface.
And to quote Larry Miller, a Microsoft MCSA, from a blog post about patching games using the tool:
Under 32 bit windows an application has access to 2GB of VIRTUAL memory space. 64 bit Windows makes 4GB available to applications. Without the change mentioned an application will only be able to access 2GB.
This was not an arbitrary restriction. Most 32 bit applications simply can not cope with a larger than 2GB address space. The switch mentioned indicates to the system that it is able to cope. If this switch is manually set most 32 bit applications will crash in 64 bit environment.
In some cases the switch may be useful. But don't be surprised if it crashes.
And finally, to add from MSDN - Migrating 32-bit Managed Code to 64-bit:
There is also information in the PE that tells the Windows loader if the assembly is targeted for a specific architecture. This additional information ensures that assemblies targeted for a particular architecture are not loaded in a different one. The C#, Visual Basic .NET, and C++ Whidbey compilers let you set the appropriate flags in the PE header. For example, C# and Visual Basic .NET have a /platform:{anycpu, x86, Itanium, x64} compiler option.
Note: While it is technically possible to modify the flags in the PE header of an assembly after it has been compiled, Microsoft does not recommend doing this.
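As an aside, you don't need an external tool just to inspect whether an executable already carries the flag. A minimal Delphi-style sketch that reads the PE header (assuming the TImageDosHeader/TImageNtHeaders records from the Windows unit; the constant is redeclared in case your version lacks it, and header validation is omitted for brevity):

    program CheckLAA;

    {$APPTYPE CONSOLE}

    uses
      Windows, Classes, SysUtils;

    const
      IMAGE_FILE_LARGE_ADDRESS_AWARE = $0020;

    var
      F: TFileStream;
      DosHeader: TImageDosHeader;
      NtHeaders: TImageNtHeaders;
    begin
      F := TFileStream.Create(ParamStr(1), fmOpenRead or fmShareDenyWrite);
      try
        F.ReadBuffer(DosHeader, SizeOf(DosHeader));
        F.Position := DosHeader._lfanew;  // offset of the PE header
        F.ReadBuffer(NtHeaders, SizeOf(NtHeaders));
        if NtHeaders.FileHeader.Characteristics and
           IMAGE_FILE_LARGE_ADDRESS_AWARE <> 0 then
          Writeln('Large address aware')
        else
          Writeln('NOT large address aware');
      finally
        F.Free;
      end;
    end.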
Finally, to answer your question: how does this tool help in practice?
Since you have malloc in your tags, I believe you are working with unmanaged memory. Note that the patch does not make pointers any bigger; a 32-bit process still uses 32-bit pointers. What it does is allow allocations to land at addresses above 2GB, which breaks any code that truncates pointers or treats them as signed 32-bit integers.
But for managed code, since memory is handled by the CLR in .NET, the flag is generally safe and genuinely helpful, unless you are dealing with any of the following:
Invoking platform APIs via p/invoke
Invoking COM objects
Making use of unsafe code
Using marshaling as a mechanism for sharing information
Using serialization as a way of persisting state
To summarize: as a programmer I would not use the tool to convert my application; I would rather migrate it myself by changing build targets. That said, if I have an EXE I cannot rebuild that could do well with more RAM, such as a game, then it is worth a try.
I have found some tools, like FastMM4, that can help developers find memory leaks. But can a QA person use it to determine memory leaks after we take a build? Or is there a tool available that can aid a QA person in finding memory leaks?
Currently what we follow is: run the application, note down the memory usage, perform some tasks, then check the memory usage again, and if we find a big difference we start narrowing down. Is there any tool which will do this automatically?
Lots of functionality in FastMM4 can be enabled or disabled depending on the presence of FastMM_FullDebugMode.dll on the system. This way you can ship only one build, where leak detection is enabled simply by copying FastMM_FullDebugMode.dll into the program folder. You can achieve similar functionality by using the ShareMem unit together with different versions of BorlndMM.dll; in that case you can compile FastMM4 to a BorlndMM.dll with any options you want.
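A sketch of what such a single-build project might look like (assuming FullDebugMode and the LoadDebugDLLDynamically option are both enabled in FastMM4Options.inc or the project's conditional defines):

    program QABuild;

    // One build for everyone: compiled with FullDebugMode and
    // LoadDebugDLLDynamically defined. Per the answer above, the
    // program runs normally when FastMM_FullDebugMode.dll is absent;
    // once QA copies the DLL into the program folder, the full
    // debug-mode checks and leak stack traces become available.
    uses
      FastMM4,  // must remain the first unit in the uses clause
      Forms;

    begin
      Application.Initialize;
      Application.Run;
    end.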
Your QA testers can equally use FastMM to detect memory leaks. You just need to give them a build which enables memory leak detection.
SourceGuard is a lightweight and effective leak-detection and bug-reporting tool for Delphi. It was formerly known as UMLD.
I see the following means of debugging and wonder if there are others, or which FOSS tools a small company can use (we don't do much Windows programming).
1 Debug in the IDE, by setting breakpoints, using watches, etc
2 Debug in the IDE, by using the Event Log
I got some good info from this page and tweaked it to add timestamps and indent/outdent on procedure call/return, so that I can see nested calls more quickly. Does anyone know of anything better?
3 Using a profiler
4 Any others?
Such as MadExcept, etc?
(I am currently using Delphi 7)
The Delphi integrated debugger is powerful enough, even in Delphi 7, to handle most debugging tasks. It can also debug an application remotely. Still, there are situations where you may need to track different kinds of issues:
To check for memory leaks, you can switch to a memory manager like FastMM4, which has good memory leak reporting. Profilers like AQTime also have memory allocation profilers to identify this kind of issue.
To investigate performance problems, you need a performance profiler. There are sampling profilers (less invasive, although possibly less precise) and standard profilers (AQTime again, not cheap but very good, and others).
To trace exceptions, especially in deployed applications, you may need tools like JCL/JVCL (free), MadExcept, EurekaLog, or SmartInspect.
To obtain a log of what the application does, you can use OutputDebugString() and the IDE event viewer, or the standalone DebugView application; there are also dedicated tools like SmartInspect. A minimal logging sketch follows this list.
You can also convert Delphi 7 .map files to .dbg files and use an external debugger such as WinDbg from the Windows SDK, and look at application calls in tools like Process Explorer.
Some debugging tools also offer features like code coverage checks (which code was actually executed, and which never was), platform compliance (checking that API calls are supported by a given platform), resource usage, and so on, which may be useful for larger projects.
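Regarding the logging option above, here is a minimal sketch of the timestamped, indented OutputDebugString logging the earlier question describes; the unit and routine names are made up for illustration:

    unit DebugLogU;

    // Messages show up in the IDE event log or in DebugView.

    interface

    procedure DebugLog(const Msg: string);
    procedure DebugEnter(const ProcName: string);
    procedure DebugLeave(const ProcName: string);

    implementation

    uses
      Windows, SysUtils;

    threadvar
      Indent: Integer;  // per-thread nesting depth

    procedure DebugLog(const Msg: string);
    begin
      OutputDebugString(PChar(FormatDateTime('hh:nn:ss.zzz ', Now) +
        StringOfChar(' ', Indent * 2) + Msg));
    end;

    procedure DebugEnter(const ProcName: string);
    begin
      DebugLog('> ' + ProcName);
      Inc(Indent);
    end;

    procedure DebugLeave(const ProcName: string);
    begin
      Dec(Indent);
      DebugLog('< ' + ProcName);
    end;

    end.

Call DebugEnter at the top of a routine and DebugLeave before it returns, and nested calls appear as an indented tree with timestamps.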
Delphi 7's IDE is pretty good to start with; only look at 3rd party tools if you run into something you can't fix with what you've got:
Its error messages are informative and not excessively verbose.
The debugger is pretty good: you've got lots of options for inspecting variables, plus breakpoints, conditional breakpoints, data breakpoints, address breakpoints, and module load breakpoints. Its call-stack view is good, and it has some support for multi-threaded debugging.
Good step-by-step execution, step into, step over, run until return, etc.
3rd party tools help when you need to diagnose a problem on the client's computer (where you have no Delphi IDE). If you can get the problem to manifest on your own computer, you can get away with the IDE alone, no need for any addition, free or paid for.
Profiler: that's not a debugging tool. You use a profiler when you need to find bottlenecks in your application, or when you need to do some speed optimization.
Logging 3rd party frameworks: the good ones are not cheap, and you can do minimal logging without a tool (even ShowMessage works sometimes).
MadExcept and other tools that log exceptions: they usually require debugging information to be present in the EXE, and that's not a good idea because it makes the program slower AND easier to hack. Again, if you can get the exception on your machine, you don't need the logger.
I'm not saying 3rd party debugging aids are not useful: they are, but I'd wait until I can clearly see the benefit of any tool before I commit to it. And in my opinion there's no such thing as free software: Even the software you don't pay for requires time to learn how to use it and requires changes to your programs and workflow.
For the bigger work, there is AQTime.
A cheaper solution for selected code is running it through Free Pascal (with the "randomize local variables" option) and then through Valgrind. I've validated most of my streaming code (which is heavy on backwards-compatibility constructs) that way.
Another interesting switch is -CR, which verifies object method calls. It basically turns every

    TXXX(Something).CallSomething

into

    if Something is TXXX then
      TXXX(Something).CallSomething
    else
      raise some exception;

Especially in code with complex trees this can give some precious information.
Normal Pascal language checking (Range, I/O, Overflow, sTack, aka -Criot) can be useful too, and is also available in Delphi.
Some range check errors (often loop boundaries) that can be detected statically will result in compile-time errors in (beta) FPC 3.0.x and later.
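For example, a made-up loop like the one below is the kind of thing that static detection targets; compiled with range checking on (-Cr), newer FPC versions can reject it at compile time instead of failing at run time:

    program RangeDemo;

    var
      a: array[0..9] of Integer;
      i: Integer;
    begin
      for i := 0 to 10 do  // loop runs one past the array bound
        a[i] := i;         // statically detectable range check error
    end.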
You can try the "Process Stack Viewer" of my (open source) sampling profiler:
http://code.google.com/p/asmprofiler/wiki/ProcessStackViewer
(you need some debug info: a .map or .jdbg file)
You can watch the stack of all threads (including the raw stack, which has "false positives" but is useful when normal stack walking is not possible) and do some simple sampling profiling.
Note: my (older) instrumenting profiler, which does exact profiling, is on the same site.
Not sure why you would want to upgrade to debug a problem. Yes, the newer IDEs provide more features to help you debug something, but taking into consideration your previous question on how to debug your program when it hangs, I'd sooner suggest a good logging solution like CodeSite or SmartInspect. They provide far more flexibility and features than any home-grown solution based around the event log, and they do not require you to step through the code like the IDE does (which affects timings in multi-threaded problems).
Update
Sorry, I didn't realize that FOSS stands for Free and Open Source Software. CodeSite and SmartInspect are neither. For a free solution, you could have a look at the logging features within the Jedi family of tools.
Rad Studio XE includes a light version of CodeSite, and AQTime, which together are both compelling improvements.
You could do a lot with JCL Debug, MadExcept, and other profiling and logging tools, but CodeSite and AQTime are the two best for their respective tasks.
I need a simple code analyzer to see if I am forgetting to free objects and classes, or to see if I am releasing them too many times.
This is built into Delphi's memory manager (FastMM). Set ReportMemoryLeaksOnShutdown to True. You can also use the "full debug" version of the memory manager for more detailed checks and information.
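A minimal sketch showing the built-in report (the TStringList is deliberately leaked so there is something to report; ReportMemoryLeaksOnShutdown exists since Delphi 2006):

    program LeakDemo;

    {$APPTYPE CONSOLE}

    uses
      Classes;

    begin
      ReportMemoryLeaksOnShutdown := True;  // uses the built-in FastMM
      TStringList.Create;                   // deliberately never freed
      // On exit, FastMM reports an unexpected leak of a TStringList.
    end.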
The Pascal Analyzer from Peganza does a static analysis of your code.
AQTime from Automated QA is pretty much the de facto standard tool in the Delphi world for profiling for memory leaks (and performance, of course).
Another option is a static analysis tool; the only one I know of that supports Delphi is Understand from SciTools, though it's pretty expensive.
Simplest tool I've ever used for memory leak checking is MemCheck.
http://v.mahon.free.fr/pro/freeware/memcheck/