Detect memory intrusion - Delphi

There are software applications, such as ArtMoney, that edit the memory of other applications.
Is there a way to detect when some other application is editing the memory of my application?

The basic idea for protecting against casual memory modification is to encrypt the parts of memory you care about, and to keep redundant checks that guard against modification.
None of that will stop a determined hacker, but it's sufficient to keep the script kiddies out of your address space.

One method, used by many virus checkers, is to perform a checksum of your executable or memory and save it. When running, occasionally calculate a new checksum and compare with the original. Most programs don't intentionally modify their executables.
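As a minimal sketch in C of that self-checking idea (the region, handler, and checksum choice here are mine, not taken from any particular product):
#include <stddef.h>
#include <stdint.h>

/* rotate-and-xor checksum over an arbitrary memory region */
static uint32_t region_checksum(const void *start, size_t len) {
    const uint8_t *p = (const uint8_t *)start;
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = ((sum << 1) | (sum >> 31)) ^ p[i];
    return sum;
}

/* at startup:   saved = region_checksum(&g_state, sizeof g_state);
   periodically: if (region_checksum(&g_state, sizeof g_state) != saved)
                     react_to_tampering();   // hypothetical handler */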

The short answer is no, it's not possible in the general case. Even if you implement some of the suggestions that have been given, there's nothing stopping someone from patching the code that performs the checks.
I don't know the specifics of how ArtMoney works, but if it functions as a debugger you could try checking regularly whether DebugHook <> 0, and react appropriately if it is. (Just make sure to put that code in a {$IFNDEF DEBUG} block so it doesn't cause trouble for you!)
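By way of a rough analogue in plain C: the Win32 API offers IsDebuggerPresent, which you could poll the same way (this is not Delphi's DebugHook itself, and the exit reaction below is only an example):
#include <windows.h>

/* call this from a timer; exiting is only one possible reaction */
void check_for_debugger(void) {
    if (IsDebuggerPresent())
        ExitProcess(1);
}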
You might want to ask yourself why you want to prevent people from patching your memory, though. Unless there's a genuine security issue, you probably shouldn't even try. Remember that the computer your program runs on is the user's property, not yours, and if you interfere too much with the user's choices as to what to do with their property, your program is morally indistinguishable from malware.

I do not know how it works, but I think it can be done in one of three ways:
the ReadProcessMemory and WriteProcessMemory Windows APIs
using a debugger (check for DebugHook, but that's almost too easy, so it probably won't use that)
injecting a DLL so it can access all memory (because it is then in the same process)
The last one is the easiest to detect (check for an injected DLL or something like that). The first one is trickier, but I found some articles about it:
Memory breakpoints: http://www.codeproject.com/KB/security/AntiReverseEngineering.aspx?fid=1529949&fr=51&df=90&mpp=25&noise=3&sort=Position&view=Quick#BpMem
Hook "WriteProcessMemory" api: http://www.codeproject.com/KB/system/hooksys.aspx

I asked a similar question, and the conclusion was basically that you cannot stop this.
How can I increase memory security in Delphi


How to do lua_pushstring and avoiding an out of memory setjmp exception

Sometimes I want to use lua_pushstring in places where I have already allocated some resources which I would need to clean up in case of failure. However, as the documentation seems to imply, lua_push* functions can always end up throwing an out-of-memory exception. And that exception instantly quits my C scope, giving me no chance to clean up whatever I might have temporarily allocated and would have to free in case of error.
Example code to illustrate the situation:
void* blubb = malloc(20);
...some other things happening here...
lua_pushstring(L, "test"); //how to do this call safely so I can still take care of blubb?
...possibly more things going on here...
free(blubb);
Is there a way I can check beforehand if such an exception would happen and then avoid pushing and doing my own error triggering as soon as I safely cleaned up my own resources? Or can I somehow simply deactivate the setjmp, and then check some "magic variable" after doing the push to see if it actually worked or triggered an error?
I considered pcall'ing my own function, but even just pushing the function on the stack I want to call safely through pcall can possibly give me an out of memory, can't it?
To clear things up, I am specifically asking this for combined use with custom memory allocators that will prevent Lua from allocating too much memory, so assume this is not a case where the whole system has run out of memory.
Unless you have registered a user-defined allocator with Lua when you created your Lua state, getting an out-of-memory error means that your entire application has run out of memory. Recovery from this state is generally not possible, or at least not feasible in a lot of cases. It might be, depending on your application, but probably not.
In short, if it ever comes up, you've got bigger things to be concerned about ;)
The only kind of cleanup that should affect you is for things external to your application: some process-global memory you need to free or some state you need to set, or interprocess communication where you have a memory-mapped file you're talking through, or something like that.
Otherwise, it's probably better to just kill your process.
You could build Lua as a C++ library. When you do that, errors become actual exceptions, which you can either catch or just use RAII objects to handle.
If you're stuck with C... well, there's not much you can do.
I am specifically interested in a custom allocator that will report out-of-memory much earlier, to avoid Lua eating too much memory.
Then you should handle it another way. To signal an out-of-memory error is basically to say, "I want Lua to terminate right now."
The way to stop Lua from eating memory is to periodically check the Lua state's memory, and garbage collect it if it's using too much. And if that doesn't free up enough memory, then you should terminate the Lua state manually, but only when it is safe to do so.
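A sketch of that periodic check, using the standard lua_gc interface (the watchdog function and the threshold are mine):
#include "lua.h"

/* Returns 1 if memory use is acceptable, 0 if even a full collection
   could not get below the soft limit (caller should wind the state down). */
int memory_watchdog(lua_State *L, int soft_limit_kb) {
    if (lua_gc(L, LUA_GCCOUNT, 0) > soft_limit_kb) {
        lua_gc(L, LUA_GCCOLLECT, 0);   /* force a full garbage collection */
        if (lua_gc(L, LUA_GCCOUNT, 0) > soft_limit_kb)
            return 0;
    }
    return 1;
}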
lua_atpanic() may be one solution for you, depending on the kind of cleanup you need to do. It will never throw an error.
In your specific example you could also create blubb as a userdata. Then Lua would free it automatically once it is garbage collected (that is, after it is no longer reachable from the stack or anywhere else).
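A sketch of that idea, reworking the question's snippet (a plain full userdata; nothing here beyond the standard C API):
/* instead of: void *blubb = malloc(20); ... free(blubb); */
void *blubb = lua_newuserdata(L, 20);   /* buffer is now owned by Lua */
/* ... use blubb ... */
lua_pushstring(L, "test");  /* if this raises, the userdata below it simply
                               gets collected once it becomes unreachable;
                               nothing leaks and no manual free is needed */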
I have recently gotten into some more Lua sandboxing again, and now I think the answer I accepted previously is a bad idea. I have given this some more thought:
Why periodic checking is not enough
Periodically checking for large memory consumption and terminating Lua "only when it is safe to do so" seems like a bad idea once you consider that a single huge table can eat up a lot of your memory with one single VM instruction, which you will only find out about after it has happened. By then your program might already be dying from it, and you have much bigger problems, which you could have avoided entirely by stopping that allocation in time in the first place.
Since Lua has a nice out of memory exception already built-in, I would just like to use that one since this allows me to do the minimal required thing (preventing the script from allocating more stuff, while possibly allowing it to recover) without my C code breaking from it.
Therefore my current plan for Lua sandboxing with memory limit is:
Use a custom allocator that returns NULL once a limit is hit (see the sketch after this list)
Design all C functions to be able to handle this without memory leak or other breakage
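For the first point, a minimal lua_Alloc sketch that enforces such a hard limit by returning NULL (the MemLimit struct and limit value are mine; note that in Lua 5.2 and later, osize carries type information when ptr is NULL, so it must be treated as 0 for accounting):
#include <stdlib.h>
#include "lua.h"

typedef struct { size_t used; size_t limit; } MemLimit;

static void *limited_alloc(void *ud, void *ptr, size_t osize, size_t nsize) {
    MemLimit *m = ud;
    size_t old = (ptr != NULL) ? osize : 0;   /* osize is a size only for live blocks */
    if (nsize == 0) {                         /* free request */
        free(ptr);
        m->used -= old;
        return NULL;
    }
    if (m->used - old + nsize > m->limit)     /* would exceed the hard limit */
        return NULL;                          /* Lua turns this into its OOM error */
    void *p = realloc(ptr, nsize);
    if (p != NULL)
        m->used = m->used - old + nsize;
    return p;
}

/* usage: MemLimit m = { 0, 8 * 1024 * 1024 };
          lua_State *L = lua_newstate(limited_alloc, &m); */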
But how to design the C functions safely?
How to do that, given that lua_pushstring and the others can always longjmp away with an error without me knowing in advance whether that is going to happen? (This was originally my question.)
I think I found a working approach:
I added a facility to register pointers when I allocate them, and to unregister them after I am done with them. This means that if Lua suddenly longjmp's me out of my C code without giving me a chance to clean up, I have everything I need in a global list, so I can clean up that mess later, when I am back in control.
Is that ugly or what?
Yes, it is quite the hack. But, it will most likely work, and unlike 'periodic checking' it will actually allow me to have a true hard limit and avoid getting the application itself trouble because of an aggressive attack.
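For illustration, a minimal sketch of such a registration facility (fixed-size, not thread-safe, and all names are mine):
#include <stdlib.h>

#define MAX_PENDING 64
static void *pending[MAX_PENDING];
static int npending = 0;

/* malloc wrapper that remembers the pointer until it is untracked */
static void *track_malloc(size_t n) {
    void *p = malloc(n);
    if (p != NULL && npending < MAX_PENDING)
        pending[npending++] = p;
    return p;
}

/* call when you have freed (or handed off) the pointer yourself */
static void untrack(void *p) {
    for (int i = 0; i < npending; i++)
        if (pending[i] == p) { pending[i] = pending[--npending]; return; }
}

/* call once you are back in control after Lua longjmp'ed out of your code */
static void sweep_pending(void) {
    while (npending > 0)
        free(pending[--npending]);
}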

Hunting down EOutOfResources

Question:
Is there an easy way to get a list of the types of resources that leak in a running application? IOW, by attaching to an application?
I know MemProof can do it, but it slows things down so much that the application won't even last a minute. Most Task Manager-like tools can show the number of handles, but not their type.
It is not a problem if the check itself is catastrophic (halts the app process), since I can watch with a task manager to see if I'm getting close (or at least I hope so).
Any other insights on resource leak hunting (so not memory) are also welcome.
Background:
I've a Delphi 7/2006/2009 app (it compiles with all three) and after about a week it starts acting funny. However, that happens only at one of the places it runs; on several other systems it runs till the power goes out.
I've tried to put in some debug code to narrow the problem down, and found out that the exception is EOutOfResources on the save of a file (the file save can happen thousands of times a day).
I have tried to reason out memory leaks (with FastMM), but since the dataflow is quite high (60 MByte/s from gigabit industrial cameras), I can only rule out "creeping" memory leaks with FastMM, not quick flashes of memory leaks that exhaust memory around the time it happens. If something goes wrong, the app fills memory in under half a minute.
Main suspects are file handles that are somehow left open after some error, and TMetafiles (which are streamed to these files). Minor suspects are VST, popup menus and TFrames.
Updates:
Another possible tip: It ran fine for two years with D7, and now the problems are with Turbo Explorer (which I use for stable projects not converted to D2009 ).
Paul-Jan: Since it only happens once a week (and that can happen at night), information acquisition is slow, which is why I'm asking this question; I need to combine stuff for when I'm there Thursday. In short: no, I don't know 100% for sure. I intend to bring the entire Systemtools collection to see if I can find something (because then it will be running for days). There is also a chance that I will see open files (maybe I should try to find some mingw lsof and schedule it).
But the app sees very little GUI action (it is a machine vision inspection app), except for a screen refresh of +/- 15/s, which is TBitmap StretchDraw + TMetafile; yet I get this error when saving to disk (TFileStream), so handles are probably really exhausted. However, to the same stream a TMetafile is also SaveToStream'ed, something which later apps don't have anymore, and those can run for months.
------------------- UPDATE
I've searched and searched and searched, and managed to reproduce the problems in vitro two or three times. The problems happened when memory usage was +/- 256 MB (the systems have 2 GB), with 200 user objects, 500 GDI objects, and not one file more open than expected.
This is not really exceptional. I do notice that I leak small numbers of handles, probably due to reparenting frames (something in the VCL seems to leak HPalettes), but I suspect the core cause is a different problem. I reuse TMetafile, and .Clear it in between. I think clearing the metafile doesn't really (always?) shrink the resource, so eventually each metafile in the entire pool of TMetafiles ends up at maximum size, and with 20-40+ TMetafiles (which can be several 100 KB each) this will hit the desktop heap limit.
That's the theory, but I'll try to verify it by setting the desktop heap limit to 10 MB at the customer's site. It will be several weeks before I have confirmation whether this changes anything. The theory would also explain why this machine is special (it's possible that this machine naturally has slightly larger metafiles on average). Occasionally freeing and recreating a TMetafile in the pool might also help.
Luckily all these problems (both tmetafile and reparenting) have already been designed out in newer generations of the apps.
Due to the special circumstances (and the fact that I have very limited test windows), this is going to take a while, but I decided to accept the desktop heap answer for now (though the GDILeaks stuff was also somewhat useful).
Another thing the audit revealed was GDI-type usage in a thread (though it was only saving TMetafiles, which weren't used or connected otherwise, to streams).
------------- Update 2.
Increasing the desktop heap limit only seemed to slightly increase the time until the problem occurred.
Unfortunately, I won't be able to follow up on this further, since the machines were updated to a newer version of the framework that doesn't have the problem.
In summary I can only state what the three core modifications were going from the old to the new framework:
I no longer change screens by reparenting frames. I now work with forms that I hide and show. I changed this since I also had very rare crashes or exceptions (that could be clicked away) due to the reparenting. The crashes all happened while operating the GUI though, not spontaneously like the main problem.
The routine where the crash happened dealt with TMetafile. TMetafile has been designed out and replaced by a simpler home-made format (basically arrays of OpenGL vertices).
Drawing no longer happens with a TBitmap with a TMetafile overlay stretch-drawn over it, but using OpenGL.
Of course it could be something else too, that got changed in the rewrite of the above parts, fixing some very nasty detail bug. It would have to be an extremely bad one, since I analysed the above system as much as I could.
Updated Nov 2012 after some private mail discussion: in retrospect, the next step would have been adding a counter to the metafile objects and simply reinstantiating them every x * 1000 uses or so, and seeing if that changes anything. If you have similar problems, try to see if you can somewhat regularly destroy and reinitialize long-living, dynamically allocated resources.
There is a slim chance that the error is misleading. The VCL naively reports EOutOfResources if it is unable to obtain a DC for a window (see TWinControl.GetDeviceContext in Controls.pas).
I say "naively" because there are other reasons why GetDC() might return a NULL handle and the VCL should report the OS error, not assume an out of resources condition (there is a Windows version check required for this to be reliably possible, but the VCL could and should take of that too).
I had a situation where I was getting the EOutOfResources error as the result of a window handle becoming invalid. Once I'd discovered the true problem, finding the cause and fixing it was simple, but I wasted many, many hours trying to find a non-existent resource leak.
If possible I would examine the stack trace leading to this exception - if it is coming from TWinControl.GetDeviceContext then the problem may not be what you think (it's impossible to say what it might be of course, but eliminating the impossible is always the first step toward discovering the solution, no matter how improbable).
If they are GDI handle leaks you can have a look at MSDN Magazine January 2003 which uses the tool GDILeaks. Other tools are GDIObj or GDIView. Also see here.
Another source of EOutOfResources could be that the Desktop Heap is full. I've had that issue on busy terminal servers with large screens.
If there are lots of file handles you are leaking you could check out Process Explorer and have a look at the open file handles of your process and see any out of the ordinary. Or use WinDbg with the !htrace command.
I've run into this problem before. From what I've been able to tell, Delphi may throw an EOutOfResources any time the Windows API returns ERROR_NOT_ENOUGH_MEMORY, and (as the other answers here discuss) Windows may return ERROR_NOT_ENOUGH_MEMORY for a variety of conditions.
In my case, EOutOfResources was being caused by a TBitmap - in particular, TBitmap's call to CreateCompatibleBitmap, which it uses with its default PixelFormat of pfDevice. Apparently Windows may enforce fairly strict systemwide limits on the memory available for device-dependent bitmaps (see, e.g., this discussion), even if your system otherwise has plenty of memory and plenty of GDI resources. (These systemwide limits are apparently because Windows may allocate device-dependent bitmaps in the video card's memory.)
The solution is simply to use device-independent bitmaps (DIBs) instead (although these may not perform quite as well). To do this in Delphi, set TBitmap.PixelFormat to anything other than pfDevice. This KB article describes how to pick the optimal DIB format for a device, although I generally just use pf32Bit instead of trying to determine the optimal format for each of the monitors the application is displayed on.
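For reference, here is the underlying Win32 distinction sketched in C: CreateDIBSection allocates a device-independent bitmap from ordinary memory, unlike the device-dependent CreateCompatibleBitmap (roughly what a non-pfDevice PixelFormat buys you; the helper function is mine):
#include <windows.h>

/* allocate a 32-bit top-down device-independent bitmap */
HBITMAP make_dib(HDC hdc, int width, int height) {
    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof bmi);
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;   /* negative height = top-down rows */
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;      /* analogous to pf32Bit */
    bmi.bmiHeader.biCompression = BI_RGB;
    void *bits = NULL;
    return CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
}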
Most of the times I saw EOutOfResources, it was some sort of handle leak.
Did you try something like MadExcept?
--jeroen
"I've tried to put in some debug code to narrow the problem down. and found out that the exception is EOutofResources on a save of a file. (the file save can happen thousands of times a day)."
I'm shooting in the dark here, but could it be that you're using the Windows API (GetTempFileName) to create a temp file, and you're blowing out some file system indexes or forgetting to close a file handle?
Either way, I do agree with your supposition that it's a file handle problem. That seems to be the most likely thing given your symptoms and diagnosis.
Also try checking the handle count for the application with Process Explorer from Sysinternals. Handle leaks can be very dangerous, and they build up slowly over time.
I am currently having this problem, in software that is clearly not leaking any handles in my own code, so if there are leaks they could be happening in a component's source code or the VCL source code itself.
The handle count and GDI and user object counts are not increasing, nor is anything being created. Deltics' answer shows corner cases where the message is kind of a red herring, and Allen suggests that even a file write can cause this error.
So far, the best strategy I have found for hunting them down is to use either JCL JclDebug stack tracebacks, or the exception report save features in MadExcept, to generate the context information to find out what is actually failing.
Secondly, AQTime contains many tools to help you, including a resource profiler that can keep the links between where the code that created the resources is and how it was called, along with counts of the total numbers of handles. It can grab results mid-run, so it is not limited to detecting unfreed resources after you exit. So, run AQTime, do a results capture mid-run, wait several hours, and capture again, and you should have two points in time to compare handle counts. Just in case it is the obvious thing. But as Deltics wisely points out, this exception class is raised in cases where it probably shouldn't have been.
I spent all of today chasing this issue down. I found plenty of helpful resources pointing me in the direction of GDI, which fits, given that I'm using GDI+ to produce high-speed animations directly onto the main form via timer/invalidate/OnPaint (with the animation rendered in a separate thread). I also have a panel on this form with some dynamically created controls for the user to make changes to the animation.
It was extremely random and spontaneous. It wouldn't break anywhere in my code, and when the error dialog appeared, the animation on the main form would continue to work. At one point, two of these errors popped up at the same time (as opposed to sequentially).
I carefully observed my code and made sure I wasn't leaking any handles related to GDI. In fact, my entire application tends to keep fewer than 300 handles, according to Task Manager. Regardless, this error would randomly pop up, and it would always correspond with the simplest UI-related action, such as just moving the mouse over a standard VCL control.
Solution
I believe I have solved it by changing the logic to perform the drawing within a custom control, rather than directly on the main form as I had been doing before. I think that because I was rapidly drawing on the same form canvas that other controls shared, they somehow interfered. Now that the animation has its own dedicated canvas to draw on, it seems to be perfectly fixed.
That is with about 1 hour of vigorous testing at least.
[Fingers crossed]

How to Test for Memory Leaks?

We have an application with hundreds of possible user actions, and are thinking about how to enhance memory leak testing.
Currently, here's how it happens: when manually testing the software, if the application appears to consume too much memory, we use a memory tool, find the cause and fix it. It's a rather slow and inefficient process: the problems are discovered late, and it relies on the good will of one developer.
How can we improve that?
Internally check that some actions (like "close file") do recover some memory and log it?
Assert on memory state inside our unit tests (but it seems this would be a tedious task)?
Manually check it from time to time?
Include that check each time a new user story is implemented?
Which language?
I'd use a tool such as Valgrind, try to fully exercise the program and see what it reports.
First line of defense:
a checklist of common memory-allocation-related errors for developers
coding guidelines
Second line of defense:
code reviews
static code analysis (as a part of the build process)
memory profiling tools
If you work with an unmanaged language (like C/C++) you can efficiently discover most of the memory leaks by hijacking the memory management functions. For example, you can track all memory allocations/deallocations.
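A minimal sketch of that hijacking idea in C, using macros to redirect the call sites you compile yourself (all names are mine, and this cannot see allocations made inside pre-compiled libraries):
#include <stdio.h>
#include <stdlib.h>

static long live_allocs = 0;   /* net count of outstanding allocations */

void *my_malloc(size_t n, const char *file, int line) {
    void *p = malloc(n);
    if (p != NULL) {
        live_allocs++;
        fprintf(stderr, "%s:%d malloc(%zu) = %p\n", file, line, n, p);
    }
    return p;
}

void my_free(void *p, const char *file, int line) {
    if (p != NULL) {
        live_allocs--;
        fprintf(stderr, "%s:%d free(%p)\n", file, line, p);
    }
    free(p);
}

/* from here on, tracked code picks up the wrappers automatically */
#define malloc(n) my_malloc((n), __FILE__, __LINE__)
#define free(p)   my_free((p), __FILE__, __LINE__)

/* a unit test could assert that live_allocs == 0 after an action completes */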
It seems to me that the core of the problem is not so much finding memory leaks as knowing when to test for them. You say you have lots of user actions, but you don't say what sequences of user actions are meaningful. If you can generate meaningful sequences at random, I'd argue hard for random testing. On random tests you would measure
Code coverage (with gcov or valgrind)
Memory usage (with valgrind)
Coverage of the user actions themselves
By "coverage of user actions" I mean statements like the following:
For every pair of actions A and B, if there is a meaningful sequence of actions in which A is immediately followed by B, then we have tested such a sequence.
If that's not true, then you can ask for what fraction of pairs A and B it is true.
If you have the CPU cycles to afford it, you would probably also benefit from running valgrind or another memory-checking tool either before every commit to your source-code repository or during a nightly build.
Automate!
In my company we have programmed an endless action path for our application. The Java garbage collector should clean up all unused maps, lists and the like, so we let the application run on the endless action path and watch whether its memory usage keeps growing.
To check which fields are not deleted, you can use JProfiler for Java.
Replace new and delete with your custom versions and log every act of allocation/deallocation.
Speaking generally (not about testing, but rather about fighting the issue at its origin), smart pointers help to avoid this problem. Fortunately, the C++11 standard provides convenient new smart pointer classes (shared_ptr, unique_ptr).

How do I go about diagnosing memory corruption errors occurring in a COM-DLL after porting it from Delphi 2007 to Delphi 2009?

I have just ported several of our home-made Outlook COM add-ins from Delphi 2007 to Delphi 2009 and am now experiencing some really weird errors (before you ask: none of which appear to have any obvious relationship to string handling), for example modal dialogs that hang Outlook when one tries to invoke them a second time (the first time around everything appears to be fine), but only when they're invoked from one specific event handler and not when doing the same thing somewhere else. When I trace the error to a specific line of code and comment out that line or replace it with different code to the same effect (e.g. by copying code that would otherwise be called via a function directly to the calling site), the error will appear to go away - typically only to recur a couple of (equally inconspicuous-looking) statements later.
When running this inside the Delphi debugger I can see that the freezes are often preceded by Access Violations in GetMem.inc. At least all of these issues are 100% reproducible...
Needless to say we had none of these issues when compiling these addins in Delphi 2007.
Now, I'm quite at a loss. I know I have just been lucky but even though I consider myself a fairly experienced programmer (though mostly in niche areas) I never really had to deal with this class of error before. As the title of this question says, I don't even really know where to start. I can step through the code as much as I like but the endless assembler statements mean nothing to me and neither am I proficient in effectively using the CPU view.
Furthermore, I don't even know for sure yet whether this is an issue with my own code to begin with (I actually tend to doubt it in this case). We are making massive use of a number of third-party libraries (e.g. JCL, ADX, Redemption). ADX in particular still labels its Delphi 2009 support "beta".
I also tried using FastMM's FullDebugMode, and indeed I did uncover a number of errors in ADX that way (e.g. blocks that were modified after having been freed), but all of these also occur when I compile with Delphi 2007, so it doesn't necessarily follow that these are ultimately the cause of the observed regression.
So, how do I deal with this? - or better yet: Where can I find some good resources on learning how to deal with this? e.g. tutorials on using the CPU view or effectively interpreting and acting upon the reports put out by FastMM? Are these the correct tools at all? Where else should I look?
Addendum:
What types of code should I be suspicious of in this context? What kind of code even has the potential to wreak such havoc in memory? The only places I can think of where my code performs anything remotely approaching explicit memory manipulation are where it reserves some buffer space in preparation for a WinAPI call. Also keep in mind that all of my code is identical between the Delphi 2007 and Delphi 2009 versions, and the Delphi 2007 version exhibits no such problems.
Update:
With some probability the issue that prompted me to post this question has now been solved. See my own answer below.
The best tool for getting to a solution is probably memory breakpoints.
Debugging memory corruption is painful, so try to make your life as simple as possible first: find an exact, guaranteed-reproducible set of steps that work every time. If necessary, mock up the Outlook host so that you don't need to rely on Outlook timing issues or address space layout issues etc.
It's imperative that you get a reliably reproducible set of steps that results in an AV or other error at a predictable address.
What you then do is restart the process, create a memory breakpoint set for whatever referred to that address, and get familiar with the lifecycle of that chunk of memory. Minimizing and rationalizing your reproduction steps helps here. It may help to add other breakpoints and only enable the memory breakpoint later in the application; or use the logging features of D2009 breakpoints to log memory values / call stacks etc., rather than actually breaking into the debuggee.
Not exactly an answer to the question which was more general, but very probably the solution to the specific problem that prompted it:
I am 95% sure to have identified the problem now! :)
Here's what I did:
I enabled RangeChecking and OverflowChecking in the compiler
I tracked down and fixed all problems that caused ERangeError or EIntOverflow exceptions
(there was one of each)
I ran the program again with FastMM and FullDebugMode enabled
I was finally able to identify the cause of the problem in all cases to be a call to the JCL function GetWindowCaption
It seems that GetWindowCaption has not yet been checked for Unicode compatibility: it was using the value returned by the API function GetWindowTextLength (which returns a number of characters) as input for ReallocMem (which expects a number of bytes) to allocate the buffer for GetWindowText (which in Delphi 2009 fills a buffer of WideChars). Boom! The function was allocating too little memory for the buffer, and GetWindowText simply overwrote the following memory, thus corrupting the block footer.
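For illustration, the same bug and its fix sketched against the raw Win32 API in C (the point being characters versus bytes):
#include <windows.h>
#include <stdlib.h>

/* GetWindowTextLengthW returns a count of CHARACTERS; the buffer must be
   sized in BYTES. Allocating only (len + 1) bytes reproduces the bug above. */
WCHAR *get_window_caption(HWND wnd) {
    int len = GetWindowTextLengthW(wnd);
    WCHAR *buf = malloc(((size_t)len + 1) * sizeof(WCHAR));   /* bytes, not chars */
    if (buf != NULL)
        GetWindowTextW(wnd, buf, len + 1);   /* capacity is in characters */
    return buf;   /* caller frees */
}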
I have now filed this in the JCL bug tracker as item #4648
The bottom line I took out of this is: Always be sure to fix all reported errors! Including (seemingly) non-critical ones like range and overflow errors. If nothing else, it will make debugging that much more predictable.
The fact that you catch double-free bugs in D2007, even though it does appear to work fine in that version, means that you NEED to fix those: you are merely lucky that the D2007 version does not need to recycle the memory as aggressively as the D2009 version, so the bugs do not show up, thanks to "shadow persistence" in memory.
I would use FastMM's FullDebugMode to find the bad code and fix as much of it as possible, then follow Barry's advice to troubleshoot memory usage.
For how to use the features of the Integrated Debugger, and how to log info from non breaking breakpoints, you may want to look at this CodeRage 3 session: Delphi Debugging for Dummies
I'd look in the direction of the full pageheap support built into the system.
Look in this post for how to configure it. Provided your memory usage is not too extensive, this is the easiest way to find the problem.
It gets tricky when memory consumption is heavier, but like I said, try full pageheap first.

How to log mallocs

This is a bit hypothetical and grossly simplified but...
Assume a program that will be calling functions written by third parties. These parties can be assumed to be non-hostile but can't be assumed to be "competent". Each function will take some arguments, have side effects and return a value. They have no state while they are not running.
The objective is to ensure they can't cause memory leaks by logging all mallocs (and the like) and then freeing everything after the function exits.
Is this possible? Is this practical?
p.s. The important part to me is ensuring that no allocations persist so ways to remove memory leaks without doing that are not useful to me.
You don't specify the operating system or environment, this answer assumes Linux, glibc, and C.
You can set __malloc_hook, __free_hook, and __realloc_hook to point to functions which will be called from malloc(), realloc(), and free() respectively. There is a __malloc_hook manpage showing the prototypes. You can track allocations in these hooks, then return to let glibc handle the memory allocation/deallocation.
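A sketch along the lines of the example in the glibc manual (note that these hooks have long been deprecated and were removed in glibc 2.34, so this only applies to older systems; it is also not thread-safe):
#include <stdio.h>
#include <malloc.h>

static void *(*old_malloc_hook)(size_t, const void *);

static void *my_malloc_hook(size_t size, const void *caller) {
    void *result;
    __malloc_hook = old_malloc_hook;   /* unhook to avoid recursing */
    result = malloc(size);
    fprintf(stderr, "malloc(%zu) = %p (called from %p)\n", size, result, caller);
    old_malloc_hook = __malloc_hook;   /* the hook may have moved; save it */
    __malloc_hook = my_malloc_hook;    /* re-arm our hook */
    return result;
}

static void install_malloc_hook(void) {
    old_malloc_hook = __malloc_hook;
    __malloc_hook = my_malloc_hook;
}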
It sounds like you want to free any live allocations when the third-party function returns. There are ways to have gcc automatically insert calls at every function entrance and exit using -finstrument-functions, but I think that would be inelegant for what you are trying to do. Can you have your own code call a function in your memory-tracking library after calling one of these third-party functions? You could then check if there are any allocations which the third-party function did not already free.
First, you have to provide the entry points for malloc() and free() and friends. Because this code is compiled already (right?), you can't depend on #define to redirect.
Then you can implement these in the obvious way, and log that they came from a certain module by linking those routines to those modules.
The fastest way involves no logging at all. If the amount of memory they use is bounded, why not pre-allocate all the "heap" they'll ever need and write an allocator out of that? Then when it's done, free the entire "heap" and you're done! You could extend this idea to multiple heaps if it's more complex than that.
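A minimal sketch of that pre-allocated "heap" idea, as a bump allocator (all names are mine):
#include <stdlib.h>

typedef struct { char *base; size_t cap; size_t used; } Arena;

int arena_init(Arena *a, size_t cap) {
    a->base = malloc(cap);
    a->cap = cap;
    a->used = 0;
    return a->base != NULL;
}

/* hand out aligned slices of the big block; NULL when the arena is full */
void *arena_alloc(Arena *a, size_t n) {
    n = (n + 15) & ~(size_t)15;        /* keep 16-byte alignment */
    if (a->used + n > a->cap)
        return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* "free" every allocation at once when the third-party function returns */
void arena_reset(Arena *a) { a->used = 0; }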
If you really do need to "log" and not make your own allocator, here are some ideas. One: use a hash table with pointers and internal chaining. Another: allocate extra space in front of every block and put your own structure there containing, say, an index into your "log table"; then keep a free list of log table entries (as a stack, so getting a free one or putting one back is O(1)). This takes more memory but should be fast.
Is it practical? I think it is, as long as the speed hit is acceptable.
You could run the third party functions in a separate process and close the process when you are done using the library.
A better solution than attempting to log mallocs might be to sandbox the functions when you call them: give them access to a fixed segment of memory and then free that segment when the function is done running.
Unconfined, incompetent memory usage can be just as damaging as malicious code.
Can't you just force them to allocate all their memory on the stack? That way it would be guaranteed to be freed after the function exits.
In the past I wrote a software library in C that had a memory management subsystem with the ability to log allocations and frees, and to manually match each allocation to its free. This was of some use when attempting to find memory leaks, but it was difficult and time-consuming: the number of log entries was overwhelming, and it took an extensive amount of time to understand them.
That being said, if your third-party library makes extensive allocations, it's more than likely impractical to track them via logging. If you're running in a Windows environment, I would suggest using a tool such as Purify[1] or BoundsChecker[2] that should be able to detect leaks in your third-party libraries. The investment in the tool should pay for itself in time saved.
[1]: http://www-01.ibm.com/software/awdtools/purify/ Purify
[2]: http://www.compuware.com/products/devpartner/visualc.htm BoundsChecker
Since you're worried about memory leaks and talking about malloc/free, I assume you're in C. I'm also assuming based on your question that you do not have access to the source code of the third party library.
The only thing I can think of is to examine memory consumption of your app before & after the call, log error messages if they're different and convince the third party vendor to fix any leaks you find.
If you have money to spare, then consider using Purify to track issues. It works wonders, and does not require source code or recompilation. There are also other debugging malloc libraries available that are cheaper. Electric Fence is one name I recall. That said, the debugging hooks mentioned by Denton Gentry seem interesting too.
If you're too poor for Purify, try Valgrind. It is a lot better than it was 6 years ago, and a lot easier to dive into than Purify.
Microsoft Windows provides (use SUA if you need POSIX) quite possibly the most advanced heap infrastructure (the heap plus the other APIs known to use the heap) of any shipping OS today.
The __malloc() debug hooks and the associated CRT debug interfaces are nice for cases where you have the source code to the tests; however, they can often miss allocations made by standard libraries or other code which is linked in. This is expected, as they are the Visual Studio heap debugging infrastructure.
gflags is a very comprehensive and detailed set of debugging capabilities which has been included with Windows for many years, with advanced functionality for both source and binary-only use cases (as it is the OS heap debugging infrastructure).
It can log full stack traces (repaginating symbolic information in a post-process operation) of all heap users, for all heap-modifying entry points, serially if needed. Also, it may modify the heap with pathological cases which align the allocation of data such that the page protection offered by the VM system is optimally assigned (i.e. allocate your requested heap block at the end of a page, so even a single-byte overflow is detected at the time of the overflow).
umdh is a tool which can help assess the status at various checkpoints; however, the data is continually accumulated during the execution of the target, so it is not a simple checkpointing debug stop in the traditional sense. Also, a warning: last I checked, at least, the total size of the circular buffer which stores the stack information for each request is somewhat small (64k entries (entries+stack)), so you may need to dump rapidly for heavy heap users. There are other ways to access this data, but umdh is fairly simple.
NOTE: there are two modes:
MODE 1, umdh {-p:Process-id|-pn:ProcessName} [-f:Filename] [-g]
MODE 2, umdh [-d] {File1} [File2] [-f:Filename]
I do not know what insanity gripped the developer who chose to alternate between -p:foo argument specifiers and naked ordering of arguments, but it can get a little confusing.
The debugging SDK works with a number of other tools; memsnap is a tool which apparently focuses on memory leaks and such, but I have not used it, so your mileage may vary.
Execute gflags with no arguments for the UI mode; +args and /args are different "modes" of use as well.
On Linux I've successfully used mtrace(3) to log allocations and frees. Its usage is as simple as:
Modify your program to call mtrace() when you need to begin tracing (e.g. at the top of main()),
Set the environment variable MALLOC_TRACE to the file path where the trace should be saved, and run the program.
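For example (a minimal sketch; mtrace() is declared in <mcheck.h>):
#include <mcheck.h>
#include <stdlib.h>

int main(void) {
    mtrace();               /* start logging to the file named by $MALLOC_TRACE */
    char *p = malloc(16);   /* shows up in the trace */
    free(p);
    return 0;               /* run as: MALLOC_TRACE=out.txt ./a.out */
}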
After that, the output file will contain something like this (an excerpt from the middle, to show a failed allocation):
# /usr/lib/tls/libnvidia-tls.so.390.116:[0xf44b795c] + 0x99e5e20 0x49
# /opt/gcc-7/lib/libstdc++.so.6:(_ZdlPv+0x18)[0xf6a80f78] - 0x99beba0
# /usr/lib/tls/libnvidia-tls.so.390.116:[0xf44b795c] + 0x9a23ec0 0x10
# /opt/gcc-7/lib/libstdc++.so.6:(_ZdlPv+0x18)[0xf6a80f78] - 0x9a23ec0
# /opt/Xorg/lib/video-libs/libGL.so.1:[0xf668ee49] + 0x99c67c0 0x8
# /opt/Xorg/lib/video-libs/libGL.so.1:[0xf668f14f] - 0x99c67c0
# /opt/Xorg/lib/video-libs/libGL.so.1:[0xf668ee49] + (nil) 0x30000000
# /lib/libc.so.6:[0xf677f8eb] + 0x99c21f0 0x158
# /lib/libc.so.6:(_IO_file_doallocate+0x91)[0xf677ee61] + 0xbfb00480 0x400
# /lib/libc.so.6:(_IO_setb+0x59)[0xf678d7f9] - 0xbfb00480
