Use of Memory Profiling

What's the meaning of Memory Profiling?
Does it give statistics about memory, like how much memory is utilized?
And are there different kinds of memory profiling?

The problem is, you may be doing way too much new-ing which, even in a language with a garbage collector, may unnecessarily dominate your execution time.
You may also have a memory leak, meaning that the amount of dynamic memory you're not returning to the pool grows steadily over time.
If your app runs for a long time, that's equally bad.
I use the random-pausing method for performance diagnosis, but that is of no value for finding memory leaks.
That's what Memory Profiling should help with.
Here's how I've found memory leaks in the past, using MFC.
In a debug build, when I shut down the app, it prints a list of all the non-collected memory blocks, along with their class type.
Then I look to see where those blocks are created, and try to figure out why they weren't deleted or collected.
It would be more useful if I could capture a stack trace on each block, so I could tell which new statement made it, and the stack could tell me why.
The point is, I could allocate 100 blocks of class Foo, and delete 99 of them.
The one that I don't delete is the problem, so it would be useful to know more about where it came from.
I don't know if memory profilers can do this or not.
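For what it's worth, on Windows the CRT debug heap can get part of the way there: each allocation gets a sequence number, and once one run's leak dump reports the number of the block that was never freed, you can break on exactly that allocation in the next run and read the call stack in the debugger. A rough sketch is below; the allocation number 1234 is just a placeholder, and MFC's DEBUG_NEW already records the file and line for you.

#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    // Report all blocks still allocated when the program exits (debug build only).
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);

    // After a run whose leak dump printed "{1234}" next to the unfreed block,
    // break on that allocation number so the debugger shows the stack of the
    // new/malloc that created it. 1234 is a placeholder.
    _CrtSetBreakAlloc(1234);

    // ... run the application ...
    return 0;
}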

Related

Instruments and heap growth, when is growth really a leak?

I'm using Instruments on a device to try to figure out if I have any memory leaking or abandoned. Specifically, I am using Leaks and Allocations. While Instruments doesn't point out any leaks, that doesn't mean I don't have memory issues. I've been working on this for weeks, and I don't seem to be any closer to figuring out what issues I have (ugh).
I am testing a particular action by taking a heapshot after the action and repeating. After the first few "settling" generations, I can see that the growth and persistent count all start out at a certain number (several KB). After many repeated iterations (say 10-20), some (not all) slowly end up draining to 0. It takes a while, but it does happen. The generations where persistent memory remains never actually show me anything that I find helpful, as the stack traces show only system libraries.
So my questions are:
What does this type of behavior indicate? Do I have memory issues? Is there some type of lazy release of memory somewhere?
In a sea of iterations that show persistent memory, what does one zero heap growth iteration mean?
If the stack trace for a particular generation points only to system libraries, does this mean the heap growth for that generation is valid or that there is a bug? Or could it still mean that there is something holding onto the memory on my end?
What does it mean when the stack trace shows your library and method, but it is greyed out like the system code and has a little house icon, vs. a line with your library and method that is in black and has a little person icon?
If I have something like a retain cycle - wouldn't the persistent growth be consistent?
Any answers or insights would be extremely helpful!
I'll take a stab at your questions:
What does this type of behavior indicate? Do I have memory issues? Is there some type of lazy release of memory somewhere?
Since you can't know how the system frameworks manage their private memory needs, you must assume that yes, there could be lazy/deferred release of memory happening any time you call into the system frameworks, which in most apps is "all the time". Beyond not being able to rule it out, I can say with some certainty that there definitely are long-lived allocations triggered by seemingly-innocuous system framework usage. (See the discussion of UIWebView's long-lived memory use in this answer for an example.)
In a sea of iterations that show persistent memory, what does one zero heap growth iteration mean?
Hard to say. A good first-order guess might be that the heap growth associated with the iteration was somehow exactly offset by a lazy/deferred release of the memory allocated for a previous iteration.
If the stack trace for a particular generation points only to system libraries, does this mean the heap growth for that generation is valid or that there is a bug? Or could it still mean that there is something holding onto the memory on my end?
If Instruments shows heap growth, then that heap growth almost certainly exists. Whether that heap growth is something you have direct control over depends. If you make no calls into system frameworks (not likely), then it's definitely your fault. Once you make a call into the system frameworks, you have to accept the possibility that the framework might allocate memory that stays allocated after your call returns.
What does it mean when the stack trace shows your library and method, but it is greyed out like the system code and has a little house icon, vs. a line with your library and method that is in black and has a little person icon?
Lines being greyed out indicates that Instruments doesn't have debug symbols for that line. That's all. It doesn't indicate anything specific with regard to memory use.
If I have something like a retain cycle - wouldn't the persistent growth be consistent?
If each iteration created a new object graph with cyclic retains, then yes, you would expect that each iteration would cause heap growth by at least the size of that object graph. That said, small object graphs can easily be lost in the "noise." If you have suspicions, one way is to have objects of a "suspect" class perform a huge allocation that will make them stand out from the "noise." For instance, make your object malloc a megabyte (or more) for every instance (and, obviously, free it when the instance is deallocated.) This can help problem areas stick out where they might not have originally.
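To make the trick concrete, here is a minimal sketch of such a "ballast" object, written in C++ for brevity (the original context is Objective-C, where the malloc would go in init and the free in dealloc); the class name and the 1 MB figure are purely illustrative:

#include <cstdlib>

// Hypothetical "suspect" class: give every instance 1 MB of ballast so
// lingering instances stand out from the noise in the allocation profiler.
class SuspectObject {
public:
    SuspectObject() : ballast_(std::malloc(1024 * 1024)) {}
    ~SuspectObject() { std::free(ballast_); }  // freed only when the instance really goes away
    SuspectObject(const SuspectObject&) = delete;             // avoid double-freeing the ballast
    SuspectObject& operator=(const SuspectObject&) = delete;
private:
    void* ballast_;
};

If instances of the suspect class are being retained by a cycle, each iteration now adds an obvious megabyte-sized step to the heap growth instead of a few stray bytes.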

iOS Memory malady madness

I recently ported a project over to ARC because I was having trouble with crashes and with actually determining their cause, whether it was leaks, retain cycles, etc. Now that I have ported it over, I have not done extensive testing to see whether it still crashes, because I have not managed to get past the Activity Monitor instrument giving me the heebie-jeebies when it shows my application doing this (Activity Monitor profiler screenshot),
whereas in the Allocations instrument it looks something like this (Allocations screenshot).
That real memory usage is not even the worst of it; at one point it shot up to around 90-odd MB. I am unsure how to proceed, as I am not 100 percent sure what to do with the information given here, except to assume that I might be doing something very wrong. I have also run the Leaks instrument; I have a few leaks, but they are minimal, all measured in bytes.
Does anyone have an explanation, or can you at the very least clarify what I am possibly looking at? What's the difference between real memory usage, live bytes, and overall bytes? Also, these results were obtained by performing exactly the same actions once and capturing the readings at the end.
I have been trying to reduce the real memory usage because, before the ARC conversion, I was getting memory warnings and silent crashes frequently. I have not run into these again after converting, but I have not done any prolonged testing, as I cannot conceive of even trying while the real memory usage looks like that. Which actually looks a lot higher than before ARC... although the live bytes do look lower post-ARC... Madness!
Something that confused me for a while is that ARC - wonderful as it is - does not necessarily avoid the need for @autoreleasepool.
https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/MemoryMgmt/Articles/mmAutoreleasePools.html
I was seeing very large memory usage in an app until someone suggested:
@autoreleasepool {
    // lots of allocating of objects returned from methods and then discarded
} // the closing brace of the @autoreleasepool block causes their memory to be recovered here
Maybe that will help you.
A good explanation of the meaning of the various columns in the profiler is at Instruments ObjectAlloc: Explanation of Live Bytes & Overall Bytes

Instruments Heapshots - What does this data mean?

I'm sorry about the title. I know it is rather poor but I wasn't sure how to word it.
I have read conflicting statements on how the Leaks instrument works. I am trying to figure out if I have any leaks left that I need to deal with, but I am very new to memory management with iOS.
My question is essentially: Does the data in this screenshot look good or bad? I know it isn't enough information to pinpoint specific problems, but I am just confused as to whether I have a problem or not.
I have read that "Heap Growth" and "Persistent" are both cumulative and are not released. Is this correct? The numbers in Heap Growth and Persistent both start large and get smaller each time. Does this mean things are eventually cleaning up, or does it mean that my memory usage is constantly expanding?
Bad. The heap growth is the amount by which your app's heap has grown since the last time you marked the heap, meaning objects are being allocated but never released. You'll have to expand the heapshots, see which objects are being retained, and work out why they are not being released. Ideally, each time you mark the heap, the growth would be 0.
The blue bars in the Leaks section also show that you have something leaking memory.

Windows Task Manager shows process memory keeps growing even though there are no memory leaks

My application keeps consuming more and more memory, as seen in the Windows Task Manager, and eventually crashes due to OutOfMemory. However, when I check for leaks using MemoryValidator (from www.softwareverify.com), no leaks are detected. Why is this happening?
Just because there is a growing amount of memory usage doesn't mean it is necessarily 'leaking'. You could simply be accumulating a large number of live objects and/or very large ones (containing lots and lots of data).
If you can provide more information about what language(s) you are using and what the application is doing I can perhaps help out with some more specific information!
UPDATE AS PER COMMENTS
Well, you'll just want to make sure the garbage collection is happening correctly. I'd perhaps suggest the libgc library to help with that.
http://developers.sun.com/solaris/articles/libgc.html
The only other thing I could think of as being the cause of this is that you are maintaining references to the objects somewhere unintentionally so they are just piling up.
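As a purely hypothetical illustration of how memory can grow without anything being reported as a leak: every buffer below stays reachable through the cache, so a leak detector sees nothing wrong, yet the process footprint climbs for as long as new keys keep arriving and nothing is ever evicted.

#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical cache that is filled on every request but never pruned.
// Nothing here is "leaked" - every buffer stays reachable through the map -
// yet the process keeps growing for as long as new keys arrive.
std::unordered_map<std::string, std::vector<char>> g_responseCache;

void handleRequest(const std::string& key)
{
    g_responseCache[key] = std::vector<char>(512 * 1024);  // 512 KB per key, kept forever
}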

When to call SetProcessWorkingSetSize? (Convincing the memory manager to release the memory)

In a previous post (My program never releases the memory back. Why?) I showed that FastMM can cache (read: hold for itself) pretty large amounts of memory. If your application has just loaded a large data set into RAM, after releasing the data you will see that impressive amounts of RAM are not released back to the memory pool.
I looked around and it seems that calling the SetProcessWorkingSetSize API function will "flush" the cache to disk. However, I cannot decide when to call this function. I wanted to call it at the end of the OnClick event of the button that performs the RAM-intensive operation. However, some people are saying that this may cause an AV (access violation).
If anybody used this function successfully, please let me (us) know.
Many thanks.
Edit:
1. After releasing the data set, the program still takes up large amounts of RAM. After calling SetProcessWorkingSetSize, the size returns to a few MB. Some argue that nothing was actually released back. I agree. But the memory footprint is now small AND it does NOT increase again when using the program normally (for example, when performing normal operations that do not involve loading large data sets). Unfortunately, there is no way to demonstrate whether the memory swapped to disk is ever loaded back into memory, but I think it is not.
2. I have already demonstrated (I hope) this is not a memory leak:
My program never releases the memory back. Why?
How to convince the memory manager to release unused memory
If SetProcessWorkingSetSize would solve your problem, then your problem is not that FastMM is keeping hold of memory. This function just trims the working set of your application by writing the memory in RAM to the page file. Nothing is released back to Windows.
In fact, you have only made accessing that memory slower, since it now has to be read back from disc. This method has the same effect as minimising your application: Windows then presumes you are not going to use the application again soon and writes its working set in RAM to the pagefile. Windows does a good job of deciding when to write RAM to the pagefile and tries to keep the most-used memory in RAM as long as it can. It will make the working set smaller (writing to the pagefile) when there is little RAM left. I would not mess with it just to give the illusion that your program is using less memory, while in fact it is using just as much as before, only now it is slower to access. Memory that is accessed again will be loaded back into RAM and make the working set grow again. Touching less memory keeps the working set smaller.
So no, this will not help you force FastMM to release the memory. If your goal is for your application to use less memory, you should look elsewhere: look for leaks, look for heap fragmentation, look for optimisations, and if you think FastMM is keeping you from doing so, you should try to find facts to support it. If your goal is to keep your working set small, you could try to keep your memory accesses local. Maybe FastMM or another memory manager could help you with that, but that is a very different problem from using too much memory. And maybe this function does help you solve the problem you are having, but I would use it with care and certainly not just to keep up the illusion that your program has low memory usage.
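For reference, the trim being discussed boils down to a single Windows API call; here is a minimal C++ sketch (the Delphi call takes the same arguments). Passing -1 for both sizes asks Windows to remove as many pages as possible from the working set, which is exactly the "smaller number in Task Manager, slower next access" trade-off described above.

#include <windows.h>

// Ask Windows to trim this process's working set as far as possible.
// Nothing is freed - the pages are written to the pagefile and will be
// faulted back into RAM the next time they are touched.
void TrimWorkingSet()
{
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);
}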
I agree with Lars Truijens 100%. If you don't, then you can check FastMM's memory usage via the FastMM calls GetMemoryManagerState and GetMemoryManagerUsageSummary before and after calling the SetProcessWorkingSetSize API.
Are you sure there is a problem? Working sets might only decrease when there really is a memory shortage.
Problem solved:
I don't need to use SetProcessWorkingSetSize. FastMM will eventually release the RAM.
To confirm that this behavior is caused by FastMM (as suggested by Barry Kelly), I created a second program that allocated A LOT of RAM. As soon as Windows ran out of RAM, my program's memory utilization returned to its original value.
I used this function just once, when I implemented TWebBrowser. This component used up a lot of memory even after I destroyed the instance.

Resources