I'm using Instruments on a device to try to figure out whether I have any leaked or abandoned memory. Specifically, I am using the Leaks and Allocations instruments. While Instruments doesn't point out any leaks, that doesn't mean I don't have memory issues. I've been working on this for weeks, and I don't seem to be any closer to figuring out what issues I have (ugh).
I am testing a particular action by taking a heapshot after the action and repeating. After the first few "settling" generations, I can see that the growth and persistent counts all start out at a certain number (several KB). After many repeated iterations (say 10-20), some (not all) slowly drain to 0. It takes a while, but it does happen. The generations where persistent memory remains never actually show me anything that I find helpful, as the stack traces show only system libraries.
So my questions are:
What does this type of behavior indicate? Do I have memory issues? Is there some type of lazy release of memory somewhere?
In a sea of iterations that show persistent memory, what does one zero heap growth iteration mean?
If the stack trace for a particular generation points only to system libraries, does this mean the heap growth for that generation is valid or that there is a bug? Or could it still mean that there is something holding onto the memory on my end?
What does it mean when the stack trace shows your library and method, but it is greyed out like the system code and has a little house icon, vs. a line with your library and method that is in black and has a little person icon?
If I have something like a retain cycle - wouldn't the persistent growth be consistent?
Any answers or insights would be extremely helpful!
I'll take a stab at your questions:
What does this type of behavior indicate? Do I have memory issues? Is there some type of lazy release of memory somewhere?
Since you can't know how the system frameworks manage their private memory needs, you must assume that yes, there could be lazy/deferred release of memory happening any time you call into the system frameworks, which in most apps is "all the time". Beyond not being able to rule it out, I can say with some certainty that there definitely are long-lived allocations triggered by seemingly-innocuous system framework usage. (See the discussion of UIWebView's long-lived memory use in this answer for an example.)
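One deferred-release mechanism you can actually observe and control is the autorelease pool. Here's a minimal Swift sketch (my own illustration, with a hypothetical file path) of how temporaries created inside a loop can linger until a pool drains, which can look a lot like the slow "draining to 0" you describe:

```swift
import Foundation

// Autoreleased temporaries created inside a loop normally persist until
// the enclosing pool drains, which can register as "persistent" heap
// growth in a heapshot. Draining a pool per iteration releases them
// immediately instead.
for i in 0..<1000 {
    autoreleasepool {
        // "/tmp/chunk-\(i).bin" is a hypothetical path for illustration.
        let data = NSData(contentsOfFile: "/tmp/chunk-\(i).bin")
        _ = data?.length
    }
}
```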
In a sea of iterations that show persistent memory, what does one zero heap growth iteration mean?
Hard to say. A good first-order guess might be that the heap growth associated with the iteration was somehow exactly offset by a lazy/deferred release of the memory allocated for a previous iteration.
If the stack trace for a particular generation points only to system libraries, does this mean the heap growth for that generation is valid or that there is a bug? Or could it still mean that there is something holding onto the memory on my end?
If Instruments shows heap growth, then that heap growth almost certainly exists. Whether that heap growth is something you have direct control over depends. If you make no calls into system frameworks (not likely), then it's definitely your fault. Once you make a call into the system frameworks, you have to accept the possibility that the framework might allocate memory that stays allocated after your call returns.
What does it mean when the stack trace shows your library and method, but it is greyed out like the system code and has a little house icon, vs. a line with your library and method that is in black and has a little person icon?
Lines being greyed out indicates that Instruments doesn't have debug symbols for that line. That's all. It doesn't indicate anything specific with regard to memory use.
If I have something like a retain cycle - wouldn't the persistent growth be consistent?
If each iteration created a new object graph with cyclic retains, then yes, you would expect that each iteration would cause heap growth by at least the size of that object graph. That said, small object graphs can easily be lost in the "noise." If you have suspicions, one way is to have objects of a "suspect" class perform a huge allocation that will make them stand out from the "noise." For instance, make your object malloc a megabyte (or more) for every instance (and, obviously, free it when the instance is deallocated.) This can help problem areas stick out where they might not have originally.
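For example, a minimal Swift sketch of that padding trick (the class name and the 1 MB figure are arbitrary choices of mine):

```swift
// Pad a "suspect" class with a large allocation so abandoned instances
// stand out from the noise in the Allocations instrument.
final class SuspectObject {
    // 1 MB of raw memory per instance; the size is arbitrary, just big
    // enough to be obvious in a heapshot.
    private let padding = UnsafeMutableRawPointer.allocate(byteCount: 1_048_576,
                                                           alignment: 1)

    deinit {
        // Freed when the instance is deallocated; if instances are
        // abandoned, each one now visibly costs ~1 MB.
        padding.deallocate()
    }
}
```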
Related
It's the first time I have to fix a memory problem in one of my iOS apps, so I'm not really sure how to track it down. I've been reading some posts from different blogs, and I've found that my memory is constantly increasing:
The problem is that I don't know how to track down and fix the problems in my code, and I also don't know what a healthy amount of memory "Growth" should be. Thank you.
First, I'd recommend watching the WWDC 2013 Fixing Memory Problems and WWDC 2012 iOS App Performance: Memory videos. They are dated, yet still relevant, demonstrating how to take the sort of screen snapshot above down to the next level in order to identify the underlying source of the memory issue.
That having been said, I have a couple of observations:
In my experience, nowadays, the problem is rarely "leaked memory" (i.e. memory that has been allocated, for which there are no more references, but which you neglected to free). It's rather hard to leak in Swift (and if you use the static analyzer on Objective-C code, the same is true there, too).
The more common problem is "abandoned memory" resulting from strong reference cycles, repeating timers not getting invalidated, circular references between view controller scenes, etc. These won't be identified by the Leaks tool, only by digging into the Allocations tool.
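As a concrete illustration (my own sketch, with hypothetical class names), the simplest strong reference cycle looks like this:

```swift
// Two objects retain each other, so neither deinit ever runs. The pair
// persists as abandoned memory and will keep showing up in Allocations
// generations.
class Parent {
    var child: Child?
    deinit { print("Parent deallocated") }
}

class Child {
    var parent: Parent?   // the fix: `weak var parent: Parent?`
    deinit { print("Child deallocated") }
}

func makeCycle() {
    let parent = Parent()
    let child = Child()
    parent.child = child
    child.parent = parent
}

makeCycle()   // neither message prints: the cycle keeps both alive
```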
First, I often like to see what accounts for the bulk of the abandoned memory. So I double-click on a top-level call tree entry and it shows me where the abandoned memory was allocated:
Note, this doesn't tell me why this was abandoned, but by knowing where the bulk of the abandoned memory was allocated, it gives me some clues (in this example, I'm starting to suspect the SecondViewController).
I then drill into the generations results and start by looking for my classes in the allocations (you can also do this by manually selecting a relevant portion of the allocations graph, as discussed here). I then filter the results, searching for my classes here. Sure, this won't always be the most significant allocation, but in my experience this sort of abandoned memory almost always stems from some misuse of my classes:
Again, this is pointing me to that SecondViewController class.
Note, when looking at generations, I generally ignore the first one or two because they may have false positives stemming from the app "warming up". I focus on the later generations (and make sure that I only "mark" a generation when the app has returned to some steady, quiescent state).
For the sake of completeness, it's worth pointing out that it's sometimes useful to run Instruments with the "Record reference counts" feature on the unreleased allocation:
If you do that, you can sometimes drill into the list of retain and release calls and identify who still has the strong reference (focus on unpaired retain/release calls). In this case it's not very useful because there are just too many, but I mention it because sometimes it helps diagnose who still has the strong reference:
If you're using Xcode 8, it has a brilliant object graph debugger that cuts through much of this, graphically representing the strong references to the object in question, for example:
From this, the problem jumps out at me, that I not only have the navigation controller maintaining a reference to this view controller, but a timer. This happens to be a repeating timer for which I neglected to invalidate. In this case, that's why this object is not getting deallocated.
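For the record, a minimal Swift sketch of that failure mode (the interval and selector are illustrative): a repeating timer retains its target, so until it is invalidated, the run loop keeps the controller alive no matter what the navigation stack does.

```swift
import UIKit

class SecondViewController: UIViewController {
    private var timer: Timer?

    override func viewDidLoad() {
        super.viewDidLoad()
        // The run loop retains the timer, and the timer retains `self`,
        // so this controller cannot be deallocated while the timer runs.
        timer = Timer.scheduledTimer(timeInterval: 1.0,
                                     target: self,
                                     selector: #selector(tick),
                                     userInfo: nil,
                                     repeats: true)
    }

    @objc private func tick() {
        // periodic work
    }

    // The missing piece: invalidate the timer when the scene goes away
    // (e.g. from viewDidDisappear(_:)) so `self` can be released.
    func stopTimer() {
        timer?.invalidate()
        timer = nil
    }
}
```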
In the absence of Xcode 8's object graph debugger, you're left poring over the retain and release references (looking at where each of those strong references was established and determining whether it was properly released) or just looking through your code for things that might be retaining that particular view controller (strong reference cycles, repeating timers, and circular view controller references are the most common problems I see).
Minimizing your app's Memory Footprint
For the benefit of both stability and performance, it's important to understand and heed the differing amounts of memory that are available to your app across the various devices your app supports. Minimizing the memory usage of your app is the best way to ensure it runs at full speed and avoids crashing due to memory depletion in customer instantiations. Furthermore, using the Allocations Instrument to assess large memory allocations within your app can sometimes be a quick exercise that can yield surprising performance gains.
Apple Official Document
What's the meaning of memory profiling?
Does it give statistics about memory, such as how much memory is utilized?
And are there different kinds of memory profiling?
The problem is, you may be doing way too much new-ing, which, even in a language with a garbage collector, may unnecessarily dominate your execution time.
You may also have a memory leak meaning that the amount of dynamic memory you're not returning to the pool grows steadily over time.
If your app runs for a long time, that's equally bad.
I use the random-pausing method for performance diagnosis, but that is of no value for finding memory leaks.
That's what Memory Profiling should help with.
Here's how I've found memory leaks in the past, using MFC.
In a debug build, when I shut down the app, it prints a list of all the non-collected memory blocks, along with their class type.
Then I look to see where those blocks are created, and try to figure out why they weren't deleted or collected.
It would be more useful if I could capture a stack trace on each block, so I could tell which new statement made it, and the stack could tell me why.
The point is, I could allocate 100 blocks of class Foo, and delete 99 of them.
The one that I don't delete is the problem, so it would be useful to know more about where it came from.
I don't know if memory profilers can do this or not.
This is a follow-up to this question: What could explain the difference in memory usage reported by FastMM or GetProcessMemoryInfo?
My Delphi XE application is using a very large amount of memory, which sometimes leads to an out of memory exception. I'm trying to understand why and what is causing this memory usage: while FastMM is reporting low memory usage, when requesting TProcessMemoryCounters.PageFileUsage I can clearly see that a lot of memory is used by the application.
I would like to understand what is causing this problem and would like some advice on how to handle it:
Is there a way to know what is contained in that memory and where it was allocated?
Is there some tool to track down memory usage by line/procedure in a Delphi application?
Any general advice on how to handle such a problem?
EDIT 1: Here are two screenshots of FastMMUsageTracker indicating that memory has been allocated by the system.
Before process starts:
After process ends:
Legend: Light red is FastMM allocated and dark gray is system allocated.
I'd like to understand what is causing the system to use that much memory, probably by understanding what is contained in that memory or what line of code or procedure caused that allocation.
EDIT 2: I'd rather not use the full version of AQTime for multiple reasons:
I'm using multiple virtual machines for development and their licensing system is a PITA (I'm already a registered user of TestComplete)
The LITE version doesn't provide enough information, and I won't waste money without being certain the FULL version will give me valuable information
Any other suggestions ?
Another problem might be heap fragmentation. This means you have enough memory free, but all the free blocks are too small. You might see it visually by using the source version of FastMM and using FastMMUsageTracker.pas as suggested here.
You need a profiler, but even that won't be enough in lots of cases. Also, in your case, you would need the full-featured AQTime, not the lite version that comes with Delphi XE and XE2. (AQTime is extremely expensive and annoyingly node-locked, so don't think I'm a shill for SmartBear software.)
The thing is that people often mistake AQTime's Allocation Profiler for only a way to find leaks. It can also tell you where your memory goes, at least within the limits of the tool. While the app is running and consuming lots of memory, I click Run -> Get Results.
Here is one of my applications being profiled in AQTime, with its Allocation Profiler showing exactly which class is allocating how many instances on the heap and how much memory those use. Since you report low Delphi heap usage with FastMM, that tells me that most of AQTime's ability to analyze by Delphi class name will also be useless to you. However, by using AQTime's events and triggers, you might be able to figure out what areas of your application are causing you a "memory usage expense", when those occur, and what the expense is. AQTime's real-time instrumentation may be sufficient to help you narrow down the cause, even though it might not automatically find which function call is causing the most memory usage.
The column names include "Object Name", which includes things like this:
* All Delphi classes, with their instance counts and heap usage.
* Virtual memory blocks allocated via Win32 calls.
It can detect Delphi and C/C++ library allocations on the heap, and can see certain Windows-API level memory allocations.
Note the live count of objects and the amount of heap memory that they use.
I usually try to figure out the memory cost of a particular operation by measuring heap memory use before, and just after, some expensive operation, but before the cleanup (freeing) of the memory from that expensive operation. I can set event points inside AQTime and when a particular method gets hit or a flag gets turned on by me, I can measure before, and after values, and then compare them.
FastMM alone cannot even detect a non-Delphi allocation or an allocation from a heap that is not being managed by FastMM. AQTime is not limited in that way.
I am trying to work through some low memory conditions using instruments. I can watch memory consumption in the Physical Memory Free monitor drop down to a couple of MB, even though Allocations shows that All Allocations is about 3 MB and Overall Bytes is 34 MB.
I have started to experience crashing since I moved some operations to a separate thread with an NSOperationQueue. But I wasn't using Instruments before the change. Nevertheless, I'm betting I did something that I can undo to stop the crashes.
By the way, it is much more stable without Instruments or the debugger connected.
I have the leaks down to almost none (maybe a hundred bytes max before a crash).
When I look at Allocations, I only see very primitive objects, and the total memory reported by it is also very low. So I can't see how my app is causing these low memory warnings.
When I look at Heap Shots from the start up, I don't see more than about 3 MB there, between the baseline and the sum of all the heap growth values.
What should I be looking at to find where the problem is? Can I isolate it to one of my view controller instances, for example? Or to one of my other instances?
What I have done:
I powered the device off and back on, and this made a significant improvement. Instruments is not reporting a low memory warning. Also, I noticed that Physical Free Memory at startup was only about 7 MB before restarting, and it's about 60 MB after restarting.
However, I am seeing a very regular (periodic) drop in Physical Free Memory, dropping from 43 MB to 6 MB (and then back up to 43 MB). I would like to know what is causing that. I don't have any timers running in this app. (I do have some performSelector:afterDelay: calls, but those aren't active during these tests.)
I am not using ARC.
The Allocations and Leaks instruments only show what the objects themselves take, not what their underlying non-object structures (the backing stores) are taking. For example, for a UIImage they will show only a few allocated bytes. This is because the UIImage object itself only takes those bytes; the CGImageRef that actually contains the image data is not an object, and it is not taken into account by these instruments.
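To put rough numbers on this, here is a hedged Swift sketch (my own estimate, assuming 8-bit RGBA, i.e. 4 bytes per pixel) of what a decoded image actually costs:

```swift
import UIKit

// The UIImage object itself is tiny; the decoded bitmap behind it is
// roughly pixelWidth * pixelHeight * 4 bytes (assuming 8-bit RGBA).
func estimatedDecodedSize(of image: UIImage) -> Int {
    let pixelWidth = Int(image.size.width * image.scale)
    let pixelHeight = Int(image.size.height * image.scale)
    return pixelWidth * pixelHeight * 4
}

// e.g. a 3000 x 2000 photo decodes to ~24 MB of image data,
// even if the JPEG file on disk is only a few hundred KB.
```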
If you are not doing it already, try running the VM Tracker at the same time you run the Allocations instrument. It will give you an idea of the type of memory that is being allocated. For iOS, the "Dirty Memory" shown by this instrument is what normally triggers the memory warnings. Dirty memory is memory that cannot be automatically discarded by the VM system. If you see lots of CGImages, images might be your problem.
Another important concept is abandoned memory. This is memory that was allocated, is still referenced somewhere (and as such is not a leak), but is not used. An example of this type of memory is a cache of some sort which does not free up upon a memory warning. A way to find this out is to use heap shot analysis. Press the "Mark Heap" button of the Allocations instrument, do some operation, return to the previous point in the app, and press "Mark Heap" again. The second heap shot should show you what new objects have been allocated between those two moments, and might shed some light on the mystery. You could also repeat the operation while simulating a memory warning to see if that behaviour changes.
Finally, I recommend you to read this article, which explains how all this works: http://liam.flookes.com/wp/2012/05/03/finding-ios-memory/.
The difference between physical memory from VM Tracker and allocated memory from "Allocations" is due to the major differences of how these instruments work:
Allocations traces what your app does by installing a tap in the functions that allocate memory (malloc, NSAllocateObject, ...). This method yields very precise information about each allocation, like position in code (stack), amount, time, type. The downside is that if you don't trace every function (like vm_allocate) that somehow allocates memory, you lose this information.
VM Tracker samples the state of the system's virtual memory in regular intervals. This is a much less precise method, as it just gives you an overall view of the current state. It operates at a low frequency (usually something like every three seconds) and you get no idea of how this state was reached.
A known culprit of invisible allocations is CoreGraphics: It uses a lot of memory when decompressing images, drawing bitmap contexts and the like. This memory is usually invisible in the Allocations instrument. So if your app handles a lot of images it is likely that you see a big difference between the amount of physical memory and the overall allocated size.
Spikes in physical memory might result from big images being decompressed, downsized and then only used in screen resolution in some view's or layer's contents. All this might happen automatically in UIKit without your code being involved.
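One common mitigation, sketched below using ImageIO's thumbnail API (my own suggestion, not something from the instruments above), is to downsample large images to their display size before handing them to UIKit, so the full-resolution bitmap is never decoded and retained:

```swift
import UIKit
import ImageIO

// Decode an image straight to its display size using ImageIO's thumbnail
// API, avoiding a full-resolution decompression in memory.
func downsampledImage(at url: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage? {
    // Don't cache the full-size decoded image while creating the source.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        return nil
    }
    let maxDimension = max(pointSize.width, pointSize.height) * scale
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxDimension
    ] as CFDictionary
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```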
I have the leaks down to almost none (maybe a hundred bytes max before a crash).
In my experience, even very small leaks are a "dangerous" sign. In fact, I have never seen a leak larger than 4K, and the leaks I usually see are a couple hundred bytes. Still, they usually "hide" behind them a much larger amount of memory which is lost.
So, my first suggestion is: get rid of those leaks, even though they seem small and insignificant -- they are not.
I have started to experience crashing since I moved some operations to a separate thread with an NSOperationQueue.
Is there a chance that the operation you moved to the thread is responsible for the pulsing peak? Could it be spawned more than once at a time?
As to the peaks, I see two ways you can go about them:
use the Time Profiler in Instruments and try to understand what code is executing while you see the peak rising;
selectively comment out portions of your code (I mean: entire parts of your app -- e.g., replace a "real" controller with a basic/empty UIViewController, etc) and see if you can identify the culprit this way.
I have never seen such a pulsating behaviour, so I assume it depends on your app or on your device. Have you tried with a different device? What happens in the simulator (do you see the peak)?
When I'm reading your text, I have the impression that you might have some hidden leaks. I could be wrong, but are you 100% sure that you have checked for all leaks?
I remember one particular project I was working on a few months ago: I had the same kind of issue, and no leaks in Instruments. My memory kept growing and I got memory warnings... I started to log in some important dealloc methods, and I saw that some objects, subviews (UIViews), were "leaking". But they were not seen by Instruments because they were still attached to a main view.
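For what it's worth, here is a minimal sketch of that logging technique in Swift, where dealloc becomes deinit (the class name is illustrative):

```swift
import UIKit

// Override deinit in the views you suspect and watch the console: if the
// message never prints after the view should be gone, something is still
// holding a strong reference to it.
class SuspectSubview: UIView {
    deinit {
        print("SuspectSubview deallocated")
    }
}
```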
Hope this was helpful.
In the Allocations Instrument make sure you have "Only Track Active Allocations" checked. See Image Below. I think this makes it easier to see what is actually happening.
1. Have you run Analyze on the project? If there are any analyzer warnings, fix them first.
2. Are you using any CoreFoundation stuff? Some of the CF methods have ... strange ... interactions with the ObjC runtime and memory management (they shouldn't, AFAICS, but I've seen some odd behaviour with the low-level image and AV manipulations where it seems like memory is being used outside the core app process - maybe the OS calls being used by Apple?)
... NB: there have also, in previous versions of iOS, been a few mem-leaks inside Apple's CF methods. IIRC the last of those was fixed in iOS 5.0.
3. Are you doing something with a large number of / large-sized CALayer instances (or UIViews with CG* methods, e.g. a custom drawRect method in a UIView)?
... NB: I have seen the exact behaviour you describe caused by 2 and 3 above, either in the CF libraries, or in the Apple windowing system when it tries to work with image data that was originally generated inside CF libraries - or which found its way into CALayers.
It seems that Instruments DOES NOT CORRECTLY TRACK memory usage inside the CA / CG system; this area is a bit complex since Apple is shuffling back and forth between CPU and GPU ram, but it's disappointing that the mem usage seems to simply "disappear" when it clearly is still being used!
4. Final thought: are you using the invisible RHS of Instruments?
Apple hardcoded Instruments to always disable that panel every time you run it (so you have to keep manually opening it). This is stupid, since some of the core information only exists in the RHS bar. But I've worked with several people who didn't even know it existed :)
I'm sorry about the title. I know it is rather poor but I wasn't sure how to word it.
I have read conflicting statements on how the Leaks instrument works. I am trying to figure out if I have any leaks left that I need to deal with, but I am very new to memory management with iOS.
My question is essentially: Does the data in this screenshot look good or bad? I know it isn't enough information to find specific problems for me or not but I am just confused as to whether I have a problem or not.
I have read that "Heap Growth" and "Persistent" are both cumulative and are not released. Is this correct? The numbers in Heap Growth and Persistent both start large and get smaller each time. Does this mean things are eventually cleaning up, or does it mean that my memory usage is constantly expanding?
Bad. The heap growth is the amount your app's heap has grown since the last time you marked the heap, meaning objects are being allocated but never released. You'll have to expand the heapshots, see which objects are being retained, and work out why they are not being released. Ideally, each time you mark the heap, the growth would be 0.
The blue bars in the Leaks section also show that you have something leaking memory.