Hi, I'm having trouble with allocated memory: I noticed a lot of Heap Growth in Instruments, so I designed a test app.
The test app contains two ViewControllers, each with one button.
The first ViewController is linked through a modal segue to SecondViewController (and it has NO code at all, besides the auto-generated code).
The second ViewController has only one method:
-(IBAction)back:(id)sender {
    [self dismissModalViewControllerAnimated:YES];
}
so I can flip through the views.
When I test it with Instruments, I notice heap growth after I go to the second view and back.
How is that possible? What am I missing?
The size of the heap is not the same as your app's memory usage.
While your app is alive, the kernel has to allocate memory on your behalf.
Modern systems use virtual memory: basically, they map physical addresses to virtual addresses, which are what your process accesses.
This mapping is handled by the kernel, and it needs memory for it.
If you request 1 MB of memory, the kernel has to allocate memory to keep track of the physical pages involved, by increasing the size of your address space.
If you free all memory, the kernel will usually keep the memory used for the mapping, and re-use it for the next allocations, avoiding the need to reallocate space for it.
This is why the heap size doesn't change. But it does not indicate your application's memory usage at all.
If using Instruments, look at the VM Tracker tool for this.
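To see that reuse in isolation, here is a minimal command-line sketch (the pointer values, and whether the region is actually reused, depend on the allocator, so treat the output as illustrative only):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Allocate 1 MB, note where it landed, then free it.
    void *first = malloc(1024 * 1024);
    printf("first allocation:  %p\n", first);
    free(first);

    // The allocator usually keeps the pages it got from the kernel and
    // hands them out again, so a second request of the same size often
    // lands in the same region instead of growing the address space.
    void *second = malloc(1024 * 1024);
    printf("second allocation: %p\n", second);
    free(second);
    return 0;
}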
I can't figure out how to determine how much memory is in use by individual objects.
Using Xcode's visual memory debugger, the memory shown on the right side (circled in red) is not how much memory the object is actually using.
Xcode's Allocations instrument seems to show only the entire heap and the memory used by individual allocations, not objects as a whole.
Am I missing something? Is there any way to see how much aggregated memory is used by individual objects?
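Not a full answer, but for a single instance you can at least query its shallow footprint from the runtime. A minimal sketch (note that both figures deliberately exclude anything the object merely points to, which is exactly why aggregated per-object numbers are hard to come by):

#import <Foundation/Foundation.h>
#import <objc/runtime.h>
#import <malloc/malloc.h>

int main(void) {
    @autoreleasepool {
        NSObject *object = [NSObject new];

        // Shallow size: the bytes occupied by the instance's own ivars.
        size_t declared = class_getInstanceSize([object class]);

        // Actual block handed out by malloc for this instance
        // (usually rounded up to an allocator bucket size).
        size_t block = malloc_size((__bridge const void *)object);

        NSLog(@"instance size: %zu bytes, malloc block: %zu bytes",
              declared, block);
    }
    return 0;
}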
I understand that the stack grows upwards and the heap grows downwards, or vice versa (it's architecture dependent).
But I couldn't find many details about how it's actually implemented. My question is: a block of memory is allocated for every process, but is there a restriction on the maximum chunk that can be used for the stack or for the heap? Or are there no restrictions until the whole allocated memory is consumed?
Yes, processes have predetermined stack sizes. Have you ever tried to recurse a method/function too deeply? You get a StackOverflow exception. That doesn't mean you've gone through your entire computer's memory. The OS controls the distribution of stack and heap memory for each process.
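A quick way to convince yourself of this (a minimal sketch; the exact depth at which it dies depends on the platform's default stack size and compiler settings) is to compare a large heap allocation with runaway recursion:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Each frame pins roughly 4 KB of stack; with a typical 1 MB - 8 MB stack
// the process hits a stack overflow after a few thousand calls, long
// before the machine's RAM is exhausted.
static int recurse(int depth) {
    char frame[4096];
    memset(frame, depth & 0xff, sizeof frame);
    return frame[0] + recurse(depth + 1);   // non-tail call: every frame stays live
}

int main(void) {
    // The same order of memory is trivial to get from the heap.
    void *big = malloc(64 * 1024 * 1024);
    printf("64 MB heap allocation: %s\n", big ? "ok" : "failed");
    free(big);

    printf("%d\n", recurse(1)); // expect a crash (stack overflow), not out-of-memory
    return 0;
}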
Still struggling with memory debugging.
I have 4 VCs that I load using a navigation controller. Every VC has its own PNG images, used for several controls.
In Instruments I realized that most of the VM regions are occupied by ImageIO_PNG_Data.
And as I push/pop VCs, those VM regions grow and never shrink (I assumed that deallocating a VC would also release its images).
Of course, the debugging is done in the Simulator.
To expand slightly on rokjarc's comment:
UIImage +imageNamed: explicitly caches. The documentation states:
This method looks in the system caches for an image object with the
specified name and returns that object if it exists. If a matching
image object is not already in the cache, this method loads the image
data from the specified file, caches it, and then returns the
resulting object.
So images loaded previously will remain in the cache unless or until the memory is needed elsewhere. There's no efficiency to be gained from freeing memory up needlessly.
If you want to avoid the caching for whatever reason — I would argue whatever spurious reason — you could use +imageWithContentsOfFile:, or the normal init equivalent, having obtained the full path from NSBundle.
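As a quick sketch of the uncached route (the method and the "photo" asset name are placeholders of mine, not from the question):

- (UIImage *)loadUncachedImage {
    // [UIImage imageNamed:@"photo"] would go through the system cache and
    // stay resident. imageWithContentsOfFile: bypasses the cache; the
    // decoded data is released when the returned UIImage is deallocated.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"photo" ofType:@"png"];
    return [UIImage imageWithContentsOfFile:path];
}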
PNGs assigned to image views and other controls in Interface Builder will, as far as I'm aware, also be loaded via the cache.
If the VM allocations do not have physical memory allocated to them, there is no problem.
iOS memory-maps files, and there may be no physical memory allocated for them at any given time. Some VM allocations are frameworks that are shared with other apps.
What you need to watch is the live heap allocations, which in this case are a little over 4 MB.
OK, so my understanding of how executables are laid out in memory is... imagine a square box that represents the memory accessible to your app.
The program code resides at the bottom of the memory; the stack is allocated just beyond the program code and grows upwards; the heap starts at the top of the memory and grows downwards.
If this is the case, why is it possible to allocate more heap memory than stack memory?
Because even on modern systems with lots of virtual memory available, the maximum size of the call stack is usually deliberately limited to, say, 1MB.
This is not usually a fundamental limit; it's possible to modify this (using e.g. setrlimit() in Linux, or the -Xss flag for Java). But needing to do so usually indicates an abnormal program; if you have large data-sets, they should normally be stored on the heap.
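For completeness, here is a minimal sketch of inspecting and (where permitted) raising that limit on a Unix-like system; note that the new soft limit typically only matters for stacks created afterwards, not the stack the current thread is already running on:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    // Read the current call-stack limit (often around 1 MB - 8 MB).
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("stack soft limit: %lld, hard limit: %lld\n",
           (long long)rl.rlim_cur, (long long)rl.rlim_max);

    // Raise the soft limit to 16 MB (or the hard limit, if that is lower).
    rlim_t wanted = 16 * 1024 * 1024;
    rl.rlim_cur = (rl.rlim_max != RLIM_INFINITY && rl.rlim_max < wanted)
                      ? rl.rlim_max : wanted;
    if (setrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("setrlimit");
    }
    return 0;
}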
Some systems, such as Symbian, insist that people use the heap instead of the stack when allocating big objects (such as pathnames, which may be more than 512 bytes). Is there any specific reason for this?
Generally the stack on an embedded device is fixed and quite small, e.g. 8 KB is the default stack size on Symbian.
Consider that a maximum-length filename is 256 bytes; double that for Unicode and you're already at 512 bytes (1/16th of your whole stack) for a single filename. So you can imagine that it is quite easy to use up the stack if you're not careful.
Most Symbian devices do come with an MMU but, until very recently, did not support paging. This means that physical RAM is committed for every running process. Each thread on Symbian has (usually) a fixed 8 KB stack. If each thread has a stack, then increasing the size of this stack from 8 KB to, say, 32 KB would have a large impact on the memory requirements of the device.
The heap is global. Increasing its size, if you need to do so, has far less impact. So, on Symbian, the stack is for small data items only: allocate larger ones from the heap.
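In plain C terms (Symbian has its own heap-allocated descriptor classes for this, which I'm not showing here), the trade-off looks roughly like this:

#include <stdlib.h>
#include <string.h>

#define MAX_PATH_BYTES 512   /* 256 UTF-16 characters, as in the answer above */

void bad_idea(void) {
    // 512 bytes on an 8 KB stack is 1/16th of the whole stack, tied up
    // for as long as this frame (and anything it calls) is live.
    char on_stack[MAX_PATH_BYTES];
    memset(on_stack, 0, sizeof on_stack);
}

void better_idea(void) {
    // The same buffer from the (global, larger) heap costs only a
    // pointer's worth of stack and is returned as soon as we free it.
    char *on_heap = malloc(MAX_PATH_BYTES);
    if (on_heap != NULL) {
        memset(on_heap, 0, MAX_PATH_BYTES);
        free(on_heap);
    }
}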
Embedded devices often have a fixed-size stack. Since a subroutine call in C only needs to push a few words onto the stack, a few hundred bytes may suffice (if you avoid recursive function calls).
Most embedded devices don't come with a memory management unit, so there is no way for the OS to grow the stack automatically and transparently to the programmer. Even assuming a growable stack, you would have to manage it yourself, which is no better than heap allocation and defeats the purpose of using a stack in the first place.
The stack on embedded devices usually resides in a very small amount of high-speed memory. If you allocate large objects on the stack on such a device, you may face a stack overflow.