Some systems, such as Symbian, insist that people use the heap instead of the stack when allocating big objects (such as pathnames, which may be more than 512 bytes). Is there any specific reason for this?
Generally the stack on an embedded device is fixed and quite small; e.g., 8 KB is the default stack size on Symbian.
A maximum-length filename is 256 bytes, but double that for Unicode and you are already at 512 bytes (1/16th of your whole stack) for a single filename. So you can imagine that it is quite easy to use up the stack if you're not careful.
Most Symbian devices do come with an MMU but, until very recently, did not support paging. This means that physical RAM is committed for every running process. Each thread on Symbian usually has a fixed 8 KB stack. Since every thread has its own stack, increasing that size from 8 KB to, say, 32 KB would have a large impact on the memory requirements of the device.
The heap is global. Increasing its size, if you need to do so, has far less impact. So, on Symbian, the stack is for small data items only - allocate larger ones from the heap.
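For illustration, here is a minimal C sketch of that rule; plain malloc() stands in for Symbian's own idioms (real Symbian code would typically use heap descriptors such as HBufC), and the function name and signature are made up:

#include <stdlib.h>
#include <string.h>

enum { MAX_FILENAME_UNITS = 256 };   /* max filename length in characters */

/* Keep the 512-byte Unicode filename buffer off the thread's 8 KB stack. */
int copy_filename(const unsigned short *name, size_t len)
{
    unsigned short *buf;

    if (len > MAX_FILENAME_UNITS)
        return -1;
    buf = malloc(MAX_FILENAME_UNITS * sizeof *buf);   /* 256 * 2 = 512 bytes */
    if (buf == NULL)
        return -1;                   /* heap exhausted: fail gracefully */

    memcpy(buf, name, len * sizeof *buf);
    /* ... work with buf ... */
    free(buf);
    return 0;
}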
Embedded devices often have a fixed-size stack. Since a subroutine call in C only needs to push a few words onto the stack, a few hundred bytes may suffice (if you avoid recursive function calls).
Most embedded devices don't come with a memory management unit, so there is no way for the OS to grow the stack space automatically and transparently to the programmer. Even assuming a growable stack, you would have to manage it yourself, which is no better than heap allocation and defeats the purpose of using a stack in the first place.
The stack for embedded devices usually resides in a very small amount of high-speed memory. If you allocate large objects on the stack on such a device, you might be facing a stack overflow.
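As a hedged illustration of the trade-off (the function names, the output comment, and the 1 KB buffer size are made up for the example), here is how it plays out in C: a large local array eats the fixed stack, while a static buffer lives in .bss instead.

#include <stdio.h>

/* Risky on a small fixed stack: this one call claims 1 KB of an 8 KB
   stack, on top of whatever the call chain above it already uses. */
void log_status_stack(int code)
{
    char buf[1024];
    snprintf(buf, sizeof buf, "status=%d", code);
    /* ... hand buf to the output routine ... */
}

/* A common embedded alternative: a static buffer lives in .bss rather
   than on the stack, at the cost of reentrancy/thread-safety. */
void log_status_static(int code)
{
    static char buf[1024];
    snprintf(buf, sizeof buf, "status=%d", code);
    /* ... hand buf to the output routine ... */
}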
I read somewhere that the stack pointer is decremented to allocate space for local variables when a function is called.
I don't understand how that can be true, because it seems to me that it should be incremented. Can somebody please explain?
First, it does not really matter whether it is incremented or decremented; the difference is only whether the CPU increases or decreases SP on push/pop operations. This does not affect the essence of a stack at all: we read data back from it in exactly the opposite order to that in which we put it in.
The reason is historical: on machines without paging-based virtual memory support, we have a fixed address space. The code, the heap, and the stack all have to be placed in it somehow, without overwriting each other.
The code part of the program typically does not change (self-modifying code at the ASM level is nearly surreal today, and it was rare even long ago). The heap (data) segment sometimes grows, sometimes shrinks, and it also fragments. The stack only grows or shrinks, but it does not fragment.
This resulted in the typical memory layout of a process address space being:
code (at the beginning)
heap (right after the code; note that its size varies and it cannot overlap with the stack!)
stack (because we do not know how the heap will grow, it needs to be placed as far away from the data as possible)
The result is that the stack has to be at the end of the address space, and to make its growth possible, the stack pointer has to be decremented on data insertion.
Other memory layouts were also possible; there were CPUs whose stacks grew upward on data insertion.
Later, with the appearance of paging-based memory virtualization, this problem was largely solved (although a downward-growing stack was still better if the virtual address space was not big enough). But there was no need to break compatibility for a zero-to-little improvement.
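A minimal C model of the downward-growing convention (the array size and names are illustrative): the pointer starts just past the high end of the "address space", push decrements it, and values still come back in reverse order of insertion.

#include <stdio.h>

#define STACK_WORDS 8

static int mem[STACK_WORDS];     /* the fixed "address space" */
static int sp = STACK_WORDS;     /* SP starts just past the high end */

static void push(int v) { mem[--sp] = v; }   /* grows downward */
static int  pop(void)   { return mem[sp++]; }

int main(void)
{
    push(1); push(2); push(3);
    int a = pop(), b = pop(), c = pop();
    printf("%d %d %d\n", a, b, c);           /* prints: 3 2 1 */
    return 0;
}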
The title pretty much says it. I am writing an algorithm (and right now porting it to NASM) that needs to allocate lots of RAM (upwards of 8 GB, as a severe trade-off for CPU usage). On every iteration it stores an int on the stack (for output and later use). Then, periodically, it could free a set of values, but only from the bottom of the stack. Could this be done by simply decrementing the stack base (rbp)?
A stack is a stack. You can push and pop values on the top but nothing more. You cannot deallocate anything from it in any other way.
Changing RBP doesn't do anything; it is just a helper register used for the current stack frame. RSP points to the current top of the stack, and moving it changes where the next value will be stored to or retrieved from. So you can drop a bunch of values from the top if needed, but not from the bottom.
If you need to temporarily store values and later release them, then a circular buffer or blocks of regular memory would be much better suited for that.
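For example, a fixed-capacity ring buffer of ints supports exactly the "release the oldest values" operation the question asks for. This is only a sketch; the names and the power-of-two capacity are assumptions, not from the question:

#include <stdint.h>

#define RING_CAP 1024u           /* must be a power of two for the % trick */

struct ring {
    int32_t  data[RING_CAP];
    uint32_t head, tail;         /* free-running indices; head - tail = count */
};

/* Append a value at the top; fails when the buffer is full. */
static int ring_push(struct ring *r, int32_t v)
{
    if (r->head - r->tail == RING_CAP)
        return -1;
    r->data[r->head++ % RING_CAP] = v;
    return 0;
}

/* Release up to n of the oldest values - the "free from the bottom"
   operation that a real call stack cannot perform. */
static void ring_release_oldest(struct ring *r, uint32_t n)
{
    uint32_t count = r->head - r->tail;
    r->tail += (n < count) ? n : count;
}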
I understand that the stack grows upwards and the heap grows downwards, or vice versa (it is architecture dependent).
But I couldn't find many details about how it's actually implemented. My question is: a memory block is allocated for every process, but is there a restriction on how big a chunk can be used for the stack or the heap? Or are there no restrictions until the whole allocated memory is consumed?
Yes, processes have predetermined stack sizes. Have you ever tried to recurse through a method/function too deeply? You get a StackOverflow exception. That doesn't mean you've already gone through your entire computer's memory. The OS controls the distribution of stack and heap memory for each process.
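You can watch this happen with a deliberately unbounded recursion in C (illustrative only; compile without optimization, or the compiler may turn the tail call into a loop and hide the effect):

#include <stdio.h>

/* Each frame carries a 256-byte local, so the fixed stack is exhausted
   after roughly (stack size / frame size) calls - long before the
   machine's RAM is - and the process dies with a stack overflow
   (SIGSEGV on Linux, StackOverflowError in Java, and so on). */
static void recurse(unsigned long depth)
{
    char pad[256];
    pad[0] = (char)depth;            /* keep pad from being optimized away */
    if (depth % 10000 == 0)
        fprintf(stderr, "depth=%lu\n", depth);
    recurse(depth + 1);
}

int main(void)
{
    recurse(1);                      /* never returns normally */
    return 0;
}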
Hi, I have trouble with allocated memory: I noticed a lot of heap growth in Instruments, so I designed a test app.
The test app contains two ViewControllers, each with one button.
The first ViewController is linked through a modal segue to the SecondViewController (and it has NO code at all, besides the auto-generated code).
The second ViewController has only the function
-(IBAction)back:(id)sender {
    // Dismiss the modally presented view controller and return to the first one
    [self dismissModalViewControllerAnimated:YES];
}
so I could flip through the views.
When I tested it with Instruments, I noticed heap growth after going to the second view and back.
How is that possible? What am I missing?
The size of the heap is not the app memory usage.
When your app is alive, the kernel will have to allocate memory for you.
Modern systems use virtual memory. Basically, they map physical addresses to virtual addresses that your process will access.
This mapping is handled by the kernel, and it needs memory for it.
If you request 1 MB of memory, the kernel will have to allocate memory to keep track of the physical pages allocated, by increasing the size of your address space.
If you free all the memory, the kernel will usually keep the memory used for the mapping and reuse it for the next allocations, avoiding the need to reallocate space for it.
This is why the heap size doesn't change. But it does not indicate your application's memory usage at all.
If using Instruments, look at the VM Tracker tool for this.
OK, so my understanding of how executables are laid out in memory is: imagine a square box that represents the memory accessible to your app.
The program code resides at the bottom of memory, the stack is allocated just beyond the program code and grows upwards, and the heap starts at the top of memory and grows downwards.
If this is the case, why is it possible to allocate more heap memory than stack memory?
Because even on modern systems with lots of virtual memory available, the maximum size of the call stack is usually deliberately limited to, say, 1MB.
This is not usually a fundamental limit; it's possible to modify this (using e.g. setrlimit() in Linux, or the -Xss flag for Java). But needing to do so usually indicates an abnormal program; if you have large data-sets, they should normally be stored on the heap.
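As a POSIX-flavored sketch of the setrlimit() route mentioned above (the 64 MB figure is an arbitrary example; the raised soft limit governs how far the stack may grow from here on):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft stack limit: %lld bytes\n", (long long)rl.rlim_cur);

    rl.rlim_cur = 64LL * 1024 * 1024;        /* ask for a 64 MB soft limit */
    if (rl.rlim_cur > rl.rlim_max)
        rl.rlim_cur = rl.rlim_max;           /* can't exceed the hard limit */
    if (setrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}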