Change Stack size in Contiki

Is there a way to programmatically change the stack size in Contiki?
I know on Linux systems I'm able to call:
ulimit -s SIZE
But I'm currently using Contiki as a flashed binary, and don't really have access to a traditional terminal. I've tried executing the command from C using system() and popen() calls to no avail.
Perhaps there's a CFLAG or LDFLAG I can leverage, or something I can modify in the makefile?
FYI, I'm flashing the binary to a Texas Instruments CC2650, which has a 32-bit processor.

The CC2650 does not have an MPU (Memory Protection Unit), which means nothing checks the boundaries of the stack region at runtime, which in turn means there is no way to "reserve" stack in the same sense that stack memory is reserved on Linux.
Essentially, if you keep allocating new things on the stack, the stack will keep growing even after it reaches other memory regions - usually the .data region, which contains dynamically allocated memory, if any, and static/global variables. The growing stack will corrupt the memory in those other regions in ways you might not even notice, leading to hard-to-find bugs.
There are a couple of things you can do. One is to reserve a bigger stack region at compile time. This will not limit the stack, but it will limit the extent of the data region. To do that, change the CC2650 linker script in cpu/cc26xx-cc13xx/cc26xx.ld:
_Min_Stack_Size = 0x100; /* 256 bytes by default for the stack */
The other option is to use recent Contiki-NG revisions, which have built-in stack-overflow checks. There is still no way to change the stack region size at runtime, but you will get an error if a stack overflow happens.
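To give a feel for how such built-in checks typically work, here is a minimal stack-painting sketch in C. The linker symbols _stack and _stack_origin and the pattern value are illustrative assumptions, not the actual Contiki-NG API:
#include <stdint.h>
/* Assumed linker symbols: _stack marks the lowest address of the
 * reserved stack region, _stack_origin its initial top. */
extern uint8_t _stack;
extern uint8_t _stack_origin;
#define STACK_PAINT_PATTERN 0xCD
/* At boot, fill the not-yet-used part of the stack with a known pattern. */
static void stack_paint(void)
{
  uint8_t *p = &_stack;
  while(p < (uint8_t *)&p) {  /* stop below the current stack pointer */
    *p++ = STACK_PAINT_PATTERN;
  }
}
/* Later, scan from the bottom: bytes that lost the pattern were used by
 * the stack. If no painted bytes remain, the stack has likely overflowed. */
static int32_t stack_max_usage(void)
{
  const uint8_t *p = &_stack;
  while(p < &_stack_origin && *p == STACK_PAINT_PATTERN) {
    p++;
  }
  return (int32_t)(&_stack_origin - p);
}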

Related

What is the address space in Go(lang)?

I'm trying to understand the basics of concurrent programming in Go. Almost all articles use the term "address space", for example: "All goroutines share the same address space". What does it mean?
I've tried to understand the following topics from Wikipedia, but without success:
http://en.wikipedia.org/wiki/Virtual_memory
http://en.wikipedia.org/wiki/Memory_segmentation
http://en.wikipedia.org/wiki/Page_(computer_memory)
...
However, at the moment it's difficult for me to understand, because my knowledge of areas like memory management and concurrent programming is really limited. There are many unfamiliar terms like segments, pages, relative/absolute addresses, VAS, etc.
Could anybody explain to me the basics of the problem? Maybe there are some useful articles that I can't find.
Golang spec:
A "go" statement starts the execution of a function call as an independent concurrent thread of control, or goroutine, within the same address space.
Could anybody explain to me the basics of the problem?
"Address space" is a generic term which can apply to many contexts:
Address spaces are created by combining enough uniquely identified qualifiers to make an address unambiguous (within a particular address space)
Dave Cheney's presentation "Five things that make Go fast" illustrates the main issue addressed by having goroutines within the same process address space: stack management.
Dave qualifies the "address space", speaking first of threads:
Because a process switch can occur at any point in a process’ execution, the operating system needs to store the contents of all of these registers because it does not know which are currently in use.
This led to the development of threads, which are conceptually the same as processes, but share the same memory space.
(so this is about memory)
Then Dave illustrates the stack within a process address space (the addresses managed by a process):
Traditionally inside the address space of a process,
the heap is at the bottom of memory, just above the program (text) and grows upwards.
The stack is located at the top of the virtual address space, and grows downwards.
See also "What and where are the stack and heap?".
The issue:
Because the heap and stack overwriting each other would be catastrophic, the operating system usually arranges to place an area of unwritable memory between the stack and the heap to ensure that if they did collide, the program will abort.
With threads, that can end up restricting the heap size of a process:
as the number of threads in your program increases, the amount of available address space is reduced.
Goroutines use a different approach, while still sharing the same process address space:
what about the stack requirements of those goroutines ?
Instead of using guard pages, the Go compiler inserts a check as part of every function call to check if there is sufficient stack for the function to run. If there is not, the runtime can allocate more stack space.
Because of this check, a goroutine's initial stack can be made much smaller, which in turn permits Go programmers to treat goroutines as cheap resources.
Go 1.3 introduces a new way of managing those stacks:
Instead of adding and removing additional stack segments, if the stack of a goroutine is too small, a new, larger, stack will be allocated.
The old stack’s contents are copied to the new stack, then the goroutine continues with its new larger stack.
After the first call to H the stack will be large enough that the check for available stack space will always succeed.
When your application runs in RAM, addresses in RAM are allocated to your application by the memory manager. This is referred to as its address space.
Concept:
The processor (CPU) executes instructions in a fetch-decode-execute cycle. It executes the instructions in an application by fetching them into RAM (Random Access Memory); this is done because it is very inefficient to fetch them all the way from disk. Someone needs to keep track of memory usage, so the operating system implements a memory manager. Your application consists of a program, in your case written in the Go programming language. When you execute it, the OS executes its instructions in the fashion described above.
Reading your post, I can empathize. The terms you mentioned will become familiar to you as you program more and more.
I first encountered these terms in the operating systems book, a.k.a. the dinosaur book.
Hope this helps you.

How does a compiler lay out code in memory

Ok I have a bit of a noob student question.
So I'm familiar with the fact that stacks contain subroutine calls, heaps contain variable-length data structures, and global static variables are assigned to permanent memory locations.
But how does it all work on a less theoretical level?
Does the compiler just assume it's got an entire memory region to itself from address 0 to address infinity? And then just start assigning stuff?
And where does it lay out the instructions, stack, and heap? At the top of the memory region, or the end of the memory region?
And how does this then work with virtual memory? Is virtual memory transparent to the program?
Sorry for a bajillion questions, but I'm taking programming language structures and it keeps referring to these regions, and I want to understand them on a more practical level.
THANKS much in advance!
A comprehensive explanation is probably beyond the scope of this forum. Entire texts are devoted to the subject. However, at a simplistic level you can look at it this way.
The compiler does not lay out the code in memory. It does assume it has the entire memory region to itself. The compiler generates object files where the symbols in the object files typically begin at offset 0.
The linker is responsible for pulling the object files together, linking symbols to their new offset location within the linked object and generating the executable file format.
The linker doesn't lay out code in memory either. It packages code and data into sections, typically labeled .text for the executable code instructions and .data for things like global variables and string constants (and there are other sections as well, for different purposes). The linker may provide a hint to the operating system loader about where to relocate symbols, but the loader doesn't have to oblige.
It is the operating system loader that parses the executable file and decides where code and data are laid out in memory; the location depends entirely on the operating system. Typically the stack is located in a higher memory region than the program instructions and data, and grows downward.
Each program is compiled/linked with the assumption it has the entire address space to itself. This is where virtual memory comes in. It is completely transparent to the program and managed entirely by the operating system.
Virtual memory typically ranges from address 0 and up to the max address supported by the platform (not infinity). This virtual address space is partitioned off by the operating system into kernel addressable space and user addressable space. Say on a hypothetical 32-bit OS, the addresses above 0x80000000 are reserved for the operating system and the addresses below are for use by the program. If the program tries to access memory above this partition it will be aborted.
The operating system may decide the stack starts at the highest addressable user memory and grows down with the program code located at a much lower address.
The location of the heap is typically managed by the run-time library against which you've built your program. It could live beginning with the next available address after your program code and data.
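One way to see the layout your OS and loader actually chose is to print a few addresses. A minimal C sketch (exact values differ per system and per run, especially with address space layout randomization, but typically code < data < heap, with the stack much higher):
#include <stdio.h>
#include <stdlib.h>
int global_var = 42;                            /* lives in .data */
int main(void)
{
  int local_var = 0;                            /* lives on the stack */
  int *heap_var = malloc(sizeof *heap_var);     /* lives on the heap */
  printf("code  (main)      : %p\n", (void *)&main);
  printf("data  (global_var): %p\n", (void *)&global_var);
  printf("heap  (malloc)    : %p\n", (void *)heap_var);
  printf("stack (local_var) : %p\n", (void *)&local_var);
  free(heap_var);
  return 0;
}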
This is a wide open question with lots of topics.
Assuming the typical compiler -> assembler -> linker toolchain: the compiler doesn't know a whole lot. It simply encodes stack-relative accesses and doesn't care how big the stack is or where it lives; that is the purpose and beauty of a stack. The compiler generates assembly, the assembler assembles it into an object file, and then the linker takes a linker script of some flavor, or command-line arguments, that tell it the details of the memory space. When you run
gcc hello.c -o hello
your installation of binutils uses a default linker script tailored to your target (Windows, Mac, Linux, whatever you are running on). That script contains the information about where the program space starts, and from there the toolchain knows where to start the heap (after .text, .data and .bss). The stack pointer is set either by that linker script or by the OS in some other way, and that defines your stack.
For an operating system with an MMU, which is what your Windows, Linux, Mac, and BSD laptops or desktop computers have, then yes: each program is compiled assuming it has its own address space starting at 0x0000. That doesn't mean the program is linked to start running at 0x0000; it depends on the operating system's rules. Some start at 0x8000, for example.
For a desktop-like application, where it is roughly a single linear address space from your program's perspective, you will likely have .text first, then .data and .bss, and then the heap aligned at some point after all of that. The stack, however it is set up, is typically up high and works its way down, though that is processor- and operating-system-specific; within the program's view of the world, the stack is usually at the top of its memory.
Virtual memory is invisible to all of this; the application normally doesn't know or care about it. When the application fetches an instruction or does a data transfer, the access goes through hardware configured by the operating system, which converts between virtual and physical addresses. If the MMU indicates a fault, meaning that space has not been mapped to a physical address, that can sometimes be intentional, and then another use of the term "virtual memory" applies. Under this second meaning, the operating system can, for example, take some other chunk of memory (yours or someone else's), move it to the hard disk, mark that chunk as not present, mark your chunk as backed by that RAM, and then let you execute, never knowing you were interrupted and given RAM that had to be taken from someone else. Your application by design doesn't want to know any of this; it just wants to run. The operating system takes care of managing physical memory and the MMU that gives you a virtual (zero-based) address space.
If you were to do a little bit of bare-metal programming (without MMU stuff at first, then later with it) on a microcontroller, QEMU, a Raspberry Pi, a BeagleBone, etc., you could get your hands dirty with the compiler, the linker script, and configuring an MMU. I would use ARM or MIPS for this rather than x86, just to make your life easier; the overall big picture translates directly across targets.
It depends.
If you're compiling a bootloader, which has to start from scratch, you can assume you've got the entire memory for yourself.
On the other hand, if you're compiling an application, you can assume you've got the entire memory for yourself.
The minor difference is that in the first case, you have all physical memory to yourself. As a bootloader, there's nothing else in RAM yet. In the second case, there's an OS in memory, but it will (normally) set up virtual memory for you so that it appears you have the entire address space to yourself. Usually you still have to ask the OS for actual memory, though.
The latter does mean that the OS imposes some rules. E.g. the OS very much would like to know where the first instruction of your program is. A simple rule might be that your program always starts at address 0, so the C compiler could put int main() there. The OS typically would like to know where the stack is, but this is already a more flexible rule. As far as "the heap" is concerned, the OS really couldn't care.

What is the range of an address on stack and memory?

In computer memory, say on IA-32, what is the range of the stack in general? I know an address like 0xffff1234 is probably on the stack. Is it possible for the stack to grow to 0x0800abcd, for example? How about the heap? I know the heap address is always lower than the stack address, but what is normally its range? Also, what is the area below the heap?
The stack - the memory the program uses to actually run. It contains local variables, call/return data (for example, when you call a function, the stack stores the state and the place you were in the code before you entered the new function), and some other little things of that nature. You usually don't control the stack directly; the variables and data are created and destroyed as you move in and out of function scopes.
The heap - the "dynamic" memory of the program. Each time you create a new object or variable dynamically, it is stored on the heap. This memory is controlled by the programmer directly: you are supposed to take care of the creation AND deletion of the objects there.
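A small C sketch of the difference (make_counter is a hypothetical name, just for illustration): the local lives on the stack and vanishes when the function returns, while the malloc'd block lives on the heap until the programmer frees it.
#include <stdio.h>
#include <stdlib.h>
int *make_counter(void)
{
  int local = 0;                      /* stack: gone after this function returns */
  int *heap = malloc(sizeof *heap);   /* heap: survives the return */
  if(heap != NULL) {
    *heap = local;
  }
  return heap;                        /* returning &local instead would be a bug */
}
int main(void)
{
  int *c = make_counter();
  if(c != NULL) {
    printf("heap value: %d\n", *c);
    free(c);                          /* the programmer must release heap memory */
  }
  return 0;
}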
Thanks a lot!
Stack:
You can define the size of your stack at link time.
As far as I know, the default stack size for a Windows app is 1 MB.
You can change the stack size in your project settings, but once the app is built, the stack size is fixed.
The OS also sets up a guard page for stack overflow; any operation that tries to access the guard page triggers an exception.
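For example, the Microsoft linker takes the reserve (and optionally the commit) size as a /STACK option, and MinGW/GCC can pass a similar request through to its linker; the sizes below are illustrative:
link /STACK:8388608,65536 main.obj
gcc main.c -o main.exe -Wl,--stack,8388608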
Heap:
The default heap size, I guess, can also be changed in the project settings.
In your app you can create your own heap, or use the CRT heap or Win32 heaps, so there can be lots of heaps.
When you try to allocate memory, the heap manager uses its algorithm to allocate it. If there's not enough memory, the heap manager requests more from the virtual memory manager, until there's not enough memory left in the user address space and an out-of-memory exception is thrown.
There are several related structures and terms: HeapNode, HeapSegment, LFH, LEA, BEA.
And you can use WinDbg commands such as !heap -s and !heap -a to check the structure of the Windows heap.
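From plain C, the visible symptom of that failure path is simply an allocation that fails; a minimal sketch (the request size is deliberately absurd, and plain C reports the failure with NULL rather than an exception):
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
int main(void)
{
  /* Ask for half of the largest representable object size: the heap
   * manager cannot obtain this from the OS, so malloc returns NULL. */
  void *p = malloc(SIZE_MAX / 2);
  if(p == NULL) {
    fprintf(stderr, "allocation failed: out of memory or address space\n");
    return 1;
  }
  free(p);
  return 0;
}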

Memory collision in Stacks

So I understand what a stack overflow is, when memory collides (and the origin of this website's name), but what I do not understand is why new entries on the stack are at decreasing memory addresses. Why are they not at random memory addresses? Would that not make more sense, so that memory collision is not an issue? I am guessing there is some sort of optimization reason behind that?
** EDIT **
What I did not realize is that a stack is given a fixed amount of address space. That makes sense now, but brings me to a follow-up question: can I explicitly state how much memory I want to allocate to a stack?
"Memory collides" would better suit the term of "buffer overflow", where you write outside of the predestined space, but where it is likely to be within a different allocated memory block.
A stack overflow is not about writing outside of one's memory allocation into another memory allocation. It's just about writing outside of one's stack memory allocation. Most likely outside of the stack there's a guard memory page, that is not allocated for anything and which causes a fault on a read or write attempt.
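A minimal POSIX sketch of that guard-page idea (mmap and mprotect are real calls; the two-page layout here is a simplification, not how any particular OS lays out its thread stacks):
#include <sys/mman.h>
#include <unistd.h>
int main(void)
{
  long page = sysconf(_SC_PAGESIZE);
  /* Two pages: the lower one will act as the guard, the upper one is usable. */
  unsigned char *mem = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if(mem == MAP_FAILED) {
    return 1;
  }
  mprotect(mem, page, PROT_NONE);     /* the lower page becomes the guard */
  unsigned char *usable = mem + page;
  usable[0] = 1;                      /* fine: inside the usable page */
  /* usable[-1] = 1; */               /* would touch the guard page -> SIGSEGV */
  munmap(mem, 2 * page);
  return 0;
}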
And assigning a random address to each value pushed on the stack would make it hard to find data on the stack (and it would not be a stack anymore). When the compiler or programmer knows that subsequent elements occupy adjacent addresses, it's easy to compute those addresses just from the base pointer of the stack frame.
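A small C sketch of that point: each local has a fixed, compile-time-known offset from the frame base, so the compiler can address it with simple "base + offset" arithmetic (the offsets in the comments are purely illustrative):
#include <stdio.h>
static int sum_of_squares(int n)
{
  int i;      /* e.g. at frame pointer - 4 */
  int total;  /* e.g. at frame pointer - 8 */
  total = 0;
  for(i = 1; i <= n; i++) {
    total += i * i;
  }
  return total;
}
int main(void)
{
  printf("%d\n", sum_of_squares(4));  /* prints 30 */
  return 0;
}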
The answer to this question is probably complex, but basically stack operations are considered to be very primitive functions that the processor does as part of normal execution of code. (Saving return addresses and other stuff.)
So where do you put the memory management code? Where do you track the allocated addresses or add code to allocate new addresses? There really isn't anywhere to do this as these are basic operations performed by the processor itself.
Similar to the memory that holds the code itself, the stack is assumed to be set up before the code runs (and pointed to by the stack register). There really isn't any place to add complex memory management to stack memory. And so, yes, if not enough memory was provided, the stack will overflow.
Stack overflow is when you have used up all available stack space. The space available for the stack is, in most cases, just an arbitrary limit chosen by the system designers. It is possible to alter this, but on modern systems it's not really an issue - code that needs several megabytes of stack, unless the system is REALLY huge, is probably not correctly designed.
The stack grows towards zero by convention - it has to go in a defined direction or it would be very hard to follow what is going on, and a lower address is just as good as a higher one. It used to be that the stack and heap grew towards each other, which would allow code that uses a lot of stack and not much heap to work in the same amount of memory as code that uses a smaller amount of stack and a larger amount of heap. But these days there is typically enough address space that the heap can be placed somewhere completely separate from the stack. Instead, stack overflow is detected by having a region of "reserved", unusable memory just past the end of the stack, so the OS gets a "trap" when memory that isn't available is touched, and the application can be killed.

When does the stack really overflow?

Is infinite recursion the only case or can it happen for other reasons?
Doesn't the stack size grow as needed, the same as the heap does?
Sorry if this question has been asked before, would appreciate links to them if that is the case.
I can't speak for all platforms, but as it happens, I've just spent some time working with Windows .exe files (I mean, actually studying the binary format of them - I know in a sense all of us here work with executable files ;) ). I'm betting that most other platforms have similar capabilities, but I'm not immediately familiar with them.
Part of the file format itself includes two values relevant to the current discussion:
typedef struct _IMAGE_OPTIONAL_HEADER {
  ...
  DWORD SizeOfStackReserve;
  DWORD SizeOfStackCommit;
  ...
} IMAGE_OPTIONAL_HEADER32, *PIMAGE_OPTIONAL_HEADER32;
From MSDN:
SizeOfStackReserve
The number of bytes to reserve for the stack. Only the memory specified by the SizeOfStackCommit member is committed at load time; the rest is made available one page at a time until this reserve size is reached.
SizeOfStackCommit
The number of bytes to commit for the stack.
In other words, the linker specifies a maximum size for the program's stack. If you hit the maximum size, you overflow - no matter how you hit the maximum size. You could write a simple program to do it in one line of code just by allocating a single stack variable (say, an array) that's bigger than the maximum stack size. Or you could do it via infinite (or finite, but very deep) recursion, or just by allocating too many stack variables.
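For instance, a sketch of that one-liner case (the array size is illustrative; it just needs to exceed the stack reserve, e.g. the 1 MB default mentioned below):
#include <string.h>
int main(void)
{
  char big[8 * 1024 * 1024];     /* 8 MB local array, far above a 1 MB reserve */
  memset(big, 0, sizeof(big));   /* touching it forces the stack to actually grow */
  return 0;
}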
The Microsoft linker sets this value to 1MB by default on X86 platforms (4MB on Itanium systems). This seems small on the face of it, for a modern system. However, more modern versions of Windows interpret these values slightly differently. Instead of completely limiting the stack, it limits the physical memory the stack will use. If your stack grows beyond this, virtual memory will get involved, so you should still be good... assuming you have enough virtual memory.
Remember, it is possible to run out of memory, even on modern systems with huge amounts of RAM and plenty of virtual memory on disk. You just need to allocate really big amounts of data.
So, long story short: is it possible to overflow the stack without infinite recursion? Definitely. Is it likely? Not really, unless you're allocating really huge objects.
The stack overflows when the stack pointer is pushed out of the memory block the operating system has allocated for the stack. Some operating systems will resize the stack as it grows (IIRC Linux does this) while in others the stack size is fixed at the start of the process or thread (IIRC Windows does this).
Possible reasons for overflowing the stack (a sketch of the first one follows the list):
An unbounded number of stack frames (e.g. from unbounded recursion)
Attempting to allocate large blocks from the stack
Buffer overflows for buffers allocated on the stack
There are probably other reasons as well that I can't think of off the top of my head.
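For example, a deliberately unbounded recursion (a sketch; most compilers at -O0 will not optimize this away, though an aggressive optimizer might turn it into a loop):
/* Each call adds a stack frame; with no base case the frames keep
 * piling up until the stack limit is hit and the program crashes. */
static unsigned long depth(unsigned long n)
{
  return depth(n + 1) + 1;   /* the "+ 1" after the call defeats tail-call optimization */
}
int main(void)
{
  return (int)depth(0);
}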
This question doesn't specify which stack is "the" stack. So, here are a few answers:
Call Stack
The call stack gets overflowed whenever the number of calls on the stack overruns the amount of memory it has. The most common way is infinite recursion, but it's quite possible to have recursion that's excessive but not infinite. For example, computing the Ackermann function naively will tax any computer.
Languages
Stack-based languages
Some languages, like Postscript and Forth, and some virtual machines, like the Java virtual machine, are stack-based. In these languages, it may be possible to make expressions so complex that they overflow the stack.
Context-free languages
Context-free languages are often implemented using a stack. If the strings of code in these languages get too complex, it's possible to overflow the stack.
On a laptop or desktop machine it may be unusual to overflow the stack without infinite (or very deeply nested) recursion when running from the main thread... however, stack overflows are not uncommon for:
Threaded code in which the thread has been allotted a small, fixed-sized stack.
Signal handling code in which the signal handling context has a small, fixed-sized stack.
Code executing on embedded devices, where memory is generally scarce.
As an example, if you register a signal handler using sigaction and the signal handler does anything complex (i.e. deeply nested operations), it is very easy to get a stack overflow on a number of operating systems, since signal handlers are usually allotted a small, fixed-sized stack. Similarly, if you spawn a thread with pthread_create but specify a small stack size with pthread_attr_setstacksize, then it is very easy to get a stack overflow. On very memory-limited devices such as wireless sensors, avoiding stack overflows is an art.
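A minimal sketch of the pthread case (these are real POSIX calls; the 16 kB figure is illustrative and close to the typical PTHREAD_STACK_MIN):
#include <pthread.h>
#include <stdio.h>
/* Even modest locals eat into a deliberately tiny thread stack;
 * recursing in this thread overflows far sooner than on the main thread. */
static void *worker(void *arg)
{
  char buf[1024];
  buf[0] = 0;
  (void)arg;
  return NULL;
}
int main(void)
{
  pthread_t t;
  pthread_attr_t attr;
  pthread_attr_init(&attr);
  pthread_attr_setstacksize(&attr, 16 * 1024);  /* 16 kB stack for the new thread */
  if(pthread_create(&t, &attr, worker, NULL) != 0) {
    fprintf(stderr, "pthread_create failed\n");
    return 1;
  }
  pthread_join(t, NULL);
  pthread_attr_destroy(&attr);
  return 0;
}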
My day job involves a lot of work with LotusScript in Lotus Notes, which has fixed stack limits for various scopes. E.g. most variables in a procedure/function must fit in a 32kB stack, except that the content of class variables is stored on the heap.
If fixed-size variables exceed the stack size, code won't compile.
Run-time stack overflows can occur with recursion. This is easy to achieve in LotusScript as it limits recursion of any single function to a 32kB stack. I gave up on using a recursive QuickSort years ago because of this.
If your program exceeds its allotted stack space without any infinite recursion going on, then you're doing something wrong.
Though it can happen if you leave off some asterisks and try to pass some huge buffers by value.
The memory allocated for the stack does generally grow as needed within reasonable boundaries - I'm not sure what the upper limit is on various systems.
