How much SRAM will I use on my ARM board?

I am developing for the Arduino Due, which has 96k of SRAM and 512k of flash memory for code. If I have a program that will compile to, say, 50k, how much SRAM will I use when I run the code? Will I use 50k immediately, or only the memory used by the functions I call? Is there a way to measure this memory usage before I upload the sketch to the Arduino?

You can run
arm-none-eabi-size bin.elf
Where:
bin.elf is the generated binary (look it up in the compile log)
arm-none-eabi-size is a tool included with Arduino for ARM that reports the memory distribution of your binary. It can be found inside the Arduino directory; on my Mac, it is in /Applications/Arduino.app/Contents/Resources/Java/hardware/tools/g++_arm_none_eabi/bin
This command will output:
    text    data     bss     dec     hex  filename
    9648       0    1188   10836    2a54  /var/folders/jz/ylfb9j0s76xb57xrkb605djm0000gn/T/build2004175178561973401.tmp/sketch_oct24a.cpp.elf
data + bss is RAM, text is program memory.
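For the sample output above, that works out to: RAM = data + bss = 0 + 1188 = 1188 bytes, and flash = text = 9648 bytes.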
Very important: this doesn't account for dynamic memory (allocated at runtime on the heap or the stack); it only covers the RAM used by static and global variables. There are other techniques to check RAM usage at runtime, like this one, but they depend on the linker capabilities of the compiler suite you are using.
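For illustration, here is a minimal sketch of one such runtime check on the Due: it estimates free RAM as the gap between the current stack position and the heap break. It assumes a newlib-style runtime where sbrk(0) returns the current end of the heap, so treat the number as an approximation:
    extern "C" char *sbrk(int incr);   // newlib's heap-extension hook (assumed available)

    // Approximate free RAM: distance between the stack and the heap break.
    int freeRam() {
      char top;                        // a local, so it sits at the current top of the stack
      return &top - (char *)sbrk(0);   // stack address minus current heap end
    }

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      Serial.print("Free RAM: ");
      Serial.println(freeRam());
      delay(1000);
    }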

Your whole program is loaded onto the Arduino, so at least 50k of flash memory will be used. Then, while the code runs, it allocates variables, some on the stack and some global, which take up memory too, but in SRAM.
I am not sure there is a way to measure the required memory exactly, but you can get a rough estimate from the number and types of variables allocated in the code. Remember that global variables take up space during the entire time the code is running on the Arduino, while local variables (the ones declared within a pair of {..}) remain in memory only until the closing '}' brace, i.e. the end of the variables' scope. Also remember that the compiled 50k you mention is just the code portion; it does not include your variables, not even the global ones. The code is stored in flash memory and the variables are stored in SRAM. The variables start taking up memory only at runtime.
Also, I am curious: how are you calculating that your code uses 50k of memory?

Here is a little library to output the available RAM memory.
I used it a lot when my program was crashing with no bug in the code. It turned out that I was running out of RAM.
So it's very handy!
Available Memory Library
Hope it helps! :)

Related

Is program compiled to main memory or program memory when compiling?

Suppose there is some C++ code. It is compiled to binary code during compilation. My question is: where does my computer store the binary code, in main memory (DRAM) or in program memory (inside the CPU)?
And I also want to know: can the content of program memory be changed by the computer user?
If you have a memory designed to hold X, that's where you need to put X.
If the CPU of your reference architecture fetches instructions from a dedicated program memory, that's where the instructions must be stored since the CPU will look for them only there.
It's worth saying that modern processors are von Neumann: they have a unified program and data memory (internally they do not, e.g. the caches are split), while microcontrollers are often Harvard.
I'll gloss over the advantages of each one and just say that each combination of attributes exists: you can have a CPU where the program memory is read-only and programmed in the factory, where it is programmable through an external interface, where it cannot be read by the program itself, where it can, and where it can be written by the program itself.

If v8 uses the "code" or "text" memory type, or if everything is in the heap/stack

In a typical memory layout there are 4 items:
code/text (where the compiled code of the program itself resides)
data
stack
heap
I am new to memory layouts, so I am wondering whether v8, which is a JIT compiler and dynamically generates code, stores this code in the "code" segment of memory, or just stores it in the heap along with everything else. I'm not sure whether the operating system gives you access to the code/text segment, so I'm not sure if this is a dumb question.
The below is true for the major operating systems running on the major CPUs in common use today. Things will differ on old or some embedded operating systems (in particular things are a lot simpler on operating systems without virtual memory) or when running code without an OS or on CPUs with no support for memory protection.
The picture in your question is a bit of a simplification. One thing it does not show is that (virtual) memory is made up of pages provided to you by the operating system. Each page has its own permissions controlling whether your process can read, write and/or execute the data in that page.
The text section of a binary will be loaded onto pages that are executable, but not writable. The read-only data section will be loaded onto pages that are neither writable nor executable. All other memory in your picture ((un)initialized data, heap, stack) will be stored on pages that are writable, but not executable.
These permissions prevent security flaws (such as buffer overruns) that could otherwise allow attackers to execute arbitrary code by making the program jump into code provided by the attacker or letting the attacker overwrite code in the text section.
Now the problem with these permissions, with regards to JIT compilation, is that you can't execute your JIT-compiled code: if you store it on the stack or the heap (or within a global variable), it won't be on an executable page, so the program will crash when you try to jump into the code. If you try to store it in the text area (by making use of left-over memory on the last page or by overwriting parts of the JIT-compilers code), the program will crash because you're trying to write to read-only memory.
But thankfully operating systems allow you to change the permissions of a page (on POSIX-systems this can be done using mprotect and on Windows using VirtualProtect). So your first idea might be to store the generated code on the heap and then simply make the containing pages executable. However this can be somewhat problematic: VirtualProtect and some implementations of mprotect require a pointer to the beginning of a page, but your array does not necessarily start at the beginning of a page if you allocated it using malloc (or new or your language's equivalent). Further your array may share a page with other data, which you don't want to be executable.
To prevent these issues, you can use functions, such as mmap on Unix-like operating systems and VirtualAlloc on Windows, that give you pages of memory "to yourself". These functions will allocate enough pages to contain as much memory as you requested and return a pointer to the beginning of that memory (which will be at the beginning of the first page). These pages will not be available to malloc. That is, even if your array is significantly smaller than the size of a page on your OS, the page will only be used to store your array; a subsequent call to malloc will not return a pointer to memory in that page.
So the way that most JIT-compilers work is that they allocate read-write memory using mmap or VirtualAlloc, copy the generated machine instructions into that memory, use mprotect or VirtualProtect to make the memory executable and non-writable (for security reasons you never want memory to be executable and writable at the same time if you can avoid it) and then jump into it. In terms of its (virtual) address, the memory will be part of the heap's area of the memory, but it will be separate from the heap in the sense that it won't be managed by malloc and free.
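To make that sequence concrete, here is a minimal sketch of the allocate/copy/protect/jump dance, assuming x86-64 Linux (the machine-code bytes and the POSIX calls are specific to that setup):
    #include <cstdio>
    #include <cstring>
    #include <sys/mman.h>

    int main() {
        // x86-64 machine code for: mov eax, 42; ret
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

        // 1. Get fresh read-write pages, separate from the malloc heap.
        void *mem = mmap(nullptr, sizeof(code), PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;

        // 2. Copy the generated instructions into the writable page.
        std::memcpy(mem, code, sizeof(code));

        // 3. Flip the page to read+execute; it is never writable and executable at once.
        if (mprotect(mem, sizeof(code), PROT_READ | PROT_EXEC) != 0) return 1;

        // 4. Jump into the generated code.
        int result = reinterpret_cast<int (*)()>(mem)();
        std::printf("JIT-compiled function returned %d\n", result);

        munmap(mem, sizeof(code));
        return 0;
    }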
Heap and stack are the memory regions where programs can allocate at runtime. This is not specific to V8, or JIT compilers. For more detail, I humbly suggest that you read whatever book that illustration came from ;-)

Memory Detection in ARM

I am new to ARM and am looking for ways to detect the memory map of an ARM-based platform. Earlier I worked a little on x86 and could find the memory map using some BIOS calls.
Can we do the same on ARM, even though there is no BIOS on ARM?
Is there any instruction in ARM to find the memory map?
How do I find the memory map for an ARM CPU guide:
Read the documentation from arm.com for your corresponding core
Read the documentation of your CPU
Read the documentation of your platform, to see if it has external memory connected to the SoC (CPU)
Or as a shortcut:
If your platform vendor provides a toolchain to compile code for it, make a dummy project and look for the memory layout in your linker file (see the sketch after this list)...
Gather this information:
Memory map for the corresponding core
Memory map of your CPU
If it has externally accessible memory, you have to perform some steps to initialize the controller.
Use the gathered data to build the linker file for your project
Do whatever you want with it
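As a point of reference, the layout usually appears as a MEMORY block in a GNU ld linker script. A sketch might look like this, where the region names, origins and lengths are placeholders to be replaced with the real values from your chip's memory map:
    MEMORY
    {
      /* placeholder values: take the real ones from your SoC's datasheet */
      FLASH (rx)  : ORIGIN = 0x00080000, LENGTH = 512K
      SRAM  (rwx) : ORIGIN = 0x20000000, LENGTH = 96K
    }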
There is no interface as ubiquitous as BIOS or EFI for ARM systems, though Microsoft does specify UEFI for systems that run Windows.
The Linux boot interface is the most common interface, see Documentation/arm/Booting in the kernel source and the header files.
If you want to write a program that is to be portable across different ARM devices, you have to detect the memory by yourself. I am no expert in ARM specifically, but there are common principles: you simply scan the whole address space and probe the memory by writing a number and then reading it back. Usually two such passes are made with different numbers, in order to rule out accidental matches:
1. write 0xAA
2. read and check for 0xAA
3. write 0x55
4. read and check for 0x55
Note 1: for better speed, not every byte has to be checked; some natural granularity can be used, checking only at the start of each page (whatever size pages are on this platform).
At the end you will have a map of the RAM. The ROM is not so easy to detect, though, and there is no common solution.
Note 2: depending on the architecture (well, I said I am not an ARM expert), your program must have access to the whole memory, subject to the memory protection mechanisms of the CPU (if any).
Note 3: the only real problem with this approach is memory-mapped I/O. Touching it can affect the I/O devices in unpredictable ways. That is why you must know which area of the address space is used for memory-mapped I/O and not test it at all.
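A bare-bones sketch of that probe, with the notes above baked in as assumptions (the 4 KiB granularity and the address range are placeholders, and the caller must keep memory-mapped I/O regions out of the scanned range):
    #include <cstddef>
    #include <cstdint>

    // Returns true if a writable RAM cell exists at `addr`.
    // Never point this at memory-mapped I/O.
    bool probe_ram(volatile std::uint8_t *addr) {
        std::uint8_t saved = *addr;    // preserve whatever was there
        *addr = 0xAA;
        bool ok = (*addr == 0xAA);
        *addr = 0x55;
        ok = ok && (*addr == 0x55);
        *addr = saved;                 // restore the original contents
        return ok;
    }

    // Scan a range at page granularity (4 KiB assumed; adjust per platform).
    std::size_t count_ram_pages(std::uintptr_t begin, std::uintptr_t end) {
        const std::uintptr_t page = 4096;
        std::size_t pages = 0;
        for (std::uintptr_t a = begin; a < end; a += page)
            if (probe_ram(reinterpret_cast<volatile std::uint8_t *>(a)))
                ++pages;
        return pages;
    }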

How does compiler lay out code in memory

Ok I have a bit of a noob student question.
So I'm familiar with the fact that stacks contain subroutine calls, heaps contain variable-length data structures, and global static variables are assigned to permanent memory locations.
But how does it all work on a less theoretical level?
Does the compiler just assume it's got an entire memory region to itself from address 0 to address infinity? And then just start assigning stuff?
And where does it layout the instructions, stack, and heap? At the top of the memory region, end of memory region?
And how does this then work with virtual memory? Is virtual memory transparent to the program?
Sorry for a bajillion questions, but I'm taking programming language structures and it keeps referring to these regions, and I want to understand them on a more practical level.
THANKS much in advance!
A comprehensive explanation is probably beyond the scope of this forum. Entire texts are devoted to the subject. However, at a simplistic level you can look at it this way.
The compiler does not lay out the code in memory. It does assume it has the entire memory region to itself. The compiler generates object files where the symbols in the object files typically begin at offset 0.
The linker is responsible for pulling the object files together, linking symbols to their new offset location within the linked object and generating the executable file format.
The linker doesn't lay out code in memory either. It packages code and data into sections, typically labeled .text for the executable code instructions and .data for things like global variables and string constants (and there are other sections as well, for different purposes). The linker may provide a hint to the operating system loader about where to relocate symbols, but the loader doesn't have to oblige.
It is the operating system loader that parses the executable file and decides where code and data are laid out in memory. The exact location depends entirely on the operating system. Typically the stack is located in a higher memory region than the program instructions and data, and grows downward.
Each program is compiled/linked with the assumption it has the entire address space to itself. This is where virtual memory comes in. It is completely transparent to the program and managed entirely by the operating system.
Virtual memory typically ranges from address 0 and up to the max address supported by the platform (not infinity). This virtual address space is partitioned off by the operating system into kernel addressable space and user addressable space. Say on a hypothetical 32-bit OS, the addresses above 0x80000000 are reserved for the operating system and the addresses below are for use by the program. If the program tries to access memory above this partition it will be aborted.
The operating system may decide the stack starts at the highest addressable user memory and grows down with the program code located at a much lower address.
The location of the heap is typically managed by the run-time library against which you've built your program. It could live beginning with the next available address after your program code and data.
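One way to see this layout for yourself is to print the addresses of objects that live in each region. A small sketch follows; the exact addresses vary by OS, toolchain and ASLR, so treat the output as illustrative only:
    #include <cstdio>
    #include <cstdlib>

    int global_initialized = 42;   // lands in .data
    int global_uninitialized;      // lands in .bss

    void code_marker() {}          // lives in .text alongside the other code

    int main() {
        int local = 0;                                // on the stack
        int *heap = (int *)std::malloc(sizeof(int));  // on the heap

        std::printf(".text: %p\n", (void *)&code_marker);
        std::printf(".data: %p\n", (void *)&global_initialized);
        std::printf(".bss:  %p\n", (void *)&global_uninitialized);
        std::printf("heap:  %p\n", (void *)heap);
        std::printf("stack: %p\n", (void *)&local);

        std::free(heap);
        return 0;
    }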
This is a wide open question with lots of topics.
Assuming the typical compiler -> assembler -> linker toolchain: the compiler doesn't know a whole lot. It simply encodes stack-relative accesses and doesn't care how much stack there is or where it lives; that is the purpose/beauty of a stack. The compiler generates assembly, the assembler assembles it into an object file, then the linker takes a linker script of some flavor, or command line arguments, that tell it the details of the memory space. When you run
gcc hello.c -o hello
your installation of binutils has a default linker script which is tailored to your target (Windows, Mac, Linux, whatever you are running on). That script contains the info about where the program space starts, and from there the linker knows where to start the heap (after the text, data and bss). The stack pointer is likely set either by that linker script and/or the OS manages it some other way. And that defines your stack.
For an operating system with an MMU, which is what your Windows, Linux, Mac and BSD laptops or desktop computers have, then yes, each program is compiled assuming it has its own address space starting at 0x0000. That doesn't mean the program is linked to start running at 0x0000; it depends on the operating system what that operating system's rules are. Some start at 0x8000, for example.
For a desktop-like application, where the program sees a more or less single linear address space, you will likely have .text first, then either .data or .bss, and then after all of that the heap will be aligned at some point. The stack, however it is set, is typically up high and works down, but that can be processor and operating system specific. From the program's view of the world, that stack is typically the top of its memory.
Virtual memory is invisible to all of this; the application normally doesn't know or care about virtual memory. If and when the application fetches an instruction or does a data transfer, it goes through hardware which is configured by the operating system, and that hardware converts between virtual and physical addresses. If the MMU indicates a fault, meaning that space has not been mapped to a physical address, that can sometimes be intentional, and then another use of the term "virtual memory" applies. Under this second definition, the operating system can, for example, take some other chunk of memory (yours or someone else's), move it to hard disk, mark that other chunk as not present, then mark your chunk as backed by some RAM and let you execute, never knowing you were interrupted and given RAM that was taken from someone else. Your application by design doesn't want to know any of this; it just wants to run. The operating system takes care of managing physical memory and the MMU that gives you a virtual (zero based) address space...
If you were to do a little bit of bare metal programming, without MMU stuff at first and later with it, on a microcontroller, qemu, Raspberry Pi, BeagleBone, etc., you can get your hands dirty both with the compiler and linker script and with configuring an MMU. I would use an ARM or MIPS for this, not x86, just to make your life easier; the overall big picture translates directly across targets.
It depends.
If you're compiling a bootloader, which has to start from scratch, you can assume you've got the entire memory for yourself.
On the other hand, if you're compiling an application, you can assume you've got the entire memory for yourself.
The minor difference is that in the first case, you have all physical memory to yourself. As a bootloader, there's nothing else in RAM yet. In the second case, there's an OS in memory, but it will (normally) set up virtual memory for you so that it appears you have the entire address space to yourself. Usually you still have to ask the OS for actual memory, though.
The latter does mean that the OS imposes some rules. E.g. the OS very much would like to know where the first instruction of your program is. A simple rule might be that your program always starts at address 0, so the C compiler could put int main() there. The OS typically would like to know where the stack is, but this is already a more flexible rule. As far as "the heap" is concerned, the OS really couldn't care less.

What is meaning of small footprint in terms of programming?

I heard that many libraries, such as JXTA and PjSIP, have smaller footprints. Does this refer to small resource consumption, or to something else?
Footprint designates the size occupied by your application in the computer's RAM.
Footprint can have different meanings when speaking about memory consumption.
In my experience, memory footprint often doesn't include memory allocated on the heap (dynamic memory), or resources loaded from disc, etc. This is because dynamic allocations are not constant and may vary depending on how the application or module is used. When reporting a "low footprint" or "high footprint", a constant or peak measure of the required space is usually wanted.
If dynamic memory were included in the footprint report of an image editor, for example, the footprint would depend entirely on the size of the image loaded into the application by the user.
In the context of a third-party library, the library author can optimize the static memory footprint of the library by ensuring that you never link more code into your application binary than absolutely needed. A common method for doing this in, for instance, C is to distribute library functions across separate .c files. This is because most C linkers will link all code from a .c file into your application, not just the function you call. So if you put a single function in the .c file, that's all the linker will incorporate into your application when calling it. If you put five functions in the .c file, the linker will probably link all of them into your app even if you only use one of them.
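A tiny sketch of that layout; the file names are made up, and since static-library linkers work at object-file granularity in C++ as well, the build commands below use g++ and ar:
    // util_a.cpp: one function per translation unit
    int util_a() { return 1; }

    // util_b.cpp
    int util_b() { return 2; }

    // main.cpp
    int util_a();
    int main() { return util_a(); }   // util_b is never referenced

    // Build as a static library; the linker extracts only the object files
    // it actually needs, so util_b.o never ends up in the final binary:
    //   g++ -c util_a.cpp util_b.cpp
    //   ar rcs libutil.a util_a.o util_b.o
    //   g++ main.cpp -L. -lutil -o app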
All this being said, the general (academic) definition of footprint includes all kinds of memory/storage aspects.
From Wikipedia Memory footprint article:
Memory footprint refers to the amount of main memory that a program uses or references while running.
This includes all sorts of active memory regions like code segment containing (mostly) program instructions (and occasionally constants), data segment (both initialized and uninitialized), heap memory, call stack, plus memory required to hold any additional data structures, such as symbol tables, debugging data structures, open files, shared libraries mapped to the current process, etc., that the program ever needs while executing and will be loaded at least once during the entire run.
Generally it's the amount of memory a program takes up when running: the 'footprint' it leaves in memory. However, it can also refer to how much space it takes up on your hard drive, although these days that's less of an issue.
If you're writing an app and have memory limitations, consider running a profiler to keep track of how much your program is using.
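If a full profiler is overkill, a quick way to check a process's peak footprint on POSIX systems is getrusage; a minimal sketch (note that ru_maxrss is reported in kilobytes on Linux but in bytes on macOS):
    #include <cstdio>
    #include <sys/resource.h>

    int main() {
        struct rusage usage;
        if (getrusage(RUSAGE_SELF, &usage) != 0) return 1;
        // Peak resident set size of this process so far.
        std::printf("peak RSS: %ld\n", usage.ru_maxrss);
        return 0;
    }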
It does refer to resources, particularly memory: the program requires a smaller amount of memory when running.
Yes, resources such as memory or disk.
Footprint in computing, i.e. for computer programs or machines, refers to the device memory occupied by a program, process, code, etc.
