More info on memory layout of an executable program (process)

I attended an interview with Samsung, and they asked a lot of questions about the memory layout of a program. I barely know anything about this.
I googled "Memory layout of an executable program" and "Memory layout of a process".
I'm surprised to see that there isn't much information on these topics; most of the results are forum threads. I just wonder why?
These are the few links I found:
Run-Time Storage Organization
Run-Time Memory Organization
Memory layout of C process (PDF)
I want to learn this from a proper book instead of some web links. (Randy Hyde's is also a book, but I'd prefer some other book.) In which book can I find clear and more complete information on this subject?
I also wonder why operating systems books don't cover this. I read Stallings' 6th edition; it just discusses the Process Control Block.
This entire creation of the layout is the linker's task, right? Where can I read more about this process? I want COMPLETE information, from a program on disk to its execution on the processor.
EDIT:
Initially, I was not clear even after reading the answers given below. Recently I came across these articles; after reading them, I understood things clearly.
Resources that helped me in understanding:
www.tenouk.com/Bufferoverflowc/Bufferoverflow1b.html
5 part PE file format tutorial: http://win32assembly.online.fr/tutorials.html
Excellent article : http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html
PE Explorer: http://www.heaventools.com/
Yes, "layout of an executable program(PE/ELF)" != "Memory layout of process"). Findout for yourself in the 3rd link. :)
After clearing my concepts, my questions are making me look so stupid. :)

How things are loaded depends very strongly on the OS and on the binary format used, and the details can get nasty. There are standards for how binary files are laid out, but it's really up to the OS how a process's memory is laid out. This is probably why the documentation is hard to find.
To answer your questions:
Books:
If you're interested in how processes lay out their memory, look at Understanding the Linux Kernel. Chapter 3 talks about process descriptors, creating processes, and destroying processes.
The only book I know of that covers linking and loading in any detail is Linkers and Loaders by John Levine. There's an online and a print version, so check that out.
Executable code is created by the compiler and the linker, but it's the linker that puts things in the binary format the OS needs. On Linux, this format is typically ELF, on Windows and older Unixes it's COFF, and on Mac OS X it's Mach-O. This isn't a fixed list, though. Some OS's can and do support multiple binary formats. Linkers need to know the output format to create executable files.
The process's memory layout is pretty similar to the binary format, because a lot of binary formats are designed to be mmap'd so that the loader's task is easier.
It's not quite that simple, though. Some parts of the binary image (like zero-initialized static data, the .bss section) are not stored directly in the binary file. Instead, the binary just records the size of these sections. When the process is loaded into memory, the loader knows to allocate the right amount of memory, so the binary file doesn't need to contain large empty sections.
Also, the process's memory layout includes some space for the stack and the heap, where a process's call frames and dynamically allocated memory go. These generally live at opposite ends of a large address space.
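As a quick illustration, a minimal C program can print one address from each region; the exact values and their ordering vary by OS, architecture and ASLR:

#include <stdio.h>
#include <stdlib.h>

int initialized = 42;   /* .data: initialized global */
int uninitialized;      /* .bss: zero-initialized global */

int main(void)
{
    int local = 0;                          /* stack */
    int *dynamic = malloc(sizeof *dynamic); /* heap */

    /* casting a function pointer to void* is a common POSIX-ism */
    printf("text  (main):          %p\n", (void *)main);
    printf(".data (initialized):   %p\n", (void *)&initialized);
    printf(".bss  (uninitialized): %p\n", (void *)&uninitialized);
    printf("heap  (malloc):        %p\n", (void *)dynamic);
    printf("stack (local):         %p\n", (void *)&local);

    free(dynamic);
    return 0;
}

On a typical Linux/x86-64 build, the text address prints lowest and the stack highest, with the heap in between, matching the layout described above.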
This really just scratches the surface of how binaries get loaded, and it doesn't cover anything about dynamic libraries. For a really detailed treatment of how dynamic linking and loading work, read How to Write Shared Libraries.

Here is one way a program can be executed from a file (*nix).
The process is created (e.g. fork()). This gives the new process its own memory map. This includes a stack in some area of memory (usually high up in memory somewhere).
The new process calls exec() to replace the current executable (often a shell) with the new executable. Often, the new executable's .text (executable code and constants) and .data (r/w initialized variables) are set up for demand paging, that is, they are mapped into the process memory space as needed. Often the .text section comes first, followed by .data. The .bss section (uninitialized variables) is often allocated after the .data section; many times it is mapped to return a page of zeros when the page containing a bss variable is first accessed. The heap often starts at the next page boundary after the .bss section. The heap then grows up in memory while the stack grows down (remember I said usually, there are exceptions!).
If the heap and stack collide, that often causes an out of memory situation, which is why the stack is often placed in high memory.
In a system without a memory management unit, demand paging is usually unavailable but the same memory layout is often used.
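To make the fork()/exec() sequence above concrete, here is a minimal sketch (error handling trimmed; /bin/ls is just an arbitrary example of a new executable):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* child gets its own memory map */
    if (pid == 0) {
        /* replace the child's image: the loader maps the new
           executable's .text/.data and sets up .bss, heap and stack */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");         /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);       /* parent waits for the child */
    return 0;
}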

The Art of Assembly Language Programming: http://homepage.mac.com/randyhyde/webster.cs.ucr.edu/www.artofasm.com/Windows/PDFs/MemoryAccessandOrg.pdf

Related

Does V8 use the "code" or "text" memory type, or is everything in the heap/stack?

In a typical memory layout there are 4 items:
code/text (where the compiled code of the program itself resides)
data
stack
heap
I am new to memory layouts, so I am wondering whether V8, which is a JIT compiler and dynamically generates code, stores this code in the "code" segment of memory, or just stores it in the heap along with everything else. I'm not sure if the operating system gives you access to the code/text segment, so I'm not sure if this is a dumb question.
The below is true for the major operating systems running on the major CPUs in common use today. Things will differ on old or some embedded operating systems (in particular things are a lot simpler on operating systems without virtual memory) or when running code without an OS or on CPUs with no support for memory protection.
The picture in your question is a bit of a simplification. One thing it does not show is that (virtual) memory is made up of pages provided to you by the operating system. Each page has its own permissions controlling whether your process can read, write and/or execute the data in that page.
The text section of a binary will be loaded onto pages that are executable, but not writable. The read-only data section will be loaded onto pages that are neither writable nor executable. All other memory in your picture ((un)initialized data, heap, stack) will be stored on pages that are writable, but not executable.
These permissions prevent security flaws (such as buffer overruns) that could otherwise allow attackers to execute arbitrary code by making the program jump into code provided by the attacker or letting the attacker overwrite code in the text section.
Now the problem with these permissions, with regards to JIT compilation, is that you can't execute your JIT-compiled code: if you store it on the stack or the heap (or within a global variable), it won't be on an executable page, so the program will crash when you try to jump into the code. If you try to store it in the text area (by making use of left-over memory on the last page or by overwriting parts of the JIT-compilers code), the program will crash because you're trying to write to read-only memory.
But thankfully, operating systems allow you to change the permissions of a page (on POSIX systems this can be done using mprotect, and on Windows using VirtualProtect). So your first idea might be to store the generated code on the heap and then simply make the containing pages executable. However, this can be somewhat problematic: VirtualProtect and some implementations of mprotect require a pointer to the beginning of a page, but your array does not necessarily start at the beginning of a page if you allocated it using malloc (or new, or your language's equivalent). Furthermore, your array may share a page with other data, which you don't want to be executable.
To prevent these issues, you can use functions, such as mmap on Unix-like operating systems and VirtualAlloc on Windows, that give you pages of memory "to yourself". These functions will allocate enough pages to contain as much memory as you requested and return a pointer to the beginning of that memory (which will be at the beginning of the first page). These pages will not be available to malloc. That is, even if your array is significantly smaller than the size of a page on your OS, the page will only be used to store your array: a subsequent call to malloc will not return a pointer to memory in that page.
So the way that most JIT-compilers work is that they allocate read-write memory using mmap or VirtualAlloc, copy the generated machine instructions into that memory, use mprotect or VirtualProtect to make the memory executable and non-writable (for security reasons you never want memory to be executable and writable at the same time if you can avoid it) and then jump into it. In terms of its (virtual) address, the memory will be part of the heap's area of the memory, but it will be separate from the heap in the sense that it won't be managed by malloc and free.
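Here is a minimal POSIX sketch of that allocate/copy/protect/jump sequence. The machine-code bytes are illustrative and x86-64 specific (a real JIT emits them from its own IR), and MAP_ANONYMOUS is assumed to be available, as it is on Linux and the BSDs:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 for: mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* 1. allocate fresh read-write pages, not managed by malloc */
    void *mem = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* 2. copy the generated instructions in */
    memcpy(mem, code, sizeof code);

    /* 3. flip the pages to read+execute, never writable+executable */
    if (mprotect(mem, sizeof code, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect"); return 1;
    }

    /* 4. jump into the code (void*-to-function cast is a POSIX-ism) */
    int (*fn)(void) = (int (*)(void))mem;
    printf("JIT-compiled function returned %d\n", fn());

    munmap(mem, sizeof code);
    return 0;
}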
Heap and stack are the memory regions where programs can allocate at runtime. This is not specific to V8, or JIT compilers. For more detail, I humbly suggest that you read whatever book that illustration came from ;-)

How does the compiler lay out code in memory?

Ok I have a bit of a noob student question.
So I'm familiar with the fact that stacks contain subroutine calls, heaps contain variable-length data structures, and global static variables are assigned to permanent memory locations.
But how does it all work on a less theoretical level?
Does the compiler just assume it's got an entire memory region to itself from address 0 to address infinity? And then just start assigning stuff?
And where does it layout the instructions, stack, and heap? At the top of the memory region, end of memory region?
And how does this then work with virtual memory? The virtual memory is transparent to the program?
Sorry for a bajillion questions, but I'm taking programming language structures and it keeps referring to these regions, and I want to understand them on a more practical level.
THANKS much in advance!
A comprehensive explanation is probably beyond the scope of this forum. Entire texts are devoted to the subject. However, at a simplistic level you can look at it this way.
The compiler does not lay out the code in memory. It does assume it has the entire memory region to itself. The compiler generates object files where the symbols in the object files typically begin at offset 0.
The linker is responsible for pulling the object files together, linking symbols to their new offset location within the linked object and generating the executable file format.
The linker doesn't lay out code in memory either. It packages code and data into sections, typically labeled .text for the executable code instructions and .data for things like global variables and string constants (and there are other sections as well, for different purposes). The linker may provide a hint to the operating system loader about where to relocate symbols, but the loader doesn't have to oblige.
It is the operating system loader that parses the executable file and decides where code and data are laid out in memory. Where they go depends entirely on the operating system. Typically the stack is located in a higher memory region than the program instructions and data, and grows downward.
Each program is compiled/linked with the assumption it has the entire address space to itself. This is where virtual memory comes in. It is completely transparent to the program and managed entirely by the operating system.
Virtual memory typically ranges from address 0 and up to the max address supported by the platform (not infinity). This virtual address space is partitioned off by the operating system into kernel addressable space and user addressable space. Say on a hypothetical 32-bit OS, the addresses above 0x80000000 are reserved for the operating system and the addresses below are for use by the program. If the program tries to access memory above this partition it will be aborted.
The operating system may decide the stack starts at the highest addressable user memory and grows down with the program code located at a much lower address.
The location of the heap is typically managed by the run-time library against which you've built your program. It could live beginning with the next available address after your program code and data.
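You can watch this division into sections yourself by compiling a tiny file and inspecting it with the binutils tools (a sketch; the section names shown are the usual ELF ones):

/* sections.c - compile with: gcc -c sections.c
   then inspect with:         size sections.o  (section sizes)
                              nm -S sections.o (symbols and sizes) */

int global_init = 7;        /* .data: initialized global */
int global_uninit;          /* .bss: only its size is recorded */
const char *msg = "hello";  /* the literal "hello" lands in .rodata */

int add(int a, int b)       /* .text: executable instructions */
{
    return a + b;
}

In the object file the symbols begin at offset 0 within their sections, exactly as described above; the linker relocates them when producing the final executable.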
This is a wide open question with lots of topics.
Assuming the typical compiler -> assembler -> linker toolchain: the compiler doesn't know a whole lot. It simply encodes stack-relative stuff and doesn't care how much stack there is or where it lives; that is the purpose/beauty of a stack: don't care. The compiler generates assembly, the assembler assembles it into an object file, and then the linker takes a linker script of some flavor, or command-line arguments, that tell it the details of the memory space. When you run
gcc hello.c -o hello
your installation of binutils uses a default linker script tailored to your target (Windows, Mac, Linux, whatever you are running on). That script contains the information about where the program space starts, and from there the toolchain knows where to start the heap (after .text, .data and .bss). The stack pointer is likely set either by that linker script or by the OS in some other way, and that defines your stack.
For an operating system with an MMU, which is what your Windows, Linux, Mac and BSD laptops or desktop computers have, then yes, each program is compiled assuming it has its own address space starting at 0x0000. That doesn't mean the program is linked to start running at 0x0000; it depends on the operating system what that operating system's rules are. Some start at 0x8000, for example.
For a desktop-like application, where from your program's perspective the address space is a somewhat single linear one, you will likely have .text first, then either .data or .bss, and then after all of that the heap will be aligned at some point. The stack, however it is set, is typically up high and works down, but that can be processor- and operating-system-specific; within the program's view of the world, the stack is typically at the top of its memory.
Virtual memory is invisible to all of this; the application normally doesn't know or care about virtual memory. If and when the application fetches an instruction or does a data transfer, it goes through hardware which is configured by the operating system and which converts between virtual and physical addresses. If the MMU indicates a fault, meaning that space has not been mapped to a physical address, that can sometimes be intentional, and then another use of the term "virtual memory" applies: the operating system can, for example, take some other chunk of memory, yours or someone else's, move it to the hard disk, mark that other chunk as not being there, and then back your chunk with that RAM, letting you execute without knowing you were interrupted or that some RAM had to be taken from someone else. Your application by design doesn't want to know any of this; it just wants to run. The operating system takes care of managing physical memory and the MMU, which gives you a virtual (zero-based) address space.
If you were to do a little bit of bare-metal programming, without MMU stuff at first and then later with it (microcontroller, QEMU, Raspberry Pi, BeagleBone, etc.), you can get your hands dirty with the compiler, the linker script, and configuring an MMU. I would use an ARM or MIPS for this, not x86, just to make your life easier; the overall big picture translates directly across targets.
It depends.
If you're compiling a bootloader, which has to start from scratch, you can assume you've got the entire memory for yourself.
On the other hand, if you're compiling an application, you can assume you've got the entire memory for yourself.
The minor difference is that in the first case, you have all physical memory to yourself. As a bootloader, there's nothing else in RAM yet. In the second case, there's an OS in memory, but it will (normally) set up virtual memory for you so that it appears you have the entire address space to yourself. Usually you still have to ask the OS for actual memory, though.
The latter does mean that the OS imposes some rules. E.g. the OS very much would like to know where the first instruction of your program is. A simple rule might be that your program always starts at address 0, so the C compiler could put int main() there. The OS typically would like to know where the stack is, but this is already a more flexible rule. As far as "the heap" is concerned, the OS really couldn't care less.

Windows to embedded port: data and code memory size

I am in the process of porting a Windows 7 library to an embedded platform. In order to do so, my employer has asked me for the amount of memory (and CPU, but let us concentrate on the memory for now) that my system will need once ported, so he can size the board to my needs.
I had a look on the internet and there does not seem to be much information about this question, hence my questions:
in order to get a rough idea of the memory footprint of the code in flash memory (code only, without memory for data), I read on the Internet that I should sum the sizes of all the DLLs I use. It seems that all compilers and platforms give a different size for the code footprint, but overall the size of the code (without data) is often very close. Do you confirm?
in order to deal with the memory required by the data only (heap + stack but no code), I had a look at the task manager (and process explorer). It seems the overall amount of data which I use is specified in the 'peak working set'. I have a few questions about it though:
2.a. Does the 'working set' include the heap + stack memory or does it correspond to the heap only?
2.b. Does the 'working set' include the size for the code as well? (as I am on windows 7, the code is also stored in RAM and not in flash as on embedded systems), or does it only correspond to the data?
2.c. it seems the 'peak working set' reflects the maximum amount of physical memory that was actually in RAM from the time the program was started, but it does not reflect the size the program could take afterwards (if I happen to allocate memory at runtime - which would be bad ;) - the peak value would go on increasing). Do you confirm?
2.d. Hence, do you also confirm that if I do not allocate memory at runtime, the 'peak working set' should roughly be the maximum amount of RAM my embedded system will need? Give or take a bit of difference due to the differences in system technology...
Thanks,
Antoine.
Unless you are intending to run your application on Windows Embedded, then looking at the code and data usage in Windows is not going to be much of an indicator of anything useful!
1) DLLs are libraries - not all the code within them will be utilised by your code. Most embedded systems are statically linked, and the linker will link only the modules that are actually referenced by your code. So taking the sum of the DLL dependencies is likely to lead to a gross overestimation of the memory requirement.
2) Windows memory management is profligate with memory use - because it can be, and doing so generally improves the performance of typical desktop systems. For example, a thread stack in Windows is typically of the order of 2MB - you may seldom use that much, but Windows gives it to you in any case because it can, and doing so errs on the side of safety. A thread stack in an embedded system will typically range from a few tens of bytes to a few tens of kilobytes - it depends on your application.
Windows Task Manager shows what Windows allocates to your process; that may not relate to what your process needs. Also, your application is using Windows services - all the memory used for kernel and device services will not show up as part of your process, but your embedded system may still need those.
If you do use your Windows prototype code to assess the embedded system requirements, then your best place to start is by getting the linker to generate a map file, which will give a detailed description of memory usage in terms of statically allocated data and code size.
Code size depends not only on the performance of the compiler, but also on the efficiency of the instruction set. Some architectures achieve higher code density than others. Windows application code size is never a good indicator of embedded code size because its execution environment is likely to be so much different. For example, a pre-emptive multitasking RTOS kernel on a 32-bit ARM can be implemented in less than 10KB of code, a file system in perhaps another 10, a network stack in anything from 10 to 30KB, and USB in another 10. As you can see, this is a different world from desktop code.
Data memory usage is perhaps more easily determined; but you do that through analysis of your application rather than by observing what Windows does. There is the data your application instantiates directly, and then there is data instantiated by libraries and device drivers you might call - in Windows the latter is likely to be relatively large and out of your control. Typical embedded systems libraries for things such as network stacks, USB, file systems etc. are far smaller and far more deterministic in both performance and size.
Your better bet is to describe your application in terms of its general purpose, performance requirements, real-time constraints, and its hardware requirements (display, networking, I/O, mass storage etc.), and then look at comparable solutions or at the libraries you will need to implement your solution; most embedded systems are "bare board" and do not have the services you find in Windows unless you write them or use third-party solutions - Windows is seldom a comparable solution to an embedded system.
If it is just a library rather than an application, then build it for a likely target using a Windows-hosted GCC cross-compiler and see how big it ends up. You don't need hardware for that, or even to spend any money.

Delphi: FastMM virtual memory management reference?

I had an issue recently (see my last question) that led me to take a closer look at the memory management in my Delphi application. After my first exploration, I have two questions.
I've started playing with the FastMMUsageTracker, and noticed the following. When I open a file to be used by the app (which also creates a form etc...), there is a significant discrepancy between the variation in available virtual memory for the app, and the variation in "FastMM4 allocated" memory.
First off, I'm a little confused by the terminology: why is there some FastMM-allocated memory and some "System-allocated" (and reserved) memory? Since FastMM is the memory manager, why is the system in charge of allocating some of the memory?
Also, how can I get more details on what objects/structures have been allocated that memory? The VM chart is only useful in showing the amount of memory that is "system allocated", "system reserved", or "FastMM allocated", but there is no link to the actual objects requiring that memory. Is it possible for example to get a report, mid-execution, similar to what FastMM generates upon closing the application? FastMM obviously stores that information somewhere.
As a bonus for me, if people can recommend a good reference (book, website) on the subject, it would also be much appreciated. There are tons of info on the net, but it's usually very case-specific and experts-oriented.
Thanks!
PS: This is not about finding leaks, no problem there, just trying to understand memory management better and be pre-emptive for the future, as our application uses more and more memory.
Some of your questions are easy. Well, one of them anyway!
Why is there some FastMM-allocated memory and some "System-allocated" (and reserved) memory? Since FastMM is the memory manager, why is the system in charge of allocating some of the memory?
The code that you write in Delphi is only part of what runs in your process. You use third-party libraries in the form of DLLs, most notably the Windows API. Any time you create a Delphi form, for example, there are a lot of Windows objects behind it that consume memory. This memory does not get allocated by FastMM, and I presume it is what is termed "system-allocated" in your question.
However, if you want to go any deeper then this very rapidly becomes an extremely complex topic. If you do want to go deeper into the implementation of Windows memory management then I think you need to consult a serious reference source. I suggest Windows Internals by Mark Russinovich, David Solomon and Alex Ionescu.
First off, I'm a little confused by the terminology: why is there some FastMM-allocated memory and some "System-allocated" (and reserved) memory? Since FastMM is the memory manager, why is the system in charge of allocating some of the memory?
Where do you suppose FastMM gets the memory to allocate? It comes from the system, of course.
When your app starts up, FastMM gets a block of memory from the system. When you ask for some memory to use (whether with GetMem, New, or TSomething.Create), FastMM tries to give it to you from that first initial block. If there's not enough there, FastMM asks for more (in one block if possible) from the system, and returns a chunk of that to you. When you free something, FastMM doesn't return that memory to the OS, because it figures you'll use it again. It just marks it as unused internally. It also tries to realign unused blocks so that they're as contiguous as possible, in order to try not to have to go back to the OS for more needlessly. (This realignment isn't always possible, though; that's where you end up with memory fragmentation from things like multiple resizing of dynamic arrays, lots of object creates and frees, and so forth.)
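If it helps to see the pattern, here is a toy bump allocator in C showing the general idea of a suballocator (grab one big block from the OS up front, then hand out pieces without further system calls). This is emphatically not FastMM's actual algorithm - FastMM also tracks, coalesces and reuses freed blocks, which this toy omits:

#include <stddef.h>
#include <sys/mman.h>

#define POOL_SIZE (1024 * 1024)    /* one big block from the OS */

static unsigned char *pool;
static size_t used;

void *toy_alloc(size_t n)
{
    if (pool == NULL) {            /* first call: ask the OS once */
        pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pool == MAP_FAILED) return NULL;
    }
    n = (n + 15) & ~(size_t)15;    /* keep allocations 16-byte aligned */
    if (used + n > POOL_SIZE) return NULL;  /* toy: never grows the pool */
    void *p = pool + used;
    used += n;
    return p;
}

Every call after the first is satisfied from the pool, which is why a memory manager like FastMM can be much faster than going to the OS for each allocation.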
In addition to the memory FastMM manages in your app, the system sets aside room for the stack and heap. Each process gets a meg of stack space when it starts up, as room to put variables. This stack (and the heap) can grow dynamically as needed.
When your application exits, all of the memory it's allocated is released back to the OS. (It may not appear so immediately in Task Manager, but it is.)
Is it possible for example to get a report, mid-execution, similar to what FastMM generates upon closing the application?
Not as far as I can tell. Just because FastMM stores it somewhere doesn't necessarily mean there's a way to access it during runtime from outside the memory manager. You can look at the source for FastMMUsageTracker to see how the information is retrieved (using GetMemoryManagerState and GetMemoryMap, in the RefreshSnapshot method). The source to FastMM4 is also available; you can look and see what public methods are available.
FastMM's own documentation (in the form of the readme files, the FastMM4Options.inc comments, and the FastMM4_FAQ.txt file) is useful to some extent in explaining how it works and what debugging options (and information) are available.
For a detailed map of what memory a process is using, try VMMAP from www.sysinternals.com (also co-authored by Mark Russinovich, mentioned in David's answer). This also allows you to see what is stored in some of the locations (type control-T when a detail line is selected).
Warning: there is much more memory in use by your process than you might think. You may need to read the book first.

What is the meaning of small footprint in terms of programming?

I heard many libraries such as JXTA and PjSIP have smaller footprints. Does this refer to small resource consumption or something else?
Footprint designates the size your application occupies in a computer's RAM.
Footprint can have different meaning when speaking about memory consumption.
In my experience, memory footprint often doesn't include memory allocated on the heap (dynamic memory), or resources loaded from disk, etc. This is because dynamic allocations are not constant and may vary depending on how the application or module is used. When reporting "low footprint" or "high footprint", a constant or worst-case measure of the required space is usually wanted.
If, for example, dynamic memory were included in the footprint report of an image editor, the footprint would depend entirely on the size of the image loaded into the application by the user.
In the context of a third-party library, the library author can optimize the static memory footprint of the library by ensuring that you never link more code into your application binary than absolutely needed. A common method for doing this in, for instance, C is to distribute library functions across separate .c files. This is because most C linkers will link all code from a .c file into your application, not just the function you call. So if you put a single function in a .c file, that's all the linker will incorporate into your application when you call it. If you put five functions in the .c file, the linker will probably link all of them into your app even if you only use one of them.
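A sketch of that convention (file and function names are hypothetical):

/* mylib_sum.c - one public function per translation unit */
int mylib_sum(int a, int b) { return a + b; }

/* mylib_max.c - deliberately kept in its own file */
int mylib_max(int a, int b) { return a > b ? a : b; }

/* app.c - calls only mylib_sum(); a traditional static linker
   pulls mylib_sum.o out of libmylib.a but leaves mylib_max.o out,
   so mylib_max never reaches the final binary */
extern int mylib_sum(int, int);
int main(void) { return mylib_sum(2, 3); }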
All this being said, the general (academic) definition of footprint includes all kinds of memory/storage aspects.
From Wikipedia Memory footprint article:
Memory footprint refers to the amount of main memory that a program uses or references while running.
This includes all sorts of active memory regions like code segment containing (mostly) program instructions (and occasionally constants), data segment (both initialized and uninitialized), heap memory, call stack, plus memory required to hold any additional data structures, such as symbol tables, debugging data structures, open files, shared libraries mapped to the current process, etc., that the program ever needs while executing and will be loaded at least once during the entire run.
Generally it's the amount of memory it takes up - the 'footprint' it leaves in memory when running. However, it can also refer to how much space it takes up on your hard drive - although these days that's less of an issue.
If you're writing an app and have memory limitations, consider running a profiler to keep track of how much your program is using.
It does refer to resources. Particularly memory. It requires a smaller amount of memory when running.
yes, resources such as memory or disk
Footprint in computing, i.e. for computer programs or machines, refers to the device memory occupied by a program, process, code, etc.
