Is malloc'd data zeroed in objective c? - ios

I was wondering about this because it's a potential security hole if process A can malloc 50 megs of data that is not zeroed out, and that chunk of memory turns out to include what had been physical pages from process B and still contains process B's data.

Is malloc'd data zeroed in objective c?
Mostly, yes. There's a zero-page writer, part of the memory manager, that provides a process with zeroed pages. The memory manager will call memory_object_data_unavailable to tell the kernel to supply zero-filled memory for the region.
If the process calls free and then mallocs again, the page is not re-zeroed. Zeroing only occurs when a new page is demanded. In fact, the page is probably not returned to the system upon free; the runtime retains the page for the process's own use. Related, see Will malloc implementations return free-ed memory back to the system?
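A minimal C sketch of this behavior (allocator-specific; the comments assume a typical Unix allocator such as glibc's, and the exact bytes observed may vary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* An allocation backed by pages newly demanded from the kernel
           arrives zero-filled. */
        size_t n = 4096;
        unsigned char *p = malloc(n);
        if (!p) return 1;
        printf("fresh block byte: %d\n", p[100]);          /* typically 0 */

        memset(p, 0xAB, n);   /* dirty the block with our own data */
        free(p);              /* allocator likely keeps the memory for reuse */

        /* A same-sized allocation often reuses the freed block. It is NOT
           re-zeroed; only pages newly demanded from the kernel are. */
        unsigned char *q = malloc(n);
        if (!q) return 1;
        printf("recycled block byte: 0x%X\n", (unsigned)q[100]); /* often 0xAB */
        free(q);
        return 0;
    }

Note that no cross-process leak occurs here: the stale 0xAB bytes are the process's own data, which is exactly the point of the answer.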
If a page is returned to the system under a low-memory condition, the page will be re-zeroed even if the process formerly held it. The memory manager does not account for the last owner of a page; it just assumes a new page needs to be zeroed to avoid an information leak across processes.
Note: Microsoft calls it the zero-page writer. Darwin has the same component, but I don't recall seeing it named. Also see Mac OS X Internals: A Systems Approach by Singh. It's a bit dated, but it provides a lot of system information. Chapter 8, Memory, is the chapter of interest.
Singh's book goes into other details, like cases where a page is demanded but does not need to be zeroed. In this case, data is shared among processes, and a new page is allocated to the process under a copy-on-write (COW) scheme. Effectively, the new page is populated from existing data rather than zeros. The function of interest is memory_object_data_request.
Linux has an interesting discussion of the zero page at Some ado about zero. It's interesting reading about a topic that seems mundane on the surface.

Related

Committed vs Reserved Memory

According to "Windows Internals, Part 1" (7th Edition, Kindle version):
Pages in a process virtual address space are either free, reserved, committed, or shareable.
Focusing only on the reserved and committed pages, the first type is described in the same book:
Reserving memory means setting aside a range of contiguous virtual addresses for possible future use (such as an array) while consuming negligible system resources, and then committing portions of the reserved space as needed as the application runs. Or, if the size requirements are known in advance, a process can reserve and commit in the same function call.
Both reserving and committing will initially get you entries in the VADs (virtual address descriptors), but neither operation will touch the PTE (page table entry) structures. Reserving used to cost PTEs as well before Windows 8.1, but not anymore.
As described above, reserved means blocking a range of virtual addresses, NOT blocking physical memory or paging-file space at the OS level. The OS doesn't include reserved memory in the commit limit, so when the time comes to actually use that memory, you might get a surprise. It's important to note that reserving happens from the perspective of the process address space; there's no physical resource being reserved, no stamping of "no vacancy" against RAM space or page file(s).
The analogy with plots of land might be missing something: take reserved as an area of land surrounded by wooden poles, letting others know that the land is taken. But what about committed? It can't be land on which structures (e.g. houses) have already been built, since those would require PTEs and there are none there yet, since we haven't accessed anything. It's only when touching committed data that the PTEs get built, which makes the pages available to the process.
The main problem is that committed memory, at least in its initial state, is functionally very much like reserved memory. It's just an area blocked within VADs. Try to touch one of the addresses, and you'll get an access violation exception for a reserved address:
Attempting to access free or reserved memory results in an access violation exception because the page isn’t mapped to any storage that can resolve the reference
...and an initial page fault for a committed one (immediately followed by the required PTE entries being created).
Back to the land analogy: once houses are built, that patch of land is still committed. Yet this is a bit peculiar, since it was already committed when the original grass was there, before the first shovelful of dirt was dug to start construction; it resembled the state of a reserved patch. Maybe it would be better to think of it as terrain eligible for construction, e.g. you have a permit to build (albeit you might never build as much as a wall on that patch of land).
What would be the reasons for using one type of memory versus the other? There's at least one: the OS guarantees that there will be room for committed memory should it ever actually be needed in the future, but guarantees nothing for reserved memory aside from blocking that process's address-space range. The only downside of committed memory is that one or more paging files might need to grow so that the commit limit accounts for the recently allocated block; that way, should the requester demand part or all of the data in the future, the OS can provide access to it.
I can't really think how the land analogy can capture this detail of "guarantee". After all, the reserved patch also physically existed, covered by the same grass as a committed one in its pristine state.
The stack is another scenario where reserved and committed memory are used together:
When a thread is created, the memory manager automatically reserves a predetermined amount of virtual memory, which by default is 1 MB.[...] Although 1 MB is reserved, only the first page of the stack will be committed [...]
along with a guard page. When a thread’s stack grows large enough to touch the guard page, an exception occurs, causing an attempt to allocate another guard. Through this mechanism, a user stack doesn’t immediately consume all 1 MB of committed memory but instead grows with demand."
There is an answer here that deals with why one would want to use reserved memory as opposed to committed. It involves storing continuously expanding data, which is actually the stack model described above, and having specific absolute address ranges available when needed (although I'm not sure why one would want to do that within a process).
Ok, what am I actually asking ?
What would be a good analogy for the reserved/committed concept ?
Any other reason aside from those depicted above that would mandate the use of reserved memory? Are there any interesting use cases where resorting to reserved memory is a smart move?
Your question hits upon the difference between logical memory translation and virtual memory translation. While CPU documentation likes to conflate these two concepts, they are different in practice.
If you look at logical memory translation, there are only two states for a page. Using your terminology, they are FREE and COMMITTED. A free page is one that has no mapping to a physical page frame, and a COMMITTED page has such a mapping.
In a virtual memory system, the operating system has to maintain a copy of the address space in secondary storage. How this is done depends upon the operating system. Typically, a process will have its address space mapped to several different files for secondary storage. The operating system divides the address space into what is usually called a SECTION.
For example, the code and read-only data could be stored virtually as one or more SECTIONS in the executable file. Code and static data in shared libraries could each be in a different section that is paged to the shared libraries. You might have a shared file mapped into the process, whose memory can be accessed by multiple processes; that forms another section. Most of the read/write data is likely to be in a page file in one or more sections. How the operating system tracks where it virtually stores each section of data is system dependent.
For Windows, that gives the definition of one of your terms: shareable. A shareable section is one where a range of addresses can be mapped to different processes, at different (or possibly the same) logical addresses.
Your last term is then RESERVED. If you look at the Windows' VirtualAlloc function documentation, you can see that (among your options) you can RESERVE or COMMIT. If you reserve you are creating a section of VIRTUAL MEMORY that has no mapping to physical memory.
This RESERVE/COMMIT model is Windows-specific (although other operating systems may do the same). The likely reason was to save disk space. When Windows NT was developed, 600MB drives the size of a washing machine were still in use.
In these days of 64-bit address spaces, this system works well for (as you say) expanding data. In theory, an exception handler for a stack overrun can simply expand the stack. Reserving 4GB of memory takes no more resources than reserving a single page (which would not be practicable in a 32-bit system—see above). If you have 20 threads, this makes reserving stack space efficient.
What would be a good analogy for the reserved/committed concept ?
One could say RESERVE is like buying an option and COMMIT is exercising it.
Any other reason aside from those depicted above that would mandate the use of reserved memory? Are there any interesting use cases where resorting to reserved memory is a smart move?
IMHO, the most likely places to RESERVE without COMMITTING are for creating stacks and heaps with the former being the most important.
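To make the distinction concrete, here is a minimal Windows C sketch of the reserve-then-commit pattern both answers describe (error handling trimmed; the 4096-byte page size is an assumption, GetSystemInfo gives the real value):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Reserve a large range of addresses: no physical memory or
           paging-file space is charged against the commit limit yet. */
        SIZE_T reserveSize = 64 * 1024 * 1024;   /* 64 MB of addresses */
        BYTE *base = VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
        if (!base) return 1;

        /* Touching the reserved range now would raise an access violation.
           Commit pages only as the data structure actually grows. */
        SIZE_T pageSize = 4096, committed = 0;
        for (int i = 0; i < 4; ++i) {
            if (!VirtualAlloc(base + committed, pageSize, MEM_COMMIT, PAGE_READWRITE))
                return 1;   /* commit can fail if the commit limit is exhausted */
            committed += pageSize;
        }

        base[0] = 42;   /* first touch: a page fault builds the PTE, then succeeds */
        printf("reserved %zu bytes, committed %zu\n", reserveSize, committed);

        VirtualFree(base, 0, MEM_RELEASE);   /* release the whole reservation */
        return 0;
    }

This is essentially what the memory manager does for thread stacks: reserve 1 MB of addresses up front, then commit pages as the guard page is hit.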

If v8 uses the "code" or "text" memory type, or if everything is in the heap/stack

In a typical memory layout there are 4 items:
code/text (where the compiled code of the program itself resides)
data
stack
heap
I am new to memory layouts, so I am wondering whether v8, which is a JIT compiler and dynamically generates code, stores this code in the "code" segment of the memory, or just stores it in the heap along with everything else. I'm not sure whether the operating system gives you access to the code/text segment, so I'm not sure if this is a dumb question.
The below is true for the major operating systems running on the major CPUs in common use today. Things will differ on old or some embedded operating systems (in particular things are a lot simpler on operating systems without virtual memory) or when running code without an OS or on CPUs with no support for memory protection.
The picture in your question is a bit of a simplification. One thing it does not show is that (virtual) memory is made up of pages provided to you by the operating system. Each page has its own permissions controlling whether your process can read, write and/or execute the data in that page.
The text section of a binary will be loaded onto pages that are executable, but not writable. The read-only data section will be loaded onto pages that are neither writable nor executable. All other memory in your picture ((un)initialized data, heap, stack) will be stored on pages that are writable, but not executable.
These permissions prevent security flaws (such as buffer overruns) that could otherwise allow attackers to execute arbitrary code by making the program jump into code provided by the attacker or letting the attacker overwrite code in the text section.
Now the problem with these permissions, with regards to JIT compilation, is that you can't simply execute your JIT-compiled code: if you store it on the stack or the heap (or within a global variable), it won't be on an executable page, so the program will crash when you try to jump into the code. If you try to store it in the text area (by making use of left-over memory on the last page or by overwriting parts of the JIT compiler's code), the program will crash because you're trying to write to read-only memory.
But thankfully operating systems allow you to change the permissions of a page (on POSIX systems this can be done using mprotect and on Windows using VirtualProtect). So your first idea might be to store the generated code on the heap and then simply make the containing pages executable. However, this can be somewhat problematic: VirtualProtect and some implementations of mprotect require a pointer to the beginning of a page, but your array does not necessarily start at the beginning of a page if you allocated it using malloc (or new, or your language's equivalent). Further, your array may share a page with other data, which you don't want to be executable.
To prevent these issues, you can use functions, such as mmap on Unix-like operating systems and VirtualAlloc on Windows, that give you pages of memory "to yourself". These functions will allocate enough pages to contain as much memory as you requested and return a pointer to the beginning of that memory (which will be at the beginning of the first page). These pages will not be available to malloc. That is, even if your array is significantly smaller than the size of a page on your OS, the page will only be used to store your array; a subsequent call to malloc will not return a pointer to memory in that page.
So the way that most JIT-compilers work is that they allocate read-write memory using mmap or VirtualAlloc, copy the generated machine instructions into that memory, use mprotect or VirtualProtect to make the memory executable and non-writable (for security reasons you never want memory to be executable and writable at the same time if you can avoid it) and then jump into it. In terms of its (virtual) address, the memory will be part of the heap's area of the memory, but it will be separate from the heap in the sense that it won't be managed by malloc and free.
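Here's a minimal C sketch of that sequence for x86-64 on a POSIX system (assumptions: the hard-coded bytes encode mov eax, 42; ret under the System V ABI; macOS on Apple Silicon would additionally need MAP_JIT; real JITs like V8 are far more elaborate):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* "Generated" machine code: mov eax, 42; ret */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        size_t len = sizeof code;

        /* 1. Get a fresh read-write page, separate from malloc's heap. */
        void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;

        /* 2. Copy the generated instructions into it. */
        memcpy(mem, code, len);

        /* 3. Flip the page to read+execute; never writable and
              executable at the same time. */
        if (mprotect(mem, len, PROT_READ | PROT_EXEC) != 0) return 1;

        /* 4. Jump into the freshly generated function. */
        int (*fn)(void) = (int (*)(void))mem;
        printf("%d\n", fn());   /* prints 42 */

        munmap(mem, len);
        return 0;
    }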
Heap and stack are the memory regions where programs can allocate at runtime. This is not specific to V8, or JIT compilers. For more detail, I humbly suggest that you read whatever book that illustration came from ;-)

x86 dirty bit in page table entry

The Intel Architecture manual says that when a memory page is first written to, the CPU sets the dirty bit of its page table entry. I have questions regarding this.
1. The 'dirty bit' in this context is used for guaranteeing the correctness of swapping memory pages out to disk and back in. Is this correct?
2. Is this automatically performed by the hardware, or is it implemented by the operating system?
3. If it is automatically performed by the hardware, is there any noteworthy difference compared to the usual memory updates which are performed by software instructions?
Thank you in advance.
1. The 'dirty bit' in this context is used for guaranteeing the correctness of swapping memory pages out to disk and back in. Is this correct?
This is the hardware part of paging support. The bit gives the OS a fast and efficient way to determine which pages must be written back to disk. If a page is about to be paged out and space for it is already allocated in the page file, the OS can skip writing it out when the flag is clear, because the copy on disk is still current. This is just one example of how an OS can use this flag in paging.
2. Is this automatically performed by the hardware, or is it implemented by the operating system?
Software clears this flag. Hardware sets this flag:
3.7.6 Page-Directory and Page-Table Entries
Dirty (D) flag, bit 6. Indicates whether a page has been written to when set. (This flag is not used in page-directory entries that point to page tables.) Memory management software typically clears this flag when a page is initially loaded into physical memory. The processor then sets this flag the first time a page is accessed for a write operation.
3. If it is automatically performed by the hardware, is there any noteworthy difference compared to the usual memory updates which are performed by software instructions?
The update has LOCK semantics and is atomic: the processor sets the bit with a locked read-modify-write of the page-table entry, so it cannot race with other processors updating the same entry.
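As a rough illustration of that division of labor, here is a toy C model of a PTE using the bit positions from the quoted manual section (purely a simulation; real page-table entries live in kernel-managed memory and the dirty bit is set by the CPU, not by code like this):

    #include <stdint.h>
    #include <stdio.h>

    #define PTE_PRESENT  (1ULL << 0)   /* P: page is mapped */
    #define PTE_WRITABLE (1ULL << 1)   /* R/W */
    #define PTE_ACCESSED (1ULL << 5)   /* A: set by CPU on any access */
    #define PTE_DIRTY    (1ULL << 6)   /* D: set by CPU on first write */

    int main(void) {
        uint64_t pte = PTE_PRESENT | PTE_WRITABLE;  /* freshly loaded page */

        /* Hardware: on the first write to the page, the CPU sets D
           (with a locked, atomic update of the entry). */
        pte |= PTE_ACCESSED | PTE_DIRTY;
        printf("needs writeback? %s\n", (pte & PTE_DIRTY) ? "yes" : "no");

        /* Software: the OS clears D after writing the page to disk, so
           a later write will mark the page dirty again. */
        pte &= ~PTE_DIRTY;
        printf("needs writeback? %s\n", (pte & PTE_DIRTY) ? "yes" : "no");
        return 0;
    }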

Delphi: FastMM virtual memory management reference?

I had an issue recently (see my last question) that led me to take a closer look at the memory management in my Delphi application. After my first exploration, I have two questions.
I've started playing with the FastMMUsageTracker, and noticed the following. When I open a file to be used by the app (which also creates a form etc...), there is a significant discrepancy between the variation in available virtual memory for the app, and the variation in "FastMM4 allocated" memory.
First off, I'm a little confused by the terminology: why is there some FastMM-allocated memory and some "System-allocated" (and reserved) memory? Since FastMM is the memory manager, why is the system in charge of allocating some of the memory?
Also, how can I get more details on what objects/structures have been allocated that memory? The VM chart is only useful in showing the amount of memory that is "system allocated", "system reserved", or "FastMM allocated", but there is no link to the actual objects requiring that memory. Is it possible for example to get a report, mid-execution, similar to what FastMM generates upon closing the application? FastMM obviously stores that information somewhere.
As a bonus for me, if people can recommend a good reference (book, website) on the subject, it would also be much appreciated. There are tons of info on the net, but it's usually very case-specific and experts-oriented.
Thanks!
PS: This is not about finding leaks, no problem there, just trying to understand memory management better and be pre-emptive for the future, as our application uses more and more memory.
Some of your questions are easy. Well, one of them anyway!
Why is there some FastMM-allocated memory and some "System-allocated" (and reserved) memory? Since FastMM is the memory manager, why is the system in charge of allocating some of the memory?
The code that you write in Delphi is only part of what runs in your process. You use 3rd party libraries in the form of DLLs, most notably the Windows API. Anytime you create a Delphi form, for example, there are a lot of windows objects behind it that consume memory. This memory does not get allocated by FastMM and I presume is what is termed "system-allocated" in your question.
However, if you want to go any deeper then this very rapidly becomes an extremely complex topic. If you do want to go deeper into the implementation of Windows memory management then I think you need to consult a serious reference source. I suggest Windows Internals by Mark Russinovich, David Solomon and Alex Ionescu.
First off, I'm a little confused by the terminology: why is there some FastMM-allocated memory and some "System-allocated" (and reserved) memory? Since FastMM is the memory manager, why is the system in charge of allocating some of the memory?
Where do you suppose FastMM gets the memory to allocate? It comes from the system, of course.
When your app starts up, FastMM gets a block of memory from the system. When you ask for some memory to use (whether with GetMem, New, or TSomething.Create), FastMM tries to give it to you from that first initial block. If there's not enough there, FastMM asks the system for more (in one block if possible) and returns a chunk of that to you. When you free something, FastMM doesn't return that memory to the OS, because it figures you'll use it again; it just marks it as unused internally. It also tries to coalesce unused blocks so that they're as contiguous as possible, to avoid needlessly going back to the OS for more. (This coalescing isn't always possible, though; that's where you end up with memory fragmentation from things like repeated resizing of dynamic arrays, lots of object creates and frees, and so forth.)
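To illustrate the pattern (a toy C sketch only; FastMM's real strategy involves size classes, free lists, and block reuse): a sub-allocator grabs one large block from the OS up front and serves small requests from it without further system calls.

    #include <stdio.h>
    #include <sys/mman.h>

    /* Toy bump allocator: one big OS block, carved up on demand. */
    static unsigned char *pool;
    static size_t pool_size, pool_used;

    static void *toy_getmem(size_t n) {
        if (!pool) {   /* first call: ask the OS for one large block */
            pool_size = 1 << 20;   /* 1 MB */
            pool = mmap(NULL, pool_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (pool == MAP_FAILED) { pool = NULL; return NULL; }
        }
        if (pool_used + n > pool_size) return NULL;  /* real allocators grow here */
        void *p = pool + pool_used;
        pool_used += n;
        return p;   /* no system call: served from the existing block */
    }

    int main(void) {
        char *a = toy_getmem(100);
        char *b = toy_getmem(200);
        printf("a=%p b=%p (both inside the same OS block)\n", (void *)a, (void *)b);
        return 0;
    }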
In addition to the memory FastMM manages in your app, the system sets aside room for the stack and heap. Each thread gets a meg of stack space when it's created, as room to put variables. This stack (and the heap) can grow dynamically as needed.
When your application exits, all of the memory it's allocated is released back to the OS. (It may not appear so immediately in Task Manager, but it is.)
Is it possible for example to get a report, mid-execution, similar to what FastMM generates upon closing the application?
Not as far as I can tell. The fact that FastMM stores that information somewhere doesn't necessarily mean there's a way to access it during runtime from outside the memory manager. You can look at the source for FastMMUsageTracker to see how the information is retrieved (using GetMemoryManagerState and GetMemoryMap, in the RefreshSnapshot method). The source to FastMM4 is also available; you can look and see what public methods are available.
FastMM's own documentation (in the form of the readme files, FastMM4Options.inc comments, and the FastMM4_FAQ.txt file) is useful to some extent in explaining how it works and what debugging options (and information) are available.
For a detailed map of what memory a process is using, try VMMap from www.sysinternals.com (also co-authored by Mark Russinovich, mentioned in David's answer). This also allows you to see what is stored in some of the locations (press Ctrl+T when a detail line is selected).
Warning: there is much more memory in use by your process than you might think. You may need to read the book first.

operating systems memory management - malloc() invocation

I'm studying up on OS memory management, and I wish to verify that I got the basic mechanism of allocation / virtual memory / paging straight.
Let's say a process calls malloc(), what happens behind the scenes?
my answer: The runtime library finds an appropriately sized block of memory in its virtual memory address space.
(This is where allocation algorithms such as first-fit, best-fit that deal with fragmentation come into play)
Now let's say the process accesses that memory, how is that done?
my answer: The memory address, as seen by the process, is in fact virtual. The OS checks if that address is currently mapped to a physical memory address and if so performs the access. If it isn't mapped - a page fault is raised.
Am I getting this straight? I.e., the compiler/runtime library is in charge of allocating virtual memory blocks, and the OS is in charge of the mapping between processes' virtual addresses and physical addresses (and the paging algorithm that entails)?
Thanks!
About right. The memory needs to exist in the virtual memory of the process for a page fault to actually allocate a physical page though. You can't just start poking around anywhere and expect the kernel to put physical memory where you happen to access.
There is much more to it than this. Read up on mmap(), anonymous and not, shared and private. And brk() too. malloc() builds on brk() and mmap().
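A small C demonstration of both points (mincore() reports whether a page is resident; Linux-specific behavior assumed): an mmap'ed page is valid virtual memory, yet no physical frame backs it until the first touch.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);

        /* Valid virtual memory, but no physical frame yet (demand paging). */
        unsigned char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;

        unsigned char vec;
        mincore(p, page, &vec);
        printf("resident before touch: %d\n", vec & 1);   /* typically 0 */

        p[0] = 1;   /* first access: page fault, kernel maps a zeroed frame */

        mincore(p, page, &vec);
        printf("resident after touch:  %d\n", vec & 1);   /* now 1 */
        return 0;
    }

Poking outside any mapping, by contrast, is not a populate-on-fault event: the kernel has no backing registered for that address and delivers SIGSEGV.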
You've almost got it. The one thing you missed is how the process asks the system for more virtual memory in the first place. As Thomas pointed out, you can't just write where you want. There's no reason an OS couldn't be designed to allow that, but it's much more efficient if it has some idea where you're going to be writing and the space where you do it is contiguous.
On Unixy systems, userland processes have a region called the data segment, which is what it sounds like: it's where the data goes. When a process needs memory for data, it calls brk(), which asks the system to extend the data segment to a specified pointer value. (For example, if your existing data segment was empty and you wanted to extend it to 2M, you'd call brk(0x200000).)
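For example, a minimal Linux sketch using the sbrk() wrapper (deprecated but still available in glibc; this is roughly what a simple malloc does when its free lists run dry):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* sbrk(0) returns the current "program break", i.e. the end of
           the data segment. */
        void *before = sbrk(0);
        printf("break before: %p\n", before);

        /* Extend the data segment by one 4096-byte increment. */
        if (sbrk(4096) == (void *)-1) return 1;
        printf("break after:  %p\n", sbrk(0));

        /* The new region is now ordinary usable memory. */
        ((char *)before)[0] = 'x';
        return 0;
    }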
Note that while very common, brk() is not a standard; in fact it was yanked out of POSIX.1 a decade ago because C specifies malloc() and there's no reason to mandate the interface for data segment allocation.
