I am trying to understand swap space and virtual memory. These threads (thread1, thread2) say that virtual memory is an abstraction and that swap space is merely used in implementing virtual memory. Now, my questions are:
1) Are swap spaces special places on the hard disk that are used to temporarily store pages paged out from RAM?
2) One more question is: Let's say I have a file F1 on the hard disk and I'm currently operating on page P1 of file F1 (page P1 is in RAM) and I updated my page P1. Currently, page P1 in memory and page P1 in hard disk are divergent. Let's say the OS decided to page out P1 from memory. So, what has to happen is that page P1 has to move out of RAM and go somewhere on the hard disk. My understanding is that one of the following will happen:
a) P1 goes to the swap space, in which case the page P1 on disk is still not updated (not yet flushed)
or
b) the page P1 gets flushed to disk, in which case the file F1 on disk is up to date.
What will happen here, a or b?
1) Are swap spaces special places on the hard disk that are used to
temporarily store pages paged out from RAM?
Yes. Depending on how the OS is configured, the swap space might be a separate partition, or perhaps just a special file in the filesystem. In either case, it's like you say: its purpose is to temporarily hold pages that have been swapped out of RAM.
I'm currently operating on page P1 of file F1 (page P1 is in RAM) and
I updated my page P1
It's a little ambiguous what you mean by "operating on" here -- you might mean that you've used fopen() (or similar) to open a file handle and are using that file handle to read from and/or write to the contents of the file. We'll call that scenario A.
Alternatively, you might have used mmap() (or something equivalent) to map the file's contents into virtual memory, so that you can access the file's contents directly via a pointer, as if it was a regular in-memory buffer. We'll call that scenario B.
Currently, page P1 in memory and page P1 in hard disk are divergent.
Let's say the OS decided to page out P1 from memory.
In scenario A (with fopen()), the pages of the file aren't themselves "in memory" in any relevant sense; rather, when the program calls fread() (or similar), the OS copies some bytes from the file into an in-memory buffer that the program passed to the fread() call. There is indeed a copy of some data from the file in memory, and that data might get paged out to the swap file, but that data is not tied to the file itself (except in the sense that the program might be planning eventually to fwrite() it back into the file -- but the OS doesn't know that). So in this scenario, the paged-out memory will go into the swap space, but that won't affect the on-disk file in any way.
In scenario B (with mmap()), the OS "knows" that the page in memory has a direct one-to-one mapping to the corresponding subset of the file, and in this case the OS may very well page the data out directly into the mmap'd file rather than going through the standard swap space and then to the file. (An OS is certainly not required to be clever in that way, but I believe that most modern/popular OSes will take the direct route, because it's more efficient and there's no reason not to.)
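To make the difference concrete, here is a minimal sketch of the two scenarios, assuming a POSIX system and a hypothetical file data.bin that is at least 4096 bytes long (the file name and sizes are just illustration values):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Scenario A: fopen()/fread() -- the bytes land in a private heap
         * buffer. If that buffer gets paged out, it goes to swap; the file
         * on disk only changes when we later fwrite()/fflush() it back. */
        FILE *fp = fopen("data.bin", "r+b");
        if (!fp) { perror("fopen"); return 1; }
        char *buf = malloc(4096);
        size_t n = fread(buf, 1, 4096, fp);
        if (n > 0)
            buf[0] = 'X';          /* modifies only our in-memory copy */
        free(buf);                 /* file unchanged: we never wrote back */
        fclose(fp);

        /* Scenario B: mmap() with MAP_SHARED -- the page is backed by the
         * file itself, so a dirty page can be written straight back to
         * data.bin rather than to swap. */
        int fd = open("data.bin", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }
        char *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }
        map[0] = 'X';              /* dirties a file-backed page */
        msync(map, 4096, MS_SYNC); /* force the flush; the OS could also do it lazily */
        munmap(map, 4096);
        close(fd);
        return 0;
    }

In scenario A the only route back to the file is an explicit fwrite(); in scenario B the dirty page itself is file-backed, which is exactly why the OS can skip the swap space for it.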
Related
I'm studying how virtual memory works and I'm not sure what happens if I load a big file (smaller than the physical memory, though) with fread() and similar.
As far as I understand, the operating system might not allocate the entire corresponding physical memory. Instead, it could wait until a page fault is triggered as my program reads a specific portion of the file (a portion not yet mapped to physical memory).
This is basically the behavior of a memory mapped file. So, if my assumptions are correct, what is the benefit of using system calls like mmap()? Just to avoid the usual for-loop dance when reading with fread(), maybe?
read()/fread() will read the amount you specified into the buffer you provide. mmap() is a separate interface into the kernel file cache. Where the two intersect is that the kernel will most likely first read the file into cache buffers, then copy select bits of those cache buffers into your user buffer.
This double copy is often necessary because your program doesn't provide the alignment and blocking size the underlying device requires, and if the data requires transformation (decryption, decompression), the kernel needs a place to do that work.
This kernel cache is kept coherent with the file, so system wide reads and writes go through it.
If you mmap the file, you may be able to avoid the double copy, but you have to deal with changes to the file appearing unannounced.
According to "Windows Internals, Part 1" (7th Edition, Kindle version):
Pages in a process virtual address space are either free, reserved, committed, or shareable.
Focusing only on the reserved and committed pages, the first type is described in the same book:
Reserving memory means setting aside a range of contiguous virtual addresses for possible future use (such as an array) while consuming negligible system resources, and then committing portions of the reserved space as needed as the application runs. Or, if the size requirements are known in advance, a process can reserve and commit in the same function call.
Both reserving and committing will initially get you entries in the VADs (virtual address descriptors), but neither operation will touch the PTE (page table entries) structures. Reserving used to cost PTEs before Windows 8.1, but not anymore.
As described above, reserved means blocking a range of virtual addresses, NOT blocking physical memory or paging file space at the OS level. The OS doesn't include reserved memory in the commit limit, so when the time comes to actually use that memory, you might get a surprise. It's important to note that reserving happens from the perspective of the process address space. It's not that any physical resource is reserved - there's no stamping of "no vacancy" against RAM space or page file(s).
The analogy with plots of land might be missing something: take reserved as the area of land surrounded by wooden poles, thus letting others know that the land is taken. But what about committed? It can't be land on which structures (e.g. houses) have already been built, since those would require PTEs and there are none there yet, since we haven't accessed anything. It's only when touching committed data that the PTEs will get built, which makes the pages available to the process.
The main problem is that committed memory - at least in its initial state - is functionally very much like reserved memory. It's just an area blocked within VADs. Try to touch one of the addresses, and you'll get an access violation exception for a reserved address:
Attempting to access free or reserved memory results in an access violation exception because the page isn’t mapped to any storage that can resolve the reference
...and an initial page fault for a committed one (immediately followed by the required PTE entries being created).
Back to the land analogy, once houses are built, that patch of land is still committed. Yet this is a bit peculiar, since it was also committed when the original grass was there, before the first shovelful of earth was dug to start construction. It resembled the same state as that of a reserved patch. Maybe it would be better to think of it as terrain eligible for construction, e.g. you have a permit to build (albeit you might never build as much as a wall on that patch of land).
What would be the reasons for using one type of memory versus the other? There's at least one: the OS guarantees that there will be room to allocate committed memory, should that ever be needed in the future, but doesn't guarantee anything for reserved memory aside from blocking that process's address space range. The only downside of committed memory is that one or more paging files might need to be extended in size so that the commit limit can take the recently allocated block into account, ensuring that, should the requester demand part or all of the data in the future, the OS can provide access to it.
I can't really think how the land analogy can capture this detail of "guarantee". After all, the reserved patch also physically existed, covered by the same grass as a committed one in its pristine state.
The stack is another scenario where reserved and committed memory are used together:
When a thread is created, the memory manager automatically reserves a predetermined amount of virtual memory, which by default is 1 MB.[...] Although 1 MB is reserved, only the first page of the stack will be committed [...]
along with a guard page. When a thread’s stack grows large enough to touch the guard page, an exception occurs, causing an attempt to allocate another guard. Through this mechanism, a user stack doesn’t immediately consume all 1 MB of committed memory but instead grows with demand.
There is an answer here that deals with why one would want to use reserved memory as opposed to committed. It involves storing continuously expanding data - which is actually the stack model described above - and having specific absolute address ranges available when needed (although I'm not sure why one would want to do that within a process).
Ok, what am I actually asking?
What would be a good analogy for the reserved/committed concept?
Any other reason, aside from those depicted above, that would mandate the use of reserved memory? Are there any interesting use cases where resorting to reserved memory is a smart move?
Your question hits upon the difference between logical memory translation and virtual memory translation. While CPU documentation likes to conflate these two concepts, they are different in practice.
If you look at logical memory translation, there are only two states for a page. Using your terminology, they are FREE and COMMITTED. A free page is one that has no mapping to a physical page frame and a COMMITTED page has such a mapping.
In a virtual memory system, the operating system has to maintain a copy of the address space in secondary storage. How this is done depends upon the operating system. Typically, a process will have mappings to several different files for secondary storage. The operating system divides the address space into what is usually called a SECTION.
For example, the code and read-only data could be stored virtually as one or more SECTIONS in the executable file. Code and static data in shared libraries could each be in a different section that is paged to the shared libraries. You might have a shared file mapped into the process, providing memory that can be accessed by multiple processes; that forms another section. Most of the read/write data is likely to be in a page file in one or more sections. How the operating system tracks where it virtually stores each section of data is system dependent.
For Windows, that gives the definition of one of your terms: shareable. A shareable section is one where a range of addresses can be mapped to different processes, at different (or possibly the same) logical addresses.
Your last term is then RESERVED. If you look at the Windows VirtualAlloc function documentation, you can see that (among your options) you can RESERVE or COMMIT. If you reserve, you are creating a section of VIRTUAL MEMORY that has no mapping to physical memory.
This RESERVE/COMMIT model is Windows-specific (although other operating systems may do the same). The likely reason was to save disk space. When Windows NT was developed, 600 MB drives the size of a washing machine were still in use.
In these days of 64-bit address spaces, this system works well for (as you say) expanding data. In theory, an exception handler for a stack overrun can simply expand the stack. Reserving 4GB of memory takes no more resources than reserving a single page (which would not be practicable in a 32-bit system—see above). If you have 20 threads, this makes reserving stack space efficient.
What would be a good analogy for the reserved/committed concept?
One could say RESERVE is like buying an option to buy, and COMMIT is exercising the option.
Any other reason, aside from those depicted above, that would mandate the use of reserved memory? Are there any interesting use cases where resorting to reserved memory is a smart move?
IMHO, the most likely places to RESERVE without COMMITTING are for creating stacks and heaps with the former being the most important.
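To make the reserve/commit distinction concrete, here is a minimal Win32 sketch of the pattern described above; the sizes are arbitrary illustration values and error handling is kept to a minimum:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        SIZE_T reserve_size = 64 * 1024 * 1024;  /* 64 MB of address space */
        SIZE_T commit_size  = 64 * 1024;         /* commit only 64 KB up front */

        /* Reserve: blocks the address range (a VAD entry) while consuming
         * no commit charge and no PTEs. Touching it now would raise an
         * access violation. */
        char *base = VirtualAlloc(NULL, reserve_size, MEM_RESERVE, PAGE_NOACCESS);
        if (!base) { printf("reserve failed\n"); return 1; }

        /* Commit a small portion: this is counted against the commit limit,
         * but PTEs still only appear once the memory is touched. */
        char *first = VirtualAlloc(base, commit_size, MEM_COMMIT, PAGE_READWRITE);
        if (!first) { printf("commit failed\n"); return 1; }

        first[0] = 42;   /* first touch: demand-zero page fault, PTE built */

        VirtualFree(base, 0, MEM_RELEASE);  /* release the whole reservation */
        return 0;
    }

This is the same shape as the stack case quoted earlier: a large reservation up front, with committed pages added only as they are actually needed.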
In a typical memory layout there are 4 items:
code/text (where the compiled code of the program itself resides)
data
stack
heap
I am new to memory layouts so I am wondering if V8, which is a JIT compiler and dynamically generates code, stores this code in the "code" segment of the memory, or just stores it in the heap along with everything else. I'm not sure if the operating system gives you access to the code/text segment, so I'm not sure if this is a dumb question.
The below is true for the major operating systems running on the major CPUs in common use today. Things will differ on old or some embedded operating systems (in particular things are a lot simpler on operating systems without virtual memory) or when running code without an OS or on CPUs with no support for memory protection.
The picture in your question is a bit of a simplification. One thing it does not show is that (virtual) memory is made up of pages provided to you by the operating system. Each page has its own permissions controlling whether your process can read, write and/or execute the data in that page.
The text section of a binary will be loaded onto pages that are executable, but not writable. The read-only data section will be loaded onto pages that are neither writable nor executable. All other memory in your picture ((un)initialized data, heap, stack) will be stored on pages that are writable, but not executable.
These permissions prevent security flaws (such as buffer overruns) that could otherwise allow attackers to execute arbitrary code by making the program jump into code provided by the attacker or letting the attacker overwrite code in the text section.
Now the problem with these permissions, with regards to JIT compilation, is that you can't execute your JIT-compiled code: if you store it on the stack or the heap (or within a global variable), it won't be on an executable page, so the program will crash when you try to jump into the code. If you try to store it in the text area (by making use of left-over memory on the last page or by overwriting parts of the JIT-compilers code), the program will crash because you're trying to write to read-only memory.
But thankfully operating systems allow you to change the permissions of a page (on POSIX-systems this can be done using mprotect and on Windows using VirtualProtect). So your first idea might be to store the generated code on the heap and then simply make the containing pages executable. However this can be somewhat problematic: VirtualProtect and some implementations of mprotect require a pointer to the beginning of a page, but your array does not necessarily start at the beginning of a page if you allocated it using malloc (or new or your language's equivalent). Further your array may share a page with other data, which you don't want to be executable.
To prevent these issues, you can use functions, such as mmap on Unix-like operating systems and VirtualAlloc on Windows, that give you pages of memory "to yourself". These functions will allocate enough pages to contain as much memory as you requested and return a pointer to the beginning of that memory (which will be at the beginning of the first page). These pages will not be available to malloc. That is, even if your array is significantly smaller than the size of a page on your OS, the page will only be used to store your array - a subsequent call to malloc will not return a pointer to memory in that page.
So the way that most JIT-compilers work is that they allocate read-write memory using mmap or VirtualAlloc, copy the generated machine instructions into that memory, use mprotect or VirtualProtect to make the memory executable and non-writable (for security reasons you never want memory to be executable and writable at the same time if you can avoid it) and then jump into it. In terms of its (virtual) address, the memory will be part of the heap's area of the memory, but it will be separate from the heap in the sense that it won't be managed by malloc and free.
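Here is a minimal sketch of that workflow, assuming x86-64 on a POSIX system; the byte sequence is a hand-assembled "mov eax, 42; ret" used purely for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* "Generated" machine code: mov eax, 42; ret (x86-64). */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        size_t size = 4096;

        /* 1. Get pages of our own, readable and writable but not executable. */
        void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        /* 2. Copy the generated machine instructions into them. */
        memcpy(mem, code, sizeof(code));

        /* 3. Flip the pages to read+execute (never writable and executable
         *    at the same time). */
        if (mprotect(mem, size, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");
            return 1;
        }

        /* 4. Jump into the code. */
        int (*fn)(void) = (int (*)(void))mem;
        printf("JIT-compiled function returned %d\n", fn());

        munmap(mem, size);
        return 0;
    }

On Windows the equivalent dance would use VirtualAlloc with PAGE_READWRITE followed by VirtualProtect with PAGE_EXECUTE_READ.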
Heap and stack are the memory regions where programs can allocate at runtime. This is not specific to V8, or JIT compilers. For more detail, I humbly suggest that you read whatever book that illustration came from ;-)
Paging is explained here, slide #6:
http://www.cs.ucc.ie/~grigoras/CS2506/Lecture_6.pdf
in my lecture notes, but I cannot for the life of me understand it. I know it's a way of translating virtual addresses to physical addresses. So the virtual addresses, which are on disk, are divided into chunks of 2^k. I am really confused after this. Can someone please explain it to me in simple terms?
Paging is, as you've noted, a type of virtual memory. To answer the question raised by @John Curtsy: it's covered separately from virtual memory in general because there are other types of virtual memory, although paging is now (by far) the most common.
Paged virtual memory is pretty simple: you split all of your physical memory up into blocks, mostly of equal size (though having a selection of two or three sizes is fairly common in practice). Making the blocks equal sized makes them interchangeable.
Then you have addressing. You start by breaking each address up into two pieces. One is an offset within a page. You normally use the least significant bits for that part. If you use (say) 4K pages, you need 12 bits for the offset. With (say) a 32-bit address space, that leaves 20 more bits.
From there, things are really a lot simpler than they initially seem. You basically build a small "descriptor" to describe each page of memory. This will have a linear address (the address used by the client application to address that memory), and a physical address for the memory, as well as a Present bit. There will (at least usually) be a few other things like permissions to indicate whether data in that page can be read, written, executed, etc.
Then, when client code uses an address, the CPU starts by breaking up the page offset from the rest of the address. It then takes the rest of the linear address, and looks through the page descriptors to find the physical address that goes with that linear address. Then, to address the physical memory, it uses the upper 20 bits of the physical address with the lower 12 bits of the linear address, and together they form the actual physical address that goes out on the processor pins and gets data from the memory chip.
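A tiny sketch of that split, assuming 4K pages (12 offset bits, 20 page-number bits in a 32-bit address); the frame number below is made up, standing in for the page-descriptor lookup that the CPU/OS actually performs:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)   /* 0x0FFF */

    int main(void) {
        uint32_t linear = 0x12345678;                 /* example linear address */
        uint32_t page_number = linear >> PAGE_SHIFT;  /* upper 20 bits: 0x12345 */
        uint32_t offset      = linear & PAGE_MASK;    /* lower 12 bits: 0x678  */

        uint32_t frame_number = 0x000AB;              /* pretend descriptor lookup */
        uint32_t physical = (frame_number << PAGE_SHIFT) | offset;

        printf("page 0x%05X, offset 0x%03X -> physical 0x%08X\n",
               page_number, offset, physical);
        return 0;
    }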
Now, we get to the part where we get "true" virtual memory. When programs are using more memory than is actually available, the OS takes the data for some of those descriptors, and writes it out to the disk drive. It then clears the "Present" bit for that page of memory. The physical page of memory is now free for some other purpose.
When the client program tries to refer to that memory, the CPU checks that the Present bit is set. If it's not, the CPU raises an exception. When that happens, the OS frees up a block of physical memory as above, reads the data for the faulting page back in from disk, and fills in the page descriptor with the address of the physical page where it's now located. When it's done all that, it returns from the exception, and the CPU restarts execution of the instruction that caused the exception -- except now the Present bit is set, so using the memory will work.
There is one more detail that you probably need to know: the page descriptors are normally arranged into page tables, and (the important part) you normally have a separate set of page tables for each process in the system (and another for the OS kernel itself). Having separate page tables for each process means that each process can use the same set of linear addresses, but those get mapped to different sets of physical addresses as needed. You can also map the same physical memory into more than one process by creating two separate page descriptors (one for each process) that contain the same physical address. Most OSes use this so that, for example, if you have two or three copies of the same program running, there will really be only one copy of the executable code for that program in memory -- but two or three sets of page descriptors pointing to that same code, so all of them can use it without making separate copies for each.
Of course, I'm simplifying a lot -- quite a few complete (and often fairly large) books have been written about virtual memory. There's also a fair amount of variation among machines, with various embellishments added, minor changes in parameters made (e.g., whether a page is 4K or 8K), and so on. Nonetheless, this is at least a general idea of the core of what happens (and it's still at a high enough level to apply about equally to an ARM, x86, MIPS, SPARC, etc.)
Simply put, it's a way of working with far more data than physical memory would normally allow: each process gets its own full virtual address space, and pages that don't fit in RAM at any given moment are kept on disk and brought back in on demand.
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that of a page to get maximum utilization of main memory and to avoid external fragmentation.
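If you want to see the page/frame size your own system uses, a minimal POSIX sketch is:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long page_size = sysconf(_SC_PAGESIZE);  /* typically 4096 on x86 */
        printf("page size: %ld bytes\n", page_size);
        return 0;
    }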
An example final question for my operating systems class:
Most operating systems support "memory-mapped files"; this describes files which are mapped into the address space of a running process. Reads and writes to the file are converted into memory reads and writes. We can imagine the existence of two new system calls, map() and unmap().
a) Consider map(); it accepts a file name and a virtual address, causing the operating system to map the file into the address space starting at the virtual address. Describe how the virtual memory system could be used to support this call.
b) Consider unmap(); it disassociates the file from the virtual address space. Describe the steps that should be taken to implement this system call. List all your assumptions.
c) In many UNIX systems, the inodes are kept at the start of the disk. An alternative design is to allocate an inode when a file is created and put the inode at the start of the first block of the file. Discuss the pros and cons of this alternative.
d) What would happen if the bitmap or free list containing information about free disk blocks was completely lost due to a crash? Is there any way to recover from this disaster, or is the disk no longer usable? Discuss your answer for UNIX-style and FAT-style disk-block allocation.
Any information of discussion on these questions is greatly appreciated.
For c): a con is the overhead of going out to retrieve a directory's files and their data; this is especially true when allocating space for a new file, since it can mean looking up every inode (now scattered across the disk) to retrieve its file_size, permissions, etc.
A pro is that when a large number of small files would require lots of inode space, the alternative avoids dedicating a lot of room at the start of the disk to the inode table.