Does the operating system itself use virtual memory addresses?

An operating system itself has resources it needs to access, like block I/O cache and process control blocks. Does it use virtual memory addresses or physical memory addresses?
I feel like it should be the former, since that avoids reserving a large area of physical memory for a single purpose even when it is mostly empty. The page table/virtual memory mechanism would do a much better job of holding the resources the OS really needs.
So which is it?

10 randomly selected operating systems will do virtual memory management in 10 different ways. There's no answer that applies to all operating systems.
Some (e.g. MS-DOS) don't support or use virtual memory management for anything; some (e.g. Linux) just map all of physical memory into kernel space and don't bother using virtual memory tricks for the kernel itself (it's almost as if the kernel is in physical memory, even though technically it's both); and some may do any number of virtual memory tricks in kernel space.
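To make the Linux case concrete: with a "direct map" of all physical memory at a fixed virtual offset, translating between kernel virtual and physical addresses is plain pointer arithmetic. Below is a minimal C sketch of the idea; the base constant mirrors the x86-64 value in the kernel's memory-layout documentation, but treat it as an assumption here, since the real kernel derives it per architecture (and exposes the conversions as phys_to_virt()/virt_to_phys()).

    #include <stdint.h>

    /* Assumed direct-map base; x86-64 Linux documents its "direct mapping
     * of all physical memory" at this virtual address (4-level paging). */
    #define DIRECT_MAP_BASE 0xffff888000000000ULL

    /* Reach any physical address without building new page table entries. */
    static inline void *phys_to_virt_sketch(uint64_t phys)
    {
        return (void *)(uintptr_t)(phys + DIRECT_MAP_BASE);
    }

    static inline uint64_t virt_to_phys_sketch(const void *virt)
    {
        return (uint64_t)(uintptr_t)virt - DIRECT_MAP_BASE;
    }

So "the kernel uses virtual addresses" and "the kernel can reach all of physical memory" are not in conflict: the mapping is simply arranged to make the translation trivial.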

Kernel virtual memory space and process virtual memory space

I was reading the textbook Computer Systems: A Programmer's Perspective (third edition), section 9.7.2: Linux Virtual Memory System, which talks about virtual memory.
I was a bit confused by the structure of virtual memory for a Linux process, as shown below:
My question is: is kernel virtual memory reserved for the kernel to run,
with the rest of the virtual memory reserved for user processes? What are kernel code and data for? And what is the physical memory in kernel virtual memory?
is kernel virtual memory reserved for the kernel to run, with the rest of the virtual memory reserved for user processes?
Yes, there is a part of virtual memory that is always reserved for the kernel and another part that is left available to userspace processes. Every single process has its own virtual memory, but the kernel is always mapped in the higher part (higher addresses) of virtual memory. Whether or not this mapping is visible to the process depends on Kernel Page Table Isolation.
See also: Do the virtual address spaces of all the processes have the same content in their “Kernel” parts?
What are kernel code and data for?
Part of the high virtual memory is a direct mapping of the actual kernel image. That is, the kernel executable and all its data. You can see it in more detail in this page of the kernel documentation, marked as "kernel text mapping, mapped to physical address 0".
See also: What's the use of having a kernel part in the virtual memory space of Linux processes?
And what is the physical memory in kernel virtual memory?
That part of the image is totally misleading. I don't know precisely what information the authors of the book were trying to convey, but physical memory is definitely not a part of kernel virtual memory. They were probably trying to address the fact that there is a direct mapping of all physical memory in the kernel virtual memory, which can be seen again on the same page of the kernel documentation, marked as "direct mapping of all physical memory".
Physical memory refers to the real memory of the system (i.e. the RAM). Each region of virtual memory is mapped to some region of physical memory. This virtual-to-physical mapping is totally transparent to processes and is managed by the kernel. For example, two processes that have the same file mapped read-only usually share the same physical memory region, while seeing two different virtual addresses.
This is a more accurate depiction of the relationship between virtual and physical memory:
Source: https://computationstructures.org/lectures/vm/vm.html
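As a small illustration of the shared read-only mapping point above, here is a POSIX C sketch (the file name is arbitrary): two processes running it on the same file will typically share the same physical page-cache pages while printing different virtual addresses.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
        if (fd < 0) return 1;

        struct stat st;
        if (fstat(fd, &st) != 0 || st.st_size == 0) return 1;

        /* A read-only mapping is backed by page-cache pages that other
         * processes mapping the same file can share. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) return 1;

        printf("mapped at virtual address %p, first byte '%c'\n",
               (void *)p, p[0]);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }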
Cited from the CSAPP book, 3rd edition, section 9.7.2, where the picture is shown:
Interestingly, Linux also maps a set of contiguous virtual pages (equal in size to the total amount of DRAM in the system) to the corresponding set of contiguous physical pages. This provides the kernel with a convenient way to access any specific location in physical memory—for example, when it needs to access page tables or to perform memory-mapped I/O operations on devices that are mapped to particular physical memory locations.
I think the Physical memory in the picture just reflects what's described above: a virtual memory area that maps to the entire physical memory.

Is the Nand2Tetris Hack computer's RAM a good model for how RAM is structured on x86 machines?

The following is the structure of RAM for the entire Hack Computer in Nand2Tetris:
Putting aside virtual memory, is this a good simplified model for how the entire RAM is set up on x86 computers? Is RAM really just made of clusters of memory regions, each with its own stack, heap, and instruction memory, stacked on top of each other?
Basically, is RAM just a collection of independent, separate memory regions, one per running process/program? Or does RAM consist of variables from different programs scattered randomly?
Hugely over-simplified, processes on a machine with Virtual Memory could all think they have a memory map similar to that of the Hack Virtual Machine (note: Virtual Memory != Virtual Machine).
However, individual chunks of each process' memory map might be mapped to some arbitrary physical memory, shuffled off to a swap file, not allocated until actually needed, shared with other processes, and more. And those chunks that are in RAM might be anywhere (and might move).
You may find this article to be a good starting point to understanding Virtual Memory: https://en.wikipedia.org/wiki/Virtual_memory
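To see those per-process memory maps in action, here is a small POSIX C sketch: after fork(), parent and child hold the same virtual address, yet copy-on-write gives each its own physical page with its own contents.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);
        if (p == NULL) return 1;
        *p = 0;

        pid_t pid = fork();
        if (pid == 0) {       /* child process */
            *p = 42;          /* copy-on-write: the child gets its own page */
            printf("child:  p = %p, *p = %d\n", (void *)p, *p);
            return 0;
        }

        wait(NULL);           /* let the child print first */
        *p = 7;
        printf("parent: p = %p, *p = %d\n", (void *)p, *p);
        free(p);
        return 0;
    }

Both lines typically print the same pointer value with different contents: the addresses are virtual and per-process, while the OS places (and shares) the underlying physical pages wherever it likes.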

Since modern computers use virtual memory, why do we still encounter "out of memory" issues?

I am learning the concept of virtual memory, but this question has been confusing me for a while. Since most modern computers use virtual memory, when a program is executing, the OS is supposed to page data in and out between RAM and disk. So why do we still encounter "out of memory" issues? Please correct me if I have misunderstood the concept; I'd really appreciate an explanation.
PS: For example, I was analyzing a large amount of data (>100G) output from a simulation on a computing cluster, reading the data into a C array. Very often the system crashed and complained about a memory error.
First: Modern computers do indeed use virtual memory; however, there is no magic here. Memory is not created out of nothing. Virtual memory schemes typically allow a portion of the mass storage subsystem (aka the hard disk) to be used to hold portions of the process that are (hopefully) less frequently used.
This technique allows processes to use more memory than is available as RAM. However, nothing is infinite. Eventually all RAM and hard drive resources will be used up, and the process will get an out-of-memory error.
Second: It is not unheard of for operating systems to place a cap on the memory that a process may use. Hit that cap and again, the process gets an out of memory error.
Even with virtual memory the memory available is not unlimited.
Limit 1) Architectural limits. The processor and operating system place some maximum on the virtual memory size.
Limit 2) System parameters. Many operating systems make the maximum virtual memory size configurable.
Limit 3) Process quotas. Many operating systems have process quotas that limit the maximum virtual memory size.
Limit 4) System resources. Notably page file space.
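A minimal POSIX C sketch of these limits in action: the program imposes a virtual-address-space quota on itself (limit 3 above, using setrlimit with an arbitrary 256 MiB figure), then allocates until the allocator gives up.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Self-imposed process quota: 256 MiB of virtual address space
         * (an arbitrary figure chosen for the demonstration). */
        struct rlimit cap = { 256UL << 20, 256UL << 20 };
        if (setrlimit(RLIMIT_AS, &cap) != 0) return 1;

        size_t mib = 0;
        for (;;) {
            void *p = malloc(1 << 20);   /* request 1 MiB at a time */
            if (p == NULL) {             /* the "out of memory" moment */
                fprintf(stderr, "malloc failed after ~%zu MiB\n", mib);
                return 1;
            }
            mib++;                       /* blocks are leaked on purpose */
        }
    }

Without the self-imposed quota, the loop would eventually run into one of the other limits instead. Note that on Linux with overcommit enabled, the kernel's OOM killer may terminate a process before malloc ever returns NULL.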

How does the OS handle fragmentation in the virtual address space?

As far as I know, paging eliminates external fragmentation in the physical address space, but what about fragmentation in the virtual address space?
In modern OSes the virtual address space is per process (the kernel has its own dedicated virtual range), which means the demands on it are much lower than on the OS-wide physical address space. The virtual address space is usually large enough (2-3 GB per process on x86, multiple TB on x64 machines, 8 TB on Windows) that fragmentation is not as big an issue as it is for the physical address space. Still, the issue can arise, especially for long-running and memory-hungry applications on x86 or other 32-bit architectures.
For this the OS provides mechanisms, for example in the form of the heap code. An application usually reserves one or more memory ranges as heap(s) when it starts and later allocates the required chunks of memory from there (e.g. via malloc). There is a variety of implementations that handle fragmentation of the heap in different ways. Windows provides a special low-fragmentation heap implementation that can be used if desired. Everything else is usually up to the application or its libraries.
Let me add a qualification to your statement. Paging systems nearly eliminate fragmentation in the physical address space when the kernel is pageable.
On some systems, the user-mode page tables are themselves pageable. On others, they occupy physical locations that are not pageable. Then you can get fragmentation.
Fragmentation in the virtual address space tends to occur in heap allocation. The challenge for heap managers is to manage the space while minimizing fragmentation.
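Here is a toy C illustration of that challenge: a trivial first-fit allocator over a fixed 1 KiB arena. After freeing every other 64-byte block, 512 bytes are free in total, yet no contiguous hole exceeds 64 bytes, so a 128-byte request fails. Real heap managers (including Windows' low-fragmentation heap mentioned above) exist precisely to mitigate this.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define ARENA 1024
    static bool used[ARENA];           /* one flag per byte of the toy arena */

    static int toy_alloc(int size)     /* first fit; returns offset or -1 */
    {
        for (int start = 0; start + size <= ARENA; start++) {
            int run = 0;
            while (run < size && !used[start + run])
                run++;
            if (run == size) {
                memset(used + start, 1, size);
                return start;
            }
            start += run;              /* with the loop's ++, skip the used byte */
        }
        return -1;
    }

    static void toy_free(int start, int size)
    {
        memset(used + start, 0, size);
    }

    int main(void)
    {
        int off[16];
        for (int i = 0; i < 16; i++)   /* fill the arena completely */
            off[i] = toy_alloc(64);
        for (int i = 0; i < 16; i += 2)
            toy_free(off[i], 64);      /* free every other block: 512 B free */

        /* Plenty of total free space, but the largest hole is 64 bytes: */
        printf("128-byte request -> offset %d\n", toy_alloc(128));  /* -1 */
        return 0;
    }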

Can a 32-bit program use more than 4GB of memory on a 64-bit OS?

Is a 32-bit program running on a 64-bit OS able to use more than 4GB of memory if available?
The short answer is: yes.
The longer answer is: it depends. There is hardware support for page re-mapping, which basically gives your program a window of a few pages into a larger area of memory.
This window, however, must be managed by the program itself and gets no support from the memory manager. There are examples of programs doing this, such as SQL Server on Windows.
In general, though, it is a bad idea, and the program should either limit itself to 4GB or move to 64 bits :)
Normally you're limited to a 2GB address space, in which all your allocations and their overhead, fragmentation, etc., must fit along with memory-mapped files (which includes your program and the DLLs it uses). This effectively limits you to 1.5GB.
With special configuration, e.g. /3GB, you can make more than 2GB available to applications, but by doing so you rob the kernel of space, costing you file caching, handle capacity, etc..
On Win32, you can use more with PAE support, but it's not transparent; you have to manage it yourself.
Only by explicitly mapping ranges of that memory into and out of its 4GB address space.
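The "window" technique these answers describe can be sketched with ordinary file mappings. The POSIX C sketch below remaps a fixed-size view onto different offsets of a large file (huge.dat is a hypothetical file larger than 4 GiB); on Windows the analogous calls are CreateFileMapping/MapViewOfFile, and AWE (AllocateUserPhysicalPages/MapUserPhysicalPages) plays the same role for physical RAM.

    #define _FILE_OFFSET_BITS 64  /* 64-bit off_t even in a 32-bit build */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define WINDOW (64UL << 20)   /* a 64 MiB view; the file can be far larger */

    /* Map one WINDOW-sized slice of the file at a page-aligned offset. */
    static void *map_window(int fd, off_t offset)
    {
        return mmap(NULL, WINDOW, PROT_READ, MAP_PRIVATE, fd, offset);
    }

    int main(void)
    {
        int fd = open("huge.dat", O_RDONLY);    /* hypothetical > 4 GiB file */
        if (fd < 0) return 1;

        void *view = map_window(fd, 0);         /* slice [0, 64 MiB) */
        if (view == MAP_FAILED) return 1;
        /* ... work on the first slice ... */
        munmap(view, WINDOW);

        view = map_window(fd, (off_t)8 << 30);  /* slice starting at 8 GiB */
        if (view == MAP_FAILED) return 1;
        /* ... work on data far beyond a 32-bit address space ... */
        munmap(view, WINDOW);

        close(fd);
        return 0;
    }

The address-space cost stays constant at one window no matter how large the underlying data is, which is exactly the trade described above: more reachable memory in exchange for managing the mapping yourself.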
