Demand paging terminology clarification - memory

I have been reading about demand paging and there are a few terms I don't understand.
What is a frame? I read that it is a block of physical memory which can hold at least a page (so can a frame hold one or more pages?). But does this physical memory refer to the RAM or to disk storage?
Which one of these is true:
The virtual address space (which is 4 GiB on 32-bit systems) is allocated to one application at a time, so that every application has 4 GiB of virtual addresses available, and each time we switch applications the OS reconfigures the virtual address space to map to the other application. Or is the virtual address space shared among several processes? If so, how much virtual memory does each application get, and what happens when it wants more?
Do we have a page table for each application running, or a common page table for all applications?
Where does virtual memory fragmentation come from?
I hope someone can clarify this for me.

A frame is a block of physical memory, i.e. RAM. I've not heard of frames being larger than pages; I've always understood them as synonymous. However, a CPU may allow frames/pages of different sizes to coexist simultaneously (e.g. large pages of 2 MB/4 MB/1 GB and regular 4 KB pages on x86).
Whether there's a single address space shared by multiple applications or each has its own address space depends on the OS. Windows 3.xx and 9x/Me had a shared address space; Windows NT/2000/XP/etc. had individual, per-application address spaces. Not all of the address space is available to applications: a portion is reserved for the OS (kernel, drivers, and their data).
Should be obvious now. One note though... Even with individual address spaces a portion of memory can still be made available in several different address spaces and it may be done by having a common page table in the respective processes. Also, it's very typical for the kernel portion of the address space to be managed by several page tables common to all processes.
Whether the address space is virtual or not, it can become fragmented. You may want to allocate a contiguous (in terms of virtual addresses) buffer of, say, 8 KB, but have only two non-adjacent 4 KB regions available.
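As a small aside on the frame/page size point above: here is a minimal sketch, assuming a POSIX system, that asks the OS for its base page size. Frames in RAM are the same size for regular mappings, so this one number covers both.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Base page size: virtual pages and the physical frames (RAM)
         * backing them are this many bytes for regular mappings. */
        long page_size = sysconf(_SC_PAGESIZE);
        printf("page/frame size: %ld bytes\n", page_size);
        return 0;
    }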

Related

Is kernel memory pageable?

A page, memory page, or virtual page is a fixed-length contiguous block of virtual memory, described by a single entry in the page table. 
I want to know if kernel memory can also be pageable.
Yes. For example, on architectures with an MMU every virtual address (user space and kernel space) is translated by the MMU. There is an area where the kernel is directly mapped, i.e. the virtual addresses are at a fixed offset from their physical addresses.
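A rough sketch of that fixed-offset translation (not actual kernel code; the base constant below is only an example value, and Linux, for instance, wraps this arithmetic in its __va()/__pa() helpers):

    #include <stdint.h>

    /* Example base of the kernel's direct mapping; the real constant
     * depends on the architecture and kernel configuration. */
    #define DIRECT_MAP_BASE 0xffff888000000000ULL

    /* Physical -> virtual inside the direct map: add the fixed offset. */
    static inline uint64_t phys_to_virt(uint64_t phys) {
        return DIRECT_MAP_BASE + phys;
    }

    /* Virtual -> physical inside the direct map: subtract it again. */
    static inline uint64_t virt_to_phys(uint64_t virt) {
        return virt - DIRECT_MAP_BASE;
    }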
When for example a system call needs to access an address in kernel space, the page table of the last process that ran is used. It does not matter which one, since the kernel space is shared between all processes and thus is the same for all.
There is one case where physical addresses are used directly and that is in the boot process before paging is enabled.
As Giacomo Catenazzi mentioned correctly in the comments, these pages are handled differently, e.g. they can not be swapped out.

How is memory space kept different between processes?

How is memory space between processes kept separate?
I am confused about how this works exactly. From what I understand about the MMU and virtual memory,
the CPU generates a virtual address, which is then mapped either to somewhere on disk or to some page in RAM. But how exactly is the memory space kept separate?
You are highly confused here. The operating system maintains page tables that do the mapping of logical pages to physical page frames. Each process sees logical pages numbered 0 ... N that may or may not be mapped to physical page frames. The MMU uses the page tables to do the translation.
Process X can access page #2 and Process Y can access page #2 but they will usually map to different physical page frames. By mapping the same logical address to different physical pages, the operating system keeps the processes separated.
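A toy sketch of that lookup in C (a single, flat page table with made-up sizes; real MMUs use multi-level tables plus permission bits):

    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* 4 KB pages */
    #define NUM_PAGES 16u     /* toy address space of 16 pages */

    /* One table per process: logical page number -> physical frame number. */
    struct page_table {
        unsigned frame[NUM_PAGES];
    };

    /* Translate a virtual address through one process's page table. */
    static unsigned translate(const struct page_table *pt, unsigned vaddr) {
        unsigned page   = vaddr / PAGE_SIZE;   /* which logical page     */
        unsigned offset = vaddr % PAGE_SIZE;   /* offset within the page */
        return pt->frame[page] * PAGE_SIZE + offset;
    }

    int main(void) {
        /* Two processes both use logical page #2, backed by different frames. */
        struct page_table proc_x = { .frame = { [2] = 7 } };
        struct page_table proc_y = { .frame = { [2] = 42 } };

        unsigned vaddr = 2 * PAGE_SIZE + 0xAA;   /* page #2, offset 0xAA */
        printf("process X: virtual %#x -> physical %#x\n", vaddr, translate(&proc_x, vaddr));
        printf("process Y: virtual %#x -> physical %#x\n", vaddr, translate(&proc_y, vaddr));
        return 0;
    }

The same virtual address comes out at two different physical addresses because each process carries its own table.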

How the OS handles fragmentation in the virtual address space

As far as I know, the paging system eliminates external fragmentation in the physical address space, but what about fragmentation in the virtual address space?
In modern OSes the virtual address space is used per process (the kernel has its own dedicated virtual range), which means that the demands are much lower compared to the whole OS. The virtual address space is usually large enough (2-3 GB per process on x86 and multiple TB (8 on Windows) on x64 machines) that fragmentation is not as big an issue as it is for the OS-wide physical address space. Still, the issue can arise, especially for long-running and memory-hungry applications on x86 or other 32-bit architectures. For this the OS provides mechanisms, for example in the form of the heap code. An application usually reserves one or more memory ranges as heap(s) when it starts and later allocates the required chunks of memory from there (e.g. via malloc). There are a variety of implementations that handle fragmentation of the heap in different ways. Windows provides a special low-fragmentation heap implementation that can be used if desired. Everything else is usually up to the application or its libraries.
Let me add a qualification to your statement. Paging systems nearly eliminate fragmentation in the physical address space when the kernel is pageable.
On some systems, the user-mode page tables are themselves pageable. On others, they occupy physical locations that are not pageable. Then you can get fragmentation.
Fragmentation in the virtual address space tends to occur in heap allocation. The challenge of heap managers is to manage the space while minimizing fragmentation.
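A contrived C illustration of how a heap fragments: many small blocks are allocated, every other one is freed, and although plenty of memory is free in total, no single free chunk is large enough for a big request. Whether and how the final allocation succeeds depends entirely on the allocator and the size of the address space; on 64-bit systems it will almost certainly be satisfied with fresh pages.

    #include <stdio.h>
    #include <stdlib.h>

    #define SMALL (64 * 1024)   /* 64 KB blocks */
    #define COUNT 1024          /* 1024 of them */

    int main(void) {
        void *blocks[COUNT];

        /* Fill a region of the heap with small allocations... */
        for (int i = 0; i < COUNT; i++)
            blocks[i] = malloc(SMALL);

        /* ...then free every other one. The freed space adds up to
         * COUNT/2 * SMALL bytes, but no two free chunks are adjacent. */
        for (int i = 0; i < COUNT; i += 2)
            free(blocks[i]);

        /* A single request for that total size needs one contiguous run of
         * virtual addresses, which the scattered holes cannot provide, so
         * the allocator has to grow the heap or map fresh pages instead. */
        void *big = malloc((COUNT / 2) * (size_t)SMALL);
        printf("big allocation %s\n", big ? "succeeded" : "failed");

        free(big);
        for (int i = 1; i < COUNT; i += 2)
            free(blocks[i]);
        return 0;
    }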

When do memory addresses get assigned?

Consider the following CPU instruction, which takes the value in memory at address 16777386 (decimal) and stores it in Register 1:
Move &0x010000AA, R1
Traditionally programs are translated to assembly (machine code) at compile time. (Let's ignore more complex modern systems like jitting).
However, if this address allocation is completed statically at compile time, how does the OS ensure that two processes do not use the same memory (e.g. if you ran the same compiled program twice concurrently)?
Question:
How, and when, does a program get its memory addresses assigned?
Virtual Memory:
I understand most (if not all) modern systems use a Memory Management Unit in hardware to allow for the use of virtual memory, with the high-order bits of an address being used to select which page. This would allow for memory protection if each process used different pages. However, if this is how memory protection is enforced, the original question still persists, only this time with how page numbers are assigned.
EDIT:
CPU:
One possibility is that the CPU can handle memory protection by requiring that a process ID be assigned by the OS before executing memory-based instructions. However, this is only speculation, and it requires hardware support in the CPU architecture, something I'm not sure RISC ISAs would be designed to do.
With virtual memory each process has a separate address space, so 0x010000AA in one process will refer to a different value than in another process.
Address spaces are implemented with kernel-controlled page tables that the processor uses to translate virtual page addresses to physical ones. Having two processes use the same page number is not an issue, since the processes have separate page tables and the physical memory mapped behind them can differ.
Usually executable code and global variables will be mapped statically, the stack will be mapped at a random address (some exploits are more difficult that way), and dynamic allocation routines will use syscalls to map more pages.
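For example, on a POSIX system an allocator can ask the kernel for more pages with mmap; a minimal sketch (real allocators wrap calls like this, or brk, behind malloc):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        /* Ask the kernel to map 16 new anonymous pages (64 KB with 4 KB
         * pages) somewhere in this process's virtual address space. */
        size_t len = 16 * 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("kernel chose virtual address %p for the new pages\n", p);
        munmap(p, len);
        return 0;
    }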
(Ignoring the Unix fork.) The initial state of a process's memory is set up by the executable loader. The linker defines the initial memory state and the loader creates it. That state usually includes memory for static data, executable code, writable data, and the stack.
In most systems a process can modify the address space by adding pages (possibly removing them as well).
[Ignoring system addresses] In virtual (logical) memory systems each process has an address space starting at zero (usually the first page is not mapped). The address space is divided into pages. The operating system maps (and remaps) logical pages to physical pages.
Address 0x010000AA in one process then corresponds to a different physical memory address in each process.
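A small POSIX sketch of that effect: after a fork, parent and child print the same virtual address for a global variable, yet see different values in it, because the child's write is served by a different physical frame (copy-on-write):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 1;   /* a global with one fixed virtual address in this program */

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            value = 2;   /* the write gives the child its own physical frame */
            printf("child : &value=%p value=%d\n", (void *)&value, value);
            return 0;
        }
        wait(NULL);
        printf("parent: &value=%p value=%d\n", (void *)&value, value);
        return 0;
    }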

What's the difference between virtual address space and the actual address space of the computer?

I thought that virtual address space was a section of RAM allocated to a specific process. But the book I'm reading says that 4 GB is the standard limit of virtual address space. Isn't that the entire amount of RAM? If that is the case, then I'm confused about what virtual address space is. Can anyone enlighten me?
That's the whole point of virtual addresses: the OS handles the physical memory, while the process handles its own virtual memory, which is mapped to whatever memory the OS has available, not necessarily RAM.
On a 32-bit operating system the virtual address space (VAS) is, as you say, usually 4 GiB. 32 bits give you 2^32 addresses (0 ... 2^32 - 1), each addressing one byte.
You could have more or less physical RAM and still have a 4 GiB VAS for each and every process running. If you have less physical RAM, the OS would usually swap pages out to the hard drive.
The process doesn't need to know any of this; it can use the full VAS it is given by the OS, and it's the OS's job to supply the physical memory.
(This is actually just a dumbed-down version of the Wikipedia article on VAS.)
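To see that decoupling in practice, here is a small sketch, assuming a 64-bit Linux-like system (MAP_NORESERVE and the default overcommit behaviour are Linux specifics), that reserves far more virtual address space than most machines have RAM; no physical memory is consumed until individual pages are touched:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        /* Reserve 64 GB of virtual address space. The kernel only records
         * the mapping; physical frames are assigned page by page, on first
         * touch. */
        size_t len = 64ULL << 30;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        p[0] = 1;   /* faults in exactly one physical frame */
        printf("reserved %zu bytes of virtual space at %p\n", len, (void *)p);
        munmap(p, len);
        return 0;
    }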

Resources