I have been reading about how the PCI subsystem gets configured at boot: the BIOS's involvement and the mapping of device addresses (i.e., the BARs) into system memory.
From the diagram above, I am assuming that the address space is a 4GB physical address space backed by 4GB of physical RAM. As can be seen, device memory is mapped above 3GB. What happens to that region on a system with only 2GB of physical RAM?
If my assumption is wrong and the map above instead shows virtual addresses for a 32-bit system, then how is the device memory mapped to physical addresses for DMA? Is the mapping permanent (non-swappable and unchangeable)?
Please help me understand this concept.
If I understand your question, nothing different happens on a 2GB system. There will simply be a "hole" in the physical address space between 2GB and 3GB; that is, no hardware device will decode the addresses there. But otherwise there is no significant difference with respect to PCI devices: they will still be assigned space in the region above 3GB.
It's important to note that the map you show above (physical address space) doesn't necessarily stop at 4GB (since about 1995). Most modern x86 processors have more than 32 address bits. This is why you now often get systems with more than 4GB RAM. And there may be additional holes in the address space above 4GB too.
Actually using the RAM above 4GB requires either the processor's 64-bit mode or PAE (Physical Address Extension), which offers a way to address more than 4GB of physical space in 32-bit mode. [There is also PSE-36 (36-bit Page Size Extension), but that's much less commonly used.]
The map you're showing above is specific to the physical address space. The x86 virtual address space (when the processor is operating in 32-bit mode) is 4GB in size, but it does not have all the reserved areas in your diagram. Indeed, the layout of the virtual address space is entirely determined by the operating system. The usual configuration in Linux reserves the part of the virtual address space below the 3GB line for user mode, and the area above 3GB for kernel mode. However, this split can be changed via the kernel config.
Mapping of the physical address space into virtual address space is managed by the operating system kernel on a page by page basis. A virtual page may be directed either to system RAM or to a PCI device. And note that the page size can vary too, depending on how the processor and page tables are configured.
Related
I've been looking around for answers, but none have been satisfying so far, and many are misleading, with mixed-up terminology. Now here's the thing... physical memory is the kind of memory:
i. seen by the CPU as the final memory on its address bus. That means it has already passed the MMU translation / paging stage, right?
ii. calculated from the address bus width. If the bus is 48 bits wide, then the system should have 256TB of physical memory addresses available, right?
Now my question;
If I have 8GB of RAM, how on earth are the much larger physical addresses generated by the CPU mapped onto the physical RAM? What translation unit sits between the physical address space and the installed RAM? From what I've read, a physical address has already passed the virtual / MMU translation phase.
And no, I am not talking about virtual memory. I am asking about the relationship between physical memory (the addresses that appear on the CPU address bus) and the actual physical RAM.
Thanks.
In a system memory map (also called the CPU memory map), address ranges are allocated to RAM, to MMIO for PCI devices, and so on.
Let's take an example where the address range for RAM starts at address 0 and goes up to 512MB, which includes the DOS compatibility memory space from 0 to 1MB.
Now when we say that this 512MB region will be mapped into memory, does this mean that address 0 in the CPU address space will be mapped to address 0 in the physical RAM, and so on up to 512MB? If not, how is the mapping done?
Also, will the memory address ranges allocated in the CPU address space be exactly equal to the size of the RAM installed in the system? If not, how does the mapping take place in that case?
Also, how will the memory mapping of the DOS compatibility region be done? Will that region of memory go unused under an OS other than DOS?
Also, does memory mapping mean that only CPU-generated addresses from 0 to 512MB will be redirected to the RAM, and that any other address generated by the CPU will never be directed to the RAM by the MMU? In that case, would every application have to use addresses in the 0 to 512MB range in order to access memory?
I'm considering an x86 system here.
Before going into the question, it's worth taking a look into DRAM architecture.
Now when we say that this 512MB region will be mapped into memory, does this mean that address 0 in the CPU address space will be mapped to address 0 in the physical RAM, and so on up to 512MB? If not, how is the mapping done?
There isn't exactly a concept of 'address 0' in DRAM. Instead, there is an architecture of channels, DIMMs, ranks, chips, banks, rows and columns, and the DRAM controller generates 'commands' that activate parts of the DRAM and select data from the cells.
So the answer to the first question is no. As other people have mentioned, the exact mapping is complicated and undocumented. If you are interested, AMD does provide documentation (Sections 2.10 and 3.5), and there are attempts at reverse-engineering Intel's mapping (Section 4).
Also, will the memory address ranges allocated in the CPU address space be exactly equal to the size of the RAM installed in the system? If not, how does the mapping take place in that case?
The answer is also no, for several reasons. You identified one of them yourself: the physical address space represents more than just RAM; there are also PCIe devices, ROM (where the BIOS is located), and so on, and thus there are memory holes. To inspect what each physical address range corresponds to on a Linux system, take a look at /proc/iomem, which lists the mappings.
Also, how will the memory mapping of the DOS compatibility region be done? Will that region of memory go unused under an OS other than DOS?
Yes, I believe these are unused memory holes.
Also, does memory mapping mean that only CPU-generated addresses from 0 to 512MB will be redirected to the RAM, and that any other address generated by the CPU will never be directed to the RAM by the MMU? In that case, would every application have to use addresses in the 0 to 512MB range in order to access memory?
The MMU serves a completely different purpose. Take a look at virtual-to-physical address translation.
I read this from a previous stack overflow answer:
At initial power on, the BIOS is executed directly from ROM. The ROM chip is mapped to a fixed location in the processor's memory space (this is typically a feature of the chipset). When the x86 processor comes out of reset, it immediately begins executing from 0xFFFFFFF0.
Follow up questions,
Is this address 0xFFFFFFF0 hardwired solely to access the system BIOS ROM, and once the system is up and running, can this address no longer be used by RAM?
Also, when the address 0xFFFFFFF0 is being used to access the system BIOS ROM, is the CPU accessing it as an I/O device or as a memory device?
At power up, it is ROM. Has to be or the CPU would be unable to boot. Some chipsets have register bits that allow you to unmap the BIOS flash chip from the memory address space. Of course you should not do this while executing from ROM!
There is a common technique on PC hardware called "shadowing" where the BIOS will copy the contents of the ROM chip into RAM mapped at the same address. RAM is generally much faster than ROM, so it can speed up the system.
As for your second question, it is a memory device. It must be for the following reasons:
I/O addresses are 16 bits wide, not 32.
An x86 processor cannot execute code from I/O space; you cannot point the instruction pointer at an I/O address.
It's mapped into the global memory space and is addressed in the same way. Conventionally, RAM shouldn't be mapped to any range of addresses used by other devices. This is common enough: you might remember that a few years ago, before 64-bit operating systems became standard on home PCs, a user could have 4 GB of physical memory installed but only about 3.5 GB accessible, because the graphics card was mapped into 512 MB of the address space.
I thought that virtual address space was a section of RAM allocated to a specific process. But the book I'm reading says that 4 GB is the standard limit of virtual address space. Isn't that the entire amount of RAM? If that is the case, then I'm confused about what virtual address space is. Can anyone enlighten me?
That's the whole point of virtual addresses: The OS handles the physical memory, the process handles its own, virtual memory which is mapped to any memory the OS has available, not necessarily RAM.
On a 32 bit operating system the virtual address space (VAS) is, as you say, usually 4 GiB. 32 bits give you (2^32) addresses (0 ... (2^32)-1), each addressing one byte.
You could have more or less physical RAM and still have a 4 GiB VAS for each and every process running. If you have less physical RAM, the OS will usually swap to hard drives.
The process doesn't need to know any of this, it can use the full VAS it is given by the OS and it's the OS' job to supply the physical memory.
(This is actually just a dumbed-down version of the Wikipedia article on VAS.)
An excerpt of Wikipedia's article on Physical Address Extension:
x86 processor hardware-architecture is augmented with additional address lines used to select the additional memory, so physical address size increases from 32 bits to 36 bits. This, theoretically, increases maximum physical memory size from 4 GB to 64 GB.
Along with an image explaining the mechanism:
But I can't see how the address space is expanded from 4GB to 64GB. 4 * 512 * 512 * 4K still equals 4GB, doesn't it?
x86 processors running in 32-bit mode use page translation for memory addresses. This means there is a mapping layer between the addresses used by code (both kernel and user mode) and the actual physical memory. E.g., in Windows all processes map the image of the .exe file to the same address.
The mapping layer between virtual and physical addresses can normally only map 4GB of memory. With PAE enabled, 32-bit virtual addresses are mapped to 36-bit physical addresses. Still, a single process cannot access more than 4GB at a time. That's what you see in the image you've pasted: the 32-bit address space of one process. You can also see that the PTE (Page Table Entry) containing the physical address is 64 bits wide.
A PAE aware application can swap in and out different parts of memory into the visible address space to make use of more than 4GB of RAM, but it can only see 4GB at any single point in time.
That's the virtual address space that's still 4GB. The physical address space is larger because the page table entries contain longer physical addresses of pages.
See, the picture says "64-bit PD entry" and "64-bit PT entry". Those extra 32 bits of the entries make up the longer physical addresses of pages.
With this particular scheme your application can still address up to 4GB of memory (minus the kernel portion that's generally inaccessible due to protection) at a time, but if you consider several applications, they can address more than 4GB of memory together.
It does not. The address space never changes. What happens is that via API calls you can SWAP OUT areas of memory against other areas of memory. So you still have only a full address space of 4GB (2-3GB usable), but you can have another 2000 blocks of 512MB that you can swap into one part of the address space.