Paging in some processor makes it possible to map the virtual address (A2345678) to the physical address (823C5678). However, it is not possible
to map the virtual address (345678) to (2ABC678). What can we conclude
about the size of a frame, a page, the size of virtual memory, and the size of physical
memory?
What I think about it:
(A2345678) -> (823C5678)
So the offset is at most 19 bits, which means the page (and frame) size is at most 2^19, like in my previous question.
When it comes to the size of virtual memory and physical memory, I can conclude nothing.
Similarly, I don't know what the information about the impossible mapping tells me.
Can you try to explain it to me?
I do see something we can conclude after all:
If a mapping to physical address 0x823C5678 is possible, physical memory is at least that large. (Assuming there aren't any holes in physical address space; not a good assumption on real hardware, but whatever. We can tell that physical address space is at least that big, even if it doesn't all map to DRAM or MMIO).
Similarly, the valid virtual address 0xA2345678 gives us a lower bound on the virtual address size. Presumably all the virtual address bits can be 1, so the highest possible virtual address is at least 0xFFFFFFFF. i.e. virtual addresses are at least 32 bits, but could be any larger size.
This reasoning applies to physical address space, but not the size of physical memory. (e.g. in a computer with 19GiB of RAM, the highest valid physical address isn't 2^n - 1.)
The fact that you can't map 0x345678 to 0x2ABC678 does tell us that the page size is greater than 2^12. The physical address is below the address that was mappable, so we can rule out that possible reason for the mapping being impossible. I think too high and misaligned are the only possible reasons for a mapping not being possible.
(0xc = 0b1100, while 0x5 is 0b0101, so the common bits are only 0x678.)
We can assume that physical memory is a whole number of pages, so we can round up the lowest possible end of physical memory to the next multiple of 2^13.
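If it helps, here is a small sketch of the bit-counting argument above; the addresses are the ones from the question, and the helper is just an illustration of how I'd check it.

```python
def common_low_bits(a, b):
    # Index of the lowest bit where a and b differ (assumes a != b),
    # i.e. how many low-order bits the two addresses have in common.
    diff = a ^ b
    return (diff & -diff).bit_length() - 1

# The mapping that works: the page offset can be at most this wide.
print(common_low_bits(0xA2345678, 0x823C5678))  # 19 -> page size <= 2^19

# The mapping that fails: if the page size were <= 2^12 the offsets would
# already match and the mapping would be possible, so page size > 2^12.
print(common_low_bits(0x345678, 0x2ABC678))     # 12 -> page size >= 2^13
```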
I want to ask some questions about this diagram showing main memory with the OS and different processes: how can I compute the size of main memory in Kbytes? And what will happen if Process B generates a logical address of 200? Will the CPU return a physical address or an error?
I'd assume the unlabeled numbers on the left are addresses in bytes; which would imply there's 2048 bytes (or 2 KiB) of something (virtual space, or physical space, or maybe even RAM if there's no devices mapped into the physical space). Of course it could just as easily be 2048 bits, or 2048 (36-bit) words, or..
If Process B tries to access logical address of 200; it might work (no security), or it might cause some kind of trap/exception because the process doesn't have permission to access the operating system's area; or it could be impossible for the process to do that (e.g. maybe the design of the CPU restricts the process to unsigned offsets from a base address of 1203).
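To make that last possibility concrete, here is a toy base-and-limit sketch. The base of 1203 comes from the scenario above; the limit is a made-up placeholder, since the diagram isn't reproduced here and I can't know Process B's actual partition size.

```python
BASE = 1203   # base address of Process B's region, as assumed above
LIMIT = 300   # hypothetical size of Process B's partition (illustrative only)

def translate(logical_addr):
    # Base-and-limit relocation: out-of-range accesses trap instead of
    # reaching the OS area or another process's memory.
    if logical_addr < 0 or logical_addr >= LIMIT:
        raise MemoryError("trap: address outside the process's partition")
    return BASE + logical_addr

print(translate(200))  # 1403: a physical address inside Process B's region
```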
In some system, paging mapped a virtual address (a8b43f)16 to a physical address (13efd43f)16.
What can be inferred about the page size?
While this is not enough information to determine anything for certain, you can infer an upper limit on the page size by noting that the lower 13 bits of both addresses are the same. Since the page offset occupies the low-order bits of the address, the number of shared low bits bounds how wide the offset, and therefore the page, can be. 0x00A8B43F and 0x13EFD43F share the same lower 13 bits (0b1_0100_0011_1111). Thus, the maximum the page size can be is 2^13 words, or 8 kwords. If the memory is byte addressable, this means a page size of 8 KB.
However, without more information, knowing the exact page size is not possible as the shared bits might have come from a convenient mapping.
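A quick way to check the shared-bits claim, using the two addresses from the question:

```python
# The low-order bits shared by the two addresses bound the page-offset width,
# and therefore the page size, from above.
va, pa = 0x00A8B43F, 0x13EFD43F
shared = ((va ^ pa) & -(va ^ pa)).bit_length() - 1  # index of lowest differing bit
print(shared)       # 13 -> the offset is at most 13 bits wide
print(2 ** shared)  # 8192 -> at most 8 kwords (8 KB if byte addressable)
```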
A microprocessor is byte addressable with a 24-bit address bus and a 16-bit data bus, and one word contains two bytes. I was asked a question regarding attaching peripherals, adding memory, and address space, and there are a few general concepts I don't see why they work.
Why is it that to calculate the address space you use the address bus, not the data bus? Is the address space a function of the address bus, or does it have to do with the microprocessor? How is it relevant that one word contains two bytes?
Why is it that to calculate the address space you use the address bus not the data bus?
Because it's the address bits that go out to the memory subsystem to tell them which memory location you want to read or write. The data bits just carry the data being read or written.
Is the address space a function of the address bus or does it have to do with the microprocessor?
Yes, the address space is a function of the address bus though there are tricks you can use to expand how much memory you can use.
An example of that is bank switching which gives you more accessible memory but no more address space (multiple blocks of memory co-exist at the same address, one at a time).
Another example is shown below where you can effectively double the usable memory, provided you're willing to only read and write words.
How is it relevant that one word contains two bytes?
The data bus size generally dictates the size of a memory cell. Larger memory cells can mean you can have more memory available to you but not more memory cells.
With your example, assuming you can only access words, you could get 16 megawords which is 32 megabytes.
This depends, of course, on how the memory is put together. It may be that you are able to access memory on individual byte boundaries (e.g., bytes 0/1 or 1/2 or 2/3) rather than just word boundaries, which would mean you don't actually get that full 32MB but only 16MB (plus maybe one extra byte when you read the word at address FFFFFF).
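A back-of-the-envelope check of those figures, under the word-only-access assumption above:

```python
# Address space is set by the address bus width; the data bus width only sets
# how much is transferred per access (here, one 2-byte word).
ADDRESS_BITS = 24
BYTES_PER_WORD = 2

byte_space = 2 ** ADDRESS_BITS                    # 16,777,216 addresses = 16 MiB
word_space = (2 ** ADDRESS_BITS) * BYTES_PER_WORD # 33,554,432 bytes    = 32 MiB

print(byte_space // 2**20, "MiB if each address names a byte")
print(word_space // 2**20, "MiB if each address names a 16-bit word")
```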
I was reading up on virtual memory, and from what I understand, each process has its own VM table that maps VM addresses to physical addresses in real memory. So if a process allocates objects contiguously, they can potentially be stored in completely different places in physical memory. My question is: if I allocate an array, which is supposed to be stored in a contiguous block of memory, and the size of the array requires more space than one page can provide, then from what I understand the array will be stored contiguously in VM but possibly in completely different locations in PM. Is this correct? Please correct me if I misunderstood how VM works. And if it is correct, does that mean we are only concerned with whether allocation is contiguous in VM?
Whether or not something that overlaps a page boundary is actually contiguous in Physical Memory is never really knowable with modern memory handlers. Memory glue logic essentially treats all addressable memory pages as an unordered set, and the ordering is essentially associated with a process; there's no guarantee that for different processes that end up getting assigned the same two physical memory pages (at different points in time) that the expressed relationship between those physical pages will be the same. Effectively, there's a translation layer between the CPU and the memory that handles this stuff.
That's right. Arrays only need to look contiguous to your application, but they may be physically scattered in memory.
I just wanted to add/make it clear that from a user space program's point of view, a chunk of allocated memory always appears contiguous. The operating system in conjunction with the CPU's Memory Management Unit (MMU) handles all virtual to physical memory mappings and the programmer never needs to worry about how this mapping is handled (unless, of course, said programmer is writing an operating system).
A compiler (or one who writes code in assembly) can treat a program's addresses as starting from 0 and going up until the largest address needed for that particular program. The operating system then creates a page table for each process and uses this table to partially decode a physical address for each virtual memory location. The OS treats an address in a program as two separate parts, the page address and the offset into that page. Then, the MMU translates a page address into a physical frame address. Note that a physical memory "frame" is analogous to the conceptual "page" from the standpoint of the OS; these two are of the same size (eg 4096 bytes).
Since physical memory is divided into equally sized frames, and page size is the same as frame size you can know how much of your virtual address is used as a page location and how much is an offset into that page. For instance, if your OS "allocates" 4 gigabytes to each process (as is the case in Linux), and your page/frame size is 4096 bytes, you can know that 20 bits (4,294,967,296 bytes / 4096 bytes = 2 ^ 20 = 1,048,576 pages/page addresses) of a 32 bit address are used as a page address, which will then be converted to a physical frame address by the MMU, and the remaining 12 bits are used as an offset to determine the location of the address starting from the beginning of the page/frame.
VM (user space) address --> page + offset (OS) --> frame + offset (MMU) = physical address
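Here's a minimal sketch of that split, assuming 32-bit virtual addresses and 4096-byte pages as in the example above; the page-table contents are made up purely for illustration.

```python
PAGE_SIZE = 4096                           # 2^12 bytes, as in the example above
OFFSET_BITS = PAGE_SIZE.bit_length() - 1   # 12

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0x12345: 0x00ABC}

def translate(virtual_addr):
    vpn = virtual_addr >> OFFSET_BITS        # upper 20 bits: page number
    offset = virtual_addr & (PAGE_SIZE - 1)  # lower 12 bits: offset into the page
    frame = page_table[vpn]                  # OS/MMU lookup (page fault if missing)
    return (frame << OFFSET_BITS) | offset

print(hex(translate(0x12345678)))  # 0xabc678: same offset, different frame
```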
Our teacher has asked us around 50 true or false questions in preparation for our final exam. I could find an answer for most of them online or by asking relatives. However, these 4 questions are driving me crazy. Most of these questions aren't that hard, I just can't get any satisfying answer anywhere. Sorry, the original questions are not written in English; I had to translate them myself. If you don't understand something, please tell me.
Thanks!
True or false
The size of the manipulated address by the processor determines the size of the virtual memory. However, the size of the memory cache is independent.
For a long time, DRAM technology remained incompatible with the CMOS technology used for the standard logic in processors. This is the reason DRAM memory is (most of the time) placed outside the processor (on a different chip).
Pagination lets multiple virtual address spaces correspond to the same physical address space.
An associative cache memory with sets of 1 line is an entirely associative cache memory, because one memory block can go in any set, since each set is the same size as a block.
"Manipulated address" is not a term of the art. You have an m-bit virtual address mapping to an n-bit physical address. Yes, a cache may be of any size up to the physical address size, but typically is much smaller. Note that cache lines are tagged with virtual or more typically physical address bits corresponding to the maximum virtual or physical address range of the machine.
Yes, DRAM processes and logic processes are each tuned for different objectives, and involve different process steps (different materials and thicknesses to lay down DRAM capacitor stacks/trenches, for example) and historically you haven't built processors in DRAM processes (except the Mitsubishi M32RD) nor DRAM in logic processes. Exception is so-called eDRAM that IBM likes to use for their SOI processes, and which is used as last level cache in IBM microprocessors such as the Power 7.
"Pagination" is what we call issuing a form feed so that text output begins at the top of the next page. "Paging" on the other hand is sometimes a synonym for virtual memory management, by which a virtual address is mapped (on a page by page basis) to a physical address. If you set up your page tables just so it allows multiple virtual addresses (indeed, virtual addresses from different processes' virtual address spaces) to map to the same physical address and hence the same location in real RAM.
"An associative cache memory with sets of 1 line is an entierly associative cache memory, because one memory block can go in any set since each sets are of the same size that of the block."
Hmm, that's a strange question. Let's break it down. 1) You can have a direct mapped cache, in which an address maps to only one cache line. 2) You can have a fully associative cache, in which an address can map to any cache line; there is something like a CAM (content addressable memory) tag structure to find which if any line matches the address. Or 3) you can have an n-way set associative cache, in which you have, essentially, n sets of direct mapped caches, and a given address can map to one of n lines. There are other more esoteric cache organizations, but I doubt you're being taught them.
So let's parse the statement. "An associative cache memory". Well that rules out direct mapped caches. So we're left with "fully associative" and "n-way set associative". It has sets of 1 line. OK, so if it is set associative, then instead of something traditional like 4 ways x 64 lines/way, it is n ways x 1 line/way. In other words, it is fully associative. I would say this is a true statement, except the term of the art is "fully associative" not "entirely associative."
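If it helps to see the spectrum, here's a tiny sketch (with an arbitrary line count) showing that shrinking the number of sets down to 1 leaves you with a fully associative cache, mirroring the breakdown above:

```python
# For a fixed number of cache lines, the associativity fixes the set count:
# direct mapped and fully associative are the two extremes.
TOTAL_LINES = 256   # illustrative figure only

for ways in (1, 4, TOTAL_LINES):
    sets = TOTAL_LINES // ways
    kind = ("direct mapped" if ways == 1
            else "fully associative" if sets == 1
            else f"{ways}-way set associative")
    print(f"{ways:>3} lines/set x {sets:>3} sets -> {kind}")
```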
Makes sense?
Happy hacking!
True, more or less (it depends on the accuracy of your translation I guess :) ) The number of bits in addresses sets an upper limit on the virtual memory space; you could, of course, choose not to use all the bits. The size of the memory cache depends on how much actual memory is installed, which is independent; but of course if you had more memory than you can address, then it still can't be used.
Almost certainly false. We have RAM on separate chips so that we can install more without building a whole new computer or replacing the CPU.
There is no a-priori upper or lower limit to the cache size, though in a real application certain sizes make more sense than others, of course.
I don't know of any incompatibility. The reason why we use SRAM as on-die cache is because it's faster.
Maybe you can force an MMU to map different virtual addresses to the same physical location, but usually it's used the other way around.
I don't understand the question.