Assume there is an MCU (like the Cypress PSoC 4 chip I'm using). It contains flash memory (to store firmware) and RAM (probably SRAM) inside the chip. I understand that even these two components need to be memory mapped in order for the processing unit to access them.
However, the flash memory and SRAM have to be mapped every time the MCU is powered on, right?
Then where is the configuration for the memory map stored?
Is it somehow hardwired inside the MCU? Or is it stored in a separate, hidden little piece of RAM?
I once thought the memory map info might be located at the front of the firmware, but that doesn't make sense, because the firmware is stored in the flash and the MCU would have no idea where the flash is mapped. So I think that's a wrong idea.
By the way, is a memory map even configurable?
Yes, it is hardwired in the MCU at boot. Some MCUs allow remapping once up and running, but in order to boot, the flash/ROM has to be mapped to a known place, and a sane design would also have the on-chip SRAM mapped and ready to use at a known location on boot.
Some use straps (pins externally hardwired high or low) to manipulate how the MCU boots; sometimes that includes a different mapping. A single strap could, for example, choose between mapping a bootloader ROM or the user flash into the boot space of the processor. But that, like the other mapping choices, would be documented in the chip vendor's documentation for the part.
Some MCUs allow you, in software after boot, to move RAM into the vector/exception table area so you can manipulate it at run time and not be limited to what was in the flash at boot. Some MCUs go so far as to have an MMU-like feature, but I have a hard time calling those MCUs, as they can run in the hundreds of MHz and have floating-point units, caches, etc. Technically they are an SoC with RAM and flash on chip, so they are classified as an MCU.
Your thinking is sane: the flash and SRAM mappings are in logic, and at reset you can know where things will be. It is in the documentation for that product.
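To make the run-time remapping mentioned above concrete, here is a minimal sketch of relocating the vector table into SRAM on a Cortex-M3/M4 class MCU. The VTOR register at 0xE000ED08 does exist on those cores, but the flash base, vector count, and everything else below are assumptions for illustration; note that the plain Cortex-M0 found in some PSoC 4 parts has no VTOR, so check the reference manual for your device.

```c
#include <stdint.h>
#include <string.h>

#define FLASH_BASE   0x00000000UL   /* assumed: vector table lives here at reset          */
#define NUM_VECTORS  64             /* assumed: enough entries for this hypothetical part */
/* Vector Table Offset Register, Cortex-M3/M4 (not present on plain Cortex-M0) */
#define SCB_VTOR     (*(volatile uint32_t *)0xE000ED08UL)

/* SRAM copy of the vector table; VTOR requires suitable alignment (256 bytes here) */
static uint32_t ram_vectors[NUM_VECTORS] __attribute__((aligned(256)));

void relocate_vectors(void)
{
    /* Copy the flash vector table into SRAM... */
    memcpy(ram_vectors, (const void *)FLASH_BASE, sizeof(ram_vectors));
    /* ...then point the core at the SRAM copy so individual handlers
       can be swapped at run time instead of being fixed in flash. */
    SCB_VTOR = (uint32_t)ram_vectors;
}
```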
I'm familiar with the MIPS architecture, and I know that MIPS has memory segments such as kseg0 and kseg1, which determine whether the segment is cached or mapped. For example, you should locate I/O devices (like a UART) in the uncached segment.
But I didn't find anything similar in the RISC-V architecture. So how does a RISC-V OS know whether an address should be mapped or not?
By the way: I know the value in the satp CSR describes the translation mode. When the OS is running, the value must be set to something other than "Bare" (MMU disabled) so that the OS can support virtual memory. So when the CPU accesses the UART address, the value in satp is still not "Bare"? But shouldn't it be "Bare"?
RISC-V is a family of instruction sets, which ranges all the way down to MCU-style processors that have no memory mapping and no memory protection mechanisms (Physical Memory Protection is optional).
From your question, I assume you are talking about processors that support User and Supervisor level ISA, as documented in the RISC-V privileged spec.
It sounds like you want a spec describing which physical addresses are cacheable. Looking at the list of CSRs, I believe this information is not in the CSRs because it is platform specific. In systems I've worked with, it is either hard-coded in platform drivers or passed via device-tree.
For Linux, the device-tree entries are not RISC-V specific: there are device tree entries specifying the physical address range of memory. Additionally, each I/O device would have a device tree entry specifying its physical address range.
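As a purely illustrative example of the "hard-coded in platform drivers" case: on a bare-metal RISC-V system running in M-mode with satp left as Bare, a driver just uses the device's physical address directly, and the platform's fixed PMAs make that region uncacheable I/O. The base address below is the one QEMU's virt machine uses for its NS16550 UART and is only an assumption for any other hardware.

```c
#include <stdint.h>

#define UART0_BASE 0x10000000UL   /* assumed: NS16550 UART base on QEMU's virt machine */
#define UART0_THR  (*(volatile uint8_t *)(UART0_BASE + 0x00)) /* transmit holding register */

static void uart_putc(char c)
{
    /* The volatile store goes straight to the device: no satp translation,
       and the platform's PMAs mark this region as non-cacheable I/O. */
    UART0_THR = (uint8_t)c;
}
```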
You can read the RISC-V privileged spec (The RISC-V Instruction Set Manual, Volume II: Privileged Architecture, section 3.5, Physical Memory Attributes).
"For RISC-V, we separate out specification and checking of PMAs into a separate hardware structure, the PMA checker. In many cases, the attributes are known at system design time for each physical address region, and can be hardwired into the PMA checker. Where the attributes are run-time configurable, platform-specific memory-mapped control registers can be provided to specify these attributes at a granularity appropriate to each region on the platform (e.g., for an on-chip SRAM that can be flexibly divided between cacheable and uncacheable uses)"
I think if you want to distinguish cacheable from non-cacheable regions in RISC-V, you need to design a PMA unit that, together with the MMU, checks the memory attributes.
I'm aware that in most modern architectures the CPU sends read and write requests to a memory management unit rather than directly to the RAM controller.
If other peripherals are also addressed, that is to say, read from and written to using an address bus, then are these addresses also accessed through a virtual address? In other words, to speak to a USB drive etc. does the CPU send the target virtual address to an MMU which translates it to a physical one? Or does it simply write to a physical address with no intermediary device?
I can't speak globally; there may be exceptions. But that is the general idea: the CPU memory interface goes completely through the MMU (and completely through a cache or layers of caches).
In order for peripherals to really work (otherwise a status register gets cached on the first read and subsequent reads get the cached version, not the real one), you have to mark the peripheral's address space as not cached. So for example on an ARM, and no doubt others, where you have separate I and D cache enables, you can turn on the I cache without the MMU, but to turn on the D cache and not have this peripheral problem you need the MMU on, with the peripheral space in the tables and marked as not cached.
It is up to the software designers to decide if they want to have the virtual addresses for the peripherals match the physical ones or to move the peripherals elsewhere; both have pros and cons.
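As a rough sketch of what "the peripheral space in the tables and marked as not cached" looks like, here are two 1MB section entries in an ARMv7-A short-descriptor first-level translation table: one for normal cacheable RAM and one identity-mapped Device (non-cacheable) region for a peripheral. The base addresses, and the choice of a flat virtual == physical mapping, are assumptions for illustration; a real port also has to set up domains, enable the MMU, and so on.

```c
#include <stdint.h>

#define SECTION    0x2u                   /* descriptor type bits [1:0] = section           */
#define AP_RW      (0x3u << 10)           /* full read/write access                         */
#define NORMAL_WB  ((0x1u << 12) | (1u << 3) | (1u << 2)) /* TEX=001 C=1 B=1: cacheable write-back */
#define DEVICE     (1u << 2)              /* TEX=000 C=0 B=1: Device memory, not cached     */

/* First-level table: 4096 entries of 1MB each, must be 16KB aligned */
static uint32_t l1_table[4096] __attribute__((aligned(16384)));

static void map_section(uint32_t va, uint32_t pa, uint32_t attrs)
{
    /* One entry per 1MB of virtual address space (domain 0 assumed) */
    l1_table[va >> 20] = (pa & 0xFFF00000u) | attrs | SECTION;
}

void setup_flat_map(void)
{
    /* assumed RAM base, mapped cacheable */
    map_section(0x80000000u, 0x80000000u, AP_RW | NORMAL_WB);
    /* assumed peripheral block (e.g. a UART), identity mapped, not cached */
    map_section(0x10100000u, 0x10100000u, AP_RW | DEVICE);
}
```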
It is certainly possible to design a chip/system where an address space automatically bypasses the MMU or cache. That can make the busses ugly, and/or the chip may have separate busses for peripherals and RAM, or other solutions, so the above is not necessarily a universal answer, but for, say, an ARM, and I would assume an x86, that is how it works. On the ARMs I am familiar with, the MMU and L1 cache are in the core; the L2 is outside, and the L3 is beyond that if you have one. The L2 (if you have one, from ARM) is literally between the core and the world, but the AXI/AMBA bus has cacheable settings, so each transaction may or may not be marked as cacheable; if not cacheable, it passes right through the L2 logic. If the MMU is enabled, it determines that on a per-transaction basis.
Actually, the virtual-to-physical translation is in the CPU for almost all modern (and at this point, even most old) architectures. Even the DRAM and PCIe controllers (previously in the Northbridge) made it onto the CPU. So a modern CPU doesn't even talk to an external RAM controller; it talks to DRAM directly.
If other peripherals are also addressed, that is to say, read from and written to using an address bus, then are these addresses also accessed through a virtual address?
At least in the case of x86, yes. You can virtually map your memory-mapped I/O ranges anywhere. Good thing too; otherwise the virtual address space would necessarily mirror the weird physical layout, with "holes" you couldn't map real RAM into, because then you'd have two things in the same place.
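One way to see this from user space is a sketch like the following, which maps a device's MMIO range into a process's virtual address space wherever the kernel chooses. It is Linux-specific, needs root and a kernel that permits /dev/mem access, and the physical address is only an assumption for illustration.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const off_t phys = 0xFED00000;              /* assumed MMIO physical address */
    int fd = open("/dev/mem", O_RDWR | O_SYNC); /* O_SYNC requests an uncached mapping */
    if (fd < 0) { perror("open"); return 1; }

    /* The kernel picks the virtual address; the physical MMIO range stays put. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, phys);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("register 0 reads as 0x%08x\n", regs[0]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}
```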
I understand that the computer loads the first sector of memory, known as the BIOS, which runs diagnostics on the hardware and then proceeds to load the OS. I guess my question leans toward the hardware side: how does the computer know which memory to boot from (RAM, ROM, flash, etc.)?

I understand the differences between the memory types, and I understand computers boot from the hard drive, but I'm attempting to make an 8-bit computer with a Z80 microprocessor, which will need to boot from ROM or flash memory. The only problem is that the processor reads from whatever memory the address pins are connected to, and there are no separate address pins for RAM and ROM. It's also impractical to run the system from ROM or flash due to the much slower read/write time compared to RAM. The Z80, to the best of my knowledge, doesn't have separate commands for reading from ROM and RAM, and it wouldn't matter even if it did, because the RAM will be blank upon powering up.

How does a computer choose to read only from ROM upon booting and then switch to RAM once the OS has been loaded? Is it hardwired in using logic gates? And how does a computer choose to write to flash memory or a hard drive instead of RAM once the OS has been loaded? Would flash memory be treated as a device? Or is this also hardwired into the motherboard using logic gates?

Sorry for giving so much background; I just don't want you to waste your time explaining things I've already grasped. I've researched this to a great extent and thought about it for hours on end and can't seem to figure it out, and everywhere I've looked doesn't explain how the computer chooses which memory to read from, it just says that it does. Thanks
I'm not sure I'm answering what you are asking, but I'll give it a try.
Some computers (at least, IBM PC-compatible computers), after powering up, usually run this BIOS (Basic Input/Output System) program. For this to happen, to the best of my knowledge, the hardware must make the jump to this code, and this code must be accessible (that is, mapped) in the physical address space, since that's where the CPU will execute code from. So a physical address space with some read-only area that this code is hard-wired into would do the trick.
Once the BIOS code is running, it can select how to proceed. It can copy a sector from a hard disk to memory (or a bunch of data from a flash drive) and then jump to it, or whatever. That's up to the BIOS writer.
I will try to explain the Pentium boot up process very briefly.
On the flash ROM mounted on the motherboard there is a small program called the BIOS (Basic Input/Output System). After you press the power button, the BIOS program is executed.
The BIOS contains low-level software that performs the following operations:
It checks how much RAM is installed and whether all the PCI and ISA bus peripherals are connected.
It checks whether all I/O devices are connected.
It scans a list of boot devices and selects the boot device based on the BIOS configuration set up earlier by the user.
Once the boot device is selected, the first sector from the boot device is read into memory and executed. It contains a simple program which examines the partition table and selects the active one (holding the OS). The secondary bootloader is read from that partition; this loader then reads the OS from the partition into memory and runs it. Once running, the OS asks the BIOS for the configuration info for each device and configures the new devices (those that have no stored configuration). After all device configurations are set, they are delivered to the kernel. Then it initializes its tables and background boot-up processes and starts the login program or GUI.
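To make the "examines the partition table and selects the active one" step concrete, here is a sketch of the classic 512-byte MBR layout that a first-stage loader walks. The struct and function names are my own, but the field layout and the 0xAA55 boot signature follow the long-standing PC convention.

```c
#include <stdint.h>

#pragma pack(push, 1)
struct mbr_partition_entry {
    uint8_t  status;        /* 0x80 = active/bootable, 0x00 = inactive */
    uint8_t  chs_first[3];  /* legacy CHS address of first sector      */
    uint8_t  type;          /* partition type code                     */
    uint8_t  chs_last[3];   /* legacy CHS address of last sector       */
    uint32_t lba_first;     /* first sector, as an LBA                 */
    uint32_t sector_count;  /* length in sectors                       */
};

struct mbr {
    uint8_t  bootstrap[446];            /* first-stage boot code  */
    struct mbr_partition_entry part[4]; /* partition table        */
    uint16_t signature;                 /* 0xAA55 boot signature  */
};
#pragma pack(pop)

/* Return the index of the active partition, or -1 if none/invalid. */
int find_active_partition(const struct mbr *m)
{
    if (m->signature != 0xAA55)
        return -1;
    for (int i = 0; i < 4; i++)
        if (m->part[i].status & 0x80)
            return i;
    return -1;
}
```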
I have been trying to understand I/O ports and their mapping into the memory and I/O address spaces. I read about "memory-mapped I/O" and was wondering how this is accomplished by the OS/hardware. Does the OS/hardware use some kind of table to map the address specified in the instruction to the respective port?
Implementations differ in many ways. But the basic idea is that when a read or write occurs for a memory address, the microprocessor outputs the address on its bus. Hardware (called an 'address decoder') detects that the address is for a particular memory-mapped I/O device and enables that device as the target of the operation.
Typically, the OS doesn't do anything special. On some platforms, the BIOS or operating system may have to configure certain parameters for the hardware to work properly.
For example, the range may have to be set as uncacheable to prevent the caching logic from reordering operations to devices that care about the order in which things happen. (Imagine if one write tells the hardware what operation to do and another write tells the hardware to start. Reordering those could be disastrous.)
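For instance, the reordering hazard just described looks like the following, for a purely hypothetical device with made-up register offsets; the region holding these registers has to be uncacheable (and the pointers volatile) so the two stores actually reach the device in this order.

```c
#include <stdint.h>

#define DEV_BASE  0x40001000UL   /* assumed device base address                      */
#define DEV_CMD   (*(volatile uint32_t *)(DEV_BASE + 0x0)) /* which operation to perform */
#define DEV_START (*(volatile uint32_t *)(DEV_BASE + 0x4)) /* writing 1 kicks it off     */

void start_operation(uint32_t cmd)
{
    DEV_CMD   = cmd;  /* program the operation first...                   */
    DEV_START = 1;    /* ...then tell the hardware to begin; swapping or
                         caching these writes would be disastrous         */
}
```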
On some platforms, the operating system or BIOS may have to set certain memory-mapped I/O ranges as 'slow' by adding wait states. This is because the hardware that's the target of the operation may not be as fast as the system memory is.
Some devices may allow the operating system to choose where in memory to map the device. This is typical of newer plug-and-play devices on the PC platform.
In some devices, such as microcontrollers, this is all done entirely inside a single chip. A write to a particular address is routed in hardware to a particular port or register. This can include general-purpose I/O registers which interface to pins on the chip.
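A tiny sketch of that last case, with made-up addresses (the base and register offset below are assumptions, not any particular part's map): on a microcontroller, setting a pin is just a store to a fixed address that the on-chip address decoder routes to the GPIO block instead of RAM.

```c
#include <stdint.h>

#define GPIO_BASE 0x40020000UL   /* assumed peripheral base address              */
#define GPIO_ODR  (*(volatile uint32_t *)(GPIO_BASE + 0x14)) /* assumed output data register */

void led_on(void)
{
    /* Read-modify-write that the address decoder routes to the GPIO block, not RAM */
    GPIO_ODR |= (1u << 5);
}
```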
Does the Windows Mobile operating system protect processes' memory from one another?
Can one badly written application crash some other application just by mistakenly writing over that application's memory?
Windows Mobile, at least in all current incarnations, is built on Windows CE 5.0 and therefore uses CE 5.0's memory model (which is the same as it was in CE 3.0). The OS doesn't actually do a lot to protect process memory, but it does enough to generally keep processes from interfering with one another. It's not hard and fast, though.
CE processes run in "slots", of which there are 32. The currently running process gets swapped to slot zero, and its addresses are re-based to zero (so all memory in the running process effectively has two addresses: the slot 0 address and its non-zero slot address). These addresses are protected (though there's a simple API call to cross the boundary). This means that pointer corruptions, etc. will not step on other apps, but if you want to, you still can.
Also, CE has the concept of shared memory. All processes have access to this area, and it is 100% unprotected. Your app may be using shared memory without knowing it: the memory manager can give you a shared address without you specifically asking, depending on your allocation and its size. If you have shared memory, then yes, any process can access that data, including corrupting it, and you will get no error or warning in either process.
Does the Windows Mobile operating system protect processes' memory from one another?
Yes.
Can one badly written application crash some other application just by mistakenly writing over that application's memory?
No (but it might do other things like use up all the 'disk' space).
Even if you're a device driver, to get permission to write to memory that's owned by a different process there's an API which you must invoke explicitly.
While ChrisW's answer is technically correct, my experience of Windows Mobile is that it is much easier to crash the entire device from an application than it is on the desktop. I could guess at a few reasons why this is the case:
The operating system is often much more heavily OEMed than desktop Windows; that is, the amount of manufacturer-specific low-level code can be very high, which leads to manufacturer-specific bugs at a level that can cause bad crashes. On many devices it is common to see a new firmware revision every month or so, where the revisions are fixes for such bugs.
Resources are scarcer, and an application that exhausts all available resources is liable to cause a crash.
The protection mechanisms and architecture vary quite a bit. The device I'm currently working with is SH4-based, while you mostly see ARM, x86, and the odd MIPS CPU.