Cmd in UBOOT is using physical address? - memory

I'm looking at cmd_mem.c in U-Boot. When it runs the memory tests, it just asks the user to provide an address (or uses the default value) and then starts reading and writing at that address. So does that mean U-Boot commands all operate on physical memory addresses? Or does it just mean it is testing virtual addresses...?
Thanks in advance

U-Boot does not implement virtual addressing.
So the addresses you specify for memory testing are physical addresses.
In the U-Boot environment you are dealing with linear addresses exactly as they are.
If the MMU is not configured (the way an OS would configure it), addresses are simply used as physical addresses, and that is what U-Boot does.
Thus if you are testing a range, say 0x1000-0x2000, it will literally test those physical addresses.
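As a rough sketch (not the actual cmd_mem.c source), this is the kind of access a U-Boot memory command ends up doing: the number the user types is cast straight to a pointer and dereferenced, so with no MMU translation in the way, that number is the physical address that reaches the bus.

/* Hedged sketch of a fill loop in the spirit of cmd_mem.c; names are illustrative. */
#include <stdint.h>

static void write_pattern(uintptr_t start, uintptr_t end, uint32_t pattern)
{
    volatile uint32_t *p;

    /* The user-supplied number becomes the pointer value directly:
     * no translation layer sits between this store and the bus. */
    for (p = (volatile uint32_t *)start; p < (volatile uint32_t *)end; p++)
        *p = pattern;
}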

Related

Is there an explicit split between userspace and kernel in physical memory on Linux x86-64?

That is, given a physical address, can I tell whether this address is from userspace or not?
As far as I know, in virtual address space, the kernel will use the upper half and the userspace will use the lower half. But what about in physical address space?
What makes the problem complicated is that I want to check the guest physical address in KVM, which means that I can't call kernel functions in the guest OS. So I want to know whether there is an explicit split line?
No.
Almost any physical page frame can be mapped to a userspace virtual address or a kernel virtual address, or even both at the same time.
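If it helps to see this concretely, here is a hedged user-space sketch (Linux-specific; reading the frame number from /proc/self/pagemap needs root or CAP_SYS_ADMIN on recent kernels) that looks up the physical frame behind an ordinary malloc'd page. The frame it reports can land almost anywhere in physical memory, which is why there is no split line to test against.

/* Translate one userspace virtual address to its physical address via
 * /proc/self/pagemap (one 64-bit entry per page: bits 0-54 = PFN, bit 63 = present). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    char *buf = malloc(page);
    buf[0] = 1;                                   /* touch it so a frame is allocated */

    int fd = open("/proc/self/pagemap", O_RDONLY);
    uint64_t entry = 0;
    off_t off = ((uintptr_t)buf / page) * sizeof(entry);
    pread(fd, &entry, sizeof(entry), off);

    if (entry & (1ULL << 63)) {                   /* page present in RAM */
        uint64_t pfn = entry & ((1ULL << 55) - 1);
        printf("virt %p -> phys 0x%llx\n", (void *)buf,
               (unsigned long long)(pfn * page + (uintptr_t)buf % page));
    }
    close(fd);
    return 0;
}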

Is it necessary to map the memory allocated to the device by the OS into a virtual memory space?

For example, while writing a driver we do the following:
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
We get the info about the memory allocated to the device.
So is it necessary that I access this memory through a virtual address, like this:
virt_base = ioremap(res->start, resource_size(res));
Can't we use the physical address itself to address the memory?
If we can, are there any specific advantages to using virtual memory, or is this just how the kernel wants us to do it?
Yes, it is absolutely necessary. (On x86) Once paging is enabled in the CPU, all addresses visible to the OS (so you, the driver developer) are virtual addresses. In other words, any address you read from or write to will be interpreted by the CPU as a virtual address. It will then go through the page table hierarchy to finally arrive at a physical address to put on the bus.
You can't use physical addresses - they will not be mapped, or mapped to something other than what you want. This is why ioremap must exist and be used.
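A minimal sketch of that pattern, assuming a platform device (the probe function and names below are illustrative, not taken from the question): the physical range returned by platform_get_resource() is never dereferenced directly, it is first mapped with ioremap() and then accessed through readl()/writel().

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
    struct resource *res;
    void __iomem *virt_base;

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);   /* physical range */
    if (!res)
        return -ENODEV;

    virt_base = ioremap(res->start, resource_size(res));    /* map into kernel virtual space */
    if (!virt_base)
        return -ENOMEM;

    writel(0x1, virt_base);        /* all device accesses go through the mapping */
    (void)readl(virt_base);
    iounmap(virt_base);
    return 0;
}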

Device driver in virtual memory system

Assume that there is a device using memory-mapped I/O, i.e. there is a specific range of physical memory assigned to this device.
If a virtual memory system is not used, then it is quite straightforward to manipulate the device through read/write operations on the corresponding physical addresses.
What if there is a virtual memory system?
The device driver needs to be aware of the specific range of physical memory assigned to that device, but how does it access that address range if it must use virtual addresses instead of physical ones?
In the case of memory-mapped I/O devices, any physical address exposed by the device can be mapped into kernel virtual memory using the ioremap() API [1].
Hence in your case, we can map the physical address 0x1234 using ioremap() to obtain its kernel virtual address and start writing data to that address.
[1] http://lxr.gwbnsh.net.cn/linux/arch/cris/mm/ioremap.c
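A short sketch of what that looks like, keeping 0x1234 purely as the example value from the question (a real register address would come from the device's documentation):

#include <linux/io.h>

static void demo_write_reg(void)
{
    void __iomem *reg = ioremap(0x1234, 4);   /* physical -> kernel virtual */

    if (!reg)
        return;
    writel(0xCAFE, reg);                      /* the write reaches the device */
    iounmap(reg);
}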
It's been a long time since I've done it, but my recollection is that when you map a block of physical memory, the address in your user space corresponds to that physical memory. Writing to your user-space address is a write to the physical memory.
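For completeness, a hedged sketch of that user-space route on Linux, mapping physical memory with mmap() on /dev/mem (PHYS_BASE below is a made-up, page-aligned physical address; this needs root and a kernel that does not restrict /dev/mem):

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_BASE 0x3F200000UL      /* hypothetical physical base address */
#define MAP_LEN   0x1000

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    volatile uint32_t *mem = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, PHYS_BASE);
    if (mem == MAP_FAILED)
        return 1;

    mem[0] = 0x1;                   /* user-space write that reaches physical memory */
    munmap((void *)mem, MAP_LEN);
    close(fd);
    return 0;
}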

What is the difference between physical and absolute address?

They both seem to explicitly specify a real memory location. What is the difference between a physical and an absolute address?
Physical Address (a.k.a. the real deal):
A physical address is the address used by the bus circuitry (hence 'physical') when transferring data to and from RAM.
Its counterpart is a 'virtual address' i.e. in a computer with virtual memory, virtual addresses are used by applications, and are translated to physical addresses when actually accessing RAM. The applications only see virtual addresses. This means that all memory references in application code refer to virtual addresses.
Absolute Address:
Absolute address is actually a term used when referring to one of the addressing modes used by an application. Thus, in a computer that offers virtual memory, this 'absolute address' is also a virtual address, because all application code is only going to refer to virtual addresses. Other addressing modes use virtual addresses as well. Of course, as I wrote earlier, virtual addresses are eventually mapped to physical addresses when accessing RAM.
Here is how an 'absolute address' differs from its counterparts, the other addressing modes (one of them being 'relative address'):
An Intel JMP(jump) instruction may specify a 'relative jump', where the displacement is relative to the next instruction. Something like:
"Jump N bytes ahead of the next instruction" <- This is PC-relative addressing.
Or it may be used with an absolute address, like:
"Jump to the Nth byte in memory" <- This is absolute addressing.
In both cases, the addresses referred to by the JMPs are virtual addresses (which get mapped to a physical address in a way that is transparent to the application).
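A small worked example with made-up numbers may make the difference clearer; note that both computed targets are still virtual addresses:

#include <stdio.h>

int main(void)
{
    unsigned long next_instr = 0x401005;          /* address of the instruction after the JMP */
    long disp = 0x20;                             /* signed displacement encoded in the JMP   */
    unsigned long rel_target = next_instr + disp; /* relative jump lands at 0x401025          */
    unsigned long abs_target = 0x403000;          /* absolute jump target is encoded as-is    */

    printf("relative target: 0x%lx, absolute target: 0x%lx\n", rel_target, abs_target);
    return 0;
}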

Confused over memory mapping

I've recently started getting into low level stuff and looking into bootloaders and operating systems etc...
As I understand it, for ARM processors at least, peripherals are initialized by the bootloader and then they are mapped into the physical memory space. From here, code can access the peripherals by simply writing values to the memory space mapped to the peripherals' registers. Later, if the chip has an MMU, it can be used to further remap into virtual memory spaces. Am I right?
What I don't understand are (assuming what I have said above is correct):
How does the bootloader initialize the peripherals if they haven't been mapped to an address space yet?
With virtual memory mapping, there are tables that tell the MMU where to map what. But what determines where peripherals are mapped in physical memory?
When a device boots, the MMU is turned off and you will typically be running in supervisor mode. This means that any addresses you provide are physical addresses.
Each ARM SoC (system on chip) has a memory map. The correspondence of addresses to devices is determined by which physical data and address lines are connected to which parts of the processor. All this information can be found in the technical reference manual. For OMAP4 chips this can be found here.
There are several ways to connect off-chip devices. One is using the GPMC. Here you will need to specify the address in the GPMC that you want to use on the chip.
When the MMU is then turned on, these addresses may change depending on how the MMU is programmed. Typically direct access to hardware will also only be available in kernel mode.
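To illustrate that situation, here is a bare-metal sketch assuming the MMU is off and the register address is taken straight from the SoC's memory map (UART0_BASE below is a made-up example value, not from any particular chip):

#include <stdint.h>

#define UART0_BASE 0x48020000UL                           /* hypothetical physical base from the TRM */
#define UART0_THR  (*(volatile uint32_t *)(UART0_BASE))   /* transmit register, made-up offset 0x0   */

static void uart_putc(char c)
{
    UART0_THR = (uint32_t)c;   /* direct store: the address on the bus is the physical one */
}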
Though this is an old question, I thought of answering it as it might help others like me trying to get sufficient answers from Stack Overflow.
Your explanation is almost correct, but I want to add a little clarification on this part:
peripherals are initialized by the bootloader and then they are mapped into the physical memory space
On-chip peripherals already have a predefined physical address space. For other external I/O-mapped peripherals (like PCIe), we need to configure a physical address space, but the range it can occupy is still predefined. They cannot be placed at arbitrary addresses.
Now to your questions, here are my answers..
How does the bootloader initialize the peripherals if they haven't been mapped to an address space yet?
As I mentioned above, all (on-chip) peripherals have a predefined physical address space (usually listed in the memory map chapter of the processor's reference manual). So bootloaders (assuming the MMU is off) can access them directly.
With virtual memory mapping, there are tables that tell the MMU where to map what. But what determines where peripherals are mapped in physical memory?
With virtual memory, there are page tables (created and stored in physical DRAM by the kernel) that tell the MMU how to map virtual addresses to physical addresses. In a Linux kernel with 1 GB of kernel virtual space (say kernel virtual addresses from 0xc0000000-0xffffffff), on-chip peripherals need a virtual range from within that kernel space (so that the kernel, and only the kernel, can access them), and page tables are set up to map each peripheral's virtual address to its actual physical address (the one defined in the reference manual).
You can't remap peripherals in an ARM processor; all peripheral devices correspond to fixed positions in the memory map. Even registers are mapped to internal RAM at permanently fixed positions. The only things you can remap are memory devices like SRAM, flash, etc., via the FSMC or a similar core feature. You can, however, remap a memory-mapped add-on custom peripheral that is not part of the core itself, say a hard disk controller for instance, but what is inside the ARM core is fixed.
A good start is to take a look at processor datasheets on company sites like Philips and ST, or at the ARM architecture itself at www.arm.com.
