Write protect expansion ROM area - bios

As per the PCIe FW spec 3.0, the BIOS will write-protect the expansion ROM region once the init vector has completed. But for a BIOS that has no UEFI support and runs in 16-bit real mode, how does it ensure that write protection?
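For what it's worth, here is a speculative sketch of the classic legacy mechanism: on many chipsets the option-ROM area (C0000h-DFFFFh) is shadowed into RAM, and the chipset's PAM (Programmable Attribute Map) registers in the host bridge's config space carry per-segment Read Enable / Write Enable bits, so a real-mode BIOS can clear the Write Enable bit after the init vector returns. The register offset and bit positions below are purely illustrative (they differ per chipset), and pci_cfg_read8()/pci_cfg_write8() are assumed helpers for config-space access to bus 0, device 0, function 0.

    #include <stdint.h>

    /* Assumed helpers for PCI config space byte accesses (not shown here). */
    extern uint8_t pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
    extern void    pci_cfg_write8(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint8_t val);

    #define PAM_REG_EXAMPLE  0x5A       /* illustrative PAM register offset (chipset specific) */
    #define PAM_RE_BIT       (1u << 0)  /* illustrative Read Enable bit                        */
    #define PAM_WE_BIT       (1u << 1)  /* illustrative Write Enable bit                       */

    /* After the option ROM's init vector has run: keep reads hitting the
     * shadow RAM copy, but drop Write Enable so the region becomes read-only. */
    static void write_protect_shadowed_rom_segment(void)
    {
        uint8_t pam = pci_cfg_read8(0, 0, 0, PAM_REG_EXAMPLE);
        pam |= PAM_RE_BIT;
        pam &= (uint8_t)~PAM_WE_BIT;
        pci_cfg_write8(0, 0, 0, PAM_REG_EXAMPLE, pam);
    }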

Related

Tracing PCI device memory I/O operations in QEMU / VFIO environment

I'm trying to reverse engineer a PCI device in a QEMU / VFIO environment, and I would like to trace all I/O operations the card performs on physical memory. The card makes use of PCI bus mastering: it writes to other devices and most probably reads data preprocessed by the driver from host RAM. So far I have only been able to trace reads and writes to the card's MMIO space (data transfer from host to device); sadly, I'm missing the other direction of R/W operations (the device fetching data from the host).
Is it possible to trace the I/O operations a PCI device performs on physical memory, i.e. direct I/O and/or DMA transfers, in a QEMU / VFIO environment? I've enabled tracing for the following events:
vfio_pci_read_config vfio_pci_write_config vfio_region_write
vfio_region_read vfio_intx_interrupt vfio_intx_eoi vfio_intx_update
vfio_intx_enable vfio_intx_disable vfio_msi_interrupt
vfio_populate_device_config vfio_region_mmap
Is there any event that can be registered in QEMU to trace such accesses? Thank you in advance.
The PCI device is a peripheral device, which means it has its own processing unit and runs its own firmware. The accesses you are after happen on the peripheral device itself and are thus not traceable with QEMU.
It is possible to trace reads and writes to the MMIO region from inside the QEMU VM because the memory instruction the guest executes invokes a callback function in VFIO that handles the MMIO access.
Since the PCI device performs those reads and writes from its own firmware (executed by the device's processing unit), you are not able to trace the events on the host side.
I think what you can do is compare the mmio_read values with the mmio_write values during reverse engineering.
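To make that distinction concrete, here is a rough sketch of QEMU-internal code (not standalone; header paths and details vary by QEMU version) showing how an emulated device registers MMIO callbacks. Guest loads and stores to such a region land in these callbacks, which is where vfio_region_read/write style trace points can fire; DMA issued by a passed-through physical device goes straight to memory through the IOMMU mapping and never enters QEMU, so there is no equivalent callback to trace.

    /* Sketch only: illustrates QEMU's MemoryRegionOps callback mechanism. */
    #include "qemu/osdep.h"
    #include "exec/memory.h"

    static uint64_t my_mmio_read(void *opaque, hwaddr addr, unsigned size)
    {
        /* A guest load trapped here -- this is where a trace event can be emitted. */
        return 0;
    }

    static void my_mmio_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
    {
        /* A guest store trapped here -- likewise traceable. */
    }

    static const MemoryRegionOps my_mmio_ops = {
        .read       = my_mmio_read,
        .write      = my_mmio_write,
        .endianness = DEVICE_LITTLE_ENDIAN,
    };

    /* In the device's realize/init code, something like:
     *   memory_region_init_io(&s->mmio, OBJECT(s), &my_mmio_ops, s, "my-mmio", 0x1000);
     * Device-initiated DMA bypasses this path entirely. */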

What is the memory map section in RISC-V?

I'm familiar with the MIPS architecture, and I know that MIPS has memory segments such as kseg0 and kseg1, which determine whether a segment is cached or mapped. For example, you should place I/O devices (like a UART) in the uncached segment.
But I couldn't find anything similar in the RISC-V architecture. So how does a RISC-V OS know whether an address should be mapped or not?
By the way: I know the value in the satp CSR describes the translation mode. When the OS is running, the value must be set to something other than "Bare" (MMU disabled) so that the OS can support virtual memory. So when the CPU accesses the UART address, the value in satp is still not "Bare"? But shouldn't it be "Bare"?
RISC-V is a family of instruction sets, ranging from MCU-style processors that have no memory mapping and no memory protection mechanisms (Physical Memory Protection is optional) up to full application-class processors.
From your question, I assume you are talking about processors that support User and Supervisor level ISA, as documented in the RISC-V privileged spec.
It sounds like you want a spec describing which physical addresses are cacheable. Looking at the list of CSRs, I believe this information is not in the CSRs because it is platform specific. In systems I've worked with, it is either hard-coded in platform drivers or passed via device-tree.
For Linux, the device-tree entries are not RISC-V specific: there are device tree entries specifying the physical address range of memory. Additionally, each I/O device would have a device tree entry specifying its physical address range.
You can read the RISC-V privileged spec (The RISC-V Instruction Set Manual Volume II: Privileged Architecture, section 3.5, Physical Memory Attributes):
"For RISC-V, we separate out specification and checking of PMAs into a separate hardware structure, the PMA checker. In many cases, the attributes are known at system design time for each physical address region, and can be hardwired into the PMA checker. Where the attributes are run-time configurable, platform-specific memory-mapped control registers can be provided to specify these attributes at a granularity appropriate to each region on the platform (e.g., for an on-chip SRAM that can be flexibly divided between cacheable and uncacheable uses)"
I think that if you want to distinguish cacheable from non-cacheable accesses in RISC-V, you need to design a PMA checker unit alongside the MMU that checks the memory attributes.
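As a small illustration of why satp does not need to switch back to "Bare" just to touch a UART, here is a hedged, bare-metal-style sketch of building an Sv39 leaf PTE for a device page (the UART physical address is made up). Note that the base-spec PTE has no cacheability bit: whether the region is cached is a Physical Memory Attribute of the address range, fixed or configured by the platform (the optional Svpbmt extension later added page-based attributes).

    #include <stdint.h>

    /* Sv39 leaf PTE flag bits (RISC-V privileged spec). */
    #define PTE_V (1u << 0)   /* valid      */
    #define PTE_R (1u << 1)   /* readable   */
    #define PTE_W (1u << 2)   /* writable   */
    #define PTE_X (1u << 3)   /* executable */
    #define PTE_A (1u << 6)   /* accessed   */
    #define PTE_D (1u << 7)   /* dirty      */

    /* Build a leaf PTE mapping one 4 KiB page at physical address `pa`.
     * There is no "uncacheable" flag here; that comes from the platform's PMAs. */
    static inline uint64_t sv39_leaf_pte(uint64_t pa, uint64_t flags)
    {
        return ((pa >> 12) << 10) | flags | PTE_V;
    }

    /* Example: map a hypothetical UART at physical 0x10000000 read/write.
     * The OS installs this PTE and keeps satp in Sv39 mode; the access is
     * translated like any other, and the region's PMA makes it uncached I/O. */
    static uint64_t uart_pte_example(void)
    {
        return sv39_leaf_pte(0x10000000ULL, PTE_R | PTE_W | PTE_A | PTE_D);
    }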

Is address 0xFFFFFFF0 hardwired for system BIOS ROM?

I read this from a previous stack overflow answer:
At initial power on, the BIOS is executed directly from ROM. The ROM chip is mapped to a fixed location in the processor's memory space (this is typically a feature of the chipset). When the x86 processor comes out of reset, it immediately begins executing from 0xFFFFFFF0.
Follow up questions,
Is the address 0xFFFFFFF0 hardwired just to access the system BIOS ROM, and later, after the system is up and running, can this address not be used by RAM?
Also, when the address 0xFFFFFFF0 is being used to access the system BIOS ROM, is the CPU accessing it as an I/O device or as a memory device?
At power up, it is ROM. Has to be or the CPU would be unable to boot. Some chipsets have register bits that allow you to unmap the BIOS flash chip from the memory address space. Of course you should not do this while executing from ROM!
There is a common technique on PC hardware called "shadowing" where the BIOS will copy the contents of the ROM chip into RAM mapped at the same address. RAM is generally much faster than ROM, so it can speed up the system.
As for your second question, it is a memory device. It must be for the following reasons:
I/O addresses are 16 bits wide, not 32.
An x86 processor cannot execute code from I/O space. You cannot point the instruction pointer at an I/O address.
It's mapped into the global memory space and is addressed the same way. Conventionally, RAM shouldn't be mapped to any range of addresses used by other devices. This is common enough: you might remember that a few years ago, before 64-bit operating systems became standard on home PCs, a user could have 4 GB of physical memory installed but only about 3.5 GB accessible, because the graphics card was mapped into 512 MB of the address space.
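To make the memory-space vs. I/O-space distinction concrete, here is a hedged user-space-style sketch (Linux x86, requires ioperm()/root; the port number and the mapped pointer are illustrative): port I/O uses dedicated IN/OUT instructions with 16-bit port numbers, while memory-mapped ROM/RAM/devices are reached with ordinary loads and stores, which is also why the reset vector at 0xFFFFFFF0 has to live in the memory space.

    #include <stdint.h>
    #include <sys/io.h>   /* Linux x86 port I/O helpers (inb/outb); needs ioperm() */

    /* Port I/O: a separate 64 KiB address space reached only via IN/OUT.
     * The CPU cannot fetch instructions from it. */
    static uint8_t read_com1_line_status(void)
    {
        return inb(0x3F8 + 5);   /* conventional COM1 line status register */
    }

    /* Memory-mapped access: just a physical address in the ordinary memory
     * space, read with a normal load (the pointer is assumed to have been
     * obtained by mapping the region, e.g. via mmap of /dev/mem). */
    static uint32_t read_mmio_dword(volatile uint32_t *mapped)
    {
        return *mapped;
    }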

How does the BIOS decide to enable the BME bit for a PCI device during POST?

BME means "Bus Master Enable"; it is bit 2 of the Command register (offset 0x4) in PCI config space. If this bit is set to 1, the device is allowed to act as a bus master for data transfers. Besides, it is configured by the system BIOS (as far as I know...).
My question is: how does the system BIOS decide whether to set this bit? (Based on the class code, or ...?)
AFAIK, the BIOS sets this bit blindly. If the device supports bus-master access, the bit becomes 1; otherwise the write to this bit has no effect and the bit remains 0.
Of course, you can instruct BIOS to skip PCI enumeration altogether by choosing "PnP OS" somewhere in BIOS menus.
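To illustrate the "set it blindly and see whether it sticks" behaviour described above, here is a minimal user-space-style sketch using PCI configuration mechanism #1 (ports 0xCF8/0xCFC); a BIOS does the equivalent during POST. The Linux <sys/io.h> port helpers (which require iopl()) and the bus/device/function numbers are assumptions for illustration only.

    #include <stdint.h>
    #include <sys/io.h>   /* outl()/inl(); Linux x86, needs iopl(3) */

    #define PCI_CFG_ADDR 0xCF8
    #define PCI_CFG_DATA 0xCFC
    #define PCI_COMMAND  0x04
    #define PCI_CMD_BME  (1u << 2)    /* Bus Master Enable */

    /* Configuration mechanism #1: dword access to bus/device/function/offset. */
    static uint32_t cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
    {
        uint32_t addr = 0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)dev << 11)
                        | ((uint32_t)fn << 8) | (off & 0xFCu);
        outl(addr, PCI_CFG_ADDR);
        return inl(PCI_CFG_DATA);
    }

    static void cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint32_t val)
    {
        uint32_t addr = 0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)dev << 11)
                        | ((uint32_t)fn << 8) | (off & 0xFCu);
        outl(addr, PCI_CFG_ADDR);
        outl(val, PCI_CFG_DATA);
    }

    /* Blindly set BME and read it back. The dword at offset 0x04 holds the
     * Command register (low 16 bits) and the Status register (high 16 bits,
     * write-1-to-clear), so the status half is masked off before writing back.
     * On devices without bus-master support the bit is hardwired to 0, so the
     * read-back shows whether the write stuck. */
    static int enable_bus_master(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        uint32_t cmd = cfg_read32(bus, dev, fn, PCI_COMMAND);
        cfg_write32(bus, dev, fn, PCI_COMMAND, (cmd & 0xFFFFu) | PCI_CMD_BME);
        return (cfg_read32(bus, dev, fn, PCI_COMMAND) & PCI_CMD_BME) != 0;
    }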

Writing Quad word to device register in PCI config space

My problem is that I cannot write a 64-bit-wide setting into a device register. I am working with an Intel® Xeon® Processor C5500/C3500 Series part with an integrated memory controller, in a FreeBSD 10 based environment.
The data sheet (Intel® Xeon® Processor C5500/C3500 Series Datasheet - Volume 2) mentions in section 4.12.40 (Error Injection Implementation) that the register MC_CHANNEL_x_ADDR_MATCH (which is a quad-word access) should be set for ECC injection, but pci_cfgregwrite does not perform 64-bit-wide writes in port-mapped I/O mode, and the data sheet does not mention a base address for the register to help with memory-mapping it. I tried to split the write into two 32-bit writes via pci_cfgregwrite, but that does not help. How can I write a 64-bit-wide setting into this register (device 4, 5, 6; function 0; offset F0h on bus 0xFF)?
pci_cfgregwrite() writes to PCI configuration space, and it only does 32-bit accesses.
I am pretty sure that your register is not located in PCI configuration space but in one of the PCI memory-mapped address spaces that are described by Base Address Register 0/1/2/... (BAR0/1/2/...).
In order to access BARx regions, you first map them into memory, then you use the macros provided by FreeBSD for accessing the memory-mapped regions. In your case, bus_space_write_8() would perform the 64-bit write: http://www.unix.com/man-page/freebsd/9/bus_space_write_8/
For more information, check FreeBSD documentation:
http://www.freebsd.org/doc/en/books/arch-handbook/pci-bus.html
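A hedged sketch of that approach inside a FreeBSD PCI driver's attach routine: map the region described by BAR0 and perform a single 64-bit store. The register offset and the value written are placeholders, not taken from the datasheet.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bus.h>
    #include <sys/rman.h>
    #include <machine/bus.h>
    #include <machine/resource.h>
    #include <dev/pci/pcivar.h>
    #include <dev/pci/pcireg.h>

    #define MY_REG_OFFSET 0xF0   /* placeholder offset within the BAR */

    static int
    my_attach(device_t dev)
    {
        struct resource    *res;
        bus_space_tag_t     tag;
        bus_space_handle_t  handle;
        int                 rid = PCIR_BAR(0);

        /* Map the memory region described by BAR0. */
        res = bus_alloc_resource_any(dev, SYS_RES_MEMORY, &rid, RF_ACTIVE);
        if (res == NULL)
            return (ENXIO);

        tag = rman_get_bustag(res);
        handle = rman_get_bushandle(res);

        /* Single 64-bit (quad word) store into the mapped region. */
        bus_space_write_8(tag, handle, MY_REG_OFFSET, 0x0123456789ABCDEFULL);

        bus_release_resource(dev, SYS_RES_MEMORY, rid, res);
        return (0);
    }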
