Tracing PCI device memory I/O operations in QEMU / VFIO environment

I'm trying to reverse engineer a PCI device under a QEMU / VFIO environment, and I would like to trace all I/O operations on physical memory made by the card. The card makes use of PCI bus mastering and writes stuff to other devices, and most probably reads some data preprocessed by the driver from host RAM. So far I have only been able to trace reads and writes to the card's MMIO space (data transfers from host to device); sadly, I'm missing the second direction of R/W operations (the device fetching data from the host).
Is it possible to trace the I/O operations that a PCI device performs on physical memory, i.e. direct I/O and/or DMA transfers, under a QEMU / VFIO environment? I've enabled tracing for the following events:
vfio_pci_read_config vfio_pci_write_config vfio_region_write
vfio_region_read vfio_intx_interrupt vfio_intx_eoi vfio_intx_update
vfio_intx_enable vfio_intx_disable vfio_msi_interrupt
vfio_populate_device_config vfio_region_mmap
Is there any event that can be registered in QEMU that allows tracing this? Thank you in advance.

The PCI device is a peripheral device, which means it has its own processing unit and runs its own firmware. Accesses to the MMIO region that originate in the peripheral device itself are therefore not traceable with QEMU.
It's possible to trace reads/writes to MMIO issued from inside the QEMU VM, because the memory instruction executed there invokes a callback function in VFIO that handles the MMIO access.
Since the PCI device reads/writes the MMIO region from its own firmware (executed by the device's processing unit), you are not able to trace those events on the host side.
I think what you can do is compare mmio_read values with mmio_write values during reverse engineering.

Related

Who brings a program from Secondary memory(Hard Disk) to Primary Memory(RAM) for execution?

In the book "Operating System Concept by SILBERSCHATZ, GALVIN & GAGNE", they've mentioned that,
Main Memory(Primary memory) and the registers built into the processor itself are the only storage that CPU can access directly
This statement has caused a lot of confusion. If CPU can not access Secondary Memory, then how does it fetch a program from Secondary Memory?
"The CPU can't access secondary memory directly" doesn't mean it can't access it at all. When the system is booted, a program built into the BIOS ROM copies the boot loader (from the secondary bootable device) into RAM and directs the CPU to continue execution from that particular address.
Once the CPU starts executing the boot loader, the boot loader calls the necessary routines to read from disk (secondary storage) and copy your OS kernel image into memory. The data transfer is done through I/O ports.
Now the kernel image is in memory, so the boot loader makes the CPU jump to the kernel's entry point. Your kernel starts up from this point.
The kernel sets up the OS environment and loads the necessary drivers (including the disk/CD-ROM driver). After this point, it is up to the OS disk driver whether it performs I/O port operations or DMA accesses to load data from secondary storage.
Generally DMA is preferred because it does not involve the CPU in polling data from the device, but it is a little more difficult to code.
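The handoff chain above can be illustrated with a toy model. Everything here is made up for illustration, except 0x7C00, which is the conventional x86 boot sector load address:

```python
# Toy model of the boot chain: each stage copies the next stage from
# "disk" into "RAM" and tells the CPU where to jump next.
disk = {"boot_sector": "bootloader", "kernel_image": "kernel"}
ram = {}

def bios():
    # BIOS ROM code: copy the boot sector to a fixed RAM address
    ram[0x7C00] = disk["boot_sector"]
    return 0x7C00

def bootloader():
    # Boot loader: read the kernel image from disk (via I/O ports or DMA)
    ram[0x100000] = disk["kernel_image"]
    return 0x100000

entry = bios()        # CPU starts executing the boot loader at 0x7C00
entry = bootloader()  # then jumps to the kernel's entry point
```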
I hope I cleared your doubt :)

How does DMA work with PCI Express devices?

Let's suppose a CPU wants to make a DMA read transfer from a PCI Express device. Communication with PCI Express devices is provided by transaction layer packets (TLPs). Theoretically, the maximum payload size is 1024 doublewords per TLP. So how does a DMA controller act when the CPU gives a PCI Express device a DMA read command for 4 megabytes?
In the PCIe enumeration phase, the maximum allowed payload size is determined (it can be lower than the device's max payload size: e.g. an intermediate PCIe switch may have a lower max payload size).
Most PCIe devices are DMA masters, so the driver transfers the command to the device. The device will send several write packets to transmit 4 MiB in max-payload-sized TLP chunks.
Edit 1 in reply to comment 1:
A PCI based bus has no "DMA Controller" in form of a chip or a sub circuit in the chipset. Every device on the bus can become a bus master. The main memory is always a slave.
Let's assume you have built your own PCIe device card, which can act as a PCI master, and your program (running on the CPU) wants to send data from that card to main memory (4 MiB).
The device driver knows the memory mapping for that particular memory region from the operating system (some keywords: memory-mapped I/O, PCI bus enumeration, PCI BARs).
The driver transfers the command (write), source address, destination address and length to the device. This can be done by sending bytes to a special address inside a pre-defined BAR or by writing into the PCI config space. The DMA master on the card checks these special regions for new tasks (scatter-gather lists). If there are any, these tasks get enqueued.
Now the DMA master knows where to send and how much data. It will read the data from local memory and wrap it into TLPs of the max payload size (the max payload size on the path device <---> main memory is known from enumeration, e.g. 512 bytes) and send them to the destination address. The PCI address-based routing mechanisms direct these TLPs to the main memory.
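The splitting step can be sketched in Python. This is a toy model; the 512-byte payload size and the bus address are assumptions for illustration, not values from any real device:

```python
def split_into_tlps(bus_addr, length, max_payload=512):
    """Split one DMA transfer into (address, size) chunks, each no
    larger than the negotiated max payload size (in bytes)."""
    chunks = []
    offset = 0
    while offset < length:
        size = min(max_payload, length - offset)
        chunks.append((bus_addr + offset, size))
        offset += size
    return chunks

# A 4 MiB transfer at a hypothetical bus address becomes 8192 write TLPs.
chunks = split_into_tlps(0x8000_0000, 4 * 1024 * 1024)
```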
@Paebbels has already explained most of it. In PCI/PCI-e, "DMA" is implemented in terms of bus mastering, and it's the bus-master-capable peripheral devices that hold the reins. The peripheral device has the memory read/write transactions at its disposal, and it's up to the peripheral device what granularity and ordering of the writes (or reads) it will use. I.e. the precise implementation details are hardware-specific to the peripheral device, and the corresponding software driver running on the host CPU must know how to operate the particular peripheral device, to provoke the desired DMA traffic in it.
Regarding the "memory management aspect", let me refer my distinguished audience to two chapters of a neat book by Jon Corbet, on exactly this topic in Linux. Memory management bordering on DMA, under the hood of the OS kernel. Linux and its source code and documentation are generally a good place (open source) to start looking for "how things work under the hood". I'll try to summarize the topic a bit.
First of all, please note that DMA access to the host's RAM (from a peripheral PCI device) is a different matter than PCI MMIO, where the peripheral device possesses a private bank of RAM of its very own and wants to make that available to the host system via an MMIO BAR. This is different from DMA, a different mechanism (although not quite), or maybe "the opposite perspective" if you will... suppose that the difference between a host and a peripheral device on the PCI/PCI-e bus is not great, and the host bridge / root complex merely has a somewhat special role in the tree topology, bus initialization and whatnot :-) I hope I've confused you enough.
The computer system containing a PCI(-e) bus tree and a modern host CPU actually works with several "address spaces". You've probably heard about the CPU's physical address space (spoken at the "front side bus" among the CPU cores, the RAM controller and the PCI root bridge) vs. the "virtual address spaces", managed by the OS with the help of some HW support on part of the CPU for individual user-space processes (including one such virtual space for the kernel itself, not identical with the physical address space). Those two address spaces, the physical one and the manifold virtual, occur irrespective of the PCI(-e) bus. And, guess what: the PCI(-e) bus has its own address space, called the "bus space". Note that there's also the so called "PCI configuration space" = yet another parallel address space. Let's abstract from the PCI config space for now, as access to it is indirect and complicated anyway = does not "get in the way" of our topic here.
So we have three different address spaces (or categories): the physical address space, the virtual spaces, and the PCI(-e) bus space. These need to be "mapped" to each other. Addresses need to be translated. The virtual memory management subsystem in the kernel uses its page tables and some x86 hardware magic (keyword: MMU) to do its job: translate from virtual to physical addresses. When speaking to PCI(-e) devices, or rather their "memory mapped IO", or when using DMA, addresses need to be translated between the CPU physical address space and the PCI(-e) bus space. In the hardware, in bus transactions, it is the job of the PCI(-e) root complex to handle the payload traffic, including address translation. And on the software side, the kernel provides functions (as part of its internal API) to drivers to be able to translate addresses where needed. As much as the software is only concerned about its respective virtual address space, when talking to PCI(-e) peripheral devices, it needs to program their "base address registers" for DMA with addresses from the "bus space", as that's where the PCI(-e) peripherals live. The peripherals are not gonna play the "game of multiple address translations" actively with us... It's up to the software, or specifically the OS, to make the PCI(-e) bus space allocations a part of the host CPU's physical address space, and to make the host physical space accessible to the PCI devices. (Although not a typical scenario, a host computer can even have multiple PCI(-e) root complexes, hosting multiple trees of the PCI(-e) bus. Their address space allocations must not overlap in the host CPU physical address space.)
There's a shortcut, although not quite: in an x86 PC, the PCI(-e) address space and the host CPU physical address space, are one.
Not sure if this is hardwired in the HW (the root complex just doesn't have any specific mapping/translation capability) or if this is how "things happen to be done", in the BIOS/UEFI and in Linux. Suffice to say that this happens to be the case.
But, at the same time, this doesn't make the life of a Linux driver writer any easier. Linux is made to work on various HW platforms, it does have an API for translating addresses, and the use of that API is mandatory, when crossing between address spaces.
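As a sketch of that mandatory translation, here is a toy root-complex model with a deliberately non-identity window. All addresses are hypothetical; on a typical x86 PC the offset would be zero:

```python
class RootComplex:
    """Toy model: translates between CPU physical addresses and PCI bus
    addresses for one fixed window (a real root complex may have several)."""
    def __init__(self, phys_base, bus_base, size):
        self.phys_base, self.bus_base, self.size = phys_base, bus_base, size

    def phys_to_bus(self, phys):
        assert self.phys_base <= phys < self.phys_base + self.size
        return phys - self.phys_base + self.bus_base

    def bus_to_phys(self, bus):
        assert self.bus_base <= bus < self.bus_base + self.size
        return bus - self.bus_base + self.phys_base

# Hypothetical platform where the window is NOT identity-mapped:
rc = RootComplex(phys_base=0x4_0000_0000, bus_base=0x8000_0000, size=0x1000_0000)
```

The driver-facing point: the address you hand the peripheral for DMA is the bus-space one, not the CPU-physical one, even when the two happen to coincide.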
Maybe interestingly, the API shorthands relevant in the context of PCI(-e) drivers and DMA are "bus_to_virt()" and "virt_to_bus()". Because, to software, what matters is its respective virtual address - so why complicate things for the driver author by forcing him to translate (and keep track of) the virtual, the physical and the bus address space, right? There are also shorthands for allocating memory for DMA use: pci_alloc_consistent() and pci_map_single() - and their deallocation counterparts, and several companions - if interested, you really should refer to Jon Corbet's book and further docs (and the kernel source code).
So as a driver author, you allocate a piece of RAM for DMA use, you get a pointer of your respective "virtual" flavour (some kernel space), and then you translate that pointer into the PCI "bus" space, which you can then quote to your PCI(-e) peripheral device = "this is where you can upload the input data".
You can then instruct your peripheral to do a DMA transaction into your allocated memory window. The DMA window in RAM can be bigger (and typically is) than the "maximum PCI-e transaction size" - which means, that the peripheral device needs to issue several consecutive transactions to accomplish a transfer of the whole allocated window (which may or may not be required, depending on your application). Exactly how that fragmented transfer is organized, that's specific to your PCI peripheral hardware and your software driver. The peripheral can just use a known integer count of consecutive offsets back to back. Or it can use a linked list. The list can grow dynamically. You can supply the list via some BAR to the peripheral device, or you can use a second DMA window (or subsection of your single window) to construct the linked list in your RAM, and the peripheral PCI device will just run along that chain. This is how scatter-gather DMA works in practical contemporary PCI-e devices.
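A minimal sketch of such a linked-list descriptor chain, in Python: the fragment addresses and lengths are made up, and a real device would define its own descriptor layout in hardware.

```python
class Descriptor:
    """One scatter-gather element: a (bus address, length) pair plus a
    link to the next descriptor (None marks the end of the chain)."""
    def __init__(self, bus_addr, length):
        self.bus_addr = bus_addr
        self.length = length
        self.next = None

def build_sg_chain(fragments):
    """Link (bus_addr, length) fragments into a chain the device can walk."""
    head = prev = None
    for addr, length in fragments:
        d = Descriptor(addr, length)
        if prev is None:
            head = d
        else:
            prev.next = d
        prev = d
    return head

def walk(head):
    """Model the device walking the chain; returns total bytes transferred."""
    total, d = 0, head
    while d is not None:
        total += d.length  # the device would issue DMA for this fragment
        d = d.next
    return total

chain = build_sg_chain([(0x1000, 0x800), (0x9000, 0x1000), (0x20000, 0x400)])
```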
The peripheral device can signal back completion or some other events using IRQ. In general, the operation of a peripheral device involving DMA will be a mixture of direct polling access to BAR's, DMA transfers and IRQ signaling.
As you may have inferred, when doing DMA, the peripheral device need NOT necessarily possess a private buffer on board, that would be as big as your DMA window allocation in the host RAM. Quite the contrary - the peripheral can easily "stream" the data from (or to) an internal register that's one word long (32b/64b), or a buffer worth a single "PCI-e payload size", if the application is suited for that arrangement. Or a minuscule double buffer or some such. Or the peripheral can indeed have a humongous private RAM to launch DMA against - and such a private RAM need not be mapped to a BAR (!) if direct MMIO access from the bus is not required/desired.
Note that a peripheral can launch DMA to another peripheral's MMIO BAR just as easily, as it can DMA-transfer data to/from the host RAM. I.e., given a PCI bus, two peripheral devices can actually send data directly to each other, without using bandwidth on the host's "front side bus" (or whatever it is nowadays, north of the PCI root complex: quickpath, torus, you name it).
During PCI bus initialization, the BIOS/UEFI or the OS allocates windows of bus address space (and physical address space) to PCI bus segments and peripherals - to satisfy the BARs' hunger for address space, while keeping the allocations non-overlapping systemwide. Individual PCI bridges (including the host bridge / root complex) get configured to "decode" their respective allocated spaces, but "remain in high impedance" (silent) for addresses that are not their own. Feel free to google on your own on "positive decode" vs. "subtractive decode", where one particular path down the PCI(-e) bus can be turned into an "address sink of last resort", maybe just for the range of the legacy ISA etc.
Another tangential note maybe: if you've never programmed simple MMIO in a driver, i.e. used BAR's offered by PCI devices, know ye that the relevant keyword (API call) is ioremap() (and its counterpart iounmap, upon driver unload). This is how you make your BAR accessible to memory-style access in your living driver.
And: you can make your mapped MMIO bar, or your DMA window, available directly to a user-space process, using a call to mmap(). Thus, your user-space process can then access that memory window directly, without having to go through the expensive and indirect rabbit hole of the ioctl().
Umm. Modulo PCI bus latencies and bandwidth, the cacheable attribute etc.
I feel that this is where I'm getting too deep down under the hood, and running out of steam... corrections welcome.

Linux memory manager infringes on PCI memory

My board has a Cavium Octeon NPU, running Linux kernel 2.6.34.10, that acts as a PCIe root complex. It is connected to a PCIe switch, as are some other peripheral devices (endpoints), among which there is an SSD based on Marvell's 9143 PCI-to-SATA controller.
When PCIe is initially enumerated, PCI driver on Octeon adds up the sizes of all the prefetchable memory resources and programs the PLIMIT and PBASE registers on the upstream switch port accordingly. In my case that address range is 0x80000000 - 0xEFFFFFFF.
After that, I would expect that address range to be inaccessible to the kernel memory manager when allocating DMA buffers etc. And yet I see that the kernel, at some point, starts sending SCSI requests to the SSD device where scatter-gather list elements fall within this address range. I confirmed this by looking at a PCI analyzer trace. Naturally, when the SSD controller receives such an address, it tries to access it (DMA read or write) and fails, because the upstream switch port refuses to forward this request upstream to the root complex: it is programmed to think that this address would be downstream of it. (Interestingly enough, this mostly happens when I manipulate large files; I see that kernel-allocated buffer addresses grow downward until they dip below 0xEFFFFFFF.)
Hence the question: shouldn't the PCI enumeration/rescan code tell the kernel that these are PCI device register addresses and therefore off-limits for DMA buffer allocation? Or is it the responsibility of each individual device driver to reserve its prefetchable memory? The Marvell driver I use reserves the regular memory BAR, but not the prefetchable one. Is that a problem?
Thanks in advance and apologies for lengthy description.
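The failure mode described in this question can be sketched with the window values it quotes: any DMA buffer address that falls inside the switch's prefetchable window (PBASE..PLIMIT) gets routed downstream by the switch instead of upstream to the root complex.

```python
PBASE, PLIMIT = 0x8000_0000, 0xEFFF_FFFF  # window from the question

def dma_addr_ok(addr, length):
    """True if [addr, addr + length) does not overlap the range the
    upstream switch port forwards downstream."""
    return addr + length <= PBASE or addr > PLIMIT
```

The kernel-allocated buffers "dipping below 0xEFFFFFFF" are exactly the ones this check would reject.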

PCI Express BAR memory mapping basic understanding

I am trying to understand how PCI Express works so I can write a Windows driver that can read and write to a custom PCI Express device with no on-board memory.
I understand that the Base Address Registers (BARs) in the PCIe configuration space hold the memory addresses that the PCI Express device should respond to / is allowed to write to. (Is that correctly understood?)
My questions are the following:
What is a "bus-specific address" compared to physical address when talking about PCIE?
When and how is the BAR populated with addresses? Is the driver responsible for allocating memory and writing the address to the peripheral BAR?
Is DMA used when transferring data from peripheral to host memory?
I appreciate your time.
Best regards,
I'm also working on a device driver (albeit on Linux) with a custom board. Here is my attempt at answering your questions:
The BARs represent memory windows as seen by the host system (CPUs) to talk to the device. The device doesn't write into that window but merely answers TLP (Transaction Layer Packet) requests (MRd*, MWr*).
I would say "bus-specific" = "physical" addresses if your architecture doesn't have a bus-layer translation mechanism. Check this thread for more info.
In all the x86 consumer PCs I've used so far, the BAR addresses seemed to be allocated either by the BIOS or at OS boot. The driver has to work with whatever address has been allocated.
The term DMA seems to be abused instead of bus mastering, which I believe is the correct term in PCIe. In PCIe every device may be a bus master (if allowed by bit 2 of its command register). It does so by sending MRd and MWr TLPs to other devices on the bus (but generally to system memory) and signalling interrupts to the CPU.
From your query it's clear that you want to write a driver for a PCIe slave device. To understand the scheme of things happening behind a PCIe transfer, a lot of material is available on the internet (like PCIe bus enumeration, peripheral address mapping to memory, etc.).
Yes, your understanding is correct regarding the mapping of PCIe registers to memory, and you can read/write them (e.g. in the case of a Linux PCIe device driver you can do this using "ioremap").
An address bus is used to specify a physical address. When a processor or DMA-enabled device needs to read or write a memory location, it specifies that memory location on the address bus. Nothing more to add to that.
The "PCIe bus enumeration" topic will answer your 2nd question.
Your third question is vague. You mean a slave PCIe device? Assuming so: yes, you can transfer data between a slave PCIe device and the host using a DMA controller.
I am working on a project which involves a "PCIe-DMA" engine connected with the host over the PCIe bus. It really depends on your design and implementation. In my case the PCIe-DMA is itself a slave PCIe device on the target board, connected to the host over PCIe.
Clarification for your doubts/questions follows:
1> There are many devices that sit on a bus like PCI and see memory in terms that differ from physical addresses; those are called bus addresses.
For example, if you are initiating DMA from a device sitting on the bus to the main memory of the system, then the destination address should be the bus address corresponding to the same physical address in memory.
2> BARs get populated at the time of enumeration; in a typical PC this happens at boot time, when your PCI-aware firmware enumerates the PCI devices present in the slots and allocates addresses and sizes to the BARs.
3> Yes, you can use both DMA-initiated and CPU-initiated operations on these BARs.
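The allocation in point 2 can be sketched as follows. The base address and BAR sizes are made up: firmware sizes each BAR, then assigns non-overlapping bus addresses, keeping each BAR naturally aligned to its own (power-of-two) size.

```python
def assign_bars(bar_sizes, base=0xE000_0000):
    """Assign each BAR a non-overlapping address aligned to its size."""
    addr, assigned = base, []
    for size in bar_sizes:
        addr = (addr + size - 1) & ~(size - 1)  # align up to the BAR size
        assigned.append(addr)
        addr += size
    return assigned

# A 4 KiB register BAR followed by a 1 MiB prefetchable BAR:
addrs = assign_bars([0x1000, 0x10_0000])
```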
-- flyinghigh

How are 'Input/Output' ports mapped into memory?

I have been trying to understand I/O ports and their mappings into the memory and I/O address spaces. I read about 'memory-mapped I/O' and was wondering how this is accomplished by the OS/hardware. Does the OS/hardware use some kind of table to map the address specified in an instruction to the respective port?
Implementations differ in many ways. But the basic idea is that when a read or write occurs for a memory address, the microprocessor outputs the address on its bus. Hardware (called an 'address decoder') detects that the address is for a particular memory-mapped I/O device and enables that device as the target of the operation.
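A toy model of such an address decoder (the regions and addresses here are invented): each memory-mapped region claims a range, and a write is routed to whichever region decodes the address.

```python
class AddressDecoder:
    """Routes a write to the device region that claims the address."""
    def __init__(self):
        self.regions = []  # list of (base, size, handler)

    def add_region(self, base, size, handler):
        self.regions.append((base, size, handler))

    def write(self, addr, value):
        for base, size, handler in self.regions:
            if base <= addr < base + size:
                handler(addr - base, value)  # handler sees a region-relative offset
                return
        raise ValueError(f"no device decodes address {addr:#x}")

dec = AddressDecoder()
ram = {}
uart = []
dec.add_region(0x0000, 0x1000, lambda off, v: ram.__setitem__(off, v))
dec.add_region(0x4000, 0x10, lambda off, v: uart.append(v))  # memory-mapped UART
dec.write(0x10, 0xAA)    # decoded as RAM
dec.write(0x4000, 0x41)  # decoded as the UART data register
```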
Typically, the OS doesn't do anything special. On some platforms, the BIOS or operating system may have to configure certain parameters for the hardware to work properly.
For example, the range may have to be set as uncacheable to prevent the caching logic from reordering operations to devices that care about the order in which things happen. (Imagine if one write tells the hardware what operation to do and another write tells the hardware to start. Reordering those could be disastrous.)
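That reordering hazard can be made concrete with a toy device (the registers are hypothetical): one register latches the operation, another starts it, and if the two writes reach the device in the wrong order, it runs a stale command.

```python
class Device:
    """Toy MMIO device: register 0x0 latches a command, register 0x4 is
    a doorbell that executes whatever command is currently latched."""
    def __init__(self):
        self.cmd = 0
        self.executed = []

    def write(self, reg, value):
        if reg == 0x0:
            self.cmd = value
        elif reg == 0x4:
            self.executed.append(self.cmd)

dev = Device()
dev.write(0x0, 2); dev.write(0x4, 1)  # in order: device executes command 2
dev.write(0x4, 1); dev.write(0x0, 3)  # reordered: doorbell first, runs stale 2
```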
On some platforms, the operating system or BIOS may have to set certain memory-mapped I/O ranges as 'slow' by adding wait states. This is because the hardware that's the target of the operation may not be as fast as the system memory is.
Some devices may allow the operating system to choose where in memory to map the device. This is typical of newer plug-and-play devices on the PC platform.
In some devices, such as microcontrollers, this is all done entirely inside a single chip. A write to a particular address is routed in hardware to a particular port or register. This can include general-purpose I/O registers which interface to pins on the chip.
