Need a device driver program for a PCI interrupt

I am using a PCI-based card with a PCI-9054 PCI controller and one Xilinx FPGA. I need to generate an interrupt through the PCI INTA# line. Can anyone help me with the driver code needed to see the interrupt in "cat /proc/interrupts"? I am feeding an external signal to the FPGA, which I route to the "pci_lint_n" signal. This signal goes as an input to the PCI-9054, which then asserts the INTA# interrupt line.
Can somebody help me in doing this?
Thanks in advance.
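A minimal sketch of the kind of kernel module that makes such an interrupt show up in "cat /proc/interrupts", assuming the usual PLX vendor/device IDs (0x10b5 / 0x9054) and a legacy shared INTx line; the IDs, names and handler body are illustrative, not a tested driver for this board:

#include <linux/module.h>
#include <linux/pci.h>
#include <linux/interrupt.h>

/* Assumed PLX vendor/device IDs for the PCI-9054; verify with lspci -nn. */
#define PLX_VENDOR_ID 0x10b5
#define PLX_9054_ID   0x9054

static irqreturn_t plx_isr(int irq, void *dev_id)
{
	/*
	 * A real handler must read the 9054 interrupt status, then clear or
	 * disable the source, otherwise a level-triggered INTA# fires forever.
	 */
	return IRQ_HANDLED;
}

static int plx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	/* IRQF_SHARED because legacy INTA# may be shared with other devices. */
	ret = request_irq(pdev->irq, plx_isr, IRQF_SHARED, "plx9054", pdev);
	if (ret) {
		pci_disable_device(pdev);
		return ret;
	}

	dev_info(&pdev->dev, "using IRQ %d\n", pdev->irq);
	return 0;
}

static void plx_remove(struct pci_dev *pdev)
{
	free_irq(pdev->irq, pdev);
	pci_disable_device(pdev);
}

static const struct pci_device_id plx_ids[] = {
	{ PCI_DEVICE(PLX_VENDOR_ID, PLX_9054_ID) },
	{ 0 }
};
MODULE_DEVICE_TABLE(pci, plx_ids);

static struct pci_driver plx_driver = {
	.name     = "plx9054_irq",
	.id_table = plx_ids,
	.probe    = plx_probe,
	.remove   = plx_remove,
};
module_pci_driver(plx_driver);

MODULE_LICENSE("GPL");

Once request_irq() succeeds, a "plx9054" entry appears in /proc/interrupts and its count increments on every interrupt, provided the 9054's interrupt control/status register (INTCSR) has its local and PCI interrupt enable bits set (check the 9054 data book for the exact bits).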

Related

DMA data transfer from SSD drive (over PCIe link) to AXI stream peripheral

I am trying to use DMA to transfer data from an SSD drive (a PCIe device) to an AXI4-Stream peripheral, e.g. a FIFO implemented on the FPGA. DDR memory is not involved in the transaction. PCIe is implemented using the AXI Memory Mapped to PCI Express (2.9) IP, configured as a root complex in Vivado. The block design was created based on the following tutorial (the only difference being that a DMA was used instead of a CDMA): https://www.fpgadeveloper.com/2016/04/zynq-pci-express-root-complex-design-in-vivado.html/
As can be seen in the Address Editor, the DMA can access the PCIe and PS address spaces. My user-space driver fails when it uses a PCIe address (BAR0) as the source address for the transaction, so that's probably not the right way to do DMA transactions over the PCIe link. There's not much information online about this topic.
I'm using Vivado 2018.3 and a Zynq-7000 device, part xc7z030sbg485-1.
Does anyone know the right way of using DMA to perform data transfers over the PCIe link?
Please let me know if you need any clarification.
Thank you.

How to access Xilinx AXI DMA from Linux?

I'm a software developer but I'm a newbie to embedded software development.
I have a Zynq UltraScale board that has an AXI DMA in its hardware design, and I want to access this DMA from Linux.
I know I should use the DMA Engine API to access DMA in Linux, and I found the following file, which is the Xilinx DMA driver, but I can't add it to my Qt project without errors; I get header-file-not-found errors.
drivers/dma/xilinx/xilinx_dma.c
I have scattered pieces of information about the DMA driver, the device tree, and the DMA Engine, but I don't know how to put them together to access the hardware DMA.
I built a PetaLinux project and added the DMA Engine and DMA Test client to its kernel.
I don't know whether adding the DMA Engine to the PetaLinux project is enough, or whether I need a client driver as well.
I don't know whether adding the hardware specification (the .xsa and .bit files) to the PetaLinux project is enough, or whether I should also add a device tree entry so that Linux detects the DMA.
I am looking for a step-by-step tutorial on how to set up Linux and Qt Creator for accessing the DMA,
or at least a clear roadmap to my target.
Thank you in advance.
First of all, you are facing errors when adding xilinx_dma.c to the Qt project because this file is meant to be compiled as part of the kernel or as a kernel module.
Adding the DMA Engine to PetaLinux is not enough to work with the DMA from user space. The DMA Engine only provides a standardized API that lets different DMAs be integrated into the kernel; you need to add a client driver as well. Xilinx, as far as I know, provides a simple client driver called the DMA Proxy driver. It also includes some simple examples that show how you can access the DMA from user space. However, if your application needs high bandwidth, you probably need to consider other options.
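To give a sense of what a kernel-space client does with the DMA Engine API, here is a rough sketch of a single memory-to-device transfer; the channel name "axidma0", the device pointer and the blocking wait are illustrative assumptions, not code taken from the Xilinx driver:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

/* Sketch of one MEM_TO_DEV transfer through the DMA Engine API.
 * "axidma0" is a hypothetical name matching a dma-names entry in the
 * client's device-tree node. A real client may also need
 * dmaengine_slave_config() depending on the DMA. */
static int axidma_send(struct device *dev, void *buf, size_t len)
{
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *desc;
	dma_addr_t dma_addr;
	dma_cookie_t cookie;

	chan = dma_request_chan(dev, "axidma0");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr)) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	desc = dmaengine_prep_slave_single(chan, dma_addr, len,
					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
		dma_release_channel(chan);
		return -EINVAL;
	}

	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	/* Busy-wait for completion; a real driver would use a callback. */
	dma_sync_wait(chan, cookie);

	dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
	dma_release_channel(chan);
	return 0;
}

A client driver such as the DMA Proxy then exposes something like this to user space through a character device and ioctl/mmap calls.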
There is also an open-source client driver for the AXI DMA which achieves higher bandwidth than the DMA Proxy driver. Its user-space API also allows you to register a callback function to be called whenever a transaction has finished.
The third option is to implement the driver in user space. This can be done by defining the DMA as a UIO device in the device tree and accessing its register map directly from user space. In this case, you need to allocate contiguous memory blocks in kernel space to avoid complications with the MMU, which cannot be dealt with from user space.
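A user-space sketch of that UIO route might look like the following; /dev/uio0, the 64 KiB map size and the register usage are placeholders that depend on your device-tree UIO node and on the AXI DMA register map in the product guide:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define UIO_DEV    "/dev/uio0"   /* placeholder UIO node for the AXI DMA */
#define MAP_SIZE   0x10000       /* placeholder size of the register window */
#define MM2S_DMACR 0x00          /* MM2S control register offset (per PG021) */

int main(void)
{
	int fd = open(UIO_DEV, O_RDWR | O_SYNC);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	volatile uint32_t *regs = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
	                               MAP_SHARED, fd, 0);
	if (regs == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* Example: set the run/stop bit in the MM2S control register. */
	regs[MM2S_DMACR / 4] |= 0x1;
	printf("MM2S_DMACR = 0x%08x\n", regs[MM2S_DMACR / 4]);

	munmap((void *)regs, MAP_SIZE);
	close(fd);
	return 0;
}

Any buffer address you then write into the DMA's address registers must be the physical address of a contiguous buffer, which is the kernel-side allocation problem mentioned above.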

Tracing PCI device memory I/O operations in QEMU / VFIO environment

I'm trying to reverse engineer a PCI device under a QEMU / VFIO environment, and I would like to trace all I/O operations the card performs on physical memory. The card makes use of PCI bus mastering: it writes data to other devices and most probably reads data preprocessed by the driver from host RAM. So far I have only been able to trace reads and writes to the card's MMIO space (data transfers from host to device); sadly, I'm missing the other direction of R/W operations (the device fetching data from the host).
Is there a way to trace the I/O operations that a PCI device performs on physical memory, i.e. direct I/O and/or DMA transfers, under a QEMU / VFIO environment? I've enabled tracing for the following events:
vfio_pci_read_config vfio_pci_write_config vfio_region_write
vfio_region_read vfio_intx_interrupt vfio_intx_eoi vfio_intx_update
vfio_intx_enable vfio_intx_disable vfio_msi_interrupt
vfio_populate_device_config vfio_region_mmap
Is there any event that can be registered in QEMU that allows this? Thank you in advance.
The PCI device is a peripheral device, which means it has its own processing unit and runs its own firmware. Accesses to the MMIO region made by the peripheral device itself occur on the device and are thus not traceable with QEMU.
It's possible to trace reads/writes to MMIO issued from within the QEMU VM, because the executed memory instruction invokes a callback function in VFIO that handles the MMIO access.
Since the PCI device reads/writes the MMIO region from its firmware (executed by the device's own processing unit), you are not able to trace those events on the host side.
I think what you can do is compare mmio_read values with mmio_write values during reverse engineering.

Linux memory manager infringes on PCI memory

My board has a Cavium Octeon NPU running Linux kernel 2.6.34.10, acting as a PCIe Root Complex. It is connected to a PCIe switch, as are some other peripheral devices (endpoints), among which is an SSD based on Marvell's 9143 PCIe-to-SATA controller.
When PCIe is initially enumerated, the PCI driver on the Octeon adds up the sizes of all the prefetchable memory resources and programs the PLIMIT and PBASE registers on the upstream switch port accordingly. In my case that address range is 0x80000000 - 0xEFFFFFFF.
After that, I would expect that address range to be off-limits to the kernel memory manager when it allocates DMA buffers and the like. And yet I see the kernel, at some point, start sending SCSI requests to the SSD device where scatter-gather list elements fall within this address range. I confirmed this by looking at a PCI analyzer trace. Naturally, when the SSD controller receives such an address it tries to access it (DMA read or write) and fails, because the upstream switch port refuses to forward the request upstream to the Root Complex; it is programmed to think that this address is downstream from it. (Interestingly, it mostly happens when I manipulate large files: I see the kernel-allocated buffer addresses grow downward until they dip below 0xEFFFFFFF.)
Hence the question: shouldn't the PCI enumeration/rescan code tell the kernel that these are PCI device register addresses and therefore off-limits for DMA buffer allocation? Or is it the responsibility of each individual device driver to reserve its prefetchable memory? The Marvell driver I use reserves the regular memory BAR, but not the prefetchable one (see the sketch below). Is that a problem?
Thanks in advance, and apologies for the lengthy description.
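For reference, on the last sub-question: the usual way a Linux PCI driver claims every BAR, prefetchable or not, so that the ranges show up as busy in /proc/iomem is pci_request_regions(); whether that alone keeps allocations out of the window is exactly what is being asked here. A minimal sketch with a made-up driver name:

#include <linux/pci.h>

/* Claim all BARs the device exposes (memory and I/O, prefetchable or not)
 * instead of requesting individual BARs one at a time. */
static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	ret = pci_request_regions(pdev, "my_sata_driver");
	if (ret) {
		pci_disable_device(pdev);
		return ret;
	}

	/* The AHCI register BAR (typically BAR 5) can then be mapped
	 * with pci_iomap(pdev, 5, 0). */
	return 0;
}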

PCI Express BAR memory mapping basic understanding

I am trying to understand how PCI Express works so I can write a Windows driver that can read from and write to a custom PCI Express device with no on-board memory.
I understand that the Base Address Registers (BARs) in the PCIe configuration space hold the memory addresses that the PCI Express device should respond to / is allowed to write to. (Is that correctly understood?)
My questions are the following:
What is a "bus-specific address" compared to physical address when talking about PCIE?
When and how is the BAR populated with addresses? Is the driver responsible for allocating memory and writing the address to the peripheral BAR?
Is DMA used when transferring data from peripheral to host memory?
I appreciate your time.
Best regards,
I'm also working on a device driver (albeit on Linux) with a custom board. Here is my attempt at answering your questions:
The BARs represent memory windows as seen by the host system (the CPUs) to talk to the device. The device doesn't write into that window but merely answers TLP (Transaction Layer Packet) requests (MRd*, MWr*).
I would say "bus-specific" = "physical" addresses if your architecture doesn't have a bus-layer translation mechanism. Check this thread for more info.
In all the x86 consumer PCs I've used so far, the BAR addresses seemed to be allocated either by the BIOS or at OS boot. The driver has to work with whatever address has been allocated.
The term DMA seems to be abused in place of bus mastering, which I believe is the correct term in PCIe. In PCIe every device may be a bus master (if allowed by bit 2 of its Command register). It does so by sending MRd/MWr TLPs to other devices on the bus (but generally to system memory) and by signalling interrupts to the CPU.
From your query it's clear that you want to write a driver for a PCIe slave device. To understand the scheme of things happening behind a PCIe transfer, a lot of material is available on the internet (e.g. PCIe bus enumeration, peripheral address mapping to memory, etc.).
Yes, your understanding is correct regarding the mapping of PCIe registers into memory, and you can read/write them (for example, in a Linux PCIe device driver you can do this using "ioremap"; a short sketch follows this answer).
An address bus is used to specify a physical address. When a processor or DMA-enabled device needs to read or write to a memory location, it specifies that memory location on the address bus. Nothing more to add to that.
"PCIe bus enumeration" topic will answer your 2nd question.
Your third question is vague. You mean slave PCIe device. Assuming it is, yes you can transfer data between a slave PCIe device and host using a DMA controller.
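To make the "window" and ioremap points concrete, this is roughly how a Linux driver maps a BAR and reads a register through it; the BAR number and the 0x10 offset are made up for illustration (on Windows the analogous mapping is done with MmMapIoSpace):

#include <linux/pci.h>
#include <linux/io.h>

/* Map BAR0 of the device and read a 32-bit register at a hypothetical
 * offset; each CPU read becomes an MRd TLP that the device answers. */
static int read_device_register(struct pci_dev *pdev)
{
	void __iomem *bar0;
	u32 val;

	bar0 = pci_iomap(pdev, 0, 0);     /* maxlen 0 = map the whole BAR */
	if (!bar0)
		return -ENOMEM;

	val = ioread32(bar0 + 0x10);      /* 0x10: illustrative offset */
	dev_info(&pdev->dev, "reg 0x10 = 0x%08x\n", val);

	pci_iounmap(pdev, bar0);
	return 0;
}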
I am working on a project that involves a "PCIe-DMA" engine connected to the host over the PCIe bus. It really depends on your design and implementation; in my case the PCIe-DMA is itself a slave PCIe device on the target board, connected to the host over PCIe.
Clarification for your doubts/questions is below.
1> Many devices that sit on a bus like PCI see memory in terms that are different from physical addresses; those are called bus addresses.
For example, if you are initiating DMA from a device sitting on the bus to the system's main memory, then the destination address should be the bus address corresponding to that physical address in memory (see the sketch after this list).
2> BARs get populated at enumeration time; in a typical PC this is at boot time, when your PCI-aware firmware enumerates the PCI devices present on the slots and allocates addresses and sizes to the BARs.
3> Yes, you can use both DMA-initiated and CPU-initiated operations on these BARs.
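A small sketch of point 1> for a Linux driver: the DMA API returns the bus address that must be programmed into the device, which is not the kernel virtual address and not necessarily equal to the physical address; the 0x20/0x24 register offsets below are hypothetical:

#include <linux/pci.h>
#include <linux/io.h>
#include <linux/dma-mapping.h>
#include <linux/kernel.h>

/* Allocate a DMA buffer and program its *bus* address into the device.
 * dma_handle is what the device must use as its source/destination. */
static int setup_dma_buffer(struct pci_dev *pdev, void __iomem *bar0)
{
	dma_addr_t dma_handle;
	void *cpu_addr;

	pci_set_master(pdev);  /* allow the device to issue MRd/MWr TLPs */

	cpu_addr = dma_alloc_coherent(&pdev->dev, 4096, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* Hypothetical 64-bit "DMA address" register pair at offsets 0x20/0x24;
	 * the buffer is freed with dma_free_coherent() on teardown (omitted). */
	iowrite32(lower_32_bits(dma_handle), bar0 + 0x20);
	iowrite32(upper_32_bits(dma_handle), bar0 + 0x24);

	return 0;
}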
-- flyinghigh
