I'm playing around with UEFI and SMM and am currently trying to trigger an SMI from ring 0 on an Intel NUC machine. I've been using Chipsec to do so, but I couldn't properly specify a valid communication buffer, which the SW SMI handler receives as one of its parameters.
The only clue I found in the UEFI specs is in "Appendix O - UEFI ACPI Data Table" under Table 310. SMM Communication ACPI Table but the specified method doesn't seem to work. I'm taking a black box approach as I don't have access to the NUC's SMRAM.
What is the working way of successfully specifying the communication buffer for an SMI? Some code samples will be greatly appreciated.
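Here is roughly what I'm trying, based on my reading of Appendix O. This is only a sketch: the handler GUID, the SW SMI value, and the buffer address are placeholders for whatever the platform's SMM Communication ACPI Table actually advertises.

    /* Sketch: the communication buffer starts with an EFI_SMM_COMMUNICATE_HEADER
     * (GUID of the target handler, payload length, payload), and the SW SMI is
     * raised by writing the handler's registered value to the APM command port
     * 0xB2.  All of the concrete values here are placeholders. */
    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint8_t  HeaderGuid[16];   /* EFI_GUID of the SMI handler to invoke */
        uint64_t MessageLength;    /* number of bytes in Data[] (UINTN)     */
        uint8_t  Data[];           /* handler-specific payload              */
    } EFI_SMM_COMMUNICATE_HEADER;

    static void trigger_sw_smi(uint8_t smi_value)
    {
        /* ring 0 only: write the SW SMI value to the APM command port (0xB2) */
        __asm__ volatile ("outb %0, %1" :: "a"(smi_value), "Nd"((uint16_t)0xB2));
    }

    void send_smm_message(void *comm_buf, const uint8_t guid[16],
                          const void *payload, uint64_t len, uint8_t smi_value)
    {
        EFI_SMM_COMMUNICATE_HEADER *hdr = comm_buf;

        memcpy(hdr->HeaderGuid, guid, 16);
        hdr->MessageLength = len;
        memcpy(hdr->Data, payload, len);

        /* the *physical* address of comm_buf must already be where the
         * platform expects it (e.g. in the SMM Communication ACPI Table) */
        trigger_sw_smi(smi_value);
    }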
A secure SMI call must be invoked before this buffer can be retrieved. A modern UEFI BIOS provides a special mechanism to prevent external users from getting at this private data, and this information is not revealed in any documents.
I'm trying to integrate the FatFs file system on a Micron NAND SPI flash. I'm using the SPI peripheral of the STM32L486RG as the interface.
I have developed a low-level driver through which I'm able to read, write, and erase data at different locations in the NAND memory.
I have then integrated my low-level driver APIs into the diskio.c file so that they can be used by the FatFs APIs.
I have successfully formatted the memory with f_mkfs (I'm getting FR_OK from both the f_mkfs and f_open APIs, and when debugging, the fs object contains the FAT signature).
However, when I try to write a buffer into the file that I created with f_open, I get FR_INT_ERR.
I have debugged my code step by step and found that the get_fat function returns 1, which means an internal error has occurred.
Any idea what the issue could be?
I guess you need to erase the memory sector you are about to write to - even though you write page by page and not a whole sector at a time - and that's why using FatFs on NAND flash becomes tricky.
Since your goal is to bind the logical drive to the entire physical drive, you need to pass the option (FM_SFD | FM_ANY) as the opt parameter of the f_mkfs function when formatting the memory.
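As a rough sketch (assuming a FatFs release whose f_mkfs() still takes a BYTE opt argument, e.g. R0.13; newer releases take an MKFS_PARM struct instead, and nand_erase_block()/nand_write_page()/nand_sector_needs_erase() below stand in for your low-level driver):

    /* Format the whole device as a single logical drive (no partition table). */
    #include "ff.h"
    #include "diskio.h"

    static FATFS fs;
    static BYTE work[FF_MAX_SS];   /* work area; macro name varies by FatFs release */

    FRESULT format_and_mount(void)
    {
        /* FM_SFD: bind the logical drive to the entire physical drive */
        FRESULT res = f_mkfs("", FM_SFD | FM_ANY, 0, work, sizeof work);
        if (res != FR_OK)
            return res;
        return f_mount(&fs, "", 1);
    }

    /* In diskio.c: NAND pages can only be programmed after the block that
     * contains them has been erased, so writing a page of a non-erased block
     * corrupts the FAT and later shows up as an internal error. */
    #define SECTOR_SIZE 2048        /* example: one FatFs sector == one NAND page */

    DRESULT disk_write(BYTE pdrv, const BYTE *buff, DWORD sector, UINT count)
    {
        while (count--) {
            if (nand_sector_needs_erase(sector))   /* hypothetical helper */
                nand_erase_block(sector);          /* hypothetical helper */
            if (nand_write_page(sector, buff) != 0)
                return RES_ERROR;
            buff += SECTOR_SIZE;
            sector++;
        }
        return RES_OK;
    }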
We are working on a project where we need to do some image processing on an FPGA. For that purpose we are using a ZedBoard with Linaro (Ubuntu version) running on it.
What we have already done is store the image in binary form, pixel by pixel, in DDR using a Python script on the processing system (PS) of the ZedBoard.
Now our task is to read the contents of DDR memory, process them, and send the processed output back to DDR memory. We are using the Xilinx Vivado tool for the FPGA part. We tried to use AXI DMA with AXI Interconnect to read and write data from DDR.
My question is: do we need to use the SDK and some sort of C code to read and write DDR memory from the programmable logic (PL) side? We want our module to start reading the data from DDR on a control signal and then start the actual processing of the image data; once a specific block of data has been read and processed, the result should be stored back to DDR memory on the fly. We are not sure which IP blocks we need in our Vivado block design. Also, do we need block RAM at the end before sending the data back to DDR?
Has anyone already done this sort of project, or does anyone have relevant knowledge? Any help from your side will be appreciated!
Thanks
The Zynq provides an AMBA AXI interconnect for that purpose.
In the Zynq block diagram, it is the interconnect shown on the right.
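If you drive the transfer from the PS with the Xilinx SDK, the C side can be as simple as the sketch below (bare-metal, simple non-scatter-gather mode; under Linaro/Linux you would go through a kernel DMA driver or UIO instead, and the addresses, length, and XPAR_AXIDMA_0_DEVICE_ID name are placeholders):

    /* Kick the AXI DMA: stream the source image out of DDR into the PL
     * pipeline (MM2S) and write the processed pixels back to DDR (S2MM).
     * Cache maintenance (flush/invalidate) is omitted for brevity. */
    #include "xaxidma.h"
    #include "xparameters.h"

    #define IMAGE_ADDR   0x10000000U   /* where the Python script stored the image */
    #define RESULT_ADDR  0x11000000U   /* where the processed image should land    */
    #define IMAGE_BYTES  (640 * 480)

    int run_image_pass(void)
    {
        XAxiDma dma;
        XAxiDma_Config *cfg = XAxiDma_LookupConfig(XPAR_AXIDMA_0_DEVICE_ID);

        if (!cfg || XAxiDma_CfgInitialize(&dma, cfg) != XST_SUCCESS)
            return -1;

        /* arm the PL -> DDR (S2MM) channel first so it is ready to receive */
        XAxiDma_SimpleTransfer(&dma, RESULT_ADDR, IMAGE_BYTES, XAXIDMA_DEVICE_TO_DMA);

        /* then start streaming DDR -> PL (MM2S) */
        XAxiDma_SimpleTransfer(&dma, IMAGE_ADDR, IMAGE_BYTES, XAXIDMA_DMA_TO_DEVICE);

        /* busy-wait until both channels have finished */
        while (XAxiDma_Busy(&dma, XAXIDMA_DEVICE_TO_DMA) ||
               XAxiDma_Busy(&dma, XAXIDMA_DMA_TO_DEVICE))
            ;
        return 0;
    }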
I am wondering how a hypervisor using Intel's VMX / VT technology would simulate memory-mapped I/O (so that the guest could think it was performing memory-mapped I/O against a device).
I think the basic principle would be to set up the EPT page tables in such a way that the memory addresses in question would cause an EPT violation (i.e. VM exit) by setting them such that they cannot be read or written? However, the next question is how to process the VM exit. Such a VM-exit would fill out all the exit qualification reasons etc. including the guest-linear and guest-physical address etc. But what I am missing in these exit qualification fields is some field indicating - in case of a write instruction - the value that was attempted to be written and the size of the write. Likewise, for a read instruction it would be nice with some bit fields indicating the destination of the read, say a register or a memory location (in case of memory-to-memory string operations). This would make it very easy for the hypervisor to figure out what the guest was trying to do and then simulate the device behavior towards the guest.
But the trouble is, I can't find such fields among the exit qualifications. I can see an instruction pointer to where the faulting instruction is, so I could walk the page tables to read in the instruction and then decode it to understand the instruction, then simulate the I/O behavior. However, this requires the hypervisor to have a fairly complete picture of all x86 instructions, and be able to decode them. That seems to be quite a heavy burden on the hypervisor, and will also require it to stay current with later instruction additions. And the CPU should already have this information.
There's a chance that I am missing the relevant fields because the documentation is quite extensive; I have tried to search carefully but have not been able to find them. Maybe someone can point me in the right direction or confirm that the hypervisor will need to contain an instruction decoder.
I believe most VMs decode the instruction. It's not actually that hard, and most VMs have software emulators to fall back on when the CPU VM extensions aren't available or up to the task. You don't need to handle every instruction, just those that can take memory operands, and you can probably ignore everything that isn't a 1, 2, or 4 byte memory operand, since you're not likely to be emulating device registers of other sizes. (For memory-mapped device buffers, like video memory, you don't want to be trapping every memory access because that's too slow, so you'll have to take a different approach.)
However, there is one way you can let the CPU do the work for you, but it's much slower than decoding the instruction yourself and it's not entirely perfect. You can single-step the instruction while temporarily mapping in a valid page of RAM. The VM exit will tell you the guest-physical address accessed and whether it was a read or write. Unfortunately it doesn't reliably tell you whether it was a read-modify-write instruction; those may just set the write flag, and with some device registers that can make a difference. It might be easier to copy the instruction (it can be at most 15 bytes, but watch out for page boundaries) and execute it in the host, but that requires that you can map the page to the same virtual address in the host as in the guest.
You could combine these techniques, decode the common instructions that are actually used to access memory mapped device registers, while using single stepping for the instructions you don't recognize.
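As a rough sketch of how those pieces fit together (vmread() is whatever wrapper your hypervisor already has around the VMREAD instruction; the decode, emulate, and single-step helpers are placeholders for your own code):

    /* EPT-violation handler skeleton: the exit qualification says read vs.
     * write and the VMCS gives the guest-physical address, but the value and
     * operand size still have to come from the faulting instruction. */
    #include <stdint.h>

    #define VMCS_EXIT_QUALIFICATION  0x6400   /* VMCS field encodings (Intel SDM) */
    #define VMCS_GUEST_PHYS_ADDRESS  0x2400

    uint64_t vmread(uint32_t field);          /* provided by the hypervisor */

    void handle_ept_violation(void)
    {
        uint64_t qual = vmread(VMCS_EXIT_QUALIFICATION);
        uint64_t gpa  = vmread(VMCS_GUEST_PHYS_ADDRESS);

        int is_read  = (qual >> 0) & 1;       /* bit 0: data read access  */
        int is_write = (qual >> 1) & 1;       /* bit 1: data write access */

        /* Either decode the common MMIO instructions yourself ...          */
        if (can_decode_instruction_at(guest_rip()))       /* hypothetical   */
            decode_and_emulate_mmio(gpa, is_read, is_write);
        else
            /* ... or single-step the instruction against a scratch page
             * (slower, and read-modify-write may only report the write). */
            single_step_with_scratch_page(gpa);
    }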
Note that by choosing to write your own hypervisor you've put a heavy burden on yourself. Having to decode instructions in software is a pretty minor burden compared to the task of emulating an entire IBM PC compatible computer. The Intel virtualisation extensions aren't designed to make this easier, they're just designed to make it more efficient. It would be easier to write a pure software emulator that interpreted the instructions. Handling memory mapped I/O would be just a matter of dispatching the reads and writes to the correct function.
I don't know in detail how VT-x works, but I think I see a flaw in the way you wish it could work:
Remember that x86 is not a load/store machine. The load part of add [rdi], 2 doesn't have an architecturally-visible destination, so your proposed solution of telling the hypervisor where to find or put the data doesn't really work, unless there's some temporary location that isn't part of the guest's architectural state, used only for communication between the hypervisor and the VMX hardware.
To handle a read-modify-write instruction with a memory destination efficiently, the VM should do the whole thing with one VM exit. So you can't just provide separate load and store interfaces.
More importantly, handling atomic read-modify-writes is a special case. lock add [rdi], 2 can't just be done as a separate load and store.
Is it possible to do DMA transfers with the «Cyclone V Avalon-MM for PCIe» IP core provided by Altera in Qsys (Quartus 14.0)?
Altera provides an IP core named «Cyclone V Avalon-MM DMA for PCIe» to do DMA transfers, but this IP core does not support PCIe Gen1 with a x1 lane.
The demo design (ep_g1x1) for «Cyclone V Avalon-MM for PCIe» includes a DMA block that is connected to the Avalon-MM TX bus of the PCIe IP core.
So I'm wondering whether it's possible to write data from this DMA block to the root complex (host), because I can't figure out how to do that.
From my brief skim of the material, it should be possible to issue DMA reads or writes from an RC to your Cyclone V (EP) using the IP core you're interested in.
I've done DMA reads and writes on a Stratix V, however it was in a non-Qsys design just using the PCIe core HIP block (custom TLP encoding and decoding logic). This block just seems to be a wrapper around their PCIe HIP block that also handles the transaction layer for you.
The first step will be to get your RC to issue PCIe DMA read or write requests. In the case of a read, you'll want it to send a memory read request with a length greater than 1 DWORD, which the FPGA answers with a completion with data (CplD). I would suggest dedicating an entire BAR to map the memory space you want to DMA from on the FPGA, to keep your address targeting simple.
On the FPGA side, I would suggest using Signal Tap and probing the Rxm* interface signals on the core. This way you can see the exact timing of the DMA read request that comes out of the core. My guess is that the RXMRead_<n>_o signal will go high, indicating the start of the request, at which point you'll have to decode the RxmAddress_<n>_o and RXMBurstCount_<n>_o and pass them to some glue logic that will fetch the requested data from the FPGA's memory. Once you're ready to send back the data, assert RXMReadDataValid_<n>_i for each valid word being sent.
I'm guessing that the «Cyclone V Avalon-MM DMA for PCIe» core that you referenced takes care of that 'glue' logic I mentioned for you, and allows you to connect straight to an SDRAM controller on your Qsys bus. Altera doesn't usually encrypt their megafunction code, so if your SystemVerilog is strong, it might be worth digging through their generated files and seeing if you can reuse that bit of code in some way.
As for core settings, the only thing that I saw that you need to look out for is making sure the Single DW Completer setting is turned OFF. Otherwise the core will abort any requests it receives with a length greater than 1 DWORD.
Hope that helped somewhat.
I finally managed to issue DMA requests with the «Cyclone V Avalon-MM for PCIe» Altera IP core, so yes, it is possible.
On my system the root complex (RC) is an i.MX6 running Linux, so most of the tricks are in fact on the Linux side.
In the Linux driver, a page must be requested with a dma_alloc_coherent() call and its address must be written to the CRA registers named ADDR_MAP_LO0 and ADDR_MAP_HI0.
On my system memory pages are 4 KiB, so I had to configure the «address translation settings» of the PCIe hard IP with 4 KiB pages to be consistent.
Once that was done, I simply connected the DMA controller provided by Qsys to the TX Avalon-MM slave port of the PCIe IP.
Telling the DMA controller to write data to this port automatically generates TLPs from the FPGA that write into the i.MX6 RAM.
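A rough sketch of the Linux driver side (the 0x1000/0x1004 CRA offsets for ADDR_MAP_LO0/ADDR_MAP_HI0 are assumptions here; check the address translation table offsets documented for your core revision):

    /* Allocate one coherent page and point the Cyclone V address translation
     * entry at it, so TLPs generated by the FPGA land in this page. */
    #include <linux/dma-mapping.h>
    #include <linux/io.h>
    #include <linux/kernel.h>

    #define CRA_ADDR_MAP_LO0  0x1000   /* assumed offsets in the CRA slave */
    #define CRA_ADDR_MAP_HI0  0x1004

    int setup_dma_window(struct device *dev, void __iomem *cra_base,
                         void **cpu_addr, dma_addr_t *bus_addr)
    {
        /* one 4 KiB page, matching the 4 KiB address translation page size */
        *cpu_addr = dma_alloc_coherent(dev, PAGE_SIZE, bus_addr, GFP_KERNEL);
        if (!*cpu_addr)
            return -ENOMEM;

        /* program the translation entry with the bus address of the page */
        iowrite32(lower_32_bits(*bus_addr), cra_base + CRA_ADDR_MAP_LO0);
        iowrite32(upper_32_bits(*bus_addr), cra_base + CRA_ADDR_MAP_HI0);
        return 0;
    }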
I know the definition
A character device driver is one that transfers data directly to and from a user process.
But can someone explain this in a more intuitive way? First of all, there should be a device. What is the device in the above definition?
If you say it can be a file, then can we say that reading a file and putting the data on the console is an example of a character driver?
What exactly is a character driver?
A device driver is the integration of two pieces of code. The first piece of code defines how the driver's services are made available to the application (user space).
The second piece of code is the hardware-access part: the instructions that carry out the physical operations on the target hardware.
Based on the first piece of code we have three models: the character model, the block model, and the network model; accordingly we speak of character drivers, block drivers, and network drivers.
The first piece of code is totally kernel specific: it is about what interface the kernel provides to access the hardware. Implementing it on Linux, Windows, or another OS may differ, so we need to know what interface the kernel offers for providing services to applications.
In Linux, from the user's perspective every piece of hardware is a file. At boot time all the hardware devices present are detected and added to the device tree, and for each device a corresponding device node is created automatically in the /dev directory. As mentioned above, a char device gets a character device node and a block device gets a block device node.
To write a character driver, we create a device node in the /dev directory and assign it a major number; the application then usually performs read/write operations on that device file.
We implement the driver operations and assign them to the file-operations (fops) structure.
Requests on the device file are received by the VFS, which turns the device-file operations into the corresponding driver operations.
APP ---> dev file ---> fops ---> driver ---> device
dev file ---> interface between application and driver
driver interacts ---> device
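A minimal skeleton showing that path (the name "mychar" and the fixed-size buffer standing in for real hardware are just examples):

    /* Minimal character-driver sketch: register fops with the VFS, then a
     * device node created for the returned major number routes read/write
     * calls from the user process into these functions. */
    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    #define DEVNAME "mychar"
    static int major;
    static char devbuf[256];

    static ssize_t mychar_read(struct file *f, char __user *buf,
                               size_t len, loff_t *off)
    {
        /* copy data from the "device" (here just a buffer) to the user process */
        return simple_read_from_buffer(buf, len, off, devbuf, sizeof devbuf);
    }

    static ssize_t mychar_write(struct file *f, const char __user *buf,
                                size_t len, loff_t *off)
    {
        /* copy data from the user process into the "device" */
        return simple_write_to_buffer(devbuf, sizeof devbuf, off, buf, len);
    }

    static const struct file_operations mychar_fops = {
        .owner = THIS_MODULE,
        .read  = mychar_read,
        .write = mychar_write,
    };

    static int __init mychar_init(void)
    {
        /* register with the VFS; a node can then be created with
         * mknod /dev/mychar c <major> 0 (or by udev) */
        major = register_chrdev(0, DEVNAME, &mychar_fops);
        return major < 0 ? major : 0;
    }

    static void __exit mychar_exit(void)
    {
        unregister_chrdev(major, DEVNAME);
    }

    module_init(mychar_init);
    module_exit(mychar_exit);
    MODULE_LICENSE("GPL");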
You can refer to http://lwn.net/images/pdf/LDD3/ch03.pdf.
To the best of my knowledge:
The device can be your own private structure or a system object.
The driver is called a char driver because data is read and written a byte at a time.
If you are writing your own char driver you can use a char buffer or a kfifo to read from and write to the device.
You can create your device file in procfs and read/write it as you wish; this is accomplished through your char driver.
I hope my answer helps you.
Think of a device driver as an abstraction of the various hardware that the kernel has to deal with. The device driver knows the details of the hardware it is communicating with, so the kernel can read and write data as it would with any other filesystem file.
If you do some reading on device drivers, you will find that the open, read, write, and close system calls are all mapped to driver-specific function calls.
There are two types of drivers: block drivers and char drivers. The difference is that the first handles blocks of data while the second receives/transmits data byte by byte.
OK, say you have a device and you want to talk to it. How do you do it?
You have a driver, which is simply a number of functions that will be called by the kernel at the right time. The driver itself can't do anything on its own (it neither sends nor receives data); you need to tell it what to do. As you probably know by now, you can't communicate with the kernel directly from user space, but there are some tricks you can use. What you are talking about is called a device file. Your driver creates a file for your device, and when you write something to this file from user space, the driver is notified that there is some information to handle; it takes this information and passes it to its write function, which transmits the data to your device (in this case byte by byte).
Hopefully this is what you wanted to know.