Does a loadable kernel module (LKM) also need a device tree?

Recently I began researching how to write an SPI ADC driver (for the ADS7950) on a Raspberry Pi 4 running Linux.
I read a book which says that a loadable kernel module (LKM) gives the driver good flexibility.
I know every device should be initialized from the device tree when Linux boots.
The device tree passes properties to the device driver, such as the clock frequency and the device's address in memory.
But if I build the driver as an LKM, it will only be initialized after boot.
Do I also need to add (append) a device tree node for the LKM driver?
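
For reference, device tree matching works the same whether a driver is built in or loaded as a module after boot: the kernel keeps the device tree in memory, and when the LKM is inserted it is probed against any node whose compatible string it declares. So the node still has to exist; on the Pi it is typically added with a device tree overlay. A minimal sketch of the driver side, with the "ti,ads7950" compatible string assumed for illustration:

#include <linux/module.h>
#include <linux/spi/spi.h>
#include <linux/of.h>

static int ads7950_probe(struct spi_device *spi)
{
    /* properties such as spi-max-frequency arrive via the DT node */
    dev_info(&spi->dev, "probed at %u Hz\n", spi->max_speed_hz);
    return 0;
}

/* the compatible string here must match the node in the device tree */
static const struct of_device_id ads7950_of_match[] = {
    { .compatible = "ti,ads7950" },
    { }
};
MODULE_DEVICE_TABLE(of, ads7950_of_match);

static struct spi_driver ads7950_driver = {
    .driver = {
        .name = "ads7950",
        .of_match_table = ads7950_of_match,
    },
    .probe = ads7950_probe,
};
module_spi_driver(ads7950_driver);

MODULE_LICENSE("GPL");

When the matching node is present, insmod triggers the probe even though the tree was parsed long before the module was loaded.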

Related

FT313 USB driver issue

I am working with Linux 2.6.20.14 running on a processor that carries an FT313 USB chip. I got a Linux driver from the vendor and ported it to my kernel. When I do a memory read from U-Boot on the region mapped to the FT313, each 16-bit value is repeated twice in memory. For example, if the default value is 03130001, the output is 03130313 00010001.
I needed to add dummy entries for each register to skip these repeated values. After that, USB insertion and removal produce prints and the IRQ count increases. But when I try to mount a pen drive as mass storage, the SCSI probes never happen and the USB device is not detected as mass storage. I have already loaded the usb-storage and scsi_mod modules.
Can anyone who has worked with the FT313 under Linux tell me whether this is expected behavior, and whether it is possible to use the FT313 as USB mass storage?
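
For illustration only, the dummy-entry workaround described above amounts to reading each 16-bit register at a doubled offset and masking off the repeated half. A sketch (the accessor name and layout are my assumptions based on the dump shown):

#include <linux/io.h>

/* On this bus each 16-bit FT313 register reads back duplicated within
 * a 32-bit word (0x0313 -> 0x03130313), so the register map is laid
 * out at doubled offsets and the repeated upper half is discarded. */
static inline u16 ft313_read16(void __iomem *base, unsigned int reg)
{
    u32 v = readl(base + reg * 2);  /* each 16-bit reg occupies 32 bits */
    return (u16)(v & 0xffff);       /* keep one copy of the value */
}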

How to access Xilinx Axi DMA from Linux?

I'm a software developer, but I'm a newbie in embedded software development.
I have a Zynq UltraScale board whose hardware design contains an AXI DMA, and I want to access this DMA from Linux.
I know I should use the DMA Engine framework to access DMA in Linux, and I found the Xilinx DMA driver below, but I can't add these files to my Qt project without errors; I get "header file not found" errors.
drivers/dma/xilinx/xilinx_dma.c
I have scattered pieces of information about the DMA driver, the device tree, and the DMA Engine, but I don't know how to put them together to access the hardware DMA.
I built a PetaLinux project and added the DMA Engine and DMA Test client to its kernel.
I don't know whether adding the DMA Engine to the PetaLinux project is enough, or whether I need a driver as well.
I don't know whether adding the hardware specification (via the .xsa and .bit files) to the PetaLinux project is enough, or whether I also need to add a device tree entry so Linux detects the DMA.
I'm looking for a step-by-step tutorial on how to set up Linux and Qt Creator for accessing the DMA,
or at least a clear roadmap to my target.
Thank you in advance.
First of all, you are getting errors when adding xilinx_dma.c to the Qt project because this file is meant to be compiled as part of the kernel, or as a kernel module.
Adding the DMA Engine to PetaLinux is not enough to work with the DMA from user space. The DMA Engine only provides a standardized API that lets different DMA controllers be integrated into the kernel; you need to add a client driver as well. Xilinx, as far as I know, provides a simple client driver called the DMA Proxy driver. It also includes some simple examples that show how you can access the DMA from user space. However, if your application needs high bandwidth, you will probably need to consider other options.
There is also an open-source client driver for the AXI DMA which achieves higher bandwidth than the DMA Proxy driver. Its user-space API also lets you register a callback function that is invoked whenever a transaction finishes.
The third option is to implement the driver in user space. This can be done by declaring the DMA as a UIO device in the device tree and accessing its register map directly from user space. In this case, you need to allocate some contiguous memory blocks in kernel space to avoid complications with the MMU, which cannot be dealt with from user space.
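
For the UIO option, the user-space side can be as small as an open() plus an mmap(). A minimal sketch, assuming the AXI DMA register block shows up as /dev/uio0 (for example via the uio_pdrv_genirq driver) with a 64 KiB map that matches the reg entry in the device tree:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* map region 0 of the UIO device: the DMA's register block */
    volatile uint32_t *regs = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* offset 0x00 is the MM2S control register on the AXI DMA */
    printf("MM2S_DMACR = 0x%08x\n", regs[0]);

    munmap((void *)regs, 0x10000);
    close(fd);
    return 0;
}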

ALSA, TinyALSA support in Android Things for Raspberry Pi 0.5.1-devpreview

The 0.5.1-devpreview BSP for the RPi3 ships libtinyalsa.so and libalsautils.so, but seemingly no adb shell command-line support for audio.
We are designing a custom audio board (with an audio processor) for use with Android Things and the Raspberry Pi; under Raspbian we would typically use the ALSA utilities and custom kernel drivers to access such a board.
It is possible that the default Android Things I2S peripheral drivers and the Peripheral Manager support the stream interfaces we need (the same way the VoiceHat drivers were wrapped), but we have little to no information on the default drivers in the RPi3 BSP, and no information on how to override the default drivers in Android Things without a distro rebuild.
It seems silly to write a native C++ low-level peripheral driver when so many audio processor companies already provide ALSA-ready ASoC drivers intended for the device source tree.
What are the best practices for writing your own audio driver for Android Things?
The VoiceHat driver is one example of how to implement a userspace audio driver.
If you're using a custom audio board, you need to know which audio chip the board uses. From that chip's datasheet, you should be able to use the same peripheral I/O (UART, GPIO, I2C, SPI) to configure the connection and read/write data over the I2S bus.
In the Google Assistant sample, the app registers the VoiceHat at the beginning of the activity and unregisters it at the end of the activity.

What exactly determines what’s in the radiotap header when capturing on WLAN?

I'm doing a study project on Wi-Fi signal quality. I want to use Raspberry Pis to monitor as many metrics as possible at the packet level. My plan is to put Wi-Fi adapters into monitor mode (using airmon-ng) and then capture data about the packets with a wireless network protocol analyzer such as tshark.
As I understand it, a wireless capture mainly has three parts. First, a frame section that carries the same information regardless of what you are capturing on, such as the frame number, frame length, and arrival time. (I'd like to upload images, but I don't have 10 reputation yet...)
Then there is the IEEE 802.11 data, which contains what the network needs to operate; when capturing on WLAN this includes the MAC addresses.
And then there is the radiotap header, which contains all kinds of information (signal strength in dB and dBm, noise level, signal quality, TX value, and much more). This one is a bit different, because the information is filled in, or injected, by the Wi-Fi adapter you use to capture the data.
The "present" flags tell you which values are actually being injected by the Wi-Fi adapter. My problem is that for my research I really need as many values as possible. I've been at it for hours, but I haven't managed to capture anything more than the dBm signal strength (when even that is available). This is what I have tried so far:
The adapters I have used so far are the Edimax EW-7811Un, the AirPcap Classic, the AirPcap Tx, and two similar Alfa adapters with the Atheros AR9271 chipset. The AR9271 adapters worked out of the box on Raspbian (Debian for the Raspberry Pi) with the ath9k_htc driver. Putting them into monitor mode and capturing works fine, but only the dBm signal strength appears in the capture (as in the screenshots above).

The Edimax worked out of the box on the 8192cu driver, but that driver clearly doesn't support monitor mode. I could put the adapter into monitor mode by running it on the zd1211rw driver, but that didn't even give the dBm signal strength. The strange thing is that a friend tried the exact same Edimax adapter and he could capture; the only difference we could find is that his lsmod reports rtl8192cu rather than 8192cu. Forums say 8192cu is the newer version, yet this friend was running the newest Arch Linux kernel (newer than Raspbian's). So I installed Arch Linux on the Pi, but I still wasn't able to put the Edimax into monitor mode on the 8192cu driver. I then found a package in the AUR, dkms-8192cu, which was supposed to contain a patched version, but after installing it things still didn't work. Downloading the driver from the Realtek website didn't help either. There is some material about patching on the aircrack-ng website, but it deals with frame injection and doesn't really look like what I need.
Then I bought the AirPcap Classic and the AirPcap Tx to see what they could do. First of all, they have zero Linux support, which is already a big drawback since I need to use them from the Pis. But even on Windows the AirPcaps only capture dB and dBm noise and signal quality. They do report a dBm noise level, but it is worthless since it always sits at -100. The AirPcap Classic and Tx have the ZD1211B chipset, so I can run them on the zd1211rw driver, but that also gives no dBm signal value or anything else.
So my question is: what exactly determines what's in the radiotap header? I assume it is all up to the driver, but I need to be certain before I write off every ath9k_htc-based adapter. I'm about to purchase another adapter that runs on the carl9170 driver, but I can't find any guarantee that it will give me those values. What I did find in the literature is that the madwifi driver gives (or used to give) noise levels; however, it was taken over by Atheros, the project stopped, and every site suggests just using the ath9k or ath5k drivers instead. I tried to install it anyway but failed, since the software is badly outdated now that the project has stopped.
It would be a really big help if someone could explain what exactly determines what ends up inside the radiotap header, and share any experience of capturing more than just the dBm signal strength on Linux.
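
For what it's worth, you can inspect the present bitmask yourself: it sits in the first bytes of every captured frame, and it is exactly what the driver chose to fill in. A minimal sketch (field bit numbers per the radiotap spec; not tied to any particular driver):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Radiotap field bit numbers, per the radiotap.org spec */
enum {
    RTAP_TSFT          = 0,
    RTAP_FLAGS         = 1,
    RTAP_RATE          = 2,
    RTAP_CHANNEL       = 3,
    RTAP_DBM_ANTSIGNAL = 5,
    RTAP_DBM_ANTNOISE  = 6,
    RTAP_EXT           = 31   /* another 32-bit present word follows */
};

/* Print which fields the capturing driver put in the radiotap header. */
void print_radiotap_present(const uint8_t *pkt, size_t len)
{
    if (len < 8 || pkt[0] != 0)   /* radiotap version must be 0 */
        return;
    /* all radiotap fields are little-endian regardless of host order */
    uint16_t hdr_len = (uint16_t)(pkt[2] | (pkt[3] << 8));
    uint32_t present = (uint32_t)pkt[4] | ((uint32_t)pkt[5] << 8) |
                       ((uint32_t)pkt[6] << 16) | ((uint32_t)pkt[7] << 24);
    printf("radiotap: header %u bytes, present = 0x%08x\n", hdr_len, present);
    if (present & (1u << RTAP_DBM_ANTSIGNAL)) puts("  dBm antenna signal");
    if (present & (1u << RTAP_DBM_ANTNOISE))  puts("  dBm antenna noise");
    if (present & (1u << RTAP_CHANNEL))       puts("  channel");
    if (present & (1u << RTAP_EXT))           puts("  extended present word(s)");
}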

Writing Device Drivers for a Microcontroller (any)

I am very enthusiastic about writing device drivers for microcontrollers (PIC, Atmel, etc.).
Since I am a newbie in this controller-coding area, I just want to know whether writing device drivers for a controller is the same as writing them for Linux (or any other OS).
Also, can anyone suggest an online tutorial on building such device drivers?
Thanks,
If you are thinking about developing device drivers to interface your device with a host computer (probably over USB), note that most microcontrollers nowadays implement standard USB device classes that rely on the host's native drivers.
A concrete example:
If you use a PIC18F4555, you can use the regular HID (human interface device) Windows driver to communicate with your microcontroller (given that you implemented the class correctly). There is no need to develop any driver.
Writing a device driver for an MCU is a pretty far cry from writing one for an OS. Most MCUs won't have an OS running on them at all. You'll generally end up writing low-level interrupt service routines (ISRs) that fill up buffers, which your application software then empties. You don't have to fit into any device driver paradigm that an OS has defined. You basically read the datasheet for the device you want to interface with, then read and write its registers over whatever interface it uses (SPI, I2C, UART, etc.). Ultimately the device driver ought to provide intuitive function calls to the application software.
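
A minimal sketch of that ISR-plus-buffer pattern for a hypothetical UART; the UART_DATA register address and the ISR hookup are placeholders that on a real part come from the datasheet and vendor headers:

#include <stdint.h>

/* Placeholder memory-mapped data register; the real address and name
 * come from the MCU's datasheet / vendor header. */
#define UART_DATA (*(volatile uint8_t *)0x4000C000u)

#define RX_BUF_SIZE 64u
static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head, rx_tail;

/* ISR: the hardware side fills the buffer a byte at a time... */
void uart_rx_isr(void)
{
    uint8_t next = (uint8_t)((rx_head + 1u) % RX_BUF_SIZE);
    if (next != rx_tail) {            /* drop the byte if the buffer is full */
        rx_buf[rx_head] = UART_DATA;  /* reading usually clears the IRQ flag */
        rx_head = next;
    }
}

/* ...and the application empties it through a simple, intuitive call.
 * Returns 1 if a byte was available, 0 otherwise. */
int uart_read_byte(uint8_t *out)
{
    if (rx_head == rx_tail)
        return 0;
    *out = rx_buf[rx_tail];
    rx_tail = (uint8_t)((rx_tail + 1u) % RX_BUF_SIZE);
    return 1;
}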
If you are using an AVR MCU like the ATmega, you can use V-USB (https://www.obdev.at/products/vusb/index.html) on parts that have no USB hardware: it implements low-speed USB (including HID) in software, handling the signaling in interrupts with the USB D+ and D- lines wired to digital I/O pins of the MCU.
The ATmega*U2 parts have their own USB hardware and can implement HID natively.
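
The V-USB skeleton itself is small. A minimal sketch of the main loop, assuming usbconfig.h has been set up for your wiring and clock (F_CPU must match):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/delay.h>   /* needs F_CPU defined, matching usbconfig.h */
#include "usbdrv.h"       /* from the V-USB distribution */

/* Minimal control-request handler; returning 0 means "nothing to send". */
usbMsgLen_t usbFunctionSetup(uchar data[8])
{
    (void)data;
    return 0;
}

int main(void)
{
    usbInit();
    usbDeviceDisconnect();   /* force the host to re-enumerate us */
    _delay_ms(250);
    usbDeviceConnect();
    sei();                   /* V-USB handles USB signaling in interrupts */
    for (;;)
        usbPoll();           /* must be called regularly, every few ms */
}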
