Raw access to HID devices in OS X - device-driver

What is the simplest way to get raw access to HID devices on OS X?
I've been looking through the IOKit examples, but even opening a device seems needlessly complex, involving multiple callbacks and includes from half a dozen libraries.
libusb is available for OS X, but the kernel grabs all HID devices for exclusive access. I have been getting strange behavior while trying to use a codeless .kext to block it from associating with my device: it prevents the kernel from grabbing the device initially, but any call to configure the device seems to cause the kernel to grab it away from under the little Python libusb script I am testing with.
Basically, I have a HID device that just streams data. I want to open it for (ideally exclusive) access and just get the data stream.
All the examples I have found in the IOKit docs are really complex compared to the ~8 lines it would take in libusb. There must be a simpler way that isn't a third-party library.
It's worth noting that I am entirely unfamiliar with programming for OS X in any capacity.
Python support would be a nice plus.

Unfortunately there is no way other than using the HID Manager APIs. Raw access to HID devices in OS X is not supported.
The documentation makes this clear:
HID family. Through the HID Manager, the HID family provides a device
interface for accessing a variety of devices, including joysticks and other
game devices, audio devices, non-Apple displays, and UPS (uninterruptible
power supply) devices.
Raw access through POSIX APIs is only available for storage, network, and serial devices:
Using POSIX APIs
For each storage, network, and serial device the I/O Kit dynamically
creates a device file in the file system’s /dev directory when it discovers
a device and finds a driver for it, either at system startup or as part of
its ongoing matching process. If your device driver is a member of the I/O
Kit’s Storage, Network, or Serial families, then your clients can access your
driver’s services by using POSIX I/O routines.
So you can either use the HID Manager APIs directly, or you can use libusb or (as the other answer mentions) hidapi, which are essentially wrapper libraries over the HID Manager APIs. The benefit of using these libraries is that they abstract most of the low-level calls, making them easier to use.
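If you do call the HID Manager directly, the skeleton is smaller than the IOKit samples make it look. Here is a minimal, untested sketch (error handling omitted; the 64-byte buffer is an assumption, size it to your device's maximum input report length):

    // Build with: clang hidread.c -framework IOKit -framework CoreFoundation
    #include <IOKit/hid/IOHIDManager.h>

    static uint8_t report_buf[64]; // must outlive the callback registration

    // Called on the run loop with each raw input report from the device.
    static void report_cb(void *ctx, IOReturn res, void *sender,
                          IOHIDReportType type, uint32_t report_id,
                          uint8_t *report, CFIndex len) {
        // `len` bytes of raw report data are in `report`
    }

    // Called once per matching device; hook up the report callback.
    static void matched_cb(void *ctx, IOReturn res, void *sender,
                           IOHIDDeviceRef dev) {
        IOHIDDeviceRegisterInputReportCallback(dev, report_buf,
                                               sizeof(report_buf),
                                               report_cb, NULL);
    }

    int main(void) {
        IOHIDManagerRef mgr = IOHIDManagerCreate(kCFAllocatorDefault,
                                                 kIOHIDOptionsTypeNone);
        IOHIDManagerSetDeviceMatching(mgr, NULL); // NULL = match all HID devices
        IOHIDManagerRegisterDeviceMatchingCallback(mgr, matched_cb, NULL);
        IOHIDManagerScheduleWithRunLoop(mgr, CFRunLoopGetCurrent(),
                                        kCFRunLoopDefaultMode);
        IOHIDManagerOpen(mgr, kIOHIDOptionsTypeSeizeDevice); // request exclusive access
        CFRunLoopRun(); // report_cb fires from here
        return 0;
    }

In practice you would pass a matching dictionary (vendor/product ID keys) instead of NULL; kIOHIDOptionsTypeSeizeDevice asks for the exclusive access the question wants.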

Take a look at the hidapi Mac backend:
http://www.signal11.us/oss/hidapi/
https://github.com/signal11/hidapi
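With hidapi, the streaming case from the question comes down to a handful of calls. A rough sketch (the vendor/product IDs are placeholders for your device's):

    #include <stdio.h>
    #include <hidapi/hidapi.h>

    int main(void) {
        if (hid_init() != 0)
            return 1;
        // Placeholder VID/PID; substitute your device's IDs.
        hid_device *dev = hid_open(0x1234, 0x5678, NULL);
        if (!dev) { hid_exit(); return 1; }

        unsigned char buf[64];
        for (;;) {
            int n = hid_read(dev, buf, sizeof(buf)); // blocks until a report arrives
            if (n < 0)
                break; // device went away or read error
            // process n bytes of report data in buf
        }
        hid_close(dev);
        hid_exit();
        return 0;
    }

There are also Python bindings built on hidapi floating around, which would cover the "Python support would be a nice plus" part.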

Related

How to access Xilinx AXI DMA from Linux?

I'm a software developer but I'm a newbie to embedded software development.
I have a Zynq UltraScale board that has an AXI DMA in its hardware design, and I want to access this DMA from Linux.
I know I should use the DMA Engine to access DMA in Linux, and I found the Xilinx DMA driver at the path below, but I can't add these files to my Qt project without errors; I get header-file-not-found errors.
drivers/dma/xilinx/xilinx_dma.c
I have scattered information about the DMA driver, the device tree, and the DMA Engine, but I know nothing about how to use these to access the hardware DMA.
I built a PetaLinux project and added the DMA Engine and DMA test client to its kernel.
I don't know whether adding the DMA Engine to the PetaLinux project is enough, or whether I need a driver as well.
I don't know whether adding the hardware specification (via the .xsa and .bit files) to the PetaLinux project is enough, or whether I should also add a device tree entry so Linux detects the DMA.
I am looking for a step-by-step tutorial on how to set up Linux and Qt Creator for accessing the DMA, or at least a clear roadmap to my target.
Thank you in advance.
First of all, you are facing errors when adding xilinx_dma.c to the Qt project because this file is meant to be compiled as part of the kernel or as a kernel module.
Adding the DMA Engine to PetaLinux is not enough to work with DMA from user space. The DMA Engine only provides a standardized API to let different DMAs be integrated into the kernel. You need to add a client driver as well. Xilinx, as far as I know, provides a simple client driver called the DMA Proxy driver. It also includes some simple examples that show how you can access DMA from user space. However, if your application needs high bandwidth, you should probably consider other options.
There is also an open-source client driver for AXI DMA which achieves higher bandwidth than the DMA Proxy driver. Its user-space API also allows you to register a callback function to be called whenever a transaction is finished.
The third option is to implement the driver in user space. This can be done by defining the DMA as a UIO device in the device tree and accessing its register map directly from user space. In this case, you need to allocate some contiguous memory blocks in kernel space to avoid complications with the MMU, which cannot be dealt with from user space.
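For the UIO route, the user-space side is essentially an open() plus an mmap() of the register window. A rough sketch, assuming the DMA comes up as /dev/uio0 and a 64 KB register span (the 0x04 status-register offset is from the AXI DMA register map; verify against your configuration):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define DMA_REG_SPAN 0x10000 /* register window size: an assumption */

    int main(void) {
        int fd = open("/dev/uio0", O_RDWR); /* assumes the DMA is uio0 */
        if (fd < 0) { perror("open"); return 1; }

        void *base = mmap(NULL, DMA_REG_SPAN, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        volatile uint32_t *regs = base;
        /* e.g. read the MM2S DMA status register at offset 0x04 */
        printf("MM2S status: 0x%08x\n", (unsigned)regs[0x04 / 4]);

        munmap(base, DMA_REG_SPAN);
        close(fd);
        return 0;
    }

The driving of descriptors and the contiguous buffers still have to come from somewhere (e.g. a reserved-memory region), which is the MMU complication mentioned above.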

ALSA, TinyALSA support in Android Things for Raspberry Pi 0.5.1-devpreview

The 0.5.1-devpreview BSP for RPI3 comes with libtinyalsa.so and libalsautils.so, but seemingly no adb shell command-line support for audio.
We are designing a custom audio board (with audio processor) for use with Android Things and Raspberry Pi, and we would typically use ALSA utilities and custom kernel drivers for accessing this board under Raspbian.
It is possible the default Android Things I2S peripheral drivers and Peripheral Manager support the stream interfaces we need (the same way the VoiceHat drivers were wrapped), but we have little to no information on the default drivers in the RPI3 BSP, and we don't have any information on how to override the default drivers in Android Things without a distro rebuild.
It seems silly to write a native C++ low-level peripheral driver when so many audio processor companies already provide ALSA-ready ASoC drivers for use in the device source tree.
What are the best practices for writing your own audio driver for Android Things?
The VoiceHat driver is one example of how to do a userspace audio driver.
If you're using a custom audio board, you should be aware of the audio chip the board uses. Looking at that chip's datasheet, you should be able to use the same peripheral I/O (UART, GPIO, I2C, SPI) to configure the connection and read/write data over the I2S bus.
In the Google Assistant sample, the app registers the VoiceHat at the beginning of the activity and unregisters it at the end of the activity.

Is there an API to detect CPU features on iOS?

I have some cryptography code that has multiple implementations, selecting which implementation at runtime based on the features of the CPU it is running on. Porting this has been straightforward so far, with Windows, Linux and Android being easy.
But in iOS it does not seem easy. While x86 CPUs have the cpuid instruction to detect features, even from user mode, the ARM equivalent is privileged. It is not possible to detect CPU features on ARM without OS cooperation.
In Windows, IsProcessorFeaturePresent works for detecting ARM CPU features. On Linux, /proc/cpuinfo is the way to go. Android has a cpufeatures library (and /proc/cpuinfo still works anyway). Mac OS has sysctlbyname with hw.optional.*.
But what about iOS? The iOS kernel has hw.optional.* like Mac OS, but it is locked down in iOS 10. (Thus, my question is not a duplicate of this one, as circumstances have since changed.) Also, getting a list of those seems difficult - Apple's open source web site runs an automated process to scrub all ARM-specific code from the OS source they give out publicly in order to make jailbreakers work harder.
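For reference, the macOS detection path I'd like to replicate is just sysctlbyname over the hw.optional.* keys; the sketch below compiles on macOS, and the question is exactly whether these keys still answer on iOS:

    #include <stdio.h>
    #include <sys/sysctl.h>

    // Returns 1/0 for a feature key, or -1 if the key is unavailable
    // (which is what iOS 10+ reportedly returns for hw.optional.*).
    static int has_feature(const char *name) {
        int val = 0;
        size_t len = sizeof(val);
        if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
            return -1;
        return val != 0;
    }

    int main(void) {
        printf("neon: %d\n", has_feature("hw.optional.neon"));
        return 0;
    }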
You may take a look at the iOS Security Guide for business.
Apparently, if you can get the CPU series name, you can deduce from the documentation which cryptographic components it has and how they work.
You may note that some devices have a Secure Enclave:
The Secure Enclave is a coprocessor fabricated in the Apple T1, Apple
S2, Apple S3, Apple A7, or later A-series processors.
Page 6
And you may deduce that any older CPU version has none.
Every iOS device has a dedicated AES-256 crypto engine built into the
DMA path between the flash storage and main system memory
[...]
On T1, S2, S3, and A9 or later A-series processors, each Secure Enclave
generates its own UID (Unique ID).
Page 12
The method for accessing cryptographic components will depend on which kind of data or storage you want to access (local data storage / sync / home data / app / Siri / iCloud / secure notes / keybag / payment / Apple Pay / VPN / Wi-Fi password / SSO / AirDrop / etc.).
Could you specify which part of the cryptography you need to access in your use case?
You may also take a look here and here to get additional information relative to iOS native security and cryptography API.
The reason behind iOS blocking certain hardware information is very simple. Please read about the Apple A11 processor. There is so much in it, including things that will never be documented.
Apple simply does not want developers to be aware of it and use it. I would not expect any progress on this topic.
The only way forward at this moment is to bypass the OS and talk directly to the hardware. You would be amazed what is inside and how quickly it responds!

How to monitor packets using Snort features?

I want to create a network intrusion detection system as an iOS application. The main function is to allow the user to select a home network (maybe prompt them to simply enter the IP address) and to monitor the packets; if there is anything suspicious, we need to alert the user via push notification or email. I wanted to use the features and functions of Snort, an open-source network intrusion detection system.
Any suggestions or sample code? Where should I start?
VMs do not have native hardware access, which is necessary for monitor mode. Maybe IOMMU PCI passthrough or bridged devices might work. It is probably possible to compile the iOS kernel with a module that works for the wireless NIC. I don't think it's a proprietary chip specific to Apple, because a multi-technology RF chip wouldn't be cost-effective at all. I'm just not sure whether the filesystem blocks access in the OS framework. I have tried to compile Linux/iOS ARM packages natively in the shell from the aircrack-ng source, but have not had any luck. Maybe someone would have better luck actually cross-compiling a package and sideloading it somehow.
I don't think this is possible for multiple reasons:
You wouldn't be able to compile snort for iOS.
In order to run Snort you have to have the interface (NIC) in promiscuous mode, which I really don't think you can do on an iOS device (iPhone, iPad, etc.). I have never really looked into it, but Apple probably locks this down and restricts it for security purposes, so if you could do it you'd likely have to jailbreak the device first. It's not even possible to put the wifi card of an Apple laptop into monitor mode, which is a similar situation.
There are a lot of dependencies for Snort, most importantly the DAQ. You would probably only be able to monitor the wifi interface (even this might not be possible), not the interface used for the cellular network, as that probably requires a different DAQ than standard Ethernet NICs.
This very likely is not possible on iOS; if it is, it would be VERY difficult to pull off, and even if you did, the use case isn't really good. Even if you could get a DAQ for the cellular card, I don't know if promiscuous mode even exists there, and all of the traffic on the cellular network is encrypted, so inspecting it with Snort would be pointless. If you could do it for the wifi traffic, it's probably not worth the effort honestly, especially since almost all traffic nowadays is encrypted; you'd have to decrypt it first, which certainly isn't possible.
In view of Johnjg12's comments, I am wondering about your goal. If you want to make a NIDS, you can make it OS independent anyway. If you want to consider only a HIDS that monitors packets destined to it, we don't need it to be in promiscuous mode (a comment on Johnjg12's response). So, now it is something to do with Snort on iOS. I am wondering if we can do it in a VM and then turn on its promiscuous mode? Having said that, I came across this link: https://www.securemac.com/macosxsnort.php

Writing Device Drivers for a Microcontroller(any)

I am very enthusiastic about writing device drivers for a microcontroller (like PIC, Atmel, etc.).
Since I am a newbie in this controller-coding area, I just want to know whether writing device drivers for a controller is the same as writing them for Linux (or any other OS)?
Also, can anyone suggest an online tutorial on building device drivers for the same?
Thanks,
If you are thinking about developing device drivers to interface your device with a host computer (probably using USB), note that most microcontrollers nowadays implement standard device classes that rely on the host's native drivers.
A concrete example:
If you use a PIC18F4555, you can use the regular HID (human interface device) Windows driver to communicate with your microcontroller (provided you implement it correctly). There is no need to develop any driver.
Writing a device driver for an MCU is a pretty far cry from writing one for an OS. Most MCUs won't have an OS running on them at all. You'll generally end up writing some low-level interrupt service routines (ISRs) that fill up buffers, which your application software will then empty. You don't have to fit into any device-driver paradigm that an OS has defined. You basically have to read the datasheet for the device you want to interface with and read and write its registers over whatever interface it uses (e.g. SPI, I2C, UART, etc.). Ultimately the device driver ought to provide intuitive function calls to the application software.
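As a concrete illustration of that ISR-fills-a-buffer pattern, here is a sketch of a UART receive path; the register name and address are hypothetical, and the ISR hookup syntax depends on your toolchain (e.g. ISR(USART_RX_vect) on AVR):

    #include <stdint.h>

    /* Hypothetical memory-mapped UART receive-data register; take the
       real address from your MCU's datasheet. */
    #define UART_DATA (*(volatile uint8_t *)0x4000C000u)

    #define BUF_SIZE 64u /* power of two, so wrapping is a cheap mask */
    static volatile uint8_t rx_buf[BUF_SIZE];
    static volatile uint8_t head, tail;

    /* Interrupt service routine: runs on every received byte. */
    void uart_rx_isr(void) {
        uint8_t next = (head + 1u) & (BUF_SIZE - 1u);
        if (next != tail) {          /* drop the byte if the buffer is full */
            rx_buf[head] = UART_DATA;
            head = next;
        }
    }

    /* Called from the application's main loop; -1 means no data yet. */
    int uart_read_byte(void) {
        if (head == tail)
            return -1;
        uint8_t b = rx_buf[tail];
        tail = (tail + 1u) & (BUF_SIZE - 1u);
        return b;
    }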
If you are using an AVR MCU like the ATmega, then for parts that lack native USB hardware you can use V-USB (https://www.obdev.at/products/vusb/index.html), which implements USB (including HID) in software and handles the signalling by connecting the USB D+ and D- pins to digital I/O ports of the MCU.
The ATmegaU2 parts have their own USB hardware and can act as HID devices natively.
