I'm trying to debug the sound system on an Amlogic device. amixer and alsamixer aren't working as expected, and amixer can crash the system. What I'm struggling with is this: the drivers provide methods for accessing the hardware registers by constructing a snd_kcontrol object, as described in Writing an ALSA Driver on the ALSA website. But amixer cset calls snd_ctl_elem_write from control.c, which refers to element_write in a snd_ctl_t object.
I can't see any link between the defined snd_kcontrol and any snd_ctl_t object, so I can't see how amixer is supposed to write to the hardware. How is it normally done?
In user space, a control device is represented by snd_ctl_t, which contains the file handle of the device node. element_write points to snd_ctl_hw_elem_write(), which issues an ioctl on that node.
In the kernel, an opened device file is represented by a struct snd_ctl_file, which is linked to the struct snd_card; the ioctl handler looks up the matching struct snd_kcontrol by its element ID and invokes its put() callback.
I'm looking to create an iOS app that communicates with a Windows app (also made by me). I've created some basic iOS apps in the past but I'm looking for assistance on the specifics of communicating with a Windows device.
Much like how a "remote mouse" application would work (where you install an iOS app and download the partnered Windows installer, which then talk together) I am looking to have my app search and then communicate with a program installed on a local network.
Is there a framework or recommended path to take when designing an app of this kind? The app itself will simply relay information available to the program installed in Windows as a proof of concept and then extra functionality will be added later. I'm primarily focusing on creating a working foundation where the iOS app and Windows program speak over the LAN.
I have implemented something similar to what you are describing using TCP sockets. A socket is a stream of raw bytes: once a connection is established between the two ends, each side may send or receive bytes, but there are no built-in rules defining what those bytes mean. They could represent anything, so you must define your own protocol, i.e. the structure of the messages you send and receive, in your code logic to end up with meaningful messages.
As an example of a simple protocol: to send a message (a string, an integer, or anything else), first send four bytes representing an integer that specifies the size of the actual message (which has been serialized to bytes), then send the message bytes themselves. At the other end, read four bytes, which you know represent the message size; now you know exactly how many bytes make up a full message, so you keep reading until you have received all of them. After receiving a message, you wait for another four bytes carrying the next message size, and so on.
The message itself can be any serializable data type. Most programming languages support serializing primitive types by default. If you want to serialize a custom data type, one option is to encode your message (for example, an instance of a struct that conforms to the Codable protocol) using Swift's JSONEncoder; the other end then expects JSON bytes, which it can decode back into the original object. Another good option for serializing structured data is protobuf.
You may have a look at this class written in C#; it is a similar implementation of what I have described here. It differs in some details, but you will get the idea.
Note: the default byte order (endianness) may differ depending on the platform and programming language; it can be little-endian or big-endian.
CocoaAsyncSocket is a good library for dealing with sockets in iOS.
I would like to know if it is possible to cast audio taken directly from the iOS device's microphone to the receiver, live.
I've downloaded all the Git example projects, and all of them use a "loadMedia" method to start the casting. Here is one of them:
- (NSInteger)loadMedia:(GCKMediaInformation *)mediaInfo
autoplay:(BOOL)autoplay
playPosition:(NSTimeInterval)playPosition;
Can I follow this approach to do what I want? If so, what's the expected delay?
Thanks a lot
Echo is likely if the device (iOS, Android, or Chrome) is in range of the speakers. That said:
Pick a fast codec that is supported, such as CELT/Opus or Vorbis.
I haven't tried either of these, but they should be possible:
1. Implement your own protocol using CastChannel that passes the binary data. You'll want to do some simple conversion of the stream from binary to something a bit more friendly. Take a look at Intro to Web Audio for using AudioContext.
2. Set up a trivial server on your device to stream from, then tell the Receiver to just access that local server.
I'm using CoreMIDI without problems, but I also want to support an external USB interface.
I've tried an app called MIDI Monitor, which indeed finds my USB interface when connected.
The problem is how to enable this interface through my own app. As stated in the MIDIGetNumberOfExternalDevices documentation, "Their presence is completely optional, only when a UI (such as Audio MIDI Setup) adds them."
How am I supposed to add them?
Best Regards.
"External devices" are not what you want. Those are the things that a user can create in Audio MIDI Setup in OS X, to represent a synthesizer or keyboard or other device that is connected to the computer via a MIDI cable. The system does not automatically create them. (It can't, because MIDI is terribly primitive and has no device discovery protocol.)
External devices are only for the user's benefit in naming and arranging things. They can't be used to do MIDI input or output. They're especially useless in iOS, since there's no Audio MIDI Setup app.
Instead, use MIDIGetNumberOfSources and MIDIGetSource to find sources of MIDI data.
To actually get input, use MIDIInputPortCreate to create an input port, then MIDIPortConnectSource to connect one or more sources to that port. Then your port's MIDIReadProc will be called when MIDI comes in.
Similarly, for output, you would use MIDIGetNumberOfDestinations and MIDIGetDestination to find destinations, create an output port using MIDIOutputPortCreate, and MIDISend to send data through a port to a destination.
For reference, see the MIDIServices documentation.
[OS: WinXP on VirtualBox, HostOS: win7]
We are developing a mini-filter driver and we are trying to block mounting of usb devices based on some conditions.
The mini-filter watches for IRP_MJ_VOLUME_MOUNT; whenever a USB drive is inserted, the pre-callback asks userland, via FltSendMessage, whether or not to allow mounting the drive.
In userland, after FltGetMessage and before FltReplyMessage, certain conditions are checked and the corresponding value is sent back to the driver.
This all works, but we are experiencing two problems, or let's say inconveniences.
1. The condition checking takes about 4-5 seconds [data is sent and received over the network]. During this period, Windows Explorer just hangs, and any queued actions, such as navigation, are performed only once FltReplyMessage is called. If I click anywhere, such as on the Start menu, nothing happens until FltReplyMessage is called. Other applications such as VLC function normally [i.e., the disk can be accessed].
2. When the USB drive is not allowed to mount the volume, it keeps retrying the mount several times!
The workaround we used is to maintain a list of recently inserted devices and reject them if the GUID is present in the list.
I read somewhere that the mount point can be deleted using DeleteVolumeMountPoint, and that if we need to allow that device in the future we must delete a registry key containing the device's unique ID, which can be obtained by sending IOCTL_MOUNTDEV_QUERY_UNIQUE_ID to the device. We tried this but were unable to obtain the unique ID correctly. [We could not allocate enough memory for the MOUNTDEV_UNIQUE_ID structure. We tried new and malloc(enough size), but sizeof(varUniqueID) then returned just 4, and calling DeviceIoControl with that resulted in a "More data is available" error. We are doing this in userland. Should it be done in the kernel?]
Whew! a long post!
We would really appreciate any help we can get!
Cheers!
As for your first concern, there is not much you can do unless you can cache the needed data in the driver, keyed by volume unique IDs.
That way there will be only one Flt call per mount.
I am not sure about your requirements, but if the data needed to make the decision lives across the network, you will suffer this delay one way or another. It is crucial for the driver to cache its own data so that it does not even have to call into user mode after the first mount attempt of a given device.
As for the second point, it can certainly be done in either user mode or kernel mode. Could you provide a code snippet so we can check it for mistakes?
Cheers,
Gabriel
I'd like to create a virtual HID device (emulate it with a driver).
It must be visible to clients that implement standard HID detection:
1. Call HidD_GetHidGuid() – get the HID device class GUID.
2. Call SetupDiGetClassDevs() – get a handle to a set of devices which implement the HID interface.
3. Call SetupDiEnumDeviceInterfaces() – for each device in the returned set, obtain the interface information for all exposed HID interfaces.
4. Call SetupDiGetDeviceInterfaceDetail() – for each interface obtained in the previous call, get the detailed information block for that interface. This detailed information includes the string that can be passed to CreateFile() to open a handle to the device.
5. Call SetupDiDestroyDeviceInfoList() – free up the device information set that was obtained in the call to SetupDiGetClassDevs().
The device should also support reading, so CreateFile / ReadFile would return data supplied by me from the driver.
I don't really know where to begin, as I don't have a lot of experience in kernel development. :(
Some people have had luck with the vmulti project as a base: http://code.google.com/p/vmulti/
You should write a driver, then use DevCon (the Device Console tool) with the install option.
cmdInstall:
A variation of cmdUpdate to install a driver when there is no associated hardware. It creates a new root-enumerated device instance and associates it with a made up hardware ID specified on the command line (which should correspond to a hardware ID in the INF). This cannot be done on a remote machine or in the context of Wow64.
http://code.msdn.microsoft.com/windowshardware/DevCon-Sample-4e95d71c
http://msdn.microsoft.com/en-us/library/windows/hardware/ff544707%28v=vs.85%29.aspx
http://msdn.microsoft.com/en-us/library/windows/hardware/ff544780%28v=vs.85%29.aspx
See the vhidmini DDK sample driver. It was in the version 1830 DDK but is not in the latest version. Alternatively, see the hidfake sample in Oney's book.
See http://www.microsoft.com/mspress/books/sampchap/6262.aspx