What's the difference between .setPreferredInput(...) and .setPreferredDataSource(...)? - ios

What's the difference between .setPreferredInput(_ inPort: AVAudioSessionPortDescription?) and .setPreferredDataSource(_ dataSource: AVAudioSessionDataSourceDescription?)?
Both functions can be used to choose the active or preferred microphone.
By calling:
import AVFoundation
let availableInputs = AVAudioSession.sharedInstance().availableInputs
I get an array containing the built-in microphones, possibly an earphone microphone if one is connected, and so on. Each value in this array can be passed as the single argument to setPreferredInput(_:); in the data-source case, the argument comes from a port's dataSources array, e.g. availableInputs![0].dataSources.
Is there any difference between them? How and when should each one be used? .setPreferredDataSource(...) is more generic and can be used for input and output devices, but besides that, I can't see much difference between them.
I found an explanation in the documentation, but it doesn't help much:
Setting a Preferred Input:
To discover built-in or connected input ports, use the audio session's availableInputs property. This property returns an array of AVAudioSessionPortDescription objects that describe the device's available input ports. Ports can be identified by their portType property. To set a preferred input port (built-in microphone, wired microphone, USB input, and so on) use the audio session's setPreferredInput:error: method.
Setting a Preferred Data Source:
Some ports, such as the built-in microphone and some USB accessories, support data sources. Apps can discover available data sources by querying the port description's dataSources property. In the case of the built-in microphone, the returned data source description objects represent each individual microphone. Different devices return different values for the built-in mic. For instance, the iPhone 4 and iPhone 4S have two microphones: bottom and top. The iPhone 5 has three microphones: bottom, front, and back.
Individual built-in microphones may be identified by a combination of a data source description's location property (upper, lower) and orientation property (front, back, and so on). Apps may set a preferred data source by using the setPreferredDataSource:error: method of an AVAudioSessionPortDescription object.
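To make the distinction concrete, here is a minimal sketch (my own illustration, not from the question) that first selects a port with setPreferredInput(_:) and then selects a data source on that port with setPreferredDataSource(_:). The choice of the built-in microphone port and of the front-facing data source is just an assumption for the example, and it uses the current Swift names of the AVAudioSession constants:

import AVFoundation

let session = AVAudioSession.sharedInstance()

do {
    // setPreferredInput(_:) chooses WHICH PORT feeds the session
    // (built-in mic vs. headset mic vs. USB input, and so on).
    if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtInMic)

        // setPreferredDataSource(_:) chooses WHICH DATA SOURCE on that port,
        // e.g. the front, bottom, or back microphone of the built-in mic port.
        if let front = builtInMic.dataSources?.first(where: { $0.orientation == AVAudioSession.Orientation.front }) {
            try builtInMic.setPreferredDataSource(front)
        }
    }
} catch {
    print("Could not set preferred input / data source: \(error)")
}

So setPreferredInput(_:) operates at the level of whole ports (which physical input feeds the session), while setPreferredDataSource(_:) picks among the data sources that a single port exposes.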

Related

Is it possible for a CAN bus node to have multiple identifiers?

I want to use CANopen, and with the pre-defined connection set a device can have more than one COB-ID (as it has different function codes).
I want to know whether the CAN bus frame identifier uses CANopen's COB-ID as-is.
A CANopen node cannot use multiple node-IDs at the same time, but it is technically possible to reconfigure the node-ID. According to CiA 301 (CANopen application layer and communication profiles), during the NMT Initialization state the parameters of the manufacturer-specific profile area and of the standardized device profile area are set to their power-on values.
One way to implement this is to assign a default node-ID to the CANopen node, then reserve an entry in the object dictionary that can be written via SDO to modify the node-ID after reset or power-on. Note that if you want to fully follow the CANopen standard, changing the node-ID also changes the CAN-ID allocation, i.e. the identifiers used in the other NMT states for the other communication objects such as SDO, PDO, etc.

How can I tell if a peripheral is connected to GPIO?

I want to be able to detect when a peripheral sensor is NOT connected to my Raspberry Pi 3.
For example, if I have a GPIO passive infrared sensor.
I can get all the GPIO ports like this:
PeripheralManagerService manager = new PeripheralManagerService();
List<String> portList = manager.getGpioList();
if (portList.isEmpty()) {
    Log.i(TAG, "No GPIO port available on this device.");
} else {
    Log.i(TAG, "List of available ports: " + portList);
}
Then I can connect to a port like this:
try {
    Gpio pir = new PeripheralManagerService().openGpio("BCM4");
} catch (IOException e) {
    // not thrown in the case of an empty pin
}
However, even if the pin is empty I can still connect to it (which technically makes sense, as GPIO is just binary, on or off). There doesn't seem to be any API for this, and I can't think of a way to logically differentiate between a pin that has a peripheral sensor connected and one that is "empty".
Therefore, at the moment there is no way for me to assert programmatically that my sensors and circuit are set up correctly.
Does anyone have any ideas? Is it even possible from an electronics point of view?
Reference docs:
https://developer.android.com/things/sdk/pio/gpio.html
There are lots of ways to do "presence detection" electrically, but nothing that you will find intrinsically in the SoC. You wouldn’t normally ask a GPIO pin if something is attached—it would have no way to tell you that.
Extra GPIO pins are often used to detect if a peripheral is attached to a connector. The plug for some sensor could include a “detect” line that is shorted to ground and pulls the GPIO low when the sensor is attached, for example. USB and SDIO do something similar with some dedicated circuitry in the interface.
You could also build more elaborate detection circuits using things like current sensing, but they would inevitably have to put out a binary signal that you capture through a dedicated GPIO.
This is easier to achieve for serial peripherals, since you can usually send a basic command and verify that you get a response.
Detection using solely the input line can be tough. First, you'd want to narrow the scope of the problem. Treat as not-present the condition of a sensor not being connected, the sensor being connected but not responding, or the sensor responding in an uncharacteristic manner.
So, if it is a digital sensor, then communicating with the sensor may be enough to tell if it is present or not (especially if checksums or parity bits are involved).
Some analog sensors also have specific specs on how they behave when triggered. You can use deviation from those specs to determine that the sensor is not present.
If you have a digital sensor without any error checking on its output, where you clock out data (so all 0s or all 1s is valid) or the output is just a binary 1 or 0, then you'd need external help. The same goes for most analog sensors.
This external help would be something where you put the system in a known controlled state, press a button, and it then checks the sensors for output within a specific range. To be absolutely sure, you'd want at least two different states, to ensure your digital or analog inputs didn't happen to be stuck at the correct state for your test.
Just about any other method would be external to the system. Using additional IO to "detect" a sensor could help increase confidence that the sensor is there, but you could get false positives where all you've learned is that "something" is there - not necessarily the sensor you expect.

Interpret Characteristic Properties (iOS and BLE)

I am reading characteristic properties from a BLE device on my iPhone.
However, some of the properties I am seeing (like 0xA, 0x22) are not in the enumerated list that Apple provides. Are these properties a combination of 2 or more enumerated values? Or are these custom properties from the manufacturer? Need guidance on this.
As you can read in the documentation:
Values representing the possible properties of a characteristic. Since characteristic properties can be combined, a characteristic may have multiple property values set.
In other words, a characteristic may have more than one property. That makes sense as you can, for example, have a characteristic which can be read (CBCharacteristicPropertyRead) and written to (CBCharacteristicPropertyWrite).
In this case the value of CBCharacteristic's properties would be the bitwise OR of CBCharacteristicPropertyRead and CBCharacteristicPropertyWrite, which is 0xA.
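For reference, here is a small sketch (mine, not part of the original answer) that decodes such a raw value by testing each flag of the CBCharacteristicProperties option set; the describe helper is just an illustrative name:

import CoreBluetooth

// CBCharacteristicProperties is an OptionSet, so a raw value such as 0xA or 0x22
// is simply the bitwise OR of the individual property flags.
func describe(_ properties: CBCharacteristicProperties) -> [String] {
    var names: [String] = []
    if properties.contains(.broadcast)            { names.append("broadcast") }            // 0x01
    if properties.contains(.read)                 { names.append("read") }                 // 0x02
    if properties.contains(.writeWithoutResponse) { names.append("writeWithoutResponse") } // 0x04
    if properties.contains(.write)                { names.append("write") }                // 0x08
    if properties.contains(.notify)               { names.append("notify") }               // 0x10
    if properties.contains(.indicate)             { names.append("indicate") }             // 0x20
    return names
}

print(describe(CBCharacteristicProperties(rawValue: 0xA)))  // ["read", "write"]
print(describe(CBCharacteristicProperties(rawValue: 0x22))) // ["read", "indicate"]

By the same arithmetic, the 0x22 from the question is read (0x02) combined with indicate (0x20), so both values are standard combinations rather than custom manufacturer properties.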

Discriminate between MSDU and MMPDU when using Native WiFi to retrieve the number of incoming frames?

I am having some problems while using the Microsoft Native WiFi API.
I can use the queryinterface function to get the current value of the counter for received frames.
But the statistic covers not only MSDUs but also MMPDUs and MPDUs, which makes the result inaccurate.
As the linked documentation says:
The members of this structure are used to record MAC-level statistics for:
MAC service data unit (MSDU) packets.
MAC protocol data unit (MPDU) frames.
MPDU frame counters must include all MPDU fragments sent for an MSDU packet.
Management MPDU (MMPDU) frames.
So, is there any way to obtain only the number of received MSDUs? Thanks!

Why is DirectShow dragging in unnecessary intermediate filters when making multiple input connections to my DirectShow Transform filter?

I have a DirectShow Transform filter written in Delphi 6 using the DSPACK component library. It is a simple audio mixer that creates a new input pin whenever a new connection is attempted. I say simple because once its media format is set, all connections to its input pins or its singular output pin are forced to conform to that media format. I build the filter chain manually, making all pin connections explicitly myself. I do not use any of the "intelligent rendering" calls, unless there is some way to trigger that unwanted behavior (in my case) accidentally.
NOTE: The Capture Filter is a standard DirectShow filter external to my application. My push source audio filter and simple audio mixer filters are being used as private, unregistered filters and are internal to my application.
I am having a weird problem that only occurs when I try to make multiple input connections to my mixer, which does indeed accept them. Currently, I am attempting to connect both a Capture Filter and my custom Push Source audio filter to my mixer filter. Whenever I try to do that the second upstream filter connection fails. Regardless of whether I connect the Capture Filter first or Push Source audio filter first, the second upstream filter connection always fails.
The first test I ran was to try connecting just the Capture Filter to the mixer. That worked fine.
The second test I ran was to try connecting just the Push Source audio filter to the mixer. That worked fine.
But as soon as I try to do both, I get a "no combination of intermediate filters could be found" error. After several hours of deep digging into the media negotiation calls hitting my filter from the graph builder, I found the problem. For some reason, the filter graph is dragging the ancient "Indeo (R) Audio Software" codec into the chain.
I discovered this because, despite the fact that the codec did have a media format that matched my filter in almost every regard (major type, sub type, format type, wave format parameters), it had an extra 2 bytes at the end of its pbFormat data member, and that was enough to fail the equality test, since that test compares the source and target pbFormat areas using the cbFormat value of each media type. The Indeo codec has a cbFormat value of 20 while my filter has a cbFormat value of 18, which is the size of a _tWAVEFORMATEX data structure. In a way it's a good thing the Indeo pbFormat has that odd size, because the first 18 bytes of its 20-byte area were exactly equal to the pbFormat area of my mixer filter's supported media type. Without that anomaly I never would have known that ancient codec was being dragged in. I'm surprised it's being pulled in at all, since it has known exploits and vulnerabilities. What is most confusing is that this is happening on my mixer filter's output pin, not one of the input pins, and I have not made a single downstream connection yet while building up my pin connections.
Can anyone tell me why DirectShow is trying to drag in that codec despite the fact that the media formats of both incoming filters, the Capture Filter and the Push Source filter, are identical and don't need any intermediate filters at all, since they match my mixer filter's input pins' supported format exactly? How can I fix this problem?
Also, I noticed that even in the single-filter attachment tests above that succeeded, my mixer's output pin was still getting queried for media formats. Why is that, when, as I said, at this point in building up my pin connections I have not connected anything to the output pin of my mixer filter?
--------------------------- UPDATE: 1 ----------------------------
I have learned that you can avoid the "intelligent connection" behavior entirely by using IFilterGraph.ConnectDirect() instead of IGraphBuilder.Connect(). I switched over to ConnectDirect() and it turns out that the input pin on my mixer filter is coming back as "already connected". That may be what is causing the graph builder to drag in the Indeo codec filter. Now that I have this new diagnostic information I will correct the problem and update this post with my results.
--------------------------- RESOLUTION ----------------------------
The root problem in all of this was my re-use of the input pin I obtained from the first destination/downstream filter I connected to my simple audio mixer filter, at the top of my application code. In other words, my filter was working correctly, but I was not getting a fresh input pin for each upstream filter I tried to connect to it. Once I started doing that, the connection process worked fine. I don't know why the code behind the IGraphBuilder.Connect() interface tried to bring in the Indeo codec filter, perhaps something to do with trying to connect to an already connected input pin, but it did. For my needs, I prefer the tight control that IFilterGraph.ConnectDirect() provides, since it eliminates any interference from the intelligent connection code in IGraphBuilder, but I can see how that code could become useful once video filters get involved.
