How to patch portaudio for low-latency iOS usage?

I am currently working on a patch that should make portaudio work on iOS. So far I have successfully applied portaudio member Hans Petter's patch for iOS usage
https://www.dropbox.com/s/6hf9bjqpa6b6uv3/0001-Add-basic-support-for-iOS-to-portaudio.patch?dl=0
and I can at least tell that the audio processing does work; however, it is currently stuck at 1024 samples. When I try decreasing it to lower values, the callback function terminates immediately. At 48 kHz it terminates without any warning or error message, but at 44.1 kHz it crashes with the following error:
Assertion failed: (*streamCallbackResult == paContinue || *streamCallbackResult == paComplete || *streamCallbackResult == paAbort), function PaUtil_EndBufferProcessing, file pa_process.c, line 1499.
dyld4 config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection DYLD_IMAGE_SUFFIX=_debug DYLD_INSERT_LIBRARIES=/Developer/usr/lib/libBacktraceRecording.dylib:/Developer/Library/PrivateFrameworks/DTDDISupport.framework/libViewDebuggerSupport.dylib
referring to line 1497 in pa_process.c:
assert( *streamCallbackResult == paContinue
        || *streamCallbackResult == paComplete
        || *streamCallbackResult == paAbort ); /* don't forget to pass in a valid callback result value */
I'd like to get the buffer size as low as possible (ideally 128 or even 64 samples); however, since I am not familiar with audio on iOS, I am looking for initial pointers. On OSX, Windows, or Linux this problem does not exist. Maybe iOS expects special low-latency flags?

In the meantime I have figured out that iOS can handle audio frame sizes below 1024 (down to 128 and possibly less) and that the portaudio patch requires additional modifications and calls to make this work. Once completed, I will provide the patch and further details here.

The key is integrating Objective-C code and changing the file's suffix to .mm so it compiles as Objective-C++:
// Configure the shared audio session before opening the PortAudio stream.
auto session = [AVAudioSession sharedInstance];
[session setActive:YES error:nil];
[session setPreferredSampleRate:sampleRate error:nil];
[session setPreferredIOBufferDuration:bufferDuration error:nil];
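The preferred IO buffer duration is just frames divided by sample rate. Here is a fuller sketch of the same setup (hedged: the 128-frame target, category choice, and read-back check are my additions, not part of the patch):

#import <AVFoundation/AVFoundation.h>

// Request a 128-frame IO buffer at 48 kHz (~2.7 ms). These are *preferred*
// values; iOS may round them, so always read back what was actually granted.
double sampleRate = 48000.0;
double bufferDuration = 128.0 / sampleRate; // seconds per callback

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setPreferredSampleRate:sampleRate error:&error];
[session setPreferredIOBufferDuration:bufferDuration error:&error];
[session setActive:YES error:&error];

// Size the PortAudio stream from the granted values, not the requested ones.
NSLog(@"granted: %.0f Hz, %f s per IO buffer",
      session.sampleRate, session.IOBufferDuration);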

Related

iOS: AVFoundation - "Preroll mode set during render" and my app hangs

I am developing a MIDI player by referring to the following web page:
http://twocentstudios.com/2017/02/20/bouncing-midi-to-audio-on-ios/
I don't do any recording, I just want to play the SMF file.
However, when I run setPreload (true), it says "ASSERTION FAILED: Preroll mode set during render" and my app hangs.
I searched for "Preroll mode set during render" but couldn't find any valid information.
Can someone please help?
EDIT:
Hi @dspr,
The percussion sounds even if I don't do "AudioUnitSetProperty (kAUMIDISynthProperty_EnablePreload: 1)".
I think this is because the BANK for percussion is automatically assigned to ch.10.
However, in this state, the piano and guitar and others do not sound.
AVAudioUnitMIDIInstrument needs kAUMIDISynthProperty_EnablePreload to analyze which tone is assigned to which track in the SMF file, right?
Which method does AVAudioUnitMIDIInstrument use to preload SMF files?
(1) AudioUnitSetProperty (kAUMIDISynthProperty_EnablePreload: 1) to AVAudioUnitMIDISynth
(2) << How to preload? >>
(3) AudioUnitSetProperty (kAUMIDISynthProperty_EnablePreload: 0) to AVAudioUnitMIDISynth
(4) Start AVAudioSequencer
MIDI Player uses the kAUMIDISynthProperty_EnablePreload property of MIDISynth for that purpose. See the Apple comment about it below; note the sentence at the end, "It should only be used prior to MIDI playback, and must be set back to 0 before attempting to start playback":
/*!
 @constant kAUMIDISynthProperty_EnablePreload
 @discussion Scope: Global
 Value Type: UInt32
 Access: Write
 Setting this property to 1 puts the MIDISynth in a mode where it will attempt to load
 instruments from the bank or file when it receives a program change message. This
 is used internally by the MusicSequence. It should only be used prior to MIDI playback,
 and must be set back to 0 before attempting to start playback.
 */
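Concretely, the (1)-(4) sequence from your EDIT might look like the sketch below. This is hedged guesswork around the documented property, not verified against your project: midiSynth stands for your AVAudioUnitMIDISynth instance (the custom AVAudioUnitMIDIInstrument subclass from the question), sequencer for your AVAudioSequencer, and the channels/programs are placeholders.

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

AudioUnit synthUnit = midiSynth.audioUnit; // underlying unit of the AVAudioUnit
UInt32 enabled = 1;

// (1) Enter preload mode -- only while playback is stopped.
AudioUnitSetProperty(synthUnit, kAUMIDISynthProperty_EnablePreload,
                     kAudioUnitScope_Global, 0, &enabled, sizeof(enabled));

// (2) Preload by sending the program changes your SMF file uses
//     (status 0xC0 | channel; channels and programs here are placeholders).
MusicDeviceMIDIEvent(synthUnit, 0xC0 | 0, 0, 0, 0);  // ch. 1: acoustic piano
MusicDeviceMIDIEvent(synthUnit, 0xC0 | 1, 24, 0, 0); // ch. 2: nylon guitar

// (3) Leave preload mode *before* any rendering starts, per the comment above.
enabled = 0;
AudioUnitSetProperty(synthUnit, kAUMIDISynthProperty_EnablePreload,
                     kAudioUnitScope_Global, 0, &enabled, sizeof(enabled));

// (4) Only now start the sequencer.
[sequencer startAndReturnError:nil];

Having step (3) happen after rendering has already begun would be consistent with the "Preroll mode set during render" assertion you are seeing.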
EDIT: Frankly, I have some reservations about your link:
One strategy I haven’t tried would be to pitch shift the MIDI up one octave, play it back at 2x, record it at 88.2kHz, then downsample to 44.1kHz. AVAudioSession presumably can’t go past 48kHz though.
Clearly, the person who wrote that has a very poor understanding of audio and sampling. Playing a MIDI song transposed one octave up at double tempo is not at all equivalent to playing the same song recorded as audio at double speed, regardless of whether you make the recording at 88.2kHz or any other sample rate. As a simple example, what happens if the file contains a drum set? A snare drum (40) would become a Chinese cymbal (52) played at half speed.
As I understand this post, the described hack has the sole purpose of making a recording. So if you simply want to play your MIDI file back, you can certainly find a simpler and better example.

How to tell if an iOS device only supports 48kHz in hardware

Newer iOS devices like the 6S only support native 48kHz playback. This is not really much of a problem, since standard CoreAudio graphs resample just fine. The problem is, if you're doing a VOIP type of app with the voice processing unit, you can't set the phone to 44.1kHz; it creates a nice Darth Vader-like experience!
Formerly, I used to check the model of the device and simply say "if it's a 6S or later, then I have to resample 44.1 to 48kHz", and this worked fine. I didn't like this fix, so I tried the following code:
session = [AVAudioSession sharedInstance];
[session setActive:YES error:&nsError];
if (systemSampleRate == 44100) // We may need to resample if it's a phone that only supports 48kHz like the 6S or 6SPlus
{
    [session setCategory:AVAudioSessionCategoryPlayback
             withOptions:0
                   error:&nsError];
    result = [session setPreferredSampleRate:systemSampleRate error:&nsError];
    hardwareSampleRate = [session sampleRate];
    NSLog(@"Phone reports sample rate of %f", hardwareSampleRate);
    if (hardwareSampleRate != (double)systemSampleRate) // We can't set it!!!!
        needsResampling = YES;
    else
    {
        [session setCategory:AVAudioSessionCategoryRecord
                 withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                       error:&nsError];
        result = [session setPreferredSampleRate:systemSampleRate error:&nsError];
        hardwareSampleRate = [session sampleRate];
        if (hardwareSampleRate != (double)systemSampleRate) // We can't set it!!!!
            needsResampling = YES;
        else
            needsResampling = NO;
    }
}
MOST of the time, this works. The 6S devices report 48kHz, and all others report 44.1kHz. BUT, if the phone has been tied to a Bluetooth headset type of system that only supports 8kHz mic audio and 44.1kHz playback, the first hardwareSampleRate value reports 44.1!!!! So I go ahead thinking the device natively supports 44.1 and everything screws up.
SO the question is: how do I find out if the native playback device on iOS physically only supports 48kHz, or can support both 44.1 and 48kHz? Apple's public documentation on this is worthless; it simply chastises people for assuming a device supports both, without telling you how to figure it out.
You really do just have to assume that the sample rate can change. If systemSampleRate is an external requirement, try to set the sample rate to that, and then work with what you get. The catch is that you have to do this check every time your audio render chain starts or is interrupted in case the sample rate changes.
I use two different ways to handle this, both involve tearing down and reinitializing my audio unit chain if the sample rate changes.
One simple way is to make all of my audio units' sample rates the system sample rate (provided by the sample rate property on an active audio session). I assume this is the highest-quality method, as there is no sample rate conversion.
If I have a sample rate requirement, I create my chain at my required sample rate, then check whether the system sample rate differs from my requirement. If it does, I put converter units between the system unit (remote IO) and the ends of my chain.
The bottom line is that the most important information is whether or not the system sample rate is different from your requirement, not whether or not it can change. It's a total pain, and a bunch of audio apps broke when the 6S came out, but it's the right way to handle it moving forward.
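A minimal sketch of that check (hedged: requiredSampleRate and needsResampling are my placeholder names, not from the answer above):

#import <AVFoundation/AVFoundation.h>

double requiredSampleRate = 44100.0; // your external requirement, if any
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// Ask for what you want, then trust only what you were actually given.
[session setPreferredSampleRate:requiredSampleRate error:&error];
[session setActive:YES error:&error];

BOOL needsResampling = (session.sampleRate != requiredSampleRate);
// Redo this check (and rebuild the chain or insert AUConverter units)
// whenever the render chain restarts or a route change/interruption occurs.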

AVAudioSession properties after Initializing AUGraph

To start a call, our VOIP app sets up an AVAudioSession, then builds, initializes and runs an AUGraph.
During the call, we allow the user to switch back and forth between a speakerphone mode using code such as:
avSession = [AVAudioSession sharedInstance];
AVAudioSessionCategoryOptions categoryOptions = [avSession categoryOptions];
categoryOptions |= AVAudioSessionCategoryOptionDefaultToSpeaker;
NSLog(#"AudioService:setSpeaker:setProperty:DefaultToSpeaker=1 categoryOptions = %lx", (unsigned long)categoryOptions);
BOOL success = [avSession setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:categoryOptions error:&error];
This works just fine. But if we try certain AVAudioSession queries after the AUGraph has been initialized, for example:
AVAudioSessionDataSourceDescription *myInputDataSource = [avSession inputDataSource];
the result is null. Running the same line of code BEFORE we execute the AUGraphInitialize gives the correct non-null result. Can anyone explain what is going on here and how to properly access AVAudioSession properties/methods while using AUGraph?
This is expected behavior per the developer documentation: inputDataSource should return nil if it is not possible to switch sources. So Apple is really not letting anything bad happen via a misconfiguration, but a nil source can also give the wrong idea. Hope this helps.
Discussion
The value of this property is nil if switching between multiple input
sources is not currently possible. This feature is supported only on
certain devices and peripherals–for example, on an iPhone equipped with
both front- and rear-facing microphones.
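If you do need the value, a hedged workaround that follows from the behavior observed in the question is to read it before AUGraphInitialize and cache it:

// Read the data source before AUGraphInitialize, while it is still reported.
AVAudioSession *avSession = [AVAudioSession sharedInstance];
AVAudioSessionDataSourceDescription *cachedInputSource = [avSession inputDataSource];

AUGraphInitialize(theGraph);
AUGraphStart(theGraph);

// Still guard for nil: single-microphone hardware has no switchable sources.
if (cachedInputSource) {
    NSLog(@"input data source: %@", cachedInputSource.dataSourceName);
}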

OSX Bluetooth LE Peripheral transfer rates are slow

Background Info:
I've implemented a Bluetooth LE Peripheral for OSX which exposes two characteristics (using CoreBluetooth). One is readable, and one is writable (both with indications on). I've implemented a Bluetooth LE Central on iOS which will read from the readable characteristic and write to the writable characteristic. I've set it up so that every time the characteristic value is read, the value is updated (in a way similar to this example). The transfer rates I get with this set up are pathetically slow (topping out at a measured sustained speed of roughly 340 bytes / second). This speed is the actual data, and not a measure including the packet details, ACKs and so on.
Problem:
This sustained speed is too slow. I've considered two solutions:
There's some parameter in CoreBluetooth that I've missed that will help me increase the speed.
I'll need to implement a custom Bluetooth LE service using the IOBluetooth classes instead of CoreBluetooth.
I believe I've exhausted option 1. I don't see any other parameters I can tweak. I'm limited to sending 20 bytes per message; anything more and I get cryptic errors on the iOS device concerning Unknown Errors, Unlikely Errors, or the value being "Not Long". Since the demo project also indicates a 20-byte MTU, I'll accept that this likely isn't possible.
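For reference, staying under that limit on the central side is just a matter of chunking the payload. A sketch (assumes a connected CBPeripheral named peripheral, a discovered writableCharacteristic, and an NSString message to send):

// Split the payload into <= 20-byte chunks; each write is one GATT request.
static const NSUInteger kChunkSize = 20;
NSData *payload = [message dataUsingEncoding:NSUTF8StringEncoding];
for (NSUInteger offset = 0; offset < payload.length; offset += kChunkSize) {
    NSUInteger len = MIN(kChunkSize, payload.length - offset);
    NSData *chunk = [payload subdataWithRange:NSMakeRange(offset, len)];
    [peripheral writeValue:chunk
         forCharacteristic:writableCharacteristic
                      type:CBCharacteristicWriteWithResponse];
}

This doesn't raise the throughput ceiling, though; that is what the connection-interval work described below addresses.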
So I'm left with option 2. I'm trying to somehow modify the connection parameters for Bluetooth LE on OSX to hopefully allow me to increase the transfer speed (by setting the min and max conn intervals to be 20ms and 40ms respectively - as well as sending multiple BT packets per connection interval). It looks like providing my own SDP Service on IOBluetooth is the only way to achieve this on OSX. The problem with this is the documentation for how to do this is negligible to non-existent.
This tells me how to implement my own service (albeit using a deprecated API); however, it doesn't explain the required parameters for registering an SDP service. So I'm left wondering:
Where can I find the required parameters for this dictionary?
How do I define these parameters in a way to offer a Bluetooth LE service?
Is there any alternative to providing a Bluetooth LE Peripheral on OSX via another framework (Python library? Linux in a VM with access to the Bluetooth stack? I'd like to avoid this altogether.)
I decided my best course of action was to attempt to use Linux in a VM as there is more documentation available and access to the source code would hopefully guarantee that I could find a solution. For anyone who is also facing this problem, here's how you can issue a Connection Parameter Update Request on OS X (sort of).
Step 1
Install a Linux VM. I used Virtual Box with Linux Mint 15 (64-bit Cinnamon).
Step 2
Allow usage of the OS X Bluetooth device in your VM. Attempting to forward the Bluetooth USB Controller to your VM will give an error message. To allow this, you need to stop everything that is using the controller. On my machine, that included issuing the following commands from the command line:
sudo launchctl unload /System/Library/LaunchDaemons/com.apple.blued.plist
This will kill the OS X Bluetooth daemon. Attempting to kill blued from the Activity Monitor will just cause it to be automatically relaunched.
sudo kextunload -b com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport
On my MacBook, I've got a Broadcom controller and this is the kernel module that OS X uses for it. Don't worry about issuing these commands. To undo the changes, you can power down and reboot your machine (note, in some cases when playing with the BT controller and it got into a bad state, I had to actually leave the machine powered down for ~10 seconds before rebooting to clear volatile memory).
If after running these two commands you still can't mount the BT controller, you can run kextstat | grep Bluetooth and see other Bluetooth related kernel modules and then try to unload them as well. I've got ones named IOBluetoothFamily and IOBluetoothSerialManager that don't need to be unloaded.
Step 3
Launch your VM and get your Linux BT stack. I checked out the bluez Git repo from here. I specifically grabbed the 5.14 release tag using git checkout tags/5.14 just to be sure it was at least a tagged version and less likely to be broken. 5.14 is the newest tag as of writing this answer.
Step 4
Build bluez. This was done using bootstrap, then configure, then make and make install. I used the --prefix=/opt/bluez flag on configure to prevent overwriting the install bluetooth stack. Also, I used the --enable-maintainer-mode configure flag for the reason stated in the next step. You also might need to use --disable-systemd to get it to configure. Bluez has a bunch of tools and utilities you can use for various things. In order to use the built Bluetooth daemon, you need to stop the system daemon using sudo service bluetooth stop. You can then launch the built one using sudo /opt/bluez/libexec/bluetooth/bluetoothd -n -d (this launches in non-daemon mode with debug output).
Step 5
Get your LE service running via bluez. You can look at bluez/plugins/gatt-example.c for how to do this. I directly modified this by removing the unnecessary code and using the battery service code as a template for my own service and characteristics. You need to recompile bluez to have this code added to the Bluetooth daemon. One thing to note (that cost me a day or two of trouble getting this working) is that iOS caches the GATT service listing, and this is not read/refreshed on each connection. If you add a service or characteristic or change a UUID, you'll need to disable Bluetooth on your iOS device and then re-enable it. This is undocumented in Apple's docs and there is no programmatic way to do it.
Step 6
Unfortunately, this is where things get tricky. Bluez doesn't have built-in support for issuing the Connection Parameter Update Request from any of its utilities; I had to write it myself. I'm currently seeing if they want my code to be included in the bluez stack. I can't post the code currently, as I'd need to first see if the bluez devs are interested in the code and then get approval from my workplace to release it. However, I can explain what I did to enable support.
Step 7
Prime yourself on the Bluetooth Standard. Any version 4.0 or greater will have the details you need. Read the following sections.
See Vol. 2, Part E, 4.1 for Host to Controller HCI flow.
See Vol. 2, Part E, 5.4.2 for HCI ACL Data Packet format.
See Vol. 3, Part A, 4 for Signalling Packet format.
See Vol. 3, Part A, 4.20 for Connection Parameter Update Request format.
You're basically going to need to write the code to format the packets and then write them to the hci device. The HCI ACL Data Packet header will contain 4 bytes. This is followed by 4 bytes for the Signalling command's length and channel id. This is then followed by your signal payload which in my case was 12 bytes (for the Connection Parameter Update Request).
You can then write them to the device, similar to hci_send_cmd in bluez/lib/hci.c. I did each packet header as its own struct and wrote them as iovecs to the device. I put my new function in the hci.c file and exposed it with a function prototype in bluez/lib/hci_lib.h. I then modified bluez/tools/hcitool.c to allow me to call this method from the command line. In my case, I made the command nearly identical to the lecup command, as it requires the same parameters (lecup itself can't be used, as it's meant to be called on the master side, not the slave).
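To make the layout concrete, here is how those three pieces can be declared as packed C structs. This is a sketch: the field names are mine, the values follow the spec sections listed above, and everything is little-endian on the wire, so fill the 16-bit fields with bluez's htobs():

#include <stdint.h>

typedef struct __attribute__((packed)) {
    uint16_t handle_flags;  // 12-bit connection handle + PB/BC flag bits
    uint16_t data_len;      // bytes that follow: 4 (L2CAP hdr) + 12 (payload)
} hci_acl_hdr_t;

typedef struct __attribute__((packed)) {
    uint16_t len;           // length of the signalling command below = 12
    uint16_t cid;           // 0x0005 = LE signalling channel
} l2cap_hdr_t;

typedef struct __attribute__((packed)) {
    uint8_t  code;          // 0x12 = Connection Parameter Update Request
    uint8_t  ident;         // pairs the request with its response
    uint16_t length;        // 8 = size of the four parameter fields below
    uint16_t interval_min;  // units of 1.25 ms; 16 -> 20 ms
    uint16_t interval_max;  // units of 1.25 ms; 32 -> 40 ms
    uint16_t slave_latency; // in connection events
    uint16_t timeout;       // units of 10 ms
} conn_param_update_req_t;

One instance of each, written in order as three iovecs (as described above), forms the complete packet.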
I recompiled all of this and then, voila, I could use my new command in hcitool to send the parameters to the Bluetooth controller. After sending my command, the connection is renegotiated with the iOS device as expected.
Comments
This process is not for the faint of heart. Hopefully either this or some other method of setting the connection parameters will be added to bluez to simplify the process. Ideally, Apple will allow this via CoreBluetooth or IOBluetooth at some point as well (it may be possible already, but it is undocumented and difficult; I gave up on the Apple libraries). I've journeyed down the rabbit hole and learned much more about the Bluetooth spec than I thought I'd have to in order to simply change the connection parameters between a MacBook and an iPhone. Hopefully this will be helpful to somebody at some point (even if it's me checking back on how I did this).
I know I've left out a lot of details in this in order to keep it somewhat brief (i.e. usage on the bluez tools). Please comment if something isn't clear.
If you are implementing your peripheral using CoreBluetooth, you can request somewhat customized connection parameters by calling -[CBPeripheralManager setDesiredConnectionLatency:forCentral:] with Low, Medium, or High (where Low latency means higher bandwidth). The documentation does not specify what these mean, so we have to test it ourselves.
On an OSX peripheral, when you set the desired latency to Low, the interval is still 22.5ms, which is far from the minimum of 7.5ms.
On OSX Yosemite 10.10.4, this is what the CBPeripheralManagerConnectionLatency values mean:
Low: Min Interval: 18 (22.5ms), Max Interval: 18 (22.5ms), Slave Latency: 4 events, Timeout: 200 (2s)
Medium: Min Interval: 32 (40ms), Max Interval: 32 (40ms), Slave Latency: 6 events, Timeout: 200 (2s)
High: Min Interval: 160 (200ms), Max Interval: 160 (200ms), Slave Latency: 2 events, Timeout: 300 (3s)
Here is the code that I used to run a CBPeripheralManager on OSX. I used an Android device as the central (running BLE Explorer) and dumped the Bluetooth traffic to a Btsnoop file.
// clang main.m -framework Foundation -framework IOBluetooth
#import <Foundation/Foundation.h>
#import <IOBluetooth/IOBluetooth.h>

@interface MyPeripheralManagerDelegate : NSObject<CBPeripheralManagerDelegate>
@property (nonatomic, assign) CBPeripheralManager* peripheralManager;
@property (nonatomic) CBPeripheralManagerConnectionLatency nextLatency;
@end

@implementation MyPeripheralManagerDelegate

+ (NSString*)stringFromCBPeripheralManagerState:(CBPeripheralManagerState)state {
    switch (state) {
        case CBPeripheralManagerStatePoweredOff: return @"PoweredOff";
        case CBPeripheralManagerStatePoweredOn: return @"PoweredOn";
        case CBPeripheralManagerStateResetting: return @"Resetting";
        case CBPeripheralManagerStateUnauthorized: return @"Unauthorized";
        case CBPeripheralManagerStateUnknown: return @"Unknown";
        case CBPeripheralManagerStateUnsupported: return @"Unsupported";
    }
}

+ (CBUUID*)LatencyCharacteristicUuid {
    return [CBUUID UUIDWithString:@"B81672D5-396B-4803-82C2-029D34319015"];
}

- (void)peripheralManagerDidUpdateState:(CBPeripheralManager *)peripheral {
    NSLog(@"CBPeripheralManager entered state %@", [MyPeripheralManagerDelegate stringFromCBPeripheralManagerState:peripheral.state]);
    if (peripheral.state == CBPeripheralManagerStatePoweredOn) {
        NSDictionary* dict = @{CBAdvertisementDataLocalNameKey: @"ConnLatencyTest"};
        // Generated with uuidgen
        CBUUID *serviceUuid = [CBUUID UUIDWithString:@"7AE48DEE-2597-4B4D-904E-A3E8C7735738"];
        CBMutableService* service = [[CBMutableService alloc] initWithType:serviceUuid primary:TRUE];
        // value:nil makes it a dynamic-valued characteristic
        CBMutableCharacteristic* latencyCharacteristic = [[CBMutableCharacteristic alloc] initWithType:MyPeripheralManagerDelegate.LatencyCharacteristicUuid properties:CBCharacteristicPropertyRead value:nil permissions:CBAttributePermissionsReadable];
        service.characteristics = @[latencyCharacteristic];
        [self.peripheralManager addService:service];
        [self.peripheralManager startAdvertising:dict];
        NSLog(@"startAdvertising. isAdvertising: %d", self.peripheralManager.isAdvertising);
    }
}

- (void)peripheralManagerDidStartAdvertising:(CBPeripheralManager *)peripheral
                                       error:(NSError *)error {
    if (error) {
        NSLog(@"Error advertising: %@", [error localizedDescription]);
    }
    NSLog(@"peripheralManagerDidStartAdvertising %d", self.peripheralManager.isAdvertising);
}

+ (CBPeripheralManagerConnectionLatency)nextLatencyAfter:(CBPeripheralManagerConnectionLatency)latency {
    switch (latency) {
        case CBPeripheralManagerConnectionLatencyLow: return CBPeripheralManagerConnectionLatencyMedium;
        case CBPeripheralManagerConnectionLatencyMedium: return CBPeripheralManagerConnectionLatencyHigh;
        case CBPeripheralManagerConnectionLatencyHigh: return CBPeripheralManagerConnectionLatencyLow;
    }
}

+ (NSString*)describeLatency:(CBPeripheralManagerConnectionLatency)latency {
    switch (latency) {
        case CBPeripheralManagerConnectionLatencyLow: return @"Low";
        case CBPeripheralManagerConnectionLatencyMedium: return @"Medium";
        case CBPeripheralManagerConnectionLatencyHigh: return @"High";
    }
}

- (void)peripheralManager:(CBPeripheralManager *)peripheral didReceiveReadRequest:(CBATTRequest *)request {
    if ([request.characteristic.UUID isEqualTo:MyPeripheralManagerDelegate.LatencyCharacteristicUuid]) {
        [self.peripheralManager setDesiredConnectionLatency:self.nextLatency forCentral:request.central];
        NSString* description = [MyPeripheralManagerDelegate describeLatency:self.nextLatency];
        request.value = [description dataUsingEncoding:NSUTF8StringEncoding];
        [self.peripheralManager respondToRequest:request withResult:CBATTErrorSuccess];
        NSLog(@"didReceiveReadRequest:latencyCharacteristic. Responding with %@", description);
        self.nextLatency = [MyPeripheralManagerDelegate nextLatencyAfter:self.nextLatency];
    } else {
        NSLog(@"didReceiveReadRequest: (unknown) %@", request);
    }
}
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        MyPeripheralManagerDelegate *peripheralManagerDelegate = [[MyPeripheralManagerDelegate alloc] init];
        CBPeripheralManager* peripheralManager = [[CBPeripheralManager alloc] initWithDelegate:peripheralManagerDelegate queue:nil];
        peripheralManagerDelegate.peripheralManager = peripheralManager;
        [[NSRunLoop currentRunLoop] run];
    }
    return 0;
}

Recording volume drop switching between RemoteIO and VPIO

In my app I need to switch between these 2 different AudioUnits.
Whenever I switch from VPIO to RemoteIO, there is a drop in my recording volume. Quite a significant drop.
There is no change in the playback volume, though. Has anyone experienced this?
Here's the code where I do the switch, which is triggered by a routing change. (I'm not too sure whether I did the change correctly, so am asking here as well.)
How do I solve the problem of the recording volume drop?
Thanks, appreciate any help I can get.
Pier.
- (void)switchInputBoxTo:(OSType)inputBoxSubType
{
    OSStatus result;
    if (!remoteIONode) return; // NULL check
    // Get info about current output node
    AudioComponentDescription outputACD;
    AudioUnit currentOutputUnit;
    AUGraphNodeInfo(theGraph, remoteIONode, &outputACD, &currentOutputUnit);
    if (outputACD.componentSubType != inputBoxSubType)
    {
        AUGraphStop(theGraph);
        AUGraphUninitialize(theGraph);
        result = AUGraphDisconnectNodeInput(theGraph, remoteIONode, 0);
        NSCAssert(result == noErr, @"Unable to disconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);
        AUGraphRemoveNode(theGraph, remoteIONode);
        // Re-init as other type
        outputACD.componentSubType = inputBoxSubType;
        // Add the RemoteIO unit node to the graph
        result = AUGraphAddNode(theGraph, &outputACD, &remoteIONode);
        NSCAssert(result == noErr, @"Unable to add the replacement IO unit to the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);
        result = AUGraphConnectNodeInput(theGraph, mixerNode, 0, remoteIONode, 0);
        // Obtain a reference to the I/O unit from its node
        result = AUGraphNodeInfo(theGraph, remoteIONode, 0, &_remoteIOUnit);
        NSCAssert(result == noErr, @"Unable to obtain a reference to the I/O unit. Error code: %d '%.4s'", (int)result, (const char *)&result);
        //result = AudioUnitUninitialize(_remoteIOUnit);
        [self setupRemoteIOTest]; // reinit all that remoteIO/voiceProcessing stuff
        [self configureAndStartAudioProcessingGraph:theGraph];
    }
}
I used my Apple developer support for this. Here's what the support engineer said:
The presence of the Voice I/O will result in the input/output being processed very differently. We don't expect these units to have the same gain levels at all, but the levels shouldn't be drastically off as it seems you indicate.
That said, Core Audio engineering indicated that your results may be related to the fact that when the voice block is created, it also affects the RIO instance. Upon further discussion, Core Audio engineering felt that since you say the level difference is very drastic, it would be good if you could file a bug with some recordings that highlight the level difference you are hearing between voice I/O and remote I/O, along with your test code, so we can attempt to reproduce it in house and see if this is indeed a bug. It would be a good idea to include the results of the single IO unit tests outlined above as well as further comparative results.
There is no API that controls this gain level; everything is internally set up by the OS depending on the audio session category (for example, VPIO is expected to always be used with PlayAndRecord) and which IO unit has been set up. Generally it is not expected that both will be instantiated simultaneously.
Conclusion? I think it's a bug. :/
There is some talk about low-volume issues arising if you don't dispose of your audio unit correctly. Basically, the first audio component stays in memory, and any successive playback will be ducked under your app's or other apps' audio, causing the volume drop.
Solution:
Audio units are AudioComponentInstances and must be freed using AudioComponentInstanceDispose().
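A minimal teardown sketch in that spirit (order matters: stop, uninitialize, then dispose; ioUnit stands for whichever RemoteIO/VPIO instance you are discarding):

// Fully release the IO unit so its internal state can't linger and
// duck the next instance's volume.
AudioOutputUnitStop(ioUnit);
AudioUnitUninitialize(ioUnit);
AudioComponentInstanceDispose(ioUnit);
ioUnit = NULL;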
I've had success when I change the audio session category when going from Voice Processing IO (PlayAndRecord) to RemoteIO (SoloAmbient). Make sure you pause the audio session before changing this. You'll also have to uninitialize your audio graph.
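A hedged sketch of that switch (assumes theGraph is your AUGraph and that you rerun your own unit setup afterwards):

// Pause everything before touching the session category.
AUGraphStop(theGraph);
AUGraphUninitialize(theGraph);

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setActive:NO error:&error];
// VPIO ran under PlayAndRecord; plain RemoteIO playback can use SoloAmbient.
[session setCategory:AVAudioSessionCategorySoloAmbient error:&error];
[session setActive:YES error:&error];

AUGraphInitialize(theGraph);
AUGraphStart(theGraph);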
From a talk I had with an Apple AVAudioSession engineer:
VPIO adds audio processing to the audio samples, which also provides the echo cancellation; this creates the drop in the audio level.
RemoteIO won't do any audio processing, so the volume level will remain high.
If you are looking for echo cancellation while using the RemoteIO option, you should create your own audio processing in the render callback.
