iOS - RemoteIO AudioUnits, possible to have 2?

I'm trying to do this:
RemoteIO1 (for recording to a buffer) -> kAudioUnitType_Mixer -> RemoteIO2 (for playback of the output)
RemoteIO1 is used for 2 purposes:
1) To feed audio into the mixer's channel 0
2) To record audio from the mic to a buffer
kAudioUnitType_Mixer:
1) Takes audio from RemoteIO1 on input 0
2) Mixes the audio from (1) with audio from the buffer on input 1
RemoteIO2:
1) Takes the mixed audio and sends it to playback
Initially I thought that I could just play back from the mixer output, but the following gives me an error. Can I confirm that I need another RemoteIO to do playback?
// Enable Mixer for playback
status = AudioUnitSetProperty(_mixerUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              mixerOutputBus,
                              &flag,
                              sizeof(flag));
if (noErr != status) { NSLog(@"Enable Mixer for playback error"); return; }
Also, I did the following test and realised there seems to be only one RemoteIO available (the addresses for inputComponent and inputComponent2 are the same):
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
AudioComponent inputComponent2 = AudioComponentFindNext(NULL, &desc);
Is it true that I can only have one instance of RemoteIO in my app? If so, what are the alternatives for the 2nd RemoteIO?
Thanks.
Pier.

I have since learnt that 2 RemoteIOs are not possible on iOS (please correct me if I am wrong).
RemoteIO acts like a socket in the wall - one plug says "Input" and the other says "Output".
"Input" is not internally connected to "Output".
Hence I was able to connect my mixer's output to the RemoteIO's output.
At the same time, I captured mic audio from the RemoteIO's input.
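For anyone following along, here is a minimal sketch of that single-RemoteIO layout, assuming _remoteIOUnit and _mixerUnit have already been created (the names are placeholders for whatever your app holds):
// Sketch: one RemoteIO handles both mic capture and playback.
UInt32 one = 1;

// Enable recording on the RemoteIO's input element (bus 1, the mic side).
AudioUnitSetProperty(_remoteIOUnit,
                     kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input,
                     1,
                     &one,
                     sizeof(one));

// Feed the mixer's output into the RemoteIO's output element (bus 0, the speaker side).
AudioUnitConnection connection = { _mixerUnit, 0, 0 };
AudioUnitSetProperty(_remoteIOUnit,
                     kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input,
                     0,
                     &connection,
                     sizeof(connection));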

How to connect multiple AudioUnits in Swift?

I currently have a RemoteIO Audio Unit configured and working; it simply takes input and passes it to output, so I can hear myself through the headphones of my iPhone when I speak into its microphone.
The next step in what I want to do is add optional effects and create a chain. I understand that AUGraph has been deprecated and that I need to use kAudioUnitProperty_MakeConnection to connect things together, but I have a few key questions and I'm unable to get audio out just yet.
Firstly: if I want to go RemoteIO Input -> Reverb -> RemoteIO Output, do I need two instances of the RemoteIO Audio Unit, or can I use the same one? I am guessing just one, connecting different things to its input and output scopes, but I'm having trouble making this happen.
Secondly: how do render callbacks play into this? I implemented a single render callback (an AURenderCallbackStruct), set it as the kAudioUnitProperty_SetRenderCallback property on my RemoteIO Audio Unit, and in the implementation of the callback I do this:
func performRender(
    _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {
    guard let unit = audioUnit else { crash("Asked to render before the AURemoteIO was created.") }
    // Pull mic samples from the RemoteIO's input element (bus 1) into ioData.
    return AudioUnitRender(unit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)
}
Do I need a render callback at all to make this work? Do I need two, one to render from RemoteIO -> Reverb, and another to render back to Reverb -> RemoteIO?
The CoreAudio documentation is notoriously sketchy but I'm having trouble finding any up-to-date info on how to do this without AUGraph which is deprecated.
Any advice hugely appreciated!
You only need one RemoteIO (apps only get one). You don't need any explicit render callbacks unless you are synthesizing samples in code. If you add kAudioUnitProperty_MakeConnection input connections along your full chain of Audio Units, starting the output unit will pull data through the rest of the chain, all the way back to the microphone (or whatever the OS has connected to the mic input).
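To make that concrete, here is a minimal sketch of such a pull-through chain. It assumes a single RemoteIO instance (remoteIOUnit) and an Apple reverb effect (reverbUnit, e.g. kAudioUnitSubType_Reverb2) have already been instantiated; the variable names are placeholders, and stream formats may still need matching via kAudioUnitProperty_StreamFormat.
// Sketch: RemoteIO input (bus 1) -> reverb -> RemoteIO output (bus 0),
// all with one RemoteIO instance and kAudioUnitProperty_MakeConnection.
UInt32 one = 1;

// 1. Enable the microphone side of the RemoteIO (input element, bus 1).
AudioUnitSetProperty(remoteIOUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &one, sizeof(one));

// 2. The reverb pulls from the RemoteIO's input element.
AudioUnitConnection micToReverb = { remoteIOUnit, 1, 0 };
AudioUnitSetProperty(reverbUnit, kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input, 0, &micToReverb, sizeof(micToReverb));

// 3. The RemoteIO's output element (bus 0) pulls from the reverb.
AudioUnitConnection reverbToOut = { reverbUnit, 0, 0 };
AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input, 0, &reverbToOut, sizeof(reverbToOut));

// 4. Initialize both units, then start the output unit to pull the whole chain.
AudioUnitInitialize(reverbUnit);
AudioUnitInitialize(remoteIOUnit);
AudioOutputUnitStart(remoteIOUnit);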

Disable input/output AGC from RemoteIO and VPIO on iOS

CoreAudio is always a mystery due to the lack of documentation. Recently I hit another stone:
In my program, I switch between RemoteIO and VoiceProcessingIO (VPIO) back and forth, and also change the AVAudioSession in between. I tried to turn off AGC on the VPIO with the following code:
if (ASBD.componentSubType == kAudioUnitSubType_VoiceProcessingIO) {
    UInt32 turnOff = 0;
    status = AudioUnitSetProperty(_myAudioUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  0,
                                  &turnOff,
                                  sizeof(turnOff));
    NSAssert1(status == noErr, @"Error setting AGC status: %d", (int)status);
}
Well, I'm still not sure if this code disables AGC on the microphone side or the speaker side of the VPIO, but anyway, let's continue. Here's the sequence to reproduce the problem:
1. Create a RemoteIO output audio unit with the PlayAndRecord audio session category, work with it, and destroy the unit;
2. Switch the audio session to the Playback-only category;
3. Switch the audio session to PlayAndRecord again, create another VPIO, work with it, and destroy it;
4. Switch the audio session to the Playback and then the PlayAndRecord category.
After these steps, whatever RemoteIO/VPIO is created later will bear this amplified microphone signal (as if a huge AGC were always applied), and there's no way to go back until you manually kill the app and start over.
Maybe it's my particular sequence that triggered this; has anyone seen this before, and does anyone know a correct workaround?
Try setting the mode AVAudioSessionModeMeasurement (AVAudioSession.Mode.measurement in Swift) when configuring your app's Audio Session.
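A minimal Objective-C sketch of that, assuming the PlayAndRecord category described in the question (the option flags are an assumption):
// Sketch: the measurement mode asks the system to apply minimal
// signal processing (which includes its gain stages) to input/output.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord
                mode:AVAudioSessionModeMeasurement
             options:0
               error:&error];
[session setActive:YES error:&error];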

Recording volume drop switching between RemoteIO and VPIO

In my app I need to switch between these 2 different AudioUnits.
Whenever I switch from VPIO to RemoteIO, there is a drop in my recording volume. Quite a significant drop.
No change in the playback volume though. Has anyone experienced this?
Here's the code where I do the switch, which is triggered by a routing change. (I'm not too sure whether I did the change correctly, so I'm asking about that here as well.)
How do I solve the problem of the recording volume drop?
Thanks, appreciate any help I can get.
Pier.
- (void)switchInputBoxTo:(OSType)inputBoxSubType
{
    OSStatus result;
    if (!remoteIONode) return; // NULL check

    // Get info about the current output node
    AudioComponentDescription outputACD;
    AudioUnit currentOutputUnit;
    AUGraphNodeInfo(theGraph, remoteIONode, &outputACD, &currentOutputUnit);

    if (outputACD.componentSubType != inputBoxSubType)
    {
        AUGraphStop(theGraph);
        AUGraphUninitialize(theGraph);

        result = AUGraphDisconnectNodeInput(theGraph, remoteIONode, 0);
        NSCAssert(result == noErr, @"Unable to disconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        AUGraphRemoveNode(theGraph, remoteIONode);

        // Re-init as the other type
        outputACD.componentSubType = inputBoxSubType;

        // Add the replacement I/O unit node to the graph
        result = AUGraphAddNode(theGraph, &outputACD, &remoteIONode);
        NSCAssert(result == noErr, @"Unable to add the replacement IO unit to the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        result = AUGraphConnectNodeInput(theGraph, mixerNode, 0, remoteIONode, 0);

        // Obtain a reference to the I/O unit from its node
        result = AUGraphNodeInfo(theGraph, remoteIONode, NULL, &_remoteIOUnit);
        NSCAssert(result == noErr, @"Unable to obtain a reference to the I/O unit. Error code: %d '%.4s'", (int)result, (const char *)&result);

        //result = AudioUnitUninitialize(_remoteIOUnit);

        [self setupRemoteIOTest]; // re-init all that RemoteIO/voiceProcessing stuff
        [self configureAndStartAudioProcessingGraph:theGraph];
    }
}
I used my Apple developer support for this.
Here's what the support said:
The presence of the Voice I/O will result in the input/output being processed very differently. We don't expect these units to have the same gain levels at all, but the levels shouldn't be as drastically off as you seem to indicate.
That said, Core Audio engineering indicated that your results may be related to the voice block, when created, also affecting the RIO instance. Upon further discussion, Core Audio engineering felt that since the level difference you describe is very drastic, it would be good if you could file a bug with some recordings to highlight the level difference you are hearing between voice I/O and remote I/O, along with your test code, so we can attempt to reproduce it in house and see if this is indeed a bug. It would be a good idea to include the results of the single IO unit tests outlined above, as well as further comparative results.
There is no API that controls this gain level; everything is set up internally by the OS depending on the Audio Session Category (for example, VPIO is expected to always be used with PlayAndRecord) and which IO unit has been set up. Generally it is not expected that both will be instantiated simultaneously.
Conclusion? I think it's a bug. :/
There is some talk about low-volume issues occurring if you don't dispose of your audio unit correctly. Basically, the first audio component stays in memory, and any subsequent playback will be ducked under your (or other apps') audio, causing the volume drop.
Solution:
Audio units are AudioComponentInstances and must be freed using AudioComponentInstanceDispose().
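A minimal teardown sketch, where myIOUnit is a placeholder for the RemoteIO/VPIO instance being retired:
AudioOutputUnitStop(myIOUnit);           // stop rendering
AudioUnitUninitialize(myIOUnit);         // release the unit's resources
AudioComponentInstanceDispose(myIOUnit); // free the component instance
myIOUnit = NULL;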
I've had success when I change the audio session category when going from VoiceProcessingIO (PlayAndRecord) to RemoteIO (SoloAmbient). Make sure you pause the audio session before changing this. You'll also have to uninitialize your audio graph.
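A sketch of that sequence, reusing theGraph from the code above (the exact ordering is my assumption):
// Pause everything, switch the category, then rebuild the graph.
AUGraphStop(theGraph);
AUGraphUninitialize(theGraph);

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setActive:NO error:&error];                                  // pause the session
[session setCategory:AVAudioSessionCategorySoloAmbient error:&error]; // the RemoteIO path
[session setActive:YES error:&error];

// ... re-add/reconnect the I/O node, then AUGraphInitialize and AUGraphStart ...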
From a talk I had with an Apple AVAudioSession engineer.
VPIO - adds audio processing to the audio samples, which also provides the echo cancellation; this creates the drop in the audio level.
RemoteIO - won't do any audio processing, so the volume level will remain high.
If you are looking for echo cancellation while using the RemoteIO option, you should create your own audio processing in the render callback.
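For reference, a skeleton of where that custom processing would live, assuming the RemoteIO unit is passed in through the callback's refCon (names are placeholders):
// Sketch: pull mic samples, then apply your own processing
// (e.g. a home-grown echo canceller) before they reach the output.
static OSStatus renderWithCustomProcessing(void *inRefCon,
                                           AudioUnitRenderActionFlags *ioActionFlags,
                                           const AudioTimeStamp *inTimeStamp,
                                           UInt32 inBusNumber,
                                           UInt32 inNumberFrames,
                                           AudioBufferList *ioData)
{
    AudioUnit remoteIOUnit = (AudioUnit)inRefCon; // placeholder: stashed at setup time
    // Pull the mic samples from the input element (bus 1).
    OSStatus status = AudioUnitRender(remoteIOUnit, ioActionFlags, inTimeStamp,
                                      1, inNumberFrames, ioData);
    if (status != noErr) { return status; }

    // ... run your own echo-cancellation / filtering over ioData here ...

    return noErr;
}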

RemoteIO: applying effects to audio produced by the app

In a nutshell: Is there a way to capture/manipulate all audio produced by an app using RemoteIO?
I can get render callbacks which allow me to send audio to the speaker by hooking into RemoteIO's output bus for the input scope. But my input buffer in that callback does not contain the sound being produced elsewhere in the app by an AVPlayer. Is manipulating all app audio even possible?
Here is my setup:
-(void)setup
{
    OSStatus status = noErr;

    AudioComponentDescription remoteIODesc;
    fillRemoteIODesc(&remoteIODesc);

    AudioComponent inputComponent = AudioComponentFindNext(NULL, &remoteIODesc);

    AudioComponentInstance remoteIO;
    status = AudioComponentInstanceNew(inputComponent, &remoteIO);
    assert(status == noErr);

    AudioStreamBasicDescription desc = {0};
    fillShortMonoASBD(&desc);

    status = AudioUnitSetProperty(remoteIO,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &desc,
                                  sizeof(desc));
    assert(status == noErr);

    AURenderCallbackStruct callback;
    callback.inputProc = outputCallback;
    callback.inputProcRefCon = _state;

    status = AudioUnitSetProperty(remoteIO,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  0,
                                  &callback,
                                  sizeof(callback));
    assert(status == noErr);

    status = AudioUnitInitialize(remoteIO);
    assert(status == noErr);

    status = AudioOutputUnitStart(remoteIO);
    assert(status == noErr);
}
Short answer: no, it doesn't work that way, unfortunately. You won't be able to add arbitrary processing to audio you're producing through AVFoundation (as of iOS 6).
You're misunderstanding the purpose of the RemoteIO unit. The RemoteIO gives you access to 2 things: the audio input hardware and the audio output hardware. As in, you can use the RemoteIO to get audio from the microphone, or send audio to the speakers. The RemoteIO unit won't let you grab audio that other parts of your app (e.g. AVFoundation) are sending to the hardware. Without getting too into it, this is because AVFoundation doesn't use the same audio pathway that you're using with the RemoteIO.
To manipulate audio on the level you want, you're going to have to go deeper than AVFoundation. Audio Queue Services are the next layer down, and will give you access to the audio in the form of the Audio Queue Processing Tap. This is probably the simplest way to start processing audio. There's not much documentation on it yet, though; probably the best source at the moment is the header AudioToolbox.framework/AudioQueue.h. Note that this was only introduced in iOS 6.
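A bare-bones sketch of installing such a tap, assuming you already have an AudioQueueRef named queue (all names here are placeholders, not a definitive implementation):
// The tap callback: fetch the queue's source audio, then process it in place.
static void MyTapCallback(void *inClientData,
                          AudioQueueProcessingTapRef inAQTap,
                          UInt32 inNumberFrames,
                          AudioTimeStamp *ioTimeStamp,
                          AudioQueueProcessingTapFlags *ioFlags,
                          UInt32 *outNumberFrames,
                          AudioBufferList *ioData)
{
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp,
                                          ioFlags, outNumberFrames, ioData);
    // ... manipulate the samples in ioData here ...
}

// Installing the tap on an existing queue:
UInt32 maxFrames = 0;
AudioStreamBasicDescription tapFormat = {0};
AudioQueueProcessingTapRef tap = NULL;
OSStatus status = AudioQueueProcessingTapNew(queue, MyTapCallback, NULL,
                                             kAudioQueueProcessingTap_PostEffects,
                                             &maxFrames, &tapFormat, &tap);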
Deeper than that are Audio Units. This is where the RemoteIO unit lives. You can use the AUFilePlayer to produce sound from an audio file, then feed that audio to other Audio Units to process it (or do it yourself). This will be quite a bit more tricky/verbose than AVFoundation (understatement), but if you've already got a RemoteIO unit set up, you can probably handle it.
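As a starting point, here is a hedged sketch of scheduling a file on an AUFilePlayer (kAudioUnitSubType_AudioFilePlayer); filePlayerUnit and fileURL are placeholders, and the player's output is assumed to already be connected to the rest of your chain:
// Open the file and hand it to the player.
AudioFileID audioFile;
OSStatus status = AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &audioFile);
status = AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileIDs,
                              kAudioUnitScope_Global, 0, &audioFile, sizeof(audioFile));

// Schedule the whole file as one region.
ScheduledAudioFileRegion region = {0};
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;
region.mAudioFile = audioFile;
region.mStartFrame = 0;
region.mFramesToPlay = (UInt32)-1; // play to the end of the file
status = AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileRegion,
                              kAudioUnitScope_Global, 0, &region, sizeof(region));

// Start playback on the next render cycle.
AudioTimeStamp startTime = {0};
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
status = AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                              kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));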

AudioUnitInitialize failed with error code 1852008051

I'm trying to record with an Audio Unit on iOS.
I set componentSubType in the AudioComponentDescription to kAudioUnitSubType_VoiceProcessingIO.
Then both AudioUnitInitialize and AudioOutputUnitStart fail with error code 1852008051 ('ncfs').
I cannot find this error code in the documentation.
But when I change kAudioUnitSubType_VoiceProcessingIO to kAudioUnitSubType_RemoteIO, everything is just fine.
Could anyone tell me what should be modified when changing from VoiceProcessingIO to RemoteIO?
If you want to record from the device, you need to use the Remote IO component subtype and then set this property:
UInt32 flag = 1;
AudioUnitSetProperty(yourAudioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
You also might find this tutorial about use of remote IO audio units helpful.
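For completeness, a sketch of the surrounding setup (the variable names are placeholders). Note that, as the Apple support reply above mentions, VPIO is expected to always be used with the PlayAndRecord session category, so an incompatible category is one plausible cause of initialization failures:
// The only structural difference between the two units is the componentSubType;
// the EnableIO call, formats, and callbacks can stay the same.
AudioComponentDescription desc = {0};
desc.componentType         = kAudioUnitType_Output;
desc.componentSubType      = kAudioUnitSubType_RemoteIO; // or kAudioUnitSubType_VoiceProcessingIO
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioComponentInstance ioUnit;
AudioComponentInstanceNew(comp, &ioUnit);

// Enable recording on the input element (bus 1), as in the answer above.
UInt32 flag = 1;
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &flag, sizeof(flag));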
