Disable input/output AGC from RemoteIO and VPIO on iOS

Core Audio is always a bit of a mystery due to the lack of documentation, and recently I hit another wall:
In my program I switch back and forth between RemoteIO and VoiceProcessingIO (VPIO) units, and also change the AVAudioSession configuration in between. I tried to turn off AGC on the VPIO unit with the following code:
if (ASBD.componentSubType == kAudioUnitSubType_VoiceProcessingIO) {
    // Disable the built-in AGC on the voice-processing unit (global scope, element 0)
    UInt32 turnOff = 0;
    status = AudioUnitSetProperty(_myAudioUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  0,
                                  &turnOff,
                                  sizeof(turnOff));
    NSAssert1(status == noErr, @"Error setting AGC status: %d", (int)status);
}
I'm still not sure whether this disables AGC on the microphone side or the speaker side of the VPIO unit, but let's continue. Here's the sequence that reproduces the problem:
1. Create a RemoteIO output audio unit with the PlayAndRecord audio session category, work with it, and destroy the unit.
2. Switch the audio session to the Playback-only category.
3. Switch the audio session to PlayAndRecord again, create a VPIO unit, work with it, and destroy it.
4. Switch the audio session to Playback and then back to PlayAndRecord.
After these steps, any RemoteIO or VPIO unit created later carries this amplified microphone signal (as if a heavy AGC were permanently applied), and there's no way to recover short of killing the app and starting over.
Maybe it's my particular sequence that triggers this; has anyone seen this before, and is there a correct workaround?

Try setting the mode AVAudioSessionModeMeasurement, or AVAudioSession.Mode.measurement, when configuring your app's audio session.
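A minimal Objective-C sketch of what that configuration might look like (the function name and error handling are illustrative, not from the question's code):

#import <AVFoundation/AVFoundation.h>

static void ConfigureMeasurementSession(void)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;

    // PlayAndRecord is needed for simultaneous input/output (e.g. with VPIO).
    if (![session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]) {
        NSLog(@"Failed to set category: %@", error);
    }

    // Measurement mode minimizes the system-supplied signal processing
    // (including automatic gain) applied to input and output.
    if (![session setMode:AVAudioSessionModeMeasurement error:&error]) {
        NSLog(@"Failed to set mode: %@", error);
    }

    if (![session setActive:YES error:&error]) {
        NSLog(@"Failed to activate session: %@", error);
    }
}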

Related

iOS audio system. Start & stop or just start?

I have an app where audio recording is the main and most important part. However, the user can switch to a table view controller where all recordings are displayed and no recording is performed.
The question is which approach is better: "start & stop the audio system, or just start it". It may seem obvious that the first one is more correct, along the lines of "allocate when you need it, deallocate when you're done with it". I'll lay out my thoughts on this question and hope to find reasoned agreement or disagreement from people with more experience.
When I first wrote AudioController.m, I implemented methods to open/close the audio session and to start/stop the audio unit, so the audio system would be stopped whenever recording was not active. I used the following code:
- (BOOL)startAudioSystem {
    // Activate the audio session
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *err = nil;
    if (![audioSession setActive:YES error:&err]) {
        NSLog(@"Couldn't activate audio session: %@", err);
    }
    // Start the audio unit
    OSStatus status = AudioOutputUnitStart([self audioUnit]);
    BOOL noErrors = err == nil && status == noErr;
    return noErrors;
}
and
- (BOOL)stopAudioSystem {
    // Stop the audio unit
    BOOL result = AudioOutputUnitStop([self audioUnit]) == noErr;
    HANDLE_RESULT(result);
    // Deactivate the audio session
    NSError *err = nil;
    HANDLE_RESULT([[AVAudioSession sharedInstance] setActive:NO
                                                withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                                      error:&err]);
    HANDLE_ERROR(err);
    BOOL noErrors = err == nil && result;
    return noErrors;
}
I found this approach problematic for the following reasons:
The audio system starts with a delay, meaning recording_callback() is not called for some time. I suspect AudioOutputUnitStart is responsible: when I commented out that call here and moved it to initialization, the delay was gone.
If the user switches between the recording view and the table view very quickly (so the audio system starts and stops very quickly too), the media services die (I know that observing AVAudioSessionMediaServicesWereResetNotification could help here, but that's not the point).
To resolve these issues I rewrote AudioController.m around the other approach I found: start the audio system when the application becomes active and don't stop it until the app is terminated. This approach also has issues:
CPU usage
If the audio category is set to recording only, no other audio can be played while the user explores the table view controller.
Surprisingly, the first one is not a big deal if you skip all processing in recording_callback() when it isn't needed, like this:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    AudioController *input = (__bridge AudioController *)inRefCon;
    // Bail out early when no processing is needed
    if (!input->shouldPerformProcessing)
        return noErr;
    // processing
    // ...
    //
    return noErr;
}
With this in place, CPU usage is effectively 0% on a real device when no recording is needed and no other work is performed.
The second issue can be solved by switching the audio category to PlayAndRecord and enabling mixing, or simply ignored. In my case, for example, the app requires the mini-jack to be used by an external device, so headphones can't be used in parallel anyway.
Despite all this, the first approach still appeals to me, since I like to close/clean up every stream and resource when it's no longer needed, and I want to be sure there is indeed no better option than just starting the audio system and leaving it running. Please reassure me that I'm not the only one who arrived at this solution and that it is the correct one.
The key to solving this problem is to note that the audio system actually runs on another (real-time) thread. You can't really stop and deallocate something running on another thread exactly when you (or the app's main UI thread) "don't need it"; you have to allow the other thread time to realize it needs to finish up and clean up after itself. For audio this can take up to several hundred milliseconds.
Given that, strategy 2 (just start) is safer and more realistic.
Alternatively, wait many seconds of non-use before attempting to stop audio, and possibly add another short delay after that before attempting any restart.
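One way to implement that delayed stop, as a rough sketch built on the startAudioSystem/stopAudioSystem methods above (the 10-second delay, the view-transition method names, and the _pendingStop ivar are all made up for illustration):

// Assumes an ivar declared elsewhere: dispatch_block_t _pendingStop; (hypothetical)
static const NSTimeInterval kStopDelay = 10.0; // arbitrary grace period

- (void)recordingViewDidDisappear
{
    // Schedule a cancellable stop instead of stopping immediately.
    if (_pendingStop) dispatch_block_cancel(_pendingStop);
    __weak typeof(self) weakSelf = self;
    _pendingStop = dispatch_block_create(0, ^{
        [weakSelf stopAudioSystem];
    });
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(kStopDelay * NSEC_PER_SEC)),
                   dispatch_get_main_queue(),
                   _pendingStop);
}

- (void)recordingViewWillAppear
{
    // Cancel any pending stop and make sure audio is running.
    if (_pendingStop) {
        dispatch_block_cancel(_pendingStop);
        _pendingStop = nil;
    }
    [self startAudioSystem];
}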

How to stop a soundfont MusicDeviceMIDIEvent from completing on iOS

I have an app that successfully uses an AUSampler in an AUGraph to play SoundFont sounds.
The main sound is a guitar.
If a 'lower' note on the same string is played, I want to stop the previous note from ringing through.
The default behaviour is like plucking a string: it rings until it dies out.
If I send a note-off event, the sound still tapers off.
If I send a note-on with a velocity of 0, the sound still tapers off.
UInt32 onVelocity = loudness;
UInt32 noteCommand = kMIDIMessage_NoteOn << 4 | 0;  // note-on, channel 0
OSStatus osStatus;
osStatus = MusicDeviceMIDIEvent(self.samplerUnitGuitar, noteCommand, noteNbr, onVelocity, 0);
I tried quite a few things. The control/mode-change approach didn't appear to work; it doesn't look like the AUSampler implements that controller.
AudioUnitReset did the trick.
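For reference, the reset call on the sampler unit looks something like this (a minimal sketch; samplerUnitGuitar is the sampler AudioUnit from the code above):

// Resets the sampler's internal state, which cuts off any ringing notes immediately.
OSStatus status = AudioUnitReset(self.samplerUnitGuitar, kAudioUnitScope_Global, 0);
NSAssert(status == noErr, @"AudioUnitReset failed: %d", (int)status);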
To clarify, you want to stop the note's decay after you send Note Off?
To my knowledge, the only way to do this is to send a Control/Mode Change message with All Sound Off. Unfortunately, this will kill ANY notes that are currently playing on that channel. (And MusicDevice AudioUnits only respond to one channel.) But it works.
UInt8 kMIDIMessage_ControlModeChange = 0xB0;       // control/mode change on channel 0
UInt8 kMIDIMessage_ControlTypeAllSoundOff = 0x78;  // controller 120: All Sound Off
MusicDeviceMIDIEvent(sampler, kMIDIMessage_ControlModeChange, kMIDIMessage_ControlTypeAllSoundOff, 0, 0);

Recording volume drop switching between RemoteIO and VPIO

In my app I need to switch between these two different AudioUnits.
Whenever I switch from VPIO to RemoteIO, there is a drop in my recording volume. Quite a significant drop.
There's no change in the playback volume, though. Has anyone experienced this?
Here's the code where I do the switch, which is triggered by a routing change. (I'm not entirely sure whether I'm doing the switch correctly, so I'm asking about that here as well.)
How do I solve the recording volume drop?
Thanks, I appreciate any help I can get.
Pier.
- (void)switchInputBoxTo:(OSType)inputBoxSubType
{
    OSStatus result;
    if (!remoteIONode) return; // NULL check

    // Get info about the current output node
    AudioComponentDescription outputACD;
    AudioUnit currentOutputUnit;
    AUGraphNodeInfo(theGraph, remoteIONode, &outputACD, &currentOutputUnit);

    if (outputACD.componentSubType != inputBoxSubType)
    {
        AUGraphStop(theGraph);
        AUGraphUninitialize(theGraph);

        result = AUGraphDisconnectNodeInput(theGraph, remoteIONode, 0);
        NSCAssert(result == noErr, @"Unable to disconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        AUGraphRemoveNode(theGraph, remoteIONode);

        // Re-add the I/O node as the other subtype
        outputACD.componentSubType = inputBoxSubType;
        result = AUGraphAddNode(theGraph, &outputACD, &remoteIONode);
        NSCAssert(result == noErr, @"Unable to add the replacement IO unit to the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        result = AUGraphConnectNodeInput(theGraph, mixerNode, 0, remoteIONode, 0);

        // Obtain a reference to the I/O unit from its node
        result = AUGraphNodeInfo(theGraph, remoteIONode, NULL, &_remoteIOUnit);
        NSCAssert(result == noErr, @"Unable to obtain a reference to the I/O unit. Error code: %d '%.4s'", (int)result, (const char *)&result);

        //result = AudioUnitUninitialize(_remoteIOUnit);

        [self setupRemoteIOTest]; // re-init all the RemoteIO/voice-processing setup
        [self configureAndStartAudioProcessingGraph:theGraph];
    }
}
I used my Apple developer support for this. Here's what support said:
The presence of the Voice I/O will result in the input/output being processed very differently. We don't expect these units to have the same gain levels at all, but the levels shouldn't be as drastically off as you indicate.
That said, Core Audio engineering indicated that your results may be related to the voice block, when created, also affecting the RIO instance. Upon further discussion, Core Audio engineering felt that since the level difference you describe is very drastic, it would be good if you could file a bug with some recordings highlighting the level difference you are hearing between voice I/O and remote I/O, along with your test code, so we can attempt to reproduce it in house and see whether this is indeed a bug. It would be a good idea to include the results of the single IO unit tests outlined above, as well as further comparative results.
There is no API that controls this gain level; everything is set up internally by the OS depending on the audio session category (for example, VPIO is expected to always be used with PlayAndRecord) and on which IO unit has been set up. Generally it is not expected that both will be instantiated simultaneously.
Conclusion? I think it's a bug. :/
There is some talk about low-volume issues occurring if you don't dispose of your audio unit correctly: the first audio component stays in memory, and any subsequent playback is ducked under your (or other apps') audio, causing the volume drop.
Solution:
Audio units are AudioComponentInstances and must be freed using AudioComponentInstanceDispose().
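As a rough sketch, for an I/O unit you created yourself with AudioComponentInstanceNew (units owned by an AUGraph are managed by the graph instead), a full teardown might look like this; TearDownIOUnit is just an illustrative helper name:

static void TearDownIOUnit(AudioUnit *ioUnit)
{
    if (ioUnit == NULL || *ioUnit == NULL) return;

    // Stop, uninitialize, and fully dispose of the unit so the old
    // instance doesn't linger in memory and duck subsequent audio.
    AudioOutputUnitStop(*ioUnit);
    AudioUnitUninitialize(*ioUnit);
    AudioComponentInstanceDispose(*ioUnit);
    *ioUnit = NULL;
}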
I've had success changing the audio session category when going from VoiceProcessingIO (PlayAndRecord) to RemoteIO (SoloAmbient). Make sure you pause the audio session before changing this. You'll also have to uninitialize your audio graph.
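Roughly, that sequence might look like the following sketch (based on this suggestion, not on tested code; theGraph is the AUGraph from the question, the method name is illustrative, and error handling is omitted):

- (void)switchToRemoteIOCategory
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // Stop and uninitialize the graph before touching the session.
    AUGraphStop(theGraph);
    AUGraphUninitialize(theGraph);

    // Deactivate, change the category, then reactivate the session.
    [session setActive:NO error:&error];
    [session setCategory:AVAudioSessionCategorySoloAmbient error:&error];
    [session setActive:YES error:&error];

    // Re-initialize and restart the graph with the new category in effect.
    AUGraphInitialize(theGraph);
    AUGraphStart(theGraph);
}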
From a conversation I had with an Apple AVAudioSession engineer:
VPIO - adds processing to the audio samples, which is also what provides the echo cancellation; this processing creates the drop in the audio level.
RemoteIO - doesn't do any audio processing, so the volume level remains high.
If you are looking for echo cancellation while using the RemoteIO option, you should implement your own audio processing in the render callback.
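Writing your own echo canceller is far beyond a snippet, but for what it's worth, per-sample processing in a render callback generally looks like the sketch below; it just applies a fixed gain to 16-bit samples and assumes an interleaved SInt16 stream format (the callback name and the gain value are illustrative, not from the question):

static OSStatus processingCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    const float kGain = 2.0f; // illustrative fixed gain

    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        SInt16 *samples = (SInt16 *)ioData->mBuffers[b].mData;
        UInt32 count = ioData->mBuffers[b].mDataByteSize / sizeof(SInt16);
        for (UInt32 i = 0; i < count; i++) {
            // Apply the gain, clipping to the valid 16-bit range.
            float v = samples[i] * kGain;
            if (v > 32767.0f)  v = 32767.0f;
            if (v < -32768.0f) v = -32768.0f;
            samples[i] = (SInt16)v;
        }
    }
    return noErr;
}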

RemoteIO: effecting audio produced by app

In a nutshell: is there a way to capture/manipulate all audio produced by an app using RemoteIO?
I can get render callbacks that let me send audio to the speaker by hooking into the input scope of RemoteIO's output bus. But the input buffer in that callback does not contain the sound being produced elsewhere in the app by an AVPlayer. Is manipulating all app audio even possible?
Here is my setup:
- (void)setup
{
    OSStatus status = noErr;

    // Find and instantiate the RemoteIO unit
    AudioComponentDescription remoteIODesc;
    fillRemoteIODesc(&remoteIODesc);
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &remoteIODesc);
    AudioComponentInstance remoteIO;
    status = AudioComponentInstanceNew(inputComponent, &remoteIO);
    assert(status == noErr);

    // Set the stream format on the input scope of the output bus (bus 0)
    AudioStreamBasicDescription desc = {0};
    fillShortMonoASBD(&desc);
    status = AudioUnitSetProperty(remoteIO,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &desc,
                                  sizeof(desc));
    assert(status == noErr);

    // Install the render callback that supplies audio to the speaker
    AURenderCallbackStruct callback;
    callback.inputProc = outputCallback;
    callback.inputProcRefCon = _state;
    status = AudioUnitSetProperty(remoteIO,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  0,
                                  &callback,
                                  sizeof(callback));
    assert(status == noErr);

    status = AudioUnitInitialize(remoteIO);
    assert(status == noErr);

    status = AudioOutputUnitStart(remoteIO);
    assert(status == noErr);
}
Short answer: no, it doesn't work that way, unfortunately. You won't be able to add arbitrary processing to audio you're producing through AVFoundation (as of iOS 6).
You're misunderstanding the purpose of the RemoteIO unit. RemoteIO gives you access to two things: the audio input hardware and the audio output hardware. That is, you can use RemoteIO to get audio from the microphone or send audio to the speakers. The RemoteIO unit won't let you grab audio that other parts of your app (e.g. AVFoundation) are sending to the hardware. Without getting too far into it, this is because AVFoundation doesn't use the same audio pathway that you're using with RemoteIO.
To manipulate audio at the level you want, you're going to have to go deeper than AVFoundation. Audio Queue Services are the next layer down, and they give you access to the audio in the form of the Audio Queue Processing Tap. This is probably the simplest way to start processing audio. There isn't much documentation on it yet, though; probably the best source at the moment is the header AudioToolbox.framework/AudioQueue.h. Note that this was only introduced in iOS 6.
Deeper than that are Audio Units, which is where the RemoteIO unit lives. You can use the AUFilePlayer to produce sound from an audio file, then feed that audio to other Audio Units to process it (or do it yourself). This will be quite a bit more tricky/verbose than AVFoundation (understatement), but if you've already got a RemoteIO unit set up you can probably handle it.
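As a rough sketch of that Audio Unit route (graph wiring only, with the file-scheduling steps left as comments and error checking omitted; BuildFilePlayerGraph is an illustrative name):

#import <AudioToolbox/AudioToolbox.h>

static void BuildFilePlayerGraph(AUGraph *outGraph)
{
    AUGraph graph;
    NewAUGraph(&graph);

    // File player node: produces audio from an audio file.
    AudioComponentDescription playerDesc = {
        .componentType = kAudioUnitType_Generator,
        .componentSubType = kAudioUnitSubType_AudioFilePlayer,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AUNode playerNode;
    AUGraphAddNode(graph, &playerDesc, &playerNode);

    // RemoteIO node: sends the rendered audio to the hardware.
    AudioComponentDescription ioDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AUNode ioNode;
    AUGraphAddNode(graph, &ioDesc, &ioNode);

    AUGraphOpen(graph);

    // Insert any processing units between the player and the IO unit here.
    AUGraphConnectNodeInput(graph, playerNode, 0, ioNode, 0);
    AUGraphInitialize(graph);

    // To actually play a file you still need to schedule it on the player unit:
    // kAudioUnitProperty_ScheduledFileIDs, kAudioUnitProperty_ScheduledFileRegion,
    // kAudioUnitProperty_ScheduledFilePrime, kAudioUnitProperty_ScheduleStartTimeStamp,
    // then AUGraphStart(graph).

    *outGraph = graph;
}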

Removing Silence from Audio Queue session recorded audio in ios

I'm using Audio Queue Services to record audio from the iPhone's mic and stop recording when silence is detected (no audio input for 10 seconds), but I want to discard the silence from the audio file.
In the AudioInputCallback function I use the following code to detect silence:
AudioQueueLevelMeterState meters[1];
UInt32 dlen = sizeof(meters);
// Requires level metering to be enabled via kAudioQueueProperty_EnableLevelMetering
OSStatus status = AudioQueueGetProperty(inAQ, kAudioQueueProperty_CurrentLevelMeterDB, meters, &dlen);
if (meters[0].mPeakPower < _threshold) {
    // NSLog(@"Silence detected");
}
But how do I remove those packets? Or is there a better option?
Instead of removing the packets from the audio queue, you can delay the write by writing to a buffer first. The buffer can easily be kept in the structure you pass as inUserData.
When you finish recording, if the last 10 seconds were not silent, write the buffer out to whatever file you are writing to. Otherwise, just free the buffer.
Alternatively, after the file has been recorded and closed, simply open it and truncate the sample data you are not interested in (note: you can use the AudioFile/ExtAudioFile APIs to properly update any dependent chunk/header sizes).
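As a rough sketch of one way to do that post-processing with the ExtAudioFile API (here copying non-silent chunks into a new file rather than truncating in place; it assumes a 44.1 kHz mono recording read as 16-bit LPCM, the threshold, chunk size, and function name are arbitrary, and error handling is omitted):

#import <AudioToolbox/AudioToolbox.h>

// Copies srcURL to dstURL, skipping chunks whose peak amplitude is below kSilencePeak.
static void CopySkippingSilence(CFURLRef srcURL, CFURLRef dstURL)
{
    const SInt32 kSilencePeak = 300; // hypothetical silence threshold

    ExtAudioFileRef src = NULL, dst = NULL;
    ExtAudioFileOpenURL(srcURL, &src);

    // 16-bit mono LPCM client format used for both reading and writing.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;
    ExtAudioFileSetProperty(src, kExtAudioFileProperty_ClientDataFormat, sizeof(fmt), &fmt);

    ExtAudioFileCreateWithURL(dstURL, kAudioFileCAFType, &fmt, NULL,
                              kAudioFileFlags_EraseFile, &dst);
    ExtAudioFileSetProperty(dst, kExtAudioFileProperty_ClientDataFormat, sizeof(fmt), &fmt);

    SInt16 samples[4096];
    while (1) {
        AudioBufferList buf;
        buf.mNumberBuffers = 1;
        buf.mBuffers[0].mNumberChannels = 1;
        buf.mBuffers[0].mDataByteSize = sizeof(samples);
        buf.mBuffers[0].mData = samples;

        UInt32 frames = 4096;
        ExtAudioFileRead(src, &frames, &buf);
        if (frames == 0) break; // end of file

        // Find the chunk's peak amplitude.
        SInt32 peak = 0;
        for (UInt32 i = 0; i < frames; i++) {
            SInt32 v = samples[i] < 0 ? -samples[i] : samples[i];
            if (v > peak) peak = v;
        }

        // Only write chunks that are above the silence threshold.
        if (peak >= kSilencePeak) {
            buf.mBuffers[0].mDataByteSize = frames * sizeof(SInt16);
            ExtAudioFileWrite(dst, frames, &buf);
        }
    }

    ExtAudioFileDispose(src);
    ExtAudioFileDispose(dst);
}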

Resources