AVAudioSession properties after initializing AUGraph - iOS

To start a call, our VOIP app sets up an AVAudioSession, then builds, initializes and runs an AUGraph.
During the call, we allow the user to toggle speakerphone mode on and off using code such as:
avSession = [AVAudioSession sharedInstance];
AVAudioSessionCategoryOptions categoryOptions = [avSession categoryOptions];
categoryOptions |= AVAudioSessionCategoryOptionDefaultToSpeaker;
NSLog(#"AudioService:setSpeaker:setProperty:DefaultToSpeaker=1 categoryOptions = %lx", (unsigned long)categoryOptions);
BOOL success = [avSession setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:categoryOptions error:&error];
This works just fine. But if we make certain AVAudioSession queries after the AUGraph has been initialized, for example:
AVAudioSessionDataSourceDescription *myInputDataSource = [avSession inputDataSource];
the result is nil. Running the same line of code BEFORE we call AUGraphInitialize gives the correct non-nil result. Can anyone explain what is going on here, and how to properly access AVAudioSession properties/methods while using an AUGraph?

This is expected behavior per the developer documentation: inputDataSource should return nil if it is not possible to switch sources. So Apple is not letting anything bad happen via a misconfiguration, but a nil result can also give the wrong idea. Hope this helps.
Discussion
The value of this property is nil if switching between multiple input sources is not currently possible. This feature is supported only on certain devices and peripherals; for example, on an iPhone equipped with both front- and rear-facing microphones.
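Given that, it is safer to treat a nil inputDataSource as "switching not available" rather than as an error. A minimal defensive sketch (dataSourceName is a standard property of AVAudioSessionDataSourceDescription):
AVAudioSessionDataSourceDescription *myInputDataSource = [avSession inputDataSource];
if (myInputDataSource == nil) {
    // Only one input source (or none) is selectable in the current
    // configuration, so there is nothing to switch.
    NSLog(@"Input data source switching not available");
} else {
    NSLog(@"Current input data source: %@", myInputDataSource.dataSourceName);
}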

Related

How to tell if an iOS device only supports 48kHz in hardware

Newer iOS devices like the 6S only support native 48kHz playback. Not much of a problem, since standard Core Audio graphs resample just fine. The problem is, if you're doing a VOIP-type app with the voice processing unit, you can't set the phone to 44.1kHz; it creates a nice Darth Vader-like experience!
Formerly, I used to check the model of the device and simply say "if it's a 6S or later, then I have to resample 44.1 to 48kHz", and this worked fine. I didn't like this fix, so I tried the following code:
session = [AVAudioSession sharedInstance];
[session setActive:YES error:&nsError];
if (systemSampleRate == 44100) // We may need to resample if it's a phone that only supports 48kHz, like the 6S or 6SPlus
{
    [session setCategory:AVAudioSessionCategoryPlayback
             withOptions:0
                   error:&nsError];
    result = [session setPreferredSampleRate:systemSampleRate error:&nsError];
    hardwareSampleRate = [session sampleRate];
    NSLog(@"Phone reports sample rate of %f", hardwareSampleRate);
    if (hardwareSampleRate != (double)systemSampleRate) // We can't set it!!!!
        needsResampling = YES;
    else
    {
        [session setCategory:AVAudioSessionCategoryRecord
                 withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                       error:&nsError];
        result = [session setPreferredSampleRate:systemSampleRate error:&nsError];
        hardwareSampleRate = [session sampleRate];
        if (hardwareSampleRate != (double)systemSampleRate) // We can't set it!!!!
            needsResampling = YES;
        else
            needsResampling = NO;
    }
}
MOST of the time, this works. The 6S devices would report 48kHz, and all others would report 44.1kHz. BUT, if the device had been tied to a Bluetooth headset system that only supports 8kHz mic audio and 44.1kHz playback, the first hardwareSampleRate value reports 44.1!!!! So I go ahead thinking the device natively supports 44.1, and everything screws up.
SO the question is: how do I find out whether the native playback device on iOS physically supports only 48kHz, or can support both 44.1 and 48kHz? Apple's public document on this is worthless; it simply chastises people for assuming a device supports both, without telling you how to figure it out.
You really do just have to assume that the sample rate can change. If systemSampleRate is an external requirement, try to set the sample rate to that, and then work with what you get. The catch is that you have to do this check every time your audio render chain starts or is interrupted, in case the sample rate has changed.
I use two different ways to handle this, both involve tearing down and reinitializing my audio unit chain if the sample rate changes.
One simple way is to make all of my audio units' sample rates the system sample rate (provided by the sampleRate property on an active audio session). I assume this is the highest-quality method, as there is no sample rate conversion.
If I have a sample rate requirement, I create my chain with my required sample rate, then check whether the system sample rate differs from my requirement. If it does, I put converter units between the system unit (remote IO) and the ends of my chain.
The bottom line is that the most important information is whether or not the system sample rate is different from your requirement, not whether or not it can change. It's a total pain, and a bunch of audio apps broke when the 6S came out, but it's the right way to handle it moving forward.
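As a rough sketch of that check (44100 stands in for your external requirement; rebuildAudioChainWithConverters: is a placeholder for your own teardown/rebuild logic):
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;
[session setActive:YES error:&error];

double requiredSampleRate = 44100.0; // external requirement
[session setPreferredSampleRate:requiredSampleRate error:&error];

// Read back what the hardware actually granted. Repeat this check every
// time the render chain starts or is interrupted, since the rate can
// change (e.g. when a Bluetooth route comes or goes).
double actualSampleRate = [session sampleRate];
BOOL needsConverters = (actualSampleRate != requiredSampleRate);
[self rebuildAudioChainWithConverters:needsConverters]; // hypothetical helper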

iOS audio system. Start & stop or just start?

I have an app where audio recording is the main and most important part. However, the user can switch to a table view controller where all recordings are displayed and no recording is performed.
The question is which approach is better: "start & stop the audio system, or just start it". It may seem obvious that the first is more correct, like "allocate when you need it, deallocate when you're done with it". I will lay out my thoughts on this question, and I hope to find reasoned approval or disapproval from skilled people.
When I first constructed AudioController.m, I implemented methods to open/close the audio session and to start/stop the audio unit. I wanted to stop the audio system when recording is not active. I used the following code:
- (BOOL)startAudioSystem {
    // open audio session
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *err = nil;
    if (![audioSession setActive:YES error:&err]) {
        NSLog(@"Couldn't activate audio session: %@", err);
    }
    // start audio unit
    OSStatus status;
    status = AudioOutputUnitStart([self audioUnit]);
    BOOL noErrors = err == nil && status == noErr;
    return noErrors;
}
and
- (BOOL)stopAudioSystem {
    // stop audio unit
    BOOL result;
    result = AudioOutputUnitStop([self audioUnit]) == noErr;
    HANDLE_RESULT(result);
    // close audio session
    NSError *err = nil;
    HANDLE_RESULT([[AVAudioSession sharedInstance] setActive:NO
                                                withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                                      error:&err]);
    HANDLE_ERROR(err);
    BOOL noErrors = err == nil && result;
    return noErrors;
}
I found this approach problematic because of the following reasons:
The audio system starts with a delay. That means recording_callback() is not called for some time. I suspect AudioOutputUnitStart is responsible for that. I tried commenting out the line with this function call and moving it to initialization; the delay was gone.
If the user switches between the recording view and the table view very quickly (so the audio system's starts and stops are very fast too), it causes the death of the media service (I know that observing AVAudioSessionMediaServicesWereResetNotification could help here, but that is not the point).
To resolve these issues, I modified AudioController.m with the other approach: start the audio system when the application becomes active, and do not stop it before the app is terminated. In this case there are also several issues:
CPU usage
If the audio category is set to record-only, then no other audio can be played while the user explores the table view controller.
The first one, surprisingly, is not a big deal if you cancel any kind of processing in recording_callback(), like this:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    AudioController *input = (__bridge AudioController*)inRefCon;
    if (!input->shouldPerformProcessing)
        return noErr;
    // processing
    // ...
    //
    return noErr;
}
By doing this, CPU usage equals 0% on a real device when no recording is needed and no other actions are performed.
And the second issue can be solved by switching the audio category to PlayAndRecord and enabling mixing, or by just ignoring the problem. For example, in my case the app requires the mini-jack to be used by an external device, so no headphones can be used in parallel.
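For reference, the mixing variant would look something like this (a minimal sketch, in the same style as the code above):
// PlayAndRecord plus MixWithOthers keeps the session active for recording
// while still letting other apps' audio play.
NSError *categoryError = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                 withOptions:AVAudioSessionCategoryOptionMixWithOthers
                                       error:&categoryError];
if (categoryError)
    NSLog(@"Couldn't set category: %@", categoryError);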
Despite all this, the first approach appeals to me more, since I like to close/clean up every stream/resource when it is no longer needed. And I want to be sure that there is indeed no option other than just starting the audio system. Please reassure me that I'm not the only one who came to this solution, and that it is the correct one.
The key to solving this problem is to note that the audio system actually runs in another (real-time) thread. You can't stop and deallocate something running in another thread exactly when you (or the app's main UI thread) "don't need it"; you have to delay, to allow the other thread to realize it needs to finish and clean up after itself. This can potentially take many hundreds of milliseconds for audio.
Given that, strategy 2 (just start) is safer and more realistic.
Or perhaps wait for many seconds of non-use before attempting to stop audio, and add another short delay after that before attempting any restart.
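A sketch of that delayed-stop idea (the 10-second delay is arbitrary; pendingStop is an assumed dispatch_block_t copy property, and stopAudioSystem is the method from the question):
// Cancel any previously scheduled stop, then schedule a fresh one.
if (self.pendingStop)
    dispatch_block_cancel(self.pendingStop);
__weak typeof(self) weakSelf = self;
self.pendingStop = dispatch_block_create(0, ^{
    [weakSelf stopAudioSystem];
});
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(10.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(),
               self.pendingStop);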

App volume goes quiet on iPad when using GKVoiceChat

In my iOS game, I support push-to-talk using Game Center's GKVoiceChat.
When two iPhones are connected in a multiplayer match, this works as expected: the game's sounds are heard at roughly the same volume as the other player's voice (voice may be a tiny bit louder), and the game's volume is consistent whether or not the other player is using the push-to-talk function.
However, on an iPad, the volume of the game's sounds is drastically reduced; game sounds are played at roughly one quarter the volume of the voice sounds, so quiet that unless you put your ear to the speaker, you're hard pressed to tell that any game sounds are being played at all. (Voice sounds are at full volume.) In comparison, the iPhone's volume is deafening.
Here's how I'm setting up audio:
AVAudioSession* avSession = [AVAudioSession sharedInstance];
NSError *myError = nil;
[avSession setActive:YES error:&myError];
if (myError)
    NSLog(@"Error initializing audio session: %@", myError);

[avSession setCategory:AVAudioSessionCategoryPlayAndRecord
                 error:&myError];
if (myError)
    NSLog(@"Error setting audio session category: %@", myError);

[avSession setMode:AVAudioSessionModeVoiceChat
             error:&myError];
if (myError)
    NSLog(@"Error setting audio session mode: %@", myError);

// By default, AVAudioSessionCategoryPlayAndRecord sends audio output to the phone's earpiece; instead, we want to force it to the speakers
[avSession overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker
                             error:&myError];
if (myError)
    NSLog(@"Error changing audio port to speakers: %@", myError);
Then, later, when a multiplayer match is set up, we set up the voice chat like this:
self.myVoiceChannel = [[self myMatch] voiceChatWithName:@"allPlayers"];
[[self myVoiceChannel] start];
[[self myVoiceChannel] setActive:NO];
self.myVoiceChannel.volume = 1.0;
I've confirmed that commenting out the [[self myVoiceChannel] start] statement is sufficient to restore the iPad volume to the expected levels.
What's surprising is that [[AVAudioSession sharedInstance] mode] never gets set to AVAudioSessionModeGameChat; no matter when I inspect it, it's always AVAudioSessionModeVoiceChat. From the AVAudioSession documentation, it seemed like this would change automatically when I initiated a GKVoiceChat.
Any ideas why the iPad's audio would be mixed so differently from the iPhone?
It looks like this is a known issue since iOS 6.
The only option for working around it is to crank up your game's sounds during voice chats (on the iPad only, obviously).
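A crude sketch of that workaround (the 0.5 base volume and the 4x boost factor are illustrative; tune them to your game, and apply the result to whatever players your game uses):
// Boost game audio on iPad while voice chat is running, to compensate for
// the system mixing game sounds far quieter than on iPhone.
float gameVolume = 0.5f; // your normal game volume
BOOL voiceChatRunning = ([self myVoiceChannel] != nil); // channel started with -start
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad && voiceChatRunning) {
    gameVolume = MIN(1.0f, gameVolume * 4.0f);
}
// ...then assign gameVolume to the game's AVAudioPlayer instances, etc.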

Switching AVAudioSession categories, then retaking control of the Control Center remote controls

This is my first post asking a question, as I don't usually need help, but I can't figure out whether this is even possible. What I need is to switch between the two AVAudioSession configurations below and, when the switch is made from mixing allowed to no mixing, have the app take back control of the remote controls in Control Center:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionMixWithOthers error:nil]
and
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback withOptions:0 error:nil]
I'll try to explain what is occurring:
Both configurations work independently. If I start with the first AVAudioSession config, it allows mixing and correctly hands the remote controls in Control Center over to the iPod app.
And if I start with the second config, the app correctly takes control of the remote controls in Control Center.
The issue occurs when I try to toggle between them: after mixing is turned off, the app doesn't retake control of the remote controls.
Any help would be greatly appreciated
I've found a solution that works for me, which involves calling
[[UIApplication sharedApplication] beginReceivingRemoteControlEvents]
or
[[UIApplication sharedApplication] endReceivingRemoteControlEvents]
before setting the AVAudioSession category options, e.g.:
NSUInteger options = ... // determine your options

// it seems that calls to beginReceivingRemoteControlEvents and endReceivingRemoteControlEvents
// need to be balanced, so we keep track of the current state in _isReceivingRemoteControlEvents
BOOL shouldBeReceivingRemoteControlEvents = (0 == (options & AVAudioSessionCategoryOptionMixWithOthers));
if (_isReceivingRemoteControlEvents != shouldBeReceivingRemoteControlEvents) {
    if (shouldBeReceivingRemoteControlEvents) {
        [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
        _isReceivingRemoteControlEvents = YES;
    } else {
        [[UIApplication sharedApplication] endReceivingRemoteControlEvents];
        _isReceivingRemoteControlEvents = NO;
    }
}

[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback withOptions:options error:&error];
...
[[AVAudioSession sharedInstance] setActive:YES error:&error]
I've been able to achieve consistent results by using a variable to keep track of whether or not the app is currently receiving remote control events, so that I can ensure that calls to (begin/end)ReceivingRemoteControlEvents are balanced. I haven't found any documentation that says you need to do this, but otherwise things don't always behave as expected, particularly since I call this code multiple times throughout the course of the application.
In my implementation, the code above gets called each time the app comes to the foreground and also just before each time I begin playing audio.
I hope this helps.

Erratic behavior when setting the microphone gain with AVAudioSession

I am trying to set the microphone gain with setInputGain in AVAudioSession to handle very weak sounds, but I am only partly successful. I check isInputGainSettable and then try to change the gain with a slider. I check whether the gain actually changes, both by reading back the value and by examining an actual recorded sound. The result is as follows:
The code I am using:
- (void)viewDidLoad
{
    self.audioSession = [AVAudioSession sharedInstance];
    if (self.audioSession.isInputGainSettable) {
        [self.audioSession setActive:YES error:nil];
    }
}

- (IBAction)setGain:(id)sender
{
    float gain = self.gainSlider.value;
    NSError *error;
    BOOL gainset = [self.audioSession setInputGain:gain error:&error];
    if (!gainset) NSLog(@"failed %@", error);
    NSLog(@"audiosession gain: %.2f ", self.audioSession.inputGain);
}
I am not getting any error messages. I have been searching SO and elsewhere; people report problems, but also that they are able to set the gain on iPads and older iPhones. The only "trick" I have seen reported is to "wait a while" before setting the gain, something I have tried without success.
So the question is whether there is something I have missed, and whether I should be able to set the gain on iPads and older iPhones.
I was exhausted as well and searched for almost three hours.
The only thing that worked for me was calling the setGain method in viewDidAppear.
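In other words, move the activation and gain code out of viewDidLoad, something like this (the 0.8 gain is just an example value):
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    self.audioSession = [AVAudioSession sharedInstance];
    NSError *error = nil;
    [self.audioSession setActive:YES error:&error];
    // Setting the gain here, after the view is on screen and the session
    // is fully active, worked where viewDidLoad did not.
    if (self.audioSession.isInputGainSettable) {
        if (![self.audioSession setInputGain:0.8f error:&error])
            NSLog(@"failed %@", error);
    }
}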
