I am trying to set the microphone gain with setInputGain in AVAudioSession to handle very weak sounds, but I am only partly successful. I check isInputGainSettable and then try to change the gain with a slider, verifying whether the gain actually changes both by reading back the value and by listening to an actual recording.
This is the code I am using:
- (void)viewDidLoad
{
    [super viewDidLoad];
    self.audioSession = [AVAudioSession sharedInstance];
    if (self.audioSession.isInputGainSettable) {
        [self.audioSession setActive:YES error:nil];
    }
}
- (IBAction)setGain:(id)sender
{
    float gain = self.gainSlider.value;
    NSError *error = nil;
    BOOL gainSet = [self.audioSession setInputGain:gain error:&error];
    if (!gainSet) NSLog(@"failed %@", error);
    NSLog(@"audiosession gain: %.2f", self.audioSession.inputGain);
}
I am not getting any error messages. Searching SO and elsewhere, I find people both reporting problems and reporting that they are able to set the gain on iPads and older iPhones. The only "trick" I have seen reported is to "wait a while" before setting the gain, which I have tried without success.
So the question is: have I missed something, and should I be able to set the gain on iPads and older iPhones?
I was exhausted as well and searched for almost three hours.
The only thing which worked for me was calling the setGain method in viewDidAppear.
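For reference, a minimal sketch of that approach, reusing the audioSession and gainSlider properties from the question (both are assumptions here):

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    // Setting the gain here, after the session and UI are fully up, is what
    // worked for me; doing the same in viewDidLoad silently had no effect.
    if (self.audioSession.isInputGainSettable) {
        NSError *error = nil;
        if (![self.audioSession setInputGain:self.gainSlider.value error:&error]) {
            NSLog(@"failed to set input gain: %@", error);
        }
    }
}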
Related
I am trying to perform a vibration in an app similar to Snapchat, one that uses both audio output and input and also supports audio mixing from other apps, but this seems to be a harder task than I initially thought. Important to know is that I am not trying to vibrate during playback or recording. From reading all the documentation I could find on the subject, this is what I have come to understand:
In order to support both playback and recording (output and input), I need to use AVAudioSessionCategoryPlayAndRecord.
Making the phone vibrate through AudioServicesPlaySystemSound (kSystemSoundID_Vibrate) is not supported in any of the recording categories, including AVAudioSessionCategoryPlayAndRecord.
Enabling other apps to play audio can be done by adding the option AVAudioSessionCategoryOptionMixWithOthers.
Therefore, I do this in my app delegate:
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionMixWithOthers error:&error];
The possible solutions to doing the vibration that I have tried but failed at are:
Deactivating the shared AVAudioSession before vibrating, and then activate it straight after.
[[AVAudioSession sharedInstance] setActive:NO error:nil];
AudioServicesPlaySystemSound (kSystemSoundID_Vibrate);
[[AVAudioSession sharedInstance] setActive:YES error:nil];
This successfully performs the vibration, but afterwards, when I try to record a movie, the audio is ducked (or something else is making it very quiet). It also gives me an error saying that I am not allowed to deactivate a session without removing its I/O devices first.
Changing category before vibrating, and then changing it back.
[[AVAudioSession sharedInstance] setActive:NO error:nil];
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
[[AVAudioSession sharedInstance] setActive:YES error:nil];
AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);
[[AVAudioSession sharedInstance] setActive:NO error:nil];
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionMixWithOthers error:nil];
[[AVAudioSession sharedInstance] setActive:YES error:nil];
This solution comes up every now and then, but does not seem to work for me. No vibration occurs, even though the categories seem to be set. This might still be a valid solution if I set usesApplicationAudioSession = YES on my AVCaptureSession, but I haven't made it work yet.
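For what it's worth, a minimal sketch of that untested idea; both properties are real AVCaptureSession properties (iOS 7+), and captureSession is assumed to exist:

// Untested sketch: let the capture session share the app's audio session
// instead of configuring a private one.
self.captureSession.automaticallyConfiguresApplicationAudioSession = NO;
self.captureSession.usesApplicationAudioSession = YES;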
Sources:
https://developer.apple.com/library/ios/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/ConfiguringanAudioSession/ConfiguringanAudioSession.html
https://developer.apple.com/library/ios/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/AudioSessionBasics/AudioSessionBasics.html
https://developer.apple.com/library/ios/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/AudioGuidelinesByAppType/AudioGuidelinesByAppType.html#//apple_ref/doc/uid/TP40007875-CH11-SW1
So, I've recently been trying to play a simple beep in my app, and one of the methods of doing so that I stumbled upon is:
AudioServicesPlaySystemSound(SystemSoundID inSystemSoundID);
It's a function that plays a system sound from /System/Library/Audio/UISounds/, and it's supported on all iOS versions.
The sound ID 1350 corresponds to RingerVibeChanged (vibration). So...
AudioServicesPlaySystemSound(1350);
...vibrates for about 0.5 seconds.
In case you're interested in more sounds, here is a link to all the playable sounds and the iOS versions they were added in:
http://iphonedevwiki.net/index.php/AudioServices
You didn't say what your supported device requirements are, but for newer devices, you can use the new taptic engine APIs, such as:
UIImpactFeedbackGenerator
UISelectionFeedbackGenerator
UINotificationFeedbackGenerator
https://developer.apple.com/reference/uikit/uifeedbackgenerator#2555399
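For example, a minimal impact haptic in Objective-C (iOS 10+; it simply does nothing on devices without the required hardware):

// Minimal UIImpactFeedbackGenerator usage.
UIImpactFeedbackGenerator *generator =
    [[UIImpactFeedbackGenerator alloc] initWithStyle:UIImpactFeedbackStyleMedium];
[generator prepare];        // optional: wakes the Taptic Engine to reduce latency
[generator impactOccurred];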
For reference, I'm not using AVCaptureSession but an AVAudioEngine to record something, and in my case I narrowed it down to having to pause the input in order to play the vibration properly with AudioServicesPlaySystemSound during a recording session.
So in my case, I did something like this
audioEngine?.pause()
AudioServicesPlaySystemSound(1519)
do {
    try audioEngine?.start()
} catch let error {
    print("An error occurred starting audio engine. \(error.localizedDescription)")
}
I'm not sure if it would also work with doing something similar for AVCaptureSession (i.e., stopRunning() and startRunning()), but leaving this here in case someone wants to give it a try.
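If someone does want to try that, the untested AVCaptureSession analogue would presumably look something like this (captureSession is an assumption):

// Untested analogue of the pause/vibrate/resume pattern for AVCaptureSession.
[self.captureSession stopRunning];
AudioServicesPlaySystemSound(1519); // same "peek" haptic as above
[self.captureSession startRunning];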
You may try setting the allowHapticsAndSystemSoundsDuringRecording flag on AVAudioSession to true (the flag defaults to false) by calling
try sessionInstance.setAllowHapticsAndSystemSoundsDuringRecording(true)
It worked for me.
Note that by default the system prevents haptics and system sounds (such as the feedback from a UIPickerView or UIDatePicker) from playing, to avoid unexpected noise while recording.
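In Objective-C the equivalent call would be something like this (the method exists since iOS 13; on earlier systems haptics remain suppressed while recording):

NSError *error = nil;
if (@available(iOS 13.0, *)) {
    // Opt in to haptics and system sounds while the session is recording.
    [[AVAudioSession sharedInstance] setAllowHapticsAndSystemSoundsDuringRecording:YES
                                                                             error:&error];
}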
Newer iOS devices like the 6S only support native 48kHz playback. Not really much of a problem, since standard Core Audio graphs resample just fine. The problem is, if you're doing a VOIP type of app with the voice processing unit, you can't set the phone to 44.1kHz; it creates a nice Darth Vader-like experience!
Formerly, I used to check the model of the device and simply say "if it's a 6S or later, then I have to resample 44.1 to 48kHz", and this worked fine. I didn't like this fix, so I tried the following code:
session = [AVAudioSession sharedInstance];
[session setActive:YES error:&nsError];
if (systemSampleRate == 44100) // We may need to resample if it's a phone that only supports 48kHz like the 6S or 6SPlus
{
    [session setCategory:AVAudioSessionCategoryPlayback
             withOptions:0
                   error:&nsError];
    result = [session setPreferredSampleRate:systemSampleRate error:&nsError];
    hardwareSampleRate = [session sampleRate];
    NSLog(@"Phone reports sample rate of %f", hardwareSampleRate);
    if (hardwareSampleRate != (double)systemSampleRate) // We can't set it!!!!
        needsResampling = YES;
    else
    {
        [session setCategory:AVAudioSessionCategoryRecord
                 withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                       error:&nsError];
        result = [session setPreferredSampleRate:systemSampleRate error:&nsError];
        hardwareSampleRate = [session sampleRate];
        if (hardwareSampleRate != (double)systemSampleRate) // We can't set it!!!!
            needsResampling = YES;
        else
            needsResampling = NO;
    }
}
MOST of the time, this works. The 6S devices report 48kHz, and all others report 44.1kHz. BUT, if the phone has been tied to a Bluetooth headset type of system that only supports 8kHz mic audio and 44.1kHz playback, the first hardwareSampleRate value reports 44.1!!!! So I go ahead thinking the device natively supports 44.1, and everything screws up.
So the question is: how do I find out whether the native playback device on iOS physically supports only 48kHz, or can support both 44.1 and 48kHz? Apple's public documentation on this is worthless; it simply chastises people for assuming a device supports both, without telling you how to figure it out.
You really do just have to assume that the sample rate can change. If systemSampleRate is an external requirement, try to set the sample rate to that, and then work with what you get. The catch is that you have to do this check every time your audio render chain starts or is interrupted in case the sample rate changes.
I use two different ways to handle this, both involve tearing down and reinitializing my audio unit chain if the sample rate changes.
One simple way is to make all of my audio units' sample rates the system sample rate (provided by the sampleRate property on an active audio session). I assume that this is the highest quality method, as there is no sample rate conversion.
If I have a sample rate requirement, I will create my chain with my required sample rate, then check whether the system sample rate differs from my requirement. If it does, I will put converter units between the system unit (remote IO) and the ends of my chain.
The bottom line is that the most important information is whether or not the system sample rate is different from your requirement, not whether or not it can change. It's a total pain, and a bunch of audio apps broke when the 6S came out, but it's the right way to handle it moving forward.
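A minimal sketch of that check, where requiredSampleRate and -rebuildAudioChainWithConverters stand in for your own requirement and teardown/reinit routine (both are assumptions, not real API):

AVAudioSession *session = [AVAudioSession sharedInstance];
[session setActive:YES error:nil];
double hardwareRate = session.sampleRate; // only meaningful while the session is active
if (fabs(hardwareRate - requiredSampleRate) > 1.0) {
    // Rebuild the graph with converter units between remote IO and the chain ends.
    [self rebuildAudioChainWithConverters];
}
// Re-run this check on AVAudioSessionRouteChangeNotification and after
// interruptions, since a route change (e.g. Bluetooth) can change the rate.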
To start a call, our VOIP app sets up an AVAudioSession, then builds, initializes and runs an AUGraph.
During the call, we allow the user to switch back and forth between a speakerphone mode using code such as:
avSession = [AVAudioSession sharedInstance];
AVAudioSessionCategoryOptions categoryOptions = [avSession categoryOptions];
categoryOptions |= AVAudioSessionCategoryOptionDefaultToSpeaker;
NSLog(@"AudioService:setSpeaker:setProperty:DefaultToSpeaker=1 categoryOptions = %lx", (unsigned long)categoryOptions);
BOOL success = [avSession setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:categoryOptions error:&error];
Which works just fine. But if we try certain AVAudioSession queries after the AUGraph has been initialized, for example:
AVAudioSessionDataSourceDescription *myInputDataSource = [avSession inputDataSource];
the result is nil. Running the same line of code BEFORE we execute AUGraphInitialize gives the correct non-nil result. Can anyone explain what is going on here, and how to properly access AVAudioSession properties/methods while using an AUGraph?
This is expected behavior per the developer documentation: inputDataSource returns nil if it is not possible to switch input sources. So Apple is not letting anything bad happen via a misconfiguration, but a nil result can also give the wrong idea. Hope this helps.
Discussion
The value of this property is nil if switching between multiple input sources is not currently possible. This feature is supported only on certain devices and peripherals; for example, on an iPhone equipped with both front- and rear-facing microphones.
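In practice that means treating nil as "switching is not available on this route" rather than as an error:

// nil is a documented, valid result: it means input sources can't be switched
// on the current route/device, not that the session is misconfigured.
AVAudioSessionDataSourceDescription *inputSource =
    [[AVAudioSession sharedInstance] inputDataSource];
if (inputSource == nil) {
    NSLog(@"Input data source switching is not available on this route");
}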
In my iOS game, I support push-to-talk using Game Center's GKVoiceChat.
When two iPhones are connected in a multiplayer match, this works as expected: the game's sounds are heard at roughly the same volume as the other player's voice (voice may be a tiny bit louder), and the game's volume is consistent whether or not the other player is using the push-to-talk function.
However, on an iPad, the volume of the game's sounds is drastically reduced; game sounds are played at roughly one quarter the volume of the voice sounds, so quiet that unless you put your ear to the speaker, you're hard pressed to tell that any game sounds are being played at all. (Voice sounds are at full volume.) In comparison, the iPhone's volume is deafening.
Here's how I'm setting up audio:
AVAudioSession *avSession = [AVAudioSession sharedInstance];
NSError *myError = nil;
[avSession setActive:YES error:&myError];
if (myError)
    NSLog(@"Error initializing audio session: %@", myError);

[avSession setCategory:AVAudioSessionCategoryPlayAndRecord
                 error:&myError];
if (myError)
    NSLog(@"Error setting audio session category: %@", myError);

[avSession setMode:AVAudioSessionModeVoiceChat
             error:&myError];
if (myError)
    NSLog(@"Error setting audio session mode: %@", myError);

// By default, AVAudioSessionCategoryPlayAndRecord sends audio output to the phone's earpiece; instead, we want to force it to the speakers
[avSession overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker
                             error:&myError];
if (myError)
    NSLog(@"Error changing audio port to speakers: %@", myError);
Then, later, when a multiplayer match is set up, we set up the voice chat like this:
self.myVoiceChannel = [[self myMatch] voiceChatWithName:@"allPlayers"];
[[self myVoiceChannel] start];
[[self myVoiceChannel] setActive:NO];
self.myVoiceChannel.volume = 1.0;
I've confirmed that commenting out the [[self myVoiceChannel] start] statement is sufficient to restore the iPad volume to the expected levels.
What's surprising is that [[AVAudioSession sharedInstance] mode] never gets set to AVAudioSessionModeGameChat; no matter when I inspect it, it's always AVAudioSessionModeVoiceChat. From the AVAudioSession documentation, it seemed like this would be changed automatically when I initiated a GKVoiceChat.
Any ideas why the iPad's audio would be mixed so differently from the iPhone?
It looks like this is a known issue since iOS 6.
The only workaround is to crank up your game's sounds during voice chats (on the iPad only, obviously).
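A rough sketch of what that could look like; gameAudioPlayer and kIPadChatVolumeBoost are assumptions standing in for your own audio code, and the boost factor has to be tuned by ear:

// Rough sketch: compensate for the iPad's voice-chat mixing by boosting game
// audio while a chat is active. Not a real fix, just a volume workaround.
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad && self.myVoiceChannel != nil) {
    self.gameAudioPlayer.volume = MIN(1.0f, self.gameAudioPlayer.volume * kIPadChatVolumeBoost);
}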
This is my first post asking a question, as I never usually need help, but I can't figure out if this is even possible. What I need is to switch between these two AVAudioSession category configurations, and, when the switch is made from mixing allowed to no mixing, to have the app take back control of the remote controls in Control Center.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionMixWithOthers error:nil]
and
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback withOptions:0 error:nil]
I'll try to explain what is occurring:
Both work independently: if I start with the first AVAudioSession config, it allows mixing and correctly hands the remote controls in Control Center over to the iPod app.
And if I start with the second AVAudioSession config, the app correctly takes control of the remote controls in Control Center.
The issue occurs when I try to toggle between these options. When I toggle, the app doesn't retake control of the remote controls after mixing is turned off.
Any help would be greatly appreciated
I've found a solution that works for me, which involves calling
[[UIApplication sharedApplication] beginReceivingRemoteControlEvents]
or
[[UIApplication sharedApplication] endReceivingRemoteControlEvents]
before setting AVAudioSession category options, e.g.:
NSUInteger options = ... // determine your options

// It seems that calls to beginReceivingRemoteControlEvents and endReceivingRemoteControlEvents
// need to be balanced, so we keep track of the current state in _isReceivingRemoteControlEvents.
BOOL shouldBeReceivingRemoteControlEvents = (0 == (options & AVAudioSessionCategoryOptionMixWithOthers));
if (_isReceivingRemoteControlEvents != shouldBeReceivingRemoteControlEvents) {
    if (shouldBeReceivingRemoteControlEvents) {
        [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
        _isReceivingRemoteControlEvents = YES;
    } else {
        [[UIApplication sharedApplication] endReceivingRemoteControlEvents];
        _isReceivingRemoteControlEvents = NO;
    }
}

[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback withOptions:options error:&error];
...
[[AVAudioSession sharedInstance] setActive:YES error:&error];
I've been able to achieve consistent results by using a variable to keep track of whether or not the app is currently receiving remote control events so that I can ensure that calls to (begin/end)ReceivingRemoteControlEvents are balanced. I haven't found any documentation that says that you need to do this but otherwise things don't always seem to behave as expected, particularly since I call this code multiple times throughout the course of the application.
In my implementation, the code above gets called each time the app comes to the foreground and also just before each time I begin playing audio.
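For example, the foreground hook can be wired up like this, assuming the code above lives in a hypothetical -configureAudioSession method:

// Re-run the category/remote-control configuration whenever the app returns to
// the foreground. -configureAudioSession is a hypothetical wrapper for the code above.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(configureAudioSession)
                                             name:UIApplicationWillEnterForegroundNotification
                                           object:nil];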
I hope this helps.