How to control hardware mic input gain/level on iPhone?

My audio-analysis function responds better on the iPad (2) than the iPhone (4). It seems sensitive to softer sounds on the iPad, whereas the iPhone requires much louder input to respond properly. Whether this is because of mic placement, different components, different software configurations or some other factor, I'd like to be able to control for it in my app.
Obviously I could just multiply all of my audio samples to apply gain in software (a sketch of that approach is below), but that has a CPU cost of its own, so:
Is it possible to control the mic's gain from software in iOS, similarly to how it is on macOS? I can't find any documentation on this, but I'm hoping I'm just missing it somehow.
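For reference, the brute-force software approach is just a per-sample multiply with clipping; a minimal sketch for 16-bit PCM, with all the names hypothetical:

// Apply a linear gain to a buffer of signed 16-bit PCM samples,
// clamping to the valid range to avoid wrap-around distortion.
static void ApplyGain(SInt16 *samples, size_t sampleCount, float gain)
{
    for (size_t i = 0; i < sampleCount; i++) {
        SInt32 scaled = (SInt32)(samples[i] * gain);
        if (scaled > 32767)  scaled = 32767;
        if (scaled < -32768) scaled = -32768;
        samples[i] = (SInt16)scaled;
    }
}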

On iOS 6+ you can use AVAudioSession properties:
CGFloat gain = sender.value;
NSError *error = nil;
self.audioSession = [AVAudioSession sharedInstance];
if (self.audioSession.isInputGainSettable) {
    BOOL success = [self.audioSession setInputGain:gain
                                             error:&error];
    if (!success) {
        // error handling
    }
} else {
    NSLog(@"iOS 6 - cannot set input gain");
}
On iOS 5 you can get/set the input gain using the C-based AudioSession functions:
UInt32 ui32propSize = sizeof(UInt32);
UInt32 f32propSize = sizeof(Float32);
UInt32 inputGainAvailable = 0;
Float32 inputGain = sender.value;

OSStatus err = AudioSessionGetProperty(kAudioSessionProperty_InputGainAvailable,
                                       &ui32propSize,
                                       &inputGainAvailable);
if (inputGainAvailable) {
    err = AudioSessionSetProperty(kAudioSessionProperty_InputGainScalar,
                                  sizeof(inputGain),
                                  &inputGain);
} else {
    NSLog(@"iOS 5 - cannot set input gain");
}
err = AudioSessionGetProperty(kAudioSessionProperty_InputGainScalar,
                              &f32propSize,
                              &inputGain);
NSLog(@"inputGain: %0.2f", inputGain);
(error handling omitted)
As you are interested in controlling input gain, you may also want to disable automatic gain control by setting the audio session mode to AVAudioSessionModeMeasurement (iOS 5 and 6):
[self.audioSession setMode:AVAudioSessionModeMeasurement error:nil];
NSLog(@"mode: %@", self.audioSession.mode);
These settings are fairly hardware-specific, so availability cannot be assumed. For example, I can alter the gain on an iPhone 3GS/iOS 6 and an iPhone 4S/iOS 5.1, but not on an iPad mini/iOS 6.1. I can disable AGC on the iPhone 3G and the iPad mini, but not the iPhone 4S.

I think this can help you: http://www.stefanpopp.de/2011/capture-iphone-microphone/

Related

Why is playback whisper-quiet on iPhone using NuGet package Plugin.AudioRecorder?

Using the Plugin.AudioRecorder NuGet in Xamarin Forms / iOS, I am able to record audio on an iPhone 8, but on playback it is whisper quiet. How do I increase the playback volume?
In AppDelegate.cs I have:
AudioPlayer.RequestAVAudioSessionCategory (AVAudioSessionCategory.PlayAndRecord);
By default, playback was via the phone's upper (earpiece) speaker. Directing output to the lower speaker solves the problem.
In AppDelegate.cs, add:
AudioPlayer.OnPrepareAudioSession = x =>
{
    // Route audio to the lower speaker rather than the upper speaker so sound volume is not minimal
    x.OverrideOutputAudioPort(AVAudioSessionPortOverride.Speaker, out NSError error);
};
AudioPlayer.OnPrepareAudioSession was not getting called for me. An alternative solution to direct audio output to the lower speaker:
var audioSession = AVAudioSession.SharedInstance();
var success = audioSession.SetCategory(AVAudioSession.CategoryPlayAndRecord, out var error);
if (success)
{
    success = audioSession.OverrideOutputAudioPort(AVAudioSessionPortOverride.Speaker, out error);
    if (success)
        audioSession.SetActive(true, out error);
}
I ran this code in my AppDelegate.FinishedLaunching() override.

iOS app audio stops when screen auto-locks after upgrading app to Xcode 8

I have an app that has been published on the iTunes App Store, and it has background mode enabled for audio.
After updating to Xcode 8, I published an update for my app, after which I've found that the app stops playing whenever the screen locks. I had not otherwise made any changes to background play. I'm not sure if the behavior or coding requirements changed for iOS 9+.
Here's what my code does:
App plist file:
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
    <string>remote-notification</string>
</array>
AudioController.m
- (void)setBackgroundPlay:(bool)backgroundPlay
{
    NSLog(@"setBackgroundPlay %d", backgroundPlay);
    AVAudioSession *mySession = [AVAudioSession sharedInstance];
    NSError *audioSessionError = nil;
    if (backgroundPlay) {
        // Assign the Playback category to the audio session.
        [mySession setCategory:AVAudioSessionCategoryPlayback
                         error:&audioSessionError];
        OSStatus propertySetError = 0;
        UInt32 allowMixing = true;
        propertySetError = AudioSessionSetProperty(
            kAudioSessionProperty_OverrideCategoryMixWithOthers, // property ID
            sizeof(allowMixing),                                 // data size
            &allowMixing                                         // data
        );
        if (propertySetError != 0) {
            NSLog(@"Error setting audio property MixWithOthers");
        }
    } else {
        // Assign the Playback category to the audio session.
        [mySession setCategory:AVAudioSessionCategoryPlayback
                         error:&audioSessionError];
    }
    if (audioSessionError != nil) {
        NSLog(@"Error setting audio session category.");
    }
}
The audio does continue playing when I minimize the app, and it continues playing until the screen auto-locks. Whenever the screen turns on (like when a notification is received), audio resumes, and then shuts off when the screen goes black.
As mentioned, this used to work, and the behavior seems to have changed after updating to Xcode 8/iOS 9.
I've tried searching the forum and other places for people experiencing similar issues, but haven't been able to locate anything.
Any suggestions, or a fresh pair of eyes looking at this would be appreciated!
Thanks,
Sridhar
Ok, I found the problem! Everything was ok with regard to how I had set up background audio.
The key giveaway was looking at the console of the device when the screen lock had turned on:
Jan 17 11:03:59 My-iPad Talanome[1179] : kAudioUnitErr_TooManyFramesToProcess : inFramesToProcess=4096, mMaxFramesPerSlice=1156
A little searching led me to this technical note: https://developer.apple.com/library/content/qa/qa1606/_index.html
The key is this:
// set the mixer unit to handle 4096 samples per slice since we want to keep rendering during screen lock
UInt32 maxFPS = 4096;
AudioUnitSetProperty(mMixer, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFPS, sizeof(maxFPS));
I had not set maxFramesPerSlice, so it was defaulting to 1156, which is too small for when auto-lock is on (the system then renders 4096 frames per slice). Setting maxFramesPerSlice to 4096 in my audio initialization ensured that I have enough headroom for when the screen locks.
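If your graph contains more than one audio unit, the same property can be applied to each of them during setup; a sketch, with someUnit standing in for whichever unit you are configuring:

// 4096 frames covers the larger slice size the system uses during screen lock.
UInt32 maxFPS = 4096;
OSStatus err = AudioUnitSetProperty(someUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                    kAudioUnitScope_Global, 0, &maxFPS, sizeof(maxFPS));
if (err != noErr) {
    NSLog(@"Could not set MaximumFramesPerSlice: %d", (int)err);
}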
Hope this helps others who may face similar issues!
-Sridhar

How to set pan in the iOS Audio Unit framework

Hello Stack Overflow users,
I want to change the pan position using a UISlider in my iOS application.
I am upgrading the whole app, which currently uses Matt Gallagher's AudioStreamer.
To change the pan value in AudioStreamer I used the code below:
AudioQueueRef audioQueue; // defined in AudioStreamer.h
- (void)changePan:(float)newPan
{
    OSStatus panErr = AudioQueueSetParameter(audioQueue, kAudioQueueParam_Pan, newPan);
    NSLog(@"setting pan: %d", (int)panErr);
    if (panErr)
        NSLog(@"Error setting pan: %d", (int)panErr);
}
I am replacing AudioStreamer with StreamingKit, which uses Audio Units.
I would appreciate some help getting the same thing done using StreamingKit or the Audio Unit API.
P.S. Let me know if anyone needs more info.
Thanks
Using the Audio Unit API, you can simply set the kMultiChannelMixerParam_Pan parameter of the mixer audio unit to set the stereo pan:
AudioUnitParameterValue panValue = 0.9; // panned almost dead-right; possible values are between -1 and 1
OSStatus result = AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Pan,
                                        kAudioUnitScope_Input, 0, panValue, 0);
if (result == noErr)
{
    NSLog(@"success");
}
You may also need to retrieve the internal mixerUnit instance variable from inside STKAudioPlayer. You can try [audioPlayer valueForKey:@"_mixerUnit"] for that, or implement a getter yourself inside StreamingKit's files.
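For the UISlider part of the question, here is a sketch of a slider action, assuming mixerUnit has already been retrieved as above and the slider's range is configured as -1.0 to 1.0 (the action name is my own):

- (IBAction)panSliderChanged:(UISlider *)sender
{
    // Map the slider value directly onto the pan parameter (-1 = hard left, 1 = hard right).
    AudioUnitParameterValue panValue = sender.value;
    OSStatus result = AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Pan,
                                            kAudioUnitScope_Input, 0, panValue, 0);
    if (result != noErr) {
        NSLog(@"Error setting pan: %d", (int)result);
    }
}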

Two-channel recording on the iPhone/iPad: headset + built-in mic

For an app, we have a requirement to record from two different audio sources. One mic is a special (throat) mic and it comes with the same connector that the iPhone headset with mic uses.
On a second channel, we would like to record the ambient sounds and the best thing would be if we could just record from the iPhone's/iPad's built-in mic at the same time as we record from the throat mic headset.
Is there any way this is possible? Any other tips?
The OS currently only allows an app to connect to one audio source route at a time. The only way to record two channels on a stock iOS device is to use an Apple USB-to-Lightning adapter (or the Camera Connection Kit on older models) with a standard USB stereo ADC, or an audio mixing panel that has multiple mic inputs.
I found a Q&A in the Apple library about how to choose a data source from the different microphone ports; maybe it will be helpful:
https://developer.apple.com/library/ios/qa/qa1799/_index.html
iOS 7 offers developers more flexibility in terms of selecting specific built-in microphones.
Using APIs introduced in iOS 7, developers can perform tasks such as locating a port description that represents the built-in microphone, locating specific microphones like the "front", "back" or "bottom", setting your choice of microphone as the preferred data source, setting the built-in microphone port as the preferred input and even selecting a preferred microphone polar pattern if the hardware supports it. See AVAudioSession.h.
Listing 1 demonstrates how applications can find the AVAudioSessionPortDescription that represents the built-in microphone, locate the front microphone (on iPhone 5 or another device that has a front facing microphone), set the front microphone as the preferred data source and set the built-in microphone port as the preferred input.
Listing 1  Demonstrate Input Selection.
#import <AVFoundation/AVAudioSession.h>

- (void)demonstrateInputSelection
{
    NSError *theError = nil;
    BOOL result = YES;

    AVAudioSession *myAudioSession = [AVAudioSession sharedInstance];

    result = [myAudioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&theError];
    if (!result)
    {
        NSLog(@"setCategory failed");
    }

    result = [myAudioSession setActive:YES error:&theError];
    if (!result)
    {
        NSLog(@"setActive failed");
    }

    // Get the set of available inputs. If there are no audio accessories attached, there will be
    // only one available input -- the built-in microphone.
    NSArray *inputs = [myAudioSession availableInputs];

    // Locate the Port corresponding to the built-in microphone.
    AVAudioSessionPortDescription *builtInMicPort = nil;
    for (AVAudioSessionPortDescription *port in inputs)
    {
        if ([port.portType isEqualToString:AVAudioSessionPortBuiltInMic])
        {
            builtInMicPort = port;
            break;
        }
    }

    // Print out a description of the data sources for the built-in microphone.
    NSLog(@"There are %u data sources for port \"%@\"", (unsigned)[builtInMicPort.dataSources count], builtInMicPort);
    NSLog(@"%@", builtInMicPort.dataSources);

    // Loop over the built-in mic's data sources and attempt to locate the front microphone.
    AVAudioSessionDataSourceDescription *frontDataSource = nil;
    for (AVAudioSessionDataSourceDescription *source in builtInMicPort.dataSources)
    {
        if ([source.orientation isEqual:AVAudioSessionOrientationFront])
        {
            frontDataSource = source;
            break;
        }
    } // end data source iteration

    if (frontDataSource)
    {
        NSLog(@"Currently selected source is \"%@\" for port \"%@\"", builtInMicPort.selectedDataSource.dataSourceName, builtInMicPort.portName);
        NSLog(@"Attempting to select source \"%@\" on port \"%@\"", frontDataSource, builtInMicPort.portName);

        // Set a preference for the front data source.
        theError = nil;
        result = [builtInMicPort setPreferredDataSource:frontDataSource error:&theError];
        if (!result)
        {
            // an error occurred. Handle it!
            NSLog(@"setPreferredDataSource failed");
        }
    }

    // Make sure the built-in mic is selected for input. This will be a no-op if the built-in mic is
    // already the current input port.
    theError = nil;
    result = [myAudioSession setPreferredInput:builtInMicPort error:&theError];
    if (!result)
    {
        // an error occurred. Handle it!
        NSLog(@"setPreferredInput failed");
    }
}
Listing 1 will produce the following console output when run on an iPhone 5:
There are 3 data sources for port :"<AVAudioSessionPortDescription: 0x14d935a0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
(
"<AVAudioSessionDataSourceDescription: 0x14d93800, ID = 1835216945; name = Bottom>",
"<AVAudioSessionDataSourceDescription: 0x14d938d0, ID = 1835216946; name = Front>",
"<AVAudioSessionDataSourceDescription: 0x14d93a10, ID = 1835216947; name = Back>"
)
Currently selected source is "Bottom" for port "iPhone Microphone"
Attempting to select source "<AVAudioSessionDataSourceDescription: 0x14d938d0, ID = 1835216946; name = Front>" on port "iPhone Microphone"
UPDATE 14 Nov
Using the code above I can set a specific built-in mic on the iPhone to record sound; now I am trying to switch between the specific mics frequently to simulate a stereo recording.
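Switching mics repeatedly is just a matter of calling setPreferredDataSource:error: again with a different source. A sketch of a helper built from the listing above (the function name is my own):

// Hypothetical helper: prefer the built-in mic data source with the given
// orientation, e.g. AVAudioSessionOrientationFront or AVAudioSessionOrientationBottom.
static BOOL PreferBuiltInMicOrientation(NSString *orientation)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    for (AVAudioSessionPortDescription *port in [session availableInputs]) {
        if (![port.portType isEqualToString:AVAudioSessionPortBuiltInMic]) continue;
        for (AVAudioSessionDataSourceDescription *source in port.dataSources) {
            if ([source.orientation isEqual:orientation]) {
                NSError *error = nil;
                // Returns NO if the data source could not be selected.
                return [port setPreferredDataSource:source error:&error];
            }
        }
    }
    return NO; // no built-in mic with that orientation
}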

iOS VoIP application | AudioQueue | AVAudioSession category

In my iOS application I am using AudioQueue for audio recording and playback; basically I have an OS X version running and I am porting it to iOS.
I realize that on iOS I need to configure/set the AVAudioSession, and I have done the following so far:
- (void)initAudioSession
{
    // get your app's audioSession singleton object
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // error handling
    BOOL success;
    NSError *error;

    // set the audioSession category.
    // Needs to be Record or PlayAndRecord to use audioRouteOverride:
    success = [session setCategory:AVAudioSessionCategoryPlayAndRecord
                             error:&error];
    if (!success) NSLog(@"AVAudioSession error setting category: %@", error);

    // set the audioSession override
    success = [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker
                                         error:&error];
    if (!success) NSLog(@"AVAudioSession error overrideOutputAudioPort: %@", error);

    // activate the audio session
    success = [session setActive:YES error:&error];
    if (!success) NSLog(@"AVAudioSession error activating: %@", error);
    else NSLog(@"audioSession active");
}
Now what is happening is that the speaker AudioQueue's callback never gets called. I checked many answers and comments on SO, Google, etc., and my setup looks correct. The way I did it is:
Create AudioQueues for input and output: linear PCM, 16000 Hz sample rate
Allocate buffers
Set up each queue with a valid callback
Start the queues
It seems to be fine; I can hear the output on the other end (i.e. the input AudioQueue is working), but the output AudioQueue's callback (AudioQueueOutputCallback) never gets called.
I suspect I need to set the proper AVAudioSession category; I have tried all the possible options but have not been able to hear anything from the speaker.
I compared my implementation with Apple's SpeakHere example, which runs the AudioQueue on the main thread.
Even if I don't start the input (mic) AudioQueue I see the same behavior, and it is difficult to reproduce SpeakHere's behavior, i.e. stop recording and then play.
Thanks for looking at it; I am expecting your comments/help and will be able to share code snippets.
Thanks for looking at it. I realized the problem; this is my callback:
void AudioStream::AQBufferCallback(void *inUserData,
                                   AudioQueueRef inAQ,
                                   AudioQueueBufferRef inCompleteAQBuffer)
{
    AudioStream *THIS = (AudioStream *)inUserData;
    if (THIS->mIsDone) {
        return;
    }
    if (!THIS->IsRunning()) {
        NSLog(@"AudioQueue is not running");
        return; // <-- the error: returning here means the buffer is never enqueued
    }
    int bytes = THIS->bufferByteSize;
    if (!THIS->pSingleBuffer) {
        THIS->pSingleBuffer = new unsigned char[bytes];
    }
    unsigned char *buffer = THIS->pSingleBuffer;
    if ((THIS->mNumPacketsToRead) > 0) {
        /* let's read only the first packet */
        memset(buffer, 0x00, bytes);
        float volume = THIS->volume();
        if (THIS->volumeChange) {
            SInt16 *editBuffer = (SInt16 *)buffer;
            // loop over every sample (note: this was sizeof(buffer) / 2, which is
            // the size of the pointer, not of the buffer; bytes / 2 is intended)
            for (int nb = 0; nb < (bytes / 2); nb++) {
                // we check whether the gain has been modified, to save resources
                if (volume != 0) {
                    // we need more accuracy in our calculation, so we calculate with doubles
                    double gainSample = ((double)editBuffer[nb]) / 32767.0;
                    /*
                     at this point we multiply by our gain factor.
                     we don't use addition, to prevent generating sound where there is none:
                     no noise
                     0*10=0
                     noise if zero
                     0+10=10
                     */
                    gainSample *= volume;
                    /*
                     our signal range can't go beyond -1.0/1.0;
                     we prevent the signal from going outside that range
                     */
                    gainSample = (gainSample < -1.0) ? -1.0 : (gainSample > 1.0) ? 1.0 : gainSample;
                    /*
                     This is a little helper to shape the incoming wave.
                     The sound gets pretty warm and the noise is reduced a lot.
                     Feel free to comment this line out and back in.
                     You can see what happens at http://silentmatt.com/javascript-function-plotter/
                     Copy this to the command line and hit enter: plot y=(1.5*x)-0.5*x*x*x
                     */
                    gainSample = (1.5 * gainSample) - 0.5 * gainSample * gainSample * gainSample;
                    // scale the new signal back to the short range
                    gainSample = gainSample * 32767.0;
                    // write the calculated sample back to the buffer
                    editBuffer[nb] = (SInt16)gainSample;
                }
            }
        }
        else {
            // NSLog(@"No change in the volume");
        }
        memcpy(inCompleteAQBuffer->mAudioData, buffer, 640);
        inCompleteAQBuffer->mAudioDataByteSize = 640;
        inCompleteAQBuffer->mPacketDescriptionCount = 320;
        show_err(AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL));
    }
}
As I was not enqueuing the buffers when they were allocated, and I believe the queue needs a few buffers enqueued before it gets started, removing the return statement solved my problem.
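For context, an output AudioQueue only invokes its callback to refill buffers it already owns, so the usual pattern is to prime the queue with a few (possibly silent) buffers before starting it. A sketch under assumed names (queue, kNumberBuffers, and kBufferByteSize are mine):

// Prime the output queue with silent buffers before starting it; the queue
// then invokes the callback each time it finishes playing one of them.
enum { kNumberBuffers = 3, kBufferByteSize = 640 };
for (int i = 0; i < kNumberBuffers; ++i) {
    AudioQueueBufferRef qBuffer = NULL;
    if (AudioQueueAllocateBuffer(queue, kBufferByteSize, &qBuffer) != noErr) break;
    memset(qBuffer->mAudioData, 0, kBufferByteSize);
    qBuffer->mAudioDataByteSize = kBufferByteSize;
    AudioQueueEnqueueBuffer(queue, qBuffer, 0, NULL);
}
AudioQueueStart(queue, NULL);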
