AVAudioSession availableInputs returning nil with and without external microphone (iOS)

I am using AVAudioSession to detect whether an external mic is attached to the device I'm using (an iPad 2 in this case). However, the call below returns nil both when I have an external mic attached and when I don't.
NSArray *availableInputs = [[AVAudioSession sharedInstance] availableInputs];
AVAudioSessionPortDescription *port = [availableInputs objectAtIndex:0];
I would have expected that with no external mic attached this would return a list containing the internal mic alone, and that with the external microphone attached it would return a list containing both the internal and the external mic.
This thread indicates that it should be returning that sort of result (with an error in that case, but that seems irrelevant), so I'm confused as to why I'm not getting the correct output. Perhaps there's a flag that needs to be set to show that I'm using multi-route audio.
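Is there some setup I'm missing? For instance, does the session need an input-capable category and to be activated before the query? A minimal sketch of that idea:
NSError *error = nil; // requires <AVFoundation/AVFoundation.h>
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]; // input-capable category
[session setActive:YES error:&error];
NSLog(@"available inputs: %@", [session availableInputs]);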
Any help would be appreciated.

Related

AVAudioEngine recording music from external microphone

I have set up a simple graph using AVAudioEngine that simply takes the default input node's data and plays it through the headphones (audio monitoring). The headphones should replicate whatever the microphone hears, and they do: background noise is redirected right into your ears when the app runs. However, there is one problem: it always takes the built-in mic's input, even if an external mic is plugged into the iPad.
AVAudioSession tells me that the input should be using the external microphone (through [[AVAudioSession sharedInstance] currentRoute]), and if I record audio with AVAudioRecorder, it does use that input. AVAudioEngine, however, sticks to the built-in mic. Am I doing something wrong? Is there a setting I missed?
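(For reference, the currentRoute check mentioned above looks roughly like this sketch:)
AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
for (AVAudioSessionPortDescription *input in route.inputs) {
    NSLog(@"current input: %@ (type %@)", input.portName, input.portType);
}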
Try setting the preferred input to the external mic:
//get all available inputs
var listOfInputs = AVAudioSession.sharedInstance().availableInputs
println(listOfInputs)
//pick the one you want (change the index)
var availableInput: AVAudioSessionPortDescription = listOfInputs[0] as AVAudioSessionPortDescription
//set the preferred input
AVAudioSession.sharedInstance().setPreferredInput(availableInput, error: nil)
Careful though: this is without error handling, for simplicity's sake. You will want to offer a default option if your external mic is unplugged or not available.
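For example, a rough Objective-C sketch with error handling and a fallback (treating the headset-mic port type as "external" is an assumption; adjust for your setup):
NSError *inputErr = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Fall back to the first input (typically the built-in mic) if no external mic is found.
AVAudioSessionPortDescription *preferred = session.availableInputs.firstObject;
for (AVAudioSessionPortDescription *port in session.availableInputs) {
    if ([port.portType isEqualToString:AVAudioSessionPortHeadsetMic]) {
        preferred = port;
        break;
    }
}
if (preferred && ![session setPreferredInput:preferred error:&inputErr]) {
    NSLog(@"setPreferredInput failed: %@", inputErr);
}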

AVAudioSessionManager availableInputs "Unknown selected data source for port iPhone Microphone"

I've noticed this error in my console log for a while. Though it does not affect the execution of my application, I find it really annoying, so I started to trace where it comes from. It turns out it appears when I call availableInputs:
NSArray *inputs = [[AVAudioSession sharedInstance] availableInputs];
It gives me the log message:
ERROR: [0x3d61318c] AVAudioSessionPortImpl.mm:50: ValidateRequiredFields: Unknown selected data source for Port iPhone Microphone (type: MicrophoneBuiltIn)
I tried printing out the inputs:
Printing description of inputs:
<__NSArrayI 0x188c4610>(
<AVAudioSessionPortDescription: 0x188c4580, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = (null)>,
<AVAudioSessionPortDescription: 0x18835d90, type = BluetoothHFP; name = Valore-BTi22; UID = 00:23:01:10:38:77-tsco; selectedDataSource = (null)>
So selectedDataSource is (null). I don't know what I should do to make it not null. The iPhone Microphone is a built-in input... I'd assume it's set by Apple already?
This problem doesn't seem to happen just to me... I'll share my understanding here.
My situation: I'm using the pjsip library, which has lower-level control of audio resources. I've noticed that the sound device has been closed before I call [[AVAudioSession sharedInstance] availableInputs];
Thus (I guess) AVAudioSession, as a higher-level control, couldn't find a corresponding audio data source for its input, as the error indicates...
To investigate the problem further, check wherever your code modifies the audio source, and make sure the audio source is activated before you call AVAudioSession.
I can only go this far for now... A deeper understanding and better explanation of audio control is always appreciated!
Regarding the error in your console, I can also confirm that I sometimes receive this message on my iPhone 5S, but I've never seen it on my 4S. It could just be some Core Audio dump, but it doesn't seem to affect actual performance (at least for me).
Regarding the available inputs, what you're actually printing out is the available input ports and their descriptions. This bit is more confusing, and I don't understand why the selectedDataSource field is null for each one.
I will say that the iPhone is definitely defaulting to one of those sources (probably the built-in mic) regardless of what selectedDataSource says.
Now if you wanted to explicitly select one of the port descriptions you could do something like this:
NSArray *availableInputs = [[AVAudioSession sharedInstance] availableInputs];
AVAudioSessionPortDescription *port = [availableInputs objectAtIndex:0]; //built in mic for your case
NSError *portErr = nil;
[[AVAudioSession sharedInstance] setPreferredInput:port error:&portErr];
and I would check portErr afterwards to make sure there's no error in setting the preferredInput.
It's worth noting that you can also cycle through the available data sources for a particular port description and select one using:
[port setPreferredDataSource:source error:&sourceErr];
then follow that with:
[[AVAudioSession sharedInstance] setPreferredInput:port error:&portErr];
These are some handy iOS 7-only features that take advantage of hardware with multiple built-in mics.
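Putting the two calls together, a sketch that walks a port's data sources and prefers one by name (the "Front" name is an assumption; inspect the port's dataSources on your hardware first):
NSError *sourceErr = nil, *portErr = nil;
AVAudioSessionPortDescription *port = [[[AVAudioSession sharedInstance] availableInputs] firstObject];
for (AVAudioSessionDataSourceDescription *source in port.dataSources) {
    if ([source.dataSourceName isEqualToString:@"Front"]) { // hypothetical name
        [port setPreferredDataSource:source error:&sourceErr];
        break;
    }
}
[[AVAudioSession sharedInstance] setPreferredInput:port error:&portErr];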

stereo recording on iPhone

The iPhone 5 has three microphones - top front, top back, and bottom. I would like to record on all of them at the same time to do some signal processing. I've tried for several days unsuccessfully.
Using AVAudioSession, I can see the microphones:
NSLog(@"%@", [AVAudioSession sharedInstance].availableInputs);
"<AVAudioSessionPortDescription: 0x14554400, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Back>"
NSLog(@"%@", [AVAudioSession sharedInstance].availableInputs[0].inputDataSources);
"<AVAudioSessionDataSourceDescription: 0x145afb00, ID = 1835216945; name = Bottom>",
"<AVAudioSessionDataSourceDescription: 0x145b1870, ID = 1835216946; name = Front>",
"<AVAudioSessionDataSourceDescription: 0x145b3650, ID = 1835216947; name = Back>"
I can use AVAudioSessionPortDescription -setPreferredDataSource:error: to record from one of the three. But I cannot record on more than one simultaneously. If I set the number of input channels to 2, I get two identical tracks from the same microphone.
AVAudioRecorder has a property channelAssignments which seems like it should work, but AVAudioSession inputNumberOfChannels and maximumInputNumberOfChannels are both 1. The property channelAssignments is designed for auxiliary microphones which have multiple channels.
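(For context, channelAssignments would be set from a port's channels array, roughly like this sketch, where recorder is an AVAudioRecorder; it only helps when the port actually reports more than one channel:)
AVAudioSessionPortDescription *port = [[[AVAudioSession sharedInstance] availableInputs] firstObject];
if (port.channels.count > 1) {
    // Only meaningful for e.g. a multichannel USB interface;
    // the built-in mic port reports a single channel.
    recorder.channelAssignments = port.channels;
}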
I tried using the low-level AudioUnit, but I get the same result. I could not find any properties on AudioUnit to change the input source.
Any help would be appreciated.
My understanding, after all my research trying to do the same thing, is just what you've described - you can't prefer multiple data sources for the one device, thus you can't record from multiple built-in mics at once. If anyone can prove me wrong, I'd VERY much love to hear it!
Sidenote: I can't seem to run your code. As written, I get
Property availableInputs not found on object of type 'id'
Even after massaging what you've got into a format that doesn't require any explicit casts:
NSLog(@"%@", [[[AVAudioSession sharedInstance] availableInputs][0] inputDataSources]);
I get SIGABRT:
-[AVAudioSessionPortDescription inputDataSources]: unrecognized selector sent to instance 0xd59dbe0
What SDK are you using that makes your code compile, much less run?
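(One guess: on AVAudioSessionPortDescription the property is dataSources; inputDataSources lives on AVAudioSession itself, which would explain the unrecognized selector. Something like this should compile and run on iOS 7:)
NSArray *inputs = [[AVAudioSession sharedInstance] availableInputs];
AVAudioSessionPortDescription *port = inputs.firstObject;
NSLog(@"%@", port.dataSources); // per-port data sources (iOS 7+)
NSLog(@"%@", [AVAudioSession sharedInstance].inputDataSources); // data sources for the current input route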

Play audio through upper (phone call) speaker

I'm trying to get audio in my app to play through the upper speaker on the iPhone, the one you press to your ear during a phone call. I know it's possible, because I've played a game from the App Store ("The Heist" by "tap tap tap") that simulates phone calls and does exactly that.
I've done a lot of research online, but I'm having a surprisingly hard time finding ANYONE who has even discussed the possibility. The overwhelming majority of posts seem to be about the handsfree speaker vs. plugged-in earphones (like this and this and this), rather than the upper "phone call" speaker vs. the handsfree speaker. (Part of the problem might be not having a good name for it: "phone speaker" often means the handsfree speaker at the bottom of the device, etc., so it's hard to do a really well-targeted search.) I've looked into Apple's Audio Session Category Route Overrides, but those again seem to (correct me if I'm wrong) deal only with the handsfree speaker at the bottom, not the speaker at the top of the phone.
I have found ONE post that seems to be about this: link. It even provides a bunch of code, so I thought I was home free, but now I can't seem to get the code to work. For simplicity I just copied the DisableSpeakerPhone method (which, if I understand it correctly, should be the one that re-routes audio to the upper speaker) into my viewDidLoad to see if it would work, but the first assert line fails and the audio continues to play out of the bottom. (I also imported the AudioToolbox framework, as suggested in the comment, so that isn't the problem.)
Here is the main block of code I'm working with (this is what I copied into my viewDidLoad to test), although there are a few more methods in the article I linked to:
void DisableSpeakerPhone() {
    UInt32 dataSize = sizeof(CFStringRef);
    CFStringRef currentRoute = NULL;
    OSStatus result = noErr;
    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &dataSize, &currentRoute);

    // Set the category to use the speakers and microphone.
    UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
    result = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                     sizeof(sessionCategory),
                                     &sessionCategory);
    assert(result == kAudioSessionNoError);

    Float64 sampleRate = 44100.0;
    dataSize = sizeof(sampleRate);
    result = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate,
                                     dataSize,
                                     &sampleRate);
    assert(result == kAudioSessionNoError);

    // Default to speakerphone if a headset isn't plugged in.
    // Overriding the output audio route.
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
    dataSize = sizeof(audioRouteOverride);
    result = AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                                     dataSize,
                                     &audioRouteOverride);
    assert(result == kAudioSessionNoError);

    AudioSessionSetActive(YES);
}
So my question is this: can anyone either A) help me figure out why that code doesn't work, or B) offer a better suggestion for being able to press a button and route the audio up to the upper speaker?
PS I am getting more and more familiar with iOS programming, but this is my first foray into the world of AudioSessions and such, so details and code samples are much appreciated! Thank you for your help!
UPDATE:
From the suggestion of "He Was" (below) I've removed the code quoted above and replaced it with:
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error:nil];
[[AVAudioSession sharedInstance] setActive: YES error:nil];
at the beginning of viewDidLoad. It still isn't working, though (by which I mean the audio is still coming out of the speaker at the bottom of the phone instead of the receiver at the top). Apparently the default behavior for AVAudioSessionCategoryPlayAndRecord should be to send audio out of the receiver on its own, so something is still wrong.
More specifically what I'm doing with this code is playing audio through the iPod Music Player (initialized right after the AVAudioSession lines above in viewDidLoad, for what it's worth):
_musicPlayer = [MPMusicPlayerController iPodMusicPlayer];
and the media for that iPod Music Player is chosen through an MPMediaPickerController:
- (void)mediaPicker:(MPMediaPickerController *)mediaPicker didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection {
    if (mediaItemCollection) {
        [_musicPlayer setQueueWithItemCollection:mediaItemCollection];
        [_musicPlayer play];
    }
    [self dismissViewControllerAnimated:YES completion:nil];
}
This all seems fairly straightforward to me: I've got no errors or warnings, and I know the media picker and music player are working correctly because the correct songs start playing; they're just coming out of the wrong speaker. Could there be a "play media using this audio session" method or something? Or is there a way to check which audio session category is currently active, to confirm that nothing could have switched it back? Is there a way to emphatically tell the code to USE the receiver, rather than relying on the default to do so? I feel like I'm on the one-yard line; I just need to cross that final bit...
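(For the category check in particular, AVAudioSession exposes the current values directly:)
NSLog(@"category: %@", [[AVAudioSession sharedInstance] category]);
NSLog(@"route: %@", [[AVAudioSession sharedInstance] currentRoute]);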
EDIT: I just thought of a theory, wherein it's something about the iPod Music Player that doesn't want to play out of the receiver. My reasoning: it is possible to set a song playing through the official iPod app and then seamlessly control it (pause, skip, etc.) through the app I'm developing. The continuous playback from one app to the next made me think that maybe the iPod Music Player has its own audio route settings, or maybe it doesn't stop to check the settings in the new app. Does anyone who knows what they're talking about think it could be something like that?
I was struggling with this for a while too; maybe this will help someone later. You can also use the newer methods of overriding ports. Many of the methods in your sample code are actually deprecated.
So, get your shared audio session instance:
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive: YES error:nil];
The session category has to be AVAudioSessionCategoryPlayAndRecord.
You can get the current output by checking this value:
AVAudioSessionPortDescription *routePort = session.currentRoute.outputs.firstObject;
NSString *portType = routePort.portType;
Now, depending on the port you want to send audio to, simply toggle the output using:
if ([portType isEqualToString:@"Receiver"]) {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
} else {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&error];
}
This should be a quick way to toggle the output between the speakerphone and the receiver.
You have to initialise your audio session first.
Using the C API
AudioSessionInitialize (NULL, NULL, NULL, NULL);
In iOS 6 you can use AVAudioSession methods instead (you will need to import the AVFoundation framework to use AVAudioSession):
Initialization using AVAudioSession
self.audioSession = [AVAudioSession sharedInstance];
Setting the audioSession category using AVAudioSession
[self.audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
error:nil];
For further research, if you want better search terms, here are the full names of the constants for the speakers:
const CFStringRef kAudioSessionOutputRoute_BuiltInReceiver;
const CFStringRef kAudioSessionOutputRoute_BuiltInSpeaker;
see Apple's docs here
But the real mystery is why you are having any trouble routing to the receiver. It's the default behaviour for the playAndRecord category. Apple's documentation of kAudioSessionOverrideAudioRoute_None:
"Specifies, for the kAudioSessionCategory_PlayAndRecord category, that output audio should go to the receiver. This is the default output audio route for this category."
update
In your updated question you reveal that you are using the MPMusicPlayerController class. This class invokes the global music player (the same player used in the Music app). This music player is separate from your app, and so doesn't share your app's audio session. Any properties you set on your app's audioSession will be ignored by the MPMusicPlayerController.
If you want control over your app's audio behaviour, you need to use an audio framework internal to your app. This would be AVAudioRecorder / AVAudioPlayer or Core Audio (Audio Queues, Audio Units or OpenAL). Whichever method you use, the audio session can be controlled either via AVAudioSession properties or via the Core Audio API. Core Audio gives you more fine-grained control, but with each new release of iOS more of it is ported over to AVFoundation, so start with that.
Also remember that the audio session provides a way for you to describe the intended behaviour of your app's audio in relation to the total iOS environment, but it will not hand you total control. Apple takes care to ensure that the user's expectations of their device's audio behaviour remain consistent between apps, and when one app needs to interrupt another's audio stream.
update 2
In your edit you allude to the possibility of audio sessions checking other apps' audio session settings. That does not happen[1]. The idea is that each app sets its preferences for its own audio behaviour using its self-contained audio session. The operating system arbitrates between conflicting audio requirements when more than one app competes for an unshareable resource, such as the internal microphone or one of the speakers, and will usually decide in favour of the behaviour most likely to meet the user's expectations of the device as a whole.
The MPMusicPlayerController class is slightly unusual in that it gives one app some degree of control over another. In this case, your app is not playing the audio; it is sending a request to the Music Player to play audio on your behalf. Your control is limited by the extent of the MPMusicPlayerController API. For more control, your app will have to provide its own implementation of audio playback.
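For instance, a minimal AVAudioPlayer sketch that plays through your app's own audio session (the bundled file name here is hypothetical):
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];
// "song.m4a" stands in for a file bundled with your app.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"song" withExtension:@"m4a"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
[player prepareToPlay];
[player play]; // with PlayAndRecord and no route override, output defaults to the receiver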
In your comment you wonder:
Could there be a way to pull an MPMediaItem from the MPMusicPlayerController and then play them through the app-specific audio session, or anything like that?
That's a (big) subject for a new question. Here is a good starting read (from Chris Adamson's blog): From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary - it's the sequel to From iPhone media library to PCM samples in dozens of confounding and potentially lossy steps - and it should give you a sense of the complexity you will face. This may have got easier since iOS 6, but I wouldn't be so sure!
[1] There is an otherAudioPlaying read-only BOOL property in iOS 6, but that's about it.
Swift 3.0 Code
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    let routePort: AVAudioSessionPortDescription? = audioSession.currentRoute.outputs.first
    let portType: String? = routePort?.portType
    if portType == "Receiver" {
        try? audioSession.overrideOutputAudioPort(.speaker)
    } else {
        try? audioSession.overrideOutputAudioPort(.none)
    }
}
Swift 5.0
func activateProximitySensor(isOn: Bool) {
    let device = UIDevice.current
    device.isProximityMonitoringEnabled = isOn
    if isOn {
        NotificationCenter.default.addObserver(self, selector: #selector(proximityStateDidChange), name: UIDevice.proximityStateDidChangeNotification, object: device)
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord)
            try session.setActive(true)
            try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    } else {
        NotificationCenter.default.removeObserver(self, name: UIDevice.proximityStateDidChangeNotification, object: device)
    }
}
@objc func proximityStateDidChange(notification: NSNotification) {
    if let device = notification.object as? UIDevice {
        print(device)
        let session = AVAudioSession.sharedInstance()
        do {
            let routePort: AVAudioSessionPortDescription? = session.currentRoute.outputs.first
            let portType = routePort?.portType
            if let type = portType, type.rawValue == "Receiver" {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
            } else {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            }
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    }
}

Detect attached audio devices (iOS)

I'm trying to figure out how to detect which audio devices, if any, are connected on iPhone/iPad/iPod. I know all about the audio route calls and route change callbacks, but these don't tell me anything about what's attached; they only report where the audio is currently routing. I need to know, for instance, whether headphones and/or Bluetooth are still attached while audio is routed through the speakers. Or, for instance, if a user plugs in a headset while using Bluetooth and then decides to disconnect Bluetooth, I need to know that Bluetooth is disconnected even as audio is still routing through the headphones.
Unfortunately, as of iOS 11, there seems to be no API to reliably get the list of output devices that are currently attached: as soon as the current route changes, you only see one device (the currently routed one) via AVAudioSession's currentRoute.outputs, even though multiple devices may still be attached.
However, for the input (and that includes Bluetooth devices with the HFP profile), if the proper audio session mode is used (AVAudioSessionModeVoiceChat or AVAudioSessionModeVideoChat, for example), you can get the list of available inputs via AVAudioSession's availableInputs, and those inputs are listed even when the device is not part of the active route. This is very useful when, for example, a user does a manual override via MPVolumeView from Bluetooth to the speaker; since HFP is two-way I/O (it has both input and output), you can judge whether the Bluetooth HFP output is still available by looking at the inputs.
BOOL isBtInputAvailable = NO;
NSArray *inputs = [[AVAudioSession sharedInstance] availableInputs];
for (AVAudioSessionPortDescription *port in inputs) {
    if ([port.portType isEqualToString:AVAudioSessionPortBluetoothHFP]) {
        isBtInputAvailable = YES;
        break;
    }
}
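To keep that check current, re-run it whenever the route changes, e.g. by observing AVAudioSessionRouteChangeNotification:
// Re-check attached inputs whenever the audio route changes.
[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionRouteChangeNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    NSLog(@"route changed: %@", note.userInfo);
    // Re-run the availableInputs scan above to see what is attached now.
}];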
In the case of iOS 5 you should use:
CFStringRef newRoute;
UInt32 size = sizeof(CFStringRef);
XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute), "couldn't get new audio route");
if (newRoute)
{
    CFShow(newRoute);
    if (CFStringCompare(newRoute, CFSTR("HeadsetInOut"), 0) == kCFCompareEqualTo) // headset plugged in
    {
        colorLevels[0] = .3;
        colorLevels[5] = .5;
    }
    else if (CFStringCompare(newRoute, CFSTR("SpeakerAndMicrophone"), 0) == kCFCompareEqualTo)
    {
        // speaker-and-microphone route; handle as needed
    }
}
You can get a list of input sources and output destinations from the Audio Session properties.
Check out these Session Properties:
kAudioSessionProperty_InputSources
kAudioSessionProperty_OutputDestinations
And to query the details of each, you can use:
kAudioSessionProperty_InputSource
kAudioSessionProperty_OutputDestination
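For example, a sketch using the (now deprecated) C API; whether the caller must release the returned array is my assumption, so check the docs:
CFArrayRef inputSources = NULL;
UInt32 propSize = sizeof(inputSources);
OSStatus status = AudioSessionGetProperty(kAudioSessionProperty_InputSources,
                                          &propSize, &inputSources);
if (status == kAudioSessionNoError && inputSources != NULL) {
    NSLog(@"%@", (__bridge NSArray *)inputSources);
    CFRelease(inputSources); // assumed caller-owned, as with kAudioSessionProperty_AudioRoute
}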
