I am using the iOS speech recognition API from an Objective-C iOS app.
It works on iPhone 6 and 7, but does not work on iPhone 5 (iOS 10.2.1).
Also note it works on iPhone 5s, just not iPhone 5.
Is the iOS speech API supposed to work on iPhone 5? Do you have to do anything different to get it to work, or does anyone know what the issue could be?
The basic code is below. No errors occur, and the mic volume is detected, but no speech is detected.
if (audioEngine != NULL) {
    [audioEngine stop];
    [speechTask cancel];
    AVAudioInputNode* inputNode = [audioEngine inputNode];
    [inputNode removeTapOnBus: 0];
}

recording = YES;
micButton.selected = YES;

//NSLog(@"Starting recording... SFSpeechRecognizer Available? %d", [speechRecognizer isAvailable]);
NSError * outError;
//NSLog(@"AUDIO SESSION CATEGORY0: %@", [[AVAudioSession sharedInstance] category]);
AVAudioSession* audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory: AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error:&outError];
[audioSession setMode: AVAudioSessionModeMeasurement error:&outError];
[audioSession setActive: true withOptions: AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&outError];

SFSpeechAudioBufferRecognitionRequest* speechRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
//NSLog(@"AUDIO SESSION CATEGORY1: %@", [[AVAudioSession sharedInstance] category]);
if (speechRequest == nil) {
    NSLog(@"Unable to create SFSpeechAudioBufferRecognitionRequest.");
    return;
}

speechDetectionSamples = 0;

// This somehow fixes a crash on iPhone 7
// Seems like a bug in iOS ARC/lack of gc
AVAudioEngine* temp = audioEngine;
audioEngine = [[AVAudioEngine alloc] init];
AVAudioInputNode* inputNode = [audioEngine inputNode];

speechRequest.shouldReportPartialResults = true;

// iOS speech does not detect end of speech, so must track silence.
lastSpeechDetected = -1;

speechTask = [speechRecognizer recognitionTaskWithRequest: speechRequest delegate: self];

[inputNode installTapOnBus:0 bufferSize: 4096 format: [inputNode outputFormatForBus:0] block:^(AVAudioPCMBuffer* buffer, AVAudioTime* when) {
    @try {
        long millis = [[NSDate date] timeIntervalSince1970] * 1000;
        if (lastSpeechDetected != -1 && ((millis - lastSpeechDetected) > 1000)) {
            lastSpeechDetected = -1;
            [speechTask finish];
            return;
        }
        [speechRequest appendAudioPCMBuffer: buffer];

        // Calculate volume level
        if ([buffer floatChannelData] != nil) {
            float volume = fabsf(*buffer.floatChannelData[0]);
            if (volume >= speechDetectionThreshold) {
                speechDetectionSamples++;
                if (speechDetectionSamples >= speechDetectionSamplesNeeded) {
                    // Need to change mic button image in main thread
                    [[NSOperationQueue mainQueue] addOperationWithBlock:^ {
                        [micButton setImage: [UIImage imageNamed: @"micRecording"] forState: UIControlStateSelected];
                    }];
                }
            } else {
                speechDetectionSamples = 0;
            }
        }
    }
    @catch (NSException * e) {
        NSLog(@"Exception: %@", e);
    }
}];

[audioEngine prepare];
[audioEngine startAndReturnError: &outError];
NSLog(@"Error %@", outError);
I think the bug is here in this code:
long millis = [[NSDate date] timeIntervalSince1970] * 1000;
On 32-bit devices (the iPhone 5 is a 32-bit device), a long is 32 bits, so the largest signed value it can hold is 2^31-1, i.e. 2,147,483,647.
I checked on the iPhone 5 simulator and millis has a negative value there. The code snippet you posted does not show how lastSpeechDetected is set after it is initialized to -1, but if ((millis - lastSpeechDetected) > 1000) somehow evaluates to true, the code enters the if-block and finishes the speech task prematurely.
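A minimal sketch of a fix along those lines, assuming the only requirement is a millisecond timestamp that cannot overflow on 32-bit devices, is to use a fixed-width 64-bit integer for both millis and lastSpeechDetected:

// Use a 64-bit type so the millisecond timestamp also fits on 32-bit devices.
// lastSpeechDetected would need to be declared as int64_t as well.
int64_t millis = (int64_t)([[NSDate date] timeIntervalSince1970] * 1000.0);
if (lastSpeechDetected != -1 && ((millis - lastSpeechDetected) > 1000)) {
    lastSpeechDetected = -1;
    [speechTask finish];
    return;
}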
Related
I have a video conference app and am implementing a device selection feature for switching between the earpiece, the speaker, and connected Bluetooth devices. Everything seems to work apart from switching between the Bluetooth devices themselves.
For some reason, audio is routed only to the most recently connected device, and you can't switch back to the other ones even though they appear in availableInputs of the shared AVAudioSession and I call setPreferredInput after overriding the output port with AVAudioSessionPortOverrideNone.
I searched for a resolution but only found the same unanswered issue from five years ago. I tried the same suggestion of changing the setActive calls, but that was also unsuccessful.
Following is the test code, which is taken from here:
AVAudioSession *_audioSession = [AVAudioSession sharedInstance];
AVAudioSessionCategoryOptions _incallAudioCategoryOptionsAll = AVAudioSessionCategoryOptionMixWithOthers|AVAudioSessionCategoryOptionAllowBluetooth|AVAudioSessionCategoryOptionAllowAirPlay;
[_audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
               withOptions:_incallAudioCategoryOptionsAll
                     error:nil];
[_audioSession setMode:AVAudioSessionModeVoiceChat error:nil];

RCT_EXPORT_METHOD(setAudioDevice:(NSString *)device
                         resolve:(RCTPromiseResolveBlock)resolve
                          reject:(RCTPromiseRejectBlock)reject) {
    BOOL success;
    NSError *error = nil;
    NSLog(@"[setAudioDevice] - Attempting to set audiodevice as: %@", device);
    if ([device isEqualToString:kDeviceTypeSpeaker]) {
        success = [_audioSession overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
    } else {
        AVAudioSessionPortDescription *port = nil;
        for (AVAudioSessionPortDescription *portDesc in _audioSession.availableInputs) {
            if ([portDesc.UID isEqualToString:device]) {
                port = portDesc;
                break;
            }
        }
        if (port != nil) {
            [_audioSession overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:nil];
            success = [_audioSession setPreferredInput:port error:&error];
            if (error != nil) {
                NSLog(@"setAudioDevice %@ %@", error.localizedDescription, error);
            }
        } else {
            success = NO;
            error = RCTErrorWithMessage(@"Could not find audio device");
        }
    }
    if (success) {
        resolve(@"setAudioDevice success!");
        NSLog(@"resolved success");
    } else {
        reject(@"setAudioDevice", error != nil ? error.localizedDescription : @"", error);
        NSLog(@"sent reject");
    }
}
So how can I make it that we're able to successfully change from one bluetooth device to other?
This is an Objective-C iOS project that uses Apple's Speech framework. Speech recognition reports the error 'Error Domain=kLSRErrorDomain Code=201 "Siri and Dictation are disabled" UserInfo={NSLocalizedDescription=Siri and Dictation are disabled}'.
The error is reported in the resultHandler: Error Domain=kLSRErrorDomain Code=201 "Siri and Dictation are disabled" UserInfo={NSLocalizedDescription=Siri and Dictation are disabled}
- (void)resetRecognitionTask
{
    // Cancel the previous task if it's running.
    if (self.recognitionTask) {
        //[self.recognitionTask cancel]; // Will cause the system error and memory problems.
        [self.recognitionTask finish];
    }
    self.recognitionTask = nil;

    // Configure the audio session for the app.
    NSError *error = nil;
    if (AVAudioSession.sharedInstance.categoryOptions != (AVAudioSessionCategoryOptionMixWithOthers|AVAudioSessionCategoryOptionDefaultToSpeaker|AVAudioSessionCategoryOptionAllowBluetooth)) {
        [AVAudioSession.sharedInstance setCategory:AVAudioSessionCategoryPlayAndRecord mode:AVAudioSessionModeMeasurement options:AVAudioSessionCategoryOptionMixWithOthers|AVAudioSessionCategoryOptionDefaultToSpeaker|AVAudioSessionCategoryOptionAllowBluetooth error:&error];
    }
    //[AVAudioSession.sharedInstance setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDuckOthers error:&error];
    if (error)
    {
        [self stopWithError:error];
        return;
    }
    [AVAudioSession.sharedInstance setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
    if (error)
    {
        [self stopWithError:error];
        return;
    }

    // Create and configure the speech recognition request.
    self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    self.recognitionRequest.taskHint = SFSpeechRecognitionTaskHintConfirmation;

    // Keep speech recognition data on device
    if (@available(iOS 13, *)) {
        self.recognitionRequest.requiresOnDeviceRecognition = NO;
    }

    // Create a recognition task for the speech recognition session.
    // Keep a reference to the task so that it can be canceled.
    __weak typeof(self) weakSelf = self;
    self.speechRecognizer = nil;
    self.recognitionTask = [self.speechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        __strong typeof(self) strongSelf = weakSelf;
        if (result != nil) {
            [strongSelf resultCallback:result];
        }
    }];
}
I think this is an oversight in the iOS system.
Solution: Settings-> General -> Keyboards -> Enable Dictation
Turn it ON.
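If you also want to detect this condition in code rather than only pointing users at Settings, a minimal sketch (assuming a speechRecognizer property like the one used in the code above) is to check authorization and availability before starting a recognition task:

// Sketch: guard recognition behind authorization and availability checks.
// isAvailable is typically NO when Siri and Dictation are turned off in Settings.
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (status != SFSpeechRecognizerAuthorizationStatusAuthorized) {
            NSLog(@"Speech recognition not authorized (status %ld)", (long)status);
            return;
        }
        if (!self.speechRecognizer.isAvailable) {
            NSLog(@"Speech recognizer unavailable; ask the user to enable Dictation.");
            return;
        }
        [self resetRecognitionTask];
    });
}];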
Most times I launch the app, the text-to-speech audio and speech recognition work perfectly. But sometimes I launch it and it crashes when first starting speech recognition. It seems to get on a roll and crash several launches in a row, so it is a little bit consistent once it gets into one of its 'moods' :)
The recognition starts after a TTS introduction and at the same time as the TTS says 'listening', so both are active at once. Possibly the audio takes a few milliseconds to change over and that makes it crash, but I am not clear how this works or how to prevent it.
I see the following error:
[avae] AVAEInternal.h:70:_AVAE_Check: required condition is false:
[AVAudioIONodeImpl.mm:911:SetOutputFormat: (format.sampleRate == hwFormat.sampleRate)]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio',
reason: 'required condition is false: format.sampleRate == hwFormat.sampleRate'
I have put some try-catches in, just to see if they prevent this error, and they don't. I also added a tiny sleep, which also made no difference. So I am not even clear which code is causing it. If I put a breakpoint before the removeTapOnBus call, it does not crash until that line executes. If I put the breakpoint on the installTapOnBus line, it does not crash until that line. And if I put the breakpoint after the code, it crashes. So it does seem to be this code.
In any case, what am I doing wrong or how could I debug it?
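One pattern that can help avoid this assertion (shown here as a sketch under the assumption that the crash comes from reading a stale hardware format while the route is still changing, not as a verified fix) is to activate the audio session before asking the input node for its format, and to skip installing the tap when that format looks invalid instead of sleeping:

// Sketch: activate the session first, then check the input format before tapping.
NSError *sessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord
         withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
               error:&sessionError];
[session setActive:YES error:&sessionError];

AVAudioInputNode *inputNode = self.audioEngine.inputNode;
AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
if (recordingFormat.sampleRate == 0 || recordingFormat.channelCount == 0) {
    // The hardware format is not ready yet (e.g. the route is still switching after TTS).
    NSLog(@"Input format not ready (%.0f Hz, %u ch); not installing tap",
          recordingFormat.sampleRate, (unsigned)recordingFormat.channelCount);
    return;
}
[inputNode removeTapOnBus:0];
[inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    [self.recognitionRequest appendAudioPCMBuffer:buffer];
}];

For reference, the full method as it currently stands is below.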
- (void) recordAndRecognizeWithLang:(NSString *) lang
{
    NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:lang];
    self.sfSpeechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
    if (!self.sfSpeechRecognizer) {
        [self sendErrorWithMessage:@"The language is not supported" andCode:7];
    } else {
        // Cancel the previous task if it's running.
        if ( self.recognitionTask ) {
            [self.recognitionTask cancel];
            self.recognitionTask = nil;
        }
        //[self initAudioSession];
        self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
        self.recognitionRequest.shouldReportPartialResults = [[self.command argumentAtIndex:1] boolValue];

        // https://developer.apple.com/documentation/speech/sfspeechrecognizerdelegate
        // only callback is availabilityDidChange
        self.sfSpeechRecognizer.delegate = self;

        self.recognitionTask = [self.sfSpeechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
            NSLog(@"recognise");
            if (error) {
                NSLog(@"error %ld", error.code);
                // code 1 or 203 or 216 = we called abort via self.recognitionTask cancel
                // 1101 is thrown when in simulator
                // 1700 is when not given permission
                if (error.code == 203) { //|| error.code==216
                    // nothing, carry on, this is bullshit, or maybe not...
                    [self sendErrorWithMessage:@"sfSpeechRecognizer Error" andCode:error.code];
                } else {
                    [self stopAndRelease];
                    // note: we can't send error back to js as I found it crashes (when recognising, then switch apps, then come back)
                    [self sendErrorWithMessage:@"sfSpeechRecognizer Error" andCode:error.code];
                    return;
                }
            }
            if (result) {
                NSMutableArray *alternatives = [[NSMutableArray alloc] init];
                int maxAlternatives = [[self.command argumentAtIndex:2] intValue];
                for (SFTranscription *transcription in result.transcriptions) {
                    if (alternatives.count < maxAlternatives) {
                        float confMed = 0;
                        for (SFTranscriptionSegment *transcriptionSegment in transcription.segments) {
                            //NSLog(@"transcriptionSegment.confidence %f", transcriptionSegment.confidence);
                            if (transcriptionSegment.confidence) {
                                confMed += transcriptionSegment.confidence;
                            }
                        }
                        NSLog(@"transcriptionSegment.transcript %@", transcription.formattedString);
                        NSMutableDictionary *resultDict = [[NSMutableDictionary alloc] init];
                        [resultDict setValue:transcription.formattedString forKey:@"transcript"];
                        [resultDict setValue:[NSNumber numberWithBool:result.isFinal] forKey:@"final"];
                        float conf = 0;
                        if (confMed && transcription.segments && transcription.segments.count && transcription.segments.count > 0) {
                            conf = confMed / transcription.segments.count;
                        }
                        [resultDict setValue:[NSNumber numberWithFloat:conf] forKey:@"confidence"];
                        [alternatives addObject:resultDict];
                    }
                }
                [self sendResults:@[alternatives]];
                if (result.isFinal) {
                    //NSLog(@"recog: isFinal");
                    [self stopAndRelease];
                }
            }
        }];

        //[self.audioEngine.inputNode disconnectNodeInput:0];
        AVAudioFormat *recordingFormat = [self.audioEngine.inputNode outputFormatForBus:0];
        //AVAudioFormat *recordingFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100 channels:1];
        NSLog(@"samplerate=%f", recordingFormat.sampleRate);
        NSLog(@"channelCount=%i", recordingFormat.channelCount);

        // tried this but does not prevent crashing
        //if (recordingFormat.sampleRate <= 0) {
        //    [self.audioEngine.inputNode reset];
        //    recordingFormat = [[self.audioEngine inputNode] outputFormatForBus:0];
        //}
        sleep(1); // to prevent random crashes
        @try {
            [self.audioEngine.inputNode removeTapOnBus:0];
        } @catch (NSException *exception) {
            NSLog(@"removeTapOnBus exception");
        }
        sleep(1); // to prevent random crashes
        @try {
            NSLog(@"install tap on bus");
            [self.audioEngine.inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
                //NSLog(@"tap");
                [self.recognitionRequest appendAudioPCMBuffer:buffer];
            }];
        } @catch (NSException *exception) {
            NSLog(@"installTapOnBus exception");
        }
        sleep(1); // to prevent random crashes
        [self.audioEngine prepare];
        NSError *error = nil;
        BOOL isOK = [self.audioEngine startAndReturnError:&error];
        if (!isOK) {
            NSLog(@"audioEngine startAndReturnError returned false");
        }
        if (error) {
            NSLog(@"audioEngine startAndReturnError error");
        }
    }
}
My app synthesises audio from a lookup table. It plays audio successfully but crashes the moment I try to stop playing. Audio playback only needs to exit without restarting, so the requirements for handling the interruption are basic. I reread Apple's Audio Session Programming Guide, including the section Responding to Interruptions. However, the method handleAudioSessionInterruption does not seem to receive the interruption, so I'm obviously missing something.
EDIT See my answer. When I began work on this I knew next to nothing about NSNotificationCenter so I welcome any suggestion for improvement.
Two methods set up the audio session to play in the foreground.
- (void)setUpAudio
{
    if (_playQueue == NULL)
    {
        if ([self setUpAudioSession] == TRUE)
        {
            [self setUpPlayQueue];
            [self setUpPlayQueueBuffers];
        }
    }
}

- (BOOL)setUpAudioSession
{
    BOOL success = NO;
    NSError *audioSessionError = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // Set up notifications
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(handleAudioSessionInterruption:)
                                                 name:AVAudioSessionInterruptionNotification
                                               object:session];

    // Set category
    success = [session setCategory:AVAudioSessionCategoryPlayback
                             error:&audioSessionError];
    if (!success)
    {
        NSLog(@"%@ Error setting category: %@",
              NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
        // Exit early
        return success;
    }

    // Set mode
    success = [session setMode:AVAudioSessionModeDefault
                         error:&audioSessionError];
    if (!success)
    {
        NSLog(@"%@ Error setting mode: %@",
              NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
        // Exit early
        return success;
    }

    // Set some preferred values
    NSTimeInterval bufferDuration = .005; // I would prefer a 5ms buffer duration
    success = [session setPreferredIOBufferDuration:bufferDuration
                                              error:&audioSessionError];
    if (audioSessionError)
    {
        NSLog(@"Error %ld, %@ %i", (long)audioSessionError.code, audioSessionError.localizedDescription, success);
    }

    double sampleRate = _audioFormat.mSampleRate; // I would prefer a sample rate of 44.1kHz
    success = [session setPreferredSampleRate:sampleRate
                                        error:&audioSessionError];
    if (audioSessionError)
    {
        NSLog(@"Error %ld, %@ %i", (long)audioSessionError.code, audioSessionError.localizedDescription, success);
    }

    success = [session setActive:YES
                           error:&audioSessionError];
    if (!success)
    {
        NSLog(@"%@ Error activating %@",
              NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
    }

    // Get current values
    sampleRate = session.sampleRate;
    bufferDuration = session.IOBufferDuration;
    NSLog(@"Sample Rate:%0.0fHz I/O Buffer Duration:%f", sampleRate, bufferDuration);
    return success;
}
And here is the method that handles the interruption when I press the stop button. However it does not respond.
EDIT The correct approach needs a block, not a selector. See my answer.
- (void)handleAudioSessionInterruption:(NSNotification *)notification
{
    if (_playQueue)
    {
        NSNumber *interruptionType = [[notification userInfo] objectForKey:AVAudioSessionInterruptionTypeKey];
        NSNumber *interruptionOption = [[notification userInfo] objectForKey:AVAudioSessionInterruptionOptionKey];

        NSLog(@"in-app Audio playback will be stopped by %@ %lu", notification.name, (unsigned long)interruptionType.unsignedIntegerValue);

        switch (interruptionType.unsignedIntegerValue)
        {
            case AVAudioSessionInterruptionTypeBegan:
            {
                if (interruptionOption.unsignedIntegerValue == AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation)
                {
                    NSLog(@"notify other apps that audio is now available");
                }
            }
                break;
            default:
                break;
        }
    }
}
Answer My method to handle the AVAudioSession interruption did not subscribe the observer correctly with NSNotificationCenter. This has been fixed by adding the observer using a block rather than a selector.
The solution replaces the deprecated AVAudioSession delegate methods in AudioBufferPlayer, an audio player that is an extremely good fit for this purpose and was initially developed for direct audio synthesis by Matthias Hollejmans. Several deprecated functions, including InterruptionListenerCallback, were later upgraded by Mario Diana. The solution (below) uses NSNotification, allowing users to exit the AVAudioSession gracefully by pressing a button.
Here is the relevant code.
PlayViewController.m
A UIButton action performs an orderly shutdown of the synth, invalidates the timer, and posts the notification that will exit the AVAudioSession.
- (void)fromEscButton:(UIButton *)button
{
    [self stopConcertClock];
    ... // code for Exit PlayViewController not shown
}

- (void)stopConcertClock
{
    [_synthLock lock];
    [_synth stopAllNotes];
    [_synthLock unlock];
    [timer invalidate];
    timer = nil;
    [self postAVAudioSessionInterruptionNotification];
    NSLog(@"Esc button pressed or sequence ended. Exit PlayViewController");
}

- (void)postAVAudioSessionInterruptionNotification
{
    [[NSNotificationCenter defaultCenter]
        postNotificationName:@"AVAudioSessionInterruptionNotification"
                      object:self];
}
Initialising the AVAudioSession includes subscribing to a single interruption notification before starting startAudioPlayer in AudioBufferPlayer.
- (id)init
{
    if (self = [super init])
    {
        NSLog(@"PlayViewController starts MotionListener and AudioSession");
        [self startAudioSession];
    }
    return self;
}

- (void)startAudioSession
{
    // Synth and the AudioBufferPlayer must use the same sample rate.
    _synthLock = [[NSLock alloc] init];
    float sampleRate = 44100.0f;

    // Initialise synth to fill the audio buffer with audio samples.
    _synth = [[Synth alloc] initWithSampleRate:sampleRate];

    // Initialise the audio buffer.
    _player = [[AudioBufferPlayer alloc] initWithSampleRate:sampleRate
                                                   channels:1
                                             bitsPerChannel:16
                                           packetsPerBuffer:1024];
    _player.gain = 0.9f;
    __block __weak PlayViewController *weakSelf = self;
    _player.block = ^(AudioQueueBufferRef buffer, AudioStreamBasicDescription audioFormat)
    {
        PlayViewController *blockSelf = weakSelf;
        if (blockSelf != nil)
        {
            // Lock access to the synth. This callback runs on an internal Audio Queue thread and we don't
            // want another thread to change the Synth's state while we're still filling up the audio buffer.
            [blockSelf->_synthLock lock];

            // Calculate how many packets fit into this buffer. Remember that a packet equals one frame
            // because we are dealing with uncompressed audio; a frame is a set of left+right samples
            // for stereo sound, or a single sample for mono sound. Each sample consists of one or more
            // bytes. So for 16-bit mono sound, each packet is 2 bytes. For stereo it would be 4 bytes.
            int packetsPerBuffer = buffer->mAudioDataBytesCapacity / audioFormat.mBytesPerPacket;

            // Let the Synth write into the buffer. The Synth just knows how to fill up buffers
            // in a particular format and does not care where they come from.
            int packetsWritten = [blockSelf->_synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];

            // We have to tell the buffer how many bytes we wrote into it.
            buffer->mAudioDataByteSize = packetsWritten * audioFormat.mBytesPerPacket;

            [blockSelf->_synthLock unlock];
        }
    };

    // Set up notifications
    [self subscribeForBlockNotification];

    [_player startAudioPlayer];
}

- (void)subscribeForBlockNotification
{
    NSNotificationCenter * __weak center = [NSNotificationCenter defaultCenter];
    id __block token = [center addObserverForName:@"AVAudioSessionInterruptionNotification"
                                           object:nil
                                            queue:[NSOperationQueue mainQueue]
                                       usingBlock:^(NSNotification *note) {
                                           NSLog(@"Received the notification!");
                                           [_player stopAudioPlayer];
                                           [center removeObserver:token];
                                       }];
}
PlayViewController.h
These are relevant interface settings
@interface PlayViewController : UIViewController <EscButtonDelegate>
{
    ...
    // Initialisation of audio player and synth
    AudioBufferPlayer *player;
    Synth *synth;
    NSLock *synthLock;
}
...
- (AudioBufferPlayer *)player;
- (Synth *)synth;
@end
AudioBufferPlayer.m
- (void)stopAudioPlayer
{
    [self stopPlayQueue];
    [self tearDownPlayQueue];
    [self tearDownAudioSession];
}

- (void)stopPlayQueue
{
    if (_audioPlaybackQueue != NULL)
    {
        AudioQueuePause(_audioPlaybackQueue);
        AudioQueueReset(_audioPlaybackQueue);
        _playing = NO;
    }
}

- (void)tearDownPlayQueue
{
    AudioQueueDispose(_audioPlaybackQueue, NO);
    _audioPlaybackQueue = NULL;
}
- (BOOL)tearDownAudioSession
{
    NSError *deactivationError = nil;
    // Pass the error pointer so deactivationError is actually populated on failure.
    BOOL success = [[AVAudioSession sharedInstance] setActive:NO
                                                  withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                                        error:&deactivationError];
    if (!success)
    {
        NSLog(@"%s AVAudioSession Error: %@", __FUNCTION__, deactivationError);
    }
    return success;
}
I am working on a project that can play music via an HFP device. But there is a problem: I want to detect whether an HFP or A2DP device is connected while music is playing.
Now I am using the AVFoundation framework to do this. Here's the code:
- (BOOL)isConnectedToBluetoothPeripheral
{
    BOOL isMatch = NO;
    NSString *categoryString = [AVAudioSession sharedInstance].category;
    AVAudioSessionCategoryOptions categoryOptions = [AVAudioSession sharedInstance].categoryOptions;
    if ((![categoryString isEqualToString:AVAudioSessionCategoryPlayAndRecord] &&
         ![categoryString isEqualToString:AVAudioSessionCategoryRecord]) ||
        categoryOptions != AVAudioSessionCategoryOptionAllowBluetooth)
    {
        NSError *error = nil;
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                         withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                                               error:&error];
        if (error) {
            [[AVAudioSession sharedInstance] setCategory:categoryString
                                             withOptions:categoryOptions
                                                   error:&error];
            return isMatch;
        }
    }
    NSArray *availableInputs = [AVAudioSession sharedInstance].availableInputs;
    for (AVAudioSessionPortDescription *desc in availableInputs)
    {
        if ([[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP] || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP])
        {
            isMatch = YES;
            break;
        }
    }
    if (!isMatch)
    {
        NSArray *outputs = [[[AVAudioSession sharedInstance] currentRoute] outputs];
        for (AVAudioSessionPortDescription *desc in outputs)
        {
            if ([[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP] || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP])
            {
                isMatch = YES;
                break;
            }
        }
    }
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setCategory:categoryString
                                     withOptions:categoryOptions
                                           error:&error];
    return isMatch;
}
It works well but causes another problem: when music is playing, using this method to detect the HFP connection interrupts playback for about two seconds.
So I tried another way to reduce the impact of detecting the HFP connection. I am using a flag
static BOOL isHFPConnectedFlag
to indicate whether HFP or A2DP is connected. I use the previous method to detect the connection only once (when the app launches) and save the result into isHFPConnectedFlag. In addition, I observe AVAudioSessionRouteChangeNotification to keep the connection status in sync:
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(handleAudioSessionRouteChangeWithState:) name:AVAudioSessionRouteChangeNotification object:nil];
When the route change reason is AVAudioSessionRouteChangeReasonNewDeviceAvailable or AVAudioSessionRouteChangeReasonOldDeviceUnavailable, I know that HFP has been connected or disconnected. Unfortunately, when I connect some HFP devices to my iPhone, the system does not post this notification, so I cannot detect the connection in this situation.
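For reference, a sketch of what the handler registered above might look like (the method body below is a reconstruction, not the original code): it reads the change reason from userInfo and re-checks the current route without touching the session category, so playback is not interrupted.

- (void)handleAudioSessionRouteChangeWithState:(NSNotification *)notification
{
    NSUInteger reason = [notification.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    if (reason == AVAudioSessionRouteChangeReasonNewDeviceAvailable ||
        reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable) {
        // Inspect only the current route; no category change, so no playback glitch.
        BOOL found = NO;
        for (AVAudioSessionPortDescription *output in [AVAudioSession sharedInstance].currentRoute.outputs) {
            if ([output.portType isEqualToString:AVAudioSessionPortBluetoothHFP] ||
                [output.portType isEqualToString:AVAudioSessionPortBluetoothA2DP]) {
                found = YES;
                break;
            }
        }
        isHFPConnectedFlag = found;
    }
}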
Does anyone know the reason, or a better way to implement this (detecting the HFP connection without interrupting music playback)?
You can use something like this:
- (BOOL)bluetoothDeviceA2DPAvailable {
    BOOL available = NO;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    AVAudioSessionRouteDescription *currentRoute = [audioSession currentRoute];
    for (AVAudioSessionPortDescription *output in currentRoute.outputs) {
        if ([[output portType] isEqualToString:AVAudioSessionPortBluetoothA2DP] ||
            [[output portType] isEqualToString:AVAudioSessionPortBluetoothHFP]) {
            available = YES;
            break;
        }
    }
    return available;
}
Swift 5 version:
func bluetoothDeviceHFPAvailable() -> Bool {
    let audioSession = AVAudioSession.sharedInstance()
    let currentRoute = audioSession.currentRoute
    for output in currentRoute.outputs {
        if output.portType == .bluetoothHFP || output.portType == .bluetoothA2DP {
            return true
        }
    }
    return false
}