Reroute Audio to Bluetooth Speaker - iOS

I am creating a video & audio capturing app. Every time I start recording, the music that was playing on the Bluetooth speaker switches to the phone's built-in speaker. When I exit the app, the music goes back to playing on the Bluetooth speaker.
My first attempt at solving this was to set the relevant options on the audioSession, like this:
try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, withOptions: [AVAudioSessionCategoryOptions.MixWithOthers, AVAudioSessionCategoryOptions.AllowBluetooth])
But that didn't work. So the second solution I'm considering is to reroute the music output back to the Bluetooth speaker.
I researched this and found the function audioSession.setOutputDataSource, but I don't know what parameters to pass to it.
I am also not sure whether, at the moment I start video recording, the phone (or my code) disables the Bluetooth connection or just reroutes playback to the phone's speaker.
UPDATE: I commented out this line: // try audioSession.setMode(AVAudioSessionModeMoviePlayback) and the music pauses briefly and then resumes on the Bluetooth speaker. But the problem now is that the captured video has no audio.
UPDATE 2: Would this question have a solution if I provide you with my code?

I'll go ahead and take a shot at answering the original question. From Apple's documentation I got this:
func setOutputDataSource(_ dataSource: AVAudioSessionDataSourceDescription?) throws
Parameters
dataSource
The data source for the audio session's output.
outError
On input, a pointer to an error object. If an error occurs, the pointer is set to an NSError object that describes the error. If you do not want error information, pass in nil.
This page should help you figure out what AVAudioSessionDataSourceDescription does/returns, but in summary:
You obtain data source descriptions from the shared AVAudioSession object or the AVAudioSessionPortDescription objects corresponding to its input and output ports. Only built-in microphone ports on certain devices support the location, orientation, and polar pattern properties; if a port does not support these features, the value of its dataSources property is nil.
Are you trying to route music from your app to the speaker (is that the music playing?) or is the music coming from another app, and you would like a dual output?
For error checking you could make sure the speaker is still available, using something like the output data source. If it returns nil (null), it means you are not able to switch between data sources.
It's probably also worth noting that the user must give you permission to record. However, I doubt this is the problem, as you seem to have already been recording at one point; the audio was just playing through the phone instead of the speaker.
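Separate from the setOutputDataSource route, one hedged suggestion: on iOS 10 and later, AVAudioSession has an .allowBluetoothA2DP category option. The plain .allowBluetooth option restricts Bluetooth routing to the low-quality HFP profile, which output-only A2DP speakers don't support, and that matches the symptom described. A minimal sketch, assuming iOS 10+:

```swift
import AVFoundation

// Sketch: keep playback on a Bluetooth (A2DP) speaker while recording
// with the built-in microphone. .allowBluetooth implies the HFP profile;
// .allowBluetoothA2DP (iOS 10+) permits high-quality A2DP output instead.
func configureSessionForRecordingWithBluetoothPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.mixWithOthers, .allowBluetoothA2DP])
    try session.setActive(true)
}
```

Whether this keeps the music on the speaker for your specific capture pipeline would need testing, but it is the documented option for this routing case.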

Related

Using MPNowPlayingInfoCenter without actually playing audio

I am trying to build an iOS app which controls a music player which runs on a separate machine. I would like to use MPNowPlayingInfoCenter for inspecting and controlling this player. As far as I can tell so far, the app actually has to output audio for this to work (see also this answer).
However, for instance, the Spotify app is actually capable of doing this without playing audio on the iOS device. If you use Spotify Connect to play the audio on a different device, the MPNowPlayingInfoCenter still displays the correct song and the controls are functional.
What's the catch here? What does one (conceptually) have to do to achieve this? I can think of continuously emitting a "silent" audio stream, but that seems a bit brute-force.
Streaming silence will work, but you don't need to stream it all the time. Just long enough to send your Now Playing info. Using AVAudioPlayer, I've found approaches as short as this will send the data (assuming the player is loaded with a 1s silent audio file):
player.play()
let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
nowPlayingInfoCenter.nowPlayingInfo = [...]
player.stop()
I was very surprised this worked within a single event loop. Any other approach to playing silence seems to work as well. Again, it can be very brief (in fact the above code in my tests doesn't even get as far as playing the silence).
I'm very interested in whether this works reliably for other people, so please comment if you make discoveries about it.
I've explored the Spotify app a bit. I'm not 100% certain if this is the same technique they're using. They're able to mix with other audio somehow. So you can be playing local audio on the phone and also playing Spotify Connect to some other device, and the "Now Playing" info will kind of stomp on each other. For my purposes, that would actually be better, but I haven't been able to duplicate it. I still have to make the audio session non-mixable for at least a moment (so you get ~ 1s audio drop if you're playing local audio at the same time). I did find that the Spotify app was not highly reliable about playing to my Connect device when I was also playing other audio locally. Sometimes it would get confused and switch around where it wanted to play. (Admittedly this is a bizarre use case; I was intentionally trying to break it to figure out how they're doing it.)
EDIT: I don't believe Spotify is actually doing anything special here. I think they just play silent audio. I think getting two streams playing at the same time has to do with AirPlay rather than Spotify (I can't make it happen with Bluetooth, only AirPlay).
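Put together, the silent-audio approach might look like the sketch below; the file name and metadata values are illustrative, and it assumes a short silent audio file is bundled with the app:

```swift
import AVFoundation
import MediaPlayer

// Sketch of the silent-audio trick: briefly claim the audio session so the
// system accepts the Now Playing info. "silence-1s" is an illustrative name
// for a bundled one-second silent audio file.
func publishNowPlayingInfo() throws {
    guard let url = Bundle.main.url(forResource: "silence-1s",
                                    withExtension: "m4a") else { return }
    let player = try AVAudioPlayer(contentsOf: url)
    player.play()

    // Illustrative metadata values for the remote player's current track.
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Some Track",
        MPMediaItemPropertyArtist: "Some Artist",
        MPMediaItemPropertyPlaybackDuration: 180.0
    ]

    player.stop()
}
```

Note that, per the answer above, the play/stop pair can complete within a single event loop pass and the info still sticks.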

change output speaker when user move his phone

My app lets the user hear sound files, and I'm using AVAudioPlayer to play them.
I saw in some apps a very nice behaviour where the audio output switches from the regular speaker to the ear speaker when the user puts the phone next to his ear.
I have no idea where to start here. Should I detect the phone's movement and change the output speaker myself, or is there a native implementation of this behaviour?
The most straightforward path to accomplishing this is to use proximity monitoring in UIDevice. Proximity monitoring tells you whether the phone is close to the user or not.
Listen for UIDeviceProximityStateDidChangeNotification and react to proximityState changes accordingly, in your case by rerouting audio.
There's a thorough answer to a similar question here. That answer includes supplementary details to combine device motion for increased accuracy.
What you're looking for is the proximity sensor (that little piece of hardware near the iPhone's ear speaker), not any motion-sensing mechanism. The proximity sensor is accessible via the public API through UIDevice's proximityState property, which simply returns a Boolean value indicating whether the sensor is close to the user or not.
Based on that value, you can then proceed to route your audio to the ear speaker. This can be achieved using the AVAudioSession class, specifically by setting the category (setCategory:error:) to AVAudioSessionCategoryPlayAndRecord.
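A sketch of how those two pieces might fit together, proximity monitoring plus audio-session routing; overriding the output port is one way to toggle between the loud speaker and the receiver:

```swift
import UIKit
import AVFoundation

// Sketch: route audio to the ear (receiver) speaker when the proximity
// sensor reports the phone is near the user's face.
final class ProximityAudioRouter {
    private let session = AVAudioSession.sharedInstance()

    func start() throws {
        // With .playAndRecord, the default output route is the receiver.
        try session.setCategory(.playAndRecord)
        try session.setActive(true)
        UIDevice.current.isProximityMonitoringEnabled = true
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(proximityChanged),
            name: UIDevice.proximityStateDidChangeNotification,
            object: nil)
    }

    @objc private func proximityChanged() {
        let nearEar = UIDevice.current.proximityState
        // Force the loud speaker while the phone is held away from the ear;
        // .none falls back to the category's default route (the receiver).
        try? session.overrideOutputAudioPort(nearEar ? .none : .speaker)
    }
}
```

A side benefit of proximity monitoring is that the screen also turns off while the phone is at the user's ear, matching the system Phone app's behaviour.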

iOS Audio Playback Over Phone Call

I've found a few related questions for Android but nothing for iOS.
Is there any possible way to override the phone's microphone once a phone call has been received and playback an audio file over the phone call? If it's not possible to override the microphone, is there a way to mix in an audio file along with the microphone?
I don't believe you can do what you're wanting. From Apple's Audio Session Programming Guide:
The system follows the inviolable rule that “the phone always wins.” No app, no matter how vehemently it demands priority, can trump the phone. When a call arrives, the user gets notified and your app is interrupted—no matter what audio operation you have in progress and no matter what category you have set.
Which, if you think about it, makes sense: A user is unlikely to want unexpected audio to interrupt or overlay a phone conversation.
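Since an app can't prevent the interruption, the usual approach is to react to it gracefully. A sketch of observing the audio session's interruption notification:

```swift
import AVFoundation

// Sketch: respond when a phone call (or other higher-priority audio)
// interrupts this app's audio session.
func observeInterruptions() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main) { note in
        guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: raw)
        else { return }
        switch type {
        case .began:
            // Call arrived: pause playback and save state here.
            break
        case .ended:
            // Call ended: resume if the system suggests it.
            if let optsRaw = note.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt,
               AVAudioSession.InterruptionOptions(rawValue: optsRaw).contains(.shouldResume) {
                // Resume playback here.
            }
        @unknown default:
            break
        }
    }
}
```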

How can I start a process when this play symbol comes up

In my app I am streaming audio, and there is a period of 5-10 seconds, depending on the connection, where the buffer is loading; after that, my app starts to play the audio. When playback starts, this symbol comes up on the screen.
Here is an image of what I'm talking about.
http://img27.imageshack.us/img27/3667/img0596.png
I want to change a label in my app when this symbol comes up on the screen, but I don't know which function lets me detect this.
The symbol is the "Play" button common to music devices. There is most likely an NSNotificationCenter message that can be listened for. Depending on how you are buffering your sounds, there is probably also a delegate that can notify a selector once playback has begun. Without more details I cannot give more specific advice. If I were in your position, I would take a very hard look at the API you are using; most likely several methods exist to either post notifications or send delegate messages reporting the state of the stream as well as playback. I have worked with some streaming audio APIs and was able to get the status of the buffer as well as many other messages from the stream object(s). These are just part of good design, so most likely it is there.
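The question predates it, but for a modern illustration: if playback goes through AVPlayer, one concrete version of this advice is to observe the player's timeControlStatus, which flips to .playing once buffering finishes. A sketch:

```swift
import AVFoundation

// Sketch: run a callback (e.g. updating a label) when an AVPlayer
// transitions from buffering to actually playing.
final class PlaybackObserver {
    private var observation: NSKeyValueObservation?

    func watch(_ player: AVPlayer, onPlaying: @escaping () -> Void) {
        observation = player.observe(\.timeControlStatus, options: [.new]) { player, _ in
            if player.timeControlStatus == .playing {
                DispatchQueue.main.async { onPlaying() }
            }
        }
    }
}
```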

play sound while recording fails

I have a 3rd party SDK that handles an audio recording. It has a callback when the recording starts. In the callback I'm trying to play a sound to indicate to the user that the device is now listening (like Siri, or any other speech recognition tends to do), but when I try I get the following error:
AURemoteIO::ChangeHardwareFormats: error -10875
I have tried playing the sound using AudioServicesPlaySystemSound as well as an AVAudioPlayer both with the same result. The sound plays fine at other times, and per the error my assumption is there's an incompatibility between the playback and recording on the hardware level. Can anyone clarify this error, or give me a hint as to a possible workaround?
Make sure that the Audio Session is initialised and configured for Play_and_Record before you start the RemoteIO Audio Unit recording.
You shouldn't and likely can't start playing a sound in a RemoteIO recording callback. Only set a boolean flag in the callback to indicate that a sound should be played. Play your sound from the main UI run loop.
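A sketch of that hop-to-the-main-queue pattern; the callback and file names are illustrative, since the exact hook depends on the SDK:

```swift
import AVFoundation

// Sketch: never start playback inside a real-time recording callback.
// Defer the work to the main queue and play the cue there instead.
final class RecordingCueController {
    private var cuePlayer: AVAudioPlayer?

    // Invoked from the SDK's "recording started" callback (illustrative name).
    func recordingDidStart() {
        DispatchQueue.main.async { [weak self] in
            self?.playListeningCue()
        }
    }

    private func playListeningCue() {
        // "listening-cue" is an illustrative name for a bundled sound file.
        guard let url = Bundle.main.url(forResource: "listening-cue",
                                        withExtension: "caf") else { return }
        cuePlayer = try? AVAudioPlayer(contentsOf: url)
        cuePlayer?.play()
    }
}
```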
My problem is related specifically to the external SDK and how they handle the audio interface. They override everything when you ask the SDK to start recording, if you take control back you break the recording session. So within the context of that SDK there's no way to work around it unless they fix the SDK.