iOS 6 multi-route audio

iOS 6.0 brought "multi-route audio" support to the iPhone and iPad.
The djay app, for example, benefits from it by allowing the user to hear one deck in headphones while playing the other.
The only mention of it is in the AVAudioSession class reference:
AVAudioSessionCategoryMultiRoute
Allows you to output distinct streams of audio data to different output devices at the same time. For example, you would use this category to route audio to both a USB device and a set of headphones. Use of this category requires a more detailed knowledge of, and interaction with, the capabilities of the available audio routes.
This category may be used for input, output, or both.
Available in iOS 6.0 and later.
How do you route two distinct streams to different routes, especially using Remote I/O?
Thanks!

Answering my own question: there's actually no information in the iOS Developer Library, but fortunately all the info needed is in the WWDC developer sessions.
Search for WWDC 2012 Session 505, "Audio Session and Multiroute Audio in iOS," by Torrey Holbrook Walker.
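In short (my own rough sketch of the approach from that session, not code taken from it): set the MultiRoute category, then apply a channel map to the Remote I/O unit so a stereo stream is steered to a specific pair of hardware output channels. The channel indices are purely illustrative and depend on which devices are attached.

#import <AVFoundation/AVFoundation.h>
#import <AudioUnit/AudioUnit.h>
#include <stdlib.h>

// remoteIOUnit is assumed to be an already-created Remote I/O audio unit.
static void RouteStereoStreamToHardwareChannels(AudioUnit remoteIOUnit, SInt32 left, SInt32 right)
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryMultiRoute error:&error];
    [session setActive:YES error:&error];

    // One entry per hardware output channel; each value is the app-side
    // channel that feeds it, or -1 for silence. With headphones on hardware
    // channels 0-1 and a USB interface on 2-3, passing left=2, right=3
    // sends the stream to the USB pair only.
    NSInteger hardwareChannels = [session outputNumberOfChannels];
    SInt32 *channelMap = malloc(hardwareChannels * sizeof(SInt32));
    for (NSInteger i = 0; i < hardwareChannels; i++) channelMap[i] = -1;
    if (left >= 0 && left < hardwareChannels)   channelMap[left]  = 0;
    if (right >= 0 && right < hardwareChannels) channelMap[right] = 1;

    AudioUnitSetProperty(remoteIOUnit,
                         kAudioOutputUnitProperty_ChannelMap,
                         kAudioUnitScope_Output,
                         0, // output element
                         channelMap,
                         (UInt32)(hardwareChannels * sizeof(SInt32)));
    free(channelMap);
}

The session also covers how to inspect the attached routes and their channel counts so you can pick those indices at runtime.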
I hope that may help somebody else.

Related

Is there any relationship between an AVAudioEngine and an AVAudioSession?

I understand that this question might get a bad rating, but I've been looking at questions which ask how to reroute audio output to the loudspeaker on iOS devices.
In every question I looked at, the user talked about using AVAudioSession to reroute it. However, I'm not using AVAudioSession; I'm using an AVAudioEngine.
So basically my question is, even though I'm using an AVAudioEngine, should I still have an AVAudioSession?
If so, what is the relationship between these two objects? Or is there a way to connect an AVAudioEngine to an AVAudioSession?
If this is not the case, and there is no relation between an AVAudioEngine and an AVAudioSession, then how do you reroute audio so that it plays out of the main speakers on an iOS device rather than the earpiece?
Thank you!
AVAudioSession is specific to iOS and coordinates audio playback between apps, so that, for example, audio is stopped when a call comes in, or music playback stops when the user starts a movie. This API is needed to make sure an app behaves correctly in response to such events.
AVAudioEngine is a modern Objective-C API for playback and recording. It provides a level of control for which you previously had to drop down to the C APIs of the Audio Toolbox framework (for example, with real-time audio tasks). The audio engine APIs are built to interface well with lower-level APIs, so you can still drop down to Audio Toolbox if you have to.
The basic concept of this API is to build up a graph of audio nodes, ranging from source nodes (players and microphones) through processing nodes (mixers and effects) to destination nodes (hardware outputs). Each node has a certain number of input and output busses with well-defined data formats. This architecture makes it very flexible and powerful. And it even integrates with audio units.
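For illustration, a minimal sketch of such a graph (a player node feeding the engine's built-in main mixer, which is connected to the output node); the file name is just a placeholder:

#import <AVFoundation/AVFoundation.h>

// Keep engine and player in strong properties in real code so they outlive this scope.
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
[engine attachNode:player];

// "loop.caf" is a placeholder asset; any audio file in the bundle works.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"loop" withExtension:@"caf"];
AVAudioFile *file = [[AVAudioFile alloc] initForReading:url error:NULL];

// player -> main mixer -> output node (hardware)
[engine connect:player to:engine.mainMixerNode format:file.processingFormat];

NSError *error = nil;
if ([engine startAndReturnError:&error]) {
    [player scheduleFile:file atTime:nil completionHandler:nil];
    [player play];
}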
So there is no inclusive relationship between the two.
Source Link: https://www.objc.io/issues/24-audio/audio-api-overview/
Yes, it is not clearly documented; however, I found this note in the iOS developer documentation.
AVFoundation playback and recording classes automatically activate your audio session.
Document Link: https://developer.apple.com/library/content/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/ConfiguringanAudioSession/ConfiguringanAudioSession.html
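So in practice: yes, keep using the shared AVAudioSession alongside your AVAudioEngine, and configure it before starting the engine. A rough sketch for the speaker-vs-earpiece case (assuming you need a play-and-record category; with plain playback the earpiece is not used at all):

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// PlayAndRecord normally routes output to the receiver (earpiece);
// the DefaultToSpeaker option sends it to the main speaker instead.
[session setCategory:AVAudioSessionCategoryPlayAndRecord
         withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
               error:&error];
[session setActive:YES error:&error];

// ...then build and start your AVAudioEngine as usual; it renders through
// whatever route the session ends up with.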
I hope this may help you.

How to find the available audio routes in iOS 7?

I am new to iOS and I need to find the available audio routes on an iPhone (iOS 7). For example, if headphones are connected, the result should include both the iPhone and the headphones; if Bluetooth and headphones are connected, it should include all three.
I have tried [[AVAudioSession sharedInstance] currentRoute].outputs, but it gives the details of the current (currently active) route only. I have also tried the outputDataSources property of AVAudioSession, but it returns nil.
Can anyone suggest a proper solution, if Apple exposes APIs for finding the available routes?
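For reference, this is roughly the check I tried; as said, it only reflects the route that is currently active:

#import <AVFoundation/AVFoundation.h>

// Only the currently active route shows up here, not every route the
// system could potentially use.
AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
for (AVAudioSessionPortDescription *output in route.outputs) {
    NSLog(@"active output: %@ (%@)", output.portName, output.portType);
    // portType values look like AVAudioSessionPortHeadphones,
    // AVAudioSessionPortBluetoothA2DP, AVAudioSessionPortBuiltInSpeaker, ...
}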

OpenEars asking for permission to use microphone

I have made an app that uses the OpenEars framework to read out some text, but I haven't used any of OpenEars' speech recognition features, just the text-to-speech feature. My app got rejected by Apple because it asks for permission to use the microphone while the app doesn't have any features of that kind. The following is the rejection message from Apple:
During review we were prompted to provide consent to use the microphone, however, we were not able to find any features or functionality that use the microphone for audio recording.
The microphone consent request is generated by the use of either AVAudioSessionCategoryRecord or AVAudioSessionCategoryPlayAndRecord audio categories.
If you do not intend to record audio with your application, it would be appropriate to choose the AVAudioSession session category that fits your application's needs or modify your app to include audio-recording features.
For more information, please refer to the Security section of the iOS SDK Release Notes for iOS 7 GM Seed.
I have searched the app for the AVAudioSessionCategoryRecord or AVAudioSessionCategoryPlayAndRecord audio categories mentioned in the message but couldn't find any. How can I disable the prompt for permission to use the microphone?
Your application got rejected because you don't need the microphone: OpenEars by default interfaces with the microphone, which is why the permission prompt came up. These permission prompts are not something you can dismiss programmatically, as Apple increased the security features so that users have more control over what their applications are able to do. If you have to use OpenEars' audio management for speech recognition, see Update 1; otherwise, read on for a different solution using the iOS 7 speech synthesizer.
In your case, if all you want to do is read out some text, you can use the iOS 7 speech synthesizer, which is the same synthesizer used to create Siri's voice.
It's very easy to set up, and I am currently using it in one of my projects to interact with the user via voice. Here's a quick tutorial on how to get it all set up:
Speech synthesizer tutorial
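In a nutshell it comes down to a few lines (a minimal sketch; keep a strong reference to the synthesizer, e.g. in a property, so it isn't deallocated mid-sentence):

#import <AVFoundation/AVFoundation.h>

// No recording category is involved, so no microphone prompt appears.
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *utterance =
    [AVSpeechUtterance speechUtteranceWithString:@"Hello, this text is read aloud."];
utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
utterance.rate = AVSpeechUtteranceDefaultSpeechRate;
[synthesizer speakUtterance:utterance];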
UPDATE 1
After #halle's comment, I decided to update the post for those who have to use the OpenEars framework and will be using only the FliteController text-to-speech feature, without any OpenEars speech recognition.
You can set the FliteController property noAudioSessionOverrides to TRUE to ensure that OpenEars won't touch the audio recording stream; this stops the microphone permission alert from popping up.
[self.fliteController setNoAudioSessionOverrides:TRUE];
UPDATE 2
Based on #Halle's comment, you no longer need the workaround from Update 1:
Just an update that starting with today's update 1.65, FliteController won't ever make audio session calls on its own, so there is no further rejection danger here and it isn't necessary to set noAudioSessionOverrides.
I'm sorry your app was rejected. To use TTS only without any of the audio session management related to speech recognition in OpenEars, set FliteController's property noAudioSessionOverrides to TRUE. This will result in no audio session changes/no use of the mic stream.
I'll see if I can make the documentation for this setting a bit more prominent for developers doing TTS with OpenEars' FliteController only.
For completeness' sake, the documentation on how to greatly reduce your app binary size when using OpenEars, since that was also an issue for you:
http://www.politepix.com/forums/topic/slimming-down-your-app/
http://www.politepix.com/openears/support/#Q_How_can_I_trim_down_the_size_of_the_final_binary_for_distribution
Edit: starting with today's version 1.65 of OpenEars and its plugins, if you just use FliteController there is no danger of rejection because the TTS classes no longer make any calls to the audio session by themselves. Thanks for the heads-up about this and, again, sorry you had a rejection due to this.

iPhone headphone audio jack re-routing

We created an external iOS notification light that uses the device's audio output for power.
When you get a phone call on the iPhone and the light is plugged in, you still get the ringtone, but when you pick up, the audio is rerouted to the headphones (the iPhone thinks our light/device is a headphone set) and the user has to pull the myLED out by at least 2 mm to get the audio from the front receiver of the phone.
We have been exploring alternative solutions to this challenge. Recently we made a prototype with a particular jack shape so that the user can rotate it when getting a call to "reroute" the audio to the iPhone speaker/mic.
Although it may sound like a clever option, this hardware solution is far from neat: it leads to positions where the myLED does not work or is unreliable, plus other complications.
I know of the existence of kAudioSessionOverrideAudioRoute_Speaker; however, I suspect that this will only direct the app's audio to the rear speaker (the "loud" one) and not to the front receiver (because the "receiver" for the iPhone is the headphone set if one is detected).
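For reference, this is the override I mean (the Audio Session Services C call; AVAudioSession's overrideOutputAudioPort:error: is the newer equivalent). As far as I understand it only applies to audio rendered through my own app's session, not to the system's call audio:

#include <AudioToolbox/AudioToolbox.h>

// Assumes the audio session has already been initialized and activated.
UInt32 routeOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                        sizeof(routeOverride),
                        &routeOverride);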
What would you suggest?
Super appreciated!
I think you're in a tough spot:
It's highly unlikely Apple will ever release the option to override audio routing for phone calls. As a key functionality of the phone, they tend to keep the call aspect under lock and key.
The headphone jack (probably - this is how most of them do it) uses the impedance between ground and one or both speakers or the remote control to determine if the plug is in. Other than breaking the circuit, there is no good way to simulate this.
The only options I think you have are these:
Require the user to remove the device when a call comes in.
Provide a microcontroller on the jack to drive a transistor; this transistor can electronically break the circuit to provide the same sort of impedance signature as an unplugged jack.
How, when, and whether you can tell the jack that a phone call is in progress is beyond my knowledge: is there an API for an "incoming but not yet answered" call you can hook into? Will you have to do a watchdog thing to keep communication with your app alive? Would it be possible for you to use the dock connector instead? I think these are really your options. Not a complete answer, but those are my thoughts.
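On the "incoming but not yet answered call" question: at the time of writing, CoreTelephony exposes a call-event hook you could try (a sketch; note the handler only fires while your app is running):

#import <CoreTelephony/CTCallCenter.h>
#import <CoreTelephony/CTCall.h>

// Keep a strong reference to the call center (e.g. a property), otherwise
// the handler is never invoked.
CTCallCenter *callCenter = [[CTCallCenter alloc] init];
callCenter.callEventHandler = ^(CTCall *call) {
    if ([call.callState isEqualToString:CTCallStateIncoming]) {
        // An incoming, not-yet-answered call: signal the accessory here.
    }
};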

How does Audiobus for iOS work?

What SDKs does Audiobus use to provide inter-app audio routing? I am not aware of any Apple SDK that could facilitate inter-app communication on iOS, and I was under the impression that apps were sandboxed from each other, so I'm really intrigued to hear how they pulled this off.
iOS allows inter-app communication via MIDI SysEx messages. Audiobus works by sending audio as MIDI SysEx messages. You can read the details from the developer himself:
http://atastypixel.com/blog/thirteen-months-of-audiobus/
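To illustrate the mechanism (this is not Audiobus's actual code, just a bare-bones CoreMIDI sketch): an app can publish a virtual MIDI source and push SysEx-wrapped bytes to any other app that connects to it.

#import <CoreMIDI/CoreMIDI.h>

MIDIClientRef client;
MIDIClientCreate(CFSTR("AudioSender"), NULL, NULL, &client);

// A virtual source that other apps can discover and connect to.
MIDIEndpointRef source;
MIDISourceCreate(client, CFSTR("AudioSender Out"), &source);

// Arbitrary payload wrapped as SysEx (0xF0 ... 0xF7); every data byte must be < 0x80.
Byte sysex[] = {0xF0, 0x7D, 0x01, 0x02, 0x03, 0xF7};

Byte packetBuffer[128];
MIDIPacketList *packetList = (MIDIPacketList *)packetBuffer;
MIDIPacket *packet = MIDIPacketListInit(packetList);
packet = MIDIPacketListAdd(packetList, sizeof(packetBuffer), packet, 0, sizeof(sysex), sysex);

// Hands the packets to every app connected to this virtual source; they
// receive the bytes in their MIDI read callback.
MIDIReceived(source, packetList);

Raw audio doesn't fit MIDI's 7-bit data bytes directly, so it has to be encoded on the sending side and reassembled on the receiving side, which adds overhead the SDK has to manage.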
My guess is that they use some sort of audio over the network, because I've seen log statements when our app gets started, even on a different device.
I don't really know the details of the implementation, but this could be a way of staying within the "sandbox" constraint.
The Audiobus SDK (probably) uses the audio session rules to "organize" all the sound output from the apps using their SDK. As you can see in their videos (at the bottom of the page), the apps have a side menu for switching back and forth between apps.
The Audio Session Category states:
Allows mixing: if yes, audio from other applications (such as the iPod) can continue playing when your application plays sound.
This way Audiobus can "control" the sound and keep the session persistent across the apps.
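For what it's worth, the "allows mixing" behaviour quoted above corresponds to a category option on the shared audio session (a sketch of the option itself, not of Audiobus internals):

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
// Playback that does not silence audio from other applications.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                 withOptions:AVAudioSessionCategoryOptionMixWithOthers
                                       error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];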
