Is there a way (Swift 3/Obj-C) to redirect the output of VoiceOver between headphones, speaker, Bluetooth, etc.?
As far as I can see, the API documentation doesn't have any information on changing the output channel...
We would like the ability to record voice while the user is listening to VoiceOver feedback on headphones (or change that, if they like).
On iOS, all sound is routed to the default audio route to start, including VoiceOver. When a new audio route is connected (e.g. Bluetooth headphones), two extra VoiceOver options become available. First, a "Destination" option appears in the VoiceOver rotor that lets users choose the system's audio route. Second, the Sound section of VoiceOver settings in the Settings app includes options to select separate "Speech Channels" and "Sound Channels". You can even limit speech output to a single channel of the route, which lets you listen to music in one ear and VoiceOver output in the other.
On Mac, this is a user-configurable setting within VoiceOver Utility.
You cannot redirect VoiceOver output programmatically on either platform. This is probably due to the catastrophic user experience that could create. Imagine redirecting your visual output to an unplugged device in your closet. How would you recover control of your computer?
You can use overrideOutputAudioPort(_:) on AVAudioSession to redirect the output, assuming your session category is set to playAndRecord.
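A minimal Swift sketch of that call (note that this overrides the route of the app's own audio session; whether VoiceOver follows it is subject to the system behavior described above):

```swift
import AVFoundation

// Override the app's audio route to the built-in speaker.
// The playAndRecord category is required for the .speaker override to take effect.
func routeToSpeaker() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try session.setActive(true)
        try session.overrideOutputAudioPort(.speaker)
    } catch {
        print("Audio session error: \(error)")
    }
}
```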
Related
We are trying to integrate the YouTube app on an STB that uses RDK middleware (capable of running HTML5/JavaScript applications). I have been through the "YouTube TV HTML5 Technical Requirements 2016" document and have some questions.
1) As per my understanding, YouTube is an open-source app, and integration work will be required. Will any customization be required? For example, there is a difference in how search functionality is presented on different device types. In the YouTube app running in a browser on a PC, a text box is available where you can type what you want to search for and then press the search icon next to it to start the search. However, on devices like a Smart TV or set-top box, where the user has no pointing device or keyboard, a soft keyboard usually has to be shown on screen, and the search starts automatically after a certain number of characters have been entered. I want to know whether this functionality is customized by the app integrator, or whether there are different code bases for different device types.
I have a similar question about the settings menu. For example, to support the DIAL 2.0 protocol for remotely launching the YouTube application from another device, you need a settings menu that lets you pair/unpair the device. So the settings menu also seems to differ between device types.
2) Similarly, there are differences in how the user is allowed to fast-forward/rewind during playback. In a PC browser I have seen that the user can seek to any position within a stream using the mouse. However, on Smart TVs there are rewind/forward buttons that seek -/+ 10 seconds. I have not seen trick modes in any implementation. Are trick modes required, and how are they performed? If they are required, are they done using seek, or with some sort of I-frame index file to allow smooth trick modes? Again, doesn't that part come from the app itself?
3) I'm trying to find out whether YouTube supports any or all of the MPEG-DASH, Apple HLS, and Microsoft Smooth Streaming adaptive bitrate protocols, but I'm not having much luck. I captured packets with Wireshark, launched the YouTube application, and played back a video, but I couldn't see any HTTP calls hinting that the app uses any of the above ABR formats (maybe all the communication was under TLS, so it was encrypted and I couldn't see what was going on). Even in the YouTube app running in a browser on a PC, when I play back a video, Settings -> Quality always remains at Auto, 480p for the whole duration of playback. And if I change the quality to another value, e.g. 720p, it stays there for the whole playback. This suggests to me that it is not using any of the ABR formats. So I guess these ABR formats are probably for future use?
4) In the YouTube specifications I can see that the target device must implement at least the com.youtube.playready and com.widevine.alpha (for 4K content) DRMs. I was trying to find out whether YouTube has any content available in these formats, but was unable to find any. Can you please confirm?
I would appreciate it if someone could answer these questions or point me in the right direction.
Best Regards,
Farhan
I have an app that displays videos, and it's very important to us that we intercept all pause events and prevent users from seeking in videos.
Doing it on device is pretty simple: we just don't expose any 'regular' controls to the user, and in -remoteControlReceivedWithEvent: we handle all the events we're actually interested in.
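In rough Swift terms, the on-device handling looks something like this (a simplified sketch; the class name and the events we choose to swallow are illustrative):

```swift
import UIKit

// Simplified sketch of intercepting remote-control events on device.
class PlayerViewController: UIViewController {

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        UIApplication.shared.beginReceivingRemoteControlEvents()
        becomeFirstResponder()
    }

    override var canBecomeFirstResponder: Bool { return true }

    override func remoteControlReceived(with event: UIEvent?) {
        guard let event = event, event.type == .remoteControl else { return }
        switch event.subtype {
        case .remoteControlPause, .remoteControlTogglePlayPause:
            break // swallow pause/toggle so playback can't be interrupted
        case .remoteControlPlay:
            break // forward play requests to our player here
        default:
            break
        }
    }
}
```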
But we're struggling with supporting Apple TV. It's our understanding that it should forward all events sent from the Apple Remote to our app, as per [0]:
When AirPlay is in use, your media might be playing in another room from your host device. The AirPlay output device might have its own controls or respond to an Apple remote control. For the best user experience, your app should listen for and respond to remote events, such as play, pause, and fast-forward requests. Enabling remote events also allows your app to respond to the controls on headphones or earbuds that are plugged into the host device.
However, as far as I can tell from my debugging and pulled hair, this doesn't apply to cases where you let AVPlayer handle displaying your video. We actually don't do anything at all to make videos play on the TV, since AVPlayer's allowsExternalPlayback property is YES by default.
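For context, our setup is roughly the following (the URL and identifiers here are placeholders):

```swift
import AVKit
import AVFoundation

// Placeholder URL. allowsExternalPlayback defaults to true, so AirPlay hands
// decoding and rendering off to the Apple TV; setting it to false would fall
// back to mirroring the on-device player instead.
let player = AVPlayer(url: URL(string: "https://example.com/video.m3u8")!)
let playerViewController = AVPlayerViewController()
playerViewController.player = player
// player.allowsExternalPlayback = false
```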
If I'm understanding the docs correctly, while using that mode with Apple TV, only the URL/data from the device is sent to the Apple TV, and the aTV does the decoding and rendering on its own, as per [1]:
External playback mode is when video data is sent to an external device such as Apple TV via AirPlay and the mini-connector-based HDMI/VGA adapters for full screen playback at its original fidelity. AirPlay Video playback is also considered as an "external playback" mode.
which could potentially explain why I don't receive any events on device (e.g. someone at Apple decided that since the aTV does the heavy lifting of actually decoding and rendering, apps on the device shouldn't receive those events).
So, my question is basically this: is there an obvious tree I'm missing in the forest, or do I have no recourse other than either:
ugly hacks using KVO on playback position and playback rate, and punishing users for 'cheating'
reimplementing the whole video rendering on my own, treating the TV screen as a second display
Any pointers will be greatly appreciated.
[0] https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AirPlayGuide/EnrichYourAppforAirPlay/EnrichYourAppforAirPlay.html
[1] https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVPlayer_Class/Chapters/Reference.html#//apple_ref/occ/cl/AVPlayer
I just fired up the new Netflix app for iOS and upon trying to AirPlay the video to my Apple TV, I got this message:
So I was wondering how they're able to do this. Is there a public API that lets an app retrieve the version of the Apple TV that's selected for AirPlay? Or does Netflix use some other means to detect this?
It's actually not that difficult to pull something like this off. We need to do two things: 1) listen for when the audio route changes, to know when we've connected to an AirPlay device, and 2) search the network for AirPlay devices to get info from.
Once we get notified that the audio route has changed, we can get info about the new route. The most important bit for us is the name of the output port for the AirPlay device. Once we have that, we can iterate through all the AirPlay devices we know about to find the one that matches the name of the output port. Once we have that device, we can read its TXT record, which contains info about the version of the software it's running.
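The first half of that, listening for route changes and pulling out the AirPlay port, looks roughly like this (a sketch; the class name is illustrative, and the Bonjour lookup that reads the TXT record is left out):

```swift
import AVFoundation

// Sketch: observe route changes and grab the AirPlay output port's name,
// which can then be matched against devices found via NetServiceBrowser
// browsing for "_airplay._tcp." to read their TXT records.
class AirPlayMonitor {
    init() {
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(routeChanged(_:)),
                                               name: .AVAudioSessionRouteChange,
                                               object: nil)
    }

    @objc func routeChanged(_ notification: Notification) {
        let route = AVAudioSession.sharedInstance().currentRoute
        for output in route.outputs where output.portType == AVAudioSessionPortAirPlay {
            print("AirPlay output: \(output.portName)")
        }
    }
}
```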
I've pushed a sample project to GitHub that details how I pulled this off. It's a really simple project that just handles route changes. If you wanted to ship this code, you'd also want to add checks in certain other places, such as when you present an MPMoviePlayerViewController.
EDIT: Note that this is pretty hacky, as we rely on the name of the port being the same as the name of the AirPlay device. If Apple changes either of those, this breaks. Alternatively, you can compare the "deviceid" field of the device's TXT record to the UID of the port, which, as far as I can tell, is the deviceid with the string "-airplay" appended. Again, if Apple changed that it would also break, but at least it's another potential way of checking.
We created an external iOS notification light that uses the device’s audio for power.
When you get a phone call on an iPhone with the light plugged in, you still get the ringtone, but when you pick up, the audio is rerouted to the headphones (the iPhone thinks our light/device is a headphone set), and the user has to pull the myLED out by at least 2 mm to get audio from the phone's front receiver.
We have been exploring alternative solutions to this challenge: recently we made a prototype with a particular jack shape so that the user could rotate it when getting a call to "reroute" the audio to the iPhone speaker/mic.
Although it may sound like a clever option, this hardware solution is far from neat: it leads to positions in which the myLED does not work or is unreliable, plus other complications.
I know of the existence of kAudioSessionOverrideAudioRoute_Speaker; however, I suspect that this will only direct the app's audio to the rear speaker (the "loud" one) and not to the front receiver (because the "receiver" for the iPhone is the headphone set if one is detected).
What would you suggest?
Super appreciated!
I think you're in a tough spot:
It's highly unlikely Apple will ever release the option to override audio routing for phone calls. As a key functionality of the phone, they tend to keep the call aspect under lock and key.
The headphone jack (probably; this is how most of them do it) uses the impedance between ground and one or both speakers, or the remote control, to determine whether the plug is in. Other than breaking the circuit, there is no good way to simulate this.
The only options I think you have are these:
Require the user to remove the device when a call comes in.
Provide a microcontroller on the jack to drive a transistor; this transistor can electronically break the circuit to provide the same sort of impedance signature as an unplugged jack.
How, when, and whether you can tell the jack that a phone call is in progress is beyond my knowledge: is there an API for "incoming but not yet answered call" you can hook into? Would you have to do a watchdog thing to ensure communication with your app? Would it be possible for you to use the dock connector instead? I think these are really your options. Not a complete answer, but those are my thoughts.
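For the "incoming but not yet answered call" part, one possibility (assuming iOS 10+ and that your app is running at the time) is CallKit's CXCallObserver, which reports a ringing call before it is connected; a minimal sketch:

```swift
import CallKit

// Sketch: detect an incoming call that is ringing but not yet answered,
// so the app could then signal the accessory to break the circuit.
class CallWatcher: NSObject, CXCallObserverDelegate {
    private let observer = CXCallObserver()

    override init() {
        super.init()
        observer.setDelegate(self, queue: nil)
    }

    func callObserver(_ callObserver: CXCallObserver, callChanged call: CXCall) {
        if !call.isOutgoing && !call.hasConnected && !call.hasEnded {
            print("Incoming call is ringing")
        }
    }
}
```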
Let us say I have an audio iPhone app which takes input from the microphone.
Now, although I haven't tried this myself, I believe the user could use an external microphone that plugs into the headphone jack socket.
This means my audio unit could be receiving its input from the internal or the external microphone.
My guess is that iOS will automatically route from an external microphone if it is connected.
But what if I don't want that?
Is there a way to specify which microphone should be used?
I have looked in the Audio Session Programming Guide; the closest thing I can find is a setting regarding a Bluetooth headset. It appears that it is not possible, but I find that difficult to believe.
P.S. I am also curious how it detects an external microphone. If I plug my headphones in, it should continue routing from the internal microphone, since my headphones are just plain stereo headphones. But if I used my mobile phone's headset (an extra band on the jack; it has a microphone built into the cable where the individual earpiece strands meet), I would expect it to pick up that source instead.
You have to use the AUHAL unit to set a specific input device as the default input and then connect it to the AudioQueue.
Apple has a detailed Technical Note for that: Device input using the HAL Output Audio Unit.
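On iOS specifically, a different approach from the AUHAL route above is to ask AVAudioSession for its available inputs and set a preferred one; a minimal sketch (assuming the playAndRecord category):

```swift
import AVFoundation

// Sketch: prefer the built-in microphone even when a headset mic is present.
func preferBuiltInMic() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try session.setActive(true)
        if let builtIn = session.availableInputs?.first(where: {
            $0.portType == AVAudioSessionPortBuiltInMic
        }) {
            try session.setPreferredInput(builtIn)
        }
    } catch {
        print("Could not set preferred input: \(error)")
    }
}
```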