I'm trying to get spatial audio working using Dolby Atmos media files.
Question:
Do I need to use SceneKit, RealityKit, and/or CoreMotion with CMHeadphoneMotionManager to build this functionality, or does it come out of the box simply by setting allowedAudioSpatializationFormats on AVPlayer?
More Info:
The app I'm trying to build is a spatial music app, not a game or something using 3D objects.
The Apple documentation is unclear as to how to actually achieve this.
As an example: use AirPods Pro with a custom iOS app to hear spatial audio, i.e. like "Music" with instruments surrounding the listener.
Thanks!
I'll make a small deduction here, since I haven't researched this in depth yet, but according to the documentation you can choose the audio spatialization format. My deduction (you noted using AirPods Pro to hear the spatial audio) is that it's hardware related. However:
If the device doesn’t support spatial audio, it falls back to mono.
You can use AVAudioEngine. Here is a reference https://developer.apple.com/library/archive/samplecode/AVAEGamingExample/Introduction/Intro.html
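If simple playback of an Atmos file is all you need, something like the following is a reasonable starting point with plain AVPlayer. This is only a minimal sketch (iOS 14+), assuming your Atmos asset is at `assetURL`; whether spatialization actually engages still depends on the hardware (e.g. AirPods Pro) and the content:

```swift
import AVFoundation

// Minimal sketch: opt an AVPlayerItem in to multichannel (e.g. Dolby Atmos)
// spatialization. `assetURL` is assumed to point at an Atmos media file.
func makeSpatialPlayer(assetURL: URL) throws -> AVPlayer {
    // Configure the shared audio session for ordinary playback.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)

    // Allow mono, stereo, and multichannel layouts to be spatialized.
    let item = AVPlayerItem(url: assetURL)
    item.allowedAudioSpatializationFormats = .monoStereoAndMultichannel

    return AVPlayer(playerItem: item)
}
```

With a setup like this you generally should not need SceneKit, RealityKit, or CMHeadphoneMotionManager; those only come into play if you want to position individual sources yourself (e.g. with AVAudioEnvironmentNode in AVAudioEngine) rather than play back pre-mixed Atmos content.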
Related
Is it possible to play audio through speakers and headphones at the same time as of iOS 11? Right now I am trying to find an app on the store that can do this; so far I have had no luck. I have found a few other threads, but they either don't seem to work for me or they say you can't. Here are a few of the links:
https://developer.apple.com/library/content/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/AudioSessionCategoriesandModes/AudioSessionCategoriesandModes.html
https://stackoverflow.com/a/35009801
https://apple.stackexchange.com/questions/48534/is-it-possible-to-play-sound-through-both-the-headphone-jack-and-my-internal-spe
iOS: Is it possible to send audio out both headphones and speakers at the same time?
Has anyone tried to play sounds through the speakers and headphones at the same time? Android + iOS
What is really strange is that I cannot find any official documentation saying whether this is possible, only that 'The built-in speaker may be used only if no other eligible output ports (USB, HDMI, LineOut) are connected.'
If you need code, let me know, but really I am just wondering whether it is possible and what the best route for doing this is. The key thing is that I want to do it simultaneously. Thanks.
So, after some further review, I would like to save everyone the headaches that I had to go through. Is it possible? Probably; however, Apple as of right now does not supply any way to do this. There seems to be no official documentation as to whether it can be done (please update this answer if you find some).
The best evidence I have to support my answer would be that Apple states: 'Important: The built-in speaker may be used only if no other eligible output ports (USB, HDMI, LineOut) are connected.' Also, neither method in the solutions posted here works; I have tested both with an iPhone 7 on iOS 11.
Note: in the second method I think the writer was confusing 'channels' with 'routes'. He is changing and selecting the channels (which would be like the left, right, or middle speaker on devices that support it), whereas I was looking to change the route (play on the speaker, play in the headphones, HDMI, USB, etc.). So, in my opinion, NO, YOU CANNOT play on the headphones and the speaker simultaneously (different sounds or the same), unless you are Apple or I am wrong.
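To make the channels-versus-routes distinction concrete, here is a small sketch (Swift, function name just illustrative) that prints the session's single active output route. currentRoute only ever describes one active route; there is no API here to have the speaker and the headphones active and mixed at the same time:

```swift
import AVFoundation

// Sketch: inspect the one currently active output route. Each port on the
// route can expose multiple channels (left/right), but you cannot add a
// second independent route (e.g. speaker + headphones) yourself.
func logCurrentOutputRoute() {
    let route = AVAudioSession.sharedInstance().currentRoute
    for output in route.outputs {
        print("Output port:", output.portType.rawValue, "-", output.portName)
        print("  channels:", output.channels?.count ?? 0)
    }
}
```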
Is there a method of determining the audio delay when playing to Bluetooth (or some other device, like AirPlay) on iOS?
I've searched and found a few things. The Advanced Audio Distribution Profile (A2DP) spec, for example, makes several references to "delay" reporting, but I'm not clear how to access this from iOS in the more general case of audio playback to some device.
If there were a method, iOS would presumably use it to play videos with the audio and video in sync, which it doesn't seem to do. However, I do see references to other systems being able to compensate for this (e.g. apparently the Android YouTube player can compensate: Detect or Approximate Bluetooth Latency on Android (Audio Playback)).
I came across this issue myself. AVAudioSession provides properties for outputLatency as well as inputLatency. I do see differences in the values when I connect a pair of Bluetooth headphones versus the iPhone microphone and speaker, though I cannot speak to how accurate these numbers are.
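For what it's worth, reading them is straightforward. A minimal sketch, assuming an active audio session; the values are in seconds and are only as accurate as the current route reports them:

```swift
import AVFoundation

// Sketch: read the reported output/input latencies for the current route
// (e.g. Bluetooth headphones vs. built-in speaker and mic).
func logIOLatencies() {
    let session = AVAudioSession.sharedInstance()
    let outputMs = session.outputLatency * 1000
    let inputMs = session.inputLatency * 1000
    print(String(format: "output latency: %.1f ms, input latency: %.1f ms",
                 outputMs, inputMs))
}
```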
Currently I am developing an iOS app that captures sound using the iPad mic. At the same time, a sound is being played through the iPad speakers. Since the objective is to process the isolated input sound, the speaker feedback should be removed (cancelled). I looked for documentation on what AudioToolbox offers and found that AudioUnit provides built-in features such as echo cancellation. However, I did not manage to configure my audio graph correctly to make this work.
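To make the intended setup concrete, here is a rough sketch (in Swift; the helper name is just illustrative) of the echo-cancelling I/O unit I am trying to build the graph around; it is the wiring of this unit into the rest of the graph that I cannot get right:

```swift
import AudioToolbox

// Sketch: kAudioUnitSubType_VoiceProcessingIO is the echo-cancelling
// variant of the RemoteIO unit. Creating it is the easy part; connecting
// it into the rest of the graph is where I am stuck.
func makeVoiceProcessingUnit() -> AudioUnit? {
    var description = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_VoiceProcessingIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0
    )
    guard let component = AudioComponentFindNext(nil, &description) else { return nil }
    var unit: AudioUnit?
    guard AudioComponentInstanceNew(component, &unit) == noErr else { return nil }
    return unit
}
```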
I would appreciate any suggestion, sample codes, references to tutorials or recommended bibliography.
Thank you very much,
Carles
Is it possible to direct audio output to a Bluetooth speaker in iOS 7 without using the MPVolumeView UI class? (iOS 7 is required for my project.)
I need to do this because I am working on a native extension which will allow an Adobe Air application to send audio over A2DP. As I understand it, native extensions preclude the use of UI classes, so I cannot use MPVolumeView.
There is an overrideOutputAudioPort:error: method on AVAudioSession but the only available overrides are AVAudioSessionPortOverrideNone and AVAudioSessionPortOverrideSpeaker (for the built-in speaker and microphone).
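For reference, this is roughly what that override looks like (sketched in Swift for brevity, even though the extension itself is Objective-C); the point is that the only choices are the built-in speaker or the default routing, not a specific Bluetooth device:

```swift
import AVFoundation

// Sketch: the only port overrides AVAudioSession exposes are .none and
// .speaker, so there is no way here to target a particular A2DP device.
func setSpeakerOverride(_ forceSpeaker: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord) // override only applies to play-and-record
    try session.overrideOutputAudioPort(forceSpeaker ? .speaker : .none)
}
```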
The documentation states that the setOutputDataSource:error: method needs to use an object from the outputDataSources array, which is only supported on “certain USB accessories”.
I understand that Apple prefers applications to allow the user free choice of audio output device, but if I cannot use MPVolumeView, how can I offer this? I’d be happy to code the UI in ActionScript if I could just read and set the available options in the Objective-C extension, but it seems to be impossible at present.
Have I missed anything or are there any other options I could use? Any thoughts or comments appreciated.
Edit:
I was wrong to say that "native extensions preclude the use of UI classes". I am now displaying the MPVolumeView in my AIR application. However, I would still like to know whether it is possible to take more direct control of the audio output route.
Is there a way to programmatically capture what's being played on an iOS device audio output? I want to capture an audio snippet and do some data processing on it.
There is an entire framework devoted to this sort of thing that Apple built in for you, called AVFoundation.
It's extensive and sprawling, so be prepared to do some work, but you'll be able to sample and manipulate the bit-streams directly with all types of audio.
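As a concrete example of the legitimate case (capturing audio that your own app renders), the sketch below installs a tap on an AVAudioEngine's main mixer and hands you PCM buffers to process; the function name and buffer size are just illustrative:

```swift
import AVFoundation

// Sketch: tap everything your own app renders through this engine's main
// mixer and deliver it as PCM buffers for processing. This does not (and
// cannot) capture audio produced by other apps.
func installOutputTap(on engine: AVAudioEngine,
                      handler: @escaping (AVAudioPCMBuffer) -> Void) {
    let mixer = engine.mainMixerNode
    let format = mixer.outputFormat(forBus: 0)
    mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        handler(buffer)
    }
}
```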
If you are trying to have your app run in the background and strip audio from another app (like iTunes) or from the phone itself, put on your black hat, hack into the private APIs of iOS, and release your stuff over in the jailbroken app store, because that sort of thing is explicitly designed not to be possible with legal and legitimate apps.