AudioUnit Echo Cancellation Built-in Feature for iOS

Currently I am developing an iOS app that captures sound using the iPad mic while, at the same time, a sound is being played through the iPad speakers. Since the objective is to process the isolated input sound, the speaker feedback should be removed (cancelled). I looked for documentation on what AudioToolbox offers and found that Audio Units provide built-in features such as echo cancellation. However, I have not managed to configure my audio graph correctly to make it work.
I would appreciate any suggestions, sample code, references to tutorials, or recommended reading.
Thank you very much,
Carles
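
For reference, the echo-cancelling variant of the I/O unit is kAudioUnitSubType_VoiceProcessingIO (rather than the plain Remote I/O unit). A minimal, untested sketch of setting one up in Swift, with the render callbacks for processing the cancelled input omitted:

```swift
import AVFoundation
import AudioToolbox

// Hedged sketch: the voice-processing I/O unit applies echo cancellation
// between the microphone input and the speaker output.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
try? session.setActive(true)

var description = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO, // instead of Remote I/O
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

var unit: AudioUnit?
if let component = AudioComponentFindNext(nil, &description) {
    AudioComponentInstanceNew(component, &unit)
}

if let unit = unit {
    // Enable input on bus 1 (the microphone); output on bus 0 is on by default.
    var enable: UInt32 = 1
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1,
                         &enable, UInt32(MemoryLayout<UInt32>.size))
    // Render callbacks for pulling the echo-cancelled input go here.
    AudioUnitInitialize(unit)
    AudioOutputUnitStart(unit)
}
```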

Related

How Can I Create Spatial Audio in iOS with Swift?

I'm trying to get spatial audio working using Dolby Atmos media files.
Question:
Do I need to use SceneKit, RealityKit, and/or CoreMotion with CMHeadphoneMotionManager to build this functionality, or does it come out of the box simply by setting allowedAudioSpatializationFormats with AVPlayer?
More Info:
The app I'm trying to build is a spatial music app, not a game or something using 3D objects.
The Apple documentation is unclear as to how to actually achieve this.
As an example: use AirPods Pro with a custom iOS app to hear spatial audio, i.e. music with instruments surrounding the listener.
Thanks!
I am going to make a small deduction, because I haven't researched this much yet, but according to the documentation you can choose the audio type. My deduction (you noted using AirPods Pro to hear the spatial audio) is that it's hardware-related. However:
If the device doesn’t support spatial audio, it falls back to mono.
You can use AVAudioEngine. Here is a reference: https://developer.apple.com/library/archive/samplecode/AVAEGamingExample/Introduction/Intro.html
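
As a sketch of the AVPlayer route mentioned in the question: allowedAudioSpatializationFormats is a property of AVPlayerItem (iOS 14+), and opting in looks roughly like this. The URL below is a placeholder, and whether audio is actually spatialized still depends on the output hardware and the source material:

```swift
import AVFoundation

// Hedged sketch: opt an AVPlayerItem in to spatialized playback (iOS 14+).
// The URL is hypothetical; spatialization still depends on the hardware
// (e.g. AirPods Pro) and on the media itself (e.g. Dolby Atmos content).
let url = URL(string: "https://example.com/atmos-track.m4a")! // placeholder
let item = AVPlayerItem(url: url)
item.allowedAudioSpatializationFormats = .monoStereoAndMultichannel

let player = AVPlayer(playerItem: item)
player.play()
```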

Is it possible to play audio through speakers and headphones at the same time as of iOS 11?

Is it possible to play audio through speakers and headphones at the same time as of iOS 11? Right now I am trying to find an app on the App Store that can do this; so far I have had no luck. I have found a few other threads, but they don't seem to work for me, or they say it can't be done. Here are a few of the links:
https://developer.apple.com/library/content/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/AudioSessionCategoriesandModes/AudioSessionCategoriesandModes.html
https://stackoverflow.com/a/35009801
https://apple.stackexchange.com/questions/48534/is-it-possible-to-play-sound-through-both-the-headphone-jack-and-my-internal-spe
iOS: Is it possible to send audio out both headphones and speakers at the same time?
Has anyone tried to play sounds through the speakers and headphones at the same time? Android + iOS
What is really strange is that I cannot find any official documentation saying whether this is possible, only that 'The built-in speaker may be used only if no other eligible output ports (USB, HDMI, LineOut) are connected.'
If you need code let me know, but really I am just wondering whether it is possible and what the best route for doing this would be. The key thing is that I want to do it simultaneously. Thanks.
So after some further review, I would like to save everyone the headaches that I had to go through. Is it possible? Probably, but Apple as of right now does not supply any way to do this. There seems to be no official documentation on whether it can be done (please update this answer if you find some).
The best evidence I have to support my answer is that Apple states: 'Important: The built-in speaker may be used only if no other eligible output ports (USB, HDMI, LineOut) are connected.' Also, neither method in the solutions posted here works; I have tested both with an iPhone 7 on iOS 11.
NOTE: in the second method, I think the writer was confusing 'channels' with 'routes'. He is changing and selecting channels (which would be like the left, right, or middle speaker on devices that support it), whereas I was looking to change the route (play through the speaker, the headphones, HDMI, USB, etc.). So, in my opinion, NO, YOU CANNOT play on the headphones and the speaker simultaneously (whether different sounds or the same), unless you are Apple or I am wrong.
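
To make the channel-versus-route distinction concrete, here is the route-level control the public API does expose. Note there is no variant that addresses two output routes at once (a sketch of the existing API, not a workaround):

```swift
import AVFoundation

// Sketch: route-level control via AVAudioSession. You can force the
// built-in speaker or fall back to the default route (e.g. headphones),
// but no public call targets both routes simultaneously.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
try? session.setActive(true)

try? session.overrideOutputAudioPort(.speaker) // force the built-in speaker
// try? session.overrideOutputAudioPort(.none) // restore the default route
```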

Approximating audio latency in iOS for airplay, bluetooth and related technologies

Is there a method of determining the delay in audio when playing to bluetooth (or some other device, like airplay) on iOS?
I've searched and found a few things. The Advanced Audio Distribution (A2DP) specification, for example, makes several references to "delay" reporting, but I'm not clear how to access this from iOS in the more general case of audio playback to some device.
If there were such a method, iOS would presumably use it to keep audio and video in sync during video playback, which it doesn't seem to do. However, I do see references to other systems being able to compensate for this (e.g. the Android YouTube player apparently can: Detect or Approximate Bluetooth Latency on Android (Audio Playback)).
I came across this issue myself. AVAudioSession provides properties for outputLatency as well as inputLatency. I do see differences in the values when I connect a pair of Bluetooth headphones versus using the iPhone microphone and speaker, though I cannot speak to how accurate these numbers are.
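
A minimal sketch of reading those properties:

```swift
import AVFoundation

// Sketch: the reported I/O latency changes with the active route
// (e.g. Bluetooth headphones vs. the built-in mic and speaker).
let session = AVAudioSession.sharedInstance()
try? session.setActive(true)
print("Output latency: \(session.outputLatency) s")
print("Input latency:  \(session.inputLatency) s")
```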

How can I capture what's being played on an iOS device's audio output and store it in an object?

Is there a way to programmatically capture what's being played on an iOS device's audio output? I want to capture an audio snippet and do some data processing on it.
There is an entire framework devoted to this sort of thing that Apple built in for you, called AVFoundation.
It's extensive and sprawling, so be prepared to do some work, but you'll be able to sample and manipulate the bit-streams directly for all types of audio.
If you are trying to have your app run in the background and strip audio from another app (like iTunes) or from the phone itself, put on your black hat, hack into the private APIs of iOS, and release your stuff over in the jailbroken app store, because that sort of thing is explicitly designed to be impossible for legal and legitimate apps.
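
As a sketch of what is possible within your own app: AVAudioEngine can install a tap on a node you are playing through and hand you the raw PCM buffers for processing (capturing another app's output is not possible with public APIs):

```swift
import AVFoundation

// Hedged sketch: tap the audio your OWN app is playing.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)
// (Schedule a file or buffer on `player` before expecting any audio.)

let format = engine.mainMixerNode.outputFormat(forBus: 0)
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // buffer.floatChannelData exposes the raw samples for processing.
    print("Captured \(buffer.frameLength) frames")
}
try? engine.start()
```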

Live streaming Audio from iOS

I have tried to live stream audio (AAC-LC) from iOS for 3 months without much success...
I tried Audio Queues, which work well, but there is a strange delay (~4 s) and I don't know why (a consequence of the high-level API?).
I tried Audio Units; they sometimes work in the simulator but never on the phone, using modified code from this source.
I am really lost; can anyone help me?
EDIT
I need to build a live streaming application (iPhone -> Wowza Server via RTSP). The video part works well with little delay (~1 s). Now I'm trying to add audio alongside the video, but I'm stuck with the SDK.
tl;dr: I need to capture microphone input and then send AAC frames over the network without incurring a huge delay.
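
For reference, one capture-and-encode path with public APIs looks roughly like this: an AVAudioEngine microphone tap feeding AVAudioConverter. This is an untested sketch; framing and sending the AAC packets over the network (e.g. RTSP) is omitted:

```swift
import AVFoundation

// Hedged sketch: tap the microphone and convert PCM buffers to AAC-LC.
let engine = AVAudioEngine()
let input = engine.inputNode
let inputFormat = input.outputFormat(forBus: 0)

// Describe the AAC-LC output format (same sample rate and channel count as input).
var aacDesc = AudioStreamBasicDescription(
    mSampleRate: inputFormat.sampleRate,
    mFormatID: kAudioFormatMPEG4AAC,
    mFormatFlags: 0, mBytesPerPacket: 0, mFramesPerPacket: 1024,
    mBytesPerFrame: 0, mChannelsPerFrame: inputFormat.channelCount,
    mBitsPerChannel: 0, mReserved: 0)
guard let aacFormat = AVAudioFormat(streamDescription: &aacDesc),
      let converter = AVAudioConverter(from: inputFormat, to: aacFormat) else {
    fatalError("could not create AAC converter")
}

input.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { pcmBuffer, _ in
    let packet = AVAudioCompressedBuffer(format: aacFormat,
                                         packetCapacity: 8,
                                         maximumPacketSize: converter.maximumOutputPacketSize)
    var consumed = false
    var error: NSError?
    _ = converter.convert(to: packet, error: &error) { _, status in
        // Feed each tapped buffer exactly once, then signal "no more for now".
        if consumed { status.pointee = .noDataNow; return nil }
        consumed = true
        status.pointee = .haveData
        return pcmBuffer
    }
    // packet.data now points at raw AAC packets, ready to be framed and sent.
}
try? engine.start()
```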
This app, which I just now completed, broadcasts audio between any two iOS devices on the same network:
https://drive.google.com/open?id=1tKgVl0X92SYvgpvbljRzilXNQ6iBcjqM
Compile it with the latest beta release of Xcode 9, and run it on two iOS 11 (beta) devices.
The app is simple; you launch it, and then start talking. Everything is automatic, from network connectivity to audio streaming.
Events generated by the app are displayed in an event log within the app.
Even though the code is simple and concise, the event log is provided to make the app's architecture quicker and easier to understand.
