Is there any relationship between an AVAudioEngine and an AVAudioSession? - ios

I understand that this question might get a bad rating, but I've been looking at questions which ask how to reroute audio output to the loudspeaker on iOS devices.
In every question I looked at, the answers talked about using AVAudioSession to reroute it. However, I'm not using an AVAudioSession; I'm using an AVAudioEngine.
So basically my question is, even though I'm using an AVAudioEngine, should I still have an AVAudioSession?
If so, what is the relationship between these two objects? Or is there a way to connect an AVAudioEngine to an AVAudioSession?
If this is not the case, and there is no relation between an AVAudioEngine and an AVAudioSession, then how do you reroute audio so that it plays out of the main speakers on an iOS device rather than the earpiece?
Thank you!

AVAudioSession is specific to iOS and coordinates audio playback between apps, so that, for example, audio is stopped when a call comes in, or music playback stops when the user starts a movie. This API is needed to make sure an app behaves correctly in response to such events.
AVAudioEngine is a modern Objective-C API for playback and recording. It provides a level of control for which you previously had to drop down to the C APIs of the Audio Toolbox framework (for example, with real-time audio tasks). The audio engine APIs are built to interface well with lower-level APIs, so you can still drop down to Audio Toolbox if you have to.
The basic concept of this API is to build up a graph of audio nodes, ranging from source nodes (players and microphones), through processing nodes (mixers and effects), to destination nodes (hardware outputs). Each node has a certain number of input and output busses with well-defined data formats. This architecture makes it very flexible and powerful. And it even integrates with audio units.
So there is no inclusive relationship between the two; they address different concerns.
Source Link : https://www.objc.io/issues/24-audio/audio-api-overview/
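To make the node-graph idea concrete, here is a minimal sketch (the node names are just illustrative) that attaches a player node and connects it to the engine's main mixer, which in turn feeds the hardware output:

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

engine.attach(player)                              // source node
engine.connect(player, to: engine.mainMixerNode,   // the mixer feeds the output node
               format: nil)

do {
    try engine.start()
    // schedule a file or buffer on `player`, then call player.play()
} catch {
    print("Engine failed to start: \(error)")
}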

Yes, it is not clearly documented; however, I found this note in the iOS developer documentation:
AVFoundation playback and recording classes automatically activate your audio session.
Document Link : https://developer.apple.com/library/content/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/ConfiguringanAudioSession/ConfiguringanAudioSession.html
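For the original question (forcing playback to the loudspeaker while using an AVAudioEngine), the usual pattern is to configure the shared AVAudioSession before starting the engine. A minimal sketch, assuming a play-and-record setup:

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // .playAndRecord routes output to the receiver (earpiece) by default;
    // .defaultToSpeaker forces it to the built-in loudspeaker instead.
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
    try session.setActive(true)
    // Alternatively, override the route at runtime:
    // try session.overrideOutputAudioPort(.speaker)
} catch {
    print("Audio session setup failed: \(error)")
}

let engine = AVAudioEngine()
// ... attach and connect nodes as usual; the engine renders through
// whatever route the session has negotiated.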
I hope this may help you.

Related

iOS: Is it possible to record from multiple microphones at the same time

All the recent iPhones have 2+ microphones. Is it possible to record from all the microphones at the same time? If this is possible, what is the best iOS audio library for this (AudioKit, EzAudio, AudioUnits, CoreAudio)?
There is no mention of this functionality in AudioKit and EzAudio.
I don't see anything in the documentation about multi-mic audio capture being possible. They specify that you can choose a specific microphone but not that you can select more than one simultaneously. AVAudioSession is also a singleton.
Seemingly, at least as of iOS 10, AVCaptureSession also only allows one audio or video input concurrently.
Since you can record stereo audio, you can definitely record from multiple microphones at once. Furthermore, since noise cancellation likely uses the one or more microphones not participating in the stereo recording, the device is likely "recording", or at least using, all of the microphones at once.
However, I think the main crux of the questions is if we can get the audio input of each microphone separately at the same time. As Dare points out, the standard API does not support that.
However, assuming there is a one-to-one mapping from microphone source (e.g. top/bottom) to audio channel (left/right), a theoretical solution exists…
Simply record in stereo, then separate out the left/right channels, and voilà, you can grab the input from each microphone separately. I have not tested this yet, but in theory it seems like it should work.
If you specifically want to know which channel corresponds to which microphone, you’ll likely need to inspect the device orientation, and have a table of where the microphones are located based on device type. For example:
if orientation == .landscapeLeft && device == .iPhoneX {
    print("the right audio channel source is the Face ID microphone")
    print("the left audio channel source is the dock connector microphone")
} // …
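If you go the stereo-recording route, a hedged sketch of splitting the two channels with an AVAudioEngine input tap might look like this (it assumes the session has actually negotiated a stereo input; many built-in routes are mono):

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)   // whatever the hardware delivers

input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    guard let channels = buffer.floatChannelData,
          buffer.format.channelCount >= 2 else { return }
    let frames = Int(buffer.frameLength)
    let left  = UnsafeBufferPointer(start: channels[0], count: frames)
    let right = UnsafeBufferPointer(start: channels[1], count: frames)
    // Given a one-to-one mic-to-channel mapping, `left` and `right`
    // now hold the per-microphone samples.
    _ = (left, right)
}

try? engine.start()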

Is there any possibility to read the frequency of the currently playing song with Swift?

I'm new to iOS programming and I don't know where to start. I found code examples showing how to read frequencies from the microphone with the AudioKit framework, but this is not what I am looking for. Is it possible to retrieve the frequency of the currently playing song in real time without using a microphone?
Thank you for help.
The iOS security sandbox prevents apps from capturing general audio output of any other app, such as the Music app.
Certain music apps, such as GarageBand, might share inter-app audio, but this isn't supported by the majority of apps that output "songs".
An app might play the "song" itself, for example via an AVAudioPlayer or an AVAudioEngine player node, and tap that output to get raw sample data for spectral frequency and pitch analysis (two very different things, by the way).
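A hedged sketch of that last idea: play the file yourself, tap the mixer, and hand the raw samples to whatever FFT or pitch estimator you choose (the analysis itself is omitted here):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

// Tap the mixer to receive the samples the app itself is rendering.
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    // Feed `samples` (buffer.frameLength frames) to an FFT for spectral
    // frequency, or to a pitch estimator, e.g. using vDSP from Accelerate.
    _ = samples
}

do {
    try engine.start()
    // player.scheduleFile(...), then player.play()
} catch {
    print("Engine failed to start: \(error)")
}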

Audio record and play simultaneously

I am trying to develop an iOS app which reads sound from the microphone, apply some effects and play it through the headset instantly, may be with some acceptable delay.
Is this possible? As a first step, I am trying to play the sound received from the microphone in my headset at the same time, but I am struggling to do so.
I was able to record the sound, save it, and then play it back easily, but I couldn't easily find relevant questions or articles. Any ideas or links are much appreciated.
I did check Apple's aurioTouch, but I couldn't find simultaneous record and play of the same signal in it.
Request the shortest buffers possible using the audio session APIs (less than 6 ms is possible on most iOS devices). Then feed the raw audio samples you get from the RemoteIO recording callbacks to the buffers in the RemoteIO play callbacks, possibly using a lock-free circular FIFO in between.
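That answer describes the raw RemoteIO approach. As a higher-level sketch of the same round trip, AVAudioEngine can wire the input straight to the output in a few lines; this trades some latency control for simplicity, and it assumes headphones are plugged in so the speaker does not feed back into the microphone:

import AVFoundation

// Request short I/O buffers before starting the engine.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord)
try? session.setPreferredIOBufferDuration(0.005)   // ask for roughly 5 ms buffers
try? session.setActive(true)

let engine = AVAudioEngine()
let input = engine.inputNode
// Microphone into the mixer, which feeds the output node. An effect
// node (e.g. AVAudioUnitReverb) could be inserted in between.
engine.connect(input, to: engine.mainMixerNode,
               format: input.outputFormat(forBus: 0))

try? engine.start()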

Problems Recording and Playing Back Audio Simultaneously

I'm having some trouble working with the iOS Audio frameworks to create a simple app. I would like to record audio through the Microphone and play it back to the user while recording.
I have tried each of the audio framework layers (AVFoundation, the Audio Queue API, and RemoteIO), but have only found old documentation and broken examples. It seems like a simple request that AVFoundation should handle, but I have explored the following other SO questions and still find myself circling for hours to get the hang of this. Here is what I have reviewed:
iOS: Sample code for simultaneous record and playback (other SO users also state that the accepted answer is not concrete and is difficult to implement, even with a delay of ~70 ms).
Record and play audio Simultaneously (From 2010 and very high level, I have downloaded the sample code and can't find a working example that does simultaneous playback and recording).
Adjust the length of an AudioUnit Buffer (RemoteIO is so confusing to me right now, is this really required?)
I have also downloaded and reviewed both the SpeakHere and aurioTouch sample projects from Apple. I promise I wouldn't post this without hours of googling and struggling; you can see that "record audio and playback iOS simultaneously" returns many dated and non-working examples. I know that I and the community could really benefit from some updated documentation and examples in the audio section. RemoteIO seems to be too advanced for such a simple task. Thanks again for your help and consideration.
The appropriate way to do this is via AudioUnit APIs, even though it seems like a common scenario which should be handled by higher level APIs.
I wrote a small demo app using AudioUnit. You're free to try it out and modify it to suit your purpose. The demo app does record audio and play it back simultaneously, but it's recommended to use earphones to hear the effect.
The RemoteIO Audio Unit is the only way to play back what is being recorded with low latency. RemoteIO is low latency because it runs its audio callbacks on a separate, dedicated real-time thread, which is why it is fast, but also why it is a bit more complex to code. All the other iOS audio APIs are built on top of RemoteIO and thus add latency.
You will also need to configure the app's Audio Session APIs to request low latency with the appropriate audio session type. The foreground app can request and get audio input and output latencies as low as 5.6 milliseconds on most iOS devices most of the time.
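A minimal sketch of that session configuration; the preferred values are only requests, so read back what the hardware actually granted:

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord)
    try session.setPreferredIOBufferDuration(0.005)   // request short buffers
    try session.setActive(true)
    print("buffer:", session.ioBufferDuration,
          "input latency:", session.inputLatency,
          "output latency:", session.outputLatency)
} catch {
    print("Audio session setup failed: \(error)")
}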

iOS6 multi route audio

iOS 6.0 brought "multi-route audio" support to the iPhone / iPad.
The djay app, for example, benefits from it by allowing the user to hear one deck in the headphones while playing the other.
The only mention of it is in the AVAudioSession class reference:
AVAudioSessionCategoryMultiRoute
Allows you to output distinct streams of audio data to different output devices at the same time. For example, you would use this category to route audio to both a USB device and a set of headphones. Use of this category requires a more detailed knowledge of, and interaction with, the capabilities of the available audio routes.
This category may be used for input, output, or both.
Available in iOS 6.0 and later.
How to route two distinct streams to different routes? Especially using Remote I/O?
Thanks!
Answering my own question: there's actually no information in the iOS Developer Library, but fortunately all the info needed is in the WWDC developer sessions.
Search for: WWDC 2012 Session 505: Audio Session and Multiroute Audio in iOS by Torrey Holbrook Walker.
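As a starting point, a hedged sketch of activating the multi-route category and listing the outputs it exposes; actually steering distinct streams to particular outputs is then done per route (with Remote I/O, reportedly via a channel map), as covered in that WWDC session:

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.multiRoute)
    try session.setActive(true)
    // Inspect what is available, e.g. headphones plus a USB interface,
    // and how many channels each output offers.
    for output in session.currentRoute.outputs {
        print(output.portType, output.portName,
              output.channels?.count ?? 0, "channel(s)")
    }
} catch {
    print("Multi-route activation failed: \(error)")
}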
I hope that may help somebody else.
