iOS: Is it possible to record from multiple microphones at the same time?

All the recent iPhones have 2+ microphones. Is it possible to record from all the microphones at the same time? If this is possible, what is the best iOS audio library for this (AudioKit, EzAudio, AudioUnits, CoreAudio)?
There is no mention of this functionality in AudioKit and EzAudio.

I don't see anything in the documentation about multi-mic audio capture being possible. They specify that you can choose a specific microphone but not that you can select more than one simultaneously. AVAudioSession is also a singleton.
Seemingly, at least as of iOS 10, AVCaptureSession also only allows one audio or video input concurrently.
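For reference, here is a minimal, hedged sketch of what the public API does let you do: selecting one preferred input and one data source on it at a time (picking the bottom microphone here is just an arbitrary example, not something from the question):

import AVFoundation

// Hedged sketch: AVAudioSession lets you pick ONE preferred input and ONE
// data source on it, but offers no way to activate several at once.
let session = AVAudioSession.sharedInstance()
if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
    try? session.setPreferredInput(builtInMic)
    // Choosing the bottom mic is an arbitrary example of a single data source.
    if let bottom = builtInMic.dataSources?.first(where: { $0.orientation == .bottom }) {
        try? builtInMic.setPreferredDataSource(bottom)
    }
}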

Since you can record stereo audio, you can definitely record from multiple microphones at once. Furthermore, since noise cancellation likely uses the microphone(s) not participating in the stereo recording, the device is probably "recording", or at least using, all microphones at once.
However, I think the main crux of the question is whether we can get the audio input of each microphone separately at the same time. As Dare points out, the standard API does not support that.
However, assuming there is a one-to-one mapping from microphone source (e.g. top/bottom) to audio channel (left/right), a theoretical solution exists:
Simply record in stereo, then separate out the left and right channels, and voilà, you can grab the input from each microphone separately. I have not tested this yet, but in theory it seems like it should work.
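As a rough illustration of that idea, here is an untested sketch (assuming the input really is two-channel on your device) that taps the input node in its native format and copies each channel out separately:

import AVFoundation

// Untested sketch: tap the input and, if it is stereo, split the deinterleaved
// buffer into one array per channel (i.e. per microphone, under the
// one-to-one mic-to-channel assumption above).
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.inputFormat(forBus: 0) // may or may not be 2-channel

input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    guard let channels = buffer.floatChannelData, buffer.format.channelCount >= 2 else { return }
    let frames = Int(buffer.frameLength)
    let left  = Array(UnsafeBufferPointer(start: channels[0], count: frames))
    let right = Array(UnsafeBufferPointer(start: channels[1], count: frames))
    // Process the two mono streams independently here.
    _ = (left, right)
}

try? engine.start()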
If you specifically want to know which channel corresponds to which microphone, you’ll likely need to inspect the device orientation, and have a table of where the microphones are located based on device type. For example:
if orientation == landscapeLeft && device == iPhoneX {
    print("the right audio channel source is the Face ID microphone")
    print("the left audio channel source is the dock connector microphone")
} …

Related

Is there any relationship between an AVAudioEngine and an AVAudioSession?

I understand that this question might get a bad rating, but I've been looking at questions which ask how to reroute audio output to the loudspeaker on iOS devices.
In every question I looked at, the user talked about using AVAudioSession to reroute it. However, I'm not using AVAudioSession, I'm using an AVAudioEngine.
So basically my question is, even though I'm using an AVAudioEngine, should I still have an AVAudioSession?
If so, what is the relationship between these two objects? Or is there a way to connect an AVAudioEngine to an AVAudioSession?
If this is not the case, and there is no relation between an AVAudioEngine and an AVAudioSession, then how do you reroute audio so that it plays out of the main speakers on an iOS device rather than the earpiece?
Thank you!
AVAudioSession is specific to iOS and coordinates audio playback between apps, so that, for example, audio is stopped when a call comes in, or music playback stops when the user starts a movie. This API is needed to make sure an app behaves correctly in response to such events.
AVAudioEngine is a modern Objective-C API for playback and recording. It provides a level of control for which you previously had to drop down to the C APIs of the Audio Toolbox framework (for example, with real-time audio tasks). The audio engine APIs are built to interface well with lower-level APIs, so you can still drop down to Audio Toolbox if you have to.
The basic concept of this API is to build up a graph of audio nodes, ranging from source nodes (players and microphones), through processing nodes (mixers and effects), to destination nodes (hardware outputs). Each node has a certain number of input and output busses with well-defined data formats. This architecture makes it very flexible and powerful. And it even integrates with audio units.
So there is no inclusion relationship between the two classes.
Source Link : https://www.objc.io/issues/24-audio/audio-api-overview/
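A minimal sketch of that node-graph idea (the file name is a placeholder, not from the question): a player node is attached to the engine, connected to the main mixer, and the mixer feeds the hardware output.

import AVFoundation

// Minimal node-graph sketch: player -> main mixer -> hardware output.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

if let url = Bundle.main.url(forResource: "example", withExtension: "caf"), // placeholder file
   let file = try? AVAudioFile(forReading: url) {
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
    try? engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
}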
It is not clearly documented; however, I found this note in the iOS developer documentation.
AVFoundation playback and recording classes automatically activate your audio session.
Document Link : https://developer.apple.com/library/content/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/ConfiguringanAudioSession/ConfiguringanAudioSession.html
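Putting the two together, a hedged sketch of how rerouting to the loudspeaker could look: the engine itself has no routing API, so the shared session is configured (and, if needed, overridden to the speaker) before the engine is started.

import AVFoundation

// Hedged sketch: AVAudioEngine plays through whatever route the shared
// AVAudioSession has negotiated, so the speaker override happens on the session.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord)
    try session.overrideOutputAudioPort(.speaker) // loudspeaker instead of earpiece
    try session.setActive(true)
    // ...configure and start your AVAudioEngine afterwards...
} catch {
    print("Audio session setup failed: \(error)")
}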
I hope this may help you.

How can I convert a stereo (two channels) audio file to a mono (one channel) audio file in iOS?

Currently, I'm recording audio on the Apple Watch using the presentAudioRecorderController and sending it to the corresponding iPhone app.
The audio format by default seems to be recorded on the Apple Watch using two channels and there doesn't seem to be a way to change that.
For various reasons, I need the audio format to be only one channel. I was hoping there would be something on iOS to convert the audio file from stereo (two channels) to mono (one channel).
Right now, I'm investigating if this is possible using AVAudioEngine and other various related classes. I'm not sure if it's possible and it seems very complicated at the moment.
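One possible approach, sketched here without having tested it against the Watch recordings: read the stereo file with AVAudioFile and average the two channels into a mono file (the URLs and the chunk size are placeholders, not anything from the original post).

import AVFoundation

// Untested sketch: downmix a stereo file to mono by averaging the two channels.
func convertToMono(input inputURL: URL, output outputURL: URL) throws {
    let inputFile = try AVAudioFile(forReading: inputURL)
    let stereoFormat = inputFile.processingFormat
    guard stereoFormat.channelCount >= 2,
          let monoFormat = AVAudioFormat(standardFormatWithSampleRate: stereoFormat.sampleRate,
                                         channels: 1) else { return }
    let outputFile = try AVAudioFile(forWriting: outputURL, settings: monoFormat.settings)

    let capacity: AVAudioFrameCount = 4096 // arbitrary chunk size
    guard let inBuffer = AVAudioPCMBuffer(pcmFormat: stereoFormat, frameCapacity: capacity),
          let outBuffer = AVAudioPCMBuffer(pcmFormat: monoFormat, frameCapacity: capacity) else { return }

    while inputFile.framePosition < inputFile.length {
        try inputFile.read(into: inBuffer)
        guard let inData = inBuffer.floatChannelData, let outData = outBuffer.floatChannelData else { return }
        let frames = Int(inBuffer.frameLength)
        // Average left and right samples into the single output channel.
        for i in 0..<frames {
            outData[0][i] = 0.5 * (inData[0][i] + inData[1][i])
        }
        outBuffer.frameLength = AVAudioFrameCount(frames)
        try outputFile.write(from: outBuffer)
    }
}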

Audio record and play simultaneously

I am trying to develop an iOS app which reads sound from the microphone, applies some effects, and plays it through the headset instantly, maybe with some acceptable delay.
Is this possible? As a first step, I am trying to play the sound received from the microphone in my headset at the same time, but I'm struggling to do so.
I was able to record the sound, save it, and then play it easily, but I couldn't easily find relevant questions or articles. Any ideas or links are much appreciated.
I did check Apple's aurioTouch, but I couldn't find simultaneous record and playback of the same signal there.
Request the shortest buffers possible using the audio session APIs (less than 6 ms is possible on most iOS devices). Then feed the raw audio samples you get from the RemoteIO recording callbacks to the buffers in the RemoteIO play callbacks, possibly using a lock-free circular FIFO in between.
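A rough modern sketch of the same idea using AVAudioEngine instead of raw RemoteIO (the ~5 ms buffer duration is only a request; the hardware may give you more):

import AVFoundation

// Rough sketch: request a short I/O buffer and route the mic straight to the
// output for near-real-time monitoring; insert effect nodes on the path as needed.
func startMonitoring(with engine: AVAudioEngine) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setPreferredIOBufferDuration(0.005) // ~5 ms requested, not guaranteed
    try session.setActive(true)

    let input = engine.inputNode
    // An AVAudioUnitEffect (e.g. reverb or EQ) could be inserted between these nodes.
    engine.connect(input, to: engine.mainMixerNode, format: input.inputFormat(forBus: 0))
    try engine.start() // keep a strong reference to the engine while monitoring
}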

iPhone headphone audio jack re-routing

We created an external iOS notification light that uses the device’s audio for power.
When you get a phone call on the iPhone and the light is plugged in, you still get the ringtone, but when you pick up, the audio is rerouted to the headphones (the iPhone thinks our light/device is a headphone set) and the user has to pull the myLED out by at least 2 mm to get the audio from the front receiver of the phone.
We have been exploring alternative solutions to this challenge - recently we made a prototype with a particular jack shape so that it could be rotated by the user when getting a call to "reroute" the audio to the iPhone speaker/mic.
Although it may sound like a clever option, this hardware solution is far from neat - it leads to positions where the myLED does not work or is unreliable, plus other complications.
I know of the existence of kAudioSessionOverrideAudioRoute_Speaker; however, I suspect that this will only direct the app audio to the rear speaker (the "loud" one) and not to the front receiver (because the "receiver" for the iPhone is the headphone set if one is detected).
What would you suggest?
Super appreciated!
I think you're in a tough spot:
It's highly unlikely Apple will ever release the option to override audio routing for phone calls. As a key functionality of the phone, they tend to keep the call aspect under lock and key.
The headphone jack (probably - this is how most of them do it) uses the impedance between ground and one or both speakers or the remote control to determine if the plug is in. Other than breaking the circuit, there is no good way to simulate this.
The only options I think you have are these:
Require the user to remove the device when a call comes in.
Provide a microcontroller on the jack to drive a transistor; this transistor can electronically break the circuit to provide the same sort of impedance signature as an unplugged jack.
How, when, and if you can provide the information to the jack that a phone call is in progress is beyond my knowledge: is there an API for "incoming but not yet answered call" you can hook to? Will you have to do a watchdog thing to ensure communication with your app? Would it be possible for you to use the dock connector instead? I think these are really your options. Not a complete answer, but those are my thoughts.
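On the "incoming but not yet answered call" question specifically: CallKit's CXCallObserver (iOS 10+) can report call state changes, so something along these lines might serve as the trigger, though whether it fires early enough for the hardware is untested (the class name and the print are purely illustrative):

import CallKit

// Untested sketch: watch call state changes and react when a call is ringing.
final class CallWatcher: NSObject, CXCallObserverDelegate {
    private let observer = CXCallObserver()

    override init() {
        super.init()
        observer.setDelegate(self, queue: nil)
    }

    func callObserver(_ callObserver: CXCallObserver, callChanged call: CXCall) {
        if call.isOutgoing == false && call.hasConnected == false && call.hasEnded == false {
            // An incoming call is ringing: signal the accessory here.
            print("Incoming call ringing")
        }
    }
}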

Echoprint not recognizing a single song

Echoprint “listens” to audio on a phone or on your computer to figure out what song it is. It does so very fast and with such good accuracy that it can identify very noisy versions of the original or recordings made on a mobile device with a lot of interference from outside sources.
I compiled the iOS example provided on the website. So far so good.
Sadly, Echoprint failed to recognize any song via the iPhone's microphone (recording time up to 1 minute).
On the other hand, it was capable of recognizing songs by "uploading" them directly from the iPhone's media library.
Any idea, what the problem could be?
Echoprint is not intended to work over the air, at least not with the given configuration. You can adapt the code, focusing on the matching functions (best_match), to get some results for an over-the-air configuration. The current best_match function returns a song only if it is really close to the reference, which won't happen with songs recorded through the phone's microphone. Also consider recording a longer segment.
I think the problem is the sampling rate at which the song is being recorded. If it's 8 kHz it probably won't work; it has to be at least 11 kHz.
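If the low sample rate really is the culprit, here is a hedged sketch of requesting a higher capture rate before recording (22050 Hz is an arbitrary choice above the 11 kHz minimum mentioned; the hardware may still settle on a nearby rate):

import AVFoundation

// Hedged sketch: ask the session for a capture rate above 11 kHz before recording.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.record)
    try session.setPreferredSampleRate(22_050)
    try session.setActive(true)
    print("Actual sample rate: \(session.sampleRate) Hz")
} catch {
    print("Failed to configure audio session: \(error)")
}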
