From what I understand, the iPhone 5 has 3 separate microphones (see here). Is it possible to record audio from all 3 mics simultaneously? I've been digging through the documentation, and I've started digging into RemoteIO and Core Audio, but I can't figure out whether it's even possible to specify which built-in microphone to record from. Does anyone have any experience with this, or know if it's even possible?
Thanks in advance.
EDIT: Pi's comment below is probably correct: you can select which mic to record from, but you can't record from multiple mics at the same time.
Apple documentation says it's possible since iOS 7:
Using APIs introduced in iOS 7, developers can perform tasks such as
locating a port description that represents the built-in microphone,
locating specific microphones like the "front", "back" or "bottom",
setting your choice of microphone as the preferred data source,
setting the built-in microphone port as the preferred input and even
selecting a preferred microphone polar pattern if the hardware
supports it. See AVAudioSession.h.
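A rough Swift sketch of the route that quote describes, locating the built-in microphone port and preferring its front data source (the orientation names assume a device that actually exposes front/back/bottom sources; check `dataSources` at runtime):

```swift
import AVFoundation

func selectFrontMicrophone() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    // Find the port description that represents the built-in microphone.
    guard let builtInMic = session.availableInputs?
        .first(where: { $0.portType == .builtInMic }) else { return }

    // Pick the data source whose orientation is "front", if the hardware has one.
    if let frontSource = builtInMic.dataSources?
        .first(where: { $0.orientation == .front }) {
        try builtInMic.setPreferredDataSource(frontSource)
    }

    // Make the built-in microphone the preferred input port.
    try session.setPreferredInput(builtInMic)
}
```

Note that this selects one data source; as discussed above, it does not let you capture all three microphones at once.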
Related
I'm trying to find a way to get the average power level for a channel of the audio coming out of an embedded video. I'm using YouTube's iOS helper library for embedding the video: https://developers.google.com/youtube/v3/guides/ios_youtube_helper
A lot of the answers I've found on Stack Overflow refer to AVAudioPlayer, but that's not my case. I also looked through the docs of the AudioKit framework for something that could give the output level of the current audio, but I couldn't find anything related; maybe I missed something there. I also looked at the EZAudio framework, even though it's deprecated, and again couldn't find anything that applies to my case.
My line of thinking was to find a way to read the actual level coming out of the device, but I found one answer on SO saying this is not allowed on iOS, although the author didn't cite any source for that statement.
https://stackoverflow.com/a/12664340/4711172
So, any help would be much appreciated.
When using App Store-permitted public APIs, the iOS security sandbox blocks apps from seeing the device's digital audio output stream, or any other app's internal audio output, unless that audio is explicitly shared (e.g. via Inter-App Audio).
(Just a guess, but this was probably originally implemented in iOS to prevent apps from capturing samples of DRM'd music and/or recording phone conversations.)
Might be a bit off/weird, but just in case:
Have you considered closing the loop? That is, record the incoming audio using AVAudioRecorder and get the audio levels from there.
See Apple's documentation for AVAudioRecorder; the overview states that you can "Obtain input audio-level data that you can use to provide level metering".
AVAudioRecorder documentation
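A minimal sketch of that idea, assuming you can tolerate actually recording to a file in order to get the levels (the file name and settings here are arbitrary):

```swift
import AVFoundation

let url = FileManager.default.temporaryDirectory.appendingPathComponent("level.m4a")
let settings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 1
]

let recorder = try AVAudioRecorder(url: url, settings: settings)
recorder.isMeteringEnabled = true   // must be enabled before metering works
recorder.record()

// Poll periodically (e.g. from a Timer or CADisplayLink):
recorder.updateMeters()
let averageDb = recorder.averagePower(forChannel: 0)  // 0 dB = full scale, negative below
let peakDb = recorder.peakPower(forChannel: 0)
```

Keep in mind this meters the microphone input, not the app's output, so it only approximates the video's level via the loop described above.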
I'm working on a project which needs the iPhone to detect a small set of voltage data (two different voltage values representing 0 and 1 respectively). My initial thought was to detect it through the microphone, but I'm not sure whether the iPhone can pick up these signals, since they carry no frequency information. I searched the iOS Developer resources and Google, but found nothing clear about this problem. Can anyone help me with this question? Thanks a lot!
As per our discussion, it seems you want to send a digital signal to the iPhone. There are two main ways to do this:
Sign up for Apple's MFi hardware licensing program, which allows you to create hardware accessories for the iPhone (MFi program); or,
Use the headphone jack. For demonstration and testing purposes this is much easier: if it's a simple on/off signal, you can get good results through the headphone jack and might not need to build dedicated hardware at all. Here's a link to that approach: Grab sensor data and send it through the iPhone headphone jack
In fact, the headphone approach is not as bad as it may sound; you can receive a clean signal if needed, and it will suffice for your purposes. Have a look at what this guy is doing; I suggest you start with the video demonstration to get an overall idea of the approach.
UPDATE 1
Have a look at this link. People are using the headphone port to detect voltage. This works because the iPhone jack is a combined earpiece (output-only) and microphone (input-only) connector. The microphone input is a single wire, with a ground common to the earpieces.
The usual problem when trying to use an audio input as a digital signal input is that it is high-pass filtered to avoid DC offsets (long-term pressure changes that would eat into the dynamic range of the sound card's analog-to-digital converter). This means you will only see the changes of the digital signal (as "clicks"). You are actually better off the higher the frequency of your signal is, but processing it well then becomes non-trivial (this is what ordinary modems did very well).
If you can control the digital signal sent as sound (or into the microphone jack), make sure the signal is modulated with one or more tones, like a Morse transmitter or a modem. Essentially you want to use your iPhone as an acoustic coupler. This is possible, but of course a bit computationally expensive considering the quite low bandwidth.
Start with some dirt-simple modulation, like two tones at 1200 and 1800 Hz, and run an FFT (or a per-tone filter) over the input. There is probably a reference implementation for an ordinary sound card out there to get started with.
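The tone-detection step can be sketched with the Goertzel algorithm, which is cheaper than a full FFT when you only care about one or two frequencies (the 1200/1800 Hz values below are just the example tones suggested above; window length and thresholding are up to you):

```swift
import Foundation

/// Signal power of `samples` at `targetHz`, via the Goertzel algorithm.
func goertzelPower(samples: [Float], sampleRate: Float, targetHz: Float) -> Float {
    let n = Float(samples.count)
    let k = (n * targetHz / sampleRate).rounded()   // nearest DFT bin
    let omega = 2 * Float.pi * k / n
    let coeff = 2 * cos(omega)
    var s1: Float = 0, s2: Float = 0
    for x in samples {
        let s0 = x + coeff * s1 - s2
        s2 = s1
        s1 = s0
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2
}

/// Decide which of the two FSK tones dominates one symbol window.
func demodulateBit(window: [Float], sampleRate: Float) -> Int {
    let mark = goertzelPower(samples: window, sampleRate: sampleRate, targetHz: 1200)
    let space = goertzelPower(samples: window, sampleRate: sampleRate, targetHz: 1800)
    return mark > space ? 1 : 0
}
```

Feeding successive fixed-length windows of microphone samples through `demodulateBit` recovers the bit stream, provided the sender holds each tone for at least one window.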
There have been some clever viruses recently that were said to be able to jump air gaps; they used techniques similar to this one.
If you still really want a DC (or slowly changing) digital signal, check out a solution using a separate oscillator that is amplitude-modulated by the incoming signal.
I am developing an architecture for a digital audio workstation that runs on iOS (mainly, but I'm trying to support OS X too). I'm working slowly through miles of Apple documentation and framework references.
I have experience with DSP, but iOS is newer to me, and there are so many objects, tutorials (even for older versions of iOS) and different frameworks with different APIs. I would just like to make sure I choose the right one, or the right combination, from the start.
The goals of the architecture are:
Sound track sample access (access samples in files)
iPod library songs
local file songs
songs on remote server
radio stations (infinite length songs)
Effect chaining (multiple equalizers, or pitch & tempo change at the same time)
Multiple channels and mixing (even surround)
Portability
Mac OS X at least
iOS 6+ support (iOS 5 or lower not needed)
Sample access in 32-bit floats, not signed integers.
Easy Objective-C API (DSP and processing done in C++ of course)
Recording, playing
Record to file (codec by choice), or send over network (VoIP)
Playing on different outputs (on Mac) or speakers/headphones on iOS
Changing of volume/mute
Background audio support
Real-time sample processing
Equalizer on any song that is currently played
Real-time sample manipulation
Multi-threading
I hope I did not miss anything, but those are the most important goals.
My research
I have looked through most of the frameworks (though not in great detail) and here is what I have figured out. Apple lists the following frameworks for audio on iOS:
Media Player framework
AV Foundation framework
Audio Toolbox framework
Audio Unit framework
OpenAL framework
Media Player and AV Foundation are too high-level APIs and do not allow direct sample access. OpenAL, on the other hand, cannot record audio. That leaves the Audio Toolbox and Audio Unit frameworks. Many of the differences are explained here: What's the difference between all these audio frameworks?
As far as I can tell, Audio Toolbox would be the way to go, since MIDI is currently not required. But there is very little information and there are few tutorials on using Audio Toolbox for more professional control, such as recording, playing, etc. There is much more on Audio Units, though.
My first question: what exactly are Audio Queue Services, and which framework do they belong to?
And then the final question:
Which framework should be used to be able to achieve most of the desired goals?
You can even suggest a mix and match of frameworks and classes, but I kindly ask you to explain your answer and describe in more detail which classes you would use to achieve each goal. I prefer the highest-level API possible, but as low-level as needed to achieve the goals. Links to sample code are also welcome.
Thank you very much for your help.
Audio Units is the lowest-level iOS audio API, and the one Audio Queues are built on. Audio Units will give an app the lowest latency, and thus the closest to real-time processing possible. It is a C API, though, so an app may have to do some of its own audio memory management.
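For orientation, here is a hedged sketch of the basic RemoteIO setup in Swift (RemoteIO is iOS-only; on OS X you would use a different output subtype, and a real app would also set stream formats and pass real state through `inputProcRefCon` instead of `nil`):

```swift
import AudioToolbox

// Describe the RemoteIO output unit.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)

let component = AudioComponentFindNext(nil, &desc)!
var unit: AudioUnit?
AudioComponentInstanceNew(component, &unit)

// Render callback: the system pulls samples from here in real time,
// so the body must not block or allocate.
let render: AURenderCallback = { _, _, _, _, inNumberFrames, ioData in
    // Fill ioData with inNumberFrames frames of audio (or silence).
    return noErr
}
var callback = AURenderCallbackStruct(inputProc: render, inputProcRefCon: nil)
AudioUnitSetProperty(unit!, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0, &callback,
                     UInt32(MemoryLayout<AURenderCallbackStruct>.size))

AudioUnitInitialize(unit!)
AudioOutputUnitStart(unit!)
```

Everything else in the goal list (mixing, effect chains, taps for metering) hangs off this pull model: the callback is where samples flow through your C/C++ DSP code.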
The AVFoundation framework may provide an app with easier access to music library assets.
An app can only process sound from other apps that explicitly publish their audio data, which does not include the Music player app, but does include some of the apps using Apple's Inter-App Audio API, and the 3rd party Audiobus API.
I am creating a musical app which generates some music. I have already used MIDI functions on the Mac to create a MIDI file with MIDI events (unfortunately, I don't remember the names of those functions).
I am looking for a way to create instrumental notes (MIDI or anything else) programmatically in order to play them. I would also like to have multiple channels playing those notes at the same time.
I already tried SoundBankPlayer, but apparently it can't play multiple instruments at the same time.
Have you got an idea?
This answer might be a bit more work than you intended, but you can use Pure Data (Pd) on iOS to do this. More precisely, you can use libpd for iOS for the synthesis, and then use any number of community-donated patches for the sound you're looking for.
In iOS 5 and later, MusicSequence, MusicTrack, and MusicPlayer will do what you want.
http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/MusicSequence_Reference/Reference/reference.html#//apple_ref/doc/uid/TP40009331
Check out the AUSampler audio unit for iOS. You'll probably have to delve into Core Audio, which has some learning curve. ;)
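A bare-bones sketch of the MusicSequence/MusicPlayer route in Swift (error codes from the C calls are ignored here for brevity; notes on different MIDI channels can target different instruments once channels are routed to a sampler):

```swift
import AudioToolbox

var sequence: MusicSequence?
NewMusicSequence(&sequence)

var track: MusicTrack?
MusicSequenceNewTrack(sequence!, &track)

// Middle C on channel 0, velocity 64, one beat long, starting at beat 0.
var note = MIDINoteMessage(channel: 0, note: 60, velocity: 64,
                           releaseVelocity: 0, duration: 1.0)
MusicTrackNewMIDINoteEvent(track!, 0.0, &note)

var player: MusicPlayer?
NewMusicPlayer(&player)
MusicPlayerSetSequence(player!, sequence)
MusicPlayerPreroll(player!)
MusicPlayerStart(player!)
```

By default the sequence plays through a built-in synth; attaching it to an AUGraph with AUSampler nodes is what gets you your own instrument sounds per channel.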
I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack my own low-level sound engine using Audio Units, specifically RemoteIO, so that I manually mix all the sounds and populate the final output buffer by hand, and can therefore save that buffer to a file. This could then be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me the previously played data, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I wrote tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. More than that, it's not that hard to set up and work with. If you're looking for a library to do most of the work for you, though, you should look for one written on top of Core Audio, not OpenAL.
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.