iOS: Audio Units vs OpenAL vs Core Audio

Could someone explain to me how OpenAL fits in with the schema of sound on the iPhone?
There seem to be APIs at different levels for handling sound. The higher level ones are easy enough to understand.
But my understanding gets murky towards the bottom. There is Core Audio, Audio Units, OpenAL.
What is the connection between these? Is OpenAL the substratum upon which Core Audio rests (with Audio Units as one of its lower-level objects)?
OpenAL doesn't seem to be documented by Xcode, yet I can run code that uses its functions.

This is what I have figured out:
The substratum is Core Audio. Specifically, Audio Units.
So Audio Units form the base layer, and some low-level framework has been built on top of this. And the whole caboodle is termed Core Audio.
OpenAL is a multiplatform API -- the creators are trying to mirror the portability of OpenGL. A few companies are sponsoring OpenAL, including Creative Labs and Apple!
So Apple has provided this API, basically as a thin wrapper over Core Audio. I am guessing this is to allow developers to pull over code easily. Be warned that it is an incomplete implementation: if you want OpenAL to do something that Core Audio can do, it will do it, but otherwise it won't.
Kind of counterintuitive -- just looking at the source, it looks as if OpenAL is lower level. Not so!

Core Audio covers a lot of things, such as reading and writing various file formats, converting between encodings, pulling frames out of streams, etc. Much of this functionality is collected as the "Audio Toolbox". Core Audio also offers multiple APIs for processing streams of audio, for playback, capture, or both.

The lowest-level one is Audio Units, which works with uncompressed (PCM) audio and has some nice stuff for applying effects, mixing, etc. Audio Queues, implemented atop Audio Units, are a lot easier because they work with compressed formats (not just PCM) and save you from some threading challenges. OpenAL is also implemented atop Audio Units; you still have to use PCM, but at least the threading isn't scary.

The difference is that since OpenAL is not from Apple, its programming conventions are totally different from Core Audio and the rest of iOS (most obviously, it's a push API: if you want to stream with OpenAL, you poll your sources to see if they've exhausted their buffers and push in new ones; by contrast, Audio Queues and Audio Units are pull-based, in that you get a callback when new samples are needed for playback).
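Roughly, the two styles look like this (a minimal sketch; the buffer-refill details depend on your source data):

```objc
#import <OpenAL/al.h>
#import <AudioToolbox/AudioToolbox.h>

// OpenAL "push": you poll the source and push fresh buffers in yourself.
void pumpOpenALSource(ALuint source) {
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buf;
        alSourceUnqueueBuffers(source, 1, &buf);
        // ...refill buf with fresh PCM via alBufferData()...
        alSourceQueueBuffers(source, 1, &buf);
    }
}

// Audio Queue "pull": the system calls you back when it wants more samples.
static void aqOutputCallback(void *userData, AudioQueueRef queue,
                             AudioQueueBufferRef buffer) {
    // ...copy the next chunk of audio into buffer->mAudioData and set
    //    buffer->mAudioDataByteSize...
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}
```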
Higher level, as you've seen, is nice stuff like Media Player and AV Foundation. These are a lot easier if you're just playing a file, but probably aren't going to give you deep enough access if you want to do some kind of effects, signal processing, etc.

Related

How can I use AVAudioPlayer to play audio faster *and* higher pitched?

Statement of Problem:
I have a collection of sound effects in my app stored as .m4a files (AAC format, 48 kHz, 16-bit) that I want to play at a variety of speeds and pitches, without having to pre-generate all the variants as separate files.
Although the .rate property of an AVAudioPlayer object can alter playback speed, it always maintains the original pitch, which is not what I want. Instead, I simply want to play the sound sample faster or slower and have the pitch go up or down to match — just like speeding up or slowing down an old-fashioned reel-to-reel tape recorder. In other words, I need some way to essentially alter the audio sample rate by amounts like +2 semitones (12% faster), –5 semitones (33% slower), +12 semitones (2x faster), etc.
Question:
Is there some way to fetch the Linear PCM audio data from an AVAudioPlayer object, apply sample rate conversion using a different iOS framework, and stuff the resulting audio data into a new AVAudioPlayer object, which can then be played normally?
Possible avenues:
I was reading up on AudioConverterConvertComplexBuffer. In particular, kAudioConverterSampleRateConverterComplexity_Mastering, kAudioConverterQuality_Max, and AudioConverterFillComplexBuffer() caught my eye. So it looks possible with this audio conversion framework. Is this an avenue I should explore further?
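If I understand it right, that avenue would look something like the sketch below: resample to a scaled rate, then play the result tagged with the original sample rate, so speed and pitch shift together. (This is my own illustration of the idea, not tested code.)

```objc
#import <AudioToolbox/AudioToolbox.h>
#include <math.h>

// Sketch only: build an AudioConverter whose output rate is scaled by the
// desired semitone shift. Feed it with AudioConverterFillComplexBuffer(),
// then play the output as if it were still at the original sample rate to
// get the tape-speed effect (faster + higher, or slower + lower).
AudioConverterRef makeVarispeedConverter(AudioStreamBasicDescription srcFmt,
                                         double semitones) {
    AudioStreamBasicDescription dstFmt = srcFmt;
    // +2 semitones -> fewer output samples -> plays ~12% faster at 48 kHz
    dstFmt.mSampleRate = srcFmt.mSampleRate / pow(2.0, semitones / 12.0);

    AudioConverterRef conv = NULL;
    AudioConverterNew(&srcFmt, &dstFmt, &conv);

    UInt32 complexity = kAudioConverterSampleRateConverterComplexity_Mastering;
    AudioConverterSetProperty(conv, kAudioConverterSampleRateConverterComplexity,
                              sizeof(complexity), &complexity);
    UInt32 quality = kAudioConverterQuality_Max;
    AudioConverterSetProperty(conv, kAudioConverterSampleRateConverterQuality,
                              sizeof(quality), &quality);
    return conv;
}
```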
Requirements:
I actually don't need playback to begin instantly. If sample rate conversion incurs a slight delay, that's fine. All of my samples are 4 seconds or less, so I would imagine that any on-the-fly resampling would occur quickly, on the order of 1/10 second or less. (More than 1/2 second would be too much, though.)
I'd really rather not get into heavyweight stuff like OpenAL or Core Audio if there is a simpler way to do this using a conversion framework provided by iOS. However, if there is a simple solution to this problem using OpenAL or Core Audio, I'd be happy to consider that. By "simple" I mean something that can be implemented in 50–100 lines of code and doesn't require starting up additional threads to feed data to the sound device. I'd rather just have everything taken care of automatically — which is why I'm willing to convert the audio clip prior to playing.
I want to avoid any third-party libraries here, because this isn't rocket science and I know it must be possible with native iOS frameworks somehow.
Again, I need to adjust the pitch and playback rate together, not separately. So if playback is slowed down 2x, a human voice would become very deep and slow-spoken. And if playback is sped up 2–3x, a human voice would sound like a fast-talking chipmunk. In other words, I absolutely do not want to alter the pitch while keeping the audio duration the same, because that operation results in an undesirably "tinny" sound when bending the pitch upward more than a couple semitones. I just want to speed the whole thing up and have the pitch go up as a natural side-effect, just like old-fashioned tape recorders used to do.
Needs to work in iOS 6 and up, although iOS 5 support would be a nice bonus.
The forum link Jack Wu mentions has one suggestion, which involves overriding the AIFF header data directly. This may work, but you will need to have AIFF files since it relies on a specific range of the AIFF header to write into. This also needs to be done before you create the AVAudioPlayer, which means that you can't modify the pitch once it is running.
If you are willing to go the AudioUnits route, a complete simple solution is probably ~200 lines (note that this assumes the code style that has one function call take up to 7 lines, with one parameter on each line). There is a Varispeed AudioUnit, which does exactly what you want by locking pitch to rate. You would basically need to look at the API, the docs, and some sample AudioUnit code to get familiar, and then (a compressed sketch follows after these steps):
create/init the audio graph and stream format (~100 lines)
create and add to the graph a RemoteIO AudioUnit (kAudioUnitSubType_RemoteIO) (this outputs to the speaker)
create and add a varispeed unit, and connect the output of the varispeed unit (kAudioUnitSubType_Varispeed) to the input of the RemoteIO Unit
create and add to the graph an AudioFilePlayer (kAudioUnitSubType_AudioFilePlayer) unit to read the file and connect it to the varispeed unit
start the graph to begin playback
when you want to change the pitch, do it via AudioUnitSetParameter, and the pitch and playback rate change will take effect while playing
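A compressed sketch of those steps, using the AUGraph API (error checking and the AudioFilePlayer file-scheduling calls, which account for most of the ~200 lines, are omitted):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Builds file player -> varispeed -> speaker and starts playback.
// Returns the varispeed unit so its rate can be changed while playing.
static AudioUnit buildVarispeedGraph(AUGraph *outGraph) {
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription io = { kAudioUnitType_Output,
        kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription vs = { kAudioUnitType_FormatConverter,
        kAudioUnitSubType_Varispeed, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription fp = { kAudioUnitType_Generator,
        kAudioUnitSubType_AudioFilePlayer, kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode ioNode, vsNode, fpNode;
    AUGraphAddNode(graph, &io, &ioNode);
    AUGraphAddNode(graph, &vs, &vsNode);
    AUGraphAddNode(graph, &fp, &fpNode);
    AUGraphOpen(graph);

    AUGraphConnectNodeInput(graph, fpNode, 0, vsNode, 0);  // player -> varispeed
    AUGraphConnectNodeInput(graph, vsNode, 0, ioNode, 0);  // varispeed -> speaker

    AudioUnit varispeedUnit;
    AUGraphNodeInfo(graph, vsNode, NULL, &varispeedUnit);

    // ...open the audio file and schedule it on the AudioFilePlayer unit here...

    AUGraphInitialize(graph);
    AUGraphStart(graph);

    // Pitch and speed change together; 2.0 = twice as fast, one octave up.
    AudioUnitSetParameter(varispeedUnit, kVarispeedParam_PlaybackRate,
                          kAudioUnitScope_Global, 0, 2.0, 0);

    *outGraph = graph;
    return varispeedUnit;
}
```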
Note that there is a TimePitch audio unit which allows independent control of pitch and rate, as well.
For iOS 7, you'd want to look at AVPlayerItem's time-pitch algorithm property (audioTimePitchAlgorithm) and its AVAudioTimePitchAlgorithmVarispeed setting. Unfortunately this feature is not available on earlier systems.
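A hedged sketch of that iOS 7 route (soundURL is a placeholder for your own file URL, and you must keep a strong reference to the returned player):

```objc
#import <AVFoundation/AVFoundation.h>

AVPlayer *playVarispeed(NSURL *soundURL) {
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:soundURL];
    // With the Varispeed algorithm, rate changes shift the pitch too.
    item.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed;
    AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
    player.rate = 1.122f;  // +2 semitones: starts playback faster and higher together
    return player;
}
```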

MTAudioProcessingTap on iOS - Should I use VLC or build a video player from scratch?

I'm trying to build an iOS app that plays video files and does some interesting things using MTAudioProcessingTap. I need it to be able to play all sorts of formats, including some that are not supported by Apple. I'm thinking of branching out from VLC, but I can't figure out if it uses Core Audio/Video at any point or if it's running something else completely.
If it's not, is there a library I can use to take care of the 203572964 codecs being used out there?
Thanks.
Preliminary note: I'm the developer of VLC for iOS so the following may be biased.
MobileVLCKit for iOS includes 2 different audio output modules. One of them is a high-level module based on AudioQueue, which is fairly simple but a bit slow. The other is based on AudioUnit, the low-level framework of Core Audio; it is quite a bit more complex, but way faster. Depending on your current experience, either module would be a good way to start.
Regarding the one-library-supporting-all-codecs thing: basically there are two forks of the same library: libav and FFmpeg. VLC supports either flavor and abstracts the complexity and the ever-changing APIs (which are a real pain if you intend to keep maintaining your app across multiple releases of those libraries). Additionally, we include a quite well-performing OpenGL ES 2 video output module which uses shaders to do chroma conversion. All you need to do is embed a UIView; MobileVLCKit handles the rest.
Speaking of MobileVLCKit: this is a thin ObjC layer on top of libvlc that simplifies the use of that library in third-party applications by abstracting the most commonly used features.
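A minimal sketch of that workflow (class and selector names are from the MobileVLCKit releases of this era and may differ in newer ones; the URL is just an example):

```objc
#import <UIKit/UIKit.h>
#import <MobileVLCKit/MobileVLCKit.h>

void startPlayback(UIView *videoView) {
    VLCMediaPlayer *player = [[VLCMediaPlayer alloc] init];
    player.drawable = videoView;  // a plain UIView already in your view hierarchy
    player.media = [VLCMedia mediaWithURL:
                       [NSURL URLWithString:@"http://example.com/movie.mkv"]];
    [player play];
    // In real code, keep a strong reference to player (e.g. a property)
    // so it isn't deallocated while playing.
}
```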
As implicitly mentioned by HalR, libvlc does not use hardware-accelerated decoding on iOS yet. We are working with the libav developers on a generic approach, but we are not quite there yet. Thus, we have to do all the decoding on the CPU, which leads to the heating but allows us to play virtually anything, not just the H264/MP4 content supported by the default, accelerated API.
If you can't figure out how it's playing the video at its lower levels, that perhaps is a sign that you should keep working with it instead of trying to outdo it. Video processing is pretty difficult, and often unsupported formats are unsupported due to patent issues. I really haven't seen anything better than VLC that is publicly available.
VLC 2.1.x appears to use AudioToolbox and AVFoundation.
One other issue, though, is that when I was doing work with VLC, I was stunned at how it turned my iPod Touch into a miniature iron, because it was working so hard to process the video. Manually processing video is very processor intensive and really is a drain. So whether you roll your own or use VLC, you could still have some additional issues.

Digital Audio Workstation Architecture on iOS

I am developing an architecture for a digital audio workstation that works on iOS (mainly, but I'm trying to support OS X too). I'm slowly working through miles of Apple documentation and their framework references.
I have experience with DSP, but iOS is newer to me, and there are so many objects, tutorials (even for older versions of iOS), and different frameworks with different APIs. I would just like to make sure I choose the right one, or the right combination, from the start.
The goals of the architecture are:
Sound track sample access (access samples in files)
    iPod library songs
    local file songs
    songs on a remote server
    radio stations (infinite-length songs)
Effect chaining (multiple equalizers, or pitch & tempo change at the same time)
Multiple channels and mixing (even surround)
Portability
    Mac OS X at least
    iOS 6+ support (iOS 5 or lower not needed)
Sample access in 32-bit floats, not signed integers
Easy Objective-C API (DSP and processing done in C++, of course)
Recording, playing
    Record to file (codec by choice), or send over network (VoIP)
    Playing on different outputs (on Mac) or speakers/headphones on iOS
    Changing of volume/mute
Background audio support
Real-time sample processing
    Equalizer on any song that is currently being played
    Real-time sample manipulation
Multi-threading
I hope I did not miss anything, but those are the most important goals.
My research
I have looked through most of the frameworks (not so much in detail though) and here is what I have figured out. Apple lists the following frameworks for using audio on iOS:
Media Player framework
AV Foundation framework
Audio Toolbox framework
Audio Unit framework
OpenAL framework
Media Player and AV Foundation are too high-level APIs and do not allow direct sample access. OpenAL, on the other hand, cannot record audio. So that leaves the Audio Toolbox and Audio Unit frameworks. Many of the differences are explained here: What's the difference between all these audio frameworks?
As far as I can understand, Audio Toolbox would be the way to go, since MIDI is currently not required. But there is very little information, and few tutorials, on Audio Toolbox for more professional control, such as recording, playing, etc. There is much more on Audio Units, though.
My first question: what exactly are Audio Queue Services, and which framework do they belong to?
And then the final question:
Which framework should be used to be able to achieve most of the desired goals?
You can even suggest a mix and match of frameworks and classes, but I kindly ask you to explain your answer and describe in more detail which classes you would use to achieve each goal. I encourage using the highest-level API possible, but going as low-level as needed to achieve the goals. Sample code links are also welcome.
Thank you very much for your help.
Audio Units is the lowest-level iOS audio API, and the API that Audio Queues are built upon. Audio Units will provide an app with the lowest latency, and thus the closest to real-time processing possible. It is a C API, though, so an app may have to do some of its own audio memory management.
The AVFoundation framework may provide an app with easier access to music library assets.
An app can only process sound from other apps that explicitly publish their audio data, which does not include the Music player app, but does include some of the apps using Apple's Inter-App Audio API, and the 3rd party Audiobus API.
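To give a sense of what that C API looks like, here is a minimal sketch of a RemoteIO render callback where every sample can be touched in real time. It assumes a Float32 stream format has already been set on the unit (one of the stated goals); myEQProcess() is a placeholder for your own DSP, not a real API.

```objc
#import <AudioToolbox/AudioToolbox.h>

static void myEQProcess(Float32 *samples, UInt32 frames) { /* your DSP here */ }

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        Float32 *samples = (Float32 *)ioData->mBuffers[b].mData;
        // 1. fill `samples` with the next inNumberFrames frames of your mix
        // 2. process them in place, in real time
        myEQProcess(samples, inNumberFrames);
    }
    return noErr;
}

// Attach it to the input scope of the RemoteIO unit's output element (bus 0):
void attachRenderCallback(AudioUnit remoteIOUnit) {
    AURenderCallbackStruct cb = { renderCallback, NULL };
    AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));
}
```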

Virtual Instrument App Recording Functionality With RemoteIO

I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (ie. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack together my own low-level sound engine using Audio Units, specifically RemoteIO, so that I manually mix all the sounds, populate the final output buffer by hand, and can hence save said buffer to a file. This could then be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me the previously played data in the buffer, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I wrote tons of code and read lots of documentation from third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. What's more, it's not that hard to set up and work with. If, however, you're looking for a library to do most of the work for you, look for one written on top of Core Audio, not OpenAL.
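If you do go the RemoteIO route, here is a hedged sketch of the "sniff what goes to the speakers" idea: the render callback where you build the final mix is also where you can append those exact samples to a file. (This is also why the buffer in the question is always zeros: nothing ends up in a render callback's buffer unless that callback writes it there.) gRecordFile is assumed to be an ExtAudioFileRef created elsewhere with ExtAudioFileCreateWithURL() and primed with an initial ExtAudioFileWriteAsync() call before recording starts.

```objc
#import <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef gRecordFile = NULL;  // assumed set up and primed elsewhere

static OSStatus mixAndRecordCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
    // 1. mix your instrument voices into ioData here
    // 2. append the exact samples that are about to hit the speaker
    if (gRecordFile) {
        ExtAudioFileWriteAsync(gRecordFile, inNumberFrames, ioData);
    }
    return noErr;
}
```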
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.

Reading raw audio in iphone SDK

Hi
I want to read raw audio and then perform some operations on it. What is the best way to do this?
Audio File Services, Audio Converter Services, and Extended Audio File Services, all in Core Audio. AV Foundation + Core Media (specifically AVAssetReader) may also be an option, but it's really new, and therefore even less documented and less well understood than Core Audio at this point.
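For example, a minimal sketch of the Extended Audio File Services route, assuming a readable file URL and a mono 32-bit float client format:

```objc
#import <AudioToolbox/AudioToolbox.h>

void readSomePCM(CFURLRef fileURL) {
    ExtAudioFileRef file = NULL;
    if (ExtAudioFileOpenURL(fileURL, &file) != noErr) return;

    // Ask for decoded float PCM regardless of the file's codec.
    AudioStreamBasicDescription clientFmt = {0};
    clientFmt.mSampleRate       = 44100.0;
    clientFmt.mFormatID         = kAudioFormatLinearPCM;
    clientFmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    clientFmt.mChannelsPerFrame = 1;
    clientFmt.mBitsPerChannel   = 32;
    clientFmt.mBytesPerFrame    = 4;
    clientFmt.mFramesPerPacket  = 1;
    clientFmt.mBytesPerPacket   = 4;
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFmt), &clientFmt);

    Float32 samples[4096];
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = 1;
    bufList.mBuffers[0].mDataByteSize   = sizeof(samples);
    bufList.mBuffers[0].mData           = samples;

    UInt32 frames = 4096;
    ExtAudioFileRead(file, &frames, &bufList);  // frames now holds the count read
    // ...operate on samples[0..frames-1]...

    ExtAudioFileDispose(file);
}
```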
If you are looking for sample code, "Audio Graph" is a good starting point. The developer has provided a bit of his own documentation that will help you quite a bit.
It will depend on the use for the audio. If latency is an issue, go for Audio Units. But if it is not, a higher layer may be the one you require, such as Audio Queues.
