How does one allocate memory in an Audio Unit extension (realtime process) for audio file playback? (iOS)

I am working on creating an Audio Unit v3. I have made decent progress up to this point: I made a host app that loads all my third-party plugins, and I created an AUv3 plugin that processes audio and can be loaded by other hosts.
I now want to make an AU that loads audio data from disk and scans that data at random positions with sample precision (timestretching, granular stuff, etc.). I thought it would be a cool sample-playback addition to contribute to AudioKit.
So this would be making something at the level of AKSampler in the AudioKit framework. While looking through the AK source, I feel like I am missing something.
While browsing GitHub, I ended up at places like this:
https://github.com/AudioKit/AudioKit/tree/118affee65f4c9b8d4780ef1a03a6d03004bbcee/AudioKit/Common/Nodes/Playback/Samplers
then I looked in here:
https://github.com/AudioKit/AudioKit/blob/118affee65f4c9b8d4780ef1a03a6d03004bbcee/AudioKit/Common/Nodes/Playback/Samplers/Disk%20Streamer/AKDiskStreamerAudioUnit.mm
which brought me here:
https://github.com/AudioKit/AudioKit/blob/d69dabf090a5e78d4495d938bf6c0aea9f672630/AudioKit/Common/Nodes/Playback/Samplers/Disk%20Streamer/AKDiskStreamerDSPKernel.hpp
and then eventually here:
https://github.com/AudioKit/AudioKit/blob/d69dabf090a5e78d4495d938bf6c0aea9f672630/AudioKit/Core/Soundpipe/modules/wavin.c
I am not looking for info about AKSampler specifically, just how audio files are generally loaded and how that fits with the realtime nature of the AU extension process.
I couldn't find any IPC/XPC code anywhere, so I am guessing it's not about circular buffers connecting to other processes or something.
Does AudioKit allocate memory in the realtime process for audio file playback? That would seem to go against all the warnings from experienced audio programmers (articles like http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing), but I can't figure out what is being done in AudioKit, and generally in iOS.
What am I just not understanding or finding? :D

Opening files and allocating memory for file reads should be done outside the real-time audio context, perhaps during UI file selection, never inside an Audio Unit callback.
One way to get random access to samples inside an AU callback is to memory map the file (mmap C API), and then touch every sample in the memory map before passing the memory pointer (unsafe raw etc.) and file length (mapped bounds) to the audio unit. Then you can do virtual random access file reads inside the callback with a fixed latency.
One way to touch every sample in an array is to compute a checksum over it (and perhaps discard the result later). This memory read is required to get the iOS virtual memory system to swap blocks from the file VM into RAM, so that storage system reads won't happen in the real-time context.
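A minimal sketch of that approach in Swift (the function and variable names are my own, and it touches one byte per page rather than literally every sample, on the assumption that faulting each page in is what matters):

```swift
import Darwin
import Foundation

// Sketch: map an audio file read-only and fault every page into RAM now,
// on a non-realtime thread, so the render callback never triggers disk I/O.
func mapAndPrewarm(url: URL) -> (samples: UnsafeRawPointer, byteCount: Int)? {
    let fd = open(url.path, O_RDONLY)
    guard fd >= 0 else { return nil }
    defer { close(fd) }

    var info = stat()
    guard fstat(fd, &info) == 0 else { return nil }
    let byteCount = Int(info.st_size)

    guard let raw = mmap(nil, byteCount, PROT_READ, MAP_PRIVATE, fd, 0),
          raw != MAP_FAILED else { return nil }

    // "Touch every sample": reading one byte per page is enough to make the VM
    // system pull the backing blocks into RAM; the checksum itself is discarded.
    let bytes = raw.assumingMemoryBound(to: UInt8.self)
    let pageSize = Int(getpagesize())
    var checksum: UInt8 = 0
    var offset = 0
    while offset < byteCount {
        checksum ^= bytes[offset]
        offset += pageSize
    }
    _ = checksum

    // Pass this pointer and length to the audio unit; inside the callback you can
    // then read at arbitrary sample offsets with (roughly) fixed latency.
    return (UnsafeRawPointer(raw), byteCount)
}
```

On iOS the mapped pages can still be evicted under memory pressure, so some implementations also mlock() the region, or simply read the whole file into a preallocated buffer up front instead of mapping it.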

Related

iOS Audio Sampler with Volume Envelope

My goal is to create a sampler instrument for iPhone/iOS.
The instrument should play back sound files at different pitches/notes, and it should have a volume envelope.
A volume envelope means that the sound's volume fades in when it starts to play.
I have tried countless ways of creating that. The desired way is to use an AVAudioEngine's AVAudioPlayerNode, then process the individual samples of that node in realtime.
Unfortunately I had no success on that approach so far. Could you give me some pointers on how this works in iOS?
Thanks,
Tobias
PS: I have not learned the Core Audio framework. Maybe it is possible to access an AVAudioNode's Audio Unit to do this job, but I haven't had the time to read into the framework yet.
A more low-level way is to read the audio from the file and process the audio buffers.
You store the ADSR in an array or, better, as a mathematical function that outputs the envelope value for the sample index you pass it (using interpolation), so the envelope maps to any sound's duration.
Then you multiply each audio sample by the returned envelope value to get the enveloped sample.
One way would be to use an AVAudioNode and link a processing node to it.
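To make the envelope-as-a-function idea concrete, here is a small sketch (the breakpoints, names, and linear interpolation are illustrative choices, not any particular library's API):

```swift
// An envelope expressed as a function of normalized playback position (0...1).
struct VolumeEnvelope {
    // Breakpoints as (position, level); a simple attack/decay/sustain/release shape.
    let points: [(position: Float, level: Float)] =
        [(0.0, 0.0), (0.05, 1.0), (0.2, 0.7), (0.8, 0.7), (1.0, 0.0)]

    // Linear interpolation between breakpoints, so the envelope maps to any sound's duration.
    func level(at position: Float) -> Float {
        let p = min(max(position, 0), 1)
        for i in 1..<points.count where p <= points[i].position {
            let (p0, l0) = points[i - 1]
            let (p1, l1) = points[i]
            let t = (p - p0) / (p1 - p0)
            return l0 + (l1 - l0) * t
        }
        return points[points.count - 1].level
    }
}

// Multiply each sample by the envelope value at its normalized position.
func applyEnvelope(_ samples: inout [Float], envelope: VolumeEnvelope) {
    let count = Float(samples.count)
    for i in samples.indices {
        samples[i] *= envelope.level(at: Float(i) / count)
    }
}
```

You would call applyEnvelope on the buffer read from the file (or per render slice, using the slice's position within the sound) before handing the samples to the output.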
I looked at another post of yours; I think AUSampler - Controlling the Settings of the AUSampler in Real Time is what you're looking for.
I haven't used AVAudioUnitSampler yet, but I believe it is just a wrapper for the AUSampler. To configure an AUSampler you must first make and export a preset file on your Mac using AU Lab. This file is a plist which contains file references plus the sampler settings (decay, volume, pitch, cutoff, and all of the good stuff the AUSampler is built for). Then this file is put into your app bundle. You then create a directory named "Sounds", copy all of the referenced audio samples into that folder, and put it in your app bundle as well (as a folder reference). Then you create your audio graph (or in your case AVAudioEngine) and sampler, and load the preset from the preset file in your app bundle. It's kind of a pain. The links I'm providing are what I used to get up and running, but they are a little dated; if I were to start now, I would definitely look into AVAudioUnitSampler first to see if there are easier ways.
To get AU Lab, go to Apple's developer downloads and select "Audio Tools for Xcode". Once downloaded, just open the DMG and drag the folder anywhere (I drag it to my Applications folder). Inside is AU Lab.
Here is a technical note describing how to load presets, another technical note on how to change parameters (such as attack/decay) in real time, and here is a WWDC video that walks you through the whole thing, including the creation of your preset using AU Lab.
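If you do end up trying AVAudioUnitSampler, loading an AU Lab-exported .aupreset looks roughly like this sketch ("MyInstrument" is a placeholder name; the preset's referenced audio files still need to be in the "Sounds" folder reference as described above):

```swift
import AVFoundation

// Wire an AVAudioUnitSampler (the AUSampler wrapper) into AVAudioEngine
// and load an .aupreset from the app bundle.
let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

do {
    if let presetURL = Bundle.main.url(forResource: "MyInstrument", withExtension: "aupreset") {
        // Loads the plist preset; its file references resolve against the bundle.
        try sampler.loadInstrument(at: presetURL)
    }
    try engine.start()
} catch {
    print("Failed to load preset or start engine: \(error)")
}
```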

What exactly is an audio queue processing tap?

These have been around in OS X for a little while now and just recently became available in iOS with iOS 6. I am trying to figure out what they let you do exactly. The idea is that you can tap into an audio queue and process the data before sending it on. Does this mean you can now intercept raw audio coming from other applications (such as the iOS music player) and process it before it plays? In other words, is inter-app audio possible? I have read over the AudioQueue.h file and can't quite figure out what to make of it.
Consider it a mid-level entry point for custom processing (e.g. an insert effect) or for reading (e.g. for analysis or display purposes) a queue's sample data. It is a basic interface for reading or processing an AQ's data.
Does this mean you can now intercept raw audio coming from different applications and process that (such as the iOS music player) before it plays? In other words is inter-app audio possible?
Nope - it's not inter-process; you have no access to other processes' audio queues. These are for your own queues' sample data. They can be used to simplify general audio render or analysis chains (the common case, by app count). My guess is that it was provided because a lot of people wanted an easier entry point for accessing this sample data for processing or analysis. Custom processing entries on iOS can also be more complicated to implement (e.g. AudioUnit availability is restricted).
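For completeness, installing a tap on one of your own queues looks roughly like this sketch in Swift (error handling is omitted, and the queue is assumed to exist already):

```swift
import AudioToolbox

// Tap callback: pull this slice of the queue's samples, then modify or inspect them.
let tapCallback: AudioQueueProcessingTapCallback = { _, tap, numberFrames, timeStamp, _, outNumberFrames, ioData in
    var sourceFlags = AudioQueueProcessingTapFlags()
    // Fills ioData with this queue's audio for the slice, in the tap's processing format.
    _ = AudioQueueProcessingTapGetSourceAudio(tap, numberFrames, timeStamp,
                                              &sourceFlags, outNumberFrames, ioData)
    // ... apply gain, metering, analysis, etc. to the samples in ioData here ...
}

// Install the tap on one of your own queues (post-effects); it never sees other apps' audio.
func installTap(on queue: AudioQueueRef) -> AudioQueueProcessingTapRef? {
    var tap: AudioQueueProcessingTapRef?
    var maxFrames: UInt32 = 0
    var processingFormat = AudioStreamBasicDescription()
    let status = AudioQueueProcessingTapNew(queue, tapCallback, nil, .postEffects,
                                            &maxFrames, &processingFormat, &tap)
    return status == noErr ? tap : nil
}
```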

How do I speed up the loading of audio files so the user is not waiting?

I am building a game that lets users remix songs. I have built a mixer based upon the Apple sample code MixerHost (creating an AUGraph with a mixer audio unit), but expanded to load 12 tracks. Everything is working fine; however, it takes a really long time for the songs to load when the gamer selects the songs they want to remix. This is because the program has to load 12 separate mp4 files into memory before I can start playing the music.
I think what I need is to create an AUFilePlayer audio unit that is in charge of loading the file into the mixer. If the AUFilePlayer can handle loading the file on the fly, then the user will not have to wait for the files to load 100% into memory. My two questions are: 1. Can an AUFilePlayer be used this way? 2. The documentation on AUFilePlayer is very, very thin. Where can I find some example code demonstrating how to implement an AUFilePlayer properly on iOS (not on Mac OS)?
Thanks
I think you're right - in this case a 'direct-from-disk' buffering approach is probably what you need. I believe the correct AudioUnit subtype is AudioFilePlayer. From the documentation:
The unit reads and converts audio file data into its own internal buffers. It performs disk I/O on a high-priority thread shared among all instances of this unit within a process. Upon completion of a disk read, the unit internally schedules buffers for playback.
A working example of using this unit on Mac OS X is given in Chris Adamson's book Learning Core Audio. The code for iOS isn't much different, and is discussed in this thread on the CoreAudio-API mailing list. Adamson's working code example can be found here. You should be able to adapt this to your requirements.
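Adapted to Swift, the core of that approach looks roughly like the sketch below (error handling trimmed; fileAU is assumed to be an already-created Audio Unit of subtype kAudioUnitSubType_AudioFilePlayer, wired into your graph and initialized):

```swift
import AudioToolbox

// Sketch: tell an AudioFilePlayer unit which file to stream and schedule the whole
// file as one region starting "now". The unit does its own buffered disk reads.
func schedule(fileURL: URL, on fileAU: AudioUnit) {
    var audioFile: AudioFileID?
    AudioFileOpenURL(fileURL as CFURL, .readPermission, 0, &audioFile)
    guard var file = audioFile else { return }

    // The unit performs disk I/O for this file on its shared high-priority thread.
    AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFileIDs,
                         kAudioUnitScope_Global, 0,
                         &file, UInt32(MemoryLayout<AudioFileID>.size))

    // Work out how many frames the file contains so the region covers all of it.
    var format = AudioStreamBasicDescription()
    var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    AudioFileGetProperty(file, kAudioFilePropertyDataFormat, &size, &format)
    var packetCount: UInt64 = 0
    size = UInt32(MemoryLayout<UInt64>.size)
    AudioFileGetProperty(file, kAudioFilePropertyAudioDataPacketCount, &size, &packetCount)

    var region = ScheduledAudioFileRegion(
        mTimeStamp: AudioTimeStamp(),
        mCompletionProc: nil,
        mCompletionProcUserData: nil,
        mAudioFile: file,
        mLoopCount: 0,
        mStartFrame: 0,
        mFramesToPlay: UInt32(packetCount) * format.mFramesPerPacket)
    region.mTimeStamp.mFlags = .sampleTimeValid
    region.mTimeStamp.mSampleTime = 0
    AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFileRegion,
                         kAudioUnitScope_Global, 0,
                         &region, UInt32(MemoryLayout<ScheduledAudioFileRegion>.size))

    // A sample time of -1 means "start playing on the next render cycle".
    var startTime = AudioTimeStamp()
    startTime.mFlags = .sampleTimeValid
    startTime.mSampleTime = -1
    AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0,
                         &startTime, UInt32(MemoryLayout<AudioTimeStamp>.size))
}
```

The point for your loading problem is that only the file open and scheduling happen up front; the unit streams and buffers the audio data itself, so you are not waiting for 12 full files to be decoded into memory.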

Example of saving audio from RemoteIO?

I've searched around but haven't found any good examples or tutorials of saving audio out of a RemoteIO Audio Unit.
My setup: Using the MusicPlayer API, I have several AUSamplers -> MixerUnit -> RemoteIO
Audio playback works great. I would like to add functionality to save the audio output to a file. Would I do this in a render callback on the RemoteIO?
Any tips or pointers to example code much appreciated!
Due to the tight latency requirements of Audio Unit callbacks, one should not do any synchronous file access (or any other calls that could potentially block, involve memory management, or take OS locks) inside the RemoteIO callback. Instead, just copy the audio data out to another buffer (a larger circular buffer, for example) and set some state indicating how much data has been copied. Then, in another thread, when the amount of data is sufficient, write the contents of that buffer out to a file. This could be a raw PCM file, which can later be converted by AVAssetReader/Writer into another audio file type.
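Here is a sketch of that pattern (names are illustrative; a production version would use properly synchronized indices or an existing lock-free ring buffer such as TPCircularBuffer rather than this simplified single-producer/single-consumer version):

```swift
import Foundation

// Sketch: the render callback only copies samples into a preallocated ring buffer;
// a background thread drains it to disk.
final class RecordingRingBuffer {
    private let capacity: Int
    private let buffer: UnsafeMutablePointer<Float>
    private var writeIndex = 0   // advanced only by the audio thread
    private var readIndex = 0    // advanced only by the writer thread

    init(capacity: Int) {
        self.capacity = capacity
        self.buffer = UnsafeMutablePointer<Float>.allocate(capacity: capacity)
        self.buffer.initialize(repeating: 0, count: capacity)
    }
    deinit { buffer.deallocate() }

    // Call from the RemoteIO render callback: no locks, no allocation, no file I/O.
    func write(_ samples: UnsafePointer<Float>, count: Int) {
        for i in 0..<count {
            buffer[(writeIndex + i) % capacity] = samples[i]
        }
        writeIndex = (writeIndex + count) % capacity
    }

    // Call periodically from a non-realtime thread to append raw PCM to a file.
    func drain(to handle: FileHandle) {
        let snapshot = writeIndex
        while readIndex != snapshot {
            let chunkEnd = readIndex < snapshot ? snapshot : capacity
            let count = chunkEnd - readIndex
            let data = Data(bytes: buffer + readIndex, count: count * MemoryLayout<Float>.size)
            handle.write(data)
            readIndex = (readIndex + count) % capacity
        }
    }
}
```

The render callback calls write(_:count:) with the samples from ioData, and a timer or dispatch source on a background thread periodically calls drain(to:) to append raw PCM to the file.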

iOS: Modify mic data stream with audio unit?

Could someone explain in terms of Audio Unit connections how to modify the iPhone microphone data stream visible to other processes with gain or EQ? I understand how to use a remote I/O unit to grab mic data and do my processing. I want this new data to replace the original mic data stream, not go to speakers or a file. "Audio Unit Hosting Fundamentals" Figure 1-3 is close.
I have read everything out there on Audio Units and used several of the online examples (Tim B, Play It Loud, Tasty Pixel) but don't see how to do this yet.
Any help?
Thanks
This doesn't seem to be clearly explained or illustrated in the documentation. However, if you look at the aurioTouch sample code, you will see how, within the RemoteIO render callback, it makes a call to retrieve data from the microphone, then optionally processes this data and returns it.
This is kind of doubly useful, because the call to retrieve microphone data fills already-created buffers. That means you don't have to create your own buffers, which is great because that is a bit of a hassle.
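In code, that pattern looks roughly like this sketch (assuming a RemoteIO unit with input enabled and a Float32 non-interleaved stream format; the wrapper class and names are hypothetical, and this is not the actual aurioTouch source):

```swift
import AudioToolbox

// Hypothetical wrapper used to pass the RemoteIO unit through the C callback's refCon.
final class RemoteIORefCon {
    let unit: AudioUnit
    init(unit: AudioUnit) { self.unit = unit }
}

// Render callback in the aurioTouch style: pull the mic samples for this slice,
// process them in place, and return them on the output bus.
let renderCallback: AURenderCallback = { inRefCon, ioActionFlags, inTimeStamp, _, inNumberFrames, ioData in
    guard let ioData = ioData else { return noErr }
    let remoteIOUnit = Unmanaged<RemoteIORefCon>.fromOpaque(inRefCon).takeUnretainedValue().unit

    // Bus 1 is the input element: this fills ioData with the microphone samples,
    // reusing buffers that already exist, so no allocation is needed here.
    let status = AudioUnitRender(remoteIOUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)
    guard status == noErr else { return status }

    // Optional processing, e.g. a simple gain on Float32 non-interleaved data.
    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    for buffer in abl {
        guard let mData = buffer.mData else { continue }
        let samples = mData.assumingMemoryBound(to: Float32.self)
        for i in 0..<Int(inNumberFrames) {
            samples[i] *= 0.5
        }
    }
    return noErr
}
```

You install the callback with kAudioUnitProperty_SetRenderCallback, passing an AURenderCallbackStruct whose inputProcRefCon is an Unmanaged opaque pointer to the wrapper.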
