iOS Audio Sampler with Volume Envelope

My goal is to create a sampler instrument for iPhone/iOS.
The instrument should play back sound files at different pitches/notes, and it should have a volume envelope.
A volume envelope means that the sound's volume fades in when it starts to play.
I have tried countless ways of creating that. The desired way is to use an AVAudioEngine's AVAudioPlayerNode, then process the individual samples of that node in realtime.
Unfortunately I have had no success with that approach so far. Could you give me some pointers on how this works in iOS?
Thanks,
Tobias
PS: I have not learned the Core Audio framework. Maybe it is possible to access an AVAudioNode's Audio Unit to do this job, but I have not had the time to read into the framework yet.

A more low-level way is to read the audio from the file and process the audio buffers yourself.
You store the ADSR envelope in an array or, better, as a mathematical function that outputs the envelope value for the sample index you pass it (using interpolation), so the envelope maps onto any sound's duration.
Then you multiply each audio sample by the returned envelope value to get the processed sample.
One way would be to use an AVAudioNode and link a processing node to it.
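Here is a rough sketch of that buffer-processing idea in Swift with AVAudioEngine (the function name, the linear attack/release shape, and the error handling are illustrative, not any framework API):

import AVFoundation

// Reads a file into a PCM buffer, applies a linear attack/release volume
// envelope to every sample, and returns the processed buffer.
func envelopedBuffer(fileURL: URL, attack: Double, release: Double) throws -> AVAudioPCMBuffer {
    let file = try AVAudioFile(forReading: fileURL)
    let format = file.processingFormat
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError(domain: "Sampler", code: -1)
    }
    try file.read(into: buffer)

    let sampleRate = format.sampleRate
    let attackFrames = Int(attack * sampleRate)
    let releaseFrames = Int(release * sampleRate)
    let totalFrames = Int(buffer.frameLength)

    // Envelope value for a frame index: 0 -> 1 over the attack,
    // 1 -> 0 over the release, 1 in between (linear interpolation).
    func envelope(at frame: Int) -> Float {
        if frame < attackFrames {
            return Float(frame) / Float(max(attackFrames, 1))
        }
        if frame >= totalFrames - releaseFrames {
            return Float(totalFrames - frame) / Float(max(releaseFrames, 1))
        }
        return 1.0
    }

    // Multiply every sample in every channel by its envelope value.
    for channel in 0..<Int(format.channelCount) {
        guard let samples = buffer.floatChannelData?[channel] else { continue }
        for frame in 0..<totalFrames {
            samples[frame] *= envelope(at: frame)
        }
    }
    return buffer
}

You could then schedule the returned buffer on an AVAudioPlayerNode and insert an AVAudioUnitTimePitch between the player and the mixer to transpose it to different notes.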

I looked at another post of yours; I think "AUSampler - Controlling the Settings of the AUSampler in Real Time" is what you're looking for.
I haven't yet used AVAudioUnitSampler, but I believe it is just a wrapper for the AUSampler. To configure an AUSampler you must first make and export a preset file on your Mac using AULab. This file is a plist which contains file references plus the sampler decay, volume, pitch, cutoff and all of the good stuff that the AUSampler is built for. Then this file is put into your app bundle. You then create a directory named "Sounds", copy all of the referenced audio samples into that folder, and put it in your app bundle as well (as a folder reference). Then you create your audio graph (or in your case AVAudioEngine) and sampler and load the preset from the preset file in your app bundle. It's kind of a pain. The links I'm providing are what I used to get up and running, but they are a little dated; if I were to start now I would definitely look into the AVAudioUnitSampler first to see if there are easier ways.
To get AULab, go to Apple's developer downloads and select "Audio Tools for Xcode". Once downloaded, just open the DMG and drag the folder anywhere (I drag it to my Applications folder). Inside is AULab.
Here is a technical note describing how to load presets, another technical note on how to change parameters (such as attack/decay) in real time, and here is a WWDC video that walks you through the whole thing, including the creation of your preset using AULab.
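For reference, here is a minimal sketch of the AVAudioUnitSampler route in Swift; the preset name "MySampler" and the note numbers are placeholders, and the .aupreset plus the referenced "Sounds" folder are assumed to be in the app bundle:

import AVFoundation

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()

engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

do {
    guard let presetURL = Bundle.main.url(forResource: "MySampler", withExtension: "aupreset") else {
        fatalError("Preset not found in bundle")
    }
    try sampler.loadInstrument(at: presetURL)  // loads the AULab-built preset
    try engine.start()
    // The sampler pitches the samples itself; attack/decay come from the preset.
    sampler.startNote(60, withVelocity: 80, onChannel: 0)  // middle C
} catch {
    print("Sampler setup failed: \(error)")
}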

Related

How does one allocate memory in an Audio Unit extension (realtime process) for audio file playback?

I am working on creating an Audio Unit v3. I have made decent progress up to this point: I made a host app that loads all my 3rd-party plugins, and I created an AUv3 plugin that processes audio and can be loaded by other hosts.
I now want to make an AU that loads audio data from disk and scans the data at random positions with sample precision (timestretch, granular stuff, etc.). I thought it would be a cool sample playback addition to contribute to AudioKit.
So this would be making something at the level of AKSampler in the AudioKit framework. While looking through the AK source, I feel like I am missing something.
While browsing github, I ended up at places like here:
https://github.com/AudioKit/AudioKit/tree/118affee65f4c9b8d4780ef1a03a6d03004bbcee/AudioKit/Common/Nodes/Playback/Samplers
then I looked in here:
https://github.com/AudioKit/AudioKit/blob/118affee65f4c9b8d4780ef1a03a6d03004bbcee/AudioKit/Common/Nodes/Playback/Samplers/Disk%20Streamer/AKDiskStreamerAudioUnit.mm
which brought me here:
https://github.com/AudioKit/AudioKit/blob/d69dabf090a5e78d4495d938bf6c0aea9f672630/AudioKit/Common/Nodes/Playback/Samplers/Disk%20Streamer/AKDiskStreamerDSPKernel.hpp
and then eventually here:
https://github.com/AudioKit/AudioKit/blob/d69dabf090a5e78d4495d938bf6c0aea9f672630/AudioKit/Core/Soundpipe/modules/wavin.c
I am not looking for info about AKSampler specifically, just how audio files are generally loaded and how that squares with the realtime nature of the AU extension process.
I couldn't find any IPC/XPC code anywhere, so I am guessing that it's not about circular buffers connecting to other processes or something.
Does AudioKit allocate memory in the realtime process for audio file playback? This would seem to go against all the warnings from experienced audio programmers (articles like http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing), but I can't figure out what is being done in AudioKit and generally in iOS.
What am I just not understanding or finding? :D
Opening files and allocating memory for file reads should be done outside the real-time audio context, perhaps during UI file selection, never inside an Audio Unit callback.
One way to get random access to samples inside an AU callback is to memory map the file (the mmap C API), and then touch every sample in the memory map before passing the memory pointer (unsafe raw pointer, etc.) and the mapped length to the audio unit. You can then do virtual random-access file reads inside the callback with a fixed latency.
One way to touch every sample is to compute a checksum over the mapped data (and perhaps discard the result later). This memory read is required to get the iOS virtual memory system to page blocks of the file into RAM, so that storage reads won't happen in the real-time context.
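A sketch of that mmap-plus-pre-touch idea in Swift, calling the C APIs (MappedFile and mapAndPreTouch are illustrative names; touching one byte per 4 KB page is assumed to be enough to fault the whole file into RAM):

import Darwin
import Foundation

struct MappedFile {
    let pointer: UnsafeRawPointer
    let length: Int
}

// Memory-map an audio file read-only and pre-touch every page on a
// non-real-time thread, so the render callback never triggers disk I/O.
func mapAndPreTouch(path: String) -> MappedFile? {
    let fd = open(path, O_RDONLY)
    guard fd >= 0 else { return nil }
    defer { close(fd) }  // the mapping stays valid after the descriptor is closed

    var info = stat()
    guard fstat(fd, &info) == 0 else { return nil }
    let length = Int(info.st_size)

    guard let raw = mmap(nil, length, PROT_READ, MAP_PRIVATE, fd, 0),
          raw != MAP_FAILED else { return nil }

    // Touch one byte in every page (a throwaway checksum) so the VM system
    // pages the file into RAM now rather than inside the audio callback.
    let bytes = raw.assumingMemoryBound(to: UInt8.self)
    var checksum: UInt64 = 0
    for offset in stride(from: 0, to: length, by: 4096) {
        checksum &+= UInt64(bytes[offset])
    }
    _ = checksum  // the result is discarded; only the reads matter

    return MappedFile(pointer: UnsafeRawPointer(raw), length: length)
}

Note that the VM system can still evict those pages under memory pressure, so this gives fixed latency in the common case rather than a hard guarantee.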

Process audio input from a mobile microphone without saving it to a file

I'm working on a mobile application that can perform basic analysis on audio input from the microphone in real time. However, the usual way to do it, using `AVAudioRecorder` as shown in this guide and the API docs, requires you to save the audio to a file first.
Since the app is meant to stay on for a long time and be used multiple times a day, I want to avoid cluttering the phone with many audio files, or with audio files that are too big. However, I can't seem to find a way around it. Searching for solutions on the internet always leads to how to save audio to a file, instead of avoiding the file and working with some kind of buffer.
Any pointers would be super helpful!
Both the iOS Audio Unit and the Audio Queue APIs allow one to process short buffers of audio input in real-time without saving to a file.
You can also use a tap on the AVAudioEngine. See Apple's documentation: https://developer.apple.com/library/ios/samplecode/AVAEMixerSample/Introduction/Intro.html
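A minimal sketch of the tap approach in Swift (assumes microphone permission has already been granted; the RMS calculation is just an example analysis):

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Each buffer is handed to the closure in memory; no file is ever written.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let frameCount = Int(buffer.frameLength)
    var sumOfSquares: Float = 0
    for i in 0..<frameCount {
        sumOfSquares += samples[i] * samples[i]
    }
    let rms = (sumOfSquares / Float(max(frameCount, 1))).squareRoot()
    print("RMS level: \(rms)")
}

do {
    try engine.start()
} catch {
    print("Could not start engine: \(error)")
}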
You can use /dev/null as the URL in the AVAudioRecorder instance. That way it will not save to a file but will just discard the data (Xamarin/C# shown here):
var url = NSUrl.FromString("/dev/null");
var recorder = new AVAudioRecorder(url, settings, out error);

Read audio file, perform filters (e.g. reverb), and then write audio file without playback on iOS

I'm working on an app which has a requirement for running some basic audio filters (such as normalisation and reverb) on a file. The idea is to take an existing audio file, add the filters, and then write the data to a new file. Crucially, this must be done without any playback and should be fast (e.g. on a 60-second audio file I should be able to add reverb in under a second).
I've looked at several solutions such as The Amazing Audio Engine and AudioBox but these all seem to rely on you playing back any audio in realtime rather than writing it to a file.
Does anybody have examples, or can point me in the right direction, for simply taking a file and applying a basic audio filter without listening to it. I'm sure I must be missing something simple somewhere but my searches have turned up nothing.
In general, the steps are:
Set up an AUGraph like this - AudioFilePlayer -> Reverb/Limiter/EQ/etc. -> GenericOutput
Open the input file and schedule it on the AudioFilePlayer.
Create an output file and repeatedly call AudioUnitRender on the GenericOutput unit, writing the rendered buffers to the output file.
I'm not sure about the speed of this, but it should be acceptable.
There is a comprehensive example of offline rendering in this thread that covers the setup and rendering process.
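As a sketch of the same idea with a newer API: AVAudioEngine's manual (offline) rendering mode, available since iOS 11, replaces the AUGraph/GenericOutput setup. The function and parameter names here are illustrative, and the output is written as linear PCM:

import AVFoundation

// player -> reverb -> main mixer, rendered offline as fast as possible.
func renderWithReverb(input inputURL: URL, output outputURL: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let reverb = AVAudioUnitReverb()
    reverb.loadFactoryPreset(.mediumHall)
    reverb.wetDryMix = 40

    engine.attach(player)
    engine.attach(reverb)

    let file = try AVAudioFile(forReading: inputURL)
    engine.connect(player, to: reverb, format: file.processingFormat)
    engine.connect(reverb, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.enableManualRenderingMode(.offline,
                                         format: file.processingFormat,
                                         maximumFrameCount: 4096)
    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()

    let outputFile = try AVAudioFile(forWriting: outputURL,
                                     settings: file.processingFormat.settings)
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!

    // Pull rendered audio from the engine until the source file is consumed.
    // (The reverb tail past the end of the file is not rendered in this sketch.)
    while engine.manualRenderingSampleTime < file.length {
        let framesLeft = file.length - engine.manualRenderingSampleTime
        let framesToRender = min(AVAudioFrameCount(framesLeft), buffer.frameCapacity)
        let status = try engine.renderOffline(framesToRender, to: buffer)
        if status == .success {
            try outputFile.write(from: buffer)
        } else {
            break
        }
    }

    player.stop()
    engine.stop()
}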

How do I speed up the loading of audio files so the user is not waiting?

I am building a game that lets users remix songs. I have built a mixer based upon the Apple sample code MixerHost (creating an audio graph with a mixer audio unit), but expanded to load 12 tracks. Everything is working fine; however, it takes a really long time for the songs to load when the gamer selects the songs they want to remix. This is because the program has to load 12 separate mp4 files into memory before I can start playing the music.
I think what I need is to create an AUFilePlayer audio unit that is in charge of loading the file into the mixer. If the AUFilePlayer can handle loading the file on the fly, then the user will not have to wait for the files to load 100% into memory. My two questions are: 1. Can an AUFilePlayer be used this way? 2. The documentation on AUFilePlayer is very, very thin. Where can I find some example code demonstrating how to implement an AUFilePlayer properly on iOS (not on macOS)?
Thanks
I think you're right - in this case a 'direct-from-disk' buffering approach is probably what you need. I believe the correct AudioUnit subtype is AudioFilePlayer. From the documentation:
The unit reads and converts audio file data into its own internal buffers. It performs disk I/O on a high-priority thread shared among all instances of this unit within a process. Upon completion of a disk read, the unit internally schedules buffers for playback.
A working example of using this unit on Mac OS X is given in Chris Adamson's book Learning Core Audio. The code for iOS isn't much different, and is discussed in this thread on the CoreAudio-API mailing list. Adamson's working code example can be found here. You should be able to adapt this to your requirements.
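If you end up on the modern AVAudioEngine API instead of an AUGraph, the equivalent of that direct-from-disk approach is one AVAudioPlayerNode per track with scheduleFile, which lets the engine read each file during playback instead of preloading all 12 tracks; a sketch (the helper name is illustrative):

import AVFoundation

func makeStreamingMixer(trackURLs: [URL]) throws -> (AVAudioEngine, [AVAudioPlayerNode]) {
    let engine = AVAudioEngine()
    var players: [AVAudioPlayerNode] = []

    for url in trackURLs {
        let player = AVAudioPlayerNode()
        engine.attach(player)
        let file = try AVAudioFile(forReading: url)  // opens the file, does not load it all into memory
        engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
        player.scheduleFile(file, at: nil)           // the engine reads from disk as it plays
        players.append(player)
    }

    try engine.start()
    players.forEach { $0.play() }
    return (engine, players)
}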

Play socket-streamed h.264 movie on iOS using AVFoundation

I'm working on a small iPhone app which is streaming movie content over a network connection using regular sockets. The video is in H.264 format. I'm however having difficulties with playing/decoding the data. I've been considering using FFmpeg, but the license makes it unsuitable for the project. I've been looking into Apple's AVFoundation framework (AVPlayer in particular), which seems to be able to handle H.264 content; however, I'm only able to find methods to initiate the movie using a URL, not by providing a memory buffer streamed from the network.
I’ve been doing some tests to make this happen anyway, using the following approaches:
Play the movie using a regular AVPlayer. Every time data is received on the network, it's written to a file using fopen in append mode. The AVPlayer's asset is then reloaded/recreated with the updated data. There seem to be two issues with this approach: firstly, the screen goes black for a short moment while the first asset is unloaded and the new one loaded. Secondly, I do not know exactly where playback stopped, so I'm unsure how I would find the right place to start playing the new asset from.
The second approach is to write the data to the file as in the first approach, but with the difference that the data is loaded into a second asset. An AVQueuePlayer is then used, where the second asset is inserted/queued in the player and played once the buffering is done. The first asset can then be unloaded without a black screen. However, using this approach it's even more troublesome (than the first approach) to find out where to start playing the new asset.
Has anyone done something like this and made it work? Is there a proper way of doing this using AVFoundation?
The official method to do this is the HTTP Live Streaming format, which supports multiple quality levels (among other things) and automatically switches between them (e.g. if the user moves from WiFi to cellular).
You can find the docs here: Apple HTTP Live Streaming docs
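Playing an HLS stream on the client side is then only a few lines with AVPlayer (the URL below is a placeholder):

import AVFoundation

let streamURL = URL(string: "https://example.com/stream/index.m3u8")!
let player = AVPlayer(url: streamURL)
player.play()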
