ADSR in iOS, sample code?

I've been searching for examples that show how to do ADSR in iOS using audio samples (preferably WAV files with loop points, but that's secondary). I guess most people who write a sampler/synth app use an audio unit for this. Does anyone know a good code example that shows ADSR in any iOS audio library?

In the new iOS SDK 5.0 there's now a Sampler Audio Unit, which can do ADSR envelopes.
The presets demo shows how to use the sampler:
http://developer.apple.com/library/ios/#samplecode/LoadPresetDemo/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011214
If you want to load different sound formats to play this article is helpful:
https://developer.apple.com/library/mac/#technotes/tn2283/_index.html
And here's the iOS documentation reference:
http://developer.apple.com/library/ios/#documentation/AudioUnit/Reference/AUComponentServicesReference/Reference/reference.html#//apple_ref/doc/uid/TP40007291

You can find a (very basic) one in Apple's SinSynth sample. That is an AU, but it should demonstrate how one would apply an envelope to an audio buffer. I don't remember, it may simply be an ASR, but adding a fourth stage is simple once you have understood the existing program. The implementation is right in the note's render callback.
Envelope Generators are not platform specific.
musicdsp.org will be a better resource if you want more than a push in the right direction.

MusicDSP has source code for an example envelope follower with attack/release. If you understand this, then sustain/decay should be pretty logical. ;)
But an ADSR envelope is basically just a matter of applying gain to your output signal with a state machine. Each state has a starting value, an ending value, and a duration. Calculating the slope of that line and the value of each point along it was covered in your algebra class back in high school. ;) If you want to be really fancy, you can implement other types of curves, but the concept remains the same.
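To make that concrete, here's a minimal sketch of a linear ADSR gain state machine in Swift; all the names and per-stage durations are illustrative, not from any particular library:

```swift
import Foundation

// A minimal linear ADSR envelope as a per-sample gain state machine.
enum EnvelopeStage {
    case idle, attack, decay, sustain, release
}

struct ADSREnvelope {
    // Stage durations in samples (sustain has a level, not a duration).
    var attackSamples: Double = 441       // 10 ms at 44.1 kHz
    var decaySamples: Double = 4410       // 100 ms
    var sustainLevel: Double = 0.7
    var releaseSamples: Double = 8820     // 200 ms

    private(set) var stage: EnvelopeStage = .idle
    private var level: Double = 0.0

    mutating func noteOn()  { stage = .attack }
    mutating func noteOff() { stage = .release }

    // Advance one sample and return the current gain.
    mutating func next() -> Double {
        switch stage {
        case .idle:
            level = 0
        case .attack:
            // Ramp linearly from the current level up toward 1.0.
            level += 1.0 / attackSamples
            if level >= 1.0 { level = 1.0; stage = .decay }
        case .decay:
            level -= (1.0 - sustainLevel) / decaySamples
            if level <= sustainLevel { level = sustainLevel; stage = .sustain }
        case .sustain:
            level = sustainLevel
        case .release:
            level -= sustainLevel / releaseSamples
            if level <= 0 { level = 0; stage = .idle }
        }
        return level
    }
}

// Usage: multiply each output sample by the envelope's gain.
var env = ADSREnvelope()
env.noteOn()
var output = [Float](repeating: 0, count: 1024) // your synth/sample audio goes here
for i in output.indices {
    output[i] *= Float(env.next())
}
```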

Related

What kind of drum sampling options does AudioKit have?

Working in AudioKit, and I am looking to understand how people have incorporated drums. Obviously, the sampler is an option, but I am wondering if there is a built-in option similar to some of the basic synthesis options.
There are a few options. I personally like the AppleSampler/MidiSampler like in the example, but instead of using audio files you can create an EXS Sampler instrument in Logic, where you can assign notes for different velocities. AppleSampler can also load AUPresets made in GarageBand and SoundFonts (SF2). The DunneAudioKit Sampler is an option if you are working with SFZ files, but I think that might be a work in progress in AudioKit 5. Loading WAV files directly into AppleSampler is also a good option if you just want one-shot sounds.
I'm assuming you're mostly talking about playback of samples, not recording.
The best built-in option I've seen (other than AppleSampler/MidiSampler) is AudioPlayer, which lets you load in a sample and play it back on demand (from an on-screen pad, etc.). MIDIListener can then help you respond to external MIDI events. It works (I have a pretty big branch in my app where I tried it), but I'm not sure it works well.
I wouldn't recommend DunneAudioKit Sampler for drums. There is no one-shot playback (so playing the same note in quick succession will cut off the previous note, even if you mess with the release). If you're trying to build a complex/realistic acoustic drum instrument, you'll also want round-robins so that variations of the same hit can be played, which Dunne also doesn't have. It can load SFZ files, but only a very limited subset of SFZ's opcodes (so again, it's missing things like round robins, mute groups, one-shot, etc).
Having gone down all those roads, I would suggest starting with AppleSampler, and I would build the EXS or aupreset file in Logic or Mainstage rather than trying to build something programmatically.
If your needs are really simple, the examples in AudioKit's recently released drum pad playground are a great place to start, loading single samples onto specific notes of AppleSampler.
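As a rough illustration of that approach, here's a sketch using AVAudioUnitSampler directly (the Apple unit that AudioKit's AppleSampler wraps); the file names and note numbers are hypothetical:

```swift
import AVFoundation

// Minimal sketch: load one-shot WAV samples into AVAudioUnitSampler.
let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()

engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

do {
    // Each file is mapped automatically; for precise note/velocity zones,
    // build an EXS or aupreset in Logic and use loadInstrument(at:) instead.
    let urls = ["kick", "snare", "hat"].compactMap {
        Bundle.main.url(forResource: $0, withExtension: "wav")
    }
    try sampler.loadAudioFiles(at: urls)
    try engine.start()
} catch {
    print("Sampler setup failed: \(error)")
}

// Trigger a hit from a pad press (note numbers depend on your mapping).
sampler.startNote(36, withVelocity: 127, onChannel: 0)
```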

AudioKit: AKSampler: Simplest way to add multiple samples

I understand so far that AKSampler was recently rewritten, and this GitHub project seems to be the de facto guide on the new AKSampler. What I can gather is a move toward the SFZ format. I am new to the sampling world, but in my application I only need a handful of samples recorded from my piano in order for it to work. As I have looked around at existing SFZ files and samples, I do not need all of the complexity and features that SFZ provides.
I am currently using AKSampler with a single piano sample, which works perfectly; however, it gets a bit weird once I play anything too far from the original source, so I just want to fill in the gaps with a few other samples (I only need to play around an octave and a half in my current app).
I do see in the docs a couple of methods, buildSimpleKeyMap() and buildKeyMap(), however there is no implementation currently.
Do I have any additional options? I know that the EXS format has been deprecated, as well as SoundFont. Is mapping multiple samples to AKSampler currently only possible via SFZ?
Thanks for all your help <3
Edit: This readme on the AKSampler GitHub page provides the breakdown for samples. I still only see SFZ being considered. If anyone else is lost with my question or needs a reference, this seems to be the best resource. If the current AKSampler only offers SFZ as the primary way to map multiple samples, so be it; however, it does look very challenging. I'm really hoping there is some simple middle ground between using a single sample for the AKSampler and a full-bore SFZ file.
Edit 2: Getting a solution to this, will update as soon as possible, thanks for your patience!
I have provided a simple explainer and sample file in the AudioKit docs. Hope this helps new users of AudioKit!
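For what it's worth, a multi-sample SFZ mapping can stay very small. A minimal sketch like the following (file names and key ranges are hypothetical) covers about an octave and a half with three piano samples:

```
// three-region piano mapping; file names are hypothetical
<region> sample=piano_C3.wav  lokey=48 hikey=53 pitch_keycenter=48
<region> sample=piano_FS3.wav lokey=54 hikey=59 pitch_keycenter=54
<region> sample=piano_C4.wav  lokey=60 hikey=66 pitch_keycenter=60
```

Each region plays its sample across a small key range, pitch-shifted relative to pitch_keycenter, so a handful of regions is enough to avoid the artifacts of stretching one sample too far.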

iOS Audio Service: Read & write audio files

Hi guys.
I'm working on some audio services on iOS.
I'm trying to find examples or tutorials about how an audio service or stream can read an existing audio file, process it (e.g. apply a filter), and then write out another file.
Is there anybody who can help me?
Dirac3LE (by Stephan M. Bernsee) is a great library for this job. There are examples and a manual included in the download. It is particularly intended for time and pitch manipulation, but in your case you'll be interested in its EAFRead and EAFWrite classes.
If you want to get familiar with the lower level library that you can also use for microphone input/sound output, and that you can get raw samples into and out of, I would suggest taking a look at Audio Queue Services.
I used it in my side project to get audio from the microphone, and I also wrote some code you might find useful for fast, vectorized, FFT-based FIR filtering of input audio. You can find the code here: https://github.com/jamescarlson/FreeAPRS
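If you'd rather stay with Apple's frameworks, here's a rough read-process-write sketch using AVAudioFile; the gain "filter" is just a placeholder for whatever processing you have in mind:

```swift
import AVFoundation

// Read an audio file in chunks, process each buffer, write a new file.
func processFile(input: URL, output: URL, gain: Float = 0.5) throws {
    let inFile = try AVAudioFile(forReading: input)
    let format = inFile.processingFormat
    let outFile = try AVAudioFile(forWriting: output,
                                  settings: inFile.fileFormat.settings)

    let chunk: AVAudioFrameCount = 4096
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: chunk) else { return }

    while inFile.framePosition < inFile.length {
        try inFile.read(into: buffer)
        // Process in place: here, just apply gain to every channel.
        if let channels = buffer.floatChannelData {
            for ch in 0..<Int(format.channelCount) {
                for i in 0..<Int(buffer.frameLength) {
                    channels[ch][i] *= gain
                }
            }
        }
        try outFile.write(from: buffer)
    }
}
```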

Changing sample rate of an AUGraph on iOS

I've implemented an AUGraph similar to the one in the iOS Developer Library. In my app, however, I need to be able to play back sound at different sample rates (probably two different ones).
I've been looking around Apple's documentation and haven't found a way to set the sample rate at runtime. I've been thinking of three possible work-arounds:
Re-initialize the AUGraph every time I need to change the sample rate;
Initialize a different AUGraph for each different sample rate;
Convert the sample rate of every sound before playing;
These methods all seem really clunky and heavy on the processor.
What is the best way of changing sample rate of an AUGraph at runtime?
Typically #1 for continuous audio streaming scenarios.
Your program may have a special need or benefit that favors another approach you listed:
#2: you need to keep processing in a context where reinitialization is not acceptable.
#3: you need to mix and process two streams with different input sample rates together at the same time, particularly if you find yourself SRCing the signal multiple times.
But if you just need playback with SRC and lowest latency is not a concern, you may want to try an AudioQueue instead.
I'm pretty sure it can't be done at runtime. Solution #2 is your best bet, along with #3. For sample rate conversion, libsndfile can probably be adapted to your needs.
If you don't want latency from tearing down and rebuilding the audio graph, you may need to resample the sound data (for all but one sample rate).
You could either resample the sound data before starting to play it, or run a real-time resampler as part of the audio graph. Many iOS music apps do the latter as part of a built-in sampler-based synth unit, so the device has plenty of compute power to do so.
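As a sketch of the pre-resampling approach (option #3), here's one way to convert a buffer offline with AVAudioConverter; buffer sizing and error handling are simplified:

```swift
import AVFoundation

// Offline sample rate conversion of a single PCM buffer.
func resample(_ input: AVAudioPCMBuffer, to sampleRate: Double) -> AVAudioPCMBuffer? {
    let srcFormat = input.format
    guard let dstFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                                        channels: srcFormat.channelCount),
          let converter = AVAudioConverter(from: srcFormat, to: dstFormat)
    else { return nil }

    let ratio = sampleRate / srcFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(input.frameLength) * ratio) + 1
    guard let output = AVAudioPCMBuffer(pcmFormat: dstFormat,
                                        frameCapacity: capacity) else { return nil }

    var fed = false
    var error: NSError?
    // Feed the input buffer once, then signal end of stream.
    converter.convert(to: output, error: &error) { _, outStatus in
        if fed {
            outStatus.pointee = .endOfStream
            return nil
        }
        fed = true
        outStatus.pointee = .haveData
        return input
    }
    return error == nil ? output : nil
}
```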

Really simple wave synth/table in ios

I want to make a really simple synth.
In short, I want to play a WAV file and have it loop at certain points until the touch is released.
I am looking for some example code (it doesn't need to be free).
Sorry for such a basic question; I have been googling this, but there seems to be nothing on this exact topic, unless I'm missing some important term.
Also, is what I'm describing a wavetable synth, or a soundboard?
I'd call it a sampler.
Here's a sample project that will get you started:
https://sites.google.com/site/iphonecoreaudiodevelopment/remoteio-playback
See also:
The Audio Programming Book
The Core Audio Book
A sample project of mine
You need to store the sound data in memory and have some sort of read() command that fills an array of bytes to send to the sound card. The read() command has to keep track of its position between reads, so a persistent pointer must be maintained. You test the position of the pointer to see whether you've reached the end, and reset it to the beginning when needed.
The specifics are going to depend upon your chosen language, of course.
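Here's a minimal sketch of that read()/pointer idea in Swift; the loop points and buffer handling are illustrative only:

```swift
// A persistent read position into in-memory sample data,
// wrapping between loop points until the touch is released.
struct LoopingReader {
    let samples: [Float]   // decoded sound data held in memory
    var loopStart: Int
    var loopEnd: Int       // one past the last looped sample
    var position = 0
    var looping = true     // set false on touch-up to play through to the end

    init(samples: [Float], loopStart: Int, loopEnd: Int) {
        self.samples = samples
        self.loopStart = loopStart
        self.loopEnd = loopEnd
    }

    // Fill a buffer destined for the audio hardware;
    // returns false once all data has been consumed.
    mutating func read(into buffer: inout [Float]) -> Bool {
        for i in buffer.indices {
            if looping && position >= loopEnd {
                position = loopStart   // jump back to the loop point
            }
            if position >= samples.count {
                buffer[i] = 0          // past the end: silence
                continue
            }
            buffer[i] = samples[position]
            position += 1
        }
        return looping || position < samples.count
    }
}
```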
I did this with Java, with the added possibility of playback at different speeds.
http://www.hexara.com/VSL/VSL2.htm
It's a little laggy. I've learned a bit since posting that, but haven't gone back to fix it yet. The program asks permission and has you load a WAV file from your computer. The file should be 16-bit, stereo, 44100 fps, little-endian.
WaveTable synthesis is a bit different, in that only a single iteration of a wave is stored and used as source data.
Here is a short discussion, from Stanford's CCRMA website:
https://ccrma.stanford.edu/~bilbao/booktop/node9.html
I used this method to make a Java "Theremin"
http://www.hexara.com/VSL/JTheremin.htm
With a WaveTable, you decide on the size of the array. If it is a power of 2, one can bitmask the pointer after every increment, which is faster than doing a comparison and reset.
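A quick sketch of that bitmask trick (table size and step are arbitrary):

```swift
import Foundation

// Power-of-two wavetable: masking the phase index replaces compare-and-reset.
let tableSize = 1024                  // must be a power of 2
let mask = tableSize - 1
var table = [Float](repeating: 0, count: tableSize)
for i in 0..<tableSize {              // single sine cycle as source data
    table[i] = sinf(2 * Float.pi * Float(i) / Float(tableSize))
}

var phase = 0
let step = 3                          // integer step; pitch = step * fs / tableSize
var out = [Float](repeating: 0, count: 256)
for i in out.indices {
    out[i] = table[phase]
    phase = (phase + step) & mask     // wraps automatically, no branch
}
```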
