AudioKit custom Waveforms

My main goal with AudioKit is to be able to define my own waveforms for the sake of timbre.
As far as I have tried, passing a waveform parameter into a new AKOscillator does not actually use that wave shape; instead we seem to be limited to square, sine, triangle, and saw.
Has anyone had any luck getting this feature to work?
Can I see how this is implemented in the source code somehow?
Thanks :D

Since you haven't provided any code or said which AudioKit version you're on, it's hard to debug, but I can vouch for the fact that this works on AudioKit's develop branch, and I've never heard of it not working for anyone.
There's a playground demonstrating tables here:
http://audiokit.io/playgrounds/Basics/Tables/
If you add the following lines (where custom is the AKTable built earlier in the playground), you'll hear the custom waveform:
let osc = AKOscillator(waveform: custom)
AudioKit.output = osc
osc.start()
try AudioKit.start()
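For reference, here is a self-contained sketch of the same idea. It assumes AudioKit 4's AKTable API, where a table is a mutable collection of Float samples; the roughened-sine shape is just an illustration:

import AudioKit

// Start from a sine table and roughen it to change the timbre
let custom = AKTable(.sine, count: 4096)
for i in custom.indices {
    custom[i] += Float.random(in: -0.2...0.2)
}

let osc = AKOscillator(waveform: custom)
AudioKit.output = osc
osc.start()
try AudioKit.start()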

Related

AVAudioEngine integration with Superpowered

We have a project that currently uses AVAudioEngine as an audio solution for mixing audio from different player nodes. For the most part the solution works exactly as we expect, except for the AVAudioUnitTimePitch node: we're finding that the quality of its time stretching is below our expectations, and we have not been able to fix it. We're currently looking at Superpowered as an alternative. However, I would like to avoid replacing our entire audio solution with the new SDK, and would rather create a custom AVAudioNode that handles the time-stretching algorithm, so that we can attach it to our current implementation and replace AVAudioUnitTimePitch. I'm a bit at a loss on how to achieve this and was wondering if anyone here has solved a similar issue and how they approached it? Thanks!
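One way to bridge external DSP into an AVAudioEngine graph, sketched here as an assumption rather than a confirmed approach, is to run the third-party time-stretcher outside the engine and feed its output back in through an AVAudioSourceNode (iOS 13+); timeStretch below is a hypothetical stand-in for the Superpowered call:

import AVFoundation

// Hypothetical stand-in for the Superpowered time-stretch call.
// The stub writes silence so the sketch runs on its own.
func timeStretch(into samples: UnsafeMutablePointer<Float>, frameCount: Int) {
    for i in 0..<frameCount { samples[i] = 0 }
}

// AVAudioSourceNode (iOS 13+) pulls processed frames from the external SDK
// and plays them like any other node in the engine graph.
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
let sourceNode = AVAudioSourceNode(format: format) { _, _, frameCount, audioBufferList -> OSStatus in
    for buffer in UnsafeMutableAudioBufferListPointer(audioBufferList) {
        let samples = buffer.mData!.assumingMemoryBound(to: Float.self)
        timeStretch(into: samples, frameCount: Int(frameCount))
    }
    return noErr
}

let engine = AVAudioEngine()
engine.attach(sourceNode)
engine.connect(sourceNode, to: engine.mainMixerNode, format: format)
try engine.start()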

Quantized Sequence using AudioKit

I've been working with AudioKit to create a sequencer that I would like to play a perfectly quantized sequence (i.e. all subdivisions metrically perfect). However, when I add notes to a sequence I hear fluctuations/imperfections in the time; the subdivisions aren't lining up in a metrically perfect way. When I print the current position of the sequencer in beats to the console during note on events, the fluctuations are shown: the notes are only consistent to two decimal places or so, and then they show variations in the placement. In the callback, I would expect perhaps, with a slight delay: 1.001, 2.001, 3.001. But the output displays seemingly random numbers after two decimals places.
I've created a project to demonstrate the issue here
What am I doing wrong here?
Note that in the project I've made use of AKCallbackInstrument, but the issue persists even if I plug the sampler that will play the sound directly into the sequencer. Also, in the project I've added notes to the sequencer "manually," but the issue persists even if I load a .mid file directly to the sequencer. The sampler in the demo project uses a sound font (.sf2), but the issue exists when I load a .wav or .mp3 sample as well.
I don't think you're doing anything wrong. AKSequencer is based on Apple's own MIDI sequencer; we provide AKSequencer as a wrapper around that functionality. However, there are known timing inaccuracies in Apple's sequencer that we can't address because it is closed source. We are working on a replacement for AKSequencer (which will take the AKSequencer name, with the current sequencer moving to AKAppleSequencer). This should be done in July. In the meantime, you can use AKTimeline to build your own sequencer, as was done in the MetronomeSampleSync examples in AudioKit.
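To see the jitter for yourself, a minimal sketch along the lines of the demo project might look like this (assuming AudioKit 4, where AKCallbackInstrument's callback receives raw MIDI status, note, and velocity bytes):

import AudioKit

let sequencer = AKSequencer()

// Print the sequencer position at every note-on; the fractional part of
// the beat value exposes the timing jitter described above.
let callbackInst = AKCallbackInstrument { status, _, velocity in
    if status == 0x90 && velocity > 0 {       // 0x90 = note on, channel 0
        print(sequencer.currentPosition.beats)
    }
}

let track = sequencer.newTrack()
track?.setMIDIOutput(callbackInst.midiIn)
for beat in 0..<4 {
    track?.add(noteNumber: 60, velocity: 100,
               position: AKDuration(beats: Double(beat)),
               duration: AKDuration(beats: 0.5))
}

AudioKit.output = callbackInst
try AudioKit.start()
sequencer.setTempo(60)   // one beat per second: ideal positions are 0, 1, 2, 3
sequencer.play()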

how to convert stereo audio to mono?

I'm developing an iPhone app in which I want to play audio in the left and right channels separately (the audio being played is multi-channel). So far I have tried many approaches, for example looking for properties I could set to do this (e.g. setPan:), but without success. What should I do about this problem? Could you please give me some suggestions? Thank you very much!
For manipulating audio at the channel level, see the AVAudioSession class in AVFoundation in the docs that come with Xcode.
In particular, the Audio Session Programming Guide.
I think the Novocaine library will be helpful; you can go through
this example.
It will help you for sure. In the example, you can alter the following method
- (void)filterData:(float *)data numFrames:(UInt32)numFrames numChannels:(UInt32)numChannels
in the NVDSP.mm file to get what you want.
One easy, powerful, free, and maintained solution for manipulating audio on iOS is AudioKit. With it you can create something like this:
let leftSignal = AKBooster(input)       // input is your source node
leftSignal.rightGain = 0                // silence the right channel
let leftPannedRight = AKPanner(leftSignal, pan: 0.5)  // place what remains halfway right
let mixer = AKMixer(leftPannedRight)
AudioKit.output = mixer
It's a great way to work with audio without dealing with low-level frameworks. To help you get started, there are plenty of tutorials online and answered AudioKit questions here on Stack Overflow. A nice starting point is AudioKit's playgrounds.
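For context, here is a fuller version of the same chain, wired to a file player (a sketch assuming AudioKit 4; audioFileURL is a hypothetical URL pointing at your audio file):

import AudioKit

let file = try AKAudioFile(forReading: audioFileURL)  // audioFileURL is assumed
let player = AKPlayer(audioFile: file)

let leftSignal = AKBooster(player)
leftSignal.rightGain = 0                              // keep only the left channel
let leftPannedRight = AKPanner(leftSignal, pan: 0.5)  // then place it halfway right
AudioKit.output = AKMixer(leftPannedRight)

try AudioKit.start()
player.play()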

wavetable synth for iOS

I need to implement a wavetable player in my app. For different notes (polyphony), note-on and note-off features are needed (including looping for relevant sounds).
The samples are available or can be converted by myself; what I need is a class that is capable of playing, looping, and stopping the samples or waves.
I found some open source projects like FluidSynth, but the question here is about sample code available for iOS or OpenAL.
Thank you in advance for any hints or snippets,
regards, Koen.
You could take a look at the new Sampler audio unit in iOS 5. It lets you play samples with pitch control at low latency.
There is some sample code from Apple.
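The same Sampler unit is also exposed in AVFoundation as AVAudioUnitSampler, which handles note on/off directly; loop points come from the preset or sound font itself. A minimal sketch (the sound-font name here is a placeholder):

import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

// Hypothetical sound font bundled with the app
let sf2URL = Bundle.main.url(forResource: "Instrument", withExtension: "sf2")!
try sampler.loadSoundBankInstrument(at: sf2URL,
                                    program: 0,
                                    bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                    bankLSB: UInt8(kAUSampler_DefaultBankLSB))

try engine.start()
sampler.startNote(60, withVelocity: 100, onChannel: 0)  // note on (middle C)
// later: sampler.stopNote(60, onChannel: 0)            // note off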

ADSR in iOS, sample code?

I've been searching for examples that show how to do ADSR on iOS using audio samples (preferably WAV files with loop points, but that's secondary). I guess most people who write a sampler/synth app use an audio unit for this. Does anyone know a good code example that shows ADSR in any iOS audio library?
In the new iOS SDK 5.0 there's now a Sampler audio unit, which can do ADSR envelopes.
The presets demo shows how to use the sampler:
http://developer.apple.com/library/ios/#samplecode/LoadPresetDemo/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011214
If you want to load different sound formats to play this article is helpful:
https://developer.apple.com/library/mac/#technotes/tn2283/_index.html
And here's the iOS documentation reference:
http://developer.apple.com/library/ios/#documentation/AudioUnit/Reference/AUComponentServicesReference/Reference/reference.html#//apple_ref/doc/uid/TP40007291
You can find a (very basic) one in Apple's SinSynth sample. That is an AU, but it should demonstrate how to apply an envelope to an audio buffer. I don't remember exactly; it may simply be an ASR, but adding a fourth stage is simple once you have understood the existing program. The implementation is right in the note's render method.
Envelope Generators are not platform specific.
musicdsp.org will be a better resource if you want more than a push in the right direction.
MusicDSP has source code for an example envelope follower with attack/release. If you understand this, then sustain/decay should be pretty logical. ;)
But an ADSR envelope is basically just a matter of applying gain to your output signal with a state machine. Each state has a starting value, an ending value, and a duration; calculating the slope of that line and the value of each point along it was covered in your algebra class back in high school. ;) If you want to be really fancy, you can implement other types of curves, but the concept remains the same, as the sketch below shows.
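A minimal, framework-independent sketch of that state machine in Swift (linear segments only; all names and parameter values here are illustrative):

import Foundation

// Minimal linear ADSR: a state machine that yields a per-sample gain
// which you multiply into your output signal.
struct ADSREnvelope {
    enum Stage { case idle, attack, decay, sustain, release }

    var attackTime = 0.01    // seconds
    var decayTime = 0.1
    var sustainLevel = 0.7   // 0...1
    var releaseTime = 0.3
    let sampleRate: Double

    private(set) var stage = Stage.idle
    private var level = 0.0

    init(sampleRate: Double) { self.sampleRate = sampleRate }

    mutating func noteOn()  { stage = .attack }
    mutating func noteOff() { stage = .release }

    // Advance one sample and return the current gain.
    mutating func nextGain() -> Double {
        switch stage {
        case .idle:
            level = 0
        case .attack:
            level += 1.0 / (attackTime * sampleRate)              // line from 0 up to 1
            if level >= 1 { level = 1; stage = .decay }
        case .decay:
            level -= (1 - sustainLevel) / (decayTime * sampleRate)
            if level <= sustainLevel { level = sustainLevel; stage = .sustain }
        case .sustain:
            break                                                 // hold until noteOff()
        case .release:
            level -= sustainLevel / (releaseTime * sampleRate)
            if level <= 0 { level = 0; stage = .idle }
        }
        return level
    }
}

// Usage: scale each output sample by the envelope gain.
var env = ADSREnvelope(sampleRate: 44_100)
env.noteOn()
var samples = [Float](repeating: 1, count: 1024)    // stand-in for real audio
for i in samples.indices {
    samples[i] *= Float(env.nextGain())
}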
