I need to implement a wavetable player in my app. It needs to handle different notes (polyphony) with note-on and note-off, including looping for the relevant sounds.
The samples are available, or I can convert them myself; what I need is a class capable of playing, looping, and stopping the samples or waves.
I found some open source projects like FluidSynth, but my question here is whether there is any sample code available for iOS or OpenAL.
Thank you in advance for any hints or snippets,
regards, Koen.
You could take a look at the new Sampler audio unit in iOS 5. It lets you play samples with pitch control at low latency.
There is some sample code from Apple.
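Not Apple's sample code, but in case it helps, here is a rough sketch of the usual setup: wire the Sampler unit into an AUGraph, load a preset, and drive it with MIDI note-on/note-off (the sampler handles polyphony and loop points internally). The preset path is a placeholder for whatever instrument you build (e.g. in AU Lab), and error checking is omitted.

```c
#include <AudioToolbox/AudioToolbox.h>

static AUGraph   graph;
static AudioUnit samplerUnit;

// Build AUSampler -> RemoteIO and load an .aupreset (placeholder path).
void SetUpSampler(void) {
    AUNode samplerNode, ioNode;
    AudioComponentDescription samplerDesc = {0}, ioDesc = {0};

    samplerDesc.componentType         = kAudioUnitType_MusicDevice;
    samplerDesc.componentSubType      = kAudioUnitSubType_Sampler;
    samplerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

    ioDesc.componentType         = kAudioUnitType_Output;
    ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
    ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

    NewAUGraph(&graph);
    AUGraphAddNode(graph, &samplerDesc, &samplerNode);
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, samplerNode, 0, ioNode, 0);
    AUGraphNodeInfo(graph, samplerNode, NULL, &samplerUnit);
    AUGraphInitialize(graph);
    AUGraphStart(graph);

    // "MyInstrument.aupreset" is a placeholder for your own preset file.
    CFURLRef presetURL = CFURLCreateWithFileSystemPath(
        NULL, CFSTR("MyInstrument.aupreset"), kCFURLPOSIXPathStyle, false);
    AUSamplerInstrumentData instrument = {
        .fileURL        = presetURL,
        .instrumentType = kInstrumentType_AUPreset
    };
    AudioUnitSetProperty(samplerUnit, kAUSamplerProperty_LoadInstrument,
                         kAudioUnitScope_Global, 0,
                         &instrument, sizeof(instrument));
    CFRelease(presetURL);
}

// Note on/off via standard MIDI status bytes.
void NoteOn(UInt8 note, UInt8 velocity) {
    MusicDeviceMIDIEvent(samplerUnit, 0x90, note, velocity, 0);
}

void NoteOff(UInt8 note) {
    MusicDeviceMIDIEvent(samplerUnit, 0x80, note, 0, 0);
}
```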
I would like to modulate the signal from the mic input with a 200 Hz sine wave (FM only). Does anyone know of any good tutorials or articles that will help me get started?
Any info is very welcome
Thanks
I suggest you start here: Audio File Stream Services Reference.
Here you can also find some basic tutorials: Getting Started with Audio & Video.
The SpeakHere example app in particular could be interesting.
Hope that helps you
The standard way to do audio processing on iOS or OS X is Core Audio. Here's Apple's overview of the framework.
However, Core Audio has a reputation of being very difficult to learn, especially if you don't have experience with C. If you're still wanting to learn Core Audio, then this book is the way to go: Learning Core Audio.
There are simpler ways to work with audio on iOS and OS X, one of them being AudioKit, which was developed specifically so developers can quickly prototype audio without having to deal with lower-level memory management, buffers, and pointer arithmetic.
There are examples showing both FM synthesis and audio input via the microphone, so you should have everything you need :)
Full disclosure: I am one of the developers of AudioKit.
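If you do end up at the Core Audio level, the DSP itself is small. One interpretation of "modulate the mic with a 200 Hz sine (FM)" is to let each incoming mic sample deviate the frequency of a 200 Hz carrier. A rough, illustrative sketch in plain C, called per buffer from your render callback; the deviation amount is an assumed tuning parameter:

```c
#include <math.h>

// Frequency-modulate a 200 Hz sine carrier with the incoming mic samples.
// micIn/out are mono float buffers of numSamples frames.
void fm_modulate(const float *micIn, float *out, int numSamples,
                 double sampleRate)
{
    static double phase = 0.0;        // carrier phase, persists across buffers
    const double carrierHz = 200.0;
    const double deviation = 100.0;   // max frequency deviation in Hz (tweak)

    for (int i = 0; i < numSamples; i++) {
        // Instantaneous frequency = carrier + (mic sample * deviation)
        double freq = carrierHz + micIn[i] * deviation;
        phase += 2.0 * M_PI * freq / sampleRate;
        if (phase > 2.0 * M_PI) phase -= 2.0 * M_PI;
        out[i] = (float)sin(phase);
    }
}
```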
I've been searching for examples that show how to do ADSR in iOS using audio samples (preferably WAV files with loop points, but that's secondary). I guess most people who write a sampler/synth app use an audio unit for this. Does anyone know of a good code example that shows ADSR in any iOS audio library?
In the new iOS 5.0 SDK there is now a Sampler audio unit, which can do ADSR envelopes.
The presets demo shows how to use the sampler:
http://developer.apple.com/library/ios/#samplecode/LoadPresetDemo/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011214
If you want to load different sound formats to play this article is helpful:
https://developer.apple.com/library/mac/#technotes/tn2283/_index.html
And here's the iOS documentation reference:
http://developer.apple.com/library/ios/#documentation/AudioUnit/Reference/AUComponentServicesReference/Reference/reference.html#//apple_ref/doc/uid/TP40007291
You can find a (very basic) one in Apple's SinSynth sample. It's an AU, but it should demonstrate how to apply an envelope to an audio buffer. I don't remember exactly; it may simply be an ASR, but adding a fourth stage is simple once you've understood the existing program. The implementation is right in the note's render function.
Envelope Generators are not platform specific.
musicdsp.org will be a better resource if you want more than a push in the right direction.
MusicDSP has source code for an example envelope follower with attack/release. If you understand this, then sustain/decay should be pretty logical. ;)
But an ADSR envelope is basically just a matter of applying gain to your output signal with a state machine. Each state has a starting value, an ending value, and a duration. Calculating the slope of that line and the value of each point along it was covered in your algebra class back in high school. ;) If you want to be really fancy, you can implement other types of curves, but the concept remains the same.
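To make that concrete, here is a minimal linear-segment ADSR sketch (plain C, illustrative only; stage times are assumed to be non-zero). Each stage ramps the gain toward a target, and you multiply the returned gain into your output samples:

```c
typedef enum { ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN, ENV_RELEASE, ENV_IDLE } EnvStage;

typedef struct {
    EnvStage stage;
    double   gain;          // current gain, 0..1
    double   attackStep;    // per-sample slopes, precomputed from stage times
    double   decayStep;
    double   releaseStep;
    double   sustainLevel;  // 0..1
} ADSR;

// Times in seconds, converted to per-sample slopes at sample rate sr.
void adsr_init(ADSR *e, double sr, double a, double d, double s, double r) {
    e->stage        = ENV_IDLE;
    e->gain         = 0.0;
    e->sustainLevel = s;
    e->attackStep   = 1.0 / (a * sr);
    e->decayStep    = (1.0 - s) / (d * sr);
    e->releaseStep  = s / (r * sr);
}

void adsr_note_on(ADSR *e)  { e->stage = ENV_ATTACK; }
void adsr_note_off(ADSR *e) { e->stage = ENV_RELEASE; }

// Advance one sample and return the gain to multiply into the signal.
double adsr_next(ADSR *e) {
    switch (e->stage) {
        case ENV_ATTACK:
            e->gain += e->attackStep;
            if (e->gain >= 1.0) { e->gain = 1.0; e->stage = ENV_DECAY; }
            break;
        case ENV_DECAY:
            e->gain -= e->decayStep;
            if (e->gain <= e->sustainLevel) { e->gain = e->sustainLevel; e->stage = ENV_SUSTAIN; }
            break;
        case ENV_SUSTAIN:
            break;                      // hold until note-off
        case ENV_RELEASE:
            e->gain -= e->releaseStep;
            if (e->gain <= 0.0) { e->gain = 0.0; e->stage = ENV_IDLE; }
            break;
        case ENV_IDLE:
            break;
    }
    return e->gain;
}

// In a render loop: out[i] = in[i] * adsr_next(&env);
```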
I am creating a musical app which generates some music. I already used MIDI functions on the Mac to create a MIDI file with MIDI events (unfortunately, I don't remember the names of those functions).
I am looking for a way to create instrumental notes (MIDI's or anything else) programmatically in order to play them. I also would like to have multiple channels playing those notes at the same time.
I already tried 'SoundBankPlayer', but apparently it can't play multiple instruments at the same time.
Do you have any ideas?
This answer might be a bit more work than you intended, but you can use PD on iOS to do this. More precisely, you can use libpd for iOS for the synthesis, and then use any number of community-donated patches for the sound you're looking for.
In iOS 5:
MusicSequence, MusicTrack, MusicPlayer will do what you want.
http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/MusicSequence_Reference/Reference/reference.html#//apple_ref/doc/uid/TP40009331
Check out the AUSampler audio unit for iOS. You'll probably have to delve into Core Audio, which has a bit of a learning curve. ;)
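For reference, here is a rough sketch of the MusicSequence/MusicTrack/MusicPlayer side (plain C, error checking omitted, illustrative note values). Each event carries a MIDI channel, and if you want multiple instruments you can attach the sequence to an AUGraph of samplers with MusicSequenceSetAUGraph (not shown here; without it, the notes go to the default synth):

```c
#include <AudioToolbox/AudioToolbox.h>

// Build a one-track sequence with a few generated notes and play it.
void PlayGeneratedNotes(void) {
    MusicSequence sequence;
    MusicTrack    track;
    MusicPlayer   player;

    NewMusicSequence(&sequence);
    MusicSequenceNewTrack(sequence, &track);

    // Three quarter notes on MIDI channel 0, one beat apart.
    for (int i = 0; i < 3; i++) {
        MIDINoteMessage note = {
            .channel         = 0,
            .note            = (UInt8)(60 + i),   // C4, C#4, D4
            .velocity        = 96,
            .releaseVelocity = 0,
            .duration        = 1.0f               // in beats
        };
        MusicTrackNewMIDINoteEvent(track, (MusicTimeStamp)i, &note);
    }

    NewMusicPlayer(&player);
    MusicPlayerSetSequence(player, sequence);
    MusicPlayerPreroll(player);
    MusicPlayerStart(player);
}
```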
Can anyone please give pointers on how to add a reverb effect to a recording in an iPhone app?
Vocal Live Free on the App Store is a pretty good example of how I would want to include a reverb effect.
The Core Audio Overview in the iOS documentation references reverb as an audio unit.
Any help beyond this will be helpful.
You can use the ObjectAL library. See the link below:
https://github.com/kstenerud/ObjectAL-for-iPhone
If you have access to the raw audio data, you can simply convolve it with the corresponding reverberation finite impulse response (FIR) filter kernel.
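As an illustration, a naive direct convolution looks like this (plain C). Real reverb impulse responses are long, so in practice you'd use FFT-based (partitioned) convolution, but the principle is the same:

```c
#include <stddef.h>

// Convolve a dry signal with a reverb impulse response (IR).
// out must hold numSamples + irLength - 1 samples.
void convolve(const float *dry, size_t numSamples,
              const float *ir, size_t irLength,
              float *out)
{
    for (size_t n = 0; n < numSamples + irLength - 1; n++)
        out[n] = 0.0f;

    for (size_t n = 0; n < numSamples; n++)
        for (size_t k = 0; k < irLength; k++)
            out[n + k] += dry[n] * ir[k];
}
```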
I want to record two voices and compare them. I think there is some Apple sample code for voice recording, but I have no idea how to compare two audio files. What is the right approach for this? Is there any framework Apple provides for this purpose, or any third-party framework?
It's not in Objective-C, but it contains a fantastic explanation of how audio is compared by Shazam, and includes sample code (and the source for a working application) in Java:
Check this out
Additionally, this question has a fantastic link about audio fingerprinting, which is essentially the same as the article above, but more in depth.
Hope this helps
I'm using ViSQOL for this purpose. If your audio files are generally no longer than about 10 seconds, it could be worth looking into. Also check out the ffmpeg library for converting the files into the desired format (ViSQOL requires a specific sample rate depending on whether the input is music or speech).
https://github.com/google/visqol