Really simple wave synth/table in iOS

I want to make a really simple synth.
In short, I want to play a WAV file and have it loop at certain points until the touch is released.
I am looking for some example code (it doesn't need to be free).
Sorry for such a basic question; I have been googling this, but there seems to be nothing on this exact topic, unless I'm missing some important term.
Also, is what I'm describing a wavetable synth, or a soundboard?

I'd call it a sampler.
Here's a sample project that will get you started:
https://sites.google.com/site/iphonecoreaudiodevelopment/remoteio-playback
See also:
The Audio Programming Book
The Core Audio Book
A sample project of mine

You need to store the sound data in memory and have some sort of read() command that fills an array of bytes to send to the sound card. The read() command has to keep track of its position between reads, so a persistent pointer must be maintained. You test the pointer's position to see whether you've reached the end, and reset it to the beginning when needed.
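A minimal sketch of that idea in plain C (not tied to any particular audio API; the Sampler struct and fillBuffer() name are made up for illustration, and the loop points stand in for whatever "certain points" you choose):

    #include <stddef.h>

    /* An in-memory sample with a persistent read position that wraps between
     * two loop points while a touch is held down. */
    typedef struct {
        const float *data;   /* decoded sample frames (mono, for simplicity) */
        size_t length;       /* total number of frames */
        size_t loopStart;    /* loop points, in frames */
        size_t loopEnd;
        size_t position;     /* persists between calls */
        int    looping;      /* set while the touch is down, cleared on release */
    } Sampler;

    /* Fill 'out' with 'frames' samples, jumping back to loopStart while looping. */
    void fillBuffer(Sampler *s, float *out, size_t frames)
    {
        for (size_t i = 0; i < frames; i++) {
            if (s->looping && s->position >= s->loopEnd)
                s->position = s->loopStart;      /* wrap inside the loop region */
            if (s->position >= s->length) {      /* past the end: output silence */
                out[i] = 0.0f;
                continue;
            }
            out[i] = s->data[s->position++];
        }
    }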
The specifics are going to depend upon your chosen language, of course.
I did this in Java, with the added possibility of playback at different speeds.
http://www.hexara.com/VSL/VSL2.htm
It's a little laggy. I've learned a bit since posting that, but haven't gone back to fix it yet. The program asks permission and has you load a wav file from your computer. It should be 16-bit, stereo, 44100fps, little-endian.
WaveTable synthesis is a bit different, in that only a single iteration of a wave is stored and used as source data.
Here is a short discussion, from Stanford's CCRMA website:
https://ccrma.stanford.edu/~bilbao/booktop/node9.html
I used this method to make a Java "Theremin"
http://www.hexara.com/VSL/JTheremin.htm
With a WaveTable, you decide on the size of the array. If it is a power of 2, one can bitmask the pointer after every increment, which is faster than doing a comparison and reset.
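Here's a tiny sketch of that trick, assuming a hypothetical 1024-sample table; a real synth would step through the table with a fractional phase increment to control pitch, but the masking idea is the same:

    #define TABLE_SIZE 1024                 /* must be a power of two */
    #define TABLE_MASK (TABLE_SIZE - 1)

    float table[TABLE_SIZE];                /* one cycle of the waveform */
    unsigned int phase = 0;

    float nextSample(void)
    {
        float s = table[phase];
        phase = (phase + 1) & TABLE_MASK;   /* wraps to 0 after TABLE_SIZE - 1 */
        return s;
    }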

Related

Access audio buffer of an AVAudioSession

Hope this question makes some sense, I'm completely lost....
In my prototype app I'm recording microphone input and saving it, and so far no problems at all.
I now need to access the buffer while I'm recording it in order to pass chunks of data to another class (written in C, not by me) that will do some analysis.
I spent the whole day browsing and reading, and it looks like I need to use Audio Queues in order to access the buffer.
The problem is that the syntax is C, and I don't understand it at all :)
So my questions are:
1) Is there any other way to achieve what I'm looking for? I don't need in-depth explanation, just some hints and I will browse my way through :) I'm asking because I'm not 100% sure that Audio Queues are the only way to go
2) Any good tutorial or example about Audio Queues? The aurioTouch tutorial by Apple wasn't very useful (again, I don't know C). I could bypass my problems in C by following a good tutorial that a noob like me can understand
Thanks a lot for any help you can offer.
Good question.
You can use code written by other people like:
Novocaine - pretty straightforward (though there were some bugs, at least in the older version I used about 6 months ago, something to do with mono and stereo).
Momu - quite a good library in C++ (you need to use the .mm extension for your files).
Those will save you time if you want to do some low-level audio programming, though some basic C skills are still required. Check out this guy; his explanations and enthusiasm are excellent.
With everything mentioned above you can be ready in 1-2 days of work, picking up good C skills along the way.
EDIT
Basically, everywhere you work with low-level audio you deal with a C array of numbers (declared like float *audioBuffer;) called audio samples. You cycle through it in a loop, do some operations, copy it, send it somewhere, or analyze it.
To copy it you have to allocate space for it. The actual byte size of the buffer can be calculated like this: numberOfSamples * sizeof(type).
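For example, a rough sketch of that copy in C (audioBuffer and numberOfSamples stand in for whatever your callback hands you):

    #include <stdlib.h>
    #include <string.h>

    float *copySamples(const float *audioBuffer, size_t numberOfSamples)
    {
        /* byte size = numberOfSamples * sizeof(type) */
        float *copy = malloc(numberOfSamples * sizeof(float));
        if (copy == NULL)
            return NULL;
        memcpy(copy, audioBuffer, numberOfSamples * sizeof(float));
        return copy;    /* the caller owns this and must free() it */
    }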

ADSR in iOS, sample code?

I've been searching for some examples that show how to do ADSR in iOS using audio samples (preferably WAV files with loop points, but that's secondary). I guess most people who write a sampler/synth app use an audio unit for this. Does anyone know a good code example that shows ADSR in any iOS audio library?
In the new iOS SDK 5.0 there's now a Sampler audio unit, which can do ADSR envelopes!
The presets demo shows how to use the sampler:
http://developer.apple.com/library/ios/#samplecode/LoadPresetDemo/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011214
If you want to load different sound formats to play this article is helpful:
https://developer.apple.com/library/mac/#technotes/tn2283/_index.html
And here's the iOS documentation reference:
http://developer.apple.com/library/ios/#documentation/AudioUnit/Reference/AUComponentServicesReference/Reference/reference.html#//apple_ref/doc/uid/TP40007291
You can find a (very basic) one in Apple's SinSynth sample. That is an AU, but it should demonstrate how one would apply an envelope to an audio buffer. I don't remember - it may simply be an ASR, but adding a fourth stage is simple once you have understood the existing program. The implementation is right in the note's render method.
Envelope Generators are not platform specific.
musicdsp.org will be a better resource if you want more than a push in the right direction.
MusicDSP has source code for an example envelope follower with attack/release. If you understand this, then sustain/decay should be pretty logical. ;)
But an ADSR envelope is basically just a matter of applying gain to your output signal with a state machine. Each state has a starting value, an ending value, and a duration. Calculating the slope of that line and the value of each point along it was covered in your algebra class back in high school. ;) If you want to be really fancy, you can implement other types of curves, but the concept remains the same.
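As a rough illustration of that state machine (plain C, linear segments, all names and fields invented for the example):

    typedef enum { ENV_IDLE, ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN, ENV_RELEASE } EnvStage;

    typedef struct {
        EnvStage stage;
        float    level;       /* current gain, 0..1 */
        float    step;        /* per-sample increment for the current segment */
        long     remaining;   /* samples left in the current segment */
        long     attackSamples, decaySamples, releaseSamples;
        float    sustainLevel;
    } Adsr;

    /* Start a straight-line segment from the current level to 'target' over 'duration' samples. */
    static void adsrSegment(Adsr *e, EnvStage stage, float target, long duration)
    {
        e->stage     = stage;
        e->remaining = duration;
        e->step      = duration > 0 ? (target - e->level) / (float)duration : 0.0f;
    }

    void adsrNoteOn(Adsr *e)  { adsrSegment(e, ENV_ATTACK,  1.0f, e->attackSamples); }
    void adsrNoteOff(Adsr *e) { adsrSegment(e, ENV_RELEASE, 0.0f, e->releaseSamples); }

    /* Call once per sample and multiply the result into the output signal. */
    float adsrNext(Adsr *e)
    {
        switch (e->stage) {
        case ENV_ATTACK:
        case ENV_DECAY:
        case ENV_RELEASE:
            e->level += e->step;
            if (--e->remaining <= 0) {
                if (e->stage == ENV_ATTACK)
                    adsrSegment(e, ENV_DECAY, e->sustainLevel, e->decaySamples);
                else if (e->stage == ENV_DECAY)
                    e->stage = ENV_SUSTAIN;        /* hold until note-off */
                else {
                    e->stage = ENV_IDLE;           /* release finished */
                    e->level = 0.0f;
                }
            }
            break;
        case ENV_SUSTAIN:
        case ENV_IDLE:
            break;                                 /* hold the current level */
        }
        return e->level;
    }

Exponential or other curve shapes just change how the step is applied; the stage bookkeeping stays the same.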

XNA | C# : Record and Change the Voice

My aim is to code a project which records a human voice and changes it (with effects).
e.g.: a person records their voice over the microphone (speaks for a while) and then the program makes it sound like a baby.
This needs to run efficiently and fast (the altering operation must run while recording, too).
What is the optimum way to do it ?
Thanks
If you're looking for either XNA or DirectX to do this for you, I'm pretty sure you're going to be out of luck (I don't have much experience with DirectSound; maybe somebody can correct me). What it sounds like you want to do is realtime digital signal processing, which means that you're either going to need to write your own code to manipulate the raw waveform, or find somebody else who's already written the code for you.
If you don't have experience writing this sort of thing, it's probably best to use somebody else's signal processing library, because this sort of thing can quickly get complicated. Since you're developing for the PC, you're in luck; you can use any library you like using P/Invoke. You might try out some of the solutions suggested here and here.
MSDN has some info about the Audio namespace from XNA, and the audio recording introduced in version 4:
Working with Microphones
Recording Audio from a Microphone
Keep in mind that recorded data is returned in PCM format.
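If you end up writing your own processing, the first step is usually converting that raw PCM into floats and back. A minimal sketch of the idea, assuming the usual 16-bit little-endian layout (shown in C rather than C# just to keep the sample-level logic language-neutral):

    #include <stdint.h>
    #include <stddef.h>

    /* Convert 16-bit little-endian PCM bytes into floats in the -1..1 range. */
    void pcm16ToFloat(const uint8_t *bytes, size_t byteCount, float *out)
    {
        for (size_t i = 0; i + 1 < byteCount; i += 2) {
            int16_t sample = (int16_t)(bytes[i] | (bytes[i + 1] << 8));
            out[i / 2] = sample / 32768.0f;
        }
    }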

Virtual Instrument App Recording Functionality With RemoteIO

I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (ie. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack my own low-level sound engine using AudioUnits & specifically RemoteIO so that I manually mix all the sounds and populate the final output buffer by hand and hence can save said buffer to a file. This will be able to be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me previously played data in the buffer but alas the buffer is always all 00.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and audio units as much as possible, but in the end, after I had written tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using audio units anyway. More than that, they're not that hard to set up and work with. If you are looking for a library to do most of the work for you, however, you should look for one written on top of Core Audio, not OpenAL.
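A rough sketch of what option 2 looks like with RemoteIO (the EngineState struct and mixVoices() are placeholders; the ExtAudioFileRef would be created elsewhere with ExtAudioFileCreateWithURL, and ExtAudioFileWriteAsync is used because it is designed to be callable from the render thread):

    #include <AudioToolbox/AudioToolbox.h>
    #include <string.h>

    typedef struct {
        ExtAudioFileRef file;    /* non-NULL while recording */
    } EngineState;

    /* Placeholder: fill ioData with your mixed voices; here it just writes silence. */
    static void mixVoices(AudioBufferList *ioData, UInt32 inNumberFrames)
    {
        for (UInt32 b = 0; b < ioData->mNumberBuffers; b++)
            memset(ioData->mBuffers[b].mData, 0, ioData->mBuffers[b].mDataByteSize);
        (void)inNumberFrames;
    }

    static OSStatus renderCallback(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData)
    {
        EngineState *state = (EngineState *)inRefCon;

        /* 1. Fill ioData with the mixed output of all currently sounding notes. */
        mixVoices(ioData, inNumberFrames);

        /* 2. Tap the same buffers into a file, so whatever reaches the speaker
         *    also ends up in the recording. */
        if (state->file != NULL)
            ExtAudioFileWriteAsync(state->file, inNumberFrames, ioData);

        return noErr;
    }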
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.

Get note data from MIDI file

Is there a way to get the note data from a MIDI file? That is, I want to break down the MIDI file into its constituent parts so they are in the form of a unique word (or any other data type).
What I want to do in the end is take in a MIDI file and find patterns in the notes: for each note, find its frequency (of being played) and note how likely other notes are to be played after it.
It would be nice to do this in C/C++, but any language would be fine.
Nik Reisman - sorry, but I don't agree with you. Parsing MIDI in C# or C++ is about 400 lines of code; it's neither hard nor complicated.
I will advise you start with this link: https://web.archive.org/web/20141227205754/http://www.sonicspot.com:80/guide/midifiles.html
There you'll find everything you need to know about MIDI and how to read it.
A short description of how the parser works:
1) Open the MIDI file in byte mode.
2) Read the header chunk, where there is info about the size, the number of tracks and, importantly, the file format!
- There are 3 format types: 0, 1 and 2 (type 2 is rare in practice; there are only a few MIDI files of this type, so you don't need to handle a file if it is type 2).
- If the chunk does not start with "MThd" (0x4D546864), end with an error (it's a bad MIDI file).
3) Read the track chunk.
- If it does not start with "MTrk" (0x4D54726B), end with an error (it's a bad MIDI file).
4) Read the MIDI events.
- There are very many events; you can read them all with if-else commands, or you can read only the events you are interested in, for example NOTE ON and NOTE OFF.
- In some MIDI files there is no NOTE OFF; that event is replaced by a NOTE ON with velocity 0.
On the website everything is explained really nicely. If you open the MIDI file in byte mode you will only need a few methods, and everything is then just if-else commands catching whatever is stored at the current position.
It is also important to understand VARIABLE LENGTH values, but that is explained on the website as well. It's not hard, and you can google many sites where VARIABLE LENGTH is explained with images and examples.
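To make the fiddly bits above concrete, here is a rough sketch in C of checking the "MThd" magic number and decoding a variable-length quantity (the file name is just an example, and error handling is trimmed):

    #include <stdio.h>
    #include <stdint.h>

    /* Read a big-endian 32-bit word, e.g. a chunk ID or chunk length. */
    static uint32_t readUint32(FILE *f)
    {
        uint32_t v = 0;
        for (int i = 0; i < 4; i++)
            v = (v << 8) | (uint32_t)fgetc(f);
        return v;
    }

    /* Variable-length quantity: 7 bits per byte, high bit set on every byte
     * except the last. Used for the delta time before every event. */
    static uint32_t readVarLen(FILE *f)
    {
        uint32_t value = 0;
        int c;
        do {
            c = fgetc(f);
            value = (value << 7) | (uint32_t)(c & 0x7F);
        } while (c & 0x80);
        return value;
    }

    int main(void)
    {
        FILE *f = fopen("song.mid", "rb");              /* open in byte (binary) mode */
        if (f == NULL || readUint32(f) != 0x4D546864)   /* "MThd" */
            return 1;                                   /* bad MIDI file */
        uint32_t headerLength = readUint32(f);          /* normally 6 */
        (void)headerLength;
        /* ...then read the format, number of tracks and division,
         * followed by the "MTrk" chunks and their events... */
        fclose(f);
        return 0;
    }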
If you want a bit more advice, write me and I will try to help. Parsing MIDI is not as hard as it looks. If you have any problems, write me.
Parsing MIDI files by hand is no fun, take my word on this. ;) The format, although well documented, is difficult to deal with since you are always on the raw byte level. Since you are interested in extracting some meaningful information from MIDI files themselves, I'd recommend using a framework such as Juce, which is written in C++ and has support for reading MIDI files.
Juce is pretty large, but the API is good and well-documented. The class for parsing MIDI files, for instance, is pretty straightforward and easy to use.
There are a number of ready-made solutions that take input from a MIDI file and generate a musical visualization,
so in theory and in practice a MIDI parser works fine.
I am working on such a musical visualizer in HTML5, generating a vertical top-down note timeline live to support handicapped piano players.
A DLP projector is great, but it seems I need to install a large LCD TV screen just above the piano keyboard to get the visualization to match the notes being played.
#Brendan Kavanagh is the leader;
another key developer is called Stephen Malinowski.
Just follow my question to get the right web links:
How to build a MIDI file visualizer that takes input from a MIDI file and displays a MIDI timeline over the keyboard to match the notes played by a real player
