Is there a way to get the note data from a MIDI file? That is, I want to break down the MIDI file into its constituent parts so they are in the form of a unique word (or any other data type).
What I want to do in the end is take in a MIDI file and find patterns in the notes: read in each note, find its frequency (how often it is played), and note how likely other notes are to be played after it.
It would be nice to do this in C/C++, but any language would be fine.
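Roughly, what I want at the end is something like this (a hypothetical C++ sketch, with a hard-coded note sequence standing in for the notes parsed from the MIDI file):

    // Sketch only: assumes the MIDI note numbers have already been extracted.
    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <vector>

    int main() {
        std::vector<uint8_t> notes = {60, 62, 64, 60, 62, 60};  // hypothetical note sequence

        std::map<uint8_t, int> frequency;                       // how often each note occurs
        std::map<uint8_t, std::map<uint8_t, int>> transitions;  // note -> following note -> count

        for (size_t i = 0; i < notes.size(); ++i) {
            ++frequency[notes[i]];
            if (i + 1 < notes.size())
                ++transitions[notes[i]][notes[i + 1]];
        }

        // Print the probability of each note following each other note.
        for (const auto& [note, followers] : transitions) {
            int total = 0;
            for (const auto& [next, count] : followers) total += count;
            for (const auto& [next, count] : followers)
                std::printf("P(%u -> %u) = %.2f\n", note, next, (double)count / total);
        }
    }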
Nik Reisman - sorry, but I don't agree with you. Parsing MIDI in C# or C++ is about 400 lines of code; it's not hard and it's not difficult.
I would advise you to start with this link: https://web.archive.org/web/20141227205754/http://www.sonicspot.com:80/guide/midifiles.html
It has everything you need to know about MIDI and how to read it.
A short description of how the parser works:
1) Open the MIDI file in byte (binary) mode.
2) Read the header chunk, which holds the chunk size, the number of tracks and, importantly, the file format.
- There are three format types: 0, 1 and 2 (type 2 is rare; only a few MIDI files use it, so you don't need to handle type 2).
- If the chunk does not start with "MThd" (0x4D546864), stop with an error (it's a bad MIDI file).
3) Read the track chunk.
- If it does not start with "MTrk" (0x4D54726B), stop with an error (it's a bad MIDI file).
4) Read the MIDI events.
- There are many event types; you can handle them all with if-else branches, or read only the events you care about, for example NOTE ON and NOTE OFF (see the sketch after this list).
- Some MIDI files contain no NOTE OFF events; instead they use NOTE ON with velocity 0.
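As a rough illustration of steps 1) to 3) (and of where step 4 would go), here is a minimal C++ sketch; it assumes a well-formed file, does no real error handling, and skips the event parsing entirely:

    #include <cstdint>
    #include <fstream>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Read a big-endian unsigned integer of 'bytes' bytes (MIDI files are big-endian).
    static uint32_t readBE(std::ifstream& in, int bytes) {
        uint32_t value = 0;
        for (int i = 0; i < bytes; ++i)
            value = (value << 8) | static_cast<uint8_t>(in.get());
        return value;
    }

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        std::ifstream in(argv[1], std::ios::binary);   // 1) open in byte mode

        char tag[5] = {0};
        in.read(tag, 4);                               // 2) header chunk
        if (std::string(tag) != "MThd") throw std::runtime_error("bad MIDI file: no MThd");

        uint32_t headerLength = readBE(in, 4);         // normally 6
        uint16_t format       = readBE(in, 2);         // 0, 1 or 2
        uint16_t trackCount   = readBE(in, 2);
        uint16_t division     = readBE(in, 2);         // ticks per quarter note
        if (format == 2) throw std::runtime_error("format 2 not supported");
        in.ignore(headerLength - 6);                   // skip any extra header bytes

        for (uint16_t t = 0; t < trackCount; ++t) {    // 3) track chunks
            in.read(tag, 4);
            if (std::string(tag) != "MTrk") throw std::runtime_error("bad MIDI file: no MTrk");
            uint32_t trackLength = readBE(in, 4);
            std::vector<uint8_t> trackData(trackLength);
            in.read(reinterpret_cast<char*>(trackData.data()), trackLength);
            // 4) parse the events in trackData here (delta time + event bytes)
        }
    }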
Everything is explained really nicely on that site. If you open the MIDI file in byte mode you only need a few reading methods, and the rest is just if-else branches that catch whatever event is stored at the current position.
It is important to understand VARIABLE LENGTH quantities, but the site explains those as well. It's not hard, and you can google many other pages that explain VARIABLE LENGTH with images and examples, so I won't repeat the full explanation here.
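In short, a VARIABLE LENGTH quantity stores 7 bits per byte, with the high bit meaning "another byte follows"; a minimal C++ decoder might look like this:

    #include <cstdint>
    #include <vector>

    // Decode one variable-length quantity starting at 'pos' in the track data.
    // Each byte contributes 7 bits; the high bit says "another byte follows".
    uint32_t readVarLen(const std::vector<uint8_t>& data, size_t& pos) {
        uint32_t value = 0;
        uint8_t byte;
        do {
            byte = data[pos++];
            value = (value << 7) | (byte & 0x7F);
        } while (byte & 0x80);
        return value;
    }
    // Example: the bytes 0x81 0x48 decode to (0x01 << 7) | 0x48 = 200 ticks.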
If you want a bit more advice, write to me and I will try to help. But parsing MIDI is not as hard as it looks. If you have any problems, write to me.
Parsing MIDI files by hand is no fun, take my word on this. ;) The format, although well documented, is difficult to deal with since you are always on the raw byte level. Since you are interested in extracting some meaningful information from MIDI files themselves, I'd recommend using a framework such as Juce, which is written in C++ and has support for reading MIDI files.
Juce is pretty large, but the API is good and well-documented. The class for parsing MIDI files, for instance, is pretty straightforward and easy to use.
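As a rough sketch only (module headers and exact method names may differ between JUCE versions, so check the docs for the version you use), reading the note-ons from the first track could look something like this:

    #include <juce_audio_basics/juce_audio_basics.h>
    #include <juce_core/juce_core.h>

    // Sketch: print every note-on from the first track of a MIDI file.
    void listNoteOns(const juce::File& file)
    {
        juce::FileInputStream stream(file);
        juce::MidiFile midiFile;
        if (!stream.openedOk() || !midiFile.readFrom(stream))
            return;                                   // not a readable MIDI file

        midiFile.convertTimestampTicksToSeconds();

        if (const juce::MidiMessageSequence* track = midiFile.getTrack(0))
        {
            for (int i = 0; i < track->getNumEvents(); ++i)
            {
                const juce::MidiMessage& msg = track->getEventPointer(i)->message;
                if (msg.isNoteOn())
                    DBG(juce::String(msg.getNoteNumber()) + " at " + juce::String(msg.getTimeStamp()));
            }
        }
    }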
There are a number of ready-made solutions that take input from a MIDI file and generate a musical visualization,
so in theory and in practice a MIDI parser works fine.
I am working on such a musical visualizer in HTML5, generating a vertical top-down note timeline live, to support handicapped piano players.
A DLP projector is great, but it seems I need to install a large LCD TV screen just over the piano keyboard to get the visualization to match the notes being played.
Brendan Kavanagh is the leader in this field;
another key developer is Stephen Malinowski.
Just follow my question to get the right web links:
How to build a MIDI file visualizer that takes input from a MIDI file and displays a MIDI timeline over the keyboard, matching the notes played by a real player.
I'm working in AudioKit and I am looking to understand how people have incorporated drums. Obviously, the sampler is an option, but I am wondering if there is a built-in option similar to some of the basic synthesis options.
There are a few options. I personally like the AppleSampler/MidiSampler like in the example, but instead of using audio files you can create an EXS sampler instrument in Logic, where you can assign notes for different velocities. AppleSampler can also load AUPresets made in GarageBand and SoundFonts (SF2). The DunneAudioKit Sampler is an option if you are working with SFZ files, but I think that might be a work in progress in AudioKit 5. Loading WAV files directly into AppleSampler is also a good option if you just want one-shot sounds.
I'm assuming you're mostly talking about playback of samples, not recording.
The best built-in option I've seen (other than AppleSampler/MidiSampler) is AudioPlayer, which lets you load in a sample and play it back on demand (from an on-screen pad, etc.). MIDIListener can then help you respond to external MIDI events, etc. It works (I have a pretty big branch in my app where I tried it), but I'm not sure it works well.
I wouldn't recommend DunneAudioKit Sampler for drums. There is no one-shot playback (so playing the same note in quick succession will cut off the previous note, even if you mess with the release). If you're trying to build a complex/realistic acoustic drum instrument, you'll also want round-robins so that variations of the same hit can be played, which Dunne also doesn't have. It can load SFZ files, but only a very limited subset of SFZ's opcodes (so again, it's missing things like round robins, mute groups, one-shot, etc).
Having gone down all those roads, I would suggest starting with AppleSampler, and I would build the EXS or aupreset file in Logic or Mainstage rather than trying to build something programmatically.
If your needs are really simple, the examples in AudioKit's recently released drum pad playground are a great place to start, loading single samples onto a specific note of AppleSampler.
After finally finding a way to concatenate multiple voice files into one single audio file on the iPhone, I am now trying to superimpose an audio file over the length of the voice file.
So basically I have two .m4a files:
voice.m4a which is about 10 seconds for example.
music.m4a which is about 5 seconds.
What I require is that the two files be combined in such a way that the resulting single audio file contains the music in the background of the voice file for its entire length; so the resulting output should have the 10 seconds of voice with the 5 seconds of music repeated twice underneath. It is absolutely important to end up with a single file that contains all of this.
I am trying to get all of this done in an application on the iPhone.
Can anyone please help me out with this?
If you are looking to do that programmatically, you will need to go deeper into Core Audio. For a simpler solution you could use Audio Queues, or, for more fine-grained control, Audio Units and an AUGraph. The MultiChannelMixer is the Audio Unit you are looking for. Unfortunately there is no space for an elaborate tutorial here (it would take a couple of days to write the tutorial alone), but I hope I can point you in the right direction.
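As a very rough sketch of the AUGraph/MultiChannelMixer wiring only (this uses the C Audio Toolbox API; feeding the voice and music audio into the mixer's input buses, looping the shorter file, and writing the mixed result out to an .m4a with ExtAudioFile are not shown here, and you would need all of that for a single output file):

    #include <AudioToolbox/AudioToolbox.h>

    // Sketch: build an AUGraph with a MultiChannelMixer feeding the RemoteIO output.
    void buildMixerGraph()
    {
        AUGraph graph;
        NewAUGraph(&graph);

        AudioComponentDescription mixerDesc = {};
        mixerDesc.componentType         = kAudioUnitType_Mixer;
        mixerDesc.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
        mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AudioComponentDescription outputDesc = {};
        outputDesc.componentType         = kAudioUnitType_Output;
        outputDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
        outputDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AUNode mixerNode, outputNode;
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &outputDesc, &outputNode);
        AUGraphOpen(graph);

        // Two input buses: one for the voice track, one for the (looped) music track.
        AudioUnit mixerUnit;
        AUGraphNodeInfo(graph, mixerNode, nullptr, &mixerUnit);
        UInt32 busCount = 2;
        AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));

        AUGraphConnectNodeInput(graph, mixerNode, 0, outputNode, 0);
        AUGraphInitialize(graph);
        AUGraphStart(graph);
    }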
If you decide to go down that path and want to do more audio programming than this one simple example, then I strongly suggest you buy "Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS" by Chris Adamson and Kevin Avila. You can find it on Amazon, in paperback or for Kindle.
I want to record the screen (by capturing 15 screenshots per second). That part I know how to do, but I don't know how to write the frames to some popular video format. The best option I have found is to write the frames to separate PNG files and use the command-line MEncoder, which can convert them to many output formats. But maybe someone has another idea?
Requirements:
Must be a multi-platform solution (I'm using Free Pascal / Lazarus): Windows, Linux, macOS.
Are there libraries for that?
A complex command-line application that records the screen for me would also work, but I must be able to edit the frames before the raw data is converted to a popular video format.
Any material that could give me an idea is appreciated: APIs, libraries, anything, even in languages other than FPC (I would try to rewrite it or find an equivalent).
I also considered writing frames to a raw video format and then using MEncoder (it can handle that) or some other tool, but I can't find any API/docs for raw video data.
Regards
Argalatyr mentioned ffmpeg already.
There are two ways that you can get that to work:
By spawning a new process. All you have to do is prepare the right input (a series of JPEG images, for example) and the right command-line parameters. After that you just call ffmpeg.exe and wait for it to finish (see the sketch after these two options).
ffmpeg makes use of some DLLs that do the actual work. You can use those DLLs directly from within your Delphi application. It's a bit more work, because it's more low-level, but in the end it gives you finer control over what happens and over what you show the user while you're processing.
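To illustrate the first option, here is a rough sketch (in C++ rather than Pascal, with a hypothetical frame-naming pattern; check the ffmpeg options against your ffmpeg build) of building the command line and spawning the process:

    #include <cstdlib>
    #include <string>

    // Sketch of option 1: the frames were written as frame00001.png, frame00002.png, ...
    // (a made-up naming scheme) and ffmpeg is spawned to encode them at 15 fps.
    int encodeFrames(const std::string& frameDir, const std::string& outputFile)
    {
        std::string cmd = "ffmpeg -framerate 15 -i \"" + frameDir + "/frame%05d.png\""
                          " -c:v libx264 -pix_fmt yuv420p \"" + outputFile + "\"";
        return std::system(cmd.c_str());   // wait for ffmpeg to finish, return its exit code
    }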
Here are some solutions to check out:
FFVCL Commercial. Actually looks quite good, but I was too greedy to spend money on this.
Open Source Delphi headers for FFMpeg. I've tried it, but I never managed to get it to work.
I ended up pulling the DLL wrappers from an open source karaoke program (UltraStar Deluxe). I had to remove some dependencies, but in the end it worked like a charm. The relevant (pascal) code can be found here:
http://ultrastardx.svn.sourceforge.net/viewvc/ultrastardx/trunk/src/lib/ffmpeg-0.10/
There was some earlier discussion with a Delphi component here. It's a very simple component that sometimes generates some weird movies. Maybe a start.
I want to make a really simple synth.
In short, I want to play a WAV file and have it loop at certain points until touch is released.
I am looking for some example code (it doesn't need to be free).
Sorry for such a basic question; I have been googling this, but there seems to be nothing on this exact topic, unless I'm missing some important term.
Also, is what I'm describing a wavetable synth, or a soundboard?
I'd call it a sampler.
Here's a sample project that will get you started:
https://sites.google.com/site/iphonecoreaudiodevelopment/remoteio-playback
See also:
The Audio Programming Book
The Core Audio Book
A sample project of mine
You need to store the sound data in memory, and have some sort of read() command that fills an array of bytes to send to the sound card. The read() command has to keep track of its position between reads, so a persistent pointer must be maintained. You will test the position of the pointer and see if you've reached the end or not, and reset to the beginning when needed.
The specifics are going to depend upon your chosen language, of course.
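As a small sketch of that idea in C++ (the class and method names here are made up just to show the persistent position and the wrap-around):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Sketch: a looping reader over sample data held in memory.
    class LoopingReader {
    public:
        LoopingReader(std::vector<int16_t> samples, size_t loopStart, size_t loopEnd)
            : data(std::move(samples)), loopStart(loopStart), loopEnd(loopEnd), pos(0) {}

        // Fill 'out' with the next 'count' samples, wrapping back to loopStart
        // whenever the persistent read position reaches loopEnd.
        void readBlock(int16_t* out, size_t count) {
            for (size_t i = 0; i < count; ++i) {
                out[i] = data[pos++];
                if (pos >= loopEnd)          // reached the end of the loop region
                    pos = loopStart;         // reset to the start of the loop
            }
        }

    private:
        std::vector<int16_t> data;
        size_t loopStart, loopEnd;
        size_t pos;                          // persistent position between reads
    };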
I did this with Java, with the added possibility of playback at different speeds.
http://www.hexara.com/VSL/VSL2.htm
It's a little laggy. I've learned a bit since posting that, but haven't gone back to fix it yet. The program asks permission and has you load a WAV file from your computer. It should be 16-bit, stereo, 44100 frames per second, little-endian.
WaveTable synthesis is a bit different, in that only a single iteration of a wave is stored and used as source data.
Here is a short discussion, from Stanford's CCRMA website:
https://ccrma.stanford.edu/~bilbao/booktop/node9.html
I used this method to make a Java "Theremin"
http://www.hexara.com/VSL/JTheremin.htm
With a WaveTable, you decide on the size of the array. If it is a power of 2, one can bitmask the pointer after every increment, which is faster than doing a comparison and reset.
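A tiny sketch of that trick (the table size and names are just illustrative):

    #include <cstddef>

    // With a table size that is a power of two (here 1024), the wrap-around
    // is a single bitwise AND instead of a compare-and-reset.
    constexpr std::size_t TABLE_SIZE = 1024;              // power of 2
    constexpr std::size_t TABLE_MASK = TABLE_SIZE - 1;    // 0x3FF

    float wavetable[TABLE_SIZE];
    std::size_t phase = 0;

    float nextSample() {
        float sample = wavetable[phase];
        phase = (phase + 1) & TABLE_MASK;                 // wraps 1023 -> 0 automatically
        return sample;
    }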
I've been searching for some examples that show how to do ADSR in iOS using audio samples (preferably WAV files with loop points, but that's secondary). I guess most people who write a sampler/synth app use an Audio Unit for this. Does anyone know a good code example that shows ADSR in any iOS audio library?
In the new iOS SDK 5.0 there's now a Sampler Audio Unit, which can do ADSR envelopes.
The presets demo shows how to use the sampler:
http://developer.apple.com/library/ios/#samplecode/LoadPresetDemo/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011214
If you want to load different sound formats to play this article is helpful:
https://developer.apple.com/library/mac/#technotes/tn2283/_index.html
And here's the iOS documentation reference:
http://developer.apple.com/library/ios/#documentation/AudioUnit/Reference/AUComponentServicesReference/Reference/reference.html#//apple_ref/doc/uid/TP40007291
You can find a (very basic) one in Apple's SinSynth sample. That is an AU, but it should demonstrate how one would apply an envelope to an audio buffer. I don't remember exactly - it may simply be an ASR - but adding a fourth stage is simple once you have understood the existing code. The implementation is right in the note's render function.
Envelope Generators are not platform specific.
musicdsp.org will be a better resource if you want more than a push in the right direction.
MusicDSP has source code for an example envelope follower with attack/release. If you understand this, then sustain/decay should be pretty logical. ;)
But an ADSR envelope is basically just a matter of applying gain to your output signal with a state machine. Each state has a starting value, an ending value, and a duration. Calculating the slope of that line and the value of each point along it was covered in your algebra class back in high school. ;) If you want to be really fancy, you can implement other types of curves, but the concept remains the same.
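As a rough sketch of that state machine in C++ (all names, the sample-based durations, and the linear segments are illustrative, not taken from any particular library):

    #include <cstddef>

    // ADSR envelope as a small state machine producing a gain value per sample.
    class ADSR {
    public:
        enum class Stage { Attack, Decay, Sustain, Release, Idle };

        ADSR(std::size_t attack, std::size_t decay, float sustain, std::size_t release)
            : attackSamples(attack), decaySamples(decay),
              releaseSamples(release), sustainLevel(sustain) {}

        void noteOn()  { stage = Stage::Attack; counter = 0; }
        void noteOff() { releaseStart = level; stage = Stage::Release; counter = 0; }

        // Returns the gain for the next sample; multiply your output signal by this.
        float next() {
            switch (stage) {
            case Stage::Attack:                       // line from 0 up to 1
                level = (float)++counter / attackSamples;
                if (counter >= attackSamples) { stage = Stage::Decay; counter = 0; }
                break;
            case Stage::Decay:                        // line from 1 down to sustainLevel
                level = 1.0f - (1.0f - sustainLevel) * ++counter / decaySamples;
                if (counter >= decaySamples) stage = Stage::Sustain;
                break;
            case Stage::Sustain:
                level = sustainLevel;                 // hold until noteOff()
                break;
            case Stage::Release:                      // line from the release start down to 0
                level = releaseStart * (1.0f - (float)++counter / releaseSamples);
                if (counter >= releaseSamples) { level = 0.0f; stage = Stage::Idle; }
                break;
            case Stage::Idle:
                level = 0.0f;
                break;
            }
            return level;
        }

    private:
        std::size_t attackSamples, decaySamples, releaseSamples;
        float sustainLevel;
        Stage stage = Stage::Idle;
        std::size_t counter = 0;
        float level = 0.0f;
        float releaseStart = 0.0f;
    };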