Graphing MIDI events in iOS using AudioKit

I want to create a graphical representation of a MIDI file. I am using AudioKit for my audio processing needs in my app.
I am loading the MIDI with an AKSequencer and using an AKMIDISampler to add a WAV file to the sequence.
Is there a way to do something like the view in GarageBand where you see the notes in a graphical representation using AudioKit?
The WAV part is not important for this. I just want to be able to draw the contents of the MIDI file.
Thanks!

It sounds like what you are asking for is what is referred to as a piano roll in a typical DAW (like GarageBand). AudioKit does not currently provide a built-in piano roll; however, as AudioKit is open source, one could certainly be contributed in the future.
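In the meantime, nothing stops you from reading the note events out of the file and drawing them yourself. Here is a minimal sketch using AudioToolbox's MusicSequence API directly (which is what AKSequencer wraps under the hood); the NoteEvent struct and function name are just illustrative:

    import Foundation
    import AudioToolbox

    struct NoteEvent {
        let beat: MusicTimeStamp   // start, in beats
        let duration: Float32      // length, in beats
        let pitch: UInt8           // MIDI note number, 0-127
    }

    // Collect every MIDI note event in a standard MIDI file.
    func noteEvents(inMIDIFile url: URL) -> [NoteEvent] {
        var events: [NoteEvent] = []
        var sequence: MusicSequence?
        NewMusicSequence(&sequence)
        guard let sequence = sequence else { return events }
        defer { DisposeMusicSequence(sequence) }
        MusicSequenceFileLoad(sequence, url as CFURL, .midiType, [])

        var trackCount: UInt32 = 0
        MusicSequenceGetTrackCount(sequence, &trackCount)

        for index in 0..<trackCount {
            var track: MusicTrack?
            MusicSequenceGetIndTrack(sequence, index, &track)
            var iterator: MusicEventIterator?
            NewMusicEventIterator(track!, &iterator)
            guard let iterator = iterator else { continue }
            defer { DisposeMusicEventIterator(iterator) }

            var hasEvent: DarwinBoolean = false
            MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
            while hasEvent.boolValue {
                var beat: MusicTimeStamp = 0
                var type: MusicEventType = 0
                var data: UnsafeRawPointer?
                var size: UInt32 = 0
                MusicEventIteratorGetEventInfo(iterator, &beat, &type, &data, &size)
                if type == kMusicEventType_MIDINoteMessage,
                   let note = data?.assumingMemoryBound(to: MIDINoteMessage.self).pointee {
                    events.append(NoteEvent(beat: beat, duration: note.duration, pitch: note.note))
                }
                MusicEventIteratorNextEvent(iterator)
                MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
            }
        }
        return events
    }

Each NoteEvent then maps to a CGRect in a custom UIView (x from beat, width from duration, y from 127 minus pitch), which is essentially all a basic piano roll is.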

Related

How to apply audio effects in iOS

I use AVPlayer to play audio (streaming or a local file). I want to apply some effects to this audio: boost volume, skip silence, reduce noise, and change speed (in 0.1 increments).
I did the same thing on Android by creating my own player, decoding the different audio formats into PCM data, and then using some C libraries to modify it. It was quite complicated.
Is this possible with AVPlayer, and if not, how can I do it? Something like modifying audio already decoded by AVPlayer. Is there some iOS API (AVAudioEngine?) or framework (AudioKit?) which can do this?
Thanks!
IMHO the best solution is to use https://github.com/audiokit/AudioKit as it is well maintained and supports most of the requirements you listed.
Another approach is to import the C library you used in the Android project and write a wrapper around it so it can easily be used from Objective-C/Swift. With this approach you will have less code to maintain, and you guarantee similar results between the two platforms. Care to share more about that code?
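For the speed-change requirement specifically, you don't strictly need a third-party framework: AVAudioEngine's AVAudioUnitTimePitch can change the rate of decoded audio. A minimal sketch (the file name is a placeholder, you would swap AVPlayer for an AVAudioPlayerNode, and error handling is elided):

    import AVFoundation

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()
    timePitch.rate = 1.5   // playback speed; adjust in 0.1 steps as needed

    engine.attach(player)
    engine.attach(timePitch)

    // "song.m4a" is a placeholder; any file AVAudioFile can open works.
    let url = Bundle.main.url(forResource: "song", withExtension: "m4a")!
    let file = try AVAudioFile(forReading: url)

    engine.connect(player, to: timePitch, format: file.processingFormat)
    engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()

Volume boost is similar (engine.mainMixerNode.outputVolume); silence skipping and noise reduction would still need your own processing in a tap or render callback.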

How can I retrieve PCM sample data from an audio clip in iOS?

I need to extract the PCM audio samples from a .wav file (or any other format) in iOS. I would also like to get the same data from a live recording using the microphone.
Can this be done using AVFoundation, or do I need to use the lower-level Core Audio APIs? An example in Swift would be much appreciated. I'm just looking for a basic array of Floats corresponding to individual audio samples to use for signal processing.
AVFoundation includes a class called AVAssetReader that can be used to obtain the audio data from a sound file. https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVAssetReader_Class/index.html#//apple_ref/occ/instm/AVAssetReader/addOutput:
However, the most straightforward way is probably to use Extended Audio File Services and the ExtAudioFileRead function: https://developer.apple.com/library/prerelease/ios/documentation/MusicAudio/Reference/ExtendedAudioFileServicesReference/index.html#//apple_ref/c/func/ExtAudioFileRead
Extended Audio File Services is a C API, so you'll have to deal with calling it from Swift.
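If you can target iOS 8 or later, AVAudioFile wraps that same machinery and hands you Float samples directly, avoiding the C API. A minimal sketch (function name is illustrative; AVAudioFile decodes to deinterleaved Float32 PCM by default):

    import AVFoundation

    // Read an entire file into an array of Float samples (first channel).
    func samples(from url: URL) throws -> [Float] {
        let file = try AVAudioFile(forReading: url)   // decodes to Float32 PCM
        let frameCount = AVAudioFrameCount(file.length)
        guard frameCount > 0,
              let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: frameCount) else { return [] }
        try file.read(into: buffer)
        guard let channels = buffer.floatChannelData else { return [] }
        return Array(UnsafeBufferPointer(start: channels[0],
                                         count: Int(buffer.frameLength)))
    }

For the live-recording half of the question, a tap on AVAudioEngine's input node delivers the same AVAudioPCMBuffer type as it records:

    let engine = AVAudioEngine()
    let input = engine.inputNode
    input.installTap(onBus: 0, bufferSize: 1024,
                     format: input.outputFormat(forBus: 0)) { buffer, _ in
        // buffer.floatChannelData?[0] points at the newest samples
    }
    try engine.start()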

Generate a sound (not from a file)

I'm building a small game prototype, and I'd like to be able to play simple sounds whose length/tone/pitch will vary based on what the user is doing.
This is surprisingly hard to do. The closest resource I found was:
http://www.tmroyal.com/playing-sounds-in-swift-audioengine.html
But this does not actually generate any sound on my device or in the iOS Simulator.
Does anyone know of any working code to play ANY procedurally generated audio? A simple sine wave would do.
https://gist.github.com/rgcottrell/5b876d9c5eea4c9e411c
This code, on the other hand, works, and it's beautifully written...
Success!
You can try AudioKit.
It's an audio framework built on top of Core Audio.
In their Continuous Control example they use a simple FM oscillator with controlled parameters.
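To give a flavor of the API, here is a minimal sketch assuming AudioKit 4.x (class names changed in later major versions): an oscillator whose pitch and volume can be changed while it plays.

    import AudioKit

    // Minimal sketch, assuming the AudioKit 4.x API.
    let oscillator = AKOscillator()   // sine table by default
    oscillator.frequency = 440        // Hz
    oscillator.amplitude = 0.5        // 0...1

    AudioKit.output = oscillator
    try AudioKit.start()
    oscillator.start()

    // Later, driven by whatever the user is doing:
    oscillator.frequency = 660        // changes while the tone plays

Varying length, tone, and pitch is then just a matter of scheduling frequency/amplitude changes (or oscillator.stop()) from your game logic.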

iOS Audio Service: Read & write audio files

Hi guys,
I'm working on some audio services on iOS. I'm trying to find examples or tutorials on how an audio service or stream can read an existing audio file, process it somehow (e.g. apply a filter), and then write the result to another file.
Is there anybody who can help me?
Dirac3LE (by Stephan M. Bernsee) is a great library for this job. Examples and a manual are included in the download. It is particularly intended for time and pitch manipulation, but in your case you'll be interested in its EAFRead and EAFWrite classes.
If you want to get familiar with a lower-level library that you can also use for microphone input and sound output, and that lets you get raw samples in and out, I would suggest taking a look at Audio Queue Services.
I used it in my side project to get audio from the microphone, and I also wrote some code you might find useful for fast, vectorized, FFT-based FIR filtering on input audio. You can find the code here: https://github.com/jamescarlson/FreeAPRS
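If Dirac or Audio Queues feel heavyweight, a simple read-process-write loop can also be built on AVAudioFile. A minimal sketch with a gain change standing in for your filter (the URLs are placeholders, and the output is written as a .caf in the float processing format):

    import AVFoundation

    func process(input: URL, output: URL, gain: Float) throws {
        let inFile = try AVAudioFile(forReading: input)
        let format = inFile.processingFormat            // Float32, deinterleaved
        // Writing with the processing format's settings produces a .caf file.
        let outFile = try AVAudioFile(forWriting: output, settings: format.settings)

        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 4096) else { return }
        while inFile.framePosition < inFile.length {
            try inFile.read(into: buffer, frameCount: 4096)
            if let channels = buffer.floatChannelData {
                for ch in 0..<Int(format.channelCount) {
                    for i in 0..<Int(buffer.frameLength) {
                        channels[ch][i] *= gain         // replace with any per-sample filter
                    }
                }
            }
            try outFile.write(from: buffer)
        }
    }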

How can I generate musical notes on iOS and play them?

I am creating a musical app which generates some music. I have already used MIDI functions on the Mac to create a MIDI file with MIDI events (unfortunately, I don't remember the names of those functions).
I am looking for a way to create instrumental notes (MIDI or anything else) programmatically in order to play them. I would also like to have multiple channels playing those notes at the same time.
I already tried 'SoundBankPlayer', but apparently it can't play multiple instruments at the same time.
Have you got an idea?
This answer might be a bit more work than you intended, but you can use PD on iOS to do this. More precisely, you can use libpd for iOS for the synthesis, and then use any number of community-donated patches for the sound you're looking for.
In iOS 5 and later, MusicSequence, MusicTrack, and MusicPlayer will do what you want.
http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/MusicSequence_Reference/Reference/reference.html#//apple_ref/doc/uid/TP40009331
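A minimal sketch of that API from Swift: build a sequence in code, add note events to a track, and play it (add more tracks/channels the same way). Without an attached AUGraph, MusicPlayer renders the notes with a default built-in instrument; route the sequence into AUSampler units (see the answer below) for real instrument sounds.

    import AudioToolbox

    var sequence: MusicSequence?
    NewMusicSequence(&sequence)

    var track: MusicTrack?
    MusicSequenceNewTrack(sequence!, &track)

    // Middle C, E, and G, one beat each: a broken C major chord.
    var beat: MusicTimeStamp = 0
    for pitch: UInt8 in [60, 64, 67] {
        var note = MIDINoteMessage(channel: 0, note: pitch, velocity: 100,
                                   releaseVelocity: 0, duration: 1.0)
        MusicTrackNewMIDINoteEvent(track!, beat, &note)
        beat += 1
    }

    var player: MusicPlayer?
    NewMusicPlayer(&player)
    MusicPlayerSetSequence(player!, sequence)
    MusicPlayerPreroll(player!)
    MusicPlayerStart(player!)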
Check out the AUSampler audio unit for iOS; you'll probably have to delve into Core Audio, which has some learning curve. ;)
