AudioKit and BASS MIDI player - iOS

We are considering using the AudioKit framework along with the BASSMIDI sampler provided by http://www.un4seen.com/ in our iOS app.
Here is what we aim to implement:
Play a MIDI file using the AudioKit sequencer.
Send the MIDI events (read by the AudioKit sequencer) to the BASSMIDI sampler.
Redirect the BASSMIDI sampler's audio output to an AudioKit mixer instance.
Our main concern is that it doesn't seem possible to access the BASSMIDI sampler's audio output.
Has anyone had experience doing this?
Is it even possible?
Thanks in advance for your insights!

For those who would be interested in using the BASSMIDI player on iOS: we ended up implementing an AUv3 Audio Unit wrapped around the BASSMIDI library.
The main advantage is that this Audio Unit can be inserted into a graph of audio nodes handled by AVAudioEngine (just as you would do with an AVAudioUnitSampler).
The code is available on a public repository:
https://github.com/newzik/BassMidiAudioUnit
Feel free to use it!
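As an illustration, here is a minimal sketch of inserting an AUv3 instrument into an AVAudioEngine graph, the same way you would use an AVAudioUnitSampler. The component subtype and manufacturer codes below are hypothetical placeholders; use whatever codes your Audio Unit actually registers.

```swift
import AVFoundation
import AudioToolbox

// Hypothetical component description: substitute the codes that the
// BassMidiAudioUnit (or any AUv3 instrument) registers with the system.
let description = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: 0x64656d6f,       // placeholder four-char code
    componentManufacturer: 0x64656d6f,  // placeholder four-char code
    componentFlags: 0,
    componentFlagsMask: 0)

let engine = AVAudioEngine()

AVAudioUnit.instantiate(with: description, options: []) { audioUnit, error in
    guard let audioUnit = audioUnit else { return }
    // Attach and connect the unit like any other node in the graph.
    engine.attach(audioUnit)
    engine.connect(audioUnit, to: engine.mainMixerNode, format: nil)
    try? engine.start()

    // Music-device units are typically returned as
    // AVAudioUnitMIDIInstrument, which accepts MIDI events directly.
    if let instrument = audioUnit as? AVAudioUnitMIDIInstrument {
        instrument.startNote(60, withVelocity: 100, onChannel: 0)
    }
}
```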

Related

Graphing MIDI events in iOS using AudioKit

I want to create a graphical representation of a MIDI file. I am using AudioKit for my audio processing needs in my app.
I am loading the MIDI with an AKSequencer and using an AKMIDISampler to add a WAV file to the sequence.
Is there a way to do something like the view in GarageBand where you see the notes in a graphical representation using AudioKit?
The WAV part is not important for this. I just want to be able to draw the contents of the MIDI file.
Thanks!
It sounds like what you are asking for is what is referred to as a Piano Roll in a typical DAW (like GarageBand). AudioKit does not currently provide a built-in Piano Roll view. However, as AudioKit is open source, one could certainly be contributed sometime in the future.
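In the meantime, you can pull the raw note data out of the MIDI file yourself and draw it with your own view code. Here is a minimal sketch using the AudioToolbox MusicSequence API (which AKSequencer wraps under the hood); it collects start beat, pitch, and duration for every note, which is exactly what a piano roll renders:

```swift
import AudioToolbox
import Foundation

// One rectangle per note: start time (in beats), MIDI pitch, duration.
struct NoteRect {
    let startBeat: MusicTimeStamp
    let pitch: UInt8
    let durationBeats: Float32
}

func noteRects(midiFileURL: URL) -> [NoteRect] {
    var notes: [NoteRect] = []
    var maybeSequence: MusicSequence?
    NewMusicSequence(&maybeSequence)
    guard let sequence = maybeSequence else { return notes }
    defer { DisposeMusicSequence(sequence) }
    MusicSequenceFileLoad(sequence, midiFileURL as CFURL, .midiType, MusicSequenceLoadFlags())

    var trackCount: UInt32 = 0
    MusicSequenceGetTrackCount(sequence, &trackCount)

    for index in 0..<trackCount {
        var maybeTrack: MusicTrack?
        MusicSequenceGetIndTrack(sequence, index, &maybeTrack)
        guard let track = maybeTrack else { continue }

        var maybeIterator: MusicEventIterator?
        NewMusicEventIterator(track, &maybeIterator)
        guard let iterator = maybeIterator else { continue }
        defer { DisposeMusicEventIterator(iterator) }

        var hasEvent: DarwinBoolean = false
        MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
        while hasEvent.boolValue {
            var timestamp: MusicTimeStamp = 0
            var eventType: MusicEventType = 0
            var data: UnsafeRawPointer?
            var dataSize: UInt32 = 0
            MusicEventIteratorGetEventInfo(iterator, &timestamp, &eventType, &data, &dataSize)

            // Note on/off pairs arrive as one note message with a duration.
            if eventType == kMusicEventType_MIDINoteMessage, let data = data {
                let note = data.assumingMemoryBound(to: MIDINoteMessage.self).pointee
                notes.append(NoteRect(startBeat: timestamp,
                                      pitch: note.note,
                                      durationBeats: note.duration))
            }
            MusicEventIteratorNextEvent(iterator)
            MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
        }
    }
    return notes
}
```

Mapping startBeat to x, pitch to y, and durationBeats to width gives you the GarageBand-style note rectangles.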

How can I retrieve PCM sample data from an audio clip in iOS?

I need to extract the PCM audio samples from a .wav file (or any other format) in iOS. I would also like to get the same data from a live recording using the microphone.
Can this be done using AVFoundation, or do I need to use the lower-level CoreAudio APIs? An example in Swift would be much appreciated. I'm just looking for a basic array of Floats corresponding to individual audio samples to use for signal processing.
AVFoundation includes a class called AVAssetReader that can be used to obtain the audio data from a sound file. https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVAssetReader_Class/index.html#//apple_ref/occ/instm/AVAssetReader/addOutput:
However, the most straightforward way is probably to use Extended Audio File Services and the ExtAudioFileRead function: https://developer.apple.com/library/prerelease/ios/documentation/MusicAudio/Reference/ExtendedAudioFileServicesReference/index.html#//apple_ref/c/func/ExtAudioFileRead
Extended Audio File Services is a C API, so you'll have to deal with calling it from Swift.
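For the common case of a file on disk, here is a minimal Swift sketch using AVAudioFile, AVFoundation's higher-level Swift-friendly route to the same functionality; it decodes the file (WAV, CAF, M4A, ...) into a plain [Float] of samples:

```swift
import AVFoundation

// A minimal sketch: decode an audio file into a [Float] of samples
// (first channel only). AVAudioFile's "processing format" is
// deinterleaved Float32, so no manual format conversion is needed.
func samples(from url: URL) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    guard file.processingFormat.commonFormat == .pcmFormatFloat32,
          let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        return []
    }
    try file.read(into: buffer)
    guard let channelData = buffer.floatChannelData else { return [] }
    // channelData holds one pointer per channel; take channel 0.
    return Array(UnsafeBufferPointer(start: channelData[0],
                                     count: Int(buffer.frameLength)))
}
```

For live microphone input, installing a tap on AVAudioEngine's inputNode hands you the same AVAudioPCMBuffer type, so the floatChannelData extraction is identical.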

AUGraph and streaming audio (HTTP, etc.)

I have a player based on AUGraph (I need an equalizer). The player can play local audio files. Now I want to add support for streaming audio. What can I use for this? Is there a standard class like AVAssetReader (AVAssetReader can't play streams), or does anyone know of an open-source library for it? Thanks.
As streaming is now supported natively in AVPlayer, there aren't many libraries which are actively maintained. One which has been around for some time (I've used it myself in the past) is Matt Gallagher's AudioStreamer, https://github.com/mattgallagher/AudioStreamer; another is Thong Nguyen's StreamingKit, https://github.com/tumtumtum/StreamingKit.
An alternative approach you might want to consider is an implementation similar to the one demonstrated in Apple's AudioTapProcessor sample: essentially, inserting an MTAudioProcessingTap into AVPlayer's pipeline to introduce a kAudioUnitSubType_BandPassFilter Audio Unit. A similar approach could be used to introduce a kAudioUnitSubType_NBandEQ Audio Unit, which would give you your equalizer.
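Here is a rough Swift sketch of the tap setup. The stream URL is a placeholder, and the process callback just passes audio through at the point where your EQ would run; note that taps work reliably with file-based (progressively downloaded) assets, while behavior with live HTTP streams is much less dependable:

```swift
import AVFoundation
import MediaToolbox

// Tap callbacks: only `process` is required; it pulls the source audio
// and is where DSP would be applied in place.
var callbacks = MTAudioProcessingTapCallbacks(
    version: kMTAudioProcessingTapCallbacksVersion_0,
    clientInfo: nil,
    init: nil, finalize: nil, prepare: nil, unprepare: nil,
    process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
        // Pull the source audio into the buffer list...
        MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                           flagsOut, nil, numberFramesOut)
        // ...then run your DSP (band-pass, N-band EQ, ...) in place here.
    })

var tap: Unmanaged<MTAudioProcessingTap>?
MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                           kMTAudioProcessingTapCreationFlag_PostEffects, &tap)

// Placeholder URL: any progressively downloadable audio file.
let item = AVPlayerItem(url: URL(string: "https://example.com/track.m4a")!)
// In real code, load the asset's tracks asynchronously before this.
if let audioTrack = item.asset.tracks(withMediaType: .audio).first,
   let tap = tap {
    let params = AVMutableAudioMixInputParameters(track: audioTrack)
    params.audioTapProcessor = tap.takeRetainedValue()
    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    item.audioMix = mix
}

let player = AVPlayer(playerItem: item)
player.play()
```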

Audio processing using AVFoundation Framework

I have a sound and I need to change its pitch and tempo. How can I achieve this using OpenAL or Core Audio?
I don't want to use any third-party library for this purpose, so can anyone help me get started...
The kAudioUnitSubType_NewTimePitch iOS Audio Unit can independently change the pitch and tempo of Core Audio buffer streams, though its quality does not match the best commercial solutions. You will need to know how to configure Audio Units and set up an AUGraph.
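If you can use AVAudioEngine rather than a raw AUGraph, AVAudioUnitTimePitch exposes the same kind of independent pitch/rate control with no graph plumbing. A minimal sketch, assuming a hypothetical "loop.wav" in the app bundle:

```swift
import AVFoundation

func playPitchAndTempoShifted() throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()
    timePitch.pitch = 300   // cents: +3 semitones, tempo unchanged
    timePitch.rate = 0.8    // 80% tempo, pitch unchanged

    engine.attach(player)
    engine.attach(timePitch)

    // "loop.wav" is a hypothetical file bundled with the app.
    let url = Bundle.main.url(forResource: "loop", withExtension: "wav")!
    let file = try AVAudioFile(forReading: url)

    // player -> time/pitch unit -> mixer
    engine.connect(player, to: timePitch, format: file.processingFormat)
    engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
}
```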

Which Audio API to use for creating Audio Effects?

I want to record audio and apply custom-built sound effect filters, then play it back.
Are Audio Units and Audio Queue Services the API I'm looking for? Or are there other APIs which fit this purpose better?
Also, I've been told Audio Units can't be customized on iOS so there are just a few pre-made effects available. Is this true?
Audio Units is the most useful API for building effects processing under iOS. iOS 5 added several new types of filter and effect units. You can add your own custom DSP effects inside certain audio unit buffer callbacks.
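As a concrete (if simplified) illustration of the buffer-callback idea using AVAudioEngine: a tap on the input node hands you raw buffers that you can run your own DSP over before playing them back. The "effect" below is just a hard clipper; real filters would go in the same place. (Use headphones, or the mic will pick up the playback.)

```swift
import AVFoundation

// Keep the engine and player alive for as long as audio should run.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

// A minimal sketch: record from the microphone, apply a custom
// sample-by-sample effect, and play the processed buffers back.
// Assumes microphone permission has already been granted.
func startClippingEffect() throws {
    engine.attach(player)

    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(player, to: engine.mainMixerNode, format: format)

    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        guard let channels = buffer.floatChannelData else { return }
        // Custom DSP: hard-clip every sample to +/-0.3 (crude distortion).
        for ch in 0..<Int(buffer.format.channelCount) {
            for i in 0..<Int(buffer.frameLength) {
                channels[ch][i] = max(-0.3, min(0.3, channels[ch][i]))
            }
        }
        player.scheduleBuffer(buffer)   // queue the processed audio
    }

    try engine.start()
    player.play()
}
```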
