Playing multi-sampled instruments using AudioKit, controlling ADSR envelope - iOS

I'm trying to play an instrument made of several .wav samples using AudioKit.
Here's what I've tried so far:
Using AKSampler (with the underlying AVAudioUnitSampler) – it works fine, but I can't figure out how to control the ADSR envelope here – calling stop stops the note immediately.
Another way is to use an AKSamplePlayer for each sample and play it, manually setting the rate so it plays the right note. I can (possibly?) then connect an AKAmplitudeEnvelope to each sample player. But if I want to play 5 notes of the same sample simultaneously, I would need 5 instances of AKSamplePlayer, which seems like a waste of resources.
I also tried to find a way to just push raw audio samples to the AudioKit output buffer, doing the mixing and sample interpolation myself (in C, probably?), but I didn't find how to do it :(
What is the right way to make a multi-sampled instrument using AudioKit? I feel like it must be a fairly simple task.

Thanks to mahal tertin, it's pretty easy to use AKAUPresetBuilder!
You can create an .aupreset file somewhere in a tmp directory and then load this instrument with AKSampler.
The only thing worth noting is that by default AKAUPresetBuilder will generate samples with the trigger mode set to trigger, which ignores note-off events, so you should set it to hold explicitly.
For example:
let sampleC4 = AKAUPresetBuilder.generateDictionary(
    rootNote: 60,
    filename: pathToC4WavSample,
    startNote: 48,
    endNote: 65)
sampleC4["triggerMode"] = "hold"

let sampleC5 = AKAUPresetBuilder.generateDictionary(
    rootNote: 72,
    filename: pathToC5WavSample,
    startNote: 66,
    endNote: 83)
sampleC5["triggerMode"] = "hold"

AKAUPresetBuilder.createAUPreset(
    dict: [sampleC4, sampleC5],
    path: pathToAUPresetFilename,
    instrumentName: "My Instrument",
    attack: 0,
    release: 0.2)
and then create a sampler and start AudioKit:
sampler = AKSampler()
try sampler.loadInstrument(atPath: pathToAUPresetFilename)
AudioKit.output = sampler
AudioKit.start()
and then use this to start playing note:
sampler.play(noteNumber: MIDINoteNumber(63), velocity: MIDIVelocity(120), channel: 0)
and this to stop, respecting release parameter:
sampler.stop(noteNumber: MIDINoteNumber(63), channel: 0)

Probably the best way would be to embed your wav files into an EXS or SoundFont format, making use of the tools in that realm to handle the ADSR, for instance. Otherwise you'll kind of have to have an instrument for each sample.
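For what it's worth, here's a minimal sketch of that route. It assumes your AudioKit version's Apple-backed AKSampler exposes loadEXS24 and that "MyPiano.exs" (a placeholder name) is an EXS24 instrument bundled with the app:
import AudioKit

let sampler = AKSampler()
// The EXS24 instrument itself defines the key zones, root notes and envelope,
// so one sampler instance covers all of the .wav samples.
try sampler.loadEXS24("MyPiano")

AudioKit.output = sampler
AudioKit.start()

sampler.play(noteNumber: 60, velocity: 100, channel: 0)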

Related

iOS Audio Units - Connecting with Graphs?

I've jumped off the deep end, and have decided to figure out low-latency audio on iOS using Audio Units. I've read as much documentation (from Apple and forums galore) as I can find, and the overall concepts make sense, but I'm still scratching my head on some concepts that I need help with:
I saw somewhere that AU Graphs are deprecated and that I should instead connect Audio Units directly. I'm cool with that... but how? Do I just need to use the Connection property of an Audio Unit to connect it to a source AU, and off I go? Initialize and Start the Units, and watch the magic happen? (cause it doesn't for me...)
What's the best Audio Unit setup to use if I simply want to grab audio from my mic, do some processing to the audio data, and then store that audio data without sending it out to the RemoteIO speaker, bus 0 output? I tried hooking up a GenericOutput AudioUnit to catch the data in a callback without any luck...
That's it. I can provide code when requested, but it's way too late, and this has wiped me out. If there's no easy answer, that's cool. I'll send any code snippets at will. Suffice it to say, I can easily get a simple RemoteIO, mic in, speaker out setup working great. Latency seems non-existent (at least to my ears). I just want to do something with the mic data and store it in memory without it going out to the speaker. Eventually hooking in the EQ and mixer would be hip, but one step at a time.
FWIW, I'm coding in Xamarin Forms/C# land, but code examples in Objective-C, Swift or whatever are fine. I'm stuck on the concepts, not necessarily the exact code.
THANKS!
Working with audio units without a graph is pretty simple and very flexible. To connect two units, you call AudioUnitSetProperty this way:
AudioUnitConnection connection;
connection.sourceAudioUnit = sourceUnit;
connection.sourceOutputNumber = sourceOutputIndex;
connection.destInputNumber = destinationInputIndex;
AudioUnitSetProperty(
    destinationUnit,
    kAudioUnitProperty_MakeConnection,
    kAudioUnitScope_Input,
    destinationInputIndex,
    &connection,
    sizeof(connection)
);
Note that units connected this way are required to have their stream format set uniformly, and that this must be done before they are initialized.
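For illustration, here's a minimal sketch of that step in Swift (the format values are assumptions, not prescribed by the answer above): both units get the same AudioStreamBasicDescription before AudioUnitInitialize is called.
import AudioToolbox

// Assumes sourceUnit and destinationUnit already exist (e.g. via AudioComponentInstanceNew)
// but have not been initialized yet. Give both sides of the connection the same format:
var format = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved,
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 2,
    mBitsPerChannel: 32,
    mReserved: 0)

let size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
AudioUnitSetProperty(sourceUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &format, size)
AudioUnitSetProperty(destinationUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &format, size)

// Only after the formats are set:
AudioUnitInitialize(sourceUnit)
AudioUnitInitialize(destinationUnit)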
Your question mentions Audio Units, and Graphs. As said in the comments, the graph concept has been replaced with the idea of attaching "nodes" to an AVAudioEngine. These nodes then "connect" to other nodes. Connecting nodes creates signal paths and starting the engine makes it all happen. This may be obvious, but I am trying to respond generally here.
You can do this all in Swift or in Objective-C.
Two high-level perspectives to consider with iOS audio are the idea of a "host" and that of a "plugin". The host is an app and it hosts plugins. The plugin is usually created as an "app extension", and you can look up audio unit extensions for more about that as needed. You said you have one doing what you want, so this is all explaining the code used in a host.
Attaching an AudioUnit to an AVAudioEngine
var components = [AVAudioUnitComponent]()

// A wildcard description (all zeros) matches every installed audio unit component.
let wildcard = AudioComponentDescription(
    componentType: 0,
    componentSubType: 0,
    componentManufacturer: 0,
    componentFlags: 0,
    componentFlagsMask: 0
)

// AudioUnitTypes here is a custom helper for filtering by component type and manufacturer.
components = AVAudioUnitComponentManager.shared().components(matching: wildcard)
    .compactMap({ au -> AVAudioUnitComponent? in
        if AudioUnitTypes.codeInTypes(
            au.audioComponentDescription.componentType,
            AudioUnitTypes.instrumentAudioUnitTypes,
            AudioUnitTypes.fxAudioUnitTypes,
            AudioUnitTypes.midiAudioUnitTypes
        ) && !AudioUnitTypes.isApplePlugin(au.manufacturerName) {
            return au
        }
        return nil
    })

guard let component = components.first else { fatalError("bugs") }

let description = component.audioComponentDescription

AVAudioUnit.instantiate(with: description) { (audioUnit: AVAudioUnit?, error: Error?) in
    if let e = error {
        return print("\(e)")
    }
    // save and connect
    guard let audioUnit = audioUnit else {
        print("Audio Unit was Nil")
        return
    }
    let hardwareFormat = self.engine.outputNode.outputFormat(forBus: 0)
    self.engine.attach(audioUnit)
    self.engine.connect(audioUnit, to: self.engine.mainMixerNode, format: hardwareFormat)
}
Once you have your AudioUnit loaded and connected, you can tap its output with the AVAudioNodeTapBlock described below. (There is more to the plugin side itself, since it needs to be built as a separate binary that host apps other than yours can load.)
Recording an AVAudioInputNode
(You can replace the audio unit with the input node.)
In an app, you can record audio by creating an AVAudioInputNode, or by just referencing the 'inputNode' property of the AVAudioEngine, which is connected to the system's selected input device (mic, line in, etc.) by default.
Once you have the input node you want to process the audio of, next "install a tap" on the node. You can also connect your input node to a mixer node and install a tap there.
https://developer.apple.com/documentation/avfoundation/avaudionode/1387122-installtap
func installTap(onBus bus: AVAudioNodeBus,
                bufferSize: AVAudioFrameCount,
                format: AVAudioFormat?,
                block tapBlock: @escaping AVAudioNodeTapBlock)
The installed tap will basically split your audio stream into two signal paths. It will keep sending the audio to the AVAudioEngine's output device and also send the audio to a function that you define. This function (an AVAudioNodeTapBlock) is passed to AVAudioNode's 'installTap'. The AVFoundation subsystem calls the AVAudioNodeTapBlock and passes you the input data one buffer at a time, along with the time at which the data arrived.
https://developer.apple.com/documentation/avfoundation/avaudionodetapblock
typealias AVAudioNodeTapBlock = (AVAudioPCMBuffer, AVAudioTime) -> Void
Now the system is sending the audio data to a programmable context, and you can do what you want with it.
To use it elsewhere, you can create a separate AVAudioPCMBuffer and write each of the passed in buffers to it in the AVAudioNodeTapBlock.
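For example, here's a minimal sketch of that flow (the buffer size and the recordedBuffers name are just illustrative; a real app also needs an AVAudioSession configured for recording and microphone permission):
import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Keep the tapped buffers around so the audio can be processed or written out later,
// without ever routing the mic to the speaker.
var recordedBuffers = [AVAudioPCMBuffer]()

input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, time in
    // Called repeatedly on a background thread, one buffer of mic data at a time.
    recordedBuffers.append(buffer)
}

try engine.start()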

AudioKit: how to perform frequency modulation for AKPlayer

I'm trying to perform frequency modulation on a signal coming from an AKPlayer, which in turn plays an mp3 file.
I've tried to work with AKOperationEffect, but it doesn't work as expected:
let modulatedPlayer = AKOperationEffect(player) { player, _ in
    let oscillator = AKOperation.fmOscillator(baseFrequency: modulationFrequency,
                                              carrierMultiplier: player.toMono(),
                                              modulatingMultiplier: 100,
                                              modulationIndex: 0,
                                              amplitude: 1)
    return oscillator
}
Has anybody an idea how to get the mp3 modulated?
Unfortunately, the AudioKit API is not so well documented... there are tons of examples, but they all deal with synthetic sounds such as sine waves, square waves, etc.
I took the time to create a working practical example to help you @Ulrich. You can drop it in and play if you have the playground environment available, or just use it as a reference to amend your code (trust me, it works). It's self-explanatory, but you can read more about why my version works after the code. TL;DR:
The following was tested and ran without problems in the latest Xcode and Swift at the time of writing (Xcode 11.4, Swift 5.2 and AudioKit 4.9.5):
import AudioKitPlaygrounds
import AudioKit
let audiofile = try! AKAudioFile(readFileName: "demo.mp3")
let player = AKPlayer(audioFile: audiofile)
let generator = AKOperationEffect(player) { player, _ in
    let oscillator = AKOperation.fmOscillator(baseFrequency: 400,
                                              carrierMultiplier: player.toMono(),
                                              modulatingMultiplier: 100,
                                              modulationIndex: 0,
                                              amplitude: 1)
    return oscillator
}
AudioKit.output = generator
try! AudioKit.start()
player.play()
generator.start()
Find the playground ready to use in the Download page ( https://audiokit.io/downloads/ )
As you can see, apart from declaring a path to the mp3 file when initializing a new AKAudioFile and passing it to an AKPlayer instance, there are four steps that need to occur in a certain order:
1) Assign an `AKNode` to the AudioKit output
2) Start the AudioKit engine
3) Start the `player` to generate output
4) Start the `generator` to modulate your sound
The best way to understand why is to forget about code for a bit and imagine patching things together in the real world; and finally, try to imagine the audio flow.
Hope this helps you and future readers!

Piano notes with AKKeyboardView

I am new to AudioKit - I am able to use the AKKeyboardView to play notes using AKOscillatorBank, but I want the audio to sound more like a grand piano. Loading .wav files seems to make the notes choppy. I have also changed the note envelope. How can I map grand piano notes onto the AKKeyboardView keys?
You're not easily going to get a piano sound out of an oscillator. You might want to use a soundfont instead. You can load an sf2 (but not sf3, I believe) into an AKAppleSampler and trigger it using AKKeyboardDelegate as you are doing with the AKOscillatorBank. MuseScore has a list of soundfont file links, many of which use open source licenses.
First add the sf2 file to your project, then set up the AKAppleSampler:
let sampler = AKAppleSampler()
// note that if you're using a GM soundfont, 'Grand Piano' will be preset 0
try sampler.loadMelodicSoundFont("NameOfSoundFontWithoutExtension", preset: 0)
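A minimal sketch of wiring that sampler to the keyboard via AKKeyboardDelegate (AudioKit 4.x names; the class name and soundfont file name are placeholders):
import AudioKit
import AudioKitUI  // AKKeyboardView / AKKeyboardDelegate ship in the UI module in AudioKit 4.x

class PianoConductor: NSObject, AKKeyboardDelegate {
    let sampler = AKAppleSampler()

    override init() {
        super.init()
        // Placeholder: the .sf2 file bundled with the app, referenced without its extension.
        try? sampler.loadMelodicSoundFont("NameOfSoundFontWithoutExtension", preset: 0)
        AudioKit.output = sampler
        try? AudioKit.start()
    }

    // AKKeyboardView calls these as keys are pressed and released.
    func noteOn(note: MIDINoteNumber) {
        try? sampler.play(noteNumber: note, velocity: 100, channel: 0)
    }

    func noteOff(note: MIDINoteNumber) {
        try? sampler.stop(noteNumber: note, channel: 0)
    }
}
Assign an instance of this class to the keyboard view's delegate property and the keys will trigger the soundfont instead of the oscillator bank.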

Creating a MIDI file from an AKKeyboardView

Currently I am using an AKKeyboardView connected essentially to an AKRhodesPiano object, and I was wondering if there is an easy way to generate a MIDI file from this?
I see the AKKeyboardView has the noteOn and noteOff functions, which do produce the MIDINoteNumber, but I can't find anything else in the AudioKit library to take this input and generate a MIDI file, even just a simple one.
You would need to run an AKSequencer in the background (maybe with a metronome track). Make an additional track that you will record onto. Also set the length to be as long as you will need for the recording.
When you get a noteOn message from the keyboard, you can check the sequencer's currentPosition and record this into a dictionary. When you get the matching pitch's noteOff message, again check the currentPosition. Use the difference between these two times to get the duration and add a note to your recording track on the sequencer:
myRecordingTrack.add(noteNumber: noteNumber,
                     velocity: 127,
                     position: timeAtNoteOn,
                     duration: timeAtNoteOff - timeAtNoteOn,
                     channel: 0)
Then you could easily use AKSequencer's genData() to create a MIDI file (possibly either deleting the metronome track, or copying the recorded track to a new AKSequencer instance).
Check out the SequencerDemo for setting up AKSequencer and building sequences and MIDIFileEditAndSync (both in the iOS Example folder in the AudioKit repo) for an example of writing AKSequencer to a MIDI file.
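Here's a hedged sketch of that record-as-you-play idea (names like noteOnTimes, recordingTrack and midiFileURL are illustrative, not AudioKit API):
import AudioKit

let sequencer = AKSequencer()
let recordingTrack = sequencer.newTrack()!          // the track we record onto
sequencer.setLength(AKDuration(beats: 64))          // long enough for the take
var noteOnTimes = [MIDINoteNumber: AKDuration]()

func noteOn(note: MIDINoteNumber) {
    // Remember where the (running) sequencer was when the key went down.
    noteOnTimes[note] = sequencer.currentPosition
    // ...trigger the AKRhodesPiano here, as you already do...
}

func noteOff(note: MIDINoteNumber) {
    guard let start = noteOnTimes.removeValue(forKey: note) else { return }
    let now = sequencer.currentPosition
    recordingTrack.add(noteNumber: note,
                       velocity: 127,
                       position: start,
                       duration: AKDuration(beats: now.beats - start.beats),
                       channel: 0)
}

// Later, write the whole sequence out as a MIDI file:
// if let data = sequencer.genData() { try? data.write(to: midiFileURL) }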

AudioKit - Does AKSampler pitch or do I need to add multiple sample files?

I am just getting started with AudioKit. I want to keep it very simple. I want to make a few UIButtons (C,D,E,F,...) and then have them play the corresponding piano sample. However I don't understand how to correctly prepare the sample file(s).
I found this example:
let sampler = AKSampler()
sampler.loadWav("Sounds/fmpia1")
let ampedSampler = AKBooster(sampler, gain: 3.0)

var delay = AKDelay(ampedSampler)
delay.time = pulse * 1.5
delay.dryWetMix = 0.0
delay.feedback = 0.0

let cMajor = [72, 74, 76, 77, 79, 81, 83, 84]

var mix = AKMixer(delay)
var reverb = AKReverb(mix)

AudioKit.output = reverb
AudioKit.start()

for note in cMajor {
    sampler.playNote(note)
    sleep(1)
}
What I understand: loading the sampler, and that the numbers (72, 74, ...) are the MIDI note numbers.
However: how does the sampler know what to play? Does the sample "fmpia1" contain all notes? Is it just one sample, but AKSampler pitches it automatically? But then how does AKSampler know what note the sample is? Shouldn't AKSampler be informed that the sample in the file is, let's say, an F#, so it can pitch accordingly?
I am very confused about this. I hope you can understand what my problem is.
Thanks in advance for any help!
AKSampler (and AKMIDISampler) use Apple's AVAudioUnitSampler internally. It is AVAudioUnitSampler that is doing the playback and pitching your root note. If you look at the documentation for AVAudioUnitSampler's loadAudioFiles(at:) (https://developer.apple.com/documentation/avfoundation/avaudiounitsampler/1388631-loadaudiofiles), you will see that it creates a new zone for each audio file and uses the metadata in the audio file to try to map it correctly. It can also take a shortcut if the root note is in the file name (e.g. ViolinC4).
So, in direct response to your questions:
fmpia1 is a single audio file (pitch). It gets mapped internally to a root note (maybe C4 if not specified - this needs verification).
When you send in a MIDI event with a specific note number, the sampler will pitch your audio file to that note and play it back. (Here is a handy map of MIDI note numbers to notes: https://medium.com/@gmcerveny/midi-note-number-chart-for-ios-music-apps-b3c01df3cb19)
Yes, if you know the root note (pitch of the file), specifying it as I said above will result in accurate playback.
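To see that behaviour directly, here's a minimal hedged sketch using the Apple-backed sampler (called AKAppleSampler in recent AudioKit 4.x, AKSampler in older versions; the file name "piano_C4" is an assumption, and its C4 suffix or metadata is what gives the sampler its root note):
import AudioKit

let sampler = AKAppleSampler()
try sampler.loadWav("piano_C4")     // one sample; root note inferred from the name/metadata

AudioKit.output = sampler
try AudioKit.start()

try sampler.play(noteNumber: 60)    // C4: the sample as recorded
try sampler.play(noteNumber: 64)    // E4: the same sample, pitched up by the sampler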
