How to do a program change in AudioKit?

I am using AudioKit's MIDISampler to play some notes. I use a soundfont with a lot of instruments.
After studying the documentation and the source-code, I did not find a way to do a program change in order to select a different instrument from the soundfont.
Any ideas?
var instrument = MIDISampler(name: "Instrument 1")
let soundFontUrl = Bundle.main.url(forResource: "soundfont", withExtension: "sf2")!
try! instrument.loadInstrument(url: soundFontUrl)
...
// TODO: Do program change.
instrument.play(noteNumber: 60, velocity: 80, channel: 0)
EDIT: I thought something like this might work. But it does not.
let channel: UInt8 = 0
let program: UInt8 = 33
self.instrument.samplerUnit.sendProgramChange(program, onChannel: channel)

MIDISampler extends AppleSampler, which has some SoundFont-specific methods. Try the loadSoundFont method: https://github.com/AudioKit/AudioKit/blob/110d2dfa0b798f76349922fdd1d23d660a16dc37/Sources/AudioKit/Nodes/Playback/Apple%20Sampler/AppleSampler%2BSoundFonts.swift
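For example, a minimal sketch of switching instruments by reloading the soundfont with a different preset, assuming the loadSoundFont(_:preset:bank:) signature from the linked file (the file name and preset/bank numbers are placeholders):
import AudioKit

let instrument = MIDISampler(name: "Instrument 1")

// Load preset 33 from bank 0 of soundfont.sf2 in the main bundle;
// calling this again with another preset selects a different instrument.
try instrument.loadSoundFont("soundfont", preset: 33, bank: 0)

instrument.play(noteNumber: 60, velocity: 80, channel: 0)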

Related

How to sync accurately enough two music sequences (Audiokit.AKAppleSequencer)?

I have 2 sequencers:
let sequencer1 = AKAppleSequencer(filename: "filename1")
let sequencer2 = AKAppleSequencer(filename: "filename2")
Both have the same bpm value.
When sequencer1 plays its single MIDI track (played only once), I need sequencer2 to begin playing exactly when the first sequencer finishes. How can I achieve this?
Note that sequencer2 is looped.
Currently I have this approach but it is not accurate enough:
let callbackInstrument = AKMIDICallbackInstrument(midiInputName: "callbackInstrument", callback: nil)
let callbackTrack = sequencer1.newTrack()!
callbackTrack.setMIDIOutput(callbackInstrument.midiIn)
let beatsCount = sequencer1.length.beats
callbackTrack.add(noteNumber: MIDINoteNumber(beatsCount),
                  velocity: 1,
                  position: AKDuration(beats: beatsCount),
                  duration: AKDuration(beats: 0.1))
callbackInstrument.callback = { status, _, _ in
    guard AKMIDIStatusType.from(byte: status) == .noteOn else { return }
    DispatchQueue.main.async { self.sequencer2.play() } // not accurate
}
let sampler = AKMIDISampler(midiOutputName: nil)
sequencer1.tracks[0].setMIDIOutput(sampler.midiIn)
Appreciate any thoughts.
Apple's MusicSequence, upon which AKAppleSequencer is built, always flubs the timing for the first 100ms or so after it starts. It is a known issue in closed source code and won't ever be fixed. Here are two possible ways around it.
Use the new AKSequencer. It might be accurate enough to make this work (but no guarantees). Here is an example of using AKSequencer with AKCallbackInstrument: https://stackoverflow.com/a/61545391/2717159
Use a single AKAppleSequencer, but place your 'sequencer2' content after the 'sequencer1' content. You won't be able to loop it automatically, but you can repeatedly re-write it from your callback function (or pre-write it 300 times or something like that). In my experience, there is no problem writing MIDI to AKAppleSequencer while it is playing. The sample app https://github.com/AudioKit/MIDIFileEditAndSync has examples of time shifting MIDI note data, which could be used to accomplish this.
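A rough sketch of that second approach, reusing the sampler from the question and assuming the looped content is available as an array of note data (contentB and its fields are hypothetical placeholders):
let sequencer = AKAppleSequencer(filename: "filename1")
let loopTrack = sequencer.newTrack()!
loopTrack.setMIDIOutput(sampler.midiIn)

// Write the looped content so it starts exactly where the existing content ends.
let offset = sequencer.length.beats
for note in contentB { // hypothetical [(noteNumber: MIDINoteNumber, position: Double, duration: Double)]
    loopTrack.add(noteNumber: note.noteNumber,
                  velocity: 100,
                  position: AKDuration(beats: offset + note.position),
                  duration: AKDuration(beats: note.duration))
}
sequencer.play()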

iOS - Play multiple notes loaded from a soundfont with a specific duration and the possibility to stop them individually

I'm currently working on a musician app. In my app, notes should be played with a specific duration. I won't go into detail about when the notes are played. Basically, there is a UI view (a vertical line) which is moving, and when it hits one of my other UI views (rectangles), a note should be played. Important here: the note should be played until the line is no longer hitting the rectangle.
Playing the note is no problem, but I can't find any way to pass a duration. It should also be possible to play the same note multiple times with a delay.
So I tried to make this work with AudioKit, since it seems like the greatest solution for audio, but it has so much stuff. I took a look at their examples and found this:
let bundlePath = Bundle.main.bundlePath
let soundPath = "\(bundlePath)/sounds"
let akSampler = AKAppleSampler()
let mixer = AKMixer(akSampler)
try! akSampler.loadSoundFont(soundPath, preset: 0, bank: 0)
mixer.start()
AudioKit.output = mixer
do {
    _ = try AudioKit.engine.start()
} catch {
    print("AudioKit wouldn't start!")
}
do {
    try akSampler.play(noteNumber: myNote.rawValue, velocity: 100, channel: 1)
} catch let e {
    print(e)
}
Unfortunately I can't pass any duration, and when I call akSampler.stop(noteNumber: myNote.rawValue) it also stops the other notes with the same pitch.
I tried to find a solution with AVFoundation like so:
engine = AVAudioEngine()
sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)
guard let bankURL = Bundle.main.url(forResource: "sounds", withExtension: "SF2") else {
    print("could not load sound font")
    return
}
// ... init engine
sampler.startNote(60, withVelocity: 64, onChannel: 0)
But same result. Also the same case that I can't pass any duration.
I also dug into the MIDI sequencers, but it seems they generate a sequence which I can play, and that does not fit my problem.
Does someone has a solution here?
The laziest solution would be to just schedule a stop with asyncAfter when you trigger the note, e.g.,
func makeNote(note: MIDINoteNumber, dur: Double) {
    try? sampler.play(noteNumber: note, velocity: 100, channel: 0)
    DispatchQueue.main.asyncAfter(deadline: .now() + dur) {
        try? self.sampler.stop(noteNumber: note)
    }
}
A better solution would probably use either AKSequencer or AKAppleSequencer. Both allow you to create sequences on the fly by adding individual notes with a specified duration (in musical time, i.e., number of beats). AKSequencer is considerably more accurate, but AKAppleSequencer has more readily available code examples on the web. A little confusingly, the current AKAppleSequencer used to also be called AKSequencer, but their interfaces are sufficiently different that a quick look at the docs for the two classes will tell you which you're looking at.
Your question is asking about how to schedule MIDI events which is precisely what these classes are designed to do. You haven't really given a clear reason why generating a sequence doesn't fit your problem.
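For example, a minimal sketch with AKAppleSequencer (AudioKit 4 naming; the tempo and note values are placeholders, and sampler is the one from the snippet above):
let sequencer = AKAppleSequencer()
let track = sequencer.newTrack()!
track.setMIDIOutput(sampler.midiIn)

// Each note carries its own duration in beats, so no timers are needed,
// and the same note number can be scheduled again at a later position.
track.add(noteNumber: 60,
          velocity: 100,
          position: AKDuration(beats: 0),
          duration: AKDuration(beats: 1))
track.add(noteNumber: 60,
          velocity: 100,
          position: AKDuration(beats: 2),
          duration: AKDuration(beats: 0.5))

sequencer.setTempo(120)
sequencer.play()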

AudioKit File Normalization

I'm trying to normalize an audio file after recording to make it louder (or vice versa), but I'm getting the error WARNING AKAudioFile: cannot normalize a silent file.
I checked the recorded audioFile.maxLevel and it was 1.17549e-38, the smallest positive Float.
I'm using the official Recorder example, and to normalize after recording I added this code:
let norm = try player.audioFile.normalized(newMaxLevel: -4.0)
What am I doing wrong? Why is maxLevel invalid? The recording is loud enough.
Rather than use the internal audio file of the player, make a new instance like so:
if let file = try? AKAudioFile(forReading: url) {
    if let normalizedFile = try? file.normalized(newMaxLevel: -4) {
        Swift.print("Normalized file success: \(normalizedFile.maxLevel)")
    }
}
I can add a normalize func to the AKAudioPlayer so that it's available for playback. Essentially, the player just uses the AKAudioFile for initialization, and all subsequent operations happen in a buffer.
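If you then want to hear the normalized result, one option is to swap the new file into the existing player; this assumes AKAudioPlayer's replace(file:) method, so treat it as a sketch:
// Swap the normalized file into the existing player and play it back.
try player.replace(file: normalizedFile)
player.play()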

Saved file using AudioKit AKOfflineRenderNode not right

I am using AudioKit to mix WAV files together with MIDI files.
I also need to save the result in a separate file.
To mix the WAVs and MIDIs I am using an AKMIDISampler with an AKSequencer like this:
func add(track: MixerTrack) -> Bool {
    do {
        let trackSampler = AKMIDISampler()
        try trackSampler.loadWav(track.instrument.fileName)
        trackSampler.connect(to: mixer)

        let sequencer = AKSequencer(filename: track.midi.fileName)
        sequencer.setTempo(Double(tempo))
        sequencer.setRate(rate)
        sequencer.setGlobalMIDIOutput(trackSampler.midiIn)
        sequencer.enableLooping()

        sequencers.append(sequencer)
        tracks.append(track)
        return true
    } catch {
        return false
    }
}
I am using the SongProcessor example from AudioKit's examples for ideas on how to use AKOfflineRenderNode.
The thing is, the example works with AKAudioPlayer instances, not with sequencers as I am using. I believe I cannot use players, because I need to mix the WAV and MIDI files, and I was only able to achieve that using sequencers.
My first question is: Is it possible to create files from sequencers the same way it is done in SongProcessor with players?
I was able to save an m4a file, but the result is weird. First, if I don't set the rate manually to a number like 40, all the notes play veeery slowly. And when I set it to a value like that, I can hear the sequence playing, but at wrong rates. At some moments the beats play correctly, but they often start playing too slow or too fast at different times.
Is there something I am doing wrong? Is this a bug with AKOfflineRenderNode, or is it just not meant to be used like this?
Here is the code I use to save the mix to disk:
func saveMixToDisk() -> URL? {
    do {
        let fileManager = FileManager.default
        let name = UUID().uuidString.appending(".m4a")
        let documentDirectory = try fileManager.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
        let fileURL = documentDirectory.appendingPathComponent(name)

        offlineRender.internalRenderEnabled = false
        let duration = sequencers.first!.length.seconds

        for sequencer in sequencers {
            sequencer.stop()
            sequencer.setTime(AKDuration(seconds: 0).musicTimeStamp)
            sequencer.rewind()
        }
        for sequencer in sequencers {
            // I would like to find a way to avoid having to set this, since this value is
            // hardcoded and I don't know how to find the correct one. (When I only play
            // through the sequencer inside the app the rate is perfect, but it gets
            // messed up when rendering to URL.)
            sequencer.setRate(40)
            sequencer.play()
        }

        try offlineRender.renderToURL(fileURL, seconds: duration * 10)

        for sequencer in sequencers {
            sequencer.stop()
            sequencer.setTime(AKDuration(seconds: 0).musicTimeStamp)
            sequencer.rewind()
        }
        offlineRender.internalRenderEnabled = true
        return fileURL
    } catch let error {
        print(error)
        return nil
    }
}
Any help is very much appreciated. I can't seem to get this to work, and sadly I don't know of any other options on iOS to achieve what I need.
Instead of using AKOfflineRender, try the new AudioKit.renderToFile in AudioKit 4.0.4: https://github.com/AudioKit/AudioKit/commit/09aedf7c119a399ab00026ddfb91ae6778570176
I think you need to use this method in iOS11
[AudioKit renderToFile:file duration:self->_audioDurationSeconds error:&error prerender:^{
    [self.voicePlayer start];
}];
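In Swift, the equivalent call would look roughly like this (a sketch assuming AudioKit 4.0.4+; the output URL and settings are placeholders, and duration and sequencers come from the code above):
let url = FileManager.default.temporaryDirectory.appendingPathComponent("mix.m4a")
let outputFile = try AVAudioFile(forWriting: url, settings: AudioKit.format.settings)

// Everything that should be audible in the file must start inside prerender.
try AudioKit.renderToFile(outputFile, duration: duration) {
    for sequencer in self.sequencers {
        sequencer.play()
    }
}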

Build a simple Equalizer

I would like to make a 5-band audio equalizer (60Hz, 230Hz, 910Hz, 4kHz, 14kHz) using AVAudioEngine. I would like to have the user input gain per band through a vertical slider and accordingly adjust the audio that is playing. I tried using AVAudioUnitEQ to do this, but I hear no difference when playing the audio. I tried to hardcode in values to specify a gain at each frequency, but it still does not work. Here is the code I have:
var audioEngine: AVAudioEngine = AVAudioEngine()
var equalizer: AVAudioUnitEQ!
var audioPlayerNode: AVAudioPlayerNode = AVAudioPlayerNode()
var audioFile: AVAudioFile!

// in viewDidLoad():
equalizer = AVAudioUnitEQ(numberOfBands: 5)
audioEngine.attach(audioPlayerNode)
audioEngine.attach(equalizer)
let bands = equalizer.bands
let freqs = [60, 230, 910, 4000, 14000]
audioEngine.connect(audioPlayerNode, to: equalizer, format: nil)
audioEngine.connect(equalizer, to: audioEngine.outputNode, format: nil)
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}
bands[0].gain = -10.0
bands[0].filterType = .lowShelf
bands[1].gain = -10.0
bands[1].filterType = .lowShelf
bands[2].gain = -10.0
bands[2].filterType = .lowShelf
bands[3].gain = 10.0
bands[3].filterType = .highShelf
bands[4].gain = 10.0
bands[4].filterType = .highShelf
do {
    if let filepath = Bundle.main.path(forResource: "song", ofType: "mp3") {
        let filepathURL = NSURL.fileURL(withPath: filepath)
        audioFile = try AVAudioFile(forReading: filepathURL)
        audioEngine.prepare()
        try audioEngine.start()
        audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        audioPlayerNode.play()
    }
} catch _ {}
Since the low frequencies have a gain of -10 and the high frequencies have a gain of 10, there should be a very noticeable difference when playing any media. However, when the media starts playing, it sounds the same as if played without any equalizer attached.
I'm not sure why this is happening, but I tried several different things to debug. I thought that it might be the order of the functions so I tried switching it so that audioEngine.connect is called after adjusting all of the bands, but that did not make a difference either.
I tried this same code with using an AVAudioUnitTimePitch, and it worked perfectly, so I am dumbfounded as to why it does not work with AVAudioUnitEQ.
I do not want to use any third-party libraries or cocoa pods for this project, I would like to do it using AVFoundation alone.
Any help would be greatly appreciated!
Thanks in advance.
Looking through the AVAudioUnitEQFilterParameters documentation, I noticed that I had messed with every parameter except bypass, and it seems that changing this flag fixed everything!
So I believe the main issue here is that each AVAudioUnitEQ band is bypassed by default, and you must explicitly set bypass = false rather than rely on the system-provided defaults.
So, I changed
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}
to
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
    bands[i].bypass = false
    bands[i].filterType = .parametric
}
and everything started working. Furthermore, to make an effective equalizer that allows the user to modify individual frequencies, the filterType for each band should be set to .parametric.
I am still unsure what I should set the bandwidth to, but I can probably check online for that, or just tweak it until the sound matches a different equalizer application.
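For reference, a sketch of a per-band gain setter (the values are illustrative; for .parametric bands, bandwidth is specified in octaves):
// Hypothetical helper wired to the vertical slider for band i.
func setGain(_ gain: Float, forBand i: Int) {
    equalizer.bands[i].filterType = .parametric
    equalizer.bands[i].bypass = false
    equalizer.bands[i].bandwidth = 1.0 // roughly one octave wide
    equalizer.bands[i].gain = gain     // in dB
}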
