AudioKit 5 iOS Current Playback Time

Hello fellow AudioKit users,
I'm trying to set up AudioKit 5 with a playback time indication, and am having trouble.
AudioPlayer's duration property gives the total length of the audio file, not the current playback position.
ex:
let duration = player.duration
This always returns the file's total time.
Looking at old code from AKAudioPlayer, it seemed to have a "currentTime" property.
The migration guide (https://github.com/AudioKit/AudioKit/blob/v5-main/docs/MigrationGuide.md) mentions some potentially helpful classes from the old version, but "AKTimelineTap" is listed with no replacement and no comment from the developers... nice.
I'm also still not sure how to set the current playback time (i.e. seek)...
I've also checked out AudioKit 5's Cookbook, but that covers adding effects and nodes, not playback display and the like.
Thanks for any help with this new version of AudioKit.

AudioPlayer exposes its underlying playerNode, which is an AVAudioPlayerNode. From its lastRenderTime and playerTime(forNodeTime:) you can calculate the current time.
ex:
// Get the underlying AVAudioPlayerNode from AudioPlayer.
let playerNode = player.playerNode

// Get lastRenderTime and convert it to playerTime.
guard let lastRenderTime = playerNode.lastRenderTime,
      let playerTime = playerNode.playerTime(forNodeTime: lastRenderTime) else { return }

// Use sampleRate and sampleTime to calculate the current time in seconds.
let sampleRate = playerTime.sampleRate
let sampleTime = playerTime.sampleTime
let currentTime = Double(sampleTime) / sampleRate
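To drive a time display, you could wrap this in a computed property and poll it with a timer. A minimal sketch, assuming a player (AudioPlayer) and a timeLabel that are not in the original post; the property is named playbackTime to avoid clashing with anything your AudioKit version may already define:

import AudioKit
import AVFoundation

extension AudioPlayer {
    // Current playback position in seconds, or 0 before rendering has started.
    var playbackTime: TimeInterval {
        guard let lastRenderTime = playerNode.lastRenderTime,
              let playerTime = playerNode.playerTime(forNodeTime: lastRenderTime) else {
            return 0
        }
        return Double(playerTime.sampleTime) / playerTime.sampleRate
    }
}

// Poll a few times per second to update the UI.
let timer = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: true) { _ in
    timeLabel.text = String(format: "%.1f / %.1f", player.playbackTime, player.duration)
}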

Related

How to sync accurately enough two music sequences (Audiokit.AKAppleSequencer)?

I have 2 sequencers:
let sequencer1 = AKAppleSequencer(filename: "filename1")
let sequencer2 = AKAppleSequencer(filename: "filename2")
Both have the same BPM value.
When sequencer1 plays its one MIDI track (played only once), I need sequencer2 to begin playing exactly when the first sequencer finishes. How can I achieve this?
Note that sequencer2 is looped.
Currently I have this approach, but it is not accurate enough:
let callbackInstrument = AKMIDICallbackInstrument(midiInputName: "callbackInstrument", callback: nil)
let callbackTrack = sequencer1.newTrack()!
callbackTrack.setMIDIOutput(callbackInstrument.midiIn)

// Place a trigger note at the very end of sequencer1.
let beatsCount = sequencer1.length.beats
callbackTrack.add(noteNumber: MIDINoteNumber(beatsCount),
                  velocity: 1,
                  position: AKDuration(beats: beatsCount),
                  duration: AKDuration(beats: 0.1))

callbackInstrument.callback = { status, _, _ in
    guard AKMIDIStatusType.from(byte: status) == .noteOn else { return }
    DispatchQueue.main.async { self.sequencer2.play() } // not accurate
}

let sampler = AKMIDISampler(midiOutputName: nil)
sequencer1.tracks[0].setMIDIOutput(sampler.midiIn)
Appreciate any thoughts.
Apple's MusicSequence, upon which AKAppleSequencer is built, always flubs the timing for the first 100 ms or so after it starts. It is a known issue in closed-source code and won't ever be fixed. Here are two possible ways around it.
Use the new AKSequencer. It might be accurate enough to make this work (but no guarantees). Here is an example of using AKSequencer with AKCallbackInstrument: https://stackoverflow.com/a/61545391/2717159
Use a single AKAppleSequencer, but place your 'sequencer2' content after the 'sequencer1' content. You won't be able to loop it automatically, but you can repeatedly re-write it from your callback function (or pre-write it 300 times or something like that). In my experience, there is no problem writing MIDI to AKAppleSequencer while it is playing. The sample app https://github.com/AudioKit/MIDIFileEditAndSync has examples of time shifting MIDI note data, which could be used to accomplish this.
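As a rough sketch of the second approach, assuming sequencer2's content can be re-created in code (the loop length and note data below are hypothetical stand-ins for filename2's contents):

// One AKAppleSequencer: load sequencer1's file, then append the looped
// content shifted to start where that file ends.
let sequencer = AKAppleSequencer(filename: "filename1")
let sampler = AKMIDISampler(midiOutputName: nil)
sequencer.tracks[0].setMIDIOutput(sampler.midiIn)

let loopTrack = sequencer.newTrack()!
loopTrack.setMIDIOutput(sampler.midiIn)

let offset = sequencer.length.beats   // end of the first section
let loopLengthBeats = 4.0             // hypothetical length of one loop pass

// Pre-write the looped section many times instead of looping automatically.
for repetition in 0..<300 {
    let start = offset + Double(repetition) * loopLengthBeats
    loopTrack.add(noteNumber: 60,     // hypothetical note data
                  velocity: 100,
                  position: AKDuration(beats: start),
                  duration: AKDuration(beats: 1))
}
sequencer.play()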

iOS - Play multiple notes loaded from soundfont with a specific duration and the possibility to stop them individually

I'm currently working on a musician app. In my app, notes should be played with a specific duration. I won't go into detail about when the notes are played. Basically there is a UI view (a vertical line) which is moving, and when it hits one of my other UI views (rectangles), a note should be played. Important here: the note should keep playing until the line is no longer touching the rectangle.
Playing the note is no problem, but I can't find any way to set a duration. It should also be possible to play the same note multiple times with a delay.
So I tried to make this work with AudioKit, since it seems like the best solution for audio, but it has so much stuff. I took a look at their examples and found this:
let bundlePath = Bundle.main.bundlePath
let soundPath = "\(bundlePath)/sounds"
let akSampler = AKAppleSampler()
let mixer = AKMixer(akSampler)

try! akSampler.loadSoundFont(soundPath, preset: 0, bank: 0)
mixer.start()
AudioKit.output = mixer

do {
    _ = try AudioKit.engine.start()
} catch {
    print("AudioKit wouldn't start!")
}

do {
    try akSampler.play(noteNumber: myNote.rawValue, velocity: 100, channel: 1)
} catch let e {
    print(e)
}
Unfortunately I can't pass any duration, and when I call akSampler.stop(noteNumber: myNote.rawValue) it also stops the other instances of the same note.
I tried to find a solution with AVFoundation like so:
engine = AVAudioEngine()
sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

guard let bankURL = Bundle.main.url(forResource: "sounds", withExtension: "SF2") else {
    print("could not load sound font")
    return
}
// ... init engine
sampler.startNote(60, withVelocity: 64, onChannel: 0)
But same result; I still can't pass any duration.
I also dug into the MIDI sequencers, but it seems they generate a sequence which is then played back as a whole, and that doesn't fit my problem.
Does anyone have a solution here?
The laziest solution would be to just schedule a stop with asyncAfter when you trigger the note, e.g.,
func makeNote(note: MIDINoteNumber, dur: Double) {
    try? sampler.play(noteNumber: note, velocity: 100, channel: 0)
    DispatchQueue.main.asyncAfter(deadline: .now() + dur) {
        try? self.sampler.stop(noteNumber: note)
    }
}
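For example, makeNote(note: 60, dur: 0.5) plays middle C for half a second before the scheduled stop fires.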
A better solution would probably use either AKSequencer or AKAppleSequencer. Both allow you to create sequences on the fly by adding individual notes with a specified duration (in musical time, i.e., number of beats). AKSequencer is considerably more accurate, but AKAppleSequencer has more readily available code examples on the web. A little confusingly, the current AKAppleSequencer used to also be called AKSequencer, but their interfaces are sufficiently different that a quick look at the docs for the two classes will tell you which you're looking at.
Your question is asking about how to schedule MIDI events which is precisely what these classes are designed to do. You haven't really given a clear reason why generating a sequence doesn't fit your problem.
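As a rough sketch of that approach with AKAppleSequencer (AudioKit 4 API names; the sampler still needs its soundfont loaded as in the question's code, and the tempo of 60 BPM is chosen so that one beat equals one second):

let sequencer = AKAppleSequencer()
let sampler = AKMIDISampler(midiOutputName: nil)
let track = sequencer.newTrack()!
track.setMIDIOutput(sampler.midiIn)
sequencer.setTempo(60) // 1 beat == 1 second

// Schedule the same note twice with explicit durations; no manual stop
// needed, and the two events don't interfere with each other.
track.add(noteNumber: 60, velocity: 100,
          position: AKDuration(beats: 0), duration: AKDuration(beats: 0.5))
track.add(noteNumber: 60, velocity: 100,
          position: AKDuration(beats: 2), duration: AKDuration(beats: 0.5))
sequencer.play()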

Modify Playback Tempo of an AVAudioSequencer / AVMusicTrack?

How does one programmatically change the tempo (in BPM) of an AVAudioSequencer that's been loaded from an existing MIDI file (i.e. using the following)?
try sequencer.load(from: fileURL, options: AVMusicSequenceLoadOptions.smfChannelsToTracks)
I know that the sequencer's tempoTrack property returns the AVMusicTrack controlling the tempo, but how does one then edit it to add/change tempo events? The Apple documentation simply says...
"The tempo track can be edited and iterated upon as any other track. Non-tempo events in a tempo track are ignored."
...but gives no further indication on how such editing would be done.
I know there's the rate property, but that just scales around a default value of 1.0; mapping it to BPM values would need some awkward adjustment, and wouldn't even be possible unless the file's original BPM is known at runtime.
Alternatively, is there a way to create a new AVMusicTrack from scratch, with a custom tempo, and make that the sequencer's tempoTrack?
The only way I managed this was to dip into the Audio Toolbox API momentarily.
This approach assumes that you instantiated your AVAudioSequencer with an AVAudioEngine via:
init(audioEngine engine: AVAudioEngine)
(1) After loading your MIDI file with AVAudioSequencer, get the underlying MusicSequence from this property on AVAudioEngine:
var musicSequence: MusicSequence? { get set }
(2) Get a pointer to the sequence's tempo track.
guard let musicSequence = engine.musicSequence else { return }
var tempoTrack: MusicTrack!
MusicSequenceGetTempoTrack(musicSequence, &tempoTrack)
(3) Remove all existing tempo information from the tempo track.
var iterator: MusicEventIterator!
NewMusicEventIterator(tempoTrack, &iterator)

var hasEvent = DarwinBoolean(false)
MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)

while hasEvent.boolValue {
    var timeStamp = MusicTimeStamp()
    var eventType = MusicEventType()
    var data: UnsafeRawPointer? = nil
    var dataSize = UInt32()
    MusicEventIteratorGetEventInfo(iterator, &timeStamp, &eventType, &data, &dataSize)

    guard eventType == kMusicEventType_ExtendedTempo else {
        MusicEventIteratorNextEvent(iterator)
        MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
        continue
    }

    // Remove the tempo event. Deleting advances the iterator to the next event.
    MusicEventIteratorDeleteEvent(iterator)
    MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
}
DisposeMusicEventIterator(iterator)
(4) Set the new tempo.
let bpm: Float64 = 92
let timeStamp = MusicTimeStamp(0)
MusicTrackNewExtendedTempoEvent(tempoTrack, timeStamp, bpm)
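Pulled together, a sketch of the whole thing as one helper (the function name and parameter are mine, not part of any API; it assumes the sequencer was created with init(audioEngine:) as described above):

import AVFoundation
import AudioToolbox

func setSequenceTempo(_ bpm: Float64, in engine: AVAudioEngine) {
    // (1) The MusicSequence backing the AVAudioSequencer.
    guard let musicSequence = engine.musicSequence else { return }

    // (2) The sequence's tempo track.
    var tempoTrack: MusicTrack!
    MusicSequenceGetTempoTrack(musicSequence, &tempoTrack)

    // (3) Remove existing tempo events.
    var iterator: MusicEventIterator!
    NewMusicEventIterator(tempoTrack, &iterator)
    var hasEvent = DarwinBoolean(false)
    MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
    while hasEvent.boolValue {
        var timeStamp = MusicTimeStamp()
        var eventType = MusicEventType()
        var data: UnsafeRawPointer?
        var dataSize = UInt32()
        MusicEventIteratorGetEventInfo(iterator, &timeStamp, &eventType, &data, &dataSize)
        if eventType == kMusicEventType_ExtendedTempo {
            MusicEventIteratorDeleteEvent(iterator) // deleting advances the iterator
        } else {
            MusicEventIteratorNextEvent(iterator)
        }
        MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
    }
    DisposeMusicEventIterator(iterator)

    // (4) Insert the new tempo at the very start.
    MusicTrackNewExtendedTempoEvent(tempoTrack, 0, bpm)
}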

Build a simple Equalizer

I would like to make a 5-band audio equalizer (60 Hz, 230 Hz, 910 Hz, 4 kHz, 14 kHz) using AVAudioEngine. The user should input a gain per band through a vertical slider, and the playing audio should adjust accordingly. I tried using AVAudioUnitEQ to do this, but I hear no difference when playing the audio. I tried hardcoding values to specify a gain at each frequency, but it still does not work. Here is the code I have:
var audioEngine: AVAudioEngine = AVAudioEngine()
var equalizer: AVAudioUnitEQ!
var audioPlayerNode: AVAudioPlayerNode = AVAudioPlayerNode()
var audioFile: AVAudioFile!

// in viewDidLoad():
equalizer = AVAudioUnitEQ(numberOfBands: 5)
audioEngine.attach(audioPlayerNode)
audioEngine.attach(equalizer)

let bands = equalizer.bands
let freqs = [60, 230, 910, 4000, 14000]

audioEngine.connect(audioPlayerNode, to: equalizer, format: nil)
audioEngine.connect(equalizer, to: audioEngine.outputNode, format: nil)

for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}

bands[0].gain = -10.0
bands[0].filterType = .lowShelf
bands[1].gain = -10.0
bands[1].filterType = .lowShelf
bands[2].gain = -10.0
bands[2].filterType = .lowShelf
bands[3].gain = 10.0
bands[3].filterType = .highShelf
bands[4].gain = 10.0
bands[4].filterType = .highShelf

do {
    if let filepath = Bundle.main.path(forResource: "song", ofType: "mp3") {
        let filepathURL = NSURL.fileURL(withPath: filepath)
        audioFile = try AVAudioFile(forReading: filepathURL)
        audioEngine.prepare()
        try audioEngine.start()
        audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        audioPlayerNode.play()
    }
} catch _ {}
Since the low frequencies have a gain of -10 and the high frequencies have a gain of 10, there should be a very noticeable difference when playing any media. However, when the media starts playing, it sounds the same as if played without any equalizer attached.
I'm not sure why this is happening, but I tried several things to debug. I thought it might be the order of the calls, so I tried calling audioEngine.connect after adjusting all of the bands, but that did not make a difference either.
I tried this same code with using an AVAudioUnitTimePitch, and it worked perfectly, so I am dumbfounded as to why it does not work with AVAudioUnitEQ.
I do not want to use any third-party libraries or cocoa pods for this project, I would like to do it using AVFoundation alone.
Any help would be greatly appreciated!
Thanks in advance.
Looking through the AVAudioUnitEQFilterParameters documentation, I noticed that I had set every parameter except bypass, and it turns out that changing this flag fixed everything!
So, I believe the main issue here is that each AVAudioUnitEQ band is bypassed by default, and the programmer must explicitly set bypass = false for a band to take effect.
So, I changed
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}
to
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
    bands[i].bypass = false
    bands[i].filterType = .parametric
}
and everything started working. Furthermore, to make an effective equalizer that allows the user to modify individual frequencies, the filterType for each band should be set to .parametric.
I am still unsure what to set the bandwidth to, but I can probably check online for that, or just adjust it until the sound matches a different equalizer application.
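For the slider hookup, a minimal sketch (the @IBAction name and the tag-to-band mapping are my own assumptions; each slider would be tagged 0 through 4 to match the bands array, with a range of roughly -12 to 12):

@IBAction func gainSliderChanged(_ sender: UISlider) {
    // Band gains are in decibels; the slider value maps directly.
    equalizer.bands[sender.tag].gain = sender.value
}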

AVAudioEngine seek the time of the song

I am playing a song using AVAudioPlayerNode and I am trying to control its position with a UISlider, but I can't figure out how to seek using AVAudioEngine.
After MUCH trial and error I think I have finally figured this out.
First you need the sample rate the player is running at. To get it, take the node's last render time and convert it to player time:
guard let nodeTime = playerNode.lastRenderTime,
      let playerTime = playerNode.playerTime(forNodeTime: nodeTime) else { return }
let sampleRate = playerTime.sampleRate
Then, multiply the sample rate by the new time in seconds. This gives you the exact frame of the song at which to restart the player:
let newSampleTime = AVAudioFramePosition(sampleRate * Double(slider.value))
Next, calculate the number of frames left to play in the audio file:
let length = Float(songDuration!) - slider.value
let framesToPlay = AVAudioFrameCount(Float(sampleRate) * length)
Finally, stop your node, schedule the new segment of audio, and start your node again!
playerNode.stop()
if framesToPlay > 1000 {
    playerNode.scheduleSegment(audioFile, startingFrame: newSampleTime, frameCount: framesToPlay, at: nil, completionHandler: nil)
}
playerNode.play()
If you need further explanation I wrote a short tutorial here: http://swiftexplained.com/?p=9
For future readers, it's probably better to get the sample rate as:
playerNode.outputFormat(forBus: 0).sampleRate
Also take care when converting to AVAudioFramePosition, since it is an integer while the sample rate is a double; without rounding the result, you may end up with undesirable behavior.
P.S. The above answer assumes that the file you are playing has the same sample rate as the output format of the player, which may or may not be true.
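Putting both answers together, a minimal seek helper (the seek(to:) name is mine; audioFile is the AVAudioFile being played, and the frame math uses the file's own sample rate rather than the player's output format):

func seek(to seconds: Double) {
    let sampleRate = audioFile.processingFormat.sampleRate
    let startFrame = AVAudioFramePosition(seconds * sampleRate)
    let remainingFrames = AVAudioFrameCount(max(0, audioFile.length - startFrame))

    playerNode.stop()
    if remainingFrames > 1000 {
        playerNode.scheduleSegment(audioFile,
                                   startingFrame: startFrame,
                                   frameCount: remainingFrames,
                                   at: nil,
                                   completionHandler: nil)
    }
    playerNode.play()
}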
