AVAudioEngine seek the time of the song - ios

I am playing a song using AVAudioPlayerNode and I am trying to control its playback position with a UISlider, but I can't figure out how to seek to a specific time using AVAudioEngine.

After MUCH trial and error I think I have finally figured this out.
First you need the sample rate of your file. To get it, take the last render time of your player node and convert it to player time:
guard let nodeTime = playerNode.lastRenderTime,
      let playerTime = playerNode.playerTime(forNodeTime: nodeTime) else { return }
let sampleRate = playerTime.sampleRate
Then, multiply your sample rate by the new time in seconds. This will give you the exact frame of the song at which you want to start the player:
let newSampleTime = AVAudioFramePosition(sampleRate * Double(slider.value))
Next, you are going to want to calculate the amount of frames there are left in the audio file:
let length = Float(songDuration!) - slider.value
let framesToPlay = AVAudioFrameCount(Float(sampleRate) * length)
Finally, stop your node, schedule the new segment of audio, and start your node again!
playerNode.stop()
if framesToPlay > 1000 {
    playerNode.scheduleSegment(audioFile, startingFrame: newSampleTime, frameCount: framesToPlay, at: nil, completionHandler: nil)
}
playerNode.play()
If you need further explanation I wrote a short tutorial here: http://swiftexplained.com/?p=9

For future readers, it's probably better to get the sample rate as:
playerNode.outputFormat(forBus: 0).sampleRate
Also take care when converting to AVAudioFramePosition, as it is an integer, while sample rate is a double. Without rounding the result, you may end up with undesirable results.
P.S. The above answer assumes that the file you are playing has the same sample rate as the output format of the player, which may or may not be true.
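Putting the pieces together, here is a minimal sketch of a seek function based on the steps above. It reads the sample rate from the file itself (per the P.S., the file's rate may differ from the player's output format); audioFile, playerNode, and songDuration (the file length in seconds, as a Double) are assumed to be properties of your class.
func seek(to seconds: Double) {
    // Use the file's own sample rate so the frame math matches the scheduled file.
    let sampleRate = audioFile.processingFormat.sampleRate
    let newSampleTime = AVAudioFramePosition(sampleRate * seconds)
    let framesToPlay = AVAudioFrameCount(max(0, (songDuration - seconds) * sampleRate))

    playerNode.stop()
    if framesToPlay > 1000 {
        playerNode.scheduleSegment(audioFile, startingFrame: newSampleTime, frameCount: framesToPlay, at: nil, completionHandler: nil)
        playerNode.play()
    }
}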

Related

AudioKit 5 iOS Current Playback Time

Hello fellow AudioKit users,
I'm trying to set up AudioKit 5 with a playback time indication, and am having trouble.
If I use AudioPlayer's duration property, this is the total time of the audio file, not the current playback time.
ex:
let duration = player.duration
Always gives the file's total time.
Looking at old code from AKAudioPlayer, it seemed to have a "currentTime" property.
The migration guide (https://github.com/AudioKit/AudioKit/blob/v5-main/docs/MigrationGuide.md) mentions some potentially helpful classes from the old version; however, "AKTimelineTap" has no replacement and no comments from the developers... nice.
I'm also still not sure how to manipulate the current playback time either...
I've also checked out AudioKit 5's Cookbook, but it covers adding effects and nodes, not playback display.
Thanks for any help with this new version of AudioKit.
You can find the playerNode inside AudioPlayer; it is an AVAudioPlayerNode.
Using its lastRenderTime and playerTime(forNodeTime:), you can calculate the current playback time.
ex:
// get playerNode in AudioPlayer.
let playerNode = player.playerNode
// get lastRenderTime, and transform to playerTime.
guard let lastRenderTime = playerNode.lastRenderTime else { return }
guard let playerTime = playerNode.playerTime(forNodeTime: lastRenderTime) else { return }
// use sampleRate and sampleTime to calculate current time in seconds.
let sampleRate = playerTime.sampleRate
let sampleTime = playerTime.sampleTime
let currentTime = Double(sampleTime) / sampleRate
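If you need to show this in the UI, one option (a sketch, not AudioKit-specific API; timeLabel is a hypothetical UILabel) is to wrap the calculation in a helper and poll it with a Timer:
func currentPlaybackTime() -> Double {
    let playerNode = player.playerNode
    guard let lastRenderTime = playerNode.lastRenderTime,
          let playerTime = playerNode.playerTime(forNodeTime: lastRenderTime) else { return 0 }
    return Double(playerTime.sampleTime) / playerTime.sampleRate
}

// Refresh a label a few times per second, e.g. from viewDidLoad.
let displayTimer = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: true) { [weak self] _ in
    guard let self = self else { return }
    self.timeLabel.text = String(format: "%.1f s", self.currentPlaybackTime())
}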

How to sync accurately enough two music sequences (Audiokit.AKAppleSequencer)?

I have 2 sequencers:
let sequencer1 = AKAppleSequencer(filename: "filename1")
let sequencer2 = AKAppleSequencer(filename: "filename2")
Both have the same bpm value.
When sequencer1 plays its MIDI track (it plays only once), I need sequencer2 to begin playing exactly when the first sequencer finishes. How can I achieve this?
Note that sequencer2 is looped.
Currently I have this approach but it is not accurate enough:
let callbackInstrument = AKMIDICallbackInstrument(midiInputName: "callbackInstrument", callback: nil)
let callbackTrack = sequencer1.newTrack()!
callbackTrack.setMIDIOutput(callbackInstrument.midiIn)

let beatsCount = sequencer1.length.beats
callbackTrack.add(noteNumber: MIDINoteNumber(beatsCount),
                  velocity: 1,
                  position: AKDuration(beats: beatsCount),
                  duration: AKDuration(beats: 0.1))

callbackInstrument.callback = { status, _, _ in
    guard AKMIDIStatusType.from(byte: status) == .noteOn else { return }
    DispatchQueue.main.async { self.sequencer2.play() } // not accurate
}

let sampler = AKMIDISampler(midiOutputName: nil)
sequencer1.tracks[0].setMIDIOutput(sampler.midiIn)
Appreciate any thoughts.
Apple's MusicSequence, upon which AKAppleSequencer is built, always flubs the timing for the first 100ms or so after it starts. It is a known issue in closed source code and won't ever be fixed. Here are two possible ways around it.
Use the new AKSequencer. It might be accurate enough to make this work (but no guarantees). Here is an example of using AKSequencer with AKCallbackInstrument: https://stackoverflow.com/a/61545391/2717159
Use a single AKAppleSequencer, but place your 'sequencer2' content after the 'sequencer1' content. You won't be able to loop it automatically, but you can repeatedly re-write it from your callback function (or pre-write it 300 times or something like that). In my experience, there is no problem writing MIDI to AKAppleSequencer while it is playing. The sample app https://github.com/AudioKit/MIDIFileEditAndSync has examples of time shifting MIDI note data, which could be used to accomplish this.
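Here is a rough sketch of that second approach, assuming your looped material is available as plain note data; loopNotes and loopLengthBeats are placeholders for your own data, not AudioKit API:
// One sequencer: the "sequencer1" content comes from the MIDI file, and the
// "sequencer2" content is pre-written repeatedly after it on a separate track.
let sequencer = AKAppleSequencer(filename: "filename1")
let loopSampler = AKMIDISampler(midiOutputName: nil)
let loopTrack = sequencer.newTrack()!
loopTrack.setMIDIOutput(loopSampler.midiIn)

let introLength = sequencer.length.beats
let repetitions = 300   // pre-write enough repeats for the lifetime of the screen

for repetition in 0..<repetitions {
    let offset = introLength + Double(repetition) * loopLengthBeats
    for note in loopNotes {   // hypothetical [(noteNumber: MIDINoteNumber, position: Double, duration: Double)]
        loopTrack.add(noteNumber: note.noteNumber,
                      velocity: 100,
                      position: AKDuration(beats: offset + note.position),
                      duration: AKDuration(beats: note.duration))
    }
}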

Modify Playback Tempo of an AVAudioSequencer / AVMusicTrack?

How does one programmatically change the tempo (in BPM) of an AVAudioSequencer that's been loaded from an existing MIDI file (i.e. using the following)?
try sequencer.load(from: fileURL, options: AVMusicSequenceLoadOptions.smfChannelsToTracks)
I know that the sequencer's tempoTrack property returns the AVMusicTrack controlling the tempo, but how does one then edit it to add/change tempo events? The Apple documentation simply says...
"The tempo track can be edited and iterated upon as any other track. Non-tempo events in a tempo track are ignored."
...but gives no further indication on how such editing would be done.
I know there's the rate property, but that just revolves around a default value of 1.0, which would need some complex adjustments to express BPM values, and I don't think it would even be possible unless the file's original BPM is known at runtime.
Alternatively, is there a way to create a new AVMusicTrack from scratch, with a custom tempo, and make that the sequencer's tempoTrack?
The only way I managed this was to dip into the Audio Toolbox API momentarily.
This approach assumes that you instantiated your AVAudioSequencer with an AVAudioEngine via:
init(audioEngine engine: AVAudioEngine)
(1) After loading your midi file with AVAudioSequencer, get a pointer to the underlying MusicSequence from this property on AVAudioEngine
var musicSequence: MusicSequence? { get set }
(2) Get a pointer to the sequence's tempo track.
var tempoTrack: MusicTrack!
MusicSequenceGetTempoTrack(musicSequence, &tempoTrack)
(3) Remove all existing tempo information from the tempo track.
var iterator: MusicEventIterator!
NewMusicEventIterator(tempoTrack, &iterator)

var hasEvent = DarwinBoolean(false)
MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)

while hasEvent.boolValue {
    var timeStamp = MusicTimeStamp()
    var eventType = MusicEventType()
    var data: UnsafeRawPointer? = nil
    var dataSize = UInt32()
    MusicEventIteratorGetEventInfo(iterator, &timeStamp, &eventType, &data, &dataSize)

    guard eventType == kMusicEventType_ExtendedTempo else {
        MusicEventIteratorNextEvent(iterator)
        MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
        continue
    }

    // remove the tempo event
    MusicEventIteratorDeleteEvent(iterator)
    MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
}

DisposeMusicEventIterator(iterator)
(4) Set the new tempo.
let bpm: Float64 = 92
let timeStamp = MusicTimeStamp(0)
MusicTrackNewExtendedTempoEvent(tempoTrack, timeStamp, bpm)
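For reference, here is the same sequence of steps folded into one helper. This is a sketch, assuming `engine` is the AVAudioEngine you passed to the sequencer's initializer and the MIDI file is already loaded:
import AVFoundation
import AudioToolbox

func setTempo(_ bpm: Float64, on engine: AVAudioEngine) {
    // (1) The underlying MusicSequence that AVAudioSequencer plays.
    guard let musicSequence = engine.musicSequence else { return }

    // (2) The sequence's tempo track.
    var tempoTrack: MusicTrack?
    MusicSequenceGetTempoTrack(musicSequence, &tempoTrack)
    guard let tempoTrack = tempoTrack else { return }

    // (3) Remove all existing extended-tempo events.
    var iterator: MusicEventIterator?
    NewMusicEventIterator(tempoTrack, &iterator)
    guard let iterator = iterator else { return }
    defer { DisposeMusicEventIterator(iterator) }

    var hasEvent = DarwinBoolean(false)
    MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
    while hasEvent.boolValue {
        var timeStamp = MusicTimeStamp()
        var eventType = MusicEventType()
        var data: UnsafeRawPointer? = nil
        var dataSize = UInt32()
        MusicEventIteratorGetEventInfo(iterator, &timeStamp, &eventType, &data, &dataSize)
        if eventType == kMusicEventType_ExtendedTempo {
            MusicEventIteratorDeleteEvent(iterator)   // the iterator moves to the next event
        } else {
            MusicEventIteratorNextEvent(iterator)
        }
        MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
    }

    // (4) Insert the new tempo at the start of the sequence.
    MusicTrackNewExtendedTempoEvent(tempoTrack, MusicTimeStamp(0), bpm)
}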

Real time rate and pitch adjustments Swift

I am setting up a text-to-speech app with AVSpeechSynthesizer. I need to make real-time pitch and rate adjustments. I am using a UISlider for adjusting the pitch and rate.
Here is my code:-
@IBAction func sl(_ sender: UISlider) {
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
        self.rate = sender.value
        if currentRange.length > 0 {
            let valuee = currentRange.length + currentRange.location
            let neww = self.tvEditor.text.dropFirst(valuee)
            self.tvEditor.text = String(neww)
            synthesizer.speak(buildUtterance(for: rate, pitch: pitch, with: String(neww), language: self.preferredVoiceLanguageCode2 ?? "en"))
        }
    } else {
    }
}
I may have understood your problem even though few details are provided: you can't take the new rate and pitchMultiplier values into account while the speech is already running.
The following details are based on this example, which contains code snippets (Objective-C, Swift) and illustrations.
Create your AVSpeechUtterance instances with their rate and pitchMultiplier properties.
Add each one of them to an array that will represent the queue to be spoken.
Loop over that queue with the synthesizer to read out every element.
Now, if you want to change the property values in real-time, see the steps hereafter once one of your sliders moves:
Get the current spoken utterance thanks to the AVSpeechSynthesizerDelegate protocol.
Call the synthesizer's stopSpeaking method, which removes from the queue the utterances that haven't been spoken yet.
Re-create the removed utterances with the new property values.
Redo steps 2/ and 3/ to resume where you stopped with these updated values.
The synthesizer queues everything to be spoken well before you ask for new values, and those new values don't affect utterances that are already queued: you must remove the remaining utterances and re-create them with the new property values.
If the code example provided by the link above isn't enough, I suggest taking a look at this WWDC video detailed summary dealing with AVSpeechSynthesizer.
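As a concrete illustration of those steps, here is a minimal, self-contained sketch (not the linked example's code) that tracks progress via AVSpeechSynthesizerDelegate and restarts the unspoken remainder with the new values; the class and property names are placeholders.
import AVFoundation

final class SpeechController: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()
    private var fullText = ""
    private var baseOffset = 0      // characters consumed by previous utterances
    private var currentOffset = 0   // characters spoken within the current utterance
    var rate: Float = AVSpeechUtteranceDefaultSpeechRate
    var pitch: Float = 1.0

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String) {
        fullText = text
        baseOffset = 0
        currentOffset = 0
        speakRemainder()
    }

    // Call these from the slider handlers.
    func update(rate newRate: Float) { rate = newRate; restartIfSpeaking() }
    func update(pitch newPitch: Float) { pitch = newPitch; restartIfSpeaking() }

    private func restartIfSpeaking() {
        guard synthesizer.isSpeaking else { return }
        synthesizer.stopSpeaking(at: .immediate)
        baseOffset += currentOffset
        currentOffset = 0
        speakRemainder()
    }

    private func speakRemainder() {
        let remainder = String(fullText.dropFirst(baseOffset))
        guard !remainder.isEmpty else { return }
        let utterance = AVSpeechUtterance(string: remainder)
        utterance.rate = rate
        utterance.pitchMultiplier = pitch
        synthesizer.speak(utterance)
    }

    // Track how far the synthesizer has gotten so we know where to resume.
    // Note: NSRange is UTF-16 based; this simple offset is fine for plain text.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        currentOffset = characterRange.location + characterRange.length
    }
}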

AVPlayer seekToTime does not play at correct position

I have an AVPlayer which is playing a HLS video stream. My user interface provides a row of buttons, one for each "chapter" in the video (the buttons are labeled "1", "2", "3"). The app downloads some meta-data from a server which contains the list of chapter cut-in points denoted in seconds. For example, one video is 12 minutes in length - the list of chapter cut-in points are 0, 58, 71, 230, 530, etc., etc.
When the user taps one of the "chapter buttons" the button handler code does this:
[self.avPlayer pause];
[self.avPlayer seekToTime: CMTimeMakeWithSeconds(seekTime, 600)
toleranceBefore: kCMTimeZero
toleranceAfter: kCMTimeZero
completionHandler: ^(BOOL finished)
{
[self.avPlayer play];
}];
Where "seekTime" is a local var which contains the cut-in point (as described above).
The problem is that the video does not always start at the correct point. Sometimes it does. But sometimes it is anywhere from a tenth of a second, to 2 seconds BEFORE the requested seekTime. It NEVER starts after the requested seekTime.
Here are some stats on the video encoding:
Encoder: handbrakeCLI
Codec: h.264
Frame rate: 24 (actually, 23.976 - same as how it was shot)
Video Bitrate: multiple bitrates (64/150/300/500/800/1200)
Audio Bitrate: 128k
Keyframes: 23.976 (1 per second)
I am using the Apple mediafilesegmenter tool, of course, and the variantplaylistcreator to generate the playlist.
The files are being served from an Amazon Cloud/S3 bucket.
One area which I remain unclear about is CMTimeMakeWithSeconds - I have tried several variations based on different articles/docs I have read. For example, in the above excerpt I am using:
CMTimeMakeWithSeconds(seekTime, 600)
I have also tried:
CMTimeMakeWithSeconds(seekTime, 1)
I can't tell which is correct, though BOTH seem to produce the same inconsistent results!
I have also tried:
CMTimeMakeWithSeconds(seekTime, 23.967)
Some articles claim this works like a numerator/denominator, so n/1 should be correct where 'n' is the number of seconds (as in CMTimeMakeWithSeconds(n, 1)). But the code was originally created by a different programmer (who is gone now) and he used the 600 number for the preferredTimeScale (i.e. CMTimeMakeWithSeconds(n, 600)).
Can anyone offer any clues as to what I am doing wrong, or even if the kind of accuracy I am trying to achieve is even possible?
And in case someone is tempted to offer "alternative" solutions, we are already considering breaking the video up into separate streams, one per chapter, but we do not believe that will give us the same performance in the sense that changing chapters will take longer as a new AVPlayerItem will have to be created and loaded, etc., etc., etc. So if you think this is the only solution that will work (and we do expect this will achieve the result we want - ie. each chapter WILL start exactly where we want it to) feel free to say so.
Thanks in advance!
int32_t timeScale = self.player.currentItem.asset.duration.timescale;
CMTime time = CMTimeMakeWithSeconds(77.000000, timeScale);
[self.player seekToTime:time toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
I had a problem with seekToTime and solved it with this code. The timescale is the trick for this problem.
Swift version:
let playerTimescale = self.player.currentItem?.asset.duration.timescale ?? 1
let time = CMTime(seconds: 77.000000, preferredTimescale: playerTimescale)
self.player.seek(to: time, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero) { (finished) in /* Add your completion code here */
}
My suggestion:
1) Don't use [avPlayer seekToTime:toleranceBefore:toleranceAfter:]; it can delay your seek by 4-5 seconds.
2) Cut the HLS video into segments of 10 seconds each. Your chapter start positions should then fall on multiples of 10. Because each segment starts with an I-frame, this gives you both a fast seek and an accurate start time.
Please use a call like [player seekToTime:CMTimeMakeWithSeconds(seekTime, 1)], because a tolerance value of kCMTimeZero will take more time to seek. Instead of a tolerance of kCMTimeZero, you can use kCMTimeIndefinite, which is equivalent to the shorter call I specified earlier.
Try this code; it may resolve your problem.
let targetTime = CMTimeMakeWithSeconds(videoLastDuration, 1) // videoLastDuration hold the previous video state.
self.playerController.player?.currentItem?.seekToTime(targetTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero)
Swift 5:
let seconds = 45.0
let time = CMTime(seconds: seconds, preferredTimescale: 1)
player?.seek(to: time, toleranceBefore: CMTime.zero, toleranceAfter: CMTime.zero)
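Putting these suggestions together, a chapter-seek helper might look like this (a sketch; chapterStartSeconds is your cut-in point from the meta-data):
func seekToChapter(at chapterStartSeconds: Double, in player: AVPlayer) {
    // Use the asset's own timescale so the target lands on an exact frame boundary.
    let timescale = player.currentItem?.asset.duration.timescale ?? 600
    let target = CMTime(seconds: chapterStartSeconds, preferredTimescale: timescale)
    player.pause()
    player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero) { finished in
        if finished { player.play() }
    }
}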
