I am setting up a TTS app with AVSpeechSynthesizer and need to make real-time pitch and rate adjustments. I am using a UISlider for adjusting each of them.
Here is my code:
@IBAction func sl(_ sender: UISlider) {
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
        self.rate = sender.value
        if currentRange.length > 0 {
            let offset = currentRange.length + currentRange.location
            let remainingText = self.tvEditor.text.dropFirst(offset)
            self.tvEditor.text = String(remainingText)
            synthesizer.speak(buildUtterance(for: rate, pitch: pitch, with: String(remainingText), language: self.preferredVoiceLanguageCode2 ?? "en"))
        }
    }
}
I may have understood your problem even though few details are provided: the new values of rate and pitchMultiplier are not taken into account while speech is already running.
The details that follow are based on this example, which contains code snippets (ObjC, Swift) and illustrations.
Create your AVSpeechUtterance instances with their rate and pitchMultiplier properties set.
Add each of them to an array that represents the queue to be spoken.
Loop over that queue with the synthesizer to read out every element.
Now, if you want to change the property values in real time, follow the steps hereafter whenever one of your sliders moves:
Get the current spoken utterance thanks to the AVSpeechSynthesizerDelegate protocol.
Run the synthesizer's stopSpeaking method, which removes the utterances that haven't been spoken yet from the queue.
Re-create the removed utterances with the new property values.
Redo steps 2 and 3 of the list above to resume where you stopped, with the updated values.
The synthesizer queues everything to be spoken long before you ask for new values, and those values have no impact on the utterances already stored: you must remove the utterances and re-create them with their new property values.
If the code example provided by the link above isn't enough, I suggest taking a look at this WWDC video detailed summary dealing with AVSpeechSynthesizer.
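A minimal sketch of those steps, assuming the whole text lives in a single utterance (the class and property names here are illustrative, not from the question):

import AVFoundation

final class SpeechController: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()
    private var remainingText = ""
    private var currentRange = NSRange(location: 0, length: 0)

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(text: String, rate: Float, pitch: Float) {
        remainingText = text
        currentRange = NSRange(location: 0, length: 0)
        speakRemaining(rate: rate, pitch: pitch)
    }

    // Call this from the slider action with the new values.
    func update(rate: Float, pitch: Float) {
        guard synthesizer.isSpeaking else { return }
        // Stopping removes the not-yet-spoken utterances from the queue.
        synthesizer.stopSpeaking(at: .immediate)
        // Drop what has already been spoken (NSRange is UTF-16 based,
        // so this is approximate for non-ASCII text)...
        let spoken = currentRange.location + currentRange.length
        remainingText = String(remainingText.dropFirst(spoken))
        currentRange = NSRange(location: 0, length: 0)
        // ...and re-create the utterance with the new property values.
        speakRemaining(rate: rate, pitch: pitch)
    }

    private func speakRemaining(rate: Float, pitch: Float) {
        let utterance = AVSpeechUtterance(string: remainingText)
        utterance.rate = rate
        utterance.pitchMultiplier = pitch
        synthesizer.speak(utterance)
    }

    // The delegate reports the range currently being spoken.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        currentRange = characterRange
    }
}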
Hello fellow AudioKit users,
I'm trying to set up AudioKit 5 with a playback time indication, and am having trouble.
If I use AudioPlayer's duration property, this is the total time of the audio file, not the current playback time.
ex:
let duration = player.duration
Always gives the file's total time.
Looking at old code from AKAudioPlayer, it seemed to have a "currentTime" property.
The migration guide (https://github.com/AudioKit/AudioKit/blob/v5-main/docs/MigrationGuide.md) mentions some potentially helpful classes from the old version, however "AKTimelineTap" has no replacement and no comments from the developers... nice.
I'm also still not sure how to manipulate the current playback time either...
I've also checked out AudioKit 5's Cookbook, however it covers adding effects and nodes, not necessarily playback display, etc.
Thanks for any help with this new version of AudioKit.
You can find playerNode in AudioPlayer; it's an AVAudioPlayerNode.
Using lastRenderTime and playerTime(forNodeTime:), you can calculate the current time.
ex:
// get playerNode in AudioPlayer.
let playerNode = player.playerNode
// get lastRenderTime, and transform to playerTime.
guard let lastRenderTime = playerNode.lastRenderTime else { return }
guard let playerTime = playerNode.playerTime(forNodeTime: lastRenderTime) else { return }
// use sampleRate and sampleTime to calculate current time in seconds.
let sampleRate = playerTime.sampleRate
let sampleTime = playerTime.sampleTime
let currentTime = Double(sampleTime) / sampleRate
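Wrapped up, that calculation could be a small helper you poll from a Timer to drive the playback display. A sketch, assuming AudioKit 5's AudioPlayer exposes playerNode as described above:

import AudioKit

extension AudioPlayer {
    // Current playback time in seconds, or nil before any audio has rendered.
    var currentTime: Double? {
        guard let nodeTime = playerNode.lastRenderTime,
              let playerTime = playerNode.playerTime(forNodeTime: nodeTime) else { return nil }
        return Double(playerTime.sampleTime) / playerTime.sampleRate
    }
}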
I have 2 sequencers:
let sequencer1 = AKAppleSequencer(filename: "filename1")
let sequencer2 = AKAppleSequencer(filename: "filename2")
Both have the same bpm value.
When sequencer1 starts playing one MIDI track (playing it only once), I need sequencer2 to begin playing exactly when the first sequencer finishes. How can I achieve this?
Note that sequencer2 is looped.
Currently I have this approach but it is not accurate enough:
let callbackInstrument = AKMIDICallbackInstrument(midiInputName: "callbackInstrument", callback: nil)
let callbackTrack = sequencer1.newTrack()!
callbackTrack.setMIDIOutput(callbackInstrument.midiIn)

let beatsCount = sequencer1.length.beats
callbackTrack.add(noteNumber: MIDINoteNumber(beatsCount),
                  velocity: 1,
                  position: AKDuration(beats: beatsCount),
                  duration: AKDuration(beats: 0.1))

callbackInstrument.callback = { status, _, _ in
    guard AKMIDIStatusType.from(byte: status) == .noteOn else { return }
    DispatchQueue.main.async { self.sequencer2.play() } // not accurate
}

let sampler = AKMIDISampler(midiOutputName: nil)
sequencer1.tracks[0].setMIDIOutput(sampler.midiIn)
Appreciate any thoughts.
Apple's MusicSequence, upon which AKAppleSequencer is built, always flubs the timing for the first 100ms or so after it starts. It is a known issue in closed-source code and won't ever be fixed. Here are two possible ways around it.
Use the new AKSequencer. It might be accurate enough to make this work (but no guarantees). Here is an example of using AKSequencer with AKCallbackInstrument: https://stackoverflow.com/a/61545391/2717159
Use a single AKAppleSequencer, but place your 'sequencer2' content after the 'sequencer1' content. You won't be able to loop it automatically, but you can repeatedly re-write it from your callback function (or pre-write it 300 times or something like that); a rough sketch follows. In my experience, there is no problem writing MIDI to AKAppleSequencer while it is playing. The sample app https://github.com/AudioKit/MIDIFileEditAndSync has examples of time shifting MIDI note data, which could be used to accomplish this.
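A sketch of that second approach, assuming the looped content can be re-added as notes programmatically (the note number and pattern length below are illustrative placeholders):

let sequencer = AKAppleSequencer(filename: "filename1")
let loopTrack = sequencer.newTrack()!
let loopStart = sequencer.length.beats  // the "sequencer1" content ends here
let patternBeats = 4.0                  // length of the looped pattern, illustrative

// Pre-write the looped pattern many times after the first section,
// instead of starting a second sequencer from a callback.
for i in 0..<300 {
    let offset = loopStart + Double(i) * patternBeats
    loopTrack.add(noteNumber: 60,
                  velocity: 100,
                  position: AKDuration(beats: offset),
                  duration: AKDuration(beats: 0.5))
}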
I'm currently working on a musician app. In my app, notes should be played with a specific duration. I won't go into detail about when the notes are played. Basically, there is a UI view (a vertical line) which is moving, and when it hits my other UI views (rectangles), a note should be played. Important here: the note should keep playing until the line no longer hits the rectangle.
Playing the note is no problem, but I can't find any way to pass a duration. It should also be possible to play the same note multiple times with a delay.
So I tried to make this work with AudioKit because it seems like the greatest solution for audio, but it has so much stuff. I took a look at their examples and found this:
let bundlePath = Bundle.main.bundlePath
let soundPath = "\(bundlePath)/sounds"
let akSampler = AKAppleSampler()
let mixer = AKMixer(akSampler)
try! akSampler.loadSoundFont(soundPath, preset: 0, bank: 0)
mixer.start()
AudioKit.output = mixer
do {
    try AudioKit.start()
} catch {
    print("AudioKit wouldn't start!")
}
do {
    try akSampler.play(noteNumber: myNote.rawValue, velocity: 100, channel: 1)
} catch let e {
    print(e)
}
Unfortunately I can't pass any duration, and when I call akSampler.stop(noteNumber: myNote.rawValue) it also stops the other notes with the same note number.
I tried to find a solution with AVFoundation like so:
engine = AVAudioEngine()
sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)
guard let bankURL = Bundle.main.url(forResource: "sounds", withExtension: "SF2") else {
    print("could not load sound font")
    return
}
... init engine
sampler.startNote(60, withVelocity: 64, onChannel: 0)
But same result, and likewise I can't pass any duration.
I also dug into the MIDI sequencers, but it seems they generate a sequence which I can play, and this doesn't fit my problem.
Does anyone have a solution here?
The laziest solution would be to just schedule a stop with asyncAfter when you trigger the note, e.g.,
func makeNote(note: MIDINoteNumber, dur: Double) {
    try? sampler.play(noteNumber: note, velocity: 100, channel: 0)
    DispatchQueue.main.asyncAfter(deadline: .now() + dur) {
        try? self.sampler.stop(noteNumber: note)
    }
}
A better solution would probably use either AKSequencer or AKAppleSequencer. Both allow you to create sequences on the fly by adding individual notes with a specified duration (in musical time, i.e., number of beats). AKSequencer is considerably more accurate, but AKAppleSequencer has more readily available code examples on the web. A little confusingly, the current AKAppleSequencer used to also be called AKSequencer, but their interfaces are sufficiently different that a quick look at the docs for the two classes will tell you which you're looking at.
Your question is asking about how to schedule MIDI events which is precisely what these classes are designed to do. You haven't really given a clear reason why generating a sequence doesn't fit your problem.
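For illustration, a rough AKAppleSequencer sketch along those lines (AudioKit 4 API; the tempo, note numbers, and durations are placeholders):

let sampler = AKMIDISampler(midiOutputName: nil)
// Load a sound into the sampler first, e.g. via loadSoundFont as in the question.
let sequencer = AKAppleSequencer()
let track = sequencer.newTrack()!
track.setMIDIOutput(sampler.midiIn)
sequencer.setTempo(120)

// Each note carries its own duration in beats, so no manual stop is needed,
// and the same note number can be scheduled several times at different positions.
track.add(noteNumber: 60, velocity: 100,
          position: AKDuration(beats: 0), duration: AKDuration(beats: 1))
track.add(noteNumber: 60, velocity: 100,
          position: AKDuration(beats: 2), duration: AKDuration(beats: 0.5))

AudioKit.output = sampler
try? AudioKit.start()
sequencer.play()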
I have multiple audio files that I want to play continuously and control with a UISlider. I was using AVAudioPlayer, but there was a problem with gaps between tracks. So I found AVQueuePlayer, and now there are no gaps. But now I have a problem setting AVQueuePlayer to the right asset and time when the UISlider value changes.
Each track has a different duration, and I want the slider to allot the same slice to each track, so I take the maximum duration over all tracks and derive an acceleration factor for each track. Here is how I update the slider while AVQueuePlayer is playing:
func updateSliderProgress() {
    var value: Float = 0
    if let track = tracks[self.playingIndex] {
        value = Float(self.playingIndex) * self.maximumDuration +
            Float(CMTimeGetSeconds(audioQueuePlayer.currentTime())) * track.acceleration
    }
    playerSlider.setValue(value, animated: false)
}
And here is the notification handler for when an AVPlayerItem reaches its end:
func playerItemDidReachEnd(sender: AnyObject) {
    self.playingIndex += 1
    ...
}
It works, and the UISlider progresses correctly. But I have a problem with the other direction:
@IBAction func playerSliderValueChanged(sender: AnyObject) {
    let seconds = Double(self.getSeekTime(self.playerSlider.value))
    audioQueuePlayer.seek(to: CMTimeMakeWithSeconds(seconds, preferredTimescale: 1000))
}
The seek(to:) method is inherited from AVPlayer, and it sets the time just for the current item, right? So is it possible to change the current AVPlayerItem (by index or something like that) and then apply the time within that item? I found only the advanceToNextItem method, but I was hoping for more functions for changing the current item.
So for now the only solution that comes to mind is that each time the user moves the slider, I create a new AVQueuePlayer, use advanceToNextItem to reach the right track, and then use seek(to:) to get to the correct time. Is there a better solution?
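For what it's worth, the queue can also be rebuilt in place instead of creating a new player each time. A sketch, assuming the original AVPlayerItems are kept in a playerItems array and that position(for:) is the inverse of the slider mapping above (both hypothetical):

@IBAction func playerSliderValueChanged(_ sender: UISlider) {
    let (index, seconds) = position(for: sender.value)

    // Rebuild the queue starting from the selected track.
    audioQueuePlayer.removeAllItems()
    for item in playerItems[index...] {
        item.seek(to: .zero, completionHandler: nil)  // rewind before re-queueing
        audioQueuePlayer.insert(item, after: nil)     // nil appends to the end
    }
    playingIndex = index
    audioQueuePlayer.seek(to: CMTimeMakeWithSeconds(seconds, preferredTimescale: 1000))
    audioQueuePlayer.play()
}

// Inverse of updateSliderProgress(): map a slider value back to
// (track index, seconds within that track).
func position(for value: Float) -> (index: Int, seconds: Double) {
    let index = min(Int(value / maximumDuration), tracks.count - 1)
    let offset = value - Float(index) * maximumDuration
    let seconds = Double(offset / tracks[index]!.acceleration)
    return (index, seconds)
}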
I am playing a song using AVAudioPlayerNode and I am trying to control its time using a UISlider, but I can't figure out how to seek using AVAudioEngine.
After MUCH trial and error I think I have finally figured this out.
First you need to get the sample rate of your file. To do this, get the last render time of your audio node:
guard let nodeTime = playerNode.lastRenderTime,
      let playerTime = playerNode.playerTime(forNodeTime: nodeTime) else { return }
let sampleRate = playerTime.sampleRate
Then, multiply your sample rate by the new time in seconds. This will give you the exact frame of the song at which you want to start the player:
let newSampleTime = AVAudioFramePosition(sampleRate * Double(slider.value))
Next, you are going to want to calculate the number of frames left in the audio file:
let length = Float(songDuration!) - slider.value
let framesToPlay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
Finally, stop your node, schedule the new segment of audio, and start your node again!
playerNode.stop()
if framesToPlay > 1000 {
    playerNode.scheduleSegment(audioFile,
                               startingFrame: newSampleTime,
                               frameCount: framesToPlay,
                               at: nil,
                               completionHandler: nil)
}
playerNode.play()
If you need further explanation I wrote a short tutorial here: http://swiftexplained.com/?p=9
For future readers, it's probably better to get the sample rate as:
playerNode.outputFormat(forBus: 0).sampleRate
Also take care when converting to AVAudioFramePosition: it is an integer, while the sample rate is a double. Without rounding, you may end up with undesirable results.
P.S. The above answer assumes that the file you are playing has the same sample rate as the output format of the player, which may or may not be true.
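Putting the pieces together with that caveat in mind, a seek helper can use the file's own sample rate and length instead of the player's output format. A sketch, assuming playerNode and audioFile exist as above:

func seek(to seconds: Double) {
    let sampleRate = audioFile.processingFormat.sampleRate
    let startFrame = AVAudioFramePosition((seconds * sampleRate).rounded())
    let framesToPlay = AVAudioFrameCount(max(0, audioFile.length - startFrame))

    playerNode.stop()
    if framesToPlay > 1000 {
        playerNode.scheduleSegment(audioFile,
                                   startingFrame: startFrame,
                                   frameCount: framesToPlay,
                                   at: nil,
                                   completionHandler: nil)
    }
    playerNode.play()
}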