I am a little bit stuck.
Here is my code:
let speaker = AVSpeechSynthesizer()
var playQueue = [AVSpeechUtterance]() // current queue
var backedQueue = [AVSpeechUtterance]() // queue backup
...

func moveBackward(_ currentUtterance: AVSpeechUtterance) {
    speaker.stopSpeaking(at: .immediate)
    let currentIndex = getCurrentIndexOfText(currentUtterance)
    // out-of-range check was deleted
    let previousElement = backedQueue[currentIndex - 1]
    playQueue.insert(previousElement, at: 0)
    for utterance in playQueue {
        speaker.speak(utterance) // error here
    }
}
According to the docs for AVSpeechSynthesizer.stopSpeaking(at:):
Stopping the synthesizer cancels any further speech; in contrast with
when the synthesizer is paused, speech cannot be resumed where it left
off. Any utterances yet to be spoken are removed from the
synthesizer’s queue.
I always get the error (AVSpeechUtterance shall not be enqueued twice) when I insert an AVSpeechUtterance into the AVSpeechSynthesizer queue. But the queue should have been cleared by the stop, according to the doc.
When stopping the player, the utterances are definitely removed from the queue.
However, in your moveBackward function, you insert another AVSpeechUtterance at playQueue[0], and that whole array represents the player queue.
Assuming the stop happens with currentIndex = 2, the following snapshots show that the same object is injected twice into the queue:
Copy backedQueue[1], which is a copy of playQueue[1] (same memory address).
Insert backedQueue[1] at playQueue[0] (former playQueue[1] becomes new playQueue[2]).
Unfortunately, as the system indicates, AVSpeechUtterance shall not be enqueued twice and that's exactly what you're doing here: objects at playQueue indexes 0 and 2 have the same memory address.
The final loop, after inserting the new object at index 0, asks the speech synthesizer to put all the utterances into its brand-new queue... and two of them are the same object.
Instead of copying the playQueue into the backedQueue (both would contain the same memory addresses for their objects) or appending the same utterance to both arrays, I suggest creating distinct utterance instances, as follows:
for i in 1...5 {
    let stringNb = "number " + String(i) + " of the speech synthesizer."
    let utterance = AVSpeechUtterance(string: stringNb)
    playQueue.append(utterance)

    let utteranceBis = AVSpeechUtterance(string: stringNb)
    backedQueue.append(utteranceBis)
}
Following this piece of advice, you shouldn't meet the error AVSpeechUtterance shall not be enqueued twice.
We have a requirement for audio processing on the output of AVSpeechSynthesizer, so we started using the write method of the AVSpeechSynthesizer class to apply processing on top of it. What we currently have:
var synthesizer = AVSpeechSynthesizer()
var playerNode: AVAudioPlayerNode = AVAudioPlayerNode()

func play(audioCue: String) {
    let utterance = AVSpeechUtterance(string: audioCue)
    synthesizer.write(utterance, toBufferCallback: { [weak self] buffer in
        // We do our processing here, including conversion from the pcmFormatFloat16
        // format to the pcmFormatFloat32 format supported by AVAudioPlayerNode.
        self?.playerNode.scheduleBuffer(buffer as! AVAudioPCMBuffer, completionCallbackType: .dataPlayedBack)
    })
}
All of it was working fine before iOS 16 but with iOS 16 we started getting this exception:
[AXTTSCommon] TTSPlaybackEnqueueFullAudioQueueBuffer: error -66686 enqueueing buffer
Not sure what this exception means exactly, so we are looking for a way of addressing it, or maybe a better way of playing the buffers.
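A minimal sketch of one guard that may be worth trying (this is an assumption, not something confirmed by the question: the idea is that the callback can deliver buffers containing no frames, which should not be scheduled):

synthesizer.write(utterance, toBufferCallback: { [weak self] buffer in
    guard let self = self,
          let pcmBuffer = buffer as? AVAudioPCMBuffer,
          pcmBuffer.frameLength > 0 else {
        return // skip buffers with nothing to play
    }
    // Format conversion (pcmFormatFloat16 -> pcmFormatFloat32) would go here, as in the original code.
    self.playerNode.scheduleBuffer(pcmBuffer, completionHandler: nil)
})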
UPDATE:
I created an empty project for testing, and it turns out the write method, if called with an empty block, generates these logs:
Code I have used for a Swift project:
let synth = AVSpeechSynthesizer()
let myUtterance = AVSpeechUtterance(string: message)
myUtterance.rate = 0.4
synth.speak(myUtterance)
You can move let synth = AVSpeechSynthesizer() out of this method and declare it at the top of the class, then use it from there.
Settings to enable for Xcode 14 & iOS 16: If you are using Xcode 14 and iOS 16, it may be that no voices under Spoken Content have been downloaded, and you will get an error on the console saying the identifier, source, and content are nil. All you need to do is go to Accessibility in Settings -> Spoken Content -> Voices -> select any language and download any voice. After this, run your code again and you will be able to hear the speech for the passed text.
It is working for me now.
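If you want to check from code which voices are actually installed, here is a small optional sketch (not part of the steps above):

import AVFoundation

// List the voices currently available on the device; if none are listed for your
// language, download one under Settings -> Accessibility -> Spoken Content -> Voices.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print(voice.language, voice.name, voice.identifier)
}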
In order to exert greater control over speech, in the spirit of this tutorial for an audiobook (although I'm not following it exactly), I have tried sending smaller pieces of a string, such as phrases, as separate chunks. The speech synthesizer enqueues each utterance and speaks them one after the other. In theory, this is supposed to give you greater control to make speech sound less robotic.
I can get the synthesizer to speak the chunks in order; however, there is a long delay between each one, so it sounds much worse than just sending all the text at the same time.
Is there any way to speed up the queue so that the utterances are spoken one after the other with no delay?
Setting the utt.preUtteranceDelay and utt.postUtteranceDelay properties to zero seconds does not seem to have any effect.
Here is my code:
phraseCounter = 0

func readParagraph(test: String) {
    let phrases = test.components(separatedBy: " ")
    for phrase in phrases {
        phraseCounter = phraseCounter + 1
        let utt = AVSpeechUtterance(string: phrase)
        let preUtteranceDelayInSecond = 0
        let postUtteranceDelayInSecond = 0
        utt.preUtteranceDelay = TimeInterval(exactly: preUtteranceDelayInSecond)!
        utt.postUtteranceDelay = TimeInterval(exactly: postUtteranceDelayInSecond)!
        voice.delegate = self
        if phraseCounter == 2 {
            utt.rate = 0.8
        }
        voice.speak(utt)
    }
}
Is there any way to speed up the queue so that the utterances are spoken one after the other with no delay?
As you did, the only way is to set the postUtteranceDelay and preUtteranceDelay properties to 0, which is the default value by the way.
As recommended here, I implemented the code snippet hereafter (Xcode 10, Swift 5.0 and iOS 12.3.1) to check the impact of different UtteranceDelay values ⟹ 0 is the best solution to improve the speed of enqueued utterances.
var synthesizer = AVSpeechSynthesizer()
var playQueue = [AVSpeechUtterance]()

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)

    for i in 1...10 {
        let stringNb = "Sentence number " + String(i) + " of the speech synthesizer."
        let utterance = AVSpeechUtterance(string: stringNb)
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        utterance.pitchMultiplier = 1.0
        utterance.volume = 1.0
        utterance.postUtteranceDelay = 0.0
        utterance.preUtteranceDelay = 0.0
        playQueue.append(utterance)
    }

    synthesizer.delegate = self

    for utterance in playQueue {
        synthesizer.speak(utterance)
    }
}
If the delay is still too long with the '0' value in your code, the incoming string may be the problem (adapt the code snippet above to your needs).
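If you want to measure the gap between utterances yourself, here is a small delegate sketch (the ViewController name is an assumption) that logs when each utterance starts and finishes:

extension ViewController: AVSpeechSynthesizerDelegate {
    // Log start and finish times to measure the gap between consecutive utterances.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
        print("started:", utterance.speechString, Date())
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("finished:", utterance.speechString, Date())
    }
}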
I am setting up a TTS app with AVSpeechSynthesizer. I have to do real-time pitch and rate adjustments, and I am using a UISlider for adjusting the pitch and rate.
Here is my code:
@IBAction func sl(_ sender: UISlider) {
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
        self.rate = sender.value
        if currentRange.length > 0 {
            let valuee = currentRange.length + currentRange.location
            let neww = self.tvEditor.text.dropFirst(valuee)
            self.tvEditor.text = String(neww)
            synthesizer.speak(buildUtterance(for: rate, pitch: pitch, with: String(neww), language: self.preferredVoiceLanguageCode2 ?? "en"))
        }
    } else {
    }
}
I think I understand your problem even though few details are provided: the new values of rate and pitchMultiplier are not taken into account while the speech is running.
To work out the details below, I used this example, which contains code snippets (ObjC, Swift) and illustrations.
1/ Create your AVSpeechUtterance instances with their rate and pitchMultiplier properties.
2/ Add each one of them to an array that will represent the queue to be spoken.
3/ Loop over that queue with the synthesizer to read out every element.
Now, if you want to change the property values in real-time, see the steps hereafter once one of your sliders moves:
Get the current spoken utterance thanks to the AVSpeechSynthesizerDelegate protocol.
Run the stopSpeaking synthesizer method, which removes from its queue the utterances that haven't been spoken yet.
Recreate the removed utterances with the new property values.
Redo steps 2/ and 3/ above to resume where you stopped, with these updated values.
The synthesizer queues everything to be spoken long before you ask for new values, and those new values don't affect the utterances already stored: you must remove the unspoken utterances and recreate them with their new property values.
If the code example provided by the link above isn't enough, I suggest taking a look at this detailed summary of a WWDC video dealing with AVSpeechSynthesizer.
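As a rough illustration of those steps, here is a minimal sketch (the SpeechController class and its pendingTexts bookkeeping are assumptions, not taken from the question):

import AVFoundation

class SpeechController: NSObject, AVSpeechSynthesizerDelegate {
    let synthesizer = AVSpeechSynthesizer()
    var pendingTexts = [String]()   // texts that have not been fully spoken yet
    var rate: Float = AVSpeechUtteranceDefaultSpeechRate
    var pitch: Float = 1.0

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    // Steps 1/ to 3/: build fresh utterances from the pending texts and enqueue them.
    func speakAll() {
        for text in pendingTexts {
            let utterance = AVSpeechUtterance(string: text)
            utterance.rate = rate
            utterance.pitchMultiplier = pitch
            synthesizer.speak(utterance)
        }
    }

    // Keep track of what has already been spoken.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        if let index = pendingTexts.firstIndex(of: utterance.speechString) {
            pendingTexts.remove(at: index)
        }
    }

    // Called from the slider: stop, then recreate the unspoken utterances with the new rate.
    func updateRate(to newRate: Float) {
        rate = newRate
        synthesizer.stopSpeaking(at: .immediate) // removes the unspoken utterances from the queue
        speakAll()                               // fresh instances, new rate
    }
}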
I have been experiencing a very strange crash in an iOS app. The function below is an implementation of a protocol, so I cannot change its declaration to use a success/failure callback. It takes input parameters and is expected to return an AVAsset. My problem is that, while writing the asset, I get a strange crash when leaving the dispatch group (the dg variable). I marked the line of the crash with a comment. This crash does not happen every time, just from time to time. This is the function:
func writeAsset(to url: URL, metadataArray: [AVTimedMetadataGroup]) -> AVAsset {
    let writer = try! AVAssetWriter(url: url, fileType: AVFileTypeQuickTimeMovie)
    writer.movieTimeScale = track.timeScale
    // setup writer, inputs and metadata adaptor and so on ...

    if writer.startWriting() {
        writer.startSession(atSourceTime: kCMTimeZero)
    }

    let writeQueue = DispatchQueue(label: "HH.Write.Track.Queue")
    let dg = DispatchGroup()
    var i = 0
    dg.enter() // Entering the group

    writerMetadataIn.requestMediaDataWhenReady(on: writeQueue) {
        while writerMetadataIn.isReadyForMoreMediaData {
            // let group = ..fetch next group to write
            if i < metadataArray.count {
                let group = metadataArray[i]
                if writerMetadataAdaptor.append(group) {
                }
                i += 1
            } else {
                writerMetadataIn.markAsFinished()
                writer.finishWriting {
                    dg.leave() // CRASH IN THIS LINE
                }
                break
            }
        }
    }

    dg.wait()
    let writtenAsset = AVAsset(url: url)
    return writtenAsset
}
Does somebody have an idea what the cause of this crash is? I only have this information from the crash report in Xcode.
I suspect your issue is that you are entering the dispatch group once and then (sometimes) leaving it more than once inside the callback, so you do not have balanced calls, i.e. you are calling leave more times than you have called enter.
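A minimal sketch of one way to keep the calls balanced, reusing the question's variable names (an assumption on my part, not a verified fix): requestMediaDataWhenReady can invoke its block more than once, so guard against finishing the writer, and leaving the group, twice.

var didFinish = false
writerMetadataIn.requestMediaDataWhenReady(on: writeQueue) {
    while writerMetadataIn.isReadyForMoreMediaData {
        if i < metadataArray.count {
            _ = writerMetadataAdaptor.append(metadataArray[i])
            i += 1
        } else {
            if !didFinish {              // finish and leave the group only once
                didFinish = true
                writerMetadataIn.markAsFinished()
                writer.finishWriting {
                    dg.leave()
                }
            }
            break
        }
    }
}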
I found a solution for the problem. It was not related to DispatchGroup but to AVAssetWriter and the input array of AVTimedMetadataGroup elements. Each of these elements has a time range. If the start times of two of them are identical, the writer goes into an error state while appending these groups, and the behavior becomes very unpredictable. I don't know why the error surfaced on the line that leaves the group, but the solution for me was to detect groups with the same start times and skip them.
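A rough sketch of that deduplication (the variable names are assumptions, not taken from the original code):

// Skip any AVTimedMetadataGroup whose start time repeats an earlier one, since
// appending two groups with identical start times puts the writer into an error state.
var seenStartSeconds = Set<Double>()
let filteredGroups = metadataArray.filter { group in
    let start = CMTimeGetSeconds(group.timeRange.start)
    if seenStartSeconds.contains(start) {
        return false // duplicate start time: drop this group
    }
    seenStartSeconds.insert(start)
    return true
}
// filteredGroups is then fed to the writer input instead of metadataArray.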
So I'm working on setting up a background queue that does all realm writes on its own thread. I've run into some strange issues I can't figure out.
Issue #1
I'm not sure if this is related (see post: Xcode debug issues with realm), but I do have an apparent mismatch in my lldb output as to whether a certain field, the messages element, is set or not.
My DataTypes
OTTOSession
class OTTOSession: Object {
    dynamic var messages: MessageList?
    dynamic var recordingStart: Double = NSDate().timeIntervalSince1970

    func addLocationMessage(msg: dmParsedMessage) -> LocationMessage {
        let dmsg: dmLocationMessage = msg as! dmLocationMessage
        let locMsg = LocationMessage(locMsg: dmsg)
        self.messages!.locationMessages.append(locMsg)
        return locMsg
    }
}
MessageList
public class MessageList: Object {
    dynamic var date: NSDate = NSDate()
    dynamic var test: String = "HI"

    let locationMessages = List<LocationMessage>()
    let ahrsMessages = List<AHRSMessage>()
    // let statusMessages = List<StatusMessageRLM>()
    let logMessages = List<LogMessage>()
}
Realm Interactions
In my code I create my new OTTOSession in a code block on my realmQueue
internal var realmQueue = dispatch_queue_create("DataRecorder.realmQueue",
                                                DISPATCH_QUEUE_SERIAL)
All realm calls are done on this realmQueue thread
dispatch_async(realmQueue) {
    self.session = OTTOSession()
}
I've also tried different variants such as:
dispatch_async(realmQueue) {
    self.session = OTTOSession()

    // Directly making a message list
    self.session!.messages = MessageList()

    // Making a separate message list var
    self.messages = MessageList()
    self.session!.messages = self.messages
}
The reason I've played around with the MessageList is that I can't tell from the debugger whether the .messages variable is set or not.
Recording
Once I signal to my processes that I want to start recording, I then actually make the write calls into Realm (which I'm not 100% sure I'm doing correctly):
dispatch_async(realmQueue) {
    // Update some of the data
    self.session!.recordingStart = NSDate().timeIntervalSince1970

    // Then start writing the objects
    try! Realm().write {
        // I've tried different variants of:
        let session = self.session!
        try! Realm().add(self.session!)
        // Or
        try! Realm().add(self.session!)
        // or
        let session = self.session!
        session.messages = MessageList()
        session.messages!.ahrsMessages
        try! Realm().add(self.session!)
        try! self.session!.messages = Realm().create(MessageList)
        try! Realm().add(self.session!.messages!)

        print("Done")
    }
}
Basically I've tried various combinations of trying to get the objects into realm.
Question: When adding an object with a one-to-one relationship, do I have to add both objects to Realm, or will just adding the parent object cause the related object to also be added to Realm?
Adding Data
Where things start to go awry is when I start adding data to my objects.
Inside my OTTOSession Object I have the following function:
func addLocationMessage(msg: dmParsedMessage) -> LocationMessage {
    let dmsg: dmLocationMessage = msg as! dmLocationMessage
    let locMsg = LocationMessage(locMsg: dmsg)
    // THIS LINE IS CAUSING A 'REALM ACCESSED FROM INCORRECT THREAD' ERROR
    self.messages!.locationMessages.append(locMsg)
    return locMsg
}
I'm getting my access error on this line:
self.messages!.locationMessages.append(locMsg)
Now the function call itself is wrapped in the following block:
dispatch_async(realmQueue) {
    try! Realm().write {
        self.session?.addLocationMessage(msg)
    }
}
So, as far as I can tell by looking at the debugger and at the code, everything should be running on the right thread.
My queue is SERIAL, so things should be happening one after another. The only thing I can't figure out is that when I break at this point the debugger does show that messages is nil, but I can't trust that because of the debugger mismatch described in Issue #1 above.
Question
So my question is twofold:
1) Is my code for adding an object into the Realm DB correct? I.e., do I need to make two separate Realm().add calls, one for the OTTOSession and one for the MessageList, or can I get away with a single call?
2) Is there anything that pops out to explain why I'm getting a thread violation here? Shouldn't doing all my Realm write calls on a single thread be enough?
1) No, you don't need to make two separate calls to Realm.add(). When you add an object to a Realm, all related objects are persisted as well.
2) Your thread violation very likely originates from the fact that dispatch queues make no guarantee about the thread on which they are executed (besides the main queue). That means your Realm queue can run on different threads. You will need to make sure you retrieve your session object from a Realm opened on the current thread. You might want to use primary keys for that purpose and share those between queues / threads, as in the sketch below.
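A rough sketch of that primary-key approach, written against the older RealmSwift / GCD APIs used in the question (the id property and the sessionID variable are assumptions, not part of the original code):

class OTTOSession: Object {
    dynamic var id = NSUUID().UUIDString   // assumed primary key; existing properties omitted
    override static func primaryKey() -> String? {
        return "id"
    }
}

// On the realm queue, re-fetch the session by its key instead of capturing the object:
dispatch_async(realmQueue) {
    let realm = try! Realm()
    if let session = realm.objectForPrimaryKey(OTTOSession.self, key: self.sessionID) {
        try! realm.write {
            session.addLocationMessage(msg)
        }
    }
}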