I have a list of strings that I need to display on the screen and simultaneously convert to speech using speechSynthesizer.speak.
The issue I am facing is that I am not able to figure out how to wait for one utterance to finish before moving on to the next one. What happens is that the text is displayed one string after the other, without staying in sync with the audio.
func speechAndText(text: String) {
    let speechUtterance = AVSpeechUtterance(string: text)
    speechUtterance.rate = AVSpeechUtteranceMaximumSpeechRate / 2.0
    speechUtterance.pitchMultiplier = 1.2
    speechUtterance.volume = 1
    speechUtterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
    DispatchQueue.main.async {
        self.textView.text = text
    }
    self.speechSynthesizer.speak(speechUtterance)
}
func speakPara(_ phrases: [String]) {
    if let phrase = phrases.first {
        let group = DispatchGroup()
        group.enter()
        DispatchQueue.global(qos: .default).async {
            self.speechAndText(text: phrase)
            group.leave()
        }
        group.wait()
        let rest = Array(phrases.dropFirst())
        if !rest.isEmpty {
            self.speakPara(rest)
        }
    }
}
The speakPara function takes the list of strings to be uttered and sends one string at a time to speechAndText, which converts it to speech and writes it on the display.
Any help would be appreciated. Thank you.
No need for GCD, you can use the delegate methods of AVSpeechSynthesizer.
Set self as delegate:
self.speechSynthesizer.delegate = self
and implement didFinish by speaking the next text.
extension YourClass: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        currentIndex += 1
        if currentIndex < phrases.count {
            self.speechAndText(text: phrases[currentIndex])
        }
    }
}
currentIndex and phrases are two instance properties. phrases stores the array of strings that you want to speak, and currentIndex stores the index of the text that is currently being spoken. speakPara can be implemented like so:
func speakPara(_ phrases: [String]) {
    // reset both helper properties
    self.phrases = phrases
    self.currentIndex = 0
    // stop speaking whatever it was speaking and start speaking the first item (if any)
    self.speechSynthesizer.stopSpeaking(at: .immediate)
    phrases.first.map { self.speechAndText(text: $0) }
}
This is stupid simple but I cannot get it to work.
I want to stop recording before the phone speaks something. No data is being passed.
let words = "Hello world"
let utt = AVSpeechUtterance(string:words)
stopRecordingWithCompletion() {
voice.speak(utt)
}
func stopRecordinWithCompletion(closure: () -> Void) {
recognitionRequest?.endAudio()
recognitionRequest = nil
recognitionTask?.cancel()
recognitionTask = nil
let inputNode = audioEngine.inputNode
let bus = 0
inputNode?.removeTap(onBus: bus)
self.audioEngine.stop()
closure()
}
What am I doing wrong?
Your current approach is not really ideal for this.
To begin with, AVSpeechSynthesizer provides a delegate you can monitor for changes, including when it is about to speak.
speechSynthesizer(_:willSpeakRangeOfSpeechString:utterance:)
Just observe this, and call your stop function. No closure is required since it is a synchronous function call.
In summary:
Conform to AVSpeechSynthesizerDelegate
Implement speechSynthesizer(_:willSpeakRangeOfSpeechString:utterance:)
When the function above is called, have it call your stopRecording() function
An example of the delegate setup:
extension YourClassHere: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        stopRecording()
    }
}
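One thing the summary above implies but the snippet doesn't show: the synthesizer has to be given the delegate. A minimal sketch, assuming voice is the AVSpeechSynthesizer instance from your question:

voice.delegate = self

Without that assignment, willSpeakRangeOfSpeechString is never called.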
I am trying to skip forward 10 seconds or back 10 seconds in the Spotify player, but I am really confused about how to add to or subtract from the playback position.
When I try to use this code, the song's position does not change.
// forward button action
@IBAction func moveFrdBtnAction(_ sender: Any) {
    SpotifyManager.shared.audioStreaming(SpotifyManager.shared.player, didSeekToPosition: TimeInterval(10))
}

// spotify delegate method seekToPosition
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didSeekToPosition position: TimeInterval) {
    player?.seek(to: position, callback: { (error) in
        let songDuration = audioStreaming.metadata.currentTrack?.duration as Any as! Double
        self.delegate?.getSongTime(timeCount: Int(songDuration) + 1)
    })
}
We are making a music application using the same SDK on both platforms (Android & iOS). The seekToPosition method of the Spotify SDK works correctly in the Android version, but not in the iOS one: the delegate method gets called, but the music stops.
Can you kindly let us know why this is happening, and what we should do to make it work on iOS devices as well.
Can someone please explain how to solve this? I've tried, but no results yet.
Any help would be greatly appreciated.
Thanks in advance.
I don't use this API, so my answer is based on your code and Spotify's reference documentation.
I think there are a few things wrong with your flow:
As Robert Dresler commented, you should (approximately) never call a delegate directly; a delegate calls you.
I'm pretty sure your action currently results in jumping to exactly 10 seconds, not by 10 seconds.
(As an aside, I'd suggest changing the name of your function moveFrdBtnAction to at least add more vowels)
Anyway, here's my best guess at what you want:
// forward button action
@IBAction func moveForwardButtonAction(_ sender: Any) {
    skipAudio(by: 10)
}

@IBAction func moveBackButtonAction(_ sender: Any) {
    skipAudio(by: -10)
}

func skipAudio(by interval: TimeInterval) {
    if let player = player {
        let position = player.playbackState.position // The documentation alludes to milliseconds but examples don't.
        player.seek(to: position + interval, callback: { (error) in
            // Handle the error (if any)
        })
    }
}

// spotify delegate method seekToPosition
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didSeekToPosition position: TimeInterval) {
    // Update your UI
}
Note that I have not handled seeking before the start of the track, nor after the end which could happen with a simple position + interval. The API may handle this for you, or not.
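If it turns out the API does not clamp for you, here is a minimal sketch of guarding the target position yourself; it assumes the track duration from the metadata (as used in your question's code) is in the same unit as the playback position, which you should verify:

func skipAudio(by interval: TimeInterval) {
    guard let player = player else { return }
    let duration = player.metadata.currentTrack?.duration ?? .infinity
    // clamp the target between the start and the end of the track
    let target = max(0, min(player.playbackState.position + interval, duration))
    player.seek(to: target, callback: { (error) in
        // Handle the error (if any)
    })
}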
You could take a look at the examples here: spotify/ios-sdk. In the NowPlayingView example they use seekForward15Seconds; maybe you could use that? If you still need 10 seconds, I have added a function below. The position is in milliseconds.
"position: The position to seek to in milliseconds"
docs
ViewController.swift
var appRemote: SPTAppRemote {
    get {
        return AppDelegate.sharedInstance.appRemote
    }
}

fileprivate func seekForward15Seconds() {
    appRemote.playerAPI?.seekForward15Seconds(defaultCallback)
}
fileprivate func seekBackward15Seconds() {
    appRemote.playerAPI?.seekBackward15Seconds(defaultCallback)
}

// TODO: Or you could try this function
func seekForward(seconds: Int) {
    appRemote.playerAPI?.getPlayerState({ (result, error) in
        guard let playerState = result as? SPTAppRemotePlayerState else { return }
        // playback position in milliseconds
        let currentPosition = playerState.playbackPosition
        let secondsInMilliseconds = seconds * 1000
        self.appRemote.playerAPI?.seek(toPosition: currentPosition + secondsInMilliseconds, callback: { (result, error) in
            guard error == nil else {
                print(error!)
                return
            }
        })
    })
}
var defaultCallback: SPTAppRemoteCallback {
    get {
        return { [weak self] _, error in
            if let error = error {
                self?.displayError(error as NSError)
            }
        }
    }
}
AppDelegate.swift
lazy var appRemote: SPTAppRemote = {
    let configuration = SPTConfiguration(clientID: self.clientIdentifier, redirectURL: self.redirectUri)
    let appRemote = SPTAppRemote(configuration: configuration, logLevel: .debug)
    appRemote.connectionParameters.accessToken = self.accessToken
    appRemote.delegate = self
    return appRemote
}()

class var sharedInstance: AppDelegate {
    get {
        return UIApplication.shared.delegate as! AppDelegate
    }
}
Edit 1:
For this to work you need to follow the "Prepare Your Environment" steps:
Add the SpotifyiOS.framework to your Xcode project
Hope it helps!
I want to pause a speech utterance, but it has to complete the sentence it is currently speaking before pausing. The API provides only two pause types, immediate and word, not current sentence. I tried this,
myutterance = AVSpeechUtterance(string: readTextView.text)
synth.speak(myutterance)
synth.pauseSpeaking(at: AVSpeechBoundary.immediate)
But it only pauses after the whole text has been read.
I just tried what you did, and it read the whole text without pausing:
let someText = "Some Sample text. That will read and pause after every sentence"
let speechSynthesizer = AVSpeechSynthesizer()
let myutterance = AVSpeechUtterance(string:someText)
speechSynthesizer .speak(myutterance)
//Normal reading without pause
speechSynthesizer .pauseSpeaking(at: AVSpeechBoundary.immediate)
To pause after every sentence, you can break the whole text into its component sentences and read them individually in a loop, using the postUtteranceDelay property:
//To pause after every sentence
let components = someText.components(separatedBy: ".")
for str in components {
    let myutterance = AVSpeechUtterance(string: str)
    myutterance.postUtteranceDelay = 1 //Set it to whatever value you would want.
    speechSynthesizer.speak(myutterance)
}
To pause only after completing the sentence that is currently being spoken, we need to do a tiny hack:
var isPaused: Bool = false
let someText = "Some Sample text. That will read and pause after every sentence. The next sentence shouldn't be heard if you had pressed the pause button. Let us try this."
let speechSynthesizer = AVSpeechSynthesizer()
var currentUtterance: AVSpeechUtterance?

override func viewDidLoad() {
    super.viewDidLoad()
    speechSynthesizer.delegate = self
}

@IBAction func startSpeaking(_ sender: Any) {
    //Pause after every sentence
    let components = someText.components(separatedBy: ".")
    for str in components {
        let myutterance = AVSpeechUtterance(string: str)
        myutterance.postUtteranceDelay = 1
        speechSynthesizer.speak(myutterance)
    }
}

@IBAction func pauseSpeaking(_ sender: Any) {
    isPaused = !isPaused
}

func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, willSpeakRangeOfSpeechString characterRange: NSRange, utterance: AVSpeechUtterance) {
    print("location \(characterRange.location)")
    print("Range + Length \(characterRange.location + characterRange.length)")
    print("currentUtterance!.speechString \(currentUtterance!.speechString)")
    print("currentUtterance!.count \(currentUtterance!.speechString.count)")
    if isPaused && (characterRange.location + characterRange.length) == currentUtterance!.speechString.count {
        speechSynthesizer.stopSpeaking(at: .word)
    }
}

func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
    currentUtterance = utterance
}
I'm trying to set my AVAudioSession to inactive to get back to the normal state.
My utterance function:
class SSpeech: NSObject, AVSpeechSynthesizerDelegate {
    var group = DispatchGroup()
    var queue = DispatchQueue(label: "co.xxxx.speech", attributes: [])

    class var sharedInstance: SSpeech {
        struct Static {
            static var instance: SSpeech?
        }
        if Static.instance == nil {
            Static.instance = SSpeech()
        }
        return Static.instance!
    }

    required override init() {
        super.init()
        self.speechsynt.delegate = self
    }

    deinit {
        print("deinit SSpeech")
    }

    let audioSession = AVAudioSession.sharedInstance()
    var speechsynt: AVSpeechSynthesizer = AVSpeechSynthesizer()
    var queueTalks = SQueue<String>()

    func pause() {
        speechsynt.pauseSpeaking(at: .word)
    }

    func talk(_ sentence: String, languageCode code: String = SUtils.selectedLanguage.code, withEndPausing: Bool = false) {
        if SUser.sharedInstance.currentUser.value!.speechOn != 1 {
            return
        }
        queue.async {
            self.queueTalks.enQueue(sentence)
            do {
                let category = AVAudioSessionCategoryPlayback
                var categoryOptions = AVAudioSessionCategoryOptions.duckOthers
                if #available(iOS 9.0, *) {
                    categoryOptions.formUnion(AVAudioSessionCategoryOptions.interruptSpokenAudioAndMixWithOthers)
                }
                try self.audioSession.setCategory(category, with: categoryOptions)
                try self.audioSession.setActive(true)
            } catch {
                return
            }
            self.utteranceTalk(sentence, initSentence: false, speechsynt: self.speechsynt, languageCode: code, withEndPausing: withEndPausing)
            do {
                try self.audioSession.setCategory(AVAudioSessionCategoryPlayback, with: AVAudioSessionCategoryOptions.mixWithOthers)
            } catch {
                return
            }
        }
    }

    func utteranceTalk(_ sentence: String, initSentence: Bool, speechsynt: AVSpeechSynthesizer, languageCode: String = "en-US", withEndPausing: Bool = false) {
        if SUser.sharedInstance.currentUser.value!.speechOn != 1 {
            return
        }
        let nextSpeech: AVSpeechUtterance = AVSpeechUtterance(string: sentence)
        nextSpeech.voice = AVSpeechSynthesisVoice(language: languageCode)
        if !initSentence {
            nextSpeech.rate = 0.4
        }
        if withEndPausing {
            nextSpeech.postUtteranceDelay = 0.2
        }
        speechsynt.speak(nextSpeech)
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("Speaker has finished talking")
        queue.async {
            do {
                try self.audioSession.setActive(false, with: AVAudioSessionSetActiveOptions.notifyOthersOnDeactivation)
            } catch {}
        }
    }
}
My delegate method is correctly called, but my audioSession is still active when the utterance has finished. I've tried lots of things, but nothing works :(.
I would suggest using an AVAudioPlayer. It has very easy start and stop commands.
First declare the audio player as a variable:
var soundEffect: AVAudioPlayer!
then select the file you need
let path = Bundle.main.path(forResource: "Untitled2.wav", ofType: nil)!
let url = URL(fileURLWithPath: path)
do {
    let sound = try AVAudioPlayer(contentsOf: url)
    soundEffect = sound
    sound.numberOfLoops = -1
    sound.play()
} catch {
    print("Could not load the sound file: \(error)")
}
and to stop the audio player
if soundEffect != nil {
    soundEffect.stop()
    soundEffect = nil
}
You cannot stop or deactivate an audio session; your app gets it upon launching. Documentation:
An audio session is the intermediary between your app and iOS used to configure your app’s audio behavior. Upon launch, your app automatically gets a singleton audio session.
So the -setActive: method does not make your audio session "active"; it just puts its category and mode configuration into action. For getting back to the "normal state", you could set default settings or just call setActive(false, with: .notifyOthersOnDeactivation); that will be enough.
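A minimal sketch of that deactivation call, using the same pre-iOS-12 API as the code in the question:

do {
    try AVAudioSession.sharedInstance().setActive(false, with: .notifyOthersOnDeactivation)
} catch {
    print("Deactivating the session failed: \(error)")
}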
An excerpt from the documentation of AVAudioSession:

Discussion
If another active audio session has higher priority than yours (for example, a phone call), and neither audio session allows mixing, attempting to activate your audio session fails. Deactivating your session will fail if any associated audio objects (such as queues, converters, players, or recorders) are currently running.
My guess is that the failure to deactivate the session comes from the still-running process(es) on your queue, as highlighted in the quote above.
You should probably make the deactivation synchronous instead of asynchronous, OR make sure that all the actions running on your queue have been processed.
Give this a try:
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
    print("Speaker has finished talking")
    queue.sync { // <---- `async` changed to `sync`
        do {
            try self.audioSession.setActive(false, with: AVAudioSessionSetActiveOptions.notifyOthersOnDeactivation)
        } catch {}
    }
}
I'm trying to implement live search against my PHP API with Swift. This is what I have so far:
var filteredData = [Products]()

func getSearch(completed: @escaping DownloadComplete, searchString: String) {
    let parameters: Parameters = [
        "action": "search",
        "subaction": "get",
        "product_name": searchString,
        "limit": "0,30"
    ]
    Alamofire.request(baseurl, method: .get, parameters: parameters).responseJSON { (responseData) -> Void in
        if (responseData.result.value) != nil {
            let result = responseData.result
            if let dict = result.value as? Dictionary<String, AnyObject> {
                if let list = dict["products_in_category"] as? [Dictionary<String, AnyObject>] {
                    if self.filteredData.isEmpty == false {
                        self.filteredData.removeAll()
                    }
                    for obj in list {
                        let manPerfumes = Products(productDict: obj)
                        self.filteredData.append(manPerfumes)
                    }
                }
            }
            completed()
        }
    }
}

extension SearchViewController: UISearchResultsUpdating {
    func updateSearchResults(for searchController: UISearchController) {
        if (searchController.searchBar.text?.characters.count)! >= 3 {
            self.getSearch(completed: {
                self.searchResultTable.reloadData()
                self.searchResultTable.setContentOffset(CGPoint.zero, animated: true)
            }, searchString: searchController.searchBar.text!)
        } else {
            self.searchResultTable.reloadData()
        }
    }
}
And the table view is being updated with the filteredData.
How can I throttle the search? Let's say the user types
"example" -> shows the results for example.
Then he erases the "le" ->
"examp" -> if the previous request has not completed, cancel it -> make a request for "examp" and show that data in the table view!
P.S. From another answer I found:
func searchBar(searchBar: UISearchBar, textDidChange searchText: String) {
    // to limit network activity, reload half a second after last key press.
    NSObject.cancelPreviousPerformRequests(withTarget: self, selector: #selector(self.reload), object: nil)
    self.perform(#selector(self.reload), with: nil, afterDelay: 0.5)
}

func reload() {
    print("Doing things")
}
Although if I try to replace self.reload with my function, I get an error:
Cannot convert value of type '()' to expected argument type 'Selector'
Use DispatchWorkItem with Swift 4!
// Add a searchTask property to your controller
var searchTask: DispatchWorkItem?

// then in your search bar update method:

// Cancel previous task if any
self.searchTask?.cancel()

// Replace previous task with a new one
let task = DispatchWorkItem { [weak self] in
    self?.sendSearchRequest()
}
self.searchTask = task

// Execute task in 0.75 seconds (if not cancelled!)
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 0.75, execute: task)
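To also cover the "cancel the previous request if it is not completed" part of your question, here is a sketch of wiring this into your updateSearchResults while keeping a handle on the in-flight Alamofire request; the searchRequest property is an assumption for illustration, not something from your code:

var searchTask: DispatchWorkItem?
var searchRequest: DataRequest? // hypothetical: holds the in-flight request

func updateSearchResults(for searchController: UISearchController) {
    guard let text = searchController.searchBar.text, text.count >= 3 else { return }
    // Cancel the pending debounce task, if any
    searchTask?.cancel()
    let task = DispatchWorkItem { [weak self] in
        // Drop the previous network request if it is still running
        self?.searchRequest?.cancel()
        self?.searchRequest = Alamofire.request(baseurl, method: .get,
                                                parameters: ["product_name": text])
            .responseJSON { _ in
                // parse the response and reload the table here
            }
    }
    searchTask = task
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.75, execute: task)
}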
Hope it helps!
You got that error because you probably forgot the #selector() part.
Here's how it should look:
func searchBar() {
    NSObject.cancelPreviousPerformRequests(withTarget: self,
                                           selector: #selector(self.getSearch(completed:searchString:)),
                                           object: nil)
    perform(#selector(self.getSearch(completed:searchString:)),
            with: nil, afterDelay: 0.5)
}
You get the error because you didn't enclose your function in #selector
Now, as for the arguments, here's how the call could look:
perform(#selector(getSearch(completed:searchString:)), with: <some completion>, with: "search string")
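One caveat: for perform(_:with:afterDelay:) the selector's method has to be exposed to Objective-C, and a Swift closure parameter does not travel through a selector well, so in practice a small zero-argument @objc wrapper is easier. A sketch, where reloadSearch is a hypothetical helper and the search text is read from a stored search controller rather than passed as an argument:

@objc func reloadSearch() {
    getSearch(completed: {
        self.searchResultTable.reloadData()
    }, searchString: searchController.searchBar.text ?? "")
}

// then debounce with:
NSObject.cancelPreviousPerformRequests(withTarget: self, selector: #selector(reloadSearch), object: nil)
perform(#selector(reloadSearch), with: nil, afterDelay: 0.5)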
Disclaimer: I am the author of Throttler.
Throttler could be the right tool to get it done.
You can debounce and throttle without going reactive, using Throttler, like this:
import Throttler

// advanced debounce: runs the first task immediately before initiating the debounce.
for i in 1...1000 {
    Throttler.debounce {
        print("debounce! > \(i)")
    }
}
// debounce! > 1
// debounce! > 1000

// equivalent to the debounce of Combine or RxSwift.
for i in 1...1000 {
    Throttler.debounce(shouldRunImmediately: false) {
        print("debounce! > \(i)")
    }
}
// debounce! > 1000
Throttler can also do the advanced debounce shown first, running an initial event immediately before starting the debounce, which Combine and RxSwift don't provide by default. You could replicate that yourself, but you may need a fairly complex implementation.
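For comparison, a minimal sketch of the plain trailing-only debounce in Combine (the searchSubject name and the 0.75 s interval are just assumptions for illustration):

import Combine

let searchSubject = PassthroughSubject<String, Never>()
var cancellables = Set<AnyCancellable>()

searchSubject
    .debounce(for: .seconds(0.75), scheduler: RunLoop.main)
    .removeDuplicates()
    .sink { query in
        print("search for \(query)") // fire the network request here
    }
    .store(in: &cancellables)

// feed it from searchBar(_:textDidChange:):
// searchSubject.send(searchText)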