In my Swift code I want to set the voice of an AVSpeechSynthesisVoice to the original Siri voice, not one of the additional voices you can choose. I can only use a name to identify the voice, but can I apply the original Siri voice in my preferred language?
let u = AVSpeechUtterance(string: "Hello, I'm Siri!")
u.voice = AVSpeechSynthesisVoice(identifier: "TheSiriVoice") // placeholder; I don't know the real identifier
let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(u)
The enhanced Siri voice released in iOS 11 is not available to AVSpeechSynthesizer.
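You can, however, enumerate the voices that AVSpeechSynthesizer does expose and confirm for yourself that no Siri voice is among them. A minimal sketch using the standard AVFoundation API:

```swift
import AVFoundation

// List every voice available to AVSpeechSynthesizer; the Siri voices
// will not appear in this list.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print(voice.name, voice.identifier, voice.language)
}

// Fall back to requesting a voice by language instead.
let utterance = AVSpeechUtterance(string: "Hello!")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")

let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance)
```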
I'm working on a translation app on watchOS. I'd like to know how to set Siri to "listen" for a certain language and change it along with the user's choice.
For example: the user picks the Italian flag? Siri sets itself to receive Italian dictation and transcribe it. The user picks the English flag? Siri switches to English mode and transcribes that, and so on.
Hope you can help; I'd normally use the Speech framework, but we don't have it on watchOS.
It's possible to change the language of an SFSpeechRecognizer.
let locale = Locale(identifier: "nl_NL")
let recognizer = SFSpeechRecognizer(locale: locale)
This recognizer will now listen for and transcribe Dutch.
Here is more information on live speech recognition which explains how to use SFSpeechRecognizer.
See this document for more information on identifiers.
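A minimal sketch of wiring a user's flag choice to a recognizer locale; the Flag enum and the specific locale identifiers are illustrative, not part of any API:

```swift
import Speech

// Hypothetical flag choices shown in the UI.
enum Flag {
    case italian, english
}

// Map each flag to a locale identifier and build a matching recognizer.
// SFSpeechRecognizer(locale:) returns nil if the locale is unsupported.
func makeRecognizer(for flag: Flag) -> SFSpeechRecognizer? {
    let identifier: String
    switch flag {
    case .italian: identifier = "it-IT"
    case .english: identifier = "en-US"
    }
    return SFSpeechRecognizer(locale: Locale(identifier: identifier))
}

// Rebuild the recognizer whenever the user picks a new flag.
let recognizer = makeRecognizer(for: .italian) // transcribes Italian dictation
```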
I'm making an app that will use speech recognition, and I want to know how frequently, or when, my app will encounter this scenario.
I know this relates to the device restricting speech recognition rather than the user, but when exactly?
Is it that some specific models don't support speech recognition, or is it iOS-version specific?
Or are there settings that can restrict apps from using speech recognition?
Though the analogy is no longer exact, think of a restriction as a parental control: it blocks the user from even having the option to enable a service governed by the device's privacy settings.
https://support.apple.com/en-ca/HT201304
This falls under "Here are the things you can restrict:"
Speech Recognition: Prevent apps from accessing Speech Recognition or
Dictation
How often will you encounter it? Who knows; but if your app targets minors, the chance is likely higher. That is purely speculative, though.
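The restricted case surfaces when your app asks for permission: SFSpeechRecognizer.requestAuthorization reports a distinct .restricted status when the device, rather than the user, blocks the feature. A short sketch using the standard Speech framework API:

```swift
import Speech

// Request speech-recognition permission; .restricted means the device
// (e.g. via parental controls) blocks the feature regardless of the user.
SFSpeechRecognizer.requestAuthorization { status in
    switch status {
    case .authorized:
        print("OK to recognize speech")
    case .restricted:
        print("Speech recognition is restricted on this device")
    case .denied:
        print("User declined speech recognition")
    case .notDetermined:
        print("Permission not requested yet")
    @unknown default:
        break
    }
}
```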
To answer your other question:
...is it due to some specific models not supporting speech
recognition...
There is a different way to test for speech support on a device:
https://developer.apple.com/documentation/speech/sfspeechrecognizer/1649885-isavailable
Using isAvailable (for Swift) or available (Obj-C), you can tell if the speech recognizer is available.
Since you marked your question as Objective-C, then the following would work:
SFSpeechRecognizer *recognizer = [[SFSpeechRecognizer alloc] init];
if (recognizer.available) {
// Do recognizer things
}
The same in Swift (note that the initializer returns an optional):
if let recognizer = SFSpeechRecognizer(), recognizer.isAvailable {
    // Do recognizer things
}
I want to know if there's a way to use iOS speech recognition in offline mode. I didn't see anything about it in the documentation (https://developer.apple.com/reference/speech).
I am afraid that there is no way to do it (however, please make sure to check the update at the end of the answer).
As mentioned in the Speech framework official documentation:
Best Practices for a Great User Experience:
Be prepared to handle the failures that can be caused by reaching speech recognition limits.
Because speech recognition is a network-based service, limits are
enforced so that the service can remain freely available to all apps.
From an end user's perspective, trying to get Siri's help without a network connection results in an error screen.
Also, when trying to send a message, for example, you'll notice that the mic button is disabled if the device is not connected to a network.
Natively, iOS itself won't enable this feature without a network connection, and I assume the same applies to third-party developers using the Speech framework.
UPDATE:
After watching the Speech Recognition API session (especially the part from 03:00 to 03:25), I came up with the following:
The Speech Recognition API usually requires an internet connection, but some newer devices do support it at all times; you might also want to check whether a given language is available.
Adapted from the SFSpeechRecognizer documentation:
Note that a supported speech recognizer is not the same as an
available speech recognizer; for example, the recognizers for some
locales may require an Internet connection. You can use the
supportedLocales() method to get a list of supported locales and the
isAvailable property to find out if the recognizer for a specific
locale is available.
Further Reading:
These topics might be related:
Which iOS devices support offline speech recognition?
How to Enable Offline Dictation on Your iPhone?
Will Siri ever work offline?
Offline transcription is available starting in iOS 13. You enable it with requiresOnDeviceRecognition.
Example code (Swift 5):
// Create and configure the speech recognition request.
recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
guard let recognitionRequest = recognitionRequest else {
    fatalError("Unable to create a SFSpeechAudioBufferRecognitionRequest object")
}
recognitionRequest.shouldReportPartialResults = true

// Keep speech recognition data on device.
if #available(iOS 13, *) {
    recognitionRequest.requiresOnDeviceRecognition = true
}
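Before forcing on-device recognition, it may be worth checking whether the recognizer for your locale supports it at all; supportsOnDeviceRecognition is part of the same iOS 13 API. A short sketch (the locale identifier is an example):

```swift
import Speech

if #available(iOS 13, *),
   let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
   recognizer.supportsOnDeviceRecognition {
    // Safe to set requiresOnDeviceRecognition = true on the request.
    print("On-device recognition is supported for this locale")
}
```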
I'm using the voice synthesizer on iOS and can see how to specify a particular voice or language for it to use, but I can't see a way to find out which voice the user has selected for Siri and use that, which would be nice!
I would now like to add Arabic speech recognition and TTS to my sample app. iSpeech seems to support Arabic already, and I have implemented Arabic TTS, but I don't know how to implement Arabic speech recognition.
I guess I need to know which Arabic locale iSpeech supports, but I'm not sure.
Is there a way to do this?