I'm working on a translation app on watchOS. I'd like to know how to set Siri to "listen" to a certain language and change it along with the user's choice.
For example: the user picks the Italian flag? Siri sets itself to receive Italian dictation and transcribe it. The user picks the English flag? Siri switches to English mode and transcribes it, and so on.
Hope you can help. I'd normally use the Speech framework, but we don't have it on watchOS.
It's possible to change the language of an SFSpeechRecognizer.
let locale = Locale(identifier: "nl_NL")
let recognizer = SFSpeechRecognizer(locale: locale) // returns nil if the locale is unsupported
This will now listen for and transcribe Dutch words.
Here is more information on live speech recognition which explains how to use SFSpeechRecognizer.
See this document for more information on identifiers.
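To tie this back to the flag-picker idea, here is a minimal sketch of switching the recognizer's locale based on the user's choice. The `FlagChoice` enum and `makeRecognizer` function are hypothetical names for illustration; the initializer returning `nil` for unsupported locales is real Speech framework behavior.

```swift
import Speech

// Hypothetical mapping from the user's flag choice to a locale identifier.
enum FlagChoice: String {
    case italian = "it-IT"
    case english = "en-US"
    case dutch   = "nl-NL"
}

// Create a fresh recognizer whenever the user taps a different flag.
// SFSpeechRecognizer(locale:) returns nil if the locale is not supported,
// and isAvailable reports whether recognition can run right now.
func makeRecognizer(for choice: FlagChoice) -> SFSpeechRecognizer? {
    let locale = Locale(identifier: choice.rawValue)
    guard let recognizer = SFSpeechRecognizer(locale: locale),
          recognizer.isAvailable else {
        return nil
    }
    return recognizer
}
```

Creating a new recognizer per language switch keeps things simple; you would then start a new recognition task with it as usual.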
Related
In my Swift code I want to set the voice of the "AVSynthesisVoice" to the original Siri voice, not the additional voices you can choose. I can only use their name to identify the voice to use, but can I apply the original Siri voice in my preferred language?
let utterance = AVSpeechUtterance(string: "Hello, I'm Siri!")
utterance.voice = AVSpeechSynthesisVoice(identifier: TheSiriVoice) // placeholder identifier
let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance)
The enhanced Siri voice released in iOS 11 is not available to AVSpeechSynthesizer.
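While the enhanced Siri voice itself isn't exposed, you can enumerate the voices that are actually installed and fall back to the default voice for the user's preferred language. A sketch of that, using only documented `AVFoundation` API:

```swift
import AVFoundation

// List every voice the synthesizer can actually use on this device.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print(voice.identifier, voice.language, voice.name)
}

// Fall back to the default voice for the user's current language.
let utterance = AVSpeechUtterance(string: "Hello!")
utterance.voice = AVSpeechSynthesisVoice(
    language: AVSpeechSynthesisVoice.currentLanguageCode()
)

// Keep a strong reference to the synthesizer in real code,
// or speech may stop when it is deallocated.
let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance)
```

If a voice's name or identifier printed by the loop looks right, you can pass it to `AVSpeechSynthesisVoice(identifier:)` instead.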
I have a question regarding the iOS speech recognition API. Is there a way to start listening on a wake phrase, like "Hey Siri", "OK Google", or "Hey Alexa", with the API?
My app is a hands-free app, and I need the text of what the user said after they say a certain keyword, like "Hey Assistant".
You could listen continuously and then just check for the phrase you're looking for.
This question has some examples of continuous recognition:
Continuous speech recogn. with SFSpeechRecognizer (ios10-beta)
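Building on that idea, here is a minimal sketch of scanning each partial transcription for the wake phrase. The `handle(result:)` function is a hypothetical name for the closure you'd pass to `recognitionTask(with:resultHandler:)`; the transcription properties are real Speech framework API.

```swift
import Speech

let keyword = "hey assistant"

// Hypothetical result handler: inspect each partial transcription and
// treat everything after the wake phrase as the command text.
func handle(result: SFSpeechRecognitionResult) {
    let spoken = result.bestTranscription.formattedString.lowercased()
    if let range = spoken.range(of: keyword) {
        let command = spoken[range.upperBound...]
            .trimmingCharacters(in: .whitespaces)
        print("Command:", command)
    }
}
```

Note that Apple limits the duration of recognition tasks, so a continuous listener has to periodically restart the task, as the linked question discusses.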
I'm using the speech synthesizer in iOS and can see how to specify a specific voice or language for it to use, but I can't see a way to find out which voice the user has selected for Siri and use that, which would be nice!
Currently, I am working on developing the iOS App that triggers an event upon voice command.
I saw a camera app where the user says "start recording" and the camera switches to recording mode.
This is an in-app voice control capability, so I am thinking it is different from SiriKit or SpeechRecognizer, which I have already implemented.
How would I achieve it?
My question is NOT about voice dictation, where a user has to press a button to start dictating.
The app needs to passively wait for a keyword, or intent, such as "myApp, start recording" or "myApp, stop recording", and then start or stop that function accordingly.
Thanks.
OpenEars : Free speech recognition and speech synthesis for the iPhone.
OpenEars makes it simple to quickly add offline speech recognition in many languages and synthesized speech (TTS) to your iPhone app, so anyone can get good results from advanced speech interface concepts.
Check out this link.
http://www.politepix.com/openears/
or
Building an iOS App like Siri
https://www.raywenderlich.com/60870/building-ios-app-like-siri
Thank you.
How would I achieve it?
There's a new feature in iOS 13 called Voice Control that will let you reach your goal.
You can find useful information in the Customize Commands section, where all the vocal commands are available (you can create custom ones as well):
For the example of the camera you mentioned, everything can be done vocally as follows:
I showed the item names so you can understand the vocal commands I used, but they can be hidden if you prefer (hide names).
Voice Control is a built-in feature you can use inside your apps as well.
The only thing you may need to do as a developer is adapt the accessibilityUserInputLabels property when you want specific names to be spoken for some items in your app.
If you're looking for voice commands without pressing a button on iOS, Voice Control is the perfect candidate.
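For the camera example, supplying input labels is a one-line change per control. A minimal sketch, assuming a hypothetical record button; `accessibilityUserInputLabels` is the real UIAccessibility property (iOS 13+):

```swift
import UIKit

// Hypothetical camera screen: give the record button extra spoken names
// so a Voice Control user can say "Tap start recording" or "Tap record".
let recordButton = UIButton(type: .system)
recordButton.setTitle("REC", for: .normal)
recordButton.accessibilityUserInputLabels = ["Start recording", "Record"]
```

The first label in the array is the one Voice Control displays when names are shown; the others are accepted as synonyms.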
Now I would like to add Arabic speech recognition and TTS to my sample app. iSpeech seems to already support Arabic, and I have implemented the Arabic TTS function, but I don't know how to implement Arabic speech recognition.
I guess I need to know which Arabic locale identifier iSpeech supports, but I am not sure.
Is there a way to do this?