I'm using AVSpeechSynthesizer, but it supports only a few languages and not Vietnamese. Is there any way around this?
I'd prefer an engine that works offline and is cheap, but I'll take whatever is available if there is no other way.
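For context, the set of installed voices can be checked at runtime. A minimal sketch: "vi-VN" is the standard BCP 47 tag for Vietnamese, and whether a voice for it exists varies by device and iOS version:

```swift
import AVFoundation

// List every voice installed on this device.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print(voice.language, voice.name)
}

let synthesizer = AVSpeechSynthesizer()

// Try to obtain a Vietnamese voice; this returns nil if none is installed.
if let viVoice = AVSpeechSynthesisVoice(language: "vi-VN") {
    let utterance = AVSpeechUtterance(string: "Xin chào")
    utterance.voice = viVoice
    synthesizer.speak(utterance)
} else {
    print("No Vietnamese voice available on this device")
}
```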
We are trying to use the built-in iOS text-to-speech tool to read Chinese words in our app.
It works well when reading running text, but it has problems with isolated words.
For example, take the character 还. It can be pronounced "hái", meaning "also, in addition", or "huán", meaning "to return".
In the phrase 我还要还钱 (wǒ hái yào huán qián) it pronounces 还 both ways (correctly).
For the isolated character 还, however, iOS only reads it as "hái". How can we make it pronounce characters the way we need (if that is possible)?
As a quick solution, you can cut the required words out of longer recordings and play them back as audio instead of using TTS.
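A minimal sketch of that clip-playback approach with AVAudioPlayer, assuming you have pre-cut a recording of 还 read as "huán" into a bundled file (the name "huan2.m4a" is hypothetical):

```swift
import AVFoundation

final class WordPlayer {
    // Keep a strong reference, or playback stops when the player deallocates.
    private var player: AVAudioPlayer?

    func play(clipNamed name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "m4a") else {
            print("Missing clip: \(name)")
            return
        }
        do {
            player = try AVAudioPlayer(contentsOf: url)
            player?.play()
        } catch {
            print("Playback failed: \(error)")
        }
    }
}

// Usage: play the pre-recorded "return" reading instead of synthesizing it.
// WordPlayer().play(clipNamed: "huan2")
```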
I have been playing around with the Cloud Speech API and noticed that it returns punctuation for English but not for Japanese when enableAutomaticPunctuation is set to true.
Does anybody know which languages Google Cloud Speech's automatic punctuation supports?
Speech-to-Text can provide punctuation in audio transcriptions for the 'en-US' language only.
EDIT MAY 2020: Speech-to-Text now supports more languages.
Update: as of May 2020, automatic punctuation is supported for several languages, including Japanese. The full list of supported languages, and the features supported for each, is listed here.
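For reference, here is a sketch of a recognize request against the REST endpoint with punctuation enabled, written with URLSession. The API key and the base64-encoded audio are placeholders you must supply, and whether punctuation actually appears in the result depends on the language:

```swift
import Foundation

func recognize(base64Audio: String, apiKey: String) {
    let url = URL(string: "https://speech.googleapis.com/v1/speech:recognize?key=\(apiKey)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = [
        "config": [
            "languageCode": "ja-JP",             // punctuation support is per-language
            "enableAutomaticPunctuation": true,  // ignored where unsupported
            "encoding": "LINEAR16",
            "sampleRateHertz": 16000
        ],
        "audio": ["content": base64Audio]
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        if let data = data, let json = String(data: data, encoding: .utf8) {
            print(json) // transcript, punctuated where the language supports it
        }
    }.resume()
}
```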
By default, watchOS's presentTextInputController recognizes English, and the recognition language can be changed with deep touch. How can I get the language code when the user changes the language via deep touch while dictating?
I don't think this is possible at the moment; looking at the WatchKit documentation, there doesn't seem to be an API for this.
However, you can call presentTextInputControllerWithSuggestions(forLanguage:allowedInputMode:completion:) if you want to specify which languages you can handle. See the official documentation.
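One detail worth noting: the suggestions handler of that method is invoked with the BCP 47 code of the currently selected input language, so it doubles as a hook for observing the language the user picked. A sketch, assuming a WKInterfaceController subclass:

```swift
import WatchKit

class InputController: WKInterfaceController {
    func startDictation() {
        presentTextInputControllerWithSuggestions(
            forLanguage: { languageCode -> [Any]? in
                // Called whenever suggestions are needed for the active
                // language, including after the user switches it.
                print("Current input language: \(languageCode)")
                return languageCode.hasPrefix("en") ? ["Yes", "No"] : nil
            },
            allowedInputMode: .plain
        ) { results in
            if let text = results?.first as? String {
                print("Dictated: \(text)")
            }
        }
    }
}
```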
Currently I am using OpenEars to detect a phrase, and it works pretty well, although I would like to recognize all words in the English language and add them to a text field. I had two thoughts on how to approach this:
1) Somehow load the entire English dictionary into OpenEars.
(I don't think this is a good idea, because the documentation suggests vocabularies of roughly 2-300 words or so.)
2) Activate the native iOS voice recognition without deploying the keyboard.
I'm leaning towards the second way if possible, because I love the live recognition in iOS 8; it works flawlessly for me.
How do I recognize all words using one of these two methods (or a better way, if you know one)?
Thank you
The answer is that you can't do 1) or 2), at least not the way you want to. OpenEars won't handle the whole English dictionary, and you can't get iOS voice recognition without the keyboard widget. You might want to look into Dragon Dictation, which is the speech engine that Siri uses, or SILVIA. You'll have to pay for a license though.
Now I am confused about which Text To Speech engine to use for iOS.
Searching the internet, I found two Text To Speech engines for iOS:
iSpeech and the Dragon Mobile SDK from Nuance.
I have to use Text To Speech for multiple languages and also need Speech To Text.
I want to know which engine is better and which is faster.
Thanks in advance.