Is it possible to change a label by voice? (iOS)

I want to have a set of images/labels. Let's call them labelOne and labelTwo. When labelOne is present, is it possible to change it to labelTwo using voice?
For example, as I say "one", labelOne changes to labelTwo.
Is this possible with Speech Kit? If so, can I host the voice recognition in the app so it doesn't have to contact a server?

Yes, it's possible using iSpeech, but iSpeech is a paid service. The better option is to do this via OpenEars.
Follow this official doc.

Yes, you can do that. But if you only want to track a few keywords, don't go with a full API implementation; it would be time-consuming. Just use "keyword spotting". You can also use an open-source package like OpenEars, which I used in my project for the voice-recognition logic.
Check the following references, which I made for myself as bookmarks:
http://appaloud.com/top-sdks-to-voice-enable-mobile-apps-quickly/
http://www.raywenderlich.com/60870/building-ios-app-like-siri
http://www.politepix.com/openears
http://www.politepix.com/openears/tutorial
Rejecto Plugin
I hope this helps.

I use OpenEars (http://www.politepix.com/openears/) to do my voice recognition. It works offline and supports grammar-based rules too.
Here is a video of it in action: http://youtu.be/idq7IRnrVq8
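Whatever engine produces the recognition hypothesis, the label-swapping part of the question stays the same. Here is a minimal, engine-agnostic sketch (the class and label names are illustrative, not from any of the SDKs above):

```swift
import UIKit

// Minimal sketch: show one label instead of another based on a recognized
// hypothesis string. Call handleHypothesis(_:) from your recognizer's
// callback (OpenEars, iSpeech, or anything else that hands you text).
final class LabelSwitcher {
    let labelOne = UILabel()
    let labelTwo = UILabel()

    func handleHypothesis(_ hypothesis: String) {
        // Swap labelOne for labelTwo when the user says "one".
        guard hypothesis.lowercased().contains("one") else { return }
        labelOne.isHidden = true
        labelTwo.isHidden = false
    }
}
```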

Related

What kind of documentation/API allows a String to be turned into speech (Siri-style speaking, not human speaking)?

I am a comic developer; however, I am a deaf iOS developer. I don't want to hire a hearing person, and I know iOS has a feature to speak selected sentences aloud. I'd like to know what this feature is called, plus a code sample in Swift that speaks a String aloud whenever it comes up. Thanks.
I think you are asking for “speech synthesis”, also called “text-to-speech” (TTS for short).
Apple provides speech synthesis in the AVFoundation framework; a minimal sketch follows the documentation links below.
Speech Synthesis documentation
AVSpeechSynthesizer: Making iOS Talk
Create a seamless speech experience in your apps
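A minimal Swift sketch of the AVFoundation API (the text and language code are placeholders; pick whatever fits your content):

```swift
import AVFoundation

// Speak a String aloud using the system text-to-speech voices.
let synthesizer = AVSpeechSynthesizer()  // keep a strong reference while speaking
let utterance = AVSpeechUtterance(string: "Hello from my comic!")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
utterance.rate = AVSpeechUtteranceDefaultSpeechRate
synthesizer.speak(utterance)
```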
Apple's accessibility system also provides speech synthesis. This feature is called VoiceOver.
UIAccessibilityReadingContent protocol documentation
Creating an Accessible Reading Experience

How do I create a custom intent in Siri?

I know Siri provides a limited set of intents, and we have to add our app to a domain to be able to take input from Siri.
But I would like to create my own intent so users can access my app via Siri.
I couldn't find much support for this anywhere; any helpful pointers are welcome.
Update
Now you can create custom intents and use them with Siri Shortcuts. Here is a simplified tutorial by Ray Wenderlich
https://www.raywenderlich.com/6462-siri-shortcuts-tutorial-in-ios-12
or you can check out
this article for beginners.
For more details, refer to Apple's official documentation.
https://developer.apple.com/documentation/sirikit
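As a quick taste of the shortcuts approach, here is a hedged sketch that donates an NSUserActivity-based shortcut. The activity type string is hypothetical and must also be declared under NSUserActivityTypes in Info.plist; fully custom intents additionally need an intent definition file, which the tutorials above cover:

```swift
import UIKit
import Intents

// Donate a shortcut so Siri can suggest it and the user can record a phrase.
func donateOpenNotesShortcut(from viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.openNotes") // hypothetical type
    activity.title = "Open my notes"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true              // iOS 12+: enables Siri suggestions
    activity.suggestedInvocationPhrase = "Show my notes" // hint shown in "Add to Siri"
    viewController.userActivity = activity               // setting this donates the activity
}
```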
Outdated Answer
You can't create a custom intent for now. Maybe later versions will add support for custom intents; maybe they won't. With the current approach, Apple holds all the control over intent (operation) types, data, privacy, etc. I'm not sure they will change that.
If you really need custom voice commands, you can implement them inside your application (not outside the app like Siri). There are alternatives like the following (a sketch using Apple's Speech framework follows the list):
Apple's AVSpeechSynthesizer
Apple's Speech
IBM's Watson
Nuance Speech Kit
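For example, here is a hedged sketch using Apple's Speech framework (iOS 10+) to recognize a pre-recorded file; a live-microphone setup needs extra AVAudioEngine plumbing, and the speech-recognition usage key must be present in Info.plist:

```swift
import Speech

// Recognize speech from an audio file and hand the transcript to a matcher.
func recognizeCommand(in fileURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized, let recognizer = SFSpeechRecognizer() else { return }
        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        recognizer.recognitionTask(with: request) { result, _ in
            guard let result = result, result.isFinal else { return }
            print("Heard:", result.bestTranscription.formattedString) // match against your commands
        }
    }
}
```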
You can create a custom intent as of iOS 12. It is quite a complicated process, but there are some tutorials available that can help you out.
I think Apple's WWDC example is a good starting point.

Can I change the voice of OpenEars (iOS)

Would I be able to change the speaking voice of OpenEars to another one? I don't quite like the default one. Is that possible, or would I have to use another API? Sorry if this is a stupid question.
Yes, you can, but it's a somewhat complex process. OpenEars uses Flite for speech synthesis, so you need to change the voice in Flite. Flite already offers 13 voices to choose from, and you also have the option to build a new voice.
To build a new voice, you need to follow the documentation. The Festvox documentation might also be useful for understanding the basics.
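In code, switching voices is roughly the sketch below. The names (OEFliteController, the Slt voice framework, say(_:withVoice:)) are recalled from OpenEars 2.x and may differ in your version; treat them as assumptions and verify against the Politepix docs:

```swift
import OpenEars  // assumes the OpenEars framework is linked
import Slt       // each Flite voice ships as its own framework

// Speak with a non-default Flite voice by passing a different voice object.
let flite = OEFliteController()
let voice = Slt()  // swap in another voice framework's class to change voices
flite.say("Hello from a different voice", withVoice: voice)
```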

How can you synthesize speech using Phonemes for iOS

I'm working on an application that requires the use of a text-to-speech synthesizer. Implementing this was rather simple for iOS using AVSpeechSynthesizer. However, when it comes to customizing synthesis, I was directed to documentation about speech synthesis for an OS X-only API, which allows you to input phoneme pairs, in order to customize word pronunciation. Unfortunately, this interface is not available on iOS.
I was hoping someone might know of a similar library or plugin that might accomplish the same task. If you do, it would be much appreciated if you would lend a hand.
Thanks in advance!
AVSpeechSynthesizer on iOS is not capable (out of the box) of working with phonemes. NSSpeechSynthesizer is capable of it, but it's not available on iOS.
You can create an algorithm that produces short phonemes, but it would be incredibly difficult to make it sound good by any means.
... allows you to input phoneme pairs, in order to customize word pronunciation. Unfortunately, this interface is not available on iOS.
This kind of interface is definitely available on iOS: in your device settings (iOS 12), go to General > Accessibility > Speech > Pronunciations, then:
Select the '+' icon to add a new phonetic element.
Name this new element in order to quickly find it later on.
Tap the microphone icon.
Vocalize an entire sentence or a single word.
Listen to the different system proposals.
Validate your choice with the 'OK' button or cancel to start over.
Tap the back button to confirm the newly created phonetic element.
Find all the generated elements in the Pronunciations page.
Following the steps above, you will be able to synthesize speech using phonemes for iOS.
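There is also a programmatic route worth verifying: since iOS 10, AVSpeechUtterance accepts an attributed string, and a range tagged with AVSpeechSynthesisIPANotationAttribute carries an IPA pronunciation. A hedged sketch (the IPA value is illustrative; the Pronunciations flow above is one way to obtain a good one):

```swift
import AVFoundation

// Attach an IPA pronunciation to part of an utterance (iOS 10+).
let text = NSMutableAttributedString(string: "I say tomato")
let ipaKey = NSAttributedString.Key(rawValue: AVSpeechSynthesisIPANotationAttribute)
// "təˈmɑːtoʊ" is an illustrative IPA value for "tomato" (range covers that word).
text.addAttribute(ipaKey, value: "təˈmɑːtoʊ", range: NSRange(location: 6, length: 6))

let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(AVSpeechUtterance(attributedString: text))
```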

Speech recognition from recorded file

I've been researching several iOS speech recognition frameworks and have found it hard to accomplish something I would think is pretty straightforward.
I have an app that allows people to record their voices. After a recording is made, they have the option to create a text version.
Looking into the services out there (e.g., Nuance), most require you to use the microphone. OpenEars allows you to do this, but the dictionary is very limited because it is an offline solution (they recommend 300 or fewer words).
There are a few other things going on with the app that would make it very unappealing to switch from the current recording method. For what it is worth, I am using the Amazing Audio Engine framework.
Does anyone have any other suggestions for frameworks? Or is there a way to dig deeper with Nuance to transcribe a recorded file?
Thank you for your time.
For services, there are a few cloud-based hosted speech-recognition services you can use. You simply post the audio file to their URL and receive back the text. Most of them don't have any constraint on the vocabulary, and you can of course choose any recording method you like.
See here: Server-side Voice Recognition. Many of them offer a free trial as well.
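The posting pattern itself is plain HTTP. A generic sketch follows; the endpoint URL and response handling are placeholders rather than a real service, so check each provider's docs for the actual request shape and authentication:

```swift
import Foundation

// POST a recorded audio file to a hosted recognition service and read back
// the transcript. Endpoint and response handling are placeholders.
func transcribe(fileURL: URL, completion: @escaping (String?) -> Void) {
    var request = URLRequest(url: URL(string: "https://speech.example.com/recognize")!) // placeholder
    request.httpMethod = "POST"
    request.setValue("audio/wav", forHTTPHeaderField: "Content-Type")
    URLSession.shared.uploadTask(with: request, fromFile: fileURL) { data, _, _ in
        completion(data.flatMap { String(data: $0, encoding: .utf8) })
    }.resume()
}
```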
