Registering new voice commands from my app with the iOS Voice Control engine - ios

I have a question for iOS developers.
Does anybody know whether the Apple iOS API allows adding new commands to the built-in iOS Voice Control engine? I noticed that Voice Control can control the Phone app using names and nicknames from the address book. It can also play playlists from the default iOS Music app. I would like my app to register new voice commands with this Voice Control engine and handle some actions based on the recognized commands. I searched the developer documentation but couldn't find anything like that. Am I missing something?

There's a new iOS 13 feature called Voice Control that may help you reach your goal:
I would like my app to register new voice commands with this Voice Control engine and handle some actions based on the recognized commands.
This is definitely possible thanks to the Customize Commands > Create New Command... menu in the Voice Control settings:
If you need dedicated names to be read out for some items in your app, use the accessibilityUserInputLabels property to define them.
Following this rationale, you can now register new voice commands from your app with the iOS Voice Control engine (see the sketch below).
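As the answer notes, the developer-side hook is the accessibilityUserInputLabels property (iOS 13+). A minimal sketch, assuming a hypothetical submitButton in your own view controller; the labels are the names Voice Control matches when the user says "Tap ...":

import UIKit

class OrderViewController: UIViewController {
    // Hypothetical control in your own UI.
    @IBOutlet private weak var submitButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Voice Control (iOS 13+) will accept any of these names for this control,
        // so the user can say "Tap Submit" or "Tap Send order".
        if #available(iOS 13.0, *) {
            submitButton.accessibilityUserInputLabels = ["Submit", "Send order"]
        }
    }
}

Because the property lives on NSObject, the same approach works for bar button items, table cells, and other accessible elements.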

iOS has not exposed any APIs related to voice so far. However, it is achievable using CMU Sphinx.
A big advantage of CMU Sphinx is that it works offline.

Related

Voice Command without Pressing a Button on iOS

Currently, I am working on developing an iOS app that triggers an event upon a voice command.
I saw a camera app where the user says "start recording" and the camera switches to recording mode.
This is an in-app voice control capability, so I think it is different from SiriKit or SpeechRecognizer, which I have already implemented.
How would I achieve it?
My question is NOT about voice dictation, where the user has to press a button to start dictating.
The app needs to passively wait for a keyword, or intent, such as "myApp, start recording" or "myApp, stop recording", and then start/stop that event accordingly.
Thanks.
OpenEars: free speech recognition and speech synthesis for the iPhone.
OpenEars makes it simple to add offline speech recognition in many languages, plus synthesized speech/TTS, to your iPhone app quickly and easily, so you can build an advanced speech-driven app interface (a short sketch follows below).
Check out this link:
http://www.politepix.com/openears/
or this tutorial on building an iOS app like Siri:
https://www.raywenderlich.com/60870/building-ios-app-like-siri
Thank you.
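To make the OpenEars suggestion concrete, here is a sketch of offline keyword listening for the two phrases from the question. The names follow the politepix OpenEars 2.x Swift tutorial and may differ in other versions, so treat this as an outline rather than a drop-in implementation:

import OpenEars  // framework from politepix.com; module name may vary by install

final class CommandListener: NSObject, OEEventsObserverDelegate {
    private let eventsObserver = OEEventsObserver()

    func startListening() {
        eventsObserver.delegate = self

        // Build a tiny offline language model containing only the phrases we care about.
        let generator = OELanguageModelGenerator()
        let phrases = ["START RECORDING", "STOP RECORDING"]
        let name = "CommandsModel"
        let acousticModel = OEAcousticModel.path(toModel: "AcousticModelEnglish")
        let error = generator.generateLanguageModel(from: phrases,
                                                    withFilesNamed: name,
                                                    forAcousticModelAtPath: acousticModel)
        guard error == nil,
              let lmPath = generator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: name),
              let dicPath = generator.pathToSuccessfullyGeneratedDictionary(withRequestedName: name) else { return }

        // Listen continuously and offline until stopListening() is called on the controller.
        try? OEPocketsphinxController.sharedInstance().setActive(true)
        OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath,
                                                                                  dictionaryAtPath: dicPath,
                                                                                  acousticModelAtPath: acousticModel,
                                                                                  languageModelIsJSGF: false)
    }

    // Called by OpenEars whenever PocketSphinx produces a hypothesis for what was said.
    func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) {
        if hypothesis == "START RECORDING" {
            // start your recording here
        } else if hypothesis == "STOP RECORDING" {
            // stop it here
        }
        // anything else is ignored
    }
}

Because the language model only contains the phrases you register, recognition stays fast and fully offline, which fits the "passively wait for a keyword" requirement.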
How would I achieve it?
There's a new iOS 13 feature called Voice Control that will allow you to reach your goal.
You can find useful information in the Customize Commands section, where all the vocal commands are listed (you can create a custom one as well):
For the camera example you mentioned, everything can be done vocally this way.
I showed the item names so the vocal commands I used are easier to follow, but they can be hidden if you prefer (the "Hide names" command).
Voice Control is a built-in feature you can use inside your apps as well.
The only thing to do as a developer is to adapt the accessibilityUserInputLabels properties, if needed, so that specific names are used for some items in your apps (as in the sketch below).
If you're looking for a voice command without pressing a button on iOS, Voice Control is THE perfect candidate.
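A sketch of that accessibilityUserInputLabels adaptation for the recording scenario; the recordButton outlet and toggleRecording action are hypothetical. Depending on state, the user can say "Tap start recording" or "Tap stop recording" without touching the screen:

import UIKit

final class RecorderViewController: UIViewController {
    // Hypothetical outlet for the custom record button.
    @IBOutlet private weak var recordButton: UIButton!
    private var isRecording = false

    override func viewDidLoad() {
        super.viewDidLoad()
        updateVoiceControlLabels()
    }

    // Hypothetical action wired to the button; start/stop your capture session here.
    @IBAction private func toggleRecording(_ sender: UIButton) {
        isRecording.toggle()
        updateVoiceControlLabels()
    }

    // Voice Control (iOS 13+) reads these labels, so the spoken command
    // always matches what the button currently does.
    private func updateVoiceControlLabels() {
        if #available(iOS 13.0, *) {
            recordButton.accessibilityUserInputLabels =
                isRecording ? ["Stop recording"] : ["Start recording"]
        }
    }
}

Updating the labels whenever the state changes keeps the spoken vocabulary in sync with the button's current action.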

Is it possible to take a picture using a voice command in an iOS app?

I created a sample application that uses custom controls to take a picture. Now I want to take a picture using a voice command (e.g. the user says "capture photo"). Are there any default controls or options in iOS, or do we need to implement our own? Any references?
Now I want to take a picture using a voice command (e.g. the user says "capture photo"). Are there any default controls or options in iOS...
There's a new iOS 13 feature called Voice Control that will allow you to reach your goal.
Activate this feature under Settings > Accessibility > Voice Control.
You can find useful information in the Customize Commands section, such as taking a screenshot, for instance (taking a photo with the camera works as well):
If you need dedicated names to be read out in your app, use the accessibilityUserInputLabels property to define them (see the sketch below).
Since iOS 13, it's possible to take a picture in an iOS app using the built-in Voice Control feature.
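A minimal sketch of a custom capture control that Voice Control can trigger by name. The session setup is reduced to the essentials, permissions, layout, and error handling are left out, and the class and label names are illustrative:

import AVFoundation
import UIKit

final class CameraViewController: UIViewController, AVCapturePhotoCaptureDelegate {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()
    private let shutterButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()

        // Plain camera setup (assumes camera permission was already granted).
        if let camera = AVCaptureDevice.default(for: .video),
           let input = try? AVCaptureDeviceInput(device: camera),
           session.canAddInput(input), session.canAddOutput(photoOutput) {
            session.addInput(input)
            session.addOutput(photoOutput)
            session.startRunning()
        }

        // The spoken names Voice Control (iOS 13+) accepts for this control,
        // e.g. "Tap capture photo".
        shutterButton.setTitle("Shutter", for: .normal)
        if #available(iOS 13.0, *) {
            shutterButton.accessibilityUserInputLabels = ["Capture photo", "Take picture"]
        }
        shutterButton.addTarget(self, action: #selector(capturePhoto), for: .touchUpInside)
        view.addSubview(shutterButton)
    }

    @objc private func capturePhoto() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    // Delegate callback with the captured image data.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else { return }
        // Save or display `data` as needed.
        _ = data
    }
}

With this in place, saying "Tap capture photo" triggers the same code path as a finger tap on the shutter.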
Are there any default controls or options in iOS or do we need to implement our own?
As of iOS 8, there is no Apple-provided speech-to-text API. You can use SpeechKit, or implement one of the hacks that uses the Google Voice API.

iOS voice command to start recording

Currently I am developing an iOS app. Once I open the camera through the application and say "start recording", recording should start automatically. Once I say "Cut", it should stop and ask to save, share the video, etc.
My main concern is the voice command: recording should start and stop through voice commands.
Looking forward to the experts' suggestions.
Many thanks
As Siri is not yet exposed to developers, there are a lot of third-party SDKs that can be used for recognizing speech.
I have used
http://dragonmobile.nuancemobiledeveloper.com/public/Help/DragonMobileSDKReference_iOS/Introduction.html
There are a number of other SDKs too; you can easily google them.
You have to set the words that you want to recognize and perform the intended action (see the sketch after this answer).
All the best with your app!
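Whichever SDK you pick, the glue code follows the same pattern: register the phrases you care about, then map each recognized phrase to an action. A minimal, SDK-agnostic sketch; the onPhraseRecognized entry point is hypothetical and stands in for whatever recognition callback your SDK provides:

import Foundation

// The phrases the recognizer should be configured with.
enum CameraCommand: String {
    case startRecording = "start recording"
    case cut = "cut"
}

final class VoiceCommandRouter {
    // Wire these closures to your camera code.
    var startRecording: () -> Void = {}
    var stopRecording: () -> Void = {}

    // Call this from the speech SDK's recognition callback with the recognized text.
    func onPhraseRecognized(_ text: String) {
        let phrase = text.lowercased().trimmingCharacters(in: .whitespacesAndNewlines)
        switch CameraCommand(rawValue: phrase) {
        case .startRecording: startRecording()
        case .cut:            stopRecording()
        case .none:           break // not a known command, ignore it
        }
    }
}

Call router.onPhraseRecognized(text) from the SDK's result handler and hook startRecording/stopRecording up to your capture code.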

Read user's music library within Phonegap

I'm currently developing an app with PhoneGap which uses a peer-to-peer connection through WebRTC. For my purposes I need to list the sounds available on the user's device.
So I'd like to know if it's currently possible with PhoneGap to gain access to the user's music library and, for example, list all available songs sorted by artist? I came across this article from Aurelio de Rosa, but I tested it and it doesn't seem to work on iOS.
Any suggestions? Or is there maybe a plugin around which I'm not aware of?
You can find the iOS SDK, Music Library Access example code here. I expect you will need to write a plugin to expose this to Cordova.
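On the native side, Music Library Access boils down to MediaPlayer's MPMediaQuery; a Cordova plugin would wrap something like the sketch below and hand the result back to JavaScript. The plugin bridging itself is omitted, and the app's Info.plist needs an NSAppleMusicUsageDescription entry:

import MediaPlayer

// Lists the user's songs grouped by artist; a plugin would serialize this
// dictionary and return it to the JavaScript side.
func listSongsByArtist(completion: @escaping ([String: [String]]) -> Void) {
    MPMediaLibrary.requestAuthorization { status in
        guard status == .authorized else { return completion([:]) }

        var songsByArtist: [String: [String]] = [:]
        // MPMediaQuery.artists() returns one MPMediaItemCollection per artist.
        for collection in MPMediaQuery.artists().collections ?? [] {
            let artist = collection.representativeItem?.artist ?? "Unknown artist"
            let titles = collection.items.compactMap { $0.title }
            songsByArtist[artist, default: []].append(contentsOf: titles)
        }
        completion(songsByArtist)
    }
}

Note that MPMediaLibrary.requestAuthorization requires iOS 9.3 or later.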
Your link should work, but only with music that you store inside your app sandbox or inside the assets (inside www folder).
If you want to use the music library you will need a plugin.
I have found one, but it's very old, so you will need to update it. It searches the music and plays it natively too:
https://github.com/hutley/HelloPhoneGap1.0/tree/master/HelloPhoneGap1/Plugins/iPod
Here you can find a tutorial on how to create a music player using Music Library Access, but it's in Japanese. The code is in English: http://blog.asial.co.jp/884

Voice Control iOS

I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls. The app would allow the user to enter a mode where the microphone was live and listened for predefined keywords like 'down', 'up', 'next', 'back', 'home', etc.
I don't want to reinvent the wheel on this, so I'm just wondering: first, has someone done this already, and if not, are there any good tutorials or SDKs available to help with recording someone's voice and then comparing future output to see if it matches, or with dealing with the microphone in general?
Let's put aside that this is a fairly vaguely worded question for the moment.
If you are expecting to allow voice control in your app that somehow works throughout the entire device, it's just not possible. Your app would only work to control itself -- or at least itself and whatever external hooks you can normally get to the rest of the device, like, say, playing a song out of the user's iTunes library.
If you're planning on doing this in a jailbroken environment, then you should find some open-source library that does voice recognition -- if there are any -- and start from there. Be prepared for a very long haul, though.
Dragon Mobile SDK is what you're looking for.
http://dragonmobile.nuancemobiledeveloper.com/
There may be other voice recognition SDKs out there, but this is the only one I can think of off the top of my head.
You can find a library called CMU Sphinx. There's an iPhone version of it called PocketSphinx. See if it fits your needs.
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls.
The new iOS 13 Voice Control feature fully meets your request, because you can control your device and your app with your voice exactly the same way as with touches.
It's also possible to define custom commands for specific words.
The device settings (Accessibility > Voice Control) explain this new feature in detail:
If you need dedicated names to be read out in your app, use the accessibilityUserInputLabels property to define them.
That's definitely the built-in tool you need to reach your goal: no need for an external library or SDK, everything is provided natively. ;o)
