React Native voice plugin for iOS doesn't recognize contact names

I have implemented the react-native-voice plugin in my app. Speech-to-text on iOS works fine, except that it doesn't take my contact names into account. A sentence such as "Please send a message to John Appleseed" will not transcribe the name "Appleseed" correctly, even though this contact is in my phone's contact list!
What is strange is that when I dictate inside my app through the keyboard's voice-to-text feature, the name is recognized perfectly!
Is there a configuration I am missing? Why is there a difference between Apple's keyboard dictation and the react-native-voice plugin?
On Apple's developer website, it clearly says:
The keyboard’s dictation support uses speech recognition to translate audio content into text. This framework provides a similar behavior, except that you can use it without the presence of the keyboard.
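One difference worth knowing about (a hedged sketch, not part of react-native-voice's public API): the plugin drives SFSpeechRecognizer natively, and SFSpeechRecognizer does not consult your contacts the way keyboard dictation does. You can bias it yourself by patching the plugin's native iOS code to set `contextualStrings` on the recognition request, feeding it names pulled from the address book:

```swift
import Speech
import Contacts

// Join name parts, skipping empty components.
func fullName(given: String, family: String) -> String {
    return [given, family].filter { !$0.isEmpty }.joined(separator: " ")
}

// Gather names from the address book. Requires an NSContactsUsageDescription
// entry in Info.plist and the user's permission.
func fetchContactNames() throws -> [String] {
    let keys = [CNContactGivenNameKey as CNKeyDescriptor,
                CNContactFamilyNameKey as CNKeyDescriptor]
    let fetch = CNContactFetchRequest(keysToFetch: keys)
    var names: [String] = []
    try CNContactStore().enumerateContacts(with: fetch) { contact, _ in
        names.append(fullName(given: contact.givenName, family: contact.familyName))
    }
    return names
}

// Bias the recognizer toward those names. Apple recommends keeping the
// list of contextual phrases short (on the order of ~100 entries).
func makeRequest(contactNames: [String]) -> SFSpeechAudioBufferRecognitionRequest {
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.contextualStrings = contactNames
    return request
}
```

This would have to live in the plugin's native iOS layer (the function names above are hypothetical), since react-native-voice does not expose `contextualStrings` from JavaScript.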

Related

Setting autofill in Cordova for iOS

I'm pretty new to Cordova development, and I'm trying to achieve the following. We have an application, running on both Android and iPhone, written in AngularJS under the Cordova framework.
In order to use our application, we require users to submit their phone number, receive an SMS containing an OTP, type the OTP into a dedicated text field, and press a button to send the OTP (and receive an authentication token).
I was asked to enable the simple feature of letting the application do that automatically, meaning it would parse the SMS, fill that input field, and send the OTP, without any user intervention.
This is pretty easily achieved on Android using a specific SMS Receive plugin, but it cannot be done on iOS.
However, I saw that it can be achieved semi-automatically on newer iOS versions, but I have to change the input field type to "one-time-code". I tried to do that in my Cordova code and couldn't achieve it, no matter what I did. I would like to know how to do it through Cordova, if it can be done at all.
You should be able to do this using purely HTML without needing a Cordova plugin or any native iOS code as described here. Just set the autocomplete attribute, not the type attribute, of the input element to one-time-code:
<input id="single-factor-code-text-field" autocomplete="one-time-code" />
In my case, the SMS had to contain the word 'code', then a space, then the code itself. I'm not sure, but I think iOS captures the characters up to the next space, because I had more characters after the code.
Example: code 123456
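For completeness, the native-iOS counterpart of that HTML attribute (a hedged sketch, not Cordova code): since iOS 12, UIKit exposes the same autofill behavior through the text field's content type.

```swift
import UIKit

// Sketch: a text field whose QuickType bar offers the code from an
// incoming SMS (iOS 12+). The SMS still has to contain a recognizable code.
let codeField = UITextField()
codeField.textContentType = .oneTimeCode
codeField.keyboardType = .numberPad
```

In a Cordova app the web view's `<input autocomplete="one-time-code">` is the way to get this, since you don't build the text field natively.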

Can iOS 8 native credit card scanning be utilized in a native app?

In iOS 8, a user can scan his or her credit card (it takes a picture) in both Safari and Apple Pay. Additionally, a web form can prompt the user to scan a credit card to autofill a form asking for payment info. This is done in HTML by setting a tag/name on the field, e.g. "..." Safari will then automatically prompt the user to use their camera (see links below).
Is there a way to take advantage of this functionality in a native iOS app, either via an apple API or by setting some field type parameters on an input field?
Example use case: user opens my app and tries to buy something, I prompt user to enter a credit card, she or he can then scan a card.
If a user adds a card directly through Safari settings they might have this option:
https://9to5mac.files.wordpress.com/2014/10/2014-10-02-08-41-21.png
If a user hasn't added a card already, they might have this option:
http://photos2.appleinsidercdn.com/gallery/9512-1291-safari-140609-l.png
NOTE: I know about Card.io and will probably use that, but wanted to find out if there is an easier / more seamless way.
I don't think so; I've never seen it outside of a web view.
Card.io is the best option in my opinion.

How to detect language of the WatchKit voice-to-text input?

I am trying to get some input from the user on the Apple Watch using presentTextInputControllerWithSuggestions. I wonder what happens if the user speaks multiple languages – is there a way to detect which language he has spoken?
Also, is there a way to find out what languages are set in his preferences?
Not having a Watch on hand, I don't think anyone here knows. (Edit: this was first posted before the Watch launched.) But even though it'd be really cool if there were dictation software that could guess cual idioma で話しています from word to word, watchOS is no different than iOS in that respect.
In iOS, Siri listens only in the language you set in Settings, and dictation listens only in the language of the active keyboard (whose microphone button you pressed to start dictation).
In watchOS, Siri likewise has a set language. Dictation is based on the keyboard language last used on your paired phone, but you can also change the language during text entry with a force press. That's a user choice for a system service, so it's opaque to the app, just like the choice of keyboard is to an iOS app. (You're welcome to perform some sort of text analysis if you want to know what language of text the user has entered.)
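For that text-analysis route, a hedged sketch: on newer systems (iOS 12+/watchOS 5+) you could run the dictated string through the Natural Language framework after the fact. Note this only guesses what language the resulting text looks like, not which dictation language the user selected.

```swift
import NaturalLanguage

// Guess the dominant language of already-dictated text.
// Returns a BCP-47-style code such as "en" or "fr", or nil if undetermined.
func guessLanguage(of text: String) -> String? {
    let recognizer = NLLanguageRecognizer()
    recognizer.processString(text)
    return recognizer.dominantLanguage?.rawValue
}
```

Short utterances can be ambiguous, so treat the result as a hint rather than ground truth.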

Is it possible to customize the Bluetooth message when the alert first shows up in iOS

I am writing an app which uses Bluetooth to send data. The first time the call is made, an alert pops up which says:
"app name" would like to make data available to nearby bluetooth devices even when you're not using the app.
Is there a way to customize this, similar to the Location Services message?
There is an NSBluetoothPeripheralUsageDescription key for Info.plist that stores a purpose string describing why the app uses Bluetooth. When the system prompts the user to allow access, this string is displayed as part of the dialog box.
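A minimal Info.plist entry would look like this (the wording of the string is up to you; the surrounding system sentence itself cannot be changed, only this appended explanation):

```xml
<key>NSBluetoothPeripheralUsageDescription</key>
<string>This app uses Bluetooth to share data with nearby devices.</string>
```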

Use iPhone as a keyboard wedge with Bluetooth

I'm trying to create an application where I can send a string from an iPhone to an active text field on my Mac. I'm coming from a Microsoft background, where they call it focus. The active text field is not part of my application (it's third-party).
I tested the concept by creating an iOS app that sends a string to a Mac via Bluetooth. The Mac (Cocoa app) presents the string in a label in an NSWindow.
I want to create a keyboard wedge, like a USB device, that inputs the string into a text field on an open Safari webpage using the active text box. I see there is a CGEventCreateKeyboardEvent in Apple's documentation. My question is: can I pass the entire string to a keyboard event without having to handle every possible CGKeyCode and code each true/false for key-up and key-down?
I must be missing a better way...
There is no universal "better way", since, unlike Microsoft, Apple knows something about security and is not going to let just any old process out of the blue start manipulating the text entered in some application's text box. However, there is a hole which you can ask the user to open: if the user has granted Accessibility permissions, then you can use the Accessibility API to "see" interface of the target application and to make changes like modifying the text in a text box. That is how applications like Nuance Dragon Dictate for Mac and Smile TextExpander work.
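On the narrower question of passing a whole string without enumerating CGKeyCodes, a hedged sketch: a synthetic key event can carry an arbitrary Unicode string via keyboardSetUnicodeString, so you don't need a keycode-per-character table. Posting events into other applications still requires the Accessibility permission described above, and I believe a single event only carries a short run of UTF-16 units, so long strings may need chunking.

```swift
import CoreGraphics

// Convert a string to the UTF-16 code units that CGEvent expects.
func utf16Units(of string: String) -> [UniChar] {
    return Array(string.utf16)
}

// Type a string into whatever control currently has focus by attaching
// the text to one synthetic key-down/key-up pair. The virtualKey value
// is arbitrary; the attached Unicode string is what actually gets typed.
func typeString(_ string: String) {
    let units = utf16Units(of: string)
    guard let down = CGEvent(keyboardEventSource: nil, virtualKey: 0, keyDown: true),
          let up = CGEvent(keyboardEventSource: nil, virtualKey: 0, keyDown: false)
    else { return }
    down.keyboardSetUnicodeString(stringLength: units.count, unicodeString: units)
    up.keyboardSetUnicodeString(stringLength: units.count, unicodeString: units)
    down.post(tap: .cghidEventTap)
    up.post(tap: .cghidEventTap)
}
```

This is essentially what keyboard-wedge utilities do, but the Accessibility-API route in the answer above is more robust when you need to target a specific text box rather than "whatever has focus".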
