I'm trying to create an application where I can send a string from an iPhone to the active text field on my Mac. I come from a Microsoft background, where this is called focus. The active text field is not part of my application (it's 3rd party).
I tested the concept by creating an iOS app that sends a string to a Mac via Bluetooth. The Mac (Cocoa app) presents the string in a label in an NSWindow.
I want to create a keyboard wedge, like a USB device, that inputs the string into the active text box, for example in an open Safari webpage. I see there is a CGEventCreateKeyboardEvent in Apple's documentation. My question is: can I pass the entire string to a keyboard event without having to handle every possible CGKeyCode and code each key-down/key-up pair as true/false?
I must be missing a better way...
There is no universal "better way", since, unlike Microsoft, Apple knows something about security and is not going to let just any old process out of the blue start manipulating the text entered in some application's text box. However, there is a hole which you can ask the user to open: if the user has granted Accessibility permissions, then you can use the Accessibility API to "see" the interface of the target application and to make changes such as modifying the text in a text box. That is how applications like Nuance Dragon Dictate for Mac and Smile's TextExpander work.
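To answer the narrower CGEventCreateKeyboardEvent question: you don't have to enumerate CGKeyCode values. A synthesized keyboard event can carry an arbitrary Unicode payload via CGEventKeyboardSetUnicodeString, so you can post the string character by character without any keycode table. A minimal Swift sketch, assuming the user has already granted Accessibility permission (the function name typeString is mine, not Apple's):

import ApplicationServices

// Types `string` into whichever control currently has keyboard focus,
// by posting synthetic key events. Requires Accessibility permission
// (System Preferences > Security & Privacy > Accessibility).
func typeString(_ string: String) {
    guard AXIsProcessTrusted() else { return }  // permission not granted yet
    let source = CGEventSource(stateID: .combinedSessionState)
    for unit in string.utf16 {
        var chars: [UniChar] = [unit]
        // virtualKey 0 is just a placeholder; the Unicode payload set
        // below determines the character that is actually delivered.
        guard let down = CGEvent(keyboardEventSource: source, virtualKey: 0, keyDown: true),
              let up = CGEvent(keyboardEventSource: source, virtualKey: 0, keyDown: false)
        else { continue }
        down.keyboardSetUnicodeString(stringLength: 1, unicodeString: &chars)
        up.keyboardSetUnicodeString(stringLength: 1, unicodeString: &chars)
        down.post(tap: .cghidEventTap)
        up.post(tap: .cghidEventTap)
    }
}

This types into whatever has focus system-wide, which is what a keyboard wedge does; the Accessibility API route described above is still the better choice if you need to target a specific field.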
Related
I have implemented the react-native-voice plugin in my app. Speech-to-text on iOS works fine, except that it doesn't take my contact names into account. That means a sentence such as "Please send a message to John Appleseed" will not recognize the name "Appleseed" correctly, even though this contact is in my phone's contact list!
What is strange is that when I dictate inside my app through the keyboard's voice-to-text feature, the name is recognized perfectly!
Is there a configuration I am missing? Why is there a difference between Apple's keyboard dictation and the react-native-voice plugin?
On Apple's developer website, it clearly says:
The keyboard’s dictation support uses speech recognition to translate audio content into text. This framework provides a similar behavior, except that you can use it without the presence of the keyboard.
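One likely explanation (my assumption; the plugin's docs don't spell this out): keyboard dictation is personalized by the system with data such as your contacts, while a plain Speech framework session only knows the vocabulary you hand it. The framework exposes a contextualStrings property on recognition requests for exactly this kind of hint, so if react-native-voice never passes your contact names through, names like "Appleseed" will be missed. A sketch of what the plugin's native iOS side could do:

import Speech

// Bias recognition toward specific vocabulary such as contact names.
// contextualStrings is a standard SFSpeechRecognitionRequest property;
// wiring it up would mean patching react-native-voice's native module.
func makeRecognitionRequest(contactNames: [String]) -> SFSpeechAudioBufferRecognitionRequest {
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.contextualStrings = contactNames  // e.g. ["John Appleseed"]
    return request
}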
I am trying to get some input from the user on the Apple Watch using presentTextInputControllerWithSuggestions. I wonder what happens if the user speaks multiple languages – is there a way to detect which language he has spoken?
Also, is there a way to find out what languages are set in his preferences?
Not having a Watch on hand, I don't think anyone here knows. (Edit: this was first posted before the Watch launched.) But even though it'd be really cool if there were dictation software that could guess which language you're speaking from word to word, watchOS is no different from iOS in this respect.
In iOS, Siri listens only in the language you set in Settings, and dictation listens only in the language of the active keyboard (whose microphone button you pressed to start dictation).
In watchOS, Siri likewise has a set language. Dictation is based on the keyboard language last used on your paired phone, but you can also change the language during text entry with a force press. That's a user choice for a system service, so it's opaque to the app, just like the choice of keyboard is to an iOS app. (You're welcome to perform some sort of text analysis if you want to know what language the user has entered text in.)
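Neither system tells your app which language a dictation session used, but both halves of the question have partial answers in code. A sketch, assuming iOS 11 / watchOS 4 or later for the language guess:

import Foundation

// The user's preferred languages, in order (the second question):
let preferred = Locale.preferredLanguages  // e.g. ["en-US", "fr-FR"]

// A rough after-the-fact guess at the language of text the user entered
// (the "text analysis" mentioned above); returns a code like "fr" or nil.
func guessLanguage(of text: String) -> String? {
    return NSLinguisticTagger.dominantLanguage(for: text)
}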
I am trying to write a program for the iPhone that pops up messages/information depending on the combination of keys a user presses. For example, if a user types a certain keyword, my program will display a message (my program will be running in the background at the user's request). Can you please tell me if there is a way to capture which keys the user is pressing on his/her iPhone, whether in a text message, a web browser, etc.? Any ideas would be greatly appreciated.
This is not possible (thankfully) using public APIs. It might be possible (I don't know) using private APIs on jailbroken devices.
I'm developing a WebWorks application for the Blackberry Playbook. This page in their documentation says
You can display a specific type of virtual keyboard, depending on the type of input that is required. In addition to the default keyboard, you can choose from a selection of keyboards, such as a keyboard designed for typing in an email or a keyboard designed for typing in a browser.
https://bdsc.webapps.blackberry.com/html5/documentation/ww_best_practices/ui_components_tablet_microsites_1877108_11.html
However, I can't find any documentation on how to actually DO that. I have some fields for entering an email address, so I'd like to give the user an email-focused keyboard.
This is a WebWorks app, not Flash. Any ideas?
Those special keyboard layouts are determined by the type attribute of HTML5 <input> tags. Here are some of the "special" types defined in the HTML5 specification:
tel
url
email
number
When one of these fields is selected, the corresponding keyboard layout appears automatically.
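For example, to get an email-oriented keyboard for an address field, plain HTML5 markup is enough (the id/name values here are just illustrative):

<!-- the type attribute is what selects the virtual keyboard layout -->
<label for="to">Recipient:</label>
<input type="email" id="to" name="to" placeholder="name@example.com">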
BlackBerry also has its own document listing all the <input> types available under WebWorks.
In the Windows Mobile world you can create a so-called Today plugin that adds content to the phone's main screen -- the one where you see the number of missed calls, unread SMS messages and upcoming events. Is it possible to do something similar on the BlackBerry? I'd like to show some important info there, so that it is as visible and as easily reachable as possible.
To provide some information on the home screen, you can use the app icon:
Add a notification icon at the status bar in BlackBerry JDE 4.5.0
Another option, available from RIM OS 4.6, is the app indicator:
Blackberry - How to use notification icon in statusbar
These things are already displayed on the ribbon depending on the theme.
If you want to create/modify a theme, use the Plazmic CDK.
There are two ways to put individual icons onto the ribbon (similar to icons like mail, calendar, address book, browser, maps, etc.):
channel push: easiest if the user is on a BES; there is now a way to do it for non-enterprise people on BIS, called WebSignals.
write a MIDP or rimlet J2ME application. A rimlet is preferred unless you really need compatibility with other (non-RIM) J2ME MIDP devices.
OK, there is technically a third way, but I really don't think it's serious: MDS Studio.
All the tools are free but you may need a code signing key if you're writing a rimlet using a secure API.
http://na.blackberry.com/eng/support/docs/developers/?userType=21