BlackBerry's STATUS_FOUR_WAY input?

The documentation for Field in the BlackBerry 4.7 API (http://www.blackberry.com/developers/docs/4.7.0api/net/rim/device/api/ui/Field.html#navigationClick(int,%20int)) shows that the source of a navigationClick can be determined "by checking the KeypadListener.STATUS_TRACKWHEEL and KeypadListener.STATUS_FOUR_WAY bits in the status parameter" and that "exactly one of them will be set".
I'm having trouble understanding some of this documentation. Would anyone be able to explain what the "four way input device" is that STATUS_FOUR_WAY represents? And if the navigationClick event is triggered by a touch-screen pointer press (e.g., on the BlackBerry Storm), which of these bits should I expect to be set? Neither of them seems like the "correct" source, but the docs imply one of them will be set.
Thanks for the help!

The "four way input device" is the trackwheel on older device than Storm, like the Curve and the Bold.
When you move it, you can catch which "way" the user is using the wheel.
Has for the Screen touch, the best way to know what it triggers is reading the API 5.0, or testing it yourself.
In the 5.0 API they recommand using this Screen.navigationClick(int, int)
http://www.blackberry.com/developers/docs/5.0.0api/index.html

Related

How to change the VoiceOver pronunciation in Swift?

I am trying to implement accessibility in my iOS project.
Is there a way to correct the pronunciation of specific words when VoiceOver is turned on? For example, the correct pronunciation of 'speech' is [spiːtʃ], but I want VoiceOver to read every occurrence of the word 'speech' the same as 'speak' [spiːk] throughout my project.
I know I can set the accessibility label of any UI element whose pronunciation I want to change to 'speak'. However, some elements are dynamic. For example, we get the label text from the back end, so we never know when the label text will be 'speech'. If I get the word 'speech' from the back end, I would like to hear VoiceOver read it as 'speak'.
Therefore, I would like to change the setting for VoiceOver so that every time the word is 'speech', it is read as 'speak'.
Can I do it?
Short answer
Yes, you can do it, but please do not.
Long answer
Can I do it?
Yes, of course you can.
Simply fetch the data from the back end and do a find-and-replace on the string, using a dictionary of words you want spoken differently, then set the new version of the string as the accessibility label.
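A minimal sketch of that approach in Swift (the overrides dictionary and the UILabel are illustrative, not a specific API):

    import UIKit

    // Hypothetical table of words VoiceOver should speak differently.
    let spokenOverrides = ["speech": "speak"]

    // Build the string VoiceOver should speak from the text shown on screen.
    func spokenText(for visibleText: String) -> String {
        var result = visibleText
        for (word, replacement) in spokenOverrides {
            result = result.replacingOccurrences(of: word,
                                                 with: replacement,
                                                 options: .caseInsensitive)
        }
        return result
    }

    let label = UILabel()
    label.text = "The speech was informative"                  // what sighted users see
    label.accessibilityLabel = spokenText(for: label.text ?? "")  // what VoiceOver says

Any word in the dictionary gets swapped before the string is handed to VoiceOver; the visible text is left untouched.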
SHOULD you do it?
Absolutely not.
Every time someone tries to "fix" pronunciation, it ends up making things a lot worse.
I don't even understand why you would want screen reader users to hear "speak" whenever everyone else sees "speech"; it does not make sense and is likely to break the meaning of sentences:
"I attended the speech given last night, it was very informative."
Would transform into:
"I attended the speak given last night, it was very informative."
Screen reader users are used to it.
A screen reader user is used to hearing things said differently (and incorrectly!); my guess is you have not been using a screen reader long enough to get used to the idiosyncrasies of screen reader speech.
Far from helping screen reader users, you will actually end up making things worse.
I have only ever overridden screen reader default behaviour twice: once when a version number was being read as a date, and once in a password manager that read the password back and would try to read things as words.
Other than those very narrow examples, I have not come across a reason to change things for a screen reader.
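As a hedged illustration of the version-number case above (the label text and the spoken wording are made up):

    import UIKit

    let versionLabel = UILabel()
    versionLabel.text = "1.2.4"
    // Without an override, some screen reader voices read "1.2.4" as a date.
    // Spell out the intent explicitly; the wording here is illustrative only.
    versionLabel.accessibilityLabel = "Version 1 point 2 point 4"

That is the entire trick: an explicit label for one pathological string, not a global rewrite of pronunciation.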
What about braille users?
You could change things because they don't sound right, but braille users also use screen readers, and changing things on them could be very confusing (as per the "speech" example above).
What about best practices?
"Give assistive technology users as similar an experience as possible to non assistive tech users." That is the number one guiding principle of accessibility. The second you change pronunciations and words, you potentially change the meaning of sentences and therefore offer a different experience.
Summing up
Anyway, this is turning into a rant when it isn't meant to be (my apologies, I am just trying to get the point across, as I answer similar questions to this quite often!). Hopefully you get the idea: leave it alone and present the same information. I haven't even covered different speech synthesizers, language translation and other things that using "unnatural" language can interfere with.
The easiest solution is to return a second string from the back end that is used just for the accessibilityLabel.
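If the back end can be extended, that might look something like this (the JSON field names and the LabelPayload model are hypothetical):

    import Foundation

    // Hypothetical payload: the server returns both the visible text and a
    // separate string meant only for VoiceOver.
    struct LabelPayload: Codable {
        let text: String                 // e.g. "speech"
        let accessibilityText: String    // e.g. "speak"
    }

    let json = #"{"text": "speech", "accessibilityText": "speak"}"#
    let payload = try! JSONDecoder().decode(LabelPayload.self,
                                            from: Data(json.utf8))
    // label.text = payload.text
    // label.accessibilityLabel = payload.accessibilityText

The screen then shows payload.text while VoiceOver reads payload.accessibilityText, so the spoken string is controlled by whoever already knows the content.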
If you need a bit more control, you can set an NSAttributedString as the accessibilityAttributedLabel, with a number of different options for controlling pronunciation:
https://medium.com/macoclock/ios-attributed-accessibility-labels-f54b8dcbf9fa
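A short sketch of that attributed-label approach (iOS 11+; the IPA value is illustrative and borrowed from the question):

    import UIKit

    let label = UILabel()
    let text = "speech"
    label.text = text

    let attributed = NSMutableAttributedString(string: text)
    if let range = text.range(of: "speech") {
        // Ask VoiceOver to pronounce this range using the given IPA string.
        attributed.addAttribute(.accessibilitySpeechIPANotation,
                                value: "spiːk",
                                range: NSRange(range, in: text))
    }
    label.accessibilityAttributedLabel = attributed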

MergZXing GetBarcode() captures a barcode but the Cancel and ! buttons do not respond; any workaround?

I'm using LiveCode 8.0 Indy.
I have the MergEXT that is bundled with LiveCode, and am using the mergZXingGetBarcode() function.
The function correctly displays the control and will capture a barcode; however, if I press Cancel or !, nothing happens until AFTER the control has successfully captured a barcode.
I suspect it is handling these actions in the background; is there any way for me to catch this and close the control?
Or maybe hide those buttons?
Here is the code I'm using (from the sample app):

    on mouseUp
       try
          -- show the scanner; the scanned barcode is returned by the function
          put mergZXingGetBarcode() into fld 1
       catch e
          -- display any error thrown by the external
          put e
       end try
    end mouseUp
Thank you
"This sounds like a bug. Please report it here FYI the barcode scanner in mergAV has considerably better performance and as the ZXing library is no longer maintained for iOS I usually encourage people in that direction. It does mean you need to create your own UI for scanning though." – Monte Goulding
This answer was taken from the comments, this seemed like the best way to mark as answered.

How can you synthesize speech using phonemes on iOS?

I'm working on an application that requires the use of a text-to-speech synthesizer. Implementing this was rather simple on iOS using AVSpeechSynthesizer. However, when it comes to customizing the synthesis, I was directed to documentation about speech synthesis for an OS X-only API, which allows you to input phoneme pairs in order to customize word pronunciation. Unfortunately, this interface is not available on iOS.
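For context, the simple setup I mean looks roughly like this (the text and voice are illustrative):

    import AVFoundation

    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: "Hello, world")
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    // Plain-text input only; this gives no phoneme-level control.
    synthesizer.speak(utterance)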
I was hoping someone might know of a similar library or plugin that might accomplish the same task. If you do, it would be much appreciated if you would lend a hand.
Thanks in advance!
AVSpeechSynthesizer on iOS is not capable (out of the box) of working with phonemes. NSSpeechSynthesizer is capable of it, but that's not available on iOS.
You can create an algorithm that produces short phonemes, but it would be incredibly difficult to make it sound good by any means.
... allows you to input phoneme pairs, in order to customize word pronunciation. Unfortunately, this interface is not available on iOS.
This kind of interface is definitely available on iOS: in your device settings (iOS 12), once you reach General > Accessibility > Speech > Pronunciations:
Select the '+' icon to add a new phonetic element.
Name this new element in order to quickly find it later on.
Tap the microphone icon.
Vocalize an entire sentence or a single word.
Listen to the different system proposals.
Validate your choice with the 'OK' button or cancel to start over.
Tap the back button to confirm the newly created phonetic element.
Find all the generated elements in the Pronunciations page.
Following the steps above, you will be able to synthesize speech using phonemes for iOS.

Speech input for visually impaired users without the need to tap the screen

We're working on an app for blind and visually impaired users. We've been experimenting with a third-party library to capture spoken user input and convert it to text, which we then parse as commands to control the app. The problem is that the word recognition is not very good, and certainly nowhere near as good as what iOS uses for voice input on a text field.
I'd like to experiment with that, but our users are mostly unable to tap a text field, then hit the mic button on the popup keyboard, then hit the done button, or even dismiss any of it. I'm not even sure they can deal with a single tap anywhere on the screen; it might be too difficult for some. So I'd like to automate that for them, but I don't see anything in the docs that indicates it is possible. Is it even possible, and if so, what's the proper way to do it so that it passes App Store review?
The solution for you is to implement keyword spotting, so that speech recognition is activated by a spoken keyword instead of a button tap. After that you can record commands/text and recognize them with any service you need. Something like the "OK Google" activation on the Moto X.
There are several keyword-activation libraries for iOS; one possible solution is OpenEars, based on the open-source speech recognition library CMUSphinx. If you want to use Pocketsphinx directly, you can find a keyword-activation implementation in the kws branch in Subversion (branches/kws).
The only way to get the iOS dictation is to sign up with Nuance yourself: http://dragonmobile.nuancemobiledeveloper.com/ - it's expensive, because it's the best. Presumably, Apple's contract prevents them from exposing an API.
The built-in iOS accessibility features allow immobilized users to access dictation (and other keyboard buttons) through tools like VoiceOver and AssistiveTouch. It may not be worth reinventing this if your users might already be familiar with those tools.

How to read from a cheap generic USB device?

I bought a cheap RFID reader from eBay, just to play about with. There is no API; it just writes to stdin - that is to say, if you have Notepad open and tap an RFID tag on the reader, its ID number appears in the Notepad window.
I am looking around for a reasonably priced reader/writer with an actual API (any recommendations?).
Until then I need to knock together a quick demo using what I have, just to prove the concept.
How can I best intercept the input from the USB connection? (And is there a free VCL control to do this?)
I guess if I just have a modal form with an active control, I can hook its OnChange event. But modal forms seem a bit rude. Maybe I can hook the keyboard input, as the device seems to be injecting what look like typed characters?
Any ideas? Please tell me if I am not explaining this clearly enough.
Thanks in advance for your help.
In the end, I just hooked the keyboard rather than trying to intercept the USB. It works if I check that my application is active, and I pass on the keystrokes otherwise. My app doesn't have any keyboard input, just mouse clicks (and what I read from RFID is digits only), so I can still handle things like Alt+F4. Maybe not the perfect solution for everyone, but it was all that I could get to work.
Based on your description, it sounds like the RFID reader is presenting a USB HID keyboard interface.
I don't know if there is anything similar in Delphi, but in libusb there is libusb_claim_interface, which requests that the OS hand control of the device over to your program.
A Delphi library for working with HID devices:
http://www.soft-gems.net/index.php?option=com_content&task=view&id=14&Itemid=33
