I am trying to implement accessibility in my iOS project.
Is there a way to correct the pronunciation of specific words when VoiceOver is turned on? For example, the correct pronunciation of 'speech' is [spiːtʃ], but I want VoiceOver to read every occurrence of 'speech' the same as 'speak' [spiːk] throughout my whole project.
I know one way is to set the accessibility label of any UI element whose pronunciation I want to change to 'speak'. However, some elements are dynamic. For example, we get the label text from the back end, so we never know when the label text will be 'speech'. If I get the word 'speech' from the back end, I would like VoiceOver to read it as 'speak'.
Therefore, I would like to change a setting for VoiceOver so that every time the word is 'speech', VoiceOver reads it as 'speak'.
Can I do it?
Short Answer
Yes, you can do it, but please do not.
Long Answer
Can I do it?
Yes, of course you can.
Simply fetch the data from the back end, do a find-and-replace on the string for any words you want spoken differently (using a dictionary of replacements), and then set the new version of the string as the accessibility label.
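A minimal sketch of that idea (the replacement table and the whole-word regular expression are just illustrative):
import UIKit

// Hypothetical replacement table: visible word -> word VoiceOver should say.
let spokenReplacements = ["speech": "speak"]

func spokenVersion(of text: String) -> String {
    var result = text
    for (word, replacement) in spokenReplacements {
        // Whole-word, case-insensitive replacement so e.g. "speechless" is left alone.
        result = result.replacingOccurrences(
            of: "\\b\(word)\\b",
            with: replacement,
            options: [.regularExpression, .caseInsensitive]
        )
    }
    return result
}

// Show the original text, but have VoiceOver read the modified version.
let label = UILabel()
label.text = "I attended the speech given last night"
label.accessibilityLabel = spokenVersion(of: label.text ?? "")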
SHOULD you do it?
Absolutely not.
Every time someone tries to "fix" pronunciation it ends up making things a lot worse.
I don't even understand why you would want screen reader users to hear "speak" wherever everyone else sees "speech"; it does not make sense and is likely to break the meaning of sentences:
"I attended the speech given last night, it was very informative".
Would transform into:
"I attended the speak given last night, it was very informative"
Screen reader users are used to it.
A screen reader user is used to hearing things said differently (and incorrectly!); my guess is you have not been using a screen reader long enough to get used to the idiosyncrasies of screen reader speech.
Far from helping screen reader users, you will actually end up making things worse.
I have only ever overridden screen reader default behaviour twice: once when a version number was being read as a date, and once when a password manager read the password back and would try to read things as words.
Other than those very narrow examples I have not come across a reason to change things for a screen reader.
What about braille users?
You might be tempted to change things because they don't sound right. But braille users also use screen readers, and the changed text is what ends up on their braille display, which could be very confusing (as per the "speech" example above).
What about best practices?
"Give assistive technology users as similar an experience as possible to non assistive tech users". That is the number one guiding principle of accessibility, the second you change pronunciations and words, you potentially change the meaning of sentences and therefore offer a different experience.
Summing up
Anyway, this is turning into a rant when it isn't meant to be (my apologies, I am just trying to get the point across as I answer similar questions quite often!). Hopefully you get the idea: leave it alone and present the same information. I haven't even covered different speech synthesizers, language translation and the other things that using "unnatural" language can interfere with.
The easiest solution is to return a second string from the back end that is used just for the accessibilityLabel.
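For illustration, assuming the back end can send both strings (the field names here are made up):
import UIKit

// Hypothetical payload shape: the back end supplies both strings.
struct LabelPayload: Decodable {
    let text: String               // what is displayed on screen
    let accessibilityText: String  // what VoiceOver should read
}

func configure(_ label: UILabel, with payload: LabelPayload) {
    label.text = payload.text
    label.accessibilityLabel = payload.accessibilityText
}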
If you need a bit more control, you can pass an attributed string as the accessibility label, which gives you a number of different options for controlling pronunciation:
https://medium.com/macoclock/ios-attributed-accessibility-labels-f54b8dcbf9fa
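As a rough sketch of that attributed-label approach (iOS 11+; the IPA value here is purely illustrative):
import UIKit

let label = UILabel()
label.text = "speech"

// .accessibilitySpeechIPANotation (iOS 11+) tells VoiceOver how to pronounce the range.
let attributed = NSMutableAttributedString(string: "speech")
attributed.addAttribute(
    .accessibilitySpeechIPANotation,
    value: "spiːk",   // illustrative IPA value only
    range: NSRange(location: 0, length: attributed.length)
)
label.accessibilityAttributedLabel = attributed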
Related
I am building a custom suggestion/autocorrection feature in an iOS app. It must detect accidental adjacent keypresses and compare this to a known word list to suggest the word it thinks the user intended to type.
For example, if the custom word list contains cat, dog, monkey, and the user types cst, the app can determine that the most likely word was cat (because s is adjacent to the a key).
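A minimal sketch of the kind of check I have in mind (the adjacency map and word list are hard-coded placeholders):
// Hard-coded partial QWERTY adjacency map (placeholder data).
let qwertyNeighbours: [Character: Set<Character>] = [
    "a": ["q", "w", "s", "z"],
    "s": ["a", "w", "e", "d", "x", "z"],
    "c": ["x", "d", "f", "v"],
    "t": ["r", "f", "g", "y"]
]

let wordList = ["cat", "dog", "monkey"]

// A candidate matches if every typed character is either correct
// or sits next to the intended character on the keyboard.
func suggestion(for typed: String) -> String? {
    return wordList.first { candidate in
        guard candidate.count == typed.count else { return false }
        return zip(typed, candidate).allSatisfy { typedChar, intendedChar in
            typedChar == intendedChar
                || (qwertyNeighbours[intendedChar]?.contains(typedChar) ?? false)
        }
    }
}

let best = suggestion(for: "cst")   // "cat" - "s" is adjacent to "a"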
This will work on a standard QWERTY keyboard, but what happens if the user is using an AZERTY keyboard?
For the autocorrect/suggest to work reliably, the app must be able to detect the keyboard layout in use.
In iOS, it is possible to obtain a UITextInputMode object from a UITextField. This object has a primaryLanguage (string) property, which will display the locale (e.g. en-GB), but this does not contain enough granularity to distinguish between English (Australia) QWERTY and English (Australia) AZERTY. In both cases, the primaryLanguage is en-AU.
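For example (assuming a UITextField called textField whose keyboard is currently active):
import UIKit

let textField = UITextField()   // in practice, the field that currently has focus
// e.g. "en-AU" for an English (Australia) keyboard, whether its layout is QWERTY or AZERTY.
print(textField.textInputMode?.primaryLanguage ?? "unknown")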
Is it possible to detect the keyboard layout in iOS?
I have not been able to find a clean solution to this problem.
Maybe this would be worth a TSI ticket to discuss it with Apple employees.
I know that this will not be a satisfying answer, but I would still like to share my thoughts here for future readers:
Private API of UITextInputMode:
textField.textInputMode?.value(forKey: "identifierWithLayouts")
This will return a string like de_DE#sw=QWERTZ-German;hw=Automatic from which you can infer the keyboard layout.
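A hedged sketch of reading it (it is private API, so it can change or disappear in any iOS release and may cause App Review problems):
import UIKit

// Private API: the "identifierWithLayouts" key may change or vanish in a future iOS version.
func currentKeyboardLayout(of textField: UITextField) -> String? {
    guard let identifier = textField.textInputMode?
        .value(forKey: "identifierWithLayouts") as? String else { return nil }
    // e.g. "de_DE#sw=QWERTZ-German;hw=Automatic" -> "QWERTZ"
    for layout in ["QWERTY", "QWERTZ", "AZERTY"] where identifier.contains(layout) {
        return layout
    }
    return nil
}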
UserDefaults
UserDefaults.standard.object(forKey: "AppleKeyboards")
This will return a list of all keyboards that the user has installed. In most cases, this will only be one language (besides the emoji keyboard).
For example:
Optional(<__NSCFArray 0x600003b8e6c0>(en_US#sw=QWERTY;hw=Automatic,emoji#sw=Emoji))
You could also iterate over UserDefaults.standard.dictionaryRepresentation() and search for QWERTZ/QWERTY/AZERTY within the values.
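A sketch of that idea (the "AppleKeyboards" key is undocumented, so treat it as fragile):
import Foundation

// Undocumented key: the value is an array of strings like
// "en_US#sw=QWERTY;hw=Automatic" and "emoji#sw=Emoji".
let keyboards = UserDefaults.standard.object(forKey: "AppleKeyboards") as? [String] ?? []
let knownLayouts = ["QWERTY", "QWERTZ", "AZERTY"]
let detectedLayouts = keyboards.compactMap { keyboard in
    knownLayouts.first { keyboard.contains($0) }
}
print(detectedLayouts)   // e.g. ["QWERTY"]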
With much manual effort, you could maybe encode UITextInputModes to binary data in all the ambiguous cases like en_AU. Something like
try NSKeyedArchiver.archivedData(withRootObject: textField.textInputMode!, requiringSecureCoding: false)
can then be used to compare binary encodings of the user's textInputMode at runtime.
I have found this old question that may have a solution for you. Not sure it's still working, but the question shows how to get the currently installed keyboards, and someone provided a "gray area" solution, as it seems that there is no direct way to achieve what you intend to do.
Hope this helps.
The AppleLanguages object at index 0 is a common way to get the input language. Instead of trying to determine which language users are using and its default layout, when I worked on a custom keyboard extension I used one of the approaches recommended by Apple: a separate keyboard layout for each language. Otherwise I don't think you will get stable and productive prediction or autocorrect. For autocorrect I used SymSpell (https://github.com/AmitBhavsarIphone/SymSpell) and the dictionaries from https://github.com/wolfgarbe/SymSpell/tree/master/SymSpell.FrequencyDictionary to build my own Realm database for each language. It was a fair amount of work, but my keyboard extension was eventually published in the App Store. [Note: I am not affiliated with the SymSpell owners or developers.]
I've got a field on an iPhone app that contains "1.2 m". VoiceOver will speak this as "1 point 2 metres". I was a little surprised that VoiceOver was smart enough to understand units.
However, I have a different field that contains the text "1.2 m/s" which VoiceOver speaks as "1 point 2 metres slash S", which obviously isn't what I want. Another slight oddity is that in the Google Earth app, the longitude and latitude are pronounced as "xx degrees, xx minutes, xx inches" which is clearly wrong.
This raises a couple of questions for me:
What units does iOS VoiceOver understand?
Do I have any control over what VoiceOver says without setting the text explicitly in the accessibilityLabel? Can I tell it to understand that "m/s" is pronounced "metres per second"?
Bear in mind that people who depend on VoiceOver and use it every day to access information are familiar with its quirks and expect to hear information read out in a consistent way. Any efforts to work around those quirks might create an experience that is actually more confusing. That said, if your intended reader is unfamiliar with the units/numbers expressed, then a parenthetical clarification of any abbreviations will be of benefit to all users (not just those using VoiceOver).
For your first question, to be honest, I don't know exactly which units VoiceOver handles, but I always run tests to be sure of what gets vocalized: that's the best way to be 100% sure, in my view.
As for your second question, I always format the dates, times and numbers I use in my labels, so as not to get unpleasant surprises with a new iOS release that interprets these elements oddly: I follow these steps to format that data.
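For a unit like "m/s" specifically, one option I would consider (a sketch using MeasurementFormatter, not part of the steps above, so the spelled-out and localized unit name comes from the system):
import UIKit

let speed = Measurement(value: 1.2, unit: UnitSpeed.metersPerSecond)
let formatter = MeasurementFormatter()
formatter.unitOptions = .providedUnit   // keep m/s rather than converting for the locale

formatter.unitStyle = .short            // compact form for the visible text, e.g. "1.2m/s"
let visibleText = formatter.string(from: speed)

formatter.unitStyle = .long             // spelled out for VoiceOver, e.g. "1.2 meters per second"
let spokenText = formatter.string(from: speed)

let label = UILabel()
label.text = visibleText
label.accessibilityLabel = spokenText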
I am trying to use OpenEars for a small part of my app. I have three or four keywords that I want to be able to "listen" for, something like "Add", "Subtract", etc. I am just using the sample app found here. I want to have a special case in the app when I hear "Add" etc., as opposed to a word that is not one of my four keywords. Right now I set my language to be only the four keywords, but whenever the OpenEars API hears anything, it picks between my four keywords. So if I cough, it picks the closest word out of the four.
How can I listen for a specific word without always choosing one of the keywords?
I was thinking I could have a whole bunch of words, a few hundred, and just check which word was spoken, and have a special case for my four keywords, but I don't want to have to type out each word. Does OpenEars provide any default languages?
OpenEars developer here. Check out the dynamic grammar generation API that was just added in OpenEars 1.7 which may provide the right results for your requirements: http://www.politepix.com/2014/04/10/openears-1-7-introducing-dynamic-grammar-generation/
This approach might be more suitable for keyword detection and detection of fixed phrases. Please bring further questions to the OpenEars forums if you'd like to troubleshoot them with me.
Are there any standards for keyboard shortcut key localization?
I am developing a web application in which, for example, I have set Alt + S as the keyboard shortcut for the "Submit" button. This will work fine on an English keyboard.
But what will happen on other (non-English) keyboards? Do we need to create separate shortcut keys for each language?
Or will having one common English shortcut do? Are there any best practices for this?
Definitely, English shortcuts won't do. Or...
There are two possible issues:
you may use a shortcut that is also used by the (possibly localized) web browser or one of its extensions - it probably won't work
if you have forms with a text area or text field where the user can enter data, your shortcuts might be mapped to one of the national (diacritic) letters (e.g. right-Alt + A, right-Alt + C, etc. mean something in Polish).
I do not think any best practices exist. And I am not very fond of "localized" shortcuts - I tend to use applications both in English and in my mother tongue, and I really hate memorizing two sets of shortcuts... What I think will work best is to give users the opportunity to re-map keyboard shortcuts, although that is problematic from a programming point of view. Alternatively, you may decide to create different sets of shortcuts and allow users to switch between them (or suggest their mapping).
I don't know if my approach to this is fundamentally wrong, but I'm struggling to get my head around a (seemingly trivial?!) localisation issue.
I want to display the title of a 'System' UITabBarItem (More, Favorites, Featured, etc...) in a navigation bar. But where do I get the string from? The strings file of the MainWindow.nib doesn't contain the string (I didn't expect it to) and reading the title of the TabBarItem returns nil, which is what stumped me.
I've been told there's no way to achieve it and I'll just have to add my own localised string for the terms in question. But I simply don't (want to) believe that!! That may be easy enough in some languages, but looking up, say, "More" already presents me with more than one possible word in some languages. I'm not happy about simply sending these words for translation either, because it still depends on the translator knowing exactly which term Apple uses. So am I missing something simple here? What do other people do?
Obviously, setting the system language on my test device and simply looking to see what titles the Tab Items have is another 'obvious' possibility. But I really have a problem with half-baked workarounds like that. That'll work for most languages, but I'm really gonna have fun when it comes to Russian or Japanese.
I'm convinced there must be a more reliable way to do this. Surely there must be a .strings file somewhere in the SDK that has these strings defined?
Thanks in advance...
Rich
The simple and unfortunate answer is that aside from a very few standard elements (e.g. a Back button), you need to localize all strings yourself. Yes, UIKit has its own Localization.strings file but obviously that's outside of your app sandbox so you don't have access to it.
I filed a bug with Apple years ago about providing OS-level localization for common button titles, tab item labels, etc. That bug is still open but obviously they haven't done it yet (sorry, I don't have the radar # handy).
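In practice, localizing those titles yourself looks something like this (the key names and translations are placeholders you would need to verify against what iOS actually shows):
import UIKit

// Localizable.strings (en): "tab.more" = "More";
// Localizable.strings (de): "tab.more" = "Mehr";   // verify against the wording iOS itself uses
let moreTitle = NSLocalizedString("tab.more",
                                  comment: "Title of the 'More' tab and its navigation bar")

let moreController = UIViewController()
moreController.tabBarItem = UITabBarItem(title: moreTitle, image: nil, tag: 0)
moreController.navigationItem.title = moreTitle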