iOS detect keyboard layout (e.g. QWERTY, AZERTY)

I am building a custom suggestion/autocorrection feature in an iOS app. It must detect accidental adjacent keypresses and compare this to a known word list to suggest the word it thinks the user intended to type.
For example, if the custom word list contains cat, dog, monkey, and the user types cst, the app can determine that the most likely intended word was cat (because s is adjacent to the a key).
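For illustration, here is a minimal sketch of the adjacency check I have in mind (the key map is abbreviated and all names are hypothetical):

// Adjacency map for a QWERTY layout (abbreviated; remaining keys omitted).
let qwertyNeighbours: [Character: Set<Character>] = [
    "a": ["q", "w", "s", "z"],
    "s": ["a", "w", "e", "d", "x", "z"],
]

// Two words match if they have equal length and differ only where the
// typed key is adjacent to the intended key.
func isLikelyTypo(typed: String, intended: String) -> Bool {
    guard typed.count == intended.count else { return false }
    return zip(typed, intended).allSatisfy { typedChar, intendedChar in
        typedChar == intendedChar || (qwertyNeighbours[intendedChar]?.contains(typedChar) ?? false)
    }
}

print(isLikelyTypo(typed: "cst", intended: "cat")) // true: s is adjacent to a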
This will work on a standard QWERTY keyboard, but what happens if the user is using an AZERTY keyboard?
For the autocorrect/suggest to work reliably, the app must be able to detect the keyboard layout in use.
In iOS, it is possible to obtain a UITextInputMode object from a UITextField. This object has a primaryLanguage (string) property, which returns the locale (e.g. en-GB), but this does not provide enough granularity to distinguish English (Australia) QWERTY from English (Australia) AZERTY: in both cases, primaryLanguage is en-AU.
Is it possible to detect the keyboard layout in iOS?

I have not been able to find a clean solution to this problem.
Maybe this would be worth a TSI ticket to discuss it with Apple employees.
I know that this will not be a satisfying answer, but I would still like to share my thoughts here for future readers:
Private API of UITextInputMode:
textField.textInputMode?.value(forKey: "identifierWithLayouts")
This will return a string like de_DE#sw=QWERTZ-German;hw=Automatic, from which you can infer the keyboard layout.
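For example, a hedged sketch (identifierWithLayouts is a private key, so this may break in any iOS release and may fall foul of App Store review; textField is the text field from the question):

if let identifier = textField.textInputMode?.value(forKey: "identifierWithLayouts") as? String {
    // identifier looks like "de_DE#sw=QWERTZ-German;hw=Automatic"
    let isAZERTY = identifier.contains("AZERTY")
    print(isAZERTY)
}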
UserDefaults
UserDefaults.standard.object(forKey: "AppleKeyboards")
This will return a list of all keyboards that the user has installed. In most cases, this will only be one language (besides the emoji keyboard).
For example:
Optional(<__NSCFArray 0x600003b8e6c0>(en_US#sw=QWERTY;hw=Automatic,emoji#sw=Emoji))
You could also iterate over UserDefaults.standard.dictionaryRepresentation() and search for QWERTZ/QWERTY/AZERTY within the values.
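A minimal sketch of that check (again, AppleKeyboards is undocumented, so treat the result as a hint only):

if let keyboards = UserDefaults.standard.object(forKey: "AppleKeyboards") as? [String] {
    // e.g. ["en_US#sw=QWERTY;hw=Automatic", "emoji#sw=Emoji"]
    let usesAZERTY = keyboards.contains { $0.contains("AZERTY") }
    print(usesAZERTY)
}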
With much manual effort, you could maybe encode UITextInputModes to binary data in all ambiguous cases like en_AU. Something like
NSKeyedArchiver.archivedData(withRootObject: textField.textInputMode, requiringSecureCoding: false) can then be used to compare binary encodings of the user's textInputMode at runtime.
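A sketch of that comparison, assuming you have captured reference archives ahead of time (knownAZERTYArchive is hypothetical):

let knownAZERTYArchive = Data() // hypothetical: captured beforehand from an AZERTY device
if let mode = textField.textInputMode,
   let data = try? NSKeyedArchiver.archivedData(withRootObject: mode,
                                                requiringSecureCoding: false) {
    let isAZERTY = (data == knownAZERTYArchive)
    print(isAZERTY)
}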

I have found this old question that may have a solution for you. I am not sure it still works, but the question shows how to get the currently installed keyboards, and someone provided a "gray area" solution, as it seems there is no direct way to achieve what you intend to do.
Hope this helps.

Reading the AppleLanguages object at index 0 is a common way to get the input language. Rather than trying to determine which language and default layout the user is using, in my work on a custom keyboard extension I used one of the approaches Apple recommends: a separate keyboard layout for each language. Otherwise I don't think you will get stable, productive prediction and autocorrect. For autocorrect I used SymSpell (https://github.com/AmitBhavsarIphone/SymSpell) and different dictionaries from https://github.com/wolfgarbe/SymSpell/tree/master/SymSpell.FrequencyDictionary to build my own RealmDb for each language. It was a fair amount of work, but in the end my keyboard extension was published in the App Store. [Note: I am not affiliated with the SymSpell owners or coders]
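For the first point, a minimal sketch (AppleLanguages is an undocumented defaults key, so treat the result as a hint only):

if let languages = UserDefaults.standard.array(forKey: "AppleLanguages") as? [String],
   let primary = languages.first {
    print("Primary input language:", primary) // e.g. "en-AU"
}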

Related

How to change the VoiceOver pronunciation in swift?

I am trying to implement accessibility in my iOS project.
Is there a way to correct the pronunciation of some specific words when VoiceOver is turned on? For example, the correct pronunciation of 'speech' is [spiːtʃ], but I want VoiceOver to read every occurrence of the word 'speech' the same as 'speak' [spiːk] throughout my whole project.
I know I can set the accessibility label of any UI element whose pronunciation I want to change to 'speak'. However, some elements are dynamic. For example, we get the label text from the back end, so we never know when the label text will be 'speech'. If I get the word 'speech' from the back end, I would like to hear VoiceOver read it as 'speak'.
Therefore, I would like to change the setting for VoiceOver: every time the word is 'speech', VoiceOver will read it as 'speak'.
Can I do it?
Short answer
Yes you can do it, but please do not.
Long answer
Can I do it?
Yes, of course you can.
Simply fetch the data from the backend and do a find-replace on the string for any words you want spoken differently, using a dictionary of replacements, then set the new version of the string as the accessibility label.
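A minimal sketch of that approach, in case you decide the trade-off is worth it (the replacements dictionary and label contents are hypothetical):

import UIKit

let replacements = ["speech": "speak"] // hypothetical word list

func spokenVersion(of text: String) -> String {
    var result = text
    for (word, replacement) in replacements {
        result = result.replacingOccurrences(of: word, with: replacement, options: .caseInsensitive)
    }
    return result
}

let label = UILabel()
label.text = "I attended the speech given last night."
label.accessibilityLabel = spokenVersion(of: label.text ?? "")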
SHOULD you do it?
Absolutely not.
Every time someone tries to "fix" pronunciation it ends up making things a lot worse.
I don't even understand why you would want screen reader users to hear "speak" wherever everyone else sees "speech"; it does not make sense and is likely to break the meaning of sentences:
"I attended the speech given last night, it was very informative".
Would transform into:
"I attended the speak given last night, it was very informative"
Screen reader users are used to it.
A screen reader user is used to hearing things said differently (and incorrectly!); my guess is you have not been using a screen reader long enough to get used to the idiosyncrasies of screen reader speech.
Far from helping screen reader users you will actually end up making things worse.
I have only ever overridden screen reader default behaviour twice: once when a version number was being read as a date, and once in a password manager that read the password back and would try to read things as words.
Other than those very narrow examples I have not come across a reason to change things for a screen reader.
What about braille users?
You could be tempted to change things because they don't sound right. But braille users also use screen readers, and changing the text for them could be very confusing (as per the "speech" example above).
What about best practices?
"Give assistive technology users as similar an experience as possible to non assistive tech users". That is the number one guiding principle of accessibility, the second you change pronunciations and words, you potentially change the meaning of sentences and therefore offer a different experience.
Summing up
Anyway, this is turning into a rant when it isn't meant to be (my apologies, I am just trying to get the point across, as I answer similar questions quite often!). Hopefully you get the idea: leave it alone and present the same information. I haven't even covered different speech synthesizers, language translation, and the other things that using "unnatural" language can interfere with.
The easiest solution is to return a second string from the backend that is used just for the accessibilityLabel.
If you need a bit more control, you can pass an attributed string as the accessibility label, with a number of different options for controlling pronunciation:
https://medium.com/macoclock/ios-attributed-accessibility-labels-f54b8dcbf9fa
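For example, a minimal sketch using the IPA notation attribute (the IPA string here is an assumption for illustration):

import UIKit

let text = "speech"
let attributed = NSMutableAttributedString(string: text)
// Tell VoiceOver to pronounce this range using the given IPA notation.
attributed.addAttribute(.accessibilitySpeechIPANotation,
                        value: "spiːk",
                        range: NSRange(location: 0, length: attributed.length))

let label = UILabel()
label.text = text
label.accessibilityAttributedLabel = attributed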

iOS custom keyboard base?

I need to create a custom keyboard that looks/feels pretty much the same as the system keyboards, but for a language that iOS doesn't have:
whenever I have to type using the system keyboard, I'm subject to the autocorrect, which not only gives wrong options but also learns wrong words for that keyboard's language.
the language I need doesn't use 3 of the 26 Latin letters, but it does need diacritics on some others, as well as the apostrophe (') quite often, so it would be nice to repurpose those 3 keys.
My problem is that I'm not interested in creating a keyboard from bare UIViews just to do what in my opinion amounts to tweaks to the existing system keyboards. I was dumbstruck when I found out that I apparently have to recreate the whole experience myself instead of having some Apple-provided basis to build upon. I also can't see most developers being thrilled about this, so I began to think I may be wrong and there is something we can use after all. Can anyone enlighten me?

Where to find emoji's accessibility texts for iOS "voice over" feature?

I am working on an app that uses emojis on screen.
These emojis are displayed on buttons that can be pressed by the users.
To make this app meet accessibility requirements (VoiceOver, etc.), I need to get every emoji's description text, so that when the user has VoiceOver on, the emojis can be read aloud.
For example, when the user chooses a "smiley face" emoji, VoiceOver should read "smiley face" to the user. However, I cannot label each emoji manually, because there are thousands of them.
I am wondering where should I get all the emoji description texts?
Thanks!!
As you've noticed already, the Accessibility subsystem already knows how to accessibly describe an emoji if given one as part of an accessibility-oriented text (like the accessibilityLabel for a control).
However, should you ever need emoji descriptions for other purposes (perhaps some kind of accessibility accommodation that doesn't go through the OS's Accessibility system), it might help to know how to find them yourself.
You can do this with Swift String.applyingTransform or ObjC NSString.stringByApplyingTransform:. (Both of these are wrappers for CoreFoundation's CFStringTransform API, which is better documented and featured in an old NSHipster post.) Use the toUnicodeName transform to get the names for emoji and other special characters — for example, as noted in the docs, that transforms “🐶🐮” into “{DOG FACE}{COW FACE}”.
(As you might notice in the StringTransform docs and the aforelinked NSHipster article, there are lots of other fun things you can do with string transforms, too, like latinizing text from other scripts or producing the XML/HTML hex escape codes for unusual characters.)
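A minimal sketch of that transform:

// Derive readable names for emoji and other special characters.
let emoji = "🐶🐮"
if let names = emoji.applyingTransform(.toUnicodeName, reverse: false) {
    print(names) // \N{DOG FACE}\N{COW FACE} (the exact wrapper format may vary)
}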
Forgot to post my answer the other day.
Turns out that Apple has already handled this in the framework.
All we need to do is set accessibilityLabel to the emoji itself. Then it all reads out correctly, such as "smiley face", when VoiceOver is turned on.
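A minimal sketch (the button is hypothetical):

import UIKit

let button = UIButton(type: .system)
button.setTitle("😀", for: .normal)
button.accessibilityLabel = "😀" // VoiceOver reads out the emoji's description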
Awesome!

Implement Autocorrect in iOS 8 Keyboard Extension

I'm creating a custom iOS 8 keyboard as a pet project.
I'm trying to replicate the system keyboard as accurately as possible, but building it from the ground up.
I'm largely done with this. The final hurdle I'm encountering is with adding autocorrect to my keyboard. Is there a way I can have the autocorrect behave as it would on the regular system keyboard?
The UILexicon documentation is quite sparse.
EDIT:
Making some progress with this. The requestSupplementaryLexiconWithCompletion: method (on UIInputViewController) appears to return results only from my device's Contacts and keyboard shortcuts. I then went on to look at how to autocorrect an NSString and found the UITextChecker class, which has been available since iOS 3.2.
Using this approach I can achieve autocorrect suggestions on individual words, but I'm still investigating the ability to add context-aware autocorrect (e.g. correcting "arctic monkeys" to "Arctic Monkeys").
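Here is a minimal sketch of the per-word part with UITextChecker (the language code is an assumption):

import UIKit

func corrections(for word: String, language: String = "en_US") -> [String] {
    let checker = UITextChecker()
    let range = NSRange(location: 0, length: word.utf16.count)
    let misspelled = checker.rangeOfMisspelledWord(in: word,
                                                   range: range,
                                                   startingAt: 0,
                                                   wrap: false,
                                                   language: language)
    guard misspelled.location != NSNotFound else { return [] } // word looks fine
    return checker.guesses(forWordRange: misspelled, in: word, language: language) ?? []
}

print(corrections(for: "cst")) // suggestions for the misspelled word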
From the documentation, it seems that UILexicon is meant to help you create your own autocorrect. A UILexicon holds a list of UILexiconEntry objects, each containing a pair of strings: userInput, which I assume is what the user typed, and documentText, which I assume is what you should replace that input with. You call func requestSupplementaryLexiconWithCompletion(_ completionHandler: ((UILexicon!) -> Void)!) from UIInputViewController to get this UILexicon.
I am assuming that the UIInputViewController knows what has been written to the document proxy, since it is the one relaying those messages; that is how it knows what the user has input and, in turn, what to put in the UILexicon.
This is what I gathered from reading the documentation. I have not tested it, though it should not be very hard to verify.
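A minimal, untested sketch of that flow inside a keyboard extension (the class name and lookup helper are hypothetical):

import UIKit

class KeyboardViewController: UIInputViewController {
    var lexicon: UILexicon?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Ask for the supplementary lexicon (Contacts names, text replacements, ...).
        requestSupplementaryLexicon { [weak self] lexicon in
            self?.lexicon = lexicon
        }
    }

    // Return the documentText replacement for what the user typed, if any.
    func replacement(for input: String) -> String? {
        lexicon?.entries.first { $0.userInput.lowercased() == input.lowercased() }?.documentText
    }
}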
I hope it helps
Daniel
Check out this simple but very effective autocorrect implementation:
http://norvig.com/spell-correct.html
For autocompletion you can implement a trie, as sketched below.
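A minimal sketch of such a trie (all names hypothetical):

final class TrieNode {
    var children: [Character: TrieNode] = [:]
    var isWord = false
}

final class Trie {
    private let root = TrieNode()

    func insert(_ word: String) {
        var node = root
        for ch in word {
            if node.children[ch] == nil { node.children[ch] = TrieNode() }
            node = node.children[ch]!
        }
        node.isWord = true
    }

    // All stored words beginning with the given prefix.
    func completions(of prefix: String) -> [String] {
        var node = root
        for ch in prefix {
            guard let next = node.children[ch] else { return [] }
            node = next
        }
        var results: [String] = []
        collect(from: node, prefix: prefix, into: &results)
        return results
    }

    private func collect(from node: TrieNode, prefix: String, into results: inout [String]) {
        if node.isWord { results.append(prefix) }
        for (ch, child) in node.children {
            collect(from: child, prefix: prefix + String(ch), into: &results)
        }
    }
}

let trie = Trie()
["cat", "car", "cart", "dog"].forEach(trie.insert)
print(trie.completions(of: "ca")) // ["cat", "car", "cart"] in some order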

How can I access localisable strings for standard iOS system terms (E.g. Favorites, More...)?

I don't know if my approach to this is fundamentally wrong, but I'm struggling to get my head around a (seemingly trivial?!) localisation issue.
I want to display the title of a 'System' UITabBarItem (More, Favorites, Featured, etc...) in a navigation bar. But where do I get the string from? The strings file of the MainWindow.nib doesn't contain the string (I didn't expect it to) and reading the title of the TabBarItem returns nil, which is what stumped me.
I've been told there's no way to achieve it and I'll just have to add my own localised string for the terms in question. But I simply don't (want to) believe that!! That may be easy enough in some languages, but looking up, say, "More" already presents me with more than one possible word in some languages. I'm not happy about simply sending these words for translation either, because it still depends on the translator knowing exactly which term Apple uses. So am I missing something simple here? What do other people do?
Obviously, setting the system language on my test device and simply looking to see what titles the Tab Items have is another 'obvious' possibility. But I really have a problem with half baked workarounds like that. That'll work for most languages, but I'm really gonna have fun when it comes to Russian or Japanese.
I'm convinced there must be a more reliable way to do this. Surely there must be a .strings file somewhere in the SDK that has these strings defined?
Thanks in advance...
Rich
The simple and unfortunate answer is that, aside from a very few standard elements (e.g. a Back button), you need to localize all strings yourself. Yes, UIKit has its own Localization.strings file, but that's outside of your app sandbox, so you don't have access to it.
I filed a bug with Apple years ago about providing OS-level localization for common button titles, tab item labels, etc. That bug is still open but obviously they haven't done it yet (sorry, I don't have the radar # handy).
