I want to add Autocorrection and Suggestion to my custom keyboard.
There are already several similar questions on Stack Overflow, but they only talk about UILexicon, which as I understand it is just for getting the user's text shortcuts, and they say there is no way to access Apple's autocorrection and suggestion library.
I also saw some questions regarding UITextChecker, but I don't know whether it has access to Apple's native suggestion library.
Maybe there are some new classes for that?
I use four different systems for my keyboard:
1. A ranked word list. I have a list of the top 30,000 or so words, ranked from most used to least used. You can pay for lists; I just got a free one of about 42,000 words and edited it down a lot.
2. guessesForWordRange, provided by Apple (see the sketch after this list). It will guess words that are close to what you have typed. It does a fairly good job, but I had to filter out some things. The top guess sometimes has quotation marks around it, but other than that it works great.
3. completionsForPartialWordRange, also provided by Apple. It will return completed words, but in alphabetical order, not ranked by usage. Not much good on its own, but it is a great supplement to #1 and #2. (If this worked correctly, #1 wouldn't be needed.)
4. Special cases, mainly for contractions. When someone types didnt, I wanted it to auto-choose didn't. So I have almost all contractions specifically programmed in.
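For reference, here is a minimal sketch of how #2 and #3 map onto UITextChecker calls (the input string is made up, and error handling is omitted):

    import UIKit

    let checker = UITextChecker()
    let typed = "cst"  // hypothetical input
    let range = NSRange(location: 0, length: typed.utf16.count)

    // #2: guesses for a misspelled word, e.g. ["cat", "cast", ...]
    let guesses = checker.guesses(forWordRange: range, in: typed, language: "en_US") ?? []

    // #3: completions for a partial word, returned alphabetically, not by usage
    let completions = checker.completions(forPartialWordRange: range, in: typed, language: "en_US") ?? []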
So my word suggestions and autocorrection aren't perfect, but they do a decent job.
Hope this helps.
Edit: As of iOS 16 it seems completionsForPartialWordRange is working correctly, so having your own list of words shouldn't be needed anymore.
I am trying to implement accessibility in my iOS project.
Is there a way to correct the pronunciation of specific words when VoiceOver is turned on? For example, the correct pronunciation of 'speech' is [spiːtʃ], but I want VoiceOver to read every occurrence of the word 'speech' as 'speak' [spiːk] throughout my whole project.
I know one way: I can set the accessibility label of any UI element whose pronunciation I want to change to 'speak'. However, some elements are dynamic. For example, we get the label text from the back end, so we never know when the label text will be 'speech'. If I get the word 'speech' from the back end, I would like to hear VoiceOver read it as 'speak'.
Therefore, I would like to change the VoiceOver settings so that every time the word is 'speech', VoiceOver reads it as 'speak'.
Can I do it?
Short answer.
Yes you can do it, but please do not.
Long answer.
Can I do it?
Yes, of course you can.
Simply fetch the data from the backend, do a find-and-replace on the string using a dictionary of the words you want spoken differently, then set the new version of the string as the accessibility label.
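A minimal sketch of that approach (the override table, label, and backend string are all hypothetical):

    import UIKit

    // Hypothetical override table: words mapped to how they should be spoken.
    let spokenOverrides = ["speech": "speak"]

    func spokenVersion(of text: String) -> String {
        var result = text
        for (word, replacement) in spokenOverrides {
            result = result.replacingOccurrences(of: word, with: replacement, options: .caseInsensitive)
        }
        return result
    }

    let backendText = "I attended the speech given last night"   // e.g. fetched from the backend
    let label = UILabel()
    label.text = backendText                                     // sighted users see "speech"
    label.accessibilityLabel = spokenVersion(of: backendText)    // VoiceOver says "speak"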
SHOULD you do it?
Absolutely not.
Every time someone tries to "fix" pronunciation, it ends up making things a lot worse.
I don't even understand why you would want screen reader users to hear "speak" wherever anyone else sees "speech"; it does not make sense and is likely to break the meaning of sentences:
"I attended the speech given last night, it was very informative".
Would transform into:
"I attended the speak given last night, it was very informative"
Screen reader users are used to it.
A screen reader user is used to hearing things said differently (and incorrectly!); my guess is you have not been using a screen reader long enough to get used to the idiosyncrasies of screen reader speech.
Far from helping screen reader users, you will actually end up making things worse.
I have only ever overridden screen reader default behaviour twice: once when a version number was being read as a date, and once in a password manager that read the password back and would try to read it as words.
Other than those very narrow examples I have not come across a reason to change things for a screen reader.
What about braille users?
You might be tempted to change things because they don't sound right. But braille users also use screen readers, and changing things for them could be very confusing (as per the "speech" example above).
What about best practices?
"Give assistive technology users as similar an experience as possible to non assistive tech users". That is the number one guiding principle of accessibility, the second you change pronunciations and words, you potentially change the meaning of sentences and therefore offer a different experience.
Summing up
Anyway, this is turning into a rant when it isn't meant to be (my apologies, I am just trying to get the point across, as I answer questions similar to this quite often!). Hopefully you get the idea: leave it alone and present the same info. I haven't even covered the different speech synthesizers, language translation, and other things that using "unnatural" language can interfere with.
The easiest solution is to return a second string from the backend that is used just for the accessibilityLabel.
If you need a bit more control, you can use an attributed string for the accessibility label, with a number of different options for controlling pronunciation:
https://medium.com/macoclock/ios-attributed-accessibility-labels-f54b8dcbf9fa
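For example, a sketch using the IPA notation attribute (iOS 11+; the label and text are hypothetical):

    import UIKit

    let text = "The speech was informative"
    let attributed = NSMutableAttributedString(string: text)
    let range = (text as NSString).range(of: "speech")

    // Ask VoiceOver to pronounce just this range from IPA notation.
    attributed.addAttribute(.accessibilitySpeechIPANotation, value: "spiːtʃ", range: range)

    let label = UILabel()
    label.accessibilityAttributedLabel = attributed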
I am building a custom suggestion/autocorrection feature in an iOS app. It must detect accidental adjacent keypresses and compare this to a known word list to suggest the word it thinks the user intended to type.
For example, if the custom word list contains cat, dog, monkey, and the user types cst, the app can determine that the most likely word was cat (because s is adjacent to the a key)
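To make the idea concrete, here is a sketch of such a check (the adjacency table is hypothetical and only covers the key in the example; a real one would map the whole layout):

    // Which keys sit next to each key on a QWERTY layout (partial, for the example).
    let qwertyNeighbours: [Character: Set<Character>] = [
        "s": ["a", "w", "e", "d", "x", "z"],
    ]

    // True if `typed` could be `candidate` with adjacent-key substitutions only.
    func isAdjacentTypo(candidate: String, typed: String) -> Bool {
        guard candidate.count == typed.count else { return false }
        return zip(candidate, typed).allSatisfy { c, t in
            c == t || qwertyNeighbours[t]?.contains(c) == true
        }
    }

    isAdjacentTypo(candidate: "cat", typed: "cst")  // true: "s" is next to "a"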
This will work on a standard QWERTY keyboard, but what happens if the user is using an AZERTY keyboard?
For the autocorrect/suggest to work reliably, the app must be able to detect the keyboard layout in use.
In iOS, it is possible to obtain a UITextInputMode object from a UITextField. This object has a primaryLanguage (string) property, which will display the locale (e.g. en-GB), but this does not contain enough granularity to distinguish between English (Australia) QWERTY and English (Australia) AZERTY. In both cases, the primaryLanguage is en-AU.
Is it possible to detect the keyboard layout in iOS?
I have not been able to find a clean solution to this problem.
Maybe this would be worth a TSI ticket to discuss it with Apple employees.
I know that this will not be a satisfying answer, but I would still like to share my thoughts here for future readers:
Private API of UITextInputMode:
textField.textInputMode?.value(forKey: "identifierWithLayouts")
This will return a string like de_DE#sw=QWERTZ-German;hw=Automatic from which you can infer the keyboard layout.
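A sketch of how that string might be parsed (private API, so the key and format can change at any time, and App Review may reject its use):

    import UIKit

    let textField = UITextField()  // in practice, your active text field

    // e.g. "de_DE#sw=QWERTZ-German;hw=Automatic" -> "QWERTZ-German"
    if let id = textField.textInputMode?.value(forKey: "identifierWithLayouts") as? String,
       let software = id.split(separator: "#").last?.split(separator: ";").first {
        print(software.replacingOccurrences(of: "sw=", with: ""))
    }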
UserDefaults
UserDefaults.standard.object(forKey: "AppleKeyboards")
This will return a list of all keyboards that the user has installed. In most cases, this will only be one language (besides the emoji keyboard).
For example:
Optional(<__NSCFArray 0x600003b8e6c0>(en_US#sw=QWERTY;hw=Automatic,emoji#sw=Emoji))
You could also iterate over UserDefaults.standard.dictionaryRepresentation() and search for QWERTZ/QWERTY/AZERTY within the values.
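A sketch combining the above (again relying on an undocumented key, so treat it as fragile):

    import Foundation

    // Entries look like "en_US#sw=QWERTY;hw=Automatic".
    let knownLayouts = ["QWERTY", "QWERTZ", "AZERTY"]
    if let keyboards = UserDefaults.standard.object(forKey: "AppleKeyboards") as? [String] {
        for entry in keyboards {
            if let layout = knownLayouts.first(where: { entry.contains($0) }) {
                print("\(entry) -> \(layout)")
            }
        }
    }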
With much manual effort, you could maybe encode UITextInputModes to binary data in all ambiguous cases like en_AU. Something like
try NSKeyedArchiver.archivedData(withRootObject: textField.textInputMode!, requiringSecureCoding: false)
can then be used to compare binary encodings of the user's textInputMode at runtime.
I have found this old question that may have a solution for you. Not sure it's still working, but the question shows how to get the currently installed keyboards, and someone provided a "gray area" solution, as it seems that there is no direct way to achieve what you intend to do.
Hope this helps.
The AppleLanguages object at index 0 is the common way to get the input language. But instead of trying to determine which language and default layout users are using, in my work on a custom keyboard extension I used one of the approaches Apple recommends: a separate keyboard layout for each language. Otherwise I don't think you will get stable and productive prediction or autocorrection. For autocorrection I used SymSpell (https://github.com/AmitBhavsarIphone/SymSpell) and the dictionaries from https://github.com/wolfgarbe/SymSpell/tree/master/SymSpell.FrequencyDictionary to build my own Realm database for each language. It was a fair amount of work, but in the end my keyboard extension was published in the App Store. [Note: I am not affiliated with SymSpell's owners or developers.]
I need to create a custom keyboard that looks/feels pretty much the same as the system keyboards but is for a language that iOS doesn't have:
whenever I have to type using the system keyboard, I'm subject to the autocorrect, which not only gives wrong options but also learns wrong words for that keyboard's language.
the language I need doesn't use 3 of the 26 Latin letters, but it does need diacritics on some others, as well as the apostrophe quite often, so it would be nice to repurpose 3 of the keys for that.
My problem is that I'm not interested in creating a keyboard from bare UIViews just to make what, in my opinion, amounts to tweaks to the existing system keyboards. I was dumbstruck when I found out that apparently I do have to recreate the whole experience myself instead of having some Apple-provided basis to build upon. I also can't see most developers being thrilled about that, so I began to think I may be wrong and there is something we can use after all. Can anyone enlighten me?
Sorry for the generalized question...I have been hunting for a long time and haven't found anything I can use or easily adapt yet. I'd really appreciate any pointers!
I'm building a reference app that will contain several textbooks in plain-text format. I want the user to be able to perform a search, and get a table back with a list of results. I have a working prototype, but the search logic that I wrote isn't all that smart and it's been hell trying to make it better.
This is obviously a fairly common problem so I'm looking for a tool that I could adapt to the task. So far I've found Lucene (http://vafer.org/blog/20090107014544/) and Locayta (http://www.locayta.com/iOS-search-engine/locayta-search-mobile/)
Lucene appears to have been last updated for iOS 2...I don't even know if I'll be able to rework it myself. Maybe.
Locayta would probably work great, but a commercial license is $1,000 and I may not soon recoup that with this app, as it's a niche market.
Thanks!
We stumbled upon the same predicament where I work, and have yet to decide on a solution.
Locayta seems promising, but barring that, I've looked into SQLite's FTS3/FTS4 as well.
The only issue seemed to be the lack of a way to match partial words. It's easy to search for fields that contain whole words (e.g. "paper" matches "printer paper", "paper punch", and "sketch paper"), or words that start with something (e.g. "bi*" matches "binder" and "bicycle"), but there's no built-in way to match a suffix.
If you don't require that functionality, FTS3/FTS4 might work.
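For illustration, a minimal FTS4 prefix query from Swift (a sketch: it assumes an SQLite build with FTS enabled and skips all error handling):

    import SQLite3

    var db: OpaquePointer?
    sqlite3_open(":memory:", &db)

    sqlite3_exec(db, "CREATE VIRTUAL TABLE books USING fts4(content)", nil, nil, nil)
    sqlite3_exec(db, "INSERT INTO books VALUES ('printer paper')", nil, nil, nil)

    // Prefix match works ("pap*"), but there is no equivalent "*per" suffix match.
    var stmt: OpaquePointer?
    sqlite3_prepare_v2(db, "SELECT content FROM books WHERE content MATCH 'pap*'", -1, &stmt, nil)
    while sqlite3_step(stmt) == SQLITE_ROW {
        print(String(cString: sqlite3_column_text(stmt, 0)))  // "printer paper"
    }
    sqlite3_finalize(stmt)
    sqlite3_close(db)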
I see you mentioned in the follow-up that your SQLite didn't recognize FTS3(), and I had the same issue at first.
Apparently it's not bundled into the iOS version by default; instead you have to download the SQLite3 amalgamation and include it in the project manually, as explained in "Is FTS available in the iOS build of SQLite?"
Also note that the SQLITE_ENABLE_FTS3 compile-time option is not enabled by default; you have to add it to the build configuration, as detailed at http://www.sqlite.org/fts3.html#section_2
Hope this helps.
If you can translate plain C code to iOS Objective-C, then Apache Lucy (a loose C port of Lucene) might be worth a look.
I don't know if my approach to this is fundamentally wrong, but I'm struggling to get my head around a (seemingly trivial?!) localisation issue.
I want to display the title of a 'System' UITabBarItem (More, Favorites, Featured, etc.) in a navigation bar. But where do I get the string from? The strings file of the MainWindow.nib doesn't contain the string (I didn't expect it to), and reading the title of the TabBarItem returns nil, which is what stumped me.
I've been told there's no way to achieve it and I'll just have to add my own localised strings for the terms in question. But I simply don't (want to) believe that!! That may be easy enough in some languages, but looking up, say, "More" already presents me with more than one possible word in some languages. I'm not happy about simply sending these words for translation either, because it still depends on the translator knowing exactly which term Apple uses. So am I missing something simple here? What do other people do?
Obviously, setting the system language on my test device and simply looking to see what titles the tab items have is another 'obvious' possibility. But I really have a problem with half-baked workarounds like that. That'll work for most languages, but I'm really gonna have fun when it comes to Russian or Japanese.
I'm convinced there must be a more reliable way to do this. Surely there must be a .strings file somewhere in the SDK that has these strings defined?
Thanks in advance...
Rich
The simple and unfortunate answer is that, aside from a very few standard elements (e.g. a Back button), you need to localize all strings yourself. Yes, UIKit has its own Localization.strings file, but obviously that's outside of your app sandbox, so you don't have access to it.
I filed a bug with Apple years ago about providing OS-level localization for common button titles, tab item labels, etc. That bug is still open but obviously they haven't done it yet (sorry, I don't have the radar # handy).
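So in practice you end up shipping your own strings, along these lines (a sketch; the key name and translations are your own, not Apple's):

    import UIKit

    // Localizable.strings (en): "tab.more" = "More";
    // Localizable.strings (de): "tab.more" = "Mehr";
    let item = UITabBarItem()
    item.title = NSLocalizedString("tab.more", comment: "Title of the More tab")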