All,
I ran into this problem where, for a UITextField that has secureTextEntry=YES, I cannot get any UTF-8 keyboards (Japanese, Arabic, etc.) to show; only non-UTF-8 ones do (English, French, etc.). I did a lot of searching on Google, on this site, and on the Apple dev forums, and I see others with the same problem, but short of implementing my own UITextField, nobody seems to have a reasonable solution or an answer as to whether this is a bug or intended behavior.
And if this is intended behavior, why? Is there a standard, a white paper, SOMETHING someplace that I can look at and then point to when I go to my Product Manager and say we cannot support UTF-8 passwords?
Thanks,
I was unable to find anything in Apple's documentation to explain why this should be the case, but after creating a test project it does indeed appear to be so. At a guess, I imagine secure text entry is disallowed for any language using composite characters because it would make character input difficult.
For instance, for Japanese input, should each kana character be hidden after it is typed? Or just kanji characters? If the latter, the length of time characters remain onscreen is long enough to make secure input almost moot. Similarly for other languages using composite input methods.
This post includes code for manually implementing your own secure input behaviour.
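For reference, here is a minimal sketch of that manual approach, assuming you keep the real text in a backing string and only ever show bullets in the field. The class and property names are illustrative, not taken from the linked post.

import UIKit

// Minimal sketch of manual "secure" input: the field only displays bullets
// while the real text is kept in `actualText`.
final class MaskedPasswordField: UITextField, UITextFieldDelegate {
    private(set) var actualText = ""

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        commonInit()
    }

    private func commonInit() {
        delegate = self
        autocorrectionType = .no        // don't leak the password to autocorrect
        autocapitalizationType = .none
    }

    func textField(_ textField: UITextField,
                   shouldChangeCharactersIn range: NSRange,
                   replacementString string: String) -> Bool {
        // Mirror the edit into the backing string...
        if let swiftRange = Range(range, in: actualText) {
            actualText.replaceSubrange(swiftRange, with: string)
        }
        // ...but only ever show mask characters (one per UTF-16 unit, so the
        // delegate's NSRanges keep lining up with the backing string).
        textField.text = String(repeating: "•", count: actualText.utf16.count)
        return false
    }
    // Note: composite input methods (kana-to-kanji conversion, etc.) go through
    // marked text and won't route cleanly through this callback, which is
    // exactly the difficulty discussed above.
}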
Related
I am building a custom suggestion/autocorrection feature in an iOS app. It must detect accidental adjacent keypresses and compare this to a known word list to suggest the word it thinks the user intended to type.
For example, if the custom word list contains cat, dog, monkey, and the user types cst, the app can determine that the most likely word was cat (because s is adjacent to the a key).
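A rough sketch of that adjacency idea, assuming a hard-coded QWERTY neighbour map (only a handful of keys shown here):

// Sketch: match a typed word against a word list, treating differing
// positions as acceptable if they are adjacent-key slips on QWERTY.
let qwertyNeighbours: [Character: Set<Character>] = [
    "a": ["q", "w", "s", "z"],
    "s": ["a", "w", "e", "d", "x", "z"],
    "c": ["x", "d", "f", "v"],
    "t": ["r", "f", "g", "y"],
]

// Two same-length words match if every differing position is an adjacent-key slip.
func isAdjacentSlip(_ typed: String, of candidate: String) -> Bool {
    guard typed.count == candidate.count else { return false }
    return zip(typed, candidate).allSatisfy { t, c in
        t == c || qwertyNeighbours[c]?.contains(t) == true
    }
}

let words = ["cat", "dog", "monkey"]
let suggestion = words.first { isAdjacentSlip("cst", of: $0) }   // Optional("cat")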
This will work on a standard QWERTY keyboard, but what happens if the user is using an AZERTY keyboard?
For the autocorrect/suggest to work reliably, the app must be able to detect the keyboard layout in use.
In iOS, it is possible to obtain a UITextInputMode object from a UITextField. This object has a primaryLanguage (string) property, which contains the locale identifier (e.g. en-GB), but this is not granular enough to distinguish between English (Australia) QWERTY and English (Australia) AZERTY: in both cases, primaryLanguage is en-AU.
Is it possible to detect the keyboard layout in iOS?
I have not been able to find a clean solution to this problem.
Maybe this would be worth a TSI ticket to discuss it with Apple employees.
I know that this will not be a satisfying answer, but I would still like to share my thoughts here for future readers:
Private API of UITextInputMode:
textField.textInputMode?.value(forKey: "identifierWithLayouts")
This will return a string like de_DE#sw=QWERTZ-German;hw=Automatic from which you can infer the keyboard layout.
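A hedged sketch of how that string might be parsed. Since this is private API, it can change or disappear in any iOS release and may cause App Review problems:

import UIKit

// Read the private key and pull the software layout name out of it.
func layoutName(of textField: UITextField) -> String? {
    guard let identifier = textField.textInputMode?
            .value(forKey: "identifierWithLayouts") as? String else { return nil }
    // identifier is e.g. "de_DE#sw=QWERTZ-German;hw=Automatic"
    return identifier
        .components(separatedBy: "sw=").last?
        .components(separatedBy: ";").first?
        .components(separatedBy: "-").first      // "QWERTZ"
}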
UserDefaults
UserDefaults.standard.object(forKey: "AppleKeyboards")
This will return a list of all keyboards that the user has installed. In most cases, this will only be one language (besides the emoji keyboard).
For example:
Optional(<__NSCFArray 0x600003b8e6c0>(en_US#sw=QWERTY;hw=Automatic,emoji#sw=Emoji))
You could also iterate over UserDefaults.standard.dictionaryRepresentation() and search for QWERTZ/QWERTY/AZERTY within the values.
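A sketch of that idea; the "AppleKeyboards" key is undocumented, so treat it as fragile:

import Foundation

// Look for known layout names among the user's installed keyboards.
let knownLayouts = ["QWERTZ", "AZERTY", "QWERTY"]
if let keyboards = UserDefaults.standard.object(forKey: "AppleKeyboards") as? [String] {
    for keyboard in keyboards {                  // e.g. "en_US#sw=QWERTY;hw=Automatic"
        if let match = knownLayouts.first(where: { keyboard.contains($0) }) {
            print("\(keyboard) looks like \(match)")
        }
    }
}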
With much manual effort, you could maybe encode UITextInputModes to binary data in all ambiguous cases like en_AU. Something like
NSKeyedArchiver.archivedData(withRootObject: textField.textInputMode, requiringSecureCoding: false) can then be used to compare binary encodings of the user's textInputMode at runtime.
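A sketch of that archive-and-compare idea; the referenceArchives dictionary is hypothetical and would have to be captured ahead of time on devices with known layouts:

import UIKit

// Archive the current input mode so it can be compared against
// previously captured reference archives.
func archivedInputMode(of textField: UITextField) -> Data? {
    guard let mode = textField.textInputMode else { return nil }
    return try? NSKeyedArchiver.archivedData(withRootObject: mode,
                                             requiringSecureCoding: false)
}

// let referenceArchives: [Data: String] = [...]   // hypothetical, e.g. archive captured on an AZERTY device -> "AZERTY"
// let layout = archivedInputMode(of: textField).flatMap { referenceArchives[$0] }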
I have found this old question that may have a solution for you. I'm not sure it still works, but the question shows how to get the currently installed keyboards, and someone provided a "gray area" solution, as it seems there is no direct way to achieve what you intend to do.
Hope this helps.
The AppleLanguages object at index 0 is a common way to get the input language. Instead of trying to determine which language and default layout users are using, in my work on a custom keyboard extension I followed one of the approaches Apple recommends: use a separate keyboard layout for each language. Otherwise I don't think you will get stable, productive prediction and autocorrect. For autocorrect I used SymSpell (https://github.com/AmitBhavsarIphone/SymSpell) and different dictionaries from https://github.com/wolfgarbe/SymSpell/tree/master/SymSpell.FrequencyDictionary to build my own RealmDb for each language. It was a fair amount of work, but in the end my keyboard extension was published in the App Store. [Note: I am not affiliated with the SymSpell owners or coders]
I need to create a custom keyboard that looks/feels pretty much the same like the system keyboards but is for a language that iOS doesn't have:
whenever I have to type using the system keyboard, I'm subject to the autocorrect, which not only gives wrong options, but also learns wrong words for that keyboard's language.
the language I need doesn't use 3 of the 26 Latin letters, but it does need diacritics on some others, as well as the ' quite often, so it would be nice to repurpose 3 of the keys for that.
My problem is that I'm not interested in creating a keyboard from bare UIViews just to do what, in my opinion, amounts to tweaks to the existing system keyboards. I was dumbstruck when I found out that I apparently have to recreate the whole experience myself instead of having some Apple-provided basis to build upon. I also can't see most developers being thrilled about this, so I began to think I may be wrong and there is something we can use after all. Can anyone enlighten me?
I am working on an app that uses emojis on screen.
These emojis are displayed on buttons that can be pressed by the users.
To make this app compatible with accessibility requirements (VoiceOver, etc.), I need to get each emoji's description text so that, when the user is using VoiceOver, the emoji can be read aloud to the user.
For example, when the user is choosing a "smiley face" emoji, VoiceOver should read "smiley face" to the user. However, I cannot label each emoji manually, because there are thousands of them.
I am wondering where should I get all the emoji description texts?
Thanks!!
As you've noticed already, the Accessibility subsystem already knows how to accessibly describe an emoji if given one as part of an accessibility-oriented text (like the accessibilityLabel for a control).
However, should you ever need emoji descriptions for other purposes (perhaps some kind of accessibility accommodation that doesn't go through the OS's Accessibility system), it might help to know how to find them yourself.
You can do this with Swift String.applyingTransform or ObjC NSString.stringByApplyingTransform:. (Both of these are wrappers for CoreFoundation's CFStringTransform API, which is better documented and featured in an old NSHipster post.) Use the toUnicodeName transform to get the names for emoji and other special characters — for example, as noted in the docs, that transforms “🐶🐮” into “{DOG FACE}{COW FACE}”.
(As you might notice in the StringTransform docs and the aforelinked NSHipster article, there are lots of other fun things you can do with string transforms, too, like latinizing text from other scripts or producing the XML/HTML hex escape codes for unusual characters.)
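For example, a minimal sketch of the toUnicodeName transform mentioned above (the exact delimiters in the output can vary by OS version):

import Foundation

// Turn emoji (or any unusual characters) into their Unicode names.
let names = "🐶🐮".applyingTransform(.toUnicodeName, reverse: false)
print(names ?? "")   // something like "\N{DOG FACE}\N{COW FACE}"

// The same transform is also available on NSString:
// ("🐶🐮" as NSString).applyingTransform(StringTransform.toUnicodeName, reverse: false)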
Forgot to post my answer the other day.
Turns out that Apple has already handled this in the framework.
All we need to do is set the control's accessibilityLabel to the emoji itself. Then it is all read out correctly, such as "smiley face", when VoiceOver is turned on.
Awesome!
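In other words, something as small as this sketch (the button is just illustrative):

import UIKit

// Hand VoiceOver the emoji itself and let the system speak its
// standard description (e.g. "smiley face").
let emojiButton = UIButton(type: .system)
emojiButton.setTitle("😀", for: .normal)
emojiButton.accessibilityLabel = "😀"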
Is there a font that can be used for math notation? I'm thinking there isn't. If that is the case, does anyone know what the simplest route is to having nice math notation in my iPad app?
Update: Thank you for all the great responses. Looking at the current replies: if what I want to do is essentially create a feature that lets people enter math equations intuitively, would people generally recommend that I work towards MathML? That is, should I build a UI that lets the user write math notation and converts that input into MathML (versus simply using a Unicode math font, which wouldn't already contain some semblance of typesetting functionality)?
I would leverage WebKit's support for MathML, or at least use a JavaScript library like jsMath. In general, typesetting math notation outside a web view is going to be annoying and take development time away from things that are actually relevant to the specifics of your app.
(It may also be useful to look at MathJax, which looks more modern and shiny than jsMath)
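A sketch of the web-view route, rendering a TeX-style equation with MathJax in a WKWebView. The CDN URL and configuration are illustrative; for a real app you would pin a version or bundle the scripts locally:

import WebKit

// Build a web view that renders the given TeX snippet with MathJax.
func mathView(for tex: String) -> WKWebView {
    let webView = WKWebView()
    let html = """
    <html>
    <head>
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
    </head>
    <body>\\(\(tex)\\)</body>
    </html>
    """
    webView.loadHTMLString(html, baseURL: nil)
    return webView
}

// Usage: let view = mathView(for: "x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}")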
Unicode provides an amazing array of math-related characters (you can see a lot of examples here: http://tlt.its.psu.edu/suggestions/international/bylanguage/mathchart.html).
You might be able to use the StarMath font from OpenOffice.
I'm sure there are other fonts, but finding a font with the symbols is the easy part. The challenge is in the typesetting.
We're implementing a blog for a site which supports six different languages and five of them have non-Latin characters in their alphabets. We are not sure whether we should have them encoded (that is what we're doing at the moment)
Létání s potravinami: Co je dovoleno? becomes l%c3%a9t%c3%a1n%c3%ad-s-potravinami-co-je-dovoleno and the browser displays it as létání-s-potravinami-co-je-dovoleno.
or if we should replace them with their Latin "counterparts" (similar looking letters)
Létání s potravinami: Co je dovoleno? becomes letani-s-potravinami-co-je-dovoleno.
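For concreteness, a sketch of how the two slug styles come about, using Foundation's string transforms (your CMS may do this differently):

import Foundation

let slug = "létání-s-potravinami-co-je-dovoleno"

// Option 1: keep the original letters; they end up percent-encoded in the URL.
let encoded = slug.addingPercentEncoding(withAllowedCharacters: .urlPathAllowed)
// "l%C3%A9t%C3%A1n%C3%AD-s-potravinami-co-je-dovoleno"

// Option 2: strip the diacritics first and serve a plain-ASCII slug.
let latinized = slug.applyingTransform(.stripDiacritics, reverse: false)
// "letani-s-potravinami-co-je-dovoleno"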
I can't find a definitive answer as to what's better from SEO perspective? Search engine optimization is very important for us. Which approach would you suggest?
Most of the time, search engines handle Latin counterparts well, although results for e.g. "létání" and "letani" sometimes differ slightly.
So, in terms of SEO, almost no harm is done - once your site has good content, good markup and all that other stuff, it won't suffer from having Latin URLs.
You never know what combination of system, browser and plugins your users have, so make URLs as easy as possible - most websites stick to plain Latin characters in URLs, because non-Latin symbols can choke anything from the server through the browser to any plugin, and break the user's experience.
And I can't stress this enough; Users before SEO!
"what's better from SEO perspective"
Who's your audience? Americans who think all those extra letters are a mistake?
Or folks who read (and search) for "non-ASCII" letters because those non-ASCII letters are part of their language?
SEO is a bad thing to chase. Complete, correct, consistent and usable is what you want to build first.
Well, I suggest you replace them with their Latin counterparts, because it's user-friendly and your website will be accessible from every single computer (keyboards vary from computer to computer, but all of them have Latin letters). From an SEO perspective, though, I don't think it's going to be a problem either way.
Pawel, first of all, you should decide whether you're going to optimize for global Google (google.com) or the Polish one.
In accordance with the URI specification, RFC 3986, only a limited set of 7-bit ASCII characters is allowed, and any character outside that set (or a reserved character used as data) must be properly percent-escaped. If you want to represent other characters directly, then you should be using an IRI, RFC 3987. Keep in mind that HTTP is not compatible with IRIs, however.
When in doubt RTFM.
Another issue is that there are Unicode code points whose glyphs look very much alike in most fonts, which is absolutely ideal for phishers. Stick to ASCII and the glyphs are visibly different when the characters are.