VoiceOver accessibility label for Touch ID - iOS

I am trying to ensure that the iOS app I am working on is accessible, and am implementing VoiceOver support to that end.
One strange thing I cannot find any help for: when the Touch ID view is displayed (in my case for signing into the app), VoiceOver pronounces "ID" as a word rather than as the letters I-D.
I have tried setting the accessibility attributes on both the NSString and the LAContext object, but neither seems to change what VoiceOver reads out. Code snippets below:
LAContext *context = [[LAContext alloc] init];
[context setIsAccessibilityElement:YES];
[context setAccessibilityLabel:@"TEST 2"];
NSError *error = nil;

NSString *label = @"Please authenticate your ID using the Touch ID";
[label setIsAccessibilityElement:YES];
[label setAccessibilityTraits:UIAccessibilityTraitStaticText];
[label setAccessibilityLabel:@"TEST"];

showingTouchID = TRUE;
if ([context canEvaluatePolicy:LAPolicyDeviceOwnerAuthenticationWithBiometrics error:&error]) {
    [context evaluatePolicy:LAPolicyDeviceOwnerAuthenticationWithBiometrics
            localizedReason:label
                      reply:^(BOOL success, NSError *error) {
        ......
The output from VoiceOver, with or without the accessibility attributes on context, is always the label text.
All help greatly appreciated :)

You should definitely not change the accessibility label just to make VoiceOver pronounce things correctly (i.e. do not try to "hack" the label pronunciation). The reason is that VoiceOver does not have speech output only; it also has braille output, where blind users expect to read things exactly letter-by-letter as they are written (i.e. to see all the spaces, capital/small letters, etc.). If you wrote e.g. "I D" instead of "ID", then while VoiceOver might pronounce it correctly (in a specific version of iOS), blind users who also read "I D" on a braille display might think that this is how it is actually written, and would then appear unprofessional when using this wrong spelling in written exchanges with other people.
The correct way to deal with this, albeit without giving you an immediate solution, is:
File a bug with Apple about the pronunciation of the specific word with the specific voice in the specific language (e.g. "Expected pronunciation: [aj'di:]" vs. "Actual pronunciation: [id]").
File a bug with Apple to request the ability to customize pronunciation only (i.e. leave the accessibility label intact and correct, but tell the voice how to pronounce a certain part of the text), where this customization could be done for each language individually by the translator translating the strings (because wrong pronunciation is language-specific) - also see the next point.
If you can reword, try a different word than the problematic one (which seems not applicable in the case of "Touch ID", which is a set term). But this is a hack too, as it solves only the English original and ignores translations, where the rewording might on the contrary complicate the pronunciation.
Sorry for the bad news.
Finally, here, both on iOS 8.4.1 and iOS 9.0.2, VoiceOver with the default US English iOS voice, at least on this webpage, pronounces "ID" in "Touch ID" as [ajdi:], not [id].

You can try this as a quick workaround: just put a space between I and D:
NSString *label = @"Please authenticate your ID using the Touch ID";
label.accessibilityLabel = @"Please authenticate your I D using the Touch I D";
Also please note that you can only set accessibility attributes on UI elements; you cannot set them on arbitrary variables. It doesn't make sense to set an accessibility label on an LAContext or an NSString.
You need to set the accessibility label on the UILabel (or whichever element) you give the NSString to.
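For illustration, a minimal sketch of that approach (promptLabel is a hypothetical UILabel showing the prompt text):
UILabel *promptLabel = [[UILabel alloc] init];
promptLabel.text = @"Please authenticate your ID using the Touch ID";
// Keep the visible text intact; only the spoken string is adjusted.
promptLabel.isAccessibilityElement = YES;
promptLabel.accessibilityLabel = @"Please authenticate your I D using the Touch I D";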

Starting with iOS 11, you can set the element's accessibilityAttributedLabel and use the UIAccessibilitySpeechAttributeIPANotation key (Swift: NSAttributedString.Key.accessibilitySpeechIPANotation) to specify the pronunciation for a range of the attributed string.
See "Speech Attributes for Attributed Strings" for other tools you can use to tweak how VoiceOver reads your text.

Related

iOS TTS Alex voice is treated as available when it's actually not

In my app I allow the user to change TTS voices, and the English Alex voice is one of the possible options. However, there are some cases when it's treated as available when it's actually not, which results in the TTS utterance being pronounced by another voice.
To prepare the list of available voices I use the following code:
NSMutableArray *voices = [NSMutableArray new];
for (AVSpeechSynthesisVoice *voice in [AVSpeechSynthesisVoice speechVoices]) {
    [voices addObject:@{
        @"id": voice.identifier,
        @"name": voice.name,
        @"language": voice.language,
        @"quality": (voice.quality == AVSpeechSynthesisVoiceQualityEnhanced) ? @500 : @300
    }];
}
To start the playback I use the following code (simplified a bit):
AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:text];
utterance.voice = [AVSpeechSynthesisVoice voiceWithIdentifier:voice];
// speakUtterance: is an instance method, so a synthesizer instance is needed.
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
[synthesizer speakUtterance:utterance];
Cases when AVSpeechSynthesisVoice returns Alex as available when it's not:
The easiest way to reproduce it is in the simulator. When I download Alex from the iOS settings, the download button disappears, but nothing happens when I tap on the voice. As a result it seems to be downloaded, yet it can't be deleted.
In some cases Alex is downloaded correctly and is actually available in the app, but when I try to delete it, it looks like it's not fully deleted. As a result it's treated as available in my app, but in the iOS settings it's shown as not downloaded.
If the iPhone storage is nearly full and the Alex voice hasn't been used recently, it looks like the voice gets offloaded: it's shown as available both in the iOS settings and in my app, but in fact the utterance is pronounced by another voice.
In all the cases above Alex looks available in my app, but when I pass it to the utterance it's pronounced with some different voice. Note that this happens only with this voice; I haven't seen such a case for others. Maybe this voice should be treated separately somehow?
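A defensive sketch one might try (an assumption on my part, and it may not catch the offloaded case above, where the identifier still resolves):
AVSpeechSynthesisVoice *requested = [AVSpeechSynthesisVoice voiceWithIdentifier:voiceId];
// voiceWithIdentifier: returns nil when no voice with that identifier is usable.
if (requested == nil) {
    requested = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"]; // explicit fallback
}
utterance.voice = requested;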

Google Places API for iOS does not set localized language

In my app I need the user to pick a place using Google Places. I invoke the Google place picker like so:
GMSPlacePicker * placePicker = [[GMSPlacePicker alloc] initWithConfig:config];
GMSPlacesClient * placesClient = [GMSPlacesClient sharedClient];
[placePicker pickPlaceWithCallback:^(GMSPlace *place, NSError *error) {...}];
My problem is that the presented picker view is in English instead of my localized language (Slovak). I have read that it is supposed to detect the localization by itself?
What I did was change the default language in my app's Info.plist:
Localization native development region = "your language prefix goes here"
This changed my app's localization and also switched the Google Place Picker to the preferred language.
My app also needs to be left-to-right, but my language is right-to-left, so I also forced the app to left-to-right
in my app delegate:
UIView.appearance().semanticContentAttribute = UISemanticContentAttribute.forceLeftToRight
Take into consideration that semanticContentAttribute is supported only from iOS 9 and above.
I hope this helps anyone struggling with this issue.
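In case the rest of the project is Objective-C, a sketch of the equivalent call (placed e.g. in application:didFinishLaunchingWithOptions:):
// Force left-to-right layout app-wide via the appearance proxy (iOS 9+).
[UIView appearance].semanticContentAttribute = UISemanticContentAttributeForceLeftToRight;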

Forcing Voiceover to read particular word (homographs)

I want to use the word "live" (l-eye-v) for an event on now rather than "live" (l-i-v) as in "will she live". Is there any way to force this?
The answers to VoiceOver accessibility label for Touch ID are fairly relevant to this question and suggest that the answer is no: there is nothing that can be done to force it. This question's title is much more generally applicable and searchable, so I think it is a useful addition even if answers just link there. There are also aspects of the linked question that apply only to a particular situation.
There is also VoiceOver pronunciation issue: "Live" "ADD", which discusses the specific case of "Live" and is also worth a read if you find this page now.
A good way to implement this is to override the accessibilityLabel getter. This way you don't have to track both strings separately; just keep a dictionary of words that need phonetic replacement. For example, if your object were a UILabel, you could do something like this:
- (NSString *)accessibilityLabel {
    NSMutableString *mutableResult = [NSMutableString new];
    for (NSString *word in [self.text componentsSeparatedByCharactersInSet:[NSCharacterSet characterSetWithCharactersInString:@" \t\n"]]) {
        // isInDictionary and phoneticReplacement are placeholder helpers you would implement.
        if ([word isInDictionary]) {
            [mutableResult appendFormat:@" %@", [word phoneticReplacement]];
        } else {
            [mutableResult appendFormat:@" %@", word];
        }
    }
    // Trim the leading space introduced by the appendFormat: calls above.
    return [mutableResult stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]];
}
In these cases, what I usually do is add a separate accessibility string (as opposed to using the text displayed to the user) that contains the word spelled phonetically. So try something like "lyve"/"lieve". Text-to-speech is a complicated process and requires a lot of AI to handle homographs properly.
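A minimal sketch of that approach (liveBadge and the phonetic spelling are my assumptions):
UILabel *liveBadge = [[UILabel alloc] init];
liveBadge.text = @"LIVE";
// Visible text stays "LIVE"; VoiceOver reads the phonetic spelling instead.
liveBadge.accessibilityLabel = @"lyve";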

iOS: How to detect the escape/control keys on a hardware bluetooth keyboard?

I'm trying to figure out how to detect that the Escape key (and other key combinations like Ctrl and Alt) has been pressed on a Bluetooth keyboard attached to an iOS device.
Some answers seem to suggest this isn't possible. However, there are apps in the App Store that do this (for example iSSH), so I assume it is possible using the public APIs somehow.
I've tried creating my own UITextInput, however it receives nothing when the Escape key is pressed. The only part of the API where I can see the iPad respond is when VoiceOver is enabled (Escape works as "back" in Safari), so I'm wondering if there's a way in via the accessibility API?
I've also tried to see if there's something I can observe from NSNotificationCenter that might help, but have yet to find anything.
Suggestions welcome, I've been hacking away at this for a day and I'm at a bit of a loss now.
You can do this now in iOS 7. For example, to implement the escape key, subclass UITextView and place the following methods in your class:
- (NSArray *)keyCommands {
    UIKeyCommand *esc = [UIKeyCommand keyCommandWithInput:UIKeyInputEscape modifierFlags:0 action:@selector(esc:)];
    return [[NSArray alloc] initWithObjects:esc, nil];
}

- (void)esc:(UIKeyCommand *)keyCommand {
    // Your custom code goes here.
}
You don't need to check to be sure you are on iOS 7, since earlier versions of the OS won't call the keyCommands method.
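The same pattern extends to modifier combinations, for instance (a sketch; the ctrlC: handler name is made up here):
- (NSArray *)keyCommands {
    UIKeyCommand *esc = [UIKeyCommand keyCommandWithInput:UIKeyInputEscape
                                            modifierFlags:0
                                                   action:@selector(esc:)];
    // Ctrl+C; use UIKeyModifierAlternate instead for Alt-based combinations.
    UIKeyCommand *ctrlC = [UIKeyCommand keyCommandWithInput:@"c"
                                              modifierFlags:UIKeyModifierControl
                                                     action:@selector(ctrlC:)];
    return @[esc, ctrlC];
}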
There are no public APIs for what you intend to accomplish, so this may lead to a rejection.
If you are willing to risk it, you can try this, which basically intercepts all events sent to your app by overriding sendEvent: in your UIApplication.
AFAIK this is not possible using a public API.
I've done a bit of searching and the Esc key is not recognized.
The only thing that I didn't do is try iSSH (it costs 9€ :-), but if you read its description on the App Store it seems clear that the Esc key on a hardware (Bluetooth) keyboard doesn't work:
Exhaustive key configuration support. Has arrow keys (by pop-up or by toolbar). ctrl, alt, esc, tab, shift, Fn keys (1-10), ` key, PgUp, PgDown and for those keys not listed provides multiple means to add them.
Bluetooth keyboard support for arrow keys, function keys and a remapping of the ctrl key through option key mapping in either X11/VNC server or terminal. When enabled, an Option+key press maps to equivalent Ctrl+key press.
As you can see, in the second line the ESC key is not mentioned.
Moreover, I've found this (old) post.
EDIT:
Following your latest updates, I've found a way to "hide" the _gsEvent selector inside the binary. I don't know whether Apple's static analyser can find it, however.
The trick is simple: create the _gsEvent selector (and other private selectors) at runtime!
- (void)sendEvent:(UIEvent *)event
{
    SEL aSelector = NSSelectorFromString([self theSelector]);
    if ([event respondsToSelector:aSelector]) {
        NSLog(@"Event: %@", event.description);
    }
    [super sendEvent:event];
}

- (NSString *)theSelector
{
    // Compose the keyword as you prefer; these parts concatenate to "_gsEvent".
    NSString *sel = [NSString stringWithFormat:@"%@g%@%@ent", @"_", @"s", @"Ev"];
    return sel;
}
I've searched inside the binary and I can't find the _gsEvent keyword, obviously because it's created only at runtime.
Hope this helps.

Unable to get UIAutomation iOS UILabel value

I am trying to get the value "HELLO" of a UILabel shown in the iPad simulator.
I have enabled accessibility and have set the accessibility label to "Label Access".
But when I call target.logElementTree(), both the name and value are set to "LabelAccess", whereas according to the Apple docs the value field should contain the string that is set (in this case "HELLO").
Does anybody know a fix for this?
PS: I am using the latest iOS SDK and Xcode.
I think you've encountered a UIAutomation bug that has existed forever.
The easiest way to get around this bug is to set the accessibilityValue to your text in code.
Something like this:
NSString *valueString = [NSString stringWithFormat:@"%d", value];
self.label.text = valueString;
self.label.accessibilityValue = valueString;
This helps people who use VoiceOver, too ;-)
Thanks for the workaround. Doesn't look like this bug has been fixed.
I came across this while writing Appium tests for an iOS app. The element found by the driver somehow only contains the accessibilityLabel and accessibilityIdentifier, but not the actual text shown on the screen:
<XCUIElementTypeStaticText type="XCUIElementTypeStaticText" value=<accessibilityLabel> name=<accessibilityIdentifier> label=<accessibilityLabel> .../>
Has someone found whether this issue has been logged with Apple?
EDIT: Refer to this answer and the comment underneath: https://stackoverflow.com/a/11411803/4725937
Basically you need to set accessibilityValue (https://developer.apple.com/documentation/uikit/uiaccessibilityelement/1619583-accessibilityvalue) on accessible components for the display text to show up as XCUIElementTypeStaticText.value in the page source.
For example:
someUILabel.accessibilityLabel = "This is used for voice-over"
someUILabel.accessibilityValue = "This is displayed text"
