iOS accessibility: what are the pros/cons for hardcoding "double tap to activate" as hint?

iOS has built-in support for accessibility. For a UIButton, VoiceOver reads the title of the button followed by the hint "double tap to activate" (by default). Sometimes we need a non-UIButton control to behave like a UIButton in terms of accessibility, so we set its accessibility trait to button and hardcode "double tap to activate" as its accessibilityHint.
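For concreteness, here is a minimal sketch of that setup in Swift (the view, label, and hint strings are illustrative, not from the question):
let cardView = UIView()                                 // any tappable non-UIButton control
cardView.isAccessibilityElement = true
cardView.accessibilityLabel = "Add to cart"             // hypothetical label
cardView.accessibilityTraits = .button                  // read out as a button
cardView.accessibilityHint = "Double tap to activate"   // the hardcoded hint in question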
I don't like altering system behaviours, and I've seen accessibility users who prefer a single tap instead of a double tap (there's an option they can set), although I haven't checked whether, if they opt for single tap instead of double tap, the system hint becomes "single tap to activate".
What is the common practice regarding accessibility support for a non-UIButton control that is tappable? Thanks!

I've seen accessibility users who prefer single tap instead of double tap (there's an option they can set)
I'm really curious to know how that's possible using VoiceOver, because a single tap with one finger moves the accessibility focus. In the UIButton documentation, Apple states: 🤓
VoiceOver speaks the value of the title [...] when a user taps the button once.
Would you mind detailing the way to activate this option you mentioned, because I'm really astonished, please? 🤔
What is the common practice regarding accessibility support for a non-UIButton control that is tappable?
Using a hint is good practice to provide useful information to the user, but this information mustn't be essential to using the element, because accessibility hints can be deactivated in the device settings. 😰
Admittedly, this kind of element must be read out in such a way that its purpose and usage are clear enough for any user: that's what traits are made for. 👍
Many traits are well known and give rise to different behaviours, like adjustable values, custom actions and rotor items.
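For instance, a custom action lets VoiceOver users reach an operation with a one-finger swipe up or down, no hint needed. A sketch, where the view, action name, and selector are hypothetical:
let deleteAction = UIAccessibilityCustomAction(name: "Delete",
                                               target: self,
                                               selector: #selector(handleDelete))  // handleDelete must be an @objc method
cardView.accessibilityCustomActions = [deleteAction]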
Besides, it's also possible to override the accessibilityActivate() method to define what a one-finger double tap does on an accessible element. 🤯
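A minimal sketch of that override, assuming a custom view whose hypothetical handleTap() does whatever a sighted user's tap would do:
class TappableCardView: UIView {
    override func accessibilityActivate() -> Bool {
        handleTap()   // run the same action a direct tap would trigger
        return true   // true tells VoiceOver the activation was handled
    }
}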
The way you want to vocally expose the possible actions on a tappable control depends on the content of your application.
Finally, keep in mind that a hardcoded hint must be understood as supplementary information, definitely not essential information, because the user can deactivate it: an accessibility-oriented design is very important when building an app. 😉

Related

Prevent UIButton showsTouchWhenHighlighted from altering VoiceOver description

Setting showsTouchWhenHighlighted, e.g. via the IB option "Shows Touch On Highlight," on a button without a title will alter the VoiceOver description. After reading the accessibility label, VoiceOver beeps and announces a description of the image. Is there a way to disable this behavior?
Setting the accessibilityContainerType value to UIAccessibilityContainerTypeSemanticGroup works, and that's great 👍, but I'm not sure that this is the purpose of this element. Even if a button may be seen as a container, I understood this instance property dealt rather with data-based containers. 🤔
I looked into your problem, which aroused my curiosity, and unfortunately couldn't find an appropriate solution in the Apple API.
First, I thought that this solution might help, but it didn't work, as you mentioned in your comment... thanks. 😉
Apparently, when the showsTouchWhenHighlighted property is used, a view is added inside the button to render the touch glow ⟹ this is a UIButtonBarPressedIndicator image that you can detect with Xcode's Debug View Hierarchy, for instance. 👍
This new image view seems to reset the accessibility traits of your button image to their default value, even if you have already changed them programmatically. 🤯
So, in order to prevent VoiceOver from using screen recognition and reading out useless information, I wrote something very ugly but efficient to reach your goal, in viewDidAppear for instance:
// Strip accessibility traits from every subview the system added:
myButton.subviews.forEach { $0.accessibilityTraits = .none }
Call it a bad hack around a native implementation problem, or a simple line of code that anyone can understand: this solution removes every possible VoiceOver screen recognition from the button 🥳... and I'm still interested if you can explain a little bit why your solution works, please. 😉
Set accessibilityContainerType = UIAccessibilityContainerTypeSemanticGroup.
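In Swift (iOS 11+), assuming the button is exposed as myButton, that one line reads:
myButton.accessibilityContainerType = .semanticGroup   // UIAccessibilityContainerTypeSemanticGroup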

Actual difference between UIAccessibilityLayoutChangedNotification and UIAccessibilityScreenChangedNotification?

I’m trying to ascertain what exactly happens differently when posting a UIAccessibilityLayoutChangedNotification, and a UIAccessibilityScreenChangedNotification. From what I can see, I can use them interchangeably everywhere and nothing different happens.
The Apple documentation simply says to use LayoutChanged when (for example) an element has been hidden or shown, and to use ScreenChanged if the entire screen changes, but I’m interested in what THEY do when I provide this information, and what I should see differently when using one or the other.
Can anyone give a clear explanation of implementation differences between the two?
These two notifications are for dynamic content on views, and communicating these changes to VoiceOver for screenreader users. There is little difference between these two notifications, except for their default behavior, and the silly little "boop beep" for ScreenChange notifications.
In both instances, the argument
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, arg);
represents a string to be read out, or an on-screen element that VoiceOver will shift its focus to. In the event of dramatic context changes, it is important to send focus to a place that makes sense, or to announce that such changes have taken place. Either approach is acceptable from an accessibility point of view, though I prefer approaches that involve the least amount of change possible. In the event of simple layout changes, it is almost always best just to announce the context change and leave focus where it was. Sometimes, though, the element that caused the context change is hidden, and then it is clearly necessary to direct VoiceOver to highlight new content, because the default behavior in this case is undefined, or perhaps deterministic but determined by a framework that knows absolutely nothing about your app!
The difference between the two events, given that they otherwise do exactly the same thing, is in their default behavior. If you supply nil to UIAccessibilityLayoutChangedNotification, it is as if you have done nothing. If you supply a nil argument to UIAccessibilityScreenChangedNotification, it will send focus to the first object in your view hierarchy that is marked as an accessibilityElement, once all view hierarchy changes and drawing are complete.
UIAccessibilityLayoutChangedNotification
A good use case example for UIAccessibilityLayoutChangedNotification is for dynamic forms. You want to let users know that, based on decisions they've made in the form, new options are available. For example, if in a form you select that you are a Veteran, additional areas of the form may pop up to provide more input, but these areas may have been hidden to other users who did not care about them. So you could shift focus to these elements after user interaction:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, firstNewFormElement);
Which would shift focus to the provided element, and announce its accessibilityLabel.
Or just tell them that the new form elements are there:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, @"Veterans form elements available");
Which would leave focus where it is, but VoiceOver would announce "Veterans form elements available".
Note: This particular behavior is bugged on my iPad (8.1.2).
Or finally you could do this:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, nil);
Which does absolutely nothing :). Seriously, I don't even think the a11y framework backend cares. This particular line of code is a complete waste!
UIAccessibilityScreenChangedNotification
A good use case example for UIAccessibilityScreenChangedNotification is customized tabbed browsing, where the entire screen changes except for your navigation area. You want to let VoiceOver know that essentially the entire screen changed, but NOT to focus the first element (your first tab); rather, to focus the first content element.
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, firstNonGlobalNavElement);
Which would play the "boop beep" sound and then shift focus to just beneath your global navigation bar. Or you could do this:
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, @"You're on a new tab");
Which would wait for the new tab to load, play the "beep boop" sound, announce "You're on a new tab" in VoiceOver, then shift focus to the first element on the screen, and then announce that element's accessibilityLabel. (PHEW! That's a lot! This is jarring for screen reader users. Avoid this scenario unless absolutely necessary.)
And finally you can of course do this:
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, nil);
Which is equivalent to:
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, firstA11yElement);
Both of which will play the "beep boop" sound, shift VoiceOver focus to the first element on the screen, and then announce it.
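For reference, under the modern Swift API the same posts are written with UIAccessibility.post; firstNonGlobalNavElement below stands in for whichever element you want focused:
// Announce a layout change and leave focus where it is:
UIAccessibility.post(notification: .layoutChanged,
                     argument: "Veterans form elements available")
// Signal a new screen and move focus to a specific element:
UIAccessibility.post(notification: .screenChanged,
                     argument: firstNonGlobalNavElement)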
Finally
In a comment somebody mentioned caching, and I occasionally comment in my answer about things the a11y backend may or may not care about. While it is certainly possible that there is some backend magic happening, I don't believe, in either of these circumstances, that the back end cares at all. The reason I say this is because:
If you've ever used the UIAccessibilityContainer protocol, you can watch as your container of views gets queried. There is no caching going on. Even the accessibilityElementCount property gets pinged each time VoiceOver changes focus to a new accessibility element within your container. Then it goes through the process of checking which element it is on, asking for the next element, and so on. It is designed at its core to handle dynamic situations. If you were to insert a new element into your container after interaction, it would still go through all of these queries and be just fine about it!
Furthermore, if you override the properties of the UIAccessibility protocol in order to provide dynamic hints and labels, you can see that these functions get called every time as well! As such, I believe that the a11y framework backend gleans ABSOLUTELY ZERO information from these notifications. The only information VoiceOver needs to do its job is its currently focused accessibility element and that element's accessibility container. The notifications are simply there for you to make your app more usable for VoiceOver users.
Imagine if this weren't the case how many times Safari would post these notifications!!!! :)
These particular statements can only be confirmed by someone with backend knowledge of the framework, who works with the code, and should be viewed as conjecture. It could be the case that this is highly version/implementation dependent. Definitely open to discussion on these points! The rest of this post is pretty concrete.
For Your Reference
Most of this comes from experience working with the frameworks, but here is a useful reference if you wish to dig further.
https://developer.apple.com/documentation/uikit/accessibility/uiaccessibility
https://developer.apple.com/documentation/uikit/uiaccessibilitylayoutchangednotification
https://developer.apple.com/documentation/uikit/uiaccessibilityscreenchangednotification
And finally, an open source repo of the silly little app I put together to test all this stuff.
https://github.com/chriscm2006/IOS-A11y-Api-Test
UIAccessibilityScreenChangedNotification is to indicate that the whole screen has changed and VoiceOver should reset.
UIAccessibilityLayoutChangedNotification is to indicate that one or more, but not all, elements on the screen have changed.
Use UIAccessibilityScreenChangedNotification when your UI changes dramatically, usually when a user moves into a different part of your app (navigates to a different screen). VoiceOver notifies the user with a tone, clears its caches, and does other preparations to deal with a new set of accessibility data. It alerts VoiceOver that the screen has changed and there may be new elements on screen, so VoiceOver will rebuild its index of accessibility elements.
UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, nil);
Use UIAccessibilityLayoutChangedNotification if some part of your UI changes, but the user hasn't necessarily jumped to an entirely different part of your app. (Example: in the iTunes Store app, tapping the price label ($0.99, etc.) next to a song changes it to a "Buy" button.) This notification tells VoiceOver to re-read the current state of all accessible items that are on screen, and by doing this it figures out what has changed and informs the user of those changes. It alerts VoiceOver that the layout has changed and that its current index is out of date because the items on the screen have reordered themselves.
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, nil);

Stop voiceover reading a text

I have a long text on my view, when I tap on it VoiceOver reads the text.
Is there a default behavior to stop VoiceOver reading?
If not, is there a way to do it programmatically? For example, when the view receives a tap.
Thanks in advance.
Without knowing the content or the interface it is difficult to give a solid answer to this question, but one way to approach this problem is to try not to think of the experiences of a VoiceOver user and any other user as different experiences in the first place.
If you don't want VoiceOver users to repeatedly hear a long string of text you probably are also making the assumption that other users are going to be skipping over it after they have read it once as well.
Consider altering your interface so that the information is only presented once in a flow or is only presented when the user needs it and requests it, like contextual help.
Again, not knowing the interface or the purpose of the text makes it hard to answer this question directly but I generally find that building one interface to be inclusive of everyone often helps to point out that what might be perceived as just an Accessibility concern is actually a broader user experience concern and not just confined to the VoiceOver interface.
I hope that helps a little bit.

The Correct Way to do Custom Keyboards in iOS?

I am looking to implement a custom toolbar that sits above the keyboard for a text field, with some custom values. I've found a ton of tutorials online, but this question asks what the best way to do this is.
This tutorial here http://blog.carbonfive.com/2012/03/12/customizing-the-ios-keyboard/ shows the most common approach I've seen across many tutorials: creating a new subclass of UIView and using delegates to get the information across.
That's the commonality. However, I came across this tutorial, which just creates the toolbar in the view controller itself, assigns it to the text field's inputAccessoryView, and it's good to go. In fact, I tried out the code and, without any effort, I now have a custom keyboard.
http://easyplace.wordpress.com/2013/03/29/adding-custom-buttons-to-ios-keyboard/
This just seems a bit too easy to me, though, and I'd think the proper, Apple-recommended way would be to create that UIView subclass and use delegates, so that the view controller with the text fields acts as the delegate.
I'm specifically targeting iOS 7 in my app.
What are people's thoughts on this? If the second easier link is supported and is likely to pass Apple's guidelines, it's a good starting point but if delegates are the way to go, I'd rather look into that from the start.
Your thoughts will be appreciated.
There is no 'Apple approved' way to do this, and it's hard to believe anything you do here would get your app rejected. The custom keyboard you reference in your post has the iOS 6 look and will appear outdated in an iOS 7 app. I'll mention some iOS 7 suggestions shortly, but the constant danger of mimicking what the system looks like today is that it's guaranteed to look outdated later. In Mac/Cocoa development, Apple used to say at WWDC that if you did something custom, make it look custom; don't take a standard Apple widget and try to duplicate it. But that advice is mostly ignored.
For iOS 7, you can create buttons that appear just like the system ones do (not pressed), but of course when someone presses them, they won't act like system buttons (i.e. animate up and "balloon" out).
I'm currently using a fantastic add-on keyboard, my fork of KOKeyboard (which uses the buttons above). This is such a cool addition. While the buttons look like iPad buttons, each one has 5 keys in it. By dragging to a corner you select one of the four, and tapping in the middle gives you that key. This might be overkill for your app, but it really helped me with mine. It looks like this:
(Screenshot: the KOKeyboard accessory row; the Key / Value field is in the underlying view.) The center control lets you move the cursor, like a joystick, and can be used to both move and select text. Amazing class, I wish I'd invented it!
Also, for any solution, you want to use a UIToolbar as the view holding the keys, because it blurs the view it overlays, just like the keyboard does. You can use the UIToolbar with no bar button items in it (if you want) and just add subviews. This is a "trick" I learned here, as there is no other way to get the blur!
David's KOKeyboard (er…, the one he used - see David's comment below) looks nice. I suspect that he is using the official Apple mechanism:
inputAccessoryView
Typically, you'd set that value on a UITextView, but it can be any class that allows itself to become the first responder.
The provided view will be placed above the default Apple keyboard.
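A minimal sketch of that mechanism in Swift (the text field, selector, and button title are illustrative):
let toolbar = UIToolbar(frame: CGRect(x: 0, y: 0, width: view.bounds.width, height: 44))
toolbar.items = [
    UIBarButtonItem(barButtonSystemItem: .flexibleSpace, target: nil, action: nil),
    UIBarButtonItem(title: "Done", style: .done,
                    target: self, action: #selector(dismissKeyboard))  // hypothetical selector
]
textField.inputAccessoryView = toolbar   // rides above the system keyboard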
It is correct that there is no official mechanism (and it is advised against) to modify any system-provided keyboard. You can add to it, as above. You can also replace it entirely with your own mechanism: Apple will forgo the keyboard setting on your view and use a custom input mechanism if you set
inputView
Set it to any view; Apple will still manage its appearance and dismissal as it does the system keyboard.
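Again a sketch, assuming a text field and a custom picker:
let picker = UIPickerView()    // any UIView can serve as the input view
textField.inputView = picker   // presented and dismissed like the system keyboard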
Edit: Of course, iOS 8.x added significant access to keyboards (not covered here).

Is there any way to create a custom VoiceOver gesture?

Is there any way to create a custom gesture in iOS specifically for VoiceOver users?
Thank you
I think this MIGHT be possible. The iOS Mail app (at least in iOS 6) seems to contain custom VoiceOver actions (you can swipe up or down to enable a "delete" operation on a mail item in the list).
My guess (and I haven't verified this) is that you could add a swipe recognizer only when UIAccessibilityIsVoiceOverRunning() returns true.
I haven't tested this yet.
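In modern Swift that check is UIAccessibility.isVoiceOverRunning; a sketch of the idea (the selector is hypothetical, and, as above, this is untested against VoiceOver's gesture interception):
if UIAccessibility.isVoiceOverRunning {
    let swipe = UISwipeGestureRecognizer(target: self, action: #selector(handleSwipe))
    swipe.direction = .up
    view.addGestureRecognizer(swipe)   // attached only while VoiceOver is on
}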
I'm almost certain that this is not possible. That said, the accessibility APIs allow you to do things like speak content when a view changes, so maybe you could use this?
You mentioned a gesture specifically for VoiceOver users. If VoiceOver users are the majority of your audience, you could just provide a standard gesture, which VoiceOver users can invoke by double-tapping and holding to pass the gesture through, and then performing the gesture itself.
For example, to "pull to refresh", a VoiceOver user would double tap, hold, then pull down.
