In our current UI, next to certain labels, we have a help-tip button that, when clicked, explains the details of what the label references. As such, VoiceOver identifies these two items as separate accessibility elements.
However, when using accessibility, we're hoping we can do everything in the label itself. This way, when the label gets focused, the user will hear 'Account value, $20' (the accessibilityLabel), then 'double-tap for help' (the accessibilityHint).
However, unlike a button, a label doesn't have an action associated with it, so I'm not sure how to wire up a response to the accessibility activation gesture indicating the user wants to do something.
Short of converting all of our labels over to buttons, is there any way to listen to the accessibility 'action' method on our labels?
My current workaround is to make only the help-tip buttons accessible and move all the relevant information into their accessibility properties, but that seems like a code smell, as it's easy for a developer to miss when updating the code.
In your UILabel subclass, override accessibilityActivate() and implement whatever double-tapping should do:
override func accessibilityActivate() -> Bool {
    // Do whatever double-tapping should trigger, e.g. show the help tip.
    return true
}
If the action can fail, return false in those instances.
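For instance, here is a minimal sketch of such a subclass; the class name, the closure property, and the showHelpTip call are illustrative, not from the question:

class HelpTipLabel: UILabel {
    // Handler invoked when VoiceOver activates (double-taps) this element.
    var onActivate: (() -> Bool)?

    override func accessibilityActivate() -> Bool {
        // Report failure if no handler is wired up.
        return onActivate?() ?? false
    }
}

// Usage: route activation to the same code as the help-tip button.
// accountLabel.onActivate = { [weak self] in
//     self?.showHelpTip()
//     return true
// }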
Have you tried adding a UITapGestureRecognizer to the labels?
Something like:
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapResponse(_:)))
tapGesture.numberOfTapsRequired = 1
sampleLabel.isUserInteractionEnabled = true // labels ignore touches by default
sampleLabel.addGestureRecognizer(tapGesture)

@objc func tapResponse(_ recognizer: UITapGestureRecognizer) {
    print("tap")
}
OK, this was easier than I thought. To make a UILabel respond to accessibility actions the way a button does, you simply attach a UITapGestureRecognizer. VoiceOver's activation gesture (double-tap) triggers it just like an ordinary tap on any other UIView.
let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(labelTapped))
testLabel.isUserInteractionEnabled = true
testLabel.addGestureRecognizer(tapGestureRecognizer)
Once you do that, your label will respond to accessibility actions.
Group your label and your hint button into a single accessible element.
Once done, you can use:
The accessibilityActivationPoint property, to define the point (for instance, the hint button's position) that is activated when the double-tap occurs.
The accessibilityActivate method, to indicate the action to perform when you double-tap your new element.
Given your environment, I don't recommend implementing a custom action for such a simple use case; the two solutions above should do the job.
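A minimal sketch of the grouping approach, assuming a container view that holds the label and its help button (the class and property names are illustrative):

final class LabeledHelpView: UIView {
    let valueLabel = UILabel()
    let helpButton = UIButton(type: .infoLight)

    override var isAccessibilityElement: Bool {
        get { return true } // expose the pair as one element
        set {}
    }

    override var accessibilityLabel: String? {
        get { return valueLabel.text }
        set {}
    }

    override var accessibilityActivationPoint: CGPoint {
        get {
            // Route the double-tap to the help button (screen coordinates).
            let frame = UIAccessibility.convertToScreenCoordinates(helpButton.bounds, in: helpButton)
            return CGPoint(x: frame.midX, y: frame.midY)
        }
        set {}
    }
}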
Absolutely! You can do this with UIAccessibilityCustomActions on the accessibility element, rather than with tap gesture recognizers. Accessibility operates differently from normal interaction: a single tap while VoiceOver focus is on an element will not behave as it does for a sighted user, and a gesture recognizer won't let you expose multiple actions on the same accessibility element.
At their recent WWDC, Apple put out an excellent video explaining how to add UIAccessibilityCustomActions to any kind of accessibility element. If you start this video 33 minutes in, you will be able to see how this is implemented.
Once in place, your VoiceOver users will be able to scroll through the available actions and select the one that suits their intention, permitting multiple actions to be accessible from the same UILabel.
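A minimal sketch of attaching a custom action to a label; the label name, action name, and showHelp method are illustrative:

let helpAction = UIAccessibilityCustomAction(
    name: "Show help",
    target: self,
    selector: #selector(showHelp)
)
accountLabel.isAccessibilityElement = true
accountLabel.accessibilityCustomActions = [helpAction]

@objc func showHelp() -> Bool {
    // Present the help-tip content here.
    return true
}

With the label focused, VoiceOver users swipe up or down to hear "Show help" and double-tap to run it.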
Related
I have a custom control to increment and decrement values. Now that I've added support for voice over, I've stumbled upon a problem.
My customView has the accessibility trait .adjustable and I implemented the correct methods for increasing and decreasing the values.
However, a VoiceOver user can also double-tap the view to activate it. The problem is that this triggers a gesture which is irrelevant to VoiceOver users.
Is there a way to prevent an adjustable accessibility view from being activated so that the element is only adjustable, not double-tappable like a button?
There are two important APIs to know about when a double-tap occurs:
accessibilityActivate.
accessibilityActivationPoint.
In your case, you could simply return true from an accessibilityActivate override; if that's not enough, also provide an activation point that triggers nothing (this depends on your custom control and its surroundings).
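A minimal sketch of the override, assuming a custom control class (the name is illustrative):

final class AdjustableStepperView: UIView {
    override func accessibilityActivate() -> Bool {
        // Report the activation as handled so VoiceOver's double-tap
        // is not forwarded to the underlying tap gesture.
        return true
    }
}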
Otherwise, use the accessibilityElementIsFocused instance method to know whether you can trigger actions, as this complete example shows.
I ended up using UIAccessibility.isVoiceOverRunning to stop any tasks that would be triggered by a double-tap on that specific element.
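For instance, a sketch of guarding a tap handler this way (the handler name is illustrative):

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    // Skip the tap action for VoiceOver users; the element should
    // only be adjustable, not activatable.
    guard !UIAccessibility.isVoiceOverRunning else { return }
    // ... normal tap handling ...
}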
I have a button that can toggle a label being shown:
class ViewController: UIViewController {
    @IBOutlet weak var label: UILabel!
    @IBOutlet weak var button: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        button.accessibilityLabel = "You can tap this really long string that i'm testing"
        label.accessibilityLabel = "This is a label"
    }

    @IBAction func buttonTapped(_ sender: UIButton) {
        label.isHidden = !label.isHidden
        if !label.isHidden {
            UIAccessibility.post(notification: .layoutChanged, argument: label)
        }
    }
}
When tapping the button, if the label is shown I activate the label to be read by VoiceOver. The problem is VoiceOver automatically starts reading the button's accessibilityLabel when the user taps the button. This results in VoiceOver reading half of the button's accessibilityLabel before swapping to reading the label's accessibilityLabel (e.g. "You can tap this really...This is a label").
Is there a way I can know when VoiceOver is done reading the button's accessibilityLabel and only then post the layout-changed notification? Or is there a way to stop the button from being read again by VoiceOver when the user taps it?
An example project can be seen here: https://github.com/rajohns08/VoiceOverTest
You can set the following trait on the button, and VoiceOver will no longer read the button out again when it's activated:
button.accessibilityTraits.insert(.startsMediaSession)
This tells the system that the button initiates a media session, and that assistive output shouldn't be spoken when it's activated.
Reference documentation from Apple: https://developer.apple.com/documentation/uikit/uiaccessibilitytraits/1620173-startsmediasession
Use this trait to silence the audio output of an assistive app, such as VoiceOver, during a media session that you don't want to interrupt. For example, you might use this trait to silence VoiceOver speech while the user is recording audio.
Regarding waiting for elements to finish being read before moving to other elements: I was only able to find out how to wait for announcements to finish, by subscribing to UIAccessibility.announcementDidFinishNotification.
That works fine when the system finishes reading announcements dispatched like this:
UIAccessibility.post(notification: .announcement, argument: title)
However, I wasn't able to figure out how to wait for .layoutChanged and .screenChanged notifications to finish being read; they do not dispatch the announcement-did-finish notification. If you can figure that out, please let me know.
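A minimal sketch of that subscription (the handler body is illustrative):

NotificationCenter.default.addObserver(
    forName: UIAccessibility.announcementDidFinishNotification,
    object: nil,
    queue: .main
) { note in
    // The user info carries the announced string and whether it completed.
    let text = note.userInfo?[UIAccessibility.announcementStringValueUserInfoKey] as? String
    let success = note.userInfo?[UIAccessibility.announcementWasSuccessfulUserInfoKey] as? Bool
    print("Finished announcing \(text ?? ""): success = \(success ?? false)")
}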
Unfortunately, my gut (an eminently citable source) says you cannot, and should not, inspect and work around any speech generated by VoiceOver in response to user navigation or action. The user should not have to wait through the button label before hearing the outcome of activating the button. That said, you might reconsider using such a long button label and include extra information in the accessibilityHint, which is read after a delay, instead.
One possible approach is to split the content of the accessibilityLabel over a shorter accessibilityLabel and a longer accessibilityHint.
I assume that the reason for the long accessibilityLabel is that there is a need for providing extra information about the button action for users who can't see the screen.
Just like we prefer brief visible button labels so that sighted users can "see fast", VoiceOver users want to "hear fast", so it's a good idea to keep the accessibilityLabel brief and put the salient words first, since reading is interrupted when the user moves on.
The hint will be read if focus stays long enough on the button.
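A minimal sketch of the split; the strings are illustrative:

button.accessibilityLabel = "Toggle status" // brief, salient words first
button.accessibilityHint = "Shows or hides the status text below." // read after a short pause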
Users can turn off spoken hints in the settings, so if it is crucial to impart the information every time the button is pressed, then this solution won't work. You would probably have to rely on announcements instead as FranticRock suggests, perhaps combined with a dispatch delay.
It would be interesting to know the use case, maybe that will lead to more ideas!
Can anyone clarify the difference between these 2 ways of triggering a function when tapping a view?
1)
myView.addTarget(self, action: #selector(myFunctionToTrigger(_:)), for: .touchUpInside)
2)
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(myFunctionToTrigger(_:)))
myView.addGestureRecognizer(tapGesture)
These are two completely different ways of implementing user event handling in iOS apps.
1) addTarget() is a method on the UIControl class and is part of the target-action mechanism. More about that in the documentation.
Note that you can't call addTarget on just any UIView, only on UIControl subclasses.
2) UIGestureRecognizer subclasses are simply a mechanism to detect and distinguish user gestures on a specific view.
The main difference between them is that gesture recognizers can detect more complex events like a swipe or pinch, while addTarget is a more efficient way to detect user activity and provides the same interface for all UIControls, such as UISegmentedControl, UISlider, etc.
Hope that helps.
These two methods work at two different levels of abstraction:
addTarget:action:forControlEvents: is the lower level and provides isolated events. Several of these events must be combined and interpreted to detect more complex gestures like swiping or pinching.
addGestureRecognizer works at a higher level, closer to what an app usually needs. It adds a specific gesture recognizer that listens to the low-level events, detects gestures, and delivers specific information about the gesture.
In the case of a tap, the difference is minor. But when it comes to swiping, pinching, and combinations of tapping, swiping, and pinching (e.g. in an image viewer or a map app), one or more gesture recognizers are the way to go.
Here is the difference:
With gesture recognizers, you can handle specific gestures, such as a tap (UITapGestureRecognizer), a pan (UIPanGestureRecognizer), and many others.
With addTarget(), you attach a target for specific control events, such as .touchUpInside, and many other events.
Pavel's answer is correct: you can only add a target to a UIControl, which is a subclass of UIView. A UIGestureRecognizer can be added to any UIView.
Codo's answer that a target is lower level than a gesture is wrong; gestures are the lower-level touch support. A UIControl uses gestures to make addTarget:action:forControlEvents: work.
There are several benefits of addTarget:
It is a built-in mechanism; you don't need to create another object to do the same thing.
You can choose which event to react to: .touchUpInside or .touchDown (or .valueChanged for sliders).
You can configure different appearances for the button's states (e.g. title text, title color, content image, background image, highlight tint), and the button displays the right state automatically alongside your control-event handling.
Besides the benefits above, I think it's more a matter of coding convention for UIControl elements.
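A minimal sketch of a control event plus per-state appearance; the saveButton property and save method are illustrative:

// Assumes this code lives in a view controller with a `saveButton` outlet.
saveButton.setTitle("Save", for: .normal)
saveButton.setTitle("Saving...", for: .disabled)
saveButton.addTarget(self, action: #selector(save), for: .touchUpInside)

@objc func save() {
    saveButton.isEnabled = false // the .disabled title appears automatically
    // ... perform the save ...
}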
I found that the touchDown event is kind of slow: it requires a firm, fairly long touch and does not respond to a light tap. Why is that?
Whereas touchesBegan responds just when I need it to, i.e. even to very light, quick touches. But that's not an event, it's a method that can be overridden.
The problem is that touchesBegan apparently requires me to either 1) subclass the label (I need to respond to touching a label), or 2) analyze the event to figure out whether it came from the right label. I'm wondering whether this is a code smell and whether there should be an event for a simple touch.
Try adding a gesture recognizer to your label.
First of all, allow the label to handle user interaction:
label.isUserInteractionEnabled = true
You can then assign a tap gesture to the label. Note that a UITapGestureRecognizer fires its handler once, when the tap is recognized (state .ended); if you need to react at touch-down the way touchesBegan does, use a UILongPressGestureRecognizer with minimumPressDuration = 0 and check for state .began in the handler.
The cool thing about this approach is that you can use one handler for all of your labels. Inside the handler, you can get the touched label like this:
let label = tapRecognizer.view as! UILabel
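A minimal sketch of the touch-down variant described above (the handler name is illustrative):

let recognizer = UILongPressGestureRecognizer(target: self, action: #selector(labelTouched(_:)))
recognizer.minimumPressDuration = 0 // fire immediately on touch-down
label.isUserInteractionEnabled = true
label.addGestureRecognizer(recognizer)

@objc func labelTouched(_ recognizer: UILongPressGestureRecognizer) {
    guard recognizer.state == .began,
          let label = recognizer.view as? UILabel else { return }
    print("Touched label: \(label.text ?? "")")
}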
"Code smell"? No, it's a user interface smell. A user interface stink.
If you make a button in your user interface behave different from buttons in any other application, people will hate your app. Do what people are used to.
I'd like to call a method every time a different element is focused while VoiceOver is active. I was hoping there would be some UIAccessibilityNotification for this, but I can't seem to find any.
Ultimately, my goal is to add an additional condition prior to reading the accessibility label. For example, as opposed to saying (by default) "If UIButton becomes focused: read label", I'd like to be able to say "When UIButton becomes focused AND UIButton's background color is blue: read label".
So my question is: how do I either add an additional condition prior to reading the label, or receive a notification when a new element becomes focused?
You can't explicitly tell when the user moves the VoiceOver cursor (just like you can't tell where a sighted user is looking).
For the behavior you want, you have two options:
Set the button's accessibilityLabel to an appropriate value whenever the other conditions change.
Subclass UIButton and override its accessibilityLabel getter method:
- (NSString *)accessibilityLabel {
    if (SOME_CONDITION) {
        return @"Hooray!";
    } else {
        return @"Womp womp";
    }
}
If you need to disable an item entirely, rather than returning nil or a blank string, you should set its accessibilityElementsHidden property to YES.
You can use the UIAccessibilityFocus protocol to detect changes in focus by accessibility clients (including VoiceOver). Note that UIAccessibilityFocus is an informal protocol that each accessibility element must implement independently.
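A minimal sketch, overriding the UIAccessibilityFocus methods on a button subclass (the class name is illustrative):

final class FocusAwareButton: UIButton {
    override func accessibilityElementDidBecomeFocused() {
        super.accessibilityElementDidBecomeFocused()
        print("VoiceOver focused this button")
    }

    override func accessibilityElementDidLoseFocus() {
        super.accessibilityElementDidLoseFocus()
        print("VoiceOver moved focus away")
    }
}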
That said, for your use case, Aaron is right to suggest returning a different accessibilityLabel under each condition.