In my application (which has to be accessible to blind users) I have this scenario (it's a grammar exercise).
When I try it on a device with VoiceOver turned on, it first focuses on the first part of the sentence, so in that case it reads "Kesha", and when I swipe right to read the next part it reads the second part of the sentence, "the contract because...". What I want is for it to also focus on the gray box (which is a UIView element) before it reads the second part of the sentence, so that the user knows where that empty box is in the sentence, but I don't know how to do that.
I already tried grayBox.accessibilityLabel = "empty box" and grayBox.accessibilityHint = "empty box", but it just doesn't set the focus on that view and it doesn't speak. I also tried putting an empty UILabel inside the box, but I had some issues positioning it in the right order and I don't think that is the right way to do it. Any suggestions?
On the UIView that you want to 'receive focus' you just need to enable accessibility, or mark it as an accessible element. An example:
myGreyView.isAccessibilityElement = true
myGreyView.accessibilityLabel = "A grey box"
myGreyView.accessibilityHint = "this is a secretive box. I don't know what it does"
You can also tick a box in the UIView's properties in Xcode's Interface Builder, "Accessibility Enabled" I think it's called, which also lets you set the label and hint.
For more information see this Apple guide to VoiceOver
Regarding the order in which elements are read out: are the first part and the second part different UILabels, or one label?
Are you adding these in code or in a XIB/Storyboard? The order in which they are added as subviews can affect the order in which VoiceOver reads items out.
When adding the three UI elements, add them in the order below.
UILabel - "Kesha"
UIView - Gray box, "A grey box"
UILabel - "the contract because it was not fair."
If you have added them via Interface Builder (XIB/Storyboard), make sure the order is correct in the view hierarchy.
If this fails, you could try overriding the accessibilityElements property and returning an array of the labels and the grey view in the order you want them read out.
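A minimal sketch of that override, assuming the container view has outlets named firstLabel, greyBox and secondLabel (the names are assumptions, not from the question):

class SentenceView: UIView {
    @IBOutlet weak var firstLabel: UILabel!   // "Kesha"
    @IBOutlet weak var greyBox: UIView!       // the empty grey box
    @IBOutlet weak var secondLabel: UILabel!  // "the contract because it was not fair."

    override func awakeFromNib() {
        super.awakeFromNib()
        // Make the grey box an element VoiceOver can land on
        greyBox.isAccessibilityElement = true
        greyBox.accessibilityLabel = "empty box"
        // Explicitly control the reading order, regardless of subview order
        accessibilityElements = [firstLabel!, greyBox!, secondLabel!]
    }
}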
I have tried to understand the purpose of accessibilityActivationPoint, but in vain.
When a focused accessible element is activated, that property should indicate to VoiceOver the specific point it is going to activate when a user double-taps the element (Apple reference): for me, it's always the selected element itself.
I understand that the selected element is treated as a block by VoiceOver, whatever the other elements inside it. Once a double tap occurs to activate this block, VoiceOver calls accessibilityActivate to know what to perform (Apple reference).
1/. I've written many tests by creating a custom view containing a switch control. Whatever the value of accessibilityActivationPoint inside it (or outside, on another switch control), the value of the switch control never changes. Is this a proper use case or am I totally wrong?
2/. When we gather many elements inside one accessible element, how is VoiceOver able to activate one of them when they aren't accessible by definition? Should pointing at one of them with the accessibilityActivationPoint work?
Personally, I couldn't make it work, and I think I'm really confusing accessibilityActivationPoint with accessibilityActivate.
Any help would be appreciated, thanks in advance.
Yes, you have the right idea with accessibilityActivate and accessibilityActivationPoint. Note that, in order for it to work, accessibilityActivationPoint needs to be a point within the control that you are trying to activate, in on-screen coordinates (use the convert function!).
I think the short answer to your second question is "yes", but, just to clear up confusion about when the accessibility activation point is useful, I'll go into more detail about it.
By default (i.e., the default behavior of accessibilityActivate()), when any view is activated by VoiceOver, VoiceOver sends a "tap gesture" to the center of the view. The position of this "tap gesture" can be changed by updating the accessibilityActivationPoint property on a view. Below, I have an example of how this property can be used.
Let's say you have a blank button (in the image below, the button is the gray box) next to some text:
For the purpose of accessibility, you may want to make the entire view that holds the button and text an Accessibility Element (so that VoiceOver users can easily understand that the button is associated with the text "Worldspace Attest"). In the image below, I am using Accessibility Inspector to show that the view holding both of these elements is an Accessibility Element.
Notice in these images that the button is not in the center of the view, but rather, it is to the right. When you activate this view using VoiceOver, the view will not select the button; instead, it will send a "tap" to the center of the view (which is the same as tapping the text, which does not do anything). In order to select the button, you have to set the view's accessibilityActivationPoint to be the on-screen coordinates of the button:
view.accessibilityActivationPoint = self.convert(button.center, to: UIApplication.shared.windows.first)
This should make it so that this button is usable by a VoiceOver user.
I hope this information clears up any confusion about the Accessibility Activation Point property. The example I used above can be found in this repository in the "Active Control Name" demo.
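As a complementary sketch (not part of the demo above), you can also override accessibilityActivate() on the container to forward the double-tap to an embedded control yourself, instead of relying on the activation point; the class and property names here are assumptions:

class LabeledSwitchView: UIView {
    let titleLabel = UILabel()
    let toggle = UISwitch()

    override var isAccessibilityElement: Bool {
        get { true }   // treat the whole row as a single VoiceOver element
        set { }
    }

    override func accessibilityActivate() -> Bool {
        // Called when the VoiceOver user double-taps the focused element.
        // Flip the embedded switch directly rather than simulating a tap.
        toggle.setOn(!toggle.isOn, animated: true)
        toggle.sendActions(for: .valueChanged)
        return true    // tell VoiceOver the activation was handled
    }
}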
I have a custom @IBDesignable UIButton. I use several of them on a screen in the Storyboard file and they are central to the whole flow of the app. They all appear in the Document Outline (the guide) as "Button":
"Button"s in guide
Seeing the actual title would make managing the Storyboard a whole lot easier (although of course it has no effect at runtime). I'd like to set the title to an @IBInspectable label text I'm using:
Their Attributes Inspector
I'm using my own label text instead of the regular button title because its layout and format are special. If I set the button title, it shows up in the middle, over the real title.
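For context, a minimal sketch of the kind of button described (the class name and property name are assumptions):

import UIKit

@IBDesignable
class TitledButton: UIButton {
    private let customLabel = UILabel()

    // Rendered by a custom label instead of the built-in title
    @IBInspectable var labelText: String = "" {
        didSet { customLabel.text = labelText }
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        if customLabel.superview == nil {
            addSubview(customLabel)
        }
        customLabel.frame = bounds
        customLabel.textAlignment = .center
    }
}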
Ideally I want to set the guide "title" to my label text. I couldn't find anything anywhere on how to do this. Otherwise, is there some workaround to use the button label? Keep in mind this is just to make the Storyboard less confusing for others to see and use.
Thanks!
You can always slow double-click each button in the Document Outline (the left pane in your first picture). It'll let you rename it right there.
By "slow double-click" I mean: click to select, wait a second, then click again (without moving the mouse).
I'm only familiar with the Document Outline automatically naming things when you create referencing outlets (by control-dragging to a .swift file). They take on the names of the variables you're dragging them to (even going so far as to format camel case with spaces and capitals for you). Otherwise, in your case, you're probably going to have to name each one manually.
You can "name" your elements by filling in the Label field in the Identity Inspector pane:
As you see, I have a normal UIButton, and I put "Button One" in the Label field... so it shows up as "Button One" in the outline pane.
(using iOS 8.3, Xcode 6.3, OS X 10.10.3)
Hi, I wonder if a WatchKit WKInterfaceGroup can be a button?
In my WatchKit application, I would like to maximize the touch surface for a particular action.
I know that one can place one or more buttons in a group (next to other things like labels, images, etc.). Having such a WKInterfaceGroup (called group) with small items in it, I thought of placing several buttons, all filling out the empty space inside the group's container.
But by placing several buttons close to each other in the group, even if they all reference the very same action, I realised that a user's finger touching two buttons in the group would not give me the desired increase in surface.
The problem is that the user's finger touches more than one button at once and, even though I gave all the buttons in the group the very same action, as just mentioned, the action does not get fired.
The solution might be, if possible, to define the entire group as a button. How would that work? (Maybe accessibility traits could help? Or something else?) Or can you somehow overlay a button onto a group?
Any idea appreciated!
You can use a WKInterfaceButton, change its content type to "Group" in the Attributes Inspector in Interface Builder and fill it with whatever content you need.
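Once the button is wired up in Interface Builder, the controller side is just a normal outlet and action; a small sketch (the outlet and action names are assumptions):

import WatchKit

class ActionInterfaceController: WKInterfaceController {
    // The WKInterfaceButton whose content is set to "Group" in the storyboard;
    // the labels/images live inside that group.
    @IBOutlet weak var bigTouchButton: WKInterfaceButton!

    // One action fires no matter where inside the button's group the tap lands
    @IBAction func bigTouchButtonTapped() {
        // handle the action here
    }
}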
Is it possible to add a text link into a TextView? I want the link to perhaps behave like a button, where I can assign an action to it.
EDIT: When I say assign an action, I mean actually giving it something in the code. I'm wondering if it's possible to dynamically add a "button" into text that I can assign a coded action to.
Live scenario
Think of something like a dictionary app. Maybe the definition of one word uses another word that you might not know the definition of, so being able to tap that word to instantly search it, rather than having to type it in, would be a nice user-friendly feature. It seems rather unlikely, though, I guess.
I would recommend using NIAttributedLabel from Nimbus, an open source iOS library. You can specify text ranges that are links, and you get delegate messages sent when a user taps on it.
Main Nimbus site: http://nimbuskit.info/
NIAttributedLabel docs: http://docs.nimbuskit.info/interface_n_i_attributed_label.html
In the inspector, go to the Text View Attributes tab and make sure "Detect Links" is checked.
Yes you can. Add the URL into the text view, then open up the Attributes Inspector. You will see an option in there to detect links.
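If the link also needs to trigger your own code rather than just open in Safari, a different approach is to add a .link attribute to the attributed text and intercept taps with UITextViewDelegate; a minimal sketch (the outlet name and URL scheme are assumptions):

import UIKit

class DefinitionViewController: UIViewController, UITextViewDelegate {
    @IBOutlet weak var textView: UITextView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let text = NSMutableAttributedString(string: "Tap a word to look it up.")
        // Mark "word" as a link; the custom scheme lets us recognise it later
        let range = (text.string as NSString).range(of: "word")
        text.addAttribute(.link, value: URL(string: "myapp://define/word")!, range: range)
        textView.attributedText = text
        textView.isEditable = false
        textView.delegate = self
    }

    // Called when the user taps the link; returning false stops the default handling
    func textView(_ textView: UITextView, shouldInteractWith URL: URL,
                  in characterRange: NSRange, interaction: UITextItemInteraction) -> Bool {
        if URL.scheme == "myapp" {
            // run the custom action here, e.g. search for the tapped word
            return false
        }
        return true
    }
}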
I know of a way, but it's a LOT of work. First, you have an NSAttributedString that you have the text view display. Second, attribute the range of text you want to be the button. Third, assign a tap gesture recognizer to the text view, and in the method called by the recognizer, use Core Text to determine whether the tap happened over the range of text that represents the button.
Here's how you'll use Core Text: create a framesetter with the attributed string. Create a frame from the framesetter with the shape of a rectangle that is the frame of the text view, inset by the text view's padding. The frame will let you get the y origin of every line in the text view, and once you know which line the tap happened on, you can use that line to figure out exactly which character was tapped by giving it an x offset. Once you know the character's index on the line, you can add it to the start of the line's range to get the index of the character within the whole string. Then you can check whether it's within the range of the text that is your button. If it is, you can call a method to simulate target-action-style behavior.
I've explained the process of how to accomplish this and specified which kinds of Core Text objects you'll need; I'll let you look up the specific API details:
https://developer.apple.com/library/mac/documentation/Carbon/Reference/CoreText_Framework_Ref/_index.html#//apple_ref/doc/uid/TP40005304
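A rough sketch of the hit-testing described above, assuming a helper that maps a tap point to a character index (nothing here is copied from the answer, and the geometry, especially the insets, may need tuning to match the text view's actual layout):

import UIKit
import CoreText

func characterIndex(at point: CGPoint, in textView: UITextView,
                    attributed: NSAttributedString) -> Int? {
    // Framesetter built from the same attributed string the text view displays
    let framesetter = CTFramesetterCreateWithAttributedString(attributed as CFAttributedString)

    // Rectangle of the text view's bounds, inset by its padding
    let inset = textView.textContainerInset
    let padding = textView.textContainer.lineFragmentPadding
    let drawRect = textView.bounds.inset(by: UIEdgeInsets(top: inset.top,
                                                          left: inset.left + padding,
                                                          bottom: inset.bottom,
                                                          right: inset.right + padding))
    let path = CGPath(rect: drawRect, transform: nil)
    let frame = CTFramesetterCreateFrame(framesetter,
                                         CFRange(location: 0, length: attributed.length),
                                         path, nil)

    // Line origins come back in Core Text's flipped coordinate space
    let lines = CTFrameGetLines(frame) as! [CTLine]
    guard !lines.isEmpty else { return nil }
    var origins = [CGPoint](repeating: .zero, count: lines.count)
    CTFrameGetLineOrigins(frame, CFRange(location: 0, length: 0), &origins)

    let flipped = CGPoint(x: point.x - drawRect.minX, y: drawRect.maxY - point.y)
    for (i, line) in lines.enumerated() where flipped.y >= origins[i].y {
        // Offset within the line gives us the tapped character's index
        let relative = CGPoint(x: flipped.x - origins[i].x, y: flipped.y - origins[i].y)
        let index = CTLineGetStringIndexForPosition(line, relative)
        return index == kCFNotFound ? nil : index
    }
    return nil
}

The returned index can then be compared against the range of the text that acts as your button.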
You can also use my Objective-C Core Text wrapper:
https://github.com/mysterioustrousers/MYSCoreText
What about Core Text? It can draw many kinds of text.
What I'm trying to do is simple: bring a label in front of an image within a subview.
But all of the options for arranging are disabled/unselectable when my label is selected. I find this happens often.
What could be the reason that I'm almost never allowed to change the z-axis of my objects in the Arrange menu? Is it a better practice to avoid this feature and set the order of views programmatically?
It can depend on how you have selected the label (similar to how the label can only be moved with the keyboard when it is selected in certain ways).
A simple alternative is to look at the list of views in the pane on the left and drag the views up and down to change their order.
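If you would rather set the order in code, as the question mentions, it is a one-liner on the common superview; a small sketch (the view names are assumptions):

// Bring the label in front of the image inside the same subview
containerView.bringSubviewToFront(titleLabel)
// or, equivalently, push the image back
containerView.sendSubviewToBack(backgroundImageView)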
It happens sometimes. In that situation, click the view or image you want to send back; you will see little square handles at the edges of the image, which you can use to resize it. Click it once, then go to Editor > Arrange and choose the option you need.