Line number swiped in UITextField - iOS

I am working on a react-native app that needs to detect which line number in a TextInput (UITextField) has been swiped by a user.
A popular app that uses this type of interaction is Paper which allows a user to swipe a given line in a text document to style it.
Even though my main use case is react-native, I'm curious to know what other developers familiar with RN or iOS consider the building blocks for such an interaction.
My current thoughts are:
Wrap my TextInput with a PanResponder element
Detect whatever I consider is a valid user gesture
Determine the x,y coordinate of the valid gesture
Somehow determine the line that was swiped, given the above coordinates??
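For the last step, a sketch on the native side, assuming the view is a UITextView (RN's multiline TextInput is UITextView-backed on iOS; UITextField itself is single-line). The layout manager can map the gesture point from the PanResponder to a character index, from which a line number can be derived:

```swift
import UIKit

// Sketch: map a gesture point (in the text view's coordinate space)
// to a zero-based line number.
func lineNumber(at point: CGPoint, in textView: UITextView) -> Int {
    // Convert from view coordinates to text-container coordinates.
    var location = point
    location.x -= textView.textContainerInset.left
    location.y -= textView.textContainerInset.top

    // Ask the layout manager which character sits under the point.
    let charIndex = textView.layoutManager.characterIndex(
        for: location,
        in: textView.textContainer,
        fractionOfDistanceBetweenInsertionPoints: nil)

    // The line number is the count of newlines before that character.
    let prefix = (textView.text as NSString).substring(to: charIndex)
    return prefix.components(separatedBy: "\n").count - 1
}
```

On the react-native side, the PanResponder's `locationX`/`locationY` would be forwarded to a native module that calls a helper like this; the exact bridging is left out here.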

Related

Detect tap on a label in Mapbox map on iOS

I'd like to know the best way to detect a user's tap on a label.
The new iOS 15 Maps app allows a tap on e.g. a city's name and then shows information about that city.
I am now wondering if something similar can be done with Mapbox?
I know that there is a mapView.visibleFeatures(in: myRect) function that can somehow help here. So I can convert my finger location to a rect and then get all features there.
BUT... my city's label might be, let's say, 200 px wide. So I would need a quite large rect to find the point of my city label. And then I would also get all kinds of other labels that might be there, maybe not even visible, but in the dataset.
Is there no way to ask the map what the frontmost element was when I tapped? So that when I tap on the far end of the label, I still get that ONE feature?
I am still using Mapbox V6.3... the latest before their last major update.
But if it's not possible with that version, an answer about the latest V10.something would also be great.
For v10, this example demonstrates how to identify features near a tap. While the overall example is to a different end, the onMapClick function shows how to find a feature and then build an annotation.
https://docs.mapbox.com/ios/maps/examples/view-annotation-marker/
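For v6.x, instead of building a large rect, you can hit-test the rendered output at the tap point itself with `visibleFeatures(at:styleLayerIdentifiers:)`. A minimal sketch, where the layer identifier "city-labels" is an assumption (use the id of the symbol layer from your own style):

```swift
import Mapbox

// Sketch (Mapbox iOS SDK v6.x): query only the features actually
// rendered under the tap point, restricted to the label layer.
@objc func handleMapTap(_ gesture: UITapGestureRecognizer) {
    guard let mapView = gesture.view as? MGLMapView else { return }
    let point = gesture.location(in: mapView)

    // visibleFeatures(at:styleLayerIdentifiers:) hit-tests what is
    // drawn on screen, so a symbol counts when the tap lands anywhere
    // on its rendered label, and features that are in the dataset but
    // not rendered are excluded.
    let features = mapView.visibleFeatures(at: point,
                                           styleLayerIdentifiers: ["city-labels"])
    if let city = features.first {
        print(city.attribute(forKey: "name") ?? "unnamed feature")
    }
}
```

Restricting to the label's own layer keeps other nearby features out of the result, which addresses the "quite large rect" problem.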

Prevent Roomle Camera from resetting

As can be seen in the example provided in the tutorial the Roomle configurator demonstrates different behaviors when zoomed in and a parameter is changed:
If the user clicks on the component and then scrolls to zoom in, the parameters of the component can be changed and the camera will stay in place.
If the user does not select the component first and just zooms in, the camera will reset to its original position if a parameter is changed.
I want to programmatically control camera movement and parameter selection without direct user interaction with the Roomle configurator (custom GUI), and this second behavior is very annoying because the camera always jumps around.
I've tried using RoomleConfigurator._sceneHelper._cameraControl.lock(), which successfully prevents manual camera movement, but the camera will still reset on a parameter change.
How can I achieve the 1st behavior, where the camera is locked in place?
It is currently not possible to deactivate this behaviour (camera reset on parameter change).
Also a word of warning regarding the code RoomleConfigurator._sceneHelper._cameraControl.lock(). Functions which start with an underscore (_) are private and may be subject to change in a future release.
Here you can find the documentation on how to move the camera using the official API:
https://docs.roomle.com/web/guides/tutorial/configurator/07_recipes.html#move-camera

iOS accessibility: what are the pros/cons for hardcoding "double tap to activate" as hint?

iOS has built-in support for accessibility: for UIButtons it reads the title of the button followed by a hint, "double tap to activate" (by default). Sometimes we are required to make a non-UIButton control behave similarly to UIButton in terms of accessibility, so we would set its accessibility trait to button and hardcode "double tap to activate" as its accessibilityHint.
I don't like altering system behaviours, and I've seen accessibility users who prefer a single tap instead of a double tap (there's an option they can set), although I haven't checked whether, if they opt for single tap instead of double tap, the system hint becomes "single tap to activate".
What is the common practice regarding accessibility support for a non-UIButton control that is tappable? Thanks!
I've seen accessibility users who prefer single tap instead of double tap (there's an option they can set)
I'm really curious to know how it's possible using VoiceOver because a single tap with one finger deals with the accessibility focus. In the UIButton documentation, Apple states: 🤓
VoiceOver speaks the value of the title [...] when a user taps the button once.
Would you mind detailing the way to activate this option you mentioned because I'm really astonished, please? 🤔
What is the common practice regarding accessibility support for a non-UIButton control that is tappable?
Using a hint is a very good practice to provide useful information to the user, but this information mustn't be crucial, because the accessibility hint may be deactivated in the device settings. 😰
This kind of element must be read out in such a way that its purpose and usage are clear enough for any user: that's what traits are made for. 👍
Many traits are well known and give rise to different actions, like adjustable values, custom actions, and rotor items.
Besides, it's also possible to use the accessibilityActivate() method to define the purpose of a double tap with one finger on an accessible element. 🤯
The way you want to vocally expose the possible actions on a tappable control depends on the content of your application.
Finally, keep in mind that a hardcoded hint must be understood as supplementary information, but definitely not as essential, because it can be deactivated by the user: an accessibility-oriented design is very important when building an app. 😉
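To make the trait-based approach concrete, a minimal sketch of a custom tappable view exposed to VoiceOver as a button (the class name and callback are illustrative). With the .button trait set, VoiceOver supplies its own activation phrasing, so no hardcoded "double tap to activate" hint is needed:

```swift
import UIKit

// Sketch: a custom tappable view that VoiceOver treats as a button.
final class TappableCardView: UIView {
    var onActivate: (() -> Void)?

    override var isAccessibilityElement: Bool {
        get { true }
        set { }
    }

    override var accessibilityTraits: UIAccessibilityTraits {
        get { .button }
        set { }
    }

    // Called when a VoiceOver user double-taps while this element
    // has the VoiceOver cursor.
    override func accessibilityActivate() -> Bool {
        onActivate?()
        return true // true = the activation was handled
    }
}
```

Because the activation wording comes from the system, it stays correct across localizations and any future changes to VoiceOver's phrasing.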

How to get the single tap point in iOS voice over mode

When I do a single tap in iOS VoiceOver mode, it reads the focused element, but I want to know the tapped point's x and y coordinates. Is there any API to get them?
You can't get this information from VoiceOver. The APIs don't support it. The closest you could get would be to grab onto the focused element and understand that somewhere within that rectangle was the last touch point. But even then, there would be no way to distinguish between elements focused by a single touch and elements focused "focusNext"-style via sequential navigation (swipe-right and swipe-left gestures).

Is there any way to have VoiceOver read a label on command?

I'd like to have my QR code scanning app inform the user when it finds a QR code. For sighted users, this works using a label at the bottom that updates to notify the user. However, a blind user would have to tap on that label to have it read by VoiceOver. I would much prefer it to just read automatically.
The closest I can find to this question is
UIAccessibility - Read all the labels and buttons on the screen from top to down, which wasn't possible. While this doesn't bode well for my app, that was a year ago. Has Apple updated its UIAccessibility protocol in any way to allow this?
As a last resort I suppose I could play my own mp3 recording if VoiceOver is on.
You can make VoiceOver speak any string any time you want by calling:
UIAccessibilityPostNotification(UIAccessibilityAnnouncementNotification, NSLocalizedString("QR code has been detected", comment: ""))
Swift 4
UIAccessibility.post(notification: .announcement, argument: "Text")
There is no direct way to tell VoiceOver to speak updates of an element that VoiceOver cursor is not on. This (i.e. speaking the same content "manually") is a feasible workaround.
You can move VoiceOver focus to an element by using the following:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, elementToFocusOn)
VoiceOver will then parse and read the accessibility properties associated with that element.
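Putting the two notifications together in the QR-scanning scenario from the question, a sketch (the function name and the `statusLabel` parameter are illustrative):

```swift
import UIKit

// Sketch: announce a detected QR code to VoiceOver users while still
// updating the on-screen label for sighted users.
func didDetect(qrPayload: String, statusLabel: UILabel) {
    statusLabel.text = qrPayload
    guard UIAccessibility.isVoiceOverRunning else { return }

    // Speak an update without moving the VoiceOver cursor:
    UIAccessibility.post(notification: .announcement,
                         argument: NSLocalizedString("QR code has been detected",
                                                     comment: ""))
    // Alternatively, move focus so VoiceOver reads the label itself:
    // UIAccessibility.post(notification: .layoutChanged, argument: statusLabel)
}
```

The `isVoiceOverRunning` check is optional (posting while VoiceOver is off is harmless), but it makes the intent explicit and leaves room for a fallback such as the mp3 idea mentioned in the question.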
