What's the relation between First Responder and hitTest methods? - ios

I understand how the system finds the view that handles touch events by calling the following methods on a view and its subviews:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event;
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event;
But I don't understand the role of first responder in this mechanism.
Does firstResponder represent the start point of the hitTest traversal?

I would recommend a complete reading of the article Using Responders and the Responder Chain to Handle Events in Apple's documentation on Touches, Presses, and Gestures.
Short answer:
Touch events are delivered directly to the first responder.
When your app receives an event, UIKit automatically directs that event to the most appropriate responder object, known as the first responder.
First responder is determined by hit-testing.
UIKit uses view-based hit-testing to determine where touch events occur. Specifically, UIKit compares the touch location to the bounds of view objects in the view hierarchy. The hitTest(_:with:) method of UIView traverses the view hierarchy, looking for the deepest subview that contains the specified touch, which becomes the first responder for the touch event.
If the first responder does not handle the event, the event is then passed from responder to responder in the active responder chain.
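To make the traversal concrete, here is a minimal sketch of what hitTest(_:with:) does conceptually. It mirrors the documented behavior; it is not UIKit's actual implementation, and the helper name is my own so it doesn't shadow the real method.
import UIKit

extension UIView {
    func sketchHitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        // Hidden, non-interactive, or (nearly) transparent views are skipped.
        guard !isHidden, isUserInteractionEnabled, alpha > 0.01,
              self.point(inside: point, with: event) else {
            return nil
        }
        // Search subviews front to back (the last subview is frontmost).
        for subview in subviews.reversed() {
            let converted = subview.convert(point, from: self)
            if let hit = subview.sketchHitTest(converted, with: event) {
                return hit  // the deepest subview containing the point
            }
        }
        // No subview claimed the touch, so this view is the hit-test view.
        return self
    }
}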

There's not a lot of relationship between them, except that the result of a hit test might cause the window to make the hit view the firstResponder.
firstResponder is all about keyboard events and, at least on macOS, menu item actions and commands like cut, copy, paste, undo, etc.
When a keyboard event is received by the app from the Window Server, it goes to the firstResponder. If it's not interested in it, the event goes up the chain via nextResponder until it exhausts the responder chain. On macOS there are related but separate concepts of the mainWindow and keyWindow. They are usually the same, but can be different; if they are different, the responder chain starts with the keyWindow, and when that chain is exhausted it moves to the mainWindow. Then the application gets a crack at it, then the application's delegate, and then, if it's a document-based app, the document and the document's delegate.
On iOS, I'm a little fuzzy on the exact details, but it's similar. Actually I think it's simpler, because you don't have multiple windows.
Hit testing, on the other hand, is all about the view hierarchy. The app finds which window (on macOS) the hit occurs in, then proceeds down to its immediate subviews, then down their subviews, and so on, until it finds the leaf view that was hit.
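To see the upward direction concretely, you can walk the chain yourself via nextResponder (next in Swift). A small sketch; the helper is my own:
import UIKit

// Walks up the responder chain from any responder and prints each link,
// e.g. UITextField -> UIView -> UIViewController -> UIWindow -> UIApplication.
func dumpResponderChain(from responder: UIResponder) {
    var current: UIResponder? = responder
    while let r = current {
        print(type(of: r))
        current = r.next  // nil once the chain is exhausted
    }
}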


What is first responder?

According to Apple's documentation:
When your app receives an event, UIKit automatically directs that event to the most appropriate responder object, known as the first responder.
The same documentation explains how the first responder is determined:
The hitTest:withEvent: method of UIView traverses the view hierarchy, looking for the deepest subview that contains the specified touch, which becomes the first responder for the touch event.
What I don't understand is why there is a property of UIResponder called isFirstResponder? And why becomeFirstResponder exists. Should not the first responder be determined dynamically by UIKit based on the location of the specific touch event?
Additionally, canBecomeFirstResponder returns NO for UIView, which is clearly incorrect since views do handle touch events.
The only way I can think that can resolve this confusion is if all these methods are relevant only to events of the type of shake, remote control and editing menu. But the documentation is not clear about it.
What I don't understand is why there is a property of UIResponder called firstResponder?
There isn't. UIResponder does not have a public property named firstResponder.
And why becomeFirstResponder exists.
The main use of becomeFirstResponder is to programmatically choose which text field gets keyboard events.
Should not the first responder be determined dynamically by UIKit based on the location of the specific touch event?
There are more kinds of events than touch events. For example, there are keyboard events and motion events. The first responder tracked by UIKit is for non-touch events. In other systems, this concept is usually called the “focus” or more specifically the “keyboard focus”. But (in iOS) the first responder can be a view that doesn't respond to keyboard events.
Additionally, canBecomeFirstResponder returns NO for UIView, which is clearly incorrect since views do handle touch events.
That's OK, because touch events don't really start at the first responder; they start at the view returned by -[UIView hitTest:withEvent:].
The only way I can think that can resolve this confusion is if all these methods are relevant only to events of the type of shake, remote control and editing menu. But the documentation is not clear about it.
There are more kinds of non-touch events that start with the first responder, but aside from that, you have resolved it correctly.
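As an illustration of a non-touch event that starts at the first responder, here is a sketch of a view controller that claims first-responder status so it receives shake motion events (the class name is hypothetical):
import UIKit

// A responder must opt in to becoming first responder to receive
// motion events such as a shake.
class ShakeViewController: UIViewController {
    override var canBecomeFirstResponder: Bool { true }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        becomeFirstResponder()  // claim keyboard/motion focus for this controller
    }

    override func motionEnded(_ motion: UIEvent.EventSubtype, with event: UIEvent?) {
        if motion == .motionShake {
            print("Shake event delivered to the first responder")
        }
    }
}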
This is not a "quick answer" topic -- your best bet is to do some searching and read through several articles about it.
But, briefly...
.becomeFirstResponder() is often used to activate text fields without requiring the user to tap in the field. A common case is a form with multiple text fields, where you automatically "jump" to the next field based on input:
myTextField.becomeFirstResponder()
Again, as you've already seen from glancing at the docs, there is much more to it than that... but far too much for an answer here.
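For the "jump to the next field" case specifically, a common pattern is to move focus from the return key in the text field delegate. A sketch; the outlet names are hypothetical, and each field's delegate is assumed to be set to this controller:
import UIKit

// Moves keyboard focus to the next field when the user taps Return.
class FormViewController: UIViewController, UITextFieldDelegate {
    @IBOutlet var nameField: UITextField!
    @IBOutlet var emailField: UITextField!

    func textFieldShouldReturn(_ textField: UITextField) -> Bool {
        if textField == nameField {
            emailField.becomeFirstResponder()  // jump to the next field
        } else {
            textField.resignFirstResponder()   // dismiss the keyboard on the last field
        }
        return true
    }
}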

What gets called instead of - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event when Accessibility (VoiceOver) is on?

Recently I put a breakpoint in a UIView's
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
}
method and checked whether the debugger stops there when a user taps on the UIView while VoiceOver is on, but it never hit the breakpoint. Does anyone know what gets called instead and how the touch can be intercepted?
The standard hitTest mechanism is not used when VoiceOver is on. Instead, UIView has an _accessibilityHitTest:withEvent: method, but unlike macOS, it is private and can't easily be overridden or called.
Similar to hitTest, _accessibilityHitTest uses _accessibilityPointInside:withEvent:, which, in turn, calls pointInside:withEvent: (which is public).
First of all, note that users must double-tap to "activate" or "tap" a view when VoiceOver is enabled. If you still aren't hitting hitTest:…, then break on accessibilityActivate(). This is the default accessibility action triggered by a double-tap. You may also be interested in the activationPoint, which is the default location of the simulated touch VoiceOver emits upon activation. Note that the activation point isn't relevant to all VoiceOver interactions (e.g. adjustable controls).
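If breaking on accessibilityActivate() works for you, you can also override it to intercept the VoiceOver activation directly. A minimal sketch; the class name is my own:
import UIKit

// Intercepts the VoiceOver double-tap "activate" action on this view.
class TappableView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        isAccessibilityElement = true  // make the view focusable by VoiceOver
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isAccessibilityElement = true
    }

    override func accessibilityActivate() -> Bool {
        // Called on double-tap instead of the normal touch sequence.
        print("Activated via VoiceOver at \(accessibilityActivationPoint)")
        return true  // report that the action was handled
    }
}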
The hit-test view is given the first opportunity to handle a touch event. If the hit-test view cannot handle an event, the event travels up that view's chain of responders, as described in "The Responder Chain Is Made Up of Responder Objects", until the system finds an object that can handle it.

How to respond to a touch event that begins outside a view and drags into it in iOS

I want to respond to this kind of touch event on a view: the touch begins outside the view and then drags into it. I have tried the UIControlEvents such as UIControlEventTouchDragEnter and UIControlEventTouchDragInside, as well as UIGestureRecognizer, and found no way to do this directly.
Finally, I implemented it my own way: create a custom subclass of UIView, override the method (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event, and forward the touch event up the responder chain. In the touchesBegan, touchesMoved, and touchesEnded methods of the parent view, I use the location of the UITouch object to determine whether the touch began outside the view and then dragged into it.
I am not satisfied with this approach. Can anyone suggest a more efficient and elegant way to accomplish this? Thank you very much for your time and consideration.
the touch begins outside the view and then drags into it
You can never do this more "elegantly" because if your initial touch down is outside the view, then it is not associated with this view and never will be. Whatever view the touch's initial hit test associates it with, that is the view that will always be this touch's view throughout the gesture, and touch events will be sent only to that view. The default definition of hit testing is that the view that the initial touch is inside is the hit-test view, and that, by hypothesis, is not your view.
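For reference, the workaround described in the question boils down to tracking the touch in a common ancestor and converting coordinates. A rough sketch, under the assumption that the touch is delivered to the container itself; the class and property names are hypothetical:
import UIKit

// A container that detects when a touch that began on it drags into a target subview.
class TrackingContainerView: UIView {
    var targetView: UIView?            // the view we want "drag enter" behavior for
    private var isInsideTarget = false

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesMoved(touches, with: event)
        guard let touch = touches.first, let target = targetView else { return }
        let pointInTarget = touch.location(in: target)
        let nowInside = target.point(inside: pointInTarget, with: event)
        if nowInside && !isInsideTarget {
            isInsideTarget = true
            print("touch dragged into the target view")  // the "drag enter" moment
        } else if !nowInside {
            isInsideTarget = false
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        isInsideTarget = false
    }
}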

UIKit: Initial first responder?

When an app launches, what is the initial first responder?
According to the UIKit docs the first responder can be set with the becomeFirstResponder message. However, if this message isn't sent, what is the initial first responder? The UIApplication? The key window?
Also, is there a property anywhere which points to the current first responder?
In both macOS and iOS, each window has its own UIResponder (or, to be more precise, each window IS a UIResponder -- UIWindow descends from UIResponder), which means that each window can have its own first responder. On macOS, there can be many open windows (each one with a first responder), and on iOS there is usually one UIWindow displayed at any one time.
Each window will have a first responder (whether the window itself, a text field receiving keyboard events, or whatever). You can query each window's responder chain by walking it via the nextResponder API.
I'm probably simplifying things a little too much, but for the sake of a nice, simple summarized answer I hope this helps. Here is more information about the iOS responder chain, which shows how an initial view (e.g. the first responder) gets an event and, if it can't handle it, how the event gets passed up to parent views, to the window, and to the application.
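As for a property that points to the current first responder: there is no public one, but a well-known workaround is a nil-targeted action, which UIKit delivers to the first responder first. A sketch; the helper names are my own, not an official API:
import UIKit

private weak var foundResponder: UIResponder?

extension UIResponder {
    // Returns the current first responder, if any.
    static func currentFirstResponder() -> UIResponder? {
        foundResponder = nil
        // A nil-targeted action starts its search at the first responder.
        UIApplication.shared.sendAction(#selector(captureResponder), to: nil, from: nil, for: nil)
        return foundResponder
    }

    @objc private func captureResponder() {
        foundResponder = self
    }
}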

Custom UIGestureRecognizer Not Working As Expected

I have a UITableView which I present in a UIPopoverController. The table view presents a list of elements that can be dragged and dropped onto the main view.
When the user begins a pan gesture that is principally vertical at the outset, I want the UITableView to scroll as usual. When it's not principally vertical at the outset, I want the application to interpret this as a drag-and-drop action.
My unfortunately lengthy journey down this path has compelled me to create a custom UIGestureRecognizer. In an attempt to get the basics right, I left this custom recognizer as an empty implementation at first, one that merely calls the super version of each of the five methods Apple says should be overridden:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)reset;
This results in nothing happening, i.e. the custom gesture's action method is never called, and the table view scrolls as usual.
For my next experiment, I set the gesture's state to UIGestureRecognizerStateBegan in the touchesBegan method.
This caused the gesture's action method to fire, making the gesture appear to behave just like the standard UIPanGestureRecognizer. This obviously suggested I was responsible for managing the gesture's state.
Next up, I set the gesture's state to UIGestureRecognizerStateChanged in the touchesMoved method. Everything still fine.
Now, instead, I tried setting the gesture's state to UIGestureRecognizerStateFailed in the touchesMoved method. I was expecting this to terminate the gesture and restore the flow of events to the table view, but it didn't. All it did was stop firing the gesture's action method.
Lastly, I set the gesture's state to UIGestureRecognizerStateFailed in the touchesBegan method, immediately after I had set it to UIGestureRecognizerStateBegan.
This causes the gesture to fire its action method exactly once, then pass all subsequent events to the table view.
So...sorry for such a long question...but why, if I cause the gesture to fail in the touchesBegan method (after first setting the state to UIGestureRecognizerStateBegan), does it redirect events to the table view as expected? And why, if I try the same technique in touchesMoved (the only place I can detect that a move is principally vertical), doesn't this redirection occur?
Sorry for making this more complicated than it actually was. After much reading and testing, I've finally figured out how to do this.
First, creating the custom UIGestureRecognizer was one of the proper solutions to this issue, but when I made my first test of the empty custom recognizer, I made a rookie mistake: I forgot to call [super touches...:touches withEvent:event] for each of the methods I overrode. This caused nothing to happen, so I set the state of the recognizer to UIGestureRecognizerStateBegan in touchesBegan, which did result in the action method being called once, thus convincing me I had to explicitly manage states, which is only partially true.
In truth, if you create an empty custom recognizer and call the appropriate super method in each method you override, your program will behave as expected. In this case, the action method will get called throughout the dragging motion. If, in touchesMoved, you set the recognizer's state to UIGestureRecognizerStateFailed, the events will bubble up to the superview (in this case a UITableView), also as expected.
The mistake I made, and I think others might make, is thinking there is a direct correlation between setting the gesture's state and the chronology of the standard methods when you subclass a gesture recognizer (i.e. touchesBegan, touchesMoved, etc.). There isn't - at least, it's not an exact mapping. You're better off letting the base behavior work as-is and only intervening where necessary. So, in my case, once I determined the user's drag was principally vertical, which I could only do in touchesMoved, I set the gesture recognizer's state to UIGestureRecognizerStateFailed in that method. This took the recognizer out of the picture and automatically forwarded a full set of events to the encompassing view.
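Condensing that into code, here is a sketch of such a recognizer. The class name is mine, and details beyond the vertical-failure check are illustrative, not the answerer's exact code:
import UIKit
import UIKit.UIGestureRecognizerSubclass  // required to assign to `state` in a subclass

// Fails as soon as the drag turns out to be principally vertical, handing the
// touches back to the table view; otherwise it behaves like a pan for drag-and-drop.
class HorizontalDragGestureRecognizer: UIGestureRecognizer {
    private var startPoint: CGPoint = .zero

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesBegan(touches, with: event)
        startPoint = touches.first?.location(in: view) ?? .zero
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)
        guard let point = touches.first?.location(in: view) else { return }
        let dx = abs(point.x - startPoint.x), dy = abs(point.y - startPoint.y)
        switch state {
        case .possible where dy > dx:
            state = .failed   // principally vertical: let the table view scroll
        case .possible where dx > dy:
            state = .began    // principally horizontal: start the drag
        case .began, .changed:
            state = .changed  // keep reporting movement to the action method
        default:
            break
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesEnded(touches, with: event)
        state = (state == .began || state == .changed) ? .ended : .failed
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesCancelled(touches, with: event)
        state = .cancelled
    }

    override func reset() {
        super.reset()
        startPoint = .zero
    }
}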
For the sake of brevity, I've left out a ton of other stuff I learned through this exercise, but would like to point out that, of six or seven books on the subject, Matt Neuburg's Programming iOS 4 provided the best explanation of this subject by far. I hope that referral is allowed on this site. I am in no way affiliated with the author or publisher - just grateful for an excellent explanation!
That probably happens because responders expect to see an entire touch from beginning to end, not just part of one. Often, -touchesBegan:... sets up some state that's then modified in -touchesMoved..., and it really wouldn't make sense for a view to get a -touchesMoved... without having previously received -touchesBegan.... There's even a note in the documentation that says, in part:
All views that process touches, including your own, expect (or should expect) to receive a full touch-event stream. If you prevent a UIKit responder object from receiving touches for a certain phase of an event, the resulting behavior may be undefined and probably undesirable.
