I have a hierarchy of views with different UIGestureRecognizers. Every now and then one of them stops working, and I am looking for an efficient way to debug this. Due to the passive nature of UIGestureRecognizers in general, I always find them hard to debug. If one stops working, the two things I can think of are to
Examine the view hierarchy to make sure another view isn't blocking the touch
Override touchesBegan:withEvent: on all relevant views to try to track down the passing of a touch
These sometimes don't cut it, though, as in my current situation, where I have a UITapGestureRecognizer that sometimes (I believe after presenting and dismissing views) stops working while the app is running.
I would recommend checking out the little-known (IME) gestureRecognizers property on UITouch; that may be enough to help you. You can also combine this with overriding the sendEvent: method on UIWindow and UIApplication to gain more visibility into the sequence of events with respect to touch handling. (No pun intended.)
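A minimal sketch of what that can look like, assuming a UIWindow subclass (DebugWindow is an illustrative name) installed as the app's key window:

#import <UIKit/UIKit.h>

@interface DebugWindow : UIWindow
@end

@implementation DebugWindow

- (void)sendEvent:(UIEvent *)event {
    for (UITouch *touch in event.allTouches) {
        // touch.gestureRecognizers lists every recognizer this touch is currently
        // being delivered to, which quickly shows when one has dropped out.
        NSLog(@"phase=%ld view=%@ recognizers=%@",
              (long)touch.phase, touch.view, touch.gestureRecognizers);
    }
    [super sendEvent:event]; // always forward, otherwise touch delivery stops
}

@end

Swapping this in usually only requires returning an instance of the subclass from your app delegate's window property while you are debugging.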
I've been thinking about this for quite some time now and I haven't found a suitable answer to it.
How performant are UIGestureRecognizers in Swift/iOS development?
Let me explain by giving you a theoretical example:
You have an app on the iPad Pro (big screen, much space) and there you have maybe dozens of different views and buttons and so on. For whatever reason you need every one of these views and buttons to be moveable/clickable/resizable/...
What's better?
Adding one (or multiple) UIGestureRecognizer(s) to each view (which results in many active gesture recognizers and many small, specific handling methods [maybe grouped for each type of view])
Adding one single recognizer to the superview (which results in one active gesture recognizer and a big handling method that needs to cycle through the subviews and determine which one has been tapped)
I guess the first one is the simplest, but is it slower than the second one? I'm not sure about that. My gut tells me that having that many UIGestureRecognizers can't be a good solution.
But either way, the system has to cycle through everything (in the worst case), be it many recognizers or many subviews. I'm curious about this.
Thank you
Let's look at your question in terms of the gesture recognition flow: to deliver an event to the right gesture recognizer, the system walks the view tree to find the deepest view containing the touch, which is the view returned from one specific UIView method
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
Adding one (or multiple) UIGestureRecognizer(s) to each view
This is the way I recommend. In this case the system will do most of the work for you and prevent mistakes that are very difficult to debug later - trust me. This is the approach for UI, especially if you have multiple different gestures on different parts of the screen. In my particular case I have a huge video player UI with around 20 gesture recognizers on one screen and it feels perfectly fine - no lag or frame drops in the UI. This way is simple and self-describing. I recommend implementing it using a storyboard or xib; you can refer back to Interface Builder at any time to recall in a moment which recognizer you should update to change the behaviour of the UI. The speed of this approach is guaranteed by the system.
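As a rough illustration of this approach (the class and property names below are made up, not from the answer), each movable view gets its own recognizer, and the handler stays small because recognizer.view already identifies the touched view:

#import <UIKit/UIKit.h>

// Illustrative container view controller; movableViews would be your draggable views.
@interface BoardViewController : UIViewController
@property (nonatomic, strong) NSArray<UIView *> *movableViews;
@end

@implementation BoardViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    for (UIView *tile in self.movableViews) {
        UIPanGestureRecognizer *pan =
            [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
        [tile addGestureRecognizer:pan];
    }
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    // pan.view is the specific tile that was touched, so each handler stays small.
    CGPoint t = [pan translationInView:pan.view.superview];
    pan.view.center = CGPointMake(pan.view.center.x + t.x, pan.view.center.y + t.y);
    [pan setTranslation:CGPointZero inView:pan.view.superview];
}

@end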
Adding one single recognizer to the superview
This approach could be used when one simple gesture has to cover many views (more than ~20). That could happen if, for example, you are implementing a game where the user picks up and places bricks of different shapes. It is not suitable for common UI tasks. The speed depends on your implementation, and based on the question itself I do not recommend it. This approach is about design, not speed.
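For contrast, a sketch of the single-recognizer approach, assuming one UITapGestureRecognizer added to the superview; bringBrickToFront: is a hypothetical handler:

- (void)handleTap:(UITapGestureRecognizer *)tap {
    // tap.view is the superview; the handler has to locate the hit subview itself.
    CGPoint point = [tap locationInView:tap.view];
    for (UIView *subview in tap.view.subviews) {
        if (CGRectContainsPoint(subview.frame, point)) {
            [self bringBrickToFront:subview]; // hypothetical per-view handling
            break;
        }
    }
}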
I have a somewhat specific scenario and I am unsure of the best way to deal with it.
I will note that both ways I have tried works but I am looking for maybe the optimal or proper way.
Scenario
I am building a UIWindow that will be presented above an application. The UIWindow will contain multiple layers of views; each layer can contain floating subviews that the user can drag, and each layer is controlled by its own view controller as well. If the user does not touch any of the subviews, the touches should pass through the views and the UIWindow down to the application to be handled as normal.
Option 1 - Letting each UIView/subview handle the hit-test. Basically the views themselves are the ones that implement pointInside and hitTest, which makes me believe that the views themselves are supposed to do this check. In the UIWindow, I then have to loop through each subview, and their subviews, to find the hit one. I could also just call hitTest directly, but then I have to explicitly check that my sub-view-controllers' "main" views (self.view) were not the ones that should be dragged. (A sketch of this option follows below.)
Option 2 - Letting each UIViewController handle the pointInside check via a custom method. Basically this would make me implement a custom method in each UIViewController that loops through its view's subviews and returns a BOOL for whether or not one was hit. I would have to manually call this method from the UIWindow > VC-main.sub-2.check() + VC-main.sub-1.check()
Option 3 - ????
As I said, I've got both of these methods to work, but neither of them feels very elegant, so I am wondering if anyone has a better idea.
Here is an image of the general layout.
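For what it's worth, here is a minimal sketch of Option 1 handled at the window level, under the assumption that the overlay window should report "no hit" whenever the touch lands only on a passive container view (OverlayWindow is an illustrative name):

#import <UIKit/UIKit.h>

@interface OverlayWindow : UIWindow
@end

@implementation OverlayWindow

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    UIView *hit = [super hitTest:point withEvent:event];
    // Treat the window itself and the layer's passive container view as "no hit"
    // so the event falls through to the application's main window underneath.
    if (hit == self || hit == self.rootViewController.view) {
        return nil;
    }
    return hit; // a floating, draggable subview was actually touched
}

@end

With several layer view controllers you would extend the check to cover each layer's container view, which keeps the pass-through decision in one place instead of spreading overrides across every view.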
When I demo my touch apps to remote teams, the people on the other end don't know where I am touching. To remedy this, I have been working on an event-intercepting view/window that can display touches over applications. No matter how many variations on nextResponder I call, I am unable to react to the touch and pass it along to the controllers underneath. Specifically, scroll views don't react, nor do buttons.
Is there a way to take an event, get its position, then pass it along to whatever component would have been responding to it initially (the controller underneath)?
Update:
I am making some progress with a UIView. The new view always returns NO from pointInside. This works great for when the touch starts, but it doesn't track moves or releases. Is there a strategy for adding gesture recognizers to the touch in order to track its event lifecycle?
Joe
You could try creating your own subclass of UIApplication that overrides sendEvent:. Your implementation should call [super sendEvent:event] as well as process the event as needed.
Update your main.m and pass the name of your custom UIApplication class as the 3rd parameter to the call to UIApplicationMain.
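A sketch of both pieces, assuming your app delegate class is named AppDelegate:

// MyApplication.m - sees every event before it is dispatched
#import <UIKit/UIKit.h>
#import "AppDelegate.h"

@interface MyApplication : UIApplication
@end

@implementation MyApplication
- (void)sendEvent:(UIEvent *)event {
    // Inspect or record the touches here (e.g. to draw touch indicators),
    // then always forward the event so normal handling continues.
    [super sendEvent:event];
}
@end

// main.m - pass the subclass name as the 3rd parameter
int main(int argc, char *argv[]) {
    @autoreleasepool {
        return UIApplicationMain(argc, argv,
                                 NSStringFromClass([MyApplication class]),
                                 NSStringFromClass([AppDelegate class]));
    }
}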
After some more due diligence, I found my oversight. In the layer that was on top and displaying the touches, user interaction needed to be set to false. Once I set that to false, I was able to use that layer for display while catching events on the layers below. The project still isn't done but I am one step closer.
Take care,
Joe
I know what I want to do, but I'm stumped as to how to do it: I want to implement something like the iOS multitasking gestures. That is, I want to "steal" touches from any view inside my view hierarchy if the number of touches is greater than, say, two. Of course, the gestures are not meant to control multitasking, it's just the transparent touch-stealing I'm after.
Since this is a fairly complex app (which makes extensive use of view controller containment), I want this to be transparent to the views that it happens to (i.e., I want to be able to display arbitrary views and hierarchies, including UIScrollViews, MKMapViews, UIWebViews, etc. without having to change their implementation to play nice with my gestures).
Just adding a gestureRecognizer to the common superview doesn't work, as subviews that are interaction enabled eat all the touches that fall on them.
Adding a visually transparent UI-enabled view as a sibling (but in front) of the main view hierarchy also doesn't work, since now this view eats all the touches. I've experimented with reimplementing touchesBegan: etc. in the touchView, but forwarding the touches to nextResponder doesn't work, because that'll be the common superview, in effect funnelling the touches right around the views that are supposed to be receiving them when the touchView gives them up.
I am sure I'm not the only one looking for a solution for this, and I'm sure there are smarter people than me that have this already figured out. I even suspect it might not actually be very hard, and just maybe my brain won't see the forest for the trees today. I'm thankful for any helpful answers anyway :)
I would suggest trying method swizzling, reimplementing touchesBegan:withEvent: on UIView. I think the best way is to store the number of touches in a static shared variable (so that each view can increment/decrement this value). It's just a very simple idea, take it with a grain of salt.
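A rough, untested sketch of that idea, using a UIView category whose +load swizzles touchesBegan:withEvent: into a counting wrapper (all names are illustrative, and a complete version would also swizzle touchesEnded:/touchesCancelled: to decrement the counter):

#import <UIKit/UIKit.h>
#import <objc/runtime.h>

static NSInteger gActiveTouchCount = 0; // the shared counter mentioned above

@interface UIView (TouchCounting)
@end

@implementation UIView (TouchCounting)

+ (void)load {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        Method original = class_getInstanceMethod(self, @selector(touchesBegan:withEvent:));
        Method swizzled = class_getInstanceMethod(self, @selector(tc_touchesBegan:withEvent:));
        method_exchangeImplementations(original, swizzled);
    });
}

- (void)tc_touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    gActiveTouchCount += touches.count;
    // Decide here whether to "steal" based on gActiveTouchCount...
    [self tc_touchesBegan:touches withEvent:event]; // calls the original implementation
}

@end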
Hope this helps.
Ciao! :)
A possible, but potentially dangerous (if you aren't careful), approach is to subclass your application's UIWindow and redefine the sendEvent: method.
As this method is called for each touch event received by the app, you can inspect the event and then decide to call [super sendEvent:] (if the touch is not filtered), not call it (if the touch is filtered), or just defer the call if you are still recognizing the touch.
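A minimal sketch of the filtering variant, assuming a UIWindow subclass installed as the key window; the two-touch threshold is only an example:

@interface StealingWindow : UIWindow
@end

@implementation StealingWindow

- (void)sendEvent:(UIEvent *)event {
    if (event.type == UIEventTypeTouches && event.allTouches.count > 2) {
        // Handle the "stolen" multi-finger touches here and swallow the event.
        // A careful implementation should also deliver touchesCancelled: to the
        // views that were already tracking these touches.
        return;
    }
    [super sendEvent:event]; // unfiltered events are delivered as usual
}

@end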
Another possibility is to play with the hitTest:withEvent: method, but this would require your stealing view to be placed properly in the view hierarchy, and I think it doesn't fit well when you have many view controllers. I believe the previous solution is more general purpose.
Actually, adding a gesture recognizer on the common superview is the right way to do this. But it sounds like you may need to set either delaysTouchesBegan or cancelsTouchesInView (or both) to ensure that the gesture recognizer handles everything before letting it through to the child views.
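For example (a sketch; handleSteal: and containerView are your own names, and a pan recognizer with a two-finger minimum is just one way to express the "more than N touches" condition):

UIPanGestureRecognizer *steal =
    [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handleSteal:)];
steal.minimumNumberOfTouches = 2;   // only begin for two or more fingers
steal.delaysTouchesBegan = YES;     // hold touchesBegan back from subviews until recognition fails
steal.cancelsTouchesInView = YES;   // cancel delivery to subviews once the gesture is recognized
[containerView addGestureRecognizer:steal];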
Imagine a view with, say, 4 subviews, next to each other but non-overlapping.
Let's call them view#1 ... view#4
All 5 such views are my own UIView subclasses (yes, I've read: Event Handling as well as iOS Event Guide and this SO question and this one, not answered yet)
When the user touches one of them, UIKit hit-tests it and delivers subsequent events to that view: view#1
Even when the finger goes outside view#1, over say view#3.
Even if this "drag" is now over view#3, view#1 still receives touchesMoved, but view#3 receives nothing.
I want view#3 to start replying to the touches. Maybe with a "touchesEntered" of my own, together with possibly a "touchesExited" on view#1.
How would I go about this?
I can see two approaches.
1. Side-step the problem and do all the touch handling in the parent view whenever I detect a touchesMoved outside of view#1's bounds, or
2. transfer to the parent view, telling it to "redispatch". It's not very clear how such redispatching would work, though.
For solution #2, where I am getting confused is not the forwarding per se, but how to find the UIView I want to forward to. I can obviously loop through the parent's subviews until I find one whose bounds/frame contain the touch, but I am wondering if I am missing something that Apple already provides and I am just not connecting to this problem.
Any idea?
I have done this, but I used CALayers instead of sub-UIViews. That way, there are no worries about the subviews catching/redispatching events to the parent UIView. You might not be able to do that, but it does simplify things. My solution tended to use CGRectContainsPoint() a lot.
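Something along these lines, assuming the parent view owns the sublayers and handles its own touches:

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    CGPoint point = [touches.anyObject locationInView:self];
    for (CALayer *layer in self.layer.sublayers) {
        if (CGRectContainsPoint(layer.frame, point)) {
            // react for this sublayer (highlight it, start an animation, ...)
            break;
        }
    }
}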
You may want to read Event Handling again, as it comes pretty close to answering your question:
A touch object...is associated with its hit-test view for its lifetime, even if the touch represented by the object subsequently moves outside the view.
Given that, if you want to accomplish your goal of having different views react to the user's finger crossing over them, and if you want to do it within the touch-handling mechanism provided by UIView, you should go with your first approach: have the parent view handle the touch. The parent can use -hitTest:withEvent: or -pointInside:withEvent: as it's tracking a touch to determine if the touch is in one of the subviews, and if so can send an appropriate message.
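A sketch of what that tracking could look like in the parent view; it assumes the subviews are instances of a hypothetical TileView class that declares touchesEntered/touchesExited (the custom messages from the question), and that trackedSubview is a property on the parent:

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    UITouch *touch = touches.anyObject;
    TileView *current = nil;
    for (TileView *subview in self.subviews) {
        if ([subview pointInside:[touch locationInView:subview] withEvent:event]) {
            current = subview;
            break;
        }
    }
    if (current != self.trackedSubview) {
        [self.trackedSubview touchesExited];  // messaging nil is a harmless no-op
        [current touchesEntered];
        self.trackedSubview = current;
    }
}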