Swift - Performance of `UIGestureRecognizer` when having many of them

I've been thinking about this for quite some time now and I haven't found a suitable answer.
How performant are UIGestureRecognizers in Swift/iOS development?
Let me explain by giving you a theoretical example:
You have an app on the iPad Pro (big screen, lots of space) and there you have maybe dozens of different views and buttons and so on. For whatever reason you need every one of these views and buttons to be movable/clickable/resizable/...
What's better?
Adding one (or multiple) UIGestureRecognizer(s) to each view (which results in many active gesture recognizers and many small, specific handling methods [maybe grouped for each type of view])
Adding one single recognizer to the superview (which results in one active gesture recognizer and a big handling method that needs to cycle through the subviews and determine which one has been tapped)
I guess the first one is the simplest, but is it slower than the second one? I'm not sure. My gut tells me that having that many UIGestureRecognizers can't be a good solution.
But either way, the system has to cycle through everything (in the worst case), be it many recognizers or many subviews. I'm curious about this.
Thank you

Let's look at your question in terms of the gesture-recognition flow: to deliver an event to the right gesture recognizer, the system walks the view tree to find the deepest view under the touch, using one specific method of UIView:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
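For illustration, the default hit-testing pass behaves roughly like this (a simplified sketch, not UIKit's actual implementation):

// Simplified sketch of UIKit's default hit-testing.
// The system recurses depth-first, front-to-back, and returns the
// deepest view that contains the point and can receive touches.
override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
    guard isUserInteractionEnabled, !isHidden, alpha > 0.01,
          self.point(inside: point, with: event) else { return nil }
    for subview in subviews.reversed() {    // front-most subview first
        let converted = subview.convert(point, from: self)
        if let hit = subview.hitTest(converted, with: event) {
            return hit
        }
    }
    return self  // no subview claimed the point, so this view is the hit view
}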
Adding one (or multiple) UIGestureRecognizer(s) to each view
This is the way I recommend. The system will do most of the work for you and prevent mistakes that are very difficult to debug later - trust me. This is the right choice for UI, especially if you have multiple different gestures on different parts of the screen. In my particular case I have a huge video player UI with around 20 gesture recognizers on one screen, and it feels fine - no lag or dropped frames. This way is simple and self-describing. I recommend implementing it in a storyboard or xib; you can then refer back to Interface Builder at any time and recall in a moment which recognizer to update to change the UI's behaviour. The performance of this approach is guaranteed by the system.
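To make that concrete, a minimal sketch of the per-view approach, written inside a view controller (the tile views and handler name are illustrative):

// One pan recognizer per movable view; the system routes each
// touch to the right recognizer for you.
for tile in [tileA, tileB, tileC] {    // illustrative movable views
    tile.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
}

@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    // gesture.view tells us which view to move - no searching needed.
    guard let moved = gesture.view, let superview = moved.superview else { return }
    let translation = gesture.translation(in: superview)
    moved.center = CGPoint(x: moved.center.x + translation.x,
                           y: moved.center.y + translation.y)
    gesture.setTranslation(.zero, in: superview)
}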
Adding one single recognizer to the superview
This approach is only worth using when a single simple gesture is shared across many views (more than ~20) - for example, a game where the user picks up and places bricks of different shapes. It is not suitable for common UI tasks. Its speed depends on your implementation, and based on the question itself I would not recommend it. This approach is a design decision, not a performance optimization.
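For contrast, a sketch of the single-recognizer approach, where the handler must find the touched subview itself (names are illustrative):

// One tap recognizer on the superview; the handler cycles through subviews.
let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
view.addGestureRecognizer(tap)

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: view)
    // Determine which subview was tapped - this is the "big handling method".
    guard let hit = view.subviews.first(where: { $0.frame.contains(point) }) else { return }
    print("Tapped:", hit)
}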

Related

iOS Swift: UITapGestureRecognizer and Storyboard

I am a novice at iOS programming, obviously a novice at Swift (as it is new), and a dabbler at best in programming overall over the past ~35 years. I have, however, worked for the past 20 years as a manager of multidisciplinary teams that include programmers and as a result I understand a lot of fundamental concepts of software design. I provide this information for context.
I am working on a database app for a class and adding a lot of functionality of my own choosing to enhance my own learning experience. Yesterday I wanted to allow users to tap a UIImageView to choose a new picture for the database entry. I added a Tap Gesture Recognizer to the UIImageView, hooked up the IBAction to the appropriate view controller, then added a println() to the IBAction to test whether the tap was being recognized. Taps on the UIImageView didn't produce the println(), and I was frustrated, so I looked around on the tubes for some hints and found some sample code to programmatically recognize the tap:
let recognizer = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
recognizer.delegate = self
view.addGestureRecognizer(recognizer)
This worked a treat, as they say. I was frustrated, however, as I saw a lot of reference to the idea that the code was unnecessary if I was using the storyboard. After a bit of experimentation with a test project, I eventually found that the UIImageView had to have "User Interaction Enabled" in the Attributes Inspector (not the default setting) to recognize user interaction, which in hindsight makes sense.
My question (at last!) is whether the difference between the two approaches is stylistic, or whether there is a reason to choose the programmatic route over the storyboard implementation - for performance or delegation or otherwise. I can, for example, see that I could embed the recognition code in an if statement. Are there other reasons?
Is this question too theoretical for this format?
IMHO, I always use storyboards unless and until I encounter something that has to be done in code. It's just conceptually easier for me to understand the overall shape of the app if I can see it in large, interconnected chunks. There shouldn't be any noticeable performance differences.
Regarding your particular example, whenever I have an image that has to be tappable, I just put the image in a UIButton, and hook the button up to an IBAction in the controller. This obviates the need for adding a custom gesture recognizer and remembering to make the image tappable.
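If you set the button up in code rather than the storyboard, the same idea looks roughly like this (the asset and handler names are illustrative):

// A UIButton showing the image; no gesture recognizer or
// "User Interaction Enabled" bookkeeping needed.
let button = UIButton(type: .custom)
button.setImage(UIImage(named: "entryPhoto"), for: .normal)   // illustrative asset name
button.addTarget(self, action: #selector(didTapImage(_:)), for: .touchUpInside)
view.addSubview(button)

@objc func didTapImage(_ sender: UIButton) {
    // present the image picker here
}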

UISwipeGestureRecognizer to override all others?

I have a pretty standard application that uses gesture recognizers in various places. I'm trying to add a global three-finger UISwipeGestureRecognizer that can be performed anywhere in the app, similar to Apple's four-fingered ones. This works fine in some views, but if there's another swipe recognizer beneath it, that one is triggered instead of the new one.
I’d like this new three-finger swipe to be given priority at all times – I’ve added it to my root view controller’s view, but it still seems to bleed through at times.
Is there an easier way to do this than going through and requiring all other recognizers to fail?
You can use the requireGestureRecognizerToFail: method (require(toFail:) in Swift) to filter out unneeded gestures.
Apple doc.
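A minimal sketch of that in Swift (the root view and the collection of other recognizers are illustrative):

// The three-finger swipe wins: every other swipe recognizer must
// wait for it to fail before it can begin.
let threeFingerSwipe = UISwipeGestureRecognizer(target: self, action: #selector(handleThreeFingerSwipe(_:)))
threeFingerSwipe.numberOfTouchesRequired = 3
rootView.addGestureRecognizer(threeFingerSwipe)   // illustrative root view

for other in otherSwipeRecognizers {              // illustrative collection
    other.require(toFail: threeFingerSwipe)
}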

Detect user dragging items out of UICollectionView?

I've got a UICollectionView, and I'd like to be able to touch-and-drag items up and out of the View, and thus delete them. (Very much along the same lines as how the Dock works on OS X: drag something off and let go, and it is removed).
I've done some research, but almost everything I find is looking for CollectionViews that are drag-and-drop to reorder. I don't need to reorder (I'm happy to just remove the item at the given index from the source array and then reload), I just need to detect when an item is moved outside of the View and released.
So I suppose my questions are these:
1) Is that possible with the built-in CollectionView, some kind of itemWasDraggedOutsideViewFromIndex: method or something?
2) If not, is it something that can be done with a subclass (and specifically is it possible for a CollectionView beginner)?
3) Are there any code samples or tutorials you can recommend that do this?
Here is a helper class that I've been working on that does just that: https://github.com/Ice3SteveFortune/i3-dragndrop. Hope it helps. There are examples of how to use it in the TestApp.
UPDATE
About a year on, this is now a full-on drag-and-drop framework. Hope this proves useful: https://github.com/ice3-software/between-kit
There is no built-in method like you're suggesting. What you want can be done, but you'll have to handle it with a gesture recognizer and appropriate code to handle the drag/drop operation.
I tried using a subclass to do this and finally went back to putting it in my view controller. In my case, though, I was dragging stuff in/out of the collection view as well as two other views on the screen.
I don't know if you have the book, but the most helpful thing I found was Erica Sadun's The Core iOS 6 Developer's Cookbook, which has excellent code on drag/drop within collection views. I don't think it specifically addresses dragging outside of the collection view, but for me the solution was to put the gesture recognizer on the common superview and always use its coordinates rather than the subview's coordinates.
One problem I hit was that I wanted to be able to select cells with a tap as well as drag them, and there is no way (despite Apple's docs to the contrary) to require the single-tap gesture to fail on the collection view. As a result, I ended up using the long-press gesture to perform the entire operation, and there is no translationInView for long press (only locationInView), so that required some additional work:
iOS - Gesture Recognizer translationInView
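A sketch of that workaround - deriving a translation from a long-press recognizer's locationInView by remembering where the drag started (the stored properties and draggedView are illustrative):

var dragStart: CGPoint = .zero            // illustrative stored properties
var draggedViewStartCenter: CGPoint = .zero

@objc func handleLongPress(_ gesture: UILongPressGestureRecognizer) {
    // Always use the common superview's coordinate space.
    let location = gesture.location(in: view)
    switch gesture.state {
    case .began:
        dragStart = location
        draggedViewStartCenter = draggedView.center   // illustrative view
    case .changed:
        // Long press has no translationInView, so compute it by hand.
        draggedView.center = CGPoint(x: draggedViewStartCenter.x + location.x - dragStart.x,
                                     y: draggedViewStartCenter.y + location.y - dragStart.y)
    case .ended, .cancelled:
        // Test for intersection with drop targets here.
        break
    default:
        break
    }
}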
Another thing that will make it harder or easier is the number of possible drop targets you have. I had many, in many different types of views (straight UIView, collectionview, and scrollViews). I found it necessary to maintain a list of "drop targets" and to test for intersections with targets as the dragged object was moved. Somehow, you have to be able to determine whether the view you're intersecting is a place where a drop can occur.
If you are addressing the specific situation of dragging something out of a view to delete it (like dragging to a trash can view) and that's it, this should not be complicated. You have to remember that when you do a transform your frame becomes meaningless, but the center is still good; so you end up using the center for everything that you would normally use the frame for.
Here is the closest thing I found online that was helpful; I didn't end up using this class though as I thought it would be too complicated to implement in my app.
http://www.ancientprogramming.com/2012/04/05/drag-and-drop-between-multiple-uiviews-in-ios/
Hope this has been some help.
Yes, there is.
1 - Conform your view to UIDropInteractionDelegate.
2 - Then add this line where you set up the view. For a view controller, add it in viewDidLoad:
self.view.addInteraction(UIDropInteraction(delegate: self))
Or, for a UIView, add it in init:
self.addInteraction(UIDropInteraction(delegate: self))
3 - Then get the location of the item being dragged here and have fun with it:
func dropInteraction(_ interaction: UIDropInteraction, sessionDidUpdate session: UIDropSession) -> UIDropProposal {
    // Note: `self` must be a UIView here; in a view controller, use `view` instead.
    print(session.location(in: self))
    return UIDropProposal(operation: .move)
}
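One caveat worth adding (my own note, assuming the iOS 11+ drag-and-drop API): the drop interaction above only handles the receiving side. For items to be draggable out of the collection view in the first place, the collection view also needs a drag delegate, roughly like this (MyViewController and the items array are illustrative):

// Sketch: make collection view items draggable (iOS 11+).
extension MyViewController: UICollectionViewDragDelegate {
    func collectionView(_ collectionView: UICollectionView,
                        itemsForBeginning session: UIDragSession,
                        at indexPath: IndexPath) -> [UIDragItem] {
        // `items` is an illustrative data-source array of strings.
        let provider = NSItemProvider(object: items[indexPath.item] as NSString)
        return [UIDragItem(itemProvider: provider)]
    }
}

// In setup:
// collectionView.dragDelegate = self
// collectionView.dragInteractionEnabled = true   // off by default on iPhone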

Gestures that steal touches like iOS multitasking swipe

I know what I want to do, but I'm stumped as to how to do it: I want to implement something like the iOS multitasking gestures. That is, I want to "steal" touches from any view inside my view hierarchy if the number of touches is greater than, say, two. Of course, the gestures are not meant to control multitasking, it's just the transparent touch-stealing I'm after.
Since this is a fairly complex app (which makes extensive use of view controller containment), I want this to be transparent to the views that it happens to (i.e., I want to be able to display arbitrary views and hierarchies, including UIScrollViews, MKMapViews, UIWebViews etc. without having to change their implementation to play nice with my gestures).
Just adding a gestureRecognizer to the common superview doesn't work, as subviews that are interaction enabled eat all the touches that fall on them.
Adding a visually transparent UI-enabled view as a sibling (but in front) of the main view hierarchy also doesn't work, since now this view eats all the touches. I've experimented with reimplementing touchesBegan: etc. in the touchView, but forwarding the touches to nextResponder doesn't work, because that'll be the common superview, in effect funnelling the touches right around the views that are supposed to be receiving them when the touchView gives them up.
I am sure I'm not the only one looking for a solution for this, and I'm sure there are smarter people than me that have this already figured out. I even suspect it might not actually be very hard, and just maybe my brain won't see the forest for the trees today. I'm thankful for any helpful answers anyway :)
I would suggest trying method swizzling, reimplementing touchesBegan on UIView. I think the best way is to store the number of touches in a static shared variable (so that each view can increment/decrement this value). It's just a very simple idea; take it with a grain of salt.
Hope this helps.
Ciao! :)
A possible, but potentially dangerous (if you aren't careful) approach is to subclass your application UIWindow and redefine the sendEvent: method.
As this method is called for each touch event received by the app, you can inspect the event and then decide to call [super sendEvent:] (if the touch is not filtered), not call it (if the touch is filtered), or defer the call if you are still recognizing the touch.
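A minimal sketch of that approach, assuming a three-touch filter (the class name and the handling are illustrative):

// Every touch event passes through here before normal delivery.
class InterceptingWindow: UIWindow {
    override func sendEvent(_ event: UIEvent) {
        if let touches = event.allTouches, touches.count >= 3 {
            // Handle the multi-touch gesture yourself and swallow the event,
            // or stash it for deferred delivery while still recognizing.
            return
        }
        super.sendEvent(event)   // unfiltered touches flow through normally
    }
}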
Another possibility is to play with the hitTest:withEvent: method, but this would require your stealing view to be placed properly in the view hierarchy, and I think it doesn't fit well when you have many view controllers. I believe the previous solution is more general-purpose.
Actually, adding a gesture recognizer on the common superview is the right way to do this. But it sounds like you may need to set either delaysTouchesBegan or cancelsTouchesInView (or both) to ensure that the gesture recognizer handles everything before letting it through to the child views.
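For example (a sketch; the recognizer, handler, and superview names are illustrative):

// The superview's recognizer holds touches back until it has decided,
// and cancels delivery to subviews once it recognizes.
let steal = UIPanGestureRecognizer(target: self, action: #selector(handleSteal(_:)))
steal.minimumNumberOfTouches = 3
steal.delaysTouchesBegan = true      // subviews see nothing until recognition fails
steal.cancelsTouchesInView = true    // subviews get touchesCancelled on recognition
commonSuperview.addGestureRecognizer(steal)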

UITouch & UIEvents: fighting the framework?

Imagine a view with, say, 4 subviews, next to each other but non overlapping.
Let's call them view#1 ... view#4
All 5 such views are my own UIView subclasses (yes, I've read: Event Handling as well as iOS Event Guide and this SO question and this one, not answered yet)
When the user touches one of them, UIKit "hit-tests" it and delivers subsequent events to that view: view#1.
Even when the finger goes outside view#1, over say view#3.
Even if this "drag" is now over view#3, view#1 still receives touchesMoved, but view#3 receives nothing.
I want view#3 to start replying to the touches. Maybe with a "touchesEntered" of my own, together with possibly a "touchesExited" on view#1.
How would I go about this?
I can see two approaches:
1) Side-step the problem and do all the touch handling in the parent view whenever I detect a touchesMoved outside of view#1's bounds, or
2) transfer the touch to the parent view, telling it to "redispatch". It's not very clear how such redispatching would work, though.
For solution #2, what confuses me is not the forwarding per se, but how to find the UIView to forward to. I can obviously loop through the parent's subviews until I find one whose bounds/frame contains the touch, but I am wondering if I am missing something that Apple has already provided that I cannot relate to this problem.
Any idea?
I have done this, but I used CALayers instead of sub-UIViews. That way, there are no worries about the subviews catching/redispatching events to the parent UIView. You might not be able to do that, but it does simplify things. My solution tended to use CGRectContainsPoint() a lot.
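A sketch of that layer-based variant, assuming one sublayer per "tile" (the class name is illustrative):

// The view owns plain CALayers instead of subviews, so it receives
// every touch itself and just asks which layer contains the point.
class TileView: UIView {
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // CGRectContainsPoint() is CGRect.contains(_:) in Swift.
        if let hit = layer.sublayers?.first(where: { $0.frame.contains(point) }) {
            print("Finger over layer:", hit)
        }
    }
}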
You may want to read Event Handling again, as it comes pretty close to answering your question:
"A touch object...is associated with its hit-test view for its lifetime, even if the touch represented by the object subsequently moves outside the view."
Given that, if you want to accomplish your goal of having different views react to the user's finger crossing over them, and if you want to do it within the touch-handling mechanism provided by UIView, you should go with your first approach: have the parent view handle the touch. The parent can use -hitTest:withEvent: or -pointInside:withEvent: as it's tracking a touch to determine if the touch is in one of the subviews, and if so can send an appropriate message.
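A sketch of that first approach, where the parent tracks the touch and notifies subviews with custom entered/exited hooks (TrackableView and its methods are illustrative, not UIKit API):

class TrackableView: UIView {
    // Subviews must have isUserInteractionEnabled = false so the
    // parent stays the hit-test view and receives all the touches.
    func touchEntered() { /* react to the finger arriving */ }
    func touchExited()  { /* react to the finger leaving */ }
}

class ParentView: UIView {
    private var currentChild: TrackableView?

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // Which subview is under the finger right now?
        let hit = subviews.first { $0.frame.contains(point) } as? TrackableView
        if hit !== currentChild {
            currentChild?.touchExited()
            hit?.touchEntered()
            currentChild = hit
        }
    }
}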
