Can anyone clarify the difference between these 2 ways of triggering a function when tapping a view?
1)
myView.addTarget(self, action: #selector(myFunctionToTrigger(_:)), forControlEvents: UIControlEvents.TouchUpInside)
2)
let tapGesture = UITapGestureRecognizer(target: self, action:
#selector(myFunctionToTrigger(_:)))
myView.addGestureRecognizer(tapGesture)
These are two completely different ways of implementing user event handling in iOS apps.
1) addTarget() is a method on the UIControl class, which is part of the target-action mechanism. There is more about that in the documentation.
And you can't call addTarget() on just any UIView, only on UIControl subclasses.
2) UIGestureRecognizer subclasses are simply a mechanism to detect and distinguish user gestures on a specific view.
The main difference between them is that gesture recognizers can detect more complex events like a swipe, pinch, or zoom, but addTarget() is a more efficient way to detect user activity, and it provides the same interface for all UIControls, such as UISegmentedControl, UISlider, etc.
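For example, the same target-action call works for any UIControl subclass; only the control event changes. A minimal sketch in current Swift syntax (the outlet names are placeholders):

import UIKit

class SettingsViewController: UIViewController {
    // Placeholder outlets: any UIControl subclass exposes addTarget(_:action:for:).
    @IBOutlet weak var saveButton: UIButton!
    @IBOutlet weak var volumeSlider: UISlider!
    @IBOutlet weak var modeControl: UISegmentedControl!

    override func viewDidLoad() {
        super.viewDidLoad()
        saveButton.addTarget(self, action: #selector(savePressed(_:)), for: .touchUpInside)
        volumeSlider.addTarget(self, action: #selector(volumeChanged(_:)), for: .valueChanged)
        modeControl.addTarget(self, action: #selector(modeChanged(_:)), for: .valueChanged)
    }

    @objc private func savePressed(_ sender: UIButton) { /* react to the tap */ }
    @objc private func volumeChanged(_ sender: UISlider) { /* read sender.value */ }
    @objc private func modeChanged(_ sender: UISegmentedControl) { /* read sender.selectedSegmentIndex */ }
}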
Hope that helps.
These two methods work at two different levels of abstraction:
addTarget:action:forControlEvents is the lower level that provides isolated events. Several of these events must be combined and interpreted to detect more complex gestures like swiping or pinching.
addGestureRecognizer works at a higher level, closer to what an app usually needs. It adds a specific gesture recognizer that listens to the low-level events, detects a gesture, and delivers specific information about that gesture.
In the case of a tap, the difference is minor. But when it comes to swiping, pinching, and combinations of tapping, swiping, and pinching (e.g. in an image viewer or a map app), one or more gesture recognizers are the way to go.
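For example, a pinch recognizer hands you the interpreted gesture (a scale factor) instead of the raw touches you would otherwise have to combine yourself. A minimal sketch, assuming an image view outlet already exists (the names are placeholders):

import UIKit

class PhotoViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!   // placeholder outlet

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.isUserInteractionEnabled = true   // image views ignore touches by default
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        imageView.addGestureRecognizer(pinch)
    }

    @objc private func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
        guard let view = recognizer.view else { return }
        // The recognizer has already turned the low-level touches into a scale factor.
        view.transform = view.transform.scaledBy(x: recognizer.scale, y: recognizer.scale)
        recognizer.scale = 1.0   // reset so the next callback is relative to the current size
    }
}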
Here is the difference:
With a gesture recognizer you add a handler for a specific gesture, like UITapGestureRecognizer, UIPanGestureRecognizer, and many other gestures.
Whereas with addTarget() on a UIControl you add a target for specific events, like UIControlEvents.TouchUpInside and many other events.
Pavel's answer is correct: you can only add a target to a UIControl, which is a subclass of UIView. A UIGestureRecognizer can be added to any UIView.
Codo's answer that a target is lower level than a gesture is wrong; gestures are the lower-level touch support. A UIControl uses gestures to make addTarget:action:forControlEvents: work.
There are several benefits to addTarget:
It is a built-in function. You don't need to initialize another object to do the same thing.
You can set when to react to the action: "touchUpInside" or "touchDown" (or "valueChanged" for sliders).
You can set different appearances for the button's states (e.g. title text, title color, content image, background image, highlight tint), and the button switches between them for you; the sketch after this list shows both of these points.
Besides the benefits above, I think it's also simply the coding convention for UIControl elements.
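A minimal sketch of those two points, assuming a plain UIButton (the names and strings are placeholders):

import UIKit

class FormViewController: UIViewController {
    let submitButton = UIButton(type: .system)   // placeholder button

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(submitButton)

        // Pick exactly which control events to react to.
        submitButton.addTarget(self, action: #selector(pressDown), for: .touchDown)
        submitButton.addTarget(self, action: #selector(pressUpInside), for: .touchUpInside)

        // Per-state appearance is handled by the control itself.
        submitButton.setTitle("Submit", for: .normal)
        submitButton.setTitle("Sending...", for: .disabled)
        submitButton.setTitleColor(.systemBlue, for: .normal)
        submitButton.setTitleColor(.gray, for: .highlighted)
    }

    @objc private func pressDown() { /* e.g. start a press animation */ }
    @objc private func pressUpInside() { /* perform the action */ }
}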
Related
I'm trying to pass tap events through to the superview but handle long-press events. I've added a UILongPressGestureRecognizer to the top view, but the tap events aren't passed to the superview. I tried multiple approaches:
Overriding hitTest doesn't work, since the long-press gesture recognizer's handler doesn't get called.
isUserInteractionEnabled - same as above.
Overriding touchesBegan/touchesEnded and calling them manually on the superview doesn't trigger the tap event.
Handling complex tap interactions can be hard, and mixing different approaches can make it much, much harder.
Generally, the best way to handle this is to have a single view with multiple gesture recognizers on it. Implement the UIGestureRecognizerDelegate methods gestureRecognizer(_:shouldRecognizeSimultaneouslyWith:) and gestureRecognizer(_:shouldRequireFailureOf:) to handle conflicts, as in the sketch below. When a touch event is recognized, the handler can delegate the action to whatever other object needs to deal with it. Having different views all trying to deal with touches at the same time is not a good way to approach the problem: gestures depend on other gestures and cannot all be handled independently by different views.
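A minimal sketch of that setup, assuming one container view owns both recognizers and the view controller is their delegate (the names are placeholders):

import UIKit

class CardViewController: UIViewController, UIGestureRecognizerDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        let longPress = UILongPressGestureRecognizer(target: self, action: #selector(handleLongPress(_:)))
        tap.delegate = self
        longPress.delegate = self
        view.addGestureRecognizer(tap)
        view.addGestureRecognizer(longPress)
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        // Forward the tap to whatever object needs to deal with it.
    }

    @objc private func handleLongPress(_ recognizer: UILongPressGestureRecognizer) {
        guard recognizer.state == .began else { return }
        // Handle the long press here.
    }

    // Resolve conflicts between the recognizers here instead of spreading them over several views.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }
}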
I found that the touchDown event is kind of slow, in that it seems to require a firm, fairly long touch and does not respond to a light tap. Why is that?
Whereas touchesBegan responds just when I need it to, i.e. it responds even to very light, quick touches. But that's not an event, it's a method that has to be overridden.
The problem is that touchesBegan apparently requires me to either 1) subclass the label (I need to respond to touches on a label), or 2) analyze the event to figure out whether it came from the right label. I'm wondering whether this is a code smell and whether there should be an event for a simple touch.
Try adding a UITapGestureRecognizer to your label.
First of all, allow the label to handle user interaction:
label.userInteractionEnabled = true
Then assign the tap gesture to the label; the recognizer calls your handler once the tap is recognized (for a tap recognizer its state will be .Ended at that point). If you need to react the instant the finger touches down, use a UILongPressGestureRecognizer with minimumPressDuration set to 0 instead and check for state .Began in the handler.
The cool thing about this approach is that you can use one handler for all of your labels. Inside the handler, you can get the tapped label like this:
let label = tapRecognizer.view as! UILabel
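Putting the pieces together, a minimal sketch in current Swift syntax (the label names are placeholders):

import UIKit

class LabelsViewController: UIViewController {
    @IBOutlet weak var firstLabel: UILabel!    // placeholder outlets
    @IBOutlet weak var secondLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        for label in [firstLabel, secondLabel].compactMap({ $0 }) {
            label.isUserInteractionEnabled = true   // labels ignore touches by default
            // Each view needs its own recognizer instance, but they can share one handler.
            let tap = UITapGestureRecognizer(target: self, action: #selector(labelTapped(_:)))
            label.addGestureRecognizer(tap)
        }
    }

    @objc private func labelTapped(_ recognizer: UITapGestureRecognizer) {
        guard let label = recognizer.view as? UILabel else { return }
        print("Tapped: \(label.text ?? "")")
    }
}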
"Code smell"? No, it's a user interface smell. A user interface stink.
If you make a button in your user interface behave differently from buttons in every other application, people will hate your app. Do what people are used to.
Brent Simmons wrote in a blog post that tap gesture recognizers, presumably on a UIView, are less accessible than UIButtons. I'm trying to learn my way around making my app accessible, and I was curious if anyone could clarify what makes them less accessible than a UIButton, and what makes an element "accessible" to begin with?
For more customizability I was planning to build a button out of a UIView with tap gesture recognizers and some subviews, but now I'm not so sure. Is it possible to make a UIView as accessible as a UIButton?
Accessible in this context most likely refers to UI elements that can be used with Apple's accessibility features, such as VoiceOver (see the example below).
For example, a visually impaired person will not be able to see your view or its subviews, or buttons for that matter; but the VoiceOver software built into every iOS device will read out the kind of object and its title, something like "Button: Continue" (if the button title is "Continue").
You can see that a bare tap gesture recognizer on a plain view will most likely not be announced by VoiceOver, and the view will therefore be less "accessible".
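That said, if you do go the UIView-plus-gesture-recognizer route, you can opt the view into accessibility yourself. A minimal sketch, assuming a custom view (the names and strings are placeholders):

import UIKit

class FancyButtonView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        // Make VoiceOver treat this view like a button.
        isAccessibilityElement = true
        accessibilityLabel = "Continue"            // what VoiceOver reads out
        accessibilityTraits = .button              // announced as a button
        accessibilityHint = "Moves to the next screen"

        let tap = UITapGestureRecognizer(target: self, action: #selector(tapped))
        addGestureRecognizer(tap)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    @objc private func tapped() {
        // Handle the tap / activation here.
    }
}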
I have a pretty standard application that uses gesture recognizers in various places. I'm trying to add an above-all three-finger UISwipeGestureRecognizer that can be performed anywhere in the app, similar to Apple's four-fingered ones. This works fine in some views, but if there's another swipe recognizer beneath it, that one is triggered instead of the new one.
I’d like this new three-finger swipe to be given priority at all times – I’ve added it to my root view controller’s view, but it still seems to bleed through at times.
Is there an easier way to do this than going through and requiring all other recognizers to fail?
You can use the requireGestureRecognizerToFail: method to filter out the unneeded gestures.
Apple doc.
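A minimal sketch of that, added in the root view controller; existingSwipeRecognizers is a placeholder for however you reach the other recognizers in your hierarchy:

import UIKit

class RootViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        let threeFingerSwipe = UISwipeGestureRecognizer(target: self,
                                                        action: #selector(handleThreeFingerSwipe(_:)))
        threeFingerSwipe.numberOfTouchesRequired = 3
        view.addGestureRecognizer(threeFingerSwipe)

        // Make every other swipe recognizer wait until the three-finger swipe has failed.
        let existingSwipeRecognizers: [UIGestureRecognizer] = []   // placeholder collection
        for recognizer in existingSwipeRecognizers {
            recognizer.require(toFail: threeFingerSwipe)   // requireGestureRecognizerToFail: in Objective-C
        }
    }

    @objc private func handleThreeFingerSwipe(_ recognizer: UISwipeGestureRecognizer) {
        // App-wide action goes here.
    }
}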
Is there a pattern Apple follows for which methods to call to handle an event for a UI object?
What I mean is, to set an action for UIButton is [button addTarget:action:forControlEvents]
and for an ImageView I have to [imageView addGestureRecognizer]
I can never remember what methods to call. Is there an easy way to remember?
The "pattern" is that it is normal behavior for a button to respond to control events while it is not very common for an image view to respond to user interaction. Normally if you want to have a tappable image, you would use a UIButton and set an image on it. Apple decided to write the control events into UIButtons but not normal UIViews or UIImageViews.
So basically, you can use a control event if it is a button, otherwise you must use a different method. For normal views, gesture recognizers are a good option.
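For instance, the tappable-image case can usually stay in the target-action world. A minimal sketch in current Swift syntax (the image name is a placeholder asset):

import UIKit

class GalleryViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // A tappable image: a UIButton with an image instead of a UIImageView plus a recognizer.
        let imageButton = UIButton(type: .custom)
        imageButton.setImage(UIImage(named: "thumbnail"), for: .normal)   // "thumbnail" is a placeholder
        imageButton.frame = CGRect(x: 20, y: 100, width: 120, height: 120)
        imageButton.addTarget(self, action: #selector(imageTapped), for: .touchUpInside)
        view.addSubview(imageButton)
    }

    @objc private func imageTapped() {
        // Respond to the tap here.
    }
}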