What methods to call for handling events for UIObject? - ios

Is there a pattern Apple follows for which methods to call to handle an event for a UI object?
What I mean is: to set an action for a UIButton it's [button addTarget:action:forControlEvents:],
while for a UIImageView I have to call [imageView addGestureRecognizer:].
I can never remember which methods to call. Is there an easy way to remember?

The "pattern" is that it is normal behavior for a button to respond to control events while it is not very common for an image view to respond to user interaction. Normally if you want to have a tappable image, you would use a UIButton and set an image on it. Apple decided to write the control events into UIButtons but not normal UIViews or UIImageViews.
So basically, you can use a control event if it is a button, otherwise you must use a different method. For normal views, gesture recognizers are a good option.
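To make the contrast concrete, here is a minimal current-Swift sketch of both approaches; the class name, image asset, and handler names are illustrative, and layout code is omitted:

import UIKit

class ViewController: UIViewController {
    let button = UIButton(type: .system)
    let imageView = UIImageView(image: UIImage(named: "photo"))   // "photo" is a placeholder asset

    override func viewDidLoad() {
        super.viewDidLoad()

        // UIButton is a UIControl, so it supports target-action for control events.
        button.addTarget(self, action: #selector(buttonTapped), for: .touchUpInside)

        // UIImageView is a plain UIView: no control events, so attach a gesture recognizer instead.
        imageView.isUserInteractionEnabled = true   // image views ignore touches by default
        imageView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(imageTapped)))
    }

    @objc func buttonTapped() { print("button tapped") }
    @objc func imageTapped() { print("image tapped") }
}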

Related

Difference between UITapGestureRecognizer and addTarget

Can anyone clarify the difference between these 2 ways of triggering a function when tapping a view?
1)
myView.addTarget(self, action: #selector(myFunctionToTrigger(_:)), forControlEvents: UIControlEvents.TouchUpInside)
2)
let tapGesture = UITapGestureRecognizer(target: self, action:
#selector(myFunctionToTrigger(_:)))
myView.addGestureRecognizer(tapGesture)
These are two completely different ways of implementing user event handling in iOS apps.
1) addTarget() is a method on the UIControl class and is part of the target-action mechanism. There is more about that in the documentation.
Note that you can't call addTarget() on just any UIView, only on UIControl subclasses.
2) UIGestureRecognizer subclasses are simply a mechanism to detect and distinguish user gestures on a specific view.
The main difference between them is that gesture recognizers can detect more complex events like a swipe, pinch, or zoom, but addTarget is a more efficient way to detect user activity, and it provides the same interface for all UIControls such as UISegmentedControl, UISlider, etc.
Hope that helps.
These two methods work at two different levels of abstraction:
addTarget:action:forControlEvents: is the lower level that provides isolated events. Several of these events must be combined and interpreted to detect more complex gestures like swiping or pinching.
addGestureRecognizer: works at a higher level, closer to what an app usually needs. It adds a specific gesture recognizer that listens to the low-level events, detects gestures, and delivers specific information about the gesture.
In the case of a tap, the difference is minor. But when it comes to swiping, pinching, and combinations of tapping, swiping, and pinching (e.g. in an image viewer or a map app), one or more gesture recognizers are the way to go.
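For example, a pinch recognizer hands you an already-interpreted scale factor instead of raw touch events. A rough sketch, assuming a view controller with an outlet named zoomableView (the names are illustrative):

import UIKit

class ZoomingViewController: UIViewController {
    @IBOutlet var zoomableView: UIView!   // hypothetical outlet

    override func viewDidLoad() {
        super.viewDidLoad()
        zoomableView.addGestureRecognizer(
            UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
    }

    @objc func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
        // The recognizer has already combined the low-level touches into a scale factor.
        guard let view = recognizer.view else { return }
        view.transform = view.transform.scaledBy(x: recognizer.scale, y: recognizer.scale)
        recognizer.scale = 1.0   // reset so the next callback reports an incremental change
    }
}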
Here is the difference:
With gesture recognizers you can add handlers for specific gestures, such as taps (UITapGestureRecognizer), pans (UIPanGestureRecognizer), and many other gestures.
With addTarget() on a UIControl you add a target for specific control events, such as UIControlEvents.TouchUpInside, and many other events.
Pavel's answer is correct: you can only add a target to a UIControl, which is a subclass of UIView. A UIGestureRecognizer can be added to any UIView.
Codo's answer that a target is lower level than a gesture is wrong; gestures are the lower-level touch support. A UIControl uses gestures to make addTarget:action:forControlEvents: work.
There are several benefits to addTarget:
It is a built-in mechanism; you don't need to create another object to do the same thing.
You can choose which event to react to: "touchUpInside" or "touchDown" (or "valueChanged" for sliders).
You can configure a different appearance for each button state (e.g. title text, title color, content image, background image, highlight tint), and the button displays the appropriate appearance automatically.
Besides the benefits above, I think it's more of a coding convention for UIControl elements.
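As a rough sketch of that convention in current Swift (the button title, image asset, and selector are illustrative):

import UIKit

class FormViewController: UIViewController {
    let submitButton = UIButton(type: .custom)

    override func viewDidLoad() {
        super.viewDidLoad()
        // Per-state appearance is built in; no extra highlighting code is needed.
        submitButton.setTitle("Submit", for: .normal)
        submitButton.setTitleColor(.white, for: .normal)
        submitButton.setTitleColor(.lightGray, for: .highlighted)
        submitButton.setImage(UIImage(named: "arrow"), for: .normal)   // placeholder asset

        // Pick exactly which control event to react to.
        submitButton.addTarget(self, action: #selector(submit), for: .touchUpInside)
    }

    @objc func submit() { print("submitted") }
}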

UIImageView: add tap event from Interface Builder

I know how to add a gesture recognizer programmatically. I was wondering why, when I Ctrl-drag from a UIImageView in Interface Builder into the code, I'm only given the possibility to link Outlets and Outlet Collections but not Actions. I have enabled user interaction on the image view, so I would like to know if it's possible to access its actions in a "drag to add" way.
Setting user interaction enabled only allows the image view to work with gesture recognizers. Image views have no IBActions because they don't handle touches on their own. You have to drag a gesture recognizer onto the image view to get it to work.
From the UIImageView class ref:
Image views ignore user events by default. Normally, you use image views only to present visual content in your interface. If you want an image view to handle user interactions as well, change the value of its userInteractionEnabled property to true. After doing that, you can attach gesture recognizers or use any other event handling techniques to respond to touch events or other user-initiated events.
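In practice, the "drag to add" workflow is: drop a Tap Gesture Recognizer from the object library onto the image view, then Ctrl-drag from the recognizer (not the image view) into your code to create the action. A small sketch of the resulting code, with illustrative names:

import UIKit

class GalleryViewController: UIViewController {
    @IBOutlet var photoView: UIImageView!   // "User Interaction Enabled" checked in IB

    // Created by Ctrl-dragging from the tap gesture recognizer into the source file.
    @IBAction func photoTapped(_ sender: UITapGestureRecognizer) {
        print("photo tapped")
    }
}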

How to react to a "touch up inside" event in UIAccessibilityElement subclass?

I have a map with drawn items. The map handles touch events and determines the touched items as if they were buttons.
I made the map a container and implemented the methods to return accessibleElements. For each item I create one instance of a UIAccessibilityElement subclass.
It seems the UIAccessibilityAction protocol has no callback for a "tap" or "button pressed" event.
How would I mimic the effect of a UIButton with UIAccessibilityElement then?
Assuming you are running under iOS 5 or iOS 6, consider the following workaround. It is not optimal, but will work until there is a better way:
Create a dummy view that is not, itself, an accessibility element.
On this view, implement your "tap" handling in -touchesEnded:withEvent:.
Set your accessibility element's accessibilityActivationPoint to a value that falls within this dummy view.
Your dummy view will receive touch events when the corresponding accessibility element is activated. Make sure to ignore touch handling in your dummy view if VoiceOver and other assistive technologies are not running.
EDIT: Another less hacky approach is to implement a tap gesture recognizer on the view you're concerned with, convert the coordinate from -touchesEnded:withEvent: to screen coordinates, and manually hit test the point against the frames of your accessibility elements.
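A rough current-Swift sketch of that second approach; the container class and the accessibleElements storage are hypothetical, and UIAccessibility.convertToScreenCoordinates is used because accessibilityFrame is expressed in screen coordinates:

import UIKit

class MapContainerView: UIView {
    var accessibleElements: [UIAccessibilityElement] = []   // hypothetical storage of the drawn items

    override func awakeFromNib() {
        super.awakeFromNib()
        addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        // accessibilityFrame is in screen coordinates, so convert the tapped point to match.
        let pointInView = recognizer.location(in: self)
        let tapOnScreen = UIAccessibility.convertToScreenCoordinates(
            CGRect(origin: pointInView, size: .zero), in: self).origin

        if let hitElement = accessibleElements.first(where: { $0.accessibilityFrame.contains(tapOnScreen) }) {
            // Treat this element as the "pressed button".
            print("activated \(hitElement.accessibilityLabel ?? "element")")
        }
    }
}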

Using a UIView as a button

I'm trying to use a UIView I've created in Storyboard as a button. I assumed it would be possible to use a UIButton, setting the type to custom. However I was unable to add subviews to a custom UIButton in Storyboard.
As such, I've just spent the last hour reinventing the wheel by making my own custom gesture recognizers to reimplement button functionality.
Surely this isn't the best way of doing it though, so my question - to more experienced iOS developers than myself - is what is the best way to make a custom button?
To be clear, it needs to:
Use the UIView I've created as its hittable area.
Be able to show a different state depending on whether it is currently highlighted or not (i.e. touch down).
Perform some action when actually tapped.
Thank you for your help.
You can use a UIButton, set the type to custom, and then programmatically add your subviews...
Change your UIView into a UIControl in the storyboard. Then use [controlViewName addTarget:self action:@selector(clickHandlerMethod) forControlEvents:UIControlEventTouchDown];, where clickHandlerMethod is a placeholder for the name of your handler. Use the same call again, swapping UIControlEventTouchDown for UIControlEventTouchUpInside and UIControlEventTouchDragExit, to call a method when the user finishes their tap or drags their finger out of the view, respectively. I used this for something I'm working on now and it works great.
On touch down you will want to: highlight all subviews.
On touch up inside you will want to: unhighlight all subviews and perform a segue or do whatever the button is supposed to do.
On touch drag exit you will want to: unhighlight all subviews.
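A rough sketch of that wiring in current Swift (the outlet, segue identifier, and highlight style are all illustrative):

import UIKit

class CardViewController: UIViewController {
    @IBOutlet var cardControl: UIControl!   // the former UIView, re-classed as UIControl in the storyboard

    override func viewDidLoad() {
        super.viewDidLoad()
        cardControl.addTarget(self, action: #selector(pressDown), for: .touchDown)
        cardControl.addTarget(self, action: #selector(pressUpInside), for: .touchUpInside)
        cardControl.addTarget(self, action: #selector(pressCancelled), for: [.touchDragExit, .touchCancel])
    }

    @objc func pressDown() {
        setHighlighted(true)                 // highlight all subviews
    }

    @objc func pressUpInside() {
        setHighlighted(false)
        performSegue(withIdentifier: "showDetail", sender: self)   // "showDetail" is a placeholder segue
    }

    @objc func pressCancelled() {
        setHighlighted(false)                // finger dragged out, or the touch was cancelled
    }

    private func setHighlighted(_ highlighted: Bool) {
        cardControl.subviews.forEach { $0.alpha = highlighted ? 0.5 : 1.0 }
    }
}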
See the second answer by LiCheng in this similar SO post:
Subclass UIControl instead. You can add subviews to it and it can respond to actions.
Why are you implementing your own gesture recognizer? I recommend using the UIView, so you can add subviews in Interface Builder, and attaching a UITapGestureRecognizer. You can even do this graphically, since you don't care about iOS 4 support.

Passing touch events to appropriate sibling UIViews

I'm trying to handle touch events with touchesBegan in an overlay to a parent UIView, but also allow the touch input to pass through to sibling UIViews underneath. I expected there would be some straightforward way to process a touch event and then say "now send it to the next responder as if this one didn't exist", but all I can find is the nextResponder method, which appears to give back the parent of my overlay view. That parent then doesn't really pass the touch on to the next sibling of the overlay view, so I'm stuck, uncertain how to do what seems like a simple task, one that is usually accomplished with a touch callback that returns True or False to indicate whether to keep processing down the widget hierarchy.
Am I missing something obvious?
Late answer, but I think you would be better off overriding hitTest:withEvent: instead of touchesBegan. It seems to me that touchesBegan is a pretty "high-level" method that is there to just do a simple thing, so at that level you cannot alter whether the event is propagated further. The right place to do that is hitTest:withEvent:.
Also have a look at this S.O. answer for more details about this point.
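A minimal sketch of that idea: returning nil from hitTest(_:with:) makes the superview keep hit-testing its other (sibling) subviews, so the touch lands on whatever sits underneath, as if the overlay were not there. The class name and the condition are illustrative:

import UIKit

class PassThroughOverlayView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        // Decide here whether the overlay should claim the touch.
        // (Hit-testing can run more than once per event, so keep any side effects light.)
        let wantsTouch = false   // replace with your own condition, e.g. a region check
        return wantsTouch ? super.hitTest(point, with: event) : nil
    }
}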
I understand the desired behavior you're looking for, Joey - I haven't found anything in the API that supports this automatic message-up-the-chain behavior with sibling views.
What I originally wrote below was with respect to just informing a parent UIView about a touch. This still applies, but I believe you need to take it a step further and have the parent UIView use the hit-testing technique that Sergio described on each of its subviews that are siblings to the overlay, and have the parent UIView manually invoke a "do something" method on each of its subviews that pass the hit test. Each of those sibling views can return a BOOL value indicating whether to stop informing other siblings or continue the chain.
If you find yourself using this pattern a lot, consider adding a category method on UIView that encapsulates the hit testing and asking views to perform a selector.
My Original Answer
With a little bit of manual work, you can wire this together yourself. I've had to do this, and it worked for me, because I had an oft-repeated use case (an overlay view on a button), where it made sense to create some custom classes. If your situation is similar, one of these techniques will suffice.
Option 1:
If the overlay doesn't need to do anything but look pretty, have it opt out of touch handling completely with userInteractionEnabled = NO. This makes the touch event go to its parent UIView (the one it is an overlay to).
Option 2:
Have the overlay absorb the touch event (as it would by default), and then invoke a method on the parent UIView indicating that a touch or a certain gesture was recognized, along with what it was. This way, the UIView behind the overlay still gets to act on the touch recognition, even if someone else intercepted it.
Option 2 is more of a fit for simple UIControlEvent types, like UIControlEventTouchDown and UIControlEventTouchUpInside. In my case (a custom UIButton subclass with a custom overlay view on top of it), I wire the touch down and touch up events on the button to two separate methods. These fire if a touch down or touch up inside event occurs on the button itself. But they are also hooks I can invoke from the overlay view if I need to simulate that a button press occurred.
Depending on your needs, you could have a known protocol between the overlay and its parent UIView, or just have the overlay test the UIView informally, with a respondsToSelector: check before invoking performSelector: on it with the custom method you want called, i.e. the one that would have fired automatically if the UIView weren't covered by an overlay.
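A rough sketch of that informal check in current Swift; the overlayWasTapped hook is hypothetical and stands in for whatever method the covered view actually exposes:

import UIKit

class ButtonOverlayView: UIView {
    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        // Informal protocol: only forward if the covered view actually implements the hook.
        let hook = Selector(("overlayWasTapped"))   // hypothetical selector on the parent/covered view
        if let host = superview, host.responds(to: hook) {
            _ = host.perform(hook)
        }
    }
}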

Resources