I often use UIImageView instead of UIButton when I want an action to be executed the moment the user taps, because if you use UIButton, the action isn't executed until you release your finger, whereas with UIImageView it is executed the moment you touch it. But I'm wondering whether it's better to use UIButton, because with UIImageView I have to set a tag on each image view, use touchesBegan, and check which UIImageView was tapped. I'm not sure whether this approach is good or not.
Do you use UIButton when you want to perform an action the moment the object is touched? What do you think about this approach?
(I'm sorry, I'm a beginner iOS engineer, so this might be a silly question.)
Because if you use UIButton, it isn't executed until you release your finger.
This is true if you have a function that is triggered by the touchUpInside event. If you use the touchDown event instead, the function is triggered as soon as the user touches the button. See all the events here: UIButton events. What's the difference?
It can easily be done using UIButton; you just need to use a different control event, for instance like so:
yourButton.addTarget(self, action: #selector(yourSelector), for: .touchDown)
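For context, a minimal sketch of wiring this up in code; the button name and the buttonTouched handler are hypothetical, not from the original post:

import UIKit

class ViewController: UIViewController {
    let yourButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        yourButton.setTitle("Tap me", for: .normal)
        view.addSubview(yourButton)
        // .touchDown fires as soon as the finger lands on the button,
        // unlike .touchUpInside, which waits for the finger to lift.
        yourButton.addTarget(self, action: #selector(buttonTouched), for: .touchDown)
    }

    @objc func buttonTouched() {
        print("Touched down")
    }
}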
Try it from a .xib or .storyboard file. I think it will help you.
Related
Can anyone clarify the difference between these 2 ways of triggering a function when tapping a view?
1)
myView.addTarget(self, action: #selector(myFunctionToTrigger(_:)), for: .touchUpInside)
2)
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(myFunctionToTrigger(_:)))
myView.addGestureRecognizer(tapGesture)
These are two completely different ways of implementing user event handling in iOS apps.
1) addTarget() is a method on the UIControl class, which is part of the target-action mechanism. More about that in the documentation.
And you can't call addTarget on just any UIView, only on UIControl subclasses.
2) UIGestureRecognizer subclasses are simply a mechanism for detecting and distinguishing user gestures on a specific view.
The main difference between them is that gesture recognizers can detect more complex events like a swipe, pinch, or zoom, but addTarget is a more efficient way to detect user activity, and it provides the same level of interface for all UIControls such as UISegmentedControl, UISlider, etc.
Hope that helps you.
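To make the contrast concrete, here is a minimal sketch of both approaches side by side; the view and handler names are hypothetical:

import UIKit

class ViewController: UIViewController {
    let button = UIButton(type: .system)
    let plainView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        // 1) Target-action: only available on UIControl subclasses such as UIButton.
        button.addTarget(self, action: #selector(handleTap), for: .touchUpInside)

        // 2) Gesture recognizer: can be attached to any UIView.
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        plainView.addGestureRecognizer(tap)
    }

    @objc func handleTap() {
        print("Tapped")
    }
}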
These two methods work at two different levels of abstraction:
addTarget:action:forControlEvents is the lower level that provides isolated events. Several of these events must be combined and interpreted to detect more complex gestures like swiping or pinching.
addGestureRecognizer works at a higher level, closer to what an app usually needs. It adds a specific gesture recognizer that listens to the low-level events, detects gestures, and delivers specific information about the gesture.
In the case of a tap, the difference is minor. But when it comes to swiping, pinching, and combinations of tapping, swiping, and pinching (e.g. in an image viewer or a map app), one or more gesture recognizers are the way to go.
Here is the difference:
With gesture recognizers you can handle specific gestures, such as UITapGestureRecognizer, UIPanGestureRecognizer, and many others.
Whereas with addTarget() on a UIControl you can add a target for specific control events, such as .touchUpInside, and many other events.
Pavel's answer is correct: you can only add a target to a UIControl, which is a subclass of UIView. A UIGestureRecognizer can be added to any UIView.
Codo's answer that a target is lower level than a gesture is wrong; gestures are the lower-level touch support. A UIControl uses gestures to make addTarget:action:forControlEvents work.
There are several benefits to addTarget:
It is a built-in function. You don't need to initialize another object to do the same thing.
You can set when to react to the action: "touchUpInside" or "touchDown" (or "valueChanged" for sliders).
You can set a different appearance for each state of the button (e.g. title text, title color, content image, background image, highlight tint), and the button switches between those states automatically.
Besides the benefits above, I think it's more of a coding convention for UIControl elements.
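As an illustration of those per-state appearances plus target-action, here is a hedged sketch; the titles, colors, and the downloadTapped handler are arbitrary placeholders:

import UIKit

class DownloadViewController: UIViewController {
    let button = UIButton(type: .custom)

    override func viewDidLoad() {
        super.viewDidLoad()
        // Appearance is configured per state; UIButton swaps these automatically.
        button.setTitle("Download", for: .normal)
        button.setTitle("Downloading…", for: .highlighted)
        button.setTitleColor(.systemBlue, for: .normal)
        button.setTitleColor(.systemGray, for: .highlighted)
        view.addSubview(button)
        // .touchDown fires immediately; .touchUpInside waits for release.
        button.addTarget(self, action: #selector(downloadTapped), for: .touchDown)
    }

    @objc func downloadTapped() {
        print("Download started")
    }
}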
I'm trying to recreate the function of the clear button in a UITextField, but with a UIButton, and as such I have a UIButton as a subview of another UIButton. I read somewhere that I need to have the superview handle the touch events of the subview.
The post that hinted at that was outdated and in Objective-C, so I'm looking for a modern Swift version.
Placing a button on top of another button is a bad implementation, whatever the requirements. You could instead use a combination of enabling and disabling buttons according to the situation.
Hope you find the right way to solve this problem.
I found that the touchDown event is kind of slow, in that it requires a major, fairly long touch and does not respond to a light tap. Why is that?
Whereas touchesBegan responds just when I need it to, i.e. it responds even to very light, quick touches. But that's not an event; it's a method that can be overridden.
The problem is, touchesBegan apparently requires me to either 1) subclass a label (I need to respond to touching a label), or 2) analyze the event to figure out whether it came from the right label. I'm wondering whether that's a code smell and whether there should be an event for a simple touch.
Try to add a UITapGestureRecognizer to your label.
First of all, allow the label to handle user interaction:
label.isUserInteractionEnabled = true
Assign the tap gesture to the label. Then, in the handler method, you can switch over the state property of the recognizer. If it is .began, you got the event that you need.
The cool thing about this approach is that you can use one handler for all of your labels. Inside the handler, you can get the touched label like this:
let label = tapRecognizer.view as! UILabel
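Putting the pieces together, a minimal sketch of the whole approach; the label text and the labelTapped handler name are hypothetical:

import UIKit

class LabelsViewController: UIViewController {
    let label = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        label.text = "Tap me"
        label.isUserInteractionEnabled = true  // labels ignore touches by default
        view.addSubview(label)

        let tap = UITapGestureRecognizer(target: self, action: #selector(labelTapped(_:)))
        label.addGestureRecognizer(tap)
    }

    @objc func labelTapped(_ recognizer: UITapGestureRecognizer) {
        // The same handler can serve many labels; the recognizer's view
        // tells you which one was touched.
        guard let label = recognizer.view as? UILabel else { return }
        print("Tapped:", label.text ?? "")
    }
}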
"Code smell"? No, it's a user interface smell. A user interface stink.
If you make a button in your user interface behave differently from buttons in every other application, people will hate your app. Do what people are used to.
I'm trying to use a UIView I've created in the storyboard as a button. I assumed it would be possible to use a UIButton with the type set to custom; however, I was unable to add subviews to a custom UIButton in the storyboard.
As such, I've just spent the last hour reinventing the wheel by making my own custom gesture recognizers to reimplement button functionality.
Surely this isn't the best way of doing it, though, so my question, to more experienced iOS developers than myself, is: what is the best way to make a custom button?
To be clear, it needs to:
Use the UIView I've created as its hittable area.
Be able to show a different state depending on whether it is currently highlighted or not (i.e. touch down).
Perform some action when actually tapped.
Thank you for your help.
You can use a UIButton, set the type to custom, and then programmatically add your subviews...
Change your UIView into a UIControl in the storyboard. Then use the method [controlViewName addTarget:self action:@selector(clickHandlerMethod) forControlEvents:UIControlEventTouchDown];, where clickHandlerMethod is a placeholder for the name of your handler. Use the same method with UIControlEventTouchUpInside and UIControlEventTouchDragExit to call a method when the user finishes their click or drags their finger out of the view, respectively. I used this for something I'm working on now and it works great.
On touch down you will want to highlight all subviews.
On touch up inside you will want to unhighlight all subviews and perform the segue or do whatever the button is supposed to do.
On touch drag exit you will want to unhighlight all subviews.
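A hedged Swift sketch of that wiring, assuming the storyboard view's class was changed to UIControl and exposed as an outlet; the outlet and handler names are hypothetical:

import UIKit

class CustomButtonViewController: UIViewController {
    @IBOutlet var controlView: UIControl!

    override func viewDidLoad() {
        super.viewDidLoad()
        controlView.addTarget(self, action: #selector(touchDown), for: .touchDown)
        controlView.addTarget(self, action: #selector(touchUpInside), for: .touchUpInside)
        controlView.addTarget(self, action: #selector(touchDragExit), for: .touchDragExit)
    }

    @objc func touchDown() {
        setHighlighted(true)    // finger landed: highlight all subviews
    }

    @objc func touchUpInside() {
        setHighlighted(false)   // finger lifted inside: unhighlight and act
        print("Button action")
    }

    @objc func touchDragExit() {
        setHighlighted(false)   // finger dragged out: just unhighlight
    }

    private func setHighlighted(_ highlighted: Bool) {
        controlView.subviews.forEach { $0.alpha = highlighted ? 0.5 : 1.0 }
    }
}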
See the second answer by LiCheng in this similar SO post.
Subclass UIControl instead. You can add subviews to it, and it can respond to actions.
Why are you implementing your own gesture recognizer? I recommend using the UIView, so you can add subviews in Interface Builder, and adding a UITapGestureRecognizer. You can even do this graphically, since you don't care about iOS 4 support.
In MonoTouch, how do I get a UIImage or UIImageView to fire off a delegate or something when clicked?
Exactly as Dimitris pointed out, you'll want to subclass UIImageView, and also ensure that the userInteractionEnabled flag is set to true.
I should point out that in most cases where people intend to have a UIImageView respond to interaction, the problem can just as easily be solved by creating a UIButton instance with an image instead.
You have to subclass it (the UIImageView) and override one of the TouchesBegan, TouchesMoved, or TouchesEnded methods, depending on what you want to do.
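In UIKit terms (the MonoTouch/C# API mirrors this one-to-one), a minimal sketch of that subclass might look like the following; the TappableImageView name and the onTap callback are hypothetical:

import UIKit

class TappableImageView: UIImageView {
    var onTap: (() -> Void)?

    override init(image: UIImage?) {
        super.init(image: image)
        isUserInteractionEnabled = true  // image views ignore touches by default
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isUserInteractionEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        onTap?()  // fires the moment the finger lands on the image
    }
}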