I have two views, A and B; B is a subview of A. I want to monitor double-tap actions in A, and when one occurs, move B to the tap position.
Now, I want to keep that code inside B, to avoid adding code to A.
So I added a double-tap gesture recognizer in B, and I have overridden -pointInside:withEvent: in B so it can react to double-tap actions outside B's frame.
However, I still want other gestures (including single taps) to work on A, so I came up with two different approaches:
1. Recognize the tap count inside pointInside:withEvent:, returning NO for single taps and YES for double taps. However, there seems to be no way to do this.
2. Always return YES from pointInside:withEvent:, and capture both single-tap and double-tap gestures in B, forwarding single taps to A to handle. However, I haven't found a way to do this either.
Can anyone help me with this? Or tell me if I am looking in the wrong direction?
That approach could work, but it would be very messy, simply because pointInside:withEvent: is a very primitive call.
When you double-tap on a view, you'll receive multiple hitTest:withEvent: calls (which, in turn, call pointInside:withEvent:), meaning you'd have to do some hard work with time offsets to measure whether two taps occurred one after the other.
How many calls does it get? As many as possible: every millisecond your finger rests on the screen, this method gets carpet-bombed with calls. It's simply not wise to overload it for what you intend to do.
Simply put, gestures recognizers are very convenient objects that encapsulate all the complexities of having to deal with real time UITouch by yourself.
As a solution that keeps the code relatively clean, you could add the UITapGestureRecognizer to A and point its target and selector at B. You can even do this in Interface Builder, or in code:
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:B action:@selector(handleGesture:)];
tapGesture.numberOfTapsRequired = 2;
[A addGestureRecognizer:tapGesture];
(A and B being your views)
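For completeness, a minimal sketch of what the handler on B might look like in Swift (the selector name handleGesture: comes from the snippet above; the rest is an assumption about what "move B to the tap position" means):

```swift
import UIKit

class BView: UIView {
    // Called by the double-tap recognizer installed on A (the superview).
    @objc func handleGesture(_ recognizer: UITapGestureRecognizer) {
        // recognizer.view is A, the view the recognizer is attached to.
        guard let a = recognizer.view else { return }
        // Move B's center to the double-tap point, in A's coordinate space.
        self.center = recognizer.location(in: a)
    }
}
```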
Related
I have found nothing in the documentation on how to specify the number of touches for UIPinchGestureRecognizer or UIRotationGestureRecognizer.
All I have found anywhere is that it only works with two fingers, but in my experiments it also works with 3 or even more fingers.
Furthermore, in the action method, the numberOfTouches property never returns the actual number of fingers.
I want to limit it only for two fingers because it gets all confused with other 3-finger recognizers.
Could you please suggest a good way to do that? Thanks.
According to the docs UIPinchGestureRecognizer handles
[...] pinching gestures involving two touches [...]
Apparently it only considers two touches but allows additional touches to happen concurrently.
To answer your question: you can try to get the actual number of touches by other means and prevent the pinch action when that count is larger than 2. One way is to add more gesture recognizers which handle gestures on the same view (e.g. multiple UITapGestureRecognizers, one for each possible number of touches); another is to override touchesBegan and touchesMoved of the view your gesture recognizer is installed on and use the count of the provided touches set.
(I'd go with the second approach first because managing multiple gesture recognizers in parallel can get problematic.)
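A sketch of that second approach, assuming the pinch recognizer is installed on a subclass like the one below (the class name is illustrative). The view keeps its own count of live touches and vetoes the pinch when more than two fingers are down:

```swift
import UIKit

class PinchLimitingView: UIView {
    private var activeTouches = Set<UITouch>()

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        activeTouches.formUnion(touches)   // track every finger that lands
        super.touchesBegan(touches, with: event)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        activeTouches.subtract(touches)
        super.touchesEnded(touches, with: event)
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        activeTouches.subtract(touches)
        super.touchesCancelled(touches, with: event)
    }

    // Veto the pinch while more than two fingers are on the view.
    override func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
        if gestureRecognizer is UIPinchGestureRecognizer, activeTouches.count > 2 {
            return false
        }
        return super.gestureRecognizerShouldBegin(gestureRecognizer)
    }
}
```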
Add a delegate to the pinch gesture recogniser you're concerned about.
Implement gestureRecognizer(_:shouldRecognizeSimultaneouslyWith:) and return false if you want the pinch gesture to be ignored while another recognizer is also in progress.
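A minimal sketch of that delegate in Swift (the class name and the wiring comment are illustrative):

```swift
import UIKit

class PinchDelegate: NSObject, UIGestureRecognizerDelegate {
    // Returning false makes UIKit ignore the pinch whenever another
    // recognizer (e.g. a 3-finger gesture) is already in progress.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith other: UIGestureRecognizer) -> Bool {
        return false
    }
}

// Usage (keep a strong reference to the delegate; the recognizer's
// delegate property is weak):
// let pinchDelegate = PinchDelegate()
// pinchRecognizer.delegate = pinchDelegate
```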
I found that the touchDown event is kind of slow, in that it requires a major, fairly long press and does not respond to a light tap. Why is that?
touchesBegan, on the other hand, responds just when I need it to, i.e. even to very light, quick touches. But that's not an event; it's a method that can be overridden.
The problem is that touchesBegan apparently requires me to either 1) subclass the label (I need to respond to touches on a label), or 2) analyze the event to figure out whether it came from the right label. I am wondering whether this is a code smell and whether there should be an event for a simple touch.
Try adding a UITapGestureRecognizer to your label.
First of all, allow the label to handle user interaction:
label.userInteractionEnabled = true
You can then assign the tap gesture to the label. In the handler method, check the recognizer's state property; a tap recognizer fires once, in the .Ended state (discrete recognizers never report .Began), and that's the event you need.
The nice thing about this approach is that you can use one handler for all of your labels. Inside the handler, you can get the touched label like this:
let label = tapRecognizer.view as! UILabel
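Putting the pieces together, a sketch in current Swift (the view controller and label names are illustrative):

```swift
import UIKit

class ViewController: UIViewController {
    let label = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        label.isUserInteractionEnabled = true   // labels ignore touches by default
        let tap = UITapGestureRecognizer(target: self,
                                         action: #selector(labelTapped(_:)))
        label.addGestureRecognizer(tap)
    }

    // One handler can serve many labels; the recognizer's view
    // property tells you which label was tapped.
    @objc func labelTapped(_ recognizer: UITapGestureRecognizer) {
        guard recognizer.state == .ended,       // a tap fires in the .ended state
              let label = recognizer.view as? UILabel else { return }
        print("tapped \(label.text ?? "")")
    }
}
```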
"Code smell"? No, it's a user interface smell. A user interface stink.
If you make a button in your user interface behave differently from buttons in every other application, people will hate your app. Do what people are used to.
I'm rather confident that [editable] UITextViews become first responder when a long press or tap gesture occurs within the scroll view. I want to identify where in the view this touch occurred. Digging through the documentation and source code didn't yield much; I might be going about this wrong. My concern is a race condition if I just add my own tap recognizer (how can I be sure it is called before the textView's delegate methods?).
For practical clarification: I want to call one of two similar functions from a delegate method (editingDidBegin), depending on whether the touch was in the left or right half of the text view.
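One hedged sketch of the tap-recognizer route: record the tap point in your own recognizer (letting the text view's recognizers run simultaneously so editing still starts), then read it in the delegate callback. Note the ordering between the tap handler and textViewDidBeginEditing is not guaranteed by the documentation, which is exactly the race the question worries about, so treat this as a starting point rather than a proven solution:

```swift
import UIKit

class TextViewController: UIViewController, UITextViewDelegate, UIGestureRecognizerDelegate {
    let textView = UITextView()
    private var lastTapPoint: CGPoint?

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.delegate = self
        let tap = UITapGestureRecognizer(target: self, action: #selector(recordTap(_:)))
        tap.delegate = self   // don't steal the touch from the text view
        textView.addGestureRecognizer(tap)
    }

    // Let the text view's own recognizers keep working alongside ours.
    func gestureRecognizer(_ g: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith other: UIGestureRecognizer) -> Bool {
        return true
    }

    @objc func recordTap(_ recognizer: UITapGestureRecognizer) {
        lastTapPoint = recognizer.location(in: textView)
    }

    func textViewDidBeginEditing(_ textView: UITextView) {
        guard let point = lastTapPoint else { return }
        if point.x < textView.bounds.midX {
            // left-half behaviour goes here
        } else {
            // right-half behaviour goes here
        }
    }
}
```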
I'm using a UIGestureRecognizer to recognize a single tap, a double tap, and a long press.
What I would like to do is also recognize a long press followed by a swipe to either the left or right.
Would this be possible given that I'm already consuming the long press? I'm confused on this one and would appreciate pointers on how to do it.
Thanks
Just tried this out myself, and it seems the UILongPressGestureRecognizer transitions to its end state as soon as the UISwipeGestureRecognizer begins. Just make sure shouldRecognizeSimultaneouslyWithGestureRecognizer: returns YES for this gesture combination.
You'd need to use two gesture recognizers, track when the long press reports that it has ended, and then act on the swipe/pan gesture that follows it.
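A sketch of that two-recognizer setup in Swift. Since the long press ends as soon as the swipe begins, the code records when the press ended and accepts the swipe only if it arrives shortly afterward (the 0.5-second window is an assumption you'd tune):

```swift
import UIKit

class PressThenSwipeView: UIView, UIGestureRecognizerDelegate {
    private var pressEndedAt: Date?

    override init(frame: CGRect) {
        super.init(frame: frame)
        let press = UILongPressGestureRecognizer(target: self, action: #selector(pressed(_:)))
        let swipe = UISwipeGestureRecognizer(target: self, action: #selector(swiped(_:)))
        swipe.direction = [.left, .right]
        press.delegate = self
        swipe.delegate = self
        addGestureRecognizer(press)
        addGestureRecognizer(swipe)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Allow the swipe to be recognized alongside the long press.
    func gestureRecognizer(_ g: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith other: UIGestureRecognizer) -> Bool {
        return true
    }

    @objc func pressed(_ recognizer: UILongPressGestureRecognizer) {
        if recognizer.state == .ended {
            pressEndedAt = Date()   // the press ends when the swipe starts
        }
    }

    @objc func swiped(_ recognizer: UISwipeGestureRecognizer) {
        if let t = pressEndedAt, Date().timeIntervalSince(t) < 0.5 {
            // press-then-swipe detected
        }
    }
}
```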
As shown in the diagram below, my app has a few UIViews, B, C and D, side by side, and all contained in an enveloping UIView A:
I have a UIPinchGestureRecognizer in each of B, C, and D. What I'd also like to do is recognize a different gesture over the entire of area A (without hindering the other gesture recognizers from working).
What's the best strategy for this? I'm targeting iOS 5+, no backwards compatibility needed.
It's also worth noting that the gesture recognizer for A will probably have to be a custom gesture recognizer, since I want to detect a pinch but with > 2 fingers involved.
Thought:
If installing a gesture recognizer on A doesn't work well, it might be possible to do it the old way using touchesBegan etc. As the UIResponder docs note, a subclass of UIView can simply call [super touchesBegan:touches withEvent:event] to pass the touch on up the responder chain if it isn't interested in it.
Add the gesture recognizer to A as you would normally.
Now you need to start by hit-testing what was touched.
First, test the z-order of the views. For example, if you touch B, your function should loop over and hit-test all the views affected, in this case A and B.
After your function finds that both A and B pass the hit test (B over A), it should compare their z-order. For example, B's z-index is 2 and A's is 1; B is what the user intended to touch, because its z-index is higher, meaning it is on top.
Once you have identified the target (B), temporarily disable A's gesture recognizer before running B's, to eliminate any conflict between the overlapping recognizers. After B's touch completes/ends, re-enable A's recognizer.
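The enable/disable part of that idea can be sketched like this in Swift (the class and property names are assumptions; `aRecognizer` stands for the custom more-than-two-finger recognizer on A, and the handler is the one attached to the pinch recognizers on B, C, and D):

```swift
import UIKit

class PinchCoordinator {
    let aRecognizer: UIGestureRecognizer   // custom recognizer installed on A

    init(aRecognizer: UIGestureRecognizer) {
        self.aRecognizer = aRecognizer
    }

    // Handler for the pinch recognizers installed on B, C, and D.
    @objc func handlePinch(_ pinch: UIPinchGestureRecognizer) {
        switch pinch.state {
        case .began:
            aRecognizer.isEnabled = false   // suppress A's recognizer during the pinch
        case .ended, .cancelled, .failed:
            aRecognizer.isEnabled = true    // restore it once the pinch completes
        default:
            break
        }
        // ... actual pinch handling for the subview goes here ...
    }
}
```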
It turns out just adding gesture recognizers in the straightforward obvious way works, at least for the gestures I want to recognize. I imagined it would be more complicated.