Detecting gestures on overlapping screen areas in iOS 5 - iPad

My app has a few UIViews, B, C and D, side by side, all contained in an enveloping UIView A.
I have a UIPinchGestureRecognizer in each of B, C, and D. What I'd also like to do is recognize a different gesture over the entire area of A, without preventing the other gesture recognizers from working.
What's the best strategy for this? I'm targeting iOS 5+, no backwards compatibility needed.
It's also worth noting that the gesture recognizer for A will probably have to be a custom gesture recognizer, since I want to detect a pinch involving more than two fingers.
Thought:
If installing a gesture recognizer on A doesn't work well, it might be possible to do it the old way using touchesBegan: and friends. As the UIResponder docs note, a UIView subclass can simply call [super touchesBegan:touches withEvent:event] to pass a touch it isn't interested in up the responder chain.
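A minimal sketch of that fallback, assuming a hypothetical GestureView subclass that only wants touches involving more than two fingers and forwards everything else:

// Hypothetical UIView subclass: handles 3+ finger touches itself and
// forwards everything else up the responder chain.
@interface GestureView : UIView
@end

@implementation GestureView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([[event touchesForView:self] count] > 2) {
        // Begin tracking the multi-finger gesture here.
    } else {
        // Not interested in this touch: pass it up the responder chain.
        [super touchesBegan:touches withEvent:event];
    }
}

@end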

Add the gesture recognizer to A as you would normally do.
Now you need to start by hit-testing what was touched.
First, test the z-order of the views. For example, if you touch B, your function should loop/hit-test over all the views affected by the touch, in this case A and B.
Once your hit test finds both A and B (B over A), compare their z-order. Say B's z-index is 2 and A's is 1: you now know B is what the user intended to touch, because its z-index is higher, meaning it is on top.
Once you have identified the target (B), and before its gesture recognizer runs, temporarily disable A's gesture recognizer to eliminate any conflict between the overlapping recognizers. After B's touch completes/ends, re-enable A's gesture recognizer.
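A rough sketch of that idea, assuming the recognizers are wired to hypothetical pinchOnB: and gestureOnA: handlers and that A's recognizer is reachable as a property:

// Hypothetical handler for B's pinch recognizer. While B's gesture is
// in flight, A's recognizer is disabled to avoid a conflict.
- (void)pinchOnB:(UIPinchGestureRecognizer *)pinch {
    if (pinch.state == UIGestureRecognizerStateBegan) {
        self.gestureRecognizerForA.enabled = NO;   // suspend A
    } else if (pinch.state == UIGestureRecognizerStateEnded ||
               pinch.state == UIGestureRecognizerStateCancelled) {
        self.gestureRecognizerForA.enabled = YES;  // restore A
    }
    // ...handle the pinch itself...
}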

It turns out that just adding the gesture recognizers in the straightforward, obvious way works, at least for the gestures I want to recognize. I had imagined it would be more complicated.
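For reference, a rough sketch of that setup; MyMultiFingerPinchRecognizer, the viewA/viewB properties, and the handler names are placeholders, and the delegate method is only needed if A's recognizer must fire at the same time as the others:

// In the owning view controller:
- (void)viewDidLoad {
    [super viewDidLoad];

    UIPinchGestureRecognizer *pinchB = [[UIPinchGestureRecognizer alloc]
        initWithTarget:self action:@selector(pinchOnB:)];
    [self.viewB addGestureRecognizer:pinchB];
    // ...likewise for C and D...

    MyMultiFingerPinchRecognizer *gestureA = [[MyMultiFingerPinchRecognizer alloc]
        initWithTarget:self action:@selector(gestureOnA:)];
    gestureA.delegate = self;
    [self.viewA addGestureRecognizer:gestureA];
}

// UIGestureRecognizerDelegate: let A's recognizer run alongside the
// pinch recognizers on B, C, and D.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)other {
    return YES;
}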

Related

Which one is better: UIPanGestureRecognizer or UIDragInteraction/UIDropInteraction?

I'm trying to make a card game app (with Swift 4), like a Solitaire card game, so I have to implement drag and drop for each card UIView. But I think there are two ways to do the dragging, and I'm also not sure what panning means, so:
1. Which one is better, UIPanGestureRecognizer or UIDragInteraction/UIDropInteraction?
2. What is the difference between dragging and panning?
(From the Apple docs for UIDragInteraction and UIPanGestureRecognizer:)
UIDragInteraction
An interaction to enable dragging of items from a view, employing a delegate to provide drag items and to respond to calls from the drag session.
UIPanGestureRecognizer
A concrete subclass of UIGestureRecognizer that looks for panning (dragging) gestures.
Going by those descriptions, the pan and drag gestures are almost the same.
Some of the differences I found from searching are as follows:
UIDragInteraction requires iOS 11.0+, while UIPanGestureRecognizer works from iOS 3.2+, so if you want to run your application on devices with older iOS versions you should use UIPanGestureRecognizer.
UIPanGestureRecognizer works on the whole screen and gives you CGPoints in response to touches, whereas UIDragInteraction works on the particular object you want to drag and drop and gives you the view object directly (a pan-based drag sketch follows this list).
UIPanGestureRecognizer can work with and handle multiple touches, whereas UIDragInteraction doesn't let you handle multiple touches.
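A minimal pan-based drag sketch for a card (in Objective-C, matching the other snippets here; each card is assumed to be a plain UIView and dragCard: is a placeholder name):

// At setup time, attach a pan recognizer to each card view:
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
    initWithTarget:self action:@selector(dragCard:)];
[cardView addGestureRecognizer:pan];

// Move the card by the pan translation, then reset the translation so
// the next callback reports an incremental delta.
- (void)dragCard:(UIPanGestureRecognizer *)pan {
    CGPoint t = [pan translationInView:pan.view.superview];
    pan.view.center = CGPointMake(pan.view.center.x + t.x,
                                  pan.view.center.y + t.y);
    [pan setTranslation:CGPointZero inView:pan.view.superview];
}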
Hope this helps.
Thanks

Using pointInside:withEvent: for different gestures

I have two views, A and B. B is a subview of A. I want to detect double taps in A; when one occurs, I will move B to the tap position.
Now, I want to keep that code inside B, to avoid putting code in A.
So I added a double tap gesture recognizer to B, and I have overridden -pointInside:withEvent: in B so it can react to double taps outside B's frame.
However, I still want other gestures (including single taps) to work on A, so I came up with two ideas for how to do this:
Recognize inside pointInside: and return NO for single taps and YES for double taps; however, there seems to be no way to do this.
Always return YES from pointInside: and capture both single and double tap gestures, forwarding single taps to A to handle; however, I haven't found a way to do this either.
Can anyone help me with this? Or tell me if I'm looking in the wrong direction?
That approach could work, but it's very messy, simply because pointInside:withEvent: is a very primitive call.
When you double tap a view, you'll receive multiple hitTest:withEvent: calls (which, in turn, call pointInside:withEvent:), meaning you'd have to do some hard work with time offsets to measure whether two taps occurred one after the other.
How many calls does it get? As many as it can; every millisecond your finger rests on the screen, this method gets carpet-bombed with calls. It's simply not wise to overload it for what you intend to do.
Simply put, gesture recognizers are very convenient objects that encapsulate all the complexity of dealing with raw UITouch events in real time yourself.
As a solution that keeps the code relatively clean, you could add the UITapGestureRecognizer to A but hand the selector to B to handle. You can even do this in Interface Builder, or in code:
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:B action:@selector(handleGesture:)];
tapGesture.numberOfTapsRequired = 2;
[A addGestureRecognizer:tapGesture];
(A and B being your views)
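And a sketch of what the handler in B might look like, assuming the goal from the question (move B to the tap position; the method name matches the selector above):

// In B's implementation: center B on the double-tap location.
- (void)handleGesture:(UITapGestureRecognizer *)tap {
    self.center = [tap locationInView:self.superview];
}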

Puzzle swipe gesture

I'm trying to figure out the best way available in iOS to solve the following:
Basically, I've built a 4x6 tile matrix of UIButtons, each containing a letter. The buttons are contained within a UIView. (The grid below holds the words Apple, Fast, and Tree.)
A
P F
P A
L S
E T
T R E E
All the UIButtons have userInteractionEnabled set to FALSE so that the containing view receives the touchesBegan calls. On creation, all the UIButtons are placed into an NSMutableArray.
My challenge is how to swipe and drag from a starting letter and move to a destination letter, trying to "find" the complete word, kind of like the Ruzzle app but with only horizontal and vertical swipes.
The UIButtons being "multi-selected" have to change background color as a visual indication.
I'm receiving the touch location via touchesMoved. Does all of the detection code have to be triggered from touchesMoved?
What would be the best, and least processing-intensive, approach for this?
Instead of using the touchesBegan/touchesMoved methods, why don't you look into the UIControlEventTouchDragEnter/UIControlEventTouchDragInside events? That will give you better performance, as the associated action will only be called when a touch enters the button or is dragged inside it. In those actions you can keep pushing new buttons onto an array as the touch enters their bounds, and check their position relative to the buttons pushed previously. With this approach I think you will have to handle the first button touch using the UIControlEventTouchDown event.
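A sketch of the wiring that suggestion describes, assuming hypothetical firstLetterDown: and letterEntered: actions and a selectedButtons array:

// At setup time, wire every letter button:
[button addTarget:self action:@selector(firstLetterDown:)
    forControlEvents:UIControlEventTouchDown];
[button addTarget:self action:@selector(letterEntered:)
    forControlEvents:UIControlEventTouchDragEnter];

// Record and highlight a button when the drag enters its bounds.
- (void)letterEntered:(UIButton *)button {
    [self.selectedButtons addObject:button];
    button.backgroundColor = [UIColor yellowColor];
}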
I would love to know how you end up implementing it.

Hacking the iOS UI responder chain

I'm facing a delicate problem handling touch events. This is probably not a usual thing to do, but I think it is possible; I just don't know how...
I have a main view containing two subviews, A and B, each with a lot of subviews 1, 2, 3, ...:
MainView
    SubView(A)
        1
        2
        3
    SubView(B)
        1
        2
        3
Some of these sub-subviews (1, 2, 4) are scroll views.
It happens that I want to switch between A and B with a two-finger pan.
I have tried attaching a UIPanGestureRecognizer to MainView, but the scroll views cancel the touches and it only works sometimes.
I need a consistent way to capture the touches first, detect whether it is a two-finger pan, and only then decide whether to pass the touches down (or up... I'm not sure) the responder chain.
I tried to create a top-level view to handle that, but I can't get the touches to pass through that view.
I have found a lot of people with similar problems, but couldn't derive a solution to this problem from their solutions.
If anyone could shed some light on this, that would be great, as I'm already getting desperate.
You can create a top-level view to capture the touches and their coordinates, and then check whether each touch's coordinates lie inside one of the subviews. You can do that using the
BOOL CGRectContainsPoint(CGRect rect, CGPoint point)
function, where rect is the frame of the view and point is the touch location.
Please note that frames and touch locations are relative to their superviews, so you need to convert them to the coordinate system of the app window.
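A sketch of that conversion and check, assuming it runs inside the top-level view's touchesBegan:withEvent: and that candidateView is one of the subviews being tested:

// Convert both the touch location and the candidate view's frame into
// the window's coordinate system before comparing them.
UITouch *touch = [touches anyObject];
CGPoint windowPoint = [touch locationInView:nil];  // nil means the window
CGRect windowFrame = [candidateView.superview convertRect:candidateView.frame
                                                   toView:nil];
if (CGRectContainsPoint(windowFrame, windowPoint)) {
    // The touch landed inside candidateView.
}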
Or maybe this can be more helpful:
Receiving touch events on more than one UIView simultaneously

Gestures that steal touches like iOS multitasking swipe

I know what I want to do, but I'm stumped as to how to do it: I want to implement something like the iOS multitasking gestures. That is, I want to "steal" touches from any view inside my view hierarchy if the number of touches is greater than, say, two. Of course, the gestures are not meant to control multitasking, it's just the transparent touch-stealing I'm after.
Since this is a fairly complex app (which makes extensive use of view controller containment), I want this to be transparent to the views that it happens to (i.e. I want to be able to display arbitrary views and hierarchies, including UIScrollViews, MKMapViews, UIWebViews etc. without having to change their implementation to play nice with my gestures).
Just adding a gestureRecognizer to the common superview doesn't work, as subviews that are interaction enabled eat all the touches that fall on them.
Adding a visually transparent UI-enabled view as a sibling (but in front) of the main view hierarchy also doesn't work, since now this view eats all the touches. I've experimented with reimplementing touchesBegan: etc. in the touchView, but forwarding the touches to nextResponder doesn't work, because that'll be the common superview, in effect funnelling the touches right around the views that are supposed to be receiving them when the touchView gives them up.
I am sure I'm not the only one looking for a solution for this, and I'm sure there are smarter people than me that have this already figured out. I even suspect it might not actually be very hard, and just maybe my brain won't see the forest for the trees today. I'm thankful for any helpful answers anyway :)
I would suggest trying method swizzling, reimplementing touchesBegan: on UIView. I think the best way is to store the number of touches in a static shared variable (so that each view can increment/decrement this value). It's just a very simple idea, take it with a grain of salt.
Hope this helps.
Ciao! :)
A possible, but potentially dangerous (if you aren't careful), approach is to subclass your application's UIWindow and override the sendEvent: method.
Since this method is called for every touch event the app receives, you can inspect the event and then decide to call [super sendEvent:] (if the touch is not filtered), not call it (if the touch is filtered), or defer the call if you are still recognizing the touch.
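A minimal sketch of that subclass, with a placeholder shouldStealTouches: policy:

// UIWindow subclass that filters touch events before they are
// dispatched to the view hierarchy.
@interface StealingWindow : UIWindow
@end

@implementation StealingWindow

- (void)sendEvent:(UIEvent *)event {
    if (event.type == UIEventTypeTouches && [self shouldStealTouches:event]) {
        // Swallow the event and handle the gesture here instead of
        // forwarding it to the views underneath.
        return;
    }
    [super sendEvent:event];  // normal dispatch
}

- (BOOL)shouldStealTouches:(UIEvent *)event {
    // Placeholder policy: steal anything involving three or more fingers.
    return [[event allTouches] count] >= 3;
}

@end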
Another possibility is to play with the hitTest:withEvent: method, but this would require your stealing view to be placed properly in the view hierarchy, and I don't think it fits well when you have many view controllers. I believe the previous solution is more general-purpose.
Actually, adding a gesture recognizer to the common superview is the right way to do this. But it sounds like you may need to set either delaysTouchesBegan or cancelsTouchesInView (or both) to ensure the gesture recognizer handles everything before letting touches through to the child views.
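For example (a sketch; the handler name is a placeholder):

UIPanGestureRecognizer *twoFingerPan = [[UIPanGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleTwoFingerPan:)];
twoFingerPan.minimumNumberOfTouches = 2;
// Hold touches back from the subviews until the recognizer fails, and
// cancel them once it succeeds.
twoFingerPan.delaysTouchesBegan = YES;
twoFingerPan.cancelsTouchesInView = YES;
[commonSuperview addGestureRecognizer:twoFingerPan];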
