Touch Events touchesCancelled and touchesEnded - ios

I have a project that I started out using a tap gesture recognizer for. I realized I didn't have enough control with the tap gesture recognizer, so I've started coding using my view controller as a UIGestureRecognizerDelegate. Just to make sure I was on the right track, I added methods for touchesBegan, touchesMoved, touchesEnded, and touchesCancelled. The methods are empty except for NSLog calls so I can tell what is being fired when I try different things.
Things worked as expected, except that I was getting a bunch of calls to touchesCancelled. I assume this is because of the tap gesture recognizer I still have in place. I'm not ready to remove the tap gesture recognizer, so I just wanted to confirm that this is what would happen when a gesture I make is actually recognized as a tap.
The documentation says:
This method is invoked when the Cocoa Touch framework receives a system interruption requiring cancellation of the touch event; for this, it generates a UITouch object with a phase of UITouchPhaseCancel. The interruption is something that might cause the application to be no longer active or the view to be removed from the window. When an object receives a touchesCancelled:withEvent: message it should clean up any state information that was established in its touchesBegan:withEvent: implementation.
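For reference, the cleanup the documentation describes might look something like this minimal sketch (trackedTouchPoint is an assumed property, not part of the question's code):

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"touchesCancelled: %lu touch(es)", (unsigned long)[touches count]);
    // Discard any per-touch state that was set up in touchesBegan:withEvent:.
    self.trackedTouchPoint = CGPointZero;
}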
But I suspect the scenario I just outlined is just as likely a cause. Am I correct?

Related

UIGestureRecognizer Method Keeps Getting Called

I am trying to use various NSTimers in conjunction with a pan gesture, firing them only when the translation in the view reaches a certain point, and invalidating the timers when it goes beyond a certain point. However, I found that even if I fire the timers within the .Changed state, the gesture method itself is called continuously as the user pans. As such, the NSTimer is fired continuously and is not working as it should. Is the only option to move the NSTimers outside the pan gesture? Or is there another solution? Thanks.
the gesture method itself is called continuously as the user pans
This is correct behavior. It is up to you to deal with the gesture recognizer action method being called many times. You can and should distinguish why the action method is being called, by examining the gesture recognizer's state. It will be called once for the Begin state, many times for the Changed state, and one last time for the Ended state. Any gesture recognizer action method must take account of this and structure itself accordingly, usually as one big switch statement.
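For illustration, here is a minimal sketch of that structure for a pan recognizer; the timer property, the timerFired: selector, and the translation thresholds are assumptions based on the question, not part of this answer:

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    CGPoint translation = [gesture translationInView:gesture.view];

    switch (gesture.state) {
        case UIGestureRecognizerStateBegan:
            // Called once when the pan starts.
            break;

        case UIGestureRecognizerStateChanged:
            // Called many times while the finger moves; only start the
            // timer once, when the translation first crosses the threshold.
            if (translation.y > 100.0 && self.timer == nil) {
                self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                              target:self
                                                            selector:@selector(timerFired:)
                                                            userInfo:nil
                                                             repeats:YES];
            } else if (translation.y > 200.0) {
                [self.timer invalidate];
                self.timer = nil;
            }
            break;

        case UIGestureRecognizerStateEnded:
        case UIGestureRecognizerStateCancelled:
            // Called once at the end; clean up.
            [self.timer invalidate];
            self.timer = nil;
            break;

        default:
            break;
    }
}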

iOS UIGestureRecognizer

I have two questions:
Can I implement gesture recogniser that inherits from UISwipeGestureRecognizer and add logic to the UIEvent handlers?
Can I implement UIGestureRecognizer without attaching it to a UIView? Meaning, I will analyze and manage the UIEvent events and call the proper selector (touchesBegan, touchesMoved, touchesEnded, touchesCancelled)?
In the meantime I have problems resetting the gesture recogniser when the state is UIGestureRecognizerStateEnded.
You asked:
Can I implement gesture recogniser that inherits from UISwipeGestureRecognizer and add logic to the UIEvent handlers?
Yes. See Creating a Custom Gesture Recognizer in the Event Handling Guide for iOS. Also see WWDC 2010 session 121 - Advanced Gesture Recognition. It probably depends upon what you want to do, though, and you should see if you can accomplish what you want by configuring the standard swipe gesture's direction and numberOfTouches parameters. I've done more subclassing on continuous gestures like UIPanGestureRecognizer, but I see no reason why you couldn't do it on a swipe, too.
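As a point of comparison, configuring the standard swipe recognizer the way this answer suggests might look like the following sketch (handleSwipe:, the two-direction mask, and the two-finger requirement are illustrative assumptions):

// Sketch: configure the stock UISwipeGestureRecognizer rather than subclassing.
UISwipeGestureRecognizer *swipe =
    [[UISwipeGestureRecognizer alloc] initWithTarget:self
                                              action:@selector(handleSwipe:)];
swipe.direction = UISwipeGestureRecognizerDirectionLeft |
                  UISwipeGestureRecognizerDirectionRight;
swipe.numberOfTouchesRequired = 2;   // two-finger swipe
[self.view addGestureRecognizer:swipe];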
Can I implement UIGestureRecognizer without attaching it to a UIView? Meaning, I will analyze and manage the UIEvent events and call the proper selector (touchesBegan, touchesMoved, touchesEnded, touchesCancelled)?
No. Obviously you can create one, but it just won't receive any of the events until it's added to a UIView and that view receives touches.
In the meantime I have problems resetting the gesture recogniser when the state is UIGestureRecognizerStateEnded.
You'd have to submit a new question providing a relevant code snippet for us to help you on that one. In general, you'd do any post-gesture cleanup when your handler is called for UIGestureRecognizerStateEnded (and UIGestureRecognizerStateCancelled or UIGestureRecognizerStateFailed) and you'd initialize everything for the next gesture when you receive the next UIGestureRecognizerStateBegan.
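A minimal sketch of that cleanup and re-initialization pattern, shown for a continuous recognizer (handleGesture: and the startPoint property are illustrative assumptions):

- (void)handleGesture:(UIGestureRecognizer *)gesture
{
    switch (gesture.state) {
        case UIGestureRecognizerStateBegan:
            // Initialize per-gesture state for the gesture that is starting.
            self.startPoint = [gesture locationInView:gesture.view];
            break;

        case UIGestureRecognizerStateEnded:
        case UIGestureRecognizerStateCancelled:
        case UIGestureRecognizerStateFailed:
            // Post-gesture cleanup so the next gesture starts clean.
            self.startPoint = CGPointZero;
            break;

        default:
            break;
    }
}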

touchesEnded or touchesCancelled not always called

This issue I think deserves its own question. Using the code attached to my solution to another problem I discovered the issue described here.
I have the main view controller set as a UIGestureRecognizerDelegate, and I implement touchesBegan, touchesMoved, touchesEnded, and touchesCancelled, programming my solution on the assumption that for every touch object with a touchesBegan event there would be a touchesEnded or touchesCancelled event for that same object. I'm finding that not to be the case, though.
Scenario:
The following events happen in this order.
User starts gesture 1, touching the screen and sliding the finger.
User starts gesture 2, touching the screen at a different location.
User continues to slide both fingers at their respective parts of the screen.
User lifts finger off the screen for gesture 2.
User continues gesture 1.
User lifts finger off the screen for gesture 1.
Using NSLog to capture the details of the touch event, I find that a separate touch object is used for gesture 1 and gesture 2. But while touchesBegan, touchesMoved, and touchesEnded are all called for gesture 1, only touchesBegan and touchesMoved are called for gesture 2. The event touchesCancelled is not called for it either.
So how can I tell when gesture 2 finishes if touchesEnded and touchesCancelled are not called?
Edit: I found another post with similar symptoms. Most of my subviews are created programmatically, though. I'll try what was suggested there for the others. I'm skeptical it is the same issue, though, since in my testing, the touch locations are not anywhere near the other views.
Another edit: Following the recommendation in the link posted in my previous edit, I looked at the subviews, and one had user interaction checked. After I unchecked it, the behavior is slightly different. Now the second touch isn't noted at all in any of the touch events. I must be missing something basic. The main view, and the view with user interaction checked, by the way, both occupy the same space (one encapsulates the other).
My original assumption, that each touch has its own object that starts at touchesBegan and ends with either touchesEnded or touchesCancelled, I think is correct. It is with my current implementation, anyway. I originally wasn't seeing a second touch because Multiple Touch was not enabled for the view I was working with. I enabled that, per the suggestion in the comments. After that, I was able to see some, but not all, touch events for the second touch. The reason I was sometimes not seeing the second touch was that I had a subview with user interaction enabled; apparently, it was commandeering the touches. I unchecked that and was then able to see the touch objects.
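In code terms, those two settings correspond to the standard UIView properties shown in this sketch (the subview name is illustrative):

self.view.multipleTouchEnabled = YES;        // allow a second touch to be reported
overlaySubview.userInteractionEnabled = NO;  // stop the subview from swallowing touches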
I then switched from tracking the touches by coordinates to tracking them by touch ID and was able to track the complete lifespan of all touches. Tracking by coordinates didn't work because, for the second touch, the touchesEnded coordinates were identical to the last location reported in touchesMoved, rather than the previous location in touchesEnded matching the touch location in touchesMoved as it did for the first touch. If this sounds confusing, just track the touches by touch ID instead of by coordinates.
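To illustrate tracking by identity rather than by coordinates, here is a minimal sketch; the activeTouches dictionary and the use of the UITouch object's address as a key are assumptions, not part of the original answer:

// Sketch: track each UITouch across its lifetime by identity.
// activeTouches is an assumed NSMutableDictionary property.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in touches) {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        self.activeTouches[key] =
            [NSValue valueWithCGPoint:[touch locationInView:self.view]];
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in touches) {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        NSLog(@"Touch %p ended; began at %@", touch, self.activeTouches[key]);
        [self.activeTouches removeObjectForKey:key];
    }
}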
How about you put something like this in your touchesMoved method
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Package the touch information so the delayed callback can inspect it.
    NSArray *touchData = @[touches, event];

    // Restart the timer on every move; it only fires if no further
    // touchesMoved arrives within 0.1 seconds.
    [self.timer invalidate];
    self.timer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                  target:self
                                                selector:@selector(touchesFinishedWithoutCallback:)
                                                userInfo:touchData
                                                 repeats:NO];
}
The touchesFinishedWithoutCallback: method will only get called when touchesMoved stops getting called.
Needs elaborating for multiple touch, but could be a solution?

Custom UIGestureRecognizer Not Working As Expected

I have a UITableView which I present in a UIPopoverController. The table view presents a list of elements that can be dragged and dropped onto the main view.
When the user begins a pan gesture that is principally vertical at the outset, I want the UITableView to scroll as usual. When it's not principally vertical at the outset, I want the application to interpret this as a drag-and-drop action.
My unfortunately lengthy journey down this path has compelled me to create a custom UIGestureRecognizer. In an attempt to get the basics right, I left this custom recognizer as an empty implementation at first, one that merely calls the super version of each of the five methods Apple says should be overridden:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)reset;
This results in nothing happening, i.e. the custom gesture's action method is never called, and the table view scrolls as usual.
For my next experiment, I set the gesture's state to UIGestureRecognizerStateBegan in the touchesBegan method.
This caused the gesture's action method to fire, making the gesture appear to behave just like the standard UIPanGestureRecognizer. This obviously suggested I was responsible for managing the gesture's state.
Next up, I set the gesture's state to UIGestureRecognizerStateChanged in the touchesMoved method. Everything still fine.
Now, instead, I tried setting the gesture's state to UIGestureRecognizerStateFailed in the touchesMoved method. I was expecting this to terminate the gesture and restore the flow of events to the table view, but it didn't. All it did was stop firing the gesture's action method.
Lastly, I set the gesture's state to UIGestureRecognizerStateFailed in the touchesBegan method, immediately after I had set it to UIGestureRecognizerStateBegan.
This causes the gesture to fire its action method exactly once, then pass all subsequent events to the table view.
So... sorry for such a long question... but why, if I cause the gesture to fail in the touchesBegan method (after first setting the state to UIGestureRecognizerStateBegan), does it redirect events to the table view as expected, yet if I try the same technique in touchesMoved (the only place I can detect that a move is principally vertical), the redirection doesn't occur?
Sorry for making this more complicated than it actually was. After much reading and testing, I've finally figured out how to do this.
First, creating the custom UIGestureRecognizer was one of the proper solutions to this issue, but when I made my first test of the empty custom recognizer, I made a rookie mistake: I forgot to call [super touches...:touches withEvent:event] for each of the methods I overrode. This caused nothing to happen, so I set the state of the recognizer to UIGestureRecognizerStateBegan in touchesBegan, which did result in the action method being called once, thus convincing me I had to explicitly manage states, which is only partially true.
In truth, if you create an empty custom recognizer and call the appropriate super method in each method your override, your program will behave as expected. In this case, the action method will get called throughout the dragging motion. If, in touchesMoved, you set the recognizer's state to UIGestureRecognizerStateFailed, the events will bubble up to the super view (in this case a UITableView), also as expected.
The mistake I made, and I think others might make, is thinking there is a direct correlation between setting the gesture's state and the chronology of the standard methods when you subclass a gesture recognizer (i.e. touchesBegan, touchesMoved, etc.). There isn't; at least, it's not an exact mapping. You're better off letting the base behavior work as is and only intervening where necessary. So, in my case, once I determined the user's drag was principally vertical, which I could only do in touchesMoved, I set the gesture recognizer's state to UIGestureRecognizerStateFailed in that method. This took the recognizer out of the picture and automatically forwarded a full set of events to the encompassing view.
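Putting the approach described above into a sketch (the startPoint ivar and the simple vertical test are illustrative assumptions, not the original code):

#import <UIKit/UIGestureRecognizerSubclass.h>   // required to assign self.state

// Sketch of the subclass behavior described above.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesBegan:touches withEvent:event];
    startPoint = [[touches anyObject] locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesMoved:touches withEvent:event];
    CGPoint p = [[touches anyObject] locationInView:self.view];

    // If the drag is principally vertical, fail so the table view scrolls.
    if (fabs(p.y - startPoint.y) > fabs(p.x - startPoint.x)) {
        self.state = UIGestureRecognizerStateFailed;
    }
}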
For the sake of brevity, I've left out a ton of other stuff I learned through this exercise, but would like to point out that, of the six or seven books on the subject, Matt Neuburg's Programming iOS 4 provided the best explanation of this subject by far. I hope that referral is allowed on this site. I am in no way affiliated with the author or publisher - just grateful for an excellent explanation!
That probably happens because responders expect to see an entire touch from beginning to end, not just part of one. Often, -touchesBegan:... sets up some state that's then modified in -touchesMoved..., and it really wouldn't make sense for a view to get a -touchesMoved... without having previously received -touchesBegan.... There's even a note in the documentation that says, in part:
All views that process touches, including your own, expect (or should expect) to receive a full touch-event stream. If you prevent a UIKit responder object from receiving touches for a certain phase of an event, the resulting behavior may be undefined and probably undesirable.

How do I implement multitouch on iOS

I'd like to implement multitouch, and I was hoping to get some sanity checks from the brilliant folks here. :)
From what I can tell, my strategy to detect and track multitouch is going to be to use the touchesBegan, touchesMoved, and touchesEnded methods and use the allTouches method of the event parameter to get visibility on all relevant touches at any particular time.
I was thinking I'd essentially use previousLocationInView as a way of linking the touches that come in with new events to the currently active touches, i.e. if there is a touchesBegan for one at x,y = 10,14, then I can use the previous location of a touch in the next message to know which touch it is tied to, as a way of keeping track of one finger's continuous motion, etc. Does this make sense? If it does make sense, is there a better way to do it? I cannot hold onto UITouch or UIEvent pointers as a way of identifying touches with previous touches, so I cannot go that route. All I can think to do is tie them together via their previousLocationInView value (and to know which are 'new' touches).
You might want to take a look at gesture recognizers. From Apple's docs,
You could implement the touch-event handling code to recognize and handle these gestures, but that code would be complex, possibly buggy, and take some time to write. Alternatively, you could simplify the interpretation and handling of common gestures by using one of the gesture recognizer classes introduced in iOS 3.2. To use a gesture recognizer, you instantiate it, attach it to the view receiving touches, configure it, and assign it an action selector and a target object. When the gesture recognizer recognizes its gesture, it sends an action message to the target, allowing the target to respond to the gesture.
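As a concrete sketch of those steps for the two-finger case (the choice of a pan recognizer and the handlePan: name are assumptions for illustration):

// Instantiate, configure, attach, and give the recognizer a target/action.
UIPanGestureRecognizer *pan =
    [[UIPanGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(handlePan:)];
pan.minimumNumberOfTouches = 2;           // require two fingers
[self.view addGestureRecognizer:pan];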
See the article on Gesture Recognizers and specifically the section titled "Creating Custom Gesture Recognizers." You will need an Apple Developer Center account to access this.
