I added a custom gesture recognizer to a ViewController's view. Once a touch starts, the custom class reports the touch.force in NSLog every time the touchesMoved function fires off.
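Roughly, the recognizer looks like this (a simplified sketch, not my exact code; ForceLogGestureRecognizer is just a placeholder name):

#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>   // required to assign self.state

@interface ForceLogGestureRecognizer : UIGestureRecognizer
@end

@implementation ForceLogGestureRecognizer

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    self.state = UIGestureRecognizerStateBegan;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesMoved:touches withEvent:event];
    // This NSLog only fires while UIKit keeps delivering move events.
    NSLog(@"force: %f", [[touches anyObject] force]);
    self.state = UIGestureRecognizerStateChanged;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    self.state = UIGestureRecognizerStateEnded;
}

@end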
If you start touching and then move your finger even just a few millimeters, the touchesMoved events will fire continually until you lift your finger and the touchesEnded event fires.
However if you start touching and DON'T move your finger at all, then the touchesMoved events will only fire off a few times, maybe 10-20 times, and then they just stop. After that, the touchesCancelled function does not get called and neither does touchesEnded.
Why is this happening? I want it to keep firing touchesMoved forever until the finger is lifted up.
I saw this other similar question here: iOS app stops responding to touch events until I move my finger on/off screen
But there is no answer to that one either.
I am implementing touchesBegan and touchesEnded in my iOS application trying to detect when the user puts a finger on the screen and when he releases it.
The problem I am having is that after touchesBegan gets called, if the user rotates the device while still holding his finger on the screen, touchesEnded does not get called when he lets go of the screen.
Does anyone know why this might be happening?
Are you getting touchesCancelled instead?
In general, the system will call either touchesEnded or touchesCancelled after a touchesBegan, so code should deal with both. Touches can be cancelled for various reasons, such as a gesture recognizer taking over, a non-interactive animation starting on the view, an incoming phone call, etc.
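A common pattern is to route both callbacks through the same cleanup, something like this (finishTrackingTouches: is just a hypothetical helper name):

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [self finishTrackingTouches:touches];   // normal end of the touch
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // Rotation, an incoming call, or a gesture recognizer claiming the touch
    // can all land here instead of touchesEnded, so clean up the same way.
    [self finishTrackingTouches:touches];
}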
I have a project that I started out using a tap gesture recognizer for. I realized I didn't have enough control with the tap gesture recognizer, so I've started coding against the touch methods directly, using my view controller as a UIGestureRecognizerDelegate. Just to make sure I was on the right track, I added methods for touchesBegan, touchesMoved, touchesEnded, and touchesCancelled. The methods are empty except for NSLog calls so I can tell what is being fired when I try different things.
Things worked as expected except that I was getting a bunch of calls to touchesCancelled. I assume this is because of the tap gesture recognizer I still have in place. I'm not ready to remove the tap gesture recognizer, so I just wanted to confirm that this is what would happen when a gesture I made was actually recognized as a tap.
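My setup looks roughly like this (simplified; handleTap: is just my tap handler):

- (void)viewDidLoad {
    [super viewDidLoad];
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
    tap.delegate = self;   // the view controller adopts UIGestureRecognizerDelegate
    [self.view addGestureRecognizer:tap];
}

- (void)handleTap:(UITapGestureRecognizer *)sender {
    NSLog(@"tap recognized");
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"touchesCancelled");   // this is the call I keep seeing
}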
The documentation says:
This method is invoked when the Cocoa Touch framework receives a system interruption requiring cancellation of the touch event; for this, it generates a UITouch object with a phase of UITouchPhaseCancel. The interruption is something that might cause the application to be no longer active or the view to be removed from the window. When an object receives a touchesCancelled:withEvent: message it should clean up any state information that was established in its touchesBegan:withEvent: implementation.
But I suspect my scenario just outlined is just as likely. Am I correct?
I have a UITableView which I present in a UIPopoverController. The table view presents a list of elements that can be dragged and dropped onto the main view.
When the user begins a pan gesture that is principally vertical at the outset, I want the UITableView to scroll as usual. When it's not principally vertical at the outset, I want the application to interpret this as a drag-and-drop action.
My unfortunately lengthy journey down this path has compelled me to create a custom UIGestureRecognizer. In an attempt to get the basics right, I left this custom recognizer as an empty implementation at first, one that merely calls the super version of each of the five methods Apple says should be overridden:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)reset;
This results in nothing happening, i.e. the custom gesture's action method is never called, and the table view scrolls as usual.
For my next experiment, I set the gesture's state to UIGestureRecognizerStateBegan in the touchesBegan method.
This caused the gesture's action method to fire, making the gesture appear to behave just like the standard UIPanGestureRecognizer. This obviously suggested I was responsible for managing the gesture's state.
Next up, I set the gesture's state to UIGestureRecognizerStateChanged in the touchesMoved method. Everything still fine.
Now, instead, I tried setting the gesture's state to UIGestureRecognizerStateFailed in the touchesMoved method. I was expecting this to terminate the gesture and restore the flow of events to the table view, but it didn't. All it did was stop firing the gesture's action method.
Lastly, I set the gesture's state to UIGestureRecognizerStateFailed in the touchesBegan method, immediately after I had set it to UIGestureRecognizerStateBegan.
This causes the gesture to fire its action method exactly once, then pass all subsequent events to the table view.
So...sorry for such a long question...but why, if I cause the gesture to fail in the touchesBegan method (after first setting the state to UIGestureRecognizerStateBegan), does it redirect events to the table view, as expected, yet if I try the same technique in touchesMoved (the only place I can detect that a move is principally vertical), this redirection doesn't occur?
Sorry for making this more complicated than it actually was. After much reading and testing, I've finally figured out how to do this.
First, creating the custom UIGestureRecognizer was one of the proper solutions to this issue, but when I made my first test of the empty custom recognizer, I made a rookie mistake: I forgot to call [super touches...:touches withEvent:event] in each of the methods I overrode. This caused nothing to happen, so I set the state of the recognizer to UIGestureRecognizerStateBegan in touchesBegan, which did result in the action method being called once. That convinced me I had to explicitly manage states, which is only partially true.
In truth, if you create an empty custom recognizer and call the appropriate super method in each method you override, your program will behave as expected. In this case, the action method will get called throughout the dragging motion. If, in touchesMoved, you set the recognizer's state to UIGestureRecognizerStateFailed, the events will bubble up to the superview (in this case a UITableView), also as expected.
The mistake I made, and I think others might make, is thinking there is a direct correlation between setting the gesture's state and the chronology of the standard methods you override when you subclass a gesture recognizer (touchesBegan, touchesMoved, etc.). There isn't - at least, it's not an exact mapping. You're better off letting the base behavior work as is and only intervening where necessary. So, in my case, once I determined the user's drag was principally vertical, which I could only do in touchesMoved, I set the gesture recognizer's state to UIGestureRecognizerStateFailed in that method. This took the recognizer out of the picture and automatically forwarded a full set of events to the encompassing view.
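To make that concrete, the relevant part ended up looking roughly like this (simplified; startPoint is a CGPoint property I added to the subclass, and the vertical test is just an illustration):

#import <UIKit/UIGestureRecognizerSubclass.h>   // needed to assign self.state

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    self.startPoint = [[touches anyObject] locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesMoved:touches withEvent:event];
    CGPoint p = [[touches anyObject] locationInView:self.view];
    if (fabs(p.y - self.startPoint.y) > fabs(p.x - self.startPoint.x)) {
        // Principally vertical: bow out so the table view scrolls normally.
        self.state = UIGestureRecognizerStateFailed;
    } else {
        self.state = (self.state == UIGestureRecognizerStatePossible)
                   ? UIGestureRecognizerStateBegan
                   : UIGestureRecognizerStateChanged;
    }
}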
For the sake of brevity, I've left out a ton of other stuff I learned through this exercise, but I would like to point out that, of the six or seven books I consulted on the subject, Matt Neuburg's Programming iOS 4 provided the best explanation of this subject by far. I hope that referral is allowed on this site. I am in no way affiliated with the author or publisher - just grateful for an excellent explanation!
That probably happens because responders expect to see an entire touch from beginning to end, not just part of one. Often, -touchesBegan:... sets up some state that's then modified in -touchesMoved..., and it really wouldn't make sense for a view to get a -touchesMoved... without having previously received -touchesBegan.... There's even a note in the documentation that says, in part:
All views that process touches, including your own, expect (or should expect) to receive a full touch-event stream. If you prevent a UIKit responder object from receiving touches for a certain phase of an event, the resulting behavior may be undefined and probably undesirable.
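For instance, a typical responder builds up state across the phases, which is why receiving only part of the stream is a problem (a sketch; trackingStartPoint is a hypothetical property):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // State is established at the start of the touch...
    self.trackingStartPoint = [[touches anyObject] locationInView:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // ...and relied on in every later phase.
    CGPoint p = [[touches anyObject] locationInView:self];
    NSLog(@"dragged %f points horizontally", p.x - self.trackingStartPoint.x);
}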
I have a UIButton (a subclass of one, actually) that interacts with the user via the touchesBegan: and touchesMoved: methods.
What I would like is for the user to be able to press down the button, drag their finger away, and have a second finger touch the button (all while the first finger has never left the screen).
Problem is, the second touch event never calls touchesBegan: unless the first finger has been released.
Is there some way to override this, or am I trying to do the impossible?
Have you tried setting multipleTouchEnabled to YES?
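Something like this in the button subclass (a sketch; initWithFrame: is just one place to set it):

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.multipleTouchEnabled = YES;   // without this, only the first finger's touches arrive
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    for (UITouch *touch in touches) {
        NSLog(@"touch began at %@", NSStringFromCGPoint([touch locationInView:self]));
    }
}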
If the interactions are using touchesBegan: and touchesMoved:, then use a UIView instead of a UIButton. A button is a UIControl, and the way to interact with UIControls is
- (void)addTarget:(id)target action:(SEL)action forControlEvents:(UIControlEvents)controlEvents.
I'm not sure these two ways of getting events mix well.
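For comparison, the UIControl route looks something like this (buttonTouchedDown: and self.myButton are just example names):

- (void)viewDidLoad {
    [super viewDidLoad];
    [self.myButton addTarget:self
                      action:@selector(buttonTouchedDown:)
            forControlEvents:UIControlEventTouchDown];
}

- (void)buttonTouchedDown:(UIButton *)sender {
    NSLog(@"touch down on button");
}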