I'm trying to use a gesture recognizer to allow the user to resize a view on the screen by dragging it with one finger... as such, I want to track all changes to that touch. I initially tried to use a UIPanGestureRecognizer for this, but I've run into an issue when the user touches the view with multiple fingers. I only want to track the first touch (i.e. the first finger the user places on the screen during a gesture), but I can't find a way to prevent additional touches from interfering with my ability to track the first. My initial solution was to simply set
recognizer.maximumNumberOfTouches = 1
However, this causes the entire gesture to be cancelled in the event of a second touch (I want to continue tracking the first touch in this case). But, of course, not setting this value to 1 causes the gesture to actually track multiple touches. How can I make a gesture recognizer that tracks all changes to only the first touch in the gesture, and isn't cancelled by additional touches on the screen?
From your comments and answer, it sounds like your use of a pan gesture recognizer was always wrong and you were doing something wrong in your action handler. No correct action handler for a pan gesture recognizer would ever use location(in:) for anything. location(in:) gives you the centroid of all touches, which is exactly what you say you don't want, and in any case there is no need to consult it.
If your goal is to drag a view, you use translation(in:).
If your goal is to track the actual location of a particular touch, you use location(ofTouch:in:).
If you discover that a touch is being delivered and you don't want it tracked, call ignore(_:for:).
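For illustration, here is a minimal sketch of a one-finger drag handler built on translation(in:); the view controller, the `box` view, and the handler name are my own placeholders, not part of the question:

import UIKit

class DragViewController: UIViewController {
    // A hypothetical draggable view; any subview works the same way.
    private let box = UIView(frame: CGRect(x: 50, y: 50, width: 100, height: 100))

    override func viewDidLoad() {
        super.viewDidLoad()
        box.backgroundColor = .systemBlue
        view.addSubview(box)
        box.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
    }

    @objc private func handlePan(_ g: UIPanGestureRecognizer) {
        // Apply the accumulated translation, then reset it so each callback
        // only contributes the incremental movement since the last one.
        let t = g.translation(in: view)
        box.center = CGPoint(x: box.center.x + t.x, y: box.center.y + t.y)
        g.setTranslation(.zero, in: view)
    }
}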
Here is a UIGestureRecognizer subclass that will track all changes to the first touch the user places on the screen at the start of a gesture, and will ignore all other fingers without cancelling the gesture. Note that if you're using functions/properties of the gesture recognizer (such as location(in:)), you'll probably have to override those as well.
import UIKit
// Required in Swift to override the touches… methods and assign to `state`.
import UIKit.UIGestureRecognizerSubclass

class FirstTouchGestureRecognizer: UIGestureRecognizer {
    private var firstTouch: UITouch? = nil

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        // Latch onto the first touch of the gesture; later touches are left alone.
        if firstTouch == nil, let touch = touches.first {
            firstTouch = touch
            state = .began
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        if let first = firstTouch, touches.contains(first) {
            state = .changed
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        if let first = firstTouch, touches.contains(first) {
            firstTouch = nil
            state = .ended
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        // Without this, a system cancellation could leave the recognizer stuck.
        if let first = firstTouch, touches.contains(first) {
            firstTouch = nil
            state = .cancelled
        }
    }
}
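For completeness, a hedged sketch of wiring the recognizer up (`resizableView` and `handleResize(_:)` are my own placeholder names); as noted above, reading the tracked touch's position still requires overriding or exposing something on the subclass, since location(in:) averages all touches:

// Illustrative setup inside a view controller.
let recognizer = FirstTouchGestureRecognizer(target: self, action: #selector(handleResize(_:)))
resizableView.addGestureRecognizer(recognizer)

@objc func handleResize(_ g: FirstTouchGestureRecognizer) {
    switch g.state {
    case .began, .changed:
        // Resize `resizableView` here, e.g. from a location you expose on the subclass.
        break
    case .ended, .cancelled:
        // Finish or roll back the resize.
        break
    default:
        break
    }
}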
Related
I have a custom UIGestureRecognizer for a two-finger gesture that works perfectly, except that it is very picky about how simultaneously the fingers have to touch the iOS device for touchesBegan to be called with 2 touches. touchesBegan is often called with only one touch even though I am trying to use two fingers.
Is there any way to make recognition of the number of touches more forgiving with regard to how simultaneously you have to place your fingers on the touch screen?
I've noticed that a two-finger tap is recognized even when you place one finger first and then another much later, while still holding the first finger down.
Here is the code for my touchesBegan function:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
    if touches.count != 2 {
        state = .failed
        return
    }

    // Capture the first touch and store some information about it.
    if trackedTouch == nil {
        trackedTouch = touches.min { $0.location(in: self.view?.window).x < $1.location(in: self.view?.window).x }
        strokePhase = .topSwipeStarted
        topSwipeStartPoint = (trackedTouch?.location(in: view?.window))!

        // Ignore the other touch that had a larger x-value
        for touch in touches {
            if touch != trackedTouch {
                ignore(touch, for: event)
            }
        }
    }
}
For two-finger gestures, touchesBegan is most likely going to be called twice: once when you put the first finger on the screen, and again when you add the second.
In the state you keep, you should keep track of both touches (or for that matter, all current touches), and only start the gesture once both touches have been received and the gesture's start condition has been met.
import UIKit
// Required in Swift to override the touches… methods in a UIGestureRecognizer subclass.
import UIKit.UIGestureRecognizerSubclass

public class TwoFingerGestureRecognizer: UIGestureRecognizer {
    private var trackedTouches: Set<UITouch> = []

    public override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        for touch in touches {
            if self.trackedTouches.count < 2 {
                self.trackedTouches.insert(touch)
            } else {
                self.ignore(touch, for: event)
            }
        }
        if self.trackedTouches.count == 2 {
            // put your current logic here
        }
    }

    public override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
    }

    public override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        self.trackedTouches.subtract(touches)
    }

    public override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        self.trackedTouches.subtract(touches)
    }

    public override func reset() {
        super.reset()
        self.trackedTouches = []
    }
}
Don't worry about touchesBegan:withEvent:; instead use touchesEnded:withEvent: or touchesMoved:withEvent:. If the end state does not contain both fingers, set the state to .failed; otherwise set it to .ended.
Tapping the screen with more than one finger at exactly the same instant is practically impossible, so during touchesMoved:withEvent: you will find two fingers. I'm not sure about touchesEnded:withEvent:; that one probably won't work, since removing two fingers simultaneously is just as hard as applying them simultaneously, but it's worth a try to see how it reacts.
I'd recommend making your code a little more forgiving. Although touchesBegan/Moved/Ended/Cancelled respond to events of "one or more touches" (as stated in the Apple docs), relying on the user's precision in touching the screen with 2 fingers simultaneously is not ideal. This is assuming you have multi-touch enabled, which it sounds like you do.
Try tracking the touches yourself and executing your logic when your collection of touches reaches 2. Obviously you'll also have to track when the number of touches grows or shrinks and handle things accordingly, but I'd guess you're already handling this if your gesture is meant to be for 2 fingers only (aside from the extra logic you'd have to add in touchesBegan).
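A hedged sketch of that bookkeeping, assuming the touches are handled in a view with isMultipleTouchEnabled set to true (the property name and placement are my own):

private var activeTouches = Set<UITouch>()

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Collect every new touch; fire the two-finger logic once two are down.
    activeTouches.formUnion(touches)
    if activeTouches.count == 2 {
        // Both fingers are now on the screen: start the gesture logic here.
    }
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    activeTouches.subtract(touches)
}

override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
    activeTouches.subtract(touches)
}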
Not sure why the other answers suggesting touchesMoved:withEvent: did not answer your question, but maybe you need a GitHub example.
Double touch move example.
Is touchesMoved an option for achieving the desired outcome? You could also implement a counter before setting the state to .failed.
Don't forget to set isMultipleTouchEnabled = true
// Tracks how many touchesMoved calls have arrived with only one finger down.
var counter = 0

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
    if touches.count != 2 {
        print("we do NOT have two touches")
        if counter > 100 { // a larger threshold is more forgiving
            state = .failed
        }
        counter += 1
        return
    } else {
        print("we have two touches")
    }
}
I have a strange case with UIPanGestureRecognizer in Swift.
I have a function that handles the pan gesture, and I return "false" from my UIGestureRecognizerDelegate so that no other gesture interferes with the pan.
Here's the problematic case:
User touches with 1 finger and starts to pan
User puts a second finger on the screen - The second finger is ignored
The user lifts the first finger while the second is still touching - At this moment my handler gets called with recognizer.state == .ended. The problem is that the location at this moment (which I get by calling recognizer.location(in: recognizer.view)) is the point (0,0).
Am I getting the point the wrong way? It seems that since the second finger is totally ignored, the first finger should behave like a regular pan, and I should get the location when state == .ended.
I just tested it: locations are no longer available once the recognizer has ended (I used the following):
recognizer.location(ofTouch: 0, in: recognizer.view)
I would add a variable that tracks the last known point during .began and .changed (data does come back in those states), then use that data in .ended.
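A minimal sketch of that approach, assuming a plain action handler (the property and handler names are mine):

// Illustrative only: cache the location while it is still valid.
var lastKnownPanLocation: CGPoint = .zero

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    switch recognizer.state {
    case .began, .changed:
        // The location is reliable here, so remember it.
        lastKnownPanLocation = recognizer.location(in: recognizer.view)
    case .ended, .cancelled:
        // recognizer.location(in:) may report (0, 0) by now; use the cached point instead.
        print("pan finished at \(lastKnownPanLocation)")
    default:
        break
    }
}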
EDIT: Getting the data out with subclassing
import UIKit
// Required in Swift to override touchesEnded in a gesture recognizer subclass.
import UIKit.UIGestureRecognizerSubclass

class pgr: UIPanGestureRecognizer {
    var myTouch: Set<UITouch>!

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        // Keep a reference to the ending touches before handing them to super.
        myTouch = touches
        super.touchesEnded(touches, with: event)
    }
}
Then back in your implementation
if sender.state == .ended {
    print(sender.myTouch.first?.location(in: sender.view))
}
My question: Is there a way to adjust the "sensitivity" of UIPanGestureRecognizer so that it turns on 'sooner', i.e. after moving a smaller number of 'pixels'?
I have a simple app with a UIImageView, and pinch and pan gesture recognizers tied to this so that the user can zoom in and draw on the image by hand. Works fine.
However, I notice the stock UIPanGestureRecognizer doesn't return a value of UIGestureRecognizerState.Changed until the user's gesture has moved about 10 pixels.
Example: Here's a screenshot showing several lines that I've attempted to draw shorter & shorter, and there is a noticeable finite length below which no line gets drawn because the pan gesture recognizer never changes state.
[Screenshot: IllustrationOfProgressivelyShorterLines.png]
...i.e., to the right of the yellow line, I was still trying to draw, and my touches were being recognized as touchesMoved events, but the UIPanGestureRecognizer wasn't firing its own "Moved" event and thus nothing was getting drawn.
(Note/clarification: That image takes up the entirety of my iPad's screen, so my finger is physically moving more than an inch even in the cases where no state change occurs to the recognizer. It's just that we're 'zoomed in' in terms of the transformation generated by the pinch gesture recognizer, so a few 'pixels' of the image take up a significant amount of the screen.)
This is not what I want. Any ideas on how to fix it?
Maybe some 'internal' parameter of UIPanGestureRecognizer I could get at if I sub-classed it or some such? I thought I'd try to sub-class the recognizer in a manner such as...
class BetterPanGestureRecognizer: UIPanGestureRecognizer {
    var initialTouchLocation: CGPoint!

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesBegan(touches, withEvent: event)
        initialTouchLocation = touches.first!.locationInView(view)
        print("pan: touch begin detected")
        print(self.state.hashValue) // this lets me check the state
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesMoved(touches, withEvent: event)
        print("pan: touch move detected")
        print(self.state.hashValue) // this remains at the "began" value until you get beyond about 10 pixels
        let some_criterion = (touches.first!.isEqual(something) && event.isEqual(somethingElse))
        if (some_criterion) {
            self.state = UIGestureRecognizerState.Changed
        }
    }
}
...but I'm not sure what to use for some_criterion, etc.
Any suggestions?
Other alternatives that could work, but that I'd rather not have to do:
I could simply attach my UIPanGestureRecognizer to some parent, non-zoomed view, and then use affine transforms and such to remap the points of the pan touches onto the respective parts of the image. So why am I not doing that? Because the code is written so that lots of other objects hang off the image view and they all get the same gesture recognizers, and everything works just great without my having to keep track of anything (e.g. affine transformations), and the problem only shows up if you're really, really zoomed in.
I could abandon UIPanGestureRecognizer and effectively just write my own using touchesBegan and touchesMoved (which is kind of what I'm doing), however I like how UIPanGestureRecognizer differentiates itself from, say, pinch events, in a way that I don't have to worry about coding up myself.
I could just specify some maximum zoom beyond which the user can't go. This fails to implement what I'm going for, i.e. I want to allow for a fine-detail level of manipulation.
Thanks.
[Will choose your answer over mine (i.e., the following) if merited, so I won't 'accept' this answer just yet.]
Got it. The basic idea of the solution is to change the state whenever touches are moved, but use the delegate method regarding simultaneous gesture recognizers so as not to "lock" out any pinch (or rotation) gesture. This will allow for one- and/or multi-fingered panning, as you like, with no 'conflicts'.
This, then, is my code:
class BetterPanGestureRecognizer: UIPanGestureRecognizer, UIGestureRecognizerDelegate {
    var initialTouchLocation: CGPoint!

    override init(target: AnyObject?, action: Selector) {
        super.init(target: target, action: action)
        self.delegate = self
    }

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesBegan(touches, withEvent: event)
        initialTouchLocation = touches.first!.locationInView(view)
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesMoved(touches, withEvent: event)
        // Promote the gesture as soon as any movement arrives, instead of
        // waiting for the built-in ~10 pt threshold.
        if UIGestureRecognizerState.Possible == self.state {
            self.state = UIGestureRecognizerState.Changed
        }
    }

    func gestureRecognizer(_: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWithGestureRecognizer: UIGestureRecognizer) -> Bool {
        // Allow pinch/rotation to run simultaneously, but don't pass pans through.
        if !(shouldRecognizeSimultaneouslyWithGestureRecognizer is UIPanGestureRecognizer) {
            return true
        } else {
            return false
        }
    }
}
Generally, having that shouldRecognizeSimultaneouslyWithGestureRecognizer delegate method always return true is what many people will want. I make the delegate return false if the other recognizer is another pan, just because I noticed that without that logic (i.e., returning true no matter what), it was "passing through" pan gestures to underlying views, and I didn't want that. You may just want to have it return true no matter what. Cheers.
Swift 5 + small improvement
I had a case where the accepted solution conflicted with basic taps on a toolbar that also had this BetterPanGestureRecognizer attached, so I added a minimum horizontal offset that has to be exceeded before the state changes to .changed.
import UIKit
// Required in Swift to override the touches… methods and assign to `state`.
import UIKit.UIGestureRecognizerSubclass

class BetterPanGestureRecognizer: UIPanGestureRecognizer {
    private var initialTouchLocation: CGPoint?
    private let minHorizontalOffset: CGFloat = 5

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesBegan(touches, with: event)
        self.initialTouchLocation = touches.first?.location(in: self.view)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)
        // Only promote the gesture once the finger has moved at least
        // minHorizontalOffset points horizontally from where it landed.
        if self.state == .possible,
           abs((touches.first?.location(in: self.view).x ?? 0) - (self.initialTouchLocation?.x ?? 0)) >= self.minHorizontalOffset {
            self.state = .changed
        }
    }
}
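For reference, a hedged usage sketch with the Swift 5 subclass above; `imageView` and the drawing steps are my own placeholders:

let pan = BetterPanGestureRecognizer(target: self, action: #selector(handleDraw(_:)))
imageView.addGestureRecognizer(pan)

@objc func handleDraw(_ g: UIPanGestureRecognizer) {
    let point = g.location(in: imageView)
    switch g.state {
    case .began, .changed:
        // Extend the current stroke to `point`; movement past the small threshold
        // arrives promptly because the subclass promotes its own state in touchesMoved.
        break
    case .ended, .cancelled, .failed:
        // Close out the stroke.
        break
    default:
        break
    }
}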
Hi, I am currently moving a sprite by checking in touchesBegan whether a button was pressed, and then updating the position in update. I am doing this so that the user does not need to keep pressing the move up/down/left/right button over and over. The problem is that sometimes the sprite does not stop moving, which I'm sure is due to this. Does anyone know a better solution? For brevity I will show only the way I am handling the up button.
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    for touch: AnyObject in touches {
        // Get the location of the touch in this scene
        let location = touch.locationInNode(self)
        // Check if the location of the touch is within the button's bounds
        if upButton.containsPoint(location) {
            upButtonPressed = true
        }
    }
}

override func update(currentTime: CFTimeInterval) {
    if upButtonPressed == true {
        ball.position.y += 3
    }
}
I kept it simple here, but I do have all the conditions to stop movement in my code. I am simply wondering whether there is an easier way to do this, maybe with a long press gesture recognizer?
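For context, a hedged sketch of the other half of the flag-based approach described above, in the same Swift style as the question's code; the asker says they already have equivalent stop conditions, so this is purely illustrative:

override func touchesEnded(touches: NSSet, withEvent event: UIEvent) {
    // Clear the flag so update() stops moving the ball; doing the same in
    // touchesCancelled helps avoid the "sprite never stops" case when a
    // touch is interrupted rather than ended normally.
    upButtonPressed = false
}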
I have a view with a tap gesture recognizer. A subview of this view is an instance of my custom class, which inherits from UIControl. I am having an issue where the UIControl subclass will sometimes allow touch events to pass through to the parent view when it shouldn't.
Within the UIControl subclass, I have overridden these functions (code is in Swift)
override func beginTrackingWithTouch(touch: UITouch, withEvent event: UIEvent) -> Bool
{
    return true
}

override func continueTrackingWithTouch(touch: UITouch, withEvent event: UIEvent) -> Bool
{
    // The code here moves this UIControl so its center is at the touchpoint
    return true
}

override func endTrackingWithTouch(touch: UITouch, withEvent event: UIEvent)
{
    // Something important happens here!
}
This system works just fine if the user touches down within the UIControl, drags the control around in both X and Y directions, and then lifts off the screen. In this case, all three of these functions are called, and the "something important" happens.
However, if the user touches down within the UIControl, drags the control around only in the X direction, and then lifts off the screen, we have a problem. The first two functions are called, but when the touchpoint lifts off the screen, the tap gesture recognizer is called, and endTrackingWithTouch is not called.
How do I make sure that endTrackingWithTouch is always called?
I fixed this in a way that I consider to be a hack, but there's really no alternative, given how UIGestureRecognizer works.
What was happening was that the tap gesture recognizer was canceling the control's tracking and registering a tap gesture. This was because when I was dragging horizontally, I just happened to be dragging short distances, which gets interpreted as a tap gesture.
The tap gesture recognizer must be disabled while the UIControl is tracking:
override func beginTrackingWithTouch(touch: UITouch, withEvent event: UIEvent) -> Bool
{
    pointerToSuperview.pauseGestureRecognizer()
    return true
}

override func continueTrackingWithTouch(touch: UITouch, withEvent event: UIEvent) -> Bool
{
    // The code here moves this UIControl so its center is at the touchpoint
    return true
}

override func endTrackingWithTouch(touch: UITouch, withEvent event: UIEvent)
{
    // Something important happens here!
    pointerToSuperview.resumeGestureRecognizer()
}

override func cancelTrackingWithEvent(event: UIEvent?)
{
    pointerToSuperview.resumeGestureRecognizer()
}
In the superview's class:
func pauseGestureRecognizer()
{
    tapGestureRecognizer.enabled = false
}

func resumeGestureRecognizer()
{
    tapGestureRecognizer.enabled = true
}
This works because I'm not dealing with multitouch (it's OK for me not to receive tap touch events while tracking touches with the UIControl).
Ideally, the control shouldn't have to tell the view to pause the gesture recognizer - the gesture recognizer shouldn't be meddling with the control's touches to begin with! However, even setting the gesture recognizer's cancelsTouchesInView to false cannot prevent this.
There's a way to fix this that's nicely self-contained: instantiate your own UITapGestureRecognizer and attach it to your custom control, e.g. in Objective-C,
_tapTest = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapped:)];
_tapTest.numberOfTapsRequired = 1;
[self addGestureRecognizer:_tapTest];
and then implement the tapped action handler to process the tap:
- (void)tapped:(UITapGestureRecognizer *)recognizer {...}
In my case, I handle tapped the same as endTrackingWithTouch:withEvent:; your mileage may vary.
This way, you get the tap before any superview can snatch it, and you don't have to worry about the view hierarchy behind your control.
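Since the rest of this page is Swift, here is a hedged Swift sketch of the same self-contained idea, assuming a UIControl subclass (the property and method names simply mirror the Objective-C above):

private var tapTest: UITapGestureRecognizer!

func setUpTapRecognizer() {
    // Attach a private tap recognizer to the control itself, so taps are
    // handled here before any superview recognizer can claim them.
    tapTest = UITapGestureRecognizer(target: self, action: #selector(tapped(_:)))
    tapTest.numberOfTapsRequired = 1
    addGestureRecognizer(tapTest)
}

@objc private func tapped(_ recognizer: UITapGestureRecognizer) {
    // Handle the tap, e.g. the same way as endTracking.
}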
When a UIControl is moved while tracking touches, it might cancel its tracking. Try overriding cancelTrackingWithEvent and see if this is the case. If you do see the cancel, you're going to have to track your touches in an unmoving view somewhere in the parent hierarchy of this control.
I know this is old, but I ran into the same problem. Check whether one of your superviews has gesture recognizers, and deactivate them when you need to use the UIControl.
I actually ended up changing the superview of the UIControl to the main window to avoid these conflicts (because it was in a popup).