I have a view with a tap gesture recognizer. A subview of this view is an instance of my custom class, which inherits from UIControl. I am having an issue where the UIControl subclass will sometimes allow touch events to pass through to the parent view when it shouldn't.
Within the UIControl subclass, I have overridden these functions (code is in Swift)
override func beginTrackingWithTouch(touch: UITouch, withEvent event: UIEvent) -> Bool
{
    return true
}

override func continueTrackingWithTouch(touch: UITouch, withEvent event: UIEvent) -> Bool
{
    // The code here moves this UIControl so its center is at the touchpoint
    return true
}

override func endTrackingWithTouch(touch: UITouch, withEvent event: UIEvent)
{
    // Something important happens here!
}
This system works just fine if the user touches down within the UIControl, drags the control around in both X and Y directions, and then lifts off the screen. In this case, all three of these functions are called, and the "something important" happens.
However, if the user touches down within the UIControl, drags the control around only in the X direction, and then lifts off the screen, we have a problem. The first two functions are called, but when the touchpoint lifts off the screen, the tap gesture recognizer fires instead, and endTrackingWithTouch is not called.
How do I make sure that endTrackingWithTouch is always called?
I fixed this in a way that I consider to be a hack, but there's really no alternative, given how UIGestureRecognizer works.
What was happening was that the tap gesture recognizer was canceling the control's tracking and registering a tap gesture. This was because when I was dragging horizontally, I just happened to be dragging short distances, which were interpreted as a tap gesture.
The tap gesture recognizer must be disabled while the UIControl is tracking:
override func beginTrackingWithTouch(touch: UITouch, withEvent event: UIEvent) -> Bool
{
    pointerToSuperview.pauseGestureRecognizer()
    return true
}

override func continueTrackingWithTouch(touch: UITouch, withEvent event: UIEvent) -> Bool
{
    // The code here moves this UIControl so its center is at the touchpoint
    return true
}

override func endTrackingWithTouch(touch: UITouch, withEvent event: UIEvent)
{
    // Something important happens here!
    pointerToSuperview.resumeGestureRecognizer()
}

override func cancelTrackingWithEvent(event: UIEvent?)
{
    pointerToSuperview.resumeGestureRecognizer()
}
In the superview's class:
func pauseGestureRecognizer()
{
    tapGestureRecognizer.enabled = false
}

func resumeGestureRecognizer()
{
    tapGestureRecognizer.enabled = true
}
This works because I'm not dealing with multitouch (it's OK for me not to receive tap touch events while tracking touches with the UIControl).
Ideally, the control shouldn't have to tell the view to pause the gesture recognizer - the gesture recognizer shouldn't be meddling with the control's touches to begin with! However, even setting the gesture recognizer's cancelsTouchesInView to false cannot prevent this.
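If you'd rather not have the control reach into its superview, one less intrusive variant worth trying is to make the tap recognizer's owner its delegate and refuse touches that start on a control, via gestureRecognizer(_:shouldReceive:). This is only a sketch in current Swift syntax; ContainerViewController and handleTap(_:) are illustrative names, and I haven't verified it against the exact setup described above:

import UIKit

class ContainerViewController: UIViewController, UIGestureRecognizerDelegate {
    let tapGestureRecognizer = UITapGestureRecognizer()

    override func viewDidLoad() {
        super.viewDidLoad()
        tapGestureRecognizer.addTarget(self, action: #selector(handleTap(_:)))
        tapGestureRecognizer.delegate = self // lets us veto individual touches
        view.addGestureRecognizer(tapGestureRecognizer)
    }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        // Handle the tap on the background view
    }

    // Don't let the tap recognizer receive touches that start on a UIControl,
    // so it never cancels the control's tracking.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldReceive touch: UITouch) -> Bool {
        return !(touch.view is UIControl)
    }
}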
There's a way to fix this that's nicely self-contained: instantiate your own TapGestureRecognizer and attach it to your custom control, e.g. in Objective-C,
_tapTest = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapped:)];
_tapTest.numberOfTapsRequired = 1;
[self addGestureRecognizer:_tapTest];
and then implement the tapped action handler to process the tap:
- (void)tapped:(UITapGestureRecognizer *)recognizer {...}
In my case, I handle tapped the same as endTrackingWithTouch:withEvent:; your mileage may vary.
This way, you get the tap before any superview can snatch it, and you don't have to worry about the view hierarchy behind your control.
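The same idea in Swift would look roughly like this (just a sketch; MyControl is an illustrative name, and the tap handler simply stands in for whatever your endTracking work is):

import UIKit

class MyControl: UIControl {
    private let tapTest = UITapGestureRecognizer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        configureTapRecognizer()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        configureTapRecognizer()
    }

    private func configureTapRecognizer() {
        tapTest.addTarget(self, action: #selector(tapped(_:)))
        tapTest.numberOfTapsRequired = 1
        addGestureRecognizer(tapTest)
    }

    @objc private func tapped(_ recognizer: UITapGestureRecognizer) {
        // Do the same work here as in endTracking(_:with:)
    }
}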
When a UIControl is moved while tracking touches, it might cancel its tracking. Try overriding cancelTrackingWithEvent and see if this is the case. If you do see the cancel, you're going to have to track your touches in an unmoving view somewhere in the parent hierarchy of this control.
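A quick way to check for the cancel is a logging override in the control. A minimal sketch, shown with the current Swift method name (in Swift 2 it is cancelTrackingWithEvent(_:)):

override func cancelTracking(with event: UIEvent?) {
    super.cancelTracking(with: event)
    print("tracking was cancelled, event: \(String(describing: event))")
}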
I know this is old, but I ran into the same problem. Check whether one of your superviews has gesture recognizers, and deactivate them while you need to use the UIControl.
I actually ended up changing the superview of the UIControl to the main window to avoid these conflicts (because it was in a popup).
Related
I'm trying to use a gesture recognizer to allow the user to resize a view on the screen by dragging it with one finger... as such, I want to track all changes to the touch. I initially tried to use a PanGestureRecognizer for this, but I've run into an issue when the user touches the view with multiple fingers. I only want to track the first touch (i.e. the first finger the user places on the screen during a gesture), but I can't find a way to prevent additional touches from interfering with my ability to track the first. My initial solution was to simply set
recognizer.maximumNumberOfTouches = 1
However, this causes the entire gesture to be cancelled in the event of a second touch (I want to continue tracking the first touch in this case). But, of course, not setting this value to 1 causes the gesture to actually track multiple touches. How can I make a gesture recognizer that tracks all changes to only the first touch in the gesture, and isn't cancelled by additional touches on the screen?
From your comments and answer, it sounds like your use of a pan gesture recognizer was always wrong and you were doing something wrong in your action handler. No correct action handler for a pan gesture recognizer would ever use location(in:) for anything. location(in:) gives you a centroid of all touches, which is exactly what you say you don't want — and in any case there is no need to consult it.
If your goal is to drag a view, you use translation(in:) (see the sketch below).
If your goal is to track the actual location of a particular touch, you use location(ofTouch:in:).
If you discover that a touch is being delivered and you don't want it tracked, call ignore(_:for:).
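As a concrete illustration of the first two points, a pan action handler that drags a view with translation(in:) might look like the sketch below. CanvasViewController and handlePan(_:) are illustrative names, and the recognizer is assumed to be attached to the view being dragged:

import UIKit

class CanvasViewController: UIViewController {
    // Attach to the view you want to drag:
    // draggedView.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        guard let dragged = recognizer.view else { return }

        // Move the view by the accumulated translation, then zero it out so the
        // next callback reports only the incremental movement.
        let t = recognizer.translation(in: dragged.superview)
        dragged.center = CGPoint(x: dragged.center.x + t.x, y: dragged.center.y + t.y)
        recognizer.setTranslation(.zero, in: dragged.superview)

        // To follow one specific finger instead of the centroid, read
        // location(ofTouch:in:) with a fixed touch index rather than location(in:).
        // ignore(_:for:) has to be called where you have the UITouch and UIEvent,
        // e.g. in a recognizer subclass's touchesBegan, as in the answer below.
    }
}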
Here is a UIGestureRecognizer subclass that will track all changes to the first touch the user places on the screen at the start of a gesture, and will ignore all other fingers without cancelling the gesture. Note that if you're using functions/properties of the gesture recognizer (such as location(in:)), you'll probably have to override those as well (see the sketch after the class).
import UIKit
import UIKit.UIGestureRecognizerSubclass // exposes the subclass hooks (writable `state`, touches overrides)

class FirstTouchGestureRecognizer: UIGestureRecognizer {
    private var firstTouch: UITouch? = nil

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        if firstTouch == nil {
            firstTouch = touches.first!
            self.state = .began
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        if firstTouch != nil && touches.contains(firstTouch!) {
            self.state = .changed
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        if firstTouch != nil && touches.contains(firstTouch!) {
            firstTouch = nil
            self.state = .ended
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        // Reset on cancellation as well, so a later gesture can start cleanly.
        if firstTouch != nil && touches.contains(firstTouch!) {
            firstTouch = nil
            self.state = .cancelled
        }
    }
}
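For example, if your action handler calls location(in:), you might override it in the subclass so it reports the tracked touch rather than the centroid of all touches (a sketch, untested):

// Inside FirstTouchGestureRecognizer:
override func location(in view: UIView?) -> CGPoint {
    if let touch = firstTouch {
        return touch.location(in: view)
    }
    return super.location(in: view)
}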
I am in a situation wherein I have a UIView to which I have added a subclassed UIGestureRecognizer. I am using the following code:
import UIKit
import UIKit.UIGestureRecognizerSubclass // needed to set `state` from a subclass

class ButtonGestureRecognizer: UIGestureRecognizer {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        //
        state = .began
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        //
        state = .ended
    }
}
This allows me to capture the touch "down" and touch "up" event, which I am using to scale the UIView larger when it is pressed down and back to its normal size when it is released.
I also need to add a UILongPressGestureRecognizer to this same view, wherein it will also scale up when held down, and return to its normal size when it is released, but perform a different action.
However, the subclassed Gesture Recognizer seems to be preventing the UILongPressGestureRecognizer from working. The only solution I've found so far is to give up my subclassed gesture recognizer and use a UITapGestureRecognizer (with minimum taps required set to one) and a UILongPressGestureRecognizer, but then I give up the ability to properly detect the tap/press began and end states.
Any way around this? Thanks!
There's a protocol called UIGestureRecognizerDelegate, and it has this method:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
If you conform to that protocol and set your object as the delegate of your recognizers, you can return true on that method and your gesture recognizers will work together.
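A sketch of that in Swift (someView stands in for the view being scaled, and the handler names are illustrative):

import UIKit

class ScalingViewController: UIViewController, UIGestureRecognizerDelegate {
    let someView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(someView)

        let press = ButtonGestureRecognizer(target: self, action: #selector(pressed(_:)))
        let longPress = UILongPressGestureRecognizer(target: self, action: #selector(longPressed(_:)))
        press.delegate = self
        longPress.delegate = self
        someView.addGestureRecognizer(press)
        someView.addGestureRecognizer(longPress)
    }

    // Let the custom recognizer and the long press recognize at the same time.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }

    @objc func pressed(_ recognizer: ButtonGestureRecognizer) {
        // Scale up on .began, back down on .ended
    }

    @objc func longPressed(_ recognizer: UILongPressGestureRecognizer) {
        // Perform the long-press action
    }
}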
I am developing a keyboard extension for iOS. On iOS 9 the keys react immediately, except for keys along the left edge of the keyboard. Those react with around a 0.2 second delay. The reason is that the touches are simply delivered with this delay to the UIView that is the root view of my keyboard. On iOS 8 there is no such delay.
My guess is that this delay is caused by some logic that is supposed to recognize the gesture for opening the "running apps screen". That is fine, but the delay on a keyboard is unacceptable. Is there any way to get those events without the delay? Perhaps just setting delaysTouchesBegan to false on some UIGestureRecognizer?
This is for anyone using later versions of iOS (this is working on iOS 9 and 10 for me). My issue was caused by the swipe to go back gesture interfering with my touchesBegan method by preventing it from firing on the very left edge of the screen until either the touch was ended, or the system recognised the movement to not be that of the swipe to go back gesture.
In your viewDidLoad function in your controller, simply put:
self.navigationController?.interactivePopGestureRecognizer?.delaysTouchesBegan = false
The official solution since iOS 11 is to override preferredScreenEdgesDeferringSystemGestures in your UIInputViewController.
https://developer.apple.com/documentation/uikit/uiviewcontroller/2887512-preferredscreenedgesdeferringsys
However, it doesn't seem to work on iOS 13, at least. As far as I can tell, that happens because preferredScreenEdgesDeferringSystemGestures is not honored when overridden inside a UIInputViewController.
When you override this property in a regular view controller, it works as expected:
override var preferredScreenEdgesDeferringSystemGestures: UIRectEdge {
    return [.left, .bottom, .right]
}
That's not the case for UIInputViewController, though.
UPD: It appears gesture recognizers still get the .began state update without the delay. So, instead of following the rather messy solution below, you can add a custom gesture recognizer to handle touch events.
You can quickly test this by adding a UILongPressGestureRecognizer with minimumPressDuration = 0 to your control view.
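A quick sketch of that test (KeyboardViewController and keyView are illustrative names; the handler just distinguishes touch down from touch up):

import UIKit

class KeyboardViewController: UIInputViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let keyView = UIView() // stand-in for one of the keyboard's key views
        view.addSubview(keyView)

        let recognizer = UILongPressGestureRecognizer(target: self, action: #selector(handleTouch(_:)))
        recognizer.minimumPressDuration = 0 // .began fires immediately, even in the deferred edge zone
        keyView.addGestureRecognizer(recognizer)
    }

    @objc func handleTouch(_ recognizer: UILongPressGestureRecognizer) {
        switch recognizer.state {
        case .began:
            break // touch down: apply the key highlight here
        case .ended, .cancelled:
            break // touch up: commit the key press here
        default:
            break
        }
    }
}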
Another solution:
My original workaround was triggering the touch down effects inside hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView?, which is called even when the touches are delayed for the view.
You have to ignore the "real" touch down event when it fires, about 0.4 s later or simultaneously with the touch up inside event. Also, it's probably better to apply this hack only when the tested point lies within the ~20 pt lateral margins.
So, for example, for a view whose width equals the screen width, the implementation may look like this:
let edgeProtectedZoneWidth: CGFloat = 20

override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
    let result = super.hitTest(point, with: event)
    guard result == self else {
        return result
    }
    if point.x < edgeProtectedZoneWidth || point.x > bounds.width - edgeProtectedZoneWidth {
        if !alreadyTriggeredFocus {
            isHighlighted = true
        }
        triggerFocus()
    }
    return result
}

private var alreadyTriggeredFocus: Bool = false

@objc override func triggerFocus() {
    guard !alreadyTriggeredFocus else { return }
    super.triggerFocus()
    alreadyTriggeredFocus = true
}

override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesCancelled(touches, with: event)
    alreadyTriggeredFocus = false
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesEnded(touches, with: event)
    alreadyTriggeredFocus = false
}
...where triggerFocus() is the method you call on the touch down event. Alternatively, you may override touchesBegan(_:with:).
If you have access to the view's window property, you can access these system gesture recognizers and set delaysTouchesBegan to false.
Here's sample code in Swift that does that:
if let window = view.window,
   let recognizers = window.gestureRecognizers {
    recognizers.forEach { r in
        // add condition here to only affect recognizers that you need to
        r.delaysTouchesBegan = false
    }
}
Also relevant: UISystemGateGestureRecognizer and delayed taps near bottom of screen
My question: Is there a way to adjust the "sensitivity" of UIPanGestureRecognizer so that it turns on 'sooner', i.e. after moving a fewer number of 'pixels'?
I have a simple app with a UIImageView, and pinch and pan gesture recognizers tied to this so that the user can zoom in and draw on the image by hand. Works fine.
However, I notice the stock UIPanGestureRecognizer doesn't return a value of UIGestureRecognizerState.Changed until the user's gesture has moved about 10 pixels.
Example: Here's a screenshot showing several lines that I've attempted to draw shorter & shorter, and there is a noticeable finite length below which no line gets drawn because the pan gesture recognizer never changes state.
[Screenshot: IllustrationOfProgressivelyShorterLines.png]
...i.e., to the right of the yellow line, I was still trying to draw, and my touches were being recognized as touchesMoved events, but the UIPanGestureRecognizer wasn't firing its own "Moved" event and thus nothing was getting drawn.
(Note/clarification: That image takes up the entirety of my iPad's screen, so my finger is physically moving more than an inch even in the cases where no state change occurs to the recognizer. It's just that we're 'zoomed in' in terms of the transformation generated by the pinch gesture recognizer, so a few 'pixels' of the image take up a significant amount of the screen.)
This is not what I want. Any ideas on how to fix it?
Maybe some 'internal' parameter of UIPanGestureRecognizer I could get at if I sub-classed it or some such? I thought I'd try to sub-class the recognizer in a manner such as...
class BetterPanGestureRecognizer: UIPanGestureRecognizer {
    var initialTouchLocation: CGPoint!

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesBegan(touches, withEvent: event)
        initialTouchLocation = touches.first!.locationInView(view)
        print("pan: touch begin detected")
        print(self.state.hashValue) // this lets me check the state
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesMoved(touches, withEvent: event)
        print("pan: touch move detected")
        print(self.state.hashValue) // this remains at the "began" value until you get beyond about 10 pixels
        let some_criterion = (touches.first!.isEqual(something) && event.isEqual(somethingElse))
        if (some_criterion) {
            self.state = UIGestureRecognizerState.Changed
        }
    }
}
...but I'm not sure what to use for some_criterion, etc.
Any suggestions?
Other alternatives that could work, but that I'd rather not have to do:

I could simply attach my UIPanGestureRecognizer to some parent, non-zoomed view, and then use affine transforms & such to remap the points of the pan touches onto the respective parts of the image. So why am I not doing that? Because the code is written so that lots of other objects hang off the image view and they all get the same gesture recognizers and... everything works just great without my having to keep track of anything (e.g. affine transformations), and the problem only shows up if you're really, really zoomed in.

I could abandon UIPanGestureRecognizer and effectively just write my own using touchesBegan and touchesMoved (which is kind of what I'm doing); however, I like how UIPanGestureRecognizer differentiates itself from, say, pinch events, in a way that I don't have to worry about coding up myself.

I could just specify some maximum zoom beyond which the user can't go. This fails to implement what I'm going for, i.e. I want to allow for a fine-detail level of manipulation.
Thanks.
[Will choose your answer over mine (i.e., the following) if merited, so I won't 'accept' this answer just yet.]
Got it. The basic idea of the solution is to change the state whenever touches are moved, but use the delegate method regarding simultaneous gesture recognizers so as not to "lock" out any pinch (or rotation) gesture. This will allow for one- and/or multi-fingered panning, as you like, with no 'conflicts'.
This, then, is my code:
import UIKit
import UIKit.UIGestureRecognizerSubclass // needed to set `state` from a subclass

class BetterPanGestureRecognizer: UIPanGestureRecognizer, UIGestureRecognizerDelegate {
    var initialTouchLocation: CGPoint!

    override init(target: AnyObject?, action: Selector) {
        super.init(target: target, action: action)
        self.delegate = self
    }

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesBegan(touches, withEvent: event)
        initialTouchLocation = touches.first!.locationInView(view)
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesMoved(touches, withEvent: event)
        if UIGestureRecognizerState.Possible == self.state {
            self.state = UIGestureRecognizerState.Changed
        }
    }

    func gestureRecognizer(_: UIGestureRecognizer,
        shouldRecognizeSimultaneouslyWithGestureRecognizer otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        if !(otherGestureRecognizer is UIPanGestureRecognizer) {
            return true
        } else {
            return false
        }
    }
}
Generally, having that shouldRecognizeSimultaneouslyWithGestureRecognizer delegate method always return true is what many people will want. I make the delegate return false if the other recognizer is another pan recognizer, just because I noticed that without that logic (i.e., making the delegate return true no matter what) it was "passing through" pan gestures to underlying views, and I didn't want that. You may just want to have it return true no matter what. Cheers.
Swift 5 + small improvement
I had a case where the accepted solution conflicted with basic taps on a toolbar that also had this BetterPanGestureRecognizer, so I added a minimum horizontal offset parameter that must be exceeded before the state changes to .changed:
import UIKit
import UIKit.UIGestureRecognizerSubclass // needed to set `state` from a subclass

class BetterPanGestureRecognizer: UIPanGestureRecognizer {
    private var initialTouchLocation: CGPoint?
    private let minHorizontalOffset: CGFloat = 5

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesBegan(touches, with: event)
        self.initialTouchLocation = touches.first?.location(in: self.view)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)
        if self.state == .possible,
           abs((touches.first?.location(in: self.view).x ?? 0) - (self.initialTouchLocation?.x ?? 0)) >= self.minHorizontalOffset {
            self.state = .changed
        }
    }
}
I have an iOS game, and during loading screens any touches the user makes seem to be buffered up, so once the loading is done (it can take a few seconds), I get the touch events.
Is there a way for me to discard all touches?
OK, this is a really old question, but anyone stumbling upon it can simply follow either of the two approaches below:
Approach 1.
DiscardTouchView.addGestureRecognizer(UITapGestureRecognizer())
Basically, you add an empty gesture recognizer, so tapping that view does nothing.
Approach 2.
In this case you don't add an empty gesture recognizer to the view. If DiscardTouchView is a subview of SomeParentView, which has a UIGestureRecognizer object attached, you can get that recognizer and tell it to ignore the touch.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let tappedView = touches.first?.view else { return }
    if tappedView == DiscardTouchView {
        guard let recognizer = touches.first?.gestureRecognizers?.first else { return }
        recognizer.ignore(touches.first!, for: event!)
    }
}
Note that setting userInteractionEnabled to false doesn't discard the touch event; it actually passes it from the subview to the superview. Setting userInteractionEnabled to true for a simple UIView does the same, since the view doesn't handle the touches and they are forwarded up the responder chain.