How can I know when the finger is down and when it is up with UITapGestureRecognizer?
The documentation says I should only treat UIGestureRecognizerStateEnded as a tap, which suggests there should be a UIGestureRecognizerStateBegan when the finger goes down, but all I ever get is UIGestureRecognizerStateEnded.
The code I use to register the recognizer is:
[[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tap:)]
UITapGestureRecognizer is a discrete gesture recognizer, and therefore never transitions to the began or changed states. From the UIGestureRecognizer Class Reference:
Discrete gestures transition from Possible to either Recognized
(UIGestureRecognizerStateRecognized) or Failed
(UIGestureRecognizerStateFailed), depending on whether they
successfully interpret the gesture or not. If the gesture recognizer
transitions to Recognized, it sends its action message to its target.
(Remembering of course that UIGestureRecognizerStateRecognized == UIGestureRecognizerStateEnded).
The docs are saying that you should check the state of a tap gesture recognizer to see that it is in its ended state, before you fire your code to say that it has been recognized. They are not saying that the tap gesture actually transitions to the began or changed states (although I admit that the docs are a little misleading in the language used!).
If you want to detect the finger-down event while using a tap gesture recognizer, I would recommend just overriding touchesBegan:withEvent:, since that is what you are really after anyway.
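For example, a minimal sketch, assuming a hypothetical UIView subclass that the tap recognizer is attached to:
// Hypothetical UIView subclass; the tap recognizer can still be attached to it
// and will fire its action on finger up, while touchesBegan: fires on finger down.
@interface TapAwareView : UIView
@end

@implementation TapAwareView

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    [super touchesBegan:touches withEvent:event];
    NSLog(@"Finger down");
}

@end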
Alternatively, you could implement the delegate method -gestureRecognizer:shouldReceiveTouch:, which is called when the touch first arrives:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    NSLog(@"Hello from press down");
    return YES;
}
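Note that for this delegate callback to fire, the recognizer's delegate has to be set. A minimal sketch (the view and selector names are illustrative):
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tap:)];
tap.delegate = self; // self must conform to UIGestureRecognizerDelegate
[self.view addGestureRecognizer:tap];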
Related
If you assign a UITapGestureRecognizer to a UIView, UIGestureRecognizerStateBegan never appears when the user touches the view.
// Tap
_tapGestureRecognizer =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(tap:)];
[_someView addGestureRecognizer:_tapGestureRecognizer];
Instead the recognizer jumps straight to UIGestureRecognizerStateEnded when the user performs the tap.
I have to change that view to a UIButton and listen for the UIControlEventTouchDown event instead.
_someButton = [[UIButton alloc] init];
[_someButton addTarget:self action:#selector(touchDown:) forControlEvents:UIControlEventTouchDown];
[self addSubview: _someButton];
I don't like changing the UIView to a UIButton just for this.
Can I use the UITapGestureRecognizer instead?
Let me start by saying that the UITapGestureRecognizer docs clearly tell you to expect a callback for all states.
For gesture recognition, the specified number of fingers must tap the view a specified number of times. Although taps are discrete gestures, they are discrete for each state of the gesture recognizer. The system sends the associated action message when the gesture begins and then again for each intermediate state until (and including) the ending state of the gesture. Code that handles tap gestures should test for the state of the gesture, for example:
func handleTap(sender: UITapGestureRecognizer) {
    if sender.state == .ended {
        // handling code
    }
}
However, it makes little to no sense (especially in the case of a single tap recognizer). You touched a view (that had the tap gesture added to it), but you haven't yet lifted your finger, moved it, etc. The system can't know at the time of the .touchDown event that this interaction is going to turn into a successful recognition of a tap (which requires lifting the finger up).
Essentially, a UITapGestureRecognizer (for a single-touch tap) is a .touchDown + .touchUp combination. If anything else happens after .touchDown, like a drag (.touchDragInside or .touchDragExit), it may instead lead to the successful recognition of a pan gesture (table view scrolling, etc.).
You can think of UITapGestureRecognizer as roughly equivalent to the .touchUpInside event of a button. A .touchUpInside handler isn't called for the .touchDown event; you only receive that event by registering for it explicitly.
Why do the docs say so?
Maybe the system is able to identify the .began state in other scenarios, such as:
a multi-tap gesture - double/triple tap (see UITapGestureRecognizer.numberOfTapsRequired)
a multi-touch tap - 2/3 finger tap (see UITapGestureRecognizer.numberOfTouchesRequired)
You would have to test those scenarios yourself if you want to know more.
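If you want to test that, something along these lines could be used (untested; the method names are illustrative):
- (void)addDoubleTapRecognizer
{
    UITapGestureRecognizer *doubleTap =
        [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleDoubleTap:)];
    doubleTap.numberOfTapsRequired = 2; // see UITapGestureRecognizer.numberOfTapsRequired
    [self.view addGestureRecognizer:doubleTap];
}

- (void)handleDoubleTap:(UITapGestureRecognizer *)sender
{
    // Log every state the recognizer reports, to see whether .began ever shows up.
    NSLog(@"double tap state: %ld", (long)sender.state);
    if (sender.state == UIGestureRecognizerStateEnded) {
        // handle the recognized double tap here
    }
}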
EDIT: I will make it clearer what I am trying to achieve.
I have some annotations on a MKMapView that I want to make easy to drag. The standard Apple way to move the annotations is to tap once then quickly tap and hold and drag. The users of the app have complained that it is too difficult to do this.
So technically what I'd like to do is to use a pan gesture. The problem is that MKMapView is also using the pan gesture to move the map about.
What I would like to do is this: when the user does the pan gesture, check whether the pan starts really close to an annotation; if so, let the pan gesture handler move the annotation. I have this part working.
But if the pan gesture does not start close to an annotation, then pass the gesture on to MKMapView so it can be handled as normal.
// EDIT END
I've a method to handle a pan gesture. This gets called as I would expect when there is a pan gesture on the MKMapView. Sometimes I would not want to handle the gesture in my method but to pass the gesture through to the MKMapView to pan/drag the map around like normal.
Here is an outline of what I've got so far.
The pan gesture is being handled by the method:
-(void)panGesture:(UIPanGestureRecognizer*)sender
Depending on some logic I would like to pass this gesture through to the MKMapView (self.mapView).
Can anybody share the code to do this please?
I tried [self.mapView gestureRecognizerShouldBegin:sender]; but nothing happened from this call.
- (void)addPanGesture
{
    self.panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panGesture:)];
    self.panGesture.delegate = self;
    [self.panGesture setMinimumNumberOfTouches:1];
    [self.panGesture setMaximumNumberOfTouches:1];
    [self.mapView addGestureRecognizer:self.panGesture];
}
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldBeRequiredToFailByGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    BOOL result = NO;
    if ((gestureRecognizer == self.panGesture) && [[otherGestureRecognizer view] isDescendantOfView:[gestureRecognizer view]])
    {
        result = YES;
    }
    return result;
}
- (void)panGesture:(UIPanGestureRecognizer *)sender
{
    // some logic to see if we will handle the gesture in this method or pass the gesture on to the MKMapView
    return;
}
The standard Apple way to move the annotations is to tap once then quickly tap and hold and drag. The users of the app have complained that it is too difficult to do this.
I don't think so.. :(
Here is my reasoning: it is actually a bad idea to use a pan gesture when Apple already provides a default drag-and-drop facility. That said, if you want to go the route described in your question, you will need to add a separate pan gesture recognizer to each annotation view displayed on the map. Give each annotation view its own tag and point all the recognizers at the same gesture handler, so you can easily manage them by tag. When the end user taps or touches an annotation, the handler is called, you can identify the touched annotation by its tag, and you can run your drag-and-drop code for that annotation only. I'm not sure, but this might solve your problem.
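A rough, untested sketch of that idea, hooking into mapView:didAddAnnotationViews: (the handler name dragAnnotation: is illustrative):
- (void)mapView:(MKMapView *)mapView didAddAnnotationViews:(NSArray *)views
{
    NSInteger tag = 0;
    for (MKAnnotationView *annotationView in views) {
        UIPanGestureRecognizer *drag =
            [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(dragAnnotation:)];
        annotationView.tag = tag++; // identify the annotation view later by its tag
        [annotationView addGestureRecognizer:drag];
    }
}

- (void)dragAnnotation:(UIPanGestureRecognizer *)sender
{
    // sender.view.tag tells you which annotation view is being dragged;
    // move only that annotation here.
    NSLog(@"Dragging annotation view with tag %ld", (long)sender.view.tag);
}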
I agree completely with iPatel: If possible one should use the standard Apple way, since users are used to it.
But if you really want to implement you own solution, maybe the following might work (not tested):
The MapView has a built-in pan gesture recognizer (called here builtInPGR). I assume you added your own pan gesture recognizer (called here ownPGR) to this MapView.
Now the following function in the UIGestureRecognizerDelegate protocol controls if two gesture recognisers should recognise simultaneously (see the docs):
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                       shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool
The default implementation returns false.
You mentioned that you can drag your annotations using ownPGR. This means that builtInPGR does not recognise the pan gesture simultaneously with ownPGR.
If you now implement this delegate function and return true, ownPGR and builtInPGR should work simultaneously: builtInPGR would move the map, and ownPGR would check whether a region close to an annotation has been touched and, if so, would move the annotation.
This delegate function also gives you a pointer to builtInPGR, the otherGestureRecognizer.
So if ownPGR decides to drag an annotation because the pan started close to it, you can set the isEnabled property of builtInPGR to false. This makes builtInPGR transition to a cancelled state, and the map is no longer dragged.
Of course, you have to set builtInPGR.isEnabled back to true when ownPGR finishes dragging the annotation.
Once more: I don't know if this works, but it might. And I strongly recommend using the standard approach!
EDIT: I am sorry, I just realised that you wanted the answer in Obj-C. But, using the docs (just switch to Obj-C) it should be easy to test it in Obj-C.
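An untested Obj-C sketch of the idea above (mapPanGesture is a hypothetical property used to remember the built-in recognizer, and annotationHitTest: is a hypothetical helper that decides whether the pan started close enough to an annotation):
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    // Remember the map's built-in pan recognizer so it can be disabled later.
    if (gestureRecognizer == self.panGesture && [otherGestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]]) {
        self.mapPanGesture = (UIPanGestureRecognizer *)otherGestureRecognizer;
    }
    return YES;
}

- (void)panGesture:(UIPanGestureRecognizer *)sender
{
    CGPoint point = [sender locationInView:self.mapView];

    if (sender.state == UIGestureRecognizerStateBegan && [self annotationHitTest:point]) {
        // We are dragging an annotation: stop the map itself from panning.
        self.mapPanGesture.enabled = NO;
    }

    // ... move the annotation here ...

    if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled) {
        // Re-enable the built-in recognizer once the drag is over.
        self.mapPanGesture.enabled = YES;
    }
}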
I want to implement back navigation using a long press followed by a swipe to the left, without lifting the finger, but the swipe gesture isn't recognised if I don't lift the finger after the long press.
I also implemented the following delegate method, but I'm not getting the desired result. Any thoughts?
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    if (gestureRecognizer == _longPress && otherGestureRecognizer == _swipe) {
        return YES;
    }
    if (gestureRecognizer == _swipe && otherGestureRecognizer == _longPress) {
        return YES;
    }
    return NO;
}
edit:
- The long press gesture fires a method which changes the background color of the current UIViewController's view (I added it just to see whether it fires).
- The swipe gesture fires a method that calls popViewControllerAnimated:.
Don't use 2 different gesture recognisers, because this is really 1 gesture. You should create a custom gesture recognizer subclass to encode your logic, so it's a single logical gesture for you to add and for the user to execute.
Inside your gesture recognizer I'd keep a small state machine so you know when the touch starts, when the long press time is up, and whether the user has actually swiped far enough. In each state you're only looking for one thing to happen; if anything else happens, you know it's a failure and the gesture can fail out.
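A rough, untested sketch of such a subclass (the class name, thresholds, and timings are all illustrative). It stays a discrete gesture: it fires its action once the finger has held long enough and then swiped far enough to the left, and fails otherwise.
#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>

@interface LongPressSwipeLeftGestureRecognizer : UIGestureRecognizer
@end

@implementation LongPressSwipeLeftGestureRecognizer {
    BOOL _holdSatisfied;
    CGPoint _startPoint;
    NSTimeInterval _touchDownTime;
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    [super touchesBegan:touches withEvent:event];
    UITouch *touch = [touches anyObject];
    _startPoint = [touch locationInView:self.view];
    _touchDownTime = touch.timestamp;
    _holdSatisfied = NO;
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    [super touchesMoved:touches withEvent:event];
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view];
    CGFloat dx = point.x - _startPoint.x;
    CGFloat dy = point.y - _startPoint.y;

    if (!_holdSatisfied) {
        if (touch.timestamp - _touchDownTime >= 0.5) {
            _holdSatisfied = YES; // long press time is up, now look for the swipe
        } else if (fabs(dx) > 10.0 || fabs(dy) > 10.0) {
            // Moved too far before the hold time elapsed: not our gesture.
            self.state = UIGestureRecognizerStateFailed;
            return;
        } else {
            return;
        }
    }

    // Hold satisfied: look for a mostly horizontal drag to the left.
    if (dx < -60.0 && fabs(dy) < 40.0) {
        self.state = UIGestureRecognizerStateRecognized; // fires the action
    } else if (dx > 10.0 || fabs(dy) > 80.0) {
        self.state = UIGestureRecognizerStateFailed;
    }
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    [super touchesEnded:touches withEvent:event];
    if (self.state == UIGestureRecognizerStatePossible) {
        // Finger lifted before the hold + swipe completed.
        self.state = UIGestureRecognizerStateFailed;
    }
}

- (void)touchesCancelled:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    [super touchesCancelled:touches withEvent:event];
    if (self.state == UIGestureRecognizerStatePossible) {
        self.state = UIGestureRecognizerStateFailed;
    }
}

@end
You would then add a single instance of this recognizer to the view and call popViewControllerAnimated: from its action method.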
I have added UIPanGestureRecognizer on the view of my UIViewController.
In this view (a calculator) there are some UIButtons triggered by the .TouchUpInside event.
The problem comes if I tap on a button and make a small pan at the same time (which might happen if you tap a button quickly while already moving towards the next one). Then the pan gesture is triggered. I would like to avoid that when there is a tap on a button, but I would like to allow it if the tap takes too long (let's say 0.3 s is enough to trigger the pan gesture).
How can I achieve that?
Set the property cancelsTouchesInView of your gesture recognizer to NO.
Then your button should work properly.
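For instance (assuming the recognizer is stored in a panGesture property; the name is illustrative):
self.panGesture.cancelsTouchesInView = NO; // buttons keep receiving their touch events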
You can use the delegate method like this:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    if ([touch.view isKindOfClass:[UIButton class]]) {
        return NO;
    }
    return YES;
}
There is a dirty solution: you can just wrap it in a UIScrollView.
I have a map drawn with OpenGL ES, and I have a pan gesture recognizer with maximumNumberOfTouches set to 1 for panning around the map, plus a pinch gesture recognizer for zooming. I want to start panning as soon as I'm done zooming (when one finger is lifted off the screen), but the pan gesture recognizer doesn't kick in until the pinch gesture recognizer is done, which is only when it detects that there are no fingers on the screen. Any ideas?
It may be possible to allow both gestures to be active via the delegate method:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    // The two recognizers using this delegate (in your case the pan and the pinch) should both be active.
    return YES;
}
Then keep a BOOL that tracks whether the user is zooming, and don't allow the code in the pan gesture handler to execute while that BOOL is YES. You can update the BOOL by checking the number of touches in touchesMoved:withEvent: (or perhaps some other UIGestureRecognizer method).
I think this should work; I do something similar in an app that allows scaling, rotating, and dragging, where dragging is only allowed when the user isn't scaling/rotating.
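An untested sketch of the BOOL idea (the handler names and the _isZooming ivar are illustrative). Here the flag is driven by the pinch recognizer's state and its numberOfTouches, which amounts to the same thing as counting touches in touchesMoved:withEvent::
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch
{
    // Treat the user as zooming only while at least two fingers are still down.
    if (pinch.state == UIGestureRecognizerStateBegan || pinch.state == UIGestureRecognizerStateChanged) {
        _isZooming = (pinch.numberOfTouches >= 2);
    } else {
        _isZooming = NO;
    }

    if (_isZooming) {
        // ... apply pinch.scale to the OpenGL ES map here ...
    }
}

- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    if (_isZooming) {
        return; // ignore pans while both fingers are still down
    }
    // ... use [pan translationInView:pan.view] to pan the OpenGL ES map here ...
}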
~Good Luck