UIPanGestureRecognizer not working as expected with multiple pans - ios

Essentially, what I want to do is to move a view around to follow the user's pan. This works fine as long as the same pan object is being used. The problem comes when the user releases and then starts another pan.
According to the documentation, the value in translationInView is relative to the position at the start of the pan.
So my strategy for handling this was to add two properties to my view so I can tell whether the same pan object is being used and what the reference location is. The self object is the object being moved. It is a UIView subclass.
CGPoint originalPoint;
if (pan == self.panObject) {
    // If the pan object is the same as the one in the property, use the saved value as the reference point.
    originalPoint = CGPointMake(self.panStartLocation.x, self.panStartLocation.y);
} else {
    // If the pan object is DIFFERENT, set the originalPoint from the existing center.
    // self.center is in self.superview's coordinate system.
    originalPoint = CGPointMake(self.center.x, self.center.y);
    self.panStartLocation = CGPointMake(originalPoint.x, originalPoint.y);
    self.panObject = pan;
}
CGPoint translation = [pan translationInView:self.superview];
self.center = CGPointMake(originalPoint.x + translation.x, originalPoint.y + translation.y);
This scheme doesn't work because the pan object is apparently the same object every time. I've spent a bit of time in the debugger verifying this, and it seems to be true. I thought the pan object would be different for each touch. Since this doesn't work, what is the alternative?

I solved it. Here is the corrected code:
CGPoint originalPoint;
if (pan.state == UIGestureRecognizerStateBegan) {
    originalPoint = CGPointMake(self.center.x, self.center.y);
    self.panStartLocation = CGPointMake(originalPoint.x, originalPoint.y);
} else {
    originalPoint = CGPointMake(self.panStartLocation.x, self.panStartLocation.y);
}
CGPoint translation = [pan translationInView:self.superview];
self.center = CGPointMake(originalPoint.x + translation.x, originalPoint.y + translation.y);
EDIT: A better approach is to take advantage of the fact that the gesture recognizer allows you to set the translation:
[sender setTranslation:CGPointMake(0.0, 0.0) inView:self.pieceBeingMoved];
Do this each time you move your item, and the next translation will then be relative to the position you just moved to.
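For reference, here is a minimal sketch of a full handler built on that reset-the-translation idea, assuming (as in the code above) that self is the view being moved and the recognizer is attached to it:
- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGPoint translation = [pan translationInView:self.superview];
    // Move by the delta reported since the last callback.
    self.center = CGPointMake(self.center.x + translation.x, self.center.y + translation.y);
    // Zero the translation so the next callback reports only the movement
    // since this one, no matter how many separate pans the user performs.
    [pan setTranslation:CGPointZero inView:self.superview];
}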

Related

How to implement panning by moving SCNCamera

I have looked at this post, rickster's answer, and Lorenzo's sample code.
Lorenzo's code works well. I am trying to extend it by adding a conventional pan where you can move the scene left, right, up, and down.
Here is what I have tried:
- (void)handlePan2F:(UIPanGestureRecognizer *)recognizer {
    CGPoint translation = [recognizer translationInView:recognizer.view];
    CGFloat xVal = translation.x * 0.025; // apply some damping
    CGFloat yVal = translation.y * 0.025;
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        self.cameraOrbit.position = SCNVector3Make(-xVal, yVal, 0.0); // what should the value be for z?
    }
}
The issues I am running into are:
The pan kind of works, but not well.
If I add an SCNLookAtConstraint to point at an area in the scene, the pan2F gesture rotates the scene. Why is that?
The behavior I am trying to implement is simply to pan left/right/up/down over the scene without any rotation. How can I fix this code?
I am willing to give up the SCNLookAtConstraint as long as I can still zoom to a specific area within the scene.
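A hedged sketch of one way to approach this, carrying over the reset-the-translation idea from the first question above: accumulate the pan into cameraOrbit's current position instead of overwriting it with the raw translation each time. The cameraOrbit node and the 0.025 damping factor come from the question; the rest is illustrative and does not address the SCNLookAtConstraint rotation.
- (void)handlePan2F:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state != UIGestureRecognizerStateChanged) return;
    CGPoint translation = [recognizer translationInView:recognizer.view];
    SCNVector3 position = self.cameraOrbit.position;
    // Offset the orbit node in its own x/y plane; z is left alone so the
    // camera keeps its distance from the scene.
    self.cameraOrbit.position = SCNVector3Make(position.x - translation.x * 0.025,
                                               position.y + translation.y * 0.025,
                                               position.z);
    // Reset so the next callback delivers only the incremental movement.
    [recognizer setTranslation:CGPointZero inView:recognizer.view];
}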

Touch locations UIPinchGestureRecognizer

I'm trying to resize an image using a UIPinchGestureRecognizer, and to do that I need to find the center point that the pinch is focused on.
Originally I thought about calculating the midpoint of the two touch points. The problem is that, for some reason, the points returned for the touch indexes are not the locations of the touches on screen.
For example, when I tried zooming with one touch at approximately (333, 187) and another at (1000, 563), the returned touch locations were (496, 279) and (170, 95).
What exactly are UITouch 1 and UITouch 2 the indexes of? How can I find the center-point value?
func handlePinchGesture(gesture: UIPinchGestureRecognizer) {
    // Finds the midpoint location of the pinch gesture
    var touch1 = gesture.locationOfTouch(0, inView: self.view)
    var touch2 = gesture.locationOfTouch(1, inView: self.view)
    var midPointX = (touch1.x + touch2.x) / 2
    var midPointY = (touch1.y + touch2.y) / 2
    var touchedPoint = CGPointMake(midPointX, midPointY)
}
How can I find the center-point value
You don't have to find it. The gesture recognizer gives it to you. It is the gesture recognizer's locationInView:.
You can use the following code to get the location of an individual touch:
@objc func onGesture(gesture: UIPinchGestureRecognizer) {
    print("Location", gesture.location(ofTouch: 0, in: nil))
}
For the centroid itself, in Objective-C:
CGPoint centerPoint = [recognizer locationInView:self.view];
Apple says:
The returned value is a generic single-point location for the gesture computed by the UIKit framework. It is usually the centroid of the touches involved in the gesture.
This will give you the center. Hope this helps.. :)
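A minimal sketch of using that centroid in a pinch handler; self.imageView is an illustrative name, and the scale handling follows the usual reset-to-1 pattern:
- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    // Centroid of all touches in the gesture, in self.view's coordinates.
    CGPoint centerPoint = [recognizer locationInView:self.view];
    NSLog(@"Pinch centered at %@", NSStringFromCGPoint(centerPoint));
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        // Apply the scale incrementally, then reset it to 1 so the next
        // callback reports only the change since this one.
        self.imageView.transform = CGAffineTransformScale(self.imageView.transform, recognizer.scale, recognizer.scale);
        recognizer.scale = 1.0;
    }
}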

Is it possible to know when [UIDynamicItemBehavior addLinearVelocity:forItem:] has finished running?

I'm using a UIPanGestureRecognizer and UIAttachmentBehavior to move a UIView around the screen. When the user ends the gesture I apply the velocity of the gesture recognizer to the view using a UIDynamicItemBehavior and the addLinearVelocity:forItem: method.
Here is the code I use:
- (void)_handlePanGestureRecognized:(UIPanGestureRecognizer *)panGestureRecognizer
{
    if (panGestureRecognizer.state == UIGestureRecognizerStateBegan)
    {
        _attachmentBehavior.anchorPoint = panGestureRecognizer.view.center;
        [_dynamicAnimator addBehavior:_attachmentBehavior];
    }
    else if (panGestureRecognizer.state == UIGestureRecognizerStateChanged)
    {
        CGPoint point = [panGestureRecognizer locationInView:panGestureRecognizer.view.superview];
        _attachmentBehavior.anchorPoint = point;
    }
    else if (panGestureRecognizer.state == UIGestureRecognizerStateEnded)
    {
        [_dynamicAnimator removeBehavior:_attachmentBehavior];
        CGPoint velocity = [panGestureRecognizer velocityInView:panGestureRecognizer.view.superview];
        [_dynamicItemBehavior addLinearVelocity:velocity forItem:self];
    }
}
When the view stops moving I would then like to have it snap to the closest edge of the screen but I currently have no way of knowing when it has stopped moving short of polling the view's center with a CADisplayLink.
Have you tried attaching a UIDynamicAnimatorDelegate to your animator, and using the dynamicAnimatorDidPause: method to trigger snapping to the closest edge?
From reading on the developer forums, it sounds like some have had problems with their views staying in motion for a very long time (jiggling back and forth by 1 pixel, for example), but perhaps this will work for your case.
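A minimal sketch of that delegate approach, assuming the same _dynamicAnimator ivar and that self (the moving view) is the dynamic item; the edge-snapping math is illustrative:
// During setup, conform to <UIDynamicAnimatorDelegate> and set:
// _dynamicAnimator.delegate = self;

- (void)dynamicAnimatorDidPause:(UIDynamicAnimator *)animator
{
    // All items have come to rest; snap to the nearest horizontal edge.
    CGRect bounds = self.superview.bounds;
    CGPoint center = self.center;
    CGFloat halfWidth = CGRectGetWidth(self.bounds) / 2.0;
    center.x = (center.x < CGRectGetMidX(bounds)) ? halfWidth : CGRectGetMaxX(bounds) - halfWidth;
    [UIView animateWithDuration:0.25 animations:^{
        self.center = center;
    }];
}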

Point inside a rotated CGRect

How would you properly determine if a point is inside a rotated CGRect/frame?
The frame is rotated with Core Graphics.
So far I've found an algorithm that calculates if a point is inside a triangle, but that's not quite what I need.
The frame being rotated is a regular UIView with a few subviews.
Let's imagine that you use the transform property to rotate a view:
self.sampleView.transform = CGAffineTransformMakeRotation(M_PI_2 / 3.0);
If you then have a gesture recognizer, for example, you can see if the user tapped in that location using locationInView with the rotated view, and it automatically factors in the rotation for you:
- (void)handleTap:(UITapGestureRecognizer *)gesture
{
    CGPoint location = [gesture locationInView:self.sampleView];
    if (CGRectContainsPoint(self.sampleView.bounds, location))
        NSLog(@"Yes");
    else
        NSLog(@"No");
}
Or you can use convertPoint:
- (void)handleTap:(UITapGestureRecognizer *)gesture
{
    CGPoint locationInMainView = [gesture locationInView:self.view];
    CGPoint locationInSampleView = [self.sampleView convertPoint:locationInMainView fromView:self.view];
    if (CGRectContainsPoint(self.sampleView.bounds, locationInSampleView))
        NSLog(@"Yes");
    else
        NSLog(@"No");
}
The convertPoint: method obviously doesn't have to be used from within a gesture recognizer handler; it can be used in any context. But hopefully this illustrates the technique.
Use CGRectContainsPoint() to check whether a point is inside a rectangle or not.

Panning a UIImage with a Gesture Recognizer on a placeholder UIView

I need to crop an image to match a specific dimension. I have three layers in my view starting from the bottom:
I have the raw image, which comes from the camera, in an image view called cameraImage.
I have a UIView holding this image. When the user taps "crop", the UIView's bounds are used to crop the raw image inside it.
Above all of this I have a guide image which shows the user the dimensions they need to pan, rotate, and pinch their image to fit into.
I want to add the pan gesture to the top guide image and have it control the raw image at the bottom. So the guide image never moves but it is listening for the pan gesture.
I can't figure out how to reset the recognizer without it making my raw image jump back to zero. Maybe someone could help?
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    CGPoint translation = [recognizer translationInView:recognizer.view];
    recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x, recognizer.view.center.y + translation.y);
    [recognizer setTranslation:CGPointMake(0, 0) inView:recognizer.view];
}
The above code works great when the gesture is attached to the bottom image. The problem is that when the user pans outside the view's bounds, the image stops panning and is basically stuck. You can't touch it anymore, so it just sits there. So I thought attaching the gesture to the top view would solve the problem.
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    CGPoint translation = [recognizer translationInView:recognizer.view];
    cameraImage.center = CGPointMake(recognizer.view.center.x + translation.x, recognizer.view.center.y + translation.y);
}
This almost works. I set the cameraImage's center and removed the third line, which resets the recognizer. If I don't remove it, the cameraImage jumps back to the same position every time I try to pan. It only almost works because when you touch the image again it doesn't start from the point you touched; it moves the image back to the original position and then lets you pan.
First option:
When the recognizer enters the UIGestureRecognizerStateEnded state,
if (recognizer.state == UIGestureRecognizerStateEnded)
{
    ...
}
you store the translation at that point in time in an instance variable (@property) of your class.
And then you always add the saved translation to the new translation.
In code this would look like this:
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    CGPoint translation = [recognizer translationInView:recognizer.view];
    CGPoint updatedTranslation = CGPointMake(translation.x + self.savedTranslation.x, translation.y + self.savedTranslation.y);
    cameraImage.center = CGPointMake(recognizer.view.center.x + updatedTranslation.x, recognizer.view.center.y + updatedTranslation.y);
    if (recognizer.state == UIGestureRecognizerStateEnded)
    {
        self.savedTranslation = updatedTranslation;
    }
}
Don't forget to add @property (nonatomic, assign) CGPoint savedTranslation; to your interface.
Also make sure savedTranslation is initialized in your class's init method with self.savedTranslation = CGPointMake(0, 0);
Second option:
You should consider doing all of this in a scroll view, with the image view as the scroll view's viewForZooming. This gives the very smooth interaction users are used to!
Above this scroll view you can then put your mask/guide, but make sure to disable userInteractionEnabled on the mask/guide view so your user's touches reach the scroll view below!
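A minimal sketch of that scroll-view setup; self.scrollView is an illustrative outlet, and cameraImage is the image view from the question:
// During setup, conform to <UIScrollViewDelegate> and configure the scroll view:
// self.scrollView.delegate = self;
// self.scrollView.minimumZoomScale = 1.0;
// self.scrollView.maximumZoomScale = 4.0;

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    // The scroll view now pans and zooms the image for you, with the
    // usual bouncing and deceleration, while the guide view above stays put.
    return cameraImage;
}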
