I'm using a UIRotationGestureRecognizer and in the target method have this code:
if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
    UIView *piece = gestureRecognizer.view;
    CGPoint locationInView = [gestureRecognizer locationInView:piece];
    CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];

    piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
    piece.center = locationInSuperview;
}
But I'm not quite sure which point the locationInView: return value refers to, since there are supposed to be two fingers touching the screen.
The locationInView: method returns the center point of both touches. If you want to know the positions of the two individual touches, use locationOfTouch:inView:.
Reading the documentation is a good idea:
http://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIGestureRecognizer_Class/Reference/Reference.html
It is the "centroid" of the multiple touches.
Related
I'm performing panning and pinching using Apple's example methods to keep the zoom anchored where the fingers pinch. The code moves the layer's anchor point to where the fingers touch. Everything works fine, but when I add a subview, the anchor point resets and everything is out of place.
I tried changing the subviews' anchor points to match the superview's, but they come back to the default anchor point.
Code example for pinching and the anchor point adjustment:
/**
 Scale and rotation transforms are applied relative to the layer's anchor point;
 this method moves a gesture recognizer's view's anchor point between the user's
 fingers.
 */
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
{
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *piece = gestureRecognizer.view;
        CGPoint locationInView = [gestureRecognizer locationInView:piece];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];

        piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
        piece.center = locationInSuperview;
    }
}
/**
 Scale the piece by the current scale.
 Reset the gesture recognizer's scale to 1 after applying it, so the next callback
 is a delta from the current scale.
 */
- (IBAction)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        [gestureRecognizer setScale:1];
    }
}
I have a self.canvasView and a self.workingView. The gestures are attached to self.canvasView, and self.workingView is a subview of self.canvasView.
Everything works fine, but every time I add a subview to self.workingView, the anchor point resets and everything is out of place.
How can I avoid this behaviour? I couldn't find anything in the documentation saying that addSubview: resets the anchorPoint, so I assume I'm doing something wrong here.
Essentially, I am trying to find a way to zoom and rotate an image at the same time in an app.
I have found this code, which is meant to go into the touchesMoved: method:
NSSet *allTouches = [event allTouches];
UITouch *touch1 = [[allTouches allObjects] objectAtIndex:0];
UITouch *touch2 = [[allTouches allObjects] objectAtIndex:1];

CGPoint previousPoint1 = [touch1 previousLocationInView:nil];
CGPoint previousPoint2 = [touch2 previousLocationInView:nil];
CGFloat previousAngle = atan2(previousPoint2.y - previousPoint1.y, previousPoint2.x - previousPoint1.x);

CGPoint currentPoint1 = [touch1 locationInView:nil];
CGPoint currentPoint2 = [touch2 locationInView:nil];
CGFloat currentAngle = atan2(currentPoint2.y - currentPoint1.y, currentPoint2.x - currentPoint1.x);

transform = CGAffineTransformRotate(transform, currentAngle - previousAngle);
self.view.transform = transform;
Now that is only for rotating with two fingers, but I need to be able to zoom at the same time with those same two fingers. I have tried everything, but I am just not sure what is wrong or how to go on from here.
The Maps application does something similar: you can zoom in on the map and rotate it at the same time, and that is what I am trying to accomplish with an image in my app.
What do I do from this point?
Thanks!
Add a UIPinchGestureRecognizer and a UIRotationGestureRecognizer to your view, set their delegates, and implement the delegate method below.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}
That should do it.
I am new to iOS and I am using UIPanGestureRecognizer in my project. I need to get the current touch point and the previous touch point while dragging the view, and I am struggling to get these two points.
If I were using the touchesBegan: method instead of UIPanGestureRecognizer, I could get these two points with the following code:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint touchPoint = [[touches anyObject] locationInView:self];
    CGPoint previous = [[touches anyObject] previousLocationInView:self];
}
I need to get these two points in the UIPanGestureRecognizer action method. How can I achieve this? Please guide me.
You can get the current location with:
CGPoint currentLocation = [recognizer locationInView:self.view];
Keep track of the previous location by storing the current location at the end of every call, seeding it with the current location the first time:
previousLocation = [recognizer locationInView:self.view];
When you link a UIPanGestureRecognizer to an IBAction, the action is called on every change. The gesture recognizer also provides a state property that indicates whether this is the first event (UIGestureRecognizerStateBegan), the last one (UIGestureRecognizerStateEnded), or one in between (UIGestureRecognizerStateChanged).
To solve your problem, try it like the following:
- (IBAction)panGestureMoveAround:(UIPanGestureRecognizer *)gesture {
    if ([gesture state] == UIGestureRecognizerStateBegan) {
        myVarToStoreTheBeganPosition = [gesture locationInView:self.view];
    } else if ([gesture state] == UIGestureRecognizerStateEnded) {
        CGPoint myNewPositionAtTheEnd = [gesture locationInView:self.view];
        // and now handle it ;)
    }
}
You may also have a look at the method called translationInView:.
If you don't want to store anything, you can also do this (the pan's translation is cumulative, so the recovered point is the location where the gesture began, or where the translation was last reset):
let location = panRecognizer.location(in: self)
let translation = panRecognizer.translation(in: self)
let previousLocation = CGPoint(x: location.x - translation.x, y: location.y - translation.y)
UITouch provides methods to get the current and previous locations of a touch in a view:
- (CGPoint)locationInView:(UIView *)view;
- (CGPoint)previousLocationInView:(UIView *)view;
You should instantiate your pan gesture recognizer as follows:
UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
Then you should add panRecognizer to your view:
[aView addGestureRecognizer:panRecognizer];
The - (void)handlePan:(UIPanGestureRecognizer *)recognizer method will be called while the user interacts with the view. In handlePan: you can get the point touched like this:
CGPoint point = [recognizer locationInView:aView];
You can also get the state of the panRecognizer:
if (recognizer.state == UIGestureRecognizerStateBegan) {
    // do something
} else if (recognizer.state == UIGestureRecognizerStateEnded) {
    // do something else
}
I am working on pinch-in and pinch-out (zoom) for PDF pages. Pinching and panning (moving) work properly, but when the user keeps moving the zoomed view, it ends up outside the superview's bounds.
How can I limit the pan so that the user cannot move the zoomed view/PDF outside the superview?
The relevant code I am using is:
// This method will handle the PINCH / ZOOM gesture
- (void)pinchZoom:(UIPinchGestureRecognizer *)gestureRecognizer
{
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = [gestureRecognizer scale];
    }
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        if (!zoomActive) {
            zoomActive = YES;
            panActive = YES;
            UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panMove:)];
            [panGesture setMaximumNumberOfTouches:2];
            [panGesture setDelegate:self];
            [self addGestureRecognizer:panGesture];
            [panGesture release];
        }
        CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
        // Constants to adjust the max/min values of zoom
        const CGFloat kMaxScale = 2.0;
        const CGFloat kMinScale = 1.0;
        CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]);
        newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], newScale, newScale);
        [gestureRecognizer view].transform = transform;
        lastScale = [gestureRecognizer scale]; // Store the previous scale factor for the next pinch gesture call
        [delegate leavesView:self zoomingCurrentView:[gestureRecognizer scale]];
    }
}
The method where I handle the pan move:
// This method will handle the PAN / MOVE gesture
- (void)panMove:(UIPanGestureRecognizer *)gestureRecognizer
{
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
CGPoint translation = [gestureRecognizer translationInView:[[gestureRecognizer view] superview]];
[[gestureRecognizer view] setCenter:CGPointMake([[gestureRecognizer view] center].x + translation.x, [[gestureRecognizer view] center].y + translation.y)];
[gestureRecognizer setTranslation:CGPointZero inView:[[gestureRecognizer view] superview]];
}
}
Please suggest how to handle pan/move limiting panning within its superview bounds.
Try this code in your panMove method. It's working fine in my case.
static CGPoint initialCenter;
if (recognizer.state == UIGestureRecognizerStateBegan) {
    initialCenter = recognizer.view.center;
}
CGPoint translation = [recognizer translationInView:recognizer.view];
CGPoint newCenter = CGPointMake(initialCenter.x + translation.x,
                                initialCenter.y + translation.y);
CGRect newFrame = recognizer.view.frame;
CGRect superViewBounds = recognizer.view.superview.bounds;
CGPoint superViewOrigin = recognizer.view.superview.frame.origin;

if (newCenter.x - (newFrame.size.width / 2) >= (superViewBounds.size.width + superViewOrigin.x) - 200 /*right*/
    || newCenter.x + (newFrame.size.width / 2) <= (superViewOrigin.x + 200) /*left*/
    || newCenter.y - (newFrame.size.height / 2) >= (superViewBounds.size.height + superViewOrigin.y) - 200 /*bottom*/
    || newCenter.y + (newFrame.size.height / 2) <= (superViewOrigin.y + 100)) /*top*/
{
    return;
} else {
    recognizer.view.center = newCenter;
}
You can add something like this to the code (this is a little rough, so you may have to work out some errors):
if ([gestureRecognizer view].frame.origin.x < self.view.bounds.origin.x - panExpansion)
{
    // then don't move it
}
// ... repeat this for all sides: this covers the left; do the same for bottom, right, and top.
Edit:
OK, consider a box inside another box, so we have an inner box and an outer box. If we don't want the inner box to go outside the outer box, then all of these statements must be true:
The moved left side of the inner box is not outside the left side of the outer box.
The moved right side of the inner box is not outside the right side of the outer box.
The moved bottom side of the inner box is not outside the bottom side of the outer box.
The moved top side of the inner box is not outside the top side of the outer box.
In your case the PDF is the inner box and the iPad screen is the outer box. To stop the PDF from going outside the screen, we check each of these statements; if any one is false, we either do not move the PDF to its new location OR we move it just to the edge of the screen.
The problem is that once pinch-to-zoom is used, the inner box will suddenly always be outside the outer box, so how do we fix that? We take the number of pixels that were added to the inner box when it was zoomed (for the sake of this explanation, let's call this the expansion) and subtract that value from the edge tests, like so (this is a dumbed-down if statement and will not work in code):
if (outerBox.leftSide is less than innerBox.leftSide - panExpansion)
{
    // then the innerBox is outside the outerBox
}
I hope this helps clarify!
I successfully implemented pinch-to-zoom on a view. However, the view doesn't position itself where I want it to. For the stackoverflowers with an iPad: I would like my view to be centered the way it is in the iPad Photos.app. When you pinch and zoom on an album, the photos present themselves in an expanding view that is roughly centered with the top-right corner under one finger and the bottom-left corner under the other. I mixed in a pan recognizer, but that way the user always has to pinch and then pan to adjust.
Here is a graphic explanation; I could post a video of my app if that's unclear (no secret, I'm trying to reproduce the Photos.app of the iPad...).
So, for an initial position of the fingers, beginning to zoom:
This is the actual "zoomed" frame for now. The square is bigger, but its position is below the fingers.
Here is what I would like to have: same size, but a different origin.x and origin.y:
(sorry about my poor Photoshop skills ^^)
You can get the CGPoint of the midpoint between the two fingers via the following code in the handlePinchGesture: method.
CGPoint point = [sender locationInView:self];
My whole handlePinchGesture method is below.
/*
 instance variables
 CGFloat lastScale;
 CGPoint lastPoint;
 */
- (void)handlePinchGesture:(UIPinchGestureRecognizer *)sender {
    if ([sender numberOfTouches] < 2)
        return;

    if (sender.state == UIGestureRecognizerStateBegan) {
        lastScale = 1.0;
        lastPoint = [sender locationInView:self];
    }

    // Scale
    CGFloat scale = 1.0 - (lastScale - sender.scale);
    [self.layer setAffineTransform:
        CGAffineTransformScale([self.layer affineTransform],
                               scale,
                               scale)];
    lastScale = sender.scale;

    // Translate
    CGPoint point = [sender locationInView:self];
    [self.layer setAffineTransform:
        CGAffineTransformTranslate([self.layer affineTransform],
                                   point.x - lastPoint.x,
                                   point.y - lastPoint.y)];
    lastPoint = [sender locationInView:self];
}
Have a look at the Touches sample project. Specifically these methods could help you:
// scale and rotation transforms are applied relative to the layer's anchor point;
// this method moves a gesture recognizer's view's anchor point between the user's fingers
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer {
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *piece = gestureRecognizer.view;
        CGPoint locationInView = [gestureRecognizer locationInView:piece];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];

        piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
        piece.center = locationInSuperview;
    }
}

// scale the piece by the current scale
// reset the gesture recognizer's scale to 1 after applying so the next callback is a delta from the current scale
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        [gestureRecognizer setScale:1];
    }
}