In my app I have implemented pinch-to-zoom and panning. Both zoom and pan are working, but I would like to limit panning so that the edge of the zoomed image cannot be dragged into the view, leaving empty space.
The range of coordinates for different scale factors does not seem to be linear. For example, on an iPad with an image view that is 764x764, at 2x zoom, the range of the transform X coordinate for a full pan is -191 to 191. At 3x, the range is about -254 to 254.
So my question is, how do I calculate the pan limit for any given scale factor?
Here is my code for the gesture recognizers:
@interface myVC()
{
CGPoint ptPanZoom1;
CGPoint ptPanZoom2;
CGFloat fltScale;
}
@property (nonatomic, strong) UIImageView* imgView; // View hosting pan/zoom image
@end
- (IBAction) handlePan:(UIPanGestureRecognizer*) sender
{
if ( sender.state == UIGestureRecognizerStateBegan )
{
ptPanZoom2 = ptPanZoom1;
return;
}
if ( (sender.state == UIGestureRecognizerStateChanged)
|| (sender.state == UIGestureRecognizerStateEnded) )
{
CGPoint ptTrans = [sender translationInView:self.imgView];
ptPanZoom1.x = ptTrans.x + ptPanZoom2.x;
ptPanZoom1.y = ptTrans.y + ptPanZoom2.y;
CGAffineTransform trnsFrm1 = CGAffineTransformMakeTranslation(ptPanZoom1.x, ptPanZoom1.y);
CGAffineTransform trnsFrm2 = CGAffineTransformMakeScale(fltScale, fltScale);
self.imgView.transform = CGAffineTransformConcat(trnsFrm1, trnsFrm2);
}
}
- (IBAction) handlePinch:(UIPinchGestureRecognizer*) sender
{
if ( (sender.state == UIGestureRecognizerStateBegan)
|| (sender.state == UIGestureRecognizerStateChanged)
|| (sender.state == UIGestureRecognizerStateEnded) )
{
fltScale *= sender.scale;
sender.scale = 1.0;
if ( fltScale <= 1.0 )
{
fltScale = 1.0;
ptPanZoom1.x = 0.0; // When scale goes to 1, snap position back
ptPanZoom1.y = 0.0;
}
else if ( fltScale > 6.0 )
{
fltScale = 6.0;
}
CGAffineTransform trnsFrm1 = CGAffineTransformMakeTranslation(ptPanZoom1.x, ptPanZoom1.y);
CGAffineTransform trnsFrm2 = CGAffineTransformMakeScale(fltScale, fltScale);
self.imgView.transform = CGAffineTransformConcat(trnsFrm1, trnsFrm2);
}
}
I figured out the equation for determining the pan limits:
int iMaxPanX = ((self.imgView.bounds.size.width * (fltScale - 1.0)) / fltScale) / 2;
int iMinPanX = -iMaxPanX;
int iMaxPanY = ((self.imgView.bounds.size.height * (fltScale - 1.0)) / fltScale) / 2;
int iMinPanY = -iMaxPanY;
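Plugging in the numbers from the question as a check: with the 764-point view, the limit at 2x is 764 * (2 - 1) / 2 / 2 = 191, and at 3x it is 764 * (3 - 1) / 3 / 2 ≈ 254.7, which matches the pan ranges observed above.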
In my pan gesture handler I am allowing the pan to go slightly beyond the limits during a changed event so that the user can easily see they have reached the edge.
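For reference, here is a rough sketch of that clamping step, written in Swift as a sketch rather than the original Objective-C (panOffset, scale, and imgView stand in for the ptPanZoom1, fltScale, and imgView ivars above; the slight-overshoot behaviour is left out):
// Clamp the accumulated pan offset to the limits before rebuilding the transform.
let maxPanX = (imgView.bounds.width * (scale - 1.0)) / scale / 2.0
let maxPanY = (imgView.bounds.height * (scale - 1.0)) / scale / 2.0
panOffset.x = max(-maxPanX, min(maxPanX, panOffset.x))
panOffset.y = max(-maxPanY, min(maxPanY, panOffset.y))
// Builds the same translate-then-scale transform as the concat in handlePan: above.
imgView.transform = CGAffineTransform(scaleX: scale, y: scale).translatedBy(x: panOffset.x, y: panOffset.y)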
Related
I added a UIPinchGestureRecognizer and UIRotationGestureRecognizer on UITextView. When I do a pinch or rotation, I change the transform property of my UITextView object. However, if I rotate it at some angle and zoom in to some scale, the text will suddenly disappear.
If I continue to rotate it, the text can reappear at some angle.
Is there any way to fix the disappearance?
Here's what I did with the gesture recognizer methods.
- (void)onPinchtextView:(UIPinchGestureRecognizer *)pinchGestureRecognizer {
CGFloat scale = pinchGestureRecognizer.scale;
if (pinchGestureRecognizer.state == UIGestureRecognizerStateChanged || pinchGestureRecognizer.state == UIGestureRecognizerStateBegan) {
self.textView.transform = CGAffineTransformScale(self.textView.transform, scale, scale);
pinchGestureRecognizer.scale = 1.f;
}
}
- (void)onRotatetextView:(UIRotationGestureRecognizer *)rotateGestureRecognizer {
CGFloat rotation = rotateGestureRecognizer.rotation;
if (rotateGestureRecognizer.state == UIGestureRecognizerStateChanged || rotateGestureRecognizer.state == UIGestureRecognizerStateBegan) {
self.textView.transform = CGAffineTransformRotate(self.textView.transform, rotation);
[rotateGestureRecognizer setRotation:0];
}
}
There is a UIView A. I put an icon on view A and try to use a pan gesture to scale and rotate this view A. The scale function works fine, but I can't make rotation work. The code is as follows. Any help will be appreciated, thanks.
- (void)scaleAndRotateWatermark:(UIPanGestureRecognizer *)gesture
{
if (gesture.state == UIGestureRecognizerStateChanged
|| gesture.state == UIGestureRecognizerStateEnded)
{
UIView *child = gesture.view;
UIView *view = child.superview;
CGPoint translatedPoint = [gesture translationInView:view];
CGPoint originCenter = child.center;
CGPoint newCenter = CGPointMake(child.center.x + translatedPoint.x, child.center.y + translatedPoint.y);
float origX = originCenter.x;
float origY = originCenter.y;
float newX = newCenter.x;
float newY = newCenter.y;
float originDis = (origX*origX) + (origY*origY);
float newDis = (newX*newX) + (newY*newY);
float scale = newDis/originDis;
view.transform = CGAffineTransformScale(view.transform, scale, scale);
// rotation calculation here
// need your help
// end of rotation
[gesture setTranslation:CGPointZero inView:_videoPreviewLayer];
}
}
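One possible way to fill the rotation gap above, sketched here in Swift rather than Objective-C and not taken from the original post: mirror the distance-based scale by rotating by the change in angle of the line from the superview's origin to the old and new center points.
// Hypothetical rotation calculation (names mirror the question's origX/origY/newX/newY).
let oldAngle = atan2(Double(origY), Double(origX))
let newAngle = atan2(Double(newY), Double(newX))
view.transform = view.transform.rotated(by: CGFloat(newAngle - oldAngle))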
I want to zoom a CCNode in and out by pinching and panning the screen. The node has a background which is very large, but only a portion of it is shown on the screen. That node also contains other sprites.
What I have done so far is that first I register a UIPinchGestureRecognizer:
UIPinchGestureRecognizer * pinchRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinchFrom:)];
[[[CCDirector sharedDirector] view] addGestureRecognizer: pinchRecognizer];
-(void)handlePinchFrom:(UIPinchGestureRecognizer *) pinch
{
if(pinch.state == UIGestureRecognizerStateEnded) {
prevScale = 1;
}
else {
CGFloat dscale = [self scale] - prevScale + pinch.scale;
if(dscale > 0)
{
deltaScale = dscale;
}
CGAffineTransform transform = CGAffineTransformScale(pinch.view.transform, deltaScale, deltaScale);
[pinch.view setTransform: transform];
// [_contentNode setScale:deltaScale];
prevScale = pinch.scale;
}
}
The problem is that it scales the whole UIView, not the CCNode. I have also tried setting the scale of my _contentNode.
EDIT:
I have also tried this:
- (void)handlePinchGesture:(UIPinchGestureRecognizer*)pinch
{
if (pinch.state == UIGestureRecognizerStateBegan || pinch.state == UIGestureRecognizerStateChanged) {
CGPoint midpoint = [pinch locationInView:[CCDirector sharedDirector].view];
CGSize winSize = [CCDirector sharedDirector].viewSize;
float x = midpoint.x/winSize.width;
float y = midpoint.y/winSize.height;
_contentNode.anchorPoint = CGPointMake(x, y);
float scale = [pinch scale];
_contentNode.scale *= scale;
pinch.scale = 1;
}
}
But it zooms from the bottom left of the screen.
I had the same problem. I use a CCScrollView that contains a CCNode larger than the device screen. I want to scroll and zoom it, but the node shouldn't scroll out of the screen or scale smaller than the screen. So I created my own subclass of CCScrollView, where I handle the pinch. It has some strange glitches, but on the whole it works fine.
When the pinch begins, I set the anchor point of my node to the pinch center in node space. Then I need to change the position of my node proportionally to the shift of the anchor point, so that moving the anchor point doesn't change the node's location on the view:
- (void)handlePinch:(UIPinchGestureRecognizer*)recognizer
{
if (recognizer.state == UIGestureRecognizerStateEnded) {
_previousScale = self.contentNode.scale;
}
else if (recognizer.state == UIGestureRecognizerStateBegan) {
float X = [recognizer locationInNode:self.contentNode].x / self.contentNode.contentSize.width;
float Y = [recognizer locationInNode:self.contentNode].y / self.contentNode.contentSize.height;
float positionX = self.contentNode.position.x + self.contentNode.boundingBox.size.width * (X - self.contentNode.anchorPoint.x);
float positionY = self.contentNode.position.y + self.contentNode.boundingBox.size.height * (Y - self.contentNode.anchorPoint.y);
self.contentNode.anchorPoint = ccp(X, Y);
self.contentNode.position = ccp(positionX, positionY);
}
else {
CGFloat scale = _previousScale * recognizer.scale;
if (scale >= maxScale) {
self.contentNode.scale = maxScale;
}
else if (scale <= [self minScale]) {
self.contentNode.scale = [self minScale];
}
else {
self.contentNode.scale = scale;
}
}
}
I also need to change the CCScrollView min and max scroll values, so my node never scrolls out of view. The default anchor point is (0, 1), so I need to shift the min and max scroll proportionally to the new anchor point.
- (float) maxScrollX
{
if (!self.contentNode) return 0;
float maxScroll = self.contentNode.boundingBox.size.width - self.contentSizeInPoints.width;
if (maxScroll < 0) maxScroll = 0;
return maxScroll - self.contentNode.boundingBox.size.width * self.contentNode.anchorPoint.x;
}
- (float) maxScrollY
{
if (!self.contentNode) return 0;
float maxScroll = self.contentNode.boundingBox.size.height - self.contentSizeInPoints.height;
if (maxScroll < 0) maxScroll = 0;
return maxScroll - self.contentNode.boundingBox.size.height * (1 - self.contentNode.anchorPoint.y);
}
- (float) minScrollX
{
float minScroll = [super minScrollX];
return minScroll - self.contentNode.boundingBox.size.width * self.contentNode.anchorPoint.x;
}
- (float) minScrollY
{
float minScroll = [super minScrollY];
return minScroll - self.contentNode.boundingBox.size.height * (1 - self.contentNode.anchorPoint.y);
}
UIGestureRecognizer doesn't have a locationInNode: method, so I added it with a category. It just returns the touch location in node space:
#import "UIGestureRecognizer+locationInNode.h"
@implementation UIGestureRecognizer (locationInNode)
- (CGPoint) locationInNode:(CCNode*) node
{
CCDirector* dir = [CCDirector sharedDirector];
CGPoint touchLocation = [self locationInView: [self view]];
touchLocation = [dir convertToGL: touchLocation];
return [node convertToNodeSpace:touchLocation];
}
- (CGPoint) locationInWorld
{
CCDirector* dir = [CCDirector sharedDirector];
CGPoint touchLocation = [self locationInView: [self view]];
return [dir convertToGL: touchLocation];
}
@end
I'm trying to translate a UIView that has been rotated and/or scaled by the user's touches. I try to translate it with user input as well:
- (void)handleObjectMove:(UIPanGestureRecognizer *)recognizer
{
static CGPoint lastPoint;
UIView *moveView = [recognizer view];
CGPoint newCoord = [recognizer locationInView:playArea];
// Check if this is the first touch
if( [recognizer state]==UIGestureRecognizerStateBegan )
{
// Store the initial touch so when we change positions we do not snap
lastPoint = newCoord;
}
// Create the frame offsets to use our finger position in the view.
float dX = newCoord.x;
float dY = newCoord.y;
dX-=lastPoint.x;
dY-=lastPoint.y;
// Figure out the translation based on how we are scaled
CGAffineTransform transform = [moveView transform];
CGFloat xScale = transform.a;
CGFloat yScale = transform.d;
dX/=xScale;
dY/=yScale;
lastPoint = newCoord;
[moveView setTransform:CGAffineTransformTranslate( transform, dX, dY )];
[recognizer setTranslation:CGPointZero inView:playArea];
}
But when I touch and move the view it gets translated in all different weird ways. Can I apply some sort of formula using the rotation values to translate properly?
The best solution I've found, using the least amount of math, was to store the original translation, rotation, and scaling values separately and redo the transform whenever they changed. My solution was to subclass a UIView with the following properties:
@property (nonatomic) CGPoint translation;
@property (nonatomic) CGFloat rotation;
@property (nonatomic) CGPoint scaling;
And the following functions:
- (void)rotationDelta:(CGFloat)delta
{
[self setRotation:[self rotation]+delta];
}
- (void)scalingDelta:(CGPoint)delta
{
[self setScaling:
(CGPoint){ [self scaling].x*delta.x, [self scaling].y*delta.y }];
}
- (void)translationDelta:(CGPoint)delta
{
[self setTranslation:
(CGPoint){ [self translation].x+delta.x, [self translation].y+delta.y }];
}
- (void)transformMe
{
// Start with the translation
CGAffineTransform transform = CGAffineTransformMakeTranslation( [self translation].x, [self translation].y );
// Apply scaling
transform = CGAffineTransformScale( transform, [self scaling].x, [self scaling].y );
// Apply rotation
transform = CGAffineTransformRotate( transform, [self rotation] );
[self setTransform:transform];
}
- (void)setScaling:(CGPoint)newScaling
{
scaling = newScaling;
[self transformMe];
}
- (void)setRotation:(CGFloat)newRotation
{
rotation = newRotation;
[self transformMe];
}
- (void)setTranslation:(CGPoint)newTranslation
{
translation = newTranslation;
[self transformMe];
}
And to use the following in the handlers:
- (void)handleObjectPinch:(UIPinchGestureRecognizer *)recognizer
{
if( [recognizer state] == UIGestureRecognizerStateEnded
|| [recognizer state] == UIGestureRecognizerStateChanged )
{
// Get my stuff
if( !selectedView )
return;
SelectableImageView *view = selectedView;
CGFloat scaleDelta = [recognizer scale];
[view scalingDelta:(CGPoint){ scaleDelta, scaleDelta }];
[recognizer setScale:1.0];
}
}
- (void)handleObjectMove:(UIPanGestureRecognizer *)recognizer
{
static CGPoint lastPoint;
SelectableImageView *moveView = (SelectableImageView *)[recognizer view];
CGPoint newCoord = [recognizer locationInView:playArea];
// Check if this is the first touch
if( [recognizer state]==UIGestureRecognizerStateBegan )
{
// Store the initial touch so when we change positions we do not snap
lastPoint = newCoord;
}
// Create the frame offsets to use our finger position in the view.
float dX = newCoord.x;
float dY = newCoord.y;
dX-=lastPoint.x;
dY-=lastPoint.y;
lastPoint = newCoord;
[moveView translationDelta:(CGPoint){ dX, dY }];
[recognizer setTranslation:CGPointZero inView:playArea];
}
- (void)handleRotation:(UIRotationGestureRecognizer *)recognizer
{
if( [recognizer state] == UIGestureRecognizerStateEnded
|| [recognizer state] == UIGestureRecognizerStateChanged )
{
if( !selectedView )
return;
SelectableImageView *view = selectedView;
CGFloat rotation = [recognizer rotation];
[view rotationDelta:rotation];
[recognizer setRotation:0.0];
}
}
Try changing moveView.center instead of setting (x, y) directly or using CGAffineTransformTranslate.
Here is the Swift 4/5 version for a transformable UIView
class TransformableImageView: UIView{
var translation:CGPoint = CGPoint(x:0,y:0)
var scale:CGPoint = CGPoint(x:1, y:1)
var rotation:CGFloat = 0
override init (frame : CGRect) {
super.init(frame: frame)
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
func rotationDelta(delta:CGFloat) {
rotation = rotation + delta
}
func scaleDelta(delta:CGPoint){
scale = CGPoint(x: scale.x*delta.x, y: scale.y * delta.y)
}
func translationDelta(delta:CGPoint){
translation = CGPoint(x: translation.x+delta.x, y: translation.y + delta.y)
}
// Named applyTransform() to avoid clashing with UIView's transform property
func applyTransform(){
self.transform = CGAffineTransform.identity.translatedBy(x: translation.x, y: translation.y).scaledBy(x: scale.x, y: scale.y).rotated(by: rotation)
}
}
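A minimal usage sketch follows (the imageView property and the gesture wiring are assumptions, not part of the answer). Unlike the Objective-C version above, the Swift class does not reapply the transform from its setters, so each handler calls applyTransform() after updating the deltas:
@objc func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
    guard recognizer.state == .began || recognizer.state == .changed else { return }
    // imageView is assumed to be a TransformableImageView instance
    imageView.scaleDelta(delta: CGPoint(x: recognizer.scale, y: recognizer.scale))
    recognizer.scale = 1.0
    imageView.applyTransform()
}
@objc func handleRotation(_ recognizer: UIRotationGestureRecognizer) {
    guard recognizer.state == .began || recognizer.state == .changed else { return }
    imageView.rotationDelta(delta: recognizer.rotation)
    recognizer.rotation = 0
    imageView.applyTransform()
}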
I'm leaving this here as I also encountered the same problem. Here is how to do it in Swift 2:
Add your top view as a subview of your bottom view:
self.view.addSubview(topView)
Then add a PanGesture Recognizer to move on touch:
//Add PanGestureRecognizer to move
let panMoveGesture = UIPanGestureRecognizer(target: self, action: #selector(YourViewController.moveViewPanGesture(_:)))
topView.addGestureRecognizer(panMoveGesture)
And the function to move:
//Move function
func moveViewPanGesture(recognizer:UIPanGestureRecognizer)
{
if recognizer.state == .Changed {
var center = recognizer.view!.center
let translation = recognizer.translationInView(recognizer.view?.superview)
center = CGPoint(x:center.x + translation.x,
y:center.y + translation.y)
recognizer.view!.center = center
recognizer.setTranslation(CGPoint.zero, inView: recognizer.view)
}
}
Basically, you need to translate your view based on the bottom view, which is its superview, not the view itself: recognizer.view?.superview.
Or, if you also rotate the bottom view, you can add a view which is not going to have any transformation, add your bottom view to that untransformed view (the very bottom view), and add your top view to the bottom view as a subview. Then you should translate your top view based on the very bottom view.
I needed a pinch recognizer that would scale in the x, or y, or both directions depending on the direction of the pinch. I looked through many of the other questions here and they only had parts of the answer. Here's my complete solution that uses a custom UIPinchGestureRecognizer.
I created a custom version of a UIPinchGestureRecognizer. It uses the slope of the line between the two fingers to determine the direction of the scale. It handles three types: vertical, horizontal, and combined (diagonal). Please see my notes at the bottom.
-(void) scaleTheView:(UIPinchGestureRecognizer *)pinchRecognizer
{
if ([pinchRecognizer state] == UIGestureRecognizerStateBegan || [pinchRecognizer state] == UIGestureRecognizerStateChanged) {
if ([pinchRecognizer numberOfTouches] > 1) {
UIView *theView = [pinchRecognizer view];
CGPoint locationOne = [pinchRecognizer locationOfTouch:0 inView:theView];
CGPoint locationTwo = [pinchRecognizer locationOfTouch:1 inView:theView];
NSLog(#"touch ONE = %f, %f", locationOne.x, locationOne.y);
NSLog(#"touch TWO = %f, %f", locationTwo.x, locationTwo.y);
[scalableView setBackgroundColor:[UIColor redColor]];
if (locationOne.x == locationTwo.x) {
// perfect vertical line
// not likely, but to avoid dividing by 0 in the slope equation
theSlope = 1000.0;
}else if (locationOne.y == locationTwo.y) {
// perfect horz line
// not likely, but to avoid any problems in the slope equation
theSlope = 0.0;
}else {
theSlope = (locationTwo.y - locationOne.y)/(locationTwo.x - locationOne.x);
}
double abSlope = ABS(theSlope);
if (abSlope < 0.5) {
// Horizontal pinch - scale in the X
[arrows setImage:[UIImage imageNamed:#"HorzArrows.png"]];
arrows.hidden = FALSE;
// tranform.a = X-axis
NSLog(#"transform.A = %f", scalableView.transform.a);
// tranform.d = Y-axis
NSLog(#"transform.D = %f", scalableView.transform.d);
// if hit scale limit along X-axis then stop scale and show Blocked image
if (((pinchRecognizer.scale > 1.0) && (scalableView.transform.a >= 2.0)) || ((pinchRecognizer.scale < 1.0) && (scalableView.transform.a <= 0.1))) {
blocked.hidden = FALSE;
arrows.hidden = TRUE;
} else {
// scale along X-axis
scalableView.transform = CGAffineTransformScale(scalableView.transform, pinchRecognizer.scale, 1.0);
pinchRecognizer.scale = 1.0;
blocked.hidden = TRUE;
arrows.hidden = FALSE;
}
}else if (abSlope > 1.7) {
// Vertical pinch - scale in the Y
[arrows setImage:[UIImage imageNamed:#"VerticalArrows.png"]];
arrows.hidden = FALSE;
NSLog(#"transform.A = %f", scalableView.transform.a);
NSLog(#"transform.D = %f", scalableView.transform.d);
// if hit scale limit along Y-axis then don't scale and show Blocked image
if (((pinchRecognizer.scale > 1.0) && (scalableView.transform.d >= 2.0)) || ((pinchRecognizer.scale < 1.0) && (scalableView.transform.d <= 0.1))) {
blocked.hidden = FALSE;
arrows.hidden = TRUE;
} else {
// scale along Y-axis
scalableView.transform = CGAffineTransformScale(scalableView.transform, 1.0, pinchRecognizer.scale);
pinchRecognizer.scale = 1.0;
blocked.hidden = TRUE;
arrows.hidden = FALSE;
}
} else {
// Diagonal pinch - scale in both directions
[arrows setImage:[UIImage imageNamed:#"CrossArrows.png"]];
blocked.hidden = TRUE;
arrows.hidden = FALSE;
NSLog(#"transform.A = %f", scalableView.transform.a);
NSLog(#"transform.D = %f", scalableView.transform.d);
// if we have hit any limit don't allow scaling
if ((((pinchRecognizer.scale > 1.0) && (scalableView.transform.a >= 2.0)) || ((pinchRecognizer.scale < 1.0) && (scalableView.transform.a <= 0.1))) || (((pinchRecognizer.scale > 1.0) && (scalableView.transform.d >= 2.0)) || ((pinchRecognizer.scale < 1.0) && (scalableView.transform.d <= 0.1)))) {
blocked.hidden = FALSE;
arrows.hidden = TRUE;
} else {
// scale in both directions
scalableView.transform = CGAffineTransformScale(scalableView.transform, pinchRecognizer.scale, pinchRecognizer.scale);
pinchRecognizer.scale = 1.0;
blocked.hidden = TRUE;
arrows.hidden = FALSE;
}
} // else for diagonal pinch
} // if numberOfTouches
} // StateBegan if
if ([pinchRecognizer state] == UIGestureRecognizerStateEnded || [pinchRecognizer state] == UIGestureRecognizerStateCancelled) {
NSLog(#"StateEnded StateCancelled");
[scalableView setBackgroundColor:[UIColor whiteColor]];
arrows.hidden = TRUE;
blocked.hidden = TRUE;
}
}
Remember to add the protocol to the view controller header file:
@interface WhiteViewController : UIViewController <UIGestureRecognizerDelegate>
{
IBOutlet UIView *scalableView;
IBOutlet UIView *mainView;
IBOutlet UIImageView *arrows;
IBOutlet UIImageView *blocked;
}
@property (strong, nonatomic) IBOutlet UIView *scalableView;
@property (strong, nonatomic) IBOutlet UIView *mainView;
@property (strong, nonatomic) IBOutlet UIImageView *arrows;
@property (strong, nonatomic) IBOutlet UIImageView *blocked;
-(void) scaleTheView:(UIPinchGestureRecognizer *)pinchRecognizer;
@end
And add the recognizer in the viewDidLoad:
- (void)viewDidLoad
{
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scaleTheView:)];
[pinchGesture setDelegate:self];
[mainView addGestureRecognizer:pinchGesture];
arrows.hidden = TRUE;
blocked.hidden = TRUE;
[scalableView setBackgroundColor:[UIColor whiteColor]];
}
This is set up to use the main view to capture the pinch and manipulate a second view. That way you can still scale it even as the view gets small. You can change it to react directly to the scalable view.
LIMITS: I arbitrarily chose the starting size of my view so a scale limit of 2.0 would equal full screen. My lower scale is set at 0.1.
USER INTERACTION: I mess around with a lot of user interaction things like changing the view's background color and adding/changing arrows over the view to show direction. It's important to give the user feedback during the scaling process, especially when changing directions as this code allows.
BUG: There is a bug in Apple's UIPinchGestureRecognizer. It registers UIGestureRecognizerStateBegan with the touch of 2 fingers as you would expect. But once it is in StateBegan or StateChanged you can lift one finger and the state remains. It doesn't move to StateEnded or StateCancelled until BOTH fingers are lifted. This created a bug in my code and many headaches! The if numberOfTouches > 1 fixes it.
FUTURE: You can change the slope settings to scale in just one direction, or just 2. If you add the arrows images, you can see them change as you rotate your fingers.
Here's a solution in Swift:
extension UIPinchGestureRecognizer {
func scale(view: UIView) -> (x: CGFloat, y: CGFloat)? {
if numberOfTouches() > 1 {
let touch1 = self.locationOfTouch(0, inView: view)
let touch2 = self.locationOfTouch(1, inView: view)
let deltaX = abs(touch1.x - touch2.x)
let deltaY = abs(touch1.y - touch2.y)
let sum = deltaX + deltaY
if sum > 0 {
let scale = self.scale
return (1.0 + (scale - 1.0) * (deltaX / sum), 1.0 + (scale - 1.0) * (deltaY / sum))
}
}
return nil
}
}
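A minimal usage sketch of the extension (kept in the same pre-Swift-3 syntax as the extension itself; scalableView is an assumed property, not part of the original answer):
func handlePinch(recognizer: UIPinchGestureRecognizer) {
    if recognizer.state == .Began || recognizer.state == .Changed {
        if let scale = recognizer.scale(scalableView) {
            // Apply the per-axis scale returned by the extension, then reset the recognizer.
            scalableView.transform = CGAffineTransformScale(scalableView.transform, scale.x, scale.y)
            recognizer.scale = 1.0
        }
    }
}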
This alternative solution determines the direction of scaling based on bearing angle rather than slope. I find it a bit easier to adjust the different zones using angle measurements.
@objc func viewPinched(sender: UIPinchGestureRecognizer) {
// Scale the view either vertically, horizontally, or diagonally based on the axis of the initial pinch
let locationOne = sender.location(ofTouch: 0, in: sender.view)
let locationTwo = sender.location(ofTouch: 1, in: sender.view)
let diffX = locationOne.x - locationTwo.x
let diffY = locationOne.y - locationTwo.y
// Break the plane into 3 equal segments
// Inverse tangent will return between π/2 and -π/2. Absolute value can be used to only consider 0 to π/2 - don't forget to handle divide by 0 case
// Breaking π/2 into three equal pieces, we get regions of 0 to π/6, π/6 to 2π/6, and 2π/6 to π/2 (note 2π/6 = π/3)
// Radian reminder - π/2 is 90 degrees :)
let bearingAngle = diffY == 0 ? CGFloat.pi / 2.0 : abs(atan(diffX/diffY))
if sender.state == .began {
// Determine type of pan based on bearing angle formed by the two touch points.
// Only do this when the pan begins - don't change type as the user rotates their fingers. Require a new gesture to change pan type
if bearingAngle < CGFloat.pi / 6.0 {
panType = .vertical
} else if bearingAngle < CGFloat.pi / 3.0 {
panType = .diagonal
} else if bearingAngle <= CGFloat.pi / 2.0 {
panType = .horizontal
}
}
// Scale the view based on the pan type
switch panType {
case .diagonal: transform = CGAffineTransform(scaleX: sender.scale, y: sender.scale)
case .horizontal: transform = CGAffineTransform(scaleX: sender.scale, y: 1.0)
case .vertical: transform = CGAffineTransform(scaleX: 1.0, y: sender.scale)
}
}