UIPanGestureRecognizer to implement scale and rotate a UIView - iOS

There is a UIView A. I put an icon on view A and try to use a pan gesture to scale and rotate view A. The scale function works fine but I can't make rotation work. The code is as follows. Any help will be appreciated. Thanks.
- (void)scaleAndRotateWatermark:(UIPanGestureRecognizer *)gesture
{
    if (gesture.state == UIGestureRecognizerStateChanged
        || gesture.state == UIGestureRecognizerStateEnded)
    {
        UIView *child = gesture.view;
        UIView *view = child.superview;
        CGPoint translatedPoint = [gesture translationInView:view];
        CGPoint originCenter = child.center;
        CGPoint newCenter = CGPointMake(child.center.x + translatedPoint.x,
                                        child.center.y + translatedPoint.y);
        float origX = originCenter.x;
        float origY = originCenter.y;
        float newX = newCenter.x;
        float newY = newCenter.y;
        float originDis = (origX * origX) + (origY * origY);
        float newDis = (newX * newX) + (newY * newY);
        float scale = newDis / originDis;
        view.transform = CGAffineTransformScale(view.transform, scale, scale);
        // rotation calculation goes here (see the sketch below)
        // need your help
        // end of rotation
        [gesture setTranslation:CGPointZero inView:_videoPreviewLayer];
    }
}
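One possible way to fill in the rotation step, sketched under the assumption that the intended pivot is the same superview origin the scale math above already measures against: take the change in the angle of the child's center, via atan2f.

// Sketch only: rotate by the change in angle of the center point,
// measured around the superview's origin (the same reference the
// distance-based scale calculation above uses).
float originAngle = atan2f(origY, origX);
float newAngle = atan2f(newY, newX);
view.transform = CGAffineTransformRotate(view.transform, newAngle - originAngle);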

Related

Limit panning during pinch to zoom on iOS

In my app I have implemented pinch to zoom and panning. Both zoom and pan are working but I would like to limit panning so that the edge of the zoomed image cannot be dragged into the view leaving empty space.
The range of coordinates for different scale factors does not seem to be linear. For example, on an iPad with an image view that is 764x764, at 2x zoom, the range of the transform X coordinate for a full pan is -191 to 191. At 3x, the range is about -254 to 254.
So my question is, how do I calculate the pan limit for any given scale factor?
Here is my code for the gesture recognizers:
@interface myVC ()
{
    CGPoint ptPanZoom1;
    CGPoint ptPanZoom2;
    CGFloat fltScale;
}
@property (nonatomic, strong) UIImageView *imgView; // View hosting pan/zoom image
@end
- (IBAction)handlePan:(UIPanGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateBegan)
    {
        ptPanZoom2 = ptPanZoom1;
        return;
    }
    if ((sender.state == UIGestureRecognizerStateChanged)
        || (sender.state == UIGestureRecognizerStateEnded))
    {
        CGPoint ptTrans = [sender translationInView:self.imgView];
        ptPanZoom1.x = ptTrans.x + ptPanZoom2.x;
        ptPanZoom1.y = ptTrans.y + ptPanZoom2.y;
        CGAffineTransform trnsFrm1 = CGAffineTransformMakeTranslation(ptPanZoom1.x, ptPanZoom1.y);
        CGAffineTransform trnsFrm2 = CGAffineTransformMakeScale(fltScale, fltScale);
        self.imgView.transform = CGAffineTransformConcat(trnsFrm1, trnsFrm2);
    }
}
- (IBAction)handlePinch:(UIPinchGestureRecognizer *)sender
{
    if ((sender.state == UIGestureRecognizerStateBegan)
        || (sender.state == UIGestureRecognizerStateChanged)
        || (sender.state == UIGestureRecognizerStateEnded))
    {
        fltScale *= sender.scale;
        sender.scale = 1.0;
        if (fltScale <= 1.0)
        {
            fltScale = 1.0;
            ptPanZoom1.x = 0.0; // When scale goes to 1, snap position back
            ptPanZoom1.y = 0.0;
        }
        else if (fltScale > 6.0)
        {
            fltScale = 6.0;
        }
        CGAffineTransform trnsFrm1 = CGAffineTransformMakeTranslation(ptPanZoom1.x, ptPanZoom1.y);
        CGAffineTransform trnsFrm2 = CGAffineTransformMakeScale(fltScale, fltScale);
        self.imgView.transform = CGAffineTransformConcat(trnsFrm1, trnsFrm2);
    }
}
I figured out the equation for determining the pan limits:
int iMaxPanX = ((self.imgView.bounds.size.width * (fltScale - 1.0)) / fltScale) / 2;
int iMinPanX = -iMaxPanX;
int iMaxPanY = ((self.imgView.bounds.size.height * (fltScale - 1.0)) / fltScale) / 2;
int iMinPanY = -iMaxPanY;
Sanity check: for the 764-point view this formula gives ±191 at 2x and about ±254 at 3x, matching the values observed above. In my pan gesture handler I allow the pan to go slightly beyond the limits during a changed event so that the user can easily see they have reached the edge.
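A minimal sketch of that soft limit, applied in the changed state of the pan handler; the slack constant is hypothetical, not from the original post:

// Hypothetical soft clamp: allow a small overshoot so the user sees the edge.
CGFloat slack = 20.0;
CGFloat maxPanX = ((self.imgView.bounds.size.width * (fltScale - 1.0)) / fltScale) / 2.0;
CGFloat maxPanY = ((self.imgView.bounds.size.height * (fltScale - 1.0)) / fltScale) / 2.0;
ptPanZoom1.x = MAX(MIN(ptPanZoom1.x, maxPanX + slack), -maxPanX - slack);
ptPanZoom1.y = MAX(MIN(ptPanZoom1.y, maxPanY + slack), -maxPanY - slack);
// On UIGestureRecognizerStateEnded, clamp again without the slack
// (ideally animated) so the view settles inside the hard limits.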

Translating a UIView after rotating

I'm trying to translate a UIView that has been rotated and/or scaled by the user's touches. I translate it with user input as well:
- (void)handleObjectMove:(UIPanGestureRecognizer *)recognizer
{
    static CGPoint lastPoint;
    UIView *moveView = [recognizer view];
    CGPoint newCoord = [recognizer locationInView:playArea];
    // Check if this is the first touch
    if ([recognizer state] == UIGestureRecognizerStateBegan)
    {
        // Store the initial touch so when we change positions we do not snap
        lastPoint = newCoord;
    }
    // Create the frame offsets to use our finger position in the view.
    float dX = newCoord.x;
    float dY = newCoord.y;
    dX -= lastPoint.x;
    dY -= lastPoint.y;
    // Figure out the translation based on how we are scaled
    CGAffineTransform transform = [moveView transform];
    CGFloat xScale = transform.a;
    CGFloat yScale = transform.d;
    dX /= xScale;
    dY /= yScale;
    lastPoint = newCoord;
    [moveView setTransform:CGAffineTransformTranslate(transform, dX, dY)];
    [recognizer setTranslation:CGPointZero inView:playArea];
}
But when I touch and move the view it gets translated in all different weird ways. Can I apply some sort of formula using the rotation values to translate properly?
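For what it's worth, the formula the question asks about does exist: rotate the pan delta back into the view's local coordinate space, and divide out the scale, before translating. A sketch, in place of the xScale/yScale division above, assuming rotation and scale are recovered from the transform matrix:

// Sketch: map the screen-space pan delta (dX, dY) into the view's local space.
CGAffineTransform t = moveView.transform;
CGFloat angle = atan2f(t.b, t.a);   // rotation baked into the transform
CGFloat scaleX = hypotf(t.a, t.b);  // scale baked into the transform
CGFloat scaleY = hypotf(t.c, t.d);
CGFloat localDX = ( dX * cosf(angle) + dY * sinf(angle)) / scaleX;
CGFloat localDY = (-dX * sinf(angle) + dY * cosf(angle)) / scaleY;
moveView.transform = CGAffineTransformTranslate(t, localDX, localDY);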
The best solution I've found that requires the least math was to store the translation, rotation, and scaling values separately and redo the transform whenever they change. My solution was to subclass UIView with the following properties:
@property (nonatomic) CGPoint translation;
@property (nonatomic) CGFloat rotation;
@property (nonatomic) CGPoint scaling;
And the following functions:
- (void)rotationDelta:(CGFloat)delta
{
    [self setRotation:[self rotation] + delta];
}

- (void)scalingDelta:(CGPoint)delta
{
    [self setScaling:
        (CGPoint){ [self scaling].x * delta.x, [self scaling].y * delta.y }];
}

- (void)translationDelta:(CGPoint)delta
{
    [self setTranslation:
        (CGPoint){ [self translation].x + delta.x, [self translation].y + delta.y }];
}

- (void)transformMe
{
    // Start with the translation
    CGAffineTransform transform = CGAffineTransformMakeTranslation([self translation].x, [self translation].y);
    // Apply scaling
    transform = CGAffineTransformScale(transform, [self scaling].x, [self scaling].y);
    // Apply rotation
    transform = CGAffineTransformRotate(transform, [self rotation]);
    [self setTransform:transform];
}

- (void)setScaling:(CGPoint)newScaling
{
    _scaling = newScaling; // auto-synthesized ivar; only the setter is hand-written
    [self transformMe];
}

- (void)setRotation:(CGFloat)newRotation
{
    _rotation = newRotation;
    [self transformMe];
}

- (void)setTranslation:(CGPoint)newTranslation
{
    _translation = newTranslation;
    [self transformMe];
}
And use the following in the handlers:
- (void)handleObjectPinch:(UIPinchGestureRecognizer *)recognizer
{
    if ([recognizer state] == UIGestureRecognizerStateEnded
        || [recognizer state] == UIGestureRecognizerStateChanged)
    {
        // Get my stuff
        if (!selectedView)
            return;
        SelectableImageView *view = selectedView;
        CGFloat scaleDelta = [recognizer scale];
        [view scalingDelta:(CGPoint){ scaleDelta, scaleDelta }];
        [recognizer setScale:1.0];
    }
}

- (void)handleObjectMove:(UIPanGestureRecognizer *)recognizer
{
    static CGPoint lastPoint;
    SelectableImageView *moveView = (SelectableImageView *)[recognizer view];
    CGPoint newCoord = [recognizer locationInView:playArea];
    // Check if this is the first touch
    if ([recognizer state] == UIGestureRecognizerStateBegan)
    {
        // Store the initial touch so when we change positions we do not snap
        lastPoint = newCoord;
    }
    // Create the frame offsets to use our finger position in the view.
    float dX = newCoord.x;
    float dY = newCoord.y;
    dX -= lastPoint.x;
    dY -= lastPoint.y;
    lastPoint = newCoord;
    [moveView translationDelta:(CGPoint){ dX, dY }];
    [recognizer setTranslation:CGPointZero inView:playArea];
}

- (void)handleRotation:(UIRotationGestureRecognizer *)recognizer
{
    if ([recognizer state] == UIGestureRecognizerStateEnded
        || [recognizer state] == UIGestureRecognizerStateChanged)
    {
        if (!selectedView)
            return;
        SelectableImageView *view = selectedView;
        CGFloat rotation = [recognizer rotation];
        [view rotationDelta:rotation];
        [recognizer setRotation:0.0];
    }
}
Try changing moveView.center instead of setting (x, y) directly or using CGAffineTransformTranslate.
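A minimal sketch of that suggestion, assuming the same handler shape as above; since center lives in the superview's coordinate space, the view's own rotation and scale do not distort the movement:

- (void)handleObjectMove:(UIPanGestureRecognizer *)recognizer
{
    UIView *moveView = recognizer.view;
    CGPoint translation = [recognizer translationInView:moveView.superview];
    // Moving center needs no rotation math: it is expressed in the
    // superview's coordinates, untouched by this view's transform.
    moveView.center = CGPointMake(moveView.center.x + translation.x,
                                  moveView.center.y + translation.y);
    [recognizer setTranslation:CGPointZero inView:moveView.superview];
}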
Here is the Swift 4/5 version for a transformable UIView
class TransformableImageView: UIView {
    var translation = CGPoint(x: 0, y: 0)
    var scale = CGPoint(x: 1, y: 1)
    var rotation: CGFloat = 0

    override init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func rotationDelta(delta: CGFloat) {
        rotation += delta
        applyTransform() // applied immediately, mirroring the Objective-C version
    }

    func scaleDelta(delta: CGPoint) {
        scale = CGPoint(x: scale.x * delta.x, y: scale.y * delta.y)
        applyTransform()
    }

    func translationDelta(delta: CGPoint) {
        translation = CGPoint(x: translation.x + delta.x, y: translation.y + delta.y)
        applyTransform()
    }

    // Named applyTransform() because a method named transform() would
    // collide with UIView's transform property.
    func applyTransform() {
        transform = CGAffineTransform.identity
            .translatedBy(x: translation.x, y: translation.y)
            .scaledBy(x: scale.x, y: scale.y)
            .rotated(by: rotation)
    }
}
I'm leaving this here as I also encountered the same problem. Here is how to do it in Swift 2:
Add your top view as subview to your bottom view:
self.view.addSubview(topView)
Then add a PanGesture Recognizer to move on touch:
//Add PanGestureRecognizer to move
let panMoveGesture = UIPanGestureRecognizer(target: self, action: #selector(YourViewController.moveViewPanGesture(_:)))
topView.addGestureRecognizer(panMoveGesture)
And the function to move:
//Move function
func moveViewPanGesture(recognizer: UIPanGestureRecognizer)
{
    if recognizer.state == .Changed {
        var center = recognizer.view!.center
        let translation = recognizer.translationInView(recognizer.view?.superview)
        center = CGPoint(x: center.x + translation.x,
                         y: center.y + translation.y)
        recognizer.view!.center = center
        recognizer.setTranslation(CGPoint.zero, inView: recognizer.view)
    }
}
Basically, you need to translate your view relative to the bottom view, which is its superview, not relative to itself: hence recognizer.view?.superview.
Or, if you also rotate the bottom view, you may add a view that will not receive any transform, add your bottom view to that non-transforming view (the very bottom view), and add your top view to the bottom view as a subview. Then translate your top view relative to the very bottom view.

How to Resize UIView with Subview

My goal is to resize a UIView with a handle, its subview. I got my project working perfectly to accomplish that: the handle has a UIPanGestureRecognizer, and its handler method resizes the view (the handle's parent) using the following method:
// Note: CGAffineTransformGetAngle, CGPointGetDistance, and CGRectScale are
// custom CG_INLINE helpers defined elsewhere in the project.
- (IBAction)handleResizeGesture:(UIPanGestureRecognizer *)recognizer
{
    CGPoint touchLocation = [recognizer locationInView:container.superview];
    CGPoint center = container.center;
    switch (recognizer.state) {
        case UIGestureRecognizerStateBegan: {
            deltaAngle = atan2f(touchLocation.y - center.y, touchLocation.x - center.x) -
                         CGAffineTransformGetAngle(container.transform);
            initialBounds = container.bounds;
            initialDistance = CGPointGetDistance(center, touchLocation);
            break;
        }
        case UIGestureRecognizerStateChanged: {
            CGFloat scale = CGPointGetDistance(center, touchLocation) / initialDistance;
            CGFloat minimumScale = self.minimumSize / MIN(initialBounds.size.width,
                                                          initialBounds.size.height);
            scale = MAX(scale, minimumScale);
            CGRect scaledBounds = CGRectScale(initialBounds, scale, scale);
            container.bounds = scaledBounds;
            [container setNeedsDisplay];
            break;
        }
        case UIGestureRecognizerStateEnded:
            break;
        default:
            break;
    }
}
Please note that I use the center and bounds properties because frame is NOT reliable when a transform is applied to my view.
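To illustrate that point (an aside, not from the original post): once a rotation is applied, frame reports the axis-aligned bounding box of the rotated view, while bounds stays put:

UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
v.transform = CGAffineTransformMakeRotation(M_PI_4); // 45 degrees
// bounds is still {{0, 0}, {100, 100}};
// frame is now roughly {{-20.7, -20.7}, {141.4, 141.4}}.
NSLog(@"bounds = %@", NSStringFromCGRect(v.bounds));
NSLog(@"frame = %@", NSStringFromCGRect(v.frame));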
However, my requirement is really to resize the view in ANY direction, not only proportionally as the code does. The problem is that I cannot find the right method or approach by which this handle can resize its superview's bounds (width or height) so that it always sticks to the corner while the finger drags it around.
Here is my project if it is easier to see what I mean.
The updated solution suggested by an answer below works very well, but once a transform is applied (e.g., in viewDidLoad I have container.transform = CGAffineTransformMakeRotation(90);) it does not:
case UIGestureRecognizerStateBegan: {
    initialBounds = container.bounds;
    initialDistance = CGPointGetDistance(center, touchLocation);
    initialDistanceX = CGPointGetDistanceX(center, touchLocation);
    initialDistanceY = CGPointGetDistanceY(center, touchLocation);
    break;
}
case UIGestureRecognizerStateChanged: {
    // fabs(), not abs(): abs() truncates CGFloat arguments to int.
    CGFloat scaleX = fabs(center.x - touchLocation.x) / initialDistanceX;
    CGFloat scaleY = fabs(center.y - touchLocation.y) / initialDistanceY;
    CGFloat minimumScale = self.minimumSize / MIN(initialBounds.size.width, initialBounds.size.height);
    scaleX = MAX(scaleX, minimumScale);
    scaleY = MAX(scaleY, minimumScale);
    CGRect scaledBounds = CGRectScale(initialBounds, scaleX, scaleY);
    container.bounds = scaledBounds;
    [container setNeedsDisplay];
    break;
}
where
CG_INLINE CGFloat CGPointGetDistanceX(CGPoint point1, CGPoint point2) {
    return (point2.x - point1.x);
}
CG_INLINE CGFloat CGPointGetDistanceY(CGPoint point1, CGPoint point2) {
    return (point2.y - point1.y);
}
You are setting the same scale parameter in your call to CGRectScale(initialBounds, scale, scale); try this:
case UIGestureRecognizerStateChanged: {
    // fabs(), not abs(): abs() truncates CGFloat arguments to int.
    CGFloat scaleX = fabs(center.x - touchLocation.x) / initialDistance;
    CGFloat scaleY = fabs(center.y - touchLocation.y) / initialDistance;
    CGFloat minimumScale = self.minimumSize / MIN(initialBounds.size.width, initialBounds.size.height);
    scaleX = MAX(scaleX, minimumScale);
    scaleY = MAX(scaleY, minimumScale);
    CGRect scaledBounds = CGRectScale(initialBounds, scaleX, scaleY);
    container.bounds = scaledBounds;
    [container setNeedsDisplay];
    break;
}
You may also consider storing initialDistanceX and initialDistanceY.
Use UIPinchGestureRecognizer & UIPanGestureRecognizer.
Try this code
//--Create and configure the pinch gesture
UIPinchGestureRecognizer *pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchGestureDetected:)];
[pinchGestureRecognizer setDelegate:self];
[container.superview addGestureRecognizer:pinchGestureRecognizer];
//--Create and configure the pan gesture
UIPanGestureRecognizer *panGestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panGestureDetected:)];
[panGestureRecognizer setDelegate:self];
[container.superview addGestureRecognizer:panGestureRecognizer];
For UIPinchGestureRecognizer:
- (void)pinchGestureDetected:(UIPinchGestureRecognizer *)recognizer
{
    UIGestureRecognizerState state = [recognizer state];
    if (state == UIGestureRecognizerStateBegan || state == UIGestureRecognizerStateChanged)
    {
        CGFloat scale = [recognizer scale];
        [recognizer.view setTransform:CGAffineTransformScale(recognizer.view.transform, scale, scale)];
        [recognizer setScale:1.0];
        panGesture = YES;
    }
}
For UIPanGestureRecognizer:
- (void)panGestureDetected:(UIPanGestureRecognizer *)recognizer
{
    UIGestureRecognizerState state = [recognizer state];
    if (panGesture == YES)
    {
        if (state == UIGestureRecognizerStateBegan || state == UIGestureRecognizerStateChanged)
        {
            CGPoint translation = [recognizer translationInView:recognizer.view];
            [recognizer.view setTransform:CGAffineTransformTranslate(recognizer.view.transform, translation.x, translation.y)];
            [recognizer setTranslation:CGPointZero inView:recognizer.view];
        }
    }
}
I'd not control it with a subview; it gets messy when you apply a transform to the outer view.
Simply add two views to the ViewController, and hook them up with some code.
I've altered the project, find test-001 at GitHub.
ViewController code can be empty for this.
The view you want to transform, TransformableView, needs a scale property and a method to apply it (useful if you want to add other transformations as well).
@interface TransformableView : UIView
@property (nonatomic) CGSize scale;
- (void)applyTransforms;
@end

@implementation TransformableView

- (void)applyTransforms
{
    // Do whatever additional transform you want (e.g. concat additional rotations / mirroring to the transform).
    self.transform = CGAffineTransformMakeScale(self.scale.width, self.scale.height);
}

@end
And the DraggableCorner can manage the rest (with some basic math).
@class TransformableView;

@interface DraggableCorner : UIView
@property (nonatomic, weak) IBOutlet TransformableView *targetView; // Hook up in IB.
- (IBAction)panGestureRecognized:(UIPanGestureRecognizer *)recognizer;
@end

CG_INLINE CGPoint offsetOfPoints(CGPoint point1, CGPoint point2)
{ return (CGPoint){point1.x - point2.x, point1.y - point2.y}; }
CG_INLINE CGPoint addPoints(CGPoint point1, CGPoint point2)
{ return (CGPoint){point1.x + point2.x, point1.y + point2.y}; }

@interface DraggableCorner ()
@property (nonatomic) CGPoint touchOffset;
@end

@implementation DraggableCorner

- (IBAction)panGestureRecognized:(UIPanGestureRecognizer *)recognizer
{
    // Location in superview.
    CGPoint touchLocation = [recognizer locationInView:self.superview];
    // Began.
    if (recognizer.state == UIGestureRecognizerStateBegan)
    {
        // Finger distance from handler.
        self.touchOffset = offsetOfPoints(self.center, touchLocation);
    }
    // Moved.
    if (recognizer.state == UIGestureRecognizerStateChanged)
    {
        // Drag.
        self.center = addPoints(touchLocation, self.touchOffset);
        // Desired size.
        CGPoint distanceFromTargetCenter = offsetOfPoints(self.center, self.targetView.center);
        CGSize desiredTargetSize = (CGSize){distanceFromTargetCenter.x * 2.0, distanceFromTargetCenter.y * 2.0};
        // -----
        // You can put limitations here, simply clamp `desiredTargetSize` to some value.
        // -----
        // Scale needed for desired size.
        CGSize targetSize = self.targetView.bounds.size;
        CGSize targetRatio = (CGSize){desiredTargetSize.width / targetSize.width, desiredTargetSize.height / targetSize.height};
        // Apply.
        self.targetView.scale = targetRatio;
        [self.targetView applyTransforms];
    }
}

@end
Set the classes in IB, and hook up the IBAction and the IBOutlet of DraggableCorner.

UIView flipping by UIPanGestureRecognizer

I'm trying to use UIPanGestureRecognizer to flip a UIView. I'm using the code below to handle flipping the view:
- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    CGPoint translation = [gesture translationInView:self.view];
    NSLog(@"PAN X VALUE %f", translation.x);
    double percentageOfWidth = translation.x / (gesture.view.frame.size.width / 2);
    float angle = (percentageOfWidth * 100) * M_PI_2 / 180.0f;
    CALayer *layer = testView.layer;
    CATransform3D flipTransform = CATransform3DIdentity;
    flipTransform.m34 = -0.002f;
    flipTransform = CATransform3DRotate(flipTransform, angle, 0.0f, 1.0f, 0.0f);
    layer.transform = flipTransform;
}
My problem is that when I pan, there are sometimes quick jumps; I believe that's because the translation.x (PAN X VALUE) jumps ahead by a few points. In my case I need it to be very smooth.
Any help greatly appreciated.
Thanks in advance.
You can use the gestureRecognizerShouldBegin: delegate method, with which you can limit the UIPanGestureRecognizer's sensitivity.
Example:
- (BOOL)gestureRecognizerShouldBegin:(UIPanGestureRecognizer *)panGestureRecognizer {
    CGPoint translation = [panGestureRecognizer translationInView:self.view];
    return fabs(translation.y) < fabs(translation.x);
}
Here's how I solved this for a cube rotation - just take the amount dragged and divide:
- (void)panHandle:(UIPanGestureRecognizer *)recognizer
{
    if ([recognizer state] == UIGestureRecognizerStateBegan)
    {
        CGPoint translation = [recognizer translationInView:[self superview]];
        _startingX = translation.x;
    }
    else if ([recognizer state] == UIGestureRecognizerStateChanged)
    {
        CGPoint translation = [recognizer translationInView:[self superview]];
        CGFloat angle = -(_startingX - translation.x) / 4;
        // Now do translation with angle
        _transitionContainer.layer.sublayerTransform = [self->_currentMetrics asTransformWithAngle:angle];
    }
    else if ([recognizer state] == UIGestureRecognizerStateEnded)
    {
        [self transitionOrReturnToOrigin];
    }
}

Does the bounds of a UIView change when a pinch gesture is used on it?

I am using the UIPinchGestureRecognizer to expand/reduce a UIView.
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scaleElement:)];
[pinchGesture setDelegate:self];
[element addGestureRecognizer:pinchGesture];
[pinchGesture release];
//Scale element method
- (void)scaleElement:(UIPinchGestureRecognizer *)gestureRecognizer {
    UIView *element = [gestureRecognizer view];
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
        lastTouchPosition = [gestureRecognizer locationInView:element];
    }
    else if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGPoint currentTouchPosition = [gestureRecognizer locationInView:element];
        CGPoint deltaMove = [self calculatePointDistancewithPoint1:currentTouchPosition andPoint2:lastTouchPosition];
        float distance = sqrt(deltaMove.x * deltaMove.x + deltaMove.y * deltaMove.y);
        float hScale = 1 - deltaMove.x / distance * (1 - gestureRecognizer.scale);
        float vScale = 1 - deltaMove.y / distance * (1 - gestureRecognizer.scale);
        if (distance == 0) {
            hScale = 1;
            vScale = 1;
        }
        element.transform = CGAffineTransformScale([element transform], hScale, vScale);
        CGAffineTransform transform = CGAffineTransformMakeScale(hScale, vScale);
        element.bounds = CGRectApplyAffineTransform(element.bounds, transform);
        [gestureRecognizer setScale:1];
        lastTouchPosition = currentTouchPosition;
    }
    if ([gestureRecognizer state] == UIGestureRecognizerStateEnded) {
        NSLog(@"scaling over");
        NSLog(@"bounds = %@", NSStringFromCGRect(element.bounds));
        NSLog(@"frame = %@", NSStringFromCGRect(element.frame));
    }
    NSLog(@"scalePiece exit");
}
//adjustAnchorPointForGestureRecognizer method
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer {
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *elementToAdjust = gestureRecognizer.view;
        [self.view bringSubviewToFront:gestureRecognizer.view];
        CGPoint locationInView = [gestureRecognizer locationInView:elementToAdjust];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:elementToAdjust.superview];
        elementToAdjust.layer.anchorPoint = CGPointMake(locationInView.x / elementToAdjust.bounds.size.width, locationInView.y / elementToAdjust.bounds.size.height);
        elementToAdjust.center = locationInSuperview;
    }
}
Console print:
bounds = {{0, 0}, {178.405, 179.018}}
frame = {{300.642, 566.184}, {192.899, 194.227}}
Why aren't the bounds adjusting when the frame is changing?
Does it have anything to do with autoresizing masks, as I have subviews in this view?
This has nothing to do with UIPinchGestureRecognizer. This has everything to do with your setting the transform to perform scaling. Changing the transform does not change bounds. It just changes how this view's coordinate space (bounds) maps to the superview's coordinate space (frame). If you need bounds to match frame, you have to change one of them directly, not use transform. That said, you should generally use transform because it's much faster.
EDIT
If you don't mean to scale, but rather mean to resize the view, call setBounds:. You can find the new bounds by applying the transform to the bounds rather than the element.
CGPoint currentTouchPosition = [gestureRecognizer locationInView:element];
CGPoint deltaMove = [self calculatePointDistancewithPoint1:currentTouchPosition andPoint2:lastTouchPosition];
...
CGAffineTransform transform = CGAffineTransformMakeScale(hScale, vScale);
self.bounds = CGRectApplyAffineTransform(self.bounds, transform);
