Scale and move a UIView using UIPanGestureRecognizer - iOS

I have a UIView A. I put an icon on view A and I'm trying to use a pan gesture to scale and move view A.
I have tried many solutions, but I cannot make it work.
Can anyone suggest an approach?
More detail:
I have view A added as a subview of self.view. When I pan on view A, I want it to follow my finger.
And while moving, view A should scale: smaller as it moves toward the top/left/bottom/right edges of the screen, and larger as it moves toward the center of the screen.

Let's say you have a view controller vc and your UIView A is a subview of vc's view. You can add a UIPanGestureRecognizer to A, then drag the pan gesture recognizer into vc as an action:
@IBAction func panGestureAction(_ sender: UIPanGestureRecognizer) {
    // your code here
}
From the sender you can check the view, location and state of the gesture. The state might affect your code in some cases, depending on what you are trying to achieve.
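For instance, a throwaway sketch that just logs those values (purely illustrative; replace the prints with whatever behaviour you need):
@IBAction func panGestureAction(_ sender: UIPanGestureRecognizer) {
    // Inspect the view, location and state reported by the recognizer.
    let location = sender.location(in: sender.view?.superview)
    switch sender.state {
    case .began:
        print("pan began on \(String(describing: sender.view)) at \(location)")
    case .changed:
        print("pan moved to \(location)")
    case .ended, .cancelled:
        print("pan finished")
    default:
        break
    }
}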
Then you need to modify the action to this:
@IBAction func panGestureAction(_ sender: UIPanGestureRecognizer) {
    UIView.animateKeyframes(withDuration: 0.1, delay: 0, options: UIViewKeyframeAnimationOptions.calculationModeLinear, animations: {
        let location = sender.location(in: sender.view?.superview)
        sender.view?.center = location
    })
}
Here sender.view?.superview equals vc.view. This code snippet detects the pan gesture and then moves A so that A.center matches the gesture's location. Note that the 0.1 duration gives the movement a smooth animation effect.
This gives you the "move" functionality with the pan gesture.
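As an aside, some of the related answers further down this page move the view by the gesture's translation instead of its absolute location; a minimal sketch of that variant (illustrative only, not tied to the storyboard action above):
@objc func handlePanByTranslation(_ sender: UIPanGestureRecognizer) {
    guard let panned = sender.view else { return }
    // Move the view by the incremental translation, then reset it so the
    // next callback reports a fresh delta.
    let translation = sender.translation(in: panned.superview)
    panned.center = CGPoint(x: panned.center.x + translation.x, y: panned.center.y + translation.y)
    sender.setTranslation(.zero, in: panned.superview)
}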
EDIT for scaling:
Logic: you have a coordinate system (CS) with a center, x and y. When the user pans, they generate a sequence of points in the CS. So our task is to measure the distance between the center of the CS and the user's points. Once we know the furthest possible distance, we can calculate a scale factor for our scaling view.
var center: CGPoint! //center of the CS
let maxSize: CGSize = CGSize.init(width: 100, height: 100) // maximum size of our scaling view
var maxLengthToCenter: CGFloat! //maximum distance from the center of the CS to the furthest point in the CS

private func prepareForScaling() {
    self.center = self.view.center //we set the center of our CS to equal the center of the VC's view
    let frame = self.view.frame
    //the furthest distance in the CS is the diagonal, and we calculate it using Pythagoras' theorem
    self.maxLengthToCenter = (frame.width*frame.width + frame.height*frame.height).squareRoot()
}
Then we need to call our setup function to have the data ready for the scaling functionality - we can do this in viewDidLoad:
override func viewDidLoad() {
    super.viewDidLoad()
    self.prepareForScaling()
}
Then we need a helper function that calculates the scaled size of our view for the pan gesture's current position on the screen.
private func scaledSize(for location: CGPoint) -> CGSize {
    //calculate the location's x,y differences from the center
    let xDifference = location.x - self.center.x
    let yDifference = location.y - self.center.y
    //calculate the scale factor - note that this factor will be between 0.0 (center) and 0.5 (diagonal - furthest point)
    //This is due to our measurement - from the center to the view's edge. Consider multiplying this factor by your own constant.
    let scaleFactor = (xDifference*xDifference + yDifference*yDifference).squareRoot()/maxLengthToCenter
    //create the scaled size from maxSize and the current scale factor
    let scaledSize = CGSize.init(width: maxSize.width*(1-scaleFactor), height: maxSize.height*(1-scaleFactor))
    return scaledSize
}
And finally, we need to modify our pan gesture action to change the size of A:
@IBAction func panGestureAction(_ sender: UIPanGestureRecognizer) {
    UIView.animateKeyframes(withDuration: 0.1, delay: 0, options: UIViewKeyframeAnimationOptions.calculationModeLinear, animations: {
        let location = sender.location(in: sender.view?.superview)
        sender.view?.frame = CGRect.init(origin: CGPoint.init(x: 0, y: 0), size: self.scaledSize(for: location))
        sender.view?.center = location
    })
}
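As an alternative to rebuilding the frame on every callback, you could express the same shrink/grow behaviour as a transform; a rough sketch reusing the same factor as scaledSize(for:), assuming the view's untransformed size is already maxSize:
@IBAction func panGestureAction(_ sender: UIPanGestureRecognizer) {
    guard let panned = sender.view else { return }
    let location = sender.location(in: panned.superview)
    // Same 0.0...0.5 factor computed in scaledSize(for:)
    let xDifference = location.x - self.center.x
    let yDifference = location.y - self.center.y
    let scaleFactor = (xDifference*xDifference + yDifference*yDifference).squareRoot()/maxLengthToCenter
    UIView.animate(withDuration: 0.1) {
        // Scale around the view's center rather than resetting its frame.
        panned.transform = CGAffineTransform(scaleX: 1 - scaleFactor, y: 1 - scaleFactor)
        panned.center = location
    }
}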

Related

iOS - How to have a separate UIView take control of a UISlider's thumbRect

I have a button that I press and a custom alert appears with a normal UISlider inside of it. There are some other views and labels of mine, showing the distance etc., that are above and behind the slider. What happens is the thumbRect of the slider doesn't slide too well when touched. I have to be very accurate when trying to slide it and it seems buggy. What I want to do is add a clear UIView in front of it (and the other views and labels above it) and have the clear UIView take control of the slider using a UIGestureRecognizer.
Here's the setup so far. The clearView is what I want to use to take control of the slider. Right now it's behind everything, and I colored the clearView red just so it's visible for the example.
bgView.addSubview(slider) // slider added to bgView
let trackRect = slider.trackRect(forBounds: slider.frame)
let thumbRect = slider.thumbRect(forBounds: slider.bounds, trackRect: trackRect, value: slider.value)
milesLabel.center = CGPoint(x: thumbRect.midX, y: slider.frame.origin.y - 55)
bgView.addSubview(milesLabel) // milesLabel added to bgView
// ** all the other views you see are added to the bgView and aligned with the thumbRect's center **
let milesLabelRect = bgView.convert(milesLabel.frame, to: self.view)
let sliderRect = bgView.convert(slider.frame, to: self.view)
let topOfMilesLabel = milesLabelRect.origin.y
let bottomOfSlider = sliderRect.maxY
let distance = bottomOfSlider - topOfMilesLabel
clearView.frame = CGRect(x: milesLabel.frame.minX, y: milesLabel.frame.minY, width: milesLabel.frame.width, height: distance)
bgView.addSubview(clearView) // clearView added to bgView
bgView.insertSubview(clearView, at: 0)
When I slide the slider, everything successfully slides with it, including the clearView, but of course the thumbRect is still in control of everything.
@objc func onSliderValChanged(sender: UISlider, event: UIEvent) {
    slider.value = round(sender.value)
    UIView.animate(withDuration: 0.25, animations: {
        self.slider.layoutIfNeeded()
        let trackRect = sender.trackRect(forBounds: sender.frame)
        let thumbRect = sender.thumbRect(forBounds: sender.bounds, trackRect: trackRect, value: sender.value)
        self.milesLabel.center = CGPoint(x: thumbRect.midX, y: sender.frame.origin.y - 55)
        // ** all the other views are aligned with the thumbRect's center as it slides **
        self.clearView.frame.origin = self.milesLabel.frame.origin
    })
}
The idea is to move the clearView to the front of everything and use that to control the thumbRect (don't forget it's only red for the example).
bgView.bringSubviewToFront(clearView)
// clearView.backgroundColor = UIColor.clear
I tried to use a UIPanGestureRecognizer and a UILongPressGestureRecognizer to have the clearView take control of the slider, but when I slide the slider only the thumbRect slides; the clearView stays still. Look where the clearView (which is red) is and where the thumbRect is after I slid it to the 10 mile point.
// separately tried a UILongPressGestureRecognizer too
let panGesture = UIPanGestureRecognizer(target: self, action: #selector(draggedView(_:)))
clearView.isUserInteractionEnabled = true
clearView.addGestureRecognizer(panGesture)
@objc func draggedView(_ sender: UIPanGestureRecognizer) {
    if slider.isHighlighted { return }
    let point = sender.location(in: slider)
    let percentage = Float(point.x / slider.frame.width)
    let delta = percentage * (slider.maximumValue - slider.minimumValue)
    let value = slider.minimumValue + delta
    slider.setValue(value, animated: true)
}
If I play around with it enough the clearView sometimes slides with the thumbRect, but they definitely aren't in alignment and it's very buggy. I also lost control of all the other views that were aligned with the thumbRect.
How can I pass control from the slider's thumbRect to the clearView?
You might have your mind set on doing it this way, which is okay. But I have a fast alternative for you that might suit your needs.
If you subclass your playback slider and override the point() method you can increase the surface area of the slider's thumb.
class CustomPlaybackSlider: UISlider {
    // Will increase target surface area for thumb.
    override public func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        var bounds: CGRect = self.bounds
        // Set the inset to a negative value by how much you want the surface area to increase by.
        bounds = bounds.insetBy(dx: -10, dy: -15)
        return bounds.contains(point)
    }
}
Might simplify things for you. The only downside of this approach is that the touch surface area extends equally left/right by your dx value and equally up/down by your dy value.
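For illustration, using it is just a matter of instantiating the subclass wherever the slider is currently created; a minimal sketch (the configuration lines are placeholders mirroring the slider set up later on this page):
let slider: UISlider = {
    // Swap the plain UISlider for the subclass; everything else stays the same.
    let slider = CustomPlaybackSlider()
    slider.translatesAutoresizingMaskIntoConstraints = false
    slider.minimumValue = 1
    slider.maximumValue = 5
    slider.value = 1
    slider.isContinuous = true
    return slider
}()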
Hope this helps.
I found the key to the problem here in this answer. @PaulaHasstenteufel said
// also remember to call valueChanged if there's any
// custom behaviour going on there and pass the slider
// variable as the parameter, as indicated below
self.sliderValueChanged(slider)
What I had to do was:
1- Use a UIPanGestureRecognizer
2- Comment out the UISlider's target action
3- Use the code from the target action's selector method in step-4 instead of leaving it in @objc func onSliderValChanged(sender: UISlider, event: UIEvent) { }
4- Take the code from step-3 and add it to a new function
5- Add the function from step-4 to the bottom of the UIPanGestureRecognizer's method
1- Use the UIPanGestureRecognizer and make sure isUserInteractionEnabled is true for the UIView
let panGesture = UIPanGestureRecognizer(target: self, action: #selector(draggedView(_:)))
clearView.isUserInteractionEnabled = true
clearView.addGestureRecognizer(panGesture)
2- For the slider, I commented out its addTarget for the .valueChanged event
let slider: UISlider = {
    let slider = UISlider()
    slider.translatesAutoresizingMaskIntoConstraints = false
    slider.minimumValue = 1
    slider.maximumValue = 5
    slider.value = 1
    slider.isContinuous = true
    slider.maximumTrackTintColor = UIColor(white: 0, alpha: 0.3)
    // ** COMMENT THIS OUT **
    //slider.addTarget(self, action: #selector(onSliderValChanged(sender:event:)), for: .valueChanged)
    return slider
}()
3- Take the code from the selector method that the target action above was using, and instead insert that code in step-4
@objc func onSliderValChanged(sender: UISlider, event: UIEvent) {
    /*
    use this code in step-4 instead
    slider.value = round(sender.value)
    UIView.animate(withDuration: 0.25, animations: {
        self.slider.layoutIfNeeded()
        let trackRect = sender.trackRect(forBounds: sender.frame)
        let thumbRect = sender.thumbRect(forBounds: sender.bounds, trackRect: trackRect, value: sender.value)
        self.milesLabel.center = CGPoint(x: thumbRect.midX, y: sender.frame.origin.y - 55)
        // ** all the other views are aligned with the thumbRect's center as it slides **
        self.clearView.frame.origin = self.milesLabel.frame.origin
    })
    */
}
4- Create a new method and insert the code from step-3
func sliderValueChanged(_ sender: UISlider, value: Float) {
    // round the value for smooth sliding
    sender.value = round(value)
    UIView.animate(withDuration: 0.25, animations: {
        self.slider.layoutIfNeeded()
        let trackRect = sender.trackRect(forBounds: sender.frame)
        let thumbRect = sender.thumbRect(forBounds: sender.bounds, trackRect: trackRect, value: sender.value)
        self.milesLabel.center = CGPoint(x: thumbRect.midX, y: sender.frame.origin.y - 55)
        // ** all the other views are aligned with the thumbRect's center as it slides **
        self.clearView.frame.origin = self.milesLabel.frame.origin
    })
}
5- I used the same code that I used for the UIPanGestureRecognizer, but commented out the setValue function and instead used the function from step-4
@objc fileprivate func draggedView(_ sender: UIPanGestureRecognizer) {
    if slider.isHighlighted { return }
    let point = sender.location(in: slider)
    let percentage = Float(point.x / slider.bounds.size.width)
    let delta = percentage * (slider.maximumValue - slider.minimumValue)
    let value = slider.minimumValue + delta
    // I had to comment this out because the sliding was too abrupt
    //slider.setValue(value, animated: true)
    // ** CALLING THIS FUNCTION FROM STEP-4 IS THE KEY TO GET THE UIView TO CONTROL THE SLIDER **
    sliderValueChanged(slider, value: value)
}

Restrict pan gesture from moving outside of the left and right sides of the screen’s frame

I have an image view that pops up in the centre of the screen. You can pinch to zoom the image in or out, as well as move it horizontally. All these actions work perfectly. However, I want to restrict the panning movements so users can't swipe beyond the left or right edges of the image. Below I've posted screenshots along with explanations to show what I mean.
Original View
Image moved to the right. At the moment you can see you can move it beyond the left edge of the image and it shows a black space. If the image width is equal to the screen width then I obviously want the pan gesture to be disabled.
Image zoomed in
Image moved to the left, but again I want to be able to restrict the pan gesture to the edge of the image so there is no black space on the right of the image.
I've tried looking around but I can't find anything that helps with my specific problem. I've posted my code below. Any help would be much appreciated! Thanks in advance.
func handlePan(_ gestureRecognizer: UIPanGestureRecognizer) {
    guard let zoomView = gestureRecognizer.view else { return }
    if gestureRecognizer.state == UIGestureRecognizerState.began || gestureRecognizer.state == UIGestureRecognizerState.changed {
        let translation = gestureRecognizer.translation(in: self.view)
        if gestureRecognizer.view!.center.x < 300 {
            gestureRecognizer.view!.center = CGPoint(x: gestureRecognizer.view!.center.x + translation.x, y: gestureRecognizer.view!.center.y)
        } else {
            gestureRecognizer.view!.center = CGPoint(x: 299, y: gestureRecognizer.view!.center.y)
        }
        gestureRecognizer.setTranslation(CGPoint.zero, in: zoomView)
    }
}
I followed this YouTube video to get it done.
I can't help you with the zoom but I can help you stop the imageView from moving outside of the left and right sides when NOT zooming.
You didn't give any context as to whether your imageView was created in a storyboard or programmatically. The one in my example is programmatic and it's named orangeImageView.
Create a new project then just copy and paste this code inside of the View Controller. When you run the project you will see an orange imageView that you can move around but it won't go beyond the left or right side of the screen. I didn't bother with the top or bottom because it wasn't part of your question.
Follow Steps 1 - 8 inside @objc func panGestureHandler(_ gestureRecognizer: UIPanGestureRecognizer) for an explanation of what each step does and how it works.
Replace the orangeImageView with the imageView you're using:
class ViewController: UIViewController {

    // create frame in viewDidLoad
    let orangeImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.isUserInteractionEnabled = true
        imageView.backgroundColor = .orange
        return imageView
    }()

    var panGesture: UIPanGestureRecognizer!

    override func viewDidLoad() {
        super.viewDidLoad()
        // create a frame for the orangeImageView and add it as a subview
        orangeImageView.frame = CGRect(x: 50, y: 50, width: 100, height: 100)
        view.addSubview(orangeImageView)
        // initialize the pan gesture and add it to the orangeImageView
        panGesture = UIPanGestureRecognizer(target: self, action: #selector(panGestureHandler(_:)))
        orangeImageView.addGestureRecognizer(panGesture)
    }

    @objc func panGestureHandler(_ gestureRecognizer: UIPanGestureRecognizer) {
        // 1. use these values to restrict the left and right sides so the orangeImageView won't go beyond these points
        let leftSideRestrction = self.view.frame.minX
        let rightSideRestriction = self.view.frame.maxX
        // 2. use these values to redraw the orangeImageView's correct size in either Step 6 or Step 8 below
        let imageViewHeight = self.orangeImageView.frame.size.height
        let imageViewWidth = self.orangeImageView.frame.size.width
        if gestureRecognizer.state == .changed || gestureRecognizer.state == .began {
            let translation: CGPoint = gestureRecognizer.translation(in: self.view)
            gestureRecognizer.view!.center = CGPoint(x: gestureRecognizer.view!.center.x + translation.x, y: gestureRecognizer.view!.center.y + translation.y)
            gestureRecognizer.setTranslation(CGPoint.zero, in: self.view)
            /*
            3.
            - get the upper left hand corner of the imageView's X and Y origin to get the current location of the imageView as it's dragged across the screen.
            - you need the orangeImageView.frame.origin.x value to make sure it doesn't go beyond the left or right edges
            - you need the orangeImageView.frame.origin.y value to redraw it in Steps 6 and 8 at whatever Y position it's in when it hits either the left or right sides
            */
            let imageViewCurrentOrginXValue = self.orangeImageView.frame.origin.x
            let imageViewCurrentOrginYValue = self.orangeImageView.frame.origin.y
            // 4. get the right side of the orangeImageView. It's computed using the orangeImageView.frame.origin.x + orangeImageView.frame.size.width
            let imageViewRightEdgePosition = imageViewCurrentOrginXValue + imageViewWidth
            // 5. if the orangeImageView.frame.origin.x touches the left edge of the screen or goes beyond it, proceed to Step 6
            if imageViewCurrentOrginXValue <= leftSideRestrction {
                // 6. redraw the orangeImageView's frame with x: being the far left side of the screen and y: being wherever the current orangeImageView.frame.origin.y is positioned
                orangeImageView.frame = CGRect(x: leftSideRestrction, y: imageViewCurrentOrginYValue, width: imageViewWidth, height: imageViewHeight)
            }
            // 7. if the orangeImageView.frame.origin.x touches the right edge of the screen or goes beyond it, proceed to Step 8
            if imageViewRightEdgePosition >= rightSideRestriction {
                // 8. redraw the orangeImageView's frame with x: being the right side of the screen - the orangeImageView's width and y: being wherever the current orangeImageView.frame.origin.y is positioned
                orangeImageView.frame = CGRect(x: rightSideRestriction - imageViewWidth, y: imageViewCurrentOrginYValue, width: imageViewWidth, height: imageViewHeight)
            }
        }
    }
}

Scroll UIScrollView faster when scrolling it through another view

I have a UIScrollView with a large UIImageView holding an image of very high resolution (about 10,000 x 10,000). In this UIImageView both zooming and scrolling are enabled. I also have a smaller UIImageView with the same image at a much smaller resolution (about 100 x 100). I'm showing the visible portion of the larger UIImageView on the smaller UIImageView, and the user can navigate to other places on the larger UIImageView by panning on the smaller one. The following images show what I'm trying to explain. My issue is that while panning on the smaller UIImageView, the scrolling in the larger UIScrollView is really slow.
// function that handles the pan on green view
func handlePanNavigation(gestureRecognizer: UIPanGestureRecognizer) {
    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        let translation = gestureRecognizer.translation(in: navigationPanel)
        guard let gv = gestureRecognizer.view else { return }
        let point = CGPoint(x: gv.center.x + translation.x, y: gv.center.y + translation.y)
        gestureRecognizer.view?.center = point
        gestureRecognizer.setTranslation(.zero, in: navigationPanelView)
        let transform = CGAffineTransform(scaleX: orgSize.width*tiledScrollView.zoomScale/navSize.width, y: orgSize.height*tiledScrollView.zoomScale/navSize.height)
        let offset = navigationPanelView.frame.origin.applying(transform)
        tiledScrollView.setContentOffset(offset, animated: true)
    }
}
You should not animate the content-offset change while applying a transformation of user input in real time, since that can easily make the feedback lag.
Change
tiledScrollView.setContentOffset(offset, animated: true)
to
tiledScrollView.setContentOffset(offset, animated: false)
I'm not entirely certain how you want to accomplish this, but if you want to slow down or speed up a pan gesture translation, add a multiplier.
switch gesture.state {
case .began:
    gesture.setTranslation(CGPoint.zero, in: gesture.view)
case .changed:
    // read the translation before resetting it
    let translation = gesture.translation(in: gesture.view)
    if someView.frame.origin.y < someThreshold {
        someView.center = CGPoint(x: someView.center.x, y: someView.center.y + (translation.y * 0.25))
    }
    gesture.setTranslation(CGPoint.zero, in: gesture.view)
...
Here, any pan upward beyond someThreshold is 4x slower. In your case, obviously, add a multiplier greater than 1.
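Putting both suggestions together with the question's own handler, a rough sketch might look like this (navigationPanelView, orgSize, navSize and tiledScrollView are the question's properties, used consistently here; 1.5 is an arbitrary speed-up factor):
let speedMultiplier: CGFloat = 1.5 // > 1 speeds the navigation up; < 1 slows it down

func handlePanNavigation(gestureRecognizer: UIPanGestureRecognizer) {
    guard gestureRecognizer.state == .began || gestureRecognizer.state == .changed,
          let gv = gestureRecognizer.view else { return }
    // Amplify the finger movement before applying it to the navigation thumb.
    let translation = gestureRecognizer.translation(in: navigationPanelView)
    gv.center = CGPoint(x: gv.center.x + translation.x * speedMultiplier,
                        y: gv.center.y + translation.y * speedMultiplier)
    gestureRecognizer.setTranslation(.zero, in: navigationPanelView)
    // Map the thumb's origin back to the big scroll view, without animation.
    let transform = CGAffineTransform(scaleX: orgSize.width * tiledScrollView.zoomScale / navSize.width,
                                      y: orgSize.height * tiledScrollView.zoomScale / navSize.height)
    let offset = navigationPanelView.frame.origin.applying(transform)
    tiledScrollView.setContentOffset(offset, animated: false)
}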

Place anchor point at the centre of the screen while doing gestures

I have a view with an image which responds to pinch, rotation and pan gestures. I want pinching and rotation of the image to be done with respect to an anchor point placed in the middle of the screen, exactly as it is done in the Xcode iPhone simulator by pressing the Option key. How can I place the anchor point in the middle of the screen if the centre of the image might be scaled and panned to a different location?
Here are my scale and rotate gesture functions:
@IBAction func pinchGesture(_ gestureRecognizer: UIPinchGestureRecognizer) {
    // Move the anchor point of the view's layer to the touch point
    // so that scaling the view and the layer becomes simpler.
    self.adjustAnchorPoint(gestureRecognizer: gestureRecognizer)
    // Scale the view by the current scale factor.
    if gestureRecognizer.state == .began {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = gestureRecognizer.scale
    }
    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        let currentScale = gestureRecognizer.view!.layer.value(forKeyPath: "transform.scale")! as! CGFloat
        // Constants to adjust the max/min values of zoom
        let kMaxScale: CGFloat = 15.0
        let kMinScale: CGFloat = 1.0
        var newScale = 1 - (lastScale - gestureRecognizer.scale)
        newScale = min(newScale, kMaxScale / currentScale)
        newScale = max(newScale, kMinScale / currentScale)
        let transform = (gestureRecognizer.view?.transform)!.scaledBy(x: newScale, y: newScale)
        gestureRecognizer.view?.transform = transform
        lastScale = gestureRecognizer.scale // Store the previous scale factor for the next pinch gesture call
        scale = currentScale // Save current scale for later use
    }
}
@IBAction func rotationGesture(_ gestureRecognizer: UIRotationGestureRecognizer) {
    // Move the anchor point of the view's layer to the center of the
    // user's two fingers. This creates a more natural looking rotation.
    self.adjustAnchorPoint(gestureRecognizer: gestureRecognizer)
    // Apply the rotation to the view's transform.
    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        gestureRecognizer.view?.transform = (gestureRecognizer.view?.transform.rotated(by: gestureRecognizer.rotation))!
        // Set the rotation to 0 to avoid compounding the
        // rotation in the view's transform.
        angle += gestureRecognizer.rotation // Save rotation angle for later use
        gestureRecognizer.rotation = 0.0
    }
}
func adjustAnchorPoint(gestureRecognizer: UIGestureRecognizer) {
    if gestureRecognizer.state == .began {
        let view = gestureRecognizer.view
        let locationInView = gestureRecognizer.location(in: view)
        let locationInSuperview = gestureRecognizer.location(in: view?.superview)
        // Move the anchor point to the touch point and change the position of the view
        view?.layer.anchorPoint = CGPoint(x: (locationInView.x / (view?.bounds.size.width)!), y: (locationInView.y / (view?.bounds.size.height)!))
        view?.center = locationInSuperview
    }
}
EDIT
I see that people aren't eager to get into this. Let me help by sharing some progress I've made in the past few days.
Firstly, I wrote a function centerAnchorPoint which correctly places the anchor point of an image at the centre of the screen regardless of where that anchor point was previously. However, the image must not be scaled or rotated for it to work.
func setAnchorPoint(_ anchorPoint: CGPoint, forView view: UIView) {
    var newPoint = CGPoint(x: view.bounds.size.width * anchorPoint.x, y: view.bounds.size.height * anchorPoint.y)
    var oldPoint = CGPoint(x: view.bounds.size.width * view.layer.anchorPoint.x, y: view.bounds.size.height * view.layer.anchorPoint.y)
    newPoint = newPoint.applying(view.transform)
    oldPoint = oldPoint.applying(view.transform)
    var position = view.layer.position
    position.x -= oldPoint.x
    position.x += newPoint.x
    position.y -= oldPoint.y
    position.y += newPoint.y
    view.layer.position = position
    view.layer.anchorPoint = anchorPoint
}

func centerAnchorPoint(gestureRecognizer: UIGestureRecognizer) {
    if gestureRecognizer.state == .ended {
        view?.layer.anchorPoint = CGPoint(x: (photo.bounds.midX / (view?.bounds.size.width)!), y: (photo.bounds.midY / (view?.bounds.size.height)!))
    }
}
func centerAnchorPoint() {
    // Position of the current anchor point
    let currentPosition = photo.layer.anchorPoint
    self.setAnchorPoint(CGPoint(x: 0.5, y: 0.5), forView: photo)
    // Center of the image
    let imageCenter = CGPoint(x: photo.center.x, y: photo.center.y)
    self.setAnchorPoint(currentPosition, forView: photo)
    // Center of the screen
    let screenCenter = CGPoint(x: UIScreen.main.bounds.midX, y: UIScreen.main.bounds.midY)
    // Distance between the centers
    let distanceX = screenCenter.x - imageCenter.x
    let distanceY = screenCenter.y - imageCenter.y
    // Find new anchor point
    let newAnchorPoint = CGPoint(x: (imageCenter.x+2*distanceX)/(UIScreen.main.bounds.size.width), y: (imageCenter.y+2*distanceY)/(UIScreen.main.bounds.size.height))
    //photo.layer.anchorPoint = newAnchorPoint
    self.setAnchorPoint(newAnchorPoint, forView: photo)
    let dotPath = UIBezierPath(ovalIn: CGRect(x: photo.layer.position.x-2.5, y: photo.layer.position.y-2.5, width: 5, height: 5))
    layer.path = dotPath.cgPath
}
The setAnchorPoint function is used here to set the anchor point to a new position without moving the image.
Then I updated the pan gesture function by inserting this at the end of it:
if gestureRecognizer.state == .ended {
    self.centerAnchorPoint()
}
EDIT 2
Ok, so I'll try to simply explain the code above.
What I am doing is:
Finding the distance between the center of the photo and the center of the screen
Applying this formula to find the new position of the anchor point:
newAnchorPointX = (imageCenter.x+distanceX)/screenWidth + distanceX/screenWidth
Then doing the same for the y position.
Setting this point as the new anchor point, without moving the photo, using the setAnchorPoint function
As I said this works great if the image is not scaled. If it is, then the anchor point does not stay at the center.
Strangely enough, distanceX or distanceY doesn't exactly depend on the scale value, so something like this doesn't quite work:
newAnchorPointX = (imageCenter.x+distanceX)/screenWidth + distanceX/(scaleValue*screenWidth)
EDIT 3
I figured out the scaling. It appears that the correct scale factor has to be:
scaleFactor = photo.frame.size.width/photo.layer.bounds.size.width
I used this instead of scaleValue and it worked splendidly.
So panning and scaling are done. The only thing left is rotation, but it appears that it's the hardest.
The first thing I thought of was to apply a rotation matrix to the increments in the X and Y directions, like this:
let incrementX = (distanceX)/(screenWidth)
let incrementY = (distanceY)/(screenHeight)
// Applying rotation matrix
let incX = incrementX*cos(angle)+incrementY*sin(angle)
let incY = -incrementX*sin(angle)+incrementY*cos(angle)
// Find new anchor point
let newAnchorPoint = CGPoint(x: 0.5+incX, y: 0.5+incY)
However this doesn't work.
Since the question is mostly answered in the edits, I don't want to repeat myself too much.
Broadly what I changed from the code posted in the original question:
Deleted calls to adjustAnchorPoint function in pinch and rotation gesture functions.
Placed this piece of code in pan gesture function, so that the anchor point would update its position after panning the photo:
if gestureRecognizer.state == .ended {
    self.centerAnchorPoint()
}
Updated centerAnchorPoint function to work for rotation.
A fully working centerAnchorPoint function (rotation included):
func centerAnchorPoint() {
    // Scale factor
    photo.transform = photo.transform.rotated(by: -angle)
    let curScale = photo.frame.size.width / photo.layer.bounds.size.width
    photo.transform = photo.transform.rotated(by: angle)
    // Position of the current anchor point
    let currentPosition = photo.layer.anchorPoint
    self.setAnchorPoint(CGPoint(x: 0.5, y: 0.5), forView: photo)
    // Center of the image
    let imageCenter = CGPoint(x: photo.center.x, y: photo.center.y)
    self.setAnchorPoint(currentPosition, forView: photo)
    // Center of the screen
    let screenCenter = CGPoint(x: UIScreen.main.bounds.midX, y: UIScreen.main.bounds.midY)
    // Distance between the centers
    let distanceX = screenCenter.x - imageCenter.x
    let distanceY = screenCenter.y - imageCenter.y
    // Apply rotational matrix to the distances
    let distX = distanceX*cos(angle)+distanceY*sin(angle)
    let distY = -distanceX*sin(angle)+distanceY*cos(angle)
    let incrementX = (distX)/(curScale*UIScreen.main.bounds.size.width)
    let incrementY = (distY)/(curScale*UIScreen.main.bounds.size.height)
    // Find new anchor point
    let newAnchorPoint = CGPoint(x: 0.5+incrementX, y: 0.5+incrementY)
    self.setAnchorPoint(newAnchorPoint, forView: photo)
}
The key thing to notice here is that the rotation matrix has to be applied to distanceX and distanceY. The scale factor is also adjusted so that it remains the same throughout the rotation.
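If it helps to see that rotation written out on its own, here is a tiny helper that is equivalent to the distX/distY lines above (purely illustrative; the angle and naming follow the code in this answer):
import UIKit

// Rotates the (distanceX, distanceY) vector by -angle, exactly as the
// two inline expressions in centerAnchorPoint() do.
func rotateDistance(_ distance: CGPoint, by angle: CGFloat) -> CGPoint {
    return CGPoint(x: distance.x * cos(angle) + distance.y * sin(angle),
                   y: -distance.x * sin(angle) + distance.y * cos(angle))
}

// Usage inside centerAnchorPoint():
// let dist = rotateDistance(CGPoint(x: distanceX, y: distanceY), by: angle)
// let incrementX = dist.x / (curScale * UIScreen.main.bounds.size.width)
// let incrementY = dist.y / (curScale * UIScreen.main.bounds.size.height)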
