swift - draw with finger over UIImageView - ios

In my app the user needs to take a picture with the camera and then mark some areas in the image using a finger.
So I created a UIImageView which holds the image from the camera, then added a UIPanGestureRecognizer which listens for "drawing" gestures:
panGesture = UIPanGestureRecognizer(target: self, action: #selector(AttachmentInputViewController.handlePanGesture(_:)))
imageView.addGestureRecognizer(panGesture!)
handlePanGesture:
func handlePanGesture(_ sender: UIPanGestureRecognizer) {
    let point = sender.location(in: sender.view)
    switch sender.state {
    case .began:
        self.startAtPoint(point: point)
    case .changed:
        self.continueAtPoint(point: point)
    case .ended:
        self.endAtPoint(point: point)
    case .failed:
        self.endAtPoint(point: point)
    default:
        assert(false, "State not handled")
    }
}
Then I create a UIBezierPath which holds the "drawing" and render a separate image with those markings:
private func startAtPoint(point: CGPoint) {
    path = UIBezierPath()
    path.lineWidth = 5
    path.move(to: point)
}

private func continueAtPoint(point: CGPoint) {
    path.addLine(to: point)
}

private func endAtPoint(point: CGPoint) {
    path.addLine(to: point)
    path.addLine(to: point)
    //path.close()

    let imageWidth: CGFloat = imageView.image!.size.width
    let imageHeight: CGFloat = imageView.image!.size.height
    let strokeColor: UIColor = UIColor.red

    // Make a graphics context
    UIGraphicsBeginImageContextWithOptions(CGSize(width: imageWidth, height: imageHeight), false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    context!.setStrokeColor(strokeColor.cgColor)
    //for path in paths {
    path.stroke()
    //}
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
In the end I need to save the image with the user's markings.
The problem is that the image in the UIImageView is set to scale-to-fit, and when I try to combine the camera image and the markings image they don't match because of the different resolutions and aspect ratios.
I have a feeling there is a better way of achieving this and would appreciate it if anyone could recommend one.

You were very close! The bug was simply in your handlePanGesture(), where you computed the point. You obtained the location of the gesture, which is a point in the coordinate space of the gesture's view, i.e. your imageView.
You instead want the point in the coordinate space of the UIImage within your imageView, so you just need to convert:
@objc func handlePanGesture(_ sender: UIPanGestureRecognizer) {
    guard let image = self.imageView.image else { return }
    let point = sender.location(in: sender.view)
    let rx = image.size.width / self.imageView.frame.size.width
    let ry = image.size.height / self.imageView.frame.size.height
    let pointInImage = CGPoint(x: point.x * rx, y: point.y * ry)
    switch sender.state {
    case .began:
        self.startAtPoint(pointInImage)
    case .changed:
        self.continueAtPoint(pointInImage)
    case .ended:
        self.endAtPoint(pointInImage)
    case .failed:
        self.endAtPoint(pointInImage)
    default:
        assert(false, "State not handled")
    }
}
Thanks for posting your question! Your overall solution worked great for allowing the user to draw a path with their finger on top of an image. With the above tweak, and with my choice to close the path during endAtPoint(), I have exactly what I need to let the user choose the desired area of an image.
Also, one other thing to fix: in your endAtPoint() you called path.addLine(to: point) twice rather than once.
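One caveat on the conversion: the rx/ry ratio only matches the displayed image when the image and the image view share the same aspect ratio. If the view's contentMode is .scaleAspectFit (which the question's "scaleToFit" suggests), the image is letterboxed inside the view, and you also need to subtract the fitted rect's origin. A sketch of that variant, using AVMakeRect from AVFoundation (the helper name is my own):

```swift
import AVFoundation
import UIKit

// Hypothetical helper: map a touch point in an aspect-fit image view
// to the pixel coordinates of the displayed UIImage.
func imagePoint(for viewPoint: CGPoint, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image else { return nil }
    // The rect the image actually occupies inside the view under .scaleAspectFit.
    let fitted = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
    guard fitted.contains(viewPoint) else { return nil } // touch landed in the letterbox
    let scale = image.size.width / fitted.width // uniform scale under aspect fit
    return CGPoint(x: (viewPoint.x - fitted.minX) * scale,
                   y: (viewPoint.y - fitted.minY) * scale)
}
```

The returned point is in the image's own coordinate space, so it can be fed straight into startAtPoint/continueAtPoint/endAtPoint.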

Related

Zoom with one finger on mapbox map also seems to change the center location?

Below is my method for zooming. I am trying to implement it so that when the user zooms with one finger (moving the finger up or down in a specified region), the map zooms straight in or out as normal (see Snapchat as an example).
The problem is that the code I have written changes the latitude at the same time, which is unintended.
Why is this happening, and how can I change it?
@objc func panGesture(_ sender: UIPanGestureRecognizer) {
    print("recognized?")
    // note that 'view' here is the overall video preview
    let velocity = sender.velocity(in: mapView) //view
    //print(sender.translation(in: view), "<-- what is val?")
    //print(sender.setTranslation(CGPoint(x: 16, y: 495), in: view), "<-- mmmmm")
    if velocity.y >= 2 || velocity.y <= 2 {
        let minimumZoomFactor: CGFloat = 1.0
        let maximumZoomFactor: CGFloat = 17.0 // artificially set a max usable zoom (maybe should be increased?)

        // clamp a zoom factor between minimumZoom and maximumZoom
        func clampZoomFactor(_ factor: CGFloat) -> CGFloat {
            return min(max(factor, minimumZoomFactor), maximumZoomFactor)
        }

        func update(scale factor: CGFloat) {
            mapView.zoomLevel = Double(exactly: factor)! // maybe setZoomLevel(... and animate it bit by bit?
        }

        var lastVal = 0

        // BELOW IS SENDER.STATE THINGS!!!
        switch sender.state {
        case .began:
            originalZoomLevel = mapView.zoomLevel //device.videoZoomFactor
            //print(originalZoomLevel, "<-- what is initialZoom11111111???")
        case .changed:
            // distance in points for the full zoom range (e.g. min to max), could be view.frame.height
            let fullRangeDistancePoints: CGFloat = 300.0 // don't know if this is right??
            // extract current distance travelled, from gesture start
            let currentYTranslation: CGFloat = sender.translation(in: view).y
            // calculate a normalized zoom factor between [-1,1], where up is positive (i.e. zooming in)
            let normalizedZoomFactor = -1 * max(-1, min(1, currentYTranslation / fullRangeDistancePoints))
            // calculate effective zoom scale to use
            let newZoomFactor = clampZoomFactor(CGFloat(originalZoomLevel) + normalizedZoomFactor /** (maximumZoomFactor - minimumZoomFactor)*/)
            print(originalZoomLevel, "<-- what is initialZoom???")
            print(newZoomFactor, "<-- what is newZoomFactor???")
            // update the map's zoom factor
            update(scale: newZoomFactor)
            print(lastVal - Int(mapView.centerCoordinate.latitude), " : The change is here")
            lastVal = Int(mapView.centerCoordinate.latitude)
            print(mapView.centerCoordinate, " : Center coordinate in .changed")
        case .ended, .cancelled:
            print(originalZoomLevel, "<-- what is this???")
        default:
            break
        }
    }
}
The problem in your code is in your func update(scale:).
Replace this:
    mapView.zoomLevel = Double(exactly: factor)!
With this:
    mapView.setCenter(centerCoordOrig, zoomLevel: Double(exactly: factor)!, animated: false)
By the way, create a new variable outside the function to hold centerCoordOrig.
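Put together, the fix might look like the sketch below. centerCoordOrig is the new property the answer mentions, mapView is assumed to be an MGLMapView, and the clamping/printing from the question is omitted for brevity:

```swift
var centerCoordOrig = CLLocationCoordinate2D()

func update(scale factor: CGFloat) {
    // Re-assert the original center on every zoom update so the map
    // cannot drift in latitude while the zoom level changes.
    mapView.setCenter(centerCoordOrig, zoomLevel: Double(factor), animated: false)
}

@objc func panGesture(_ sender: UIPanGestureRecognizer) {
    switch sender.state {
    case .began:
        originalZoomLevel = mapView.zoomLevel
        centerCoordOrig = mapView.centerCoordinate // capture once, at gesture start
    case .changed:
        let translationY = sender.translation(in: mapView).y
        let normalized = -1 * max(-1, min(1, translationY / 300.0))
        update(scale: CGFloat(originalZoomLevel) + normalized)
    default:
        break
    }
}
```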

UIPanGestureRecognizer sometimes doesn't get into End State

I'm developing a card view like in Tinder. When the card's X origin is bigger than a value I declare, it moves off the screen. Otherwise, it snaps back to the center. I'm doing all of this inside a UIPanGestureRecognizer function. I can move the view in the .changed state. However, it sometimes doesn't get into the .ended state, so the card neither moves off the screen nor snaps back to the center. It just stays in some odd place.
So my problem is that the card should either go off the screen, like in the screenshot below, or snap to the center.
I tried the solutions in the posts below, but nothing worked:
UIPanGestureRecognizer not calling End state
UIPanGestureRecognizer does not switch to state "End" or "Cancel" if user panned x and y in negative direction
/// This method handles the swiping gesture on each card and shows the appropriate emoji based on the card's center.
@objc func handleCardPan(sender: UIPanGestureRecognizer) {
    // Ensure it's a horizontal drag
    let velocity = sender.velocity(in: self.view)
    if abs(velocity.y) > abs(velocity.x) {
        return
    }

    // if we're in the process of hiding a card, don't let the user interact with the cards yet
    if cardIsHiding { return }

    // change this to your discretion - it represents how far the user must pan up or down to change the option
    // distance the user must pan right or left to trigger an option
    let requiredOffsetFromCenter: CGFloat = 80

    let panLocationInView = sender.location(in: view)
    let panLocationInCard = sender.location(in: cards[0])
    switch sender.state {
    case .began:
        dynamicAnimator.removeAllBehaviors()
        let offset = UIOffsetMake(cards[0].bounds.midX, panLocationInCard.y)
        // card is attached to center
        cardAttachmentBehavior = UIAttachmentBehavior(item: cards[0], offsetFromCenter: offset, attachedToAnchor: panLocationInView)
        //dynamicAnimator.addBehavior(cardAttachmentBehavior)
        let translation = sender.translation(in: self.view)
        print(sender.view!.center.x)
        sender.view!.center = CGPoint(x: sender.view!.center.x + translation.x, y: sender.view!.center.y)
        sender.setTranslation(CGPoint(x: 0, y: 0), in: self.view)
    case .changed:
        //cardAttachmentBehavior.anchorPoint = panLocationInView
        let translation = sender.translation(in: self.view)
        print(sender.view!.center.x)
        sender.view!.center = CGPoint(x: sender.view!.center.x + translation.x, y: sender.view!.center.y)
        sender.setTranslation(CGPoint(x: 0, y: 0), in: self.view)
    case .ended:
        dynamicAnimator.removeAllBehaviors()
        if !(cards[0].center.x > (self.view.center.x + requiredOffsetFromCenter) || cards[0].center.x < (self.view.center.x - requiredOffsetFromCenter)) {
            // snap to center
            let snapBehavior = UISnapBehavior(item: cards[0], snapTo: CGPoint(x: self.view.frame.midX, y: self.view.frame.midY + 23))
            dynamicAnimator.addBehavior(snapBehavior)
        } else {
            let velocity = sender.velocity(in: self.view)
            let pushBehavior = UIPushBehavior(items: [cards[0]], mode: .instantaneous)
            pushBehavior.pushDirection = CGVector(dx: velocity.x / 10, dy: velocity.y / 10)
            pushBehavior.magnitude = 175
            dynamicAnimator.addBehavior(pushBehavior)

            // spin after throwing
            var angular = CGFloat.pi / 2 // angular velocity of spin
            let currentAngle: Double = atan2(Double(cards[0].transform.b), Double(cards[0].transform.a))
            if currentAngle > 0 {
                angular = angular * 1
            } else {
                angular = angular * -1
            }
            let itemBehavior = UIDynamicItemBehavior(items: [cards[0]])
            itemBehavior.friction = 0.2
            itemBehavior.allowsRotation = true
            itemBehavior.addAngularVelocity(CGFloat(angular), for: cards[0])
            dynamicAnimator.addBehavior(itemBehavior)

            showNextCard()
            hideFrontCard()
        }
    default:
        break
    }
}
I was checking whether I was swiping horizontally with:
    let velocity = sender.velocity(in: self.view)
    if abs(velocity.y) > abs(velocity.x) {
        return
    }
For some reason it was hitting the return even while I was swiping horizontally. When I commented out this block of code, everything started to work :)
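One way to keep the direction filter without swallowing the .ended state is to evaluate it only when the gesture begins, and remember the decision for the rest of the gesture. A sketch (shouldIgnorePan is a hypothetical flag, not part of the original code):

```swift
var shouldIgnorePan = false

@objc func handleCardPan(sender: UIPanGestureRecognizer) {
    if sender.state == .began {
        // Decide once, at the start: velocity is noisy mid-gesture, and an
        // early `return` during .ended would leave the card stranded.
        let velocity = sender.velocity(in: self.view)
        shouldIgnorePan = abs(velocity.y) > abs(velocity.x)
    }
    if shouldIgnorePan { return }
    // ... handle .began / .changed / .ended as before ...
}
```

This way every state of an accepted pan, including .ended, reaches the switch, so the card always either snaps back or flies off.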

Drawing on a UIImageView inside a UIScrollView

I have a UIImageView inside a UIScrollView which automatically zooms out to fit the supplied image. The user can zoom as usual with a pinch gesture, and the pan gesture is set to require two touches, since drawing takes precedence.
On launch everything looks great, but when I invoke my drawing code, this happens:
As you can see, when drawLineFrom(fromPoint:toPoint:) is invoked, the UIImageView shrinks. After that, the drawing appears to work as intended (though it skips the first part of the line on every touch).
My UIPanGestureRecognizer selector:
@objc func onOneFingerDrawing(_ sender: UIPanGestureRecognizer) {
    switch sender.state {
    case .began:
        swiped = false
        lastPoint = sender.location(in: drawView.imageView)
    case .changed:
        swiped = true
        let currentPoint = sender.location(in: drawView.imageView)
        drawLineFrom(fromPoint: lastPoint, toPoint: currentPoint)
        lastPoint = currentPoint
    case .ended:
        guard drawView.scrollView.frame.contains(sender.location(in: drawView.imageView)) else {
            return
        }
        if let newImage = drawView.imageView.image {
            if history.count > historyIndex + 1 {
                history.removeLast((history.count - 1) - historyIndex)
            }
            history.append(newImage)
            historyIndex = history.count - 1
        }
    case .possible,
         .cancelled,
         .failed:
        return
    }
}
and my drawLineFrom(fromPoint:toPoint:):
@objc func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint) {
    UIGraphicsBeginImageContextWithOptions(drawView.imageView.frame.size, false, UIScreen.main.scale)
    let context = UIGraphicsGetCurrentContext()
    context?.interpolationQuality = .none
    drawView.imageView.image?.draw(in: CGRect(x: 0, y: 0, width: drawView.imageView.frame.size.width, height: drawView.imageView.frame.size.height))
    context?.move(to: fromPoint)
    context?.addLine(to: toPoint)
    context?.setLineCap(.round)
    context?.setLineWidth(lineWidth)
    context?.setStrokeColor(lineColor)
    context?.setBlendMode(blendMode)
    context?.strokePath()
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    drawView.imageView.image = newImage
}
There is an issue with the image view's constraints inside the scroll view. Whenever you start rendering the image, its frame changes according to the content. So you need to add an aspect-ratio constraint (as I did) or a size constraint to the image view. See the GIFs for reference.
Before adding the image view size constraint.
After adding the image view size constraint.
The drawing skips the first part of the line because you are using a UIPanGestureRecognizer. The system first determines that the gesture is a pan before it sends you a .began event. You could swap it for a generic UIGestureRecognizer so it starts immediately; you'll then want additional logic to check for movement and the number of fingers.
As for the resizing, it's tough to say without more info. I'd color the background of the image view as well. The first question is: is the whole image view shrinking, or just the actual image inside it?
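A related way to sidestep frame-dependent behavior entirely is to render into the image's own pixel size rather than the view's frame, so layout changes cannot alter the drawing surface. A sketch using UIGraphicsImageRenderer (the points passed in must already be in image coordinates; line width and color are placeholder values):

```swift
import UIKit

// Draw one segment onto a copy of the image, at the image's native size.
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint, on image: UIImage) -> UIImage {
    // Render at the image's size, not the view's frame, so the result is
    // independent of how the scroll view lays out the image view.
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { ctx in
        image.draw(at: .zero)
        ctx.cgContext.setLineCap(.round)
        ctx.cgContext.setLineWidth(5)
        ctx.cgContext.setStrokeColor(UIColor.red.cgColor)
        ctx.cgContext.move(to: fromPoint)
        ctx.cgContext.addLine(to: toPoint)
        ctx.cgContext.strokePath()
    }
}
```

You would then assign the result back to `imageView.image`; since the canvas size never changes, the image view's constraints are unaffected by drawing.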

Drag object on XZ plane

I am working on an augmented reality app and I would like to be able to drag an object in space. The problem with the solutions I've found here on SO, the ones that suggest using projectPoint/unprojectPoint, is that they produce movement along the XY plane.
I was trying to use the finger's movement on the screen as an offset for the x and z coordinates of the node. The problem is that there is a lot to take into consideration (the camera's position, the node's position, the node's rotation, etc.).
Is there a simpler way of doing this?
I have updated @Alok's answer, since with the solution below my object was dragging in the x plane only. So I have added the y coordinate; this is working for me.
var PCoordx: Float = 0.0
var PCoordy: Float = 0.0
var PCoordz: Float = 0.0

@objc func handleDragGesture(_ sender: UIPanGestureRecognizer) {
    switch sender.state {
    case .began:
        let hitNode = self.sceneView.hitTest(sender.location(in: self.sceneView), options: nil)
        self.PCoordx = (hitNode.first?.worldCoordinates.x)!
        self.PCoordy = (hitNode.first?.worldCoordinates.y)!
        self.PCoordz = (hitNode.first?.worldCoordinates.z)!
    case .changed:
        // when you start to pan on the screen with your finger,
        // the hit test gives the new coordinates of the touched location in the sceneView;
        // coord - pcoord gives the distance to move, i.e. the distance panned in the sceneView
        let hitNode = sceneView.hitTest(sender.location(in: sceneView), options: nil)
        if let coordx = hitNode.first?.worldCoordinates.x,
           let coordy = hitNode.first?.worldCoordinates.y,
           let coordz = hitNode.first?.worldCoordinates.z {
            let action = SCNAction.moveBy(x: CGFloat(coordx - PCoordx),
                                          y: CGFloat(coordy - PCoordy),
                                          z: CGFloat(coordz - PCoordz),
                                          duration: 0.0)
            self.photoNode.runAction(action)
            self.PCoordx = coordx
            self.PCoordy = coordy
            self.PCoordz = coordz
        }
        sender.setTranslation(CGPoint.zero, in: self.sceneView)
    case .ended:
        self.PCoordx = 0.0
        self.PCoordy = 0.0
        self.PCoordz = 0.0
    default:
        break
    }
}
First you need to create a floor, or a very large plane, a few meters below the origin (I use 10). This makes sure your hit test always returns a value. Then, using a pan gesture:
// store previous coordinates from the hit test to compare with the current ones
var PCoordx: Float = 0.0
var PCoordz: Float = 0.0

@objc func move(_ gestureRecognizer: UIPanGestureRecognizer) {
    if gestureRecognizer.state == .began {
        let hitNode = sceneView.hitTest(gestureRecognizer.location(in: sceneView), options: nil)
        PCoordx = (hitNode.first?.worldCoordinates.x)!
        PCoordz = (hitNode.first?.worldCoordinates.z)!
    }

    // when you start to pan on the screen with your finger,
    // the hit test gives the new coordinates of the touched location in the sceneView;
    // coord - pcoord gives the distance to move, i.e. the distance panned in the sceneView
    if gestureRecognizer.state == .changed {
        let hitNode = sceneView.hitTest(gestureRecognizer.location(in: sceneView), options: nil)
        if let coordx = hitNode.first?.worldCoordinates.x,
           let coordz = hitNode.first?.worldCoordinates.z {
            let action = SCNAction.moveBy(x: CGFloat(coordx - PCoordx), y: 0, z: CGFloat(coordz - PCoordz), duration: 0.1)
            node.runAction(action)
            PCoordx = coordx
            PCoordz = coordz
        }
        gestureRecognizer.setTranslation(CGPoint.zero, in: sceneView)
    }

    if gestureRecognizer.state == .ended {
        PCoordx = 0
        PCoordz = 0
    }
}
In my case there is only one node, so I haven't checked whether the required node was tapped. You can always check for it if you have many nodes.
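If you do have several nodes, one way to check is to compare the hit-test results against the node you want to drag before starting the move. A sketch (the helper name and the draggable-node property are my own):

```swift
import SceneKit

// Returns the hit on the given node, or nil when the pan did not start on it.
func hit(on node: SCNNode, at location: CGPoint, in sceneView: SCNView) -> SCNHitTestResult? {
    let hits = sceneView.hitTest(location, options: nil)
    // Identity comparison: we want this exact node, not one that merely looks alike.
    return hits.first(where: { $0.node === node })
}
```

In the .began branch you would then bail out unless `hit(on: photoNode, at: gestureRecognizer.location(in: sceneView), in: sceneView)` returns a value, and seed PCoordx/PCoordz from that result's worldCoordinates.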
If I understand you correctly, I do this using a UIPanGestureRecognizer added to the ARSCNView.
In my case I want to check whether the pan started on a given virtual object and keep track of which one it was, because I can have multiple; if you have just one object you may not need the targetNode variable.
The constant 700 I divide by was found by trial and error to make the translation smoother; you may need to change it for your case.
Moving a finger up moves the object further away from the camera, and moving it down moves it nearer. Horizontal finger movement moves the object left/right.
@objc func onTranslate(_ sender: UIPanGestureRecognizer) {
    let position = sender.location(in: scnView)
    let state = sender.state

    if (state == .failed || state == .cancelled) {
        return
    }

    if (state == .began) {
        // Check pan began on a virtual object
        if let objectNode = virtualObject(at: position).node {
            targetNode = objectNode
            latestTranslatePos = position
        }
    } else if let _ = targetNode {
        // Translate virtual object
        let deltaX = Float(position.x - latestTranslatePos!.x) / 700
        let deltaY = Float(position.y - latestTranslatePos!.y) / 700
        targetNode!.localTranslate(by: SCNVector3Make(deltaX, 0.0, deltaY))
        latestTranslatePos = position

        if (state == .ended) {
            targetNode = nil
        }
    }
}

Place anchor point at the centre of the screen while doing gestures

I have a view with an image which responds to pinch, rotation and pan gestures. I want pinching and rotation of the image to be done with respect to an anchor point placed in the middle of the screen, exactly as it is done in the Xcode iPhone simulator by pressing the Option key. How can I place the anchor point in the middle of the screen when the centre of the image may have been scaled and panned to a different location?
Here's my scale and rotate gesture functions:
@IBAction func pinchGesture(_ gestureRecognizer: UIPinchGestureRecognizer) {
    // Move the anchor point of the view's layer to the touch point
    // so that scaling the view and the layer becomes simpler.
    self.adjustAnchorPoint(gestureRecognizer: gestureRecognizer)

    // Scale the view by the current scale factor.
    if gestureRecognizer.state == .began {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = gestureRecognizer.scale
    }

    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        let currentScale = gestureRecognizer.view!.layer.value(forKeyPath: "transform.scale")! as! CGFloat

        // Constants to adjust the max/min values of zoom
        let kMaxScale: CGFloat = 15.0
        let kMinScale: CGFloat = 1.0

        var newScale = 1 - (lastScale - gestureRecognizer.scale)
        newScale = min(newScale, kMaxScale / currentScale)
        newScale = max(newScale, kMinScale / currentScale)
        let transform = (gestureRecognizer.view?.transform)!.scaledBy(x: newScale, y: newScale)
        gestureRecognizer.view?.transform = transform

        lastScale = gestureRecognizer.scale // Store the previous scale factor for the next pinch gesture call
        scale = currentScale // Save current scale for later use
    }
}

@IBAction func rotationGesture(_ gestureRecognizer: UIRotationGestureRecognizer) {
    // Move the anchor point of the view's layer to the center of the
    // user's two fingers. This creates a more natural-looking rotation.
    self.adjustAnchorPoint(gestureRecognizer: gestureRecognizer)

    // Apply the rotation to the view's transform.
    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        gestureRecognizer.view?.transform = (gestureRecognizer.view?.transform.rotated(by: gestureRecognizer.rotation))!
        // Set the rotation to 0 to avoid compounding the
        // rotation in the view's transform.
        angle += gestureRecognizer.rotation // Save rotation angle for later use
        gestureRecognizer.rotation = 0.0
    }
}

func adjustAnchorPoint(gestureRecognizer: UIGestureRecognizer) {
    if gestureRecognizer.state == .began {
        let view = gestureRecognizer.view
        let locationInView = gestureRecognizer.location(in: view)
        let locationInSuperview = gestureRecognizer.location(in: view?.superview)

        // Move the anchor point to the touch point and change the position of the view
        view?.layer.anchorPoint = CGPoint(x: (locationInView.x / (view?.bounds.size.width)!), y: (locationInView.y / (view?.bounds.size.height)!))
        view?.center = locationInSuperview
    }
}
EDIT
I see that people aren't eager to get into this, so let me help by sharing some progress I've made over the past few days.
Firstly, I wrote a function centerAnchorPoint which correctly places the anchor point of an image at the centre of the screen regardless of where that anchor point was previously. However, the image must not be scaled or rotated for it to work.
func setAnchorPoint(_ anchorPoint: CGPoint, forView view: UIView) {
    var newPoint = CGPoint(x: view.bounds.size.width * anchorPoint.x, y: view.bounds.size.height * anchorPoint.y)
    var oldPoint = CGPoint(x: view.bounds.size.width * view.layer.anchorPoint.x, y: view.bounds.size.height * view.layer.anchorPoint.y)

    newPoint = newPoint.applying(view.transform)
    oldPoint = oldPoint.applying(view.transform)

    var position = view.layer.position
    position.x -= oldPoint.x
    position.x += newPoint.x
    position.y -= oldPoint.y
    position.y += newPoint.y

    view.layer.position = position
    view.layer.anchorPoint = anchorPoint
}

func centerAnchorPoint(gestureRecognizer: UIGestureRecognizer) {
    if gestureRecognizer.state == .ended {
        view?.layer.anchorPoint = CGPoint(x: (photo.bounds.midX / (view?.bounds.size.width)!), y: (photo.bounds.midY / (view?.bounds.size.height)!))
    }
}

func centerAnchorPoint() {
    // Position of the current anchor point
    let currentPosition = photo.layer.anchorPoint
    self.setAnchorPoint(CGPoint(x: 0.5, y: 0.5), forView: photo)
    // Center of the image
    let imageCenter = CGPoint(x: photo.center.x, y: photo.center.y)
    self.setAnchorPoint(currentPosition, forView: photo)

    // Center of the screen
    let screenCenter = CGPoint(x: UIScreen.main.bounds.midX, y: UIScreen.main.bounds.midY)

    // Distance between the centers
    let distanceX = screenCenter.x - imageCenter.x
    let distanceY = screenCenter.y - imageCenter.y

    // Find new anchor point
    let newAnchorPoint = CGPoint(x: (imageCenter.x + 2 * distanceX) / (UIScreen.main.bounds.size.width), y: (imageCenter.y + 2 * distanceY) / (UIScreen.main.bounds.size.height))
    //photo.layer.anchorPoint = newAnchorPoint
    self.setAnchorPoint(newAnchorPoint, forView: photo)

    let dotPath = UIBezierPath(ovalIn: CGRect(x: photo.layer.position.x - 2.5, y: photo.layer.position.y - 2.5, width: 5, height: 5))
    layer.path = dotPath.cgPath
}
The setAnchorPoint function is used here to set the anchor point to a new position without moving the image.
Then I updated the panGesture function by inserting this at the end of it:
    if gestureRecognizer.state == .ended {
        self.centerAnchorPoint()
    }
EDIT 2
OK, so I'll try to explain the code above simply.
What I am doing is:
Finding the distance between the center of the photo and the center of the screen.
Applying this formula to find the new position of the anchor point:
newAnchorPointX = (imageCenter.x-distanceX)/screenWidth + distanceX/screenWidth
Then doing the same for the y position.
Setting this point as the new anchor point, without moving the photo, using the setAnchorPoint function.
As I said, this works great if the image is not scaled. If it is, then the anchor point does not stay at the center.
Strangely enough, distanceX and distanceY don't exactly depend on the scale value, so something like this doesn't quite work:
newAnchorPointX = (imageCenter.x-distanceX)/screenWidth + distanceX/(scaleValue*screenWidth)
EDIT 3
I figured out the scaling. It appears that the correct scale factor has to be:
    scaleFactor = photo.frame.size.width / photo.layer.bounds.size.width
I used this instead of scaleValue and it worked splendidly.
So panning and scaling are done. The only thing left is rotation, which appears to be the hardest.
The first thing I thought of was to apply the rotation matrix to the increments in the X and Y directions, like this:
    let incrementX = distanceX / screenWidth
    let incrementY = distanceY / screenHeight

    // Applying rotation matrix
    let incX = incrementX * cos(angle) + incrementY * sin(angle)
    let incY = -incrementX * sin(angle) + incrementY * cos(angle)

    // Find new anchor point
    let newAnchorPoint = CGPoint(x: 0.5 + incX, y: 0.5 + incY)
However, this doesn't work.
Since the question is mostly answered in the edits, I don't want to repeat myself too much.
Broadly, what I changed from the code posted in the original question:
Deleted the calls to the adjustAnchorPoint function in the pinch and rotation gesture functions.
Placed this piece of code in the pan gesture function, so that the anchor point updates its position after panning the photo:
    if gestureRecognizer.state == .ended {
        self.centerAnchorPoint()
    }
Updated the centerAnchorPoint function to work with rotation.
A fully working centerAnchorPoint function (rotation included):
func centerAnchorPoint() {
    // Scale factor
    photo.transform = photo.transform.rotated(by: -angle)
    let curScale = photo.frame.size.width / photo.layer.bounds.size.width
    photo.transform = photo.transform.rotated(by: angle)

    // Position of the current anchor point
    let currentPosition = photo.layer.anchorPoint
    self.setAnchorPoint(CGPoint(x: 0.5, y: 0.5), forView: photo)
    // Center of the image
    let imageCenter = CGPoint(x: photo.center.x, y: photo.center.y)
    self.setAnchorPoint(currentPosition, forView: photo)

    // Center of the screen
    let screenCenter = CGPoint(x: UIScreen.main.bounds.midX, y: UIScreen.main.bounds.midY)

    // Distance between the centers
    let distanceX = screenCenter.x - imageCenter.x
    let distanceY = screenCenter.y - imageCenter.y

    // Apply the rotation matrix to the distances
    let distX = distanceX * cos(angle) + distanceY * sin(angle)
    let distY = -distanceX * sin(angle) + distanceY * cos(angle)

    let incrementX = distX / (curScale * UIScreen.main.bounds.size.width)
    let incrementY = distY / (curScale * UIScreen.main.bounds.size.height)

    // Find new anchor point
    let newAnchorPoint = CGPoint(x: 0.5 + incrementX, y: 0.5 + incrementY)
    self.setAnchorPoint(newAnchorPoint, forView: photo)
}
The key thing to notice here is that the rotation matrix has to be applied to distanceX and distanceY. The scale factor is also updated so that it remains the same throughout the rotation.