iOS CGAffineTransform with Masking

I'm currently developing an iOS application where you can process an image (rotating, zooming, translating). I'm using a UIImageView to which I added gestures. This works fine, but I also have a masking rectangle of a fixed size.
After I have processed my image, I want the content that lies inside my masking rectangle.
I also want the four edge points of the masking rectangle mapped onto the processed image.
I know I have to apply the image view's transform to the points somehow, but it's not working:
let points = maskView.edgePoints()
let translateTransform = CGAffineTransform(translationX: translationPoint.x, y: translationPoint.y)
let rotateTransform = CGAffineTransform(rotationAngle: CGFloat(rotationAngle))
let scaleTransform = CGAffineTransform(scaleX: xScale, y: yScale)
let finalTransform = rotateTransform.concatenating(scaleTransform).concatenating(translateTransform)
let topleftPoint = points[0].applying(finalTransform)
let toprightPoint = points[1].applying(finalTransform)
let bottomleftPoint = points[2].applying(finalTransform)
let bottomrightPoint = points[3].applying(finalTransform)
Sample edge point results:
Topleft: (50.75, -8.75)
Topright: (63.6072332330383, -365.252863911729)
Bottomleft: (-172.064289944831, -16.7857707706489)
Bottomright: (-159.207056711792, -373.288634682378)
But the top-left point should be something like (0, 0),
and the bottom-left something like (40, 200)?
Maybe you can give me some hints or useful links!
Thanks in advance!

The problem lies in your transformation order. Right now your transformation order is Rotate, Scale, Translate; it should be Scale, Rotate, Translate instead:
let finalTransform = scaleTransform.concatenating(rotateTransform).concatenating(translateTransform)
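The effect of the ordering can be checked outside of UIKit with plain matrix math. Below is a minimal, language-agnostic sketch (Python, column-vector convention, so the rightmost matrix applies first). Note that with a uniform scale the two orders coincide, so a non-uniform scale is used to make the difference visible:

```python
import math

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply_to(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Scale -> Rotate -> Translate (rightmost applied first): T . R . S
srt = matmul(translate(10, 0), matmul(rotate(math.pi / 2), scale(2, 3)))
# Rotate -> Scale -> Translate: T . S . R
rst = matmul(translate(10, 0), matmul(scale(2, 3), rotate(math.pi / 2)))

print(apply_to(srt, (1, 0)))  # ~ (10.0, 2.0)
print(apply_to(rst, (1, 0)))  # ~ (10.0, 3.0)
```

In CGAffineTransform terms, `a.concatenating(b)` applies `a` first, so `scaleTransform.concatenating(rotateTransform).concatenating(translateTransform)` corresponds to T · R · S in the column-vector sketch above.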

Related

How can I rotate and translate/scale an overlay so it matches the map rect?

I have an app where users can add an image to a map. This is pretty straightforward. It becomes much more difficult when I want to add rotation (taken from the current map heading). The code I use to create the overlay is simple enough:
let imageAspectRatio = Double(image.size.height / image.size.width)
let mapAspectRatio = Double(visibleMapRect.size.height / visibleMapRect.size.width)
var mapRect = visibleMapRect
if mapAspectRatio > imageAspectRatio {
    // Aspect ratio of the map is bigger than that of the image (the map is taller), take away height from the rectangle
    let heightChange = mapRect.size.height - mapRect.size.width * imageAspectRatio
    mapRect.size.height = mapRect.size.width * imageAspectRatio
    mapRect.origin.y += heightChange / 2
} else {
    // Aspect ratio of the map is smaller than that of the image (the map is wider), take away width from the rectangle
    let widthChange = mapRect.size.width - mapRect.size.height / imageAspectRatio
    mapRect.size.width = mapRect.size.height / imageAspectRatio
    mapRect.origin.x += widthChange / 2
}
photos.append(ImageOverlay(image: image, boundingMapRect: mapRect, rotation: cameraHeading))
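The aspect-fit branch above is pure arithmetic and can be sketched independently of MapKit. Here is a hypothetical Python reduction of the same logic, with rects as (x, y, w, h) tuples:

```python
def aspect_fit(map_rect, image_size):
    # Shrink map_rect (x, y, w, h) to the image's aspect ratio, keeping it centered.
    x, y, w, h = map_rect
    iw, ih = image_size
    image_aspect = ih / iw
    map_aspect = h / w
    if map_aspect > image_aspect:
        # map is taller than the image: take away height
        new_h = w * image_aspect
        y += (h - new_h) / 2
        h = new_h
    else:
        # map is wider than the image: take away width
        new_w = h / image_aspect
        x += (w - new_w) / 2
        w = new_w
    return (x, y, w, h)

# A 200x100 image inside a 100x100 map rect keeps the width and centers vertically
print(aspect_fit((0, 0, 100, 100), (200, 100)))  # (0, 25.0, 100, 50.0)
```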
The ImageOverlay class inherits from MKOverlay, which I can easily draw on the map. Here's the code for that class:
class ImageOverlay: NSObject, MKOverlay {
    let image: UIImage
    let boundingMapRect: MKMapRect
    let coordinate: CLLocationCoordinate2D
    let rotation: CLLocationDirection

    init(image: UIImage, boundingMapRect: MKMapRect, rotation: CLLocationDirection) {
        self.image = UIImage.fixedOrientation(for: image) ?? image
        self.boundingMapRect = boundingMapRect
        self.coordinate = CLLocationCoordinate2D(latitude: boundingMapRect.midX, longitude: boundingMapRect.midY)
        self.rotation = rotation
    }
}
I have figured out by how much I need to scale and translate the context to fit on the correct location on the map (from which it was added). I can't figure out how to rotate the context to make the image render in the correct location.
I figured out that the map rotation was in degrees, and the rotate method takes radians (took longer than I dare to admit), but the image moves around when I apply the rotation.
I use the following code to render the overlay:
override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext) {
    guard let overlay = overlay as? ImageOverlay else {
        return
    }
    let rect = self.rect(for: overlay.boundingMapRect)
    context.scaleBy(x: 1.0, y: -1.0)
    context.translateBy(x: 0.0, y: -rect.size.height)
    context.rotate(by: CGFloat(overlay.rotation * Double.pi / 180))
    context.draw(overlay.image.cgImage!, in: rect)
}
How do I need to rotate this context to get the image to be aligned properly?
This is an open source project with the code here
Edit: I have tried (and failed) to use some kind of trig function. If I scale by a factor of 3 * sin(rotation) / 4 (no clue where the 3/4 comes from), I get a correct scale for some rotations, but not for others.
It sounds like you are trying to rotate the object in its local coordinates, but are actually rotating in world coordinates. I admit I'm not familiar with this library, but the moral of the story is that the order of operations on transformations matters. It also looks like you call translateBy with zero, which might not be moving it at all. If you are trying to translate back to local coordinates, you need to subtract the current coordinates stored in the CLLocationCoordinate2D struct:
1. Translate to local coordinates if not already there (which might not be x: 0, y: 0 — you may need to subtract the current coordinate values instead of setting them to a specific number like 0)
2. Apply the rotation
3. Translate back to world coordinates (where you had it before, originally defined as a CLLocationCoordinate2D)
This should allow the image to be in the correct position but now rotated to align with the heading.
Here is a paper which explains what you are probably encountering; it is more in-depth and specific to matrices/OpenGL, but the first slide illustrates your issue.
Transforms PDF
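The three steps can be sketched as plain 2D math (a Python sketch; the function name is mine). Rotating about a point is translate-to-origin, rotate, translate-back:

```python
import math

def rotate_about(point, center, theta):
    # translate to the local origin, rotate, translate back
    x, y = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s + center[0], x * s + y * c + center[1])

# The center stays fixed under its own rotation...
print(rotate_about((50, 50), (50, 50), math.pi / 4))  # (50.0, 50.0)
# ...and a corner orbits the center instead of the world origin
print(rotate_about((60, 50), (50, 50), math.pi / 2))  # ~ (50.0, 60.0)
```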

UIBezierPath translation transform gives wrong answer

When I attempt to translate a path, to move it to an origin of {0, 0}, the resulting path bounds are in error. (Or my assumptions are in error.)
e.g. the path gives the following bounds info:
let bezier = UIBezierPath(cgPath: svgPath)
print(bezier.bounds)
// (0.0085, 0.7200, 68.5542, 41.1379)
print(bezier.cgPath.boundingBoxOfPath)
// (0.0085, 0.7200, 68.5542, 41.1379)
print(bezier.cgPath.boundingBox)
// (-1.25, -0.1070, 70.0360, 41.9650)
I (attempt to) move the path to the origin:
let origin = bezier.bounds.origin
bezier.apply(CGAffineTransform(translationX: -origin.x, y: -origin.y))
print(bezier.bounds)
// (0.0, -2.7755, 68.5542, 41.1379)
As you can see, the x origin component is correct at 0. But, the y component (-2.7755) has gone all kittywumpus. It should be 0, non?
The same thing happens when I perform the transform on the cgPath property.
Does anyone know what kind of circumstances could cause a UIBezierPath/CGPath to behave like this when translated? After reading the Apple docs, it seems that UIBezierPath/CGPath do not hold a transform state; the points are transformed immediately when the transform is called.
Thanks for any help.
Background:
The path data is from Font-Awesome SVGs, via PocketSVG. All files parse, and most draw OK. But a small subset exhibit the above translation issue. I'd like to know if I'm doing something fundamentally wrong or silly before I go ferreting through the SVG parsing, path-building code looking for defects.
BTW I am not drawing at this stage or otherwise dealing with a context; I am building paths prior to drawing.
[edit]
To check that PocketSVG was giving me properly formed data, I passed the same SVG to SwiftSVG, and got the same path data as PocketSVG, and the same result:
let svgURL = Bundle.main.url(forResource: "fa-mars-stroke-h", withExtension: "svg")!
var bezier = UIBezierPath.pathWithSVGURL(svgURL)!
print(bezier.bounds)
// (0.0085, 0.7200, 68.5542, 41.1379)
let origin = bezier.bounds.origin
let translation = CGAffineTransform(translationX: -origin.x, y: -origin.y)
bezier.apply(translation)
print(bezier.bounds)
// (0.0, -2.7755, 68.5542, 41.1379)
Once again, that y component should be 0, but is not. Very weird.
On a whim, I thought I'd try to apply a transformation again. And, it worked!
let translation2 = CGAffineTransform(translationX: -bezier.bounds.origin.x, y: -bezier.bounds.origin.y)
bezier.apply(translation2)
print(bezier.bounds)
// (0.0, 0.0, 68.5542491336633, 41.1379438254997)
Baffling! Am I overlooking something really basic here?
I have tried the same as you and it is working for me in Xcode 8.3.2 / iOS 10.
I struggled with the same problem myself; I managed to solve it with the following snippet of code (Swift 5). I tested it on an organic bezier shape and it works as expected:
extension CGRect {
    var center: CGPoint { return CGPoint(x: midX, y: midY) }
}

extension UIBezierPath {
    func center(inRect rect: CGRect) {
        let rectCenter = rect.center
        let bezierCenter = self.bounds.center
        let translation = CGAffineTransform(translationX: rectCenter.x - bezierCenter.x, y: rectCenter.y - bezierCenter.y)
        self.apply(translation)
    }
}
Usage example:
override func viewDidLoad() {
    super.viewDidLoad()
    let bezier = UIBezierPath() // replace this with your bezier object
    let shape = CAShapeLayer()
    shape.strokeColor = UIColor.black.cgColor
    shape.fillColor = UIColor.clear.cgColor
    shape.bounds = self.view.bounds
    shape.position = self.view.bounds.center
    bezier.center(inRect: shape.bounds)
    shape.path = bezier.cgPath
    self.view.layer.addSublayer(shape)
}
It will display the shape in the center of the screen.
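The centering logic in the snippet reduces to subtracting one center from the other. A quick Python sketch of the same arithmetic (the helper name is mine), with rects as (x, y, w, h) tuples:

```python
def center_translation(bounds, rect):
    # (tx, ty) that moves the center of `bounds` onto the center of `rect`;
    # rects are (x, y, w, h) tuples
    bx, by, bw, bh = bounds
    rx, ry, rw, rh = rect
    return (rx + rw / 2 - (bx + bw / 2), ry + rh / 2 - (by + bh / 2))

# a 20x20 box at (10, 10) centered inside a 100x100 rect needs to move by (30, 30)
print(center_translation((10, 10, 20, 20), (0, 0, 100, 100)))  # (30.0, 30.0)
```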

Resize sprite without shrinking contents

I have created a big circle with a UIBezierPath and turned it into a sprite using this:
let path = UIBezierPath(arcCenter: CGPoint(x: 0, y: 0), radius: CGFloat(226), startAngle: 0.0, endAngle: CGFloat(M_PI * 2), clockwise: false)
// create a shape from the path and customize it
let shape = SKShapeNode(path: path.cgPath)
shape.lineWidth = 20
shape.position = center
shape.strokeColor = UIColor(red:0.98, green:0.99, blue:0.99, alpha:1.00)
let trackViewTexture = self.view!.texture(from: shape, crop: outerPath.bounds)
let trackViewSprite = SKSpriteNode(texture: trackViewTexture)
trackViewSprite.physicsBody = SKPhysicsBody(edgeChainFrom: innerPath.cgPath)
self.addChild(trackViewSprite)
It works fine. It creates the circle perfectly. But I need to resize it using
SKAction.resize(byWidth: -43, height: -43, duration: 0.3)
Which will make it a bit smaller. But when it resizes, the 20-point line width I set is now very thin because of the aspect fill, so when I shrink it, it looks something like this:
But I need it to shrink like this, keeping the 20-point line width:
How would I do this?
I don't know if this affects anything, but the sprites are also rotating forever with an SKAction.
-- EDIT --
Now, how do I use this method to scale to a specific size? Like turn 226x226 to 183x183?
When you scale the circle down, not only its radius gets scaled but its line width too, so you need to set a new lineWidth inversely proportional to the scale. For example, when scaling the circle down by 2, you need to double the lineWidth.
This can be done in two ways:
Setting the lineWidth in the completion block of the run(_:completion:) method. However, this will result in seeing the line shrinking while the animation is running and then jumping to its new width once the animation finishes. If your animation is short, this may not be easy to observe, though.
Running a parallel animation together with the scaling one, which constantly adjusts the lineWidth. For this, you can use SKAction's customAction method.
Here is an example for your case:
let scale = CGFloat(0.5)
let finalLineWidth = initialLineWidth / scale
let animationDuration = 1.0
let scaleAction = SKAction.scale(by: scale, duration: animationDuration)
let lineWidthAction = SKAction.customAction(withDuration: animationDuration) { (shapeNode, time) in
    if let shape = shapeNode as? SKShapeNode {
        let progress = time / CGFloat(animationDuration)
        shape.lineWidth = initialLineWidth + progress * (finalLineWidth - initialLineWidth)
    }
}
let group = SKAction.group([scaleAction, lineWidthAction])
shape.run(group)
In this example, your shape will be scaled by 0.5, therefore in case of an initial line width of 10, the final width will be 20. First we create a scaleAction with a specified duration, then a custom action which will update the line's width every time its actionBlock is called, by using the progress of the animation to make the line's width look like it's not changing. At the end we group the two actions so they will run in parallel once you call run.
As a hint, you don't need to use Bezier paths to create circles, there is a init(circleOfRadius: CGFloat) initializer for SKShapeNode which creates a circle for you.
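The width compensation itself is just linear interpolation toward initialLineWidth / scale; here is a small Python sketch of that formula (the function name is mine):

```python
def line_width_at(progress, initial_width, scale):
    # interpolate toward initial_width / scale so the on-screen stroke
    # width stays constant while the node is scaled
    final_width = initial_width / scale
    return initial_width + progress * (final_width - initial_width)

# scaling by 0.5: the model-space width must double over the animation
print(line_width_at(0.0, 20, 0.5))  # 20.0
print(line_width_at(1.0, 20, 0.5))  # 40.0
```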

How to "center" SKTexture in SKSpriteNode

I'm trying to make a jigsaw puzzle game in SpriteKit. To make things easier I am using a 9x9 board of square tiles. On each tile there is one child node holding the piece of the image from its area.
But here my problem starts. A jigsaw puzzle piece isn't a perfect square, and when I apply the SKTexture to the node it is just placed from anchorPoint = {0, 0}. The result isn't pretty; actually, it's terrible:
https://www.dropbox.com/s/2di30hk5evdd5fr/IMG_0086.jpg?dl=0
I managed to fix those tiles with right and top "hooks", but left and bottom side doesn't care about anything.
var sprite = SKSpriteNode()
let originSize = frame.size
let textureSize = texture.size()
sprite.size = originSize
sprite.texture = texture
sprite.size = texture.size()
let x = (textureSize.width - originSize.width)
let widthRate = x / textureSize.width
let y = (textureSize.height - originSize.height)
let heightRate = y / textureSize.height
sprite.anchorPoint = CGPoint(x: 0.5 - (widthRate * 0.5), y: 0.5 - (heightRate * 0.5))
sprite.position = CGPoint(x: frame.width * 0.5, y: frame.height * 0.5)
addChild(sprite)
Can you give me some advice?
I don't see a way you can get the placement right without knowing more about the piece texture, because they will all be different: whether the piece has a nob on any of its sides, and how much width/height the nob adds to the texture. It's hard to tell in the pic, but even if a piece doesn't have a nob and instead has an inset, the sizes could still vary.
Without knowing anything about how the texture is created I am not able to offer help on that. But I do believe the issue starts with that. If it were me I would create a square texture with additional alpha to center the piece correctly. So the center of that texture would always be placed in the center of a square on the grid.
With all that being said I do know that adding that texture to a node and then adding that node to a SKNode will make your placement go smoother with the way you currently have it. The trick will then only be placing that textured piece correctly within the empty SKNode.
For example...
let piece = SKSpriteNode()
let texturedPiece = SKSpriteNode(texture: texture)
//positioning
//offset x needs to be calculated with additional info about the texture
//for example it has just a nob on the right
let offsetX : CGFloat = -nobWidth/2
//offset y needs to be calculated with additional info about the texture
//for example it has a nob on the top and bottom
let offsetY : CGFloat = 0.0
texturedPiece.position = CGPoint(x: offsetX, y: offsetY)
piece.addChild(texturedPiece)
let squareWidth = size.width/2
//Now that the textured piece is placed correctly within a parent
//placing the parent is super easy and consistent without messing
//with anchor points. This will also make rotations nice.
piece.position = CGPoint(x: squareWidth/2, y: squareWidth/2)
addChild(piece)
Hopefully that makes sense and didn't confuse things further.
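The offset arithmetic hinted at in the comments can be sketched like this (a hypothetical Python helper; it assumes a nob grows the texture by nob_size on each side that has one, which may not match your actual piece textures):

```python
def piece_offset(nob_left, nob_right, nob_bottom, nob_top, nob_size):
    # Center of the square body relative to the center of the full texture:
    # a nob on one side extends the texture on that side only, so the body
    # center shifts half a nob the other way.
    dx = (nob_left - nob_right) * nob_size / 2
    dy = (nob_bottom - nob_top) * nob_size / 2
    return (dx, dy)

# nob only on the right: shift the textured piece left by half the nob width
print(piece_offset(False, True, False, False, 12))  # (-6.0, 0.0)
# nobs on top and bottom cancel out vertically
print(piece_offset(False, False, True, True, 12))   # (0.0, 0.0)
```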

CGAffineTransform scale and translation - jump before animation

I am struggling with an issue regarding CGAffineTransform scale and translation: when I set a transform in an animation block on a view that already has a transform, the view jumps a bit before animating.
Example:
// somewhere in view did load or during initialization
var view = UIView()
view.frame = CGRectMake(0,0,100,100)
var scale = CGAffineTransformMakeScale(0.8,0.8)
var translation = CGAffineTransformMakeTranslation(100,100)
var concat = CGAffineTransformConcat(translation, scale)
view.transform = concat
// called sometime later
func buttonPressed() {
var secondScale = CGAffineTransformMakeScale(0.6,0.6)
var secondTranslation = CGAffineTransformMakeTranslation(150,300)
var secondConcat = CGAffineTransformConcat(secondTranslation, secondScale)
UIView.animateWithDuration(0.5, animations: { () -> Void in
view.transform = secondConcat
})
}
Now when buttonPressed() is called, the view jumps about 10 pixels to the top left before it starts to animate. I only witnessed this issue with a concatenated transform; using only a translation transform works fine.
Edit: Since I've done a lot of research regarding the matter I think I should mention that this issue appears regardless of whether or not auto layout is turned on
I ran into the same issue, but couldn't find the exact source of the problem. The jump seems to appear only in very specific conditions: If the view animates from a transform t1 to a transform t2 and both transforms are a combination of a scale and a translation (that's exactly your case). Given the following workaround, which doesn't make sense to me, I assume it's a bug in Core Animation.
First, I tried using CATransform3D instead of CGAffineTransform.
Old code:
var transform = CGAffineTransformIdentity
transform = CGAffineTransformScale(transform, 1.1, 1.1)
transform = CGAffineTransformTranslate(transform, 10, 10)
view.layer.setAffineTransform(transform)
New code:
var transform = CATransform3DIdentity
transform = CATransform3DScale(transform, 1.1, 1.1, 1.0)
transform = CATransform3DTranslate(transform, 10, 10, 0)
view.layer.transform = transform
The new code should be equivalent to the old one (the fourth parameter is set to 1.0 or 0 so that there is no scaling/translation in z direction), and in fact it shows the same jumping. However, here comes the black magic: In the scale transformation, change the z parameter to anything different from 1.0, like this:
transform = CATransform3DScale(transform, 1.1, 1.1, 1.01)
This parameter should have no effect, but now the jump is gone.
🎩✨
This looks like an internal bug in Apple's UIView animation. When Apple interpolates a CGAffineTransform change between two values to create an animation, it should do the following steps:
Extract the translation, scale, and rotation
Interpolate the extracted values from start to end
Assemble a CGAffineTransform for each interpolation step
The assembly should happen in the following order:
Translation
Scaling
Rotation
But it looks like Apple applies the translation after scaling and rotation. This bug should be fixed by Apple.
I don't know why, but this code works.
update:
I successfully combined scale, translation, and rotation, going from any transform state to any new transform state.
I think the transform is reinterpreted at the start of the animation: the anchor of the start transform is considered in the new transform, and then we convert it back to the old transform.
self.v = UIView(frame: CGRect(x: 50, y: 50, width: 50, height: 50))
self.v?.backgroundColor = .blue
self.view.addSubview(v!)

func buttonPressed() {
    let view = self.v!
    let m1 = view.transform
    let tempScale = CGFloat(arc4random() % 10) / 10 + 1.0
    let tempRotate: CGFloat = 1
    let m2 = m1.translatedBy(x: CGFloat(arc4random() % 30), y: CGFloat(arc4random() % 30)).scaledBy(x: tempScale, y: tempScale).rotated(by: tempRotate)
    self.animationViewToNewTransform(view: view, newTranform: m2)
}
func animationViewToNewTransform(view: UIView, newTranform: CGAffineTransform) {
    // 1. pointInView.applying(view.transform) is not the correct point;
    //    the real matrix is mAnchorToOrigin.inverted().concatenating(m1).concatenating(mAnchorToOrigin)
    // 2. the animation's begin transform is relative to the final transform, in the final transform's coordinates

    // anchor and mAnchor
    let normalizedAnchor0 = view.layer.anchorPoint
    let anchor0 = CGPoint(x: normalizedAnchor0.x * view.bounds.width, y: normalizedAnchor0.y * view.bounds.height)
    let mAnchor0 = CGAffineTransform.identity.translatedBy(x: anchor0.x, y: anchor0.y)

    // 0 -> 1 -> 2
    //let origin = CGPoint(x: 0, y: 0)
    //let m0 = CGAffineTransform.identity
    let m1 = view.transform
    let m2 = newTranform

    // rotate and scale relative to the anchor, not to the origin
    let matrix1 = mAnchor0.inverted().concatenating(m1).concatenating(mAnchor0)
    let matrix2 = mAnchor0.inverted().concatenating(m2).concatenating(mAnchor0)
    let anchor1 = anchor0.applying(matrix1)
    let mAnchor1 = CGAffineTransform.identity.translatedBy(x: anchor1.x, y: anchor1.y)
    let anchor2 = anchor0.applying(matrix2)
    let txty2 = CGPoint(x: anchor2.x - anchor0.x, y: anchor2.y - anchor0.y)
    let txty2plusAnchor2 = CGPoint(x: txty2.x + anchor2.x, y: txty2.y + anchor2.y)
    let anchor1InM2System = anchor1.applying(matrix2.inverted()).applying(mAnchor0.inverted())
    let txty2ToM0System = txty2plusAnchor2.applying(matrix2.inverted()).applying(mAnchor0.inverted())
    let txty2ToM1System = txty2ToM0System.applying(mAnchor0).applying(matrix1).applying(mAnchor1.inverted())

    var m1New = m1
    m1New.tx = txty2ToM1System.x + anchor1InM2System.x
    m1New.ty = txty2ToM1System.y + anchor1InM2System.y
    view.transform = m1New
    UIView.animate(withDuration: 1.4) {
        view.transform = m2
    }
}
I also tried the zScale solution; it also seems to work if you set a non-1 zScale in the first transform, or in every transform:
let oldTransform = view.layer.transform
let tempScale = CGFloat(arc4random() % 10) / 10 + 1.0
var newTransform = CATransform3DScale(oldTransform, tempScale, tempScale, 1.01)
newTransform = CATransform3DTranslate(newTransform, CGFloat(arc4random() % 30), CGFloat(arc4random() % 30), 0)
newTransform = CATransform3DRotate(newTransform, 1, 0, 0, 1)
UIView.animate(withDuration: 1.4) {
    view.layer.transform = newTransform
}
Instead of CGAffineTransformMakeScale() and CGAffineTransformMakeTranslation(), which create a transform based off of CGAffineTransformIdentity (basically no transform), you want to scale and translate based on the view's current transform using CGAffineTransformScale() and CGAffineTransformTranslate(), which start with the existing transform.
The source of the issue is the lack of perspective information in the transform.
You can add perspective information by modifying the m34 property of your 3D transform:
var transform = CATransform3DIdentity
transform.m34 = 1.0 / 200 //your own perspective value here
transform = CATransform3DScale(transform, 1.1, 1.1, 1.0)
transform = CATransform3DTranslate(transform, 10, 10, 0)
view.layer.transform = transform
