CGAffineTransform to scale width and height only - ios

How do I scale a CGContext without affecting its origin, i.e. so that only the width and height get scaled? If I apply a scale like the one below directly, it scales the origin as well.
context.scaledBy(x: 2.0, y: 2.0)
Is there a way to construct an affine transform that manipulates only the width and height, leaving the origin untouched?
I want a CGAffineTransform that can be applied to both a CGContext and a CGRect.
For example, given a CGRect rect = {x, y, w, h}:
var t = CGAffineTransform.identity
t = t.scaledBy(x: sx, y: sy)
let tRect = rect.applying(t)
tRect will be {x * sx, y * sy, w * sx, h * sy}.
But I want {x, y, w * sx, h * sy}. Though it can be achieved by direct calculation, I need a CGAffineTransform that does this.

You need to translate the origin, then scale, then undo the translation:
import Foundation
import CoreGraphics
let rect = CGRect(x: 1, y: 2, width: 3, height: 4) // Whatever
// Translation to move rect's origin to <0,0>
let t0 = CGAffineTransform(translationX: -rect.origin.x, y: -rect.origin.y)
// Scale - <0,0> will not move, width & height will
let ts = CGAffineTransform(scaleX: 2, y: 3) // Whatever
// Translation to restore origin
let t1 = CGAffineTransform(translationX: rect.origin.x, y: rect.origin.y)
// Compound transform:
let t = t0.concatenating(ts).concatenating(t1)
// Test it:
let tRect = rect.applying(t) // 1, 2, 6, 12 as required
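If you need this in more than one place, the same three-step construction can be wrapped in a small helper. A minimal sketch, assuming a hypothetical scale(_:_:around:) extension method (this is not an existing CoreGraphics API):
extension CGAffineTransform {
    /// Scales by (sx, sy) while keeping `point` fixed.
    static func scale(_ sx: CGFloat, _ sy: CGFloat, around point: CGPoint) -> CGAffineTransform {
        CGAffineTransform(translationX: -point.x, y: -point.y)
            .concatenating(CGAffineTransform(scaleX: sx, y: sy))
            .concatenating(CGAffineTransform(translationX: point.x, y: point.y))
    }
}

let scaled = rect.applying(.scale(2, 3, around: rect.origin)) // (1.0, 2.0, 6.0, 12.0)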

Related

iOS Core Graphics: how to calculate the CGAffineTransform(scaleX:y:) translation offset?

While learning Core Graphics from a Ray Wenderlich tutorial,
one step transforms a UIBezierPath with var transform = CGAffineTransform(scaleX: 0.8, y: 0.8).
I do not understand why the step after that, transform = transform.translatedBy(x: 15, y: 30), is right;
I don't know how the x and y values are calculated.
From printing the UIBezierPath current point with print(medallionPath.currentPoint), I thought the x value should be (x1 - x2) * 0.5 and the y value y1 - y2; I really don't know why it is
(x: 15, y: 30).
The whole code follows, tested in a Playground:
import UIKit

let size = CGSize(width: 120, height: 200)
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let context = UIGraphicsGetCurrentContext()!
//Gold colors
let darkGoldColor = UIColor(red: 0.6, green: 0.5, blue: 0.15, alpha: 1.0)
let midGoldColor = UIColor(red: 0.86, green: 0.73, blue: 0.3, alpha: 1.0)
let medallionPath = UIBezierPath(ovalIn: CGRect(x: 8, y: 72, width: 100, height: 100))
print(medallionPath.currentPoint)
// (108.0, 122.0)
print(medallionPath.bounds)
// (8.0, 72.0, 100.0, 100.0)
context.saveGState()
medallionPath.addClip()
darkGoldColor.setFill()
medallionPath.fill()
context.restoreGState()
// question
var transform = CGAffineTransform(scaleX: 0.8, y: 0.8)
// transform = transform.translatedBy(x: 15, y: 30)
medallionPath.lineWidth = 2.0
// apply the transform to the path
medallionPath.apply(transform)
print(medallionPath.currentPoint)
// (86.4, 97.6)
print(medallionPath.bounds)
// (6.4, 57.6, 80.0, 80.0)
medallionPath.stroke()
// This code must always be at the end of the playground
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
transform = transform.translatedBy(x: 15, y: 30)
is a translation. It shifts the entire path to the right by 15 and down by 30. Your question is where the magic numbers 15 and 30 came from.
The medallion drawing has a circle with dimensions 100 x 100 starting at position (8, 72) as established by this line of code:
let medallionPath = UIBezierPath(ovalIn: CGRect(x: 8, y: 72, width: 100, height: 100))
The code then adds an inner ring by scaling the original path by 0.8, so it will be an 80 x 80 circle. For the center of this smaller ring to line up with the center of the bigger circle, it needs to be offset by an additional 10 in both the horizontal and vertical directions. (The smaller circle is 20 points smaller horizontally and vertically, so shifting it by 1/2 of 20 aligns it properly.) The goal then is to have it positioned at (18, 82) with width: 80, height: 80.
So, we need to apply a translation (a shift) in the X and Y directions, such that when the path is scaled we end up with a path anchored at (18, 82). The tricky bit is that the scaling gets applied to the shift values, so that has to be accounted for as well.
So, we are starting with an X position of 8, and we want to apply some translation value dx so that when the result is scaled by 0.8 we end up with the value 18:
(8 + dx) * 0.8 = 18
solving for dx:
dx = (18 / 0.8) - 8
dx = 14.5
Similarly for Y, we are starting with a Y position of 72 and want to apply a translation dy such that when it is scaled by 0.8 we end up with 82:
(72 + dy) * 0.8 = 82
solving for dy:
dy = (82 / 0.8) - 72
dy = 30.5
So, the mathematically correct translation is (14.5, 30.5):
transform = transform.translatedBy(x: 14.5, y: 30.5)
Ray Wenderlich rounded those to (15, 30) for some reason known only to them (because the round numbers look better in the code perhaps?). It's possible that they didn't bother to do the math and just tried values until it looked right.
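You can verify this in a playground: applying the exact translation yields scaled bounds anchored exactly at (18, 82).
var exact = CGAffineTransform(scaleX: 0.8, y: 0.8)
exact = exact.translatedBy(x: 14.5, y: 30.5)
print(CGRect(x: 8, y: 72, width: 100, height: 100).applying(exact))
// (18.0, 82.0, 80.0, 80.0)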
Here a scale of 0.8 means 80% of the current size, so x, y, width and height all reduce to 80%.
The relative difference introduced by the scale is
scaleDifference = (1 / 0.8) - 1.0 = 0.25, i.e. 25%
The path's rect is (8, 72, 100, 100), and the inner circle must stay in the centre, so the centre point does not change; only x and y need adjusting.
The circle shrinks by 10 on each side, so to keep it centred, x and y must increase by 10 while the width and height each lose 20.
Because the scale is set to 80%, whatever x and y translation we supply is itself reduced to 80%: for example, if we translate by x = 10 and y = 10, the system converts it to 80%, so x = 8 and y = 8.
To keep the circle centred, we have to add half the width/height difference to x and y. The width and height are 100, so
widthDifference = heightDifference = 100 * (scaleDifference / 2.0) = 100 * (0.25 / 2.0) = 12.5
And because scaling also reduces x and y to 80%, compensating them back to 100% gives
xDifference = 8 * scaleDifference = 2.0
yDifference = 72 * scaleDifference = 18.0
Adding the width difference to dx and the height difference to dy gives the final translation values:
dx = xDifference + widthDifference = 2.0 + 12.5 = 14.5
dy = yDifference + heightDifference = 18.0 + 12.5 = 30.5
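The same arithmetic as a playground snippet (the variable names are illustrative only):
let scale: CGFloat = 0.8
let pathRect = CGRect(x: 8, y: 72, width: 100, height: 100)
let scaleDifference = (1 / scale) - 1.0                        // 0.25
let widthDifference = pathRect.width * scaleDifference / 2.0   // 12.5
let heightDifference = pathRect.height * scaleDifference / 2.0 // 12.5
let dx = pathRect.minX * scaleDifference + widthDifference     // 14.5
let dy = pathRect.minY * scaleDifference + heightDifference    // 30.5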

How to find X, Y and Rotation value of a UIView after CATransform3D applied

I have a UIView on my screen. I am applying layer.transform to that view with translation and rotation that follow the user's movement, using tap and rotation gestures. In the end I want to retrieve the final x and y position, plus the rotation, separately. I could not find any post here about getting that information back out of the transform. Can anyone help with this?
Here is the code am using to apply the transform.
var transform = CATransform3DIdentity
transform = CATransform3DTranslate(transform, displacementX, displacementY, 1.0)
transform = CATransform3DRotate(transform, gesture.rotation, 0, 0, 1.0)
self.currentItem.imageView.layer.transform = transform
Please refer to the following code.
To apply the transform:
let degrees = 90.0
let radians = CGFloat(degrees * Double.pi / 180)
sampleView.layer.transform = CATransform3DMakeRotation(radians, 0.0, 0.0, 1.0)
To get the rotation angle after the transform:
let radiansFromSampleView = atan2(sampleView.transform.b, sampleView.transform.a)
let degreesFromSampleView = radiansFromSampleView * 180 / .pi
For the x and y positions, you can take them directly from the view's frame even after the transformation.
Hope this can be helpful.
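Alternatively, when the transform combines only a 2D translation and a z-axis rotation, as in the question's code, both values can be read back straight from the layer's CATransform3D fields. A minimal sketch under that assumption:
let t3d = currentItem.imageView.layer.transform
let translationX = t3d.m41             // x displacement
let translationY = t3d.m42             // y displacement
let rotation = atan2(t3d.m12, t3d.m11) // rotation about z, in radians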

Why is CAShapeLayer not centering to my UIView when I scale it?

let centerPointX = colorSizeGuide.bounds.midX / 2
let centerPointY = colorSizeGuide.bounds.midY / 2
let circleWidth: CGFloat = 10
let circleHeight: CGFloat = 10
shape.path = UIBezierPath(ovalIn: CGRect(x: centerPointX + circleWidth / 4, y: centerPointY + circleHeight / 4, width: circleWidth, height: circleHeight)).cgPath
shape.strokeColor = UIColor(r: 160, g: 150, b: 180).cgColor
shape.fillColor = UIColor(r: 160, g: 150, b: 180).cgColor
shape.anchorPoint = CGPoint(x: 0.5, y: 0.5)
shape.lineWidth = 0.1
shape.transform = CATransform3DMakeScale(4.0, 4.0, 1.0)
colorSizeGuide.layer.addSublayer(shape)
Here's what's happening. I need the CAShapeLayer to stay in the middle of the small gray area:
I struggle with affine transforms a little myself, but here's what I think is going on:
The scale takes place centered around 0,0, so it will grow out from that point. That means it will "push away" from the origin.
In order to grow from the center, you should shift the origin to the center point of your shape, scale, and then shift the origin back. Note that this last translation is applied in the already-scaled coordinate space, so you pass the unscaled center offsets and the transform scales them for you:
var transform = CATransform3DMakeTranslation(centerPointX, centerPointY, 0)
transform = CATransform3DScale(transform, 4.0, 4.0, 1.0)
transform = CATransform3DTranslate(transform, -centerPointX, -centerPointY, 0)
shape.transform = transform
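For completeness, the same center-anchored scaling can be written with the fluent CGAffineTransform API and applied via setAffineTransform(_:); a sketch assuming a plain 2D scale is all you need:
let scaleAboutCenter = CGAffineTransform(translationX: centerPointX, y: centerPointY)
    .scaledBy(x: 4.0, y: 4.0)
    .translatedBy(x: -centerPointX, y: -centerPointY)
shape.setAffineTransform(scaleAboutCenter)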
BTW, I can't make any sense of the image you posted with your question. You say "I need the CAShapeLayer to stay in the middle of the small gray area." I gather your shape layer is one of the circles, but it isn't clear what you mean by "the small gray area." It looks like there might be an outline that got cropped somehow.

How to calculate the correct position for a matrix of circles to fit inside given view with proportional gaps?

I am pretty new to iOS development. I am trying to display a 10x10 grid inside a UIView, respecting its bounds, and I would like the circles to be sized based on the available width/height of the device.
What I tried so far without luck:
func setUpPoints() {
    let matrixSize = 10
    let diameter = (min(painelView.frame.size.width, painelView.frame.size.height) / CGFloat(matrixSize + 1)).rounded(.down)
    let radius = diameter / 2
    for i in 0...matrixSize {
        for j in 0...matrixSize {
            let x = CGFloat(i) * diameter + radius
            let y = CGFloat(j) * diameter + radius
            let frame = CGRect(x: x, y: y, width: diameter, height: diameter)
            let circle = Circle(frame: frame)
            circle.tag = j * matrixSize + i + 1
            painelView.addSubview(circle)
        }
    }
}
My goal is to distribute the circles inside the gray rectangle proportionally so it will look like the Android pattern lock screen:
Can someone please give me some pointers?
Thanks.
If I understand what you are trying to do, then the following line:
let radius = (painelView.frame.size.width + painelView.frame.size.height) / CGFloat(matrixSize * 2)
should be:
let radius = (min(painelView.frame.size.width, painelView.frame.size.height) / CGFloat(matrixSize + 1)).rounded(.down)
The above change will allow the "square" of circles to fit within whichever is smaller - the view's width or height - allowing for a gap around the "square" equal to half the diameter of each circle.
You also need to change both loops to iterate 0..<matrixSize instead of 0...matrixSize:
for i in 0..<matrixSize {
    for j in 0..<matrixSize {
BTW - your radius variable is really the diameter. And gap is really the radius.
The following code provides a border around the square of circles and it includes some space between the circles. Adjust as needed.
func setUpPoints() {
    let matrixSize = 10
    let borderRatio = CGFloat(0.5) // half a circle diameter - change as desired
    let gapRatio = CGFloat(0.25)   // quarter circle diameter - change as desired
    let squareSize = min(painelView.frame.size.width, painelView.frame.size.height)
    let diameter = (squareSize / (CGFloat(matrixSize) + 2 * borderRatio + CGFloat(matrixSize - 1) * gapRatio)).rounded(.down)
    let centerToCenter = (diameter + diameter * gapRatio).rounded(.down)
    let borderSize = (diameter * borderRatio).rounded()
    for i in 0..<matrixSize {
        for j in 0..<matrixSize {
            let x = CGFloat(i) * centerToCenter + borderSize
            let y = CGFloat(j) * centerToCenter + borderSize
            let frame = CGRect(x: x, y: y, width: diameter, height: diameter)
            let circle = Circle(frame: frame)
            circle.tag = j * matrixSize + i + 1
            painelView.addSubview(circle)
        }
    }
}
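Both snippets assume a Circle view class, which the question does not show. A minimal sketch of such a UIView subclass, one that simply fills its bounds with an oval:
import UIKit

class Circle: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .clear // let the superview show through around the oval
        isOpaque = false
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: CGRect) {
        UIColor.darkGray.setFill() // placeholder color - change as desired
        UIBezierPath(ovalIn: bounds).fill()
    }
}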

CraftAR Image recognition - Translating matchBoundingBox to points in screen

I am using on-device image recognition from Catchoom CraftAR, working with the example available on GitHub: https://github.com/Catchoom/craftar-example-ios-on-device-image-recognition.
The image recognition works. I would like to use the matchBoundingBox to draw small squares on all 4 corners, but somehow the calculations I am doing are not working. I have based them on this article:
http://support.catchoom.com/customer/portal/articles/1886553-obtain-the-bounding-boxes-of-the-results-of-image-recognition
The square views are added to the scanning overlay, and this is how I am calculating the points at which to add the 4 views:
CraftARSearchResult *bestResult = [results objectAtIndex:0];
BoundingBox *box = bestResult.matchBoundingBox;
float w = self._preview.frame.size.width;
float h = self._preview.frame.size.height;
CGPoint tr = CGPointMake(w * box.topRightX , h * box.topRightY);
CGPoint tl = CGPointMake(w * box.topLeftX, h * box.topLeftY);
CGPoint br = CGPointMake(w * box.bottomRightX, h * box.bottomRightY);
CGPoint bl = CGPointMake(w * box.bottomLeftX, h * box.bottomLeftY);
The x position looks pretty close, but the y position is completely off and looks mirrored.
I am testing on an iPhone 6s running iOS 10.
Am I missing something?
The issue was that I was using the preview frame to translate to points on screen. But the points that come through in the bounding box are not relative to the preview view; they are relative to the video frame (as the support people at catchoom.com pointed out). The video frame size is set by the capturePreset, which accepts only two values, AVCaptureSessionPreset1280x720 and AVCaptureSessionPreset640x480; the default is AVCaptureSessionPreset1280x720.
So in my case I had to make the calculations with a size of 1280x720 and then convert from those coordinates to coordinates in my preview view's size.
So it ended up looking like this:
let box = bestResult.matchBoundingBox
let wVideoFrame: CGFloat = 1280.0
let hVideoFrame: CGFloat = 720.0
// The video frame is landscape while the preview is portrait, hence the swapped axes
let wRelativePreview = wVideoFrame / CGFloat(preview.frame.size.height)
let hRelativePreview = hVideoFrame / CGFloat(preview.frame.size.width)
var tl = CGPoint(x: wVideoFrame * CGFloat(box.topLeftX), y: hVideoFrame * CGFloat(box.topLeftY))
var tr = CGPoint(x: wVideoFrame * CGFloat(box.topRightX), y: hVideoFrame * CGFloat(box.topRightY))
var br = CGPoint(x: wVideoFrame * CGFloat(box.bottomRightX), y: hVideoFrame * CGFloat(box.bottomRightY))
var bl = CGPoint(x: wVideoFrame * CGFloat(box.bottomLeftX), y: hVideoFrame * CGFloat(box.bottomLeftY))
tl = CGPoint(x: tl.x / wRelativePreview, y: tl.y / hRelativePreview)
tr = CGPoint(x: tr.x / wRelativePreview, y: tr.y / hRelativePreview)
br = CGPoint(x: br.x / wRelativePreview, y: br.y / hRelativePreview)
bl = CGPoint(x: bl.x / wRelativePreview, y: bl.y / hRelativePreview)
// 4 squares visualizing the top-left, top-right, bottom-right and bottom-left points
var fr = vTL.frame
fr.origin = tl
vTL.frame = fr
fr.origin = tr
vTR.frame = fr
fr.origin = br
vBR.frame = fr
fr.origin = bl
vBL.frame = fr
Now the points looked quite OK on screen, but they seemed somehow rotated, so I rotated the overlay view 90 degrees:
// overlay is the container of the 4 squares that visualize the points on screen
overlay.transform = CGAffineTransform(rotationAngle: .pi / 2.0)
Note that this is not the official response from Catchoom support, so it might not be 100% correct, but it worked quite well for me.
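The conversion can also be wrapped in a small helper. This sketch assumes the default 1280x720 video frame and a portrait preview, as described above; the function name and signature are illustrative only:
func previewPoint(normalizedX: CGFloat, normalizedY: CGFloat, in preview: UIView) -> CGPoint {
    let videoFrame = CGSize(width: 1280, height: 720)
    // Scale the normalized bounding-box coordinates up to video-frame pixels...
    let p = CGPoint(x: videoFrame.width * normalizedX, y: videoFrame.height * normalizedY)
    // ...then map them onto the portrait preview, whose axes are swapped
    // relative to the landscape video frame.
    return CGPoint(x: p.x * preview.frame.height / videoFrame.width,
                   y: p.y * preview.frame.width / videoFrame.height)
}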
