How to get the actual translation of a UIView - iOS

I have a UIView in which I draw my own map with dimensions of, say, 1000 × 1000.
I am moving and scaling the map with CGAffineTransform (translate, scale) transformations.
Let's say the upper-left coordinate of my map after translation is 20 (x), 10 (y), because I translated the map 20 to the left and 10 up.
Is there a way to get the coordinate of the upper-left corner of my map from the view? (I know I could maintain my own metadata for this and update it after each translation myself.)

extension CGAffineTransform {
    var translation: CGPoint {
        return CGPoint(x: tx, y: ty)
    }
}
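A hedged usage sketch of that extension (myView here is a hypothetical view being panned; if the transform also contains a scale, tx/ty are still the raw translation components):

// Hypothetical usage: apply a translation (20 pt left, 10 pt up),
// then read the offset back from the view's transform.
myView.transform = CGAffineTransform(translationX: -20, y: -10)
let offset = myView.transform.translation   // CGPoint(x: -20.0, y: -10.0)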

Related

How can I rotate and translate/scale an overlay so it matches the map rect?

I have an app where users can add an image to a map. This is pretty straightforward; it becomes much more difficult when I want to add rotation (taken from the current map heading). The code I use to create an image is:
let imageAspectRatio = Double(image.size.height / image.size.width)
let mapAspectRatio = Double(visibleMapRect.size.height / visibleMapRect.size.width)
var mapRect = visibleMapRect
if mapAspectRatio > imageAspectRatio {
    // Aspect ratio of map is bigger than aspect ratio of image (map is taller than the image), take away height from the rectangle
    let heightChange = mapRect.size.height - mapRect.size.width * imageAspectRatio
    mapRect.size.height = mapRect.size.width * imageAspectRatio
    mapRect.origin.y += heightChange / 2
} else {
    // Aspect ratio of map is smaller than aspect ratio of image (map is wider than the image), take away width from the rectangle
    let widthChange = mapRect.size.width - mapRect.size.height / imageAspectRatio
    mapRect.size.width = mapRect.size.height / imageAspectRatio
    mapRect.origin.x += widthChange / 2
}
photos.append(ImageOverlay(image: image, boundingMapRect: mapRect, rotation: cameraHeading))
The ImageOverlay class conforms to MKOverlay, which I can easily draw on the map. Here's the code for that class:
class ImageOverlay: NSObject, MKOverlay {
    let image: UIImage
    let boundingMapRect: MKMapRect
    let coordinate: CLLocationCoordinate2D
    let rotation: CLLocationDirection

    init(image: UIImage, boundingMapRect: MKMapRect, rotation: CLLocationDirection) {
        self.image = UIImage.fixedOrientation(for: image) ?? image
        self.boundingMapRect = boundingMapRect
        self.coordinate = CLLocationCoordinate2D(latitude: boundingMapRect.midX, longitude: boundingMapRect.midY)
        self.rotation = rotation
    }
}
I have figured out by how much I need to scale and translate the context to fit on the correct location on the map (from which it was added). I can't figure out how to rotate the context to make the image render in the correct location.
I figured out that the map rotation was in degrees, and the rotate method takes radians (took longer than I dare to admit), but the image moves around when I apply the rotation.
I use the following code to render the overlay:
override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext) {
    guard let overlay = overlay as? ImageOverlay else {
        return
    }
    let rect = self.rect(for: overlay.boundingMapRect)
    context.scaleBy(x: 1.0, y: -1.0)
    context.translateBy(x: 0.0, y: -rect.size.height)
    context.rotate(by: CGFloat(overlay.rotation * Double.pi / 180))
    context.draw(overlay.image.cgImage!, in: rect)
}
How do I need to rotate this context to get the image to be aligned properly?
This is an open source project with the code here
Edit: I have tried (and failed) to use some kind of trig function. If I scale by a factor of 3 * sin(rotation) / 4 (no clue where the 3/4 comes from), I get a correct scale for some rotations, but not for others.
It sounds like you're trying to rotate the object in its local coordinates, but are actually rotating in world coordinates. I admit I'm not familiar with this library, but the moral of the story is that the order of operations on transformations matters. It also looks like you have translateBy and are passing in zero, which might not be moving it at all? If you are trying to translate back to local space, you'd need to do so by subtracting its current coordinates (the ones in the CLLocationCoordinate2D struct).
Translate to local coordinates if not already there (which might be x: 0, y: 0; you might need to subtract the current coordinate values instead of setting them to a specific number like 0)
Apply the rotation
Translate back to world coordinates (where you had it before, originally defined as CLLocationCoordinate2D)
This should allow the image to be in the correct position but now rotated to align with the heading; a sketch of this translate-rotate-translate pattern follows.
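A minimal sketch of that pattern in the renderer's draw method, not taken from the project: translate the context so the rect's centre sits at the origin, rotate, then translate back. Treat it as an illustration of the ordering rather than a drop-in fix.

override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext) {
    guard let overlay = overlay as? ImageOverlay else { return }
    let rect = self.rect(for: overlay.boundingMapRect)

    // Flip the context vertically, as in the original renderer.
    context.scaleBy(x: 1.0, y: -1.0)
    context.translateBy(x: 0.0, y: -rect.size.height)

    // Rotate around the centre of the overlay's rect instead of the context origin:
    // move the origin to the centre, rotate, then move it back.
    context.translateBy(x: rect.midX, y: rect.midY)
    context.rotate(by: CGFloat(overlay.rotation * Double.pi / 180))
    context.translateBy(x: -rect.midX, y: -rect.midY)

    context.draw(overlay.image.cgImage!, in: rect)
}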
Here is a paper which explains what you are probably encountering; it goes more in depth and is specific to matrices/OpenGL, but the first slide illustrates your issue.
Transforms PDF

How to do transforms on a CALayer?

Before writing this question, I've
had experience with Affine transforms for views
read the Transforms documentation in the Quartz 2D Programming Guide
seen this detailed CALayer tutorial
downloaded and run the LayerPlayer project from Github
However, I'm still having trouble understanding how to do basic transforms on a layer. Finding explanations and simple examples for translate, rotate and scale has been difficult.
Today I finally decided to sit down, make a test project, and figure them out. My answer is below.
Notes:
I only do Swift, but if someone else wants to add the Objective-C code, be my guest.
At this point I am only concerned with understanding 2D transforms.
Basics
There are a number of different transforms you can do on a layer, but the basic ones are
translate (move)
scale
rotate
To do transforms on a CALayer, you set the layer's transform property to a CATransform3D type. For example, to translate a layer, you would do something like this:
myLayer.transform = CATransform3DMakeTranslation(20, 30, 0)
The word Make is used in the name for creating the initial transform: CATransform3DMakeTranslation. Subsequent transforms that are applied omit the Make. See, for example, this rotation followed by a translation:
let rotation = CATransform3DMakeRotation(CGFloat.pi * 30.0 / 180.0, 20, 20, 0)
myLayer.transform = CATransform3DTranslate(rotation, 20, 30, 0)
Now that we have the basis of how to make a transform, let's look at some examples of how to do each one. First, though, I'll show how I set up the project in case you want to play around with it, too.
Setup
For the following examples I set up a Single View Application and added a UIView with a light blue background to the storyboard. I hooked up the view to the view controller with the following code:
import UIKit

class ViewController: UIViewController {

    var myLayer = CATextLayer()
    @IBOutlet weak var myView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // setup the sublayer
        addSubLayer()

        // do the transform
        transformExample()
    }

    func addSubLayer() {
        myLayer.frame = CGRect(x: 0, y: 0, width: 100, height: 40)
        myLayer.backgroundColor = UIColor.blue.cgColor
        myLayer.string = "Hello"
        myView.layer.addSublayer(myLayer)
    }

    //******** Replace this function with the examples below ********
    func transformExample() {
        // add transform code here ...
    }
}
There are many different kinds of CALayer, but I chose to use CATextLayer so that the transforms will be more clear visually.
Translate
The translation transform moves the layer. The basic syntax is
CATransform3DMakeTranslation(_ tx: CGFloat, _ ty: CGFloat, _ tz: CGFloat)
where tx is the change in the x coordinates, ty is the change in y, and tz is the change in z.
Example
In iOS the origin of the coordinate system is in the top left, so if we wanted to move the layer 90 points to the right and 50 points down, we would do the following:
myLayer.transform = CATransform3DMakeTranslation(90, 50, 0)
Notes
Remember that you can paste this into the transformExample() method in the project code above.
Since we are just going to deal with two dimensions here, tz is set to 0.
The red line in the image above goes from the center of the original location to the center of the new location. That's because transforms are done in relation to the anchor point and the anchor point by default is in the center of the layer.
Scale
The scale transform stretches or squishes the layer. The basic syntax is
CATransform3DMakeScale(_ sx: CGFloat, _ sy: CGFloat, _ sz: CGFloat)
where sx, sy, and sz are the numbers by which to scale (multiply) the x, y, and z coordinates respectively.
Example
If we wanted to halve the width and triple the height, we would do the following:
myLayer.transform = CATransform3DMakeScale(0.5, 3.0, 1.0)
Notes
Since we are only working in two dimensions, we just multiply the z coordinates by 1.0 to leave them unaffected.
The red dot in the image above represents the anchor point. Notice how the scaling is done in relation to the anchor point. That is, everything is either stretched toward or away from the anchor point.
Rotate
The rotation transform rotates the layer around the anchor point (the center of the layer by default). The basic syntax is
CATransform3DMakeRotation(_ angle: CGFloat, _ x: CGFloat, _ y: CGFloat, _ z: CGFloat)
where angle is the angle in radians that the layer should be rotated and x, y, and z are the axes about which to rotate. Setting an axis to 0 cancels a rotation around that particular axis.
Example
If we wanted to rotate a layer clockwise 30 degrees, we would do the following:
let degrees = 30.0
let radians = CGFloat(degrees * Double.pi / 180)
myLayer.transform = CATransform3DMakeRotation(radians, 0.0, 0.0, 1.0)
Notes
Since we are working in two dimensions, we only want the xy plane to be rotated around the z axis. Thus we set x and y to 0.0 and set z to 1.0.
This rotated the layer in a clockwise direction. We could have rotated counterclockwise by setting z to -1.0.
The red dot shows where the anchor point is. The rotation is done around the anchor point.
Multiple transforms
In order to combine multiple transforms, we could use concatenation like this:
CATransform3DConcat(_ a: CATransform3D, _ b: CATransform3D)
However, we will just do one after another. The first transform will use the Make in its name. The following transforms will not use Make, but they will take the previous transform as a parameter.
Example
This time we combine all three of the previous transforms.
let degrees = 30.0
let radians = CGFloat(degrees * Double.pi / 180)
// translate
var transform = CATransform3DMakeTranslation(90, 50, 0)
// rotate
transform = CATransform3DRotate(transform, radians, 0.0, 0.0, 1.0)
// scale
transform = CATransform3DScale(transform, 0.5, 3.0, 1.0)
// apply the transforms
myLayer.transform = transform
Notes
The order that the transforms are done in matters.
Everything was done in relation to the anchor point (red dot).
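For comparison, here is a hedged sketch of the same combination built with CATransform3DConcat, reusing the radians constant from the example above. It should yield the same result as the chained version, since CATransform3DConcat(a, b) produces a transform equivalent to applying a first and then b.

let translation = CATransform3DMakeTranslation(90, 50, 0)
let rotation = CATransform3DMakeRotation(radians, 0.0, 0.0, 1.0)
let scale = CATransform3DMakeScale(0.5, 3.0, 1.0)

// Concat(a, b) applies a first, then b, so this is scale, then rotate, then translate --
// the same effective order as the chained example above.
myLayer.transform = CATransform3DConcat(CATransform3DConcat(scale, rotation), translation)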
A Note about Anchor Point and Position
We did all our transforms above without changing the anchor point. Sometimes it is necessary to change it, though, like if you want to rotate around some other point besides the center. However, this can be a little tricky.
The anchor point and the position refer to the same place; the anchor point is expressed in the unit coordinate space of the layer (default is (0.5, 0.5)) and the position is expressed in the superlayer's coordinate system. They can be set like this:
myLayer.anchorPoint = CGPoint(x: 0.0, y: 1.0)
myLayer.position = CGPoint(x: 50, y: 50)
If you only set the anchor point without changing the position, then the frame changes so that the position will be in the right spot. Or more precisely, the frame is recalculated based on the new anchor point and old position. This usually gives unexpected results. The following two articles have an excellent discussion of this.
About the anchorPoint
Translate rotate translate?
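If you do need to move the anchor point without the layer visibly jumping, one common approach (sketched here, not taken from the linked articles) is to adjust the position by the same amount at the same time:

// Move the anchor point while keeping the layer visually in place:
// compute the anchor's offset in the layer's own coordinates and shift
// the position (superlayer coordinates) to compensate.
func setAnchorPoint(_ anchorPoint: CGPoint, for layer: CALayer) {
    let newOrigin = CGPoint(x: layer.bounds.width * anchorPoint.x,
                            y: layer.bounds.height * anchorPoint.y)
    let oldOrigin = CGPoint(x: layer.bounds.width * layer.anchorPoint.x,
                            y: layer.bounds.height * layer.anchorPoint.y)

    var position = layer.position
    position.x += newOrigin.x - oldOrigin.x
    position.y += newOrigin.y - oldOrigin.y

    layer.anchorPoint = anchorPoint
    layer.position = position
}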
See also
Border, rounded corners, and shadow on a CALayer
Using a border with a Bezier path for a layer

Scaling a UIView along one direction with UIPinchGestureRecognizer

In my app I have a draggable UIView to which I have added a UIPinchGestureRecognizer. Following this tutorial, by default the view is scaled along both the x and y directions. Is it possible to scale along only one direction using one finger? I mean, for example, I tap the UIView with both thumb and index finger as usual, but while I'm holding the thumb still I move only the index finger in one direction, and the UIView should scale along that dimension depending on the direction in which the index finger moves.
Actually, I have achieved this by adding pins to my UIView like the following:
I was just thinking my app might be better to use with a UIPinchGestureRecognizer.
I hope I have explained myself; if you have hints, tutorials, or documentation to link me to, I would be very grateful.
In CGAffineTransformScale(view.transform, recognizer.scale, recognizer.scale), update only one axis (x or y) with recognizer.scale and keep the other one at 1.0.
CGAffineTransformMakeScale(CGFloat sx, CGFloat sy)
In order to know which axis you should scale, you need to do some math to find the angle between the two fingers of the gesture, and then decide which axis to scale based on that angle.
Maybe something like
enum Axis {
    case X
    case Y
}

func axisFromPoints(_ p1: CGPoint, _ p2: CGPoint) -> Axis {
    let absolutePoint = CGPoint(x: p2.x - p1.x, y: p2.y - p1.y)
    // Angle measured from the vertical axis, so values near ±π/2 mean a mostly horizontal pinch.
    let radians = atan2(Double(absolutePoint.x), Double(absolutePoint.y))
    let absRad = abs(radians)
    if absRad > Double.pi / 4 && absRad < 3 * Double.pi / 4 {
        return .X
    } else {
        return .Y
    }
}
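A hedged sketch of a pinch handler built on axisFromPoints; handlePinch and its wiring are hypothetical, not from the tutorial.

// Scale along a single axis, chosen from the angle between the two touch points.
@objc func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
    guard let view = recognizer.view, recognizer.numberOfTouches >= 2 else { return }
    let p1 = recognizer.location(ofTouch: 0, in: view)
    let p2 = recognizer.location(ofTouch: 1, in: view)

    switch axisFromPoints(p1, p2) {
    case .X:
        view.transform = view.transform.scaledBy(x: recognizer.scale, y: 1.0)
    case .Y:
        view.transform = view.transform.scaledBy(x: 1.0, y: recognizer.scale)
    }
    // Reset so the scale is applied incrementally on each callback.
    recognizer.scale = 1.0
}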

UIViews with subviews: calculating position when scaling

I have a view that I draw using Core Graphics, which in this example is a segmented circle. The user can touch the circle to create a point along its circumference; this creates a subview on the UIView that contains the circle graphic.
Then I've implemented a pinch-zoom gesture which causes the circle to redraw to its new size. I've seen most implementations of pinch zoom use transform properties, but I've chosen to redraw because it's all vectors and gives a clean result.
My problem is repositioning the point views. I calculate the required position of those points based on the scale of the parent view: as it changes, I update the x/y coordinates of the point views. However, it seems there are some precision issues: as the circle shape grows, the points drift so they aren't right on the line anymore. Here are a couple of examples:
This is where the circle is at 100% scale. Note the perfect positioning of that black point. But when you zoom in...
The point drifts off-line.
And here's some code. I derive the new size of the circle from the pinch gesture's scale (I modify it a bit to constrain and slow it down for UI purposes, so that's deltaScale) and then draw it like so:
let currentSize = self.shape!.bounds.size
let newSize = CGSize(width: self.originalSize.width * deltaScale, height: self.originalSize.height * deltaScale)
self.shape?.frame.size = newSize
self.shape?.center = self.originalCentre!
self.shape?.shapeSize = newSize
self.shape?.setNeedsDisplay()
As the pinch-zoom gesture completes, I calculate the factor:
let xScale = Double(newSize.width) / Double(currentSize.width)
let yScale = Double(newSize.height) / Double(currentSize.height)
self.points = self.points.map { (thisPoint) -> UIView in
    thisPoint.center = CGPoint(x: Double(thisPoint.center.x) * xScale, y: Double(thisPoint.center.y) * yScale)
    return thisPoint
}
(I was using CGFloats, but switched to Doubles in the hope that it would give me the precision I needed. Alas.)
You're accumulating roundoff errors. This is getting executed repeatedly:
thisPoint.center = CGPoint(x: Double(thisPoint.center.x) * xScale, y: Double(thisPoint.center.y) * yScale)
Repeating any calculation of the form 'x=f(x)' with anything less than unlimited precision will result in drift.
The trick is not to have 'thisPoint.center' on both sides of the equals sign. The best way to do that is to have thisPoint.center be a pure function of some other state. A commenter suggested storing the desired angle; that would work well. Then you could do:
thisPoint.center = f(thisPoint.someRadians), where 'f' converts from polar to rectangular coordinates, factoring in the scale of the circle.
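A hedged sketch of that approach; PointView and angleInRadians are hypothetical names, not from the question.

import UIKit

// Hypothetical point view that remembers the angle at which it was placed.
class PointView: UIView {
    var angleInRadians: CGFloat = 0
}

// Recompute each point's centre as a pure function of its stored angle and the
// circle's current centre and radius, so no rounding error accumulates across zooms.
func reposition(_ points: [PointView], around centre: CGPoint, radius: CGFloat) {
    for point in points {
        point.center = CGPoint(x: centre.x + radius * cos(point.angleInRadians),
                               y: centre.y + radius * sin(point.angleInRadians))
    }
}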

Check if node is visible on the screen

I currently have a large map that goes off the screen; because of this, its coordinate system is very different from that of my other nodes. This has led me to a problem: I need to generate a random CGPoint within the bounds of this map, and then, if that point is on-screen, place a visible node there. However, the check on whether or not the node is on screen continuously fails.
I'm checking if the node is in frame with the following code: CGRectContainsPoint(self.frame, values) (with values being the random CGPoint I generated). Now this is where my problem comes in: the coordinate system of the frame is completely different from the coordinate system of the map.
For example, in the picture below, the ball with the arrows pointing to it is at coordinates (479, 402) in the scene's coordinates, but it is actually at (9691, 9753) in the map's coordinates.
For those who are wondering, I determined the coordinates using the touchesBegan event. So basically, how do I convert that map coordinate system to one that will work for the frame?
As seen below, the dot is obviously in the frame, yet CGRectContainsPoint always fails. I've tried scene.convertPoint(position, fromNode: map), but it didn't work.
Edit: (to clarify some things)
My view hierarchy looks something like this:
The map node goes off screen and is about 10,000 × 10,000 in size (I have it as a scrolling-type map). The origin (or 0,0) for this node is in the bottom-left corner, where the map starts, meaning the origin is off-screen. In the picture above, I'm near the top-right part of the map. I'm generating a random CGPoint with the following code (passing it the map's frame) as an extension to CGPoint:
static func randPoint(within: CGRect) -> CGPoint {
    var point = within.origin
    point.x += CGFloat(arc4random() % UInt32(within.size.width))
    point.y += CGFloat(arc4random() % UInt32(within.size.height))
    return point
}
I then have the following code (called in didMoveToView; note that I'm applying this to nodes I'm generating - I just left that code out), where values is the random position:
let values = CGPoint.randPoint(map.totalFrame)
if !CGRectContainsPoint(self.frame, convertPointToView(scene!.convertPoint(values, fromNode: map))) {
    color = UIColor.clearColor()
}
This makes nodes that are off screen invisible (since the user can scroll the map background). The check always passes as true, making all nodes invisible, even though some nodes are indeed within the frame (as seen in the picture above, where I commented out the clear-color code).
If I understand your question correctly, you have an SKScene that contains an SKSpriteNode that is larger than the scene's view, and that you are randomly generating coordinates within that sprite's coordinate system that you want to map to the view.
You're on the right track with SKNode's convertPoint(_:fromNode:) (where your scene is the SKNode and your map is the fromNode). That should get you from the generated map coordinate to the scene coordinate. Next, convert that coordinate to the view's coordinate system using your scene's convertPointToView(_:). The point will be out of bounds if it is not in view.
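A hedged sketch of that chain, written in the same Swift 2-era style as the question and assumed to run inside the scene; the key difference from the original check is comparing against the view's bounds (view coordinates) rather than the scene's frame.

// map is the large background node, values is a random point in its coordinate space.
let scenePoint = scene!.convertPoint(values, fromNode: map)    // map -> scene
let viewPoint = convertPointToView(scenePoint)                 // scene -> view
let isOnScreen = CGRectContainsPoint(view!.bounds, viewPoint)  // compare in view coordinates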
Using a worldNode that includes a playerNode, with the camera centred on the player, you can check whether an object is on or off screen with this code:
float left = player.position.x - 700;
float right = player.position.x + 700;
float up = player.position.y + 450;
float down = player.position.y - 450;

if ((object.position.x > left) && (object.position.x < right) && (object.position.y > down) && (object.position.y < up)) {
    if ((object.parent == nil) && (object.dead == false)) {
        [worldNode addChild:object];
    }
} else {
    if (object.parent != nil) {
        [object removeFromParent];
    }
}
The numbers I used above are static. You can also make them dynamic:
CGRect screenRect = [[UIScreen mainScreen] bounds];
CGFloat screenWidth = screenRect.size.width;
CGFloat screenHeight = screenRect.size.height;
Divide the screenWidth by 2 for left and right, and the screenHeight by 2 for up and down.
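A hedged Swift sketch of that dynamic variant; player, object, and worldNode are the nodes from the answer above, and the custom dead check is omitted for brevity.

// Derive the visible half-extents from the screen size instead of hard-coding 700 and 450.
let screenSize = UIScreen.main.bounds.size
let visible = CGRect(x: player.position.x - screenSize.width / 2,
                     y: player.position.y - screenSize.height / 2,
                     width: screenSize.width,
                     height: screenSize.height)

if visible.contains(object.position) {
    if object.parent == nil {
        worldNode.addChild(object)
    }
} else if object.parent != nil {
    object.removeFromParent()
}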

Resources