Transforming iOS device coordinates to user coordinates outside drawRect

I'm using Core Graphics in my UIView to draw a graph, and I want to be able to interact with the graph using touch input. Since touches are received in device coordinates, I need to transform them into user coordinates in order to relate them to the graph, but that has become an obstacle since CGContextConvertPointToUserSpace doesn't work outside of a drawing context.
Here's what I've tried.
In drawRect:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextScaleCTM(ctx, ...);
CGContextTranslateCTM(ctx, ...); // transform so the graph fits the view nicely
self.ctm = CGContextGetCTM(ctx); // save for later
// draw points using user coordinates
In my touch event handler:
CGPoint touchDevice = [gesture locationInView:self]; // touch point in device coords
CGPoint touchUser = CGPointApplyAffineTransform(touchDevice, self.ctm); // doesn't give me what I want
// CGContextConvertPointToUserSpace(touchDevice) <- what I want, but doesn't work here
Using the inverse of ctm doesn't work either. I'll admit I'm having trouble getting my head around the meaning and relationships between device coordinates, user coordinates, and the transformation matrix. I think it's not as simple as I want it to be.
EDIT: Some background from Apple's documentation (iOS Coordinate Systems and Drawing Model).
"A window is positioned and sized in screen coordinates, which are defined by the coordinate system for the display."
"Drawing commands make reference to a fixed-scale drawing space, known as the user coordinate space. The operating system maps coordinate units in this drawing space onto the actual pixels of the corresponding target device."
"You can change a view’s default coordinate system by modifying the current transformation matrix (CTM). The CTM maps points in a view’s coordinate system to points on the device’s screen."

I discovered that the CTM already included a transformation mapping view coordinates (origin at the top left) to screen coordinates (origin at the bottom left). So (0,0) was transformed to (0,800), where the height of my view was 800, and (0,2) mapped to (0,798), etc. So I gather there are three coordinate systems in play: screen coordinates, view/device coordinates, and user coordinates. (Please correct me if I'm wrong.)
The CGContext transform (CTM) maps from user coordinates all the way to screen coordinates. My solution was to maintain my own transform separately, mapping from user coordinates to view coordinates. I could then invert it to get from view coordinates back to user coordinates.
My Solution:
In drawRect:
CGAffineTransform scale = CGAffineTransformMakeScale(...);
CGAffineTransform translate = CGAffineTransformMakeTranslation(...);
self.myTransform = CGAffineTransformConcat(translate, scale);
// draw points using user coordinates
In my touch event handler:
CGPoint touch = [gesture locationInView:self]; // touch point in view coords
CGPoint touchUser = CGPointApplyAffineTransform(touch, CGAffineTransformInvert(self.myTransform)); // this does the trick
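For concreteness, here is a minimal end-to-end sketch of this approach inside a UIView subclass with a myTransform property; the scale and translation values are assumptions standing in for whatever fits your graph to the view:
// Assumes: @property (nonatomic) CGAffineTransform myTransform;
// and graph data spanning 0..100 in both axes, drawn into this view.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGAffineTransform scale = CGAffineTransformMakeScale(self.bounds.size.width / 100.0,
                                                         self.bounds.size.height / 100.0);
    CGAffineTransform translate = CGAffineTransformMakeTranslation(0.0, 0.0); // assumed offset
    self.myTransform = CGAffineTransformConcat(translate, scale); // translate applies first
    CGContextConcatCTM(ctx, self.myTransform);
    // ... draw points using user coordinates ...
}

- (void)handleGesture:(UIGestureRecognizer *)gesture
{
    CGPoint touch = [gesture locationInView:self]; // view coords
    CGPoint touchUser = CGPointApplyAffineTransform(touch,
        CGAffineTransformInvert(self.myTransform)); // back to user coords
    // touchUser is now in the same space as the graph data
}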
Alternate Solution:
Another approach is to manually set up an identical context, but I think this is more of a hack.
In my touch event handler:
#import <QuartzCore/QuartzCore.h>
CGPoint touch = [gesture locationInView:self]; // view coords
CGSize layerSize = [self.layer frame].size;
UIGraphicsBeginImageContext(layerSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// as in drawRect:
CGContextScaleCTM(context, ...);
CGContextTranslateCTM(context, ...);
CGPoint touchUser = CGContextConvertPointToUserSpace(context, touch); // now it gives me what I want
UIGraphicsEndImageContext();

Related

Detect touch event in CALayer's coordinate system

I have a view controller where a lot of pictures fall randomly, animated with CAKeyframeAnimation, and I need to crop those images along the user's touch track. Each animation uses a CALayer to present the animated image, and I am trying to detect touch events inside that layer using [layer presentationLayer].
The problem: to crop an image I need to create paths from my touch-track segment and the layer. I haven't figured out how to create those paths yet, but the question here is how to detect the touch point in the falling CALayer's coordinate system; the attached picture is more informative.
Any ideas?
To detect the touch point in a layer, relative to the controller's coordinate system, I am using this code:
- (void)touchesMoved:(NSSet *)touches :(CGPoint)movingPoint :(UIEvent *)event
{
    NSArray *layers = [[contextView layer] sublayers];
    for (CALayer *layer in layers) {
        CGRect imageRect = [[layer presentationLayer] frame];
        if (CGRectContainsPoint(imageRect, movingPoint)) {
            NSLog(@"Image position - x %f y %f", movingPoint.x, movingPoint.y);
        }
    }
}
As you likely know, the point you receive is in the view's coordinate system, which should generally be identical to the view's main layer's coordinate system. (If not, there are still ways to convert it, but unless you've done something weird, it's easier just to rely on the fact that these are the same.)
It's also important to know that once you've started rotating something, its frame is undefined. If you think a little bit about how frames work, it should be obvious why this has to be the case (you can't define a diamond using an unrotated rectangle).
We can easily convert from one system to the other using convertPoint:fromLayer:. There is no touchesMoved:movingPoint: method in iOS, so I'm assuming this is some custom method where you've already worked out the point in your own coordinate system. So you'd want something like:
CGPoint pointInLayer = [[layer presentationLayer] convertPoint:movingPoint fromLayer:self.view.layer];
CGRect layerBounds = [[layer presentationLayer] bounds];
if (CGRectContainsPoint(layerBounds, pointInLayer)) {
    // Intersect!
}
The bounds are always defined, since they're always in a layer's own coordinate system. So we convert the point into the layer's coordinate system and ask if this point exists in its bounds.
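Putting it together with the loop from the question, a sketch of the full hit test might look like the following. It assumes contextView from the question and uses the standard touchesMoved:withEvent: callback instead of the custom signature:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint movingPoint = [[touches anyObject] locationInView:contextView];
    for (CALayer *layer in [[contextView layer] sublayers]) {
        CALayer *presentation = [layer presentationLayer];
        CGPoint pointInLayer = [presentation convertPoint:movingPoint
                                                fromLayer:[contextView layer]];
        // bounds is always defined in the layer's own coordinate system,
        // even mid-animation, so this test is safe while layers are falling.
        if (CGRectContainsPoint([presentation bounds], pointInLayer)) {
            NSLog(@"Hit layer %@ at %@", layer, NSStringFromCGPoint(pointInLayer));
        }
    }
}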

New center point for transformed UIView with new frame

I've been working on an iOS app which should display indoor blueprints. You should be able to switch between floors, and each floor image is controlled by gesture recognisers to handle pan, rotate and scale.
I have been using this example for the gesture recognisers: https://github.com/GreenvilleCocoa/UIGestures/blob/master/UIGestures/RPSimultaneousViewController.m
So now to the problem. Whenever the user switches floors I want to keep the transformation of the image as well as the corresponding center lat/lng. However, the new image can have a different rotation offset and aspect ratio.
I have been able to update the frame of the image with the new size, and to update the transform with the new rotation offset, and I have verified both. It is when I try to calculate the new center point that I cannot get it to work. The following code is how I currently do it, and it works as long as the view is not rotated:
- (void)changeFromFloor:(int)oldFloorNr toFloor:(int)newFloorNr
{
    CGPoint centerPoint = CGPointMake(self.frame.size.width / 2, self.frame.size.height / 2);
    // This is the old non-transformed center point.
    CGPoint oldCenterOnImage = [self.layer convertPoint:centerPoint toLayer:self.mapOverlayView.layer]; // actual non-transformed point
    // This point is verified to be the corresponding non-transformed center point
    CGPoint newCenterOnImage = [self calculateNewCenterFor:oldCenterOnImage fromFloor:oldFloorNr toFloor:newFloorNr];
    // Change image: sets a new image and changes the frame of mapOverlayView
    [self changeImageFromFloor:oldFloorNr toFloor:newFloorNr];
    // Adjust transformed rotation on map if the new map has a different rotation
    [self adjustRotationFromFloorNr:oldFloorNr toFloorNr:newFloorNr];
    CGPoint centerOfMapOverlay = CGPointMake((self.mapOverlayView.frame.size.width / 2), (self.mapOverlayView.frame.size.height / 2));
    CGPoint newCenterOnImageTransformed = CGPointApplyAffineTransform(newCenterOnImage, self.mapOverlayView.transform);
    CGFloat newCenterX = centerPoint.x + centerOfMapOverlay.x - newCenterOnImageTransformed.x;
    CGFloat newCenterY = centerPoint.y + centerOfMapOverlay.y - newCenterOnImageTransformed.y;
    // This only works without any rotation
    self.mapOverlayView.center = CGPointMake(newCenterX, newCenterY);
}
Any idea where I go wrong? I have been working on this problem for some days now and I cannot seem to figure it out.
Please let me know if I need to add something or if something is unclear.
Thanks!
Code added after help was given:
CGPoint centerOfMapOverlay = CGPointMake(
    self.mapOverlayView.bounds.size.width / 2,
    self.mapOverlayView.bounds.size.height / 2
);
centerOfMapOverlay = CGPointApplyAffineTransform(
    centerOfMapOverlay,
    self.mapOverlayView.transform
);
If you change a view's transform, then its frame property becomes undefined. You should instead use the center property to change the view's position, and bounds.size to change its size.
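A minimal sketch of that, using the mapOverlayView from the question; newSize, newCenterX and newCenterY are assumed to come from your own calculations above:
// Size and place a transformed view without reading or writing frame.
self.mapOverlayView.bounds = CGRectMake(0, 0, newSize.width, newSize.height);
self.mapOverlayView.center = CGPointMake(newCenterX, newCenterY);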

How do I rotate an image inside a drawn polygon

I am a beginner programmer and this is my first app (I am still learning). I have overlaid a polygon onto a map view and set its fill color to an image, because I'm trying to match an image to a satellite picture. I want to rotate it so that the polygon's contents match the map. Is it possible to rotate the image? If not, is there an easier way to overlay an image onto a map view that I could use?
Here is my code:
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id<MKOverlay>)overlay {
    MKPolygonView *polyView = [[MKPolygonView alloc] initWithOverlay:overlay];
    polyView.strokeColor = [UIColor whiteColor];
    polyView.fillColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"Campus-map labels.jpg"]];
    return polyView;
}
Here's what I'm trying to do, if it helps:
http://i.stack.imgur.com/x53HU.jpg
The road which is circled in red should match up. I know that the polygon isn't in the right position -- this is to illustrate how the polygon needs to be rotated.
You can modify the transform property of the polyView object. For example:
polyView.transform = CGAffineTransformMakeRotation(M_PI_4);
will rotate the polygon by pi/4 radians (45 degrees), in a clockwise direction.
You might need to change the polygon's center property to get the effect you want. The center property determines the center of rotation around which the transform rotation takes place.
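For example (the center value here is purely hypothetical; you would compute it from your map geometry):
polyView.transform = CGAffineTransformMakeRotation(M_PI_4);
// Reposition after rotating so the rotated content lines up where you want.
polyView.center = CGPointMake(160.0, 240.0); // assumed point in the superview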

iOS Convert TouchBegan coordinates to OpenGL ES Coordinates

New to OpenGL ES here.
I'm using the following code to detect where I tapped in a GLKView (OpenGL ES 2.0). I would like to know if I touched my OpenGL drawn objects. It's all 2D.
How do I convert the coordinates I am getting to OpenGL ES 2.0 coordinates, which are seemingly -1.0 to 1.0 based? Are there already built in functions to do so?
Thanks.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self.view bounds];
    UITouch *touch = [[event touchesForView:self.view] anyObject];
    CGPoint location = [touch locationInView:self.view];
    NSLog(@"x: %f y: %f", location.x, location.y);
}
-1 to 1 is clipping space. If your coordinate space is still clipping space when it displays on the screen, I'd say you forgot to convert the spaces using a projection matrix. If you're using GLKBaseEffect (which I don't recommend later down the road, since it tends to leak memory everywhere), you need to set <baseEffect>.transform.projectionMatrix to a matrix that will convert the space correctly. For example,
GLKBaseEffect *effect = [[GLKBaseEffect alloc] init];
GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, <width>, 0, <height>, 0.0f, 1.0f);
effect.transform.projectionMatrix = projectionMatrix;
width and height would be the width and height of the device's screen/your GLKView/etc. This is automatically applied to the coordinates you pass in so that you can use normal coordinates ranging from 0 to <width> on the x axis and 0 to <height> on the y axis, with the origin in the lower left corner of the screen.
If you are using custom shaders like I am then you can pass in the projection matrix as a uniform using:
glUniformMatrix4fv(shaderLocations.projectionMatrix, 1, 0, projection.m);
where projection is the matrix and shaderLocations.projectionMatrix is the location of the uniform. You then need to multiply your position by the projection matrix in the vertex shader.
Once you've converted away from clipping space, either by passing in the matrix manually or by setting the correct property on GLKBaseEffect, the only difference between OpenGL space and UIKit space is that the y axis is flipped. I convert touches I receive through the touches methods and gesture recognizers like this:
CGPoint openGLTouch = CGPointMake(touch.x, self.view.bounds.size.height - touch.y);
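And if you want the clip-space (-1 to 1) coordinates directly, a minimal sketch, assuming the GLKView fills self.view and touch is the point in view coordinates as above:
CGRect bounds = self.view.bounds;
CGPoint clip = CGPointMake((touch.x / bounds.size.width) * 2.0f - 1.0f,
                           1.0f - (touch.y / bounds.size.height) * 2.0f);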
I'll try my best to clarify if you have any questions but keep in mind I'm relatively new to OpenGL myself. :)

How to Rotate CAShapeLayer containing UIBezierPath? [duplicate]

Possible Duplicate: Rotate CGPath without changing its position
I searched and tested a variety of code for a couple of hours and I can't get this to work.
I am adding an arbitrary UIBezierPath at a random location to a CAShapeLayer, which gets added to a view. I need to rotate the path so that I can handle device rotations. Rotating the layer instead of the path would also be fine; I just need the result to be rotated.
I already have methods to handle transforming the bezier path by scaling and translation. It works great, but now I need to simply rotate 90 degrees left or right.
Any recommendations on how to do this?
Basic code:
UIBezierPath *path = <create arbitrary path>
CAShapeLayer *layer = [CAShapeLayer layer];
[self addPathToLayer:layer fromPath:path];
// I could get the center of the box, but where is the box center within the view it is in?
// CGRect box = CGPathGetPathBoundingBox(path.CGPath);
// layer.anchorPoint = ? How to find the center of the box for the anchor point?
// Rotating here appears to rotate around (0,0) of the view
layer.transform = CATransform3DMakeRotation(DegreesToRadians(-90), 0.0, 0.0, 1.0);
I see the following post:
BezierPath Rotation in a UIView
I suppose I could rotate as-is and then translate the path back into place. I just need to figure out what the translation values would be.
I should also state that what I am seeing after I try to rotate is that the image moves off-screen somewhere. I tried rotating 25 degrees to see the movement, and it pivots around the view's origin (0,0), so that if I rotate 90 degrees the image ends up off-screen. I am running these tests WITHOUT rotating the device - just to see how rotation works.
UPDATE #1 - 12/4/2012: For some bizarre reason, if I set the position to a value I found empirically, it moves the rotated bezier path into the correct position after rotation:
layer.position = CGPointMake(280, 60);
These values are a guess from starting/stopping the app and making adjustments. I have no idea why I need to adjust the position on rotation. The anchor point should be in the center of the layer. However, I did find that both the frame and position of a CAShapeLayer are all zero even though the path is set, and even though the path is in the correct position within the view. The (280, 60) position shifts the path into what would be the center of the path's bounding box when a rotation of +90 is made. If I change the rotation value I need to adjust the position. I should not have to make this adjustment manually.
I think a last resort is to somehow convert the bezier path to an image and then add it. I found that if I set the layer content to an image, then rotate, it rotates about its center point with no positional adjustment needed. Not so with setting the path.
UPDATE #2 12/4/2012 - I tried setting the frame and, with some fiddling, got it to center as follows:
CGRect box = CGPathGetPathBoundingBox(path.CGPath);
CGRect rect = CGRectMake(0, 0, box.origin.x + (3.5 * box.size.width), box.origin.y + (3.5 * box.size.height));
layer.frame = rect;
layer.transform = CATransform3DMakeRotation(DegreesToRadians(90), 0.0, 0.0, 1.0);
Why multiply by 3.5? I have no clue. I found that adding the box origin to about 3.5 times the size of the box shifts the rotated CAShapeLayer path to about where it should be.
There must be a better way to do this. This is a better solution than my previous one, since the frame size does not depend on the rotation angle. I just don't know why the frame needs to be set to the value I am setting it to. I THOUGHT it should be:
CGRectMake(0, 0, box.origin.x + (box.size.width / 2), box.origin.y + (box.size.height / 2));
However, it shifts the image to the left too much.
Another clue I found is that if I set the frame to [self view].frame (the frame of the entire parent view, i.e. the screen of the iPhone) and then rotate, the rotation point is the center of the screen, and the path/image orbits around this center point. This is why I tried shifting the frame to what the center of the path should be, so that it orbits around the box center.
UPDATE #3 12/4/2012 - I tried to render the layer as an image. However, it appears that just setting the path of a layer does not make it an "image" in the layer, since the result is empty:
CGRect box = CGPathGetPathBoundingBox(path.CGPath);
layer.frame = box;
UIImage *image = [ImageHelper imageFromLayer:layer]; // ImageHelper library I created
CAShapeLayer *newLayer = [CAShapeLayer layer];
newLayer.frame = CGRectMake(box.origin.x, box.origin.y, image.size.width, image.size.height);
newLayer.contents = (id) image.CGImage;
It appears that rotating the layer with its path set is no different than simply rotating the bezier path itself. I will go back to rotating the bezier path and see if I can fiddle with the position elements or something. There's got to be a solution to this.
Goal: Rotate a UIBezierPath around its center point within the view it was originally created in.
UPDATE #4 12/4/2012 - I ran a series of tests measuring the values needed for translation in order to place a UIBezierPath in its previous center location.
CGAffineTransform rotate = CGAffineTransformMakeRotation(DegreesToRadians(-15));
[path applyTransform:rotate];
// CGAffineTransform translate = CGAffineTransformMakeTranslation(-110, 70); // -45
CGAffineTransform translate = CGAffineTransformMakeTranslation(-52, -58); // -15
[path applyTransform:translate];
However, the ratios of the x/y translations do not correspond, so I cannot extrapolate the required translation from the angle. It appears that CGAffineTransformMakeRotation uses some arbitrary anchor point for the rotation, which at the moment appears to be maybe (viewWidth / 2, 0). I am making this much harder than it needs to be. There's something I am missing that would make a simple rotation maintain the center point. I just need to "spin" the path 90 degrees left or right.
UPDATE #5 12/4/2012 - After running additional tests, it appears that the anchor point for rotating a UIBezierPath is the origin from which all of the points were drawn. In this case the origin is (0,0) and all of the points are relative to it. Therefore, if a rotation is applied, it occurs around that origin, which is why the path shifts up-right on -90 and up-left on 90. I need to somehow set the anchor point for the rotation to the center so the path "spins" around the center rather than around the original origin point. Twelve hours spent on this one issue.
After some detailed analysis and graphing the bounding box on paper, I found that my assertion about the (0,0) origin was correct.
The solution to this problem is to translate the path (the underlying matrix) so that the center of its bounding box sits at the origin, rotate, and then translate the path back to its original location.
Here's how to rotate a UIBezierPath 90 degrees:
CGRect box = CGPathGetPathBoundingBox(path.CGPath);
CGAffineTransform translate = CGAffineTransformMakeTranslation(-(box.origin.x + (box.size.width / 2)),
                                                               -(box.origin.y + (box.size.height / 2)));
[path applyTransform:translate];
CGAffineTransform rotate = CGAffineTransformMakeRotation(DegreesToRadians(90));
[path applyTransform:rotate];
translate = CGAffineTransformMakeTranslation(box.origin.x + (box.size.width / 2),
                                             box.origin.y + (box.size.height / 2));
[path applyTransform:translate];
Plug in -90 degrees to rotate in the other direction.
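The same three steps can also be composed into a single transform (a sketch using the same box and the DegreesToRadians helper from above; note that the calls read in reverse of the order they apply to a point):
CGPoint center = CGPointMake(CGRectGetMidX(box), CGRectGetMidY(box));
CGAffineTransform t = CGAffineTransformIdentity;
t = CGAffineTransformTranslate(t, center.x, center.y);    // applied third: move back
t = CGAffineTransformRotate(t, DegreesToRadians(90));     // applied second: rotate about origin
t = CGAffineTransformTranslate(t, -center.x, -center.y);  // applied first: move box center to origin
[path applyTransform:t];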
This formula can be used when rotating the device from portrait to landscape and vice versa.
I still don't think this is the ideal solution but the result is what I need for now.
If anyone has a better solution for this please post.
UPDATE 12/7/2012 - I found what I think is the best solution, and it is very simple, as I thought it would be. Rather than using the rotate, translate, and scale methods on the bezier path, I instead extract the path's points as CGPoint values and scale/translate them as needed based on the view size and the orientation. I then create a new bezier path from the transformed points and set the layer's path to it.
The result is perfect scaling, translation, rotation.
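I didn't post my final code, but a minimal sketch of the point-extraction idea looks like this: walk the existing path with CGPathApply and rebuild it with whatever transform your view size and orientation call for (the rotate-about-center transform from above, a scale, etc.). The helper names here are my own, hypothetical ones:
#import <UIKit/UIKit.h>

typedef struct {
    CGMutablePathRef newPath;
    CGAffineTransform transform;
} PathRebuildInfo;

// Applier: copy each element into the new path with the transform applied.
static void RebuildElement(void *info, const CGPathElement *element) {
    PathRebuildInfo *ctx = (PathRebuildInfo *)info;
    const CGPoint *p = element->points;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
            CGPathMoveToPoint(ctx->newPath, &ctx->transform, p[0].x, p[0].y);
            break;
        case kCGPathElementAddLineToPoint:
            CGPathAddLineToPoint(ctx->newPath, &ctx->transform, p[0].x, p[0].y);
            break;
        case kCGPathElementAddQuadCurveToPoint:
            CGPathAddQuadCurveToPoint(ctx->newPath, &ctx->transform,
                                      p[0].x, p[0].y, p[1].x, p[1].y);
            break;
        case kCGPathElementAddCurveToPoint:
            CGPathAddCurveToPoint(ctx->newPath, &ctx->transform,
                                  p[0].x, p[0].y, p[1].x, p[1].y, p[2].x, p[2].y);
            break;
        case kCGPathElementCloseSubpath:
            CGPathCloseSubpath(ctx->newPath);
            break;
    }
}

// Build a new UIBezierPath from path with transform applied to every point.
UIBezierPath *TransformedPath(UIBezierPath *path, CGAffineTransform transform) {
    PathRebuildInfo ctx = { CGPathCreateMutable(), transform };
    CGPathApply(path.CGPath, &ctx, RebuildElement);
    UIBezierPath *result = [UIBezierPath bezierPathWithCGPath:ctx.newPath];
    CGPathRelease(ctx.newPath);
    return result;
}

Then layer.path = TransformedPath(path, t).CGPath; replaces the old path.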
