Transform a CGRect into Core Graphics coordinates - iOS

Is there a way to transform a CGRect with UIView system coordinates into Core Graphics coordinates, where the origin is in the lower-left corner?

Sure. You just subtract the rect's y-origin and height from the view's height:
rect.origin.y = view.frame.size.height-(rect.origin.y+rect.size.height)
You can represent this with a CGAffineTransform like so:
CGAffineTransformMakeTranslation(0, view.frame.size.height - ((rect.origin.y * 2.0) + rect.size.height))
You subtract the origin twice because the translation is a relative offset rather than an absolute position.
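A minimal Swift sketch of the same conversion (the function name is illustrative only, and height is assumed to be the containing view's bounds height):
import CoreGraphics

func flipped(_ rect: CGRect, inContainerOfHeight height: CGFloat) -> CGRect {
    // Flip between UIKit's top-left origin and Core Graphics' bottom-left
    // origin; the operation is its own inverse.
    var r = rect
    r.origin.y = height - (rect.origin.y + rect.size.height)
    return r
}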
However, if you only want to flip a context to work in UIView coordinates you'd want:
CGFloat ctxHeight = CGContextGetClipBoundingBox(c).size.height;
CGContextScaleCTM(c, 1, -1);
CGContextTranslateCTM(c, 0, -ctxHeight);

Related

Get real frame after affine transformation

The goal is to get the real frames within a PDF page in order to locate the searched string. I am using the PDFKitten library to highlight the searched text, and I am trying to figure out how to get the frames of the highlighted text. The core method is this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    CGContextSetFillColorWithColor(ctx, [[UIColor whiteColor] CGColor]);
    CGContextFillRect(ctx, layer.bounds);
    // Flip the coordinate system
    CGContextTranslateCTM(ctx, 0.0, layer.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    // Transform coordinate system to match PDF
    NSInteger rotationAngle = CGPDFPageGetRotationAngle(pdfPage);
    CGAffineTransform transform = CGPDFPageGetDrawingTransform(pdfPage, kCGPDFCropBox, layer.bounds, -rotationAngle, YES);
    CGContextConcatCTM(ctx, transform);
    CGContextDrawPDFPage(ctx, pdfPage);
    if (self.keyword)
    {
        CGContextSetFillColorWithColor(ctx, [[UIColor yellowColor] CGColor]);
        CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
        for (Selection *s in self.selections)
        {
            NSLog(@"layer.bounds = %f, %f, %f, %f", layer.bounds.origin.x, layer.bounds.origin.y, layer.bounds.size.width, layer.bounds.size.height);
            CGContextSaveGState(ctx);
            CGContextConcatCTM(ctx, s.transform);
            NSLog(@"s.frame = %f, %f, %f, %f", s.frame.origin.x, s.frame.origin.y, s.frame.size.width, s.frame.size.height);
            CGContextFillRect(ctx, s.frame);
            CGContextRestoreGState(ctx);
        }
    }
}
The size of the layer is (612.000000, 792.000000), but the size of s.frame is (3.110400, 1.107000). How can I get the real frame of the rect that is filled yellow?
As Matt says, a view/layer's frame property is not valid unless the transform is the identity transform.
If you want to transform some rectangle using a transform then the CGRect structure isn't useful, since a CGRect specifies an origin and a size, and assumes that the other 3 points of the rect are shifted right/down from the origin.
In order to create a transformed rectangle you need to build 4 points for the upper left, upper right, lower left, and lower right points of the untransformed frame rectangle, and then apply the transform to those points, before applying the transform to the view.
See the function CGPoint CGPointApplyAffineTransform(CGPoint point, CGAffineTransform t) to apply a CGAffineTransform to a point.
Once you've done that you could use the transformed points to build a bezier path containing a polygon that is your transformed rectangle. (It may or may not be a rectangle after transformation, and the only sure-fire way to represent it is as 4 points that describe a quadrilateral.)
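A rough Swift sketch of that idea, assuming the transform should be applied to the rect's corners as given (for a view you would first offset the corners relative to the center, since UIView applies its transform about the center):
import UIKit

func transformedQuad(of rect: CGRect, applying t: CGAffineTransform) -> UIBezierPath {
    // Transform each corner individually (the Swift equivalent of
    // CGPointApplyAffineTransform) and connect them into a quadrilateral.
    let corners = [
        CGPoint(x: rect.minX, y: rect.minY),   // upper left
        CGPoint(x: rect.maxX, y: rect.minY),   // upper right
        CGPoint(x: rect.maxX, y: rect.maxY),   // lower right
        CGPoint(x: rect.minX, y: rect.maxY),   // lower left
    ].map { $0.applying(t) }

    let path = UIBezierPath()
    path.move(to: corners[0])
    corners.dropFirst().forEach { path.addLine(to: $0) }
    path.close()
    return path
}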
Use the bounds property instead. You also have to use bounds when creating a custom layout; it is transform-free.
frame is the rectangle that defines the size and position of the layer in its superlayer's coordinate system. bounds is the rectangle that defines the size and position of the layer in its own coordinate system.
https://developer.apple.com/reference/quartzcore/calayer/1410915-bounds

CGRectApplyAffineTransform and actual view's frame

Here is the code:
-(void)viewDidAppear:(BOOL)animated{
    [super viewDidAppear:animated];
    UIView* view = [[UIView alloc] init];
    CGRect frame = CGRectMake(10, 10, 100, 100);
    view.frame = frame;
    [self.view addSubview:view];
    CGAffineTransform t1 = CGAffineTransformMakeTranslation(0, 100);
    CGAffineTransform t2 = CGAffineTransformMakeScale(.8, .8);
    CGAffineTransform t3 = CGAffineTransformConcat(t1, t2);
    view.transform = t3;
    CGRect rect = CGRectApplyAffineTransform(frame, t3);
    NSLog(@"transform rect:%@", NSStringFromCGRect(rect));
    NSLog(@"transform view rect:%@", NSStringFromCGRect(view.frame));
}
//output:
transform rect:{{8, 88}, {80, 80}}
transform view rect:{{20, 100}, {80, 80}}
The same rect with the same transform applied produces two different rects. Why is that?
There is a difference between applying an affine transform to a CGRect and to a UIView object.
Let's start with CGRectApplyAffineTransform and look at the description from the Apple docs:
Because affine transforms do not preserve rectangles in general, the
function CGRectApplyAffineTransform returns the smallest rectangle
that contains the transformed corner points of the rect parameter. If
the affine transform t consists solely of scaling and translation
operations, then the returned rectangle coincides with the rectangle
constructed from the four transformed corners.
In this case, the function applies the transform to each of the corner points and returns a CGRect that contains all of the transformed points:
t(10,10) ->(8,88)
t(10,110)->(8, 168)
t(110,10)->(88, 88)
t(110,110)->(88, 168)
The rect containing all these transformed points is correctly {{8, 88}, {80, 80}}
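For reference, a quick Swift sketch that reproduces these numbers with the same t1, t2 and t3 as the question:
import CoreGraphics

let t1 = CGAffineTransform(translationX: 0, y: 100)
let t2 = CGAffineTransform(scaleX: 0.8, y: 0.8)
let t3 = t1.concatenating(t2)                    // translate first, then scale

let frame = CGRect(x: 10, y: 10, width: 100, height: 100)
print(CGPoint(x: 10, y: 10).applying(t3))        // (8.0, 88.0)
print(frame.applying(t3))                        // (8.0, 88.0, 80.0, 80.0)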
Now let's look at the description of the transform property in the UIView documentation:
The origin of the transform is the value of the center property, or the layer’s anchorPoint property if it was changed. (Use the layer property to get the underlying Core Animation layer object.) The default value is CGAffineTransformIdentity.
Since you didn't change the layer's anchor point, the transform is applied from the center of the view.
The original center is (60,60). The transformed center is (60,140): the scaling is performed about the center, so it does not move it, and only the transform's translation component (100 points, scaled by 0.8 to 80) shifts it down.
You now have an (80,80) rect centered on the point (60,140): from that you can reconstruct your {{20, 100}, {80, 80}} rectangle.
The documentation for UIView says this for the frame property
Warning: If the transform property is not the identity transform,
the value of this property is undefined and therefore should be
ignored.
To put Michaël Azevedo's answer into code:
private func UIViewApplyAffineTransform(_ view: UIView, _ transform: CGAffineTransform) -> CGRect {
    // UIKit applies the transform about the view's center (the layer's anchor
    // point), so the center only moves by the transform's translation components.
    let transformedCenter = CGPoint(x: view.center.x + transform.tx,
                                    y: view.center.y + transform.ty)
    // CGSize.applying ignores the translation and only applies the scaling.
    let transformedSize = view.bounds.size.applying(transform)
    let transformedOrigin = CGPoint(x: transformedCenter.x - transformedSize.width / 2,
                                    y: transformedCenter.y - transformedSize.height / 2)
    return CGRect(origin: transformedOrigin, size: transformedSize)
}
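With the view and transform from the question this returns {{20, 100}, {80, 80}}, matching the frame UIKit reports after view.transform = t3 is set. Note that, like the documentation quoted above, it assumes the transform consists only of scaling and translation; a rotated view's bounding frame needs the four-corner approach from the previous answer.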

Place frame over UIImage not working

I have a UIView that needs to be placed over a UIImage inside of a UIImageView at specific coordinates. The coordinates for the frame are referenced from the top-left corner, with a width and height referenced from the original image.
So, to make the frame, I am first getting the CGRect of the image using a category from the following post: UIImage size in UIImageView
I then get a scale factor to shrink the size of the frame by taking the original height, dividing it by the scaled height, and then dividing all of my values by that.
Lastly, I take the image CGRect and add the scaled position values of the frame to get my final CGRect for the view. However, the frame is always up and to the right of the desired location. Can anyone see what I'm doing wrong?
Here's the code (new is just a custom object with the correct frame parameters):
CGRect imageBounds = [self.imageView displayedImageBounds];
float scaleFactor = AppDelegate.usedImage.size.height / imageBounds.size.height;
new.height /= scaleFactor;
new.width /= scaleFactor;
new.positionX /= scaleFactor;
new.positionY /= scaleFactor;
UIView *faceRectView = [[UIView alloc] init];
faceRectView.tag = idx;
faceRectView.backgroundColor = [UIColor whiteColor];
faceRectView.frame = CGRectMake((imageBounds.origin.x + new.positionX), (imageBounds.origin.y + new.positionY), new.width, new.height);
[self.view addSubview:faceRectView];
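For what it's worth, here is a minimal Swift sketch of that mapping, assuming the image view uses aspect-fit scaling; convertImageRect is a hypothetical helper, not part of the category linked above:
import UIKit

// Maps a rect given in the image's own pixel coordinates into the
// coordinate space of an aspect-fit UIImageView.
func convertImageRect(_ rectInImage: CGRect, in imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }
    // Aspect fit uses one scale factor for both axes.
    let scale = min(imageView.bounds.width / image.size.width,
                    imageView.bounds.height / image.size.height)
    // The displayed image is centered inside the image view.
    let displayedSize = CGSize(width: image.size.width * scale,
                               height: image.size.height * scale)
    let displayedOrigin = CGPoint(x: (imageView.bounds.width - displayedSize.width) / 2,
                                  y: (imageView.bounds.height - displayedSize.height) / 2)
    // Scale the rect, then offset it by the displayed image's origin.
    return CGRect(x: displayedOrigin.x + rectInImage.origin.x * scale,
                  y: displayedOrigin.y + rectInImage.origin.y * scale,
                  width: rectInImage.size.width * scale,
                  height: rectInImage.size.height * scale)
}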
CGPoint is a C structure that defines a point in a coordinate system. The origin of this coordinate system is at the top left on iOS and at the bottom left on OS X. In other words, the orientation of its vertical axis differs on iOS and OS X.
CGSize is another simple C structure that defines a width and a height value, and CGRect has an origin field, a CGPoint, and a size field, a CGSize. Together the origin and size fields define the position and size of a rectangle.
On iOS and OS X, an application has multiple coordinate systems. On iOS, for example, the application's window is positioned in the screen's coordinate system and every subview of the window is positioned in the window's coordinate system. In other words, the subviews of a view are always positioned in the view's coordinate system.
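A small Swift sketch of what that means in practice (the view hierarchy here is made up purely for illustration):
import UIKit

let window = UIWindow(frame: UIScreen.main.bounds)
let container = UIView(frame: CGRect(x: 20, y: 40, width: 200, height: 200))
let child = UIView(frame: CGRect(x: 10, y: 10, width: 50, height: 50))
window.addSubview(container)
container.addSubview(child)

// child.frame is expressed in container's coordinate system; convert(_:to:)
// re-expresses it in the window's coordinate system.
let frameInWindow = container.convert(child.frame, to: window)
// frameInWindow == (30.0, 50.0, 50.0, 50.0)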
Take this example of a frame and notice how it differs from the concept of bounds.
CGGeometry Reference is a collection of structures, constants, and functions that make it easier to work with coordinates and rectangles. You may have run into code snippets similar to this:
CGPoint point = CGPointMake(self.view.frame.origin.x + self.view.frame.size.width, self.view.frame.origin.y + self.view.frame.size.height);
Not only is this snippet hard to read, it's also quite verbose. We can rewrite this code snippet using two convenient functions defined in the CGGeometry Reference.
CGRect frame = self.view.frame;
CGPoint point = CGPointMake(CGRectGetMaxX(frame), CGRectGetMaxY(frame));
To simplify the above code snippet, we store the view's frame in a variable named frame and use CGRectGetMaxX and CGRectGetMaxY. The names of the functions are self-explanatory.
The CGGeometry Reference defines functions to return the smallest and largest values for the x- and y-coordinates of a rectangle as well as the x- and y-coordinates that lie at the rectangle's center. Two other convenient getter functions are CGRectGetWidth and CGRectGetHeight.
Finally to conclude, check out the implementation of CGRectMake.
CGRect CGRectMake(CGFloat x, CGFloat y, CGFloat width, CGFloat height)
{
    CGRect rect;
    rect.origin.x = x; rect.origin.y = y;
    rect.size.width = width; rect.size.height = height;
    return rect;
}
Can you try setting it like this:
faceRectView.frame = CGRectMake(0.0, 0.0, new.width, new.height);

iOS - Draw image with CGContext and transform

I am trying to draw an image on top of another image. I have the image's size, transform and origin. My code below produces the correct size and rotation angle, but not the correct position.
Code:
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect baseRect = CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height);
[backgroundImage drawInRect:baseRect];
CGRect newRect = CGRectMake(x, y, width, height);
CGContextTranslateCTM(context, x, y);
CGContextConcatCTM(context, watermarkImageView.transform);
CGContextTranslateCTM(context, -x, -y);
[watermarkImageView.image drawInRect:newRect];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
The watermark image should end up exactly where the watermark image view sits, but currently it is drawn offset from that position. What did I miss?
Thanks in advance
EDIT
The x,y values are the edge of the bounding box.
Your code doesn't show what watermarkImageView.transform is, and that is important, because when you concatenate transformations the effects of previous transformations also affect all the following transformations.
E.g. a translation that moves the object 10 pixels along the x-axis will move the object 10 pixels to the right. However, if you first have a rotation that rotates the object by 45 degrees and then add a translation that moves 10 pixels along the x-axis, the object will not move 10 pixels to the right, it will move 10 pixels along a line that is 45 degrees rotated, which means it will move about 7 pixels up and 7 pixels to the right. That's because a rotation does not really rotate the object itself, it actually rotates the whole coordinate system which causes the object to be drawn rotated.
Picture the translation's coordinate system as a set of axes drawn over the "real" coordinate system. Initially the two match, but after the rotation by 45 degrees the translation coordinate system has been rotated, and translating along its axes now moves the object diagonally.
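To put numbers on that, a standalone Swift sketch (independent of the watermark code) showing how the order of the same two transforms changes the result:
import CoreGraphics

let rotate = CGAffineTransform(rotationAngle: .pi / 4)      // 45 degrees
let translate = CGAffineTransform(translationX: 10, y: 0)

// Mirrors CGContextRotateCTM followed by CGContextTranslateCTM:
// the translation happens along the rotated axes.
let rotateThenTranslate = translate.concatenating(rotate)
print(CGPoint.zero.applying(rotateThenTranslate))           // ≈ (7.07, 7.07)

// Mirrors CGContextTranslateCTM followed by CGContextRotateCTM:
// the point really does move 10 points to the right.
let translateThenRotate = rotate.concatenating(translate)
print(CGPoint.zero.applying(translateThenRotate))           // (10.0, 0.0)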
Think about a sheet of paper and a stamp. The stamp always has the same position and the same orientation, you cannot move or rotate the stamp. But you can move and rotate the sheet of paper below the stamp! And that's what your transformations do. They transform the sheet before the stamp is pressed upon it.
For most people it is very hard to imagine the effects of transforming the whole space, it's much easier for them to think about transforming the object. The trick is: You must read your transformations in the opposite order than you wrote them. I guess what you want to do is actually:
CGContextTranslateCTM(context, x, y);
CGContextConcatCTM(context, watermarkImageView.transform);
CGContextTranslateCTM(context,
                      -watermarkImageView.bounds.size.width * 0.5,
                      -watermarkImageView.bounds.size.height * 0.5
);
Now read them in the opposite order (from bottom to top). First you center the watermark around (0,0) by moving it up half the height and left half the width. Now the center of your watermark is exactly at (0,0). Then you rotate it as desired. Finally you move it to the desired position. Of course you wrote all transformations the other way round but that's only because you are transforming the coordinate space, not the object.
Centering your watermark prior to rotation is important because rotation always rotates around the (0,0) coordinate. If you just rotate, the object swings around the origin, so it is not only rotated but also moved to a different position. If you center the image around (0,0) first, it rotates in place instead.
The answer to my question:
I had to translate the context to the centre of where I want to draw.
CGContextTranslateCTM(context, imageView.center.x, imageView.center.y);
Then rotate the context.
CGFloat angle = [(NSNumber *)[imageView valueForKeyPath:@"layer.transform.rotation.z"] floatValue];
CGContextRotateCTM(context, angle);
Then draw
[imageView.image drawInRect:CGRectMake(-width * 0.5f, -height * 0.5f, width, height)];

cgcontext rotate rectangle

guys!
I need to draw an image into a CGContext. This is the relevant code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGRect rect = r;
CGContextRotateCTM(ctx, DEGREES_TO_RADIANS(350));
[image drawInRect:rect];
CGContextRestoreGState(ctx);
As it stands, the rectangle is rotated but displayed in a different area, which is not what I want. I just want to rotate the image and display it in the same position.
Any ideas?
Rotation is about the context's origin, which is the same point that rectangles are relative to. If you imagine a sheet of graph paper in the background, you can see what's going on more clearly:
The line at y = 0 is the "bottom" of your window/view/layer/context. Of course, you can draw below the bottom if you want, and if your context is transformed the right way, you might even be able to see it.
Anyway, I'm assuming that what you want to do is rotate the rectangle in place, relative to an unrotated world, not rotate the world and everything in it.
The only way to rotate anything is to rotate the world, so that's how you need to do it:
Save the graphics state.
Translate the origin to the point where you want to draw the rectangle. (You probably want to translate to its center point, not the rectangle's origin.)
Rotate the context.
Draw the rectangle centered on the origin. In other words, your rectangle's origin point should be negative half its width and negative half its height (i.e., (CGPoint){ width / -2.0, height / -2.0 })—don't use the origin it had before, because you already used that in the translate step.
Restore the gstate so that future drawing isn't rotated. (A sketch of these steps follows below.)
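A rough Swift sketch of those steps, assuming ctx is the current UIKit graphics context (for example inside drawRect:) and that the destination rect and rotation angle are already known:
import UIKit

func draw(_ image: UIImage, in rect: CGRect, rotatedBy angle: CGFloat, context ctx: CGContext) {
    ctx.saveGState()                              // 1. save the graphics state
    ctx.translateBy(x: rect.midX, y: rect.midY)   // 2. move the origin to the rect's center
    ctx.rotate(by: angle)                         // 3. rotate the context
    image.draw(in: CGRect(x: -rect.width / 2,     // 4. draw centered on the new origin
                          y: -rect.height / 2,
                          width: rect.width,
                          height: rect.height))
    ctx.restoreGState()                           // 5. restore so later drawing is unrotated
}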
What worked for me was to first use a rotation matrix to calculate the amount of translation required to keep your image centered. Below I assume you've already calculated centerX and centerY to be the center of your drawing frame and 'theta' is your desired rotation angle in radians.
let newX = centerX*cos(theta) - centerY*sin(theta)
let newY = centerX*sin(theta) + centerY*cos(theta)
CGContextTranslateCTM(context,newX,newY)
CGContextRotateCTM(context,theta)
<redraw your image here>
Worked just fine for me. Hope it helps.
Use the following code to rotate your image.
// convert degrees to Radians
CGFloat DegreesToRadians(CGFloat degrees)
{
return degrees * M_PI / 180;
};
Write it in the drawRect: method:
// create new context
CGContextRef context = UIGraphicsGetCurrentContext();
// define rotation angle
CGContextRotateCTM(context, DegreesToRadians(45));
// get your UIImage
UIImage *img = [UIImage imageNamed:@"yourImageName"];
// Draw your image at rect
CGContextDrawImage(context, CGRectMake(100, 0, 100, 100), [img CGImage]);
// get the resulting image from the current image context
UIGraphicsGetImageFromCurrentImageContext();
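One caveat: CGContextDrawImage draws in Core Graphics' bottom-left-origin coordinate system, so in a UIKit context the image comes out vertically flipped unless you flip the CTM first (see the scale/translate snippet near the top of this page). Drawing with UIImage's drawInRect: avoids this because UIKit performs the flip for you.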
