I have a question about scaling a frame to a specific size.
I have a CGRect and would like to resize it to a specific CGSize.
I would also like to move the center of the CGRect in proportion to the rescale value.
If by chance you're modifying a UIView, you could take this approach:
CGPoint previousCenter = view.center;
// Set width and height here:
view.frame = CGRectMake(view.frame.origin.x,view.frame.origin.y, width, height);
view.center = previousCenter;
That'll maintain the center point while changing the size of the view.
I am a little confused by your question, so I may not be answering it correctly. If you are using a CGAffineTransform to scale, as your tags suggest, that's another matter entirely.
You can use CGRectInset(rect, x, y). This pushes the origin in by x and y and shrinks the size by 2*x and 2*y. (https://developer.apple.com/library/ios/documentation/graphicsimaging/reference/CGGeometry/Reference/reference.html#//apple_ref/c/func/CGRectInset)
CGSize targetSize = CGSizeMake(100.0f, 100.0f);
CGRect rect = CGRectMake(50.0f, 50.0f, 200.0f, 200.0f);
rect = CGRectInset(rect, roundf((rect.size.width - targetSize.width) / 2.0f), roundf((rect.size.height - targetSize.height) / 2.0f));
Edit: Note that I'm using the difference between the two sizes, halved. My justification here is that CGRectInset will affect the entire rect. What I mean by that is...
CGRect rect = CGRectMake(0, 0, 10, 10);
rect = CGRectInset(rect, 2, 2);
rect is now {2, 2, 6, 6}: the origin moved in by (2, 2) and each dimension shrank by 4.
Related
I can't get a CGAffineTransform transformation to expand a view in a table cell, and I can't understand why.
Can anyone see what is wrong? Here is my code:
UIView* resizedView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, CGRectGetMaxX(view.frame), CGRectGetMaxY(view.frame))];
[resizedView addSubview:view];
[cell addSubview:view];
CGSize sizeDifference = CGSizeMake(resizedView.frame.size.width - view.frame.size.width, view.frame.size.height - view.frame.size.height);
NSLog(@"width = %f, height = %f", sizeDifference.width, sizeDifference.height);
CGSize transformRatio = CGSizeMake(resizedView.frame.size.width / view.frame.size.width, view.frame.size.height / view.frame.size.height);
// Original transform
CGAffineTransform transform = CGAffineTransformIdentity;
// Scale custom view so image will fill entire cell
transform = CGAffineTransformScale(transform, transformRatio.width, transformRatio.height);
// Move custom view so the old view's top left aligns with the cell's top left
transform = CGAffineTransformTranslate(transform, -sizeDifference.width / 2.0, -sizeDifference.height / 2.0);
[resizedView setTransform:transform];
The view remains exactly the same. Would appreciate any suggestions.
I wanted to do something with Quartz 2D which I thought would be simple, but it turns out not to be. :-(
What I am trying to do is the following: I want to rotate the drawing area by 90 degrees, so that anything I draw is rotated 90 degrees as well. The rotation works OK, but the rectangle I draw starts off screen, and it does not cover the whole height; it is only as tall as the view is wide (320 pixels), see screenshot.
Here's my code (inside drawRect):
CGContextRef ctx = UIGraphicsGetCurrentContext(); //get the graphics context
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGContextSetRGBStrokeColor(ctx, 0.0, 0.0, 0.0, 1);
CGContextSetLineWidth(ctx, 1.0) ;
float width = rect.size.width ;
float height = rect.size.height ;
CGContextTranslateCTM(ctx, rect.origin.x + width / 2, rect.origin.y + height / 2 ) ; // make rotation point the middle
CGContextRotateCTM(ctx, 1.57079633) ; // 90 degrees
CGContextTranslateCTM(ctx, - height / 2, - width / 2) ; // move x / y back to where they belong
CGRect myRect = CGRectMake(0, 0, height, width) ;
CGContextFillRect(ctx, myRect) ;
The result is as follows:
What am I missing here?
CGRect myRect = CGRectMake(0, 0, height, width) ;
The parameters are {x, y, width, height}, so try:
CGRect myRect = CGRectMake(0, 0, width, height) ;
Also, you should define a DEG_TO_RAD macro to make passing angles into these drawing functions easier:
#define DEG_TO_RAD(angle) ((angle) / 180.0 * M_PI)
This
CGContextRotateCTM(ctx, 1.57079633) ; // 90 degrees
becomes
CGContextRotateCTM(ctx, DEG_TO_RAD(90)) ;
Added benefit: you lose the comment, and you can increment this value programmatically in degrees.
I found the problem. When creating the view I mistakenly swapped width and height. After fixing that, all I had to do within the view was the following:
CGContextTranslateCTM(ctx, rect.origin.x + width / 2, rect.origin.y + height / 2 ) ; // make rotation point the middle
CGContextRotateCTM(ctx, DEG_TO_RAD(90)) ;
CGContextTranslateCTM(ctx, - height / 2, - width / 2) ; // move x / y back to where they belong
CGRect myRect = CGRectMake(0, 0, height-5, width) ; // after the rotation, width and height are swapped
CGContextClipToRect(ctx, myRect) ; // prevent lines being drawn outside the rect
From then on I use only myRect as the reference for all drawing operations.
And here's the result.
Similar to Instagram I have a square crop view (UIScrollView) that has a UIImageView inside it. So the user can drag a portrait or landscape image inside the square rect (equal to the width of the screen) and then the image should be cropped at the scroll offset. The UIImageView is set to aspect fit. The UIScrollView content size is set to a scale factor for either landscape or portrait, so that it correctly renders with aspect fit ratio.
When the user is done dragging, I want to scale the image up to a given size, say a 1000x1000px square, and then crop it at the scroll offset (using [UIImage drawAtPoint:]).
The problem is I can't get the math right to find the correct offset point. If I get it close on a 6 Plus, it will be way off on a 4S.
Here's my code for the scale and crop:
- (UIImage *)squareImageFromImage:(UIImage *)image scaledToSize:(CGFloat)newSize {
CGAffineTransform scaleTransform;
CGPoint origin;
if (image.size.width > image.size.height) {
//landscape
CGFloat scaleRatio = newSize / image.size.height;
scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
origin = CGPointMake((int)(-self.scrollView.contentOffset.x*scaleRatio),0);
} else if (image.size.width < image.size.height) {
//portrait
CGFloat scaleRatio = newSize / image.size.width;
scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
origin = CGPointMake(0, (int)(-self.scrollView.contentOffset.y*scaleRatio));
} else {
//square
CGFloat scaleRatio = newSize / image.size.width;
scaleTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
origin = CGPointMake(0, 0);
}
UIGraphicsBeginImageContextWithOptions(CGSizeMake(newSize, newSize), YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, scaleTransform);
[image drawAtPoint:origin];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
So for example with landscape: if I drag the scroll view left so that the image is cropped all the way to the right, my offset will be close on a 6 Plus, but on a 4S it will be off by about 150-200 points.
Here is my code for setting up the scroll view and image view:
CGRect cropRect = CGRectMake(0.0f,0.0,SCREEN_WIDTH,SCREEN_WIDTH);
CGFloat ratio = (int)self.image.size.height/self.image.size.width;
CGRect r = CGRectMake(0.0,0.0,SCREEN_WIDTH,SCREEN_WIDTH);
if (ratio>1.00) {
//portrait
r = CGRectMake(0.0,0.0,SCREEN_WIDTH,(int)(SCREEN_WIDTH*ratio));
} else if (ratio<1.00) {
//landscape
CGFloat size = (int)self.image.size.width/self.image.size.height;
cropOffset = (SCREEN_WIDTH*size)-SCREEN_WIDTH;
r = CGRectMake(0.0,0.0,(int)(SCREEN_WIDTH*size),SCREEN_WIDTH);
}
NSLog(@"r.size.height == %.4f",r.size.height);
self.scrollView.frame = cropRect;
self.scrollView.contentSize = r.size;
self.imageView = [[UIImageView alloc] initWithFrame:r];
self.imageView.backgroundColor = [UIColor clearColor];
self.imageView.contentMode = UIViewContentModeScaleAspectFit;
self.imageView.image = self.image;
[self.scrollView addSubview:self.imageView];
Cropping math can be tricky. It's been a while since I've had to deal with this, so hopefully I'm pointing you in the right direction. Here is a chunk of code from Pixology that grabs a scaled visible rect from a UIScrollView. I think the missing ingredient here might be zoomScale.
CGRect visibleRect;
visibleRect.origin = _scrollView.contentOffset;
visibleRect.size = _scrollView.bounds.size;
// figure in the scale
float theScale = 1.0 / _scrollView.zoomScale;
visibleRect.origin.x *= theScale;
visibleRect.origin.y *= theScale;
visibleRect.size.width *= theScale;
visibleRect.size.height *= theScale;
You may also need to figure in device screen scale:
CGFloat screenScale = [[UIScreen mainScreen] scale];
See how far you can get with this info, and let me know.
A CALayer can do it, and a UIImageView can do it. Can I directly display an image with aspect-fit using Core Graphics? UIImage's drawInRect: method does not let me specify the resize behavior.
If you're already linking AVFoundation, an aspect-fit function is provided in that framework:
CGRect AVMakeRectWithAspectRatioInsideRect(CGSize aspectRatio, CGRect boundingRect);
For instance, to scale an image to fit:
UIImage *image = …;
CGRect targetBounds = self.layer.bounds;
// fit the image, preserving its aspect ratio, into our target bounds
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size,
targetBounds);
// draw the image
CGContextDrawImage(context, imageRect, image.CGImage);
You need to do the math yourself. For example:
UIImage *image = self.imageToDraw;
// desired x/y coords, with the maximum width/height of your image
CGRect imageRect = CGRectMake(10, 10, 42, 42);
// calculate resize ratio, and apply to rect
CGFloat ratio = MIN(imageRect.size.width / image.size.width, imageRect.size.height / image.size.height);
imageRect.size.width = imageRect.size.width * ratio;
imageRect.size.height = imageRect.size.height * ratio;
// draw the image
CGContextDrawImage(context, imageRect, image.CGImage);
Alternatively, you can embed a UIImageView as a subview of your view, which gives you easy to use options for this. For similar ease of use but better performance, you can embed a layer containing the image in your view's layer. Either of these approaches would be worthy of a separate question, if you choose to go down that route.
Of course you can. It'll draw the image in whatever rect you pass. So just pass an aspect-fitted rect. Sure, you have to do a little bit of math yourself, but that's pretty easy.
Here's a solution:
CGSize imageSize = yourImage.size;
CGSize viewSize = CGSizeMake(450, 340); // size in which you want to draw
float hfactor = imageSize.width / viewSize.width;
float vfactor = imageSize.height / viewSize.height;
float factor = fmax(hfactor, vfactor);
// Divide the size by the greater of the vertical or horizontal shrinkage factor
float newWidth = imageSize.width / factor;
float newHeight = imageSize.height / factor;
CGRect newRect = CGRectMake(xOffset, yOffset, newWidth, newHeight); // xOffset/yOffset: where you want to draw
[yourImage drawInRect:newRect];
-- courtesy https://stackoverflow.com/a/1703210
I have a UIImage contained in a UIImageView. It's set to use the UIViewContentModeScaleAspectFit contentMode. The UIImageView is the size of the screen. The image is not, hence the scaleAspectFit mode. What I can't figure out is: where on the screen is the UIImage? What's its frame? Where does the top left of the image appear on the screen? I can see it on the screen, but not in code. This should be simple, but I can't figure it out.
Try this in your UIImageView subclass:
It will compute the frame of the image, assuming you are using UIViewContentModeScaleAspectFit.
- (CGRect) imageFrame
{
float horizontalScale = self.frame.size.width / self.image.size.width;
float verticalScale = self.frame.size.height / self.image.size.height;
float scale = MIN(horizontalScale, verticalScale);
float xOffset = (self.frame.size.width - self.image.size.width * scale) / 2;
float yOffset = (self.frame.size.height - self.image.size.height * scale) / 2;
return CGRectMake(xOffset,
yOffset,
self.image.size.width * scale,
self.image.size.height * scale);
}
It works out how much you need to shrink or enlarge the UIImage to fit it in the UIImageView in each dimension, and then picks the smaller scaling factor, to ensure that the image fits in the allotted space.
With that you know the size of the drawn UIImage, and it's easy to calculate the x/y offset relative to the containing frame.