Trying to crop my UIImage to a 1:1 aspect ratio (square), but it keeps enlarging the image and making it blurry. Why?

Given a UIImage, I'm trying to make it square: just chop off some of the larger dimension to get a 1:1 aspect ratio.
UIImage *pic = [UIImage imageNamed:@"pic"];
CGFloat originalWidth = pic.size.width;
CGFloat originalHeight = pic.size.height;
float smallestDimension = fminf(originalWidth, originalHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);
CGImageRef imageRef = CGImageCreateWithImageInRect([pic CGImage], square);
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIImageView *imageView = [[UIImageView alloc] initWithImage:squareImage];
imageView.frame = CGRectMake(100, 100, imageView.bounds.size.width, imageView.bounds.size.height);
[self.view addSubview:imageView];
But the result is an enlarged, blurry image (the original post included before/after screenshots), when it should look like the original, just a little narrower.
Why is this? The image assets are pic (150x114) and pic@2x (300x228).

The problem is that you're mixing up logical and pixel sizes. On non-retina devices these two are the same, but on retina devices (as in your case) the pixel size is double the logical size.
Usually, when designing your GUI, you can just think in logical sizes and coordinates, and iOS (or OS X) will make sure everything is doubled on retina screens. However, in some cases, especially when creating images yourself, you have to specify explicitly which size you mean.
UIImage's size property returns the logical size, i.e. the resolution on a non-retina screen. This is why CGImageCreateWithImageInRect creates the new image from only the upper-left portion of the original.
Multiply the logical size by the image's scale (1 on non-retina devices, 2 on retina devices):
CGFloat originalWidth = pic.size.width * pic.scale;
CGFloat originalHeight = pic.size.height * pic.scale;
This ensures that the new image is created from the full height (or width) of the original. One problem remains: when you create a new UIImage using
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
iOS will treat it as a regular non-retina image and display it twice as large as you would expect. To fix this, specify the scale when you create the UIImage:
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
                                           scale:pic.scale
                                     orientation:pic.imageOrientation];
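Putting both fixes together, here is a minimal sketch of the corrected crop (same pic asset as above):

UIImage *pic = [UIImage imageNamed:@"pic"];

// Convert the logical size to the pixel size before cropping.
CGFloat pixelWidth = pic.size.width * pic.scale;
CGFloat pixelHeight = pic.size.height * pic.scale;
CGFloat smallestDimension = MIN(pixelWidth, pixelHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);

CGImageRef imageRef = CGImageCreateWithImageInRect([pic CGImage], square);

// Hand the scale and orientation back so iOS displays the crop
// at the correct logical size.
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
                                           scale:pic.scale
                                     orientation:pic.imageOrientation];
CGImageRelease(imageRef);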

Related

Resizing a photograph using UIGraphics but final image is slightly blurry

I am trying to resize an image using UIGraphics. The image is one taken with the camera, and I am using this code:
CGSize origImageSize = photograph.size;
// this saves as 140x140 for retina
CGRect newRect = CGRectMake(0, 0, 70, 70);
// scaling ratio
float ratio = MAX(newRect.size.width / origImageSize.width,
                  newRect.size.height / origImageSize.height);
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
CGRect projectRect;
projectRect.size.width = ratio * origImageSize.width;
projectRect.size.height = ratio * origImageSize.height;
// center the image
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
[photograph drawInRect:projectRect];
// get the image from the image context
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
For some reason the final photo isn't as sharp; it's slightly blurry. Am I doing anything wrong here? Any pointers would be really appreciated. Thanks
I assume you calculate the rectangle properly. Then make sure you use an integral rectangle; non-integral values can cause sub-pixel rendering. Run your projectRect through CGRectIntegral to get an integral rectangle, then use that to render your image.
projectRect = CGRectIntegral(projectRect);
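Folding that into the question's code, a sketch of the full method (the name squareThumbnailFromImage:side: is just for illustration):

- (UIImage *)squareThumbnailFromImage:(UIImage *)photograph side:(CGFloat)side
{
    CGSize origImageSize = photograph.size;
    CGRect newRect = CGRectMake(0, 0, side, side);
    float ratio = MAX(newRect.size.width / origImageSize.width,
                      newRect.size.height / origImageSize.height);

    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);

    CGRect projectRect;
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;

    // Snap to whole points to avoid sub-pixel interpolation blur.
    projectRect = CGRectIntegral(projectRect);

    [photograph drawInRect:projectRect];
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return smallImage;
}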

Getting black (empty) image from UIView drawViewHierarchyInRect:afterScreenUpdates:

After successfully using UIView's drawViewHierarchyInRect:afterScreenUpdates: method (introduced in iOS 7) to obtain an image representation (via UIGraphicsGetImageFromCurrentImageContext()) for blurring, my app also needed to capture just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me retrieve aView's content clipped to _blurImageView's frame.
Now, however, I need to obtain a portion that lies "inside" aView (the original post included an image marking the desired region with a red box).
I have already tried creating a new graphics context sized to that portion (the red box) and calling aView to draw in the rect that represents the red box's frame (its superview's frame being equal to aView's, of course), but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that did the job, however I heavily doubt this is the way to go.
Here’s my [edited-for-Stack Overflow] code that works:
- (UIImage *)imageOfPortionOfABiggerView
{
    UIView *bigViewToExtractFrom;   // the view to snapshot (set elsewhere)
    CGRect imageToExtractFrame;     // the portion to extract, in points (set elsewhere)
    UIImage *image;
    UIImage *wholeImage;
    CGImageRef _image;
    CGFloat screenScale = [[UIScreen mainScreen] scale];

    // Have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
    imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame,
        CGAffineTransformMakeScale(screenScale, screenScale));

    UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
    [bigViewToExtractFrom drawViewHierarchyInRect:[bigViewToExtractFrom bounds] afterScreenUpdates:NO];
    wholeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Obtain a CGImage from another CGImage; this lets me specify the rect to extract.
    // However, since the image comes from a UIView (rendered at 2x on retina), CGImage
    // ignores the screen's scale and treats the rect as pixels. Pass a rect in points
    // and you get an image of the wrong region at half the size.
    _image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
    wholeImage = nil;

    // Have to specify the scale, because CGImage doesn't carry the screen's scale.
    image = [UIImage imageWithCGImage:_image scale:screenScale orientation:UIImageOrientationUp];
    CGImageRelease(_image);
    return image;
}
I hope this will help anyone who stumbled upon my issue. Feel free to improve my snippet.
Thanks
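Since the author invites improvements, here is the same technique condensed into a reusable sketch (the method name and parameters are hypothetical):

// Snapshot sourceView, then crop out portionRect (given in points).
- (UIImage *)imageOfPortion:(CGRect)portionRect ofView:(UIView *)sourceView
{
    CGFloat screenScale = [[UIScreen mainScreen] scale];

    // Render the whole view hierarchy first.
    UIGraphicsBeginImageContextWithOptions(sourceView.bounds.size, YES, screenScale);
    [sourceView drawViewHierarchyInRect:sourceView.bounds afterScreenUpdates:NO];
    UIImage *wholeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // CGImage works in pixels, so scale the point-based rect up.
    CGRect pixelRect = CGRectApplyAffineTransform(portionRect,
        CGAffineTransformMakeScale(screenScale, screenScale));
    CGImageRef croppedRef = CGImageCreateWithImageInRect(wholeImage.CGImage, pixelRect);

    // Pass the scale back so UIKit shows the crop at the right point size.
    UIImage *result = [UIImage imageWithCGImage:croppedRef
                                          scale:screenScale
                                    orientation:UIImageOrientationUp];
    CGImageRelease(croppedRef);
    return result;
}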

Core Graphics - how to crop non-transparent pixels out of a UIImage?

I have a UIImage that is reading from a transparent PNG (500px by 500px). Somewhere in the image, there is a picture that I want to crop out and save as a separate UIImage. I also want to store the X and Y coordinates based on how many transparent pixels there were on the left and top of the newly cropped rectangle.
I was able to crop an image with this code:
- (UIImage *)cropImage:(UIImage *)image atRect:(CGRect)rect
{
    CGFloat scale = image.scale;
    CGRect scaledRect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                                   rect.size.width * scale, rect.size.height * scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], scaledRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:scale orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}
This actually cuts off the transparent pixels on the top and left :S (it would be great if I were able to crop the pixels on the right and bottom too!). It then crops the rest of the image to the rectangle I specified. Unfortunately, I need to cut out a picture that is in the middle of the image, and the size needs to be dynamic.
Been struggling with this for several hours now. Any ideas?
To crop an image, draw it into a smaller graphics context.
For example, let's say you have a 600x600 image. And let's say that you want to crop 200 pixels off all four sides. That leaves a 200x200 rectangle.
So you would make a 200x200 graphics context, using UIGraphicsBeginImageContextWithOptions. Then you would draw the image into it using drawAtPoint:, drawing at the point (-200,-200). If you think about it, you will see that this offset causes just the 200x200 square from the middle of the original to be drawn into the actual bounds of the context. Thus you have cropped the image by 200 pixels on all four sides, which is what we wanted to do.
Thus here is a generalized version, assuming that we know the amount to crop from the left, right, top, and bottom:
UIImage* original = [UIImage imageNamed:@"original.png"];
CGSize sz = [original size];
CGFloat cropLeft = ...;
CGFloat cropRight = ...;
CGFloat cropTop = ...;
CGFloat cropBottom = ...;
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width - cropLeft - cropRight, sz.height - cropTop - cropBottom),
    NO, 0);
[original drawAtPoint:CGPointMake(-cropLeft, -cropTop)];
UIImage* cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After that, cropped is your cropped image.
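The same idea extends to cropping an arbitrary CGRect rather than edge amounts; a sketch, where cropRect is whatever rect you computed (the values here are made up):

CGRect cropRect = CGRectMake(150, 150, 200, 200); // hypothetical example values
UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, 0);
// Offsetting by the negative origin slides cropRect over the context bounds.
[original drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
UIImage *croppedToRect = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();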

Using one image png file for retina and normal screen in UIImageView

Say there is a cool.png file with dimensions of 200 x 100 pixels, and I'd like to use it for both retina and normal devices.
I need the UIImageView to be 100 x 50 points.
I tried decreasing the size of the UIImageView according to the image dimensions, and visually I see no difference between preparing two files (with and without the @2x scale modifier) and letting the UIImageView scale the image via its contentMode property.
BOOL retina = [[UIScreen mainScreen] scale] == 2.0;
UIImage *img = [UIImage imageNamed:@"cool.png"];
UIImageView *imgView = [[UIImageView alloc] initWithImage:img];
imgView.contentMode = UIViewContentModeScaleToFill;
CGFloat width = img.size.width;
CGFloat height = img.size.height;
if (!retina) {
    width = width / 2.0;
    height = height / 2.0;
}
imgView.frame = CGRectMake(somePoint.x, somePoint.y, width, height);
Is there something wrong in the approach?
You are taking the wrong approach here.
CGRect, CGPoint, and CGSize aren't measured in pixels; they are measured in points. Points are iOS's coordinate system and are scaled to the device, so, for example, a UIView 320 points wide fills the width of both a retina and a non-retina iPhone.
So if you want cool.png to display at 100 x 50 points on all devices, simply set the frame to 100 x 50, set the image to cool.png, and set imgView.contentMode to UIViewContentModeScaleAspectFit. On a non-retina device the 200 x 100 image is rescaled to fit; on a retina device it is shown at full resolution (200 x 100 pixels) within the 100 x 50 points.
However, the @2x system was made for a reason: having to scale images down at runtime increases loading times. But if you can't provide @2x images, the approach above still works, as the sketch below shows.
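In code, the suggested approach might look like this sketch (somePoint is the question's variable):

// A fixed 100 x 50 point frame works on both retina and non-retina screens.
UIImageView *imgView = [[UIImageView alloc] initWithFrame:
    CGRectMake(somePoint.x, somePoint.y, 100, 50)];
imgView.image = [UIImage imageNamed:@"cool.png"]; // the 200 x 100 pixel asset
// Let UIKit scale the bitmap into the frame on any screen.
imgView.contentMode = UIViewContentModeScaleAspectFit;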

iOS: Reduce image size without reducing image quality

I am displaying an image in a table view cell (the image name is saved in a plist). Before setting it on the cell, I am resizing the image to
imageSize = CGSizeMake(32, 32);
But after resizing the image, the quality is also degraded on retina displays.
I have both images added to the project (i.e. 1x and @2x).
This is how I am reducing the image size to 32x32.
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
Any pointers on this are very much appreciated.
Thanks
Try this: instead of UIGraphicsBeginImageContext(size); use UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
From what I understand, your code resizes the image to 32x32 (in points) no matter what the resolution. UIGraphicsBeginImageContextWithOptions scales the image to the scale of the device's screen, so you still get an image resized to 32x32 points, but the resolution is kept for retina displays.
(Note: this is what I understood from Apple's UIKit reference; it may not be exactly so, but it should be.)
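Applied to the method above, a sketch of the corrected version:

+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    // A scale of 0.0 uses the device's screen scale, so the bitmap
    // keeps full resolution on retina displays.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}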