CGAffineTransformScale issues with image quality - iOS

I have a view that is being scaled down via a UIPinchGesture. A subview of that view is a UIImageView with an associated image. The issue is that scaling down noticeably degrades the image quality once the view reaches about a quarter of its original size.
I'm curious if there is a way to solve this without redrawing the image at the changed size as it is scaled down.

I would say no.
The best option for improving visible image quality in all states (without rescaling anything on the fly) is to scale the original image to a size halfway between the biggest and smallest sizes it will have on screen. That minimizes the amount of scaling the GPU has to do.
Other than that, rescale the image whenever you apply a transform to the view, using something like:
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Pass 0.0 as the scale to use the current device's pixel scaling factor
    // (and thus account for Retina resolution); pass 1.0 to force exact pixel size.
    // (Plain UIGraphicsBeginImageContext(newSize) would always use a scale of 1.0.)
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
You may need to worry about aspect ratio, but that's another question entirely.
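For completeness, here is a hypothetical way to wire this into the pinch gesture. This is only a sketch: ImageUtils, originalImage, currentScale, and containerView are placeholder names, not from the question; the idea is to re-render the bitmap once the gesture settles so the GPU no longer has to minify a large texture.
- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    // Accumulate the pinch into the container's transform.
    self.containerView.transform = CGAffineTransformScale(self.containerView.transform,
                                                          recognizer.scale, recognizer.scale);
    self.currentScale *= recognizer.scale; // hypothetical running total, starts at 1.0
    recognizer.scale = 1.0;
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        // Re-render the original image at its final on-screen size.
        CGSize target = CGSizeMake(self.imageView.bounds.size.width * self.currentScale,
                                   self.imageView.bounds.size.height * self.currentScale);
        self.imageView.image = [ImageUtils imageWithImage:self.originalImage
                                             scaledToSize:target];
    }
}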

Related

Manage quality and size of an UIImage with alpha channel

I have a UIView containing UIImageViews and UILabels, which I have to capture into an image and then export to the photo gallery. The image has a fixed size in pixels and must have an alpha channel, because the UIView's background color is clear.
Currently I use UIGraphicsBeginImageContextWithOptions with renderInContext: or drawViewHierarchyInRect:, then resize the image to the given size and save it with UIImagePNGRepresentation. It works - I get a UIImage of the exact pixel size I need, with an alpha channel, saved in the gallery.
// Capture the view hierarchy at screen scale.
UIGraphicsBeginImageContextWithOptions(_templateView.bounds.size, NO, 0.0);
[_templateView drawViewHierarchyInRect:_templateView.bounds afterScreenUpdates:NO];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Resize the capture to the fixed 1080x1080-pixel output and save as PNG.
UIGraphicsBeginImageContext(CGSizeMake(1080.0f, 1080.0f));
[img drawInRect:CGRectMake(0, 0, 1080.0f, 1080.0f)];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *pngImageData = UIImagePNGRepresentation(img);
The problem is the file size of the resulting image: it is way larger than expected. When I add only one UIImageView (filling the parent UIView) with an image of 1.2 MB, its capture results in 1.65 MB. This is crucial because I have a size limit for the image. How can I reduce its size? Is it possible to reduce the quality of such an image while keeping the alpha channel?
I tried resizing it to 50% and then back up to 100%, but that results in an even larger file.

Resizing a photograph using UIGraphics but final image is slightly blurry

I am trying to resize an image using UIGraphics. The image is one taken with the camera, and I am using this code:
CGSize origImageSize = photograph.size;
// This renders at 140x140 pixels on Retina.
CGRect newRect = CGRectMake(0, 0, 70, 70);
// Scaling ratio (aspect fill).
float ratio = MAX(newRect.size.width / origImageSize.width,
                  newRect.size.height / origImageSize.height);
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
CGRect projectRect;
projectRect.size.width = ratio * origImageSize.width;
projectRect.size.height = ratio * origImageSize.height;
// Center the image.
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
[photograph drawInRect:projectRect];
// Get the image from the image context.
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
For some reason the final photo isn't as sharp; it's slightly blurry. Am I doing anything wrong here? Any pointers would be really appreciated. Thanks.
Assuming you calculate the rectangle properly, make sure you use an integral rectangle: non-integral values can cause sub-pixel rendering, which blurs the result.
Run your projectRect through CGRectIntegral to get an integral rectangle, then use it to render your image:
projectRect = CGRectIntegral(projectRect);
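For instance, a minimal sketch of how that slots into the question's code (names reused from above):
// CGRectIntegral expands the rect outward to whole-point boundaries,
// e.g. {0.5, 0.5, 69.3, 69.3} becomes {0, 0, 70, 70}.
projectRect = CGRectIntegral(projectRect);
[photograph drawInRect:projectRect];
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();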

Core Graphics - how to crop non-transparent pixels out of a UIImage?

I have a UIImage that is reading from a transparent PNG (500px by 500px). Somewhere in the image, there is a picture that I want to crop out and save as a separate UIImage. I also want to store the X and Y coordinates based on how many transparent pixels there were on the left and top of the newly cropped rectangle.
I was able to crop an image with this code:
- (UIImage *)cropImage:(UIImage *)image atRect:(CGRect)rect
{
    // Convert the point-based rect to pixels before cropping the CGImage.
    CGFloat scale = image.scale;
    CGRect scaledRect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                                   rect.size.width * scale, rect.size.height * scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], scaledRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:scale
                                     orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}
This actually cuts off the transparent pixels on the top and left :S (which would be great if I could crop the pixels on the right and bottom too!). It then resizes the rest of the image to the rectangle I specified. Unfortunately, I need to cut out a picture that is in the middle of the image, and I need the size to be dynamic.
Been struggling with this for several hours now. Any ideas?
To crop an image, draw it into a smaller graphics context.
For example, let's say you have a 600x600 image. And let's say that you want to crop 200 pixels off all four sides. That leaves a 200x200 rectangle.
So you would make a 200x200 graphics context, using UIGraphicsBeginImageContextWithOptions. Then you would draw the image into it using drawAtPoint:, drawing at the point (-200,-200). If you think about it, you will see that this offset causes just the 200x200 square from the middle of the original to fall within the actual bounds of the context. Thus you have cropped the image by 200 pixels on all four sides, which is what we wanted to do.
Thus here is a generalized version, assuming that we know the amount to crop from the left, right, top, and bottom:
UIImage *original = [UIImage imageNamed:@"original.png"];
CGSize sz = [original size];
CGFloat cropLeft = ...;
CGFloat cropRight = ...;
CGFloat cropTop = ...;
CGFloat cropBottom = ...;
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width - cropLeft - cropRight, sz.height - cropTop - cropBottom),
    NO, 0);
// Drawing at a negative offset pushes the unwanted margins outside the context.
[original drawAtPoint:CGPointMake(-cropLeft, -cropTop)];
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After that, cropped is your cropped image.
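The question also asks how to discover the crop amounts from the transparent border itself. One way to get them (a sketch only; OpaqueBoundingBox is a name invented here, and the scan assumes the image fits comfortably in memory) is to render the image's alpha channel into an alpha-only bitmap and look for the first and last non-transparent rows and columns:
static CGRect OpaqueBoundingBox(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    // Alpha-only bitmap: one byte per pixel, no color space needed.
    uint8_t *alpha = calloc(width * height, 1);
    CGContextRef ctx = CGBitmapContextCreate(alpha, width, height, 8, width,
                                             NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);
    // Row 0 of the buffer is the top row of the image.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (alpha[y * width + x] != 0) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    free(alpha);
    if (maxX < minX) return CGRectZero; // fully transparent image
    return CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}
The returned rect is in pixels: its origin gives the transparent margins on the left and top (the X and Y the question wants to store), and the right and bottom margins follow from the image size, which yields cropLeft, cropTop, cropRight, and cropBottom for the code above.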

Crop UIImage from a transformed UIImageView

I am letting the user capture an image from the camera or picking one from the library.
I display this image in a UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect as this does not take the scaling into account. The transform is applied to the imageView like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
This is driven by a gesture recognizer.
I assume what is happening is: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image, it does exactly that - with no regard to the scaling or positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage *)img toRect:(CGRect)rect {
    // Convert the point-based rect to pixels on Retina screens.
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                          rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.imageView.image.scale
                                    orientation:self.imageView.image.imageOrientation];
    // UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}
I pass in a cropRect from a view that is a subview of my main view (the square overlay box, like in UIImagePickerController). The main UIView contains the UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get "what you see is what you get" cropping, and which factors must I take into account? Or perhaps suggestions on whether I should implement the hierarchy or scaling differently.
Try a simple trick. Apple has samples on its site showing how to zoom into a photo in code. Once zooming is done, take the frame size of the bounding view and capture the image with a graphics context of that size. E.g., a UIView contains a scroll view which holds the zoomed image; the scroll view zooms, and so does your image. Now take the frame size of your bounding UIView, create an image context from it, render into it, and save that as a new image, as sketched below. Tell me if that makes sense.
Cheers :)
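A sketch of that idea in code (containerView here stands for whatever view holds the transformed image view, and cropView for the overlay box; both are assumed names):
- (UIImage *)captureCroppedImage {
    CGRect cropFrame = self.cropView.frame; // overlay box, in the container's coordinates
    self.cropView.hidden = YES;             // don't render the overlay itself
    UIGraphicsBeginImageContextWithOptions(cropFrame.size, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Shift so the crop box's origin maps to (0, 0); renderInContext:
    // draws sublayers with their transforms applied, so the capture
    // matches what the user sees.
    CGContextTranslateCTM(ctx, -cropFrame.origin.x, -cropFrame.origin.y);
    [self.containerView.layer renderInContext:ctx];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.cropView.hidden = NO;
    return result;
}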

iOS: Reduce image size without reducing image quality

I am displaying an image in a table view cell (the image name is saved in a plist). Before setting it on the cell, I resize the image to
imageSize = CGSizeMake(32, 32);
But after resizing the image, the quality is noticeably degraded on Retina displays.
I have both variants of the image added to the project (i.e. @1x and @2x).
This is how I am reducing the image size to 32x32.
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
Any pointers on this are very much appreciated.
Thanks
Try this: instead of UIGraphicsBeginImageContext(size); use UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
From what I understand, what you're doing there is resizing the image to 32x32 points no matter what the resolution. UIGraphicsBeginImageContextWithOptions with a scale of 0.0 matches the context to the device's screen scale, so the image is still resized to 32x32 points but the pixel resolution is kept for Retina displays.
(Note that this is my reading of Apple's UIKit reference; it may not be exact, but it should be.)
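Applied to the question's helper, that one-line change gives:
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    // A scale of 0.0 means "use the device's screen scale", so a
    // 32x32-point context is backed by 64x64 pixels on an @2x screen.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}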
