Crop image to a square according to the size of a UIView/CGRect - ios

I have an implementation of AVCaptureSession and my goal is for the user to take a photo and only save the part of the image within the red square border, as shown below:
AVCaptureSession's previewLayer (the camera) spans from (0,0) (top left) to the bottom of my camera controls bar (the bar just above the view that contains the shutter). My navigation bar and controls bar are semi-transparent, so the camera can show through.
I'm using [captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; to ensure that the original image being saved to the camera roll is like Apple's camera.
The user will be able to take the photo in portrait, landscape left and right, so the cropping method must take this into account.
So far, I've tried to crop the original image using this code:
DDLogVerbose(@"%@: Image crop rect: (%f, %f, %f, %f)", THIS_FILE, self.imageCropRect.origin.x, self.imageCropRect.origin.y, self.imageCropRect.size.width, self.imageCropRect.size.height);
// Create new image context (retina safe)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.imageCropRect.size.width, self.imageCropRect.size.width), NO, 0.0);
// Create rect for image
CGRect rect = self.imageCropRect;
// Draw the image into the rect
[self.captureManager.stillImage drawInRect:rect];
// Save the image and end the image context
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, when I look at the cropped image in the camera roll, it seems it has just squashed the original image rather than discarding the top and bottom parts as I'd like. It also results in 53 pixels of white space at the top of the "cropped" image, likely because of the y position of my CGRect.
This is my logging output for the CGRect:
Image crop rect: (0.000000, 53.000000, 320.000000, 322.000000)
This also describes the frame of the red bordered view in the superview.
Is there something crucial I'm overlooking?
P.S. The original image size (taken with a camera in portrait mode) is:
Original image size: (2448.000000, 3264.000000)

You can crop images with CGImageCreateWithImageInRect:
CGImageRef imageRef = CGImageCreateWithImageInRect([uncroppedImage CGImage], bounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);

Don't forget to add the scale parameter, otherwise you will get a low-resolution image:
CGImageRef imageRef = CGImageCreateWithImageInRect([uncroppedImage CGImage], CGRectMake(0, 0, 30, 120));
[imageView setImage:[UIImage imageWithCGImage:imageRef scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationUp]];
CGImageRelease(imageRef);
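
For the original question above, note that CGImageCreateWithImageInRect works in the CGImage's pixel space, while the crop rect (0, 53, 320, 322) is in the preview view's point space. Below is a minimal sketch of that conversion, assuming the image orientation is UIImageOrientationUp and that the preview spans the image's full width; viewSize and viewRect are placeholder names, not from the answers above. Rotated camera photos would additionally need the rect mapped into the CGImage's unrotated coordinate space.
// Sketch: convert a crop rect given in view points into image pixels.
// viewSize: the preview's size in points; viewRect: the red square's frame.
CGFloat ratio = uncroppedImage.size.width / viewSize.width; // e.g. 2448 / 320
CGRect pixelRect = CGRectMake(viewRect.origin.x * ratio,
                              viewRect.origin.y * ratio,
                              viewRect.size.width * ratio,
                              viewRect.size.height * ratio);
CGImageRef croppedRef = CGImageCreateWithImageInRect(uncroppedImage.CGImage, pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:uncroppedImage.scale
                                      orientation:uncroppedImage.imageOrientation];
CGImageRelease(croppedRef);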

Swift 3:
let imageRef: CGImage = uncroppedImage.cgImage!.cropping(to: bounds)!
let croppedImage = UIImage(cgImage: imageRef)

Related

Getting black (empty) image from UIView drawViewHierarchyInRect:afterScreenUpdates:

After successfully using UIView's drawViewHierarchyInRect:afterScreenUpdates: method, introduced in iOS 7, to obtain an image representation (via UIGraphicsGetImageFromCurrentImageContext()) for blurring, my app also needed to obtain just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me retrieve aView's content corresponding to _blurImageView's frame.
Now, however, I would need to obtain a portion of aView, but this time this portion would be “inside”. Below is an image representing what I would like to achieve.
I have already tried creating a new graphics context, setting its size to the portion's size (the red box), and asking aView to draw into the rect that represents the red box's frame (its superview's frame being equal to aView's, of course), but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that did the job, however I heavily doubt this is the way to go.
Here’s my [edited-for-Stack Overflow] code that works:
- (UIImage *)imageOfPortionOfABiggerView
{
    UIView *bigViewToExtractFrom;  // placeholder for this example; assigned elsewhere
    UIImage *image;
    UIImage *wholeImage;
    CGImageRef _image;
    CGRect imageToExtractFrame;    // placeholder for this example; assigned elsewhere
    CGFloat screenScale = [[UIScreen mainScreen] scale];

    // Have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
    imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame, CGAffineTransformMakeScale(screenScale, screenScale));

    UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
    [bigViewToExtractFrom drawViewHierarchyInRect:[bigViewToExtractFrom bounds] afterScreenUpdates:NO];
    wholeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Obtain a CGImage[Ref] from another CGImage; this lets me specify the rect to extract.
    // However, since the image comes from a UIView at Retina (2x) scale, CGImage does not
    // take the screen's scale into consideration and treats a rect given in points as
    // pixels. You'd end up with an image from the wrong rect at half the size.
    _image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
    wholeImage = nil;

    // Have to specify the image's scale because CGImage doesn't consider the screen's scale.
    image = [UIImage imageWithCGImage:_image scale:screenScale orientation:UIImageOrientationUp];
    CGImageRelease(_image);

    return image;
}
I hope this will help anyone who stumbled upon my issue. Feel free to improve my snippet.
Thanks

Core Graphics - how to crop non-transparent pixels out of a UIImage?

I have a UIImage that is reading from a transparent PNG (500px by 500px). Somewhere in the image, there is a picture that I want to crop out and save as a separate UIImage. I also want to store the X and Y coordinates based on how many transparent pixels there were on the left and top of the newly cropped rectangle.
I was able to crop an image with this code:
- (UIImage *)cropImage:(UIImage *)image atRect:(CGRect)rect
{
    double scale = image.scale;
    CGRect scaledRect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale, rect.size.width * scale, rect.size.height * scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], scaledRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:scale orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}
This actually cuts off the transparent pixels on the top and left :S (it would be great if I could crop the pixels on the right and bottom too!). It then resizes the rest of the image to the rectangle I specified. Unfortunately, I need to cut out a picture that is in the middle of the image, and the size needs to be dynamic.
Been struggling with this for several hours now. Any ideas?
To crop an image, draw it into a smaller graphics context.
For example, let's say you have a 600x600 image. And let's say that you want to crop 200 pixels off all four sides. That leaves a 200x200 rectangle.
So you would make a 200x200 graphics context using UIGraphicsBeginImageContextWithOptions. Then you would draw the image into it with drawAtPoint:, drawing at the point (-200,-200). If you think about it, you will see that this offset causes just the 200x200 region from the middle of the original to be drawn into the actual bounds of the context. Thus you have cropped the image by 200 pixels on all four sides, which is what we wanted to do.
Thus here is a generalized version, assuming that we know the amount to crop from the left, right, top, and bottom:
UIImage *original = [UIImage imageNamed:@"original.png"];
CGSize sz = [original size];
CGFloat cropLeft = ...;
CGFloat cropRight = ...;
CGFloat cropTop = ...;
CGFloat cropBottom = ...;
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width - cropLeft - cropRight, sz.height - cropTop - cropBottom),
    NO, 0);
[original drawAtPoint:CGPointMake(-cropLeft, -cropTop)];
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After that, cropped is your cropped image.
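
The code above assumes the crop amounts are already known. Since the question also asks how to find them dynamically, here is a rough sketch, not from the answer above, that renders the image into an alpha-only bitmap and scans it for the bounding box of non-transparent pixels:
#import <UIKit/UIKit.h>

// Sketch: find the bounding box of non-transparent pixels in the CGImage's
// pixel space by drawing into an alpha-only bitmap and scanning it.
// Depending on use, the y origin may need flipping between Core Graphics
// and UIKit coordinates.
static CGRect opaqueBounds(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    uint8_t *alpha = calloc(width * height, 1);
    CGContextRef ctx = CGBitmapContextCreate(alpha, width, height, 8, width,
                                             NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    if (!ctx) { free(alpha); return CGRectZero; }
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (alpha[y * width + x] > 0) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    free(alpha);
    if (maxX < minX) return CGRectZero; // fully transparent image
    return CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}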

Crop UIImage according to Image Resolution

I have one UIImageView. Its content mode is set to AspectFit.
[imageView setContentMode:UIViewContentModeScaleAspectFit];
I need to crop a subImage from this image. This is the code which crops the image:
CGImageRef imageRef = CGImageCreateWithImageInRect([imageView.image CGImage], customRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
where customRect is the rectangle from which I need to crop the image.
This is how I calculate it:
CGRect customRect = CGRectMake((cropView.frame.origin.x / xFactor),
                               (cropView.frame.origin.y / yFactor),
                               (cropView.frame.size.width / xFactor),
                               (cropView.frame.size.height / yFactor));
The problem comes in cropping. CGImageCreateWithImageInRect crops the given area according to the actual image size, which in some cases is larger than the image view's size. I tried other approaches, such as UIGraphicsGetImageFromCurrentImageContext, but they do not preserve the image quality; they degrade it.
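For what it's worth, here is one way xFactor and yFactor might be derived; this is my assumption, not code from the question. Under aspect-fit the image is scaled by a single factor and letterboxed, so the crop rect also needs the letterbox offset subtracted:
// Sketch: map a rect from the image view's point space into the image's
// point space under UIViewContentModeScaleAspectFit. imageView and cropView
// are assumed to be the views from the question.
UIImage *image = imageView.image;
CGFloat fitScale = MIN(imageView.bounds.size.width / image.size.width,
                       imageView.bounds.size.height / image.size.height);
// Aspect-fit centers the image, so subtract the letterbox offset first.
CGFloat offsetX = (imageView.bounds.size.width - image.size.width * fitScale) / 2.0;
CGFloat offsetY = (imageView.bounds.size.height - image.size.height * fitScale) / 2.0;
CGRect customRect = CGRectMake((cropView.frame.origin.x - offsetX) / fitScale,
                               (cropView.frame.origin.y - offsetY) / fitScale,
                               cropView.frame.size.width / fitScale,
                               cropView.frame.size.height / fitScale);
// customRect is now in the image's point space; multiply by image.scale
// to get CGImage pixels before calling CGImageCreateWithImageInRect.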

Cropping ellipse using core image in ios

I want to crop an ellipse from an image in iOS. Using the Core Image framework, I know how to crop a rectangular region.
Using Core Graphics, I am able to clip the elliptical region. But the size of the cropped image is the same as the size of the original image, since I am applying a mask to the area outside the ellipse.
So the goal is to crop the elliptical region from an image, with the cropped image sized to the ellipse's rectangular bounds rather than to the original image.
Any help would be greatly appreciated. Thanks in advance.
You have to create a context in the correct size, try the following code:
- (UIImage *)cropImage:(UIImage *)input inElipse:(CGRect)rect {
    CGRect drawArea = CGRectMake(-rect.origin.x, -rect.origin.y, input.size.width, input.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddEllipseInRect(ctx, CGRectMake(0, 0, rect.size.width, rect.size.height));
    CGContextClip(ctx);
    [input drawInRect:drawArea];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
You may have to adjust drawArea to your needs, as I did not test it. Note that UIGraphicsBeginImageContext creates a non-Retina context; UIGraphicsBeginImageContextWithOptions with a scale of 0.0 would preserve Retina resolution.
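A hypothetical usage example (the rect is made up for illustration):
// Crop a circular region out of photo (assumed to be a UIImage you already have).
UIImage *circle = [self cropImage:photo inElipse:CGRectMake(100, 100, 200, 200)];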

Crop UIImage from a transformed UIImageView

I am letting the user capture an image from the camera or picking one from the library.
I display this image in a UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect as this does not take the scaling into account. The transform is applied to the imageView like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
This is driven by a gesture recognizer.
I assume what is happening is: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image, it does exactly that, with no regard to the scaling and positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage *)img toRect:(CGRect)rect {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale, rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.imageView.image.scale orientation:self.imageView.image.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
I am passing in a cropRect from a view that is a subview of my main view (the square overlay box, like in UIImagePickerController). The main UIView has a UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get "what you see is what you get" cropping, and which factors must I take into account? Or perhaps suggestions on whether I should implement the hierarchy or scaling differently.
Try a simple trick. Apple has samples on its site showing how to zoom into a photo in code. Once you're done zooming, take the frame size of the bounding view and use a graphics context to capture the image within it. E.g. a UIView contains a scroll view holding the zoomed image; the scroll view zooms, and so does your image. Now take the frame size of your bounding UIView, create an image context from it, and save that as a new image, as in the sketch below. Tell me if that makes sense.
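A minimal sketch of that idea (boundingView is a placeholder for whatever view frames the zoomed content):
- (UIImage *)imageFromBoundingView:(UIView *)boundingView {
    UIGraphicsBeginImageContextWithOptions(boundingView.bounds.size, NO, 0.0);
    // Renders the view hierarchy, including the transformed (zoomed/panned)
    // image view, exactly as it appears on screen.
    [boundingView drawViewHierarchyInRect:boundingView.bounds afterScreenUpdates:YES];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}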
Cheers :)
