How to crop a center square in a UIImage?

Sorry about this question, but I searched a lot of threads here on S.O. and found nothing.
The question is: how to crop a center square in a UIImage?
I tried the code below, but without success. The cropping happens, but in the upper-left corner rather than the center.
-(UIImage*)imageCrop:(UIImage*)original
{
UIImage *ret = nil;
// This calculates the crop area.
float originalWidth = original.size.width;
float originalHeight = original.size.height;
float edge = fminf(originalWidth, originalHeight);
float posX = (originalWidth - edge) / 2.0f;
float posY = (originalHeight - edge) / 2.0f;
CGRect cropSquare = CGRectMake(posX, posY,
edge, edge);
// This performs the image cropping.
CGImageRef imageRef = CGImageCreateWithImageInRect([original CGImage], cropSquare);
ret = [UIImage imageWithCGImage:imageRef
scale:original.scale
orientation:original.imageOrientation];
CGImageRelease(imageRef);
return ret;
}

To add more information: I'm applying this crop right after a photo capture.
After some tests, I found something that worked.
// If orientation indicates a change to portrait.
if(original.imageOrientation == UIImageOrientationLeft ||
original.imageOrientation == UIImageOrientationRight)
{
cropSquare = CGRectMake(posY, posX,
edge, edge);
}
else
{
cropSquare = CGRectMake(posX, posY,
edge, edge);
}
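For reference, here is the whole method with that orientation check folded in. This is just the question's code plus the fix above (the method name is mine); note that CGImageCreateWithImageInRect works in the CGImage's pixel coordinates, so for @2x/@3x images the crop rect would also need to be multiplied by original.scale.
-(UIImage *)imageCropToCenterSquare:(UIImage *)original
{
    // This calculates the crop area in the CGImage's coordinate space.
    float originalWidth = original.size.width;
    float originalHeight = original.size.height;
    float edge = fminf(originalWidth, originalHeight);
    float posX = (originalWidth - edge) / 2.0f;
    float posY = (originalHeight - edge) / 2.0f;
    CGRect cropSquare;
    if (original.imageOrientation == UIImageOrientationLeft ||
        original.imageOrientation == UIImageOrientationRight) {
        // The underlying CGImage is stored rotated 90°, so swap the offsets.
        cropSquare = CGRectMake(posY, posX, edge, edge);
    } else {
        cropSquare = CGRectMake(posX, posY, edge, edge);
    }
    // This performs the image cropping.
    CGImageRef imageRef = CGImageCreateWithImageInRect(original.CGImage, cropSquare);
    UIImage *ret = [UIImage imageWithCGImage:imageRef
                                       scale:original.scale
                                 orientation:original.imageOrientation];
    CGImageRelease(imageRef);
    return ret;
}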

Related

Masking an image using bezierpath with image's full resolution

Hi, I have a path (shape) and a high-resolution image. I make the high-res image AspectFit inside the view on which I draw the path, and I want to mask the image with the path, but at the full resolution of the image, not at the resolution at which we see the path. The problem: it works perfectly when I don't scale everything up for high-resolution masking, but when I do, everything is messed up. The mask gets stretched and the origins don't make sense.
All I want is to be able to upscale the path with the same aspect ratio as the image (at the full resolution of the image) and position it correctly so it can mask the high-res image properly.
I've tried this:
Masking CGContext with a CGPathRef?
and this
Creating mask with CGImageMaskCreate is all black (iphone)
and this
Clip UIImage to UIBezierPath (not masking)
none of which works correctly when I try to mask a high-quality image (bigger than the screen resolution).
EDIT: I posted a working project on GitHub that shows the difference between normal-quality masking (at the screen's resolution) and high-quality masking (at the image's resolution). I'd really appreciate any help.
https://github.com/Reza-Abdolahi/HighResMasking
If I understand your question correctly:
You have an image view containing an image that may have been scaled down (or even scaled up) using UIViewContentModeScaleAspectFit.
You have a bezier path whose points are in the geometry (coordinate system) of that image view.
And now you want to create a copy of the image, at its original resolution, masked by the bezier path.
We can think of the image as having its own geometry, with the origin at the top left corner of the image and one unit along each axis being one point. So what we need to do is:
1. Create a graphics renderer big enough to draw the image into without scaling. The geometry of this renderer is the image's geometry.
2. Transform the bezier path from the view geometry to the renderer geometry.
3. Apply the transformed path to the renderer's clip region.
4. Draw the image (untransformed) into the renderer.
Step 2 is the hard one, because we have to come up with the correct CGAffineTransform. In an aspect-fit scenario, the transform needs to not only scale the image, but possibly translate it along either the x axis or the y axis (but not both). But let's be more general and support other UIViewContentMode settings. Here's a category that lets you ask a UIImageView for the transform that converts points in the view's geometry to points in the image's geometry:
@implementation UIImageView (ImageGeometry)
/**
* Return a transform that converts points in my geometry to points in the
* image's geometry. The origin of the image's geometry is at its upper
* left corner, and one unit along each axis is one point in the image.
*/
- (CGAffineTransform)imageGeometryTransform {
CGRect viewBounds = self.bounds;
CGSize viewSize = viewBounds.size;
CGSize imageSize = self.image.size;
CGFloat xScale = imageSize.width / viewSize.width;
CGFloat yScale = imageSize.height / viewSize.height;
CGFloat tx, ty;
switch (self.contentMode) {
case UIViewContentModeScaleToFill: tx = 0; ty = 0; break;
case UIViewContentModeScaleAspectFit:
if (xScale > yScale) { tx = 0; ty = 0.5; yScale = xScale; }
else if (xScale < yScale) { tx = 0.5; ty = 0; xScale = yScale; }
else { tx = 0; ty = 0; }
break;
case UIViewContentModeScaleAspectFill:
if (xScale < yScale) { tx = 0; ty = 0.5; yScale = xScale; }
else if (xScale > yScale) { tx = 0.5; ty = 0; xScale = yScale; }
else { tx = 0; ty = 0; imageSize = viewSize; }
break;
case UIViewContentModeCenter: tx = 0.5; ty = 0.5; xScale = yScale = 1; break;
case UIViewContentModeTop: tx = 0.5; ty = 0; xScale = yScale = 1; break;
case UIViewContentModeBottom: tx = 0.5; ty = 1; xScale = yScale = 1; break;
case UIViewContentModeLeft: tx = 0; ty = 0.5; xScale = yScale = 1; break;
case UIViewContentModeRight: tx = 1; ty = 0.5; xScale = yScale = 1; break;
case UIViewContentModeTopLeft: tx = 0; ty = 0; xScale = yScale = 1; break;
case UIViewContentModeTopRight: tx = 1; ty = 0; xScale = yScale = 1; break;
case UIViewContentModeBottomLeft: tx = 0; ty = 1; xScale = yScale = 1; break;
case UIViewContentModeBottomRight: tx = 1; ty = 1; xScale = yScale = 1; break;
default: return CGAffineTransformIdentity; // Mode not supported by UIImageView.
}
tx *= (imageSize.width - xScale * (viewBounds.origin.x + viewSize.width));
ty *= (imageSize.height - yScale * (viewBounds.origin.y + viewSize.height));
CGAffineTransform transform = CGAffineTransformMakeTranslation(tx, ty);
transform = CGAffineTransformScale(transform, xScale, yScale);
return transform;
}
@end
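As a quick sanity check (the numbers here are hypothetical, not from the answer): take a 1000×500 image displayed aspect-fit in a 200×200 view. It renders at 200×100 with 50 points of letterboxing top and bottom, so the view point (0, 50) should map to the image's top-left corner.
// Hypothetical setup: wideImage is assumed to be 1000×500 points.
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.image = wideImage;
CGAffineTransform t = [imageView imageGeometryTransform];
// Maps (0, 50) to (0, 0), the top-left corner of the visible image;
// the view's center (100, 100) maps to the image's center (500, 250).
CGPoint p = CGPointApplyAffineTransform(CGPointMake(0, 50), t);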
Armed with this, we can write the code that masks the image. In my test app, I have a subclass of UIImageView named PathEditingView that handles the bezier path editing. So my view controller creates the masked image like this:
- (UIImage *)maskedImage {
UIImage *image = self.pathEditingView.image;
UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
format.scale = image.scale;
format.prefersExtendedRange = image.imageRendererFormat.prefersExtendedRange;
format.opaque = NO;
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:image.size format:format];
return [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
UIBezierPath *path = [self.pathEditingView.path copy];
[path applyTransform:self.pathEditingView.imageGeometryTransform];
CGContextRef gc = UIGraphicsGetCurrentContext();
CGContextAddPath(gc, path.CGPath);
CGContextClip(gc);
[image drawAtPoint:CGPointZero];
}];
}
Of course it's hard to tell that the output image is full-resolution. Let's fix that by cropping the output image to the bounding box of the bezier path:
- (UIImage *)maskedAndCroppedImage {
UIImage *image = self.pathEditingView.image;
UIBezierPath *path = [self.pathEditingView.path copy];
[path applyTransform:self.pathEditingView.imageGeometryTransform];
CGRect pathBounds = CGPathGetPathBoundingBox(path.CGPath);
UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
format.scale = image.scale;
format.prefersExtendedRange = image.imageRendererFormat.prefersExtendedRange;
format.opaque = NO;
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:pathBounds.size format:format];
return [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
CGContextRef gc = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(gc, -pathBounds.origin.x, -pathBounds.origin.y);
CGContextAddPath(gc, path.CGPath);
CGContextClip(gc);
[image drawAtPoint:CGPointZero];
}];
}
Masking and cropping together produce an output image with much more detail than was visible in the input view, because it was generated at the full resolution of the input image.
As a secondary answer, I made it work with the code below. For a better understanding, you can also get the working project on GitHub to check whether it covers all cases.
My GitHub project:
https://github.com/Reza-Abdolahi/HighResMasking
The part of the code that solved the problem:
-(UIImage*)highResolutionMasking{
NSLog(#"///High quality (Image resolution) masking///////////////////////////////////////////////////");
//1.Rendering the path into an image with the size of _targetBound (which is the size of a device screen sized view in which the path is drawn.)
CGFloat aspectRatioOfImageBasedOnHeight = _highResolutionImage.size.height/ _highResolutionImage.size.width;
CGFloat aspectRatioOfTargetBoundBasedOnHeight = _targetBound.size.height/ _targetBound.size.width;
CGFloat pathScalingFactor = 0;
if ((_highResolutionImage.size.height >= _targetBound.size.height)||
(_highResolutionImage.size.width >= _targetBound.size.width)) {
//Then image is bigger than targetBound
if ((_highResolutionImage.size.height<=_highResolutionImage.size.width)) {
//The image is Horizontal
CGFloat newWidthForTargetBound =_highResolutionImage.size.width;
CGFloat ratioOfHighresImgWidthToTargetBoundWidth = (_highResolutionImage.size.width/_targetBound.size.width);
CGFloat newHeightForTargetBound = _targetBound.size.height* ratioOfHighresImgWidthToTargetBoundWidth;
_bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
pathScalingFactor = _highResolutionImage.size.width/_targetBound.size.width;
}else if((_highResolutionImage.size.height > _highResolutionImage.size.width)&&
(aspectRatioOfImageBasedOnHeight <= aspectRatioOfTargetBoundBasedOnHeight)){
//The image is Vertical but has smaller aspect ratio (based on height) than targetBound
CGFloat newWidthForTargetBound =_highResolutionImage.size.width;
CGFloat ratioOfHighresImgWidthToTargetBoundWidth = (_highResolutionImage.size.width/_targetBound.size.width);
CGFloat newHeightForTargetBound = _targetBound.size.height* ratioOfHighresImgWidthToTargetBoundWidth;
_bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
pathScalingFactor = _highResolutionImage.size.width/_targetBound.size.width;
}else if((_highResolutionImage.size.height > _highResolutionImage.size.width)&&
(aspectRatioOfImageBasedOnHeight > aspectRatioOfTargetBoundBasedOnHeight)){
CGFloat newHeightForTargetBound =_highResolutionImage.size.height;
CGFloat ratioOfHighresImgHeightToTargetBoundHeight = (_highResolutionImage.size.height/_targetBound.size.height);
CGFloat newWidthForTargetBound = _targetBound.size.width* ratioOfHighresImgHeightToTargetBoundHeight;
_bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
pathScalingFactor = _highResolutionImage.size.height/_targetBound.size.height;
}else{
//Do nothing
}
}else{
//Then image is smaller than targetBound
_bigTargetBound = _imageRect;
pathScalingFactor =1;
}
CGSize correctedSize = CGSizeMake(_highResolutionImage.size.width *_scale,
_highResolutionImage.size.height *_scale);
_bigImageRect= AVMakeRectWithAspectRatioInsideRect(correctedSize,_bigTargetBound);
//Scaling path
CGAffineTransform scaleTransform = CGAffineTransformIdentity;
scaleTransform = CGAffineTransformScale(scaleTransform, pathScalingFactor, pathScalingFactor);
CGPathRef scaledCGPath = CGPathCreateCopyByTransformingPath(_examplePath.CGPath,&scaleTransform);
//Render scaled path into image
UIGraphicsBeginImageContextWithOptions(_bigTargetBound.size, NO, 1.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddPath (context, scaledCGPath);
CGContextSetFillColorWithColor (context, [UIColor redColor].CGColor);
CGContextSetStrokeColorWithColor (context, [UIColor redColor].CGColor);
CGContextDrawPath (context, kCGPathFillStroke);
UIImage * pathImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(#"High res pathImage.size: %#",NSStringFromCGSize(pathImage.size));
//Cropping it from targetBound into imageRect
_maskImage = [self cropThisImage:pathImage toRect:_bigImageRect];
NSLog(#"High res _croppedRenderedPathImage.size: %#",NSStringFromCGSize(_maskImage.size));
//Masking the high res image with my mask image which both have the same size now.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef maskImageRef = [_maskImage CGImage];
CGContextRef myContext = CGBitmapContextCreate (NULL, _highResolutionImage.size.width, _highResolutionImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if (myContext==NULL)
return NULL;
CGFloat ratio = 0;
ratio = _maskImage.size.width/ _highResolutionImage.size.width;
if(ratio * _highResolutionImage.size.height < _maskImage.size.height) {
ratio = _maskImage.size.height/ _highResolutionImage.size.height;
}
CGRect rectForMask = {{0, 0}, {_maskImage.size.width, _maskImage.size.height}};
CGRect rectForImageDrawing = {{-((_highResolutionImage.size.width*ratio)-_maskImage.size.width)/2 , -((_highResolutionImage.size.height*ratio)-_maskImage.size.height)/2},
{_highResolutionImage.size.width*ratio, _highResolutionImage.size.height*ratio}};
CGContextClipToMask(myContext, rectForMask, maskImageRef);
CGContextDrawImage(myContext, rectForImageDrawing, _highResolutionImage.CGImage);
CGImageRef newImage = CGBitmapContextCreateImage(myContext);
CGContextRelease(myContext);
UIImage *theImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
return theImage;
}
-(UIImage *)cropThisImage:(UIImage*)image toRect:(CGRect)rect{
CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect);
UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
CGImageRelease(subImage);
return croppedImage;
}
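One caveat about the cropThisImage:toRect: helper: CGImageCreateWithImageInRect expects the rect in the CGImage's pixel coordinates, while UIImage.size is in points, so the helper is only correct for scale-1 images. A minimal scale-aware variant might look like this sketch (my own adjustment, not from the linked project):
-(UIImage *)cropThisImage:(UIImage*)image toRect:(CGRect)rect{
    // Convert the point-based rect into pixel coordinates before cropping.
    CGRect pixelRect = CGRectMake(rect.origin.x * image.scale,
                                  rect.origin.y * image.scale,
                                  rect.size.width * image.scale,
                                  rect.size.height * image.scale);
    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
    if (subImage == NULL) return nil;
    // Preserve the original scale and orientation in the cropped image.
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage
                                                scale:image.scale
                                          orientation:image.imageOrientation];
    CGImageRelease(subImage);
    return croppedImage;
}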

iOS/Prevent image aspect ratio from changing when resizing

I am using the code below, first to create an image thumb (using a category) and then to tailor the thumb to the VC in question, for example to make it round.
Somehow, the aspect ratio of the images is not being preserved: some get squashed vertically, so a face looks like a sideways oval, while others get squashed horizontally, so a round ball looks like an upright football. In the code for individual VCs I am using UIViewContentModeScaleAspectFill and setting clipsToBounds to YES, but to no avail. I also tried checking these in the Storyboard, but still no luck.
Can anyone see what might be wrong with the code below?
//code in viewDidLoad
UIImage *thumbnail = [selectedImage createThumbnailToFillSize:CGSizeMake(side, side)];
//see createThumbNail method below
self.contactImage.image = thumbnail;
//image has been selected and trimmed to thumb. Now format it
CGSize itemSize = CGSizeMake(64, 64);
UIGraphicsBeginImageContextWithOptions(itemSize, NO, UIScreen.mainScreen.scale);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
self.contactImage.contentMode = UIViewContentModeScaleAspectFill;
self.contactImage.clipsToBounds = YES;
[self.contactImage.image drawInRect:imageRect];
self.contactImage.image = UIGraphicsGetImageFromCurrentImageContext();
self.contactImage.layer.cornerRadius=60.0;
UIGraphicsEndImageContext();
//Generic category to create thumb
-(UIImage *) createThumbnailToFillSize:(CGSize)size
{
CGSize mainImageSize = size;
UIImage *thumb;
CGFloat widthScaler = size.width / mainImageSize.width;
CGFloat heightScaler = size.height / mainImageSize.height;
CGSize repositionedMainImageSize = mainImageSize;
CGFloat scaleFactor;
// Determine if we should shrink based on width or hight
if(widthScaler > heightScaler)
{
// calculate based on width scaler
scaleFactor = widthScaler;
repositionedMainImageSize.height = ceil(size.height / scaleFactor);
}
else {
// calculate based on height scaler
scaleFactor = heightScaler;
repositionedMainImageSize.width = ceil(size.width / heightScaler);
}
UIGraphicsBeginImageContext(size);
CGFloat xInc = ((repositionedMainImageSize.width-mainImageSize.width) / 2.f) *scaleFactor;
CGFloat yInc = ((repositionedMainImageSize.height-mainImageSize.height) / 2.f) *scaleFactor;
[self drawInRect:CGRectMake(xInc, yInc, mainImageSize.width * scaleFactor, mainImageSize.height * scaleFactor)];
thumb = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return thumb;
}
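One thing worth pointing out in the category method above: mainImageSize is initialized from the target size parameter rather than from self.size, so widthScaler and heightScaler always come out as 1, and drawInRect: simply squashes the source image into the target rect, which matches the symptom described. A minimal sketch of a corrected fill-style thumbnail, assuming that is indeed the bug:
-(UIImage *) createThumbnailToFillSize:(CGSize)size
{
    // Compare the *source* image size to the target size.
    CGSize sourceSize = self.size; // was: = size (the suspected bug)
    CGFloat widthScaler = size.width / sourceSize.width;
    CGFloat heightScaler = size.height / sourceSize.height;
    // Fill the target: scale by the larger factor and center the overflow.
    CGFloat scaleFactor = MAX(widthScaler, heightScaler);
    CGSize scaledSize = CGSizeMake(sourceSize.width * scaleFactor,
                                   sourceSize.height * scaleFactor);
    CGFloat xInc = (size.width - scaledSize.width) / 2.0;   // <= 0
    CGFloat yInc = (size.height - scaledSize.height) / 2.0; // <= 0
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);  // 0.0 = device scale
    [self drawInRect:CGRectMake(xInc, yInc, scaledSize.width, scaledSize.height)];
    UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumb;
}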

Move UIImage inside UIImageView

I have a UIImageView (red squares) that will display a UIImage that must be scaled (I can receive images larger or smaller than the UIImageView). After scaling it, the part of the UIImage shown is its center.
What I need is to show the part of the image in the blue squares; how can I achieve that?
I'm only able to get the image size (height and width), but it displays the original size, when it's supposed to be the scaled one.
self.viewIm = [[UIImageView alloc] initWithFrame:CGRectMake(100, 100, 120, 80)];
self.viewIm.backgroundColor = [UIColor greenColor];
self.viewIm.layer.borderColor = [UIColor redColor].CGColor;
self.viewIm.layer.borderWidth = 5.0;
UIImage *im = [UIImage imageNamed:@"benjen"];
self.viewIm.image = im;
self.viewIm.contentMode = UIViewContentModeScaleAspectFill;
// self.viewim.clipsToBounds = YES;
[self.view addSubview:self.viewIm];
To do what you're trying to do, I'd recommend looking into CALayer's contentsRect property.
Since seeing your answer, I've been trying to work out the proper solution for a while, but the mathematics escapes me, because contentsRect's x and y parameters seem sort of mysterious... But here's some code that may point you in the right direction...
float imageAspect = self.imageView.image.size.width/self.imageView.image.size.height;
float imageViewAspect = self.imageView.frame.size.width/self.imageView.frame.size.height;
if (imageAspect > imageViewAspect) {
float scaledImageWidth = self.imageView.frame.size.height * imageAspect;
float offsetWidth = -((scaledImageWidth-self.imageView.frame.size.width)/2);
self.imageView.layer.contentsRect = CGRectMake(offsetWidth/self.imageView.frame.size.width, 0.0, 1.0, 1.0);
} else if (imageAspect < imageViewAspect) {
float scaledImageHeight = self.imageView.frame.size.width / imageAspect;
float offsetHeight = ((scaledImageHeight-self.imageView.frame.size.height)/2);
self.imageView.layer.contentsRect = CGRectMake(0.0, offsetHeight/self.imageView.frame.size.height, 1.0, 1.0);
}
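One detail that demystifies contentsRect: it is a unit rectangle, where each component is a fraction of the image rather than a point value, and it defaults to (0, 0, 1, 1), meaning the whole image. For example, this (hypothetical) setting displays only the top-left quarter of the image:
// All four values are fractions of the image, not points.
self.imageView.layer.contentsRect = CGRectMake(0.0, 0.0, 0.5, 0.5);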
Try something like this:
CGRect cropRect = CGRectMake(0,0,200,200);
CGImageRef imageRef = CGImageCreateWithImageInRect([ImageToCrop CGImage],cropRect);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
I found a very good approximation in this answer. There, the category resizes the image and then uses the center point to crop. I adapted it to crop using (0,0) as the origin point, and since I don't really need a category, I use it as a single method.
- (UIImage *)imageByScalingAndCropping:(UIImage *)image forSize:(CGSize)targetSize {
UIImage *sourceImage = image;
UIImage *newImage = nil;
CGFloat scaleFactor = 0.0;
CGFloat scaledWidth = targetSize.width;
CGFloat scaledHeight = targetSize.height;
if (CGSizeEqualToSize(image.size, targetSize) == NO) {
if ((targetSize.width / image.size.width) > (targetSize.height / image.size.height)) {
scaleFactor = targetSize.width / image.size.width; // scale to fit height
} else {
scaleFactor = targetSize.height / image.size.height; // scale to fit width
}
scaledWidth = image.size.width * scaleFactor;
scaledHeight = image.size.height * scaleFactor;
}
UIGraphicsBeginImageContext(targetSize); // this will crop
CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = CGPointZero;
thumbnailRect.size.width = scaledWidth;
thumbnailRect.size.height = scaledHeight;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
if(newImage == nil) {
NSLog(#"could not scale image");
}
//pop the context to get back to the default
UIGraphicsEndImageContext();
return newImage;
}
And my call is something like this:
self.viewIm = [[UIImageView alloc] initWithFrame:CGRectMake(100, 100, 120, 80)];
self.viewIm.image = [self imageByScalingAndCropping:[UIImage imageNamed:@"benjen"] forSize:CGSizeMake(120, 80)];
[self.view addSubview:self.viewIm];
I've spent some time on this and finally created a Swift 3.2 solution (based on one of my answers on another thread, as well as one of the answers above). This code only allows for Y translation of the image, but with some tweaks anyone should be able to add horizontal translation as well ;)
let yOffset: CGFloat = 20
myImageView.contentMode = .scaleAspectFill
//scale image to fit the imageView's width (maintaining aspect ratio), but allow control over the image's Y position
UIGraphicsBeginImageContextWithOptions(myImageView.frame.size, myImageView.isOpaque, 0.0)
let ratio = myImage.size.width / myImage.size.height
let newHeight = myImageView.frame.width / ratio
let rect = CGRect(x: 0, y: -yOffset, width: myImageView.frame.width, height: newHeight)
myImage.draw(in: rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext() ?? myImage
UIGraphicsEndImageContext()
//set the new image
myImageView.image = newImage
Now you can adjust how far down or up you need the image to be by changing the yOffset.

Cropping a UIImage in iOS

I am very new to iOS, and the first task given to me is image cropping: if I am using an image as a banner image and the given frame is larger or smaller than the image, my code should automatically resize the image while respecting its aspect ratio, and then set the image in that frame.
I have done a lot of research, and after that I wrote this code:
-(UIImage *)MyScaleNEwMethodwithImage:(UIImage *)image andframe:(CGRect)frame{
float bmHeight= image.size.height;
float bmWidth= image.size.width;
UIImage *RecievedImage=image;
if (bmHeight>bmWidth) {
float ratio = frame.size.height/frame.size.width;
float newbmHeight = ratio*bmWidth;
float cropedHeight= (bmHeight-newbmHeight)/2;
if (cropedHeight<0) {
float ratio1= frame.size.width/frame.size.height;
float newbmHeight1= (ratio1*bmHeight);
float cropedimageHeight1 = (bmWidth- newbmHeight1)/2;
CGRect cliprect = CGRectMake(cropedimageHeight1, 0,bmWidth-cropedimageHeight1,bmHeight);
CGImageRef imref = CGImageCreateWithImageInRect([image CGImage],cliprect);
UIImage *newSubImage = [UIImage imageWithCGImage:imref];
return newSubImage;
}
else
{
CGRect cliprect = CGRectMake(0,cropedHeight,bmWidth,bmHeight-cropedHeight);
CGImageRef imref = CGImageCreateWithImageInRect([image CGImage],cliprect);
UIImage *newSubImage = [UIImage imageWithCGImage:imref];
return newSubImage;
}
}
else
{
float ratio = frame.size.height/frame.size.width;
float newbmHeight = ratio*bmHeight;
float cropedHeight= (bmHeight-newbmHeight)/4;
if (cropedHeight<0) {
float ratio1= frame.size.width/frame.size.height;
float newbmHeight1= (ratio1*bmWidth);
float cropedimageHeight1 = (bmWidth- newbmHeight1)/2;
UIImageView *DummyImage=[[UIImageView alloc]initWithFrame:CGRectMake(0,cropedimageHeight1,bmWidth,(bmHeight-cropedimageHeight1))];
[DummyImage setImage:RecievedImage];
CGImageRef imageRef = CGImageCreateWithImageInRect([DummyImage.image CGImage], CGRectMake(0,cropedimageHeight1/2,bmWidth/2,(bmHeight-cropedimageHeight1)/2));
// or use the UIImage wherever you like
[DummyImage setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
UIImage *ScaledImage=[UIImage imageWithCGImage:imageRef];
return ScaledImage;
} else {
UIImageView *DummyImage=[[UIImageView alloc]initWithFrame:CGRectMake(cropedHeight,0,bmWidth-cropedHeight,bmHeight)];
[DummyImage setImage:RecievedImage];
CGImageRef imageRef = CGImageCreateWithImageInRect([DummyImage.image CGImage],CGRectMake(cropedHeight,2*cropedHeight,(bmWidth-cropedHeight),bmHeight/2));
// or use the UIImage wherever you like
[DummyImage setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
UIImage *ScaledImage=[UIImage imageWithCGImage:imageRef];
return ScaledImage;
}
}
}
In my frame I am getting the required image, but when the screen changes I can see the full image. I want to cut away the unwanted part of the image.
This piece of code may help you out:
-(CGRect) cropImage:(CGRect)frame
{
NSAssert(self.contentMode == UIViewContentModeScaleAspectFit, @"content mode should be aspect fit");
CGFloat wScale = self.bounds.size.width / self.image.size.width;
CGFloat hScale = self.bounds.size.height / self.image.size.height;
float x, y, w, h, offset;
if (wScale<hScale) {
offset = (self.bounds.size.height - (self.image.size.height*wScale))/2;
x = frame.origin.x / wScale;
y = (frame.origin.y-offset) / wScale;
w = frame.size.width / wScale;
h = frame.size.height / wScale;
} else {
offset = (self.bounds.size.width - (self.image.size.width*hScale))/2;
x = (frame.origin.x-offset) / hScale;
y = frame.origin.y / hScale;
w = frame.size.width / hScale;
h = frame.size.height / hScale;
}
return CGRectMake(x, y, w, h);
}
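Assuming that method lives in a UIImageView category, a call site might look like the following sketch (selectionRect is hypothetical, and the resulting rect is passed straight to CGImageCreateWithImageInRect, which is only correct for scale-1 images):
// Map a selection from view coordinates into image coordinates, then crop.
CGRect selectionRect = CGRectMake(20, 40, 100, 100); // in the image view's coordinates
CGRect imageRect = [self.imageView cropImage:selectionRect];
CGImageRef subImage = CGImageCreateWithImageInRect(self.imageView.image.CGImage, imageRect);
UIImage *cropped = [UIImage imageWithCGImage:subImage];
CGImageRelease(subImage);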
The code I referred to for cropping the image according to its aspect ratio is:
typedef enum {
MGImageResizeCrop,
MGImageResizeCropStart,
MGImageResizeCropEnd,
MGImageResizeScale
} MGImageResizingMethod;
- (UIImage *)imageToFitSize:(CGSize)fitSize method:(MGImageResizingMethod)resizeMethod
{
float imageScaleFactor = 1.0;
#if __IPHONE_OS_VERSION_MAX_ALLOWED >= 40000
if ([self respondsToSelector:@selector(scale)]) {
imageScaleFactor = [self scale];
}
#endif
float sourceWidth = [self size].width * imageScaleFactor;
float sourceHeight = [self size].height * imageScaleFactor;
float targetWidth = fitSize.width;
float targetHeight = fitSize.height;
BOOL cropping = !(resizeMethod == MGImageResizeScale);
// Calculate aspect ratios
float sourceRatio = sourceWidth / sourceHeight;
float targetRatio = targetWidth / targetHeight;
// Determine what side of the source image to use for proportional scaling
BOOL scaleWidth = (sourceRatio <= targetRatio);
// Deal with the case of just scaling proportionally to fit, without cropping
scaleWidth = (cropping) ? scaleWidth : !scaleWidth;
// Proportionally scale source image
float scalingFactor, scaledWidth, scaledHeight;
if (scaleWidth) {
scalingFactor = 1.0 / sourceRatio;
scaledWidth = targetWidth;
scaledHeight = round(targetWidth * scalingFactor);
} else {
scalingFactor = sourceRatio;
scaledWidth = round(targetHeight * scalingFactor);
scaledHeight = targetHeight;
}
float scaleFactor = scaledHeight / sourceHeight;
// Calculate compositing rectangles
CGRect sourceRect, destRect;
if (cropping) {
destRect = CGRectMake(0, 0, targetWidth, targetHeight);
float destX, destY;
if (resizeMethod == MGImageResizeCrop) {
// Crop center
destX = round((scaledWidth - targetWidth) / 2.0);
destY = round((scaledHeight - targetHeight) / 2.0);
} else if (resizeMethod == MGImageResizeCropStart) {
// Crop top or left (prefer top)
if (scaleWidth) {
// Crop top
destX = 0.0;
destY = 0.0;
} else {
// Crop left
destX = 0.0;
destY = round((scaledHeight - targetHeight) / 2.0);
}
} else if (resizeMethod == MGImageResizeCropEnd) {
// Crop bottom or right
if (scaleWidth) {
// Crop bottom
destX = round((scaledWidth - targetWidth) / 2.0);
destY = round(scaledHeight - targetHeight);
} else {
// Crop right
destX = round(scaledWidth - targetWidth);
destY = round((scaledHeight - targetHeight) / 2.0);
}
}
sourceRect = CGRectMake(destX / scaleFactor, destY / scaleFactor,
targetWidth / scaleFactor, targetHeight / scaleFactor);
} else {
sourceRect = CGRectMake(0, 0, sourceWidth, sourceHeight);
destRect = CGRectMake(0, 0, scaledWidth, scaledHeight);
}
// Create appropriately modified image.
UIImage *image = nil;
#if __IPHONE_OS_VERSION_MAX_ALLOWED >= 40000
if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 4.0) {
UIGraphicsBeginImageContextWithOptions(destRect.size, NO, 0.0); // 0.0 for scale means "correct scale for device's main screen".
CGImageRef sourceImg = CGImageCreateWithImageInRect([self CGImage], sourceRect); // cropping happens here.
image = [UIImage imageWithCGImage:sourceImg scale:0.0 orientation:self.imageOrientation]; // create cropped UIImage.
[image drawInRect:destRect]; // the actual scaling happens here, and orientation is taken care of automatically.
CGImageRelease(sourceImg);
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
#endif
if (!image) {
// Try older method.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, fitSize.width, fitSize.height, 8, (fitSize.width * 4),
colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef sourceImg = CGImageCreateWithImageInRect([self CGImage], sourceRect);
CGContextDrawImage(context, destRect, sourceImg);
CGImageRelease(sourceImg);
CGImageRef finalImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
image = [UIImage imageWithCGImage:finalImage];
CGImageRelease(finalImage);
}
return image;
}
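Assuming the method above is declared in a UIImage category, a banner-sized call might look like this (sourceImage and the target size are hypothetical):
// Fill a 320×100 banner slot, cropping the overflow around the center.
UIImage *banner = [sourceImage imageToFitSize:CGSizeMake(320, 100)
                                       method:MGImageResizeCrop];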
Check if this helps you.

Cropping an Image in portrait on iOS 7 results in incorrect orientation

I've got the following function; prior to iOS 7 and Xcode 5 it worked as expected. The function takes an image and a cropSize: the image is the one to be cropped to a specified size, which is defined by CGSize cropSize. The purpose of the function is to crop the image to that size and then return the cropped image.
- (UIImage *) cropImage:(UIImage *)originalImage cropSize:(CGSize)cropSize
{
//calculate scale factor to go between cropframe and original image
float SF = originalImage.size.width / cropSize.width;
//find the centre x,y coordinates of image
float centreX = originalImage.size.width / 2;
float centreY = originalImage.size.height / 2;
//calculate crop parameters
float cropX = centreX - ((cropSize.width / 2) * SF);
float cropY = centreY - ((cropSize.height / 2) * SF);
CGRect cropRect = CGRectMake(cropX, cropY, (cropSize.width *SF), (cropSize.height * SF));
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], cropRect);
//keep orientation if landscape
UIImage *newImage;
if (originalImage.size.width > originalImage.size.height || originalImage.size.width == originalImage.size.height) {
newImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:originalImage.imageOrientation];
}
else
{
newImage = [UIImage imageWithCGImage:imageRef];
}
CGImageRelease(imageRef);
//Now want to scale down cropped image!
//want to multiply frames by 2 to get retina resolution
CGRect scaledImgRect = CGRectMake(0, 0, (cropSize.width * 2), (cropSize.height * 2));
UIGraphicsBeginImageContextWithOptions(scaledImgRect.size, NO, [UIScreen mainScreen].scale);
[newImage drawInRect:scaledImgRect];
UIImage *scaledNewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledNewImage;
}
The problem is that it all works fine with a UIImage passed in that is in landscape orientation; the image is cropped as expected. However, if the image passed in was taken in portrait, then the resulting image (the cropped result in scaledNewImage) is rotated 90 degrees on its side, which I don't want.
It is as if the portrait image is being treated as if it were landscape, and so the function crops what should be a portrait-oriented image as landscape instead of portrait.
This isn't so apparent if the crop area is square; however, if the area to be cropped is a landscape rectangle, then it crops along the length of the portrait rather than the width. Hope I'm making sense!
This issue didn't occur prior to iOS 7 and Xcode 5, so I'm not sure exactly what has changed. Any help appreciated, thanks.
Solved this issue with the help of an answer here: https://stackoverflow.com/a/14712184/521653
- (UIImage *) cropImage:(UIImage *)originalImage cropSize:(CGSize)cropSize
{
NSLog(#"original image orientation:%d",originalImage.imageOrientation);
//calculate scale factor to go between cropframe and original image
float SF = originalImage.size.width / cropSize.width;
//find the centre x,y coordinates of image
float centreX = originalImage.size.width / 2;
float centreY = originalImage.size.height / 2;
//calculate crop parameters
float cropX = centreX - ((cropSize.width / 2) * SF);
float cropY = centreY - ((cropSize.height / 2) * SF);
CGRect cropRect = CGRectMake(cropX, cropY, (cropSize.width *SF), (cropSize.height * SF));
CGAffineTransform rectTransform;
switch (originalImage.imageOrientation)
{
case UIImageOrientationLeft:
rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(M_PI_2), 0, -originalImage.size.height);
break;
case UIImageOrientationRight:
rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(-M_PI_2), -originalImage.size.width, 0);
break;
case UIImageOrientationDown:
rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(-M_PI), -originalImage.size.width, -originalImage.size.height);
break;
default:
rectTransform = CGAffineTransformIdentity;
};
rectTransform = CGAffineTransformScale(rectTransform, originalImage.scale, originalImage.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], CGRectApplyAffineTransform(cropRect, rectTransform));
UIImage *result = [UIImage imageWithCGImage:imageRef scale:originalImage.scale orientation:originalImage.imageOrientation];
CGImageRelease(imageRef);
//return result;
//Now want to scale down cropped image!
//want to multiply frames by 2 to get retina resolution
CGRect scaledImgRect = CGRectMake(0, 0, (cropSize.width * 2), (cropSize.height * 2));
UIGraphicsBeginImageContextWithOptions(scaledImgRect.size, NO, [UIScreen mainScreen].scale);
[result drawInRect:scaledImgRect];
UIImage *scaledNewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledNewImage;
}
That's the updated method. I incorporated the code from the linked answer into my method, and it solves the issue. Strange that I didn't have this problem before iOS 7!
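For reference, a call site might look like this (capturedPhoto and the target size are hypothetical):
// Crop a camera photo to a centered 300×200 region; the orientation
// transform above keeps portrait shots upright.
UIImage *cropped = [self cropImage:capturedPhoto cropSize:CGSizeMake(300, 200)];
self.imageView.image = cropped;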
