I'm trying to crop an image into a square and then scale it to 200x200. Here's the code I use, but it isn't really working for me. The image I get is sometimes in a different orientation, isn't cropped from the center, or is wider than it should be.
float scale = _avatar.image.scale;
UIImageOrientation orientation = _avatar.image.imageOrientation;
if (_avatar.image.size.width < _avatar.image.size.height) { // avatar is a UIImageView
    float startingY = (_avatar.image.size.height - _avatar.image.size.width) / 2; // image is taller, determine the origin of the width-sized square, which will be the new image
    CGImageRef imageRef = CGImageCreateWithImageInRect([_avatar.image CGImage], CGRectMake(0, startingY, _avatar.image.size.width, _avatar.image.size.width));
    _avatar.image = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
    CGImageRelease(imageRef);
} else {
    float startingX = (_avatar.image.size.width - _avatar.image.size.height) / 2; // image is wider, determine the origin of the height-sized square, which will be the new image
    CGImageRef imageRef = CGImageCreateWithImageInRect([_avatar.image CGImage], CGRectMake(startingX, 0, _avatar.image.size.height, _avatar.image.size.height));
    _avatar.image = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
    CGImageRelease(imageRef);
}
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), YES, 0.0);
[_avatar.image drawInRect:CGRectMake(0, 0, 200, 200)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
How should this be done to achieve the correct result?
I would suggest using this UIImage category posted on GitHub. The classes are quite simple, and you can also read them to learn what happens under the hood.
UIImage already does all the coordinate and orientation work for you, so you should not use the width and height of a UIImage object to calculate the square's coordinates for a CGImage crop. You can take advantage of UIKit to crop the image instead:
CGFloat startingX = 0;
CGFloat startingY = 0;
CGFloat squareWidth;
if (_avatar.image.size.width < _avatar.image.size.height) { // avatar is a UIImageView
    startingY = (_avatar.image.size.height - _avatar.image.size.width) / 2; // image is taller, determine the origin of the width-sized square, which will be the new image
    squareWidth = _avatar.image.size.width;
} else {
    startingX = (_avatar.image.size.width - _avatar.image.size.height) / 2; // image is wider, determine the origin of the height-sized square, which will be the new image
    squareWidth = _avatar.image.size.height;
}
UIGraphicsBeginImageContextWithOptions(CGSizeMake(squareWidth, squareWidth), YES, 0.0);
[_avatar.image drawAtPoint:CGPointMake(-startingX, -startingY)]; // Make an offset to draw part of the image
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), YES, 0.0);
[croppedImage drawInRect:CGRectMake(0, 0, 200, 200)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
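If you want this in one place, here is a minimal helper sketch (the method name is mine) that combines the square crop above with the 200x200 scaling step:
// Sketch: crop a UIImage to a centered square, then scale it to 200x200 points.
- (UIImage *)squareAvatarImageFromImage:(UIImage *)image
{
    CGFloat side = MIN(image.size.width, image.size.height);
    CGFloat offsetX = (image.size.width - side) / 2.0;
    CGFloat offsetY = (image.size.height - side) / 2.0;

    // Crop by drawing the image with a negative offset into a square context.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), YES, 0.0);
    [image drawAtPoint:CGPointMake(-offsetX, -offsetY)];
    UIImage *squareImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Scale the square image down to 200x200.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), YES, 0.0);
    [squareImage drawInRect:CGRectMake(0, 0, 200, 200)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return scaledImage;
}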
Related
I have a scenario in my app where I take a screenshot of the video using
[myMovieController requestThumbnailImagesAtTimes:@[@(myMovieController.currentPlaybackTime)] timeOption:MPMovieTimeOptionExact];
which works just fine. Then I have to crop the image at the touched location on the video. I have added a gesture recognizer to myMovieController, and I get the touch location from the gesture recognizer.
Then I use the following code to crop the image:
CGRect cropRect = tapCircleView.frame;
cropRect = CGRectMake(touchPoint.x * image.scale,
touchPoint.y * image.scale,
cropRect.size.width * image.scale,
cropRect.size.height * image.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
UIImage* cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
[self showImage:cropped];
where the cropRect width and height are 150.
But the crop result does not have the correct x and y, and the resulting image is also very pixelated.
I have tried every solution I could find, but it's not working.
What am I missing?
Thanks.
The size of the captured screenshot image is not the same as the size of the device screen the application is running on.
So instead of using that image directly, resize it to the screen size with the following code:
CGRect rect = CGRectMake(0,0,[UIScreen mainScreen].bounds.size.width, [UIScreen mainScreen].bounds.size.height);
UIGraphicsBeginImageContext( rect.size );
[image drawInRect:rect];
UIImage *picture1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(picture1);
UIImage *img=[UIImage imageWithData:imageData];
Now just use this image wherever you want!
Cheers.
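For completeness, a sketch (reusing touchPoint and the 150-point crop size from your question) of cropping the resized img around the touch point; adjust the origin if you want the touch to be the top-left corner instead of the center:
// Sketch: crop a 150x150 region centered on the touch point from the
// screen-sized image created above (img has scale 1 here, so points == pixels).
CGFloat cropSide = 150.0;
CGRect cropRect = CGRectMake((touchPoint.x - cropSide / 2.0) * img.scale,
                             (touchPoint.y - cropSide / 2.0) * img.scale,
                             cropSide * img.scale,
                             cropSide * img.scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(img.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:img.scale
                                 orientation:img.imageOrientation];
CGImageRelease(croppedRef);
[self showImage:cropped];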
I am having a problem with UIImage resizing. Image masking is working fine, but after applying the mask the UIImage is stretched; the problem is with scaling, as the image is not scaled properly.
CCClippingNode *clippingNode = [[CCClippingNode alloc] initWithStencil:pMaskingFrame ];
pTobeMasked.scaleX = (float)pMaskingFrame.contentSize.width / (float)pTobeMasked.contentSize.width;
pTobeMasked.scaleY = (float)pMaskingFrame.contentSize.height / (float)pTobeMasked.contentSize.height;
clippingNode.alphaThreshold = 0;
[pContainerNode addChild:clippingNode];
pTobeMasked.position = ccp(pMaskingFrame.position.x, pMaskingFrame.position.y);
[clippingNode addChild:pTobeMasked];
In one of my projects I have used the function below to resize an image:
/*
method parameters definition
image : original image to be resized
size : new size
*/
+ (UIImage *)resizeImage:(UIImage *)image size:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // here is the scaled image which has been changed to the size specified
    UIGraphicsEndImageContext();
    return newImage;
}
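A quick usage sketch (the ImageUtility class name and the input image are hypothetical, since the hosting class isn't shown above):
// Usage sketch only; ImageUtility stands in for whatever class hosts resizeImage:size:.
UIImage *sourceImage = [UIImage imageNamed:@"photo.png"];
UIImage *resized = [ImageUtility resizeImage:sourceImage size:CGSizeMake(150.0, 150.0)];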
This will work like a charm. It's similar to the answer already posted, but it has a few more options:
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    //UIGraphicsBeginImageContext(newSize);
    // In the next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
    // Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
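If the stretching comes from scaling to a size whose aspect ratio differs from the source, you can compute an aspect-fit size first and pass that in. A minimal sketch, assuming a sourceImage and a 150x150 bounding box (both illustrative), with ImageUtility again standing in for whichever class hosts the method:
// Sketch: preserve the aspect ratio by fitting the image inside a bounding box.
CGSize maxSize = CGSizeMake(150.0, 150.0);
CGFloat ratio = MIN(maxSize.width / sourceImage.size.width,
                    maxSize.height / sourceImage.size.height);
CGSize fitSize = CGSizeMake(sourceImage.size.width * ratio,
                            sourceImage.size.height * ratio);
UIImage *scaled = [ImageUtility imageWithImage:sourceImage scaledToSize:fitSize];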
When an image is cropped from the center, the cropped image keeps the aspect ratio of the source image. But according to my requirement, the aspect ratio should change with the new crop size.
I want to get exactly the center part of the image with the new aspect ratio. For example, if the large image is 320x480, I want to crop the center part of the image at size (100, 100), and the aspect ratio should also be 100:100. No outer white or black area should appear, and the image quality should stay high.
Cropping function:
- (UIImage *)cropImage:(UIImage *)image andFrame:(CGRect)rect {
    // Note: rect is the frame of the part of the image you want to crop.
    rect = CGRectMake(rect.origin.x * image.scale,
                      rect.origin.y * image.scale,
                      rect.size.width * image.scale,
                      rect.size.height * image.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
Please help me.
This might work:
- (UIImage *)imageByCroppingImage:(UIImage *)image toSize:(CGSize)size
{
    // not equivalent to image.size (which depends on the imageOrientation)!
    double refWidth = CGImageGetWidth(image.CGImage);
    double refHeight = CGImageGetHeight(image.CGImage);
    double x = (refWidth - size.width) / 2.0;
    double y = (refHeight - size.height) / 2.0;
    CGRect cropRect = CGRectMake(x, y, size.width, size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}
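A usage sketch for the 320x480 example from the question (self here is illustrative and assumes the method lives on the calling class). Note that the size is interpreted in CGImage pixels rather than points:
// Usage sketch: grab the 100x100 center of the large image.
UIImage *largeImage = [UIImage imageNamed:@"photo"]; // e.g. a 320x480 image
UIImage *centerCrop = [self imageByCroppingImage:largeImage toSize:CGSizeMake(100.0, 100.0)];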
var imageView = UIImageView()
// height and width values correspond to the rectangle's height and width
imageView = UIImageView(frame: CGRectMake(0, 0, width, height))
imageView.image = UIImage(named: "Your Image Name")
// setting the content mode to .ScaleAspectFill makes the image fill the imageView from the center; the image might extend beyond the image view's frame
imageView.contentMode = .ScaleAspectFill
// setting clipsToBounds to true clips the image to the image view's frame
imageView.clipsToBounds = true
view.addSubview(imageView)
// Working code: crops the image to the region enclosed by self.bezierPath
- (UIImage *)croppedImage
{
    [self.bezierPath closePath];
    _b_image = self.bg_imageview.image;
    CGSize imageSize = _b_image.size;
    CGRect imageRect = CGRectMake(0, 0, imageSize.width, imageSize.height);
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, [[UIScreen mainScreen] scale]);
    [self.bezierPath addClip];
    [_b_image drawInRect:imageRect];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Calculate the crop rect from the image:
float imgHeight = 100.0f; // any value according to your requirement
float imgWidth = 100.0f;  // any value according to your requirement
CGRect cropRect = CGRectMake((largeImage.size.width / 2) - (imgWidth / 2),
                             (largeImage.size.height / 2) - (imgHeight / 2),
                             imgWidth,
                             imgHeight);
Now crop it
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
// or use the UIImage wherever you like
UIImage *croppedImg = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
So I have a UIImage that I want to crop. I looked around and found the imageByCroppingToRect method for CIImage. So I converted the data to a CIImage instead of a UIImage, cropped it using that method, converted the resulting CIImage back to a UIImage, and displayed it in a UIImageView.
My code is
NSData *data = [[NSData alloc] initWithData:[def objectForKey:@"imageData"]];
//UIImage *normalImage = [[UIImage alloc]initWithData:data];
CIImage *originalImage = [CIImage imageWithData:data];
[originalImage imageByCroppingToRect:CGRectMake(10, 72, 300, 300)];
self.imageView.image = [UIImage imageWithCIImage:originalImage];
The problem is that the image gets rotated by 90 degrees, and I am not sure whether it is being cropped at all. The image is captured using the device's camera; I use AVFoundation to access the camera, and my session preset is AVCaptureSessionPresetPhoto. I think this is why I get the zooming.
CGRect rect = CGRectMake(10, 72, 300, 300);
CGImageRef imref = CGImageCreateWithImageInRect([yourOriginalImage CGImage], rect);
UIImage *newSubImage = [UIImage imageWithCGImage:imref];
CGImageRelease(imref); // release the CGImage created by CGImageCreateWithImageInRect
Try this; it may help.
EDIT:
First, fix your image orientation.
Reference: https://github.com/j3r3miah/mapmatic-ios/blob/master/Mapmatic/UIImage+FixOrientation.m
Then use the code above to crop the image to the specified rect.
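Putting the two steps together, a minimal sketch (assuming the linked category adds a fixOrientation method to UIImage, as in that file):
// Sketch: normalize the orientation first, then crop with the rect from the question.
UIImage *uprightImage = [yourOriginalImage fixOrientation]; // from the linked UIImage+FixOrientation category
CGRect rect = CGRectMake(10, 72, 300, 300);
CGImageRef imref = CGImageCreateWithImageInRect(uprightImage.CGImage, rect);
UIImage *newSubImage = [UIImage imageWithCGImage:imref];
CGImageRelease(imref);
self.imageView.image = newSubImage;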
Not really an answer to your question, but an answer to your problem:
https://github.com/mbcharbonneau/UIImage-Categories
especially this file: https://github.com/mbcharbonneau/UIImage-Categories/blob/master/UIImage%2BResize.m
- (UIImage *)croppedImage:(CGRect)bounds {
    CGFloat scale = MAX(self.scale, 1.0f);
    CGRect scaledBounds = CGRectMake(bounds.origin.x * scale, bounds.origin.y * scale, bounds.size.width * scale, bounds.size.height * scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], scaledBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:UIImageOrientationUp];
    CGImageRelease(imageRef);
    return croppedImage;
}
You will find everything you need to crop your image there.
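For example, once the category from UIImage+Resize.m is imported, usage might look like this (using the rect from your question):
// Usage sketch with the linked UIImage+Resize category imported.
#import "UIImage+Resize.h"

UIImage *source = self.imageView.image;
UIImage *cropped = [source croppedImage:CGRectMake(10, 72, 300, 300)];
self.imageView.image = cropped;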
I am using the following code to convert the contents of a UIView to a PNG image:
UIGraphicsBeginImageContext(myView.bounds.size);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This works fine. If the UIView is 500 pixels high and I'd like to generate two images (one of the top half and one of the bottom half), how would I go about doing this?
Any help would be greatly appreciated.
There are a few ways you could do this. One way is to draw it into one big image, then make two sub-images:
static UIImage *halfOfImage(UIImage *fullImage, CGFloat yOffset) {
    // Pass yOffset == 0 for the top half.
    // Pass yOffset == 0.5 for the bottom half.
    CGImageRef cgImage = fullImage.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    CGRect rect = CGRectMake(0, height * yOffset, width, height * 0.5f);
    CGImageRef cgSubImage = CGImageCreateWithImageInRect(cgImage, rect);
    UIImage *subImage = [UIImage imageWithCGImage:cgSubImage scale:fullImage.scale
                                      orientation:fullImage.imageOrientation];
    CGImageRelease(cgSubImage);
    return subImage;
}
...
UIGraphicsBeginImageContext(myView.bounds.size);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *topHalfImage = halfOfImage(viewImage, 0);
UIImage *bottomHalfImage = halfOfImage(viewImage, 0.5f);
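Another option along the same lines, shown only as a sketch: render each half straight into its own half-sized context by translating the context before calling renderInContext:, which avoids creating the full-size intermediate image.
// Sketch: render only the requested half of the view into a half-sized context.
static UIImage *renderHalfOfView(UIView *view, CGFloat yOffset) {
    // yOffset == 0 renders the top half, yOffset == 0.5 the bottom half.
    CGSize halfSize = CGSizeMake(view.bounds.size.width, view.bounds.size.height * 0.5f);
    UIGraphicsBeginImageContextWithOptions(halfSize, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Shift the layer upward so the desired half falls inside the context.
    CGContextTranslateCTM(context, 0, -view.bounds.size.height * yOffset);
    [view.layer renderInContext:context];
    UIImage *halfImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return halfImage;
}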