I have a UIImage that I want to crop. I found the imageByCroppingToRect: method on CIImage, so I convert the data to a CIImage instead of a UIImage, crop it with that method, convert the resulting CIImage back to a UIImage, and display it in a UIImageView.
My code is:
NSData *data = [[NSData alloc] initWithData:[def objectForKey:@"imageData"]];
//UIImage *normalImage = [[UIImage alloc]initWithData:data];
CIImage *originalImage = [CIImage imageWithData:data];
[originalImage imageByCroppingToRect:CGRectMake(10, 72, 300, 300)];
self.imageView.image = [UIImage imageWithCIImage:originalImage];
The problem is that the image gets rotated by 90 degrees, and I am not sure whether it is being cropped at all. The image is captured with the device's camera; I use AVFoundation to access the camera, and my session preset is AVCaptureSessionPresetPhoto. I think this is why the image appears zoomed.
CGRect rect = CGRectMake(10, 72, 300, 300);
CGImageRef imref = CGImageCreateWithImageInRect([yourOriginalImage CGImage], rect);
UIImage *newSubImage = [UIImage imageWithCGImage:imref];
CGImageRelease(imref); // CGImageCreateWithImageInRect returns a retained CGImage, so release it
Try this; it may help.
EDIT:
First, fix your image's orientation:
Reference: https://github.com/j3r3miah/mapmatic-ios/blob/master/Mapmatic/UIImage+FixOrientation.m
Then use the code above to crop the image to the specified rect.
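Putting the two steps together, a minimal sketch (assuming the linked category is added to your project and exposes a fixOrientation method, as in that file, and that data holds the captured image data from your question):
#import "UIImage+FixOrientation.h" // category from the linked repo (assumed added to the project)

// Normalize the orientation first, then crop in the normalized coordinate space.
UIImage *normalized = [[[UIImage alloc] initWithData:data] fixOrientation];
CGRect cropRect = CGRectMake(10, 72, 300, 300);
CGImageRef croppedRef = CGImageCreateWithImageInRect(normalized.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);
self.imageView.image = cropped;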
Not really an answer to your question, but an answer to your problem:
https://github.com/mbcharbonneau/UIImage-Categories
especially this file: https://github.com/mbcharbonneau/UIImage-Categories/blob/master/UIImage%2BResize.m
- (UIImage *)croppedImage:(CGRect)bounds {
    CGFloat scale = MAX(self.scale, 1.0f);
    CGRect scaledBounds = CGRectMake(bounds.origin.x * scale, bounds.origin.y * scale, bounds.size.width * scale, bounds.size.height * scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], scaledBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:UIImageOrientationUp];
    CGImageRelease(imageRef);
    return croppedImage;
}
You will find everything you need to crop your image there.
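For example, with that category added to your project, the crop from your question becomes just a couple of lines (a sketch, assuming data holds the captured image data as in the question; the header name matches the linked file):
#import "UIImage+Resize.h" // from the linked UIImage-Categories repo

UIImage *original = [[UIImage alloc] initWithData:data];
UIImage *cropped = [original croppedImage:CGRectMake(10, 72, 300, 300)];
self.imageView.image = cropped;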
I have a scenario in my app where I take a screenshot of the video using
[myMovieController requestThumbnailImagesAtTimes:@[@(myMovieController.currentPlaybackTime)] timeOption:MPMovieTimeOptionExact];
which works fine. Then I have to crop the image at the location touched on the video. I have added a gesture recognizer to myMovieController and get the touch location from it.
Then I use the following code to crop the screenshot:
CGRect cropRect = tapCircleView.frame;
cropRect = CGRectMake(touchPoint.x * image.scale,
touchPoint.y * image.scale,
cropRect.size.width * image.scale,
cropRect.size.height * image.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
UIImage* cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
[self showImage:cropped];
where cropRect's width and height are 150.
But the cropped result does not have the correct x and y, and the resulting image is very pixelated.
I have tried every solution I could find, but nothing works.
What is it that I am missing?
Thanks.
The size of the image in the captured screenshot is not the same as the size of the device the application is running on.
So instead of using that image directly, resize it to the screen size with the following code:
CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].bounds.size.width, [UIScreen mainScreen].bounds.size.height);
UIGraphicsBeginImageContext(rect.size);
[image drawInRect:rect];
UIImage *picture1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(picture1);
UIImage *img = [UIImage imageWithData:imageData];
Now just use this image wherever you need it.
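To tie this back to your crop code, here is a sketch that crops a 150x150 region centered on the tap (the centering offset is an assumption; drop it if you want the touch point to be the top-left corner):
// Redraw the thumbnail at screen size so view coordinates match image coordinates.
CGRect screenRect = CGRectMake(0, 0, [UIScreen mainScreen].bounds.size.width, [UIScreen mainScreen].bounds.size.height);
UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0.0); // 0.0 keeps the screen scale, which also avoids the pixelation
[image drawInRect:screenRect];
UIImage *screenSizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Crop a 150x150 square (in points) around the touch point, converting to pixels with the image scale.
CGFloat scale = screenSizedImage.scale;
CGRect cropRect = CGRectMake((touchPoint.x - 75.0f) * scale, (touchPoint.y - 75.0f) * scale, 150.0f * scale, 150.0f * scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(screenSizedImage.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef scale:scale orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);
[self showImage:cropped];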
Cheers.
I have a UIImageView and I want to blur only the bottom portion of the image.
Can someone help me with this?
I have used the UIImage+ImageEffects category to blur the image completely. How can I blur only a specific portion?
Split your UIImage into two UIImages. Blur the one you want and leave the other unaffected. The following splits the image exactly at the center; adjust the rects in the CGImageCreateWithImageInRect calls if you want to move the blurred portion.
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGFloat scale = image.scale;
CGFloat halfImageHeight = image.size.height / 2.f;
// CGImageCreateWithImageInRect works in pixels, so convert the point values using the image scale.
CGImageRef topImgRef = CGImageCreateWithImageInRect(image.CGImage, CGRectMake(0, 0, image.size.width * scale, halfImageHeight * scale));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef scale:scale orientation:image.imageOrientation];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(image.CGImage, CGRectMake(0, halfImageHeight * scale, image.size.width * scale, halfImageHeight * scale));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef scale:scale orientation:image.imageOrientation];
CGImageRelease(bottomImgRef);
// Add blur effects to bottomImage
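From there, a minimal sketch of the blur-and-recombine step, assuming the applyBlurWithRadius:tintColor:saturationDeltaFactor:maskImage: method from the UIImage+ImageEffects category you mentioned (the radius value is just an example):
// Blur only the bottom half.
UIImage *blurredBottom = [bottomImage applyBlurWithRadius:8.0 tintColor:nil saturationDeltaFactor:1.0 maskImage:nil];

// Stitch the sharp top half and the blurred bottom half back into one image.
CGSize fullSize = CGSizeMake(topImage.size.width, topImage.size.height + blurredBottom.size.height);
UIGraphicsBeginImageContextWithOptions(fullSize, NO, topImage.scale);
[topImage drawAtPoint:CGPointZero];
[blurredBottom drawAtPoint:CGPointMake(0, topImage.size.height)];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();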
I ran into a problem: after I pinch, pan, or rotate a UIImageView, the transform is not preserved when I draw its image with drawInRect:.
How can I preserve the transform in drawInRect:?
I tried this, but no luck:
- (UIImage *)combineImage:(UIImageView *)selectedImage withOverlay:(UIImageView *)overlayImage
{
    /* Identify the region that needs to be cropped */
    CGRect viewForImgFrame = self.viewForImg.frame;
    NSLog(@"view %@", NSStringFromCGRect(viewForImgFrame));
    NSLog(@"selectedImage Img value %@", selectedImage);
    NSLog(@"overlayImage Img value %@", overlayImage);
    CGSize newImageSize = self.viewForImg.frame.size;
    NSLog(@"CGSize %@", NSStringFromCGSize(newImageSize));
    UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); // retina resolution
    //[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
    [selectedImage.image drawInRect:CGRectMake(0, 0, selectedImage.frame.size.width, selectedImage.frame.size.height)];
    CGContextConcatCTM(UIGraphicsGetCurrentContext(), overlayImage.transform);
    [overlayImage.image drawInRect:CGRectMake(overlayImage.frame.origin.x, overlayImage.frame.origin.y, overlayImage.frame.size.width, overlayImage.frame.size.height)];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    NSData *imgData = UIImageJPEGRepresentation(image, 0.9); // get a JPEG representation
    UIImage *imagePNG = [UIImage imageWithData:imgData];      // wrap a UIImage around the compressed data
    UIGraphicsEndImageContext();
    return imagePNG;
}
Any comments are greatly appreciated.
You need to try
CGRectApplyAffineTransform(<#CGRect rect#>, <#CGAffineTransform t#>)
Your code should look like this:
CGContextConcatCTM(UIGraphicsGetCurrentContext(), overlayImage.transform);
CGRect rect = CGRectMake(overlayImage.frame.origin.x, overlayImage.frame.origin.y, overlayImage.frame.size.width, overlayImage.frame.size.height);
CGRect transformedRect = CGRectApplyAffineTransform(rect, overlayImage.transform);
[overlayImage.image drawInRect:transformedRect];
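An alternative sketch, if the overlay still doesn't line up: concatenate the view's transform around its center and draw the image using its untransformed bounds, so the transform is applied exactly once (uses only the overlayImage view from the question):
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
// frame already reflects the transform, so position via the view's center and untransformed bounds instead.
CGContextTranslateCTM(ctx, overlayImage.center.x, overlayImage.center.y);
CGContextConcatCTM(ctx, overlayImage.transform);
[overlayImage.image drawInRect:CGRectMake(-overlayImage.bounds.size.width / 2.0, -overlayImage.bounds.size.height / 2.0, overlayImage.bounds.size.width, overlayImage.bounds.size.height)];
CGContextRestoreGState(ctx);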
It turned out that the autoresizing mask was changing the rect size:
[slider setAutoresizingMask:UIViewAutoresizingNone];
I'm trying to crop an image to a square and then scale it to 200x200. Here's the code I use, but it's not really working for me: the image I get is sometimes in a different orientation, or isn't cropped from the center, or is wider than it should be.
float scale = _avatar.image.scale;
UIImageOrientation orientation = _avatar.image.imageOrientation;
if (_avatar.image.size.width < _avatar.image.size.height) { // avatar is a UIImageView
    float startingY = (_avatar.image.size.height - _avatar.image.size.width) / 2; // image is taller; determine the origin of the width-sized square, which will be the new image
    CGImageRef imageRef = CGImageCreateWithImageInRect([_avatar.image CGImage], CGRectMake(0, startingY, _avatar.image.size.width, _avatar.image.size.width));
    _avatar.image = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
    CGImageRelease(imageRef);
} else {
    float startingX = (_avatar.image.size.width - _avatar.image.size.height) / 2; // image is wider; determine the origin of the height-sized square, which will be the new image
    CGImageRef imageRef = CGImageCreateWithImageInRect([_avatar.image CGImage], CGRectMake(startingX, 0, _avatar.image.size.height, _avatar.image.size.height));
    _avatar.image = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
    CGImageRelease(imageRef);
}
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), YES, 0.0);
[_avatar.image drawInRect:CGRectMake(0, 0, 200, 200)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
What must be done to achieve the correct result?
I would suggest using this UIImage category posted on GitHub. The classes are really simple, and you can also read them to learn what is going on under the hood.
UIImage already does all the coordinate and orientation tricks for you, so you should not use the width and height of a UIImage object to calculate the square's coordinates directly against its CGImage. You can take advantage of UIKit to crop the image:
CGFloat startingX = 0;
CGFloat startingY = 0;
CGFloat squareWidth;
if (_avatar.image.size.width < _avatar.image.size.height) { // avatar is a UIImageView
    startingY = (_avatar.image.size.height - _avatar.image.size.width) / 2; // image is taller; determine the origin of the width-sized square, which will be the new image
    squareWidth = _avatar.image.size.width;
} else {
    startingX = (_avatar.image.size.width - _avatar.image.size.height) / 2; // image is wider; determine the origin of the height-sized square, which will be the new image
    squareWidth = _avatar.image.size.height;
}
UIGraphicsBeginImageContextWithOptions(CGSizeMake(squareWidth, squareWidth), YES, 0.0);
[_avatar.image drawAtPoint:CGPointMake(-startingX, -startingY)]; // Make an offset to draw part of the image
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), YES, 0.0);
[croppedImage drawInRect:CGRectMake(0, 0, 200, 200)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
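The crop and the scale could also be collapsed into a single pass, a sketch using the same startingX, startingY, and squareWidth values computed above:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), YES, 0.0);
CGFloat ratio = 200.0f / squareWidth; // scale factor from the cropped square to the 200x200 target
// Draw the whole image scaled and offset so only the centered square lands inside the 200x200 context.
[_avatar.image drawInRect:CGRectMake(-startingX * ratio, -startingY * ratio, _avatar.image.size.width * ratio, _avatar.image.size.height * ratio)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();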
UIImage *sticky = [UIImage imageNamed:@"Radio.png"];
[_imgViewSticky setImage:sticky];
CIImage *outputImage = [self.originalImage CIImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImg = [context createCGImage:outputImage fromRect:[outputImage extent]];
float widthRatio = [outputImage extent].size.width / 320;
float heightRatio = [outputImage extent].size.height / 480;
CGPoint cgStickyPoint = CGPointMake(_imgViewSticky.frame.origin.x * widthRatio, _imgViewSticky.frame.origin.y * heightRatio);
cgImg = [self setStickyForCGImage:cgImg withPosition:cgStickyPoint];
The last line returns a CGImageRef object.
I then assign the value to the final image like this:
UIImage *finalImage = [UIImage ImageWithCGImageRef:cgImg];
Yet I'm not getting the image. Any ideas why? Any help is much appreciated.
I notice that your CIContext isn't receiving any drawing, which could be why you're not getting an image. I don't have a clear picture of what you want, but this code will superimpose one UIImage on top of another UIImage:
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, 0.0); // Create an image context
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)]; // Draw the first UIImage
[stickerImage drawInRect:stickerRect]; // Draw the second UIImage wherever you want on top of the first
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext(); // Get the final UIImage result
UIGraphicsEndImageContext(); // End the context once the result has been captured
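For example, stickerRect could be built from the sticker view's frame before the drawing code above runs. A sketch using the names from your question (treating originalImage as a UIImage; the 320x480 reference size is carried over from your ratio code and is an assumption):
UIImage *backgroundImage = self.originalImage;             // assuming originalImage is a UIImage, as in the question
UIImage *stickerImage = [UIImage imageNamed:@"Radio.png"];
// Scale the sticker view's frame from screen points into the background image's coordinate space.
CGFloat widthRatio = backgroundImage.size.width / 320.0f;
CGFloat heightRatio = backgroundImage.size.height / 480.0f;
CGRect stickerRect = CGRectMake(_imgViewSticky.frame.origin.x * widthRatio, _imgViewSticky.frame.origin.y * heightRatio, _imgViewSticky.frame.size.width * widthRatio, _imgViewSticky.frame.size.height * heightRatio);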