How to properly crop a UIImage taken with UIImagePickerController? - ios

This is the camera overlay for my app.
The yellow square indicates to the user that only the part of the photo inside it (in the camera) will be saved. It's like a crop.
When I save the captured image, it saves a zoomed photo [a heavily zoomed-in photo].
What I found is that when I take a photo, it is of size {2448, 3264}.
I'm cropping the image like this:
- (UIImage *)imageByCroppingImage:(UIImage *)image toSize:(CGSize)size
{
    double x = (image.size.width - size.width) / 2.0;
    double y = (image.size.height - size.height) / 2.0;
    CGRect cropRect = CGRectMake(x, y, size.width, size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return cropped;
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    if (image) {
        UIImage *newImage = [self imageByCroppingImage:image toSize:CGSizeMake(300.f, 300.f)];
        UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
    }
}
Notes:
Orientation is fixed before cropping, using this: http://pastebin.com/WYUkDLS0
The yellow square on the camera is the same size: width=300 and height=300.
If I set the front camera for UIImagePickerController, it gives me perfect cropped output. Yes, this is really strange!
I've tried everything from here: Cropping an UIImage. Even https://github.com/Nyx0uf/NYXImagesKit didn't help.
Any ideas/suggestions?
Update:
From this question: Trying to crop my UIImage to a 1:1 aspect ratio (square) but it keeps enlarging the image causing it to be blurry. Why?
I followed the answer of @DrummerB, like this:
CGFloat originalWidth = image.size.width * image.scale;
CGFloat originalHeight = image.size.height * image.scale;
float smallestDimension = fminf(originalWidth, originalHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], square);
UIImage *squareImage = [UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation];
UIImageWriteToSavedPhotosAlbum(squareImage, nil, nil, nil);
CGImageRelease(imageRef);
This is what I captured:
And this is the result:
Now I'm getting a square photo, but note that the output still contains parts of the photo from outside the yellow square. What I want is only the photo that resides inside the yellow square. The captured image is still of size {w=2448, h=3264}. Note the red circles, which indicate the outer part of the image that should not be included in the output, as that part is not inside the yellow square.
What's wrong here?

It looks like your implementation is cropping the image to 300 by 300 pixels, but the yellow square you have on screen is 300 by 300 points. Points are not the same as pixels. So if your photo is 3264 pixels wide, cropping it to 300 pixels returns roughly a tenth of the original width, which is why the result looks so heavily zoomed in.
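In other words, the crop rect has to be expressed in the image's pixel space before calling CGImageCreateWithImageInRect. A minimal sketch (assuming you know the yellow square's frame in points, overlayRect, and the preview view's size in points, previewSize — both names are placeholders; note the camera preview is aspect-filled, so a fully precise mapping would also account for the edges the preview crops off):
- (UIImage *)imageByCroppingImage:(UIImage *)image toOverlayRect:(CGRect)overlayRect previewSize:(CGSize)previewSize
{
    // Map from preview points to image pixels.
    CGFloat scaleX = (image.size.width * image.scale) / previewSize.width;
    CGFloat scaleY = (image.size.height * image.scale) / previewSize.height;
    CGRect cropRect = CGRectMake(overlayRect.origin.x * scaleX,
                                 overlayRect.origin.y * scaleY,
                                 overlayRect.size.width * scaleX,
                                 overlayRect.size.height * scaleY);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}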

Related

CGImageCreateWithImageInRect not cropping at correct position

I have a scenario in my app where I take a screenshot of the video using
[myMovieController requestThumbnailImagesAtTimes:@[@(myMovieController.currentPlaybackTime)] timeOption:MPMovieTimeOptionExact];
which works just fine. Then I have to crop the image at the location touched on the video. I have added a gesture recognizer to myMovieController and get the touch location from the gesture recognizer.
Then I use the following code to crop the screenshot:
CGRect cropRect = tapCircleView.frame;
cropRect = CGRectMake(touchPoint.x * image.scale,
                      touchPoint.y * image.scale,
                      cropRect.size.width * image.scale,
                      cropRect.size.height * image.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
[self showImage:cropped];
where the cropRect width and height are each 150.
But the crop result does not have the correct x and y, and the resulting image is very pixelated.
I have tried every solution, but nothing works.
What am I missing?
Thanks.
The size of the captured screenshot is not the same as the size of the device the application is running on.
So instead of using that image directly, first resize it to the screen size using the following code:
CGRect rect = CGRectMake(0,0,[UIScreen mainScreen].bounds.size.width, [UIScreen mainScreen].bounds.size.height);
UIGraphicsBeginImageContext( rect.size );
[image drawInRect:rect];
UIImage *picture1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(picture1);
UIImage *img=[UIImage imageWithData:imageData];
And now just use this image wherever you want!
Cheers.
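A related detail in the question's code, independent of the resize: it uses touchPoint as the rect's origin, so the crop starts at the finger instead of being centered under it. A sketch of the fixed crop, using the resized img from the answer above (touchPoint, showImage:, and the 150-point size are taken from the question):
CGFloat side = 150.0;
// Center the crop rect on the touch point rather than starting it there.
CGRect cropRect = CGRectMake(touchPoint.x - side / 2.0,
                             touchPoint.y - side / 2.0,
                             side, side);
CGImageRef imageRef = CGImageCreateWithImageInRect(img.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
[self showImage:cropped];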

Cropping UIImage Complications in iOS 8.4 [duplicate]

This question already has answers here:
Cropping center square of UIImage
(19 answers)
Closed 7 years ago.
I'm currently cropping a UIImage in my imagePickerController:didFinishPickingMediaWithInfo: method, like so:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    picker.allowsEditing = YES;
    UIImage *img = [info objectForKey:UIImagePickerControllerOriginalImage];
    CGRect cropRect = CGRectMake(0, 0, 2448, 3264);
    UIImage *croppedImage = [img crop:cropRect];
    imageToPass = croppedImage;
    NSLog(@"Here's imageToPass: %@", imageToPass);
    NSLog(@"and here's imageToPass' width: %f", imageToPass.size.width);
    NSLog(@"and here's imageToPass' height: %f", imageToPass.size.height);
    NSLog(@"and here's imageToPass' scale: %f", imageToPass.scale);
    UINavigationController *postControl = [self.storyboard instantiateViewControllerWithIdentifier:@"postControl"];
    RGPostViewController *postView = (RGPostViewController *)postControl.topViewController;
    [postView storeImage:imageToPass];
    [self.presentedViewController presentViewController:postControl animated:NO completion:nil];
}
My problem is that when I print the width and height of my imageToPass variable, I find that the value is listed in points. My console prints like this:
I need to get back an image cropped to 320x320. With my code CGRect cropRect = CGRectMake(0, 0, 2448, 3264); I'm taking the original size of the photo, which by default with UIImagePickerController is, I'm assuming, 320x520 or something like that. Using point values, I can see that 2448 is the width in points and 3264 is the height. From Google:
The iPhone 5 display resolution is 1136 x 640 pixels. Measuring 4 inches diagonally, the new touch screen has an aspect ratio of 16:9 and is branded a Retina display with 326 ppi (pixels per inch).
I'm not sure what to do here. Does the math 2448 points / 640 px = 3.825 tell me that there are 3.825 points per pixel on a 326 ppi screen?
PS: keep in mind I'm trying to grab the 320x320 picture in the middle of the UIImagePickerControllerOriginalImage, which means cutting off some number of pixels from the top and bottom, determined in points, I'm assuming.
EDIT
Here's the code for the crop: method in the fourth line of code above:
#import "UIImage+Crop.h"

@implementation UIImage (Crop)

- (UIImage *)crop:(CGRect)rect {
    rect = CGRectMake(rect.origin.x * self.scale,
                      rect.origin.y * self.scale,
                      rect.size.width * self.scale,
                      rect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}

@end
I have found that if I set my cropRect with CGRect cropRect = CGRectMake(264, 0, 2448, 3000); //3264 it actually removes 264 points from the top and bottom of the image. I understand that an iPhone 5s has a screen resolution of 326 ppi (pixels per inch); how can I use this to successfully remove the number of pixels that I need to remove?
You don't need to convert between points, pixels, retina, and non-retina yourself, because the image carries a property called scale that does it for you. You do, however, need Core Graphics to do the actual crop. Here's what it could look like:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    picker.allowsEditing = YES;
    UIImage *img = [info objectForKey:UIImagePickerControllerOriginalImage];
    // You want to make your rect in the center of your image, not at [0,0].
    // CGImageCreateWithImageInRect works in pixels, so multiply by the image's scale.
    CGFloat cropSize = 320 * img.scale;
    CGFloat centerX = (img.size.width * img.scale) / 2;
    CGFloat centerY = (img.size.height * img.scale) / 2;
    CGRect cropRect = CGRectMake(centerX - (cropSize / 2), centerY - (cropSize / 2), cropSize, cropSize);
    // Make a new CGImageRef, then use it to make the cropped UIImage.
    // Passing the image's scale and orientation is how you handle retina/orientation.
    // Make sure to release that image ref!
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], cropRect);
    imageToPass = [UIImage imageWithCGImage:imageRef
                                      scale:img.scale
                                orientation:img.imageOrientation];
    CGImageRelease(imageRef);
    UINavigationController *postControl = [self.storyboard instantiateViewControllerWithIdentifier:@"postControl"];
    RGPostViewController *postView = (RGPostViewController *)postControl.topViewController;
    [postView storeImage:imageToPass];
    [self.presentedViewController presentViewController:postControl animated:NO completion:nil];
}
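One side note, not part of the original answer: setting picker.allowsEditing inside the delegate callback, as the code above does, is too late to have any effect. If it is set before the picker is presented, UIKit shows its own square crop UI and hands you the result directly:
// Before presenting:
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.allowsEditing = YES; // must be set before presentation to do anything
// ...then, in the delegate, the user-cropped square is available as:
UIImage *edited = [info objectForKey:UIImagePickerControllerEditedImage];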

Cropping an iPhone camera photo in iOS programmatically

I have the portrait version working fine, as the scale is 0.75. But I cannot get landscape to work correctly. I am using UIImagePickerController.
// Landscape
// The full landscape image is width: 3264 and height: 2448.
CGRect cropped = CGRectMake(306.0, 0.0, 1836.0, 2448.0);
CGImageRef imageRef = CGImageCreateWithImageInRect([photo CGImage], cropped);
// UIImage *croppedPhoto = [UIImage imageWithCGImage:imageRef];
UIImage *croppedPhoto = [UIImage imageWithCGImage:imageRef scale:photo.scale orientation:UIImageOrientationUp];
CGImageRelease(imageRef);

// Scale the photo down.
CGRect rect = CGRectMake(0.0, 0.0, 450.0, 600.0);
UIGraphicsBeginImageContext(rect.size);
[croppedPhoto drawInRect:rect];
picture = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I have the following issues:
I want the edges cut off to turn the landscape image into a portrait (the center part of the landscape needs to become the portrait).
Depending on which way I hold the device, the image is either upside down or correct.
What am I doing wrong?
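No answer was recorded here, but the upside-down symptom usually comes from ignoring the photo's imageOrientation metadata when wrapping the raw CGImage (the code above forces UIImageOrientationUp). A common fix, sketched here without testing against this exact code, is to normalize the image by redrawing it before cropping, so the pixel data matches what you see on screen:
// Redraw the photo so its pixels physically match its imageOrientation;
// after this, cropping in raw pixel space is safe in any device orientation.
- (UIImage *)normalizedImage:(UIImage *)photo {
    if (photo.imageOrientation == UIImageOrientationUp) return photo;
    UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);
    [photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}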

Why, when I add a UIImage on top of another UIImage to make a new image, does the added image shrink?

I'm trying to add a video player icon on top of a thumbnail of a video.
I get the image from the YouTube API, then crop it to be square, then resize it to the proper size. I then add my player icon image on top of it.
The problem is that the player icon ends up much smaller than it should be on the thumbnail (it's 28x28pt, but on the thumbnail it's much smaller). See the image below, where I added the icon to the cell to show the size it should be, versus its size on the thumbnail:
I crop it to a square with this method:
/**
 * Given a UIImage, return it with a square aspect ratio (via cropping, not smushing).
 */
- (UIImage *)createSquareVersionOfImage:(UIImage *)image {
    CGFloat originalWidth = image.size.width;
    CGFloat originalHeight = image.size.height;
    float smallestDimension = fminf(originalWidth, originalHeight);
    // Determine the offset needed to crop the center of the image out.
    CGFloat xOffsetToBeCentered = (originalWidth - smallestDimension) / 2;
    CGFloat yOffsetToBeCentered = (originalHeight - smallestDimension) / 2;
    // Create the square, making sure the position and dimensions are set appropriately for retina displays.
    CGRect square = CGRectMake(xOffsetToBeCentered * image.scale, yOffsetToBeCentered * image.scale, smallestDimension * image.scale, smallestDimension * image.scale);
    CGImageRef squareImageRef = CGImageCreateWithImageInRect([image CGImage], square);
    UIImage *squareImage = [UIImage imageWithCGImage:squareImageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(squareImageRef);
    return squareImage;
}
Resize it with this method:
/**
 * Resize the given UIImage to a new size and return the newly resized image.
 */
- (UIImage *)resizeImage:(UIImage *)image toSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
And add it on top of the other image with this method:
/**
 * Adds a UIImage on top of another UIImage and returns the result. The top image is centered.
 */
- (UIImage *)addImage:(UIImage *)additionalImage toImage:(UIImage *)backgroundImage {
    UIGraphicsBeginImageContext(backgroundImage.size);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    [additionalImage drawInRect:CGRectMake((backgroundImage.size.width - additionalImage.size.width) / 2,
                                           (backgroundImage.size.height - additionalImage.size.height) / 2,
                                           additionalImage.size.width,
                                           additionalImage.size.height)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
And this is how it is implemented:
UIImage *squareThumbnail = [self resizeImage:[self createSquareVersionOfImage:responseObject] toSize:CGSizeMake(110.0, 110.0)];
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
UIImage *squareThumbnailWithPlayerIcon = [self addImage:playerIcon toImage:squareThumbnail];
But in the end, the icon is always too small. Sizing confuses me when working with images; I'm used to retina handling being figured out automatically. For example, in the above code block I'm not sure why I set it to 110.0, 110.0, since it's a 55x55 UIImageView and I thought it scaled automatically (but if I put 55 it's stretched terribly).
The reason you have to put 110 in your resizeImage call is that you are creating a graphics context with a scale of 1.0 (UIGraphicsBeginImageContext always uses 1.0). The graphics contexts for views in a view hierarchy on retina displays have a scale of 2.0 (provided you did nothing else to scale them).
I believe the new UIImage you create is then a "normal" 1x image (sorry, I can't remember the technical term), not an @2x image, so the size you get when you ask for it will not account for @2x.
Note this answer:
UIGraphicsGetImageFromCurrentImageContext retina resolutions?
I haven't tested this, but it should work. If it doesn't, it should at least be more straightforward to debug.
// Images should be passed in with their original scales.
- (UIImage *)compositedImageWithSize:(CGSize)newSize bg:(UIImage *)backgroundImage fg:(UIImage *)foregroundImage {
    // Match the scale of the screen.
    CGFloat scale = [[UIScreen mainScreen] scale];
    UIGraphicsBeginImageContextWithOptions(newSize, NO, scale);
    // Instead of resizing the image ahead of time, we just draw it into the context
    // at the appropriate aspect-fill size. The context will clip the overflow.
    CGRect aspectFillRect = CGRectZero;
    if (newSize.width / newSize.height < backgroundImage.size.width / backgroundImage.size.height) {
        // The background is relatively wider than the target: match heights, let the width overflow.
        aspectFillRect.origin.y = 0;
        aspectFillRect.size.height = newSize.height;
        CGFloat scaledWidth = (newSize.height / backgroundImage.size.height) * backgroundImage.size.width;
        aspectFillRect.origin.x = (newSize.width - scaledWidth) / 2.0;
        aspectFillRect.size.width = scaledWidth;
    } else {
        // The background is relatively taller: match widths, let the height overflow.
        aspectFillRect.origin.x = 0;
        aspectFillRect.size.width = newSize.width;
        CGFloat scaledHeight = (newSize.width / backgroundImage.size.width) * backgroundImage.size.height;
        aspectFillRect.origin.y = (newSize.height - scaledHeight) / 2.0;
        aspectFillRect.size.height = scaledHeight;
    }
    [backgroundImage drawInRect:aspectFillRect];
    // Draw the foreground (e.g. the @2x player icon) centered, so it renders at full resolution.
    [foregroundImage drawInRect:CGRectMake((newSize.width - foregroundImage.size.width) / 2,
                                           (newSize.height - foregroundImage.size.height) / 2,
                                           foregroundImage.size.width,
                                           foregroundImage.size.height)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
You would skip all those methods you were calling before and just do:
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
// Pass in the non-retina point size of the target view.
UIImage *result = [self compositedImageWithSize:CGSizeMake(55.0, 55.0)
                                             bg:responseObject
                                             fg:playerIcon];
Hope this helps!

Crop a Portion of UIImage from Larger UIImage, and include non-image parts

I think I may have an odd request; however, hopefully someone can help. I am using the well-known UIScrollView + UIImageView combination to zoom into and out of an image, as well as pan. This works fine and dandy, but our current project needs to crop the image while also including the black bars on the sides if the image is smaller than the crop rectangle. See the images below.
We wish to capture everything inside of the blue box, including the white (which will be black, since opaque is set to YES).
This works great for images that are completely zoomed out (The white is just the UIImageView's extra space).
However the problem arises when we try to zoom into the image, and capture only that portion, plus the empty space.
This results in the following image
The problem we are seeing is that we need to create an image that is exactly what is in the crop rect, regardless of whether any part of the image is there or not. The other requirement is the ability to change the output resolution dynamically. The aspect ratio is 16:9; for this example kMaxWidth = 1136 and kMaxHeight = 639, but in the future we may want to request a larger or smaller 16:9 resolution.
Below is the function I have so far:
- (UIImage *)createCroppedImageFromImage:(UIImage *)image {
    CGSize newRect = CGSizeMake(kMaxWidth, kMaxHeight);
    UIGraphicsBeginImageContextWithOptions(newRect, YES, 0.0);
    // 0 is the edge of the screen, to help with zooming.
    CGFloat xDisplacement = ((fabs(0 - imageView.frame.origin.x) * kMaxWidth) / (self.cropSize.width / self.scrollView.zoomScale) / self.scrollView.zoomScale);
    CGFloat yDisplacement = ((fabs(self.cropImageView.frame.origin.y - imageView.frame.origin.y) * kMaxHeight) / (self.cropSize.height / self.scrollView.zoomScale) / self.scrollView.zoomScale);
    CGFloat newImageWidth = (self.image.size.width * kMaxWidth) / (self.cropSize.width / self.scrollView.zoomScale);
    CGFloat newImageHeight = (self.image.size.height * kMaxHeight) / (self.cropSize.height / self.scrollView.zoomScale);
    [image drawInRect:CGRectMake(xDisplacement, 0, newImageWidth, newImageHeight)];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Any help would be greatly appreciated.
I ended up just taking a screenshot, and cropping that. It seems to work well enough.
- (UIImage *)cropImage {
    CGRect cropRect = self.cropOverlay.cropRect;
    // Render the whole view hierarchy into a 1x context, so cropRect (in points) lines up.
    UIGraphicsBeginImageContext(self.view.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *fullScreenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRef croppedImage = CGImageCreateWithImageInRect(fullScreenshot.CGImage, cropRect);
    // Note: resizedImage:interpolationQuality: comes from a UIImage resizing category, not stock UIKit.
    UIImage *crop = [[UIImage imageWithCGImage:croppedImage] resizedImage:self.outputSize interpolationQuality:kCGInterpolationHigh];
    CGImageRelease(croppedImage);
    return crop;
}
If you are on iOS 7 or later, you should use drawViewHierarchyInRect:afterScreenUpdates: instead of renderInContext:.
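For reference, the iOS 7+ variant could look like this (a sketch; the explicit 1.0 scale keeps cropRect valid in points, matching the snapshot above):
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 1.0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:NO];
UIImage *fullScreenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();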
I think the translated rect for the image view isn't calculated properly. Since the UIImageView is a subview of the UIScrollView, you should be able to calculate the visible rect by calling [scrollView convertRect:scrollView.bounds toView:imageView];. That will be the visible rect of your image view. All you need to do now is crop it:
- (UIImage *)cropImage:(UIImage *)img inRect:(CGRect)rect {
    CGImageRef cropped = CGImageCreateWithImageInRect(img.CGImage, rect);
    UIImage *image = [UIImage imageWithCGImage:cropped];
    CGImageRelease(cropped);
    return image;
}
Edit: Yeah... I forgot to mention that the contentsRect cropping below works in the (0,1) unit coordinate space. I've modified the crop function for you, so it crops the image based on all the parameters you provided: the UIImageView inside the UIScrollView, and the image.
- (UIImage *)cropImage:(UIImage *)image inImageView:(UIImageView *)imageView scrollView:(UIScrollView *)scrollView {
    // Get the visible rect from the image scroll view.
    CGRect visibleRect = [scrollView convertRect:scrollView.bounds toView:imageView];
    UIImage *rCroppedImage;
    CALayer *maskLayer = [[CALayer alloc] init];
    maskLayer.contents = (id)image.CGImage;
    maskLayer.frame = CGRectMake(0, 0, visibleRect.size.width, visibleRect.size.height);
    // contentsRect is in unit (0..1) coordinates, so normalize by the image size.
    CGRect rect = CGRectMake(visibleRect.origin.x / image.size.width,
                             visibleRect.origin.y / image.size.height,
                             visibleRect.size.width / image.size.width,
                             visibleRect.size.height / image.size.height);
    maskLayer.contentsRect = rect;
    UIGraphicsBeginImageContext(visibleRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [maskLayer renderInContext:context];
    rCroppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rCroppedImage;
}
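Calling it would then look something like this (assuming the controller holds the image view and scroll view as properties):
UIImage *visible = [self cropImage:self.imageView.image
                       inImageView:self.imageView
                        scrollView:self.scrollView];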
