How to apply custom zoom to UIImagePicker camera?

I have a custom camera control with a slider to apply zoom. I am able to zoom the preview with the following code:
self.pickerReference.cameraViewTransform = CGAffineTransformScale(CGAffineTransformIdentity, zoom, zoom);
But when I get the image in didFinishPickingMediaWithInfo:, I get the original image for UIImagePickerControllerOriginalImage, not the zoomed one, and there is no image at all for UIImagePickerControllerEditedImage.
I have also tried:
currentImage = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
UIImageView *v = [[UIImageView alloc]initWithImage:currentImage];
UIGraphicsBeginImageContext(v.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextScaleCTM(context, zoom, zoom);
[v drawRect:pickerReference.view.bounds];
CGContextRestoreGState(context);
zoomedCurImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Write the image to a JPEG file
[UIImageJPEGRepresentation(zoomedCurImg, 0.4) writeToFile:imgPath atomically:YES];
This does apply zoom to the original image, but the zoom is always applied at the top-left corner, not at the point where it should be. Please suggest a solution. Thanks in advance.
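Note that cameraViewTransform only zooms the on-screen preview, which is why UIImagePickerControllerOriginalImage comes back unzoomed; the captured photo has to be redrawn manually. In the second snippet, CGContextScaleCTM scales about the context origin, which is why the zoom sticks to the top-left corner. A minimal sketch that scales about a chosen zoom point instead (the centre point here is an assumption; substitute the point your slider actually zoomed towards):
currentImage = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
CGSize imageSize = currentImage.size;
// Assumed zoom point: the image centre. Replace with the real focus point.
CGPoint zoomPoint = CGPointMake(imageSize.width / 2.0, imageSize.height / 2.0);
UIGraphicsBeginImageContextWithOptions(imageSize, NO, currentImage.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
// Move the origin to the zoom point, scale, then move it back, so the
// scale is applied around that point rather than the top-left corner.
CGContextTranslateCTM(context, zoomPoint.x, zoomPoint.y);
CGContextScaleCTM(context, zoom, zoom);
CGContextTranslateCTM(context, -zoomPoint.x, -zoomPoint.y);
[currentImage drawInRect:(CGRect){CGPointZero, imageSize}];
zoomedCurImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();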

I had a similar issue with a different piece of code and turning off auto layout fixed it for me. Not sure if that applies here but it's worth a shot.

Related

UIImageView image aspect ratio is messed up after redrawing it to create a round mask

My app sends a GET request to Google to obtain certain user information. One crucial piece of returned data is a user's picture, which is placed inside a UIImageView that is always exactly (100, 100) and then redrawn to create a round mask for this image view. These pictures come from different sources and thus always have different aspect ratios: some have a smaller width than height, sometimes vice versa. This results in the image looking compressed. I've tried things such as the following (none of them worked):
_personImage.layer.masksToBounds = YES;
_personImage.layer.borderWidth = 0;
_personImage.contentMode = UIViewContentModeScaleAspectFit;
_personImage.clipsToBounds = YES;
Here is the code I use to redraw my images (it was taken from user fnc12's answer, the third one, to Making a UIImage to a circle form):
/** Returns a redrawn image that had a circular mask created for the inputted image. */
-(UIImage *)roundedRectImageFromImage:(UIImage *)image size:(CGSize)imageSize withCornerRadius:(float)cornerRadius
{
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0); //<== Notice 0.0 as third scale parameter. It is important because the default draw scale ≠ 1.0. Try 1.0 - it will draw an ugly image...
    CGRect bounds = (CGRect){CGPointZero, imageSize};
    [[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:cornerRadius] addClip];
    [image drawInRect:bounds];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
This method is always called like so:
[_personImage setImage:[self roundedRectImageFromImage:image size:CGSizeMake(_personImage.frame.size.width, _personImage.frame.size.height) withCornerRadius:_personImage.frame.size.width/2]];
So I end up with a perfectly round image, but the image itself isn't right aspect-wise. Please help.
P.S. Here's how images look when their width is roughly 70% that of their height before the redrawing of the image to create a round mask:
Hello dear friend there!
Here is my version that works:
Code in ViewController:
[self.profilePhotoImageView setContentMode:UIViewContentModeCenter];
[self.profilePhotoImageView setContentMode:UIViewContentModeScaleAspectFill];
[CALayer roundView:self.profilePhotoImageView];
The roundView function in my CALayer+Additions category:
+(void)roundView:(UIView*)view{
    CALayer *viewLayer = view.layer;
    [viewLayer setCornerRadius:view.frame.size.width/2];
    [viewLayer setBorderWidth:0];
    [viewLayer setMasksToBounds:YES];
}
Maybe you should try changing the way you create the rounded image view to my version, which rounds the UIImageView by modifying its layer directly. Hope it helps.
To maintain the aspect ratio of the UIImageView, use the following line of code after setting the image:
[_personImage setContentMode:UIViewContentModeScaleAspectFill];
For a detailed description, see the reference link:
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImageView_Class/
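If you would rather keep the redraw approach than rely on contentMode, here is a hedged variant of the question's method that aspect-fills (crops) instead of stretching; the MAX-based scale math is my assumption, not part of the original answers:
-(UIImage *)roundedRectImageFromImage:(UIImage *)image size:(CGSize)imageSize withCornerRadius:(float)cornerRadius
{
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
    CGRect bounds = (CGRect){CGPointZero, imageSize};
    [[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:cornerRadius] addClip];
    // Aspect-fill: scale so the image covers the bounds, then centre it;
    // the clip discards whatever falls outside the rounded rect.
    CGFloat fillScale = MAX(imageSize.width / image.size.width,
                            imageSize.height / image.size.height);
    CGSize scaledSize = CGSizeMake(image.size.width * fillScale,
                                   image.size.height * fillScale);
    CGRect drawRect = CGRectMake((imageSize.width - scaledSize.width) / 2.0,
                                 (imageSize.height - scaledSize.height) / 2.0,
                                 scaledSize.width, scaledSize.height);
    [image drawInRect:drawRect];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}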

Crop image to a square according to the size of a UIView/CGRect

I have an implementation of AVCaptureSession and my goal is for the user to take a photo and only save the part of the image within the red square border, as shown below:
AVCaptureSession's previewLayer (the camera) spans from (0,0) (top left) to the bottom of my camera controls bar (the bar just above the view that contains the shutter). My navigation bar and controls bar are semi-transparent, so the camera can show through.
I'm using [captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; to ensure that the original image being saved to the camera roll is like Apple's camera.
The user will be able to take the photo in portrait, landscape left and right, so the cropping method must take this into account.
So far, I've tried to crop the original image using this code:
DDLogVerbose(@"%@: Image crop rect: (%f, %f, %f, %f)", THIS_FILE, self.imageCropRect.origin.x, self.imageCropRect.origin.y, self.imageCropRect.size.width, self.imageCropRect.size.height);
// Create new image context (retina safe)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.imageCropRect.size.width, self.imageCropRect.size.width), NO, 0.0);
// Create rect for image
CGRect rect = self.imageCropRect;
// Draw the image into the rect
[self.captureManager.stillImage drawInRect:rect];
// Saving the image, ending image context
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
However, when I look at the cropped image in the camera roll, it seems that it has just squashed the original image, and not discarded the top and bottom parts of the image like I'd like. It also results in 53 pixels of white space at the top of the "cropped" image, likely because of the y position of my CGRect.
This is my logging output for the CGRect:
Image crop rect: (0.000000, 53.000000, 320.000000, 322.000000)
This also describes the frame of the red bordered view in the superview.
Is there something crucial I'm overlooking?
P.S. The original image size (taken with a camera in portrait mode) is:
Original image size: (2448.000000, 3264.000000)
You can crop images with CGImageCreateWithImageInRect:
CGImageRef imageRef = CGImageCreateWithImageInRect([uncroppedImage CGImage], bounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Don't forget to pass the scale parameter, otherwise you will get a low-resolution image:
CGImageRef imageRef = CGImageCreateWithImageInRect([uncroppedImage CGImage], CGRectMake(0, 0, 30, 120));
[imageView setImage:[UIImage imageWithCGImage:imageRef scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationUp]];
CGImageRelease(imageRef);
Swift 3:
let imageRef:CGImage = uncroppedImage.cgImage!.cropping(to: bounds)!
let croppedImage:UIImage = UIImage(cgImage: imageRef)
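For the original question's numbers, the view-space crop rect (320 points wide) also has to be scaled into the image's pixel space (2448 wide) before cropping. A hedged sketch, assuming the preview aspect-fills the view's width and skipping EXIF orientation handling:
UIImage *stillImage = self.captureManager.stillImage;
// Map the crop rect from view points into image pixels. Assumes the
// preview fills the view's width (320 points -> 2448 pixels here);
// orientation handling is omitted for brevity.
CGFloat conversionScale = stillImage.size.width / CGRectGetWidth(self.view.bounds);
CGRect imageSpaceRect = CGRectMake(self.imageCropRect.origin.x * conversionScale,
                                   self.imageCropRect.origin.y * conversionScale,
                                   self.imageCropRect.size.width * conversionScale,
                                   self.imageCropRect.size.height * conversionScale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(stillImage.CGImage, imageSpaceRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:stillImage.scale
                                      orientation:stillImage.imageOrientation];
CGImageRelease(croppedRef);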

Crop an area of an oversized image to what is currently showing onscreen

I have an oversized image loaded in a image view that goes out of bounds both vertically and horizontally.
The end user can scroll around the image (the oversized image view is in a scroll view), and when they find an area that they like, I would like to crop out the area of the image that is shown on the screen (much like a screenshot, but only of the imageView.image). I'm then going to put that into a different image view.
I can't seem to work out how to accomplish the "screenshot" of the area of the image view's image that is currently showing on the screen.
You can use CGImageCreateWithImageInRect to create a subimage of the displayed image. Use contentOffset and the scrollView's bounds to create the rect from which you want to create the image.
CGRect rect = CGRectMake(scrollView.contentOffset.x, scrollView.contentOffset.y, CGRectGetWidth(scrollView.bounds), CGRectGetHeight(scrollView.bounds));
CGImageRef subImageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
If you zoom your scrollView, you will need to take the zoomScale into account too; a sketch follows.
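A hedged sketch of folding the zoom in, assuming the image view shows the image at its natural size when zoomScale is 1:
CGFloat zoom = scrollView.zoomScale;
// Divide by zoomScale so the rect is expressed in the unzoomed image's
// coordinate space rather than the zoomed content's.
CGRect visibleRect = CGRectMake(scrollView.contentOffset.x / zoom,
                                scrollView.contentOffset.y / zoom,
                                CGRectGetWidth(scrollView.bounds) / zoom,
                                CGRectGetHeight(scrollView.bounds) / zoom);
CGImageRef subImageRef = CGImageCreateWithImageInRect([originalImage CGImage], visibleRect);
UIImage *visibleImage = [UIImage imageWithCGImage:subImageRef
                                            scale:originalImage.scale
                                      orientation:originalImage.imageOrientation];
CGImageRelease(subImageRef);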
I ended up using the following code to achieve what I was looking for. Thank you to Karl for his input, and thank you to iNoob, whose answer to a previous question [Located here on StackOverflow][1] I used for mine.
Just use the code below to take a "screenshot". Set anything you don't want in the image to .hidden = YES; before the code runs to hide it from the screenshot, and set it back to .hidden = NO; afterwards.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIView screenshot with magnificationFilter

I have a tiny QR code UIImage set on a large UIImageView. To avoid any black-to-white gradient when magnifying, I set the UIImageView's magnification filter to kCAFilterNearest, as shown below (it works):
[QRCodeImageView layer].magnificationFilter = kCAFilterNearest;
Now I need to take a screenshot of this image view, but the resulting image ignores the magnification filter:
Here is my screenshot code:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(QRCodeImageView.frame.size.width, QRCodeImageView.frame.size.height),YES, 2.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
[QRCodeImageView.layer renderInContext:context];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So, the question is, how to render in context with a given magnification filter?
Thanks in advance
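One hedged suggestion (not an answer from the original thread): renderInContext: appears to ignore the layer's magnificationFilter, but you can draw the underlying UIImage yourself with interpolation disabled, which gives the same hard-edged result:
UIGraphicsBeginImageContextWithOptions(QRCodeImageView.bounds.size, YES, 2.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
// kCGInterpolationNone disables smoothing, mimicking kCAFilterNearest.
CGContextSetInterpolationQuality(context, kCGInterpolationNone);
[QRCodeImageView.image drawInRect:QRCodeImageView.bounds];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();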

Crop UIImage from a transformed UIImageView

I am letting the user capture an image from the camera or picking one from the library.
This image I display in an UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect as this does not take the scaling into account. The transform is applied to the imageView like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
Using a gestureRecognizer.
I assume what is happening is: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image, it does exactly that, with no regard to the scaling or positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage*) img toRect:(CGRect)rect {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale, rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.imageView.image.scale orientation:self.imageView.image.imageOrientation];
    // UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}
I pass in a cropRect from a view that is a subview of my main view (the square overlay box, like in UIImagePickerController). The main UIView has a UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get the "what you see is what you get" cropping and which factors must I take into account. Or maybe suggestions if I should implemented the hierarchy or scaling differently.
Try a simple trick. Apple has samples on its site showing how to zoom into a photo using code. Once done zooming, take the frame size of the bounding view and render it into a graphics context. E.g. a UIView contains a scroll view which holds the zoomed image; the scroll view zooms, and so does your image. Now take the frame size of your bounding UIView, create an image context from it, and save the result as a new image (see the sketch below). Tell me if that makes sense.
Cheers :)
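A rough sketch of that trick, with placeholder names (boundingView is assumed to be the view that clips the zoomed scroll view):
// Render the bounding view into an image context; whatever is visible
// inside its frame becomes the new, cropped image.
UIGraphicsBeginImageContextWithOptions(boundingView.bounds.size, NO, 0.0);
[boundingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();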
