Capturing an image that matches the on-screen preview with AVFoundation - iOS

I set the AVCaptureSession preset to AVCaptureSessionPresetPhoto:
self.session.sessionPreset = AVCaptureSessionPresetPhoto;
and then add a preview layer to my view:
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [self.view layer];
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:[rootLayer bounds]];
[rootLayer addSublayer:previewLayer];
So far so good. However, when I want to capture an image, I use the code below:
AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
[self.session stopRunning];
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData ];
self.imageView.image = image; // imageView has the same bounds as self.view
image = nil;
}];
Capturing an image works, but the captured image differs from what the AVCaptureVideoPreviewLayer shows on screen. What I really want is for the captured image to look exactly as it appeared in the preview layer. How can I achieve this? How should I resize and crop the captured image with respect to the bounds of self.view?
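One approach, echoed by the answers to the related questions below, is to map the preview layer's visible region into image coordinates with metadataOutputRectOfInterestForRect: and crop. A minimal sketch inside the completion handler above, assuming the preview layer is kept in a previewLayer property:
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height,
outputRect.size.width * width, outputRect.size.height * height);
CGImageRef croppedRef = CGImageCreateWithImageInRect(cgImage, cropRect);
self.imageView.image = [UIImage imageWithCGImage:croppedRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(croppedRef); // CGImageCreateWithImageInRect follows the Create rule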

I am not sure whether this will help you, but since you are talking about cropping images: this is how I crop an image in the UIImagePickerController delegate.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
self.lastChosenMediaType = [info objectForKey:UIImagePickerControllerMediaType];
if ([lastChosenMediaType isEqual:(NSString *)kUTTypeImage]) {
UIImage *chosenImage = [info objectForKey:UIImagePickerControllerEditedImage];
UIImage *shrunkenImage = shrinkImage(chosenImage, imageFrame.size);
self.imagee = shrunkenImage;
selectImage.image = imagee;
}
[picker dismissViewControllerAnimated:YES completion:nil];
}
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
[picker dismissViewControllerAnimated:YES completion:nil];
}
// function for scaling images; adjust the parameters to your requirements
static UIImage *shrinkImage(UIImage *original, CGSize size) {
CGFloat scale = [UIScreen mainScreen].scale;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, size.width * scale,
size.height * scale, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace); // the context retains the color space, so release our reference
CGContextDrawImage(context,
CGRectMake(0, 0, size.width * scale, size.height * scale),
original.CGImage);
CGImageRef shrunken = CGBitmapContextCreateImage(context);
UIImage *final = [UIImage imageWithCGImage:shrunken];
CGContextRelease(context);
CGImageRelease(shrunken);
return final;
}

Related

Image becomes blurred when applying CIFilter

I am working on an iOS app that crops a rectangular image from the camera. I am using CIDetector to get the rectangle features and CIFilter to crop out the rectangle. But after applying the filter, the resulting image quality becomes very poor.
Here is my code below.
I am getting video capture output from the following delegate method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
// Convert into a CIImage to find the rect features.
self.sourceImage = [[CIImage alloc] initWithCGImage:image.CGImage options:nil];
}
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
CVPixelBufferRef pb = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciimg = [CIImage imageWithCVPixelBuffer:pb];
// show result
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef ref = [context createCGImage:ciimg fromRect:ciimg.extent];
UIImage *image = [UIImage imageWithCGImage:ref scale:1.0 orientation:(UIImageOrientationUp)];
CFRelease(ref);
return (image);
}
I am also running an NSTimer in the background that detects rect features in the captured source image every 0.2 seconds:
- (void)performRectangleDetection:(CIImage *)image{
if(image == nil)
return;
NSArray *rectFeatures = [self.rectangleDetector featuresInImage:image];
if ([rectFeatures count] > 0 ) {
[self capturedImage:image];
}
}
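For reference, the 0.2-second timer described above could be scheduled along these lines (a sketch; detectionTimer and detectTick: are assumed names):
self.detectionTimer = [NSTimer scheduledTimerWithTimeInterval:0.2
target:self
selector:@selector(detectTick:)
userInfo:nil
repeats:YES];
- (void)detectTick:(NSTimer *)timer {
[self performRectangleDetection:self.sourceImage];
}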
-(void)capturedImage:(CIImage *)image
{
NSArray *rectFeatures = [self.rectangleDetector featuresInImage:image];
CIImage *resultImage = [image copy];
for (CIRectangleFeature *feature in rectFeatures) {
resultImage = [image imageByApplyingFilter:@"CIPerspectiveCorrection"
withInputParameters:@{@"inputTopLeft": [CIVector vectorWithCGPoint:feature.topLeft],
@"inputTopRight": [CIVector vectorWithCGPoint:feature.topRight],
@"inputBottomLeft": [CIVector vectorWithCGPoint:feature.bottomLeft],
@"inputBottomRight": [CIVector vectorWithCGPoint:feature.bottomRight]}];
}
UIImage *capturedImage = [[UIImage alloc] initWithCIImage: resultImage];
UIImage *finalImage = [self imageWithImage:capturedImage scaledToSize:capturedImage.size];
}
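The rectangleDetector used above is presumably created along these lines (a sketch; the accuracy option is an assumption):
self.rectangleDetector = [CIDetector detectorOfType:CIDetectorTypeRectangle
context:nil
options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];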
The finalImage is produced by this method:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
The final image sometimes comes out blurred. Is that because of the filter or because of the camera output image? Please help me solve this.
It is likely that you are not recreating the final image with the correct scale factor.
UIImage *finalImage = [UIImage imageWithCGImage:resultImage
scale:original.scale
orientation:original.imageOrientation];
If this doesn't solve the issue, please provide more code sample from the camera input, and how you converted the final CIImage from the filters into UIImage.
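For instance, a sketch of rendering the filtered CIImage into a UIImage while preserving scale and orientation (original stands for the source UIImage, resultImage for the filtered CIImage):
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgRef = [ciContext createCGImage:resultImage fromRect:resultImage.extent];
UIImage *finalImage = [UIImage imageWithCGImage:cgRef scale:original.scale orientation:original.imageOrientation];
CGImageRelease(cgRef); // createCGImage: follows the Create rule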
Use the following method to crop the image:
-(UIImage *)cropImage:(UIImage *)image withRect:(CGRect)rect {
CGImageRef cgImage = CGImageCreateWithImageInRect(image.CGImage, rect);
UIImage *croppedImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // CGImageCreateWithImageInRect follows the Create rule
return croppedImage;
}
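A possible call site, cropping a centered square (the 300-point size is arbitrary):
CGRect square = CGRectMake((image.size.width - 300) / 2, (image.size.height - 300) / 2, 300, 300);
UIImage *cropped = [self cropImage:image withRect:square];
Note that CGImageCreateWithImageInRect works in pixel coordinates, so multiply by image.scale when the image has a Retina scale.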

Objective-C - Setting an image from the camera on a UIButton

I am trying to set an image from the camera on my button, but it does not accept the cropped image; by default it uses the original width and height captured by the camera, so the image looks shrunk from top to bottom.
When I tap the UIButton the camera opens, and after the image is captured it is shown on the button:
- (IBAction)ProfImageBtnCliked:(id)sender
{
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
[picker setAllowsEditing:YES];
picker.delegate = self;
if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceRear] || [UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront] )
{
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
[self presentViewController:picker animated:YES completion:nil];
}
else
{
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Alert" message:@"No Camera Available" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alert show];
}
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
[picker dismissViewControllerAnimated:YES completion:nil];
NSData *dataForImage = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 1.0);
[ProfImgButton setImage:[UIImage imageWithData:dataForImage] forState:UIControlStateNormal];
ProfImgButton.userInteractionEnabled=NO;
ProfImgButton.contentMode=UIViewContentModeScaleAspectFill;
ProfImgButton.clipsToBounds=YES;
CGSize constraint = CGSizeMake(200,200);
NSData *imgData = UIImageJPEGRepresentation([self imageWithImage:[UIImage imageWithData:dataForImage] scaledToSize:constraint], 0);
NSLog(@"Size of Image(bytes):%lu",(unsigned long)[imgData length]);
NSString *imageString = [imgData base64EncodedStringWithOptions:NSDataBase64EncodingEndLineWithLineFeed];
NSLog(@"%@",imageString);
//assign base64 image to image string
Base64ImageString=[imageString stringByReplacingOccurrencesOfString:@"+" withString:@"%2B"];
}
//reduce image size
-(UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
UIGraphicsBeginImageContext( newSize );
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
You are setting the original image taken from your camera in your code here
[picker dismissViewControllerAnimated:YES completion:Nil];
NSData *dataForImage = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 1.0);
[ProfImgButton setImage:[UIImage imageWithData:dataForImage] forState:UIControlStateNormal];
while you are resizing the image below
CGSize constraint = CGSizeMake(200,200);
NSData *imgData = UIImageJPEGRepresentation([self imageWithImage:[UIImage imageWithData:dataForImage] scaledToSize:constraint], 0);
NSLog(@"Size of Image(bytes):%lu",(unsigned long)[imgData length]);
This data is not used anywhere; you are not setting it on your ProfImgButton.
All you need to do is set the resized image data on your button:
[ProfImgButton setImage:[UIImage imageWithData:imgData] forState:UIControlStateNormal];
Your cropping code is wrong. Write it like this:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
[picker dismissViewControllerAnimated:YES completion:nil];
CGSize size = CGSizeMake(200,200);
UIImage *originalImage = [info objectForKey:@"UIImagePickerControllerEditedImage"]; // this key returns a UIImage, not NSData
//CROP image code
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//now assign image to button
[ProfImgButton setImage:croppedImage forState:UIControlStateNormal];
ProfImgButton.userInteractionEnabled=NO;
ProfImgButton.contentMode=UIViewContentModeScaleAspectFill;
ProfImgButton.clipsToBounds=YES;
NSData *imgData = UIImageJPEGRepresentation(croppedImage , 0);
NSLog(#"Size of Image(bytes):%lu",(unsigned long)[imgData length]);
NSString *imageString = [imgData base64EncodedStringWithOptions:NSDataBase64EncodingEndLineWithLineFeed];
NSLog(#"%#",imageString);
//assign base64 image to image string
Base64ImageString=[imageString stringByReplacingOccurrencesOfString:@"+" withString:@"%2B"];
}

How to get AVCaptureStillImageOutput same aspect ratio as the AVCaptureVideoPreviewLayer?

I'm capturing an image using AVFoundation. I'm using AVCaptureVideoPreviewLayer to display the camera feed on screen. This preview layer's frame gets the bounds of a UIView with dynamic dimensions:
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [self.cameraFeedView layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = self.cameraFeedView.frame;
[previewLayer setFrame:frame];
previewLayer.frame = rootLayer.bounds;
[rootLayer insertSublayer:previewLayer atIndex:0];
And I'm using AVCaptureStillImageOutput to capture an image:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer != NULL) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *capturedImage = [UIImage imageWithData:imageData];
}
}];
My problem is that the captured image is at the size of the iPhone camera (1280x960 - front camera), but I need it to be the same aspect ratio as the preview layer. For example, if the size of the preview layer is 150x100, I need the captured image to be 960x640. Is there any solution for this?
I also encountered the same problem. You have to crop or resize the output still image, but you should pay attention to the output still's scale and image orientation.
Preview layer with a square frame:
CGFloat width = CGRectGetWidth(self.view.bounds);
[self.captureVideoPreviewLayer setFrame:CGRectMake(0, 0, width, width)];
[self.cameraView.layer addSublayer:self.captureVideoPreviewLayer];
Calculate the cropped image's frame:
[self.captureStillImageOutput captureStillImageAsynchronouslyFromConnection:captureConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
NSData *data = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:data];
CGRect cropRect = CGRectMake((image.size.height - image.size.width) / 2, 0, image.size.width, image.size.width);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation]; // always UIImageOrientationRight
CGImageRelease(imageRef);
}];

UIImagePickerController forces images in other views to full size

In one spot of my app, I need to use the camera, so I call up the UIImagePickerController. Unfortunately, once I return from the controller, most of the pictures in the app are full size, no matter what their UIImageView attributes say. The exception appears to be UIImageViews in UITableViewCells. This applies to all views in the app, not just ones that have direct connection to the viewcontroller that called the UIImagePickerController. Once while I was messing around, trying to troubleshoot, the problem seemed to disappear on its own, though I have not been able to replicate that.
The code is as follows.
-(void)viewDidAppear:(BOOL)animated {
[super viewDidAppear:animated];
if(![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]){
UIAlertView *errorAlertView = [[UIAlertView alloc] initWithTitle:@"Error"
message:@"Device has no camera"
delegate:nil
cancelButtonTitle:@"OK"
otherButtonTitles:nil];
[errorAlertView show];
[_app.navController popPage];
}
else if(firstTime){
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.delegate = self;
picker.allowsEditing = YES;
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
firstTime = false;
[self presentViewController:picker animated:YES completion:NULL];
}
}
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
UIImage *cameraImage = info[UIImagePickerControllerEditedImage];
[picker dismissViewControllerAnimated:YES completion:NULL];
NSString *folderName = @"redApp";
if([_page hasChild:[RWPAGE FOLDER]]){
folderName = [_page getStringFromNode:[RWPAGE FOLDER]];
}
NSDate *datetimeNow = [[NSDate alloc] init];
NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
[dateFormatter setDateFormat:@"yyyy-MM-dd_HH-mm-ss-SSS"];
NSString *filename = [NSString stringWithFormat:@"%@.png",[dateFormatter stringFromDate:datetimeNow]];
NSString *applicationDocumentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *folderPath = [applicationDocumentsDir stringByAppendingPathComponent:folderName];
NSString *filePath = [folderPath stringByAppendingPathComponent:filename];
NSError *error = nil;
if(![[NSFileManager defaultManager] fileExistsAtPath:folderPath isDirectory:nil]){
[[NSFileManager defaultManager] createDirectoryAtPath:folderPath withIntermediateDirectories:NO
attributes:nil error:&error];
}
if(error != nil){
NSLog(#"Create directory error: %#", error);
}
[UIImagePNGRepresentation(cameraImage) writeToFile:filePath options:NSDataWritingAtomic error:&error];
if(error != nil){
NSLog(#"Error in saving image to disk. Error : %#", error);
}
RWXmlNode *nextPage = [_xml getPage:[_page getStringFromNode:[RWPAGE CHILD]]];
nextPage = [nextPage deepClone];
[nextPage addNodeWithName:[RWPAGE FILEPATH] value:filePath];
[_app.navController pushViewWithPage:nextPage];
}
-(void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
[picker dismissViewControllerAnimated:YES completion:NULL];
[_app.navController popPage];
}
Edit:
To expand upon the above.
The base of the app is a Custom Container View Controller, acting mostly like a Navigation Controller. When a user navigates to a page (what I call the combination of a view and view controller) it is displayed on the Custom Container view, and the previous page is stored in a stack.
One of my pages calls up a UIImagePicker. Once the image picker has been closed and I return to the app, problems appear across the app when I open new pages. Not every page is affected, but several independent ones are. Most pages look completely unaffected, while the problem pages appear not to obey their constraints.
If you want to compress the image, use:
+(UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize
{
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = image.CGImage;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();
return newImage;
}
or
+(UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)targetSize {
CGFloat scaleFactor = 1.0;
if (image.size.width > targetSize.width || image.size.height > targetSize.height)
if (!((scaleFactor = (targetSize.width / image.size.width)) > (targetSize.height / image.size.height))) // scale to fit width, or
scaleFactor = targetSize.height / image.size.height; // scale to fit height.
UIGraphicsBeginImageContext(targetSize);
CGRect rect = CGRectMake((targetSize.width - image.size.width * scaleFactor) / 2,
(targetSize.height - image.size.height * scaleFactor) / 2,
image.size.width * scaleFactor, image.size.height * scaleFactor);
[image drawInRect:rect];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
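A possible call site for either helper (ImageUtility is a placeholder for whatever class hosts these class methods):
UIImage *resized = [ImageUtility scaleImage:cameraImage toSize:CGSizeMake(200, 200)];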
Try it like this:
UIImage *thumbnail = [UIImage imageNamed:@"yourimage.png"];
CGSize itemSize = CGSizeMake(35, 35);
UIGraphicsBeginImageContext(itemSize);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
[thumbnail drawInRect:imageRect];
// Now this below image contains compressed image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

How to crop an image from AVCapture to a rect seen on the display

This is driving me crazy because I can't get it to work. I have the following scenario:
I'm using an AVCaptureSession and an AVCaptureVideoPreviewLayer to create my own camera interface. The interface shows a rectangle; beneath it, the AVCaptureVideoPreviewLayer fills the whole screen.
I want the captured image to be cropped so that the resulting image shows exactly the content seen in the rect on the display.
My setup looks like this:
_session = [[AVCaptureSession alloc] init];
AVCaptureSession *session = _session;
session.sessionPreset = AVCaptureSessionPresetPhoto;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (camera == nil) {
[self showImagePicker];
_isSetup = YES;
return;
}
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = self.liveCapturePlaceholderView.bounds;
[self.liveCapturePlaceholderView.layer addSublayer:captureVideoPreviewLayer];
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (error) {
HGAlertViewWrapper *av = [[HGAlertViewWrapper alloc] initWithTitle:kFailedConnectingToCameraAlertViewTitle message:kFailedConnectingToCameraAlertViewMessage cancelButtonTitle:kFailedConnectingToCameraAlertViewCancelButtonTitle otherButtonTitles:@[kFailedConnectingToCameraAlertViewRetryButtonTitle]];
[av showWithBlock:^(NSString *buttonTitle){
if ([buttonTitle isEqualToString:kFailedConnectingToCameraAlertViewCancelButtonTitle]) {
[self.delegate gloameCameraViewControllerDidCancel:self];
}
else {
[self setupAVSession];
}
}];
}
[session addInput:input];
NSDictionary *options = @{ AVVideoCodecKey : AVVideoCodecJPEG };
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[_stillImageOutput setOutputSettings:options];
[session addOutput:_stillImageOutput];
[session startRunning];
_isSetup = YES;
I'm capturing the image like this:
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
if (error) {
MWLogDebug(@"Error capturing image from camera. %@, %@", error, [error userInfo]);
_capturePreviewLayer.connection.enabled = YES;
}
else
{
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
CGRect cropRect = [self createCropRectForImage:image];
UIImage *croppedImage;// = [self cropImage:image toRect:cropRect];
UIGraphicsBeginImageContext(cropRect.size);
[image drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.capturedImage = croppedImage;
[_session stopRunning];
}
}];
In the createCropRectForImage: method I've tried various ways to calculate the rect to cut out of the image, but with no success so far.
- (CGRect)createCropRectForImage:(UIImage *)image
{
CGPoint maskTopLeftCorner = CGPointMake(self.maskRectView.frame.origin.x, self.maskRectView.frame.origin.y);
CGPoint maskBottomRightCorner = CGPointMake(self.maskRectView.frame.origin.x + self.maskRectView.frame.size.width, self.maskRectView.frame.origin.y + self.maskRectView.frame.size.height);
CGPoint maskTopLeftCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskTopLeftCorner fromLayer:self.maskRectView.layer.superlayer];
CGPoint maskBottomRightCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskBottomRightCorner fromLayer:self.maskRectView.layer.superlayer];
CGPoint maskTopLeftCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskTopLeftCornerInLayerCoords];
CGPoint maskBottomRightCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskBottomRightCornerInLayerCoords];
float x = maskTopLeftCornerInDeviceCoords.x * image.size.width;
float y = (1 - maskTopLeftCornerInDeviceCoords.y) * image.size.height;
float width = fabsf(maskTopLeftCornerInDeviceCoords.x - maskBottomRightCornerInDeviceCoords.x) * image.size.width;
float height = fabsf(maskTopLeftCornerInDeviceCoords.y - maskBottomRightCornerInDeviceCoords.y) * image.size.height;
return CGRectMake(x, y, width, height);
}
That is my current version, but it doesn't even get the proportions right. Could someone please help me?
I have also tried using this method to crop my image:
- (UIImage*)cropImage:(UIImage*)originalImage toRect:(CGRect)rect{
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
CGContextRef bitmap = CGBitmapContextCreate(NULL, rect.size.width, rect.size.height, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
if (originalImage.imageOrientation == UIImageOrientationLeft) {
CGContextRotateCTM (bitmap, radians(90));
CGContextTranslateCTM (bitmap, 0, -rect.size.height);
} else if (originalImage.imageOrientation == UIImageOrientationRight) {
CGContextRotateCTM (bitmap, radians(-90));
CGContextTranslateCTM (bitmap, -rect.size.width, 0);
} else if (originalImage.imageOrientation == UIImageOrientationUp) {
// NOTHING
} else if (originalImage.imageOrientation == UIImageOrientationDown) {
CGContextTranslateCTM (bitmap, rect.size.width, rect.size.height);
CGContextRotateCTM (bitmap, radians(-180.));
}
CGContextDrawImage(bitmap, CGRectMake(0, 0, rect.size.width, rect.size.height), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *resultImage=[UIImage imageWithCGImage:ref];
CGImageRelease(imageRef);
CGContextRelease(bitmap);
CGImageRelease(ref);
return resultImage;
}
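Note that this snippet relies on a radians() helper that isn't shown; a common definition is:
static inline double radians(double degrees) { return degrees * M_PI / 180.0; }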
Does anybody have the 'right combination' of methods to make this work? :)
In Swift 3:
private func cropToPreviewLayer(originalImage: UIImage) -> UIImage {
let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
var cgImage = originalImage.cgImage!
let width = CGFloat(cgImage.width)
let height = CGFloat(cgImage.height)
let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
cgImage = cgImage.cropping(to: cropRect)!
let croppedUIImage = UIImage(cgImage: cgImage, scale: 1.0, orientation: originalImage.imageOrientation)
return croppedUIImage
}
I solved this problem by using the metadataOutputRectOfInterestForRect: method. It works with any orientation.
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
if (error)
{
[_delegate cameraView:self error:@"Take picture failed"];
}
else
{
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *takenImage = [UIImage imageWithData:jpegData];
CGRect outputRect = [_previewLayer metadataOutputRectOfInterestForRect:_previewLayer.bounds];
CGImageRef takenCGImage = takenImage.CGImage;
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
CGImageRelease(cropCGImage);
}
}
];
The takenImage is still orientation-dependent. You can strip the orientation information for further image processing:
UIGraphicsBeginImageContext(takenImage.size);
[takenImage drawAtPoint:CGPointZero];
takenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In Swift 4:
I prefer to never force-unwrap to avoid crashes, so I use optionals and guards in mine.
private func cropToPreviewLayer(originalImage: UIImage) -> UIImage? {
guard let cgImage = originalImage.cgImage else { return nil }
let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
let width = CGFloat(cgImage.width)
let height = CGFloat(cgImage.height)
let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
if let croppedCGImage = cgImage.cropping(to: cropRect) {
return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
}
return nil
}
AVMakeRectWithAspectRatioInsideRect
This API is from AVFoundation; it returns the largest rectangle with a given aspect ratio that fits inside a bounding rectangle, which you can use as the crop region.
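For example (a sketch fitting a 4:3 still image into the preview layer's bounds):
CGRect cropRegion = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(4, 3), previewLayer.bounds);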
