How to get AVCaptureStillImageOutput same aspect ratio as the AVCaptureVideoPreviewLayer? - ios

I'm capturing an image using AVFoundation. I'm using AVCaptureVideoPreviewLayer to display the camera feed on screen. The preview layer's frame is set to the bounds of a UIView with dynamic dimensions:
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [self.cameraFeedView layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = self.cameraFeedView.frame;
[previewLayer setFrame:frame];
previewLayer.frame = rootLayer.bounds;
[rootLayer insertSublayer:previewLayer atIndex:0];
And I'm using AVCaptureStillImageOutput to capture an image:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer != NULL) {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *capturedImage = [UIImage imageWithData:imageData];
    }
}];
My problem is that the captured image is at the size of the iPhone camera (1280x960 - front camera), but I need it to be the same aspect ratio as the preview layer. For example, if the size of the preview layer is 150x100, I need the captured image to be 960x640. Is there any solution for this?

I also encountered the same problem. You have to crop or resize the output still image, but note the output still image's scale and orientation.
Preview layer with a square frame:
CGFloat width = CGRectGetWidth(self.view.bounds);
[self.captureVideoPreviewLayer setFrame:CGRectMake(0, 0, width, width)];
[self.cameraView.layer addSublayer:self.captureVideoPreviewLayer];
Calculate the cropped image's frame:
[self.captureStillImageOutput captureStillImageAsynchronouslyFromConnection:captureConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    NSData *data = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *image = [[UIImage alloc] initWithData:data];
    // The CGImage is stored in sensor (landscape) orientation, so crop a centered square from it.
    CGRect cropRect = CGRectMake((image.size.height - image.size.width) / 2, 0, image.size.width, image.size.width);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation]; // always UIImageOrientationRight
    CGImageRelease(imageRef);
}];
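For a non-square preview, the same idea generalizes. Below is a hedged Swift sketch (a hypothetical helper, not part of the answer above) of a centered aspect-fill crop matching what AVLayerVideoGravityResizeAspectFill displays; as noted above, the CGImage is in sensor coordinates, so orientation still has to be taken into account:
import UIKit

// Hedged sketch: centered aspect-fill crop. `previewAspect` is the
// preview layer's width/height ratio; the image is cropped to that
// ratio around its center, mirroring AVLayerVideoGravityResizeAspectFill.
func cropToAspect(_ image: UIImage, previewAspect: CGFloat) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let w = CGFloat(cgImage.width)   // sensor-space width (ignores orientation)
    let h = CGFloat(cgImage.height)
    var cropSize = CGSize(width: w, height: h)
    if w / h > previewAspect {
        cropSize.width = h * previewAspect   // image is wider: trim the sides
    } else {
        cropSize.height = w / previewAspect  // image is taller: trim top/bottom
    }
    let cropRect = CGRect(x: (w - cropSize.width) / 2,
                          y: (h - cropSize.height) / 2,
                          width: cropSize.width,
                          height: cropSize.height)
    guard let cropped = cgImage.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}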

Related

(howto) Get face from input to front camera as UIImage

I'm working on scanning the front camera input for faces, detecting them, and getting them as UIImage objects.
I'm using AVFoundation to scan and detect faces.
Like this:
let input = try AVCaptureDeviceInput(device: captureDevice)
captureSession = AVCaptureSession()
captureSession!.addInput(input)
output = AVCaptureMetadataOutput()
captureSession?.addOutput(output)
output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
output.metadataObjectTypes = [AVMetadataObjectTypeFace]
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
view.layer.addSublayer(videoPreviewLayer!)
captureSession?.startRunning()
In the delegate method didOutputMetadataObjects I get the face as an AVMetadataFaceObject and highlight it with a red frame like this:
let metadataObj = metadataObjects[0] as! AVMetadataFaceObject
let faceObject = videoPreviewLayer?.transformedMetadataObjectForMetadataObject(metadataObj)
faceFrame?.frame = faceObject!.bounds
The question is: how can I get the faces as UIImages?
I've tried hooking into didOutputSampleBuffer, but it isn't called at all.
I did the same thing using didOutputSampleBuffer and Objective-C. It looks like:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
    if (attachments)
        CFRelease(attachments);
    NSNumber *orientation = (__bridge NSNumber *)(CMGetAttachment(sampleBuffer, kCGImagePropertyOrientation, NULL));
    NSArray *features = [[CIDetector detectorOfType:CIDetectorTypeFace context:nil options:@{ CIDetectorAccuracy: CIDetectorAccuracyHigh }] featuresInImage:ciImage options:@{ CIDetectorImageOrientation: orientation }];
    if (features.count == 1) {
        CIFaceFeature *faceFeature = [features firstObject];
        CGRect faceRect = faceFeature.bounds;
        CGImageRef tempImage = [[CIContext contextWithOptions:nil] createCGImage:ciImage fromRect:ciImage.extent];
        // Note: EXIF orientation values do not map one-to-one onto UIImageOrientation.
        UIImage *image = [UIImage imageWithCGImage:tempImage scale:1.0 orientation:orientation.intValue];
        UIImage *face = [image extractFace:faceRect];
    }
}
where extractFace is a category method on UIImage:
- (UIImage *)extractFace:(CGRect)rect {
    rect = CGRectMake(rect.origin.x * self.scale,
                      rect.origin.y * self.scale,
                      rect.size.width * self.scale,
                      rect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
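One caveat: CIFaceFeature.bounds is expressed in Core Image coordinates, whose origin is at the bottom-left, while CGImageCreateWithImageInRect expects a top-left origin. If the extracted face lands in the wrong vertical position, a flip like this hypothetical Swift helper may be needed first:
import CoreGraphics

// Hypothetical helper: convert a bottom-left-origin Core Image rect
// into the top-left-origin rect that CGImage cropping expects.
func flipRectVertically(_ rect: CGRect, imageHeight: CGFloat) -> CGRect {
    var flipped = rect
    flipped.origin.y = imageHeight - rect.origin.y - rect.height
    return flipped
}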
Creating video output:
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCMPixelFormat_32BGRA] };
videoOutput.alwaysDiscardsLateVideoFrames = YES;
self.videoOutputQueue = dispatch_queue_create("OutputQueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:self.videoOutputQueue];
[self.session addOutput:videoOutput];
- (UIImage *)screenshot {
    CGSize size = CGSizeMake(faceFrame.frame.size.width, faceFrame.frame.size.height);
    UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
    CGRect rec = CGRectMake(faceFrame.frame.origin.x, faceFrame.frame.origin.y, faceFrame.frame.size.width, faceFrame.frame.size.height);
    [_viewController.view drawViewHierarchyInRect:rec afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Take some cues from the above:
let contextImage: UIImage = <<screenshot>>!
let cropRect: CGRect = CGRectMake(x, y, width, height)
let imageRef: CGImageRef = CGImageCreateWithImageInRect(contextImage.CGImage, cropRect)
let image: UIImage = UIImage(CGImage: imageRef, scale: originalImage.scale, orientation: originalImage.imageOrientation)!
I suggest using the UIImagePickerController class to implement your custom camera for picking multiple images for face detection. Check Apple's sample code PhotoPicker.
To sum up: use UIImagePickerController with the camera as sourceType, and handle its delegate method imagePickerController:didFinishPickingMediaWithInfo: to capture the image. You can also check the takePicture function if it helps; a minimal sketch follows.
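Here is a minimal Swift sketch of that flow; the class name is illustrative and not from Apple's PhotoPicker sample:
import UIKit

// Minimal sketch: present the camera and receive the captured image.
class CameraPickerViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func presentCamera() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let image = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        // Run face detection on `image` here.
        _ = image
    }
}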

AVCapture / AVCaptureVideoPreviewLayer troubles getting the correct visible image

I am currently having huge trouble getting what I want from AVCapture and AVCaptureVideoPreviewLayer, etc.
I am currently creating an app (available for iPhone devices, though it would be better if it also worked on iPad) where I want to put a small preview of my camera in the middle of my view, as shown in this picture:
To do that, I want to keep the camera's aspect ratio, so I used this configuration:
rgbaImage = nil;
NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *device = [possibleDevices firstObject];
if (!device) return;
AVCaptureSession *session = [[AVCaptureSession alloc] init];
self.captureSession = session;
self.captureDevice = device;
NSError *error = nil;
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input)
{
    [[[UIAlertView alloc] initWithTitle:NSLocalizedString(@"NoCameraAuthorizationTitle", nil)
                                message:NSLocalizedString(@"NoCameraAuthorizationMsg", nil)
                               delegate:self
                      cancelButtonTitle:NSLocalizedString(@"OK", nil)
                      otherButtonTitles:nil] show];
    return;
}
[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetPhoto;
[session addInput:input];
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:dataOutput];
self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:self.stillImageOutput];
connection = [dataOutput.connections firstObject];
[self setupCameraOrientation];
NSError *errorLock;
if ([device lockForConfiguration:&errorLock])
{
    // Frame rate
    device.activeVideoMinFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
    device.activeVideoMaxFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
    AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;
    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    CGPoint point = CGPointMake(0.5, 0.5);
    if ([device isAutoFocusRangeRestrictionSupported])
    {
        device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
    }
    if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:focusMode])
    {
        [device setFocusPointOfInterest:point];
        [device setFocusMode:focusMode];
    }
    if ([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:exposureMode])
    {
        [device setExposurePointOfInterest:point];
        [device setExposureMode:exposureMode];
    }
    if ([device isLowLightBoostSupported])
    {
        device.automaticallyEnablesLowLightBoostWhenAvailable = YES;
    }
    [device unlockForConfiguration];
}
if (device.isFlashAvailable)
{
    [device lockForConfiguration:nil];
    [device setFlashMode:AVCaptureFlashModeOff];
    [device unlockForConfiguration];
    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
    {
        [device lockForConfiguration:nil];
        [device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        [device unlockForConfiguration];
    }
}
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.layer insertSublayer:previewLayer atIndex:0];
[session commitConfiguration];
As you can see, I am using the AVLayerVideoGravityResizeAspectFill property to ensure that I keep the proper ratio.
My trouble starts here, as I tried many things but never really succeeded.
My goal is to get a picture equivalent to what the user can see in the previewLayer, knowing that the video frame gives a bigger image than the one visible in the preview.
I tried 3 methods:
1) Using my own computation: since I know both the video frame size and my screen size, as well as the layer's size and position, I tried to compute the ratio and use it to compute the equivalent position in the video frame. I found out that the video frame (sampleBuffer) is in pixels, while the position I get from the main screen bounds is in points and has to be multiplied by a scale factor to get pixels, assuming that the video frame size matches the device's full screen size.
--> This actually gave me a really good result on my iPad: both height and width are good, but the (x,y) origin is shifted a bit from the original... (detail: if I remove 72 pixels from the position I find, I get the right output)
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
    if (self.forceStop) return;
    if (_isStopped || _isCapturing || !CMSampleBufferIsValid(sampleBuffer)) return;
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    __block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGRect rect = image.extent;
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGFloat screenWidth = screenRect.size.width/* * [UIScreen mainScreen].scale*/;
    CGFloat screenHeight = screenRect.size.height/* * [UIScreen mainScreen].scale*/;
    NSLog(@"%f, %f ---", screenWidth, screenHeight);
    float myRatio = (rect.size.height / screenHeight);
    float myRatioW = (rect.size.width / screenWidth);
    NSLog(@"Ratio w: %f h: %f ---", myRatioW, myRatio);
    CGPoint p = [captureViewControler.view convertPoint:previewLayer.frame.origin toView:nil];
    NSLog(@"-Av-> %f, %f --> %f, %f", p.x, p.y, self.bounds.size.height, self.bounds.size.width);
    rect.origin = CGPointMake(p.x * myRatioW, p.y * myRatio);
    NSLog(@"%f, %f ----> %f %f", rect.origin.x, rect.origin.y, rect.size.width, rect.size.height);
    NSLog(@"%f", previewLayer.frame.size.height * (rect.size.height / screenHeight));
    rect.size = CGSizeMake(rect.size.width, previewLayer.frame.size.height * myRatio);
    image = [image imageByCroppingToRect:rect];
    its = [ImageUtils cropImageToRect:uiImage(sampleBuffer) toRect:rect];
    NSLog(@"--------------------------------------------");
    [captureViewControler sendToPreview:its];
}
2) Using still image capture: this method worked as long as I was on an iPad. But the real trouble is that I'm using these cropped frames to feed an image library, and captureStillImageAsynchronouslyFromConnection plays the system picture sound (I read a lot about "solutions" like playing another sound to mask it, but they don't work and don't fix the accompanying freeze on iPhone 6), so this method seems inappropriate.
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connect in self.stillImageOutput.connections)
{
    for (AVCaptureInputPort *port in [connect inputPorts])
    {
        if ([[port mediaType] isEqual:AVMediaTypeVideo])
        {
            videoConnection = connect;
            break;
        }
    }
    if (videoConnection) { break; }
}
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                   completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
    if (error)
    {
        NSLog(@"Take picture failed");
    }
    else
    {
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *takenImage = [UIImage imageWithData:jpegData];
        CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
        NSLog(@"image cropped: %@", NSStringFromCGRect(outputRect));
        CGImageRef takenCGImage = takenImage.CGImage;
        size_t width = CGImageGetWidth(takenCGImage);
        size_t height = CGImageGetHeight(takenCGImage);
        NSLog(@"Size cropped: w: %zu h: %zu", width, height);
        CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
        NSLog(@"final cropped: %@", NSStringFromCGRect(cropRect));
        CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
        takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
        CGImageRelease(cropCGImage);
        its = [ImageUtils rotateUIImage:takenImage];
        image = [[CIImage alloc] initWithImage:its];
    }
}];
3) Using metadata output with a ratio: this is not working at all, but I thought it would help me the most, since it works in the still image process (using the metadataOutputRectOfInterestForRect result to get the percentage and then combining it with the ratio). I wanted to use this and add the ratio difference between the pictures to get the correct output.
CGRect rect = image.extent;
CGSize size = CGSizeMake(1936.0, 2592.0);
float rh = (size.height / rect.size.height);
float rw = (size.width / rect.size.width);
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
NSLog(@"before cropped: %@", NSStringFromCGRect(outputRect));
outputRect.origin.x = MIN(1.0, outputRect.origin.x * rw);
outputRect.origin.y = MIN(1.0, outputRect.origin.y * rh);
outputRect.size.width = MIN(1.0, outputRect.size.width * rw);
outputRect.size.height = MIN(1.0, outputRect.size.height * rh);
NSLog(@"final cropped: %@", NSStringFromCGRect(outputRect));
UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
NSLog(@"takenImage: %@", NSStringFromCGSize(takenImage.size));
CGImageRef takenCGImage = [[CIContext contextWithOptions:nil] createCGImage:image fromRect:[image extent]];
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
NSLog(@"Size cropped: w: %zu h: %zu", width, height);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
I hope someone will be able to help me with this.
Thanks a lot.
I finally found the solution using the code below. My error was trying to use a ratio between the images, without considering that metadataOutputRectOfInterestForRect returns normalized (percentage) values, which don't need to be adjusted for the other image.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    __block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
    outputRect.origin.y = outputRect.origin.x;
    outputRect.origin.x = 0;
    outputRect.size.height = outputRect.size.width;
    outputRect.size.width = 1;
    UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
    CGImageRef takenCGImage = [cicontext createCGImage:image fromRect:[image extent]];
    size_t width = CGImageGetWidth(takenCGImage);
    size_t height = CGImageGetHeight(takenCGImage);
    CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
    CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
    UIImage *its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
}
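The manual swap above is presumably needed because the normalized rect is expressed in the output buffer's landscape coordinate space while the square preview is laid out in portrait: the x offset becomes the y offset of a full-width crop. A hedged Swift rendering of the same fix-up (previewLayer is assumed to be the on-screen AVCaptureVideoPreviewLayer):
import AVFoundation
import UIKit

// Hedged sketch: pin the crop to the buffer's full width and reuse the
// normalized x offset/extent as the y offset/extent, as in the answer.
func squareCropRect(for previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    let r = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
    return CGRect(x: 0, y: r.origin.x, width: 1, height: r.size.width)
}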

UIImagePickerController forces images in other views to full size

In one spot of my app, I need to use the camera, so I call up the UIImagePickerController. Unfortunately, once I return from the controller, most of the pictures in the app are full size, no matter what their UIImageView attributes say. The exception appears to be UIImageViews in UITableViewCells. This applies to all views in the app, not just ones that have a direct connection to the view controller that called the UIImagePickerController. Once, while I was messing around trying to troubleshoot, the problem seemed to disappear on its own, though I have not been able to replicate that.
The code is as follows.
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        UIAlertView *errorAlertView = [[UIAlertView alloc] initWithTitle:@"Error"
                                                                 message:@"Device has no camera"
                                                                delegate:nil
                                                       cancelButtonTitle:@"OK"
                                                       otherButtonTitles:nil];
        [errorAlertView show];
        [_app.navController popPage];
    } else if (firstTime) {
        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.allowsEditing = YES;
        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        firstTime = NO;
        [self presentViewController:picker animated:YES completion:NULL];
    }
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *cameraImage = info[UIImagePickerControllerEditedImage];
    [picker dismissViewControllerAnimated:YES completion:NULL];
    NSString *folderName = @"redApp";
    if ([_page hasChild:[RWPAGE FOLDER]]) {
        folderName = [_page getStringFromNode:[RWPAGE FOLDER]];
    }
    NSDate *datetimeNow = [[NSDate alloc] init];
    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"yyyy-MM-dd_HH-mm-ss-SSS"];
    NSString *filename = [NSString stringWithFormat:@"%@.png", [dateFormatter stringFromDate:datetimeNow]];
    NSString *applicationDocumentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *folderPath = [applicationDocumentsDir stringByAppendingPathComponent:folderName];
    NSString *filePath = [folderPath stringByAppendingPathComponent:filename];
    NSError *error = nil;
    if (![[NSFileManager defaultManager] fileExistsAtPath:folderPath isDirectory:nil]) {
        [[NSFileManager defaultManager] createDirectoryAtPath:folderPath withIntermediateDirectories:NO
                                                   attributes:nil error:&error];
    }
    if (error != nil) {
        NSLog(@"Create directory error: %@", error);
    }
    [UIImagePNGRepresentation(cameraImage) writeToFile:filePath options:NSDataWritingAtomic error:&error];
    if (error != nil) {
        NSLog(@"Error in saving image to disk. Error: %@", error);
    }
    RWXmlNode *nextPage = [_xml getPage:[_page getStringFromNode:[RWPAGE CHILD]]];
    nextPage = [nextPage deepClone];
    [nextPage addNodeWithName:[RWPAGE FILEPATH] value:filePath];
    [_app.navController pushViewWithPage:nextPage];
}
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [picker dismissViewControllerAnimated:YES completion:NULL];
    [_app.navController popPage];
}
Edit:
To expand upon the above.
The base of the app is a Custom Container View Controller, acting mostly like a Navigation Controller. When a user navigates to a page (what I call the combination of a view and view controller) it is displayed on the Custom Container view, and the previous page is stored in a stack.
One of my pages calls up a UIImagePicker. Once the image picker has been closed and I return to the app, problems appear across the app when I open new pages. Not every page shows problems, but several independent pages do. Most pages look completely unaffected, while the problem pages appear not to obey their constraints.
If you want to compress/resize the image, use:
+ (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize
{
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);
    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);
    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
or
+ (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)targetSize {
    CGFloat scaleFactor = 1.0;
    if (image.size.width > targetSize.width || image.size.height > targetSize.height) {
        if (!((scaleFactor = (targetSize.width / image.size.width)) > (targetSize.height / image.size.height))) { // scale to fit width, or
            scaleFactor = targetSize.height / image.size.height; // scale to fit height
        }
    }
    UIGraphicsBeginImageContext(targetSize);
    CGRect rect = CGRectMake((targetSize.width - image.size.width * scaleFactor) / 2,
                             (targetSize.height - image.size.height * scaleFactor) / 2,
                             image.size.width * scaleFactor, image.size.height * scaleFactor);
    [image drawInRect:rect];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
Try it like this:
UIImage *thumbnail = [UIImage imageNamed:@"yourimage.png"];
CGSize itemSize = CGSizeMake(35, 35);
UIGraphicsBeginImageContext(itemSize);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
[thumbnail drawInRect:imageRect];
// Now this below image contains compressed image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
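For reference, here is a hedged modern Swift equivalent of the thumbnail snippet above, using UIGraphicsImageRenderer (iOS 10+); the 35x35 size mirrors the example:
import UIKit

// Hedged sketch: redraw the image at thumbnail size.
func thumbnail(of image: UIImage, size: CGSize = CGSize(width: 35, height: 35)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}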

Capturing image preview as like as on the screen with AVFoundation

I set the AVCaptureSession preset to AVCaptureSessionPresetPhoto:
self.session.sessionPreset = AVCaptureSessionPresetPhoto;
and then add a new preview layer to my view with:
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [self.view layer];
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:[rootLayer bounds]];
[rootLayer addSublayer:previewLayer];
So far so good. However, when I want to capture an image, I use the code below:
AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    [self.session stopRunning];
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    UIImage *image = [[UIImage alloc] initWithData:imageData];
    self.imageView.image = image; // imageView has the bounds of self.view
    image = nil;
}];
Capturing an image works fine; however, the captured image differs from what the AVCaptureVideoPreviewLayer shows on the screen. What I really want is for the captured image to look exactly as it appears in the preview layer. How can I achieve this? How should I resize and crop the captured image with respect to the bounds of self.view?
I'm not sure whether this will help you, but since you're talking about cropping images: I crop the image in the image picker view with the following code.
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info {
    self.lastChosenMediaType = [info objectForKey:UIImagePickerControllerMediaType];
    if ([lastChosenMediaType isEqual:(NSString *)kUTTypeImage]) {
        UIImage *chosenImage = [info objectForKey:UIImagePickerControllerEditedImage];
        UIImage *shrunkenImage = shrinkImage(chosenImage, imageFrame.size);
        self.imagee = shrunkenImage;
        selectImage.image = imagee;
    }
    [picker dismissModalViewControllerAnimated:YES];
}
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [picker dismissModalViewControllerAnimated:YES];
}
// Function for cropping/scaling images; adjust the parameters to your requirements
static UIImage *shrinkImage(UIImage *original, CGSize size) {
    CGFloat scale = [UIScreen mainScreen].scale;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, size.width * scale,
        size.height * scale, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace); // the context retains what it needs
    CGContextDrawImage(context,
        CGRectMake(0, 0, size.width * scale, size.height * scale),
        original.CGImage);
    CGImageRef shrunken = CGBitmapContextCreateImage(context);
    UIImage *final = [UIImage imageWithCGImage:shrunken];
    CGContextRelease(context);
    CGImageRelease(shrunken);
    return final;
}

How to crop an image from AVCapture to a rect seen on the display

This is driving me crazy because I can't get it to work. I have the following scenario:
I'm using an AVCaptureSession and an AVCaptureVideoPreviewLayer to create my own camera interface. The interface shows a rectangle; beneath it is the AVCaptureVideoPreviewLayer, which fills the whole screen.
I want the captured image to be cropped so that the resulting image shows exactly the content seen in the rect on the display.
My setup looks like this:
_session = [[AVCaptureSession alloc] init];
AVCaptureSession *session = _session;
session.sessionPreset = AVCaptureSessionPresetPhoto;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (camera == nil) {
    [self showImagePicker];
    _isSetup = YES;
    return;
}
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = self.liveCapturePlaceholderView.bounds;
[self.liveCapturePlaceholderView.layer addSublayer:captureVideoPreviewLayer];
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (error) {
    HGAlertViewWrapper *av = [[HGAlertViewWrapper alloc] initWithTitle:kFailedConnectingToCameraAlertViewTitle message:kFailedConnectingToCameraAlertViewMessage cancelButtonTitle:kFailedConnectingToCameraAlertViewCancelButtonTitle otherButtonTitles:@[kFailedConnectingToCameraAlertViewRetryButtonTitle]];
    [av showWithBlock:^(NSString *buttonTitle){
        if ([buttonTitle isEqualToString:kFailedConnectingToCameraAlertViewCancelButtonTitle]) {
            [self.delegate gloameCameraViewControllerDidCancel:self];
        }
        else {
            [self setupAVSession];
        }
    }];
}
[session addInput:input];
NSDictionary *options = @{ AVVideoCodecKey : AVVideoCodecJPEG };
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[_stillImageOutput setOutputSettings:options];
[session addOutput:_stillImageOutput];
[session startRunning];
_isSetup = YES;
I'm capturing the image like this:
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    if (error) {
        MWLogDebug(@"Error capturing image from camera. %@, %@", error, [error userInfo]);
        _capturePreviewLayer.connection.enabled = YES;
    }
    else
    {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        CGRect cropRect = [self createCropRectForImage:image];
        UIImage *croppedImage; // = [self cropImage:image toRect:cropRect];
        UIGraphicsBeginImageContext(cropRect.size);
        [image drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
        croppedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        self.capturedImage = croppedImage;
        [_session stopRunning];
    }
}];
In the createCropRectForImage: method I've tried various ways to calculate the rect to cut out of the image, but with no success so far.
- (CGRect)createCropRectForImage:(UIImage *)image
{
    CGPoint maskTopLeftCorner = CGPointMake(self.maskRectView.frame.origin.x, self.maskRectView.frame.origin.y);
    CGPoint maskBottomRightCorner = CGPointMake(self.maskRectView.frame.origin.x + self.maskRectView.frame.size.width, self.maskRectView.frame.origin.y + self.maskRectView.frame.size.height);
    CGPoint maskTopLeftCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskTopLeftCorner fromLayer:self.maskRectView.layer.superlayer];
    CGPoint maskBottomRightCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskBottomRightCorner fromLayer:self.maskRectView.layer.superlayer];
    CGPoint maskTopLeftCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskTopLeftCornerInLayerCoords];
    CGPoint maskBottomRightCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskBottomRightCornerInLayerCoords];
    float x = maskTopLeftCornerInDeviceCoords.x * image.size.width;
    float y = (1 - maskTopLeftCornerInDeviceCoords.y) * image.size.height;
    float width = fabsf(maskTopLeftCornerInDeviceCoords.x - maskBottomRightCornerInDeviceCoords.x) * image.size.width;
    float height = fabsf(maskTopLeftCornerInDeviceCoords.y - maskBottomRightCornerInDeviceCoords.y) * image.size.height;
    return CGRectMake(x, y, width, height);
}
That is my current version, but it doesn't even get the proportions right. Could someone please help me?
I have also tried using this method to crop my image:
- (UIImage *)cropImage:(UIImage *)originalImage toRect:(CGRect)rect {
    CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
    CGContextRef bitmap = CGBitmapContextCreate(NULL, rect.size.width, rect.size.height, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
    if (originalImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -rect.size.height);
    } else if (originalImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -rect.size.width, 0);
    } else if (originalImage.imageOrientation == UIImageOrientationUp) {
        // nothing to do
    } else if (originalImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, rect.size.width, rect.size.height);
        CGContextRotateCTM(bitmap, radians(-180.));
    }
    CGContextDrawImage(bitmap, CGRectMake(0, 0, rect.size.width, rect.size.height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *resultImage = [UIImage imageWithCGImage:ref];
    CGImageRelease(imageRef);
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return resultImage;
}
Does anybody have the 'right combination' of methods to make this work? :)
In Swift 3:
private func cropToPreviewLayer(originalImage: UIImage) -> UIImage {
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
    var cgImage = originalImage.cgImage!
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
    cgImage = cgImage.cropping(to: cropRect)!
    let croppedUIImage = UIImage(cgImage: cgImage, scale: 1.0, orientation: originalImage.imageOrientation)
    return croppedUIImage
}
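A hedged usage sketch: the jpegData parameter and the surrounding capture callback are assumptions, and the helper is assumed to live on the same type as previewLayer:
// Call the helper from your capture completion once you have JPEG data.
func didCapturePhoto(jpegData: Data) {
    guard let original = UIImage(data: jpegData) else { return }
    let visible = cropToPreviewLayer(originalImage: original)
    // `visible` now matches what the preview layer showed on screen.
    _ = visible
}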
I've solved this problem using the metadataOutputRectOfInterestForRect function.
It works with any orientation.
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
    if (error)
    {
        [_delegate cameraView:self error:@"Take picture failed"];
    }
    else
    {
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *takenImage = [UIImage imageWithData:jpegData];
        CGRect outputRect = [_previewLayer metadataOutputRectOfInterestForRect:_previewLayer.bounds];
        CGImageRef takenCGImage = takenImage.CGImage;
        size_t width = CGImageGetWidth(takenCGImage);
        size_t height = CGImageGetHeight(takenCGImage);
        CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
        CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
        takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
        CGImageRelease(cropCGImage);
    }
}];
The takenImage still depends on imageOrientation. You can strip the orientation information for further image processing:
UIGraphicsBeginImageContext(takenImage.size);
[takenImage drawAtPoint:CGPointZero];
takenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In Swift 4:
I prefer to never force-unwrap to avoid crashes, so I use optionals and guards in mine.
private func cropToPreviewLayer(originalImage: UIImage) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
AVMakeRectWithAspectRatioInsideRect
This API is from AVFoundation; it returns the largest rect with a given aspect ratio that fits inside a bounding rect, which you can use as the crop region for the image. A hedged sketch follows.
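For illustration, a minimal Swift sketch of AVMakeRect (the Swift name of this API); the sizes are made-up examples:
import AVFoundation
import UIKit

// Hedged sketch: AVMakeRect fits a given aspect ratio inside a bounding
// rect; the result can serve as the crop/display region.
let bounds = CGRect(x: 0, y: 0, width: 375, height: 667)
let fitted = AVMakeRect(aspectRatio: CGSize(width: 4, height: 3), insideRect: bounds)
// fitted == (0.0, 192.875, 375.0, 281.25): full width, centered vertically.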
