(howto) Get face from front camera input as UIImage - iOS

I'm working on scanning the front camera input for faces, detecting them, and getting them as UIImage objects.
I'm using AVFoundation to scan and detect faces.
Like this:
let input = try AVCaptureDeviceInput(device: captureDevice)
captureSession = AVCaptureSession()
captureSession!.addInput(input)
output = AVCaptureMetadataOutput()
captureSession?.addOutput(output)
output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
output.metadataObjectTypes = [AVMetadataObjectTypeFace]
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
view.layer.addSublayer(videoPreviewLayer!)
captureSession?.startRunning()
In the delegate method didOutputMetadataObjects I'm getting the face as an AVMetadataFaceObject and highlighting it with a red frame like this:
let metadataObj = metadataObjects[0] as! AVMetadataFaceObject
let faceObject = videoPreviewLayer?.transformedMetadataObjectForMetadataObject(metadataObj)
faceFrame?.frame = faceObject!.bounds
The question is: how can I get the faces as UIImages?
I've tried hooking into didOutputSampleBuffer, but it isn't called at all.

I did the same thing using didOutputSampleBuffer and Objective-C. It looks like:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
if (attachments)
CFRelease(attachments);
NSNumber *orientation = (__bridge NSNumber *)(CMGetAttachment(sampleBuffer, kCGImagePropertyOrientation, NULL));
NSArray *features = [[CIDetector detectorOfType:CIDetectorTypeFace context:nil options:@{ CIDetectorAccuracy: CIDetectorAccuracyHigh }] featuresInImage:ciImage options:@{ CIDetectorImageOrientation: orientation }];
if (features.count == 1) {
CIFaceFeature *faceFeature = [features firstObject];
CGRect faceRect = faceFeature.bounds;
CGImageRef tempImage = [[CIContext contextWithOptions:nil] createCGImage:ciImage fromRect:ciImage.extent];
// Note: the EXIF orientation number stored in the attachment is not the same enum as UIImageOrientation; map it if the orientation matters.
UIImage *image = [UIImage imageWithCGImage:tempImage scale:1.0 orientation:orientation.intValue];
UIImage *face = [image extractFace:faceRect];
}
}
where extractFace is a category method on UIImage:
- (UIImage *)extractFace:(CGRect)rect {
rect = CGRectMake(rect.origin.x * self.scale,
rect.origin.y * self.scale,
rect.size.width * self.scale,
rect.size.height * self.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
CGImageRelease(imageRef);
return result;
}
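One caveat worth adding here (not part of the original answer): CIFaceFeature bounds come back in Core Image coordinates, whose origin is at the bottom-left, while CGImageCreateWithImageInRect assumes a top-left origin, so the rect usually needs a vertical flip before cropping. A minimal Swift sketch of that flip:
// Hypothetical helper: convert a Core Image rect (bottom-left origin) into
// CGImage cropping coordinates (top-left origin).
func flipRectForCGImageCrop(_ ciRect: CGRect, imageHeight: CGFloat) -> CGRect {
    var rect = ciRect
    rect.origin.y = imageHeight - rect.origin.y - rect.size.height
    return rect
}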
Creating video output:
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCMPixelFormat_32BGRA] };
videoOutput.alwaysDiscardsLateVideoFrames = YES;
self.videoOutputQueue = dispatch_queue_create("OutputQueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:self.videoOutputQueue];
[self.session addOutput:videoOutput];
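Note that the delegate method in the question is only called once an AVCaptureVideoDataOutput has actually been added to the session; the metadata output alone won't trigger it. A rough Swift sketch of the same output setup, assuming a non-optional captureSession like the one in the question and using modern API names:
// self is assumed to conform to AVCaptureVideoDataOutputSampleBufferDelegate.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA)]
videoOutput.alwaysDiscardsLateVideoFrames = true
let videoOutputQueue = DispatchQueue(label: "OutputQueue")
videoOutput.setSampleBufferDelegate(self, queue: videoOutputQueue)
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}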

- (UIImage *) screenshot {
CGSize size = CGSizeMake(faceFrame.frame.size.width, faceFrame.frame.size.height);
UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
CGRect rec = CGRectMake(faceFrame.frame.origin.x, faceFrame.frame.origin.y, faceFrame.frame.size.width, faceFrame.frame.size.height);
[_viewController.view drawViewHierarchyInRect:rec afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Taking a cue from the above:
let contextImage: UIImage = <<screenshot>>!
let cropRect: CGRect = CGRectMake(x, y, width, height)
let imageRef: CGImageRef = CGImageCreateWithImageInRect(contextImage.CGImage, cropRect)
let image: UIImage = UIImage(CGImage: imageRef, scale: contextImage.scale, orientation: contextImage.imageOrientation)

I'd suggest using the UIImagePickerController class to implement a custom camera for picking images for face detection. Please check Apple's sample code PhotoPicker.
To sum up: use UIImagePickerController with the camera as its sourceType, and handle its delegate method imagePickerController:didFinishPickingMediaWithInfo: to capture the image. You can also check the takePicture method if it helps.
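If you go the UIImagePickerController route, a minimal sketch looks like this (newer Swift naming; the class and property names are assumptions):
import UIKit

// The view controller is assumed to adopt both delegate protocols.
class CameraPickerViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func presentCamera() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.cameraDevice = .front
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        if let image = info[.originalImage] as? UIImage {
            // hand this image to the face detector
        }
        picker.dismiss(animated: true)
    }
}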

Related

Image becomes blurry when applying CIFilter

I am working on an iOS app to crop a rectangular image from the camera. I am using CIDetector to get the rect features and CIFilter to crop the rectangle image, but after applying the filter the resulting image quality becomes very poor.
Here is my code below.
I am getting video capture output from the following delegate method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
// Convert into CIImage to find the rect features.
self.sourceImage = [[CIImage alloc] initWithCGImage:[self imageFromSampleBuffer:sampleBuffer].CGImage options:nil];
}
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
CVPixelBufferRef pb = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciimg = [CIImage imageWithCVPixelBuffer:pb];
// show result
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef ref = [context createCGImage:ciimg fromRect:ciimg.extent];
UIImage *image = [UIImage imageWithCGImage:ref scale:1.0 orientation:(UIImageOrientationUp)];
CFRelease(ref);
return (image);
}
And I am running an NSTimer in the background which detects rect features from the captured source image every 0.2 seconds:
- (void)performRectangleDetection:(CIImage *)image{
if(image == nil)
return;
NSArray *rectFeatures = [self.rectangleDetector featuresInImage:image];
if ([rectFeatures count] > 0 ) {
[self capturedImage:image];
}
}
-(void)capturedImage:(CIImage *)image
{
NSArray *rectFeatures = [self.rectangleDetector featuresInImage:image];
CIImage *resultImage = [image copy];
for (CIRectangleFeature *feature in rectFeatures) {
resultImage = [image imageByApplyingFilter:@"CIPerspectiveCorrection"
withInputParameters:@{@"inputTopLeft": [CIVector vectorWithCGPoint:feature.topLeft],
@"inputTopRight": [CIVector vectorWithCGPoint:feature.topRight],
@"inputBottomLeft": [CIVector vectorWithCGPoint:feature.bottomLeft],
@"inputBottomRight": [CIVector vectorWithCGPoint:feature.bottomRight]}];
}
UIImage *capturedImage = [[UIImage alloc] initWithCIImage: resultImage];
UIImage *finalImage = [self imageWithImage:capturedImage scaledToSize:capturedImage.size];
}
The finalImage is obtained after passing the captured image to this method:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
The final image sometimes comes out blurred. Is that because of the filter or because of the camera output image? Please help me solve this.
It is likely that you are not recreating the final image with the correct scale factor.
// Render the filtered CIImage to a CGImage first, then rebuild the UIImage with the original scale and orientation.
CGImageRef resultCGImage = [[CIContext contextWithOptions:nil] createCGImage:resultImage fromRect:resultImage.extent];
UIImage *finalImage = [UIImage imageWithCGImage:resultCGImage scale:original.scale orientation:original.imageOrientation];
CGImageRelease(resultCGImage);
If this doesn't solve the issue, please provide more code sample from the camera input, and how you converted the final CIImage from the filters into UIImage.
Use the following method to crop the image:
-(UIImage*)cropImage:(UIImage*)image withRect:(CGRect)rect {
CGImageRef cgImage = CGImageCreateWithImageInRect(image.CGImage, rect);
UIImage *cropedImage = [UIImage imageWithCGImage:cgImage];
return cropedImage;
}

iOS - Safe way to blur a UIImage

In the past we had two different ways of blurring UIImages, and both led to crashes for our users. The first way crashes when the user sends the app to the background (GPU error: not allowed to do GPU work in the background).
The second one crashes with EXC_BAD_ACCESS errors.
What's a better (crash-safe) way to do it?
Version 1:
- (UIImage *)blurred:(float)inputRadius {
// create our blurred image
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
// setting up Gaussian Blur (we could use one of many filters offered by Core Image)
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:inputRadius] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *blurredImage = [UIImage imageWithCGImage:cgImage];
CFRelease(cgImage);
return blurredImage;
}
Version 2:
- (UIImage*)blurredImage:(CGFloat)blurRadius {
if (blurRadius < 0.0) {
blurRadius = 0.0;
}
CGImageRef img = self.CGImage;
CGFloat inputImageScale = self.scale;
vImage_Buffer inBuffer, outBuffer;
vImage_Error error;
void *pixelBuffer;
CGDataProviderRef inProvider = CGImageGetDataProvider(img);
CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
inBuffer.data = (void*)CFDataGetBytePtr(inBitmapData);
pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
outBuffer.data = pixelBuffer;
outBuffer.width = CGImageGetWidth(img);
outBuffer.height = CGImageGetHeight(img);
outBuffer.rowBytes = CGImageGetBytesPerRow(img);
CGFloat inputRadius = blurRadius * inputImageScale;
if (inputRadius - 2. < __FLT_EPSILON__)
inputRadius = 2.;
uint32_t radius = floor((inputRadius * 3. * sqrt(2 * M_PI) / 4 + 0.5) / 2);
radius |= 1; // force radius to be odd so that the three box-blur methodology works.
// line of crash
error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
if (!error) {
error = vImageBoxConvolve_ARGB8888(&outBuffer, &inBuffer, NULL, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
}
if (error) {
return self;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
outBuffer.width,
outBuffer.height,
8,
outBuffer.rowBytes,
colorSpace,
(CGBitmapInfo)kCGImageAlphaNoneSkipLast);
CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixelBuffer);
CFRelease(inBitmapData);
CGImageRelease(imageRef);
return returnImage;
}
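One crash-safe direction (my own sketch, not from the original post): keep the Core Image version, but render through a CPU-backed CIContext so the work is still allowed when the app is in the background, and reattach the original scale and orientation:
import UIKit
import CoreImage

extension UIImage {
    // A minimal sketch: CIGaussianBlur rendered with a software CIContext.
    func safelyBlurred(radius: CGFloat) -> UIImage? {
        guard let cgImage = self.cgImage else { return nil }
        let input = CIImage(cgImage: cgImage)
        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(input, forKey: kCIInputImageKey)
        filter?.setValue(radius, forKey: kCIInputRadiusKey)
        guard let output = filter?.outputImage else { return nil }
        // useSoftwareRenderer forces CPU rendering: slower, but it avoids the
        // "GPU work in the background" crash from Version 1.
        let context = CIContext(options: [.useSoftwareRenderer: true])
        guard let rendered = context.createCGImage(output, from: input.extent) else { return nil }
        return UIImage(cgImage: rendered, scale: self.scale, orientation: self.imageOrientation)
    }
}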

AVCapture / AVCaptureVideoPreviewLayer troubles getting the correct visible image

I am currently having huge trouble getting what I want from AVCapture, AVCaptureVideoPreviewLayer, etc.
I am creating an app (for iPhone, but it would be better if it also worked on iPad) where I want to put a small preview of my camera in the middle of my view, as shown in this picture:
To do that, I want to keep the camera's aspect ratio, so I used this configuration:
rgbaImage = nil;
NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *device = [possibleDevices firstObject];
if (!device) return;
AVCaptureSession *session = [[AVCaptureSession alloc] init];
self.captureSession = session;
self.captureDevice = device;
NSError *error = nil;
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if( !input )
{
[[[UIAlertView alloc] initWithTitle:NSLocalizedString(@"NoCameraAuthorizationTitle", nil)
message:NSLocalizedString(@"NoCameraAuthorizationMsg", nil)
delegate:self
cancelButtonTitle:NSLocalizedString(@"OK", nil)
otherButtonTitles:nil] show];
return;
}
[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetPhoto;
[session addInput:input];
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)}];
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:dataOutput];
self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:self.stillImageOutput];
connection = [dataOutput.connections firstObject];
[self setupCameraOrientation];
NSError *errorLock;
if ([device lockForConfiguration:&errorLock])
{
// Frame rate
device.activeVideoMinFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
device.activeVideoMaxFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;
AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;
CGPoint point = CGPointMake(0.5, 0.5);
if ([device isAutoFocusRangeRestrictionSupported])
{
device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
}
if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:focusMode])
{
[device setFocusPointOfInterest:point];
[device setFocusMode:focusMode];
}
if ([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:exposureMode])
{
[device setExposurePointOfInterest:point];
[device setExposureMode:exposureMode];
}
if ([device isLowLightBoostSupported])
{
device.automaticallyEnablesLowLightBoostWhenAvailable = YES;
}
[device unlockForConfiguration];
}
if (device.isFlashAvailable)
{
[device lockForConfiguration:nil];
[device setFlashMode:AVCaptureFlashModeOff];
[device unlockForConfiguration];
if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
{
[device lockForConfiguration:nil];
[device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
[device unlockForConfiguration];
}
}
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.layer insertSublayer:previewLayer atIndex:0];
[session commitConfiguration];
As you can see, I am using the AVLayerVideoGravityResizeAspectFill property to ensure that I have the proper ratio.
My trouble starts here: I tried many things but never really succeeded.
My goal is to get a picture equivalent to what the user can see in the previewLayer, knowing that the video frame gives a bigger image than the one you can see in the preview.
I tried 3 methods:
1) Using my own computation: since I know both the video frame size and my screen size, plus the layer's size and position, I tried to compute the ratio and use it to compute the equivalent position in the video frame. I found out that the video frame (sampleBuffer) is in pixels, while the position I get from the main screen bounds is in points and has to be multiplied by a ratio to get pixels; that ratio assumes the video frame size is the actual full-screen size of the device.
--> This actually gave me a really good result on my iPad: both height and width are good, but the (x, y) origin is shifted a bit from the original... (detail: if I subtract 72 pixels from the position I find, I get the right output)
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
if (self.forceStop) return;
if (_isStopped || _isCapturing || !CMSampleBufferIsValid(sampleBuffer)) return;
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
__block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CGRect rect = image.extent;
CGRect screenRect = [[UIScreen mainScreen] bounds];
CGFloat screenWidth = screenRect.size.width/* * [UIScreen mainScreen].scale*/;
CGFloat screenHeight = screenRect.size.height/* * [UIScreen mainScreen].scale*/;
NSLog(#"%f, %f ---",screenWidth, screenHeight);
float myRatio = ( rect.size.height / screenHeight );
float myRatioW = ( rect.size.width / screenWidth );
NSLog(#"Ratio w :%f h:%f ---",myRatioW, myRatio);
CGPoint p = [captureViewControler.view convertPoint:previewLayer.frame.origin toView:nil];
NSLog(#"-Av-> %f, %f --> %f, %f", p.x, p.y, self.bounds.size.height, self.bounds.size.width);
rect.origin = CGPointMake(p.x * myRatioW, p.y * myRatio);
NSLog(#"%f, %f ----> %f %f", rect.origin.x, rect.origin.y, rect.size.width, rect.size.height);
NSLog(#"%f", previewLayer.frame.size.height * ( rect.size.height / screenHeight ));
rect.size = CGSizeMake(rect.size.width, previewLayer.frame.size.height * myRatio);
image = [image imageByCroppingToRect:rect];
its = [ImageUtils cropImageToRect:uiImage(sampleBuffer) toRect:rect];
NSLog(#"--------------------------------------------");
[captureViewControler sendToPreview:its];
}
2) Using still-image capture: This method actually worked as long as I was on an iPad. But the real trouble is that I am using those cropped frames to feed an image library, and captureStillImageAsynchronouslyFromConnection plays the system camera sound for every picture (I read a lot about "solutions" like playing another sound to mask it, but they don't work and don't solve the freeze that goes with it on iPhone 6), so this method seems inappropriate.
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connect in self.stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connect inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connect;
break;
}
}
if (videoConnection) { break; }
}
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
if (error)
{
NSLog(#"Take picture failed");
}
else
{
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *takenImage = [UIImage imageWithData:jpegData];
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
NSLog(#"image cropped : %#", NSStringFromCGRect(outputRect));
CGImageRef takenCGImage = takenImage.CGImage;
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
NSLog(#"Size cropped : w: %zu h: %zu", width, height);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
NSLog(#"final cropped : %#", NSStringFromCGRect(cropRect));
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
CGImageRelease(cropCGImage);
its = [ImageUtils rotateUIImage:takenImage];
image = [[CIImage alloc] initWithImage:its];
}
3) Using metadataOutput with a ratio: This is actually not working at all, but I thought it would help me the most, since it works on the still-image process (using the metadataOutputRectOfInterestForRect result to get the percentage and then combining it with the ratio). I wanted to use this and add the ratio difference between the pictures to get the correct output.
CGRect rect = image.extent;
CGSize size = CGSizeMake(1936.0, 2592.0);
float rh = (size.height / rect.size.height);
float rw = (size.width / rect.size.width);
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
NSLog(#"avant cropped : %#", NSStringFromCGRect(outputRect));
outputRect.origin.x = MIN(1.0, outputRect.origin.x * rw);
outputRect.origin.y = MIN(1.0, outputRect.origin.y * rh);
outputRect.size.width = MIN(1.0, outputRect.size.width * rw);
outputRect.size.height = MIN(1.0, outputRect.size.height * rh);
NSLog(#"final cropped : %#", NSStringFromCGRect(outputRect));
UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
NSLog(#"takenImage : %#", NSStringFromCGSize(takenImage.size));
CGImageRef takenCGImage = [[CIContext contextWithOptions:nil] createCGImage:image fromRect:[image extent]];
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
NSLog(#"Size cropped : w: %zu h: %zu", width, height);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
I hope someone will be able to help me with this.
Thanks a lot.
I finally found the solution using this code. My error was trying to use a ratio between images, without considering that metadataOutputRectOfInterestForRect returns a percentage value that doesn't need to be adjusted for the other image.
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
__block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
outputRect.origin.y = outputRect.origin.x;
outputRect.origin.x = 0;
outputRect.size.height = outputRect.size.width;
outputRect.size.width = 1;
UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
CGImageRef takenCGImage = [cicontext createCGImage:image fromRect:[image extent]];
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
UIImage *its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
}

CIGaussianBlur sometimes changes imageOrientation

In my iOS app I want to apply a CIGaussianBlur filter to a UIImage; when it gets an image with a large height, it rotates the image.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image]; //get image for blur
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 0.0f; // Set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"];
// set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent; // Create Rect
rect.origin.x += blurLevel; // and set custom params
rect.origin.y += blurLevel; //
rect.size.height -= blurLevel*2.0f; //
rect.size.width -= blurLevel*2.0f; //
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect];
// Then apply new rect
UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
CGFloat originalScale = _imageView.image.scale;
UIImage *fixedImage=[UIImage imageWithCGImage:cgImage scale:originalScale orientation:originalOrientation] ; //output of CIGaussianBlur
It works for me.
_imageView.image=image;
UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
CGFloat originalScale = _imageView.image.scale;
UIImage *fixedImage=[UIImage imageWithCGImage:cgImage scale:originalScale orientation:originalOrientation] ;

How to crop an image from AVCapture to a rect seen on the display

This is driving me crazy because I can't get it to work. I have the following scenario:
I'm using an AVCaptureSession and an AVCaptureVideoPreviewLayer to create my own camera interface. The interface shows a rectangle. Below is the AVCaptureVideoPreviewLayer that fills the whole screen.
I want the captured image to be cropped in such a way that the resulting image shows exactly the content seen in the rect on the display.
My setup looks like this:
_session = [[AVCaptureSession alloc] init];
AVCaptureSession *session = _session;
session.sessionPreset = AVCaptureSessionPresetPhoto;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (camera == nil) {
[self showImagePicker];
_isSetup = YES;
return;
}
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = self.liveCapturePlaceholderView.bounds;
[self.liveCapturePlaceholderView.layer addSublayer:captureVideoPreviewLayer];
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (error) {
HGAlertViewWrapper *av = [[HGAlertViewWrapper alloc] initWithTitle:kFailedConnectingToCameraAlertViewTitle message:kFailedConnectingToCameraAlertViewMessage cancelButtonTitle:kFailedConnectingToCameraAlertViewCancelButtonTitle otherButtonTitles:@[kFailedConnectingToCameraAlertViewRetryButtonTitle]];
[av showWithBlock:^(NSString *buttonTitle){
if ([buttonTitle isEqualToString:kFailedConnectingToCameraAlertViewCancelButtonTitle]) {
[self.delegate gloameCameraViewControllerDidCancel:self];
}
else {
[self setupAVSession];
}
}];
}
[session addInput:input];
NSDictionary *options = @{ AVVideoCodecKey : AVVideoCodecJPEG };
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[_stillImageOutput setOutputSettings:options];
[session addOutput:_stillImageOutput];
[session startRunning];
_isSetup = YES;
I'm capturing the image like this:
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
if (error) {
MWLogDebug(@"Error capturing image from camera. %@, %@", error, [error userInfo]);
_capturePreviewLayer.connection.enabled = YES;
}
else
{
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
CGRect cropRect = [self createCropRectForImage:image];
UIImage *croppedImage;// = [self cropImage:image toRect:cropRect];
UIGraphicsBeginImageContext(cropRect.size);
[image drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.capturedImage = croppedImage;
[_session stopRunning];
}
}];
In the createCropRectForImage: method I've tried various ways to calculate the rect to cut out of the image, but with no success so far.
- (CGRect)createCropRectForImage:(UIImage *)image
{
CGPoint maskTopLeftCorner = CGPointMake(self.maskRectView.frame.origin.x, self.maskRectView.frame.origin.y);
CGPoint maskBottomRightCorner = CGPointMake(self.maskRectView.frame.origin.x + self.maskRectView.frame.size.width, self.maskRectView.frame.origin.y + self.maskRectView.frame.size.height);
CGPoint maskTopLeftCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskTopLeftCorner fromLayer:self.maskRectView.layer.superlayer];
CGPoint maskBottomRightCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskBottomRightCorner fromLayer:self.maskRectView.layer.superlayer];
CGPoint maskTopLeftCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskTopLeftCornerInLayerCoords];
CGPoint maskBottomRightCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskBottomRightCornerInLayerCoords];
float x = maskTopLeftCornerInDeviceCoords.x * image.size.width;
float y = (1 - maskTopLeftCornerInDeviceCoords.y) * image.size.height;
float width = fabsf(maskTopLeftCornerInDeviceCoords.x - maskBottomRightCornerInDeviceCoords.x) * image.size.width;
float height = fabsf(maskTopLeftCornerInDeviceCoords.y - maskBottomRightCornerInDeviceCoords.y) * image.size.height;
return CGRectMake(x, y, width, height);
}
That is my current version, but it doesn't even get the proportions right. Could someone please help me?
I have also tried using this method to crop my image:
- (UIImage*)cropImage:(UIImage*)originalImage toRect:(CGRect)rect{
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
CGContextRef bitmap = CGBitmapContextCreate(NULL, rect.size.width, rect.size.height, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
if (originalImage.imageOrientation == UIImageOrientationLeft) {
CGContextRotateCTM (bitmap, radians(90));
CGContextTranslateCTM (bitmap, 0, -rect.size.height);
} else if (originalImage.imageOrientation == UIImageOrientationRight) {
CGContextRotateCTM (bitmap, radians(-90));
CGContextTranslateCTM (bitmap, -rect.size.width, 0);
} else if (originalImage.imageOrientation == UIImageOrientationUp) {
// NOTHING
} else if (originalImage.imageOrientation == UIImageOrientationDown) {
CGContextTranslateCTM (bitmap, rect.size.width, rect.size.height);
CGContextRotateCTM (bitmap, radians(-180.));
}
CGContextDrawImage(bitmap, CGRectMake(0, 0, rect.size.width, rect.size.height), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *resultImage=[UIImage imageWithCGImage:ref];
CGImageRelease(imageRef);
CGContextRelease(bitmap);
CGImageRelease(ref);
return resultImage;
}
Does anybody have the 'right combination' of methods to make this work? :)
In Swift 3:
private func cropToPreviewLayer(originalImage: UIImage) -> UIImage {
let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
var cgImage = originalImage.cgImage!
let width = CGFloat(cgImage.width)
let height = CGFloat(cgImage.height)
let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
cgImage = cgImage.cropping(to: cropRect)!
let croppedUIImage = UIImage(cgImage: cgImage, scale: 1.0, orientation: originalImage.imageOrientation)
return croppedUIImage
}
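For reference, a hypothetical call site (jpegData and imageView are assumed names), cropping the captured photo down to what the preview layer showed:
// Hypothetical usage: jpegData comes from the still-image output, imageView displays the result.
if let data = jpegData, let captured = UIImage(data: data) {
    imageView.image = cropToPreviewLayer(originalImage: captured)
}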
I've solved this problem by using the metadataOutputRectOfInterestForRect function.
It works with any orientation.
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
if (error)
{
[_delegate cameraView:self error:@"Take picture failed"];
}
else
{
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *takenImage = [UIImage imageWithData:jpegData];
CGRect outputRect = [_previewLayer metadataOutputRectOfInterestForRect:_previewLayer.bounds];
CGImageRef takenCGImage = takenImage.CGImage;
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
CGImageRelease(cropCGImage);
}
}
];
The takenImage is still an orientation-dependent image. You can strip the orientation information for further image processing:
UIGraphicsBeginImageContext(takenImage.size);
[takenImage drawAtPoint:CGPointZero];
takenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
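A Swift equivalent of that orientation-flattening redraw (my sketch, using UIGraphicsImageRenderer):
// Redraw the image so the pixels are baked into the .up orientation.
let flattened = UIGraphicsImageRenderer(size: takenImage.size).image { _ in
    takenImage.draw(at: .zero)
}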
In Swift 4:
I prefer to never force-unwrap to avoid crashes, so I use optionals and guards in mine.
private func cropToPreviewLayer(originalImage: UIImage) -> UIImage? {
guard let cgImage = originalImage.cgImage else { return nil }
let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
let width = CGFloat(cgImage.width)
let height = CGFloat(cgImage.height)
let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
if let croppedCGImage = cgImage.cropping(to: cropRect) {
return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
}
return nil
}
AVMakeRectWithAspectRatioInsideRect
This API is from AVFoundation; it returns the largest rectangle with a given aspect ratio that fits inside a bounding rect, which you can use as the crop region for the image.
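A small Swift sketch of that helper (the sizes here are assumptions, purely for illustration):
import AVFoundation

let imageSize = CGSize(width: 3264, height: 2448)            // assumed capture size
let container = CGRect(x: 0, y: 0, width: 375, height: 667)  // assumed preview bounds
// Largest rect with the image's aspect ratio that fits inside container.
let fittedRect = AVMakeRect(aspectRatio: imageSize, insideRect: container)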
