Image capture is distorted when using it for a CALayer - iOS

I am working on a photo taking app. The app's preview layer is set to take up exactly half of the screen using this code:
[_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
This looks perfect, and there is no distortion at all while the user is viewing the camera's preview, i.e. what they see while taking the picture.
However, once they actually take the photo, I create a sublayer, set its frame property to my preview layer's frame, and set the photo as the contents of the sublayer.
This does technically work. Once the user takes the photo, the photo shows up on the top half of the screen like it should.
The only problem is that the photo is distorted.
It looks stretched out, almost as if I'm taking a landscape photo.
Any help is greatly appreciated; I am totally desperate on this and have not been able to fix it after working on it all day today.
Here is all of my view controller's code:
#import "MediaCaptureVC.h"
@interface MediaCaptureVC ()
@end
@implementation MediaCaptureVC
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
if (self) {
// Custom initialization
}
return self;
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
AVCaptureSession *session =[[AVCaptureSession alloc]init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = [[NSError alloc]init];
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if([session canAddInput:deviceInput])
[session addInput:deviceInput];
_previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:session];
[_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view]layer];
[rootLayer setMasksToBounds:YES];
[_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
[rootLayer insertSublayer:_previewLayer atIndex:0];
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:_stillImageOutput];
[session startRunning];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
-(UIImage*) rotate:(UIImage*) src andOrientation:(UIImageOrientation)orientation
{
UIGraphicsBeginImageContext(src.size);
CGContextRef context=(UIGraphicsGetCurrentContext());
if (orientation == UIImageOrientationRight) {
CGContextRotateCTM (context, 90/180*M_PI) ;
} else if (orientation == UIImageOrientationLeft) {
CGContextRotateCTM (context, -90/180*M_PI);
} else if (orientation == UIImageOrientationDown) {
// NOTHING
} else if (orientation == UIImageOrientationUp) {
CGContextRotateCTM (context, 90/180*M_PI);
}
[src drawAtPoint:CGPointMake(0, 0)];
UIImage *img=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
-(IBAction)stillImageCapture {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in _stillImageOutput.connections){
for (AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", _stillImageOutput);
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if(imageDataSampleBuffer) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc]initWithData:imageData];
image = [self rotate:image andOrientation:image.imageOrientation];
CALayer *subLayer = [CALayer layer];
CGImageRef imageRef = image.CGImage;
subLayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage;
subLayer.frame = _previewLayer.frame;
CALayer *rootLayer = [[self view]layer];
[rootLayer setMasksToBounds:YES];
[subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
[_previewLayer addSublayer:subLayer];
NSLog(#"%#", subLayer.contents);
NSLog(#"Orientation: %d", image.imageOrientation);
}
}];
}
@end

Hi, I hope this helps you.
The code seems more complex than it should be because most of the work is done at the CALayer level instead of the image view / view level. However, I think the issue is that the proportions of the original capture and of your mini viewport are different, and this is distorting the UIImage in this statement:
[subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
What needs to be done is to take the proportions of the captured image and compute the best size that will fit into the root layer (or the image view associated with it).
I coded a subroutine a while back that handles the proportions (note: you will need to adjust the origin of the frame to get exactly what you want!):
...
CGRect newbounds = [self figure_proportion:image to_fit_rect:rootLayer.frame];
if (newbounds.size.height < rootLayer.frame.size.height) {
// ... adjust the origin of the image view / layer frame here ...
}
-(CGRect) figure_proportion:(UIImage *) image2 to_fit_rect:(CGRect) rect {
CGSize image_size = image2.size;
CGRect newrect = rect;
float wfactor = image_size.width/ image_size.height;
float hfactor = image_size.height/ image_size.width;
if (image2.size.width > image2.size.height) {
newrect.size.width = rect.size.width;
newrect.size.height = (rect.size.width * hfactor);
}
else if (image2.size.height > image2.size.width) {
newrect.size.height = rect.size.height;
newrect.size.width = (rect.size.height * wfactor);
}
else {
newrect.size.width = rect.size.width;
newrect.size.height = newrect.size.width;
}
if (newrect.size.height > rect.size.height) {
newrect.size.height = rect.size.height;
newrect.size.width = (newrect.size.height* wfactor);
}
if (newrect.size.width > rect.size.width) {
newrect.size.width = rect.size.width;
newrect.size.height = (newrect.size.width* hfactor);
}
return(newrect);
}
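For reference, a minimal usage sketch (my assumptions: this runs inside the capture completion handler from the question, with image, rootLayer and _previewLayer as defined there); it fits the photo into the top half of the screen and centres it instead of stretching it:
// Sketch only: size the sublayer to the photo's aspect ratio rather than the raw half-screen rect.
CGRect target = CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2);
CGRect newbounds = [self figure_proportion:image to_fit_rect:target];
// Centre the fitted rect inside the target area.
newbounds.origin.x = (target.size.width - newbounds.size.width) / 2;
newbounds.origin.y = (target.size.height - newbounds.size.height) / 2;
CALayer *subLayer = [CALayer layer];
subLayer.contents = (id)image.CGImage; // use (__bridge id) if compiling with ARC
subLayer.frame = newbounds;
[_previewLayer addSublayer:subLayer];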

Related

AVCapture / AVCaptureVideoPreviewLayer troubles getting the correct visible image

I am currently having huge trouble getting what I want from AVCapture, AVCaptureVideoPreviewLayer, etc.
I am currently creating an app (available for iPhone devices, but it would be better if it also worked on iPad) where I want to put a small preview of my camera in the middle of my view, as shown in this picture:
To do that, I want to keep the camera's aspect ratio, so I used this configuration:
rgbaImage = nil;
NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *device = [possibleDevices firstObject];
if (!device) return;
AVCaptureSession *session = [[AVCaptureSession alloc] init];
self.captureSession = session;
self.captureDevice = device;
NSError *error = nil;
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if( !input )
{
[[[UIAlertView alloc] initWithTitle:NSLocalizedString(@"NoCameraAuthorizationTitle", nil)
message:NSLocalizedString(@"NoCameraAuthorizationMsg", nil)
delegate:self
cancelButtonTitle:NSLocalizedString(@"OK", nil)
otherButtonTitles:nil] show];
return;
}
[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetPhoto;
[session addInput:input];
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)}];
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:dataOutput];
self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:self.stillImageOutput];
connection = [dataOutput.connections firstObject];
[self setupCameraOrientation];
NSError *errorLock;
if ([device lockForConfiguration:&errorLock])
{
// Frame rate
device.activeVideoMinFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
device.activeVideoMaxFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;
AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;
CGPoint point = CGPointMake(0.5, 0.5);
if ([device isAutoFocusRangeRestrictionSupported])
{
device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
}
if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:focusMode])
{
[device setFocusPointOfInterest:point];
[device setFocusMode:focusMode];
}
if ([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:exposureMode])
{
[device setExposurePointOfInterest:point];
[device setExposureMode:exposureMode];
}
if ([device isLowLightBoostSupported])
{
device.automaticallyEnablesLowLightBoostWhenAvailable = YES;
}
[device unlockForConfiguration];
}
if (device.isFlashAvailable)
{
[device lockForConfiguration:nil];
[device setFlashMode:AVCaptureFlashModeOff];
[device unlockForConfiguration];
if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
{
[device lockForConfiguration:nil];
[device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
[device unlockForConfiguration];
}
}
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.layer insertSublayer:previewLayer atIndex:0];
[session commitConfiguration];
As you can see, I am using the AVLayerVideoGravityResizeAspectFill property to ensure that I have the proper ratio.
My trouble starts here: I have tried many things but never really succeeded.
My goal is to get a picture equivalent to what the user can see in the previewLayer, knowing that the video frame gives a bigger image than the one you can see in the preview.
I tried three methods:
1) Using manual computation: since I know the video frame size, the screen size, and the layer's size and position, I tried to compute the ratio and use it to find the equivalent position in the video frame. I found out that the video frame (sampleBuffer) is in pixels, while the position I get from the main screen bounds is in points (Apple's measure) and has to be multiplied by a scale factor to get pixels; that scale is my ratio, assuming the video frame size is the actual device full-screen size.
--> This actually gave me a really good result on my iPad; both height and width are good, but the (x, y) origin is shifted a bit from the original... (detail: if I subtract 72 pixels from the position I find, I get the correct output).
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
if (self.forceStop) return;
if (_isStopped || _isCapturing || !CMSampleBufferIsValid(sampleBuffer)) return;
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
__block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CGRect rect = image.extent;
CGRect screenRect = [[UIScreen mainScreen] bounds];
CGFloat screenWidth = screenRect.size.width/* * [UIScreen mainScreen].scale*/;
CGFloat screenHeight = screenRect.size.height/* * [UIScreen mainScreen].scale*/;
NSLog(#"%f, %f ---",screenWidth, screenHeight);
float myRatio = ( rect.size.height / screenHeight );
float myRatioW = ( rect.size.width / screenWidth );
NSLog(#"Ratio w :%f h:%f ---",myRatioW, myRatio);
CGPoint p = [captureViewControler.view convertPoint:previewLayer.frame.origin toView:nil];
NSLog(#"-Av-> %f, %f --> %f, %f", p.x, p.y, self.bounds.size.height, self.bounds.size.width);
rect.origin = CGPointMake(p.x * myRatioW, p.y * myRatio);
NSLog(#"%f, %f ----> %f %f", rect.origin.x, rect.origin.y, rect.size.width, rect.size.height);
NSLog(#"%f", previewLayer.frame.size.height * ( rect.size.height / screenHeight ));
rect.size = CGSizeMake(rect.size.width, previewLayer.frame.size.height * myRatio);
image = [image imageByCroppingToRect:rect];
its = [ImageUtils cropImageToRect:uiImage(sampleBuffer) toRect:rect];
NSLog(#"--------------------------------------------");
[captureViewControler sendToPreview:its];
}
2) Using still-image capture: this method was actually working as long as I was on an iPad. But the real trouble is that I am using those cropped frames to feed an image library, and captureStillImageAsynchronouslyFromConnection: plays the system shutter sound for every picture (I read a lot about "solutions" like playing another sound to mask it, etc., but they do not work and do not solve the freeze that goes with it on iPhone 6), so this method seems inappropriate.
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connect in self.stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connect inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connect;
break;
}
}
if (videoConnection) { break; }
}
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
if (error)
{
NSLog(#"Take picture failed");
}
else
{
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *takenImage = [UIImage imageWithData:jpegData];
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
NSLog(#"image cropped : %#", NSStringFromCGRect(outputRect));
CGImageRef takenCGImage = takenImage.CGImage;
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
NSLog(#"Size cropped : w: %zu h: %zu", width, height);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
NSLog(#"final cropped : %#", NSStringFromCGRect(cropRect));
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
CGImageRelease(cropCGImage);
its = [ImageUtils rotateUIImage:takenImage];
image = [[CIImage alloc] initWithImage:its];
}
3) Using metadata output with a ratio: this is actually not working at all, but I thought it would help me the most since it works in the still-image process (using the metadataOutputRectOfInterestForRect result to get the percentage and then combining it with the ratio). I wanted to use this and add the ratio difference between the pictures to get the correct output.
CGRect rect = image.extent;
CGSize size = CGSizeMake(1936.0, 2592.0);
float rh = (size.height / rect.size.height);
float rw = (size.width / rect.size.width);
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
NSLog(#"avant cropped : %#", NSStringFromCGRect(outputRect));
outputRect.origin.x = MIN(1.0, outputRect.origin.x * rw);
outputRect.origin.y = MIN(1.0, outputRect.origin.y * rh);
outputRect.size.width = MIN(1.0, outputRect.size.width * rw);
outputRect.size.height = MIN(1.0, outputRect.size.height * rh);
NSLog(#"final cropped : %#", NSStringFromCGRect(outputRect));
UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
NSLog(#"takenImage : %#", NSStringFromCGSize(takenImage.size));
CGImageRef takenCGImage = [[CIContext contextWithOptions:nil] createCGImage:image fromRect:[image extent]];
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
NSLog(#"Size cropped : w: %zu h: %zu", width, height);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
I hope someone will be able to help me with this.
Thanks a lot.
I finally found the solution using the code below. My mistake was trying to apply a ratio between the images, not realizing that metadataOutputRectOfInterestForRect returns a percentage value which does not need to be adjusted for the other image.
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
__block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
outputRect.origin.y = outputRect.origin.x;
outputRect.origin.x = 0;
outputRect.size.height = outputRect.size.width;
outputRect.size.width = 1;
UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
CGImageRef takenCGImage = [cicontext createCGImage:image fromRect:[image extent]];
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
UIImage *its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
}
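One note on the snippet above: cicontext is not declared in the excerpt. My assumption is that it is a reusable CIContext created once (building a new context per frame is expensive), along these lines:
// Assumption: a single reusable Core Image context, lazily created and kept in an ivar.
- (CIContext *)cicontext {
    if (!_cicontext) {
        _cicontext = [CIContext contextWithOptions:nil];
    }
    return _cicontext;
}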

iOS Objective-C screenshot sublayer not visible

I'm building an app where I want to take a snapshot from the camera and show it in a UIImageView. I'm able to take the snapshot, but the AVCaptureVideoPreviewLayer is not visible in the screenshot. Does anyone know how to do that?
Here is my code:
@implementation ViewController
CGRect imgRect;
AVCaptureVideoPreviewLayer *previewLayer;
AVCaptureVideoDataOutput *output;
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
//Capture Session
AVCaptureSession *session = [[AVCaptureSession alloc]init];
session.sessionPreset = AVCaptureSessionPresetPhoto;
//Add device
AVCaptureDevice *device =
[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
//Input
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
if (!input)
{
NSLog(#"No Input");
}
[session addInput:input];
//Output
output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
//Preview
previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
CGFloat x = self.view.bounds.size.width * 0.5 - 128;
imgRect = CGRectMake(x, 64, 256, 256);
previewLayer.frame = imgRect;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];
//Start capture session
[session startRunning];
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (IBAction)TakeSnapshot:(id)sender {
self.imgResult.image = self.pb_takeSnapshot;
}
- (UIImage *)pb_takeSnapshot {
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
@end
A bit of help is very much appreciated.
Thank you in advance
Gilbert Avezaat
You should use AVCaptureStillImageOutput to get an image from the camera connection.
Here is how you could do it:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
stillImageOutput.outputSettings = @{
AVVideoCodecKey: AVVideoCodecJPEG,
(__bridge id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)
};
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
}];
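The connection argument above is not defined in the snippet; a common way to obtain it (and the one used elsewhere on this page) is to scan the still image output's connections for the video port, roughly like this, assuming stillImageOutput has already been added to a running session:
// Find the video connection on the still image output (sketch).
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}
// Pass videoConnection to captureStillImageAsynchronouslyFromConnection:.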
First check whether the image is returned or not. If it is, then:
- (IBAction)TakeSnapshot:(id)sender {
self.imgResult.image = self.pb_takeSnapshot;
[self.view bringSubviewToFront:self.imgResult];
}
Hope it helps you.

Having issues with AVCaptureVideoOrientation showing camera preview incorrectly

I've been given some code from 2011 to update for a client. I've managed to get it all sorted for iOS 7 and submitted the update to Apple.
I now have an issue with the app which I need to sort out as soon as possible. I'm really struggling to find a solution; I've tried all sorts!
In brief, this app allows users to take multiple photos in a custom camera view (with a camera preview at all times) then hit a done button to close the window and return to the previous screen.
However, when the camera preview shows on screen, it is in the incorrect orientation, rotated 90 degrees counter-clockwise.
Here is an example.....
I need to force the user to take the photo in landscape mode.
The second issue is if the user does take a landscape photo, it saves it 90 degrees counter clockwise as well.
Can anybody help????
Full code is below; it uses the original code that was supplied, with a few tweaks to try and fix it, without much joy!
Many thanks in advance
Simon
//
// CameraViewController.m
// X
//
// Created by X on 8/9/11.
// Copyright 2011 __MyCompanyName__. All rights reserved.
//
#import "CameraViewController.h"
#import <ImageIO/ImageIO.h>
static AVCaptureVideoOrientation avOrientationForInterfaceOrientation(UIInterfaceOrientation iOrientation);
static AVCaptureVideoOrientation avOrientationForInterfaceOrientation(UIInterfaceOrientation iOrientation)
{
AVCaptureVideoOrientation result = iOrientation;
if ( iOrientation == UIInterfaceOrientationLandscapeLeft )
result = AVCaptureVideoOrientationLandscapeLeft;
else if ( iOrientation == UIInterfaceOrientationLandscapeRight )
result = AVCaptureVideoOrientationLandscapeRight;
return result;
}
@implementation CameraViewController
@synthesize stillImageOutput, delegate;
@synthesize session, captureVideoPreviewLayer;
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
AVCaptureVideoOrientation avcaptureOrientation = avOrientationForInterfaceOrientation(toInterfaceOrientation);
self.captureVideoPreviewLayer.connection.videoOrientation = avcaptureOrientation;
}
- (void)dealloc {
[stillImageOutput release];
[captureVideoPreviewLayer release];
[session release];
[super dealloc];
}
- (void)didReceiveMemoryWarning {
// Releases the view if it doesn't have a superview.
[super didReceiveMemoryWarning];
// Release any cached data, images, etc that aren't in use.
}
#pragma mark - View lifecycle
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
// rotate the status bar
[UIApplication sharedApplication].statusBarOrientation = UIInterfaceOrientationLandscapeLeft;
// try and rotate the camera live preview
}
-(void)viewDidAppear:(BOOL)animated {
[super viewDidAppear:animated];
self.session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
self.captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
AVCaptureVideoOrientation avcaptureOrientation = avOrientationForInterfaceOrientation(self.interfaceOrientation);
self.captureVideoPreviewLayer.connection.videoOrientation = avcaptureOrientation;
//captureVideoPreviewLayer.connection.videoOrientation = AVCaptureVideoOrientationLandscapeLeft;
NSLog(#"doing something");
captureVideoPreviewLayer.frame = cameraView.bounds;
[cameraView.layer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"ERROR: trying to open camera: %#", error);
}
[session addInput:input];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
[session startRunning];
canDismiss = 0;
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(didRotate:)
name:UIDeviceOrientationDidChangeNotification object:nil];
}
- (void)viewDidUnload {
[super viewDidUnload];
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
}
-(BOOL)shouldAutorotate {
return YES;
}
- (NSUInteger)supportedInterfaceOrientations
{
return UIInterfaceOrientationMaskLandscapeLeft;
}
// Set the camera to force itself to a landscape view
- (UIInterfaceOrientation)preferredInterfaceOrientationForPresentation
{
return UIInterfaceOrientationLandscapeLeft;
}
/*- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
// Return YES for supported orientations
//return (interfaceOrientation == UIInterfaceOrientationPortrait);
return (interfaceOrientation == UIInterfaceOrientationPortrait) | (interfaceOrientation == UIInterfaceOrientationLandscapeLeft) | (interfaceOrientation == UIInterfaceOrientationLandscapeRight);
}*/
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
if(interfaceOrientation == UIInterfaceOrientationLandscapeLeft)
{
captureVideoPreviewLayer.connection.videoOrientation = UIInterfaceOrientationLandscapeLeft;
}
// and so on for other orientations
return ((interfaceOrientation == UIInterfaceOrientationLandscapeLeft));
}
- (void) didRotate:(NSNotification *)notification {
[UIView beginAnimations:nil context:nil];
if ( UIInterfaceOrientationIsLandscape([[UIDevice currentDevice] orientation]) ) {
[topView setFrame:CGRectMake(0, 0, 320, 27)];
[bottomView setFrame:CGRectMake(0, 453, 320, 27)];
} else {
[topView setFrame:CGRectMake(0, 0, 320, 110)];
[bottomView setFrame:CGRectMake(0, 350, 320, 110)];
}
switch ([[UIDevice currentDevice] orientation]) {
case UIInterfaceOrientationLandscapeLeft:
btnDone.transform = CGAffineTransformMakeRotation(3*M_PI_2);
btnTakePicture.transform = CGAffineTransformMakeRotation(3*M_PI_2);
break;
default:
btnDone.transform = CGAffineTransformMakeRotation(M_PI_2);
btnTakePicture.transform = CGAffineTransformMakeRotation(M_PI_2);
break;
}
[UIView commitAnimations];
}
-(IBAction)takePhoto {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
NSLog(#"about to request a capture from: %#", stillImageOutput);
canDismiss++;
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments)
{
// Do something with the attachments.
NSLog(#"attachements: %#", exifAttachments);
}
else
NSLog(#"no attachments");
int width = [[[NSUserDefaults standardUserDefaults] objectForKey:#"image_width"] intValue];
int height = [[[NSUserDefaults standardUserDefaults] objectForKey:#"image_height"] intValue];
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage * image = [[UIImage alloc] initWithData:imageData];
NSLog(#"Image w:%f h:%f", image.size.width, image.size.height);
UIImage * rotatedimage;
if ( UIInterfaceOrientationIsLandscape([[UIDevice currentDevice] orientation]) ) {
// Rotate the image
CGSize newSize = CGSizeMake(image.size.height, image.size.width);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate (NULL, newSize.width, newSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if ( [[UIDevice currentDevice] orientation] == UIInterfaceOrientationLandscapeLeft ) {
CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0, newSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextScaleCTM(ctx, -1.0, 1.0);
CGContextTranslateCTM(ctx, -newSize.width, 0);
CGContextConcatCTM(ctx, transform);
}
CGContextDrawImage(ctx, CGRectMake(0, 0, newSize.width, newSize.height), image.CGImage);
CGImageRef sourceImageRef = CGBitmapContextCreateImage(ctx);
rotatedimage = [UIImage imageWithCGImage:sourceImageRef];
CGContextRelease(ctx);
NSLog(#"Rotated Image w:%f h:%f", rotatedimage.size.width, rotatedimage.size.height);
// Scale the image
newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext(newSize);
[rotatedimage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
//image is the original UIImage
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(#"New Image w:%f h:%f", newImage.size.width, newImage.size.height);
[delegate tookPhoto:newImage];
} else {
// Scale the image
CGSize newSize = CGSizeMake(width, image.size.height/(image.size.width/width));
UIGraphicsBeginImageContext( newSize );
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(#"New Image w:%f h:%f", newImage.size.width, newImage.size.height);
// Chop out the middle
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate (NULL, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if ( [[UIDevice currentDevice] orientation] == UIInterfaceOrientationPortraitUpsideDown ) {
CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0, newImage.size.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextScaleCTM(ctx, -1.0, 1.0);
CGContextTranslateCTM(ctx, -newImage.size.width, 0);
CGContextConcatCTM(ctx, transform);
CGContextDrawImage(ctx, CGRectMake(0, (newImage.size.height-height)/2, newImage.size.width, newImage.size.height), newImage.CGImage);
} else {
CGContextDrawImage(ctx, CGRectMake(0, -(newImage.size.height-height)/2, newImage.size.width, newImage.size.height), newImage.CGImage);
}
CGImageRef sourceImageRef = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
[delegate tookPhoto:[UIImage imageWithCGImage:sourceImageRef]];
}
[image release];
canDismiss--;
}];
}
// action when the 'tick' button is pressed
-(IBAction)done {
if ( canDismiss == 0 ) {
self.delegate = nil;
// animate the camera view away
[self dismissViewControllerAnimated:YES completion:nil];
// swap the status bar back to the default portrait
[UIApplication sharedApplication].statusBarOrientation = UIInterfaceOrientationPortrait;
}
}
@end

Rotating camera and photo taken issue in iOS 7

I've been given an app from a few years ago to update for a new client, and have been provided with the original code.
After making the updates they wanted and getting the app working again, we discovered a bug: if the user rotates their device to take a landscape photo, the view on screen doesn't rotate, and the photo is taken at 90 degrees rather than in a true landscape orientation.
This was meant to be a quick job, which has turned into a big headache.
Any ideas what I can do to the code below in order to fix this issue?
Many thanks!
//
// CameraViewController.m
//
//
// Created by XXX on 8/9/11.
// Copyright 2011 __MyCompanyName__. All rights reserved.
//
#import "CameraViewController.h"
#import <ImageIO/ImageIO.h>
@implementation CameraViewController
@synthesize stillImageOutput, delegate;
@synthesize session, captureVideoPreviewLayer;
- (void)dealloc {
[stillImageOutput release];
[captureVideoPreviewLayer release];
[session release];
[super dealloc];
}
- (void)didReceiveMemoryWarning {
// Releases the view if it doesn't have a superview.
[super didReceiveMemoryWarning];
// Release any cached data, images, etc that aren't in use.
}
#pragma mark - View lifecycle
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
}
-(void)viewDidAppear:(BOOL)animated {
[super viewDidAppear:animated];
self.session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
self.captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = cameraView.bounds;
[cameraView.layer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"ERROR: trying to open camera: %#", error);
}
[session addInput:input];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
[session startRunning];
canDismiss = 0;
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(didRotate:)
name:UIDeviceOrientationDidChangeNotification object:nil];
}
- (void)viewDidUnload {
[super viewDidUnload];
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
}
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
// Return YES for supported orientations
return (interfaceOrientation == UIInterfaceOrientationPortrait);
}
- (void) didRotate:(NSNotification *)notification {
[UIView beginAnimations:nil context:nil];
if ( UIInterfaceOrientationIsLandscape([[UIDevice currentDevice] orientation]) ) {
[topView setFrame:CGRectMake(0, 0, 320, 27)];
[bottomView setFrame:CGRectMake(0, 433, 320, 27)];
} else {
[topView setFrame:CGRectMake(0, 0, 320, 110)];
[bottomView setFrame:CGRectMake(0, 350, 320, 110)];
}
switch ([[UIDevice currentDevice] orientation]) {
case UIInterfaceOrientationPortrait:
btnDone.transform = CGAffineTransformIdentity;
btnTakePicture.transform = CGAffineTransformIdentity;
break;
case UIInterfaceOrientationPortraitUpsideDown:
btnDone.transform = CGAffineTransformMakeRotation(M_PI);
btnTakePicture.transform = CGAffineTransformMakeRotation(M_PI);
break;
case UIInterfaceOrientationLandscapeLeft:
btnDone.transform = CGAffineTransformMakeRotation(3*M_PI_2);
btnTakePicture.transform = CGAffineTransformMakeRotation(3*M_PI_2);
break;
default:
btnDone.transform = CGAffineTransformMakeRotation(M_PI_2);
btnTakePicture.transform = CGAffineTransformMakeRotation(M_PI_2);
break;
}
[UIView commitAnimations];
}
-(IBAction)takePhoto {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
NSLog(#"about to request a capture from: %#", stillImageOutput);
canDismiss++;
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments)
{
// Do something with the attachments.
NSLog(#"attachements: %#", exifAttachments);
}
else
NSLog(#"no attachments");
int width = [[[NSUserDefaults standardUserDefaults] objectForKey:#"image_width"] intValue];
int height = [[[NSUserDefaults standardUserDefaults] objectForKey:#"image_height"] intValue];
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage * image = [[UIImage alloc] initWithData:imageData];
NSLog(#"Image w:%f h:%f", image.size.width, image.size.height);
UIImage * rotatedimage;
if ( UIInterfaceOrientationIsLandscape([[UIDevice currentDevice] orientation]) ) {
// Rotate the image
CGSize newSize = CGSizeMake(image.size.height, image.size.width);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate (NULL, newSize.width, newSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if ( [[UIDevice currentDevice] orientation] == UIInterfaceOrientationLandscapeLeft ) {
CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0, newSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextScaleCTM(ctx, -1.0, 1.0);
CGContextTranslateCTM(ctx, -newSize.width, 0);
CGContextConcatCTM(ctx, transform);
}
CGContextDrawImage(ctx, CGRectMake(0, 0, newSize.width, newSize.height), image.CGImage);
CGImageRef sourceImageRef = CGBitmapContextCreateImage(ctx);
rotatedimage = [UIImage imageWithCGImage:sourceImageRef];
CGContextRelease(ctx);
NSLog(#"Rotated Image w:%f h:%f", rotatedimage.size.width, rotatedimage.size.height);
// Scale the image
newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext(newSize);
[rotatedimage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
//image is the original UIImage
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(#"New Image w:%f h:%f", newImage.size.width, newImage.size.height);
[delegate tookPhoto:newImage];
} else {
// Scale the image
CGSize newSize = CGSizeMake(width, image.size.height/(image.size.width/width));
UIGraphicsBeginImageContext( newSize );
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(#"New Image w:%f h:%f", newImage.size.width, newImage.size.height);
// Chop out the middle
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate (NULL, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if ( [[UIDevice currentDevice] orientation] == UIInterfaceOrientationPortraitUpsideDown ) {
CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0, newImage.size.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextScaleCTM(ctx, -1.0, 1.0);
CGContextTranslateCTM(ctx, -newImage.size.width, 0);
CGContextConcatCTM(ctx, transform);
CGContextDrawImage(ctx, CGRectMake(0, (newImage.size.height-height)/2, newImage.size.width, newImage.size.height), newImage.CGImage);
} else {
CGContextDrawImage(ctx, CGRectMake(0, -(newImage.size.height-height)/2, newImage.size.width, newImage.size.height), newImage.CGImage);
}
CGImageRef sourceImageRef = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
[delegate tookPhoto:[UIImage imageWithCGImage:sourceImageRef]];
}
[image release];
canDismiss--;
}];
}
-(IBAction)done {
if ( canDismiss == 0 ) {
self.delegate = nil;
//[self dismissModalViewControllerAnimated:YES];
[self dismissViewControllerAnimated:YES completion:nil];
}
}
@end
If I'm not mistaken, you have to place this code in your view controller. Try this:
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
// Return YES for supported orientations
// (landscapeAllowedID / currentViewID are placeholders for however you track which screens may rotate)
if ([landscapeAllowedID containsObject:currentViewID])
return (interfaceOrientation == UIInterfaceOrientationPortrait) || UIInterfaceOrientationIsLandscape(interfaceOrientation);
return (interfaceOrientation == UIInterfaceOrientationPortrait);
}
EDIT
Also add this to the UIViewController.
-(NSUInteger)supportedInterfaceOrientations{
return UIInterfaceOrientationMaskPortrait | UIInterfaceOrientationMaskLandscape; // etc
}
If that doesn't work, place the code in a custom UINavigationController. You have to let the UINavigationController know somehow which view is currently being shown.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
// Return YES for supported orientations
// (landscapeAllowedID / currentViewID are placeholders for however you track which screens may rotate)
if ([landscapeAllowedID containsObject:currentViewID])
return (interfaceOrientation == UIInterfaceOrientationPortrait) || UIInterfaceOrientationIsLandscape(interfaceOrientation);
return (interfaceOrientation == UIInterfaceOrientationPortrait);
}
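Beyond the autorotation callbacks, the live preview itself usually needs its connection's videoOrientation kept in sync with the interface orientation, as the previous question on this page does with its willRotateToInterfaceOrientation: override. A sketch of that idea, assuming the captureVideoPreviewLayer property from the question's code:
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration {
    // Sketch: map the interface orientation onto the preview layer's connection so the preview stays upright.
    AVCaptureVideoOrientation videoOrientation = AVCaptureVideoOrientationPortrait;
    if (toInterfaceOrientation == UIInterfaceOrientationLandscapeLeft)
        videoOrientation = AVCaptureVideoOrientationLandscapeLeft;
    else if (toInterfaceOrientation == UIInterfaceOrientationLandscapeRight)
        videoOrientation = AVCaptureVideoOrientationLandscapeRight;
    else if (toInterfaceOrientation == UIInterfaceOrientationPortraitUpsideDown)
        videoOrientation = AVCaptureVideoOrientationPortraitUpsideDown;
    if ([self.captureVideoPreviewLayer.connection isVideoOrientationSupported])
        self.captureVideoPreviewLayer.connection.videoOrientation = videoOrientation;
}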

Capture image with bounds?

I am able to capture images from the iOS rear-facing camera. Everything is working flawlessly, except that I want it to take the picture as per the bounds of my UIView.
My code is below:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = vImagePreview.bounds;
[vImagePreview.layer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device = [self backFacingCameraIfAvailable];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"ERROR: trying to open camera: %#", error);
}
[session addInput:input];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
[session startRunning];
[session addOutput:stillImageOutput];
}
-(AVCaptureDevice *)backFacingCameraIfAvailable{
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in videoDevices){
if (device.position == AVCaptureDevicePositionBack){
captureDevice = device;
break;
}
}
// couldn't find one on the back, so just get the default video device.
if (!captureDevice){
captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
return captureDevice;
}
And below is the code to capture the image:
- (IBAction)captureTask {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections){
for (AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
// Do something with the attachments.
NSLog(#"attachements: %#", exifAttachments);
} else {
NSLog(#"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
stillImage = image;
}];
}
The issue I'm facing is that it takes the picture and saves it to stillImage; however, the image covers the whole iPhone screen from what I can tell. It's not limited to the bounds of the UIView *vImagePreview I created. Is there a way to clip the captured image to those bounds?
[EDIT]
After reading the docs, I realized the image was at the proper resolution, per session.sessionPreset = AVCaptureSessionPresetMedium;. Is there a way to make the image square, like how Instagram makes their images? According to the docs, none of the session presets are square :(
I tried with the below:
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResize;
However, it only resizes the preview to fit the current view; it doesn't produce a square image.
I understand your frustration; presets should be customizable or have more options! What I do with my images is crop them about the center, for which I wrote the following code:
- (UIImage *)crop:(UIImage *)image from:(CGSize)src to:(CGSize)dst
{
CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
Here src represents the original dimensions, dst represents the cropped dimensions, and image is of course the image you want cropped.
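For example, to get a centred square out of a captured stillImage you might call it like this (a sketch; note the retina handling discussed next):
// Crop stillImage to a centred square whose side is the shorter of its two dimensions.
CGFloat side = MIN(stillImage.size.width, stillImage.size.height);
UIImage *squareImage = [self crop:stillImage from:stillImage.size to:CGSizeMake(side, side)];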
If the device has a retina display, then the crop works as shown below:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] == YES && [[UIScreen mainScreen] scale] == 2.00)
{
CGPoint cropCenter = CGPointMake((src.width), (src.height));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width)), (cropCenter.y - (dst.height)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width*2, dst.height*2);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
else
{
CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
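As an alternative (my own variation, not part of the original answer), the retina special case can be avoided by deriving the point-to-pixel factor from the image itself and cropping in pixel space:
// Sketch: crop about the centre, computing the scale factor from the image instead of hard-coding 2x.
- (UIImage *)cropAboutCenter:(UIImage *)image toPointSize:(CGSize)dst
{
    CGFloat pixelWidth = (CGFloat)CGImageGetWidth(image.CGImage);
    CGFloat pixelHeight = (CGFloat)CGImageGetHeight(image.CGImage);
    CGFloat factor = pixelWidth / image.size.width; // 1.0 for plain UIImages, 2.0/3.0 for retina snapshots
    CGSize dstPx = CGSizeMake(dst.width * factor, dst.height * factor);
    CGRect cropRect = CGRectMake((pixelWidth - dstPx.width) / 2.0,
                                 (pixelHeight - dstPx.height) / 2.0,
                                 dstPx.width, dstPx.height);
    CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropImage = [UIImage imageWithCGImage:cropRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(cropRef);
    return cropImage;
}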
