I'm building an app where I want to take a snapshot from the camera and show it in a UIImageView. I'm able to take the snapshot, but the AVCaptureVideoPreviewLayer is not visible in the screenshot. Does anyone know how to do that?
Here is my code:
@implementation ViewController
CGRect imgRect;
AVCaptureVideoPreviewLayer *previewLayer;
AVCaptureVideoDataOutput *output;
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
//Capture Session
AVCaptureSession *session = [[AVCaptureSession alloc]init];
session.sessionPreset = AVCaptureSessionPresetPhoto;
//Add device
AVCaptureDevice *device =
[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
//Input
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
if (!input)
{
NSLog(@"No Input");
}
[session addInput:input];
//Output
output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
//Preview
previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
CGFloat x = self.view.bounds.size.width * 0.5 - 128;
imgRect = CGRectMake(x, 64, 256, 256);
previewLayer.frame = imgRect;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];
//Start capture session
[session startRunning];
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (IBAction)TakeSnapshot:(id)sender {
self.imgResult.image = self.pb_takeSnapshot;
}
- (UIImage *)pb_takeSnapshot {
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
@end
A bit of help would be very much appreciated.
Thank you in advance,
Gilbert Avezaat
You should use AVCaptureStillImageOutput to get an image from the camera connection. Here is how you could do it:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
// For JPEG output, specify only the codec; the pixel format key is for
// uncompressed capture and shouldn't be combined with AVVideoCodecJPEG.
stillImageOutput.outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[session addOutput:stillImageOutput];
// Grab the still image output's video connection.
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
// Use `image` here, e.g. assign it to your image view on the main queue.
}];
First check whether an image is actually returned; if so, then:
- (IBAction)TakeSnapshot:(id)sender {
self.imgResult.image = self.pb_takeSnapshot;
[self.view bringSubviewToFront:self.imgResult];
}
Hope it helps.
I am making an app that scans a barcode with inverted colors (black background and white bars). I have to use AVFoundation, and currently I am using AVCaptureMetadataOutput. I can get it to work perfectly with a normal barcode, but I need to invert the colors (white -> black, black -> white, etc.). Can I add a CIColorInvert filter to the input in the AVCaptureSession?
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
mCaptureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
if([mCaptureSession canAddInput:videoInput]){
[mCaptureSession addInput:videoInput];
} else {
NSLog(@"Could not add video input: %@", [error localizedDescription]);
}
// set up metadata output and this class as its delegate so that if metadata (barcode 39) is detected it will send the data to this class
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
if([mCaptureSession canAddOutput:metadataOutput]){
[mCaptureSession addOutput:metadataOutput];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeCode39Code]];
} else {
NSLog(@"Could not add metadata output");
}
// sets up what the camera sees as a layer of the view
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:mCaptureSession];
//CGRect frame = CGRectMake(0.0 - 50, 0.0, 1024.0, 1024.0 + 720.0);
CGRect bounds=self.view.layer.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.bounds=bounds;
previewLayer.position=CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
NSArray *filters = [[NSArray alloc] initWithObjects:[CIFilter filterWithName:@"CIColorInvert"], nil];
[previewLayer setFilters:filters];
//[previewLayer setFrame:self.view.bounds];
[self.view.layer addSublayer:previewLayer];
//starts the camera session
[mCaptureSession startRunning];
}
I am trying to apply a CIFilter onto live camera feed (and be able to capture a filtered still image).
I have seen some code on StackOverflow pertaining to this issue, but I haven't been able to get it to work.
My issue is that in the method captureOutput the filter seems correctly applied (I put a breakpoint in there and QuickLooked it), but I don't see it in my UIView (I see the original feed, without the filter).
Also I am not sure which output I should add to the session:
[self.session addOutput: self.stillOutput]; //AVCaptureStillImageOutput
[self.session addOutput: self.videoDataOut]; //AVCaptureVideoDataOutput
And which of those I should iterate through when looking for a connection (in findVideoConnection).
I am totally confused.
Here's some code:
viewDidLoad
-(void)viewDidLoad {
[super viewDidLoad];
self.shutterButton.userInteractionEnabled = YES;
self.context = [CIContext contextWithOptions: @{kCIContextUseSoftwareRenderer : @(YES)}];
self.filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[self.filter setValue:@15 forKey:kCIInputRadiusKey];
}
prepare session
-(void)prepareSessionWithDevicePosition: (AVCaptureDevicePosition)position {
AVCaptureDevice* device = [self videoDeviceWithPosition: position];
self.currentPosition = position;
self.session = [[AVCaptureSession alloc] init];
self.session.sessionPreset = AVCaptureSessionPresetPhoto;
NSError* error = nil;
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice: device error: &error];
if ([self.session canAddInput: self.deviceInput]) {
[self.session addInput: self.deviceInput];
}
AVCaptureVideoPreviewLayer* previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession: self.session];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
self.videoDataOut = [AVCaptureVideoDataOutput new];
[self.videoDataOut setSampleBufferDelegate: self queue:dispatch_queue_create("bufferQueue", DISPATCH_QUEUE_SERIAL)];
self.videoDataOut.alwaysDiscardsLateVideoFrames = YES;
CALayer* rootLayer = [[self view] layer];
rootLayer.masksToBounds = YES;
CGRect frame = self.previewView.frame;
previewLayer.frame = frame;
[rootLayer insertSublayer: previewLayer atIndex: 1];
self.stillOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary* outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
self.stillOutput.outputSettings = outputSettings;
[self.session addOutput: self.stillOutput];
//tried [self.session addOutput: self.videoDataOut];
//and didn't work (filtered image didn't show, and also couldn't take pictures)
[self findVideoConnection];
}
find video connection
-(void)findVideoConnection {
for (AVCaptureConnection* connection in self.stillOutput.connections) {
//also tried self.videoDataOut.connections
for (AVCaptureInputPort* port in [connection inputPorts]) {
if ([[port mediaType] isEqualToString: AVMediaTypeVideo]) {
self.videoConnection = connection;
break;
}
}
if (self.videoConnection != nil) {
break;
}
}
}
capture output, apply filter and put it in the CALayer
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// turn buffer into an image we can manipulate
CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];
// filter
[self.filter setValue:result forKey: @"inputImage"];
// render image
CGImageRef blurredImage = [self.context createCGImage:self.filter.outputImage fromRect:result.extent];
UIImage* img = [UIImage imageWithCGImage: blurredImage];
//Did this to check whether the image was actually filtered.
//And surprisingly it was.
dispatch_async(dispatch_get_main_queue(), ^{
//The image present in my UIView is for some reason not blurred.
self.previewView.layer.contents = (__bridge id)blurredImage;
CGImageRelease(blurredImage);
});
}
I'm just starting out in Objective-C, and I'm trying to create a simple app that shows the camera view with a blur effect on it. I got the camera output working with the AVFoundation framework. Now I'm trying to hook up the Core Image framework, but I don't know how; Apple's documentation is confusing for me, and searching for guides and tutorials online yields no results. Thanks in advance for the help.
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()
@property (strong, nonatomic) CIContext *context;
@end
@implementation ViewController
AVCaptureSession *session;
AVCaptureStillImageOutput *stillImageOutput;
-(CIContext *)context
{
if(!_context)
{
_context = [CIContext contextWithOptions:nil];
}
return _context;
}
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}
-(void)viewWillAppear:(BOOL)animated{
session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput]) {
[session addInput:deviceInput];
}
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view] layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = self.imageView.frame;
[previewLayer setFrame:frame];
[previewLayer.connection setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
[rootLayer insertSublayer:previewLayer atIndex:0];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
[session startRunning];
}
@end
Here's something to get you started. This is an updated version of the code from the following link.
https://gist.github.com/eladb/9662102
The trick is to use the AVCaptureVideoDataOutputSampleBufferDelegate.
With this delegate, you can use imageWithCVPixelBuffer to construct a CIImage from your camera buffer.
Right now though I'm trying to figure out how to reduce lag. I'll update asap.
Update: Latency is now minimal, and on some effects unnoticeable. Unfortunately, it seems that blur is one of the slowest. You may want to look into vImage.
#import "ViewController.h"
#import <CoreImage/CoreImage.h>
#import <AVFoundation/AVFoundation.h>
@interface ViewController () {
}
@property (strong, nonatomic) CIContext *coreImageContext;
@property (strong, nonatomic) AVCaptureSession *cameraSession;
@property (strong, nonatomic) AVCaptureVideoDataOutput *videoOutput;
@property (strong, nonatomic) UIView *blurCameraView;
@property (strong, nonatomic) CIFilter *filter;
@property BOOL cameraOpen;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.blurCameraView = [[UIView alloc]initWithFrame:[[UIScreen mainScreen] bounds]];
[self.view addSubview:self.blurCameraView];
//setup filter
self.filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[self.filter setDefaults];
[self.filter setValue:@(3.0f) forKey:@"inputRadius"];
[self setupCamera];
[self openCamera];
// Do any additional setup after loading the view, typically from a nib.
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)setupCamera
{
self.coreImageContext = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @(YES)}];
// session
self.cameraSession = [[AVCaptureSession alloc] init];
[self.cameraSession setSessionPreset:AVCaptureSessionPresetLow];
// (note: no commitConfiguration here; it's only needed to pair a beginConfiguration)
// input
AVCaptureDevice *shootingCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *shootingDevice = [AVCaptureDeviceInput deviceInputWithDevice:shootingCamera error:NULL];
if ([self.cameraSession canAddInput:shootingDevice]) {
[self.cameraSession addInput:shootingDevice];
}
// video output
self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoOutput.alwaysDiscardsLateVideoFrames = YES;
[self.videoOutput setSampleBufferDelegate:self queue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)];
if ([self.cameraSession canAddOutput:self.videoOutput]) {
[self.cameraSession addOutput:self.videoOutput];
}
if (self.videoOutput.connections.count > 0) {
AVCaptureConnection *connection = self.videoOutput.connections[0];
connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
self.cameraOpen = NO;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// turn buffer into an image we can manipulate
CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];
// filter
[self.filter setValue:result forKey:@"inputImage"];
// render image
CGImageRef blurredImage = [self.coreImageContext createCGImage:self.filter.outputImage fromRect:result.extent];
dispatch_async(dispatch_get_main_queue(), ^{
self.blurCameraView.layer.contents = (__bridge id)blurredImage;
CGImageRelease(blurredImage);
});
}
- (void)openCamera {
if (self.cameraOpen) {
return;
}
self.blurCameraView.alpha = 0.0f;
[self.cameraSession startRunning];
[self.view layoutIfNeeded];
[UIView animateWithDuration:3.0f animations:^{
self.blurCameraView.alpha = 1.0f;
}];
self.cameraOpen = YES;
}
I'm capturing live video with the back camera on the iPhone with AVCaptureSession, applying some filters with CoreImage and then trying to output the resulting video with OpenGL ES. Most of the code is from an example from the WWDC 2012 session 'Core Image Techniques'.
Displaying the output of the filter chain using [UIImage imageWithCIImage:...] or by creating a CGImageRef for every frame works fine. However, when trying to display with OpenGL ES all I get is a black screen.
In the course they use a custom view class to display the output, but the code for that class isn't available. My view controller class extends GLKViewController, and the class of its view is set to GLKView.
I've searched for and downloaded all GLKit tutorials and examples I can find but nothing is helping. In particular I can't get any video output when I try to run the example from here either. Can anyone point me in the right direction?
#import "VideoViewController.h"
@interface VideoViewController ()
{
AVCaptureSession *_session;
EAGLContext *_eaglContext;
CIContext *_ciContext;
CIFilter *_sepia;
CIFilter *_bumpDistortion;
}
- (void)setupCamera;
- (void)setupFilters;
@end
@implementation VideoViewController
- (void)viewDidLoad
{
[super viewDidLoad];
GLKView *view = (GLKView *)self.view;
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
[EAGLContext setCurrentContext:_eaglContext];
view.context = _eaglContext;
// Configure renderbuffers created by the view
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
view.drawableStencilFormat = GLKViewDrawableStencilFormat8;
[self setupCamera];
[self setupFilters];
}
- (void)setupCamera {
_session = [AVCaptureSession new];
[_session beginConfiguration];
[_session setSessionPreset:AVCaptureSessionPreset640x480];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
[_session addInput:input];
AVCaptureVideoDataOutput *dataOutput = [AVCaptureVideoDataOutput new];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
NSDictionary *options;
options = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] };
[dataOutput setVideoSettings:options];
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:dataOutput];
[_session commitConfiguration];
}
#pragma mark Setup Filters
- (void)setupFilters {
_sepia = [CIFilter filterWithName:@"CISepiaTone"];
[_sepia setValue:@0.7 forKey:@"inputIntensity"];
_bumpDistortion = [CIFilter filterWithName:@"CIBumpDistortion"];
[_bumpDistortion setValue:[CIVector vectorWithX:240 Y:320] forKey:@"inputCenter"];
[_bumpDistortion setValue:[NSNumber numberWithFloat:200] forKey:@"inputRadius"];
[_bumpDistortion setValue:[NSNumber numberWithFloat:3.0] forKey:@"inputScale"];
}
#pragma mark Main Loop
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Grab the pixel buffer
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
// null colorspace to avoid colormatching
NSDictionary *options = @{ (id)kCIImageColorSpace : (id)kCFNull };
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer options:options];
image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(-M_PI/2.0)];
CGPoint origin = [image extent].origin;
image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];
// Pass it through the filter chain
[_sepia setValue:image forKey:@"inputImage"];
[_bumpDistortion setValue:_sepia.outputImage forKey:@"inputImage"];
// Grab the final output image
image = _bumpDistortion.outputImage;
// draw to GLES context
[_ciContext drawImage:image inRect:CGRectMake(0, 0, 480, 640) fromRect:[image extent]];
// and present to screen
[_eaglContext presentRenderbuffer:GL_RENDERBUFFER];
NSLog(@"frame hatched");
[_sepia setValue:nil forKey:@"inputImage"];
}
- (void)loadView {
[super loadView];
// Initialize the CIContext with a null working space
NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];
}
- (void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
[_session startRunning];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
@end
Wow, actually figured it out myself. This line of work may suit me after all ;)
First, for whatever reason, this code only works with OpenGL ES 2, not ES 3. I have yet to figure out why.
Second, I was setting up the CIContext in the loadView method, which runs before viewDidLoad and thus uses a not-yet-initialized EAGLContext.
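For anyone hitting the same problem, here is a sketch of what my corrected viewDidLoad looks like with both fixes applied (everything else from the code above is unchanged, and loadView no longer creates the CIContext):
- (void)viewDidLoad
{
[super viewDidLoad];
GLKView *view = (GLKView *)self.view;
// Fix 1: request an OpenGL ES 2 context instead of ES 3
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:_eaglContext];
view.context = _eaglContext;
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
view.drawableStencilFormat = GLKViewDrawableStencilFormat8;
// Fix 2: create the CIContext here, after the EAGLContext exists,
// instead of in loadView (which runs before viewDidLoad)
NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];
[self setupCamera];
[self setupFilters];
}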
I am able to capture images from the iOS rear facing camera. Everything is working flawlessly except I want it to take the picture as per the bounds in my UIView.
My code is below:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = vImagePreview.bounds;
[vImagePreview.layer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device = [self backFacingCameraIfAvailable];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(@"ERROR: trying to open camera: %@", error);
}
[session addInput:input];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
[session addOutput:stillImageOutput];
[session startRunning];
}
-(AVCaptureDevice *)backFacingCameraIfAvailable{
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in videoDevices){
if (device.position == AVCaptureDevicePositionBack){
captureDevice = device;
break;
}
}
// couldn't find one on the back, so just get the default video device.
if (!captureDevice){
captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
return captureDevice;
}
And below is the code to capture the image:
- (IBAction)captureTask {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections){
for (AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(@"about to request a capture from: %@", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
// Do something with the attachments.
NSLog(@"attachments: %@", exifAttachments);
} else {
NSLog(@"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
stillImage = image;
}];
}
The issue I'm facing is that it takes the picture and saves it to stillImage, but from what I can tell the image covers the whole iPhone screen. It's not cropped to the bounds of the UIView *vImagePreview I created. Is there a way to clip the captured image to those bounds?
[EDIT]
After reading the docs, I realized the image is at the proper resolution, as set here: session.sessionPreset = AVCaptureSessionPresetMedium;. Is there a way to make the image square, like how Instagram makes their images? According to the docs, none of the session presets are square :(
I tried with the below:
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResize;
However, it only resizes the image to fit the current view, doesn't make a square image.
I understand your frustration; presets should be customizable or have more options! What I do with my images is crop them about the center, for which I wrote the following code:
- (UIImage *)crop:(UIImage *)image from:(CGSize)src to:(CGSize)dst
{
CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
Where src represents the original dimensions and dst represents the cropped dimensions; and image is of course the image you want cropped.
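For example, assuming stillImage is the photo captured in your code above, you could get a square crop like this (the 300 x 300 size is just an illustration; pick whatever side length you need):
CGSize squareSize = CGSizeMake(300, 300);
UIImage *squareImage = [self crop:stillImage from:stillImage.size to:squareSize];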
If the device has a Retina display, then the crop works as shown below:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] == YES && [[UIScreen mainScreen] scale] == 2.00)
{
CGPoint cropCenter = CGPointMake((src.width), (src.height));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width)), (cropCenter.y - (dst.height)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width*2, dst.height*2);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
else
{
CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}