As the title says, I want to detect a face and then crop just the face area. This is what I have so far:
- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
for (AVMetadataObject *face in metadataObjects) {
if ([face.type isEqualToString:AVMetadataObjectTypeFace]) {
AVCaptureConnection *stillConnection = [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
stillConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
[_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (error) {
NSLog(@"There was a problem");
return;
}
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *stillImage = [UIImage imageWithData:jpegData];
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:[CIContext contextWithOptions:nil] options:nil];
CIImage *ciimage = [CIImage imageWithData:jpegData];
NSArray *features = [faceDetector featuresInImage:ciimage];
self.captureImageView.image = stillImage;
for(CIFeature *feature in features) {
if ([feature isKindOfClass:[CIFaceFeature class]]) {
CIFaceFeature *faceFeature = (CIFaceFeature *)feature;
CGImageRef imageRef = CGImageCreateWithImageInRect([stillImage CGImage], faceFeature.bounds);
self.detectedFaceImageView.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
}
}
//[_session stopRunning];
}];
}
}
}
This code works partially: it can detect a face, but it cannot crop out the face area. It always crops the wrong region, or nothing at all. I have been browsing Stack Overflow for answers, trying this and that, but to no avail.
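As an aside, the usual cause of this symptom is a coordinate-space mismatch: CIFaceFeature.bounds is expressed in Core Image coordinates, with the origin at the bottom-left of the image in pixels, while CGImageCreateWithImageInRect expects top-left-origin CoreGraphics coordinates. A minimal conversion sketch, assuming the UIImage has orientation up and scale 1 so CI pixels and CG pixels line up, would be:
// Convert the CIFaceFeature rect (bottom-left origin, in pixels) into
// a CoreGraphics rect (top-left origin) before cropping
CGRect ciRect = faceFeature.bounds;
CGFloat imageHeight = CGImageGetHeight(stillImage.CGImage);
CGRect cgRect = CGRectMake(ciRect.origin.x,
                           imageHeight - CGRectGetMaxY(ciRect),
                           ciRect.size.width,
                           ciRect.size.height);
CGImageRef imageRef = CGImageCreateWithImageInRect(stillImage.CGImage, cgRect);
self.detectedFaceImageView.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);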
Here is the answer
- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// when do we start face detection
if (!_canStartDetection) return;
CIImage *ciimage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
NSArray *features = [_faceDetector featuresInImage:ciimage options:nil];
// find face feature
for(CIFeature *feature in features) {
// if not face feature ignore
if (![feature isKindOfClass:[CIFaceFeature class]]) continue;
// face detected
_canStartDetection = NO;
CIFaceFeature *faceFeature = (CIFaceFeature *)feature;
// crop detected face
CIVector *cropRect = [CIVector vectorWithCGRect:faceFeature.bounds];
CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
[cropFilter setValue:ciimage forKey:@"inputImage"];
[cropFilter setValue:cropRect forKey:@"inputRectangle"];
CIImage *croppedImage = [cropFilter valueForKey:@"outputImage"];
UIImage *stillImage = [UIImage imageWithCIImage:croppedImage]; // use the cropped face, not the full frame
}
}
Note that this time I used AVCaptureVideoDataOutput; here is that code:
// set output for face frames
AVCaptureVideoDataOutput *output2 = [[AVCaptureVideoDataOutput alloc] init];
[_session addOutput:output2];
output2.videoSettings = @{(NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
output2.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue = dispatch_queue_create("com.myapp.faceDetectionQueueSerial", DISPATCH_QUEUE_SERIAL);
[output2 setSampleBufferDelegate:self queue:queue];
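The _canStartDetection and _faceDetector ivars used above are assumed to be set up elsewhere (the answer does not show this); a minimal sketch, e.g. in viewDidLoad, might be:
// Assumed setup for the ivars used in captureOutput:didOutputSampleBuffer:
_canStartDetection = YES;
_faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                   context:nil
                                   options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];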
Related
I am trying to scan QR images that the user chooses from disk. I found a strange issue where all the libraries I tried failed (CIDetector, an old port of ZXing, and ZBar).
I know that there are ways to add a white background (e.g. redrawing the image or using a CIFilter) so that the image will get scanned.
What is the correct way of scanning QR codes with a transparent background (by configuring the CIContext or the CIDetector)? (The image below fails to scan on both iOS and macOS.)
https://en.wikipedia.org/wiki/QR_code#/media/File:QR_code_for_mobile_English_Wikipedia.svg
- (void)scanImage:(CIImage *)image
{
NSArray <CIFeature *>*features = [[self QRdetector] featuresInImage:image];
NSLog(@"Number of features found: %lu", [features count]);
}
- (CIDetector *)QRdetector
{
CIContext *context = [CIContext contextWithOptions:@{kCIContextWorkingColorSpace : [NSNull null]}]; //no difference using special options or nil as a context
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:context options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh, CIDetectorAspectRatio : @(1)}];
return detector;
}
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
NSURL *URL = [[NSBundle mainBundle] URLForResource:@"transparentqrcode" withExtension:@"png"];
UIImage *image = [UIImage imageWithContentsOfFile:[URL path]];
CIImage *ciImage = [CIImage imageWithContentsOfURL:URL];
//CUSTOM CODE TO ADD WHITE BACKGROUND
CIFilter *filter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
[filter setDefaults];
CIColor *whiteColor = [[CIColor alloc] initWithColor:[UIColor whiteColor]];
CIImage *colorImage = [CIImage imageWithColor:whiteColor];
colorImage = [colorImage imageByCroppingToRect:ciImage.extent];
[filter setValue:ciImage forKey:kCIInputImageKey];
[filter setValue:colorImage forKey:kCIInputBackgroundImageKey];
CIImage *newImage = [filter valueForKey:kCIOutputImageKey];
[self scanImage:ciImage]; // scanning the original transparent image fails; passing newImage here is what makes it scan
return YES;
}
As mentioned in the comments, it appears CIDetector treats the alpha channel as black. Replacing it with white works, unless the QR code itself is white with a transparent background.
I haven't done any profiling to see if this would be quicker, but it might be a better option.
- (IBAction)didTap:(id)sender {
NSURL *URL = [[NSBundle mainBundle] URLForResource:@"transparentqrcode" withExtension:@"png"];
CIImage *ciImage = [CIImage imageWithContentsOfURL:URL];
NSArray <CIFeature *>*features = [self getImageFeatures:ciImage];
// if CIDetector failed to find / process a QRCode in the image,
// (such as when the image has a transparent background),
// invert the colors and try again
if (features.count == 0) {
CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
[filter setValue:ciImage forKey:kCIInputImageKey];
CIImage *newImage = [filter valueForKey:kCIOutputImageKey];
features = [self getImageFeatures:newImage];
}
if (features.count > 0) {
for (CIQRCodeFeature* qrFeature in features) {
NSLog(@"QRFeature.messageString : %@ ", qrFeature.messageString);
}
} else {
NSLog(@"Unable to decode image!");
}
}
- (NSArray <CIFeature *>*)getImageFeatures:(CIImage *)image
{
NSArray <CIFeature *>*features = [[self QRdetector] featuresInImage:image];
NSLog(@"Number of features found: %lu", [features count]);
return features;
}
- (CIDetector *)QRdetector
{
CIContext *context = [CIContext contextWithOptions:@{kCIContextWorkingColorSpace : [NSNull null]}]; //no difference using special options or nil as a context
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:context options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh, CIDetectorAspectRatio : @(1)}];
return detector;
}
I am trying to apply a CIFilter to the live camera feed (and to be able to capture a filtered still image).
I have seen some code on StackOverflow pertaining to the issue, but I haven't been able to get it to work.
My issue is that in the captureOutput method the filter seems to be applied correctly (I put a breakpoint in there and QuickLooked it), but I don't see it in my UIView (I see the original feed, without the filter).
Also I am not sure which output I should add to the session:
[self.session addOutput: self.stillOutput]; //AVCaptureStillImageOutput
[self.session addOutput: self.videoDataOut]; //AVCaptureVideoDataOutput
And which of those should I iterate through when looking for a connection (in findVideoConnection)?
I am totally confused.
Here's some code:
viewDidLoad
-(void)viewDidLoad {
[super viewDidLoad];
self.shutterButton.userInteractionEnabled = YES;
self.context = [CIContext contextWithOptions: @{kCIContextUseSoftwareRenderer : @(YES)}];
self.filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[self.filter setValue:@15 forKey:kCIInputRadiusKey];
}
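As a side note (my observation, not part of the original question), kCIContextUseSoftwareRenderer forces CPU rendering, which is usually too slow for per-frame video filtering; a GPU-backed context is the more common choice:
// Possible alternative: let Core Image pick the default (GPU) renderer
self.context = [CIContext contextWithOptions:nil];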
prepare session
-(void)prepareSessionWithDevicePosition: (AVCaptureDevicePosition)position {
AVCaptureDevice* device = [self videoDeviceWithPosition: position];
self.currentPosition = position;
self.session = [[AVCaptureSession alloc] init];
self.session.sessionPreset = AVCaptureSessionPresetPhoto;
NSError* error = nil;
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice: device error: &error];
if ([self.session canAddInput: self.deviceInput]) {
[self.session addInput: self.deviceInput];
}
AVCaptureVideoPreviewLayer* previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession: self.session];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
self.videoDataOut = [AVCaptureVideoDataOutput new];
[self.videoDataOut setSampleBufferDelegate: self queue:dispatch_queue_create("bufferQueue", DISPATCH_QUEUE_SERIAL)];
self.videoDataOut.alwaysDiscardsLateVideoFrames = YES;
CALayer* rootLayer = [[self view] layer];
rootLayer.masksToBounds = YES;
CGRect frame = self.previewView.frame;
previewLayer.frame = frame;
[rootLayer insertSublayer: previewLayer atIndex: 1];
self.stillOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary* outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
self.stillOutput.outputSettings = outputSettings;
[self.session addOutput: self.stillOutput];
//tried [self.session addOutput: self.videoDataOut];
//and didn't work (filtered image didn't show, and also couldn't take pictures)
[self findVideoConnection];
}
find video connection
-(void)findVideoConnection {
for (AVCaptureConnection* connection in self.stillOutput.connections) {
//also tried self.videoDataOut.connections
for (AVCaptureInputPort* port in [connection inputPorts]) {
if ([[port mediaType] isEqualToString: AVMediaTypeVideo]) {
self.videoConnection = connection;
break;
}
}
if (self.videoConnection != nil) {
break;
}
}
}
capture output, apply filter and put it in the CALayer
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// turn buffer into an image we can manipulate
CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];
// filter
[self.filter setValue:result forKey: @"inputImage"];
// render image
CGImageRef blurredImage = [self.context createCGImage:self.filter.outputImage fromRect:result.extent];
UIImage* img = [UIImage imageWithCGImage: blurredImage];
//Did this to check whether the image was actually filtered.
//And surprisingly it was.
dispatch_async(dispatch_get_main_queue(), ^{
//The image present in my UIView is for some reason not blurred.
self.previewView.layer.contents = (__bridge id)blurredImage;
CGImageRelease(blurredImage);
});
}
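For what it's worth, and this is my reading rather than something from the original post: AVCaptureVideoPreviewLayer always displays the raw, unfiltered camera feed, so if it covers previewView, the blurred frames assigned in captureOutput will never be visible. A sketch of one possible arrangement, assuming both outputs are added and the raw preview layer is simply left out:
// Hypothetical adjustment in prepareSessionWithDevicePosition:
// add BOTH outputs, skip the raw preview layer, and let previewView
// show only the filtered frames assigned in captureOutput:
if ([self.session canAddOutput:self.videoDataOut]) {
    [self.session addOutput:self.videoDataOut]; // feeds captureOutput: for per-frame filtering
}
if ([self.session canAddOutput:self.stillOutput]) {
    [self.session addOutput:self.stillOutput]; // used for still image capture
}
// do not insert the AVCaptureVideoPreviewLayer; it would sit on top of
// previewView and show the unfiltered feed
[self findVideoConnection]; // iterate self.stillOutput.connections for the still capture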
I'm making use of AVFoundation to integrate a custom camera into my app... The problem is, I'm getting a rare but recurring crash due to memory pressure. I'm not sure why, since I'm using ARC, and the memory gauge in Xcode only shows around 20 MB at the time of the crash. What's going on?
Here's my code
- (void)setupCamera
{
self.session = [[AVCaptureSession alloc] init];
[self.session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
if ([self.session canAddInput:input]) {
[self.session addInput:input];
}
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
if ([self.session canAddOutput:output]) {
output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[self.session addOutput:output];
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
}
self.preview = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[self.preview setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.preview setFrame:self.cameraView.frame];
CALayer *cameraLayer = self.cameraView.layer;
[cameraLayer setMasksToBounds:YES];
[cameraLayer addSublayer:self.preview];
for (AVCaptureConnection *connection in output.connections) {
if (connection.supportsVideoOrientation) {
[connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
}
}
NSURL *shutterUrl = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource: @"shutter" ofType: @"wav"]]; // use a file URL for the bundle resource
self.shutterPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:shutterUrl error:nil];
[self.shutterPlayer prepareToPlay];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
if (self.doCapture) {
self.doCapture = NO;
[self.shutterPlayer setCurrentTime:0];
[self.shutterPlayer play];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef rawContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef quartzImage = CGBitmapContextCreateImage(rawContext);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGContextRelease(rawContext);
CGColorSpaceRelease(colorSpace);
UIImage *rawImage = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
float rawWidth = rawImage.size.width;
float rawHeight = rawImage.size.height;
CGRect cropRect = (rawHeight > rawWidth) ? CGRectMake(0, (rawHeight - rawWidth) / 2, rawWidth, rawWidth) : CGRectMake((rawWidth - rawHeight) / 2, 0, rawHeight, rawHeight);
CGImageRef croppedImageRef = CGImageCreateWithImageInRect([rawImage CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedImageRef];
CGImageRelease(croppedImageRef);
[self saveImage:croppedImage];
}
}
- (void)saveImage:(UIImage *)image
{
[self.capturedImages addObject:image];
NSArray *scrollItemXIB = [[NSBundle mainBundle] loadNibNamed:@"SellPreviewImagesScrollItemView" owner:self options:nil];
UIView *scrollItemView = [scrollItemXIB lastObject];
UIImageView *previewImage = (UIImageView *)[scrollItemView viewWithTag:PREVIEW_IMAGES_SCROLL_ITEM_TAG_IMAGE];
previewImage.image = image;
UIButton *deleteButton = (UIButton *)[scrollItemView viewWithTag:PREVIEW_IMAGES_SCROLL_ITEM_TAG_BADGE_BUTTON];
[deleteButton addTarget:self action:@selector(deleteImage:) forControlEvents:UIControlEventTouchUpInside];
UIButton *previewButton = (UIButton *)[scrollItemView viewWithTag:PREVIEW_IMAGES_SCROLL_ITEM_TAG_BUTTON];
[previewButton addTarget:self action:@selector(previewImage:) forControlEvents:UIControlEventTouchUpInside];
[self addItemToScroll:scrollItemView];
[self checkCapturedImagesLimit];
if ([self.capturedImages count] == 1) {
[self makeCoverPhoto:[self.capturedImages objectAtIndex:0]];
[self cells:self.previewImagesToggle setHidden:NO];
[self reloadDataAnimated:YES];
}
}
Where do you set self.doCapture = YES? It seems like you allocate a lot of memory for temporary objects. Try using the @autoreleasepool directive:
@autoreleasepool {
self.doCapture = NO;
[self.shutterPlayer setCurrentTime:0];
[self.shutterPlayer play];
...
if ([self.capturedImages count] == 1) {
[self makeCoverPhoto:[self.capturedImages objectAtIndex:0]];
[self cells:self.previewImagesToggle setHidden:NO];
[self reloadDataAnimated:YES];
}
}
I think I might have had the same problem ("Terminated Due To Memory Pressure"), and I simply moved the setup of the AVCaptureSession from viewDidAppear to viewDidLoad. Memory was growing to 100 MB or so on the iPad, and now it stays around 14 MB even with a big image overlay on top.
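In code, that change is small; a sketch, with a hypothetical setupCaptureSession helper standing in for the actual setup code, might look like:
- (void)viewDidLoad {
    [super viewDidLoad];
    [self setupCaptureSession]; // build the AVCaptureSession once, here
}
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [self.session startRunning]; // only start (and later stop) it as the view appears
}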
I'm capturing live video with the back camera on the iPhone with AVCaptureSession, applying some filters with CoreImage and then trying to output the resulting video with OpenGL ES. Most of the code is from an example from the WWDC 2012 session 'Core Image Techniques'.
Displaying the output of the filter chain using [UIImage imageWithCIImage:...] or by creating a CGImageRef for every frame works fine. However, when trying to display with OpenGL ES all I get is a black screen.
In the session they use a custom view class to display the output, but the code for that class isn't available. My view controller class extends GLKViewController, and the class of its view is set to GLKView.
I've searched for and downloaded all GLKit tutorials and examples I can find but nothing is helping. In particular I can't get any video output when I try to run the example from here either. Can anyone point me in the right direction?
#import "VideoViewController.h"
@interface VideoViewController ()
{
AVCaptureSession *_session;
EAGLContext *_eaglContext;
CIContext *_ciContext;
CIFilter *_sepia;
CIFilter *_bumpDistortion;
}
- (void)setupCamera;
- (void)setupFilters;
@end
@implementation VideoViewController
- (void)viewDidLoad
{
[super viewDidLoad];
GLKView *view = (GLKView *)self.view;
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
[EAGLContext setCurrentContext:_eaglContext];
view.context = _eaglContext;
// Configure renderbuffers created by the view
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
view.drawableStencilFormat = GLKViewDrawableStencilFormat8;
[self setupCamera];
[self setupFilters];
}
- (void)setupCamera {
_session = [AVCaptureSession new];
[_session beginConfiguration];
[_session setSessionPreset:AVCaptureSessionPreset640x480];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
[_session addInput:input];
AVCaptureVideoDataOutput *dataOutput = [AVCaptureVideoDataOutput new];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
NSDictionary *options;
options = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] };
[dataOutput setVideoSettings:options];
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:dataOutput];
[_session commitConfiguration];
}
#pragma mark Setup Filters
- (void)setupFilters {
_sepia = [CIFilter filterWithName:@"CISepiaTone"];
[_sepia setValue:@0.7 forKey:@"inputIntensity"];
_bumpDistortion = [CIFilter filterWithName:@"CIBumpDistortion"];
[_bumpDistortion setValue:[CIVector vectorWithX:240 Y:320] forKey:@"inputCenter"];
[_bumpDistortion setValue:[NSNumber numberWithFloat:200] forKey:@"inputRadius"];
[_bumpDistortion setValue:[NSNumber numberWithFloat:3.0] forKey:@"inputScale"];
}
#pragma mark Main Loop
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Grab the pixel buffer
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
// null colorspace to avoid colormatching
NSDictionary *options = @{ (id)kCIImageColorSpace : (id)kCFNull };
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer options:options];
image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(-M_PI/2.0)];
CGPoint origin = [image extent].origin;
image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];
// Pass it through the filter chain
[_sepia setValue:image forKey:@"inputImage"];
[_bumpDistortion setValue:_sepia.outputImage forKey:@"inputImage"];
// Grab the final output image
image = _bumpDistortion.outputImage;
// draw to GLES context
[_ciContext drawImage:image inRect:CGRectMake(0, 0, 480, 640) fromRect:[image extent]];
// and present to screen
[_eaglContext presentRenderbuffer:GL_RENDERBUFFER];
NSLog(@"frame hatched");
[_sepia setValue:nil forKey:@"inputImage"];
}
- (void)loadView {
[super loadView];
// Initialize the CIContext with a null working space
NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];
}
- (void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
[_session startRunning];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
@end
Wow, actually figured it out myself. This line of work may suit me after all ;)
First, for whatever reason, this code only works with OpenGL ES 2, not 3. I have yet to figure out why.
Second, I was setting up the CIContext in the loadView method, which runs before viewDidLoad and therefore used a not-yet-initialized EAGLContext.
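Putting those two fixes together, the corrected viewDidLoad would presumably look something like this (with loadView no longer creating the CIContext):
- (void)viewDidLoad
{
    [super viewDidLoad];
    GLKView *view = (GLKView *)self.view;
    // Fix 1: use OpenGL ES 2 instead of ES 3
    _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:_eaglContext];
    view.context = _eaglContext;
    view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    view.drawableStencilFormat = GLKViewDrawableStencilFormat8;
    // Fix 2: create the CIContext here, after the EAGLContext exists
    NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
    _ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];
    [self setupCamera];
    [self setupFilters];
}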
I am able to capture images with the iOS rear-facing camera. Everything is working flawlessly, except that I want it to take the picture as per the bounds of my UIView.
My code is below:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = vImagePreview.bounds;
[vImagePreview.layer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device = [self backFacingCameraIfAvailable];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(@"ERROR: trying to open camera: %@", error);
}
[session addInput:input];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
[session startRunning];
[session addOutput:stillImageOutput];
}
-(AVCaptureDevice *)backFacingCameraIfAvailable{
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in videoDevices){
if (device.position == AVCaptureDevicePositionBack){
captureDevice = device;
break;
}
}
// couldn't find one on the back, so just get the default video device.
if (!captureDevice){
captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
return captureDevice;
}
And below is the code to capture the image:
- (IBAction)captureTask {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections){
for (AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(@"about to request a capture from: %@", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
// Do something with the attachments.
NSLog(@"attachments: %@", exifAttachments);
} else {
NSLog(@"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
stillImage = image;
}];
}
The issue I'm facing is that it takes the picture and saves it to stillImage, but from what I can tell the image covers the whole iPhone screen. It is not limited to the bounds of the UIView *vImagePreview I created. Is there a way to clip the bounds of the captured image?
[EDIT]
After reading the docs, I realized the image is at the proper resolution for the preset (session.sessionPreset = AVCaptureSessionPresetMedium;). Is there a way to make the image square, like how Instagram makes their images? According to the docs, none of the session presets are square :(
I tried with the below:
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResize;
However, it only resizes the image to fit the current view; it doesn't make a square image.
I understand your frustration; presets should be customizable or offer more options! What I do with my images is crop them about the center, for which I wrote the following code:
- (UIImage *)crop:(UIImage *)image from:(CGSize)src to:(CGSize)dst
{
CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
Where src represents the original dimensions, dst represents the cropped dimensions, and image is of course the image you want cropped.
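For example, a centered square crop of the captured photo (a usage sketch of my own, assuming stillImage holds the captured image at scale 1) would be:
// Crop the captured image to a centered square using the method above
CGSize srcSize = stillImage.size;
CGFloat side = MIN(srcSize.width, srcSize.height);
UIImage *squareImage = [self crop:stillImage from:srcSize to:CGSizeMake(side, side)];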
If the device has a Retina display, the crop needs to account for the 2x scale, as shown below:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] == YES && [[UIScreen mainScreen] scale] == 2.00)
{
CGPoint cropCenter = CGPointMake((src.width), (src.height));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width)), (cropCenter.y - (dst.height)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width*2, dst.height*2);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
else
{
CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
CGImageRelease(cropRef);
return cropImage;
}
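A more general variant (my own sketch, not from the original answer) would read the scale from the image itself instead of special-casing 2x:
- (UIImage *)cropAboutCenter:(UIImage *)image toSize:(CGSize)dst
{
    // CGImageCreateWithImageInRect works in pixels, so convert points to pixels via the image scale
    CGFloat scale = image.scale;
    CGRect cropRect = CGRectMake((image.size.width * scale - dst.width * scale) / 2.0,
                                 (image.size.height * scale - dst.height * scale) / 2.0,
                                 dst.width * scale,
                                 dst.height * scale);
    CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropImage = [UIImage imageWithCGImage:cropRef scale:scale orientation:image.imageOrientation];
    CGImageRelease(cropRef);
    return cropImage;
}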