Cropping the captured image correctly when using AVFoundation - iOS

I am using AV Foundation for a photo app. I have setup a preview layer that takes up the top half of the screen using this code:
[_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
Once the user captures the image, I rotate it so it's in the correct portrait orientation (since iOS defaults images to landscape), then I create a new sublayer and set its contents property to the captured image. This displays the image as the new layer on the top half of the screen.
The only problem is that the image looks stretched. After doing research I learned that the captured image needs to be cropped, and after several hours of trying I still can't get the crop to work properly.
Hopefully someone can recommend the correct crop code. I just want to crop the captured image so that it is exactly what the user saw in the preview layer while taking the photo.
Here is my image capture code:
-(IBAction)stillImageCapture {
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in _stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }
    NSLog(@"about to request a capture from: %@", _stillImageOutput);
    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:imageData];
            image = [self rotate:image andOrientation:image.imageOrientation];
            CALayer *subLayer = [CALayer layer];
            subLayer.contents = (id)image.CGImage;
            subLayer.frame = _previewLayer.frame;
            [_previewLayer addSublayer:subLayer];
        }
    }];
}

My advice is not to worry about how you got the image. Get things represented as UIImages first, without manipulation, then keep a library of transformations that operate on UIImages handy. You can start with the accepted answer on this one for cropping.
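For reference, here is one hedged sketch of the cropping step (not from the question): AVCaptureVideoPreviewLayer can report which normalized region of the capture its frame actually displays via metadataOutputRectOfInterestForRect:, and that rect can be scaled to pixel coordinates to crop the CGImage. This assumes the crop is applied in the capture buffer's native orientation; after the rotation step in the question, the axes may need swapping.

```objectivec
// Sketch: crop a captured UIImage to match what the preview layer showed.
// Assumes the image is in the same orientation as the capture buffer.
- (UIImage *)cropImage:(UIImage *)image toMatchPreviewLayer:(AVCaptureVideoPreviewLayer *)previewLayer
{
    // Normalized (0..1) region of the capture that the layer displays,
    // taking its videoGravity and bounds into account.
    CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];

    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Scale the normalized rect up to pixel coordinates and crop.
    CGRect cropRect = CGRectMake(outputRect.origin.x * width,
                                 outputRect.origin.y * height,
                                 outputRect.size.width * width,
                                 outputRect.size.height * height);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(cgImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}
```

The resulting image then has the same aspect ratio as the preview layer, so setting it as a sublayer's contents no longer stretches it.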

Related

iPhone 6+ (Wrong scale?)

I have an iOS app using the camera to take pictures.
It uses a path (CGPath) drawn on the screen (for example, a rectangle) and takes a photo within that path. The app supports only portrait orientation.
For that to happen I use: AVCaptureSession, AVCaptureStillImageOutput, AVCaptureDevice, AVCaptureVideoPreviewLayer
(I guess all familiar to developers making this kind of apps).
My code uses UIScreen.mainScreen().bounds and UIScreen.mainScreen().scale to adapt to various devices and do its job.
It all goes fine (on iPhone 5 and iPhone 6), until I try the app on an iPhone 6+ (running iOS 9.3.1) and see that something is wrong.
The picture taken is no longer laid out in the right place.
I had someone try on an iPhone 6+, and by displaying an appropriate message I was able to confirm that UIScreen.mainScreen().scale is what it should be: 3.0.
I have put the proper-size launch images (640 × 960, 640 × 1136, 750 × 1334, 1242 × 2208) in the project.
So what could be the problem?
I use the code below in an app; it works on the 6+.
The code starts an AVCaptureSession, pulling video input from the device's camera.
As it does so, it continuously updates the runImage var from the captureOutput delegate function.
When the user wants to take a picture, the takePhoto method is called. This method creates a temporary UIImageView and feeds runImage into it. The temp UIImageView is then used to draw another variable, currentImage, at the scale of the device.
The currentImage, in my case, is square, matching the previewHolder frame, but I suppose you can make anything you want.
Declare these:
AVCaptureDevice * device;
AVCaptureDeviceInput * input;
AVCaptureVideoDataOutput * output;
AVCaptureSession * session;
AVCaptureVideoPreviewLayer * preview;
AVCaptureConnection * connection;
UIImage * runImage;
Load scanner:
-(void)loadScanner
{
    device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    output = [AVCaptureVideoDataOutput new];
    session = [AVCaptureSession new];

    [session setSessionPreset:AVCaptureSessionPresetPhoto];
    [session addInput:input];
    [session addOutput:output];

    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    [output setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

    preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
    preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
    preview.frame = previewHolder.bounds;

    connection = preview.connection;
    [connection setVideoOrientation:AVCaptureVideoOrientationPortrait];

    [previewHolder.layer insertSublayer:preview atIndex:0];
}
Ongoing image capture, updates runImage var.
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    runImage = [self imageForBuffer:sampleBuffer];
}
Related to above.
-(UIImage *)imageForBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);

    UIImage *rotated = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0 orientation:UIImageOrientationRight];
    return rotated;
}
On take photo:
-(void)takePhoto
{
    UIImageView *temp = [UIImageView new];
    temp.frame = previewHolder.frame;
    temp.image = runImage;
    temp.contentMode = UIViewContentModeScaleAspectFill;
    temp.clipsToBounds = true;
    [self.view addSubview:temp];

    UIGraphicsBeginImageContextWithOptions(temp.bounds.size, NO, [UIScreen mainScreen].scale);
    [temp drawViewHierarchyInRect:temp.bounds afterScreenUpdates:YES];
    currentImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [temp removeFromSuperview];
    //further code...
}
In case someone else has the same issue, here is what made things go wrong for me:
I was naming a file xyz@2x.png.
When UIScreen.mainScreen().scale == 3.0 (the case on an iPhone 6+),
it has to be named xyz@3x.png.

Why isn't UIImageWriteToSavedPhotosAlbum() working when using CIImage?

I'm generating a CIImage using a few chained filters and trying to output the generated image in the users photo album for certain debug purposes. The callback I supply to UIImageWriteToSavedPhotosAlbum() always has a nil error returned, so I assume nothing is going wrong. But the image never seems to show up.
I've used this function in the past to dump OpenGL buffers to the photo album for debugging, but I realize this isn't the same case. Should I be doing something differently?
-(void)cropAndSaveImage:(CIImage *)inputImage fromFeature:(CIFaceFeature *)feature
{
    // First crop out the face.
    [_cropFilter setValue:inputImage forKey:@"inputImage"];
    [_cropFilter setValue:[CIVector vectorWithCGRect:feature.bounds] forKey:@"inputRectangle"];
    CIImage *croppedImage = _cropFilter.outputImage;

    __block CIImage *outImage = croppedImage;
    dispatch_async(dispatch_get_main_queue(), ^{
        UIImage *outUIImage = [UIImage imageWithCIImage:outImage];
        UIImageWriteToSavedPhotosAlbum(outUIImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
    });
}

-(void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    NSLog(@"debug LBP output face. error: %@", error);
}
I've verified that the boundaries are never 0.
The callback output is always
debug LBP output face. error: (null)
I figured this out on my own and deleted the question, but then I thought maybe someone would get some use out of it. I say this because I came across an older answer suggesting the original implementation worked, but in actuality I had to do the following to make it work properly.
__block CIImage *outImage = _lbpFilter.outputImage;
dispatch_async(dispatch_get_main_queue(), ^{
    CGImageRef imgRef = [self renderCIImage:outImage];
    UIImage *uiImage = [UIImage imageWithCGImage:imgRef];
    UIImageWriteToSavedPhotosAlbum(uiImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
});

+ (CGImageRef)renderCIImage:(CIImage *)img
{
    if (!m_ctx) {
        NSDictionary *options = @{kCIContextOutputColorSpace: [NSNull null], kCIContextWorkingColorSpace: [NSNull null]};
        m_ctx = [CIContext contextWithOptions:options];
    }
    return [m_ctx createCGImage:img fromRect:img.extent];
}
Converting filter.outputImage to a CGImage with CIContext's createCGImage:fromRect:, and then converting that CGImage to a UIImage, saves the image successfully.

How to focus the near by objects in iOS 8?

I'm developing a QR code reader. My codes are 1 cm long and wide. I'm using AVFoundation metadata to capture the machine-readable codes and it works fine. But at the same time I need to take a picture of the QR code with the logo (which is located in the middle of the QR code). So I'm using AVCaptureVideoDataOutput and didOutputSampleBuffer to get the image stills. The problem is the clarity of the image: it always looks blurry at the edges of the codes and logo. So I did some research on manual controls and made some code changes for manual focusing, but no luck so far.
How do I focus on nearby objects (which are 10 cm away from the camera and tiny)?
Is there any other way of getting the image after a successful scan from the metadata?
What is the difference between setFocusModeLockedWithLensPosition and focusPointOfInterest?
Here is my code (part of it)
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    _session = [[AVCaptureSession alloc] init];
    _session.sessionPreset = AVCaptureSessionPresetHigh;

    // Find a suitable AVCaptureDevice
    _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([_device lockForConfiguration:&error]) {
        [_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNone];
        [_device setFocusModeLockedWithLensPosition:0.5 completionHandler:nil];
        //[_device setFocusMode:AVCaptureFocusModeAutoFocus];
        //_device.focusPointOfInterest = CGPointMake(0.5, 0.5);
        //_device.videoZoomFactor = 1.0 + 10;
        [_device unlockForConfiguration];
    }
    //if ([_device isSmoothAutoFocusEnabled]) {
    //    _device.smoothAutoFocusEnabled = NO;
    //}

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [_session addInput:input];

    // For scanning the QR code
    AVCaptureMetadataOutput *metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
    // Have to add the output before setting metadata types
    [_session addOutput:metaDataOutput];
    [metaDataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
    [metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

    // For saving the image to the camera roll
    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [_stillImageOutput setOutputSettings:outputSettings];
    [_session addOutput:_stillImageOutput];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [_session addOutput:output];

    // Configure your output on its own queue
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Start the session running to start the flow of data
    [self startCapturingWithSession:_session];

    // Assign session to an ivar.
    [self setSession:_session];
}
- (void)startCapturingWithSession:(AVCaptureSession *)captureSession
{
    NSLog(@"Adding video preview layer");
    [self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession]];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    //----- DISPLAY THE PREVIEW LAYER -----
    // Display it full screen under the view controller's existing controls
    NSLog(@"Display the preview layer");
    CGRect layerRect = [[[self view] layer] bounds];
    [self.previewLayer setBounds:layerRect];
    [self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
    [self.previewLayer setAffineTransform:CGAffineTransformMakeScale(3.5, 3.5)];

    // Add the layer to a view behind our UI controls (avoids having to
    // manually bring each control to the front):
    UIView *cameraView = [[UIView alloc] init];
    [[self view] addSubview:cameraView];
    [self.view sendSubviewToBack:cameraView];
    [[cameraView layer] addSublayer:self.previewLayer];

    //----- START THE CAPTURE SESSION RUNNING -----
    [captureSession startRunning];
    [self switchONFlashLight];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    [connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address and number of bytes per row of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image, then release it
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);

    return image;
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    if (metadataObjects != nil && [metadataObjects count] > 0) {
        AVMetadataMachineReadableCodeObject *metadataObj = [metadataObjects objectAtIndex:0];
        //if ([_device lockForConfiguration:nil]) {
        //    [_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
        //    _device.focusPointOfInterest = CGPointMake(metadataObj.bounds.origin.x, metadataObj.bounds.origin.y);
        //    [_device unlockForConfiguration];
        //}
        if ([[metadataObj type] isEqualToString:AVMetadataObjectTypeQRCode]) {
            [_lblStatus performSelectorOnMainThread:@selector(setText:) withObject:[metadataObj stringValue] waitUntilDone:NO];
        }
    }
}
Aiming for iOS 8 and latest iPhones only.
After doing extensive research and getting input from photographers, I'm sharing my answer for future readers.
As of iOS 8, Apple provides only three focus modes:
typedef NS_ENUM(NSInteger, AVCaptureFocusMode) {
AVCaptureFocusModeLocked = 0,
AVCaptureFocusModeAutoFocus = 1,
AVCaptureFocusModeContinuousAutoFocus = 2,
} NS_AVAILABLE(10_7, 4_0);
To focus on an object that is very near the lens we can use AVCaptureAutoFocusRangeRestrictionNear,
but for my needs, due to the minimum focus distance of the iPhone cameras, it is not possible to get a clear image of my codes.
AFAIK there is no way to get image data from the metadata; my question itself was wrong. However, you can get image buffers from the video frames - check out Capturing Video Frames as UIImage Objects for more info.
setFocusModeLockedWithLensPosition locks the focus mode and lets us set a particular lens position, from 0.0 to 1.0.
focusPointOfInterest doesn't change the focus mode; it just sets the point to focus on. The best example is tap to focus.
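To make the contrast concrete, here is an illustrative sketch (not from the answer; `device` is assumed to be the AVCaptureDevice in use, and the values are examples):

```objectivec
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    // Option 1: lock focus at a fixed lens position
    // (0.0 = nearest the lens can focus, 1.0 = farthest).
    if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
        [device setFocusModeLockedWithLensPosition:0.1 completionHandler:nil];
    }

    // Option 2: keep autofocus, but bias it toward a point of interest
    // (normalized coordinates; (0.5, 0.5) is the center of the frame)
    // and restrict the autofocus search to the near range.
    if (device.isFocusPointOfInterestSupported &&
        [device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        device.focusPointOfInterest = CGPointMake(0.5, 0.5);
        device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
        if (device.isAutoFocusRangeRestrictionSupported) {
            device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
        }
    }
    [device unlockForConfiguration];
}
```

Option 1 takes full manual control of the lens; option 2 still lets the hardware decide, which is usually the better choice unless you know the subject distance in advance.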

How to upload this photo?

I want to use both of the objective c methods listed below in my application. The first method uploads a UIImagePicker photograph to a local server.
// I would still like to use this method structure but with the `AVCam` classes.
-(void)uploadPhoto {
    // Upload the image and the title to the web service
    [[API sharedInstance] commandWithParams:[NSMutableDictionary dictionaryWithObjectsAndKeys:@"upload", @"command", UIImageJPEGRepresentation(photo.image, 70), @"file", fldTitle.text, @"title", nil] onCompletion:^(NSDictionary *json) {
        // Completion
        if (![json objectForKey:@"error"]) {
            // Success
            [[[UIAlertView alloc] initWithTitle:@"Success!" message:@"Your photo is uploaded" delegate:nil cancelButtonTitle:@"Yay!" otherButtonTitles:nil] show];
        } else {
            // Error; check for an expired session and, if so, authorize the user
            NSString *errorMsg = [json objectForKey:@"error"];
            [UIAlertView error:errorMsg];
            if ([@"Authorization required" compare:errorMsg] == NSOrderedSame) {
                [self performSegueWithIdentifier:@"ShowLogin" sender:nil];
            }
        }
    }];
}
I want to add a second method: it performs an IBAction picture snap using AVCam, but I changed it so it launches when the view loads using [self snapStillImage].
EDIT
- (IBAction)snapStillImage:(id)sender
{
    dispatch_async([self sessionQueue], ^{
        // Update the orientation on the still image output video connection before capturing.
        [[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] setVideoOrientation:[[(AVCaptureVideoPreviewLayer *)[[self previewView] layer] connection] videoOrientation]];

        // Flash set to Auto for Still Capture
        [ViewController5 setFlashMode:AVCaptureFlashModeAuto forDevice:[[self videoDeviceInput] device]];

        // Capture a still image.
        [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo] completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (imageDataSampleBuffer) {
                NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [[UIImage alloc] initWithData:imageData];
                [[[ALAssetsLibrary alloc] init] writeImageToSavedPhotosAlbum:[image CGImage] orientation:(ALAssetOrientation)[image imageOrientation] completionBlock:nil];
                photo = [[UIImage alloc] initWithData:imageData];
            }
        }];
    });
}
Can someone please set photo via AVCam? At the very least humor me and start a dialogue about AVFoundation and its appropriate classes for tackling an issue like this.
Additional info: The avcam method is simply an excerpt from this https://developer.apple.com/library/ios/samplecode/AVCam/Introduction/Intro.html
@Aksh1t I want to set a UIImage named image with the original contents of the AVFoundation snap, not UIImagePicker. Here is the method that sets the outlet using UIImagePicker.
#pragma mark - Image picker delegate methods
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];

    // Resize the image from the camera
    UIImage *scaledImage = [image resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:CGSizeMake(photo.frame.size.width, photo.frame.size.height) interpolationQuality:kCGInterpolationHigh];

    // Crop the image to a square (yikes, fancy!)
    UIImage *croppedImage = [scaledImage croppedImage:CGRectMake((scaledImage.size.width - photo.frame.size.width) / 2, (scaledImage.size.height - photo.frame.size.height) / 2, photo.frame.size.width, photo.frame.size.height)];

    // Show the photo on the screen
    photo.image = croppedImage;
    [picker dismissModalViewControllerAnimated:NO];
}
After that I simply want to upload it using the first method I posted. Sorry for being unclear. I basically want to do this in my new app (I was unclear about which app):
Take a photo using AVCam
Set that photo to an UIImageView IBOutlet named photo
Upload photo (the original AVCam photo) to the server
The basic framework is above and I will answer any questions
The following line of code in your snapStillImage method takes a photo into the imageData variable.
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
Next, you are creating one UIImage object from this data like this
UIImage *image = [[UIImage alloc] initWithData:imageData];
Instead of the above code, make a global variable UIImage *photo; and initialize it with the imageData when snapStillImage takes the photo, like this:
photo = [[UIImage alloc] initWithData:imageData];
Since photo is a global variable, you will then be able to use that in your uploadPhoto method and send it to your server.
Hope this helps, and if you have any question, leave it in the comments.
Edit:
Since you already have an IBOutlet UIImageView *photo; in your file, you don't even need a global variable to store the UIImage. You can just replace the following line in your snapStillImage method:
UIImage *image = [[UIImage alloc] initWithData:imageData];
with this line
photo.image = [[UIImage alloc] initWithData:imageData];

AVCaptureOutput takes dark picture even with flash on

I have come up with an implementation of AVFoundation and ImageIO to take care of the photo taking in my application. I have an issue with it, however. The images I take are always dark, even if the flash goes off. Here's the code I use:
[[self currentCaptureOutput] captureStillImageAsynchronouslyFromConnection:[[self currentCaptureOutput].connections lastObject]
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    [[[blockSelf currentPreviewLayer] session] stopRunning];
    if (!error) {
        NSData *data = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)data, NULL);
        if (source) {
            UIImage *image = [blockSelf imageWithSource:source];
            [blockSelf updateWithCapturedImage:image];
            CFRelease(source);
        }
    }
}];
Is there anything there that could cause the image taken to not include the flash?
I found I sometimes got dark images if the AVCaptureSession was set up immediately before this call. Perhaps it takes a while for the auto-exposure & white balance settings to adjust themselves.
The solution was to set up the AVCaptureSession, then wait until the AVCaptureDevice's adjustingExposure and adjustingWhiteBalance properties are both NO (observe these with KVO) before calling -[AVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection: completionHandler:].
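A hedged sketch of that KVO approach (the capture helper name is illustrative, not from the answer; `device` is assumed to be your AVCaptureDevice):

```objectivec
// In your session setup, after the session starts running,
// observe the device's adjusting* properties:
[device addObserver:self forKeyPath:@"adjustingExposure"
            options:NSKeyValueObservingOptionNew context:NULL];
[device addObserver:self forKeyPath:@"adjustingWhiteBalance"
            options:NSKeyValueObservingOptionNew context:NULL];

// Then capture only once both have settled:
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    AVCaptureDevice *device = (AVCaptureDevice *)object;
    if (!device.adjustingExposure && !device.adjustingWhiteBalance) {
        [device removeObserver:self forKeyPath:@"adjustingExposure"];
        [device removeObserver:self forKeyPath:@"adjustingWhiteBalance"];
        // Hypothetical helper that calls
        // captureStillImageAsynchronouslyFromConnection:completionHandler:
        [self captureStillImageNow];
    }
}
```

Removing the observers before capturing avoids triggering a second capture when the flash fires and the exposure readjusts.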
