Can't get AVFoundation to work with AVCaptureSessionPresetPhoto resolution - image-processing

I can't seem to get the pixel alignment to work with AVFoundation in AVCaptureSessionPresetPhoto resolution. Pixel alignment works fine with a lower resolution such as AVCaptureSessionPreset1280x720 (AVCaptureSessionPreset1280x720_Picture).
Specifically, when I uncomment these lines:
if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
} else {
NSLog(@"Unable to set resolution to AVCaptureSessionPresetPhoto");
}
I get a misaligned image, as shown in the second image below. Any comments/suggestions are greatly appreciated.
Here's my code to 1) set up the capture session, 2) handle the delegate callback, and 3) save one streaming image to verify pixel alignment.
1. Capture session setup
- (void)InitCaptureSession {
captureSession = [[AVCaptureSession alloc] init];
if ([captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
[captureSession setSessionPreset:AVCaptureSessionPreset1280x720];
} else {
NSLog(@"Unable to set resolution to AVCaptureSessionPreset1280x720");
}
// if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
// [captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
// } else {
// NSLog(@"Unable to set resolution to AVCaptureSessionPresetPhoto");
// }
captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
videoInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:nil];
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue;
queue = dispatch_queue_create("cameraQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
[captureSession addInput:videoInput];
[captureSession addOutput:captureOutput];
[captureOutput release];
previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [previewView layer];// self.view.layer; //
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:[rootLayer bounds]];
[rootLayer addSublayer:previewLayer];
[captureSession startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
static int processedImage = 0;
processedImage++;
if (processedImage==100) {
[self SaveImage:sampleBuffer];
}
[pool drain];
}
// Create a UIImage from a CMSampleBufferRef and save it to verify pixel alignment
- (void) SaveImage:(CMSampleBufferRef) sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
CvSize imageSize;
imageSize.width = CVPixelBufferGetWidth(imageBuffer);
imageSize.height = CVPixelBufferGetHeight(imageBuffer);
IplImage *image = cvCreateImage(imageSize, IPL_DEPTH_8U, 1);
void *y_channel = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
char *tempPointer = image->imageData;
memcpy(tempPointer, y_channel, image->imageSize);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
CGImageRef imageRef = CGImageCreate(image->width, image->height,
8, 8, image->width,
colorSpace, kCGImageAlphaNone|kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
UIImage *Saveimage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
UIImageWriteToSavedPhotosAlbum(Saveimage, nil, nil, nil);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}

Within SaveImage, the 5th argument of CGImageCreate is bytesPerRow, and you should not pass image->width because the number of bytes per row may differ due to memory alignment. This is the case with AVCaptureSessionPresetPhoto, where width = 852 (with an iPhone 4 camera) while the number of bytes per row for the first (Y) plane is 864, since it is 16-byte aligned.
1/ You should get the bytes per row as follows:
size_t bpr = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
2/ Then account for the bytes per row while copying the pixels into your IplImage:
char *y_channel = (char *) CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
// row by row copy
for (int i = 0; i < image->height; i++)
memcpy(tempPointer + i*image->widthStep, y_channel + i*bpr, image->width);
You can keep [NSData dataWithBytes:image->imageData length:image->imageSize] as is, since image->imageSize already takes the alignment into account (imageSize = height * widthStep).
3/ Finally, pass the IplImage widthStep as the 5th parameter of CGImageCreate:
CGImageCreate(image->width, image->height, 8, 8, image->widthStep, ...);
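Putting the three changes together, a minimal sketch of the corrected SaveImage: body looks like this (same variables and memory handling as in the question):
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
CvSize imageSize;
imageSize.width = CVPixelBufferGetWidth(imageBuffer);
imageSize.height = CVPixelBufferGetHeight(imageBuffer);
IplImage *image = cvCreateImage(imageSize, IPL_DEPTH_8U, 1);
// 1/ The Y plane may be padded: bytes per row >= width (e.g. 864 vs 852 for the photo preset)
size_t bpr = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
char *y_channel = (char *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
char *tempPointer = image->imageData;
// 2/ Copy row by row, skipping the padding at the end of each source row
for (int i = 0; i < image->height; i++)
    memcpy(tempPointer + i * image->widthStep, y_channel + i * bpr, image->width);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
// 3/ Pass the IplImage row stride (widthStep), not the width, as bytesPerRow
CGImageRef imageRef = CGImageCreate(image->width, image->height,
8, 8, image->widthStep,
colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
UIImage *saveImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
UIImageWriteToSavedPhotosAlbum(saveImage, nil, nil, nil);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);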

Related

UIImage uncompressed from AVCaptureStillImageOutput

This is what I tried so far for the configuration of the camera:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetInputPriority];
AVCaptureDevice *videoDevice = [AVCamViewController deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionBack];
NSError *errorVideo;
AVCaptureDeviceFormat *deviceFormat = nil;
for (AVCaptureDeviceFormat *format in videoDevice.formats) {
CMVideoDimensions dim = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
if (dim.width == 2592 && dim.height == 1936) {
deviceFormat = format;
break;
}
}
[videoDevice lockForConfiguration:&errorVideo];
if (deviceFormat) {
videoDevice.activeFormat = deviceFormat;
if ([videoDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
[videoDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
}
if ([videoDevice isAutoFocusRangeRestrictionSupported]) {
[videoDevice setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionFar];
}
}
[videoDevice unlockForConfiguration];
AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if ([session canAddInput:videoDeviceInput]) {
[session addInput:videoDeviceInput];
}
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
if ([session canAddOutput:stillImageOutput]) {
[stillImageOutput setOutputSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];
[session addOutput:stillImageOutput];
}
This is what I tried for getting the UIImage from the CMSampleBuffer:
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer && !error) {
dispatch_async(dispatch_get_main_queue(), ^{
UIImage *image = [self imageFromSampleBuffer:imageDataSampleBuffer];
});
}
}];
This is Apple's example code:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
But the image is always nil.
After some debugging, I found that CMSampleBufferGetImageBuffer(sampleBuffer) always returns nil.
Can anyone help?
This is because the CMSampleBufferRef must be worked on immediately as it is deallocated very quickly and efficiently.
Here is my code for generating the image:
let connection = imageFileOutput.connectionWithMediaType(AVMediaTypeVideo)
if connection != nil {
imageFileOutput.captureStillImageAsynchronouslyFromConnection(connection) { [weak self] (buffer, err) -> Void in
if CMSampleBufferIsValid(buffer) {
let imageDataJpeg = self?.imageFromSampleBuffer(buffer)
} else {
print(err)
}
}
}
As you can see, I turn it into an image while still in the scope of this function. Once it is an image, I send it off for processing.
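In Objective-C, the same idea would look roughly like the sketch below: the buffer is converted to a UIImage inside the completion handler, before anything is dispatched to the main queue (this assumes the imageFromSampleBuffer: method shown above; processCapturedImage: is a hypothetical placeholder for your own handling):
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer && !error) {
        // Convert while the sample buffer is still valid, i.e. inside this block
        UIImage *image = [self imageFromSampleBuffer:imageDataSampleBuffer];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Only the finished UIImage crosses over to the main queue
            [self processCapturedImage:image]; // hypothetical method; replace with your own
        });
    }
}];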

Xcode [noob] Release of objects, because of memory and CPU load

As a C# Unity and PHP programmer I'm a complete noob in Objective-C. I managed to copy-paste together a Cordova/PhoneGap + Objective-C project whose basic concept is a mirror-like app (with all kinds of extras still to be programmed). The mirror works, but in Xcode the CPU and memory load keeps growing and the app finally crashes. From my searching I think the mistake is that I'm not releasing objects, but that's a lucky guess. I hope you can help me.
#import "CanvasCamera.h"
@implementation CanvasCamera
- (void)startCapture:(CDVInvokedUrlCommand*)command
{
self.session = [[AVCaptureSession alloc] init];
self.session.sessionPreset = AVCaptureSessionPreset352x288;
self.device = [self frontCamera];
self.input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];
self.output = [[AVCaptureVideoDataOutput alloc] init];
self.output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
dispatch_queue_t queue;
queue = dispatch_queue_create("canvas_camera_queue", NULL);
[self.output setSampleBufferDelegate:(id)self queue:queue];
[self.session addInput:self.input];
[self.session addOutput:self.output];
[self.session startRunning];
}
- (AVCaptureDevice *)frontCamera {
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionFront) {
return device;
}
}
return nil;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
UIImage *image= [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationUp];
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
NSString *encodedString = [imageData base64Encoding];
NSString *javascript = @"CanvasCamera.capture('data:image/jpeg;base64,";
javascript = [javascript stringByAppendingString:encodedString];
javascript = [javascript stringByAppendingString:@"');"];
[self.webView performSelectorOnMainThread:@selector(stringByEvaluatingJavaScriptFromString:) withObject:javascript waitUntilDone:YES];
CGImageRelease(newImage);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}
@end
Review your use of NSAutoreleasePool, look into Objective-C's Automatic Reference Counting (ARC), and see this documentation from Apple.
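For illustration, here is a minimal sketch of the delegate callback above with the per-frame work wrapped in an @autoreleasepool block, so the autoreleased temporaries (UIImage, NSData, NSString) are released once per frame rather than accumulating on the capture queue:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);
        // These objects are autoreleased; the pool makes sure they die with this frame
        UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationUp];
        NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
        NSString *encodedString = [imageData base64Encoding];
        NSString *javascript = [@"CanvasCamera.capture('data:image/jpeg;base64," stringByAppendingString:encodedString];
        javascript = [javascript stringByAppendingString:@"');"];
        [self.webView performSelectorOnMainThread:@selector(stringByEvaluatingJavaScriptFromString:) withObject:javascript waitUntilDone:YES];
        CGImageRelease(newImage);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    } // the pool drains here, once per frame
}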

AVFoundation Custom Camera Causing Crash / Memory Pressure?

I'm making use of AVFoundation to integrate a custom camera into my app... The problem is, I'm getting a rare but still occurring crash due to memory pressure. I'm not sure why, since I'm using ARC and memory in Xcode is only around 20 MB at the time of the crash. What's going on?
Here's my code
- (void)setupCamera
{
self.session = [[AVCaptureSession alloc] init];
[self.session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
if ([self.session canAddInput:input]) {
[self.session addInput:input];
}
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
if ([self.session canAddOutput:output]) {
output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[self.session addOutput:output];
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
}
self.preview = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
[self.preview setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.preview setFrame:self.cameraView.frame];
CALayer *cameraLayer = self.cameraView.layer;
[cameraLayer setMasksToBounds:YES];
[cameraLayer addSublayer:self.preview];
for (AVCaptureConnection *connection in output.connections) {
if (connection.supportsVideoOrientation) {
[connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
}
}
NSURL *shutterUrl = [[NSURL alloc] initWithString:[[NSBundle mainBundle] pathForResource: @"shutter" ofType: @"wav"]];
self.shutterPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:shutterUrl error:nil];
[self.shutterPlayer prepareToPlay];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
if (self.doCapture) {
self.doCapture = NO;
[self.shutterPlayer setCurrentTime:0];
[self.shutterPlayer play];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef rawContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef quartzImage = CGBitmapContextCreateImage(rawContext);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGContextRelease(rawContext);
CGColorSpaceRelease(colorSpace);
UIImage *rawImage = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
float rawWidth = rawImage.size.width;
float rawHeight = rawImage.size.height;
CGRect cropRect = (rawHeight > rawWidth) ? CGRectMake(0, (rawHeight - rawWidth) / 2, rawWidth, rawWidth) : CGRectMake((rawWidth - rawHeight) / 2, 0, rawHeight, rawHeight);
CGImageRef croppedImageRef = CGImageCreateWithImageInRect([rawImage CGImage], cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedImageRef];
CGImageRelease(croppedImageRef);
[self saveImage:croppedImage];
}
}
- (void)saveImage:(UIImage *)image
{
[self.capturedImages addObject:image];
NSArray *scrollItemXIB = [[NSBundle mainBundle] loadNibNamed:@"SellPreviewImagesScrollItemView" owner:self options:nil];
UIView *scrollItemView = [scrollItemXIB lastObject];
UIImageView *previewImage = (UIImageView *)[scrollItemView viewWithTag:PREVIEW_IMAGES_SCROLL_ITEM_TAG_IMAGE];
previewImage.image = image;
UIButton *deleteButton = (UIButton *)[scrollItemView viewWithTag:PREVIEW_IMAGES_SCROLL_ITEM_TAG_BADGE_BUTTON];
[deleteButton addTarget:self action:@selector(deleteImage:) forControlEvents:UIControlEventTouchUpInside];
UIButton *previewButton = (UIButton *)[scrollItemView viewWithTag:PREVIEW_IMAGES_SCROLL_ITEM_TAG_BUTTON];
[previewButton addTarget:self action:@selector(previewImage:) forControlEvents:UIControlEventTouchUpInside];
[self addItemToScroll:scrollItemView];
[self checkCapturedImagesLimit];
if ([self.capturedImages count] == 1) {
[self makeCoverPhoto:[self.capturedImages objectAtIndex:0]];
[self cells:self.previewImagesToggle setHidden:NO];
[self reloadDataAnimated:YES];
}
}
Where do you set self.doCapture = YES? It seems like you allocate a lot of memory for temporary objects. Try using the @autoreleasepool directive:
@autoreleasepool {
self.doCapture = NO;
[self.shutterPlayer setCurrentTime:0];
[self.shutterPlayer play];
...
if ([self.capturedImages count] == 1) {
[self makeCoverPhoto:[self.capturedImages objectAtIndex:0]];
[self cells:self.previewImagesToggle setHidden:NO];
[self reloadDataAnimated:YES];
}
}
I think I might have had the same problem ("Terminated Due To Memory Pressure"), and I just moved the setup of the AVCaptureSession from viewDidAppear to viewDidLoad. It was growing to 100 MB or so on the iPad, and now it goes up to only about 14 MB with a big image overlay on top.
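A minimal sketch of that change (assuming the setupCamera method from the question):
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Build the capture session, its inputs/outputs and the preview layer once,
    // instead of rebuilding them every time the view appears
    [self setupCamera];
}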

How to capture frame-by-frame images from iPhone video recording in real time

I am trying to measure the saturation of a selected color in real-time, like this:
I am following this guide from Apple. I updated the code to work with ARC, and of course made my view controller an AVCaptureVideoDataOutputSampleBufferDelegate, but I don't know how to actually start capturing the data, as in starting up the camera to get some actual input.
Here is my code:
#import "ViewController.h"
@interface ViewController ()
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib
[self setupCaptureSession];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify medium quality for the
// chosen device.
session.sessionPreset = AVCaptureSessionPresetMedium;
// Find a suitable AVCaptureDevice
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input) {
// Handling the error appropriately.
}
[session addInput:input];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Start the session running to start the flow of data
[self startCapturingWithSession:session];
// Assign session to an ivar.
[self setSession:session];
}
- (void)startCapturingWithSession: (AVCaptureSession *) captureSession
{
//----- DISPLAY THE PREVIEW LAYER -----
//Display it full screen under our view controller's existing controls
NSLog(@"Display the preview layer");
CGRect layerRect = [[[self view] layer] bounds];
[self.previewLayer setBounds:layerRect];
[self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),
CGRectGetMidY(layerRect))];
//[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
//We use this instead so it goes on a layer behind our UI controls (avoids us having to manually bring each control to the front):
UIView *CameraView = [[UIView alloc] init];
[[self view] addSubview:CameraView];
[self.view sendSubviewToBack:CameraView];
[[CameraView layer] addSublayer:self.previewLayer];
//----- START THE CAPTURE SESSION RUNNING -----
[captureSession startRunning];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage from the sample buffer data
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
#end
This did it for me, it was all about setting up a video preview:
#import "ViewController.h"
@interface ViewController ()
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib
[self setupCaptureSession];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify medium quality for the
// chosen device.
session.sessionPreset = AVCaptureSessionPresetMedium;
// Find a suitable AVCaptureDevice
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input) {
// Handling the error appropriately.
}
[session addInput:input];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Start the session running to start the flow of data
[self startCapturingWithSession:session];
// Assign session to an ivar.
[self setSession:session];
}
- (void)startCapturingWithSession: (AVCaptureSession *) captureSession
{
NSLog(@"Adding video preview layer");
[self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession]];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
//----- DISPLAY THE PREVIEW LAYER -----
//Display it full screen under our view controller's existing controls
NSLog(@"Display the preview layer");
CGRect layerRect = [[[self view] layer] bounds];
[self.previewLayer setBounds:layerRect];
[self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),
CGRectGetMidY(layerRect))];
//[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
//We use this instead so it goes on a layer behind our UI controls (avoids us having to manually bring each control to the front):
UIView *CameraView = [[UIView alloc] init];
[[self view] addSubview:CameraView];
[self.view sendSubviewToBack:CameraView];
[[CameraView layer] addSublayer:self.previewLayer];
//----- START THE CAPTURE SESSION RUNNING -----
[captureSession startRunning];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage from the sample buffer data
[connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
@end
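Note that the key difference from the startCapturingWithSession: in the question is at the top of the method: the preview layer is actually created with initWithSession: and given a video gravity before being positioned and added to the view hierarchy. In the question's code self.previewLayer was never created, so there was nothing to display.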

AVCaptureVideoDataOutput and AVCaptureMovieFileOutput at the same time?

There was a similar question here, and I have the same problem. Here is my code:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
self.packagesBufferMutableArray = [NSMutableArray array];
self.fps = 25;
[self initDateFormatter];
// [self setupCaptureSession];
[self performSelector:@selector(stopWork) withObject:nil afterDelay:5];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)viewDidAppear:(BOOL)animated
{
[super viewDidAppear:animated];
self.isExtra = YES;
[self.captureSession startRunning];
}
- (void)initDateFormatter {
self.dateFormatter = [NSDateFormatter new];
[_dateFormatter setDateFormat:@"yy-MM-dd--HH-mm-ss"];
}
- (NSString *)generateFilePathForMovie {
return [NSString stringWithFormat:@"%@/%@.mov",
[NSHomeDirectory() stringByAppendingPathComponent:@"Documents"],
[_dateFormatter stringFromDate:[NSDate date]]];
}
- (NSDictionary *)settingsForWriterInput {
int bitRate = (300 + /*self.currentQuality*/5 * 90) * 1024; //NORMAL 750 * 1024
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:bitRate], AVVideoAverageBitRateKey,
nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:480], AVVideoWidthKey,
[NSNumber numberWithInt:320], AVVideoHeightKey,
codecSettings, AVVideoCompressionPropertiesKey,
nil];
return videoSettings;
}
- (AVAssetWriterInput *)createVideoWriterInput {
NSDictionary *videoSettings = [self settingsForWriterInput];
return [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
}
- (void)setupCaptureSession
{
NSError *error = nil;
self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input)
{
// Handling the error appropriately.
}
[self.captureSession addInput:input];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
self.assetWriterInput =[self createVideoWriterInput];// [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
self.assetWriterInput.expectsMediaDataInRealTime = YES;
AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:[self generateFilePathForMovie]] fileType:AVFileTypeMPEG4 error:&error];
[assetWriter addInput:self.assetWriterInput];
self.assetWriterPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:
[NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],kCVPixelBufferPixelFormatTypeKey, nil]];
[self.captureSession addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Specify the pixel format
output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// If you wish to cap the frame rate to a known value, such as 15 fps, set
// minFrameDuration.
for (AVCaptureOutput* output in self.captureSession.outputs)
{
if ([output isKindOfClass:[AVCaptureVideoDataOutput class]])
{
AVCaptureConnection* connection = [output connectionWithMediaType:AVMediaTypeVideo];
CMTimeShow(connection.videoMinFrameDuration);
CMTimeShow(connection.videoMaxFrameDuration);
CMTime frameDuration = CMTimeMake(1, self.fps);
if (connection.isVideoMinFrameDurationSupported)
connection.videoMinFrameDuration = frameDuration;
if (connection.isVideoMaxFrameDurationSupported)
connection.videoMaxFrameDuration = frameDuration;
CMTimeShow(connection.videoMinFrameDuration);
CMTimeShow(connection.videoMaxFrameDuration);
}
else
{
AVCaptureConnection* connection = [output connectionWithMediaType:AVMediaTypeVideo];
CMTimeShow(connection.videoMinFrameDuration);
CMTimeShow(connection.videoMaxFrameDuration);
if (connection.isVideoMinFrameDurationSupported)
connection.videoMinFrameDuration = CMTimeMake(1, 20);
if (connection.isVideoMaxFrameDurationSupported)
connection.videoMaxFrameDuration = CMTimeMake(1, 20);
CMTimeShow(connection.videoMinFrameDuration);
CMTimeShow(connection.videoMaxFrameDuration);
}
}
// Start the session running to start the flow of data
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];
[self.captureSession startRunning];
// Assign session to an ivar.
}
- (NSMutableDictionary *)createEmptyPackage
{
return [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInteger:0], @"framesCount", [NSMutableArray array], @"framesArray", nil];
}
- (void)updateCurrentPackageWithFrameImage:(UIImage *)frameImage
{
if (self.currentPackageMutableDictionary == nil)
{
NSLog(@"new package with number %d", self.packagesBufferMutableArray.count);
self.currentPackageMutableDictionary = [self createEmptyPackage];
}
NSInteger framesCount = [[self.currentPackageMutableDictionary objectForKey:@"framesCount"] integerValue];
NSMutableArray *framesArray = [self.currentPackageMutableDictionary objectForKey:@"framesArray"];
NSLog(@"added %d frame at current package", framesCount);
framesCount ++;
[framesArray addObject:frameImage];
[self.currentPackageMutableDictionary setObject:[NSNumber numberWithInteger:framesCount] forKey:@"framesCount"];
if (framesCount == self.fps)
{
[self.packagesBufferMutableArray addObject:[NSDictionary dictionaryWithDictionary:self.currentPackageMutableDictionary]];
self.currentPackageMutableDictionary = nil;
if ((self.packagesBufferMutableArray.count == 31) && !self.isExtra)
{
NSLog(@"remove old package");
[self.packagesBufferMutableArray removeObjectAtIndex:0];
}
}
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage from the sample buffer data
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
// UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
[self updateCurrentPackageWithFrameImage:image];
// UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
// imageView.frame = CGRectMake(0, 0, image.size.width, image.size.height);
// [self.view addSubview:imageView];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// a very dense way to keep track of the time at which this frame
// occurs relative to the output stream, but it's just an example!
static int64_t frameNumber = 0;
if(self.assetWriterInput.readyForMoreMediaData)
[self.assetWriterPixelBufferAdaptor appendPixelBuffer:imageBuffer
withPresentationTime:CMTimeMake(frameNumber, 24)];
frameNumber++;
}
- (void) image: (UIImage *) image
didFinishSavingWithError: (NSError *) error
contextInfo: (void *) contextInfo
{
}
- (void)startWork
{
}
- (void)stopWork
{
[self.captureSession stopRunning];
[self.assetWriter finishWriting];
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
So it creates a .mov file, but I can't play it with any player.
QuickTime Player cannot open "14-02-02 - 21-29-11.mov", as the movie file format is not recognized.
Maybe I must set some other parameters or use another algorithm?
Could someone explain or provide an example?
