I am new to Objective-C and iOS development. I want to record video through code and, at run time, get each frame as raw data for some processing. How can I achieve this? Here is my code so far:
- (void)viewDidLoad
{
[super viewDidLoad];
[self setupCaptureSession];
}
The viewDidAppear method:
-(void)viewDidAppear:(BOOL)animated
{
if (!_bpickeropen)
{
_bpickeropen = true;
_picker = [[UIImagePickerController alloc] init];
_picker.delegate = self;
NSArray *sourceTypes = [UIImagePickerController availableMediaTypesForSourceType:_picker.sourceType];
if (![sourceTypes containsObject:(NSString *)kUTTypeMovie ])
{
NSLog(#"device not supported");
return;
}
_picker.sourceType = UIImagePickerControllerSourceTypeCamera;
_picker.mediaTypes = [NSArray arrayWithObjects:(NSString *)kUTTypeMovie,nil];//,(NSString *) kUTTypeImage
_picker.videoQuality = UIImagePickerControllerQualityTypeHigh;
[self presentModalViewController:_picker animated:YES];
}
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
PROBLEMS:
1. Here I am getting the raw bytes only once.
2. After that I want to store these raw bytes as a binary file in the app's Documents path.
// Do whatever with your bytes
NSLog(#"bytes per row %zd",bytesPerRow);
[dataForRawBytes writeToFile:[self datafilepath]atomically:YES];
NSLog(#"Sample Buffer Data is %#\n",dataForRawBytes);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
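One likely reason only a single frame's worth of data ends up on disk is that writeToFile:atomically: overwrites the file on every callback. As a rough sketch (not the definitive approach), you could open an NSFileHandle once and append each frame's raw bytes to a binary file in the Documents directory; the _rawFrameFileHandle ivar and the file name below are placeholders:
// Assumption: _rawFrameFileHandle is an NSFileHandle ivar; the file path is illustrative.
- (NSFileHandle *)rawFrameFileHandle
{
    if (!_rawFrameFileHandle) {
        NSString *documents = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                    NSUserDomainMask, YES) objectAtIndex:0];
        NSString *path = [documents stringByAppendingPathComponent:@"frames.raw"];
        [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
        _rawFrameFileHandle = [NSFileHandle fileHandleForWritingAtPath:path];
    }
    return _rawFrameFileHandle;
}
// Then, inside captureOutput:didOutputSampleBuffer:fromConnection:, append instead of overwriting:
// [[self rawFrameFileHandle] writeData:dataForRawBytes];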
Here I am setting the delegate of the output.
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify medium quality for the
// chosen device.
session.sessionPreset = AVCaptureSessionPresetMedium;
// Find a suitable AVCaptureDevice
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input)
{
// Handling the error appropriately.
}
[session addInput:input];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]; //kCVPixelBufferPixelFormatTypeKey
// If you wish to cap the frame rate to a known value, such as 15 fps, set
// minFrameDuration.
// output.minFrameDuration = CMTimeMake(1, 15);
// Start the session running to start the flow of data
[session startRunning];
// Assign session to an ivar.
//[self setSession:session];
}
I appreciate any help. Thanks in advance.
You could look into the AVFoundation framework. It allows you access to the raw data generated from the camera.
This link is a good intro-level project to the AVFoundation video camera usage.
In order to get individual frames from the video output, you could use the AVCaptureVideoDataOutput class from the AVFoundation framework.
Hope this helps.
EDIT: You could look at the delegate methods of AVCaptureVideoDataOutputSampleBufferDelegate, in particular the captureOutput:didOutputSampleBuffer:fromConnection: method. This will be called every time a new frame is captured.
If you do not know how delegates work, this link is a good example of delegates.
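As a bare-bones sketch (the class name here is just a placeholder), adopting the delegate protocol and receiving frames looks roughly like this:
@interface MyCameraController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
@end

// When configuring the session:
// [output setSampleBufferDelegate:self queue:queue];

// Called on that queue for every captured frame:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Process pixelBuffer here (lock its base address before reading raw bytes).
}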
I'm developing a QR code reader. My codes are 1 cm in both length and width. I'm using AVFoundation metadata to capture the machine-readable codes, and it works fine. But at the same time I need to take a picture of the QR code with the logo (which is located in the middle of the QR code), so I'm using AVCaptureVideoDataOutput and didOutputSampleBuffer to get still images. The problem is the clarity of the image: the edges of the codes and the logo always look blurry. So I researched manual camera controls and made some code changes for manual focusing, but no luck so far.
How do I focus on nearby objects (which are tiny and about 10 cm away from the camera)?
Is there any other way of getting the image from the metadata after a successful scan?
What is the difference between setFocusModeLockedWithLensPosition and focusPointOfInterest?
Here is my code (part of it):
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
_session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify high quality for the
// chosen device.
_session.sessionPreset = AVCaptureSessionPresetHigh;
// Find a suitable AVCaptureDevice
_device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([_device lockForConfiguration:&error]) {
[_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNone];
[_device setFocusModeLockedWithLensPosition:0.5 completionHandler:nil];
//[device setFocusMode:AVCaptureFocusModeAutoFocus];
// _device.focusPointOfInterest = CGPointMake(0.5,0.5);
// device.videoZoomFactor = 1.0 + 10;
[_device unlockForConfiguration];
}
// if ([_device isSmoothAutoFocusEnabled])
// {
// _device.smoothAutoFocusEnabled = NO;
// }
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:_device
error:&error];
if (!input) {
// Handling the error appropriately.
}
[_session addInput:input];
// For scanning QR code
AVCaptureMetadataOutput *metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
// Have to add the output before setting metadata types
[_session addOutput:metaDataOutput];
[metaDataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
[metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
//For saving the image to camera roll
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[_stillImageOutput setOutputSettings:outputSettings];
[_session addOutput:_stillImageOutput];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[_session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Start the session running to start the flow of data
[self startCapturingWithSession:_session];
// Assign session to an ivar.
[self setSession:_session];
}
- (void)startCapturingWithSession: (AVCaptureSession *) captureSession
{
NSLog(#"Adding video preview layer");
[self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession]];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
//----- DISPLAY THE PREVIEW LAYER -----
//Display it full screen under our view controller's existing controls
NSLog(@"Display the preview layer");
CGRect layerRect = [[[self view] layer] bounds];
[self.previewLayer setBounds:layerRect];
[self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),
CGRectGetMidY(layerRect))];
[self.previewLayer setAffineTransform:CGAffineTransformMakeScale(3.5, 3.5)];
//[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
//We use this instead so it goes on a layer behind our UI controls (avoids us having to manually bring each control to the front):
UIView *CameraView = [[UIView alloc] init];
[[self view] addSubview:CameraView];
[self.view sendSubviewToBack:CameraView];
[[CameraView layer] addSublayer:self.previewLayer];
//----- START THE CAPTURE SESSION RUNNING -----
[captureSession startRunning];
[self switchONFlashLight];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage from the sample buffer data
[connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
if (metadataObjects != nil && [metadataObjects count] > 0) {
AVMetadataMachineReadableCodeObject *metadataObj = [metadataObjects objectAtIndex:0];
// if ([_device lockForConfiguration:nil]){
// [_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
// _device.focusPointOfInterest = CGPointMake(metadataObj.bounds.origin.x, metadataObj.bounds.origin.y);
// [_device unlockForConfiguration];
// }
if ([[metadataObj type] isEqualToString:AVMetadataObjectTypeQRCode]) {
[_lblStatus performSelectorOnMainThread:@selector(setText:) withObject:[metadataObj stringValue] waitUntilDone:NO];
}
}
}
Aiming for iOS 8 and latest iPhones only.
After doing extensive research and getting input from photographers, I'm sharing my answers for future readers.
As of iOS 8, Apple provides only three focus modes, which are:
typedef NS_ENUM(NSInteger, AVCaptureFocusMode) {
AVCaptureFocusModeLocked = 0,
AVCaptureFocusModeAutoFocus = 1,
AVCaptureFocusModeContinuousAutoFocus = 2,
} NS_AVAILABLE(10_7, 4_0);
To focus on an object that is very near to the lens, we can use AVCaptureAutoFocusRangeRestrictionNear,
but for my needs, because of the minimum focus distance of the iPhone cameras, it is not possible to get a clear image of my codes.
AFAIK there is no way to get image data from the metadata; my question itself was wrong. However, you can get the image buffers from video frames. Check out Capturing Video Frames as UIImage Objects for more info.
setFocusModeLockedWithLensPosition locks the focus mode and lets us set a particular lens position, ranging from 0.0 to 1.0.
focusPointOfInterest doesn't change the focus mode; it just sets the point in the frame to focus on. The best example is tap to focus.
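To illustrate the difference, here is a rough sketch of both approaches (capability checks simplified, and assuming _device is the active AVCaptureDevice); treat the two options as alternatives, not steps to combine:
NSError *error = nil;
if ([_device lockForConfiguration:&error]) {
    // Option 1: lock focus at a fixed lens position (0.0 = nearest focus, 1.0 = farthest).
    if ([_device isFocusModeSupported:AVCaptureFocusModeLocked]) {
        [_device setFocusModeLockedWithLensPosition:0.2 completionHandler:nil];
    }

    // Option 2 (alternative to Option 1): keep autofocus, but point it at a spot in the
    // frame, e.g. tap to focus. focusPointOfInterest uses normalised coordinates, and a
    // focus mode must be set afterwards for the new point to take effect.
    if ([_device isFocusPointOfInterestSupported] &&
        [_device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        _device.focusPointOfInterest = CGPointMake(0.5, 0.5);
        _device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }

    [_device unlockForConfiguration];
}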
I am using AVFoundation to capture video and present it on the iPhone screen.
I want to know every time a video frame is dropped, and to get its time.
I read that this delegate method is exactly what I want:
-(void)captureOutput:(AVCaptureOutput*)captureOutput didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection;
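For reference, a minimal sketch of what an implementation of that method might look like (assuming ARC); the drop reason is exposed as a CoreMedia attachment on the dropped buffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Timestamp of the dropped frame.
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // Why it was dropped, e.g. kCMSampleBufferDroppedFrameReason_FrameWasLate (iOS 6+).
    CFStringRef reason = (CFStringRef)CMGetAttachment(sampleBuffer,
                                                      kCMSampleBufferAttachmentKey_DroppedFrameReason,
                                                      NULL);
    NSLog(@"Dropped frame at %.3fs, reason: %@", CMTimeGetSeconds(timestamp), (__bridge NSString *)reason);
}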
This is my code setting the delegate:
@interface PRVideoRecorderManger () <AVCaptureFileOutputRecordingDelegate, AVCaptureVideoDataOutputSampleBufferDelegate>
AVCaptureVideoDataOutput *videoCaptureOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("myQueue", DISPATCH_QUEUE_SERIAL);
[videoCaptureOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoCaptureOutput setSampleBufferDelegate:self queue:queue];
// Specify the pixel format
videoCaptureOutput.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]; //kCVPixelBufferPixelFormatTypeKey
// If you wish to cap the frame rate to a known value, such as 15 fps, set
// minFrameDuration.
videoCaptureOutput.minFrameDuration = CMTimeMake(1, videoFPS);
if([_captureSession canAddOutput:videoCaptureOutput])
{
[_captureSession addOutput:videoCaptureOutput];
}
else
{
NSLog(#"can't Add output");
}
_movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
[[_movieFileOutput connectionWithMediaType:AVMediaTypeVideo ] setVideoOrientation:orientation];
[_captureSession addOutput:_movieFileOutput];
[_movieFileOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];
[_captureSession startRunning];
But captureOutput:didDropSampleBuffer: is NEVER called. Why is that?
EDIT:
I have realised that if I comment out the AVCaptureMovieFileOutput, it does call
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Is there a way to receive the output sample buffers AND finish recording to the output file at the URL?
Because right now it only lets me receive one of the two.
I have an iOS app that uses the front camera of the phone and sets up an AVCaptureSession to read the incoming camera data. I set up a simple frame counter to check the speed of the incoming data and, to my surprise, when the camera is in low light the frame rate (measured using the imagecount variable in the code) is very slow, but as soon as I move the phone into a brightly lit area the frame rate almost triples. I would like to keep a high frame rate for image processing throughout and have set the minFrameDuration variable to 30 fps, but that didn't help. Any ideas why this behaviour occurs?
Code to create the capture session is below:
#pragma mark Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify low quality for the
// chosen device.
session.sessionPreset = AVCaptureSessionPresetLow;
// Find a suitable AVCaptureDevice
//AVCaptureDevice *device=[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSArray *devices = [AVCaptureDevice devices];
AVCaptureDevice *frontCamera;
AVCaptureDevice *backCamera;
for (AVCaptureDevice *device in devices) {
if ([device hasMediaType:AVMediaTypeVideo]) {
if ([device position] == AVCaptureDevicePositionFront) {
backCamera = device;
}
else {
frontCamera = device;
}
}
}
//Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera
error:&error];
if (!input) {
//Handling the error appropriately.
}
[session addInput:input];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// If you wish to cap the frame rate to a known value, such as 30 fps, set
// minFrameDuration.
output.minFrameDuration = CMTimeMake(1,30);
//Start the session running to start the flow of data
[session startRunning];
}
#pragma mark Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
//counter to track frame rate
imagecount++;
//display to help see speed of images being processed on ios app
NSString *recognized = [[NSString alloc] initWithFormat:@"IMG COUNT - %d", imagecount];
[self performSelectorOnMainThread:@selector(debuggingText:) withObject:recognized waitUntilDone:YES];
}
When there is less light, the camera requires a longer exposure to get the same signal to noise ratio in each pixel. That is why you might expect the frame rate to drop in low light.
You are setting minFrameDuration to 1/30 s in an attempt to prevent long-exposure frames from slowing down the frame rate. However, you should be setting maxFrameDuration instead: your code as-is says the frame rate is no faster than 30 FPS, but it could be 10 FPS, or 1 FPS....
Also, the documentation says to bracket any changes to these parameters with lockForConfiguration: and unlockForConfiguration:, so it may be that your changes just didn't take.
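As a hedged sketch of that suggestion, using the per-device properties available from iOS 7 onwards (on earlier versions the equivalent knob was videoMaxFrameDuration on the connection or output), and assuming device is the capture device you are already using:
NSError *lockError = nil;
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; // or your front camera
if ([device lockForConfiguration:&lockError]) {
    // Max frame duration of 1/30 s: the frame rate never drops below 30 fps
    // (the device then cannot use long exposures in low light).
    device.activeVideoMaxFrameDuration = CMTimeMake(1, 30);
    // Min frame duration of 1/30 s: the frame rate never exceeds 30 fps.
    device.activeVideoMinFrameDuration = CMTimeMake(1, 30);
    [device unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for configuration: %@", lockError);
}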
I am trying to do something very simple. I want to display the video layer in full screen and, once every second, update a UIImage with the CMSampleBufferRef I got at that time. However I am running into two different problems. The first one is that changing:
[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
will also modify the video preview layer. I thought it would only modify the rate at which AVFoundation sends the information to the delegate, but it seems to affect the entire session (which, in retrospect, makes sense). So this makes my video update only once per second. I guess I could omit those lines and simply add a timer in the delegate so that every second it sends the CMSampleBufferRef to another method to process it, but I don't know if that is the right approach.
My second problem is that the UIImageView is NOT updating, or sometimes it updates once and doesn't change afterwards. I am using this method to update it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
//NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer] ;
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
[imageView setImage:image];
// Add your code here that uses the image.
NSLog(#"update");
}
which I took from the Apple examples. The method is being called correctly every second (which I checked by watching for the update message), but the image is not changing at all. Also, is the sampleBuffer automatically destroyed, or do I have to release it?
These are the other two important methods:
viewDidLoad:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
session.sessionPreset = AVCaptureSessionPreset640x480;
}
else {
// Handle the failure.
NSLog(#"Cannot set session preset to 640x480");
}
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"Could create input: %#", error);
}
if ([session canAddInput:input]) {
[session addInput:input];
}
else {
// Handle the failure.
NSLog(#"Could not add input");
}
// DATA OUTPUT
dataOutput = [[AVCaptureVideoDataOutput alloc] init];
if ([session canAddOutput:dataOutput]) {
[session addOutput:dataOutput];
dataOutput.videoSettings =
[NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey: (id)kCVPixelBufferPixelFormatTypeKey];
//dataOutput.minFrameDuration = CMTimeMake(1, 15);
//dataOutput.minFrameDuration = CMTimeMake(1, 1);
AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];
[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
}
else {
// Handle the failure.
NSLog(#"Could not add output");
}
// DATA OUTPUT END
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[dataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[captureVideoPreviewLayer setBounds:videoLayer.layer.bounds];
[captureVideoPreviewLayer setPosition:videoLayer.layer.position];
[videoLayer.layer addSublayer:captureVideoPreviewLayer];
[session startRunning];
}
Convert the CMSampleBufferRef to a UIImage:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
Thanks in advance for any help you can give me.
From the documentation for the captureOutput:didOutputSampleBuffer:fromConnection: method:
This method is called on the dispatch queue specified by the output’s sampleBufferCallbackQueue property.
This means that if you need to update the UI using the buffer in this method, you need to do that on the main queue, like this:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer: (CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
dispatch_async(dispatch_get_main_queue(), ^{
[imageView setImage:image];
});
}
EDIT: About your first question:
I'm not sure I understand the problem, but if you want to update the image only once every second, you can also keep a "lastImageUpdateTime" value to compare against in the didOutputSampleBuffer method: if enough time has passed, update the image there; otherwise ignore the sample buffer (see the sketch below).
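A rough sketch of that idea, assuming a lastImageUpdateTime property (CFTimeInterval) on the view controller; CACurrentMediaTime comes from QuartzCore:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CFTimeInterval now = CACurrentMediaTime();
    if (now - self.lastImageUpdateTime < 1.0) {
        return; // less than a second since the last update: ignore this frame
    }
    self.lastImageUpdateTime = now;

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}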
I am trying to throttle my video capture framerate for my application, as I have found that it is impacting VoiceOver performance.
At the moment, it captures frames from the video camera, and then processes the frames using OpenGL routines as quickly as possible. I would like to set a specific framerate in the capture process.
I was expecting to be able to do this by using videoMinFrameDuration or minFrameDuration, but this seems to make no difference to performance. Any ideas?
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
if ([device position] == AVCaptureDevicePositionBack)
{
backFacingCamera = device;
// SET SOME OTHER PROPERTIES
}
}
// Create the capture session
captureSession = [[AVCaptureSession alloc] init];
// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// Start capturing
if([backFacingCamera supportsAVCaptureSessionPreset:AVCaptureSessionPreset1920x1080])
{
[captureSession setSessionPreset:AVCaptureSessionPreset1920x1080];
captureDeviceWidth = 1920;
captureDeviceHeight = 1080;
#if defined(VA_DEBUG)
NSLog(#"Video AVCaptureSessionPreset1920x1080");
#endif
}
// else: do some fall back stuff
// If you wish to cap the frame rate to a known value, such as 15 fps, set
// minFrameDuration.
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.supportsVideoMinFrameDuration)
conn.videoMinFrameDuration = CMTimeMake(1,2);
else
videoOutput.minFrameDuration = CMTimeMake(1,2);
if ([captureSession canAddInput:videoInput])
[captureSession addInput:videoInput];
if ([captureSession canAddOutput:videoOutput])
[captureSession addOutput:videoOutput];
if (![captureSession isRunning])
[captureSession startRunning];
Any ideas? Am I missing something? Is this the best way to throttle?
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.supportsVideoMinFrameDuration)
conn.videoMinFrameDuration = CMTimeMake(1,2);
else
videoOutput.minFrameDuration = CMTimeMake(1,2);
Mike Ullrich's answer worked up until iOS 7. These two methods are unfortunately deprecated in iOS 7; you have to set activeVideo{Min|Max}FrameDuration on the AVCaptureDevice itself. Something like:
int fps = 30; // Change this value
AVCaptureDevice *device = ...; // Get the active capture device
[device lockForConfiguration:nil];
[device setActiveVideoMinFrameDuration:CMTimeMake(1, fps)];
[device setActiveVideoMaxFrameDuration:CMTimeMake(1, fps)];
[device unlockForConfiguration];
Turns out you need to set both videoMinFrameDuration and videoMaxFrameDuration for either one to work.
e.g.:
[conn setVideoMinFrameDuration:CMTimeMake(1,1)];
[conn setVideoMaxFrameDuration:CMTimeMake(1,1)];