I'm currently creating an iOS application that uses the camera to capture 30 frames per second for 15 seconds (450 frames in total). The problem is that [self.session startRunning] (the last line of the code below) doesn't seem to be working; I say this because I've set up an array called hues to hold the average red value of each of the 450 frames that should be captured, but even after starting and stopping detection the array remains empty. What am I missing?
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) NSMutableArray *hues;
@end
const int SECONDS = 15;
const int FPS = 30;
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
self.hues = [[NSMutableArray alloc] init]; // initiate things (from old code)
self.session = [[AVCaptureSession alloc] init];
self.session.sessionPreset = AVCaptureSessionPresetLow;
NSInteger numberOfFramesCaptured = self.hues.count;
// initialize the session with proper settings (from docs)
NSError *error = nil;
AVCaptureDevice *captureDevice; // initialize captureDevice and input, and add input (from old code)
AVCaptureDeviceInput *videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
if ([self.session canAddInput:videoInput])
{ // fails
[self.session addInput:videoInput];
}
// find the max fps we can get from the given device (from old code)
AVCaptureDeviceFormat *currentFormat = [captureDevice activeFormat];
for (AVCaptureDeviceFormat *format in captureDevice.formats) // executes twice
{ // skips all of this
NSArray *ranges = format.videoSupportedFrameRateRanges;
AVFrameRateRange *frameRates = ranges[0];
// find the lowest resolution format at the frame rate we want (from old code)
if (frameRates.maxFrameRate == FPS && (!currentFormat || (CMVideoFormatDescriptionGetDimensions(format.formatDescription).width < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width && CMVideoFormatDescriptionGetDimensions(format.formatDescription).height < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height)))
{
currentFormat = format;
}
}
// tell the device to use the max frame rate (from old code)
[captureDevice lockForConfiguration:nil];
captureDevice.torchMode = AVCaptureTorchModeOn;
captureDevice.activeFormat = currentFormat;
captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FPS);
captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FPS);
[captureDevice unlockForConfiguration];
// set the output (from old code)
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// create a queue to run the capture on (from old code)
dispatch_queue_t queue = dispatch_queue_create("queue", NULL);
// setup our delegate (from old code)
[videoOutput setSampleBufferDelegate:self queue:queue];
// configure the pixel format (from old code)
videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
videoOutput.alwaysDiscardsLateVideoFrames = NO;
[self.session addOutput:videoOutput];
// start the video session
[self.session startRunning]; // PROBLEM OCCURS HERE
}
In your code, AVCaptureDevice *captureDevice; is never initialized, so it is nil when you create the AVCaptureDeviceInput and configure the device. That is why canAddInput: fails and no frames are ever delivered to your delegate.
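A minimal sketch of the missing piece, assuming you want the back camera (this loop, and the delegate method after it, are illustrative rather than your original code):
// Pick the back camera before creating the AVCaptureDeviceInput.
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
{
    if (device.position == AVCaptureDevicePositionBack)
    {
        captureDevice = device;
        break;
    }
}
Also, nothing will land in hues unless the view controller adopts AVCaptureVideoDataOutputSampleBufferDelegate and implements the callback. A rough sketch of how the average red value of each 32BGRA frame could be collected:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *base = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    unsigned long long redSum = 0;
    for (size_t y = 0; y < height; y++)
    {
        uint8_t *row = base + y * bytesPerRow;
        for (size_t x = 0; x < width; x++)
        {
            redSum += row[x * 4 + 2]; // BGRA layout: red is the third byte
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    double averageRed = (double)redSum / (double)(width * height);
    [self.hues addObject:@(averageRed)]; // called on the serial capture queue
}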
I am trying to do real-time image processing with an iPhone 6 at 240fps. The problem is that when I capture video at that speed, I can't process the frames fast enough, since I need to sample each pixel to get an average. Reducing the image resolution would easily solve this, but I'm not able to figure out how to do it. The available AVCaptureDeviceFormats include a 192x144 px option, but only at 30fps; all of the 240fps options have larger dimensions. Here is how I am sampling the data:
- (void)startDetection
{
const int FRAMES_PER_SECOND = 240;
self.session = [[AVCaptureSession alloc] init];
self.session.sessionPreset = AVCaptureSessionPresetLow;
// Retrieve the back camera
NSArray *devices = [AVCaptureDevice devices];
AVCaptureDevice *captureDevice;
for (AVCaptureDevice *device in devices)
{
if ([device hasMediaType:AVMediaTypeVideo])
{
if (device.position == AVCaptureDevicePositionBack)
{
captureDevice = device;
break;
}
}
}
NSError *error;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
[self.session addInput:input];
if (error)
{
NSLog(#"%#", error);
}
// Find the max frame rate we can get from the given device
AVCaptureDeviceFormat *currentFormat;
for (AVCaptureDeviceFormat *format in captureDevice.formats)
{
NSArray *ranges = format.videoSupportedFrameRateRanges;
AVFrameRateRange *frameRates = ranges[0];
// Find the lowest resolution format at the frame rate we want.
if (frameRates.maxFrameRate == FRAMES_PER_SECOND && (!currentFormat || (CMVideoFormatDescriptionGetDimensions(format.formatDescription).width < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width && CMVideoFormatDescriptionGetDimensions(format.formatDescription).height < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height)))
{
currentFormat = format;
}
}
// Tell the device to use the max frame rate.
[captureDevice lockForConfiguration:nil];
captureDevice.torchMode=AVCaptureTorchModeOn;
captureDevice.activeFormat = currentFormat;
captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
[captureDevice setVideoZoomFactor:4];
[captureDevice unlockForConfiguration];
// Set the output
AVCaptureVideoDataOutput* videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// create a queue to run the capture on
dispatch_queue_t captureQueue=dispatch_queue_create("catpureQueue", NULL);
// setup our delegate
[videoOutput setSampleBufferDelegate:self queue:captureQueue];
// configure the pixel format
videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
nil];
videoOutput.alwaysDiscardsLateVideoFrames = NO;
[self.session addOutput:videoOutput];
// Start the video session
[self.session startRunning];
}
Try the GPUImage library. Each filter has a forceProcessingAtSize: method; after forcing the resize on the GPU, you can retrieve the downscaled data with GPUImageRawDataOutput.
I got 60fps while processing the image on the CPU with this method.
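For reference, a rough sketch of what that GPUImage pipeline might look like (class and method names are from GPUImage's public API as I recall them, so treat this as a starting point rather than a drop-in solution):
GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720 cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;
// Any filter will do for a pure downscale; force it to render at 192x144.
GPUImageFilter *resizeFilter = [[GPUImageFilter alloc] init];
[resizeFilter forceProcessingAtSize:CGSizeMake(192.0, 144.0)];
[camera addTarget:resizeFilter];
// Read the downscaled BGRA bytes back on the CPU.
GPUImageRawDataOutput *rawOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(192.0, 144.0) resultsInBGRAFormat:YES];
[resizeFilter addTarget:rawOutput];
__weak GPUImageRawDataOutput *weakOutput = rawOutput;
[rawOutput setNewFrameAvailableBlock:^{
    GLubyte *bytes = [weakOutput rawBytesForImage];
    NSUInteger bytesPerRow = [weakOutput bytesPerRowInOutput];
    // Sample the 192x144 BGRA buffer here; newer GPUImage versions may also
    // want lockFramebufferForReading / unlockFramebufferAfterReading around the read.
}];
[camera startCameraCapture];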
I am new to Objective-C and iOS development. I want to record video through code and, at run time, get each frame as raw data for some processing. How can I achieve this? Thanks in advance. Here is my code so far:
- (void)viewDidLoad
{
[super viewDidLoad];
[self setupCaptureSession];
}
The viewDidAppear: method:
-(void)viewDidAppear:(BOOL)animated
{
if (!_bpickeropen)
{
_bpickeropen = true;
_picker = [[UIImagePickerController alloc] init];
_picker.delegate = self;
NSArray *sourceTypes = [UIImagePickerController availableMediaTypesForSourceType:_picker.sourceType];
if (![sourceTypes containsObject:(NSString *)kUTTypeMovie ])
{
NSLog(#"device not supported");
return;
}
_picker.sourceType = UIImagePickerControllerSourceTypeCamera;
_picker.mediaTypes = [NSArray arrayWithObjects:(NSString *)kUTTypeMovie,nil];//,(NSString *) kUTTypeImage
_picker.videoQuality = UIImagePickerControllerQualityTypeHigh;
[self presentModalViewController:_picker animated:YES];
}
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
// PROBLEMS:
// 1. Here I am getting the raw bytes only once.
// 2. After that I want to store these raw bytes as a binary file in the app path.
// Do whatever with your bytes
NSLog(#"bytes per row %zd",bytesPerRow);
[dataForRawBytes writeToFile:[self datafilepath]atomically:YES];
NSLog(#"Sample Buffer Data is %#\n",dataForRawBytes);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
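To address problem 2 (storing the raw bytes as a binary file in the app path), one hedged option, assuming the Documents directory is an acceptable location, is a hypothetical datafilepath helper plus an append-style write, so that each frame's bytes are added to a single file instead of overwriting the previous frame:
- (NSString *)datafilepath
{
    // Hypothetical helper: a file named frames.bin in the Documents directory.
    NSString *documents = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
    return [documents stringByAppendingPathComponent:@"frames.bin"];
}

- (void)appendFrameData:(NSData *)frameData
{
    // Hypothetical helper: create the file once, then append every frame to it.
    NSString *path = [self datafilepath];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if (![fileManager fileExistsAtPath:path])
    {
        [fileManager createFileAtPath:path contents:nil attributes:nil];
    }
    NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
    [handle seekToEndOfFile];
    [handle writeData:frameData];
    [handle closeFile];
}
Calling [self appendFrameData:dataForRawBytes] from the delegate (instead of writeToFile:atomically:, which replaces the file on every frame) would accumulate the data across frames.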
Here I am setting the delegate of the output:
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify medium quality for the
// chosen device.
session.sessionPreset = AVCaptureSessionPresetMedium;
// Find a suitable AVCaptureDevice
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input)
{
// Handling the error appropriately.
}
[session addInput:input];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]; //kCVPixelBufferPixelFormatTypeKey
// If you wish to cap the frame rate to a known value, such as 15 fps, set
// minFrameDuration.
// output.minFrameDuration = CMTimeMake(1, 15);
// Start the session running to start the flow of data
[session startRunning];
// Assign session to an ivar.
//[self setSession:session];
}
I appreciate any help. Thanks in advance.
You could look into the AVFoundation framework. It allows you access to the raw data generated from the camera.
This link is a good intro-level project to the AVFoundation video camera usage.
In order to get individual frames from the video output, you could use the AVCaptureVideoDataOutput class from the AVFoundation framework.
Hope this helps.
EDIT: You could look at the delegate functions of AVCaptureVideoDataOutputSampleBufferDelegate, in particular the captureOutput:didOutputSampleBuffer:fromConnection: method. This will be called every time a new frame is captured.
If you do not know how delegates work, this link is a good example of delegates.
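As a minimal, hypothetical illustration (not taken from the linked project), each sample buffer handed to that delegate method can be turned into a UIImage with Core Image, assuming a 32BGRA video output and #import <CoreImage/CoreImage.h>:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    // Reuse a single CIContext in real code; creating one per frame is expensive.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *frame = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    // ... process frame ...
}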
I'm currently trying to use iOS 7's newest APIs to scan Code 39 barcodes, but it's driving me crazy. I have to hold the phone really still in a specific way for about 10 seconds in order for it to detect anything. I compared it to RedLaser, ZBar, etc., and they could analyze the code in about a second even if it was a little skewed. I'm not sure if it's because of the way I load my capture session or what. I'd appreciate the help. Any suggestions on how to improve performance?
Here's how I load the scanner in my viewDidLoad method:
//Initialize Laser View
laserView = [[UIView alloc] init];
laserView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
laserView.layer.borderColor = [UIColor redColor].CGColor;
laserView.layer.borderWidth = 8;
laserView.layer.cornerRadius = 10;
[self.view addSubview:laserView];
//Start Session
scannerSession = [[AVCaptureSession alloc] init];
scannerDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
//Define Error Messages
NSError *error = nil;
//Define Input
scannerInput = [AVCaptureDeviceInput deviceInputWithDevice:scannerDevice error:&error];
//Check if Device has a Camera
if (scannerInput) {
[scannerSession addInput:scannerInput];
} else {
NSLog(#"Error: %#", error);
}
// Locks the configuration
BOOL success = [scannerDevice lockForConfiguration:nil];
if (success) {
if ([scannerDevice isAutoFocusRangeRestrictionSupported]) {
// Restricts the autofocus to near range (new in iOS 7)
[scannerDevice setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
}
}
// unlocks the configuration
[scannerDevice unlockForConfiguration];
//Define Output & Metadata Object Types
scannerOutput = [[AVCaptureMetadataOutput alloc] init];
[scannerOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[scannerSession addOutput:scannerOutput];
scannerOutput.metadataObjectTypes = [scannerOutput availableMetadataObjectTypes];
//Create Video Preview Layer
scannerPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:scannerSession];
scannerPreviewLayer.frame = self.view.bounds;
scannerPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:scannerPreviewLayer];
//Start Session
[scannerSession startRunning];
[self.view bringSubviewToFront:cancelButton];
[self.view bringSubviewToFront:laserView];
And:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
//Prepare Laser View
CGRect laser = CGRectZero;
//Format Date
NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
[dateFormatter setDateFormat:@"M/d"];
//Format Time
NSDateFormatter *timeFormatter = [[NSDateFormatter alloc] init];
[timeFormatter setDateFormat:@"h:ma"];
//Define Barcode Types to Recognize
AVMetadataMachineReadableCodeObject *barCodeObject;
NSString *idNumber = nil;
NSArray *barCodeTypes = @[AVMetadataObjectTypeCode39Code];
if ([metadataObjects count] > 1) {
NSLog(@"%lu Barcodes Found.", (unsigned long)[metadataObjects count]);
}
//Get String Value For Every Barcode (That Matches The Type We're Looking For)
for (AVMetadataObject *metadata in metadataObjects) {
for (NSString *type in barCodeTypes) {
//If The Barcode Is The Type We Need Then Get Data
if ([metadata.type isEqualToString:type]) {
barCodeObject = (AVMetadataMachineReadableCodeObject *)[scannerPreviewLayer transformedMetadataObjectForMetadataObject:(AVMetadataMachineReadableCodeObject *)metadata];
laser = barCodeObject.bounds;
idNumber = [(AVMetadataMachineReadableCodeObject *)metadata stringValue];
break;
}
}
// If IDNumber Found
if (idNumber != nil) {
//Stop Session
[scannerSession stopRunning];
[self vibrate];
NSLog(#"ID: %#", idNumber);
break;
}
//If IDNumber Is Not Found
else {
NSLog(#"No ID Found.");
}
}
//Update Laser
laserView.frame = laser;
}
I had a similar problem with AVCaptureSession; the capture was very slow and sometimes took a long time to finish.
I don't know if my solution is the right one for you, but it can definitely be helpful to somebody else running into this problem, like I did.
AVCaptureSession *captureSession = [AVCaptureSession new];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;
With this code you force the camera to a high quality preset.
Hope this will help someone.
Try zooming in a little... videoDevice.videoZoomFactor = 2.0;
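One note on that: changing videoZoomFactor requires locking the device configuration first. A minimal sketch, assuming videoDevice is your active AVCaptureDevice:
NSError *zoomError = nil;
if ([videoDevice lockForConfiguration:&zoomError])
{
    videoDevice.videoZoomFactor = 2.0;
    [videoDevice unlockForConfiguration];
}
else
{
    NSLog(@"Could not lock device for zoom: %@", zoomError);
}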
A lot of it has to do with the quality of the image (focus, glare, lighting, etc.) Under really good conditions, you should get scanning that is nearly instantaneous.
I suggest you put an NSLog in your delegate function captureOutput:.
You will see that it gets called many times just to scan a barcode once. Initializing an NSDateFormatter is a very expensive operation, per Why is allocating or initializing NSDateFormatter considered "expensive"?.
I would suggest creating the NSDateFormatter outside of the delegate function and reusing it instead of creating it every time the function is called. This should make your app more responsive.
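A minimal sketch of that change, assuming the formatters are still needed inside the callback (the dispatch_once pattern is just one way to create them exactly once; an instance variable set up in viewDidLoad would work as well):
static NSDateFormatter *dateFormatter = nil;
static NSDateFormatter *timeFormatter = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"M/d"];
    timeFormatter = [[NSDateFormatter alloc] init];
    [timeFormatter setDateFormat:@"h:ma"];
});
// ... then use dateFormatter / timeFormatter inside
// captureOutput:didOutputMetadataObjects:fromConnection: as before.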
I am trying to throttle my video capture framerate for my application, as I have found that it is impacting VoiceOver performance.
At the moment, it captures frames from the video camera, and then processes the frames using OpenGL routines as quickly as possible. I would like to set a specific framerate in the capture process.
I was expecting to be able to do this by using videoMinFrameDuration or minFrameDuration, but this seems to make no difference to performance. Any ideas?
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
if ([device position] == AVCaptureDevicePositionBack)
{
backFacingCamera = device;
// SET SOME OTHER PROPERTIES
}
}
// Create the capture session
captureSession = [[AVCaptureSession alloc] init];
// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// Start capturing
if([backFacingCamera supportsAVCaptureSessionPreset:AVCaptureSessionPreset1920x1080])
{
[captureSession setSessionPreset:AVCaptureSessionPreset1920x1080];
captureDeviceWidth = 1920;
captureDeviceHeight = 1080;
#if defined(VA_DEBUG)
NSLog(#"Video AVCaptureSessionPreset1920x1080");
#endif
}
else { /* fall back to a lower-resolution preset */ }
// If you wish to cap the frame rate to a known value, such as 15 fps, set
// minFrameDuration.
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.supportsVideoMinFrameDuration)
conn.videoMinFrameDuration = CMTimeMake(1,2);
else
videoOutput.minFrameDuration = CMTimeMake(1,2);
if ([captureSession canAddInput:videoInput])
[captureSession addInput:videoInput];
if ([captureSession canAddOutput:videoOutput])
[captureSession addOutput:videoOutput];
if (![captureSession isRunning])
[captureSession startRunning];
Any ideas? Am I missing something? Is this the best way to throttle?
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.supportsVideoMinFrameDuration)
conn.videoMinFrameDuration = CMTimeMake(1,2);
else
videoOutput.minFrameDuration = CMTimeMake(1,2);
Mike Ullrich's answer worked up until iOS 7. Unfortunately these two methods are deprecated in iOS 7; you have to set activeVideo{Min|Max}FrameDuration on the AVCaptureDevice itself. Something like:
int fps = 30; // Change this value
AVCaptureDevice *device = ...; // Get the active capture device
[device lockForConfiguration:nil];
[device setActiveVideoMinFrameDuration:CMTimeMake(1, fps)];
[device setActiveVideoMaxFrameDuration:CMTimeMake(1, fps)];
[device unlockForConfiguration];
It turns out you need to set both videoMinFrameDuration and videoMaxFrameDuration for either one to take effect.
eg:
[conn setVideoMinFrameDuration:CMTimeMake(1,1)];
[conn setVideoMaxFrameDuration:CMTimeMake(1,1)];
Is there a way to take a picture in code on the iPhone without going through the Apple controls? I have seen a bunch of apps that do this, but I'm not sure what API call to use.
EDIT: As suggested in the comments below, I have now explicitly shown how the AVCaptureSession needs to be declared and initialized. It seems that a few were doing the initialization wrong or declaring AVCaptureSession as a local variable in a method. This would not work.
The following code allows you to take a picture using AVCaptureSession without user input:
// Get all cameras in the application and find the frontal camera.
AVCaptureDevice *frontalCamera;
NSArray *allCameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
// Find the frontal camera.
for ( int i = 0; i < allCameras.count; i++ ) {
AVCaptureDevice *camera = [allCameras objectAtIndex:i];
if ( camera.position == AVCaptureDevicePositionFront ) {
frontalCamera = camera;
}
}
// Proceed only if we found the frontal camera; otherwise do not take the picture.
if ( frontalCamera != nil ) {
// Start the process of getting a picture.
session = [[AVCaptureSession alloc] init];
// Setup instance of input with frontal camera and add to session.
NSError *error;
AVCaptureDeviceInput *input =
[AVCaptureDeviceInput deviceInputWithDevice:frontalCamera error:&error];
if ( !error && [session canAddInput:input] ) {
// Add frontal camera to this session.
[session addInput:input];
// We need to capture still image.
AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];
// Captured image settings.
[output setOutputSettings:
[[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG,AVVideoCodecKey,nil]];
if ( [session canAddOutput:output] ) {
[session addOutput:output];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in output.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
// Finally take the picture
if ( videoConnection ) {
[session startRunning];
[output captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer != NULL) {
NSData *imageData = [AVCaptureStillImageOutput
jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *photo = [[UIImage alloc] initWithData:imageData];
}
}];
}
}
}
}
The session variable is of type AVCaptureSession and has been declared in the .h file of the class (either as a property or as a private member):
AVCaptureSession *session;
It will then need to be initialized somewhere, for instance in the class's init method:
session = [[AVCaptureSession alloc] init];
Yes, there are two ways to do this. One, available in iOS 3.0+, is to use the UIImagePickerController class, setting the showsCameraControls property to NO and the cameraOverlayView property to your own custom controls. The other, available in iOS 4.0+, is to configure an AVCaptureSession, providing it with an AVCaptureDeviceInput using the appropriate camera device and an AVCaptureStillImageOutput. The first approach is much simpler and works on more iOS versions, but the second gives you much greater control over photo resolution and file options.
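A rough sketch of the first approach, assuming a view controller that adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate (myOverlayView and the control that triggers the capture are placeholders you would supply):
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;          // hide the built-in shutter UI
picker.cameraOverlayView = myOverlayView; // your own custom controls
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil];
// Later, from one of your own overlay controls:
[picker takePicture];
The result then arrives in the standard delegate callback:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *photo = info[UIImagePickerControllerOriginalImage];
    [self dismissViewControllerAnimated:YES completion:nil];
    // ... use photo ...
}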