How to get 240fps on iPhone 6 with low resolution - ios

I am trying to do real-time image processing on an iPhone 6 at 240fps. The problem is that when I capture video at that speed, I can't process the image fast enough, since I need to sample each pixel to get an average. Reducing the image resolution would easily solve this problem, but I can't figure out how to do it. The available AVCaptureDeviceFormats include an option at 192x144 px, but only at 30fps; all of the 240fps options have larger dimensions. Here is how I am sampling the data:
- (void)startDetection
{
    const int FRAMES_PER_SECOND = 240;
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetLow;

    // Retrieve the back camera
    NSArray *devices = [AVCaptureDevice devices];
    AVCaptureDevice *captureDevice;
    for (AVCaptureDevice *device in devices)
    {
        if ([device hasMediaType:AVMediaTypeVideo])
        {
            if (device.position == AVCaptureDevicePositionBack)
            {
                captureDevice = device;
                break;
            }
        }
    }

    NSError *error;
    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
    [self.session addInput:input];
    if (error)
    {
        NSLog(@"%@", error);
    }

    // Find the max frame rate we can get from the given device
    AVCaptureDeviceFormat *currentFormat;
    for (AVCaptureDeviceFormat *format in captureDevice.formats)
    {
        NSArray *ranges = format.videoSupportedFrameRateRanges;
        AVFrameRateRange *frameRates = ranges[0];

        // Find the lowest resolution format at the frame rate we want.
        if (frameRates.maxFrameRate == FRAMES_PER_SECOND &&
            (!currentFormat ||
             (CMVideoFormatDescriptionGetDimensions(format.formatDescription).width < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width &&
              CMVideoFormatDescriptionGetDimensions(format.formatDescription).height < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height)))
        {
            currentFormat = format;
        }
    }

    // Tell the device to use the max frame rate.
    [captureDevice lockForConfiguration:nil];
    captureDevice.torchMode = AVCaptureTorchModeOn;
    captureDevice.activeFormat = currentFormat;
    captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
    captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
    [captureDevice setVideoZoomFactor:4];
    [captureDevice unlockForConfiguration];

    // Set the output
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

    // Create a queue to run the capture on
    dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

    // Set up our delegate
    [videoOutput setSampleBufferDelegate:self queue:captureQueue];

    // Configure the pixel format
    videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                                 nil];
    videoOutput.alwaysDiscardsLateVideoFrames = NO;
    [self.session addOutput:videoOutput];

    // Start the video session
    [self.session startRunning];
}

Try the GPUImage library. Every filter has a forceProcessingAtSize: method. After forcing a resize on the GPU, you can retrieve the data with GPUImageRawDataOutput.
I got 60fps while processing the image on the CPU with this method.
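A rough sketch of that pipeline follows; the session preset, target size, and pass-through filter here are my own assumptions rather than part of the original answer, so check the GPUImage headers for the exact signatures:

#import <GPUImage/GPUImage.h>

// Camera -> filter (forced to a small size) -> raw BGRA bytes readable on the CPU.
GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                                   cameraPosition:AVCaptureDevicePositionBack];
GPUImageFilter *filter = [[GPUImageFilter alloc] init];   // simple pass-through filter
[filter forceProcessingAtSize:CGSizeMake(192, 144)];      // downscale on the GPU before readback

GPUImageRawDataOutput *rawOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(192, 144)
                                                                 resultsInBGRAFormat:YES];
[camera addTarget:filter];
[filter addTarget:rawOutput];

__weak GPUImageRawDataOutput *weakOutput = rawOutput;
[rawOutput setNewFrameAvailableBlock:^{
    [weakOutput lockFramebufferForReading];
    GLubyte *bytes = [weakOutput rawBytesForImage];
    NSUInteger bytesPerRow = [weakOutput bytesPerRowInOutput];
    // Sample the downscaled pixels here (e.g. accumulate an average).
    [weakOutput unlockFramebufferAfterReading];
}];

[camera startCameraCapture];

Because the resize happens on the GPU, the CPU only ever touches the small 192x144 buffer, which is what makes the per-frame sampling cheap enough to keep up.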

Related

iOS get Camera orientation

On iOS, is it possible to get the camera orientation (in degrees) like on Android?
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info);
int degrees = info.orientation;
I tried
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *camera in devices) {
    AVCaptureDeviceInput *deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:nil];
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [session addInput:deviceInput];
    AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
    [session addOutput:output];
    AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeMetadata];
    NSLog(@"%d", (int)connection.videoOrientation);
}
but it returns "0" for all devices.
Thanks
AVCaptureConnection has a videoOrientation property.
You can also try using the device orientation if you need to transform the photo output somehow; there are instructions available on how to work with the device orientation if you need them.
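As a small sketch of an Android-style degrees lookup, assuming videoOutput is an AVCaptureVideoDataOutput already added to a running session; the degree values assigned to the two landscape cases are my own assumption, not something stated in the original answer:

// Read the orientation from a video connection and translate it to degrees.
AVCaptureConnection *connection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
int degrees = 0;
switch (connection.videoOrientation) {
    case AVCaptureVideoOrientationPortrait:           degrees = 0;   break;
    case AVCaptureVideoOrientationLandscapeRight:     degrees = 90;  break;
    case AVCaptureVideoOrientationPortraitUpsideDown: degrees = 180; break;
    case AVCaptureVideoOrientationLandscapeLeft:      degrees = 270; break;
}
NSLog(@"camera orientation: %d degrees", degrees);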

XCode Barcode Scanner Phonegap Auto Focus

I am developing a PhoneGap iOS application and I am using the ZXing barcode scanner library, but I have a problem:
How do I implement camera auto focus?
Thank you.
My code:
- (NSString *)setUpCaptureSession
{
    NSError *error = nil;
    AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
    self.captureSession = captureSession;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (!device) return @"unable to obtain video capture device";

    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) return @"unable to obtain video capture device input";

    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    if (!output) return @"unable to obtain video capture output";

    NSDictionary *videoOutputSettings = [NSDictionary
        dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                      forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    output.alwaysDiscardsLateVideoFrames = YES;
    output.videoSettings = videoOutputSettings;
    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    if (![captureSession canSetSessionPreset:AVCaptureSessionPresetMedium]) {
        return @"unable to preset medium quality video capture";
    }
    captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    if ([captureSession canAddInput:input]) {
        [captureSession addInput:input];
    }
    else {
        return @"unable to add video capture device input to session";
    }

    if ([captureSession canAddOutput:output]) {
        [captureSession addOutput:output];
    }
    else {
        return @"unable to add video capture output to session";
    }

    // setup capture preview layer
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];

    // run [captureSession startRunning] on the next event loop pass
    [captureSession performSelector:@selector(startRunning) withObject:nil afterDelay:0];

    return nil;
}
Unfortunately, it appears that the plugin you are using doesn't expose the capture device directly. It does, however, expose the AVCaptureSession via the captureSession property. From this property you should be able to work backwards to get the AVCaptureDeviceInput, and from that the AVCaptureDevice:
AVCaptureSession *session = [zxing captureSession]; // assuming zxing is the variable holding a reference to your ZXing instance
NSArray *inputs = [session inputs];
AVCaptureDeviceInput *input = (AVCaptureDeviceInput *)inputs[0]; // obtain the first input
AVCaptureDevice *device = input.device;

NSError *error;
if ([device lockForConfiguration:&error])
{
    device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    [device unlockForConfiguration];
}
else
{
    // TODO Handle the device lock error
}
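One extra safeguard worth adding (not part of the original answer): not every capture device supports continuous autofocus, so it is safer to check before assigning the mode:

if ([device lockForConfiguration:&error])
{
    // Only set the focus mode if this device actually supports it.
    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
    {
        device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }
    [device unlockForConfiguration];
}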

Get Recording Video Raw frames

I am new to Objective-C and iOS. I want to record video through code, and at run time I need to get each frame as raw data for some processing. How can I achieve this? Any help is appreciated. Thanks in advance. Here is my code so far:
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self setupCaptureSession];
}
The viewDidAppear method:
- (void)viewDidAppear:(BOOL)animated
{
    if (!_bpickeropen)
    {
        _bpickeropen = true;
        _picker = [[UIImagePickerController alloc] init];
        _picker.delegate = self;

        NSArray *sourceTypes = [UIImagePickerController availableMediaTypesForSourceType:_picker.sourceType];
        if (![sourceTypes containsObject:(NSString *)kUTTypeMovie])
        {
            NSLog(@"device not supported");
            return;
        }

        _picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        _picker.mediaTypes = [NSArray arrayWithObjects:(NSString *)kUTTypeMovie, nil]; //,(NSString *)kUTTypeImage
        _picker.videoQuality = UIImagePickerControllerQualityTypeHigh;

        [self presentModalViewController:_picker animated:YES];
    }
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];

    // PROBLEMS:
    // 1. Here I am getting the raw bytes only once.
    // 2. After that I want to store these raw bytes as a binary file in the app path.

    // Do whatever with your bytes
    NSLog(@"bytes per row %zd", bytesPerRow);
    [dataForRawBytes writeToFile:[self datafilepath] atomically:YES];
    NSLog(@"Sample Buffer Data is %@\n", dataForRawBytes);

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
Here I am setting the delegate of the output.
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input)
    {
        // Handling the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    // output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    // [self setSession:session];
}
I appreciate any help. Thanks in advance.
You could look into the AVFoundation framework. It gives you access to the raw data generated by the camera.
This link is a good intro-level project for AVFoundation video camera usage.
In order to get individual frames from the video output, you can use the AVCaptureVideoDataOutput class from the AVFoundation framework.
Hope this helps.
EDIT: Look at the delegate methods of AVCaptureVideoDataOutputSampleBufferDelegate, in particular captureOutput:didOutputSampleBuffer:fromConnection:. This method is called every time a new frame is captured.
If you do not know how delegates work, this link is a good example of delegates.
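To address the two problems marked in the question (the file being overwritten on every frame), one approach is to append each frame's NSData to a single file instead of calling writeToFile:atomically: repeatedly. This is only a sketch under my own assumptions: _frameFileHandle is a hypothetical NSFileHandle ivar, and the file path is arbitrary.

// Append one frame's raw bytes to a growing binary file.
- (void)appendFrameData:(NSData *)frameData
{
    if (!_frameFileHandle)
    {
        NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"frames.raw"];
        [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
        _frameFileHandle = [NSFileHandle fileHandleForWritingAtPath:path];
    }
    [_frameFileHandle seekToEndOfFile];
    [_frameFileHandle writeData:frameData];
}

Calling this from captureOutput:didOutputSampleBuffer:fromConnection: with dataForRawBytes should leave you with one file containing every captured frame back to back.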

iOS app Video Capture Slows down in low light

I have an iOS app that is using the front camera of the phone and setting up an AVCaptureSession to read through the incoming camera data. I set up a simple frame counter to check the speed of the incoming data, and to my surprise, when the camera is in low light the frame rate (measured using the imagecount variable in the code) is very slow, but as soon as I move the phone into a brightly lit area the frame rate almost triples. I would like to keep the image-processing frame rate high throughout and have set the minFrameDuration variable to 30 fps, but that didn't help. Any ideas why this behaviour occurs?
Code to create the capture session is below:
#pragma mark Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    //AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSArray *devices = [AVCaptureDevice devices];
    AVCaptureDevice *frontCamera;
    AVCaptureDevice *backCamera;
    for (AVCaptureDevice *device in devices) {
        if ([device hasMediaType:AVMediaTypeVideo]) {
            if ([device position] == AVCaptureDevicePositionFront) {
                backCamera = device;
            }
            else {
                frontCamera = device;
            }
        }
    }

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera
                                                                        error:&error];
    if (!input) {
        // Handling the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 30 fps, set
    // minFrameDuration.
    output.minFrameDuration = CMTimeMake(1, 30);

    // Start the session running to start the flow of data
    [session startRunning];
}

#pragma mark Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // counter to track frame rate
    imagecount++;

    // display to help see the speed of images being processed in the iOS app
    NSString *recognized = [[NSString alloc] initWithFormat:@"IMG COUNT - %d", imagecount];
    [self performSelectorOnMainThread:@selector(debuggingText:) withObject:recognized waitUntilDone:YES];
}
When there is less light, the camera requires a longer exposure to get the same signal to noise ratio in each pixel. That is why you might expect the frame rate to drop in low light.
You are setting minFrameDuration to 1/30 s in an attempt to prevent long-exposure frames from slowing down the frame rate. However, you should be setting maxFrameDuration instead: your code as-is says the frame rate is no faster than 30 FPS, but it could be 10 FPS, or 1 FPS....
Also, the documentation says to bracket any changes to these parameters with lockForConfiguration: and unlockForConfiguration:, so it may be that your changes simply didn't take.
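A minimal sketch of that suggestion, assuming iOS 7+ and that device is the AVCaptureDevice feeding your session (on older iOS versions the equivalent videoMin/MaxFrameDuration properties live on the AVCaptureConnection instead):

// Cap the longest allowed frame duration so low light can't pull the rate below 30 fps,
// provided the active format actually supports 30 fps.
NSError *error = nil;
if ([device lockForConfiguration:&error])
{
    device.activeVideoMinFrameDuration = CMTimeMake(1, 30); // no faster than 30 fps
    device.activeVideoMaxFrameDuration = CMTimeMake(1, 30); // and no slower than 30 fps
    [device unlockForConfiguration];
}
else
{
    NSLog(@"Could not lock device for configuration: %@", error);
}

Note that this trades brightness for frame rate: in very low light the frames will simply come out darker rather than arriving more slowly.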

I want to throttle video capture frame rate in AVCapture framework

I am trying to throttle my video capture framerate for my application, as I have found that it is impacting VoiceOver performance.
At the moment, it captures frames from the video camera, and then processes the frames using OpenGL routines as quickly as possible. I would like to set a specific framerate in the capture process.
I was expecting to be able to do this by using videoMinFrameDuration or minFrameDuration, but this seems to make no difference to performance. Any ideas?
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
    if ([device position] == AVCaptureDevicePositionBack)
    {
        backFacingCamera = device;
        // SET SOME OTHER PROPERTIES
    }
}

// Create the capture session
captureSession = [[AVCaptureSession alloc] init];

// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];

// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

// Start capturing
if ([backFacingCamera supportsAVCaptureSessionPreset:AVCaptureSessionPreset1920x1080])
{
    [captureSession setSessionPreset:AVCaptureSessionPreset1920x1080];
    captureDeviceWidth = 1920;
    captureDeviceHeight = 1080;
#if defined(VA_DEBUG)
    NSLog(@"Video AVCaptureSessionPreset1920x1080");
#endif
}
// else: fall back to a lower-resolution preset

// If you wish to cap the frame rate to a known value, such as 15 fps, set
// minFrameDuration.
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.supportsVideoMinFrameDuration)
    conn.videoMinFrameDuration = CMTimeMake(1, 2);
else
    videoOutput.minFrameDuration = CMTimeMake(1, 2);

if ([captureSession canAddInput:videoInput])
    [captureSession addInput:videoInput];
if ([captureSession canAddOutput:videoOutput])
    [captureSession addOutput:videoOutput];

if (![captureSession isRunning])
    [captureSession startRunning];
Any ideas? Am I missing something? Is this the best way to throttle?
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.supportsVideoMinFrameDuration)
    conn.videoMinFrameDuration = CMTimeMake(1, 2);
else
    videoOutput.minFrameDuration = CMTimeMake(1, 2);
Mike Ullrich's answer worked up until iOS 7. Unfortunately, these two methods are deprecated in iOS 7. You have to set activeVideo{Min|Max}FrameDuration on the AVCaptureDevice itself, something like:
int fps = 30; // Change this value
AVCaptureDevice *device = ...; // Get the active capture device
[device lockForConfiguration:nil];
[device setActiveVideoMinFrameDuration:CMTimeMake(1, fps)];
[device setActiveVideoMaxFrameDuration:CMTimeMake(1, fps)];
[device unlockForConfiguration];
It turns out you need to set both videoMinFrameDuration and videoMaxFrameDuration for either one to work, e.g.:
[conn setVideoMinFrameDuration:CMTimeMake(1,1)];
[conn setVideoMaxFrameDuration:CMTimeMake(1,1)];
