AVCapture capturing and getting framebuffer at 60 fps in iOS 7

I'm developing an app that requires capturing the framebuffer at as high a frame rate as possible. I've already figured out how to force the iPhone to capture at 60 fps, but the
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
method is being called only 15 times a second, which means the iPhone downgrades the capture output to 15 fps.
Has anybody faced such a problem? Is there any way to increase the capture frame rate?
Update: here is my code:
camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if([camera isTorchModeSupported:AVCaptureTorchModeOn]) {
[camera lockForConfiguration:nil];
camera.torchMode=AVCaptureTorchModeOn;
[camera unlockForConfiguration];
}
[self configureCameraForHighestFrameRate:camera];
// Create a AVCaptureInput with the camera device
NSError *error=nil;
AVCaptureInput* cameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
if (cameraInput == nil) {
NSLog(#"Error to create camera capture:%#",error);
}
// Set the output
AVCaptureVideoDataOutput* videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// create a queue to run the capture on
dispatch_queue_t captureQueue=dispatch_queue_create("captureQueue", NULL);
// setup our delegate
[videoOutput setSampleBufferDelegate:self queue:captureQueue];
// configure the pixel format
videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
nil];
// Add the input and output
[captureSession addInput:cameraInput];
[captureSession addOutput:videoOutput];
I took the configureCameraForHighestFrameRate: method from here: https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVCaptureDevice_Class/Reference/Reference.html
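For reference, the helper on that page looks roughly like this (a sketch of Apple's sample: it just picks the device format with the highest supported frame rate):
- (void)configureCameraForHighestFrameRate:(AVCaptureDevice *)device
{
    AVCaptureDeviceFormat *bestFormat = nil;
    AVFrameRateRange *bestFrameRateRange = nil;
    // walk every format and remember the one with the highest max frame rate
    for (AVCaptureDeviceFormat *format in [device formats]) {
        for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
            if (range.maxFrameRate > bestFrameRateRange.maxFrameRate) {
                bestFormat = format;
                bestFrameRateRange = range;
            }
        }
    }
    if (bestFormat && [device lockForConfiguration:NULL]) {
        device.activeFormat = bestFormat;
        device.activeVideoMinFrameDuration = bestFrameRateRange.minFrameDuration;
        device.activeVideoMaxFrameDuration = bestFrameRateRange.minFrameDuration;
        [device unlockForConfiguration];
    }
}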

I am getting samples at 60 fps on the iPhone 5 and 120 fps on the iPhone 5s, both when doing real-time motion detection in captureOutput and when saving the frames to a video using AVAssetWriter.
You have to set the AVCaptureSession to a format that supports 60 fps:
AVsession = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *capInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if (capInput) [AVsession addInput:capInput];
for(AVCaptureDeviceFormat *vFormat in [videoDevice formats] )
{
CMFormatDescriptionRef description= vFormat.formatDescription;
float maxrate=((AVFrameRateRange*)[vFormat.videoSupportedFrameRateRanges objectAtIndex:0]).maxFrameRate;
if(maxrate>59 && CMFormatDescriptionGetMediaSubType(description)==kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
{
if ( YES == [videoDevice lockForConfiguration:NULL] )
{
videoDevice.activeFormat = vFormat;
[videoDevice setActiveVideoMinFrameDuration:CMTimeMake(10,600)];
[videoDevice setActiveVideoMaxFrameDuration:CMTimeMake(10,600)];
[videoDevice unlockForConfiguration];
NSLog(#"formats %# %# %#",vFormat.mediaType,vFormat.formatDescription,vFormat.videoSupportedFrameRateRanges);
}
}
}
prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: AVsession];
prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer: prevLayer];
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t videoQueue = dispatch_queue_create("videoQueue", NULL);
[videoOut setSampleBufferDelegate:self queue:videoQueue];
videoOut.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)};
videoOut.alwaysDiscardsLateVideoFrames=YES;
if (videoOut)
{
[AVsession addOutput:videoOut];
videoConnection = [videoOut connectionWithMediaType:AVMediaTypeVideo];
}
Two other comments if you want to write to a file using AVAssetWriter. Don't use the pixel buffer adaptor; just add the samples with
[videoWriterInput appendSampleBuffer:sampleBuffer]
Secondly, when setting up the asset writer, use
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings
sourceFormatHint:formatDescription];
The sourceFormatHint makes a difference in writing speed.
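Roughly, the writer setup and the append call look like this (a sketch; videoWriter, videoSettings and formatDescription stand in for your own objects, with formatDescription typically taken from the first sample buffer):
// sketch: formatDescription would usually come from
// CMSampleBufferGetFormatDescription(firstSampleBuffer)
AVAssetWriterInput *videoWriterInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings
                                     sourceFormatHint:formatDescription];
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];

// then, inside captureOutput:didOutputSampleBuffer:fromConnection:
if (videoWriterInput.isReadyForMoreMediaData) {
    [videoWriterInput appendSampleBuffer:sampleBuffer];
}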

I have written the same function for Swift 2.0. I'm posting the code here for anyone who might need it:
// Set your desired frame rate
func setupCamera(maxFpsDesired: Double = 120) {
var captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
do{ let input = try AVCaptureDeviceInput(device: backCamera)
captureSession.addInput(input) }
catch { print("Error: can't access camera")
return
}
do {
var finalFormat = AVCaptureDeviceFormat()
var maxFps: Double = 0
for vFormat in backCamera!.formats {
var ranges = vFormat.videoSupportedFrameRateRanges as! [AVFrameRateRange]
let frameRates = ranges[0]
/*
"frameRates.maxFrameRate >= maxFps" select the video format
desired with the highest resolution available, because
the camera formats are ordered; else
"frameRates.maxFrameRate > maxFps" select the first
format available with the desired fps
*/
if frameRates.maxFrameRate >= maxFps && frameRates.maxFrameRate <= maxFpsDesired {
maxFps = frameRates.maxFrameRate
finalFormat = vFormat as! AVCaptureDeviceFormat
}
}
if maxFps != 0 {
let timeValue = Int64(1200.0 / maxFps)
let timeScale: Int32 = 1200
try backCamera!.lockForConfiguration()
backCamera!.activeFormat = finalFormat
backCamera!.activeVideoMinFrameDuration = CMTimeMake(timeValue, timeScale)
backCamera!.activeVideoMaxFrameDuration = CMTimeMake(timeValue, timeScale)
backCamera!.focusMode = AVCaptureFocusMode.AutoFocus
backCamera!.unlockForConfiguration()
}
}
catch {
print("Something was wrong")
}
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.alwaysDiscardsLateVideoFrames = true
videoOutput.videoSettings = NSDictionary(object: Int(kCVPixelFormatType_32BGRA),
forKey: kCVPixelBufferPixelFormatTypeKey as String) as [NSObject : AnyObject]
videoOutput.setSampleBufferDelegate(self, queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))
if captureSession.canAddOutput(videoOutput){
captureSession.addOutput(videoOutput) }
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.transform = CATransform3DMakeRotation(-1.5708, 0, 0, 1);
previewLayer.frame = self.view.bounds
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
self.view.layer.addSublayer(previewLayer)
captureSession.startRunning()
}

Had the same problem. I fixed it by calling this function after [AVCaptureSession addInput:cameraDeviceInput]. Somehow I could not change the frame rate on my iPad Pro before the capture session was started, so I change the video format only after the device has been added to the capture session.
- (void)switchFormatWithDesiredFPS:(CGFloat)desiredFPS
{
BOOL isRunning = _captureSession.isRunning;
if (isRunning) [_captureSession stopRunning];
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceFormat *selectedFormat = nil;
int32_t maxWidth = 0;
AVFrameRateRange *frameRateRange = nil;
for (AVCaptureDeviceFormat *format in [videoDevice formats]) {
for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
CMFormatDescriptionRef desc = format.formatDescription;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(desc);
int32_t width = dimensions.width;
if (range.minFrameRate <= desiredFPS && desiredFPS <= range.maxFrameRate && width >= maxWidth) {
selectedFormat = format;
frameRateRange = range;
maxWidth = width;
}
}
}
if (selectedFormat) {
if ([videoDevice lockForConfiguration:nil]) {
NSLog(#"selected format:%#", selectedFormat);
videoDevice.activeFormat = selectedFormat;
videoDevice.activeVideoMinFrameDuration = CMTimeMake(1, (int32_t)desiredFPS);
videoDevice.activeVideoMaxFrameDuration = CMTimeMake(1, (int32_t)desiredFPS);
[videoDevice unlockForConfiguration];
}
}
if (isRunning) [_captureSession startRunning];
}
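Usage is then simply (cameraDeviceInput standing in for whatever AVCaptureDeviceInput you created):
[_captureSession addInput:cameraDeviceInput];
// change the format only after the input is attached; doing it earlier had no effect on my iPad Pro
[self switchFormatWithDesiredFPS:60.0];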

Related

AVCaptureSession: get same settings as built-in iPhone camera

I've been stuck on this for quite some time. There appears to be a zoom factor when I record with the built-in camera app on my iPhone. However, I cannot seem to get the same result when I use an AVCaptureSession inside my app. Here is an example of what I'm talking about: the top was recorded with the iPhone camera app, and the bottom was recorded inside my app using AVCaptureSession. It's like there is a fish-eye effect occurring. My code for setting up the camera is below.
- (void)configureCameraSession{
_captureSession = [AVCaptureSession new];
[_captureSession beginConfiguration];
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if(!_device){
[self noValidCamera];
return;
}
AVCaptureDeviceFormat *currentFormat;
int frameRate = 60;
for (AVCaptureDeviceFormat *format in _device.formats)
{
NSArray *ranges = format.videoSupportedFrameRateRanges;
AVFrameRateRange *frameRates = ranges[0];
//CMVideoFormatDescriptionGet
int resolutionWidth = CMVideoFormatDescriptionGetDimensions(format.formatDescription).width;
if(frameRates.maxFrameRate > 59 && resolutionWidth >= 1920){
CameraFormat *cformat = [[CameraFormat alloc] init];
cformat.format = format;
cformat.fps = 60;
if(frameRates.maxFrameRate > 119){
cformat.fps = 120;
}
if(frameRates.maxFrameRate > 239){
cformat.fps = 240;
}
NSString *resolution;
if(resolutionWidth > 1920){
resolution = #"4K ";
}
else{
resolution = [[NSNumber numberWithInt:CMVideoFormatDescriptionGetDimensions(format.formatDescription).height] stringValue];
resolution = [resolution stringByAppendingString:#"p "];
}
NSString *fps = [[NSNumber numberWithInt:cformat.fps] stringValue];
fps = [fps stringByAppendingString:#" FPS"];
cformat.label = [resolution stringByAppendingString:fps];
BOOL isUniqueFormat = YES;
for(int i = 0; i < [_formatList count]; i++){
if([_formatList[i].label isEqualToString:cformat.label]){
isUniqueFormat = NO;
break;
}
}
if(isUniqueFormat){
[_formatList addObject:cformat];
frameRate = cformat.fps;
currentFormat = cformat.format;
}
}
}
if(!currentFormat){
[self noValidCamera];
return;
}
_currentCameraFormat = _analysisViewController.currentCameraFormat;
if(_currentCameraFormat.fps == 0){
_currentCameraFormatIndex = 0;
_currentCameraFormat = _formatList[_currentCameraFormatIndex];
_analysisViewController.currentCameraFormat = _currentCameraFormat;
currentFormat = _currentCameraFormat.format;
frameRate = _currentCameraFormat.fps;
}
else{
currentFormat = _currentCameraFormat.format;
frameRate = _currentCameraFormat.fps;
}
NSString *resolution;
if(CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width > 1920){
resolution = #"4K ";
}
else{
resolution = [[NSNumber numberWithInt:CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height] stringValue];
resolution = [resolution stringByAppendingString:#"p "];
}
NSString *fps = [[NSNumber numberWithInt:frameRate] stringValue];
fps = [fps stringByAppendingString:#" FPS ▼"];
[self.videoLabelButton setTitle:[resolution stringByAppendingString:fps] forState:UIControlStateNormal];
[_device lockForConfiguration:nil];
_device.activeFormat = currentFormat;
_device.activeVideoMinFrameDuration = CMTimeMake(1, frameRate);
_device.activeVideoMaxFrameDuration = CMTimeMake(1, frameRate);
[_device unlockForConfiguration];
//Input
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:nil];
[_captureSession addInput:input];
//Output
_movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
if([_captureSession canAddOutput:_movieFileOutput]){
[_captureSession addOutput:_movieFileOutput];
}
[self setMovieOutputOrientation];
//Preview Layer
_previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
_previewView = [[UIView alloc] initWithFrame:self.imageView.bounds];
[self addFocusToView:_previewView];
[_previewView addSubview:self.imageView];
_previewLayer.frame = _previewView.bounds;
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[_previewView.layer addSublayer:_previewLayer];
_previewLayer.connection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
AVCaptureVideoStabilizationMode stabilizationMode = AVCaptureVideoStabilizationModeCinematic;
if ([_device.activeFormat isVideoStabilizationModeSupported:stabilizationMode]) {
[_previewLayer.connection setPreferredVideoStabilizationMode:stabilizationMode];
}
[self.view addSubview:_previewView];
[self.view addSubview:self.topInfoBar];
[self.view addSubview:self.recordButton];
[self.view addSubview:self.doneButton];
[_captureSession commitConfiguration];
[_captureSession startRunning];
}
Okay, so I finally figured it out. The issue is that some formats support video stabilization, and the built-in camera app uses it whenever it can. The problem was that I was not turning it on when I could. Basically, check whether stabilization is available for the device's active format and, if so, set it to Auto:
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [_movieFileOutput connections]){
for ( AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
}
}
}
if([videoConnection isVideoOrientationSupported]){
[videoConnection setVideoOrientation:[self videoOrientationFromCurrentDeviceOrientation]];
}
//Check if we can use video stabilization and if so then set it to auto
if (videoConnection.supportsVideoStabilization) {
videoConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
}

How to correctly start a camera session using AVCaptureSession/AVCapture

I want to make an iOS app in Objective-C. Right now I'm stuck on hooking the preview layer up to the AVCapture preview output. Could someone please tell me how to successfully start an image capture session using an AVCapture camera session in iOS Objective-C? Any help is much appreciated. Thank you.
Here is an answer for AVCaptureSession:
-(void)capture
{
NSError *error=nil;
//Capture Session
AVCaptureSession *session = [[AVCaptureSession alloc]init];
session.sessionPreset = AVCaptureSessionPresetPhoto;
//Add device
AVCaptureDevice *inputDevice = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for(AVCaptureDevice *camera in devices)
{
if([camera position] == AVCaptureDevicePositionBack) // is Back camera
{
inputDevice = camera;
break;
}
}
// wrap the device in an AVCaptureDeviceInput before adding it to the session
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if (input) [session addInput:input];
//Output
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
//Preview Layer
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
previewLayer.frame = viewForCamera.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[viewForCamera.layer addSublayer:previewLayer];
//Start capture session
[session startRunning];
}
Try this code to get the camera ID:
NSString *cameraID = nil;
NSArray *captureDeviceType = @[AVCaptureDeviceTypeBuiltInWideAngleCamera];
AVCaptureDeviceDiscoverySession *captureDevice =
[AVCaptureDeviceDiscoverySession
discoverySessionWithDeviceTypes:captureDeviceType
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionUnspecified];
cameraID = [captureDevice.devices.lastObject localizedName];

iOS: switching between front and back camera changes resolution with AVCaptureSessionPresetHigh

I'm using the GPUImage lib. My front camera session preset is AVCaptureSessionPresetPhoto, and the back camera's is AVCaptureSessionPresetHigh:
if (self.isFrontFacingCameraPresent) {
[self setCaptureSessionPreset: AVCaptureSessionPresetHigh];
} else {
[self setCaptureSessionPreset:AVCaptureSessionPresetPhoto];
}
[self rotateCamera];
Initially the front camera is used and the resolution is 1280x960.
After switching to the back camera, the resolution is 1920x1080.
After switching back to the front camera, the resolution is 1280x720, which is very strange.
I checked this delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
and fetched the width and height:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
int bufferWidth = (int) CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = (int) CVPixelBufferGetHeight(cameraFrame);
The bufferHeight is 720. I don't know why, after switching away from and back to the front camera, the height changed from 960 to 720. Maybe it's an Apple bug?
I solved the issue by changing the rotateCamera function: I rewrote the function used to switch the camera between front and back:
- (void)switchCameraFrontAndBack {
NSError *error;
AVCaptureDeviceInput *newVideoInput;
AVCaptureDevicePosition currentCameraPosition = self.cameraPosition;
if (currentCameraPosition == AVCaptureDevicePositionBack)
{
currentCameraPosition = AVCaptureDevicePositionFront;
}
else
{
currentCameraPosition = AVCaptureDevicePositionBack;
}
AVCaptureDevice *backFacingCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
if ([device position] == currentCameraPosition)
{
backFacingCamera = device;
}
}
newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error];
if (newVideoInput != nil)
{
[_captureSession beginConfiguration];
[_captureSession removeInput:videoInput];
[self configSessionPreset:currentCameraPosition];
if ([_captureSession canAddInput:newVideoInput])
{
[_captureSession addInput:newVideoInput];
videoInput = newVideoInput;
}
else
{
[_captureSession addInput:videoInput];
}
[_captureSession commitConfiguration];
}
_inputCamera = backFacingCamera;
[self setOutputImageOrientation:self.outputImageOrientation];
}
- (void)configSessionPreset:(AVCaptureDevicePosition)currentPosition {
if (currentPosition == AVCaptureDevicePositionBack) {
if (WIDTH <= Iphone4SWidth) {
if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
[self setCaptureSessionPreset:AVCaptureSessionPreset1280x720];
} else if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset1920x1080]) {
[self setCaptureSessionPreset:AVCaptureSessionPreset1920x1080];
}
} else {
if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset1920x1080]) {
[self setCaptureSessionPreset:AVCaptureSessionPreset1920x1080];
} else {
[self setCaptureSessionPreset: AVCaptureSessionPresetHigh];
}
}
} else {
[self setCaptureSessionPreset:AVCaptureSessionPresetPhoto];
}
}
As for why the bufferHeight changed from 960 to 720: when you use AVCaptureSessionPresetHigh, the actual resolution differs from camera to camera; the front and the back are different, and each gets the highest resolution that camera supports. I guess you used an iPhone 5.
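If you want to check what the preset actually resolved to on each camera, you can log the active format's dimensions (device standing in for the AVCaptureDevice currently in use):
// print the resolution AVCaptureSessionPresetHigh picked for this camera
CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription);
NSLog(@"active format: %d x %d", dims.width, dims.height);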

How to Record Video Using Front and Back Camera and still keep recording?

I'm using AVFoundation. I want to record video using both (front and back) cameras. I record video with one camera, but when I switch the camera mode from back to front, the camera freezes. Is it possible to record video continuously while switching between both cameras?
Sample Code:
- (void) startup
{
if (_session == nil)
{
NSLog(#"Starting up server");
self.isCapturing = NO;
self.isPaused = NO;
_currentFile = 0;
_discont = NO;
// create capture device with video input
_session = [[AVCaptureSession alloc] init];
AVCaptureDevice *backCamera = [self frontCamera];
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:nil];
[_session addInput:input];
// audio input from default mic
AVCaptureDevice* mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput* micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
[_session addInput:micinput];
// create an output for YUV output with self as delegate
_captureQueue = dispatch_queue_create("com.softcraftsystems.comss", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput* videoout = [[AVCaptureVideoDataOutput alloc] init];
[videoout setSampleBufferDelegate:self queue:_captureQueue];
NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
nil];
videoout.videoSettings = setcapSettings;
[_session addOutput:videoout];
_videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
// find the actual dimensions used so we can set up the encoder to the same.
NSDictionary* actual = videoout.videoSettings;
_cy = [[actual objectForKey:@"Height"] integerValue];
_cx = [[actual objectForKey:@"Width"] integerValue];
AVCaptureAudioDataOutput* audioout = [[AVCaptureAudioDataOutput alloc] init];
[audioout setSampleBufferDelegate:self queue:_captureQueue];
[_session addOutput:audioout];
_audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];
// for audio, we want the channels and sample rate, but we can't get those from audioout.audiosettings on ios, so
// we need to wait for the first sample
// start capture and a preview layer
[_session startRunning];
_preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
}
}
- (AVCaptureDevice *)frontCamera
{
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionFront) {
return device;
}
}
return nil;
}
- (AVCaptureDevice *)backCamera
{
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionBack) {
return device;
}
}
return nil;
}
- (void) startupFront
{
_session = nil;
[_session stopRunning];
if (_session == nil)
{
NSLog(#"Starting up server");
self.isCapturing = NO;
self.isPaused = NO;
_currentFile = 0;
_discont = NO;
// create capture device with video input
_session = [[AVCaptureSession alloc] init];
AVCaptureDevice *backCamera = [self backCamera];
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:nil];
[_session addInput:input];
// audio input from default mic
AVCaptureDevice* mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput* micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
[_session addInput:micinput];
// create an output for YUV output with self as delegate
_captureQueue = dispatch_queue_create("com.softcraftsystems.comss", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput* videoout = [[AVCaptureVideoDataOutput alloc] init];
[videoout setSampleBufferDelegate:self queue:_captureQueue];
NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
nil];
videoout.videoSettings = setcapSettings;
[_session addOutput:videoout];
_videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
// find the actual dimensions used so we can set up the encoder to the same.
NSDictionary* actual = videoout.videoSettings;
_cy = [[actual objectForKey:@"Height"] integerValue];
_cx = [[actual objectForKey:@"Width"] integerValue];
AVCaptureAudioDataOutput* audioout = [[AVCaptureAudioDataOutput alloc] init];
[audioout setSampleBufferDelegate:self queue:_captureQueue];
[_session addOutput:audioout];
_audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];
// for audio, we want the channels and sample rate, but we can't get those from audioout.audiosettings on ios, so
// we need to wait for the first sample
// start capture and a preview layer
[_session startRunning];
_preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
}
}
- (void) startCapture
{
@synchronized(self)
{
if (!self.isCapturing)
{
NSLog(#"starting capture");
// create the encoder once we have the audio params
_encoder = nil;
self.isPaused = NO;
_discont = NO;
_timeOffset = CMTimeMake(0, 0);
self.isCapturing = YES;
}
}
}
- (void) stopCapture
{
@synchronized(self)
{
if (self.isCapturing)
{
NSString* filename = [NSString stringWithFormat:@"capture%d.mp4", _currentFile];
NSString* path = [NSTemporaryDirectory() stringByAppendingPathComponent:filename];
NSURL* url = [NSURL fileURLWithPath:path];
_currentFile++;
// serialize with audio and video capture
self.isCapturing = NO;
dispatch_async(_captureQueue, ^{
[_encoder finishWithCompletionHandler:^{
self.isCapturing = NO;
_encoder = nil;
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeVideoAtPathToSavedPhotosAlbum:url completionBlock:^(NSURL *assetURL, NSError *error){
NSLog(#"save completed");
[[NSFileManager defaultManager] removeItemAtPath:path error:nil];
}];
}];
});
}
}
}
- (void) pauseCapture
{
@synchronized(self)
{
if (self.isCapturing)
{
NSLog(#"Pausing capture");
self.isPaused = YES;
_discont = YES;
}
}
}
- (void) resumeCapture
{
@synchronized(self)
{
if (self.isPaused)
{
NSLog(#"Resuming capture");
self.isPaused = NO;
}
}
}
- (CMSampleBufferRef) adjustTime:(CMSampleBufferRef) sample by:(CMTime) offset
{
CMItemCount count;
CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
CMSampleTimingInfo* pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
for (CMItemCount i = 0; i < count; i++)
{
pInfo[i].decodeTimeStamp = CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
}
CMSampleBufferRef sout;
CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
free(pInfo);
return sout;
}
- (void) setAudioFormat:(CMFormatDescriptionRef) fmt
{
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
_samplerate = asbd->mSampleRate;
_channels = asbd->mChannelsPerFrame;
}
- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
BOOL bVideo = YES;
@synchronized(self)
{
if (!self.isCapturing || self.isPaused)
{
return;
}
if (connection != _videoConnection)
{
bVideo = NO;
}
if ((_encoder == nil) && !bVideo)
{
CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
[self setAudioFormat:fmt];
NSString* filename = [NSString stringWithFormat:@"capture%d.mp4", _currentFile];
NSString* path = [NSTemporaryDirectory() stringByAppendingPathComponent:filename];
_encoder = [VideoEncoder encoderForPath:path Height:_cy width:_cx channels:_channels samples:_samplerate];
}
if (_discont)
{
if (bVideo)
{
return;
}
_discont = NO;
// calc adjustment
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime last = bVideo ? _lastVideo : _lastAudio;
if (last.flags & kCMTimeFlags_Valid)
{
if (_timeOffset.flags & kCMTimeFlags_Valid)
{
pts = CMTimeSubtract(pts, _timeOffset);
}
CMTime offset = CMTimeSubtract(pts, last);
NSLog(#"Setting offset from %s", bVideo?"video": "audio");
NSLog(#"Adding %f to %f (pts %f)", ((double)offset.value)/offset.timescale, ((double)_timeOffset.value)/_timeOffset.timescale, ((double)pts.value/pts.timescale));
// this stops us having to set a scale for _timeOffset before we see the first video time
if (_timeOffset.value == 0)
{
_timeOffset = offset;
}
else
{
_timeOffset = CMTimeAdd(_timeOffset, offset);
}
}
_lastVideo.flags = 0;
_lastAudio.flags = 0;
}
// retain so that we can release either this or modified one
CFRetain(sampleBuffer);
if (_timeOffset.value > 0)
{
CFRelease(sampleBuffer);
sampleBuffer = [self adjustTime:sampleBuffer by:_timeOffset];
}
// record most recent time so we know the length of the pause
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime dur = CMSampleBufferGetDuration(sampleBuffer);
if (dur.value > 0)
{
pts = CMTimeAdd(pts, dur);
}
if (bVideo)
{
_lastVideo = pts;
}
else
{
_lastAudio = pts;
}
}
// pass frame to encoder
[_encoder encodeFrame:sampleBuffer isVideo:bVideo];
CFRelease(sampleBuffer);
}
- (void) shutdown
{
NSLog(#"shutting down server");
if (_session)
{
[_session stopRunning];
_session = nil;
}
[_encoder finishWithCompletionHandler:^{
NSLog(#"Capture completed");
}];
}
In my opinion it is not possible to continue recording while switching the camera, because there are resolution and quality differences between the two cameras, and a video can have only one resolution and quality throughout. Secondly, every time you switch cameras, the camera has to be allocated and initialized again. So unfortunately, as far as I can tell, it's not possible, but if you find a solution, please do tell me.

iOS7 AVCapture captureOutput never gets called

Please understand that I cannot upload the whole code here.
I have
@interface BcsProcessor : NSObject <AVCaptureMetadataOutputObjectsDelegate> {}
and BcsProcessor has setUpCaptureSession and captureOutput methods:
- (void)captureOutput:(AVCaptureOutput*)captureOutput didOutputMetadataObjects:(NSArray*)metadataObjects fromConnection:(AVCaptureConnection*)connection
- (NSString*)setUpCaptureSession {
NSError* error = nil;
AVCaptureSession* captureSession = [[[AVCaptureSession alloc] init] autorelease];
self.captureSession = captureSession;
AVCaptureDevice* __block device = nil;
if (self.isFrontCamera) {
NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
[devices enumerateObjectsUsingBlock:^(AVCaptureDevice *obj, NSUInteger idx, BOOL *stop) {
if (obj.position == AVCaptureDevicePositionFront) {
device = obj;
}
}];
} else {
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
AVCaptureMetadataOutput* output = [[[AVCaptureMetadataOutput alloc] init] autorelease];
output.metadataObjectTypes = output.availableMetadataObjectTypes;
dispatch_queue_t outputQueue = dispatch_queue_create("com.1337labz.featurebuild.metadata", 0);
[output setMetadataObjectsDelegate:self queue:outputQueue];
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
if ([captureSession canAddInput:input]) {
[captureSession addInput:input];
}
if ([captureSession canAddOutput:output]) {
[captureSession addOutput:output];
}
// setup capture preview layer
self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
// run [captureSession startRunning] on the next event-loop pass
[captureSession performSelector:@selector(startRunning) withObject:nil afterDelay:0];
return nil;
}
So the code above sets up the session and adds an AVCaptureMetadataOutput, and BcsProcessor is supposed to receive the captured metadata, but my captureOutput method never receives any data or even gets called.
I'll appreciate any help or comments.
First, make sure your input and output are correctly added to the session. You can check by logging captureSession.inputs and captureSession.outputs.
Second, make sure output.metadataObjectTypes is correctly set up, meaning availableMetadataObjectTypes is not empty. I believe it will be empty if you read it before adding the output to the session.
And finally, I don't see you adding the preview layer to the view's layer. Try this after you init your layer with the session:
self.previewLayer.frame = self.view.layer.bounds;
[self.view.layer addSublayer:self.previewLayer];
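As a quick sanity check, you could also log the session state right after adding the input and output (plain logging, using the captureSession and output variables from your method):
NSLog(@"inputs: %@", captureSession.inputs);
NSLog(@"outputs: %@", captureSession.outputs);
// should be non-empty once the output has been added to the session
NSLog(@"available metadata types: %@", output.availableMetadataObjectTypes);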
