48 fps causes SIGABRT on activeVideoMinFrameDuration update - iOS

Hi, I am using PBJVision to do some video processing on iOS. When I set the frame rate to double the default (24 fps -> 48 fps), I get the following error:
2015-10-16 15:02:48.929 RecordGram[2425:515618] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCaptureVideoDevice setActiveVideoMinFrameDuration:] - the passed activeFrameDuration 1:48 is not supported by the device. Use -activeFormat.videoSupportedFrameRateRanges to discover valid ranges.'
Here is the method from PBJVision where the error occurs; I have marked the line that throws. Note that the method supportsVideoFrameRate: returns YES.
// framerate
- (void)setVideoFrameRate:(NSInteger)videoFrameRate
{
if (![self supportsVideoFrameRate:videoFrameRate]) {
DLog(#"frame rate range not supported for current device format");
return;
}
BOOL isRecording = _flags.recording;
if (isRecording) {
[self pauseVideoCapture];
}
CMTime fps = CMTimeMake(1, (int32_t)videoFrameRate);
if (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_6_1) {
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceFormat *supportingFormat = nil;
int32_t maxWidth = 0;
NSArray *formats = [videoDevice formats];
for (AVCaptureDeviceFormat *format in formats) {
NSArray *videoSupportedFrameRateRanges = format.videoSupportedFrameRateRanges;
for (AVFrameRateRange *range in videoSupportedFrameRateRanges) {
CMFormatDescriptionRef desc = format.formatDescription;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(desc);
int32_t width = dimensions.width;
if (range.minFrameRate <= videoFrameRate && videoFrameRate <= range.maxFrameRate && width >= maxWidth) {
supportingFormat = format;
maxWidth = width;
}
}
}
if (supportingFormat) {
NSError *error = nil;
if ([_currentDevice lockForConfiguration:&error]) {
_currentDevice.activeVideoMinFrameDuration = fps; // <<<<<< ERROR IS THROWN ON THIS LINE
_currentDevice.activeVideoMaxFrameDuration = fps;
_videoFrameRate = videoFrameRate;
[_currentDevice unlockForConfiguration];
} else if (error) {
DLog(#"error locking device for frame rate change (%#)", error);
}
}
[self _enqueueBlockOnMainQueue:^{
if ([_delegate respondsToSelector:@selector(visionDidChangeVideoFormatAndFrameRate:)])
[_delegate visionDidChangeVideoFormatAndFrameRate:self];
}];
} else {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
AVCaptureConnection *connection = [_currentOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.isVideoMaxFrameDurationSupported) {
connection.videoMaxFrameDuration = fps;
} else {
DLog(#"failed to set frame rate");
}
if (connection.isVideoMinFrameDurationSupported) {
connection.videoMinFrameDuration = fps;
_videoFrameRate = videoFrameRate;
} else {
DLog(#"failed to set frame rate");
}
[self _enqueueBlockOnMainQueue:^{
if ([_delegate respondsToSelector:@selector(visionDidChangeVideoFormatAndFrameRate:)])
[_delegate visionDidChangeVideoFormatAndFrameRate:self];
}];
#pragma clang diagnostic pop
}
if (isRecording) {
[self resumeVideoCapture];
}
}
Here is the supportsVideoFrameRate: method
- (BOOL)supportsVideoFrameRate:(NSInteger)videoFrameRate
{
if (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_6_1) {
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSArray *formats = [videoDevice formats];
for (AVCaptureDeviceFormat *format in formats) {
NSArray *videoSupportedFrameRateRanges = [format videoSupportedFrameRateRanges];
for (AVFrameRateRange *frameRateRange in videoSupportedFrameRateRanges) {
if ( (frameRateRange.minFrameRate <= videoFrameRate) && (videoFrameRate <= frameRateRange.maxFrameRate) ) {
return YES;
}
}
}
} else {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
AVCaptureConnection *connection = [_currentOutput connectionWithMediaType:AVMediaTypeVideo];
return (connection.isVideoMaxFrameDurationSupported && connection.isVideoMinFrameDurationSupported);
#pragma clang diagnostic pop
}
return NO;
}
Anybody have any ideas?

The problem here is that the PBJVision code is not checking the active device or its activeFormat; it checks all formats of the default device instead...
So for the front camera, 48 fps is simply not supported. A quick check:
RGLog(#"default video frame rate = %ld", (long)[vision videoFrameRate]);
bool supportsDouble = [vision supportsVideoFrameRate:2*[vision videoFrameRate]];
bool supportsQuad = [vision supportsVideoFrameRate:4*[vision videoFrameRate]];
bool supportsHalf = [vision supportsVideoFrameRate:[vision videoFrameRate]/2];
bool supportsQuarter = [vision supportsVideoFrameRate:[vision videoFrameRate]/4];
RGLog(#"x2 = %d, x4 = %d, /2 = %d, /4 = %d", supportsDouble, supportsQuad, supportsHalf, supportsQuarter);
which logs
2015-10-16 15:32:24.279 RecordGram[2436:519667] default video frame rate = 24
2015-10-16 15:32:24.280 RecordGram[2436:519667] x2 = 0, x4 = 0, /2 = 1, /4 = 1
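For reference, here is a small standalone sketch (my addition, not part of the original code) that dumps every format and frame-rate range the front camera actually supports, which is what the exception message tells you to consult:
// Enumerate the front camera's formats and their supported frame-rate ranges.
// Hypothetical diagnostic snippet; adjust the device lookup to match your setup.
AVCaptureDevice *frontCamera = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if (device.position == AVCaptureDevicePositionFront) {
        frontCamera = device;
        break;
    }
}
for (AVCaptureDeviceFormat *format in frontCamera.formats) {
    CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        NSLog(@"%dx%d supports %.0f-%.0f fps", dims.width, dims.height, range.minFrameRate, range.maxFrameRate);
    }
}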
After changing the supportsVideoFrameRate: method to look at _currentDevice instead of [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo], the check reflects the device that is actually in use:
- (BOOL)supportsVideoFrameRate:(NSInteger)videoFrameRate
{
if (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_6_1) {
AVCaptureDevice *videoDevice = _currentDevice; // was: [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSArray *formats = [videoDevice formats];
for (AVCaptureDeviceFormat *format in formats) {
NSArray *videoSupportedFrameRateRanges = [format videoSupportedFrameRateRanges];
for (AVFrameRateRange *frameRateRange in videoSupportedFrameRateRanges) {
if ( (frameRateRange.minFrameRate <= videoFrameRate) && (videoFrameRate <= frameRateRange.maxFrameRate) ) {
return YES;
}
}
}
} else {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
AVCaptureConnection *connection = [_currentOutput connectionWithMediaType:AVMediaTypeVideo];
return (connection.isVideoMaxFrameDurationSupported && connection.isVideoMinFrameDurationSupported);
#pragma clang diagnostic pop
}
return NO;
}
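Even with the corrected check, the same exception can still be thrown if the requested duration falls outside the ranges of the device's activeFormat, which is exactly what the error message says to consult. This is not part of PBJVision; it is a sketch of an extra guard that could go inside setVideoFrameRate:, reusing the supportingFormat, fps and videoFrameRate variables from the method shown above:
// Sketch: switch to the supporting format first, then verify the active format
// really allows the requested rate before touching the frame durations.
if ([_currentDevice lockForConfiguration:&error]) {
    if (supportingFormat && _currentDevice.activeFormat != supportingFormat) {
        _currentDevice.activeFormat = supportingFormat;
    }
    BOOL rateSupported = NO;
    for (AVFrameRateRange *range in _currentDevice.activeFormat.videoSupportedFrameRateRanges) {
        if (range.minFrameRate <= videoFrameRate && videoFrameRate <= range.maxFrameRate) {
            rateSupported = YES;
            break;
        }
    }
    if (rateSupported) {
        _currentDevice.activeVideoMinFrameDuration = fps;
        _currentDevice.activeVideoMaxFrameDuration = fps;
        _videoFrameRate = videoFrameRate;
    } else {
        DLog(@"active format still does not support %ld fps", (long)videoFrameRate);
    }
    [_currentDevice unlockForConfiguration];
}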

Related

Can't start iPhone X flash in torch mode

I am having problems running the iPhone X flash in torch mode.
The back AVCaptureDeviceTypeBuiltInTelephotoCamera is selected as the capture device:
'com.apple.avfoundation.avcapturedevice.built-in_video:2' -
AVCaptureDeviceTypeBuiltInTelephotoCamera
After checking torch mode availability with:
[self.captureDevice isTorchModeSupported:AVCaptureTorchModeOn]
I'm trying to switch the flash into torch mode with
[self.captureDevice lockForConfiguration:nil];
BOOL result = [self.captureDevice setTorchModeOnWithLevel:1 error:&error];
[self.captureDevice unlockForConfiguration];
This call succeeds: result == YES and error == nil. But the flash blinks once and then turns off.
I saw this behaviour on iPhone X myself, and there are reports of the same behaviour from iPhone 8 and iPhone 8 Plus owners. Some users say the problem appeared after updating to iOS 11.1, but I couldn't reproduce it on an iPhone 8 myself.
Are there any ideas on how to fix or debug this problem?
Full code snippet from my app listed below:
// Retrieve the back camera
if ([AVCaptureDeviceDiscoverySession class]) {
DDLogDebug(#"Search camera with AVCaptureDeviceDiscoverySession");
AVCaptureDevice* camera =
[AVCaptureDeviceDiscoverySession
discoverySessionWithDeviceTypes: @[AVCaptureDeviceTypeBuiltInTelephotoCamera]
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionBack].devices.firstObject;
if (!camera) {
camera = [AVCaptureDeviceDiscoverySession
discoverySessionWithDeviceTypes: @[AVCaptureDeviceTypeBuiltInTelephotoCamera]
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionBack].devices.firstObject;
}
DDLogDebug(#"Did find %# camera", camera);
self.captureDevice = camera;
} else {
DDLogDebug(#"Haven't found camera device with AVCaptureDeviceDiscoverySession");
}
if (!self.captureDevice) {
DDLogDebug(#"Searching at [AVCaptureDevice devices], where %lu devices available", (unsigned long)AVCaptureDevice.devices.count);
for (AVCaptureDevice *device in [AVCaptureDevice devices]) {
if ([device hasMediaType:AVMediaTypeVideo] && [device hasTorch]) {
self.captureDevice = device;
break;
}
}
}
if (!self.captureDevice) {
NSError* error = [NSError buildError:^(MRErrorBuilder *builder) {
builder.localizedDescription = NSLocalizedString(#"There is no camera devices able to measure heart rate", nil);
builder.domain = kWTCameraHeartRateMonitorError;
builder.code = 27172;
}];
DDLogError(#"%#", error);
self.session = nil;
self.handler(0, 0, error);
return NO;
}
NSError *error;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:self.captureDevice
error:&error];
if (error) {
DDLogError(#"%#", error);
self.session = nil;
self.handler(0, 0, error);
return NO;
}
NSString* deviceType = [self.captureDevice respondsToSelector:@selector(deviceType)] ? self.captureDevice.deviceType : @"Unknown";
DDLogDebug(@"Configuring camera '%@'/'%@' - %@ id %@ at %ld connected: %@", self.captureDevice.localizedName, self.captureDevice.modelID, deviceType, self.captureDevice.uniqueID, (long)self.captureDevice.position, self.captureDevice.connected?@"YES":@"NO");
self.session = [[AVCaptureSession alloc] init];
NSString* preset = [self.session canSetSessionPreset:AVCaptureSessionPresetLow] ? AVCaptureSessionPresetLow : nil;
if (preset) {
self.session.sessionPreset = preset;
}
[self.session beginConfiguration];
[self.session addInput:input];
// Find the max frame rate we can get from the given device
AVCaptureDeviceFormat *currentFormat;
for (AVCaptureDeviceFormat *format in self.captureDevice.formats)
{
NSArray *ranges = format.videoSupportedFrameRateRanges;
AVFrameRateRange *frameRates = ranges[0];
// Find the lowest resolution format at the frame rate we want.
if (frameRates.maxFrameRate == FRAMES_PER_SECOND && (!currentFormat || (CMVideoFormatDescriptionGetDimensions(format.formatDescription).width < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width && CMVideoFormatDescriptionGetDimensions(format.formatDescription).height < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height)))
{
currentFormat = format;
}
}
if (![self.captureDevice isTorchModeSupported:AVCaptureTorchModeOn]) {
NSError* error = [NSError buildError:^(MRErrorBuilder *builder) {
builder.localizedDescription = NSLocalizedString(#"Torch mode is not supported for your camera", nil);
builder.domain = kWTCameraHeartRateMonitorError;
builder.code = 28633;
}];
self.session = nil;
DDLogError(#"%#", error);
self.session = nil;
self.handler(0, 0, error);
return NO;
}
// Tell the device to use the max frame rate.
[self.captureDevice lockForConfiguration:nil];
DDLogVerbose(#"Turn on tourch mode with level 0.5");
self.captureDevice.flashMode = AVCaptureFlashModeOff;
BOOL result = [self.captureDevice setTorchModeOnWithLevel:0.5 error:&error];
if (!result) {
DDLogError(#"%#", error);
self.session = nil;
self.handler(0, 0, error);
return NO;
}
[self.captureDevice setFocusMode:AVCaptureFocusModeLocked];
[self.captureDevice setFocusModeLockedWithLensPosition:1.0
completionHandler:nil];
self.captureDevice.activeFormat = currentFormat;
self.captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
self.captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
[self.captureDevice unlockForConfiguration];
// Set the output
AVCaptureVideoDataOutput* videoOutput = [AVCaptureVideoDataOutput new];
// create a queue to run the capture on
dispatch_queue_t captureQueue=dispatch_queue_create("catpureQueue", DISPATCH_QUEUE_SERIAL);
// setup our delegate
[videoOutput setSampleBufferDelegate:self queue:captureQueue];
// configure the pixel format
videoOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)};
videoOutput.alwaysDiscardsLateVideoFrames = NO;
[self.session addOutput:videoOutput];
if (debugPath) {
NSError* error;
[[NSFileManager defaultManager] removeItemAtPath:debugPath
error:nil];
BOOL result =
[[NSFileManager defaultManager] createDirectoryAtPath:debugPath
withIntermediateDirectories:YES
attributes:nil
error:&error];
if (result) {
[self setupDebugRecordAt:debugPath withFormat:currentFormat];
} else {
DDLogError(#"%#", error);
}
const char* path = [debugPath cStringUsingEncoding:NSUTF8StringEncoding];
self.filter->setDebugPath(path);
}
// Start the video session
[self.session commitConfiguration];
self.frameNumber = 0;
[self.assetWriter startWriting];
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
[self.session startRunning];
Finally, the problem is fixed. I'm not sure about the exact reason; any information related to this issue is appreciated.
This problem exists on iPhone 8, 8 Plus and iPhone X running iOS 11.1. I reproduced this behaviour on an iPhone 8 after updating from iOS 11.0 to 11.1.
What I noticed is that the torch turns on after calling
BOOL result = [self.captureDevice setTorchModeOnWithLevel:0.5 error:&error];
and turns off after
[self.captureDevice setFocusMode:AVCaptureFocusModeLocked];
or
[self.session commitConfiguration];
So the solution was to perform the torch configuration after ALL other session and device configuration is finished and the session has started.
My current implementation is:
// Session configuration ...
[self.session startRunning];
if (![self.captureDevice isTorchModeSupported:AVCaptureTorchModeOn]) {
NSError* error = [NSError buildError:^(MRErrorBuilder *builder) {
builder.localizedDescription = NSLocalizedString(#"Torch mode is not supported for your camera", nil);
builder.domain = kWTCameraHeartRateMonitorError;
builder.code = 28633;
}];
DDLogError(#"%#", error);
if (self.session) {
[self.session stopRunning];
}
self.session = nil;
self.handler(0, 0, error);
return NO;
}
[self.captureDevice lockForConfiguration:nil];
self.captureDevice.flashMode = AVCaptureFlashModeOff;
[self.captureDevice setFocusMode:AVCaptureFocusModeLocked];
[self.captureDevice setFocusModeLockedWithLensPosition:1.0
completionHandler:nil];
self.captureDevice.activeFormat = currentFormat;
self.captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
self.captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
// This call should be placed AFTER all other configurations
BOOL result = [self.captureDevice setTorchModeOnWithLevel:0.5 error:&error];
if (!result) {
DDLogError(#"%#", error);
self.session = nil;
self.handler(0, 0, error);
return NO;
}
[self.captureDevice unlockForConfiguration];
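If the torch still drops out, one way to narrow down which step turns it off is to key-value observe the device's torchActive property (documented as KVO-observable on AVCaptureDevice) right after enabling it. A minimal sketch, assuming self retains captureDevice:
// Register immediately after setTorchModeOnWithLevel: succeeds.
[self.captureDevice addObserver:self
                     forKeyPath:@"torchActive"
                        options:NSKeyValueObservingOptionNew
                        context:NULL];

// Log every change so the offending configuration call shows up in the log order.
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if ([keyPath isEqualToString:@"torchActive"]) {
        DDLogDebug(@"torchActive changed to %@", change[NSKeyValueChangeNewKey]);
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}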

iPhone camera ISO value

I set the shutter speed, ISO and other camera parameters and capture an image using the code below. If I set the ISO to something like 119, it gets reported as 125 in the Exif info (it is always rounded to a standard value). How can I tell what the real ISO was? The maxISO gets reported as 734, but if I set the ISO to 734 it shows up as 800 in the Exif information. I had the same problem with the exposure time: if I set it to 800 ms it shows up as 1 s in the Exif info, but the value calculated from ShutterSpeedValue is correct.
- (void)setCameraSettings:(long)expTime1000thSec iso:(int)isoValue
{
if ( currentCaptureDevice ) {
[captureSession beginConfiguration];
NSError *error = nil;
if ([currentCaptureDevice lockForConfiguration:&error]) {
if ([currentCaptureDevice isExposureModeSupported:AVCaptureExposureModeLocked]) {
CMTime minTime, maxTime, exposureTime;
if ( isoValue < minISO ) {
isoValue = minISO;
} else if ( isoValue > maxISO ) {
isoValue = maxISO;
}
exposureTime = CMTimeMake(expTime1000thSec, EXP_TIME_UNIT); // in 1/EXP_TIME_UNIT of a second
minTime = currentCaptureDevice.activeFormat.minExposureDuration;
maxTime = currentCaptureDevice.activeFormat.maxExposureDuration;
if ( CMTimeCompare(exposureTime, minTime) < 0 ) {
exposureTime = minTime;
} else if ( CMTimeCompare(exposureTime, maxTime) > 0 ) {
exposureTime = maxTime;
}
NSLog(#"setting exp time to %lld/%d s (want %ld) iso=%d", exposureTime.value, exposureTime.timescale, expTime1000thSec, isoValue);
[currentCaptureDevice setExposureModeCustomWithDuration:exposureTime ISO:isoValue completionHandler:nil];
}
if (currentCaptureDevice.lowLightBoostSupported) {
currentCaptureDevice.automaticallyEnablesLowLightBoostWhenAvailable = NO;
NSLog(#"setting automaticallyEnablesLowLightBoostWhenAvailable = NO");
}
// lock the gains
if ([currentCaptureDevice isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeLocked]) {
currentCaptureDevice.whiteBalanceMode = AVCaptureWhiteBalanceModeLocked;
NSLog(#"setting AVCaptureWhiteBalanceModeLocked");
}
// set the gains
AVCaptureWhiteBalanceGains gains;
gains.redGain = 1.0;
gains.greenGain = 1.0;
gains.blueGain = 1.0;
AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains];
[currentCaptureDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
NSLog(#"setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains g.red=%.2lf g.green=%.2lf g.blue=%.2lf",
normalizedGains.redGain, normalizedGains.greenGain, normalizedGains.blueGain);
[currentCaptureDevice unlockForConfiguration];
}
[captureSession commitConfiguration];
}
}
- (void)captureStillImage
{
NSLog(#"about to request a capture from: %#", [self stillImageOutput]);
if ( videoConnection ) {
waitingForCapture = true;
NSLog(#"requesting a capture from: %#", [self stillImageOutput]);
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
if ( imageSampleBuffer ) {
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
NSLog(#"attachements: %#", exifAttachments);
} else {
NSLog(#"no attachments");
}
NSLog(#"name: %#", [currentCaptureDevice localizedName]);
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
[self setStillImage:image];
NSDictionary *dict = (__bridge NSDictionary*)exifAttachments;
NSString *value = [dict objectForKey:#"PixelXDimension"];
[self setImageWidth:[NSNumber numberWithInt:[value intValue]]];
value = [dict objectForKey:#"PixelYDimension"];
[self setImageHeight:[NSNumber numberWithInt:[value intValue]]];
value = [dict objectForKey:#"BrightnessValue"];
[self setImageBrightnessValue:[NSNumber numberWithFloat:[value floatValue]]];
value = [dict objectForKey:#"ShutterSpeedValue"];
double val = [value doubleValue];
val = 1.0 / pow(2.0, val);
[self setImageExposureTime:[NSNumber numberWithDouble:val]];
value = [dict objectForKey:#"ApertureValue"];
[self setImageApertureValue:[NSNumber numberWithFloat:[value floatValue]]];
NSArray *values = [dict objectForKey:#"ISOSpeedRatings"];
[self setImageISOSpeedRatings:[NSNumber numberWithInt:[ [values objectAtIndex:0] intValue]]];
NSLog(#"r/g/b gains = %.2lf/%.2lf/%.2lf",
currentCaptureDevice.deviceWhiteBalanceGains.redGain,
currentCaptureDevice.deviceWhiteBalanceGains.greenGain,
currentCaptureDevice.deviceWhiteBalanceGains.blueGain);
[[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
} else {
NSLog(#"imageSampleBuffer = NULL");
}
waitingForCapture = false;
}];
} else {
NSLog(#"can't capture from: videoConnection");
}
}
I answered your other question; it's the same thing: you are not setting AVCaptureExposureModeCustom:
[device lockForConfiguration:nil];
if([device isExposureModeSupported:AVCaptureExposureModeCustom]){
[device setExposureMode:AVCaptureExposureModeCustom];
[device setExposureModeCustomWithDuration:exposureTime ISO:exposureISO completionHandler:^(CMTime syncTime) {}];
[device setExposureTargetBias:exposureBIAS completionHandler:^(CMTime syncTime) {}];
}
Regards.
I added:
[device setExposureMode:AVCaptureExposureModeCustom];
and changed my completionHandler from nil to:
completionHandler:^(CMTime syncTime) {}
and I still see the same results. It seems that ISO can only be set to the "standard" values and not to any value between minISO and maxISO.
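A hedged suggestion (not confirmed by the original poster): the EXIF ISOSpeedRatings tag is typically rounded to standard photographic values, while AVCaptureDevice itself exposes the values it is actually using through its ISO and exposureDuration properties. A small sketch of reading them back once the custom exposure has been applied, reusing the device/exposureTime/exposureISO names from the answer above:
[device setExposureModeCustomWithDuration:exposureTime
                                      ISO:exposureISO
                        completionHandler:^(CMTime syncTime) {
    // Read back what the sensor is actually using, rather than relying on the EXIF tags.
    NSLog(@"actual ISO = %.1f", device.ISO);
    NSLog(@"actual exposure = %lld/%d s",
          device.exposureDuration.value, device.exposureDuration.timescale);
}];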

How to Record Video Using Front and Back Camera and still keep recording?

I'm using AVFoundation. I want to record video using both the front and back cameras. Recording works on one camera, but when I switch the camera from back to front, the camera freezes. Is it possible to record video continuously while switching between both sides?
Sample Code:
- (void) startup
{
if (_session == nil)
{
NSLog(#"Starting up server");
self.isCapturing = NO;
self.isPaused = NO;
_currentFile = 0;
_discont = NO;
// create capture device with video input
_session = [[AVCaptureSession alloc] init];
AVCaptureDevice *backCamera = [self frontCamera];
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:nil];
[_session addInput:input];
// audio input from default mic
AVCaptureDevice* mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput* micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
[_session addInput:micinput];
// create an output for YUV output with self as delegate
_captureQueue = dispatch_queue_create("com.softcraftsystems.comss", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput* videoout = [[AVCaptureVideoDataOutput alloc] init];
[videoout setSampleBufferDelegate:self queue:_captureQueue];
NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
nil];
videoout.videoSettings = setcapSettings;
[_session addOutput:videoout];
_videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
// find the actual dimensions used so we can set up the encoder to the same.
NSDictionary* actual = videoout.videoSettings;
_cy = [[actual objectForKey:#"Height"] integerValue];
_cx = [[actual objectForKey:#"Width"] integerValue];
AVCaptureAudioDataOutput* audioout = [[AVCaptureAudioDataOutput alloc] init];
[audioout setSampleBufferDelegate:self queue:_captureQueue];
[_session addOutput:audioout];
_audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];
// for audio, we want the channels and sample rate, but we can't get those from audioout.audiosettings on ios, so
// we need to wait for the first sample
// start capture and a preview layer
[_session startRunning];
_preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
}
}
- (AVCaptureDevice *)frontCamera
{
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionFront) {
return device;
}
}
return nil;
}
- (AVCaptureDevice *)backCamera
{
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionBack) {
return device;
}
}
return nil;
}
- (void) startupFront
{
_session = nil;
[_session stopRunning];
if (_session == nil)
{
NSLog(#"Starting up server");
self.isCapturing = NO;
self.isPaused = NO;
_currentFile = 0;
_discont = NO;
// create capture device with video input
_session = [[AVCaptureSession alloc] init];
AVCaptureDevice *backCamera = [self backCamera];
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:nil];
[_session addInput:input];
// audio input from default mic
AVCaptureDevice* mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput* micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
[_session addInput:micinput];
// create an output for YUV output with self as delegate
_captureQueue = dispatch_queue_create("com.softcraftsystems.comss", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput* videoout = [[AVCaptureVideoDataOutput alloc] init];
[videoout setSampleBufferDelegate:self queue:_captureQueue];
NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
nil];
videoout.videoSettings = setcapSettings;
[_session addOutput:videoout];
_videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
// find the actual dimensions used so we can set up the encoder to the same.
NSDictionary* actual = videoout.videoSettings;
_cy = [[actual objectForKey:#"Height"] integerValue];
_cx = [[actual objectForKey:#"Width"] integerValue];
AVCaptureAudioDataOutput* audioout = [[AVCaptureAudioDataOutput alloc] init];
[audioout setSampleBufferDelegate:self queue:_captureQueue];
[_session addOutput:audioout];
_audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];
// for audio, we want the channels and sample rate, but we can't get those from audioout.audiosettings on ios, so
// we need to wait for the first sample
// start capture and a preview layer
[_session startRunning];
_preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
}
}
- (void) startCapture
{
@synchronized(self)
{
if (!self.isCapturing)
{
NSLog(#"starting capture");
// create the encoder once we have the audio params
_encoder = nil;
self.isPaused = NO;
_discont = NO;
_timeOffset = CMTimeMake(0, 0);
self.isCapturing = YES;
}
}
}
- (void) stopCapture
{
@synchronized(self)
{
if (self.isCapturing)
{
NSString* filename = [NSString stringWithFormat:#"capture%d.mp4", _currentFile];
NSString* path = [NSTemporaryDirectory() stringByAppendingPathComponent:filename];
NSURL* url = [NSURL fileURLWithPath:path];
_currentFile++;
// serialize with audio and video capture
self.isCapturing = NO;
dispatch_async(_captureQueue, ^{
[_encoder finishWithCompletionHandler:^{
self.isCapturing = NO;
_encoder = nil;
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeVideoAtPathToSavedPhotosAlbum:url completionBlock:^(NSURL *assetURL, NSError *error){
NSLog(#"save completed");
[[NSFileManager defaultManager] removeItemAtPath:path error:nil];
}];
}];
});
}
}
}
- (void) pauseCapture
{
@synchronized(self)
{
if (self.isCapturing)
{
NSLog(#"Pausing capture");
self.isPaused = YES;
_discont = YES;
}
}
}
- (void) resumeCapture
{
@synchronized(self)
{
if (self.isPaused)
{
NSLog(#"Resuming capture");
self.isPaused = NO;
}
}
}
- (CMSampleBufferRef) adjustTime:(CMSampleBufferRef) sample by:(CMTime) offset
{
CMItemCount count;
CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
CMSampleTimingInfo* pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
for (CMItemCount i = 0; i < count; i++)
{
pInfo[i].decodeTimeStamp = CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
}
CMSampleBufferRef sout;
CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
free(pInfo);
return sout;
}
- (void) setAudioFormat:(CMFormatDescriptionRef) fmt
{
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
_samplerate = asbd->mSampleRate;
_channels = asbd->mChannelsPerFrame;
}
- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
BOOL bVideo = YES;
@synchronized(self)
{
if (!self.isCapturing || self.isPaused)
{
return;
}
if (connection != _videoConnection)
{
bVideo = NO;
}
if ((_encoder == nil) && !bVideo)
{
CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
[self setAudioFormat:fmt];
NSString* filename = [NSString stringWithFormat:#"capture%d.mp4", _currentFile];
NSString* path = [NSTemporaryDirectory() stringByAppendingPathComponent:filename];
_encoder = [VideoEncoder encoderForPath:path Height:_cy width:_cx channels:_channels samples:_samplerate];
}
if (_discont)
{
if (bVideo)
{
return;
}
_discont = NO;
// calc adjustment
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime last = bVideo ? _lastVideo : _lastAudio;
if (last.flags & kCMTimeFlags_Valid)
{
if (_timeOffset.flags & kCMTimeFlags_Valid)
{
pts = CMTimeSubtract(pts, _timeOffset);
}
CMTime offset = CMTimeSubtract(pts, last);
NSLog(#"Setting offset from %s", bVideo?"video": "audio");
NSLog(#"Adding %f to %f (pts %f)", ((double)offset.value)/offset.timescale, ((double)_timeOffset.value)/_timeOffset.timescale, ((double)pts.value/pts.timescale));
// this stops us having to set a scale for _timeOffset before we see the first video time
if (_timeOffset.value == 0)
{
_timeOffset = offset;
}
else
{
_timeOffset = CMTimeAdd(_timeOffset, offset);
}
}
_lastVideo.flags = 0;
_lastAudio.flags = 0;
}
// retain so that we can release either this or modified one
CFRetain(sampleBuffer);
if (_timeOffset.value > 0)
{
CFRelease(sampleBuffer);
sampleBuffer = [self adjustTime:sampleBuffer by:_timeOffset];
}
// record most recent time so we know the length of the pause
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime dur = CMSampleBufferGetDuration(sampleBuffer);
if (dur.value > 0)
{
pts = CMTimeAdd(pts, dur);
}
if (bVideo)
{
_lastVideo = pts;
}
else
{
_lastAudio = pts;
}
}
// pass frame to encoder
[_encoder encodeFrame:sampleBuffer isVideo:bVideo];
CFRelease(sampleBuffer);
}
- (void) shutdown
{
NSLog(#"shutting down server");
if (_session)
{
[_session stopRunning];
_session = nil;
}
[_encoder finishWithCompletionHandler:^{
NSLog(#"Capture completed");
}];
}
As far as I know, it is not possible to continue recording when you switch cameras,
because there is a resolution and quality difference between them, and a video can have only one resolution and quality throughout.
Secondly, every time you switch cameras the capture device is allocated and initialized again.
Unfortunately, I don't believe it is possible,
but if you do find a solution, please tell me.

How to find a memory leak in iOS with Xcode?

This is my RTSP streaming iOS application with an FFmpeg decoder, and it streams fine, but memory keeps increasing while it runs. Please help me: is it a memory leak, and how can I track it down?
This is my video streaming class, RTSPPlayer.m:
#import "RTSPPlayer.h"
#import "Utilities.h"
#import "AudioStreamer.h"
@interface RTSPPlayer ()
@property (nonatomic, retain) AudioStreamer *audioController;
@end
@interface RTSPPlayer (private)
-(void)convertFrameToRGB;
-(UIImage *)imageFromAVPicture:(AVPicture)pict width:(int)width height:(int)height;
-(void)setupScaler;
@end
@implementation RTSPPlayer
@synthesize audioController = _audioController;
@synthesize audioPacketQueue,audioPacketQueueSize;
@synthesize _audioStream,_audioCodecContext;
@synthesize emptyAudioBuffer;
@synthesize outputWidth, outputHeight;
- (void)setOutputWidth:(int)newValue
{
if (outputWidth != newValue) {
outputWidth = newValue;
[self setupScaler];
}
}
- (void)setOutputHeight:(int)newValue
{
if (outputHeight != newValue) {
outputHeight = newValue;
[self setupScaler];
}
}
- (UIImage *)currentImage
{
if (!pFrame->data[0]) return nil;
[self convertFrameToRGB];
return [self imageFromAVPicture:picture width:outputWidth height:outputHeight];
}
- (double)duration
{
return (double)pFormatCtx->duration / AV_TIME_BASE;
}
- (double)currentTime
{
AVRational timeBase = pFormatCtx->streams[videoStream]->time_base;
return packet.pts * (double)timeBase.num / timeBase.den;
}
- (int)sourceWidth
{
return pCodecCtx->width;
}
- (int)sourceHeight
{
return pCodecCtx->height;
}
- (id)initWithVideo:(NSString *)moviePath usesTcp:(BOOL)usesTcp
{
if (!(self=[super init])) return nil;
AVCodec *pCodec;
// Register all formats and codecs
avcodec_register_all();
av_register_all();
avformat_network_init();
// Set the RTSP Options
AVDictionary *opts = 0;
if (usesTcp)
av_dict_set(&opts, "rtsp_transport", "tcp", 0);
if (avformat_open_input(&pFormatCtx, [moviePath UTF8String], NULL, &opts) !=0 ) {
av_log(NULL, AV_LOG_ERROR, "Couldn't open file\n");
goto initError;
}
// Retrieve stream information
if (avformat_find_stream_info(pFormatCtx,NULL) < 0) {
av_log(NULL, AV_LOG_ERROR, "Couldn't find stream information\n");
goto initError;
}
// Find the first video stream
videoStream=-1;
audioStream=-1;
for (int i=0; i<pFormatCtx->nb_streams; i++) {
if (pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
NSLog(#"found video stream");
videoStream=i;
}
if (pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO) {
audioStream=i;
NSLog(#"found audio stream");
}
}
if (videoStream==-1 && audioStream==-1) {
goto initError;
}
// Get a pointer to the codec context for the video stream
pCodecCtx = pFormatCtx->streams[videoStream]->codec;
// Find the decoder for the video stream
pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
if (pCodec == NULL) {
av_log(NULL, AV_LOG_ERROR, "Unsupported codec!\n");
goto initError;
}
// Open codec
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
goto initError;
}
if (audioStream > -1 ) {
NSLog(#"set up audiodecoder");
[self setupAudioDecoder];
}
// Allocate video frame
pFrame = avcodec_alloc_frame();
outputWidth = pCodecCtx->width;
self.outputHeight = pCodecCtx->height;
return self;
initError:
// [self release];
return nil;
}
- (void)setupScaler
{
// Release old picture and scaler
avpicture_free(&picture);
sws_freeContext(img_convert_ctx);
// Allocate RGB picture
avpicture_alloc(&picture, PIX_FMT_RGB24, outputWidth, outputHeight);
// Setup scaler
static int sws_flags = SWS_FAST_BILINEAR;
img_convert_ctx = sws_getContext(pCodecCtx->width,
pCodecCtx->height,
pCodecCtx->pix_fmt,
outputWidth,
outputHeight,
PIX_FMT_RGB24,
sws_flags, NULL, NULL, NULL);
}
- (void)seekTime:(double)seconds
{
AVRational timeBase = pFormatCtx->streams[videoStream]->time_base;
int64_t targetFrame = (int64_t)((double)timeBase.den / timeBase.num * seconds);
avformat_seek_file(pFormatCtx, videoStream, targetFrame, targetFrame, targetFrame, AVSEEK_FLAG_FRAME);
avcodec_flush_buffers(pCodecCtx);
}
- (void)dealloc
{
// Free scaler
sws_freeContext(img_convert_ctx);
// Free RGB picture
avpicture_free(&picture);
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
// Free the YUV frame
av_free(pFrame);
// Close the codec
if (pCodecCtx) avcodec_close(pCodecCtx);
// Close the video file
if (pFormatCtx) avformat_close_input(&pFormatCtx);
[_audioController _stopAudio];
// [_audioController release];
_audioController = nil;
// [audioPacketQueue release];
audioPacketQueue = nil;
// [audioPacketQueueLock release];
audioPacketQueueLock = nil;
// [super dealloc];
}
- (BOOL)stepFrame
{
// AVPacket packet;
int frameFinished=0;
while (!frameFinished && av_read_frame(pFormatCtx, &packet) >=0 ) {
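// Note (not in the original post): av_read_frame() allocates new data for the packet on every
// call; if each packet is not released after use (e.g. with av_free_packet), memory can grow
// steadily. This loop is worth checking with the Heapshot steps described in the answer below.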
// Is this a packet from the video stream?
if(packet.stream_index==videoStream) {
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
}
if (packet.stream_index==audioStream) {
// NSLog(#"audio stream");
[audioPacketQueueLock lock];
audioPacketQueueSize += packet.size;
[audioPacketQueue addObject:[NSMutableData dataWithBytes:&packet length:sizeof(packet)]];
[audioPacketQueueLock unlock];
if (!primed) {
primed=YES;
[_audioController _startAudio];
}
if (emptyAudioBuffer) {
[_audioController enqueueBuffer:emptyAudioBuffer];
}
}
}
return frameFinished!=0;
}
- (void)convertFrameToRGB
{
sws_scale(img_convert_ctx,
pFrame->data,
pFrame->linesize,
0,
pCodecCtx->height,
picture.data,
picture.linesize);
}
- (UIImage *)imageFromAVPicture:(AVPicture)pict width:(int)width height:(int)height
{
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CFDataRef data = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault, pict.data[0], pict.linesize[0]*height,kCFAllocatorNull);
CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width,
height,
8,
24,
pict.linesize[0],
colorSpace,
bitmapInfo,
provider,
NULL,
NO,
kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGDataProviderRelease(provider);
CFRelease(data);
return image;
}
- (void)setupAudioDecoder
{
if (audioStream >= 0) {
_audioBufferSize = AVCODEC_MAX_AUDIO_FRAME_SIZE;
_audioBuffer = av_malloc(_audioBufferSize);
_inBuffer = NO;
_audioCodecContext = pFormatCtx->streams[audioStream]->codec;
_audioStream = pFormatCtx->streams[audioStream];
AVCodec *codec = avcodec_find_decoder(_audioCodecContext->codec_id);
if (codec == NULL) {
NSLog(#"Not found audio codec.");
return;
}
if (avcodec_open2(_audioCodecContext, codec, NULL) < 0) {
NSLog(#"Could not open audio codec.");
return;
}
if (audioPacketQueue) {
// [audioPacketQueue release];
audioPacketQueue = nil;
}
audioPacketQueue = [[NSMutableArray alloc] init];
if (audioPacketQueueLock) {
// [audioPacketQueueLock release];
audioPacketQueueLock = nil;
}
audioPacketQueueLock = [[NSLock alloc] init];
if (_audioController) {
[_audioController _stopAudio];
// [_audioController release];
_audioController = nil;
}
_audioController = [[AudioStreamer alloc] initWithStreamer:self];
} else {
pFormatCtx->streams[audioStream]->discard = AVDISCARD_ALL;
audioStream = -1;
}
}
- (void)nextPacket
{
_inBuffer = NO;
}
- (AVPacket*)readPacket
{
if (_currentPacket.size > 0 || _inBuffer) return &_currentPacket;
NSMutableData *packetData = [audioPacketQueue objectAtIndex:0];
_packet = [packetData mutableBytes];
if (_packet) {
if (_packet->dts != AV_NOPTS_VALUE) {
_packet->dts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
}
if (_packet->pts != AV_NOPTS_VALUE) {
_packet->pts += av_rescale_q(0, AV_TIME_BASE_Q, _audioStream->time_base);
}
[audioPacketQueueLock lock];
audioPacketQueueSize -= _packet->size;
if ([audioPacketQueue count] > 0) {
[audioPacketQueue removeObjectAtIndex:0];
}
[audioPacketQueueLock unlock];
_currentPacket = *(_packet);
}
return &_currentPacket;
}
- (void)closeAudio
{
[_audioController _stopAudio];
primed=NO;
}
@end
Presented as an answer for formatting and images.
Use Instruments to check for leaks and for memory loss due to retained-but-not-leaked memory; the latter is unused memory that is still pointed to. Use Mark Generation (Heapshot) in the Allocations instrument in Instruments.
For a how-to on using Heapshot to find memory creep, see bbum's blog.
Basically, the method is to run the Allocations instrument, take a heapshot, run an iteration of your code and take another heapshot, repeating 3 or 4 times. This will indicate memory that is allocated but not released during the iterations.
To examine the results, disclose each heapshot to see the individual allocations.
If you need to see where retains, releases and autoreleases occur for an object, use Instruments:
Run the app in Instruments and, in Allocations, turn on "Record reference counts" (for Xcode 5 and lower you have to stop recording to set the option). Exercise the app, stop recording, drill down, and you will be able to see where all retains, releases and autoreleases occurred.

Changing AVCaptureDeviceInput leads to AVAssetWriterStatusFailed

I am trying to flip the camera view between front and back, and it works well. If video is recorded without flipping, using the Pause/Record option, it works fine. But once the camera view is flipped, the subsequent recording is not saved, which leads to AVAssetWriterStatusFailed - "The operation could not be completed". Can anybody help me find where I have gone wrong? Below is my code.
Camera.m
- (void)flipCamera{
NSArray *inputs = _session.inputs;
for (AVCaptureDeviceInput *INPUT in inputs) {
AVCaptureDevice *Device = INPUT.device;
if ([Device hasMediaType:AVMediaTypeVideo]) {
AVCaptureDevicePosition position = Device.position;
AVCaptureDevice *newCamera = nil;
AVCaptureDeviceInput *newInput = nil;
if (position == AVCaptureDevicePositionFront)
newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
else
newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
newInput = [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:nil];
// beginConfiguration ensures that pending changes are not applied immediately
[_session beginConfiguration];
[_session removeInput:INPUT];
[_session addInput:newInput];
// Changes take effect once the outermost commitConfiguration is invoked.
[_session commitConfiguration];
break;
}
}
for (AVCaptureDeviceInput *INPUT in inputs) {
AVCaptureDevice *Device = INPUT.device;
if ([Device hasMediaType:AVMediaTypeAudio]) {
// audio input from default mic
AVCaptureDevice* mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput* newInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
// [_session addInput:micinput];
// beginConfiguration ensures that pending changes are not applied immediately
[_session beginConfiguration];
[_session removeInput:INPUT];
[_session addInput:newInput];
// Changes take effect once the outermost commitConfiguration is invoked.
[_session commitConfiguration];
break;
}
}
}
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position
{
NSArray *Devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *Device in Devices)
if (Device.position == position)
return Device;
return nil;
}
- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
BOOL bVideo = YES;
@synchronized(self)
{
if (!self.isCapturing || self.isPaused)
{
return;
}
if (connection != _videoConnection)
{
bVideo = NO;
}
if ((_encoder == nil) && !bVideo)
{
CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
[self setAudioFormat:fmt];
NSString* filename = [NSString stringWithFormat:#"capture%d.mp4", _currentFile];
NSString* path = [NSTemporaryDirectory() stringByAppendingPathComponent:filename];
_encoder = [VideoEncoder encoderForPath:path Height:_cy width:_cx channels:_channels samples:_samplerate];
}
if (_discont)
{
if (bVideo)
{
return;
}
_discont = NO;
// calc adjustment
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime last = bVideo ? _lastVideo : _lastAudio;
if (last.flags & kCMTimeFlags_Valid)
{
if (_timeOffset.flags & kCMTimeFlags_Valid)
{
pts = CMTimeSubtract(pts, _timeOffset);
}
CMTime offset = CMTimeSubtract(pts, last);
NSLog(#"Setting offset from %s", bVideo?"video": "audio");
NSLog(#"Adding %f to %f (pts %f)", ((double)offset.value)/offset.timescale, ((double)_timeOffset.value)/_timeOffset.timescale, ((double)pts.value/pts.timescale));
// this stops us having to set a scale for _timeOffset before we see the first video time
if (_timeOffset.value == 0)
{
_timeOffset = offset;
}
else
{
_timeOffset = CMTimeAdd(_timeOffset, offset);
}
}
_lastVideo.flags = 0;
_lastAudio.flags = 0;
}
// retain so that we can release either this or modified one
CFRetain(sampleBuffer);
if (_timeOffset.value > 0)
{
CFRelease(sampleBuffer);
sampleBuffer = [self adjustTime:sampleBuffer by:_timeOffset];
}
// record most recent time so we know the length of the pause
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime dur = CMSampleBufferGetDuration(sampleBuffer);
if (dur.value > 0)
{
pts = CMTimeAdd(pts, dur);
}
if (bVideo)
{
_lastVideo = pts;
}
else
{
_lastAudio = pts;
}
}
// pass frame to encoder
[_encoder encodeFrame:sampleBuffer isVideo:bVideo];
CFRelease(sampleBuffer);
}
Encoder.m
- (BOOL) encodeFrame:(CMSampleBufferRef) sampleBuffer isVideo:(BOOL)bVideo
{
if (CMSampleBufferDataIsReady(sampleBuffer))
{
if (_writer.status == AVAssetWriterStatusUnknown)
{
CMTime startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
[_writer startWriting];
[_writer startSessionAtSourceTime:startTime];
}
if (_writer.status == AVAssetWriterStatusFailed)
{ // If Camera View is Flipped then Loop Enters inside this condition - writer error The operation could not be completed
NSLog(#"writer error %#", _writer.error.localizedDescription);
return NO;
}
if (bVideo)
{
if (_videoInput.readyForMoreMediaData == YES)
{
[_videoInput appendSampleBuffer:sampleBuffer];
return YES;
}
}
else
{
if (_audioInput.readyForMoreMediaData)
{
[_audioInput appendSampleBuffer:sampleBuffer];
return YES;
}
}
}
return NO;
}
Thanks in Advance.
The problem is this line:
if (connection != _videoConnection)
{
bVideo = NO;
}
When you change the camera, a new videoConnection is created; I don't know exactly where or how. But if you change this line as below, it works:
//if (connection != _videoConnection)
if ([connection.output connectionWithMediaType:AVMediaTypeVideo] == nil)
{
bVideo = NO;
}
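An alternative that follows from the same observation (a sketch only; _videoOutput is a hypothetical ivar holding the AVCaptureVideoDataOutput, which the posted code does not show): since removing and adding inputs invalidates the old connection objects, flipCamera can simply refresh the cached _videoConnection after committing the configuration:
[_session beginConfiguration];
[_session removeInput:INPUT];
[_session addInput:newInput];
[_session commitConfiguration];
// Re-fetch the connection so the comparison in captureOutput:didOutputSampleBuffer: stays valid.
_videoConnection = [_videoOutput connectionWithMediaType:AVMediaTypeVideo]; // _videoOutput: hypothetical ivar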
