Set OpenCV's iOS CvVideoCamera default fps above 30

Whenever I try to set CvVideoCamera's default fps above 30, it stays at 30 fps. It lets me set it lower, but nothing above 30 fps. I'm using an iPhone 7, so I know the hardware is capable of shooting 1920x1080 video at 60 fps. I have looked into using AVCaptureSession directly, but OpenCV's CvVideoCamera gives easy access to individual frames for processing, so I would like to stick with it if at all possible.
self.videoCamera = [[CvVideoCamera alloc]initWithParentView:self.videoPreviewView];
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset1920x1080;
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
self.videoCamera.defaultFPS = 60; //This still sets it to 30 FPS
self.videoCamera.grayscaleMode = NO;
self.videoCamera.delegate = self;
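
A note for anyone hitting the same wall: CvVideoCamera drives a regular AVCaptureSession underneath, and AVFoundation only accepts a frame duration shorter than 1/30 s if the capture device's active format advertises a frame-rate range that includes it; the 1080p format chosen by the session preset often tops out at 30 fps. A minimal sketch of a workaround, assuming you can get at the underlying AVCaptureDevice (here fetched with [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo], and the helper name is made up), is to pick a 60 fps capable format yourself after the camera has started:
// Hypothetical helper: select a 1920x1080 format that supports the
// requested frame rate and apply it directly to the capture device.
- (void)applyFrameRate:(Float64)fps toDevice:(AVCaptureDevice *)device {
    AVCaptureDeviceFormat *chosenFormat = nil;
    for (AVCaptureDeviceFormat *format in device.formats) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        if (dims.width != 1920 || dims.height != 1080) continue;
        for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
            if (range.maxFrameRate >= fps) {
                chosenFormat = format;
            }
        }
    }
    if (chosenFormat && [device lockForConfiguration:NULL]) {
        device.activeFormat = chosenFormat;
        device.activeVideoMinFrameDuration = CMTimeMake(1, (int32_t)fps);
        device.activeVideoMaxFrameDuration = CMTimeMake(1, (int32_t)fps);
        [device unlockForConfiguration];
    }
}
Calling this after [self.videoCamera start] bypasses defaultFPS entirely; whether CvVideoCamera's own configuration later overrides it is something to verify on the device.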

Related

GPUImage: Best resolution with higher FPS

I am using the GPUImage framework to record videos.
@property (nonatomic, strong) GPUImageStillCamera *stillCamera;
I am setting this property, which changes the resolution and FPS:
self.stillCamera.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
On iPhone 6, 6 Plus, and iPhone 5s I am getting 1920x1080 at 30 fps.
For the iPod touch (5th generation) and iPhone 5, I need 24 fps at 1920x1080, the same as the native camera app.
But I am only getting 8-9 fps at 1920x1080.
I have tried the AVCaptureSessionPreset1920x1080 preset as well.
I have also tried:
[self.stillCamera.inputCamera lockForConfiguration:nil];
self.stillCamera.inputCamera.activeVideoMinFrameDuration = CMTimeMake(1, 24);
self.stillCamera.inputCamera.activeVideoMaxFrameDuration = CMTimeMake(1, 24);
[self.stillCamera.inputCamera unlockForConfiguration];
but this has no effect when I am using self.stillCamera.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
I have also tried:
[self.stillCamera setFrameRate:24];
but this also does not work with self.stillCamera.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
I have also tried the configureCameraForHighestFrameRate: method from Apple's documentation (https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html), calling [self configureCameraForHighestFrameRate:self.stillCamera.inputCamera];
but with that I only get around 18 fps at 1280x720.
I need 1920x1080 at 24 fps on the iPod touch 5th generation and iPhone 5.
Please help me. Your help will be appreciated.
Thanks in advance.
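
One thing worth checking first (an editorial aside, not from the original thread): the iPhone 5 and iPod touch 5th-generation cameras may simply not expose a 1920x1080 format whose supported frame-rate ranges cover a sustained 24 fps through the video-data-output path GPUImage uses, even though the built-in camera app records 1080p. A quick diagnostic sketch, assuming self.stillCamera.inputCamera is the AVCaptureDevice that GPUImage is driving:
// List every format the camera offers together with its frame-rate ranges,
// to see whether 1920x1080 at 24 fps is available on this device at all.
AVCaptureDevice *camera = self.stillCamera.inputCamera;
for (AVCaptureDeviceFormat *format in camera.formats) {
    CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        NSLog(@"%dx%d  %.0f-%.0f fps", dims.width, dims.height, range.minFrameRate, range.maxFrameRate);
    }
}
If a suitable format does show up, lock the device, set activeFormat to it, and only then set activeVideoMin/MaxFrameDuration to CMTimeMake(1, 24); note that changing sessionPreset afterwards can reset these values, which may be why the durations set in the question didn't take effect.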

AVCaptureSessionPresetLow on iPhone 6

I'm new to AVCaptureSession. I'm using OpenTok to implement video chat. I want to conserve bandwidth, and the UI is designed so the video views are only 100 x 100 at present.
This is part of the code from an OpenTok example where it sets the preset:
- (void)setCaptureSessionPreset:(NSString *)preset {
    AVCaptureSession *session = [self captureSession];
    if ([session canSetSessionPreset:preset] &&
        ![preset isEqualToString:session.sessionPreset]) {
        [_captureSession beginConfiguration];
        _captureSession.sessionPreset = preset;
        _capturePreset = preset;
        [_videoOutput setVideoSettings:
         [NSDictionary dictionaryWithObjectsAndKeys:
          [NSNumber numberWithInt:
           kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
          kCVPixelBufferPixelFormatTypeKey,
          nil]];
        [_captureSession commitConfiguration];
    }
}
When I pass in AVCaptureSessionPresetLow (on an iPhone 6), canSetSessionPreset: returns NO. Is there any way I can configure AVCaptureSession so that I capture video with frames as close to 100 x 100 as possible?
Also, is this the correct strategy for trying to save bandwidth?
You cannot force the camera to a resolution it does not support.
A lower-resolution frame size will lead to lower network traffic.
Lowering the FPS is another way to reduce it.
A view size does not have to map to a resolution; you can always fit a frame into a view of any size.
Look at the Let-Build-OTPublisher app in the OpenTok SDK, and more specifically the TBExampleVideoCapture.m file, to see how resolution and FPS are handled.
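If the goal is simply the smallest capture the hardware accepts, one approach (a sketch using standard preset constants and reusing the setCaptureSessionPreset: method above) is to walk a list of candidates from smallest upwards; there is no 100 x 100 preset, and 352x288 is about as small as iPhone capture gets:
// Try the smallest presets first and take the first one the session accepts.
NSArray *candidates = @[AVCaptureSessionPreset352x288,
                        AVCaptureSessionPresetLow,
                        AVCaptureSessionPresetMedium];
for (NSString *preset in candidates) {
    if ([[self captureSession] canSetSessionPreset:preset]) {
        [self setCaptureSessionPreset:preset];
        break;
    }
}
The frames still won't be 100 x 100, so scale or crop them before handing them to the publisher, or simply let the 100 x 100 view scale the preview as noted above.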

How to set videoMaximumDuration for AVCaptureSession in iOS

I am using AVCaptureSession for recording video, but I can't set a maximum video length. With UIImagePickerController there is a videoMaximumDuration property for setting the maximum duration, but how can I set a maximum duration with AVCaptureSession? Please help me. Thanks in advance.
You can set the maximum duration using the maxRecordedDuration property of your AVCaptureMovieFileOutput.
Here's an example.
self.movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
Float64 maximumVideoLength = 60; // whatever value you wish to set as the maximum, in seconds
int32_t preferredTimescale = 30; // frames per second
CMTime maxDuration = CMTimeMakeWithSeconds(maximumVideoLength, preferredTimescale);
self.movieFileOutput.maxRecordedDuration = maxDuration;
self.movieFileOutput.minFreeDiskSpaceLimit = 1024 * 1024;
if ([self.captureSession canAddOutput:self.movieFileOutput]) {
    [self.captureSession addOutput:self.movieFileOutput];
}
I hope this answers your question.
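One detail worth adding (not part of the original answer, so treat it as a hedged aside): when maxRecordedDuration is reached, AVCaptureMovieFileOutput stops the recording on its own and reports it through the file-output delegate with an AVErrorMaximumDurationReached error, so that callback is where you treat the stop as expected rather than as a failure. A minimal sketch:
// AVCaptureFileOutputRecordingDelegate callback: hitting the duration limit
// shows up as an "error", but the movie at outputFileURL is still complete.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    BOOL hitDurationLimit = [error.domain isEqualToString:AVFoundationErrorDomain] &&
                            error.code == AVErrorMaximumDurationReached;
    if (error == nil || hitDurationLimit) {
        NSLog(@"Finished recording to %@", outputFileURL);
    } else {
        NSLog(@"Recording failed: %@", error);
    }
}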

Moving an image with the iOS 7 parallax effect

I just saw Facebook's new Paper app, which makes images move based on the parallax effect. It zooms the image to full screen and, when you tilt the device, it scrolls the image toward the side you tilted. I have been able to add the parallax effect the way Apple does it, but not the way Facebook does it. Does anyone have any idea how they did this? Here is the basic code I was using for parallax:
UIInterpolatingMotionEffect *interpolationHorizontal = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.x" type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
interpolationHorizontal.minimumRelativeValue = @(-10.0);
interpolationHorizontal.maximumRelativeValue = @(10.0);
UIInterpolatingMotionEffect *interpolationVertical = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.y" type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
interpolationVertical.minimumRelativeValue = @(-10.0);
interpolationVertical.maximumRelativeValue = @(10.0);
[self.backgroundView addMotionEffect:interpolationHorizontal];
[self.backgroundView addMotionEffect:interpolationVertical];
Update:
Just found a very nice third-party library for achieving this, called CRMotionView. It works very smoothly and you can modify a lot of things.
Here is the GitHub link: https://github.com/chroman/CRMotionView
==================================================================================
I was thinking of the same parallax effect when I first saw the Facebook Paper app, but after playing with my code for a bit, I don't think parallax is what we're looking for. I could be wrong, but I ended up building it from the ground up with the gyroscope via the motion manager. Here is my sample code:
//import the Core Motion framework first
#import <CoreMotion/CoreMotion.h>
//then add a motion manager property
@property (strong, nonatomic) CMMotionManager *motionManager;
//you can paste all of the following into viewDidLoad
//I added a scroll view in the view controller nib file
self.mainScrollView.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height);
//we don't want it to bounce at each end of the image
self.mainScrollView.bounces = NO;
//and we don't want to allow user scrolling in this case
self.mainScrollView.userInteractionEnabled = NO;
//set up the image view
UIImage *image = [UIImage imageNamed:@"YOUR_IMAGE_NAME"];
UIImageView *movingImageView = [[UIImageView alloc] initWithImage:image];
[self.mainScrollView addSubview:movingImageView];
//set up the content size based on the image size
//in the Facebook Paper case, vertical rotation doesn't do anything,
//so we don't have to set up the content size height
self.mainScrollView.contentSize = CGSizeMake(movingImageView.frame.size.width, self.mainScrollView.frame.size.height);
//center the image initially
self.mainScrollView.contentOffset = CGPointMake((self.mainScrollView.contentSize.width - self.view.frame.size.width) / 2, 0);
//initialize the motion manager and sample the gyroscope every 1/60 second
//the interval may not need to be that fast
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.gyroUpdateInterval = 1.0 / 60.0;
//this is how fast the image should move when the device rotates; the larger the number, the less rotation is required
CGFloat motionMovingRate = 4;
//get the max and min x offsets
int maxXOffset = self.mainScrollView.contentSize.width - self.mainScrollView.frame.size.width;
int minXOffset = 0;
[self.motionManager startGyroUpdatesToQueue:[NSOperationQueue currentQueue]
                                withHandler:^(CMGyroData *gyroData, NSError *error) {
    //since our hands are not perfectly steady,
    //there will always be a small rotation rate between 0.01 and 0.05,
    //so ignore rotation rates below 0.1
    //if you want this to be more sensitive, lower the value here
    if (fabs(gyroData.rotationRate.y) >= 0.1) {
        CGFloat targetX = self.mainScrollView.contentOffset.x - gyroData.rotationRate.y * motionMovingRate;
        //clamp the target x to the min/max offsets
        if (targetX > maxXOffset)
            targetX = maxXOffset;
        else if (targetX < minXOffset)
            targetX = minXOffset;
        //set the content offset
        self.mainScrollView.contentOffset = CGPointMake(targetX, 0);
    }
}];
I tested this on my device and it worked very similarly to Facebook's new app.
However, this is just example code I wrote in half an hour, so it may not be 100% accurate, but I hope it gives you some ideas.

AV Foundation: AVCaptureVideoPreviewLayer and frame duration

I am using AV Foundation to process frames from the video camera (iPhone 4s, iOS 6.1.2). I am setting up the AVCaptureSession, AVCaptureDeviceInput, and AVCaptureVideoDataOutput per the AV Foundation Programming Guide. Everything works as expected and I am able to receive frames in the captureOutput:didOutputSampleBuffer:fromConnection: delegate.
I also have a preview layer set like this:
AVCaptureVideoPreviewLayer *videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
[videoPreviewLayer setFrame:self.view.bounds];
videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer insertSublayer:videoPreviewLayer atIndex:0];
The thing is, I don't need 30 frames per second in my frame handling, and I can't process them that fast anyway, so I am using this code to limit the frame duration:
// videoOutput is AVCaptureVideoDataOutput set earlier
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
[conn setVideoMinFrameDuration:CMTimeMake(1, 10)];
[conn setVideoMaxFrameDuration:CMTimeMake(1, 2)];
This works fine and limits the frames received by the captureOutput delegate.
However, it also limits the frames per second on the preview layer, and the preview video becomes very unresponsive.
I understand from the documentation that the frame duration is set independently on each connection, and the preview layer indeed has a different AVCaptureConnection. Checking the min/max frame durations on [videoPreviewLayer connection] shows that they are indeed set to the defaults (1/30 and 1/24) and differ from the durations set on the connection of the AVCaptureVideoDataOutput.
So, is it possible to limit the frame duration only on the frame-capturing output and still see a 1/24-1/30 frame duration on the preview video? How?
Thanks.
While you're correct that there are two AVCaptureConnections, that doesn't mean their minimum and maximum frame durations can be set independently. This is because they share the same physical hardware.
If connection #1 is activating the rolling shutter at a rate of (say) five frames/sec with a frame duration of 1/5 sec, there is no way that connection #2 can simultaneously activate the shutter 30 times/sec with a frame duration of 1/30 sec.
To get the effect you want would require two cameras!
The only way to get close to what you want is to follow an approach along the lines of that outlined by Kaelin Colclasure in the answer of 22 March.
You do have options for being a little more sophisticated within that approach, however. For example, you can use a counter to decide which frames to drop, rather than making the thread sleep. You can make that counter respond to the actual frame rate coming through (which you can get from the metadata delivered to the captureOutput:didOutputSampleBuffer:fromConnection: delegate along with the image data, or which you can calculate yourself by timing the frames manually). You can even do a very reasonable imitation of a longer exposure by compositing frames rather than dropping them, just as a number of "slow shutter" apps in the App Store do (leaving aside details such as differing rolling-shutter artefacts, there is not really that much difference between one frame scanned at 1/5 sec and five frames each scanned at 1/25 sec and then glued together).
Yes, it's a bit of work, but you are trying to make one video camera behave like two, in real time, and that is never going to be easy.
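To make the counter idea concrete, here is a sketch (not the poster's code; _frameCounter and processImageBuffer: are hypothetical names) that drops all but every third frame inside the data-output delegate, leaving the preview connection, and hence the preview layer, running at its native rate:
// Counter-based frame dropping: the connection keeps its native frame rate
// (so the preview stays smooth) and we skip processing for most frames.
// _frameCounter is a hypothetical ivar; 3 is an arbitrary decimation factor.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (++_frameCounter % 3 != 0) {
        return; // drop this frame: ~10 fps processed out of a 30 fps feed
    }
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self processImageBuffer:imageBuffer]; // hypothetical processing hook
}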
Think of it this way:
You ask the capture device to limit the frame duration, so you get better exposure.
Fine.
You want to preview at a higher frame rate.
If you were to preview at the higher rate, the capture device (the camera) would NOT have enough time to expose the frames, so you would lose the better exposure on the captured frames.
It is like asking to see different frames in the preview than the ones being captured.
I think that, even if it were possible, it would also be a negative user experience.
I had the same issue for my Cocoa (Mac OS X) application. Here's how I solved it:
First, make sure to process the captured frames on a separate dispatch queue. Also make sure any frames you're not ready to process are discarded; this is the default, but I set the flag below anyway just to document that I'm depending on it.
videoQueue = dispatch_queue_create("com.ohmware.LabCam.videoQueue", DISPATCH_QUEUE_SERIAL);
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setSampleBufferDelegate:self queue:videoQueue];
[session addOutput:videoOutput];
Then when processing the frames in the delegate, you can simply have the thread sleep for the desired time interval. Frames that the delegate is not awake to handle are quietly discarded. I implement the optional method for counting dropped frames below just as a sanity check; my application never logs dropping any frames using this technique.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    OSAtomicAdd64(1, &videoSampleBufferDropCount);
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    int64_t savedSampleBufferDropCount = videoSampleBufferDropCount;
    if (savedSampleBufferDropCount && OSAtomicCompareAndSwap64(savedSampleBufferDropCount, 0, &videoSampleBufferDropCount)) {
        NSLog(@"Dropped %lld video sample buffers!!!", savedSampleBufferDropCount);
    }
    // NSLog(@"%s", __func__);
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *cameraImage = [CIImage imageWithCVImageBuffer:imageBuffer];
        CIImage *faceImage = [self faceImage:cameraImage];
        dispatch_sync(dispatch_get_main_queue(), ^{
            [_imageView setCIImage:faceImage];
        });
    }
    [NSThread sleepForTimeInterval:0.5]; // Only want ~2 frames/sec.
}
Hope this helps.
