Whenever I try to set CvVideoCamera's defaultFPS above 30, it stays at 30 fps. It lets me set it lower, but nothing above 30 fps. I'm using an iPhone 7, so I know the hardware is capable of shooting 1920x1080 video at 60 fps. I have looked into using AVCaptureSession directly, but OpenCV's CvVideoCamera gives easy access to individual frames for processing, so I would like to stick with it if at all possible.
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.videoPreviewView];
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset1920x1080;
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
self.videoCamera.defaultFPS = 60; //This still sets it to 30 FPS
self.videoCamera.grayscaleMode = NO;
self.videoCamera.delegate = self;
I am trying to capture video with PBJVision.
I set up the camera like this:
vision.cameraMode = PBJCameraModeVideo;
vision.cameraOrientation = PBJCameraOrientationPortrait;
vision.outputFormat = PBJOutputFormatWidescreen;
This produces 1280x720 output, where 1280 is the width.
Setting the orientation to Landscape ROTATES the stream.
I have been trying to record video with GPUImage, and there I can do
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                                  cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:_movieURL size:CGSizeMake(720.0, 1280.0)];
so that I get vertical output.
I would like to achieve vertical output with PBJVision, because I am having problems with GPUImage writing video to disk (I will ask a separate question about that).
What method/property of AVFoundation is responsible for giving the vertical output instead of horizontal?
Sorry for the question; I have been googling for two days and can't find the answer.
I was having the same issue. I changed the output format to vision.outputFormat = PBJOutputFormatPreset; and now I get portrait/vertical output.
I'm a n00b to AVCaptureSession. I'm using OpenTok to implement video chat. I want to preserve bandwidth and the UI is designed so the video views are only 100 x 100 presently.
This is part of the code from an OpenTok example where it sets the preset:
- (void)setCaptureSessionPreset:(NSString *)preset {
    AVCaptureSession *session = [self captureSession];
    if ([session canSetSessionPreset:preset] &&
        ![preset isEqualToString:session.sessionPreset]) {

        [_captureSession beginConfiguration];
        _captureSession.sessionPreset = preset;
        _capturePreset = preset;

        [_videoOutput setVideoSettings:
            [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithInt:
                    kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                kCVPixelBufferPixelFormatTypeKey,
                nil]];

        [_captureSession commitConfiguration];
    }
}
When I pass in AVCaptureSessionPresetLow (on an iPhone 6), canSetSessionPreset: returns NO. Is there any way to configure AVCaptureSession so I capture video with a frame size as close to 100 x 100 as possible?
Also, is this the correct strategy for trying to save bandwidth?
You cannot force the camera to a resolution it does not support.
A lower-resolution frame size will lead to lower network traffic; lowering the FPS is another way.
A view size does not have to map to a capture resolution. You can always fit a frame into a view of any size.
Look at the Let-Build-OTPublisher app in the OpenTok SDK, and more specifically the TBExampleVideoCapture.m file, for how resolution and FPS are handled.
I am developing an augmented reality application with AVFoundation. Basically, I need to start the camera, provide an instant preview, and grab an image sample every second. Currently I am using AVCaptureVideoPreviewLayer for the camera preview and AVCaptureVideoDataOutput to get sample frames.
The problem is that the frame rate needed for a responsive AVCaptureVideoPreviewLayer is far higher than the rate I want from AVCaptureVideoDataOutput. How can I apply different frame rates to them?
Thanks.
There is no answer yet, so I put my temporary solution here:
First, add a property:
@property (assign, nonatomic) NSTimeInterval lastFrameTimestamp;
And the delegate method (secondsBetweenSampling is a constant you define; here, one second):
static const NSTimeInterval secondsBetweenSampling = 1.0;

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    NSTimeInterval currentFrameTimestamp = (double)timestamp.value / timestamp.timescale;

    if (currentFrameTimestamp - self.lastFrameTimestamp > secondsBetweenSampling) {
        self.lastFrameTimestamp = currentFrameTimestamp;
        // deal with the sampleBuffer here
    }
}
The idea is pretty simple. I keep the frame rate high so the preview layer stays smooth, but when an output arrives I check its timestamp to decide whether to process that frame or simply ignore it.
Still looking for better ideas ;)
I am using the AV Foundation to process frames from the video camera (iPhone 4s, iOS 6.1.2). I am setting up AVCaptureSession, AVCaptureDeviceInput, and AVCaptureVideoDataOutput per the AV Foundation programming guide. Everything works as expected and I am able to receive frames in the captureOutput:didOutputSampleBuffer:fromConnection: delegate.
I also have a preview layer set like this:
AVCaptureVideoPreviewLayer *videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
[videoPreviewLayer setFrame:self.view.bounds];
videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer insertSublayer:videoPreviewLayer atIndex:0];
Thing is, I don't need 30 frames per second in my frame handling and I am not able to process them so fast anyway. So I am using this code to limit the frame duration:
// videoOutput is AVCaptureVideoDataOutput set earlier
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
[conn setVideoMinFrameDuration:CMTimeMake(1, 10)];
[conn setVideoMaxFrameDuration:CMTimeMake(1, 2)];
This works fine and limits the frames received by the captureOutput delegate.
However, this also limits the frames per second on the preview layer and preview video becomes very unresponsive.
I understand from the documentation that the frame duration is set independently on the connection, and the preview layer does indeed have a different AVCaptureConnection. Checking the min/max frame durations on [videoPreviewLayer connection] shows that they are indeed set to the defaults (1/30 and 1/24) and differ from the durations set on the connection of the AVCaptureVideoDataOutput.
So, is it possible to limit the frame duration only on the frame capturing output and still see a 1/24-1/30 frame duration on the preview video? How?
Thanks.
While you're correct that there are two AVCaptureConnections, that doesn't mean they can independently set their minimum and maximum frame durations. This is because they share the same physical hardware.
If connection #1 is activating the rolling shutter at a rate of (say) five frames/sec with a frame duration of 1/5 sec, there is no way that connection #2 can simultaneously activate the shutter 30 times/sec with a frame duration of 1/30 sec.
To get the effect you want would require two cameras!
The only way to get close to what you want is to follow an approach along the lines of that outlined by Kaelin Colclasure in the answer of 22 March.
You do have options of being a little more sophisticated within that approach, however. For example, you can use a counter to decide which frames to drop, rather than making the thread sleep. You can make that counter respond to the actual frame-rate that's coming through (which you can get from the metadata that comes in to the captureOutput:didOutputSampleBuffer:fromConnection: delegate along with the image data, or which you can calculate yourself by manually timing the frames). You can even do a very reasonable imitation of a longer exposure by compositing frames rather than dropping them—just as a number of "slow shutter" apps in the App Store do (leaving aside details—such as differing rolling shutter artefacts—there's not really that much difference between one frame scanned at 1/5 sec and five frames each scanned at 1/25 sec and then glued together).
Yes, it's a bit of work, but you are trying to make one video camera behave like two, in real time—and that's never going to be easy.
Think of it this way:
You ask the capture device to limit frame duration, so you get better exposure.
Fine.
You want to preview at higher frame rate.
If you were to preview at a higher rate, the capture device (the camera) would not have enough time to expose each frame, so you would lose the improved exposure on the captured frames.
It would be like asking to see different frames in the preview than the ones being captured.
I think that, even if it were possible, it would also be a negative user experience.
I had the same issue for my Cocoa (Mac OS X) application. Here's how I solved it:
First, make sure to process the captured frames on a separate dispatch queue. Also make sure any frames you're not ready to process are discarded; this is the default, but I set the flag below anyway just to document that I'm depending on it.
videoQueue = dispatch_queue_create("com.ohmware.LabCam.videoQueue", DISPATCH_QUEUE_SERIAL);
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setSampleBufferDelegate:self queue:videoQueue];
[session addOutput:videoOutput];
Then, when processing the frames in the delegate, you can simply have the thread sleep for the desired time interval. Frames that arrive while the delegate is asleep are quietly discarded. I implement the optional delegate method for counting dropped frames below as a sanity check; my application never logs dropping any frames with this technique.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection;
{
    OSAtomicAdd64(1, &videoSampleBufferDropCount);
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection;
{
    int64_t savedSampleBufferDropCount = videoSampleBufferDropCount;
    if (savedSampleBufferDropCount && OSAtomicCompareAndSwap64(savedSampleBufferDropCount, 0, &videoSampleBufferDropCount)) {
        NSLog(@"Dropped %lld video sample buffers!!!", savedSampleBufferDropCount);
    }

    // NSLog(@"%s", __func__);
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *cameraImage = [CIImage imageWithCVImageBuffer:imageBuffer];
        CIImage *faceImage = [self faceImage:cameraImage];
        dispatch_sync(dispatch_get_main_queue(), ^ {
            [_imageView setCIImage:faceImage];
        });
    }

    [NSThread sleepForTimeInterval:0.5]; // Only want ~2 frames/sec.
}
Hope this helps.