I'm a n00b to AVCaptureSession. I'm using OpenTok to implement video chat. I want to preserve bandwidth, and the UI is currently designed so the video views are only 100 x 100.
This is part of the code from an OpenTok example where it sets the preset:
- (void)setCaptureSessionPreset:(NSString *)preset {
    AVCaptureSession *session = [self captureSession];
    if ([session canSetSessionPreset:preset] &&
        ![preset isEqualToString:session.sessionPreset]) {
        [_captureSession beginConfiguration];
        _captureSession.sessionPreset = preset;
        _capturePreset = preset;
        [_videoOutput setVideoSettings:
            [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                kCVPixelBufferPixelFormatTypeKey,
                nil]];
        [_captureSession commitConfiguration];
    }
}
When I pass in AVCaptureSessionPresetLow (on an iPhone 6), canSetSessionPreset: returns NO. Is there any way to configure AVCaptureSession so that I capture video with a frame size as close to 100 x 100 as possible?
Also, is this the correct strategy for trying to save bandwidth?
You cannot force the camera to a resolution it does not support.
A lower-resolution frame size will lead to lower network traffic.
Lowering the FPS is another way.
A view size does not have to map to a capture resolution; you can always fit a frame into a view of any size.
Look at the Let-Build-OTPublisher app in the OpenTok SDK, and more specifically the TBExampleVideoCapture.m file, to see how resolution and FPS are handled.
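For illustration, here is a minimal plain-AVFoundation sketch of both ideas (smallest preset plus a capped frame rate). The helper name and the 15 fps value are just examples, and OpenTok's own capture pipeline may manage these settings differently:

// Sketch: pick the smallest supported preset and cap the frame rate.
// AVCaptureSessionPreset352x288 is the lowest built-in preset on most devices;
// 15 fps is an arbitrary example value.
- (void)reduceBandwidthForSession:(AVCaptureSession *)session
                           device:(AVCaptureDevice *)camera {
    [session beginConfiguration];
    if ([session canSetSessionPreset:AVCaptureSessionPreset352x288]) {
        session.sessionPreset = AVCaptureSessionPreset352x288;
    }
    [session commitConfiguration];

    NSError *error = nil;
    if ([camera lockForConfiguration:&error]) {
        // 1/15 s per frame => at most 15 fps.
        camera.activeVideoMinFrameDuration = CMTimeMake(1, 15);
        camera.activeVideoMaxFrameDuration = CMTimeMake(1, 15);
        [camera unlockForConfiguration];
    }
}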
I need some help with our CameraView inside our app. We have built a CameraView with Rectangle / Zoom capabilities (in a webView). When integrating this webCameraView into our app, accessing the camera does not work. It seems we have two options:
Make the app allow camera access from a webView (is this still not possible in 2021?)
Build a copy of the webCameraView directly in the app, with a rectangle frame instead
Above is what the webCameraView looks like in the app. The Rectangle / Zoom UI is displayed, but the app does not get access to the camera.
If it is not possible to access the camera via a webView in the app, I need to go for alternative 2.
I will try to build the same UI directly in the app in Objective-C, but good, up-to-date Objective-C tutorials are hard to find. All the Swift folks have taken over the town! This is what I have done so far:
- (void)viewDidLoad {
    [super viewDidLoad];

    //-- Set up the capture session.
    _captureSession = [[AVCaptureSession alloc] init];

    //-- Create a video device and an input from that device. Add the input to the capture session.
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (videoDevice == nil)
        assert(0);

    //-- Add the device to the session.
    NSError *error;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice
                                                                        error:&error];
    if (input == nil)
        assert(0);
    [_captureSession addInput:input];

    //-- Configure the preview layer.
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [_previewLayer setFrame:CGRectMake(0, 0,
                                       self.view.frame.size.width,
                                       self.view.frame.size.height)];

    //-- Add the layer to the view that should display the camera input.
    [self.view.layer addSublayer:_previewLayer];

    //-- Start the camera.
    [_captureSession startRunning];
}
The code above gives me access to the camera.
How do I create a draggable Rectangle / Zoom area in the middle of the screen in Objective-C?
Also, with the capability to take a photo (where the photo taken should only contain the area inside the rectangle)?
Help!
The sample code you provided is for native camera control; I wonder what it has to do with the webview camera.
If you want to turn on the system camera in a webview and process the captured content, you can look at WebRTC (getUserMedia). This has version restrictions, but it is available in WKWebView starting with iOS 14.3.
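As a rough sketch (assuming a UIViewController), something like the following WKWebView setup can host a page that calls getUserMedia on iOS 14.3 or later. The URL is a placeholder, and the app still needs an NSCameraUsageDescription entry in its Info.plist:

#import <WebKit/WebKit.h>

// Sketch: host a getUserMedia page in a WKWebView (iOS 14.3+).
// https://example.com/camera is a placeholder URL.
- (void)loadWebCameraView {
    WKWebViewConfiguration *config = [[WKWebViewConfiguration alloc] init];
    config.allowsInlineMediaPlayback = YES;  // keep the camera stream inline
    config.mediaTypesRequiringUserActionForPlayback = WKAudiovisualMediaTypeNone;

    WKWebView *webView = [[WKWebView alloc] initWithFrame:self.view.bounds
                                            configuration:config];
    [self.view addSubview:webView];
    [webView loadRequest:[NSURLRequest requestWithURL:
        [NSURL URLWithString:@"https://example.com/camera"]]];
}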
I'm building a custom camera to film at Full HD or just HD quality. The issue is that after I set the camera to 25 frames per second with the following code:
- (void)setFrameRate:(AVCaptureDevice *)camera {
    NSError *error;
    if (![camera lockForConfiguration:&error]) {
        NSLog(@"Could not lock device %@ for configuration: %@", camera, error);
        return;
    }
    AVCaptureDeviceFormat *format = camera.activeFormat;
    double epsilon = 0.00000001;
    int desiredFrameRate = 25;
    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        if (range.minFrameRate <= (desiredFrameRate + epsilon) &&
            range.maxFrameRate >= (desiredFrameRate - epsilon)) {
            [camera setActiveVideoMaxFrameDuration:CMTimeMake(10, desiredFrameRate * 10)];
            [camera setActiveVideoMinFrameDuration:CMTimeMake(10, desiredFrameRate * 10)];
            break;
        }
    }
    [camera unlockForConfiguration];
}
It changes the video fps, but not to exactly 25 frames per second as set in the method. It fluctuates between 23.93 and 25.50 frames per second.
Does anyone know why?
After several attempts and some debugging, I found out that the issue with the frame rate not being exactly 25 fps has to do with the recording method, not with the device setup.
I was using an AVAssetWriter object to record the video, like the example shown at the following link (https://reformatcode.com/code/ios/ios-8-film-from-both-back-and-front-camera).
But there was no way to get exactly 25 fps.
Changing the recording object to AVCaptureMovieFileOutput made setup and recording much easier. The result is much more precise: between 25 and 25.01 fps.
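For reference, a minimal sketch of the AVCaptureMovieFileOutput setup described above. It assumes _captureSession is the already-configured session; the output path and the delegate wiring are placeholders:

// Sketch: record with AVCaptureMovieFileOutput instead of AVAssetWriter.
// Assumes _captureSession is already configured; the file path is a placeholder.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([_captureSession canAddOutput:movieOutput]) {
    [_captureSession addOutput:movieOutput];
}

NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mov"]];
// self must conform to AVCaptureFileOutputRecordingDelegate.
[movieOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];

// ... later, to stop:
[movieOutput stopRecording];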
I'm using AVCaptureDevice to capture video frames for image processing, and I would like to control the exposure duration and ISO by using
setExposureModeCustomWithDuration:(CMTime)duration ISO:(float)ISO completionHandler:(void (^)(CMTime syncTime))handler
I expected this to keep the exposure level fixed at my setting.
However, when I set the mode to AVCaptureExposureModeCustom, I can see the preview image's brightness change as I move the camera to different positions (especially from a bright to a dark area).
When I change the exposure mode to AVCaptureExposureModeLocked instead, the brightness stays fixed.
I have checked that white balance, focus, and torch mode are locked or disabled, isAdjustingExposure stays false, and the exposure duration and ISO parameters do not change while this problem is happening.
- (void)setCameraExposure:(CMTime)exposureDuration ISO:(int)iso
{
    [self.avSession beginConfiguration];
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (videoDevice) {
        [videoDevice lockForConfiguration:nil];
        // Set exposure duration and ISO in custom mode
        if ([videoDevice isExposureModeSupported:AVCaptureExposureModeCustom]) {
            [videoDevice setExposureModeCustomWithDuration:exposureDuration
                                                       ISO:iso
                                         completionHandler:nil];
        }
        // If I use AVCaptureExposureModeLocked, the brightness is fixed
        //if ([videoDevice isExposureModeSupported:AVCaptureExposureModeLocked]) {
        //    [videoDevice setExposureMode:AVCaptureExposureModeLocked];
        //}
        // Lock white balance
        if ([videoDevice isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeLocked]) {
            [videoDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeLocked];
        }
        [videoDevice unlockForConfiguration];
    }
    [self.avSession commitConfiguration];
}
I want the camera not to change brightness while I'm using AVCaptureExposureModeCustom. Is that possible?
I've tried multiple variations of this, but none of them seem to work. Any ideas?
I am trying to create a custom camera experience on iOS, and the following code snippet is as far as I have got. Basically I want the usual camera view (i.e. with the following buttons: capture, flash, grid, front/back, cancel). The only difference between the normal camera and mine is that I want a square preview surface, not a rectangle. And then, what you see is what you get (WYSIWYG), so no cropping is necessary, since the user will have taken a square picture in the first place.
I have also been looking at the library https://github.com/danielebogo/DBCamera but I am not seeing how to customize it for my purpose. Any help? Thanks.
MY CODE SO FAR:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    //Capture Session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    //Add device
    AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    //Input
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    if (!input)
    {
        NSLog(@"No Input");
    }
    [session addInput:input];

    //Output
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings =
        @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    //Preview Layer
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    UIView *myView = self.view;
    previewLayer.frame = myView.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:previewLayer];

    //Start capture session
    [session startRunning];
}
This is the only custom code in a single-view project in Xcode.
You have two options for doing what you want: either stick with and customize a UIImagePickerController, or create your own camera using AVFoundation.
The UIImagePickerController does provide a fair number of customization options, and this similar thread has some good information on that: link.
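For example, here is a minimal sketch of the UIImagePickerController route with the stock controls replaced by your own overlay; the empty overlay view is a placeholder for your square frame and buttons:

// Sketch: UIImagePickerController with custom controls via cameraOverlayView.
// self must conform to UIImagePickerControllerDelegate and UINavigationControllerDelegate.
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;   // hide the stock buttons
picker.delegate = self;

// Placeholder overlay: put your own square frame, capture button, etc. here.
UIView *overlay = [[UIView alloc] initWithFrame:self.view.bounds];
picker.cameraOverlayView = overlay;

[self presentViewController:picker animated:YES completion:nil];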
If you still want to make your own, I suggest heading over to the Apple documentation and checking out the demo project called AVCam: link. However, it's way more in-depth than you'll probably need, so I can recommend this video tutorial as well: link.
If you go for the latter option, I'd like to mention that to make the "actual camera" fill the frame of your previewLayer, you can set the videoGravity of the AVCaptureVideoPreviewLayer to AVLayerVideoGravityResizeAspectFill.
Working with a custom camera can be a bit of a pain, but it’ll pay dividends given that you’ll really be able to customize your app experience.
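To illustrate that last point, here is a minimal sketch of a square, aspect-filling preview layer; the 300-point side length, the centering math, and the already-configured session variable are assumptions:

// Sketch: square preview that crops the camera feed to fill it.
// Assumes `session` is an already-configured AVCaptureSession.
AVCaptureVideoPreviewLayer *previewLayer =
    [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
CGFloat side = 300.0;   // example size
previewLayer.frame = CGRectMake((self.view.bounds.size.width - side) / 2.0,
                                (self.view.bounds.size.height - side) / 2.0,
                                side, side);
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // crop, don't letterbox
[self.view.layer addSublayer:previewLayer];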
The easiest way to do it is to use TGCameraViewController.
Using TGCameraViewController, you can customize the whole camera view. It also provides the following functionality:
Easy way to access album (camera roll)
Flash auto, off and on
Focus
Front and back camera
You can also refer to the AVCamManual: Extending AVCam to Use Manual Capture document for creating your own custom camera.
I am using AV Foundation to process frames from the video camera (iPhone 4S, iOS 6.1.2). I am setting up the AVCaptureSession, AVCaptureDeviceInput, and AVCaptureVideoDataOutput per the AV Foundation Programming Guide. Everything works as expected, and I am able to receive frames in the captureOutput:didOutputSampleBuffer:fromConnection: delegate.
I also have a preview layer set like this:
AVCaptureVideoPreviewLayer *videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
[videoPreviewLayer setFrame:self.view.bounds];
videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer insertSublayer:videoPreviewLayer atIndex:0];
Thing is, I don't need 30 frames per second in my frame handling, and I can't process them that fast anyway. So I am using this code to limit the frame duration:
// videoOutput is AVCaptureVideoDataOutput set earlier
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
[conn setVideoMinFrameDuration:CMTimeMake(1, 10)];
[conn setVideoMaxFrameDuration:CMTimeMake(1, 2)];
This works fine and limits the frames received by the captureOutput delegate.
However, it also limits the frames per second on the preview layer, and the preview video becomes very unresponsive.
I understand from the documentation that the frame duration is set independently per connection, and the preview layer does indeed have a different AVCaptureConnection. Checking the min/max frame durations on [videoPreviewLayer connection] shows that they are indeed set to the defaults (1/30 and 1/24) and differ from the durations set on the connection of the AVCaptureVideoDataOutput.
So, is it possible to limit the frame duration only on the frame capturing output and still see a 1/24-1/30 frame duration on the preview video? How?
Thanks.
While you're correct that there are two AVCaptureConnections, that doesn't mean they can independently set the minimum and maximum frame durations. This is because they share the same physical hardware.
If connection #1 is activating the rolling shutter at a rate of (say) five frames/sec with a frame duration of 1/5 sec, there is no way that connection #2 can simultaneously activate the shutter 30 times/sec with a frame duration of 1/30 sec.
To get the effect you want would require two cameras!
The only way to get close to what you want is to follow an approach along the lines of the one outlined by Kaelin Colclasure in the answer of 22 March.
You do have options of being a little more sophisticated within that approach, however. For example, you can use a counter to decide which frames to drop, rather than making the thread sleep. You can make that counter respond to the actual frame-rate that's coming through (which you can get from the metadata that comes in to the captureOutput:didOutputSampleBuffer:fromConnection: delegate along with the image data, or which you can calculate yourself by manually timing the frames). You can even do a very reasonable imitation of a longer exposure by compositing frames rather than dropping them—just as a number of "slow shutter" apps in the App Store do (leaving aside details—such as differing rolling shutter artefacts—there's not really that much difference between one frame scanned at 1/5 sec and five frames each scanned at 1/25 sec and then glued together).
Yes, it's a bit of work, but you are trying to make one video camera behave like two, in real time—and that's never going to be easy.
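As a rough sketch of the counter idea: the _frameCounter and _keepEveryNthFrame instance variables and the processImageBuffer: method are assumed names, not part of any code shown here.

// Sketch: drop frames with a counter instead of sleeping the delegate thread.
// _frameCounter and _keepEveryNthFrame are assumed instance variables.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    _frameCounter++;
    if (_frameCounter % _keepEveryNthFrame != 0) {
        return; // skip this frame; the preview connection is unaffected
    }
    // Process only every Nth frame here.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self processImageBuffer:imageBuffer]; // placeholder processing method
}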
Think of it this way:
You ask the capture device to limit frame duration, so you get better exposure.
Fine.
You want to preview at higher frame rate.
If you were to preview at the higher rate, the capture device (the camera) would NOT have enough time to expose the frames, so you would lose the better exposure on the captured frames.
It is like asking to see different frames in the preview than the ones being captured.
I think that, even if it were possible, it would also be a negative user experience.
I had the same issue for my Cocoa (Mac OS X) application. Here's how I solved it:
First, make sure to process the captured frames on a separate dispatch queue. Also make sure any frames you're not ready to process are discarded; this is the default, but I set the flag below anyway just to document that I'm depending on it.
videoQueue = dispatch_queue_create("com.ohmware.LabCam.videoQueue", DISPATCH_QUEUE_SERIAL);
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setSampleBufferDelegate:self
queue:videoQueue];
[session addOutput:videoOutput];
Then when processing the frames in the delegate, you can simply have the thread sleep for the desired time interval. Frames that the delegate is not awake to handle are quietly discarded. I implement the optional method for counting dropped frames below just as a sanity check; my application never logs dropping any frames using this technique.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection;
{
    OSAtomicAdd64(1, &videoSampleBufferDropCount);
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection;
{
    int64_t savedSampleBufferDropCount = videoSampleBufferDropCount;
    if (savedSampleBufferDropCount && OSAtomicCompareAndSwap64(savedSampleBufferDropCount, 0, &videoSampleBufferDropCount)) {
        NSLog(@"Dropped %lld video sample buffers!!!", savedSampleBufferDropCount);
    }
    // NSLog(@"%s", __func__);
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *cameraImage = [CIImage imageWithCVImageBuffer:imageBuffer];
        CIImage *faceImage = [self faceImage:cameraImage];
        dispatch_sync(dispatch_get_main_queue(), ^{
            [_imageView setCIImage:faceImage];
        });
    }
    [NSThread sleepForTimeInterval:0.5]; // Only want ~2 frames/sec.
}
Hope this helps.