My app records video, so I use AVCaptureSession. When I look at the capture session preview, however, I notice that the quality is low, especially when pointing the camera at text on a television or computer screen. How can I increase the quality of my capture session so that it displays text on computer screens more clearly? This is the part of my code that deals with video quality:
self.CaptureSession = [[AVCaptureSession alloc] init];
self.CaptureSession.automaticallyConfiguresApplicationAudioSession = NO;
[self.CaptureSession setSessionPreset:AVCaptureSessionPresetHigh];

//----- ADD INPUTS -----
// ADD VIDEO INPUT
self.VideoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([self.VideoDevice hasTorch]) {
    self.flashSet.hidden = NO;
    self.flashOverlayObject.hidden = NO;
}
NSError *configError = nil;
if ([self.VideoDevice lockForConfiguration:&configError]) {
    // Guard each setting: applying an unsupported mode raises an exception.
    if (self.VideoDevice.autoFocusRangeRestrictionSupported) {
        [self.VideoDevice setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
    }
    if ([self.VideoDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        [self.VideoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    }
    if ([self.VideoDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
        [self.VideoDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
    }
    if ([self.VideoDevice isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance]) {
        [self.VideoDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
    }
    if (self.VideoDevice.lowLightBoostSupported) {
        self.VideoDevice.automaticallyEnablesLowLightBoostWhenAvailable = YES;
    }
    [self.VideoDevice unlockForConfiguration];
}
if (self.VideoDevice) {
    NSError *error;
    self.VideoInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:self.VideoDevice error:&error];
    if (!error && [self.CaptureSession canAddInput:self.VideoInputDevice]) {
        [self.CaptureSession addInput:self.VideoInputDevice];
    }
}
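One direction worth trying (a hedged sketch, not a guaranteed fix): the photo preset generally delivers a higher-resolution preview than AVCaptureSessionPresetHigh, at the cost of frame rate, which can make on-screen text noticeably crisper.

// Sketch: try the photo preset for a higher-resolution preview.
// Assumption: the session is configured before it starts running, and the
// app can tolerate the lower frame rate this preset may impose.
if ([self.CaptureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    [self.CaptureSession setSessionPreset:AVCaptureSessionPresetPhoto];
}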
I'm writing an app that needs to look at the raw video (custom edge detection, etc.) and also use the metadata barcode reader. Even though AVCaptureSession has an addOutput: method rather than a setOutput: method, only one output seems to deliver at a time: the first one added wins. If I add the AVCaptureVideoDataOutput first, its delegate gets called; if I add the AVCaptureMetadataOutput first, its delegate gets called. Has anyone figured out a way around this, short of removing the other output every other frame?
I was able to add both AVCaptureVideoDataOutput and AVCaptureMetadataOutput:
NSError *error = nil;
self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];

// Select a video device, make an input
AVCaptureDevice *captureDevice = nil;
AVCaptureDevicePosition desiredPosition = AVCaptureDevicePositionFront;

// Find the front-facing camera
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if ([device position] == desiredPosition) {
        captureDevice = device;
        break;
    }
}

AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (!error) {
    [self.captureSession beginConfiguration];

    // Add the input to the session
    if ([self.captureSession canAddInput:deviceInput]) {
        [self.captureSession addInput:deviceInput];
    }

    // Metadata output for QR codes
    AVCaptureMetadataOutput *metadataOutput = [AVCaptureMetadataOutput new];
    if ([self.captureSession canAddOutput:metadataOutput]) {
        [self.captureSession addOutput:metadataOutput];
        self.metaDataOutputQueue = dispatch_queue_create("MetaDataOutputQueue", DISPATCH_QUEUE_SERIAL);
        [metadataOutput setMetadataObjectsDelegate:self queue:self.metaDataOutputQueue];
        [metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
    }

    // Video data output for the raw frames
    self.videoDataOutput = [AVCaptureVideoDataOutput new];
    if ([self.captureSession canAddOutput:self.videoDataOutput]) {
        [self.captureSession addOutput:self.videoDataOutput];
        NSDictionary *rgbOutputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCMPixelFormat_32BGRA) };
        [self.videoDataOutput setVideoSettings:rgbOutputSettings];
        [self.videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
        self.videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
        [self.videoDataOutput setSampleBufferDelegate:self queue:self.videoDataOutputQueue];
        [[self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];
    }

    [self.captureSession commitConfiguration];
    [self.captureSession startRunning];
}
I am a newbie to iPhone development and have made a demo using AVFoundation for taking pictures. I take at most 5 images, but the issue is that the 1st image always comes out different compared to the other images. Can anybody help me resolve it? The code is below:
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Added by jigar
session.sessionPreset = AVCaptureSessionPreset640x480;
// [[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480];
// End

// Create device input and add to current session
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
[session addInput:input];

// Create video output and add to current session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];

// Start session configuration
[session beginConfiguration];
[device lockForConfiguration:nil];

// Set the torch mode to match the picker's flash mode
if (flashMode == UIImagePickerControllerCameraFlashModeAuto) {
    device.torchMode = AVCaptureTorchModeAuto;
} else if (flashMode == UIImagePickerControllerCameraFlashModeOn) {
    device.torchMode = AVCaptureTorchModeOn;
} else if (flashMode == UIImagePickerControllerCameraFlashModeOff) {
    device.torchMode = AVCaptureTorchModeOff;
}

[device unlockForConfiguration];
[session commitConfiguration];

// [session startRunning];
[currentPicker takePicture];
// [session stopRunning];
// session = nil;
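One hedged guess at the cause (not confirmed in this thread): the first picture may be taken before auto-exposure has settled after the torch change. A sketch of waiting for the device's adjustingExposure flag via key-value observing (currentPicker comes from the code above):

// Hedged sketch: observe auto-exposure so the first shot is not taken
// before the camera has settled. Remember to call removeObserver: later.
- (void)startObservingExposure:(AVCaptureDevice *)device {
    [device addObserver:self forKeyPath:@"adjustingExposure"
                options:NSKeyValueObservingOptionNew context:NULL];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context {
    AVCaptureDevice *device = (AVCaptureDevice *)object;
    if ([keyPath isEqualToString:@"adjustingExposure"] && !device.adjustingExposure) {
        // Exposure has settled; the first picture should now match the rest.
        [currentPicker takePicture];
    }
}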
Using Xcode 6.4, iOS 8.4.1:
With AVCaptureSession, I would like to zoom out as far as the iPhone camera can possibly manage.
I already use the setVideoZoomFactor method with the factor set to 1 (its smallest allowed value). This works quite well (see the code example at the very bottom). But then I made the following observation, which suggests that the camera in photo mode manages to zoom further out than it does in video mode:
The iPhone camera in photo mode shows a completely different zoom than the camera in video mode (at least on my iPhone 5S). You can test this yourself using the native Camera app on your iPhone: switch between PHOTO and VIDEO and you will see that photo mode can zoom further out than a video zoom factor of 1. How is that possible?
Moreover, is there any way of achieving, in video mode using AVCam under iOS, the same minimal zoom factor that photo mode achieves?
Here is an illustration of the zoom difference between photo mode and video mode on my iPhone 5S (see picture):
Here is the code of the AVCamViewController:
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create the AVCaptureSession
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [self setSession:session];

    // Setup the preview view
    [[self previewView] setSession:session];

    // Check for device authorization
    [self checkDeviceAuthorizationStatus];

    dispatch_queue_t sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);
    [self setSessionQueue:sessionQueue];

    // http://stackoverflow.com/questions/25110055/ios-captureoutputdidoutputsamplebufferfromconnection-is-not-called
    dispatch_async(sessionQueue, ^{
        self.session = [AVCaptureSession new];
        self.session.sessionPreset = AVCaptureSessionPresetMedium;

        NSArray *devices = [AVCaptureDevice devices];
        AVCaptureDevice *backCamera;
        for (AVCaptureDevice *device in devices) {
            if ([device hasMediaType:AVMediaTypeVideo]) {
                if ([device position] == AVCaptureDevicePositionBack) {
                    backCamera = device;
                }
            }
        }

        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
        if (error) {
            NSLog(@"%@", error);
        }
        if ([self.session canAddInput:input]) {
            [self.session addInput:input];
        }

        AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
        [output setSampleBufferDelegate:self queue:sessionQueue];
        output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        if ([self.session canAddOutput:output]) {
            [self.session addOutput:output];
        }

        // Apply the initial videoZoomFactor to the device
        NSNumber *DefaultZoomFactor = [NSNumber numberWithFloat:1.0];
        if ([backCamera lockForConfiguration:&error])
        {
            // HERE IS THE ZOOMING DONE !!!!!!
            [backCamera setVideoZoomFactor:[DefaultZoomFactor floatValue]];
            [backCamera unlockForConfiguration];
        }
        else
        {
            NSLog(@"%@", error);
        }

        [self.session startRunning];
    });
}
If your problem is that your app's preview looks zoomed in compared to the native iOS Camera app, then this setup will probably help you. I had this issue and the following solution fixed it.
The solution: configure your session with the photo preset.
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create the AVCaptureSession
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [self setSession:session];
--> self.session.sessionPreset = AVCaptureSessionPresetPhoto; <---
    ...
}
Be aware that this setup only works for photos, not for videos; if you try it with videos, your app will crash. You can configure your session based on your needs (photo or video). For video you can use this value: AVCaptureSessionPresetHigh. A sketch of switching between the two follows.
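As a sketch of how that mode switch might look (the wantsPhoto flag here is hypothetical), guard the change with canSetSessionPreset: so an unsupported preset is never applied:

// Hypothetical wantsPhoto flag: pick a preset per capture mode and only
// apply it when the session actually supports it.
NSString *preset = wantsPhoto ? AVCaptureSessionPresetPhoto : AVCaptureSessionPresetHigh;
if ([self.session canSetSessionPreset:preset]) {
    self.session.sessionPreset = preset;
}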
BR.
In my app I'm capturing images using AVFoundation. I made a button to switch between the front and back cameras, but it won't work. Here's the code I used:
if (captureDevice.position == AVCaptureDevicePositionFront) {
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == AVCaptureDevicePositionBack) {
            NSError *error;
            AVCaptureDeviceInput *newDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:device error:&error];
            [captureSession beginConfiguration];
            for (AVCaptureDeviceInput *oldInput in [captureSession inputs]) {
                [captureSession removeInput:oldInput];
            }
            if ([captureSession canAddInput:newDeviceInput]) {
                [captureSession addInput:newDeviceInput];
            }
            [captureSession commitConfiguration];
            break;
        }
    }
}
THX.
If your captureSession's sessionPreset is not compatible with the camera you're switching to, it will fail the canAddInput: test. I always reset to AVCaptureSessionPresetHigh before toggling cameras, then try to switch back to whatever preset I prefer. Here's the code I use:
- (void)toggleCamera {
    AVCaptureDevicePosition newPosition = self.currentCameraPosition == AVCaptureDevicePositionBack ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    AVCaptureDevice *device = [self videoDeviceWithPosition:newPosition];
    AVCaptureDeviceInput *deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:device error:nil];

    [_captureSession beginConfiguration];
    [_captureSession removeInput:self.deviceInput];
    // Always reset the preset before testing canAddInput:, because an
    // incompatible preset will cause it to return NO.
    [_captureSession setSessionPreset:AVCaptureSessionPresetHigh];
    if ([_captureSession canAddInput:deviceInput]) {
        [_captureSession addInput:deviceInput];
        self.deviceInput = deviceInput;
        self.currentCameraPosition = newPosition;
    } else {
        [_captureSession addInput:self.deviceInput];
    }
    if ([device supportsAVCaptureSessionPreset:self.sessionPreset]) {
        [_captureSession setSessionPreset:self.sessionPreset];
    }
    if ([device lockForConfiguration:nil]) {
        [device setSubjectAreaChangeMonitoringEnabled:YES];
        [device unlockForConfiguration];
    }
    [_captureSession commitConfiguration];
}
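The videoDeviceWithPosition: helper isn't shown above; a minimal sketch of it (using the same device enumeration as the other snippets on this page) could look like this:

// Sketch of the helper used above: return the video device at the
// requested position, or nil if none matches.
- (AVCaptureDevice *)videoDeviceWithPosition:(AVCaptureDevicePosition)position {
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}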
I have seen issues with toggle code not working when it is not run on the main thread. Can you try wrapping your code in the following block:
dispatch_async(dispatch_get_main_queue(), ^{
    // Your camera toggle code goes here
});
Just wondering if this is possible:
I've been looking at various solutions for displaying the camera preview. Doing so in full-screen mode is relatively straightforward, but what I'd like is to scale the preview to 50% of the screen and present it side by side with a graphic (not an overlay, but a separate graphic to the left of the camera preview that takes up equal space). The purpose is to let the user compare the camera preview with the graphic.
So, what I need to know is:
a) is it possible to scale the camera preview to a lower resolution
b) can it share the screen on an iPad with another graphic which isn't an overlay
c) if a and b are true, is there any example source I might be pointed to please?
Thanks!
You can just use the following code:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.opaque = YES;
previewLayer.contentsScale = self.view.contentScaleFactor;
previewLayer.frame = self.view.bounds;
previewLayer.needsDisplayOnBoundsChange = YES;
[self.view.layer addSublayer:previewLayer];
Just replace the previewLayer.frame = self.view.bounds; line to give the preview layer a different frame.
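For the 50/50 split described in the question, a sketch of that replacement (assuming a hypothetical graphicView UIImageView occupying the left half of an iPad screen) might be:

// Sketch: split the view in half, graphic on the left, preview on the
// right (graphicView is a hypothetical UIImageView, not from the question).
CGRect leftHalf, rightHalf;
CGRectDivide(self.view.bounds, &leftHalf, &rightHalf,
             self.view.bounds.size.width / 2.0, CGRectMinXEdge);
self.graphicView.frame = leftHalf;
previewLayer.frame = rightHalf;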
You can create the captureSession with this code:
captureSession = [[AVCaptureSession alloc] init];
if (!captureSession)
{
    NSLog(@"Failed to create video capture session");
    return NO;
}

[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPreset640x480;

// Note: AVCaptureDevice's position property is read-only, so select the
// front camera by enumerating the devices instead of assigning to it.
AVCaptureDevice *videoDevice = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
{
    if (device.position == AVCaptureDevicePositionFront)
    {
        videoDevice = device;
        break;
    }
}
if (!videoDevice)
{
    NSLog(@"Couldn't create video capture device");
    [captureSession release];
    captureSession = nil;
    return NO;
}

if ([videoDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
{
    NSError *deviceError = nil;
    if ([videoDevice lockForConfiguration:&deviceError])
    {
        [videoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        [videoDevice unlockForConfiguration];
    }
    else
    {
        NSLog(@"Couldn't lock device for configuration");
    }
}

NSError *error;
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if (!videoIn)
{
    NSLog(@"Couldn't create video capture device input: %@ - %@", [error localizedDescription], [error localizedFailureReason]);
    [captureSession release];
    captureSession = nil;
    return NO;
}
if (![captureSession canAddInput:videoIn])
{
    NSLog(@"Couldn't add video capture device input");
    [captureSession release];
    captureSession = nil;
    return NO;
}
[captureSession addInput:videoIn];
[captureSession commitConfiguration];
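Once the session is configured this way it still has to be started, with the preview layer from the earlier snippet attached, before any frames appear; a minimal follow-up:

// Start the flow of data from inputs to outputs; the preview layer
// created earlier will begin rendering once this is called.
[captureSession startRunning];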