iOS 5 rear camera preview in non full-screen mode - ios

Just wondering if this is possible:
I've been looking at various solutions for displaying the camera preview; doing so in full-screen mode is relatively straightforward, but what I'd like to do is scale it to 50% of the screen and present it side by side with a graphic (not an overlay, but a separate graphic to the left of the camera preview which takes up equal space). Basically, the purpose is to allow the user to compare the camera preview with the graphic.
So, what I need to know is:
a) is it possible to scale the camera preview to a lower resolution
b) can it share the screen on an iPad with another graphic which isn't an overlay
c) if a and b are true, is there any example source I might be pointed to please?
Thanks!

You can just use the following code:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.opaque = YES;
previewLayer.contentsScale = self.view.contentScaleFactor;
previewLayer.frame = self.view.bounds;
previewLayer.needsDisplayOnBoundsChange = YES;
[self.view.layer addSublayer:previewLayer];
Just change line 5 (the previewLayer.frame assignment) to give the preview layer whatever frame you need.
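For the 50/50 side-by-side layout from the question, that frame change could look like this (a sketch; the compareImageView name and the image asset are my assumptions, not part of the original answer):
// Left half of the screen: the reference graphic; right half: the camera preview.
CGRect bounds = self.view.bounds;
CGRect leftHalf = CGRectMake(0.0, 0.0, bounds.size.width / 2.0, bounds.size.height);
CGRect rightHalf = CGRectMake(bounds.size.width / 2.0, 0.0, bounds.size.width / 2.0, bounds.size.height);
UIImageView *compareImageView = [[UIImageView alloc] initWithFrame:leftHalf];
compareImageView.image = [UIImage imageNamed:@"ReferenceGraphic"]; // hypothetical asset name
[self.view addSubview:compareImageView];
[compareImageView release]; // pre-ARC, matching the answer's memory management
previewLayer.frame = rightHalf; // the preview scales itself into this rect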
You can create the captureSession with this code:
captureSession = [[AVCaptureSession alloc] init];
if(!captureSession)
{
    NSLog(@"Failed to create video capture session");
    return NO;
}
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPreset640x480;
// position is read-only on AVCaptureDevice, so to get the front camera we
// search the available devices rather than assigning to videoDevice.position.
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
for(AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
{
    if(device.position == AVCaptureDevicePositionFront)
    {
        videoDevice = device;
        break;
    }
}
if(!videoDevice)
{
    NSLog(@"Couldn't create video capture device");
    [captureSession release];
    captureSession = nil;
    return NO;
}
if([videoDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
{
    NSError *deviceError = nil;
    if([videoDevice lockForConfiguration:&deviceError])
    {
        [videoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        [videoDevice unlockForConfiguration];
    }
    else
    {
        NSLog(@"Couldn't lock device for configuration");
    }
}
NSError *error;
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if(!videoIn)
{
    NSLog(@"Couldn't create video capture device input: %@ - %@", [error localizedDescription], [error localizedFailureReason]);
    [captureSession release];
    captureSession = nil;
    return NO;
}
if(![captureSession canAddInput:videoIn])
{
    NSLog(@"Couldn't add video capture device input");
    [captureSession release];
    captureSession = nil;
    return NO;
}
[captureSession addInput:videoIn];
[captureSession commitConfiguration];
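One more thing worth noting (my addition; it isn't shown in the snippet above): configuring the session doesn't start it, so once the preview layer is in place you still need to call
[captureSession startRunning]; // nothing is displayed until the session runs
before any video appears.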

Related

AVCaptureSession Low Preview Quality

My app uses video recording, and so I use the AVCaptureSession. When I look at the capture session preview, however, I notice that the quality is noticeably lower, especially when pointing the camera at text on a television or computer screen. How can I increase the quality of my capture session so that it will display text on computer screens more clearly? This is the part of my code that deals with video quality.
self.CaptureSession = [[AVCaptureSession alloc] init];
self.CaptureSession.automaticallyConfiguresApplicationAudioSession = NO;
[self.CaptureSession setSessionPreset:AVCaptureSessionPresetHigh];
//----- ADD INPUTS -----
//ADD VIDEO INPUT
self.VideoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([self.VideoDevice hasTorch] == YES) {
    self.flashSet.hidden = NO;
    self.flashOverlayObject.hidden = NO;
}
[self.VideoDevice lockForConfiguration:nil];
// These mode setters throw if the mode is unsupported, so guard them.
if (self.VideoDevice.autoFocusRangeRestrictionSupported) {
    [self.VideoDevice setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
}
if ([self.VideoDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
    [self.VideoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
}
[self.VideoDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
[self.VideoDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
if (self.VideoDevice.lowLightBoostSupported) {
    self.VideoDevice.automaticallyEnablesLowLightBoostWhenAvailable = YES;
}
[self.VideoDevice unlockForConfiguration];
if (self.VideoDevice)
{
    NSError *error;
    self.VideoInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:self.VideoDevice error:&error];
    if (!error)
    {
        if ([self.CaptureSession canAddInput:self.VideoInputDevice])
            [self.CaptureSession addInput:self.VideoInputDevice];
    }
}
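One thing worth trying (my suggestion, not part of the original question) is to request an explicit high-resolution preset where the device supports it, rather than relying on AVCaptureSessionPresetHigh:
// Prefer the largest explicit preset the session will accept.
if ([self.CaptureSession canSetSessionPreset:AVCaptureSessionPreset1920x1080]) {
    [self.CaptureSession setSessionPreset:AVCaptureSessionPreset1920x1080];
} else if ([self.CaptureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    [self.CaptureSession setSessionPreset:AVCaptureSessionPreset1280x720];
}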

I need to use both AVCaptureVideoDataOutput and AVCaptureMetadataOutput

I'm writing an app that needs to look at the raw video (custom edge detection, etc.) and use the metadata barcode reader.
Even though AVCaptureSession has an addOutput: method rather than a setOutput: method, that's exactly how it behaves for me - first one in wins.
If I add AVCaptureVideoDataOutput first, only its delegate gets called.
If I add AVCaptureMetadataOutput first, only its delegate gets called.
Has anyone figured out a way around this, short of removing one output and re-adding the other every other frame?
I was able to add both AVCaptureVideoDataOutput and AVCaptureMetadataOutput.
NSError *error = nil;
self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];
// Select a video device, make an input
AVCaptureDevice *captureDevice = nil;
AVCaptureDevicePosition desiredPosition = AVCaptureDevicePositionFront;
// Find the front facing camera
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if ([device position] == desiredPosition) {
        captureDevice = device;
        break;
    }
}
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (!error) {
    [self.captureSession beginConfiguration];
    // add the input to the session
    if ([self.captureSession canAddInput:deviceInput]) {
        [self.captureSession addInput:deviceInput];
    }
    AVCaptureMetadataOutput *metadataOutput = [AVCaptureMetadataOutput new];
    if ([self.captureSession canAddOutput:metadataOutput]) {
        [self.captureSession addOutput:metadataOutput];
        self.metaDataOutputQueue = dispatch_queue_create("MetaDataOutputQueue", DISPATCH_QUEUE_SERIAL);
        [metadataOutput setMetadataObjectsDelegate:self queue:self.metaDataOutputQueue];
        [metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
    }
    self.videoDataOutput = [AVCaptureVideoDataOutput new];
    if ([self.captureSession canAddOutput:self.videoDataOutput]) {
        [self.captureSession addOutput:self.videoDataOutput];
        NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
        [self.videoDataOutput setVideoSettings:rgbOutputSettings];
        [self.videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
        self.videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
        [self.videoDataOutput setSampleBufferDelegate:self queue:self.videoDataOutputQueue];
        [[self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];
    }
    [self.captureSession commitConfiguration];
    [self.captureSession startRunning];
}
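With that configuration, both callbacks fire. For completeness, the two delegate methods look like this (stubs added for illustration; the bodies are placeholders):
// AVCaptureVideoDataOutputSampleBufferDelegate - raw frames for edge detection etc.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Process the pixel buffer here.
}
// AVCaptureMetadataOutputObjectsDelegate - decoded barcodes.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    // Inspect the AVMetadataMachineReadableCodeObject instances here.
}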

AVFoundation takes black image from camera on iPhone

I am a newbie to iPhone development and have made a demo using AVFoundation for taking pictures. I take a maximum of 5 images, but the issue is that the first image always comes out black compared to the other images. Can anybody help me resolve this? The code is below:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
//Added by jigar
session.sessionPreset = AVCaptureSessionPreset640x480;
//[[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480];
//End
// Create device input and add to current session
// ('device' was not declared in the original snippet; presumably the default video device)
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
[session addInput:input];
// Create video output and add to current session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Start session configuration
[session beginConfiguration];
[device lockForConfiguration:nil];
// Set the torch mode to match the picker's flash mode
if (flashMode == UIImagePickerControllerCameraFlashModeAuto)
{
    device.torchMode = AVCaptureTorchModeAuto;
}
else if (flashMode == UIImagePickerControllerCameraFlashModeOn)
{
    device.torchMode = AVCaptureTorchModeOn;
}
else if (flashMode == UIImagePickerControllerCameraFlashModeOff)
{
    device.torchMode = AVCaptureTorchModeOff;
}
[device unlockForConfiguration];
[session commitConfiguration];
// [session startRunning];
[currentPicker takePicture];
// [session stopRunning];
// session = nil;
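One observation (mine, not from the original post): the session above is configured but never started, and takePicture is called immediately. A black first image is often the result of capturing before the camera's auto-exposure has settled, so a possible mitigation is to start the session and delay the first capture slightly, for example:
[session startRunning];
// Give auto-exposure a moment to settle before the first capture; the 0.5 s
// delay is a guess, and KVO on device.adjustingExposure is a more robust signal.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [currentPicker takePicture];
});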

Method to toggle camera inputs does not work in reverse

I have a video preview layer which shows the video feed from a given input. I want the user to be able to switch cameras with a single button. The button I have created switches from the initial back-facing camera to the front-facing camera, but does not work in reverse.
Here is the method:
- (IBAction)CameraToggleButtonPressed:(id)sender {
    if (_captureSession) {
        NSLog(@"Toggle camera");
        NSError *error;
        AVCaptureInput *videoInput = [self videoInput];
        AVCaptureInput *newVideoInput;
        AVCaptureDevicePosition position = [[VideoInputDevice device] position];
        [_captureSession beginConfiguration];
        [_captureSession removeInput:_videoInput];
        [_captureSession removeInput:_audioInput];
        [_captureSession removeOutput:_movieOutput];
        [_captureSession removeOutput:_stillImageOutput];
        //Get new input
        AVCaptureDevice *newCamera = nil;
        if (((AVCaptureDeviceInput *)videoInput).device.position == AVCaptureDevicePositionBack)
        {
            newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
        }
        else
        {
            newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
        }
        newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:newCamera error:&error];
        if (!newVideoInput || error)
        {
            NSLog(@"Error creating capture device input: %@", error.localizedDescription);
        }
        else
        {
            [self.captureSession addInput:newVideoInput];
            [self.captureSession addInput:self.audioInput];
            [self.captureSession addOutput:self.movieOutput];
            [self.captureSession addOutput:self.stillImageOutput];
        }
        [_captureSession commitConfiguration];
    }
}
Why does the button not work in reverse?
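For reference, the cameraWithPosition: helper called in the method isn't shown in the question; a typical implementation (a sketch, not the asker's actual code) would be:
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position
{
    // Return the first video device matching the requested position, or nil.
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}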

AVCam: Zoom further out (as it is possible for photo-mode)?

Using Xcode 6.4 and iOS 8.4.1:
With AVCaptureSession, I would like to zoom out as far as the iPhone camera can possibly manage.
I already use the setVideoZoomFactor method with the value 1 (its smallest allowed value). This works quite well (see the code example at the very bottom). But then I made the following observation (suggesting that the camera in photo mode can zoom further out than in video mode):
The iPhone camera in photo mode shows a completely different zoom than the camera in video mode (at least on my iPhone 5S). You can test this yourself using the native Camera app on your iPhone: switch between PHOTO and VIDEO and you will see that photo mode can zoom further out than video mode at zoom factor 1. How is that possible?
Moreover, is there any way of achieving, in video mode with AVCam under iOS, the same minimal zoom factor that photo mode achieves?
Here is an illustration of the zoom difference between photo mode and video mode on my iPhone 5S (see picture):
Here is the code of the AVCamViewController:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Create the AVCaptureSession
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [self setSession:session];
    // Setup the preview view
    [[self previewView] setSession:session];
    // Check for device authorization
    [self checkDeviceAuthorizationStatus];
    dispatch_queue_t sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);
    [self setSessionQueue:sessionQueue];
    // http://stackoverflow.com/questions/25110055/ios-captureoutputdidoutputsamplebufferfromconnection-is-not-called
    dispatch_async(sessionQueue, ^{
        self.session = [AVCaptureSession new];
        self.session.sessionPreset = AVCaptureSessionPresetMedium;
        NSArray *devices = [AVCaptureDevice devices];
        AVCaptureDevice *backCamera = nil;
        for (AVCaptureDevice *device in devices) {
            if ([device hasMediaType:AVMediaTypeVideo]) {
                if ([device position] == AVCaptureDevicePositionBack) {
                    backCamera = device;
                }
            }
        }
        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
        if (error) {
            NSLog(@"%@", error);
        }
        if ([self.session canAddInput:input]) {
            [self.session addInput:input];
        }
        AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
        [output setSampleBufferDelegate:self queue:sessionQueue];
        output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        if ([self.session canAddOutput:output]) {
            [self.session addOutput:output];
        }
        // Apply initial VideoZoomFactor to the device
        NSNumber *DefaultZoomFactor = [NSNumber numberWithFloat:1.0];
        if ([backCamera lockForConfiguration:&error])
        {
            // HERE IS THE ZOOMING DONE !!!!!!
            [backCamera setVideoZoomFactor:[DefaultZoomFactor floatValue]];
            [backCamera unlockForConfiguration];
        }
        else
        {
            NSLog(@"%@", error);
        }
        [self.session startRunning];
    });
}
If your problem is that your app's picture looks zoomed in compared to the native iOS camera app, then this setup will probably help you.
I had this issue and the following solution fixed it.
The solution:
Configure your session
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Create the AVCaptureSession
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [self setSession:session];
    self.session.sessionPreset = AVCaptureSessionPresetPhoto; // <-- the key line
    ...
}
Be aware that this setup will only work for photos, not for videos. If you try it with video recording, your app will crash.
You can configure your session based upon your needs (photo or video).
For video you can use this value: AVCaptureSessionPresetHigh
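If your app needs both modes, one way to switch presets at runtime could be the following sketch (the recordingVideo flag is hypothetical; canSetSessionPreset: is the standard capability check):
// Switch between a photo-friendly and a video-friendly preset as needed.
[self.session beginConfiguration];
NSString *preset = recordingVideo ? AVCaptureSessionPresetHigh
                                  : AVCaptureSessionPresetPhoto;
if ([self.session canSetSessionPreset:preset]) {
    self.session.sessionPreset = preset;
}
[self.session commitConfiguration];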
BR.
