Using Xcode 6.4 and iOS 8.4.1:
With AVCaptureSession, I would like to zoom out as far as the iPhone camera can possibly manage.
I already call setVideoZoomFactor: with 1.0, its smallest allowed value. That works quite well (see the code example at the very bottom). But then I made the following observation, which suggests that the camera in photo mode manages to zoom out further than it does in video mode:
The iPhone camera in photo mode shows a noticeably different, wider framing than it does in video mode (at least on my iPhone 5S). You can test this yourself with the native Camera app: switch between PHOTO and VIDEO and you will see that photo mode shows more of the scene than video mode does at zoom factor 1. How is that possible?
And more importantly, is there any way to get the same wide framing that photo mode achieves, but in video mode, using AVCam under iOS?
Here is an illustration of the zoom difference between photo mode and video mode on my iPhone 5S (see picture):
Here is the code of the AVCamViewController:
- (void)viewDidLoad
{
[super viewDidLoad];
// Create the AVCaptureSession
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[self setSession:session];
// Setup the preview view
[[self previewView] setSession:session];
// Check for device authorization
[self checkDeviceAuthorizationStatus];
dispatch_queue_t sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);
[self setSessionQueue:sessionQueue];
// http://stackoverflow.com/questions/25110055/ios-captureoutputdidoutputsamplebufferfromconnection-is-not-called
dispatch_async(sessionQueue, ^{
self.session = [AVCaptureSession new];
self.session.sessionPreset = AVCaptureSessionPresetMedium;
NSArray *devices = [AVCaptureDevice devices];
AVCaptureDevice *backCamera;
for (AVCaptureDevice *device in devices) {
if ([device hasMediaType:AVMediaTypeVideo]) {
if ([device position] == AVCaptureDevicePositionBack) {
backCamera = device;
}
}
}
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
if (error) {
NSLog(@"%@", error);
}
if ([self.session canAddInput:input]) {
[self.session addInput:input];
}
AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
[output setSampleBufferDelegate:self queue:sessionQueue];
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
if ([self.session canAddOutput:output]) {
[self.session addOutput:output];
}
// Apply initial VideoZoomFactor to the device
NSNumber *DefaultZoomFactor = [NSNumber numberWithFloat:1.0];
if ([backCamera lockForConfiguration:&error])
{
// HERE IS THE ZOOMING DONE !!!!!!
[backCamera setVideoZoomFactor:[DefaultZoomFactor floatValue]];
[backCamera unlockForConfiguration];
}
else
{
NSLog(@"%@", error);
}
[self.session startRunning];
});
}
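(For reference: videoZoomFactor can never go below 1.0, so the wider picture in photo mode is not a smaller zoom factor but a different, wider sensor crop. A small hedged diagnostic, which could be dropped in right after backCamera is selected inside the session-queue block, logs what the active format actually allows, using standard AVCaptureDeviceFormat properties:)
// Hedged diagnostic: log the zoom range and horizontal field of view of the active format.
AVCaptureDeviceFormat *activeFormat = backCamera.activeFormat;
NSLog(@"zoom range: 1.0 .. %f, horizontal field of view: %f degrees",
      activeFormat.videoMaxZoomFactor, activeFormat.videoFieldOfView);
// A wider picture therefore has to come from a different format / session preset,
// not from a zoom factor below 1.0.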
If your problem is that your app appears zoomed in compared to the native iOS Camera app, then this setup will probably help you.
I had this exact issue, and the following solution fixed it.
The solution:
Configure your session with the photo preset:
- (void)viewDidLoad
{
[super viewDidLoad];
// Create the AVCaptureSession
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[self setSession:session];
--> self.session.sessionPreset = AVCaptureSessionPresetPhoto; <---
...
}
Be aware that this setup will only work for photos, not for videos. If you try to record video with this preset, your app will crash.
Configure your session based on what you need (photo or video); for video you can use AVCaptureSessionPresetHigh (a short sketch of switching between the two follows below).
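A minimal sketch of switching between the two presets at runtime, assuming the session is kept in a property; the method name and the photoMode flag are illustrative and not part of the original answer:
- (void)configureSessionForPhotoMode:(BOOL)photoMode
{
    // AVCaptureSessionPresetPhoto gives the wide 4:3 framing but is meant for stills;
    // AVCaptureSessionPresetHigh is the usual choice for video recording.
    NSString *preset = photoMode ? AVCaptureSessionPresetPhoto : AVCaptureSessionPresetHigh;
    [self.session beginConfiguration];
    if ([self.session canSetSessionPreset:preset]) {
        self.session.sessionPreset = preset;
    }
    [self.session commitConfiguration];
}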
BR.
Related
My app records video, so I use AVCaptureSession. When I look at the capture session preview, however, I notice that the quality is noticeably low, especially when pointing the camera at text on a television or computer screen. How can I increase the quality of my capture session so that it displays text on computer screens more clearly? This is the part of my code that deals with video quality.
self.CaptureSession = [[AVCaptureSession alloc] init];
self.CaptureSession.automaticallyConfiguresApplicationAudioSession = NO;
[self.CaptureSession setSessionPreset:AVCaptureSessionPresetHigh];
//----- ADD INPUTS -----
//ADD VIDEO INPUT
self.VideoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([self.VideoDevice hasTorch] == YES){
self.flashSet.hidden = NO;
self.flashOverlayObject.hidden = NO;
}
[self.VideoDevice lockForConfiguration:nil];
[self.VideoDevice setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
[self.VideoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
[self.VideoDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
[self.VideoDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
if(self.VideoDevice.lowLightBoostSupported){
self.VideoDevice.automaticallyEnablesLowLightBoostWhenAvailable = YES;
}
[self.VideoDevice unlockForConfiguration];
if (self.VideoDevice)
{
NSError *error;
self.VideoInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:self.VideoDevice error:&error];
if (!error)
{
if ([self.CaptureSession canAddInput:self.VideoInputDevice])
[self.CaptureSession addInput:self.VideoInputDevice];
}
}
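If AVCaptureSessionPresetHigh still does not resolve on-screen text well enough, one option is to pick the highest-resolution format the device offers and make it the active format; setting activeFormat overrides the session preset. This is a hedged sketch, not part of the question's code, and it assumes iOS 7+ and that self.VideoDevice is already assigned:
// Hedged sketch: choose the device format with the largest width and activate it.
AVCaptureDeviceFormat *bestFormat = nil;
int32_t bestWidth = 0;
for (AVCaptureDeviceFormat *format in self.VideoDevice.formats) {
    CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
    if (dims.width > bestWidth) {
        bestWidth = dims.width;
        bestFormat = format;
    }
}
NSError *formatError = nil;
if (bestFormat && [self.VideoDevice lockForConfiguration:&formatError]) {
    // A real implementation would also check format.videoSupportedFrameRateRanges
    // before committing to a format for video recording.
    self.VideoDevice.activeFormat = bestFormat;
    [self.VideoDevice unlockForConfiguration];
} else {
    NSLog(@"Could not switch format: %@", formatError);
}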
I'm writing an app that needs to look at the raw video (custom edge detection, etc.) and also use the metadata barcode reader.
Even though AVCaptureSession has an addOutput: method rather than a setOutput: method, that is effectively how it behaves for me: the first one in wins.
If I add the AVCaptureVideoDataOutput first, its delegate gets called.
If I add the AVCaptureMetadataOutput first, its delegate gets called.
Has anyone figured out a way around this, short of swapping one output for the other every other frame?
I was able to add both an AVCaptureVideoDataOutput and an AVCaptureMetadataOutput to the same session:
NSError *error = nil;
self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];
// Select a video device, make an input
AVCaptureDevice *captureDevice;
AVCaptureDevicePosition desiredPosition = AVCaptureDevicePositionFront;
// Find the front facing camera
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
if ([device position] == desiredPosition) {
captureDevice = device;
break;
}
}
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (!error) {
[self.captureSession beginConfiguration];
// add the input to the session
if ([self.captureSession canAddInput:deviceInput]) {
[self.captureSession addInput:deviceInput];
}
AVCaptureMetadataOutput *metadataOutput = [AVCaptureMetadataOutput new];
if ([self.captureSession canAddOutput:metadataOutput]) {
[self.captureSession addOutput:metadataOutput];
self.metaDataOutputQueue = dispatch_queue_create("MetaDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[metadataOutput setMetadataObjectsDelegate:self queue:self.metaDataOutputQueue];
[metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
}
self.videoDataOutput = [AVCaptureVideoDataOutput new];
if ([self.captureSession canAddOutput:self.videoDataOutput]) {
[self.captureSession addOutput:self.videoDataOutput];
NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[self.videoDataOutput setVideoSettings:rgbOutputSettings];
[self.videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
self.videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[self.videoDataOutput setSampleBufferDelegate:self queue:self.videoDataOutputQueue];
[[self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];
}
[self.captureSession commitConfiguration];
[self.captureSession startRunning];
}
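For completeness, the same object must conform to both AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureMetadataOutputObjectsDelegate and implement both callbacks; a minimal sketch with placeholder bodies:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Raw BGRA frames arrive here on videoDataOutputQueue (edge detection, etc.).
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    // Detected QR codes arrive here on metaDataOutputQueue.
    for (AVMetadataMachineReadableCodeObject *object in metadataObjects) {
        NSLog(@"QR payload: %@", object.stringValue);
    }
}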
I am using the QuickBlox iOS SDK for video chat in my app. It works fine. Now I want to record the chat video and save it to the camera roll. How can I do that?
I have gone through their documentation and implemented this:
-(IBAction)record:(id)sender{
// Create video Chat
videoChat = [[QBChat instance] createAndRegisterVideoChatInstance];
[videoChat setIsUseCustomVideoChatCaptureSession:YES];
// Create capture session
captureSession = [[AVCaptureSession alloc] init];
// ... setup capture session here
/*We create a serial queue to handle the processing of our frames*/
dispatch_queue_t callbackQueue= dispatch_queue_create("cameraQueue", NULL);
[videoCaptureOutput setSampleBufferDelegate:self queue:callbackQueue];
/*We start the capture*/
[captureSession startRunning];
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer: (CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Do something with samples
// ...
// forward video samples to SDK
[videoChat processVideoChatCaptureVideoSample:sampleBuffer];
}
But I am not sure what to do from here.
How do I actually get the video data?
From the QuickBlox docs:
To set up a custom video capture session you simply follow these steps:
create an instance of AVCaptureSession
setup the input and output
implement frames callback and forward all frames to the QuickBlox iOS SDK
tell the QuickBlox SDK that you will use your own capture session
To set up the custom video capture session, configure the input and output:
-(void) setupVideoCapture{
self.captureSession = [[AVCaptureSession alloc] init];
__block NSError *error = nil;
// set preset
[self.captureSession setSessionPreset:AVCaptureSessionPresetLow];
// Setup the Video input
AVCaptureDevice *videoDevice = [self frontFacingCamera];
//
AVCaptureDeviceInput *captureVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if(error){
QBDLogEx(@"deviceInputWithDevice Video error: %@", error);
}else{
if ([self.captureSession canAddInput:captureVideoInput]){
[self.captureSession addInput:captureVideoInput];
}else{
QBDLogEx(@"cantAddInput Video");
}
}
// Setup Video output
AVCaptureVideoDataOutput *videoCaptureOutput = [[AVCaptureVideoDataOutput alloc] init];
videoCaptureOutput.alwaysDiscardsLateVideoFrames = YES;
//
// Set the video output to store frame in BGRA (It is supposed to be faster)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[videoCaptureOutput setVideoSettings:videoSettings];
/*And we add the video output to the capture session*/
if([self.captureSession canAddOutput:videoCaptureOutput]){
[self.captureSession addOutput:videoCaptureOutput];
}else{
QBDLogEx(@"cantAddOutput");
}
[videoCaptureOutput release];
// set FPS
int framesPerSecond = 3;
AVCaptureConnection *conn = [videoCaptureOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.isVideoMinFrameDurationSupported){
conn.videoMinFrameDuration = CMTimeMake(1, framesPerSecond);
}
if (conn.isVideoMaxFrameDurationSupported){
conn.videoMaxFrameDuration = CMTimeMake(1, framesPerSecond);
}
/*We create a serial queue to handle the processing of our frames*/
dispatch_queue_t callbackQueue= dispatch_queue_create("cameraQueue", NULL);
[videoCaptureOutput setSampleBufferDelegate:self queue:callbackQueue];
dispatch_release(callbackQueue);
// Add preview layer
AVCaptureVideoPreviewLayer *prewLayer = [[[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession] autorelease];
[prewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CGRect layerRect = [[myVideoView layer] bounds];
[prewLayer setBounds:layerRect];
[prewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),CGRectGetMidY(layerRect))];
myVideoView.hidden = NO;
[myVideoView.layer addSublayer:prewLayer];
/*We start the capture*/
[self.captureSession startRunning];
}
- (AVCaptureDevice *) cameraWithPosition:(AVCaptureDevicePosition) position{
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if ([device position] == position) {
return device;
}
}
return nil;
}
- (AVCaptureDevice *) backFacingCamera{
return [self cameraWithPosition:AVCaptureDevicePositionBack];
}
- (AVCaptureDevice *) frontFacingCamera{
return [self cameraWithPosition:AVCaptureDevicePositionFront];
}
Implement the frames callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Usually we just forward camera frames to QuickBlox SDK
// But we also can do something with them before, for example - apply some video filters or so
[self.videoChat processVideoChatCaptureVideoSample:sampleBuffer];
}
Tell the QuickBlox iOS SDK that we use our own video capture session:
self.videoChat = [[QBChat instance] createAndRegisterVideoChatInstance];
self.videoChat.viewToRenderOpponentVideoStream = opponentVideoView;
//
// we use own video capture session
self.videoChat.isUseCustomVideoChatCaptureSession = YES;
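The answer above only covers forwarding frames to QuickBlox; it does not record anything. To also save the chat video to the camera roll (the original question), one possible approach is to append the same sample buffers to an AVAssetWriter and export the finished file with ALAssetsLibrary. This is a hedged sketch, not QuickBlox API; the assetWriter/writerInput properties, the file name, and the 640x480 output settings are assumptions, and ARC is assumed:
// Before capture starts: create the writer.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"chat.mp4"];
[[NSFileManager defaultManager] removeItemAtPath:path error:nil];
NSError *writerError = nil;
self.assetWriter = [AVAssetWriter assetWriterWithURL:[NSURL fileURLWithPath:path]
                                            fileType:AVFileTypeMPEG4
                                               error:&writerError];
NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                            AVVideoWidthKey  : @640,
                            AVVideoHeightKey : @480 };
self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                      outputSettings:settings];
self.writerInput.expectsMediaDataInRealTime = YES;
[self.assetWriter addInput:self.writerInput];

// In captureOutput:didOutputSampleBuffer:fromConnection:, after forwarding to QuickBlox:
if (self.assetWriter.status == AVAssetWriterStatusUnknown) {
    [self.assetWriter startWriting];
    [self.assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
}
if (self.writerInput.isReadyForMoreMediaData) {
    [self.writerInput appendSampleBuffer:sampleBuffer];
}

// When the chat ends: finish the file and copy it into the camera roll.
[self.writerInput markAsFinished];
[self.assetWriter finishWritingWithCompletionHandler:^{
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:path]
                                completionBlock:^(NSURL *assetURL, NSError *saveError) {
        NSLog(@"Saved to camera roll: %@ (%@)", assetURL, saveError);
    }];
}];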
I have a single-view application in which I am trying to test iOS 7's AVCaptureMetadataOutput based on this explanation. My ViewController conforms to AVCaptureMetadataOutputObjectsDelegate, and the code looks like this (almost exactly the same as Mattt's):
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
// Testing the VIN Scanner before I make it part of the library
NSLog(@"Setting up the vin scanner");
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (input) {
[session addInput:input];
} else {
NSLog(@"Error: %@", error);
}
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:output];
[session startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
fromConnection:(AVCaptureConnection *)connection
{
NSString *code = nil;
for (AVMetadataObject *metadata in metadataObjects) {
if ([metadata.type isEqualToString:AVMetadataObjectTypeCode39Code]) {
code = [(AVMetadataMachineReadableCodeObject *)metadata stringValue];
break;
}
}
NSLog(@"code: %@", code);
}
When I run this on an iOS 7 device (I've tried an iPhone 4 and an iPhone 4S), Xcode logs "Setting up the vin scanner", but the camera (i.e. the AVCaptureSession) never opens.
Edit 1:
I added the following code to show the camera output on screen:
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
// Display full screen
previewLayer.frame = self.view.frame;
// Add the video preview layer to the view
[self.view.layer addSublayer:previewLayer];
But the display is very odd: it does not conform to the screen, and the way it rotates does not make sense. The other issue is that when I point the camera at a barcode, the metadata delegate method is never called. Please see the pictures below:
The camera will not open by itself the way it does with UIImagePickerController. The problem is that your code does nothing to display the output. You'll need to add a preview layer to show the camera feed as it streams in.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
// Display full screen
previewLayer.frame = CGRectMake(0.0, 0.0, self.view.frame.size.width, self.view.frame.size.height);
// Add the video preview layer to the view
[self.view.layer addSublayer:previewLayer];
[session startRunning];
Edit:
After taking a deeper look at your code, I noticed a few more issues.
First, you also need to set the metadataObjectTypes you want to search for; right now you're not looking for any object types at all. This should be added after you add the output to the session. You can view the full list of available types in the documentation:
[output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code]];
Second, your AVCaptureSession *session is a local variable in viewDidLoad, so it is released as soon as viewDidLoad returns. Make it a property declared just after your @interface ViewController (), as shown below.
@interface ViewController ()
@property (nonatomic, strong) AVCaptureSession *session;
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
// Testing the VIN Scanner before I make it part of the library
NSLog(@"Setting up the vin scanner");
self.session = [[AVCaptureSession alloc] init];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (input) {
[self.session addInput:input];
} else {
NSLog(@"Error: %@", error);
}
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[self.session addOutput:output];
[output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code]];
[self.session startRunning];
}
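Regarding the distorted, oddly rotating preview from Edit 1: the layer's frame is set once from self.view.frame in viewDidLoad and never updated, and the preview connection is never told about interface rotation. A hedged sketch of one way to handle both, assuming the layer is kept in a previewLayer property:
// In viewDidLoad, after configuring the session: create the layer, leave the frame to layout.
self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:self.previewLayer];

// Keep the layer sized to the view and aligned with the interface orientation.
- (void)viewDidLayoutSubviews
{
    [super viewDidLayoutSubviews];
    self.previewLayer.frame = self.view.bounds;
    if (self.previewLayer.connection.isVideoOrientationSupported) {
        // The raw values of UIInterfaceOrientation and AVCaptureVideoOrientation line up
        // for the four standard orientations, so a plain cast is enough here.
        self.previewLayer.connection.videoOrientation =
            (AVCaptureVideoOrientation)[UIApplication sharedApplication].statusBarOrientation;
    }
}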
Just wondering if this is possible:
I've been looking at various solutions for displaying the camera preview. While doing so in full-screen mode is relatively straightforward, what I'd like to do is have it scaled to 50% of the screen and presented side by side with a graphic (not an overlay, but a separate graphic to the left of the camera preview that takes up equal space). Basically, the purpose is to allow the user to compare the camera preview with the graphic.
So, what I need to know is:
a) Is it possible to scale the camera preview to a lower resolution?
b) Can it share the screen on an iPad with another graphic that isn't an overlay?
c) If a and b are true, is there any example source I might be pointed to, please?
Thanks!
You can just use the following code:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.opaque = YES;
previewLayer.contentsScale = self.view.contentScaleFactor;
previewLayer.frame = self.view.bounds;
previewLayer.needsDisplayOnBoundsChange = YES;
[self.view.layer addSublayer:previewLayer];
Just change the fifth line (the previewLayer.frame assignment) to give the preview layer a different frame, for example as in the sketch below.
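For the 50/50 side-by-side layout from the question, a hedged sketch (ARC assumed, and the asset name is illustrative): the right half of the view shows the preview, scaled rather than cropped, and the left half shows a plain UIImageView with the comparison graphic.
// Right half: the live camera preview, scaled (not cropped) into the smaller area.
CGFloat halfWidth = self.view.bounds.size.width / 2.0;
previewLayer.frame = CGRectMake(halfWidth, 0.0, halfWidth, self.view.bounds.size.height);
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;

// Left half: the graphic the user should compare against.
UIImageView *comparisonView = [[UIImageView alloc]
    initWithFrame:CGRectMake(0.0, 0.0, halfWidth, self.view.bounds.size.height)];
comparisonView.contentMode = UIViewContentModeScaleAspectFit;
comparisonView.image = [UIImage imageNamed:@"comparison_graphic"];
[self.view addSubview:comparisonView];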
You can create the captureSession with this code:
captureSession = [[AVCaptureSession alloc] init];
if(!captureSession)
{
NSLog(@"Failed to create video capture session");
return NO;
}
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPreset640x480;
// AVCaptureDevice.position is read-only, so pick the front camera explicitly
AVCaptureDevice *videoDevice = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
{
    if (device.position == AVCaptureDevicePositionFront)
    {
        videoDevice = device;
        break;
    }
}
if(!videoDevice)
{
NSLog(@"Couldn't create video capture device");
[captureSession release];
captureSession = nil;
return NO;
}
if([videoDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
{
NSError *deviceError = nil;
if([videoDevice lockForConfiguration:&deviceError])
{
[videoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
[videoDevice unlockForConfiguration];
}
else
{
NSLog(@"Couldn't lock device for configuration");
}
}
NSError *error;
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if(!videoIn)
{
NSLog(@"Couldn't create video capture device input: %@ - %@", [error localizedDescription], [error localizedFailureReason]);
[captureSession release];
captureSession = nil;
return NO;
}
if(![captureSession canAddInput:videoIn])
{
NSLog(@"Couldn't add video capture device input");
[captureSession release];
captureSession = nil;
return NO;
}
[captureSession addInput:videoIn];
[captureSession commitConfiguration];