How to set scanning bounds using AVFoundation - iOS

I am building a barcode scanner in one of my iOS applications and have the scanning itself working.
The problem is that barcodes are currently scanned across the full screen. What I want is for the video to remain full screen while barcodes are only scanned within a particular portion of it; that is, a barcode should only be recognized when it is placed inside that region. Below is my current code:
session=[[AVCaptureSession alloc]init];
device=[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error=nil;
input=[AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (input) {
[session addInput:input];
}
else{
NSLog(#"Errod : %#",error);
}
output=[[AVCaptureMetadataOutput alloc]init];
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:output];
output.metadataObjectTypes=[output availableMetadataObjectTypes];
prevLayer=[AVCaptureVideoPreviewLayer layerWithSession:session];
[prevLayer setFrame:self.view.bounds];
[prevLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.view.layer addSublayer:prevLayer];
[session startRunning];
[self.view bringSubviewToFront:self.lblCode];
[self.view bringSubviewToFront:self.imgShade1];
[self.view bringSubviewToFront:self.imgShade2];

This is what you are after:
CGRect visibleMetadataOutputRect = [prevLayer metadataOutputRectOfInterestForRect:areaOfInterest];
output.rectOfInterest = visibleMetadataOutputRect;
where areaOfInterest is a CGRect in the preview layer's coordinate space. Hope this solves the issue.
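For example (a sketch, assuming scanAreaView is a hypothetical subview of self.view that marks the scan region, and that the conversion runs after the session has started):
// scanAreaView marks the scanning region on screen; its frame is in self.view's coordinates,
// which match the preview layer's coordinates because the layer fills self.view.
CGRect areaOfInterest = self.scanAreaView.frame;
// Convert from preview-layer coordinates to the output's normalized rectOfInterest.
// Do this after [session startRunning]; before that, the conversion may not have
// valid video dimensions to work with and can return an empty rect.
CGRect visibleMetadataOutputRect = [prevLayer metadataOutputRectOfInterestForRect:areaOfInterest];
output.rectOfInterest = visibleMetadataOutputRect;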

Maybe it's late to answer this question, but I've just worked through this issue myself, so I hope this helps others later.
The key property is rectOfInterest on AVCaptureMetadataOutput, and here's how I set mine.
CGSize size = self.view.bounds.size;
_output.rectOfInterest = CGRectMake(cropRect.origin.y/size.height, cropRect.origin.x/size.width, cropRect.size.height/size.height, cropRect.size.width/size.width);
For more detail, you can check Apple's documentation. Note that the x/y and width/height components are swapped and divided by the view size because rectOfInterest is normalized and expressed in the metadata output's (landscape-oriented) coordinate space rather than in view coordinates.
Good Luck. :)

Related

My application crashes during barcode scanning on iOS versions above 9.3

Hi, I am developing an app for 1D and 2D barcode scanning. It works well on iOS 9.3 with Xcode 7.3, but when I run the same application on iOS 10 with Xcode 8.2, it crashes on the line below.
Please help.
[_session addOutput:_output];
-(void)setupCaptureSession{
_session = [[AVCaptureSession alloc] init];
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
_input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];
if (_input) {
[_session addInput:_input];
} else {
NSLog(#"Error: %#", error);
}
_output = [[AVCaptureMetadataOutput alloc] init];
[_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:_output]; // here Application get crashed.
_output.metadataObjectTypes = [_output availableMetadataObjectTypes];
_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_prevLayer.frame = _previewView.bounds;
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[_previewView.layer addSublayer:_prevLayer];
//[self.view];
//[_session startRunning];
[_previewView bringSubviewToFront:_highlightView];
}
Hi, I have just commented out the following line:
[_previewView.layer addSublayer:_prevLayer];
and added the line below instead, and the app now works without error:
[_previewView.layer insertSublayer:_prevLayer atIndex:0];
Did you add a Camera Usage Description to your plist file? If not, take a look at this blog.
iOS 10 enforces stricter privacy rules around hardware input sources.
Your app uses the camera, so you need to explain why it needs camera access.
So, go to your Info.plist file and add an additional key-value pair there.
For the key, choose Privacy - Camera Usage Description.
For the value, add a string such as: App needs the camera to take photos, scan barcodes, etc.
To confirm everything is OK, go to the iOS Settings app and check that the camera toggle is switched on for your application.
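For reference, the raw key name is NSCameraUsageDescription; in the Info.plist source it looks something like this (the string value is just an example):
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan barcodes.</string>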

camera not focusing on iPhone 4, running iOS 7.1

We are having trouble after the iOS upgrade from 7.0.6 to 7.1.0. I don't see this issue on the iPhone 4s, 5, 5c, or 5s running iOS 7.1. So much for all the non-fragmentation talk. I am posting the camera initialization code:
- (void)initCapture
{
//Setting up the AVCaptureDevice (camera)
AVCaptureDevice* inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError* cameraError;
if ([inputDevice lockForConfiguration:&cameraError])
{
if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
{
NSLog(#"AVCaptureDevice is set to video with continuous auto focus");
CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
[inputDevice setFocusPointOfInterest:autofocusPoint];
[inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
}
[inputDevice unlockForConfiguration];
}
//setting up the input streams
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:nil];
//setting up up the AVCaptureVideoDataOutput
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
[captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
//setting up video settings
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
//passing the settings to the AVCaptureVideoDataOutput
[captureOutput setVideoSettings:videoSettings];
//setting up the AVCaptureSession
captureSession = [[AVCaptureSession alloc] init];
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
[captureSession addInput:captureInput];
[captureSession addOutput:captureOutput];
if (!prevLayer)
{
prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
}
NSLog(#"initCapture preview Layer %p %#", self.prevLayer, self.prevLayer);
self.prevLayer.frame = self.view.bounds;
self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer: self.prevLayer];
[self.captureSession startRunning];
}
Any help would be greatly appreciated...
The Apple sample code you are using is outdated - it has since been fully rewritten. I'd try my luck and go with the new workflow.
Check it out here.
To close this thread up: we were using the camera to scan QR codes in conjunction with libzxing. We decided to implement the native iOS 7.0 AVCaptureMetadataOutputObjectsDelegate instead of the older AVCaptureVideoDataOutputSampleBufferDelegate. The metadata delegate is much simpler and cleaner, and we found the example at http://nshipster.com/ios7/ very helpful.
Here are some ideas to diagnose your problem (a combined sketch follows the list):
You have no else case for if ([inputDevice lockForConfiguration:&cameraError]). Add one.
In the else case, log the error contained in cameraError.
You have no else case for if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]). Add one; log it, or set a breakpoint there while debugging.
You don't check the focusPointOfInterestSupported property before attempting setFocusPointOfInterest.
Consider calling setFocusMode before setFocusPointOfInterest (not sure if it matters, but that's what I have).
In general, you may want to do all your checks before attempting to lock the configuration.
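Putting those checks together, the configuration block might look roughly like this (a sketch only, using the same inputDevice as in the question):
NSError *cameraError = nil;
if ([inputDevice lockForConfiguration:&cameraError])
{
    if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
    {
        [inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    }
    else
    {
        NSLog(@"Continuous autofocus is not supported on this device");
    }
    if ([inputDevice isFocusPointOfInterestSupported])
    {
        [inputDevice setFocusPointOfInterest:CGPointMake(0.5f, 0.5f)];
    }
    else
    {
        NSLog(@"Focus point of interest is not supported on this device");
    }
    [inputDevice unlockForConfiguration];
}
else
{
    NSLog(@"lockForConfiguration failed: %@", cameraError);
}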
Following neuman8's comment stating that something in libzxing is preventing the refocus, I did some investigating myself.
I found the following line in the Decoder.mm file to be the culprit.
ArrayRef<char> subsetData (subsetBytesPerRow * subsetHeight);
It seems that ArrayRef is a class in the zxing/common/Array.h file that allocates an array of the specified size. It did not appear to do anything wrong, but I guessed that allocating an array of roughly 170k char elements may take some time and be the culprit, slowing down the blocking call enough to prevent other threads from running.
So, I tried to just put in a brute force solution to test the hypothesis. I added a sleep just after the allocation.
[NSThread sleepForTimeInterval:0.02];
The camera started focusing again and was able to decipher the QR codes.
I am still unable to find a better way to resolve this. Can anyone figure out a more efficient allocation of the large array, or a more elegant way of yielding the thread so the camera can focus? Otherwise this should solve the problem for now, even if it is ugly.
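One direction that might be worth trying (untested against this specific iOS 7.1 issue) is to deliver sample buffers on a dedicated serial queue rather than the main queue, so the zxing decode never competes with the main thread in the first place; a rough sketch against the initCapture code above:
// In initCapture, replace the main-queue delivery with a serial background queue.
dispatch_queue_t decodeQueue = dispatch_queue_create("decodeQueue", DISPATCH_QUEUE_SERIAL);
[captureOutput setSampleBufferDelegate:self queue:decodeQueue];

// The delegate callback then runs on decodeQueue; hop back to the main queue only for UI work.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // ... run the zxing decode here, off the main thread ...
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update the UI with any decoded result.
    });
}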

Keep getting "inactive/invalid connection passed" error when trying to capture still image using AV Foundation

I have a View Controller that is using AV Foundation. As soon as the View controller loads, the user is able to see exactly what the input device is seeing. This is because I have started the AVCaptureSession in the viewDidLoad method implementation.
Here is the code that I have in viewDidLoad:
[super viewDidLoad];
AVCaptureSession *session =[[AVCaptureSession alloc]init];
[session setSessionPreset:AVCaptureSessionPresetHigh];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = [[NSError alloc]init];
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if([session canAddInput:deviceInput])
[session addInput:deviceInput];
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view]layer];
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
[rootLayer insertSublayer:previewLayer atIndex:0];
[session startRunning];
And then I have an IBAction method implementation that has been connected to a UIButton for this view controller. Here is the IBAction implementation's code:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc]init];
AVCaptureConnection *connection = [[AVCaptureConnection alloc]init];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
NSLog(#"Image Data Captured: %#", imageDataSampleBuffer);
NSLog(#"Any errors? %#", error);
if(imageDataSampleBuffer) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc]initWithData:imageData];
NSLog(#"%#", image);
}
}];
When I run the app on my iPhone and press the button connected to this implementation, I get this error in the console:
*** -[AVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:completionHandler:] - inactive/invalid connection passed.'
I looked in the Xcode docs, and they do say "You can only add an AVCaptureConnection instance to a session using addConnection: if canAddConnection: returns YES", but when I try to call addConnection: or canAddConnection: on my AVCaptureSession object they don't even show up as available options.
I also read somewhere else that on iOS you don't have to manually create a connection, but this doesn't make sense to me because my IBAction's code calls captureStillImageAsynchronouslyFromConnection:, which requires a connection as input.
So if the connection is automatically created for you, what is it called so I can use it for the input?
This is my first time working with AV Foundation and I just can't seem to figure out this connection error.
Any help is greatly appreciated.
When you set up the camera, add a stillImageOutput to your AVCaptureSession.
self.stillImageOutput = AVCaptureStillImageOutput()
let stillSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
self.stillImageOutput.outputSettings = stillSettings
if(self.session.canAddOutput(self.stillImageOutput)){
self.session.addOutput(self.stillImageOutput)
}
Then, when taking a photo, get the AVCaptureConnection from the stillImageOutput.
func takePhoto(sender: UIButton!){
let connection = self.stillImageOutput.connectionWithMediaType(AVMediaTypeVideo)
if(connection.enabled){
self.stillImageOutput.captureStillImageAsynchronouslyFromConnection(connection, completionHandler: {(buffer: CMSampleBuffer!, error: NSError!) -> Void in
println("picture taken")//this never gets executed
})
}
}
Did you have a property to retain the AVCaptureSession, like
@property (nonatomic, strong) AVCaptureSession *captureSession;
//...
self.captureSession = session;
I hope it helps you.
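Since the question's code is Objective-C, a rough equivalent of the same idea might look like this (a sketch; stillImageOutput and the session would both need to be retained properties, as noted above):
// During setup (e.g. at the end of viewDidLoad), after configuring the session:
self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
self.stillImageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG};
if ([session canAddOutput:self.stillImageOutput]) {
    [session addOutput:self.stillImageOutput];
}

// In the IBAction, ask the output for its video connection instead of creating
// a fresh AVCaptureConnection (which is never attached to the session):
AVCaptureConnection *connection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.enabled) {
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        // Handle imageDataSampleBuffer as in the question.
    }];
}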

Camera feed slow to load with AVCaptureSession on iOS - How can I speed it up?

Right now I'm trying to allow users to take pictures in my app without using UIImagePickerController. I'm using AVCaptureSession and all the related classes to load a camera feed as a sublayer on a full-screen view I have on one of my view controllers. The code works, but unfortunately the camera is very slow to load; it usually takes 2-3 seconds. Here is my code:
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
if ([session canSetSessionPreset:AVCaptureSessionPresetHigh])
//Check size based configs are supported before setting them
[session setSessionPreset:AVCaptureSessionPresetHigh];
[session setSessionPreset:AVCaptureSessionPreset1280x720];
CALayer *viewLayer = self.liveCameraFeed.layer;
//NSLog(#"viewLayer = %#", viewLayer);
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = viewLayer.bounds;
[viewLayer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device;
if(isFront)
{
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
else
{
device = [self frontCamera];
}
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput * audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];
NSError *error = nil;
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
//NSLog(#"ERROR: trying to open camera: %#", error);
}
[session addInput:input];
[session startRunning];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
Is there any way to speed it up? I've already tried loading it on another thread using Grand Central Dispatch and NSThread, and though that stopped the app from freezing, it made the loading of the camera take even longer. Any help is appreciated.
In my case, I needed to wait for the session to start running:
dispatch_async(queue) {
self.session.startRunning()
dispatch_async(dispatch_get_main_queue()) {
self.delegate?.cameraManDidStart(self)
let layer = AVCaptureVideoPreviewLayer(session: self.session)
}
}
Waiting for AVCaptureSession's startRunning function was my solution too. You can call startRunning on a global background queue and then add your AVCaptureVideoPreviewLayer on the main thread.
Swift 4 sample
DispatchQueue.global().async {
self.captureSession.startRunning()
DispatchQueue.main.async {
let videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
}
}
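The same pattern in Objective-C, matching the question's code, might look roughly like this (startRunning blocks until the session is up, so it is moved off the main queue and the preview layer is added afterwards on the main queue):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [session startRunning];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Only add the preview layer once the session is actually running.
        AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
        captureVideoPreviewLayer.frame = self.liveCameraFeed.layer.bounds;
        [self.liveCameraFeed.layer addSublayer:captureVideoPreviewLayer];
    });
});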
You can load the AVCaptureSession in viewWillAppear. That works for me: when I switch to the view containing the AVCaptureSession from another view, I see the camera running immediately.
For anyone interested, the solution I came up with was preloading the camera on a different thread and keeping it open.
I tried all of the above methods, but it was not as fast as Instagram or Facebook, so I loaded the AVCaptureDevice, AVCaptureVideoPreviewLayer, and AVCaptureSession on the parent screen and passed them as parameters to the child screen. It loaded very rapidly.

AVCaptureVideoPreviewLayer connection crash on iOS5 only [closed]

I'm using some code from Apple's "SquareCam" source code. It runs fine on iOS 6, but on iOS 5 I get a crash:
AVCaptureSession *session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
// Select a video device, make an input
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
require( error == nil, bail );
{
isUsingFrontFacingCamera = NO;
if ( [session canAddInput:deviceInput] )
[session addInput:deviceInput];
// Make a still image output
self.stillImageOutput = [AVCaptureStillImageOutput new];
[self.stillImageOutput addObserver:self forKeyPath:@"capturingStillImage" options:NSKeyValueObservingOptionNew context:(__bridge void *)(AVCaptureStillImageIsCapturingStillImageContext)];
if ( [session canAddOutput:self.stillImageOutput] )
[session addOutput:self.stillImageOutput];
// Make a video data output
self.videoDataOutput = [AVCaptureVideoDataOutput new];
// we want BGRA, both CoreGraphics and OpenGL work well with 'BGRA'
NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[self.videoDataOutput setVideoSettings:rgbOutputSettings];
[self.videoDataOutput setAlwaysDiscardsLateVideoFrames:YES]; // discard if the data output queue is blocked (as we process the still image)
// create a serial dispatch queue used for the sample buffer delegate as well as when a still image is captured
// a serial dispatch queue must be used to guarantee that video frames will be delivered in order
// see the header doc for setSampleBufferDelegate:queue: for more information
videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[self.videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
if ( [session canAddOutput:self.videoDataOutput] )
[session addOutput:self.videoDataOutput];
[[self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:NO];
effectiveScale = 1.0;
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
//[self.previewLayer setBackgroundColor:[[UIColor redColor] CGColor]];
//self.previewLayer.orientation = UIInterfaceOrientationLandscapeLeft;
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
//Get Preview Layer connection - this is for orientation purposes
AVCaptureConnection *previewLayerConnection=self.previewLayer.connection; //THIS CRASHES ON IOS5
if ([previewLayerConnection isVideoOrientationSupported])
[previewLayerConnection setVideoOrientation:[[UIApplication sharedApplication] statusBarOrientation]];
CALayer *rootLayer = [self.previewView layer];
[rootLayer setMasksToBounds:YES];
[self.previewLayer setFrame:self.view.bounds];
[rootLayer addSublayer:self.previewLayer];
[session startRunning];
This is the line it crashes on:
AVCaptureConnection *previewLayerConnection=self.previewLayer.connection;
This is the error message:
-[AVCaptureVideoPreviewLayer connection]: unrecognized selector sent to instance
I don't really understand AVCapture that much in the first place. I'm just trying to take a picture. But why would this work fine on iOS 6 but not iOS 5?
The error message says:
-[AVCaptureVideoPreviewLayer connection]: unrecognized selector sent to instance
So it's telling you that you can't say connection to an AVCaptureVideoPreviewLayer.
And indeed, the docs on AVCaptureVideoPreviewLayer say:
connection
Available in iOS 6.0 and later.
So there's the reason: in iOS 5 there's no connection property of an AVCaptureVideoPreviewLayer.
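One way to keep that code running on iOS 5 (an untested sketch) is to check for the newer API at runtime and fall back to the preview layer's older, now-deprecated orientation property when connection is unavailable:
if ([self.previewLayer respondsToSelector:@selector(connection)]) {
    // iOS 6 and later
    AVCaptureConnection *previewLayerConnection = self.previewLayer.connection;
    if ([previewLayerConnection isVideoOrientationSupported])
        [previewLayerConnection setVideoOrientation:(AVCaptureVideoOrientation)[[UIApplication sharedApplication] statusBarOrientation]];
} else {
    // iOS 5: the layer itself exposes orientation support (deprecated in iOS 6)
    if ([self.previewLayer isOrientationSupported])
        [self.previewLayer setOrientation:(AVCaptureVideoOrientation)[[UIApplication sharedApplication] statusBarOrientation]];
}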
