Camera not focusing on iPhone 4 running iOS 7.1

We started having trouble after upgrading from iOS 7.0.6 to 7.1.0: the camera no longer focuses on the iPhone 4. I don't see this issue on the iPhone 4s, 5, 5c, or 5s running iOS 7.1. So much for all the non-fragmentation talk. Here is the camera initialization code:
- (void)initCapture
{
//Setting up the AVCaptureDevice (camera)
AVCaptureDevice* inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError* cameraError;
if ([inputDevice lockForConfiguration:&cameraError])
{
if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
{
NSLog(@"AVCaptureDevice is set to video with continuous auto focus");
CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
[inputDevice setFocusPointOfInterest:autofocusPoint];
[inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
}
[inputDevice unlockForConfiguration];
}
//setting up the input streams
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:nil];
//setting up up the AVCaptureVideoDataOutput
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
[captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
//setting up video settings
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
//passing the settings to the AVCaptureVideoDataOutput
[captureOutput setVideoSettings:videoSettings];
//setting up the AVCaptureSession
captureSession = [[AVCaptureSession alloc] init];
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
[captureSession addInput:captureInput];
[captureSession addOutput:captureOutput];
if (!prevLayer)
{
prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
}
NSLog(@"initCapture preview Layer %p %@", self.prevLayer, self.prevLayer);
self.prevLayer.frame = self.view.bounds;
self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer: self.prevLayer];
[self.captureSession startRunning];
}
Any help would be greatly appreciated...

The Apple sample code you are using is outdated; it has since been fully rewritten. I'd try my luck with the new workflow.
Check it out here.

To close this thread up: we were using the camera to scan QR codes with libzxing. We decided to implement the native iOS 7.0 AVCaptureMetadataOutputObjectsDelegate instead of the older AVCaptureVideoDataOutputSampleBufferDelegate. The metadata delegate is much simpler and cleaner, and we found the example at http://nshipster.com/ios7/ very helpful.
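For reference, here is a minimal sketch of that metadata-based approach (assuming a view controller that owns a captureSession property and conforms to AVCaptureMetadataOutputObjectsDelegate; names other than the AVFoundation APIs are illustrative):
#import <AVFoundation/AVFoundation.h>

- (void)startQRScanning
{
    self.captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        NSLog(@"Camera input error: %@", error);
        return;
    }
    [self.captureSession addInput:input];

    AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
    [self.captureSession addOutput:output];
    [output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // metadataObjectTypes must be set after the output has been added to the session.
    output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];

    AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    preview.frame = self.view.bounds;
    preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:preview];

    [self.captureSession startRunning];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *object in metadataObjects) {
        if ([object isKindOfClass:[AVMetadataMachineReadableCodeObject class]]) {
            NSLog(@"QR payload: %@", [(AVMetadataMachineReadableCodeObject *)object stringValue]);
        }
    }
}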

Here are some ideas to diagnose your problem:
You have no else case for if ([inputDevice lockForConfiguration:&cameraError]). Add one.
In that else case, log the error contained in cameraError.
You have no else case for if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]). Add one; log there too, or set a breakpoint so you can inspect it while debugging.
You don't check focusPointOfInterestSupported before calling setFocusPointOfInterest.
Consider calling setFocusMode before setFocusPointOfInterest (not sure if it matters, but that's what I have).
In general, you may want to do all your checks before attempting to lock the configuration; a sketch of this is shown below.
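Put together, the locked configuration might look roughly like this (a sketch only, reusing the names from the question's initCapture):
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *cameraError = nil;
if ([inputDevice lockForConfiguration:&cameraError]) {
    if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        [inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    } else {
        NSLog(@"Continuous autofocus is not supported on this device");
    }
    if ([inputDevice isFocusPointOfInterestSupported]) {
        [inputDevice setFocusPointOfInterest:CGPointMake(0.5f, 0.5f)];
    } else {
        NSLog(@"Focus point of interest is not supported on this device");
    }
    [inputDevice unlockForConfiguration];
} else {
    NSLog(@"lockForConfiguration failed: %@", cameraError);
}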

Following neuman8's comment stating that something in libzxing is preventing the refocus, I did some investigating myself and found the following line in Decoder.mm to be the culprit:
ArrayRef<char> subsetData (subsetBytesPerRow * subsetHeight);
ArrayRef is a class in the zxing/common/Array.h file that allocates an array of the specified size. It did not seem to do anything wrong, but I guessed that allocating an array of about 170k char elements might take long enough to slow down this blocking call and prevent the other threads from running.
So, I tried to just put in a brute force solution to test the hypothesis. I added a sleep just after the allocation.
[NSThread sleepForTimeInterval:0.02];
The camera started focusing again and was able to decipher the QR codes.
I am still unable to find a better way to resolve this. Can anyone figure out a more efficient allocation of the large array, or a more elegant way of yielding the thread so the camera can focus? Otherwise, this should solve the problem for now, even if it is ugly.
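One direction that might be worth trying (a sketch only; I have not verified it against libzxing): deliver the sample buffers on a dedicated serial queue instead of the main queue, so the heavy decode never blocks the main thread in the first place, and hop back to the main queue only for UI updates. The queue label below is just an example.
// In initCapture, replace the main-queue delegate with a background serial queue.
dispatch_queue_t decodeQueue = dispatch_queue_create("com.example.qr-decode", DISPATCH_QUEUE_SERIAL);
[captureOutput setSampleBufferDelegate:self queue:decodeQueue];

// In the sample buffer delegate, dispatch any UI work back to the main queue:
// dispatch_async(dispatch_get_main_queue(), ^{ /* update UI with the decoded result */ });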

Related

iOS ARKit breaks regular camera quality

I'm using two separate iOS libraries that make use of the device's camera.
The first is a library used to capture regular photos with the camera. The second is a library that uses ARKit to measure the world.
Somehow, after the ARKit code has been used, the regular camera (with the exact same settings and initialization code) renders a much lower quality preview and captured image: there is a lot of noise, as if post-processing were missing. A full app restart is required to return the camera to its original quality.
I know this may be vague, but here's the code for each library (more or less). Any ideas what could be missing? Why would ARKit permanently change the camera's settings? I could easily fix it if I knew which setting is getting lost/changed after ARKit is used.
Code sample for iOS image capture (removed error checking and boilerplate):
- (void)initializeCaptureSessionInput
{
AVCaptureDevice *captureDevice = [self getDevice];
[self.session beginConfiguration];
NSError *error = nil;
AVCaptureDeviceInput *captureDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
self.session.sessionPreset = AVCaptureSessionPresetPhoto;
[self.session addInput:captureDeviceInput];
self.videoCaptureDeviceInput = captureDeviceInput;
[self.previewLayer.connection setVideoOrientation:orientation];
[self.session commitConfiguration];
}
- (void)startSession
{
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
if ([self.session canAddOutput:stillImageOutput]) {
stillImageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG, AVVideoQualityKey : @(1.0)};
[self.session addOutput:stillImageOutput];
[stillImageOutput setHighResolutionStillImageOutputEnabled:YES];
self.stillImageOutput = stillImageOutput;
}
[self.session startRunning];
}
[self initializeCaptureSessionInput];
[self startSession];
AVCaptureConnection *connection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[connection setVideoOrientation:orientation];
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
// photo result here...
}]
Code for ARKit:
private var sceneView = ARSCNView()
... other vars...
... init code ...
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.vertical, .horizontal]
// this should technically use Lidar sensors and greatly
// improve accuracy
if #available(iOS 13.4, *) {
if(ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)){
configuration.sceneReconstruction = .mesh
}
} else {
// Fallback on earlier versions
}
//sceneView.preferredFramesPerSecond = 30
sceneView.automaticallyUpdatesLighting = true
//sceneView.debugOptions = [.showFeaturePoints]
sceneView.showsStatistics = false
sceneView.antialiasingMode = .multisampling4X
// Set the view's delegate and session delegate
sceneView.delegate = self
sceneView.session.delegate = self
// Run the view's session
arReady = false
arStatus = "off"
measuringStatus = "off"
sceneView.session.run(configuration)
Image samples:
high quality: https://zinspectordev2.s3.amazonaws.com/usi/2/16146392129fa3017be37a4b63bbfd0e753a62c462.JPEG
low quality: https://zinspectordev2.s3.amazonaws.com/usi/2/1614639283607613c3083344f39adc3c40c74f0217.JPEG
It happens because ARKit's maximum output resolution is lower than the camera's. You can check ARWorldTrackingConfiguration.supportedVideoFormats for the list of ARConfiguration.VideoFormat values to see all the resolutions available on the current device.
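For example (a sketch, Objective-C, iOS 11.3+), you can log the formats ARKit will accept and compare their resolutions with what your plain capture session uses:
#import <ARKit/ARKit.h>

if (@available(iOS 11.3, *)) {
    for (ARVideoFormat *format in ARWorldTrackingConfiguration.supportedVideoFormats) {
        NSLog(@"ARKit video format: %.0fx%.0f @ %ld fps",
              format.imageResolution.width,
              format.imageResolution.height,
              (long)format.framesPerSecond);
    }
}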
No workarounds found. However, this is definitely an Apple bug, as it doesn't happen on newer devices. Looking forward to an iPhone 7 update.

Multiple AVCaptureVideoDataOutputs for a single AVCaptureDevice at the same time

Scenario
I am working on an application that does video processing and streaming. I already have video capture from the back camera streaming perfectly. The problem is that I also have to process the video data, but only locally. As it turns out, the API I am using for the local video processing requires a different pixel format than the API I am using to stream the data to my server. It seems I need two separate sessions capturing video from the back camera simultaneously, so that one session can do the processing and the other the streaming.
Problem
Every time I attempt to create a new session to use the same AVCaptureDevice (back), my streaming immediately stops. Code below:
captureSession = [[AVCaptureSession alloc] init];
AVCaptureDeviceInput *videoIn = [[AVCaptureDeviceInput alloc]
initWithDevice:[self videoDeviceWithPosition:AVCaptureDevicePositionBack]
error:nil];
if ([captureSession canAddInput:videoIn])
{
[captureSession addInput:videoIn];
}
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
[videoOut setAlwaysDiscardsLateVideoFrames:YES];
[videoOut setVideoSettings:
@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];
dispatch_queue_t videoCaptureQueue =
dispatch_queue_create("Video Process Queue", DISPATCH_QUEUE_SERIAL);
[videoOut setSampleBufferDelegate:self queue:videoCaptureQueue];
if ([captureSession canAddOutput:videoOut]) {
[captureSession addOutput:videoOut];
}
I receive the interruption reason videoDeviceInUseByAnotherClient.
videoDeviceInUseByAnotherClient: An interruption caused by the video device temporarily being made unavailable (for example, when used by another capture session).
I have also tried adding the output of the original capture session to the new session, but every time the canAddOutput: method returns NO. My guess is that this is because the output is already associated with a session.
Question
How do I use the same AVCaptureDevice to output to two separate AVCaptureVideoDataOutputs at the same time? Or how can I achieve the same thing as the diagram below?

My application crashes for barcode scanning on iOS versions above 9.3

Hi, I am trying to develop an app for 1D and 2D barcode scanning. It works well on iOS 9.3 with Xcode 7.3, but when I run the same application on iOS 10 with Xcode 8.2, the application crashes on the line below.
Please help with it.
[_session addOutput:_output];
-(void)setupCaptureSession{
_session = [[AVCaptureSession alloc] init];
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
_input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];
if (_input) {
[_session addInput:_input];
} else {
NSLog(@"Error: %@", error);
}
_output = [[AVCaptureMetadataOutput alloc] init];
[_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:_output]; // the application crashes here
_output.metadataObjectTypes = [_output availableMetadataObjectTypes];
_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_prevLayer.frame = _previewView.bounds;
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[_previewView.layer addSublayer:_prevLayer];
//[self.view];
//[_session startRunning];
[_previewView bringSubviewToFront:_highlightView];
}
Hi, I just commented out the following line of code:
[_previewView.layer addSublayer:_prevLayer];
and added the line below instead, and the app now works without error:
[_previewView.layer insertSublayer:_prevLayer atIndex:0];
Did you add a Camera Usage Description to your plist file? If not, take a look at this blog.
iOS 10 requires more privacy around the use of hardware input sources.
Your app works with the camera, so you need to provide an additional explanation of why it needs the camera.
So, go to your Info.plist file and add an additional key-value pair there.
For the key, choose Privacy - Camera Usage Description.
For the value, add a string such as: App needs the camera to take amazing photos, scan barcodes, etc.
To be sure everything is OK, go to the iOS Settings app and check that the camera toggle is switched on for your application.
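If you edit Info.plist as source code, the raw key is NSCameraUsageDescription and the entry looks like this (the description string is just an example):
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan barcodes.</string>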

How to set scanning bounds using AVFoundation

I am adding a barcode scanner to one of my iOS applications, and I have implemented it successfully.
Currently, however, barcodes are scanned across the full screen. What I want is for the video to be shown full screen while barcodes are scanned only within a particular portion of it; that is, a barcode should be detected only when it is placed inside that region. Below is my current code:
session=[[AVCaptureSession alloc]init];
device=[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error=nil;
input=[AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (input) {
[session addInput:input];
}
else{
NSLog(@"Error: %@", error);
}
output=[[AVCaptureMetadataOutput alloc]init];
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:output];
output.metadataObjectTypes=[output availableMetadataObjectTypes];
prevLayer=[AVCaptureVideoPreviewLayer layerWithSession:session];
[prevLayer setFrame:self.view.bounds];
[prevLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.view.layer addSublayer:prevLayer];
[session startRunning];
[self.view bringSubviewToFront:self.lblCode];
[self.view bringSubviewToFront:self.imgShade1];
[self.view bringSubviewToFront:self.imgShade2];
This is what you are after:
CGRect visibleMetadataOutputRect = [prevLayer metadataOutputRectOfInterestForRect:areaOfInterest];
output.rectOfInterest = visibleMetadataOutputRect;
where areaOfInterest is a CGRect. Hope this solves the issue.
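For example, wired into the question's code (a sketch; areaOfInterest is a hypothetical rectangle in prevLayer's coordinate space, such as the frame of an on-screen scanning window, and the conversion is done after startRunning so the layer knows the video dimensions):
[session startRunning];
CGRect areaOfInterest = CGRectMake(40.0, 200.0, 240.0, 240.0); // hypothetical scanning window
CGRect visibleMetadataOutputRect = [prevLayer metadataOutputRectOfInterestForRect:areaOfInterest];
output.rectOfInterest = visibleMetadataOutputRect;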
Maybe it's late to answer this question, but I've just overcome this issue myself, so I hope this helps others later.
The keyword is rectOfInterest of AVCaptureMetadataOutput, and here's how I set mine.
CGSize size = self.view.bounds.size;
_output.rectOfInterest = CGRectMake(cropRect.origin.y/size.height, cropRect.origin.x/size.width, cropRect.size.height/size.height, cropRect.size.width/size.width);
The x and y values are swapped and normalized because rectOfInterest is expressed in the unrotated (landscape) coordinate space of the capture device, with values from 0 to 1.
For more detail, you can check the documentation from Apple.
Good Luck. :)

Camera feed slow to load with AVCaptureSession on iOS - How can I speed it up?

Right now I'm trying to allow users to take pictures in my app without using UIImagePickerController. I'm using AVCaptureSession and the related classes to load a camera feed as a sublayer on a full-screen view in one of my view controllers. The code works, but unfortunately the camera is very slow to load; it usually takes 2-3 seconds. Here is my code:
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
if ([session canSetSessionPreset:AVCaptureSessionPresetHigh])
//Check size based configs are supported before setting them
[session setSessionPreset:AVCaptureSessionPresetHigh];
[session setSessionPreset:AVCaptureSessionPreset1280x720];
CALayer *viewLayer = self.liveCameraFeed.layer;
//NSLog(@"viewLayer = %@", viewLayer);
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = viewLayer.bounds;
[viewLayer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device;
if(isFront)
{
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
else
{
device = [self frontCamera];
}
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput * audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];
NSError *error = nil;
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
//NSLog(@"ERROR: trying to open camera: %@", error);
}
[session addInput:input];
[session startRunning];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
Is there any way to speed it up? I've already tried loading it on another thread using Grand Central Dispatch and NSThread, and though that stopped the app from freezing, it made the camera take even longer to load. Any help is appreciated.
In my case, I needed to wait for the session to start running:
dispatch_async(queue) {
self.session.startRunning()
dispatch_async(dispatch_get_main_queue()) {
self.delegate?.cameraManDidStart(self)
let layer = AVCaptureVideoPreviewLayer(session: self.session)
}
}
Waiting for AVCaptureSession's startRunning was my solution too. You can call startRunning on a global queue and then add your AVCaptureVideoPreviewLayer on the main thread.
Swift 4 sample:
DispatchQueue.global().async {
self.captureSession.startRunning()
DispatchQueue.main.async {
let videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
}
}
You can load the AVCaptureSession at viewWillAppear time. It works for me: when I switch to the view containing the AVCaptureSession from another view, I see the camera running immediately.
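A minimal sketch of that idea (assuming the session itself is configured once elsewhere, e.g. in viewDidLoad, and stored in a session property):
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        if (!self.session.isRunning) {
            [self.session startRunning];
        }
    });
}

- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        if (self.session.isRunning) {
            [self.session stopRunning];
        }
    });
}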
For anyone interested, the solution I came up with was preloading the camera on a different thread and keeping it open.
I tried all of the above methods, but it was still not as fast as Instagram or Facebook, so I created the AVCaptureDevice, AVCaptureVideoPreviewLayer, and AVCaptureSession on the parent screen and passed them as parameters to the child screen. It loaded very rapidly.
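A rough Objective-C sketch of that idea, with a hypothetical CameraPreloader holder (the names and the preset are illustrative, not from the answer above):
#import <AVFoundation/AVFoundation.h>

// Hypothetical shared holder so the running session outlives any single screen.
@interface CameraPreloader : NSObject
@property (nonatomic, strong) AVCaptureSession *session;
+ (instancetype)shared;
@end

@implementation CameraPreloader
+ (instancetype)shared {
    static CameraPreloader *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [CameraPreloader new]; });
    return shared;
}
@end

// In the parent screen, well before the camera screen is shown:
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    if (input) {
        [session addInput:input];
    }
    [session startRunning];
    CameraPreloader.shared.session = session;
});

// In the child screen, attach the already-running session to a preview layer:
AVCaptureVideoPreviewLayer *preview =
    [AVCaptureVideoPreviewLayer layerWithSession:CameraPreloader.shared.session];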
