After setting focusPointOfInterest, AVCaptureOutput stops. How do I restart it? - ios

We are setting focusPointOfInterest, but right afterwards our AVCaptureOutput stops delivering output, even though we can still see the live camera feed on the iPhone screen.
We could not find the reason or a way to solve it.
We are using SwiftUI, by the way.

Are you locking/unlocking your device for configuration? Something like this usually works for me:
func setFocusPointOfInterest(device: AVCaptureDevice, focusPoint: CGPoint) throws {
    // Bail out if point focus isn't supported or the device is mid-adjustment
    if !device.isFocusPointOfInterestSupported ||
        !device.isFocusModeSupported(.autoFocus) ||
        device.isAdjustingFocus ||
        device.isAdjustingExposure {
        return
    }
    // Setting focusPointOfInterest without locking first raises an exception
    try device.lockForConfiguration()
    device.focusPointOfInterest = focusPoint
    device.focusMode = .autoFocus
    device.unlockForConfiguration()
}
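Also note that focusPointOfInterest expects a point in the device's normalized coordinate space, not view coordinates. A minimal usage sketch from a tap handler, assuming you keep a reference to your AVCaptureVideoPreviewLayer (the previewLayer and device names here are placeholders, not from the original post):
// Hypothetical tap handler; previewLayer and device are assumed to exist.
func handleTap(at viewPoint: CGPoint) {
    // Convert from layer coordinates to the device's normalized (0,0)-(1,1) space.
    let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: viewPoint)
    do {
        try setFocusPointOfInterest(device: device, focusPoint: devicePoint)
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}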

Related

How to find out why isFlashAvailable and isTorchAvailable return false?

So I built a custom camera app, and my camera app has a flash button; I need to update the device's flash mode according to the user's tap. I found the property isFlashAvailable for flash mode and isTorchAvailable for torch mode. That seems pretty straightforward, but if one of these properties returns false, I need to know why the flash or torch is unavailable. Is there any suggestion on how to get a specific reason in this case?
The docs only say that
for example, the device overheats and needs to cool off.
That is fine if it is the only possible reason, but I am not sure, and maybe you have a suggestion about this?
And here are some snippets of my implementation
/// Updates the device's flash to on, auto, or off and returns whether it was successful.
@discardableResult
public func updateFlash(mode: AVCaptureDevice.FlashMode) -> Bool {
    guard let device = AVCaptureDevice.default(for: .video),
          device.hasFlash,
          device.isFlashAvailable else { return false }
    flashMode = mode
    return true
}
/// Updates the device's torch to on, auto, or off and returns whether it was successful.
@discardableResult
public func updateTorch(mode: AVCaptureDevice.TorchMode) -> Bool {
    guard let device = AVCaptureDevice.default(for: .video),
          device.hasTorch,
          device.isTorchAvailable,
          device.isTorchModeSupported(mode) else { return false }
    do {
        try device.lockForConfiguration()
        device.torchMode = mode
        device.unlockForConfiguration()
        return true
    } catch {
        return false
    }
}
Reference
https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624627-isflashavailable#declaration
https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624626-istorchavailable
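The API does not expose a reason, but both properties are documented as key-value observable, so you can at least react when availability changes (for example, to disable your flash button). A minimal sketch using block-based KVO; the returned token must be retained for as long as you want updates (this is my own suggestion, not from the original question):
import AVFoundation

// Observe torch availability; hold on to the returned observation token.
func observeTorchAvailability(of device: AVCaptureDevice) -> NSKeyValueObservation {
    return device.observe(\.isTorchAvailable, options: [.initial, .new]) { device, _ in
        // Fires when availability changes, e.g. a thermal shutoff of the torch.
        print("Torch available: \(device.isTorchAvailable)")
    }
}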

proximityMonitoring may not be working as intended

For my iOS app I want to implement a feature where the screen turns off (like when you answer a phone call) when the device is facing down.
So I've started by detecting the device orientation:
//in my viewDidLoad
NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(self.rotated(_:)), name: UIDeviceOrientationDidChangeNotification, object: nil)

//called when the device changes orientation
func rotated(notification: NSNotification) {
    if UIDevice.currentDevice().orientation == UIDeviceOrientation.FaceDown {
        print("device = faced down")
    } else {
        print("device != faced down")
    }
}
When the device is face down I call
UIDevice.currentDevice().proximityMonitoringEnabled = true
and otherwise
UIDevice.currentDevice().proximityMonitoringEnabled = false
The problem is that UIDeviceOrientationDidChangeNotification seems to arrive a little late: by the time rotated() is called, the device is already face down, and it turns out that for proximityMonitoringEnabled = true to turn off the screen, the proximity sensor must not already be covered!
I'm pretty sure this is an Apple limitation, but maybe someone out there has found a solution or come across a workaround!
Thanks in advance.
Approach:
Since iOS doesn't notify us before the orientation changes, we can't rely on UIDeviceOrientationDidChangeNotification. Instead we can use the Core Motion framework and read the hardware gyroscope to detect rotations that are likely to end face down, and set proximityMonitoringEnabled appropriately.
Gyroscope Data:
From gyroscope observations, the values below can plausibly detect a face-down rotation.
let gyroData = (minX:-3.78, minY:-3.38, minZ:-5.33, maxX:3.29, maxY:4.94, maxZ:3.36)
Solution in Swift:
import UIKit
import CoreMotion

class ProximityViewController: UIViewController {
    let cmManager = CMMotionManager(), gyroData = (minX:-3.78, minY:-3.38, minZ:-5.33, maxX:3.29, maxY:4.94, maxZ:3.36)

    override func viewDidLoad() {
        super.viewDidLoad()
        //Using GyroMotion
        experimentCoreMotion()
    }

    //MARK: - Using Core Motion
    func experimentCoreMotion() {
        if cmManager.gyroAvailable {
            //Enable device orientation notifications
            UIDevice.currentDevice().beginGeneratingDeviceOrientationNotifications()
            cmManager.gyroUpdateInterval = 0.1
            handleFaceDownOrientation()
        }
    }

    func handleFaceDownOrientation() {
        cmManager.startGyroUpdatesToQueue(NSOperationQueue.currentQueue()!, withHandler: { (data: CMGyroData?, error: NSError?) in
            if self.isGyroDataInRange(data!) {
                UIDevice.currentDevice().proximityMonitoringEnabled = (UIDevice.currentDevice().orientation == .FaceDown)
                if UIDevice.currentDevice().orientation == .FaceDown { print("FaceDown detected") }
                else { print("Orientation is not facedown") }
            }
        })
    }

    func isGyroDataInRange(val: CMGyroData) -> Bool {
        return ((val.rotationRate.x > gyroData.minX && val.rotationRate.x < gyroData.maxX) &&
                (val.rotationRate.y > gyroData.minY && val.rotationRate.y < gyroData.maxY) &&
                (val.rotationRate.z > gyroData.minZ && val.rotationRate.z < gyroData.maxZ))
    }
}
Hope my solution solves your query. Let me know if it works for your requirement.
Gyroscope & Accelerometer Observations:
I've experimented with the possible values for the face-down orientation using the gyroscope and accelerometer. IMO the gyroscope data seems fine, but it's open to explore the other hardware sensors to detect the face-down orientation.
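One alternative worth mentioning (my own suggestion, not part of the answer above): raw rotation rates tell you how fast the device is turning, not where it will end up. The gravity vector from CMDeviceMotion is steadier; in the device's reference frame, gravity.z approaches +1 as the screen turns toward the ground. A minimal sketch in current Swift syntax, with a guessed threshold:
import UIKit
import CoreMotion

let motionManager = CMMotionManager()

func startFaceDownDetection() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 0.1
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let gravity = motion?.gravity else { return }
        // gravity.z is about -1 face up and +1 face down; 0.5 is a guessed
        // threshold that enables monitoring while the rotation is still in
        // progress, before the proximity sensor is covered.
        UIDevice.current.isProximityMonitoringEnabled = gravity.z > 0.5
    }
}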

Switching Cameras slow in AVCaptureSession

I've looked at many other questions like this, and tried a lot of the solutions, but this case is a bit different. I'm using AVCaptureVideoDataOutputSampleBufferDelegate so that I can apply CIFilters to the live video feed. I'm using the following method to change cameras:
func changeCameras() {
    captureSession.stopRunning()
    var desiredPosition: AVCaptureDevicePosition?
    if front {
        desiredPosition = AVCaptureDevicePosition.Back
    } else {
        desiredPosition = AVCaptureDevicePosition.Front
    }
    let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as? [AVCaptureDevice]
    for device in devices! {
        if device.position == desiredPosition {
            self.captureSession.beginConfiguration()
            do {
                let input = try AVCaptureDeviceInput(device: device)
                for oldInput in self.captureSession.inputs {
                    print(oldInput)
                    self.captureSession.removeInput(oldInput as! AVCaptureInput)
                }
                print(input)
                self.captureSession.addInput(input)
                self.captureSession.commitConfiguration()
                dispatch_async(dispatch_get_main_queue(), { () -> Void in
                    self.captureSession.startRunning()
                })
            } catch { print("evic failed") }
        }
    }
    front = !front
}
The methods that I am using to set up the camera (called in viewDidLoad) and receive the sampleBuffer from the delegate are here: https://gist.github.com/JoeyBodnar/17e22e3c04093caa54cf240ed8b1b601.
One problem is that when pressing the button to change cameras, the screen freezes for a solid 4-5 seconds before switching. I've tried the above method, as well as running the entire function on a separate queue, and it still takes a long time. I've never had this problem when switching cameras using a regular AVCaptureVideoPreviewLayer, so I think this may be caused in part by the fact that I'm using the sample buffer delegate, but I can't quite piece together how/why. Any help is appreciated. Thanks!
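For what it's worth, a common cause of this kind of stall is calling stopRunning()/startRunning() around the reconfiguration: both calls block, and restarting the session is expensive. A minimal sketch of swapping inputs without stopping the session, in current Swift syntax and run on a dedicated serial queue (the sessionQueue name and the newDevice parameter are assumptions, not from the original post):
import AVFoundation

let sessionQueue = DispatchQueue(label: "camera.session.queue") // assumed setup

func switchCamera(in session: AVCaptureSession, to newDevice: AVCaptureDevice) {
    sessionQueue.async {
        session.beginConfiguration()
        // Remove the existing inputs and add the new one atomically;
        // the session keeps running throughout, so there is no restart cost.
        for input in session.inputs {
            session.removeInput(input)
        }
        if let newInput = try? AVCaptureDeviceInput(device: newDevice),
           session.canAddInput(newInput) {
            session.addInput(newInput)
        }
        session.commitConfiguration() // changes take effect here
    }
}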

How to focus FRONT camera on device using iOS, swift?

It seems that the front camera doesn't support focusMode.
func configureDevice() {
    if let device = captureDevice {
        let focusMode: AVCaptureFocusMode = .AutoFocus
        if device.isFocusModeSupported(focusMode) {
            device.lockForConfiguration(nil)
            device.focusMode = AVCaptureFocusMode.AutoFocus
            device.unlockForConfiguration()
            println("configured device")
        }
    }
}
This code doesn't run because
if device.isFocusModeSupported(focusMode)
returns false.
But in the built-in camera app, the front camera can focus on tap.
Is there any way to implement tap-to-focus on the FRONT camera?
The front-facing camera does not support tap-to-focus on any iPhone. You can use the device.focusPointOfInterestSupported property to check whether you can do tap-to-focus (but you will get false, just as with isFocusModeSupported()).
What you are seeing is tap-for-exposure, and you can check for that with device.exposurePointOfInterestSupported. Once you know you can use it, set your point of interest via device.exposurePointOfInterest.
All the details of each mode are explained in the Apple docs.
Hope it helps!
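To make that concrete, here is a minimal sketch of tap-for-exposure in current Swift syntax (the devicePoint parameter is assumed to already be converted to the device's normalized coordinate space):
import AVFoundation
import CoreGraphics

func setExposurePoint(_ devicePoint: CGPoint, on device: AVCaptureDevice) throws {
    guard device.isExposurePointOfInterestSupported,
          device.isExposureModeSupported(.continuousAutoExposure) else { return }
    try device.lockForConfiguration()
    device.exposurePointOfInterest = devicePoint
    device.exposureMode = .continuousAutoExposure // re-expose around the tapped point
    device.unlockForConfiguration()
}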

AVFoundation: toggle camera fails at CanAddInput

I am trying to add a rotate camera function with AVFoundation to allow the user to toggle between the front-facing and back-facing cameras.
As shown in the code below, I've put in some println() statements and all the values seem legit, but the code always drops into the failing else-clause when testing canAddInput().
I've tried setting the sessionPreset (which is in another function that initializes the session beforehand) to various values, including AVCaptureSessionPresetHigh and AVCaptureSessionPresetLow, but that didn't help.
@IBAction func rotateCameraPressed(sender: AnyObject) {
    // Loop through all the capture devices to find the right ones
    var backCameraDevice: AVCaptureDevice?
    var frontCameraDevice: AVCaptureDevice?
    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Define devices
            if (device.position == AVCaptureDevicePosition.Back) {
                backCameraDevice = device as? AVCaptureDevice
            } else if (device.position == AVCaptureDevicePosition.Front) {
                frontCameraDevice = device as? AVCaptureDevice
            }
        }
    }
    // Assign found devices to corresponding input
    var backInput: AVCaptureDeviceInput?
    var frontInput: AVCaptureDeviceInput?
    var error: NSError?
    if let backDevice = backCameraDevice {
        println("Back device is \(backDevice)")
        backInput = AVCaptureDeviceInput(device: backDevice, error: &error)
    }
    if let frontDevice = frontCameraDevice {
        println("Front device is \(frontDevice)")
        frontInput = AVCaptureDeviceInput(device: frontDevice, error: &error)
    }
    // Now rotate the camera
    isBackCamera = !isBackCamera // toggle camera position
    if isBackCamera {
        // Remove front and add back
        captureSession!.removeInput(frontInput)
        if let bi = backInput {
            println("Back input is \(bi)")
            if captureSession!.canAddInput(bi) {
                captureSession!.addInput(bi)
            } else {
                println("Cannot add back input!")
            }
        }
    } else {
        // Remove back and add front
        captureSession!.removeInput(backInput)
        if let fi = frontInput {
            println("Front input is \(fi)")
            if captureSession!.canAddInput(fi) {
                captureSession!.addInput(fi)
            } else {
                println("Cannot add front input!")
            }
        }
    }
}
The problem seems to be that the input derived from the devices found in the iteration does not actually match the input stored in the captureSession variable. This appears to be a new thing, since all the code I've seen posted about this finds and removes the input for the current camera by iterating through the list of devices, as I've done in my code.
This doesn't seem to work anymore - well, at least not in the code I posted, which is based on all the sources I've been able to dig up (which all happen to be in Objective-C). The reason canAddInput() fails is that the removeInput() never succeeds; the fact that it doesn't issue the usual error about not being able to have multiple input devices is strange (since it would have helped with the debugging).
Anyway, the fix is to not call removeInput() on an input freshly derived from the found device (which used to work). Instead, remove the input that is actually there by going through the captureSession.inputs array and calling removeInput() on that.
To scrunch all that babble to code, here's what I did:
for ii in captureSession!.inputs {
    captureSession!.removeInput(ii as! AVCaptureInput)
}
And that did the trick! :)
