iOS: How to do face tracking using the rear camera?

I'm planning to use ARKit's camera feed as input into Apple's Vision API so I can recognize people's faces in screen-space, no depth information required. To simplify the process, I'm attempting to modify Apple's face tracking over frames example here: Tracking the User’s Face in Real Time
I thought that I could simply change the function here:
fileprivate func configureFrontCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .front)
    if let device = deviceDiscoverySession.devices.first {
        if let deviceInput = try? AVCaptureDeviceInput(device: device) {
            if captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }
            if let highestResolution = self.highestResolution420Format(for: device) {
                try device.lockForConfiguration()
                device.activeFormat = highestResolution.format
                device.unlockForConfiguration()
                return (device, highestResolution.resolution)
            }
        }
    }
    throw NSError(domain: "ViewController", code: 1, userInfo: nil)
}
In the first line of the function, one of the arguments is .front for front-facing camera. I changed this to .back. This successfully gives me the rear-facing camera. However, the recognition region seems a little bit choppy, and as soon as it fixates on a face in the image, Xcode reports the error:
VisionFaceTrack[877:54517] [ServicesDaemonManager] interruptionHandler is called. -[FontServicesDaemonManager connection]_block_invoke
Message from debugger: Terminated due to memory issue
In other words, the program crashes when a face is recognized, it seems. Clearly there is more to this than simply changing the constant used. Perhaps there is a buffer somewhere with the wrong size, or a wrong resolution. May I have help figuring out what may be wrong here?
An ideal answer would also include information about how to achieve this with ARKit's camera feed, but I'm fairly sure it's the same idea with the CVPixelBuffer.
How would I adapt this example to use the rear camera?
EDIT: I think the issue is that my device has too little memory to support the algorithm with the back camera, since the back camera has a higher resolution.
However, even on another, higher-performance device, the tracking quality is quite bad. Yet the Vision algorithm only needs raw images, doesn't it? In that case, shouldn't this work? I can't find any examples online of using the back camera for face tracking.

Here's how I adapted the sample to make it work on my iPad Pro.
1) Download the sample project from here: Tracking the User’s Face in Real Time.
2) Change the function that loads the front-facing camera to use the back-facing one. Rename it to configureBackCamera and call it from setupAVCaptureSession:
fileprivate func configureBackCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
    if let device = deviceDiscoverySession.devices.first {
        if let deviceInput = try? AVCaptureDeviceInput(device: device) {
            if captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }
            if let highestResolution = self.highestResolution420Format(for: device) {
                try device.lockForConfiguration()
                device.activeFormat = highestResolution.format
                device.unlockForConfiguration()
                return (device, highestResolution.resolution)
            }
        }
    }
    throw NSError(domain: "ViewController", code: 1, userInfo: nil)
}
3) Change the implementation of the method highestResolution420Format. The problem is that with the back-facing camera you have access to formats of much higher resolution than with the front-facing camera, which can hurt tracking performance. You'll need to adapt this to your use case, but here's an example of limiting the resolution to 1080p:
fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
    var highestResolutionFormat: AVCaptureDevice.Format? = nil
    var highestResolutionDimensions = CMVideoDimensions(width: 0, height: 0)
    for format in device.formats {
        let deviceFormatDescription = format.formatDescription
        if CMFormatDescriptionGetMediaSubType(deviceFormatDescription) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange {
            let candidateDimensions = CMVideoFormatDescriptionGetDimensions(deviceFormatDescription)
            // Skip formats taller than 1080p to keep tracking performant.
            if candidateDimensions.height > 1080 {
                continue
            }
            if (highestResolutionFormat == nil) || (candidateDimensions.width > highestResolutionDimensions.width) {
                highestResolutionFormat = format
                highestResolutionDimensions = candidateDimensions
            }
        }
    }
    if let format = highestResolutionFormat {
        let resolution = CGSize(width: CGFloat(highestResolutionDimensions.width), height: CGFloat(highestResolutionDimensions.height))
        return (format, resolution)
    }
    return nil
}
4) Now the tracking will work, but the face positions will not be correct. The reason is that the UI presentation is wrong: the original sample was designed for the front-facing camera, whose display is mirrored, while the back-facing camera needs no mirroring.
To account for this, simply change the updateLayerGeometry() method. Specifically, you need to change this:
// Scale and mirror the image to ensure upright presentation.
let affineTransform = CGAffineTransform(rotationAngle: radiansForDegrees(rotation))
    .scaledBy(x: scaleX, y: -scaleY)
overlayLayer.setAffineTransform(affineTransform)
into this:
// Scale the image to ensure upright presentation.
let affineTransform = CGAffineTransform(rotationAngle: radiansForDegrees(rotation))
    .scaledBy(x: -scaleX, y: -scaleY)
overlayLayer.setAffineTransform(affineTransform)
After this, the tracking should work and the results should be correct.
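As for the question's edit about ARKit's camera feed: the idea is indeed the same, since each ARFrame exposes its camera image as a CVPixelBuffer that Vision can consume directly. Here's a minimal sketch (not part of the sample project; the class name is illustrative, and the orientation value assumes a portrait device, since ARKit delivers capturedImage in sensor orientation):

```swift
import ARKit
import Vision

class FaceTracker: NSObject, ARSessionDelegate {
    private let sequenceHandler = VNSequenceRequestHandler()

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // ARKit's camera image arrives as a CVPixelBuffer.
        let pixelBuffer = frame.capturedImage

        let request = VNDetectFaceRectanglesRequest { request, error in
            guard let faces = request.results as? [VNFaceObservation] else { return }
            for face in faces {
                // boundingBox is normalized [0, 1] with origin at bottom-left;
                // convert it to view coordinates before drawing an overlay.
                print("Face at \(face.boundingBox)")
            }
        }
        // .right matches the sensor orientation when the device is held
        // in portrait; adjust for your interface orientation.
        try? self.sequenceHandler.perform([request], on: pixelBuffer, orientation: .right)
    }
}
```

Set this object as the delegate of your ARSession and it will receive every frame; throttling (e.g. running the request on every nth frame) may be needed to keep the session at 60 fps.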

Related

Is the zoom factor for builtInTripleCamera's 0.5× zoom 1.0?

I'm working on a camera app, and I currently don't have a device with a triple camera (so I cannot test). When the app initially configures a camera, I want it to start with no zoom in or out, until the user starts pinching in or out (which then starts zooming).
In the video zoom factor documentation, it says
Allowed values range from 1.0 (full field of view) to the value of the active format’s videoMaxZoomFactor property.
So, when the triple camera uses its 2× zoom out (the 0.5× ultra-wide view), is the video zoom factor 1.0 at that point?
And if it doesn't use any zoom functionality, does the zoom factor become 2.0 (i.e. double the zoomed-out factor)?
Therefore, when configuring the camera with no zoom in or out, do I need to set the zoom factor to 2.0 only when using the triple camera?
var videoDevice: AVCaptureDevice?

let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInDualCamera, .builtInWideAngleCamera], mediaType: .video, position: .back)

init() {
    getBestDevice()
    setInitialZoomFactor(for: self.videoDevice)
}

func getBestDevice() {
    let devices = discoverySession.devices
    guard !devices.isEmpty else { fatalError("Missing capture devices.") }
    videoDevice = devices.first
}

func setInitialZoomFactor(for device: AVCaptureDevice?) {
    guard let device = device else { return }
    do {
        try device.lockForConfiguration()
        if device.deviceType == .builtInTripleCamera {
            device.videoZoomFactor = 2.0
        } else {
            device.videoZoomFactor = 1.0
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
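Rather than hard-coding 2.0, you can derive the initial factor from the device's virtualDeviceSwitchOverVideoZoomFactors property (iOS 13+), which lists the zoom factors at which a virtual device switches between its constituent cameras; the first entry is where the wide-angle "1x" view begins. A sketch of the selection logic (the helper name is mine; pass it the array read from the device):

```swift
import Foundation

// Pick the initial zoom factor that shows the wide-angle camera's full
// field of view ("1x"). Pass the device's
// virtualDeviceSwitchOverVideoZoomFactors (iOS 13+); for a triple camera
// this is typically [2, 6], so the wide "1x" view sits at factor 2.0.
func initialZoomFactor(switchOverFactors: [NSNumber]) -> CGFloat {
    // For a virtual device, factor 1.0 maps to the widest constituent
    // camera (the ultra-wide), which the camera UI labels "0.5x".
    if let firstSwitchOver = switchOverFactors.first {
        return CGFloat(firstSwitchOver.doubleValue)
    }
    // A single wide-angle camera: 1.0 already is "1x".
    return 1.0
}
```

So yes: on the triple camera, zoom factor 1.0 is the ultra-wide view, and setting videoZoomFactor to the first switch-over factor (typically 2.0) gives the un-zoomed wide-angle view, while 1.0 is correct for single-lens devices.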

How Can I Add DeviceMotion Capabilities to a Swift Playground?

I am working on a Swift playground and I am trying to use this code to get the device motion.
@objc func update() {
    if let deviceMotion = motionManager.deviceMotion {
        print("Device Motion Yaw: \(deviceMotion.attitude.yaw)")
    }
}
However, it seems that device motion does not work in a Swift playground, even though the same code works in an iOS app. How would I change a playground to support device motion? I am using an iPad running iOS 12 with the latest version of Swift Playgrounds, plus a Mac for writing the code. I know that the method gets called, and the code runs perfectly when it is part of an iOS app on both an iPad and an iPhone. How would I modify a playground to support this, since from my understanding it does not by default?
It is entirely possible; I've done it on several occasions. You'll need a CMMotionManager instance. There are many ways to do this, but I would recommend using a timer. Here is some example code, taken from Apple's developer documentation and modified to fit the question.
let motion = CMMotionManager()
var timer: Timer?

func startDeviceMotion() {
    if motion.isDeviceMotionAvailable {
        // How often to push updates.
        self.motion.deviceMotionUpdateInterval = 1.0 / 60.0
        self.motion.showsDeviceMovementDisplay = true
        self.motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical)
        // Configure a timer to fetch the motion data.
        self.timer = Timer(fire: Date(), interval: (1.0 / 60.0), repeats: true,
                           block: { (timer) in
            if let data = self.motion.deviceMotion {
                let x = data.attitude.pitch
                let y = data.attitude.roll
                let z = data.attitude.yaw
                // Use the data.
            }
        })
        RunLoop.current.add(self.timer!, forMode: RunLoop.Mode.default)
    }
}

startDeviceMotion()
Either do that, or try something like this, also from the documentation:
let queue = OperationQueue()

func startQueuedUpdates() {
    if motion.isDeviceMotionAvailable {
        self.motion.deviceMotionUpdateInterval = 1.0 / 60.0
        self.motion.showsDeviceMovementDisplay = true
        self.motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                             to: self.queue, withHandler: { (data, error) in
            // Make sure the data is valid before accessing it.
            if let validData = data {
                // Get the attitude relative to the magnetic north reference frame.
                let roll = validData.attitude.roll
                let pitch = validData.attitude.pitch
                let yaw = validData.attitude.yaw
                // Use the motion data in your app.
            }
        })
    }
}

I get a ridiculous value when reading the iOS camera's lens position

Using this code with the builtInWideAngleCamera in Swift on an iPhone XS Max running iOS 12.1.2:
let lensPos: Float = AVCaptureDevice.currentLensPosition
lockCameraForSettings()
self.inputDevice?.setFocusModeLocked(lensPosition: lensPos, completionHandler: { (time) -> Void in })
unlockCameraForShooting()
results in a crash:
[AVCaptureDevice setFocusModeLockedWithLensPosition:completionHandler:] The passed lensPosition -340282346638528859811704183484516925440.000000 is out-of-range [0, 1]'
The camera is running and visibly in focus in the on-screen preview. How is it possible for it to be in this configuration?
Inserting a constant value between 0 and 1 works, at least in that it does not throw an error.
I believe you mean to use .lensPosition instead of .currentLensPosition, which is a special sentinel constant rather than a real position. You can only access .lensPosition on an instance of AVCaptureDevice.
var captureDevice: AVCaptureDevice?

// Plus models and X's.
if let device = AVCaptureDevice.default(.builtInDualCamera,
                                        for: .video, position: .back) {
    captureDevice = device
// Single-lens devices.
} else if let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video, position: .back) {
    captureDevice = device
} else {
    // No camera was found. Is it broken?
    print("Missing expected back camera device.")
}

if let device = captureDevice {
    // We have a device; do something with it.
    print(device.lensPosition)
}
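Building on that, here is a sketch of locking focus at the lens's current physical position by reading the instance property (assuming a captureDevice obtained as above):

```swift
// Lock focus at the lens's current physical position. The value read from
// the instance is always in [0, 1], unlike the
// AVCaptureDevice.currentLensPosition sentinel.
if let device = captureDevice {
    do {
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.locked) {
            device.setFocusModeLocked(lensPosition: device.lensPosition) { _ in }
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```

Note that the AVCaptureDevice.currentLensPosition sentinel is intended to be passed directly to setFocusModeLocked(lensPosition:completionHandler:) to mean "stay at the current position", not to be read as an actual lens position.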

iOS camera view intermittently opening up black

I have a view controller with a button that presents a second view controller, which adds a video sublayer and starts the camera.
The code worked fine until I tried adding other things, like another button, to the second view controller; then it would sometimes work and sometimes not.
By "not work" I mean it opens up a black screen with nothing on it at all, and does not respond to anything.
I've deleted the buttons, code, etc., but that hasn't fixed anything.
Sometimes it just works, i.e. after it works I can add a button or change code and it will keep working, and then it shows a black screen again.
There are no build errors or stack traces; the app basically sits there waiting for me to do something (like press the record button), but nothing shows.
I've read that I should call bringSubviewToFront, but that doesn't seem to do anything.
Any suggestions?
Thanks in advance.
UPDATE: I think I found something related. I was trying to programmatically position a button on the screen using a CGRect, and part of that involved getting the text view's width and height.
I found that the code crashed with an "expected to find optional value but found nil" message, i.e. I couldn't do anything like textView.frame.width, textView.frame.height, or textView.translatesAutoresizingMaskIntoConstraints = false.
At first I thought it was my code, but after trying the same code on another view controller, it suddenly started working again, i.e. I get values for textView.frame.width and textView.frame.height.
And my camera started showing the preview!
So I reckon that when the preview is black, my buttons and text views have no values.
let captureSession = AVCaptureSession()

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    captureSession.sessionPreset = AVCaptureSession.Preset.high

    // Loop through all devices looking for cameras.
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
    let devices = deviceDiscoverySession.devices
    for device in devices {
        if device.hasMediaType(AVMediaType.video) {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            } else if device.position == AVCaptureDevice.Position.front {
                frontCamera = device
            }
        }
    }
    currentDevice = frontCamera

    // Loop through all devices looking for a microphone.
    let audioDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInMicrophone], mediaType: AVMediaType.audio, position: AVCaptureDevice.Position.unspecified)
    let audioDevices = audioDiscoverySession.devices
    for audioDevice in audioDevices {
        if audioDevice.hasMediaType(AVMediaType.audio) {
            audioCapture = audioDevice
        }
    }

    // Set up input/output.
    do {
        // Set up camera input.
        let captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice!)
        captureSession.addInput(captureDeviceInput)
        // Set up audio input.
        let captureDeviceAudio = try AVCaptureDeviceInput(device: audioCapture!)
        captureSession.addInput(captureDeviceAudio)
        videoFileOutput = AVCaptureMovieFileOutput()
        captureSession.addOutput(videoFileOutput!)
    } catch {
        print(error)
    }

    cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    cameraPreviewLayer?.connection?.videoOrientation = currentVideoOrientation()
    cameraPreviewLayer?.frame = self.view.frame
    self.view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
    captureSession.startRunning()
}
OK, I found out how to resolve it, but I don't know why it happens, other than it being a bug in Xcode.
It seems the problem has nothing to do with the video sublayer and its code.
I have Text Views and buttons etc on this ViewController.
I found that if I change the size of a button or text view, e.g. increase and then decrease the size of a text view, the problem goes away.
The problem comes back if you then change something, e.g. edit code or move buttons around, but if you go back to the text view and change its size, it will work again.
That's the workaround, but I don't know what triggers this problem.
Check whether you have added the code that asks for permission for your app to access the camera. If you added the code but did not allow camera access when the app asked, go to Settings > (app name) > Camera and switch on access. If the code is missing, add this:
// Permission for camera.
AVCaptureDevice.requestAccess(for: AVMediaType.video) { response in
    if response {
        // Access granted.
    } else {
        // Take required action.
    }
}

Capturing an image with AVFoundation

I am using AVFoundation for the camera.
This is my live preview:
It looks good. When the user presses the button, I create a snapshot on the same screen (like Snapchat).
I am using the following code for capturing the image and showing it on the screen:
self.stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
    (imageSampleBuffer: CMSampleBuffer!, _) in
    let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
    let pickedImage: UIImage = UIImage(data: imageDataJpeg)!
    self.captureSession.stopRunning()
    self.previewImageView.frame = CGRect(x: 0, y: 0, width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)
    self.previewImageView.image = pickedImage
    self.previewImageView.layer.zPosition = 100
}
After user captures an image screen looks like this:
Please look at the marked area: it was not visible on the live preview screen (screenshot 1).
In other words, the live preview does not show everything. But I am sure my live preview works well, because I compared it with other camera apps and everything matched my live preview screen. I guess the problem is with the captured image.
I am creating the live preview with the following code:
override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto
    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video.
        if device.hasMediaType(AVMediaTypeVideo) {
            // Finally, check the position and confirm we've got the back camera.
            if device.position == AVCaptureDevicePosition.Back {
                captureDevice = device as? AVCaptureDevice
            }
        }
    }
    if captureDevice != nil {
        beginSession()
    }
}

func beginSession() {
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
    } catch {
        print("error: \(error.localizedDescription)")
    }
    captureSession.addOutput(stillOutput)
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.cameraLayer.layer.addSublayer(previewLayer!)
    previewLayer?.frame = self.cameraLayer.frame
    captureSession.startRunning()
}
My cameraLayer:
How can I resolve this problem?
Presumably you are using an AVCaptureVideoPreviewLayer. So it sounds like this layer is incorrectly placed or incorrectly sized, or it has the wrong AVLayerVideoGravity setting. Part of the image is off the screen or cropped; that's why you don't see that part of what the camera sees while you are previewing.
OK, I found the solution. I used
captureSession.sessionPreset = AVCaptureSessionPresetHigh
instead of
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
and that fixed the problem.
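A likely explanation for the mismatch (my assumption; the thread doesn't confirm it): AVCaptureSessionPresetPhoto captures a 4:3 frame while AVCaptureSessionPresetHigh is 16:9, so with resizeAspectFill the taller 4:3 preview is cropped more aggressively to fill the screen, and the captured still contains content the preview never showed. If you need to keep the Photo preset, one alternative is to letterbox the preview so it shows the whole frame:

```swift
// Show the entire captured frame (letterboxed) instead of cropping it
// to fill the layer's bounds.
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
previewLayer?.frame = self.cameraLayer.frame
```

With .resizeAspect the preview matches the captured still exactly, at the cost of bars at the edges of the layer.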
