AVCaptureVideoPreviewLayer converts capture device coordinates incorrectly - iOS

I'm working on implementing a camera barcode scanner with the Vision framework. My setup is fairly simple: a preview layer shows video from the device's rear camera, frames from the video stream are processed using VNDetectBarcodesRequest, and if a barcode is detected, I would like to show a rectangle around it in the preview.
Vision framework coordinates are normalized to the video frame size, so I have to convert them back to layer coordinates to be able to draw the rectangle. The recommended way to do this is AVCaptureVideoPreviewLayer.layerPointConverted(fromCaptureDevicePoint:). There is also a difference in the point of origin (Vision puts it in the lower-left corner), so I have to take care of that as well. Here's what the result looks like:
let convertPoint = { (point: CGPoint) in
    let coordinateFlip = CGAffineTransform.identity
        .translatedBy(x: 0, y: 1)
        .scaledBy(x: 1, y: -1)
    return videoLayer.layerPointConverted(fromCaptureDevicePoint: point.applying(coordinateFlip))
}
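For context, the closure is applied to each corner of a detected barcode, roughly like this (a sketch; observation is a VNBarcodeObservation taken from the request results):
// Sketch: convert all four normalized corners into layer coordinates.
let corners = [observation.topLeft, observation.topRight,
               observation.bottomRight, observation.bottomLeft].map(convertPoint)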
The problem is that this conversion is incorrect: the rectangle ends up being rotated 90º clockwise.
I have no idea why. I set the video orientation to .portrait on both AVCaptureVideoPreviewLayer.connection and AVCaptureVideoDataOutput.connection(with: .video). The Vision coordinates themselves seem to be correct: if I convert them manually like so:
extension VNBarcodeObservation {
    func verticesConvertedToCoordinatesOf(_ targetLayer: CALayer) -> [CGPoint] {
        let layerSize = targetLayer.bounds.size
        return [topLeft, topRight, bottomRight, bottomLeft]
            .map { CGPoint(x: $0.x * layerSize.width, y: $0.y * layerSize.height) }
    }
}
the result is correct.
Why is AVCaptureVideoPreviewLayer.layerPointConverted(fromCaptureDevicePoint:) returning incorrect results?
It seems like somehow AVCaptureVideoPreviewLayer is stuck in the .landscapeRight orientation.
Here's how I set up video capture:
let captureSessionQueue = DispatchQueue.global(qos: .userInteractive)
let camera = AVCaptureDevice.default(for: .video)
let session = AVCaptureSession()
session.beginConfiguration()
session.setCamera(camera, preferredPresets: [.hd1280x720])
let request = VNDetectBarcodesRequest(symbologies: VNDetectBarcodesRequest.supportedSymbologies)
request.regionOfInterest = CGRect(x: 0.15, y: 0.45, width: 0.7, height: 0.4)
let delegate = CaptureVideoDataOutputDelegate()
delegate.vnRequests = [request]
let deviceOutput = AVCaptureVideoDataOutput()
deviceOutput.setSampleBufferDelegate(delegate, queue: captureSessionQueue)
deviceOutput.alwaysDiscardsLateVideoFrames = true
session.addOutput(deviceOutput)
deviceOutput.connection(with: .video)?.videoOrientation = .portrait
session.commitConfiguration()
captureSessionQueue.async {
session.startRunning()
}
and CaptureVideoDataOutputDelegate:
public class CaptureVideoDataOutputDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    public var vnRequests: [VNRequest]?
    private let vnHandler = VNSequenceRequestHandler()
    private var frameCounter: Int64 = 0

    public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        defer { frameCounter += 1 }
        guard
            // Try to reduce CPU usage and energy consumption by processing only every 10th frame
            frameCounter.isMultiple(of: 10),
            let vnRequests = vnRequests,
            let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        else { return }
        do {
            let size = CGSize(width: CVPixelBufferGetWidth(pixelBuffer), height: CVPixelBufferGetHeight(pixelBuffer))
            print(">>> will scan a \(size) frame #\(frameCounter) for barcodes (connection: \(connection), orientation: \(connection.videoOrientation))")
            try vnHandler.perform(vnRequests, on: pixelBuffer, orientation: .up)
        } catch {
            print(">>> failed to scan frame #\(frameCounter) for barcodes: \(error)")
        }
    }
}
I have verified that both deviceOutput.connection(with: .video) and videoLayer.connection are not nil when I configure the video orientation. Examining the size of the buffer received by CaptureVideoDataOutputDelegate confirms that the orientation is correct (the size is 720x1280, not 1280x720). The orientation of the connection is also .portrait at this point.
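The checks themselves look roughly like this (a sketch, using videoLayer and deviceOutput from the setup code above):
// Sketch of the verification described above; rawValue 1 == .portrait.
if let outputConnection = deviceOutput.connection(with: .video),
   let previewConnection = videoLayer.connection {
    print("output orientation: \(outputConnection.videoOrientation.rawValue)")
    print("preview orientation: \(previewConnection.videoOrientation.rawValue)")
}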

Related

How to calculate regionOfInterest for iOS Vision's VNRequest

We have implemented a scanning app which shows the camera preview at the bottom of the screen, 300 px high and the full screen width.
What is the way of calculating the regionOfInterest to pass to Vision so it detects barcodes across the whole camera preview?
We have configured the camera as below:
func setupCamera() {
    guard let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back) else {
        print("Could not create capture device.")
        return
    }
    self.captureDevice = captureDevice
    if captureDevice.supportsSessionPreset(.hd4K3840x2160) {
        captureSession.sessionPreset = AVCaptureSession.Preset.hd4K3840x2160
        bufferAspectRatio = 3840.0 / 2160.0
    } else {
        captureSession.sessionPreset = AVCaptureSession.Preset.hd1920x1080
        bufferAspectRatio = 1920.0 / 1080.0
    }
    guard let deviceInput = try? AVCaptureDeviceInput(device: captureDevice) else {
        print("Could not create device input.")
        return
    }
    if captureSession.canAddInput(deviceInput) {
        captureSession.addInput(deviceInput)
    }
    // Configure video data output.
    videoDataOutput.alwaysDiscardsLateVideoFrames = true
    videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
    videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
    if captureSession.canAddOutput(videoDataOutput) {
        captureSession.addOutput(videoDataOutput)
        videoDataOutput.connection(with: AVMediaType.video)?.preferredVideoStabilizationMode = .off
    } else {
        print("Could not add VDO output")
        return
    }
    // Set zoom and autofocus to help focus on very small text.
    do {
        try captureDevice.lockForConfiguration()
        //captureDevice.videoZoomFactor = 2
        captureDevice.autoFocusRangeRestriction = .near
        captureDevice.unlockForConfiguration()
    } catch {
        print("Could not set zoom level due to error: \(error)")
        return
    }
    captureSession.startRunning()
}
The region of interest is calculated as below:
func calculateRegionOfInterest() {
    // Figure out size of ROI.
    let size = CGSize(width: previewWidth / screenWidth, height: previewHeight / screenHeight) // ratio of preview to screen
    // Make it centered.
    regionOfInterest.origin = CGPoint(x: (1 - size.width) / 2, y: (1 - size.height) / 2)
    regionOfInterest.size = size
    // ROI changed, update transform.
    setupOrientationAndTransform()
    // Update the cutout to match the new ROI.
    DispatchQueue.main.async {
        // Wait for the next run cycle before updating the cutout. This
        // ensures that the preview layer already has its new orientation.
        self.updateCutout()
    }
}

func updateCutout() {
    // Figure out where the cutout ends up in layer coordinates.
    let roiRectTransform = bottomToTopTransform.concatenating(uiRotationTransform)
    let cutout = previewView.videoPreviewLayer.layerRectConverted(fromMetadataOutputRect: regionOfInterest.applying(roiRectTransform))
    // Create the mask.
    let path = UIBezierPath(rect: cutoutView.frame)
    path.append(UIBezierPath(rect: cutout))
    maskLayer.path = path.cgPath
    // Move the number view down under the cutout.
    var numFrame = cutout
    numFrame.origin.y += numFrame.size.height
    numberView.frame = numFrame
}

func setupOrientationAndTransform() {
    // Recalculate the affine transform between Vision coordinates and AVFoundation coordinates.
    // Compensate for the region of interest.
    let roi = regionOfInterest
    roiToGlobalTransform = CGAffineTransform(translationX: roi.origin.x, y: roi.origin.y).scaledBy(x: roi.width, y: roi.height)
    // Compensate for orientation (buffers always come in the same orientation).
    switch currentOrientation {
    case .landscapeLeft:
        textOrientation = CGImagePropertyOrientation.up
        uiRotationTransform = CGAffineTransform.identity
    case .landscapeRight:
        textOrientation = CGImagePropertyOrientation.down
        uiRotationTransform = CGAffineTransform(translationX: 1, y: 1).rotated(by: CGFloat.pi)
    case .portraitUpsideDown:
        textOrientation = CGImagePropertyOrientation.left
        uiRotationTransform = CGAffineTransform(translationX: 1, y: 0).rotated(by: CGFloat.pi / 2)
    default: // We default everything else to .portraitUp
        textOrientation = CGImagePropertyOrientation.right
        uiRotationTransform = CGAffineTransform(translationX: 0, y: 1).rotated(by: -CGFloat.pi / 2)
    }
    // Full Vision ROI to AVFoundation transform.
    visionToAVFTransform = roiToGlobalTransform.concatenating(bottomToTopTransform).concatenating(uiRotationTransform)
}
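For completeness, bottomToTopTransform is used above but not shown; it is the unit-square vertical flip, assumed here to match Apple's sample code that this setup is based on:
// Flips normalized coordinates between Vision's bottom-left origin and
// AVFoundation's top-left origin (definition assumed from Apple's sample).
let bottomToTopTransform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)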
regionOfInterest and textOrientation are passed to the VNImageRequestHandler as below:
override func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let request = VNDetectBarcodesRequest()
        if #available(iOS 15.0, *) {
            request.revision = VNDetectBarcodesRequestRevision2
        } else {
            // Fall back on the earlier revision.
            request.revision = VNDetectBarcodesRequestRevision1
        }
        // Only run on the region of interest for maximum speed.
        request.regionOfInterest = regionOfInterest
        let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: textOrientation, options: [:])
        do {
            try requestHandler.perform([request])
        } catch {
            print(error)
        }
    }
}
The region of interest works as expected on iPad in portrait mode, but on iPod and on iPad in landscape mode the code above does not scan barcodes near the top and bottom corners, even when the barcode is completely visible in the camera preview.
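For reference, the preview-visible region can also be expressed directly in the video output's normalized coordinate space using the preview layer's own conversion (a sketch, not from the code above; the result still needs the bottom-to-top flip before being handed to Vision):
// The part of the output that is visible in the preview, normalized with a
// top-left origin (AVFoundation metadata-output space).
let visibleMetadataRect = previewView.videoPreviewLayer
    .metadataOutputRectConverted(fromLayerRect: previewView.videoPreviewLayer.bounds)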
Reference links:
Vision framework barcode detection region of interest not working
https://developer.apple.com/documentation/vision/reading_phone_numbers_in_real_time

AVCaptureVideoDataOutputSampleBufferDelegate drop frames using CIFilters for video filtering

I have a very strange case where AVCaptureVideoDataOutputSampleBufferDelegate drops frames once I use 13 different filter chains. Let me explain:
I have a CameraController setup, nothing special. Here is my delegate method:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
if !paused {
if connection.output?.connection(with: .audio) == nil {
//capture video
// my try to avoid "Out of buffers error", no luck ;(
lastCapturedBuffer = nil
let err = CMSampleBufferCreateCopy(allocator: kCFAllocatorDefault, sampleBuffer: sampleBuffer, sampleBufferOut: &lastCapturedBuffer)
if err == noErr {
}
connection.videoOrientation = .portrait
// getting image
let pixelBuffer = CMSampleBufferGetImageBuffer(lastCapturedBuffer!)
// remove if any
CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
// captured - is just ciimage property
captured = CIImage(cvPixelBuffer: pixelBuffer!)
//remove if any
CVPixelBufferUnlockBaseAddress(pixelBuffer!,CVPixelBufferLockFlags(rawValue: 0))
//CVPixelBufferUnlockBaseAddress(pixelBuffer!, .readOnly)
// transform image to targer resolution
let srcWidth = CGFloat(captured.extent.width)
let srcHeight = CGFloat(captured.extent.height)
let dstWidth: CGFloat = ConstantsManager.shared.k_video_width
let dstHeight: CGFloat = ConstantsManager.shared.k_video_height
let scaleX = dstWidth / srcWidth
let scaleY = dstHeight / srcHeight
var transform = CGAffineTransform.init(scaleX: scaleX, y: scaleY)
captured = captured.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: dstWidth, height: dstHeight))
// mirror for front camera
if front {
var t = CGAffineTransform.init(scaleX: -1, y: 1)
t = t.translatedBy(x: -ConstantsManager.shared.k_video_width, y: 0)
captured = captured.transformed(by: t)
}
// video capture logic
let writable = canWrite()
if writable,
sessionAtSourceTime == nil {
sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(lastCapturedBuffer!)
videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
}
if writable, (videoWriterInput.isReadyForMoreMediaData) {
videoWriterInput.append(lastCapturedBuffer!)
}
// apply effect in realtime <- here is problem. If I comment next line, it will be fixed but effect will n't be applied
captured = FilterManager.shared.applyFilterForCamera(inputImage: captured)
// current frame in case user wants to save image as photo
self.capturedPhoto = captured
// sent frame to Camcoder view controller
self.delegate?.didCapturedFrame(frame: captured)
} else {
// capture sound
let writable = canWrite()
if writable, (audioWriterInput.isReadyForMoreMediaData) {
//print("write audio buffer")
audioWriterInput?.append(lastCapturedBuffer!)
}
}
} else {
// paused
}
}
I also implemented didDrop delegate method, here is how I figure out why it drops frames:
func captureOutput(_ output: AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    print("did drop")
    var mode: CMAttachmentMode = 0
    let reason = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_DroppedFrameReason, attachmentModeOut: &mode)
    print("reason \(String(describing: reason))") // Optional(OutOfBuffers)
}
So I did it like a pro and just commented out parts of the code to find where the problem is. It's here:
captured = FilterManager.shared.applyFilterForCamera(inputImage: captured)
FilterManager is a singleton; here is the called func:
func applyFilterForCamera(inputImage: CIImage) -> CIImage {
    return currentVsFilter!.apply(sourceImage: inputImage)
}
currentVsFilter is an object of the VSFilter type; here is an example of one:
import Foundation
import AVKit
class TestFilter: CustomFilter {
let _name = "Тестовый Фильтр"
let _displayName = "Test Filter"
var tempImage: CIImage?
var final: CGImage?
override func name() -> String {
return _name
}
override func displayName() -> String {
return _displayName
}
override init() {
super.init()
print("Test Filter init")
// setup my custom kernel filter
self.noise.type = GlitchFilter.GlitchType.allCases[2]
}
// this returns composition for playback using AVPlayer
override func composition(asset: AVAsset) -> AVMutableVideoComposition {
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
let inputImage = request.sourceImage.cropped(to: request.sourceImage.extent)
DispatchQueue.global(qos: .userInitiated).async {
let output = self.apply(sourceImage: inputImage, forComposition: true)
request.finish(with: output, context: nil)
}
})
let size = FilterManager.shared.cropRectForOrientation().size
composition.renderSize = size
return composition
}
// this returns actual filtered CIImage, used for both AVPlayer composition and realtime camera
override func apply(sourceImage: CIImage, forComposition: Bool = false) -> CIImage {
// rendered text
tempImage = FilterManager.shared.textRenderedImage()
// some filters chained one by one
self.screenBlend?.setValue(tempImage, forKey: kCIInputImageKey)
self.screenBlend?.setValue(sourceImage, forKey: kCIInputBackgroundImageKey)
self.noise.inputImage = self.screenBlend?.outputImage
self.noise.inputAmount = CGFloat.random(in: 1.0...3.0)
// result
tempImage = self.noise.outputImage
// correct crop
let rect = forComposition ? FilterManager.shared.cropRectForOrientation() : FilterManager.shared.cropRect
final = self.context.createCGImage(tempImage!, from: rect!)
return CIImage(cgImage: final!)
}
}
And now the strangest thing: I have 30 VSFilters, and when I got to the 13th (switching one by one with a UIButton) I got the "Out of Buffers" error, this one:
kCMSampleBufferDroppedFrameReason_OutOfBuffers
What I tested:
I changed the order of vsFilters in the filters array inside the FilterManager singleton: same result.
I tried switching from the first filter up to the 12th one by one, then going back: that works, but as soon as I switch to the 13th (of 30, counting from 0): bug.
It looks like it can handle only 12 VSFilter objects, as if it retains them somehow, or maybe it's related to threading, I don't know.
This app is made for iOS devices, tested on an iPhone X with iOS 13.3.1.
It is a video editor app that applies different effects to both the live camera stream and video files from the camera roll.
Maybe someone has experience with this?
Have a great day
Best, Victor
Edit 1: If I reinit the cameraController (AVCaptureSession, input/output devices) it works, but this is an ugly option and it adds lag when switching filters.
OK, so I finally won this battle. In case someone else gets this "OutOfBuffers" problem, here is my solution.
As I figured out, CIFilter grabs the CVPixelBuffer and doesn't release it while filtering images. It kind of creates one huge buffer, I guess. Strange thing: it doesn't create a memory leak, so I guess it doesn't hold on to the particular buffer but creates a strong reference to it. As the rumors (me) say, it can handle only 12 such references.
So my approach was to copy the CVPixelBuffer and then work with the copy, instead of the buffer I got from the AVCaptureVideoDataOutputSampleBufferDelegate didOutput func.
Here is my new code:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
if !paused {
//print("camera controller \(id) got frame")
if connection.output?.connection(with: .audio) == nil {
//capture video
connection.videoOrientation = .portrait
// getting image
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
// this works!
let copyBuffer = pixelBuffer.copy()
// captured - is just ciimage property
captured = CIImage(cvPixelBuffer: copyBuffer)
//remove if any
// transform image to targer resolution
let srcWidth = CGFloat(captured.extent.width)
let srcHeight = CGFloat(captured.extent.height)
let dstWidth: CGFloat = ConstantsManager.shared.k_video_width
let dstHeight: CGFloat = ConstantsManager.shared.k_video_height
let scaleX = dstWidth / srcWidth
let scaleY = dstHeight / srcHeight
var transform = CGAffineTransform.init(scaleX: scaleX, y: scaleY)
captured = captured.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: dstWidth, height: dstHeight))
// mirror for front camera
if front {
var t = CGAffineTransform.init(scaleX: -1, y: 1)
t = t.translatedBy(x: -ConstantsManager.shared.k_video_width, y: 0)
captured = captured.transformed(by: t)
}
// video capture logic
let writable = canWrite()
if writable,
sessionAtSourceTime == nil {
sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
}
if writable, (videoWriterInput.isReadyForMoreMediaData) {
videoWriterInput.append(sampleBuffer)
}
self.captured = FilterManager.shared.applyFilterForCamera(inputImage: self.captured)
// current frame in case user wants to save image as photo
self.capturedPhoto = captured
// sent frame to Camcoder view controller
self.delegate?.didCapturedFrame(frame: captured)
} else {
// capture sound
let writable = canWrite()
if writable, (audioWriterInput.isReadyForMoreMediaData) {
//print("write audio buffer")
audioWriterInput?.append(sampleBuffer)
}
}
} else {
// paused
//print("paused camera controller \(id)")
}
}
And here is the func to copy the buffer:
func copy() -> CVPixelBuffer {
    precondition(CFGetTypeID(self) == CVPixelBufferGetTypeID(), "copy() cannot be called on a non-CVPixelBuffer")
    var _copy: CVPixelBuffer?
    CVPixelBufferCreate(
        kCFAllocatorDefault,
        CVPixelBufferGetWidth(self),
        CVPixelBufferGetHeight(self),
        CVPixelBufferGetPixelFormatType(self),
        nil,
        &_copy)
    guard let copy = _copy else { fatalError() }
    CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags.readOnly)
    CVPixelBufferLockBaseAddress(copy, CVPixelBufferLockFlags(rawValue: 0))
    let copyBaseAddress = CVPixelBufferGetBaseAddress(copy)
    let currBaseAddress = CVPixelBufferGetBaseAddress(self)
    print("copy data size: \(CVPixelBufferGetDataSize(copy))")
    print("self data size: \(CVPixelBufferGetDataSize(self))")
    memcpy(copyBaseAddress, currBaseAddress, CVPixelBufferGetDataSize(copy))
    //memcpy(copyBaseAddress, currBaseAddress, CVPixelBufferGetDataSize(self) * 2)
    CVPixelBufferUnlockBaseAddress(copy, CVPixelBufferLockFlags(rawValue: 0))
    CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags.readOnly)
    return copy
}
I used it as an extension.
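One caveat about the copy (a sketch below, with assumed naming): the single memcpy relies on the source and the freshly created buffer sharing one contiguous, identically laid-out allocation, which is not guaranteed for planar formats such as the camera's bi-planar YUV. A plane-by-plane, row-by-row variant is more defensive:
import CoreVideo
import Foundation

extension CVPixelBuffer {
    // Hypothetical plane-aware variant of copy(); copies each plane row by row
    // so differing bytes-per-row values between source and copy are handled.
    func planarCopy() -> CVPixelBuffer {
        var _copy: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault,
                            CVPixelBufferGetWidth(self),
                            CVPixelBufferGetHeight(self),
                            CVPixelBufferGetPixelFormatType(self),
                            nil,
                            &_copy)
        guard let copy = _copy else { fatalError() }
        CVPixelBufferLockBaseAddress(self, .readOnly)
        CVPixelBufferLockBaseAddress(copy, [])
        defer {
            CVPixelBufferUnlockBaseAddress(copy, [])
            CVPixelBufferUnlockBaseAddress(self, .readOnly)
        }
        if CVPixelBufferIsPlanar(self) {
            for plane in 0..<CVPixelBufferGetPlaneCount(self) {
                guard let src = CVPixelBufferGetBaseAddressOfPlane(self, plane),
                      let dst = CVPixelBufferGetBaseAddressOfPlane(copy, plane) else { continue }
                let srcBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(self, plane)
                let dstBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(copy, plane)
                for row in 0..<CVPixelBufferGetHeightOfPlane(self, plane) {
                    memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow,
                           min(srcBytesPerRow, dstBytesPerRow))
                }
            }
        } else if let src = CVPixelBufferGetBaseAddress(self),
                  let dst = CVPixelBufferGetBaseAddress(copy) {
            memcpy(dst, src, min(CVPixelBufferGetDataSize(self), CVPixelBufferGetDataSize(copy)))
        }
        return copy
    }
}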
I hope this will help anyone with a similar problem.
Best, Victor

slow frame rate when rendering cifiltered ciimage and MTKView while using face detection (Vision and CIDetection)

I have an app which does real-time filtering on the camera feed. I get each frame from the camera, do some filtering using CIFilter, and then pass the final frame (a CIImage) to an MTKView to be shown in my SwiftUI view. It works fine, but when I want to do real-time face/body detection on the camera feed, the frame rate drops to 8 frames per second and it gets super laggy.
I tried everything I could find on the internet, using Vision, CIDetector and CoreML, all with the same result. I do run the detection on a global queue, which keeps the UI responsive (things like scroll views work fine), but the feed I'm showing in the main view is still laggy.
So I tried changing the view from MTKView to UIImageView. Xcode shows it rendering at 120 FPS (which I don't understand, since it's 30 FPS when not using any face detection), but the feed is still laggy and somehow cannot keep up with the output frame rate. I'm new to this and I don't understand why it behaves like that.
I also tried just passing the incoming image to the MTKView (without any filtering in between, but with face detection): same laggy result. Without face detection it goes to 30 FPS (why not 120?).
This is the code I'm using for converting the sample buffer to a CIImage:
extension CICameraCapture: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        var ciImage = CIImage(cvImageBuffer: imageBuffer)
        if self.cameraPosition == AVCaptureDevice.Position.front {
            ciImage = ciImage.oriented(.downMirrored)
        }
        ciImage = ciImage.transformed(by: CGAffineTransform(rotationAngle: 3 * .pi / 2))
        ciImage = ciImage.transformToOrigin(withSize: ciImage.extent.size)
        // This is for detecting faces in real time; I have done it with Vision
        // and also with CIDetector (CIDetector is a little faster when set to
        // low accuracy, but still not the desired result / frame rate).
        detectFace(image: ciImage)
        DispatchQueue.main.async {
            self.callback(ciImage)
        }
    }
}
And this is the MTKView code, which is a very simple and basic implementation:
import MetalKit
import CoreImage
class MetalRenderView: MTKView {
//var textureCache: CVMetalTextureCache?
override init(frame frameRect: CGRect, device: MTLDevice?) {
super.init(frame: frameRect, device: device)
if super.device == nil {
fatalError("No support for Metal. Sorry")
}
framebufferOnly = false
preferredFramesPerSecond = 120
sampleCount = 2
}
required init(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
private lazy var commandQueue: MTLCommandQueue? = {
[unowned self] in
return self.device!.makeCommandQueue()
}()
private lazy var ciContext: CIContext = {
[unowned self] in
return CIContext(mtlDevice: self.device!)
}()
var image: CIImage? {
didSet {
renderImage()
}
}
private func renderImage() {
guard var image = image else { return }
image = image.transformToOrigin(withSize: drawableSize) // this is an extension to resize
//the image to the render size so i dont get the render error while rendering a frame
let commandBuffer = commandQueue?.makeCommandBuffer()
let destination = CIRenderDestination(width: Int(drawableSize.width),
height: Int(drawableSize.height),
pixelFormat: .bgra8Unorm,
commandBuffer: commandBuffer) { () -> MTLTexture in
return self.currentDrawable!.texture
}
try! ciContext.startTask(toRender: image, to: destination)
commandBuffer?.present(currentDrawable!)
commandBuffer?.commit()
draw()
}
}
and here is the code for face detection using CIDetector:
func detectFace(image: CIImage) {
    //DispatchQueue.global().async {
    let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh,
                   CIDetectorSmile: true, CIDetectorTypeFace: true] as [String: Any]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                                  options: options)!
    let faces = faceDetector.features(in: image)
    if let face = faces.first as? CIFaceFeature {
        AppState.shared.mouth = face.mouthPosition
        AppState.shared.leftEye = face.leftEyePosition
        AppState.shared.rightEye = face.rightEyePosition
    }
    //}
}
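For reference, the detector above is re-created on every call; a reusable instance is the cheaper setup (a sketch added to the capture class, the property name is mine):
// Sketch (not from the original code): create the detector once and reuse it
// instead of constructing a new CIDetector for every frame.
private lazy var faceDetector: CIDetector? = CIDetector(
    ofType: CIDetectorTypeFace,
    context: nil,
    options: [CIDetectorAccuracy: CIDetectorAccuracyLow,
              CIDetectorTracking: true])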
What I have tried:
1) Different face detection methods, using Vision, CIDetector and also CoreML (the latter not very deeply, as I don't have experience with it). I get the detection info, but the frame rate is 8, or 15 at best (which amounts to delayed detection).
2) I've read somewhere that it might be a result of the image color space, so I tried different video settings and different rendering color spaces; still no change in the frame rate.
3) I'm fairly sure it is related to the pixel buffer release time, so I deep-copied the imageBuffer and passed that to the detection. Apart from some memory issues it went up to 15 FPS, but still not a minimum of 30 FPS. I also tried converting the imageBuffer to a CIImage, rendering that to a CGImage and back to a CIImage just to release the buffer, but I could not get more than 15 FPS either (on average; sometimes it goes to 17 or 19, but it's still laggy).
I'm new to this and still trying to figure it out. I would appreciate any suggestions, samples or tips that could point me towards a better path for solving this.
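Roughly what the "run detection off the render path" attempt looks like as members of the capture class (a sketch; the queue and the isDetecting flag are assumed names, not from the code above):
// Sketch: run detection on its own serial queue and skip frames while a
// request is still in flight, so rendering is never blocked by detection.
// (Good enough as a sketch; a lock around the flag would make it strictly safe.)
private let detectionQueue = DispatchQueue(label: "face-detection", qos: .userInitiated)
private var isDetecting = false

func detectFaceIfIdle(image: CIImage) {
    guard !isDetecting else { return } // drop this frame for detection only
    isDetecting = true
    detectionQueue.async {
        self.detectFace(image: image)
        self.isDetecting = false
    }
}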
Update: this is the camera capture setup code:
class CICameraCapture: NSObject {
typealias Callback = (CIImage?) -> ()
private var cameraPosition = AVCaptureDevice.Position.front
var ciContext: CIContext?
let callback: Callback
private let session = AVCaptureSession()
private let sampleBufferQueue = DispatchQueue(label: "buffer", qos: .userInitiated)//, attributes: [], autoreleaseFrequency: .workItem)
// face detection
//private var sequenceHandler = VNSequenceRequestHandler()
//var request: VNCoreMLRequest!
//var visionModel: VNCoreMLModel!
//let detectionQ = DispatchQueue(label: "detectionQ", qos: .background)//, attributes: [], autoreleaseFrequency: .workItem)
init(callback: @escaping Callback) {
self.callback = callback
super.init()
prepareSession()
ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
}
func start() {
session.startRunning()
}
func stop() {
session.stopRunning()
}
private func prepareSession() {
session.sessionPreset = .high //.hd1920x1080
let cameraDiscovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera], mediaType: .video, position: cameraPosition)
guard let camera = cameraDiscovery.devices.first else { fatalError("Can't get hold of the camera") }
//try! camera.lockForConfiguration()
//camera.activeVideoMinFrameDuration = camera.formats[0].videoSupportedFrameRateRanges[0].minFrameDuration
//camera.activeVideoMaxFrameDuration = camera.formats[0].videoSupportedFrameRateRanges[0].maxFrameDuration
//camera.unlockForConfiguration()
guard let input = try? AVCaptureDeviceInput(device: camera) else { fatalError("Can't get hold of the camera") }
session.addInput(input)
let output = AVCaptureVideoDataOutput()
output.videoSettings = [:]
//print(output.videoSettings.description)
//[875704438, 875704422, 1111970369]
//output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32BGRA)]
output.setSampleBufferDelegate(self, queue: sampleBufferQueue)
session.addOutput(output)
session.commitConfiguration()
}
}

Face Detection with Camera

How can I do face detection in real time just as the "Camera" app does?
I noticed that AVCaptureStillImageOutput is deprecated after iOS 10, so I use AVCapturePhotoOutput instead. However, I found that the image I save for face detection is not satisfactory. Any ideas?
UPDATE
After giving what @Shravya Boggarapu mentioned a try, I currently use AVCaptureMetadataOutput to detect faces, without CIFaceDetector. It works as expected. However, when I try to draw the bounds of the face, they end up mislocated. Any idea?
let metaDataOutput = AVCaptureMetadataOutput()
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
let backCamera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)
do {
let input = try AVCaptureDeviceInput(device: backCamera)
if (captureSession.canAddInput(input)) {
captureSession.addInput(input)
// MetadataOutput instead
if(captureSession.canAddOutput(metaDataOutput)) {
captureSession.addOutput(metaDataOutput)
metaDataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
metaDataOutput.metadataObjectTypes = [AVMetadataObjectTypeFace]
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer?.frame = cameraView.bounds
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
cameraView.layer.addSublayer(previewLayer!)
captureSession.startRunning()
}
}
} catch {
print(error.localizedDescription)
}
and
extension CameraViewController: AVCaptureMetadataOutputObjectsDelegate {
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!) {
if findFaceControl {
findFaceControl = false
for metadataObject in metadataObjects {
if (metadataObject as AnyObject).type == AVMetadataObjectTypeFace {
print("😇😍😎")
print(metadataObject)
let bounds = (metadataObject as! AVMetadataFaceObject).bounds
print("origin x: \(bounds.origin.x)")
print("origin y: \(bounds.origin.y)")
print("size width: \(bounds.size.width)")
print("size height: \(bounds.size.height)")
print("cameraView width: \(self.cameraView.frame.width)")
print("cameraView height: \(self.cameraView.frame.height)")
var face = CGRect()
face.origin.x = bounds.origin.x * self.cameraView.frame.width
face.origin.y = bounds.origin.y * self.cameraView.frame.height
face.size.width = bounds.size.width * self.cameraView.frame.width
face.size.height = bounds.size.height * self.cameraView.frame.height
print(face)
showBounds(at: face)
}
}
}
}
}
Original
(see it on GitHub)
var captureSession = AVCaptureSession()
var photoOutput = AVCapturePhotoOutput()
var previewLayer: AVCaptureVideoPreviewLayer?
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(true)
captureSession.sessionPreset = AVCaptureSessionPresetHigh
let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
do {
let input = try AVCaptureDeviceInput(device: backCamera)
if (captureSession.canAddInput(input)) {
captureSession.addInput(input)
if(captureSession.canAddOutput(photoOutput)){
captureSession.addOutput(photoOutput)
captureSession.startRunning()
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer?.frame = cameraView.bounds
cameraView.layer.addSublayer(previewLayer!)
}
}
} catch {
print(error.localizedDescription)
}
}
func captureImage() {
let settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType
]
settings.previewPhotoFormat = previewFormat
photoOutput.capturePhoto(with: settings, delegate: self)
}
func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
if let error = error {
print(error.localizedDescription)
}
// Not include previewPhotoSampleBuffer
if let sampleBuffer = photoSampleBuffer,
let dataImage = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: nil) {
self.imageView.image = UIImage(data: dataImage)
self.imageView.isHidden = false
self.previewLayer?.isHidden = true
self.findFace(img: self.imageView.image!)
}
}
findFace works with a normal image. However, with the image I capture via the camera it either does not work or sometimes recognizes only one face.
Normal Image
Capture Image
func findFace(img: UIImage) {
guard let faceImage = CIImage(image: img) else { return }
let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
// For converting the Core Image Coordinates to UIView Coordinates
let detectedImageSize = faceImage.extent.size
var transform = CGAffineTransform(scaleX: 1, y: -1)
transform = transform.translatedBy(x: 0, y: -detectedImageSize.height)
if let faces = faceDetector?.features(in: faceImage, options: [CIDetectorSmile: true, CIDetectorEyeBlink: true]) {
for face in faces as! [CIFaceFeature] {
// Apply the transform to convert the coordinates
var faceViewBounds = face.bounds.applying(transform)
// Calculate the actual position and size of the rectangle in the image view
let viewSize = imageView.bounds.size
let scale = min(viewSize.width / detectedImageSize.width,
viewSize.height / detectedImageSize.height)
let offsetX = (viewSize.width - detectedImageSize.width * scale) / 2
let offsetY = (viewSize.height - detectedImageSize.height * scale) / 2
faceViewBounds = faceViewBounds.applying(CGAffineTransform(scaleX: scale, y: scale))
print("faceBounds = \(faceViewBounds)")
faceViewBounds.origin.x += offsetX
faceViewBounds.origin.y += offsetY
showBounds(at: faceViewBounds)
}
if faces.count != 0 {
print("Number of faces: \(faces.count)")
} else {
print("No faces 😢")
}
}
}
func showBounds(at bounds: CGRect) {
    let indicator = UIView(frame: bounds)
    indicator.frame = bounds
    indicator.layer.borderWidth = 3
    indicator.layer.borderColor = UIColor.red.cgColor
    indicator.backgroundColor = .clear
    self.imageView.addSubview(indicator)
    faceBoxes.append(indicator)
}
There are two ways to detect faces: CIFaceDetector and AVCaptureMetadataOutput. Depending on your requirements, choose what is relevant for you.
CIFaceDetector has more features: it gives you the location of the eyes and mouth, a smile detector, etc.
On the other hand, AVCaptureMetadataOutput is computed on the frames, the detected faces are tracked, and there is no extra code to be added by us. I find that, because of the tracking, faces are detected more reliably in this process. The downside is that you will simply detect faces, not the position of the eyes or mouth.
Another advantage of this method is that orientation issues are smaller, as you can use videoOrientation whenever the device orientation changes, and the orientation of the faces will be relative to that orientation.
In my case, my application uses YUV420 as the required format, so using CIDetector (which works with RGB) in real time was not viable. Using AVCaptureMetadataOutput saved a lot of effort and performed more reliably due to continuous tracking.
Once I had the bounding box for the faces, I coded extra features, such as skin detection, and applied them to the still image.
Note: when you capture a still image, the face box information is added along with the metadata, so there are no sync issues.
You can also use a combination of the two to get better results.
Explore and evaluate the pros and cons for your application.
The face rectangle is relative to the image origin, so for the screen it may be different. Use:
for (AVMetadataFaceObject *faceFeatures in metadataObjects) {
    CGRect face = faceFeatures.bounds;
    CGRect facePreviewBounds = CGRectMake(face.origin.y * previewLayerRect.size.width,
                                          face.origin.x * previewLayerRect.size.height,
                                          face.size.width * previewLayerRect.size.height,
                                          face.size.height * previewLayerRect.size.width);
    /* Draw rectangle facePreviewBounds on screen */
}
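A rough Swift equivalent of the snippet above (a sketch, assuming previewLayerRect is the preview layer's frame, as in the Objective-C version):
for case let faceObject as AVMetadataFaceObject in metadataObjects {
    let face = faceObject.bounds
    // Note the swapped axes: the metadata rect is normalized relative to the
    // unrotated sensor image, so x/y and width/height trade places on screen.
    let facePreviewBounds = CGRect(x: face.origin.y * previewLayerRect.size.width,
                                   y: face.origin.x * previewLayerRect.size.height,
                                   width: face.size.width * previewLayerRect.size.height,
                                   height: face.size.height * previewLayerRect.size.width)
    // Draw facePreviewBounds on screen
}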
To perform face detection on iOS, there is either the CIDetector (Apple) or the Mobile Vision (Google) API.
IMO, Google Mobile Vision provides better performance.
If you are interested, here is a project you can play with (iOS 10.2, Swift 3).
After WWDC 2017, Apple introduced Core ML in iOS 11.
The Vision framework makes face detection more accurate :)
I've made a demo project containing Vision vs. CIDetector. It also contains face landmark detection in real time.
A bit late, but here is the solution to the coordinates problem. There is a method you can call on the preview layer to transform the metadata object to your coordinate system: transformedMetadataObject(for: metadataObject).
guard let transformedObject = previewLayer.transformedMetadataObject(for: metadataObject) else {
continue
}
let bounds = transformedObject.bounds
showBounds(at: bounds)
Source: https://developer.apple.com/documentation/avfoundation/avcapturevideopreviewlayer/1623501-transformedmetadataobjectformeta
By the way, in case you are using (or upgrading your project to) Swift 4, the delegate method of AVCaptureMetadataOutputObjectsDelegate has changed to:
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)
Kind regards
extension CameraViewController: AVCaptureMetadataOutputObjectsDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!) {
        if findFaceControl {
            findFaceControl = false
            let faces = metadataObjects
                .compactMap { $0 as? AVMetadataFaceObject }
                .compactMap { (face) -> CGRect? in
                    guard let localizedFace = previewLayer?.transformedMetadataObject(for: face) else { return nil }
                    return localizedFace.bounds
                }
            for face in faces {
                let temp = UIView(frame: face)
                temp.layer.borderColor = UIColor.white.cgColor
                temp.layer.borderWidth = 2.0
                view.addSubview(temp)
            }
        }
    }
}
Be sure to remove the views created in didOutputMetadataObjects. Keeping track of the active face IDs is the best way to do this.
Also, when you're trying to find the location of faces for your preview layer, it is much easier to use the face metadata and a transform. And I think CIDetector is junk; AVCaptureMetadataOutput uses the hardware for face detection, which makes it really fast.
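A minimal sketch of that cleanup, using the faceBoxes array from the question's showBounds(at:) helper:
// Remove the boxes drawn for the previous frame before adding new ones.
faceBoxes.forEach { $0.removeFromSuperview() }
faceBoxes.removeAll()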
1. Create a capture session.
2. For the AVCaptureVideoDataOutput, use the following settings:
output.videoSettings = [ kCVPixelBufferPixelFormatTypeKey as AnyHashable: Int(kCMPixelFormat_32BGRA) ]
3. When you receive the CMSampleBuffer, create an image:
DispatchQueue.main.async {
let sampleImg = self.imageFromSampleBuffer(sampleBuffer: sampleBuffer)
self.imageView.image = sampleImg
}
func imageFromSampleBuffer(sampleBuffer : CMSampleBuffer) -> UIImage
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly);
// Get the number of bytes per row for the pixel buffer
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!);
// Get the number of bytes per row for the pixel buffer
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!);
// Get the pixel buffer width and height
let width = CVPixelBufferGetWidth(imageBuffer!);
let height = CVPixelBufferGetHeight(imageBuffer!);
// Create a device-dependent RGB color space
let colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
//let bitmapInfo: UInt32 = CGBitmapInfo.alphaInfoMask.rawValue
let context = CGContext.init(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
// Create a Quartz image from the pixel data in the bitmap graphics context
let quartzImage = context?.makeImage();
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly);
// Create an image object from the Quartz image
let image = UIImage.init(cgImage: quartzImage!);
return (image);
}
By looking at your code I spotted two things that could lead to wrong/poor face detection.
One of them is the face detector feature options, where you are filtering the results with [CIDetectorSmile: true, CIDetectorEyeBlink: true]. Try setting the options to nil: faceDetector?.features(in: faceImage, options: nil).
Another guess I have is the orientation of the result image. I noticed you use the AVCapturePhotoOutput.jpegPhotoDataRepresentation method to generate the source image for the detection, and by default the system generates that image with a specific orientation, of type Left/LandscapeLeft, I think. So you can tell the face detector to take that into account by using the CIDetectorImageOrientation key.
CIDetectorImageOrientation: the value for this key is an integer NSNumber from 1..8, such as those found in kCGImagePropertyOrientation. If present, the detection will be done based on that orientation, but the coordinates in the returned features will still be based on those of the image.
Try setting it like faceDetector?.features(in: faceImage, options: [CIDetectorImageOrientation: 8 /*Left, bottom*/]).

Image output from AVFoundation occupied only 1/4 of screen

I played around with AVFoundation, trying to apply a filter to live video. I tried to apply a filter to AVCaptureVideoDataOutput, but the output occupied only 1/4 of the view.
Here is some of my related code.
Capturing
let availableCameraDevices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
for device in availableCameraDevices as! [AVCaptureDevice] {
if device.position == .Back {
backCameraDevice = device
} else if device.position == .Front {
frontCameraDevice = device
}
}
Configure output
private func configureVideoOutput() {
videoOutput = AVCaptureVideoDataOutput()
videoOutput?.setSampleBufferDelegate(self, queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))
if session.canAddOutput(videoOutput) {
session.addOutput(videoOutput)
}
}
Get image
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer:
CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
// Grab the pixelbuffer
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
// create a CIImage from it, rotate it, and zero the origin
var image = CIImage(CVPixelBuffer: pixelBuffer)
image = image.imageByApplyingTransform(CGAffineTransformMakeRotation(CGFloat(-M_PI_2)))
let origin = image.extent.origin
image = image.imageByApplyingTransform(CGAffineTransformMakeTranslation(-origin.x, -origin.y))
self.manualDelegate?.cameraController(self, didOutputImage: image)
}
Render
func cameraController(cameraController: CameraController, didOutputImage image: CIImage) {
if glContext != EAGLContext.currentContext() {
EAGLContext.setCurrentContext(glContext)
}
let filteredImage = image.imageByApplyingFilter("CIColorControls", withInputParameters: [kCIInputSaturationKey: 0.0])
var rect = view.bounds
glView.bindDrawable()
ciContext.drawImage(filteredImage, inRect: rect, fromRect: image.extent)
glView.display()
}
I expected the Retina display and scale factor to be causing this, but I'm not sure where I should deal with it. I already set the content scale factor on the GLKView, but no luck.
private var glView: GLKView {
// Set in storyboard
return view as! GLKView
}
glView.contentScaleFactor = glView.bounds.size.width / UIScreen.mainScreen().bounds.size.width * UIScreen.mainScreen().scale
Your problem is with the output rect used in the drawImage function:
ciContext.drawImage(filteredImage, inRect: rect, fromRect: image.extent)
The image's extent is in actual pixels, while the view's bounds are points, and are not adjusted by the contentScaleFactor to get pixels. Your device undoubtedly has a contentScaleFactor of 2.0, thus it's 1/2 the size in each dimension.
Instead, set the rect as:
var rect = CGRect(x: 0, y: 0, width: glView.drawableWidth,
height: glView.drawableHeight)
drawableWidth and drawableHeight return the dimension in pixels, accounting for the contentScaleFactor. See:
https://developer.apple.com/reference/glkit/glkview/1615591-drawablewidth
Also, there is no need to set glView's contentScaleFactor
Which format are you setting before starting the capture? Are you sure the video preview layer is filling the whole screen?
You have two ways to set the resolution during an AV capture:
1. Choose the AVCaptureDeviceFormat with the highest resolution by looking through the available capture formats.
2. Use the sessionPreset property of your capture session. Doc here.
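A sketch of both options (the camera and session names are assumptions, not from the code above):
// Option 1: pick the device format with the highest resolution.
if let best = camera.formats.max(by: {
    CMVideoFormatDescriptionGetDimensions($0.formatDescription).width <
        CMVideoFormatDescriptionGetDimensions($1.formatDescription).width
}) {
    if (try? camera.lockForConfiguration()) != nil {
        camera.activeFormat = best
        camera.unlockForConfiguration()
    }
}

// Option 2: use a session preset.
if session.canSetSessionPreset(.hd1920x1080) {
    session.sessionPreset = .hd1920x1080
}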
