How to implement 2x zoom from camera app with AVCaptureVideoPreviewLayer - ios

I have an AVCaptureVideoPreviewLayer in my app that works well and shows the same preview video as the camera app. I would like to implement the 2x zoom functionality of the camera app. How do I do this?
Basically I want my preview layer to change the video feed to the same scale as what you see in the camera app when you tap the 1x icon to change it to 2x.
setting up preview layer
func startSession() {
    captureSession = AVCaptureSession()
    captureSession?.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    // Catch errors using a do/catch block
    do {
        let input = try AVCaptureDeviceInput(device: backCamera)
        if captureSession?.canAddInput(input) == true {
            captureSession?.addInput(input)
            // Set up the preview layer
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
            tempImageView.layer.addSublayer(previewLayer!)
            captureSession?.startRunning()
            // Set up AVCaptureVideoDataOutput
            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
            dataOutput.alwaysDiscardsLateVideoFrames = true
            if captureSession?.canAddOutput(dataOutput) == true {
                captureSession?.addOutput(dataOutput)
            }
            let queue = DispatchQueue(label: "com.bigbob.videoQueue")
            dataOutput.setSampleBufferDelegate(self, queue: queue)
        }
    } catch _ {
        print("Error setting up camera!")
    }
}

Set the videoZoomFactor of your AVCaptureDevice (the one from AVCaptureDevice.defaultDevice) and the preview layer's zoom will follow suit. Note that in Swift 4 it is now AVCaptureDevice.default.
do {
    try backCamera?.lockForConfiguration()
    let zoomFactor: CGFloat = 2
    backCamera?.videoZoomFactor = zoomFactor
    backCamera?.unlockForConfiguration()
} catch {
    // Handle the error thrown by lockForConfiguration
}

Here's a bit of an updated answer that first checks to make sure the zoom factor is available before you even attempt to set it. This will prevent possibly unneeded exception catches, and you can adjust both the zoom check and the value you set with a single variable.
if let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) {
    let zoomFactor: CGFloat = 2
    if captureDevice.maxAvailableVideoZoomFactor >= zoomFactor {
        try? captureDevice.lockForConfiguration()
        captureDevice.videoZoomFactor = zoomFactor
        captureDevice.unlockForConfiguration()
    }
}
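If you want the jump from 1x to 2x to animate the way the stock Camera app does rather than snap, AVCaptureDevice also offers ramp(toVideoZoomFactor:withRate:). Here's a minimal sketch along the same lines as the check above (the rate of 4.0 is just an illustrative value, not something from the original answer):
if let captureDevice = AVCaptureDevice.default(for: .video) {
    let zoomFactor: CGFloat = 2
    if captureDevice.maxAvailableVideoZoomFactor >= zoomFactor {
        do {
            try captureDevice.lockForConfiguration()
            // Animate toward the target zoom; the rate is in powers of two per second.
            captureDevice.ramp(toVideoZoomFactor: zoomFactor, withRate: 4.0)
            captureDevice.unlockForConfiguration()
        } catch {
            print("Could not lock the device for configuration: \(error)")
        }
    }
}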

Related

Setting `device.activeVideoMinFrameDuration` and `device.activeVideoMaxFrameDuration` doesn't do anything

I'm making a customized camera app, and I'd like to get the camera input at 60fps (I don't care much about quality) and show it in a UIView.
What I have so far is the following:
struct CustomCameraView: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let previewView = UIImageView()
        var captureSession: AVCaptureSession!
        var videoPreviewLayer: AVCaptureVideoPreviewLayer!

        func configureCameraForHighestFrameRate(device: AVCaptureDevice) {
            do {
                try device.lockForConfiguration()
                let fr = CMTimeMake(value: 1, timescale: 60)
                device.activeVideoMinFrameDuration = fr
                device.activeVideoMaxFrameDuration = fr
                device.unlockForConfiguration()
            } catch {
                print("error setting frame rate")
            }
        }

        func setupLivePreview() {
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            videoPreviewLayer.videoGravity = .resizeAspect
            videoPreviewLayer.connection?.videoOrientation = .portrait
            previewView.layer.addSublayer(videoPreviewLayer)
            captureSession.commitConfiguration()
            DispatchQueue.global(qos: .userInitiated).async {
                captureSession.startRunning()
                DispatchQueue.main.async {
                    videoPreviewLayer.frame = previewView.bounds
                }
            }
        }

        captureSession = AVCaptureSession()
        captureSession.beginConfiguration()

        guard let backCamera = AVCaptureDevice.default(for: AVMediaType.video) else {
            print("Unable to access back camera!")
            return UIImageView()
        }

        configureCameraForHighestFrameRate(device: backCamera)

        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            print("max frame duration on input",
                  input.device.activeVideoMaxFrameDuration)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                setupLivePreview()
            }
        } catch let error {
            print("Error Unable to initialize back camera: \(error.localizedDescription)")
        }

        return previewView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) {}
}
Pretty simple: I access the camera, set its activeVideoMinFrameDuration and activeVideoMaxFrameDuration, and use an AVCaptureVideoPreviewLayer to replace the UIImageView's CALayer.
I've tried changing the timescale of the CMTime to 30, 20, 10, even 1, but it doesn't have any effect. What should I do?
I don't get any errors, and here's my system info:
iPhone 11 Pro
Xcode Version 12.0.1 (12A7300)
iOS 14.0.1
Turns out Apple's documentation is wrong:
Directly configuring a capture device’s activeFormat property changes the capture session’s preset to inputPriority.
This is not true; I had to set the capture session's preset to .inputPriority for the capture device's configuration to take effect. So adding this line solved the problem:
captureSession.sessionPreset = .inputPriority
I've made a post at the Apple Developer Forum, hopefully they'll take a look at it.
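For completeness, here is a hedged sketch of a fuller frame-rate setup: pick an activeFormat whose supported ranges actually include 60 fps, then set the durations. The function name and structure are my own, and it assumes you also set captureSession.sessionPreset = .inputPriority as described above:
func configureCamera(_ device: AVCaptureDevice, forFrameRate fps: Double) {
    // Find a format whose frame-rate ranges cover the requested rate.
    guard let format = device.formats.first(where: { candidate in
        candidate.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= fps }
    }) else {
        print("No format supports \(fps) fps")
        return
    }
    do {
        try device.lockForConfiguration()
        device.activeFormat = format
        let duration = CMTime(value: 1, timescale: CMTimeScale(fps))
        device.activeVideoMinFrameDuration = duration
        device.activeVideoMaxFrameDuration = duration
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the device for configuration: \(error)")
    }
}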

the specified colorspace format is not supported. falling back on libyuv

I'm currently working on a camera app. Everything worked fine, but when I tried to change the constraints of the Vision View, the log suddenly printed this error:
[warning] the specified colorspace format is not supported. falling back on libyuv.
I have no idea where it comes from or what I should change. Below I'll paste the relevant code where I set up the camera.
func initializeCameraSession() {
    // 1: Create a new AV session and get the camera devices
    let devices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                   mediaType: AVMediaType.video,
                                                   position: .front).devices

    // 2: Select a capture device
    avSession.sessionPreset = .low
    do {
        if let captureDevice = devices.first {
            let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            avSession.beginConfiguration()
            if avSession.canAddInput(captureDeviceInput) {
                avSession.addInput(captureDeviceInput)
                self.videoDeviceInput = captureDeviceInput
            } else {
                print("Couldn't add video device input to the session.")
                avSession.commitConfiguration()
                return
            }
            avSession.commitConfiguration()
        }
    } catch {
        print(error.localizedDescription)
    }

    // 3: Show output on a preview layer
    let captureOutput = AVCaptureVideoDataOutput()
    captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    captureOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as String): NSNumber(value: kCVPixelFormatType_32BGRA)]
    avSession.addOutput(captureOutput)

    let previewLayer = AVCaptureVideoPreviewLayer(session: avSession)
    previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    previewLayer.connection?.videoOrientation = .portrait
    previewLayer.frame = visionView.bounds
    visionView.layer.addSublayer(previewLayer)
    view.bringSubviewToFront(visionView)
    visionView.isHidden = true
    visionView.alpha = 0.0

    avSession.startRunning()
}
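No fix was posted for this one, but since the warning complains about the requested colorspace/pixel format, one thing worth checking (an assumption on my part, not a confirmed cause) is whether 32BGRA is actually offered by the data output, and falling back to the camera-native biplanar YCbCr format otherwise:
let captureOutput = AVCaptureVideoDataOutput()

// availableVideoPixelFormatTypes lists the formats this output can deliver directly.
let desired = kCVPixelFormatType_32BGRA
let pixelFormat = captureOutput.availableVideoPixelFormatTypes.contains(desired)
    ? desired
    : kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange

captureOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: pixelFormat]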

Custom camera view with UIImage as new overlay

I want to make a custom cameraView overlay. I want to use the overlay, which is an image, as a template, but the rect of the clear space will change depending on the phone.
Template:
I tried to create a view as a container behind the image, but the image that gets captured will include the part that I don't want.
self.session = AVCaptureSession()
self.session!.sessionPreset = AVCaptureSession.Preset.photo
let backCamera = AVCaptureDevice.default(for: AVMediaType.video)

var error: NSError?
var input: AVCaptureDeviceInput!
do {
    input = try AVCaptureDeviceInput(device: backCamera!)
} catch let error1 as NSError {
    error = error1
    input = nil
    print(error!.localizedDescription)
}

if error == nil && session!.canAddInput(input) {
    session!.addInput(input)
    stillImageOutput = AVCaptureStillImageOutput()
    stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
    if session!.canAddOutput(stillImageOutput!) {
        session!.addOutput(stillImageOutput!)
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.session!)
        videoPreviewLayer!.videoGravity = AVLayerVideoGravity.resizeAspect
        videoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
        previewView.layer.addSublayer(videoPreviewLayer!)
        session!.startRunning()
    }
}

// in viewDidLoad
videoPreviewLayer!.frame = previewView.bounds // previewView is the UIView behind the image
The expected result is that the white space would be the custom camera view. Also, it seems that AVCaptureStillImageOutput was deprecated in iOS 10.0.
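No answer was recorded here, but one common approach to "the captured image includes more than the clear window" is to convert the overlay's clear rect from preview-layer coordinates into image coordinates and crop the photo. The sketch below is my own, assuming cropRectInPreview is the frame of the template's clear area expressed in the preview layer's coordinate space:
// Crop a captured UIImage to the region shown inside the overlay's clear window.
// `cropRectInPreview` is hypothetical: the clear window's frame in preview-layer coordinates.
func cropped(_ image: UIImage,
             toPreviewRect cropRectInPreview: CGRect,
             previewLayer: AVCaptureVideoPreviewLayer) -> UIImage? {
    // Normalized (0...1) rect in the capture output's coordinate space.
    let metadataRect = previewLayer.metadataOutputRectConverted(fromLayerRect: cropRectInPreview)
    guard let cgImage = image.cgImage else { return nil }

    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: metadataRect.origin.x * width,
                          y: metadataRect.origin.y * height,
                          width: metadataRect.width * width,
                          height: metadataRect.height * height)

    // Note: if the underlying CGImage is stored rotated (imageOrientation != .up),
    // the axes may need swapping before cropping.
    guard let croppedCGImage = cgImage.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: croppedCGImage, scale: image.scale, orientation: image.imageOrientation)
}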

How to add autofocus to AVCaptureSession? SWIFT

I'm using AVFoundation to recognize text and perform OCR. How do I add autofocus? I don't want to have the yellow square thing when the user taps the screen; I just want it to automatically focus on the object, a credit card for example.
Here is my session code.
func setupSession() {
    session = AVCaptureSession()
    session.sessionPreset = AVCaptureSessionPresetHigh

    let camera = AVCaptureDevice
        .defaultDeviceWithMediaType(AVMediaTypeVideo)

    do { input = try AVCaptureDeviceInput(device: camera) } catch { return }

    output = AVCaptureStillImageOutput()
    output.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

    guard session.canAddInput(input)
        && session.canAddOutput(output) else { return }

    session.addInput(input)
    session.addOutput(output)

    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
    previewLayer!.connection?.videoOrientation = .Portrait
    view.layer.addSublayer(previewLayer!)

    session.startRunning()
}
On my 6S the default camera focus mode is .ContinuousAutoFocus, which continuously focuses on whatever is taking up most of the camera's field of view. Sounds like that's what you want.
You can check if your camera supports auto focus like so:
camera.isFocusModeSupported(.ContinuousAutoFocus)
and if it's not already set, set it like so:
try! camera.lockForConfiguration()
camera.focusMode = .ContinuousAutoFocus
camera.unlockForConfiguration()
Here is what I did:
// Get an instance of the phone camera
let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

// Try to enable auto focus
if captureDevice!.isFocusModeSupported(.continuousAutoFocus) {
    try! captureDevice!.lockForConfiguration()
    captureDevice!.focusMode = .continuousAutoFocus
    captureDevice!.unlockForConfiguration()
}
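Since the subject here is always close to the lens (a credit card), it may also help, beyond the answers above, to hint the focus system with a near-range restriction where supported. A small sketch:
if let captureDevice = AVCaptureDevice.default(for: .video) {
    do {
        try captureDevice.lockForConfiguration()
        if captureDevice.isFocusModeSupported(.continuousAutoFocus) {
            captureDevice.focusMode = .continuousAutoFocus
        }
        // Tell the autofocus system that subjects of interest are near the camera.
        if captureDevice.isAutoFocusRangeRestrictionSupported {
            captureDevice.autoFocusRangeRestriction = .near
        }
        captureDevice.unlockForConfiguration()
    } catch {
        print("Could not lock the device for configuration: \(error)")
    }
}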

iOS Swift - AVCaptureSession - Capture frames respecting frame rate

I'm trying to build an app which will capture frames from the camera and process them with OpenCV before saving those files to the device, but at a specific frame rate.
What I'm stuck on at the moment is the fact that AVCaptureVideoDataOutputSampleBufferDelegate doesn't appear to respect the AVCaptureDevice.activeVideoMinFrameDuration or AVCaptureDevice.activeVideoMaxFrameDuration settings.
captureOutput runs far more often than the 2 frames per second the above settings would suggest.
Do you happen to know how one could achieve this, with or without the delegate?
ViewController:
override func viewDidLoad() {
    super.viewDidLoad()
}

override func viewDidAppear(animated: Bool) {
    setupCaptureSession()
}

func setupCaptureSession() {
    let session : AVCaptureSession = AVCaptureSession()
    session.sessionPreset = AVCaptureSessionPreset1280x720

    let videoDevices : [AVCaptureDevice] = AVCaptureDevice.devices() as! [AVCaptureDevice]
    for device in videoDevices {
        if device.position == AVCaptureDevicePosition.Back {
            let captureDevice : AVCaptureDevice = device
            do {
                try captureDevice.lockForConfiguration()
                captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 2)
                captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, 2)
                captureDevice.unlockForConfiguration()

                let input : AVCaptureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
                if session.canAddInput(input) {
                    try session.addInput(input)
                }

                let output : AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
                let dispatch_queue : dispatch_queue_t = dispatch_queue_create("streamoutput", nil)
                output.setSampleBufferDelegate(self, queue: dispatch_queue)
                session.addOutput(output)

                session.startRunning()

                let previewLayer = AVCaptureVideoPreviewLayer(session: session)
                previewLayer.connection.videoOrientation = .LandscapeRight

                let previewBounds : CGRect = CGRectMake(0, 0, self.view.frame.width/2, self.view.frame.height+20)
                previewLayer.backgroundColor = UIColor.blackColor().CGColor
                previewLayer.frame = previewBounds
                previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                self.imageView.layer.addSublayer(previewLayer)

                self.previewMat.frame = CGRectMake(previewBounds.width, 0, previewBounds.width, previewBounds.height)
            } catch _ {
            }
            break
        }
    }
}

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    self.wrapper.processBuffer(self.getUiImageFromBuffer(sampleBuffer), self.previewMat)
}
So I've figured out the problem.
In the comments section for AVCaptureDevice.h above the activeVideoMinFrameDuration property it states:
On iOS, the receiver's activeVideoMinFrameDuration resets to its default value under the following conditions:
- The receiver's activeFormat changes
- The receiver's AVCaptureDeviceInput's session's sessionPreset changes
- The receiver's AVCaptureDeviceInput is added to a session
The last bullet point was causing my problem, so doing the following solved the problem for me:
do {
    let input : AVCaptureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
    if session.canAddInput(input) {
        try session.addInput(input)
    }

    try captureDevice.lockForConfiguration()
    captureDevice.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: 2)
    captureDevice.activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: 2)
    captureDevice.unlockForConfiguration()

    let output : AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
    let dispatch_queue : dispatch_queue_t = dispatch_queue_create("streamoutput", nil)
    output.setSampleBufferDelegate(self, queue: dispatch_queue)
    session.addOutput(output)
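As for doing it "with the delegate": if the device still delivers more frames than you want, a simple alternative (my own sketch in current Swift syntax, not part of the answer above) is to throttle inside the sample buffer callback by comparing presentation timestamps:
// Properties on your AVCaptureVideoDataOutputSampleBufferDelegate class.
var lastProcessedTime = CMTime.zero
let minimumInterval = CMTime(value: 1, timescale: 2)   // one frame every 0.5 s

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    let elapsed = CMTimeSubtract(timestamp, lastProcessedTime)
    // Skip frames that arrive sooner than the minimum interval.
    guard CMTimeCompare(elapsed, minimumInterval) >= 0 else { return }
    lastProcessedTime = timestamp
    // ... process the frame here ...
}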
