Trying to make a Camera app, new to Swift - iOS

This is the ViewController file. I'm following the tutorial here:
https://www.appcoda.com/avfoundation-swift-guide/
I don't understand the errors:
"Initializer for conditional binding must have Optional type, not '[AVCaptureDevice]'"
and
"Value of optional type 'AVCapturePhotoOutput?' not unwrapped; did you mean to use '!' or '?'?"
What do these errors mean? How can I fix them?
//
// CameraController.swift
// AV Foundation
//
// Created by ben on 5/10/18.
// Copyright © 2018 Pranjal Satija. All rights reserved.
//
import Foundation
import AVFoundation
import UIKit
class CameraController {
var previewLayer: AVCaptureVideoPreviewLayer?
var captureSession: AVCaptureSession?
var currentCameraPosition: CameraPosition?
var frontCamera: AVCaptureDevice?
var frontCameraInput: AVCaptureDeviceInput?
var photoOutput: AVCapturePhotoOutput?
var rearCamera: AVCaptureDevice?
var rearCameraInput: AVCaptureDeviceInput?
}
extension CameraController {
func displayPreview(on view: UIView) throws {
guard let captureSession = self.captureSession, captureSession.isRunning else { throw CameraControllerError.captureSessionIsMissing }
self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
self.previewLayer?.connection?.videoOrientation = .portrait
view.layer.insertSublayer(self.previewLayer!, at: 0)
self.previewLayer?.frame = view.frame
}
func prepare(completionHandler: @escaping (Error?) -> Void) {
func createCaptureSession() {
self.captureSession = AVCaptureSession()
}
func configureCaptureDevices() throws {
let session = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .unspecified)
guard let cameras = (session.devices.flatMap { $0 }), !cameras.isEmpty else { throw CameraControllerError.noCamerasAvailable }
for camera in cameras {
if camera.position == .front {
self.frontCamera = camera
}
if camera.position == .back {
self.rearCamera = camera
try camera.lockForConfiguration()
camera.focusMode = .autoFocus
camera.unlockForConfiguration()
}
}
}
func configureDeviceInputs() throws {
guard let captureSession = self.captureSession else { throw CameraControllerError.captureSessionIsMissing }
if let rearCamera = self.rearCamera {
self.rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)
if captureSession.canAddInput(self.rearCameraInput!) { captureSession.addInput(self.rearCameraInput!) }
self.currentCameraPosition = .rear
}
else if let frontCamera = self.frontCamera {
self.frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)
if captureSession.canAddInput(self.frontCameraInput!) { captureSession.addInput(self.frontCameraInput!) }
else { throw CameraControllerError.inputsAreInvalid }
self.currentCameraPosition = .front
}
else { throw CameraControllerError.noCamerasAvailable }
}
func configurePhotoOutput() throws {
guard let captureSession = self.captureSession else { throw CameraControllerError.captureSessionIsMissing }
self.photoOutput = AVCapturePhotoOutput()
self.photoOutput!.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey : AVVideoCodecJPEG])], completionHandler: nil)
if captureSession.canAddOutput(self.photoOutput) { captureSession.addOutput(self.photoOutput) }
captureSession.startRunning()
}
DispatchQueue(label: "prepare").async {
do {
createCaptureSession()
try configureCaptureDevices()
try configureDeviceInputs()
try configurePhotoOutput()
}
catch {
DispatchQueue.main.async {
completionHandler(error)
}
return
}
DispatchQueue.main.async {
completionHandler(nil)
}
}
}
}
extension CameraController {
enum CameraControllerError: Swift.Error {
case captureSessionAlreadyRunning
case captureSessionIsMissing
case inputsAreInvalid
case invalidOperation
case noCamerasAvailable
case unknown
}
public enum CameraPosition {
case front
case rear
}
}

In one of your comments you pointed out which line is causing the error:
guard let cameras = (session.devices.flatMap { $0 }), !cameras.isEmpty else { throw CameraControllerError.noCamerasAvailable }
There are several issues with this. As the Control Flow - Early Exit section of the Swift book explains, guard let is used to unwrap an optional and verify that it isn't nil.
The error is telling you that the expression (session.devices.flatMap { $0 }) isn't an optional. In fact, the use of flatMap here is pointless, since session.devices is already an array of non-optional values ([AVCaptureDevice]).
You should rewrite the guard to:
guard !session.devices.isEmpty else {
    throw CameraControllerError.noCamerasAvailable
}
And then the loop becomes:
for camera in session.devices {
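Putting it together, a corrected configureCaptureDevices() could look like the sketch below. Your second error ("Value of optional type 'AVCapturePhotoOutput?' not unwrapped") is closely related: canAddOutput(_:) and addOutput(_:) take a non-optional AVCaptureOutput, so self.photoOutput has to be unwrapped, for example by configuring a local non-optional instance and assigning it to the property afterwards:

func configureCaptureDevices() throws {
    let session = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                   mediaType: .video,
                                                   position: .unspecified)
    guard !session.devices.isEmpty else { throw CameraControllerError.noCamerasAvailable }
    for camera in session.devices {
        if camera.position == .front {
            self.frontCamera = camera
        }
        if camera.position == .back {
            self.rearCamera = camera
            try camera.lockForConfiguration()
            camera.focusMode = .autoFocus
            camera.unlockForConfiguration()
        }
    }
}

func configurePhotoOutput() throws {
    guard let captureSession = self.captureSession else { throw CameraControllerError.captureSessionIsMissing }
    let photoOutput = AVCapturePhotoOutput() // a local constant is non-optional, so no unwrapping is needed
    photoOutput.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecJPEG])],
                                              completionHandler: nil)
    if captureSession.canAddOutput(photoOutput) { captureSession.addOutput(photoOutput) }
    self.photoOutput = photoOutput
    captureSession.startRunning()
}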

Related

iOS Swift AVFoundation Video Recording AVCaptureMovieFileOutput isRecording value false every time

I'm trying to record and save video using the AVFoundation framework with both front and rear cameras. I'm able to start the session but unable to save the video recording to the documents directory.
I checked movieOutput.isRecording and it returns false every time, so the output delegate method is never called; even the start delegate isn't called when recording begins.
import UIKit
import Foundation
import AVKit
import AVFoundation
class AppVideoRecorder: NSObject {
private var session = AVCaptureSession()
private var movieOutput = AVCaptureMovieFileOutput()
private var camera: AVCaptureDevice?
private var activeInput: AVCaptureDeviceInput?
private var previewLayer = AVCaptureVideoPreviewLayer()
private var renderView: UIView!
var isFrontCamera: Bool = false
init(for view: UIView) {
self.renderView = view
}
deinit {
print("Called")
}
func setupSession() {
self.session.sessionPreset = .high
// Setup Camera
self.camera = AVCaptureDevice.default(
.builtInWideAngleCamera,
for: .video,
position: self.isFrontCamera ? .front : .back
)
if let camera = self.camera {
do {
let input = try AVCaptureDeviceInput(device: camera)
if self.session.canAddInput(input) {
self.session.addInput(input)
self.activeInput = input
}
} catch {
print(error)
}
}
// Setup Microphone
if let microphone = AVCaptureDevice.default(for: .audio) {
do {
let micInput = try AVCaptureDeviceInput(device: microphone)
if self.session.canAddInput(micInput) {
self.session.addInput(micInput)
}
} catch {
print(error)
}
}
// Movie output
if self.session.canAddOutput(self.movieOutput) {
self.session.addOutput(self.movieOutput)
}
}
func setupPreview() {
// Configure previewLayer
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
self.previewLayer.frame = self.renderView.bounds
self.previewLayer.videoGravity = .resizeAspectFill
self.renderView.layer.addSublayer(self.previewLayer)
}
func startSession() {
if self.session.isRunning { return }
DispatchQueue.main.async {
self.session.startRunning()
}
}
func stopSession() {
if self.session.isRunning {
DispatchQueue.main.async {
self.session.stopRunning()
}
}
}
func removeInput() {
guard let input = self.activeInput else { return }
self.session.removeInput(input)
}
func isCameraOn(completion: @escaping (Bool) -> Void) {
if AVCaptureDevice.authorizationStatus(for: .video) == .authorized {
completion(true)
} else {
AVCaptureDevice.requestAccess(for: .video,
completionHandler: { (granted) in
completion(granted)
})
}
}
func toggleCamera() {
self.session.beginConfiguration()
for input in self.session.inputs {
if let inputObj = input as? AVCaptureDeviceInput {
self.session.removeInput(inputObj)
}
}
self.camera = AVCaptureDevice.default(
.builtInWideAngleCamera,
for: .video,
position: self.isFrontCamera ? .front : .back
)
if let camera = self.camera {
do {
let input = try AVCaptureDeviceInput(device: camera)
if self.session.canAddInput(input) {
self.session.addInput(input)
self.activeInput = input
}
} catch {
print(error)
}
}
self.session.commitConfiguration()
}
}
extension AppVideoRecorder: AVCaptureFileOutputRecordingDelegate {
private var currentVideoOrientation: AVCaptureVideoOrientation {
var orientation: AVCaptureVideoOrientation
switch UIDevice.current.orientation {
case .portrait:
orientation = AVCaptureVideoOrientation.portrait
case .landscapeRight:
orientation = AVCaptureVideoOrientation.landscapeLeft
case .portraitUpsideDown:
orientation = AVCaptureVideoOrientation.portraitUpsideDown
default:
orientation = AVCaptureVideoOrientation.landscapeRight
}
return orientation
}
func recordVideo() {
if self.movieOutput.isRecording { // FALSE EVERY TIME
self.stopRecording()
} else {
self.startRecording()
}
}
private func startRecording() {
guard let connection = self.movieOutput.connection(with: .video),
let device = self.activeInput?.device else { return }
// handle return error
if connection.isVideoOrientationSupported {
connection.videoOrientation = self.currentVideoOrientation
}
if connection.isVideoStabilizationSupported {
connection.preferredVideoStabilizationMode = .auto
}
if device.isSmoothAutoFocusSupported {
do {
try device.lockForConfiguration()
device.isSmoothAutoFocusEnabled = false
device.unlockForConfiguration()
} catch {
print("Error setting configuration: \(error)")
}
}
let paths = FileManager.default.urls(for: .documentDirectory,
in: .userDomainMask)
guard let path = paths.first else { return }
let fileUrl = path.appendingPathComponent("celeb_video.mp4")
try? FileManager.default.removeItem(at: fileUrl)
self.movieOutput.startRecording(to: fileUrl, recordingDelegate: self)
}
private func stopRecording() {
self.movieOutput.stopRecording()
}
func fileOutput(_ output: AVCaptureFileOutput,
didFinishRecordingTo outputFileURL: URL,
from connections: [AVCaptureConnection],
error: Error?) {
print("DELEGATE CALL BACK")
if let error = error {
//do something
print(error)
} else {
//do something
print(outputFileURL.path)
// UISaveVideoAtPathToSavedPhotosAlbum(outputFileURL.path, nil, nil, nil)
}
}
func fileOutput(_ output: AVCaptureFileOutput,
didStartRecordingTo fileURL: URL,
from connections: [AVCaptureConnection]) {
print("didStartRecordingTo CALL BACK:", fileURL.path)
}
}
Here is my calling code in the view controller; recordingView is a UIView.
private lazy var recorder: AppVideoRecorder = {
    return AppVideoRecorder(for: self.recordingView)
}()
@IBAction func recordingAction(_ sender: UIButton) {
    sender.isSelected.toggle()
    if sender.isSelected {
        self.recorder.setupSession()
        self.recorder.setupPreview()
        self.recorder.startSession()
        self.recorder.recordVideo()
    } else {
        self.recorder.recordVideo()
        self.recorder.removeInput()
        self.recorder.stopSession()
    }
}
@IBAction func swapCameraAction(_ sender: UIButton) {
    sender.isSelected.toggle()
    self.recorder.isFrontCamera = sender.isSelected
    self.recorder.toggleCamera()
}
Please let me know what I missed.
Following the link Starting video recording immediately with AVCaptureMovieFileOutput,
I added the notifications below, and now it works; the session simply takes time to start running.
private func setupNotifications() {
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(sessionDidStartRunning(_:)),
                                           name: .AVCaptureSessionDidStartRunning,
                                           object: nil)
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(sessionDidStopRunning(_:)),
                                           name: .AVCaptureSessionDidStopRunning,
                                           object: nil)
}

@objc
private func sessionDidStartRunning(_ notification: NSNotification) {
    self.startRecording()
}

@objc
private func sessionDidStopRunning(_ notification: NSNotification) {
}
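For reference, since startRunning() is documented as a blocking call, an equivalent approach (just a sketch; sessionQueue is a hypothetical addition to AppVideoRecorder) is to start recording right after startRunning() returns on a serial background queue, instead of waiting for the notification:

private let sessionQueue = DispatchQueue(label: "app.video.recorder.session") // hypothetical queue

func startSessionAndRecord() {
    sessionQueue.async {
        if !self.session.isRunning {
            self.session.startRunning() // blocks until the session is actually running
        }
        DispatchQueue.main.async {
            self.recordVideo() // movieOutput.isRecording is now meaningful
        }
    }
}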

How to switch camera using AVFoundation

I have implemented the camera preview using AVFoundation and it's working fine, but I'm having a hard time switching between the back and front cameras. I have added a switch button at the bottom bar. By default it's the back camera; I want to switch it to the front. How can I do that?
class FifteenSecsViewController: UIViewController, AVCaptureFileOutputRecordingDelegate {
#IBOutlet weak var camPreview: UIView!
let captureSession = AVCaptureSession()
let movieOutput = AVCaptureMovieFileOutput()
var previewLayer: AVCaptureVideoPreviewLayer!
var activeInput: AVCaptureDeviceInput!
var outputURL: URL!
override func viewDidLoad() {
super.viewDidLoad()
if setupSession() {
setupPreview()
startSession()
}
self.switchCameraButton.addTarget(self, action: #selector(switchButtonTapped), for: .touchUpInside)
}
func setupSession() -> Bool {
captureSession.sessionPreset = AVCaptureSession.Preset.high
// Setup Camera
let camera: AVCaptureDevice?
camera = AVCaptureDevice.default(for: .video)
do {
let input = try AVCaptureDeviceInput(device: camera!)
if captureSession.canAddInput(input) {
captureSession.addInput(input)
activeInput = input
}
} catch {
print("Error setting device video input: \(error)")
return false
}
// Setup Microphone
let microphone = AVCaptureDevice.default(for: .audio)
do {
let micInput = try AVCaptureDeviceInput(device: microphone!)
if captureSession.canAddInput(micInput) {
captureSession.addInput(micInput)
}
} catch {
print("Error setting device audio input: \(error)")
return false
}
// Movie output
let seconds : Int64 = 3
let maxDuration = CMTime(seconds: Double(seconds),
preferredTimescale: 1)
movieOutput.maxRecordedDuration = maxDuration
if captureSession.canAddOutput(movieOutput) {
captureSession.addOutput(movieOutput)
}
return true
}
func setupPreview() {
// Configure previewLayer
previewLayer = AVCaptureVideoPreviewLayer(session:
captureSession)
previewLayer.frame = camPreview.bounds
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
camPreview.layer.addSublayer(previewLayer)
}
//MARK:- Camera Session
func startSession() {
if !captureSession.isRunning {
videoQueue().async {
self.captureSession.startRunning()
}
}
}
@objc func switchButtonTapped(){
// what to write here??
}
}
Function switchButtonTapped is an actionTarget of UIButton. If I add this code in this button:
@objc func switchButtonTapped() {
    if setupSession() {
        setupPreview()
        startSession()
    }
}
The camera preview shows a white screen and gets stuck.
Try this code:
func switchCamera() {
    session?.beginConfiguration()
    let currentInput = session?.inputs.first as? AVCaptureDeviceInput
    session?.removeInput(currentInput!)
    let newCameraDevice = currentInput?.device.position == .back ? getCamera(with: .front) : getCamera(with: .back)
    let newVideoInput = try? AVCaptureDeviceInput(device: newCameraDevice!)
    session?.addInput(newVideoInput!)
    session?.commitConfiguration()
}
func getCamera(with position: AVCaptureDevice.Position) -> AVCaptureDevice? {
    guard let devices = AVCaptureDevice.devices(for: AVMediaType.video) as? [AVCaptureDevice] else {
        return nil
    }
    return devices.filter {
        $0.position == position
    }.first
}
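Note that AVCaptureDevice.devices(for:) has been deprecated since iOS 10, and since it already returns [AVCaptureDevice] the as? cast is redundant. An equivalent getCamera based on AVCaptureDevice.DiscoverySession, as a sketch:

func getCamera(with position: AVCaptureDevice.Position) -> AVCaptureDevice? {
    // Discover only the built-in wide-angle camera at the requested position.
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                     mediaType: .video,
                                                     position: position)
    return discovery.devices.first
}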
To begin, create a device input for the front camera:
let frontDevice: AVCaptureDevice? = {
    for device in AVCaptureDevice.devices(for: AVMediaType.video) {
        if device.position == .front {
            return device
        }
    }
    return nil
}()

lazy var frontDeviceInput: AVCaptureDeviceInput? = {
    if let _frontDevice = self.frontDevice {
        return try? AVCaptureDeviceInput(device: _frontDevice)
    }
    return nil
}()
Then in your switchButtonTapped, if there is a front camera you can switch between the front camera and the back one:
func switchButtonTapped() {
    if let _frontDeviceInput = frontDeviceInput {
        captureSession.beginConfiguration()
        if let _currentInput = captureSession.inputs.first as? AVCaptureDeviceInput {
            captureSession.removeInput(_currentInput)
            let newDeviceInput = (_currentInput.device.position == .front) ? activeInput : _frontDeviceInput
            captureSession.addInput(newDeviceInput!)
        }
        captureSession.commitConfiguration()
    }
}
If you need more details, don't hesitate.

'No active and enabled video connection' error when capturing photo with TrueDepth cam

I am trying to record depth data from the TrueDepth camera along with a photo. But when calling
AVCapturePhotoOutput.capturePhoto(with:delegate:)
I get an exception stating:
No active and enabled video connection
I configure the camera and outputs like so (basically following the guide from Apple about photo capturing and capturing depth):
func configurePhotoOutput() throws {
self.captureSession = AVCaptureSession()
guard self.captureSession != nil else {
return
}
// Select a depth-capable capture device.
guard let videoDevice = AVCaptureDevice.default(.builtInTrueDepthCamera,
for: .video, position: .unspecified)
else { fatalError("No dual camera.") }
// Select a depth (not disparity) format that works with the active color format.
let availableFormats = videoDevice.activeFormat.supportedDepthDataFormats
let depthFormat = availableFormats.first(where: { format in
let pixelFormatType = CMFormatDescriptionGetMediaSubType(format.formatDescription)
return (pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
pixelFormatType == kCVPixelFormatType_DepthFloat32)
})
do {
try videoDevice.lockForConfiguration()
videoDevice.activeDepthDataFormat = depthFormat
videoDevice.unlockForConfiguration()
} catch {
print("Could not lock device for configuration: \(error)")
return
}
self.captureSession!.beginConfiguration()
// add video input
guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice),
self.captureSession!.canAddInput(videoDeviceInput)
else { fatalError("Can't add video input.") }
self.captureSession!.addInput(videoDeviceInput)
// add video output
if self.captureSession!.canAddOutput(videoOutput) {
self.captureSession!.addOutput(videoOutput)
videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
} else { fatalError("Can't add video output.") }
// Set up photo output for depth data capture.
let photoOutput = AVCapturePhotoOutput()
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
guard self.captureSession!.canAddOutput(photoOutput)
else { fatalError("Can't add photo output.") }
self.captureSession!.addOutput(photoOutput)
self.captureSession!.sessionPreset = .photo
self.captureSession!.commitConfiguration()
self.captureSession!.startRunning()
}
And the code responsible for capturing the photo:
func captureImage(delegate: AVCapturePhotoCaptureDelegate, completion: @escaping (UIImage?, Error?) -> Void) {
let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
photoSettings.isDepthDataDeliveryEnabled =
self.photoOutput.isDepthDataDeliverySupported
photoSettings.isDepthDataFiltered = false
self.photoOutput.capturePhoto(with: photoSettings, delegate: delegate) // <---- error is being thrown on this call
self.photoCaptureCompletionBlock = completion
}
What am I doing wrong in this configuration?
I solved it with the following implementation. The key difference from my code above is that the AVCapturePhotoOutput added to the session is the same self.photoOutput instance later used in captureImage (previously, configurePhotoOutput() added a local photoOutput, while capturePhoto was called on an output that was never attached to the session).
Any comments / remarks are highly appreciated!
import AVFoundation
import UIKit
class CameraController: NSObject {
var captureSession: AVCaptureSession?
var videoDevice: AVCaptureDevice?
var previewLayer: AVCaptureVideoPreviewLayer?
var videoOutput = AVCaptureVideoDataOutput()
var photoOutput = AVCapturePhotoOutput()
func prepare(completionHandler: @escaping (Error?) -> Void) {
func createCaptureSession() {
captureSession = AVCaptureSession()
}
func configureCaptureDevices() throws {
// Select a depth-capable capture device.
guard let vd = AVCaptureDevice.default(.builtInTrueDepthCamera,
for: .video, position: .unspecified)
else { fatalError("No dual camera.") }
videoDevice = vd
// Select a depth (not disparity) format that works with the active color format.
let availableFormats = videoDevice!.activeFormat.supportedDepthDataFormats
let depthFormat = availableFormats.first(where: { format in
let pixelFormatType = CMFormatDescriptionGetMediaSubType(format.formatDescription)
return (pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
pixelFormatType == kCVPixelFormatType_DepthFloat32)
})
do {
try videoDevice!.lockForConfiguration()
videoDevice!.activeDepthDataFormat = depthFormat
videoDevice!.unlockForConfiguration()
} catch {
print("Could not lock device for configuration: \(error)")
return
}
}
func configureDeviceInputs() throws {
if( captureSession == nil) {
throw CameraControllerError.captureSessionIsMissing
}
captureSession?.beginConfiguration()
// add video input
guard let videoDeviceInput = try? AVCaptureDeviceInput(device: self.videoDevice!),
captureSession!.canAddInput(videoDeviceInput)
else { fatalError("Can't add video input.") }
captureSession!.addInput(videoDeviceInput)
captureSession?.commitConfiguration()
}
func configurePhotoOutput() throws {
guard let captureSession = self.captureSession else { throw CameraControllerError.captureSessionIsMissing }
captureSession.beginConfiguration()
// Set up photo output for depth data capture.
photoOutput = AVCapturePhotoOutput()
photoOutput.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])], completionHandler: nil)
guard captureSession.canAddOutput(photoOutput)
else { fatalError("Can't add photo output.") }
captureSession.addOutput(photoOutput)
// must be set after photoOutput is added to captureSession: until the output is attached, isDepthDataDeliverySupported reports false
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
captureSession.sessionPreset = .photo
captureSession.commitConfiguration()
captureSession.startRunning()
}
DispatchQueue(label: "prepare").async {
do {
createCaptureSession()
try configureCaptureDevices()
try configureDeviceInputs()
try configurePhotoOutput()
}
catch {
DispatchQueue.main.async {
completionHandler(error)
}
return
}
DispatchQueue.main.async {
completionHandler(nil)
}
}
}
func displayPreview(on view: UIView) throws {
guard let captureSession = self.captureSession, captureSession.isRunning else { throw CameraControllerError.captureSessionIsMissing }
self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
self.previewLayer?.connection?.videoOrientation = .portrait
view.layer.insertSublayer(self.previewLayer!, at: 0)
self.previewLayer?.frame = view.frame
}
func captureImage(delegate: AVCapturePhotoCaptureDelegate, completion: @escaping (UIImage?, Error?) -> Void) {
let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
photoSettings.isDepthDataDeliveryEnabled = true
photoSettings.isDepthDataFiltered = false
self.photoOutput.capturePhoto(with: photoSettings, delegate: delegate)
self.photoCaptureCompletionBlock = completion
}
var photoCaptureCompletionBlock: ((UIImage?, Error?) -> Void)?
}
extension CameraController {
public enum CameraPosition {
case front
case rear
}
enum CameraControllerError: Swift.Error {
case captureSessionAlreadyRunning
case captureSessionIsMissing
case inputsAreInvalid
case invalidOperation
case noCamerasAvailable
case unknown
}
}
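For completeness, a typical call site for this controller (a sketch; cameraController and previewView are hypothetical names for a property and an outlet in your view controller):

let cameraController = CameraController()

override func viewDidLoad() {
    super.viewDidLoad()
    cameraController.prepare { [weak self] error in
        guard let strongSelf = self else { return }
        if let error = error {
            print(error)
            return
        }
        // The session is already running here, so displayPreview won't throw.
        try? strongSelf.cameraController.displayPreview(on: strongSelf.previewView)
    }
}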

Camera app freezes during phone call

I have a bug in my camera app. If you open the app while on a phone call, the entire app freezes. I've tried using AVCaptureSessionWasInterrupted and AVCaptureSessionInterruptionEnded notifications to handle the audio input management during a phone call, but have had no luck fixing the issue. When I comment out the audio input setup, the app no longer freezes during a phone call, so I'm pretty confident the issue lies somewhere with the audio management.
Why is the app freezing during phone calls and how can I fix it?
Thanks in advance!
Relevant code:
class CameraManager: NSObject {
static let shared = CameraManager()
private let notificationQueue = OperationQueue.main
var delegate: CameraManagerDelegate? = nil
let session = AVCaptureSession()
var captureDeviceInput: AVCaptureDeviceInput? = nil
var audioInput: AVCaptureDeviceInput? = nil
let photoOutput = AVCapturePhotoOutput()
let videoOutput = AVCaptureMovieFileOutput()
var isRecording: Bool {
return videoOutput.isRecording
}
func getCurrentVideoCaptureDevice() throws -> AVCaptureDevice {
guard let device = self.captureDeviceInput?.device else {
throw CameraManagerError.missingCaptureDeviceInput
}
return device
}
func getZoomFactor() throws -> CGFloat {
return try getCurrentVideoCaptureDevice().videoZoomFactor
}
func getMaxZoomFactor() throws -> CGFloat {
return try getCurrentVideoCaptureDevice().activeFormat.videoMaxZoomFactor
}
override init() {
super.init()
NotificationCenter.default.addObserver(forName: Notification.Name.UIApplicationDidBecomeActive, object: nil, queue: notificationQueue) { [unowned self] (notification) in
self.session.startRunning()
try? self.setupCamera()
try? self.setZoomLevel(zoomLevel: 1.0)
if Settings.shared.autoRecord {
try? self.startRecording()
}
}
NotificationCenter.default.addObserver(forName: Notification.Name.UIApplicationWillResignActive, object: nil, queue: notificationQueue) { [unowned self] (notification) in
self.stopRecording()
self.session.stopRunning()
}
NotificationCenter.default.addObserver(forName: Notification.Name.AVCaptureSessionWasInterrupted, object: nil, queue: notificationQueue) { [unowned self] (notification) in
if let audioInput = self.audioInput {
self.session.removeInput(audioInput)
}
}
NotificationCenter.default.addObserver(forName: Notification.Name.AVCaptureSessionInterruptionEnded, object: nil, queue: notificationQueue) { [unowned self] (notification) in
try? self.setupAudio()
}
try? self.setupSession()
}
func setupSession() throws {
session.sessionPreset = .high
if !session.isRunning {
session.startRunning()
}
if Utils.checkPermissions() {
try setupInputs()
setupOutputs()
}
}
func setupInputs() throws {
try setupCamera()
try setupAudio()
}
func setupCamera() throws {
do {
try setCamera(position: Settings.shared.defaultCamera)
} catch CameraManagerError.unableToFindCaptureDevice(let position) {
//some devices don't have a front camera, so try the back for setup
if position == .front {
try setCamera(position: .back)
}
}
}
func setupAudio() throws {
if let audioInput = self.audioInput {
self.session.removeInput(audioInput)
}
guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
throw CameraManagerError.unableToGetAudioDevice
}
let audioInput = try AVCaptureDeviceInput(device: audioDevice)
if session.canAddInput(audioInput) {
session.addInput(audioInput)
self.audioInput = audioInput
} else {
self.delegate?.unableToAddAudioInput()
}
}
func setupOutputs() {
self.photoOutput.isHighResolutionCaptureEnabled = true
guard session.canAddOutput(self.photoOutput) else {
//error
return
}
session.addOutput(self.photoOutput)
guard session.canAddOutput(self.videoOutput) else {
//error
return
}
session.addOutput(self.videoOutput)
}
func startRecording() throws {
if !self.videoOutput.isRecording {
let documentDirectory = try FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor:nil, create:false)
let url = documentDirectory.appendingPathComponent(UUID().uuidString + ".mov")
self.videoOutput.startRecording(to: url, recordingDelegate: self)
}
}
func stopRecording() {
if self.videoOutput.isRecording {
self.videoOutput.stopRecording()
}
}
func setZoomLevel(zoomLevel: CGFloat) throws {
guard let captureDevice = self.captureDeviceInput?.device else {
throw CameraManagerError.missingCaptureDevice
}
try captureDevice.lockForConfiguration()
captureDevice.videoZoomFactor = zoomLevel
captureDevice.unlockForConfiguration()
}
func capturePhoto() {
let photoOutputSettings = AVCapturePhotoSettings()
photoOutputSettings.flashMode = Settings.shared.flash
photoOutputSettings.isAutoStillImageStabilizationEnabled = true
photoOutputSettings.isHighResolutionPhotoEnabled = true
self.photoOutput.capturePhoto(with: photoOutputSettings, delegate: self)
}
func toggleCamera() throws {
if let captureDeviceInput = self.captureDeviceInput,
captureDeviceInput.device.position == .back {
try setCamera(position: .front)
} else {
try setCamera(position: .back)
}
}
func setCamera(position: AVCaptureDevice.Position) throws {
if let captureDeviceInput = self.captureDeviceInput {
if captureDeviceInput.device.position == position {
return
} else {
session.removeInput(captureDeviceInput)
}
}
var device: AVCaptureDevice? = nil
switch position {
case .front:
device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
default:
device = AVCaptureDevice.default(for: .video)
}
guard let nonNilDevice = device else {
throw CameraManagerError.unableToFindCaptureDevice(position)
}
try nonNilDevice.lockForConfiguration()
if nonNilDevice.isFocusModeSupported(.continuousAutoFocus) {
nonNilDevice.focusMode = .continuousAutoFocus
}
if nonNilDevice.isExposureModeSupported(.continuousAutoExposure) {
nonNilDevice.exposureMode = .continuousAutoExposure
}
nonNilDevice.unlockForConfiguration()
let input = try AVCaptureDeviceInput(device: nonNilDevice)
guard session.canAddInput(input) else {
throw CameraManagerError.unableToAddCaptureDeviceInput
}
session.addInput(input)
self.captureDeviceInput = input
}
func setFocus(point: CGPoint) throws {
guard let device = self.captureDeviceInput?.device else {
throw CameraManagerError.missingCaptureDeviceInput
}
guard device.isFocusPointOfInterestSupported && device.isFocusModeSupported(.autoFocus) else {
throw CameraManagerError.notSupportedByDevice
}
try device.lockForConfiguration()
device.focusPointOfInterest = point
device.focusMode = .autoFocus
device.unlockForConfiguration()
}
func setExposure(point: CGPoint) throws {
guard let device = self.captureDeviceInput?.device else {
throw CameraManagerError.missingCaptureDeviceInput
}
guard device.isExposurePointOfInterestSupported && device.isExposureModeSupported(.autoExpose) else {
throw CameraManagerError.notSupportedByDevice
}
try device.lockForConfiguration()
device.exposurePointOfInterest = point
device.exposureMode = .autoExpose
device.unlockForConfiguration()
}
}
extension CameraManager: AVCapturePhotoCaptureDelegate {
func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
self.delegate?.cameraManagerWillCapturePhoto()
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
guard let imageData = photo.fileDataRepresentation() else {
//error
return
}
let capturedImage = UIImage.init(data: imageData , scale: 1.0)
if let image = capturedImage {
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
self.delegate?.cameraManagerDidFinishProcessingPhoto()
}
}
extension CameraManager: AVCaptureFileOutputRecordingDelegate {
func fileOutput(_ output: AVCaptureFileOutput, didStartRecordingTo fileURL: URL, from connections: [AVCaptureConnection]) {
self.delegate?.cameraManagerDidStartRecording()
}
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
self.delegate?.cameraManagerDidFinishRecording()
PHPhotoLibrary.shared().performChanges({
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputFileURL)
}) { saved, error in
if saved {
do {
try FileManager.default.removeItem(at: outputFileURL)
} catch _ as NSError {
//error
}
}
}
}
}
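One thing worth checking (an observation, not a confirmed fix): every session operation above, including the blocking startRunning()/stopRunning() calls in the UIApplication observers, runs on OperationQueue.main, so a stalled audio device during a phone call can freeze the UI. Apple's AVCam sample routes all session work through a private serial queue instead, roughly like this sketch (sessionQueue is a hypothetical addition to CameraManager):

private let sessionQueue = DispatchQueue(label: "camera.manager.session") // hypothetical

// e.g. in the UIApplicationDidBecomeActive observer:
NotificationCenter.default.addObserver(forName: Notification.Name.UIApplicationDidBecomeActive, object: nil, queue: notificationQueue) { [unowned self] _ in
    self.sessionQueue.async {
        self.session.startRunning() // the blocking call stays off the main thread
        try? self.setupCamera()
        try? self.setZoomLevel(zoomLevel: 1.0)
    }
}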

Retrieve CVSampleBuffer from AVCapturePhoto obtained through AVCapturePhotoCaptureDelegate

As in the title, I'm trying to retrieve the CVPixelBuffer for a captured photo from output of the method:
AVCapturePhotoCaptureDelegate.photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?)
The photo parameter's pixelBuffer is nil in the delegate method call, and I'd like to use it for some low-level image manipulation.
I've been mostly following the sample code which can be found at:
https://developer.apple.com/library/content/samplecode/AVCam/Introduction/Intro.html
and the AVFoundation documentation.
Since the AVFoundation session configuration is kinda lengthy and might provide some answers, I'll just paste the whole object that handles it, which should contain all of the related code:
protocol CameraServiceDelegate: class {
func cameraServiceDidCapturePhoto(withBuffer buffer: CVPixelBuffer)
func cameraServiceEncounteredError(_ error: Error?)
}
final class CameraService: NSObject {
struct BufferRetrievalFailure: Error {}
weak var delegate: CameraServiceDelegate?
private let session = AVCaptureSession()
private var discoverySession = AVCaptureDevice.DiscoverySession(
deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera],
mediaType: .video,
position: .back
)
private var deviceInput: AVCaptureDeviceInput!
private let photoOutput = AVCapturePhotoOutput()
private let sessionQueue = DispatchQueue(label: "av-capture-session.serial.queue")
private var captureDevice: AVCaptureDevice? {
return .default(.builtInDualCamera, for: .video, position: .back)
?? .default(.builtInWideAngleCamera, for: .video, position: .back)
?? .default(.builtInWideAngleCamera, for: .video, position: .front)
}
func setup(with layer: AVCaptureVideoPreviewLayer) {
layer.session = session
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
break
case .notDetermined:
requestVideoAuthorization()
default:
assertionFailure("Just enable video, this is not a real app.")
}
sessionQueue.async { [weak self] in
self?.setupAVSession(with: layer)
}
}
func resume() {
sessionQueue.async { [weak session] in
session?.startRunning()
}
}
func suspend() {
sessionQueue.async { [weak session] in
session?.stopRunning()
}
}
func capturePhoto() {
sessionQueue.async { [weak self] in
guard let strongSelf = self else {
return
}
strongSelf.photoOutput.capturePhoto(with: strongSelf.capturePhotoSettings(), delegate: strongSelf)
}
}
private func requestVideoAuthorization() {
sessionQueue.suspend()
AVCaptureDevice.requestAccess(for: .video) { [weak sessionQueue] isAuthorized in
guard isAuthorized else {
assertionFailure("Just enable video, this is not a real app.")
return
}
sessionQueue?.resume()
}
}
private func setupAVSession(with layer: AVCaptureVideoPreviewLayer) {
session.beginConfiguration()
session.sessionPreset = .photo
setupVideoInput()
setupVideoPreviewViewLayer(with: layer)
setupPhotoOutput()
session.commitConfiguration()
}
private func setupVideoInput() {
guard let videoDevice = captureDevice,
let deviceInput = try? AVCaptureDeviceInput(device: videoDevice),
session.canAddInput(deviceInput) else {
fatalError("Could not retrieve suitable capture device or configure video device input.")
}
self.deviceInput = deviceInput
session.addInput(deviceInput)
}
private func setupVideoPreviewViewLayer(with layer: AVCaptureVideoPreviewLayer) {
DispatchQueue.main.async {
let statusBarOrientation = UIApplication.shared.statusBarOrientation
layer.connection?.videoOrientation =
statusBarOrientation != .unknown
? AVCaptureVideoOrientation(rawValue: statusBarOrientation.rawValue)!
: .portrait
}
}
private func setupPhotoOutput() {
guard session.canAddOutput(photoOutput) else {
fatalError("Could not configure photo output.")
}
session.addOutput(photoOutput)
photoOutput.isHighResolutionCaptureEnabled = true
photoOutput.isLivePhotoCaptureEnabled = false
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
}
private func capturePhotoSettings() -> AVCapturePhotoSettings {
let settings: AVCapturePhotoSettings
if photoOutput.availablePhotoCodecTypes.contains(.hevc) {
settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
} else {
settings = AVCapturePhotoSettings()
}
settings.isHighResolutionPhotoEnabled = true
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
return settings
}
}
extension CameraService: AVCapturePhotoCaptureDelegate {
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
guard error == nil else {
delegate?.cameraServiceEncounteredError(error)
return
}
guard let buffer = photo.pixelBuffer else {
delegate?.cameraServiceEncounteredError(BufferRetrievalFailure())
return
}
delegate?.cameraServiceDidCapturePhoto(withBuffer: buffer)
}
}
I don't have a code sample for you because I'm working in Xamarin, but you need to set the previewPhotoFormat on the AVCapturePhotoSettings object used when creating the capture. An example I found online:
var settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [
    kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
    kCVPixelBufferWidthKey as String: self.capturedButton.frame.width,
    kCVPixelBufferHeightKey as String: self.capturedButton.frame.height
] as [String : Any]
settings.previewPhotoFormat = previewFormat
Personally I inspect the availablePreviewPhotoPixelFormatTypes to see if the format that I require for my analysis (kCVPixelFormatType_32BGRA) is even in there. I haven't encountered a device without it so far.
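With previewPhotoFormat set, the delegate can fall back to photo.previewPixelBuffer when photo.pixelBuffer is nil (pixelBuffer itself is only populated for uncompressed capture formats such as kCVPixelFormatType_32BGRA requested via AVCapturePhotoSettings(format:)). A sketch against the CameraService above:

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard error == nil else {
        delegate?.cameraServiceEncounteredError(error)
        return
    }
    // pixelBuffer is non-nil only for uncompressed captures; previewPixelBuffer
    // is non-nil once previewPhotoFormat has been set on the capture settings.
    guard let buffer = photo.pixelBuffer ?? photo.previewPixelBuffer else {
        delegate?.cameraServiceEncounteredError(BufferRetrievalFailure())
        return
    }
    delegate?.cameraServiceDidCapturePhoto(withBuffer: buffer)
}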
