AVFoundation camera preview layer doesn't match horizontal orientation (Swift 4) - iOS

I've been taking bits and pieces of code from the internet and Stack Overflow to get a simple camera app working. However, I've noticed that if I flip my phone to the sideways position, I see two problems:
1) the camera preview layer only takes up half the screen
2) the camera's orientation doesn't seem to be changing; it stays fixed in the vertical position
My constraints seem to be fine, and if I look at various simulators' UIImageView (the camera's preview layer) in the horizontal position, the UIImage is stretched properly. So I'm not sure why the camera preview layer only stretches to half the screen.
(ImagePreview = camera preview layer)
As for the orientation problem, this seems to be a coding problem? I looked up some posts on Stack Overflow, but I didn't see anything for Swift 4, so I'm not sure if there is an easy way to do this in Swift 4:
iPhone AVFoundation camera orientation
Here is some of the code from my camera app:
import Foundation
import AVFoundation
import UIKit

class CameraSetup {

    var captureSession = AVCaptureSession()
    var frontCam: AVCaptureDevice?
    var backCam: AVCaptureDevice?
    var currentCam: AVCaptureDevice?
    var captureInput: AVCaptureDeviceInput?
    var captureOutput: AVCapturePhotoOutput?
    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    func captureDevice() {
        let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .unspecified)
        for device in discoverySession.devices {
            if device.position == .front {
                frontCam = device
            } else if device.position == .back {
                backCam = device
            }
            do {
                try backCam?.lockForConfiguration()
                backCam?.focusMode = .autoFocus
                backCam?.exposureMode = .autoExpose
                backCam?.unlockForConfiguration()
            } catch let error {
                print(error)
            }
        }
    }
    func configureCaptureInput() {
        currentCam = backCam!
        do {
            captureInput = try AVCaptureDeviceInput(device: currentCam!)
            if captureSession.canAddInput(captureInput!) {
                captureSession.addInput(captureInput!)
            }
        } catch let error {
            print(error)
        }
    }

    func configureCaptureOutput() {
        captureOutput = AVCapturePhotoOutput()
        captureOutput!.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: nil)
        if captureSession.canAddOutput(captureOutput!) {
            captureSession.addOutput(captureOutput!)
        }
        captureSession.startRunning()
    }
Here is the PreviewLayer function:
func configurePreviewLayer(view: UIView) {
    cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resize
    cameraPreviewLayer?.zPosition = -1
    view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
    cameraPreviewLayer?.frame = view.bounds
}
EDIT:
As suggested, I moved the view.bounds line up one line:
func configurePreviewLayer(view: UIView) {
    cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resize
    cameraPreviewLayer?.zPosition = -1
    cameraPreviewLayer?.frame = view.bounds
    view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
}
However, the problem still persists:
Here is the horizontal view:

I think you should use a UIView instead of a UIImageView, and try this:

cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resize
cameraPreviewLayer?.zPosition = -1
cameraPreviewLayer?.frame = self.imagePreview.bounds
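For the orientation problem, a common approach is to update the preview layer's connection whenever the interface rotates. This is a minimal sketch, assuming it lives in the view controller that owns cameraPreviewLayer (in your setup, you would expose the layer from CameraSetup):

override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
    super.viewWillTransition(to: size, with: coordinator)
    coordinator.animate(alongsideTransition: { _ in
        // Match the preview orientation to the new interface orientation
        guard let connection = self.cameraPreviewLayer?.connection,
            connection.isVideoOrientationSupported else { return }
        switch UIApplication.shared.statusBarOrientation {
        case .landscapeLeft: connection.videoOrientation = .landscapeLeft
        case .landscapeRight: connection.videoOrientation = .landscapeRight
        case .portraitUpsideDown: connection.videoOrientation = .portraitUpsideDown
        default: connection.videoOrientation = .portrait
        }
        // Re-fit the layer to the rotated view as well
        self.cameraPreviewLayer?.frame = self.view.bounds
    }, completion: nil)
}

Re-applying the frame here likely also explains the half-screen symptom: the layer's frame is set once and never follows the view through rotation unless you update it after layout changes.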

Related

Camera View adjusting

I'm not too experienced with Swift or Xcode, so any help would be appreciated!
I have made a separate .swift file for my QR/camera controller. I found a tutorial online on how to make a QR code reader, and I typed in the code provided. Everything is fine except that the camera view isn't appearing properly on the screen (using an iPhone 8). How can I adjust the video view?
Code:
import UIKit
import AVFoundation

class CameraController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate, AVCapturePhotoCaptureDelegate, AVCaptureMetadataOutputObjectsDelegate {

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var lblOutput: UILabel!

    var imageOrientation: AVCaptureVideoOrientation?
    var captureSession: AVCaptureSession?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var capturePhotoOutput: AVCapturePhotoOutput?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Get an instance of the AVCaptureDevice class to initialize a
        // device object and provide the video as the media type parameter
        guard let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) else {
            fatalError("No video device found")
        }
        // Handler called when the orientation changes
        self.imageOrientation = AVCaptureVideoOrientation.portrait
        do {
            // Get an instance of the AVCaptureDeviceInput class using the previous device object
            let input = try AVCaptureDeviceInput(device: captureDevice)
            // Initialize the captureSession object
            captureSession = AVCaptureSession()
            // Set the input device on the capture session
            captureSession?.addInput(input)
            // Get an instance of the AVCapturePhotoOutput class
            capturePhotoOutput = AVCapturePhotoOutput()
            capturePhotoOutput?.isHighResolutionCaptureEnabled = true
            // Set the output on the capture session
            captureSession?.addOutput(capturePhotoOutput!)
            captureSession?.sessionPreset = .high
            // Initialize an AVCaptureMetadataOutput object and add it to the session
            let captureMetadataOutput = AVCaptureMetadataOutput()
            captureSession?.addOutput(captureMetadataOutput)
            // Set delegate and use the default dispatch queue to execute the callback
            captureMetadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            captureMetadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.qr]
            // Initialize the video preview layer and add it as a sublayer to the previewView's layer
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
            videoPreviewLayer?.frame = view.layer.bounds
            previewView.layer.addSublayer(videoPreviewLayer!)
            // Start video capture
            captureSession?.startRunning()
        } catch {
            // If any error occurs, simply print it out
            print(error)
            return
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        navigationController?.setNavigationBarHidden(true, animated: false)
        self.captureSession?.startRunning()
    }

    // Find a camera with the specified AVCaptureDevice.Position, returning nil if one is not found
    func cameraWithPosition(position: AVCaptureDevice.Position) -> AVCaptureDevice? {
        let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .unspecified)
        for device in discoverySession.devices {
            if device.position == position {
                return device
            }
        }
        return nil
    }

    func metadataOutput(_ captureOutput: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        // Check if the metadataObjects array contains at least one object
        if metadataObjects.count == 0 {
            return
        }
        //self.captureSession?.stopRunning()
        // Get the metadata object
        let metadataObj = metadataObjects[0] as! AVMetadataMachineReadableCodeObject
        if metadataObj.type == AVMetadataObject.ObjectType.qr {
            if let outputString = metadataObj.stringValue {
                DispatchQueue.main.async {
                    print(outputString)
                    self.lblOutput.text = outputString
                }
            }
        }
    }
}
Image of current view:
The highlighted white box is the UIView
The mistake is that you use the frame of view but add videoPreviewLayer to previewView, which is smaller (as you showed in the storyboard). In this block:

// Initialize the video preview layer and add it as a sublayer to the previewView's layer
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
previewView.layer.addSublayer(videoPreviewLayer!)

change this line:

videoPreviewLayer?.frame = view.layer.bounds

to:

videoPreviewLayer?.frame = previewView.layer.bounds
You should use NSLayoutConstraint from the storyboard.
Step #1: this is your current state.
Step #2: add top, leading, trailing and bottom constraints.
Step #3: the final result.
I would expect one of the following is happening:
- You didn't set up your constraints properly
- Your view resizes
- You used the incorrect view to set the size of your layer
Setting up constraints is nearly impossible to explain in writing. There are many ways of setting them up, so I made a very short video that explains one way (or two) of setting up constraints.
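If you prefer to create the constraints in code, a minimal sketch (assuming previewView has already been added as a subview of the controller's view) looks like this:

previewView.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    previewView.topAnchor.constraint(equalTo: view.topAnchor),
    previewView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
    previewView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
    previewView.bottomAnchor.constraint(equalTo: view.bottomAnchor)
])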
The second and third can be explained in this snippet:
override func viewDidLoad() {
    super.viewDidLoad()
    ...
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    previewView.layer.addSublayer(videoPreviewLayer!)
    updatePreviewLayerFrame()
    ...
}

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    updatePreviewLayerFrame()
}

private func updatePreviewLayerFrame() {
    videoPreviewLayer?.frame = previewView.bounds
}
Overriding viewDidLayoutSubviews should resize your layer as this method is called whenever the view controller "resizes". It is also called shortly after the viewDidLoad. Also note that a previewView is used to determine the frame: videoPreviewLayer?.frame = previewView.bounds.
Layers do not automatically resize with their parent view. That means your videoPreviewLayer gets its frame from the original (not yet laid out) previewView and never changes it. To update the layer, you can override this method:
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // you need to keep a reference for that
    self.videoPreviewLayer.frame = self.previewView.bounds
}
Alternatively, and I think that's better, you can check out how the preview view is implemented in Apple's AVCam example app. Resizing will be handled by Auto Layout when using their approach.
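For reference, the core of that approach is a UIView subclass whose backing layer is the preview layer itself; sketched roughly here (the AVCam sample has the authoritative version):

class PreviewView: UIView {
    // Make AVCaptureVideoPreviewLayer the view's backing layer,
    // so Auto Layout resizes the preview automatically
    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }

    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        return layer as! AVCaptureVideoPreviewLayer
    }
}

You then constrain the PreviewView in the storyboard and assign your session to previewView.videoPreviewLayer.session; no manual frame management is needed.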

iPhoneX: AVCaptureVideoPreviewLayer not occupying the entire screen?

I have made a custom camera in a UIViewController, but I am not able to preview the camera output on the entire screen on the iPhone X. There appears to be substantial padding between the camera view and the edge of the screen; the view even seems to be inset from the safe area. Can anyone advise?
iPhone XS Max:
My code:
class CaptureImageViewController: UIViewController, UIImagePickerControllerDelegate, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Properties referenced below (declarations not shown in the original post)
    var captureSession: AVCaptureSession!
    var previewLayer: AVCaptureVideoPreviewLayer!
    var previewFrame: CGRect?

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        startAVCaptureSession()
    }

    func startAVCaptureSession() {
        print("START CAPTURE SESSION!!")
        // Setting up a capture session
        self.captureSession = AVCaptureSession()
        captureSession.beginConfiguration()
        // Configure input
        let videoDevice = AVCaptureDevice.default(for: .video)
        guard
            let videoDeviceInput = try? AVCaptureDeviceInput.init(device: videoDevice!) as AVCaptureInput,
            self.captureSession.canAddInput(videoDeviceInput) else { return }
        self.captureSession.addInput(videoDeviceInput)
        // Capture video output
        let videoOutput = AVCaptureVideoDataOutput.init()
        guard self.captureSession.canAddOutput(videoOutput) else { return }
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.init(label: "videoQueue"))
        self.captureSession.addOutput(videoOutput)
        // Start
        self.captureSession.commitConfiguration()
        self.captureSession.startRunning()
        // Display camera preview
        previewLayer = AVCaptureVideoPreviewLayer.init(session: self.captureSession)
        // Use 'insertSublayer' to enable button to be viewable
        view.layer.insertSublayer(previewLayer, at: 0)
        previewLayer.frame = view.frame
        previewFrame = previewLayer.frame
    }
}
My layout:
Try changing previewLayer.frame = view.frame to:

let frame = UIScreen.main.bounds
view.layer.insertSublayer(previewLayer, at: 0)
previewLayer.frame = frame
previewFrame = previewLayer.frame
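Since viewWillAppear runs before Auto Layout has finished, another option (a sketch, assuming previewLayer stays a property) is to set the frame after layout and let the video gravity fill the notched screen. Note the default videoGravity is .resizeAspect, which letterboxes the camera feed on the iPhone X's taller display:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // view.bounds covers the full screen here, ignoring safe-area insets
    previewLayer.videoGravity = .resizeAspectFill
    previewLayer.frame = view.bounds
}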

Fill view with iOS camera preview layer

Working through a tutorial, I'm trying to make a full-screen preview. Currently, my camera view seems very square, and I'm fairly confident this is an aspect-fit issue; the camera hits the right and left bounds. How can I pin my preview to the bottom of the nav bar, the bottom of the screen, and the sides?
import UIKit
import AVFoundation

class HomeViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    let captureSession = AVCaptureSession()
    var previewLayer: CALayer!
    var captureDevice: AVCaptureDevice!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        prepareCamera()
    }

    func prepareCamera() {
        captureSession.sessionPreset = AVCaptureSessionPresetPhoto
        if let availableDevices = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .back).devices {
            captureDevice = availableDevices.first
            beginSession()
        }
    }

    func beginSession() {
        do {
            let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            captureSession.addInput(captureDeviceInput)
        } catch {
            print(error.localizedDescription)
        }
        if let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) {
            self.previewLayer = previewLayer
            self.view.layer.addSublayer(self.previewLayer)
            self.previewLayer.frame = self.view.layer.frame
            //ADD CONSTRAINTS HERE
            captureSession.startRunning()
            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)]
            dataOutput.alwaysDiscardsLateVideoFrames = true
            if captureSession.canAddOutput(dataOutput) {
                captureSession.addOutput(dataOutput)
            }
            captureSession.commitConfiguration()
            // let queue = DispatchQueue(label: "com.willkie.captureQueue")
            // dataOutput.setSampleBufferDelegate(self, queue: queue)
        }
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
I think changing this particular line should work:

self.previewLayer.frame = self.view.layer.frame

to

self.previewLayer.frame = self.view.layer.bounds

Also, you should add your sublayer only after you have adjusted the frame in the above code; right now you add it before:

self.view.layer.addSublayer(self.previewLayer)

And you also need to set the gravity:

previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

If it still doesn't work, add a custom view to your view controller, set it to the size at which you want to show the camera preview, and then use:

self.previewLayer.frame = self.customViewOutlet.layer.bounds
I solved it with Swift 4 like this (don't forget to import AVKit):

override func viewDidLoad() {
    super.viewDidLoad()
    let captureSession = AVCaptureSession()
    guard let captureDevice = AVCaptureDevice.default(for: .video),
        let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }
    captureSession.addInput(input)
    captureSession.startRunning()
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.frame = view.frame
    previewLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(previewLayer)
}
I'm not sure this is the best way to go about this, but what worked for me was setting the frame to the screen size. In that case it would be:

self.previewLayer.frame = UIScreen.main.bounds

Hope this helps.

Integrating Custom Camera View AVCaptureDevice

I'm trying to integrate a custom camera view, following some slightly outdated code. I've had several errors but believe I've fixed all but two.
Here is the current code so far:
import Foundation
import AVFoundation
import UIKit
class setupView : UIViewController {
#IBOutlet var cameraView: UIView!
#IBOutlet var nameTextField: UITextField!
var captureSession = AVCaptureSession()
var stillImageOutput = AVCapturePhotoOutput()
var previewLayer = AVCaptureVideoPreviewLayer()
override func viewDidLoad() {
let session = AVCaptureDeviceDiscoverySession.init(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .back)
if let device = session?.devices[0] {
if device.position == AVCaptureDevicePosition.back {
do {
let input = try AVCaptureDeviceInput(device: device )
if captureSession.canAddInput(input){
captureSession.addInput(input)
stillImageOutput.outputSettings = [AVVideoCodecKey : AVVideoCodecJPEG]
if captureSession.canAddOutput(stillImageOutput) {
captureSession.addOutput(stillImageOutput)
captureSession.startRunning()
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.AVLayerVideoGravityResizeAspectFill
previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
cameraView.layer.addSublayer(previewLayer)
previewLayer.bounds = cameraView.frame
previewLayer.position = CGPoint(x: cameraView.frame.width / 2, y:cameraView.frame.height / 2)
}
}
} catch {
}
}
}
}
#IBAction func takePhoto(_ sender: Any) {
}
#IBAction func submitAction(_ sender: Any) {
}
}
I'm currently getting two errors:

Value of type 'AVCapturePhotoOutput' has no member 'outputSettings'

Value of type 'AVCaptureVideoPreviewLayer' has no member 'AVLayerVideoGravityResizeAspectFill'
You are almost there. The problem is that some of the AVFoundation classes are deprecated, and there is more than one way to take a photo now. Here are the issues with your code.

Value of type 'AVCapturePhotoOutput' has no member 'outputSettings'

AVCapturePhotoOutput doesn't have any member named outputSettings; check the full documentation of AVCapturePhotoOutput. outputSettings is actually a member of AVCaptureStillImageOutput, which has been deprecated since iOS 10.0.

Value of type 'AVCaptureVideoPreviewLayer' has no member 'AVLayerVideoGravityResizeAspectFill'

The same kind of mistake: AVCaptureVideoPreviewLayer has no such member. If you want to set the video gravity on the preview layer, set it like this:

previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

As you mentioned, the code is outdated and uses the deprecated AVCaptureStillImageOutput. If you really want to use AVCapturePhotoOutput, then follow the steps below.

These are the steps to capture a photo:
Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.
Put the code below in your clickCapture method, and don't forget to conform to and implement the delegate in your class:
let settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
                     kCVPixelBufferWidthKey as String: 160,
                     kCVPixelBufferHeightKey as String: 160]
settings.previewPhotoFormat = previewFormat
self.cameraOutput.capturePhoto(with: settings, delegate: self)
If you would like to know a different way of capturing a photo with AVFoundation, check out my previous SO answer. Apple's documentation also explains very clearly how to use AVCapturePhotoOutput.
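For completeness, here is a minimal sketch of the delegate side (using the iOS 11 signature; the names are illustrative):

extension setupView: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil,
            let data = photo.fileDataRepresentation(),
            let image = UIImage(data: data) else { return }
        // Use the captured UIImage here, e.g. assign it to an image view
        print("Captured image of size \(image.size)")
    }
}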
import AVFoundation
import Foundation

// (Surrounding view controller declaration not shown in the original answer)
@IBOutlet weak var mainimage: UIImageView!

let captureSession = AVCaptureSession()
let stillImageOutput = AVCaptureStillImageOutput()
var previewLayer: AVCaptureVideoPreviewLayer?
var captureDevice: AVCaptureDevice?

override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPresetHigh
    if let devices = AVCaptureDevice.devices() as? [AVCaptureDevice] {
        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if device.hasMediaType(AVMediaTypeVideo) {
                // Finally check the position and confirm we've got the front camera
                if device.position == AVCaptureDevicePosition.front {
                    captureDevice = device
                    if captureDevice != nil {
                        print("Capture device found")
                        beginSession()
                    }
                }
            }
        }
    }
}

func beginSession() {
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
        stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession.canAddOutput(stillImageOutput) {
            captureSession.addOutput(stillImageOutput)
        }
    } catch {
        print("error: \(error.localizedDescription)")
    }
    guard let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) else {
        print("no preview layer")
        return
    }
    self.view.layer.addSublayer(previewLayer)
    previewLayer.frame = self.view.layer.frame
    captureSession.startRunning()
    self.view.addSubview(mainimage)
}

This code is working in my app.
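To actually take a picture with this older API, the call would look roughly like the sketch below (keep in mind AVCaptureStillImageOutput has been deprecated since iOS 10):

if let connection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronously(from: connection) { buffer, error in
        guard let buffer = buffer, error == nil,
            let data = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer),
            let image = UIImage(data: data) else { return }
        DispatchQueue.main.async {
            // Show the captured still in the image view
            self.mainimage.image = image
        }
    }
}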

Toggle front/back cameras

Following my previous question, I've made a lot of changes to my code, which has brought up a new set of questions; that's why I decided to create a new question instead of following up on the previous one.
I'm currently playing around with Swift development, where I'm trying to make a basic application which displays the camera within a UIImageView.
I am using the AVFoundation framework.
So far, I've been able to set up the front-facing camera on load. My question is, how would I go about implementing the ability to toggle the camera on a button click?
I first initialise instances:

var captureSession = AVCaptureSession()
var sessionOutput = AVCapturePhotoOutput()
var sessionOutputSetting = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecJPEG])
var previewLayer = AVCaptureVideoPreviewLayer()

I also created a toggle bool:

// Bool to manage camera toggle. False = front-facing (default)
var toggle = false
In viewWillAppear I call the pickCamera function, which checks the value of toggle and creates a device discovery session:
func pickCamera(which: Bool) {
    if which == true {
        let deviceDescovery = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInDualCamera, AVCaptureDeviceType.builtInTelephotoCamera, AVCaptureDeviceType.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: AVCaptureDevicePosition.back)
        startCamera(deviceDesc: deviceDescovery!)
        toggle = true
    } else if which == false {
        let deviceDescovery = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInDualCamera, AVCaptureDeviceType.builtInTelephotoCamera, AVCaptureDeviceType.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: AVCaptureDevicePosition.front)
        startCamera(deviceDesc: deviceDescovery!)
        toggle = false
    }
}
The startCamera function then creates and configures the captureSession and adds the preview to the parent layer for display:
func startCamera(deviceDesc: AVCaptureDeviceDiscoverySession) {
    for device in (deviceDesc.devices)! {
        if device.position == AVCaptureDevicePosition.back {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if captureSession.canAddInput(input) {
                    captureSession.addInput(input)
                    if captureSession.canAddOutput(sessionOutput) {
                        captureSession.addOutput(sessionOutput)
                        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                        previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                        captureSession.startRunning()
                        cameraView.layer.addSublayer(previewLayer)
                    }
                }
            } catch {
                print("Exception")
            }
        } else if device.position == AVCaptureDevicePosition.front {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if captureSession.canAddInput(input) {
                    captureSession.addInput(input)
                    if captureSession.canAddOutput(sessionOutput) {
                        captureSession.addOutput(sessionOutput)
                        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                        previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                        captureSession.startRunning()
                        cameraView.layer.addSublayer(previewLayer)
                    }
                }
            } catch {
                print("Exception")
            }
        }
    }
}
I've also added a button with an action:

@IBAction func toggleCamera(_ sender: Any) {
    if toggle == false {
        print("Changing to back camera")
        previewLayer.removeFromSuperlayer()
        toggle = true
        pickCamera(which: true)
    } else if toggle == true {
        print("Changing to front camera")
        previewLayer.removeFromSuperlayer()
        toggle = false
        pickCamera(which: false)
    }
}
The toggle method is supposed to clear the current view and call the pickCamera method, which should create a new instance of the alternative camera. For some reason, though, this is not working. I'm guessing it's something to do with not properly clearing the previous view or not adding the new view correctly, but again I'm unsure.
Thank you for taking the time to look at my problem, and please ask if I'm missing information or haven't explained myself properly.
Update
Finally fixed. The problem lay with not stopping the current captureSession before creating a new captureSession.
To fix this, I updated the toggleCamera function to include:

let currentCameraInput: AVCaptureInput = captureSession.inputs[0] as! AVCaptureInput

For anyone interested in the code, look here.
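For reference, here is a sketch (in Swift 4 syntax, with illustrative names) of a toggle that reconfigures the existing session rather than stacking a second one on top of it:

func switchCamera() {
    guard let currentInput = captureSession.inputs.first as? AVCaptureDeviceInput else { return }
    captureSession.beginConfiguration()
    captureSession.removeInput(currentInput)
    // Flip the position and look up the matching wide-angle camera
    let newPosition: AVCaptureDevice.Position = (currentInput.device.position == .back) ? .front : .back
    if let newDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: newPosition),
        let newInput = try? AVCaptureDeviceInput(device: newDevice),
        captureSession.canAddInput(newInput) {
        captureSession.addInput(newInput)
    }
    captureSession.commitConfiguration()
}

Because the same session keeps running, the existing preview layer continues to display the new camera, and no layers need to be removed or re-added.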
