Following on from my previous question, I've made a lot of changes to my code, which has brought up a new set of questions, so I decided to create a new question instead of continuing the previous one.
I'm currently playing around with Swift development, trying to make a basic application that displays the camera feed within a UIImageView.
I am using the AVFoundation framework.
So far, I've been able to set up the front-facing camera on load. My question is: how would I go about implementing the ability to toggle the camera on a button click?
I first initialise some instances:
var captureSession = AVCaptureSession()
var sessionOutput = AVCapturePhotoOutput()
var sessionOutputSetting = AVCapturePhotoSettings(format: [AVVideoCodecKey:AVVideoCodecJPEG])
var previewLayer = AVCaptureVideoPreviewLayer()
I also created a toggle bool:
// Bool to manage camera toggle. False = front-face (default)
var toggle = false
In viewWillAppear I call the pickCamera function, which checks the value of toggle and creates a device discovery session:
func pickCamera(which: Bool) {
    if which {
        let deviceDiscovery = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInDualCamera, AVCaptureDeviceType.builtInTelephotoCamera, AVCaptureDeviceType.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: AVCaptureDevicePosition.back)
        startCamera(deviceDesc: deviceDiscovery!)
        toggle = true
    } else {
        let deviceDiscovery = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInDualCamera, AVCaptureDeviceType.builtInTelephotoCamera, AVCaptureDeviceType.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: AVCaptureDevicePosition.front)
        startCamera(deviceDesc: deviceDiscovery!)
        toggle = false
    }
}
The startCamera function then creates and configures the captureSession and adds the preview to the parent layer to display it:
func startCamera(deviceDesc: AVCaptureDeviceDiscoverySession) {
    for device in (deviceDesc.devices)! {
        if device.position == AVCaptureDevicePosition.back {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if captureSession.canAddInput(input) {
                    captureSession.addInput(input)
                    if captureSession.canAddOutput(sessionOutput) {
                        captureSession.addOutput(sessionOutput)
                        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                        previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                        captureSession.startRunning()
                        cameraView.layer.addSublayer(previewLayer)
                    }
                }
            } catch {
                print("Exception")
            }
        } else if device.position == AVCaptureDevicePosition.front {
            do {
                let input = try AVCaptureDeviceInput(device: device)
                if captureSession.canAddInput(input) {
                    captureSession.addInput(input)
                    if captureSession.canAddOutput(sessionOutput) {
                        captureSession.addOutput(sessionOutput)
                        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                        previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                        captureSession.startRunning()
                        cameraView.layer.addSublayer(previewLayer)
                    }
                }
            } catch {
                print("Exception")
            }
        }
    }
}
I've also added a button with an action:
@IBAction func toggleCamera(_ sender: Any) {
    if toggle == false {
        print("Changing to back camera")
        previewLayer.removeFromSuperlayer()
        toggle = true
        pickCamera(which: true)
    } else {
        print("Changing to front camera")
        previewLayer.removeFromSuperlayer()
        toggle = false
        pickCamera(which: false)
    }
}
The toggle method is supposed to clear the current view and call the pickCamera method, which should create a new instance of the alternative camera.
For some reason, though, this is not working. I'm guessing it has something to do with not properly clearing the previous view or not adding the new view correctly, but again I'm unsure.
Thank you for taking the time to look at my problem, and please ask if I'm missing information or haven't explained myself properly.
Update
Finally fixed it. The problem lay in not stopping the current captureSession before creating a new one.
To fix this, I updated the toggleCamera function to include:
let currentCameraInput: AVCaptureInput = captureSession.inputs[0] as! AVCaptureInput
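Pieced together, the working toggle handler ends up roughly like this (my sketch of the fix described above, not the poster's exact code; it assumes pickCamera re-adds the input and restarts the session as in the code earlier):

```swift
@IBAction func toggleCamera(_ sender: Any) {
    // Stop the running session before reconfiguring it.
    captureSession.stopRunning()
    previewLayer.removeFromSuperlayer()

    // Remove the old camera input so the session will accept the new one.
    if let currentCameraInput = captureSession.inputs.first as? AVCaptureInput {
        captureSession.removeInput(currentCameraInput)
    }

    // Flip the flag and set up the opposite camera.
    toggle = !toggle
    pickCamera(which: toggle)
}
```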
For anyone interested in the code, look here
Related
I'm creating a view to take a photo and send it to a server. I made a button to switch the camera from the back camera to the front camera and vice versa, but I'm not sure how to implement it. I've done my research, but I'm still confused about how to implement it in my project because I'm new to iOS programming. Here is some of my code:
Some of my AVFoundation variables:
private var session: AVCaptureSession!
private var stillImageOutput: AVCapturePhotoOutput!
private var stillImageSettings: AVCapturePhotoSettings!
private var previewLayer: AVCaptureVideoPreviewLayer!
My setupCamera() function, which I also call in viewDidAppear():
func setupCamera() {
    shutterButton.isUserInteractionEnabled = false
    if self.session != nil {
        self.session.stopRunning()
    }
    session = AVCaptureSession()
    session.sessionPreset = .photo
    guard let setCam: AVCaptureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
    if setCam.isFocusModeSupported(.continuousAutoFocus) && setCam.isFocusPointOfInterestSupported {
        do {
            try setCam.lockForConfiguration()
            let focusPoint = CGPoint(x: 0.5, y: 0.5)
            setCam.focusPointOfInterest = focusPoint
            setCam.focusMode = .continuousAutoFocus
            setCam.isSubjectAreaChangeMonitoringEnabled = true
            setCam.unlockForConfiguration()
        } catch let error {
            print(error.localizedDescription)
        }
    }
    AVCaptureDevice.requestAccess(for: AVMediaType.video) { response in
        if response {
            // Access granted
        } else {
            // Access denied
            let alert = KPMAlertController(title: "Sorry", message: "To take a photo, you must grant access to the camera", icon: UIImage(named: "ic_info_big"))
            let okAct = KPMAlertAction(title: "OK", type: .regular, handler: {
                // Open Settings
                UIApplication.shared.open(URL(string: UIApplication.openSettingsURLString)!)
                self.navigationController?.popViewController(animated: true)
            })
            alert.addAction(okAct)
            self.present(alert, animated: true, completion: nil)
        }
    }
    do {
        let input: AVCaptureDeviceInput = try AVCaptureDeviceInput(device: setCam)
        if self.session.canAddInput(input) {
            self.session.addInput(input)
            self.stillImageOutput = AVCapturePhotoOutput()
            if self.session.canAddOutput(self.stillImageOutput) {
                self.session.addOutput(self.stillImageOutput)
                self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
                self.previewLayer.frame = self.cameraView.bounds
                self.previewLayer.videoGravity = .resizeAspectFill
                self.previewLayer.connection?.videoOrientation = .portrait
                self.cameraView.layer.addSublayer(self.previewLayer)
                self.shutterButton.isUserInteractionEnabled = true
                self.session.startRunning()
                self.backButton.superview?.bringSubviewToFront(self.backButton)
                self.shutterButton.superview?.bringSubviewToFront(self.shutterButton)
            }
        }
        ProgressHUD.dismiss()
    } catch let error {
        print(error.localizedDescription)
        ProgressHUD.dismiss()
    }
}
And I want to implement the switch function here:
@IBAction func switchCameraAction(_ sender: Any) {
    // The code to implement camera-switching functionality
}
If you need more code feel free to ask me. Any help would be appreciated. Thank you.
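For what it's worth, a common pattern for this (a sketch under the assumption that the variables above were set up by setupCamera(); not tested against this exact project) is to swap the session's input inside a begin/commitConfiguration pair. The preview layer can stay as-is, because it is attached to the session rather than to a particular input:

```swift
@IBAction func switchCameraAction(_ sender: Any) {
    guard let session = session,
          let currentInput = session.inputs.first as? AVCaptureDeviceInput else { return }

    // Pick the opposite position from the input currently in use.
    let newPosition: AVCaptureDevice.Position =
        currentInput.device.position == .back ? .front : .back

    guard let newDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                  for: .video,
                                                  position: newPosition),
          let newInput = try? AVCaptureDeviceInput(device: newDevice) else { return }

    session.beginConfiguration()
    session.removeInput(currentInput)
    if session.canAddInput(newInput) {
        session.addInput(newInput)
    } else {
        // Fall back to the old input if the new one can't be added.
        session.addInput(currentInput)
    }
    session.commitConfiguration()
}
```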
I have an app with a Snapchat-type camera where a UIView displays the back camera feed. I have a button on top, and when I click that button I would like to take a picture. Right now, when I click that button it simply opens up another camera.
This is the code for the button click:
@IBAction func takePhoto(_ sender: UIButton) {
    imagePicker = UIImagePickerController()
    imagePicker.delegate = self
    imagePicker.sourceType = .camera
    present(imagePicker, animated: true, completion: nil)
}
However, as stated above, that is redundant, since my view controller already displays a camera in viewDidAppear.
override func viewDidAppear(_ animated: Bool) {
    self.ShowCamera(self.frontCamera)
    fullView.isHidden = false
}

func ShowCamera(_ front: Bool) {
    self.captureSession.sessionPreset = AVCaptureSession.Preset.photo
    if let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera, .builtInMicrophone],
                                                               mediaType: AVMediaType.video, position: .back).devices.first {
        self.captureDevice = availableDevices
        if captureSession.isRunning != true {
            self.beginSession()
        }
    }
    if self.captureDevice == nil {
        print("capture device is nil")
        return
    }
    do {
        try self.captureSession.removeInput(AVCaptureDeviceInput(device: self.captureDevice!))
    } catch let error as NSError {
        print(error)
    }
}
func beginSession() {
    do {
        let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
        captureSession.addInput(captureDeviceInput)
    } catch {
        print(error.localizedDescription)
    }
    captureSession.startRunning()
    let preview = AVCaptureVideoPreviewLayer(session: captureSession)
    self.previewLayer = preview
    preview.videoGravity = AVLayerVideoGravity.resizeAspectFill
    CameraView.layer.insertSublayer(self.previewLayer, at: 0)
    self.previewLayer.frame = self.CameraView.layer.frame
    let dataOutput = AVCaptureVideoDataOutput()
    dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)] as [String: Any]
    dataOutput.alwaysDiscardsLateVideoFrames = true
    if captureSession.canAddOutput(dataOutput) {
        captureSession.addOutput(dataOutput)
    }
    captureSession.commitConfiguration()
}
All the code above simply gets the UIView and shows the camera. The TakePhoto button is a sublayer shown on top of the camera image. When I click that button, I want to capture whatever image the camera is displaying.
The command to capture a photo from the running session is:
guard let output = captureSession.outputs[0] as? AVCapturePhotoOutput else { return }
output.capturePhoto(with: settings, delegate: self)
Here, self is an AVCapturePhotoCaptureDelegate. You then receive the photo through the delegate methods, where you can extract and save it.
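A sketch of that receiving side (the view controller name here is hypothetical; fileDataRepresentation() is the AVCapturePhoto accessor for the encoded image data):

```swift
extension MyViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil,
              let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        // Do something with the captured image, e.g. save it to the photo library
        // (saving requires the NSPhotoLibraryAddUsageDescription Info.plist key).
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
    }
}
```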
I have a collectionView whose cells act as screens. When I swipe to the camera cell after opening the app, there is a lag for a second; afterwards the swiping back and forth is smooth (below is a video of the lag). Is there any way to prevent this, perhaps by starting the capture session in the background before the cell is reached? Thank you for your help.
Code for the camera cell:
import UIKit
import AVFoundation

class MainCameraCollectionViewCell: UICollectionViewCell {

    var captureSession = AVCaptureSession()
    private var sessionQueue: DispatchQueue!
    var captureConnection = AVCaptureConnection()
    var backCamera: AVCaptureDevice?
    var frontCamera: AVCaptureDevice?
    var currentCamera: AVCaptureDevice?
    var photoOutPut: AVCapturePhotoOutput?
    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?
    var image: UIImage?
    var usingFrontCamera = false

    override func awakeFromNib() {
        super.awakeFromNib()
        setupCaptureSession()
        setupDevice()
        setupInput()
        self.setupPreviewLayer()
        startRunningCaptureSession()
    }

    func setupCaptureSession() {
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
        sessionQueue = DispatchQueue(label: "session queue")
    }

    func setupDevice(usingFrontCamera: Bool = false) {
        DispatchQueue.main.async {
            //sessionQueue.async {
            let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
            let devices = deviceDiscoverySession.devices
            for device in devices {
                if usingFrontCamera && device.position == AVCaptureDevice.Position.front {
                    //backCamera = device
                    self.currentCamera = device
                } else if device.position == AVCaptureDevice.Position.back {
                    //frontCamera = device
                    self.currentCamera = device
                }
            }
        }
    }

    func setupInput() {
        DispatchQueue.main.async {
            do {
                let captureDeviceInput = try AVCaptureDeviceInput(device: self.currentCamera!)
                if self.captureSession.canAddInput(captureDeviceInput) {
                    self.captureSession.addInput(captureDeviceInput)
                }
                self.photoOutPut = AVCapturePhotoOutput()
                self.photoOutPut?.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: nil)
                if self.captureSession.canAddOutput(self.photoOutPut!) {
                    self.captureSession.addOutput(self.photoOutPut!)
                }
            } catch {
                print(error)
            }
        }
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraPreviewLayer?.frame = CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height)
        self.layer.insertSublayer(cameraPreviewLayer!, at: 0)
    }

    func startRunningCaptureSession() {
        captureSession.startRunning()
    }

    @IBAction func cameraButton_Touched(_ sender: Any) {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        settings.isAutoStillImageStabilizationEnabled = true
        if let photoOutputConnection = self.photoOutPut?.connection(with: .video) {
            photoOutputConnection.videoOrientation = (cameraPreviewLayer?.connection?.videoOrientation)!
        }
    }

    @IBAction func Flip_camera(_ sender: UIButton?) {
        print("Flip Touched")
        self.captureSession.beginConfiguration()
        if let inputs = self.captureSession.inputs as? [AVCaptureDeviceInput] {
            for input in inputs {
                self.captureSession.removeInput(input)
                print("input removed")
            }
            // This seemed to have fixed it
            for output in self.captureSession.outputs {
                captureSession.removeOutput(output)
                print("output removed")
            }
        }
        self.usingFrontCamera = !self.usingFrontCamera
        self.setupCaptureSession()
        self.setupDevice(usingFrontCamera: self.usingFrontCamera)
        self.setupInput()
        self.captureSession.commitConfiguration()
        self.startRunningCaptureSession()
    }
}
Initializing the camera takes time. Once your app requests use of the camera, supporting software has to be initialized in the background, and there isn't really a way to speed that up.
I would recommend placing anything related to AVFoundation on a background thread and initializing it after your app loads. That way the camera will be ready by the time the user swipes to the camera cell. If you don't want to preload, you could at least still run the AVFoundation work in the background and show some kind of activity indicator, so the user sees that something is loading instead of having your main thread blocked while the camera boots up.
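As a rough sketch of that suggestion, using the cell's existing sessionQueue (this assumes setupDevice() and setupInput() are changed to do their work inline rather than dispatching to the main queue; only the preview-layer work has to stay on the main thread):

```swift
override func awakeFromNib() {
    super.awakeFromNib()
    setupCaptureSession()
    setupPreviewLayer() // CALayer work stays on the main thread

    // Discover the device, wire up input/output, and start the session
    // off the main thread so swiping to the cell doesn't stutter.
    sessionQueue.async { [weak self] in
        guard let self = self else { return }
        self.setupDevice()
        self.setupInput()
        self.captureSession.startRunning()
    }
}
```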
I've been taking bits and pieces of code from the internet and Stack Overflow to get a simple camera app working.
However, I've noticed that if I flip my phone to the sideways position, I see two problems:
1) The camera preview layer only takes up half the screen.
2) The camera's orientation doesn't seem to change; it stays fixed in the vertical position.
My constraints seem to be fine, and if I look at various simulators' UIImageView (the camera's preview layer) in the horizontal position, the UIImage is stretched properly. So I'm not sure why the camera preview layer only stretches to half the screen.
(ImagePreview = Camera preview layer)
As for the orientation problem, this seems to be a coding problem? I looked up some posts on Stack Overflow, but I didn't see anything for Swift 4, and I'm not sure if there is an easy way to do this in Swift 4.
iPhone AVFoundation camera orientation
Here is some of the code from my camera app:
import Foundation
import AVFoundation
import UIKit

class CameraSetup {

    var captureSession = AVCaptureSession()
    var frontCam: AVCaptureDevice?
    var backCam: AVCaptureDevice?
    var currentCam: AVCaptureDevice?
    var captureInput: AVCaptureDeviceInput?
    var captureOutput: AVCapturePhotoOutput?
    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    func captureDevice() {
        let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .unspecified)
        for device in discoverySession.devices {
            if device.position == .front {
                frontCam = device
            } else if device.position == .back {
                backCam = device
            }
            do {
                try backCam?.lockForConfiguration()
                backCam?.focusMode = .autoFocus
                backCam?.exposureMode = .autoExpose
                backCam?.unlockForConfiguration()
            } catch let error {
                print(error)
            }
        }
    }
    func configureCaptureInput() {
        currentCam = backCam!
        do {
            captureInput = try AVCaptureDeviceInput(device: currentCam!)
            if captureSession.canAddInput(captureInput!) {
                captureSession.addInput(captureInput!)
            }
        } catch let error {
            print(error)
        }
    }

    func configureCaptureOutput() {
        captureOutput = AVCapturePhotoOutput()
        captureOutput!.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: nil)
        if captureSession.canAddOutput(captureOutput!) {
            captureSession.addOutput(captureOutput!)
        }
        captureSession.startRunning()
    }
Here is the PreviewLayer function:
func configurePreviewLayer(view: UIView) {
    cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resize
    cameraPreviewLayer?.zPosition = -1
    view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
    cameraPreviewLayer?.frame = view.bounds
}
EDIT:
As suggested, I moved the view.bounds line up one line:
func configurePreviewLayer(view: UIView) {
    cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resize
    cameraPreviewLayer?.zPosition = -1
    cameraPreviewLayer?.frame = view.bounds
    view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
}
However, the problem still persists:
Here is the horizontal view:
I think you should use a UIView instead of a UIImageView, and try this:
cameraPreviewlayer?.videoGravity = AVLayerVideoGravityResize;
cameraPreviewlayer?.zPosition = -1;
cameraPreviewlayer?.frame = self.imagePreview.bounds
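For the orientation half of the problem, one common approach (a sketch; it assumes the controller that owns cameraPreviewLayer is a UIViewController) is to update the layer's frame and its connection's videoOrientation whenever the interface rotates:

```swift
override func viewWillTransition(to size: CGSize,
                                 with coordinator: UIViewControllerTransitionCoordinator) {
    super.viewWillTransition(to: size, with: coordinator)
    coordinator.animate(alongsideTransition: { _ in
        // Resize the preview to the rotated bounds.
        self.cameraPreviewLayer?.frame = self.view.bounds
        // Match the video orientation to the device orientation.
        if let connection = self.cameraPreviewLayer?.connection,
           connection.isVideoOrientationSupported {
            switch UIDevice.current.orientation {
            case .landscapeLeft: connection.videoOrientation = .landscapeRight
            case .landscapeRight: connection.videoOrientation = .landscapeLeft
            case .portraitUpsideDown: connection.videoOrientation = .portraitUpsideDown
            default: connection.videoOrientation = .portrait
            }
        }
    })
}
```

Note that landscape device orientations map to the opposite video orientations, since AVCaptureVideoOrientation is defined relative to the camera rather than the device.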
I have been following this tutorial: https://www.youtube.com/watch?v=w0O3ZGUS3pk and managed to get to the point where I can take photos and see the camera output on my UIView.
However, I need to record video instead of taking photos. I've looked on Stack Overflow and in the AVFoundation documentation, but couldn't get it to work. This is my code so far:
override func viewDidAppear(_ animated: Bool) {
    let devices = AVCaptureDevice.devices()
    for device in devices! {
        if (device as AnyObject).position == AVCaptureDevicePosition.back {
            do {
                let input = try AVCaptureDeviceInput(device: device as! AVCaptureDevice)
                if captureSession.canAddInput(input) {
                    captureSession.addInput(input)
                    if captureSession.canAddOutput(sessionOutput) {
                        captureSession.addOutput(sessionOutput)
                        captureSession.startRunning()
                        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                        previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                        cameraView.layer.addSublayer(previewLayer)
                        previewLayer.position = CGPoint(x: self.cameraView.frame.width / 2, y: self.cameraView.frame.height / 2)
                        previewLayer.bounds = cameraView.bounds
                    }
                }
            } catch {
                print("error")
            }
        }
    }
}
@IBAction func recordPress(_ sender: Any) {
    if let videoConnection = sessionOutput.connection(withMediaType: AVMediaTypeVideo) {
    }
}
I saw a question on here relating to this tutorial, but it didn't have the answer.
How would you use AVFoundation to record video in this way?
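In case it helps, recording video is normally done with AVCaptureMovieFileOutput rather than a photo or still-image output. A sketch (my own, not from the tutorial; it assumes the enclosing view controller conforms to AVCaptureFileOutputRecordingDelegate and that the movie output is added during session setup):

```swift
let movieOutput = AVCaptureMovieFileOutput()

// During session setup, add the movie output instead of the photo output.
if captureSession.canAddOutput(movieOutput) {
    captureSession.addOutput(movieOutput)
}

// Toggle recording from the button.
@IBAction func recordPress(_ sender: Any) {
    if movieOutput.isRecording {
        movieOutput.stopRecording()
    } else {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("capture.mov")
        movieOutput.startRecording(to: url, recordingDelegate: self)
    }
}

// AVCaptureFileOutputRecordingDelegate callback with the finished file.
func fileOutput(_ output: AVCaptureFileOutput,
                didFinishRecordingTo outputFileURL: URL,
                from connections: [AVCaptureConnection],
                error: Error?) {
    // Handle or save the recorded movie here.
    print("Finished recording to \(outputFileURL)")
}
```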