I created a custom camera tool. Now I am trying to handle checking which cameras exist; however, I only have the Simulator (no camera) and an iPhone (both cameras) to test with. I handled the no-camera case, but I couldn't understand how the one-camera case should work, so I also couldn't figure out how to let the user flip the camera.
Currently I am using the following code from an external custom camera library (dojo custom camera).
Position.Back and .Front work, and I handled no camera, but I couldn't figure out how to:
- handle the check for exactly one camera, and
- assign a variable that tracks which of the back and front cameras exist (so I can create a UIButton in the VC and use it to flip the camera between back and front).
// I call addVideoInput() while initializing
func addVideoInput() {
    if let device: AVCaptureDevice = self.deviceWithMediaTypeWithPosition(AVMediaTypeVideo, position: AVCaptureDevicePosition.Front) {
        do {
            let input = try AVCaptureDeviceInput(device: device)
            if self.session.canAddInput(input) {
                self.session.addInput(input)
            }
        } catch {
            print(error)
        }
    }
}

func deviceWithMediaTypeWithPosition(mediaType: NSString, position: AVCaptureDevicePosition) -> AVCaptureDevice? {
    let devices: NSArray = AVCaptureDevice.devicesWithMediaType(mediaType as String)
    if devices.count != 0 {
        if var captureDevice: AVCaptureDevice = devices.firstObject as? AVCaptureDevice {
            for device in devices {
                let d = device as! AVCaptureDevice
                if d.position == position {
                    captureDevice = d
                    break
                }
            }
            print(captureDevice)
            return captureDevice
        }
    }
    print("doesn't have any camera")
    return nil
}
You need to remove the existing input object and create a new one, using a couple of values and a boolean.
Here is the code for handling the camera position (AVCaptureDevicePosition).
At the top of the class, add an enum:
enum CameraType {
    case Front
    case Back
}
Initialise the variable:
var cameraCheck = CameraType.Back
Then change the following function:
func addVideoInput() {
    let position: AVCaptureDevicePosition
    if cameraCheck == CameraType.Front {
        cameraCheck = CameraType.Back
        position = AVCaptureDevicePosition.Front
    } else {
        cameraCheck = CameraType.Front
        position = AVCaptureDevicePosition.Back
    }
    // deviceWithMediaTypeWithPosition returns an optional, so unwrap it instead of force-typing it
    guard let device = self.deviceWithMediaTypeWithPosition(AVMediaTypeVideo, position: position) else {
        print("no camera for the requested position")
        return
    }
    do {
        let input = try AVCaptureDeviceInput(device: device)
        if self.session.canAddInput(input) {
            self.session.addInput(input)
        }
    } catch {
        print(error)
    }
}
Create one button in your storyboard.
Now, in your view controller, create one @IBAction function:
@IBAction func changeCamera() {
    self.camera = nil
    self.initializeCamera()
    self.establishVideoPreviewArea()
    if isBackCamera == true {
        isBackCamera = false
        self.camera?.cameraCheck = CameraType.Front
    } else {
        isBackCamera = true
        self.camera?.cameraCheck = CameraType.Back
    }
}
That's it, your goal is achieved.
You can also download the source code from here.
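This answer recreates the whole camera object on every flip. If you would rather swap inputs in place, a rough sketch (not part of the original answer; it assumes session and cameraCheck live on the same camera class and uses Swift 2 style like the code above):
// Hedged sketch: swap the video input without recreating the camera object.
func flipCameraInPlace() {
    session.beginConfiguration()
    for input in session.inputs {
        session.removeInput(input as! AVCaptureInput)   // session.inputs is [AnyObject] in this SDK
    }
    addVideoInput()   // re-runs the answer's input selection based on cameraCheck
    session.commitConfiguration()
}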
You can use a boolean variable isUsingFrontFacingCamera. The first time the camera view loads, set:
isUsingFrontFacingCamera = NO;
Then, on tapping the camera switch button:
-(IBAction)switchCameras:(id)sender {
    AVCaptureDevicePosition desiredPosition;
    if (isUsingFrontFacingCamera)
        desiredPosition = AVCaptureDevicePositionBack;
    else
        desiredPosition = AVCaptureDevicePositionFront;

    for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if ([d position] == desiredPosition) {
            [[captureVideoPreviewLayer session] beginConfiguration];
            AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:d error:nil];
            for (AVCaptureInput *oldInput in [[captureVideoPreviewLayer session] inputs]) {
                [[captureVideoPreviewLayer session] removeInput:oldInput];
            }
            [[captureVideoPreviewLayer session] addInput:input];
            [[captureVideoPreviewLayer session] commitConfiguration];
            break;
        }
    }
    isUsingFrontFacingCamera = !isUsingFrontFacingCamera;
}
Where captureVideoPreviewLayer is declared as:
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer;
Also, you can get the count of your cameras using:
[[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count]
Then show and hide the switch button accordingly.
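In Swift, a minimal sketch of that idea (the switchCameraButton outlet name and the use of the newer DiscoverySession API are my own assumptions) could be:
import AVFoundation
import UIKit

// Hide the flip button unless both a front and a back camera are present.
func updateSwitchCameraButton(_ switchCameraButton: UIButton) {
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                     mediaType: .video,
                                                     position: .unspecified)
    switchCameraButton.isHidden = discovery.devices.count < 2
}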
Best to keep an enum IMO.
private enum CameraPosition {
case Front, Back
}
Then, when you press the button, have
var currentState: CameraPosition
switch to the other camera position.
Then, in the didSet of currentState, configure the camera:
var currentState: CameraPosition {
    didSet {
        configCamera(state: currentState)
    }
}
EDIT: After some more information was provided.
If you change
func addVideoInput() {
    guard let device = self.deviceWithMediaTypeWithPosition(AVMediaTypeVideo, position: AVCaptureDevicePosition.Back) else { return }
    do {
        let input = try AVCaptureDeviceInput(device: device)
        if self.session.canAddInput(input) {
            self.session.addInput(input)
        }
    } catch {
        print(error)
    }
}
To
func addVideoInput(position: AVCaptureDevicePosition) {
    guard let device = self.deviceWithMediaTypeWithPosition(AVMediaTypeVideo, position: position) else { return }
    do {
        let input = try AVCaptureDeviceInput(device: device)
        if self.session.canAddInput(input) {
            self.session.addInput(input)
        }
    } catch {
        print(error)
    }
}
Then when your "CurrentState" changes on the didSet of currentState you can just call the function.
func configCamera(state: CameraPosition) {
    switch state {
    case .Back:
        addVideoInput(.Back)
    case .Front:
        addVideoInput(.Front)
    }
}
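As a rough usage sketch, assuming the button is wired to an action named flipTapped (the name is illustrative) and currentState starts at .Back, toggling the state drives the didSet above:
@IBAction func flipTapped(sender: UIButton) {
    // Flipping currentState triggers didSet -> configCamera -> addVideoInput(position:).
    // Remember to remove the previous video input (inside beginConfiguration/commitConfiguration)
    // before adding the new one, as the other answers here show.
    currentState = (currentState == .Back) ? .Front : .Back
}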
I'm trying to understand what I'm doing wrong in my project.
I'm trying to draw a box over a detected face using the Vision framework.
I first set up the back camera with the following method.
func configureSession() {
    // check whether we received authorization to use the camera, else return
    if setupResult != .success { return }
    var defaultVideoDevice: AVCaptureDevice?
    session.beginConfiguration() // so we can set the configuration
    session.sessionPreset = .vga640x480 // Model image size is smaller.
    do {
        // select the best device to use as input
        if let dualCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
            print("select input tripleCamera")
            defaultVideoDevice = dualCameraDevice
        }
        guard let defaultVideoDevice = defaultVideoDevice else {
            print("error: cannot find any camera in configureSession")
            return
        }
        let videoDeviceInput = try AVCaptureDeviceInput(device: defaultVideoDevice)
        // add the input to the session
        if session.canAddInput(videoDeviceInput) {
            session.addInput(videoDeviceInput)
            self.videoDeviceInput = videoDeviceInput
        } else {
            print("Could not add video device input to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        } // end add input
    } catch let error {
        print("Could not set input device on the session, error: \(error.localizedDescription)")
        setupResult = .configurationFailed
        session.commitConfiguration()
        return
    }
    //----- add output
    if session.canAddOutput(videoDataOutput) {
        session.addOutput(videoDataOutput)
        // Add a video data output
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
        videoDataOutput.setSampleBufferDelegate(self, queue: sessionQueue)
    } else {
        print("Could not add video data output to the session")
        session.commitConfiguration()
        return
    }
    guard let captureConnection = videoDataOutput.connection(with: .video) else { return }
    captureConnection.videoOrientation = .portrait //< DO I NEED TO CHANGE THIS??----------
    captureConnection.isEnabled = true
    if captureConnection.isVideoOrientationSupported {
        print("capture connection orient \(captureConnection.videoOrientation.rawValue) / 3 landscape right")
    }
    // get the buffer size
    do {
        try defaultVideoDevice!.lockForConfiguration()
        let dimensions = CMVideoFormatDescriptionGetDimensions((defaultVideoDevice?.activeFormat.formatDescription)!)
        bufferSize.width = CGFloat(dimensions.width)
        bufferSize.height = CGFloat(dimensions.height)
        defaultVideoDevice!.unlockForConfiguration()
    } catch {
        print("// get the buffer size ERROR \(error.localizedDescription)")
    }
    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapAction))
    cameraView.addGestureRecognizer(tapGesture)
    // setting up the view to show
    cameraView.videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    session.commitConfiguration()
    cameraView.session = session
    rootLayer = cameraView.videoPreviewLayer
    guard let conn = self.cameraView.videoPreviewLayer.connection else { return }
    print("cameraView conn video orient \(conn.videoOrientation.rawValue)")
}
First question:
How do I need to set captureConnection.videoOrientation? I can't understand how this needs to be set.
My idea is to use the phone in both portrait and landscape.
Second question:
When I use Vision, how do I need to set the orientation in the handler?
I tried to use a method from an Apple example, exifOrientationFromDeviceOrientation(),
but it is completely wrong in my case.
It only works correctly if I set the orientation to leftMirrored...
but why leftMirrored, since I'm using the back camera as input? All the other settings give me the wrong box position.
var faceLayersArray: [CAShapeLayer] = []

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    self.sessionQueue.async {
        let faceRequest = VNDetectFaceLandmarksRequest { req, err in
            DispatchQueue.main.async {
                self.faceLayersArray.forEach { layer in
                    layer.removeFromSuperlayer()
                }
                if let result = req.results as? [VNFaceObservation], result.count > 0 {
                    self.handleFace(observation: result)
                } else {
                }
            }
        }
        let exifOrientation = self.exifOrientationFromDeviceOrientation()
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }
        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: [:])
        do {
            try imageRequestHandler.perform([faceRequest])
        } catch {
            print("Error sequence handler \(error)")
        }
    }
}
func handleFace(observation: [VNFaceObservation]) {
    for observation in observation {
        let boundBoxFace = observation.boundingBox
        let faceRectConverted = self.cameraView.videoPreviewLayer.layerRectConverted(fromMetadataOutputRect: boundBoxFace)
        let faceRectPath = CGPath(rect: faceRectConverted, transform: nil)
        let faceLayer = CAShapeLayer()
        faceLayer.path = faceRectPath
        faceLayer.fillColor = UIColor.clear.cgColor
        faceLayer.strokeColor = UIColor.yellow.cgColor
        faceLayersArray.append(faceLayer)
        self.cameraView.videoPreviewLayer.addSublayer(faceLayer)
    }
}
}
// from apple
public func exifOrientationFromDeviceOrientation() -> CGImagePropertyOrientation {
let curDeviceOrientation = UIDevice.current.orientation
let exifOrientation: CGImagePropertyOrientation
switch curDeviceOrientation {
case UIDeviceOrientation.portraitUpsideDown: // Device oriented vertically, home button on the top
exifOrientation = .left
case UIDeviceOrientation.landscapeLeft: // Device oriented horizontally, home button on the right
exifOrientation = .upMirrored
case UIDeviceOrientation.landscapeRight: // Device oriented horizontally, home button on the left
exifOrientation = .down
case UIDeviceOrientation.portrait: // Device oriented vertically, home button on the bottom
exifOrientation = .up
default:
exifOrientation = .up
}
return exifOrientation
}
Using this tutorial here: http://www.musicalgeometry.com/?p=1297 I have created a custom overlay and image capture with AVCaptureSession.
I am attempting to allow the user to switch between the front and back camera. Here is my code in CaptureSessionManager to switch cameras:
- (void)addVideoInputFrontCamera:(BOOL)front {
NSArray *devices = [AVCaptureDevice devices];
AVCaptureDevice *frontCamera;
AVCaptureDevice *backCamera;
for (AVCaptureDevice *device in devices) {
//NSLog(#"Device name: %#", [device localizedName]);
if ([device hasMediaType:AVMediaTypeVideo]) {
if ([device position] == AVCaptureDevicePositionBack) {
//NSLog(#"Device position : back");
backCamera = device;
}
else {
//NSLog(#"Device position : front");
frontCamera = device;
}
}
}
NSError *error = nil;
if (front) {
AVCaptureDeviceInput *frontFacingCameraDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:frontCamera error:&error];
if (!error) {
if ([[self captureSession] canAddInput:frontFacingCameraDeviceInput]) {
[[self captureSession] addInput:frontFacingCameraDeviceInput];
} else {
NSLog(#"Couldn't add front facing video input");
}
}
} else {
AVCaptureDeviceInput *backFacingCameraDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
if (!error) {
if ([[self captureSession] canAddInput:backFacingCameraDeviceInput]) {
[[self captureSession] addInput:backFacingCameraDeviceInput];
} else {
NSLog(#"Couldn't add back facing video input");
}
}
}
}
Now in my custom overlay controller I initialize everything like so in viewDidLoad:
[self setCaptureManager:[[CaptureSessionManager alloc] init]];
[[self captureManager] addVideoInputFrontCamera:NO]; // set to YES for Front Camera, No for Back camera
[[self captureManager] addStillImageOutput];
[[self captureManager] addVideoPreviewLayer];
CGRect layerRect = [[[self view] layer] bounds];
[[[self captureManager] previewLayer] setBounds:layerRect];
[[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect),CGRectGetMidY(layerRect))];
[[[self view] layer] addSublayer:[[self captureManager] previewLayer]];
[[_captureManager captureSession] startRunning];
The switch camera button is connected to a method called switchCameraView:. I have tried this:
- (void)switchCameraView:(id)sender {
[[self captureManager] addVideoInputFrontCamera:YES]; // set to YES for Front Camera, No for Back camera
}
When calling this, I get the error NSLog output from the CaptureSessionManager and I cannot figure out why. In viewDidLoad, if I set the front camera flag to YES, it shows the front camera but cannot switch to the back, and vice versa.
Any ideas on how to get it to switch properly?
You first need to remove the existing AVCaptureDeviceInput from the AVCaptureSession and then add a new AVCaptureDeviceInput for the other camera. The following works for me (under ARC):
-(IBAction)switchCameraTapped:(id)sender
{
//Change camera source
if(_captureSession)
{
//Indicate that some changes will be made to the session
[_captureSession beginConfiguration];
//Remove existing input
AVCaptureInput* currentCameraInput = [_captureSession.inputs objectAtIndex:0];
[_captureSession removeInput:currentCameraInput];
//Get new input
AVCaptureDevice *newCamera = nil;
if(((AVCaptureDeviceInput*)currentCameraInput).device.position == AVCaptureDevicePositionBack)
{
newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
}
else
{
newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
}
//Add input to session
NSError *err = nil;
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:newCamera error:&err];
if(!newVideoInput || err)
{
NSLog(#"Error creating capture device input: %#", err.localizedDescription);
}
else
{
[_captureSession addInput:newVideoInput];
}
//Commit all the configuration changes at once
[_captureSession commitConfiguration];
}
}
// Find a camera with the specified AVCaptureDevicePosition, returning nil if one is not found
- (AVCaptureDevice *) cameraWithPosition:(AVCaptureDevicePosition) position
{
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
if ([device position] == position) return device;
}
return nil;
}
Swift 4/5
@IBAction func switchCameraTapped(sender: Any) {
//Change camera source
if let session = captureSession {
//Remove existing input
guard let currentCameraInput: AVCaptureInput = session.inputs.first else {
return
}
//Indicate that some changes will be made to the session
session.beginConfiguration()
session.removeInput(currentCameraInput)
//Get new input
var newCamera: AVCaptureDevice! = nil
if let input = currentCameraInput as? AVCaptureDeviceInput {
if (input.device.position == .back) {
newCamera = cameraWithPosition(position: .front)
} else {
newCamera = cameraWithPosition(position: .back)
}
}
//Add input to session
var err: NSError?
var newVideoInput: AVCaptureDeviceInput!
do {
newVideoInput = try AVCaptureDeviceInput(device: newCamera)
} catch let err1 as NSError {
err = err1
newVideoInput = nil
}
if newVideoInput == nil || err != nil {
print("Error creating capture device input: \(err?.localizedDescription)")
} else {
session.addInput(newVideoInput)
}
//Commit all the configuration changes at once
session.commitConfiguration()
}
}
// Find a camera with the specified AVCaptureDevicePosition, returning nil if one is not found
func cameraWithPosition(position: AVCaptureDevice.Position) -> AVCaptureDevice? {
let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .unspecified)
for device in discoverySession.devices {
if device.position == position {
return device
}
}
return nil
}
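Note that newCamera is implicitly unwrapped above, so on hardware with only one camera (or the Simulator, which has none) cameraWithPosition(position:) can return nil and the AVCaptureDeviceInput call will crash. A defensive sketch (illustrative only, the helper name is my own) is to look up the opposite camera first and bail out if it doesn't exist:
import AVFoundation

// Illustrative helper: returns the camera opposite to the current input, or nil
// if no such camera exists on this device.
func oppositeCamera(to currentInput: AVCaptureDeviceInput) -> AVCaptureDevice? {
    let wanted: AVCaptureDevice.Position = (currentInput.device.position == .back) ? .front : .back
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                     mediaType: .video,
                                                     position: wanted)
    return discovery.devices.first
}
Inside switchCameraTapped you could then write guard let newCamera = oppositeCamera(to: input) else { session.addInput(currentCameraInput); session.commitConfiguration(); return } so a missing camera never reaches the force unwrap.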
Swift 3 Edit (Combined with François-Julien Alcaraz answer):
@IBAction func switchCameraTapped(sender: Any) {
//Change camera source
if let session = captureSession {
//Indicate that some changes will be made to the session
session.beginConfiguration()
//Remove existing input
guard let currentCameraInput: AVCaptureInput = session.inputs.first as? AVCaptureInput else {
return
}
session.removeInput(currentCameraInput)
//Get new input
var newCamera: AVCaptureDevice! = nil
if let input = currentCameraInput as? AVCaptureDeviceInput {
if (input.device.position == .back) {
newCamera = cameraWithPosition(position: .front)
} else {
newCamera = cameraWithPosition(position: .back)
}
}
//Add input to session
var err: NSError?
var newVideoInput: AVCaptureDeviceInput!
do {
newVideoInput = try AVCaptureDeviceInput(device: newCamera)
} catch let err1 as NSError {
err = err1
newVideoInput = nil
}
if newVideoInput == nil || err != nil {
print("Error creating capture device input: \(err?.localizedDescription)")
} else {
session.addInput(newVideoInput)
}
//Commit all the configuration changes at once
session.commitConfiguration()
}
}
// Find a camera with the specified AVCaptureDevicePosition, returning nil if one is not found
func cameraWithPosition(position: AVCaptureDevicePosition) -> AVCaptureDevice? {
if let discoverySession = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .unspecified) {
for device in discoverySession.devices {
if device.position == position {
return device
}
}
}
return nil
}
Swift version of @NES_4Life's answer:
@IBAction func switchCameraTapped(sender: AnyObject) {
//Change camera source
if let session = captureSession {
//Indicate that some changes will be made to the session
session.beginConfiguration()
//Remove existing input
let currentCameraInput:AVCaptureInput = session.inputs.first as! AVCaptureInput
session.removeInput(currentCameraInput)
//Get new input
var newCamera:AVCaptureDevice! = nil
if let input = currentCameraInput as? AVCaptureDeviceInput {
if (input.device.position == .Back)
{
newCamera = cameraWithPosition(.Front)
}
else
{
newCamera = cameraWithPosition(.Back)
}
}
//Add input to session
var err: NSError?
var newVideoInput: AVCaptureDeviceInput!
do {
newVideoInput = try AVCaptureDeviceInput(device: newCamera)
} catch let err1 as NSError {
err = err1
newVideoInput = nil
}
if(newVideoInput == nil || err != nil)
{
print("Error creating capture device input: \(err!.localizedDescription)")
}
else
{
session.addInput(newVideoInput)
}
//Commit all the configuration changes at once
session.commitConfiguration()
}
}
// Find a camera with the specified AVCaptureDevicePosition, returning nil if one is not found
func cameraWithPosition(position: AVCaptureDevicePosition) -> AVCaptureDevice?
{
let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
for device in devices {
let device = device as! AVCaptureDevice
if device.position == position {
return device
}
}
return nil
}
Based on previous answers, I made my own version with some validations and one specific change: the current camera input might not be the first object in the capture session's inputs, so I changed this:
//Remove existing input
AVCaptureInput* currentCameraInput = [self.captureSession.inputs objectAtIndex:0];
[self.captureSession removeInput:currentCameraInput];
To this (which finds and removes the video input regardless of where it sits in the inputs array):
for (AVCaptureDeviceInput *input in self.captureSession.inputs) {
if ([input.device hasMediaType:AVMediaTypeVideo]) {
[self.captureSession removeInput:input];
break;
}
}
Here's the entire code:
if (!self.captureSession) return;
[self.captureSession beginConfiguration];
AVCaptureDeviceInput *currentCameraInput;
// Remove current (video) input
for (AVCaptureDeviceInput *input in self.captureSession.inputs) {
if ([input.device hasMediaType:AVMediaTypeVideo]) {
[self.captureSession removeInput:input];
currentCameraInput = input;
break;
}
}
if (!currentCameraInput) return;
// Switch device position
AVCaptureDevicePosition captureDevicePosition = AVCaptureDevicePositionUnspecified;
if (currentCameraInput.device.position == AVCaptureDevicePositionBack) {
captureDevicePosition = AVCaptureDevicePositionFront;
} else {
captureDevicePosition = AVCaptureDevicePositionBack;
}
// Select new camera
AVCaptureDevice *newCamera;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *captureDevice in devices) {
if (captureDevice.position == captureDevicePosition) {
newCamera = captureDevice;
}
}
if (!newCamera) return;
// Add new camera input
NSError *error;
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:newCamera error:&error];
if (!error && [self.captureSession canAddInput:newVideoInput]) {
[self.captureSession addInput:newVideoInput];
}
[self.captureSession commitConfiguration];
Swift 3
func switchCamera() {
    session?.beginConfiguration()
    guard let currentInput = session?.inputs.first as? AVCaptureDeviceInput else { return }
    session?.removeInput(currentInput)
    let newCameraDevice = currentInput.device.position == .back ? getCamera(with: .front) : getCamera(with: .back)
    if let newCameraDevice = newCameraDevice, let newVideoInput = try? AVCaptureDeviceInput(device: newCameraDevice) {
        session?.addInput(newVideoInput)
    }
    session?.commitConfiguration()
}
// MARK: - Private
extension CameraService {
func getCamera(with position: AVCaptureDevicePosition) -> AVCaptureDevice? {
guard let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] else {
return nil
}
return devices.filter {
$0.position == position
}.first
}
}
Swift 4
You can check full implementation in this gist
Here is an updated version of chengsam's code that includes the fix for 'Multiple audio/video AVCaptureInputs are not currently supported'.
func switchCameraTapped() {
//Change camera source
//Indicate that some changes will be made to the session
session.beginConfiguration()
//Remove existing input
guard let currentCameraInput: AVCaptureInput = session.inputs.first else {
return
}
//Get new input
var newCamera: AVCaptureDevice! = nil
if let input = currentCameraInput as? AVCaptureDeviceInput {
if (input.device.position == .back) {
newCamera = cameraWithPosition(position: .front)
} else {
newCamera = cameraWithPosition(position: .back)
}
}
//Add input to session
var err: NSError?
var newVideoInput: AVCaptureDeviceInput!
do {
newVideoInput = try AVCaptureDeviceInput(device: newCamera)
} catch let err1 as NSError {
err = err1
newVideoInput = nil
}
if let inputs = session.inputs as? [AVCaptureDeviceInput] {
for input in inputs {
session.removeInput(input)
}
}
if newVideoInput == nil || err != nil {
print("Error creating capture device input: \(err?.localizedDescription)")
} else {
session.addInput(newVideoInput)
}
//Commit all the configuration changes at once
session.commitConfiguration()
}
// Find a camera with the specified AVCaptureDevicePosition, returning nil if one is not found
func cameraWithPosition(position: AVCaptureDevice.Position) -> AVCaptureDevice? {
let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .unspecified)
for device in discoverySession.devices {
if device.position == position {
return device
}
}
return nil
}
Swift 3 version of cameraWithPosition without deprecated warning :
// Find a camera with the specified AVCaptureDevicePosition, returning nil if one is not found
func cameraWithPosition(_ position: AVCaptureDevicePosition) -> AVCaptureDevice?
{
if let deviceDescoverySession = AVCaptureDeviceDiscoverySession.init(deviceTypes: [AVCaptureDeviceType.builtInWideAngleCamera],
mediaType: AVMediaTypeVideo,
position: AVCaptureDevicePosition.unspecified) {
for device in deviceDescoverySession.devices {
if device.position == position {
return device
}
}
}
return nil
}
If you want, you can also get the newer device types from the iPhone 7 Plus (dual camera) by changing the deviceTypes array.
Here's a good read : https://forums.developer.apple.com/thread/63347
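A sketch of what that could look like with the Swift 4-style names (device types the hardware doesn't have are simply left out of the result):
import AVFoundation

// Ask for the wide-angle, telephoto and dual cameras; unavailable types are omitted.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera, .builtInDualCamera],
    mediaType: .video,
    position: .unspecified)
let cameras = discovery.devices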
Hi, I am getting this error.
It should be because of this code (it is supposed to switch between the front and back camera in my custom camera). I am able to take a picture and everything works fine except for this code...
@IBAction func switchCamera(sender: UIButton) {
var session:AVCaptureSession!
let currentCameraInput: AVCaptureInput = session.inputs[0] as! AVCaptureInput
session.removeInput(currentCameraInput)
do {
let newCamera: AVCaptureDevice?
if(captureDevice!.position == AVCaptureDevicePosition.Back){
print("Setting new camera with Front")
newCamera = self.cameraWithPosition(AVCaptureDevicePosition.Front)
} else {
print("Setting new camera with Back")
newCamera = self.cameraWithPosition(AVCaptureDevicePosition.Back)
}
let error = NSError?()
let newVideoInput = try AVCaptureDeviceInput(device: newCamera)
if (error == nil && captureSession?.canAddInput(newVideoInput) != nil) {
session.addInput(newVideoInput)
} else {
print("Error creating capture device input")
}
session.commitConfiguration()
captureDevice! = newCamera!
} catch let error as NSError {
// Handle any errors
print(error)
}
}
Thanks.
I have a UIButton to activate and deactivate the flash. I need to hide the button when I switch to the front camera, as the front camera doesn't have a flash, and unhide it when I switch back to the back camera. Appreciate your help.
@IBAction func changeCamera(sender: AnyObject) {
dispatch_async(self.sessionQueue, {
let currentVideoDevice:AVCaptureDevice = self.videoDeviceInput!.device
let currentPosition: AVCaptureDevicePosition = currentVideoDevice.position
var preferredPosition: AVCaptureDevicePosition = AVCaptureDevicePosition.Unspecified
switch currentPosition{
case AVCaptureDevicePosition.Front:
preferredPosition = AVCaptureDevicePosition.Back
case AVCaptureDevicePosition.Back:
preferredPosition = AVCaptureDevicePosition.Front
case AVCaptureDevicePosition.Unspecified:
preferredPosition = AVCaptureDevicePosition.Back
}
let device:AVCaptureDevice = takePhotoScreen.deviceWithMediaType(AVMediaTypeVideo, preferringPosition: preferredPosition)
var videoDeviceInput: AVCaptureDeviceInput?
do {
videoDeviceInput = try AVCaptureDeviceInput(device: device)
} catch _ as NSError {
videoDeviceInput = nil
} catch {
fatalError()
}
self.session!.beginConfiguration()
self.session!.removeInput(self.videoDeviceInput)
if self.session!.canAddInput(videoDeviceInput){
NSNotificationCenter.defaultCenter().removeObserver(self, name:AVCaptureDeviceSubjectAreaDidChangeNotification, object:currentVideoDevice)
takePhotoScreen.setFlashMode(AVCaptureFlashMode.Auto, device: device)
NSNotificationCenter.defaultCenter().addObserver(self, selector: "subjectAreaDidChange:", name: AVCaptureDeviceSubjectAreaDidChangeNotification, object: device)
self.session!.addInput(videoDeviceInput)
self.videoDeviceInput = videoDeviceInput
}else{
self.session!.addInput(self.videoDeviceInput)
}
self.session!.commitConfiguration()
dispatch_async(dispatch_get_main_queue(), {
self.snapButton.enabled = true
self.cameraButton.enabled = true
})
})
}
@IBAction func toggleTorch(sender: AnyObject) {
let device = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
if (device.hasTorch) {
do {
try device.lockForConfiguration()
if (device.torchMode == AVCaptureTorchMode.On) {
device.torchMode = AVCaptureTorchMode.Off
} else {
do {
try device.setTorchModeOnWithLevel(1.0)
} catch {
print(error)
}
}
device.unlockForConfiguration()
} catch {
print(error)
}
} }
You can write your hide/unhide code here:
switch currentPosition {
case AVCaptureDevicePosition.Front:
    preferredPosition = AVCaptureDevicePosition.Back
    // UNHIDE FLASH BUTTON HERE
case AVCaptureDevicePosition.Back:
    preferredPosition = AVCaptureDevicePosition.Front
    // HIDE FLASH BUTTON HERE
case AVCaptureDevicePosition.Unspecified:
    preferredPosition = AVCaptureDevicePosition.Back
}
P.S. I don't know Swift syntax in detail, but this should point you in the right direction. Unlike C, Swift switch cases don't fall through, so you don't need a break at the end of each case.
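In Swift 2 syntax, and assuming the flash button outlet is called flashButton (the name is an assumption), the hide/unhide itself could look roughly like this; the UI change is dispatched to the main queue because changeCamera runs on sessionQueue:
// Hedged sketch: toggle the flash button together with the camera position.
dispatch_async(dispatch_get_main_queue(), {
    // The front camera has no flash, so hide the button when switching to it.
    self.flashButton.hidden = (preferredPosition == AVCaptureDevicePosition.Front)
})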
I was looking for a way to turn the iPhone's camera flash on/off and I found this:
@IBAction func didTouchFlashButton(sender: AnyObject) {
let avDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
// check if the device has torch
if avDevice.hasTorch {
// lock your device for configuration
avDevice.lockForConfiguration(nil)
// check if your torchMode is on or off. If on turns it off otherwise turns it on
if avDevice.torchActive {
avDevice.torchMode = AVCaptureTorchMode.Off
} else {
// sets the torch intensity to 100%
avDevice.setTorchModeOnWithLevel(1.0, error: nil)
}
// unlock your device
avDevice.unlockForConfiguration()
}
}
I do get 2 issues, one on the line:
avDevice.lockForConfiguration(nil)
and the other on the line:
avDevice.setTorchModeOnWithLevel(1.0, error:nil)
both of them are related to exception handling but I don't know how to resolve them.
@IBAction func didTouchFlashButton(sender: UIButton) {
let avDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
// check if the device has torch
if avDevice.hasTorch {
// lock your device for configuration
do {
let abv = try avDevice.lockForConfiguration()
} catch {
print("aaaa")
}
// check if your torchMode is on or off. If on turns it off otherwise turns it on
if avDevice.torchActive {
avDevice.torchMode = AVCaptureTorchMode.Off
} else {
// sets the torch intensity to 100%
do {
let abv = try avDevice.setTorchModeOnWithLevel(1.0)
} catch {
print("bbb")
}
// avDevice.setTorchModeOnWithLevel(1.0, error: nil)
}
// unlock your device
avDevice.unlockForConfiguration()
}
}
Swift 4 version, adapted from Ivan Slavov's answer. "TorchMode.auto" is also an option if you want to get fancy.
@IBAction func didTouchFlashButton(_ sender: Any) {
if let avDevice = AVCaptureDevice.default(for: AVMediaType.video) {
if (avDevice.hasTorch) {
do {
try avDevice.lockForConfiguration()
} catch {
print("aaaa")
}
if avDevice.isTorchActive {
avDevice.torchMode = AVCaptureDevice.TorchMode.off
} else {
avDevice.torchMode = AVCaptureDevice.TorchMode.on
}
            // unlock your device (only locked when the torch exists)
            avDevice.unlockForConfiguration()
        }
    }
}
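If you do want the .auto option mentioned above, a small hedged addition (placed inside the lockForConfiguration/unlockForConfiguration section) is to check support first:
// Sketch: prefer automatic torch control when the hardware supports it.
if avDevice.isTorchModeSupported(.auto) {
    avDevice.torchMode = .auto
}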
Swift 5.4 &
Xcode 12.4 &
iOS 14.4.2
@objc private func flashEnableButtonAction() {
guard let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) else {
return
}
if captureDevice.hasTorch {
do {
let _: () = try captureDevice.lockForConfiguration()
} catch {
print("aaaa")
}
if captureDevice.isTorchActive {
captureDevice.torchMode = AVCaptureDevice.TorchMode.off
} else {
do {
let _ = try captureDevice.setTorchModeOn(level: 1.0)
} catch {
print("bbb")
}
}
captureDevice.unlockForConfiguration()
}
}
For some reason avDevice.torchActive is always false, even when the torch is on, making it impossible to turn off. I fixed it by declaring a boolean, initially set to false, that is set to true every time the flash turns on.
var on: Bool = false
@IBAction func didTouchFlashButton(sender: UIButton) {
let avDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
// check if the device has torch
if avDevice.hasTorch {
// lock your device for configuration
do {
let abv = try avDevice.lockForConfiguration()
} catch {
print("aaaa")
}
// check if your torchMode is on or off. If on turns it off otherwise turns it on
if on == true {
avDevice.torchMode = AVCaptureTorchMode.Off
on = false
} else {
// sets the torch intensity to 100%
do {
let abv = try avDevice.setTorchModeOnWithLevel(1.0)
on = true
} catch {
print("bbb")
}
// avDevice.setTorchModeOnWithLevel(1.0, error: nil)
}
// unlock your device
avDevice.unlockForConfiguration()
}
}
import AVFoundation
var videoDeviceInput: AVCaptureDeviceInput?
var movieFileOutput: AVCaptureMovieFileOutput?
var stillImageOutput: AVCaptureStillImageOutput?
Add a class method to ViewController.
class func setFlashMode(flashMode: AVCaptureFlashMode, device: AVCaptureDevice){
if device.hasFlash && device.isFlashModeSupported(flashMode) {
var error: NSError? = nil
do {
try device.lockForConfiguration()
device.flashMode = flashMode
device.unlockForConfiguration()
} catch let error1 as NSError {
error = error1
print(error)
}
}
}
Check the flashmode status.
// Flash set to Auto/Off for Still Capture (AVCaptureFlashMode raw values: Off = 0, On = 1, Auto = 2)
print("flashMode.rawValue : \(self.videoDeviceInput!.device.flashMode.rawValue)")
if(self.videoDeviceInput!.device.flashMode.rawValue == 1)
{
CameraViewController.setFlashMode(AVCaptureFlashMode.On, device: self.videoDeviceInput!.device)
}
else if (self.videoDeviceInput!.device.flashMode.rawValue == 2)
{
CameraViewController.setFlashMode(AVCaptureFlashMode.Auto, device: self.videoDeviceInput!.device)
}
else
{
CameraViewController.setFlashMode(AVCaptureFlashMode.Off, device: self.videoDeviceInput!.device)
}
Another short way is to do this:
let devices = AVCaptureDevice.devices()
guard let device = devices.first, device.isTorchAvailable else { return }
do {
    try device.lockForConfiguration()
    if device.torchMode == .on {
        device.torchMode = .off
    } else {
        device.torchMode = .on
    }
    device.unlockForConfiguration()
} catch {
    debugPrint(error)
}