I have a weird crash showing up in Crashlytics when setting up a camera session.
The stack trace shows the crash occurring in the addInput method.
func setupCamSession() {
    self.captureSession = AVCaptureSession()
    self.cameraView.setSession(self.captureSession)
    self.sessionQueue = dispatch_queue_create("com.myapp.camera_queue", DISPATCH_QUEUE_SERIAL)
    self.setupResult = .Success
    switch AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo) {
    case .Authorized:
        break // setupResult is already .Success
    case .NotDetermined:
        // The user has not yet been presented with the option to grant video access.
        // Suspend the session queue to delay session setup until the access request has completed.
        dispatch_suspend(self.sessionQueue)
        AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted) -> Void in
            if !granted {
                self.setupResult = .CameraNotAuthorized
            }
            dispatch_resume(self.sessionQueue)
        })
    default:
        self.setupResult = .CameraNotAuthorized
    }
    dispatch_async(self.sessionQueue) {
        if self.setupResult != .Success {
            return
        }
        // Link input to captureSession.
        guard let videoDevice = self.deviceWithMediaType(AVMediaTypeVideo, position: AVCaptureDevicePosition.Back) else {
            AppLog("Video Device Unavailable")
            self.setupResult = .SessionConfigurationFailed
            return
        }
        var videoDeviceInput: AVCaptureDeviceInput!
        do {
            videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
        } catch {
            AppLog("Could not create video device input")
            // Bail out here; otherwise videoDeviceInput stays nil and the
            // canAddInput/addInput calls below hit an unwrapped nil.
            self.setupResult = .SessionConfigurationFailed
            return
        }
        /////////////////////////////////////////////////////
        self.captureSession.beginConfiguration()
        if self.captureSession.canAddInput(videoDeviceInput) {
            self.captureSession.addInput(videoDeviceInput)
            self.videoDeviceInput = videoDeviceInput
            self.videoDevice = videoDevice
            dispatch_async(dispatch_get_main_queue()) {
                // Update the cameraView layer on the main thread.
                let previewLayer: AVCaptureVideoPreviewLayer = self.cameraView.layer as! AVCaptureVideoPreviewLayer
                previewLayer.connection.videoOrientation = AVCaptureVideoOrientation(ui: UIApplication.sharedApplication().statusBarOrientation)
            }
        } else {
            AppLog("Could not add video device input to the session")
            self.setupResult = .SessionConfigurationFailed
        }
        // Link output to captureSession.
        let stillImageOutput = AVCaptureStillImageOutput()
        if self.captureSession.canAddOutput(stillImageOutput) {
            self.captureSession.addOutput(stillImageOutput)
            self.stillImageOutput = stillImageOutput
            stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        } else {
            AppLog("Could not add still image output to the session")
            self.setupResult = .SessionConfigurationFailed
        }
        self.captureSession.commitConfiguration()
        /////////////////////////////////////////////////////
    }
}
func runSession() {
    dispatch_async(self.sessionQueue) {
        switch self.setupResult! {
        case .Success:
            self.videoDeviceInput!.device.addObserver(self, forKeyPath: "adjustingFocus", options: NSKeyValueObservingOptions.New, context: nil)
            self.captureSession.addObserver(self, forKeyPath: "running", options: [.New], context: &SessionRunningContext)
            self.captureSession.startRunning()
            self.captureSessionRunning = self.captureSession.running
            if !self.captureSessionRunning {
                self.captureSession.removeObserver(self, forKeyPath: "running", context: &SessionRunningContext)
                self.videoDeviceInput?.device?.removeObserver(self, forKeyPath: "adjustingFocus", context: nil)
            }
        default:
            // Handle errors.
            break
        }
    }
}

func stopCaptureSession() {
    dispatch_async(self.sessionQueue) {
        if self.setupResult == .Success {
            if self.captureSessionRunning {
                self.captureSession.stopRunning()
                self.videoDeviceInput?.device?.removeObserver(self, forKeyPath: "adjustingFocus", context: nil)
                self.captureSession.removeObserver(self, forKeyPath: "running", context: &SessionRunningContext)
            }
            self.captureSessionRunning = false
        }
    }
}
setupCamSession is called in viewDidLoad, runSession in viewWillAppear, and stopCaptureSession in viewWillDisappear. Everything related to the camera session is dispatched on a background serial queue.
The crash doesn't happen 100% of the time and I am unable to reproduce the crash on the device I use.
Thanks
Make sure you are removing observers in deinit. I saw this crash when coming back to the camera capture screen because I hadn't removed the observer for adjustingFocus. Once I removed it in deinit, all was well.
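For example, a minimal sketch of that cleanup, assuming the same key paths and SessionRunningContext as in the question (whether the observers are still registered at deinit time depends on your flow, hence the flag check):

deinit {
    // Mirror the addObserver calls made in runSession; removing an observer
    // that was never added raises an exception, so reuse the running flag.
    if captureSessionRunning {
        videoDeviceInput?.device?.removeObserver(self, forKeyPath: "adjustingFocus", context: nil)
        captureSession.removeObserver(self, forKeyPath: "running", context: &SessionRunningContext)
    }
}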
Had the same problem. It was resolved after adding a usage description for Privacy - Camera Usage Description in the Info.plist file. This answer contains tips on how to set up the description:
Request Permission for Camera and Library in iOS 10 - Info.plist
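For reference, that entry is the NSCameraUsageDescription key; in Info.plist source form it looks like this (the description string here is just an example):

<key>NSCameraUsageDescription</key>
<string>This app uses the camera to take photos.</string>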
The main issue here is that when the microphone input is added, the device loses haptics and system sounds.
I set up the audio session here:
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playAndRecord, options: .mixWithOthers)
    try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
    try audioSession.setActive(true)
} catch { }
I make sure I am using setAllowHapticsAndSystemSoundsDuringRecording.
Throughout the app, I add and remove the microphone input on demand:
do {
    let microphonePermission = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    if microphonePermission != .denied && microphonePermission != .restricted {
        let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
        let audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice!)
        if self.session.canAddInput(audioDeviceInput) {
            self.session.addInput(audioDeviceInput)
        } else {
            print("Could not add audio device input to the session.")
        }
    }
} catch {
    print("Could not create audio device input: \(error).")
}
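The removal side isn't shown above; a minimal sketch of it, assuming the same session property:

// Remove any existing audio inputs from the session on demand.
for input in self.session.inputs {
    if let deviceInput = input as? AVCaptureDeviceInput, deviceInput.device.hasMediaType(.audio) {
        self.session.removeInput(deviceInput)
    }
}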
As soon as the microphone is added, it loses haptic feedback and system sounds.
I'm using AVCaptureSession to capture video.
I want the torch to stay on during the whole session, but once the session is started, the light turns off automatically.
There are a lot of posts here showing how to turn on the torch. They work, unless the capture session is started.
Here's the way I start the session:
guard let camera = AVCaptureDevice.default(for: .video) else { return }

self.captureSession.beginConfiguration()
let deviceInput = try AVCaptureDeviceInput(device: camera)
self.captureSession.addInput(deviceInput)

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "com.axelife.axcapturer.samplebufferdelegate"))
self.captureSession.addOutput(videoOutput)

try camera.setLight(on: true)
self.captureSession.commitConfiguration()

DispatchQueue(label: "capturesession").async {
    self.captureSession.startRunning()
}
And my code to turn on the light:
extension AVCaptureDevice {
    func setLight(on: Bool) throws {
        try self.lockForConfiguration()
        if on {
            try self.setTorchModeOn(level: 1)
        } else {
            self.torchMode = .off
        }
        self.unlockForConfiguration()
    }
}
With that code, the light turns on for less than 0.5 seconds, then turns back off automatically.
OK, I figured it out.
The torch simply must be turned on after the session has started.
So instead of:
try camera.setLight(on: true)
self.captureSession.commitConfiguration()
DispatchQueue(label: "capturesession").async {
    self.captureSession.startRunning()
}
just do:
self.captureSession.commitConfiguration()
DispatchQueue(label: "capturesession").async {
    self.captureSession.startRunning()
    // The async closure is not a throwing context, so handle the error here.
    try? camera.setLight(on: true)
}
For me, the torch turns off when I change the videoOrientation of the capture preview layer, so I turn it on again after that happens. (The DispatchQueue.main.async is important; for some reason it doesn't work if I leave it out.)
previewConnection.videoOrientation = orientation
do {
    try captureDevice.lockForConfiguration()
} catch {
    print(error)
}
DispatchQueue.main.async { [weak self] in
    self?.captureDevice.torchMode = .on
    self?.captureDevice.unlockForConfiguration()
}
Of course with iOS 10, you now have to add a Privacy - Camera Usage Description (NSCameraUsageDescription) entry to Info.plist to use the phone's camera. On first launch, the user gets the permission alert, grants access, and all is well.
BUT we have a client app that is a "camera app": when you launch the app, it immediately launches the camera; while the app is running, the camera is running and shown fullscreen. The code to do so is the usual way; see below.
The problem is the first launch of the app on a phone: the user is asked the permission question and says yes. But then the camera is just black on the devices we have tried. It does not crash (as it would if you forgot the plist item), but it goes black and stays black.
If the user quits the app and launches it again, it's fine; everything works.
What the heck is the workflow for a "camera app"? I can't see a good solution, but there must be one for the various camera apps out there, which immediately go to the fullscreen camera when you launch the app.
class CameraPlane: UIViewController
{
    ...

    func cameraBegin()
    {
        captureSession = AVCaptureSession()
        captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
        let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
        }
        catch let error1 as NSError
        {
            error = error1
            input = nil
        }
        if ( error != nil )
        {
            print("probably on simulator? no camera?")
            return
        }
        if ( captureSession!.canAddInput(input) == false )
        {
            print("capture session problem?")
            return
        }
        captureSession!.addInput(input)

        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if ( captureSession!.canAddOutput(stillImageOutput) == false )
        {
            print("capture session with stillImageOutput problem?")
            return
        }
        captureSession!.addOutput(stillImageOutput)

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
        // or, AVLayerVideoGravityResizeAspect
        fixConnectionOrientation()
        view.layer.addSublayer(previewLayer!)
        captureSession!.startRunning()
        previewLayer!.frame = view.bounds
    }
}
Note: it's likely OP's code was actually working correctly in terms of the new iOS 10 permission string, and OP had another problem causing the black screen.
From the code you've posted, I can't tell why you experience this kind of behavior. I can only give you the code that is working for me.
This code also runs on iOS 9. Note that I am loading the camera in viewDidAppear to make sure that all constraints are set.
import AVFoundation

class ViewController: UIViewController {
    // The view where the camera feed is shown.
    @IBOutlet weak var cameraView: UIView!

    var captureSession: AVCaptureSession = {
        let session = AVCaptureSession()
        session.sessionPreset = AVCaptureSessionPresetPhoto
        return session
    }()

    var sessionOutput = AVCaptureStillImageOutput()
    var previewLayer = AVCaptureVideoPreviewLayer()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice]
        guard let backCamera = (devices?.first { $0.position == .back }) else {
            print("The back camera is currently not available")
            return
        }

        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                sessionOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
                if captureSession.canAddOutput(sessionOutput) {
                    captureSession.addOutput(sessionOutput)
                    captureSession.startRunning()

                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                    previewLayer.connection.videoOrientation = .portrait
                    cameraView.layer.addSublayer(previewLayer)
                    previewLayer.position = CGPoint(x: cameraView.frame.width / 2, y: cameraView.frame.height / 2)
                    previewLayer.bounds = cameraView.frame
                }
            }
        } catch {
            print("Could not create an AVCaptureDeviceInput: \(error)")
        }
    }
}
If you want the camera to show fullscreen, you can simply use view instead of cameraView. In my camera implementation the camera feed does not cover the entire view; there's still some navigation stuff.
What's happening is that on that first launch, you're activating the camera before iOS can check the permissions and display the appropriate UIAlertController. What you want to do is wrap this code in an if statement that checks the camera permission status (AVAuthorizationStatus), and only show the camera once permission has been granted, asking for it first if needed. See this question for more help.
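A minimal sketch of that check, using the same Swift 3-era API as the question (calling the OP's cameraBegin() here is an assumption about where the setup lives):

let status = AVCaptureDevice.authorizationStatus(forMediaType: AVMediaTypeVideo)
switch status {
case .authorized:
    cameraBegin()
case .notDetermined:
    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
        DispatchQueue.main.async {
            if granted {
                self.cameraBegin()
            }
            // else: leave the camera off and explain why.
        }
    }
default:
    // .denied / .restricted: show an explanation instead of a black preview.
    break
}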
I am using the code provided by https://www.hackingwithswift.com/example-code/media/how-to-scan-a-qr-code to make my own scanning app, but I'd like scanning to occur only on a button press. For this I put the viewDidLoad() part from the tutorial into its own function:
func cameraScanningLayer() {
    view.backgroundColor = UIColor.blackColor()
    captureSession = AVCaptureSession()

    let videoCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    let videoInput: AVCaptureDeviceInput
    do {
        videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
    } catch {
        return
    }

    if captureSession.canAddInput(videoInput) {
        captureSession.addInput(videoInput)
    } else {
        failed()
        return
    }

    let metadataOutput = AVCaptureMetadataOutput()
    if captureSession.canAddOutput(metadataOutput) {
        captureSession.addOutput(metadataOutput)
        metadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
        // Need to scan barcodes + QR codes.
        metadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypePDF417Code, AVMetadataObjectTypeCode128Code, AVMetadataObjectTypeCode39Code]
    } else {
        failed()
        return
    }

    // Preview layer with camera feed.
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.frame = viewForLayer.bounds
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    viewForLayer.layer.addSublayer(previewLayer)

    captureSession.startRunning()
}
And a button action calls the function:
func buttonScanAction() {
    print("Scan")
    scanEnabled = true // like to use some kind of bool/switch
    self.cameraScanningLayer()
}
The problems I have are:
1) On load, the camera is not in view.
2) After the button is pressed, the camera is in view, but it always scans automatically.
So I thought of using a global:
var scanEnabled: Bool = false
Then, when the button is clicked, set it to true so that scanning is enabled.
EDIT
My quick fix, which might not be the right way to do it: I took the `let metadataOutput = AVCaptureMetadataOutput() ... else { failed(); return }` block and wrapped it in an if statement:
if scanEnabled == true {
    let metadataOutput = AVCaptureMetadataOutput()
    if captureSession.canAddOutput(metadataOutput) {
        captureSession.addOutput(metadataOutput)
        metadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
        // To use them both we need to skip AVMetadataObjectTypeQRCode
        metadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypePDF417Code, AVMetadataObjectTypeCode128Code, AVMetadataObjectTypeCode39Code]
        scanEnabled = false
    } else {
        failed()
        return
    }
}
Author of that tutorial here. My method was to use a dedicated scanning view controller, but I guess you want to unify that with your existing view controller – and that's fine. Both approaches work.
If you want to show the camera interface all the time (even when not actively recognising QR codes) then your plan to use a boolean to track whether scanning is enabled is a good one. My example code has a foundCode() method that gets called, and also calls dismissViewControllerAnimated() when codes are found.
In your version, you need to make foundCode() do all the work of stopping the scan, handling the dismissal, etc. You can then add a check for your scanEnabled boolean in one place.
Something like this ought to do it:
func foundCode(code: String) {
    if scanEnabled == true {
        print(code)
        captureSession.stopRunning()
        AudioServicesPlaySystemSound(SystemSoundID(kSystemSoundID_Vibrate))
        dismissViewControllerAnimated(true, completion: nil)
    }
}
If you wanted to, you could move the scanEnabled == true check up to didOutputMetadataObjects to save the unnecessary method call.
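A minimal sketch of that variant, using the Swift 2-era delegate signature from the tutorial:

func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
    // Bail out early unless the user has tapped the scan button.
    guard scanEnabled else { return }
    if let metadataObject = metadataObjects.first as? AVMetadataMachineReadableCodeObject {
        foundCode(metadataObject.stringValue)
    }
}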
Thanks @alex for this question. I also use the excellent class created by twostraws (very useful, Paul, thank you so much) and also needed to implement code scanning triggered only by a button press. My solution was the following:
I define metadataOutput as an instance variable and only set it as a delegate in the button action:
var metadataOutput: AVCaptureMetadataOutput!
In the viewDidLoad method:

metadataOutput = AVCaptureMetadataOutput()
if captureSession.canAddOutput(metadataOutput) {
    captureSession.addOutput(metadataOutput)
    // This line was removed: metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
    metadataOutput.metadataObjectTypes = [.qr]
} else {
    failed()
    return
}
func buttonScanAction() {
    print("Scan")
    metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
}
When I change my view I stop the camera and remove the delegate like this:
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    if captureSession?.isRunning == true {
        captureSession.stopRunning()
    }
    metadataOutput.setMetadataObjectsDelegate(nil, queue: DispatchQueue.main)
}
Hello, I had working code with AVCaptureVideoPreviewLayer; it's part of a barcode reader:
let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
do {
    let input = try AVCaptureDeviceInput(device: captureDevice) as AVCaptureDeviceInput
    session.addInput(input)
    print("input done..")
} catch let error as NSError {
    print(error)
}

let output = AVCaptureMetadataOutput()
output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
session.addOutput(output)
output.metadataObjectTypes = output.availableMetadataObjectTypes

previewLayer = AVCaptureVideoPreviewLayer(session: session) as AVCaptureVideoPreviewLayer
previewLayer.frame = self.view.bounds
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
self.view.layer.addSublayer(previewLayer)
self.view.bringSubviewToFront(self.highlightView)

session.startRunning()
It was starting and running, there wasn't any error message, and on my old iPhone I could see the camera picture. But two days ago my iPhone was replaced; I didn't change anything in the code. Now the app starts but I can see only a black screen.
Does anybody know what could cause this?
Thank you!
This is the code I used to test this out:
override func viewDidLoad() {
    super.viewDidLoad()

    var session = AVCaptureSession()
    session.sessionPreset = AVCaptureSessionPresetMedium

    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    let input = AVCaptureDeviceInput(device: captureDevice, error: nil) as AVCaptureDeviceInput
    session.addInput(input)
    print("input done..")

    var previewLayer = AVCaptureVideoPreviewLayer(session: session) as AVCaptureVideoPreviewLayer
    previewLayer.frame = self.view.bounds
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.view.layer.addSublayer(previewLayer)

    session.startRunning()
}
It appears that the problem was that you did not pass in the NSError required by the AVCaptureDeviceInput initializer; Xcode was also not sure what the try statement was trying to do. I've simply passed nil into the constructor for now, but you will want to make sure that you handle it. I also removed the part that sets up the output, as that was not relevant to my testing. Feel free to add that back in.
I tested this on my iPhone 4 running iOS 7.1 for reference.
From iOS 8 you need to ask for the permission; you can do something like this:
if ([AVCaptureDevice respondsToSelector:@selector(requestAccessForMediaType:completionHandler:)]) {
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        if (granted) {
            // Got the permission
        } else {
            // Permission has been denied.
        }
    }];
} else {
    // iOS 6 or earlier; the permission API is not available.
}
You can perform the same permission request in Swift:

if AVCaptureDevice.respondsToSelector(Selector("requestAccessForMediaType:completionHandler:"))
{
    AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted: Bool) -> Void in
        // Check if granted; if true, do the camera work, else permission was denied.
    })
}
else
{
    // iOS 6 or earlier
}