I am trying to add a rotate-camera function with AVFoundation to allow the user to toggle between the front-facing and back-facing cameras.
As shown in the code below, I've put in some println() statements and all the values seem legit, but the code always drops into the failing else-clause when testing canAddInput().
I've tried setting the sessionPreset (which happens in another function that initializes the session beforehand) to various values, including AVCaptureSessionPresetHigh and AVCaptureSessionPresetLow, but that didn't help.
@IBAction func rotateCameraPressed(sender: AnyObject) {
    // Loop through all the capture devices to find the right ones
    var backCameraDevice: AVCaptureDevice?
    var frontCameraDevice: AVCaptureDevice?
    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Define devices
            if (device.position == AVCaptureDevicePosition.Back) {
                backCameraDevice = device as? AVCaptureDevice
            } else if (device.position == AVCaptureDevicePosition.Front) {
                frontCameraDevice = device as? AVCaptureDevice
            }
        }
    }
    // Assign found devices to the corresponding inputs
    var backInput: AVCaptureDeviceInput?
    var frontInput: AVCaptureDeviceInput?
    var error: NSError?
    if let backDevice = backCameraDevice {
        println("Back device is \(backDevice)")
        backInput = AVCaptureDeviceInput(device: backDevice, error: &error)
    }
    if let frontDevice = frontCameraDevice {
        println("Front device is \(frontDevice)")
        frontInput = AVCaptureDeviceInput(device: frontDevice, error: &error)
    }
    // Now rotate the camera
    isBackCamera = !isBackCamera // toggle camera position
    if isBackCamera {
        // Remove front and add back
        captureSession!.removeInput(frontInput)
        if let bi = backInput {
            println("Back input is \(bi)")
            if captureSession!.canAddInput(bi) {
                captureSession!.addInput(bi)
            } else {
                println("Cannot add back input!")
            }
        }
    } else {
        // Remove back and add front
        captureSession!.removeInput(backInput)
        if let fi = frontInput {
            println("Front input is \(fi)")
            if captureSession!.canAddInput(fi) {
                captureSession!.addInput(fi)
            } else {
                println("Cannot add front input!")
            }
        }
    }
}
The problem turned out to be that the inputs derived from the devices found in the iteration do not actually match the input stored in the captureSession variable. This appears to be new behavior, since all the code I've seen posted about this finds and removes the current camera's input by iterating through the list of devices, as I've done in my code.
That no longer works, at least not in the code I posted, which is based on all the sources I've been able to dig up (all of which happen to be in Objective-C). canAddInput() fails because removeInput() never succeeds; strangely, it doesn't even raise the usual error about not being able to have multiple input devices, which would have helped with the debugging.
Anyway, the fix is to not remove the input derived from the found device (which used to work). Instead, remove the input that is actually present by iterating over captureSession.inputs and calling removeInput() on each of its entries.
To put all that babble into code, here's what I did:
for ii in captureSession!.inputs {
    captureSession!.removeInput(ii as! AVCaptureInput)
}
And that did the trick! :)
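For reference, here is a minimal sketch of the whole toggle built around that fix, written in current Swift (AVCaptureDevice.DiscoverySession requires iOS 10+; the function name and parameters are my own):
func toggleCamera(session: AVCaptureSession, toFront: Bool) {
    let position: AVCaptureDevice.Position = toFront ? .front : .back
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera],
        mediaType: .video,
        position: position)
    guard let device = discovery.devices.first,
          let newInput = try? AVCaptureDeviceInput(device: device) else { return }
    session.beginConfiguration()
    // Remove the inputs the session actually holds, not freshly derived ones
    for oldInput in session.inputs {
        session.removeInput(oldInput)
    }
    if session.canAddInput(newInput) {
        session.addInput(newInput)
    }
    session.commitConfiguration()
}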
I am trying to update my project to AudioKit v5, using SPM. As far as I can see in the current documentation, you instantiate the microphone by attaching it to the audio engine input.
However, I am missing what used to be AudioKit.inputDevices (and then AKManager.inputDevices). I used to be able to select my microphone of choice.
How does one select a specific microphone using AudioKit v5 on iOS?
As of November 6, 2020, you need to make sure you are using the v5-develop branch, since v5-main still does not support hardware with a 48 kHz sample rate.
Here is code that allows you to choose the microphone according to its debug description:
// AudioKit engine and node definitions
let engine = AudioEngine()
var mic: AudioEngine.InputNode!
var boost: Fader!
var mixer: Mixer!

// Choose device for microphone
if let inputs = AudioEngine.inputDevices {
    // print(inputs) // Uncomment to see the possible inputs
    let micSelection: String = "Front" // On a 2020 iPad Pro you can also choose "Back" or "Top"
    var chosenMic: Int = 0
    var micTypeCounter: Int = 0
    for microphone in inputs {
        let micType: String = "\(microphone)"
        if micType.range(of: micSelection) != nil {
            chosenMic = micTypeCounter
        }
        // If we find a wired mic, prefer it
        if micType.range(of: "Wired") != nil {
            chosenMic = micTypeCounter
            break
        }
        // If we find a USB mic (newer devices), prefer it
        if micType.range(of: "USB") != nil {
            chosenMic = micTypeCounter
            break
        }
        micTypeCounter += 1
    }
    do {
        try AudioEngine.setInputDevice(inputs[chosenMic])
    } catch {
        print("Could not set audio inputs: \(error)")
    }
    mic = engine.input
}
Settings.sampleRate = mic.avAudioNode.inputFormat(forBus: 0).sampleRate // This is essential for 48 kHz hardware
// Start AudioKit
if !engine.avEngine.isRunning {
    do {
        boost = Fader(mic)
        // Set boost values here, or leave it for silence
        // Connect mic or boost to any other audio nodes you need

        // Set AudioKit's output
        mixer = Mixer(boost) // You can add any other nodes to the mixer
        engine.output = mixer

        // Additional settings
        Settings.audioInputEnabled = true

        // Start engine
        try engine.avEngine.start()
        try Settings.setSession(category: .playAndRecord)
    } catch {
        print("Could not start AudioKit: \(error)")
    }
}
It is advisable to add an observer for audio route changes in viewDidLoad:
// Notification for monitoring audio route changes
NotificationCenter.default.addObserver(
    self,
    selector: #selector(audioRouteChanged(notification:)),
    name: AVAudioSession.routeChangeNotification,
    object: nil)
This will call
@objc func audioRouteChanged(notification: Notification) {
    // Replicate the code for choosing the microphone here (the first `if let` block)
}
EDIT: To clarify, the selective use of break in the loop creates a hierarchy of preferred inputs when more than one is present. You can change the order in which inputs are preferred at your discretion, or add break to other parts of the loop.
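To avoid duplicating that block, one option is to wrap the selection loop in a helper and call it from both viewDidLoad and the route-change callback. A minimal sketch using only the calls shown above (the helper name is my own):
func selectPreferredMic() {
    guard let inputs = AudioEngine.inputDevices else { return }
    var chosenMic = 0
    for (index, device) in inputs.enumerated() {
        let micType = "\(device)"
        if micType.range(of: "Front") != nil {
            chosenMic = index // built-in fallback
        }
        if micType.range(of: "Wired") != nil || micType.range(of: "USB") != nil {
            chosenMic = index
            break // external mics take priority
        }
    }
    do {
        try AudioEngine.setInputDevice(inputs[chosenMic])
    } catch {
        print("Could not set audio input: \(error)")
    }
}

@objc func audioRouteChanged(notification: Notification) {
    selectPreferredMic()
}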
The same applies to AudioKit 4: the APIs have changed. It seems you should write:
guard let inputs = AKManager.inputDevices else {
    print("NO AK INPUT devices")
    return false
}
I'm using CoreMIDI to receive messages from a MIDI keyboard via the Camera Connection Kit on iOS devices. My app is about pitch recognition. I want the following behavior to be automatic: by default, use the microphone (already implemented); if a MIDI keyboard is connected, use that instead.
I could figure out how to tell whether it is a USB keyboard using the default driver: just ask for the device called "USB-MIDI":
private func getUSBDeviceReference() -> MIDIDeviceRef? {
    for index in 0..<MIDIGetNumberOfDevices() {
        let device = MIDIGetDevice(index)
        var name: Unmanaged<CFString>?
        MIDIObjectGetStringProperty(device, kMIDIPropertyName, &name)
        if name!.takeRetainedValue() as String == "USB-MIDI" {
            return device
        }
    }
    return nil
}
But unfortunately there are USB keyboards that use a custom driver. How can I tell if I'm looking at one of those? Standard Bluetooth and network devices seem to be always online, even when Wi-Fi and Bluetooth are turned off on the device (strange?).
I ended up using the USBLocationID. It has worked with every device I've tested so far, and none of my users have complained. But I don't expect many users to use the MIDI features of my app.
/// Filters all `MIDIDeviceRef`s for USB devices
private func getUSBDeviceReferences() -> [MIDIDeviceRef] {
    var devices = [MIDIDeviceRef]()
    for index in 0..<MIDIGetNumberOfDevices() {
        let device = MIDIGetDevice(index)
        var list: Unmanaged<CFPropertyList>?
        MIDIObjectGetProperties(device, &list, true)
        if let list = list {
            let dict = list.takeRetainedValue() as! NSDictionary
            if dict["USBLocationID"] != nil {
                devices.append(device)
            }
        }
    }
    return devices
}
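A usage sketch along the lines the question describes (both helper calls below are hypothetical placeholders):
if let keyboard = getUSBDeviceReferences().first {
    connectMIDIInput(to: keyboard) // hypothetical: route MIDI input from the keyboard
} else {
    startMicrophonePitchDetection() // hypothetical: fall back to the microphone
}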
I am using AVSpeechSynthesizer for text-to-speech. I need the speech to play in the headphones' left channel only (mono). I've got the following code to set the output channel:
func initalizeSpeechForRightChannel() {
    let avSession = AVAudioSession.sharedInstance()
    let route = avSession.currentRoute
    let outputPorts = route.outputs
    var channels: [AVAudioSessionChannelDescription] = []
    var leftAudioChannel: AVAudioSessionChannelDescription? = nil
    var leftAudioPortDesc: AVAudioSessionPortDescription? = nil
    for outputPort in outputPorts {
        for channel in outputPort.channels! {
            leftAudioPortDesc = outputPort
            //print("Name: \(channel.channelName)")
            if channel.channelName == "Headphones Left" {
                channels.append(channel)
                leftAudioChannel = channel
            }
        }
    }
    if channels.count > 0 {
        if #available(iOS 10.0, *) {
            print("Setting Left Channel")
            speechSynthesizer.outputChannels = channels
            print("Checking output channel: \(speechSynthesizer.outputChannels?.count)")
        } else {
            // Fallback on earlier versions
        }
    }
}
I have two problems with this code:
1. I can't set outputChannels; it is always nil. (This happens the first time the method is called; subsequent calls work fine.)
2. outputChannels is supported from iOS 10 onward, but I need to support iOS 8.0.
What is the best way to do this?
Instead of checking the channelName, which is descriptive (i.e. intended for the user), check the channelLabel. There is an enumeration containing the left channel.
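For example, a minimal sketch (kAudioChannelLabel_Left comes from CoreAudioTypes; speechSynthesizer is the synthesizer from the question):
var leftChannels: [AVAudioSessionChannelDescription] = []
for port in AVAudioSession.sharedInstance().currentRoute.outputs {
    for channel in port.channels ?? [] {
        // Match the semantic label instead of the user-facing name
        if channel.channelLabel == kAudioChannelLabel_Left {
            leftChannels.append(channel)
        }
    }
}
speechSynthesizer.outputChannels = leftChannels // iOS 10+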
I suspect this may not be possible before iOS 10. AVAudioSession doesn't appear to have any method to select only the left output channel. You may be able to use overrideOutputAudioPort:error:, but it would affect the entire app.
I've looked at many other questions like this, and tried a lot of the solutions, but this case is a bit different. I'm using AVCaptureVideoDataOutputSampleBufferDelegate so that I can apply CIFilters to the live video feed. I'm using the following method to change cameras:
func changeCameras() {
    captureSession.stopRunning()
    var desiredPosition: AVCaptureDevicePosition?
    if front {
        desiredPosition = AVCaptureDevicePosition.Back
    } else {
        desiredPosition = AVCaptureDevicePosition.Front
    }
    let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as? [AVCaptureDevice]
    for device in devices! {
        if device.position == desiredPosition {
            self.captureSession.beginConfiguration()
            do {
                let input = try AVCaptureDeviceInput(device: device)
                for oldInput in self.captureSession.inputs {
                    print(oldInput)
                    self.captureSession.removeInput(oldInput as! AVCaptureInput)
                }
                print(input)
                self.captureSession.addInput(input)
                self.captureSession.commitConfiguration()
                dispatch_async(dispatch_get_main_queue(), { () -> Void in
                    self.captureSession.startRunning()
                })
            } catch { print("evic failed") }
        }
    }
    front = !front
}
The methods that I am using to set up the camera (called in viewDidLoad) and receive the sampleBuffer from the delegate are here: https://gist.github.com/JoeyBodnar/17e22e3c04093caa54cf240ed8b1b601.
One problem is that when pressing the button to change cameras, the screen freezes for a solid 4-5 seconds before switching. I've tried the above method, as well as running the entire function on a separate queue, and it still takes a long time. I've never had this problem when switching cameras with a regular AVCaptureVideoPreviewLayer, so I think it may be caused in part by the fact that I'm using the sample buffer delegate, but I can't quite piece together how or why. Any help is appreciated. Thanks!
It seems that the front camera doesn't support focusMode.
func configureDevice() {
    if let device = captureDevice {
        let focusMode: AVCaptureFocusMode = .AutoFocus
        if device.isFocusModeSupported(focusMode) {
            device.lockForConfiguration(nil)
            device.focusMode = AVCaptureFocusMode.AutoFocus
            device.unlockForConfiguration()
            println("configured device")
        }
    }
}
This code doesn't run, because
if device.isFocusModeSupported(focusMode)
returns false.
But in the built-in Camera app, the front camera can focus on tap.
Is there any way to implement tap-to-focus on the FRONT camera?
The front-facing camera does not support tap-to-focus on any iPhone. You can check the device.focusPointOfInterestSupported property to see whether tap-to-focus is available (but you will get false, just as with isFocusModeSupported()).
What you are seeing is tap-to-expose, and you can check for that with device.exposurePointOfInterestSupported. Once you know you can use it, set your point of interest via device.exposurePointOfInterest.
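For example, a minimal sketch in current Swift (assuming the tap point has already been converted to the device's coordinate space, where (0,0) is top-left and (1,1) is bottom-right):
func setExposurePoint(_ point: CGPoint, on device: AVCaptureDevice) {
    guard device.isExposurePointOfInterestSupported else { return }
    do {
        try device.lockForConfiguration()
        device.exposurePointOfInterest = point
        device.exposureMode = .autoExpose // re-meter at the new point
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}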
All the details of each mode are explained in the Apple docs.
Hope it helps!