AudioKit v5: How does one choose the microphone? - audiokit

I am trying to update my project to AudioKit v5, using SPM. As far as I can see in the current documentation, you instantiate the microphone by attaching it to the audio engine input.
However, I am missing what used to be AudioKit.inputDevices (and then AKManager.inputDevices). I used to be able to select my microphone of choice.
How does one select a specific microphone using AudioKit v5 on iOS?

As of November 6, 2020, make sure you are using the v5-develop branch, since v5-main does not yet support hardware with a 48 kHz sample rate.
Here is code that allows you to choose the microphone according to its debug description:
// AudioKit engine and node definitions
let engine = AudioEngine()
var mic : AudioEngine.InputNode!
var boost : Fader!
var mixer : Mixer!
// Choose device for microphone
if let inputs = AudioEngine.inputDevices {
// print (inputs) // Uncomment to see the possible inputs
let micSelection : String = "Front" // On a 2020 iPad Pro you can also choose "Back" or "Top"
var chosenMic : Int = 0
var micTypeCounter : Int = 0
for microphones in inputs {
let micType : String = "\(microphones)"
if micType.range(of: micSelection) != nil {
chosenMic = micTypeCounter
}
// If we find a wired mic, prefer it
if micType.range(of: "Wired") != nil {
chosenMic = micTypeCounter
break
}
// If we find a USB mic (newer devices), prefer it
if micType.range(of: "USB") != nil {
chosenMic = micTypeCounter
break
}
micTypeCounter += 1
}
do {
try AudioEngine.setInputDevice(inputs[chosenMic])
} catch {
print ("Could not set audio inputs: \(error)")
}
mic = engine.input
}
Settings.sampleRate = mic.avAudioNode.inputFormat(forBus: 0).sampleRate // This is essential for 48 kHz hardware
// Start AudioKit
if !engine.avEngine.isRunning {
do {
boost = Fader(mic)
// Set boost values here, or leave it for silence
// Connect mic or boost to any other audio nodes you need
// Set AudioKit's output
mixer = Mixer(boost) // You can add any other nodes to the mixer
engine.output = mixer
// Additional settings
Settings.audioInputEnabled = true
// Start engine
try engine.avEngine.start()
try Settings.setSession(category: .playAndRecord)
} catch {
print ("Could not start AudioKit: \(error)")
}
}
It is advisable to add an observer for audio route change notifications in viewDidLoad:
// Notification for monitoring audio route changes
NotificationCenter.default.addObserver(
self,
selector: #selector(audioRouteChanged(notification:)),
name: AVAudioSession.routeChangeNotification,
object: nil)
This will call
@objc func audioRouteChanged(notification: Notification) {
// Replicate the code for choosing the microphone here (the first `if let` block)
}
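Rather than copying the whole selection block into the handler, one option (a sketch based on the code above; selectMicrophone is just a hypothetical helper name) is to factor the device selection into a function:
func selectMicrophone() {
    guard let inputs = AudioEngine.inputDevices else { return }
    let micSelection = "Front"
    var chosenMic = 0
    for (index, device) in inputs.enumerated() {
        let micType = "\(device)"
        // The named built-in mic wins unless a wired or USB input appears later
        if micType.range(of: micSelection) != nil {
            chosenMic = index
        }
        if micType.range(of: "Wired") != nil || micType.range(of: "USB") != nil {
            chosenMic = index
            break
        }
    }
    do {
        try AudioEngine.setInputDevice(inputs[chosenMic])
    } catch {
        print("Could not set audio input: \(error)")
    }
}
Call selectMicrophone() from viewDidLoad and from audioRouteChanged(notification:) so both paths stay in sync.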
EDIT: To clarify, the reason for the selective use of break in the loop is to create a hierarchy of selected inputs, if more than one is present. You may change the order of the inputs detected at your discretion, or add break to other parts of the loop.

The same applies to AudioKit 4: the APIs have changed. It seems you should write:
guard let inputs = AKManager.inputDevices else{
print("NO AK INPUT devices")
return false
}
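And then, as a hedged sketch only: assuming the device setter moved to AKManager along with inputDevices (it was AudioKit.setInputDevice(_:) before the rename), selecting a specific input might look like this:
// Sketch for AudioKit 4.9+: pick an input whose description contains "Front".
// AKManager.setInputDevice(_:) is assumed here, mirroring the pre-rename AudioKit.setInputDevice(_:).
if let frontMic = inputs.first(where: { "\($0)".range(of: "Front") != nil }) {
    do {
        try AKManager.setInputDevice(frontMic)
    } catch {
        print("Could not set input device: \(error)")
    }
}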

Related

"__CFRunLoopModeFindSourceForMachPort returned NULL" messages when using AVAudioPlayer

We're working on a SpriteKit game. In order to have more control over sound effects, we switched from using SKAudioNodes to having some AVAudioPlayers. While everything seems to be working well in terms of game play, frame rate, and sounds, we're seeing occasional error(?) messages in the console output when testing on physical devices:
... [general] __CFRunLoopModeFindSourceForMachPort returned NULL for mode 'kCFRunLoopDefaultMode' livePort: #####
It doesn't seem to really cause any harm when it happens (no sound glitches or hiccups in frame rate or anything), but not understanding exactly what the message means and why it's happening is making us nervous.
Details:
The game is all standard SpriteKit, all events driven by SKActions, nothing unusual there.
The uses of AVFoundation stuff are the following. Initialization of app sounds:
class Sounds {
let soundQueue: DispatchQueue
init() {
do {
try AVAudioSession.sharedInstance().setActive(true)
} catch {
print(error.localizedDescription)
}
soundQueue = DispatchQueue.global(qos: .background)
}
func execute(_ soundActions: @escaping () -> Void) {
soundQueue.async(execute: soundActions)
}
}
Creating various sound effect players:
guard let player = try? AVAudioPlayer(contentsOf: url) else {
fatalError("Unable to instantiate AVAudioPlayer")
}
player.prepareToPlay()
Playing a sound effect:
let pan = stereoBalance(...)
sounds.execute {
if player.pan != pan {
player.pan = pan
}
player.play()
}
The AVAudioPlayers are all for short sound effects with no looping, and they get reused. We create about 25 players total, including multiple players for certain effects when they can repeat in quick succession. For a particular effect, we rotate through the players for that effect in a fixed sequence. We have verified that whenever a player is triggered, its isPlaying is false, so we're not trying to invoke play on something that's already playing.
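A minimal sketch of that rotation, assuming a fixed array of preloaded players per effect (the class name here is hypothetical):
import AVFoundation

// Cycle through a fixed set of players for one effect, in order.
final class EffectPlayers {
    private let players: [AVAudioPlayer]
    private var nextIndex = 0

    init(players: [AVAudioPlayer]) {
        self.players = players
    }

    func play() {
        let player = players[nextIndex]
        nextIndex = (nextIndex + 1) % players.count
        // In practice the player is always idle by the time it comes around again.
        guard !player.isPlaying else { return }
        player.play()
    }
}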
The message doesn't appear that often. Over the course of a 5-10 minute game with possibly thousands of sound effects, we see the message maybe 5-10 times.
The message seems to occur most commonly when a bunch of sound effects are being played in quick succession, but it doesn't feel like it's 100% correlated with that.
Not using the dispatch queue (i.e., having sounds.execute just call soundActions() directly) doesn't fix the issue (though that does cause the game to lag significantly). Changing the dispatch queue to some of the other priorities like .utility also doesn't affect the issue.
Making sounds.execute just return immediately (i.e., don't actually call the closure at all, so there's no play()) does eliminate the messages.
We did find the source code that's producing the message at this link:
https://github.com/apple/swift-corelibs-foundation/blob/master/CoreFoundation/RunLoop.subproj/CFRunLoop.c
but we don't understand it except at an abstract level, and are not sure how run loops are involved in the AVFoundation stuff.
Lots of googling has turned up nothing helpful. And as I indicated, it doesn't seem to be causing noticeable problems at all. It would be nice to know why it's happening though, and either how to fix it or to have certainty that it won't ever be an issue.
We're still working on this, but have experimented enough that it's clear how we should do things. Outline:
Use the scene's audioEngine property.
For each sound effect, make an AVAudioFile for reading the audio's URL from the bundle. Read it into an AVAudioPCMBuffer. Stick the buffers into a dictionary that's indexed by sound effect.
Make a bunch of AVAudioPlayerNodes, attach() them to the audioEngine, and connect(playerNode, to: audioEngine.mainMixerNode). At the moment we're creating these dynamically, searching through our current list of player nodes to find one that's not playing and making a new one if none is available. That probably has more overhead than is needed, since we need callbacks to observe when a player node finishes whatever it's playing so we can set it back to a stopped state. We'll try switching to a fixed maximum number of active sound effects and rotating through the players in order.
To play a sound effect, grab the buffer for the effect, find a non-busy playerNode, and do playerNode.scheduleBuffer(buffer, ...). And playerNode.play() if it's not currently playing.
I may update this with some more detailed code once we have things fully converted and cleaned up. We still have a couple of long-running AVAudioPlayers that we haven't switched to use AVAudioPlayerNode going through the mixer. But anyway, pumping the vast majority of sound effects through the scheme above has eliminated the error message, and it needs far less stuff sitting around since there's no duplication of the sound effects in-memory like we had before. There's a tiny bit of lag, but we haven't even tried putting some stuff on a background thread yet, and maybe not having to search for and constantly start/stop players would even eliminate it without having to worry about that.
Since switching to this approach, we've had no more runloop complaints.
Edit: Some example code...
import SpriteKit
import AVFoundation
enum SoundEffect: String, CaseIterable {
case playerExplosion = "player_explosion"
// lots more
var url: URL {
guard let url = Bundle.main.url(forResource: self.rawValue, withExtension: "wav") else {
fatalError("Sound effect file \(self.rawValue) missing")
}
return url
}
func audioBuffer() -> AVAudioPCMBuffer {
guard let file = try? AVAudioFile(forReading: self.url) else {
fatalError("Unable to instantiate AVAudioFile")
}
guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
fatalError("Unable to instantiate AVAudioPCMBuffer")
}
do {
try file.read(into: buffer)
} catch {
fatalError("Unable to read audio file into buffer, \(error.localizedDescription)")
}
return buffer
}
}
class Sounds {
var audioBuffers = [SoundEffect: AVAudioPCMBuffer]()
// more stuff
init() {
for effect in SoundEffect.allCases {
preload(effect)
}
}
func preload(_ sound: SoundEffect) {
audioBuffers[sound] = sound.audioBuffer()
}
func cachedAudioBuffer(_ sound: SoundEffect) -> AVAudioPCMBuffer {
guard let buffer = audioBuffers[sound] else {
fatalError("Audio buffer for \(sound.rawValue) was not preloaded")
}
return buffer
}
}
class Globals {
// Sounds loaded once and shared among all scenes in the game
static let sounds = Sounds()
}
class SceneAudio {
let stereoEffectsFrame: CGRect
let audioEngine: AVAudioEngine
var playerNodes = [AVAudioPlayerNode]()
var nextPlayerNode = 0
// more stuff
init(stereoEffectsFrame: CGRect, audioEngine: AVAudioEngine) {
self.stereoEffectsFrame = stereoEffectsFrame
self.audioEngine = audioEngine
do {
try audioEngine.start()
let buffer = Globals.sounds.cachedAudioBuffer(.playerExplosion)
// We got up to about 10 simultaneous sounds when really pushing the game
for _ in 0 ..< 10 {
let playerNode = AVAudioPlayerNode()
playerNodes.append(playerNode)
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
playerNode.play()
}
} catch {
logging("Cannot start audio engine, \(error.localizedDescription)")
}
}
func soundEffect(_ sound: SoundEffect, at position: CGPoint = .zero) {
guard audioEngine.isRunning else { return }
let buffer = Globals.sounds.cachedAudioBuffer(sound)
let playerNode = playerNodes[nextPlayerNode]
nextPlayerNode = (nextPlayerNode + 1) % playerNodes.count
playerNode.pan = stereoBalance(position)
playerNode.scheduleBuffer(buffer)
}
func stereoBalance(_ position: CGPoint) -> Float {
guard stereoEffectsFrame.width != 0 else { return 0 }
guard position.x <= stereoEffectsFrame.maxX else { return 1 }
guard position.x >= stereoEffectsFrame.minX else { return -1 }
return Float((position.x - stereoEffectsFrame.midX) / (0.5 * stereoEffectsFrame.width))
}
}
class GameScene: SKScene {
var audio: SceneAudio!
// lots more stuff
// somewhere in initialization
// gameFrame is the area where action takes place and which
// determines panning for stereo sound effects
audio = SceneAudio(stereoEffectsFrame: gameFrame, audioEngine: audioEngine)
func destroyPlayer(_ player: SKSpriteNode) {
audio.soundEffect(.playerExplosion, at: player.position)
// more stuff
}
}

Audio won't play after app interrupted by phone call iOS

I have a problem in my SpriteKit game where audio using playSoundFileNamed(_ soundFile:, waitForCompletion:) will not play after the app is interrupted by a phone call. (I also use SKAudioNodes in my app which aren't affected but I really really really want to be able to use the SKAction playSoundFileNamed as well.)
Here's the gameScene.swift file from a stripped down SpriteKit game template which reproduces the problem. You just need to add an audio file to the project and call it "note"
I've attached the code that should reside in appDelegate to a toggle on/off button to simulate the phone call interruption. That code 1) Stops AudioEngine then deactivates AVAudioSession - (normally in applicationWillResignActive) ... and 2) Activates AVAudioSession then Starts AudioEngine - (normally in applicationDidBecomeActive)
The error:
AVAudioSession.mm:1079:-[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
This occurs when attempting to deactivate the audio session but only after a sound has been played at least once.
to reproduce:
1) Run the app
2) toggle the engine off and on a few times. No error will occur.
3) Tap the playSoundFileNamed button 1 or more times to play the sound.
4) Wait for sound to stop
5) Wait some more to be sure
6) Tap Toggle Audio Engine button to stop the audioEngine and deactivate session -
the error occurs.
7) Toggle the engine on and off a few times to see session activated, session deactivated, session activated printed in the debug area - i.e. no errors reported.
8) Now with session active and engine running, playSoundFileNamed button will not play the sound anymore.
What am I doing wrong?
import SpriteKit
import AVFoundation
class GameScene: SKScene {
var toggleAudioButton: SKLabelNode?
var playSoundFileButton: SKLabelNode?
var engineIsRunning = true
override func didMove(to view: SKView) {
toggleAudioButton = SKLabelNode(text: "toggle Audio Engine")
toggleAudioButton?.position = CGPoint(x:20, y:100)
toggleAudioButton?.name = "toggleAudioEngine"
toggleAudioButton?.fontSize = 80
addChild(toggleAudioButton!)
playSoundFileButton = SKLabelNode(text: "playSoundFileNamed")
playSoundFileButton?.position = CGPoint(x: (toggleAudioButton?.frame.midX)!, y: (toggleAudioButton?.frame.midY)!-240)
playSoundFileButton?.name = "playSoundFileNamed"
playSoundFileButton?.fontSize = 80
addChild(playSoundFileButton!)
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
if let touch = touches.first {
let location = touch.location(in: self)
let nodes = self.nodes(at: location)
for spriteNode in nodes {
if spriteNode.name == "toggleAudioEngine" {
if engineIsRunning { // 1 stop engine, 2 deactivate session
scene?.audioEngine.stop() // 1
toggleAudioButton!.text = "engine is paused"
engineIsRunning = !engineIsRunning
do{
// this is the line that fails when hit anytime after the playSoundFileButton has played a sound
try AVAudioSession.sharedInstance().setActive(false) // 2
print("session deactivated")
}
catch{
print("DEACTIVATE SESSION FAILED")
}
}
else { // 1 activate session/ 2 start engine
do{
try AVAudioSession.sharedInstance().setActive(true) // 1
print("session activated")
}
catch{
print("couldn't setActive = true")
}
do {
try scene?.audioEngine.start() // 2
toggleAudioButton!.text = "engine is running"
engineIsRunning = !engineIsRunning
}
catch {
//
}
}
}
if spriteNode.name == "playSoundFileNamed" {
self.run(SKAction.playSoundFileNamed("note", waitForCompletion: false))
}
}
}
}
}
Let me save you some time here: playSoundFileNamed sounds wonderful in theory, so wonderful that you might, say, use it in an app you spent 4 years developing, until one day you realize it's not just totally broken on interruptions but will even crash your app during the most critical of interruptions: your IAP. Don't do it. I'm still not entirely sure whether SKAudioNode or AVPlayer is the answer, but it may depend on your use case. Just don't do it.
If you need scientific evidence, create an app with a for loop that calls playSoundFileNamed on whatever file you want in touchesBegan, and watch what happens to your memory usage. The method is a leaky piece of garbage.
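For example, a throwaway scene like this (a sketch; LeakTestScene and "note.wav" are hypothetical names) makes the growth visible in Xcode's memory gauge:
import SpriteKit

class LeakTestScene: SKScene {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Each tap schedules a burst of sounds; watch memory usage climb with every tap.
        for _ in 0..<100 {
            run(SKAction.playSoundFileNamed("note.wav", waitForCompletion: false))
        }
    }
}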
EDITED FOR OUR FINAL SOLUTION:
We found having a proper number of preloaded instances of AVAudioPlayer in memory with prepareToPlay() was the best method. The SwiftySound audio class uses an on-the-fly generator, but making AVAudioPlayers on the fly created slowdown in animation. We found having a max number of AVAudioPlayers and checking an array for those where isPlaying == false was simplest and best; if one isn't available you don't get sound, similar to what you likely saw with PSFN if you had it playing lots of sounds on top of each other. Overall, we have not found an ideal solution, but this was close for us.
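A minimal sketch of that pool, assuming a fixed number of preloaded players per sound (the class name and copy count are hypothetical):
import AVFoundation

final class SoundPool {
    private var players: [AVAudioPlayer] = []

    init(url: URL, copies: Int = 5) {
        for _ in 0..<copies {
            if let player = try? AVAudioPlayer(contentsOf: url) {
                player.prepareToPlay()
                players.append(player)
            }
        }
    }

    // Play on the first idle player; if every copy is busy, the sound is simply dropped.
    func play() {
        guard let idle = players.first(where: { !$0.isPlaying }) else { return }
        idle.play()
    }
}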
In response to Mike Pandolfini’s advice not to use playSoundFileNamed I’ve converted my code to only use SKAudioNodes.
(and sent the bug report to apple).
I then found that some of these SKAudioNodes don’t play after app interruption either … and I’ve stumbled across a fix.
You need to tell each SKAudioNode to stop() as the app resigns to, or returns from, the background, even if they're not playing.
(I'm now not using any of the code in my first post which stops the audio engine and deactivates the session)
The problem then became how to play the same sound rapidly where it possibly plays over itself. That was what was so good about playSoundFileNamed.
1) The SKAudioNode fix:
Preload your SKAudioNodes i.e.
let sound = SKAudioNode(fileNamed: "super-20")
In didMoveToView add them
sound.autoplayLooped = false
addChild(sound)
Add a willResignActive notification
notificationCenter.addObserver(self, selector:#selector(willResignActive), name:UIApplication.willResignActiveNotification, object: nil)
Then create the selector’s function which stops all audioNodes playing:
@objc func willResignActive() {
for node in self.children {
if NSStringFromClass(type(of: node)) == "SKAudioNode" {
node.run(SKAction.stop())
}
}
}
All SKAudioNodes now play reliably after app interrupt.
2) To replicate playSoundFileNamed’s ability to play the short rapid repeating sounds or longer sounds that may need to play more than once and therefore could overlap, create/preload more than 1 property for each sound and use them like this:
let sound1 = SKAudioNode(fileNamed: "super-20")
let sound2 = SKAudioNode(fileNamed: "super-20")
let sound3 = SKAudioNode(fileNamed: "super-20")
let sound4 = SKAudioNode(fileNamed: "super-20")
var soundArray: [SKAudioNode] = []
var soundCounter: Int = 0
in didMoveToView
soundArray = [sound1, sound2, sound3, sound4]
for sound in soundArray {
sound.autoplayLooped = false
addChild(sound)
}
Create a play function
func playFastSound(from array:[SKAudioNode], with counter:inout Int) {
counter += 1
if counter > array.count-1 {
counter = 0
}
array[counter].run(SKAction.play())
}
To play a sound pass that particular sound's array and its counter to the play function.
playFastSound(from: soundArray, with: &soundCounter)

AVAudioEngine uses wrong format when bluetooth headset plugged in

I have a pair of Bluetooth headphones with a microphone input. The microphone is not used, but when it is active, both input and output are forced to 8000 Hz.
My AVAudioEngine instance connects to the headset in 8000 Hz mode, unless I enter the system settings and specify that I do not want to use the headset for input (which has to be done every time the headset is connected).
I have noticed that other applications can play back at the expected 44100 Hz without issues. There are no input nodes in my AVAudioEngine graph.
How can I make AVAudioEngine prefer connecting at reasonable sample rates?
After my failed bounty I wrote to Apple DTS, and got a wonderful response (including the code sample below that I translated from Objective-C).
The function below will connect to the default audio device in output-only mode, instead of the input/output mode that is the default behavior. Remember to call it before engine start!
func setOutputDeviceFor(_ engine: AVAudioEngine) -> Bool {
var addr = AudioObjectPropertyAddress(
mSelector: kAudioHardwarePropertyDefaultOutputDevice,
mScope: kAudioObjectPropertyScopeGlobal,
mElement: kAudioObjectPropertyElementMaster)
var deviceID: AudioObjectID = 0
var size = UInt32(MemoryLayout.size(ofValue: deviceID))
let err = AudioObjectGetPropertyData(
AudioObjectID(kAudioObjectSystemObject),
&addr,
0,
nil,
&size,
&deviceID)
if (noErr == err && kAudioDeviceUnknown != deviceID) {
do {
try engine.outputNode.auAudioUnit.setDeviceID(deviceID)
} catch {
print(error)
return false
}
return true
} else {
print("ERROR: couldn't get default output device, ID = \(deviceID), err = \(err)")
return false
}
}
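A minimal usage sketch, per the note above about calling it before the engine starts:
let engine = AVAudioEngine()
// Bind the output unit to the default output device before wiring nodes or starting.
if setOutputDeviceFor(engine) {
    // attach and connect your nodes here, then:
    try? engine.start()
}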

iOS: AVSpeechSynthesizer : need to speak text in Headphone left channel

I am using AVSpeechSynthesizer for text-to-speech. I need to play the speech in the headphone's left channel (Mono 2). I have the following code to set the output channel.
func initalizeSpeechForRightChannel(){
let avSession = AVAudioSession.sharedInstance()
let route = avSession.currentRoute
let outputPorts = route.outputs
var channels:[AVAudioSessionChannelDescription] = []
//NSMutableArray *channels = [NSMutableArray array];
var leftAudioChannel:AVAudioSessionChannelDescription? = nil
var leftAudioPortDesc:AVAudioSessionPortDescription? = nil
for outputPort in outputPorts {
for channel in outputPort.channels! {
leftAudioPortDesc = outputPort
//print("Name: \(channel.channelName)")
if channel.channelName == "Headphones Left" {
channels.append(channel)
leftAudioChannel = channel
}else {
// leftAudioPortDesc?.channels?.removeObject(channel)
}
}
}
if channels.count > 0 {
if #available(iOS 10.0, *) {
print("Setting Left Channel")
speechSynthesizer.outputChannels = channels
print("Checking output channel : \(speechSynthesizer.outputChannels?.count)")
} else {
// Fallback on earlier versions
}
}
}
I have two problems with this code:
1. I can't set outputChannels; it is always nil. (This happens the first time the method is called; consecutive calls work fine.)
2. outputChannels is supported from iOS 10, but I need to support iOS 8.0.
Please advise on the best way to do this.
Instead of checking the channelName, which is descriptive (i.e. for the user), check the channelLabel. There is an enumeration containing the left channel.
I suspect this may not be possible pre-iOS 10. AVAudioSession doesn't appear to have any method to select only the left output channel. You may be able to use overrideOutputAudioPort:error: but it would affect the entire app.
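For the channelLabel check, here is a minimal sketch (the helper name is hypothetical; it collects only the left channel of the current route):
import AVFoundation

func leftOutputChannels() -> [AVAudioSessionChannelDescription] {
    let route = AVAudioSession.sharedInstance().currentRoute
    var channels: [AVAudioSessionChannelDescription] = []
    for port in route.outputs {
        for channel in port.channels ?? [] where channel.channelLabel == kAudioChannelLabel_Left {
            channels.append(channel)
        }
    }
    return channels
}
On iOS 10 and later you can then assign the result to speechSynthesizer.outputChannels when it is non-empty.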

AVFoundation: toggle camera fails at CanAddInput

I am trying to add a rotate camera function with AVFoundation to allow the user to toggle between the front-facing and back-facing cameras.
As shown in the code below, I've put in some println() statements and all the values seem legit, but the code always drops to the failed else-clause when testing canAddInput().
I've tried setting the sessionPreset (which is in another function that initializes the session beforehand) to various values including AVCaptureSessionPresetHigh and AVCaptureSessionPresetLow but that didn't help.
@IBAction func rotateCameraPressed(sender: AnyObject) {
// Loop through all the capture devices to find right ones
var backCameraDevice : AVCaptureDevice?
var frontCameraDevice : AVCaptureDevice?
let devices = AVCaptureDevice.devices()
for device in devices {
// Make sure this particular device supports video
if (device.hasMediaType(AVMediaTypeVideo)) {
// Define devices
if (device.position == AVCaptureDevicePosition.Back) {
backCameraDevice = device as? AVCaptureDevice
} else if (device.position == AVCaptureDevicePosition.Front) {
frontCameraDevice = device as? AVCaptureDevice
}
}
}
// Assign found devices to corresponding input
var backInput : AVCaptureDeviceInput?
var frontInput : AVCaptureDeviceInput?
var error: NSError?
if let backDevice = backCameraDevice {
println("Back device is \(backDevice)")
backInput = AVCaptureDeviceInput(device : backDevice, error: &error)
}
if let frontDevice = frontCameraDevice {
println("Front device is \(frontDevice)")
frontInput = AVCaptureDeviceInput(device : frontDevice, error: &error)
}
// Now rotate the camera
isBackCamera = !isBackCamera // toggle camera position
if isBackCamera {
// remove front and add back
captureSession!.removeInput(frontInput)
if let bi = backInput {
println("Back input is \(bi)")
if captureSession!.canAddInput(bi) {
captureSession!.addInput(bi)
} else {
println("Cannot add back input!")
}
}
} else {
// remove back and add front
captureSession!.removeInput(backInput)
if let fi = frontInput {
println("Front input is \(fi)")
if captureSession!.canAddInput(fi) {
captureSession!.addInput(fi)
} else {
println("Cannot add front input!")
}
}
}
}
The problem seems to be that the derived input from the devices found in the iteration does not actually match the input in the captureSession variable. This appears to be a new thing, since all the code I've seen posted about this finds and removes the input for the current camera by iterating through the list of devices, as I've done in my code.
This doesn't seem to work anymore - well, at least not in the code I posted, which is based upon all the sources I've been able to dig up (that all happen to be in Objective C). The reason canAddInput() fails is that the removeInput() never succeeds; the fact that it doesn't issue the usual error about not being able to have multiple input devices is strange (since it would have helped with the debugging).
Anyway, the fix is to not remove the input on the derived input from the found device (which used to work). Instead, remove the input device that is actually there by going into the captureSession.inputs variable and doing a removeInput() on that.
To scrunch all that babble to code, here's what I did:
for ii in captureSession!.inputs {
captureSession!.removeInput(ii as! AVCaptureInput)
}
And that did the trick! :)
