I am trying to migrate an app from AudioKit v4 to v5, but I am having a hard time finding migration documentation, and I can't find these settings in the Cookbook. Previously we could set defaultToSpeaker and audioInputEnabled through AKSettings. Now these properties are gone and I can't find how to replace them.
v4:
AKSettings.audioInputEnabled = true
AKSettings.defaultToSpeaker = true
Does anyone know how these parameters can be set with the new version? Any feedback is highly appreciated!
Nazarii,
In AudioKit 5, here's how I set up my audio input parameters:
import AudioKit
import AVFoundation
class Conductor {
static let sharedInstance = Conductor()
// Instantiate the audio engine and Mic Input node objects
let engine = AudioEngine()
var mic: AudioEngine.InputNode!
// Add effects for the Mic Input.
var delay: Delay!
var reverb: Reverb!
let mixer = Mixer()
// MARK: Initialize the audio engine settings.
init() {
// AVAudioSession requires AVFoundation to be imported at the top of this file.
do {
Settings.bufferLength = .medium
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(Settings.bufferLength.duration)
try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
options: [.defaultToSpeaker, .mixWithOthers, .allowBluetoothA2DP])
try AVAudioSession.sharedInstance().setActive(true)
} catch let err {
print(err)
}
// The audio signal path will be:
// input > mic > delay > reverb > mixer > output
// Mic is connected to the audio engine's input...
mic = engine.input
// Mic goes into the delay...
delay = Delay(mic)
delay.time = AUValue(0.5)
delay.feedback = AUValue(30.0)
delay.dryWetMix = AUValue(15.0)
// Delay output goes into the reverb...
reverb = Reverb(delay)
reverb.loadFactoryPreset(.largeHall2)
reverb.dryWetMix = AUValue(0.4)
// Reverb output goes into the mixer...
mixer.addInput(reverb)
// Engine output is connected to the mixer.
engine.output = mixer
// Uncomment the following call if you don't want to start and stop the audio engine via the SceneDelegate.
// startAudioEngine()
}
// MARK: Start and stop the audio engine via the SceneDelegate
func startAudioEngine() {
do {
print("Audio engine was started.")
try engine.start()
} catch {
Log("AudioKit did not start! \(error)")
}
}
func stopAudioEngine() {
engine.stop()
print("Audio engine was stopped.")
}
}
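If you do drive the engine from the SceneDelegate as the comments above suggest, a minimal sketch (assuming a standard UIKit SceneDelegate; the lifecycle methods below are just one reasonable place to hook in) could look like this:
import UIKit
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?
    func sceneDidBecomeActive(_ scene: UIScene) {
        // Start the shared audio engine when the scene becomes active.
        Conductor.sharedInstance.startAudioEngine()
    }
    func sceneWillResignActive(_ scene: UIScene) {
        // Stop the engine when the scene is about to go inactive.
        Conductor.sharedInstance.stopAudioEngine()
    }
}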
Please let me know if this works for you.
Take care,
Mark
Related
I'm currently trying to debug my video camera module, which records video and audio with AVFoundation while mixing with background audio. It only adds the audio input when the user starts recording and should remove the audio input after the user stops recording. It works the first time after app launch, but fails to remove the audio input after showing the recorded preview, with the following error:
2023-02-10 12:44:01.834776-0800 Camera[14061:986111] [avas] AVAudioSession_iOS.mm:1271 Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
Error occurred while removing audio device input: Error Domain=NSOSStatusErrorDomain Code=560030580 "(null)"
Here is my initial capture session code:
func setUp(){
do{
self.session.beginConfiguration()
let cameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
if cameraDevice != nil {
/* only add video in setup */
let videoInput = try AVCaptureDeviceInput(device: cameraDevice!)
if self.session.canAddInput(videoInput) {
self.session.addInput(videoInput)
self.videoDeviceInput = videoInput
}
if self.session.canAddOutput(self.output){
self.session.addOutput(self.output)
}
if self.session.canAddOutput(self.photoOutput){
self.session.addOutput(self.photoOutput)
}
// For audio mixing, make sure this is left at its default value of true
self.session.automaticallyConfiguresApplicationAudioSession = true
self.session.commitConfiguration()
}
}
catch{
print(error.localizedDescription)
}
}
Here is my start-recording code, which gets triggered when the user uses the camera module to start recording a video whose audio mixes with the background audio:
func startRecording() {
/* add audio input only when user starts recording */
do
{
try AVAudioSession.sharedInstance().setActive(false)
try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.ambient)
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: AVAudioSession.CategoryOptions.mixWithOthers)
try AVAudioSession.sharedInstance().setMode(AVAudioSession.Mode.videoRecording)
try AVAudioSession.sharedInstance().setActive(true)
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
if self.session.canAddInput(audioInput){
self.session.automaticallyConfiguresApplicationAudioSession = false
self.session.addInput(audioInput)
}
} catch {
print("Can't Set Audio Session Category: \(error)")
}
// MARK: Temporary URL for recording Video
let tempURL = NSTemporaryDirectory() + "\(Date()).mov"
//Need to correct image orientation before moving further
if let videoOutputConnection = output.connection(with: .video) {
//For frontCamera settings to capture mirror image
if self.videoDeviceInput.device.position == .front {
videoOutputConnection.automaticallyAdjustsVideoMirroring = false
videoOutputConnection.isVideoMirrored = true
} else {
videoOutputConnection.automaticallyAdjustsVideoMirroring = true
}
}
output.startRecording(to: URL(fileURLWithPath: tempURL), recordingDelegate: self)
isRecording = true
}
Here is my stop recording method:
func stopRecording(){
output.stopRecording()
isRecording = false
self.flashOn = false
/* attempts to remove Audio input */
do{
try AVAudioSession.sharedInstance().setActive(false)
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
self.session.removeInput(audioInput)
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
} catch {
print("Error occurred while removing audio device input: \(error)")
}
}
The stopRecording method outputs the error above, after which the camera no longer attaches audio to future recordings; only the first recording works.
Basically, I feel like I'm really close, but I've been pulling my hair out trying to debug this for the past week with limited documentation and solutions. The first recording session captures audio, but once startRecording is called again, subsequent recording sessions no longer have any audio attached to the recorded video.
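For what it's worth, based on the error message I am now experimenting with deferring the audio-session teardown until the file output's delegate callback fires, and with removing the same AVCaptureDeviceInput instance that was added earlier (the names below are just a sketch on my side, untested):
// Assumption: audioDeviceInput stores the input created in startRecording().
var audioDeviceInput: AVCaptureDeviceInput?
func stopRecording() {
    // stopRecording() is asynchronous; wait for the delegate before tearing down audio.
    output.stopRecording()
    isRecording = false
    self.flashOn = false
}
// AVCaptureFileOutputRecordingDelegate
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    session.beginConfiguration()
    if let audioInput = audioDeviceInput {
        // Remove the exact input instance that was added when recording started.
        session.removeInput(audioInput)
        audioDeviceInput = nil
    }
    session.commitConfiguration()
    do {
        try AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().setCategory(.ambient, mode: .default, options: [.mixWithOthers])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Error occurred while resetting the audio session: \(error)")
    }
}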
I am developing an iOS 14 app that plays fragments of an audio file for the user to imitate. If the user wants to, the app can record the user's responses and play these back immediately. The user can also export an audio recording that comprises the original fragments, plus the user's responses.
I am using AudioKit 4.11
Because it is possible the user may never wish to take advantage of the app's recording abilities, the app initially adopts the audio session category of .playback. If the user wants to use the recording feature, the app triggers the standard Apple dialog for requesting microphone access, and if this is granted, switches the session category to .playAndRecord.
I have found that when the session category is .playback and the user has not yet granted microphone permission, I am able to listen to the app's output on a Bluetooth speaker, or on my Jabra Elite 65t Bluetooth earbuds when the app is running on a real iPhone. In the example below, this is the case when the app first runs and the user has only ever tapped "Play sound" or "Stop".
However, as soon as I tap "Play sound and record response" and grant microphone access, I am unable to listen to the app's output on a Bluetooth device, regardless of whether the session category applicable at the time is .playback (after tapping "Play sound and record response") or .playAndRecord (after tapping "Play sound") - unless I subsequently go to my phone's Privacy settings and toggle microphone access to off. Playback is available only through the phone's speaker, or through plugged in headphones.
When setting the session category to .playAndRecord, I have tried including the .allowBluetoothA2DP option.
Apple's documentation implies this should allow me to listen to my app's sound over Bluetooth in the circumstances described above (see https://developer.apple.com/documentation/avfoundation/avaudiosession/categoryoptions/1771735-allowbluetootha2dp). However, I've not found this to be the case.
The code below represents a runnable app (albeit one requiring the presence of AudioKit 4.11) that illustrates the problem in a simplified form. The only elements not shown here are an NSMicrophoneUsageDescription entry that I added to Info.plist, and the file "blues.mp3", which I imported into the project.
ContentView:
import SwiftUI
import AudioKit
import AVFoundation
struct ContentView: View {
private var pr = PlayerRecorder()
var body: some View {
VStack{
Text("Play sound").onTapGesture{
pr.setupforPlay()
pr.playSound()
}
.padding()
Text("Play sound and record response").onTapGesture{
if recordingIsAllowed() {
pr.activatePlayAndRecord()
pr.startSoundAndResponseRecording()
}
}
.padding()
Text("Stop").onTapGesture{
pr.stop()
}
.padding()
}
}
func recordingIsAllowed() -> Bool {
var retval = false
AVAudioSession.sharedInstance().requestRecordPermission { granted in
retval = granted
}
return retval
}
}
PlayerRecorder:
import Foundation
import AudioKit
class PlayerRecorder {
private var mic: AKMicrophone!
private var micBooster: AKBooster!
private var mixer: AKMixer!
private var outputBooster: AKBooster!
private var player: AKPlayer!
private var playerBooster: AKBooster!
private var recorder: AKNodeRecorder!
private var soundFile: AKAudioFile!
private var twentySecondTimer = Timer()
init() {
AKSettings.defaultToSpeaker = true
AKSettings.disableAudioSessionDeactivationOnStop = true
AKSettings.notificationsEnabled = true
}
func activatePlayAndRecord() {
do {
try AKManager.shutdown()
} catch {
print("Shutdown failed")
}
setupForPlayAndRecord()
}
func playSound() {
do {
soundFile = try AKAudioFile(readFileName: "blues.mp3")
} catch {
print("Failed to open sound file")
}
do {
try player.load(audioFile: soundFile!)
} catch {
print("Player failed to load sound file")
}
if micBooster != nil{
micBooster.gain = 0.0
}
player.play()
}
func setupforPlay() {
do {
try AKSettings.setSession(category: .playback)
} catch {
print("Failed to set session category to .playback")
}
mixer = AKMixer()
outputBooster = AKBooster(mixer)
player = AKPlayer()
playerBooster = AKBooster(player)
playerBooster >>> mixer
AKManager.output = outputBooster
if !AKManager.engine.isRunning {
try? AKManager.start()
}
}
func setupForPlayAndRecord() {
AKSettings.audioInputEnabled = true
do {
try AKSettings.setSession(category: .playAndRecord)
/* Have tried the following instead of the line above, but without success
let options: AVAudioSession.CategoryOptions = [.allowBluetoothA2DP]
try AKSettings.setSession(category: .playAndRecord, options: options.rawValue)
Have also tried:
try AKSettings.setSession(category: .multiRoute)
*/
} catch {
print("Failed to set session category to .playAndRecord")
}
mic = AKMicrophone()
micBooster = AKBooster(mic)
mixer = AKMixer()
outputBooster = AKBooster(mixer)
player = AKPlayer()
playerBooster = AKBooster(player)
mic >>> micBooster
micBooster >>> mixer
playerBooster >>> mixer
AKManager.output = outputBooster
micBooster.gain = 0.0
outputBooster.gain = 1.0
if !AKManager.engine.isRunning {
try? AKManager.start()
}
}
func startSoundAndResponseRecording() {
// Start player and recorder. After 20 seconds, call a function that stops the player
// (while allowing recording to continue until user taps Stop button).
activatePlayAndRecord()
playSound()
// Force removal of any tap not previously removed with stop() call for recorder
var mixerNode: AKNode?
mixerNode = mixer
for i in 0..<8 {
mixerNode?.avAudioUnitOrNode.removeTap(onBus: i)
}
do {
recorder = try AKNodeRecorder(node: mixer)
try recorder.record()
} catch {
print("Failed to start recorder")
}
twentySecondTimer = Timer.scheduledTimer(timeInterval: 20.0, target: self, selector: #selector(stopPlayerOnly), userInfo: nil, repeats: false)
}
func stop(){
twentySecondTimer.invalidate()
if player.isPlaying {
player.stop()
}
if recorder != nil {
if recorder.isRecording {
recorder.stop()
}
}
if AKManager.engine.isRunning {
do {
try AKManager.stop()
} catch {
print("Error occurred while stopping engine.")
}
}
print("Stopped")
}
@objc func stopPlayerOnly() {
player.stop()
if !mic.isStarted {
mic.start()
}
if !micBooster.isStarted {
micBooster.start()
}
mic.volume = 1.0
micBooster.gain = 1.0
outputBooster.gain = 0.0
}
}
Three additional lines of code near the beginning of setupForPlayAndRecord() solve the problem:
func setupForPlayAndRecord() {
AKSettings.audioInputEnabled = true
// Adding the following three lines solves the problem
AKSettings.useBluetooth = true
let categoryOptions: AVAudioSession.CategoryOptions = [.allowBluetoothA2DP]
AKSettings.bluetoothOptions = categoryOptions
do {
try AKSettings.setSession(category: .playAndRecord)
} catch {
print("Failed to set session category to .playAndRecord")
}
mic = AKMicrophone()
micBooster = AKBooster(mic)
mixer = AKMixer()
outputBooster = AKBooster(mixer)
player = AKPlayer()
playerBooster = AKBooster(player)
mic >>> micBooster
micBooster >>> mixer
playerBooster >>> mixer
AKManager.output = outputBooster
micBooster.gain = 0.0
outputBooster.gain = 1.0
if !AKManager.engine.isRunning {
try? AKManager.start()
}
}
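For anyone reading this who is not going through AKSettings: my understanding (I have not verified it against the AudioKit source, so treat it as an assumption) is that this ends up roughly equivalent to configuring the shared AVAudioSession directly:
import AVFoundation
// Rough plain-AVFoundation equivalent (assumption: AKSettings.setSession forwards
// defaultToSpeaker / useBluetooth / bluetoothOptions to the shared session).
do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                    options: [.defaultToSpeaker, .allowBluetoothA2DP])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Failed to configure AVAudioSession: \(error)")
}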
I am using AVFoundation / AudioKit to record from the internal microphone of the iPhone / iPad. It should be possible to continue using the app after switching the output between Bluetooth A2DP and the internal speaker, while the microphone continues to take its input from the device's internal microphone. And it does. Everything works fine until I want to change the output device.
func basicAudioSetup(){
// microphone
self.microphone = AKMicrophone()
// select input of device
if let input = AudioKit.inputDevice {
try! self.microphone?.setDevice(input)
}
AKSettings.sampleRate = 44100
AKSettings.channelCount = 2
AKSettings.playbackWhileMuted = true
AKSettings.enableRouteChangeHandling = false
AKSettings.useBluetooth = true
AKSettings.allowAirPlay = true
AKSettings.defaultToSpeaker = true
AKSettings.audioInputEnabled = true
// init DSP
self.dsp = AKClock(amountSamples: Int32(self.amountSamples), amountGroups: Int32(self.amountGroups), bpm: self.bpm, iPad:self.iPad)
self.masterbusTracker = AKAmplitudeTracker(self.dsp)
self.mixer.connect(input: self.masterbusTracker)
self.player = AKPlayer()
self.mixer.connect(input: self.player)
self.microphone?.stop()
self.microphoneTracker = AKAmplitudeTracker(self.microphone)
self.microphoneTracker?.stop()
self.microphoneRecorder = try! AKNodeRecorder(node: self.microphone)
self.microphoneMonitoring = AKBooster(self.microphoneTracker)
self.microphoneMonitoring?.gain = 0
self.mixer.connect(input: self.microphoneMonitoring)
AudioKit.output = self.mixer
// the following line is actually happening inside a customized AudioKit.start() function to make sure that only BluetoothA2DP is used for better sound quality:
try? AKSettings.setSession(category: .playAndRecord, with: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .mixWithOthers])
do {
try AudioKit.start()
}catch{
print("AudioKit did not start")
}
// adding Notifications to manually restart the engine.
NotificationCenter.default.addObserver(self, selector: #selector(self.audioRouteChangeListener(notification:)), name: NSNotification.Name.AVAudioSessionRouteChange, object: nil)
}
@objc func audioRouteChangeListener(notification: NSNotification) {
let audioRouteChangeReason = notification.userInfo![AVAudioSessionRouteChangeReasonKey] as! UInt
let checkRestart = {
print("ROUTE CHANGE")
do{
try AudioKit.engine.start()
}catch{
print("error rebooting engine")
}
}
if audioRouteChangeReason == AVAudioSessionRouteChangeReason.newDeviceAvailable.rawValue ||
audioRouteChangeReason == AVAudioSessionRouteChangeReason.oldDeviceUnavailable.rawValue{
if Thread.isMainThread {
checkRestart()
} else {
DispatchQueue.main.async(execute: checkRestart)
}
}
}
I noticed that when the microphone is connected, AVAudioSessionRouteChange is never called when switching from the internal speaker to Bluetooth. I do receive this message when starting with Bluetooth and when switching from Bluetooth to the internal speaker:
[AVAudioEngineGraph.mm:1481:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0))
What exactly does this message mean? I tried everything, from manually disconnecting all inputs of the engine and de-/re-activating the session to rebuilding the whole chain. Nothing works.
Theoretically the input source is not changing, because it stays on the device's built-in input. Any help is highly appreciated.
FYI: I am using a customized version of the AudioKit library in which I removed its internal AVAudioSessionRouteChange notifications to avoid unwanted duplicates. This customized library also sets the session category and options internally for the same reason, and to ensure that only BluetoothA2DP is used.
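For completeness, the de-/re-activation variant I mentioned above looks roughly like this (simplified; it fails with the same error):
// Simplified version of one restart attempt on a route change.
@objc func restartAfterRouteChange() {
    do {
        try AudioKit.stop()
        try AVAudioSession.sharedInstance().setActive(false)
        try AKSettings.setSession(category: .playAndRecord,
                                  with: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .mixWithOthers])
        try AVAudioSession.sharedInstance().setActive(true)
        try AudioKit.start()
    } catch {
        print("error rebooting engine: \(error)")
    }
}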
I'm trying to record audio using the built-in microphone and simultaneously play it back through a remote (Bluetooth) speaker. I'm using AudioKit as follows:
import UIKit
import AudioKit
class ViewController: UIViewController {
let session = AVAudioSession.sharedInstance()
let mic = AKMicrophone()
let reverb = AKReverb()
override func viewDidLoad() {
super.viewDidLoad()
mic >>> reverb
AudioKit.output = reverb
AKSettings.ioBufferDuration = 0.002
}
@IBAction func buttonWasPressed() {
printDevices()
try! AudioKit.start()
printDevices()
}
@IBAction func buttonWasReleased() {
try! AudioKit.stop()
}
func printDevices() {
// List of output devices:
if let outputs = AudioKit.outputDevices {
print("Outputs:")
dump(outputs)
}
}
}
The problem is that even when a Bluetooth speaker is connected, after executing AudioKit.start() the only available output device is the built-in receiver (so there's no way to change the AudioKit.output property).
Another problem is that right after the app launches, it also fails to list the remote speaker among the output devices; once the app is re-opened, it starts to work properly.
So I wonder: is there a way to simultaneously use the built-in mic and a remote speaker? And a way to stop having to re-open the app after every launch? -_-
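To make the goal concrete, the kind of session setup I am hoping for before AudioKit.start() is something along these lines (just a sketch on my side, not verified to work):
// Sketch (untested): request play-and-record with Bluetooth A2DP before starting AudioKit,
// so Bluetooth outputs become routable while the built-in mic stays the input.
AKSettings.useBluetooth = true
do {
    try AKSettings.setSession(category: .playAndRecord, with: [.defaultToSpeaker, .allowBluetoothA2DP])
} catch {
    print("Failed to set session category: \(error)")
}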
Thanks a lot in advance!
I found this sample code online: https://developer.apple.com/library/content/samplecode/AVCam/Introduction/Intro.html
I am trying to change the input microphone from the default microphone to the bottom microphone on an iPhone. Does anyone have any experience going about this in Swift? The only examples I've found were in Obj-C and caused errors when I implemented them. I'm using Apple's AVCam sample app for reference; the audio part is included below.
// Add audio input.
do {
let audioDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
let audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice)
if session.canAddInput(audioDeviceInput) {
session.addInput(audioDeviceInput)
}
else {
print("Could not add audio device input to the session")
}
}
catch {
print("Could not create audio device input: \(error)")
}
You should try setting the category of the audio session using something like:
try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, with: .defaultToSpeaker)
This should make it use the bottom microphone by default.
If you only need audio you should use AVAudioSession - https://developer.apple.com/reference/avfoundation/avaudiosession
Untested sample code you could play around with:
import AVFoundation
.
.
private var session: AVAudioSession!
private var input: AVAudioSessionPortDescription!
.
.
.
session = AVAudioSession.sharedInstance()
do {
try session.setCategory(AVAudioSessionCategoryRecord)
// Fetch Built in Mic
if let availableInputs = session.availableInputs {
for inputSource in availableInputs {
if inputSource.portType == AVAudioSessionPortBuiltInMic {
input = inputSource
break
}
}
// Set preferred data source by location
if let dataSources = input.dataSources {
for dataSource in dataSources {
if dataSource.location == AVAudioSessionLocationLower {
try input.setPreferredDataSource(dataSource)
break
}
}
}
try session.setPreferredInput(input)
.
.
} catch {
....
}
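One follow-up note on the sketch above: after setting the preferred input and data source, you would typically still need to activate the session before recording, e.g. (still inside the do/catch):
// Activate the session so the preferred input/data source takes effect.
try session.setActive(true)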