I'm trying to record audio using the Built-In Microphone and play it back simultaneously through a Remote Speaker. I'm using AudioKit as follows:
import UIKit
import AudioKit

class ViewController: UIViewController {

    let session = AVAudioSession.sharedInstance()
    let mic = AKMicrophone()
    let reverb = AKReverb()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Route the microphone through the reverb and out.
        mic >>> reverb
        AudioKit.output = reverb
        AKSettings.ioBufferDuration = 0.002
    }

    @IBAction func buttonWasPressed() {
        printDevices()
        try! AudioKit.start()
        printDevices()
    }

    @IBAction func buttonWasReleased() {
        try! AudioKit.stop()
    }

    func printDevices() {
        // List the available output devices:
        if let outputs = AudioKit.outputDevices {
            print("Outputs:")
            dump(outputs)
        }
    }
}
The problem is that even when a Bluetooth speaker is connected, after executing AudioKit.start() the only available output device is Built-In Receiver (so there's no way to change the AudioKit.output property).
Another problem is that right after the app launches it also fails to list the remote speaker among the output devices; once the app is re-opened, it starts to work properly.
So I wonder: is there a way to use the Built-In Mic and a Remote Speaker simultaneously? And is there a way to stop having to re-open the app after every launch?
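For reference, here's a minimal session-configuration sketch that I would expect to make Bluetooth outputs visible. This assumes the missing .allowBluetoothA2DP category option is the culprit, which I haven't verified:

// Hypothetical fix: configure the shared session before calling AudioKit.start().
// .allowBluetoothA2DP is an assumption; it allows output to route to A2DP
// devices such as Bluetooth speakers.
do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                    options: [.defaultToSpeaker, .allowBluetoothA2DP])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Session setup failed: \(error)")
}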
Thanks a lot in advance!
Related
I am trying to migrate an app from AudioKit v4 to v5 and I am having a hard time finding documentation on the migration, and I can't find these in the Cookbook. Previously we could set defaultToSpeaker and audioInputEnabled through AKSettings. Now these properties are gone and I can't find out how to replace them.
v4:
AKSettings.audioInputEnabled = true
AKSettings.defaultToSpeaker = true
Does anyone know how these parameters can be set with the new version? Any feedback is highly appreciated!
Nazarii,
In AudioKit 5, here's how I set up my audio input parameters:
import AudioKit
import AVFoundation

class Conductor {

    static let sharedInstance = Conductor()

    // Instantiate the audio engine and mic input node objects.
    let engine = AudioEngine()
    var mic: AudioEngine.InputNode!

    // Add effects for the mic input.
    var delay: Delay!
    var reverb: Reverb!
    let mixer = Mixer()

    // MARK: Initialize the audio engine settings.

    init() {
        // AVAudioSession requires the AVFoundation framework to be imported.
        do {
            Settings.bufferLength = .medium
            try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(Settings.bufferLength.duration)
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                            options: [.defaultToSpeaker, .mixWithOthers, .allowBluetoothA2DP])
            try AVAudioSession.sharedInstance().setActive(true)
        } catch let err {
            print(err)
        }

        // The audio signal path will be:
        // input > mic > delay > reverb > mixer > output

        // Mic is connected to the audio engine's input...
        mic = engine.input

        // Mic goes into the delay...
        delay = Delay(mic)
        delay.time = AUValue(0.5)
        delay.feedback = AUValue(30.0)
        delay.dryWetMix = AUValue(15.0)

        // Delay output goes into the reverb...
        reverb = Reverb(delay)
        reverb.loadFactoryPreset(.largeHall2)
        reverb.dryWetMix = AUValue(0.4)

        // Reverb output goes into the mixer...
        mixer.addInput(reverb)

        // Engine output is connected to the mixer.
        engine.output = mixer

        // Uncomment the following line if you don't want to start and stop
        // the audio engine via the SceneDelegate.
        // startAudioEngine()
    }

    // MARK: Start and stop the audio engine via the SceneDelegate.

    func startAudioEngine() {
        do {
            try engine.start()
            print("Audio engine was started.")
        } catch {
            Log("AudioKit did not start! \(error)")
        }
    }

    func stopAudioEngine() {
        engine.stop()
        print("Audio engine was stopped.")
    }
}
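In case it helps, here's a sketch of the SceneDelegate hookup; the scene-lifecycle methods below are standard UIKit, but treat the exact placement as an assumption rather than part of the Conductor itself:

import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {

    var window: UIWindow?

    func sceneDidBecomeActive(_ scene: UIScene) {
        // Start the engine when the scene becomes active.
        Conductor.sharedInstance.startAudioEngine()
    }

    func sceneWillResignActive(_ scene: UIScene) {
        // Stop the engine when the scene is backgrounded.
        Conductor.sharedInstance.stopAudioEngine()
    }
}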
Please let me know if this works for you.
Take care,
Mark
I currently have a pair of Bluetooth earbuds. From this post, I have the code needed to retrieve audio from the Bluetooth earbuds' microphone and then play it back through the Bluetooth earbuds. However, I want to modify the code so that I can retrieve audio from the Bluetooth earbuds' microphone and then play it back through the phone's INTERNAL speaker / through any other pair of earbuds that may be physically connected to the phone. How would I go about doing that? This is my current code:
import UIKit
import AVFoundation

class PlayRecordVC: UIViewController, AVAudioRecorderDelegate {

    let audioSession = AVAudioSession.sharedInstance()
    let player = AVAudioPlayerNode()
    let engine = AVAudioEngine()

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            try audioSession.setCategory(.playAndRecord, mode: .default, options: [.allowBluetooth])
            try audioSession.overrideOutputAudioPort(.speaker)
            try audioSession.setActive(true)
        } catch {
            print(error.localizedDescription)
        }

        let input = engine.inputNode
        engine.attach(player)

        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)
        engine.connect(player, to: engine.mainMixerNode, format: inputFormat)

        // Tap the mic input and schedule each buffer on the player.
        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            self.player.scheduleBuffer(buffer)
            print(buffer)
        }
    }

    @IBAction func start(_ sender: UIButton) {
        try! engine.start()
        player.play()
    }

    @IBAction func stop(_ sender: UIButton) {
        engine.stop()
        player.stop()
    }
}
UPDATE:
When I add the line audioSession.overrideOutputAudioPort(.speaker), no audio plays at all (neither from the Bluetooth earbuds nor from the phone's internal speaker).
I haven't tested whether this actually keeps the headphones' microphone in use, but to route the output to the speaker instead of the headphones by default, this should work:
try! audioSession.overrideOutputAudioPort(.speaker)
Here is the documentation of the method.
There is also the .defaultToSpeaker category option you can give to the audio session to get the same effect while also preventing it from being reset by route changes (like attaching new headphones).
If your test shows that this also moves the recording to the internal microphone, then I think there is no other way (at least none that I found).
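For completeness, a minimal sketch of that category-option approach (assuming .defaultToSpeaker is indeed the option in question):

// Sketch: set .defaultToSpeaker once on the category instead of overriding
// the output port manually after every route change.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)
} catch {
    print("Session configuration failed: \(error)")
}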
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playAndRecord, options: .allowBluetooth)
    try audioSession.setMode(.default)
    try audioSession.setActive(true)
} catch {
    print(error)
}
You need to define the correct mode, I believe.
I'm not exactly sure which mode will work best in your case.
https://developer.apple.com/documentation/avfoundation/avaudiosession/mode
The link below has an extension for checking whether a microphone is plugged in.
https://stackoverflow.com/a/52460651/8272698
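As a rough sketch of what such a check can look like (my own version based on the current route, not the code from the linked answer):

import AVFoundation

extension AVAudioSession {
    // Returns true if wired headphones appear in the current output route.
    var isHeadphonesConnected: Bool {
        return currentRoute.outputs.contains { $0.portType == .headphones }
    }
}

// Usage:
// if AVAudioSession.sharedInstance().isHeadphonesConnected { ... }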
I'm currently making a metronome app. It uses AKMetronome for the core part since it has a callback function; I need the callback to visualize the beat, and I also want to use it to trigger a customized click sound (I'll mute the original AKMetronome sound, so it will still play, just silently).
Now I have two wav files with the customized click sound. Two AKPlayers will play them, and the AKPlayers need to be triggered in every AKMetronome beat callback.
Since the AKPlayers and the AKMetronome all need to play, I put them in an AKMixer, like this:
let mixer = AKMixer(playerA, playerB, metronome)
AudioKit.output = mixer
and the callback will call this func:
func playAudio() {
    playerA.play()
    playerB.play()
}
Then, when playerA.play() is executed, it crashes.
This is the error message:
AURemoteIO::IOThread (11): EXC_BAD_ACCESS (code=1, address=0xffff9ffffdb0e360)
If I only put one of the AKPlayer objects or the AKMetronome in the AKMixer, it works fine.
I can't understand the error message, and I don't know why this happens.
Any help would be appreciated.
Here is the full code:
var playerA: AKPlayer!
var playerB: AKPlayer!
var clickA: AKAudioFile?
var clickB: AKAudioFile?
var metronome: AKMetronome?

override func viewDidLoad() {
    super.viewDidLoad()
    prepareAudio()
    metronome!.start()
}

func prepareAudio() {
    clickA = try? AKAudioFile(readFileName: "Click1A.wav")
    clickB = try? AKAudioFile(readFileName: "Click1B.wav")

    playerA = AKPlayer(audioFile: clickA!)
    playerA.buffering = .always
    playerB = AKPlayer(audioFile: clickB!)
    playerB.buffering = .always

    // Metronome setup
    metronome = AKMetronome()
    metronome!.subdivision = 4
    metronome!.frequency1 = 1000
    metronome!.frequency2 = 800
    metronome!.tempo = 60
    metronome!.callback = {
        self.playAudio()
    }

    let mixer = AKMixer(playerA, playerB, metronome)
    AudioKit.output = mixer
    do {
        try AudioKit.start()
    } catch {
        print("AudioKit failed to start!")
    }
}

func playAudio() {
    playerA.play()
    playerB.play()
}
I am using AudioKit to monitor frequency for a simple guitar tuner application and am experiencing discrepancies in the reported frequency after updating from AudioKit ~4.2 to 4.4, Xcode 9.x to 10, and iOS 11 to 12. Before the updates, I was getting correct frequency readings on my device. After updating, I get accurate results for a low E (82.4 Hz) on the simulator, but false readings on the device (it alternates between ~23 and ~47 kHz).
I have tried using another device, but achieve the same results.
My viewDidLoad() setting up AudioKit is relatively simple, and I used the AudioKit playgrounds as a guideline:
override func viewDidLoad() {
    super.viewDidLoad()

    // Enable microphone tracking.
    AKSettings.audioInputEnabled = true
    let mic = AKMicrophone()
    let tracker = AKFrequencyTracker(mic)
    let silence = AKBooster(tracker, gain: 0)
    AudioKit.output = silence
    do {
        try AudioKit.start()
    } catch {
        print("AudioKit did not start!")
    }
    mic.start()
    tracker.start()

    // Track the input frequency at 100 ms intervals.
    timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] (timer) in
        guard let this = self else { return }
        this.frequencyLabel.text = String(format: "Frequency: %.3f Hz", tracker.frequency)
        this.frequencyLabel.sizeToFit()
    }
}
As a sidenote, I am getting Objective-C console output regarding AudioKit classes being implemented in two places. Would this contribute to the issue?
objc[517]: Class AKRhodesPianoAudioUnit is implemented in both /private/var/containers/Bundle/Application/5A294050-2DB2-45C9-BB0A-3A0DE25E87C6/Tuner.app/Frameworks/AudioKitUI.framework/AudioKitUI (0x1058413f0) and /var/containers/Bundle/Application/5A294050-2DB2-45C9-BB0A-3A0DE25E87C6/Tuner.app/Tuner (0x104e177e8). One of the two will be used. Which one is undefined.
Any ideas? Thanks in advance!
I'm building a game that uses the AudioKit framework to detect the frequency of sound received by the mic. I set it up as follows:
import SpriteKit
import AudioKit

class GameScene: SKScene {

    var mic: AKMicrophone!
    var tracker: AKFrequencyTracker!
    var silence: AKBooster!
    let mixer = AKMixer()

    override func didMove(to view: SKView) {
        mic = AKMicrophone()
        tracker = AKFrequencyTracker(mic)
        silence = AKBooster(tracker, gain: 0)
        mixer.connect(silence)
        AudioKit.output = mixer
        try? AudioKit.start()
    }
}
I would also like to use SKAction.playAudioFileNamed for the playback of sound effects etc., but when I use it, the playback volume is very low. I assume it has something to do with the scene's mixer node and the AKMixer? Playing sound files through AudioKit is far more complicated than I need.
Do I need to make an extension of SKScene? Help would be very much appreciated!
It seems that Aurelius was correct in that the AVAudioSession output route was being directed to the headset. I'm still not sure why this was the case, but overriding and setting the output worked as follows:
let session = AVAudioSession.sharedInstance()
do {
    try session.overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
} catch {
    print("error setting output")
}
This needs to be done after initializing AudioKit components. If there's a better way of doing this, please let me know!
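One possible refinement (a sketch, assuming a .playAndRecord session; I haven't verified it in this project): bake the routing into the category options so it survives route changes instead of overriding the port manually:

// Sketch: .defaultToSpeaker defaults output to the built-in speaker for a
// .playAndRecord session, so no manual overrideOutputAudioPort call is needed.
do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                    options: [.defaultToSpeaker])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("error configuring session: \(error)")
}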