AudioKit for iOS: Frequency Discrepancy on Simulator vs Device - ios

I am using AudioKit to monitor frequency for a simple guitar tuner application and am experiencing discrepancies in frequency after updating from AudioKit ~4.2 to 4.4, Xcode 9.x to 10, and iOS 11 to 12. Before the updates, I was getting correct frequency readings on my device. After updating, I get accurate results for a low E1 (82.4 Hz) on the simulator, but false readings on the device (the value alternates between roughly 23 and 47 kHz).
I have tried using another device, but I get the same results.
My viewDidLoad() setting up AudioKit is relatively simple, and I used the AudioKit playgrounds as a guideline:
override func viewDidLoad() {
    super.viewDidLoad()

    // Enable microphone tracking.
    AKSettings.audioInputEnabled = true
    let mic = AKMicrophone()
    let tracker = AKFrequencyTracker(mic)
    let silence = AKBooster(tracker, gain: 0)
    AudioKit.output = silence

    do {
        try AudioKit.start()
    } catch {
        print("AudioKit did not start!")
    }

    mic.start()
    tracker.start()

    // Track input frequency at 100 ms intervals.
    timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
        guard let this = self else { return }
        this.frequencyLabel.text = String(format: "Frequency: %.3f Hz", tracker.frequency)
        this.frequencyLabel.sizeToFit()
    }
}
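For reference, one way to at least filter obviously spurious values is to gate the readout on the tracker's amplitude; this is only a minimal sketch (the 0.01 threshold is an arbitrary assumption) and does not explain the simulator/device difference:

// Only trust the tracker when there is an actual signal; the 0.01 threshold
// is a guess and would need tuning per device and microphone.
timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
    guard let this = self else { return }
    if tracker.amplitude > 0.01 {
        this.frequencyLabel.text = String(format: "Frequency: %.3f Hz", tracker.frequency)
    } else {
        this.frequencyLabel.text = "Frequency: -"
    }
    this.frequencyLabel.sizeToFit()
}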
As a side note, I am getting Objective-C console output about AudioKit classes being implemented in two places. Could this contribute to the issue?
objc[517]: Class AKRhodesPianoAudioUnit is implemented in both /private/var/containers/Bundle/Application/5A294050-2DB2-45C9-BB0A-3A0DE25E87C6/Tuner.app/Frameworks/AudioKitUI.framework/AudioKitUI (0x1058413f0) and /var/containers/Bundle/Application/5A294050-2DB2-45C9-BB0A-3A0DE25E87C6/Tuner.app/Tuner (0x104e177e8). One of the two will be used. Which one is undefined.
Any ideas? Thanks in advance!

Related

AVAudioSessionRouteChange Audiokit crashes when Bluetooth connection is turned on / off

I am using AVFoundation / AudioKit in order to record from the internal microphone of the iPhone / iPad. It should be possible to keep using the app after switching the output between BluetoothA2DP and the internal speaker, while the input keeps coming from the internal microphone of the device. And it does: everything works fine, but only until I change the output device.
func basicAudioSetup() {
    // Microphone
    self.microphone = AKMicrophone()

    // Select the device's input
    if let input = AudioKit.inputDevice {
        try! self.microphone?.setDevice(input)
    }

    AKSettings.sampleRate = 44100
    AKSettings.channelCount = 2
    AKSettings.playbackWhileMuted = true
    AKSettings.enableRouteChangeHandling = false
    AKSettings.useBluetooth = true
    AKSettings.allowAirPlay = true
    AKSettings.defaultToSpeaker = true
    AKSettings.audioInputEnabled = true

    // Init DSP
    self.dsp = AKClock(amountSamples: Int32(self.amountSamples), amountGroups: Int32(self.amountGroups), bpm: self.bpm, iPad: self.iPad)
    self.masterbusTracker = AKAmplitudeTracker(self.dsp)
    self.mixer.connect(input: self.masterbusTracker)

    self.player = AKPlayer()
    self.mixer.connect(input: self.player)

    self.microphone?.stop()
    self.microphoneTracker = AKAmplitudeTracker(self.microphone)
    self.microphoneTracker?.stop()
    self.microphoneRecorder = try! AKNodeRecorder(node: self.microphone)
    self.microphoneMonitoring = AKBooster(self.microphoneTracker)
    self.microphoneMonitoring?.gain = 0
    self.mixer.connect(input: self.microphoneMonitoring)

    AudioKit.output = self.mixer

    // The following line actually happens inside a customized AudioKit.start() function,
    // to make sure that only BluetoothA2DP is used for better sound quality:
    try AKSettings.setSession(category: .playAndRecord, with: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .mixWithOthers])

    do {
        try AudioKit.start()
    } catch {
        print("AudioKit did not start")
    }

    // Adding a notification to manually restart the engine.
    NotificationCenter.default.addObserver(self, selector: #selector(self.audioRouteChangeListener(notification:)), name: NSNotification.Name.AVAudioSessionRouteChange, object: nil)
}

@objc func audioRouteChangeListener(notification: NSNotification) {
    let audioRouteChangeReason = notification.userInfo![AVAudioSessionRouteChangeReasonKey] as! UInt

    let checkRestart = {
        print("ROUTE CHANGE")
        do {
            try AudioKit.engine.start()
        } catch {
            print("error rebooting engine")
        }
    }

    if audioRouteChangeReason == AVAudioSessionRouteChangeReason.newDeviceAvailable.rawValue ||
        audioRouteChangeReason == AVAudioSessionRouteChangeReason.oldDeviceUnavailable.rawValue {
        if Thread.isMainThread {
            checkRestart()
        } else {
            DispatchQueue.main.async(execute: checkRestart)
        }
    }
}
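For comparison, a fuller restart of AudioKit itself (rather than only AudioKit.engine) can be sketched in the same closure; this is untested here and not a confirmed fix:

// Alternative restart: tear the whole AudioKit engine down and bring it back up.
// Whether this avoids the kAUStartIO error below is not verified.
let checkRestart = {
    print("ROUTE CHANGE")
    do {
        try AudioKit.stop()
        try AudioKit.start()
    } catch {
        print("error rebooting engine: \(error)")
    }
}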
I noticed that when the microphone is connected, AVAudioSessionRouteChange is never called when switching from the internal speaker to Bluetooth. I do receive messages when starting with Bluetooth and when switching from Bluetooth to the internal speaker:
[AVAudioEngineGraph.mm:1481:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0))
What does this message mean exactly? I tried everything, from manually disconnecting all inputs of the engine and deactivating/reactivating the session, to rebuilding the whole chain. Nothing works.
In theory the input source is not changing, because it stays on the phone's built-in input. Any help is highly appreciated.
FYI: I am using a customized version of the AudioKit library in which I removed its internal AVAudioSessionRouteChange notifications to avoid unwanted duplicates. This customized library also sets the session category and options internally for the same reason, and to ensure that only BluetoothA2DP is used.

How Can I Add DeviceMotion Capabilities to a Swift Playground?

I am working on a Swift playground and I am trying to use this code to get the device motion.
@objc func update() {
    if let deviceMotion = motionManager.deviceMotion {
        print("Device Motion Yaw: \(deviceMotion.attitude.yaw)")
    }
}
However, it seems that device motion does not work in a Swift playground even though it works in an iOS app. How would I change a playground to support device motion? I am using an iPad running iOS 12 with the latest version of Swift Playgrounds, and a Mac for writing the code. I know the method gets called, and the code runs perfectly when it is part of an iOS app on both an iPad and an iPhone. How would I modify a playground to support this, since from my understanding it does not by default?
It is entirely possible. I've done it on several occasions. You'll need a CMMotionManager instance. There are many ways to do this, but I would recommend using a timer. Here is some example code, taken from Apple's developer documentation and modified to fit the question.
let motion = CMMotionManager()

func startDeviceMotion() {
    if motion.isDeviceMotionAvailable {
        // How often to push updates
        self.motion.deviceMotionUpdateInterval = 1.0 / 60.0
        self.motion.showsDeviceMovementDisplay = true
        self.motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical)

        // Configure a timer to fetch the motion data.
        self.timer = Timer(fire: Date(), interval: (1.0 / 60.0), repeats: true,
                           block: { (timer) in
            if let data = self.motion.deviceMotion {
                let x = data.attitude.pitch
                let y = data.attitude.roll
                let z = data.attitude.yaw
                // Use the data
            }
        })
        RunLoop.current.add(self.timer!, forMode: RunLoop.Mode.default)
    }
}

startDeviceMotion()
Either do that, or try something like this, also from the documentation:
func startQueuedUpdates() {
    if motion.isDeviceMotionAvailable {
        self.motion.deviceMotionUpdateInterval = 1.0 / 60.0
        self.motion.showsDeviceMovementDisplay = true
        self.motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                             to: self.queue, withHandler: { (data, error) in
            // Make sure the data is valid before accessing it.
            if let validData = data {
                // Get the attitude relative to the magnetic north reference frame.
                let roll = validData.attitude.roll
                let pitch = validData.attitude.pitch
                let yaw = validData.attitude.yaw
                // Use the motion data in your app.
            }
        })
    }
}
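Both snippets assume surrounding properties (motion, timer, queue) declared elsewhere; here is a minimal sketch of that scaffolding, with names chosen purely for illustration:

import CoreMotion

// Hypothetical container for the properties the snippets above reference via `self`.
final class MotionContainer {
    let motion = CMMotionManager()   // device-motion source
    var timer: Timer?                // retains the polling timer
    let queue = OperationQueue()     // delivery queue for the queued-updates variant
}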

iOS - AudioKit: Different devices for input and output

I'm trying to record audio using the Built-In Microphone and play it back simultaneously through a Remote Speaker. I'm using AudioKit as follows:
import UIKit
import AudioKit

class ViewController: UIViewController {
    let session = AVAudioSession.sharedInstance()
    let mic = AKMicrophone()
    let reverb = AKReverb()

    override func viewDidLoad() {
        super.viewDidLoad()
        mic >>> reverb
        AudioKit.output = reverb
        AKSettings.ioBufferDuration = 0.002
    }

    @IBAction func buttonWasPressed() {
        printDevices()
        try! AudioKit.start()
        printDevices()
    }

    @IBAction func buttonWasReleased() {
        try! AudioKit.stop()
    }

    func printDevices() {
        // List of output devices:
        if let outputs = AudioKit.outputDevices {
            print("Outputs:")
            dump(outputs)
        }
    }
}
The problem is that even when a Bluetooth speaker is connected, after executing AudioKit.start() the only available output device is the Built-In Receiver (so there's no way to change the AudioKit.output property).
Another problem is that on first launch the app also fails to list the remote speaker among the output devices; once it has been re-opened, it starts to work properly.
So I wonder: is there a way to use the Built-In Mic and a Remote Speaker simultaneously? And a way to avoid re-opening the app after every launch? -_-
Thanks a lot in advance!
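Not a confirmed fix, but one thing worth checking is the session category and options set before AudioKit.start(); here is a sketch using AKSettings.setSession, where the chosen options are assumptions:

// Sketch: configure the session for simultaneous recording and Bluetooth/speaker
// playback before starting AudioKit. Whether this makes the Bluetooth speaker show
// up in AudioKit.outputDevices on this AudioKit version is not verified here.
do {
    try AKSettings.setSession(category: .playAndRecord,
                              with: [.defaultToSpeaker, .allowBluetoothA2DP, .mixWithOthers])
    try AudioKit.start()
} catch {
    print("session/start error: \(error)")
}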

AVAudioUnitSampler generates sinewaves after headphones route change, iOS 11 iPhone

I'm facing a strange issue on iPhone (iOS 11) when using AVAudioUnitSampler.
Let's say I have an AVAudioUnitSampler initialised with a piano sound. Every time I connect or disconnect the headphones, I hear the piano sound plus a sine-wave tone added to it, which gets louder the more times I connect/disconnect the headphones.
So, to me it feels as if every time the headphones are plugged or unplugged, a new audio unit sampler is internally attached to the sound output (and, since it is uninitialised, it generates just sine-wave tones).
The following class already shows the problem. Note that I'm using AudioKit to handle MIDI signals and trigger the sampler (although on that end everything seems to work fine, i.e. startNote() and stopNote() get called properly):
class MidiController: NSObject, AKMIDIListener {
    var midi = AKMIDI()
    var engine = AVAudioEngine()
    var samplerUnit = AVAudioUnitSampler()

    override public init() {
        super.init()
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(handleRouteChange),
            name: .AVAudioSessionRouteChange,
            object: nil)
        midi.openInput()
        midi.addListener(self)
        engine.attach(samplerUnit)
        engine.connect(samplerUnit, to: engine.outputNode)
        startEngine()
    }

    func startEngine() {
        if !engine.isRunning {
            do {
                try self.engine.start()
            } catch {
                fatalError("couldn't start engine.")
            }
        }
    }

    @objc func handleRouteChange(notification: NSNotification) {
        let deadlineTime = DispatchTime.now() + .milliseconds(100)
        DispatchQueue.main.asyncAfter(deadline: deadlineTime) {
            self.startEngine()
        }
    }

    func receivedMIDINoteOn(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
        if velocity > 0 {
            samplerUnit.startNote(noteNumber: noteNumber, velocity: 127, channel: 0)
        } else {
            samplerUnit.stopNote(noteNumber: noteNumber, channel: 0)
        }
    }

    func receivedMIDINoteOff(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
        samplerUnit.stopNote(noteNumber: noteNumber, channel: 0)
    }
}
I have forked AudioKit and replaced the HelloWorld example with a minimal project with which I can reproduce this problem.
Also, I couldn't reproduce this problem on iPad under both iOS 9.3 and 11, so this might be an iPhone-specific problem.
Any help or suggestion on how to continue debugging this would be very welcome, I'm quite puzzled with this and I'm not really an expert on iOS audio development.
Thanks!
You might try checking whether the engine is already running in handleRouteChange and, if it is, bounce it (stop and restart) instead of just starting it. Let us know if that works.
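A minimal sketch of that suggestion, reusing the handler from the question (whether it resolves the sine-wave issue is untested here):

// Sketch of the suggested "bounce": stop the engine if it is already running,
// then start it again, instead of only calling start.
@objc func handleRouteChange(notification: NSNotification) {
    let deadlineTime = DispatchTime.now() + .milliseconds(100)
    DispatchQueue.main.asyncAfter(deadline: deadlineTime) {
        if self.engine.isRunning {
            self.engine.stop()
        }
        self.startEngine()
    }
}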

Using AudioKit and SpriteKit audio simultaneously

I'm building a game that uses the AudioKit framework to detect the frequency of sound received by the mic. I set it up as follows:
import SpriteKit
import AudioKit

class GameScene: SKScene {
    var mic: AKMicrophone!
    var tracker: AKFrequencyTracker!
    var silence: AKBooster!
    let mixer = AKMixer()

    override func didMove(to view: SKView) {
        mic = AKMicrophone()
        tracker = AKFrequencyTracker.init(mic)
        silence = AKBooster(tracker, gain: 0)
        mixer.connect(silence)
        AudioKit.output = mixer
        AudioKit.start()
    }
}
I would also like to use SKAction.playAudioFileNamed for the playback of sound effects etc., but when I use it, the playback volume is very low. I assume it has something to do with the scene's mixer node and the AKMixer? Playing sound files through AudioKit is far more complicated than I need.
Do I need to make an extension of SKScene? Help would be very much appreciated!
It seems that Aurelius was correct in that the audio session's output route was being directed to the headset. I'm still not sure why this was the case, but overriding and setting the output worked as follows:
let session = AVAudioSession.sharedInstance()
do {
    try session.overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
} catch {
    print("error setting output")
}
This needs to be done after initializing AudioKit components. If there's a better way of doing this, please let me know!
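As a possible alternative (not verified for this setup), AKSettings has a defaultToSpeaker flag that can be set before starting AudioKit, which may avoid the manual port override:

// Sketch: route the play-and-record session to the speaker via AKSettings
// before starting AudioKit, instead of overriding the port afterwards.
AKSettings.defaultToSpeaker = true
do {
    try AudioKit.start()
} catch {
    print("AudioKit did not start: \(error)")
}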
