AudioKit - audio engine configuration changes periodically - ios

I have an app that uses samplers to play loops. I am in the process of converting it from AVAudioEngine to AudioKit. The app now works well except for this: roughly every 1-3 minutes, it receives two .AVAudioEngineConfigurationChange notifications in a row. There is no apparent pattern to when this recurs, and it happens on both my iPhone 6s and a new iPad.
Here is my init code for my "conductor" singleton:
init() {
    // Sampler array, cycled through as the user changes sounds
    samplerArray = [sampler0, sampler1, sampler2, sampler3]
    // Start by loading each sampler with the default preset
    for sampler in samplerArray {
        // Get the sampler preset
        let presetPath = Bundle.main.path(forResource: currentSound, ofType: "aupreset")
        let presetURL = NSURL.fileURL(withPath: presetPath!)
        do {
            try sampler.samplerUnit.loadPreset(at: presetURL)
            print("rrob: loaded sample")
        } catch {
            print("rrob: failed to load sample")
        }
    }
    // Signal chain
    samplerMixer = AKMixer(samplerArray)
    filter = AKMoogLadder(samplerMixer)
    reverb = AKCostelloReverb(filter)
    reverbMixer = AKDryWetMixer(filter, reverb, balance: 0.3)
    outputMixer = AKMixer(reverbMixer)
    AudioKit.output = outputMixer
    //AKSettings.enableRouteChangeHandling = false
    AKSettings.playbackWhileMuted = true
    do {
        try AKSettings.setSession(category: AKSettings.SessionCategory.playback,
                                  with: AVAudioSessionCategoryOptions.mixWithOthers)
    } catch {
        print("rrob: failed to set audio session")
    }
    // AudioBus-recommended buffer length
    AKSettings.bufferLength = .medium
    AudioKit.start()
    print("rrob: did init autoEngine")
}
Any AudioKit experts have ideas for where I can start troubleshooting? Happy to provide more info. Thanks.
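One place to start is to observe the notification yourself and dump the session's current route and format each time it fires, so you can correlate the configuration changes with route or sample-rate changes. A minimal sketch (the log prefix is just illustrative, matching the style above):

```swift
import AVFoundation

// Log every engine configuration change together with the audio route
// and session format at that moment.
NotificationCenter.default.addObserver(
    forName: .AVAudioEngineConfigurationChange,
    object: nil,
    queue: .main
) { note in
    let session = AVAudioSession.sharedInstance()
    print("rrob: config change from \(String(describing: note.object))")
    print("rrob: current route: \(session.currentRoute)")
    print("rrob: sample rate: \(session.sampleRate), IO buffer: \(session.ioBufferDuration)")
}
```

If the logged sample rate or route differs between the two back-to-back notifications, that points at the session being reconfigured underneath the engine rather than at AudioKit itself.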

AudioKit v5 output problems, no sound when AVAudioSession defaultToSpeaker is used

EDIT #2: OK, I missed something big here, but I still have a problem. The reason the sound is soft and I have to amplify it is that it is coming from the earpiece, not the speaker. When I add the .defaultToSpeaker option to setCategory, I get no sound at all.
So this is the real problem: when I set the category to .playAndRecord with the .defaultToSpeaker option, why do I get no sound at all on a real phone? In addition to no sound, I get no input from the mic either. The sound is fine in the simulator.
EDIT #3: I began observing route changes and my code reports the following when the .defaultToSpeaker option is included.
2020-12-26 12:17:56.212366-0700 SST[13807:3950195] Current route:
2020-12-26 12:17:56.213275-0700 SST[13807:3950195] <AVAudioSessionRouteDescription: 0x2816af8e0,
inputs = (
"<AVAudioSessionPortDescription: 0x2816af900, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
);
outputs = (
"<AVAudioSessionPortDescription: 0x2816af990, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
)>
The output is set to Speaker. Is it significant that the selectedDataSource is (null)? Before the .defaultToSpeaker option was added this reported output set to Receiver, also with selectedDataSource = (null), so I would guess not.
EDIT: I added the code to set the Audio Session category. The new code is shown below. So far it seems to have no effect. If I leave it in or comment it out, I don't see any difference. I also have code (that I deleted here for simplicity) that modifies the microphone pattern. That too had no discernible effect. Perhaps though, that is to be expected?
In addition to the symptoms below, if I use Settings/Bluetooth to select the AirPods, I get no output from the app, even after I remove the AirPods.
What am I missing here?
/EDIT
After getting this to work well on the simulator, I moved to debugging on my 11 Pro Max. When playing notes on the MandolinString, the sound from the simulator (11 Pro Max or 8) is loud and clear. On the real phone, the sound is barely audible and comes from the phone speaker only. It does not go to an attached audio device, whether that is a HomePod or AirPods. Is this a v5 bug? Do I need to do something with the output?
A second, less important issue is that when I instantiate this object, the MandolinString triggers without my calling anything. The extra fader, and resetting the gain from 0 to 1 after a delay, suppresses this sound.
private let engine = AudioEngine()
private let mic : AudioEngine.InputNode
private let micAmp : Fader
private let mixer1 : Mixer
private let mixer2 : Mixer
private let silence : Fader
private let stringAmp : Fader
private var pitchTap : PitchTap
private var tockAmp : Fader
private var metro = Timer()
private let sampler = MIDISampler(name: "click")
private let startTime = NSDate.timeIntervalSinceReferenceDate
private var ampThreshold: AUValue = 0.12
private var ampJumpSize: AUValue = 0.05
private var samplePause = 0
private var trackingNotStarted = true
private var tracking = false
private var ampPrev: AUValue = 0.0
private var freqArray: [AUValue] = []
// Implied by the init below but missing from the original listing:
private let pluckedString : MandolinString
private let audioSession = AVAudioSession.sharedInstance()
private var akStartSucceeded = false

init() {
    // Set up mic input and pitch tap
    mic = engine.input!
    micAmp = Fader(mic, gain: 1.0)
    mixer1 = Mixer(micAmp)
    silence = Fader(mixer1, gain: 0)
    mixer2 = Mixer(silence)
    pitchTap = PitchTap(mixer1, handler: { _, _ in })
    // All sound is fed into mixer2
    // Mic input is faded to zero
    // Now add the String sound to mixer2 with a Fader
    pluckedString = MandolinString()
    stringAmp = Fader(pluckedString, gain: 4.0)
    mixer2.addInput(stringAmp)
    // Create a sound for the metronome (tock), add as input to mixer2
    try! sampler.loadWav("Click")
    tockAmp = Fader(sampler, gain: 1.0)
    mixer2.addInput(tockAmp)
    engine.output = mixer2
    self.pitchTap = PitchTap(micAmp, handler: { freq, amp in
        if self.samplePause <= 0 && self.tracking {
            self.samplePause = 0
            self.sample(freq: freq[0], amp: amp[0])
        }
    })
    do {
        //try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement)
        try audioSession.setCategory(AVAudioSession.Category.playAndRecord)
        //, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
        try audioSession.setActive(true)
    } catch let error as NSError {
        print("Unable to create AudioSession: \(error.localizedDescription)")
    }
    do {
        try engine.start()
        akStartSucceeded = true
    } catch {
        akStartSucceeded = false
    }
} // init
Xcode 12, iOS 14, SPM. Everything up to date.
Most likely this is not an AudioKit issue per se; it has to do with AVAudioSession. You probably need to set the session on the device to use .defaultToSpeaker. AudioKit 5 has less automatic session management than version 4, opting to make fewer assumptions and leave the developer in control.
The answer was indeed to add code for AVAudioSession. However, it did not work where I first put it; it only worked once I moved it into the app delegate's didFinishLaunchingWithOptions. I found this in the AudioKit Cookbook. This works:
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.
        #if os(iOS)
        self.audioSetup()
        #endif
        return true
    }

    #if os(iOS)
    func audioSetup() {
        let session = AVAudioSession.sharedInstance()
        do {
            Settings.bufferLength = .short
            try session.setPreferredIOBufferDuration(Settings.bufferLength.duration)
            try session.setCategory(.playAndRecord,
                                    options: [.defaultToSpeaker, .mixWithOthers])
            try session.setActive(true)
        } catch let err {
            print(err)
        }
        // Other AudioSession stuff here
        do {
            try session.setActive(true)
        } catch let err {
            print(err)
        }
    }
    #endif
}

Continuous Sine Wave From AKMIDISampler when AKMicrophone is Present

I’m having a problem using AKMIDISampler in my project when there’s an initialized AKMicrophone. Along with correctly playing the woodblock sample when “play” is called on the sampler, the first time “play” is called a constant sine wave starts playing, and it never stops.
I’ve replicated the problem in the smallest amount of code below. Issue happens when the class is initialized then playTestSample() is called.
Note that if the AKMicrophone related code is all muted the AKMIDISampler plays fine and the sine wave that currently haunts my dreams doesn’t happen.
(I’ve tried switching to use the AKSampler() just to see if the problem would exist there but I haven’t been able to get any sound out of that).
FYI: I have “App plays audio or streams audio/video using AirPlay” in the “Required background modes” in Info.plist, which is known to fix another sine wave issue.
Thank you very much for any assistance.
Btw: AudioKit rocks and has been a massive help on this project! :^)
AK 4.5.4
Xcode 10.1
import Foundation
import AudioKit

class AudioKitTESTManager {
    var mixer = AKMixer()
    var sampler = AKMIDISampler()
    var mic = AKMicrophone()
    var micMixer = AKMixer()
    var micBooster = AKBooster()

    init() {
        mixer = AKMixer(sampler, micBooster)
        do {
            let woodblock = try AKAudioFile(readFileName: RhythmGameConfig.woodblockSoundName)
            try sampler.loadAudioFiles([woodblock])
        } catch {
            print("Error loading audio files into sampler")
        }
        micMixer = AKMixer(mic)
        micBooster = AKBooster(micMixer)
        micBooster.gain = 0.0
        AudioKit.output = mixer
        AKSettings.playbackWhileMuted = true
        AKSettings.defaultToSpeaker = true
        AKSettings.sampleRate = 44100
        do {
            print("Attempting to start AudioKit")
            try AudioKit.start()
        } catch {
            AKLog("AudioKit did not start!")
        }
    }

    func playTestSample() {
        // You hear the sample, and a continuous sine wave starts playing through the mixer
        try? sampler.play(noteNumber: 60, velocity: 90, channel: 1)
    }
}
Wheeew. I believe I've found a solution. Maybe it will help out someone else?
It seems that loading the files into the sampler AFTER AudioKit.start() fixes the Sine Wave of Terror!
//..
do {
    print("Attempting to start AudioKit")
    try AudioKit.start()
} catch {
    AKLog("AudioKit did not start!")
}
do {
    let woodblock = try AKAudioFile(readFileName: RhythmGameConfig.woodblockSoundName)
    try sampler.loadAudioFiles([woodblock])
} catch {
    print("Error loading audio files into sampler")
}

AudioKit's RenderToFile not working correctly

I have an AKSequencer containing an AKMusicTrack whose output is an AKMIDISampler. I load the AKMIDISampler with a SoundFont file.
The problem I'm facing with AudioKit's renderToFile is that the file it creates is empty/silent, or it contains a single note at the very beginning of the file followed by a strange sound for the entire remaining length.
Here's the code for the initialisation
let midiSampler = AKMIDISampler()
let sequencer = AKSequencer()
var midi = AKMIDI() // var, since it is reassigned below
do {
    try midiSampler.loadSoundFont("soundFontFile", preset: 0, bank: 0)
} catch {
    AKLog("Error - Couldn't load Sample!!!")
}
AudioKit.output = midiSampler
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit didn't begin")
}
let drumTrack = sequencer.newTrack("Drum Track")
midi.openInput()
midiSampler.enableMIDI(midi.client, name: "MIDI Sampler MIDI In")
drumTrack.setMIDIOutput(midiSampler.midiIn)
sequencer.setLength(AKDuration(beats: 8))
sequencer.setTempo(136)
sequencer.setRate(40)
midi = AudioKit.midi
Here is how I attempt to renderToFile:
let path = "recordedMIDIAudio.caf"
let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent(path)
let format = AVAudioFormat(commonFormat: .pcmFormatFloat64, sampleRate: 44100, channels: 1, interleaved: true)!
do {
let audioFile = try AKAudioFile(forWriting: url, settings: format.settings, commonFormat: format.commonFormat, interleaved: format.isInterleaved)
try AudioKit.renderToFile(audioFile, duration: 3.55, prerender: {
self.sequencer.play()
})
} catch {
AKLog("Error when converting")
}
I've done quite a lot of research on this particular issue but I've had no luck. Any help or pointers will be greatly appreciated, thanks in advance!
Unfortunately, it's a well-known but probably not well enough documented fact that offline rendering does not work with MIDI-based signal generation. The clock that the MIDI system uses is not sped up to match the faster-than-real-time sample generation that happens when rendering to a file.
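One way to confirm that the offline render path itself works on a given setup is to render a non-MIDI node, e.g. a plain oscillator. A hedged sketch against AudioKit 4 (the file name is illustrative):

```swift
import AudioKit
import AVFoundation

// Offline-render a plain oscillator (no MIDI clock involved) to verify
// that renderToFile itself produces audio.
let osc = AKOscillator(waveform: AKTable(.sine))
AudioKit.output = osc
try AudioKit.start()

let url = FileManager.default.temporaryDirectory.appendingPathComponent("render-test.caf")
let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!
let audioFile = try AKAudioFile(forWriting: url, settings: format.settings)
try AudioKit.renderToFile(audioFile, duration: 2.0, prerender: {
    osc.start()
})
```

If this file contains sound but the sequencer-driven render does not, that is consistent with the MIDI clock limitation described above.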

Disconnecting headphones with audiokit running gives malloc error with AKOperationEffect specifically

// My code is below
do {
    file = try AKAudioFile(readFileName: "Sound1.mp3", baseDir: .resources)
    // file = try AKAudioFile(forReading: SingletonClass.sharedInstance.recordedURLs[SingletonClass.sharedInstance.recordedURL]!)
    // AKSettings.defaultToSpeaker = true
} catch {
    print("Failed to read audio file")
}
do {
    player = try AKAudioPlayer(file: file)
} catch {
    print("Failed to create player")
}
let lfoAmplitude = 1_000
let lfoRate = 1.0 / 3.428
_ = 0.9
// Filter section effect below
filterSectionEffect = AKOperationEffect(tracker) { input, _ in
    let lfo = AKOperation.sineWave(frequency: lfoRate, amplitude: lfoAmplitude)
    return input.moogLadderFilter(cutoffFrequency: lfo + cutoffFrequency,
                                  resonance: resonance)
}
AudioKit.output = filterSectionEffect
try? AudioKit.start()
Whenever I play the audio using a button that calls player.play(), the audio plays properly. It also plays properly when I connect headphones, but as soon as I disconnect them, I see the malloc error from the title.
It happens the same way for both wired and Bluetooth headphones.
My app gets stuck because of this issue, and it happens only with AKOperationEffect. Any help would be appreciated.
The comment from Kunal Verma that this is fixed is correct, but just for completeness here is the commit that fixed it.
https://github.com/AudioKit/AudioKit/commit/ffac4acbe93553764f6095011e9bf5d71fdc88c2

Playing scheduled audio in the background

I am having a really difficult time playing audio in the background of my app. The app is a timer that counts down and plays bells, and everything originally worked using the timer. Since you cannot run a timer for more than 3 minutes in the background, I need to play the bells another way.
The user has the ability to choose bells and set the times for them to play (e.g. play a bell immediately, after 5 minutes, repeat another bell every 10 minutes, etc.).
So far I have tried using notifications via DispatchQueue.main, and this works fine if the user does not pause the timer. If they re-enter the app and pause, though, I cannot seem to cancel or pause the queued work in any way.
Next I tried AVAudioEngine and created a set of nodes. These play while the app is in the foreground but seem to stop upon backgrounding. Additionally, when I pause the engine and resume later, it won't pause the sequence properly: it squishes the bells into playing one right after the other, or not at all.
If anyone has any ideas of how to solve my issue, that would be great. Technically I could remove everything from the engine and recreate it from the paused time when the user pauses/resumes, but this seems quite costly, and it doesn't solve the problem of the audio stopping in the background. I have the required background mode 'App plays audio or streams audio/video using AirPlay', and it is also checked under the background modes in capabilities.
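Besides the background mode entitlement, background playback generally also requires an active AVAudioSession with the .playback category; without it, the system may suspend audio on backgrounding even with the entitlement present. A minimal sketch:

```swift
import AVFoundation

// Background audio needs both the "audio" background mode in Info.plist
// and an active session configured for playback.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
} catch {
    print("Failed to activate audio session: \(error)")
}
```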
Below is a sample of how I tried to set up the audio engine. The registerAndPlaySound method is called several other times to create the chain of nodes (or is this done incorrectly?). The code is kinda messy at the moment because I have been trying many ways to get this to work.
func setupSounds() {
    if attached {
        engine.detach(player)
    }
    engine.attach(player)
    attached = true
    let mixer = engine.mainMixerNode
    engine.connect(player, to: mixer, format: mixer.outputFormat(forBus: 0))
    var bell = ""
    do {
        try engine.start()
    } catch {
        return
    }
    if currentSession.bellObject?.startBell != nil {
        bell = (currentSession.bellObject?.startBell)!
        guard let url = Bundle.main.url(forResource: bell, withExtension: "mp3") else {
            return
        }
        registerAndPlaySound(url: url, delay: warmUpTime)
    }
}

func registerAndPlaySound(url: URL, delay: Double) {
    do {
        let file = try AVAudioFile(forReading: url)
        let format = file.processingFormat
        let capacity = file.length
        // AVAudioPCMBuffer's initializer is failable
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(capacity)) else {
            return
        }
        do {
            try file.read(into: buffer)
        } catch {
            return
        }
        let sampleRate = buffer.format.sampleRate
        let sampleTime = sampleRate * delay
        let futureTime = AVAudioTime(sampleTime: AVAudioFramePosition(sampleTime), atRate: sampleRate)
        player.scheduleBuffer(buffer, at: futureTime, options: [], completionHandler: nil)
        player.play()
    } catch {
        return
    }
}
