Using AudioKit and SpriteKit audio simultaneously - ios

I'm building a game that uses the AudioKit framework to detect the frequency of sound received by the mic. I set it up as follows:
import SpriteKit
import AudioKit

class GameScene: SKScene {
    var mic: AKMicrophone!
    var tracker: AKFrequencyTracker!
    var silence: AKBooster!
    let mixer = AKMixer()

    override func didMove(to view: SKView) {
        mic = AKMicrophone()
        tracker = AKFrequencyTracker(mic)
        // A booster with zero gain keeps the tracker processing without audible output.
        silence = AKBooster(tracker, gain: 0)
        mixer.connect(silence)
        AudioKit.output = mixer
        AudioKit.start()
    }
}
I would also like to use SKAction.playSoundFileNamed for the playback of sound effects etc., but when I use it the playback volume is very low. I assume it has something to do with the scene's mixer node and the AKMixer? Playing sound files through AudioKit itself is far more complicated than I need.
Do I need to make an extension of SKScene? Help would be very much appreciated!

It seems that Aurelius was correct in that the AVAudioSession output route was being directed to the headset. I'm still not sure why this was the case, but overriding the output port worked as follows:
// AVAudioSession is a singleton, so use the shared instance.
let session = AVAudioSession.sharedInstance()
do {
    try session.overrideOutputAudioPort(.speaker)
} catch {
    print("error setting output")
}
This needs to be done after initializing AudioKit components. If there's a better way of doing this, please let me know!
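For what it's worth, AudioKit 4 also exposes a settings flag that does this routing declaratively (it is the same flag the next question mentions); a minimal sketch, assuming it runs before AudioKit.start():
// Assumption: AudioKit 4.x, executed before AudioKit.start().
// This folds the speaker override into AudioKit's own session setup.
AKSettings.defaultToSpeaker = true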

Related

Enable audio input AudioKit v5

I am trying to migrate an app from AudioKit v4 to v5, and I am having a hard time finding documentation on the migration; I can't find it in the Cookbook either. Previously we could set defaultToSpeaker and audioInputEnabled through AKSettings. Now these properties are gone, and I can't find out how to replace them.
v4:
AKSettings.audioInputEnabled = true
AKSettings.defaultToSpeaker = true
Does anyone know how these parameters can be set with the new version? Any feedback is highly appreciated!
Nazarii,
In AudioKit 5, here's how I set up my audio input parameters:
import AudioKit
import AVFoundation

class Conductor {
    static let sharedInstance = Conductor()

    // Instantiate the audio engine and mic input node objects.
    let engine = AudioEngine()
    var mic: AudioEngine.InputNode!

    // Add effects for the mic input.
    var delay: Delay!
    var reverb: Reverb!
    let mixer = Mixer()

    // MARK: Initialize the audio engine settings.
    init() {
        // AVAudioSession comes from AVFoundation, hence the import above.
        do {
            Settings.bufferLength = .medium
            try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(Settings.bufferLength.duration)
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                            options: [.defaultToSpeaker, .mixWithOthers, .allowBluetoothA2DP])
            try AVAudioSession.sharedInstance().setActive(true)
        } catch let err {
            print(err)
        }

        // The audio signal path will be:
        // input > mic > delay > reverb > mixer > output

        // Mic is connected to the audio engine's input...
        mic = engine.input

        // Mic goes into the delay...
        delay = Delay(mic)
        delay.time = AUValue(0.5)
        delay.feedback = AUValue(30.0)
        delay.dryWetMix = AUValue(15.0)

        // Delay output goes into the reverb...
        reverb = Reverb(delay)
        reverb.loadFactoryPreset(.largeHall2)
        reverb.dryWetMix = AUValue(0.4)

        // Reverb output goes into the mixer...
        mixer.addInput(reverb)

        // Engine output is connected to the mixer.
        engine.output = mixer

        // Uncomment the following call if you don't want to start and stop
        // the audio engine via the SceneDelegate.
        // startAudioEngine()
    }

    // MARK: Start and stop the audio engine via the SceneDelegate.
    func startAudioEngine() {
        do {
            try engine.start()
            print("Audio engine was started.")
        } catch {
            Log("AudioKit did not start! \(error)")
        }
    }

    func stopAudioEngine() {
        engine.stop()
        print("Audio engine was stopped.")
    }
}
Please let me know if this works for you.
Take care,
Mark
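For completeness, here is a minimal sketch of the SceneDelegate hooks the comments above refer to. The callbacks are the standard UIScene lifecycle methods; wiring them to the Conductor like this is my assumption, not part of Mark's answer:
import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func sceneDidBecomeActive(_ scene: UIScene) {
        // Start the shared engine when the scene becomes active.
        Conductor.sharedInstance.startAudioEngine()
    }

    func sceneWillResignActive(_ scene: UIScene) {
        // Stop it when the scene resigns active, e.g. on backgrounding.
        Conductor.sharedInstance.stopAudioEngine()
    }
}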

How to use AKMixer with both AKPlayer and AKMetronome at the same time?

I'm currently making a metronome app. It uses AKMetronome for the core part since it has a callback function; I need the callback to visualize the beat, and I also want to use it to trigger a customized click sound (I'll mute the original AKMetronome sound, so it keeps running, just silently).
Now, I have two wav files which are the customized click sounds. Two AKPlayers play them, and both need to be triggered on every AKMetronome beat callback.
Since the AKPlayers and the AKMetronome all need to produce output, I put them in an AKMixer, like this:
let mixer = AKMixer(playerA, playerB, metronome)
AudioKit.output = mixer
and the callback will call this func:
func playAudio() {
    playerA.play()
    playerB.play()
}
Then, when playerA.play() is executed, it crashes.
This is the error message:
AURemoteIO::IOThread (11): EXC_BAD_ACCESS (code=1, address=0xffff9ffffdb0e360)
If I only put one of the AKPlayer objects or the AKMetronome in the AKMixer, it works fine.
I can't understand the error message and don't know why this happens.
Any help would be appreciated.
Here is the full code:
var playerA: AKPlayer!
var playerB: AKPlayer!
var clickA: AKAudioFile?
var clickB: AKAudioFile?
var metronome: AKMetronome?

override func viewDidLoad() {
    super.viewDidLoad()
    prepareAudio()
    metronome!.start()
}

func prepareAudio() {
    clickA = try? AKAudioFile(readFileName: "Click1A.wav")
    clickB = try? AKAudioFile(readFileName: "Click1B.wav")

    playerA = AKPlayer(audioFile: clickA!)
    playerA.buffering = .always
    playerB = AKPlayer(audioFile: clickB!)
    playerB.buffering = .always

    // Metronome
    metronome = AKMetronome()
    metronome!.subdivision = 4
    metronome!.frequency1 = 1000
    metronome!.frequency2 = 800
    metronome!.tempo = 60
    metronome!.callback = {
        self.playAudio()
    }

    let mixer = AKMixer(playerA, playerB, metronome)
    AudioKit.output = mixer
    do {
        try AudioKit.start()
    } catch {
        print("audiokit start fail!")
    }
}

func playAudio() {
    playerA.play()
    playerB.play()
}
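A note on reading that stack trace (my interpretation, not an answer from the original thread): AURemoteIO::IOThread is Core Audio's real-time render thread, and AKMetronome invokes its callback there, so playerA.play() ends up mutating the engine from the render thread. A sketch of the usual workaround, hopping to the main queue before touching the players:
metronome!.callback = {
    // Defer the player triggers off the real-time audio thread;
    // starting nodes from the render thread can crash the engine.
    DispatchQueue.main.async {
        self.playAudio()
    }
}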

iOS - AudioKit: Different devices for input and output

I'm trying to record audio using the built-in microphone and play it back simultaneously through a remote (Bluetooth) speaker. I'm using AudioKit as follows:
import UIKit
import AudioKit
import AVFoundation

class ViewController: UIViewController {
    let session = AVAudioSession.sharedInstance()
    let mic = AKMicrophone()
    let reverb = AKReverb()

    override func viewDidLoad() {
        super.viewDidLoad()
        mic >>> reverb
        AudioKit.output = reverb
        AKSettings.ioBufferDuration = 0.002
    }

    @IBAction func buttonWasPressed() {
        printDevices()
        try! AudioKit.start()
        printDevices()
    }

    @IBAction func buttonWasReleased() {
        try! AudioKit.stop()
    }

    func printDevices() {
        // List the available output devices:
        if let outputs = AudioKit.outputDevices {
            print("Outputs:")
            dump(outputs)
        }
    }
}
The problem is that even when a Bluetooth speaker is connected, after executing AudioKit.start() the only available output device is the built-in receiver (so there's no way to redirect AudioKit.output).
Another problem is that on first launch the app also fails to list the remote speaker among the output devices; only once it has been closed and re-opened does it start to work properly.
So I wonder: is there a way to use the built-in mic and a remote speaker simultaneously? And a way to stop having to re-open the app after every launch? -_-
Thanks a lot in advance!
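Not from the original thread, but one configuration worth trying before AudioKit.start(): the plain .playAndRecord category excludes A2DP routes, which would explain Bluetooth speakers disappearing from the output list while recording. A sketch using the same category options as the AudioKit 5 answer above:
import AVFoundation

func configureSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // Allow speaker and Bluetooth A2DP outputs while keeping
        // the built-in mic available for input.
        try session.setCategory(.playAndRecord,
                                options: [.defaultToSpeaker, .allowBluetoothA2DP])
        try session.setActive(true)
    } catch {
        print("Session configuration failed: \(error)")
    }
}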

Preload Sounds in SpriteKit

I was watching a tutorial on preloading sounds in SpriteKit to avoid the delay and frame-rate drop when a sound is first played. This is the way they said to preload the sounds with AVAudioPlayer, using the prepareToPlay() method:
import AVFoundation
override func didMoveToView(view: SKView) {
    do {
        let sounds = ["sound1", "sound2"]
        for sound in sounds {
            let audioPlayer = try AVAudioPlayer(contentsOfURL: NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource(sound, ofType: "mp3")!))
            audioPlayer.prepareToPlay()
        }
    }
    catch {
    }
}
And then playing the sound using an SKAction like this:
self.runAction(SKAction.playSoundFileNamed("sound1.mp3", waitForCompletion: false))
How does this actually preload the sounds? Does prepareToPlay() keep a reference to the sound data in memory? It seems like I would have to play through the audioPlayer instance I called prepareToPlay() on, rather than just referencing the sound file from the SKAction.
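That intuition is right as far as I can tell: prepareToPlay() only primes the AVAudioPlayer instance it is called on, and the players in the tutorial's loop are discarded immediately, so nothing stays preloaded; SKAction.playSoundFileNamed loads the file through its own separate path. A sketch of the retained-player pattern, in current Swift with a hypothetical SoundPreloader helper:
import AVFoundation

// Hypothetical helper: retain the prepared players so the
// preloading persists, and play through them directly.
class SoundPreloader {
    private var players: [String: AVAudioPlayer] = [:]

    func preload(_ names: [String]) {
        for name in names {
            guard let url = Bundle.main.url(forResource: name, withExtension: "mp3"),
                  let player = try? AVAudioPlayer(contentsOf: url) else { continue }
            player.prepareToPlay()   // primes buffers for a low-latency start
            players[name] = player   // retaining the player keeps it primed
        }
    }

    func play(_ name: String) {
        players[name]?.play()
    }
}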

iOS Swift AVAudioPlayer playAtTime method doesn't work

My code is very simple like this:
import UIKit
import AVFoundation

class VC: UIViewController {
    var player: AVAudioPlayer!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let url = NSURL.fileURLWithPath(NSBundle.mainBundle().pathForResource("audioName", ofType: "mp3")!)
        do {
            try player = AVAudioPlayer(contentsOfURL: url)
            print("duration", player.duration) // duration 200
            player.prepareToPlay()
            player.playAtTime(50.0)
        } catch {
        }
    }
}
When player.play() is called instead, the audio plays normally.
I don't know why the playAtTime function doesn't work.
Please help!
playAtTime plays the sound at a time in the future, specified relative to the device's current time, so the time parameter needs to be the current device time plus your required delay (in seconds):
player.playAtTime(player.deviceCurrentTime + 50)
player.currentTime will be 0 when you initialise the player, so that isn't the property to use.
Apple doc is here
func playAtTime(time: NSTimeInterval) -> Bool
This function is meant to play the sound some time in the future, based on and greater than deviceCurrentTime, so I would suggest trying:
player.playAtTime(player.deviceCurrentTime + 50.0)
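For reference, in current Swift the same call is spelled play(atTime:); a one-line sketch:
// Start playback 50 seconds from now, measured on the audio hardware clock.
player.play(atTime: player.deviceCurrentTime + 50.0)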
