Details on using AVAudioEngine - iOS

Background: I found one of Apple's WWDC sessions, "AVAudioEngine in Practice", and am trying to make something similar to the last demo shown at 43:35 (https://youtu.be/FlMaxen2eyw?t=2614). I'm using SpriteKit instead of SceneKit, but the principle is the same: I want to generate spheres, throw them around, and when they collide the engine plays a sound, unique to each sphere.
Problems:
I want a unique AudioPlayerNode attached to each SpriteKitNode so that I can play a different sound for each sphere. Right now, if I create two spheres and set a different pitch for each of their AudioPlayerNodes, only the most recently created AudioPlayerNode seems to play, even when the original sphere collides. During the demo, he mentions "I'm tying a player, a dedicated player to each ball". How would I go about doing that?
There are audio clicks/artefacts every time a new collision happens. I'm assuming this has to do with the AVAudioPlayerNodeBufferOptions and/or the fact that I'm trying to create, schedule and consume buffers very quickly each time contact occurs, which is not the most efficient method. What would be a good workaround for this?
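(What I mean by reusing instead of recreating would be something like this rough sketch, using my createBuffer helper from the Audio class below:)

// Sketch only: cache the buffer once, e.g. when the scene is set up...
let cachedBuffer = audio.createBuffer("PianoC1", type: "wav")

// ...then reuse it inside the contact handler instead of building a new one each time:
sphere.spherePlayer.scheduleBuffer(cachedBuffer, atTime: nil, options: .Interrupts, completionHandler: nil)
sphere.spherePlayer.play()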
Code: As mentioned in the video, "...for every ball that's born into this world, a new player node is also created". I have a separate class for the spheres, with a method that returns a SpriteKitNode and also creates an AudioPlayerNode every time it is called:
class Sphere {

    var sphere: SKSpriteNode = SKSpriteNode(color: UIColor(), size: CGSize())
    var sphereScale: CGFloat = CGFloat(0.01)
    var spherePlayer = AVAudioPlayerNode()
    let audio = Audio()
    let sphereCollision: UInt32 = 0x1 << 0

    func createSphere(position: CGPoint, pitch: Float) -> SKSpriteNode {
        let texture = SKTexture(imageNamed: "Slice")
        let collisionTexture = SKTexture(imageNamed: "Collision")

        // Define the node
        sphere = SKSpriteNode(texture: texture, size: texture.size())
        sphere.position = position
        sphere.name = "sphere"
        sphere.physicsBody = SKPhysicsBody(texture: collisionTexture, size: sphere.size)
        sphere.physicsBody?.dynamic = true
        sphere.physicsBody?.mass = 0
        sphere.physicsBody?.restitution = 0.5
        sphere.physicsBody?.usesPreciseCollisionDetection = true
        sphere.physicsBody?.categoryBitMask = sphereCollision
        sphere.physicsBody?.contactTestBitMask = sphereCollision
        sphere.zPosition = 1

        // Create AudioPlayerNode
        spherePlayer = audio.createPlayer(pitch)

        return sphere
    }
}
Here's my Audio class, with which I create AVAudioPCMBuffers and AVAudioPlayerNodes:
class Audio {

    let engine: AVAudioEngine = AVAudioEngine()

    func createBuffer(name: String, type: String) -> AVAudioPCMBuffer {
        let audioFilePath = NSBundle.mainBundle().URLForResource(name, withExtension: type)!
        let audioFile = try! AVAudioFile(forReading: audioFilePath)
        let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: UInt32(audioFile.length))
        try! audioFile.readIntoBuffer(buffer)
        return buffer
    }

    func createPlayer(pitch: Float) -> AVAudioPlayerNode {
        let player = AVAudioPlayerNode()
        let buffer = self.createBuffer("PianoC1", type: "wav")
        let pitcher = AVAudioUnitTimePitch()
        let delay = AVAudioUnitDelay()

        pitcher.pitch = pitch
        delay.delayTime = 0.2
        delay.feedback = 90
        delay.wetDryMix = 0

        engine.attachNode(pitcher)
        engine.attachNode(player)
        engine.attachNode(delay)

        engine.connect(player, to: pitcher, format: buffer.format)
        engine.connect(pitcher, to: delay, format: buffer.format)
        engine.connect(delay, to: engine.mainMixerNode, format: buffer.format)

        engine.prepare()
        try! engine.start()

        return player
    }
}
In my GameScene class I then test for collision, schedule a buffer and play the AudioPlayerNode if contact has occurred
func didBeginContact(contact: SKPhysicsContact) {
    let firstBody: SKPhysicsBody = contact.bodyA

    if (firstBody.categoryBitMask & sphere.sphereCollision != 0) {
        let buffer1 = audio.createBuffer("PianoC1", type: "wav")
        sphere.spherePlayer.scheduleBuffer(buffer1, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        sphere.spherePlayer.play()
    }
}
I'm new to Swift and only have basic knowledge of programming so any suggestion/criticism is welcome.

I've been working with AVAudioEngine in SceneKit, trying to do something else, but this is what you are looking for:
https://developer.apple.com/library/mac/samplecode/AVAEGamingExample/Listings/AVAEGamingExample_AudioEngine_m.html
It explains the process of:
1-Instantiating your own AVAudioEngine sub-class
2-Methods to load PCMBuffers for each AVAudioPlayer
3-Changing your Environment node's parameters to accommodate the reverb for the large number of pinball objects
Edit: Converted, tested and added a few features:
1-You create a subclass of AVAudioEngine, name it AudioLayerEngine for example. This is to access the AVAudioUnit effects such as distortion, delay, pitch and many of the other effects available as AudioUnits.
2-Initialise by setting up some configurations for the audio engine, such as the rendering algorithm, and exposing the AVAudioEnvironmentNode so you can play with 3D positions of your SCNNode objects (or SKNode objects if you are in 2D but want 3D effects)
3-Create some helper methods to load presets for each AudioUnit effect you want
4-Create a helper method to create an audio player and add it to whatever node you want, as many times as you want, since SCNNode exposes an .audioPlayers property which returns [SCNAudioPlayer]
5-Start playing.
I've pasted the entire class for reference so that you can structure it as you wish, but keep in mind that if you are coupling this with SceneKit or SpriteKit, you use this audioEngine to manage all your sounds instead of SceneKit's internal AVAudioEngine. This means you instantiate it in your gameView during awakeFromNib.
import Foundation
import SceneKit
import AVFoundation

class AudioLayerEngine: AVAudioEngine {
    var engine: AVAudioEngine!
    var environment: AVAudioEnvironmentNode!
    var outputBuffer: AVAudioPCMBuffer!
    var voicePlayer: AVAudioPlayerNode!
    var multiChannelEnabled: Bool!

    // audio effects
    let delay = AVAudioUnitDelay()
    let distortion = AVAudioUnitDistortion()
    let reverb = AVAudioUnitReverb()

    override init() {
        super.init()
        engine = AVAudioEngine()
        environment = AVAudioEnvironmentNode()
        engine.attachNode(self.environment)
        voicePlayer = AVAudioPlayerNode()
        engine.attachNode(voicePlayer)
        voicePlayer.volume = 1.0
        outputBuffer = loadVoice()
        wireEngine()
        startEngine()
        voicePlayer.scheduleBuffer(self.outputBuffer, completionHandler: nil)
        voicePlayer.play()
    }

    func startEngine() {
        do {
            try engine.start()
        } catch {
            print("error loading engine")
        }
    }

    func loadVoice() -> AVAudioPCMBuffer {
        let URL = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("art.scnassets/sounds/interface/test", ofType: "aiff")!)
        do {
            let soundFile = try AVAudioFile(forReading: URL, commonFormat: AVAudioCommonFormat.PCMFormatFloat32, interleaved: false)
            outputBuffer = AVAudioPCMBuffer(PCMFormat: soundFile.processingFormat, frameCapacity: AVAudioFrameCount(soundFile.length))
            do {
                try soundFile.readIntoBuffer(outputBuffer)
            } catch {
                print("something went wrong with loading the sound file into the buffer")
            }
            print("returning buffer")
            return outputBuffer
        } catch {
        }
        return outputBuffer
    }

    func wireEngine() {
        loadDistortionPreset(AVAudioUnitDistortionPreset.MultiCellphoneConcert)
        engine.attachNode(distortion)
        engine.attachNode(delay)
        engine.connect(voicePlayer, to: distortion, format: self.outputBuffer.format)
        engine.connect(distortion, to: delay, format: self.outputBuffer.format)
        engine.connect(delay, to: environment, format: self.outputBuffer.format)
        engine.connect(environment, to: engine.outputNode, format: constructOutputFormatForEnvironment())
    }

    func constructOutputFormatForEnvironment() -> AVAudioFormat {
        let outputChannelCount = self.engine.outputNode.outputFormatForBus(1).channelCount
        let hardwareSampleRate = self.engine.outputNode.outputFormatForBus(1).sampleRate
        let environmentOutputConnectionFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: outputChannelCount)
        multiChannelEnabled = false
        return environmentOutputConnectionFormat
    }

    func loadDistortionPreset(preset: AVAudioUnitDistortionPreset) {
        distortion.loadFactoryPreset(preset)
    }

    func createPlayer(node: SCNNode) {
        let player = AVAudioPlayerNode()
        distortion.loadFactoryPreset(AVAudioUnitDistortionPreset.SpeechCosmicInterference)
        engine.attachNode(player)
        engine.attachNode(distortion)
        engine.connect(player, to: distortion, format: outputBuffer.format)
        engine.connect(distortion, to: environment, format: constructOutputFormatForEnvironment())

        let algo = AVAudio3DMixingRenderingAlgorithm.HRTF
        player.renderingAlgorithm = algo
        player.reverbBlend = 0.3
        player.renderingAlgorithm = AVAudio3DMixingRenderingAlgorithm.HRTF
    }
}
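A rough usage sketch of the above (not part of the class itself): instantiate it in your gameView's awakeFromNib, enable the environment node's reverb (step 3 in the list above), and give a node its own player (step 4). The node name "ball" and the reverb preset are just placeholders:

// Sketch only: wiring AudioLayerEngine into a SceneKit game view.
class GameView: SCNView {
    var audioEngine: AudioLayerEngine!

    override func awakeFromNib() {
        super.awakeFromNib()
        // One engine manages all the game's sounds instead of SceneKit's internal engine.
        audioEngine = AudioLayerEngine()

        // Step 3: configure the environment node's reverb (preset is a placeholder).
        audioEngine.environment.reverbParameters.enable = true
        audioEngine.environment.reverbParameters.loadFactoryReverbPreset(.LargeHall)

        // Step 4: give a node its own dedicated player ("ball" is a placeholder name).
        if let ball = self.scene?.rootNode.childNodeWithName("ball", recursively: true) {
            audioEngine.createPlayer(ball)
        }
    }
}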

Related

How to synchronize textures animation and sound in SpriteKit

I'm creating an RPGGameKit using SpriteKit to help me develop my iOS games. Now that my player can move, I've added animations and an audio system.
I've run into a problem synchronizing textures and sounds, like a step sound when my player walks.
let atlas = SKTextureAtlas(named: "Walk")
let textures = atlas.getTextures() // I created an extension that returns textures of atlas
let walkingAnimation = SKAction.animate(with: textures, timePerFrame: 1)
So, walkingAnimation will loop through the textures, changing the texture every second.
Now, I want to play a walking sound when the texture changes.
I have looked at the SKAction and SpriteKit documentation but there is no callback for this SKAction.
If you want to try to get this done with me or you have ideas of how to do it, please leave a comment.
Thanks :)
Try this:
let frame1 = SKAction.setTexture(yourTexture1)
let frame2 = SKAction.setTexture(yourTexture2)
let frame3 = SKAction.setTexture(yourTexture3)
//etc
let sound = SKAction.playSoundFileNamed("soundName", waitForCompletion: false)
let oneSecond = SKAction.wait(forDuration: 1)
let sequence = SKAction.sequence([frame1,sound,oneSecond,frame2,sound,oneSecond,frame3,sound,oneSecond])
node.run(sequence)
So, for now I'm going to do it like this:
let textures = SKTextureAtlas(named: "LeftStep").getTextures()
var actions = [SKAction]()

for texture in textures {
    let group = SKAction.group([
        SKAction.setTexture(texture),
        SKAction.playSoundFileNamed("Step.mp3", waitForCompletion: false)
    ])
    let sequence = SKAction.sequence([
        group,
        SKAction.wait(forDuration: 0.5)
    ])
    actions.append(sequence)
}

self.node.run(SKAction.repeatForever(SKAction.sequence(actions)))
Thanks @StefanOvomate
I've found myself in the same situation; currently I'm doing the below. From what I have read in the documentation and seen online, the only way to do it is to make the audio file's length match one rotation of the texture animation.
let walkAtlas = global.playerWalkAtlas
var walkFrames: [SKTexture] = []

let numImages = walkAtlas.textureNames.count
for i in 1...numImages {
    let texture = "walk\(i)"
    walkFrames.append(walkAtlas.textureNamed(texture))
}
walking = walkFrames
isWalking = true
animateMove()
} // closes the setup method (its opening brace is not shown in this snippet)

func animateMove() {
    let animateWalk = SKAction.animate(with: walking, timePerFrame: 0.05)
    let soundWalk = global.playSound(sound: .walkSound)
    let sequence = SKAction.sequence([soundWalk, animateWalk])
    self.run(SKAction.repeatForever(sequence), withKey: "isMoving")
}

func stopMoving() {
    self.removeAction(forKey: "isMoving")
    isWalking = false
}

Audiokit AKSampler not playing sounds

I'm currently trying to get my AKSampler to play sounds that I send it, but I'm not having much luck getting audio to output. My AKMIDICallbackInstrument is properly logging the notes playing (although I'm seeing the print for each note twice). However, the call to my sampler is not producing any audio and I can't figure out why.
class Sequencer {
    var sampler: AKSampler
    var sequencer: AKAppleSequencer
    var mixer: AKMixer

    init() {
        sampler = AKSampler()
        sequencer = AKAppleSequencer()
        mixer = AKMixer(sampler)

        let midicallback = AKMIDICallbackInstrument()
        let url = Bundle.main.url(forResource: "UprightPianoKW-20190703", withExtension: "sfz")!
        let track = sequencer.newTrack()
        track?.setMIDIOutput(midicallback.midiIn)

        sampler.loadSFZ(url: url)

        // generate some notes and add them to the track
        generateSequence()

        midicallback >>> mixer
        AudioKit.output = mixer
        AKSettings.playbackWhileMuted = true
        AKSettings.audioInputEnabled = true

        midicallback.callback = { status, note, vel in
            guard let status = AKMIDIStatus(byte: status),
                let type = status.type,
                type == .noteOn else { return print("note off: \(note)") }
            print("note on: \(note)")
            self.sampler.play(noteNumber: note, velocity: vel)
        }
    }

    func play() {
        try? AudioKit.start()
        sequencer.rewind()
        sequencer.play()
        try? AudioKit.stop()
    }

    func stop() {
        sequencer.stop()
    }
}
You need to connect your sampler to the mixer:
sampler >>> mixer
FWIW, midicallback >>> mixer isn't necessary with AKAppleSequencer/AKMIDICallbackInstrument, although it would be with AKSequencer/AKCallbackInstrument.
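So in your init the relevant wiring would look roughly like this (a sketch based on your property names, not a full tested listing):

// Sketch: the sampler is the node that actually produces audio, so wire it into the chain.
sampler >>> mixer
AudioKit.output = mixer
// midicallback >>> mixer can be dropped when using AKAppleSequencer + AKMIDICallbackInstrument;
// the callback instrument only needs to call sampler.play(...) from its callback.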

Synchronizing AKPlayer with AKSamplerMetronome

I am trying to use AKSamplerMetronome as a master clock in my (sort of) multi-audiofile playback project. I want the AKPlayers to start in sync with the Metronome's downbeat. Mixing AKPlayer and AKSamplerMetronome into AudioKit.output was successful; however, I am struggling to connect AKPlayer.start with AKSamplerMetronome.beatTime (or something else I haven't figured out) so that playback starts in sync with the Metronome's downbeat (and repeats every time the Metronome hits the downbeat). Here's what I've written:
class ViewController: UIViewController {
    let metronome = AKSamplerMetronome()
    let player = AKPlayer(audioFile: try! AKAudioFile(readFileName: "loop.wav"))
    let mixer = AKMixer()

    func startAudioEngine() {
        do {
            try AudioKit.start()
        } catch {
            print(error)
            fatalError()
        }
    }

    func makeConnections() {
        player >>> mixer
        metronome >>> mixer
        AudioKit.output = mixer
    }

    func startMetronome() {
        metronome.tempo = 120.0
        metronome.beatVolume = 1.0
        metronome.play()
    }

    func preparePlayer() {
        player.isLooping = true
        player.buffering = .always
        player.prepare()
        // I wanted AKPlayer to be repeated based on Metronome's downbeat.
    }

    func startPlayer() {
        let startTime = AVAudioTime.now() + 0.25
        player.start(at: startTime)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        makeConnections()
        startAudioEngine()
        preparePlayer()
        startPlayer()
        startMetronome()
    }
}
My problem is that AKPlayer's start(at:) doesn't recognize AKSamplerMetronome's properties, maybe because they're not compatible with AVAudioTime? I tried something like:
let startTime = metronome.beatTime + 0.25
player.start(at: startTime)
But this doesn't seem to be the answer ("cannot convert value type 'Double' to expected argument type 'AVAudioTime?'"). It would be extremely helpful if someone could help me explore Swift/AudioKit. <3
You are calling the AVAudioTime playback function with a Double parameter, which is incorrect. If you want to start the AKPlayer with a seconds param, use player.play(when: time).
In general, you're close. This is how you do it:
let startTime: Double = 1
let hostTime = mach_absolute_time()
let now = AVAudioTime(hostTime: hostTime)
let avTime = now.offset(seconds: startTime)
metronome.setBeatTime(0, at: avTime)
player.play(at: avTime)
Basically you need to give a common clock to each unit (mach_absolute_time()), then use AVAudioTime to start them at the exact time. The metronome.setBeatTime is telling the metronome to reset its 0 point at the passed-in avTime.
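If you want to wrap that in a helper using your metronome and player properties, it would look roughly like this (sketch only; the one-second lead is arbitrary, just enough for the engine to spin up):

// Sketch only: start both units against the same host clock.
func startInSync() {
    let now = AVAudioTime(hostTime: mach_absolute_time())  // common clock reference
    let avTime = now.offset(seconds: 1)                    // lead time before the shared start

    metronome.setBeatTime(0, at: avTime)  // beat 0 lands exactly at avTime
    player.play(at: avTime)               // the looping player starts on that downbeat
}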

MTAudioProcessingTap with kMTAudioProcessingTapCreationFlag_PostEffects not reflecting AVAudioMix volume

I am trying to build level metering for AVPlayer. I am doing this with an MTAudioProcessingTap that gets passed to an AVAudioMix, which in turn gets passed to the AVPlayerItem. The MTAudioProcessingTap gets created with the kMTAudioProcessingTapCreationFlag_PostEffects flag.
Technical Q&A QA1783 has the following to say about the PreEffects and PostEffects flags:
When you create a "pre-effects" audio tap using the kMTAudioProcessingTapCreationFlag_PreEffects flag, the tap will be called before any effects specified by AVAudioMixInputParameters are applied; when you create a "post-effects" tap by using the kMTAudioProcessingTapCreationFlag_PostEffects flag, the tap will be called after those effects are applied. Currently the only "effect" that AVAudioMixInputParameters supports is a linear volume ramp.
The problem:
When created with the kMTAudioProcessingTapCreationFlag_PostEffects flag, I would expect that the samples received by the MTAudioProcessingTap would reflect the volume or volume ramps set on the AVAudioMixInputParameters. For example, if I set the volume to 0, I would expect to get all-zero samples. However, the samples I receive seem to be totally unaffected by the volume or volume ramps.
Am I doing something wrong?
Here is a quick and dirty playground that illustrates the problem. The example sets the volume directly, but I observed the same problem when using volume ramps. Tested on both macOS and iOS:
import Foundation
import XCPlayground
import PlaygroundSupport
import AVFoundation
import Accelerate

PlaygroundPage.current.needsIndefiniteExecution = true

let assetURL = Bundle.main.url(forResource: "sample", withExtension: "mp3")!
let asset = AVAsset(url: assetURL)
let playerItem = AVPlayerItem(asset: asset)
var audioMix = AVMutableAudioMix()

// The volume. Set to > 0 to hear something.
let kVolume: Float = 0.0

var parameterArray: [AVAudioMixInputParameters] = []

for assetTrack in asset.tracks(withMediaType: .audio) {
    let parameters = AVMutableAudioMixInputParameters(track: assetTrack)
    parameters.setVolume(kVolume, at: kCMTimeZero)
    parameterArray.append(parameters)

    // Omitting most callbacks to keep sample short:
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil,
        finalize: nil,
        prepare: nil,
        unprepare: nil,
        process: { (tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut) in
            guard MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut) == noErr else {
                preconditionFailure()
            }
            // Assume 32bit float format, native endian:
            for i in 0..<bufferListInOut.pointee.mNumberBuffers {
                let buffer = bufferListInOut.pointee.mBuffers
                let stride: vDSP_Stride = vDSP_Stride(buffer.mNumberChannels)
                let numElements: vDSP_Length = vDSP_Length(buffer.mDataByteSize / UInt32(MemoryLayout<Float>.stride))
                for j in 0..<Int(buffer.mNumberChannels) {
                    // Use vDSP_maxmgv to find the maximum amplitude
                    var start = buffer.mData!.bindMemory(to: Float.self, capacity: Int(numElements))
                    start += Int(j * MemoryLayout<Float>.stride)
                    var magnitude: Float = 0
                    vDSP_maxmgv(start, stride, &magnitude, numElements - vDSP_Length(j))
                    DispatchQueue.main.async {
                        print("buff: \(i), chan: \(j), max: \(magnitude)")
                    }
                }
            }
        }
    )

    var tap: Unmanaged<MTAudioProcessingTap>?
    guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap) == noErr else {
        preconditionFailure()
    }
    parameters.audioTapProcessor = tap?.takeUnretainedValue()
}

audioMix.inputParameters = parameterArray
playerItem.audioMix = audioMix

let player = AVPlayer(playerItem: playerItem)
player.rate = 1.0

iOS Adjust Pitch Whilst Playing via AVAudioUnitTimePitch

I’m trying to get some audio to be able to have the pitch adjusted whilst playing. I’m very new to Swift and iOS, but my initial attempt was to just change timePitchNode.pitch whilst it was playing; however, it wouldn’t update whilst playing. My current attempt is to reset audioEngine, and have it just resume from where it was playing (below). How do I determine where the audio currently is, and how do I get it to resume from there?
var audioFile: AVAudioFile?
var audioEngine: AVAudioEngine?
var audioPlayerNode: AVAudioPlayerNode?

var pitch: Int = 1 {
    didSet {
        playResumeAudio()
    }
}

…

func playResumeAudio() {
    var currentTime: AVAudioTime? = nil

    if audioPlayerNode != nil {
        let nodeTime = audioPlayerNode!.lastRenderTime!
        currentTime = audioPlayerNode!.playerTimeForNodeTime(nodeTime)
    }

    if audioEngine != nil {
        audioEngine!.stop()
        audioEngine!.reset()
    }

    audioEngine = AVAudioEngine()
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine!.attachNode(audioPlayerNode!)

    let timePitchNode = AVAudioUnitTimePitch()
    timePitchNode.pitch = Float(pitch * 100)
    timePitchNode.rate = rate
    audioEngine!.attachNode(timePitchNode)

    audioEngine!.connect(audioPlayerNode!, to: timePitchNode, format: nil)
    audioEngine!.connect(timePitchNode, to: audioEngine!.outputNode, format: nil)

    audioPlayerNode!.scheduleFile(audioFile!, atTime: nil, completionHandler: nil)
    let _ = try? audioEngine?.start()
    audioPlayerNode!.playAtTime(currentTime)
}
I was being dumb, apparently. You can modify the pitch during playback, and it does update. No need to reset any audio; just mutate the node as it's playing and it'll work.
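In other words, build the graph once, keep a reference to the AVAudioUnitTimePitch, and change its pitch while the player runs. A rough sketch (reusing the question's audioFile; names are illustrative):

// Sketch only: set the graph up once, then mutate the pitch live.
let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
let timePitchNode = AVAudioUnitTimePitch()

engine.attachNode(playerNode)
engine.attachNode(timePitchNode)
engine.connect(playerNode, to: timePitchNode, format: nil)
engine.connect(timePitchNode, to: engine.outputNode, format: nil)

playerNode.scheduleFile(audioFile!, atTime: nil, completionHandler: nil)
try! engine.start()
playerNode.play()

// Later, e.g. from a slider callback, while audio is still playing:
timePitchNode.pitch = 300   // pitch is in cents; +300 = three semitones up, takes effect immediately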
