I am trying to build a sequencer that renders the notes from a MIDI file.
Currently I am using AudioKit for the music data processing, and I would like to know how I can get the note data/events from the MIDI file with AudioKit.
I have tried using AKSequencer with its output sent to an AKMIDINode to listen for the MIDI events, but I can't seem to get anything from it.
class CustomMIDINode: AKMIDINode {
    override init(node: AKPolyphonicNode) {
        print("Node create") // OK
        super.init(node: node)
    }

    func receivedMIDINoteOff(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
        print("midi note off") // Not printed
    }

    func receivedMIDISetupChange() {
        print("midi setup changed") // Not printed
    }

    override func receivedMIDINoteOn(_ noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
        print("receivedMIDINoteOn") // Not printed
    }
}
func setupSynth() {
    oscBank.attackDuration = 0.05
    oscBank.decayDuration = 0.1
    oscBank.sustainLevel = 0.1
    oscBank.releaseDuration = 0.1
}
let seq = AKSequencer(filename: "music")
let oscBank = AKOscillatorBank()
var midi = AKMIDI()
let midiNode = CustomMIDINode(node: oscBank)
setupSynth()
midi.openInput()
midi.addListener(midiNode)
seq.tracks.forEach { (track) in
    track.setMIDIOutput(midiNode.midiIn)
}
AudioKit.output = midiNode
AudioKit.start()
seq.play()
Have you looked at any of the example AudioKit projects available for download? They are very useful for troubleshooting AudioKit; I actually find the examples better than the documentation (as implementation isn't explained very well).
As for your question, you can add a MIDI listener to an event. There is an example of this code in the Analog Synth X project, available here.
let midi = AKMIDI()
midi.createVirtualPorts()
midi.openInput("Session 1")
midi.addListener(self)
For a more worked example you can refer to this, although the code is likely out of date in parts.
Tony, is it that you aren’t receiving any MIDI events, or just not seeing the print statements?
I agree with Axemasta’s response about adding AKMIDIListener to the class, along with checking out the MIDI code examples that come with AudioKit. This ROM Player example shows how to play external MIDI files with the AKMIDISampler node:
https://github.com/AudioKit/ROMPlayer
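In the same vein, the basic wiring to drive an AKMIDISampler from a MIDI file looks roughly like the sketch below (AudioKit 4 naming, with an assumed "music.mid" in the bundle; exact APIs vary between versions):
import AudioKit

let sampler = AKMIDISampler()
let sequencer = AKSequencer(filename: "music")   // loads music.mid from the app bundle

// Route every track of the sequence to the sampler's MIDI input.
sequencer.setGlobalMIDIOutput(sampler.midiIn)

AudioKit.output = sampler
AudioKit.start()
sequencer.play()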
In order for the print statements to display, try wrapping them in DispatchQueue.main.async so that they run on the main thread. Here’s an AudioKit MIDI implementation question with a code example that I posted here:
AudioKit iOS - receivedMIDINoteOn function
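In rough outline, the listener approach is the sketch below (MIDIMonitor is a made-up name, and the AKMIDIListener method signatures vary a little between AudioKit versions):
import AudioKit

class MIDIMonitor: AKMIDIListener {
    let midi = AKMIDI()

    init() {
        midi.openInput()
        midi.addListener(self)
    }

    func receivedMIDINoteOn(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
        // Print on the main thread so the output shows up reliably in the debug console.
        DispatchQueue.main.async {
            print("note on \(noteNumber), velocity \(velocity), channel \(channel)")
        }
    }
}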
I hope this helps.
Related
I use AudioKit 5 for iOS and want to play MIDI files (or single MIDI events) using Sound Fonts, but it appears that MIDISampler (or AppleSampler) ignores the Sound Font's Release Time option. The option is needed to fade notes out slowly, but the sampler stops them immediately. It sounds really strange, especially for Sound Fonts like Strings, Violin, etc.
I use a Strings Sound Font and it plays great in Logic or Polyphone (I attached a screenshot from the Polyphone app showing that the Sound Font has Vol Env Release = 1.1, and if I change that value it works as expected).
I also tried:
To play a MIDI file via AppleSequencer with a MIDISampler connected (see the sketch after this list)
To play MIDI events manually added to the track of AppleSequencer
To load a Sound Font as a Melodic Sound Font
To replace MIDISampler with AppleSampler
But had no luck.
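For reference, a minimal sketch of the AppleSequencer route from the list above (assuming a "music.mid" file and the same "strings" Sound Font in the bundle; this is an illustration, not my exact code):
import AudioKit

let engine = AudioEngine()
let sampler = MIDISampler()
let sequencer = AppleSequencer(filename: "music")   // loads music.mid from the bundle

do {
    try sampler.loadSoundFont("strings", preset: 0, bank: 0)
    engine.output = sampler
    // Route every track of the MIDI file into the sampler.
    sequencer.setGlobalMIDIOutput(sampler.midiIn)
    try engine.start()
    sequencer.play()
} catch {
    print("Setup failed:", error.localizedDescription)
}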
Below I attached a piece of my code that plays on/off MIDI events manually:
import Foundation
import AudioKit

final class MySampler {
    let engine = AudioEngine()
    let strings = MIDISampler()

    init() {
        do {
            try strings.loadSoundFont("strings", preset: 0, bank: 0)
            engine.output = strings
            try engine.start()
            print("MIDI", "started")
        } catch {
            print("MIDI", error.localizedDescription)
        }
    }

    func playNote(note: MIDINoteNumber, velocity: MIDIVelocity) {
        strings.play(noteNumber: note, velocity: velocity, channel: 0)
    }

    func stopNote(note: MIDINoteNumber) {
        strings.stop(noteNumber: note, channel: 0)
    }
}
let mySampler = MySampler()
var currentNote: MIDINoteNumber = 0

func randomNote() -> MIDINoteNumber {
    currentNote = (48...48 + 24).randomElement() ?? 60
    return currentNote
}

func keyTouchDown() {
    mySampler.playNote(note: randomNote(), velocity: 112)
}

func keyTouchUp() {
    mySampler.stopNote(note: currentNote)
}
Thanks in advance for your help
I am attempting to combine operation-generated sounds into a single instrument. I am aware that it is possible to use operations as arguments to each other, but I'm trying to trigger two (or more) simultaneously in the same instrument if possible, so I'm trying to do so with a mixer. This is my instrument code:
public class LayeredInstrument: MIDIInstrument {
    var opGenOne = OperationGenerator {
        let volSlideCurve = Operation.exponentialSegment(trigger: Operation.trigger, start: 1, end: 0, duration: 0.09)
        return Operation.sawtooth(frequency: 880, amplitude: volSlideCurve)
    }

    var opGenTwo = OperationGenerator {
        let volSlideCurve = Operation.exponentialSegment(trigger: Operation.trigger, start: 1, end: 0, duration: 0.09)
        return Operation.square(frequency: 220, amplitude: volSlideCurve)
    }

    var mixer = Mixer()

    public init() {
        super.init()
        opGenOne.start()
        opGenTwo.start()
        mixer.start()
        mixer.addInput(Node(avAudioNode: opGenOne.avAudioNode))
        mixer.addInput(Node(avAudioNode: opGenTwo.avAudioNode))
        avAudioUnit = mixer.avAudioUnit
        avAudioNode = mixer.avAudioNode
    }

    public override func play(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
        opGenOne.trigger()
        opGenTwo.trigger()
    }

    public func stop() {}
}
I have two questions:
1: How come I can take the avAudioUnit and avAudioNode of either of those two operations and use it for a voice, but I can't use the mixer (it is silent upon play() at the minute)? Is there a way to get this to work and hear both voices?
2: A question about operation triggering itself: is it necessary to stop a note played in an operation? If so, what's the best way of doing that? Possibly have some event at the end of a duration, or a class that monitors all notes played? It's just that the examples I've seen thus far have empty stop() methods.
I'm trying to build a sequencer app on iOS. There's a sample on the Apple Developer website that makes an audio unit play a repeating scale, here:
https://developer.apple.com/documentation/audiotoolbox/incorporating_audio_effects_and_instruments
In the sample code, there's a file "SimplePlayEngine.swift", with a class "InstrumentPlayer" which handles sending MIDI events to the selected audio unit. It spawns a thread with a loop that iterates through the scale. It sends a MIDI Note On message by calling the audio unit's AUScheduleMIDIEventBlock, sleeps the thread for a short time, sends a Note Off, and repeats.
Here's an abridged version:
DispatchQueue.global(qos: .default).async {
    ...
    while self.isPlaying {
        // cbytes is set to MIDI Note On message
        ...
        self.audioUnit.scheduleMIDIEventBlock!(AUEventSampleTimeImmediate, 0, 3, cbytes)
        usleep(useconds_t(0.2 * 1e6))
        ...
        // cbytes is now MIDI Note Off message
        self.noteBlock(AUEventSampleTimeImmediate, 0, 3, cbytes)
        ...
    }
    ...
}
This works well enough for a demonstration, but it doesn't enforce strict timing, since the events will be scheduled whenever the thread wakes up.
How can I modify it to play the scale at a certain tempo with sample-accurate timing?
My assumption is that I need a way to make the synthesizer audio unit call a callback in my code before each render with the number of frames that are about to be rendered. Then I can schedule a MIDI event every "x" number of frames. You can add an offset, up to the size of the buffer, to the first parameter to scheduleMIDIEventBlock, so I could use that to schedule the event at exactly the right frame in a given render cycle.
I tried using audioUnit.token(byAddingRenderObserver: AURenderObserver), but the callback I gave it was never called, even though the app was making sound. That method sounds like it's the Swift version of AudioUnitAddRenderNotify, and from what I read here, that sounds like what I need to do - https://stackoverflow.com/a/46869149/11924045. How come it wouldn't be called? Is it even possible to make this "sample accurate" using Swift, or do I need to use C for that?
Am I on the right track? Thanks for your help!
You're on the right track. MIDI events can be scheduled with sample-accuracy in a render callback:
let sampler = AVAudioUnitSampler()
...
let renderCallback: AURenderCallback = {
    (inRefCon: UnsafeMutableRawPointer,
     ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
     inTimeStamp: UnsafePointer<AudioTimeStamp>,
     inBusNumber: UInt32,
     inNumberFrames: UInt32,
     ioData: UnsafeMutablePointer<AudioBufferList>?) -> OSStatus in

    if ioActionFlags.pointee == AudioUnitRenderActionFlags.unitRenderAction_PreRender {
        let sampler = Unmanaged<AVAudioUnitSampler>.fromOpaque(inRefCon).takeUnretainedValue()
        let bpm = 960.0
        let samples = UInt64(44000 * 60.0 / bpm)
        let sampleTime = UInt64(inTimeStamp.pointee.mSampleTime)
        let cbytes = UnsafeMutablePointer<UInt8>.allocate(capacity: 3)
        cbytes[0] = 0x90
        cbytes[1] = 64
        cbytes[2] = 127
        for i: UInt64 in 0..<UInt64(inNumberFrames) {
            if ((sampleTime + i) % samples) == 0 {
                sampler.auAudioUnit.scheduleMIDIEventBlock!(Int64(i), 0, 3, cbytes)
            }
        }
    }
    return noErr
}

AudioUnitAddRenderNotify(sampler.audioUnit,
                         renderCallback,
                         Unmanaged.passUnretained(sampler).toOpaque())
That used AURenderCallback and scheduleMIDIEventBlock. You can swap in AURenderObserver and MusicDeviceMIDIEvent, respectively, with similar sample-accurate results:
let audioUnit = sampler.audioUnit

let renderObserver: AURenderObserver = {
    (actionFlags: AudioUnitRenderActionFlags,
     timestamp: UnsafePointer<AudioTimeStamp>,
     frameCount: AUAudioFrameCount,
     outputBusNumber: Int) -> Void in

    if actionFlags.contains(.unitRenderAction_PreRender) {
        let bpm = 240.0
        let samples = UInt64(44000 * 60.0 / bpm)
        let sampleTime = UInt64(timestamp.pointee.mSampleTime)
        for i: UInt64 in 0..<UInt64(frameCount) {
            if ((sampleTime + i) % samples) == 0 {
                MusicDeviceMIDIEvent(audioUnit, 144, 64, 127, UInt32(i))
            }
        }
    }
}

let _ = sampler.auAudioUnit.token(byAddingRenderObserver: renderObserver)
Note that these are just examples of how it's possible to do sample-accurate MIDI sequencing on the fly. You should still follow the rules of rendering to reliably implement these patterns.
Sample accurate timing generally requires using the RemoteIO Audio Unit, and manually inserting samples at the desired sample position in each audio callback block using C code.
(A WWDC session on Core Audio a few years back recommended against using Swift in the real-time audio context. Not sure if anything has changed that recommendation.)
Or, for MIDI, use a precisely incremented time value in each successive scheduleMIDIEventBlock call, instead of AUEventSampleTimeImmediate, and set these calls up slightly ahead of time.
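A rough sketch of that pre-scheduling idea, assuming a 44.1 kHz sample rate and with hypothetical names (secondsPerBeat, nextEventTime, scheduleNotes), might look like:
import AudioToolbox

let sampleRate = 44_100.0
let secondsPerBeat = 0.25                            // e.g. 240 BPM
let samplesPerBeat = AUEventSampleTime(sampleRate * secondsPerBeat)

var nextEventTime: AUEventSampleTime = 0             // absolute sample time of the next note
let noteOn: [UInt8] = [0x90, 64, 127]                // Note On, note 64, velocity 127

// Call this slightly ahead of time (e.g. from a timer) so events are queued
// before the render cycles in which they fall.
func scheduleNotes(upTo horizon: AUEventSampleTime, on unit: AUAudioUnit) {
    guard let schedule = unit.scheduleMIDIEventBlock else { return }
    while nextEventTime < horizon {
        // Pass a precisely incremented sample time instead of AUEventSampleTimeImmediate.
        schedule(nextEventTime, 0, 3, noteOn)
        nextEventTime += samplesPerBeat
    }
}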
We're working on a SpriteKit game. In order to have more control over sound effects, we switched from using SKAudioNodes to having some AVAudioPlayers. While everything seems to be working well in terms of game play, frame rate, and sounds, we're seeing occasional error(?) messages in the console output when testing on physical devices:
... [general] __CFRunLoopModeFindSourceForMachPort returned NULL for mode 'kCFRunLoopDefaultMode' livePort: #####
It doesn't seem to really cause any harm when it happens (no sound glitches or hiccups in frame rate or anything), but not understanding exactly what the message means and why it's happening is making us nervous.
Details:
The game is all standard SpriteKit, all events driven by SKActions, nothing unusual there.
The uses of AVFoundation stuff are the following. Initialization of app sounds:
class Sounds {
    let soundQueue: DispatchQueue

    init() {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print(error.localizedDescription)
        }
        soundQueue = DispatchQueue.global(qos: .background)
    }

    func execute(_ soundActions: @escaping () -> Void) {
        soundQueue.async(execute: soundActions)
    }
}
Creating various sound effect players:
guard let player = try? AVAudioPlayer(contentsOf: url) else {
    fatalError("Unable to instantiate AVAudioPlayer")
}
player.prepareToPlay()
Playing a sound effect:
let pan = stereoBalance(...)
sounds.execute {
    if player.pan != pan {
        player.pan = pan
    }
    player.play()
}
The AVAudioPlayers are all for short sound effects with no looping, and they get reused. We create about 25 players total, including multiple players for certain effects when they can repeat in quick succession. For a particular effect, we rotate through the players for that effect in a fixed sequence. We have verified that whenever a player is triggered, its isPlaying is false, so we're not trying to invoke play on something that's already playing.
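Concretely, the per-effect rotation is the pattern sketched below (a minimal illustration with assumed names, not our actual class):
import AVFoundation

// Several preloaded players for one effect, used in a fixed rotation.
final class EffectPlayers {
    private let players: [AVAudioPlayer]
    private var nextIndex = 0

    init(players: [AVAudioPlayer]) {
        self.players = players
    }

    // Return the next player in the rotation (we verify it isn't still playing before use).
    func nextPlayer() -> AVAudioPlayer {
        let player = players[nextIndex]
        nextIndex = (nextIndex + 1) % players.count
        return player
    }
}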
The message doesn't appear that often: over the course of a 5-10 minute game with possibly thousands of sound effects, we see it maybe 5-10 times.
The message seems to occur most commonly when a bunch of sound effects are being played in quick succession, but it doesn't feel like it's 100% correlated with that.
Not using the dispatch queue (i.e., having sounds.execute just call soundActions() directly) doesn't fix the issue (though that does cause the game to lag significantly). Changing the dispatch queue to some of the other priorities like .utility also doesn't affect the issue.
Making sounds.execute just return immediately (i.e., don't actually call the closure at all, so there's no play()) does eliminate the messages.
We did find the source code that's producing the message at this link:
https://github.com/apple/swift-corelibs-foundation/blob/master/CoreFoundation/RunLoop.subproj/CFRunLoop.c
but we don't understand it except at an abstract level, and are not sure how run loops are involved in the AVFoundation stuff.
Lots of googling has turned up nothing helpful. And as I indicated, it doesn't seem to be causing noticeable problems at all. It would be nice to know why it's happening though, and either how to fix it or to have certainty that it won't ever be an issue.
We're still working on this, but have experimented enough that it's clear how we should do things. Outline:
Use the scene's audioEngine property.
For each sound effect, make an AVAudioFile for reading the audio's URL from the bundle. Read it into an AVAudioPCMBuffer. Stick the buffers into a dictionary that's indexed by sound effect.
Make a bunch of AVAudioPlayerNodes, attach() them to the audioEngine, and connect(playerNode, to: audioEngine.mainMixerNode). At the moment we're creating these dynamically, searching through our current list of player nodes to find one that's not playing and making a new one if there's none available. That probably has more overhead than is needed, since we have to have callbacks to observe when a player node finishes whatever it's playing and set it back to a stopped state. We'll try switching to just a fixed maximum number of active sound effects and rotating through the players in order.
To play a sound effect, grab the buffer for the effect, find a non-busy playerNode, and do playerNode.scheduleBuffer(buffer, ...). And playerNode.play() if it's not currently playing.
I may update this with some more detailed code once we have things fully converted and cleaned up. We still have a couple of long-running AVAudioPlayers that we haven't switched to use AVAudioPlayerNode going through the mixer. But anyway, pumping the vast majority of sound effects through the scheme above has eliminated the error message, and it needs far less stuff sitting around since there's no duplication of the sound effects in-memory like we had before. There's a tiny bit of lag, but we haven't even tried putting some stuff on a background thread yet, and maybe not having to search for and constantly start/stop players would even eliminate it without having to worry about that.
Since switching to this approach, we've had no more runloop complaints.
Edit: Some example code...
import SpriteKit
import AVFoundation

enum SoundEffect: String, CaseIterable {
    case playerExplosion = "player_explosion"
    // lots more

    var url: URL {
        guard let url = Bundle.main.url(forResource: self.rawValue, withExtension: "wav") else {
            fatalError("Sound effect file \(self.rawValue) missing")
        }
        return url
    }

    func audioBuffer() -> AVAudioPCMBuffer {
        guard let file = try? AVAudioFile(forReading: self.url) else {
            fatalError("Unable to instantiate AVAudioFile")
        }
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
            fatalError("Unable to instantiate AVAudioPCMBuffer")
        }
        do {
            try file.read(into: buffer)
        } catch {
            fatalError("Unable to read audio file into buffer, \(error.localizedDescription)")
        }
        return buffer
    }
}

class Sounds {
    var audioBuffers = [SoundEffect: AVAudioPCMBuffer]()
    // more stuff

    init() {
        for effect in SoundEffect.allCases {
            preload(effect)
        }
    }

    func preload(_ sound: SoundEffect) {
        audioBuffers[sound] = sound.audioBuffer()
    }

    func cachedAudioBuffer(_ sound: SoundEffect) -> AVAudioPCMBuffer {
        guard let buffer = audioBuffers[sound] else {
            fatalError("Audio buffer for \(sound.rawValue) was not preloaded")
        }
        return buffer
    }
}

class Globals {
    // Sounds loaded once and shared among all scenes in the game
    static let sounds = Sounds()
}
class SceneAudio {
    let stereoEffectsFrame: CGRect
    let audioEngine: AVAudioEngine
    var playerNodes = [AVAudioPlayerNode]()
    var nextPlayerNode = 0
    // more stuff

    init(stereoEffectsFrame: CGRect, audioEngine: AVAudioEngine) {
        self.stereoEffectsFrame = stereoEffectsFrame
        self.audioEngine = audioEngine
        do {
            try audioEngine.start()
            let buffer = Globals.sounds.cachedAudioBuffer(.playerExplosion)
            // We got up to about 10 simultaneous sounds when really pushing the game
            for _ in 0 ..< 10 {
                let playerNode = AVAudioPlayerNode()
                playerNodes.append(playerNode)
                audioEngine.attach(playerNode)
                audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
                playerNode.play()
            }
        } catch {
            logging("Cannot start audio engine, \(error.localizedDescription)")
        }
    }

    func soundEffect(_ sound: SoundEffect, at position: CGPoint = .zero) {
        guard audioEngine.isRunning else { return }
        let buffer = Globals.sounds.cachedAudioBuffer(sound)
        let playerNode = playerNodes[nextPlayerNode]
        nextPlayerNode = (nextPlayerNode + 1) % playerNodes.count
        playerNode.pan = stereoBalance(position)
        playerNode.scheduleBuffer(buffer)
    }

    func stereoBalance(_ position: CGPoint) -> Float {
        guard stereoEffectsFrame.width != 0 else { return 0 }
        guard position.x <= stereoEffectsFrame.maxX else { return 1 }
        guard position.x >= stereoEffectsFrame.minX else { return -1 }
        return Float((position.x - stereoEffectsFrame.midX) / (0.5 * stereoEffectsFrame.width))
    }
}
class GameScene: SKScene {
    var audio: SceneAudio!
    // lots more stuff

    // somewhere in initialization
    // gameFrame is the area where action takes place and which
    // determines panning for stereo sound effects
    audio = SceneAudio(stereoEffectsFrame: gameFrame, audioEngine: audioEngine)

    func destroyPlayer(_ player: SKSpriteNode) {
        audio.soundEffect(.playerExplosion, at: player.position)
        // more stuff
    }
}
I have a problem in my SpriteKit game where audio using playSoundFileNamed(_ soundFile:, waitForCompletion:) will not play after the app is interrupted by a phone call. (I also use SKAudioNodes in my app which aren't affected but I really really really want to be able to use the SKAction playSoundFileNamed as well.)
Here's the gameScene.swift file from a stripped down SpriteKit game template which reproduces the problem. You just need to add an audio file to the project and call it "note"
I've attached the code that should reside in appDelegate to a toggle on/off button to simulate the phone call interruption. That code 1) Stops AudioEngine then deactivates AVAudioSession - (normally in applicationWillResignActive) ... and 2) Activates AVAudioSession then Starts AudioEngine - (normally in applicationDidBecomeActive)
The error:
AVAudioSession.mm:1079:-[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
This occurs when attempting to deactivate the audio session but only after a sound has been played at least once.
To reproduce:
1) Run the app.
2) Toggle the engine off and on a few times. No error will occur.
3) Tap the playSoundFileNamed button 1 or more times to play the sound.
4) Wait for the sound to stop.
5) Wait some more to be sure.
6) Tap the Toggle Audio Engine button to stop the audioEngine and deactivate the session - the error occurs.
7) Toggle the engine on and off a few times to see "session activated", "session deactivated", "session activated" printed in the debug area - i.e., no errors reported.
8) Now, with the session active and the engine running, the playSoundFileNamed button will not play the sound anymore.
What am I doing wrong?
import SpriteKit
import AVFoundation

class GameScene: SKScene {
    var toggleAudioButton: SKLabelNode?
    var playSoundFileButton: SKLabelNode?
    var engineIsRunning = true

    override func didMove(to view: SKView) {
        toggleAudioButton = SKLabelNode(text: "toggle Audio Engine")
        toggleAudioButton?.position = CGPoint(x: 20, y: 100)
        toggleAudioButton?.name = "toggleAudioEngine"
        toggleAudioButton?.fontSize = 80
        addChild(toggleAudioButton!)

        playSoundFileButton = SKLabelNode(text: "playSoundFileNamed")
        playSoundFileButton?.position = CGPoint(x: (toggleAudioButton?.frame.midX)!, y: (toggleAudioButton?.frame.midY)! - 240)
        playSoundFileButton?.name = "playSoundFileNamed"
        playSoundFileButton?.fontSize = 80
        addChild(playSoundFileButton!)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let touch = touches.first {
            let location = touch.location(in: self)
            let nodes = self.nodes(at: location)
            for spriteNode in nodes {
                if spriteNode.name == "toggleAudioEngine" {
                    if engineIsRunning { // 1 stop engine, 2 deactivate session
                        scene?.audioEngine.stop() // 1
                        toggleAudioButton!.text = "engine is paused"
                        engineIsRunning = !engineIsRunning
                        do {
                            // this is the line that fails when hit anytime after the playSoundFileButton has played a sound
                            try AVAudioSession.sharedInstance().setActive(false) // 2
                            print("session deactivated")
                        } catch {
                            print("DEACTIVATE SESSION FAILED")
                        }
                    } else { // 1 activate session, 2 start engine
                        do {
                            try AVAudioSession.sharedInstance().setActive(true) // 1
                            print("session activated")
                        } catch {
                            print("couldn't setActive = true")
                        }
                        do {
                            try scene?.audioEngine.start() // 2
                            toggleAudioButton!.text = "engine is running"
                            engineIsRunning = !engineIsRunning
                        } catch {
                            //
                        }
                    }
                }
                if spriteNode.name == "playSoundFileNamed" {
                    self.run(SKAction.playSoundFileNamed("note", waitForCompletion: false))
                }
            }
        }
    }
}
Let me save you some time here: playSoundFileNamed sounds wonderful in theory, so wonderful that you might, say, use it in an app you spent 4 years developing, until one day you realize it's not just totally broken on interruptions but will even crash your app in the most critical of interruptions: your IAP. Don't do it. I'm still not entirely sure whether SKAudioNode or AVPlayer is the answer; it may depend on your use case. Just don't do it.
If you need scientific evidence, create an app with a for loop that calls playSoundFileNamed on whatever you want in touchesBegan, and see what happens to your memory usage. The method is a leaky piece of garbage.
EDITED FOR OUR FINAL SOLUTION:
We found that having a proper number of preloaded AVAudioPlayer instances in memory, with prepareToPlay() called, was the best method. The SwiftySound audio class uses an on-the-fly generator, but making AVAudioPlayers on the fly created slowdown in animation. We found that keeping a maximum number of AVAudioPlayers and checking the array for one where isPlaying == false was simplest and best; if none is available you don't get the sound, similar to what you likely saw with playSoundFileNamed if you had it playing lots of sounds on top of each other. Overall, we have not found an ideal solution, but this was close for us.
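For illustration, a minimal sketch of that preloaded-pool idea (the file URL and pool size here are assumptions):
import AVFoundation

final class SoundPool {
    private var players: [AVAudioPlayer] = []

    // Preload a fixed number of players for one sound and prepare them to play.
    init(url: URL, voices: Int = 5) {
        for _ in 0..<voices {
            if let player = try? AVAudioPlayer(contentsOf: url) {
                player.prepareToPlay()
                players.append(player)
            }
        }
    }

    // Play on the first idle player; if every player is busy, the sound is simply skipped.
    func play() {
        players.first { !$0.isPlaying }?.play()
    }
}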
In response to Mike Pandolfini’s advice not to use playSoundFileNamed I’ve converted my code to only use SKAudioNodes.
(and sent the bug report to Apple).
I then found that some of these SKAudioNodes don’t play after app interruption either … and I’ve stumbled across a fix.
You need to tell each SKAudioNode to stop() as the app resigns active or returns from the background - even if they're not playing.
(I'm now not using any of the code in my first post which stops the audio engine and deactivates the session)
The problem then became how to play the same sound rapidly where it possibly plays over itself. That was what was so good about playSoundFileNamed.
1) The SKAudioNode fix:
Preload your SKAudioNodes i.e.
let sound = SKAudioNode(fileNamed: "super-20")
In didMove(to:), add them:
sound.autoplayLooped = false
addChild(sound)
Add a willResignActive notification
NotificationCenter.default.addObserver(self, selector: #selector(willResignActive), name: UIApplication.willResignActiveNotification, object: nil)
Then create the selector’s function which stops all audioNodes playing:
@objc func willResignActive() {
    for node in self.children {
        if NSStringFromClass(type(of: node)) == "SKAudioNode" {
            node.run(SKAction.stop())
        }
    }
}
All SKAudioNodes now play reliably after app interrupt.
2) To replicate playSoundFileNamed’s ability to play the short rapid repeating sounds or longer sounds that may need to play more than once and therefore could overlap, create/preload more than 1 property for each sound and use them like this:
let sound1 = SKAudioNode(fileNamed: "super-20")
let sound2 = SKAudioNode(fileNamed: "super-20")
let sound3 = SKAudioNode(fileNamed: "super-20")
let sound4 = SKAudioNode(fileNamed: "super-20")
var soundArray: [SKAudioNode] = []
var soundCounter: Int = 0
In didMove(to:):
soundArray = [sound1, sound2, sound3, sound4]
for sound in soundArray {
    sound.autoplayLooped = false
    addChild(sound)
}
Create a play function
func playFastSound(from array: [SKAudioNode], with counter: inout Int) {
    counter += 1
    if counter > array.count - 1 {
        counter = 0
    }
    array[counter].run(SKAction.play())
}
To play a sound, pass that particular sound's array and its counter to the play function:
playFastSound(from: soundArray, with: &soundCounter)