AudioPlayer+Playback seek() doesn't preserve stopped/paused player status and always ends up playing. Expected behavior? - audiokit

When I call seek(time) on a stopped AudioPlayer node, it seeks and then starts the node playing. Is this expected behavior?
Looking at the source for seek(), this appears to be the intent, but I don't understand why the function ignores the player's state going into the seek.
func seek(time seekTime: TimeInterval) {
    ...
    isSeeking = true
    playerNode.stop()
    playerNode.scheduleSegment(
        file,
        startingFrame: startFrame,
        frameCount: frameCount,
        at: nil,
        completionCallbackType: .dataPlayedBack
    ) { _ in
        self.internalCompletionHandler()
    }
    playerNode.play()
    status = .playing
    isSeeking = false
    timeBeforePlay = editStartTime - startTime
}
Is there another seek option that I haven't found? Or do I need to fork the repo and modify this to suit my needs?
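One workaround short of forking is to wrap seek() and restore the prior state afterwards. A minimal sketch, assuming AudioKit 5's public AudioPlayer API (isPlaying, seek(time:), pause()); untested, and the momentary play() inside seek() may still produce a brief blip of audio:
import AudioKit

extension AudioPlayer {
    /// Seek without forcing playback: re-pauses the player if it wasn't playing before.
    /// Sketch only -- relies on seek(time:) leaving the player in the playing state.
    func seekPreservingStatus(to time: TimeInterval) {
        let wasPlaying = isPlaying
        seek(time: time)   // internally stops, reschedules the segment, and calls play()
        if !wasPlaying {
            pause()        // immediately pause again to restore the previous status
        }
    }
}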

Related

AVAudioEngine recording microphone input seems to stop upon playing music

I'm recording microphone input to match it against songs in the Shazam catalog. This works, but if I start() the AVAudioEngine and then something else happens, such as music starting to play via MPMusicPlayerController.applicationMusicPlayer.play(), the audio engine seems to stop or get interrupted. The microphone recording shuts off, so the SHSessionDelegate never finds a match or fails with an error, and my UI is stuck showing that it's listening when it no longer is. Is there a way to be informed when this happens so that I can update the UI and handle the cancelation?
private lazy var shazamAudioEngine = AVAudioEngine()
private lazy var shazamSession: SHSession = {
    let session = SHSession()
    session.delegate = self
    return session
}()
...
try? AVAudioSession.sharedInstance().setCategory(.record)
// Create an audio format for our buffers based on the format of the input, with a single channel (mono)
let audioFormat = AVAudioFormat(standardFormatWithSampleRate: shazamAudioEngine.inputNode.outputFormat(forBus: 0).sampleRate, channels: 1)
// Install a "tap" on the audio engine's input so that we can send buffers from the microphone to the session
shazamAudioEngine.inputNode.installTap(onBus: 0, bufferSize: 2048, format: audioFormat) { [weak self] buffer, when in
    // Whenever a new buffer comes in, send it to the session for recognition
    self?.shazamSession.matchStreamingBuffer(buffer, at: when)
}
do {
    try shazamAudioEngine.start()
} catch {
    ...
}
In my testing, isRunning tracks this state: it changes from true to false when music starts playing and the microphone stops being recorded. Unfortunately that property can't be observed with KVO, so I set up a repeating Timer to detect when it changes and handle the cancelation, making sure to invalidate() the timer when other state changes occur.
audioEngineRunningTimer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { [weak self] timer in
    // The audio engine stops running when you start playing music, for example, so handle cancelation here
    if self?.shazamAudioEngine.isRunning == false {
        //...
    }
}
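Rather than polling, it may also be worth listening for the notifications the system does post: AVAudioSession.interruptionNotification and AVAudioEngineConfigurationChange. I haven't verified that either fires for the music-playback case above, so treat this as a sketch of the general pattern rather than a confirmed fix:
import AVFoundation

// Keep the returned tokens so the observers can be removed later.
let interruptionObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { notification in
    guard let rawType = notification.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
    if type == .began {
        // Input has stopped; cancel the Shazam UI here instead of waiting for the timer.
    }
}

let configurationObserver = NotificationCenter.default.addObserver(
    forName: .AVAudioEngineConfigurationChange,
    object: shazamAudioEngine,
    queue: .main
) { [weak self] _ in
    if self?.shazamAudioEngine.isRunning == false {
        // Same cancelation handling as in the timer above.
    }
}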

"__CFRunLoopModeFindSourceForMachPort returned NULL" messages when using AVAudioPlayer

We're working on a SpriteKit game. In order to have more control over sound effects, we switched from using SKAudioNodes to having some AVAudioPlayers. While everything seems to be working well in terms of game play, frame rate, and sounds, we're seeing occasional error(?) messages in the console output when testing on physical devices:
... [general] __CFRunLoopModeFindSourceForMachPort returned NULL for mode 'kCFRunLoopDefaultMode' livePort: #####
It doesn't seem to really cause any harm when it happens (no sound glitches or hiccups in frame rate or anything), but not understanding exactly what the message means and why it's happening is making us nervous.
Details:
The game is all standard SpriteKit, all events driven by SKActions, nothing unusual there.
We use AVFoundation as follows. Initialization of app sounds:
class Sounds {
    let soundQueue: DispatchQueue

    init() {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print(error.localizedDescription)
        }
        soundQueue = DispatchQueue.global(qos: .background)
    }

    func execute(_ soundActions: @escaping () -> Void) {
        soundQueue.async(execute: soundActions)
    }
}
Creating various sound effect players:
guard let player = try? AVAudioPlayer(contentsOf: url) else {
    fatalError("Unable to instantiate AVAudioPlayer")
}
player.prepareToPlay()
Playing a sound effect:
let pan = stereoBalance(...)
sounds.execute {
    if player.pan != pan {
        player.pan = pan
    }
    player.play()
}
The AVAudioPlayers are all for short sound effects with no looping, and they get reused. We create about 25 players total, including multiple players for certain effects when they can repeat in quick succession. For a particular effect, we rotate through the players for that effect in a fixed sequence. We have verified that whenever a player is triggered, its isPlaying is false, so we're not trying to invoke play on something that's already playing.
The message doesn't appear that often. Over the course of a 5-10 minute game with possibly thousands of sound effects, we see it maybe 5-10 times.
The message seems to occur most commonly when a bunch of sound effects are being played in quick succession, but it doesn't feel like it's 100% correlated with that.
Not using the dispatch queue (i.e., having sounds.execute just call soundActions() directly) doesn't fix the issue (though that does cause the game to lag significantly). Changing the dispatch queue to some of the other priorities like .utility also doesn't affect the issue.
Making sounds.execute just return immediately (i.e., not calling the closure at all, so there's no play()) does eliminate the messages.
We did find the source code that's producing the message at this link:
https://github.com/apple/swift-corelibs-foundation/blob/master/CoreFoundation/RunLoop.subproj/CFRunLoop.c
but we don't understand it except at an abstract level, and are not sure how run loops are involved in the AVFoundation stuff.
Lots of googling has turned up nothing helpful. And as I indicated, it doesn't seem to be causing noticeable problems at all. It would be nice to know why it's happening though, and either how to fix it or to have certainty that it won't ever be an issue.
We're still working on this, but have experimented enough that it's clear how we should do things. Outline:
Use the scene's audioEngine property.
For each sound effect, make an AVAudioFile for reading from the audio's URL in the bundle and read it into an AVAudioPCMBuffer. Stick the buffers into a dictionary indexed by sound effect.
Make a bunch of AVAudioPlayerNodes, attach() them to the audioEngine, and connect(playerNode, to: audioEngine.mainMixerNode). At the moment we're creating these dynamically, searching our current list of player nodes for one that's not playing and making a new one if none is available. That probably has more overhead than needed, since we need callbacks to observe when a player node finishes whatever it's playing so we can set it back to a stopped state. We'll try switching to a fixed maximum number of active sound effects and rotating through the players in order.
To play a sound effect, grab the buffer for the effect, find a non-busy playerNode, and call playerNode.scheduleBuffer(buffer, ...), plus playerNode.play() if it's not currently playing.
I may update this with more detailed code once we have things fully converted and cleaned up. We still have a couple of long-running AVAudioPlayers that we haven't switched to AVAudioPlayerNodes going through the mixer. But pumping the vast majority of sound effects through the scheme above has eliminated the error message, and it needs far less memory since the sound effects are no longer duplicated as they were before. There's a tiny bit of lag, but we haven't yet tried moving anything to a background thread, and not having to search for and constantly start/stop players may eliminate it anyway.
Since switching to this approach, we've had no more runloop complaints.
Edit: Some example code...
import SpriteKit
import AVFoundation

enum SoundEffect: String, CaseIterable {
    case playerExplosion = "player_explosion"
    // lots more

    var url: URL {
        guard let url = Bundle.main.url(forResource: self.rawValue, withExtension: "wav") else {
            fatalError("Sound effect file \(self.rawValue) missing")
        }
        return url
    }

    func audioBuffer() -> AVAudioPCMBuffer {
        guard let file = try? AVAudioFile(forReading: self.url) else {
            fatalError("Unable to instantiate AVAudioFile")
        }
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
            fatalError("Unable to instantiate AVAudioPCMBuffer")
        }
        do {
            try file.read(into: buffer)
        } catch {
            fatalError("Unable to read audio file into buffer, \(error.localizedDescription)")
        }
        return buffer
    }
}
class Sounds {
    var audioBuffers = [SoundEffect: AVAudioPCMBuffer]()
    // more stuff

    init() {
        for effect in SoundEffect.allCases {
            preload(effect)
        }
    }

    func preload(_ sound: SoundEffect) {
        audioBuffers[sound] = sound.audioBuffer()
    }

    func cachedAudioBuffer(_ sound: SoundEffect) -> AVAudioPCMBuffer {
        guard let buffer = audioBuffers[sound] else {
            fatalError("Audio buffer for \(sound.rawValue) was not preloaded")
        }
        return buffer
    }
}

class Globals {
    // Sounds loaded once and shared among all scenes in the game
    static let sounds = Sounds()
}
class SceneAudio {
    let stereoEffectsFrame: CGRect
    let audioEngine: AVAudioEngine
    var playerNodes = [AVAudioPlayerNode]()
    var nextPlayerNode = 0
    // more stuff

    init(stereoEffectsFrame: CGRect, audioEngine: AVAudioEngine) {
        self.stereoEffectsFrame = stereoEffectsFrame
        self.audioEngine = audioEngine
        do {
            try audioEngine.start()
            let buffer = Globals.sounds.cachedAudioBuffer(.playerExplosion)
            // We got up to about 10 simultaneous sounds when really pushing the game
            for _ in 0 ..< 10 {
                let playerNode = AVAudioPlayerNode()
                playerNodes.append(playerNode)
                audioEngine.attach(playerNode)
                audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
                playerNode.play()
            }
        } catch {
            logging("Cannot start audio engine, \(error.localizedDescription)")
        }
    }

    func soundEffect(_ sound: SoundEffect, at position: CGPoint = .zero) {
        guard audioEngine.isRunning else { return }
        let buffer = Globals.sounds.cachedAudioBuffer(sound)
        let playerNode = playerNodes[nextPlayerNode]
        nextPlayerNode = (nextPlayerNode + 1) % playerNodes.count
        playerNode.pan = stereoBalance(position)
        playerNode.scheduleBuffer(buffer)
    }

    func stereoBalance(_ position: CGPoint) -> Float {
        guard stereoEffectsFrame.width != 0 else { return 0 }
        guard position.x <= stereoEffectsFrame.maxX else { return 1 }
        guard position.x >= stereoEffectsFrame.minX else { return -1 }
        return Float((position.x - stereoEffectsFrame.midX) / (0.5 * stereoEffectsFrame.width))
    }
}
class GameScene: SKScene {
    var audio: SceneAudio!
    // lots more stuff

    // somewhere in initialization
    // gameFrame is the area where action takes place and which
    // determines panning for stereo sound effects
    audio = SceneAudio(stereoEffectsFrame: gameFrame, audioEngine: audioEngine)

    func destroyPlayer(_ player: SKSpriteNode) {
        audio.soundEffect(.playerExplosion, at: player.position)
        // more stuff
    }
}

Audio won't play after app interrupted by phone call iOS

I have a problem in my SpriteKit game where audio using playSoundFileNamed(_ soundFile:, waitForCompletion:) will not play after the app is interrupted by a phone call. (I also use SKAudioNodes in my app which aren't affected but I really really really want to be able to use the SKAction playSoundFileNamed as well.)
Here's the GameScene.swift file from a stripped-down SpriteKit game template that reproduces the problem. You just need to add an audio file to the project and call it "note".
I've attached the code that would normally reside in the appDelegate to a toggle on/off button to simulate the phone-call interruption. That code (1) stops the audio engine, then deactivates the AVAudioSession (normally in applicationWillResignActive), and (2) activates the AVAudioSession, then starts the audio engine (normally in applicationDidBecomeActive).
The error:
AVAudioSession.mm:1079:-[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
This occurs when attempting to deactivate the audio session but only after a sound has been played at least once.
To reproduce:
1) Run the app.
2) Toggle the engine off and on a few times. No error will occur.
3) Tap the playSoundFileNamed button one or more times to play the sound.
4) Wait for the sound to stop.
5) Wait some more to be sure.
6) Tap the toggle Audio Engine button to stop the audioEngine and deactivate the session - the error occurs.
7) Toggle the engine on and off a few times to see "session activated", "session deactivated", "session activated" printed in the debug area - i.e. no errors reported.
8) Now, with the session active and the engine running, the playSoundFileNamed button will not play the sound anymore.
What am I doing wrong?
import SpriteKit
import AVFoundation

class GameScene: SKScene {
    var toggleAudioButton: SKLabelNode?
    var playSoundFileButton: SKLabelNode?
    var engineIsRunning = true

    override func didMove(to view: SKView) {
        toggleAudioButton = SKLabelNode(text: "toggle Audio Engine")
        toggleAudioButton?.position = CGPoint(x: 20, y: 100)
        toggleAudioButton?.name = "toggleAudioEngine"
        toggleAudioButton?.fontSize = 80
        addChild(toggleAudioButton!)

        playSoundFileButton = SKLabelNode(text: "playSoundFileNamed")
        playSoundFileButton?.position = CGPoint(x: (toggleAudioButton?.frame.midX)!, y: (toggleAudioButton?.frame.midY)! - 240)
        playSoundFileButton?.name = "playSoundFileNamed"
        playSoundFileButton?.fontSize = 80
        addChild(playSoundFileButton!)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let touch = touches.first {
            let location = touch.location(in: self)
            let nodes = self.nodes(at: location)
            for spriteNode in nodes {
                if spriteNode.name == "toggleAudioEngine" {
                    if engineIsRunning { // 1 stop engine, 2 deactivate session
                        scene?.audioEngine.stop() // 1
                        toggleAudioButton!.text = "engine is paused"
                        engineIsRunning = !engineIsRunning
                        do {
                            // this is the line that fails when hit anytime after the playSoundFileButton has played a sound
                            try AVAudioSession.sharedInstance().setActive(false) // 2
                            print("session deactivated")
                        } catch {
                            print("DEACTIVATE SESSION FAILED")
                        }
                    } else { // 1 activate session, 2 start engine
                        do {
                            try AVAudioSession.sharedInstance().setActive(true) // 1
                            print("session activated")
                        } catch {
                            print("couldn't setActive = true")
                        }
                        do {
                            try scene?.audioEngine.start() // 2
                            toggleAudioButton!.text = "engine is running"
                            engineIsRunning = !engineIsRunning
                        } catch {
                            //
                        }
                    }
                }
                if spriteNode.name == "playSoundFileNamed" {
                    self.run(SKAction.playSoundFileNamed("note", waitForCompletion: false))
                }
            }
        }
    }
}
Let me save you some time here: playSoundFileNamed sounds wonderful in theory, so wonderful that you might, say, use it in an app you spent 4 years developing, until one day you realize it's not just totally broken on interruptions but will even crash your app during the most critical of interruptions, your IAP. Don't do it. I'm still not entirely sure whether SKAudioNode or AVPlayer is the answer; it may depend on your use case. Just don't do it.
If you need scientific evidence, create an app with a for loop that calls playSoundFileNamed on whatever you want in touchesBegan, and watch what happens to your memory usage. The method is a leaky piece of garbage.
EDITED FOR OUR FINAL SOLUTION:
We found that having a proper number of preloaded AVAudioPlayer instances in memory, each with prepareToPlay() called, was the best method. The SwiftySound audio class generates players on the fly, but creating AVAudioPlayers on the fly caused slowdowns in animation. We found that keeping a maximum number of AVAudioPlayers and scanning an array for one where isPlaying == false was the simplest and best approach; if none is available you simply don't get the sound, similar to what you'd likely see with playSoundFileNamed if you had it playing lots of sounds on top of each other. Overall we have not found an ideal solution, but this was close for us.
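For illustration, a minimal sketch of that preloaded-pool idea (class and parameter names are hypothetical; tune the instance count to how often the effect can overlap):
import AVFoundation

final class SoundEffectPool {
    private var players: [AVAudioPlayer] = []

    init(url: URL, instances: Int = 5) {
        for _ in 0..<instances {
            if let player = try? AVAudioPlayer(contentsOf: url) {
                player.prepareToPlay()
                players.append(player)
            }
        }
    }

    func play() {
        // Grab the first idle player; if every instance is busy, the sound is simply dropped.
        guard let player = players.first(where: { !$0.isPlaying }) else { return }
        player.play()
    }
}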
In response to Mike Pandolfini's advice not to use playSoundFileNamed, I've converted my code to use only SKAudioNodes (and sent the bug report to Apple).
I then found that some of these SKAudioNodes don't play after an app interruption either ... and I've stumbled across a fix.
You need to tell each SKAudioNode to stop() as the app resigns to the background or returns from it - even if it isn't playing.
(I'm now not using any of the code in my first post which stops the audio engine and deactivates the session)
The problem then became how to play the same sound rapidly where it possibly plays over itself. That was what was so good about playSoundFileNamed.
1) The SKAudioNode fix:
Preload your SKAudioNodes, i.e.
let sound = SKAudioNode(fileNamed: "super-20")
In didMove(to:), add them:
sound.autoplayLooped = false
addChild(sound)
Add a willResignActive notification:
notificationCenter.addObserver(self, selector: #selector(willResignActive), name: UIApplication.willResignActiveNotification, object: nil)
Then create the selector's function, which stops all audio nodes from playing:
@objc func willResignActive() {
    for node in self.children {
        if NSStringFromClass(type(of: node)) == "SKAudioNode" {
            node.run(SKAction.stop())
        }
    }
}
All SKAudioNodes now play reliably after app interrupt.
2) To replicate playSoundFileNamed's ability to play short, rapidly repeating sounds, or longer sounds that may need to play more than once and therefore could overlap, create/preload more than one property for each sound and use them like this:
let sound1 = SKAudioNode(fileNamed: "super-20")
let sound2 = SKAudioNode(fileNamed: "super-20")
let sound3 = SKAudioNode(fileNamed: "super-20")
let sound4 = SKAudioNode(fileNamed: "super-20")
var soundArray: [SKAudioNode] = []
var soundCounter: Int = 0
In didMove(to:):
soundArray = [sound1, sound2, sound3, sound4]
for sound in soundArray {
    sound.autoplayLooped = false
    addChild(sound)
}
Create a play function:
func playFastSound(from array: [SKAudioNode], with counter: inout Int) {
    counter += 1
    if counter > array.count - 1 {
        counter = 0
    }
    array[counter].run(SKAction.play())
}
To play a sound, pass that particular sound's array and its counter to the play function:
playFastSound(from: soundArray, with: &soundCounter)

Swift: How to stop scheduleBuffer completion handler being called when interrupted?

I have an AVAudioPlayerNode on which I'm scheduling a lot of buffers using the following:
node.scheduleBuffer(buffer, at: nil, options: .interrupts, completionHandler: completeFunc)
But I'm having a bit of a problem. If I play another buffer, interrupting the currently playing buffer, the completion handler for the interrupted buffer is still called. I guess this makes sense, but I can't find a way to check whether the buffer actually COMPLETED playing or was interrupted. Is there a way I can do this?
Any answers will help!
You will need an interruption state variable that is referenced in the completion handler and toggled by the interrupting code. Here's some shorthand code that makes some assumptions about how you're looping through buffers (the interrupting function is named interruptPlayback() so it doesn't collide with the interruptBuffers state variable):
var node: AVAudioPlayerNode!
var loopingBuffers: [AVAudioPCMBuffer] = [buffer1, buffer2, buffer3]
var interruptBuffers = false // state variable to handle the interruption

func scheduleNextBuffer() {
    /* code to find the next buffer in the loopingBuffers queue */
    // assuming you have instantiated an AVAudioPlayerNode elsewhere
    node.scheduleBuffer(nextBuffer, completionHandler: bufferCompletion)
}

func bufferCompletion() {
    // check the state variable to make certain the completionHandler isn't being triggered by a scheduling interruption
    guard !interruptBuffers else { return }
    scheduleNextBuffer()
}

func interruptPlayback() {
    let newBuffers: [AVAudioPCMBuffer] = [buffer4, buffer5, buffer6]
    interruptBuffers = true
    node.scheduleBuffer(buffer4, at: nil, options: .interrupts, completionHandler: bufferCompletion)
    // cleanup your state and queue up your new buffers
    interruptBuffers = false
    loopingBuffers = newBuffers
}
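One caveat on the above: the interrupted buffer's completion handler fires asynchronously, so resetting interruptBuffers immediately after scheduling can race with it. A variant worth considering (hypothetical, untested) tags each scheduled batch with a generation counter so stale handlers identify themselves; note that completion handlers run on a background thread, so in a real app you'd also synchronize access to the counter:
var generation = 0 // bumped on every interruption

func scheduleNextBuffer() {
    let scheduledGeneration = generation
    node.scheduleBuffer(nextBuffer) {
        // A handler from an interrupted (older) generation simply does nothing.
        guard scheduledGeneration == self.generation else { return }
        self.scheduleNextBuffer()
    }
}

func interruptPlayback() {
    generation += 1 // invalidates every handler scheduled before this point
    loopingBuffers = [buffer4, buffer5, buffer6]
    let scheduledGeneration = generation
    node.scheduleBuffer(buffer4, at: nil, options: .interrupts) {
        guard scheduledGeneration == self.generation else { return }
        self.scheduleNextBuffer()
    }
}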

How to get animation to work at exact points during playback of a music file?

Question:
In Swift code, apart from using an NSTimer, how can I get animations to start at exact points during playback of a music file played using AVFoundation?
Background
I have a method that plays a music file using AVFoundation (below). I also have UIView animations that I want to start at exact points during playback of the music file.
One way I could achieve this is with an NSTimer, but that has the potential to get out of sync or not be exact enough.
Is there a way to tap into AVFoundation and access the music file's elapsed time (time counter), so that when certain points during playback arrive, animations start?
Is there an event / notification that AVFoundation triggers that gives a constant stream of time elapsed since the music file started playing?
For example:
At 0:52.50 (52 seconds and 1/2), call startAnimation1(); at 1:20.75 (1 minute, 20 seconds and 3/4), call startAnimation2(); and so on?
switch musicPlayingTimeElapsed {
case 0:52.50:
    startAnimation1()
case 1:20.75:
    startAnimation2()
default:
    ()
}
Playing music using AVFoundation
import AVFoundation

var myMusic: AVAudioPlayer?

func playMusic() {
    if let musicFile = self.setupAudioPlayerWithFile("fileName", type: "mp3") {
        self.myMusic = musicFile
    }
    myMusic?.play()
}

func setupAudioPlayerWithFile(file: NSString, type: NSString) -> AVAudioPlayer? {
    let path = NSBundle.mainBundle().pathForResource(file as String, ofType: type as String)
    let url = NSURL.fileURLWithPath(path!)
    var audioPlayer: AVAudioPlayer?
    do {
        try audioPlayer = AVAudioPlayer(contentsOfURL: url)
    } catch {
        print("AVAudioPlayer not available")
    }
    return audioPlayer
}
If you use AVPlayer instead of AVAudioPlayer, you can use the (TBH slightly awkward) addBoundaryTimeObserverForTimes method:
let times = [
    NSValue(CMTime: CMTimeMake(...)),
    NSValue(CMTime: CMTimeMake(...)),
    NSValue(CMTime: CMTimeMake(...)),
    // etc
]

var observer: AnyObject? = nil // instance variable

self.observer = self.player.addBoundaryTimeObserverForTimes(times, queue: nil) {
    // The callback doesn't tell you which boundary fired, so check the player's
    // current time. Exact floating-point equality is fragile, so match ranges.
    switch CMTimeGetSeconds(self.player.currentTime()) {
    case 52.4..<52.6: // ~0:52.50
        startAnimation1()
    case 80.65..<80.85: // ~1:20.75
        startAnimation2()
    default:
        break
    }
}

// call this to stop observing
self.player.removeTimeObserver(self.observer)
The way I solve this is to divide the music up into separate segments beforehand. I then use one of two approaches:
I play the segments one at a time, each in its own audio player. The audio player's delegate is notified when a segment finishes, so starting the next segment - along with the accompanying action - is up to me.
Alternatively, I queue up all the segments onto an AVQueuePlayer and use KVO on the queue player's currentItem. That way I'm notified exactly when we move to a new segment (see the sketch below).
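A minimal sketch of that second approach using block-based KVO (segmentURLs is a placeholder for your pre-split audio files):
import AVFoundation

let items = segmentURLs.map { AVPlayerItem(url: $0) } // segmentURLs: [URL], hypothetical
let queuePlayer = AVQueuePlayer(items: items)

// Retain the observation; it fires each time playback advances to the next segment.
let observation = queuePlayer.observe(\.currentItem, options: [.new]) { _, _ in
    // A new segment just started -- kick off the animation that goes with it.
}
queuePlayer.play()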
You might try using Key-Value Observing to observe the duration property of your sound as it plays. When the elapsed time reaches your thresholds, you'd trigger each animation. You'd need to trigger on times >= the target time, since you will likely never get a perfect match with your desired time.
I don't know how well that would work, however. First, I'm not sure the sound player's duration is KVO-compliant.
Next, KVO is somewhat resource-intensive, and if your KVO listener gets called thousands of times a second it might bog things down. It would at least be worth a try.
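If what's wanted is the question's "constant stream of time elapsed", AVPlayer's addPeriodicTimeObserver(forInterval:queue:using:) provides that directly and may be a better fit than KVO. A sketch, assuming a player instance variable:
import AVFoundation

// Called roughly 60 times per second while the player is playing.
let interval = CMTimeMake(value: 1, timescale: 60)
let timeObserver = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { time in
    let seconds = CMTimeGetSeconds(time)
    // Compare `seconds` against the animation trigger points, with a small tolerance.
}
// Later, when done: player.removeTimeObserver(timeObserver)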
