iOS: How to play audio without fps drops?

I am in the process of developing a game for iOS 9+ using Sprite Kit and preferably using Swift libraries.
Currently I'm using a Singleton where I preload my audio files, each connected to a separate instance of AVAudioPlayer.
Here's a short code snippet to give you the idea:
import SpriteKit
import AudioToolbox
import AVFoundation

class AudioEngine {

    static let sharedInstance = AudioEngine()

    internal var sfxPing: AVAudioPlayer

    private init() {
        self.sfxPing = AVAudioPlayer()

        if let path = NSBundle.mainBundle().pathForResource("ping", ofType: "m4a") {
            do {
                let url = NSURL(fileURLWithPath: path)
                sfxPing = try AVAudioPlayer(contentsOfURL: url)
                sfxPing.prepareToPlay()
            } catch {
                print("ERROR: Can't load ping.m4a audio file.")
            }
        }
    }
}
This Singleton is initialised during app start-up. In the game-loop I then just call the following line to play a specific audio file:
AudioEngine.sharedInstance.sfxPing.play()
This basically works, but I always get glitches when a file is played, and the frame rate drops from 60.0 to 56.0 on my iPad Air.
Does anyone have an idea how to fix this performance issue with AVAudioPlayer?
I also looked at 3rd-party libraries, namely:
AudioKit [Looks very heavyweight]
ObjectAL [Last update 2013 ...]
AVAudioEngine [Based on AVAudioPlayer, same problems?]
Requirements:
Play a lot of very short samples (like shots, hits, etc..)
Play some motor effects (thus pitching would be nice)
Play some background / ambient sound in a loop
NO nasty glitches / frame rate drops !
Could you recommend any of the above-mentioned libraries for my requirements, or point out the problems with the above code?
UPDATE:
Playing short sounds with:
self.runAction(SKAction.playSoundFileNamed("sfx.caf", waitForCompletion: false))
does indeed improve the frame rate. I exported the audio files with Audacity to the .caf format (Apple's Core Audio Format). But in the tutorial they export with "Signed 32-bit PCM" encoding, which led to distorted audio playback in my case. Using any of the other encoding options (32-bit float, U-Law, A-Law, etc.) worked fine for me.
Why use the .caf format? Because it's uncompressed, it loads into memory faster and with less CPU overhead than compressed formats like .m4a. For short sound effects that are played frequently at short intervals this makes sense, and disk usage barely suffers since short audio files only take up a few kilobytes. For bigger audio files, like ambient and background music, compressed formats (.mp3, .m4a) are obviously the better choice.
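A minimal sketch of caching those preloaded actions so they can be reused from the game loop (the SoundCache name and API are just illustrative, not from any specific library):

import SpriteKit

class SoundCache {

    static let sharedInstance = SoundCache()

    private var actions = [String: SKAction]()

    // Build the SKActions up front so the audio data is ready before gameplay starts.
    func preload(fileNames: [String]) {
        for name in fileNames {
            actions[name] = SKAction.playSoundFileNamed(name, waitForCompletion: false)
        }
    }

    // Return the cached action, creating it on demand as a fallback.
    func action(fileName: String) -> SKAction {
        if let cached = actions[fileName] {
            return cached
        }
        let created = SKAction.playSoundFileNamed(fileName, waitForCompletion: false)
        actions[fileName] = created
        return created
    }
}

// In the scene, e.g. in didMoveToView():
//     SoundCache.sharedInstance.preload(["ping.caf", "shot.caf"])
// In the game loop:
//     runAction(SoundCache.sharedInstance.action("ping.caf"))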

Since you are developing a game for iOS 9+, you can use the new iOS 9 class SKAudioNode (official Apple doc):
var backgroundMusic: SKAudioNode!
For example you can add this to didMoveToView():
if let musicURL = NSBundle.mainBundle().URLForResource("music", withExtension: "m4a") {
    backgroundMusic = SKAudioNode(URL: musicURL)
    addChild(backgroundMusic)
}
You can also use it to play a simple effect:
let beep = SKAudioNode(fileNamed: "beep.wav")
beep.autoplayLooped = false
self.addChild(beep)
Finally, if you want to change the volume:
beep.runAction(SKAction.changeVolumeTo(0.4, duration: 0))
Update:
I see you have updated your question to mention AVAudioPlayer and SKAction. I've tested both of them in my iOS 8+ compatible games.
For AVAudioPlayer, I personally use a custom library of my own based on the old SKTAudio.
Looking at your AVAudioPlayer init code, mine is different because I use:
#available(iOS 7.0, *)
public init(contentsOfURL url: NSURL, fileTypeHint utiString: String?)
I don't know whether fileTypeHint makes the difference, so try it and let me know how your test goes.
Advantages of your code:
With a shared-instance audio manager based on AVAudioPlayer you can control the volume, use your manager wherever you want, and ensure compatibility with iOS 8.
Disadvantages of your code:
Every time you play a sound and then want to play another one, the previous sound is cut off, especially if you have background music playing.
How to solve this? According to this SO post, AVFoundation seems to work well without issues as long as you limit yourself to four instantiated AVAudioPlayer properties, so you can do this:
1) backgroundMusicPlayer: AVAudioPlayer!
2) soundEffectPlayer1: AVAudioPlayer!
3) soundEffectPlayer2: AVAudioPlayer!
4) soundEffectPlayer3: AVAudioPlayer!
You could build a method that switches through the three sound-effect players to see whether each one is busy:
if player.playing
and uses the next free player. With this workaround your sounds always play correctly, and so does your background music.
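A minimal sketch of that workaround (the class and method names are placeholders, not from your code; here the players are pooled per sound file, so a reused player already has the right sound loaded):

import AVFoundation

class AudioManager {

    static let sharedInstance = AudioManager()

    var backgroundMusicPlayer: AVAudioPlayer?

    // One small pool of players per sound file, so overlapping effects
    // never cut each other off.
    private var effectPlayers = [NSURL: [AVAudioPlayer]]()

    func playEffect(url: NSURL) {
        var pool = effectPlayers[url] ?? []

        // Reuse an idle player that already has this sound loaded.
        if let freePlayer = pool.filter({ !$0.playing }).first {
            freePlayer.currentTime = 0
            freePlayer.play()
            return
        }

        // Otherwise create one more player for this sound, capped at three.
        guard pool.count < 3 else { return } // all three are busy: skip the effect
        do {
            let player = try AVAudioPlayer(contentsOfURL: url)
            player.prepareToPlay()
            player.play()
            pool.append(player)
            effectPlayers[url] = pool
        } catch {
            print("ERROR: Can't load \(url).")
        }
    }
}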

Related

iOS: Apply audio modifications to Music library content

I'm working on an iOS/Flutter application, and am trying to work out if it's possible to play audio from the Music library on iOS with audio modifications (e.g. equalization settings) applied.
It seems like I'm looking for a solution that can work with MPMusicPlayerController, since that appears to be the strategy for playing local audio from the user's iOS Music library. I can find examples of applying EQ to audio on iOS (e.g. using AVAudioUnitEQ and AVAudioEngine: SO link, tutorial), but I'm unable to find any resources to help me understand whether it's possible to bridge the gap between these two approaches.
Flutter specific context:
There are Flutter plugins that provide some of the functionality I'm looking for, but don't appear to work together. For example, the just_audio plugin has a robust set of features for modifying audio, but does not work with the local Music application on iOS/MPMusicPlayerController. Other plugins that do work with MPMusicPlayerController, like playify, do not have the ability to modify/transform the audio.
Even though I'm working with Flutter, any general advice on the iOS side would be very helpful. I appreciate any insight someone with more knowledge may be able to share with me!
Updating with my own answer here for future people: It looks like my only path forward (for now) is leaning into AVAudioEngine directly. This is the rough POC that worked for me:
import AVFoundation
import MediaPlayer

var audioPlayer = AVAudioPlayerNode()
var audioEngine = AVAudioEngine()
var eq = AVAudioUnitEQ()

let mediaItemCollection: [MPMediaItem] = MPMediaQuery.songs().items!
let song = mediaItemCollection[0]

do {
    let file = try AVAudioFile(forReading: song.assetURL!)
    audioEngine.attach(audioPlayer)
    audioEngine.attach(eq)
    audioEngine.connect(audioPlayer, to: eq, format: nil)
    audioEngine.connect(eq, to: audioEngine.outputNode, format: file.processingFormat)
    audioPlayer.scheduleFile(file, at: nil)
    try audioEngine.start()
    audioPlayer.play()
} catch {
    // handle the error
}
The trickiest part for me was working out how to bridge the "Music library/MPMediaItem" world with the "AVAudioEngine" world -- which turned out to be just AVAudioFile(forReading: song.assetURL!).

Mono audio output in iOS app when using a webRTC powered video call

The app I'm writing contains two parts:
An audio player that plays stereo MP3 files
Video conferencing using webRTC
Each part works perfectly in isolation, but the moment I try them together, one of two things happens:
The video conference audio fades out and we just hear the audio files (in stereo)
We get audio output from both, but the audio files are played in mono, coming out of both ears equally
My digging has taken me down a few routes:
https://developer.apple.com/forums/thread/90503
&
https://github.com/twilio/twilio-video-ios/issues/77
These suggest that the issue could be with the audio session category, mode, or options.
However, I've tried lots of the combinations and am struggling to get anything working as intended.
Does anyone have a better understanding of the audio options who could point me in the right direction?
My most recent combination:
class BBAudioClass {

    static private var audioCategory: AVAudioSession.Category = AVAudioSession.Category.playAndRecord

    static private var audioCategoryOptions: AVAudioSession.CategoryOptions = [
        AVAudioSession.CategoryOptions.mixWithOthers,
        AVAudioSession.CategoryOptions.allowBluetooth,
        AVAudioSession.CategoryOptions.allowAirPlay,
        AVAudioSession.CategoryOptions.allowBluetoothA2DP
    ]

    static private var audioMode = AVAudioSession.Mode.default

    static func setCategory() -> Void {
        do {
            let audioSession: AVAudioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(
                BBAudioClass.audioCategory,
                mode: BBAudioClass.audioMode,
                options: BBAudioClass.audioCategoryOptions
            )
        } catch {
        }
    }
}
Update
I managed to get everything working as I wanted by:
Starting the audio session
Connecting to the video conference (at this point all audio is mono)
Forcing all output to the speaker
Forcing output back to the headphones
Obviously this is a crazy thing to have to do, but it does prove that it should work.
It would be great if anyone knew WHY this works, so that I can actually get things working properly the first time without going through all these hacky steps.
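For reference, a rough sketch of steps 3 and 4 (the function name is just illustrative, and a short delay between the two overrides may be needed in practice):

func nudgeAudioRoute() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.overrideOutputAudioPort(.speaker) // force all output to the built-in speaker
        try session.overrideOutputAudioPort(.none)    // hand routing back to the system (headphones)
    } catch {
        print("Failed to override audio route: \(error)")
    }
}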

AudioKit - Trying to generate sound in Audio Kit 5

I am setting up a starter application that, on start-up, will generate some white noise using AudioKit.
I have set up the following code that gets called on start up of my application:
import AudioKit

let engine = AudioEngine()
let noise = WhiteNoise()
let mixer = Mixer(noise)
mixer.volume = 1
engine.output = mixer
try! engine.start()
But when I start up the application I do not hear any sound being generated. I set up a simple example to generate a sine wave using AVFoundation and I was able to hear the sound generated from my simulator.
I found an old thread - AudioKit - no sound output - but I checked the AudioKit repo and it looks like that feature was removed a couple of months back since it was not being used.
Any help would be appreciated!
Try noise.start(). Generators don't default to being on.
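In other words, a version of the start-up code above with the generator started explicitly (the exact module layout may vary between AudioKit 5.x releases):

import AudioKit

let engine = AudioEngine()
let noise = WhiteNoise()
let mixer = Mixer(noise)
mixer.volume = 1
engine.output = mixer

try! engine.start()
noise.start() // generators are off by default, so the node must be started explicitly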

I want to make sound effects without stopping the music playing in the background in another app

I am currently developing an application with SwiftUI.
There is a function that plays a sound effect when you tap on it.
When I tested it on the actual device, the Spotify music playing in the background stopped. Is it possible to use AVFoundation to play sound effects without stopping the music? Also, if there is a better way to implement this, please help me.
import Foundation
import AVFoundation

var audioPlayer: AVAudioPlayer?

func playSound(sound: String, type: String) {
    if let path = Bundle.main.path(forResource: sound, ofType: type) {
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
            audioPlayer?.play()
        } catch {
            print("ERROR: Could not find and play the sound file.")
        }
    }
}
Set your AVAudioSession category options to .mixWithOthers.
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
} catch {
    print("Failed to set audio session category.")
}
try? audioSession.setActive(true) // When you're ready to play something
ambient indicates that sound is not critical to this application. The default is soloAmbient, which is non-mixable. (It's possible you can just set the category to ambient here and you'll get mixWithOthers for free. I don't often use that mode, so I don't remember how much you get by default.) If sound is critical, see the .playback category.
As a rule, you set the category once in the app, and then set the session active or inactive as you need it. If I remember correctly, AVAudioPlayer will automatically activate the session, so you may not need to do that (I typically work at much lower levels, and don't always remember what the high-level tools do automatically). It is legal to change your category, however, if different features of your app have different needs.
For more details, see AVAudioSession.Category and AVAudioSession. There are many options in sound playback, and many of them are not obvious (and a few have very confusing names, I'm looking at you .allowBluetooth), so it's worth reading through the docs.
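Putting the pieces together, a minimal sketch under those assumptions (the configureAudioSession name is just illustrative): set the category once, e.g. at app launch, then call the playSound function from your question as needed.

import AVFoundation

// Call once, e.g. early in the app's life cycle.
func configureAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        // .ambient + .mixWithOthers lets short effects play on top of
        // audio from other apps (e.g. Spotify) instead of interrupting it.
        try audioSession.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
    } catch {
        print("Failed to set audio session category.")
    }
}

// Usage:
// configureAudioSession()
// playSound(sound: "tap", type: "mp3")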

iOS: How to assign specific channels of a USB device to an audio player? AVFoundation

I'm working with AVAudioPlayer and AVAudioSession. I have an iPad and an audio interface (sound card).
This audio interface has 4 outputs (2 stereo pairs) and a Lightning cable, and it draws its power from the iDevice; it works excellently.
I've coded a simple play()/stop() AVAudioPlayer that works fine, BUT I need to assign specific channels of the audio interface (1-2 & 3-4). My idea is to send two audio files (A & B) each to its own output/channel pair (1-2 or 3-4).
I've read the AVAudioPlayer documentation and it says channelAssignments is for assigning channels to an audio player.
The problem is: I've created an AVAudioSession that gets the data of the USB device plugged into the port (the sound card), and I got:
let route = AVAudioSession.sharedInstance().currentRoute
for port in route.outputs {
    if port.portType == AVAudioSessionPortUSBAudio {
        let portChannels = port.channels
        let sessionOutputs = route.outputs
        let dataSource = port.dataSources
        dataText.text = String(portChannels) + "\n" + String(sessionOutputs) + "\n" + String(dataSource)
    }
}
Log:
outputs
Which data must I take and use to send the audio files with play()?
Wow - I had no idea that AVAudioPlayer had been developed at all since AVPlayer came out in iOS 4. Yet here we are in 2016, and AVAudioPlayer has channelAssignments while the fancy streaming, video playing with subtitles AVPlayer does not.
But I don't think you will be able to play two files A and B through one AVAudioPlayer as each player can only open one file. That leaves
creating two players (player(A) and player(B)) and setting the channelAssignments of each to one half of the audio device's output channels (see the sketch after this list), dealing with the joys of synchronising the players, or
creating a four-channel file, AB, and playing it through one player, assigning channelAssignments the full four channels you found above, dealing with the joys of slightly non-standard audio files.
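If you go with the first option, a sketch along these lines might work (the function and parameter names are placeholders, and I haven't tested it against a real 4-output interface):

import AVFoundation

// Route playerA to outputs 1-2 and playerB to outputs 3-4 of the USB interface,
// assuming the current route exposes at least four output channels.
func assignPlayers(playerA: AVAudioPlayer, _ playerB: AVAudioPlayer) {
    let route = AVAudioSession.sharedInstance().currentRoute
    for port in route.outputs where port.portType == AVAudioSessionPortUSBAudio {
        guard let channels = port.channels where channels.count >= 4 else { continue }
        playerA.channelAssignments = Array(channels[0..<2]) // channels 1-2
        playerB.channelAssignments = Array(channels[2..<4]) // channels 3-4
    }
}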
Sanity check: is your session.outputNumberOfChannels returning 4?
Personally, when I do this kind of thing I create a 4 channel remote io audio unit as I've found the higher level APIs cause too much heartache once you do anything a little unusual. I also use AVAudioSessionCategoryMultiRoute because I don't have any > 2 channel sound cards, so I have to cobble headphone jack plus usb sound card to get 4 channels, but you shouldn't need this.
Despite not having procedural output (like remoteIO audio units), you may also be able to use AVAudioEngine to do what you want.
