AVAudioPlayer initialization & memory management - iOS

I am trying to figure out something -
Every time I use AVAudioPlayer I need to initialize a new AVAudioPlayer object.
I have 10 Sentence objects on a view, and I wish to add a "PlaySentence" method to each Sentence object so that when the user taps the Sentence, the app plays a sound file.
I need this behavior on many views, so I thought of adding the method to the object class so I can simply call:
[Sentence playSound];
Since AVAudioPlayer is initialized every time I wish to use it anyway, I can't see why this would be a more expensive operation.
Am I right / is this a good approach for this need, and why?
Thanks
Shani

So if I understand you right, you want your Sentence object to have a playSound method which sets up an AVAudioPlayer and plays the sound.
You can definitely do it like that, but be aware that if you have a lot of Sentence objects then you will end up creating a lot of AVAudioPlayer objects. You can keep the high-water mark from getting too high by releasing each player after its file has finished playing.
Another way you could do this is to have a method on Sentence that returns the URL of the file to play, and then keep a single AVAudioPlayer instance in the view controller where you want to play the sounds, setting it up each time for the correct file. This would be my personal suggested way of doing it.
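For illustration, here is a minimal sketch of that second approach (in Swift; the Sentence type and property names below are placeholders, not from the question):

import UIKit
import AVFoundation

// Illustrative stand-in for the asker's Sentence class; assumes it knows its own sound file.
struct Sentence {
    let soundFileURL: URL
}

final class SentenceListViewController: UIViewController, AVAudioPlayerDelegate {
    // One shared player, reconfigured for whichever sentence was tapped.
    private var player: AVAudioPlayer?

    func play(_ sentence: Sentence) {
        do {
            player = try AVAudioPlayer(contentsOf: sentence.soundFileURL)
            player?.delegate = self
            player?.play()
        } catch {
            print("Could not create player: \(error)")
        }
    }

    // Releasing the player once playback ends keeps the high-water mark low.
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        self.player = nil
    }
}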

I think the best solution for you is to use SimpleAudioEngine from the Cocos2D library.
It's easy to use and integrates well. As far as I know, it uses OpenAL and AVAudioPlayer. It has easy interfaces for controlling background music and effects, and it works with mp3, wav, m4a and other formats and codecs.
Using it is very easy:
// It's better to preload all effects during app launch so they play immediately when you call the playEffect method
[[SimpleAudioEngine sharedEngine] preloadEffect:@"sentence.m4a"];
...
// Play the effect where you need it
[[SimpleAudioEngine sharedEngine] playEffect:@"sentence.m4a"];
You can find it inside the Cocos2D library (it is part of the CocosDenshion sound engine), along with its documentation.

Related

Best way to play silence using AVAudioPlayer on iOS

I found myself in a situation where I need to simulate audio playback to trick the OS controls and MPNowPlayingInfoCenter into thinking that audio is being played. This is because I am building a player that plays multiple audio tracks with pauses in between, creating one continuous "audio" track. I already have everything set up inside the app itself, and the lock screen controls are working correctly. The only problem I am facing is that while the actual audio stops and a pause is being "played", the lock screen info center stops the timer, and it only resumes showing the correct time and overall state once another audio track starts playing.
Here is an example of my audio track built from audio files and pause items:
let items: [AudioItem] = [
    .audio("part-1.mp3"),
    .pause(duration: 5), // value of type TimeInterval
    .audio("part-2.mp3"),
    .pause(duration: 3),
    // ... the list goes on
]
Then, in my custom player, once AVAudioPlayer finishes its job with the current item, I get the next one from the array and play either a .pause with a scheduled Timer or another .audio with AVAudioPlayer.
extension Player: AVAudioPlayerDelegate {
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        playNextItem()
    }
}
And here lies the problem: once the AVAudioPlayer stops, the Now Playing info center automatically stops too, even though I keep feeding it fresh nowPlayingInfo. Then, when it hits another .audio item, it resumes correctly and shows the current time, etc.
And here lies the question:
how do I trick the MPNowPlayingInfoCenter into thinking that audio is being played while I "play" my .pause item?
I realise that what I am trying to achieve may still not be clear, but I am happy to share more insight if needed. Thanks!
Some solutions I am currently thinking about:
A. Keeping a 1-second-long empty audio track that plays on loop for as long as the pause needs to "play".
B. Programmatically creating an empty audio track of the appropriate length and playing it instead of using a Timer to keep track of the pause duration/progress, relying completely on AVAudioPlayer for both .audio and .pause items. Not sure this is possible though.
C. Maybe there is a way to tell the MPNowPlayingInfoCenter that the audio keeps playing without using AVAudioPlayer, via some API I am not familiar with?
AVAudioPlayer is probably the wrong tool here. You want AVAudioPlayerNode, which is slightly lower-level. Create an AVAudioEngine, and attach an AVAudioPlayerNode. You can then call scheduleFile(_:at:completionHandler:) to play the audio at the times you want.
Much of the Apple documentation on AVAudioEngine appears to be broken at the moment, but it should become available again under Audio Engine Building Blocks. (If it stays down and you have trouble finding docs, leave a comment and I'll hunt down the WWDC videos and other tutorials on using AVAudioEngine. It's not particularly difficult for simple problems.)
If you know in advance how you want to compose these items (and it looks like you may), see also AVMutableComposition, which lets you glue together assets very efficiently, including adding empty segments of silence. See Media Composition and Editing for the various tools in that space.
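As a rough sketch of that approach (class and method names are illustrative, not from the question): the audio files are scheduled on a single AVAudioPlayerNode, and the pauses are scheduled as empty PCM buffers, so from the system's point of view the node never stops playing.

import AVFoundation

// One engine, one player node; files and silent buffers are scheduled back to back
// so the node keeps playing across the pauses.
final class GaplessPlayer {
    private let engine = AVAudioEngine()
    private let node = AVAudioPlayerNode()

    init() throws {
        engine.attach(node)
        engine.connect(node, to: engine.mainMixerNode, format: nil)
        try engine.start()
        node.play()
    }

    func schedule(fileAt url: URL) throws {
        let file = try AVAudioFile(forReading: url)
        // `at: nil` means "right after whatever was scheduled before".
        node.scheduleFile(file, at: nil, completionHandler: nil)
    }

    func schedulePause(duration: TimeInterval) {
        let format = node.outputFormat(forBus: 0)
        let frameCount = AVAudioFrameCount(duration * format.sampleRate)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return }
        buffer.frameLength = frameCount   // a freshly allocated buffer holds zeros, i.e. silence
        node.scheduleBuffer(buffer, completionHandler: nil)
    }
}

Because the output never actually stops, the elapsed-time information you publish to MPNowPlayingInfoCenter should be able to keep advancing through the pauses.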

Best strategy for managing audio instances to allow cross-fades, etc

I'm working on an iOS application that works with different pieces of audio. Each piece of audio is tied to a separate button (similar to the kind of functionality you'd see in a soundboard app).
In a simple soundboard application, you've got an instance of a player object (AVAudioPlayer or an AVAudioEngine player node) that gets fired whenever a button is pressed.
If you push buttonOne, then sound1 plays.
If you push buttonOne again while sound1 is still playing, then the current instance is interrupted and "replaced" with a new instance of sound1 that starts over at the beginning.
If you push buttonOne, THEN push buttonTwo before sound1 is finished, the instance of sound1 is interrupted and replaced with sound2, again, played from the beginning.
Suppose you're trying to enable cross-fading between the two sounds. You can just create two player instances, load the first sound into player1 and the second into player2, and cross fade between them.
Building on that, suppose you're trying to allow different combinations of sounds to play at the same time. This could mean that a large number of sounds (maybe the whole soundboard) can all play without interrupting each other, or perhaps that multiple sounds can play at the same time, e.g. sound1 is a music bed, and sound2...soundXX are sound effects that should be able to play over the music bed without interrupting it.
QUESTIONS: what is the best design strategy for managing your player instances in this situation? Suppose you have a 5 x 5 grid of buttons in your soundboard. If, hypothetically, you should be able to play all 25 sounds at the same time, would that require you to init 25 player instances at setup? (this seems very anti-DRY and not particularly efficient). Or is there some way to dynamically manage the number of instances you need (maybe with a lazy variable?) so that additional instances are only generated as needed, e.g. when you have x number of sounds playing and you start another, an additional instance is generated to contain the newly added sound?
what is the best design strategy for managing instances of AVAudioPlayer in this situation
None. You should be using AVAudioEngine. That's exactly what it is, a sound board / patch kit.
Or is there some way to dynamically manage the number of instances you need
If I have 25 ordered instances of a thing where one possibility is no instance at all, that sounds like an array of Optionals.
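As a sketch of what that might look like (the names and the lazy-creation strategy below are illustrative, not a definitive design): one AVAudioEngine, with at most one AVAudioPlayerNode per pad, created only the first time that pad is pressed, all feeding the main mixer so any number of pads can sound at once.

import AVFoundation

// Sketch only: one engine, an optional player node per pad, all mixed through mainMixerNode.
final class Soundboard {
    private let engine = AVAudioEngine()
    private var pads: [AVAudioPlayerNode?] = Array(repeating: nil, count: 25)

    func play(fileAt url: URL, pad index: Int) throws {
        let file = try AVAudioFile(forReading: url)

        // Lazily create the node for this pad the first time it is used.
        let node: AVAudioPlayerNode
        if let existing = pads[index] {
            existing.stop()                       // retrigger: restart this pad only
            node = existing
        } else {
            node = AVAudioPlayerNode()
            engine.attach(node)
            engine.connect(node, to: engine.mainMixerNode, format: file.processingFormat)
            pads[index] = node
        }

        if !engine.isRunning { try engine.start() }
        node.scheduleFile(file, at: nil, completionHandler: nil)
        node.play()
    }

    // A crude cross-fade: each node has its own volume, so fade one down while the other
    // comes up (shown here as a single step; a Timer or CADisplayLink could ramp it).
    func crossfade(from a: Int, to b: Int) {
        pads[a]?.volume = 0.0
        pads[b]?.volume = 1.0
    }
}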

Playing sound without lag in Swift

As many developers know, using AVAudioPlayer for playing sound in games can result in jerky animation/movement, because of a tiny delay each time a sound is played.
I used to overcome this in Objective-C, by using OpenAL through a wrapper class (also in Obj-C).
I now use Swift for all new projects, but I can't figure out how to use my wrapper class from Swift. I can import the class (through a bridging header), but when I need to create ALCdevice and ALCcontext objects in my Swift file, Xcode won't accept it.
Does anyone have or know of a working example of playing a sound using OpenAL from Swift? Or maybe sound without lag can be achieved in some other way in Swift?
I've run into a delay-type problem once; I hope your problem is the same one I encountered.
In my situation, I was using SpriteKit to play my sounds, using SKAction.playSoundFileNamed:. It would always lag half a second behind where I wanted it to play.
This is because it takes time to allocate memory for each SKAction call. To solve this, store the sound action in a variable so you can reuse the sound later without instantiating new objects. It saved me from the delay. This technique would probably work for AVAudioPlayer too.
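For example, a minimal sketch of the preloading pattern in SpriteKit (the file name is a placeholder):

import SpriteKit

class GameScene: SKScene {
    // Built once, up front, so playing it later does not pay the allocation cost.
    private let laserSound = SKAction.playSoundFileNamed("laser.wav", waitForCompletion: false)

    func fireLaser() {
        run(laserSound)   // reuses the preloaded action; no per-shot setup
    }
}

The analogous move with AVAudioPlayer would be to create the player ahead of time and call prepareToPlay() so its buffers are primed before the first play().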

multiple MPMoviePlayerController on a tableview

I have multiple MPMoviePlayerController on a UITableView (on different sections).
I know that only one can play at any given time, but the thing is that if another player was in "pause" mode, it gets stuck and I need to re-init it.
I could do a sophisticated [tableView reloadData] on everything except the current player, but that seems cycle-consuming and clumsy (and it is not that simple to reload everything but the current cell).
Is there a better way? Maybe a 3rd party OS package that does handle this nicely?
OK, first of all, why do you need multiple MPMoviePlayerController instances if you only play one video at a time? You could create one MPMoviePlayerController instance to play all the videos, one by one.
And I think AVPlayer with AVPlayerLayer would be a more flexible solution for playing videos on iOS. Take a look at the AVFoundation framework reference for more information about AVPlayer.
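A minimal sketch of that single-player idea (all the type and property names below are illustrative): one shared AVPlayer is re-pointed at whichever cell should play, so there is nothing to re-initialize when the user switches videos.

import UIKit
import AVFoundation

// One shared AVPlayer, moved to whichever cell's video should currently play.
final class VideoTableViewController: UITableViewController {
    private let sharedPlayer = AVPlayer()
    private weak var activeLayer: AVPlayerLayer?

    func playVideo(at url: URL, in cell: VideoCell) {
        activeLayer?.player = nil                                   // detach the previous cell
        sharedPlayer.replaceCurrentItem(with: AVPlayerItem(url: url))
        cell.playerLayer.player = sharedPlayer
        activeLayer = cell.playerLayer
        sharedPlayer.play()
    }
}

final class VideoCell: UITableViewCell {
    let playerLayer = AVPlayerLayer()

    override func layoutSubviews() {
        super.layoutSubviews()
        if playerLayer.superlayer == nil {
            contentView.layer.addSublayer(playerLayer)
        }
        playerLayer.frame = contentView.bounds
    }
}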
Good Luck!

How should I go about synchronizing sound effects with UI animations in iOS?

I want to play some sound effects while animating a UI element (e.g. playing a movement sound while a UI object is moving), which requires precise timing and synchronization.
I really can't figure out which framework I should be using from the descriptions in the Multimedia Programming Guide. So I need your kind help in choosing one.
What I want to do is:
Play short (max 10 seconds) sound effects (e.g. a button tap sound).
Being able to synchronize some of them with UI animations (e.g. a view appearance/disappearance).
I tried using the AudioServicesPlaySystemSound function from the AudioToolbox framework; sometimes it works great, but sometimes the sound won't play instantly. For example, when a button is clicked, its action is performed before the sound is played, even though AudioServicesPlaySystemSound is called first in the button's action method.
Thanks in advance,
Mota
Mixing audio waveforms into an already running RemoteIO Audio Unit configured with short buffers will have the lowest possible audio latency. The cost of this is a more complex API and the need for uncompressed audio assets.
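That answer is about a raw RemoteIO Audio Unit. As a rough, higher-level illustration of the same idea (an output that is already running, with a short I/O buffer and uncompressed buffers preloaded), here is an AVAudioEngine sketch with illustrative names and values:

import AVFoundation

// Higher-level stand-in for the RemoteIO approach: keep the output running and
// mix preloaded, uncompressed buffers into it.
final class EffectPlayer {
    private let engine = AVAudioEngine()
    private let node = AVAudioPlayerNode()

    init() throws {
        // Ask for a short I/O buffer so mixing latency stays low (hardware may round it).
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.ambient)
        try session.setPreferredIOBufferDuration(0.005)   // ~5 ms
        try session.setActive(true)

        engine.attach(node)
        engine.connect(node, to: engine.mainMixerNode, format: nil)
        try engine.start()     // keep running so later playback starts immediately
        node.play()
    }

    // Decode buffers up front (e.g. from WAV/CAF files at app launch); scheduling is then nearly instant.
    func play(_ buffer: AVAudioPCMBuffer) {
        node.scheduleBuffer(buffer, completionHandler: nil)
    }
}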
