In my iOS app, I have large PCM files. I understand that the OpenAL buffers reside in memory. Is it possible to get the buffers to stream from disk instead?
Thanks.
We are using Web Audio API to play and manipulate audio in a web app.
When trying to decode large MP3 files (around 5 MB), the memory usage spikes upwards in Safari on iPad, and if we load another file of similar size it will simply crash.
It seems like Web Audio API is not really usable when running on the iPad unless we use small files.
Note that the same code works well in the desktop version of Chrome; the desktop version of Safari does complain about high memory usage.
Does anybody know how to get around this issue? Or what's the memory limit for playing audio files using Web Audio on an iPad?
Thanks!
Decoded audio files weigh a lot more in RAM than on disk. A single sample uses 4 bytes (32-bit float). This translates to 230 MB of RAM for 10 minutes of audio at a 48,000 Hz sample rate in stereo. One hour of audio at the same sample rate, in stereo, will take ~1.3 GB of RAM!
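To make the arithmetic explicit (a back-of-the-envelope check, nothing framework-specific):

    // Decoded audio costs 4 bytes (32-bit float) per sample, per channel.
    double sampleRate = 48000.0;          // device sample rate in Hz
    double channels   = 2.0;              // stereo
    double seconds    = 10.0 * 60.0;      // 10 minutes
    double bytes      = sampleRate * channels * seconds * 4.0;
    // bytes == 230,400,000, i.e. ~230 MB; a full hour works out to ~1.38 GB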
So, if you decode a lot of files, you can consume large amounts of RAM. My suggestion is to "undecode" files that you don't need (just "forget" unneeded audio buffers, so the garbage collector can free the memory).
You can also use mono audio files instead of stereo, that should reduce memory usage by half.
Note that decoded audio is always resampled to the device's sample rate. This means that using source audio with low sample rates won't help with memory usage.
I am getting strange behaviour with AudioFileReadPackets since it's been deprecated in iOS 8.
How should it be replaced?
AudioFileReadPackets is deprecated in iOS 8.0; you can use AudioFileReadPacketData instead. From Apple's documentation:
If you do not need to read a fixed duration of audio data, but rather want to use your memory buffer most efficiently, use AudioFileReadPacketData instead of AudioFileReadPackets. When reading variable bit-rate (VBR) audio data, the AudioFileReadPackets function requires that you allocate more memory than you would for the AudioFileReadPacketData function. See the descriptions for the outBuffer parameter in each of these two functions. In addition, the AudioFileReadPackets function is less efficient than AudioFileReadPacketData when reading compressed file formats that do not have packet tables, such as MP3 or ADTS. Use AudioFileReadPackets only when you need to read a fixed duration of audio data, or when you are reading only uncompressed audio.
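For reference, here's a minimal sketch of a call to AudioFileReadPacketData. The buffer sizes are arbitrary, and audioFile/startingPacket stand in for an AudioFileID you've already opened (e.g. with AudioFileOpenURL) and your packet read cursor:

    #include <AudioToolbox/AudioToolbox.h>

    enum { kBufferSize = 32 * 1024, kMaxPackets = 512 };  // arbitrary capacities

    static char                         buffer[kBufferSize];
    static AudioStreamPacketDescription packetDescs[kMaxPackets]; // needed for VBR data

    UInt32 ioNumBytes   = kBufferSize;   // in: capacity of buffer; out: bytes actually read
    UInt32 ioNumPackets = kMaxPackets;   // in: packets requested; out: packets actually read

    OSStatus err = AudioFileReadPacketData(audioFile,       // an open AudioFileID
                                           false,           // inUseCache
                                           &ioNumBytes,
                                           packetDescs,
                                           startingPacket,  // SInt64 packet index
                                           &ioNumPackets,
                                           buffer);
    // On success, advance the cursor: startingPacket += ioNumPackets;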
My app uses Core Audio to analyse audio buffers, recording over long time periods but only presenting a few samples of that audio to the user. To date, I have been writing everything to disk before selectively deleting files. However, it seems that the write operation is quite demanding of the hardware, and can occasionally trigger crashes if used too much.
I'd love to have a way to avoid the write operation unless necessary, which could be done by storing audio buffers (say, one minute's worth) in RAM before either writing them to disk or releasing them from memory if not needed.
Can anyone please advise of the most efficient way that this can be done?
    renderErr = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                bus1, inNumberFrames, THIS->bufferList);

    // ... analysis of the buffer ...

    OSStatus s = ExtAudioFileWriteAsync(THIS->mAudioFileRef,
                                        inNumberFrames, THIS->bufferList);
What would be the best container for storing the buffer before selectively running a loop to write the buffer to disk...? What would be the best way to release the memory?
A lock-free circular queue or FIFO backed by an array of 2,646,000 + 1 or more samples can hold one minute of mono audio at 44.1 kHz in RAM. Most iOS devices won't have a problem allocating 6 MB or more of memory for such a queue. Displaying, analysing or writing this data to disk can be done on another thread via a periodic polling or display-link timer (emptying from the tail of the queue). The memory used for the array can be released after the audio unit is stopped.
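A rough sketch of such a single-producer/single-consumer FIFO in C, using a power-of-two capacity (slightly over one minute of mono 16-bit audio, ~8 MB) so the index wrap is a cheap mask; all names here are illustrative, not from any framework:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define FIFO_SIZE (1u << 22)   // 4,194,304 samples ≈ 95 s of mono 44.1 kHz audio
    #define FIFO_MASK (FIFO_SIZE - 1)

    typedef struct {
        int16_t          samples[FIFO_SIZE];
        _Atomic uint32_t head;   // advanced only by the audio (producer) thread
        _Atomic uint32_t tail;   // advanced only by the consumer thread
    } AudioFIFO;

    // Called from the render callback: never blocks, never allocates.
    static bool fifo_push(AudioFIFO *f, const int16_t *in, uint32_t n) {
        uint32_t head = atomic_load_explicit(&f->head, memory_order_relaxed);
        uint32_t tail = atomic_load_explicit(&f->tail, memory_order_acquire);
        if (FIFO_SIZE - (head - tail) < n) return false;   // not enough room
        for (uint32_t i = 0; i < n; i++)
            f->samples[(head + i) & FIFO_MASK] = in[i];
        atomic_store_explicit(&f->head, head + n, memory_order_release);
        return true;
    }

    // Called from the polling/display-link thread; returns samples copied.
    static uint32_t fifo_pop(AudioFIFO *f, int16_t *out, uint32_t n) {
        uint32_t tail  = atomic_load_explicit(&f->tail, memory_order_relaxed);
        uint32_t head  = atomic_load_explicit(&f->head, memory_order_acquire);
        uint32_t avail = head - tail;
        if (n > avail) n = avail;
        for (uint32_t i = 0; i < n; i++)
            out[i] = f->samples[(tail + i) & FIFO_MASK];
        atomic_store_explicit(&f->tail, tail + n, memory_order_release);
        return n;
    }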
So, I have set up a multichannel mixer and a Remote I/O unit to mix/play several buffers of PCM data that I read from audio files.
For short sound effects in my game, I load the whole file into a memory buffer using ExtAudioFileRead().
For my background music, let's say I have a 3-minute compressed audio file. Assuming it's encoded as MP3 @ 128 kbps (44,100 Hz, stereo), that gives around 1 MB per minute, or 3 MB total. Uncompressed in memory, it's around ten times that: 16-bit PCM at 44.1 kHz stereo comes to roughly 10 MB per minute. I could use the exact same method as for small files; I believe ExtAudioFileRead() takes care of the decoding, using the (single) hardware decoder when available, but I'd rather not read the whole buffer at once, and instead 'stream' it at regular intervals from disk.
The first thing that comes to mind is going one step below to the (non-"extended") Audio File Services API and using AudioFileReadPackets(), like so (a rough sketch follows the list):
1. Prepare two buffers A and B, each big enough to hold (say) 5 seconds of audio. During playback, start reading from one buffer and switch to the other one when reaching the end (i.e., they make up the two halves of a ring buffer).
2. Read the first 5 seconds of audio from the file into buffer A.
3. Read the next 5 seconds of audio from the file into buffer B.
4. Begin playback (from buffer A).
5. Once the play head enters buffer B, load the next 5 seconds of audio into buffer A.
6. Once the play head enters buffer A again, load the next 5 seconds of audio into buffer B.
7. Go to step 5.
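Here is that ping-pong logic with illustrative names only; fillBuffer() is a hypothetical helper wrapping AudioFileReadPacketData() (or ExtAudioFileRead()) that decodes the next ~5 seconds into the given half, and in real code the needsRefill flags should be atomic since they cross threads:

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdbool.h>

    enum { kHalfFrames = 5 * 44100 };   // ~5 s per half at 44.1 kHz, mono for brevity

    // Hypothetical decode helper; not an Apple API.
    static void fillBuffer(AudioFileID file, float *dst, int frames);

    typedef struct {
        float samples[2][kHalfFrames];  // halves A (0) and B (1) of the ring
        int   playingHalf;              // half the render callback is reading from
        int   frameInHalf;              // play head within that half
        bool  needsRefill[2];           // set on the audio thread, serviced elsewhere
    } StreamState;

    // Called from the render callback for each output frame.
    static float nextSample(StreamState *s) {
        float out = s->samples[s->playingHalf][s->frameInHalf++];
        if (s->frameInHalf == kHalfFrames) {         // reached the end of this half:
            s->needsRefill[s->playingHalf] = true;   // schedule a refill of it,
            s->playingHalf ^= 1;                     // and continue in the other half
            s->frameInHalf = 0;
        }
        return out;
    }

    // Called periodically on a non-audio thread (e.g. a timer).
    static void serviceRefills(StreamState *s, AudioFileID file) {
        for (int half = 0; half < 2; half++) {
            if (s->needsRefill[half]) {
                fillBuffer(file, s->samples[half], kHalfFrames);
                s->needsRefill[half] = false;
            }
        }
    }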
Is this the right approach, or is there a better way?
I'd suggest using the high-level AVAudioPlayer class to do simple background playback of an audio file. See:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Chapters/Reference.html#//apple_ref/doc/uid/TP40008067
If you require finer-grained control and lower latency, check out Apple's AUAudioFilePlayer. See AudioUnitProperties.h for a discussion. This is an Audio Unit that abstracts the complexities of streaming an audio file from disk. That said, it's still pretty complicated to set up and use, so definitely try AVAudioPlayer first.
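For the simple case, AVAudioPlayer is only a few lines. A sketch, assuming a bundled file named music.mp3:

    #import <AVFoundation/AVFoundation.h>

    NSURL *url = [[NSBundle mainBundle] URLForResource:@"music" withExtension:@"mp3"];
    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    player.numberOfLoops = -1;   // loop indefinitely for background music
    [player prepareToPlay];      // preloads; the class handles reading from disk itself
    [player play];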
I'm just researching at the moment the possibility of writing an app to record an hour's worth of video/audio for a specific use case.
As the video will be an hour long I would want to encode on-the-fly and not after the recording has finished to keep disk usage to a minimum.
Do the video capture APIs write a large uncompressed file to disk that has to be encoded afterwards, or can they encode on the fly, resulting in an optimised file written to disk?
It's important that the video is recorded at a lower resolution than the iPhone's advertised 720/1080p as I need to keep the file sizes down due to length of video (which will need to be uploaded).
Any information you have would be appreciated or even just a pointer in the right direction.
No, they do not record uncompressed to disk (unless that is what you want). You can specify recording to a MOV/MP4 and have the video encoded as H264. Additionally, you can control the average bit rate of the encoding. You can also specify the capture size and the output encoding size, along with scaling options if needed. For demo code, check out AVCamDemo in the WWDC 2010 sample code. This demo code may now be available in the docs.
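As a rough sketch of that setup (the preset, output path, and delegate wiring are assumptions; consult the AVCamDemo sample for the full treatment):

    #import <AVFoundation/AVFoundation.h>

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPreset640x480;   // below 720/1080p

    NSError *error = nil;
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *mic    = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    [session addInput:[AVCaptureDeviceInput deviceInputWithDevice:camera error:&error]];
    [session addInput:[AVCaptureDeviceInput deviceInputWithDevice:mic error:&error]];

    // Writes a compressed movie file as it records; nothing uncompressed hits disk.
    AVCaptureMovieFileOutput *movieOut = [[AVCaptureMovieFileOutput alloc] init];
    [session addOutput:movieOut];

    [session startRunning];
    NSURL *outURL = [NSURL fileURLWithPath:@"/tmp/recording.mov"];   // placeholder path
    [movieOut startRecordingToOutputFileURL:outURL
                          recordingDelegate:self];   // assumes self adopts AVCaptureFileOutputRecordingDelegate

Finer control over the average bit rate typically means dropping down to AVAssetWriter with AVVideoAverageBitRateKey in the compression settings.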