I am getting strange behaviour with AudioFileReadPackets since it's been deprecated in iOS 8.
How should it be replaced?
AudioFileReadPackets is deprecated in iOS 8.0; you can use AudioFileReadPacketData instead. From Apple's documentation:
If you do not need to read a fixed duration of audio data, but rather want to use your memory buffer most efficiently, use AudioFileReadPacketData instead of AudioFileReadPackets. When reading variable bit-rate (VBR) audio data, the AudioFileReadPackets function requires that you allocate more memory than you would for the AudioFileReadPacketData function. See the descriptions of the outBuffer parameter for each of these two functions. In addition, the AudioFileReadPackets function is less efficient than AudioFileReadPacketData when reading compressed file formats that do not have packet tables, such as MP3 or ADTS. Use AudioFileReadPackets only when you need to read a fixed duration of audio data, or when you are reading only uncompressed audio.
Audio File Services reads one 32-bit chunk of a file at a time.
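For reference, here is a minimal Swift sketch of reading packets with AudioFileReadPacketData (the already-opened AudioFileID, the packet count, and the buffer size are placeholder assumptions; real code would size the buffer from kAudioFilePropertyPacketSizeUpperBound):

import AudioToolbox
import Foundation

// Sketch: read up to packetCount packets starting at startingPacket from an
// already-opened audio file. Returns the raw bytes and the packets actually read.
func readPackets(from audioFile: AudioFileID,
                 startingPacket: Int64,
                 packetCount: UInt32,
                 bufferSize: UInt32) -> (data: Data, packetsRead: UInt32)? {
    var ioNumBytes = bufferSize
    var ioNumPackets = packetCount
    var packetDescriptions = [AudioStreamPacketDescription](
        repeating: AudioStreamPacketDescription(), count: Int(packetCount))
    var buffer = Data(count: Int(bufferSize))

    let status = buffer.withUnsafeMutableBytes { rawBuffer -> OSStatus in
        AudioFileReadPacketData(audioFile,
                                false,                // inUseCache
                                &ioNumBytes,          // in: buffer size, out: bytes read
                                &packetDescriptions,  // VBR packet descriptions
                                startingPacket,
                                &ioNumPackets,        // in: packets wanted, out: packets read
                                rawBuffer.baseAddress)
    }
    guard status == noErr else { return nil }
    return (Data(buffer.prefix(Int(ioNumBytes))), ioNumPackets)
}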
I have an app that broadcasts the screen of the device. I am using the OpenTok library to send the frames. Their library does not handle the compression, so every time I send a frame to consume the buffer using this code:
self.videoCaptureConsumer?.consumeImageBuffer(pixelBuffer, orientation: sample.orientation.openTokOrientation, timestamp: CMSampleBufferGetPresentationTimeStamp(sample), metadata: nil)
the broadcast extension crashes because of the memory limit (50 MB).
I have been searching on SO as well as repos on GitHub, and I ended up doing the image processing on the CPU using Accelerate: I created an extension on CVPixelBuffer to resize the buffer.
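(The original extension isn't shown here; a rough sketch of such a CPU-side resize with vImage, assuming a 32BGRA pixel buffer and bilinear scaling, might look like the following.)

import Accelerate
import CoreVideo

// Hypothetical sketch, not the original extension: downscale a 32BGRA
// CVPixelBuffer on the CPU with vImage. Planar formats (e.g. NV12) would
// need per-plane handling instead.
extension CVPixelBuffer {
    func resized(width: Int, height: Int) -> CVPixelBuffer? {
        CVPixelBufferLockBaseAddress(self, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(self, .readOnly) }
        guard let srcBase = CVPixelBufferGetBaseAddress(self) else { return nil }

        var src = vImage_Buffer(data: srcBase,
                                height: vImagePixelCount(CVPixelBufferGetHeight(self)),
                                width: vImagePixelCount(CVPixelBufferGetWidth(self)),
                                rowBytes: CVPixelBufferGetBytesPerRow(self))

        var created: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA, nil, &created)
        guard let output = created else { return nil }

        CVPixelBufferLockBaseAddress(output, [])
        defer { CVPixelBufferUnlockBaseAddress(output, []) }
        guard let dstBase = CVPixelBufferGetBaseAddress(output) else { return nil }

        var dst = vImage_Buffer(data: dstBase,
                                height: vImagePixelCount(height),
                                width: vImagePixelCount(width),
                                rowBytes: CVPixelBufferGetBytesPerRow(output))

        // The scale itself runs on the CPU; this is the step that shows up
        // as high CPU usage in the broadcast extension.
        let error = vImageScale_ARGB8888(&src, &dst, nil, vImage_Flags(kvImageNoFlags))
        return error == kvImageNoError ? output : nil
    }
}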
I can resize the CVPixelBuffer and then send the new buffer to the OpenTok library. But the problem is that, since the work is done on the CPU, on iPhone X and older devices the broadcast extension is stopped by the system for high CPU usage.
So I need to find a way to compress/resize the buffer that is faster and more memory-safe, with GPU acceleration.
Then I ended up checking Telegram's iOS app. I discovered Telegram's broadcast extension, which works like a charm and is exactly what I need, but I lack information on how it works since it uses C libraries.
My question is: how can I compress/resize the CVPixelBuffer in a similar way to what Telegram does, but written in Swift, with GPU acceleration and without exceeding the memory limit?
We are using the Web Audio API to play and manipulate audio in a web app.
When trying to decode large mp3 files (around 5 MB), memory usage spikes in Safari on the iPad, and if we load another file of similar size it simply crashes.
It seems like the Web Audio API is not really usable on the iPad unless we use small files.
Note that the same code works well in the desktop version of Chrome; the Safari version does complain about high memory usage.
Does anybody know how to get around this issue, or what the memory limit is for playing audio files using Web Audio on an iPad?
Thanks!
Decoded audio files weigh a lot more in RAM than on disk. A single sample uses 4 bytes (32-bit float), so stereo audio at a 48,000 Hz sample rate takes about 384 KB of RAM per second (48,000 samples/s × 2 channels × 4 bytes). That translates to roughly 230 MB of RAM for 10 minutes of audio, and one hour at the same sample rate in stereo will take ~1.3 GB of RAM!
So, if you decode a lot of files, you can consume large amounts of RAM. My suggestion is to "undecode" files that you don't need (just "forget" unneeded audio buffers, so the garbage collector can free the memory).
You can also use mono audio files instead of stereo, that should reduce memory usage by half.
Note that decoded audio files are always resampled to the device's sample rate. This means that using audio with low sample rates won't help with memory usage.
I've got an iOS app compressing a bunch of small chunks of data. I use compression_encode_buffer running in LZ4 mode to do it so that it is fast enough for my needs.
Later, I take the file[s] I made and decode them on a non-Apple device. Previously I'd been using their ZLIB compression mode and could successfully decode it in C# with System.IO.Compression.DeflateStream.
However, I'm having a hell of a time with the LZ4 output. Based on the LZ4 docs here, Apple breaks the stream into a bunch of blocks, each starting with a 4-byte magic number, a 4-byte decompressed size, and a 4-byte compressed size. All that makes sense, and I'm able to parse the file into its constituent raw-LZ4 chunks. Each chunk in the buffer iOS outputs decompresses to about 65,635 bytes, and there are about 10 of them in my case.
But then: I have no idea what to DO with the LZ4 chunks I'm left with. I've tried decoding them with LZ4net's LZ4.LZ4Stream and LZ4.LZ4Codec (the latter manages the first block, but then fails when I feed in the second one). I've also tried several C++ libraries to decode the data. Each of them seems to be looking for a header that the iOS compression functions have encoded in a non-standard way.
Answering my own question: Apple's LZ4 decompressor (with the necessary modifications to handle their raw storage format) is here: https://opensource.apple.com/source/xnu/xnu-3789.21.4/osfmk/vm/lz4.c.auto.html
Edit afterwards: I actually wasn't able to get this working, but I didn't spend much time on it because I found Apple's LZFSE decompressor.
The LZFSE decompressor can be found here: https://github.com/lzfse/lzfse
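For what it's worth, a rough Swift sketch that walks this framing and decodes each block with the Compression framework could look like the following. The "bv41"/"bv4$" magic values and the absence of uncompressed "bv4-" blocks are my assumptions about the stream layout; each payload is treated as raw LZ4, so COMPRESSION_LZ4_RAW is used per block:

import Compression
import Foundation

// Sketch: decode a buffer produced by compression_encode_buffer with
// COMPRESSION_LZ4, assuming only compressed ("bv41") blocks and an
// end-of-stream marker ("bv4$"). Uncompressed "bv4-" blocks are not handled.
func decodeAppleLZ4(_ data: Data) -> Data? {
    var output = Data()
    var offset = 0
    func readUInt32(at position: Int) -> UInt32 {
        data.subdata(in: position..<(position + 4)).withUnsafeBytes { $0.load(as: UInt32.self) }
    }
    while offset + 4 <= data.count {
        let magic = readUInt32(at: offset)
        if magic == 0x24347662 { break }              // "bv4$": end of stream
        guard magic == 0x31347662 else { return nil } // expect "bv41": compressed block
        let decodedSize = Int(readUInt32(at: offset + 4))
        let encodedSize = Int(readUInt32(at: offset + 8))
        let payload = data.subdata(in: (offset + 12)..<(offset + 12 + encodedSize))
        var block = [UInt8](repeating: 0, count: decodedSize)
        let written = payload.withUnsafeBytes { src in
            compression_decode_buffer(&block, decodedSize,
                                      src.bindMemory(to: UInt8.self).baseAddress!, encodedSize,
                                      nil, COMPRESSION_LZ4_RAW)
        }
        guard written == decodedSize else { return nil }
        output.append(contentsOf: block)
        offset += 12 + encodedSize
    }
    return output
}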
So, I have set up a multichannel mixer and a Remote I/O unit to mix/play several buffers of PCM data that I read from audio files.
For short sound effects in my game, I load the whole file into a memory buffer using ExtAudioFileRead().
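For reference, a whole-file read with ExtAudioFile might be sketched like this (the 16-bit interleaved stereo client format and the 4096-frame read size are illustrative choices, not requirements):

import AudioToolbox
import Foundation

// Sketch: decode an entire (short) audio file into a PCM byte buffer.
func loadPCM(from url: URL) -> Data? {
    var fileRef: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(url as CFURL, &fileRef) == noErr, let file = fileRef else { return nil }
    defer { ExtAudioFileDispose(file) }

    // Ask ExtAudioFile to hand us linear PCM regardless of the file's codec.
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
        mChannelsPerFrame: 2, mBitsPerChannel: 16, mReserved: 0)
    guard ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                  UInt32(MemoryLayout.size(ofValue: clientFormat)),
                                  &clientFormat) == noErr else { return nil }

    var pcm = Data()
    let framesPerRead: UInt32 = 4096
    var readBuffer = [UInt8](repeating: 0, count: Int(framesPerRead) * Int(clientFormat.mBytesPerFrame))
    while true {
        var frameCount = framesPerRead
        let status = readBuffer.withUnsafeMutableBytes { raw -> OSStatus in
            var bufferList = AudioBufferList(
                mNumberBuffers: 1,
                mBuffers: AudioBuffer(mNumberChannels: clientFormat.mChannelsPerFrame,
                                      mDataByteSize: UInt32(raw.count),
                                      mData: raw.baseAddress))
            return ExtAudioFileRead(file, &frameCount, &bufferList)
        }
        guard status == noErr else { return nil }
        if frameCount == 0 { break }   // reached end of file
        pcm.append(contentsOf: readBuffer.prefix(Int(frameCount * clientFormat.mBytesPerFrame)))
    }
    return pcm
}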
For my background music, let's say I have a 3-minute compressed audio file. Assuming it's encoded as mp3 @ 128 kbps (44,100 Hz stereo), that gives around 1 MB per minute, or 3 MB total. Uncompressed in memory, I believe it's around ten times that. I could use the exact same method as for small files; I believe ExtAudioFileRead() takes care of the decoding, using the (single) hardware decoder when available, but I'd rather not read the whole buffer at once, and instead 'stream' it at regular intervals from disk.
The first thing that comes to mind is going one step below, to the (non-"extended") Audio File Services API, and using AudioFileReadPackets(), like so:
1. Prepare two buffers, A and B, each big enough to hold (say) 5 seconds of audio. During playback, read from one buffer and switch to the other when reaching the end (i.e., they make up the two halves of a ring buffer).
2. Read the first 5 seconds of audio from the file into buffer A.
3. Read the next 5 seconds of audio from the file into buffer B.
4. Begin playback (from buffer A).
5. Once the play head enters buffer B, load the next 5 seconds of audio into buffer A.
6. Once the play head enters buffer A again, load the next 5 seconds of audio into buffer B.
7. Go to #5.
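For concreteness, the buffer-swap logic in steps 5-7 might look roughly like this (readChunk is a hypothetical stand-in for whatever fills a buffer with the next 5 seconds of packets, e.g. via AudioFileReadPacketData; the disk reads should happen off the render thread):

// Hypothetical sketch of the double-buffer refill logic described above.
final class StreamingSource {
    var bufferA = [Float]()
    var bufferB = [Float]()
    private var playingA = true

    func prime() {
        bufferA = readChunk()   // step 2: first 5 seconds into A
        bufferB = readChunk()   // step 3: next 5 seconds into B
    }

    // Call when the play head crosses from one half of the ring to the other.
    func playheadCrossedBoundary() {
        if playingA {
            playingA = false
            bufferA = readChunk()   // step 5: now playing B, refill A
        } else {
            playingA = true
            bufferB = readChunk()   // step 6: now playing A again, refill B
        }
    }

    private func readChunk() -> [Float] {
        // Placeholder: read the next 5 seconds of audio from disk here.
        return []
    }
}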
Is this the right approach, or is there a better way?
I'd suggest using the high-level AVAudioPlayer class to do simple background playback of an audio file. See:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Chapters/Reference.html#//apple_ref/doc/uid/TP40008067
If you require finer-grained control and lower latency, check out Apple's AUAudioFilePlayer. See AudioUnitProperties.h for a discussion. This is an Audio Unit that abstracts the complexities of streaming an audio file from disk. That said, it's still pretty complicated to set up and use, so definitely try AVAudioPlayer first.
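For the simple case, a minimal AVAudioPlayer setup looks roughly like this ("music.mp3" is a placeholder resource name):

import AVFoundation

final class MusicPlayer {
    private var player: AVAudioPlayer?

    func start() {
        guard let url = Bundle.main.url(forResource: "music", withExtension: "mp3") else { return }
        do {
            player = try AVAudioPlayer(contentsOf: url)
            player?.numberOfLoops = -1   // loop the background track indefinitely
            player?.prepareToPlay()      // preload buffers to reduce start latency
            player?.play()
        } catch {
            print("Failed to start playback: \(error)")
        }
    }
}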
I'm just researching at the moment the possibility of writing an app to record an hour's worth of video/audio for a specific use case.
As the video will be an hour long I would want to encode on-the-fly and not after the recording has finished to keep disk usage to a minimum.
Do the video capture APIs write a large uncompressed file to disk that has to be encoded afterwards, or can they encode on the fly, resulting in an optimised file written to disk?
It's important that the video is recorded at a lower resolution than the iPhone's advertised 720/1080p, as I need to keep the file sizes down due to the length of the video (which will need to be uploaded).
Any information you have would be appreciated or even just a pointer in the right direction.
No, they do not record uncompressed to disk (unless that is what you want). You can specify recording to a MOV/MP4 and have the video encoded in H264. Additionally, you can control the average bit rate of the encoding. You can also specify the capture size and output encoding size, along with scaling options if needed. For demo code, check out AVCamDemo in the WWDC 2010 sample code. This demo code may now be available in the docs.
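As one way to set this up (AVCamDemo uses the AVFoundation capture APIs; the sketch below shows the asset-writer side, with 640x360 and a 1 Mbit/s average bit rate as purely illustrative values):

import AVFoundation

// Sketch: on-the-fly H.264 encoding to an MP4 via AVAssetWriter.
func makeWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 640,                      // output encoding size
        AVVideoHeightKey: 360,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: 1_000_000    // target average bit rate, bits/s
        ]
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true        // frames arrive live from the capture session
    writer.add(input)
    return (writer, input)
}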