AudioQueueDispose delay - iOS

According to the documentation here: https://developer.apple.com/library/mac/documentation/MusicAudio/Reference/AudioQueueReference/#//apple_ref/c/func/AudioQueueDispose
err = AudioQueueDispose(queue, true);
I pass true so the disposal of the AudioQueue happens immediately. And sometimes it does dispose of the queue immediately, but other times there is a delay of 3-4 seconds, and up to 13 seconds on the device. err = AudioQueueStop(queue, true) has the same problem.
My understanding is that both functions try to flush and release the buffers that are already enqueued (or about to be enqueued)...
so I even help my callback function flush the buffers when AudioQueueDispose is going to be called.
static void MyAQOutputCallBack(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
    if (player.shouldDispose) {
        printf("player shouldDispose !!!!!!!!!!!\n\n\n\n\n\n");
        OSStatus dispose = AudioQueueFlush(inAQ);
        return;
    }
}
Since I am going to record something using Audio Queues right after playing a track, I need these functions to return without delays. A couple hundred milliseconds is okay, but 3-4 seconds? That is unacceptable.
The other AudioQueue functions are called on the same thread, and they seem to work fine.
I have also tried calling this on the main thread, to see whether that would change anything:
[self performSelectorOnMainThread:@selector(tryOnMain) withObject:nil waitUntilDone:NO];
or
dispatch_sync(dispatch_get_main_queue(), ^{ ... });
but it didn't make any difference.
Any idea what might be happening?
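For what it's worth, one workaround to sketch here (my suggestion; the related answer further down settles on the same idea): accept that AudioQueueStop/AudioQueueDispose may block, and move them off the critical thread so the recording setup never waits on them. This assumes _audioQueue is an instance variable:

AudioQueueRef doomedQueue = _audioQueue;
_audioQueue = NULL;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // These calls may still take seconds internally, but they no longer
    // stall the thread that has to start recording.
    AudioQueueStop(doomedQueue, true);
    AudioQueueDispose(doomedQueue, true);
});
// ...set up the recording queue here without waiting.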

I successfully immediately stop my audio playback by:
-(void)stopAudio
{
    @synchronized(audioLock) {
        audioLock = [NSNumber numberWithBool:false];
        OSStatus err;
        err = AudioQueueReset(_audioQueue);
        if (err != noErr)
        {
            NSLog(@"AudioQueueReset() error: %d", (int)err);
        }
        err = AudioQueueStop(_audioQueue, YES);
        if (err != noErr)
        {
            NSLog(@"AudioQueueStop() error: %d", (int)err);
        }
        err = AudioQueueDispose(_audioQueue, YES);
        if (err != noErr)
        {
            NSLog(@"AudioQueueDispose() error: %d", (int)err);
        }
    }
}
And in my:
void audioCallback(void *custom_data, AudioQueueRef queue, AudioQueueBufferRef buffer)
I only put more stuff in my queue if:
myObject *weakSelf = (__bridge myObject *)custom_data;
@synchronized(weakSelf->audioLock) {
    if ([weakSelf->audioLock boolValue]) {
        Put_more_stuff_on_queue
    }
}
In my particular case I playback AAC-LC audio.

Related

How can an Audio Queue stop the currently playing task and start playing another AudioQueueBuffer immediately?

NEED: I have an audio queue and two AudioQueueBuffers. How can I play
AudioQueueBuffer NO.2 immediately, in the middle of AudioQueueBuffer NO.1 playing?
I have tried AudioQueueStop and AudioQueueReset, but they take a long time to process, so AudioQueueBuffer NO.2 starts playing too late.
-(void)playBuffer:(AudioBuffer *)buffer format:(const AudioStreamBasicDescription *)format
{
    AudioQueueStop(_audioQueue, YES); // this line consumes too much time
    AudioQueueDispose(_audioQueue, YES);

    AudioQueueRef newAudioQueue;
    AudioQueueBufferRef queueBuffer;
    AudioQueueNewOutput(format, audioQueueOutputCallback, (__bridge void *)self,
                        NULL, NULL, 0, &newAudioQueue);

    OSStatus status;
    status = AudioQueueAllocateBuffer(newAudioQueue, buffer->mDataByteSize, &queueBuffer);
    memcpy(queueBuffer->mAudioData, buffer->mData, buffer->mDataByteSize);
    queueBuffer->mAudioDataByteSize = buffer->mDataByteSize;
    status = AudioQueueEnqueueBuffer(newAudioQueue, queueBuffer, 0, NULL);

    Float32 gain = 1.0;
    AudioQueueSetParameter(newAudioQueue, kAudioQueueParam_Volume, gain);
    AudioQueueStart(newAudioQueue, NULL);
    AudioQueueFreeBuffer(newAudioQueue, queueBuffer); // note: this frees a buffer that is still enqueued
    _audioQueue = newAudioQueue;
}
So my question is: is it possible for an audio queue to play the next audio buffer immediately? Or is an audio queue unsuited to this task, so that I need an alternative?
Finally, I just dispose of the audioQueue asynchronously. But I think AVAudioUnit may be a better solution for my situation.
- (void)playBuffer:(AudioBuffer *)buffer format:(const AudioStreamBasicDescription *)format
{
    oldAudioQueue = _audioQueue;
    if (oldAudioQueue) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            AudioQueuePause(oldAudioQueue);
            //AudioQueueStop(oldAudioQueue, YES); // this line consumes too much time and would block the thread
            AudioQueueDispose(oldAudioQueue, YES);
            oldAudioQueue = nil;
        });
    }

    AudioQueueRef newAudioQueue;
    AudioQueueBufferRef queueBuffer;
    AudioQueueNewOutput(format, audioQueueOutputCallback, (__bridge void *)self,
                        NULL, NULL, 0, &newAudioQueue);

    OSStatus status;
    status = AudioQueueAllocateBuffer(newAudioQueue, buffer->mDataByteSize, &queueBuffer);
    memcpy(queueBuffer->mAudioData, buffer->mData, buffer->mDataByteSize);
    queueBuffer->mAudioDataByteSize = buffer->mDataByteSize;
    status = AudioQueueEnqueueBuffer(newAudioQueue, queueBuffer, 0, NULL);

    Float32 gain = 1.0;
    AudioQueueSetParameter(newAudioQueue, kAudioQueueParam_Volume, gain);
    AudioQueueStart(newAudioQueue, NULL);
    AudioQueueFreeBuffer(newAudioQueue, queueBuffer);
    _audioQueue = newAudioQueue;
}
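Since the answer above points at AVAudioUnit, here is a rough sketch of the AVAudioEngine/AVAudioPlayerNode route (iOS 8+; my sketch, not the poster's code, and pcmBuffer/secondBuffer are assumed AVAudioPCMBuffer instances). The AVAudioPlayerNodeBufferInterrupts option makes a newly scheduled buffer cut off whatever is currently playing, which is exactly the "play buffer NO.2 immediately" requirement:

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
[engine attachNode:playerNode];
[engine connect:playerNode to:engine.mainMixerNode format:pcmBuffer.format];

NSError *error = nil;
[engine startAndReturnError:&error];
[playerNode scheduleBuffer:pcmBuffer completionHandler:nil];
[playerNode play];

// Later: interrupt buffer NO.1 and start buffer NO.2 right away,
// without tearing down and rebuilding the whole playback pipeline.
[playerNode scheduleBuffer:secondBuffer
                    atTime:nil
                   options:AVAudioPlayerNodeBufferInterrupts
         completionHandler:nil];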

Playing audio using ffmpeg and AVAudioPlayer

I am trying to read an audio file (that is not supported by iOS) with ffmpeg and then play it using AVAudioPlayer. It took me a while to get ffmpeg built inside an iOS project, but I finally did using kewlbear/FFmpeg-iOS-build-script.
This is the snippet I have right now, after a lot of searching on the web, including stackoverflow. One of the best examples I found was here.
I believe this is all the relevant code. I added comments to let you know what I'm doing and where I need something clever to happen.
#import "FFmpegWrapper.h"
#import <AVFoundation/AVFoundation.h>
AVFormatContext *formatContext = NULL;
AVStream *audioStream = NULL;

av_register_all();
avformat_network_init();
avcodec_register_all();

// this is a file located on my NAS
int opened = avformat_open_input(&formatContext, "http://192.168.1.70:50002/m/NDLNA/43729.flac", NULL, NULL);
// can't open file (avformat_open_input returns 0 on success and a negative value on failure)
if (opened < 0) {
    avformat_close_input(&formatContext);
}
int streamInfoValue = avformat_find_stream_info(formatContext, NULL);
// can't open stream
if (streamInfoValue < 0)
{
    avformat_close_input(&formatContext);
}

// number of streams available
int inputStreamCount = formatContext->nb_streams;
for (unsigned int i = 0; i < inputStreamCount; i++)
{
    // I'm only interested in the audio stream
    if (formatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
    {
        // found audio stream
        audioStream = formatContext->streams[i];
    }
}
if (audioStream == NULL) {
    // no audio stream
}
AVFrame *frame = av_frame_alloc();
AVCodecContext *codecContext = audioStream->codec;
codecContext->codec = avcodec_find_decoder(codecContext->codec_id);
if (codecContext->codec == NULL)
{
    av_free(frame);
    avformat_close_input(&formatContext);
    // no proper codec found
}
else if (avcodec_open2(codecContext, codecContext->codec, NULL) != 0)
{
    av_free(frame);
    avformat_close_input(&formatContext);
    // could not open the context with the decoder
}

// this is displaying: This stream has 2 channels and a sample rate of 44100Hz
// which makes sense
NSLog(@"This stream has %d channels and a sample rate of %dHz", codecContext->channels, codecContext->sample_rate);
AVPacket packet;
av_init_packet(&packet);

// this is where I try to store the sound data
NSMutableData *soundData = [[NSMutableData alloc] init];

while (av_read_frame(formatContext, &packet) == 0)
{
    if (packet.stream_index == audioStream->index)
    {
        // Try to decode the packet into a frame
        int frameFinished = 0;
        avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet);

        // Some frames rely on multiple packets, so we have to make sure the frame is finished
        // before we can use it
        if (frameFinished)
        {
            // this is where I think something clever needs to be done
            // I need to store some bytes, but I can't figure out what exactly and what length?
            // should the length be multiplied by the number of channels?
            NSData *frameData = [[NSData alloc] initWithBytes:packet.buf->data length:packet.buf->size];
            [soundData appendData:frameData];
        }
    }

    // You *must* call av_free_packet() after each call to av_read_frame() or else you'll leak memory
    av_free_packet(&packet);
}
// first try to write it to a file, see if that works
// this is indeed writing bytes, but it is unplayable
[soundData writeToFile:@"output.wav" atomically:YES];

NSError *error;
// this is my final goal, playing it with the AVAudioPlayer, but this is giving unclear errors
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:soundData error:&error];
if (player == nil) {
    NSLog(@"%@", error.description); // Domain=NSOSStatusErrorDomain Code=1954115647 "(null)"
} else {
    [player prepareToPlay];
    [player play];
}
// Some codecs will cause frames to be buffered up in the decoding process. If the CODEC_CAP_DELAY flag
// is set, there can be buffered up frames that need to be flushed, so we'll do that
if (codecContext->codec->capabilities & CODEC_CAP_DELAY)
{
    av_init_packet(&packet);
    // Decode all the remaining frames in the buffer, until the end is reached
    int frameFinished = 0;
    while (avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet) >= 0 && frameFinished)
    {
    }
}

av_free(frame);
avcodec_close(codecContext);
avformat_close_input(&formatContext);
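Two observations on the snippet above (my reading, not part of the original question). First, the frameFinished branch appends the compressed packet bytes (packet.buf->data) rather than the decoded samples; the decoded audio lives in the AVFrame. A sketch of what that branch could look like instead, assuming an interleaved sample format such as AV_SAMPLE_FMT_S16 (planar formats keep each channel in a separate frame->data[i] plane and would need interleaving first):

if (frameFinished && !av_sample_fmt_is_planar(codecContext->sample_fmt))
{
    // Size of this frame's decoded samples, tightly packed (align = 1).
    int dataSize = av_samples_get_buffer_size(NULL,
                                              codecContext->channels,
                                              frame->nb_samples,
                                              codecContext->sample_fmt,
                                              1);
    if (dataSize > 0)
    {
        [soundData appendBytes:frame->data[0] length:dataSize];
    }
}

Second, even with correct PCM in soundData, initWithData: expects a complete audio file, not raw samples: error code 1954115647 is the four-character code 'typ?' (kAudioFileUnsupportedFileTypeError), meaning AVAudioPlayer cannot recognize the data as any known file format. The raw PCM would need a proper WAV header written in front of it (or be handed to an Audio Queue / Audio Unit directly) before output.wav or AVAudioPlayer could play it.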
I never really found a solution to this specific problem, but I ended up using ap4y/OrigamiEngine instead.
My main reason for wanting to use FFmpeg was to play unsupported audio files (FLAC/OGG) on iOS and tvOS, and OrigamiEngine does the job just fine.

Deep copy of CMSampleBufferRef

I'm trying to perform a deep copy of a CMSampleBufferRef for audio and video connections. I need to use this buffer for delayed processing. Can somebody help here by pointing to sample code?
Thanks
I solved this problem.
I needed access to the sample data for a long period of time, and tried many ways:
CVPixelBufferRetain -----> program broken
CVPixelBufferPool -----> program broken
CVPixelBufferCreateWithBytes ----> it can solve this problem, but it reduces performance; Apple does not recommend doing it this way
CMSampleBufferCreateCopy ---> it is OK, and Apple recommends it.
From the Apple documentation: To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped. If your application is causing samples to be dropped by retaining the provided CMSampleBuffer objects for too long, but it needs access to the sample data for a long period of time, consider copying the data into a new buffer and then calling CFRelease on the sample buffer (if it was previously retained) so that the memory it references can be reused.
REF: https://developer.apple.com/reference/avfoundation/avcapturefileoutputdelegate/1390096-captureoutput
This might be what you need:
#pragma mark - captureOutput
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (connection == m_videoConnection) {
        /* if you did not read m_sampleBuffer, you must CFRelease m_sampleBuffer here,
           otherwise it causes samples to be dropped */
        if (m_sampleBuffer) {
            CFRelease(m_sampleBuffer);
            m_sampleBuffer = NULL;
        }
        OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, sampleBuffer, &m_sampleBuffer);
        if (noErr != status) {
            m_sampleBuffer = NULL;
        }
        NSLog(@"m_sampleBuffer = %p sampleBuffer = %p", m_sampleBuffer, sampleBuffer);
    }
}
#pragma mark - get CVPixelBufferRef to use for a long period of time
- (ACResult)readVideoFrame:(CVPixelBufferRef *)pixelBuffer
{
    while (1) {
        dispatch_sync(m_readVideoData, ^{
            if (!m_sampleBuffer) {
                _readDataSuccess = NO;
                return;
            }
            CMSampleBufferRef sampleBufferCopy = NULL;
            OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, m_sampleBuffer, &sampleBufferCopy);
            if (noErr == status)
            {
                CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBufferCopy);
                CVPixelBufferRetain(buffer); // keep the pixel buffer alive past the copy; the caller releases it
                *pixelBuffer = buffer;
                CFRelease(sampleBufferCopy); // balance CMSampleBufferCreateCopy so the copy doesn't leak
                _readDataSuccess = YES;
                NSLog(@"m_sampleBuffer = %p ", m_sampleBuffer);
                CFRelease(m_sampleBuffer);
                m_sampleBuffer = NULL;
            }
            else {
                _readDataSuccess = NO;
                CFRelease(m_sampleBuffer);
                m_sampleBuffer = NULL;
            }
        });
        if (_readDataSuccess) {
            _readDataSuccess = NO;
            return ACResultNoErr;
        }
        else {
            usleep(15 * 1000);
            continue;
        }
    }
}
then you can use it like this:
-(void)getCaptureVideoDataToEncode
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(){
        while (1) {
            CVPixelBufferRef buffer = NULL;
            ACResult result = [videoCapture readVideoFrame:&buffer];
            if (ACResultNoErr == result) {
                ACResult error = [videoEncode encoder:buffer outputPacket:&streamPacket];
                if (buffer) {
                    CVPixelBufferRelease(buffer);
                    buffer = NULL;
                }
                if (ACResultNoErr == error) {
                    NSLog(@"encode success");
                }
            }
        }
    });
}
I did this. CMSampleBufferCreateCopy can indeed deep copy,
but a new problem appears:
the captureOutput delegate stops being called.
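A possible explanation for that last symptom, inferred from the Apple note quoted above rather than confirmed by the poster: CMSampleBufferCreateCopy copies the sample buffer structure but still references the same underlying media data, so holding on to many copies can starve the capture pool, and the capture output stops delivering frames. If that is the cause, a genuine byte-level copy of the image buffer is needed, along these lines (a sketch assuming a packed pixel format such as kCVPixelFormatType_32BGRA; planar formats would need a copy per plane):

static CVPixelBufferRef ACDeepCopyPixelBuffer(CVPixelBufferRef src)
{
    CVPixelBufferLockBaseAddress(src, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferRef dst = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        CVPixelBufferGetPixelFormatType(src),
                        NULL, &dst);
    if (dst) {
        CVPixelBufferLockBaseAddress(dst, 0);
        uint8_t *srcBase = (uint8_t *)CVPixelBufferGetBaseAddress(src);
        uint8_t *dstBase = (uint8_t *)CVPixelBufferGetBaseAddress(dst);
        size_t srcStride = CVPixelBufferGetBytesPerRow(src);
        size_t dstStride = CVPixelBufferGetBytesPerRow(dst);
        size_t height = CVPixelBufferGetHeight(src);
        // Copy row by row in case the two buffers use different row padding.
        for (size_t row = 0; row < height; row++) {
            memcpy(dstBase + row * dstStride,
                   srcBase + row * srcStride,
                   MIN(srcStride, dstStride));
        }
        CVPixelBufferUnlockBaseAddress(dst, 0);
    }
    CVPixelBufferUnlockBaseAddress(src, kCVPixelBufferLock_ReadOnly);
    return dst; // caller owns the +1 reference; balance with CVPixelBufferRelease
}

With a copy like this, the original sampleBuffer can be released immediately, which is exactly what the quoted documentation recommends.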

AudioQueue callback not being called

So, basically I want to play some audio files (mostly mp3 and caf). But the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
    CAStreamBasicDescription      mDataFormat;
    AudioQueueRef                 mQueue;
    AudioQueueBufferRef           mBuffers[kBufferNum];
    AudioFileID                   mAudioFile;
    UInt32                        bufferByteSize;
    SInt64                        mCurrentPacket;
    UInt32                        mNumPacketsToRead;
    AudioStreamPacketDescription *mPacketDescs;
    bool                          mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    NSLog(@"HandleOutput");
    AQPlayerState *pAqData = (AQPlayerState *)aqData;
    if (pAqData->mIsRunning == false) return;

    UInt32 numBytesReadFromFile;
    UInt32 numPackets = pAqData->mNumPacketsToRead;
    AudioFileReadPackets(pAqData->mAudioFile,
                         false,
                         &numBytesReadFromFile,
                         pAqData->mPacketDescs,
                         pAqData->mCurrentPacket,
                         &numPackets,
                         inBuffer->mAudioData);
    if (numPackets > 0) {
        inBuffer->mAudioDataByteSize = numBytesReadFromFile;
        AudioQueueEnqueueBuffer(pAqData->mQueue,
                                inBuffer,
                                (pAqData->mPacketDescs ? numPackets : 0),
                                pAqData->mPacketDescs);
        pAqData->mCurrentPacket += numPackets;
    } else {
//        AudioQueueStop(pAqData->mQueue, false);
//        AudioQueueDispose(pAqData->mQueue, true);
//        AudioFileClose(pAqData->mAudioFile);
//        free(pAqData->mPacketDescs);
//        free(pAqData->mFloatBuffer);
        pAqData->mIsRunning = false;
    }
}
And here's my method:
- (void)playFile
{
    AQPlayerState aqData;

    // get the source file
    NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
    NSURL *url2 = [NSURL fileURLWithPath:p];
    CFURLRef srcFile = (__bridge CFURLRef)url2;

    OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
    CFRelease(srcFile);
    CheckError(result, "Error opening sound file");

    UInt32 size = sizeof(aqData.mDataFormat);
    CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
               "Error getting file's data format");

    CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
               "Error AudioQueueNewOutput");

    // we need to calculate how many packets we read at a time and how big a buffer we need
    // we base this on the size of the packets in the file and an approximate duration for each buffer
    {
        bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);

        // first check to see what the max size of a packet is - if it is bigger
        // than our allocation default size, that needs to become larger
        UInt32 maxPacketSize;
        size = sizeof(maxPacketSize);
        CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
                   "Error getting max packet size");

        // adjust buffer size to represent about a second of audio based on this format
        CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);

        if (isFormatVBR) {
            aqData.mPacketDescs = new AudioStreamPacketDescription[aqData.mNumPacketsToRead];
        } else {
            aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
        }

        printf("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
    }

    // if the file has a magic cookie, we should get it and set it on the AQ
    size = sizeof(UInt32);
    result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
    if (!result && size) {
        char *cookie = new char[size];
        CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
                   "Error getting cookie from file");
        CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
                   "Error setting cookie on queue");
        delete[] cookie;
    }

    aqData.mCurrentPacket = 0;
    for (int i = 0; i < kBufferNum; ++i) {
        CheckError(AudioQueueAllocateBuffer(aqData.mQueue,
                                            aqData.bufferByteSize,
                                            &aqData.mBuffers[i]),
                   "Error AudioQueueAllocateBuffer");
        HandleOutputBuffer(&aqData,
                           aqData.mQueue,
                           aqData.mBuffers[i]);
    }

    // set queue's gain
    Float32 gain = 1.0;
    CheckError(AudioQueueSetParameter(aqData.mQueue,
                                      kAudioQueueParam_Volume,
                                      gain),
               "Error AudioQueueSetParameter");

    aqData.mIsRunning = true;
    CheckError(AudioQueueStart(aqData.mQueue,
                               NULL),
               "Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain(), and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are fired.
What am I doing wrong? Thanks for any ideas
What you are basically trying to do here is a basic example of audio playback using Audio Queues. Without looking at your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code, which does exactly what you're doing, without the extras that aren't really relevant (for example, why are you setting the audio gain?).
Somewhere else you were trying to play audio using Audio Units. Audio Units are more complex than basic Audio Queue playback, and I wouldn't attempt them before being very comfortable with Audio Queues. But you can look at this example project for a basic example of Audio Queue playback.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of tutorials online is that they add extra stuff and often mix it with Objective-C code, when Core Audio is purely C code (i.e. the extra stuff won't add anything to the learning process). I strongly recommend you go over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)
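One concrete thing worth checking in the question's code, as my own observation (the answer above doesn't call it out): aqData is a stack local, so the pointer handed to AudioQueueNewOutput as user data dangles the moment playFile returns. The priming calls work because they run synchronously inside playFile; once the run loop later invokes HandleOutputBuffer, it reads freed stack memory, which is undefined behavior. A minimal sketch of keeping the state alive, assuming the method lives on some player class:

@interface MyPlayer : NSObject {
    AQPlayerState _aqData; // lives as long as the object, not the method call
}
- (void)playFile;
@end

@implementation MyPlayer
- (void)playFile
{
    // Same setup as above, but with every aqData replaced by _aqData, so the
    // user-data pointer stays valid after this method returns, e.g.:
    // AudioQueueNewOutput(&_aqData.mDataFormat, HandleOutputBuffer, &_aqData,
    //                     CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0,
    //                     &_aqData.mQueue);
}
@end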

EXC_BAD_ACCESS in AudioRingBuffer::GetTimeBounds

Okay, here's the scenario: I have a real-time recording app using ExtAudioFileWriteAsync targeted for iOS 4.3. The first time I record with the app, it works perfectly. If I press stop, then record again, better than half the time I will get an EXC_BAD_ACCESS in AudioRingBuffer::GetTimeBounds right when recording starts.
That is to say that ExtAudioFileWriteAsync fails on GetTimeBounds when starting the second recording. Here is the bit of code that is fired when recording starts, which creates the ExtAudioFile reference:
- (void)setActive:(NSString *)file
{
    if (mExtAFRef) {
        ExtAudioFileDispose(mExtAFRef);
        mExtAFRef = nil;
        NSLog(@"mExtAFRef Disposed.");
    }
    if (mOutputAudioFile)
    {
        ExtAudioFileDispose(mOutputAudioFile);
        mOutputAudioFile = nil;
        NSLog(@"mOutputAudioFile Disposed.");
    }
    NSURL *outUrl = [NSURL fileURLWithPath:file];
    OSStatus setupErr = ExtAudioFileCreateWithURL((CFURLRef)outUrl, kAudioFileWAVEType, &mOutputFormat, NULL, kAudioFileFlags_EraseFile, &mOutputAudioFile);
    NSAssert(setupErr == noErr, @"Couldn't create file for writing");
    setupErr = ExtAudioFileSetProperty(mOutputAudioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
    NSAssert(setupErr == noErr, @"Couldn't set file's client data format");
    setupErr = ExtAudioFileWriteAsync(mOutputAudioFile, 0, NULL);
    NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");
    isActive = TRUE;
}
Does anyone have any thoughts whatsoever on what may be causing this? I assume, given EXC_BAD_ACCESS, that it is a memory leak or something's ref count getting knocked to zero, but I can't for the life of me figure out what it might be, and the Googles are drawing a complete blank. I posted this same thing on the Apple dev forum for CoreAudio, but not a soul took pity on me, even to make a pithy comment. HALP!
EDIT: Found the problem. The error was happening when ExtAudioFileWriteAsync was trying to write a new file before the old file was "optimized." A little mutex love solved the problem.
I'm having almost the same issue in a recording app. Can anyone please explain how to solve it with "a little mutex love"?
EDIT
Thanks to Chris Randall I did manage to solve my problems. This is how I implemented the mutex:
#include <pthread.h>
static pthread_mutex_t outputAudioFileLock;
then in my init:
pthread_mutex_init(&outputAudioFileLock,NULL);
and in the callback:
if (THIS.mIsRecording) {
if (0 == pthread_mutex_trylock(&outputAudioFileLock)) {
OSStatus err = ExtAudioFileWriteAsync(THIS.mRecordFile, inNumberFrames, THIS.recordingBufferList);
if (noErr != err) {
NSLog(#"ExtAudioFileWriteAsync Failed: %ld!!!", err);
} else {
}
pthread_mutex_unlock(&outputAudioFileLock);
}
}
finally in the stopRecord method:
if (mRecordFile) {
    pthread_mutex_lock(&outputAudioFileLock);
    OSStatus setupErr;
    setupErr = ExtAudioFileDispose(mRecordFile);
    mRecordFile = NULL;
    pthread_mutex_unlock(&outputAudioFileLock);
    NSAssert(setupErr == noErr, @"Couldn't dispose audio file");
    NSLog(@"Stopping Record");
    mIsRecording = NO;
}
Thanks again for the help; I hope this saves someone some time.
Include pthread.h, and define pthread_mutex_t outputAudioFileLock in your constructor. Then, in your audio callback, when you want to write, do something like this (adjusting the variables according to what you're using):
if (0 == pthread_mutex_trylock(&outputAudioFileLock)) {
    OSStatus err = ExtAudioFileWriteAsync(mOutputAudioFile, frames, bufferList);
    if (noErr != err) {
        NSLog(@"ExtAudioFileWriteAsync Failed: %ld!!!", (long)err);
    } else {
    }
    pthread_mutex_unlock(&outputAudioFileLock);
}
The pthread_mutex_trylock checks whether the mutex is already held (and thus the file is still "optimizing"). If it is not, it allows the write. I then wrap both the audio file setup (as seen above) and the audio file cleanup like so, so that the mutex is held whenever the file system is doing anything that would cause the AudioRingBuffer BAD_ACCESS error:
pthread_mutex_lock(&outputAudioFileLock);
OSStatus setupErr;
setupErr = ExtAudioFileDispose(mOutputAudioFile);
mOutputAudioFile = NULL;
pthread_mutex_unlock(&outputAudioFileLock);
NSAssert(setupErr == noErr, @"Couldn't dispose audio file");
This locks the setup and cleanup paths so that you can't write to a file that is being "optimized," which is the source of the error. Hope this helps!
EDIT: I do my audio callback in the Obj-C part of the audio controller; if you're doing it in the C++ part, this would be structured quite a bit differently; perhaps someone else can answer that?
