I have a strange memory "leak" with AVAssetWriterInput appendSampleBuffer. I'm writing video and audio at the same time, so I have one AVAssetWriter with two inputs, one for video and one for audio:
self.videoWriter = [[[AVAssetWriter alloc] initWithURL:[self.currentVideo currentVideoClipLocalURL]
fileType:AVFileTypeMPEG4
error:&error] autorelease];
...
self.videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
self.videoWriterInput.expectsMediaDataInRealTime = YES;
[self.videoWriter addInput:self.videoWriterInput];
...
self.audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:audioSettings];
self.audioWriterInput.expectsMediaDataInRealTime = YES;
[self.videoWriter addInput:self.audioWriterInput];
I start writing and everything works fine on the surface. The video and audio get written and are aligned, etc. However, I put my code through the Allocations Instrument and noticed the following:
The audio bytes are getting retained in memory, as I'll prove in a second. That's the ramp up in memory. The audio bytes are only released after I call [self.videoWriter endSessionAtSourceTime:...], which you see as the dramatic drop in memory usage. Here is my audio writing code, which is dispatched as a block onto a serial queue:
@autoreleasepool
{
// The objects that will hold the audio data
CMSampleBufferRef sampleBuffer;
CMBlockBufferRef blockBuffer1;
CMBlockBufferRef blockBuffer2;
size_t nbytes = numSamples * asbd_.mBytesPerPacket;
OSStatus status = noErr;
status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
data,
nbytes,
kCFAllocatorNull,
NULL,
0,
nbytes,
kCMBlockBufferAssureMemoryNowFlag,
&blockBuffer1);
if (status != noErr)
{
NLog(#"CMBlockBufferCreateWithMemoryBlock error at buffer 1");
return;
}
status = CMBlockBufferCreateContiguous(kCFAllocatorDefault,
blockBuffer1,
kCFAllocatorDefault,
NULL,
0,
nbytes,
kCMBlockBufferAssureMemoryNowFlag | kCMBlockBufferAlwaysCopyDataFlag,
&blockBuffer2);
if (status != noErr)
{
NSLog(#"CMBlockBufferCreateWithMemoryBlock error at buffer 2");
CFRelease(blockBuffer1);
return;
}
// Finally, create the CMSampleBufferRef
status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
blockBuffer2,
YES, // Yes data is ready
NULL, // No callback needed to make data ready
NULL,
audioFormatDescription_,
1,
timestamp,
NULL,
&sampleBuffer);
if (status != noErr)
{
NSLog(#"CMAudioSampleBufferCreateWithPacketDescriptions error.");
CFRelease(blockBuffer1);
CFRelease(blockBuffer2);
return;
}
if ([self.audioWriterInput isReadyForMoreMediaData])
{
if (![self.audioWriterInput appendSampleBuffer:sampleBuffer])
{
NSLog(#"Couldn't append audio sample buffer: %d", numAudioCallbacks_);
}
} else {
NSLog(#"AudioWriterInput isn't ready for more data.");
}
// One release per create
CFRelease(blockBuffer1);
CFRelease(blockBuffer2);
CFRelease(sampleBuffer);
}
As you can see, I'm releasing each buffer once per create. I've traced the "leak" down to the line where the audio buffers are appended:
[self.audioWriterInput appendSampleBuffer:sampleBuffer]
I proved this to myself by commenting out that line, after which I get the following "leak-free" Allocations graph (although the recorded video now has no audio, of course):
I tried one other thing, which is to add back the appendSampleBuffer line and instead double-release blockBuffer2:
CFRelease(blockBuffer1);
CFRelease(blockBuffer2);
CFRelease(blockBuffer2); // Double release to test the hypothesis that appendSampleBuffer is retaining this
CFRelease(sampleBuffer);
Doing this did not cause a double-free, indicating that blockBuffer2's retain count at that point is 2. This produced the same "leak-free" allocations graph, with the exception that when I called [self.videoWriter endSessionAtSourceTime:...], I get a crash from a double-release (indicating that self.videoWriter is trying to release all of its pointers to the blockBuffer2s that have been passed in).
If instead, I try the following:
CFRelease(blockBuffer1);
CFRelease(blockBuffer2);
CMSampleBufferInvalidate(sampleBuffer); // Invalidate sample buffer
CFRelease(sampleBuffer);
then [self.audioWriterInput appendSampleBuffer:sampleBuffer] and the call to append video frames begin to fail for every call after that.
So my conclusion is that AVAssetWriter or AVAssetWriterInput is retaining blockBuffer2 until the video has finished recording. Obviously, this can cause real memory problems if the video is recording for long enough. Am I doing something wrong?
Edit: The audio bytes I'm getting are in PCM format, whereas the video format I'm writing is MPEG4, and the audio format for that video is MPEG4AAC. Is it possible that the video writer is performing the PCM → AAC conversion on the fly, and that's why it's getting buffered?
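For reference, my audioSettings dictionary requests AAC along these lines (representative values, not my exact settings):
NSDictionary *audioSettings = @{
    AVFormatIDKey: @(kAudioFormatMPEG4AAC), // the writer input has to encode my PCM input to AAC
    AVNumberOfChannelsKey: @1,
    AVSampleRateKey: @44100.0,
    AVEncoderBitRateKey: @64000
};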
Since you have been waiting a month for an answer, I'll give you a less-than-ideal but workable one.
You could use the ExtendedAudioFile functions to write a separate file. Then you could just play back the video and audio together with an AVComposition. I think you might be able to use AVFoundation to composite the CAF and video together without re-encoding, if you need them composited at the end of recording.
That will get you off and running, then you can solve the memory leak at your leisure.
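A rough sketch of that workaround, assuming asbd holds the incoming PCM format and that audioURL, numFrames, and bufferList are your own (hypothetical) variables:
ExtAudioFileRef audioFile;
// create a .caf alongside the video, written in the client PCM format as-is
ExtAudioFileCreateWithURL((CFURLRef)audioURL, // use __bridge under ARC
                          kAudioFileCAFType,
                          &asbd,
                          NULL,
                          kAudioFileFlags_EraseFile,
                          &audioFile);
// in the audio callback, append each AudioBufferList as it arrives
ExtAudioFileWrite(audioFile, numFrames, &bufferList);
// when recording finishes
ExtAudioFileDispose(audioFile);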
Related
I want to compress some data with VideoToolbox. When I run my app in the foreground it works well, but when I run it in the background it no longer produces compressed data...
I added logs when the encoding starts:
- (void) encode1:(CMSampleBufferRef )sampleBuffer wrapTs:(UInt64)ts;
{
dispatch_sync(aQueue, ^{
frameCount++;
// Get the CV Image buffer
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Create properties
CMTime presentationTimeStamp = CMTimeMake(frameCount, 1000);
//CMTime duration = CMTimeMake(1, DURATION);
VTEncodeInfoFlags flags;
//NSLog(#"encode sessino status:%d", EncodingSession==nil? 0:1);
// Pass it to the encoder
OSStatus statusCode = VTCompressionSessionEncodeFrame(EncodingSession,
imageBuffer,
presentationTimeStamp,
kCMTimeInvalid,
NULL, (__bridge void*)@(ts), &flags);
NSLog(@"hardware compress result: %d", (int)statusCode);
// Check for error
if (statusCode != noErr) {
and in the compress callback:
void didCompressH264(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags,
CMSampleBufferRef sampleBuffer ){
//get outside stamp
UInt64 pp = [((__bridge NSNumber*)sourceFrameRefCon) longLongValue];
NSLog(#"didCompressH264 status:%d", status);
if (status != 0) return;
if (!CMSampleBufferDataIsReady(sampleBuffer))
{
NSLog(#"didCompressH264 data is not ready ");
return;
}
When I run the app in the background, I can see the log "hardware compress result: 0", which means the data is handed to VideoToolbox fine, but I never get the "didCompressH264 status" log.
It seems execution never reaches the didCompressH264 function.
So, I wonder: can VideoToolbox run in the background? If so, how? Any answer is appreciated!
iOS won't let your app run in the background forever. By default, it will give you just a few seconds of running time after the app moves to the background. Beyond that, it will either prevent the app from running or kill it.
You can ask for more time using beginBackgroundTaskWithName:expirationHandler: in your AppDelegate's applicationDidEnterBackground: method. You can check with backgroundTimeRemaining how long iOS gives you. Beyond that, your app will be killed.
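A minimal sketch of that pattern (bgTask here is a UIBackgroundTaskIdentifier property you would declare yourself):
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    self.bgTask = [application beginBackgroundTaskWithName:@"Encoding"
                                         expirationHandler:^{
        // called shortly before the extra time runs out: clean up and end the task
        [application endBackgroundTask:self.bgTask];
        self.bgTask = UIBackgroundTaskInvalid;
    }];
    NSLog(@"Background time remaining: %.0f s", application.backgroundTimeRemaining);
}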
Note that quite a few things don't work the same in the background. Not sure what resources the library uses, but I would be surprised if you got full access to the GPU while in the background, for instance.
VideoToolbox stops decompressing frames as soon as your app enters the background. Hardware acceleration is only permitted for the foreground application, to provide the best experience.
I am using Audio Queues to playback audio files. I need precise timing on the finish of last buffer.
I need to notify a function no later than 150–200 ms after the last buffer is played...
Through the callback method I know how many buffers are enqueued.
I know the buffer size and how many bytes the last buffer is filled with.
First I initialize a number of buffers and fill them with audio data, then enqueue them. When the Audio Queue needs a buffer to be filled, it calls the callback and I fill the buffer with data.
When there is no more audio data available, the Audio Queue sends me the last empty buffer, so I fill it with whatever data I have:
if (sharedCache.numberOfToTalPackets>0)
{
if (currentlyReadingBufferIndex==[sharedCache.baseAudioCache count]-1) {
inBuffer->mAudioDataByteSize = (UInt32)bytesFilled;
lastEnqueudBufferSize=bytesFilled;
err=AudioQueueEnqueueBuffer(inAQ,inBuffer,(UInt32)packetsFilled,packetDescs);
if (err) {
[self failWithErrorCode:err customError:AP_AUDIO_QUEUE_ENQUEUE_FAILED];
}
printf("if that was the last free packet description, then enqueue the buffer\n");
//go to the next item on keepbuffer array
isBufferFilled=YES;
[self incrementBufferUsedCount];
return;
}
}
When the Audio Queue asks for more data via the callback and I have none left, I start counting down the buffers. When the buffer count reaches zero, meaning only one buffer is left in flight to be played, I try to stop the audio queue the moment playback is done.
-(void)decrementBufferUsedCount
{
if (buffersUsed>0) {
buffersUsed--;
printf("buffer on the queue %i\n",buffersUsed);
if (buffersUsed==0) {
NSLog(#"playback is finished\n");
// end playback
isPlayBackDone=YES;
double sampleRate = dataFormat.mSampleRate;
double bufferDuration = lastEnqueudBufferSize/ sampleRate;
double estimatedTimeNeded=bufferDuration*1;
[self performSelector:@selector(stopPlayer) withObject:nil afterDelay:estimatedTimeNeded];
}
}
}
-(void)stopPlayer
{
@synchronized(self)
{
state=AP_STOPPING;
}
err=AudioQueueStop(queue, TRUE);
if (err) {
[self failWithErrorCode:err customError:AP_AUDIO_QUEUE_STOP_FAILED];
}
else
{
@synchronized(self)
{
state=AP_STOPPED;
NSLog(@"Stopped\n");
}
}
}
However, it seems I can't get precise timing here; the above code stops the player early.
If I do the following, the audio cuts out early too:
double bufferDuration = XMAQDefaultBufSize/ sampleRate;
double estimatedTimeNeded=bufferDuration*1;
If I increase the multiplier from 1 to 2, I get some delay since the buffer size is big; 1.5 seems to be the optimum value for now, but I don't understand why lastEnqueudBufferSize / sampleRate is not working.
Details of the audio file, and buffers:
The audio file has a 22050 Hz sample rate
#define kNumberPlaybackBuffers 4
#define kAQDefaultBufSize 16384
It is a VBR file format with no bitrate information available.
EDIT:
I found an easier way that gets the same results (±10 ms). After you set up your output queue with AudioQueueNewOutput(), you initialize an AudioQueueTimelineRef to be used in your output callback (the ticksToSeconds function is included below in my first method). Don't forget to #import <mach/mach_time.h>.
//After AudioQueueNewOutput()
AudioQueueTimelineRef timeLine; //ivar
AudioQueueCreateTimeline(queue, &timeLine); // the function takes a pointer to the timeline
Then in your output callback you call AudioQueueGetCurrentTime(). Caveat: queue must be playing for valid timestamps. So for very short files you might need to use the AudioQueueProcessingTap method below.
AudioTimeStamp timestamp;
AudioQueueGetCurrentTime(queue, self->timeLine, &timestamp, NULL);
The timestamp ties together the current sample playing with the current machine time. With that info we can get an exact machine time in the future when our last sample will be played.
Float64 samplesLeft = self->frameCount - timestamp.mSampleTime;//samples in file - current sample
Float64 secondsLeft = samplesLeft / self->sampleRate; //seconds of audio to play
UInt64 ticksLeft = secondsLeft / ticksToSeconds(); //seconds converted to machine ticks
UInt64 machTimeFinish = timestamp.mHostTime + ticksLeft; //machine time of first sample + ticks left
Now that we have this future machine time we can use it to time whatever it is that you want to do with some accuracy.
UInt64 currentMachTime = mach_absolute_time();
UInt64 ticksFromNow = machTimeFinish - currentMachTime;
float secondsFromNow = ticksFromNow * ticksToSeconds();
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
//do the thing!!!
printf("Giggety");
});
If GCD's dispatch_after isn't accurate enough, there are ways to set up a precision timer; one option is sketched below.
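One option, as a rough sketch: mach_wait_until() takes an absolute host-time deadline, so the machTimeFinish computed above can be handed to it directly from a background queue:
#include <mach/mach_time.h>
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    mach_wait_until(machTimeFinish); // blocks this thread until the absolute host time
    dispatch_async(dispatch_get_main_queue(), ^{
        //do the thing!!!
    });
});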
Using AudioQueueProcessingTap
You can get fairly low response time from an AudioQueueProcessingTap. First you make your callback, which will essentially put itself in between the audio stream. The MyObject type is just whatever self is in your code (this is ARC bridging to get self inside the function). Inspecting ioFlags tells you when the stream starts and finishes. The ioTimeStamp of an output callback describes the time at which the first sample in the callback will hit the speaker. So if you want to be exact, here's how you do it. I added some convenience functions for converting machine time to seconds.
#import <mach/mach_time.h>
double getTimeConversion(){
double timecon;
mach_timebase_info_data_t tinfo;
kern_return_t kerror;
kerror = mach_timebase_info(&tinfo);
timecon = (double)tinfo.numer / (double)tinfo.denom;
return timecon;
}
double ticksToSeconds(){
static double ticksToSeconds = 0;
if (!ticksToSeconds) {
ticksToSeconds = getTimeConversion() * 0.000000001;
}
return ticksToSeconds;
}
void processingTapCallback(
void * inClientData,
AudioQueueProcessingTapRef inAQTap,
UInt32 inNumberFrames,
AudioTimeStamp * ioTimeStamp,
UInt32 * ioFlags,
UInt32 * outNumberFrames,
AudioBufferList * ioData){
MyObject *self = (__bridge MyObject *)inClientData;
AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
if (*ioFlags & kAudioQueueProcessingTap_EndOfStream) { // the flags are a bitmask, so test with &
Float64 sampTime;
UInt32 frameCount;
AudioQueueProcessingTapGetQueueTime(inAQTap, &sampTime, &frameCount);
Float64 samplesInThisCallback = self->frameCount - sampTime; // file sample count - queue's current sample
//double secondsInCallback = outNumberFrames / (double)self->sampleRate; outNumberFrames was inaccurate
double secondsInCallback = samplesInThisCallback / (double)self->sampleRate;
uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (secondsInCallback / ticksToSeconds());
[self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
}
}
-(void)lastSampleDoneAt:(uint64_t)lastSampTime{
uint64_t currentTime = mach_absolute_time();
if (lastSampTime > currentTime) {
double secondsFromNow = (lastSampTime - currentTime) * ticksToSeconds();
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
//do the thing!!!
});
}
else{
//do the thing!!!
}
}
You set it up like this, after AudioQueueNewOutput() and before AudioQueueStart(). Notice the passing of bridged self to the inClientData argument. The queue holds self as a void* to be used in the callback, where we bridge it back to an Objective-C object.
AudioStreamBasicDescription format;
AudioQueueProcessingTapRef tapRef;
UInt32 maxFrames = 0;
AudioQueueProcessingTapNew(queue, processingTapCallback, (__bridge void *)self, kAudioQueueProcessingTap_PostEffects, &maxFrames, &format, &tapRef);
You could also get the end machine time as soon as the file starts, which is a little cleaner:
void processingTapCallback(
void * inClientData,
AudioQueueProcessingTapRef inAQTap,
UInt32 inNumberFrames,
AudioTimeStamp * ioTimeStamp,
UInt32 * ioFlags,
UInt32 * outNumberFrames,
AudioBufferList * ioData){
MyObject *self = (__bridge MyObject *)inClientData;
AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
if (*ioFlags & kAudioQueueProcessingTap_StartOfStream) { // the flags are a bitmask, so test with &
uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (self->audioDurSeconds / ticksToSeconds());
[self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
}
}
If you use AudioQueueStop in asynchronous mode, then stopping happens after all queued buffers have been played or recorded. See doc.
You're using it in a synchronous mode, where stopping happens ASAP, and playback cuts out immediately, without regard for previously buffered audio data. You want precise timing, but only because audio is cutting off. Right? So rather than go synchronous + add additional timing/callback code, I recommend going asynchronous:
err=AudioQueueStop(queue, FALSE);
From docs:
If you pass false, the function returns immediately, but the audio queue does not stop until its queued buffers are played or recorded (that is, the stop occurs asynchronously). Audio queue callbacks are invoked as necessary until the queue actually stops.
For me this worked really well for what I needed:
stopping the queue in the callback when the data is over, using AudioQueueStop(queue, FALSE), while:
listening for the actual stop via the kAudioQueueProperty_IsRunning property (which fires later than the AudioQueueStop() call, when the last buffer actually gets rendered)
After stopping the queue, you can prepare the action you need to execute when the audio ends, and actually execute it when the listener fires.
I am not sure about the timing precision of that event, but for my task it definitely behaved better than a notification straight from the callback. There is buffering inside the AudioQueue and in the output device itself, so the IsRunning listener gives better results as to when the AudioQueue actually stops playing.
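A minimal sketch of that listener (the callback name is mine):
void isRunningCallback(void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID)
{
    UInt32 isRunning = 0;
    UInt32 size = sizeof(isRunning);
    AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
    if (!isRunning) {
        // the queue has now actually finished rendering its last buffer
    }
}
// after creating the queue:
AudioQueueAddPropertyListener(queue, kAudioQueueProperty_IsRunning, isRunningCallback, (__bridge void *)self);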
In my iOS application, I am using AudioQueue for audio recording and playback. Basically, I have the OS X version running and am porting it to iOS.
I realize that on iOS I need to configure/set the AV session, and I have done the following so far:
-(void)initAudioSession{
//get your app's audioSession singleton object
AVAudioSession* session = [AVAudioSession sharedInstance];
//error handling
BOOL success;
NSError* error;
//set the audioSession category.
//Needs to be Record or PlayAndRecord to use audioRouteOverride:
success = [session setCategory:AVAudioSessionCategoryPlayAndRecord
error:&error];
if (!success) NSLog(#"AVAudioSession error setting category:%#",error);
//set the audioSession override
success = [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker
error:&error];
if (!success) NSLog(#"AVAudioSession error overrideOutputAudioPort:%#",error);
//activate the audio session
success = [session setActive:YES error:&error];
if (!success) NSLog(#"AVAudioSession error activating: %#",error);
else NSLog(#"audioSession active");
}
Now what is happening is that the speaker AudioQueue callback never gets called. I checked many answers and comments on SO, Google, etc., and it looks correct. The way I did it is:
Create AudioQueues for input and output: linear PCM configuration, 16000 Hz sampling rate
Allocate buffers
Set up each queue with a valid callback
Start the queues
It seems to be fine; I can hear the output on the other end (i.e., the input AudioQueue is working), but the output AudioQueue's AudioQueueOutputCallback never gets called.
I suspect I need to set the proper AVAudioSession category; I am trying all the possible options but am unable to hear anything from the speaker.
I compared my implementation with Apple's SpeakHere example, which runs the AudioQueue on the main thread.
Even if I don't start the input AudioQueue (mic), I see the same behavior, and it's difficult to reproduce SpeakHere's behavior, i.e., stopping recording and then playing.
Thanks for looking at it; I'm hoping for your comments/help. I will be able to share a code snippet.
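In the meantime, here is a simplified sketch of my output-queue setup (representative, not my exact code):
AudioStreamBasicDescription fmt = {0};
fmt.mSampleRate       = 16000;
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
fmt.mChannelsPerFrame = 1;
fmt.mBitsPerChannel   = 16;
fmt.mBytesPerFrame    = 2;
fmt.mFramesPerPacket  = 1;
fmt.mBytesPerPacket   = 2;
AudioQueueRef outQueue;
AudioQueueNewOutput(&fmt, AudioStream::AQBufferCallback, this, NULL, kCFRunLoopCommonModes, 0, &outQueue);
for (int i = 0; i < kNumberBuffers; ++i) { // kNumberBuffers is my own constant
    AudioQueueBufferRef buf;
    AudioQueueAllocateBuffer(outQueue, 640, &buf);
    // buffers are allocated here but not enqueued yet
}
AudioQueueStart(outQueue, NULL);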
Thanks for looking at it. I realized the problem; this is my callback:
void AudioStream::AQBufferCallback(void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inCompleteAQBuffer)
{
AudioStream *THIS = (AudioStream *)inUserData;
if (THIS->mIsDone) {
return;
}
if ( !THIS->IsRunning()){
NSLog(#" AudioQueue is not running");
return; // Error part -- this early return turned out to be the problem (see below)
}
int bytes = THIS->bufferByteSize;
if ( !THIS->pSingleBuffer){
THIS->pSingleBuffer = new unsigned char[bytes];
}
unsigned char *buffer = THIS->pSingleBuffer;
if ((THIS->mNumPacketsToRead) > 0) {
/* let's read only the first packet */
memset(buffer,0x00,bytes);
float volume = THIS->volume();
if (THIS->volumeChange){
SInt16 *editBuffer = (SInt16 *)buffer;
// loop over every packet
for (int nb = 0; nb < (bytes / 2); nb++) { // sizeof(buffer) would only give the pointer size, so use the byte count
// we check if the gain has been modified to save resources
if (volume != 0) {
// we need more accuracy in our calculation so we calculate with doubles
double gainSample = ((double)editBuffer[nb]) / 32767.0;
/*
at this point we multiply with our gain factor
we don't use addition, to prevent generating sound where there is none:
no noise
0*10=0
noise if zero
0+10=10
*/
gainSample *= volume;
/**
our signal range cant be higher or lesser -1.0/1.0
we prevent that the signal got outside our range
*/
gainSample = (gainSample < -1.0) ? -1.0 : (gainSample > 1.0) ? 1.0 : gainSample;
/*
This thing here is a little helper to shape our incoming wave.
The sound gets pretty warm and better and the noise is reduced a lot.
Feel free to comment this line out and back in again.
You can see what happens here: http://silentmatt.com/javascript-function-plotter/
Copy this into the plotter and hit enter: plot y=(1.5*x)-0.5*x*x*x
*/
gainSample = (1.5 * gainSample) - 0.5 * gainSample * gainSample * gainSample;
// multiply the new signal back to short
gainSample = gainSample * 32767.0;
// write the calculated sample back to the buffer
editBuffer[nb] = (SInt16)gainSample;
}
}
}
else{
// NSLog(#" No change in the volume");
}
memcpy(inCompleteAQBuffer->mAudioData, buffer, 640);
inCompleteAQBuffer->mAudioDataByteSize = 640;
inCompleteAQBuffer->mPacketDescriptionCount = 320;
show_err(AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL));
}
}
As I was not enqueuing buffers when they were allocated, and I believe a few buffers have to be enqueued before the queue starts, removing the early return (marked above) solved my problem.
MIDI noob in training here...
I have been using MusicPlayer/MusicSequence/MusicTrack to play MIDI notes on devices running iOS. The notes are playing fine. I am struggling to change the instrument being played. As far as I can figure this is how to do it:
-(void) setInstrument:(MIDIInstruments) program channel:(int) channel MusicTrack:(MusicTrack*) track time:(float) time {
if(channel < 0 || channel > 15 || program >=MIDI_INSTRUMENT_COUNT || time < 0) {
return;
}
MIDIChannelMessage programChange = { ((UInt8)0xC) << 4 | ((UInt8)channel), ((UInt8)program), 0, 0};
OSStatus result = MusicTrackNewMIDIChannelEvent(*track, time, &programChange);
if(result != noErr) {
[NSException raise:#"Set Instrument" format:#"Failed to set instrument error: %#", [NSError errorWithDomain:NSOSStatusErrorDomain code:result userInfo:nil]];
}
}
In this case the channel is 0 or 1, I tried several instruments throughout the range of valid instrument enumerations, the time is 0.0, and the MusicTrack is valid and has ~30 seconds of note events. The call to set the channel event returns noErr. I am stumped... anyone?
I had read in other posts that I would be able to generate MIDI using MusicPlayer and friends, and it provides for program changes, so I had figured this was supported. After exhausting all theories, I turned to AUGraph. I added a *.sf2 file that I found online, and instantiated the AUGraph, two AudioUnits, a MIDIEndpointRef, and a MIDIClientRef, according to this tutorial.
It was in the endpoint callback that I had to turn notes on and off using MusicDeviceMIDIEvent on the samplerUnit, which did allow for the program change, whereas before I was just loading note events into a MusicTrack and playing/stopping the MusicPlayer.
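For reference, once you have the sampler unit out of the AUGraph, the program change itself is a one-liner; a sketch, assuming samplerUnit is the AUSampler's AudioUnit:
UInt32 channel = 0;
UInt32 program = 24; // illustrative program number
// 0xC0 | channel is the MIDI program-change status byte
OSStatus result = MusicDeviceMIDIEvent(samplerUnit, 0xC0 | channel, program, 0, 0);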
For some reason, it seems that stopping at a breakpoint during debugging will kill my audio queue playback.
The AudioQueue will be playing audio output.
Trigger a breakpoint to pause my iPhone app.
Upon subsequent resume, audio no longer gets played. (However, AudioQueue callback functions are still getting called. No AudioSession or AudioQueue errors are found.)
Since the debugger pauses the application (rather than, say, an incoming phone call), it's not a typical iPhone interruption, so AudioSession interruption callbacks do not get triggered as in this solution.
I am using three AudioQueue buffers of 4096 samples at 22 kHz and filling them in a circular manner.
Problem occurs for both multi-threaded and single-threaded mode.
Is there some known problem that you can't pause and resume AudioSessions or AudioQueues during a debugging session?
Is it running out of queued buffers and destroying/killing the AudioQueue object? (But then my AQ callback shouldn't trigger.)
Anyone have insight into inner workings of iPhone AudioQueues?
After playing around with it for the last several days, before posting to StackOverflow, I figured out the answer just today. Go figure!
Just recreate the AudioQueue by calling my "preparation functions":
SetupNewQueue(mDataFormat.mSampleRate, mDataFormat.mChannelsPerFrame);
StartQueue(true);
So detect when your AudioQueue may have "died". In my case, I would be writing data into an input buffer to be "pulled" by the AudioQueue callback. If that doesn't occur within a certain time, or after X bytes of the input buffer have been filled, I recreate the AudioQueue.
This seems to solve the issue where audio halts/fails when you hit a debugging breakpoint.
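The detection can be a simple timestamp check; a rough sketch with hypothetical names:
// Record the host time whenever the AQ callback pulls data, then poll this
// from elsewhere (e.g. a timer) to spot a stalled queue.
void AQPlayer::CheckQueueAlive()
{
    double now = CFAbsoluteTimeGetCurrent();
    if (now - mLastCallbackTime > kQueueDeadTimeout) { // hypothetical threshold
        AudioQueueDispose(mQueue, true);               // tear down the stalled queue
        SetupNewQueue(mDataFormat.mSampleRate, mDataFormat.mChannelsPerFrame);
        StartQueue(true);
    }
}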
The simplified versions of these functions are the following:
void AQPlayer::SetupNewQueue(double inSampleRate, UInt32 inChannelsPerFrame)
{
//Prep AudioStreamBasicDescription
mDataFormat.mSampleRate = inSampleRate;
mDataFormat.SetCanonical(inChannelsPerFrame, YES);
XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
NULL, kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");
// adjust buffer size to represent about a half second of audio based on this format
CalculateBytesForTime(mDataFormat, kBufferDurationSeconds, &mBufferByteSize, &mNumPacketsToRead);
ctl->cmsg(CMSG_INFO, VERB_NOISY, "AQPlayer Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)mBufferByteSize, (int)mNumPacketsToRead);
mBufferWaitTime = mNumPacketsToRead / mDataFormat.mSampleRate * 0.9;
XThrowIfError(AudioQueueAddPropertyListener(mQueue, kAudioQueueProperty_IsRunning, isRunningProc, this), "adding property listener");
//Allocate AQ buffers (assume we are using CBR (constant bitrate))
for (int i = 0; i < kNumberBuffers; ++i) {
XThrowIfError(AudioQueueAllocateBuffer(mQueue, mBufferByteSize, &mBuffers[i]), "AudioQueueAllocateBuffer failed");
}
...
}
OSStatus AQPlayer::StartQueue(BOOL inResume)
{
// if we are not resuming, we also should restart the file read index
if (!inResume)
mCurrentPacket = 0;
// prime the queue with some data before starting
for (int i = 0; i < kNumberBuffers; ++i) {
mBuffers[i]->mAudioDataByteSize = mBuffers[i]->mAudioDataBytesCapacity;
memset( mBuffers[i]->mAudioData, 0, mBuffers[i]->mAudioDataByteSize );
XThrowIfError(AudioQueueEnqueueBuffer( mQueue,
mBuffers[i],
0,
NULL ),"AudioQueueEnqueueBuffer failed");
}
OSStatus status;
status = AudioSessionSetActive( true );
XThrowIfError(status, "\n\n*** AudioSession failed to become active *** \n\n");
status = AudioQueueStart(mQueue, NULL);
XThrowIfError(status, "\n\n*** AudioQueue failed to start *** \n\n");
return status;
}