I want to implement an audio manager, but I've got a memory leak and I don't know why or where it happens. Could someone help me?
I create a button whose event just runs playAudio with an audio path. Then I click the button, click, click, click, ..., click (many times).
The memory usage keeps increasing. I tried to close the audio file and clean up the memory before each play, but it made no difference.
Please help, or try to give some ideas on how to achieve this. Thanks!
For more detail, you can see my demo project on GitHub.
ViewController
- (void)viewDidLoad {
[super viewDidLoad];
// Create an audio manager
self.audio1 = [AudioPlayerManager new];
}
// This is a button click event
- (IBAction)actionAudioPlay:(id)sender {
NSString *path1 = [NSString stringWithFormat:@"%@", [[NSBundle mainBundle] pathForResource:@"success-notification-alert_A_major" ofType:@"wav"]];
[self.audio1 playAudio:path1];
}
AudioPlayerManager
Defines
static const UInt32 maxBufferSize = 0x10000;
static const UInt32 minBufferSize = 0x4000;
static const UInt32 maxBufferNum = 3;
Global variables
AudioFileID _audioFile;
AudioStreamBasicDescription _dataFormat;
AudioQueueRef _queue;
UInt32 numPacketsToRead;
AudioStreamPacketDescription *packetDescs;
AudioQueueBufferRef buffers[maxBufferNum];
SInt64 packetIndex;
UInt32 maxPacketSize;
UInt32 outBufferSize;
My code
- (void)playAudio:(NSString *)audioFileName {
// Step 1: Open the audio file
OSStatus status = AudioFileOpenURL(
(__bridge CFURLRef _Nonnull)([NSURL fileURLWithPath:audioFileName]),
kAudioFileReadPermission,
0,
&_audioFile);
// Step 2: Read the metadata of this audio file
UInt32 formatSize = sizeof(AudioStreamBasicDescription);
status = AudioFileGetProperty(_audioFile,
kAudioFilePropertyDataFormat, &formatSize, &_dataFormat);
// Step 3: Register the callback function
status = AudioQueueNewOutput(
&_dataFormat,
BufferCallback,
(__bridge void * _Nullable)(self),
NULL,
NULL,
0,
&_queue);
if (status != noErr) NSLog(@"AudioQueueNewOutput failed %d", status);
// Step 4: Read the upper-bound packet size
UInt32 size = sizeof(maxPacketSize);
status = AudioFileGetProperty(
_audioFile,
kAudioFilePropertyPacketSizeUpperBound,
&size,
&maxPacketSize);
if (status != noErr) NSLog(@"kAudioFilePropertyPacketSizeUpperBound failed %d", status);
if (_dataFormat.mFramesPerPacket != 0) {
Float64 numPacketsPerSecond = _dataFormat.mSampleRate / _dataFormat.mFramesPerPacket;
outBufferSize = numPacketsPerSecond * maxPacketSize;
} else {
outBufferSize = (maxBufferSize > maxPacketSize) ? maxBufferSize : maxPacketSize;
}
// Clamp the buffer size into [minBufferSize, maxBufferSize]
if (outBufferSize > maxBufferSize && outBufferSize > maxPacketSize) {
outBufferSize = maxBufferSize;
} else if (outBufferSize < minBufferSize) {
outBufferSize = minBufferSize;
}
// Step 5: Calculate the packet count
numPacketsToRead = outBufferSize / maxPacketSize;
// Step 6: Allocate the AudioStreamPacketDescription array
packetDescs = (AudioStreamPacketDescription *)malloc(numPacketsToRead * sizeof(AudioStreamPacketDescription));
// Step 7: Reset the packet index
packetIndex = 0;
// Step 8: Allocate and prime the buffers
for (int i = 0; i < maxBufferNum; i++) {
// Step 8.1: Allocate the buffer
status = AudioQueueAllocateBuffer(
_queue,
outBufferSize,
&buffers[i]);
if (status != noErr) NSLog(@"AudioQueueAllocateBuffer failed %d", status);
// Step 8.2: Fill the buffer with audio data
[self audioQueueOutputWithQueue:_queue
queueBuffer:buffers[i]];
}
// Step 9: Start
status = AudioQueueStart(_queue, NULL);
if (status != noErr) NSLog(@"AudioQueueStart failed %d", status);
}
Audio queue output method
- (void)audioQueueOutputWithQueue:(AudioQueueRef)audioQueue
queueBuffer:(AudioQueueBufferRef)audioQueueBuffer {
OSStatus status;
// Step 1: load audio data
// If the packetIndex is out of range, the ioNumPackets will be 0
UInt32 ioNumBytes = outBufferSize;
UInt32 ioNumPackets = numPacketsToRead;
status = AudioFileReadPacketData(
_audioFile,
false,
&ioNumBytes,
packetDescs,
packetIndex,
&ioNumPackets,
audioQueueBuffer->mAudioData);
if (status != noErr) NSLog(@"AudioFileReadPacketData failed %d", status);
// Step 2: prevent load audio data failed
if (ioNumPackets <= 0) {
return;
}
// Step 3: re-assign the data size
audioQueueBuffer->mAudioDataByteSize = ioNumBytes;
// Step 4: fill the buffer to AudioQueue
status = AudioQueueEnqueueBuffer(
audioQueue,
audioQueueBuffer,
ioNumPackets,
packetDescs
);
if (status != noErr) NSLog(@"AudioQueueEnqueueBuffer failed %d", status);
// Step 5: Shift to followed index
packetIndex += ioNumPackets;
}
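Note that the early return at Step 2 never stops the queue, so once the file is exhausted the queue keeps running and holding its resources. A hedged variant of that branch, following the usual playback pattern (an assumption, not part of the original post), would stop the queue at end of file:
// Possible EOF handling: let the queue drain its remaining buffers, then stop.
if (ioNumPackets <= 0) {
    AudioQueueStop(_queue, false);
    return;
}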
Callback function
static void BufferCallback(void *inUserData,AudioQueueRef inAQ,
AudioQueueBufferRef buffer) {
AudioPlayerManager *manager = (__bridge AudioPlayerManager *)inUserData;
[manager audioQueueOutputWithQueue:inAQ queueBuffer:buffer];
}
Close audio file
- (OSStatus)close:(AudioFileID)audioFileID {
    OSStatus status = AudioFileClose(audioFileID);
    if (status != noErr) NSLog(@"AudioFileClose failed %d", status);
    return status;
}
Free Memory
- (void)freeMemory {
if (packetDescs) {
free(packetDescs);
}
packetDescs = NULL;
}
Finally, I found the solution: I just tear down my queue before each new playback, and all of the memory is released. Sharing my method for everyone who hits the same issue.
- (void)playAudio:(NSString *)audioFileName {
    // Add this code: tear down the previous queue before starting a new one
    // (stop and dispose of the queue first, so no callback can touch the
    // file or packet descriptions while they are being released)
    if (_queue) {
        AudioQueueStop(_queue, true);
        AudioQueueDispose(_queue, true);
        _queue = NULL;
        AudioFileClose(_audioFile);
        [self freeMemory];
    }
    // the other code ...
}
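If it helps, the same teardown can also live in dealloc so the last queue is not leaked when the manager itself goes away. A minimal sketch, assuming ARC and the same ivars as above:
- (void)dealloc {
    if (_queue) {
        AudioQueueStop(_queue, true);
        AudioQueueDispose(_queue, true); // also frees buffers from AudioQueueAllocateBuffer
        _queue = NULL;
        AudioFileClose(_audioFile);
        [self freeMemory];
    }
}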
Related
I am using AudioQueueStart to start recording on an iOS device, and I want all of the recording data streamed to me in buffers so that I can process it and send it to a server.
Basic functionality works great; however, in my BufferFilled function I usually get < 10 bytes of data on every call. This feels very inefficient, especially since I have tried to set the buffer size to 16384 bytes (see the beginning of the startRecording method).
How can I make it fill up the buffer more before calling BufferFilled? Or do I need to add a second-phase buffer before sending to the server to achieve what I want?
OSStatus BufferFilled(void *aqData, SInt64 inPosition, UInt32 requestCount, const void *inBuffer, UInt32 *actualCount) {
    AQRecorderState *pAqData = (AQRecorderState *)aqData;
    NSData *audioData = [NSData dataWithBytes:inBuffer length:requestCount];
    *actualCount = requestCount; // report the number of bytes consumed
    // audioData is usually < 10 bytes, sometimes 100 bytes, but never close to 16384 bytes
    return 0;
}
void HandleInputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc) {
AQRecorderState *pAqData = (AQRecorderState*)aqData;
if (inNumPackets == 0 && pAqData->mDataFormat.mBytesPerPacket != 0)
inNumPackets = inBuffer->mAudioDataByteSize / pAqData->mDataFormat.mBytesPerPacket;
if(AudioFileWritePackets(pAqData->mAudioFile, false, inBuffer->mAudioDataByteSize, inPacketDesc, pAqData->mCurrentPacket, &inNumPackets, inBuffer->mAudioData) == noErr) {
pAqData->mCurrentPacket += inNumPackets;
}
if (pAqData->mIsRunning == 0)
return;
OSStatus error = AudioQueueEnqueueBuffer(pAqData->mQueue, inBuffer, 0, NULL);
}
void DeriveBufferSize(AudioQueueRef audioQueue, AudioStreamBasicDescription *ASBDescription, Float64 seconds, UInt32 *outBufferSize) {
static const int maxBufferSize = 0x50000;
UInt32 maxPacketSize = ASBDescription->mBytesPerPacket;
if (maxPacketSize == 0) {
UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
AudioQueueGetProperty(audioQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize, &maxVBRPacketSize);
}
Float64 numBytesForTime = ASBDescription->mSampleRate * maxPacketSize * seconds;
*outBufferSize = (UInt32)(numBytesForTime < maxBufferSize ? numBytesForTime : maxBufferSize);
}
OSStatus SetMagicCookieForFile (AudioQueueRef inQueue, AudioFileID inFile) {
OSStatus result = noErr;
UInt32 cookieSize;
if (AudioQueueGetPropertySize (inQueue, kAudioQueueProperty_MagicCookie, &cookieSize) == noErr) {
char *magicCookie = (char *)malloc(cookieSize);
if (AudioQueueGetProperty (inQueue, kAudioQueueProperty_MagicCookie, magicCookie, &cookieSize) == noErr)
result = AudioFileSetProperty (inFile, kAudioFilePropertyMagicCookieData, cookieSize, magicCookie);
free(magicCookie);
}
return result;
}
- (void)startRecording {
aqData.mDataFormat.mFormatID = kAudioFormatMPEG4AAC;
aqData.mDataFormat.mSampleRate = 22050.0;
aqData.mDataFormat.mChannelsPerFrame = 1;
aqData.mDataFormat.mBitsPerChannel = 0;
aqData.mDataFormat.mBytesPerPacket = 0;
aqData.mDataFormat.mBytesPerFrame = 0;
aqData.mDataFormat.mFramesPerPacket = 1024;
aqData.mDataFormat.mFormatFlags = kMPEG4Object_AAC_Main;
AudioFileTypeID fileType = kAudioFileAAC_ADTSType;
aqData.bufferByteSize = 16384;
UInt32 defaultToSpeaker = TRUE;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(defaultToSpeaker), &defaultToSpeaker);
OSStatus status = AudioQueueNewInput(&aqData.mDataFormat, HandleInputBuffer, &aqData, NULL, kCFRunLoopCommonModes, 0, &aqData.mQueue);
UInt32 dataFormatSize = sizeof (aqData.mDataFormat);
status = AudioQueueGetProperty(aqData.mQueue, kAudioQueueProperty_StreamDescription, &aqData.mDataFormat, &dataFormatSize);
status = AudioFileInitializeWithCallbacks(&aqData, nil, BufferFilled, nil, nil, fileType, &aqData.mDataFormat, 0, &aqData.mAudioFile);
for (int i = 0; i < kNumberBuffers; ++i) {
status = AudioQueueAllocateBuffer (aqData.mQueue, aqData.bufferByteSize, &aqData.mBuffers[i]);
status = AudioQueueEnqueueBuffer (aqData.mQueue, aqData.mBuffers[i], 0, NULL);
}
aqData.mCurrentPacket = 0;
aqData.mIsRunning = true;
status = AudioQueueStart(aqData.mQueue, NULL);
}
UPDATE: I have logged the data that I receive and it is quite interesting; it almost seems like half of the "packets" are some kind of header and half are sound data. Could I assume this is just how the AAC encoding on iOS works? It writes a header in one buffer, then data in the next one, and so on. And it never wants more than around 170-180 bytes for each data chunk, which is why it ignores my large buffer?
I solved this eventually. It turns out that yes, the AAC encoding on iOS produces small and large chunks of data. I added a second-phase buffer myself using NSMutableData and it worked perfectly.
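The second-phase buffer itself isn't shown above, but the idea is just to append each small chunk to an NSMutableData accumulator and flush it once a threshold is reached. A rough sketch; pendingData, kUploadChunkSize, and sendChunkToServer: are illustrative names, not from the original code:
static const NSUInteger kUploadChunkSize = 16384; // illustrative threshold

- (void)bufferFilledWithBytes:(const void *)bytes length:(NSUInteger)length {
    if (self.pendingData == nil) {
        self.pendingData = [NSMutableData data];
    }
    [self.pendingData appendBytes:bytes length:length];
    // Flush whole chunks; keep any remainder for the next callback.
    while (self.pendingData.length >= kUploadChunkSize) {
        NSData *chunk = [self.pendingData subdataWithRange:NSMakeRange(0, kUploadChunkSize)];
        [self sendChunkToServer:chunk];
        [self.pendingData replaceBytesInRange:NSMakeRange(0, kUploadChunkSize) withBytes:NULL length:0];
    }
}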
Update
I resolved the recording problem on iOS 10: after adding the Audio Session configuration before starting recording, it works as normal. But the playback problem hasn't been resolved.
Here's the solution:
NSError *error = nil;
// the param category depends what you need
BOOL ret = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
if (!ret) {
NSLog(@"Audio session category setup failed");
return;
}
// don't forget to setActive NO when finishing recording
ret = [[AVAudioSession sharedInstance] setActive:YES error:&error];
if (!ret)
{
NSLog(@"Audio session activation failed");
return;
}
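As the comment above says, the matching deactivation belongs at the end of recording; a small sketch (the NotifyOthersOnDeactivation option is one reasonable choice here, not something from the original post):
// When recording finishes, deactivate and let other apps resume their audio.
NSError *error = nil;
BOOL ok = [[AVAudioSession sharedInstance] setActive:NO
                                        withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                              error:&error];
if (!ok) {
    NSLog(@"Audio session deactivation failed: %@", error);
}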
Original
I'm working on audio recording with Audio Queue Services in iOS. I followed Apple's official tutorial for the recording part and the playback part. It tested successfully on iOS 9.3 in the simulator, but failed on iOS 10.3.1 on a real device (an iPad).
For the recording part, the callback function invokes AudioFileWritePackets to save the audio into a file (see the code below). In iOS 9, ioNumPackets always has a non-zero value, but in iOS 10 it is always 0 during the first recording, and only from the second recording on does it become normal. That is, the recording only works from the second attempt.
Here's some code about recording:
Callback function:
static void AudioInputCallback(void * inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp * inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription * inPacketDescs) {
NSLog(@"Input callback called");
RecordState * aqData = (RecordState*)inUserData;
if (aqData->isRecording == 0) return;
if (inNumPackets == 0 && aqData->dataFormat.mBytesPerPacket != 0)
inNumPackets = inBuffer->mAudioDataByteSize / aqData->dataFormat.mBytesPerPacket;
NSLog(@"inNumPackets = %d", inNumPackets);
// handle the data
if (outputToMobile){
OSStatus res = AudioFileWritePackets(aqData->audioFile, false, inBuffer->mAudioDataByteSize, inPacketDescs, aqData->currentPacket, &inNumPackets, inBuffer->mAudioData);
if(res == noErr)
aqData->currentPacket += inNumPackets;
}else{
}
// after handling, re-enqueue the buffer into the queue
AudioQueueEnqueueBuffer(aqData->queue, inBuffer, 0, NULL);
}
Start record function:
-(void)startRecording{
[self setupAudioFormat:&recordState.dataFormat];
recordState.currentPacket = 0;
OSStatus status;
status = AudioQueueNewInput(&recordState.dataFormat, AudioInputCallback, &recordState, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &recordState.queue);
if (status == 0) {
UInt32 dataFormatSize = sizeof (recordState.dataFormat);
AudioQueueGetProperty (recordState.queue,kAudioQueueProperty_StreamDescription,&recordState.dataFormat,&dataFormatSize);
if (outputToMobile) {
[self createFile];
SetMagicCookieForFile(recordState.queue, recordState.audioFile);
}
DeriveBufferSize(recordState.queue, &recordState.dataFormat, 0.5, &recordState.bufferByteSize);
for (int i = 0; i < NUM_BUFFERS; i++) {
AudioQueueAllocateBuffer(recordState.queue, recordState.bufferByteSize, &recordState.buffers[i]);
AudioQueueEnqueueBuffer(recordState.queue, recordState.buffers[i], 0, NULL);
}
recordState.isRecording = true;
AudioQueueStart(recordState.queue, NULL);
}
}
For the playback part, the callback function invokes AudioFileReadPacketData to read the audio file (see the code below). Likewise, in iOS 9 ioNumPackets is always non-zero, but in iOS 10 ioNumPackets is always 0, so nothing is played back on iOS 10.
Here's some code about playback:
Callback function:
static void AudioOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer){
NSLog(@"Output callback called");
PlayState *aqData = (PlayState *)inUserData;
if (aqData->isPlaying == 0) return;
UInt32 numBytesReadFromFile;
UInt32 numPackets = aqData->numPacketsToRead;
AudioFileReadPacketData(aqData->audioFile, false, &numBytesReadFromFile, aqData->packetDesc, aqData->currentPacket, &numPackets, inBuffer->mAudioData);
NSLog(@"outNumPackets = %d", numPackets);
if (numPackets > 0) {
AudioQueueEnqueueBuffer(aqData->queue, inBuffer, aqData->packetDesc ? numPackets : 0, aqData->packetDesc);
aqData->currentPacket += numPackets;
} else {
AudioQueueStop(aqData->queue, false);
aqData->isPlaying = false;
}
}
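One observation, not from the original post: AudioFileReadPacketData treats ioNumBytes as an in/out parameter (on input, the capacity of the destination buffer), so numBytesReadFromFile should be initialized before the call; left uninitialized it may hold 0, which would plausibly yield zero packets on a stricter OS version. For example:
// Initialize the in/out byte count to the buffer capacity before reading.
UInt32 numBytesReadFromFile = inBuffer->mAudioDataBytesCapacity;
UInt32 numPackets = aqData->numPacketsToRead;
AudioFileReadPacketData(aqData->audioFile, false, &numBytesReadFromFile,
                        aqData->packetDesc, aqData->currentPacket,
                        &numPackets, inBuffer->mAudioData);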
Start playback function:
- (void)startPlaying{
playState.currentPacket = 0;
[self openFile];
UInt32 dataFormatSize = sizeof(playState.dataFormat);
AudioFileGetProperty(playState.audioFile, kAudioFilePropertyDataFormat, &dataFormatSize, &playState.dataFormat);
OSStatus status;
status = AudioQueueNewOutput(&playState.dataFormat, AudioOutputCallback, &playState, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &playState.queue);
if (status == 0) {
playState.isPlaying = true;
UInt32 maxPacketSize;
UInt32 propertySize = sizeof(maxPacketSize);
AudioFileGetProperty(playState.audioFile,kAudioFilePropertyPacketSizeUpperBound,&propertySize,&maxPacketSize);
DeriveBufferSize(playState.dataFormat, maxPacketSize, 0.5, &playState.bufferByteSize, &playState.numPacketsToRead);
bool isFormatVBR = (playState.dataFormat.mBytesPerPacket == 0 ||playState.dataFormat.mFramesPerPacket == 0);
if (isFormatVBR) {
playState.packetDesc = (AudioStreamPacketDescription*) malloc (playState.numPacketsToRead * sizeof(AudioStreamPacketDescription));
} else {
playState.packetDesc = NULL;
}
//Set a Magic Cookie for a Playback Audio Queue
MyCopyEncoderCookieToQueue(playState.audioFile, playState.queue);
for (int i = 0; i < NUM_BUFFERS; i++) {
AudioQueueAllocateBuffer(playState.queue, playState.bufferByteSize, &playState.buffers[i]);
playState.buffers[i]->mAudioDataByteSize = playState.bufferByteSize;
AudioOutputCallback(&playState, playState.queue, playState.buffers[i]);
}
Float32 gain = 10.0;
AudioQueueSetParameter(playState.queue, kAudioQueueParam_Volume, gain);
AudioQueueStart(playState.queue, NULL);
}
}
This kind of incompatibility has really upset me for several days. Feel free to ask if you need more details. I hope someone can help me out. Thanks a lot.
I allocate the buffers and start the audio queue like this:
// allocate the buffers and prime the queue with some data before starting
AudioQueueBufferRef buffers[kNumberPlaybackBuffers];
isDone = false;
packetPosition = 0;
int i;
for (i = 0; i < kNumberPlaybackBuffers; ++i)
{
CheckError(AudioQueueAllocateBuffer(queue, packetBufferSize, &buffers[i]), "AudioQueueAllocateBuffer failed");
// manually invoke callback to fill buffers with data
MyAQOutputCallBack((__bridge void *)(self), queue, buffers[i]);
// EOF (the entire file's contents fit in the buffers)
if (isDone)
break;
}
// start the queue. this function returns immediately and begins
// invoking the callback, as needed, asynchronously.
CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");
The above code successfully calls the output callback function:
#pragma mark playback callback function
static void MyAQOutputCallBack(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
// this is called by the audio queue when it has finished decoding our data.
// The buffer is now free to be reused.
printf("MyAQOutputCallBack...\n");
printf("######################\n");
AnotherPlayer* player = (__bridge AnotherPlayer *)inUserData;
[player handleBufferCompleteForQueue:inAQ buffer:inCompleteAQBuffer];
}
which calls an Objective-C method where I fill the buffers and enqueue them.
- (void)handleBufferCompleteForQueue:(AudioQueueRef)inAQ
buffer:(AudioQueueBufferRef)inBuffer
{
BOOL isBufferFilled=NO;
size_t bytesFilled=0; // how many bytes have been filled
size_t packetsFilled=0; // how many packets have been filled
size_t bufSpaceRemaining;
while (isBufferFilled==NO ) {
if (currentlyReadingBufferIndex<[sharedCache.baseAudioCache count]) {
printf("currentlyReadingBufferIndex %i\n",currentlyReadingBufferIndex);
//loop thru untill buffer is enqued
if (sharedCache.baseAudioCache) {
NSMutableDictionary *myDict= [[NSMutableDictionary alloc] init];
myDict=[sharedCache.baseAudioCache objectAtIndex:currentlyReadingBufferIndex];
//UInt32 inNumberBytes = [[myDict objectForKey:@"inNumberBytes"] intValue];
UInt32 inNumberPackets = [[myDict objectForKey:@"inNumberPackets"] intValue];
NSData *convert = [myDict objectForKey:@"inInputData"];
const void *inInputData=(const char *)[convert bytes];
//AudioStreamPacketDescription *inPacketDescriptions;
AudioStreamPacketDescription *inPacketDescriptions= malloc(sizeof(AudioStreamPacketDescription));
NSNumber *mStartOffset = [myDict objectForKey:@"mStartOffset"];
NSNumber *mDataByteSize = [myDict objectForKey:@"mDataByteSize"];
NSNumber *mVariableFramesInPacket = [myDict objectForKey:@"mVariableFramesInPacket"];
inPacketDescriptions->mVariableFramesInPacket=[mVariableFramesInPacket intValue];
inPacketDescriptions->mStartOffset=[mStartOffset intValue];
inPacketDescriptions->mDataByteSize=[mDataByteSize intValue];
for (int i = 0; i < inNumberPackets; ++i)
{
SInt64 packetOffset = [mStartOffset intValue];
SInt64 packetSize = [mDataByteSize intValue];
printf("packetOffset %lli\n",packetOffset);
printf("packetSize %lli\n",packetSize);
currentlyReadingBufferIndex++;
if (packetSize > packetBufferSize)
{
//[self failWithErrorCode:AS_AUDIO_BUFFER_TOO_SMALL];
}
bufSpaceRemaining = packetBufferSize - bytesFilled;
printf("bufSpaceRemaining %zu\n",bufSpaceRemaining);
// if the space remaining in the buffer is not enough for this packet, then enqueue the buffer.
if (bufSpaceRemaining < packetSize)
{
CheckError(AudioQueueEnqueueBuffer(inAQ,
inBuffer,
packetsFilled,
packetDescs), "AudioQueueEnqueueBuffer failed");
// OSStatus status = AudioQueueEnqueueBuffer(inAQ,
// fillBuf,
// packetsFilled,
// packetDescs);
// if (status) {
// // This is also not called.
// NSLog(@"Error enqueueing buffer %d", (int)status);
// }
//printf("bufSpaceRemaining < packetSize\n");
//go to the next item on keepbuffer array
isBufferFilled=YES;
}
@synchronized(self)
{
//
// If there was some kind of issue with enqueueBuffer and we didn't
// make space for the new audio data then back out
//
if (bytesFilled + packetSize > packetBufferSize)
{
return;
}
// copy data to the audio queue buffer
//error -66686 refers to
//kAudioQueueErr_BufferEmpty = -66686
memcpy((char*)inBuffer->mAudioData + bytesFilled, (const char*)inInputData + packetOffset, packetSize);
//memcpy(inBuffer->mAudioData, (const char*)inInputData + packetOffset, packetSize);
// fill out packet description
packetDescs[packetsFilled] = inPacketDescriptions[0];
packetDescs[packetsFilled].mStartOffset = bytesFilled;
bytesFilled += packetSize;
packetsFilled += 1;
free(inPacketDescriptions);
}
// if that was the last free packet description, then enqueue the buffer.
size_t packetsDescsRemaining = kAQMaxPacketDescs - packetsFilled;
if (packetsDescsRemaining == 0) {
CheckError(AudioQueueEnqueueBuffer(inAQ,
inBuffer,
packetsFilled,
packetDescs), "AudioQueueEnqueueBuffer failed");
printf("if that was the last free packet description, then enqueue the buffer\n");
//go to the next item on keepbuffer array
isBufferFilled=YES;
}
}
}
}
else
{
isDone=YES;
}
}
}
Here, once the buffer is full, I call AudioQueueEnqueueBuffer. I thought the problem might be the memcpy(), but through breakpoints it seems there is some data in the buffer; it still gives me
Error: AudioQueueEnqueueBuffer failed (-66686)
which in AudioQueue.h means
kAudioQueueErr_BufferEmpty = -66686,
Filling in mAudioDataByteSize of the buffer before enqueuing it solved the problem:
inBuffer->mAudioDataByteSize = bytesFilled;
CheckError(AudioQueueEnqueueBuffer(inAQ,
inBuffer,
packetsFilled,
packetDescs), "AudioQueueEnqueueBuffer failed");
I have the following code, which opens an AudioQueue to play back 16-bit PCM @ 44,100 Hz. It has a very odd quirk: once the initial buffers are filled, it plays back really quickly, then gets "choppy" as it waits for more bytes to come over the network.
So either I am somehow messing up the code that copies a subrange of data into the buffer, or I have told the AudioQueue to play back faster than the data comes over the network.
Anybody have any ideas? I've been stuck for a few days now.
//
// Created by Benjamin St Pierre on 15-01-02.
// Copyright (c) 2015 Lightning Strike Solutions. All rights reserved.
//
#import <MacTypes.h>
#import "MediaPlayer.h"
@implementation MediaPlayer
@synthesize sampleQueue;
void OutputBufferCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
//Cast userData to MediaPlayer Objective-C class instance
MediaPlayer *mediaPlayer = (__bridge MediaPlayer *) inUserData;
// Fill buffer.
[mediaPlayer fillAudioBuffer:inBuffer];
// Re-enqueue buffer.
OSStatus err = AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
if (err != noErr)
NSLog(@"AudioQueueEnqueueBuffer() error %d", (int) err);
}
- (void)fillAudioBuffer:(AudioQueueBufferRef)inBuffer {
if (self.currentAudioPiece == nil || self.currentAudioPiece.duration >= self.currentAudioPieceIndex) {
//grab latest sample from sample queue
self.currentAudioPiece = sampleQueue.dequeue;
self.currentAudioPieceIndex = 0;
}
//Check for empty sample queue
if (self.currentAudioPiece == nil) {
NSLog(@"Empty sample queue");
memset(inBuffer->mAudioData, 0, kBufferByteSize);
return;
}
UInt32 bytesToRead = inBuffer->mAudioDataBytesCapacity;
while (bytesToRead > 0) {
UInt32 maxBytesFromCurrentPiece = self.currentAudioPiece.audioData.length - self.currentAudioPieceIndex;
//Take the min of what the current piece can provide OR what is needed to be read
UInt32 bytesToReadNow = MIN(maxBytesFromCurrentPiece, bytesToRead);
NSData *subRange = [self.currentAudioPiece.audioData subdataWithRange:NSMakeRange(self.currentAudioPieceIndex, bytesToReadNow)];
//Copy what you can before continuing loop
memcpy(inBuffer->mAudioData, subRange.bytes, subRange.length);
bytesToRead -= bytesToReadNow;
if (bytesToReadNow == maxBytesFromCurrentPiece) {
@synchronized (sampleQueue) {
self.currentAudioPiece = self.sampleQueue.dequeue;
self.currentAudioPieceIndex = 0;
}
} else {
self.currentAudioPieceIndex += bytesToReadNow;
}
}
inBuffer->mAudioDataByteSize = kBufferByteSize;
}
- (void)startMediaPlayer {
AudioStreamBasicDescription streamFormat;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mSampleRate = 44100.0;
streamFormat.mChannelsPerFrame = 2;
streamFormat.mBytesPerFrame = 4;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerPacket = 4;
streamFormat.mBitsPerChannel = 16;
streamFormat.mReserved = 0;
streamFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
// New input queue
OSStatus err = AudioQueueNewOutput(&streamFormat, OutputBufferCallback, (__bridge void *) self, nil, nil, 0, &outputQueue);
if (err != noErr) {
NSLog(@"AudioQueueNewOutput() error: %d", (int) err);
}
int i;
// Enqueue buffers
AudioQueueBufferRef buffer;
for (i = 0; i < kNumberBuffers; i++) {
err = AudioQueueAllocateBuffer(outputQueue, kBufferByteSize, &buffer);
memset(buffer->mAudioData, 0, kBufferByteSize);
buffer->mAudioDataByteSize = kBufferByteSize;
if (err == noErr) {
err = AudioQueueEnqueueBuffer(outputQueue, buffer, 0, nil);
if (err != noErr) NSLog(@"AudioQueueEnqueueBuffer() error: %d", (int) err);
} else {
NSLog(@"AudioQueueAllocateBuffer() error: %d", (int) err);
return;
}
}
// Start queue
err = AudioQueueStart(outputQueue, nil);
if (err != noErr) NSLog(@"AudioQueueStart() error: %d", (int) err);
}
@end
I'm going to take a swag here and say that you're getting choppy playback because you aren't advancing the write pointer for your data. I don't know Objective-C well enough to be sure of the syntax, but here's what I think you need to add (note that mAudioData is a const pointer field, so keep a separate write offset instead of reassigning it):
UInt32 bytesWritten = 0;
while (bytesToRead > 0) {
    ....
    memcpy((char *)inBuffer->mAudioData + bytesWritten, subRange.bytes, subRange.length);
    bytesWritten += bytesToReadNow; // move the write offset
    bytesToRead -= bytesToReadNow;
    ...
}
As you can see from the code, within my callback I extract the audio data, place it into an NSData object, then send that off to another class that uploads it to the server. This all works, meaning the server receives and plays the audio data. HOWEVER, there is a clicking or tapping noise between the buffers. I am hoping someone can show me what is causing that and how it can be fixed.
I have read other related posts; however, they all seemed to refer to using only 1 buffer, with adding more being the fix. I am using 3 buffers and have tried adjusting that number, which did not fix it.
AQRecorder.mm
#include "AQRecorder.h"
RestClient * restClient;
NSData* data;
// ____________________________________________________________________________________
// Determine the size, in bytes, of a buffer necessary to represent the supplied number
// of seconds of audio data.
int AQRecorder::ComputeRecordBufferSize(const AudioStreamBasicDescription *format, float seconds)
{
int packets, frames, bytes = 0;
try {
frames = (int)ceil(seconds * format->mSampleRate);
if (format->mBytesPerFrame > 0)
bytes = frames * format->mBytesPerFrame;
else {
UInt32 maxPacketSize;
if (format->mBytesPerPacket > 0)
maxPacketSize = format->mBytesPerPacket; // constant packet size
else {
UInt32 propertySize = sizeof(maxPacketSize);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize,
&propertySize), "couldn't get queue's maximum output packet size");
}
if (format->mFramesPerPacket > 0)
packets = frames / format->mFramesPerPacket;
else
packets = frames; // worst-case scenario: 1 frame in a packet
if (packets == 0) // sanity check
packets = 1;
bytes = packets * maxPacketSize;
}
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
return 0;
}
return bytes;
}
// ____________________________________________________________________________________
// AudioQueue callback function, called when an input buffer has been filled.
void AQRecorder::MyInputBufferHandler( void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
AQRecorder *aqr = (AQRecorder *)inUserData;
try {
if (inNumPackets > 0) {
// write packets to file
// XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
// inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
// "AudioFileWritePackets failed");
aqr->mRecordPacket += inNumPackets;
// int numBytes = inBuffer->mAudioDataByteSize;
// SInt8 *testBuffer = (SInt8*)inBuffer->mAudioData;
//
// for (int i=0; i < numBytes; i++)
// {
// SInt8 currentData = testBuffer[i];
// printf("Current data in testbuffer is %d", currentData);
//
// NSData * temp = [NSData dataWithBytes:currentData length:sizeof(currentData)];
// }
data=[[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]retain];
[restClient uploadAudioData:data url:nil];
}
// if we're not stopping, re-enqueue the buffer so that it gets filled again
if (aqr->IsRunning())
XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
}
AQRecorder::AQRecorder()
{
mIsRunning = false;
mRecordPacket = 0;
data = [[NSData alloc]init];
restClient = [[RestClient sharedManager]retain];
}
AQRecorder::~AQRecorder()
{
AudioQueueDispose(mQueue, TRUE);
AudioFileClose(mRecordFile);
if (mFileName){
CFRelease(mFileName);
}
[restClient release];
[data release];
}
// ____________________________________________________________________________________
// Copy a queue's encoder's magic cookie to an audio file.
void AQRecorder::CopyEncoderCookieToFile()
{
UInt32 propertySize;
// get the magic cookie, if any, from the converter
OSStatus err = AudioQueueGetPropertySize(mQueue, kAudioQueueProperty_MagicCookie, &propertySize);
// we can get a noErr result and also a propertySize == 0
// -- if the file format does support magic cookies, but this file doesn't have one.
if (err == noErr && propertySize > 0) {
Byte *magicCookie = new Byte[propertySize];
UInt32 magicCookieSize;
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "get audio converter's magic cookie");
magicCookieSize = propertySize; // the converter lies and tells us the wrong size
// now set the magic cookie on the output file
UInt32 willEatTheCookie = false;
// the converter wants to give us one; will the file take it?
err = AudioFileGetPropertyInfo(mRecordFile, kAudioFilePropertyMagicCookieData, NULL, &willEatTheCookie);
if (err == noErr && willEatTheCookie) {
err = AudioFileSetProperty(mRecordFile, kAudioFilePropertyMagicCookieData, magicCookieSize, magicCookie);
XThrowIfError(err, "set audio file's magic cookie");
}
delete[] magicCookie;
}
}
void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
{
memset(&mRecordFormat, 0, sizeof(mRecordFormat));
UInt32 size = sizeof(mRecordFormat.mSampleRate);
XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareSampleRate,
&size,
&mRecordFormat.mSampleRate), "couldn't get hardware sample rate");
//override the sample rate to 8 kHz instead of the device sample rate
mRecordFormat.mSampleRate = 8000.0;
size = sizeof(mRecordFormat.mChannelsPerFrame);
XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareInputNumberChannels,
&size,
&mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");
// mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mFormatID = inFormatID;
if (inFormatID == kAudioFormatLinearPCM)
{
// if we want pcm, default to signed 16-bit little-endian
mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mBitsPerChannel = 16;
mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
mRecordFormat.mFramesPerPacket = 1;
}
if (inFormatID == kAudioFormatULaw) {
NSLog(@"is ulaw");
mRecordFormat.mSampleRate = 8000.0;
mRecordFormat.mFormatFlags = 0;
mRecordFormat.mFramesPerPacket = 1;
mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mBitsPerChannel = 8;
mRecordFormat.mBytesPerPacket = 1;
mRecordFormat.mBytesPerFrame = 1;
}
}
NSString * GetDocumentDirectory(void)
{
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
return basePath;
}
void AQRecorder::StartRecord(CFStringRef inRecordFile)
{
int i, bufferByteSize;
UInt32 size;
CFURLRef url;
try {
mFileName = CFStringCreateCopy(kCFAllocatorDefault, inRecordFile);
// specify the recording format
SetupAudioFormat(kAudioFormatULaw /*kAudioFormatLinearPCM*/);
// create the queue
XThrowIfError(AudioQueueNewInput(
&mRecordFormat,
MyInputBufferHandler,
this /* userData */,
NULL /* run loop */, NULL /* run loop mode */,
0 /* flags */, &mQueue), "AudioQueueNewInput failed");
// get the record format back from the queue's audio converter --
// the file may require a more specific stream description than was necessary to create the encoder.
mRecordPacket = 0;
size = sizeof(mRecordFormat);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
&mRecordFormat, &size), "couldn't get queue's format");
NSString *basePath = GetDocumentDirectory();
NSString *recordFile = [basePath /*NSTemporaryDirectory()*/ stringByAppendingPathComponent: (NSString*)inRecordFile];
url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);
// create the audio file
XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat, kAudioFileFlags_EraseFile,
&mRecordFile), "AudioFileCreateWithURL failed");
CFRelease(url);
// copy the cookie first to give the file object as much info as we can about the data going in
// not necessary for pcm, but required for some compressed audio
CopyEncoderCookieToFile();
// allocate and enqueue buffers
bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds); // enough bytes for half a second
for (i = 0; i < kNumberRecordBuffers; ++i) {
XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
"AudioQueueAllocateBuffer failed");
XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
"AudioQueueEnqueueBuffer failed");
}
// start the queue
mIsRunning = true;
XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
}
catch (CAXException &e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
catch (...) {
fprintf(stderr, "An unknown error occurred\n");
}
}
void AQRecorder::StopRecord()
{
// end recording
mIsRunning = false;
// XThrowIfError(AudioQueueReset(mQueue), "AudioQueueStop failed");
XThrowIfError(AudioQueueStop(mQueue, true), "AudioQueueStop failed");
// a codec may update its cookie at the end of an encoding session, so reapply it to the file now
CopyEncoderCookieToFile();
if (mFileName)
{
CFRelease(mFileName);
mFileName = NULL;
}
AudioQueueDispose(mQueue, true);
AudioFileClose(mRecordFile);
}
I changed my #define kBufferDurationSeconds from .5 to 5.0 and, although the clicking is still there, it is a lot less noticeable.
Please, if you have suggestions or an answer, still post them, as this is not a fix, merely a workaround that is somewhat better than before.
I also tried appending the data to an accumulator several times before sending it to the server. This also seems to have helped; a rough sketch of that idea follows.
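For anyone trying the same workaround, coalescing a few callbacks' worth of audio before each upload could look roughly like this inside MyInputBufferHandler (MRC-style to match this file; pending and kSendThreshold are illustrative names and values):
// Hypothetical coalescing: accumulate ~kSendThreshold bytes before uploading.
static NSMutableData *pending = nil;
static const NSUInteger kSendThreshold = 32768;
if (pending == nil) pending = [[NSMutableData alloc] init]; // lives for the app lifetime
[pending appendBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize];
if (pending.length >= kSendThreshold) {
    NSData *chunk = [[pending copy] autorelease];
    [restClient uploadAudioData:chunk url:nil];
    [pending setLength:0];
}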