I have the following code which I'm using to read the contents of a WAV file into an SInt16 array:
AudioBufferList *buffers;
UInt32 ablSize = offsetof(AudioBufferList, mBuffers) +
(sizeof(AudioBuffer) * 1);
buffers = malloc(ablSize);
buffers->mNumberBuffers = 1;
buffers->mBuffers[0].mNumberChannels = 1;
buffers->mBuffers[0].mDataByteSize = dataByteSize;
UInt32 dataSize = (UInt32)fileLengthFrames * sizeof(SInt16);
self.extractedSamples = malloc(dataSize);
self.extractedByteCount = dataByteSize;
UInt32 totalFramesRead = 0;
do {
UInt32 framesRead = (UInt32)fileLengthFrames - totalFramesRead;
buffers->mBuffers[0].mData = self.extractedSamples +
(totalFramesRead * sizeof(SInt16));
ExtAudioFileRead(eaf, &framesRead, buffers);
totalFramesRead += framesRead;
} while (totalFramesRead < fileLengthFrames);
free(buffers);
This is working fine for files of less than 0.5 seconds duration. But for a longer file I'm testing, the app crashes with a bad access error inside the do loop. For this file, dataByteSize is 60472, and at the start of the loop buffers->mBuffers[0].mDataByteSize is also 60472. But when the crash occurs, I see that buffers->mBuffers[0].mDataByteSize has changed to 57300, which is presumably why the crash is now occurring.
Anybody know how/why this value is changing in the middle of the loop? One guess I have is that I'm not properly retaining the AudioBufferList and the memory space for mDataByteSize is somehow getting overwritten.
Edit: When this code is run on the simulator with the same file, it works fine.
mDataByteSize should be set to framesRead * sizeof(SInt16) * channelCount before each call to ExtAudioFileRead. ExtAudioFileRead overwrites that field with the number of bytes it actually read, so once a shorter read happens the smaller value carries over into the next pass unless you reset it.
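A minimal sketch of that change, reusing the variable names from the question and assuming a mono, SInt16 client format (only the reset line is new; the rest mirrors the original loop):

UInt32 totalFramesRead = 0;
do {
    UInt32 framesRead = (UInt32)fileLengthFrames - totalFramesRead;
    // ExtAudioFileRead shrinks mDataByteSize to the byte count it actually
    // delivered, so reset it before every read.
    buffers->mBuffers[0].mDataByteSize = framesRead * sizeof(SInt16) * 1 /* channel */;
    buffers->mBuffers[0].mData = self.extractedSamples +
        (totalFramesRead * sizeof(SInt16));
    ExtAudioFileRead(eaf, &framesRead, buffers);
    totalFramesRead += framesRead;
} while (totalFramesRead < fileLengthFrames);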
Related
I'm writing a VoIP app that uses the "novocaine" library for recording and playback of sound. I set the sample rate to 8 kHz. This sample rate is set in novocaine in the AudioStreamBasicDescription of the audio unit and as the audio session property kAudioSessionProperty_PreferredHardwareSampleRate. I understand that setting the preferred hardware sample rate is no guarantee that the actual hardware sample rate will change, but it worked for all devices except the iPhone 6s and iPhone 6s+ (when the route is changed to the speaker). With the iPhone 6s(+) and the speaker route I receive 48 kHz sound from the microphone, so I need to somehow convert this 48 kHz sound to 8 kHz. In the documentation I found that AudioConverterRef can be used in this case, but I'm having trouble using it.
I use AudioConverterFillComplexBuffer for the sample rate conversion, but it always returns OSStatus -50 (one or more parameters passed to the function were not valid). This is how I use the audio converter:
// Setup AudioStreamBasicDescription for input
inputFormat.mSampleRate = 48000.0;
inputFormat.mFormatID = kAudioFormatLinearPCM;
inputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
inputFormat.mChannelsPerFrame = 1;
inputFormat.mBitsPerChannel = 8 * sizeof(float);
inputFormat.mFramesPerPacket = 1;
inputFormat.mBytesPerFrame = sizeof(float) * inputFormat.mChannelsPerFrame;
inputFormat.mBytesPerPacket = inputFormat.mBytesPerFrame * inputFormat.mFramesPerPacket;
// Setup AudioStreamBasicDescription for output
outputFormat.mSampleRate = 8000.0;
outputFormat.mFormatID = kAudioFormatLinearPCM;
outputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
outputFormat.mChannelsPerFrame = 1;
outputFormat.mBitsPerChannel = 8 * sizeof(float);
outputFormat.mFramesPerPacket = 1;
outputFormat.mBytesPerFrame = sizeof(float) * outputFormat.mChannelsPerFrame;
outputFormat.mBytesPerPacket = outputFormat.mBytesPerFrame * outputFormat.mFramesPerPacket;
// Create new instance of audio converter
AudioConverterNew(&inputFormat, &outputFormat, &converter);
// Set conversion quality
UInt32 tmp = kAudioConverterQuality_Medium;
AudioConverterSetProperty( converter, kAudioConverterCodecQuality,
sizeof( tmp ), &tmp );
AudioConverterSetProperty( converter, kAudioConverterSampleRateConverterQuality, sizeof( tmp ), &tmp );
// Get the size of the IO buffer(s)
UInt32 bufferSizeFrames = 0;
size = sizeof(UInt32);
AudioUnitGetProperty(self.inputUnit,
kAudioDevicePropertyBufferFrameSize,
kAudioUnitScope_Global,
0,
&bufferSizeFrames,
&size);
UInt32 bufferSizeBytes = bufferSizeFrames * sizeof(Float32);
// Allocate an AudioBufferList plus enough space for array of AudioBuffers
UInt32 propsize = offsetof(AudioBufferList, mBuffers[0]) + (sizeof(AudioBuffer) * outputFormat.mChannelsPerFrame);
// Malloc buffer lists
convertedInputBuffer = (AudioBufferList *)malloc(propsize);
convertedInputBuffer->mNumberBuffers = 1;
// Pre-malloc buffers for AudioBufferLists
convertedInputBuffer->mBuffers[0].mNumberChannels = outputFormat.mChannelsPerFrame;
convertedInputBuffer->mBuffers[0].mDataByteSize = bufferSizeBytes;
convertedInputBuffer->mBuffers[0].mData = malloc(bufferSizeBytes);
memset(convertedInputBuffer->mBuffers[0].mData, 0, bufferSizeBytes);
// Setup callback for converter
static OSStatus inputProcPtr(AudioConverterRef inAudioConverter,
UInt32* ioNumberDataPackets,
AudioBufferList* ioData,
AudioStreamPacketDescription* __nullable* __nullable outDataPacketDescription,
void* __nullable inUserData)
{
// Read data from buffer
}
// Perform actual sample rate conversion
AudioConverterFillComplexBuffer(converter, inputProcPtr, NULL, &numberOfFrames, convertedInputBuffer, NULL);
The inputProcPtr callback is never called. I tried setting different numbers of frames but still receive OSStatus -50.
1) Is using AudioConverterRef the correct way to do sample rate conversion, or could it be done a different way?
2) What is wrong with my conversion implementation?
Thank you all in advance
One problem is this:
AudioUnitGetProperty(self.inputUnit,
kAudioDevicePropertyBufferFrameSize,
kAudioUnitScope_Global,
0,
&bufferSizeFrames,
&size);
kAudioDevicePropertyBufferFrameSize is an OSX property, and doesn't exist on iOS. How is this code even compiling?
If you've somehow made it compile, check the return code from this function! I've got a feeling that it's failing, and bufferSizeFrames is zero. That would make AudioConverterFillComplexBuffer return -50 (kAudio_ParamError).
So on iOS, either pick a bufferSizeFrames yourself or base it on AVAudioSession's IOBufferDuration if you must.
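A rough sketch of that approach, assuming the AVAudioSession API (iOS 6+) is available; the session's reported sample rate and IO buffer duration are used to derive a frame count instead of the OS X-only property:

#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
double hardwareSampleRate = session.sampleRate;        // e.g. 48000 on an iPhone 6s speaker route
NSTimeInterval ioDuration = session.IOBufferDuration;  // duration of one IO buffer, in seconds
UInt32 bufferSizeFrames = (UInt32)ceil(hardwareSampleRate * ioDuration);
UInt32 bufferSizeBytes  = bufferSizeFrames * sizeof(Float32);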
Another problem: check your return codes. All of them!
e.g.
UInt32 tmp = kAudioConverterQuality_Medium;
AudioConverterSetProperty( converter, kAudioConverterCodecQuality,
sizeof( tmp ), &tmp );
I'm pretty sure there's no codec to speak of in LPCM-to-LPCM conversions, and that kAudioConverterQuality_Medium is not the right value to use with kAudioConverterCodecQuality in any case. I don't see how this call can succeed.
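As a hedged sketch of the error-checking habit (and of skipping the codec-quality property for a plain LPCM-to-LPCM converter), reusing the variable names from the question:

OSStatus err = AudioConverterNew(&inputFormat, &outputFormat, &converter);
if (err != noErr) {
    NSLog(@"AudioConverterNew failed: %d", (int)err);
    return;
}

// Only the sample rate converter quality applies here; there is no codec.
UInt32 quality = kAudioConverterQuality_Medium;
err = AudioConverterSetProperty(converter,
                                kAudioConverterSampleRateConverterQuality,
                                sizeof(quality), &quality);
if (err != noErr) {
    NSLog(@"Setting sample rate converter quality failed: %d", (int)err);
}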
I am converting from the following format:
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
_stereoGraphStreamFormat.mFormatID = kAudioFormatLinearPCM;
_stereoGraphStreamFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
_stereoGraphStreamFormat.mBytesPerPacket = four_bytes_per_float;
_stereoGraphStreamFormat.mFramesPerPacket = 1;
_stereoGraphStreamFormat.mBytesPerFrame = four_bytes_per_float;
_stereoGraphStreamFormat.mChannelsPerFrame = 2;
_stereoGraphStreamFormat.mBitsPerChannel = eight_bits_per_byte * four_bytes_per_float;
_stereoGraphStreamFormat.mSampleRate = 44100;
to the following format:
interleavedAudioDescription.mFormatID = kAudioFormatLinearPCM;
interleavedAudioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger;
interleavedAudioDescription.mChannelsPerFrame = 2;
interleavedAudioDescription.mBytesPerPacket = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mFramesPerPacket = 1;
interleavedAudioDescription.mBytesPerFrame = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
interleavedAudioDescription.mSampleRate = 44100;
Using the following code:
int32_t availableBytes = 0;
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
// If we have no data in the buffer, we simply return
if (availableBytes <= 0)
{
return;
}
// ========== Non-Interleaved to Interleaved (Plus Samplerate Conversion) =========
// Get the number of frames available
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
pcmOutputBuffer->mBuffers[0].mDataByteSize = frames * interleavedAudioDescription.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) { .self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = availableBytes };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
// ========== Buffering Of Interleaved Samples =========
// If we got converted frames back from the converter, we want to add it to a separate buffer
if (frames > 0)
{
// Make sure we have enough space in the buffer to store the new data
TPCircularBufferHead(&pcmCircularBuffer, &availableBytes);
if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
{
// Add the newly converted data to the buffer
TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, frames * interleavedAudioDescription.mBytesPerFrame);
}
else
{
printf("No Space in Buffer\n");
}
}
However, the output waveform is not what I expect. It should be a perfect sine wave, but instead it shows periodic dropouts.
I have been working on this for days now and just can’t seem to find where it is going wrong.
Can anyone see something that I might be missing?
Edit:
Buffer initialisation:
TPCircularBuffer pcmCircularBuffer;
static SInt16 pcmOutputBuf[BUFFER_SIZE];
pcmOutputBuffer = (AudioBufferList*)malloc(sizeof(AudioBufferList));
pcmOutputBuffer->mNumberBuffers = 1;
pcmOutputBuffer->mBuffers[0].mNumberChannels = 2;
pcmOutputBuffer->mBuffers[0].mData = pcmOutputBuf;
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
Complex input data proc:
static OSStatus complexInputDataProc(AudioConverterRef inAudioConverter,
UInt32 *ioNumberDataPackets,
AudioBufferList *ioData,
AudioStreamPacketDescription **outDataPacketDescription,
void *inUserData) {
struct complexInputDataProc_t *arg = (struct complexInputDataProc_t*)inUserData;
BroadcastingServices::MP3Encoder *self = (BroadcastingServices::MP3Encoder*)arg->self;
if ( arg->byteLength <= 0 )
{
*ioNumberDataPackets = 0;
return 100; //kNoMoreDataErr;
}
UInt32 framesAvailable = arg->byteLength / self->interleavedAudioDescription.mBytesPerFrame;
if (*ioNumberDataPackets > framesAvailable)
{
*ioNumberDataPackets = framesAvailable;
}
ioData->mBuffers[0].mData = arg->sourceL;
ioData->mBuffers[0].mDataByteSize = arg->byteLength;
ioData->mBuffers[1].mData = arg->sourceR;
ioData->mBuffers[1].mDataByteSize = arg->byteLength;
arg->byteLength = 0;
return noErr;
}
I see a few things that raise a red flag.
1) As mentioned in a comment above, you are overwriting availableBytes for the left input with the value from the right:
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
If the two input streams are changing asynchronously to this code then most certainly you have a race condition.
2) Truncation errors: availableBytes is not necessarily a multiple of bytes per frame. If it isn't, the following bit of code could cause you to consume more bytes than you actually converted.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
...
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
...
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
3) If the output buffer is not ready to consume all of the input you just throw the input buffer away. That happens in this code.
if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
{
...
}
else
{
printf("No Space in Buffer\n");
}
I'd be really curious whether you're seeing that print output.
Here's how I would suggest doing it. It's going to be pseudo-code-ish since I don't have what's necessary to compile and test it.
int32_t availableBytesInL = 0;
int32_t availableBytesInR = 0;
int32_t availableBytesOut = 0;
// figure out how many bytes are available in each buffer.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytesInL);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytesInR);
TPCircularBufferHead(&pcmCircularBuffer, &availableBytesOut);
// figure out how many full frames are available
UInt32 framesInL = availableBytesInL / mInputFormat.mBytesPerFrame;
UInt32 framesInR = availableBytesInR / mInputFormat.mBytesPerFrame;
UInt32 framesOut = availableBytesOut / interleavedAudioDescription.mBytesPerFrame;
// figure out how many frames to process this time.
UInt32 frames = min(min(framesInL, framesInR), framesOut);
if (frames == 0)
return;
int32_t bytesConsumed = frames * mInputFormat.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) {
.self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = bytesConsumed };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
int32_t bytesProduced = frames * interleavedAudioDescription.mBytesPerFrame;
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), bytesConsumed);
TPCircularBufferConsume(inputBufferR(), bytesConsumed);
TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, bytesProduced);
Basically what I've done here is figure out up front how many frames should be processed, making sure I'm only processing as many frames as the output buffer can handle. If it were me I'd also add some checking for buffer underruns on the output and buffer overruns on the input. Finally, I'm not exactly sure of the semantics of AudioConverterFillComplexBuffer with respect to the frame parameter that is passed in and out. It's conceivable that the number of frames out would be less or more than the number of frames in, although since you're not doing sample rate conversion that's probably not going to happen. I've attempted to account for that condition by assigning bytesProduced after the conversion.
Hope this helps. If not, you have two other clues: one is that the dropouts are periodic, and two is that the dropouts all look to be about the same size. If you can figure out how many samples each one is, then you can look for those numbers in your code.
I don't think your output buffer, pcmCircularBuffer, is big enough.
Try replacing
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
with
TPCircularBufferInit(&pcmCircularBuffer, sizeof(pcmOutputBuf));
Even if that is the solution, I think there are some problems with your code. I don't know exactly what you're doing (I guess encoding MP3, which by itself is an uphill battle on iOS; why not use hardware AAC?), but unless you have realtime demands on both input and output, why use ring buffers at all? Also, I recommend putting units in names to visually catch frame/byte size mismatches, e.g. BUFFER_SIZE_IN_FRAMES, as in the sketch below.
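A minimal illustration of that naming convention (the sizes here are hypothetical, not taken from the question):

#define CHANNELS_PER_FRAME     2
#define BYTES_PER_FRAME        (sizeof(SInt16) * CHANNELS_PER_FRAME)
#define BUFFER_SIZE_IN_FRAMES  4096
#define BUFFER_SIZE_IN_BYTES   (BUFFER_SIZE_IN_FRAMES * BYTES_PER_FRAME)

static SInt16 pcmOutputBuf[BUFFER_SIZE_IN_FRAMES * CHANNELS_PER_FRAME];
// The unit is visible at the call site, so a frames/bytes mix-up stands out.
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE_IN_BYTES);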
If it's not the solution, then I want to see the sine generator.
I've been at this for a few days now. I'm not very familiar with the Audio Unit layer of the framework. Could someone point me to a full example of how I can let the user record and then write the file on the fly every x seconds? For example, the user presses record; every 10 seconds I want to write to a file; on the 11th second it writes to the next file, and on the 21st second it's the same thing. So when I record 25 seconds' worth of audio, it produces 3 different files.
I've tried this with AVCapture, but it produced clicks and pops in the middle. I've read up on it; it is due to the milliseconds between the read and write operations. I've tried Audio Queue Services, but knowing the app I'm working on, I'll need full control over the audio layer, so I've decided to go with Audio Unit.
I think I'm getting closer... still pretty lost. I ended up using The Amazing Audio Engine (TAAE). I'm now looking at AEAudioReceiver; my callback code looks like this. I think the logic is right, but I don't think it's implemented correctly.
The task at hand: Record ~5-second segments in AAC format.
The attempt: Use the AEAudioReceiver callback and store the AudioBufferList in the circular buffer. Track the number of seconds of audio received in the recorder class; once it passes the 5-second mark (it can be a little bit over, but not 6 seconds), call an Obj-C method to write the file using AEAudioFileWriter.
The outcome: It didn't work; the recordings sounded very slow, with lots of constant noise. I can hear some of the recorded audio, so I know some data is there, but it's like I'm losing a lot of it. I'm not even sure how to debug this (I'll continue to try, but I'm pretty lost at the moment).
Another item is converting to AAC: do I write the file first in PCM format and then convert to AAC, or is it possible to convert just the audio segment to AAC?
Thanks ahead for helping!
----- Circular buffer init -----
//trying to get 5 seconds audio, how do I know what the length is if I don't know the frame size yet? and is that even the right question to ask?
TPCircularBufferInit(&_buffer, 1024 * 256);
----- AEAudioReceiver callback ------
static void receiverCallback(__unsafe_unretained MyAudioRecorder *THIS,
__unsafe_unretained AEAudioController *audioController,
void *source,
const AudioTimeStamp *time,
UInt32 frames,
AudioBufferList *audio) {
//store the audio into the buffer
TPCircularBufferCopyAudioBufferList(&THIS->_buffer, audio, time, kTPCircularBufferCopyAll, NULL);
//increase the time interval to track by THIS
THIS.numberOfSecondInCurrentRecording += AEConvertFramesToSeconds(THIS.audioController, frames);
//if number of seconds passed an interval of 5 seconds, than write the last 5 seconds of the buffer to a file
if (THIS.numberOfSecondInCurrentRecording > 5 * THIS->_currentSegment + 1) {
NSLog(#"Segment %d is full, writing file", THIS->_currentSegment);
[THIS writeBufferToFile];
//segment tracking variables
THIS->_numberOfReceiverLoop = 0;
THIS.lastTimeStamp = nil;
THIS->_currentSegment += 1;
} else {
THIS->_numberOfReceiverLoop += 1;
}
// Do something with 'audio'
if (!THIS.lastTimeStamp) {
THIS.lastTimeStamp = (AudioTimeStamp *)time;
}
}
---- Writing to file (method inside of the MyAudioRecorderClass) ----
- (void)writeBufferToFileHandler {
NSString *documentsFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)
objectAtIndex:0];
NSString *filePath = [documentsFolder stringByAppendingPathComponent:[NSString stringWithFormat:@"Segment_%d.aiff", _currentSegment]];
NSError *error = nil;
//setup audio writer, should the buffer be converted to aac first or save the file than convert; and how the heck do you do that?
AEAudioFileWriter *writeFile = [[AEAudioFileWriter alloc] initWithAudioDescription:_audioController.inputAudioDescription];
[writeFile beginWritingToFileAtPath:filePath fileType:kAudioFileAIFFType error:&error];
if (error) {
NSLog(#"Error in init. the file: %#", error);
return;
}
int i = 1;
//loop to write all the AudioBufferLists that is in the Circular Buffer; retrieve the ones based off of the _lastTimeStamp; but I had it in NULL too and worked the same way.
while (1) {
//NSLog(#"Processing buffer file list for segment [%d] and buffer index [%d]", _currentSegment, i);
i += 1;
// Discard any buffers with an incompatible format, in the event of a format change
AudioBufferList *nextBuffer = TPCircularBufferNextBufferList(&_buffer, _lastTimeStamp);
Float32 *frame = (Float32*) &nextBuffer->mBuffers[0].mData;
//if buffer runs out, than we are done writing it and exit loop to close the file
if ( !nextBuffer ) {
NSLog(#"Ran out of frames, there were [%d] AudioBufferList", i - 1);
break;
}
//Adding audio using AudioFileWriter, is the length correct?
OSStatus status = AEAudioFileWriterAddAudio(writeFile, nextBuffer, sizeof(nextBuffer->mBuffers[0].mDataByteSize));
if (status) {
NSLog(#"Writing Error? %d", status);
}
//consume/clear the buffer
TPCircularBufferConsumeNextBufferList(&_buffer);
}
//close the file and hope it worked
[writeFile finishWriting];
}
----- Audio Controller AudioStreamBasicDescription ------
//interleaved16BitStereoAudioDescription
AudioStreamBasicDescription audioDescription;
memset(&audioDescription, 0, sizeof(audioDescription));
audioDescription.mFormatID = kAudioFormatLinearPCM;
audioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
audioDescription.mChannelsPerFrame = 2;
audioDescription.mBytesPerPacket = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
audioDescription.mFramesPerPacket = 1;
audioDescription.mBytesPerFrame = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
audioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
audioDescription.mSampleRate = 44100.0;
I use the ExtAudioFileRead function to load an audio file into memory, but I always get an error with code -50. That means I passed one or more wrong parameters to the function, but I have no idea which parameter is wrong.
The audio file's data format is ALAC, with a 44.1 kHz sample rate and 2 channels.
My code is shown below:
ExtAudioFileRef recordFile;
OSStatus error = noErr;
error = ExtAudioFileOpenURL((CFURLRef)file, &recordFile);
checkError(error, "open file");
SInt64 frameCount;
UInt32 size = sizeof(frameCount);
error = ExtAudioFileGetProperty(recordFile, kExtAudioFileProperty_FileLengthFrames, &size, &frameCount);
checkError(error, "get frameTotlal");
soundStruct *sound = &_sound;
sound->frameCount = frameCount;
sound->isStereo = true;
sound->audioDataLeft = (SInt16 *)calloc(frameCount, sizeof(SInt16));
sound->audioDataRight = (SInt16 *)calloc(frameCount, sizeof(SInt16));
AudioStreamBasicDescription desc;
UInt32 descSize = sizeof(desc);
error = ExtAudioFileGetProperty(recordFile, kExtAudioFileProperty_FileDataFormat, &descSize, &desc);
[self printASBD:desc];
UInt32 channels = desc.mChannelsPerFrame;
error = ExtAudioFileSetProperty(recordFile, kExtAudioFileProperty_ClientDataFormat, sizeof(inFormat), &inFormat);
AudioBufferList *bufferList;
bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (channels - 1));
AudioBuffer emptyBuff = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channels; arrayIndex ++) {
bufferList->mBuffers[arrayIndex] = emptyBuff;
}
bufferList->mBuffers[0].mData = sound->audioDataLeft;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = frameCount * sizeof(SInt16);
if (channels == 2) {
bufferList->mBuffers[1].mData = sound->audioDataRight;
bufferList->mBuffers[1].mNumberChannels = 1;
bufferList->mBuffers[1].mDataByteSize = frameCount * sizeof(SInt16);
bufferList->mNumberBuffers = 2;
}
UInt32 count = (UInt32)frameCount;
error = ExtAudioFileRead(recordFile, &count, bufferList);
checkError(error, "reading"); // Get a -50 error
free(bufferList);
ExtAudioFileDispose(recordFile);
Good question.
This error happened to me when I called ExtAudioFileRead on a MONO file while using a STEREO client data format in the call to ExtAudioFileSetProperty.
I don't think ExtAudioFileRead automatically upconverts mono files to stereo; if there is a mismatch there, I think it fails with this -50 error.
Either make the mono file stereo, or set inFormat.mChannelsPerFrame = 1 for the mono files.
Remember, if you don't upconvert, you must account for the mono files in your audio render function by writing the L/R channels from the single mono channel of data.
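A rough sketch of the second option, assuming a 16-bit, non-interleaved client format like the one implied by the two single-channel buffers in the question; the channel count is taken from the file's own data format (desc) instead of being hard-coded:

AudioStreamBasicDescription inFormat = {0};
inFormat.mFormatID         = kAudioFormatLinearPCM;
inFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked |
                             kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsNonInterleaved;
inFormat.mSampleRate       = desc.mSampleRate;
inFormat.mChannelsPerFrame = desc.mChannelsPerFrame;   // 1 for mono files, 2 for stereo
inFormat.mBitsPerChannel   = 8 * sizeof(SInt16);
inFormat.mFramesPerPacket  = 1;
inFormat.mBytesPerFrame    = sizeof(SInt16);           // per channel, since non-interleaved
inFormat.mBytesPerPacket   = inFormat.mBytesPerFrame;

error = ExtAudioFileSetProperty(recordFile, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(inFormat), &inFormat);
checkError(error, "set client data format");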
I am trying to access the raw data for an audio file on the iPhone/iPad. I have the following code, which is a basic start down the path I need. However, I am stumped as to what to do once I have an AudioBuffer.
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:urlAsset error:nil];
AVAssetReaderTrackOutput *assetReaderOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[urlAsset tracks] objectAtIndex:0] outputSettings:nil];
[assetReader addOutput:assetReaderOutput];
[assetReader startReading];
CMSampleBufferRef ref;
NSArray *outputs = assetReader.outputs;
AVAssetReaderOutput *output = [outputs objectAtIndex:0];
int y = 0;
while (ref = [output copyNextSampleBuffer]) {
AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for (y=0; y<audioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
SInt16 *frames = audioBuffer.mData;
for(int i = 0; i < 24000; i++) { // This sometimes crashes
Float32 currentFrame = frames[i] / 32768.0f;
}
}
}
Essentially I don't know how to tell how many frames each buffer contains, so I can't reliably extract the data from them. I am new to working with raw audio data, so I'm open to any suggestions on how to best read the mData property of the AudioBuffer struct. I also haven't done much with void pointers in the past, so help with that in this context would be great too!
audioBuffer.mDataByteSize tells you the size of the buffer. Did you know this? Just in case you didn't: you can't have looked at the declaration of struct AudioBuffer. You should always look at the header files as well as the docs.
For mDataByteSize to make sense you must know the format of the data. The count of output values is mDataByteSize/sizeof(outputType). However, you seem confused about the format; you must have specified it somewhere. First you treat it as a 16-bit signed int:
SInt16 *frames = audioBuffer.mData;
then you treat it as a 32-bit float:
Float32 currentFrame = frames[i] / 32768.0f;
In between you assume that there are 24000 values; of course this will crash if there aren't at least 24000 16-bit values. Also, you refer to the data as 'frames' but what you really mean is samples. Each value you call 'currentFrame' is one sample of the audio. 'Frame' would typically refer to a block of samples like .mData.
So, assuming the data format is 32-bit float (and please note, I have no idea if it is; it could be 8-bit int or 32-bit fixed for all I know):
for( int y=0; y<audioBufferList.mNumberBuffers; y++ )
{
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
int bufferSize = audioBuffer.mDataByteSize / sizeof(Float32);
Float32 *frame = audioBuffer.mData;
for( int i=0; i<bufferSize; i++ ) {
Float32 currentSample = frame[i];
}
}
Note: sizeof(Float32) is always 4, but I left it in to be clear.
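If the uncertainty comes from passing outputSettings:nil, one option (a sketch, not part of the original answer) is to ask AVAssetReaderTrackOutput for a known linear PCM layout up front, so mData can safely be treated as SInt16:

NSDictionary *outputSettings = @{
    AVFormatIDKey:               @(kAudioFormatLinearPCM),
    AVLinearPCMBitDepthKey:      @16,
    AVLinearPCMIsFloatKey:       @NO,
    AVLinearPCMIsBigEndianKey:   @NO,
    AVLinearPCMIsNonInterleaved: @NO
};
AVAssetReaderTrackOutput *assetReaderOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[urlAsset tracks] objectAtIndex:0]
                                               outputSettings:outputSettings];
// Each buffer then holds audioBuffer.mDataByteSize / sizeof(SInt16) samples.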