I'm trying to set up an audio unit capable of mono input and stereo output. I intend to play a sine wave tone through the left output channel and a different sine wave periodically through the right output channel.
I am receiving the error,
'NSInternalInconsistencyException', reason: 'Error initializing unit: -50'
when I attempt to initialize my audio unit here,
// Initialize audio unit
OSErr err = AudioUnitInitialize(self.ioUnit);
NSAssert1(err == noErr, @"Error initializing unit: %hd", err);
I believe it has to do with how I am setting up the audio unit,
// Audio component description
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get Audio units
AudioComponentInstanceNew(inputComponent, &ioUnit);
// Enable input, which is disabled by default (output is enabled by default)
UInt32 enableInput = 1;
AudioUnitSetProperty(ioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&enableInput,
sizeof(enableInput));
AudioStreamBasicDescription monoStreamFormat;
monoStreamFormat.mSampleRate = 44100.00;
monoStreamFormat.mFormatID = kAudioFormatLinearPCM;
monoStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
monoStreamFormat.mBytesPerPacket = 2;
monoStreamFormat.mBytesPerFrame = 2;
monoStreamFormat.mFramesPerPacket = 1;
monoStreamFormat.mChannelsPerFrame = 1;
monoStreamFormat.mBitsPerChannel = 16;
// Apply format to input of ioUnit
AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kInputBus,
&monoStreamFormat,
sizeof(monoStreamFormat));
AudioStreamBasicDescription stereoStreamFormat;
stereoStreamFormat.mSampleRate = 44100.00;
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat.mBytesPerPacket = 4;
stereoStreamFormat.mBytesPerFrame = 4;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel = 32;
// Apply format to output of ioUnit
AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kOutputBus,
&stereoStreamFormat,
sizeof(stereoStreamFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = inputCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
// Set output callback
callbackStruct.inputProc = outputCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
but I couldn't find anything on the error code -50 it's giving back, so I'm not sure what it doesn't like. I'm on a deadline, so any help is greatly appreciated.
kAudioFormatFlagsAudioUnitCanonical is a 32-bit, 8.24 fixed-point format. That means mBitsPerChannel must be 32 for both the mono and the stereo formats, and the byte counts have to be sized to match (4 bytes per channel per frame). Setting mBitsPerChannel to 16 while using that flag would probably produce this error.
That said, I doubt you want 8.24. You probably want kAudioFormatFlagsCanonical to get standard 16-bit audio. Or, if you want 32 bit, go with 32-bit float (kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved). Easier to deal with the math during your processing.
Also, you are setting the formats on the virtual (hardware-facing) inputs and outputs rather than on the client sides, which I don't think is what you want. The API here is rather confusing, but to set the client input format (the format of the mic audio your code receives), use kAudioUnitScope_Output with kInputBus. The logic is that you're setting the format on the output side of the internal input element, which, from the perspective of your code, is the input audio. Confusing, I know. See the Audio Unit docs for more detail. Likewise, to set the client output format (the format of the audio your code supplies for playback), use kAudioUnitScope_Input with kOutputBus.
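Putting both points together, a corrected setup might look something like this (a sketch, not drop-in code: it assumes kInputBus is 1 and kOutputBus is 0, and uses 32-bit float client formats as suggested above; adapt the formats to whatever your callbacks actually expect):
// Client format for the audio you receive from the mic:
// set on the OUTPUT scope of the input element (bus 1).
AudioStreamBasicDescription monoStreamFormat = {0};
monoStreamFormat.mSampleRate       = 44100.0;
monoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
monoStreamFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
monoStreamFormat.mChannelsPerFrame = 1;
monoStreamFormat.mBitsPerChannel   = 32;
monoStreamFormat.mFramesPerPacket  = 1;
monoStreamFormat.mBytesPerFrame    = sizeof(Float32) * monoStreamFormat.mChannelsPerFrame;
monoStreamFormat.mBytesPerPacket   = monoStreamFormat.mBytesPerFrame;
OSStatus status = AudioUnitSetProperty(ioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,   // client side of the input element
                                       kInputBus,                // bus 1
                                       &monoStreamFormat,
                                       sizeof(monoStreamFormat));
if (status != noErr) NSLog(@"Couldn't set input client format: %d", (int)status);
// Client format for the audio you supply for playback:
// set on the INPUT scope of the output element (bus 0).
AudioStreamBasicDescription stereoStreamFormat = monoStreamFormat;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBytesPerFrame    = sizeof(Float32) * stereoStreamFormat.mChannelsPerFrame; // interleaved
stereoStreamFormat.mBytesPerPacket   = stereoStreamFormat.mBytesPerFrame;
status = AudioUnitSetProperty(ioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,    // client side of the output element
                              kOutputBus,               // bus 0
                              &stereoStreamFormat,
                              sizeof(stereoStreamFormat));
if (status != noErr) NSLog(@"Couldn't set output client format: %d", (int)status);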
32 bits per channel is too many bits to fit 2 channels into 4 bytes. Try 16.
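For example, a consistent 16-bit interleaved stereo description could look like this (a sketch; note it drops the 8.24 canonical flag in favour of plain signed 16-bit samples):
stereoStreamFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
stereoStreamFormat.mBitsPerChannel   = 16;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mFramesPerPacket  = 1;
stereoStreamFormat.mBytesPerFrame    = 4;  // 2 channels * 2 bytes, interleaved
stereoStreamFormat.mBytesPerPacket   = 4;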
I'm writing a VoIP app that uses the "novocaine" library for recording and playback of sound. I set the sample rate to 8 kHz. This sample rate is set in Novocaine in the AudioStreamBasicDescription of the audio unit and as the audio session property kAudioSessionProperty_PreferredHardwareSampleRate. I understand that setting a preferred hardware sample rate is no guarantee that the actual hardware sample rate will change, but it worked for all devices except the iPhone 6s and iPhone 6s+ (when the route is changed to the speaker). With the iPhone 6s(+) and the speaker route I receive 48 kHz sound from the microphone, so I need to somehow convert that 48 kHz sound to 8 kHz. In the documentation I found that AudioConverterRef can be used in this case, but I'm having trouble using it.
I use AudioConverterFillComplexBuffer for the sample rate conversion, but it always returns OSStatus -50 (one or more parameters passed to the function were not valid). This is how I use the audio converter:
// Setup AudioStreamBasicDescription for input
inputFormat.mSampleRate = 48000.0;
inputFormat.mFormatID = kAudioFormatLinearPCM;
inputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
inputFormat.mChannelsPerFrame = 1;
inputFormat.mBitsPerChannel = 8 * sizeof(float);
inputFormat.mFramesPerPacket = 1;
inputFormat.mBytesPerFrame = sizeof(float) * inputFormat.mChannelsPerFrame;
inputFormat.mBytesPerPacket = inputFormat.mBytesPerFrame * inputFormat.mFramesPerPacket;
// Setup AudioStreamBasicDescription for output
outputFormat.mSampleRate = 8000.0;
outputFormat.mFormatID = kAudioFormatLinearPCM;
outputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
outputFormat.mChannelsPerFrame = 1;
outputFormat.mBitsPerChannel = 8 * sizeof(float);
outputFormat.mFramesPerPacket = 1;
outputFormat.mBytesPerFrame = sizeof(float) * outputFormat.mChannelsPerFrame;
outputFormat.mBytesPerPacket = outputFormat.mBytesPerFrame * outputFormat.mFramesPerPacket;
// Create new instance of audio converter
AudioConverterNew(&inputFormat, &outputFormat, &converter);
// Set conversion quality
UInt32 tmp = kAudioConverterQuality_Medium;
AudioConverterSetProperty( converter, kAudioConverterCodecQuality,
sizeof( tmp ), &tmp );
AudioConverterSetProperty( converter, kAudioConverterSampleRateConverterQuality, sizeof( tmp ), &tmp );
// Get the size of the IO buffer(s)
UInt32 bufferSizeFrames = 0;
size = sizeof(UInt32);
AudioUnitGetProperty(self.inputUnit,
kAudioDevicePropertyBufferFrameSize,
kAudioUnitScope_Global,
0,
&bufferSizeFrames,
&size);
UInt32 bufferSizeBytes = bufferSizeFrames * sizeof(Float32);
// Allocate an AudioBufferList plus enough space for array of AudioBuffers
UInt32 propsize = offsetof(AudioBufferList, mBuffers[0]) + (sizeof(AudioBuffer) * outputFormat.mChannelsPerFrame);
// Malloc buffer lists
convertedInputBuffer = (AudioBufferList *)malloc(propsize);
convertedInputBuffer->mNumberBuffers = 1;
// Pre-malloc buffers for AudioBufferLists
convertedInputBuffer->mBuffers[0].mNumberChannels = outputFormat.mChannelsPerFrame;
convertedInputBuffer->mBuffers[0].mDataByteSize = bufferSizeBytes;
convertedInputBuffer->mBuffers[0].mData = malloc(bufferSizeBytes);
memset(convertedInputBuffer->mBuffers[0].mData, 0, bufferSizeBytes);
// Setup callback for converter
static OSStatus inputProcPtr(AudioConverterRef inAudioConverter,
UInt32* ioNumberDataPackets,
AudioBufferList* ioData,
AudioStreamPacketDescription* __nullable* __nullable outDataPacketDescription,
void* __nullable inUserData)
{
// Read data from buffer
}
// Perform actual sample rate conversion
AudioConverterFillComplexBuffer(converter, inputProcPtr, NULL, &numberOfFrames, convertedInputBuffer, NULL);
The inputProcPtr callback is never called. I tried passing different numbers of frames but still receive OSStatus -50.
1) Is AudioConverterRef the correct way to do sample rate conversion, or could it be done a different way?
2) What is wrong with my conversion implementation?
Thank you all in advance.
One problem is this:
AudioUnitGetProperty(self.inputUnit,
kAudioDevicePropertyBufferFrameSize,
kAudioUnitScope_Global,
0,
&bufferSizeFrames,
&size);
kAudioDevicePropertyBufferFrameSize is an OSX property, and doesn't exist on iOS. How is this code even compiling?
If you've somehow made it compile, check the return code from this function! I've got a feeling that it's failing, and bufferSizeFrames is zero. That would make AudioConverterFillComplexBuffer return -50 (kAudio_ParamError).
So on iOS, either pick a bufferSizeFrames yourself or base it on AVAudioSession's IOBufferDuration if you must.
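For example (a sketch, assuming you can use AVAudioSession; the variable names are the same ones used above):
#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
// frames per I/O cycle = buffer duration (seconds) * hardware sample rate
UInt32 bufferSizeFrames = (UInt32)(session.IOBufferDuration * session.sampleRate + 0.5);
UInt32 bufferSizeBytes  = bufferSizeFrames * sizeof(Float32);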
Another problem: check your return codes. All of them!
e.g.
UInt32 tmp = kAudioConverterQuality_Medium;
AudioConverterSetProperty( converter, kAudioConverterCodecQuality,
sizeof( tmp ), &tmp );
I'm pretty sure there's no codec to speak of in LPCM->LPCM conversions, and that kAudioConverterQuality_Medium is not the right value to use with kAudioConverterCodecQuality in any case. I don't see how this call can succeed.
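As a sketch of what checking everything looks like, together with an input proc that actually hands data to the converter (CheckStatus, SourceBuffer and kMyConverterOutOfData are my placeholder names, not part of the API; the other variables follow the question's code, and AudioToolbox is assumed to be imported):
static void CheckStatus(OSStatus status, const char *operation)
{
    if (status != noErr) {
        fprintf(stderr, "%s failed: %d\n", operation, (int)status);
    }
}

typedef struct {
    Float32 *samples;     // your captured 48 kHz mono float samples
    UInt32   framesLeft;  // how many of them remain unconsumed
} SourceBuffer;

enum { kMyConverterOutOfData = -50000 };  // arbitrary app-defined status

static OSStatus inputProcPtr(AudioConverterRef inAudioConverter,
                             UInt32 *ioNumberDataPackets,
                             AudioBufferList *ioData,
                             AudioStreamPacketDescription * __nullable * __nullable outDataPacketDescription,
                             void * __nullable inUserData)
{
    SourceBuffer *src = (SourceBuffer *)inUserData;
    if (src->framesLeft == 0) {
        *ioNumberDataPackets = 0;         // nothing left to hand over
        return kMyConverterOutOfData;     // tells the converter to stop pulling
    }
    ioData->mBuffers[0].mNumberChannels = 1;
    ioData->mBuffers[0].mData           = src->samples;
    ioData->mBuffers[0].mDataByteSize   = src->framesLeft * sizeof(Float32);
    *ioNumberDataPackets = src->framesLeft;   // 1 frame == 1 packet for LPCM
    src->framesLeft = 0;                      // hand it all over in one call
    return noErr;
}

// ...and every call gets its result checked:
SourceBuffer source = { capturedSamples, capturedFrameCount };  // placeholders for your 48 kHz data
UInt32 outputFrames = bufferSizeFrames;                         // capacity of convertedInputBuffer, in frames
CheckStatus(AudioConverterFillComplexBuffer(converter, inputProcPtr, &source,
                                            &outputFrames, convertedInputBuffer, NULL),
            "AudioConverterFillComplexBuffer");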
I'm using ExtAudioFileWriteAsync to write an audio file while recording from the device, but once recording is finished and I try to read the file back with ExtAudioFileRead, the samples I get are not the same samples I wrote... Anyone know why this could happen?
For writing:
self.audioManager.inputBlock = ^(float *data, UInt32 numFrames, UInt32 numChannels) {
for (int i = 0; i < numFrames*numChannels; i++) {
printf("write*%f\n", data[i]);
}
UInt32 numIncomingBytes = numFrames*numChannels*sizeof(float);
UInt32 numIncomingBytes = numFrames*numChannels*sizeof(float);
float *outputBuffer = (float *)malloc(numIncomingBytes);
memcpy(outputBuffer, data, numIncomingBytes);
AudioBufferList outgoingAudio;
outgoingAudio.mNumberBuffers = 1;
outgoingAudio.mBuffers[0].mNumberChannels = numChannels;
outgoingAudio.mBuffers[0].mDataByteSize = numIncomingBytes;
outgoingAudio.mBuffers[0].mData = outputBuffer;
if( 0 == pthread_mutex_trylock( &outputAudioFileLock ) )
{
    ExtAudioFileWriteAsync(outputFile, numFrames, &outgoingAudio);
    pthread_mutex_unlock( &outputAudioFileLock );
}
};
[self.audioManager play];
For reading:
UInt32 *outputBuffer = (UInt32 *)malloc(numFrames*numChannels*sizeof(float));
AudioBufferList convertedData;
convertedData.mNumberBuffers = 1;
convertedData.mBuffers[0].mNumberChannels = numChannels;
convertedData.mBuffers[0].mDataByteSize = numFrames*numChannels*sizeof(float);
convertedData.mBuffers[0].mData = outputBuffer;
NSMutableArray *samplesArray = [[NSMutableArray alloc]init];
while (numFrames > 0) {
ExtAudioFileRead(inputFile, &numFrames, &convertedData);
if (numFrames > 0) {
AudioBuffer audioBuffer = convertedData.mBuffers[0];
float *samples = (float *)audioBuffer.mData;
for (int i = 0; i < numFrames*numChannels; i++) {
printf("read*%f\n", samples[i]);
}
}
}
By the way, I'm using the Novocaine project to get the device audio. I can play back the saved audio with the Novocaine code or with any other player.
When writing, the ExtAudioFileRef outputFile is created with:
ExtAudioFileCreateWithURL(audioFileRef, kAudioFileM4AType, &outputFileDesc, NULL, kAudioFileFlags_EraseFile, &outputFile);
Where outputFileDesc is
AudioStreamBasicDescription outputFileDesc = {44100.0, kAudioFormatMPEG4AAC, 0, 0, 1024, 0, thisNumChannels, 0, 0};
outputFileDesc.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
When reading ExtAudioFileRef inputFile:
ExtAudioFileOpenURL(audioFileRef, &inputFile);
And in both cases (writing and reading) the same client format is applied:
AudioStreamBasicDescription outputFormat;
_outputFormat.mSampleRate = self.samplingRate;
_outputFormat.mFormatID = kAudioFormatLinearPCM;
_outputFormat.mFormatFlags = kAudioFormatFlagIsFloat;
_outputFormat.mBytesPerPacket = 4*self.numChannels;
_outputFormat.mFramesPerPacket = 1;
_outputFormat.mBytesPerFrame = 4*self.numChannels;
_outputFormat.mChannelsPerFrame = self.numChannels;
_outputFormat.mBitsPerChannel = 32;
ExtAudioFileSetProperty(outputFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &_outputFormat);
ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &_outputFormat);
And by the way, even if the read samples are not equal to the written samples, the mean values of both signals are quite similar. But I don't fully understand why they are not exactly equal!
Any idea what I'm doing wrong?
It sounds like there is an implicit format conversion from one or both of the ExtAudioFileRefs, and you are seeing different samples as a result of the conversion. You have three formats: audio_in_format, file_format, and audio_out_format. If audio_in_format is different from file_format, the writing ExtAudioFileRef will create an audio converter for you to convert the input audio to file_format before writing to disk. And the reading ExtAudioFileRef will also create a converter if file_format is different from audio_out_format. In your case the file_format is AAC (kAudioFormatMPEG4AAC), which is a lossy codec, so the samples you read back will never be bit-identical to the samples you wrote, even though the signals look and sound very similar.
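If you want to see the conversion, you can ask either ExtAudioFileRef for the on-disk format and compare it with the client format you set (a quick sketch, using the question's inputFile):
AudioStreamBasicDescription fileFormat = {0};
UInt32 fileFormatSize = sizeof(fileFormat);
ExtAudioFileGetProperty(inputFile, kExtAudioFileProperty_FileDataFormat,
                        &fileFormatSize, &fileFormat);
// For this file the format ID will be 'aac ' (kAudioFormatMPEG4AAC),
// i.e. the samples on disk are not the LPCM samples you handed in.
NSLog(@"file format '%c%c%c%c', %.0f Hz, %u channel(s)",
      (char)(fileFormat.mFormatID >> 24), (char)(fileFormat.mFormatID >> 16),
      (char)(fileFormat.mFormatID >> 8),  (char)(fileFormat.mFormatID),
      fileFormat.mSampleRate, (unsigned)fileFormat.mChannelsPerFrame);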
Opinion:
It's confusing that you named your writing ExtAudioFileRef "outputFile", and your reading ExtAudioFileRef "inputFile". I would use something like audioWriter and audioReader.
First of all, I am a newbie at C and Objective-C.
I am trying to FFT a buffer of audio and plot the graph of it.
I use an audio unit callback to get the audio buffer. The callback brings 512 frames, but after 471 frames the rest are 0. (I don't know whether this is normal or not. It used to bring 471 frames full of numbers, but now somehow it brings 512 frames that are 0 after frame 471. Please let me know if this is normal.)
Anyway, I can get the buffer from the callback, apply the FFT and draw it. This works perfectly, and here is the outcome below: the graph is very smooth as long as I process the buffer in each callback.
But in my case I need 3 seconds of buffer in order to apply the FFT and draw, so I try to concatenate the buffers from two callbacks and then apply the FFT and draw. But the result is not what I expect: while the per-callback version is very smooth and precise during recording (only the magnitude changes at 18 and 19 kHz), when I concatenate the two buffers the simulator mainly displays two different views that swap between each other very fast. They are displayed below. Of course they basically show 18 and 19 kHz, but I need precise frequencies so I can apply more algorithms in the app I'm working on.
And here is my code in the callback:
// FFTInputBufferLen and FFTInputBufferFrameIndex are global
// tempFilteredBuffer is also allocated globally
// by the way, FFTInputBufferLen = 1024
static OSStatus performRender (void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
UInt32 bus1 = 1;
CheckError(AudioUnitRender(effectState.rioUnit,
ioActionFlags,
inTimeStamp,
bus1,
inNumberFrames,
ioData), "Couldn't render from RemoteIO unit");
Float32 * renderBuff = ioData->mBuffers[0].mData;
ViewController *vc = (__bridge ViewController *) inRefCon;
// inNumberFrames comes in as 512, as I described above
for (int i = 0; i < inNumberFrames ; i++)
{
// I declared InputBuffers[5] globally,
// i.e. five Float32* buffers, each allocated globally
InputBuffers[bufferCount][FFTInputBufferFrameIndex] = renderBuff[i];
FFTInputBufferFrameIndex ++;
if(FFTInputBufferFrameIndex == FFTInputBufferLen)
{
int bufCount = bufferCount;
dispatch_async( dispatch_get_main_queue(), ^{
tempFilteredBuffer = [vc FilterData_rawSamples:InputBuffers[bufCount] numSamples:FFTInputBufferLen];
[vc CalculateFFTwithPlotting_Data:tempFilteredBuffer NumberofSamples:FFTInputBufferLen ];
free(InputBuffers[bufCount]);
InputBuffers[bufCount] = (Float32*)malloc(sizeof(Float32) * FFTInputBufferLen);
});
FFTInputBufferFrameIndex = 0;
bufferCount ++;
if (bufferCount == 5)
{
bufferCount = 0;
}
}
}
return noErr;
}
Here is my AudioUnit setup:
- (void)setupIOUnit
{
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
CheckError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of AURemoteIO");
UInt32 one = 1;
CheckError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on AURemoteIO");
// I removed this in order not to get the recorded audio played back on the speakers. Am I right?
//CheckError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on AURemoteIO");
UInt32 maxFramesPerSlice = 4096;
CheckError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on AURemoteIO");
UInt32 propSize = sizeof(UInt32);
CheckError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on AURemoteIO");
AudioUnitElement bus1 = 1;
AudioStreamBasicDescription myASBD;
myASBD.mSampleRate = 44100;
myASBD.mChannelsPerFrame = 1;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mBytesPerFrame = sizeof(Float32) * myASBD.mChannelsPerFrame ;
myASBD.mFramesPerPacket = 1;
myASBD.mBytesPerPacket = myASBD.mFramesPerPacket * myASBD.mBytesPerFrame;
myASBD.mBitsPerChannel = sizeof(Float32) * 8 ;
myASBD.mFormatFlags = 9 | 12 ;
// I also removed this to avoid getting the audio played back!
// CheckError(AudioUnitSetProperty (_rioUnit,
// kAudioUnitProperty_StreamFormat,
// kAudioUnitScope_Input,
// bus0,
// &myASBD,
// sizeof (myASBD)), "Couldn't set ASBD for RIO on input scope / bus 0");
CheckError(AudioUnitSetProperty (_rioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
bus1,
&myASBD,
sizeof (myASBD)), "Couldn't set ASBD for RIO on output scope / bus 1");
effectState.rioUnit = _rioUnit;
AURenderCallbackStruct renderCallback;
renderCallback.inputProc = performRender;
renderCallback.inputProcRefCon = (__bridge void *)(self);
CheckError(AudioUnitSetProperty(_rioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&renderCallback,
sizeof(renderCallback)), "couldn't set render callback on AURemoteIO");
CheckError(AudioUnitInitialize(_rioUnit), "couldn't initialize AURemoteIO instance");
}
My questions are: why does this happen, and why are there two main different views in the output when I concatenate the two buffers? Is there another way to collect buffers and apply DSP? What am I doing wrong? If the way I concatenate is correct, is my logic incorrect? (Though I checked it many times.)
In short: how can I get 3 seconds of buffer in good condition?
I really need help. Best regards.
Your render callback may be writing data into the same buffer that is being processed in another thread (the main queue), thus overwriting and altering part of the data being processed.
Try using more than one buffer. Don't write into a buffer that is still being processed (by your filter & fft methods). Perhaps recycle the buffers for reuse after the FFT calculation method is finished.
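One way to do that (a sketch, reusing the poster's variable names; note that allocating inside a render callback is itself not ideal for real-time safety, so a pre-allocated pool of recycled buffers would be even better):
// Inside the render callback, once FFTInputBufferLen samples have accumulated:
Float32 *fftCopy = (Float32 *)malloc(sizeof(Float32) * FFTInputBufferLen);
memcpy(fftCopy, InputBuffers[bufferCount], sizeof(Float32) * FFTInputBufferLen);
dispatch_async(dispatch_get_main_queue(), ^{
    // The block owns fftCopy, so the render thread never touches the data being processed.
    tempFilteredBuffer = [vc FilterData_rawSamples:fftCopy numSamples:FFTInputBufferLen];
    [vc CalculateFFTwithPlotting_Data:tempFilteredBuffer NumberofSamples:FFTInputBufferLen];
    free(fftCopy);
});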
I have successfully concatenated the buffers without any unstable graphics. What I did was change the AVAudioSession category from Record to PlayAndRecord, then comment out the two AudioUnitSetProperty lines. After that I started to get 470~471 frames per render, and I concatenated them as in the code I posted, also using the multiple buffers. Now it works, but it plays the sound through to the output. To silence it I applied the code below:
for (UInt32 i=0; i<ioData->mNumberBuffers; ++i)
{
memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}
Then I started getting 3 seconds of buffers, and when I plot them on the screen I get a view similar to the first graph.
I use the ExtAudioFileRead function to load an audio file into memory, but I always get an error with code -50. That means I passed wrong parameters to the function, but I have no idea which parameter is the wrong one.
The audio file's data format is ALAC, with a 44100 Hz sample rate and 2 channels.
My code is shown below:
ExtAudioFileRef recordFile;
OSStatus error = noErr;
error = ExtAudioFileOpenURL((CFURLRef)file, &recordFile);
checkError(error, "open file");
SInt64 frameCount;
UInt32 size = sizeof(frameCount);
error = ExtAudioFileGetProperty(recordFile, kExtAudioFileProperty_FileLengthFrames, &size, &frameCount);
checkError(error, "get frameTotlal");
soundStruct *sound = &_sound;
sound->frameCount = frameCount;
sound->isStereo = true;
sound->audioDataLeft = (SInt16 *)calloc(frameCount, sizeof(SInt16));
sound->audioDataRight = (SInt16 *)calloc(frameCount, sizeof(SInt16));
AudioStreamBasicDescription desc;
UInt32 descSize = sizeof(desc);
error = ExtAudioFileGetProperty(recordFile, kExtAudioFileProperty_FileDataFormat, &descSize, &desc);
[self printASBD:desc];
UInt32 channels = desc.mChannelsPerFrame;
error = ExtAudioFileSetProperty(recordFile, kExtAudioFileProperty_ClientDataFormat, sizeof(inFormat), &inFormat);
AudioBufferList *bufferList;
bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (channels - 1));
AudioBuffer emptyBuff = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channels; arrayIndex ++) {
bufferList->mBuffers[arrayIndex] = emptyBuff;
}
bufferList->mBuffers[0].mData = sound->audioDataLeft;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = frameCount * sizeof(SInt16);
if (channels == 2) {
bufferList->mBuffers[1].mData = sound->audioDataRight;
bufferList->mBuffers[1].mNumberChannels = 1;
bufferList->mBuffers[1].mDataByteSize = frameCount * sizeof(SInt16);
bufferList->mNumberBuffers = 2;
}
UInt32 count = (UInt32)frameCount;
error = ExtAudioFileRead(recordFile, &count, bufferList);
checkError(error, "reading"); // Get a -50 error
free(bufferList);
ExtAudioFileDispose(recordFile);
Good question.
This error happened to me when I used ExtAudioFileRead on a MONO file while using a STEREO client data format in the call to ExtAudioFileSetProperty.
I don't think ExtAudioFileRead automatically upconverts mono files to stereo; if there is a mismatch there, I think it fails with this -50 error.
Either make the mono file stereo, or set inFormat.mChannelsPerFrame = 1 for mono files.
Remember, if you don't upconvert, you must account for the mono files in your audio render function by writing the L/R channels from the single mono channel of data.
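A sketch of the second option, deriving the client channel count from the file's own format instead of hard-coding it (names follow the question's code, and it assumes inFormat is the non-interleaved 16-bit description implied by the buffer setup above):
// desc was filled from kExtAudioFileProperty_FileDataFormat above
inFormat.mChannelsPerFrame = desc.mChannelsPerFrame;   // 1 for mono files, 2 for stereo
error = ExtAudioFileSetProperty(recordFile, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(inFormat), &inFormat);
checkError(error, "set client data format");
// For a mono file, also set bufferList->mNumberBuffers = 1 and skip mBuffers[1].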
So, I need to reverse an audio *.caf file.
I have seen that the way to do it should be:
You cannot just reverse the byte data. I have achieved the same
effect using CoreAudio and AudioUnits. Use ExtFileReader C API to read
the file into lPCM buffers and then you can reverse the buffers as
needed.
But I cannot find any documentation on the use of the "ExtFileReader C API".
So if I have a *.caf file, how can I read it into linear PCM? I have checked the Core Audio overview but can't find how to accomplish this.
How can I then read my .caf file into linear PCM?
Thanks!
ExtendedAudioFile is in the AudioToolbox framework. It's pretty straightforward to read a file into whatever format you'd like. Here's a quick (compiles, but not tested) example of reading into 32-bit float non-interleaved linear PCM:
#import <AudioToolbox/AudioToolbox.h>
...
ExtAudioFileRef audioFile = NULL;
CFURLRef url = NULL;
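// NB: url is left NULL here as a placeholder -- point it at your .caf file before opening.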
OSStatus err = ExtAudioFileOpenURL(url, &audioFile);
AudioStreamBasicDescription asbd;
UInt32 dataSize = sizeof(asbd);
// get the audio file's format
err = ExtAudioFileGetProperty(audioFile, kExtAudioFileProperty_FileDataFormat, &dataSize, &asbd);
// now set the client format to what we want on read (LPCM, 32-bit floating point)
AudioStreamBasicDescription clientFormat = asbd;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagIsPacked;
clientFormat.mBitsPerChannel = 32;
clientFormat.mBytesPerPacket = 4;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBytesPerFrame = 4;
err = ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
// okay, now the ext audio file is set up to convert samples to LPCM on read
// get the total number of samples
SInt64 numFrames = 0;
dataSize = sizeof(numFrames);
err = ExtAudioFileGetProperty(audioFile, kExtAudioFileProperty_FileLengthFrames, &dataSize, &numFrames);
// prepare an audio buffer list to hold the data when we read it from the file
UInt32 maxReadFrames = 4096; // how many samples will we read in at a time?
AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (asbd.mChannelsPerFrame - 1));
bufferList->mNumberBuffers = asbd.mChannelsPerFrame;
for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
bufferList->mBuffers[ii].mDataByteSize = maxReadFrames * sizeof(float);
bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
bzero(bufferList->mBuffers[ii].mData, bufferList->mBuffers[ii].mDataByteSize);
bufferList->mBuffers[ii].mNumberChannels = 1;
}
while(numFrames > 0) {
    UInt32 framesToRead = (maxReadFrames > numFrames) ? (UInt32)numFrames : maxReadFrames;
    err = ExtAudioFileRead(audioFile, &framesToRead, bufferList);
    if (err != noErr || framesToRead == 0) break; // stop on error or end of file
    // okay, your LPCM audio data is in `bufferList` -- do whatever processing you'd like!
    numFrames -= framesToRead; // otherwise this loop would never terminate
}
// clean up
for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
free(bufferList->mBuffers[ii].mData);
}
free(bufferList);
ExtAudioFileDispose(audioFile);
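And once the whole file has been pulled into per-channel float buffers, reversing is just swapping samples end to end. A minimal sketch (my addition, assuming you've copied each ExtAudioFileRead chunk into one big per-channel array of numFrames floats):
static void ReverseChannel(float *samples, SInt64 frameCount)
{
    // swap the first and last remaining samples, working inward
    for (SInt64 i = 0, j = frameCount - 1; i < j; ++i, --j) {
        float tmp  = samples[i];
        samples[i] = samples[j];
        samples[j] = tmp;
    }
}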