How to play and read a .caf PCM audio file - iOS

I have an app that selects a song from the iPod library and then copies that song into the app's directory as a '.caf' file. I now need to play that file and, at the same time, read it into Apple's FFT from the Accelerate framework so I can visualize the data as a spectrogram. Here is the code for the FFT:
void FFTAccelerate::doFFTReal(float samples[], float amp[], int numSamples)
{
    int i;
    vDSP_Length log2n = log2f(numSamples);

    // Convert the float array of real samples to a COMPLEX_SPLIT array A
    vDSP_ctoz((COMPLEX *)samples, 2, &A, 1, numSamples/2);

    // Perform the FFT using fftSetup and A; results are returned in A
    vDSP_fft_zrip(fftSetup, &A, 1, log2n, FFT_FORWARD);

    // Convert the COMPLEX_SPLIT result to a float array of magnitudes.
    // A.realp and A.imagp each hold only numSamples/2 elements, so the loop stops there.
    amp[0] = A.realp[0] / (numSamples * 2);
    for (i = 1; i < numSamples/2; i++)
        amp[i] = sqrt(A.realp[i]*A.realp[i] + A.imagp[i]*A.imagp[i]) / numSamples;
}
// Constructor
FFTAccelerate::FFTAccelerate(int numSamples)
{
    vDSP_Length log2n = log2f(numSamples);
    fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);

    int nOver2 = numSamples/2;
    A.realp = (float *)malloc(nOver2 * sizeof(float));
    A.imagp = (float *)malloc(nOver2 * sizeof(float));
}
My question is: how do I loop through the '.caf' audio file to feed the FFT while the song is playing? I only need one channel. I'm guessing I need to grab 1024 samples of the song, process them in the FFT, then move further down the file and grab another 1024 samples. But I don't understand how to read an audio file to do this. The file has a sample rate of 44100 Hz, is in linear PCM format, 16-bit, and I believe it is also interleaved, if that helps...

Try the ExtendedAudioFile API (requires AudioToolbox.framework).
#include <AudioToolbox/ExtendedAudioFile.h>

NSURL *urlToCAF = ...;
ExtAudioFileRef caf;
OSStatus status;

status = ExtAudioFileOpenURL((__bridge CFURLRef)urlToCAF, &caf);
if(noErr == status) {
    const UInt32 NumFrames = 1024;
    const int ChannelsPerFrame = 1; // Mono, 2 for Stereo

    // request deinterleaved float format
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mChannelsPerFrame = ChannelsPerFrame;
    clientFormat.mSampleRate = 44100;
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
    int cmpSize = sizeof(float);
    int frameSize = cmpSize * ChannelsPerFrame;
    clientFormat.mBitsPerChannel = cmpSize * 8;
    clientFormat.mBytesPerPacket = frameSize;
    clientFormat.mFramesPerPacket = 1;
    clientFormat.mBytesPerFrame = frameSize;

    status = ExtAudioFileSetProperty(caf, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
    if(noErr != status) { /* handle it */ }

    while(1) {
        float buf[ChannelsPerFrame * NumFrames];
        AudioBuffer ab = { ChannelsPerFrame, sizeof(buf), buf };
        AudioBufferList abl;
        abl.mNumberBuffers = 1;
        abl.mBuffers[0] = ab;

        UInt32 ioNumFrames = NumFrames;
        status = ExtAudioFileRead(caf, &ioNumFrames, &abl);
        if(noErr == status) {
            // process ioNumFrames here in buf
            if(0 == ioNumFrames) {
                // EOF!
                break;
            } else if(ioNumFrames < NumFrames) {
                // TODO: pad buf with zeroes out to NumFrames
            } else {
                float amp[NumFrames]; // scratch space
                doFFTReal(buf, amp, NumFrames);
            }
        } else {
            break; // read error
        }
    }

    // later
    status = ExtAudioFileDispose(caf);
    if(noErr != status) { /* hmm */ }
}
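As for playing the file at the same time: one simple approach (an assumption on my part, not part of the original answer) is to start playback with AVAudioPlayer and run the read loop above on a background queue:
#import <AVFoundation/AVFoundation.h>

// Sketch: play the same file while the ExtAudioFileRead loop runs elsewhere.
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:urlToCAF error:&error];
[player prepareToPlay];
[player play];

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Run the ExtAudioFileRead / doFFTReal loop from above here,
    // optionally pacing it against player.currentTime so the
    // visualization stays roughly in sync with what is heard.
});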

Related

AudioFileWriteBytes performance for stereo file

I'm writing a stereo wave file with AudioFileWriteBytes (CoreAudio / iOS) and the only way I can get it to work is by calling it for each sample on each channel.
The following code works:
// Prepare the format (AudioStreamBasicDescription)
AudioStreamBasicDescription asbd = {
    .mSampleRate = session.samplerate,
    .mFormatID = kAudioFormatLinearPCM,
    .mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    .mChannelsPerFrame = 2,
    .mBitsPerChannel = 16,
    .mFramesPerPacket = 1, // Always 1 for uncompressed formats
    .mBytesPerPacket = 4,  // 16 bits for 2 channels = 4 bytes
    .mBytesPerFrame = 4    // 16 bits for 2 channels = 4 bytes
};
// Set up the file
AudioFileID audioFile;
OSStatus audioError = noErr;
audioError = AudioFileCreateWithURL((__bridge CFURLRef)fileURL, kAudioFileAIFFType, &asbd, kAudioFileFlags_EraseFile, &audioFile);
if (audioError != noErr) {
    NSLog(@"Error creating file");
    return;
}

// Write samples
UInt64 currentFrame = 0;
while (currentFrame < totalLengthInFrames) {
    UInt64 numberOfFramesToWrite = totalLengthInFrames - currentFrame;
    if (numberOfFramesToWrite > 2048) {
        numberOfFramesToWrite = 2048;
    }
    UInt32 sampleByteCount = sizeof(int16_t);
    UInt32 bytesToWrite = (UInt32)numberOfFramesToWrite * sampleByteCount;

    int16_t *sampleBufferLeft = (int16_t *)malloc(bytesToWrite);
    int16_t *sampleBufferRight = (int16_t *)malloc(bytesToWrite);

    // Some magic to fill the buffers

    for (int j = 0; j < numberOfFramesToWrite; j++) {
        int16_t left  = CFSwapInt16HostToBig(sampleBufferLeft[j]);
        int16_t right = CFSwapInt16HostToBig(sampleBufferRight[j]);
        audioError = AudioFileWriteBytes(audioFile, false, (currentFrame + j) * 4, &sampleByteCount, &left);
        assert(audioError == noErr);
        audioError = AudioFileWriteBytes(audioFile, false, (currentFrame + j) * 4 + 2, &sampleByteCount, &right);
        assert(audioError == noErr);
    }

    free(sampleBufferLeft);
    free(sampleBufferRight);
    currentFrame += numberOfFramesToWrite;
}
However, it is (obviously) very slow and inefficient.
I can't find anything on how to use it with a big buffer so that I can write more than a single sample while also writing 2 channels.
I tried making a buffer going LRLRLRLR (left / right), and then write that with just one AudioFileWriteBytes call. I expected that to work, but it produced a file filled with noise.
This is the code:
UInt64 currentFrame = 0;
UInt64 bytePos = 0;
while (currentFrame < totalLengthInFrames) {
    UInt64 numberOfFramesToWrite = totalLengthInFrames - currentFrame;
    if (numberOfFramesToWrite > 2048) {
        numberOfFramesToWrite = 2048;
    }
    UInt32 sampleByteCount = sizeof(int16_t);
    UInt32 bytesInBuffer = (UInt32)numberOfFramesToWrite * sampleByteCount;
    UInt32 bytesInOutputBuffer = (UInt32)numberOfFramesToWrite * sampleByteCount * 2;

    int16_t *sampleBufferLeft  = (int16_t *)malloc(bytesInBuffer);
    int16_t *sampleBufferRight = (int16_t *)malloc(bytesInBuffer);
    int16_t *outputBuffer      = (int16_t *)malloc(bytesInOutputBuffer);

    // Some magic to fill the buffers

    for (int j = 0; j < numberOfFramesToWrite; j++) {
        int16_t left  = CFSwapInt16HostToBig(sampleBufferLeft[j]);
        int16_t right = CFSwapInt16HostToBig(sampleBufferRight[j]);
        outputBuffer[(j * 2)] = left;
        outputBuffer[(j * 2) + 1] = right;
    }

    audioError = AudioFileWriteBytes(audioFile, false, bytePos, &bytesInOutputBuffer, &outputBuffer);
    assert(audioError == noErr);

    free(sampleBufferLeft);
    free(sampleBufferRight);
    free(outputBuffer);

    bytePos += bytesInOutputBuffer;
    currentFrame += numberOfFramesToWrite;
}
I also tried to just write the buffers at once (2048*L, 2048*R, etc.) which I did not expect to work, and it didn't.
How do I speed this up AND get a working wave file?
I tried making a buffer going LRLRLRLR (left / right), and then write that with just one AudioFileWriteBytes call.
This is the correct approach if you are using (the rather difficult) Audio File Services. Incidentally, the interleaved attempt above passes &outputBuffer (the address of the pointer variable) to AudioFileWriteBytes rather than outputBuffer itself, so the bytes written come from the wrong memory; that alone would fill the file with noise.
If possible, instead of the very low-level Audio File Services, use Extended Audio File Services. It is a wrapper around Audio File Services that has built-in format converters. Or better yet, use AVAudioFile; it is a wrapper around Extended Audio File Services that covers most common use cases.
If you are set on using Audio File Services, you'll have to interleave the audio manually, as you tried.
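For comparison, here is a minimal sketch of the AVAudioFile route; the 44.1 kHz 16-bit format, the 2048-frame buffer, and fileURL (assumed to point at a .wav destination) are illustrative assumptions, not details from the original post:
#import <AVFoundation/AVFoundation.h>

// Sketch: write 2048 interleaved stereo frames with AVAudioFile.
AVAudioFormat *fmt = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                                      sampleRate:44100.0
                                                        channels:2
                                                     interleaved:YES];
NSError *error = nil;
AVAudioFile *file = [[AVAudioFile alloc] initForWriting:fileURL
                                               settings:fmt.settings
                                           commonFormat:AVAudioPCMFormatInt16
                                            interleaved:YES
                                                  error:&error];
AVAudioPCMBuffer *pcm = [[AVAudioPCMBuffer alloc] initWithPCMFormat:fmt frameCapacity:2048];
int16_t *interleaved = pcm.int16ChannelData[0]; // LRLR... when the format is interleaved
for (int j = 0; j < 2048; j++) {
    interleaved[2 * j]     = 0; // left sample goes here
    interleaved[2 * j + 1] = 0; // right sample goes here
}
pcm.frameLength = 2048;
[file writeFromBuffer:pcm error:&error]; // handles any needed format/byte-order conversion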

Using CMSampleTimingInfo, CMSampleBuffer and AudioBufferList from raw PCM 16000 sample rate stream

I receive audio data and its size from an external source; the audio appears to be linear PCM, signed int16, but when I record it using an AssetWriter the audio file it saves is highly distorted and higher-pitched.
#define kSamplingRate 16000
#define kNumberChannels 1

UInt32 framesAlreadyWritten = 0;

- (AudioStreamBasicDescription)getAudioFormat {
    AudioStreamBasicDescription format;
    format.mSampleRate = kSamplingRate;
    format.mFormatID = kAudioFormatLinearPCM;
    format.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    format.mChannelsPerFrame = 1; // mono
    format.mBitsPerChannel = 16;
    format.mBytesPerFrame = sizeof(SInt16);
    format.mFramesPerPacket = 1;
    format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
    format.mReserved = 0;
    return format;
}

- (CMSampleBufferRef)createAudioSample:(const void *)audioData frames:(UInt32)len {
    AudioStreamBasicDescription asbd = [self getAudioFormat];
    CMSampleBufferRef buff = NULL;
    static CMFormatDescriptionRef format = NULL;
    OSStatus error = 0;

    if (format == NULL) {
        AudioChannelLayout acl;
        bzero(&acl, sizeof(acl));
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
        error = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, sizeof(acl), &acl, 0, NULL, NULL, &format);
    }

    CMTime duration = CMTimeMake(1, kSamplingRate);
    CMTime pts = CMTimeMake(framesAlreadyWritten, kSamplingRate);
    NSLog(@"-----------pts");
    CMTimeShow(pts);

    CMSampleTimingInfo timing = { duration, pts, kCMTimeInvalid };
    error = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, len, 1, &timing, 0, NULL, &buff);
    framesAlreadyWritten += len;
    if (error) {
        NSLog(@"CMSampleBufferCreate returned error: %ld", (long)error);
        return NULL;
    }

    AudioBufferList audioBufferList;
    audioBufferList.mNumberBuffers = 1;
    audioBufferList.mBuffers[0].mNumberChannels = asbd.mChannelsPerFrame;
    audioBufferList.mBuffers[0].mDataByteSize = len * asbd.mBytesPerFrame;
    audioBufferList.mBuffers[0].mData = (void *)audioData;

    error = CMSampleBufferSetDataBufferFromAudioBufferList(buff, kCFAllocatorDefault, kCFAllocatorDefault, 0, &audioBufferList);
    if (error) {
        NSLog(@"CMSampleBufferSetDataBufferFromAudioBufferList returned error: %ld", (long)error);
        return NULL;
    }
    return buff;
}
Not sure why you're dividing len by two, but your presentation time should progress instead of being constant, something like:
CMTime time = CMTimeMake(framesAlreadyWritten , kSamplingRate);

How to resample PCM data in iOS

I want to use AudioConverterFillComplexBuffer to convert the sample rate of a PCM buffer (32 kHz to 44.1 kHz), but I don't understand why the converted audio sounds wrong (lots of noise). Here is the main code:
struct AudioFrame {
    int   samples;        // number of samples in this frame, e.g. 320
    int   bytesPerSample; // number of bytes per sample: 2 for PCM16
    int   channels;       // number of channels (data are interleaved if stereo)
    int   samplesPerSec;  // sampling rate
    void *buffer;         // data buffer
};
- (void)convertAudioFrame:(AudioFrame *)buffer outPutData:(unsigned char **)outPutData outPutDataSize:(UInt32 *)outPutDataSize {
    if (buffer->bytesPerSample != self.unitDescription.mBitsPerChannel ||
        buffer->channels != self.unitDescription.mChannelsPerFrame ||
        buffer->samplesPerSec != self.unitDescription.mSampleRate) {

        // describe the input format
        AudioStreamBasicDescription inputDescription = {0};
        inputDescription.mFormatID = kAudioFormatLinearPCM;
        inputDescription.mFormatFlags = kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
        inputDescription.mChannelsPerFrame = buffer->channels;
        inputDescription.mSampleRate = buffer->samplesPerSec;
        inputDescription.mBitsPerChannel = 16;
        inputDescription.mBytesPerFrame = (inputDescription.mBitsPerChannel / 8) * inputDescription.mChannelsPerFrame;
        inputDescription.mFramesPerPacket = 1;
        inputDescription.mBytesPerPacket = inputDescription.mBytesPerFrame;

        // describe the output format
        AudioStreamBasicDescription outputDescription = {0};
        outputDescription.mSampleRate = 44100;
        outputDescription.mFormatID = kAudioFormatLinearPCM;
        outputDescription.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        outputDescription.mChannelsPerFrame = 1;
        outputDescription.mFramesPerPacket = 1;
        outputDescription.mBitsPerChannel = 16;
        outputDescription.mBytesPerFrame = (outputDescription.mBitsPerChannel / 8) * outputDescription.mChannelsPerFrame;
        outputDescription.mBytesPerPacket = outputDescription.mBytesPerFrame;

        // create an audio converter
        AudioConverterRef audioConverter;
        OSStatus status = AudioConverterNew(&inputDescription, &outputDescription, &audioConverter);
        [self checkError:status errorMsg:@"AudioConverterNew error"];
        if (!audioConverter) {
            *outPutDataSize = 0;
            return;
        }

        UInt32 outputBytes = outputDescription.mBytesPerPacket * (buffer->samples * buffer->bytesPerSample / inputDescription.mBytesPerPacket);
        unsigned char *outputBuffer = (unsigned char *)malloc(outputBytes);
        memset(outputBuffer, 0, outputBytes);

        AudioBuffer inputBuffer;
        inputBuffer.mNumberChannels = inputDescription.mChannelsPerFrame;
        inputBuffer.mDataByteSize = buffer->samples * buffer->bytesPerSample;
        inputBuffer.mData = buffer->buffer;

        AudioBufferList outputBufferList;
        outputBufferList.mNumberBuffers = 1;
        outputBufferList.mBuffers[0].mNumberChannels = outputDescription.mChannelsPerFrame;
        outputBufferList.mBuffers[0].mDataByteSize = outputBytes;
        outputBufferList.mBuffers[0].mData = outputBuffer;

        UInt32 outputDataPacketSize = outputBytes / outputDescription.mBytesPerPacket;
        self.currentBuffer = &inputBuffer;
        self.currentInputDescription = inputDescription;

        // convert
        OSStatus result = AudioConverterFillComplexBuffer(audioConverter,
                                                          converterComplexInputDataProc,
                                                          (__bridge void *)self,
                                                          &outputDataPacketSize,
                                                          &outputBufferList,
                                                          NULL);
        [self checkError:result errorMsg:@"AudioConverterConvertBuffer error"];

        *outPutData = outputBuffer;
        *outPutDataSize = outputBytes;
        AudioConverterDispose(audioConverter);
    }
}
// conversion input callback
OSStatus converterComplexInputDataProc(AudioConverterRef inAudioConverter,
                                       UInt32 *ioNumberDataPackets,
                                       AudioBufferList *ioData,
                                       AudioStreamPacketDescription **ioDataPacketDescription,
                                       void *inUserData) {
    XMMicAudioManager *self = (__bridge XMMicAudioManager *)inUserData;
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = *(self.currentBuffer);
    *ioNumberDataPackets = ioData->mBuffers[0].mDataByteSize / self.currentInputDescription.mBytesPerPacket;
    return 0;
}
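One thing worth checking in an input proc like the one above (an observation, not from the original post): AudioConverterFillComplexBuffer may call the proc more than once per conversion, and the proc must report zero packets once its data has been handed over, or the converter keeps re-reading the same samples. A sketch of that guard, using a hypothetical currentBufferUsed flag on the manager:
// Variant of the input proc above with a one-shot guard.
// 'currentBufferUsed' is a hypothetical BOOL property, reset before each conversion.
OSStatus converterComplexInputDataProc(AudioConverterRef inAudioConverter,
                                       UInt32 *ioNumberDataPackets,
                                       AudioBufferList *ioData,
                                       AudioStreamPacketDescription **ioDataPacketDescription,
                                       void *inUserData) {
    XMMicAudioManager *self = (__bridge XMMicAudioManager *)inUserData;
    if (self.currentBufferUsed) {
        *ioNumberDataPackets = 0; // no more input: the converter returns what it has
        return noErr;
    }
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = *(self.currentBuffer);
    *ioNumberDataPackets = ioData->mBuffers[0].mDataByteSize / self.currentInputDescription.mBytesPerPacket;
    self.currentBufferUsed = YES;
    return noErr;
}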

AudioConverter number of packets is wrong

I've set up a class to convert audio from one format to another given an input and an output AudioStreamBasicDescription. When I convert Linear PCM from the mic to iLBC, it works and gives me 6 packets when I give it 1024 packets from the AudioUnitRender function. I then send those 226 bytes via UDP to the same app running on a different device. The problem is that when I use the same class to convert back into Linear PCM to feed an audio unit input, AudioConverterFillComplexBuffer doesn't give 1024 packets, it gives 960... This means the audio unit input is expecting 4096 bytes (2048 x 2 for stereo) but I can only give it 3190 or so, so it sounds really crackly and distorted...
If I give AudioConverter 1024 packets of LinearPCM, convert to iLBC, convert back to LinearPCM, surely I should get 1024 packets again?
Audio converter function:
- (void)doConvert {
    // Start converting
    if (converting) return;
    converting = YES;

    while (true) {
        // Get next buffer
        id bfr = [buffers getNextBuffer];
        if (!bfr) {
            converting = NO;
            return;
        }

        // Get info
        NSArray *bfrs = ([bfr isKindOfClass:[NSArray class]] ? bfr : @[bfr]);
        int bfrSize = 0;
        for (NSData *dat in bfrs) bfrSize += dat.length;
        int inputPackets = bfrSize / self.inputFormat.mBytesPerPacket;
        int outputPackets = (inputPackets * self.inputFormat.mFramesPerPacket) / self.outputFormat.mFramesPerPacket;

        // Create output buffer
        AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) * self.outputFormat.mChannelsPerFrame);
        bufferList->mNumberBuffers = self.outputFormat.mChannelsPerFrame;
        for (int i = 0; i < self.outputFormat.mChannelsPerFrame; i++) {
            bufferList->mBuffers[i].mNumberChannels = 1;
            bufferList->mBuffers[i].mDataByteSize = 4 * 1024;
            bufferList->mBuffers[i].mData = malloc(bufferList->mBuffers[i].mDataByteSize);
        }

        // Create input buffer
        AudioBufferList *inputBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) * bfrs.count);
        inputBufferList->mNumberBuffers = bfrs.count;
        for (int i = 0; i < bfrs.count; i++) {
            inputBufferList->mBuffers[i].mNumberChannels = 1;
            inputBufferList->mBuffers[i].mDataByteSize = [[bfrs objectAtIndex:i] length];
            inputBufferList->mBuffers[i].mData = (void *)[[bfrs objectAtIndex:i] bytes];
        }

        // Create sound data payload
        struct SoundDataPayload payload;
        payload.data = inputBufferList;
        payload.numPackets = inputPackets;
        payload.packetDescriptions = NULL;
        payload.used = NO;

        // Convert data
        UInt32 numPackets = outputPackets;
        OSStatus err = AudioConverterFillComplexBuffer(converter, acvConverterComplexInput, &payload, &numPackets, bufferList, NULL);
        if (err)
            continue;

        // Check how to output
        if (bufferList->mNumberBuffers > 1) {
            // Output as array
            NSMutableArray *array = [NSMutableArray arrayWithCapacity:bufferList->mNumberBuffers];
            for (int i = 0; i < bufferList->mNumberBuffers; i++)
                [array addObject:[NSData dataWithBytes:bufferList->mBuffers[i].mData length:bufferList->mBuffers[i].mDataByteSize]];
            // Save
            [convertedBuffers addBuffer:array];
        } else {
            // Output as data
            NSData *newData = [NSData dataWithBytes:bufferList->mBuffers[0].mData length:bufferList->mBuffers[0].mDataByteSize];
            // Save
            [convertedBuffers addBuffer:newData];
        }

        // Free memory
        for (int i = 0; i < bufferList->mNumberBuffers; i++)
            free(bufferList->mBuffers[i].mData);
        free(inputBufferList);
        free(bufferList);

        // Tell delegate
        if (self.convertHandler)
            //dispatch_async(dispatch_get_main_queue(), self.convertHandler);
            self.convertHandler();
    }
}
Formats when converting to iLBC:
// Get input format from mic
UInt32 size = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription inputDesc;
AudioUnitGetProperty(self.ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &inputDesc, &size);
// Set output stream description
size = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription outputDescription;
memset(&outputDescription, 0, size);
outputDescription.mFormatID = kAudioFormatiLBC;
OSStatus err = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &outputDescription);
Formats when converting from iLBC:
// Set input stream description
size = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription inputDescription;
memset(&inputDescription, 0, size);
inputDescription.mFormatID = kAudioFormatiLBC;
AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &inputDescription);
// Set output stream description
UInt32 size = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription outputDesc;
AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &outputDesc, &size);
You have to use an intermediate buffer to save up enough bytes from enough incoming packets to exactly match the number requested by the audio unit input. Depending on any one UDP packet in compressed format to be exactly the right size won't work.
The AudioConverter may buffer samples and change the packet sizes depending on the compression format.
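A minimal sketch of that intermediate-buffer idea (plain C, my illustration rather than anyone's posted code): accumulate decoded PCM as it arrives, and hand the audio unit a block only once enough bytes are available.
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FIFO_CAPACITY (64 * 1024)

typedef struct {
    uint8_t data[FIFO_CAPACITY];
    size_t  count; // bytes currently buffered
} PCMFifo;

// Append decoded PCM as packets arrive (e.g. from the converter).
static void fifo_append(PCMFifo *f, const void *bytes, size_t len) {
    if (f->count + len > FIFO_CAPACITY) return; // sketch: drop on overflow
    memcpy(f->data + f->count, bytes, len);
    f->count += len;
}

// The render callback asks for an exact block (e.g. 4096 bytes); only
// succeed when that much has accumulated, otherwise output silence.
static bool fifo_read_exact(PCMFifo *f, void *out, size_t len) {
    if (f->count < len) return false;
    memcpy(out, f->data, len);
    memmove(f->data, f->data + len, f->count - len);
    f->count -= len;
    return true;
}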

iOS: How to read an audio file into a float buffer

I have a really short audio file, say a 10th of a second, in (say) .PCM format.
I want to use RemoteIO to loop through the file repeatedly to produce a continuous musical tone. So how do I read this into an array of floats?
EDIT: while I could probably dig out the file format, extract the file into an NSData, and process it manually, I'm guessing there is a more sensible generic approach... (one that, e.g., copes with different formats)
You can use ExtAudioFile to read data from any supported data format in numerous client formats. Here is an example to read a file as 16-bit integers:
CFURLRef url = /* ... */;
ExtAudioFileRef eaf;
OSStatus err = ExtAudioFileOpenURL(url, &eaf);
if(noErr != err)
    /* handle error */

AudioStreamBasicDescription format;
format.mSampleRate = 44100;
format.mFormatID = kAudioFormatLinearPCM;
format.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
format.mBitsPerChannel = 16;
format.mChannelsPerFrame = 2;
format.mBytesPerFrame = format.mChannelsPerFrame * 2;
format.mFramesPerPacket = 1;
format.mBytesPerPacket = format.mFramesPerPacket * format.mBytesPerFrame;

err = ExtAudioFileSetProperty(eaf, kExtAudioFileProperty_ClientDataFormat, sizeof(format), &format);

/* Read the file contents using ExtAudioFileRead */
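The elided read step could look something like this (a sketch assuming the 16-bit stereo client format above):
/* Sketch: drain the file 1024 frames at a time through the client format */
SInt16 data[2 * 1024]; // interleaved stereo
AudioBufferList abl;
abl.mNumberBuffers = 1;
abl.mBuffers[0].mNumberChannels = format.mChannelsPerFrame;
abl.mBuffers[0].mDataByteSize = sizeof(data);
abl.mBuffers[0].mData = data;

UInt32 frames = 1024;
while (noErr == ExtAudioFileRead(eaf, &frames, &abl) && frames > 0) {
    /* 'frames' frames of interleaved SInt16 are now in 'data' */
    frames = 1024;
    abl.mBuffers[0].mDataByteSize = sizeof(data); // reset; the read may shrink it
}
ExtAudioFileDispose(eaf);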
If you wanted Float32 data, you would set up format like this:
format.mFormatID = kAudioFormatLinearPCM;
format.mFormatFlags = kAudioFormatFlagsNativeFloatPacked;
format.mBitsPerChannel = 32;
This is the code I have used to convert my audio data (audio file) into a floating-point representation and save it into an array.
- (void)PrintFloatDataFromAudioFile {
    NSString *name = @"Filename"; // YOUR FILE NAME
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"m4a"]; // SPECIFY YOUR FILE FORMAT
    const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];
    CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);

    ExtAudioFileRef fileRef;
    ExtAudioFileOpenURL(inputFileURL, &fileRef);

    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100; // GIVE YOUR SAMPLING RATE
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    audioFormat.mBitsPerChannel = sizeof(Float32) * 8;
    audioFormat.mChannelsPerFrame = 1; // Mono
    audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(Float32); // == sizeof(Float32)
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mBytesPerFrame; // == sizeof(Float32)

    // 3) Apply the client format to the Extended Audio File
    ExtAudioFileSetProperty(fileRef,
                            kExtAudioFileProperty_ClientDataFormat,
                            sizeof(AudioStreamBasicDescription), // = audioFormat
                            &audioFormat);

    int numSamples = 1024; // how many samples to read in at a time
    UInt32 sizePerPacket = audioFormat.mBytesPerPacket; // = sizeof(Float32) = 4 bytes
    UInt32 packetsPerBuffer = numSamples;
    UInt32 outputBufferSize = packetsPerBuffer * sizePerPacket;

    // The buffer the converted samples are read into
    UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize);

    AudioBufferList convertedData;
    convertedData.mNumberBuffers = 1; // set this to 1 for mono
    convertedData.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame; // also = 1
    convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
    convertedData.mBuffers[0].mData = outputBuffer;

    UInt32 frameCount = numSamples;
    float *samplesAsCArray;
    int j = 0;
    double floatDataArray[882000]; // SPECIFY YOUR DATA LIMIT; should be at least the number of samples in the file

    while (frameCount > 0) {
        ExtAudioFileRead(fileRef, &frameCount, &convertedData);
        if (frameCount > 0) {
            AudioBuffer audioBuffer = convertedData.mBuffers[0];
            samplesAsCArray = (float *)audioBuffer.mData; // cast mData to float
            for (int i = 0; i < frameCount; i++) { // only the frames actually read
                floatDataArray[j] = (double)samplesAsCArray[i]; // put the data into the float array
                printf("\n%f", floatDataArray[j]); // print the array's data, ranging -1 to +1
                j++;
            }
        }
    }

    free(outputBuffer);
    ExtAudioFileDispose(fileRef);
    CFRelease(inputFileURL);
    CFRelease(str);
}
I'm not familiar with RemoteIO, but I am familiar with WAVs and thought I'd post some format information on them. If you need to, you should be able to easily parse out information such as duration, bit rate, etc.
First, here is an excellent website detailing the WAVE PCM soundfile format. This site also does an excellent job illustrating what the different byte addresses inside the "fmt" sub-chunk refer to.
WAVE File format
A WAVE is composed of a "RIFF" chunk and subsequent sub-chunks
Every chunk is at least 8 bytes
First 4 bytes is the Chunk ID
Next 4 bytes is the Chunk Size (The Chunk Size gives the size of the remainder of the chunk excluding the 8 bytes used for the Chunk ID and Chunk Size)
Every WAVE has the following chunks / sub chunks
"RIFF" (first and only chunk. All the rest are technically sub-chunks.)
"fmt " (usually the first sub-chunk after "RIFF" but can be anywhere between "RIFF" and "data". This chunk has information about the WAV such as number of channels, sample rate, and byte rate)
"data" (must be the last sub-chunk and contains all the sound data)
Common WAVE Audio Formats:
PCM
IEEE_Float
PCM_EXTENSIBLE (with a sub format of PCM or IEEE_FLOAT)
WAVE Duration and Size
A WAVE File's duration can be calculated as follows:
seconds = DataChunkSize / ByteRate
Where
ByteRate = SampleRate * NumChannels * BitsPerSample/8
and DataChunkSize does not include the 8 bytes reserved for the ID and Size of the "data" sub-chunk.
Knowing this, the DataChunkSize can be calculated if you know the duration of the WAV and the ByteRate.
DataChunkSize = seconds * ByteRate
This can be useful for calculating the size of the wav data when converting from formats like mp3 or wma. Note that a typical wav header is 44 bytes followed by DataChunkSize (this is always the case if the wav was converted using the Normalizer tool - at least as of this writing).
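As a worked example (my own illustration): a 10-second, 44.1 kHz, 16-bit stereo WAV has ByteRate = 44100 × 2 × 16/8 = 176,400 bytes per second, so DataChunkSize = 10 × 176,400 = 1,764,000 bytes, and the file on disk is roughly that plus the 44-byte header.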
Update for Swift 5
This is a simple snippet that helps get your audio file into an array of floats. It works for both mono and stereo audio; to get the second channel of stereo audio, just uncomment samples2.
import AVFoundation
//..
do {
    guard let url = Bundle.main.url(forResource: "audio_example", withExtension: "wav") else { return }
    let file = try AVAudioFile(forReading: url)
    if let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                  sampleRate: file.fileFormat.sampleRate,
                                  channels: file.fileFormat.channelCount,
                                  interleaved: false),
       let buf = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(file.length)) {
        try file.read(into: buf)
        guard let floatChannelData = buf.floatChannelData else { return }
        let frameLength = Int(buf.frameLength)
        let samples = Array(UnsafeBufferPointer(start: floatChannelData[0], count: frameLength))
        // let samples2 = Array(UnsafeBufferPointer(start: floatChannelData[1], count: frameLength))
        print("samples")
        print(samples.count)
        print(samples.prefix(10))
        // print(samples2.prefix(10))
    }
} catch {
    print("Audio Error: \(error)")
}
