How to correctly update AUSampler's loadInstrument property?

I have an AUGraph consisting of two nodes: an AUSampler unit and an output unit.
I am loading samples into the AUSampler from sf2 files. I have multiple sf2 files and want to switch between them at runtime.
Currently I have the following code to load an sf2 file:
- (void)loadSoundFontWithName:(NSString *)name {
    CheckError(AUGraphStop(_midiPlayer.graph), "couldn't stop graph");

    OSStatus result = noErr;

    // fill out an instrument data structure
    AUSamplerInstrumentData instdata;
    NSString *presetPath = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"Sampler Files/%@", name]
                                                           ofType:@"sf2"];
    const char *presetPathC = [presetPath cStringUsingEncoding:NSUTF8StringEncoding];
    NSLog(@"presetPathC: %s", presetPathC);
    CFURLRef presetURL = CFURLCreateFromFileSystemRepresentation(
        kCFAllocatorDefault,
        (const UInt8 *)presetPathC,
        strlen(presetPathC), // byte length of the C string, not [presetPath length]
        false);
    instdata.fileURL = presetURL;
    instdata.instrumentType = kInstrumentType_DLSPreset;
    instdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
    instdata.bankLSB = kAUSampler_DefaultBankLSB;
    instdata.presetID = (UInt8)0;

    // set the kAUSamplerProperty_LoadInstrument property
    result = AudioUnitSetProperty(_midiPlayer.instrumentUnit,
                                  kAUSamplerProperty_LoadInstrument,
                                  kAudioUnitScope_Global,
                                  0,
                                  &instdata,
                                  sizeof(instdata));

    // check for errors
    NSCAssert(result == noErr,
              @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
              (int)result,
              (const char *)&result);

    CFRelease(presetURL);
    //===============
    CheckError(AUGraphStart(_midiPlayer.graph), "couldn't start graph");
}
It works, but only if no sound is playing at the moment of switching to another file. If the property is set while the graph is rendering, the app crashes with EXC_BAD_ACCESS in DLSSample::GetMoreFrames(unsigned long long, unsigned long long, void*).
So what is the right way to update this property?

Calling
AudioUnitReset(_midiPlayer.instrumentUnit, kAudioUnitScope_Global, 0);
just after the invocation of AudioUnitSetProperty seems to solve my problem.
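In context (using the same variables as the question's loadSoundFontWithName:), the sequence would look like this:
result = AudioUnitSetProperty(_midiPlayer.instrumentUnit,
                              kAUSamplerProperty_LoadInstrument,
                              kAudioUnitScope_Global,
                              0,
                              &instdata,
                              sizeof(instdata));
if (result == noErr) {
    // Flush any voices still rendering samples from the previous sound font.
    AudioUnitReset(_midiPlayer.instrumentUnit, kAudioUnitScope_Global, 0);
}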

Related

How to load Sound Font

I'm having no luck loading a SoundFont file (.SF2) in my iOS app. I initially tried using Apple's code from Technical Note TN2283:
- (OSStatus)loadFromDLSOrSoundFont:(NSURL *)bankURL withPatch:(int)presetNumber {
    OSStatus result = noErr;

    // fill out an instrument data structure
    AUSamplerInstrumentData instdata;
    instdata.bankURL = (CFURLRef)bankURL;
    instdata.instrumentType = kInstrumentType_DLSPreset;
    instdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
    instdata.bankLSB = kAUSampler_DefaultBankLSB;
    instdata.presetID = (UInt8)presetNumber;

    // set the kAUSamplerProperty_LoadInstrument property
    result = AudioUnitSetProperty(self.mySamplerUnit,
                                  kAUSamplerProperty_LoadInstrument,
                                  kAudioUnitScope_Global,
                                  0,
                                  &instdata,
                                  sizeof(instdata));

    // check for errors
    NSCAssert(result == noErr,
              @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
              (int)result,
              (const char *)&result);

    return result;
}
But the compiler complains that 'No member named 'bankURL' in struct AUSamplerInstrumentData' - which is true, the struct does not contain a 'bankURL' member.
I then came across the following code, which I believe is from Apple:
- (OSStatus)loadSoundFont:(NSURL *)bankURL withPatch:(int)presetNumber
{
    OSStatus result = noErr;

    // fill out a bank preset data structure
    AUSamplerBankPresetData bpdata;
    bpdata.bankURL = (__bridge CFURLRef)bankURL;
    bpdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
    bpdata.bankLSB = kAUSampler_DefaultBankLSB;
    bpdata.presetID = (UInt8)presetNumber;

    // set the kAUSamplerProperty_LoadPresetFromBank property
    result = AudioUnitSetProperty(self.samplerUnit,
                                  kAUSamplerProperty_LoadPresetFromBank,
                                  kAudioUnitScope_Global,
                                  0,
                                  &bpdata,
                                  sizeof(bpdata));

    // check for errors
    NSCAssert(result == noErr,
              @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
              (int)result,
              (const char *)&result);

    return result;
}
This all looks correct, but when I attempt to load a sound font using this method, as follows:
NSURL *SFURL = [[NSBundle mainBundle] URLForResource:@"YAMAHA DX7Piano" withExtension:@"SF2"];
[self loadSoundFont:SFURL withPatch:0];
it throws the error "Unable to set the preset property on the Sampler..". This led me to think there was some error in how I specified the patch number, such as supplying a non-existent patch number. But I eventually discovered that the NSURL I was supplying was null, so I tried specifying the URL as follows:
NSString *resources = [[NSBundle mainBundle] resourcePath];
NSURL *SFURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%@", resources, @"YAMAHA DX7Piano.SF2"]];
This got me a step closer. I think I am now supplying a valid URL to the sound font file in my app bundle, but it still is not working. The console now tells me:
ERROR: [0x19a824310] 486: DLS/SF2 bank load failed
There is a piece of the puzzle missing and I can't see what.
Well, I found the solution. The sound font wasn't loading because it was not added to the app bundle correctly. I had dragged it into Resources, but then had to add it to 'Copy Bundle Resources' in the project's 'Build Phases'.
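A quick way to catch this class of problem at runtime is to assert that the bundle actually resolves the resource before handing it to the sampler. A minimal check, using the file name from the question:
NSURL *sfURL = [[NSBundle mainBundle] URLForResource:@"YAMAHA DX7Piano" withExtension:@"SF2"];
NSAssert(sfURL != nil, @"Sound font not found in bundle - check Build Phases > Copy Bundle Resources");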

Decoding H264 with the VideoToolbox API fails with Error -8971 in VTDecompressionSessionCreate

I am trying to write a video decoder using the hardware-accelerated Video Toolbox decoder. But if I try to initialize the decoding session as in the example posted below, I get error -8971 while calling VTDecompressionSessionCreate. Can anyone tell me what I am doing wrong here?
Thank you and best regards,
Oliver
OSStatus status;
int tmpWidth = sps.EncodedWidth();
int tmpHeight = sps.EncodedHeight();
NSLog(@"Got new Width and Height from SPS - %dx%d", tmpWidth, tmpHeight);

const VTDecompressionOutputCallbackRecord callback = { ReceivedDecompressedFrame, self };

status = CMVideoFormatDescriptionCreate(NULL,
                                        kCMVideoCodecType_H264,
                                        tmpWidth,
                                        tmpHeight,
                                        NULL,
                                        &decoderFormatDescription);
if (status == noErr)
{
    // Set the pixel attributes for the destination buffer
    CFMutableDictionaryRef destinationPixelBufferAttributes = CFDictionaryCreateMutable(
        NULL, // CFAllocatorRef allocator
        0,    // CFIndex capacity
        &kCFTypeDictionaryKeyCallBacks,
        &kCFTypeDictionaryValueCallBacks);
    SInt32 destinationPixelType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange;
    CFDictionarySetValue(destinationPixelBufferAttributes, kCVPixelBufferPixelFormatTypeKey, CFNumberCreate(NULL, kCFNumberSInt32Type, &destinationPixelType));
    CFDictionarySetValue(destinationPixelBufferAttributes, kCVPixelBufferWidthKey, CFNumberCreate(NULL, kCFNumberSInt32Type, &tmpWidth));
    CFDictionarySetValue(destinationPixelBufferAttributes, kCVPixelBufferHeightKey, CFNumberCreate(NULL, kCFNumberSInt32Type, &tmpHeight));
    CFDictionarySetValue(destinationPixelBufferAttributes, kCVPixelBufferOpenGLCompatibilityKey, kCFBooleanTrue);

    // Set the decoder parameters
    CFMutableDictionaryRef decoderParameters = CFDictionaryCreateMutable(
        NULL, // CFAllocatorRef allocator
        0,    // CFIndex capacity
        &kCFTypeDictionaryKeyCallBacks,
        &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(decoderParameters, kVTDecompressionPropertyKey_RealTime, kCFBooleanTrue);

    // Create the decompression session
    // Throws error -8971 (codecExtensionNotFoundErr)
    status = VTDecompressionSessionCreate(NULL, decoderFormatDescription, decoderParameters, destinationPixelBufferAttributes, &callback, &decoderDecompressionSession);

    // release the dictionaries
    CFRelease(destinationPixelBufferAttributes);
    CFRelease(decoderParameters);

    // check the status
    if (status != noErr)
    {
        NSLog(@"Error %d while creating Video Decompression Session.", (int)status);
        continue;
    }
}
else
{
    NSLog(@"Error %d while creating Video Format Description.", (int)status);
    continue;
}
I also stumbled on kVTVideoDecoderBadDataErr. In my case I was replacing the 0x00000001 header with the size of the NAL package, but that size included the 4 bytes of the header itself; that was the reason. I changed the size to exclude those 4 bytes (frame_size = sizeof(NAL) - 4). This size must be encoded in big-endian.
You need to create the CMFormatDescriptionRef from your SPS and PPS, like:
CMFormatDescriptionRef decoderFormatDescription;
const uint8_t * const parameterSetPointers[2] = { (const uint8_t *)[currentSps bytes], (const uint8_t *)[currentPps bytes] };
const size_t parameterSetSizes[2] = { [currentSps length], [currentPps length] };
status = CMVideoFormatDescriptionCreateFromH264ParameterSets(NULL,
                                                             2,
                                                             parameterSetPointers,
                                                             parameterSetSizes,
                                                             4,
                                                             &decoderFormatDescription);
Also, if you receive your video data in Annex B format, you need to remove the start code and replace it with the 4-byte size information so the decoder recognizes it as AVCC-formatted data (that's what the 5th parameter to CMVideoFormatDescriptionCreateFromH264ParameterSets is for).
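A minimal sketch of that conversion, assuming nalu points at a 4-byte start code and naluPayloadLength counts the payload only (both names are hypothetical):
// Overwrite the Annex B start code 0x00000001 with a 4-byte big-endian length,
// producing the AVCC framing the decoder expects.
uint32_t lengthBE = CFSwapInt32HostToBig((uint32_t)naluPayloadLength);
memcpy(nalu, &lengthBE, sizeof(lengthBE));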
@Joride
Refer to http://www.szatmary.org/blog/25
It explains that the header (first) byte of each buffer within a NALU describes the buffer's type. You need to mask off those bits and compare them to the table provided; note the comment about the bit fields. Mask the byte with 0x1F to get the type value.
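For example, extracting the type from the first header byte (type values per the table in the blog post, e.g. 5 = IDR slice, 7 = SPS, 8 = PPS):
uint8_t nalType = naluHeaderByte & 0x1F; // the lower 5 bits hold nal_unit_type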

AudioQueue callback not being called

So, basically, I want to play some audio files (mostly mp3 and caf). But the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
    CAStreamBasicDescription      mDataFormat;
    AudioQueueRef                 mQueue;
    AudioQueueBufferRef           mBuffers[kBufferNum];
    AudioFileID                   mAudioFile;
    UInt32                        bufferByteSize;
    SInt64                        mCurrentPacket;
    UInt32                        mNumPacketsToRead;
    AudioStreamPacketDescription *mPacketDescs;
    bool                          mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    NSLog(@"HandleOutput");
    AQPlayerState *pAqData = (AQPlayerState *)aqData;
    if (pAqData->mIsRunning == false) return;

    UInt32 numBytesReadFromFile;
    UInt32 numPackets = pAqData->mNumPacketsToRead;
    AudioFileReadPackets(pAqData->mAudioFile,
                         false,
                         &numBytesReadFromFile,
                         pAqData->mPacketDescs,
                         pAqData->mCurrentPacket,
                         &numPackets,
                         inBuffer->mAudioData);
    if (numPackets > 0) {
        inBuffer->mAudioDataByteSize = numBytesReadFromFile;
        AudioQueueEnqueueBuffer(pAqData->mQueue,
                                inBuffer,
                                (pAqData->mPacketDescs ? numPackets : 0),
                                pAqData->mPacketDescs);
        pAqData->mCurrentPacket += numPackets;
    } else {
//        AudioQueueStop(pAqData->mQueue, false);
//        AudioQueueDispose(pAqData->mQueue, true);
//        AudioFileClose(pAqData->mAudioFile);
//        free(pAqData->mPacketDescs);
//        free(pAqData->mFloatBuffer);
        pAqData->mIsRunning = false;
    }
}
And here's my method:
- (void)playFile
{
    AQPlayerState aqData;

    // get the source file
    NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
    NSURL *url2 = [NSURL fileURLWithPath:p];
    CFURLRef srcFile = (__bridge CFURLRef)url2;
    OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
    CFRelease(srcFile);
    CheckError(result, "Error opening sound file");

    UInt32 size = sizeof(aqData.mDataFormat);
    CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
               "Error getting file's data format");
    CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
               "Error AudioQueueNewOutput");

    // we need to calculate how many packets we read at a time and how big a buffer we need
    // we base this on the size of the packets in the file and an approximate duration for each buffer
    {
        bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);

        // first check to see what the max size of a packet is - if it is bigger
        // than our allocation default size, that needs to become larger
        UInt32 maxPacketSize;
        size = sizeof(maxPacketSize);
        CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
                   "Error getting max packet size");

        // adjust buffer size to represent about a second of audio based on this format
        CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
        if (isFormatVBR) {
            aqData.mPacketDescs = new AudioStreamPacketDescription[aqData.mNumPacketsToRead];
        } else {
            aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
        }
        printf("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
    }

    // if the file has a magic cookie, we should get it and set it on the AQ
    size = sizeof(UInt32);
    result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
    if (!result && size) {
        char *cookie = new char[size];
        CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
                   "Error getting cookie from file");
        CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
                   "Error setting cookie to file");
        delete[] cookie;
    }

    aqData.mCurrentPacket = 0;
    for (int i = 0; i < kBufferNum; ++i) {
        CheckError(AudioQueueAllocateBuffer(aqData.mQueue,
                                            aqData.bufferByteSize,
                                            &aqData.mBuffers[i]),
                   "Error AudioQueueAllocateBuffer");
        HandleOutputBuffer(&aqData,
                           aqData.mQueue,
                           aqData.mBuffers[i]);
    }

    // set the queue's gain
    Float32 gain = 1.0;
    CheckError(AudioQueueSetParameter(aqData.mQueue,
                                      kAudioQueueParam_Volume,
                                      gain),
               "Error AudioQueueSetParameter");

    aqData.mIsRunning = true;
    CheckError(AudioQueueStart(aqData.mQueue,
                               NULL),
               "Error AudioQueueStart");
}
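For reference, CalculateBytesForTime isn't shown above. A version close to Apple's DeriveBufferSize from the Audio Queue Services Programming Guide looks roughly like this (the 320 KB / 16 KB caps are Apple's suggested defaults):
void CalculateBytesForTime(AudioStreamBasicDescription inDesc,
                           UInt32 inMaxPacketSize,
                           Float64 inSeconds,
                           UInt32 *outBufferSize,
                           UInt32 *outNumPacketsToRead)
{
    static const UInt32 maxBufferSize = 0x50000; // cap at 320 KB
    static const UInt32 minBufferSize = 0x4000;  // at least 16 KB

    if (inDesc.mFramesPerPacket != 0) {
        // size the buffer to hold roughly inSeconds worth of packets
        Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
        *outBufferSize = (UInt32)(numPacketsForTime * inMaxPacketSize);
    } else {
        // no fixed frames-per-packet: fall back to the larger of the two bounds
        *outBufferSize = (maxBufferSize > inMaxPacketSize) ? maxBufferSize : inMaxPacketSize;
    }

    if (*outBufferSize > maxBufferSize && *outBufferSize > inMaxPacketSize)
        *outBufferSize = maxBufferSize;
    else if (*outBufferSize < minBufferSize)
        *outBufferSize = minBufferSize;

    *outNumPacketsToRead = *outBufferSize / inMaxPacketSize;
}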
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain(), and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are being fired.
What am I doing wrong? Thanks for any ideas.
What you are trying to do here is a basic example of audio playback using Audio Queues. Without going through your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code that does exactly what you're doing, without the extras that aren't really relevant (for example, why are you setting the audio gain?).
Somewhere else you were trying to play audio using audio units. Audio units are more complex than basic audio queue playback, and I wouldn't attempt them before being very comfortable with audio queues. But you can look at this example project for a basic example of audio queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of online tutorials is that they add extra stuff and often mix it with Objective-C code, when Core Audio is purely C (i.e. the extra stuff won't add anything to the learning process). I strongly recommend you go over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)

CFURLCreateDataAndPropertiesFromResource deprecated - looking for a substitute

Along with a bunch of other things included in Apple's Load Preset Demo sample code, the call to CFURLCreateDataAndPropertiesFromResource is now deprecated, but I can't find a substitute for it - neither an option-click nor a look at the reference tells me any more than that it is no longer the done thing.
CFDataRef propertyResourceData = 0;
Boolean status;
SInt32 errorCode = 0;
OSStatus result = noErr;

// Read from the URL and convert into a CFData chunk
status = CFURLCreateDataAndPropertiesFromResource(
    kCFAllocatorDefault,
    (__bridge CFURLRef)presetURL,
    &propertyResourceData,
    NULL,
    NULL,
    &errorCode
);
NSAssert(status == YES && propertyResourceData != 0, @"Unable to create data and properties from a preset. Error code: %d '%.4s'", (int)errorCode, (const char *)&errorCode);

// Convert the data object into a property list
CFPropertyListRef presetPropertyList = 0;
CFPropertyListFormat dataFormat = 0;
CFErrorRef errorRef = 0;
presetPropertyList = CFPropertyListCreateWithData(
    kCFAllocatorDefault,
    propertyResourceData,
    kCFPropertyListImmutable,
    &dataFormat,
    &errorRef
);

// Set the class info property for the Sampler unit using the property list as the value.
if (presetPropertyList != 0) {
    result = AudioUnitSetProperty(
        self.samplerUnit,
        kAudioUnitProperty_ClassInfo,
        kAudioUnitScope_Global,
        0,
        &presetPropertyList,
        sizeof(CFPropertyListRef)
    );
    CFRelease(presetPropertyList);
}

if (errorRef) CFRelease(errorRef);
CFRelease(propertyResourceData);

return result;
For the properties: CFURLCopyResourcePropertiesForKeys (example properties: kCFURLFileSizeKey and kCFURLContentModificationDateKey), or Foundation-style with [NSURL resourceValuesForKeys:error:].
For the data: +[NSData dataWithContentsOfURL:options:error:].
They're not documented as replacements, AFAIK, but most of these newer replacement APIs have been around for a few years now.
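For example, the Foundation-style property query might look like this (assuming presetURL is a file URL):
NSError *error = nil;
NSDictionary *props = [presetURL resourceValuesForKeys:@[NSURLFileSizeKey,
                                                         NSURLContentModificationDateKey]
                                                 error:&error];
NSNumber *fileSize = props[NSURLFileSizeKey];          // size in bytes
NSDate *modificationDate = props[NSURLContentModificationDateKey];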
Edit
In the example you posted in the edit, the program makes no request for properties, so you just want the data at the URL presetURL.
You can achieve this with:
NSURL *presetURL = ...;

// do review these options for your needs. you can make great
// optimizations if you use memory mapping or avoid unnecessary caching.
const NSDataReadingOptions DataReadingOptions = 0;

NSError *outError = nil;
NSData *data = [NSData dataWithContentsOfURL:presetURL
                                     options:DataReadingOptions
                                       error:&outError];
const bool status = nil != data; // << your `status` variable
if (!status) {
    // oops - an error was encountered getting the data; see `outError`
}
else {
    // use the data
}
I found that I could remove even more code by using just the following:
OSStatus result = noErr;
NSData *data = [NSData dataWithContentsOfURL:presetURL];
id propertyList = [NSPropertyListSerialization propertyListWithData:data options:NSPropertyListImmutable format:NULL error:NULL];

// Set the class info property for the Sampler unit using the property list as the value.
if (propertyList) {
    result = AudioUnitSetProperty(
        self.samplerUnit,
        kAudioUnitProperty_ClassInfo,
        kAudioUnitScope_Global,
        0,
        (__bridge CFPropertyListRef)propertyList,
        sizeof(CFPropertyListRef)
    );
}
return result;
I ended up using the code from https://developer.apple.com/library/mac/technotes/tn2283/_index.html#//apple_ref/doc/uid/DTS40011217-CH1-TNTAG2
- (OSStatus)loadSynthFromPresetURL:(NSURL *)presetURL {
    OSStatus result = noErr;

    AUSamplerInstrumentData auPreset = {0};
    auPreset.fileURL = (__bridge CFURLRef)presetURL;
    auPreset.instrumentType = kInstrumentType_AUPreset;

    result = AudioUnitSetProperty(self.samplerUnit,
                                  kAUSamplerProperty_LoadInstrument,
                                  kAudioUnitScope_Global,
                                  0,
                                  &auPreset,
                                  sizeof(auPreset));
    return result;
}
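Calling it with a preset bundled in the app might look like this (the file name here is hypothetical):
NSURL *presetURL = [[NSBundle mainBundle] URLForResource:@"MyInstrument" withExtension:@"aupreset"];
OSStatus result = [self loadSynthFromPresetURL:presetURL];
NSAssert(result == noErr, @"Failed to load preset: %d", (int)result);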

Play audio file using Audio Units?

I've successfully recorded audio from the microphone into an audio file using Audio Units with the help of openFrameworks and this website: http://atastypixel.com/blog/using-remoteio-audio-unit.
I want to be able to stream the file back through Audio Units and play the audio. According to Play an audio file using RemoteIO and Audio Unit, I can use ExtAudioFileOpenURL and ExtAudioFileRead. However, how do I play the audio data in my buffer?
This is what I currently have:
static OSStatus setupAudioFileRead() {
    // construct the file destination URL
    CFURLRef destinationURL = audioSystemFileURL();
    OSStatus status = ExtAudioFileOpenURL(destinationURL, &audioFileRef);
    CFRelease(destinationURL);
    if (checkStatus(status)) { ofLog(OF_LOG_ERROR, "ofxiPhoneSoundStream: Couldn't open file to read"); return status; }

    while (TRUE) {
        // Try to fill the buffer to capacity.
        UInt32 framesRead = 8000;
        status = ExtAudioFileRead(audioFileRef, &framesRead, &inputBufferList);

        // error check
        if (checkStatus(status)) { break; }

        // 0 frames read means EOF.
        if (framesRead == 0) { break; }

        // play audio???
    }
    return noErr;
}
From this author: http://atastypixel.com/blog/using-remoteio-audio-unit/ - if you scroll down to the PLAYBACK section, try something like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    // Notes: ioData contains buffers (may be more than one!)
    // Fill them up as much as you can. Remember to set the size value in each buffer to match how
    // much data is in the buffer.
    for (int i = 0; i < ioData->mNumberBuffers; i++)
    {
        AudioBuffer buffer = ioData->mBuffers[i];

        // copy from your own buffer ("yourBuffer" is a placeholder) to the output buffer
        UInt32 size = min(buffer.mDataByteSize, yourBuffer.size);
        memcpy(buffer.mData, yourBuffer.data, size);
        buffer.mDataByteSize = size; // indicate how much data we wrote in the buffer

        // To test if your Audio Unit setup is working - comment out the three
        // lines above and uncomment the for loop below to hear random noise
        /*
        UInt16 *frameBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames; j++) {
            frameBuffer[j] = rand();
        }
        */
    }
    return noErr;
}
If you are only looking to record from the mic to a file and play it back, Apple's SpeakHere sample is probably much more ready to use.
Basically:
1. Create a RemoteIO unit (see references about how to create RemoteIO);
2. Create a FilePlayer audio unit, which is a dedicated audio unit that reads an audio file and provides its audio data to output units, for example the RemoteIO unit created in step 1. To actually use the FilePlayer, a lot of settings (which file to play, which part of the file to play, etc.) need to be made on it;
3. Set the kAudioUnitProperty_SetRenderCallback and kAudioUnitProperty_StreamFormat properties of the RemoteIO unit. The first property is essentially a callback function from which the RemoteIO unit pulls audio data and plays it. The second property must be set in accordance with the StreamFormat supported by the FilePlayer; it can be derived from a get-property call on the FilePlayer;
4. Define the callback set in step 3, where the most important thing to do is ask the FilePlayer to render into the buffer provided by the callback, for which you will need to invoke AudioUnitRender() on the FilePlayer (see the sketch after this outline);
5. Finally, start the RemoteIO unit to play the file.
Above is just a preliminary outline of the basic things to do to play files using audio units on iOS. You can refer to Chris Adamson and Kevin Avila's Learning Core Audio for details.
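A minimal sketch of the render callback from step 4, assuming the FilePlayer unit is passed in through inRefCon when the callback is registered:
static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnit filePlayerUnit = (AudioUnit)inRefCon;
    // Pull the next inNumberFrames of file audio into the buffers RemoteIO provided.
    return AudioUnitRender(filePlayerUnit, ioActionFlags, inTimeStamp,
                           0, inNumberFrames, ioData);
}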
Here's a relatively simple approach that uses the audio unit mentioned in the Tasty Pixel blog. In the recording callback, instead of filling the buffer with data from the microphone, you could fill it with data from the file using ExtAudioFileRead. I'll try to paste an example below. Mind you, this will only work for .caf files.
In the start method, call a readAudio or initAudioFile function - something that just gets all the info about the file.
- (void)start {
    readAudio();
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}
Now in the readAudio method, you initialize the audio file reference like this:
ExtAudioFileRef fileRef;

void readAudio() {
    NSString *name = @"AudioFile";
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"caf"];
    const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];
    CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);

    AudioFileID fileID;
    OSStatus err = AudioFileOpenURL(inputFileURL, kAudioFileReadPermission, 0, &fileID);
    CheckError(err, "AudioFileOpenURL");

    err = ExtAudioFileOpenURL(inputFileURL, &fileRef);
    CheckError(err, "ExtAudioFileOpenURL");

    // ask ExtAudioFile to deliver samples in the client format used by the audio unit
    // (audioFormat is the AudioStreamBasicDescription set up elsewhere)
    err = ExtAudioFileSetProperty(fileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
    CheckError(err, "ExtAudioFileSetProperty");
}
Now that you have the audio data at hand, the next step is pretty easy. In the recordingCallback, read the data from the file instead of the mic:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc(inNumberFrames * 2);

    // Put buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Then:
    // Obtain the samples from the file
    OSStatus err = ExtAudioFileRead(fileRef, &inNumberFrames, &bufferList);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}
This worked for me.
