Core Audio: file playback render callback function - iOS

I am using RemoteIO Audio Unit for audio playback in my app with kAudioUnitProperty_ScheduledFileIDs.
Audio files are in PCM format. How can I implement a render callback function for this case, so I could manually modify buffer samples?
Here is my code:
static AudioComponentInstance audioUnit;
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
CheckError(AudioComponentInstanceNew(comp, &audioUnit), "error AudioComponentInstanceNew");
NSURL *playerFile = [[NSBundle mainBundle] URLForResource:@"short" withExtension:@"wav"];
AudioFileID audioFileID;
CheckError(AudioFileOpenURL((__bridge CFURLRef)playerFile, kAudioFileReadPermission, 0, &audioFileID), "error AudioFileOpenURL");
// Determine file properties
UInt64 packetCount;
UInt32 size = sizeof(packetCount);
CheckError(AudioFileGetProperty(audioFileID, kAudioFilePropertyAudioDataPacketCount, &size, &packetCount),
"AudioFileGetProperty(kAudioFilePropertyAudioDataPacketCount)");
AudioStreamBasicDescription dataFormat;
size = sizeof(dataFormat);
CheckError(AudioFileGetProperty(audioFileID, kAudioFilePropertyDataFormat, &size, &dataFormat),
"AudioFileGetProperty(kAudioFilePropertyDataFormat)");
// Assign the region to play
ScheduledAudioFileRegion region;
memset (&region.mTimeStamp, 0, sizeof(region.mTimeStamp));
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;
region.mCompletionProc = NULL;
region.mCompletionProcUserData = NULL;
region.mAudioFile = audioFileID;
region.mLoopCount = 0;
region.mStartFrame = 0;
region.mFramesToPlay = (UInt32)packetCount * dataFormat.mFramesPerPacket;
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0, &region, sizeof(region)),
"AudioUnitSetProperty(kAudioUnitProperty_ScheduledFileRegion)");
// Prime the player by reading some frames from disk
UInt32 defaultNumberOfFrames = 0;
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ScheduledFilePrime, kAudioUnitScope_Global, 0, &defaultNumberOfFrames, sizeof(defaultNumberOfFrames)),
"AudioUnitSetProperty(kAudioUnitProperty_ScheduledFilePrime)");
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = MyCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &callbackStruct, sizeof(callbackStruct)), "error AudioUnitSetProperty[kAudioUnitProperty_setRenderCallback]");
CheckError(AudioUnitInitialize(audioUnit), "error AudioUnitInitialize");
Callback function:
static OSStatus MyCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData){
printf("my callback");
return noErr;
}
Audio Unit start playback on button press:
- (IBAction)playSound:(id)sender {
CheckError(AudioOutputUnitStart(audioUnit), "error AudioOutputUnitStart");
}
This code fails at runtime with a kAudioUnitErr_InvalidProperty (-10879) error. The goal is to modify the buffer samples that have been read from the AudioFileID and send the result to the speakers.

Seeing as you are just getting familiar with Core Audio, I suggest you first get your RemoteIO callback working independently of your file player. Just remove all of your file-player-related code and try to get that working first.
Then, once you have that working, move on to incorporating your file player.
As far as I can see, what's wrong is that you are confusing the Audio File Services API with an audio unit. That API is used to read a file into a buffer which you would manually feed to RemoteIO; if you do want to go this route, use the Extended Audio File Services API instead, it's a LOT easier. The kAudioUnitProperty_ScheduledFileRegion property is supposed to be set on a file player audio unit. To get one of those, you create it the same way as your RemoteIO, except that AudioComponentDescription's componentSubType and componentType are kAudioUnitSubType_AudioFilePlayer and kAudioUnitType_Generator respectively. Then, once you have that unit, you connect it to the RemoteIO using the kAudioUnitProperty_MakeConnection property.
But seriously, start with just getting your RemoteIO callback working, then try making a file player audio unit and connecting it (without the callback), then go from there.
Ask very specific questions about each of these steps independently, posting code you have tried that's not working, and you'll get a ton of help.
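In case it helps, here is a rough, untested sketch of the file player setup described above, reusing the audioUnit (RemoteIO) and audioFileID from the question; the variable names are just for illustration and error checking is omitted:
AudioComponentDescription fpDesc = {0};
fpDesc.componentType = kAudioUnitType_Generator;
fpDesc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
fpDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent fpComp = AudioComponentFindNext(NULL, &fpDesc);
AudioUnit filePlayerUnit;
AudioComponentInstanceNew(fpComp, &filePlayerUnit);
// Connect the file player's output (element 0) to RemoteIO's input (element 0)
AudioUnitConnection connection;
connection.sourceAudioUnit = filePlayerUnit;
connection.sourceOutputNumber = 0;
connection.destInputNumber = 0;
AudioUnitSetProperty(audioUnit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, 0, &connection, sizeof(connection));
AudioUnitInitialize(filePlayerUnit);
AudioUnitInitialize(audioUnit);
// The scheduled-file properties go on the *file player*, not on RemoteIO
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileIDs, kAudioUnitScope_Global, 0, &audioFileID, sizeof(audioFileID));
// ...then set kAudioUnitProperty_ScheduledFileRegion, kAudioUnitProperty_ScheduledFilePrime and
// kAudioUnitProperty_ScheduleStartTimeStamp on filePlayerUnit (as in the question, but on this unit),
// and finally call AudioOutputUnitStart(audioUnit).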

Related

How to play a signal with AudioUnit (iOS)?

I need to generate a signal and play it with iPhone's speakers or a headset.
To do so I generate an interleaved signal. Then I need to instantiate an object of a class inherited from AudioUnit with the following configuration: 2 channels, a 44100 Hz sample rate, and some buffer size to store a few frames.
Then I need to write a callback method which will take a chunk of my signal and put it into the iPhone's output buffer.
The problem is that I have no idea how to write an AudioUnit-inherited class. I can't understand Apple's documentation regarding it, and all the examples I could find either read from a file and play it with a huge lag or use deprecated constructs.
I start to think I am stupid or something. Please, help...
To play audio to the iPhone's hardware with an AudioUnit, you don't derive from AudioUnit, as Core Audio is a C framework - instead you give it a render callback in which you feed the unit your audio samples. The following code sample shows you how. You need to replace the asserts with real error handling, and you'll probably want to change or at least inspect the audio unit's sample format using the kAudioUnitProperty_StreamFormat selector. My format happens to be 48 kHz floating-point interleaved stereo.
static OSStatus
renderCallback(
void* inRefCon,
AudioUnitRenderActionFlags* ioActionFlags,
const AudioTimeStamp* inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList* ioData)
{
// inRefCon contains your cookie
// write inNumberFrames to ioData->mBuffers[i].mData here
return noErr;
}
AudioUnit
createAudioUnit() {
AudioUnit au;
OSStatus err;
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
assert(0 != comp);
err = AudioComponentInstanceNew(comp, &au);
assert(0 == err);
AURenderCallbackStruct input;
input.inputProc = renderCallback;
input.inputProcRefCon = 0; // put your cookie here
err = AudioUnitSetProperty(au, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &input, sizeof(input));
assert(0 == err);
err = AudioUnitInitialize(au);
assert(0 == err);
err = AudioOutputUnitStart(au);
assert(0 == err);
return au;
}
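To make the "write inNumberFrames to ioData" comment concrete, here is a minimal sketch of a callback body that writes a 440 Hz test tone, assuming the 48 kHz floating-point interleaved stereo format mentioned above (in real code the phase accumulator would live in the object passed through inRefCon):
#include <math.h>

static OSStatus
renderCallback(
    void* inRefCon,
    AudioUnitRenderActionFlags* ioActionFlags,
    const AudioTimeStamp* inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList* ioData)
{
    static double phase = 0.0;                       // illustration only; keep per-instance state in inRefCon
    const double sampleRate = 48000.0;
    const double frequency = 440.0;                  // A4 test tone
    float* out = (float*)ioData->mBuffers[0].mData;  // interleaved L/R pairs
    for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
        float sample = (float)(0.25 * sin(phase));
        out[2 * frame]     = sample;                 // left
        out[2 * frame + 1] = sample;                 // right
        phase += 2.0 * M_PI * frequency / sampleRate;
        if (phase > 2.0 * M_PI) phase -= 2.0 * M_PI;
    }
    return noErr;
}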

OSStatus error -50 (paramErr) on AudioUnitRender call on device

I'm writing an iOS app that captures audio from the microphone, filters it with a high-pass filter, and plays it back through the speakers.
I'm getting a -50 OSStatus error when I call AudioUnitRender on the render callback function when I run it on an iPhone 4S, but it runs fine on the simulator.
I'm using an AUGraph, which has a RemoteIO unit, a HighPassFilter effect unit, and an AUConverter unit to make the ASBDs between the HPF and the output match. The converter AudioUnit instance is called converterUnit.
Here's the code.
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
AudioController *THIS = (AudioController*)inRefCon;
AudioBuffer buffer;
AudioStreamBasicDescription converterOutputASBD;
UInt32 converterOutputASBDSize = sizeof(converterOutputASBD);
AudioUnitGetProperty([THIS converterUnit], kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &converterOutputASBD, &converterOutputASBDSize);
buffer.mDataByteSize = inNumberFrames * converterOutputASBD.mBytesPerFrame;
buffer.mNumberChannels = converterOutputASBD.mChannelsPerFrame;
buffer.mData = malloc(buffer.mDataByteSize);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
OSStatus result = AudioUnitRender([THIS converterUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
...
}
I think the -50 error means one of the parameters is wrong. The only parameters that can be wrong are [THIS converterUnit] and &bufferList, given that all the rest are handed to me as arguments. I've checked the converterUnit instance and it is correctly allocated and initialized (what's more, if that were the problem, it wouldn't run on the simulator either). The only parameter left to check is the bufferList. What I could make out so far from debugging is that both the RemoteIO's output element's input ASBD and the inNumberFrames are different on the phone and on the simulator. But still, I don't think that should change things, given that I create and allocate memory for the AudioBuffer based on an ASBD resulting from an AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call.
Any help will be much appreciated, I'm kind of running desperate here..
You guys rock.
Cheers.
UPDATE:
Here's the audio controller class' definition:
@interface AudioController : NSObject
{
AUGraph mGraph;
AudioUnit mEffects;
AudioUnit ioUnit;
AudioUnit converterUnit;
}
@property (readonly, nonatomic) AudioUnit mEffects;
@property (readonly, nonatomic) AudioUnit ioUnit;
@property (readonly, nonatomic) AudioUnit converterUnit;
@property (nonatomic) float* volumenPromedio;
-(void)initializeAUGraph;
-(void)startAUGraph;
-(void)stopAUGraph;
@end
, and here's the initialization code for the AUGraph (defined in AudioController.mm):
- (void)initializeAUGraph
{
NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setPreferredHardwareSampleRate: kGraphSampleRate
error: &audioSessionError];
[mySession setCategory: AVAudioSessionCategoryPlayAndRecord
error: &audioSessionError];
[mySession setActive: YES error: &audioSessionError];
OSStatus result = noErr;
// create a new AUGraph
result = NewAUGraph(&mGraph);
AUNode outputNode;
AUNode effectsNode;
AUNode converterNode;
// effects component
AudioComponentDescription effects_desc;
effects_desc.componentType = kAudioUnitType_Effect;
effects_desc.componentSubType = kAudioUnitSubType_HighPassFilter;
effects_desc.componentFlags = 0;
effects_desc.componentFlagsMask = 0;
effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// output component
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// stream format converter component
AudioComponentDescription converter_desc;
converter_desc.componentType = kAudioUnitType_FormatConverter;
converter_desc.componentSubType = kAudioUnitSubType_AUConverter;
converter_desc.componentFlags = 0;
converter_desc.componentFlagsMask = 0;
converter_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Add nodes to the graph
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &converter_desc, &converterNode);
// manage connections in the graph
// Connect the io unit node's input element's output to the effectsNode input
result = AUGraphConnectNodeInput(mGraph, outputNode, 1, effectsNode, 0);
// Connect the effects node's output to the converter node's input
result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, converterNode, 0);
// open the graph
result = AUGraphOpen(mGraph);
// Get references to the audio units
result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);
result = AUGraphNodeInfo(mGraph, outputNode, NULL, &ioUnit);
result = AUGraphNodeInfo(mGraph, converterNode, NULL, &converterUnit);
// Enable input on remote io unit
UInt32 flag = 1;
result = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
// Setup render callback struct
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &renderInput;
renderCallbackStruct.inputProcRefCon = self;
result = AUGraphSetNodeInputCallback(mGraph, outputNode, 0, &renderCallbackStruct);
// Get fx unit's input current stream format...
AudioStreamBasicDescription fxInputASBD;
UInt32 sizeOfASBD = sizeof(AudioStreamBasicDescription);
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxInputASBD, &sizeOfASBD);
// ...and set it on the io unit's input scope's output
result = AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
1,
&fxInputASBD,
sizeof(fxInputASBD));
// Set fx unit's output sample rate, just in case
Float64 sampleRate = 44100.0;
result = AudioUnitSetProperty(mEffects,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&sampleRate,
sizeof(sampleRate));
AudioStreamBasicDescription fxOutputASBD;
// get fx audio unit's output ASBD...
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &fxOutputASBD, &sizeOfASBD);
// ...and set it to the converter audio unit's input
result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxOutputASBD, sizeof(fxOutputASBD));
AudioStreamBasicDescription ioUnitsOutputElementInputASBD;
// now get io audio unit's output element's input ASBD...
result = AudioUnitGetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioUnitsOutputElementInputASBD, &sizeOfASBD);
// ...set the sample rate...
ioUnitsOutputElementInputASBD.mSampleRate = 44100.0;
// ...and set it to the converter audio unit's output
result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &ioUnitsOutputElementInputASBD, sizeof(ioUnitsOutputElementInputASBD));
// initialize graph
result = AUGraphInitialize(mGraph);
}
The reason I make the connection between the converter's output and the remote io unit's output element's input with a render callback function (rather than with the AUGraphConnectNodeInput method) is that I need to make some calculations on the samples right after they've been processed by the high-pass filter. The render callback gives me the opportunity to look into the sample buffer right after the AudioUnitRender call and do said calculations there.
UPDATE 2:
By debugging, I found differences in the Remote IO output bus' input ASBD on the device and on the simulator. It shouldn't make a difference (I allocate and initialize the AudioBufferList based on data coming from a previous AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call), but it's the only thing I can see that differs between the device and the simulator.
Here's the Remote IO output bus' input ASBD on the device:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 41
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 32
UInt32 mReserved 0
, and here it is on the simulator:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 12
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 16
UInt32 mReserved 0
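For what it's worth, flags 41 include kAudioFormatFlagIsNonInterleaved while flags 12 do not, and a non-interleaved stereo format expects one AudioBuffer per channel rather than a single buffer. Purely as an illustration (this helper is hypothetical, not part of the post above), allocating an AudioBufferList from an ASBD so it covers both cases might look like this:
static AudioBufferList *AllocateABL(const AudioStreamBasicDescription *asbd, UInt32 inNumberFrames)
{
    bool nonInterleaved = (asbd->mFormatFlags & kAudioFormatFlagIsNonInterleaved) != 0;
    UInt32 numBuffers = nonInterleaved ? asbd->mChannelsPerFrame : 1;
    // AudioBufferList ends in a variable-length array, so size it for numBuffers entries
    AudioBufferList *abl = (AudioBufferList *)calloc(1, offsetof(AudioBufferList, mBuffers) + numBuffers * sizeof(AudioBuffer));
    abl->mNumberBuffers = numBuffers;
    for (UInt32 i = 0; i < numBuffers; i++) {
        abl->mBuffers[i].mNumberChannels = nonInterleaved ? 1 : asbd->mChannelsPerFrame;
        abl->mBuffers[i].mDataByteSize = inNumberFrames * asbd->mBytesPerFrame;
        abl->mBuffers[i].mData = malloc(abl->mBuffers[i].mDataByteSize);
    }
    return abl;
}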

Play audio file using Audio Units?

I've successfully recorded audio from the microphone into an audio file using Audio Units with the help of openframeworks and this website http://atastypixel.com/blog/using-remoteio-audio-unit.
I want to be able to stream the file back to audio units and play the audio. According to Play an audio file using RemoteIO and Audio Unit I can use ExtAudioFileOpenURL and ExtAudioFileRead. However, how do I play audio data in my buffer?
This is what I currently have:
static OSStatus setupAudioFileRead() {
//construct the file destination URL
CFURLRef destinationURL = audioSystemFileURL();
OSStatus status = ExtAudioFileOpenURL(destinationURL, &audioFileRef);
CFRelease(destinationURL);
if (checkStatus(status)) { ofLog(OF_LOG_ERROR, "ofxiPhoneSoundStream: Couldn't open file to read"); return status; }
while( TRUE ) {
// Try to fill the buffer to capacity.
UInt32 framesRead = 8000;
status = ExtAudioFileRead( audioFileRef, &framesRead, &inputBufferList );
// error check
if( checkStatus(status) ) { break; }
// 0 frames read means EOF.
if( framesRead == 0 ) { break; }
//play audio???
}
return noErr;
}
From this author: http://atastypixel.com/blog/using-remoteio-audio-unit/, if you scroll down to the PLAYBACK section, try something like this:
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Notes: ioData contains buffers (may be more than one!)
// Fill them up as much as you can. Remember to set the size value in each buffer to match how
// much data is in the buffer.
for (int i=0; i < ioData->mNumberBuffers; i++)
{
AudioBuffer buffer = ioData->mBuffers[i];
// copy from your whatever buffer data to output buffer
UInt32 size = min(buffer.mDataByteSize, your buffer.size);
memcpy(buffer.mData, your buffer, size);
buffer.mDataByteSize = size; // indicate how much data we wrote in the buffer
// To test if your Audio Unit setup is working - comment out the three
// lines above and uncomment the for loop below to hear random noise
/*
UInt16 *frameBuffer = buffer.mData;
for (int j = 0; j < inNumberFrames; j++) {
frameBuffer[j] = rand();
}
*/
}
return noErr;
}
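To make the "your buffer" placeholder concrete, here is a hypothetical variant (not from the linked blog) that plays back from a preloaded block of samples using a simple read cursor; it assumes an interleaved client format, so there is normally just one buffer:
// Hypothetical globals for illustration: a block of samples loaded elsewhere
// (e.g. by ExtAudioFileRead at startup) and a read position into it.
static UInt8 *gSampleData;
static UInt32 gSampleDataBytes;
static UInt32 gReadOffset;

static OSStatus playbackFromBufferCallback(void *inRefCon,
                                           AudioUnitRenderActionFlags *ioActionFlags,
                                           const AudioTimeStamp *inTimeStamp,
                                           UInt32 inBusNumber,
                                           UInt32 inNumberFrames,
                                           AudioBufferList *ioData) {
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *buffer = &ioData->mBuffers[i];
        UInt32 bytesLeft = gSampleDataBytes - gReadOffset;
        UInt32 size = buffer->mDataByteSize < bytesLeft ? buffer->mDataByteSize : bytesLeft;
        memcpy(buffer->mData, gSampleData + gReadOffset, size);
        buffer->mDataByteSize = size;       // tell the unit how much we actually wrote
        gReadOffset += size;
        if (gReadOffset >= gSampleDataBytes) gReadOffset = 0;   // loop; or stop the unit instead
    }
    return noErr;
}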
If you are only looking to record from the mic to a file and play it back, Apple's SpeakHere sample is probably much more ready to use.
Basically:
1. Create a RemoteIO unit (see references about how to create a RemoteIO unit).
2. Create a FilePlayer audio unit, which is a dedicated audio unit that reads an audio file and provides the audio data in the file to output units, for example the RemoteIO unit created in step 1. To actually use the FilePlayer, a lot of settings (which file to play, which part of the file to play, etc.) need to be made on it.
3. Set the kAudioUnitProperty_SetRenderCallback and kAudioUnitProperty_StreamFormat properties of the RemoteIO unit. The first property is essentially a callback function from which the RemoteIO unit pulls audio data and plays it. The second property must be set in accordance with the stream format supported by the FilePlayer; it can be obtained from a get-property call on the FilePlayer.
4. Define the callback set in step 3. The most important thing it does is ask the FilePlayer to render into the buffer provided by the callback, which means invoking AudioUnitRender() on the FilePlayer (a sketch follows below).
5. Finally, start the RemoteIO unit to play the file.
The above is just a preliminary outline of the basic steps to play files using audio units on iOS. You can refer to Chris Adamson and Kevin Avila's Learning Core Audio for details.
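For step 4, a minimal sketch of such a callback could look like the following; gFilePlayerUnit stands for the already-configured FilePlayer unit (the name is just an assumption for illustration):
static AudioUnit gFilePlayerUnit;   // the FilePlayer created and configured in step 2

static OSStatus fileRenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
    // Ask the FilePlayer to render the next chunk of the scheduled file straight
    // into the buffers the RemoteIO unit handed us.
    return AudioUnitRender(gFilePlayerUnit, ioActionFlags, inTimeStamp,
                           0 /* FilePlayer output element */, inNumberFrames, ioData);
}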
Here's a relatively simple approach that utilizes the audio unit mentioned in the Tasty Pixel blog. In the recording callback, instead of filling the buffer with data from the microphone, you could fill it with data from the file using ExtAudioFileRead. I'll try and paste an example below. Mind you, this will only work for .caf files.
In the start method, call a readAudio or initAudioFile function, something that just gets all the info about the file.
- (void) start {
readAudio();
OSStatus status = AudioOutputUnitStart(audioUnit);
checkStatus(status);
}
Now in the readAudio method you initialize the audio file reference as follows.
ExtAudioFileRef fileRef;
void readAudio() {
NSString * name = @"AudioFile";
NSString * source = [[NSBundle mainBundle] pathForResource:name ofType:@"caf"];
const char * cString = [source cStringUsingEncoding:NSASCIIStringEncoding];
CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);
AudioFileID fileID;
OSStatus err = AudioFileOpenURL(inputFileURL, kAudioFileReadPermission, 0, &fileID);
CheckError(err, "AudioFileOpenURL");
err = ExtAudioFileOpenURL(inputFileURL, &fileRef);
CheckError(err, "ExtAudioFileOpenURL");
err = ExtAudioFileSetProperty(fileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
CheckError(err, "ExtAudioFileSetProperty");
}
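Note that audioFormat above is assumed to be defined elsewhere; it is the client format the callback below expects. For the mono 16-bit case described in the callback's comments it might look something like this (the 44100 Hz rate is an assumption taken from the Tasty Pixel example):
// Assumed definition of audioFormat (not shown in the original answer):
// 16-bit signed integer, mono, interleaved LPCM
AudioStreamBasicDescription audioFormat = {0};
audioFormat.mSampleRate = 44100.0;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mChannelsPerFrame = 1;   // mono
audioFormat.mBitsPerChannel = 16;    // 2 bytes per sample
audioFormat.mBytesPerFrame = 2;
audioFormat.mFramesPerPacket = 1;
audioFormat.mBytesPerPacket = 2;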
Now that you have the audio data at hand, the next step is pretty easy. In the recordingCallback, read the data from the file instead of the mic.
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Because of the way our audio format (setup below) is chosen:
// we only need 1 buffer, since it is mono
// Samples are 16 bits = 2 bytes.
// 1 frame includes only 1 sample
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
buffer.mData = malloc( inNumberFrames * 2 );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// Then:
// Read samples from the file (instead of obtaining them from the mic)
OSStatus err = ExtAudioFileRead(fileRef, &inNumberFrames, &bufferList);
// Now, we have the samples we just read sitting in buffers in bufferList
// Process the new data
[iosAudio processAudio:&bufferList];
// release the malloc'ed data in the buffer we created earlier
free(bufferList.mBuffers[0].mData);
return noErr;
}
This worked for me.

Core Audio memory issues

My Audio Unit analysis project is having some memory issues: each time an Audio Unit is rendered (or somewhere around that), it allocates a bunch of memory which isn't released, causing memory usage to swell and the app to eventually crash.
In Instruments, I notice the following string of 32-byte mallocs occurring repeatedly, and they remain live:
BufferedAudioConverter::AllocateBuffers() x 6
BufferedInputAudioConverter::BufferedInputAudioConverter(StreamDescPair const&) x 3
Any ideas where the problem might lie? When is that memory allocated in the process and how can it safely be released?
Many thanks.
The code was based on some non-Apple sample code, PitchDetector from sleepyleaf.com
Here are some code extracts where the problem might lie. Please let me know if more code is needed.
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
inTimeStamp, bus1, inNumberFrames, THIS->bufferList); //128 inNumberFrames
if (renderErr < 0) {
return renderErr;
}
// Fill the buffer with our sampled data. If we fill our buffer, run the
// fft.
int read = bufferCapacity - index;
if (read > inNumberFrames) {
memcpy((SInt16 *)dataBuffer + index, THIS->bufferList->mBuffers[0].mData, inNumberFrames*sizeof(SInt16));
THIS->index += inNumberFrames;
} else { // DO ANALYSIS
memset(outputBuffer, 0, n*sizeof(SInt16));
- (void)createAUProcessingGraph {
OSStatus err;
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;
ioUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
// Declare and instantiate an audio processing graph
NewAUGraph(&processingGraph);
// Add an audio unit node to the graph, then instantiate the audio unit.
/*
An AUNode is an opaque type that represents an audio unit in the context
of an audio processing graph. You receive a reference to the new audio unit
instance, in the ioUnit parameter, on output of the AUGraphNodeInfo
function call.
*/
AUNode ioNode;
AUGraphAddNode(processingGraph, &ioUnitDescription, &ioNode);
AUGraphOpen(processingGraph); // indirectly performs audio unit instantiation
// Obtain a reference to the newly-instantiated I/O unit. Each Audio Unit
// requires its own configuration.
AUGraphNodeInfo(processingGraph, ioNode, NULL, &ioUnit);
// Initialize below.
AURenderCallbackStruct callbackStruct = {0};
UInt32 enableInput;
UInt32 enableOutput;
// Enable input and disable output.
enableInput = 1; enableOutput = 0;
callbackStruct.inputProc = RenderFFTCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus, &enableInput, sizeof(enableInput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus, &enableOutput, sizeof(enableOutput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Input,
kOutputBus, &callbackStruct, sizeof(callbackStruct));
// Set the stream format.
size_t bytesPerSample = [self ASBDForSoundMode];
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus, &streamFormat, sizeof(streamFormat));
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus, &streamFormat, sizeof(streamFormat));
// Disable system buffer allocation. We'll do it ourselves.
UInt32 flag = 0;
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus, &flag, sizeof(flag));
// Allocate AudioBuffers for use when listening.
// TODO: Move into initialization...should only be required once.
bufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = 512*bytesPerSample;
bufferList->mBuffers[0].mData = calloc(512, bytesPerSample);
}
I managed to find and fix the issue, which was in an area of the code not posted above.
In a subsequent step the output buffer was being converted into a different number format using an AudioConverter object. However, the converter object was not being disposed of and remained live in memory. I fixed it by calling AudioConverterDispose as below:
err = AudioConverterNew(&inFormat, &outFormat, &converter);
err = AudioConverterConvertBuffer(converter, inSize, buf, &outSize, outputBuf);
err = AudioConverterDispose (converter);

How to write output of AUGraph to a file?

I am trying to write (what should be) a simple app that has a bunch of audio units in sequence in an AUGraph and then writes the output to a file. I added a callback using AUGraphAddRenderNotify. Here is my callback function:
OSStatus MyAURenderCallback(void *inRefCon,
AudioUnitRenderActionFlags *actionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
if (*actionFlags & kAudioUnitRenderAction_PostRender) {
ExtAudioFileRef outputFile = (ExtAudioFileRef)inRefCon;
ExtAudioFileWriteAsync(outputFile, inNumberFrames, ioData);
}
return noErr; // the callback must return an OSStatus
}
This sort of works. The file is playable and I can hear what I recorded, but there is a horrible amount of static that makes it barely audible.
Does anybody know what is wrong with this? Or does anyone know of a better way to record the AUGraph output to a file?
Thanks for the help.
I had an epiphany with regard to Audio Units just now which helped me solve my own problem. I had a misconception about how audio unit connections and render callbacks work. I thought they were completely separate things, but it turns out that a connection is just shorthand for a render callback.
Making a kAudioUnitProperty_MakeConnection connection from the output of audio unit A to the input of audio unit B is the same as setting kAudioUnitProperty_SetRenderCallback on the input of unit B and having the callback function call AudioUnitRender on the output of audio unit A.
I tested this by making a connection after setting my render callback, and the render callback was no longer invoked.
Therefore, I was able to solve my problem by doing the following:
AURenderCallbackStruct callbackStruct = {0};
callbackStruct.inputProc = MyAURenderCallback;
callbackStruct.inputProcRefCon = mixerUnit;
AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&callbackStruct,
sizeof(callbackStruct));
And then my callback function did something like this:
OSStatus MyAURenderCallback(void *inRefCon,
AudioUnitRenderActionFlags *actionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
AudioUnit mixerUnit = (AudioUnit)inRefCon;
AudioUnitRender(mixerUnit,
actionFlags,
inTimeStamp,
0,
inNumberFrames,
ioData);
ExtAudioFileWriteAsync(outputFile,
inNumberFrames,
ioData);
return noErr;
}
This probably should have been obvious to me, but since it wasn't, I'll bet there are others who were confused in the same way, so hopefully this is helpful to them too.
I'm still not sure why I had trouble with the AUGraphAddRenderNotify callback. I will dig deeper into this later, but for now I've found a solution that seems to work.
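One thing to note about the callback above: outputFile is no longer the refCon (that slot now carries the mixer unit), so it has to be an ExtAudioFileRef kept somewhere accessible, e.g. a file-scope variable. A rough sketch of how it might be created (not from the original answer; fileURL and mixerUnit are assumptions) is:
AudioStreamBasicDescription clientFormat;
UInt32 size = sizeof(clientFormat);
// The callback receives whatever the mixer renders, so use the mixer's output format
AudioUnitGetProperty(mixerUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &clientFormat, &size);

ExtAudioFileRef outputFile;
ExtAudioFileCreateWithURL(fileURL, kAudioFileCAFType, &clientFormat, NULL, kAudioFileFlags_EraseFile, &outputFile);
ExtAudioFileSetProperty(outputFile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

// Prime the async writer once, off the render thread, so ExtAudioFileWriteAsync
// can allocate its internal buffers before real-time writes begin.
ExtAudioFileWriteAsync(outputFile, 0, NULL);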
Here is some sample code from Apple (the project is PlaySequence, but it isn't MIDI specific) that might help:
{
CAStreamBasicDescription clientFormat = CAStreamBasicDescription();
ca_require_noerr (result = AudioUnitGetProperty(outputUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 0,
&clientFormat, &size), fail);
size = sizeof(clientFormat);
ca_require_noerr (result = ExtAudioFileSetProperty(outfile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat), fail);
{
MusicTimeStamp currentTime;
AUOutputBL outputBuffer (clientFormat, numFrames);
AudioTimeStamp tStamp;
memset (&tStamp, 0, sizeof(AudioTimeStamp));
tStamp.mFlags = kAudioTimeStampSampleTimeValid;
int i = 0;
int numTimesFor10Secs = (int)(10. / (numFrames / srate));
do {
outputBuffer.Prepare();
AudioUnitRenderActionFlags actionFlags = 0;
ca_require_noerr (result = AudioUnitRender (outputUnit, &actionFlags, &tStamp, 0, numFrames, outputBuffer.ABL()), fail);
tStamp.mSampleTime += numFrames;
ca_require_noerr (result = ExtAudioFileWrite(outfile, numFrames, outputBuffer.ABL()), fail);
ca_require_noerr (result = MusicPlayerGetTime (player, &currentTime), fail);
if (shouldPrint && (++i % numTimesFor10Secs == 0))
printf ("current time: %6.2f beats\n", currentTime);
} while (currentTime < sequenceLength);
}
}
Maybe try this. Copy the data from the audio unit callback to a long buffer. Play the buffer to test it, then write the entire buffer to a file after you have verified that the whole buffer is OK.
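A minimal sketch of that idea, assuming a mono 16-bit stream and a buffer preallocated for the longest capture you expect (all names hypothetical):
#define kMaxCaptureFrames (44100 * 60)      // e.g. up to one minute at 44.1 kHz
static SInt16 gCapture[kMaxCaptureFrames];
static UInt32 gCapturedFrames = 0;

// Call this from the render/notify callback to append the current cycle's samples.
static void AppendFrames(const AudioBufferList *ioData, UInt32 inNumberFrames) {
    UInt32 framesToCopy = inNumberFrames;
    if (gCapturedFrames + framesToCopy > kMaxCaptureFrames)
        framesToCopy = kMaxCaptureFrames - gCapturedFrames;
    memcpy(gCapture + gCapturedFrames, ioData->mBuffers[0].mData, framesToCopy * sizeof(SInt16));
    gCapturedFrames += framesToCopy;
}

// Later, off the audio thread, audition gCapture and then write the whole thing
// to disk in one pass (e.g. with ExtAudioFileWrite or AudioFileWritePackets).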
