How to play a signal with AudioUnit (iOS)? - ios

I need to generate a signal and play it with iPhone's speakers or a headset.
To do so I generate an interleaved signal. Then I need to instantiate an AudioUnit-derived class object with the following parameters: 2 channels, a 44100 Hz sample rate, and a buffer size large enough to store a few frames.
Then I need to write a callback method which will take a chunk of my signal and put it into the iPhone's output buffer.
The problem is that I have no idea how to write an AudioUnit-derived class. I can't understand Apple's documentation on it, and all the examples I could find either read from a file and play it with a huge lag, or use deprecated constructs.
I'm starting to think I'm stupid or something. Please help...

To play audio to the iPhone's hardware with an AudioUnit, you don't derive from AudioUnit; Core Audio is a C framework. Instead, you give the unit a render callback in which you feed it your audio samples. The following code sample shows how. You will need to replace the asserts with real error handling, and you'll probably want to change, or at least inspect, the audio unit's sample format using the kAudioUnitProperty_StreamFormat selector. My format happens to be 48 kHz floating-point interleaved stereo.
static OSStatus
renderCallback(void*                       inRefCon,
               AudioUnitRenderActionFlags* ioActionFlags,
               const AudioTimeStamp*       inTimeStamp,
               UInt32                      inBusNumber,
               UInt32                      inNumberFrames,
               AudioBufferList*            ioData)
{
    // inRefCon contains your cookie
    // write inNumberFrames frames to ioData->mBuffers[i].mData here
    return noErr;
}
AudioUnit
createAudioUnit() {
    AudioUnit au;
    OSStatus err;

    // Describe and find the RemoteIO output unit.
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    assert(0 != comp);

    err = AudioComponentInstanceNew(comp, &au);
    assert(0 == err);

    // Attach the render callback to the output bus (element 0).
    AURenderCallbackStruct input;
    input.inputProc = renderCallback;
    input.inputProcRefCon = 0; // put your cookie here
    err = AudioUnitSetProperty(au, kAudioUnitProperty_SetRenderCallback,
                               kAudioUnitScope_Input, 0, &input, sizeof(input));
    assert(0 == err);

    err = AudioUnitInitialize(au);
    assert(0 == err);

    err = AudioOutputUnitStart(au);
    assert(0 == err);

    return au;
}
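For the asker's original goal of generating a signal, here is a minimal sketch of what the render callback body could look like, assuming the unit's output format really is interleaved stereo Float32 as mentioned above (verify with kAudioUnitProperty_StreamFormat first); the SineGenerator struct and its fields are my own illustration, passed in via inputProcRefCon:
#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

typedef struct {
    double phase;       // current oscillator phase in radians
    double sampleRate;  // taken from the unit's output stream format
    double frequency;   // tone frequency in Hz
} SineGenerator;

static OSStatus
sineRenderCallback(void*                       inRefCon,
                   AudioUnitRenderActionFlags* ioActionFlags,
                   const AudioTimeStamp*       inTimeStamp,
                   UInt32                      inBusNumber,
                   UInt32                      inNumberFrames,
                   AudioBufferList*            ioData)
{
    SineGenerator* gen = (SineGenerator*)inRefCon;
    const double phaseStep = 2.0 * M_PI * gen->frequency / gen->sampleRate;
    Float32* out = (Float32*)ioData->mBuffers[0].mData; // interleaved L/R

    for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
        Float32 sample = (Float32)(0.25 * sin(gen->phase)); // keep well below full scale
        *out++ = sample; // left channel
        *out++ = sample; // right channel
        gen->phase += phaseStep;
        if (gen->phase > 2.0 * M_PI) gen->phase -= 2.0 * M_PI;
    }
    return noErr;
}
If the unit reports a different format (for example 16-bit integer samples, or non-interleaved buffers), adjust the sample type and buffer indexing accordingly.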

Related

Core audio: file playback render callback function

I am using the RemoteIO Audio Unit for audio playback in my app with kAudioUnitProperty_ScheduledFileIDs.
The audio files are in PCM format. How can I implement a render callback function for this case, so that I can manually modify the buffer samples?
Here is my code:
static AudioComponentInstance audioUnit;
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
CheckError(AudioComponentInstanceNew(comp, &audioUnit), "error AudioComponentInstanceNew");
NSURL *playerFile = [[NSBundle mainBundle] URLForResource:@"short" withExtension:@"wav"];
AudioFileID audioFileID;
CheckError(AudioFileOpenURL((__bridge CFURLRef)playerFile, kAudioFileReadPermission, 0, &audioFileID), "error AudioFileOpenURL");
// Determine file properties
UInt64 packetCount;
UInt32 size = sizeof(packetCount);
CheckError(AudioFileGetProperty(audioFileID, kAudioFilePropertyAudioDataPacketCount, &size, &packetCount),
"AudioFileGetProperty(kAudioFilePropertyAudioDataPacketCount)");
AudioStreamBasicDescription dataFormat;
size = sizeof(dataFormat);
CheckError(AudioFileGetProperty(audioFileID, kAudioFilePropertyDataFormat, &size, &dataFormat),
"AudioFileGetProperty(kAudioFilePropertyDataFormat)");
// Assign the region to play
ScheduledAudioFileRegion region;
memset (&region.mTimeStamp, 0, sizeof(region.mTimeStamp));
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;
region.mCompletionProc = NULL;
region.mCompletionProcUserData = NULL;
region.mAudioFile = audioFileID;
region.mLoopCount = 0;
region.mStartFrame = 0;
region.mFramesToPlay = (UInt32)packetCount * dataFormat.mFramesPerPacket;
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0, &region, sizeof(region)),
"AudioUnitSetProperty(kAudioUnitProperty_ScheduledFileRegion)");
// Prime the player by reading some frames from disk
UInt32 defaultNumberOfFrames = 0;
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ScheduledFilePrime, kAudioUnitScope_Global, 0, &defaultNumberOfFrames, sizeof(defaultNumberOfFrames)),
"AudioUnitSetProperty(kAudioUnitProperty_ScheduledFilePrime)");
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = MyCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &callbackStruct, sizeof(callbackStruct)), "error AudioUnitSetProperty[kAudioUnitProperty_setRenderCallback]");
CheckError(AudioUnitInitialize(audioUnit), "error AudioUnitInitialize");
Callback function:
static OSStatus MyCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData){
printf("my callback");
return noErr;
}
Audio Unit start playback on button press:
- (IBAction)playSound:(id)sender {
CheckError(AudioOutputUnitStart(audioUnit), "error AudioOutputUnitStart");
}
This code fails at runtime with a kAudioUnitErr_InvalidProperty (-10879) error. The goal is to modify the buffer samples that have been read from the AudioFileID and send the result to the speakers.
Seeing as you are just getting familiar with Core Audio, I suggest you first get your RemoteIO callback working independently of your file player. Just remove all of your file-player-related code and try to get that working first.
Then, once you have that working, move on to incorporating your file player.
As for what's wrong, I think you are confusing the Audio File Services API with an audio unit. That API is used to read a file into a buffer which you would manually feed to the RemoteIO; if you do want to go this route, use the Extended Audio File Services API instead, which is a lot easier. The kAudioUnitProperty_ScheduledFileRegion property is supposed to be set on a file player audio unit. To get one of those, you create it the same way as your RemoteIO, except that the AudioComponentDescription's componentSubType and componentType are kAudioUnitSubType_AudioFilePlayer and kAudioUnitType_Generator respectively. Then, once you have that unit, you connect it to the RemoteIO using the kAudioUnitProperty_MakeConnection property.
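As a rough sketch (reusing your CheckError helper; audioUnit is assumed to be your already-created RemoteIO instance, and the remaining names are mine), creating the file player and making that connection could look like this:
// Describe and create the file player unit.
AudioComponentDescription fpDesc = {0};
fpDesc.componentType         = kAudioUnitType_Generator;
fpDesc.componentSubType      = kAudioUnitSubType_AudioFilePlayer;
fpDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent fpComp = AudioComponentFindNext(NULL, &fpDesc);
AudioUnit filePlayerUnit;
CheckError(AudioComponentInstanceNew(fpComp, &filePlayerUnit), "create file player");

// Route the file player's output (element 0) into the RemoteIO's output bus.
AudioUnitConnection connection;
connection.sourceAudioUnit    = filePlayerUnit;
connection.sourceOutputNumber = 0;
connection.destInputNumber    = 0;
CheckError(AudioUnitSetProperty(audioUnit,
                                kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input,
                                0,
                                &connection,
                                sizeof(connection)),
           "make connection");

// Then initialize both units, schedule the file on filePlayerUnit
// (kAudioUnitProperty_ScheduledFileIDs, kAudioUnitProperty_ScheduledFileRegion, ...),
// and call AudioOutputUnitStart on the RemoteIO unit.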
But seriously, start with just getting your remoteIO callback working, then try making a file player audio unit and connecting it (without the callback), then go from there.
Ask very specific questions about each of these steps independently, posting code you have tried that's not working, and you'll get a ton of help.

Mixer AudioUnit to RemoteIO AudioUnit

I have an AudioUnit with its corresponding callback working properly, but now I need to send its output to a RemoteIO, because I'm implementing a framework that needs a RemoteIO AudioUnit to work.
So I need the same output I'm getting from this mixer audio unit, but from another audio unit of type kAudioUnitSubType_RemoteIO.
Please help!
EDIT ...
This is the code I'm trying...
EDIT 2- iOUnitDescription Added
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
AudioComponent foundIoUnitReference = AudioComponentFindNext (
NULL,
&iOUnitDescription
);
AudioComponentInstanceNew (
foundIoUnitReference,
&audioUnit
);
result = AudioUnitSetProperty (
audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
guitarBus,
&stereoStreamFormat,
sizeof (stereoStreamFormat)
);
if (noErr != result) { [self printErrorMessage: @"AudioUnitSetProperty (set mixer unit guitar input bus stream format)" withStatus: result]; return; }
result = AudioUnitSetProperty (
audioUnit,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&graphSampleRate,
sizeof (graphSampleRate)
);
if (noErr != result) { [self printErrorMessage: @"AudioUnitSetProperty (set AUDIOUNIT unit output stream format)" withStatus: result]; return; }
AudioUnitElement mixerUnitOutputBus = 0;
AudioUnitElement ioUnitOutputElement = 0;
AudioUnitConnection mixerOutToIoUnitIn;
mixerOutToIoUnitIn.sourceAudioUnit = mixerUnit;
mixerOutToIoUnitIn.sourceOutputNumber = mixerUnitOutputBus;
mixerOutToIoUnitIn.destInputNumber = ioUnitOutputElement;
AudioUnitSetProperty (
audioUnit, // connection destination
kAudioUnitProperty_MakeConnection, // property key
kAudioUnitScope_Input, // destination scope
ioUnitOutputElement, // destination element
&mixerOutToIoUnitIn, // connection definition
sizeof (mixerOutToIoUnitIn)
);
I really need more info. From the above, I see you have a mixer somewhere and a guitarBus which is presumably your input (and seemingly a stream). What is the definition of iOUnitDescription? More importantly, where are you hooking your render callback up, what are you doing in the callback, and what does the framework expect?
Typically, when I need to process audio, I build my own graph; I make this its own class for better portability. This should be a good starting point for you.
Here is how I implement such a solution.
// header file
@interface MDMixerGraph : NSObject {
    AUGraph graph;
    AudioUnit mixerUnit;
    AudioUnit inputUnit;
    AudioUnit rioUnit;
}
-(void) setupAUGraph;
@end

// implementation
@implementation MDMixerGraph

// exception helper
void MDThrowOnError(OSStatus status){
    if (status != noErr) {
        @throw [NSException exceptionWithName:@"MDMixerException"
                                       reason:[NSString stringWithFormat:@"Status Error %d.", (int)status]
                                     userInfo:nil];
    }
}

// helper function for setting up graph nodes
OSStatus MDAddAUGraphNode(OSType inComponentType, OSType inComponentSubType, AUGraph inGraph, AUNode *outNode)
{
    AudioComponentDescription desc;
    desc.componentType = inComponentType;
    desc.componentSubType = inComponentSubType;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    return AUGraphAddNode(inGraph, &desc, outNode);
}
// setup method to init and start the AUGraph
-(void) setupAUGraph{
    // Create the graph
    MDThrowOnError(NewAUGraph(&graph));

    // Add Audio Units (nodes) to the graph
    AUNode inputNode, rioNode, mixerNode;

    // Input node -- this may need to be a different type to accept your stream (not enough info above)
    MDThrowOnError(MDAddAUGraphNode(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, graph, &inputNode));
    // RemoteIO node - your output node
    MDThrowOnError(MDAddAUGraphNode(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, graph, &rioNode));
    // Mixer node - depending on output and input, change the mixer sub-type here.
    // You can configure additional nodes depending on your needs for inputs and outputs.
    MDThrowOnError(MDAddAUGraphNode(kAudioUnitType_Mixer, kAudioUnitSubType_AU3DMixerEmbedded, graph, &mixerNode));

    // Open the graph
    MDThrowOnError(AUGraphOpen(graph));

    // We need a ref to the Audio Units, so let's grab all of them here
    MDThrowOnError(AUGraphNodeInfo(graph, inputNode, NULL, &inputUnit));
    MDThrowOnError(AUGraphNodeInfo(graph, rioNode, NULL, &rioUnit));
    MDThrowOnError(AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit));

    // Set up the connections, input to output of the graph:
    // the graph looks like inputNode->mixerNode->rioNode
    MDThrowOnError(AUGraphConnectNodeInput(graph, inputNode, 0, mixerNode, 0));
    MDThrowOnError(AUGraphConnectNodeInput(graph, mixerNode, 0, rioNode, 0));

    // Init the graph
    MDThrowOnError(AUGraphInitialize(graph));

    // Do any other setup for your stream here

    // Finally, start the graph
    MDThrowOnError(AUGraphStart(graph));
}
In your view controller extension you simply declare:
// define the MDMixerGraph class
// @property (nonatomic) MDMixerGraph *mixer;
And in the implementation
self.mixer = [[MDMixerGraph alloc]init];
[self.mixer setupAUGraph];
And you have a reference to the rioUnit to pass to your framework (self.mixer.rioUnit). Without knowing more about your connection/processing requirements, this is the best I can do for you.
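Note that self.mixer.rioUnit only compiles if the class actually exposes rioUnit; the header above declares it only as an instance variable. One minimal way to expose it (my addition, not part of the original answer) is a readonly property backed by that ivar:
// MDMixerGraph.h
@property (nonatomic, readonly) AudioUnit rioUnit;

// MDMixerGraph.m
@synthesize rioUnit; // uses the existing rioUnit instance variable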
Cheers!

Play sound file together with AudioUnit

I have an app which uses an AudioUnit with subtype kAudioUnitSubType_VoiceProcessingIO. The input comes from a network stream. That works fine, but now I want to play a simple sound from a file (an alert sound after receiving a push notification). That does not work while the AudioUnit is in action, only when it's not started yet or has already been disposed. I can even stop the AudioUnit while the sound file still plays and hear the final samples, so the playback seems to work, but it's silent.
I tried AudioServicesPlaySystemSound and AVAudioPlayer without success on a real device (they might work in the simulator).
Is there a simple solution to this simple task, or do I need to manually mix in the file-based audio content?
Thanks in advance!
You can build an AUGraph with Audio Units and play the audio file through a kAudioUnitSubType_AudioFilePlayer unit.
Try:
OSStatus propertySetError = 0;
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty (
kAudioSessionProperty_OverrideCategoryMixWithOthers, // 1
sizeof (allowMixing), // 2
&allowMixing // 3
);
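AudioSessionSetProperty is deprecated on current iOS versions; a rough AVAudioSession equivalent of the same mixing override (my sketch, not part of the original answer) would be:
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                 withOptions:AVAudioSessionCategoryOptionMixWithOthers
                                       error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];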
Or you can use a kAudioUnitType_Mixer AudioUnit:
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;
Add them (mixer_unit, voice_processing_unit) to an AUGraph, and set the input bus count of the mixer unit to 2:
UInt32 busCount = 2; // bus count for mixer unit input
status = AudioUnitSetProperty (mixer_unit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&busCount,
sizeof(busCount)
);
Add a render callback for each input bus of mixer_unit:
for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {
AURenderCallbackStruct inputCallbackStruct;
inputCallbackStruct.inputProc = &VoiceMixRenderCallback;
inputCallbackStruct.inputProcRefCon = self;
status = AUGraphSetNodeInputCallback (_mixSaveGraph,
mixerNode,
busNumber,
&inputCallbackStruct
);
if (status) { printf("AudioUnitSetProperty set callback"); return NO; }
}
Connect the mixer_unit's output bus to io_unit's input bus:
status = AUGraphConnectNodeInput (_mixSaveGraph,
mixerNode, // source node
0, // source node output bus number
saveNode, // destination node
0 // desintation node input bus number
);
Start the graph, and you'll get render callbacks with different bus numbers (0 and 1), like this:
static OSStatus VoiceMixRenderCallback(void * inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData )
{
OSStatus status;
AURecorder *recorder = (AURecorder*)inRefCon;
if (inBusNumber == BUS_ONE && (recorder.isMusicPlaying)) {
//Get data from music file. try ExtAudioFileRead
} else if (inBusNumber == BUS_TWO) {
status = AudioUnitRender(recorder.io_unit, ioActionFlags, inTimeStamp, INPUT_BUS, inNumberFrames, ioData);
//Get data from io unit's input bus.
if (status)
{
printf("AudioUnitRender for record error");
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
}
} else {
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
}
return noErr;
}
This might be a system restriction. If your iOS device doesn't do anything when you call AudioServicesPlaySystemSound while using the PlayAndRecord category, try stopping your AudioUnit first and then calling the function.
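A sketch of that workaround (assuming audioUnit is your VoiceProcessingIO instance and soundID was created earlier with AudioServicesCreateSystemSoundID; restarting the unit in the completion block is my own addition and requires iOS 9 or later):
AudioOutputUnitStop(audioUnit);
AudioServicesPlaySystemSoundWithCompletion(soundID, ^{
    // Resume the voice-processing unit once the alert has finished playing.
    AudioOutputUnitStart(audioUnit);
});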
I had a similar problem. In a VoIP application, I had to play tones while the call was ongoing. A tone file played with AVPlayer would come out of the receiver at about 50% volume.
What worked for me was to switch the AVAudioSession category from AVAudioSessionCategoryPlayAndRecord to AVAudioSessionCategoryPlayback and back for the duration of the tone file (a sketch follows the list below).
There are a few catches here:
You have to switch over for the duration of the audio file you are about to play.
Your AudioUnit or input processing code should be okay with silence for the duration of the switchover.
The category switch-over is not free. Keep in mind older devices, which may take longer to turn the microphone on and off.
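A minimal sketch of the switch-over, assuming ARC, a tonePlayer (AVAudioPlayer) prepared beforehand, and a toneDuration you know up front; all of these names are mine:
// Switch to playback-only so the tone goes to the speaker at full volume.
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback error:nil];
[self.tonePlayer play];

// Switch back once the tone has finished.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                             (int64_t)(toneDuration * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
});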
I think what was happening is that the speakerphone is exclusively allocated to the AudioUnit, and any parallel playback using AVAudioPlayer or AVPlayer is played through the receiver. If your audio file's level is low, it will be hardly noticeable. Also, while the AudioUnit is active and the AVAudioSession is in the PlayAndRecord category, other sound files will be 'ducked'.
You can confirm this by using the macOS Console app while your device is attached: filter on mediaserverd and look for 'duck'.
I got something like this:
default 18:36:01.844883-0800 mediaserverd -CMSessionMgr- cmsDuckVolumeBy: ducking 'sid:0x36952, $$$$$$(2854), 'prim'' duckToLevelDB = -40.0000 duckVolume = 0.100 duckFadeDuration = 0.500
default 18:36:02.371953-0800 mediaserverd -CMSystSounds- cmsmSystemSoundShouldPlay_block_invoke: SSID = 4096 with systemSoundCategory = UserAlert returning OutVolume = 1.000000, Audio = 1, Vibration = 2, Synchronized = 16, Interrupt = 0, NeedsDidFinishCall = 0, NeedsUnduckCall = 128, BudgetNotAvailable = 0
default 18:36:02.372685-0800 mediaserverd SSSActionLists.cpp:130:LogFlags: mSSID 4105, mShouldPlayAudio 1, mShouldVibe 1, mAudioVolume 1.000000, mVibeIntensity 1.000000, mNeedsFinishCall 0, mNeedsUnduckCall 1, mSynchronizedSystemSound 1, mInterruptCurrentSystemSounds 0
default 18:36:02.729872-0800 mediaserverd ActiveSoundList.cpp:386:SendUnduckMessageToCM: Notifying CM that sound should unduck now for ssidForCMSession: 4096, token 3569
default 18:36:02.729928-0800 mediaserverd -CMSystSounds- cmsmSystemSoundUnduckMedia: called for ssid 4096
default 18:36:02.729973-0800 mediaserverd -CMSessionMgr- cmsUnduckVolume: 'sid:0x36952, $$$$$$(2854), 'prim'' unduckFadeDuration: 0.500000

OSStatus error -50 (paramErr) on AudioUnitRender call on device

I'm writing an iOS app that captures audio from the microphone, filters it with a high-pass filter, and plays it back through the speakers.
I'm getting a -50 OSStatus error when I call AudioUnitRender on the render callback function when I run it on an iPhone 4S, but it runs fine on the simulator.
I'm using an AUGraph, which has a RemoteIO unit, a HighPassFilter effect unit, and an AUConverter unit to make the ASBDs between the HPF and the output match. The converter AudioUnit instance is called converterUnit.
Here's the code.
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
AudioController *THIS = (AudioController*)inRefCon;
AudioBuffer buffer;
AudioStreamBasicDescription converterOutputASBD;
UInt32 converterOutputASBDSize = sizeof(converterOutputASBD);
AudioUnitGetProperty([THIS converterUnit], kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &converterOutputASBD, &converterOutputASBDSize);
buffer.mDataByteSize = inNumberFrames * converterOutputASBD.mBytesPerFrame;
buffer.mNumberChannels = converterOutputASBD.mChannelsPerFrame;
buffer.mData = malloc(buffer.mDataByteSize);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
OSStatus result = AudioUnitRender([THIS converterUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
...
}
I think a -50 error means one of the parameters is wrong. The only parameters that can be wrong are [THIS converterUnit] and &bufferList, given that all the rest are handed to me as arguments. I've checked the converterUnit instance and it is correctly allocated and initialized (what's more, if that were the problem, it wouldn't run on the simulator either). The only parameter left to check is bufferList. What I could make out so far from debugging is that both the RemoteIO's output element's input ASBD and inNumberFrames differ between the phone and the simulator. Still, I don't think that should change things, given that I create and allocate memory for the AudioBuffer based on an ASBD resulting from an AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call.
Any help will be much appreciated, I'm getting kind of desperate here...
You guys rock.
Cheers.
UPDATE:
Here's the audio controller class' definition:
@interface AudioController : NSObject
{
    AUGraph mGraph;
    AudioUnit mEffects;
    AudioUnit ioUnit;
    AudioUnit converterUnit;
}
@property (readonly, nonatomic) AudioUnit mEffects;
@property (readonly, nonatomic) AudioUnit ioUnit;
@property (readonly, nonatomic) AudioUnit converterUnit;
@property (nonatomic) float* volumenPromedio;
-(void)initializeAUGraph;
-(void)startAUGraph;
-(void)stopAUGraph;
@end
And here's the initialization code for the AUGraph (defined in AudioController.mm):
- (void)initializeAUGraph
{
NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setPreferredHardwareSampleRate: kGraphSampleRate
error: &audioSessionError];
[mySession setCategory: AVAudioSessionCategoryPlayAndRecord
error: &audioSessionError];
[mySession setActive: YES error: &audioSessionError];
OSStatus result = noErr;
// create a new AUGraph
result = NewAUGraph(&mGraph);
AUNode outputNode;
AUNode effectsNode;
AUNode converterNode;
// effects component
AudioComponentDescription effects_desc;
effects_desc.componentType = kAudioUnitType_Effect;
effects_desc.componentSubType = kAudioUnitSubType_HighPassFilter;
effects_desc.componentFlags = 0;
effects_desc.componentFlagsMask = 0;
effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// output component
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// stream format converter component
AudioComponentDescription converter_desc;
converter_desc.componentType = kAudioUnitType_FormatConverter;
converter_desc.componentSubType = kAudioUnitSubType_AUConverter;
converter_desc.componentFlags = 0;
converter_desc.componentFlagsMask = 0;
converter_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Add nodes to the graph
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &converter_desc, &converterNode);
// manage connections in the graph
// Connect the io unit node's input element's output to the effectsNode input
result = AUGraphConnectNodeInput(mGraph, outputNode, 1, effectsNode, 0);
// Connect the effects node's output to the converter node's input
result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, converterNode, 0);
// open the graph
result = AUGraphOpen(mGraph);
// Get references to the audio units
result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);
result = AUGraphNodeInfo(mGraph, outputNode, NULL, &ioUnit);
result = AUGraphNodeInfo(mGraph, converterNode, NULL, &converterUnit);
// Enable input on remote io unit
UInt32 flag = 1;
result = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
// Setup render callback struct
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &renderInput;
renderCallbackStruct.inputProcRefCon = self;
result = AUGraphSetNodeInputCallback(mGraph, outputNode, 0, &renderCallbackStruct);
// Get fx unit's input current stream format...
AudioStreamBasicDescription fxInputASBD;
UInt32 sizeOfASBD = sizeof(AudioStreamBasicDescription);
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxInputASBD, &sizeOfASBD);
// ...and set it on the io unit's input scope's output
result = AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
1,
&fxInputASBD,
sizeof(fxInputASBD));
// Set fx unit's output sample rate, just in case
Float64 sampleRate = 44100.0;
result = AudioUnitSetProperty(mEffects,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&sampleRate,
sizeof(sampleRate));
AudioStreamBasicDescription fxOutputASBD;
// get fx audio unit's output ASBD...
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &fxOutputASBD, &sizeOfASBD);
// ...and set it to the converter audio unit's input
result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxOutputASBD, sizeof(fxOutputASBD));
AudioStreamBasicDescription ioUnitsOutputElementInputASBD;
// now get io audio unit's output element's input ASBD...
result = AudioUnitGetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioUnitsOutputElementInputASBD, &sizeOfASBD);
// ...set the sample rate...
ioUnitsOutputElementInputASBD.mSampleRate = 44100.0;
// ...and set it to the converter audio unit's output
result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &ioUnitsOutputElementInputASBD, sizeof(ioUnitsOutputElementInputASBD));
// initialize graph
result = AUGraphInitialize(mGraph);
}
The reason I make the connection between the converter's output and the remote io unit's output element's input with a render callback function (rather than with the AUGraphConnectNodeInput method) is because I need to make some calculations on the samples right after they've been processed by the high-pass filter. The render callback gives me the opportunity to look into the samples buffer right after the AudioUnitRender call, and do said calculations there.
UPDATE 2:
By debugging, I found differences in the Remote IO output bus' input ASBD on the device and on the simulator. It shouldn't make a difference (I allocate and initialize the AudioBufferList based on data coming from a previous AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call), but it's the only thing I can see different in the device and the simulator.
Here's the Remote IO output bus' input ASBD on the device:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 41
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 32
UInt32 mReserved 0
, and here it is on the simulator:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 12
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 16
UInt32 mReserved 0

Core Audio memory issues

My Audio Unit analysis project is having some memory issues, whereby each time an Audio Unit is rendered (or somewhere around that) it is allocating a bunch of memory which isn't being released, causing memory usage to swell and the app to eventually crash.
In instruments, I notice the following string of 32 byte mallocs occurring repeatedly, and they remain live:
BufferedAudioConverter::AllocateBuffers() x6
BufferedInputAudioConverter:BufferedInputAudioConverter(StreamDescPair const&) x 3
Any ideas where the problem might lie? When is that memory allocated in the process and how can it safely be released?
Many thanks.
The code was based on some non-Apple sample code, PitchDetector from sleepyleaf.com
Some code extracts where the problem might lie..... Please let me know if more code is needed.
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
inTimeStamp, bus1, inNumberFrames, THIS->bufferList); //128 inNumberFrames
if (renderErr < 0) {
return renderErr;
}
// Fill the buffer with our sampled data. If we fill our buffer, run the
// fft.
int read = bufferCapacity - index;
if (read > inNumberFrames) {
memcpy((SInt16 *)dataBuffer + index, THIS->bufferList->mBuffers[0].mData, inNumberFrames*sizeof(SInt16));
THIS->index += inNumberFrames;
} else { DO ANALYSIS
memset(outputBuffer, 0, n*sizeof(SInt16));
- (void)createAUProcessingGraph {
OSStatus err;
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;
ioUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
// Declare and instantiate an audio processing graph
NewAUGraph(&processingGraph);
// Add an audio unit node to the graph, then instantiate the audio unit.
/*
An AUNode is an opaque type that represents an audio unit in the context
of an audio processing graph. You receive a reference to the new audio unit
instance, in the ioUnit parameter, on output of the AUGraphNodeInfo
function call.
*/
AUNode ioNode;
AUGraphAddNode(processingGraph, &ioUnitDescription, &ioNode);
AUGraphOpen(processingGraph); // indirectly performs audio unit instantiation
// Obtain a reference to the newly-instantiated I/O unit. Each Audio Unit
// requires its own configuration.
AUGraphNodeInfo(processingGraph, ioNode, NULL, &ioUnit);
// Initialize below.
AURenderCallbackStruct callbackStruct = {0};
UInt32 enableInput;
UInt32 enableOutput;
// Enable input and disable output.
enableInput = 1; enableOutput = 0;
callbackStruct.inputProc = RenderFFTCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus, &enableInput, sizeof(enableInput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus, &enableOutput, sizeof(enableOutput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Input,
kOutputBus, &callbackStruct, sizeof(callbackStruct));
// Set the stream format.
size_t bytesPerSample = [self ASBDForSoundMode];
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus, &streamFormat, sizeof(streamFormat));
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus, &streamFormat, sizeof(streamFormat));
// Disable system buffer allocation. We'll do it ourselves.
UInt32 flag = 0;
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus, &flag, sizeof(flag));
// Allocate AudioBuffers for use when listening.
// TODO: Move into initialization...should only be required once.
bufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = 512*bytesPerSample;
bufferList->mBuffers[0].mData = calloc(512, bytesPerSample);
}
I managed to find and fix the issue, which was in an area of the code not posted above.
In a later step, the output buffer was being converted to a different number format using an AudioConverter object. However, the converter object was not being disposed of and remained live in memory. I fixed it by using AudioConverterDispose, as below:
err = AudioConverterNew(&inFormat, &outFormat, &converter);
err = AudioConverterConvertBuffer(converter, inSize, buf, &outSize, outputBuf);
err = AudioConverterDispose (converter);
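If the conversion happens on every render cycle, it is usually better to create one converter up front and reuse it, disposing of it only on teardown, rather than creating and disposing it per buffer; a sketch of that layout (under that assumption, with the same variable names as above):
// Setup (once), when the input/output formats are known:
AudioConverterRef converter = NULL;
OSStatus err = AudioConverterNew(&inFormat, &outFormat, &converter);

// Per buffer (e.g. each time new samples arrive):
err = AudioConverterConvertBuffer(converter, inSize, buf, &outSize, outputBuf);

// Teardown (once), when audio processing stops:
err = AudioConverterDispose(converter);
converter = NULL;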
