Mixer AudioUnit to RemoteIO AudioUnit - iOS

I have an AudioUnit with its corresponding render callback working properly. But now I need to send its output to a RemoteIO, because I'm implementing against a framework that needs a RemoteIO AudioUnit to work.
So: I need the same output I'm getting with this mixer AudioUnit, but from another AudioUnit of type kAudioUnitSubType_RemoteIO.
Please, help!
EDIT: This is the code I'm trying...
EDIT 2: iOUnitDescription added.
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
AudioComponent foundIoUnitReference = AudioComponentFindNext (
NULL,
&iOUnitDescription
);
AudioComponentInstanceNew (
foundIoUnitReference,
&audioUnit
);
result = AudioUnitSetProperty (
audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
guitarBus,
&stereoStreamFormat,
sizeof (stereoStreamFormat)
);
if (noErr != result) {[self printErrorMessage: #"AudioUnitSetProperty (set mixer unit guitar input bus stream format)" withStatus: result];return;}
result = AudioUnitSetProperty (
audioUnit,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&graphSampleRate,
sizeof (graphSampleRate)
);
if (noErr != result) {[self printErrorMessage: #"AudioUnitSetProperty (set AUDIOUNIT unit output stream format)" withStatus: result]; return;}
AudioUnitElement mixerUnitOutputBus = 0;
AudioUnitElement ioUnitOutputElement = 0;
AudioUnitConnection mixerOutToIoUnitIn;
mixerOutToIoUnitIn.sourceAudioUnit = mixerUnit;
mixerOutToIoUnitIn.sourceOutputNumber = mixerUnitOutputBus;
mixerOutToIoUnitIn.destInputNumber = ioUnitOutputElement;
AudioUnitSetProperty (
audioUnit, // connection destination
kAudioUnitProperty_MakeConnection, // property key
kAudioUnitScope_Input, // destination scope
ioUnitOutputElement, // destination element
&mixerOutToIoUnitIn, // connection definition
sizeof (mixerOutToIoUnitIn)
);

I really need more info. From the above, I see you have a mixer somewhere and a guitarBus, which presumably is your input (and seemingly a stream). What is the definition of iOUnitDescription? More importantly, where are you hooking up your render callback, what are you doing in the callback, and what does the framework expect?
Typically, when I need to process audio, I build my own graph; I make it its own class for better portability. This should be a good starting point for you.
Here is how I implement such a solution.
// header file
@interface MDMixerGraph : NSObject{
AUGraph graph;
AudioUnit mixerUnit;
AudioUnit inputUnit;
AudioUnit rioUnit;
}
-(void) setupAUGraph;
@end
// implementation
@implementation MDMixerGraph
// exception Helper
void MDThrowOnError(OSStatus status){
if (status != noErr) {
@throw [NSException exceptionWithName:@"MDMixerException"
reason:[NSString stringWithFormat:@"Status Error %d.", (int)status]
userInfo:nil];
}
}
// helper method for setting up graph nodes
OSStatus MDAddAUGraphNode(OSType inComponentType, OSType inComponentSubType, AUGraph inGraph, AUNode *outNode)
{
AudioComponentDescription desc;
desc.componentType = inComponentType;
desc.componentSubType = inComponentSubType;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
return AUGraphAddNode(inGraph, &desc, outNode);
}
// setup method to init and start AUGraph
-(void) setupAUGraph{
//Create the Graph
MDThrowOnError(NewAUGraph(&graph));
// setup AU Units
// Add Audio Units (Nodes) to the graph
AUNode inputNode, rioNode, mixerNode;
//Input Node -- this may need a different component type/subtype to accept your stream (not enough info above)
MDThrowOnError(MDAddAUGraphNode(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, graph, &inputNode));
//Remote IO Node - your output node
MDThrowOnError(MDAddAUGraphNode(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, graph, &rioNode));
//mixerNode - Depending on output and input change the mixer sub-type here
// you can configure additional nodes depending on your needs for inputs and outputs
MDThrowOnError(MDAddAUGraphNode(kAudioUnitType_Mixer, kAudioUnitSubType_AU3DMixerEmbedded, graph, &mixerNode));
// open graph
MDThrowOnError(AUGraphOpen(graph));
// we need a ref to the Audio Units, so let's grab all of them here
MDThrowOnError(AUGraphNodeInfo(graph, inputNode, NULL, &inputUnit));
MDThrowOnError(AUGraphNodeInfo(graph, rioNode, NULL, &rioUnit));
MDThrowOnError(AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit));
// setup the connections here, input to output of the graph.
/// the graph looks like inputNode->mixerNode->rioNode
MDThrowOnError(AUGraphConnectNodeInput(graph, inputNode, 0, mixerNode, 0));
MDThrowOnError(AUGraphConnectNodeInput(graph, mixerNode, 0, rioNode, 0));
// Init the graph
MDThrowOnError(AUGraphInitialize(graph));
//do any other setup here for your stream
// Finally, Start the graph
MDThrowOnError(AUGraphStart(graph));
}
In your View Controller extension you simply declare:
// a property for the MDMixerGraph class
@property (nonatomic) MDMixerGraph *mixer;
And in the implementation
self.mixer = [[MDMixerGraph alloc]init];
[self.mixer setupAUGraph];
And you have a reference to the rioUnit to pass to your framework (self.mixer.rioUnit). Without knowing more about your connection/processing requirements, this is the best I can do for you.
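If the framework also needs to tap the samples flowing through the RemoteIO unit, one option is a render-notify callback on the rioUnit. A minimal sketch; MyFrameworkProcess is a hypothetical stand-in for whatever entry point your framework actually exposes:
// called before and after each render cycle of the rioUnit
static OSStatus rioRenderNotify(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
MyFrameworkProcess(ioData, inNumberFrames); // hypothetical framework hook
}
return noErr;
}
// after [self.mixer setupAUGraph]:
AudioUnitAddRenderNotify(self.mixer.rioUnit, rioRenderNotify, NULL);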
Cheers!

Related

How to play a signal with AudioUnit (iOS)?

I need to generate a signal and play it through the iPhone's speakers or a headset.
To do so I generate an interleaved signal. Then I need to instantiate an AudioUnit-derived class object with the following parameters: 2 channels, a 44.1 kHz sample rate, and a buffer size big enough to store a few frames.
Then I need to write a callback method which will take a chunk of my signal and put it into the iPhone's output buffer.
The problem is that I have no idea how to write an AudioUnit-derived class. I can't understand Apple's documentation on the subject, and all the examples I could find either read from a file and play it with huge lag, or use deprecated constructs.
I'm starting to think I'm stupid or something. Please, help...
To play audio to the iPhone's hardware with an AudioUnit, you don't derive from AudioUnit, as Core Audio is a C framework - instead you give it a render callback in which you feed the unit your audio samples. The following code sample shows how. You need to replace the asserts with real error handling, and you'll probably want to change, or at least inspect, the audio unit's sample format using the kAudioUnitProperty_StreamFormat selector. My format happens to be 48 kHz floating-point interleaved stereo.
static OSStatus
renderCallback(
void* inRefCon,
AudioUnitRenderActionFlags* ioActionFlags,
const AudioTimeStamp* inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList* ioData)
{
// inRefCon contains your cookie
// write inNumberFrames to ioData->mBuffers[i].mData here
return noErr;
}
AudioUnit
createAudioUnit() {
AudioUnit au;
OSStatus err;
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
assert(0 != comp);
err = AudioComponentInstanceNew(comp, &au);
assert(0 == err);
AURenderCallbackStruct input;
input.inputProc = renderCallback;
input.inputProcRefCon = 0; // put your cookie here
err = AudioUnitSetProperty(au, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &input, sizeof(input));
assert(0 == err);
err = AudioUnitInitialize(au);
assert(0 == err);
err = AudioOutputUnitStart(au);
assert(0 == err);
return au;
}
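As a usage example: if the unit's format really is 48 kHz floating-point interleaved stereo, a render callback that fills the output with a 440 Hz tone could look like the sketch below (the static phase variable is just for illustration; requires <math.h>):
static OSStatus
sineRenderCallback(
void* inRefCon,
AudioUnitRenderActionFlags* ioActionFlags,
const AudioTimeStamp* inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList* ioData)
{
static double phase = 0.0;
const double phaseStep = 2.0 * M_PI * 440.0 / 48000.0; // 440 Hz at 48 kHz
float* out = (float*)ioData->mBuffers[0].mData; // interleaved stereo: L R L R ...
for (UInt32 i = 0; i < inNumberFrames; ++i) {
float sample = (float)(0.25 * sin(phase));
out[2 * i] = sample; // left
out[2 * i + 1] = sample; // right
phase += phaseStep;
}
return noErr;
}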

core audio offline rendering GenericOutput

Has anybody successfully done offline rendering using Core Audio?
I had to mix two audio files and apply reverb (using 2 AudioFilePlayers, a MultiChannelMixer, Reverb2 and RemoteIO).
I got it working, and I could save the output while it was previewing (in the render callback of the RemoteIO).
Now I need to save it without playing it (offline).
Thanks in advance.
Offline rendering worked for me using the GenericOutput AudioUnit.
I am sharing the working code here.
The Core Audio framework can seem a little tough, but small things in it like the ASBD, parameters, etc. are what cause these issues. Keep trying and it will work. Don't give up :-). Core Audio is very powerful and useful when dealing with low-level audio. That's what I learned over these last weeks. Enjoy :-D ....
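One note on the helper used throughout the code below: CheckError isn't shown anywhere in this answer. If you don't already have one, a minimal version (in the style of the one from the Learning Core Audio book) could look like this:
static void CheckError(OSStatus error, const char *operation)
{
if (error == noErr) return;
char errorString[20];
// see if it appears to be a four-character code
*(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
if (isprint(errorString[1]) && isprint(errorString[2]) &&
isprint(errorString[3]) && isprint(errorString[4])) {
errorString[0] = errorString[5] = '\'';
errorString[6] = '\0';
} else {
sprintf(errorString, "%d", (int)error);
}
fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
exit(1); // or handle the error more gracefully in production code
}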
Declare these in .h
//AUGraph
AUGraph mGraph;
//Audio Unit References
AudioUnit mFilePlayer;
AudioUnit mFilePlayer2;
AudioUnit mReverb;
AudioUnit mTone;
AudioUnit mMixer;
AudioUnit mGIO;
//Audio File Location
AudioFileID inputFile;
AudioFileID inputFile2;
//Audio file references for saving
ExtAudioFileRef extAudioFile;
//Standard sample rate
Float64 graphSampleRate;
AudioStreamBasicDescription stereoStreamFormat864;
Float64 MaxSampleTime;
//in .m class
- (id) init
{
self = [super init];
graphSampleRate = 44100.0;
MaxSampleTime = 0.0;
UInt32 category = kAudioSessionCategory_MediaPlayback;
CheckError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(category),
&category),
"Couldn't set category on audio session");
[self initializeAUGraph];
return self;
}
//ASBD setup
- (void) setupStereoStream864 {
// The AudioUnitSampleType data type is the recommended type for sample data in audio
// units. This obtains the byte size of the type for use in filling in the ASBD.
size_t bytesPerSample = sizeof (AudioUnitSampleType);
// Fill the application audio format struct's fields to define a linear PCM,
// stereo, noninterleaved stream at the hardware sample rate.
stereoStreamFormat864.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat864.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat864.mBytesPerPacket = bytesPerSample;
stereoStreamFormat864.mFramesPerPacket = 1;
stereoStreamFormat864.mBytesPerFrame = bytesPerSample;
stereoStreamFormat864.mChannelsPerFrame = 2; // 2 indicates stereo
stereoStreamFormat864.mBitsPerChannel = 8 * bytesPerSample;
stereoStreamFormat864.mSampleRate = graphSampleRate;
}
//AUGraph setup
- (void)initializeAUGraph
{
[self setupStereoStream864];
// Setup the AUGraph, add AUNodes, and make connections
// create a new AUGraph
CheckError(NewAUGraph(&mGraph),"Couldn't create new graph");
// AUNodes represent AudioUnits on the AUGraph and provide an
// easy means for connecting audioUnits together.
AUNode filePlayerNode;
AUNode filePlayerNode2;
AUNode mixerNode;
AUNode reverbNode;
AUNode toneNode;
AUNode gOutputNode;
// file player component
AudioComponentDescription filePlayer_desc;
filePlayer_desc.componentType = kAudioUnitType_Generator;
filePlayer_desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
filePlayer_desc.componentFlags = 0;
filePlayer_desc.componentFlagsMask = 0;
filePlayer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// file player component2
AudioComponentDescription filePlayer2_desc;
filePlayer2_desc.componentType = kAudioUnitType_Generator;
filePlayer2_desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
filePlayer2_desc.componentFlags = 0;
filePlayer2_desc.componentFlagsMask = 0;
filePlayer2_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Create AudioComponentDescriptions for the AUs we want in the graph
// mixer component
AudioComponentDescription mixer_desc;
mixer_desc.componentType = kAudioUnitType_Mixer;
mixer_desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixer_desc.componentFlags = 0;
mixer_desc.componentFlagsMask = 0;
mixer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Create AudioComponentDescriptions for the AUs we want in the graph
// Reverb component
AudioComponentDescription reverb_desc;
reverb_desc.componentType = kAudioUnitType_Effect;
reverb_desc.componentSubType = kAudioUnitSubType_Reverb2;
reverb_desc.componentFlags = 0;
reverb_desc.componentFlagsMask = 0;
reverb_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
//tone component
AudioComponentDescription tone_desc;
tone_desc.componentType = kAudioUnitType_FormatConverter;
//tone_desc.componentSubType = kAudioUnitSubType_NewTimePitch;
tone_desc.componentSubType = kAudioUnitSubType_Varispeed;
tone_desc.componentFlags = 0;
tone_desc.componentFlagsMask = 0;
tone_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponentDescription gOutput_desc;
gOutput_desc.componentType = kAudioUnitType_Output;
gOutput_desc.componentSubType = kAudioUnitSubType_GenericOutput;
gOutput_desc.componentFlags = 0;
gOutput_desc.componentFlagsMask = 0;
gOutput_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
//Add nodes to graph
// Add nodes to the graph to hold our AudioUnits,
// You pass in a reference to the AudioComponentDescription
// and get back an AudioUnit
AUGraphAddNode(mGraph, &filePlayer_desc, &filePlayerNode );
AUGraphAddNode(mGraph, &filePlayer2_desc, &filePlayerNode2 );
AUGraphAddNode(mGraph, &mixer_desc, &mixerNode );
AUGraphAddNode(mGraph, &reverb_desc, &reverbNode );
AUGraphAddNode(mGraph, &tone_desc, &toneNode );
AUGraphAddNode(mGraph, &gOutput_desc, &gOutputNode);
//Open the graph early, initialize late
// open the graph AudioUnits are open but not initialized (no resource allocation occurs here)
CheckError(AUGraphOpen(mGraph),"Couldn't Open the graph");
//Reference to Nodes
// get the reference to the AudioUnit object for the file player graph node
AUGraphNodeInfo(mGraph, filePlayerNode, NULL, &mFilePlayer);
AUGraphNodeInfo(mGraph, filePlayerNode2, NULL, &mFilePlayer2);
AUGraphNodeInfo(mGraph, reverbNode, NULL, &mReverb);
AUGraphNodeInfo(mGraph, toneNode, NULL, &mTone);
AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
AUGraphNodeInfo(mGraph, gOutputNode, NULL, &mGIO);
AUGraphConnectNodeInput(mGraph, filePlayerNode, 0, reverbNode, 0);
AUGraphConnectNodeInput(mGraph, reverbNode, 0, toneNode, 0);
AUGraphConnectNodeInput(mGraph, toneNode, 0, mixerNode,0);
AUGraphConnectNodeInput(mGraph, filePlayerNode2, 0, mixerNode, 1);
AUGraphConnectNodeInput(mGraph, mixerNode, 0, gOutputNode, 0);
UInt32 busCount = 2; // bus count for mixer unit input
//Setup mixer unit bus count
CheckError(AudioUnitSetProperty (
mMixer,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&busCount,
sizeof (busCount)
),
"Couldn't set mixer unit's bus count");
//Enable metering mode to view levels input and output levels of mixer
UInt32 onValue = 1;
CheckError(AudioUnitSetProperty(mMixer,
kAudioUnitProperty_MeteringMode,
kAudioUnitScope_Input,
0,
&onValue,
sizeof(onValue)),
"error");
// Increasing the maximum frames per slice allows the mixer unit to accommodate the
// larger slice size used when the screen is locked.
UInt32 maximumFramesPerSlice = 4096;
CheckError(AudioUnitSetProperty (
mMixer,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
0,
&maximumFramesPerSlice,
sizeof (maximumFramesPerSlice)
),
"Couldn't set mixer units maximum framers per slice");
// set the audio data format of tone Unit
AudioUnitSetProperty(mTone,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Global,
0,
&stereoStreamFormat864,
sizeof(AudioStreamBasicDescription));
// set the audio data format of reverb Unit
AudioUnitSetProperty(mReverb,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Global,
0,
&stereoStreamFormat864,
sizeof(AudioStreamBasicDescription));
// set initial reverb parameters (using the named Reverb2 parameter constants)
AudioUnitParameterValue reverbTime = 2.5;
AudioUnitSetParameter(mReverb, kReverb2Param_DecayTimeAt0Hz, kAudioUnitScope_Global, 0, reverbTime, 0);
AudioUnitSetParameter(mReverb, kReverb2Param_DecayTimeAtNyquist, kAudioUnitScope_Global, 0, reverbTime, 0);
AudioUnitSetParameter(mReverb, kReverb2Param_DryWetMix, kAudioUnitScope_Global, 0, 0, 0);
AudioStreamBasicDescription auEffectStreamFormat;
UInt32 asbdSize = sizeof (auEffectStreamFormat);
memset (&auEffectStreamFormat, 0, sizeof (auEffectStreamFormat ));
// get the audio data format from reverb
CheckError(AudioUnitGetProperty(mReverb,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&auEffectStreamFormat,
&asbdSize),
"Couldn't get aueffectunit ASBD");
auEffectStreamFormat.mSampleRate = graphSampleRate;
// set the audio data format of mixer Unit
CheckError(AudioUnitSetProperty(mMixer,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&auEffectStreamFormat, sizeof(auEffectStreamFormat)),
"Couldn't set ASBD on mixer output");
CheckError(AUGraphInitialize(mGraph),"Couldn't Initialize the graph");
[self setUpAUFilePlayer];
[self setUpAUFilePlayer2];
}
//Audio file playback setup; here I am setting the voice file
-(OSStatus) setUpAUFilePlayer{
NSString *songPath = [[NSBundle mainBundle] pathForResource: #"testVoice" ofType:#".m4a"];
CFURLRef songURL = ( CFURLRef) [NSURL fileURLWithPath:songPath];
// open the input audio file
CheckError(AudioFileOpenURL(songURL, kAudioFileReadPermission, 0, &inputFile),
"setUpAUFilePlayer AudioFileOpenURL failed");
AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyDataFormat,
&propSize, &fileASBD),
"setUpAUFilePlayer couldn't get file's data format");
// tell the file player unit to load the file we want to play
CheckError(AudioUnitSetProperty(mFilePlayer, kAudioUnitProperty_ScheduledFileIDs,
kAudioUnitScope_Global, 0, &inputFile, sizeof(inputFile)),
"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileIDs] failed");
UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyAudioDataPacketCount,
&propsize, &nPackets),
"setUpAUFilePlayer AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");
// tell the file player AU to play the entire file
ScheduledAudioFileRegion rgn;
memset (&rgn.mTimeStamp, 0, sizeof(rgn.mTimeStamp));
rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
rgn.mTimeStamp.mSampleTime = 0;
rgn.mCompletionProc = NULL;
rgn.mCompletionProcUserData = NULL;
rgn.mAudioFile = inputFile;
rgn.mLoopCount = -1;
rgn.mStartFrame = 0;
rgn.mFramesToPlay = nPackets * fileASBD.mFramesPerPacket;
if (MaxSampleTime < rgn.mFramesToPlay)
{
MaxSampleTime = rgn.mFramesToPlay;
}
CheckError(AudioUnitSetProperty(mFilePlayer, kAudioUnitProperty_ScheduledFileRegion,
kAudioUnitScope_Global, 0,&rgn, sizeof(rgn)),
"setUpAUFilePlayer1 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileRegion] failed");
// prime the file player AU with default values
UInt32 defaultVal = 0;
CheckError(AudioUnitSetProperty(mFilePlayer, kAudioUnitProperty_ScheduledFilePrime,
kAudioUnitScope_Global, 0, &defaultVal, sizeof(defaultVal)),
"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduledFilePrime] failed");
// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
CheckError(AudioUnitSetProperty(mFilePlayer, kAudioUnitProperty_ScheduleStartTimeStamp,
kAudioUnitScope_Global, 0, &startTime, sizeof(startTime)),
"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp]");
return noErr;
}
//Audio file playback setup; here I am setting the BGMusic file
-(OSStatus) setUpAUFilePlayer2{
NSString *songPath = [[NSBundle mainBundle] pathForResource: #"BGmusic" ofType:#".mp3"];
CFURLRef songURL = ( CFURLRef) [NSURL fileURLWithPath:songPath];
// open the input audio file
CheckError(AudioFileOpenURL(songURL, kAudioFileReadPermission, 0, &inputFile2),
"setUpAUFilePlayer2 AudioFileOpenURL failed");
AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
CheckError(AudioFileGetProperty(inputFile2, kAudioFilePropertyDataFormat,
&propSize, &fileASBD),
"setUpAUFilePlayer2 couldn't get file's data format");
// tell the file player unit to load the file we want to play
CheckError(AudioUnitSetProperty(mFilePlayer2, kAudioUnitProperty_ScheduledFileIDs,
kAudioUnitScope_Global, 0, &inputFile2, sizeof(inputFile2)),
"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileIDs] failed");
UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile2, kAudioFilePropertyAudioDataPacketCount,
&propsize, &nPackets),
"setUpAUFilePlayer2 AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");
// tell the file player AU to play the entire file
ScheduledAudioFileRegion rgn;
memset (&rgn.mTimeStamp, 0, sizeof(rgn.mTimeStamp));
rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
rgn.mTimeStamp.mSampleTime = 0;
rgn.mCompletionProc = NULL;
rgn.mCompletionProcUserData = NULL;
rgn.mAudioFile = inputFile2;
rgn.mLoopCount = -1;
rgn.mStartFrame = 0;
rgn.mFramesToPlay = nPackets * fileASBD.mFramesPerPacket;
if (MaxSampleTime < rgn.mFramesToPlay)
{
MaxSampleTime = rgn.mFramesToPlay;
}
CheckError(AudioUnitSetProperty(mFilePlayer2, kAudioUnitProperty_ScheduledFileRegion,
kAudioUnitScope_Global, 0,&rgn, sizeof(rgn)),
"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileRegion] failed");
// prime the file player AU with default values
UInt32 defaultVal = 0;
CheckError(AudioUnitSetProperty(mFilePlayer2, kAudioUnitProperty_ScheduledFilePrime,
kAudioUnitScope_Global, 0, &defaultVal, sizeof(defaultVal)),
"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFilePrime] failed");
// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
CheckError(AudioUnitSetProperty(mFilePlayer2, kAudioUnitProperty_ScheduleStartTimeStamp,
kAudioUnitScope_Global, 0, &startTime, sizeof(startTime)),
"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp]");
return noErr;
}
//Start Saving File
- (void)startRecordingAAC{
AudioStreamBasicDescription destinationFormat;
memset(&destinationFormat, 0, sizeof(destinationFormat));
destinationFormat.mChannelsPerFrame = 2;
destinationFormat.mFormatID = kAudioFormatMPEG4AAC;
UInt32 size = sizeof(destinationFormat);
OSStatus result = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &destinationFormat);
if(result) printf("AudioFormatGetProperty %ld \n", result);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat:@"%@/output.m4a", documentsDirectory];
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
(CFStringRef)destinationFilePath,
kCFURLPOSIXPathStyle,
false);
[destinationFilePath release];
// create the output file, saving the output in .m4a format
result = ExtAudioFileCreateWithURL(destinationURL,
kAudioFileM4AType,
&destinationFormat,
NULL,
kAudioFileFlags_EraseFile,
&extAudioFile);
if(result) printf("ExtAudioFileCreateWithURL %ld \n", result);
CFRelease(destinationURL);
// This is a very important part, and the easiest way to set the client ASBD for the file with the correct format.
AudioStreamBasicDescription clientFormat;
UInt32 fSize = sizeof (clientFormat);
memset(&clientFormat, 0, sizeof(clientFormat));
// get the audio data format from the Output Unit
CheckError(AudioUnitGetProperty(mGIO,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&clientFormat,
&fSize),"AudioUnitGetProperty on failed");
// set the audio data format of mixer Unit
CheckError(ExtAudioFileSetProperty(extAudioFile,
kExtAudioFileProperty_ClientDataFormat,
sizeof(clientFormat),
&clientFormat),
"ExtAudioFileSetProperty kExtAudioFileProperty_ClientDataFormat failed");
// specify codec
UInt32 codec = kAppleHardwareAudioCodecManufacturer;
CheckError(ExtAudioFileSetProperty(extAudioFile,
kExtAudioFileProperty_CodecManufacturer,
sizeof(codec),
&codec),"ExtAudioFileSetProperty on extAudioFile Faild");
CheckError(ExtAudioFileWriteAsync(extAudioFile, 0, NULL),"ExtAudioFileWriteAsync Failed");
[self pullGenericOutput];
}
// Manually pull data/buffers from the GenericOutput node and write them out.
-(void)pullGenericOutput{
AudioUnitRenderActionFlags flags = 0;
AudioTimeStamp inTimeStamp;
memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
UInt32 busNumber = 0;
UInt32 numberFrames = 512;
inTimeStamp.mSampleTime = 0;
int channelCount = 2;
NSLog(#"Final numberFrames :%li",numberFrames);
int totFrms = MaxSampleTime;
while (totFrms > 0)
{
if (totFrms < numberFrames)
{
numberFrames = totFrms;
NSLog(#"Final numberFrames :%li",numberFrames);
}
else
{
totFrms -= numberFrames;
}
AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
bufferList->mNumberBuffers = channelCount;
for (int j=0; j<channelCount; j++)
{
AudioBuffer buffer = {0};
buffer.mNumberChannels = 1;
buffer.mDataByteSize = numberFrames*sizeof(AudioUnitSampleType);
buffer.mData = calloc(numberFrames, sizeof(AudioUnitSampleType));
bufferList->mBuffers[j] = buffer;
}
CheckError(AudioUnitRender(mGIO,
&flags,
&inTimeStamp,
busNumber,
numberFrames,
bufferList),
"AudioUnitRender mGIO");
CheckError(ExtAudioFileWrite(extAudioFile, numberFrames, bufferList),"ExtAudioFileWrite failed");
// free the per-pass buffers; without this the loop leaks memory on every iteration
for (int j=0; j<channelCount; j++)
{
free(bufferList->mBuffers[j].mData);
}
free(bufferList);
}
[self FilesSavingCompleted];
}
//FilesSavingCompleted
-(void)FilesSavingCompleted{
OSStatus status = ExtAudioFileDispose(extAudioFile);
printf("OSStatus(ExtAudioFileDispose): %ld\n", status);
}
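Once the file is disposed you may also want to tear down the graph and close the input files. A sketch of that cleanup, assuming nothing else is still using the graph (the method name is mine):
-(void)disposeGraphAndFiles{
CheckError(AUGraphUninitialize(mGraph), "AUGraphUninitialize failed");
CheckError(AUGraphClose(mGraph), "AUGraphClose failed");
CheckError(DisposeAUGraph(mGraph), "DisposeAUGraph failed");
CheckError(AudioFileClose(inputFile), "AudioFileClose inputFile failed");
CheckError(AudioFileClose(inputFile2), "AudioFileClose inputFile2 failed");
}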
One way to do offline rendering is to remove the RemoteIO unit and explicitly call AudioUnitRender on the right-most unit in your graph (either the mixer or the reverb unit depending on your topology). By doing this in a loop until you exhaust the samples from both of your source files, and writing the resulting sample buffers with Extended Audio File Services, you can create a compressed audio file of the mixdown. You'll want to do this on a background thread to keep the UI responsive, but I've used this technique before with some success.
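For example, kicking the whole render-and-write pass off the main thread can be as simple as this sketch (startRecordingAAC is the entry point from the answer above):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self startRecordingAAC]; // runs the offline render/write loop off the main thread
dispatch_async(dispatch_get_main_queue(), ^{
// update the UI here once the mixdown is done
});
});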
The above code works on an iOS 7 device, but not on an iOS 8 device, nor on any simulator.
I replaced the following code segment
UInt32 category = kAudioSessionCategory_MediaPlayback;
CheckError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(category),
&category),
"Couldn't set category on audio session");
with the following code, because AudioSessionSetProperty is deprecated:
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *setCategoryError = nil;
if (![session setCategory:AVAudioSessionCategoryPlayback
withOptions:AVAudioSessionCategoryOptionMixWithOthers
error:&setCategoryError]) {
// handle error
}
There must be some update needed for iOS 8, either in the code above or somewhere else.
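One thing worth double-checking with the AVAudioSession replacement is that the session actually gets activated after the category is set; a sketch of that step:
NSError *activationError = nil;
if (![session setActive:YES error:&activationError]) {
NSLog(@"Couldn't activate audio session: %@", activationError);
}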
I followed Abdusha's approach, but my output file had no audio, and its size was very small compared to the input. After looking into it, the fix I made was in the pullGenericOutput function, after the AudioUnitRender call:
AudioUnitRender(genericOutputUnit,
&flags,
&inTimeStamp,
busNumber,
numberFrames,
bufferList);
inTimeStamp.mSampleTime++; //Updated
Increment the timestamp by 1 on each pass. After doing this, the output file was perfect, with the effects working. Thanks - your answer helped a lot.
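In context, the tail of each pass through the loop then looks like this (a sketch; flags, inTimeStamp, busNumber, numberFrames and bufferList as in pullGenericOutput above, with the buffer cleanup shown earlier still following the write):
CheckError(AudioUnitRender(mGIO,
&flags,
&inTimeStamp,
busNumber,
numberFrames,
bufferList),
"AudioUnitRender mGIO");
inTimeStamp.mSampleTime++; // the fix: advance the timestamp on every pass
CheckError(ExtAudioFileWrite(extAudioFile, numberFrames, bufferList),
"ExtAudioFileWrite failed");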

OSStatus error -50 (paramErr) on AudioUnitRender call on device

I'm writing an iOS app that captures audio from the microphone, filters it with a high-pass filter, and plays it back through the speakers.
I'm getting a -50 OSStatus error when I call AudioUnitRender in the render callback when I run the app on an iPhone 4S, but it runs fine on the simulator.
I'm using an AUGraph, which has a RemoteIO unit, a HighPassFilter effect unit, and an AUConverter unit to make the ASBDs between the HPF and the output match. The converter AudioUnit instance is called converterUnit.
Here's the code.
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
AudioController *THIS = (AudioController*)inRefCon;
AudioBuffer buffer;
AudioStreamBasicDescription converterOutputASBD;
UInt32 converterOutputASBDSize = sizeof(converterOutputASBD);
AudioUnitGetProperty([THIS converterUnit], kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &converterOutputASBD, &converterOutputASBDSize);
buffer.mDataByteSize = inNumberFrames * converterOutputASBD.mBytesPerFrame;
buffer.mNumberChannels = converterOutputASBD.mChannelsPerFrame;
buffer.mData = malloc(buffer.mDataByteSize); // caution: allocated on every render; must be freed after use or it leaks
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
OSStatus result = AudioUnitRender([THIS converterUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
...
}
I think the -50 error means one of the parameters is wrong. The only parameters that can be wrong are [THIS converterUnit] and &bufferList, given that all the rest are handed to me as arguments. I've checked the converterUnit instance and it is correctly allocated and initialized (what's more, if that were the problem, it wouldn't run on the simulator either). The only parameter left to check is the bufferList. What I've been able to make out so far from debugging is that both the RemoteIO's output element's input ASBD and the inNumberFrames differ between the phone and the simulator. Still, I don't think that should change anything, given that I create and allocate memory for the AudioBuffer based on an ASBD obtained from a previous AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call.
Any help will be much appreciated; I'm getting kind of desperate here...
You guys rock.
Cheers.
UPDATE:
Here's the audio controller class' definition:
@interface AudioController : NSObject
{
AUGraph mGraph;
AudioUnit mEffects;
AudioUnit ioUnit;
AudioUnit converterUnit;
}
@property (readonly, nonatomic) AudioUnit mEffects;
@property (readonly, nonatomic) AudioUnit ioUnit;
@property (readonly, nonatomic) AudioUnit converterUnit;
@property (nonatomic) float* volumenPromedio;
-(void)initializeAUGraph;
-(void)startAUGraph;
-(void)stopAUGraph;
@end
, and here's the initialization code for the AUGraph (defined in AudioController.mm):
- (void)initializeAUGraph
{
NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setPreferredHardwareSampleRate: kGraphSampleRate
error: &audioSessionError];
[mySession setCategory: AVAudioSessionCategoryPlayAndRecord
error: &audioSessionError];
[mySession setActive: YES error: &audioSessionError];
OSStatus result = noErr;
// create a new AUGraph
result = NewAUGraph(&mGraph);
AUNode outputNode;
AUNode effectsNode;
AUNode converterNode;
// effects component
AudioComponentDescription effects_desc;
effects_desc.componentType = kAudioUnitType_Effect;
effects_desc.componentSubType = kAudioUnitSubType_HighPassFilter;
effects_desc.componentFlags = 0;
effects_desc.componentFlagsMask = 0;
effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// output component
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// stream format converter component
AudioComponentDescription converter_desc;
converter_desc.componentType = kAudioUnitType_FormatConverter;
converter_desc.componentSubType = kAudioUnitSubType_AUConverter;
converter_desc.componentFlags = 0;
converter_desc.componentFlagsMask = 0;
converter_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Add nodes to the graph
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &converter_desc, &converterNode);
// manage connections in the graph
// Connect the io unit node's input element's output to the effectsNode input
result = AUGraphConnectNodeInput(mGraph, outputNode, 1, effectsNode, 0);
// Connect the effects node's output to the converter node's input
result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, converterNode, 0);
// open the graph
result = AUGraphOpen(mGraph);
// Get references to the audio units
result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);
result = AUGraphNodeInfo(mGraph, outputNode, NULL, &ioUnit);
result = AUGraphNodeInfo(mGraph, converterNode, NULL, &converterUnit);
// Enable input on remote io unit
UInt32 flag = 1;
result = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
// Setup render callback struct
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &renderInput;
renderCallbackStruct.inputProcRefCon = self;
result = AUGraphSetNodeInputCallback(mGraph, outputNode, 0, &renderCallbackStruct);
// Get fx unit's input current stream format...
AudioStreamBasicDescription fxInputASBD;
UInt32 sizeOfASBD = sizeof(AudioStreamBasicDescription);
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxInputASBD, &sizeOfASBD);
// ...and set it on the io unit's input scope's output
result = AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
1,
&fxInputASBD,
sizeof(fxInputASBD));
// Set fx unit's output sample rate, just in case
Float64 sampleRate = 44100.0;
result = AudioUnitSetProperty(mEffects,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&sampleRate,
sizeof(sampleRate));
AudioStreamBasicDescription fxOutputASBD;
// get fx audio unit's output ASBD...
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &fxOutputASBD, &sizeOfASBD);
// ...and set it to the converter audio unit's input
result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxOutputASBD, sizeof(fxOutputASBD));
AudioStreamBasicDescription ioUnitsOutputElementInputASBD;
// now get io audio unit's output element's input ASBD...
result = AudioUnitGetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioUnitsOutputElementInputASBD, &sizeOfASBD);
// ...set the sample rate...
ioUnitsOutputElementInputASBD.mSampleRate = 44100.0;
// ...and set it to the converter audio unit's output
result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &ioUnitsOutputElementInputASBD, sizeof(ioUnitsOutputElementInputASBD));
// initialize graph
result = AUGraphInitialize(mGraph);
}
The reason I make the connection between the converter's output and the RemoteIO unit's output element's input with a render callback function (rather than with AUGraphConnectNodeInput) is that I need to do some calculations on the samples right after they've been processed by the high-pass filter. The render callback gives me the opportunity to look into the sample buffer right after the AudioUnitRender call and do those calculations there.
UPDATE 2:
By debugging, I found differences between the RemoteIO output bus's input ASBD on the device and on the simulator. It shouldn't make a difference (I allocate and initialize the AudioBufferList based on data coming from a previous AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call), but it's the only thing I can see that differs between the device and the simulator.
Here's the Remote IO output bus' input ASBD on the device:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 41
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 32
UInt32 mReserved 0
, and here it is on the simulator:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 12
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 16
UInt32 mReserved 0
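For what it's worth, flag value 41 includes kAudioFormatFlagIsNonInterleaved (0x20) along with kAudioFormatFlagIsFloat and kAudioFormatFlagIsPacked, whereas 12 describes a packed, interleaved signed-integer stream. If the format feeding AudioUnitRender on the device really is non-interleaved stereo, the AudioBufferList would need one buffer per channel rather than the single buffer built in the callback above. A sketch of that allocation (an observation about the format difference, not a confirmed fix):
// one mono buffer per channel for a non-interleaved stream
UInt32 channels = converterOutputASBD.mChannelsPerFrame;
AudioBufferList *abl = (AudioBufferList *)malloc(sizeof(AudioBufferList) + (channels - 1) * sizeof(AudioBuffer));
abl->mNumberBuffers = channels;
for (UInt32 ch = 0; ch < channels; ch++) {
abl->mBuffers[ch].mNumberChannels = 1;
abl->mBuffers[ch].mDataByteSize = inNumberFrames * converterOutputASBD.mBytesPerFrame;
abl->mBuffers[ch].mData = malloc(abl->mBuffers[ch].mDataByteSize);
}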

How do I connect an AudioFilePlayer AudioUnit to a 3DMixer?

I am trying to connect an AudioFilePlayer AudioUnit to an AU3DMixerEmbedded Audio Unit, but I'm having no success.
Here's what I'm doing:
1. Create an AUGraph with NewAUGraph()
2. Open the graph
3. Initialize the graph
4. Add 3 nodes:
outputNode: kAudioUnitSubType_RemoteIO
mixerNode: kAudioUnitSubType_AU3DMixerEmbedded
filePlayerNode: kAudioUnitSubType_AudioFilePlayer
5. Connect the nodes:
filePlayerNode -> mixerNode
mixerNode -> outputNode
6. Configure the filePlayer Audio Unit to play the required file
7. Start the graph
This doesn't work: it balks at AUGraphInitialize with error 10868 (kAudioUnitErr_FormatNotSupported). I think the problem is an audio format mismatch between the filePlayer and the mixer, because:
- If I comment out connecting the filePlayerNode to the mixerNode (AUGraphConnectNodeInput(_graph, filePlayerNode, 0, mixerNode, 0)) and comment out step 6, then no errors are reported.
- If I replace step 3 with connecting the filePlayerNode directly to the outputNode (AUGraphConnectNodeInput(_graph, filePlayerNode, 0, outputNode, 0)), then the audio plays.
What steps am I missing in connecting the filePlayerNode to the mixerNode?
Here's the code in full. It's based on Apple's sample code and other samples I've found around the interwebs. (AUGraphStart is called later):
- (id)init
{
self = [super init];
if (self != nil)
{
{
//create a new AUGraph
CheckError(NewAUGraph(&_graph), "NewAUGraph failed");
// opening the graph opens all contained audio units but does not allocate any resources yet
CheckError(AUGraphOpen(_graph), "AUGraphOpen failed");
// now initialize the graph (causes resources to be allocated)
CheckError(AUGraphInitialize(_graph), "AUGraphInitialize failed");
}
AUNode outputNode;
{
AudioComponentDescription outputAudioDesc = {0};
outputAudioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
outputAudioDesc.componentType = kAudioUnitType_Output;
outputAudioDesc.componentSubType = kAudioUnitSubType_RemoteIO;
// adds a node with above description to the graph
CheckError(AUGraphAddNode(_graph, &outputAudioDesc, &outputNode), "AUGraphAddNode[kAudioUnitSubType_DefaultOutput] failed");
}
AUNode mixerNode;
{
AudioComponentDescription mixerAudioDesc = {0};
mixerAudioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
mixerAudioDesc.componentType = kAudioUnitType_Mixer;
mixerAudioDesc.componentSubType = kAudioUnitSubType_AU3DMixerEmbedded;
mixerAudioDesc.componentFlags = 0;
mixerAudioDesc.componentFlagsMask = 0;
// adds a node with above description to the graph
CheckError(AUGraphAddNode(_graph, &mixerAudioDesc, &mixerNode), "AUGraphAddNode[kAudioUnitSubType_AU3DMixerEmbedded] failed");
}
AUNode filePlayerNode;
{
AudioComponentDescription fileplayerAudioDesc = {0};
fileplayerAudioDesc.componentType = kAudioUnitType_Generator;
fileplayerAudioDesc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
fileplayerAudioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
// adds a node with above description to the graph
CheckError(AUGraphAddNode(_graph, &fileplayerAudioDesc, &filePlayerNode), "AUGraphAddNode[kAudioUnitSubType_AudioFilePlayer] failed");
}
//Connect the nodes
{
// connect the output source of the file player AU to the input source of the output node
// CheckError(AUGraphConnectNodeInput(_graph, filePlayerNode, 0, outputNode, 0), "AUGraphConnectNodeInput");
CheckError(AUGraphConnectNodeInput(_graph, filePlayerNode, 0, mixerNode, 0), "AUGraphConnectNodeInput");
CheckError(AUGraphConnectNodeInput(_graph, mixerNode, 0, outputNode, 0), "AUGraphConnectNodeInput");
}
// configure the file player
// tell the file player unit to load the file we want to play
{
//?????
AudioStreamBasicDescription inputFormat; // input file's data stream description
AudioFileID inputFile; // reference to your input file
// open the input audio file and store the AU ref in _player
CFURLRef songURL = (__bridge CFURLRef)[[NSBundle mainBundle] URLForResource:@"monoVoice" withExtension:@"aif"];
CheckError(AudioFileOpenURL(songURL, kAudioFileReadPermission, 0, &inputFile), "AudioFileOpenURL failed");
AudioUnit fileAU;
// get the reference to the AudioUnit object for the file player graph node
CheckError(AUGraphNodeInfo(_graph, filePlayerNode, NULL, &fileAU), "AUGraphNodeInfo failed");
// get and store the audio data format from the file
UInt32 propSize = sizeof(inputFormat);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyDataFormat, &propSize, &inputFormat), "couldn't get file's data format");
CheckError(AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFileIDs, kAudioUnitScope_Global, 0, &(inputFile), sizeof((inputFile))), "AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileIDs] failed");
UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyAudioDataPacketCount, &propsize, &nPackets), "AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");
// tell the file player AU to play the entire file
ScheduledAudioFileRegion rgn;
memset (&rgn.mTimeStamp, 0, sizeof(rgn.mTimeStamp));
rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
rgn.mTimeStamp.mSampleTime = 0;
rgn.mCompletionProc = NULL;
rgn.mCompletionProcUserData = NULL;
rgn.mAudioFile = inputFile;
rgn.mLoopCount = 1;
rgn.mStartFrame = 0;
rgn.mFramesToPlay = nPackets * inputFormat.mFramesPerPacket;
CheckError(AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0,&rgn, sizeof(rgn)), "AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileRegion] failed");
// prime the file player AU with default values
UInt32 defaultVal = 0;
CheckError(AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFilePrime, kAudioUnitScope_Global, 0, &defaultVal, sizeof(defaultVal)), "AudioUnitSetProperty[kAudioUnitProperty_ScheduledFilePrime] failed");
// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
CheckError(AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduleStartTimeStamp, kAudioUnitScope_Global, 0, &startTime, sizeof(startTime)), "AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp]");
// file duration
//double duration = (nPackets * _player.inputFormat.mFramesPerPacket) / _player.inputFormat.mSampleRate;
}
}
return self;
}
I don't see in your code where you set the appropriate kAudioUnitProperty_StreamFormat for the audio units. You will also have to check the error result codes to see if the stream format setting you choose is actually supported by the audio unit being configured. If not, try another format.
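For example, to line up the formats you could read the mixer input's current format, adjust it, and set it back, checking the result each time. A sketch (node and graph names taken from the question's code; this is one way to probe formats, not a guaranteed fix):
AudioUnit mixerAU;
CheckError(AUGraphNodeInfo(_graph, mixerNode, NULL, &mixerAU), "AUGraphNodeInfo failed");
AudioStreamBasicDescription mixerInFormat;
UInt32 size = sizeof(mixerInFormat);
CheckError(AudioUnitGetProperty(mixerAU, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &mixerInFormat, &size), "get mixer input format failed");
mixerInFormat.mSampleRate = 44100.0;
OSStatus err = AudioUnitSetProperty(mixerAU, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &mixerInFormat, sizeof(mixerInFormat));
if (err != noErr) {
// kAudioUnitErr_FormatNotSupported (-10868): try another format
}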
AUGraphConnectNodeInput(_graph, filePlayerNode, 0, mixerNode, 0);
AUGraphConnectNodeInput(_graph, mixerNode, 0, outputNode, 0);
Try doing it this way, if it helps. Just for information: the node on the left is the input to the node on the right. So in the first line the player node feeds the mixer node; the mixer node now carries both the player and the mixer, so connect the mixer node to the output node.

Core Audio memory issues

My Audio Unit analysis project is having some memory issues: each time an Audio Unit is rendered (or somewhere around that), it allocates a bunch of memory which isn't being released, causing memory usage to swell and the app to eventually crash.
In instruments, I notice the following string of 32 byte mallocs occurring repeatedly, and they remain live:
BufferedAudioConverter::AllocateBuffers() x6
BufferedInputAudioConverter:BufferedInputAudioConverter(StreamDescPair const&) x 3
Any ideas where the problem might lie? When is that memory allocated in the process and how can it safely be released?
Many thanks.
The code was based on some non-Apple sample code, the PitchDetector from sleepyleaf.com.
Here are some code extracts where the problem might lie... Please let me know if more code is needed.
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
inTimeStamp, bus1, inNumberFrames, THIS->bufferList); //128 inNumberFrames
if (renderErr < 0) {
return renderErr;
}
// Fill the buffer with our sampled data. If we fill our buffer, run the
// fft.
int read = bufferCapacity - index;
if (read > inNumberFrames) {
memcpy((SInt16 *)dataBuffer + index, THIS->bufferList->mBuffers[0].mData, inNumberFrames*sizeof(SInt16));
THIS->index += inNumberFrames;
} else {
// DO ANALYSIS here (run the FFT once the buffer is full)
memset(outputBuffer, 0, n*sizeof(SInt16));
- (void)createAUProcessingGraph {
OSStatus err;
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;
ioUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
// Declare and instantiate an audio processing graph
NewAUGraph(&processingGraph);
// Add an audio unit node to the graph, then instantiate the audio unit.
/*
An AUNode is an opaque type that represents an audio unit in the context
of an audio processing graph. You receive a reference to the new audio unit
instance, in the ioUnit parameter, on output of the AUGraphNodeInfo
function call.
*/
AUNode ioNode;
AUGraphAddNode(processingGraph, &ioUnitDescription, &ioNode);
AUGraphOpen(processingGraph); // indirectly performs audio unit instantiation
// Obtain a reference to the newly-instantiated I/O unit. Each Audio Unit
// requires its own configuration.
AUGraphNodeInfo(processingGraph, ioNode, NULL, &ioUnit);
// Initialize below.
AURenderCallbackStruct callbackStruct = {0};
UInt32 enableInput;
UInt32 enableOutput;
// Enable input and disable output.
enableInput = 1; enableOutput = 0;
callbackStruct.inputProc = RenderFFTCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus, &enableInput, sizeof(enableInput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus, &enableOutput, sizeof(enableOutput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Input,
kOutputBus, &callbackStruct, sizeof(callbackStruct));
// Set the stream format.
size_t bytesPerSample = [self ASBDForSoundMode];
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus, &streamFormat, sizeof(streamFormat));
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus, &streamFormat, sizeof(streamFormat));
// Disable system buffer allocation. We'll do it ourselves.
UInt32 flag = 0;
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus, &flag, sizeof(flag));
// Allocate AudioBuffers for use when listening.
// TODO: Move into initialization...should only be required once.
bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList)); // note: sizeof(AudioBufferList), not sizeof(AudioBuffer), so the mNumberBuffers field is counted
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = 512*bytesPerSample;
bufferList->mBuffers[0].mData = calloc(512, bytesPerSample);
}
I managed to find and fix the issue, which was in an area of the code not posted above.
In a later step, the output buffer was being converted into a different number format using an AudioConverter object. However, the converter object was never disposed of, and remained live in memory. I fixed it by using AudioConverterDispose, as below:
err = AudioConverterNew(&inFormat, &outFormat, &converter);
err = AudioConverterConvertBuffer(converter, inSize, buf, &outSize, outputBuf);
err = AudioConverterDispose (converter);
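If the conversion happens on every render pass, creating and disposing the converter each time is also wasteful. An alternative (a sketch) is to create it once during setup, reuse it per buffer, and dispose of it in teardown:
// setup (once)
err = AudioConverterNew(&inFormat, &outFormat, &converter);
// render path (per buffer)
err = AudioConverterConvertBuffer(converter, inSize, buf, &outSize, outputBuf);
// teardown (once)
err = AudioConverterDispose(converter);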
