AudioToolbox AUGraph failure when connecting nodes: error kAudioUnitErr_PropertyNotWriteable (iOS)

I am trying to create an audio graph with a mixer and an IO unit. The IO unit receives audio from the microphone and sends it to the mixer, which mixes it with an external sound and plays the result out of the speaker. I created my audio graph as shown below, following the guidelines as closely as possible. However, I keep getting error -10865 (kAudioUnitErr_PropertyNotWriteable) when I try to connect the mixer node to the output node. Could somebody clarify what is going wrong? I can include more code, such as my callback and private variables, if needed.
NSLog(#"Creating audio graph");
sampleRate = 44100.0;
// Will check results
OSStatus result;
// Create the graph
result = NewAUGraph(&graph);
if(result != noErr)
NSLog(@"Failed creating graph");
result = AUGraphInitialize(graph);
if(result != noErr)
NSLog(@"Failed to initialize audio graph. Error code: %d '%.4s'", (int) result, (const char *)&result);
// Create audio nodes
AudioComponentDescription ioDescription;
ioDescription.componentType = kAudioUnitType_Output;
ioDescription.componentSubType = kAudioUnitSubType_RemoteIO;
ioDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioDescription.componentFlagsMask = 0;
ioDescription.componentFlags = 0;
AudioComponentDescription mixerDescription;
mixerDescription.componentType = kAudioUnitType_Mixer;
mixerDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixerDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
mixerDescription.componentFlagsMask = 0;
mixerDescription.componentFlags = 0;
// Add nodes to the graph
AUNode ioNode;
AUNode mixerNode;
result = AUGraphAddNode(graph, &ioDescription, &ioNode);
if(result != noErr)
NSLog(#"Failed to add microphone node");
result = AUGraphAddNode(graph, &mixerDescription, &mixerNode);
if(result != noErr)
NSLog(#"Failed to add mixer node");
// Open the graph
result = AUGraphOpen(graph);
if(result != noErr)
NSLog(#"Failed to open graph");
// Get the IO node
result = AUGraphNodeInfo(graph, ioNode, NULL, &ioUnit);
if(result != noErr)
NSLog(#"Failed to fetch info from io node");
// Enable IO on the io node
UInt32 flag = 1;
result = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
if(result != noErr)
NSLog(#"Failed enabling IO on io unit");
// Get the mixer unit
result = AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);
if(result != noErr)
NSLog(#"Failed to fetch info from mixer node");
// Set up the mixer unit bus count
UInt32 busCount = 2;
result = AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));
if(result != noErr)
NSLog(#"Failed to set property mixer input bus count");
// Attach render callback to sound effect bus
UInt32 soundEffectBus = 1;
AURenderCallbackStruct inputCallbackStruct;
inputCallbackStruct.inputProc = &inputRenderCallback;
inputCallbackStruct.inputProcRefCon = soundStruct;
result = AUGraphSetNodeInputCallback(graph, mixerNode, soundEffectBus, &inputCallbackStruct);
if(result != noErr)
NSLog(#"Failed to set mixer node input callback for sound effect bus");
// Set stream format for sound effect bus
UInt32 bytesPerSample = sizeof (AudioUnitSampleType);
stereoDescription.mFormatID = kAudioFormatLinearPCM;
stereoDescription.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
stereoDescription.mBytesPerPacket = bytesPerSample;
stereoDescription.mFramesPerPacket = 1;
stereoDescription.mBytesPerFrame = bytesPerSample;
stereoDescription.mChannelsPerFrame = 2;
stereoDescription.mBitsPerChannel = 8 * bytesPerSample;
stereoDescription.mSampleRate = sampleRate;
result = AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, soundEffectBus, &stereoDescription, sizeof(stereoDescription));
if(result != noErr)
NSLog(#"Failed to set stream description");
// Set mixer output sample rate
result = AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_SampleRate, kAudioUnitScope_Output, 0, &sampleRate, sizeof(sampleRate));
if(result != noErr)
NSLog(#"Failed to set mixer output sample rate");
// Connect input to mixer
result = AUGraphConnectNodeInput(graph, ioNode, 1, mixerNode, 0);
if(result != noErr)
NSLog(#"Failed to connect microphone node to mixer node");
Error occurs here
// Connect mixer to output
result = AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);
if(result != noErr)
NSLog(#"Failed to connect mixer to output node %d", result);
// Initialize the audio graph
CAShow(graph);
// Start the graph
result = AUGraphStart(graph);
if(result != noErr)
NSLog(#"Failed to start audio graph");
NSLog(#"Graph started");
EDIT
I was able to understand why I got this error: I think I was assigning my mixer output to the input element of the io unit (which is read-only, of course, because it comes from the mic). However, after switching that, as reflected in the code above, I now get this error:
ERROR: [0x196f982a0] 308: input bus 0 sample rate is 0
Could anyone help me? Is there something I am forgetting to set?

According to the error message, I suggest you explicitly set the stream format on every bus of the mixer (both input and output), just to be sure. Personally, I don't set kAudioUnitProperty_SampleRate on the mixer; I don't think it has a meaning there (IMHO, it is meaningful on a hardware IO unit, where it chooses the sample rate of the DAC, and it might also have a meaning on format converter units).
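For example, here is a minimal sketch of what that could look like, reusing stereoDescription and busCount from the question (the variable names and bus indices are taken from the code above, so adjust them to your setup):
// Set an explicit format on every mixer input bus...
for (UInt32 bus = 0; bus < busCount; ++bus) {
result = AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, bus, &stereoDescription, sizeof(stereoDescription));
if(result != noErr)
NSLog(@"Failed to set mixer input format on bus %u", (unsigned int)bus);
}
// ...and on the mixer output bus, so no bus is left with a 0 Hz sample rate.
result = AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &stereoDescription, sizeof(stereoDescription));
if(result != noErr)
NSLog(@"Failed to set mixer output format");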

Related

AUGraph FormatConverter (AUConverter) render notify contains NULL ioData buffer

I am working on an iOS project and need to capture input from the microphone and convert it to ULaw (to send out a data stream). I am using an AUGraph with a converter node to accomplish this. The graph is created and initialized successfully; however, in my render notify callback, the ioData buffer always contains NULL even though inNumberFrames contains a value of 93. I think it might have something to do with an incorrect size of the format converter buffers, but I can't figure out what is happening.
Here is the code:
OSStatus status;
// ************************** DEFINE AUDIO STREAM FORMATS ******************************
double currentSampleRate;
currentSampleRate = [[AVAudioSession sharedInstance] sampleRate];
// Describe stream format
AudioStreamBasicDescription streamAudioFormat = {0};
streamAudioFormat.mSampleRate = 8000.00;
streamAudioFormat.mFormatID = kAudioFormatULaw;
streamAudioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
streamAudioFormat.mFramesPerPacket = 1;
streamAudioFormat.mChannelsPerFrame = 1;
streamAudioFormat.mBitsPerChannel = 8;
streamAudioFormat.mBytesPerPacket = 1;
streamAudioFormat.mBytesPerFrame = streamAudioFormat.mBytesPerPacket * streamAudioFormat.mFramesPerPacket;
// ************************** SETUP SEND AUDIO ******************************
AUNode ioSendNode;
AUNode convertToULAWNode;
AUNode convertToLPCMNode;
AudioUnit convertToULAWUnit;
AudioUnit convertToLPCMUnit;
status = NewAUGraph(&singleChannelSendGraph);
if (status != noErr)
{
NSLog(#"Unable to create send audio graph.");
return;
}
AudioComponentDescription ioDesc = {0};
ioDesc.componentType = kAudioUnitType_Output;
ioDesc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
ioDesc.componentFlags = 0;
ioDesc.componentFlagsMask = 0;
status = AUGraphAddNode(singleChannelSendGraph, &ioDesc, &ioSendNode);
if (status != noErr)
{
NSLog(#"Unable to add IO node.");
return;
}
AudioComponentDescription converterDesc = {0};
converterDesc.componentType = kAudioUnitType_FormatConverter;
converterDesc.componentSubType = kAudioUnitSubType_AUConverter;
converterDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
converterDesc.componentFlags = 0;
converterDesc.componentFlagsMask = 0;
status = AUGraphAddNode(singleChannelSendGraph, &converterDesc, &convertToULAWNode);
if (status != noErr)
{
NSLog(#"Unable to add ULAW converter node.");
return;
}
status = AUGraphAddNode(singleChannelSendGraph, &converterDesc, &convertToLPCMNode);
if (status != noErr)
{
NSLog(#"Unable to add LPCM converter node.");
return;
}
status = AUGraphOpen(singleChannelSendGraph);
if (status != noErr)
{
return;
}
// get the io audio unit
status = AUGraphNodeInfo(singleChannelSendGraph, ioSendNode, NULL, &ioSendUnit);
if (status != noErr)
{
NSLog(#"Unable to get IO unit.");
return;
}
UInt32 enableInput = 1;
status = AudioUnitSetProperty (ioSendUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
1, // microphone bus
&enableInput,
sizeof (enableInput)
);
if (status != noErr)
{
return;
}
UInt32 sizeASBD = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription ioASBDin;
AudioStreamBasicDescription ioASBDout;
status = AudioUnitGetProperty(ioSendUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &ioASBDin, &sizeASBD);
if (status != noErr)
{
NSLog(#"Unable to get IO stream input format.");
return;
}
status = AudioUnitGetProperty(ioSendUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &ioASBDout, &sizeASBD);
if (status != noErr)
{
NSLog(#"Unable to get IO stream output format.");
return;
}
ioASBDin.mSampleRate = currentSampleRate;
ioASBDout.mSampleRate = currentSampleRate;
status = AudioUnitSetProperty(ioSendUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioASBDin, sizeof(ioASBDin));
if (status != noErr)
{
NSLog(#"Unable to set IO stream output format.");
return;
}
status = AudioUnitSetProperty(ioSendUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioASBDin, sizeof(ioASBDin));
if (status != noErr)
{
NSLog(#"Unable to set IO stream input format.");
return;
}
// get the converter audio unit
status = AUGraphNodeInfo(singleChannelSendGraph, convertToULAWNode, NULL, &convertToULAWUnit);
if (status != noErr)
{
NSLog(#"Unable to get ULAW converter unit.");
return;
}
status = AudioUnitSetProperty(convertToULAWUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioASBDin, sizeof(ioASBDin));
if (status != noErr)
{
NSLog(#"Unable to set ULAW stream input format.");
return;
}
status = AudioUnitSetProperty(convertToULAWUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &streamAudioFormat, sizeof(streamAudioFormat));
if (status != noErr)
{
NSLog(#"Unable to set ULAW stream output format.");
return;
}
// get the converter audio unit
status = AUGraphNodeInfo(singleChannelSendGraph, convertToLPCMNode, NULL, &convertToLPCMUnit);
if (status != noErr)
{
NSLog(#"Unable to get LPCM converter unit.");
return;
}
status = AudioUnitSetProperty(convertToLPCMUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamAudioFormat, sizeof(streamAudioFormat));
if (status != noErr)
{
NSLog(#"Unable to set LPCM stream input format.");
return;
}
status = AudioUnitSetProperty(convertToLPCMUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &ioASBDin, sizeof(ioASBDin));
if (status != noErr)
{
NSLog(#"Unable to set LPCM stream output format.");
return;
}
status = AUGraphConnectNodeInput(singleChannelSendGraph, ioSendNode, 1, convertToULAWNode, 0);
if (status != noErr)
{
NSLog(#"Unable to set ULAW node input.");
return;
}
status = AUGraphConnectNodeInput(singleChannelSendGraph, convertToULAWNode, 0, convertToLPCMNode, 0);
if (status != noErr)
{
NSLog(#"Unable to set LPCM node input.");
return;
}
status = AUGraphConnectNodeInput(singleChannelSendGraph, convertToLPCMNode, 0, ioSendNode, 0);
if (status != noErr)
{
NSLog(#"Unable to set IO node input.");
return;
}
status = AudioUnitAddRenderNotify(convertToULAWUnit, &outputULAWCallback, (__bridge void*)self);
if (status != noErr)
{
NSLog(#"Unable to add ULAW render notify.");
return;
}
status = AUGraphInitialize(singleChannelSendGraph);
if (status != noErr)
{
NSLog(#"Unable to initialize send graph.");
return;
}
CAShow (singleChannelSendGraph);
}
And the graph nodes are initialized as:
Member Nodes:
node 1: 'auou' 'vpio' 'appl', instance 0x7fd5faf8fac0 O I
node 2: 'aufc' 'conv' 'appl', instance 0x7fd5fad05420 O I
node 3: 'aufc' 'conv' 'appl', instance 0x7fd5fad05810 O I
Connections:
node 1 bus 1 => node 2 bus 0 [ 1 ch, 44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer]
node 2 bus 0 => node 3 bus 0 [ 1 ch, 8000 Hz, 'ulaw' (0x0000000C) 8 bits/channel, 1 bytes/packet, 1 frames/packet, 1 bytes/frame]
node 3 bus 0 => node 1 bus 0 [ 1 ch, 44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer]
And the render notify callback:
static OSStatus outputULAWCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
AudioManager *audioManager = (__bridge AudioManager*)inRefCon;
if ((*ioActionFlags) & kAudioUnitRenderAction_PostRender)
{
if (!audioManager.mute && ioData->mBuffers[0].mData != NULL)
{
TPCircularBufferProduceBytes(audioManager.activeChannel == 0 ? audioManager.channel1StreamOutBufferPtr : audioManager.channel2StreamOutBufferPtr,
ioData->mBuffers[0].mData, ioData->mBuffers[0].mDataByteSize);
// do not want to playback our audio into local speaker
SilenceData(ioData);
}
}
return noErr;
}
Note: if I send the microphone input to the output directly (skipping the converter nodes), I do hear output, so I know the AUGraph is working.
I have a receive AUGraph setup to receive ULaw from a stream and run through a converter to play through the speakers and that is working without an issue.
I just can't figure out why the converter is failing and returning no data.
Has anyone had any experience with this type of issue?
UPDATE
So you're calling AUGraphStart elsewhere, but the ulaw converter is refusing to do general rate conversion for you :( You could add another rate converter to the graph or simply get the vpio unit to do it for you. Changing this code
ioASBDin.mSampleRate = currentSampleRate; // change me to 8000Hz
ioASBDout.mSampleRate = currentSampleRate; // delete me, I'm ignored
status = AudioUnitSetProperty(ioSendUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioASBDin, sizeof(ioASBDin));
into
ioASBDin.mSampleRate = streamAudioFormat.mSampleRate; // a.k.a 8000Hz
status = AudioUnitSetProperty(ioSendUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioASBDin, sizeof(ioASBDin));
will make the whole graph do 8kHz and give you non-null ioData buffers:
AudioUnitGraph 0xCA51000:
Member Nodes:
node 1: 'auou' 'vpio' 'appl', instance 0x7b5bb320 O I
node 2: 'aufc' 'conv' 'appl', instance 0x7c878d50 O I
node 3: 'aufc' 'conv' 'appl', instance 0x7c875eb0 O I
Connections:
node 1 bus 1 => node 2 bus 0 [ 1 ch, 8000 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer]
node 2 bus 0 => node 3 bus 0 [ 1 ch, 8000 Hz, 'ulaw' (0x0000000C) 8 bits/channel, 1 bytes/packet, 1 frames/packet, 1 bytes/frame]
node 3 bus 0 => node 1 bus 0 [ 1 ch, 8000 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer]
CurrentState:
mLastUpdateError=0, eventsToProcess=F, isInitialized=T, isRunning=T (1)
old answer
You need to:
1. AUGraphStart your graph
2. Change your ulaw mSampleRate to 11025, 22050 or 44100
Then you will see non-null ioData in the kAudioUnitRenderAction_PostRender phase.
Converting to 8 kHz or even 16 kHz ulaw seems like something an audio converter should be able to do. I have no idea why it doesn't work, but when you set the sample rate to anything other than the values in point 2, the ulaw converter reports kAUGraphErr_CannotDoInCurrentContext (-10863) errors, which makes no sense to me.
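If you do need the IO unit to stay at the hardware rate, the other option mentioned in the update is a dedicated rate-conversion stage before the ulaw stage. A rough, untested sketch, reusing the variables from the question (the extra node and the lpcm8k format are assumptions for illustration, not known-good values):
// Hypothetical extra AUConverter: LPCM at the hardware rate -> LPCM at 8000 Hz.
AUNode rateConvertNode;
AudioUnit rateConvertUnit;
status = AUGraphAddNode(singleChannelSendGraph, &converterDesc, &rateConvertNode);
status = AUGraphNodeInfo(singleChannelSendGraph, rateConvertNode, NULL, &rateConvertUnit);
// Input side: the mic format at the hardware sample rate.
status = AudioUnitSetProperty(rateConvertUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioASBDin, sizeof(ioASBDin));
// Output side: the same LPCM layout, but at the ulaw target rate.
AudioStreamBasicDescription lpcm8k = ioASBDin;
lpcm8k.mSampleRate = streamAudioFormat.mSampleRate; // 8000 Hz
status = AudioUnitSetProperty(rateConvertUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &lpcm8k, sizeof(lpcm8k));
// Rewire: mic -> rate converter -> ulaw converter (and set the ulaw
// converter's input format to lpcm8k instead of ioASBDin).
status = AUGraphConnectNodeInput(singleChannelSendGraph, ioSendNode, 1, rateConvertNode, 0);
status = AUGraphConnectNodeInput(singleChannelSendGraph, rateConvertNode, 0, convertToULAWNode, 0);
Error checks are omitted for brevity; each status should be tested as in the original code.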

Panning a mono signal with MultiChannelMixer & MTAudioProcessingTap

I'm looking to pan a mono signal using MTAudioProcessingTap and a Multichannel Mixer audio unit, but am getting a mono output instead of a panned, stereo output. The documentation states:
"The Multichannel Mixer unit (subtype
kAudioUnitSubType_MultiChannelMixer) takes any number of mono or
stereo streams and combines them into a single stereo output."
So, the mono output was unexpected. Any way around this? I ran a stereo signal through the exact same code and everything worked great: stereo output, panned as expected. Here's the code from my tap's prepare callback:
static void tap_PrepareCallback(MTAudioProcessingTapRef tap,
CMItemCount maxFrames,
const AudioStreamBasicDescription *processingFormat) {
AVAudioTapProcessorContext *context = (AVAudioTapProcessorContext *)MTAudioProcessingTapGetStorage(tap);
// Store sample rate for -setCenterFrequency:.
context->sampleRate = processingFormat->mSampleRate;
/* Verify processing format (this is not needed for Audio Unit, but for RMS calculation). */
context->supportedTapProcessingFormat = true;
if (processingFormat->mFormatID != kAudioFormatLinearPCM) {
NSLog(#"Unsupported audio format ID for audioProcessingTap. LinearPCM only.");
context->supportedTapProcessingFormat = false;
}
if (!(processingFormat->mFormatFlags & kAudioFormatFlagIsFloat)) {
NSLog(#"Unsupported audio format flag for audioProcessingTap. Float only.");
context->supportedTapProcessingFormat = false;
}
if (processingFormat->mFormatFlags & kAudioFormatFlagIsNonInterleaved) {
context->isNonInterleaved = true;
}
AudioUnit audioUnit;
AudioComponentDescription audioComponentDescription;
audioComponentDescription.componentType = kAudioUnitType_Mixer;
audioComponentDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
audioComponentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
audioComponentDescription.componentFlags = 0;
audioComponentDescription.componentFlagsMask = 0;
AudioComponent audioComponent = AudioComponentFindNext(NULL, &audioComponentDescription);
if (audioComponent) {
if (noErr == AudioComponentInstanceNew(audioComponent, &audioUnit)) {
OSStatus status = noErr;
// Set audio unit input/output stream format to processing format.
if (noErr == status) {
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
processingFormat,
sizeof(AudioStreamBasicDescription));
}
if (noErr == status) {
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
processingFormat,
sizeof(AudioStreamBasicDescription));
}
// Set audio unit render callback.
if (noErr == status) {
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = AU_RenderCallback;
renderCallbackStruct.inputProcRefCon = (void *)tap;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&renderCallbackStruct,
sizeof(AURenderCallbackStruct));
}
// Set audio unit maximum frames per slice to max frames.
if (noErr == status) {
UInt32 maximumFramesPerSlice = (UInt32)maxFrames;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
0,
&maximumFramesPerSlice,
(UInt32)sizeof(UInt32));
}
// Initialize audio unit.
if (noErr == status) {
status = AudioUnitInitialize(audioUnit);
}
if (noErr != status) {
AudioComponentInstanceDispose(audioUnit);
audioUnit = NULL;
}
context->audioUnit = audioUnit;
}
}
NSLog(#"Tap channels: %d",processingFormat->mChannelsPerFrame); // = 1 for mono source file
}
I've tried a few different options for the output stream format, e.g., AVAudioFormat *outFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:processingFormat->mSampleRate channels:2];, but I get this error each time: "Client did not see 20 I/O cycles; giving up." Here's code that creates the exact same ASBD as the input format except with 2 channels instead of one; it gives the same "20 I/O cycles" error too:
AudioStreamBasicDescription asbd;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = 0x29;
asbd.mSampleRate = 44100;
asbd.mBitsPerChannel = 32;
asbd.mChannelsPerFrame = 2;
asbd.mBytesPerFrame = 4;
asbd.mFramesPerPacket = 1;
asbd.mBytesPerPacket = 4;
asbd.mReserved = 0;

AudioUnit Input Missing Periodic Samples

I have implemented an AUGraph containing a single AudioUnit for handling IO from the mic and headsets. The issue I'm having is that there are missing chunks of audio input.
I believe the samples are being lost during the hardware-to-software buffer exchange. I tried slowing the iPhone's sample rate down from 44.1 kHz to 20 kHz to see if this would give me the missing data, but it did not produce the output I expected.
The AUGraph is setup as follows:
// Audio component description
AudioComponentDescription desc;
bzero(&desc, sizeof(AudioComponentDescription));
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
// Stereo ASBD
AudioStreamBasicDescription stereoStreamFormat;
bzero(&stereoStreamFormat, sizeof(AudioStreamBasicDescription));
stereoStreamFormat.mSampleRate = kSampleRate;
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
stereoStreamFormat.mBytesPerPacket = 4;
stereoStreamFormat.mBytesPerFrame = 4;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel = 16;
OSErr err = noErr;
@try {
// Create new AUGraph
err = NewAUGraph(&auGraph);
NSAssert1(err == noErr, #"Error creating AUGraph: %hd", err);
// Add node to AUGraph
err = AUGraphAddNode(auGraph,
&desc,
&ioNode);
NSAssert1(err == noErr, #"Error adding AUNode: %hd", err);
// Open AUGraph
err = AUGraphOpen(auGraph);
NSAssert1(err == noErr, #"Error opening AUGraph: %hd", err);
// Add AUGraph node info
err = AUGraphNodeInfo(auGraph,
ioNode,
&desc,
&_ioUnit);
NSAssert1(err == noErr, #"Error adding noe info to AUGraph: %hd", err);
// Enable input, which is disabled by default.
UInt32 enabled = 1;
err = AudioUnitSetProperty(_ioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&enabled,
sizeof(enabled));
NSAssert1(err == noErr, #"Error enabling input: %hd", err);
// Apply format to input of ioUnit
err = AudioUnitSetProperty(_ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&stereoStreamFormat,
sizeof(stereoStreamFormat));
NSAssert1(err == noErr, #"Error setting input ASBD: %hd", err);
// Apply format to output of ioUnit
err = AudioUnitSetProperty(_ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&stereoStreamFormat,
sizeof(stereoStreamFormat));
NSAssert1(err == noErr, #"Error setting output ASBD: %hd", err);
// Set hardware IO callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = hardwareIOCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
err = AUGraphSetNodeInputCallback(auGraph,
ioNode,
kOutputBus,
&callbackStruct);
NSAssert1(err == noErr, #"Error setting IO callback: %hd", err);
// Initialize AudioGraph
err = AUGraphInitialize(auGraph);
NSAssert1(err == noErr, #"Error initializing AUGraph: %hd", err);
// Start audio unit
err = AUGraphStart(auGraph);
NSAssert1(err == noErr, #"Error starting AUGraph: %hd", err);
}
@catch (NSException *exception) {
NSLog(@"Failed with exception: %@", exception);
}
Where kOutputBus is defined to be 0, kInputBus is 1, and kSampleRate is 44100. The IO callback function is:
IO Callback Function
static OSStatus hardwareIOCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Scope reference to GSFSensorIOController class
GSFSensorIOController *sensorIO = (__bridge GSFSensorIOController *) inRefCon;
// Grab the samples and place them in the buffer list
AudioUnit ioUnit = sensorIO.ioUnit;
OSStatus result = AudioUnitRender(ioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
ioData);
if (result != noErr) NSLog(#"Blowing it in interrupt");
// Process input data
[sensorIO processIO:ioData];
// Set up power tone attributes
float freq = 20000.00f;
float sampleRate = kSampleRate;
float phase = sensorIO.sinPhase;
float sinSignal;
double phaseInc = 2 * M_PI * freq / sampleRate;
// Write to output buffers
for(size_t i = 0; i < ioData->mNumberBuffers; ++i) {
AudioBuffer buffer = ioData->mBuffers[i];
for(size_t sampleIdx = 0; sampleIdx < inNumberFrames; ++sampleIdx) {
// Grab sample buffer
SInt16 *sampleBuffer = buffer.mData;
// Generate power tone on left channel
sinSignal = sin(phase);
sampleBuffer[2 * sampleIdx] = (SInt16)((sinSignal * 32767.0f) /2);
// Write to commands to micro on right channel as necessary
if(sensorIO.newDataOut)
sampleBuffer[2*sampleIdx + 1] = (SInt16)((sinSignal * 32767.0f) /2);
else
sampleBuffer[2*sampleIdx + 1] = 0;
phase += phaseInc;
if (phase >= 2 * M_PI * freq) {
phase -= (2 * M_PI * freq);
}
}
}
// Store sine wave phase for next callback
sensorIO.sinPhase = phase;
return result;
}
The processIO function called within hardwareIOCallback is used to process the input and create the response for the output. For debugging purposes, I just have it push each sample of the input buffer into an NSMutableArray.
Process IO
- (void) processIO: (AudioBufferList*) bufferList {
for (int j = 0 ; j < bufferList->mNumberBuffers ; j++) {
AudioBuffer sourceBuffer = bufferList->mBuffers[j];
SInt16 *buffer = (SInt16 *) bufferList->mBuffers[j].mData;
for (int i = 0; i < (sourceBuffer.mDataByteSize / sizeof(sourceBuffer)); i++) {
// DEBUG: Array of raw data points for printing to a file
[self.rawInputData addObject:[NSNumber numberWithInt:buffer[i]]];
}
}
}
I then write the contents of this input buffer to a file after I have stopped the AUGraph and have all samples in the array rawInputData. I open this file in MATLAB and plot it, and I can see that the audio input is missing data (gaps circled in red in my plot).
I'm out of ideas as to how to fix this issue and could really use some help understanding and fixing this problem.
Your callback may be too slow. It's usually not recommended to call any Objective-C methods (such as adding to a mutable array, or anything else that could allocate memory) inside an Audio Unit render callback.
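A common pattern (a sketch only, assuming a TPCircularBuffer ivar named inputRing that is not in the original code) is to do nothing but a raw copy inside the callback, and drain the data from a normal thread. TPCircularBuffer is the same lock-free ring buffer already used in the converter question above, so it is safe to call from the render thread:
// Setup, once, outside the callback: room for several buffers of samples.
TPCircularBufferInit(&inputRing, 1 << 18);
// Inside hardwareIOCallback, instead of [sensorIO processIO:ioData]:
for (UInt32 j = 0; j < ioData->mNumberBuffers; j++) {
TPCircularBufferProduceBytes(&inputRing, ioData->mBuffers[j].mData, (int32_t)ioData->mBuffers[j].mDataByteSize);
}
// Later, on a normal thread or timer: drain into the debug array.
int32_t availableBytes;
SInt16 *samples = (SInt16 *)TPCircularBufferTail(&inputRing, &availableBytes);
for (int32_t i = 0; i < availableBytes / (int32_t)sizeof(SInt16); i++)
[self.rawInputData addObject:@(samples[i])];
TPCircularBufferConsume(&inputRing, availableBytes);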

How do I connect an AudioFilePlayer AudioUnit to a 3DMixer?

I am trying to connect an AudioFilePlayer AudioUnit to an AU3DMixerEmbedded Audio Unit, but I'm having no success.
Here's what I'm doing:
1. Create an AUGraph with NewAUGraph()
2. Open the graph
3. Initialize the graph
4. Add 3 nodes:
- outputNode: kAudioUnitSubType_RemoteIO
- mixerNode: kAudioUnitSubType_AU3DMixerEmbedded
- filePlayerNode: kAudioUnitSubType_AudioFilePlayer
5. Connect the nodes:
- filePlayerNode -> mixerNode
- mixerNode -> outputNode
6. Configure the filePlayer Audio Unit to play the required file
7. Start the graph
This doesn't work: it balks at AUGraphInitialize with error -10868 (kAudioUnitErr_FormatNotSupported). I think the problem is due to an audio format mismatch between the filePlayer and the mixer. I think this because:
- If I comment out connecting the filePlayerNode to the mixerNode (AUGraphConnectNodeInput(_graph, filePlayerNode, 0, mixerNode, 0)) and comment out step 6, then no errors are reported.
- If I instead connect the filePlayerNode directly to the outputNode (AUGraphConnectNodeInput(_graph, filePlayerNode, 0, outputNode, 0)), then the audio plays.
What steps am I missing in connecting the filePlayerNode to the mixerNode?
Here's the code in full. It's based on Apple's sample code and other samples I've found on the interwebs. (AUGraphStart is called later):
- (id)init
{
self = [super init];
if (self != nil)
{
{
//create a new AUGraph
CheckError(NewAUGraph(&_graph), "NewAUGraph failed");
// opening the graph opens all contained audio units but does not allocate any resources yet
CheckError(AUGraphOpen(_graph), "AUGraphOpen failed");
// now initialize the graph (causes resources to be allocated)
CheckError(AUGraphInitialize(_graph), "AUGraphInitialize failed");
}
AUNode outputNode;
{
AudioComponentDescription outputAudioDesc = {0};
outputAudioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
outputAudioDesc.componentType = kAudioUnitType_Output;
outputAudioDesc.componentSubType = kAudioUnitSubType_RemoteIO;
// adds a node with above description to the graph
CheckError(AUGraphAddNode(_graph, &outputAudioDesc, &outputNode), "AUGraphAddNode[kAudioUnitSubType_DefaultOutput] failed");
}
AUNode mixerNode;
{
AudioComponentDescription mixerAudioDesc = {0};
mixerAudioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
mixerAudioDesc.componentType = kAudioUnitType_Mixer;
mixerAudioDesc.componentSubType = kAudioUnitSubType_AU3DMixerEmbedded;
mixerAudioDesc.componentFlags = 0;
mixerAudioDesc.componentFlagsMask = 0;
// adds a node with above description to the graph
CheckError(AUGraphAddNode(_graph, &mixerAudioDesc, &mixerNode), "AUGraphAddNode[kAudioUnitSubType_AU3DMixerEmbedded] failed");
}
AUNode filePlayerNode;
{
AudioComponentDescription fileplayerAudioDesc = {0};
fileplayerAudioDesc.componentType = kAudioUnitType_Generator;
fileplayerAudioDesc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
fileplayerAudioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
// adds a node with above description to the graph
CheckError(AUGraphAddNode(_graph, &fileplayerAudioDesc, &filePlayerNode), "AUGraphAddNode[kAudioUnitSubType_AudioFilePlayer] failed");
}
//Connect the nodes
{
// connect the output source of the file player AU to the input source of the output node
// CheckError(AUGraphConnectNodeInput(_graph, filePlayerNode, 0, outputNode, 0), "AUGraphConnectNodeInput");
CheckError(AUGraphConnectNodeInput(_graph, filePlayerNode, 0, mixerNode, 0), "AUGraphConnectNodeInput");
CheckError(AUGraphConnectNodeInput(_graph, mixerNode, 0, outputNode, 0), "AUGraphConnectNodeInput");
}
// configure the file player
// tell the file player unit to load the file we want to play
{
//?????
AudioStreamBasicDescription inputFormat; // input file's data stream description
AudioFileID inputFile; // reference to your input file
// open the input audio file and store the AU ref in _player
CFURLRef songURL = (__bridge CFURLRef)[[NSBundle mainBundle] URLForResource:@"monoVoice" withExtension:@"aif"];
CheckError(AudioFileOpenURL(songURL, kAudioFileReadPermission, 0, &inputFile), "AudioFileOpenURL failed");
//create an empty MyAUGraphPlayer struct
AudioUnit fileAU;
// get the reference to the AudioUnit object for the file player graph node
CheckError(AUGraphNodeInfo(_graph, filePlayerNode, NULL, &fileAU), "AUGraphNodeInfo failed");
// get and store the audio data format from the file
UInt32 propSize = sizeof(inputFormat);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyDataFormat, &propSize, &inputFormat), "couldn't get file's data format");
CheckError(AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFileIDs, kAudioUnitScope_Global, 0, &(inputFile), sizeof((inputFile))), "AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileIDs] failed");
UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyAudioDataPacketCount, &propsize, &nPackets), "AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");
// tell the file player AU to play the entire file
ScheduledAudioFileRegion rgn;
memset (&rgn.mTimeStamp, 0, sizeof(rgn.mTimeStamp));
rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
rgn.mTimeStamp.mSampleTime = 0;
rgn.mCompletionProc = NULL;
rgn.mCompletionProcUserData = NULL;
rgn.mAudioFile = inputFile;
rgn.mLoopCount = 1;
rgn.mStartFrame = 0;
rgn.mFramesToPlay = nPackets * inputFormat.mFramesPerPacket;
CheckError(AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0,&rgn, sizeof(rgn)), "AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileRegion] failed");
// prime the file player AU with default values
UInt32 defaultVal = 0;
CheckError(AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduledFilePrime, kAudioUnitScope_Global, 0, &defaultVal, sizeof(defaultVal)), "AudioUnitSetProperty[kAudioUnitProperty_ScheduledFilePrime] failed");
// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
CheckError(AudioUnitSetProperty(fileAU, kAudioUnitProperty_ScheduleStartTimeStamp, kAudioUnitScope_Global, 0, &startTime, sizeof(startTime)), "AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp]");
// file duration
//double duration = (nPackets * _player.inputFormat.mFramesPerPacket) / _player.inputFormat.mSampleRate;
}
}
return self;
}
I don't see in your code where you set the appropriate kAudioUnitProperty_StreamFormat for the audio units. You will also have to check the error result codes to see whether the stream format you choose is actually supported by the audio unit being configured. If not, try another format.
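For instance, here is a minimal sketch of that check for the mixer's input bus, following the question's CheckError style (fileAU and mixerNode come from the question's code; whether the 3D mixer accepts the file player's output format is exactly what this test would reveal):
AudioUnit mixerAU;
CheckError(AUGraphNodeInfo(_graph, mixerNode, NULL, &mixerAU), "AUGraphNodeInfo[mixer] failed");
// Ask the file player what it emits, then offer that format to the mixer input.
AudioStreamBasicDescription playerOutFormat;
UInt32 propSize = sizeof(playerOutFormat);
CheckError(AudioUnitGetProperty(fileAU, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &playerOutFormat, &propSize), "get file player output format failed");
OSStatus formatErr = AudioUnitSetProperty(mixerAU, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &playerOutFormat, sizeof(playerOutFormat));
if (formatErr != noErr)
NSLog(@"Mixer rejected this input format (%d); try another one", (int)formatErr);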
AUGraphConnectNodeInput(_graph, filePlayerNode, 0, mixerNode, 0);
AUGraphConnectNodeInput(_graph, mixerNode, 0, outputNode, 0);
Try doing it this way and see if it helps. Just for information: the left node is the input to the right node. So in the first line the player node is the input to the mixer node; the mixer node now carries both the player and the mixer, so connect the mixer node to the output node.

Core Audio memory issues

My Audio Unit analysis project is having some memory issues: each time an Audio Unit is rendered (or somewhere around that point), it allocates a bunch of memory that isn't being released, causing memory usage to swell and the app to eventually crash.
In instruments, I notice the following string of 32 byte mallocs occurring repeatedly, and they remain live:
BufferedAudioConverter::AllocateBuffers() x6
BufferedInputAudioConverter::BufferedInputAudioConverter(StreamDescPair const&) x 3
Any ideas where the problem might lie? When is that memory allocated in the process and how can it safely be released?
Many thanks.
The code was based on some non-Apple sample code, PitchDetector from sleepyleaf.com
Here are some code extracts where the problem might lie. Please let me know if more code is needed.
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
inTimeStamp, bus1, inNumberFrames, THIS->bufferList); //128 inNumberFrames
if (renderErr < 0) {
return renderErr;
}
// Fill the buffer with our sampled data. If we fill our buffer, run the
// fft.
int read = bufferCapacity - index;
if (read > inNumberFrames) {
memcpy((SInt16 *)dataBuffer + index, THIS->bufferList->mBuffers[0].mData, inNumberFrames*sizeof(SInt16));
THIS->index += inNumberFrames;
} else { // DO ANALYSIS
memset(outputBuffer, 0, n*sizeof(SInt16));
- (void)createAUProcessingGraph {
OSStatus err;
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;
ioUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
// Declare and instantiate an audio processing graph
NewAUGraph(&processingGraph);
// Add an audio unit node to the graph, then instantiate the audio unit.
/*
An AUNode is an opaque type that represents an audio unit in the context
of an audio processing graph. You receive a reference to the new audio unit
instance, in the ioUnit parameter, on output of the AUGraphNodeInfo
function call.
*/
AUNode ioNode;
AUGraphAddNode(processingGraph, &ioUnitDescription, &ioNode);
AUGraphOpen(processingGraph); // indirectly performs audio unit instantiation
// Obtain a reference to the newly-instantiated I/O unit. Each Audio Unit
// requires its own configuration.
AUGraphNodeInfo(processingGraph, ioNode, NULL, &ioUnit);
// Initialize below.
AURenderCallbackStruct callbackStruct = {0};
UInt32 enableInput;
UInt32 enableOutput;
// Enable input and disable output.
enableInput = 1; enableOutput = 0;
callbackStruct.inputProc = RenderFFTCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus, &enableInput, sizeof(enableInput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus, &enableOutput, sizeof(enableOutput));
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Input,
kOutputBus, &callbackStruct, sizeof(callbackStruct));
// Set the stream format.
size_t bytesPerSample = [self ASBDForSoundMode];
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus, &streamFormat, sizeof(streamFormat));
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus, &streamFormat, sizeof(streamFormat));
// Disable system buffer allocation. We'll do it ourselves.
UInt32 flag = 0;
err = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus, &flag, sizeof(flag));
// Allocate AudioBuffers for use when listening.
// TODO: Move into initialization...should only be required once.
bufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = 512*bytesPerSample;
bufferList->mBuffers[0].mData = calloc(512, bytesPerSample);
}
I managed to find and fix the issue, which was in an area of the code not posted above.
In a later step, the output buffer was being converted into a different number format using an AudioConverter object. However, the converter object was never disposed of and remained live in memory. I fixed it by calling AudioConverterDispose, as below:
// Create the converter, use it, then dispose of it so it doesn't leak.
err = AudioConverterNew(&inFormat, &outFormat, &converter);
err = AudioConverterConvertBuffer(converter, inSize, buf, &outSize, outputBuf);
err = AudioConverterDispose(converter);
