I am writing a host application that uses Core Audio's iOS 7 Inter-App Audio technology. I have managed to get instrument apps and effect apps working with the help of the Inter-App Audio Examples.
The issue is that the effect node is dependent upon the instrument node. I want to make the effect node and the instrument node independent of each other.
Here is my attempt:
if (desc.componentType == kAudioUnitType_RemoteEffect) {
    // if ([self isRemoteInstrumentConnected]) {
    if (!_engineStarted)                 // Check if session is active
        [self checkStartOrStopEngine];

    if ([self isGraphStarted])           // Check if graph is running and/or created; if so, stop it
        [self checkStartStopGraph];

    if ([self checkGraphInitialized])    // Check if graph has been initialized; if so, uninitialize it
        Check(AUGraphUninitialize(hostGraph));

    Check(AUGraphAddNode(hostGraph, &desc, &effectNode));   // Add remote effect

    // Disconnect previous chain
    // Check(AUGraphDisconnectNodeInput(hostGraph, mixerNode, remoteBus));

    // Connect the effect node to the mixer on the remoteBus
    Check(AUGraphConnectNodeInput(hostGraph, effectNode, 0, mixerNode, remoteBus));

    // Connect the remote instrument node to the effect node on bus 0
    Check(AUGraphConnectNodeInput(hostGraph, instrumentNode, 0, effectNode, 0));

    // Grab the audio unit from the graph
    Check(AUGraphNodeInfo(hostGraph, effectNode, 0, &effect));
    currentUnit = &effect;
}
if (currentUnit) {
    Check(AudioUnitSetProperty(*currentUnit,               // Set stereo format
                               kAudioUnitProperty_StreamFormat,
                               kAudioUnitScope_Output,
                               playerBus,
                               &stereoStreamFormat,
                               sizeof(stereoStreamFormat)));

    UInt32 maxFrames = 4096;
    Check(AudioUnitSetProperty(*currentUnit,
                               kAudioUnitProperty_MaximumFramesPerSlice,
                               kAudioUnitScope_Global,
                               playerBus,
                               &maxFrames,
                               sizeof(maxFrames)));

    [self addAudioUnitPropertyListeners:*currentUnit];     // Add property listeners to the audio unit

    Check(AUGraphInitialize(hostGraph));                   // Initialize the graph
    [self checkStartStopGraph];                            // Start the graph
}

[_connectedNodes addObject:rau];
But my application crashes on this line:
Check(AUGraphInitialize (hostGraph));
And the error I got:
ConnectAudioUnit failed with error -10860
Initialize failed with error -10860
error -10860 from AUGraphInitialize (hostGraph)
Note: I have also attached a screenshot of the code portion for better understanding.
Edit 1:
- (void)createGraph {
    // 1
    NewAUGraph(&hostGraph);

    // 2
    AudioComponentDescription iOUnitDescription;
    iOUnitDescription.componentType         = kAudioUnitType_Output;
    iOUnitDescription.componentSubType      = kAudioUnitSubType_RemoteIO;
    iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    iOUnitDescription.componentFlags        = 0;
    iOUnitDescription.componentFlagsMask    = 0;
    AUGraphAddNode(hostGraph, &iOUnitDescription, &outNode);

    // 3
    AUGraphOpen(hostGraph);

    // 4
    Check(AUGraphNodeInfo(hostGraph, outNode, 0, &outputUnit));

    // 5
    AudioStreamBasicDescription format;
    format.mChannelsPerFrame = 2;
    format.mSampleRate       = [[AVAudioSession sharedInstance] sampleRate];
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
    format.mBytesPerFrame    = sizeof(Float32);
    format.mBytesPerPacket   = sizeof(Float32);
    format.mBitsPerChannel   = 32;
    format.mFramesPerPacket  = 1;

    AudioUnitSetProperty(mixerUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output,
                         1,
                         &format,
                         sizeof(format));

    AudioUnitSetProperty(mixerUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &format,
                         sizeof(format));

    CAShow(hostGraph);
}
The error you're seeing, -10860 (kAUGraphErr_NodeNotFound), means, per the Apple docs, "The specified node cannot be found."
It looks like you've taken the Apple example app you linked and just deleted a bit to attempt to remove one node, but I don't believe it's that simple. The documentation of the example clearly states that the two nodes are dependent. Just changing the method that adds the remotes is not going to be enough, because the host is still attempting to create both, as shown by the error you're seeing.
From that file in the example project, you are only showing the changes you made to addRemoteAU, but you need to make changes to createGraph as well, since that is where the hostGraph is initialized with its nodes. If you initialize the graph with only one node, then in addRemoteAU you should stop seeing an error due to a node not being found, since the graph at that point won't expect two nodes (which it does now from its creation).
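As a rough illustration only (a sketch, not a drop-in replacement for the sample project), createGraph could set up just the I/O and mixer plumbing and leave every remote node to be added later, so addRemoteAU can attach an instrument or an effect independently. Variable names such as hostGraph, outNode, mixerNode, outputUnit and mixerUnit are assumed to match the example project:

- (void)createGraph {
    Check(NewAUGraph(&hostGraph));

    // Remote I/O node (speaker output)
    AudioComponentDescription ioDesc = {0};
    ioDesc.componentType         = kAudioUnitType_Output;
    ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
    ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    Check(AUGraphAddNode(hostGraph, &ioDesc, &outNode));

    // Mixer node; remote instrument/effect nodes get connected to its inputs on demand
    AudioComponentDescription mixerDesc = {0};
    mixerDesc.componentType         = kAudioUnitType_Mixer;
    mixerDesc.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
    mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    Check(AUGraphAddNode(hostGraph, &mixerDesc, &mixerNode));

    Check(AUGraphOpen(hostGraph));
    Check(AUGraphNodeInfo(hostGraph, outNode, NULL, &outputUnit));
    Check(AUGraphNodeInfo(hostGraph, mixerNode, NULL, &mixerUnit));

    // Mixer output -> Remote I/O input; nothing else is wired up yet
    Check(AUGraphConnectNodeInput(hostGraph, mixerNode, 0, outNode, 0));

    CAShow(hostGraph);
}

With a graph like that, addRemoteAU can connect an instrument straight to the mixer when no effect is hosted, or splice the effect in between only when both are present.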
Related
I have an app that handles playing multiple midi instruments. Everything works great except for playing percussion instruments. I understand that in order to play percussion in General MIDI you must send the events to channel 10. I've tried a bunch of different things, and I can't figure out how to get it to work, here's an example of how I'm doing it for melodic instruments vs percussion.
// Melodic instrument
MusicDeviceMIDIEvent(self.samplerUnit, 0x90, (UInt8)pitch, 127, 0);
// Percussion Instruments
MusicDeviceMIDIEvent(self.samplerUnit, 0x99, (UInt8)pitch, 127, 0);
The samplerUnit is an AudioUnit, and the pitch is given as an int through my UI.
Thanks in advance!
Assuming you have some sort of General MIDI sound font or similar loaded, you need to set the correct status byte before sending pitch/velocity information. So in the case of a standard MIDI drum kit (channel 10, i.e. index 9 in the zero-based status byte), you'd do something like this in Swift:
var status = OSStatus(noErr)
let drumCommand = UInt32( 0xC9 | 0 )
let noteOnCommand = UInt32(0x90 | channel)
status = MusicDeviceMIDIEvent(self._samplerUnit, drumCommand, 0, 0, 0) // set device
status = MusicDeviceMIDIEvent(self._samplerUnit, noteOnCommand, noteNum, velocity, 0) // sends note ON message
No need to undertake anything special for MIDI note off messages.
OK, so I got it working. I guess the way I load the sound font means the channel stuff doesn't do anything. Instead I had to set the bankMSB property on AUSamplerBankPresetData to kAUSampler_DefaultPercussionBankMSB instead of kAUSampler_DefaultMelodicBankMSB.
I added a different font loading method specifically for percussion:
- (OSStatus)loadPercussionWithSoundFont:(NSURL *)bankURL {
    OSStatus result = noErr;

    // Fill out a bank preset data structure
    AUSamplerBankPresetData bpdata;
    bpdata.bankURL  = (__bridge CFURLRef)bankURL;
    bpdata.bankMSB  = kAUSampler_DefaultPercussionBankMSB;
    bpdata.bankLSB  = kAUSampler_DefaultBankLSB;
    bpdata.presetID = (UInt8)32;

    // Set the kAUSamplerProperty_LoadPresetFromBank property
    result = AudioUnitSetProperty(self.samplerUnit,
                                  kAUSamplerProperty_LoadPresetFromBank,
                                  kAudioUnitScope_Global,
                                  0,
                                  &bpdata,
                                  sizeof(bpdata));

    // Check for errors
    NSCAssert(result == noErr,
              @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
              (int)result,
              (const char *)&result);

    return result;
}
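For reference, a call site might look something like this (the sound font name here is only a placeholder for whatever bank you actually bundle):

NSURL *bankURL = [[NSBundle mainBundle] URLForResource:@"MyGMBank" withExtension:@"sf2"];
OSStatus result = [self loadPercussionWithSoundFont:bankURL];
NSLog(@"loadPercussionWithSoundFont returned %d", (int)result);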
Is it possible to change/set the sample rate in the middle of a running AudioSession/AudioUnit without stopping/restarting the current AudioSession/AudioUnit (just like an audio route change)?
I have an active audio session whose sample rate is 44.1 kHz:
AudioStreamBasicDescription.mSampleRate = 44100
I want to change the sample rate to 8 kHz without uninitializing [AudioUnitUninitialize(audioUnit)], stopping [AudioOutputUnitStop(audioUnit)], or deactivating the audio unit/session.
This is my audio unit settings.
audioComponentDescription.componentType = kAudioUnitType_Output;
audioComponentDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
audioComponentDescription.componentFlags = 0;
audioComponentDescription.componentFlagsMask = 0;
audioComponentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
audioStreamBasicDescription.mSampleRate = 44100;
audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioStreamBasicDescription.mFramesPerPacket = 1;
audioStreamBasicDescription.mChannelsPerFrame = 1;
audioStreamBasicDescription.mBitsPerChannel = 16;
audioStreamBasicDescription.mBytesPerPacket = 2;
audioStreamBasicDescription.mBytesPerFrame = 2;
Any help is highly appreciated.
No, as each sample rate requires some startup time involving flushing samples at the previous rate from the Audio Unit buffers and sample rate converters.
Your best bet, if you need to process another sample rate, is to resample in software inside your own app.
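A minimal sketch of what "resample in software" can mean, assuming mono Float32 buffers and plain linear interpolation (a real app would more likely use an AudioConverter or a proper windowed-sinc resampler, but the idea is the same: keep the session at 44.1 kHz and convert to 8 kHz in your own code):

#include <stddef.h>

// Returns the number of output frames written.
size_t resample_linear(const float *in, size_t inFrames,
                       float *out, size_t outCapacity,
                       double srcRate, double dstRate) {
    double step = srcRate / dstRate;          // e.g. 44100.0 / 8000.0
    size_t outFrames = 0;
    for (double pos = 0.0;
         (size_t)pos + 1 < inFrames && outFrames < outCapacity;
         pos += step) {
        size_t i = (size_t)pos;
        double frac = pos - (double)i;
        // Linear interpolation between the two surrounding input samples
        out[outFrames++] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
    }
    return outFrames;
}

(Downsampling this way really also wants a low-pass filter before decimation to avoid aliasing; that part is omitted for brevity.)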
Yes, you can do it dynamically with the kAudioUnitSubType_SpatialMixer Audio Unit.
In pseudocode:
AudioUnitSetParameter(mixerUnit, k3DMixerParam_PlaybackRate, kAudioUnitScope_Input, 0, sampleRateRatio(from 0.0 to 2.0), 0);
Playback through my AudioUnit works fine until I start getting gyroscope updates from a CMMotionManager. I assumed this was due to a performance hit, but when I measured the runtime of my callback during said gyroscope updates it isn't as high as other CMMotionManager-less trials with smooth playback, yet the playback is choppy.
Some visual explanation (The red is the time between consecutive callbacks. The green--it's hard to see but there's bits of green right underneath all the red--is the runtime of the callback, which is consistently just a few milliseconds less):
Sorry if the graph is a bit messy, hopefully I'm still getting my point across.
In sum, rather than the runtime of the callback, the quality of the playback seems more tied to the "steadiness" of the frequency at which the callback is, erm, called back. What could be going on here? Could my callback runtimes just be off? That seems unlikely. I'm timing my callback via calls to clock() at the beginning and end. Is my AudioUnit setup wrong? It is admittedly a bit hacked together, and I'm not using an AUGraph or anything.
AudioUnit initialization code:
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO; // Remote I/O is for talking with the hardware
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent component = AudioComponentFindNext(NULL, &desc);
AudioComponentInstanceNew(component, &myAudioUnit);
UInt32 enableIO;
AudioUnitElement inputBus = 1;
AudioUnitElement outputBus = 0;
//Disabling IO for recording
enableIO = 0;
AudioUnitSetProperty(myAudioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, inputBus, &enableIO, sizeof(enableIO));
//Enabling IO for playback
enableIO = 1;
AudioUnitSetProperty(myAudioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, outputBus, &enableIO, sizeof(enableIO));
UInt32 bytesPerSample = BIT_DEPTH/8.0;
AudioStreamBasicDescription stereoStreamFormat = {0};
stereoStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
stereoStreamFormat.mBytesPerFrame = bytesPerSample;
stereoStreamFormat.mBytesPerPacket = bytesPerSample;
stereoStreamFormat.mChannelsPerFrame = 2; // 2 indicates stereo
stereoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked |
kAudioFormatFlagIsNonInterleaved;
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mReserved = 0;
stereoStreamFormat.mSampleRate = SAMPLE_RATE;
AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, inputBus, &stereoStreamFormat, sizeof(AudioStreamBasicDescription));
AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, outputBus, &stereoStreamFormat, sizeof(AudioStreamBasicDescription));
//Setting input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = &recordingCallback; //TODO: Should there be an ampersand?
callbackStruct.inputProcRefCon = myAudioUnit;
AudioUnitSetProperty(myAudioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Output, inputBus, &callbackStruct, sizeof(callbackStruct)); //TODO: Not sure of scope and bus/element
//Setting output callback
callbackStruct.inputProc = &playbackCallback;
callbackStruct.inputProcRefCon = myAudioUnit;
AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, outputBus, &callbackStruct, sizeof(callbackStruct));
AudioUnitInitialize(myAudioUnit);
RemoteIO Playback Callback:
static OSStatus playbackCallback (void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) {
timeOfCallback = clock();
if (timeOfPrevCallback != 0) {
callbackInterval = (double)timeOfCallback - timeOfPrevCallback;
}
clock_t t1, t2;
t1 = clock();
FooBarClass::myCallbackFunction((short *)ioData->mBuffers[0].mData, (short *)ioData->mBuffers[1].mData);
t2 = clock();
cout << "Callback duration: " << ((double)(t2-t1))/CLOCKS_PER_SEC << endl;
cout << "Callback interval: " << callbackInterval/CLOCKS_PER_SEC << endl;
timeOfPrevCallback = timeOfCallback;
//prevCallbackInterval = callbackInterval;
return noErr;
}
In myCallbackFunction I'm reading from a handful of .wav files, filtering each one and mixing them together, and copying the output to the buffers passed to it. In the graph where I mention "incrementally adding computation" I'm referring to the number of input files I'm mixing together. Also, if it matters, gyroscope updates occur via an NSTimer that goes off every 1/25 of a second:
[self.getMotionManager startDeviceMotionUpdates];
gyroUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:GYRO_UPDATE_INTERVAL target:self selector:@selector(doGyroUpdate) userInfo:nil repeats:YES];
...
+(void)doGyroUpdate {
double currentYaw = motionManager.deviceMotion.attitude.yaw;
// a couple more lines of not very expensive code
}
I should also be more clear about what I mean by choppiness in this sense: The audio isn't skipping, it just sounds really bad, as if an additional, crackly track was getting mixed in while the other tracks play smoothly. I'm not talking about clipping either, which it isn't because the only difference between good and bad playback is the gyroscope updates.
Thanks in advance for any tips.
----- Update 1 -----
My runtimes were a bit off because I was using clock(), which apparently doesn't work right for multithreaded applications. Apparently clock_gettime() is the proper way to measure runtimes across multiple threads, but it's not implemented on Darwin. Though it's not an ideal solution, I'm using gettimeofday() now to measure the callback run time and intervals. Aside from the now-steady intervals between successive callbacks (which were previously pretty erratic), things are more or less the same:
----- Update 2 -----
Interestingly enough, when I start and then stop CMMotionManager updates altogether via stopDeviceMotionUpdates, the crackliness persists...
----- Update 3 -----
'Crackliness' doesn't start until the first CMMotionManager update is received, i.e. when the deviceMotion property is first checked after the NSTimer first fires. After that, the crackliness persists regardless of the update frequency, and even after updates are stopped.
You are trying to call Objective-C methods, do synchronous file reads, and/or do significant computation (your C++ function) inside a real-time audio render callback. Also, logging to cout from inside a real-time thread is most likely not going to work reliably. These operations can take too long to meet the latency-critical real-time requirements of Audio Unit callbacks.
Instead, for any data that does not have a tightly bounded maximum latency to generate, you might just copy that data from a lock-free circular FIFO or queue inside your render callback, and fill that audio-data FIFO slightly ahead of time in another thread (perhaps running on an NSTimer or CADisplayLink).
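To make the idea concrete, here is a bare-bones single-producer/single-consumer ring buffer sketch (mono Float32): a worker thread pushes pre-mixed audio ahead of time, and the render callback only copies out of the buffer and never blocks. Names and sizes are made up for illustration; in practice something like TPCircularBuffer serves the same purpose.

#include <stdatomic.h>
#include <string.h>

#define FIFO_CAPACITY 16384   // frames, must be a power of two

typedef struct {
    float buffer[FIFO_CAPACITY];
    _Atomic size_t writeIndex;   // advanced only by the producer thread
    _Atomic size_t readIndex;    // advanced only by the render callback
} AudioFifo;

// Producer thread: push up to 'frames' samples; returns how many were written.
static size_t fifo_write(AudioFifo *f, const float *src, size_t frames) {
    size_t w = atomic_load_explicit(&f->writeIndex, memory_order_relaxed);
    size_t r = atomic_load_explicit(&f->readIndex, memory_order_acquire);
    size_t space = FIFO_CAPACITY - (w - r);
    if (frames > space) frames = space;
    for (size_t i = 0; i < frames; i++)
        f->buffer[(w + i) & (FIFO_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&f->writeIndex, w + frames, memory_order_release);
    return frames;
}

// Render callback: pop 'frames' samples; pads with silence if the producer fell behind.
static void fifo_read(AudioFifo *f, float *dst, size_t frames) {
    size_t r = atomic_load_explicit(&f->readIndex, memory_order_relaxed);
    size_t w = atomic_load_explicit(&f->writeIndex, memory_order_acquire);
    size_t avail = w - r;
    size_t n = frames < avail ? frames : avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = f->buffer[(r + i) & (FIFO_CAPACITY - 1)];
    memset(dst + n, 0, (frames - n) * sizeof(float));
    atomic_store_explicit(&f->readIndex, r + n, memory_order_release);
}

The render callback then reduces to a fifo_read into ioData->mBuffers[...].mData, while the file reading, filtering and mixing move to the producer thread.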
I had a similar issue when activating the location services. The issue was only present on slower devices like the 16 GB iPod touch, and not present on other hardware. I see that your graph title says BUF_SIZE: 1024.
Is this your callback buffer size?
I fixed my problem by increasing the callback duration (buffer size).
If you can handle more latency, try increasing the callback duration using:
NSTimeInterval _preferredDuration = (2048) / 44100.0 ; // Try bigger values here
NSError* err;
[[AVAudioSession sharedInstance]setPreferredIOBufferDuration:_preferredDuration error:&err];
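The system may round your preference, so it can be worth reading back what you actually got, for example:

NSTimeInterval actualDuration = [[AVAudioSession sharedInstance] IOBufferDuration];
NSLog(@"Actual IO buffer duration: %f s (~%.0f frames at 44.1 kHz)", actualDuration, actualDuration * 44100.0);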
I've written some code to play multi-instrument general MIDI files on iOS. It works fine in iOS 7, but stopped working on iOS 8.
I've stripped it down to its essence here. Instead of creating 16 channels for my multi-channel mixer, I just create one sampler node, and map all the tracks to that channel. It still exhibits the same problem as the multi-sampler version. None of the Audio Toolbox calls return an error code (they all return 0) in iOS 7 or iOS 8. The sequence plays through the speakers in iOS 7, on both the simulator and on iPhone/iPad devices. Run the exact same code on the iOS 8 simulator, or an iPhone/iPad device, and no sound is produced.
If you comment out the call to [self initGraphFromMIDISequence], it plays on iOS 8 with the default sine-wave sound.
@implementation MyMusicPlayer {
MusicPlayer _musicPlayer;
MusicSequence _musicSequence;
AUGraph _processingGraph;
}
- (void)playMidi:(NSURL*)midiFileURL {
NewMusicSequence(&_musicSequence);
MusicSequenceFileLoad(_musicSequence, CFBridgingRetain(midiFileURL), 0, 0);
NewMusicPlayer(&_musicPlayer);
MusicPlayerSetSequence(_musicPlayer, _musicSequence);
[self initGraphFromMIDISequence];
MusicPlayerPreroll(_musicPlayer);
MusicPlayerStart(_musicPlayer);
}
// Sets up an AUGraph with one channel whose instrument is loaded from a sound bank.
// Maps all the tracks of the MIDI sequence onto that channel. Basically this is a
// way to replace the default sine-wave sound with another (single) instrument.
- (void)initGraphFromMIDISequence {
NewAUGraph(&_processingGraph);
// Add one sampler unit to the graph.
AUNode samplerNode;
AudioComponentDescription cd = {};
cd.componentManufacturer = kAudioUnitManufacturer_Apple;
cd.componentType = kAudioUnitType_MusicDevice;
cd.componentSubType = kAudioUnitSubType_Sampler;
AUGraphAddNode(_processingGraph, &cd, &samplerNode);
// Add a Mixer unit node to the graph
cd.componentType = kAudioUnitType_Mixer;
cd.componentSubType = kAudioUnitSubType_MultiChannelMixer;
AUNode mixerNode;
AUGraphAddNode(_processingGraph, &cd, &mixerNode);
// Add the Output unit node to the graph
cd.componentType = kAudioUnitType_Output;
cd.componentSubType = kAudioUnitSubType_RemoteIO; // Output to speakers.
AUNode ioNode;
AUGraphAddNode(_processingGraph, &cd, &ioNode);
AUGraphOpen(_processingGraph);
// Obtain the mixer unit instance from its corresponding node, and set the bus count to 1.
AudioUnit mixerUnit;
AUGraphNodeInfo(_processingGraph, mixerNode, NULL, &mixerUnit);
UInt32 const numChannels = 1;
AudioUnitSetProperty(mixerUnit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&numChannels,
sizeof(numChannels));
// Connect the sampler node's output 0 to mixer node output 0.
AUGraphConnectNodeInput(_processingGraph, samplerNode, 0, mixerNode, 0);
// Connect the mixer unit to the output unit.
AUGraphConnectNodeInput(_processingGraph, mixerNode, 0, ioNode, 0);
// Obtain reference to the audio unit from its node.
AudioUnit samplerUnit;
AUGraphNodeInfo(_processingGraph, samplerNode, 0, &samplerUnit);
MusicSequenceSetAUGraph(_musicSequence, _processingGraph);
// Set the destination for each track to our single sampler node.
UInt32 trackCount;
MusicSequenceGetTrackCount(_musicSequence, &trackCount);
MusicTrack track;
for (int i = 0; i < trackCount; i++) {
MusicSequenceGetIndTrack(_musicSequence, i, &track);
MusicTrackSetDestNode(track, samplerNode);
}
// You can use either a DLS or an SF2 file bundled with your app; both work in iOS 7.
//NSString *soundBankPath = [[NSBundle mainBundle] pathForResource:@"GeneralUserv1.44" ofType:@"sf2"];
NSString *soundBankPath = [[NSBundle mainBundle] pathForResource:@"gs_instruments" ofType:@"dls"];
NSURL *bankURL = [NSURL fileURLWithPath:soundBankPath];
AUSamplerBankPresetData bpdata;
bpdata.bankURL = (__bridge CFURLRef) bankURL;
bpdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
bpdata.bankLSB = kAUSampler_DefaultBankLSB;
bpdata.presetID = 0;
UInt8 instrumentNumber = 46; // pick any GM instrument 0-127
bpdata.presetID = instrumentNumber;
AudioUnitSetProperty(samplerUnit,
kAUSamplerProperty_LoadPresetFromBank,
kAudioUnitScope_Global,
0,
&bpdata,
sizeof(bpdata));
}
I have some code, not included here, which polls to see if the sequence is still playing, by calling MusicPlayerGetTime on the MusicPlayer instance. In iOS 7, the result of that call each time is the number of seconds that have elapsed since it started playing. In iOS 8, the call always returns 0, which presumably means the MusicPlayer does not start playing the sequence on the call to MusicPlayerStart.
The code above is highly order-dependent -- you have to make certain calls before others; e.g., opening the graph before calling getInfo on a node, and not loading instruments until you've assigned the tracks to channels. I've followed all the advice in other StackOverflow threads, and have verified that getting the order correct makes error codes disappear.
Any iOS MIDI experts know what might have changed between iOS 7 and iOS 8 to make this code stop working?
In iOS 8 Apple introduced a slick Objective-C abstraction of the Core Audio API: AVAudioEngine.
You should probably check it out. https://developer.apple.com/videos/wwdc/2014/#502
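As a rough sketch of that route (not a complete MIDI-file player; for sequenced playback you would still pair this with a MusicSequence/MusicPlayer, or AVAudioSequencer in later iOS versions), an AVAudioUnitSampler can be attached to the engine and loaded from the same DLS/SF2 bank:

#import <AVFoundation/AVFoundation.h>

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioUnitSampler *sampler = [[AVAudioUnitSampler alloc] init];
[engine attachNode:sampler];
[engine connect:sampler to:engine.mainMixerNode format:nil];

NSError *error = nil;
NSURL *bankURL = [[NSBundle mainBundle] URLForResource:@"gs_instruments" withExtension:@"dls"];
[sampler loadSoundBankInstrumentAtURL:bankURL
                              program:46                             // any GM program 0-127
                              bankMSB:kAUSampler_DefaultMelodicBankMSB
                              bankLSB:kAUSampler_DefaultBankLSB
                                error:&error];

if ([engine startAndReturnError:&error]) {
    [sampler startNote:60 withVelocity:100 onChannel:0];   // middle C, quick smoke test
}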
I want to implement my own equalizer on an iOS device, and it seems to be achievable with kAudioUnitSubType_NBandEQ. However, after I set the gain of each band, I hear no change in the sound.
I use the sample code from BandEQDemo (https://github.com/springlo/BandEQDemo):
OSStatus TOAUGraphAddNode(OSType inComponentType, OSType inComponentSubType, AUGraph inGraph, AUNode *outNode)
{
AudioComponentDescription desc;
desc.componentType = inComponentType;
desc.componentSubType = inComponentSubType;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
return AUGraphAddNode(inGraph, &desc, outNode);
}
TOAUGraphAddNode(kAudioUnitType_Effect,
kAudioUnitSubType_NBandEQ,
graph,
&eqNode);
AUGraphNodeInfo(graph,
eqNode,
NULL,
&equalizerUnit);
// @[ @32, @64, @128, @256, @512, @1025, @2048, @4096, @8192, @16384 ]
Then I set the gain and check it again with these two functions:
- (AudioUnitParameterValue)gainForBandAtPosition:(NSUInteger)bandPosition
{
AudioUnitParameterValue gain;
AudioUnitParameterID parameterID = kAUNBandEQParam_Gain + bandPosition;
TOThrowOnError(AudioUnitGetParameter(equalizerUnit,
parameterID,
kAudioUnitScope_Global,
0,
&gain));
return gain;
}
- (void)setGain:(AudioUnitParameterValue)gain forBandAtPosition:(NSUInteger)bandPosition
{
AudioUnitParameterID parameterID = kAUNBandEQParam_Gain + bandPosition;
TOThrowOnError(AudioUnitSetParameter(equalizerUnit,
parameterID,
kAudioUnitScope_Global,
0,
gain,
0));
NSLog(#"After set gain#[%d]->%f", bandPosition, [self gainForBandAtPosition:bandPosition]);
}
When I get the gain value of each band after setting it (-96 dB to 24 dB), it does correspond to the value I set, but I cannot hear any difference in the sound.
Hi fellow audio enthusiast!
I am also creating an app with an equaliser feature and have stumbled upon this web page:
http://www.deluge.co/?q=content/coreaudio-iphone-creating-graphic-equalizer
With a little cross-referencing, I have found that the code from your Git source lacks some key steps. After line 142, additional parameter set-up must be done.
For example, the setNumBands method is not used, and neither is setBands. Also, make sure to include the per-band bypass loop - it really won't work without it! Most importantly, check which pieces of code are missing from your implementation, based on the link above; a rough sketch of that extra set-up follows below.
I am sorry I can't give a more thorough explanation; I only started learning Audio Units a few days ago, but your question (as well as the BandEQ project) and the page I stumbled upon solved some of my problems, so I am trying to help you as well.
Nevertheless, there is an official programming guide for Audio Units - do study it, as will I! Good luck!
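For what it's worth, the extra set-up described above looks roughly like this sketch (untested here, using the question's variable name equalizerUnit): tell the EQ how many bands to use, give each band a centre frequency, and clear the per-band bypass - as noted above, the gains you set have no audible effect while a band is bypassed. The frequency list mirrors the commented-out array in the question.

NSArray *frequencies = @[ @32, @64, @128, @256, @512, @1024, @2048, @4096, @8192, @16384 ];

UInt32 numBands = (UInt32)frequencies.count;
AudioUnitSetProperty(equalizerUnit,
                     kAUNBandEQProperty_NumberOfBands,
                     kAudioUnitScope_Global,
                     0,
                     &numBands,
                     sizeof(numBands));

for (UInt32 i = 0; i < numBands; i++) {
    // Centre frequency for band i
    AudioUnitSetParameter(equalizerUnit,
                          kAUNBandEQParam_Frequency + i,
                          kAudioUnitScope_Global,
                          0,
                          [frequencies[i] floatValue],
                          0);

    // 0 = band enabled (not bypassed)
    AudioUnitSetParameter(equalizerUnit,
                          kAUNBandEQParam_BypassBand + i,
                          kAudioUnitScope_Global,
                          0,
                          0,
                          0);
}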