iOS equalizer with kAudioUnitSubType_NBandEQ

I want to implement an equalizer on an iOS device, and it seems achievable with kAudioUnitSubType_NBandEQ. However, after I set the gain of each band, I hear no change in the sound.
I am using the sample code from BandEQDemo (https://github.com/springlo/BandEQDemo):
OSStatus TOAUGraphAddNode(OSType inComponentType, OSType inComponentSubType, AUGraph inGraph, AUNode *outNode)
{
AudioComponentDescription desc;
desc.componentType = inComponentType;
desc.componentSubType = inComponentSubType;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
return AUGraphAddNode(inGraph, &desc, outNode);
}
TOAUGraphAddNode(kAudioUnitType_Effect,
kAudioUnitSubType_NBandEQ,
graph,
&eqNode);
AUGraphNodeInfo(graph,
eqNode,
NULL,
&equalizerUnit);
// @[ @32, @64, @128, @256, @512, @1024, @2048, @4096, @8192, @16384 ]
Then I set the gain and check it again with these two functions:
- (AudioUnitParameterValue)gainForBandAtPosition:(NSUInteger)bandPosition
{
AudioUnitParameterValue gain;
AudioUnitParameterID parameterID = kAUNBandEQParam_Gain + bandPosition;
TOThrowOnError(AudioUnitGetParameter(equalizerUnit,
parameterID,
kAudioUnitScope_Global,
0,
&gain));
return gain;
}
- (void)setGain:(AudioUnitParameterValue)gain forBandAtPosition:(NSUInteger)bandPosition
{
AudioUnitParameterID parameterID = kAUNBandEQParam_Gain + bandPosition;
TOThrowOnError(AudioUnitSetParameter(equalizerUnit,
parameterID,
kAudioUnitScope_Global,
0,
gain,
0));
NSLog(@"After set gain[%lu] -> %f", (unsigned long)bandPosition, [self gainForBandAtPosition:bandPosition]);
}
When I read back the gain value of each band after setting it (-96 dB to 24 dB), it does correspond to the value I set, but I cannot hear any difference in the sound.

Hi fellow audio enthusiast!
I am also creating an app with an equaliser feature and have stumbled upon this web page:
http://www.deluge.co/?q=content/coreaudio-iphone-creating-graphic-equalizer
With a little cross-referencing, I have found that the code from your Git source lacks some key features. After line 142, additional parameter setup must be done.
For example, the setNumBands method is never used, and neither is setBands. Also, make sure to include the bypass-setting loop; it really won't work without it! Most importantly, check which pieces of code are missing from your implementation, based on the link above.
I am sorry I can't give you a more thorough explanation; I only started learning Audio Units a few days ago. But your question (as well as the BandEQ project) and the page I stumbled upon have solved some of my problems, so I am trying to help you as well.
Nevertheless, there is an official guide for Audio Units; do study it, as will I! Good luck!
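To make that concrete, here is a minimal sketch of the kind of setup the linked page describes, reusing the equalizerUnit and frequency list from your question. The band count, exact frequencies and lack of error handling are illustrative assumptions, not the demo's actual code:
// Tell the EQ how many bands to use; this is a property, not a parameter,
// and it has to be set before the graph/unit is initialized.
NSArray *frequencies = @[ @32, @64, @128, @256, @512, @1024, @2048, @4096, @8192, @16384 ];
UInt32 numBands = (UInt32)frequencies.count;
AudioUnitSetProperty(equalizerUnit,
                     kAUNBandEQProperty_NumberOfBands,
                     kAudioUnitScope_Global,
                     0,
                     &numBands,
                     sizeof(numBands));
// Give each band a centre frequency and, crucially, un-bypass it;
// a bypassed band ignores its gain, which would explain hearing no change.
for (UInt32 i = 0; i < numBands; i++) {
    AudioUnitSetParameter(equalizerUnit,
                          kAUNBandEQParam_Frequency + i,
                          kAudioUnitScope_Global,
                          0,
                          (AudioUnitParameterValue)[frequencies[i] floatValue],
                          0);
    AudioUnitSetParameter(equalizerUnit,
                          kAUNBandEQParam_BypassBand + i,
                          kAudioUnitScope_Global,
                          0,
                          0,   // 0 = band active, 1 = band bypassed
                          0);
}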

Related

Inter App Audio technology : make effect node and instrument node independent

I am writing a host application that uses Core Audio's new iOS 7 Inter-App Audio technology. I have managed to host instrument apps and effect apps with the help of the Inter-App Audio examples.
The issue is that the effect node is dependent upon the instrument node. I want to make the effect node and the instrument node independent.
Here is my try:
if (desc.componentType == kAudioUnitType_RemoteEffect) {
// if ([self isRemoteInstrumentConnected]) {
if (!_engineStarted) // Check if session is active
[self checkStartOrStopEngine];
if ([self isGraphStarted]) // Check if graph is running and or is created, if so, stop it
[self checkStartStopGraph];
if ([self checkGraphInitialized ]) // Check if graph has been inititialized if so, uninitialize it.
Check(AUGraphUninitialize(hostGraph));
Check (AUGraphAddNode (hostGraph, &desc, &effectNode)); // Add remote effect
//Disconnect previous chain
// Check(AUGraphDisconnectNodeInput(hostGraph, mixerNode, remoteBus));
//Connect the effect node to the mixer on the remoteBus
Check(AUGraphConnectNodeInput (hostGraph, effectNode, 0, mixerNode, remoteBus));
//Connect the remote instrument node to the effect node on bus 0
Check(AUGraphConnectNodeInput (hostGraph, instrumentNode, 0, effectNode, 0));
//Grab audio units from the graph
Check(AUGraphNodeInfo(hostGraph, effectNode, 0, &effect));
currentUnit = &effect;
}
if (currentUnit) {
Check (AudioUnitSetProperty (*currentUnit, // Set stereo format
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
playerBus,
&stereoStreamFormat,
sizeof (stereoStreamFormat)));
UInt32 maxFrames = 4096;
Check(AudioUnitSetProperty(*currentUnit,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global, playerBus,
&maxFrames,
sizeof(maxFrames)));
[self addAudioUnitPropertyListeners:*currentUnit]; // Add property listeners to audio unit
Check(AUGraphInitialize (hostGraph)); // Initialize the graph
[self checkStartStopGraph]; //Start the graph
}
[_connectedNodes addObject:rau];
but my application crashes on this line:
Check(AUGraphInitialize (hostGraph));
And the error I got:
ConnectAudioUnit failed with error -10860
Initialize failed with error -10860
error -10860 from AUGraphInitialize (hostGraph)
Note: I have also attached a screenshot of the code portion for better understanding.
Edit 1:
- (void)createGraph {
// 1
NewAUGraph(&hostGraph);
// 2
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
AUGraphAddNode(hostGraph, &iOUnitDescription, &outNode);
// 3
AUGraphOpen(hostGraph);
// 4
Check(AUGraphNodeInfo(hostGraph, outNode, 0, &outputUnit));
// 5
AudioStreamBasicDescription format;
format.mChannelsPerFrame = 2;
format.mSampleRate = [[AVAudioSession sharedInstance] sampleRate];
format.mFormatID = kAudioFormatLinearPCM;
format.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
format.mBytesPerFrame = sizeof(Float32);
format.mBytesPerPacket = sizeof(Float32);
format.mBitsPerChannel = 32;
format.mFramesPerPacket = 1;
AudioUnitSetProperty(mixerUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
1,
&format,
sizeof(format));
AudioUnitSetProperty(mixerUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&format,
sizeof(format));
CAShow(hostGraph);
}
So the error you're seeing, as per the Apple docs, is kAUGraphErr_NodeNotFound: "The specified node cannot be found."
It looks like you've taken the Apple example app you linked and just deleted a bit to try to remove one node, but I don't believe it's that simple. The documentation for the example clearly states that the two nodes are dependent. Just changing the method that adds the remote nodes is not going to be enough, because the host is still attempting to create both, as shown by the error you're seeing.
From this file in the example project, you are only showing the changes you made to addRemoteAU, but you need to make changes to createGraph as well, since that is where the hostGraph is initialized with its nodes. If you initialize the graph with only one node, then in addRemoteAU you should stop seeing an error due to a node not being found, since the graph at that point won't expect two nodes (which it does now from its creation).
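As a rough illustration of that change (a sketch using the variable names from your snippets, not the example project's actual code), createGraph would only set up the nodes that are always present, and addRemoteAU would then add whichever single remote node the user picked and wire it to the mixer:
// Sketch: createGraph adds only the permanent nodes (remote I/O + mixer).
Check(NewAUGraph(&hostGraph));
AudioComponentDescription ioDesc = {0};
ioDesc.componentType = kAudioUnitType_Output;
ioDesc.componentSubType = kAudioUnitSubType_RemoteIO;
ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponentDescription mixerDesc = {0};
mixerDesc.componentType = kAudioUnitType_Mixer;
mixerDesc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
Check(AUGraphAddNode(hostGraph, &ioDesc, &outNode));
Check(AUGraphAddNode(hostGraph, &mixerDesc, &mixerNode));
Check(AUGraphOpen(hostGraph));
Check(AUGraphNodeInfo(hostGraph, outNode, NULL, &outputUnit));
Check(AUGraphNodeInfo(hostGraph, mixerNode, NULL, &mixerUnit));
Check(AUGraphConnectNodeInput(hostGraph, mixerNode, 0, outNode, 0));
// addRemoteAU then adds only the node the user actually selected (effect or
// instrument), connects it to the mixer, and never touches a node that was
// never added to the graph.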

RemoteIO AudioUnit playback quality not tied to callback runtime, but to something else

Playback through my AudioUnit works fine until I start getting gyroscope updates from a CMMotionManager. I assumed this was due to a performance hit, but when I measured the runtime of my callback during said gyroscope updates it isn't as high as other CMMotionManager-less trials with smooth playback, yet the playback is choppy.
Some visual explanation (The red is the time between consecutive callbacks. The green--it's hard to see but there's bits of green right underneath all the red--is the runtime of the callback, which is consistently just a few milliseconds less):
Sorry if the graph is a bit messy, hopefully I'm still getting my point across.
In sum, rather than the runtime of the callback, the quality of the playback seems more tied to the "steadiness" of the frequency at which the callback is, erm, called back. What could be going on here? Could my callback runtimes just be off? That seems unlikely. I'm timing my callback via calls to clock() at the beginning and end. Is my AudioUnit setup wrong? It is admittedly a bit hacked together, and I'm not using an AUGraph or anything.
AudioUnit initialization code:
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO; // Remote I/O is for talking with the hardware
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent component = AudioComponentFindNext(NULL, &desc);
AudioComponentInstanceNew(component, &myAudioUnit);
UInt32 enableIO;
AudioUnitElement inputBus = 1;
AudioUnitElement outputBus = 0;
//Disabling IO for recording
enableIO = 0;
AudioUnitSetProperty(myAudioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, inputBus, &enableIO, sizeof(enableIO));
//Enabling IO for playback
enableIO = 1;
AudioUnitSetProperty(myAudioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, outputBus, &enableIO, sizeof(enableIO));
UInt32 bytesPerSample = BIT_DEPTH/8.0;
AudioStreamBasicDescription stereoStreamFormat = {0};
stereoStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
stereoStreamFormat.mBytesPerFrame = bytesPerSample;
stereoStreamFormat.mBytesPerPacket = bytesPerSample;
stereoStreamFormat.mChannelsPerFrame = 2; // 2 indicates stereo
stereoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked |
kAudioFormatFlagIsNonInterleaved;
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mReserved = 0;
stereoStreamFormat.mSampleRate = SAMPLE_RATE;
AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, inputBus, &stereoStreamFormat, sizeof(AudioStreamBasicDescription));
AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, outputBus, &stereoStreamFormat, sizeof(AudioStreamBasicDescription));
//Setting input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = &recordingCallback; //TODO: Should there be an ampersand?
callbackStruct.inputProcRefCon = myAudioUnit;
AudioUnitSetProperty(myAudioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Output, inputBus, &callbackStruct, sizeof(callbackStruct)); //TODO: Not sure of scope and bus/element
//Setting output callback
callbackStruct.inputProc = &playbackCallback;
callbackStruct.inputProcRefCon = myAudioUnit;
AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, outputBus, &callbackStruct, sizeof(callbackStruct));
AudioUnitInitialize(myAudioUnit);
RemoteIO Playback Callback:
static OSStatus playbackCallback (void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) {
timeOfCallback = clock();
if (timeOfPrevCallback != 0) {
callbackInterval = (double)timeOfCallback - timeOfPrevCallback;
}
clock_t t1, t2;
t1 = clock();
FooBarClass::myCallbackFunction((short *)ioData->mBuffers[0].mData, (short *)ioData->mBuffers[1].mData);
t2 = clock();
cout << "Callback duration: " << ((double)(t2-t1))/CLOCKS_PER_SEC << endl;
cout << "Callback interval: " << callbackInterval/CLOCKS_PER_SEC << endl;
timeOfPrevCallback = timeOfCallback;
//prevCallbackInterval = callbackInterval;
return noErr;
}
In myCallbackFunction I'm reading from a handful of .wav files, filtering each one and mixing them together, and copying the output to the buffers passed to it. In the graph where I mention "incrementally adding computation" I'm referring to the number of input files I'm mixing together. Also, if it matters, gyroscope updates occur via an NSTimer that goes off every 1/25 of a second:
[self.getMotionManager startDeviceMotionUpdates];
gyroUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:GYRO_UPDATE_INTERVAL target:self selector:@selector(doGyroUpdate) userInfo:nil repeats:YES];
...
+(void)doGyroUpdate {
double currentYaw = motionManager.deviceMotion.attitude.yaw;
// a couple more lines of not very expensive code
}
I should also be more clear about what I mean by choppiness in this sense: The audio isn't skipping, it just sounds really bad, as if an additional, crackly track was getting mixed in while the other tracks play smoothly. I'm not talking about clipping either, which it isn't because the only difference between good and bad playback is the gyroscope updates.
Thanks in advance for any tips.
----- Update 1 -----
My runtimes were a bit off because I was using clock(), which apparently doesn't work right for multithreaded applications. Apparently clock_gettime() is the proper way to measure runtimes across multiple threads, but it's not implemented on Darwin. Though it's not an ideal solution, I'm using gettimeofday() now to measure the callback run time and intervals. Aside from the now-steady intervals between successive callbacks (which were previously pretty erratic), things are more or less the same:
----- Update 2 -----
Interestingly enough, when I start and then stop CMMotionManager updates altogether via stopDeviceMotionUpdates, the crackliness persists...
----- Update 3 -----
'Crackliness' doesn't start until the first CMMotionManager update is received, i.e. when the deviceMotion property is first checked after the NSTimer first fires. After that, the crackliness persists regardless of the update frequency, and even after updates are stopped.
You are trying to call Objective-C methods, do synchronous file reads, and/or do significant computation (your C++ function) inside a real-time audio render callback. Also, logging to cout from inside a real-time thread is most likely not going to work reliably. These operations can take too long to meet the latency-critical real-time requirements of Audio Unit callbacks.
Instead, for any data that does not have a tightly bounded maximum latency to generate, you might just copy that data from a lock-free circular FIFO or queue inside your render callback, and fill that audio-data FIFO slightly ahead of time in another thread (perhaps one driven by an NSTimer or CADisplayLink).
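Here is a minimal sketch of that idea. The SampleFIFO type, its capacity and the single-producer/single-consumer design are illustrative assumptions, not something from your project: the render callback only copies samples out, while another thread keeps the buffer topped up ahead of time. (For brevity the sketch omits the free-space check a production FIFO would need.)
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define FIFO_CAPACITY (16 * 1024)   // in samples; must comfortably exceed one render buffer

typedef struct {
    int16_t samples[FIFO_CAPACITY];
    _Atomic uint32_t head;   // advanced by the producer thread only
    _Atomic uint32_t tail;   // advanced by the render callback only
} SampleFIFO;

// Producer side (worker thread, NSTimer, CADisplayLink): do the file reads and mixing here.
static void FIFOWrite(SampleFIFO *f, const int16_t *src, uint32_t count) {
    uint32_t head = atomic_load_explicit(&f->head, memory_order_relaxed);
    for (uint32_t i = 0; i < count; i++)
        f->samples[(head + i) % FIFO_CAPACITY] = src[i];
    atomic_store_explicit(&f->head, head + count, memory_order_release);
}

// Consumer side, called from the render callback: copy only, no locks, no allocation, no logging.
static uint32_t FIFORead(SampleFIFO *f, int16_t *dst, uint32_t count) {
    uint32_t head = atomic_load_explicit(&f->head, memory_order_acquire);
    uint32_t tail = atomic_load_explicit(&f->tail, memory_order_relaxed);
    uint32_t available = head - tail;
    uint32_t n = (available < count) ? available : count;
    for (uint32_t i = 0; i < n; i++)
        dst[i] = f->samples[(tail + i) % FIFO_CAPACITY];
    if (n < count)
        memset(dst + n, 0, (count - n) * sizeof(int16_t));   // underrun: pad with silence
    atomic_store_explicit(&f->tail, tail + n, memory_order_release);
    return n;
}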
I had a similar issue when activating the location services. The issue was only present on slower devices like the 16 GB iPod touch and not present on other hardware. I saw that your graph title says BUF_SIZE: 1024.
Is this your callback buffer size?
I fixed my problem by increasing the callback duration (buffer size).
If you can handle more latency, try increasing the callback duration using:
NSTimeInterval _preferredDuration = (2048) / 44100.0 ; // Try bigger values here
NSError* err;
[[AVAudioSession sharedInstance]setPreferredIOBufferDuration:_preferredDuration error:&err];
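For reference, 2048 frames at 44.1 kHz works out to roughly 46 ms per callback, compared to about 23 ms for the 1024-frame buffer shown in your graph title.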

Why is my multi-channel mixer no longer playing in iOS 8?

I've written some code to play multi-instrument general MIDI files on iOS. It works fine in iOS 7, but stopped working on iOS 8.
I've stripped it down to its essence here. Instead of creating 16 channels for my multi-channel mixer, I just create one sampler node, and map all the tracks to that channel. It still exhibits the same problem as the multi-sampler version. None of the Audio Toolbox calls return an error code (they all return 0) in iOS 7 or iOS 8. The sequence plays through the speakers in iOS 7, on both the simulator and on iPhone/iPad devices. Run the exact same code on the iOS 8 simulator, or an iPhone/iPad device, and no sound is produced.
If you comment out the call to [self initGraphFromMIDISequence], it plays on iOS 8 with the default sine-wave sound.
@implementation MyMusicPlayer {
MusicPlayer _musicPlayer;
MusicSequence _musicSequence;
AUGraph _processingGraph;
}
- (void)playMidi:(NSURL*)midiFileURL {
NewMusicSequence(&_musicSequence);
MusicSequenceFileLoad(_musicSequence, CFBridgingRetain(midiFileURL), 0, 0);
NewMusicPlayer(&_musicPlayer);
MusicPlayerSetSequence(_musicPlayer, _musicSequence);
[self initGraphFromMIDISequence];
MusicPlayerPreroll(_musicPlayer);
MusicPlayerStart(_musicPlayer);
}
// Sets up an AUGraph with one channel whose instrument is loaded from a sound bank.
// Maps all the tracks of the MIDI sequence onto that channel. Basically this is a
// way to replace the default sine-wave sound with another (single) instrument.
- (void)initGraphFromMIDISequence {
NewAUGraph(&_processingGraph);
// Add one sampler unit to the graph.
AUNode samplerNode;
AudioComponentDescription cd = {};
cd.componentManufacturer = kAudioUnitManufacturer_Apple;
cd.componentType = kAudioUnitType_MusicDevice;
cd.componentSubType = kAudioUnitSubType_Sampler;
AUGraphAddNode(_processingGraph, &cd, &samplerNode);
// Add a Mixer unit node to the graph
cd.componentType = kAudioUnitType_Mixer;
cd.componentSubType = kAudioUnitSubType_MultiChannelMixer;
AUNode mixerNode;
AUGraphAddNode(_processingGraph, &cd, &mixerNode);
// Add the Output unit node to the graph
cd.componentType = kAudioUnitType_Output;
cd.componentSubType = kAudioUnitSubType_RemoteIO; // Output to speakers.
AUNode ioNode;
AUGraphAddNode(_processingGraph, &cd, &ioNode);
AUGraphOpen(_processingGraph);
// Obtain the mixer unit instance from its corresponding node, and set the bus count to 1.
AudioUnit mixerUnit;
AUGraphNodeInfo(_processingGraph, mixerNode, NULL, &mixerUnit);
UInt32 const numChannels = 1;
AudioUnitSetProperty(mixerUnit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&numChannels,
sizeof(numChannels));
// Connect the sampler node's output 0 to mixer node output 0.
AUGraphConnectNodeInput(_processingGraph, samplerNode, 0, mixerNode, 0);
// Connect the mixer unit to the output unit.
AUGraphConnectNodeInput(_processingGraph, mixerNode, 0, ioNode, 0);
// Obtain reference to the audio unit from its node.
AudioUnit samplerUnit;
AUGraphNodeInfo(_processingGraph, samplerNode, 0, &samplerUnit);
MusicSequenceSetAUGraph(_musicSequence, _processingGraph);
// Set the destination for each track to our single sampler node.
UInt32 trackCount;
MusicSequenceGetTrackCount(_musicSequence, &trackCount);
MusicTrack track;
for (int i = 0; i < trackCount; i++) {
MusicSequenceGetIndTrack(_musicSequence, i, &track);
MusicTrackSetDestNode(track, samplerNode);
}
// You can use either a DLS or an SF2 file bundled with your app; both work in iOS 7.
//NSString *soundBankPath = [[NSBundle mainBundle] pathForResource:@"GeneralUserv1.44" ofType:@"sf2"];
NSString *soundBankPath = [[NSBundle mainBundle] pathForResource:@"gs_instruments" ofType:@"dls"];
NSURL *bankURL = [NSURL fileURLWithPath:soundBankPath];
AUSamplerBankPresetData bpdata;
bpdata.bankURL = (__bridge CFURLRef) bankURL;
bpdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
bpdata.bankLSB = kAUSampler_DefaultBankLSB;
bpdata.presetID = 0;
UInt8 instrumentNumber = 46; // pick any GM instrument 0-127
bpdata.presetID = instrumentNumber;
AudioUnitSetProperty(samplerUnit,
kAUSamplerProperty_LoadPresetFromBank,
kAudioUnitScope_Global,
0,
&bpdata,
sizeof(bpdata));
}
I have some code, not included here, which polls to see if the sequence is still playing, by calling MusicPlayerGetTime on the MusicPlayer instance. In iOS 7, the result of that call each time is the number of seconds that have elapsed since it started playing. In iOS 8, the call always returns 0, which presumably means the MusicPlayer does not start playing the sequence on the call to MusicPlayerStart.
The code above is highly order-dependent -- you have to make certain calls before others; e.g., opening the graph before calling getInfo on a node, and not loading instruments until you've assigned the tracks to channels. I've followed all the advice in other StackOverflow threads, and have verified that getting the order correct makes error codes disappear.
Any iOS MIDI experts know what might have changed between iOS 7 and iOS 8 to make this code stop working?
In iOS 8 Apple introduced AVAudioEngine, a slick Objective-C abstraction of the Core Audio API.
You should probably check it out. https://developer.apple.com/videos/wwdc/2014/#502
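For example, here is a minimal sketch of the AVAudioEngine route in Objective-C, mirroring the sound bank and program number from your AUSampler code; it only shows the engine/sampler setup, not the MIDI-file sequencing, and is an assumption rather than a drop-in replacement for the MusicPlayer code:
// Hedged sketch: AVAudioEngine + AVAudioUnitSampler (iOS 8 and later).
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioUnitSampler *sampler = [[AVAudioUnitSampler alloc] init];
[engine attachNode:sampler];
[engine connect:sampler to:engine.mainMixerNode format:nil];

NSError *error = nil;
NSURL *bankURL = [[NSBundle mainBundle] URLForResource:@"gs_instruments" withExtension:@"dls"];
[sampler loadSoundBankInstrumentAtURL:bankURL
                              program:46
                              bankMSB:kAUSampler_DefaultMelodicBankMSB
                              bankLSB:kAUSampler_DefaultBankLSB
                                error:&error];
[engine startAndReturnError:&error];
// Notes can then be played with -[AVAudioUnitSampler startNote:withVelocity:onChannel:].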

ios audio queue - how to meter audio level in buffer?

I'm working on an app that should do some audio signal processing. I need to measure the audio level in each of the buffers I get (through the callback function). I've been searching the web for some time, and I found that there is a built-in property called current level metering:
AudioQueueGetProperty(recordState->queue,kAudioQueueProperty_CurrentLevelMeter,meters,&dlen);
This property gets me the average or peak audio level, but it's not synchronised to the current buffer.
I figured out that I need to calculate the audio level from the buffer data myself, so I wrote this:
double calcAudioRMS (SInt16 * audioData, int numOfSamples)
{
double RMS, adPercent;
RMS = 0;
for (int i=0; i<numOfSamples; i++)
{
adPercent=audioData[i]/32768.0f;
RMS += adPercent*adPercent;
}
RMS = sqrt(RMS / numOfSamples);
return RMS;
}
This function takes the audio data (cast to SInt16) and the number of samples in the current buffer. The numbers I get are indeed between 0 and 1, but they seem rather random and low compared to the numbers I got from the built-in level metering.
The recording audio format is:
format->mSampleRate = 8000.0;
format->mFormatID = kAudioFormatLinearPCM;
format->mFramesPerPacket = 1;
format->mChannelsPerFrame = 1;
format->mBytesPerFrame = 2;
format->mBytesPerPacket = 2;
format->mBitsPerChannel = 16;
format->mReserved = 0;
format->mFormatFlags = kLinearPCMFormatFlagIsSignedInteger |kLinearPCMFormatFlagIsPacked;
My question is how to get the right values from the buffer. Is there a built-in function or property for this? Or should I calculate the audio level myself, and if so, how?
Thanks in advance.
Your calculation of RMS power is correct. I'd be inclined to say that you are averaging over fewer samples than Apple does, or something similar, and that would explain the difference. You can check by inputting a full-scale sine wave and verifying that both Apple's metering and your code report an RMS power of 1/sqrt(2).
Unless there's a good reason not to, I would use Apple's power calculations. I've used them, and they seem good to me. Additionally, you generally don't want raw RMS power, you want RMS power as decibels; for that, use the kAudioQueueProperty_CurrentLevelMeterDB constant. (This depends on whether you're trying to build an audio meter or truly display the audio power.)
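If you do end up converting your own RMS value to decibels, here is a minimal sketch; the -96 dB silence floor is an arbitrary choice of mine, not something from the Audio Queue API:
#include <math.h>

// Convert a 0..1 RMS value (as returned by calcAudioRMS above) to dB relative to full scale.
double rmsToDecibels(double rms)
{
    if (rms <= 0.0) return -96.0;            // avoid log10(0); treat as silence
    double db = 20.0 * log10(rms);
    return (db < -96.0) ? -96.0 : db;
}
// A full-scale sine wave has RMS 1/sqrt(2), i.e. roughly -3 dBFS.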

Playing multiple files with a single file player audio unit

I'm trying to use a file player audio unit (kAudioUnitSubType_AudioFilePlayer) to play multiple files (not at the same time, of course). That's on iOS.
So I've successfully opened the files and stored their details in an array of AudioFileID's that I set to the audio unit using kAudioUnitProperty_ScheduledFileIDs. Now I would like to define 2 ScheduledAudioFileRegion's, one per file, and used them with the file player...
But I can't seem to find out:
How to set the kAudioUnitProperty_ScheduledFileRegion property to store these 2 regions (actually, how to define the index of each region)?
How to trigger the playback of a specific region.. My guess is that the kAudioTimeStampSampleTimeValid parameter should enable this but how to define which region you want to play?
Maybe I'm just plain wrong about the way I should use this audio unit, but documentation is very difficult to get and I haven't found any example showing the playback of 2 regions on the same player!
Thanks in advance.
You need to schedule a region every time you want to play a file. In the ScheduledAudioFileRegion you must set the AudioFileID to play. Playback begins when the current time of the unit (in samples) is equal to or greater than the sample time of the scheduled region.
Example:
// get current unit time
AudioTimeStamp timeStamp;
UInt32 propSize = sizeof(AudioTimeStamp);
AudioUnitGetProperty(m_playerUnit, kAudioUnitProperty_CurrentPlayTime, kAudioUnitScope_Global, 0, &timeStamp, &propSize);
// when to start playback
timeStamp.mSampleTime += 100;
// schedule region
ScheduledAudioFileRegion region;
memset(&region, 0, sizeof(ScheduledAudioFileRegion));
region.mAudioFile = ...; // your AudioFileID
region.mFramesToPlay = ...; // count of frames to play
region.mLoopCount = 1;
region.mStartFrame = 0;
region.mTimeStamp = timeStamp;
AudioUnitSetProperty(m_playerUnit, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0, &region,sizeof(region));
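One caveat worth adding (based on how the file player unit is usually driven, so treat it as a hint rather than part of the answer above): after scheduling the region, the unit also needs a start timestamp before it will render anything. Setting mSampleTime to -1 means "start on the next render cycle":
AudioTimeStamp startTime;
memset(&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;   // -1 = begin on the next render cycle
AudioUnitSetProperty(m_playerUnit, kAudioUnitProperty_ScheduledFilePlayStartTime,
                     kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));
// Optionally set kAudioUnitProperty_ScheduledFilePrime beforehand to preload frames.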
