Only play audio from array once without looping - ios

I'm a complete beginner when it comes to audio programming and right now I'm playing around with AudioUnit. I'm following http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html and I've ported the code over to work with iOS 7. The problem is that I only want it to play the generated sine wave once, not keep looping the sound. I'm not sure how to accomplish this.
Generating audio samples:
OSStatus RenderTone(
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
// Fixed amplitude is good enough for our purposes
const double amplitude = 0.25;
// Get the tone parameters out of the view controller
ToneGeneratorViewController *viewController =
(ToneGeneratorViewController *)inRefCon;
double theta = viewController->theta;
double theta_increment =
2.0 * M_PI * viewController->frequency / viewController->sampleRate;
// This is a mono tone generator so we only need the first buffer
const int channel = 0;
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
buffer[frame] = sin(theta) * amplitude;
theta += theta_increment;
if (theta > 2.0 * M_PI)
{
theta -= 2.0 * M_PI;
}
}
// Store the updated theta back in the view controller
viewController->theta = theta;
return noErr;
}
Creating AudioUnit:
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
// Get the default playback output unit
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output");
// Create a new unit based on this that we'll use for output
OSErr err = AudioComponentInstanceNew(defaultOutput, &toneUnit);
NSAssert1(toneUnit, @"Error creating unit: %ld", err);
// Set our tone rendering function on the unit
AURenderCallbackStruct input;
input.inputProc = RenderTone;
input.inputProcRefCon = self;
err = AudioUnitSetProperty(toneUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input));
NSAssert1(err == noErr, @"Error setting callback: %ld", err);
// Set the format to 32 bit, single channel, floating point, linear PCM
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = sampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags =
kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mBytesPerPacket = four_bytes_per_float;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = four_bytes_per_float;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
err = AudioUnitSetProperty (toneUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&streamFormat,
sizeof(AudioStreamBasicDescription));
NSAssert1(err == noErr, @"Error setting stream format: %ld", err);
Thanks!

The problem is that I only want it to play the generated sine wave once
What you should do is stop the audio unit after a certain time.
You could, for example, start an NSTimer when you call AudioOutputUnitStart and, when the timer fires, call AudioOutputUnitStop (or rather, your audio unit disposal code). Even simpler, you could use performSelector:withObject:afterDelay: to call your audio unit disposal method.
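For instance, a minimal sketch (it assumes the toneUnit ivar from the code above; playToneOnce and stopTone are hypothetical method names, so adapt them to your own setup and teardown code):
- (void)playToneOnce
{
AudioUnitInitialize(toneUnit);
AudioOutputUnitStart(toneUnit);
// Stop after roughly one second instead of letting the render callback run forever.
[self performSelector:@selector(stopTone) withObject:nil afterDelay:1.0];
}
- (void)stopTone
{
// Tear the unit down so RenderTone is no longer called.
AudioOutputUnitStop(toneUnit);
AudioUnitUninitialize(toneUnit);
AudioComponentInstanceDispose(toneUnit);
toneUnit = NULL;
}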
Hope this helps.

Related

How to implement digital filters in Core Audio?

I have a question about the implementation of digital filters in Core Audio. I'm in real trouble because I've been trying for a few weeks to understand how to implement them. The basic idea is this: while I talk into the iPhone's microphone, my voice is filtered by a low-pass, high-pass, or band-pass filter.
I studied the book "Learning Core Audio" for the Core Audio implementation (it took a few weeks of work and research, but it now works very well) and the book "Digital Signal Processing" (very famous; I see it recommended all the time in discussions about filters).
From the DSP book I understand this: I must create a filter, a sort of kernel that acts as a "mask" to be applied to the signal itself. In that state the filter is the "impulse response" (also called IIR). Then I apply the mask to the signal with a convolution function, which makes it an FIR filter. Is what I have written correct?
I tried to implement what is written there, also taking the AudioGraph example application and the documentation as references, but nothing: it does not work.
I also found a document that explains what to do in a simple way (it is called "Creating FIR Filters in C++") where the filter algorithms are given, but even here, once implemented, they did not work.
So I'm starting to wonder: maybe I have to do something else with the signal I get? The only thing I understand is that I have to convert the signal to SInt16. I did that, but then what? What should I do to implement a digital filter?
That's why I'm asking for help; I have no one to compare notes with, so I'm going at it blind. I leave you with my current Core Audio code. It works perfectly, even on devices, but perhaps there is some wrong setting?
AudioComponentDescription AudioCompDesc;
AudioCompDesc.componentType = kAudioUnitType_Output;
AudioCompDesc.componentSubType = kAudioUnitSubType_RemoteIO;
AudioCompDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioCompDesc.componentFlags = 0;
AudioCompDesc.componentFlagsMask = 0;
AudioComponent RIOComponente = AudioComponentFindNext(NULL, &AudioCompDesc);
CheckError(AudioComponentInstanceNew(RIOComponente, &strutturaAscolto.RIO), "Couldn't get an instance of the RIO unit");
UInt32 oneFlag = 1;
AudioUnitElement bus0 = 0;
CheckError(AudioUnitSetProperty(strutturaAscolto.RIO, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, bus0, &oneFlag, sizeof(oneFlag)), "Couldn't enable RIO output");
AudioUnitElement bus1 = 1;
CheckError(AudioUnitSetProperty(strutturaAscolto.RIO, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, bus1, &oneFlag, sizeof(oneFlag)), "Couldn't enable RIO input");
strutturaAscolto.AudioAscoltoASBD.mSampleRate = 44100;
strutturaAscolto.AudioAscoltoASBD.mFormatID = kAudioFormatLinearPCM;
strutturaAscolto.AudioAscoltoASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked;
strutturaAscolto.AudioAscoltoASBD.mBytesPerPacket = 4;
strutturaAscolto.AudioAscoltoASBD.mFramesPerPacket = 1;
strutturaAscolto.AudioAscoltoASBD.mBytesPerFrame = 4;
strutturaAscolto.AudioAscoltoASBD.mChannelsPerFrame = 2;
strutturaAscolto.AudioAscoltoASBD.mBitsPerChannel = 16;
CheckError(AudioUnitSetProperty(strutturaAscolto.RIO, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, bus0, &strutturaAscolto.AudioAscoltoASBD, sizeof (strutturaAscolto.AudioAscoltoASBD)), "Couldn't set the audio format (ASBD) on RIO's input scope, bus 0");
CheckError(AudioUnitSetProperty(strutturaAscolto.RIO, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, bus1, &strutturaAscolto.AudioAscoltoASBD, sizeof (strutturaAscolto.AudioAscoltoASBD)), "Couldn't set the audio format (ASBD) on RIO's output scope, bus 1");
strutturaAscolto.SenoFrequenza = 30;
strutturaAscolto.SenoFase = 0;
AURenderCallbackStruct CallbackStruct;
CallbackStruct.inputProc = ModulazioneAudio;
CallbackStruct.inputProcRefCon = &strutturaAscolto;
CheckError(AudioUnitSetProperty(strutturaAscolto.RIO, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, bus0, &CallbackStruct, sizeof (CallbackStruct)), "Couldn't set the RIO render callback on bus 0");
CheckError(AudioUnitInitialize(strutturaAscolto.RIO), "Couldn't initialize the RIO unit");
CheckError(AudioOutputUnitStart(strutturaAscolto.RIO), "Couldn't start the RIO unit");
The callback function that should host the filter is simply this:
static OSStatus ModulazioneAudio(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames,AudioBufferList *ioData) {
StrutturaAscolto *strutturaAscolto = (StrutturaAscolto*) inRefCon;
UInt32 bus1 = 1;
CheckError(AudioUnitRender(strutturaAscolto->RIO, ioActionFlags, inTimeStamp, bus1, inNumberFrames, ioData), "Couldn't render from the RIO unit");
return noErr;
}
I leave you with some of the filters I tested (not all of them). This algorithm for the high-pass filter is taken from the document "Creating FIR Filters in C++":
int N = 1024;
float f_c = 5000.0f/44100.0f; // floating-point division; the integer expression 5000/44100 would truncate to 0
float omega_c = 2*M_PI*f_c;
int middle = N/2;
int i = -N/2;
float fltr[N]; // indices run from 0 to N-1, so the array needs N elements
do {
if (i == 0) {
fltr[middle] = 1 - 2*f_c;
}
else {
fltr[i+middle] = -sin(omega_c*i)/(M_PI*i);
}
i++;
} while(i != N/2);
This one I took from the DSP book (chapter 16); it is also referenced in the AudioGraph application.
void lowPassWindowedSincFilter( float *buf , float fc ) {
int i;
int m = 100;
float sum = 0;
for( i = 0; i < 101 ; i++ ) {
if((i - m / 2) == 0 ) {
buf[i] = 2 * M_PI * fc;
}
else {
buf[i] = sin(2 * M_PI * fc * (i - m / 2)) / (i - m / 2);
}
buf[i] = buf[i] * (.54 - .46 * cos(2 * M_PI * i / m ));
}
for ( i = 0 ; i < 101 ; i++ ) {
sum = sum + buf[i];
}
for ( i = 0 ; i < 101 ; i++ ) {
buf[i] = buf[i] / sum;
}
}
As for the convolution function, I used this Hamming approach, very simple, which I found right here on Stack Overflow:
for (int i = 0; i < 1024; i++) {
double multiplier = 0.5 * (1 - cos(2*M_PI*i/1023));
dataOut[i] = multiplier * dataIn[i];
}
Thank you for your attention.

iOS AudioUnit garbage input and output callback error on writing to right channel

I'm trying to output a sine wave on the left channel and silence on the right channel of an AudioUnit. I receive the following error when trying to write zero to the right channel,
Thread 5: EXC_BAD_ACCESS(code=1, address=0x0)
The callback function where this occurs is below, with the line where the error occurs marked by the comment // **** ERROR HERE **** at the end of the line.
Output Callback
static OSStatus outputCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Scope reference to GSFSensorIOController class
GSFSensorIOController *THIS = (__bridge GSFSensorIOController *) inRefCon;
// Communication out on left and right channel if new communication out
AudioSampleType *outLeftSamples = (AudioSampleType *) ioData->mBuffers[0].mData;
AudioSampleType *outRightSamples = (AudioSampleType *) ioData->mBuffers[1].mData;
// Set up power tone attributes
float freq = 20000.00f;
float sampleRate = 44100.00f;
float phase = THIS.sinPhase;
float sinSignal;
double phaseInc = 2 * M_PI * freq / sampleRate;
for (UInt32 curFrame = 0; curFrame < inNumberFrames; ++curFrame) {
// Generate power tone on left channel
sinSignal = sin(phase);
outLeftSamples[curFrame] = (SInt16) ((sinSignal * 32767.0f) /2);
outRightSamples[curFrame] = (SInt16) (0); // **** ERROR HERE ****
phase += phaseInc;
if (phase >= 2 * M_PI * freq) {
phase = phase - (2 * M_PI * freq);
}
}
// Save sine wave phase wave for next callback
THIS.sinPhase = phase;
return noErr;
}
At the time the error is thrown, curFrame is 0 and outRightSamples is NULL. This leads me to believe that I'm setting up the channels incorrectly. Here is where I set up the IO of my AudioUnit:
Audio Unit Set Up
// Audio component description
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Mono ASBD
AudioStreamBasicDescription monoStreamFormat;
monoStreamFormat.mSampleRate = 44100.00;
monoStreamFormat.mFormatID = kAudioFormatLinearPCM;
monoStreamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
monoStreamFormat.mBytesPerPacket = 2;
monoStreamFormat.mBytesPerFrame = 2;
monoStreamFormat.mFramesPerPacket = 1;
monoStreamFormat.mChannelsPerFrame = 1;
monoStreamFormat.mBitsPerChannel = 16;
// Stereo ASBD
AudioStreamBasicDescription stereoStreamFormat;
stereoStreamFormat.mSampleRate = 44100.00;
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
stereoStreamFormat.mBytesPerPacket = 4;
stereoStreamFormat.mBytesPerFrame = 4;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel = 16;
OSErr err;
@try {
// Get Audio units
err = AudioComponentInstanceNew(inputComponent, &_ioUnit);
NSAssert1(err == noErr, @"Error setting input component: %hd", err);
// Enable input, which is disabled by default. Output is enabled by default
UInt32 enableInput = 1;
err = AudioUnitSetProperty(_ioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&enableInput,
sizeof(enableInput));
NSAssert1(err == noErr, @"Error enabling input: %hd", err);
err = AudioUnitSetProperty(_ioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&enableInput,
sizeof(enableInput));
NSAssert1(err == noErr, @"Error setting output: %hd", err);
// Apply format to input of ioUnit
err = AudioUnitSetProperty(self.ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&monoStreamFormat,
sizeof(monoStreamFormat));
NSAssert1(err == noErr, @"Error setting input ASBD: %hd", err);
// Apply format to output of ioUnit
err = AudioUnitSetProperty(self.ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&stereoStreamFormat,
sizeof(stereoStreamFormat));
NSAssert1(err == noErr, @"Error setting output ASBD: %hd", err);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = inputCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
err = AudioUnitSetProperty(self.ioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
NSAssert1(err == noErr, @"Error setting input callback: %hd", err);
// Set output callback
callbackStruct.inputProc = outputCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
err = AudioUnitSetProperty(self.ioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
NSAssert1(err == noErr, @"Error setting output callback: %hd", err);
// Disable buffer allocation
UInt32 disableBufferAlloc = 0;
err = AudioUnitSetProperty(self.ioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&disableBufferAlloc,
sizeof(disableBufferAlloc));
// Allocate input buffers (1 channel, 16 bits per sample, thus 16 bits per frame and therefore 2 bytes per frame)
_inBuffer.mNumberChannels = 1;
_inBuffer.mDataByteSize = 512 * 2;
_inBuffer.mData = malloc( 512 * 2 );
// Initialize audio unit
err = AudioUnitInitialize(self.ioUnit);
NSAssert1(err == noErr, @"Error initializing unit: %hd", err);
//AudioUnitInitialize(self.ioUnit);
// Start audio IO
err = AudioOutputUnitStart(self.ioUnit);
NSAssert1(err == noErr, @"Error starting unit: %hd", err);
//AudioOutputUnitStart(self.ioUnit);
}
@catch (NSException *exception) {
NSLog(@"Failed with exception: %@", exception);
}
I don't believe I'm setting up the AudioUnit correctly because I'm getting random values on the mic input line (i.e., printing the input buffers to the console gives values that do not change with ambient noise). Here's how I'm using my input callback:
Input Callback
static OSStatus inputCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Scope reference to GSFSensorIOController class
GSFSensorIOController *THIS = (__bridge GSFSensorIOController *) inRefCon;
// Set up buffer to hold input data
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
buffer.mData = malloc( inNumberFrames * 2 );
// Place buffer in an AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// Grab the samples and place them in the buffer list
AudioUnitRender(THIS.ioUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&bufferList);
// Process data
[THIS processIO:&bufferList];
// Free allocated buffer
free(bufferList.mBuffers[0].mData);
return noErr;
}
I've searched example projects as a reference and I can't see a difference in the overall implementation. Any help is greatly appreciated.
The Audio Unit's default setting may be interleaved stereo channel data rather than separate buffers for left and right.
The problem here looks to be that you're writing to unallocated memory. ioData->mBuffers[1] isn't valid for interleaved format. Both left and right channels are interleaved in ioData->mBuffers[0]. If you want non-interleaved data, mBytesPerFrame and mBytesPerPacket should be 2, not 4. That's probably why you're failing on AudioUnitInitialize for that format.
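For example, a sketch of the interleaved case (assuming you keep a 2-channel, 16-bit interleaved format on the output bus, so mBytesPerFrame is 4, and reusing the phase and phaseInc variables from the callback above):
SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
for (UInt32 curFrame = 0; curFrame < inNumberFrames; ++curFrame) {
SInt16 sample = (SInt16)((sin(phase) * 32767.0f) / 2);
out[2 * curFrame] = sample; // left channel
out[2 * curFrame + 1] = 0; // right channel: silence
phase += phaseInc;
}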
It's easier to deal with setting up these formats if you use the CAStreamBasicDescription utility class. See https://developer.apple.com/library/mac/samplecode/CoreAudioUtilityClasses/Introduction/Intro.html.
Setting up AudioStreamBasicDescription would then be as easy as:
CAStreamBasicDescription stereoStreamFormat(44100.0, 2, CAStreamBasicDescription::kPCMFormatInt16, false);

How do you save generated audio to a file in iOS?

I have successfully generated a tone using iOS with the following code. After that, I want to save the generated tone to an audio file. How can I do this?
- (void)createToneUnit
{
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
// Get the default playback output unit
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output");
// Create a new unit based on this that we'll use for output
OSErr err = AudioComponentInstanceNew(defaultOutput, &toneUnit);
NSAssert1(toneUnit, @"Error creating unit: %ld", err);
// Set our tone rendering function on the unit
AURenderCallbackStruct input;
input.inputProc = RenderTone;
input.inputProcRefCon = (__bridge void *)(self);
err = AudioUnitSetProperty(toneUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input));
NSAssert1(err == noErr, @"Error setting callback: %ld", err);
// Set the format to 32 bit, single channel, floating point, linear PCM
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = sampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags =
kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mBytesPerPacket = four_bytes_per_float;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = four_bytes_per_float;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
err = AudioUnitSetProperty (toneUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&streamFormat,
sizeof(AudioStreamBasicDescription));
NSAssert1(err == noErr, @"Error setting stream format: %ld", err);
}
The Render Code :
OSStatus RenderTone(
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
// Fixed amplitude is good enough for our purposes
const double amplitude = 0.25;
// Get the tone parameters out of the view controller
ViewController *viewController =
(__bridge ViewController *)inRefCon;
double theta = viewController->theta;
double theta_increment = 2.0 * M_PI * viewController->frequency / viewController->sampleRate;
// This is a mono tone generator so we only need the first buffer
const int channel = 0;
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
buffer[frame] = sin(theta) * amplitude;
theta += theta_increment;
if (theta > 2.0 * M_PI)
{
theta -= 2.0 * M_PI;
}
}
// Store the theta back in the view controller
viewController->theta = theta;
return noErr;
}
And to play the generated tone, I just do:
OSErr err = AudioUnitInitialize(toneUnit);
err = AudioOutputUnitStart(toneUnit);
The Extended Audio File API (ExtAudioFile) provides an easy way to write audio files to disk.
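A minimal sketch of doing that with ExtAudioFile (the file path, the samples pointer, and numFrames below are placeholders, not from the question; the ASBD is the same mono Float32 format used above):
#import <AudioToolbox/ExtendedAudioFile.h>
AudioStreamBasicDescription fileFormat = streamFormat; // same mono Float32 linear PCM as above
NSURL *url = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"tone.caf"]];
ExtAudioFileRef audioFile = NULL;
OSStatus status = ExtAudioFileCreateWithURL((__bridge CFURLRef)url,
kAudioFileCAFType,
&fileFormat,
NULL,
kAudioFileFlags_EraseFile,
&audioFile);
// Tell the file what format the samples you hand it are in (here: identical to the file format).
status = ExtAudioFileSetProperty(audioFile,
kExtAudioFileProperty_ClientDataFormat,
sizeof(fileFormat),
&fileFormat);
// Wrap a block of generated Float32 samples and append it to the file.
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = numFrames * sizeof(Float32);
bufferList.mBuffers[0].mData = samples; // Float32 buffer filled with the same math as RenderTone
status = ExtAudioFileWrite(audioFile, numFrames, &bufferList);
ExtAudioFileDispose(audioFile);
You can either run your RenderTone math in an offline loop to fill such a buffer, or call ExtAudioFileWrite from wherever you already have the rendered samples.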

How can I modify this AudioUnit code so that it has stereo output?

I can't seem to find what I'm looking for in the documentation. This code works great, but I want stereo output.
- (void)createToneUnit
{
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
// Get the default playback output unit
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output");
// Create a new unit based on this that we'll use for output
OSErr err = AudioComponentInstanceNew(defaultOutput, &_toneUnit);
NSAssert1(_toneUnit, @"Error creating unit: %d", err);
// Set our tone rendering function on the unit
AURenderCallbackStruct input;
input.inputProc = RenderTone;
input.inputProcRefCon = (__bridge void*)self;
err = AudioUnitSetProperty(_toneUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input));
NSAssert1(err == noErr, @"Error setting callback: %d", err);
// Set the format to 32 bit, single channel, floating point, linear PCM
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = kSampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags =
kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mBytesPerPacket = four_bytes_per_float;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = four_bytes_per_float;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
err = AudioUnitSetProperty (_toneUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&streamFormat,
sizeof(AudioStreamBasicDescription));
NSAssert1(err == noErr, @"Error setting stream format: %d", err);
}
And here is the callback:
OSStatus RenderTone( void* inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData){
// Get the tone parameters out of the view controller
VWWSynthesizerC *synth = (__bridge VWWSynthesizerC *)inRefCon;
double theta = synth.theta;
double theta_increment = 2.0 * M_PI * synth.frequency / kSampleRate;
// This is a mono tone generator so we only need the first buffer
const int channel = 0;
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
if(synth.muted){
buffer[frame] = 0;
}
else{
switch(synth.waveType){
case VWWWaveTypeSine:{
buffer[frame] = sin(theta) * synth.amplitude;
break;
}
case VWWWaveTypeSquare:{
buffer[frame] = square(theta) * synth.amplitude;
break;
}
case VWWWaveTypeSawtooth:{
buffer[frame] = sawtooth(theta) * synth.amplitude;
break;
}
case VWWWaveTypeTriangle:{
buffer[frame] = triangle(theta) * synth.amplitude;
break;
}
default:
break;
}
}
theta += theta_increment;
if (theta > 2.0 * M_PI)
{
theta -= 2.0 * M_PI;
}
}
synth.theta = theta;
return noErr;
}
If there is a different or better way to render this data, I'm open to suggestions. I'm rendering sine, square, triangle, sawtooth, etc... waves.

consuming audio data from circular buffer in a render callback attached to the input scope of a remoteio audio unit

The title pretty much sums up what I'm trying to achieve. I am trying to use Michael Tyson's TPCircularBuffer inside of a render callback while the circular buffer is getting filled with incoming audio data. I want to send the audio from the render callback to the output element of the RemoteIO audio unit so I can hear it through the device speakers.
The audio is interleaved stereo 16 bit coming in as packets of 2048 frames. Here's how I've set up my audio session:
#define kInputBus 1
#define kOutputBus 0
NSError *err = nil;
NSTimeInterval ioBufferDuration = 46;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionMixWithOthers error:&err];
[session setPreferredIOBufferDuration:ioBufferDuration error:&err];
[session setActive:YES error:&err];
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output.");
AudioComponentInstanceNew(defaultOutput, &remoteIOUnit);
UInt32 flag = 0;
OSStatus status = AudioUnitSetProperty(remoteIOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kOutputBus, &flag, sizeof(flag));
size_t bytesPerSample = sizeof(AudioUnitSampleType);
AudioStreamBasicDescription streamFormat = {0};
streamFormat.mSampleRate = 44100.00;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
streamFormat.mBytesPerPacket = bytesPerSample;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = bytesPerSample;
streamFormat.mChannelsPerFrame = 2;
streamFormat.mBitsPerChannel = bytesPerSample * 8;
streamFormat.mReserved = 0;
status = AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &streamFormat, sizeof(streamFormat));
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = render;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &callbackStruct, sizeof(callbackStruct));
And here's where the audio data gets loaded into the circular buffer and used in the render callback:
#define kBufferLength 2048
-(void)loadBytes:(Byte *)byteArrPtr{
TPCircularBufferProduceBytes(&buffer, byteArrPtr, kBufferLength);
}
OSStatus render(
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
AUDIOIO *audio = (__bridge AUDIOIO *)inRefCon;
AudioSampleType *outSample = (AudioSampleType *)ioData->mBuffers[0].mData;
//Zero outSample
memset(outSample, 0, kBufferLength);
int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
SInt16 *targetBuffer = (SInt16 *)ioData->mBuffers[0].mData;
//Pull audio
int32_t availableBytes;
SInt16 *buffer = TPCircularBufferTail(&audio->buffer, &availableBytes);
memcpy(targetBuffer, buffer, MIN(bytesToCopy, availableBytes));
TPCircularBufferConsume(&audio->buffer, MIN(bytesToCopy, availableBytes));
return noErr;
}
There is something wrong with this setup because I am not getting any audio through the speakers, but I'm also not getting any errors when I test on my device. As far as I can tell the TPCircularBuffer is being filled and read from correctly. I've followed the Apple documentation for setting up the audio session. I am considering trying to set up an AUGraph next but I want to see if anyone could suggest a solution for what I'm trying to do here. Thanks!
For stereo (2 channels per frame), your bytes per frame and bytes per packet have to be twice your sample size in bytes, and bits per channel has to match your actual sample size in bits.
Added: if availableBytes divided by your frame size isn't almost always at least as large as inNumberFrames, you won't get much continuous sound.
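For example, a guarded version of the copy in your render callback (a sketch, using the same audio->buffer TPCircularBuffer as in the question): copy only what is actually available and zero-fill the rest, so an underrun produces silence instead of leftover data.
int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
int32_t availableBytes;
void *src = TPCircularBufferTail(&audio->buffer, &availableBytes);
int bytesProvided = MIN(bytesToCopy, availableBytes);
memcpy(ioData->mBuffers[0].mData, src, bytesProvided);
if (bytesProvided < bytesToCopy) {
// Not enough buffered data this cycle; fill the remainder of the output with silence.
memset((char *)ioData->mBuffers[0].mData + bytesProvided, 0, bytesToCopy - bytesProvided);
}
TPCircularBufferConsume(&audio->buffer, bytesProvided);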
At a glance, it looks like you've got everything set up correctly. You're missing a call to AudioOutputUnitStart() though:
...
// returns an OSStatus indicating success / fail
AudioOutputUnitStart(remoteIOUnit);
// now your callback should be being called
...
I believe one of your problems is using streamFormat.mBitsPerChannel = bytesPerSample * 8;
You assign bytesPerSample to be sizeof(AudioUnitSampleType), which is 4 bytes.
So streamFormat.mBytesPerPacket = bytesPerSample; is ok.
But the assignment streamFormat.mBitsPerChannel = bytesPerSample * 8; is saying that you want 32 bits per sample instead of 16 bits per sample.
I would not base your audio format on AudioUnitSampleType, because it has nothing to do with the format you actually want to use. I would create defines and do something like this:
#define BITS_PER_CHANNEL 16
#define SAMPLE_RATE 44100.0
#define CHANNELS_PER_FRAME 2
#define BYTES_PER_FRAME CHANNELS_PER_FRAME * (BITS_PER_CHANNEL / 8) //ie 4
#define FRAMES_PER_PACKET 1
#define BYTES_PER_PACKET FRAMES_PER_PACKET * BYTES_PER_FRAME
streamFormat.mSampleRate = SAMPLE_RATE; // 44100.0
streamFormat.mBitsPerChannel = BITS_PER_CHANNEL; //16
streamFormat.mChannelsPerFrame = CHANNELS_PER_FRAME; // 2
streamFormat.mFramesPerPacket = FRAMES_PER_PACKET; //1
streamFormat.mBytesPerFrame = BYTES_PER_FRAME; // 4 total, 2 for left ch, 2 for right ch
streamFormat.mBytesPerPacket = BYTES_PER_PACKET;
streamFormat.mReserved = 0;
streamFormat.mFormatID = kAudioFormatLinearPCM; // double check this also
streamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
You also need to check the return values stored in err and status immediately after each call. You should still add error checking to some of the calls as well, such as:
checkMyReturnValueToo = AudioComponentInstanceNew(defaultOutput, &remoteIOUnit);
You also have an extremely high value for your buffer duration. You have 46, and I am not sure where that came from; that means you are asking for 46 seconds' worth of audio in each audio callback. Usually you want something under one second, depending on your latency requirements. Most likely iOS will not honor anything that high, but you should try setting it to, say, 0.025 (25 ms). You can lower it further if you need lower latency.
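For instance, using the same AVAudioSession call the question already makes:
NSTimeInterval ioBufferDuration = 0.025; // 25 ms, not 46 seconds
[session setPreferredIOBufferDuration:ioBufferDuration error:&err];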
