Click During Inter-App Audio Recording - iOS

I have been attempting to record my input during an inter-app audio session on iOS 9. The speaker output sounds fine, but the recorded file has a rhythmic clicking sound.
The waveform looks like below...
I have tweaked every setting and parameter I can think of and nothing seems to work.
Here are the format settings (stream settings are identical)...
AudioStreamBasicDescription fileFormat;
fileFormat.mSampleRate = kSessionSampleRate;
fileFormat.mFormatID = kAudioFormatLinearPCM;
fileFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked;
fileFormat.mFramesPerPacket = 1;
fileFormat.mChannelsPerFrame = 1;
fileFormat.mBitsPerChannel = 32; // tone is correct but there are still pops
fileFormat.mBytesPerPacket = sizeof(Float32);
fileFormat.mBytesPerFrame = sizeof(Float32);
Here are the stream settings...
// connect instrument to output
AudioComponentDescription componentDescription = unit.componentDescription;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &componentDescription);
OSStatus status = AudioComponentInstanceNew(inputComponent, &_instrumentUnit);
NSLog(@"%d", status);

AudioUnitElement instrumentOutputBus = 0;
AudioUnitElement ioUnitInputElement = 0;

// connect instrument unit to remoteIO output's input bus
AudioUnitConnection connection;
connection.sourceAudioUnit = _instrumentUnit;
connection.sourceOutputNumber = instrumentOutputBus;
connection.destInputNumber = ioUnitInputElement;

status = AudioUnitSetProperty(_ioUnit,
                              kAudioUnitProperty_MakeConnection,
                              kAudioUnitScope_Output,
                              ioUnitInputElement,
                              &connection,
                              sizeof(connection));
NSLog(@"%d", status);

UInt32 maxFrames = 1024; // I tried setting this to 4096 but it did not help
status = AudioUnitSetProperty(_instrumentUnit,
                              kAudioUnitProperty_MaximumFramesPerSlice,
                              kAudioUnitScope_Output,
                              0,
                              &maxFrames,
                              sizeof(maxFrames));
NSLog(@"%d", status);

_connectedInstrument = YES;
_instrumentIconImageView.image = unit.icon;
NSLog(@"Remote Instrument connected");

status = AudioUnitInitialize(_ioUnit);
NSLog(@"%d", status);
status = AudioOutputUnitStart(_ioUnit);
NSLog(@"%d", status);
status = AudioUnitInitialize(_instrumentUnit);
NSLog(@"%d", status);

[self setupFile];
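For reference, here is a minimal sketch of what a matching setupFile could look like; the file URL, the CAF container choice, and the assumption that fileFormat and fileRef are reachable as instance variables are mine, not the original code:
- (void)setupFile
{
    // Hypothetical destination path; use whatever URL your app actually records to.
    NSURL *url = [NSURL fileURLWithPath:
                  [NSTemporaryDirectory() stringByAppendingPathComponent:@"recording.caf"]];

    // Create a CAF file whose data format matches the fileFormat above.
    OSStatus status = ExtAudioFileCreateWithURL((__bridge CFURLRef)url,
                                                kAudioFileCAFType,
                                                &fileFormat,
                                                NULL,
                                                kAudioFileFlags_EraseFile,
                                                &fileRef);
    NSLog(@"%d", status);

    // The client format (what the render callback hands us) is the same Float32 mono stream.
    status = ExtAudioFileSetProperty(fileRef,
                                     kExtAudioFileProperty_ClientDataFormat,
                                     sizeof(fileFormat),
                                     &fileFormat);
    NSLog(@"%d", status);

    // Prime the async writer before the first write from the render thread.
    ExtAudioFileWriteAsync(fileRef, 0, NULL);

    // Record post-render output from the I/O unit.
    AudioUnitAddRenderNotify(_ioUnit, recordingCallback, (__bridge void *)self);
}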
Here is my callback...
static OSStatus recordingCallback(void                       *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp       *inTimeStamp,
                                  UInt32                      inBusNumber,
                                  UInt32                      inNumberFrames,
                                  AudioBufferList            *ioData)
{
    ViewController *This = (__bridge ViewController *)inRefCon;
    if (inBusNumber == 0 && !(*ioActionFlags & kAudioUnitRenderAction_PostRenderError))
    {
        ExtAudioFileWriteAsync(This->fileRef, inNumberFrames, ioData);
    }
    return noErr;
}
Full view controller code here
Thanks for your help.

You are writing to the file both pre-render and post-render. In your render callback, change your if statement to write only on post-render.
if (inBusNumber == 0 && *ioActionFlags == kAudioUnitRenderAction_PostRender) {
    ExtAudioFileWriteAsync(This->fileRef, inNumberFrames, ioData);
}
ExtAudioFileWriteAsync does some internal copying and buffering so it's fine to use in the render callback as long as you prime it before the first write.
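For example, priming is just a zero-frame write after the file is created (a sketch, assuming the same fileRef the callback writes to):
// Call once, off the render thread (e.g. right after creating the file):
// a zero-frame write lets ExtAudioFileWriteAsync allocate its internal buffers up front.
ExtAudioFileWriteAsync(fileRef, 0, NULL);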

Most likely you'll have to check for both:
- the post-render action flag
- the post-render error flag
The critical part of your callback will probably have to look something like this:
if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
    static const int TEMP_kAudioUnitRenderAction_PostRenderError = (1 << 8);
    if (!(*ioActionFlags & TEMP_kAudioUnitRenderAction_PostRenderError))
    {
        ExtAudioFileWriteAsync(This->fileRef, inNumberFrames, ioData);
        // whichever additional code is needed
        // { … }
    }
}
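(For what it's worth, the TEMP constant above matches kAudioUnitRenderAction_PostRenderError, which is declared as (1 << 8) in the AudioUnit headers, so on SDKs that declare it you can use the real flag directly.)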

Related

AudioKit, how to play back a modified buffer in a tap?

I use AudioKit (in Objective-C) for realtime audio processing. I feed a C++ algorithm through a tap (or lazy tap) where the buffer is modified.
I thought it would be obvious, but how can I play back the modified buffer to the output? Are taps only for analysis?
[self->microphoneGain.avAudioNode installTapOnBus:0
                                        bufferSize:1024
                                            format:format
                                             block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    if (buffer.frameLength == 0) {
        return;
    }
    // Process data -> returns modified buffer
    processData(buffer.floatChannelData[0], buffer.floatChannelData[1], buffer.frameLength);
    // -> How to play back this buffer?
}];
Furthermore, I can't get the tap's buffer size below 4800 samples. What would be my best option for lower latency? I have read about AUAudioUnit subclassing, render callbacks, and realtime mode for AudioEngine, but I'm quite lost when trying to implement any of these with AudioKit. Thanks!
EDIT:
I managed to set up a render callback, which has apparently solved both of my problems.
AURenderCallbackStruct processingCallback;
processingCallback.inputProc = processingCallbackProc;
processingCallback.inputProcRefCon = (__bridge void *)(self);
OSStatus status = AudioUnitSetProperty(AudioKit.engine.outputNode.audioUnit,
                                       kAudioUnitProperty_SetRenderCallback,
                                       kAudioUnitScope_Input,
                                       0,
                                       &processingCallback,
                                       sizeof(processingCallback));
if (status != noErr) {
    return false;
}
OSStatus processingCallbackProc(void                       *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp       *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList            *ioData)
{
    __unsafe_unretained MyClass *self = (__bridge MyClass *)inRefCon;
    printf("%u, ", (unsigned int)inNumberFrames); // -> low latency!

    if (!ioData) ioData = self->audioBufferList;
    OSStatus status = AudioUnitRender(AudioKit.engine.outputNode.audioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      1,
                                      inNumberFrames,
                                      ioData);
    if (status != noErr) { return status; }

    // Get buffers
    unsigned int inputChannels = 2;
    float *buffer[inputChannels];
    for (int i = 0; i < inputChannels; i++) {
        buffer[i] = (float *)ioData->mBuffers[i].mData;
    }

    // Process data
    processData(buffer[0], buffer[1], inNumberFrames);
    return noErr;
}
Now I can easily get buffers as small as 256 samples (probably even less, but that's not needed in my case), and when buffer[n] is modified, the modified buffers are what get output.
Everything seems to be fine; I just hope this is the right approach.
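For anyone chasing the same latency issue: the hardware I/O buffer duration also limits how small inNumberFrames can get. A hedged sketch using AVAudioSession (256 frames at 44.1 kHz is roughly 5.8 ms; the system treats this as a request and may round it):
#import <AVFoundation/AVFoundation.h>

// Ask for roughly 256-frame hardware buffers at 44.1 kHz.
NSError *error = nil;
[[AVAudioSession sharedInstance] setPreferredIOBufferDuration:(256.0 / 44100.0)
                                                        error:&error];
if (error) {
    NSLog(@"Could not set preferred IO buffer duration: %@", error);
}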

How to get the volume of an AudioUnit

I am using AudioUnit to play input from the microphone to the earphones.
It's working great. Now I need to increase the volume of weak sounds and decrease strong ones.
I found a way to increase the volume:
static OSStatus performRender (void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    OSStatus err = noErr;
    if (*cd.audioChainIsBeingReconstructed == NO)
    {
        // we are calling AudioUnitRender on the input bus of AURemoteIO
        // this will store the audio data captured by the microphone in ioData
        err = AudioUnitRender(cd.rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);

        // filter out the DC component of the signal
        cd.dcRejectionFilter->ProcessInplace((Float32*) ioData->mBuffers[0].mData, inNumberFrames);

        // Add Volume
        float desiredGain = 2.0f;
        for (UInt32 bufferIndex = 0; bufferIndex < ioData->mNumberBuffers; ++bufferIndex) {
            float *rawBuffer = (float *)ioData->mBuffers[bufferIndex].mData;
            vDSP_vsmul(rawBuffer, 1, &desiredGain, rawBuffer, 1, inNumberFrames);
        }

        // mute audio if needed
        if (*cd.muteAudio)
        {
            for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
                memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
    }
    return err;
}
My question is: how do I get the current volume, so I know how much gain to apply (and vice versa)?
Thanks!
Getting the "volume" depends on the type of AudioUnit. Some audio units have input levels, output levels, and "global" volume levels.
// MatrixMixer
Float32 volume = 0;
OSStatus result = AudioUnitGetParameter(mxmx_unit, kMatrixMixerParam_Volume, kAudioUnitScope_Global, 0, &volume);
// MultiChannelMixer
Float32 volume = 0;
OSStatus result = AudioUnitGetParameter(mcmx_unit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Global, 0, &volume);
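If by "volume" you mean the level of the signal itself (so you know how much gain to apply), the mixer parameters above won't tell you that; one common approach is to measure the rendered buffer in the callback, for example with vDSP_rmsqv from Accelerate (already linked, since the question uses vDSP_vsmul). A sketch that would sit inside performRender, with an arbitrary -12 dB target of my own choosing:
// Measure the RMS level of the mono Float32 buffer and convert it to dBFS.
float *samples = (float *)ioData->mBuffers[0].mData;
float rms = 0.0f;
vDSP_rmsqv(samples, 1, &rms, inNumberFrames);
float levelDB = 20.0f * log10f(rms + 1e-12f);   // epsilon avoids log10(0)

// Derive a gain that nudges the signal toward the target level.
float targetDB = -12.0f;                        // assumed target, tune to taste
float desiredGain = powf(10.0f, (targetDB - levelDB) / 20.0f);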

Audio Unit file recording with aurioTouch - AudioStreamBasicDescription configuration issue?

I've started down the path of learning Audio Units with aurioTouch. After a few days of learning, I'm still feeling a bit lost and I think I'm missing something very obvious.
Full source can be viewed at: http://pastebin.com/LXLYDEhy
The relevant parts are also listed below.
In my performRender callback, I've changed the code to
static OSStatus performRender (void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData) {
    OSStatus err = noErr;
    AudioController *audioController = (__bridge AudioController *)inRefCon;

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mData = NULL;

    OSStatus status;
    status = AudioUnitRender(cd.rioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList); // bufferList.mBuffers[0].mData is null
    status = ExtAudioFileWriteAsync(audioController.extAudioFileRef, bufferList.mNumberBuffers, &bufferList);
    return err;
}
The audio units are setup like this
- (AudioStreamBasicDescription)getAudioDescription {
AudioStreamBasicDescription audioDescription = {0};
audioDescription.mFormatID = kAudioFormatLinearPCM;
audioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
audioDescription.mChannelsPerFrame = 1;
audioDescription.mBytesPerPacket = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
audioDescription.mFramesPerPacket = 1;
audioDescription.mBytesPerFrame = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
audioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
audioDescription.mSampleRate = 44100.0;
return audioDescription;
}
- (void)setupIOUnit
{
try {
// Create a new instance of AURemoteIO
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
XThrowIfError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of AURemoteIO");
// Enable input and output on AURemoteIO
// Input is enabled on the input scope of the input element
// Output is enabled on the output scope of the output element
UInt32 one = 1;
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on AURemoteIO");
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on AURemoteIO");
// Explicitly set the input and output client formats
// sample rate = 44100, num channels = 1, format = 32 bit floating point
CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 1, CAStreamBasicDescription::kPCMFormatFloat32, false);
// AudioStreamBasicDescription audioFormat = [self getAudioDescription];
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat)), "couldn't set the input client format on AURemoteIO");
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat)), "couldn't set the output client format on AURemoteIO");
// Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number
// of samples it will be asked to produce on any single given call to AudioUnitRender
UInt32 maxFramesPerSlice = 4096;
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on AURemoteIO");
// Get the property value back from AURemoteIO. We are going to use this value to allocate buffers accordingly
UInt32 propSize = sizeof(UInt32);
XThrowIfError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on AURemoteIO");
_bufferManager = new BufferManager(maxFramesPerSlice);
_dcRejectionFilter = new DCRejectionFilter;
// We need references to certain data in the render callback
// This simple struct is used to hold that information
cd.rioUnit = _rioUnit;
cd.bufferManager = _bufferManager;
cd.dcRejectionFilter = _dcRejectionFilter;
cd.muteAudio = &_muteAudio;
cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed;
AURenderCallbackStruct renderCallback;
renderCallback.inputProc = performRender;
renderCallback.inputProcRefCon = self;
XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &renderCallback, sizeof(renderCallback)), "couldn't set render callback on AURemoteIO");
// Initialize the AURemoteIO instance
XThrowIfError(AudioUnitInitialize(_rioUnit), "couldn't initialize AURemoteIO instance");
}
catch (CAXException &e) {
    NSLog(@"Error returned from setupIOUnit: %d: %s", (int)e.mError, e.mOperation);
}
catch (...) {
    NSLog(@"Unknown error returned from setupIOUnit");
}
return;
}
Full source can be viewed at: http://pastebin.com/LXLYDEhy
Your code generally looks good at a glance, but there's at least one significant issue: instead of allocating space for the data to be copied into the buffers, you are explicitly setting them to NULL. Instead, you should allocate space and then fill it in with AudioUnitRender.
Example code:
AudioBufferList *bufferList;
bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = 1024 * 4;
bufferList->mBuffers[0].mData = calloc(1024, 4);
(Note that you may need to adjust the allocation sizes to fit your stream type, size, etc. -- the above is just example code, but it addresses your main issue.)
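With that allocation in place, a hedged version of the render-and-write part of the callback might look like this (variable names follow the question's code; note the second argument to ExtAudioFileWriteAsync is the frame count, and the malloc'd buffer should be freed when the audio chain is torn down):
// Render the microphone input into our own buffer, then hand it to the writer.
// Assumes inNumberFrames does not exceed the 1024 frames allocated above.
bufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(Float32);
OSStatus status = AudioUnitRender(cd.rioUnit,
                                  ioActionFlags,
                                  inTimeStamp,
                                  1,                // input element of AURemoteIO
                                  inNumberFrames,
                                  bufferList);
if (status == noErr) {
    status = ExtAudioFileWriteAsync(audioController.extAudioFileRef,
                                    inNumberFrames,  // frames, not mNumberBuffers
                                    bufferList);
}

// ...later, when tearing down the audio chain:
free(bufferList->mBuffers[0].mData);
free(bufferList);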

Realtime audio processing without output

I'm looking on this example http://teragonaudio.com/article/How-to-do-realtime-recording-with-effect-processing-on-iOS.html
and I want to turn off the output. I tried changing kAudioSessionCategory_PlayAndRecord to kAudioSessionCategory_RecordAudio, but that did not work. I also tried getting rid of:
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &streamDescription, sizeof(streamDescription)) != noErr) {
return 1;
}
because I want to get sound from the microphone without playing it back. But no matter what I do, when my sound gets to the renderCallback method there is a -50 error. When the audio is automatically played on the output, everything works fine...
Update with code:
using namespace std;
AudioUnit *audioUnit = NULL;
float *convertedSampleBuffer = NULL;
int initAudioSession() {
audioUnit = (AudioUnit*)malloc(sizeof(AudioUnit));
if(AudioSessionInitialize(NULL, NULL, NULL, NULL) != noErr) {
return 1;
}
if(AudioSessionSetActive(true) != noErr) {
return 1;
}
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
if(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(UInt32), &sessionCategory) != noErr) {
return 1;
}
Float32 bufferSizeInSec = 0.02f;
if(AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
sizeof(Float32), &bufferSizeInSec) != noErr) {
return 1;
}
UInt32 overrideCategory = 1;
if(AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
sizeof(UInt32), &overrideCategory) != noErr) {
return 1;
}
// There are many properties you might want to provide callback functions for:
// kAudioSessionProperty_AudioRouteChange
// kAudioSessionProperty_OverrideCategoryEnableBluetoothInput
// etc.
return 0;
}
OSStatus renderCallback(void *userData, AudioUnitRenderActionFlags *actionFlags,
const AudioTimeStamp *audioTimeStamp, UInt32 busNumber,
UInt32 numFrames, AudioBufferList *buffers) {
OSStatus status = AudioUnitRender(*audioUnit, actionFlags, audioTimeStamp,
1, numFrames, buffers);
int doOutput = 0;
if(status != noErr) {
return status;
}
if(convertedSampleBuffer == NULL) {
// Lazy initialization of this buffer is necessary because we don't
// know the frame count until the first callback
convertedSampleBuffer = (float*)malloc(sizeof(float) * numFrames);
baseTime = (float)QRealTimer::getUptimeInMilliseconds();
}
SInt16 *inputFrames = (SInt16*)(buffers->mBuffers->mData);
// If your DSP code can use integers, then don't bother converting to
// floats here, as it just wastes CPU. However, most DSP algorithms rely
// on floating point, and this is especially true if you are porting a
// VST/AU to iOS.
int i;
for( i = numFrames; i < fftlength; i++ ) // Shifting buffer
x_inbuf[i - numFrames] = x_inbuf[i];
for( i = 0; i < numFrames; i++) {
x_inbuf[i + x_phase] = (float)inputFrames[i] / (float)32768;
}
if( x_phase + numFrames == fftlength )
{
x_alignment.SigProc_frontend(x_inbuf); // Signal processing front-end (FFT!)
doOutput = x_alignment.Align();
/// Output as text! In the real-time version,
// this is where we update visualisation callbacks and launch other services
if ((doOutput) & (x_netscore.isEvent(x_alignment.Position()))
&(x_alignment.lastAction()<x_alignment.Position()) )
{
// here i want to do something with my input!
}
}
else
x_phase += numFrames;
return noErr;
}
int initAudioStreams(AudioUnit *audioUnit) {
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
if(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(UInt32), &audioCategory) != noErr) {
return 1;
}
UInt32 overrideCategory = 1;
if(AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
sizeof(UInt32), &overrideCategory) != noErr) {
// Less serious error, but you may want to handle it and bail here
}
AudioComponentDescription componentDescription;
componentDescription.componentType = kAudioUnitType_Output;
componentDescription.componentSubType = kAudioUnitSubType_RemoteIO;
componentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
componentDescription.componentFlags = 0;
componentDescription.componentFlagsMask = 0;
AudioComponent component = AudioComponentFindNext(NULL, &componentDescription);
if(AudioComponentInstanceNew(component, audioUnit) != noErr) {
return 1;
}
UInt32 enable = 1;
if(AudioUnitSetProperty(*audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input, 1, &enable, sizeof(UInt32)) != noErr) {
return 1;
}
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = renderCallback; // Render function
callbackStruct.inputProcRefCon = NULL;
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input, 0, &callbackStruct,
sizeof(AURenderCallbackStruct)) != noErr) {
return 1;
}
AudioStreamBasicDescription streamDescription;
// You might want to replace this with a different value, but keep in mind that the
// iPhone does not support all sample rates. 8kHz, 22kHz, and 44.1kHz should all work.
streamDescription.mSampleRate = 44100;
// Yes, I know you probably want floating point samples, but the iPhone isn't going
// to give you floating point data. You'll need to make the conversion by hand from
// linear PCM <-> float.
streamDescription.mFormatID = kAudioFormatLinearPCM;
// This part is important!
streamDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked;
streamDescription.mBitsPerChannel = 16;
// 1 sample per frame, will always be 2 as long as 16-bit samples are being used
streamDescription.mBytesPerFrame = 2;
streamDescription.mChannelsPerFrame = 1;
streamDescription.mBytesPerPacket = streamDescription.mBytesPerFrame *
streamDescription.mChannelsPerFrame;
// Always should be set to 1
streamDescription.mFramesPerPacket = 1;
// Always set to 0, just to be sure
streamDescription.mReserved = 0;
// Set up input stream with above properties
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription)) != noErr) {
return 1;
}
// Ditto for the output stream, which we will be sending the processed audio to
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &streamDescription, sizeof(streamDescription)) != noErr) {
return 1;
}
return 0;
}
int startAudioUnit(AudioUnit *audioUnit) {
if(AudioUnitInitialize(*audioUnit) != noErr) {
return 1;
}
if(AudioOutputUnitStart(*audioUnit) != noErr) {
return 1;
}
return 0;
}
And calling from my VC:
initAudioSession();
initAudioStreams( audioUnit);
startAudioUnit( audioUnit);
If you want recording only, with no playback, simply comment out the code that sets the render callback:
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = renderCallback; // Render function
callbackStruct.inputProcRefCon = NULL;
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input, 0, &callbackStruct,
sizeof(AURenderCallbackStruct)) != noErr) {
return 1;
}
Update after seeing code:
As I suspected, you're missing the input callback. Add these lines:
// at top:
#define kInputBus 1
AURenderCallbackStruct callbackStruct;
/**/
callbackStruct.inputProc = &ALAudioUnit::recordingCallback;
callbackStruct.inputProcRefCon = this;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
Now in your recordingCallback:
OSStatus ALAudioUnit::recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
// TODO: Use inRefCon to access our interface object to do stuff
// Then, use inNumberFrames to figure out how much data is available, and make
// that much space available in buffers in an AudioBufferList.
// Then:
// Obtain recorded samples
OSStatus status;
ALAudioUnit *pThis = reinterpret_cast<ALAudioUnit*>(inRefCon);
if (!pThis)
return noErr;
//assert (pThis->m_nMaxSliceFrames >= inNumberFrames);
pThis->recorderBufferList->GetBufferList().mBuffers[0].mDataByteSize = inNumberFrames * pThis->m_recorderSBD.mBytesPerFrame;
status = AudioUnitRender(pThis->audioUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&pThis->recorderBufferList->GetBufferList());
THROW_EXCEPTION_IF_ERROR(status, "error rendering audio unit");
// If we're not playing, I don't care about the data, simply discard it
if (!pThis->playbackState || pThis->isSeeking) return noErr;
// Now, we have the samples we just read sitting in buffers in bufferList
pThis->DoStuffWithTheRecordedAudio(inNumberFrames, pThis->recorderBufferList, inTimeStamp);
return noErr;
}
Btw, I'm allocating my own buffer instead of using the one provided by the AudioUnit. You might want to change those parts if you want to use the AudioUnit-allocated buffer.
Update:
How to allocate own buffer:
recorderBufferList = new AUBufferList();
recorderBufferList->Allocate(m_recorderSBD, m_nMaxSliceFrames);
recorderBufferList->PrepareBuffer(m_recorderSBD, m_nMaxSliceFrames);
Also, if you're doing this, tell AudioUnit to not allocate buffers:
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
You'll need to include the Core Audio utility classes.
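(They come from Apple's "Core Audio Utility Classes" / PublicUtility sample sources; the header names below are my assumption of where the types used above live, so adjust to wherever your copy puts them.)
// Assumed headers from Apple's Core Audio PublicUtility sources
// (add the matching .cpp files to the target as well):
#include "CAStreamBasicDescription.h"   // CAStreamBasicDescription
#include "AUBuffer.h"                   // AUBufferList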
Thanks for @Mar0ux's answer. Whoever got here looking for complete sample code doing this can take a look here:
https://code.google.com/p/ios-coreaudio-example/
I am working on a similar app with the same code, and I found that you can disable playback by changing the category kAudioSessionCategory_PlayAndRecord to kAudioSessionCategory_RecordAudio:
int initAudioStreams(AudioUnit *audioUnit) {
UInt32 audioCategory = kAudioSessionCategory_RecordAudio;
if(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(UInt32), &audioCategory) != noErr) {
return 1;
}
This stopped the feedback between mic and speaker on my hardware.

How to write output of AUGraph to a file?

I am trying to write (what should be) a simple app that has a bunch of audio units in sequence in an AUGraph and then writes the output to a file. I added a callback using AUGraphAddRenderNotify. Here is my callback function:
OSStatus MyAURenderCallback(void                       *inRefCon,
                            AudioUnitRenderActionFlags *actionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList            *ioData) {
    if (*actionFlags & kAudioUnitRenderAction_PostRender) {
        ExtAudioFileRef outputFile = (ExtAudioFileRef)inRefCon;
        ExtAudioFileWriteAsync(outputFile, inNumberFrames, ioData);
    }
    return noErr;
}
This sort of works. The file is playable and I can hear what I recorded, but there is a horrible amount of static that makes it barely audible.
Does anybody know what is wrong with this? Or does anyone know of a better way to record the AUGraph output to a file?
Thanks for the help.
I just had an epiphany with regard to Audio Units which helped me solve my own problem. I had a misconception about how audio unit connections and render callbacks work. I thought they were completely separate things, but it turns out that a connection is just shorthand for a render callback.
Doing a kAudioUnitProperty_MakeConnection from the output of audio unit A to the input of audio unit B is the same as doing kAudioUnitProperty_SetRenderCallback on the input of unit B and having the callback function call AudioUnitRender on the output of audio unit A.
I tested this by doing a make connection after setting my render callback and the render callback was no longer invoked.
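For reference, the connection form being described would look roughly like this (a sketch using the same mixerUnit/ioUnit names as below; making this connection replaces any render callback on that input):
// Equivalent wiring via MakeConnection: mixer output element 0 feeds ioUnit input element 0.
AudioUnitConnection connection;
connection.sourceAudioUnit    = mixerUnit;
connection.sourceOutputNumber = 0;
connection.destInputNumber    = 0;

AudioUnitSetProperty(ioUnit,
                     kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input,
                     0,                    // destination input element
                     &connection,
                     sizeof(connection));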
Therefore, I was able to solve my problem by doing the following:
AURenderCallbackStruct callbackStruct = {0};
callbackStruct.inputProc = MyAURenderCallback;
callbackStruct.inputProcRefCon = mixerUnit;
AudioUnitSetProperty(ioUnit,
                     kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input,
                     0,
                     &callbackStruct,
                     sizeof(callbackStruct));
And then my callback function did something like this:
OSStatus MyAURenderCallback(void                       *inRefCon,
                            AudioUnitRenderActionFlags *actionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList            *ioData) {
    AudioUnit mixerUnit = (AudioUnit)inRefCon;

    AudioUnitRender(mixerUnit,
                    actionFlags,
                    inTimeStamp,
                    0,
                    inNumberFrames,
                    ioData);

    ExtAudioFileWriteAsync(outputFile,
                           inNumberFrames,
                           ioData);

    return noErr;
}
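(outputFile here is assumed to be an ExtAudioFileRef the callback can reach, e.g. a global or an ivar accessed through a context struct, since inRefCon is already used for the mixer unit.)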
This probably should have been obvious to me, but since it wasn't, I'll bet there are others who were confused in the same way, so hopefully this is helpful to them too.
I'm still not sure why I had trouble with the AUGraphAddRenderNotify callback. I will dig deeper into this later, but for now I found a solution that seems to work.
Here is some sample code from Apple (the project is PlaySequence, but it isn't MIDI specific) that might help:
{
CAStreamBasicDescription clientFormat = CAStreamBasicDescription();
ca_require_noerr (result = AudioUnitGetProperty(outputUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 0,
&clientFormat, &size), fail);
size = sizeof(clientFormat);
ca_require_noerr (result = ExtAudioFileSetProperty(outfile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat), fail);
{
MusicTimeStamp currentTime;
AUOutputBL outputBuffer (clientFormat, numFrames);
AudioTimeStamp tStamp;
memset (&tStamp, 0, sizeof(AudioTimeStamp));
tStamp.mFlags = kAudioTimeStampSampleTimeValid;
int i = 0;
int numTimesFor10Secs = (int)(10. / (numFrames / srate));
do {
outputBuffer.Prepare();
AudioUnitRenderActionFlags actionFlags = 0;
ca_require_noerr (result = AudioUnitRender (outputUnit, &actionFlags, &tStamp, 0, numFrames, outputBuffer.ABL()), fail);
tStamp.mSampleTime += numFrames;
ca_require_noerr (result = ExtAudioFileWrite(outfile, numFrames, outputBuffer.ABL()), fail);
ca_require_noerr (result = MusicPlayerGetTime (player, &currentTime), fail);
if (shouldPrint && (++i % numTimesFor10Secs == 0))
printf ("current time: %6.2f beats\n", currentTime);
} while (currentTime < sequenceLength);
}
}
Maybe try this. Copy the data from the audio unit callback to a long buffer. Play the buffer to test it, then write the entire buffer to a file after you have verified that the whole buffer is OK.
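A minimal sketch of that idea, assuming Float32 mono data and a recording short enough to fit in memory (the buffer size and the outputFile name are illustrative):
// File-scope capture state; sizes are illustrative (~60 s of mono Float32 at 44.1 kHz).
static const UInt32 kMaxCaptureFrames = 44100 * 60;
static Float32 *captureBuffer = NULL;
static UInt32   captureFrames = 0;

// Setup, before starting the graph: allocate the capture buffer once.
captureBuffer = (Float32 *)malloc(kMaxCaptureFrames * sizeof(Float32));

// In the post-render callback: just append the rendered samples (no file I/O here).
Float32 *src = (Float32 *)ioData->mBuffers[0].mData;
UInt32 framesToCopy = (inNumberFrames <= kMaxCaptureFrames - captureFrames)
                      ? inNumberFrames
                      : (kMaxCaptureFrames - captureFrames);
memcpy(captureBuffer + captureFrames, src, framesToCopy * sizeof(Float32));
captureFrames += framesToCopy;

// Later, off the render thread: wrap the buffer and write the whole thing at once.
AudioBufferList abl;
abl.mNumberBuffers              = 1;
abl.mBuffers[0].mNumberChannels = 1;
abl.mBuffers[0].mDataByteSize   = captureFrames * sizeof(Float32);
abl.mBuffers[0].mData           = captureBuffer;
ExtAudioFileWrite(outputFile, captureFrames, &abl);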
