How can I mute microphone input volume using AudioUnits?

I'm using AudioUnits to record and play sound. It's part of a softphone.
This is my initialisation:
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 8000;
audioFormat.mFormatID = kAudioFormatULaw;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &audioFormat, sizeof(audioFormat));
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
During the recording process I'm using a callback to process the sound:
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
Now at some point I would like to mute the microphone. After googling, I found this as a solution:
-(void) setMuteOn {
AudioUnitParameterValue volume = 0.0;
AudioUnitSetProperty(audioUnit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, 1, &volume, 0);
}
But it doesn't work. Perhaps I need to do some kind of refresh on my audioUnit, I don't know. Any help would be great.

Actually it was easier than I thought. In the callback method I just overwrote the sound buffers with silence. In my case I was using µLaw compression, so I just filled my buffer with 0xFF bytes.
The microphone was still recording, but I stopped using the data.
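For illustration, a minimal sketch of that approach, assuming the callback has just rendered the mic data into bufferList and the object passed through inRefCon keeps a muted flag (both names are assumptions about the surrounding code):
if (THIS->muted) {
    // 0xFF encodes zero amplitude in µLaw, so this overwrites the
    // captured audio with silence without stopping the microphone
    for (UInt32 i = 0; i < bufferList.mNumberBuffers; i++) {
        memset(bufferList.mBuffers[i].mData, 0xFF, bufferList.mBuffers[i].mDataByteSize);
    }
}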

You could do the following which I think is a little cleaner.
-(BOOL)microphoneInput:(BOOL)enable
{
UInt32 enableInput = enable ? 1 : 0;
OSStatus status = AudioUnitSetProperty(
ioUnit, //our I/O unit
kAudioOutputUnitProperty_EnableIO, //property we are changing
kAudioUnitScope_Input,
kInputBus, //#define kInputBus 1
&enableInput,
sizeof (enableInput)
);
CheckStatus(status, @"Unable to enable/disable input");
return (status == noErr);
}
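One caveat: kAudioOutputUnitProperty_EnableIO can only be set while the unit is uninitialized, so toggling it at runtime means cycling the unit around the call above, roughly like this:
AudioOutputUnitStop(ioUnit);
AudioUnitUninitialize(ioUnit);
[self microphoneInput:NO]; // flip EnableIO while the unit is uninitialized
AudioUnitInitialize(ioUnit);
AudioOutputUnitStart(ioUnit);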

Related

iOS. Record at 96kHz with USB microphone

I am trying to record at full 96kHz with my RØDE iXY USB microphone.
Recording goes without error, and when I launch the app with the mic connected I see that AVAudioSession is successfully running at a 96 kHz sample rate.
But if I look at the spectrum it is clear that there is nothing but resampling noise above 20 kHz (spectrum screenshot omitted).
For comparison, there was a second screenshot showing the spectrum of the same recording made with the app bundled with the USB mic (RØDE Rec).
Is there anything else I must do to record at native 96kHz?
Or maybe the RØDE Rec app communicates with the mic with some proprietary protocol over USB and I'm out of luck here?
I included the source code that I use:
static AudioStreamBasicDescription AudioDescription24BitStereo96000 = (AudioStreamBasicDescription) {
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger,
.mChannelsPerFrame = 2,
.mBytesPerPacket = 6,
.mFramesPerPacket = 1,
.mBytesPerFrame = 6,
.mBitsPerChannel = 24,
.mSampleRate = 96000.0
};
- (void)setupAudioSession
{
NSError *error = nil; // 'error' was not declared in the original snippet
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryRecord error:&error];
[session setActive:YES error:&error];
[session setPreferredSampleRate:96000.0 error:&error];
//I got my 96000 Hz with the USB mic plugged in!
NSLog(@"sampleRate = %lf", session.sampleRate);
}
- (void)startRecording
{
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
AudioComponentInstanceNew(inputComponent, &audioUnit);
AudioUnitElement inputBus = 1;
UInt32 flag = 1;
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, inputBus, &flag, sizeof(flag));
audioDescription = AudioDescription24BitStereo96000;
AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
inputBus,
&audioDescription,
sizeof(audioDescription));
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
inputBus, &callbackStruct,
sizeof(callbackStruct));
AudioOutputUnitStart(audioUnit);
}
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
AudioBuffer audioBuffer;
audioBuffer.mNumberChannels = 1;
audioBuffer.mDataByteSize = inNumberFrames * audioDescription.mBytesPerFrame;
audioBuffer.mData = malloc( inNumberFrames * audioDescription.mBytesPerFrame );
// Put buffer in an AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = audioBuffer;
AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
//I then take the samples and write them to a WAV file
free(bufferList.mBuffers[0].mData); // release the malloc'ed buffer (the original snippet leaked it)
return noErr;
}
Check the hardware sample rate audio session property with your microphone plugged in. Also check all audio unit function error return values.
RemoteIO may be using a lower input sample rate and then resampling to a 96k stream.
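For example, a minimal version of both checks, reusing the names from the question's code:
// 1. What the hardware route is actually running at with the mic plugged in
AVAudioSession *session = [AVAudioSession sharedInstance];
NSLog(@"hardware sample rate = %f", session.sampleRate);
// 2. Check every audio unit call instead of discarding the result
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,
                                       inputBus,
                                       &audioDescription,
                                       sizeof(audioDescription));
if (status != noErr) {
    NSLog(@"StreamFormat failed: %d", (int)status); // e.g. kAudioUnitErr_FormatNotSupported
}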

AudioUnit sometimes doesn't work; it only happens on the 6s (maybe the 6s Plus too, but I haven't tested)

I am using AudioUnit for playback and recording at the same time. The preferred settings are a 48 kHz sample rate and a 0.02 s I/O buffer duration.
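(The question doesn't show its session setup; a sketch of how those preferences are typically requested from AVAudioSession:)
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setPreferredSampleRate:48000.0 error:&error];
[session setPreferredIOBufferDuration:0.02 error:&error]; // 20 ms slices
[session setActive:YES error:&error];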
Here is the render callback for playing and recording:
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
IosAudioController *microphone = (__bridge IosAudioController *)inRefCon;
// render audio into buffer
OSStatus result = AudioUnitRender(microphone.audioUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
microphone.tempBuffer);
checkStatus(result);
// kAudioUnitErr_InvalidPropertyValue
// notify delegate of new buffer list to process
if ([microphone.dataSource respondsToSelector:@selector(microphone:hasBufferList:withBufferSize:withNumberOfChannels:)])
{
[microphone.dataSource microphone:microphone
hasBufferList:microphone.tempBuffer
withBufferSize:inNumberFrames
withNumberOfChannels:microphone.destinationFormat.mChannelsPerFrame];
}
return result;
}
/**
This callback is called when the audioUnit needs new data to play through the
speakers. If you don't have any, just don't write anything in the buffers
*/
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
IosAudioController *output = (__bridge IosAudioController *)inRefCon;
//
// Try to ask the data source for audio data to fill out the output's
// buffer list
//
if( [output.dataSource respondsToSelector:@selector(outputShouldUseCircularBuffer:)] ){
TPCircularBuffer *circularBuffer = [output.dataSource outputShouldUseCircularBuffer:output];
if( !circularBuffer ){
// SInt32 *left = ioData->mBuffers[0].mData;
// SInt32 *right = ioData->mBuffers[1].mData;
// for(int i = 0; i < inNumberFrames; i++ ){
// left[ i ] = 0.0f;
// right[ i ] = 0.0f;
// }
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
return noErr;
}
/**
Thank you Michael Tyson (A Tasty Pixel) for writing the TPCircularBuffer, you are amazing!
*/
// Get the available bytes in the circular buffer
int32_t availableBytes;
void *buffer = TPCircularBufferTail(circularBuffer,&availableBytes);
int32_t amount = 0;
// float floatNumber = availableBytes * 0.25 / 48;
// float speakerNumber = ioData->mBuffers[0].mDataByteSize * 0.25 / 48;
for (int i=0; i < ioData->mNumberBuffers; i++) {
AudioBuffer abuffer = ioData->mBuffers[i];
// Ideally we'd have all the bytes to be copied, but compare it against the available bytes (get min)
amount = MIN(abuffer.mDataByteSize,availableBytes);
// copy buffer to audio buffer which gets played after function return
memcpy(abuffer.mData, buffer, amount);
// set data size
abuffer.mDataByteSize = amount;
}
// Consume those bytes ( this will internally push the head of the circular buffer )
TPCircularBufferConsume(circularBuffer,amount);
}
else
{
//
// Silence if there is nothing to output
//
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
}
return noErr;
}
_tempBuffer is configured with 4096 frames.
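(The question doesn't show that configuration; a sketch consistent with the deallocation code below might look like this:)
_tempBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList));
_tempBuffer->mNumberBuffers = 1;
_tempBuffer->mBuffers[0].mNumberChannels = 1;
_tempBuffer->mBuffers[0].mDataByteSize = 4096 * sizeof(SInt16); // 4096 frames, 16-bit mono
_tempBuffer->mBuffers[0].mData = malloc(4096 * sizeof(SInt16));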
Here is how I deallocate the audio unit. Note that, due to a known issue where the VoiceProcessingIO unit may not work properly if you start it, stop it, and start it again, I dispose of it and re-create it every time. The issue was posted somewhere, but I can't remember the link.
if (_tempBuffer != NULL) {
for(unsigned i = 0; i < _tempBuffer->mNumberBuffers; i++)
{
free(_tempBuffer->mBuffers[i].mData);
}
free(_tempBuffer);
}
AudioComponentInstanceDispose(_audioUnit);
This configuration works well on the 6, 6 Plus, and earlier devices. But something has gone wrong on the 6s (and maybe the 6s Plus). Sometimes (these kinds of bugs are really annoying; I hate them; for me it happens 6-7 times in 20 tests) there is still incoming and outgoing data from the I/O unit, but no sound at all.
It seems that it never happens on the first test, so I guess it may be a memory issue with the I/O unit, and I still don't know how to fix it.
Any advice would be much appreciated.
UPDATE
I forgot to show how I configure the AudioUnit
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &_audioUnit);
checkStatus(status);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Enable IO for playback
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Apply format
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&_destinationFormat,
sizeof(self.destinationFormat));
checkStatus(status);
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&_destinationFormat,
sizeof(self.destinationFormat));
checkStatus(status);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
[self configureMicrophoneBufferList];
// Initialise
status = AudioUnitInitialize(_audioUnit);
Three things might be problems:
For silence (during underflow), you might want to try filling the buffers with inNumberFrames of zeros instead of returning them unmodified (see the first sketch after this list).
Inside the audio callbacks, using any Objective-C messaging (your respondsToSelector: call) is not recommended by Apple DTS.
You shouldn't free buffers or call AudioComponentInstanceDispose until audio processing has really stopped. And since audio units run in another real-time thread, they don't (get thread or CPU time and) really stop until some time after your app makes the stop-audio call. I would wait a couple of seconds, and certainly not call (re)initialize or (re)start until after that delay (the second sketch below shows the teardown order).
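A sketch of item 1, zero-filling the output buffers during underflow instead of leaving them untouched (this would go in the !circularBuffer branch of playbackCallback):
// Explicitly write silence rather than returning whatever was left in ioData
for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
    memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
return noErr;
And a sketch of item 3, the stop/uninitialize/dispose order, with a delay before freeing anything the render thread might still touch (the two-second figure is just the suggestion above, not a documented constant):
AudioOutputUnitStop(_audioUnit);
AudioUnitUninitialize(_audioUnit);
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    AudioComponentInstanceDispose(_audioUnit);
    _audioUnit = NULL;
    // only now free _tempBuffer and its mBuffers[i].mData
});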

Duplex Audio communication using AudioUnits

I am working on an app with the following requirements:
Record real-time audio from an iOS device (iPhone/iPad) and send it to a server over the network
Play audio received from the network server on the iOS device (iPhone/iPad)
Both of the above need to happen simultaneously.
I have used AudioUnit for this.
I have run into a problem where I hear the same audio I speak into the iPhone mic instead of the audio received from the network server.
I have searched a lot for how to avoid this but haven't found a solution.
If anyone has had the same problem and found a solution, sharing it would help a lot.
Here is my code for initializing the audio unit:
-(void)initializeAudioUnit
{
audioUnit = NULL;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
UInt32 flag = 1;
//enable IO for recording
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
AudioStreamBasicDescription audioStreamBasicDescription;
// Describe format
audioStreamBasicDescription.mSampleRate = 16000;
audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked |kLinearPCMFormatFlagIsNonInterleaved;
audioStreamBasicDescription.mFramesPerPacket = 1;
audioStreamBasicDescription.mChannelsPerFrame = 1;
audioStreamBasicDescription.mBitsPerChannel = 16;
audioStreamBasicDescription.mBytesPerPacket = 2;
audioStreamBasicDescription.mBytesPerFrame = 2;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioStreamBasicDescription,
sizeof(audioStreamBasicDescription));
NSLog(@"Status[%d]",(int)status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioStreamBasicDescription,
sizeof(audioStreamBasicDescription));
NSLog(@"Status[%d]",(int)status);
AURenderCallbackStruct callbackStruct;
// Set input callback
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
flag=0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
}
Recording callback
static OSStatus recordingCallback (void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,const AudioTimeStamp *inTimeStamp,UInt32 inBusNumber,UInt32 inNumberFrames,AudioBufferList *ioData)
{
MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
AudioBuffer tempBuffer;
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = inNumberFrames * 2;
tempBuffer.mData = malloc(inNumberFrames *2);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = tempBuffer;
OSStatus status;
status = AudioUnitRender(THIS->audioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
&bufferList);
if (noErr != status) {
printf("AudioUnitRender error: %d", (int)status);
return noErr;
}
// encode call truncated in the original post; it ended: tempBuffer.mDataByteSize, &encodedSize, (__bridge void *)(THIS));
[THIS processAudio:&bufferList];
free(bufferList.mBuffers[0].mData);
return noErr;
}
Playback callback
static OSStatus playbackCallback(void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,const AudioTimeStamp *inTimeStamp,UInt32 inBusNumber,UInt32 inNumberFrames,AudioBufferList *ioData) {
NSLog(@"In playback callback");
MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
int32_t availableBytes = 0;
char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);
NSLog(@"bytes available in buffer[%d]", availableBytes);
decodeSpeexData(inBuffer, availableBytes, (__bridge void *)(THIS));
ConsumeReadBytes(&(THIS->mybuffer), availableBytes);
char *targetBuffer = (char *)ioData->mBuffers[0].mData; // assumed destination; 'targetBuffer' was not declared in the original snippet
memcpy(targetBuffer, THIS->outTemp, inNumberFrames * 2);
return noErr;
}
Processing audio recorded from the mic
- (void) processAudio: (AudioBufferList*) bufferList
{
AudioBuffer sourceBuffer = bufferList->mBuffers[0];
// NSLog(@"Origin size: %d", (int)sourceBuffer.mDataByteSize);
int size = 0;
encodeAudioDataSpeex((spx_int16_t*)sourceBuffer.mData, sourceBuffer.mDataByteSize, &size, (__bridge void *)(self));
[self performSelectorOnMainThread:@selector(SendAudioData:) withObject:[NSData dataWithBytes:self->jitterBuffer length:size] waitUntilDone:NO];
NSLog(@"Encoded size: %i", size);
}
Your playbackCallback render callback is responsible for the audio that is sent to the RemoteIO speaker output. If this render callback puts no data in its callback buffers, whatever junk was left in those buffers (perhaps data previously in the record callback buffers) might be sent to the speaker instead.
Also, it is strongly recommended by Apple DTS that your recordingCallback not include any memory management calls, such as malloc(). So this may be a bug helping cause the problem as well.
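A sketch of one way to follow that advice: allocate the render buffer once, outside the callback, and reuse it. The names below are illustrative, not from the question's code:
// Instance variables, allocated once before AudioOutputUnitStart:
_recordBufferData = malloc(4096 * 2); // worst-case frames * 2 bytes per sample
_recordBufferList.mNumberBuffers = 1;
_recordBufferList.mBuffers[0].mNumberChannels = 1;
_recordBufferList.mBuffers[0].mData = _recordBufferData;
// In recordingCallback there is then no malloc()/free(); just reset the byte size:
THIS->_recordBufferList.mBuffers[0].mDataByteSize = inNumberFrames * 2;
OSStatus status = AudioUnitRender(THIS->audioUnit, ioActionFlags, inTimeStamp,
                                  kInputBus, inNumberFrames, &THIS->_recordBufferList);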

Recording iPhone output sound using Audio unit

I am developing an application in which I want to mix multiple sounds to create a single audio file. I'm able to mix the multiple audio files into a single audio stream using Apple's MixerHost example, but am not able to record this audio to generate a single audio file.
I referred to this link to record the generated audio. With reference to the mentioned link I am able to generate an audio file, but it's silent (there is no sound).
Following is my code:
-(void) initializeOutputUnit
{
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
// Describe format
AudioStreamBasicDescription audioFormat={0};
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
AudioUnitInitialize(audioUnit);
AudioOutputUnitStart(audioUnit);
// Initialize the audio file
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat: @"%@/output.caf", documentsDirectory];
NSLog(@">>> %@\n", destinationFilePath);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (__bridge CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileCAFType, &audioFormat, NULL, kAudioFileFlags_EraseFile, &mAudioFileRef);
CFRelease(destinationURL);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");
setupErr = ExtAudioFileSetProperty(mAudioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
NSAssert(setupErr == noErr, @"Couldn't set the file's client data format");
setupErr = ExtAudioFileWriteAsync(mAudioFileRef, 0, NULL);
NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");
}
static OSStatus recordingCallback (void * inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
AudioBufferList bufferList;
SInt16 samples[inNumberFrames]; // A large enough size to not have to worry about buffer overrun
memset (&samples, 0, sizeof (samples));
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mData = samples;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames*sizeof(SInt16);
AudioMixerViewController* THIS = (__bridge AudioMixerViewController *)inRefCon;
OSStatus status;
status = AudioUnitRender(THIS->audioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
&bufferList);
if (noErr != status) {
printf("AudioUnitRender error: %ld", (long)status);
return noErr;
}
// Now, we have the samples we just read sitting in buffers in bufferList
ExtAudioFileWriteAsync(THIS->mAudioFileRef, inNumberFrames, &bufferList);
return noErr;
}
I am getting OSStatus(ExtAudioFileDispose): 0 and OSStatus(ExtAudioFileDispose): -50
Please help me. Thanks!

Setting up an effect AudioUnit

I'm trying to write an iOS app that captures sound from the microphone, passes it through a high-pass filter, and does some calculation on the processed sound. Based on Stefan Popp's MicInput (http://www.stefanpopp.de/2011/capture-iphone-microphone/), I'm trying to put an effect audio unit (more specifically, a high-pass filter effect unit) in between the input and the output of the I/O audio unit. After setting up said AU, it gives me a -10877 error (kAudioUnitErr_InvalidElement) when I call AudioUnitRender(fxAudioUnit, ...) in the I/O AU's render callback.
AudioProcessingWithAudioUnitAPI.h
//
// AudioProcessingWithAudioUnitAPI.h
//
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVAudioSession.h>
@interface AudioProcessingWithAudioUnitAPI : NSObject
@property (readonly) AudioBuffer audioBuffer;
@property (readonly) AudioComponentInstance audioUnit;
@property (readonly) AudioComponentInstance fxAudioUnit;
...
@end
AudioProcessingWithAudioUnitAPI.m
//
// AudioProcessingWithAudioUnitAPI.m
//
#import "AudioProcessingWithAudioUnitAPI.h"
@implementation AudioProcessingWithAudioUnitAPI
@synthesize isPlaying = _isPlaying;
@synthesize outputLevelDisplay = _outputLevelDisplay;
@synthesize audioBuffer = _audioBuffer;
@synthesize audioUnit = _audioUnit;
@synthesize fxAudioUnit = _fxAudioUnit;
...
#pragma mark Recording callback
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// the data gets rendered here
AudioBuffer buffer;
// a variable where we check the status
OSStatus status;
/**
This is a reference to the object that owns the callback.
*/
AudioProcessingWithAudioUnitAPI *audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;
/**
At this point we define the number of channels, which is mono
for the iPhone. The number of frames is usually 512 or 1024.
*/
buffer.mDataByteSize = inNumberFrames * 2; // sample size
buffer.mNumberChannels = 1; // one channel
buffer.mData = malloc( inNumberFrames * 2 ); // buffer size
// we put our buffer into a bufferlist array for rendering
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
And the next AudioUnitRender call is where the -10877 error is thrown:
status = AudioUnitRender([audioProcessor fxAudioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
[audioProcessor hasError:status:__FILE__:__LINE__];
...
// process the bufferlist in the audio processor
[audioProcessor processBuffer:&bufferList];
//do some further processing
// clean up the buffer
free(bufferList.mBuffers[0].mData);
return noErr;
}
#pragma mark FX AudioUnit render callback
//This just asks the I/O AU (microphone) for samples to render
static OSStatus fxAudioUnitRenderCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
OSStatus retorno;
AudioProcessingWithAudioUnitAPI* audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*)inRefCon;
retorno = AudioUnitRender([audioProcessor audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
ioData);
[audioProcessor hasError:retorno:__FILE__:__LINE__];
return retorno;
}
#pragma mark Playback callback
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
/**
This is a reference to the object that owns the callback.
*/
AudioProcessingWithAudioUnitAPI *audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;
// iterate over incoming stream and copy to output stream
for (int i=0; i < ioData->mNumberBuffers; i++) {
AudioBuffer buffer = ioData->mBuffers[i];
// find minimum size
UInt32 size = MIN(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);
// copy buffer to audio buffer which gets played after function return
memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);
// set data size
buffer.mDataByteSize = size;
}
return noErr;
}
#pragma mark - objective-c class methods
-(AudioProcessingWithAudioUnitAPI*)init
{
self = [super init];
if (self) {
self.isPlaying = NO;
[self initializeAudio];
}
return self;
}
-(void)initializeAudio
{
OSStatus status;
// We define the audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output; // we want to output
desc.componentSubType = kAudioUnitSubType_RemoteIO; // we want input and output
desc.componentFlags = 0; // must be zero
desc.componentFlagsMask = 0; // must be zero
desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider
// find the AU component by description
AudioComponent component = AudioComponentFindNext(NULL, &desc);
// create audio unit by component
status = AudioComponentInstanceNew(component, &_audioUnit);
[self hasError:status:__FILE__:__LINE__];
// and now for the fx AudioUnit
desc.componentType = kAudioUnitType_Effect;
desc.componentSubType = kAudioUnitSubType_HighPassFilter;
// find the AU component by description
component = AudioComponentFindNext(NULL, &desc);
// create audio unit by component
status = AudioComponentInstanceNew(component, &_fxAudioUnit);
[self hasError:status:__FILE__:__LINE__];
// define that we want record io on the input bus
AudioUnitElement inputElement = 1;
AudioUnitElement outputElement = 0;
UInt32 flag = 1;
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_EnableIO, // use io
kAudioUnitScope_Input, // scope to input
inputElement, // select input bus (1)
&flag, // set flag
sizeof(flag));
[self hasError:status:__FILE__:__LINE__];
UInt32 anotherFlag = 0;
// disable output (I don't want to hear back from the device)
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_EnableIO, // use io
kAudioUnitScope_Output, // scope to output
outputElement, // select output bus (0)
&anotherFlag, // set flag
sizeof(flag));
[self hasError:status:__FILE__:__LINE__];
/*
We need to specify the format we want to work in.
We use Linear PCM because it's uncompressed and we work on raw data.
We want 16 bits, 2 bytes per packet/frame, at 44 kHz.
*/
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = SAMPLE_RATE;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16; //65536
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// set the format on the output stream
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
inputElement,
&audioFormat,
sizeof(audioFormat));
[self hasError:status:__FILE__:__LINE__];
// set the format on the input stream
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
outputElement,
&audioFormat,
sizeof(audioFormat));
[self hasError:status:__FILE__:__LINE__];
/**
We need to define a callback structure which holds
a pointer to the recordingCallback and a reference to
the audio processor object
*/
AURenderCallbackStruct callbackStruct;
// set recording callback
callbackStruct.inputProc = recordingCallback; // recordingCallback pointer
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set input callback to recording callback on the input bus
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
inputElement,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
/*
We do the same on the output stream to hear what is coming
from the input stream
*/
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set playbackCallback as callback on our renderer for the output bus
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
outputElement,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
callbackStruct.inputProc = fxAudioUnitRenderCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set input callback to input AU
status = AudioUnitSetProperty(self.fxAudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
0,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
// reset flag to 0
flag = 0;
/*
we tell the audio unit NOT to allocate the render buffer,
so that we can pass in our own
*/
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
inputElement,
&flag,
sizeof(flag));
status = AudioUnitSetProperty(self.fxAudioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
0,
&flag,
sizeof(flag));
/*
we set the number of channels to mono and allocate our block size to
1024 bytes.
*/
_audioBuffer.mNumberChannels = 1;
_audioBuffer.mDataByteSize = 512 * 2;
_audioBuffer.mData = malloc( 512 * 2 );
// Initialize the Audio Unit and cross fingers =)
status = AudioUnitInitialize(self.fxAudioUnit);
[self hasError:status:__FILE__:__LINE__];
status = AudioUnitInitialize(self.audioUnit);
[self hasError:status:__FILE__:__LINE__];
NSLog(@"Started");
}
//For now, this just copies the buffer to self.audioBuffer
-(void)processBuffer: (AudioBufferList*) audioBufferList
{
AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];
// we check here if the input data byte size has changed
if (_audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
// clear old buffer
free(self.audioBuffer.mData);
// assing new byte size and allocate them on mData
_audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
_audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
}
// copy incoming audio data to the audio buffer
memcpy(self.audioBuffer.mData, audioBufferList->mBuffers[0].mData, audioBufferList->mBuffers[0].mDataByteSize);
}
#pragma mark - Error handling
-(void)hasError:(int)statusCode:(char*)file:(int)line
{
if (statusCode) {
printf("Error Code responded %d in file %s on line %d\n", statusCode, file, line);
exit(-1);
}
}
@end
Any help will be greatly appreciated.
This type of question comes up somewhat frequently, so I once wrote a mini-tutorial on this subject. However, that guide is really the nuts-and-bolts way to solve the problem; I now feel that a far more elegant way is to use the Novocaine framework, which takes a lot of the headache out of AudioUnit setup on iOS.
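For a flavor of what that looks like, here is roughly the Novocaine pattern from its README, quoted from memory, so treat the exact property names as assumptions and check the project page:
Novocaine *audioManager = [Novocaine audioManager];
[audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
    // Microphone samples arrive here as floats; run the high-pass
    // filter and any analysis on newAudio
}];
[audioManager play];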
I found some demo code that may be useful for you:
DEMO URL:https://github.com/JNYJdev/AudioUnit
OR
blog: http://atastypixel.com/blog/using-remoteio-audio-unit/
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Because of the way our audio format (setup below) is chosen:
// we only need 1 buffer, since it is mono
// Samples are 16 bits = 2 bytes.
// 1 frame includes only 1 sample
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
buffer.mData = malloc( inNumberFrames * 2 );
// Put buffer in an AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// Then:
// Obtain recorded samples
OSStatus status;
status = AudioUnitRender([iosAudio audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&bufferList);
checkStatus(status);
// Now, we have the samples we just read sitting in buffers in bufferList
// Process the new data
[iosAudio processAudio:&bufferList];
// release the malloc'ed data in the buffer we created earlier
free(bufferList.mBuffers[0].mData);
return noErr;
}
