Recording iPhone output sound using Audio Units - iOS

I am developing an application in which I want to mix multiple sounds to create a single audio file. I'm able to mix the multiple audio files into a single output using Apple's MixerHost example, but I am not able to record that output to generate a single audio file.
I referred to this link to record the generated audio. With reference to the mentioned link I am able to generate an audio file, but it's silent (there is no sound).
Following is my code:
-(void) initializeOutputUnit
{
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
// Describe format
AudioStreamBasicDescription audioFormat={0};
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
AudioUnitInitialize(audioUnit);
AudioOutputUnitStart(audioUnit);
// Initialize the audio file
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat: @"%@/output.caf", documentsDirectory];
NSLog(@">>> %@\n", destinationFilePath);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (__bridge CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileCAFType, &audioFormat, NULL, kAudioFileFlags_EraseFile, &mAudioFileRef);
CFRelease(destinationURL);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");
setupErr = ExtAudioFileSetProperty(mAudioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
NSAssert(setupErr == noErr, @"Couldn't set client data format");
setupErr = ExtAudioFileWriteAsync(mAudioFileRef, 0, NULL);
NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");
}
static OSStatus recordingCallback (void * inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
AudioBufferList bufferList;
SInt16 samples[inNumberFrames]; // A large enough size to not have to worry about buffer overrun
memset (&samples, 0, sizeof (samples));
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mData = samples;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames*sizeof(SInt16);
AudioMixerViewController* THIS = (__bridge AudioMixerViewController *)inRefCon;
OSStatus status;
status = AudioUnitRender(THIS->audioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
&bufferList);
if (noErr != status) {
printf("AudioUnitRender error: %d", (int)status);
return noErr;
}
// Now, we have the samples we just read sitting in buffers in bufferList
ExtAudioFileWriteAsync(THIS->mAudioFileRef, inNumberFrames, &bufferList);
return noErr;
}
I am getting OSStatus(ExtAudioFileDispose): 0 and OSStatus(ExtAudioFileDispose): -50
Please help me. Thanks!

Related

Remote IO Audio Unit is not capturing audio from speaker or remote stream

I am using the following code to record audio and write it to a file:
- (void) initializeOutputUnit
{
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &mAudioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(mAudioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
// Enable IO for playback
status = AudioUnitSetProperty(mAudioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
// Describe format
AudioStreamBasicDescription audioFormat={0};
audioFormat.mSampleRate = kSampleRate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(mAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
status = AudioUnitSetProperty(mAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;
status = AudioUnitSetProperty(mAudioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(mAudioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
// Initialize the audio file
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat: @"%@/output.caf", documentsDirectory];
NSLog(@">>> %@\n", destinationFilePath);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (__bridge CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileCAFType, &audioFormat, NULL, kAudioFileFlags_EraseFile, &mAudioFileRef);
CFRelease(destinationURL);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");
setupErr = ExtAudioFileSetProperty(mAudioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
NSAssert(setupErr == noErr, @"Couldn't set client data format");
setupErr = ExtAudioFileWriteAsync(mAudioFileRef, 0, NULL);
NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");
CheckError(AudioUnitInitialize(mAudioUnit), "AudioUnitInitialize");
CheckError(AudioOutputUnitStart(mAudioUnit), "AudioOutputUnitStart");
// [NSTimer scheduledTimerWithTimeInterval:5 target:self selector:@selector(stopRecording:) userInfo:nil repeats:NO];
}
In my application I have video call functionality using WebRTC. The above code works fine with mic audio; the mic is recorded properly, but I am not able to record the speaker sound (the remote audio stream). Is it possible to record both sounds (mic and speaker) in the same file? If yes, how should the code be modified so that it also records the sound from the iPhone's internal speaker?

AVAudioSession Force line-in input when output is A2DP

I want to reroute the input from line-in while the output goes to A2DP. But unfortunately I can't set line-in as the input. Is there any way to force line-in as the input and then output to A2DP?
Configuration below. Note the code near "set line in as preferred". I do believe this should set the input correctly, if it were available...
- (id) init {
self = [super init];
sizeof(allowBluetoothInput), &allowBluetoothInput);
// You can adjust the latency of RemoteIO (and, in fact, any other audio framework) by setting the kAudioSessionProperty_PreferredHardwareIOBufferDuration property
float aBufferLength = 0.005; // In seconds
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(aBufferLength), &aBufferLength);
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// set line in as preferred
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSString *preferredPortType = AVAudioSessionPortLineIn;
for (AVAudioSessionPortDescription *desc in audioSession.availableInputs) {
if ([desc.portType isEqualToString: AVAudioSessionPortLineIn] ||
// [desc.portType isEqualToString: AVAudioSessionPortBuiltInMic] ||
[desc.portType isEqualToString: AVAudioSessionPortHeadsetMic])
{
[audioSession setPreferredInput:desc error:nil];
}
}
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM; // kAudioFormatMPEG4AAC_ELD
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
// Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per frame, thus 2 bytes per frame).
// In practice the buffers contain 512 frames; if this changes it is handled in processAudio.
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = 512 * 2;
tempBuffer.mData = malloc( 512 * 2 );
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionAllowBluetoothA2DP error:NULL];
// Initialise
AudioSessionSetActive(true);
NSTimeInterval outLat = [[AVAudioSession sharedInstance] outputLatency];
NSTimeInterval inLat = [[AVAudioSession sharedInstance] inputLatency];
status = AudioUnitInitialize(audioUnit);
checkStatus(status);
return self;
}

iOS. Record at 96kHz with USB microphone

I am trying to record at full 96kHz with my RØDE iXY USB microphone.
Recording goes without error, and when I launch the app with the mic connected I see that AVAudioSession is successfully running at a 96kHz sample rate.
But if I look at the spectrum, it is clear that there is nothing but resampling noise above 20kHz:
For comparison, here is the spectrum of the same recording made with the app bundled with the USB mic (RØDE Rec):
Is there anything else I must do to record at native 96kHz?
Or maybe the RØDE Rec app communicates with the mic with some proprietary protocol over USB and I'm out of luck here?
I included the source code that I use:
static AudioStreamBasicDescription AudioDescription24BitStereo96000 = (AudioStreamBasicDescription) {
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger,
.mChannelsPerFrame = 2,
.mBytesPerPacket = 6,
.mFramesPerPacket = 1,
.mBytesPerFrame = 6,
.mBitsPerChannel = 24,
.mSampleRate = 96000.0
};
- (void)setupAudioSession
{
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryRecord error:&error];
[session setActive:YES error:&error];
[session setPreferredSampleRate:96000.0f error:&error];
//I got my 96000Hz with the USB mic plugged in!
NSLog(@"sampleRate = %lf", session.sampleRate);
}
- (void)startRecording
{
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
AudioComponentInstanceNew(inputComponent, &audioUnit);
AudioUnitScope inputBus = 1;
UInt32 flag = 1;
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, inputBus, &flag, sizeof(flag));
audioDescription = AudioDescription24BitStereo96000;
AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
inputBus,
&audioDescription,
sizeof(audioDescription));
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
inputBus, &callbackStruct,
sizeof(callbackStruct));
AudioOutputUnitStart(audioUnit);
}
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
AudioBuffer audioBuffer;
audioBuffer.mNumberChannels = 1;
audioBuffer.mDataByteSize = inNumberFrames * audioDescription.mBytesPerFrame;
audioBuffer.mData = malloc( inNumberFrames * audioDescription.mBytesPerFrame );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = audioBuffer;
AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
//I then take the samples and write them to WAV file
}
Check the hardware sample rate audio session property with your microphone plugged in. Also check all audio unit function error return values.
RemoteIO may be using a lower input sample rate and then resampling to a 96k stream.

How can I mute microphone input volume using AudioUnits?

I'm using Audio Units to record and play sound. It's part of a softphone.
This is my initialisation:
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 8000;
audioFormat.mFormatID = kAudioFormatULaw;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &audioFormat, sizeof(audioFormat));
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
During the recording process I'm using a callback to process the sound:
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
Now at some point I would like to mute the microphone. After googling, I found this as a solution:
-(void) setMuteOn {
AudioUnitParameterValue volume = 0.0;
AudioUnitSetProperty(audioUnit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, 1, &volume, 0);
}
But it doesn't work. Perhaps I need to do some kind of refresh on my audioUnit; I don't know. Any help would be great.
Actually it was easier than I thought. In the callback method I just overwrote the sound buffers with silence. In my case I was using μ-law compression, so I just filled my array with 0xFF.
The microphone was still recording, but I stopped using the data.
You could do the following which I think is a little cleaner.
-(BOOL)microphoneInput:(BOOL)enable;
{
UInt32 enableInput = (enable)? 1 : 0;
OSStatus status = AudioUnitSetProperty(
ioUnit,//our I/O unit
kAudioOutputUnitProperty_EnableIO, //property we are changing
kAudioUnitScope_Input,
kInputBus, //#define kInputBus 1
&enableInput,
sizeof (enableInput)
);
CheckStatus(status, @"Unable to enable/disable input");
return (status == noErr);
}

Duplex Audio communication using AudioUnits

I am working on an app which has the following requirements:
Record real-time audio from an iOS device (iPhone/iPad) and send it to a server over the network
Play audio received from the network server on the iOS device (iPhone/iPad)
Both of the above need to happen simultaneously.
I have used AudioUnit for this.
I have run into a problem where I hear the same audio I speak into the iPhone mic, instead of the audio received from the network server.
I have searched a lot on how to avoid this but haven't found a solution.
If anyone has had the same problem and found a solution, sharing it will help a lot.
Here is my code for initializing the audio unit:
-(void)initializeAudioUnit
{
audioUnit = NULL;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
UInt32 flag = 1;
//enable IO for recording
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
AudioStreamBasicDescription audioStreamBasicDescription;
// Describe format
audioStreamBasicDescription.mSampleRate = 16000;
audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked |kLinearPCMFormatFlagIsNonInterleaved;
audioStreamBasicDescription.mFramesPerPacket = 1;
audioStreamBasicDescription.mChannelsPerFrame = 1;
audioStreamBasicDescription.mBitsPerChannel = 16;
audioStreamBasicDescription.mBytesPerPacket = 2;
audioStreamBasicDescription.mBytesPerFrame = 2;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioStreamBasicDescription,
sizeof(audioStreamBasicDescription));
NSLog(@"Status[%d]", (int)status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioStreamBasicDescription,
sizeof(audioStreamBasicDescription));
NSLog(@"Status[%d]", (int)status);
AURenderCallbackStruct callbackStruct;
// Set input callback
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
flag=0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
}
Recording Call Back
static OSStatus recordingCallback (void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,const AudioTimeStamp *inTimeStamp,UInt32 inBusNumber,UInt32 inNumberFrames,AudioBufferList *ioData)
{
MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
AudioBuffer tempBuffer;
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = inNumberFrames * 2;
tempBuffer.mData = malloc(inNumberFrames *2);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = tempBuffer;
OSStatus status;
status = AudioUnitRender(THIS->audioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
&bufferList);
if (noErr != status) {
printf("AudioUnitRender error: %d", (int)status);
return noErr;
}
tempBuffer.mDataByteSize, &encodedSize,(__bridge void *)(THIS));
[THIS processAudio:&bufferList];
free(bufferList.mBuffers[0].mData);
return noErr;
}
Playback Call Back
static OSStatus playbackCallback(void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,const AudioTimeStamp *inTimeStamp,UInt32 inBusNumber,UInt32 inNumberFrames,AudioBufferList *ioData) {
NSLog(@"In playback callback");
MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
int32_t availableBytes=0;
char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);
NSLog(@"bytes available in buffer[%d]", availableBytes);
decodeSpeexData(inBuffer, availableBytes,(__bridge void *)(THIS));
ConsumeReadBytes(&(THIS->mybuffer), availableBytes);
memcpy(targetBuffer, THIS->outTemp, inNumberFrames*2);
return noErr;
}
Process Audio recorded from MIC
- (void) processAudio: (AudioBufferList*) bufferList
{
AudioBuffer sourceBuffer = bufferList->mBuffers[0];
// NSLog(@"Origin size: %d", (int)sourceBuffer.mDataByteSize);
int size = 0;
encodeAudioDataSpeex((spx_int16_t*)sourceBuffer.mData, sourceBuffer.mDataByteSize, &size, (__bridge void *)(self));
[self performSelectorOnMainThread:@selector(SendAudioData:) withObject:[NSData dataWithBytes:self->jitterBuffer length:size] waitUntilDone:NO];
NSLog(@"Encoded size: %i", size);
}
Your playbackCallback render callback is responsible for the audio that is sent to the RemoteIO speaker output. If this render callback puts no data into its callback buffers, whatever junk was left in them (perhaps whatever was previously in the record callback's buffers) might be sent to the speaker instead.
Also, Apple DTS strongly recommends that your recordingCallback not include any memory management calls, such as malloc(). So that may be a bug contributing to the problem as well.
