iOS: Record at 96 kHz with a USB microphone

I am trying to record at full 96kHz with my RØDE iXY USB microphone.
Recording goes without error, and when I launch the app with the mic connected, I see that AVAudioSession is running at a 96 kHz sample rate.
But if I look at the spectrum, it is clear that there is nothing but resampling noise above 20 kHz.
For comparison, the spectrum of the same recording made with the app bundled with the USB mic (RØDE Rec) does show real content above 20 kHz.
Is there anything else I must do to record at native 96kHz?
Or maybe the RØDE Rec app communicates with the mic with some proprietary protocol over USB and I'm out of luck here?
I included the source code that I use:
static AudioStreamBasicDescription AudioDescription24BitStereo96000 = (AudioStreamBasicDescription) {
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger,
.mChannelsPerFrame = 2,
.mBytesPerPacket = 6,
.mFramesPerPacket = 1,
.mBytesPerFrame = 6,
.mBitsPerChannel = 24,
.mSampleRate = 96000.0
};
- (void)setupAudioSession
{
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;
[session setCategory:AVAudioSessionCategoryRecord error:&error];
[session setActive:YES error:&error];
[session setPreferredSampleRate:96000.0 error:&error];
//I got my 96000Hz with the USB mic plugged in!
NSLog(@"sampleRate = %lf", session.sampleRate);
}
- (void)startRecording
{
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
AudioComponentInstanceNew(inputComponent, &audioUnit);
AudioUnitElement inputBus = 1;
UInt32 flag = 1;
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, inputBus, &flag, sizeof(flag));
audioDescription = AudioDescription24BitStereo96000;
AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
inputBus,
&audioDescription,
sizeof(audioDescription));
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
inputBus, &callbackStruct,
sizeof(callbackStruct));
AudioOutputUnitStart(audioUnit);
}
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
AudioBuffer audioBuffer;
audioBuffer.mNumberChannels = 1;
audioBuffer.mDataByteSize = inNumberFrames * audioDescription.mBytesPerFrame;
audioBuffer.mData = malloc( inNumberFrames * audioDescription.mBytesPerFrame );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = audioBuffer;
AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
//I then take the samples and write them to WAV file
free(bufferList.mBuffers[0].mData);
return noErr;
}

Check the hardware sample rate audio session property with your microphone plugged in. Also check all audio unit function error return values.
RemoteIO may be using a lower input sample rate and then resampling to a 96k stream.
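As a rough illustration of those checks (a sketch only, reusing the session and audioUnit names from the question):
AVAudioSession *session = [AVAudioSession sharedInstance];
// Run this with the mic plugged in and compare preferred vs. actual hardware rate
NSLog(@"preferred rate = %f, actual hardware rate = %f",
      session.preferredSampleRate, session.sampleRate);
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,
                                       1, // input bus
                                       &audioDescription,
                                       sizeof(audioDescription));
if (status != noErr) {
    // e.g. kAudioUnitErr_FormatNotSupported if 24-bit/96 kHz is rejected
    NSLog(@"Setting stream format failed: %d", (int)status);
}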

Related

CoreAudio: change sample rate of microphone and get data in a callback?

This is my first attempt at using CoreAudio, but my goal is to capture microphone data, resample it to a new sample rate, and then capture the raw 16-bit PCM data.
My strategy for this is to make an AUGraph with the microphone --> a sample rate converter, and then have a callback that gets data from the output of the converter (which I'm hoping is mic output at the new sample rate?).
Right now my callback just fires with a null AudioBufferList*, which obviously isn't correct. How should I set this up and what am I doing wrong?
Code follows:
CheckError(NewAUGraph(&audioGraph), @"Creating graph");
CheckError(AUGraphOpen(audioGraph), @"Opening graph");
AUNode micNode, converterNode;
AudioUnit micUnit, converterUnit;
makeMic(&audioGraph, &micNode, &micUnit);
// get the Input/inputBus's stream description
UInt32 sizeASBD = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription hwASBDin;
AudioUnitGetProperty(micUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kInputBus,
&hwASBDin,
&sizeASBD);
makeConverter(&audioGraph, &converterNode, &converterUnit, hwASBDin);
// connect mic output to converterNode
CheckError(AUGraphConnectNodeInput(audioGraph, micNode, 1, converterNode, 0),
#"Connecting mic to converter");
// set callback on the output? maybe?
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = audioCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
CheckError(AudioUnitSetProperty(micUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct)),
#"Setting callback");
CheckError(AUGraphInitialize(audioGraph), @"AUGraphInitialize");
// activate audio session
NSError *err = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
if (![audioSession setActive:YES error:&err]){
[self error:[NSString stringWithFormat:@"Couldn't activate audio session: %@", err]];
}
CheckError(AUGraphStart(audioGraph), @"AUGraphStart");
and:
void makeMic(AUGraph *graph, AUNode *micNode, AudioUnit *micUnit) {
AudioComponentDescription inputDesc;
inputDesc.componentType = kAudioUnitType_Output;
inputDesc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
inputDesc.componentFlags = 0;
inputDesc.componentFlagsMask = 0;
inputDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
CheckError(AUGraphAddNode(*graph, &inputDesc, micNode),
#"Adding mic node");
CheckError(AUGraphNodeInfo(*graph, *micNode, 0, micUnit),
#"Getting mic unit");
// enable microphone for recording
UInt32 flagOn = 1; // enable value
CheckError(AudioUnitSetProperty(*micUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flagOn,
sizeof(flagOn)),
#"Enabling microphone");
}
and:
void makeConverter(AUGraph *graph, AUNode *converterNode, AudioUnit *converterUnit, AudioStreamBasicDescription inFormat) {
AudioComponentDescription sampleConverterDesc;
sampleConverterDesc.componentType = kAudioUnitType_FormatConverter;
sampleConverterDesc.componentSubType = kAudioUnitSubType_AUConverter;
sampleConverterDesc.componentFlags = 0;
sampleConverterDesc.componentFlagsMask = 0;
sampleConverterDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
CheckError(AUGraphAddNode(*graph, &sampleConverterDesc, converterNode),
#"Adding converter node");
CheckError(AUGraphNodeInfo(*graph, *converterNode, 0, converterUnit),
#"Getting converter unit");
// describe desired output format
AudioStreamBasicDescription convertedFormat;
convertedFormat.mSampleRate = 16000.0;
convertedFormat.mFormatID = kAudioFormatLinearPCM;
convertedFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
convertedFormat.mFramesPerPacket = 1;
convertedFormat.mChannelsPerFrame = 1;
convertedFormat.mBitsPerChannel = 16;
convertedFormat.mBytesPerPacket = 2;
convertedFormat.mBytesPerFrame = 2;
// set format descriptions
CheckError(AudioUnitSetProperty(*converterUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0, // should be the only bus #
&inFormat,
sizeof(inFormat)),
#"Setting format of converter input");
CheckError(AudioUnitSetProperty(*converterUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0, // should be the only bus #
&convertedFormat,
sizeof(convertedFormat)),
#"Setting format of converter output");
}
The render callback is used as a source for an audio unit. If you set the kAudioOutputUnitProperty_SetInputCallback property on the RemoteIO unit, you must call AudioUnitRender from within the callback you provide, and then you would have to do the sample rate conversion manually, which is ugly.
There is an "easier" way. The RemoteIO acts as two units, the input (mic) and the output (speaker). Create a graph with a RemoteIO, then connect the mic to the speaker, using the desired format. Then you can get the data using a renderNotify callback, which acts as a "tap".
I created a ViewController class to demonstrate
#import "ViewController.h"
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
//Set your audio session to allow recording
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:NULL];
[audioSession setActive:1 error:NULL];
//Create graph and units
AUGraph graph = NULL;
NewAUGraph(&graph);
AUNode ioNode;
AudioUnit ioUnit = NULL;
AudioComponentDescription ioDescription = {0};
ioDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioDescription.componentType = kAudioUnitType_Output;
ioDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
AUGraphAddNode(graph, &ioDescription, &ioNode);
AUGraphOpen(graph);
AUGraphNodeInfo(graph, ioNode, NULL, &ioUnit);
UInt32 enable = 1;
AudioUnitSetProperty(ioUnit,kAudioOutputUnitProperty_EnableIO,kAudioUnitScope_Input,1,&enable,sizeof(enable));
//Set the output of the ioUnit's input bus, and the input of its output bus, to the desired format.
//Core Audio basically has implicit converters that we're taking advantage of.
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate = 16000.0;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mFramesPerPacket = 1;
asbd.mChannelsPerFrame = 1;
asbd.mBitsPerChannel = 16;
asbd.mBytesPerPacket = 2;
asbd.mBytesPerFrame = 2;
AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &asbd, sizeof(asbd));
AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &asbd, sizeof(asbd));
//Connect the output of the RemoteIO's input bus to the input of its output bus
AUGraphConnectNodeInput(graph, ioNode, 1, ioNode, 0);
//Add a render notify with a bridged reference to self (If using ARC)
AudioUnitAddRenderNotify(ioUnit, renderNotify, (__bridge void *)self);
//Start graph
AUGraphInitialize(graph);
AUGraphStart(graph);
CAShow(graph);
}
OSStatus renderNotify(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData){
//Filter anything that isn't a post render call on the input bus
if (!(*ioActionFlags & kAudioUnitRenderAction_PostRender) || inBusNumber != 1) {
return noErr;
}
//Get a reference to self
ViewController *self = (__bridge ViewController *)inRefCon;
//Do stuff with audio
//Optionally mute the audio by setting it to zero;
for (int i = 0; i < ioData->mNumberBuffers; i++) {
memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}
return noErr;
}
@end

AudioUnit "Sometime" doesn't work. "Only" happens on 6s (may be 6s plus, but I haven't tested)

I am using an AudioUnit for playback and recording at the same time. My preferred settings are a 48 kHz sample rate and a 0.02 s buffer duration.
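For reference, here is a minimal sketch of how those preferences might be requested on the audio session (values from the question; the hardware is free to grant only something close to them):
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setPreferredSampleRate:48000.0 error:&error];
[session setPreferredIOBufferDuration:0.02 error:&error];
[session setActive:YES error:&error];
// The granted values may differ from the requested ones:
NSLog(@"rate = %f, IO buffer duration = %f", session.sampleRate, session.IOBufferDuration);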
Here is render callback for playing and recording:
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
IosAudioController *microphone = (__bridge IosAudioController *)inRefCon;
// render audio into buffer
OSStatus result = AudioUnitRender(microphone.audioUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
microphone.tempBuffer);
checkStatus(result);
// kAudioUnitErr_InvalidPropertyValue
// notify delegate of new buffer list to process
if ([microphone.dataSource respondsToSelector:@selector(microphone:hasBufferList:withBufferSize:withNumberOfChannels:)])
{
[microphone.dataSource microphone:microphone
hasBufferList:microphone.tempBuffer
withBufferSize:inNumberFrames
withNumberOfChannels:microphone.destinationFormat.mChannelsPerFrame];
}
return result;
}
/**
This callback is called when the audioUnit needs new data to play through the
speakers. If you don't have any, just don't write anything in the buffers
*/
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
IosAudioController *output = (__bridge IosAudioController *)inRefCon;
//
// Try to ask the data source for audio data to fill out the output's
// buffer list
//
if( [output.dataSource respondsToSelector:@selector(outputShouldUseCircularBuffer:)] ){
TPCircularBuffer *circularBuffer = [output.dataSource outputShouldUseCircularBuffer:output];
if( !circularBuffer ){
// SInt32 *left = ioData->mBuffers[0].mData;
// SInt32 *right = ioData->mBuffers[1].mData;
// for(int i = 0; i < inNumberFrames; i++ ){
// left[ i ] = 0.0f;
// right[ i ] = 0.0f;
// }
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
return noErr;
};
/**
Thank you Michael Tyson (A Tasty Pixel) for writing the TPCircularBuffer, you are amazing!
*/
// Get the available bytes in the circular buffer
int32_t availableBytes;
void *buffer = TPCircularBufferTail(circularBuffer,&availableBytes);
int32_t amount = 0;
// float floatNumber = availableBytes * 0.25 / 48;
// float speakerNumber = ioData->mBuffers[0].mDataByteSize * 0.25 / 48;
for (int i=0; i < ioData->mNumberBuffers; i++) {
AudioBuffer abuffer = ioData->mBuffers[i];
// Ideally we'd have all the bytes to be copied, but compare it against the available bytes (get min)
amount = MIN(abuffer.mDataByteSize,availableBytes);
// copy buffer to audio buffer which gets played after function return
memcpy(abuffer.mData, buffer, amount);
// set data size
abuffer.mDataByteSize = amount;
}
// Consume those bytes ( this will internally push the head of the circular buffer )
TPCircularBufferConsume(circularBuffer,amount);
}
else
{
//
// Silence if there is nothing to output
//
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
}
return noErr;
}
_tempBuffer is configured with 4096 frames.
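For context, the configuration itself isn't shown; a plausible sketch of it, assuming an interleaved single-buffer layout that matches the free() loop below and the destination format's bytes per frame, is:
- (void)configureMicrophoneBufferList
{
    UInt32 bytesPerFrame = _destinationFormat.mBytesPerFrame;
    _tempBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList));
    _tempBuffer->mNumberBuffers = 1;
    _tempBuffer->mBuffers[0].mNumberChannels = _destinationFormat.mChannelsPerFrame;
    _tempBuffer->mBuffers[0].mDataByteSize = 4096 * bytesPerFrame;   // 4096-frame capacity
    _tempBuffer->mBuffers[0].mData = malloc(4096 * bytesPerFrame);
}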
Here is how I deallocate the audioUnit. Note that due to a known bug where the VoiceProcessingIO unit may not work properly if you start it, stop it, and start it again, I need to dispose of it and reinitialize it every time. It is a known issue that was posted somewhere, but I can't remember the link.
if (_tempBuffer != NULL) {
for(unsigned i = 0; i < _tempBuffer->mNumberBuffers; i++)
{
free(_tempBuffer->mBuffers[i].mData);
}
free(_tempBuffer);
}
AudioComponentInstanceDispose(_audioUnit);
This configuration works well on the iPhone 6, 6 Plus, and earlier devices. But something goes wrong on the 6s (and maybe the 6s Plus): sometimes (these kinds of bugs are really annoying; for me it happens 6-7 times out of 20 tests) there is still incoming and outgoing data from the IO unit, but no sound at all.
It seems that it never happens on the first test, so I guess it may be a memory issue with the IO unit, and I still don't know how to fix it.
Any advice will be much appreciated.
UPDATE
I forgot to show how I configure the AudioUnit
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &_audioUnit);
checkStatus(status);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Enable IO for playback
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Apply format
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&_destinationFormat,
sizeof(self.destinationFormat));
checkStatus(status);
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&_destinationFormat,
sizeof(self.destinationFormat));
checkStatus(status);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
[self configureMicrophoneBufferList];
// Initialise
status = AudioUnitInitialize(_audioUnit);
Three things might be problems:
For silence (during underflow), you might want to try filling the buffers with inNumberFrames worth of zero samples instead of returning them unmodified (see the sketch after these points).
Inside the audio callbacks, using any Objective-C messaging (your respondsToSelector: call) is not recommended by Apple DTS.
You shouldn't free buffers or call AudioComponentInstanceDispose until audio processing has really stopped. Since audio units run in another real-time thread, they don't (get thread or CPU time and) really stop until some time after your app makes the stop-audio call. I would wait a couple of seconds, and certainly not call (re)initialize or (re)start until after that delay.
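A minimal sketch of the zero-fill idea, applied inside a playback render callback like the one in the question (this is only the underflow branch; the names come from the question's code):
// When the circular buffer has nothing to offer, output real silence:
for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
    memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
return noErr;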

How can I mute microphone input volume using AudioUnits?

I'm using AudioUnits to record and play sound . It's part of a soft phone.
This is my initialisation:
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 8000;
audioFormat.mFormatID = kAudioFormatULaw;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &audioFormat, sizeof(audioFormat));
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
During the recording process I'm using a callback to process the sound:
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
Now at some point I would like to mute the microphone. After googling, I found this as a solution:
-(void) setMuteOn {
AudioUnitParameterValue volume = 0.0;
AudioUnitSetProperty(audioUnit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, 1, &volume, 0);
}
But it doesn't work. Perhaps I need to do some kind of refresh on my audioUnit, I don't know. Any help would be great.
Actually it was easier than I thought. In the callback method I just overwrote those sound buffers with silence. In my case I was using ULAW compression, so I just filled my array with 0xFF.
The microphone was still recording, but I stopped using the data.
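A minimal sketch of that idea, factored into a hypothetical helper (the helper name is mine, not from the original; 0xFF is the µ-law byte for silence, matching the kAudioFormatULaw stream above):
// Call this on the rendered buffer list before handing the samples on.
static void MuteULawBuffers(AudioBufferList *bufferList)
{
    for (UInt32 i = 0; i < bufferList->mNumberBuffers; i++) {
        memset(bufferList->mBuffers[i].mData, 0xFF, bufferList->mBuffers[i].mDataByteSize);
    }
}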
You could do the following which I think is a little cleaner.
-(BOOL)microphoneInput:(BOOL)enable;
{
UInt32 enableInput = (enable)? 1 : 0;
OSStatus status = AudioUnitSetProperty(
ioUnit,//our I/O unit
kAudioOutputUnitProperty_EnableIO, //property we are changing
kAudioUnitScope_Input,
kInputBus, //#define kInputBus 1
&enableInput,
sizeof (enableInput)
);
CheckStatus(status, @"Unable to enable/disable input");
return (status == noErr);
}

Duplex Audio communication using AudioUnits

I am working on a app which has following requirements:
Record real time audio from iOS device (iPhone/iPad) and send to server over network
Play received audio from network server on iOS device(iPhone/iPad)
Above mentioned things need to be done simultaneously.
I have used AudioUnit for this.
I have run into a problem where I hear the same audio that I speak into the iPhone mic instead of the audio received from the network server.
I have searched a lot for how to avoid this but haven't found a solution.
If anyone has had the same problem and found a solution, sharing it would help a lot.
Here is my code for initializing the audio unit:
-(void)initializeAudioUnit
{
audioUnit = NULL;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
UInt32 flag = 1;
//enable IO for recording
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
AudioStreamBasicDescription audioStreamBasicDescription;
// Describe format
audioStreamBasicDescription.mSampleRate = 16000;
audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked |kLinearPCMFormatFlagIsNonInterleaved;
audioStreamBasicDescription.mFramesPerPacket = 1;
audioStreamBasicDescription.mChannelsPerFrame = 1;
audioStreamBasicDescription.mBitsPerChannel = 16;
audioStreamBasicDescription.mBytesPerPacket = 2;
audioStreamBasicDescription.mBytesPerFrame = 2;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioStreamBasicDescription,
sizeof(audioStreamBasicDescription));
NSLog(#"Status[%d]",(int)status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioStreamBasicDescription,
sizeof(audioStreamBasicDescription));
NSLog(#"Status[%d]",(int)status);
AURenderCallbackStruct callbackStruct;
// Set input callback
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
flag=0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
}
Recording Call Back
static OSStatus recordingCallback (void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,const AudioTimeStamp *inTimeStamp,UInt32 inBusNumber,UInt32 inNumberFrames,AudioBufferList *ioData)
{
MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
AudioBuffer tempBuffer;
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = inNumberFrames * 2;
tempBuffer.mData = malloc(inNumberFrames *2);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = tempBuffer;
OSStatus status;
status = AudioUnitRender(THIS->audioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
&bufferList);
if (noErr != status) {
printf("AudioUnitRender error: %d", (int)status);
return noErr;
}
tempBuffer.mDataByteSize, &encodedSize,(__bridge void *)(THIS));
[THIS processAudio:&bufferList];
free(bufferList.mBuffers[0].mData);
return noErr;
}
Playback Call Back
static OSStatus playbackCallback(void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,const AudioTimeStamp *inTimeStamp,UInt32 inBusNumber,UInt32 inNumberFrames,AudioBufferList *ioData) {
NSLog(#"In play back call back");
MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
int32_t availableBytes=0;
char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);
NSLog(#"bytes available in buffer[%d]",availableBytes);
decodeSpeexData(inBuffer, availableBytes,(__bridge void *)(THIS));
ConsumeReadBytes(&(THIS->mybuffer), availableBytes);
memcpy(targetBuffer, THIS->outTemp, inNumberFrames*2);
return noErr;
}
Process Audio recorded from MIC
- (void) processAudio: (AudioBufferList*) bufferList
{
AudioBuffer sourceBuffer = bufferList->mBuffers[0];
// NSLog(#"Origin size: %d", (int)sourceBuffer.mDataByteSize);
int size = 0;
encodeAudioDataSpeex((spx_int16_t*)sourceBuffer.mData, sourceBuffer.mDataByteSize, &size, (__bridge void *)(self));
[self performSelectorOnMainThread:@selector(SendAudioData:) withObject:[NSData dataWithBytes:self->jitterBuffer length:size] waitUntilDone:NO];
NSLog(@"Encoded size: %i", size);
}
Your playbackCallback render callback is responsible for the audio that is sent to the RemoteIO speaker output. If this render callback puts no data into its callback buffers, whatever junk was left in those buffers (perhaps data previously captured by the record callback) might be sent to the speaker instead.
Also, Apple DTS strongly recommends that your recordingCallback not include any memory management calls, such as malloc(). So this may be a bug contributing to the problem as well.
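One way to get the malloc() out of the callback, sketched here under the assumption of 16-bit mono data as in the question (the names capBufferList and kMaxFrames are illustrative, not from the original code): allocate the capture buffer once, before starting the unit, and reuse it on every render.
#define kMaxFrames 4096  // upper bound on inNumberFrames; size for your IO buffer duration
// One-time setup, outside the render thread (e.g. before AudioOutputUnitStart):
AudioBufferList *capBufferList = malloc(sizeof(AudioBufferList));
capBufferList->mNumberBuffers = 1;
capBufferList->mBuffers[0].mNumberChannels = 1;
capBufferList->mBuffers[0].mDataByteSize = kMaxFrames * sizeof(SInt16);
capBufferList->mBuffers[0].mData = malloc(kMaxFrames * sizeof(SInt16));
// Inside recordingCallback: no allocation, just reset the size and render into it.
// capBufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
// AudioUnitRender(THIS->audioUnit, ioActionFlags, inTimeStamp, kInputBus, inNumberFrames, capBufferList);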

Recording iPhone output sound using Audio unit

I am developing an application in which I want to mix multiple sounds to create a single audio file. I am able to mix the multiple audio files into a single output using Apple's MixerHost example, but I am not able to record that output to a single audio file.
I referred to this link to record the generated audio. Following that link I am able to generate an audio file, but it's silent (there is no sound).
Following is my code:
-(void) initializeOutputUnit
{
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
// Describe format
AudioStreamBasicDescription audioFormat={0};
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
AudioUnitInitialize(audioUnit);
AudioOutputUnitStart(audioUnit);
// Initialize the audio file
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat: @"%@/output.caf", documentsDirectory];
NSLog(@">>> %@\n", destinationFilePath);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (__bridge CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileCAFType, &audioFormat, NULL, kAudioFileFlags_EraseFile, &mAudioFileRef);
CFRelease(destinationURL);
NSAssert(setupErr == noErr, #"Couldn't create file for writing");
setupErr = ExtAudioFileSetProperty(mAudioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
NSAssert(setupErr == noErr, #"Couldn't create file for format");
setupErr = ExtAudioFileWriteAsync(mAudioFileRef, 0, NULL);
NSAssert(setupErr == noErr, #"Couldn't initialize write buffers for audio file");}
static OSStatus recordingCallback (void * inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
AudioBufferList bufferList;
SInt16 samples[inNumberFrames]; // A large enough size to not have to worry about buffer overrun
memset (&samples, 0, sizeof (samples));
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mData = samples;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames*sizeof(SInt16);
AudioMixerViewController* THIS = (__bridge AudioMixerViewController *)inRefCon;
OSStatus status;
status = AudioUnitRender(THIS->audioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
&bufferList);
if (noErr != status) {
printf("AudioUnitRender error: %ld", status);
return noErr;
}
// Now, we have the samples we just read sitting in buffers in bufferList
ExtAudioFileWriteAsync(THIS->mAudioFileRef, inNumberFrames, &bufferList);
return noErr;
}
I am getting OSStatus(ExtAudioFileDispose): 0 and OSStatus(ExtAudioFileDispose): -50
Please help me. Thanks!
