Playing Audio on iOS from Socket connection - ios

I hope you can help me with this issue. I have seen a lot of questions related to this, but none of them has helped me figure out what I am doing wrong here.
On Android I have an AudioRecord that records audio and sends it as a byte array over a socket connection to clients. That part was very easy on Android and works perfectly.
When I started working on iOS I found there is no easy way to do this, so after two days of research and plugging and playing, this is what I have. It still does not play any audio: it makes a noise when it starts, but none of the audio being transferred over the socket is played. I confirmed that the socket is receiving data by logging each element in the buffer array.
Here is all the code I am using; a lot of it is reused from a bunch of sites whose links I can't remember. (By the way, I am using Audio Units.)
First up, audio processor:
Play Callback
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
/**
This is the reference to the object who owns the callback.
*/
AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;
// iterate over the incoming stream and copy it to the output stream
for (int i=0; i < ioData->mNumberBuffers; i++) {
AudioBuffer buffer = ioData->mBuffers[i];
// find minimum size
UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);
// copy buffer to audio buffer which gets played after function return
memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);
// set data size
buffer.mDataByteSize = size;
}
return noErr;
}
Audio processor initialize
-(void)initializeAudio
{
OSStatus status;
// We define the audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output; // we want to output
desc.componentSubType = kAudioUnitSubType_RemoteIO; // we want input and output
desc.componentFlags = 0; // must be zero
desc.componentFlagsMask = 0; // must be zero
desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider
// find the AU component by description
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// create audio unit by component
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
[self hasError:status:__FILE__:__LINE__];
// define that we want record io on the input bus
UInt32 flag = 1;
// define that we want play on io on the output bus
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO, // use io
kAudioUnitScope_Output, // scope to output
kOutputBus, // select output bus (0)
&flag, // set flag
sizeof(flag));
[self hasError:status:__FILE__:__LINE__];
/*
We need to specify the format we want to work with.
We use linear PCM because it is uncompressed and we work on raw data.
We want 16 bits, 2 bytes per packet/frame, at 44 kHz.
*/
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = SAMPLE_RATE;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// set the format on the output stream
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
[self hasError:status:__FILE__:__LINE__];
/**
We need to define a callback structure which holds
a pointer to the recordingCallback and a reference to
the audio processor object
*/
AURenderCallbackStruct callbackStruct;
/*
We do the same on the output stream to hear what is coming
from the input stream
*/
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
// set playbackCallback as callback on our renderer for the output bus
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
// reset flag to 0
flag = 0;
/*
We tell the audio unit not to allocate its own render buffer for the
input bus, so that we can write directly into our own buffer.
*/
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
/*
we set the number of channels to mono and allocate our block size to
1024 bytes.
*/
audioBuffer.mNumberChannels = 1;
audioBuffer.mDataByteSize = 512 * 2;
audioBuffer.mData = malloc( 512 * 2 );
// Initialize the Audio Unit and cross fingers =)
status = AudioUnitInitialize(audioUnit);
[self hasError:status:__FILE__:__LINE__];
NSLog(@"Started");
}
Start Playing
-(void)start;
{
// start the audio unit. You should hear something, hopefully :)
OSStatus status = AudioOutputUnitStart(audioUnit);
[self hasError:status:__FILE__:__LINE__];
}
Adding data to the buffer
-(void)processBuffer: (AudioBufferList*) audioBufferList
{
AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];
// we check here if the input data byte size has changed
if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
// clear old buffer
free(audioBuffer.mData);
// assign the new byte size and allocate mData
audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
}
// copy the incoming audio data to the audio buffer
memcpy(audioBuffer.mData, audioBufferList->mBuffers[0].mData, audioBufferList->mBuffers[0].mDataByteSize);
}
Stream connection callback (Socket)
-(void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
if(eventCode == NSStreamEventHasBytesAvailable)
{
if(aStream == inputStream) {
uint8_t buffer[1024];
UInt32 len;
while ([inputStream hasBytesAvailable]) {
len = (UInt32)[inputStream read:buffer maxLength:sizeof(buffer)];
if(len > 0)
{
AudioBuffer abuffer;
abuffer.mDataByteSize = len; // sample size
abuffer.mNumberChannels = 1; // one channel
abuffer.mData = buffer;
int16_t audioBuffer[len];
for(int i = 0; i <= len; i++)
{
audioBuffer[i] = MuLaw_Decode(buffer[i]);
}
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = abuffer;
NSLog(@"%u", (unsigned int)bufferList.mBuffers[0].mDataByteSize); // log how many bytes arrived
[audioProcessor processBuffer:&bufferList];
}
}
}
}
}
The MuLaw_Decode
#define MULAW_BIAS 33
int16_t MuLaw_Decode(uint8_t number)
{
uint8_t sign = 0, position = 0;
int16_t decoded = 0;
number = ~number;
if(number&0x80)
{
number&=~(1<<7);
sign = -1;
}
position= ((number & 0xF0) >> 4) + 5;
decoded = ((1<<position) | ((number&0x0F) << (position - 4)) |(1<<(position-5))) - MULAW_BIAS;
return (sign == 0) ? decoded : (-(decoded));
}
And the code that opens the connection and initialises the audio processor
CFReadStreamRef readStream;
CFWriteStreamRef writeStream;
CFStreamCreatePairWithSocketToHost(NULL, (CFStringRef)@"10.0.0.14", 6000, &readStream, &writeStream);
inputStream = (__bridge_transfer NSInputStream *)readStream;
outputStream = (__bridge_transfer NSOutputStream *)writeStream;
[inputStream setDelegate:self];
[outputStream setDelegate:self];
[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream open];
[outputStream open];
audioProcessor = [[AudioProcessor alloc] init];
[audioProcessor start];
[audioProcessor setGain:1];
I believe the issue in my code is in the socket connection callback: I am not doing the right thing with the received data.
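For reference, here is a minimal sketch of how the decoded 16-bit samples (rather than the raw mu-law bytes) could be handed to the processor. The variable names match the snippet above; the corrected loop bound and byte size are my assumptions, not the original code:
if (len > 0)
{
    // decode each received mu-law byte into a signed 16-bit PCM sample
    int16_t decoded[1024];
    for (int i = 0; i < len; i++) { // note: i < len, not i <= len
        decoded[i] = MuLaw_Decode(buffer[i]);
    }
    // wrap the *decoded* samples, not the raw mu-law bytes
    AudioBuffer abuffer;
    abuffer.mNumberChannels = 1;
    abuffer.mDataByteSize = len * sizeof(int16_t);
    abuffer.mData = decoded;
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = abuffer;
    // processBuffer copies the data, so the stack buffer is safe here
    [audioProcessor processBuffer:&bufferList];
}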

I solved this in the end; see my answer here.
I intended to put the code here, but it would have been a lot of copying and pasting.

Related

CoreAudio: change sample rate of microphone and get data in a callback?

This is my first attempt at using CoreAudio, but my goal is to capture microphone data, resample it to a new sample rate, and then capture the raw 16-bit PCM data.
My strategy for this is to make an AUGraph with the microphone --> a sample rate converter, and then have a callback that gets data from the output of the converter (which I'm hoping is mic output at the new sample rate?).
Right now my callback just fires with a null AudioBufferList*, which obviously isn't correct. How should I set this up and what am I doing wrong?
Code follows:
CheckError(NewAUGraph(&audioGraph), @"Creating graph");
CheckError(AUGraphOpen(audioGraph), @"Opening graph");
AUNode micNode, converterNode;
AudioUnit micUnit, converterUnit;
makeMic(&audioGraph, &micNode, &micUnit);
// get the Input/inputBus's stream description
UInt32 sizeASBD = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription hwASBDin;
AudioUnitGetProperty(micUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kInputBus,
&hwASBDin,
&sizeASBD);
makeConverter(&audioGraph, &converterNode, &converterUnit, hwASBDin);
// connect mic output to converterNode
CheckError(AUGraphConnectNodeInput(audioGraph, micNode, 1, converterNode, 0),
#"Connecting mic to converter");
// set callback on the output? maybe?
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = audioCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
CheckError(AudioUnitSetProperty(micUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct)),
#"Setting callback");
CheckError(AUGraphInitialize(audioGraph), #"AUGraphInitialize");
// activate audio session
NSError *err = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
if (![audioSession setActive:YES error:&err]){
[self error:[NSString stringWithFormat:@"Couldn't activate audio session: %@", err]];
}
CheckError(AUGraphStart(audioGraph), @"AUGraphStart");
and:
void makeMic(AUGraph *graph, AUNode *micNode, AudioUnit *micUnit) {
AudioComponentDescription inputDesc;
inputDesc.componentType = kAudioUnitType_Output;
inputDesc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
inputDesc.componentFlags = 0;
inputDesc.componentFlagsMask = 0;
inputDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
CheckError(AUGraphAddNode(*graph, &inputDesc, micNode),
#"Adding mic node");
CheckError(AUGraphNodeInfo(*graph, *micNode, 0, micUnit),
#"Getting mic unit");
// enable microphone for recording
UInt32 flagOn = 1; // enable value
CheckError(AudioUnitSetProperty(*micUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flagOn,
sizeof(flagOn)),
#"Enabling microphone");
}
and:
void makeConverter(AUGraph *graph, AUNode *converterNode, AudioUnit *converterUnit, AudioStreamBasicDescription inFormat) {
AudioComponentDescription sampleConverterDesc;
sampleConverterDesc.componentType = kAudioUnitType_FormatConverter;
sampleConverterDesc.componentSubType = kAudioUnitSubType_AUConverter;
sampleConverterDesc.componentFlags = 0;
sampleConverterDesc.componentFlagsMask = 0;
sampleConverterDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
CheckError(AUGraphAddNode(*graph, &sampleConverterDesc, converterNode),
#"Adding converter node");
CheckError(AUGraphNodeInfo(*graph, *converterNode, 0, converterUnit),
#"Getting converter unit");
// describe desired output format
AudioStreamBasicDescription convertedFormat;
convertedFormat.mSampleRate = 16000.0;
convertedFormat.mFormatID = kAudioFormatLinearPCM;
convertedFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
convertedFormat.mFramesPerPacket = 1;
convertedFormat.mChannelsPerFrame = 1;
convertedFormat.mBitsPerChannel = 16;
convertedFormat.mBytesPerPacket = 2;
convertedFormat.mBytesPerFrame = 2;
// set format descriptions
CheckError(AudioUnitSetProperty(*converterUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0, // should be the only bus #
&inFormat,
sizeof(inFormat)),
#"Setting format of converter input");
CheckError(AudioUnitSetProperty(*converterUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0, // should be the only bus #
&convertedFormat,
sizeof(convertedFormat)),
#"Setting format of converter output");
}
A render callback is used as a source for an audio unit. If you set the kAudioOutputUnitProperty_SetInputCallback property on the remoteIO unit, you must call AudioUnitRender from within the callback you provide, and then you would have to do the sample rate conversion manually, which is ugly.
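For contrast, that input-callback pattern looks roughly like the following (the struct, the ioUnit field, and the preallocated bufferList are my own assumptions for illustration; input is bus 1 on the remoteIO):
typedef struct {
    AudioUnit ioUnit;              // the remoteIO unit
    AudioBufferList *bufferList;   // preallocated, sized for max frames per slice
} RecorderState;

static OSStatus inputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData) {
    // ioData is NULL for an input callback; pull the samples ourselves
    RecorderState *state = (RecorderState *)inRefCon;
    OSStatus status = AudioUnitRender(state->ioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      1,                 // remoteIO input bus
                                      inNumberFrames,
                                      state->bufferList);
    // ...manual sample rate conversion / processing of state->bufferList here...
    return status;
}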
There is an "easier" way. The remoteIO acts as two units, the input (mic) and the output (speaker). Create a graph with a remoteIO, then connect the mic to the speaker, using the desired format. Then you can get the data using a renderNotify callback, which acts as a "tap".
I created a ViewController class to demonstrate
#import "ViewController.h"
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
//Set your audio session to allow recording
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:NULL];
[audioSession setActive:1 error:NULL];
//Create graph and units
AUGraph graph = NULL;
NewAUGraph(&graph);
AUNode ioNode;
AudioUnit ioUnit = NULL;
AudioComponentDescription ioDescription = {0};
ioDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioDescription.componentType = kAudioUnitType_Output;
ioDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
AUGraphAddNode(graph, &ioDescription, &ioNode);
AUGraphOpen(graph);
AUGraphNodeInfo(graph, ioNode, NULL, &ioUnit);
UInt32 enable = 1;
AudioUnitSetProperty(ioUnit,kAudioOutputUnitProperty_EnableIO,kAudioUnitScope_Input,1,&enable,sizeof(enable));
//Set the output of the ioUnit's input bus, and the input of its output bus, to the desired format.
//Core Audio basically has implicit converters that we're taking advantage of.
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate = 16000.0;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mFramesPerPacket = 1;
asbd.mChannelsPerFrame = 1;
asbd.mBitsPerChannel = 16;
asbd.mBytesPerPacket = 2;
asbd.mBytesPerFrame = 2;
AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &asbd, sizeof(asbd));
AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &asbd, sizeof(asbd));
//Connect the output of the remoteIO's input bus to the input of its output bus
AUGraphConnectNodeInput(graph, ioNode, 1, ioNode, 0);
//Add a render notify with a bridged reference to self (If using ARC)
AudioUnitAddRenderNotify(ioUnit, renderNotify, (__bridge void *)self);
//Start graph
AUGraphInitialize(graph);
AUGraphStart(graph);
CAShow(graph);
}
OSStatus renderNotify(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData){
//Filter anything that isn't a post render call on the input bus
if (*ioActionFlags != kAudioUnitRenderAction_PostRender || inBusNumber != 1) {
return noErr;
}
//Get a reference to self
ViewController *self = (__bridge ViewController *)inRefCon;
//Do stuff with audio
//Optionally mute the audio by setting it to zero;
for (int i = 0; i < ioData->mNumberBuffers; i++) {
memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}
return noErr;
}
@end

AudioUnit "Sometime" doesn't work. "Only" happens on 6s (may be 6s plus, but I haven't tested)

I am using AudioUnit for playback and recording at the same time. The preferred settings are a sample rate of 48 kHz and a buffer duration of 0.02 s.
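(For context, such preferences would typically be requested on the audio session roughly like this; this is a sketch, not the poster's code, and error handling is omitted.)
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
[session setPreferredSampleRate:48000.0 error:nil];     // request 48 kHz
[session setPreferredIOBufferDuration:0.02 error:nil];  // request ~20 ms buffers
[session setActive:YES error:nil];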
Here is render callback for playing and recording:
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
IosAudioController *microphone = (__bridge IosAudioController *)inRefCon;
// render audio into buffer
OSStatus result = AudioUnitRender(microphone.audioUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
microphone.tempBuffer);
checkStatus(result);
// kAudioUnitErr_InvalidPropertyValue
// notify delegate of new buffer list to process
if ([microphone.dataSource respondsToSelector:@selector(microphone:hasBufferList:withBufferSize:withNumberOfChannels:)])
{
[microphone.dataSource microphone:microphone
hasBufferList:microphone.tempBuffer
withBufferSize:inNumberFrames
withNumberOfChannels:microphone.destinationFormat.mChannelsPerFrame];
}
return result;
}
/**
This callback is called when the audioUnit needs new data to play through the
speakers. If you don't have any, just don't write anything in the buffers
*/
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
IosAudioController *output = (__bridge IosAudioController *)inRefCon;
//
// Try to ask the data source for audio data to fill out the output's
// buffer list
//
if( [output.dataSource respondsToSelector:@selector(outputShouldUseCircularBuffer:)] ){
TPCircularBuffer *circularBuffer = [output.dataSource outputShouldUseCircularBuffer:output];
if( !circularBuffer ){
// SInt32 *left = ioData->mBuffers[0].mData;
// SInt32 *right = ioData->mBuffers[1].mData;
// for(int i = 0; i < inNumberFrames; i++ ){
// left[ i ] = 0.0f;
// right[ i ] = 0.0f;
// }
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
return noErr;
};
/**
Thank you Michael Tyson (A Tasty Pixel) for writing the TPCircularBuffer, you are amazing!
*/
// Get the available bytes in the circular buffer
int32_t availableBytes;
void *buffer = TPCircularBufferTail(circularBuffer,&availableBytes);
int32_t amount = 0;
// float floatNumber = availableBytes * 0.25 / 48;
// float speakerNumber = ioData->mBuffers[0].mDataByteSize * 0.25 / 48;
for (int i=0; i < ioData->mNumberBuffers; i++) {
AudioBuffer abuffer = ioData->mBuffers[i];
// Ideally we'd have all the bytes to be copied, but compare it against the available bytes (get min)
amount = MIN(abuffer.mDataByteSize,availableBytes);
// copy buffer to audio buffer which gets played after function return
memcpy(abuffer.mData, buffer, amount);
// set data size
abuffer.mDataByteSize = amount;
}
// Consume those bytes ( this will internally push the head of the circular buffer )
TPCircularBufferConsume(circularBuffer,amount);
}
else
{
//
// Silence if there is nothing to output
//
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
}
return noErr;
}
_tempBuffer is configured with 4096 frames.
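For reference, a plausible shape for that buffer, assuming mono 16-bit samples, would be something like the following (this is my guess at what configureMicrophoneBufferList does, not the poster's actual code):
- (void)configureMicrophoneBufferList {
    UInt32 frames = 4096;
    UInt32 byteSize = frames * sizeof(SInt16);          // mono, 16-bit samples
    _tempBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList));
    _tempBuffer->mNumberBuffers = 1;
    _tempBuffer->mBuffers[0].mNumberChannels = 1;
    _tempBuffer->mBuffers[0].mDataByteSize = byteSize;
    _tempBuffer->mBuffers[0].mData = malloc(byteSize);
}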
Here is how I deallocate the audioUnit. Note: due to a known issue where the VoiceProcessingIO unit may not work properly if you start it, stop it, and start it again, I need to dispose of it and initialize it again every time. The issue was posted somewhere, but I can't remember the link.
if (_tempBuffer != NULL) {
for(unsigned i = 0; i < _tempBuffer->mNumberBuffers; i++)
{
free(_tempBuffer->mBuffers[i].mData);
}
free(_tempBuffer);
}
AudioComponentInstanceDispose(_audioUnit);
This configuration works well on the iPhone 6, 6 Plus, and earlier devices, but something goes wrong on the 6s (and maybe the 6s Plus). Sometimes (this kind of bug is really annoying; for me it happens 6-7 times out of 20 tests) there is still incoming and outgoing data from the IO unit, but no sound at all.
It never seems to happen on the first test, so I guess it may be a memory issue with the IO unit, but I still don't know how to fix it.
Any advice will be much appreciated.
UPDATE
I forgot to show how I configure the AudioUnit
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &_audioUnit);
checkStatus(status);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Enable IO for playback
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Apply format
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&_destinationFormat,
sizeof(self.destinationFormat));
checkStatus(status);
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&_destinationFormat,
sizeof(self.destinationFormat));
checkStatus(status);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
[self configureMicrophoneBufferList];
// Initialise
status = AudioUnitInitialize(_audioUnit);
Three things might be problems:
For silence (during underflow), you might want to try filling the buffers with inNumberFrames of zeros instead of returning them unmodified (see the sketch after this list).
Inside the audio callbacks, using any Objective-C messaging (your respondsToSelector: call) is not recommended by Apple DTS.
You shouldn't free buffers or call AudioComponentInstanceDispose until audio processing has really stopped. Since audio units run in another real-time thread, they don't (get thread or CPU time and) really stop until some time after your app makes the stop-audio call. I would wait a couple of seconds, and certainly not call (re)initialize or (re)start until after that delay (also sketched below).
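As an illustration of the first and third points, a minimal sketch (assuming the circular-buffer playback callback and the _audioUnit ivar shown above) might look like this:
// 1) On underflow, write silence instead of leaving the output buffers untouched
if (!circularBuffer) {
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }
    *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    return noErr;
}

// 3) Stop first, then dispose only after the unit has had time to wind down
AudioOutputUnitStop(_audioUnit);
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    AudioUnitUninitialize(_audioUnit);
    AudioComponentInstanceDispose(_audioUnit);
});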

Duplex Audio communication using AudioUnits

I am working on a app which has following requirements:
Record real time audio from iOS device (iPhone/iPad) and send to server over network
Play received audio from network server on iOS device(iPhone/iPad)
Above mentioned things need to be done simultaneously.
I have used AudioUnit for this.
I have run into a problem where I hear the same audio that I speak into the iPhone mic instead of the audio received from the network server.
I have searched a lot for how to avoid this but haven't found a solution.
If anyone has had the same problem and found a solution, sharing it would help a lot.
Here is my code for initializing the audio unit:
-(void)initializeAudioUnit
{
audioUnit = NULL;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
UInt32 flag = 1;
//enable IO for recording
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
AudioStreamBasicDescription audioStreamBasicDescription;
// Describe format
audioStreamBasicDescription.mSampleRate = 16000;
audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked |kLinearPCMFormatFlagIsNonInterleaved;
audioStreamBasicDescription.mFramesPerPacket = 1;
audioStreamBasicDescription.mChannelsPerFrame = 1;
audioStreamBasicDescription.mBitsPerChannel = 16;
audioStreamBasicDescription.mBytesPerPacket = 2;
audioStreamBasicDescription.mBytesPerFrame = 2;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioStreamBasicDescription,
sizeof(audioStreamBasicDescription));
NSLog(#"Status[%d]",(int)status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioStreamBasicDescription,
sizeof(audioStreamBasicDescription));
NSLog(#"Status[%d]",(int)status);
AURenderCallbackStruct callbackStruct;
// Set input callback
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
flag=0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
}
Recording Call Back
static OSStatus recordingCallback (void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,const AudioTimeStamp *inTimeStamp,UInt32 inBusNumber,UInt32 inNumberFrames,AudioBufferList *ioData)
{
MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
AudioBuffer tempBuffer;
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = inNumberFrames * 2;
tempBuffer.mData = malloc(inNumberFrames *2);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = tempBuffer;
OSStatus status;
status = AudioUnitRender(THIS->audioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
&bufferList);
if (noErr != status) {
printf("AudioUnitRender error: %d", (int)status);
return noErr;
}
tempBuffer.mDataByteSize, &encodedSize,(__bridge void *)(THIS));
[THIS processAudio:&bufferList];
free(bufferList.mBuffers[0].mData);
return noErr;
}
Playback Call Back
static OSStatus playbackCallback(void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,const AudioTimeStamp *inTimeStamp,UInt32 inBusNumber,UInt32 inNumberFrames,AudioBufferList *ioData) {
NSLog(#"In play back call back");
MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
int32_t availableBytes=0;
char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);
NSLog(#"bytes available in buffer[%d]",availableBytes);
decodeSpeexData(inBuffer, availableBytes,(__bridge void *)(THIS));
ConsumeReadBytes(&(THIS->mybuffer), availableBytes);
memcpy(targetBuffer, THIS->outTemp, inNumberFrames*2);
return noErr;
}
Process Audio recorded from MIC
- (void) processAudio: (AudioBufferList*) bufferList
{
AudioBuffer sourceBuffer = bufferList->mBuffers[0];
// NSLog(#"Origin size: %d", (int)sourceBuffer.mDataByteSize);
int size = 0;
encodeAudioDataSpeex((spx_int16_t*)sourceBuffer.mData, sourceBuffer.mDataByteSize, &size, (__bridge void *)(self));
[self performSelectorOnMainThread:@selector(SendAudioData:) withObject:[NSData dataWithBytes:self->jitterBuffer length:size] waitUntilDone:NO];
NSLog(#"Encoded size: %i", size);
}
Your playbackCallback render callback is responsible for the audio that is sent to the RemoteIO speaker output. If this RemoteIO render callback puts no data in its callback buffers, whatever junk was left in those buffers (perhaps data that was previously in the record callback buffers) might be sent to the speaker instead.
Also, Apple DTS strongly recommends that your recordingCallback not include any memory management calls, such as malloc(), so this may be another bug contributing to the problem.
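For example, the render could go into a buffer list that was allocated once at setup time rather than malloc'ed per callback. A minimal sketch, assuming a preallocated THIS->recordBufferList (a hypothetical ivar, not in the original code):
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
    // recordBufferList was allocated during setup, sized for the maximum
    // frames per slice, so nothing is malloc'ed on the audio thread
    THIS->recordBufferList->mBuffers[0].mDataByteSize = inNumberFrames * 2;
    OSStatus status = AudioUnitRender(THIS->audioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      kInputBus,
                                      inNumberFrames,
                                      THIS->recordBufferList);
    if (status == noErr) {
        [THIS processAudio:THIS->recordBufferList];
    }
    return status;
}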

iOS app crashes when headphones are plugged in or unplugged

I'm running a SIP audio streaming app on iOS 6.1.3 iPad2 and new iPad.
I start my app on my iPad (nothing plugged in).
Audio works.
I plug in the headphones.
The app crashes: malloc: error for object 0x....: pointer being freed was not allocated or EXC_BAD_ACCESS
Alternatively:
I start my app on my iPad (with the headphones plugged in).
Audio comes out of the headphones.
I unplug the headphones.
The app crashes: malloc: error for object 0x....: pointer being freed was not allocated or EXC_BAD_ACCESS
The app code employs AudioUnit api based on http://code.google.com/p/ios-coreaudio-example/ sample code (see below).
I use a kAudioSessionProperty_AudioRouteChange callback to get change awareness, so there are three callbacks from the OS sound manager:
1) Process recorded mic samples
2) Provide samples for the speaker
3) Inform audio HW presence
After lots of tests, my feeling is that the tricky code is the part that performs the mic capture. After the plug/unplug action, most of the time the recording callback is called a few times before the route change callback fires, causing a later segmentation fault, and the route change callback is never called. More specifically, I think the AudioUnitRender function causes a bad memory access while no exception is thrown at all.
My feeling is that the non-atomic recording callback races with the OS as it updates the structures related to sound devices. The less atomic the recording callback is, the more likely the OS hardware update and the recording callback run concurrently.
I modified my code to keep the recording callback as thin as possible, but my feeling is that the high processing load from other threads of my app feeds the concurrency races described above, so the malloc/free error arises in other parts of the code due to the AudioUnitRender bad access.
I tried to reduce recording callback latency by:
UInt32 numFrames = 256;
UInt32 dataSize = sizeof(numFrames);
AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
0,
&numFrames,
dataSize);
and I tried to boost the problematic code:
dispatch_async(dispatch_get_main_queue(), ^{
Does anybody has a tip or solution for that?
In order to reproduce the error here is my audio session code:
//
// IosAudioController.m
// Aruts
//
// Created by Simon Epskamp on 10/11/10.
// Copyright 2010 __MyCompanyName__. All rights reserved.
//
#import "IosAudioController.h"
#import <AudioToolbox/AudioToolbox.h>
#define kOutputBus 0
#define kInputBus 1
IosAudioController* iosAudio;
void checkStatus(int status) {
if (status) {
printf("Status not 0! %d\n", status);
// exit(1);
}
}
/**
* This callback is called when new audio data from the microphone is available.
*/
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Because of the way our audio format (setup below) is chosen:
// we only need 1 buffer, since it is mono
// Samples are 16 bits = 2 bytes.
// 1 frame includes only 1 sample
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
buffer.mData = malloc( inNumberFrames * 2 );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
NSLog(#"Recording Callback 1 0x%x ? 0x%x",buffer.mData,
bufferList.mBuffers[0].mData);
// Then:
// Obtain recorded samples
OSStatus status;
status = AudioUnitRender([iosAudio audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&bufferList);
checkStatus(status);
// Now, we have the samples we just read sitting in buffers in bufferList
// Process the new data
[iosAudio processAudio:&bufferList];
NSLog(#"Recording Callback 2 0x%x ? 0x%x",buffer.mData,
bufferList.mBuffers[0].mData);
// release the malloc'ed data in the buffer we created earlier
free(bufferList.mBuffers[0].mData);
return noErr;
}
/**
* This callback is called when the audioUnit needs new data to play through the
* speakers. If you don't have any, just don't write anything in the buffers
*/
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Notes: ioData contains buffers (may be more than one!)
// Fill them up as much as you can.
// Remember to set the size value in each
// buffer to match how much data is in the buffer.
for (int i=0; i < ioData->mNumberBuffers; i++) {
// in practice we will only ever have 1 buffer, since audio format is mono
AudioBuffer buffer = ioData->mBuffers[i];
// NSLog(#" Buffer %d has %d channels and wants %d bytes of data.", i,
buffer.mNumberChannels, buffer.mDataByteSize);
// copy temporary buffer data to output buffer
UInt32 size = min(buffer.mDataByteSize,
[iosAudio tempBuffer].mDataByteSize);
// dont copy more data then we have, or then fits
memcpy(buffer.mData, [iosAudio tempBuffer].mData, size);
// indicate how much data we wrote in the buffer
buffer.mDataByteSize = size;
// uncomment to hear random noise
/*
* UInt16 *frameBuffer = buffer.mData;
* for (int j = 0; j < inNumberFrames; j++) {
* frameBuffer[j] = rand();
* }
*/
}
return noErr;
}
@implementation IosAudioController
@synthesize audioUnit, tempBuffer;
void propListener(void *inClientData,
AudioSessionPropertyID inID,
UInt32 inDataSize,
const void *inData) {
if (inID == kAudioSessionProperty_AudioRouteChange) {
UInt32 isAudioInputAvailable;
UInt32 size = sizeof(isAudioInputAvailable);
CFStringRef newRoute;
size = sizeof(CFStringRef);
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);
if (newRoute) {
CFIndex length = CFStringGetLength(newRoute);
CFIndex maxSize = CFStringGetMaximumSizeForEncoding(length,
kCFStringEncodingUTF8);
char *buffer = (char *)malloc(maxSize);
CFStringGetCString(newRoute, buffer, maxSize,
kCFStringEncodingUTF8);
//CFShow(newRoute);
printf("New route is %s\n",buffer);
if (CFStringCompare(newRoute, CFSTR("HeadsetInOut"), NULL) ==
kCFCompareEqualTo) // headset plugged in
{
printf("Headset\n");
} else {
printf("Another device\n");
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
sizeof (audioRouteOverride),&audioRouteOverride);
}
printf("New route is %s\n",buffer);
free(buffer);
}
newRoute = nil;
}
}
/**
* Initialize the audioUnit and allocate our own temporary buffer.
* The temporary buffer will hold the latest data coming in from the microphone,
* and will be copied to the output when this is requested.
*/
- (id) init {
self = [super init];
OSStatus status;
// Initialize and configure the audio session
AudioSessionInitialize(NULL, NULL, NULL, self);
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(audioCategory), &audioCategory);
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
propListener, self);
Float32 preferredBufferSize = .020;
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
sizeof(preferredBufferSize), &preferredBufferSize);
AudioSessionSetActive(true);
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType =
kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Enable IO for playback
flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 8000.00;
//audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags =
kAudioFormatFlagsCanonical/* kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagIsPacked*/;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Disable buffer allocation for the recorder (optional - do this if we want to
// pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
// Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per
// frame, thus 2 bytes per frame).
// Practice learns the buffers used contain 512 frames,
// if this changes it will be fixed in processAudio.
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = 512 * 2;
tempBuffer.mData = malloc( 512 * 2 );
// Initialise
status = AudioUnitInitialize(audioUnit);
checkStatus(status);
return self;
}
/**
* Start the audioUnit. This means data will be provided from
* the microphone, and requested for feeding to the speakers, by
* use of the provided callbacks.
*/
- (void) start {
OSStatus status = AudioOutputUnitStart(audioUnit);
checkStatus(status);
}
/**
* Stop the audioUnit
*/
- (void) stop {
OSStatus status = AudioOutputUnitStop(audioUnit);
checkStatus(status);
}
/**
* Change this function to decide what is done with incoming
* audio data from the microphone.
* Right now we copy it to our own temporary buffer.
*/
- (void) processAudio: (AudioBufferList*) bufferList {
AudioBuffer sourceBuffer = bufferList->mBuffers[0];
// fix tempBuffer size if it's the wrong size
if (tempBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
free(tempBuffer.mData);
tempBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
tempBuffer.mData = malloc(sourceBuffer.mDataByteSize);
}
// copy incoming audio data to temporary buffer
memcpy(tempBuffer.mData, bufferList->mBuffers[0].mData,
bufferList->mBuffers[0].mDataByteSize);
usleep(1000000); // <- TO REPRODUCE THE ERROR, CONCURRENCY MORE LIKELY
}
/**
* Clean up.
*/
- (void) dealloc {
AudioUnitUninitialize(audioUnit);
free(tempBuffer.mData);
// [super dealloc] must come last under manual reference counting
[super dealloc];
}
@end
According to my tests, the line that triggers the SEGV error is ultimately
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
sizeof (audioRouteOverride),&audioRouteOverride);
Changing the properties of an AudioUnit chain mid-flight is always tricky, but if you stop the AudioUnit before rerouting and start it again, it finishes using up all the buffers it has stored and then carries on with the new parameters.
Would that be acceptable, or do you need less of a gap between the change of route and the restart of the recording?
What I did was:
void propListener(void *inClientData,
AudioSessionPropertyID inID,
UInt32 inDataSize,
const void *inData) {
[iosAudio stop];
// ...
[iosAudio start];
}
No more crash on my iPhone 5 (your mileage may vary with different hardware)
The most logical explanation I have for that behavior, somewhat supported by these tests, is that the render pipe is asynchronous. If you take forever to manipulate your buffers, they just stay queued. But if you change the settings of the AudioUnit, you trigger a mass reset in the render queue with unknown side effects. The trouble is that these changes are synchronous, which retroactively affects all the asynchronous calls waiting patiently for their turn.
If you don't care about the missed samples, you can do something like:
static BOOL isStopped = NO;
static OSStatus recordingCallback(void *inRefCon, //...
{
if(isStopped) {
NSLog(#"Stopped, ignoring");
return noErr;
}
// ...
}
static OSStatus playbackCallback(void *inRefCon, //...
{
if(isStopped) {
NSLog(#"Stopped, ignoring");
return noErr;
}
// ...
}
// ...
/**
* Start the audioUnit. This means data will be provided from
* the microphone, and requested for feeding to the speakers, by
* use of the provided callbacks.
*/
- (void) start {
OSStatus status = AudioOutputUnitStart(_audioUnit);
checkStatus(status);
isStopped = NO;
}
/**
* Stop the audioUnit
*/
- (void) stop {
isStopped = YES;
OSStatus status = AudioOutputUnitStop(_audioUnit);
checkStatus(status);
}
// ...

Setting up an effect AudioUnit

I'm trying to write an iOS app that captures sound from the microphone, passes it through a high-pass filter, and does some calculation on the processed sound. Based on Stefan Popp's MicInput (http://www.stefanpopp.de/2011/capture-iphone-microphone/), I'm trying to put an effect audio unit (more specifically, a high-pass filter effect unit) in between the input and the output of the I/O audio unit. After setting up said AU, it gives me a 10877 error (kAudioUnitErr_InvalidElement) when I call AudioUnitRender(fxAudioUnit, ...) in the I/O AU's render callback.
AudioProcessingWithAudioUnitAPI.h
//
// AudioProcessingWithAudioUnitAPI.h
//
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVAudioSession.h>
@interface AudioProcessingWithAudioUnitAPI : NSObject
@property (readonly) AudioBuffer audioBuffer;
@property (readonly) AudioComponentInstance audioUnit;
@property (readonly) AudioComponentInstance fxAudioUnit;
...
@end
AudioProcessingWithAudioUnitAPI.m
//
// AudioProcessingWithAudioUnitAPI.m
//
#import "AudioProcessingWithAudioUnitAPI.h"
@implementation AudioProcessingWithAudioUnitAPI
@synthesize isPlaying = _isPlaying;
@synthesize outputLevelDisplay = _outputLevelDisplay;
@synthesize audioBuffer = _audioBuffer;
@synthesize audioUnit = _audioUnit;
@synthesize fxAudioUnit = _fxAudioUnit;
...
#pragma mark Recording callback
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// the data gets rendered here
AudioBuffer buffer;
// a variable where we check the status
OSStatus status;
/**
This is the reference to the object who owns the callback.
*/
AudioProcessingWithAudioUnitAPI *audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;
/**
At this point we define the number of channels, which is mono
for the iPhone. The number of frames is usually 512 or 1024.
*/
buffer.mDataByteSize = inNumberFrames * 2; // sample size
buffer.mNumberChannels = 1; // one channel
buffer.mData = malloc( inNumberFrames * 2 ); // buffer size
// we put our buffer into a bufferlist array for rendering
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
And the next AudioUnitRender call is where the 10877 error is thrown:
status = AudioUnitRender([audioProcessor fxAudioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
[audioProcessor hasError:status:__FILE__:__LINE__];
...
// process the bufferlist in the audio processor
[audioProcessor processBuffer:&bufferList];
//do some further processing
// clean up the buffer
free(bufferList.mBuffers[0].mData);
return noErr;
}
#pragma mark FX AudioUnit render callback
//This just asks the microphone (the I/O AU render) for samples
static OSStatus fxAudioUnitRenderCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
OSStatus retorno;
AudioProcessingWithAudioUnitAPI* audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*)inRefCon;
retorno = AudioUnitRender([audioProcessor audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
ioData);
[audioProcessor hasError:retorno:__FILE__:__LINE__];
return retorno;
}
#pragma mark Playback callback
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
/**
This is the reference to the object who owns the callback.
*/
AudioProcessingWithAudioUnitAPI *audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;
// iterate over incoming stream and copy to output stream
for (int i=0; i < ioData->mNumberBuffers; i++) {
AudioBuffer buffer = ioData->mBuffers[i];
// find minimum size
UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);
// copy buffer to audio buffer which gets played after function return
memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);
// set data size
buffer.mDataByteSize = size;
}
return noErr;
}
#pragma mark - objective-c class methods
-(AudioProcessingWithAudioUnitAPI*)init
{
self = [super init];
if (self) {
self.isPlaying = NO;
[self initializeAudio];
}
return self;
}
-(void)initializeAudio
{
OSStatus status;
// We define the audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output; // we want to output
desc.componentSubType = kAudioUnitSubType_RemoteIO; // we want input and output
desc.componentFlags = 0; // must be zero
desc.componentFlagsMask = 0; // must be zero
desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider
// find the AU component by description
AudioComponent component = AudioComponentFindNext(NULL, &desc);
// create audio unit by component
status = AudioComponentInstanceNew(component, &_audioUnit);
[self hasError:status:__FILE__:__LINE__];
// and now for the fx AudioUnit
desc.componentType = kAudioUnitType_Effect;
desc.componentSubType = kAudioUnitSubType_HighPassFilter;
// find the AU component by description
component = AudioComponentFindNext(NULL, &desc);
// create audio unit by component
status = AudioComponentInstanceNew(component, &_fxAudioUnit);
[self hasError:status:__FILE__:__LINE__];
// define that we want record io on the input bus
AudioUnitElement inputElement = 1;
AudioUnitElement outputElement = 0;
UInt32 flag = 1;
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_EnableIO, // use io
kAudioUnitScope_Input, // scope to input
inputElement, // select input bus (1)
&flag, // set flag
sizeof(flag));
[self hasError:status:__FILE__:__LINE__];
UInt32 anotherFlag = 0;
// disable output (I don't want to hear back from the device)
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_EnableIO, // use io
kAudioUnitScope_Output, // scope to output
outputElement, // select output bus (0)
&anotherFlag, // set flag
sizeof(flag));
[self hasError:status:__FILE__:__LINE__];
/*
We need to specify the format we want to work with.
We use linear PCM because it is uncompressed and we work on raw data.
We want 16 bits, 2 bytes per packet/frame, at 44 kHz.
*/
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = SAMPLE_RATE;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16; //65536
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// set the format on the output stream
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
inputElement,
&audioFormat,
sizeof(audioFormat));
[self hasError:status:__FILE__:__LINE__];
// set the format on the input stream
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
outputElement,
&audioFormat,
sizeof(audioFormat));
[self hasError:status:__FILE__:__LINE__];
/**
We need to define a callback structure which holds
a pointer to the recordingCallback and a reference to
the audio processor object
*/
AURenderCallbackStruct callbackStruct;
// set recording callback
callbackStruct.inputProc = recordingCallback; // recordingCallback pointer
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set input callback to recording callback on the input bus
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
inputElement,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
/*
We do the same on the output stream to hear what is coming
from the input stream
*/
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set playbackCallback as callback on our renderer for the output bus
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
outputElement,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
callbackStruct.inputProc = fxAudioUnitRenderCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set input callback to input AU
status = AudioUnitSetProperty(self.fxAudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
0,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
// reset flag to 0
flag = 0;
/*
We tell the audio units not to allocate their own render buffers,
so that we can write directly into our own buffer.
*/
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
inputElement,
&flag,
sizeof(flag));
status = AudioUnitSetProperty(self.fxAudioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
0,
&flag,
sizeof(flag));
/*
we set the number of channels to mono and allocate our block size to
1024 bytes.
*/
_audioBuffer.mNumberChannels = 1;
_audioBuffer.mDataByteSize = 512 * 2;
_audioBuffer.mData = malloc( 512 * 2 );
// Initialize the Audio Unit and cross fingers =)
status = AudioUnitInitialize(self.fxAudioUnit);
[self hasError:status:__FILE__:__LINE__];
status = AudioUnitInitialize(self.audioUnit);
[self hasError:status:__FILE__:__LINE__];
NSLog(#"Started");
}
//For now, this just copies the buffer to self.audioBuffer
-(void)processBuffer: (AudioBufferList*) audioBufferList
{
AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];
// we check here if the input data byte size has changed
if (_audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
// clear old buffer
free(self.audioBuffer.mData);
// assign the new byte size and allocate mData
_audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
_audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
}
// copy incoming audio data to the audio buffer
memcpy(self.audioBuffer.mData, audioBufferList->mBuffers[0].mData, audioBufferList->mBuffers[0].mDataByteSize);
}
#pragma mark - Error handling
-(void)hasError:(int)statusCode:(char*)file:(int)line
{
if (statusCode) {
printf("Error Code responded %d in file %s on line %d\n", statusCode, file, line);
exit(-1);
}
}
@end
Any help will be greatly appreciated.
This type of question comes up somewhat frequently, so I once wrote a mini-tutorial on this subject. However, that guide is really the nuts-and-bolts way to solve the problem; I now feel that a far more elegant way is to use the Novocaine framework, which takes a lot of the headache out of AudioUnit setup on iOS.
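For orientation, the nuts-and-bolts approach usually boils down to an AUGraph along these lines (a sketch under my own assumptions, not the tutorial's code; in practice the high-pass filter works in floating point, so stream formats or an AUConverter may still need to be set explicitly):
AUGraph graph;
NewAUGraph(&graph);

AudioComponentDescription ioDesc = {0};
ioDesc.componentType = kAudioUnitType_Output;
ioDesc.componentSubType = kAudioUnitSubType_RemoteIO;
ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponentDescription fxDesc = {0};
fxDesc.componentType = kAudioUnitType_Effect;
fxDesc.componentSubType = kAudioUnitSubType_HighPassFilter;
fxDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AUNode ioNode, fxNode;
AUGraphAddNode(graph, &ioDesc, &ioNode);
AUGraphAddNode(graph, &fxDesc, &fxNode);
AUGraphOpen(graph);

AudioUnit ioUnit, fxUnit;
AUGraphNodeInfo(graph, ioNode, NULL, &ioUnit);
AUGraphNodeInfo(graph, fxNode, NULL, &fxUnit);

// enable recording on the remoteIO input element (bus 1)
UInt32 one = 1;
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &one, sizeof(one));

// mic (output of io bus 1) -> high-pass filter -> speaker (input of io bus 0)
AUGraphConnectNodeInput(graph, ioNode, 1, fxNode, 0);
AUGraphConnectNodeInput(graph, fxNode, 0, ioNode, 0);

AUGraphInitialize(graph);
AUGraphStart(graph);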
I found some demo code that may be useful to you:
DEMO URL:https://github.com/JNYJdev/AudioUnit
OR
blog: http://atastypixel.com/blog/using-remoteio-audio-unit/
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Because of the way our audio format (setup below) is chosen:
// we only need 1 buffer, since it is mono
// Samples are 16 bits = 2 bytes.
// 1 frame includes only 1 sample
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
buffer.mData = malloc( inNumberFrames * 2 );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// Then:
// Obtain recorded samples
OSStatus status;
status = AudioUnitRender([iosAudio audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&bufferList);
checkStatus(status);
// Now, we have the samples we just read sitting in buffers in bufferList
// Process the new data
[iosAudio processAudio:&bufferList];
// release the malloc'ed data in the buffer we created earlier
free(bufferList.mBuffers[0].mData);
return noErr;
}
