Panning a mono signal with MultiChannelMixer & MTAudioProcessingTap - iOS

I'm looking to pan a mono signal using MTAudioProcessingTap and a Multichannel Mixer audio unit, but am getting a mono output instead of a panned, stereo output. The documentation states:
"The Multichannel Mixer unit (subtype
kAudioUnitSubType_MultiChannelMixer) takes any number of mono or
stereo streams and combines them into a single stereo output."
So, the mono output was unexpected. Any way around this? I ran a stereo signal through the exact same code and everything worked great: stereo output, panned as expected. Here's the code from my tap's prepare callback:
static void tap_PrepareCallback(MTAudioProcessingTapRef tap,
CMItemCount maxFrames,
const AudioStreamBasicDescription *processingFormat) {
AVAudioTapProcessorContext *context = (AVAudioTapProcessorContext *)MTAudioProcessingTapGetStorage(tap);
// Store sample rate for -setCenterFrequency:.
context->sampleRate = processingFormat->mSampleRate;
/* Verify processing format (this is not needed for Audio Unit, but for RMS calculation). */
context->supportedTapProcessingFormat = true;
if (processingFormat->mFormatID != kAudioFormatLinearPCM) {
NSLog(@"Unsupported audio format ID for audioProcessingTap. LinearPCM only.");
context->supportedTapProcessingFormat = false;
}
if (!(processingFormat->mFormatFlags & kAudioFormatFlagIsFloat)) {
NSLog(@"Unsupported audio format flag for audioProcessingTap. Float only.");
context->supportedTapProcessingFormat = false;
}
if (processingFormat->mFormatFlags & kAudioFormatFlagIsNonInterleaved) {
context->isNonInterleaved = true;
}
AudioUnit audioUnit;
AudioComponentDescription audioComponentDescription;
audioComponentDescription.componentType = kAudioUnitType_Mixer;
audioComponentDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
audioComponentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
audioComponentDescription.componentFlags = 0;
audioComponentDescription.componentFlagsMask = 0;
AudioComponent audioComponent = AudioComponentFindNext(NULL, &audioComponentDescription);
if (audioComponent) {
if (noErr == AudioComponentInstanceNew(audioComponent, &audioUnit)) {
OSStatus status = noErr;
// Set audio unit input/output stream format to processing format.
if (noErr == status) {
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
processingFormat,
sizeof(AudioStreamBasicDescription));
}
if (noErr == status) {
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
processingFormat,
sizeof(AudioStreamBasicDescription));
}
// Set audio unit render callback.
if (noErr == status) {
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = AU_RenderCallback;
renderCallbackStruct.inputProcRefCon = (void *)tap;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&renderCallbackStruct,
sizeof(AURenderCallbackStruct));
}
// Set audio unit maximum frames per slice to max frames.
if (noErr == status) {
UInt32 maximumFramesPerSlice = (UInt32)maxFrames;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
0,
&maximumFramesPerSlice,
(UInt32)sizeof(UInt32));
}
// Initialize audio unit.
if (noErr == status) {
status = AudioUnitInitialize(audioUnit);
}
if (noErr != status) {
AudioComponentInstanceDispose(audioUnit);
audioUnit = NULL;
}
context->audioUnit = audioUnit;
}
}
NSLog(@"Tap channels: %d",processingFormat->mChannelsPerFrame); // = 1 for mono source file
}
I've tried a few different options for the output stream format, e.g., AVAudioFormat *outFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:processingFormat->mSampleRate channels:2];, but I get this error each time: "Client did not see 20 I/O cycles; giving up." Here's code that creates an ASBD identical to the input format except with two channels instead of one; it gives the same "20 I/O cycles" error too:
AudioStreamBasicDescription asbd;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = 0x29; // kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved
asbd.mSampleRate = 44100;
asbd.mBitsPerChannel = 32;
asbd.mChannelsPerFrame = 2;
asbd.mBytesPerFrame = 4;
asbd.mFramesPerPacket = 1;
asbd.mBytesPerPacket = 4;
asbd.mReserved = 0;
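For reference, here is a minimal sketch of the configuration described above: the mixer keeps the tap's mono processing format on its input scope, gets a two-channel copy of that format on its output scope, and has its pan parameter set. This only illustrates the attempted setup (it assumes the audioUnit and processingFormat from the prepare callback) and is not a confirmed fix for the "20 I/O cycles" error:
AudioStreamBasicDescription stereoFormat = *processingFormat; // copy of the tap's mono, non-interleaved float format
stereoFormat.mChannelsPerFrame = 2;                           // ask the mixer for a stereo output
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,
                                       0,
                                       &stereoFormat,
                                       sizeof(stereoFormat));
if (noErr == status) {
    // Pan input bus 0: -1.0 = hard left, 0.0 = center, 1.0 = hard right.
    status = AudioUnitSetParameter(audioUnit,
                                   kMultiChannelMixerParam_Pan,
                                   kAudioUnitScope_Input,
                                   0,
                                   -1.0,
                                   0);
}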

Related

How to play live audio on iOS?

I have an IP camera that requires the use of a custom library for connecting and communicating with it. I have the video all taken care of, but I also want to give the user the option to listen to the audio recorded by the camera.
I receive the audio in the form of a byte stream (the audio is PCM u-law).
Since I don't read the data from a file and don't have a URL I can connect to, I think I would have to use something like Audio Units or OpenAL to play my audio.
I tried to implement it with AudioUnits based on the examples I found online and this is what I have so far:
-(void) audioThread
{
char buffer[1024];
int size = 0;
bool audioConfigured = false;
AudioComponentInstance audioUnit;
while (running) {
getAudioData(buffer,size); //fill buffer with my audio
int16_t* tempChar = (int16_t *)calloc(size, sizeof(int16_t));
for (int i = 0; i < size; i++) {
tempChar[i] = MuLaw_Decode(buffer[i]);
}
uint8_t *data = NULL;
data = malloc(size);
memcpy(data, tempChar, size);
CMBlockBufferRef blockBuffer = NULL;
OSStatus status = CMBlockBufferCreateWithMemoryBlock(NULL, data,
size,
kCFAllocatorNull, NULL,
0,
size,
0, &blockBuffer);
CMSampleBufferRef sampleBuffer = NULL;
// now I create my samplebuffer from the block buffer
if(status == noErr)
{
const size_t sampleSize = size;
status = CMSampleBufferCreate(kCFAllocatorDefault,
blockBuffer, true, NULL, NULL,
formatDesc, 1, 0, NULL, 1,
&sampleSize, &sampleBuffer);
}
AudioStreamBasicDescription audioBasic;
audioBasic.mBitsPerChannel = 16;
audioBasic.mBytesPerPacket = 2;
audioBasic.mBytesPerFrame = 2;
audioBasic.mChannelsPerFrame = 1;
audioBasic.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioBasic.mFormatID = kAudioFormatLinearPCM;
audioBasic.mFramesPerPacket = 1;
audioBasic.mSampleRate = 48000;
audioBasic.mReserved = 0;
if(!audioConfigured)
{
//initialize the circular buffer
if(instance.decodingBuffer == NULL)
instance.decodingBuffer = malloc(sizeof(TPCircularBuffer));
if(!TPCircularBufferInit(instance.decodingBuffer, 1024))
continue;
AudioComponentDescription componentDescription;
componentDescription.componentType = kAudioUnitType_Output;
componentDescription.componentSubType = kAudioUnitSubType_RemoteIO;
componentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
componentDescription.componentFlags = 0;
componentDescription.componentFlagsMask = 0;
AudioComponent component = AudioComponentFindNext(NULL, &componentDescription);
if(AudioComponentInstanceNew(component, &audioUnit) != noErr) {
NSLog(@"Failed to initialize the AudioComponent");
continue;
}
//enable IO for playback
UInt32 flag = 1;
if(AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &flag, sizeof(flag)) != noErr) {
NSLog(@"Failed to enable IO for playback");
continue;
}
// set the format for the outputstream
if(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &audioBasic, sizeof(audioBasic)) != noErr) {
NSLog(@"Failed to set the format for the outputstream");
continue;
}
// set output callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void*) self;
if(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, 0, &callbackStruct, sizeof(callbackStruct))!= noErr) {
NSLog(@"Failed to set output callback");
continue;
}
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ShouldAllocateBuffer, kAudioUnitScope_Output, 1, &flag, sizeof(flag));
if(AudioUnitInitialize(audioUnit) != noErr) {
NSLog(@"Failed to initialize audioUnits");
}
if(AudioOutputUnitStart(audioUnit)!= noErr) {
NSLog(@"[thread_ReceiveAudio] Failed to start audio");
}
audioConfigured = true;
}
AudioBufferList bufferList ;
if (sampleBuffer!=NULL) {
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &bufferList, sizeof(bufferList), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
UInt64 size = CMSampleBufferGetTotalSampleSize(sampleBuffer);
// Put audio into circular buffer
TPCircularBufferProduceBytes(self.decodingBuffer, bufferList.mBuffers[0].mData, size);
//TPCircularBufferCopyAudioBufferList(self.decodingBuffer, &bufferList, NULL, kTPCircularBufferCopyAll, NULL);
CFRelease(sampleBuffer);
CFRelease(blockBuffer);
}
}
//stop playing audio
if(audioConfigured){
if(AudioOutputUnitStop(audioUnit)!= noErr) {
NSLog(@"[thread_ReceiveAudio] Failed to stop audio");
}
else{
//clean up audio
AudioComponentInstanceDispose(audioUnit);
}
}
}
int16_t MuLaw_Decode(int8_t number)
{
const uint16_t MULAW_BIAS = 33;
uint8_t sign = 0, position = 0;
int16_t decoded = 0;
number = ~number;
if (number & 0x80)
{
number &= ~(1 << 7);
sign = -1;
}
position = ((number & 0xF0) >> 4) + 5;
decoded = ((1 << position) | ((number & 0x0F) << (position - 4))
| (1 << (position - 5))) - MULAW_BIAS;
return (sign == 0) ? (decoded) : (-(decoded));
}
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
SInt16 *targetBuffer = (SInt16*)ioData->mBuffers[0].mData;
int32_t availableBytes;
SInt16 *buffer = TPCircularBufferTail(instance.decodingBuffer, &availableBytes);
int sampleCount = MIN(bytesToCopy, availableBytes);
memcpy(targetBuffer, buffer, MIN(bytesToCopy, availableBytes));
TPCircularBufferConsume(self.decodingBuffer, sampleCount);
return noErr;
}
The code above doesn't produce any errors, but won't play any sound. I thought I could set the audio through the bufferList in the recordCallback, but it is never called.
So my question is: How do I play audio from a byte stream on iOS?
I decided to look at the project with fresh eyes. I got rid of most of the code and got it to work now. It is not pretty, but at least it runs for now. For example, I had to set my sample rate to 4000, otherwise it would play too fast, and I still have performance issues. Anyway, this is what I came up with:
#define BUFFER_SIZE 1024
#define NUM_CHANNELS 2
#define kOutputBus 0
#define kInputBus 1
-(void) main
{
char buf[BUFFER_SIZE];
int size;
runloop: while (self.running) {
getAudioData(&buf, size);
if(!self.configured) {
if(![self activateAudioSession])
continue;
self.configured = true;
}
TPCircularBufferProduceBytes(self.decodingBuffer, buf, size);
}
//stop audiounits
AudioOutputUnitStop(self.audioUnit);
AudioComponentInstanceDispose(self.audioUnit);
if (self.decodingBuffer != NULL) {
TPCircularBufferCleanup(self.decodingBuffer);
}
}
static void audioSessionInterruptionCallback(void *inUserData, UInt32 interruptionState) {
if (interruptionState == kAudioSessionEndInterruption) {
AudioSessionSetActive(YES);
AudioOutputUnitStart(self.audioUnit);
}
if (interruptionState == kAudioSessionBeginInterruption) {
AudioOutputUnitStop(self.audioUnit);
}
}
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Notes: ioData contains buffers (may be more than one!)
// Fill them up as much as you can. Remember to set the size value in each buffer to match how much data is in the buffer.
if (!self.running ) {
return -1;
}
int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
SInt16 *targetBuffer = (SInt16*)ioData->mBuffers[0].mData;
// Pull audio from playthrough buffer
int32_t availableBytes;
if(self.decodingBuffer == NULL || self.decodingBuffer->length < 1) {
NSLog(@"buffer is empty");
return 0;
}
SInt16 *buffer = TPCircularBufferTail(self.decodingBuffer, &availableBytes);
int sampleCount = MIN(bytesToCopy, availableBytes);
memcpy(targetBuffer, buffer, sampleCount);
TPCircularBufferConsume(self.decodingBuffer, sampleCount);
return noErr;
}
- (BOOL) activateAudioSession {
if (!self.activated_) {
OSStatus result;
result = AudioSessionInitialize(NULL,
NULL,
audioSessionInterruptionCallback,
(__bridge void *)(self));
if (kAudioSessionAlreadyInitialized != result)
[self checkError:result message:@"Couldn't initialize audio session"];
[self setupAudio];
self.activated_ = YES;
}
return self.activated_;
}
- (BOOL) setupAudio
{
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
AudioComponentInstanceNew(inputComponent, &_audioUnit);
// // Enable IO for recording
// UInt32 flag = 1;
// status = AudioUnitSetProperty(audioUnit,
// kAudioOutputUnitProperty_EnableIO,
// kAudioUnitScope_Input,
// kInputBus,
// &flag,
// sizeof(flag));
// Enable IO for playback
UInt32 flag = 1;
AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
// Describe format
AudioStreamBasicDescription format;
format.mSampleRate = 4000;
format.mFormatID = kAudioFormatULaw; //kAudioFormatULaw
format.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;//
format.mBitsPerChannel = 8 * sizeof(char);
format.mChannelsPerFrame = NUM_CHANNELS;
format.mBytesPerFrame = sizeof(char) * NUM_CHANNELS;
format.mFramesPerPacket = 1;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mReserved = 0;
self.audioFormat = format;
// Apply format
AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&_audioFormat,
sizeof(_audioFormat));
AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&_audioFormat,
sizeof(_audioFormat));
// // Set input callback
// AURenderCallbackStruct callbackStruct;
// callbackStruct.inputProc = recordingCallback;
// callbackStruct.inputProcRefCon = self;
// status = AudioUnitSetProperty(audioUnit,
// kAudioOutputUnitProperty_SetInputCallback,
// kAudioUnitScope_Global,
// kInputBus,
// &callbackStruct,
// sizeof(callbackStruct));
// checkStatus(status);
// Set output callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
//initialize the circular buffer
if(self.decodingBuffer == NULL)
self.decodingBuffer = malloc(sizeof(TPCircularBuffer));
if(!TPCircularBufferInit(self.decodingBuffer, 512*1024))
return NO;
// Initialise
status = AudioUnitInitialize(self.audioUnit);
AudioOutputUnitStart(self.audioUnit);
return YES;
}
I found most of this by looking through GitHub and A Tasty Pixel.
If the AVAudioSession is configured to use short buffers, you can use the RemoteIO Audio Unit to play received audio with low additional latency.
Check errors during audio configuration. Some iOS devices only support a 48 kHz sample rate, so you may need to resample your audio PCM data from 8 kHz to another rate.
RemoteIO only supports linear PCM, so you will need to first convert all your incoming 8-bit u-law PCM samples to 16-bit linear PCM format before storing them in a lock-free circular buffer.
You need to call AudioOutputUnitStart so that the OS starts invoking your audio callbacks. Your code should not call these callbacks itself; the OS will call them.
AudioUnitRender is used for recording callbacks, not for playing audio. So you don't need to use it. Just fill the AudioBufferList buffers with the requested number of frames in the play callback.
Then you can use the play audio callback to check your circular buffer and pull the requested number of samples, if enough are available. You should not do any memory management (such as a free() call) inside this callback.
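As a minimal sketch of those last two points (assuming the TPCircularBuffer pointer itself is passed as inputProcRefCon, which differs slightly from the question's code where self is used), the play callback can copy whatever is available and zero-fill the remainder:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    TPCircularBuffer *circularBuffer = (TPCircularBuffer *)inRefCon;
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *target = &ioData->mBuffers[i];
        int32_t availableBytes = 0;
        void *src = TPCircularBufferTail(circularBuffer, &availableBytes);
        UInt32 bytesToCopy = MIN(target->mDataByteSize, (UInt32)availableBytes);
        if (src && bytesToCopy > 0) {
            memcpy(target->mData, src, bytesToCopy);
            TPCircularBufferConsume(circularBuffer, bytesToCopy);
        }
        // Zero-fill whatever we could not supply so the hardware gets silence,
        // not stale memory.
        if (bytesToCopy < target->mDataByteSize) {
            memset((char *)target->mData + bytesToCopy, 0,
                   target->mDataByteSize - bytesToCopy);
        }
    }
    return noErr;
}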

AudioUnit + Opus codec = crackle issue

I am creating a VoIP app for iOS in Objective-C. Currently I am trying to build the audio part: recording the audio data from the microphone, encoding it with Opus, decoding it, and then playing it. For recording and playing I use AudioUnit. I also made a buffer implementation which allocates spaces of memory, each with an initially set size. There are three main methods:
- setBufferSize - sets the size of the buffer's sub-allocated spaces.
- writeDataToBuffer - creates a new space (if needed) and fills data into the current writing space.
- readDataFromBuffer - reads data from the current reading space.
I use the buffer for storing the audio data. It works well; I've tested it. If I skip Opus and just read the audio data, store it in the buffer, read it back, and play it, everything works great. But the problem comes when I include Opus. It does encode and decode the audio data, but the quality is not good and there is some crackle as well. I was wondering, what am I doing wrong? Here are pieces of my code:
AudioUnit:
OSStatus status;
m_sAudioDescription.componentType = kAudioUnitType_Output;
m_sAudioDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
m_sAudioDescription.componentFlags = 0;
m_sAudioDescription.componentFlagsMask = 0;
m_sAudioDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &m_sAudioDescription);
status = AudioComponentInstanceNew(inputComponent, &m_audioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
VOIP_AUDIO_INPUT_ELEMENT,
&flag,
sizeof(flag));
// Enable IO for playback
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
VOIP_AUDIO_OUTPUT_ELEMENT,
&flag,
sizeof(flag));
// Describe format
m_sAudioFormat.mSampleRate = 48000.00;//48000.00;/*44100.00*/;
m_sAudioFormat.mFormatID = kAudioFormatLinearPCM;
m_sAudioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked/* | kAudioFormatFlagsCanonical*/;
m_sAudioFormat.mFramesPerPacket = 1;
m_sAudioFormat.mChannelsPerFrame = 1;
m_sAudioFormat.mBitsPerChannel = 16; //8 * bytesPerSample
m_sAudioFormat.mBytesPerFrame = /*(UInt32)bytesPerSample;*/2; //bitsPerChannel / 8 * channelsPerFrame
m_sAudioFormat.mBytesPerPacket = 2; //bytesPerFrame * framesPerPacket
// Apply format
status = AudioUnitSetProperty(m_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
VOIP_AUDIO_INPUT_ELEMENT,
&m_sAudioFormat,
sizeof(m_sAudioFormat));
status = AudioUnitSetProperty(m_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
VOIP_AUDIO_OUTPUT_ELEMENT,
&m_sAudioFormat,
sizeof(m_sAudioFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = inputRenderCallback;
callbackStruct.inputProcRefCon = this;
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
VOIP_AUDIO_INPUT_ELEMENT,
&callbackStruct,
sizeof(callbackStruct));
// Set output callback
callbackStruct.inputProc = outputRenderCallback;
callbackStruct.inputProcRefCon = this;
status = AudioUnitSetProperty(m_audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
VOIP_AUDIO_OUTPUT_ELEMENT,
&callbackStruct,
sizeof(callbackStruct));
//Enable Echo cancelation:
this->_setEchoCancelation(true);
//Enable Automatic Gain control:
this->_setAGC(false);
// Initialise
status = AudioUnitInitialize(m_audioUnit);
return noErr;
Input buffer allocation and setting the size of storing buffers:
void VoipAudio::_allocBuffer()
{
UInt32 numFramesPerBuffer;
UInt32 size = sizeof(numFramesPerBuffer);
AudioUnitGetProperty(m_audioUnit,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
VOIP_AUDIO_OUTPUT_ELEMENT, &numFramesPerBuffer, &size);
UInt32 inputBufferListSize = offsetof(AudioBufferList, mBuffers[0]) + (sizeof(AudioBuffer) * m_sAudioFormat.mChannelsPerFrame);
inputBuffer = (AudioBufferList *)malloc(inputBufferListSize);
inputBuffer->mNumberBuffers = m_sAudioFormat.mChannelsPerFrame;
//pre-malloc buffers for AudioBufferLists
for(VoipUInt32 tmp_int1 = 0; tmp_int1 < inputBuffer->mNumberBuffers; tmp_int1++)
{
inputBuffer->mBuffers[tmp_int1].mNumberChannels = 1;
inputBuffer->mBuffers[tmp_int1].mDataByteSize = 2048;
inputBuffer->mBuffers[tmp_int1].mData = malloc(2048);
memset(inputBuffer->mBuffers[tmp_int1].mData, 0, 2048);
}
this->m_oAudioBuffer = new VoipBuffer();
this->m_oAudioBuffer->setBufferSize(2048);
this->m_oAudioReadBuffer = new VoipBuffer();
this->m_oAudioReadBuffer->setBufferSize(2880);
}
Record callback:
this->m_oAudioReadBuffer->writeDataToBuffer(samples, samplesSize);
void* tmp_buffer = this->m_oAudioReadBuffer->readDataFromBuffer();
if (tmp_buffer != nullptr)
{
sVoipAudioCodecOpusEncodedResult* encodedSamples = VoipAudioCodecs::Opus_Encode((VoipInt16*)tmp_buffer, 2880);
sVoipAudioCodecOpusDecodedResult* decodedSamples = VoipAudioCodecs::Opus_Decode(encodedSamples->m_data, encodedSamples->m_dataSize);
this->m_oAudioBuffer->writeDataToBuffer(decodedSamples->m_data, decodedSamples->m_dataSize);
free(encodedSamples->m_data);
free(encodedSamples);
free(decodedSamples->m_data);
free(decodedSamples);
}
Playing callback:
void* tmp_buffer = this->m_oAudioBuffer->readDataFromBuffer();
if (tmp_buffer != nullptr)
{
memset(buffer->mBuffers[0].mData, 0, 2048);
memcpy(buffer->mBuffers[0].mData, tmp_buffer, 2048);
buffer->mBuffers[0].mDataByteSize = 2048;
} else {
memset(buffer->mBuffers[0].mData, 0, 2048);
buffer->mBuffers[0].mDataByteSize = 2048;
}
Opus Init Code:
int _error = 0;
VoipAudioCodecs::m_oEncoder = opus_encoder_create(SAMPLE_RATE, CHANNELS, APPLICATION, &_error);
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to create an encoder: %s\n", opus_strerror(_error));
return;
}
_error = opus_encoder_ctl(VoipAudioCodecs::m_oEncoder, OPUS_SET_BITRATE(BITRATE/*OPUS_BITRATE_MAX*/));
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to set bitrate: %s\n", opus_strerror(_error));
return;
}
VoipAudioCodecs::m_oDecoder = opus_decoder_create(SAMPLE_RATE, CHANNELS, &_error);
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to create decoder: %s\n", opus_strerror(_error));
return;
}
Opus encode/decode:
sVoipAudioCodecOpusEncodedResult* VoipAudioCodecs::Opus_Encode(VoipInt16* number, int samplesCount)
{
unsigned char cbits[MAX_PACKET_SIZE];
VoipInt32 nbBytes;
nbBytes = opus_encode(VoipAudioCodecs::m_oEncoder, number, FRAME_SIZE, cbits, MAX_PACKET_SIZE);
if (nbBytes < 0)
{
fprintf(stderr, "VoipAudioCodecs error: encode failed: %s\n", opus_strerror(nbBytes));
return nullptr;
}
sVoipAudioCodecOpusEncodedResult* result = (sVoipAudioCodecOpusEncodedResult* )malloc(sizeof(sVoipAudioCodecOpusEncodedResult));
result->m_data = (unsigned char*)malloc(nbBytes);
memcpy(result->m_data, cbits, nbBytes);
result->m_dataSize = nbBytes;
return result;
}
sVoipAudioCodecOpusDecodedResult* VoipAudioCodecs::Opus_Decode(void* encoded, VoipInt32 nbBytes)
{
VoipInt16 decodedPacket[MAX_FRAME_SIZE];
int frame_size = opus_decode(VoipAudioCodecs::m_oDecoder, (const unsigned char*)encoded, nbBytes, decodedPacket, MAX_FRAME_SIZE, 0);
if (frame_size < 0)
{
fprintf(stderr, "VoipAudioCodecs error: decoder failed: %s\n", opus_strerror(frame_size));
return nullptr;
}
sVoipAudioCodecOpusDecodedResult* result = (sVoipAudioCodecOpusDecodedResult* )malloc(sizeof(sVoipAudioCodecOpusDecodedResult));
result->m_data = (VoipInt16*)malloc(frame_size / sizeof(VoipInt16));
memcpy(result->m_data, decodedPacket, (frame_size / sizeof(VoipInt16)));
result->m_dataSize = frame_size / sizeof(VoipInt16);
return result;
}
Here are some constants I use:
#define FRAME_SIZE 2880 //120, 240, 480, 960, 1920, 2880
#define SAMPLE_RATE 48000
#define CHANNELS 1
#define APPLICATION OPUS_APPLICATION_VOIP//OPUS_APPLICATION_AUDIO
#define BITRATE 64000
#define MAX_FRAME_SIZE 4096
#define MAX_PACKET_SIZE (3*1276)
Can you help me please?
Your audio callback duration may need to be increased. Try increasing your session's setPreferredIOBufferDuration time. I have used Opus on iOS and have measured the decoding time: it takes 2 to 3 ms to decode about 240 frames of data. There is a good chance you are missing your subsequent callbacks because it is taking too long to decode the audio.
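For example, a minimal sketch of raising the preferred IO buffer duration with AVAudioSession (the 40 ms value is just an assumption to tune per device):
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
// Ask for a larger IO buffer so each callback covers more time, leaving more
// headroom for the Opus encode/decode work between callbacks.
[session setPreferredIOBufferDuration:0.04 error:&error]; // ~40 ms, tune as needed
[session setActive:YES error:&error];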
I had the same problem in my project: iOS was giving me unstable frame sizes. I used both Audio Queue Services and Audio Units, and they gave me the same result (crackled voice).
All you have to do is save some samples into a ring buffer in the audio callback.
Then, in a separate thread, do the audio processing so that each round works on a fixed frame size.
For example:
The audio unit gives you frames or samples like this: [2048 .. 2048 .. 2048]
and the Opus codec needs 2880 frames for each packet, so you take 2048 from the first buffer and the remaining 832 frames from the next buffer to get a fixed frame size to send to the Opus encoder.
This is the function I used in my project:
func audioProcessing(){
DispatchQueue.global(qos: .default).async {
// this to save remain data from ring buffer
var remainData:NSMutableData = NSMutableData()
var remainDataSize = 0
while self.room_oppened{
// here we define the fixed frame we want to use in our opus encoder
var packetOffset = 0
let fixedFrameSize:Int = 5760
var dataToGetFullFrame:Int = 5760
let packetData:NSMutableData = NSMutableData(length: fixedFrameSize)!// this need to filled with data
if remainDataSize > 0 {
if remainDataSize < fixedFrameSize{
memcpy(packetData.mutableBytes.advanced(by: packetOffset), remainData.mutableBytes.advanced(by: 0), remainDataSize)// add the remain data
dataToGetFullFrame = dataToGetFullFrame - remainDataSize
packetOffset = packetOffset + remainDataSize// - 1
}else{
memcpy(packetData.mutableBytes.advanced(by: packetOffset), remainData.mutableBytes.advanced(by: 0), fixedFrameSize)// add the remain data
dataToGetFullFrame = 0
}
remainDataSize = 0
}
// if the packet not fill full, we need to get more data from circle buffer
if dataToGetFullFrame > 0 {
while dataToGetFullFrame > 0 {
let bufferData = self.ringBufferEncodedAudio.read()// read chunk of data from bufer
if bufferData != nil{
var chunkOffset = 0
if dataToGetFullFrame > bufferData!.length{
memcpy(packetData.mutableBytes.advanced(by: packetOffset) , bufferData!.mutableBytes , bufferData!.length)
chunkOffset = bufferData!.length// this how much data we read
dataToGetFullFrame = dataToGetFullFrame - bufferData!.length // how much of data we need to fill packet
packetOffset = packetOffset + bufferData!.length// + 1
}else{
memcpy(packetData.mutableBytes.advanced(by: packetOffset) , bufferData!.mutableBytes , dataToGetFullFrame)
chunkOffset = dataToGetFullFrame// this how much data we read
packetOffset = packetOffset + dataToGetFullFrame// + 1
dataToGetFullFrame = dataToGetFullFrame - dataToGetFullFrame // how much of data we need to fill packet
}
if dataToGetFullFrame <= 0 {
var size = bufferData!.length - chunkOffset
remainData = NSMutableData(bytes: bufferData?.mutableBytes.advanced(by: chunkOffset), length: size)
remainDataSize = size
}
}
usleep(useconds_t(8 * 1000))
}
}
// send packet to encoder
if self.enable_streaming {
let dataToEncode:Data = packetData as Data
let packet = OpusSwiftPort.shared.encodeData(dataToEncode)
if packet != nil{
self.sendAudioPacket(packet: packet!)// <--- this to network
}
}
}
}
}
After I did this audio processing I got very clear audio.
I hope this was helpful for you.
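The same accumulate-then-encode idea in a compact C++ sketch (assuming a TPCircularBuffer that the record callback feeds with decoded 16-bit samples, plus the FRAME_SIZE constant and Opus_Encode helper from the question; sendAudioPacket is a hypothetical network send):
static void encodeLoop(TPCircularBuffer *ring, volatile bool *running) {
    const int32_t bytesPerChunk = FRAME_SIZE * sizeof(VoipInt16); // 2880 samples per Opus packet
    VoipInt16 chunk[FRAME_SIZE];
    while (*running) {
        int32_t availableBytes = 0;
        void *tail = TPCircularBufferTail(ring, &availableBytes);
        if (tail != NULL && availableBytes >= bytesPerChunk) {
            // Hand the encoder exactly FRAME_SIZE samples, regardless of how many
            // frames each audio callback delivered.
            memcpy(chunk, tail, bytesPerChunk);
            TPCircularBufferConsume(ring, bytesPerChunk);
            sVoipAudioCodecOpusEncodedResult *packet = VoipAudioCodecs::Opus_Encode(chunk, FRAME_SIZE);
            if (packet != nullptr) {
                sendAudioPacket(packet->m_data, packet->m_dataSize); // hypothetical network send
                free(packet->m_data);
                free(packet);
            }
        } else {
            usleep(8 * 1000); // let the audio callbacks produce more samples
        }
    }
}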

AudioUnit Input Missing Periodic Samples

I have implemented an AUGraph containing a single AudioUnit for handling IO from the mic and headsets. The issue I'm having is that there are missing chunks of audio input.
I believe the samples are being lost during the hardware to software buffer exchange. I tried slowing down the sample rate of the iPhone, from 44.1 kHz to 20 kHz, to see if this would give me the missing data, but it did not produce the output I expected.
The AUGraph is setup as follows:
// Audio component description
AudioComponentDescription desc;
bzero(&desc, sizeof(AudioComponentDescription));
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
// Stereo ASBD
AudioStreamBasicDescription stereoStreamFormat;
bzero(&stereoStreamFormat, sizeof(AudioStreamBasicDescription));
stereoStreamFormat.mSampleRate = kSampleRate;
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
stereoStreamFormat.mBytesPerPacket = 4;
stereoStreamFormat.mBytesPerFrame = 4;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel = 16;
OSErr err = noErr;
@try {
// Create new AUGraph
err = NewAUGraph(&auGraph);
NSAssert1(err == noErr, @"Error creating AUGraph: %hd", err);
// Add node to AUGraph
err = AUGraphAddNode(auGraph,
&desc,
&ioNode);
NSAssert1(err == noErr, @"Error adding AUNode: %hd", err);
// Open AUGraph
err = AUGraphOpen(auGraph);
NSAssert1(err == noErr, @"Error opening AUGraph: %hd", err);
// Add AUGraph node info
err = AUGraphNodeInfo(auGraph,
ioNode,
&desc,
&_ioUnit);
NSAssert1(err == noErr, @"Error adding node info to AUGraph: %hd", err);
// Enable input, which is disabled by default.
UInt32 enabled = 1;
err = AudioUnitSetProperty(_ioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&enabled,
sizeof(enabled));
NSAssert1(err == noErr, @"Error enabling input: %hd", err);
// Apply format to input of ioUnit
err = AudioUnitSetProperty(_ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&stereoStreamFormat,
sizeof(stereoStreamFormat));
NSAssert1(err == noErr, @"Error setting input ASBD: %hd", err);
// Apply format to output of ioUnit
err = AudioUnitSetProperty(_ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&stereoStreamFormat,
sizeof(stereoStreamFormat));
NSAssert1(err == noErr, @"Error setting output ASBD: %hd", err);
// Set hardware IO callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = hardwareIOCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
err = AUGraphSetNodeInputCallback(auGraph,
ioNode,
kOutputBus,
&callbackStruct);
NSAssert1(err == noErr, @"Error setting IO callback: %hd", err);
// Initialize AudioGraph
err = AUGraphInitialize(auGraph);
NSAssert1(err == noErr, @"Error initializing AUGraph: %hd", err);
// Start audio unit
err = AUGraphStart(auGraph);
NSAssert1(err == noErr, @"Error starting AUGraph: %hd", err);
}
@catch (NSException *exception) {
NSLog(@"Failed with exception: %@", exception);
}
Where kOutputBus is defined to be 0, kInputBus is 1, and kSampleRate is 44100. The IO callback function is:
IO Callback Function
static OSStatus hardwareIOCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Scope reference to GSFSensorIOController class
GSFSensorIOController *sensorIO = (__bridge GSFSensorIOController *) inRefCon;
// Grab the samples and place them in the buffer list
AudioUnit ioUnit = sensorIO.ioUnit;
OSStatus result = AudioUnitRender(ioUnit,
ioActionFlags,
inTimeStamp,
kInputBus,
inNumberFrames,
ioData);
if (result != noErr) NSLog(@"Blowing it in interrupt");
// Process input data
[sensorIO processIO:ioData];
// Set up power tone attributes
float freq = 20000.00f;
float sampleRate = kSampleRate;
float phase = sensorIO.sinPhase;
float sinSignal;
double phaseInc = 2 * M_PI * freq / sampleRate;
// Write to output buffers
for(size_t i = 0; i < ioData->mNumberBuffers; ++i) {
AudioBuffer buffer = ioData->mBuffers[i];
for(size_t sampleIdx = 0; sampleIdx < inNumberFrames; ++sampleIdx) {
// Grab sample buffer
SInt16 *sampleBuffer = buffer.mData;
// Generate power tone on left channel
sinSignal = sin(phase);
sampleBuffer[2 * sampleIdx] = (SInt16)((sinSignal * 32767.0f) /2);
// Write to commands to micro on right channel as necessary
if(sensorIO.newDataOut)
sampleBuffer[2*sampleIdx + 1] = (SInt16)((sinSignal * 32767.0f) /2);
else
sampleBuffer[2*sampleIdx + 1] = 0;
phase += phaseInc;
if (phase >= 2 * M_PI * freq) {
phase -= (2 * M_PI * freq);
}
}
}
// Store sine wave phase for next callback
sensorIO.sinPhase = phase;
return result;
}
The processIO function called within hardwareIOCallback is used to process the input and create the response for the output. For debugging purposes I just have it push each sample of the input buffer into an NSMutableArray.
Process IO
- (void) processIO: (AudioBufferList*) bufferList {
for (int j = 0 ; j < bufferList->mNumberBuffers ; j++) {
AudioBuffer sourceBuffer = bufferList->mBuffers[j];
SInt16 *buffer = (SInt16 *) bufferList->mBuffers[j].mData;
for (int i = 0; i < (sourceBuffer.mDataByteSize / sizeof(sourceBuffer)); i++) {
// DEBUG: Array of raw data points for printing to a file
[self.rawInputData addObject:[NSNumber numberWithInt:buffer[i]]];
}
}
}
I then write the contents of this input buffer to a file after I have stopped the AUGraph and have all the samples in the rawInputData array. I then open this file in MATLAB and plot it. There I can see that the audio input is missing data (circled in red in the plot).
I'm out of ideas as to how to fix this issue and could really use some help understanding and fixing this problem.
Your callback may be too slow. It's usually not recommended to use any Objective-C methods (such as adding to a mutable array, or anything else that could allocate memory) inside an Audio Unit callback.
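For example, a minimal sketch of a real-time-safe processIO (captureBuffer here is a hypothetical property holding a pre-allocated TPCircularBuffer, as used elsewhere on this page) that only copies raw bytes in the callback and leaves the NSMutableArray and file work to a normal thread that drains the buffer later:
- (void) processIO: (AudioBufferList*) bufferList {
    for (UInt32 j = 0; j < bufferList->mNumberBuffers; j++) {
        AudioBuffer sourceBuffer = bufferList->mBuffers[j];
        // No Objective-C allocations, no locks: just a memcpy into a lock-free ring buffer.
        TPCircularBufferProduceBytes(self.captureBuffer, sourceBuffer.mData, sourceBuffer.mDataByteSize);
    }
}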

Core Audio offline rendering with GenericOutput

Has anybody successfully done offline rendering using Core Audio?
I had to mix two audio files and apply reverb (I used two AudioFilePlayers, a MultiChannelMixer, Reverb2, and RemoteIO).
I got it working, and I could save the result while it was previewing (in the render callback of RemoteIO).
I need to save it without playing it (offline).
Thanks in advance.
Offline rendering worked for me using the GenericOutput AudioUnit.
I am sharing the working code here.
The Core Audio framework seems a little tough, but small things in it, like the ASBD and parameters, cause these issues. Keep at it and it will work; don't give up :-). Core Audio is very powerful and useful when dealing with low-level audio. That's what I learned over these last weeks. Enjoy :-D
Declare these in .h
//AUGraph
AUGraph mGraph;
//Audio Unit References
AudioUnit mFilePlayer;
AudioUnit mFilePlayer2;
AudioUnit mReverb;
AudioUnit mTone;
AudioUnit mMixer;
AudioUnit mGIO;
//Audio File Location
AudioFileID inputFile;
AudioFileID inputFile2;
//Audio file references for saving
ExtAudioFileRef extAudioFile;
//Standard sample rate
Float64 graphSampleRate;
AudioStreamBasicDescription stereoStreamFormat864;
Float64 MaxSampleTime;
//in .m class
- (id) init
{
self = [super init];
graphSampleRate = 44100.0;
MaxSampleTime = 0.0;
UInt32 category = kAudioSessionCategory_MediaPlayback;
CheckError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(category),
&category),
"Couldn't set category on audio session");
[self initializeAUGraph];
return self;
}
//ASBD setup
- (void) setupStereoStream864 {
// The AudioUnitSampleType data type is the recommended type for sample data in audio
// units. This obtains the byte size of the type for use in filling in the ASBD.
size_t bytesPerSample = sizeof (AudioUnitSampleType);
// Fill the application audio format struct's fields to define a linear PCM,
// stereo, noninterleaved stream at the hardware sample rate.
stereoStreamFormat864.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat864.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat864.mBytesPerPacket = bytesPerSample;
stereoStreamFormat864.mFramesPerPacket = 1;
stereoStreamFormat864.mBytesPerFrame = bytesPerSample;
stereoStreamFormat864.mChannelsPerFrame = 2; // 2 indicates stereo
stereoStreamFormat864.mBitsPerChannel = 8 * bytesPerSample;
stereoStreamFormat864.mSampleRate = graphSampleRate;
}
//AUGraph setup
- (void)initializeAUGraph
{
[self setupStereoStream864];
// Setup the AUGraph, add AUNodes, and make connections
// create a new AUGraph
CheckError(NewAUGraph(&mGraph),"Couldn't create new graph");
// AUNodes represent AudioUnits on the AUGraph and provide an
// easy means for connecting audioUnits together.
AUNode filePlayerNode;
AUNode filePlayerNode2;
AUNode mixerNode;
AUNode reverbNode;
AUNode toneNode;
AUNode gOutputNode;
// file player component
AudioComponentDescription filePlayer_desc;
filePlayer_desc.componentType = kAudioUnitType_Generator;
filePlayer_desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
filePlayer_desc.componentFlags = 0;
filePlayer_desc.componentFlagsMask = 0;
filePlayer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// file player component2
AudioComponentDescription filePlayer2_desc;
filePlayer2_desc.componentType = kAudioUnitType_Generator;
filePlayer2_desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
filePlayer2_desc.componentFlags = 0;
filePlayer2_desc.componentFlagsMask = 0;
filePlayer2_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Create AudioComponentDescriptions for the AUs we want in the graph
// mixer component
AudioComponentDescription mixer_desc;
mixer_desc.componentType = kAudioUnitType_Mixer;
mixer_desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixer_desc.componentFlags = 0;
mixer_desc.componentFlagsMask = 0;
mixer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Create AudioComponentDescriptions for the AUs we want in the graph
// Reverb component
AudioComponentDescription reverb_desc;
reverb_desc.componentType = kAudioUnitType_Effect;
reverb_desc.componentSubType = kAudioUnitSubType_Reverb2;
reverb_desc.componentFlags = 0;
reverb_desc.componentFlagsMask = 0;
reverb_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
//tone component
AudioComponentDescription tone_desc;
tone_desc.componentType = kAudioUnitType_FormatConverter;
//tone_desc.componentSubType = kAudioUnitSubType_NewTimePitch;
tone_desc.componentSubType = kAudioUnitSubType_Varispeed;
tone_desc.componentFlags = 0;
tone_desc.componentFlagsMask = 0;
tone_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponentDescription gOutput_desc;
gOutput_desc.componentType = kAudioUnitType_Output;
gOutput_desc.componentSubType = kAudioUnitSubType_GenericOutput;
gOutput_desc.componentFlags = 0;
gOutput_desc.componentFlagsMask = 0;
gOutput_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
//Add nodes to graph
// Add nodes to the graph to hold our AudioUnits,
// You pass in a reference to the AudioComponentDescription
// and get back an AudioUnit
AUGraphAddNode(mGraph, &filePlayer_desc, &filePlayerNode );
AUGraphAddNode(mGraph, &filePlayer2_desc, &filePlayerNode2 );
AUGraphAddNode(mGraph, &mixer_desc, &mixerNode );
AUGraphAddNode(mGraph, &reverb_desc, &reverbNode );
AUGraphAddNode(mGraph, &tone_desc, &toneNode );
AUGraphAddNode(mGraph, &gOutput_desc, &gOutputNode);
//Open the graph early, initialize late
// open the graph AudioUnits are open but not initialized (no resource allocation occurs here)
CheckError(AUGraphOpen(mGraph),"Couldn't Open the graph");
//Reference to Nodes
// get the reference to the AudioUnit object for the file player graph node
AUGraphNodeInfo(mGraph, filePlayerNode, NULL, &mFilePlayer);
AUGraphNodeInfo(mGraph, filePlayerNode2, NULL, &mFilePlayer2);
AUGraphNodeInfo(mGraph, reverbNode, NULL, &mReverb);
AUGraphNodeInfo(mGraph, toneNode, NULL, &mTone);
AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
AUGraphNodeInfo(mGraph, gOutputNode, NULL, &mGIO);
AUGraphConnectNodeInput(mGraph, filePlayerNode, 0, reverbNode, 0);
AUGraphConnectNodeInput(mGraph, reverbNode, 0, toneNode, 0);
AUGraphConnectNodeInput(mGraph, toneNode, 0, mixerNode,0);
AUGraphConnectNodeInput(mGraph, filePlayerNode2, 0, mixerNode, 1);
AUGraphConnectNodeInput(mGraph, mixerNode, 0, gOutputNode, 0);
UInt32 busCount = 2; // bus count for mixer unit input
//Setup mixer unit bus count
CheckError(AudioUnitSetProperty (
mMixer,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&busCount,
sizeof (busCount)
),
"Couldn't set mixer unit's bus count");
//Enable metering mode to view levels input and output levels of mixer
UInt32 onValue = 1;
CheckError(AudioUnitSetProperty(mMixer,
kAudioUnitProperty_MeteringMode,
kAudioUnitScope_Input,
0,
&onValue,
sizeof(onValue)),
"error");
// Increase the maximum frames per slice allows the mixer unit to accommodate the
// larger slice size used when the screen is locked.
UInt32 maximumFramesPerSlice = 4096;
CheckError(AudioUnitSetProperty (
mMixer,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
0,
&maximumFramesPerSlice,
sizeof (maximumFramesPerSlice)
),
"Couldn't set mixer units maximum framers per slice");
// set the audio data format of tone Unit
AudioUnitSetProperty(mTone,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Global,
0,
&stereoStreamFormat864,
sizeof(AudioStreamBasicDescription));
// set the audio data format of reverb Unit
AudioUnitSetProperty(mReverb,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Global,
0,
&stereoStreamFormat864,
sizeof(AudioStreamBasicDescription));
// set initial reverb
AudioUnitParameterValue reverbTime = 2.5;
AudioUnitSetParameter(mReverb, 4, kAudioUnitScope_Global, 0, reverbTime, 0);
AudioUnitSetParameter(mReverb, 5, kAudioUnitScope_Global, 0, reverbTime, 0);
AudioUnitSetParameter(mReverb, 0, kAudioUnitScope_Global, 0, 0, 0);
AudioStreamBasicDescription auEffectStreamFormat;
UInt32 asbdSize = sizeof (auEffectStreamFormat);
memset (&auEffectStreamFormat, 0, sizeof (auEffectStreamFormat ));
// get the audio data format from reverb
CheckError(AudioUnitGetProperty(mReverb,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&auEffectStreamFormat,
&asbdSize),
"Couldn't get aueffectunit ASBD");
auEffectStreamFormat.mSampleRate = graphSampleRate;
// set the audio data format of mixer Unit
CheckError(AudioUnitSetProperty(mMixer,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&auEffectStreamFormat, sizeof(auEffectStreamFormat)),
"Couldn't set ASBD on mixer output");
CheckError(AUGraphInitialize(mGraph),"Couldn't Initialize the graph");
[self setUpAUFilePlayer];
[self setUpAUFilePlayer2];
}
//Audio file playback setup; here I am setting the voice file
-(OSStatus) setUpAUFilePlayer{
NSString *songPath = [[NSBundle mainBundle] pathForResource: @"testVoice" ofType:@".m4a"];
CFURLRef songURL = ( CFURLRef) [NSURL fileURLWithPath:songPath];
// open the input audio file
CheckError(AudioFileOpenURL(songURL, kAudioFileReadPermission, 0, &inputFile),
"setUpAUFilePlayer AudioFileOpenURL failed");
AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyDataFormat,
&propSize, &fileASBD),
"setUpAUFilePlayer couldn't get file's data format");
// tell the file player unit to load the file we want to play
CheckError(AudioUnitSetProperty(mFilePlayer, kAudioUnitProperty_ScheduledFileIDs,
kAudioUnitScope_Global, 0, &inputFile, sizeof(inputFile)),
"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileIDs] failed");
UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyAudioDataPacketCount,
&propsize, &nPackets),
"setUpAUFilePlayer AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");
// tell the file player AU to play the entire file
ScheduledAudioFileRegion rgn;
memset (&rgn.mTimeStamp, 0, sizeof(rgn.mTimeStamp));
rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
rgn.mTimeStamp.mSampleTime = 0;
rgn.mCompletionProc = NULL;
rgn.mCompletionProcUserData = NULL;
rgn.mAudioFile = inputFile;
rgn.mLoopCount = -1;
rgn.mStartFrame = 0;
rgn.mFramesToPlay = nPackets * fileASBD.mFramesPerPacket;
if (MaxSampleTime < rgn.mFramesToPlay)
{
MaxSampleTime = rgn.mFramesToPlay;
}
CheckError(AudioUnitSetProperty(mFilePlayer, kAudioUnitProperty_ScheduledFileRegion,
kAudioUnitScope_Global, 0,&rgn, sizeof(rgn)),
"setUpAUFilePlayer1 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileRegion] failed");
// prime the file player AU with default values
UInt32 defaultVal = 0;
CheckError(AudioUnitSetProperty(mFilePlayer, kAudioUnitProperty_ScheduledFilePrime,
kAudioUnitScope_Global, 0, &defaultVal, sizeof(defaultVal)),
"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduledFilePrime] failed");
// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
CheckError(AudioUnitSetProperty(mFilePlayer, kAudioUnitProperty_ScheduleStartTimeStamp,
kAudioUnitScope_Global, 0, &startTime, sizeof(startTime)),
"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp]");
return noErr;
}
//Audio file playback setup; here I am setting the BGMusic file
-(OSStatus) setUpAUFilePlayer2{
NSString *songPath = [[NSBundle mainBundle] pathForResource: @"BGmusic" ofType:@".mp3"];
CFURLRef songURL = ( CFURLRef) [NSURL fileURLWithPath:songPath];
// open the input audio file
CheckError(AudioFileOpenURL(songURL, kAudioFileReadPermission, 0, &inputFile2),
"setUpAUFilePlayer2 AudioFileOpenURL failed");
AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
CheckError(AudioFileGetProperty(inputFile2, kAudioFilePropertyDataFormat,
&propSize, &fileASBD),
"setUpAUFilePlayer2 couldn't get file's data format");
// tell the file player unit to load the file we want to play
CheckError(AudioUnitSetProperty(mFilePlayer2, kAudioUnitProperty_ScheduledFileIDs,
kAudioUnitScope_Global, 0, &inputFile2, sizeof(inputFile2)),
"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileIDs] failed");
UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile2, kAudioFilePropertyAudioDataPacketCount,
&propsize, &nPackets),
"setUpAUFilePlayer2 AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");
// tell the file player AU to play the entire file
ScheduledAudioFileRegion rgn;
memset (&rgn.mTimeStamp, 0, sizeof(rgn.mTimeStamp));
rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
rgn.mTimeStamp.mSampleTime = 0;
rgn.mCompletionProc = NULL;
rgn.mCompletionProcUserData = NULL;
rgn.mAudioFile = inputFile2;
rgn.mLoopCount = -1;
rgn.mStartFrame = 0;
rgn.mFramesToPlay = nPackets * fileASBD.mFramesPerPacket;
if (MaxSampleTime < rgn.mFramesToPlay)
{
MaxSampleTime = rgn.mFramesToPlay;
}
CheckError(AudioUnitSetProperty(mFilePlayer2, kAudioUnitProperty_ScheduledFileRegion,
kAudioUnitScope_Global, 0,&rgn, sizeof(rgn)),
"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileRegion] failed");
// prime the file player AU with default values
UInt32 defaultVal = 0;
CheckError(AudioUnitSetProperty(mFilePlayer2, kAudioUnitProperty_ScheduledFilePrime,
kAudioUnitScope_Global, 0, &defaultVal, sizeof(defaultVal)),
"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFilePrime] failed");
// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
CheckError(AudioUnitSetProperty(mFilePlayer2, kAudioUnitProperty_ScheduleStartTimeStamp,
kAudioUnitScope_Global, 0, &startTime, sizeof(startTime)),
"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp]");
return noErr;
}
//Start Saving File
- (void)startRecordingAAC{
AudioStreamBasicDescription destinationFormat;
memset(&destinationFormat, 0, sizeof(destinationFormat));
destinationFormat.mChannelsPerFrame = 2;
destinationFormat.mFormatID = kAudioFormatMPEG4AAC;
UInt32 size = sizeof(destinationFormat);
OSStatus result = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &destinationFormat);
if(result) printf("AudioFormatGetProperty %ld \n", result);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat: @"%@/output.m4a", documentsDirectory];
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
(CFStringRef)destinationFilePath,
kCFURLPOSIXPathStyle,
false);
[destinationFilePath release];
// specify codec Saving the output in .m4a format
result = ExtAudioFileCreateWithURL(destinationURL,
kAudioFileM4AType,
&destinationFormat,
NULL,
kAudioFileFlags_EraseFile,
&extAudioFile);
if(result) printf("ExtAudioFileCreateWithURL %ld \n", result);
CFRelease(destinationURL);
// This is a very important part and the easiest way to set the ASBD for the file with the correct format.
AudioStreamBasicDescription clientFormat;
UInt32 fSize = sizeof (clientFormat);
memset(&clientFormat, 0, sizeof(clientFormat));
// get the audio data format from the Output Unit
CheckError(AudioUnitGetProperty(mGIO,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&clientFormat,
&fSize),"AudioUnitGetProperty failed");
// set the client data format on the output file
CheckError(ExtAudioFileSetProperty(extAudioFile,
kExtAudioFileProperty_ClientDataFormat,
sizeof(clientFormat),
&clientFormat),
"ExtAudioFileSetProperty kExtAudioFileProperty_ClientDataFormat failed");
// specify codec
UInt32 codec = kAppleHardwareAudioCodecManufacturer;
CheckError(ExtAudioFileSetProperty(extAudioFile,
kExtAudioFileProperty_CodecManufacturer,
sizeof(codec),
&codec),"ExtAudioFileSetProperty on extAudioFile failed");
CheckError(ExtAudioFileWriteAsync(extAudioFile, 0, NULL),"ExtAudioFileWriteAsync Failed");
[self pullGenericOutput];
}
// Manual Feeding and getting data/Buffer from the GenericOutput Node.
-(void)pullGenericOutput{
AudioUnitRenderActionFlags flags = 0;
AudioTimeStamp inTimeStamp;
memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
UInt32 busNumber = 0;
UInt32 numberFrames = 512;
inTimeStamp.mSampleTime = 0;
int channelCount = 2;
NSLog(@"Final numberFrames :%li",numberFrames);
int totFrms = MaxSampleTime;
while (totFrms > 0)
{
if (totFrms < numberFrames)
{
numberFrames = totFrms;
NSLog(@"Final numberFrames :%li",numberFrames);
}
else
{
totFrms -= numberFrames;
}
AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
bufferList->mNumberBuffers = channelCount;
for (int j=0; j<channelCount; j++)
{
AudioBuffer buffer = {0};
buffer.mNumberChannels = 1;
buffer.mDataByteSize = numberFrames*sizeof(AudioUnitSampleType);
buffer.mData = calloc(numberFrames, sizeof(AudioUnitSampleType));
bufferList->mBuffers[j] = buffer;
}
CheckError(AudioUnitRender(mGIO,
&flags,
&inTimeStamp,
busNumber,
numberFrames,
bufferList),
"AudioUnitRender mGIO");
CheckError(ExtAudioFileWrite(extAudioFile, numberFrames, bufferList),("extaudiofilewrite fail"));
}
[self FilesSavingCompleted];
}
//FilesSavingCompleted
-(void)FilesSavingCompleted{
OSStatus status = ExtAudioFileDispose(extAudioFile);
printf("OSStatus(ExtAudioFileDispose): %ld\n", status);
}
One way to do offline rendering is to remove the RemoteIO unit and explicitly call AudioUnitRender on the right-most unit in your graph (either the mixer or the reverb unit depending on your topology). By doing this in a loop until you exhaust the samples from both of your source files, and writing the resulting sample buffers with Extended Audio File Services, you can create a compressed audio file of the mixdown. You'll want to do this on a background thread to keep the UI responsive, but I've used this technique before with some success.
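A minimal sketch of that render-and-write loop (reusing mMixer, extAudioFile, MaxSampleTime, and CheckError from the code above, and assuming the mixer's output bus 0 is the right-most point of the graph; run it on a background thread):
AudioUnitRenderActionFlags flags = 0;
AudioTimeStamp ts;
memset(&ts, 0, sizeof(ts));
ts.mFlags = kAudioTimeStampSampleTimeValid;
const UInt32 framesPerPull = 512;
const int channelCount = 2;
SInt64 framesRemaining = (SInt64)MaxSampleTime;
// One non-interleaved buffer per channel, sized for a single pull.
AudioBufferList *abl = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (channelCount - 1));
abl->mNumberBuffers = channelCount;
for (int ch = 0; ch < channelCount; ch++) {
    abl->mBuffers[ch].mNumberChannels = 1;
    abl->mBuffers[ch].mDataByteSize = framesPerPull * sizeof(AudioUnitSampleType);
    abl->mBuffers[ch].mData = calloc(framesPerPull, sizeof(AudioUnitSampleType));
}
while (framesRemaining > 0) {
    UInt32 frames = (framesRemaining < framesPerPull) ? (UInt32)framesRemaining : framesPerPull;
    for (int ch = 0; ch < channelCount; ch++) {
        abl->mBuffers[ch].mDataByteSize = frames * sizeof(AudioUnitSampleType);
    }
    CheckError(AudioUnitRender(mMixer, &flags, &ts, 0, frames, abl), "AudioUnitRender on mixer output");
    CheckError(ExtAudioFileWrite(extAudioFile, frames, abl), "ExtAudioFileWrite");
    ts.mSampleTime += frames; // advance the offline render clock each pass
    framesRemaining -= frames;
}
for (int ch = 0; ch < channelCount; ch++) {
    free(abl->mBuffers[ch].mData);
}
free(abl);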
The above code works on an iOS 7 device, but not on iOS 8 devices or on any simulator.
I replaced the following code segment
UInt32 category = kAudioSessionCategory_MediaPlayback;
CheckError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(category),
&category),
"Couldn't set category on audio session");
with the following code, because AudioSessionSetProperty is deprecated:
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *setCategoryError = nil;
if (![session setCategory:AVAudioSessionCategoryPlayback
withOptions:AVAudioSessionCategoryOptionMixWithOthers
error:&setCategoryError]) {
// handle error
}
There must be some update needed for iOS 8, either in the above code or somewhere else.
I followed Abdusha's approach, but my output file had no audio and its size was very small compared to the input. After looking into it, the fix I made was in the pullGenericOutput function, after the AudioUnitRender call:
AudioUnitRender(genericOutputUnit,
&flags,
&inTimeStamp,
busNumber,
numberFrames,
bufferList);
inTimeStamp.mSampleTime++; //Updated
Increment the timestamp by 1. After doing this, the output file was perfect, with the effects working. Thanks, your answer helped a lot.

Realtime audio processing without output

I'm looking at this example http://teragonaudio.com/article/How-to-do-realtime-recording-with-effect-processing-on-iOS.html
and I want to turn off my output. I tried changing kAudioSessionCategory_PlayAndRecord to kAudioSessionCategory_RecordAudio, but this is not working. I also tried to get rid of:
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &streamDescription, sizeof(streamDescription)) != noErr) {
return 1;
}
Because I want to get sound from the microphone but not play it. But no matter what I do, when my sound gets to the renderCallback method there is a -50 error. When the audio is automatically played on the output, everything works fine...
Update with code:
using namespace std;
AudioUnit *audioUnit = NULL;
float *convertedSampleBuffer = NULL;
int initAudioSession() {
audioUnit = (AudioUnit*)malloc(sizeof(AudioUnit));
if(AudioSessionInitialize(NULL, NULL, NULL, NULL) != noErr) {
return 1;
}
if(AudioSessionSetActive(true) != noErr) {
return 1;
}
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
if(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(UInt32), &sessionCategory) != noErr) {
return 1;
}
Float32 bufferSizeInSec = 0.02f;
if(AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
sizeof(Float32), &bufferSizeInSec) != noErr) {
return 1;
}
UInt32 overrideCategory = 1;
if(AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
sizeof(UInt32), &overrideCategory) != noErr) {
return 1;
}
// There are many properties you might want to provide callback functions for:
// kAudioSessionProperty_AudioRouteChange
// kAudioSessionProperty_OverrideCategoryEnableBluetoothInput
// etc.
return 0;
}
OSStatus renderCallback(void *userData, AudioUnitRenderActionFlags *actionFlags,
const AudioTimeStamp *audioTimeStamp, UInt32 busNumber,
UInt32 numFrames, AudioBufferList *buffers) {
OSStatus status = AudioUnitRender(*audioUnit, actionFlags, audioTimeStamp,
1, numFrames, buffers);
int doOutput = 0;
if(status != noErr) {
return status;
}
if(convertedSampleBuffer == NULL) {
// Lazy initialization of this buffer is necessary because we don't
// know the frame count until the first callback
convertedSampleBuffer = (float*)malloc(sizeof(float) * numFrames);
baseTime = (float)QRealTimer::getUptimeInMilliseconds();
}
SInt16 *inputFrames = (SInt16*)(buffers->mBuffers->mData);
// If your DSP code can use integers, then don't bother converting to
// floats here, as it just wastes CPU. However, most DSP algorithms rely
// on floating point, and this is especially true if you are porting a
// VST/AU to iOS.
int i;
for( i = numFrames; i < fftlength; i++ ) // Shifting buffer
x_inbuf[i - numFrames] = x_inbuf[i];
for( i = 0; i < numFrames; i++) {
x_inbuf[i + x_phase] = (float)inputFrames[i] / (float)32768;
}
if( x_phase + numFrames == fftlength )
{
x_alignment.SigProc_frontend(x_inbuf); // Signal processing front-end (FFT!)
doOutput = x_alignment.Align();
/// Output as text! In the real-time version,
// this is where we update visualisation callbacks and launch other services
if ((doOutput) & (x_netscore.isEvent(x_alignment.Position()))
&(x_alignment.lastAction()<x_alignment.Position()) )
{
// here i want to do something with my input!
}
}
else
x_phase += numFrames;
return noErr;
}
int initAudioStreams(AudioUnit *audioUnit) {
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
if(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(UInt32), &audioCategory) != noErr) {
return 1;
}
UInt32 overrideCategory = 1;
if(AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
sizeof(UInt32), &overrideCategory) != noErr) {
// Less serious error, but you may want to handle it and bail here
}
AudioComponentDescription componentDescription;
componentDescription.componentType = kAudioUnitType_Output;
componentDescription.componentSubType = kAudioUnitSubType_RemoteIO;
componentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
componentDescription.componentFlags = 0;
componentDescription.componentFlagsMask = 0;
AudioComponent component = AudioComponentFindNext(NULL, &componentDescription);
if(AudioComponentInstanceNew(component, audioUnit) != noErr) {
return 1;
}
UInt32 enable = 1;
if(AudioUnitSetProperty(*audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input, 1, &enable, sizeof(UInt32)) != noErr) {
return 1;
}
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = renderCallback; // Render function
callbackStruct.inputProcRefCon = NULL;
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input, 0, &callbackStruct,
sizeof(AURenderCallbackStruct)) != noErr) {
return 1;
}
AudioStreamBasicDescription streamDescription;
// You might want to replace this with a different value, but keep in mind that the
// iPhone does not support all sample rates. 8 kHz, 22.05 kHz, and 44.1 kHz should all work.
streamDescription.mSampleRate = 44100;
// Yes, I know you probably want floating point samples, but the iPhone isn't going
// to give you floating point data. You'll need to make the conversion by hand from
// linear PCM <-> float.
streamDescription.mFormatID = kAudioFormatLinearPCM;
// This part is important!
streamDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked;
streamDescription.mBitsPerChannel = 16;
// 1 sample per frame; mBytesPerFrame will always be 2 as long as 16-bit mono samples are used
streamDescription.mBytesPerFrame = 2;
streamDescription.mChannelsPerFrame = 1;
streamDescription.mBytesPerPacket = streamDescription.mBytesPerFrame *
streamDescription.mChannelsPerFrame;
// Always should be set to 1
streamDescription.mFramesPerPacket = 1;
// Always set to 0, just to be sure
streamDescription.mReserved = 0;
// Set up input stream with above properties
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription)) != noErr) {
return 1;
}
// Ditto for the output stream, which we will be sending the processed audio to
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &streamDescription, sizeof(streamDescription)) != noErr) {
return 1;
}
return 0;
}
int startAudioUnit(AudioUnit *audioUnit) {
if(AudioUnitInitialize(*audioUnit) != noErr) {
return 1;
}
if(AudioOutputUnitStart(*audioUnit) != noErr) {
return 1;
}
return 0;
}
And calling from my VC:
initAudioSession();
initAudioStreams( audioUnit);
startAudioUnit( audioUnit);
If you want only recording and no playback, simply comment out the code that sets the renderCallback:
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = renderCallback; // Render function
callbackStruct.inputProcRefCon = NULL;
if(AudioUnitSetProperty(*audioUnit, kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input, 0, &callbackStruct,
sizeof(AURenderCallbackStruct)) != noErr) {
return 1;
}
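Another option that should work with RemoteIO is to explicitly disable its output element via kAudioOutputUnitProperty_EnableIO. A sketch, assuming the usual bus numbering (input element 1, output element 0):
// Turn off the output (speaker) side of RemoteIO: scope Output, element 0.
UInt32 disableOutput = 0;
if(AudioUnitSetProperty(*audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output, 0, &disableOutput, sizeof(UInt32)) != noErr) {
return 1;
}
// Keep the input (microphone) side enabled: scope Input, element 1.
UInt32 enableInput = 1;
if(AudioUnitSetProperty(*audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input, 1, &enableInput, sizeof(UInt32)) != noErr) {
return 1;
}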
Update after seeing code:
As I suspected, you're missing the input callback. Add these lines:
// at top:
#define kInputBus 1
AURenderCallbackStruct callbackStruct;
// ... and where you configure the audio unit:
callbackStruct.inputProc = &ALAudioUnit::recordingCallback;
callbackStruct.inputProcRefCon = this;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
Now in your recordingCallback:
OSStatus ALAudioUnit::recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
// TODO: Use inRefCon to access our interface object to do stuff
// Then, use inNumberFrames to figure out how much data is available, and make
// that much space available in buffers in an AudioBufferList.
// Then:
// Obtain recorded samples
OSStatus status;
ALAudioUnit *pThis = reinterpret_cast<ALAudioUnit*>(inRefCon);
if (!pThis)
return noErr;
//assert (pThis->m_nMaxSliceFrames >= inNumberFrames);
pThis->recorderBufferList->GetBufferList().mBuffers[0].mDataByteSize = inNumberFrames * pThis->m_recorderSBD.mBytesPerFrame;
status = AudioUnitRender(pThis->audioUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&pThis->recorderBufferList->GetBufferList());
THROW_EXCEPTION_IF_ERROR(status, "error rendering audio unit");
// If we're not playing, I don't care about the data, simply discard it
if (!pThis->playbackState || pThis->isSeeking) return noErr;
// Now, we have the samples we just read sitting in buffers in bufferList
pThis->DoStuffWithTheRecordedAudio(inNumberFrames, pThis->recorderBufferList, inTimeStamp);
return noErr;
}
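Just for illustration, a hypothetical DoStuffWithTheRecordedAudio (the body below is assumed, not from my actual project) could simply walk the 16-bit samples and compute an RMS value, matching the mono, interleaved, signed-integer format used above:
#include <cmath>
// Hypothetical example body: compute the RMS of the recorded slice.
void ALAudioUnit::DoStuffWithTheRecordedAudio(UInt32 inNumberFrames,
                                              AUBufferList *bufferList,
                                              const AudioTimeStamp *inTimeStamp)
{
    AudioBufferList &abl = bufferList->GetBufferList();
    const SInt16 *samples = static_cast<const SInt16 *>(abl.mBuffers[0].mData);
    double sum = 0.0;
    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        double s = samples[i] / 32768.0;   // normalise 16-bit samples to [-1, 1)
        sum += s * s;
    }
    double rms = (inNumberFrames > 0) ? sqrt(sum / inNumberFrames) : 0.0;
    NSLog(@"RMS at sample time %.0f: %f", inTimeStamp->mSampleTime, rms);
}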
By the way, I'm allocating my own buffer instead of using the one provided by the AudioUnit. You might want to change those parts if you'd rather use the AudioUnit-allocated buffer.
Update:
How to allocate own buffer:
recorderBufferList = new AUBufferList();
recorderBufferList->Allocate(m_recorderSBD, m_nMaxSliceFrames);
recorderBufferList->PrepareBuffer(m_recorderSBD, m_nMaxSliceFrames);
Also, if you're doing this, tell AudioUnit to not allocate buffers:
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
You'll need to include Apple's Core Audio utility classes (they provide AUBufferList).
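If you'd rather not pull in those utility classes, a plain AudioBufferList can be allocated by hand. A sketch for a single interleaved buffer (the helper name AllocateABL is made up for this example):
// Hand-rolled stand-in for AUBufferList: one interleaved buffer.
static AudioBufferList *AllocateABL(const AudioStreamBasicDescription *asbd,
                                    UInt32 maxFrames)
{
    AudioBufferList *abl = (AudioBufferList *)calloc(1, sizeof(AudioBufferList));
    abl->mNumberBuffers = 1;
    abl->mBuffers[0].mNumberChannels = asbd->mChannelsPerFrame;
    abl->mBuffers[0].mDataByteSize = maxFrames * asbd->mBytesPerFrame;
    abl->mBuffers[0].mData = calloc(maxFrames, asbd->mBytesPerFrame);
    return abl;
}
As in the recordingCallback above, you would still reset mBuffers[0].mDataByteSize to inNumberFrames * mBytesPerFrame before each AudioUnitRender call, and free both mData and the list itself when you're done.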
Thanks for @Mar0ux's answer. Anyone who got here looking for complete sample code for this can take a look here:
https://code.google.com/p/ios-coreaudio-example/
I'm working on a similar app with the same code, and I found that you can disable playback by changing the category from kAudioSessionCategory_PlayAndRecord to kAudioSessionCategory_RecordAudio:
int initAudioStreams(AudioUnit *audioUnit) {
UInt32 audioCategory = kAudioSessionCategory_RecordAudio;
if(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(UInt32), &audioCategory) != noErr) {
return 1;
}
This stopped the feedback between mic and speaker on my hardware.
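If you're on a newer SDK where the AudioSession C API is deprecated, the equivalent category change should look roughly like this with AVAudioSession:
NSError *error = nil;
if (![[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord
                                            error:&error]) {
    NSLog(@"Could not set the record-only category: %@", error);
}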

Resources