AudioUnit noise if there is no output buffer - iOS

I am trying to implement playback of PCM audio received from a remote server via a socket. Here is my previous question: link. This works fine as long as I use a circular buffer and keep feeding it the incoming data.
However, a loud noise is produced whenever there is no buffer supplied to my output. This happens when I call AudioOutputUnitStart(_audioUnit) and there is no buffer to play yet.
I suspect I have to fix this in my OutputRenderCallback function below, or maybe there is something else I need to do:
static OSStatus OutputRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
    Test *output = (__bridge Test *)inRefCon;
    TPCircularBuffer *circularBuffer = [output outputShouldUseCircularBuffer];

    if (!circularBuffer) {
        // No buffer yet: output silence.
        SInt32 *left = (SInt32 *)ioData->mBuffers[0].mData;
        for (int i = 0; i < inNumberFrames; i++) {
            left[i] = 0;
        }
        return noErr;
    }

    int32_t bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *outputBuffer = ioData->mBuffers[0].mData;

    uint32_t availableBytes;
    SInt16 *sourceBuffer = TPCircularBufferTail(circularBuffer, &availableBytes);

    int32_t amount = MIN(bytesToCopy, availableBytes);
    memcpy(outputBuffer, sourceBuffer, amount);
    TPCircularBufferConsume(circularBuffer, amount);

    return noErr;
}
I highly appreciate your help. Thanks.

An audio unit render callback requires that you always put the requested number of samples into the AudioBufferList buffers. Your code does not do that when the amount available in the circular buffer is less than requested.
So always put something in the output buffer, as your code already does when there is no circular buffer.
BTW: calling an Objective-C method:
[output outputShouldUseCircularBuffer]
inside a render callback is a violation of Apple's rules for real-time audio.
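One way to honor that rule, sketched here as an assumption rather than taken from the original answer (the RenderContext struct and its field are hypothetical), is to hand the callback a plain C pointer so the render thread never sends an Objective-C message:

// Hypothetical sketch: cache the circular buffer behind a plain C struct,
// set up before calling AudioOutputUnitStart(), so the callback does no
// Objective-C messaging and always fills the requested byte count.
typedef struct {
    TPCircularBuffer *circularBuffer;
} RenderContext;

static OSStatus OutputRenderCallbackRT(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData) {
    RenderContext *ctx = (RenderContext *)inRefCon; // plain C access, real-time safe
    uint32_t needed = ioData->mBuffers[0].mDataByteSize;
    uint32_t availableBytes;
    void *src = TPCircularBufferTail(ctx->circularBuffer, &availableBytes);
    uint32_t amount = MIN(needed, availableBytes);
    memcpy(ioData->mBuffers[0].mData, src, amount);
    // Zero-fill whatever the circular buffer could not supply.
    memset((char *)ioData->mBuffers[0].mData + amount, 0, needed - amount);
    TPCircularBufferConsume(ctx->circularBuffer, amount);
    return noErr;
}

The address of a RenderContext kept alive for the lifetime of the unit would then go into the inputProcRefCon field of the AURenderCallbackStruct instead of the bridged Objective-C object.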

I am posting my answer in case someone else stumbles at the same point I did. I am new to Objective-C, so if someone has a better solution, I welcome any suggestions.
As @hotpaw2 suggested, the AudioBufferList needs to be fed with samples even when my circular buffer has nothing in it. In that case I fill the AudioBufferList with frames set to 0.
static OSStatus OutputRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
    Test *output = (__bridge Test *)inRefCon;
    TPCircularBuffer *circularBuffer = [output outputShouldUseCircularBuffer];

    int32_t bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *outputBuffer = ioData->mBuffers[0].mData;

    uint32_t availableBytes;
    SInt16 *sourceBuffer = TPCircularBufferTail(circularBuffer, &availableBytes);

    int32_t amount = MIN(bytesToCopy, availableBytes);
    if (amount > 0) {
        memcpy(outputBuffer, sourceBuffer, amount);
        TPCircularBufferConsume(circularBuffer, amount);
        // Zero-fill the remainder so the full requested size is always written.
        if (amount < bytesToCopy) {
            memset((char *)outputBuffer + amount, 0, bytesToCopy - amount);
        }
    } else {
        // Nothing available: output silence.
        memset(outputBuffer, 0, bytesToCopy);
    }
    return noErr;
}

Related

AudioKit, how to play back a modified buffer in a tap?

I use AudioKit (in Objective-C) for realtime audio processing. I feed a C++ algorithm through a tap (or lazy tap) in which the buffer is modified.
I thought it would be obvious, but... how can I play back the modified buffer on the output? Are taps only for analysis?
[self->microphoneGain.avAudioNode installTapOnBus:0
                                       bufferSize:1024
                                           format:format
                                            block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    if (buffer.frameLength == 0) {
        return;
    }
    // Process data -> returns the modified buffer
    processData(buffer.floatChannelData[0], buffer.floatChannelData[1], buffer.frameLength);
    // -> How to play back the buffer?
}];
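For what it's worth, one hedged way to get the processed buffer audible (a sketch, not from the original post: playerNode is an assumed AVAudioPlayerNode that is already attached to the engine, connected to the output, and playing) is to schedule the tapped buffer on a player node:

// Assumed: self->playerNode is an AVAudioPlayerNode attached to the engine,
// connected to the output node, and started with [playerNode play].
[self->microphoneGain.avAudioNode installTapOnBus:0
                                       bufferSize:1024
                                           format:format
                                            block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    if (buffer.frameLength == 0) {
        return;
    }
    // Process in place, then hand the modified buffer to the player node.
    processData(buffer.floatChannelData[0], buffer.floatChannelData[1], buffer.frameLength);
    [self->playerNode scheduleBuffer:buffer completionHandler:nil];
}];

Scheduling this way adds at least one tap buffer of latency, which is what the render-callback approach in the EDIT below avoids.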
Furthermore, I can't get the tap's buffer size below 4800 samples. What would be my best option to get better latency? I read about AUAudioUnit subclassing, render callbacks, and a realtime mode for AVAudioEngine, but I'm quite lost when trying to implement any of these with AudioKit. Thanks!
EDIT:
I managed to set a render callback which has apparently solved both of my problems.
AURenderCallbackStruct processingCallback;
processingCallback.inputProc = processingCalbackProc;
processingCallback.inputProcRefCon = (__bridge void *)(self);
OSStatus status = AudioUnitSetProperty(AudioKit.engine.outputNode.audioUnit,
                                       kAudioUnitProperty_SetRenderCallback,
                                       kAudioUnitScope_Input,
                                       0,
                                       &processingCallback,
                                       sizeof(processingCallback));
if (status != noErr) {
    return false;
}
OSStatus processingCalbackProc(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    __unsafe_unretained MyClass *self = (__bridge MyClass *)inRefCon;
    printf("%u, ", (unsigned int)inNumberFrames); // -> low latency!
    if (!ioData) ioData = self->audioBufferList; // pre-allocated fallback buffer list
    OSStatus status = AudioUnitRender(AudioKit.engine.outputNode.audioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      1,
                                      inNumberFrames,
                                      ioData);
    if (status != noErr) { return status; }

    // Get buffers
    unsigned int inputChannels = 2;
    float *buffer[inputChannels];
    for (int i = 0; i < inputChannels; i++) {
        buffer[i] = (float *)ioData->mBuffers[i].mData;
    }
    // Process data in place
    processData(buffer[0], buffer[1], inNumberFrames);
    return noErr;
}
Now I can easily get buffers as small as 256 samples (probably even less, but that isn't needed in my case), and when buffer[n] is modified, the modified buffers are what gets output.
Everything seems to be fine; I just hope this is the right approach.

AURemoteIO::IOThread EXC_BAD_ACCESS

I am using AudioGraph. eqRenderInput is a callback function.
static OSStatus eqRenderInput(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    @autoreleasepool {
        NSLog(@"--method = %s--current thread:%@", __func__, [NSThread currentThread]);
        MyAudioController *mycon = (__bridge MyAudioController *)inRefCon;
        AudioUnit mixUNIT = mycon->mixUnit;
        AudioUnitRender(mixUNIT, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, ioData);
        int bufferCount = ioData->mNumberBuffers;
        NSLog(@"current thread:%@----bufferCount=%d", [NSThread currentThread], bufferCount);
        // Fill the provided AudioBufferList with the data from the AudioBufferList
        // output by the audio data output
        for (int bufferIndex = 0; bufferIndex < bufferCount; bufferIndex++) {
            NSData *tmpData = [NSData dataWithBytes:ioData->mBuffers[bufferIndex].mData
                                             length:ioData->mBuffers[bufferIndex].mDataByteSize];
            [mycon->mixMArray addObject:tmpData]; // this line crashes
            NSLog(@"mchannel = %d---bufferIndex = %d", ioData->mBuffers[bufferIndex].mNumberChannels, bufferIndex);
        }
    }
    return noErr;
}
mixMArray is an NSMutableArray. The line [mycon->mixMArray addObject:tmpData]; crashes.
The error message is EXC_BAD_ACCESS (code = 1, address = 0xf42e8).
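Allocating NSData and mutating an NSMutableArray on the render thread is not real-time safe, which is consistent with this kind of crash (see hotpaw2's note above). A hedged alternative sketch, assuming a TPCircularBuffer ivar (here called mixRingBuffer, hypothetical) that was initialized with TPCircularBufferInit() before the graph starts:

// Hypothetical sketch: copy the rendered bytes into a pre-allocated,
// lock-free ring buffer instead of boxing them in NSData on the render thread.
for (int bufferIndex = 0; bufferIndex < bufferCount; bufferIndex++) {
    TPCircularBufferProduceBytes(&mycon->mixRingBuffer,
                                 ioData->mBuffers[bufferIndex].mData,
                                 ioData->mBuffers[bufferIndex].mDataByteSize);
}
// A normal (non-real-time) thread can then drain the ring buffer via
// TPCircularBufferTail()/TPCircularBufferConsume() and build NSData there.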

Convert sound in Core Audio (iOS): 1 buffer with 2 channels to 2 buffers, each with 1 channel?

I use OrigamiEngine to play FLAC files. The problem is that I need to convert its output data (it uses standard Core Audio with a custom static function) in this file.
Code:
static OSStatus Sound_Renderer(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    ORGMOutputUnit *output = (ORGMOutputUnit *)inRefCon;
    OSStatus err = noErr;
    void *readPointer = ioData->mBuffers[0].mData;

    int amountToRead, amountRead;
    amountToRead = inNumberFrames * (output->_format.mBytesPerPacket);
    amountRead = [output readData:(readPointer) amount:amountToRead];

    if (amountRead < amountToRead) {
        int amountRead2;
        amountRead2 = [output readData:(readPointer + amountRead) amount:amountToRead - amountRead];
        amountRead += amountRead2;
    }

    ioData->mBuffers[0].mDataByteSize = amountRead;
    ioData->mBuffers[0].mNumberChannels = output->_format.mChannelsPerFrame;
    ioData->mNumberBuffers = 1;
    return err;
}
So ioData has 1 buffer with 2 channels, but I need it the other way around: 2 buffers, each containing 1 channel. Or simply:
LRLRLR... -> LLL... + RRR...
UPDATED
In general I need to achieve something like the following:
ioData->mBuffers[0].mDataByteSize = ...;
ioData->mBuffers[0].mNumberChannels = 1;
ioData->mBuffers[1].mDataByteSize = ...;
ioData->mBuffers[1].mNumberChannels = 1;
ioData->mNumberBuffers = 2;
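For reference, a minimal deinterleaving sketch (not from the original post; the helper and the scratch buffer are hypothetical, and 16-bit samples are assumed):

// Hypothetical helper: split interleaved LRLRLR... SInt16 samples into
// two planar buffers (LLL... and RRR...).
static void DeinterleaveStereo(const SInt16 *interleaved,
                               SInt16 *left, SInt16 *right,
                               UInt32 frameCount) {
    for (UInt32 frame = 0; frame < frameCount; frame++) {
        left[frame]  = interleaved[2 * frame];
        right[frame] = interleaved[2 * frame + 1];
    }
}

// In the render callback, after reading into an assumed scratch buffer:
// DeinterleaveStereo(scratch,
//                    (SInt16 *)ioData->mBuffers[0].mData,
//                    (SInt16 *)ioData->mBuffers[1].mData,
//                    inNumberFrames);
// then set mDataByteSize/mNumberChannels/mNumberBuffers as shown above.

Note that for ioData to arrive with two buffers in the first place, the stream format on the unit generally has to be set non-interleaved (kAudioFormatFlagIsNonInterleaved) beforehand.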

How to get the volume of an AudioUnit

I am using an AudioUnit to play input from the microphone through the earphones.
It's working great. Now I need to increase the volume of weak sounds and decrease strong ones.
I found a way to increase the volume:
static OSStatus performRender(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    OSStatus err = noErr;
    if (*cd.audioChainIsBeingReconstructed == NO)
    {
        // we are calling AudioUnitRender on the input bus of AURemoteIO
        // this will store the audio data captured by the microphone in ioData
        err = AudioUnitRender(cd.rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);

        // filter out the DC component of the signal
        cd.dcRejectionFilter->ProcessInplace((Float32 *)ioData->mBuffers[0].mData, inNumberFrames);

        // add volume
        float desiredGain = 2.0f;
        for (UInt32 bufferIndex = 0; bufferIndex < ioData->mNumberBuffers; ++bufferIndex) {
            float *rawBuffer = (float *)ioData->mBuffers[bufferIndex].mData;
            vDSP_vsmul(rawBuffer, 1, &desiredGain, rawBuffer, 1, inNumberFrames);
        }

        // mute audio if needed
        if (*cd.muteAudio)
        {
            for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
                memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
    }
    return err;
}
My question is: how do I get the current volume, so I know how much gain to apply (and vice versa)?
Thanks!
Getting the "volume" depends on the type of AudioUnit. Some audio units have input levels, output levels, and "global" volume levels.
// MatrixMixer
Float32 volume = 0;
OSStatus result = AudioUnitGetParameter(mxmx_unit, kMatrixMixerParam_Volume,
                                        kAudioUnitScope_Global, 0, &volume);

// MultiChannelMixer
Float32 volume = 0;
OSStatus result = AudioUnitGetParameter(mcmx_unit, kMultiChannelMixerParam_Volume,
                                        kAudioUnitScope_Global, 0, &volume);
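If instead you want the level of the signal itself (to decide how much gain to apply), one hedged option, assuming Float32 samples as in the render callback above, is to measure the RMS of each rendered buffer with vDSP:

// Sketch: measure the RMS level of the just-rendered Float32 samples
// (inside the render callback, after AudioUnitRender).
float rms = 0.0f;
vDSP_rmsqv((float *)ioData->mBuffers[0].mData, 1, &rms, inNumberFrames);
// A smoothed rms could then drive the gain, e.g.
// (targetRMS is an assumed tuning constant):
// float desiredGain = targetRMS / (rms + 1e-9f);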

Amplify AudioBuffer - Xcode iOS

I have an AudioBuffer as shown below. It can play through the speaker. I would like to know a way to amplify the buffer before I play it. How should I modify this?
/**
 This callback is called when the audioUnit needs new data to play through the
 speakers. If you don't have any, just don't write anything in the buffers.
 */
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    // Notes: ioData contains buffers (may be more than one!)
    // Fill them up as much as you can. Remember to set the size value in each
    // buffer to match how much data is in the buffer.
    for (int i = 0; i < ioData->mNumberBuffers; i++) { // in practice we will only ever have 1 buffer, since the audio format is mono
        AudioBuffer buffer = ioData->mBuffers[i];
        // NSLog(@" Buffer %d has %d channels and wants %d bytes of data.", i, buffer.mNumberChannels, buffer.mDataByteSize);

        // copy temporary buffer data to output buffer
        UInt32 size = min(buffer.mDataByteSize, [iosAudio tempBuffer].mDataByteSize); // don't copy more data than we have, or than fits
        memcpy(buffer.mData, [iosAudio tempBuffer].mData, size);
        buffer.mDataByteSize = size; // indicate how much data we wrote in the buffer

        // uncomment to hear random noise
        /*
        UInt16 *frameBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames; j++) {
            frameBuffer[j] = rand();
        }
        */
    }
    return noErr;
}
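For what it's worth, a minimal amplification sketch (not from the original post), assuming the common case of 16-bit signed samples and applied right after the memcpy above:

// Hypothetical sketch: apply a fixed gain to 16-bit samples with clipping,
// so loud input does not wrap around and distort.
float gain = 2.0f; // assumed amplification factor
SInt16 *samples = (SInt16 *)buffer.mData;
UInt32 sampleCount = size / sizeof(SInt16);
for (UInt32 j = 0; j < sampleCount; j++) {
    SInt32 amplified = (SInt32)(samples[j] * gain);
    if (amplified > INT16_MAX) amplified = INT16_MAX; // clip high
    if (amplified < INT16_MIN) amplified = INT16_MIN; // clip low
    samples[j] = (SInt16)amplified;
}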
