I use AudioKit (in Objective-C) for real-time audio processing. I feed a C++ algorithm through a tap (or lazy tap) in which the buffer is modified.
Maybe the answer is obvious, but: how can I play back the modified buffer at the output? Are taps only meant for analysis?
[self->microphoneGain.avAudioNode installTapOnBus:0 bufferSize:1024 format:format block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    if (buffer.frameLength == 0) {
        return;
    }
    // Process data -> return modified buffer
    processData(buffer.floatChannelData[0], buffer.floatChannelData[1], buffer.frameLength);
    // -> How to play back buffer?
}];
Furthermore, I can't get the tap's buffer size below 4800 samples. What would be my best option for lower latency? I have read about AUAudioUnit subclassing, render callbacks, and the real-time mode of AudioEngine, but I get quite lost when trying to implement any of these with AudioKit. Thanks!
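One thing that directly affects achievable latency, independent of tap sizes, is the audio session's preferred I/O buffer duration. A minimal sketch of requesting a smaller hardware buffer (this is only a hint; the system may grant a different duration):

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Ask for roughly 256 frames per I/O cycle (about 5.3 ms at 48 kHz).
[session setPreferredIOBufferDuration:(256.0 / session.sampleRate) error:&error];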
EDIT:
I managed to set a render callback, which has apparently solved both of my problems.
AURenderCallbackStruct processingCallback;
processingCallback.inputProc = processingCallbackProc;
processingCallback.inputProcRefCon = (__bridge void *)(self);
OSStatus status = AudioUnitSetProperty(AudioKit.engine.outputNode.audioUnit,
                                       kAudioUnitProperty_SetRenderCallback,
                                       kAudioUnitScope_Input,
                                       0,
                                       &processingCallback,
                                       sizeof(processingCallback));
if (status != noErr) {
    return false;
}
OSStatus processingCallbackProc(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    __unsafe_unretained MyClass *self = (__bridge MyClass *)inRefCon;
    printf("%u, ", (unsigned int)inNumberFrames); // -> low latency!
    if (!ioData) ioData = self->audioBufferList;
    OSStatus status = AudioUnitRender(AudioKit.engine.outputNode.audioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      1,
                                      inNumberFrames,
                                      ioData);
    if (status != noErr) { return status; }
    // Get buffers
    unsigned int inputChannels = 2;
    float *buffer[inputChannels];
    for (int i = 0; i < inputChannels; i++) {
        buffer[i] = (float *)ioData->mBuffers[i].mData;
    }
    // Process data
    processData(buffer[0], buffer[1], inNumberFrames);
    return noErr;
}
Now I can easily get buffers as small as 256 samples (probably even less, but that is not needed in my case), and when the buffer[n] arrays are modified, the modified buffers are what gets played.
Everything seems to be fine; I just hope this is the right approach.
I am trying to implement playback of PCM audio received from a remote server via a socket. Here was my previous question link. This works fine, as I use a circular buffer that is continuously fed with the incoming audio.
However, I have a problem: a loud noise is produced whenever no buffer is supplied to my output. This happens once I call AudioOutputUnitStart(_audioUnit) and there is nothing to play.
I suspect I have to fix this in my OutputRenderCallback function below, or maybe there is something else I need to do:
static OSStatus OutputRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
    Test *output = (__bridge Test *)inRefCon;
    TPCircularBuffer *circularBuffer = [output outputShouldUseCircularBuffer];
    if (!circularBuffer) {
        SInt32 *left = (SInt32 *)ioData->mBuffers[0].mData;
        for (int i = 0; i < inNumberFrames; i++) {
            left[i] = 0.0f;
        }
        return noErr;
    }
    int32_t bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *outputBuffer = ioData->mBuffers[0].mData;
    uint32_t availableBytes;
    SInt16 *sourceBuffer = TPCircularBufferTail(circularBuffer, &availableBytes);
    int32_t amount = MIN(bytesToCopy, availableBytes);
    memcpy(outputBuffer, sourceBuffer, amount);
    TPCircularBufferConsume(circularBuffer, amount);
    return noErr;
}
I highly appreciate your help. Thanks.
An audio unit render callback requires that you always put the requested number of samples into the AudioBufferList buffers. Your code does not do that when the amount available in the circular buffer is less.
So always put something in the output buffer, as your code already does when there is no circular buffer.
BTW: calling an Objective-C method:
[output outputShouldUseCircularBuffer]
inside a callback is a violation of Apple's rules for real-time audio; a message send may allocate memory or take a lock on the audio thread.
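One common way to stay within those rules is to fetch the pointer once, outside the callback, and hand the callback a plain C struct. A minimal sketch, assuming the circular buffer pointer is stable once playback starts (the names here are illustrative, with output being the Test instance from the question):

typedef struct {
    TPCircularBuffer *circularBuffer;
} RenderContext;

// Main thread, before AudioOutputUnitStart():
static RenderContext renderContext;
renderContext.circularBuffer = [output outputShouldUseCircularBuffer];
// ...then install the render callback with &renderContext as inRefCon.

// Inside the callback, only plain C access remains:
// TPCircularBuffer *cb = ((RenderContext *)inRefCon)->circularBuffer;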
I am posting my answer in case someone else stumbles at the same point I did. I am new to Objective-C, so if someone has a better solution, I welcome any suggestions.
As @hotpaw2 suggested, the AudioBufferList needs to be fed with samples even when my circularBuffer has nothing in it; in that case I had to fill the AudioBufferList with zeroed frames.
static OSStatus OutputRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
    Test *output = (__bridge Test *)inRefCon;
    TPCircularBuffer *circularBuffer = [output outputShouldUseCircularBuffer];
    int32_t bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *outputBuffer = ioData->mBuffers[0].mData;
    uint32_t availableBytes;
    SInt16 *sourceBuffer = TPCircularBufferTail(circularBuffer, &availableBytes);
    int32_t amount = MIN(bytesToCopy, availableBytes);
    if (amount > 0) {
        memcpy(outputBuffer, sourceBuffer, amount);
        TPCircularBufferConsume(circularBuffer, amount);
        // Zero whatever part of the buffer we could not fill,
        // so a partial read never leaves stale samples behind.
        memset((char *)outputBuffer + amount, 0, bytesToCopy - amount);
    } else {
        // Nothing buffered: output silence for the whole slice.
        memset(outputBuffer, 0, bytesToCopy);
    }
    return noErr;
}
I have been attempting to record my input during an inter-app audio session on iOS 9. The speaker output sounds fine, but the recorded file has a rhythmic clicking sound.
The waveform looks like below... (screenshot omitted)
I have tweaked every setting and parameter I can think of and nothing seems to work.
Here are the format settings (stream settings are identical)...
AudioStreamBasicDescription fileFormat = {0}; // zero all fields, including mReserved
fileFormat.mSampleRate = kSessionSampleRate;
fileFormat.mFormatID = kAudioFormatLinearPCM;
fileFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked;
fileFormat.mFramesPerPacket = 1;
fileFormat.mChannelsPerFrame = 1;
fileFormat.mBitsPerChannel = 32; // tone is correct but there are still pops
fileFormat.mBytesPerPacket = sizeof(Float32);
fileFormat.mBytesPerFrame = sizeof(Float32);
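For a packed linear-PCM description like this, the derived fields must agree with one another; a quick sanity check along these lines (just a sketch) can catch the kind of mismatch that audibly pops:

#include <assert.h>

assert(fileFormat.mBytesPerFrame == (fileFormat.mBitsPerChannel / 8) * fileFormat.mChannelsPerFrame);
assert(fileFormat.mBytesPerPacket == fileFormat.mBytesPerFrame * fileFormat.mFramesPerPacket);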
Here is the connection setup...
// connect instrument to output
AudioComponentDescription componentDescription = unit.componentDescription;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &componentDescription);
OSStatus status = AudioComponentInstanceNew(inputComponent, &_instrumentUnit);
NSLog(@"%d", status);
AudioUnitElement instrumentOutputBus = 0;
AudioUnitElement ioUnitInputElement = 0;
// connect instrument unit to remoteIO output's input bus
AudioUnitConnection connection;
connection.sourceAudioUnit = _instrumentUnit;
connection.sourceOutputNumber = instrumentOutputBus;
connection.destInputNumber = ioUnitInputElement;
status = AudioUnitSetProperty(_ioUnit,
                              kAudioUnitProperty_MakeConnection,
                              kAudioUnitScope_Output,
                              ioUnitInputElement,
                              &connection,
                              sizeof(connection));
NSLog(@"%d", status);
UInt32 maxFrames = 1024; // I tried setting this to 4096 but it did not help
status = AudioUnitSetProperty(_instrumentUnit,
                              kAudioUnitProperty_MaximumFramesPerSlice,
                              kAudioUnitScope_Output,
                              0,
                              &maxFrames,
                              sizeof(maxFrames));
NSLog(@"%d", status);
_connectedInstrument = YES;
_instrumentIconImageView.image = unit.icon;
NSLog(@"Remote Instrument connected");
status = AudioUnitInitialize(_ioUnit);
NSLog(@"%d", status);
status = AudioOutputUnitStart(_ioUnit);
NSLog(@"%d", status);
status = AudioUnitInitialize(_instrumentUnit);
NSLog(@"%d", status);
[self setupFile];
Here is my callback...
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    ViewController *This = (__bridge ViewController *)inRefCon;
    if (inBusNumber == 0 && !(*ioActionFlags & kAudioUnitRenderAction_PostRenderError))
    {
        ExtAudioFileWriteAsync(This->fileRef, inNumberFrames, ioData);
    }
    return noErr;
}
Full view controller code here
Thanks for your help.
You are writing to the file both pre- and post-render. In your render callback, change your if statement to only write post-render.
if (inBusNumber == 0 && *ioActionFlags == kAudioUnitRenderAction_PostRender) {
    ExtAudioFileWriteAsync(This->fileRef, inNumberFrames, ioData);
}
ExtAudioFileWriteAsync does some internal copying and buffering so it's fine to use in the render callback as long as you prime it before the first write.
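For reference, priming amounts to one initial call with zero frames and a NULL buffer list, made before rendering starts (fileRef stands for the ExtAudioFileRef from the question):

// On the main thread, after creating the file and before the unit starts:
OSStatus primeStatus = ExtAudioFileWriteAsync(fileRef, 0, NULL);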
Most likely you'll have to check for both:
post-render action flags
post-render error
The critical part of your callback will probably have to look somewhat like this:
if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
    // Older SDKs may not declare the post-render-error flag; its raw value is (1 << 8).
    static int TEMP_kAudioUnitRenderAction_PostRenderError = (1 << 8);
    if (!(*ioActionFlags & TEMP_kAudioUnitRenderAction_PostRenderError))
    {
        ExtAudioFileWriteAsync(This->fileRef, inNumberFrames, ioData);
        // whichever additional code is needed
        // { … }
    }
}
I am using an AudioUnit to play input from the microphone through the earphones.
It's working great. Now I need to increase the volume of weak sounds and decrease strong ones.
I found a way to increase the volume:
static OSStatus performRender(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    OSStatus err = noErr;
    if (*cd.audioChainIsBeingReconstructed == NO)
    {
        // we are calling AudioUnitRender on the input bus of AURemoteIO
        // this will store the audio data captured by the microphone in ioData
        err = AudioUnitRender(cd.rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);

        // filter out the DC component of the signal
        cd.dcRejectionFilter->ProcessInplace((Float32 *)ioData->mBuffers[0].mData, inNumberFrames);

        // Add volume
        float desiredGain = 2.0f;
        for (UInt32 bufferIndex = 0; bufferIndex < ioData->mNumberBuffers; ++bufferIndex) {
            float *rawBuffer = (float *)ioData->mBuffers[bufferIndex].mData;
            vDSP_vsmul(rawBuffer, 1, &desiredGain, rawBuffer, 1, inNumberFrames);
        }

        // mute audio if needed
        if (*cd.muteAudio)
        {
            for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
                memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
    }
    return err;
}
My question is: how do I get the current volume, so I know how much gain to apply (and vice versa)?
Thanks!
Getting the "volume" depends on the type of AudioUnit. Some audio units have input levels, output levels, and "global" volume levels.
// MatrixMixer
Float32 volume = 0;
OSStatus result = AudioUnitGetParameter(mxmx_unit, kMatrixMixerParam_Volume, kAudioUnitScope_Global, 0, &volume);

// MultiChannelMixer
Float32 volume = 0;
OSStatus result = AudioUnitGetParameter(mcmx_unit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Global, 0, &volume);
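If "volume" here means the measured level of the signal itself, which is what an automatic gain would need, one common approach is computing the RMS of each render buffer with the Accelerate framework. A rough sketch; the function name, target level, and clamp are illustrative, not from the original code:

#include <Accelerate/Accelerate.h>
#include <math.h>

// Measure the RMS level of one float buffer and derive a make-up gain
// toward a target level (a naive AGC step).
static float gainForBuffer(const float *samples, UInt32 frameCount, float targetRMS)
{
    float rms = 0.0f;
    vDSP_rmsqv(samples, 1, &rms, frameCount); // root-mean-square of the buffer
    if (rms < 1e-6f) return 1.0f;             // treat near-silence as unity gain
    float gain = targetRMS / rms;
    return fminf(gain, 4.0f);                 // clamp so background noise is not blown up
}

The result could replace the fixed desiredGain in the callback above; smoothing it across buffers avoids zipper noise.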
Is there a way to capture the audio buffers that are being sent out of a remoteIO unit to the speaker? I am rendering a couple of different loops on different threads into the same I/O unit (one is a click, the other has music), and I would like to analyze how the music lines up with the click without having to filter out any noise coming from, say, the microphone input. My math must be very accurate (error less than 2 ms), so getting this post-mix buffer would be ideal.
Yes, you just add a render notify callback to the remoteIO with AudioUnitAddRenderNotify. You will then get four callbacks per buffer: input pre-render, input post-render, output pre-render, and output post-render. You just need to act on the appropriate ioActionFlags and inBusNumber.
AudioUnitAddRenderNotify(remoteIO, inputOutputTap, (__bridge void *)self);

OSStatus inputOutputTap(void *inRefCon,
                        AudioUnitRenderActionFlags *ioActionFlags,
                        const AudioTimeStamp *inTimeStamp,
                        UInt32 inBusNumber,
                        UInt32 inNumberFrames,
                        AudioBufferList *ioData) {
    // Test the post-render bit rather than comparing for equality,
    // since other flag bits (e.g. output-is-silence) can be set too.
    if ((*ioActionFlags & kAudioUnitRenderAction_PostRender) && inBusNumber == 0) {
        MyObject *self = (__bridge MyObject *)inRefCon;
        MyObjectDoTheThing(self, ioData, inTimeStamp);
    }
    return noErr;
}
I am trying to write (what should be) a simple app that has a bunch of audio units in sequence in an AUGraph and then writes the output to a file. I added a callback using AUGraphAddRenderNotify. Here is my callback function:
OSStatus MyAURenderCallback(void *inRefCon,
                            AudioUnitRenderActionFlags *actionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData) {
    if (*actionFlags & kAudioUnitRenderAction_PostRender) {
        ExtAudioFileRef outputFile = (ExtAudioFileRef)inRefCon;
        ExtAudioFileWriteAsync(outputFile, inNumberFrames, ioData);
    }
    return noErr;
}
This sort of works. The file is playable and I can hear what I recorded, but there are horrible amounts of static that make it barely audible.
Does anybody know what is wrong with this? Or does anyone know of a better way to record the AUGraph output to a file?
Thanks for the help.
I had an epiphany with regard to Audio Units just now that helped me solve my own problem. I had a misconception about how audio unit connections and render callbacks work. I thought they were completely separate things, but it turns out that a connection is just shorthand for a render callback.
Doing a kAudioUnitProperty_MakeConnection from the output of audio unit A to the input of audio unit B is the same as doing kAudioUnitProperty_SetRenderCallback on the input of unit B and having the callback function call AudioUnitRender on the output of audio unit A.
I tested this by making a connection after setting my render callback, and the render callback was no longer invoked.
Therefore, I was able to solve my problem by doing the following:
AURenderCallbackStruct callbackStruct = {0};
callbackStruct.inputProc = MyAURenderCallback;
callbackStruct.inputProcRefCon = mixerUnit;
AudioUnitSetProperty(ioUnit,
                     kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input,
                     0,
                     &callbackStruct,
                     sizeof(callbackStruct));
And then my callback function did something like this:
OSStatus MyAURenderCallback(void *inRefCon,
                            AudioUnitRenderActionFlags *actionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData) {
    AudioUnit mixerUnit = (AudioUnit)inRefCon;
    // Pull the mixer's output into ioData ourselves, since setting the
    // render callback replaced the connection that used to do this.
    AudioUnitRender(mixerUnit,
                    actionFlags,
                    inTimeStamp,
                    0,
                    inNumberFrames,
                    ioData);
    // outputFile is an ExtAudioFileRef stored elsewhere (e.g. an instance variable).
    ExtAudioFileWriteAsync(outputFile,
                           inNumberFrames,
                           ioData);
    return noErr;
}
This probably should have been obvious to me, but since it wasn't, I'll bet there are others who were confused in the same way, so hopefully this is helpful to them too.
I'm still not sure why I had trouble with the AUGraphAddRenderNotify callback. I will dig deeper into this later, but for now I found a solution that seems to work.
Here is some sample code from Apple (the project is PlaySequence, but it isn't MIDI specific) that might help:
{
    CAStreamBasicDescription clientFormat = CAStreamBasicDescription();
    ca_require_noerr (result = AudioUnitGetProperty(outputUnit,
                                                    kAudioUnitProperty_StreamFormat,
                                                    kAudioUnitScope_Output, 0,
                                                    &clientFormat, &size), fail);
    size = sizeof(clientFormat);
    ca_require_noerr (result = ExtAudioFileSetProperty(outfile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat), fail);

    {
        MusicTimeStamp currentTime;
        AUOutputBL outputBuffer (clientFormat, numFrames);
        AudioTimeStamp tStamp;
        memset (&tStamp, 0, sizeof(AudioTimeStamp));
        tStamp.mFlags = kAudioTimeStampSampleTimeValid;
        int i = 0;
        int numTimesFor10Secs = (int)(10. / (numFrames / srate));
        do {
            outputBuffer.Prepare();
            AudioUnitRenderActionFlags actionFlags = 0;
            ca_require_noerr (result = AudioUnitRender (outputUnit, &actionFlags, &tStamp, 0, numFrames, outputBuffer.ABL()), fail);

            tStamp.mSampleTime += numFrames;

            ca_require_noerr (result = ExtAudioFileWrite(outfile, numFrames, outputBuffer.ABL()), fail);

            ca_require_noerr (result = MusicPlayerGetTime (player, &currentTime), fail);
            if (shouldPrint && (++i % numTimesFor10Secs == 0))
                printf ("current time: %6.2f beats\n", currentTime);
        } while (currentTime < sequenceLength);
    }
}
Maybe try this: copy the data from the audio unit callback into one long buffer. Play the buffer back to test it, then write the entire buffer to a file after you have verified that the whole thing is OK.
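A minimal sketch of that idea, assuming mono 32-bit float samples and a fixed maximum length (all names here are illustrative):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

#define kMaxSeconds 60
#define kCaptureRate 44100
static float gCapture[kCaptureRate * kMaxSeconds]; // one long preallocated buffer
static size_t gCaptured = 0;

// Call this from the render callback to append the current slice.
static void AppendToCapture(const AudioBufferList *abl, UInt32 inNumberFrames)
{
    const float *src = (const float *)abl->mBuffers[0].mData;
    size_t room = (sizeof(gCapture) / sizeof(gCapture[0])) - gCaptured;
    size_t n = inNumberFrames < room ? inNumberFrames : room;
    memcpy(gCapture + gCaptured, src, n * sizeof(float));
    gCaptured += n;
}

// Later, off the audio thread: play back gCapture[0..gCaptured) to verify it,
// then write the whole thing to the file in one go.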