How to connect multiple AudioUnits in Swift? - ios

I currently have a RemoteIO Audio Unit configured and working, it simply takes input, and passes it to output, so I can hear myself through the headphones of my iPhone when I speak into its microphone.
The next step in what I want to do is add optional effects and create a chain. I understand that AUGraph has been deprecated and that I need to use kAudioUnitProperty_MakeConnection to connect things together, but I have a few key questions and I'm unable to get audio out just yet.
Firstly: if I want to go RemoteIO Input -> Reverb -> RemoteIO Output, do I need two instances of the RemoteIO Audio Unit, or can I use the same one? My guess is just one, connecting different things to its input and output scopes, but I'm having trouble making that happen.
Secondly: how do render callbacks play into this? I implemented a single render callback (an AURenderCallbackStruct, set as the kAudioUnitProperty_SetRenderCallback property on my RemoteIO Audio Unit), and in the implementation of the callback I do this:
func performRender(
    _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {
    guard let unit = audioUnit else { fatalError("Asked to render before the AURemoteIO was created.") }
    // Pull this cycle's samples from the input element (bus 1) into ioData
    return AudioUnitRender(unit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)
}
Do I need a render callback at all to make this work? Do I need two, one to render from RemoteIO -> Reverb, and another to render back to Reverb -> RemoteIO?
The Core Audio documentation is notoriously sketchy, and I'm having trouble finding any up-to-date information on how to do this without the deprecated AUGraph.
Any advice hugely appreciated!

You only need one RemoteIO (apps only get one), and you don't need any explicit render callbacks unless you are synthesizing samples in code. If you add kAudioUnitProperty_MakeConnection input connections along your full chain of Audio Units, starting the output unit will pull data through the rest of the chain, all the way back to the microphone (or whatever the OS has connected to the mic input).
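To make that concrete, here is a minimal sketch of wiring mic -> reverb -> speaker with kAudioUnitProperty_MakeConnection. It assumes `ioUnit` (the single RemoteIO) and `reverbUnit` were already instantiated with AudioComponentInstanceNew and that RemoteIO's mic side (bus 1) was enabled via kAudioOutputUnitProperty_EnableIO; the function name is illustrative and error handling is omitted:

```swift
import AudioToolbox

// Sketch only: one RemoteIO instance sits at both ends of the chain.
// Bus 1 is its microphone side; bus 0 is its speaker side.
func connectChain(ioUnit: AudioUnit, reverbUnit: AudioUnit) {
    let connSize = UInt32(MemoryLayout<AudioUnitConnection>.size)

    // Mic (RemoteIO output scope, bus 1) -> reverb input (bus 0).
    var micToReverb = AudioUnitConnection(sourceAudioUnit: ioUnit,
                                          sourceOutputNumber: 1,
                                          destInputNumber: 0)
    AudioUnitSetProperty(reverbUnit, kAudioUnitProperty_MakeConnection,
                         kAudioUnitScope_Input, 0, &micToReverb, connSize)

    // Reverb output (bus 0) -> RemoteIO speaker side (input scope, bus 0).
    var reverbToSpeaker = AudioUnitConnection(sourceAudioUnit: reverbUnit,
                                              sourceOutputNumber: 0,
                                              destInputNumber: 0)
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_MakeConnection,
                         kAudioUnitScope_Input, 0, &reverbToSpeaker, connSize)

    // Initialize, then start the output unit; it pulls the whole chain.
    AudioUnitInitialize(reverbUnit)
    AudioUnitInitialize(ioUnit)
    AudioOutputUnitStart(ioUnit)
}
```

Note that no render callback appears anywhere: once the connections are in place, AudioOutputUnitStart makes the speaker side pull from the reverb, which in turn pulls from the mic.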

Related

iOS Core Audio: In which Audio Unit to implement the callback to render the necessary data?

I have a fully working audio model. There are three Audio Units:
equalizerUnit,
mixerUnit,
remoteIOUnit.
With AUGraph and nodes (equalizerNode, mixerNode, remoteNode), they are correctly connected to each other: equalizerUnit -> mixerUnit -> remoteIOUnit.
The equalizerNode has a render callback, set with AUGraphSetNodeInputCallback(audioGraph, equalizerNode, 0, &callbackStruct), in which the data is read with ExtAudioFileRead, converted, and, as a result, played back on the device. Everything works fine; all the units behave correctly.
I want to add visualization. The buffer and the FFT data processing are ready; the only question is where to install the callback in order to get the desired data?

AudioUnitRender got error kAudioUnitErr_CannotDoInCurrentContext (-10863)

I want to play recorded audio directly to the speaker when a headset is plugged into an iOS device.
What I did is call AudioUnitRender in an AURenderCallback function so that the audio data is written to an AudioBuffer structure.
It works well if the "IO buffer duration" is not set, or is set to 0.020 seconds. If the "IO buffer duration" is set to a small value (0.005, etc.) by calling setPreferredIOBufferDuration, AudioUnitRender() will return an error:
kAudioUnitErr_CannotDoInCurrentContext (-10863).
Can anyone help figure out why, and how to resolve it? Thanks.
Just wanted to add that changing the output-scope sample rate to match the input-scope sample rate of the input on the OS X kAudioUnitSubType_HALOutput Audio Unit I was using fixed this error for me.
The buffer is full, so wait until a subsequent render pass or use a larger buffer.
This same error code is used by AudioToolbox, AudioUnit and AUGraph but only documented for AUGraph.
To avoid spinning or waiting in the render thread (a bad idea!), many of the calls to AUGraph can return kAUGraphErr_CannotDoInCurrentContext. This result is only generated when you call an AUGraph API from its render callback. It means that the lock it required was held, at that time, by another thread. If you see this result code, you can generally attempt the action again, typically on the NEXT render cycle (so in the meantime the lock can be cleared), or you can delegate that call to another thread in your app. You should not spin or put-to-sleep the render thread.
https://developer.apple.com/reference/audiotoolbox/kaugrapherr_cannotdoincurrentcontext
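Inside a render callback, the usual way to honor that advice is to treat the error as transient: output silence for this one cycle and let the next pull retry. A minimal sketch, assuming the callback wraps AudioUnitRender on the mic bus (the function name `renderFromMic` is hypothetical):

```swift
import AudioToolbox
import Foundation

// Sketch: never block the render thread on kAudioUnitErr_CannotDoInCurrentContext;
// hand back silence for this cycle and let the NEXT render cycle try again.
func renderFromMic(_ unit: AudioUnit,
                   _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                   _ inTimeStamp: UnsafePointer<AudioTimeStamp>,
                   _ inNumberFrames: UInt32,
                   _ ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
    let err = AudioUnitRender(unit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)
    if err == kAudioUnitErr_CannotDoInCurrentContext {
        // Input not ready yet: zero the output buffers instead of spinning.
        for buf in UnsafeMutableAudioBufferListPointer(ioData) {
            memset(buf.mData, 0, Int(buf.mDataByteSize))
        }
        ioActionFlags.pointee.insert(.unitRenderAction_OutputIsSilence)
        return noErr  // retry happens naturally on the next pull
    }
    return err
}
```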

Implementing a Callback for AVAudioBuffer in AVAudioEngine

I recently watched the WWDC 2014 session "AVAudioEngine in Practice", and I have a question about the concept explained there of using AVAudioBuffers with a node tap installed on the inputNode.
The speaker mentioned that it's possible to notify the app module using a callback.
My question is: instead of waiting for the callback until the buffer is full, is it possible to notify the app module after a certain amount of time in milliseconds? Once the AVAudioEngine is started, can I configure/register a callback on this buffer for every 100 milliseconds of recording, so that the app module gets notified to process the buffer every 100 ms?
Has anyone tried this before? Let me know your suggestions on how to implement it. It would be great if you could point out some resources for this logic.
Thanks for your support in advance.
-Suresh
Sadly, the promising bufferSize argument of installTapOnBus, which should let you choose a buffer size of 100 ms:
input.installTapOnBus(bus, bufferSize: 512, format: input.inputFormatForBus(bus)) { (buffer, time) -> Void in
    print("duration: \(buffer.frameLength, buffer.format.sampleRate) -> \((Double)(buffer.frameLength)/buffer.format.sampleRate)s")
}
is free to be ignored, per the documentation:
the requested size of the incoming buffers. The implementation may choose another size.
and in practice it is:
duration: (16537, 44100.0) -> 0.374988662131519s
So for more control over your input buffer size/duration, I suggest you use Core Audio's RemoteIO Audio Unit instead.
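If you stay with AVAudioEngine, one workaround is to accept whatever buffer sizes the tap delivers and re-chunk the samples yourself, so downstream code always sees fixed 100 ms blocks. `ChunkAccumulator` below is a hypothetical helper, not an AVFoundation API; you would feed it from the tap block with the samples copied out of `buffer.floatChannelData`:

```swift
// Hypothetical helper: collects samples from a tap callback and emits
// fixed-duration chunks regardless of the buffer sizes the engine delivers.
final class ChunkAccumulator {
    private var pending: [Float] = []
    private let chunkSize: Int
    private let onChunk: ([Float]) -> Void

    init(sampleRate: Double, chunkMs: Double, onChunk: @escaping ([Float]) -> Void) {
        // e.g. 44100 Hz * 100 ms = 4410 frames per chunk
        self.chunkSize = Int(sampleRate * chunkMs / 1000.0)
        self.onChunk = onChunk
    }

    // Call from the tap with one buffer's worth of mono samples;
    // emits zero or more complete chunks and carries the remainder forward.
    func append(_ incoming: [Float]) {
        pending.append(contentsOf: incoming)
        while pending.count >= chunkSize {
            onChunk(Array(pending.prefix(chunkSize)))
            pending.removeFirst(chunkSize)
        }
    }
}
```

With the 16537-frame buffers shown above at 44.1 kHz, each tap callback would emit three 4410-frame (100 ms) chunks and keep the remaining 3307 frames for the next callback.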

What does AudioUnitRender do?

Recently, I was looking at Apple's aurioTouch sample code,
but I can't understand this line:
OSStatus err = AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
Apple's documentation explains: "Initiates a rendering cycle for an audio unit."
But that feels ambiguous to me. What does it actually do?
Core Audio works on a "pull" model, where the output unit starts the process off by asking for audio samples from the unit connected to its input bus. Likewise, that unit asks for samples from the unit connected to its own input bus. Each of those "asks" is a rendering cycle.
AudioUnitRender() typically passes in a buffer of samples that your audio unit can optionally process in some way. That buffer is the last argument of the function, ioData. inNumberFrames is the number of frames passed in via ioData. The 1 is the output element, or "bus", to render for (this could change depending on your configuration). rioUnit is the audio unit in question that is doing the processing.
Apple's Audio Unit Hosting Guide contains a section on rendering which I've found helpful.
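Mapped onto Swift, the arguments of that call line up as follows. This is a sketch with annotations only; `rioUnit`, the flags, the timestamp, and the buffer list are assumed to come from the enclosing render callback:

```swift
import AudioToolbox

// Annotated version of the aurioTouch call: one "pull" of samples per cycle.
func pullOneCycle(rioUnit: AudioUnit,
                  flags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                  timeStamp: UnsafePointer<AudioTimeStamp>,
                  frameCount: UInt32,
                  bufferList: UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
    return AudioUnitRender(
        rioUnit,     // the unit being asked to produce audio
        flags,       // render-action flags passed through from the callback
        timeStamp,   // which point on the timeline to render for
        1,           // output bus to render (bus 1 on RemoteIO carries the mic)
        frameCount,  // how many sample frames to produce this cycle
        bufferList)  // where the rendered samples land (the callback's ioData)
}
```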

ios audio unit remoteIO playback while recording

I have been charged with adding VOIP to a game (cross-platform, so I can't use Apple's GameKit to do it).
For 3 or 4 days now, I've been trying to get my head wrapped around Audio Units and RemoteIO...
I have looked over dozens of examples and such, but every time they only apply a simple algorithm to the input PCM and play it back on the speaker.
According to Apple's documentation in order to do VOIP we should use kAudioSessionCategory_PlayAndRecord.
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
status = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                 sizeof(audioCategory),
                                 &audioCategory);
XThrowIfError(status, "couldn't set audio category");
1) But it seems (to me) that PlayAndRecord will always play back what's coming from the mic (or, more exactly, the PerformThru callback in aurioTouch). Am I wrong?
I have the simplest callback, doing nothing but AudioUnitRender:
static OSStatus PerformThru(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    // Pull this cycle's mic samples into ioData
    OSStatus err = AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
    if (err)
        printf("PerformThru: error %d\n", (int)err);
    return err;
}
From that callback I'm intending to send data to the peer (not directly, of course; the data to play will come from the peer)...
I do not see how I can play output different from the input, except maybe with two units, one recording and one playing, but that doesn't seem to be what Apple intended (still according to the documentation).
And of course, I cannot find any documentation about it; Audio Units are still pretty much undocumented...
Anyone would have an idea on what would be the best way to do it ?
I have not used VOIP or kAudioSessionCategory_PlayAndRecord, but if you want to record/transmit voice picked up from the mic and play back incoming data from network packets: here is a good sample which includes both mic capture and playback. Also, if you have not read this doc from Apple, I would strongly recommend it.
In short: you need to create a RemoteIO Audio Unit instance and configure two callbacks on it: one for the mic and one for playback. The mic callback supplies you with the data that was picked up from the mic, which you can then convert and transmit to other devices with whatever network protocol you choose. The playback callback is where you supply the incoming data from other network devices to be played back.
You can see this simple example; it describes how to use the RemoteIO unit. After understanding this example, you should look at PJSIP's audio driver. These should help you implement your own solution. Best of luck.
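A sketch of that two-callback layout on a single RemoteIO unit, assuming `ioUnit` is already instantiated with input enabled on bus 1; the callback bodies here are placeholders and the function name is illustrative:

```swift
import AudioToolbox

// Sketch: wire a capture callback (mic, bus 1) and a render callback
// (speaker, bus 0) onto one already-created RemoteIO unit.
func installVoipCallbacks(on ioUnit: AudioUnit) {
    let structSize = UInt32(MemoryLayout<AURenderCallbackStruct>.size)

    // Mic side: fires when capture data is ready; in a real app you would
    // AudioUnitRender into your own AudioBufferList, then encode and send.
    var capture = AURenderCallbackStruct(
        inputProc: { _, _, _, _, _, _ in noErr },  // placeholder body
        inputProcRefCon: nil)
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, 1, &capture, structSize)

    // Speaker side: fill ioData with decoded samples received from the peer
    // (or silence when the network jitter buffer is empty).
    var playback = AURenderCallbackStruct(
        inputProc: { _, _, _, _, _, _ in noErr },  // placeholder body
        inputProcRefCon: nil)
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &playback, structSize)
}
```

The key point is that one unit serves both directions: the capture callback on bus 1 feeds your network send path, while the render callback on bus 0 is fed from your network receive path, so the output is independent of the input.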
