What does AudioUnitRender do? - iOS

Recently I have been studying aurioTouch, but I can't understand this line:
OSStatus err = AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
Apple's documentation explains it as: "Initiates a rendering cycle for an audio unit."
That still feels ambiguous to me. What does it actually do?

Core Audio works on a "pull" model, where the output unit starts the process off by asking for audio samples from the unit connected to its input bus. Likewise, that unit in turn asks for samples from whatever is connected to its own input bus. Each of those "asks" is a rendering cycle.
AudioUnitRender() typically passes in a buffer of samples that your audio unit can optionally process in some way. That buffer is the last argument in the function, ioData. inNumberFrames is the number of frames being passed in ioData. 1 is the output element or 'bus' to render for (this could change depending on your configuration). rioUnit is the audio unit in question that is doing the processing.
Apple's Audio Unit Hosting Guide contains a section on rendering which I've found helpful.
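For reference, here is a minimal annotated sketch of the kind of render callback aurioTouch registers; MyAudioController is a placeholder name, the rioUnit instance variable is assumed to hold the RemoteIO unit, and the callback registration is assumed to happen elsewhere:

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    // inRefCon is the context pointer registered with the callback
    // (here, a placeholder object holding the RemoteIO unit).
    MyAudioController *THIS = (MyAudioController *)inRefCon;

    // Ask the RemoteIO unit to render inNumberFrames frames from element (bus) 1,
    // the microphone side, into ioData. The arguments are:
    //   THIS->rioUnit  - the audio unit doing the work
    //   ioActionFlags  - render flags, passed through unchanged
    //   inTimeStamp    - the timestamp of this rendering cycle
    //   1              - the element/bus to render from (the input bus of RemoteIO)
    //   inNumberFrames - how many sample frames to produce
    //   ioData         - the buffer list that receives the samples
    OSStatus err = AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    return err;
}

Whatever is in ioData when the callback returns is what the output hardware plays, which is why aurioTouch's "pull the mic into ioData" callback produces simple pass-through monitoring.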

Related

How to connect multiple AudioUnits in Swift?

I currently have a RemoteIO Audio Unit configured and working; it simply takes input and passes it to output, so I can hear myself through the headphones of my iPhone when I speak into its microphone.
The next step in what I want to do is add optional effects and create a chain. I understand that AUGraph has been deprecated and that I need to use kAudioUnitProperty_MakeConnection to connect things together, but I have a few key questions and I'm unable to get audio out just yet.
Firstly: if I want to go RemoteIO Input -> Reverb -> RemoteIO Output, do I need two instances of the RemoteIO Audio Unit? Or can I use the same one? My guess is just one, connecting different things to its input and output scopes, but I'm having trouble making this happen.
Secondly: how do render callbacks play into this? I implemented a single render callback (an AURenderCallbackStruct) and set it as the kAudioUnitProperty_SetRenderCallback property on my RemoteIO Audio Unit, and in the implementation of the callback, I do this:
func performRender(
    _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {
    guard let unit = audioUnit else { crash("Asked to render before the AURemoteIO was created.") }
    return AudioUnitRender(unit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)
}
Do I need a render callback at all to make this work? Do I need two, one to render from RemoteIO -> Reverb, and another to render back to Reverb -> RemoteIO?
The CoreAudio documentation is notoriously sketchy but I'm having trouble finding any up-to-date info on how to do this without AUGraph which is deprecated.
Any advice hugely appreciated!
You only need one RemoteIO unit (apps only get one), and you don't need any explicit render callbacks (unless you are synthesizing samples in code). If you add kAudioUnitProperty_MakeConnection input connections along your full chain of Audio Units, starting the output unit will pull data through the rest of the chain, all the way back to the microphone (or whatever the OS has connected to the mic input).
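As a hedged illustration of that wiring (shown with the C form of the API, which maps one-to-one onto the Swift calls; ioUnit and reverbUnit are placeholder variables for your already-instantiated RemoteIO and reverb units, with input enabled on the RemoteIO unit via kAudioOutputUnitProperty_EnableIO and matching stream formats assumed):

// Feed the mic (RemoteIO element 1, output scope) into the reverb's input.
AudioUnitConnection micToReverb = {
    .sourceAudioUnit    = ioUnit,   // the single RemoteIO instance
    .sourceOutputNumber = 1,        // element 1 output = captured mic samples
    .destInputNumber    = 0
};
AudioUnitSetProperty(reverbUnit, kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input, 0, &micToReverb, sizeof(micToReverb));

// Feed the reverb's output into RemoteIO element 0 (the speaker side).
AudioUnitConnection reverbToOutput = {
    .sourceAudioUnit    = reverbUnit,
    .sourceOutputNumber = 0,
    .destInputNumber    = 0
};
AudioUnitSetProperty(ioUnit, kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input, 0, &reverbToOutput, sizeof(reverbToOutput));

// Starting the output unit pulls data through the whole chain.
AudioOutputUnitStart(ioUnit);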

Can't add render callback to output unit

I'm writing an app that should mix several sounds from disk and save the resulting file to disk. I'm trying to use Audio Units.
I used Apple's MixerHost as a base for my app. It has a Multichannel Mixer connected to a Remote I/O unit. When I try to add a render callback to the Remote I/O unit, I get error -10861 "The attempted connection between two nodes cannot be made." when calling AUGraphConnectNodeInput(...).
What am I doing wrong? What's the right way to mix and record a file to disk?
callback stub:
static OSStatus saveToDiskRenderCallback(void *inRefCon,
                                         AudioUnitRenderActionFlags *ioActionFlags,
                                         const AudioTimeStamp *inTimeStamp,
                                         UInt32 inBusNumber,
                                         UInt32 inNumberFrames,
                                         AudioBufferList *ioData)
{
    return noErr;
}
adding callback to Remote I/O Unit:
AURenderCallbackStruct saveToDiskCallbackStruct;
saveToDiskCallbackStruct.inputProc = &saveToDiskRenderCallback;
result = AUGraphSetNodeInputCallback(processingGraph,
                                     iONode,
                                     0,
                                     &saveToDiskCallbackStruct);
error here:
result = AUGraphConnectNodeInput(processingGraph,
                                 mixerNode, // source node
                                 0,         // source node output bus number
                                 iONode,    // destination node
                                 0);        // destination node input bus number
You are confused about how audio units work.
The node input callback (as set by AUGraphSetNodeInputCallback) and the node input connection (as set by AUGraphConnectNodeInput) are both on the same input side of your Remote I/O unit. It looks like you believe that the input callback will be the output of your graph. This is wrong.
AUGraph offers two paths to feed the input of an AudioUnit:
Either from another upstream node (AUGraphConnectNodeInput)
or from a custom callback (AUGraphSetNodeInputCallback),
So you can't set them both simultaneously; it has no meaning.
Now there are two possibilities:
1) Real-time monitoring
This is not what you describe, but it is the easier one to get to from where you are. So I assume you want to listen to the mix on the Remote I/O while it is being processed (in real time).
Then Read this
2) Offline rendering
If you don't plan to listen in real time (which is what I understood first from your description), then the Remote I/O unit has nothing to do here, since its purpose is to talk to a physical output. Then read that; it replaces the Remote I/O unit with a Generic Output unit. Be careful: the graph is not run in the same way.
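The core of the offline approach (a rough sketch rather than the exact code from the linked answer; genericOutputUnit, bufferList, outputFile, totalSlices and framesPerSlice are placeholders you would set up yourself) is to drive the graph manually and write each rendered slice to disk:

// Drive the graph manually: ask the Generic Output unit to render one slice
// at a time, then append that slice to the file. No real-time I/O is involved.
AudioUnitRenderActionFlags flags = 0;
AudioTimeStamp ts = {0};
ts.mFlags = kAudioTimeStampSampleTimeValid;
ts.mSampleTime = 0;

for (UInt32 i = 0; i < totalSlices; ++i) {
    // Pull framesPerSlice frames through the mixer into bufferList
    OSStatus err = AudioUnitRender(genericOutputUnit, &flags, &ts, 0,
                                   framesPerSlice, bufferList);
    if (err != noErr) break;

    // Append the rendered slice to the output file
    ExtAudioFileWrite(outputFile, framesPerSlice, bufferList);

    ts.mSampleTime += framesPerSlice;
}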

Synchronising with Core Audio Thread

I am using the render callback of the ioUnit to store the audio data into a circular buffer:
OSStatus ioUnitRenderCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    OSStatus err = noErr;
    AMNAudioController *This = (__bridge AMNAudioController *)inRefCon;

    err = AudioUnitRender(This.encoderMixerNode->unit,
                          ioActionFlags,
                          inTimeStamp,
                          inBusNumber,
                          inNumberFrames,
                          ioData);

    // Copy the audio to the encoder buffer
    TPCircularBufferCopyAudioBufferList(&(This->encoderBuffer), ioData, inTimeStamp, kTPCircularBufferCopyAll, NULL);

    return err;
}
I then want to read the bytes out of the circular buffer, feed them to libLame and then to libShout.
I have tried starting a thread and using NSCondition to make it wait until data is available, but this causes all sorts of issues due to taking locks in the Core Audio callback.
What would be the recommended way to do this?
Thanks in advance.
More detail on how I implemented Adam's answer
I ended up taking Adam's advice and implemented it like so.
Producer
I use TPCircularBufferProduceBytes in the Core Audio Render callback to add the bytes to the circular buffer. In my case I have non-interleaved audio data so I ended up using two circular buffers.
Consumer
I spawn a new thread using pthread_create.
Within the new thread, I create a new CFTimer and add it to the current CFRunLoop (an interval of 0.005 seconds appears to work well; see the sketch below).
I tell the current CFRunLoop to run.
Within my timer callback, I encode the audio and send it to the server (returning quickly if no data is buffered).
I also have a buffer size of 5MB which appears to work well (2MB was giving me overruns). This does seem a bit high :/
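Here is a rough sketch of that consumer-thread setup (consumerThreadEntry and encodeTimerFired are placeholder names; error handling and cleanup are omitted):

#include <pthread.h>
#include <CoreFoundation/CoreFoundation.h>

// Timer callback: runs on the consumer thread every ~5 ms.
static void encodeTimerFired(CFRunLoopTimerRef timer, void *info)
{
    // Check the circular buffer; encode and send if enough data is available,
    // otherwise return immediately and wait for the next tick.
}

// Thread entry point: install a repeating timer on this thread's run loop and run it.
static void *consumerThreadEntry(void *arg)
{
    CFRunLoopTimerRef timer = CFRunLoopTimerCreate(kCFAllocatorDefault,
                                                   CFAbsoluteTimeGetCurrent() + 0.005,
                                                   0.005,   // 5 ms interval
                                                   0, 0,
                                                   encodeTimerFired, NULL);
    CFRunLoopAddTimer(CFRunLoopGetCurrent(), timer, kCFRunLoopDefaultMode);
    CFRunLoopRun();   // blocks here, servicing the timer
    return NULL;
}

// Called once from your setup code to start the consumer thread.
static void startConsumerThread(void)
{
    pthread_t consumerThread;
    pthread_create(&consumerThread, NULL, consumerThreadEntry, NULL);
}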
Use a repeating timer (NSTimer or CADisplayLink) to poll your lock-free circular buffer or FIFO. Skip doing work if there is not enough data in the buffer, and return (to the run loop). This works because you know the sample rate with high accuracy, and how much data you prefer or need to handle at a time, so you can set the polling rate just slightly faster, to be on the safe side, and still be very close to the same efficiency as using conditional locks.
Using semaphores or locks (or anything else with unpredictable latency) in a real-time audio thread callback is not recommended.
You're on the right track, but you don't need NSCondition. You definitely don't want to block. The circular buffer implementation you're using is lock free and should do the trick. In the audio render callback, put the data into the buffer by calling TPCircularBufferProduceBytes. Then in the reader context (a timer callback is good, as hotpaw suggests), call TPCircularBufferTail to get the tail pointer (read address) and number of available bytes to read, and then call TPCircularBufferConsume to do the actual reading. Now you've done the transfer without taking any locks. Just make sure the buffer you allocate is large enough to handle the worst-case condition where your reader thread gets held off by the os for whatever reason, otherwise you can hit a buffer overrun condition and will lose data.
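Put together, the producer/consumer calls described in these answers look roughly like this (a sketch assuming a TPCircularBuffer named encoderBuffer initialised elsewhere with TPCircularBufferInit, and a threshold constant kMinimumBytesToEncode of your choosing):

// Producer (inside the audio render callback): copy the rendered bytes in.
// TPCircularBufferProduceBytes is lock-free, so it is safe to call here.
// (With non-interleaved audio you would do this per mBuffers[i], e.g. one
// circular buffer per channel, as noted above.)
TPCircularBufferProduceBytes(&encoderBuffer,
                             ioData->mBuffers[0].mData,
                             ioData->mBuffers[0].mDataByteSize);

// Consumer (inside the timer callback on the reader thread):
int32_t availableBytes = 0;
void *tail = TPCircularBufferTail(&encoderBuffer, &availableBytes);
if (tail != NULL && availableBytes >= kMinimumBytesToEncode) {
    // ... encode availableBytes bytes starting at tail, send to the server ...
    TPCircularBufferConsume(&encoderBuffer, availableBytes);   // mark them as read
}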

Recording volume drop switching between RemoteIO and VPIO

In my app I need to switch between these 2 different AudioUnits.
Whenever I switch from VPIO to RemoteIO, there is a drop in my recording volume. Quite a significant drop.
No change in the playback volume, though. Has anyone experienced this?
Here's the code where I do the switch, which is triggered by a routing change. (I'm not too sure whether I did the change correctly, so am asking here as well.)
How do I solve the problem of the recording volume drop?
Thanks, appreciate any help I can get.
Pier.
- (void)switchInputBoxTo:(OSType)inputBoxSubType
{
    OSStatus result;
    if (!remoteIONode) return; // NULL check

    // Get info about the current output node
    AudioComponentDescription outputACD;
    AudioUnit currentOutputUnit;
    AUGraphNodeInfo(theGraph, remoteIONode, &outputACD, &currentOutputUnit);

    if (outputACD.componentSubType != inputBoxSubType)
    {
        AUGraphStop(theGraph);
        AUGraphUninitialize(theGraph);

        result = AUGraphDisconnectNodeInput(theGraph, remoteIONode, 0);
        NSCAssert(result == noErr, @"Unable to disconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        AUGraphRemoveNode(theGraph, remoteIONode);

        // Re-init as the other type
        outputACD.componentSubType = inputBoxSubType;

        // Add the replacement IO unit node to the graph
        result = AUGraphAddNode(theGraph, &outputACD, &remoteIONode);
        NSCAssert(result == noErr, @"Unable to add the replacement IO unit to the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        result = AUGraphConnectNodeInput(theGraph, mixerNode, 0, remoteIONode, 0);

        // Obtain a reference to the I/O unit from its node
        result = AUGraphNodeInfo(theGraph, remoteIONode, 0, &_remoteIOUnit);
        NSCAssert(result == noErr, @"Unable to obtain a reference to the I/O unit. Error code: %d '%.4s'", (int)result, (const char *)&result);

        //result = AudioUnitUninitialize(_remoteIOUnit);

        [self setupRemoteIOTest]; // reinit all that remoteIO/voiceProcessing stuff
        [self configureAndStartAudioProcessingGraph:theGraph];
    }
}
I used my Apple developer support for this.
Here's what the support said:
The presence of the Voice I/O will result in the input/output being processed very differently. We don't expect these units to have the same gain levels at all, but the levels shouldn't be drastically off as it seems you indicate.
That said, Core Audio engineering indicated that your results may be related to the fact that when the voice block is created it also affects the RIO instance. Upon further discussion, Core Audio engineering felt that since you say the level difference is very drastic, it would be good if you could file a bug with some recordings to highlight the level difference that you are hearing between voice I/O and remote I/O, along with your test code, so we can attempt to reproduce it in house and see if this is indeed a bug. It would be a good idea to include the results of the single IO unit tests outlined above as well as further comparative results.
There is no API that controls this gain level, everything is internally setup by the OS depending on Audio Session Category (for example VPIO is expected to be used with PlayAndRecord always) and which IO unit has been setup. Generally it is not expected that both will be instantiated simultaneously.
Conclusion? I think it's a bug. :/
There is some talk about low-volume issues if you don't dispose of your audio unit correctly. Basically, the first audio component stays in memory, and any subsequent playback will be ducked under your app's or other apps' audio, causing the volume drop.
Solution:
Audio units are AudioComponentInstance's and must be freed using AudioComponentInstanceDispose().
I've had success when I change the audio session category when going from Voice Processing I/O (PlayAndRecord) to Remote IO (SoloAmbient). Make sure you pause the audio session before changing this. You'll also have to uninitialize your audio graph.
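A rough sketch of that sequence, using the old C AudioSession API that appears elsewhere on this page (in a current app you would do the equivalent through AVAudioSession), assuming theGraph is your AUGraph:

// Pause the audio session and tear down the graph before changing the category
AUGraphStop(theGraph);
AUGraphUninitialize(theGraph);
AudioSessionSetActive(false);

// Switch from PlayAndRecord (used with VPIO) to SoloAmbient (plain RemoteIO playback)
UInt32 category = kAudioSessionCategory_SoloAmbientSound;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(category), &category);

// Rebuild/reinitialize the graph with the plain RemoteIO unit, then resume
AudioSessionSetActive(true);
AUGraphInitialize(theGraph);
AUGraphStart(theGraph);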
From a talk I had with an Apple AVAudioSession engineer:
VPIO - adds audio processing to the audio samples, which also provides the echo cancellation; this creates the drop in the audio level.
RemoteIO - won't do any audio processing, so the volume level will remain high.
If you are looking for echo cancellation while using the RemoteIO option, you should create your own audio processing in the render callback.

ios audio unit remoteIO playback while recording

I have been asked to add VoIP to a game (it's cross-platform, so I can't use Apple's GameKit to do it).
For 3 or 4 days now, I've been trying to get my head wrapped around Audio Units and RemoteIO...
I have looked over dozens of examples and such, but every time they only apply a simple algorithm to the input PCM and play it back on the speaker.
According to Apple's documentation in order to do VOIP we should use kAudioSessionCategory_PlayAndRecord.
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
status = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                 sizeof(audioCategory),
                                 &audioCategory);
XThrowIfError(status, "couldn't set audio category");
1) But it seems (to me) that PlayAndRecord will always play what's coming from the mic (or, more exactly, from the PerformThru callback // aurioTouch). Am I wrong?
I have the simplest callback, doing nothing but AudioUnitRender:
static OSStatus PerformThru(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    OSStatus err = AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
    if (err)
        printf("PerformThru: error %d\n", (int)err);
    return err;
}
From that callback I intend to send data to the peer (not directly of course, but the data will come from it)...
I do not see how I can play output that is different from the input, except maybe with two units, one recording and one playing, but that doesn't seem to be what Apple intended (still according to the documentation).
And of course, I cannot find any documentation about it; Audio Units are still pretty much undocumented...
Anyone would have an idea on what would be the best way to do it ?
I have not used VoIP or kAudioSessionCategory_PlayAndRecord. But if you want to record/transmit voice picked up from the mic and play back incoming data from network packets: here is a good sample which includes both mic and playback. Also, if you have not read this doc from Apple, I would strongly recommend it.
In short: you need to create an Audio Unit instance. On it, configure two callbacks: one for the mic and one for playback. The mic callback function will supply you with the data that was picked up from the mic. You can then convert it and transmit it to other devices with whatever network protocol you choose. The playback callback function is where you supply the incoming data from other network devices to play back.
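As a hedged sketch of that two-callback setup on a single RemoteIO unit (the property names are real; rioUnit, myController, recordingCallback and playbackCallback are placeholders, and input must also be enabled on element 1 with kAudioOutputUnitProperty_EnableIO):

// Mic side: fires when element 1 has captured inNumberFrames. Inside the callback
// you call AudioUnitRender on bus 1 to fetch the samples, then hand them to your
// network code.
AURenderCallbackStruct inputCb = { recordingCallback, myController };
AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global, 1, &inputCb, sizeof(inputCb));

// Speaker side: fires when element 0 needs inNumberFrames. Inside the callback you
// fill ioData with whatever decoded audio has arrived from the network (or silence).
AURenderCallbackStruct renderCb = { playbackCallback, myController };
AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0, &renderCb, sizeof(renderCb));

This is how the output can be decoupled from the input without a second I/O unit: what you play back is whatever you put into ioData in the playback callback, not what the mic captured.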
You can see this simple example. It describes how to use the Remote IO unit. After understanding this example, you should look at PJSIP's audio driver. These should help you implement your own solution. Best of luck.
