AudioSessionAddPropertyListener deprecated for IOBufferDuration - iOS

I need to determine when the buffer size used by my RemoteIO callback changes. Up to iOS 7 we could add a session property listener with AudioSessionAddPropertyListener for the property kAudioSessionProperty_PreferredHardwareIOBufferDuration, but that API is now deprecated. AVAudioSession is meant to be KVO compliant, but not for its IOBufferDuration or preferredIOBufferDuration properties.
What is the replacement here?

The buffer duration is delivered to the RemoteIO callback itself in the form of the frame count (the number of sample frames in the callback buffer) at a known sample rate. Any other notification would be asynchronous to this callback and thus could arrive at the wrong time relative to the actual change, which happens on the audio thread, not on the UI main run loop.
But your audio callback can update some visible state (a global, or a field in the callback's parameter struct) that any other polling or consumer thread can check after the buffer duration changes.
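For example, a minimal sketch of that idea, assuming a RemoteIO render callback; the RenderState struct and flag names are illustrative, not an API:

#include <AudioToolbox/AudioToolbox.h>
#include <atomic>

// Illustrative state shared between the render callback and a polling thread.
struct RenderState {
    std::atomic<UInt32> lastFrameCount{0};
    std::atomic<bool>   bufferSizeChanged{false};
};

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    RenderState *state = static_cast<RenderState *>(inRefCon);
    // inNumberFrames reflects the buffer duration actually in effect,
    // so a change is detected exactly when it happens.
    if (state->lastFrameCount.exchange(inNumberFrames) != inNumberFrames)
        state->bufferSizeChanged.store(true);  // polled elsewhere, then cleared
    // ...render or copy audio as usual...
    return noErr;
}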

Related

How to reset an IXAudio2SourceVoice's 'SamplesPlayed' counter after flushing source buffers?

IXAudio2SourceVoice has a GetState function which returns an XAUDIO2_VOICE_STATE structure. This structure has a SamplesPlayed member, which is:
Total number of samples processed by this voice since it last started, or since the last audio stream ended (as marked with the XAUDIO2_END_OF_STREAM flag).
What I want to be able to do is stop the source voice, flush all its buffers, and then reset the SamplesPlayed counter to zero. Neither calling Stop nor FlushSourceBuffers will by itself reset SamplesPlayed. And while flagging the last buffer with XAUDIO2_END_OF_STREAM does correctly reset SamplesPlayed back to zero, this seemingly only works if that last buffer is played to completion; if the buffer is flushed, SamplesPlayed does not get reset. I have also tried calling Discontinuity both before and after stopping/flushing, with no effect.
My current workaround is, after stopping and flushing the source voice, to submit a tiny 1-sample silent buffer with the XAUDIO2_END_OF_STREAM flag set and then let the source voice play to process that buffer and thus reset SamplesPlayed to zero. This works fine-ish for my use case, but it seems pretty hacky/clumsy. Is there a better solution?
Looking at the XAudio2 source, there's no exposed way to do that in the API other than letting a packet play with XAUDIO2_END_OF_STREAM.
Calling Discontinuity sets the end-of-stream flag on the currently playing buffer or, if nothing is playing, on the next queued buffer. You need to call Discontinuity and then let the voice play to completion before you recycle it.
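A minimal sketch of the 1-sample end-of-stream workaround described above, assuming an existing IXAudio2SourceVoice for 16-bit mono PCM; error handling is omitted:

#include <xaudio2.h>

// Stop, flush, then let a tiny silent end-of-stream packet play so that
// SamplesPlayed resets to zero.
void ResetSamplesPlayed(IXAudio2SourceVoice *voice)
{
    voice->Stop(0);
    voice->FlushSourceBuffers();

    static const BYTE silence[2] = { 0, 0 };   // one 16-bit mono sample
    XAUDIO2_BUFFER buffer = {};
    buffer.Flags      = XAUDIO2_END_OF_STREAM; // resets the counter when played
    buffer.AudioBytes = sizeof(silence);
    buffer.pAudioData = silence;

    voice->SubmitSourceBuffer(&buffer);
    voice->Start(0);
    // Wait for the voice's OnStreamEnd callback before reusing the voice.
}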

AudioUnitRender got error kAudioUnitErr_CannotDoInCurrentContext (-10863)

I want to play the recorded audio directly to the speaker when a headset is plugged into an iOS device.
What I did is call AudioUnitRender in the AURenderCallback func so that the audio data is written to an AudioBuffer structure.
It works well if the "IO buffer duration" is not set or is set to 0.020 seconds. If the "IO buffer duration" is set to a small value (e.g. 0.005) by calling setPreferredIOBufferDuration, AudioUnitRender() will return an error:
kAudioUnitErr_CannotDoInCurrentContext (-10863).
Can anyone help figure out why, and how to resolve it? Thanks
Just wanted to add that changing the output scope sample rate to match the input scope sample rate of the OS X kAudioUnitSubType_HALOutput audio unit I was using fixed this error for me.
The buffer is full, so either wait until a subsequent render pass or use a larger buffer.
This same error code is used by AudioToolbox, AudioUnit and AUGraph but only documented for AUGraph.
To avoid spinning or waiting in the render thread (a bad idea!), many of the calls to AUGraph can return kAUGraphErr_CannotDoInCurrentContext. This result is only generated when you call an AUGraph API from its render callback. It means that the lock it required was held at that time by another thread. If you see this result code, you can generally attempt the action again, typically on the NEXT render cycle (so in the meantime the lock can be cleared), or you can delegate that call to another thread in your app. You should not spin or put-to-sleep the render thread.
https://developer.apple.com/reference/audiotoolbox/kaugrapherr_cannotdoincurrentcontext
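In practice that means: if AudioUnitRender fails with this error inside your render callback, output silence for this cycle and retry on the next one. A minimal sketch under that interpretation; the MyState struct is illustrative:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

struct MyState { AudioUnit rioUnit; };  // illustrative; holds your remote IO unit

static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    MyState *state = (MyState *)inRefCon;
    OSStatus err = AudioUnitRender(state->rioUnit, ioActionFlags, inTimeStamp,
                                   1 /* input bus */, inNumberFrames, ioData);
    if (err == kAudioUnitErr_CannotDoInCurrentContext) {
        // Don't spin or sleep on the render thread: emit silence now and
        // let the NEXT render cycle try again.
        for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        return noErr;
    }
    return err;
}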

Implementing Callback for AVAudioBuffer in AVAudioEngine

I recently watched the WWDC 2014 session "AVAudioEngine in Practice", and I have a question about the concept explained there of using AVAudioBuffers with a node tap installed on the input node.
The speaker mentioned that it's possible to notify the app using a callback.
My question is: instead of waiting for the callback until the buffer is full, is it possible to notify the app after a certain amount of time in milliseconds? Once the AVAudioEngine is started, can I configure/register the callback on this buffer to fire for every 100 milliseconds of recording, so that the app gets notified to process the buffer every 100 ms?
Has anyone tried this before? Let me know your suggestions on how to implement this. It would be great if you could point out some resources for this.
Thanks for your support in advance.
-Suresh
Sadly, the promising bufferSize argument of installTapOnBus, which should let you choose a buffer size of 100 ms:
input.installTapOnBus(bus, bufferSize: 512, format: input.inputFormatForBus(bus)) { (buffer, time) -> Void in
    print("duration: \(buffer.frameLength, buffer.format.sampleRate) -> \(Double(buffer.frameLength)/buffer.format.sampleRate)s")
}
is free to be ignored. The documentation says:
the requested size of the incoming buffers. The implementation may choose another size.
and the actual output is:
duration: (16537, 44100.0) -> 0.374988662131519s
So for more control over your input buffer size/duration, I suggest you use Core Audio's RemoteIO audio unit.
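One rough way to get fixed 100 ms chunks out of a RemoteIO callback is to count frames yourself and signal a consumer whenever 100 ms worth has accumulated. A sketch under those assumptions (kSampleRate and the globals are illustrative; the actual copying into your own buffer is elided):

#include <AudioToolbox/AudioToolbox.h>
#include <atomic>

static const double kSampleRate = 44100.0;      // assumed session sample rate
static std::atomic<UInt32> gFramesSeen{0};      // frames since last chunk
static std::atomic<bool>   gChunkReady{false};  // polled by a consumer thread

static OSStatus RecordCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    // ...AudioUnitRender into your own ring buffer here...
    UInt32 total = gFramesSeen.fetch_add(inNumberFrames) + inNumberFrames;
    if (total >= (UInt32)(kSampleRate * 0.100)) {  // ~100 ms accumulated
        gFramesSeen.store(0);
        gChunkReady.store(true);                   // consumer processes the chunk
    }
    return noErr;
}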

What causes ExtAudioFileRead to make ioData->mBuffers[0].mDataByteSize negative?

The problem occurs when I frequently stop and start audio playback and seek back and forth a lot in an AAC audio file through an ExtAudioFileRef object. In a few cases, ExtAudioFileRead shows this strange behaviour:
Sometimes it assigns these numbers to the mDataByteSize of the only AudioBuffer of the AudioBufferList:
-51604480
-51227648
-51350528
-51440640
-51240960
In hex, these numbers have the pattern 0xFC....00.
The code:
status = ExtAudioFileRead(_file, &numberFramesRead, ioData);
printf("s=%li d=%p d.nb=%li, d.b.d=%p, d.b.dbs=%li, d.b.nc=%li\n", status, ioData, ioData->mNumberBuffers, ioData->mBuffers[0].mData, ioData->mBuffers[0].mDataByteSize, ioData->mBuffers[0].mNumberChannels);
Output:
s=0 d=0x16668bd0 d.nb=1, d.b.d=0x30de000, d.b.dbs=1024, d.b.nc=2 // good (usual)
s=0 d=0x16668bd0 d.nb=1, d.b.d=0x30de000, d.b.dbs=-51240960, d.b.nc=2 // misbehaving
The problem occurs on an iPhone 4S on iOS 7. I could not reproduce the problem in the Simulator.
The problem occurs when concurrently calling ExtAudioFileRead() and ExtAudioFileSeek() on the same ExtAudioFileRef from two different threads/queues.
The read function was called directly from the AURenderCallback, so it executed on the AudioUnit's real-time thread, while the seek was done on my own serial queue.
I modified the render callback to also dispatch_sync() to the same serial queue that the seek gets dispatched to. That solved the problem.
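A sketch of that fix, assuming the _file, numberFramesRead and ioData variables from the code above; the queue label and targetFrame are illustrative:

#include <dispatch/dispatch.h>

// Created once, e.g. during setup:
dispatch_queue_t fileQueue =
    dispatch_queue_create("com.example.extaudiofile", DISPATCH_QUEUE_SERIAL);

// Seeks go through the serial queue:
dispatch_async(fileQueue, ^{
    ExtAudioFileSeek(_file, targetFrame);
});

// The read in the render callback goes through the same queue, so the two
// calls can never run concurrently:
__block OSStatus status;
dispatch_sync(fileQueue, ^{
    status = ExtAudioFileRead(_file, &numberFramesRead, ioData);
});

Note that blocking the real-time thread with dispatch_sync() can still glitch if the seek is slow; the point is only that it guarantees the two calls are serialized.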

Why call to CFRunLoopRunInMode() in Audio Queue Playback code?

I'm following the iOS "Audio Queue Programming Guide - Playing Audio". Near the end of the guide, there are calls to CFRunLoopRunInMode() in the step Start and Run an Audio Queue:
do { // 5
CFRunLoopRunInMode ( // 6
kCFRunLoopDefaultMode, // 7
0.25, // 8
false // 9
);
} while (aqData.mIsRunning);
//...
The documentation about line 6 says:
"The CFRunLoopRunInMode function runs the run loop that contains the audio queue’s thread."
But isn't that run loop executed anyway when my method returns? The code above is executed by the main thread upon pressing the play button in my app.
Now I'm having a hard time understanding what these calls to CFRunLoopRunInMode() are good for. They have the disadvantage that my play button does not update correctly (it looks pressed down for the whole time the audio plays), and I see no positive effect: the audio also plays nicely if I remove the do-while loop from my code along with the calls to CFRunLoopRunInMode() and instead return directly from this method.
This points to the obvious solution of simply leaving these calls out, since doing so doesn't create a problem. Can someone explain why this code is included in Apple's official guide on using audio queues for playback on iOS?
Edit:
I just noticed that Mac OS X has the same audio queue API as iOS, and the iOS guide appears to be a copy-paste duplicate of the Mac OS X guide. This leads me to suspect that those run loop calls are only required on Mac OS X and no longer on iOS, e.g. because otherwise the Mac OS X application would exit, or something like that. Can someone please verify this or rule it out?
@bunnyhero is right: CFRunLoopRunInMode() is usually for command line examples, e.g.:
https://github.com/abbood/Learning-Core-Audio-Book-Code-Sample/blob/master/CH05_Player/CH05_Player/main.c
As long as your AudioQueueRef is not deallocated, you don't have to use CFRunLoopRunInMode() on iOS.
What I do is create a separate class for the audio queue; as long as my class instance and the AudioQueueRef are allocated, I can play back, pause, resume, stop, etc.
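A bare-bones sketch of that pattern (class and method names are illustrative; HandleOutputBuffer is the callback from the guide):

#include <AudioToolbox/AudioToolbox.h>

static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer);  // as in the guide

// The queue lives as long as this object does, so no CFRunLoopRunInMode()
// loop is required to keep it playing.
class AudioQueuePlayer {
public:
    explicit AudioQueuePlayer(const AudioStreamBasicDescription &format) {
        AudioQueueNewOutput(&format, HandleOutputBuffer, this,
                            NULL, NULL, 0, &mQueue);  // NULL: internal AQ thread
    }
    ~AudioQueuePlayer() { AudioQueueDispose(mQueue, true); }
    void play()   { AudioQueueStart(mQueue, NULL); }
    void pause()  { AudioQueuePause(mQueue); }
    void resume() { AudioQueueStart(mQueue, NULL); }
    void stop()   { AudioQueueStop(mQueue, true); }
private:
    AudioQueueRef mQueue;  // member variable keeps the queue alive
};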
Related to the OP's question about the audio queue blocking the UI thread, and to further save audio queue users from blindly copying the Core Audio example cited above: I should add that the example configures the queue to run in the current run loop on the main thread, in Listing 3-11, Creating a playback audio queue:
AudioQueueNewOutput ( // 1
&aqData.mDataFormat, // 2
HandleOutputBuffer, // 3
&aqData, // 4
CFRunLoopGetCurrent (), // 5
kCFRunLoopCommonModes, // 6
0, // 7
&aqData.mQueue // 8
);
Note the parameter value CFRunLoopGetCurrent() above. The text explains:
The current run loop, and the one on which the audio queue playback
callback will be invoked.
Looking at the function prototype:
OSStatus AudioQueueNewOutput(
const AudioStreamBasicDescription *inFormat, // 2
AudioQueueOutputCallback inCallbackProc, // 3
void *inUserData, // 4
CFRunLoopRef inCallbackRunLoop, // 5
CFStringRef inCallbackRunLoopMode, // 6
UInt32 inFlags, // 7
AudioQueueRef _Nullable *outAQ // 8
);
If you replace parameter #5 with NULL, the audio queue will run on an internal Core Audio thread, which is more efficient for your app.
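For example, a sketch of the same call from Listing 3-11 with a NULL run loop (the callback then fires on an internal Core Audio thread):

AudioQueueNewOutput(&aqData.mDataFormat,   // 2
                    HandleOutputBuffer,    // 3
                    &aqData,               // 4
                    NULL,                  // 5: no run loop; internal AQ thread
                    NULL,                  // 6: not used with a NULL run loop
                    0,                     // 7
                    &aqData.mQueue);       // 8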
CFRunLoopRunInMode is needed to keep the audio queue alive after the execution of your code has ended, for example in a terminal app. iOS apps have their own life cycle: to keep an audio queue alive, you only need to hold the AudioQueueRef as a member variable. If it is instead declared within a method's scope, it gets destroyed when the method returns, and playback stops, unless you keep it alive with CFRunLoopRunInMode.
To summarize: as long as you hold the AudioQueueRef (or the newer AVAudioEngine) as a member of an instantiated class that has not been freed from memory, CFRunLoopRunInMode is not needed.
