iOS AudioUnit playback crackle issue (Swift)

I am trying to play bytes coming over UDP from an Android device on iOS. I am using TPCircularBuffer to buffer the bytes. My code is below:
let success = initCircularBuffer(&circularBuffer, 1024)
if success {
    print("Circular buffer init was successful")
} else {
    print("Circular buffer init not successful")
}
func udpReceive() {
    receivingQueue.async {
        repeat {
            do {
                let datagram = try self.tcpClient?.receive()
                let byteData = datagram?["data"] as? Data
                let dataLength = datagram?["length"] as? Int
                self.dataLength = dataLength!
                let _ = TPCircularBufferProduceBytes(&self.circularBuffer, byteData!.bytes, UInt32(dataLength! * MemoryLayout<UInt8>.stride * 2))
            } catch {
                fatalError(error.localizedDescription)
            }
        } while true
    }
}
func consumeBuffer() -> UnsafeMutableRawPointer? {
    self.availableBytes = 0
    let tail = TPCircularBufferTail(&self.circularBuffer, &self.availableBytes)
    return tail
}
We record at a 16 kHz sample rate on the Android side and send the audio to iOS via UDP, where we use an AudioUnit to play the bytes. The problem is the crackling and clipping sound in the voice.
Playback Callback code:
func performPlayback(
    _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {
    var buffer = ioData[0].mBuffers
    let bufferTail = consumeBuffer()
    memcpy(buffer.mData, bufferTail, min(self.dataLength, Int(availableBytes)))
    buffer.mDataByteSize = UInt32(min(self.dataLength, Int(availableBytes)))
    TPCircularBufferConsume(&self.circularBuffer, UInt32(min(self.dataLength, Int(availableBytes))))
    return noErr
}
UDP is sending us 1280 bytes per packet. We think the problem is the buffer size not being set correctly. Can anyone guide me on how to set the proper buffer size? It would be a great help indeed. I know the work of @Gruntcakes as a VoIP engineer https://stackoverflow.com/a/57136561/12020007. I have also studied the work of @hotpaw2 and was looking at https://stackoverflow.com/a/58545845/12020007 to check if there is some threading issue. Any kind of help would be appreciated.

An Audio Unit callback should return only the requested number of frames (samples), as indicated by the inNumberFrames parameter. Your code appears to be copying some different number of samples into the AudioBufferList, which won't work, as an iOS Audio Unit will only send the requested number of frames to the audio output.
You can suggest a preferred buffer duration in your Audio Session configuration, but this is only a suggestion. iOS is free to ignore this suggestion, and use an inNumberFrames that is better matched to the device's audio hardware, system activity, and current power state.
Don't forget to pre-fill the circular buffer enough to account for the maximum expected jitter in network (UDP) transit time. Perhaps measure network packet-to-packet latency jitter and compute its statistics: min, max, std. dev., etc.
If your UDP buffers are not a power-of-2 in size, or contain samples that are not at the iOS hardware sample rate, then you will also have to account for fractional buffer and resampling jitter in your safety buffering overhead.
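For the Audio Session suggestion above, a minimal sketch (the 5 ms figure is only an illustration; iOS may round it to whatever the hardware supports):

import AVFoundation

// Suggest, but do not guarantee, an IO buffer duration; check what iOS chose.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback)
    try session.setPreferredIOBufferDuration(0.005) // ~256 frames at 48 kHz
    try session.setActive(true)
    print("Actual IO buffer duration: \(session.ioBufferDuration)")
} catch {
    print("Audio session configuration failed: \(error)")
}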

The playback callback code is:
private let AudioController_RecordingCallback: AURenderCallback = { (
    inRefCon,
    ioActionFlags/*: UnsafeMutablePointer<AudioUnitRenderActionFlags>*/,
    inTimeStamp/*: UnsafePointer<AudioTimeStamp>*/,
    inBufNumber/*: UInt32*/,
    inNumberFrames/*: UInt32*/,
    ioData/*: UnsafeMutablePointer<AudioBufferList>*/)
    -> OSStatus
First we need to understand the callback in terms of our properties, such as sample rate and number of channels; these determine how many frames the iOS callback will play. If we supply fewer or more frames than inNumberFrames, the sound will contain crackles and clipping.
The solution is to memcpy exactly inNumberFrames worth of data from the TPCircularBuffer into ioData so the sound plays correctly. Make sure you copy only the number of frames the callback is requesting at that moment, as it can vary from call to call.
The problem with your code is that you are trying to play 1280 bytes, whereas the callback is expecting fewer bytes than that.
For TPCircularBuffer refer to https://github.com/michaeltyson/TPCircularBuffer
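In the render callback, that fix looks roughly like this (a minimal sketch, assuming 16-bit mono PCM, i.e. 2 bytes per frame, and the circularBuffer ivar from the question):

// Copy exactly the bytes that inNumberFrames requires; on underrun,
// output silence rather than a short buffer, which would crackle.
let bytesPerFrame: UInt32 = 2 // must match your AudioStreamBasicDescription
let bytesNeeded = inNumberFrames * bytesPerFrame
var availableBytes: UInt32 = 0

if let tail = TPCircularBufferTail(&circularBuffer, &availableBytes),
   availableBytes >= bytesNeeded {
    memcpy(ioData.pointee.mBuffers.mData, tail, Int(bytesNeeded))
    TPCircularBufferConsume(&circularBuffer, bytesNeeded)
} else {
    // Underrun (e.g. network jitter): play silence for this render cycle.
    memset(ioData.pointee.mBuffers.mData, 0, Int(bytesNeeded))
}
ioData.pointee.mBuffers.mDataByteSize = bytesNeeded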

Related

How to schedule MIDI events at "sample accurate" times?

I'm trying to build a sequencer app on iOS. There's a sample on the Apple Developer website that makes an audio unit play a repeating scale, here:
https://developer.apple.com/documentation/audiotoolbox/incorporating_audio_effects_and_instruments
In the sample code, there's a file "SimplePlayEngine.swift", with a class "InstrumentPlayer" which handles sending MIDI events to the selected audio unit. It spawns a thread with a loop that iterates through the scale. It sends a MIDI Note On message by calling the audio unit's AUScheduleMIDIEventBlock, sleeps the thread for a short time, sends a Note Off, and repeats.
Here's an abridged version:
DispatchQueue.global(qos: .default).async {
    ...
    while self.isPlaying {
        // cbytes is set to MIDI Note On message
        ...
        self.audioUnit.scheduleMIDIEventBlock!(AUEventSampleTimeImmediate, 0, 3, cbytes)
        usleep(useconds_t(0.2 * 1e6))
        ...
        // cbytes is now MIDI Note Off message
        self.noteBlock(AUEventSampleTimeImmediate, 0, 3, cbytes)
        ...
    }
    ...
}
This works well enough for a demonstration, but it doesn't enforce strict timing, since the events will be scheduled whenever the thread wakes up.
How can I modify it to play the scale at a certain tempo with sample-accurate timing?
My assumption is that I need a way to make the synthesizer audio unit call a callback in my code before each render with the number of frames that are about to be rendered. Then I can schedule a MIDI event every "x" number of frames. You can add an offset, up to the size of the buffer, to the first parameter to scheduleMIDIEventBlock, so I could use that to schedule the event at exactly the right frame in a given render cycle.
I tried using audioUnit.token(byAddingRenderObserver: AURenderObserver), but the callback I gave it was never called, even though the app was making sound. That method sounds like it's the Swift version of AudioUnitAddRenderNotify, and from what I read here, that sounds like what I need to do - https://stackoverflow.com/a/46869149/11924045. How come it wouldn't be called? Is it even possible to make this "sample accurate" using Swift, or do I need to use C for that?
Am I on the right track? Thanks for your help!
You're on the right track. MIDI events can be scheduled with sample-accuracy in a render callback:
let sampler = AVAudioUnitSampler()
...
let renderCallback: AURenderCallback = {
    (inRefCon: UnsafeMutableRawPointer,
     ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
     inTimeStamp: UnsafePointer<AudioTimeStamp>,
     inBusNumber: UInt32,
     inNumberFrames: UInt32,
     ioData: UnsafeMutablePointer<AudioBufferList>?) -> OSStatus in

    if ioActionFlags.pointee == AudioUnitRenderActionFlags.unitRenderAction_PreRender {
        let sampler = Unmanaged<AVAudioUnitSampler>.fromOpaque(inRefCon).takeUnretainedValue()
        let bpm = 960.0
        let samples = UInt64(44000 * 60.0 / bpm)
        let sampleTime = UInt64(inTimeStamp.pointee.mSampleTime)
        let cbytes = UnsafeMutablePointer<UInt8>.allocate(capacity: 3)
        cbytes[0] = 0x90
        cbytes[1] = 64
        cbytes[2] = 127
        for i: UInt64 in 0..<UInt64(inNumberFrames) {
            if ((sampleTime + i) % samples) == 0 {
                sampler.auAudioUnit.scheduleMIDIEventBlock!(Int64(i), 0, 3, cbytes)
            }
        }
    }
    return noErr
}

AudioUnitAddRenderNotify(sampler.audioUnit,
                         renderCallback,
                         Unmanaged.passUnretained(sampler).toOpaque())
That used AURenderCallback and scheduleMIDIEventBlock. You can swap in AURenderObserver and MusicDeviceMIDIEvent, respectively, with similar sample-accurate results:
let audioUnit = sampler.audioUnit

let renderObserver: AURenderObserver = {
    (actionFlags: AudioUnitRenderActionFlags,
     timestamp: UnsafePointer<AudioTimeStamp>,
     frameCount: AUAudioFrameCount,
     outputBusNumber: Int) -> Void in

    if actionFlags.contains(.unitRenderAction_PreRender) {
        let bpm = 240.0
        let samples = UInt64(44000 * 60.0 / bpm)
        let sampleTime = UInt64(timestamp.pointee.mSampleTime)
        for i: UInt64 in 0..<UInt64(frameCount) {
            if ((sampleTime + i) % samples) == 0 {
                MusicDeviceMIDIEvent(audioUnit, 144, 64, 127, UInt32(i))
            }
        }
    }
}

let _ = sampler.auAudioUnit.token(byAddingRenderObserver: renderObserver)
Note that these are just examples of how it's possible to do sample-accurate MIDI sequencing on the fly. You should still follow the rules of rendering to reliably implement these patterns.
Sample accurate timing generally requires using the RemoteIO Audio Unit, and manually inserting samples at the desired sample position in each audio callback block using C code.
(A WWDC session on core audio a few years back recommended against using Swift in the audio real-time context. Not sure if anything has changed that recommendation.)
Or, for MIDI, use a precisely incremented time value in each successive scheduleMIDIEventBlock call, instead of AUEventSampleTimeImmediate, and set these calls up slightly ahead of time.
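A minimal sketch of that last approach (the sampler, tempo, and starting time here are illustrative assumptions; in practice you would derive the start time from a render observer's AudioTimeStamp):

import AVFoundation

let sampler = AVAudioUnitSampler()
let sampleRate = 44100.0
let bpm = 120.0
let samplesPerBeat = AUEventSampleTime(sampleRate * 60.0 / bpm)

if let scheduleMIDI = sampler.auAudioUnit.scheduleMIDIEventBlock {
    let noteOn: [UInt8] = [0x90, 64, 127] // Note On, E4, velocity 127
    var eventTime: AUEventSampleTime = 0  // set this slightly ahead of the render position
    for _ in 0..<8 {
        noteOn.withUnsafeBufferPointer { buf in
            scheduleMIDI(eventTime, 0, 3, buf.baseAddress!)
        }
        eventTime += samplesPerBeat // precisely incremented, sample-accurate spacing
    }
}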

Precise time of audio queue playback finish

I am using Audio Queues to play back audio files. I need precise timing on the finish of the last buffer.
I need to notify a function no later than 150–200 ms after the last buffer is played...
Through the callback method I know how many buffers are enqueued.
I know the buffer size and how many bytes the last buffer is filled with.
First I initialize a number of buffers and fill the buffers with audio data, then enqueue them. When the Audio Queue needs a buffer to be filled, it calls the callback and I fill the buffer with data.
When there is no more audio data available Audio Queue sends me the last empty buffer, so I fill it with whatever data I have:
if (sharedCache.numberOfToTalPackets > 0)
{
    if (currentlyReadingBufferIndex == [sharedCache.baseAudioCache count] - 1) {
        inBuffer->mAudioDataByteSize = (UInt32)bytesFilled;
        lastEnqueudBufferSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(inAQ, inBuffer, (UInt32)packetsFilled, packetDescs);
        if (err) {
            [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_ENQUEUE_FAILED];
        }
        printf("if that was the last free packet description, then enqueue the buffer\n");
        //go to the next item on keepbuffer array
        isBufferFilled = YES;
        [self incrementBufferUsedCount];
        return;
    }
}
When the Audio Queue asks for more data via the callback and I have no more data, I start to count down the buffers. If the buffer count equals zero, which means only one buffer is left in flight to be played, I try to stop the audio queue the moment playback is done.
- (void)decrementBufferUsedCount
{
    if (buffersUsed > 0) {
        buffersUsed--;
        printf("buffer on the queue %i\n", buffersUsed);
        if (buffersUsed == 0) {
            NSLog(@"playback is finished\n");
            // end playback
            isPlayBackDone = YES;
            double sampleRate = dataFormat.mSampleRate;
            double bufferDuration = lastEnqueudBufferSize / sampleRate;
            double estimatedTimeNeded = bufferDuration * 1;
            [self performSelector:@selector(stopPlayer) withObject:nil afterDelay:estimatedTimeNeded];
        }
    }
}
- (void)stopPlayer
{
    @synchronized(self)
    {
        state = AP_STOPPING;
    }
    err = AudioQueueStop(queue, TRUE);
    if (err) {
        [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_STOP_FAILED];
    }
    else
    {
        @synchronized(self)
        {
            state = AP_STOPPED;
            NSLog(@"Stopped\n");
        }
    }
}
However, it seems I can't get precise timing here. The above code stops the player early.
If I do the following, the audio cuts out early too:
double bufferDuration = XMAQDefaultBufSize / sampleRate;
double estimatedTimeNeded = bufferDuration * 1;
If I increase the 1 to 2, I get some delay since the buffer size is big; 1.5 seems to be the optimum value for now, but I don't understand why lastEnqueudBufferSize / sampleRate is not working.
Details of the audio file and buffers:
The audio file has a 22050 Hz sample rate
#define kNumberPlaybackBuffers 4
#define kAQDefaultBufSize 16384
It is a VBR file format with no bitrate information available
EDIT:
I found an easier way that gets the same results (±10 ms). After you set up your output queue with AudioQueueNewOutput(), you initialize an AudioQueueTimelineRef to be used in your output callback. (The ticksToSeconds function is included below in my first method.) Don't forget to #import <mach/mach_time.h>.
//After AudioQueueNewOutput()
AudioQueueTimelineRef timeLine; //ivar
AudioQueueCreateTimeline(queue, &self->timeLine);
Then in your output callback you call AudioQueueGetCurrentTime(). Caveat: queue must be playing for valid timestamps. So for very short files you might need to use the AudioQueueProcessingTap method below.
AudioTimeStamp timestamp;
AudioQueueGetCurrentTime(queue, self->timeLine, &timestamp, NULL);
The timestamp ties together the current sample playing with the current machine time. With that info we can get an exact machine time in the future when our last sample will be played.
Float64 samplesLeft = self->frameCount - timestamp.mSampleTime;//samples in file - current sample
Float64 secondsLeft = samplesLeft / self->sampleRate; //seconds of audio to play
UInt64 ticksLeft = secondsLeft / ticksToSeconds(); //seconds converted to machine ticks
UInt64 machTimeFinish = timestamp.mHostTime + ticksLeft; //machine time of first sample + ticks left
Now that we have this future machine time we can use it to time whatever it is that you want to do with some accuracy.
UInt64 currentMachTime = mach_absolute_time();
UInt64 ticksFromNow = machTimeFinish - currentMachTime;
float secondsFromNow = ticksFromNow * ticksToSeconds();
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    //do the thing!!!
    printf("Giggety");
});
If GCD dispatch_after isn't accurate enough, there are ways to set up a precision timer.
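For example, a hypothetical sketch that blocks a high-priority thread with mach_wait_until() on the absolute host time computed above (the wrapper function is mine):

import Foundation
import Darwin

// Sleep until an absolute mach host time (e.g. machTimeFinish), then act.
func runAt(machTime deadline: UInt64, _ action: @escaping () -> Void) {
    DispatchQueue.global(qos: .userInteractive).async {
        mach_wait_until(deadline) // blocks until the given absolute host time
        action()
    }
}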
Using AudioQueueProcessingTap
You can get fairly low response time from an AudioQueueProcessingTap. First you make your callback that will essentially put itself in between the audio stream. The MyObject type is just whatever self is in your code (this is ARC bridging to get self inside the function). Inspecting ioFlags tells you when the stream starts and finishes. The ioTimeStamp of an output callback describes the time at which the first sample in the callback will hit the speaker. So if you want to be exact, here's how you do it. I added some convenience functions for converting machine time to seconds.
#import <mach/mach_time.h>

double getTimeConversion(){
    double timecon;
    mach_timebase_info_data_t tinfo;
    kern_return_t kerror;
    kerror = mach_timebase_info(&tinfo);
    timecon = (double)tinfo.numer / (double)tinfo.denom;
    return timecon;
}

double ticksToSeconds(){
    static double ticksToSeconds = 0;
    if (!ticksToSeconds) {
        ticksToSeconds = getTimeConversion() * 0.000000001;
    }
    return ticksToSeconds;
}
void processingTapCallback(
    void *                     inClientData,
    AudioQueueProcessingTapRef inAQTap,
    UInt32                     inNumberFrames,
    AudioTimeStamp *           ioTimeStamp,
    UInt32 *                   ioFlags,
    UInt32 *                   outNumberFrames,
    AudioBufferList *          ioData){

    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_EndOfStream) {
        Float64 sampTime;
        UInt32 frameCount;
        AudioQueueProcessingTapGetQueueTime(inAQTap, &sampTime, &frameCount);
        Float64 samplesInThisCallback = self->frameCount - sampTime; //file sampleCount - queue current sample
        //double secondsInCallback = outNumberFrames / (double)self->sampleRate; outNumberFrames was inaccurate
        double secondsInCallback = samplesInThisCallback / (double)self->sampleRate;
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (secondsInCallback / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}
- (void)lastSampleDoneAt:(uint64_t)lastSampTime{
    uint64_t currentTime = mach_absolute_time();
    if (lastSampTime > currentTime) {
        double secondsFromNow = (lastSampTime - currentTime) * ticksToSeconds();
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            //do the thing!!!
        });
    }
    else{
        //do the thing!!!
    }
}
You set it up like this after AudioQueueNewOutput and before AudioQueueStart. Notice the passing of bridged self to the inClientData argument. The queue actually holds self as a void* to be used in the callback, where we bridge it back to an Objective-C object.
AudioStreamBasicDescription format;
AudioQueueProcessingTapRef tapRef;
UInt32 maxFrames = 0;
AudioQueueProcessingTapNew(queue, processingTapCallback, (__bridge void *)self, kAudioQueueProcessingTap_PostEffects, &maxFrames, &format, &tapRef);
You could also get the end machine time as soon as the file starts, which is a little cleaner.
void processingTapCallback(
    void *                     inClientData,
    AudioQueueProcessingTapRef inAQTap,
    UInt32                     inNumberFrames,
    AudioTimeStamp *           ioTimeStamp,
    UInt32 *                   ioFlags,
    UInt32 *                   outNumberFrames,
    AudioBufferList *          ioData){

    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_StartOfStream) {
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (self->audioDurSeconds / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}
If you use AudioQueueStop in asynchronous mode, then stopping happens after all queued buffers have been played or recorded. See doc.
You're using it in a synchronous mode, where stopping happens ASAP, and playback cuts out immediately, without regard for previously buffered audio data. You want precise timing, but only because audio is cutting off. Right? So rather than go synchronous + add additional timing/callback code, I recommend going asynchronous:
err=AudioQueueStop(queue, FALSE);
From the docs:
If you pass false, the function returns immediately, but the audio queue does not stop until its queued buffers are played or recorded (that is, the stop occurs asynchronously). Audio queue callbacks are invoked as necessary until the queue actually stops.
For me this worked really well for what I needed:
stopping the queue in the callback when the data is over, using AudioQueueStop(queue, FALSE), while:
listening for the actual stop via the kAudioQueueProperty_IsRunning property (which fires later than the AudioQueueStop() call, when the last buffer actually gets rendered).
After stopping the queue you can prepare the action you need to execute on audio ending, and when the listener fires, actually execute it.
I am not sure about the time precision of that event, but for my task it definitely behaved better than a notification straight from the callback. There is buffering inside the AudioQueue and the output device itself, so the IsRunning listener gives better results as to when the AudioQueue actually stops playing.
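A minimal Swift sketch of that pattern, assuming an already-created output queue (the wrapper function is mine, not from the answer above):

import AudioToolbox

// Stop asynchronously, then run the finish action only when
// kAudioQueueProperty_IsRunning reports the queue has actually stopped.
func stopAndNotify(_ queue: AudioQueueRef) {
    let listener: AudioQueuePropertyListenerProc = { _, queue, _ in
        var isRunning: UInt32 = 0
        var size = UInt32(MemoryLayout<UInt32>.size)
        AudioQueueGetProperty(queue, kAudioQueueProperty_IsRunning, &isRunning, &size)
        if isRunning == 0 {
            // Fires only after the last enqueued buffer has actually rendered.
            print("Queue fully stopped")
        }
    }
    AudioQueueAddPropertyListener(queue, kAudioQueueProperty_IsRunning, listener, nil)
    AudioQueueStop(queue, false) // false = asynchronous: buffered audio plays out
}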

Detect when AvPlayer switch bit rate

In my application, I use the AVPlayer to read some streams (m3u8 file), with HLS protocol. I need to know how many times, during a streaming session, the client switches bitrate.
Let's assume the client's bandwidth is increasing. So the client will switch to a higher bitrate segment.
Can the AVPlayer detect this switch?
Thanks.
I have had a similar problem recently. The solution felt a bit hacky but it worked as far as I saw. First I set up an observer for new Access Log notifications:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleAVPlayerAccess:)
                                             name:AVPlayerItemNewAccessLogEntryNotification
                                           object:nil];
Which calls this function. It can probably be optimised but here is the basic idea:
- (void)handleAVPlayerAccess:(NSNotification *)notif {
    AVPlayerItemAccessLog *accessLog = [((AVPlayerItem *)notif.object) accessLog];
    AVPlayerItemAccessLogEvent *lastEvent = accessLog.events.lastObject;
    float lastEventNumber = lastEvent.indicatedBitrate;
    if (lastEventNumber != self.lastBitRate) {
        //Here is where you can increment a variable to keep track of the number of times you switch your bit rate.
        NSLog(@"Switch indicatedBitrate from: %f to: %f", self.lastBitRate, lastEventNumber);
        self.lastBitRate = lastEventNumber;
    }
}
Every time there is a new entry in the access log, it checks the last indicated bitrate from the most recent entry (the lastObject in the access log for the player item). It compares this indicated bitrate with a property that stores the bitrate from the last change.
BoardProgrammer's solution works great! In my case, I needed the indicated bitrate to detect when the content quality switched from SD to HD. Here is the Swift 3 version.
// Add observer.
NotificationCenter.default.addObserver(self,
                                       selector: #selector(handleAVPlayerAccess),
                                       name: NSNotification.Name.AVPlayerItemNewAccessLogEntry,
                                       object: nil)

// Handle notification.
func handleAVPlayerAccess(notification: Notification) {
    guard let playerItem = notification.object as? AVPlayerItem,
          let lastEvent = playerItem.accessLog()?.events.last else {
        return
    }
    let indicatedBitrate = lastEvent.indicatedBitrate
    // Use bitrate to determine bandwidth decrease or increase.
}
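To actually count the switches the original question asks about, a small sketch building on the same notification (the class and property names are mine, not from the answers above):

import AVFoundation

final class BitrateSwitchCounter: NSObject {
    private var lastBitrate: Double = 0
    private(set) var switchCount = 0

    override init() {
        super.init()
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(handleAccessLog),
                                               name: NSNotification.Name.AVPlayerItemNewAccessLogEntry,
                                               object: nil)
    }

    @objc private func handleAccessLog(_ notification: Notification) {
        guard let item = notification.object as? AVPlayerItem,
              let event = item.accessLog()?.events.last else { return }
        if event.indicatedBitrate != lastBitrate {
            if lastBitrate != 0 { switchCount += 1 } // ignore the very first entry
            lastBitrate = event.indicatedBitrate
        }
    }
}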

iOS - Generate and play indefinite, simple audio (sine wave)

I'm looking to build an incredibly simple application for iOS with a button that starts and stops an audio signal. The signal is just going to be a sine wave, and it's going to check my model (an instance variable for the volume) throughout its playback and change its volume accordingly.
My difficulty has to do with the indefinite nature of the task. I understand how to build tables, fill them with data, respond to button presses, and so on; however, when it comes to just having something continue on indefinitely (in this case, a sound), I'm a little stuck! Any pointers would be terrific!
Thanks for reading.
Here's a bare-bones application which will play a generated frequency on-demand. You haven't specified whether to do iOS or OSX, so I've gone for OSX since it's slightly simpler (no messing with Audio Session Categories). If you need iOS, you'll be able to find out the missing bits by looking into Audio Session Category basics and swapping the Default Output audio unit for the RemoteIO audio unit.
Note that the intention of this is purely to demonstrate some Core Audio / Audio Unit basics. You'll probably want to look into the AUGraph API if you want to start getting more complex than this (also in the interest of providing a clean example, I'm not doing any error checking. Always do error checking when dealing with Core Audio).
You'll need to add the AudioToolbox and AudioUnit frameworks to your project to use this code.
#import <AudioToolbox/AudioToolbox.h>

@interface SWAppDelegate : NSObject <NSApplicationDelegate>
{
    AudioUnit outputUnit;
    double renderPhase;
}
@end

@implementation SWAppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    // First, we need to establish which Audio Unit we want.
    // We start with its description, which is:
    AudioComponentDescription outputUnitDescription = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    // Next, we get the first (and only) component corresponding to that description
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription);

    // Now we can create an instance of that component, which will create an
    // instance of the Audio Unit we're looking for (the default output)
    AudioComponentInstanceNew(outputComponent, &outputUnit);
    AudioUnitInitialize(outputUnit);

    // Next we'll tell the output unit what format our generated audio will
    // be in. Generally speaking, you'll want to stick to sane formats, since
    // the output unit won't accept every single possible stream format.
    // Here, we're specifying floating point samples with a sample rate of
    // 44100 Hz in mono (i.e. 1 channel)
    AudioStreamBasicDescription ASBD = {
        .mSampleRate       = 44100,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagsNativeFloatPacked,
        .mChannelsPerFrame = 1,
        .mFramesPerPacket  = 1,
        .mBitsPerChannel   = sizeof(Float32) * 8,
        .mBytesPerPacket   = sizeof(Float32),
        .mBytesPerFrame    = sizeof(Float32)
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &ASBD,
                         sizeof(ASBD));

    // Next step is to tell our output unit which function we'd like it
    // to call to get audio samples. We'll also pass in a context pointer,
    // which can be a pointer to anything you need to maintain state between
    // render callbacks. We only need to point to a double which represents
    // the current phase of the sine wave we're creating.
    AURenderCallbackStruct callbackInfo = {
        .inputProc       = SineWaveRenderCallback,
        .inputProcRefCon = &renderPhase
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Global,
                         0,
                         &callbackInfo,
                         sizeof(callbackInfo));

    // Here we're telling the output unit to start requesting audio samples
    // from our render callback. This is the line of code that starts actually
    // sending audio to your speakers.
    AudioOutputUnitStart(outputUnit);
}
// This is our render callback. It will be called very frequently for short
// buffers of audio (512 samples per call on my machine).
OSStatus SineWaveRenderCallback(void * inRefCon,
                                AudioUnitRenderActionFlags * ioActionFlags,
                                const AudioTimeStamp * inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList * ioData)
{
    // inRefCon is the context pointer we passed in earlier when setting the render callback
    double currentPhase = *((double *)inRefCon);
    // ioData is where we're supposed to put the audio samples we've created
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData;
    const double frequency = 440.;
    const double phaseStep = (frequency / 44100.) * (M_PI * 2.);

    for(int i = 0; i < inNumberFrames; i++) {
        outputBuffer[i] = sin(currentPhase);
        currentPhase += phaseStep;
    }

    // If we were doing stereo (or more), this would copy our sine wave samples
    // to all of the remaining channels
    for(int i = 1; i < ioData->mNumberBuffers; i++) {
        memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize);
    }

    // writing the current phase back to inRefCon so we can use it on the next call
    *((double *)inRefCon) = currentPhase;
    return noErr;
}
- (void)applicationWillTerminate:(NSNotification *)notification
{
    AudioOutputUnitStop(outputUnit);
    AudioUnitUninitialize(outputUnit);
    AudioComponentInstanceDispose(outputUnit);
}
@end
You can call AudioOutputUnitStart() and AudioOutputUnitStop() at will to start/stop producing audio. If you want to dynamically change the frequency, you can pass in a pointer to a struct containing both the renderPhase double and another one representing the frequency you want.
Be careful in the render callback. It's called from a realtime thread (not from the same thread as your main run loop). Render callbacks are subject to some fairly strict time requirements, which means that there are many things you Should Not Do in your callback, such as:
Allocate memory
Wait on a mutex
Read from a file on disk
Objective-C messaging (Yes, seriously.)
Note that this is not the only way to do this. I've only demonstrated it this way since you've tagged this core-audio. If you don't need to change the frequency you can just use the AVAudioPlayer with a pre-made sound file containing your sine wave.
There's also Novocaine, which hides a lot of this verbosity from you. You could also look into the Audio Queue API, which works fairly similar to the Core Audio sample I wrote but decouples you from the hardware a little more (i.e. it's less strict about how you behave in your render callback).

iPhone playback AudioQueue stops/fails to continue after pausing at a breakpoint during debugging

For some reason, it seems that stopping at a breakpoint during debugging will kill my audio queue playback.
AudioQueue will be playing audio output.
Trigger a breakpoint to pause my iPhone app.
On subsequent resume, audio no longer gets played.
(However, AudioQueue callback functions are still getting called.)
(No AudioSession or AudioQueue errors are found.)
Since the debugger pauses the application (rather than an incoming phone call, for example), it's not a typical iPhone interruption, so AudioSession interruption callbacks do not get triggered like in this solution.
I am using three AudioQueue buffers at 4096 samples at 22kHz and filling them in a circular manner.
Problem occurs for both multi-threaded and single-threaded mode.
Is there some known problem that you can't pause and resume AudioSessions or AudioQueues during a debugging session?
Is it running out of "queued buffers", destroying/killing the AudioQueue object? (But then my AQ callback shouldn't trigger.)
Anyone have insight into inner workings of iPhone AudioQueues?
After playing around with it for the last several days, before posting to StackOverflow, I figured out the answer just today. Go figure!
Just recreate the AudioQueue again by calling my "preparation functions":
SetupNewQueue(mDataFormat.mSampleRate, mDataFormat.mChannelsPerFrame);
StartQueue(true);
So detect when your AudioQueue may have "died". In my case, I would be writing data into an input buffer to be "pulled" by the AudioQueue callback. If that doesn't occur within a certain time, or after X bytes of the input buffer have been filled, I recreate the AudioQueue.
This seems to solve the issue where audio halts or fails when you hit a debugging breakpoint.
The simplified versions of these functions are the following:
void AQPlayer::SetupNewQueue(double inSampleRate, UInt32 inChannelsPerFrame)
{
    //Prep AudioStreamBasicDescription
    mDataFormat.mSampleRate = inSampleRate;
    mDataFormat.SetCanonical(inChannelsPerFrame, YES);

    XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
                                      NULL, kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");

    // adjust buffer size to represent about a half second of audio based on this format
    CalculateBytesForTime(mDataFormat, kBufferDurationSeconds, &mBufferByteSize, &mNumPacketsToRead);
    ctl->cmsg(CMSG_INFO, VERB_NOISY, "AQPlayer Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)mBufferByteSize, (int)mNumPacketsToRead);
    mBufferWaitTime = mNumPacketsToRead / mDataFormat.mSampleRate * 0.9;

    XThrowIfError(AudioQueueAddPropertyListener(mQueue, kAudioQueueProperty_IsRunning, isRunningProc, this), "adding property listener");

    //Allocate AQ buffers (assume we are using CBR (constant bitrate)
    for (int i = 0; i < kNumberBuffers; ++i) {
        XThrowIfError(AudioQueueAllocateBuffer(mQueue, mBufferByteSize, &mBuffers[i]), "AudioQueueAllocateBuffer failed");
    }
    ...
}
OSStatus AQPlayer::StartQueue(BOOL inResume)
{
    // if we are not resuming, we also should restart the file read index
    if (!inResume)
        mCurrentPacket = 0;

    // prime the queue with some data before starting
    for (int i = 0; i < kNumberBuffers; ++i) {
        mBuffers[i]->mAudioDataByteSize = mBuffers[i]->mAudioDataBytesCapacity;
        memset( mBuffers[i]->mAudioData, 0, mBuffers[i]->mAudioDataByteSize );
        XThrowIfError(AudioQueueEnqueueBuffer( mQueue,
                                               mBuffers[i],
                                               0,
                                               NULL ), "AudioQueueEnqueueBuffer failed");
    }

    OSStatus status;
    status = AudioSessionSetActive( true );
    XThrowIfError(status, "\n\n*** AudioSession failed to become active *** \n\n");

    status = AudioQueueStart(mQueue, NULL);
    XThrowIfError(status, "\n\n*** AudioQueue failed to start *** \n\n");

    return status;
}
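The "detect when your AudioQueue may have died" step above can be sketched like this in Swift (the timeout values and the recreateQueue hook are illustrative, not from the answer):

import Foundation

// Watchdog: if the AudioQueue callback stops pulling data (e.g. after a
// breakpoint), tear the queue down and rebuild it via the supplied closure.
final class QueueWatchdog {
    private var lastPull = Date()
    private var timer: Timer?

    func noteCallbackFired() { lastPull = Date() } // call from the AQ callback

    func start(recreateQueue: @escaping () -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            if Date().timeIntervalSince(self.lastPull) > 2.0 {
                recreateQueue() // queue looks dead; rebuild and restart it
                self.lastPull = Date()
            }
        }
    }
}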
