I am working with a video/audio stream and using an AudioQueueRef to play the audio. I use the current time of the AudioQueueRef to synchronize the video and audio frames in real time. When I seek to a specified time, the current time I get from the AudioQueueRef is not correct; it just keeps returning values that continue from before the seek. I think the problem is that I did not reset all the old state of the AudioQueueRef, but I have tried many approaches and still do not get the expected current time value. Does anyone have an idea for this case? Thanks in advance.
The method I use to get the current time of the AudioQueueRef:
- (double)currentTime {
    double timeInterval = 0;
    AudioTimeStamp timeStamp;
    AudioQueueGetCurrentTime(_AudioQueues, NULL, &timeStamp, NULL);
    timeInterval = 1000 * timeStamp.mSampleTime / self.inFormat.mSampleRate; // convert to milliseconds
    return timeInterval;
}
And these are the methods I tried in order to reset the queue when seeking. If I use any of the methods below, the current time returns a negative value:
AudioQueueFreeBuffer(_AudioQueues, _AudioBuffer);
AudioQueueFlush(_AudioQueues);
AudioQueueReset(_AudioQueues);
P.S.: Everything is correct before seeking; after seeking, the timestamp keeps returning values that continue from before the seek.
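For reference, a common pattern (a hedged sketch, not taken from the question; the _seekOffsetMs ivar and seekToTime: method are hypothetical names) is to stop and reset the queue on seek and keep an explicit seek offset, because AudioQueueGetCurrentTime() reports time relative to when the queue started rather than a position in the media:
// Hedged sketch: keep an explicit seek offset instead of relying on the queue's
// sample time to track the media position.
- (void)seekToTime:(double)milliseconds {
    AudioQueueStop(_AudioQueues, true);   // stop synchronously, dropping pending buffers
    AudioQueueReset(_AudioQueues);        // clear queued data and decoder state
    _seekOffsetMs = milliseconds;         // remember where the media resumes
    // ...re-enqueue buffers starting at the new file position, then:
    AudioQueueStart(_AudioQueues, NULL);
}

- (double)currentMediaTime {
    AudioTimeStamp timeStamp;
    if (AudioQueueGetCurrentTime(_AudioQueues, NULL, &timeStamp, NULL) != noErr) {
        return _seekOffsetMs;             // queue not running yet
    }
    // On most setups the queue's sample time restarts near zero after a synchronous
    // stop/start; if it does not in your case, capture the sample time right after
    // restarting and subtract it here instead.
    return _seekOffsetMs + 1000.0 * timeStamp.mSampleTime / self.inFormat.mSampleRate;
}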
Related
I am trying to understand how timestamping works for an AUv3 MIDI plug-in of type "aumi", where the plug-in sends MIDI events to a host. I cache the MIDIOutputEventBlock and transportStateBlock properties into _outputEventBlock and _transportStateBlock in the allocateRenderResourcesAndReturnError method and use them in the internalRenderBlock method:
- (AUInternalRenderBlock)internalRenderBlock {
    // Capture in locals to avoid Obj-C member lookups. If "self" is captured in render, we're doing it wrong. See sample code.
    return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags, const AudioTimeStamp *timestamp, AVAudioFrameCount frameCount, NSInteger outputBusNumber, AudioBufferList *outputData, const AURenderEvent *realtimeEventListHead, AURenderPullInputBlock pullInputBlock) {
        // Transport State
        if (_transportStateBlock) {
            AUHostTransportStateFlags transportStateFlags;
            _transportStateBlock(&transportStateFlags, nil, nil, nil);
            if (transportStateFlags & AUHostTransportStateMoving) {
                if (!playedOnce) {
                    // NOTE On!
                    unsigned char dataOn[] = {0x90, 69, 96};
                    _outputEventBlock(timestamp->mSampleTime, 0, 3, dataOn);
                    playedOnce = YES;
                    // NOTE Off!
                    unsigned char dataOff[] = {0x80, 69, 0};
                    _outputEventBlock(timestamp->mSampleTime + 96000, 0, 3, dataOff);
                }
            }
            else {
                playedOnce = NO;
            }
        }
        return noErr;
    };
}
What this code is meant to do is play the A4 note on a synthesizer at the host for 2 seconds (the sample rate is 48 kHz). What I get instead is a click sound. Experimenting a bit, I tried delaying the note-on MIDI event by adding an offset to the AUEventSampleTime passed to _outputEventBlock, but it still plays the click sound as soon as the play button is pressed on the host.
Now, if I change the code to generate the note-off MIDI event when transportStateFlags indicates the state is "not moving" instead, the note plays as soon as the play button is pressed and stops when the pause button is pressed, which would be the correct behavior. This tells me that my understanding of the AUEventSampleTime parameter of MIDIOutputEventBlock is flawed and that it cannot be used to schedule MIDI events for the host by adding offsets to it.
I see that there is another property, scheduleMIDIEventBlock, and I tried using it instead, but then no sound is played at all.
Any clarification of how this all works would be greatly appreciated.
I am using Audio Queues to play back audio files, and I need precise timing for the finish of the last buffer.
I need to notify a function no later than 150-200 ms after the last buffer is played.
Through the callback method I know how many buffers are enqueued.
I know the buffer size and how many bytes the last buffer is filled with.
First I initialize a number of buffers, fill them with audio data, and enqueue them. When the Audio Queue needs a buffer to be filled, it calls the callback and I fill the buffer with data.
When there is no more audio data available, the Audio Queue sends me the last empty buffer, so I fill it with whatever data I have:
if (sharedCache.numberOfToTalPackets > 0)
{
    if (currentlyReadingBufferIndex == [sharedCache.baseAudioCache count] - 1) {
        inBuffer->mAudioDataByteSize = (UInt32)bytesFilled;
        lastEnqueudBufferSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(inAQ, inBuffer, (UInt32)packetsFilled, packetDescs);
        if (err) {
            [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_ENQUEUE_FAILED];
        }
        printf("if that was the last free packet description, then enqueue the buffer\n");
        //go to the next item on keepbuffer array
        isBufferFilled = YES;
        [self incrementBufferUsedCount];
        return;
    }
}
When the Audio Queue asks for more data via the callback and I have no more data, I start to count down the buffers. When the buffer count reaches zero, meaning only one buffer is left in flight still being played, I try to stop the audio queue at the moment playback is done.
-(void)decrementBufferUsedCount
{
    if (buffersUsed > 0) {
        buffersUsed--;
        printf("buffer on the queue %i\n", buffersUsed);
        if (buffersUsed == 0) {
            NSLog(@"playback is finished\n");
            // end playback
            isPlayBackDone = YES;
            double sampleRate = dataFormat.mSampleRate;
            double bufferDuration = lastEnqueudBufferSize / sampleRate;
            double estimatedTimeNeded = bufferDuration * 1;
            [self performSelector:@selector(stopPlayer) withObject:nil afterDelay:estimatedTimeNeded];
        }
    }
}
-(void)stopPlayer
{
    @synchronized(self)
    {
        state = AP_STOPPING;
    }
    err = AudioQueueStop(queue, TRUE);
    if (err) {
        [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_STOP_FAILED];
    }
    else
    {
        @synchronized(self)
        {
            state = AP_STOPPED;
            NSLog(@"Stopped\n");
        }
    }
}
However, it seems I can't get precise timing here; the code above stops the player early.
If I do the following, the audio cuts out early too:
double bufferDuration = XMAQDefaultBufSize / sampleRate;
double estimatedTimeNeded = bufferDuration * 1;
If I increase the 1 to 2, I get some delay since the buffer size is big; 1.5 seems to be the optimum value for now, but I don't understand why lastEnqueudBufferSize / sampleRate is not working.
Details of the audio file and buffers:
The audio file has a 22050 Hz sample rate
#define kNumberPlaybackBuffers 4
#define kAQDefaultBufSize 16384
It is a VBR file format with no bitrate information available
EDIT:
I found an easier way that gets the same results (+/- 10 ms). After you set up your output queue with AudioQueueNewOutput(), you initialize an AudioQueueTimelineRef to be used in your output callback (the ticksToSeconds function is included below; don't forget to #import <mach/mach_time.h>):
// After AudioQueueNewOutput()
AudioQueueTimelineRef timeLine;  // ivar
AudioQueueCreateTimeline(queue, &self->timeLine);  // pass the address of the timeline ref
Then in your output callback you call AudioQueueGetCurrentTime(). Caveat: the queue must be playing to get valid timestamps, so for very short files you might need to use the AudioQueueProcessingTap method below.
AudioTimeStamp timestamp;
AudioQueueGetCurrentTime(queue, self->timeLine, &timestamp, NULL);
The timestamp ties together the current sample playing with the current machine time. With that info we can get an exact machine time in the future when our last sample will be played.
Float64 samplesLeft = self->frameCount - timestamp.mSampleTime;//samples in file - current sample
Float64 secondsLeft = samplesLeft / self->sampleRate; //seconds of audio to play
UInt64 ticksLeft = secondsLeft / ticksToSeconds(); //seconds converted to machine ticks
UInt64 machTimeFinish = timestamp.mHostTime + ticksLeft; //machine time of first sample + ticks left
Now that we have this future machine time we can use it to time whatever it is that you want to do with some accuracy.
UInt64 currentMachTime = mach_absolute_time();
UInt64 ticksFromNow = machTimeFinish - currentMachTime;
float secondsFromNow = ticksFromNow * ticksToSeconds();
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
//do the thing!!!
printf("Giggety");
});
If GCD's dispatch_after isn't accurate enough, there are ways to set up a precision timer, as sketched below.
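For example (a hedged sketch, not part of the original answer): since machTimeFinish is already an absolute host time in mach ticks, one option is to block a high-priority background queue with mach_wait_until(), which waits until an absolute mach time and is typically accurate to well under a millisecond.
// Hedged sketch: wait on a background queue until the absolute host time computed above,
// then hop back to the main queue to do the work.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
    mach_wait_until(machTimeFinish);   // blocks until the deadline (in mach ticks)
    dispatch_async(dispatch_get_main_queue(), ^{
        //do the thing!!!
    });
});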
Using AudioQueueProcessingTap
You can get fairly low response time from an AudioQueueProcessingTap. First you create the callback that will essentially sit in between the audio stream. The MyObject type is just whatever self is in your code (this is ARC bridging to get self inside the function). Inspecting ioFlags tells you when the stream starts and finishes. The ioTimeStamp of an output callback describes the time at which the first sample in the callback will hit the speaker in the future. So if you want to be exact, here's how you do it. I've added some convenience functions for converting machine time to seconds.
#import <mach/mach_time.h>

double getTimeConversion(){
    double timecon;
    mach_timebase_info_data_t tinfo;
    kern_return_t kerror;
    kerror = mach_timebase_info(&tinfo);
    timecon = (double)tinfo.numer / (double)tinfo.denom;
    return timecon;
}

double ticksToSeconds(){
    static double ticksToSeconds = 0;
    if (!ticksToSeconds) {
        ticksToSeconds = getTimeConversion() * 0.000000001;
    }
    return ticksToSeconds;
}
void processingTapCallback(void *inClientData,
                           AudioQueueProcessingTapRef inAQTap,
                           UInt32 inNumberFrames,
                           AudioTimeStamp *ioTimeStamp,
                           UInt32 *ioFlags,
                           UInt32 *outNumberFrames,
                           AudioBufferList *ioData){
    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags & kAudioQueueProcessingTap_EndOfStream) {
        Float64 sampTime;
        UInt32 frameCount;
        AudioQueueProcessingTapGetQueueTime(inAQTap, &sampTime, &frameCount);
        Float64 samplesInThisCallback = self->frameCount - sampTime; // file sampleCount - queue current sample
        //double secondsInCallback = outNumberFrames / (double)self->sampleRate; outNumberFrames was inaccurate
        double secondsInCallback = samplesInThisCallback / (double)self->sampleRate;
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (secondsInCallback / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}
-(void)lastSampleDoneAt:(uint64_t)lastSampTime{
    uint64_t currentTime = mach_absolute_time();
    if (lastSampTime > currentTime) {
        double secondsFromNow = (lastSampTime - currentTime) * ticksToSeconds();
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            //do the thing!!!
        });
    }
    else{
        //do the thing!!!
    }
}
You set it up like this after AudioQueueNewOutput and before AudioQueueStart. Notice that bridged self is passed as the inClientData argument. The queue holds self as a void* to be used in the callback, where we bridge it back to an Objective-C object.
AudioStreamBasicDescription format;
AudioQueueProcessingTapRef tapRef;
UInt32 maxFrames = 0;
AudioQueueProcessingTapNew(queue, processingTapCallback, (__bridge void *)self, kAudioQueueProcessingTap_PostEffects, &maxFrames, &format, &tapRef);
You could also get the end machine time as soon as the file starts, which is a little cleaner:
void processingTapCallback(void *inClientData,
                           AudioQueueProcessingTapRef inAQTap,
                           UInt32 inNumberFrames,
                           AudioTimeStamp *ioTimeStamp,
                           UInt32 *ioFlags,
                           UInt32 *outNumberFrames,
                           AudioBufferList *ioData){
    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags & kAudioQueueProcessingTap_StartOfStream) {
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (self->audioDurSeconds / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}
If you use AudioQueueStop in asynchronous mode, stopping happens after all queued buffers have been played or recorded. See the documentation.
You're using it in synchronous mode, where stopping happens ASAP and playback cuts out immediately, without regard for previously buffered audio data. You want precise timing, but only because the audio is cutting off, right? So rather than going synchronous and adding additional timing/callback code, I recommend going asynchronous:
err=AudioQueueStop(queue, FALSE);
From the docs:
If you pass false, the function returns immediately, but the audio queue does not stop until its queued buffers are played or recorded (that is, the stop occurs asynchronously). Audio queue callbacks are invoked as necessary until the queue actually stops.
For me this worked really well for what I needed:
stopping the queue in the callback when the data is over, using AudioQueueStop(queue, FALSE), while:
listening for the actual stop via the kAudioQueueProperty_IsRunning property (which happens later than the AudioQueueStop() call, when the last buffer actually gets rendered)
After stopping the queue you can prepare for the action you need to execute on audio ending, and when the listener fires, actually execute that action.
I am not sure about the time precision of that event, but for my task it behaved definitely better than using a notification straight from the callback. There is buffering inside the AudioQueue and in the output device itself, so the IsRunning listener definitely gives better results as to when the AudioQueue actually stops playing.
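As a rough illustration of that pattern (a hedged sketch, not the answerer's actual code), the listener can be registered right after creating the queue; the property callback fires when the queue's running state changes:
// Hedged sketch: get notified when the queue has actually finished rendering.
static void isRunningChanged(void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID)
{
    UInt32 isRunning = 0;
    UInt32 size = sizeof(isRunning);
    AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
    if (!isRunning) {
        // The last buffer has been rendered; hop to the main queue for any UI work.
        dispatch_async(dispatch_get_main_queue(), ^{
            // playback really finished here
        });
    }
}

// After AudioQueueNewOutput(...):
AudioQueueAddPropertyListener(queue, kAudioQueueProperty_IsRunning, isRunningChanged, (__bridge void *)self);

// Later, in the buffer callback, when there is no more data:
AudioQueueStop(queue, FALSE); // asynchronous stop; isRunningChanged fires when it really stops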
I am using a timer in my cocos2d-x (C++) iOS game, with cocos2d-x version 2.2.
My timer function is as follows.
In my init:
this->schedule(schedule_selector(HelloWorld::UpdateTimer), 1);
I have defined the function as follows.
void HelloWorld::UpdateTimer(float dt)
{
    if(seconds <= 0)
    {
        CCLOG("clock stopped");
        CCString *str = CCString::createWithFormat("%d", seconds);
        timer->setString(str->getCString());
        this->unschedule(schedule_selector(HelloWorld::UpdateTimer));
    }
    else
    {
        CCString *str = CCString::createWithFormat("%d", seconds);
        timer->setString(str->getCString());
        seconds--;
    }
}
Everything is working fine, but I need this timer to keep running even if the game enters the background state. I have tried commenting out the body of didEnterBackground in the app delegate, but that was not successful. Any help will be appreciated.
Thanks
Once the app goes into the background, apart from some special background tasks, no other threads get executed.
The best way for you would be to save the Unix timestamp in a variable during didEnterBackground, and when the app resumes, fetch the current Unix timestamp and compare the delta to get the total time passed, then update your timer accordingly.
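A minimal sketch of that idea (hypothetical snippet; the key name is my own), using time(NULL) and the cocos2d-x 2.x CCUserDefault API:
// Hedged sketch: in applicationDidEnterBackground(), store the current Unix timestamp.
CCUserDefault::sharedUserDefault()->setIntegerForKey("bg_timestamp", (int)time(NULL));

// Hedged sketch: in applicationWillEnterForeground(), compute how long we were away.
int wentToBackgroundAt = CCUserDefault::sharedUserDefault()->getIntegerForKey("bg_timestamp");
int secondsInBackground = (int)time(NULL) - wentToBackgroundAt;
// ...then, for a countdown, subtract secondsInBackground from the remaining "seconds" counter.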
In my AppDelegate.cpp I wrote the following code in the applicationDidEnterBackground function. Here I take the local system time in seconds whenever the app goes into the background and store it in a CCUserDefault key. When the app comes to the foreground I take the local system time again and subtract the stored value from it. Following is my code:
void AppDelegate::applicationDidEnterBackground()
{
    time_t rawtime;
    struct tm * timeinfo;
    time (&rawtime);
    timeinfo = localtime (&rawtime);
    CCLog("year------->%04d",timeinfo->tm_year+1900);
    CCLog("month------->%02d",timeinfo->tm_mon+1);
    CCLog("day------->%02d",timeinfo->tm_mday);
    CCLog("hour------->%02d",timeinfo->tm_hour);
    CCLog("minutes------->%02d",timeinfo->tm_min);
    CCLog("seconds------->%02d",timeinfo->tm_sec);
    int time_in_seconds = (timeinfo->tm_hour*3600) + (timeinfo->tm_min*60) + timeinfo->tm_sec; // hours and minutes converted to seconds
    CCLOG("time in seconds is %d",time_in_seconds);
    CCUserDefault *def = CCUserDefault::sharedUserDefault();
    def->setIntegerForKey("time_from_background", time_in_seconds);
    CCDirector::sharedDirector()->stopAnimation();
    // if you use SimpleAudioEngine, it must be paused
    // SimpleAudioEngine::sharedEngine()->pauseBackgroundMusic();
}
void AppDelegate::applicationWillEnterForeground()
{
    CCUserDefault *def = CCUserDefault::sharedUserDefault();
    int time1 = def->getIntegerForKey("time_from_background");
    time_t rawtime;
    struct tm * timeinfo;
    time(&rawtime);
    timeinfo = localtime (&rawtime);
    CCLog("year------->%04d",timeinfo->tm_year+1900);
    CCLog("month------->%02d",timeinfo->tm_mon+1);
    CCLog("day------->%02d",timeinfo->tm_mday);
    CCLog("hour------->%02d",timeinfo->tm_hour);
    CCLog("minutes------->%02d",timeinfo->tm_min);
    CCLog("seconds------->%02d",timeinfo->tm_sec);
    int time_in_seconds = (timeinfo->tm_hour*3600) + (timeinfo->tm_min*60) + timeinfo->tm_sec;
    int resume_seconds = time_in_seconds - time1;
    CCLOG("app after seconds == %d", resume_seconds);
    CCDirector::sharedDirector()->startAnimation();
    // if you use SimpleAudioEngine, it must be resumed here
    // SimpleAudioEngine::sharedEngine()->resumeBackgroundMusic();
}
That way you can see and calculate how long the app remained in the background.
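To keep the countdown consistent, the elapsed time still has to be applied to the scene's counter. A hedged sketch of one way to do that (the notification name, the observer method, and the use of CCNotificationCenter are my own, not from the answer):
// Hedged sketch: in applicationWillEnterForeground(), after computing resume_seconds,
// forward it to the running scene (cocos2d-x 2.x CCNotificationCenter API).
CCNotificationCenter::sharedNotificationCenter()->postNotification("app_resumed", CCInteger::create(resume_seconds));

// In HelloWorld (observer registered in init via addObserver), reduce the remaining time:
void HelloWorld::onAppResumed(CCObject *obj)
{
    CCInteger *elapsed = (CCInteger *)obj;
    seconds -= elapsed->getValue();   // let the next UpdateTimer tick handle seconds <= 0
    if (seconds < 0) seconds = 0;
}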
I've been working at this problem for a few days and none of my solutions have been adequate. I'm lacking the theoretical knowledge to make this happen, I think, and would love some advice (does not have to be iOS specific--I can translate C, pseudocode, whatever, into what I need).
Basically, I have two iPhones. Either one can trigger a repeating action when the user presses a button. It then needs to notify the other iPhone (via the MultiPeer framework) to trigger the same action...but they both need to start at the same instant and stay in step. I really need to get 1/100sec accuracy, which I think is achievable on this platform.
As a semi-rough gauge of how well in synch I am, I use AudioServices to play a "tick" sound on each device...you can very easily tell by ear how well in synch they are (ideally you would not be able to discern multiple sound sources).
Of course, I have to account for the MultiPeer latency somehow...and it's highly variable, anywhere from .1 sec to .8 sec in my testing.
Having found that the system clock is totally unreliable for my purposes, I found an iOS implementation of NTP and am using that. So I'm reasonably confident that the two phones have an accurate common reference for time (though I haven't figured out a way to test this assumption short of continuously displaying NTP time on both devices, which I do, and it seems nicely in synch to my eye).
What I was trying before was sending the "start time" with the P2P message, then (on the recipient end) subtracting that latency from a 1.5sec constant, and performing the action after that delay. On the sender end, I would simply wait for that constant to elapse and then perform the action. This didn't work at all. I was way off.
My next attempt was to wait, on both ends, for a whole second divisible by three. Since latency always seems to be under 1 sec, I thought this would work. I use the "delay" method to simply block the thread. It's a cudgel, I know, but I just want to get the timing working, period, before I worry about a more elegant solution. So, my "sender" (the device where the button is pressed) does this:
-(void)startActionAsSender
{
    [self notifyPeerToStartAction];
    [self delay];
    [self startAction];
}
And the recipient does this, in response to a delegate call:
-(void)peerDidStartAction
{
    [self delay];
    [self startAction];
}
My "delay" method looks like this:
-(void)delay
{
    NSDate *NTPTimeNow = [[NetworkClock sharedInstance] networkTime];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSDateComponents *components = [calendar components:NSSecondCalendarUnit
                                               fromDate:NTPTimeNow];
    NSInteger seconds = [components second];
    // If this method gets called on a second divisible by three, wait a second...
    if (seconds % 3 == 0) {
        sleep(1);
    }
    // Spinlock
    while (![self secondsDivideByThree]) {}
}
-(BOOL)secondsDivideByThree
{
    NSDate *NTPTime = [[NetworkClock sharedInstance] networkTime];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSInteger seconds = [[calendar components:NSSecondCalendarUnit fromDate:NTPTime] second];
    return (seconds % 3 == 0);
}
This is old, so I hope you were able to get something working. I faced a very similar problem. In my case, I found that the inconsistency was almost entirely due to timer coalescing, which causes timers to be wrong by up to 10% on iOS devices in order to save battery usage.
For reference, here's a solution that I've been using in my own app. First, I use a simple custom protocol that's essentially a rudimentary NTP equivalent to synchronize a monotonically increasing clock between the two devices over the local network. I call this synchronized time "DTime" in the code below. With this code I'm able to tell all peers "perform action X at time Y", and it happens in sync.
+ (DTimeVal)getCurrentDTime
{
    DTimeVal baseTime = mach_absolute_time();
    // Convert from ticks to nanoseconds:
    static mach_timebase_info_data_t s_timebase_info;
    if (s_timebase_info.denom == 0) {
        mach_timebase_info(&s_timebase_info);
    }
    DTimeVal timeNanoSeconds = (baseTime * s_timebase_info.numer) / s_timebase_info.denom;
    return timeNanoSeconds + localDTimeOffset;
}
+ (void)atExactDTime:(DTimeVal)val runBlock:(dispatch_block_t)block
{
    // Use the most accurate timing possible to trigger an event at the specified DTime.
    // This is much more accurate than dispatch_after(...), which has a 10% "leeway" by default.
    // However, this method will use battery faster as it avoids most timer coalescing.
    // Use as little as necessary.
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, DISPATCH_TIMER_STRICT, dispatch_get_main_queue());
    dispatch_source_set_event_handler(timer, ^{
        dispatch_source_cancel(timer); // one shot timer
        while (val - [self getCurrentDTime] > 1000) {
            // It is at least 1 microsecond too early...
            [NSThread sleepForTimeInterval:0.000001]; // Change this to zero for even better accuracy
        }
        block();
    });
    // Now, we employ a dirty trick:
    // Since even with DISPATCH_TIMER_STRICT there can be about 1ms of inaccuracy, we set the timer to
    // fire 1.3ms too early, then we use an until(time) { sleep(); } loop to delay until the exact time
    // that we wanted. This takes us from an accuracy of ~1ms to an accuracy of ~0.01ms, i.e. two orders
    // of magnitude improvement. However, of course the downside is that this will block the main thread
    // for 1.3ms.
    dispatch_time_t at_time = dispatch_time(DISPATCH_TIME_NOW, val - [self getCurrentDTime] - 1300000);
    dispatch_source_set_timer(timer, at_time, DISPATCH_TIME_FOREVER /*one shot*/, 0 /* minimal leeway */);
    dispatch_resume(timer);
}
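The code above reads localDTimeOffset, but the answer does not show how that offset is derived. A hedged sketch of one way the "rudimentary NTP equivalent" could set it (the message handler, the MyClass name, and the assumption that DTimeVal is a signed 64-bit integer are mine): record the local DTime when sending a ping, have the peer reply with its own DTime, and assume the reply was stamped halfway through the round trip.
// Hedged sketch: adjust localDTimeOffset from one ping/pong exchange over MultipeerConnectivity.
static DTimeVal localDTimeOffset = 0;   // added to mach time in getCurrentDTime

+ (void)handlePongWithPeerDTime:(DTimeVal)peerTime sentAtLocalDTime:(DTimeVal)sentTime
{
    DTimeVal now = [MyClass getCurrentDTime];
    DTimeVal roundTrip = now - sentTime;                      // full ping/pong latency
    DTimeVal estimatedPeerTimeNow = peerTime + roundTrip / 2; // assume a symmetric path
    localDTimeOffset += estimatedPeerTimeNow - now;
    // Repeating this a few times and keeping the sample with the smallest roundTrip
    // usually gives a tighter estimate.
}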
I'm looking to build an incredibly simple application for iOS with a button that starts and stops an audio signal. The signal is just going to be a sine wave, and it's going to check my model (an instance variable for the volume) throughout its playback and change its volume accordingly.
My difficulty has to do with the indefinite nature of the task. I understand how to build tables, fill them with data, respond to button presses, and so on; however, when it comes to just having something continue on indefinitely (in this case, a sound), I'm a little stuck! Any pointers would be terrific!
Thanks for reading.
Here's a bare-bones application that will play a generated frequency on demand. You haven't specified whether to target iOS or OS X, so I've gone for OS X since it's slightly simpler (no messing with Audio Session categories). If you need iOS, you'll be able to find the missing bits by looking into Audio Session category basics and swapping the Default Output audio unit for the RemoteIO audio unit.
Note that the intention of this is purely to demonstrate some Core Audio / Audio Unit basics. You'll probably want to look into the AUGraph API if you want to start getting more complex than this (also in the interest of providing a clean example, I'm not doing any error checking. Always do error checking when dealing with Core Audio).
You'll need to add the AudioToolbox and AudioUnit frameworks to your project to use this code.
#import <AudioToolbox/AudioToolbox.h>
@interface SWAppDelegate : NSObject <NSApplicationDelegate>
{
AudioUnit outputUnit;
double renderPhase;
}
@end

@implementation SWAppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    // First, we need to establish which Audio Unit we want.
    // We start with its description, which is:
    AudioComponentDescription outputUnitDescription = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    // Next, we get the first (and only) component corresponding to that description
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription);

    // Now we can create an instance of that component, which will create an
    // instance of the Audio Unit we're looking for (the default output)
    AudioComponentInstanceNew(outputComponent, &outputUnit);
    AudioUnitInitialize(outputUnit);

    // Next we'll tell the output unit what format our generated audio will
    // be in. Generally speaking, you'll want to stick to sane formats, since
    // the output unit won't accept every single possible stream format.
    // Here, we're specifying floating point samples with a sample rate of
    // 44100 Hz in mono (i.e. 1 channel)
    AudioStreamBasicDescription ASBD = {
        .mSampleRate       = 44100,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagsNativeFloatPacked,
        .mChannelsPerFrame = 1,
        .mFramesPerPacket  = 1,
        .mBitsPerChannel   = sizeof(Float32) * 8,
        .mBytesPerPacket   = sizeof(Float32),
        .mBytesPerFrame    = sizeof(Float32)
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &ASBD,
                         sizeof(ASBD));

    // Next step is to tell our output unit which function we'd like it
    // to call to get audio samples. We'll also pass in a context pointer,
    // which can be a pointer to anything you need to maintain state between
    // render callbacks. We only need to point to a double which represents
    // the current phase of the sine wave we're creating.
    AURenderCallbackStruct callbackInfo = {
        .inputProc       = SineWaveRenderCallback,
        .inputProcRefCon = &renderPhase
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Global,
                         0,
                         &callbackInfo,
                         sizeof(callbackInfo));

    // Here we're telling the output unit to start requesting audio samples
    // from our render callback. This is the line of code that starts actually
    // sending audio to your speakers.
    AudioOutputUnitStart(outputUnit);
}
// This is our render callback. It will be called very frequently for short
// buffers of audio (512 samples per call on my machine).
OSStatus SineWaveRenderCallback(void * inRefCon,
                                AudioUnitRenderActionFlags * ioActionFlags,
                                const AudioTimeStamp * inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList * ioData)
{
    // inRefCon is the context pointer we passed in earlier when setting the render callback
    double currentPhase = *((double *)inRefCon);
    // ioData is where we're supposed to put the audio samples we've created
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData;
    const double frequency = 440.;
    const double phaseStep = (frequency / 44100.) * (M_PI * 2.);

    for(int i = 0; i < inNumberFrames; i++) {
        outputBuffer[i] = sin(currentPhase);
        currentPhase += phaseStep;
    }

    // If we were doing stereo (or more), this would copy our sine wave samples
    // to all of the remaining channels
    for(int i = 1; i < ioData->mNumberBuffers; i++) {
        memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize);
    }

    // writing the current phase back to inRefCon so we can use it on the next call
    *((double *)inRefCon) = currentPhase;
    return noErr;
}
- (void)applicationWillTerminate:(NSNotification *)notification
{
    AudioOutputUnitStop(outputUnit);
    AudioUnitUninitialize(outputUnit);
    AudioComponentInstanceDispose(outputUnit);
}
@end
You can call AudioOutputUnitStart() and AudioOutputUnitStop() at will to start and stop producing audio. If you want to dynamically change the frequency, you can pass in a pointer to a struct containing both the renderPhase double and another double representing the frequency you want, as sketched below.
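A minimal sketch of that idea (the SineWaveState struct and its field names are mine, not from the answer): the context pointer now points at a struct, and the callback reads the frequency from it on every render cycle, so other code can retune the oscillator between callbacks.
// Hedged sketch: context struct holding both the phase and a user-adjustable frequency.
typedef struct {
    double phase;
    double frequency;   // change this from the main thread to retune the oscillator
} SineWaveState;

// Use an ivar such as: SineWaveState renderState = { 0.0, 440.0 };
// and register it with .inputProcRefCon = &renderState

OSStatus SineWaveRenderCallback(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    SineWaveState *state = (SineWaveState *)inRefCon;
    Float32 *outputBuffer = (Float32 *)ioData->mBuffers[0].mData;
    const double phaseStep = (state->frequency / 44100.) * (M_PI * 2.);

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        outputBuffer[i] = sin(state->phase);
        state->phase += phaseStep;
    }
    // (the stereo copy loop from the original callback is omitted here for brevity)
    return noErr;
}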
Be careful in the render callback. It's called from a realtime thread (not from the same thread as your main run loop). Render callbacks are subject to some fairly strict time requirements, which means that there are many things you Should Not Do in your callback, such as:
Allocate memory
Wait on a mutex
Read from a file on disk
Objective-C messaging (Yes, seriously.)
Note that this is not the only way to do this. I've only demonstrated it this way since you've tagged this core-audio. If you don't need to change the frequency you can just use AVAudioPlayer with a pre-made sound file containing your sine wave.
There's also Novocaine, which hides a lot of this verbosity from you. You could also look into the Audio Queue API, which works fairly similarly to the Core Audio sample I wrote but decouples you from the hardware a little more (i.e. it's less strict about how you behave in your render callback).