"Precise" sample timing in cycles in ios - ios

I'm periodically sending data to an RC helicopter; at the moment I am using an NSTimer with a time interval of 30 ms to do this. Due to the imprecision of NSTimer and the non-real-time nature of iOS, I'm getting quite large discrepancies in the sample times (for example, 5 packets of data in the first 30 ms and then nothing for 4 cycles). Is there a more precise way to handle data sampling and the timing of functions in iOS?

You could do what games usually do to fine-tune the frame rate (and also keep logic and painting separate so it works at any fps): run an update loop like the one below, and call it on a background thread:
- (void)updateLoop
{
    NSTimeInterval last = CFAbsoluteTimeGetCurrent();
    NSTimeInterval elapsed = 0;
    while (self.running) {
        NSTimeInterval now = CFAbsoluteTimeGetCurrent();
        NSTimeInterval dt = now - last;
        elapsed += dt;
        if (elapsed >= YOUR_INTERVAL) {
            // Do stuff
            elapsed = 0;
        }
        last = now;
        [NSThread sleepForTimeInterval:0.001]; // There is no NSThread yield
    }
}
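If you would rather not spin a loop yourself, a GCD timer source with the strict flag and a small explicit leeway is another option. This is only a minimal Swift sketch of that idea, not part of the original answer; sendPacket() is a hypothetical stand-in for your transmit code:
import Foundation

// Hypothetical placeholder for the code that transmits one packet to the helicopter.
func sendPacket() {
    // transmit one packet here
}

// A strict timer on a dedicated queue with ~1 ms leeway, firing every 30 ms.
let queue = DispatchQueue(label: "packet-timer", qos: .userInteractive)
let timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
timer.schedule(deadline: .now(), repeating: .milliseconds(30), leeway: .milliseconds(1))
timer.setEventHandler { sendPacket() }
timer.resume()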

Related

What does DispatchWallTime do on iOS?

I thought the difference between DispatchTime and DispatchWallTime had to do with whether the app was suspended or the device screen was locked or something: DispatchTime should pause, whereas DispatchWallTime should keep going because clocks in the real world keep going.
So I wrote a little test app:
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.
        return true
    }
    func applicationDidEnterBackground(_ application: UIApplication) {
        print("backgrounding the app, starting timers for 60 seconds", Date())
        DispatchQueue.main.asyncAfter(deadline: .now() + 60) {
            print("deadline 60 seconds ended", Date())
        }
        DispatchQueue.main.asyncAfter(wallDeadline: .now() + 60) {
            print("wallDeadline 60 seconds ended", Date())
        }
    }
    func applicationWillEnterForeground(_ application: UIApplication) {
        print("app coming to front", Date())
    }
}
I ran the app on my device. I backgrounded the app, waited for a while, then brought the app to the foreground. Sometimes "waited for a while" included switching off the screen. I got results like this:
backgrounding the app, starting timers for 60 seconds 2018-08-15 17:41:18 +0000
app coming to front 2018-08-15 17:41:58 +0000
wallDeadline 60 seconds ended 2018-08-15 17:42:24 +0000
deadline 60 seconds ended 2018-08-15 17:42:24 +0000
backgrounding the app, starting timers for 60 seconds 2018-08-15 17:42:49 +0000
app coming to front 2018-08-15 17:43:21 +0000
wallDeadline 60 seconds ended 2018-08-15 17:43:55 +0000
deadline 60 seconds ended 2018-08-15 17:43:55 +0000
The delay before the deadline timer fires is not as long as I expected: it's 6 seconds over the 60 second deadline, even though I "slept" the app for considerably longer than that. But even more surprising, both timers fire at the same instant.
So what does wallDeadline do on iOS that's different from what deadline does?
There's nothing wrong with The Dreams Wind's answer, but I wanted to understand these APIs more precisely. Here's my analysis.
DispatchTime
Here's the comment above DispatchTime.init:
/// Creates a `DispatchTime` relative to the system clock that
/// ticks since boot.
///
/// - Parameters:
/// - uptimeNanoseconds: The number of nanoseconds since boot, excluding
/// time the system spent asleep
/// - Returns: A new `DispatchTime`
/// - Discussion: This clock is the same as the value returned by
/// `mach_absolute_time` when converted into nanoseconds.
/// On some platforms, the nanosecond value is rounded up to a
/// multiple of the Mach timebase, using the conversion factors
/// returned by `mach_timebase_info()`. The nanosecond equivalent
/// of the rounded result can be obtained by reading the
/// `uptimeNanoseconds` property.
/// Note that `DispatchTime(uptimeNanoseconds: 0)` is
/// equivalent to `DispatchTime.now()`, that is, its value
/// represents the number of nanoseconds since boot (excluding
/// system sleep time), not zero nanoseconds since boot.
So DispatchTime is based on mach_absolute_time. But what is mach_absolute_time? It's defined in mach_absolute_time.s. There is a separate definition for each CPU type, but the key is that it uses rdtsc on x86-like CPUs and reads the CNTPCT_EL0 register on ARMs. In both cases, it's getting a value that increases monotonically, and only increases while the processor is not in a sufficiently deep sleep state.
Note that the CPU is not necessarily sleeping deeply enough even if the device appears to be asleep.
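To make that concrete, here is a small sketch (mine, not from the libdispatch sources) of the conversion DispatchTime performs, turning mach_absolute_time ticks into nanoseconds with mach_timebase_info:
import Darwin

// Nanoseconds since boot (excluding deep sleep) -- the same quantity that
// DispatchTime.now().uptimeNanoseconds reports.
func uptimeNanoseconds() -> UInt64 {
    var timebase = mach_timebase_info_data_t()
    _ = mach_timebase_info(&timebase)   // ticks-to-nanoseconds ratio
    return mach_absolute_time() * UInt64(timebase.numer) / UInt64(timebase.denom)
}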
DispatchWallTime
There's no similarly helpful comment in the DispatchWallTime definition, but we can look at the definition of its now method:
public static func now() -> DispatchWallTime {
    return DispatchWallTime(rawValue: CDispatch.dispatch_walltime(nil, 0))
}
and then we can consult the definition of dispatch_walltime:
dispatch_time_t
dispatch_walltime(const struct timespec *inval, int64_t delta)
{
    int64_t nsec;
    if (inval) {
        nsec = (int64_t)_dispatch_timespec_to_nano(*inval);
    } else {
        nsec = (int64_t)_dispatch_get_nanoseconds();
    }
    nsec += delta;
    if (nsec <= 1) {
        // -1 is special == DISPATCH_TIME_FOREVER == forever
        return delta >= 0 ? DISPATCH_TIME_FOREVER : (dispatch_time_t)-2ll;
    }
    return (dispatch_time_t)-nsec;
}
When inval is nil, it calls _dispatch_get_nanoseconds, so let's check that out:
static inline uint64_t
_dispatch_get_nanoseconds(void)
{
    dispatch_static_assert(sizeof(NSEC_PER_SEC) == 8);
    dispatch_static_assert(sizeof(USEC_PER_SEC) == 8);

#if TARGET_OS_MAC
    return clock_gettime_nsec_np(CLOCK_REALTIME);
#elif HAVE_DECL_CLOCK_REALTIME
    struct timespec ts;
    dispatch_assume_zero(clock_gettime(CLOCK_REALTIME, &ts));
    return _dispatch_timespec_to_nano(ts);
#elif defined(_WIN32)
    static const uint64_t kNTToUNIXBiasAdjustment = 11644473600 * NSEC_PER_SEC;
    // FILETIME is 100-nanosecond intervals since January 1, 1601 (UTC).
    FILETIME ft;
    ULARGE_INTEGER li;
    GetSystemTimePreciseAsFileTime(&ft);
    li.LowPart = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;
    return li.QuadPart * 100ull - kNTToUNIXBiasAdjustment;
#else
    struct timeval tv;
    dispatch_assert_zero(gettimeofday(&tv, NULL));
    return _dispatch_timeval_to_nano(tv);
#endif
}
It consults the POSIX CLOCK_REALTIME clock. So it is based on the common idea of time and will change if you change your device's time in Settings (or System Preferences on a Mac).
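You can observe both underlying clocks directly; this is an illustrative sketch of my own, not part of the dispatch sources:
import Dispatch
import Darwin

// Boot-relative clock that DispatchTime is based on (pauses in deep sleep).
let uptimeNanos = DispatchTime.now().uptimeNanoseconds

// POSIX real-time clock that DispatchWallTime is based on; this value jumps
// if you change the device's date and time in Settings.
let wallNanos = clock_gettime_nsec_np(CLOCK_REALTIME)

print(uptimeNanos, wallNanos)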
The Mysterious Six Seconds
You said your timer fired
6 seconds over the 60 second deadline
so let's see where that came from.
Both asyncAfter(deadline:execute:) and asyncAfter(wallDeadline:execute:) call the same C API, dispatch_after. The kind of deadline (or “clock”) is encoded into a dispatch_time_t along with the time value. The dispatch_after function calls the internal GCD function _dispatch_after, which I quote in part here:
static inline void
_dispatch_after(dispatch_time_t when, dispatch_queue_t dq,
        void *ctxt, void *handler, bool block)
{
    dispatch_timer_source_refs_t dt;
    dispatch_source_t ds;
    uint64_t leeway, delta;

    // snip

    delta = _dispatch_timeout(when);
    if (delta == 0) {
        if (block) {
            return dispatch_async(dq, handler);
        }
        return dispatch_async_f(dq, ctxt, handler);
    }
    leeway = delta / 10; // <rdar://problem/13447496>

    if (leeway < NSEC_PER_MSEC) leeway = NSEC_PER_MSEC;
    if (leeway > 60 * NSEC_PER_SEC) leeway = 60 * NSEC_PER_SEC;

    // snip

    dispatch_clock_t clock;
    uint64_t target;
    _dispatch_time_to_clock_and_value(when, &clock, &target);
    if (clock != DISPATCH_CLOCK_WALL) {
        leeway = _dispatch_time_nano2mach(leeway);
    }
    dt->du_timer_flags |= _dispatch_timer_flags_from_clock(clock);

    dt->dt_timer.target = target;
    dt->dt_timer.interval = UINT64_MAX;
    dt->dt_timer.deadline = target + leeway;
    dispatch_activate(ds);
}
The _dispatch_timeout function can be found in time.c. Suffice to say it returns the number of nanoseconds between the current time and the time passed to it. It determines the “current time” based on the clock of the time passed to it.
So _dispatch_after gets the number of nanoseconds to wait before executing your block. Then it computes leeway as one tenth of that duration. When it sets the timer's deadline, it adds leeway to the deadline you passed in.
In your case, delta is about 60 seconds (= 60 × 10⁹ nanoseconds), so leeway is about 6 seconds. Hence your block is executed about 66 seconds after you call asyncAfter.
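If that implicit leeway matters to you, a DispatchSourceTimer lets you specify the leeway yourself instead of accepting the delta / 10 that dispatch_after computes. A minimal sketch of that alternative (mine, not from the GCD sources):
import Dispatch
import Foundation

// One-shot timer with strict scheduling and an explicit 1 ms leeway.
let timer = DispatchSource.makeTimerSource(flags: .strict, queue: .main)
timer.schedule(deadline: .now() + 60, leeway: .milliseconds(1))
timer.setEventHandler {
    print("strict timer fired", Date())
    timer.cancel()
}
timer.resume()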
This question has been here for quite a while without any answers, so I'd like to give it a try and point out a subtle difference I noticed in practice.
DispatchTime should pause, whereas DispatchWallTime should keep going
because clocks in the real world keep going
You are correct here; at least that's how they are supposed to behave. However, it turns out to be really tricky to verify that DispatchTime works as expected. When an iOS app is running under an Xcode session it has unlimited background time and isn't suspended, and I couldn't get the app suspended by running it without Xcode attached either, so it's still an open question under which conditions DispatchTime actually pauses. The main thing to note is that DispatchTime doesn't depend on the system clock.
DispatchWallTime works pretty much the same (it isn't suspended either), except that it does depend on the system clock. To see the difference, try a somewhat longer timer, say 5 minutes. Then go to the system settings and set the time 1 hour forward. If you now reopen the application, you'll notice that the wallDeadline timer expires immediately, whereas the deadline timer keeps waiting out its interval.

Swift stopwatch running slower than expected [duplicate]

I am trying to implement a stopwatch into my app, but I've noticed that it actually runs slower than it should. Here is the code:
timer = Timer.scheduledTimer(timeInterval: 0.01, target: self, selector: #selector(display), userInfo: nil, repeats: true)

func stringFromTimeInterval(interval: TimeInterval) -> NSString {
    let ti = Int(interval)
    let minutes = ti / 6000
    let seconds = ti / 100
    let ms = ti % 100
    return NSString(format: "%0.2d:%0.2d.%0.2d", minutes, seconds, ms)
}

@objc func display() {
    interval += 1
    lapInterval += 1
    timeLabel.text = stringFromTimeInterval(interval: TimeInterval(interval)) as String
    lapLabel.text = stringFromTimeInterval(interval: TimeInterval(lapInterval)) as String
}
Hopefully I've included enough information. Thanks in advance!
Don't try to run a timer that fires every hundredth of a second and then count the number of times it fires. Timers are not exact. Their resolution is more like 50-100 milliseconds (0.05 to 0.1 seconds), and since a timer fires on the main thread, it depends on your code servicing the event loop frequently. If you get into a time-consuming block of code and don't return, the timer doesn't fire.
Plus, the screen refresh on iOS is every 1/60th of a second. There's no point in running a timer more often than that, since you won't be able to display changes any faster.
Run a timer more like every 1/30 of a second, and calculate the elapsed time each time it fires as described below:
To calculate elapsed time, record the time (With Date() or Date().timeIntervalSinceReferenceDate) when you want to begin timing, and then every time you want to update, calculate 'new_date - old_date'. The result will be the number of seconds that have elapsed, and that will be exact and with Double precision.
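A minimal sketch of that approach (assuming the timeLabel from your code; the 1/30 s interval is only how often the label is refreshed, not how the time is measured):
import UIKit

// The displayed time comes from the clock, not from counting timer fires,
// so a late or skipped fire cannot make the stopwatch drift.
let startDate = Date()

let displayTimer = Timer.scheduledTimer(withTimeInterval: 1.0 / 30.0, repeats: true) { _ in
    let elapsed = Date().timeIntervalSince(startDate)
    let minutes = Int(elapsed) / 60
    let seconds = Int(elapsed) % 60
    let hundredths = Int(elapsed * 100) % 100
    timeLabel.text = String(format: "%02d:%02d.%02d", minutes, seconds, hundredths) // timeLabel from your existing code
}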

Why is decreasing interval not speeding up iOS timer execution?

When I run this timer code for 60 seconds duration/1 sec interval or 6 seconds/.1 sec interval it works as expected (completing 10X faster). However, decreasing the values to 0.6 seconds/.01 seconds doesn't speed up the overall operation as expected (having it complete another 10X faster).
When I set this value to less than 0.1 it doesn't work as expected:
// The interval to use
let interval: NSTimeInterval = 0.01 // 1.0 and 0.1 work fine, 0.01 does not
The rest of the relevant code (full playground here: donut builder gist):
// Extend NSTimeInterval to provide the conversion functions.
extension NSTimeInterval {
    var nSecMultiplier: Double {
        return Double(NSEC_PER_SEC)
    }
    public func nSecs() -> Int64 {
        return Int64(self * nSecMultiplier)
    }
    public func nSecs() -> UInt64 {
        return UInt64(self * nSecMultiplier)
    }
    public func dispatchTime() -> dispatch_time_t {
        // Since the last parameter takes an Int64, the version that returns an Int64 is used.
        return dispatch_time(DISPATCH_TIME_NOW, self.nSecs())
    }
}
// Define a simple function for getting a timer dispatch source.
func repeatingTimerWithInterval(interval: NSTimeInterval, leeway: NSTimeInterval, action: dispatch_block_t) -> dispatch_source_t {
    let timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue())
    guard timer != nil else { fatalError() }
    dispatch_source_set_event_handler(timer, action)
    // This function takes the UInt64 for the last two parameters
    dispatch_source_set_timer(timer, DISPATCH_TIME_NOW, interval.nSecs(), leeway.nSecs())
    dispatch_resume(timer)
    return timer
}
// Create the timer
let timer = repeatingTimerWithInterval(interval, leeway: 0.0) { () -> Void in
    drawDonut()
}

// Turn off the timer after a few seconds
dispatch_after((interval * 60).dispatchTime(), dispatch_get_main_queue()) { () -> Void in
    dispatch_source_cancel(timer)
    XCPlaygroundPage.currentPage.finishExecution()
}
The interval you set for a timer is not guaranteed. It is simply a target. The system periodically checks active timers and compares their target fire time to the current time and if the fire time has passed, it fires the timer. But there is no guarantee as to how rapidly the system is checking the timer. So the shorter the target interval and the more other work a thread is doing, the less accuracy a timer will have. From Apple's documentation:
A timer is not a real-time mechanism; it fires only when one of the run loop modes to which the timer has been added is running and able to check if the timer’s firing time has passed. Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds. If a timer’s firing time occurs during a long callout or while the run loop is in a mode that is not monitoring the timer, the timer does not fire until the next time the run loop checks the timer. Therefore, the actual time at which the timer fires potentially can be a significant period of time after the scheduled firing time.
This does indeed appear to be a playground limitation. I'm able to achieve an interval of 0.01 seconds when testing on an actual iOS device.
Although I was wrong in my initial answer about the limitation of the run loop speed – GCD is apparently able to work some magic behind the scenes in order to allow multiple dispatch sources to be fired per run loop iteration.
However, that being said, you should still consider that the fastest an iOS device's screen can refresh is 60 times a second, or once every 0.0167 seconds.
Therefore it simply makes no sense to be doing drawing updates any faster than that. You should consider using a CADisplayLink in order to synchronise drawing with the screen refresh rate – and adjusting your drawing progress instead of timer frequency in order to control the speed of progress.
A fairly rudimentary setup could look like this:
var displayLink: CADisplayLink?
var deltaTime: CFTimeInterval = 0
let timerDuration: CFTimeInterval = 5

func startDrawing() {
    displayLink?.invalidate()
    deltaTime = 0
    displayLink = CADisplayLink(target: self, selector: #selector(doDrawingUpdate))
    displayLink?.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSRunLoopCommonModes)
}

func doDrawingUpdate() {
    if deltaTime >= timerDuration {
        deltaTime = timerDuration
        displayLink?.invalidate()
        displayLink = nil
    }
    draw(CGFloat(deltaTime / timerDuration))
    deltaTime += displayLink?.duration ?? 0
}

func draw(progress: CGFloat) {
    // do drawing
}
That way you can ensure that you're drawing at the maximum frame-rate available, and your drawing progress won't be affected if the device is under strain and the run loop is therefore running slower.

Precise time of audio queue playback finish

I am using Audio Queues to play back audio files. I need precise timing on the finish of the last buffer.
I need to notify a function no later than 150-200 ms after the last buffer is played...
Through the callback method I know how many buffers are enqueued.
I know the buffer size, and I know how many bytes the last buffer is filled with.
First I initialize a number of buffers and fill them with audio data, then enqueue them. When the Audio Queue needs a buffer to be filled it calls the callback and I fill the buffer with data.
When there is no more audio data available Audio Queue sends me the last empty buffer, so I fill it with whatever data I have:
if (sharedCache.numberOfToTalPackets > 0)
{
    if (currentlyReadingBufferIndex == [sharedCache.baseAudioCache count] - 1) {
        inBuffer->mAudioDataByteSize = (UInt32)bytesFilled;
        lastEnqueudBufferSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(inAQ, inBuffer, (UInt32)packetsFilled, packetDescs);
        if (err) {
            [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_ENQUEUE_FAILED];
        }
        printf("if that was the last free packet description, then enqueue the buffer\n");
        // go to the next item on keepbuffer array
        isBufferFilled = YES;
        [self incrementBufferUsedCount];
        return;
    }
}
When the Audio Queue asks for more data via the callback and I have no more data, I start counting down the buffers. When the buffer count reaches zero, meaning only one buffer is left in flight to be played, I try to stop the audio queue the moment playback is done.
-(void)decrementBufferUsedCount
{
    if (buffersUsed > 0) {
        buffersUsed--;
        printf("buffer on the queue %i\n", buffersUsed);
        if (buffersUsed == 0) {
            NSLog(@"playback is finished\n");
            // end playback
            isPlayBackDone = YES;
            double sampleRate = dataFormat.mSampleRate;
            double bufferDuration = lastEnqueudBufferSize / sampleRate;
            double estimatedTimeNeded = bufferDuration * 1;
            [self performSelector:@selector(stopPlayer) withObject:nil afterDelay:estimatedTimeNeded];
        }
    }
}
-(void)stopPlayer
{
    @synchronized(self)
    {
        state = AP_STOPPING;
    }
    err = AudioQueueStop(queue, TRUE);
    if (err) {
        [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_STOP_FAILED];
    }
    else
    {
        @synchronized(self)
        {
            state = AP_STOPPED;
            NSLog(@"Stopped\n");
        }
    }
}
However, I can't seem to get precise timing here; the code above stops the player early.
If I do the following, the audio cuts out early too:
double bufferDuration = XMAQDefaultBufSize / sampleRate;
double estimatedTimeNeded = bufferDuration * 1;
If I increase the multiplier from 1 to 2 (since the buffer size is big) I get some delay; 1.5 seems to be the optimum value for now, but I don't understand why lastEnqueudBufferSize / sampleRate is not working.
Details of the audio file and buffers:
The audio file has a 22050 Hz sample rate.
#define kNumberPlaybackBuffers 4
#define kAQDefaultBufSize 16384
It is a VBR file format with no bitrate information available.
EDIT:
I found an easier way that gets the same results (+/- 10 ms). After you set up your output queue with AudioQueueNewOutput(), you initialize an AudioQueueTimelineRef to be used in your output callback (the ticksToSeconds function is included below in my first method). Don't forget to #import <mach/mach_time.h>.
// After AudioQueueNewOutput()
AudioQueueTimelineRef timeLine; // ivar
AudioQueueCreateTimeline(queue, &timeLine);
Then in your output callback you call AudioQueueGetCurrentTime(). Caveat: queue must be playing for valid timestamps. So for very short files you might need to use the AudioQueueProcessingTap method below.
AudioTimeStamp timestamp;
AudioQueueGetCurrentTime(queue, self->timeLine, &timestamp, NULL);
The timestamp ties together the current sample playing with the current machine time. With that info we can get an exact machine time in the future when our last sample will be played.
Float64 samplesLeft = self->frameCount - timestamp.mSampleTime;//samples in file - current sample
Float64 secondsLeft = samplesLeft / self->sampleRate; //seconds of audio to play
UInt64 ticksLeft = secondsLeft / ticksToSeconds(); //seconds converted to machine ticks
UInt64 machTimeFinish = timestamp.mHostTime + ticksLeft; //machine time of first sample + ticks left
Now that we have this future machine time we can use it to time whatever it is that you want to do with some accuracy.
UInt64 currentMachTime = mach_absolute_time();
UInt64 ticksFromNow = machTimeFinish - currentMachTime;
float secondsFromNow = ticksFromNow * ticksToSeconds();
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    // do the thing!!!
    printf("Giggety");
});
If GCD's dispatch_after isn't accurate enough, there are ways to set up a precision timer.
Using AudioQueueProcessingTap
You can get fairly low response time from an AudioQueueProcessingTap. First you make your callback, which essentially puts itself in between the audio stream. The MyObject type is just whatever self is in your code (this is ARC bridging here to get self inside the function). Inspecting ioFlags tells you when the stream starts and finishes. The ioTimeStamp of an output callback describes the time at which the first sample in the callback will hit the speaker. So if you want to be exact, here's how you do it. I added some convenience functions for converting machine time to seconds.
#import <mach/mach_time.h>

double getTimeConversion(){
    double timecon;
    mach_timebase_info_data_t tinfo;
    kern_return_t kerror;
    kerror = mach_timebase_info(&tinfo);
    timecon = (double)tinfo.numer / (double)tinfo.denom;
    return timecon;
}

double ticksToSeconds(){
    static double ticksToSeconds = 0;
    if (!ticksToSeconds) {
        ticksToSeconds = getTimeConversion() * 0.000000001;
    }
    return ticksToSeconds;
}
void processingTapCallback(
        void *                     inClientData,
        AudioQueueProcessingTapRef inAQTap,
        UInt32                     inNumberFrames,
        AudioTimeStamp *           ioTimeStamp,
        UInt32 *                   ioFlags,
        UInt32 *                   outNumberFrames,
        AudioBufferList *          ioData){

    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_EndOfStream) {
        Float64 sampTime;
        UInt32 frameCount;
        AudioQueueProcessingTapGetQueueTime(inAQTap, &sampTime, &frameCount);
        Float64 samplesInThisCallback = self->frameCount - sampTime; // file sampleCount - queue current sample
        //double secondsInCallback = outNumberFrames / (double)self->sampleRate; outNumberFrames was inaccurate
        double secondsInCallback = samplesInThisCallback / (double)self->sampleRate;
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (secondsInCallback / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}
-(void)lastSampleDoneAt:(uint64_t)lastSampTime{
    uint64_t currentTime = mach_absolute_time();
    if (lastSampTime > currentTime) {
        double secondsFromNow = (lastSampTime - currentTime) * ticksToSeconds();
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            // do the thing!!!
        });
    }
    else{
        // do the thing!!!
    }
}
You set it up like this, after AudioQueueNewOutput and before AudioQueueStart. Notice the passing of bridged self to the inClientData argument: the queue holds self as a void * so that we can bridge it back to an Objective-C object inside the callback.
AudioStreamBasicDescription format;
AudioQueueProcessingTapRef tapRef;
UInt32 maxFrames = 0;
AudioQueueProcessingTapNew(queue, processingTapCallback, (__bridge void *)self, kAudioQueueProcessingTap_PostEffects, &maxFrames, &format, &tapRef);
You could get the end machine time as soon as the file starts too. A little cleaner too.
void processingTapCallback(
        void *                     inClientData,
        AudioQueueProcessingTapRef inAQTap,
        UInt32                     inNumberFrames,
        AudioTimeStamp *           ioTimeStamp,
        UInt32 *                   ioFlags,
        UInt32 *                   outNumberFrames,
        AudioBufferList *          ioData){

    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_StartOfStream) {
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (self->audioDurSeconds / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}
If you use AudioQueueStop in asynchronous mode, then stopping happens after all queued buffers have been played or recorded. See doc.
You're using it in a synchronous mode, where stopping happens ASAP, and playback cuts out immediately, without regard for previously buffered audio data. You want precise timing, but only because audio is cutting off. Right? So rather than go synchronous + add additional timing/callback code, I recommend going asynchronous:
err=AudioQueueStop(queue, FALSE);
From docs:
If you pass false, the function returns immediately, but the audio queue does not stop until its queued buffers are played or recorded (that is, the stop occurs asynchronously). Audio queue callbacks are invoked as necessary until the queue actually stops.
For me this worked really well for what I needed:
Stop the queue in the callback when the data is over, using AudioQueueStop(queue, FALSE), while:
listening for the actual stop via the kAudioQueueProperty_IsRunning property (this happens later than the AudioQueueStop() call, namely when the last buffer actually gets rendered);
after stopping the queue you can prepare whatever you need to execute when the audio ends, and when the listener fires, actually execute that action.
I am not sure about the time precision of that event, but for my task it behaved definitely better than using a notification straight from the callback. There is buffering inside the AudioQueue and in the output device itself, so the IsRunning listener definitely gives better results as to when the AudioQueue stops playing.
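As a rough Swift sketch of that setup (my own illustration, not from the answer; it assumes queue is your AudioQueueRef and that you stop it with AudioQueueStop(queue, false)):
import AudioToolbox

// Fires when kAudioQueueProperty_IsRunning changes; isRunning becomes 0 only
// after the last enqueued buffer has actually been rendered.
let listener: AudioQueuePropertyListenerProc = { _, aq, _ in
    var isRunning: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    AudioQueueGetProperty(aq, kAudioQueueProperty_IsRunning, &isRunning, &size)
    if isRunning == 0 {
        // The queue has really stopped; run your playback-finished action here.
    }
}
AudioQueueAddPropertyListener(queue, kAudioQueueProperty_IsRunning, listener, nil)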

Synchronize two clients temporally over Multipeer Framework

I've been working at this problem for a few days and none of my solutions have been adequate. I'm lacking the theoretical knowledge to make this happen, I think, and would love some advice (does not have to be iOS specific--I can translate C, pseudocode, whatever, into what I need).
Basically, I have two iPhones. Either one can trigger a repeating action when the user presses a button. It then needs to notify the other iPhone (via the MultiPeer framework) to trigger the same action...but they both need to start at the same instant and stay in step. I really need to get 1/100sec accuracy, which I think is achievable on this platform.
As a semi-rough gauge of how well in synch I am, I use AudioServices to play a "tick" sound on each device...you can very easily tell by ear how well in synch they are (ideally you would not be able to discern multiple sound sources).
Of course, I have to account for the MultiPeer latency somehow...and it's highly variable, anywhere from .1 sec to .8 sec in my testing.
Having found that the system clock is totally unreliable for my purposes, I found an iOS implementation of NTP and am using that. So I'm reasonably confident that the two phones have an accurate common reference for time (though I haven't figured out a way to test this assumption short of continuously displaying NTP time on both devices, which I do, and it seems nicely in synch to my eye).
What I was trying before was sending the "start time" with the P2P message, then (on the recipient end) subtracting that latency from a 1.5sec constant, and performing the action after that delay. On the sender end, I would simply wait for that constant to elapse and then perform the action. This didn't work at all. I was way off.
My next attempt was to wait, on both ends, for a whole second divisible by three. Since latency always seems to be under 1 second, I thought this would work. I use the "delay" method to simply block the thread. It's a cudgel, I know, but I just want to get the timing working, period, before I worry about a more elegant solution. So, my "sender" (the device where the button is pressed) does this:
-(void)startActionAsSender
{
    [self notifyPeerToStartAction];
    [self delay];
    [self startAction];
}
And the recipient does this, in response to a delegate call:
-(void)peerDidStartAction
{
    [self delay];
    [self startAction];
}
My "delay" method looks like this:
-(void)delay
{
    NSDate *NTPTimeNow = [[NetworkClock sharedInstance] networkTime];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSDateComponents *components = [calendar components:NSSecondCalendarUnit
                                                fromDate:NTPTimeNow];
    NSInteger seconds = [components second];
    // If this method gets called on a second divisible by three, wait a second...
    if (seconds % 3 == 0) {
        sleep(1);
    }
    // Spinlock
    while (![self secondsDivideByThree]) {}
}

-(BOOL)secondsDivideByThree
{
    NSDate *NTPTime = [[NetworkClock sharedInstance] networkTime];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSInteger seconds = [[calendar components:NSSecondCalendarUnit fromDate:NTPTime] second];
    return (seconds % 3 == 0);
}
This is old, so I hope you were able to get something working. I faced a very similar problem. In my case, I found that the inconsistency was almost entirely due to timer coalescing, which causes timers to be wrong by up to 10% on iOS devices in order to save battery usage.
For reference, here's a solution that I've been using in my own app. First, I use a simple custom protocol that's essentially a rudimentary NTP equivalent to synchronize a monotonically increasing clock between the two devices over the local network. I call this synchronized time "DTime" in the code below. With this code I'm able to tell all peers "perform action X at time Y", and it happens in sync.
+ (DTimeVal)getCurrentDTime
{
    DTimeVal baseTime = mach_absolute_time();
    // Convert from ticks to nanoseconds:
    static mach_timebase_info_data_t s_timebase_info;
    if (s_timebase_info.denom == 0) {
        mach_timebase_info(&s_timebase_info);
    }
    DTimeVal timeNanoSeconds = (baseTime * s_timebase_info.numer) / s_timebase_info.denom;
    return timeNanoSeconds + localDTimeOffset;
}

+ (void)atExactDTime:(DTimeVal)val runBlock:(dispatch_block_t)block
{
    // Use the most accurate timing possible to trigger an event at the specified DTime.
    // This is much more accurate than dispatch_after(...), which has a 10% "leeway" by default.
    // However, this method will use battery faster as it avoids most timer coalescing.
    // Use as little as necessary.
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, DISPATCH_TIMER_STRICT, dispatch_get_main_queue());
    dispatch_source_set_event_handler(timer, ^{
        dispatch_source_cancel(timer); // one shot timer
        while (val - [self getCurrentDTime] > 1000) {
            // It is at least 1 microsecond too early...
            [NSThread sleepForTimeInterval:0.000001]; // Change this to zero for even better accuracy
        }
        block();
    });
    // Now, we employ a dirty trick:
    // Since even with DISPATCH_TIMER_STRICT there can be about 1ms of inaccuracy, we set the timer to
    // fire 1.3ms too early, then we use an until(time) { sleep(); } loop to delay until the exact time
    // that we wanted. This takes us from an accuracy of ~1ms to an accuracy of ~0.01ms, i.e. two orders
    // of magnitude improvement. However, of course the downside is that this will block the main thread
    // for 1.3ms.
    dispatch_time_t at_time = dispatch_time(DISPATCH_TIME_NOW, val - [self getCurrentDTime] - 1300000);
    dispatch_source_set_timer(timer, at_time, DISPATCH_TIME_FOREVER /*one shot*/, 0 /* minimal leeway */);
    dispatch_resume(timer);
}
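As a usage sketch (my own, with loudly labeled assumptions: the two class methods above are exposed to Swift on a class hypothetically named DTimeSync, DTimeVal is a nanosecond integer, and the DTime offset has already been synchronized between the peers), the initiating device picks a start time a safe margin in the future, sends it to the peer, and both devices schedule the same instant:
// Pick a start time 1 s from now, comfortably past the worst-case Multipeer latency.
let startTime = DTimeSync.getCurrentDTime() + 1_000_000_000

// 1. Send startTime to the peer over the Multipeer session (not shown here).
// 2. Both devices then schedule the same instant:
DTimeSync.atExactDTime(startTime) {
    // perform the action, e.g. play the tick sound
}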
