Synchronize two clients temporally over the Multipeer Connectivity framework - iOS

I've been working at this problem for a few days and none of my solutions have been adequate. I'm lacking the theoretical knowledge to make this happen, I think, and would love some advice (does not have to be iOS specific--I can translate C, pseudocode, whatever, into what I need).
Basically, I have two iPhones. Either one can trigger a repeating action when the user presses a button. It then needs to notify the other iPhone (via the Multipeer Connectivity framework) to trigger the same action...but they both need to start at the same instant and stay in step. I really need to get 1/100 sec accuracy, which I think is achievable on this platform.
As a semi-rough gauge of how well in sync I am, I use AudioServices to play a "tick" sound on each device...you can very easily tell by ear how well in sync they are (ideally you would not be able to discern multiple sound sources).
Of course, I have to account for the Multipeer latency somehow...and it's highly variable, anywhere from 0.1 sec to 0.8 sec in my testing.
Having found that the system clock is totally unreliable for my purposes, I found an iOS implementation of NTP and am using that. So I'm reasonably confident that the two phones have an accurate common reference for time (though I haven't figured out a way to test this assumption short of continuously displaying NTP time on both devices, which I do, and it seems nicely in sync to my eye).
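One hedged way to spot-check that shared reference without eyeballing two screens: periodically log each device's NTP-to-system-clock offset (a sketch against the question's own NetworkClock singleton) and compare how the two logs evolve; a stable NTP fix shows a smoothly drifting offset rather than sudden jumps:

NSDate *ntpNow = [[NetworkClock sharedInstance] networkTime];
NSTimeInterval ntpMinusSystem = [ntpNow timeIntervalSinceDate:[NSDate date]];
NSLog(@"NTP-to-system-clock offset: %.4f s", ntpMinusSystem);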
What I was trying before was sending the "start time" with the P2P message, then (on the recipient end) subtracting the measured latency from a 1.5 sec constant and performing the action after that delay. On the sender end, I would simply wait for the constant to elapse and then perform the action. This didn't work at all; I was way off.
My next attempt was to wait, on both ends, for the next whole second divisible by three. Since latency always seems to be <1 sec, I thought this would work. I use the "delay" method to simply block the thread. It's a cudgel, I know, but I just want to get the timing working, period, before I worry about a more elegant solution. So, my "sender" (the device where the button is pressed) does this:
-(void)startActionAsSender
{
    [self notifyPeerToStartAction];
    [self delay];
    [self startAction];
}
And the recipient does this, in response to a delegate call:
-(void)peerDidStartAction
{
    [self delay];
    [self startAction];
}
My "delay" method looks like this:
-(void)delay
{
    NSDate *NTPTimeNow = [[NetworkClock sharedInstance] networkTime];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSDateComponents *components = [calendar components:NSSecondCalendarUnit
                                               fromDate:NTPTimeNow];
    NSInteger seconds = [components second];
    // If this method gets called on a second divisible by three, wait a second...
    if (seconds % 3 == 0) {
        sleep(1);
    }
    // Spinlock
    while (![self secondsDivideByThree]) {}
}
-(BOOL)secondsDivideByThree
{
    NSDate *NTPTime = [[NetworkClock sharedInstance] networkTime];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSInteger seconds = [[calendar components:NSSecondCalendarUnit fromDate:NTPTime] second];
    return (seconds % 3 == 0);
}
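For reference, a tighter variant of the delay above (a sketch against the same NetworkClock singleton): instead of spinning on the whole-second component, compute the sub-second distance to the next 3 s boundary and sleep exactly that long. As the answer below shows, sleepForTimeInterval can still wake late, so this bounds the error rather than eliminating it:

-(void)delayUntilNextMultipleOfThree
{
    NSTimeInterval now = [[[NetworkClock sharedInstance] networkTime] timeIntervalSince1970];
    NSTimeInterval next = ceil(now / 3.0) * 3.0;  // next whole second divisible by 3
    [NSThread sleepForTimeInterval:(next - now)]; // sleep off the exact remainder
}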

This is old, so I hope you were able to get something working. I faced a very similar problem. In my case, I found that the inconsistency was almost entirely due to timer coalescing, which causes timers to be wrong by up to 10% on iOS devices in order to reduce battery usage.
For reference, here's a solution that I've been using in my own app. First, I use a simple custom protocol that's essentially a rudimentary NTP equivalent to synchronize a monotonically increasing clock between the two devices over the local network. I call this synchronized time "DTime" in the code below. With this code I'm able to tell all peers "perform action X at time Y", and it happens in sync.
+ (DTimeVal)getCurrentDTime
{
    DTimeVal baseTime = mach_absolute_time();
    // Convert from ticks to nanoseconds:
    static mach_timebase_info_data_t s_timebase_info;
    if (s_timebase_info.denom == 0) {
        mach_timebase_info(&s_timebase_info);
    }
    DTimeVal timeNanoSeconds = (baseTime * s_timebase_info.numer) / s_timebase_info.denom;
    // localDTimeOffset is the clock offset negotiated with the peer by the sync protocol.
    return timeNanoSeconds + localDTimeOffset;
}
+ (void)atExactDTime:(DTimeVal)val runBlock:(dispatch_block_t)block
{
    // Use the most accurate timing possible to trigger an event at the specified DTime.
    // This is much more accurate than dispatch_after(...), which has a 10% "leeway" by default.
    // However, this method will use battery faster as it avoids most timer coalescing.
    // Use as little as necessary.
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0,
                                                     DISPATCH_TIMER_STRICT, dispatch_get_main_queue());
    dispatch_source_set_event_handler(timer, ^{
        dispatch_source_cancel(timer); // one-shot timer
        while (val - [self getCurrentDTime] > 1000) {
            // It is at least 1 microsecond too early...
            [NSThread sleepForTimeInterval:0.000001]; // Change this to zero for even better accuracy
        }
        block();
    });
    // Now, we employ a dirty trick:
    // Since even with DISPATCH_TIMER_STRICT there can be about 1 ms of inaccuracy, we set the timer
    // to fire 1.3 ms too early, then we use an until(time) { sleep(); } loop to delay until the exact
    // time that we wanted. This takes us from an accuracy of ~1 ms to an accuracy of ~0.01 ms, i.e.
    // two orders of magnitude improvement. However, of course the downside is that this will block
    // the main thread for 1.3 ms.
    dispatch_time_t at_time = dispatch_time(DISPATCH_TIME_NOW, val - [self getCurrentDTime] - 1300000);
    dispatch_source_set_timer(timer, at_time, DISPATCH_TIME_FOREVER /* one shot */, 0 /* minimal leeway */);
    dispatch_resume(timer);
}
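For context, the answer's "rudimentary NTP equivalent" isn't shown; here is a hedged sketch of how localDTimeOffset might be established, plus a usage example. DTimeSync stands in for whatever class hosts these methods, and the handshake names are hypothetical, not from the original answer:

// One peer acts as the time master. The other sends its DTime t0 in a sync
// request; the master replies with its own DTime tM; the requester receives
// the reply at t1. Classic NTP estimate: offset = tM + RTT/2 - t1, RTT = t1 - t0.
+ (void)handleSyncReplyWithMasterTime:(DTimeVal)tM requestSentAt:(DTimeVal)t0
{
    DTimeVal t1 = [self getCurrentDTime];
    DTimeVal rtt = t1 - t0;
    // Repeat a few times and keep the sample with the smallest RTT for a tighter bound.
    localDTimeOffset += (int64_t)(tM + rtt / 2) - (int64_t)t1;
}

Once the clocks agree, "perform action X at time Y" looks like this on both devices:

DTimeVal startAt = [DTimeSync getCurrentDTime] + 1000000000ull; // 1 s from now, in ns
// ...send startAt to the peer over the MCSession, then on each device:
[DTimeSync atExactDTime:startAt runBlock:^{
    [self startAction];
}];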

Related

Why is decreasing interval not speeding up iOS timer execution?

When I run this timer code for a 60 second duration with a 1 sec interval, or 6 seconds with a 0.1 sec interval, it works as expected (completing 10X faster). However, decreasing the values to 0.6 seconds with a 0.01 sec interval doesn't speed up the overall operation as expected (it doesn't complete another 10X faster).
When I set this value to less than 0.1 it doesn't work as expected:
// The interval to use
let interval: NSTimeInterval = 0.01 // 1.0 and 0.1 work fine, 0.01 does not
The rest of the relevant code (full playground here: donut builder gist):
// Extend NSTimeInterval to provide the conversion functions.
extension NSTimeInterval {
    var nSecMultiplier: Double {
        return Double(NSEC_PER_SEC)
    }
    public func nSecs() -> Int64 {
        return Int64(self * nSecMultiplier)
    }
    public func nSecs() -> UInt64 {
        return UInt64(self * nSecMultiplier)
    }
    public func dispatchTime() -> dispatch_time_t {
        // Since the last parameter takes an Int64, the version that returns an Int64 is used.
        return dispatch_time(DISPATCH_TIME_NOW, self.nSecs())
    }
}
// Define a simple function for getting a timer dispatch source.
func repeatingTimerWithInterval(interval: NSTimeInterval, leeway: NSTimeInterval, action: dispatch_block_t) -> dispatch_source_t {
    let timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue())
    guard timer != nil else { fatalError() }
    dispatch_source_set_event_handler(timer, action)
    // This function takes UInt64 for the last two parameters
    dispatch_source_set_timer(timer, DISPATCH_TIME_NOW, interval.nSecs(), leeway.nSecs())
    dispatch_resume(timer)
    return timer
}
// Create the timer
let timer = repeatingTimerWithInterval(interval, leeway: 0.0) { () -> Void in
    drawDonut()
}
// Turn off the timer after a few seconds
dispatch_after((interval * 60).dispatchTime(), dispatch_get_main_queue()) { () -> Void in
    dispatch_source_cancel(timer)
    XCPlaygroundPage.currentPage.finishExecution()
}
The interval you set for a timer is not guaranteed. It is simply a target. The system periodically checks active timers and compares their target fire time to the current time and if the fire time has passed, it fires the timer. But there is no guarantee as to how rapidly the system is checking the timer. So the shorter the target interval and the more other work a thread is doing, the less accuracy a timer will have. From Apple's documentation:
A timer is not a real-time mechanism; it fires only when one of the run loop modes to which the timer has been added is running and able to check if the timer's firing time has passed. Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds. If a timer's firing time occurs during a long callout or while the run loop is in a mode that is not monitoring the timer, the timer does not fire until the next time the run loop checks the timer. Therefore, the actual time at which the timer fires potentially can be a significant period of time after the scheduled firing time.
This does indeed appear to be a playground limitation. I'm able to achieve an interval of 0.01 seconds when testing on an actual iOS device.
I was wrong in my initial answer about the run loop speed being the limitation, though: GCD is apparently able to work some magic behind the scenes to allow multiple dispatch sources to be fired per run loop iteration.
However, that being said, you should still consider that the fastest an iOS device's screen can refresh is 60 times a second, or once every 0.0167 seconds.
Therefore it simply makes no sense to be doing drawing updates any faster than that. You should consider using a CADisplayLink in order to synchronise drawing with the screen refresh rate – and adjusting your drawing progress instead of timer frequency in order to control the speed of progress.
A fairly rudimentary setup could look like this:
var displayLink: CADisplayLink?
var deltaTime: CFTimeInterval = 0
let timerDuration: CFTimeInterval = 5

func startDrawing() {
    displayLink?.invalidate()
    deltaTime = 0
    displayLink = CADisplayLink(target: self, selector: #selector(doDrawingUpdate))
    displayLink?.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSRunLoopCommonModes)
}

func doDrawingUpdate() {
    if deltaTime >= timerDuration {
        deltaTime = timerDuration
        displayLink?.invalidate()
        displayLink = nil
    }
    draw(CGFloat(deltaTime / timerDuration))
    deltaTime += displayLink?.duration ?? 0
}

func draw(progress: CGFloat) {
    // do drawing
}
That way you can ensure that you're drawing at the maximum frame-rate available, and your drawing progress won't be affected if the device is under strain and the run loop is therefore running slower.

Strange behaviour with setCurrentPlaybackTime

I use MPMoviePlayerController to show video.
Below it I put a list of thumbs from the video.
When a thumb is pressed I want to jump to a specific place in the video using setCurrentPlaybackTime.
I also have a timer updating the selected thumb according to the current location in the video, using currentPlaybackTime.
My problem: after calling setCurrentPlaybackTime, the player keeps reporting the seconds from before the seek. It takes a few seconds for the player to reflect the new position. In the meantime the user experience is bad: pressing a thumb shows it selected for a short time, then the timer updates the selection back to the previous thumb, then it jumps back to the thumb I selected.
I tried using (in the timer):
if (moviePlayer.playbackState != MPMoviePlaybackStatePlaying && !(moviePlayer.loadState & MPMovieLoadStatePlaythroughOK)) return;
in order to prevent the timer from updating the selected thumb while the player is in a transition phase between showing the previous thumb and the new thumb, but it doesn't seem to work. The playbackState and loadState values seem to be totally inconsistent and unpredictable.
To solve this issue, here is how I implemented this nasty state coverage in one of my projects. It is nasty and fragile, but it worked well enough for me.
I used two flags and two time intervals:
BOOL seekInProgress_;
BOOL seekRecoveryInProgress_;
NSTimeInterval seekingTowards_;
NSTimeInterval seekingRecoverySince_;
All of the above should be defaulted to NO and 0.0.
When initiating the seek:
//are we supposed to seek?
if (movieController_.currentPlaybackTime != seekToTime)
{ //yes->
    movieController_.currentPlaybackTime = seekToTime;
    seekingTowards_ = seekToTime;
    seekInProgress_ = YES;
}
Within the timer callback:
//are we currently seeking?
if (seekInProgress_)
{ //yes->did the playback-time change since the seeking has been triggered?
    if (seekingTowards_ != movieController_.currentPlaybackTime)
    { //yes->we are now in seek-recovery state
        seekingRecoverySince_ = movieController_.currentPlaybackTime;
        seekRecoveryInProgress_ = YES;
        seekInProgress_ = NO;
        seekingTowards_ = 0.0;
    }
}
//are we currently recovering from seeking?
else if (seekRecoveryInProgress_)
{ //yes->did the playback-time change since the seeking-recovery has been triggered?
    if (seekingRecoverySince_ != movieController_.currentPlaybackTime)
    { //yes->seek recovery done!
        seekRecoveryInProgress_ = NO;
        seekingRecoverySince_ = 0.0;
    }
}
In the end, MPMoviePlayerController simply is not meant for such "micro-management". I had to throw in at least half a dozen flags for state coverage in all kinds of situations, and I would never recommend repeating this in other projects. Once you reach this level, it is probably time to think about using AVPlayer instead.
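For comparison, a sketch of the AVPlayer route suggested above (not code from the original answer): AVPlayer's seekToTime:completionHandler: invokes its block when the seek actually completes, which replaces the flag juggling entirely. The player property, thumbSeconds, and ignoreTimerUpdates flag are hypothetical names:

CMTime target = CMTimeMakeWithSeconds(thumbSeconds, NSEC_PER_SEC);
self.ignoreTimerUpdates = YES; // the thumb-update timer checks this and skips updates
[self.player seekToTime:target completionHandler:^(BOOL finished) {
    if (finished) {
        self.ignoreTimerUpdates = NO; // the seek has landed; resume tracking playback
    }
}];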

CFReadStreamRead blocking forever under iOS 7

I'm seeing an issue wherein CFReadStreamRead, as part of a streamed file upload, never returns.
This seems to happen only on iOS 7, and far more often when debugging against a physical device than in the simulator (or at least, it's far more evident there).
We have an HTTP (or HTTPS, the problem occurs either way, with a locally-hosted or remote server) POST of a file, via straight-line, blocking (non-event-driven) CFNetwork calls. It's a necessity of the C code calling this handler; there's no provision for callbacks.
That's all well and good; the network calls are happening in background threads and/or via async dispatch.
The network code in question boils down to (removing error handling for brevity):
CFReadStreamRef upload = CFReadStreamCreateWithFile(kCFAllocatorDefault, upload_file_url);
CFRelease(upload_file_url);
CFReadStreamOpen(upload);

CFReadStreamRef myReadStream = CFReadStreamCreateForStreamedHTTPRequest(kCFAllocatorDefault, myRequest, upload);
CFReadStreamOpen(myReadStream);

CFIndex numBytesRead = CFReadStreamRead(myReadStream, buf, sizeof(buf));
// etc.
On its own, this code tends to hang immediately under iOS 7. If I add a loop with some calls to usleep before it (checking CFReadStreamHasBytesAvailable along the way), it will almost always succeed. Every few hundred tries, it will still fail and never return. Again, the main thread is unaffected.
I'd hoped the GM would clear up this behavior, but it's still present.
Adding a runloop/callback method to watch for bytes-available events has no effect - when the call hangs, no events are seen, either.
Any suggestions as to why this is happening, or how it can be prevented? Anyone else seeing different CFReadStream behavior under iOS 7?
I've tried a nasty workaround and it works for me. The caveat: I'm requesting delta values from the server, so if something goes wrong I just fetch a fresh delta value; in the general case that recovery won't apply (and in the logs I do see the timeout kick in sometimes). At least this prevents permanent thread blocking and gives a chance to handle the problem somehow:
NSInteger readStreamReadWorkaround(CFReadStreamRef readStream, UInt8 *buffer, CFIndex bufferLength) {
    static dispatch_once_t onceToken;
    static BOOL isProblematicOs = YES;
    dispatch_once(&onceToken, ^{
        // Compare the OS version numerically against 7.0.
        isProblematicOs = [[UIDevice currentDevice].systemVersion compare:@"7.0" options:NSNumericSearch] != NSOrderedAscending;
    });
    NSInteger readBytesCount = -2;
    if (isProblematicOs) {
        CFStreamStatus sStatus = CFReadStreamGetStatus(readStream);
        NSDate *date = [NSDate date];
        while (YES) {
            if (CFReadStreamHasBytesAvailable(readStream)) {
                readBytesCount = CFReadStreamRead(readStream, buffer, bufferLength);
                break;
            }
            sStatus = CFReadStreamGetStatus(readStream);
            if ((sStatus != kCFStreamStatusOpen && sStatus != kCFStreamStatusAtEnd)
                || [date timeIntervalSinceNow] < -15.0) {
                // Stream closed or errored out, or the 15-second timeout elapsed.
                break;
            }
            usleep(50000); // poll every 50 ms
        }
    } else {
        readBytesCount = CFReadStreamRead(readStream, buffer, bufferLength);
    }
    return readBytesCount;
}
I don't like this solution but so far I don't see an alternative.
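For reference, a hypothetical call site for the workaround, matching the question's upload code; the -2 sentinel is the function's initial value, so it signals that the 15-second timeout expired without the stream ever becoming readable:

UInt8 buf[4096];
NSInteger numBytesRead = readStreamReadWorkaround(myReadStream, buf, sizeof(buf));
if (numBytesRead == -2) {
    // Timed out: the stream never became readable. Retry or abort the upload.
} else if (numBytesRead < 0) {
    // CFReadStreamRead reported a stream error.
}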

"Precise" sample timing in cycles in ios

I'm periodically sending data to an RC helicopter; at the moment I am using an NSTimer with a 30 ms interval to do this. Due to the imprecision of NSTimer and the non-real-time nature of iOS, I'm getting quite big discrepancies in the sample times (5 packets of data in the first 30 ms, then nothing for 4 cycles, and such). Is there a more precise way to handle data sampling and the timing of functions in iOS?
You could do what games usually do to fine-tune FPS (and also keep logic and painting separate so it works at any FPS). Something like this, called on a background thread:
- (void)updateLoop
{
    NSTimeInterval last = CFAbsoluteTimeGetCurrent();
    NSTimeInterval elapsed = 0;
    while (self.running) {
        NSTimeInterval now = CFAbsoluteTimeGetCurrent();
        NSTimeInterval dt = now - last;
        elapsed += dt;
        if (elapsed >= YOUR_INTERVAL) {
            // Do stuff
            elapsed = 0;
        }
        last = now;
        [NSThread sleepForTimeInterval:0.001]; // There is no NSThread yield
    }
}
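For completeness, a minimal sketch of kicking this loop off the main thread, as suggested above; it assumes self.running is an atomic BOOL property that you set to NO to stop the loop:

self.running = YES;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    [self updateLoop]; // runs until self.running is set to NO
});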

iOS - dispatcherTimer blocking touch events?

I am using a dispatch source timer to update a view at different frame rates (8, 12, or 24 FPS).
Here is the code that initializes the dispatch timer, and the function used to create the timer (the function is taken directly from the Apple documentation, subsection "Creating a Timer": http://developer.apple.com/library/mac/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/GCDWorkQueues/GCDWorkQueues.html).
call:
self.dispatchTimer = [self createDispatchTimerWithInterval:self.project.frameDuration * NSEC_PER_SEC
                                                    leeway:0.0 * NSEC_PER_SEC
                                                     queue:dispatch_get_main_queue()
                                                     block:displayFrame];
function:
- (dispatch_source_t)createDispatchTimerWithInterval:(uint64_t)interval
                                              leeway:(uint64_t)leeway
                                               queue:(dispatch_queue_t)queue
                                               block:(dispatch_block_t)block
{
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    if (timer) {
        dispatch_source_set_timer(timer, dispatch_walltime(NULL, 0), interval, leeway);
        dispatch_source_set_event_handler(timer, block);
        dispatch_resume(timer);
    }
    return timer;
}
My view updates perfectly, but touch events are not caught. My first bet would be that the displayFrame block takes too much processing time, because if I reduce the frame rate so the frame duration is 0.5 second or so, the touch events are caught.
I have only tested this on iOS 4 with an iPad 2.
Any help or hint would be greatly appreciated!
Etienne
UPDATE
I have asked a similar question on the Apple Developer Forums; here is the answer I got: https://devforums.apple.com/thread/156633?tstart=0
The main run loop drains the main queue after each pass through the runloop. I think you're right when you say your duration is too short. If the source is adding new blocks to the queue faster than they can be drained, I would certainly expect the runloop to never resume processing events (since it's constantly trying to drain the queue).
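Based on that diagnosis, a hedged sketch of one way out (reusing the question's own createDispatchTimerWithInterval: helper; the queue label is made up): run the timer on a private serial queue so the timer blocks no longer saturate the main queue, give the timer a little leeway, and dispatch only the actual view update back to the main queue.

dispatch_queue_t timerQueue = dispatch_queue_create("com.example.frametimer", NULL);
self.dispatchTimer = [self createDispatchTimerWithInterval:self.project.frameDuration * NSEC_PER_SEC
                                                    leeway:(self.project.frameDuration * NSEC_PER_SEC) / 10
                                                     queue:timerQueue
                                                     block:^{
                                                         // Hop to the main queue only for the UI work.
                                                         dispatch_async(dispatch_get_main_queue(), displayFrame);
                                                     }];

This only helps if displayFrame itself can finish within a frame; if it cannot, the real fix is to make the per-frame work cheaper or skip frames when the main queue is backed up.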
