iOS - dispatchTimer blocking touch events?

I am using a dispatch source timer to update a view at different frame rates (8, 12 or 24 FPS).
Here is the code that initializes the dispatchTimer and the function used to create the timer.
(This function is taken directly from the Apple docs, subsection "Creating a Timer": http://developer.apple.com/library/mac/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/GCDWorkQueues/GCDWorkQueues.html)
call:
self.dispatchTimer = [self createDispatchTimerWithInterval:self.project.frameDuration * NSEC_PER_SEC
                                                    leeway:0.0 * NSEC_PER_SEC
                                                     queue:dispatch_get_main_queue()
                                                     block:displayFrame];
function:
- (dispatch_source_t)createDispatchTimerWithInterval:(uint64_t)interval leeway:(uint64_t)leeway queue:(dispatch_queue_t)queue block:(dispatch_block_t)block {
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    if (timer) {
        dispatch_source_set_timer(timer, dispatch_walltime(NULL, 0), interval, leeway);
        dispatch_source_set_event_handler(timer, block);
        dispatch_resume(timer);
    }
    return timer;
}
My view updates perfectly, but touch events are not caught. My first bet would be that the block displayFrame takes too much processing time, because if I drop the frame rate so that frameDuration is 0.5 seconds or so, the touch events are caught.
I have only tested this on iOS 4 with an iPad 2.
Any help or hint would be greatly appreciated!
Etienne
UPDATE
I asked a similar question on the Apple Developer Forums; here is the answer I got: https://devforums.apple.com/thread/156633?tstart=0

The main run loop drains the main queue after each pass through the runloop. I think you're right when you say your duration is too short. If the source is adding new blocks to the queue faster than they can be drained, I would certainly expect the runloop to never resume processing events (since it's constantly trying to drain the queue).
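
What that answer implies in practice is to keep the per-frame work off the main queue. A minimal sketch, assuming the heavy part of displayFrame can run in the background (renderNextFrame and frameView are hypothetical names, not from the original code):
dispatch_queue_t frameQueue = dispatch_queue_create("com.example.frames", DISPATCH_QUEUE_SERIAL);
self.dispatchTimer = [self createDispatchTimerWithInterval:self.project.frameDuration * NSEC_PER_SEC
                                                    leeway:0
                                                     queue:frameQueue
                                                     block:^{
    UIImage *frame = [self renderNextFrame]; // hypothetical: the expensive per-frame work
    dispatch_async(dispatch_get_main_queue(), ^{
        self.frameView.image = frame;        // only the cheap UI update touches the main queue
    });
}];
This way the main queue receives only one small block per frame, and the run loop keeps processing touch events between frames.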

Related

How to disable the first fire of the dispatch source timer

I am using a dispatch source timer.
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
dispatch_source_set_timer(timer, dispatch_walltime(NULL, 0), interval, leeway);
dispatch_source_set_event_handler(timer, block);
dispatch_resume(timer);
However, I find that the block is called almost immediately after the code above runs. After that, the timer fires every interval.
My question is: how do I disable this first immediate fire?
The second argument to dispatch_source_set_timer is the time when the timer should fire for the first time. You're setting it to dispatch_walltime(NULL, 0), i.e. "now".
To make it fire for the first time after some interval, pass dispatch_walltime(NULL, interval) instead.
This happens when the dispatch start time is in the past or is "now". Check your start time.
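
For example, a minimal sketch using the variables from the snippet above:
dispatch_source_set_timer(timer,
                          dispatch_walltime(NULL, interval), // first fire: now + interval
                          interval,
                          leeway);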

high precision timer in Swift

What would be the best solution in Swift for a precise clock to control external hardware? I need to write a loop function that fires at a set rate per second (hundreds or thousands of Hz) reliably and with high precision.
You can define a GCD timer property (because unlike NSTimer/Timer, you have to maintain your own strong reference to the GCD timer):
var timer: DispatchSourceTimer!
And then make a timer (probably a .strict one that is not subject to coalescing, app nap, etc., with awareness of the power/battery/etc. implications that entails) with makeTimerSource(flags:queue:):
let queue = DispatchQueue(label: "com.domain.app.timer", qos: .userInteractive)
timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
// Note: in Swift 4 and later, scheduleRepeating(deadline:interval:leeway:)
// was renamed to schedule(deadline:repeating:leeway:).
timer.scheduleRepeating(deadline: .now(), interval: 0.0001, leeway: .nanoseconds(0))
timer.setEventHandler {
    // do something on each tick (here: 10,000 times per second)
}
timer.resume()
Note that you should gracefully handle situations where the timer cannot deliver the precision you are looking for. As the documentation says:
Note that some latency is to be expected for all timers, even when a leeway value of zero is specified.

How to loop dispatch_after

I want to loop dispatch_after to keep showing images, like this:
while isRunning {
    let delay = Int64(Double(NSEC_PER_SEC) * settings.delay)
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, delay), dispatch_get_main_queue(), {
        dispatch_async(dispatch_get_main_queue(), {
            self.showNextImage()
        })
    })
}
It's supposed to call showNextImage every delay seconds, but it gets stuck in an infinite loop without ever showing an image. I don't know how to solve this problem. Any help is appreciated.
The loop dispatches an unbounded number of dispatch_after blocks, because isRunning stays true and the loop never waits. Better to bound it with a counter (e.g. i < 10), or use an NSTimer and invalidate it once your condition is met.
if (self.stimerShowNextImage.isValid) {
    [self.stimerShowNextImage invalidate];
    self.stimerShowNextImage = nil;
}
self.stimerShowNextImage = [NSTimer scheduledTimerWithTimeInterval:5
                                                            target:self
                                                          selector:@selector(showNextImage)
                                                          userInfo:nil
                                                           repeats:YES];
Above is the timer code with a 5-second interval. You need to invalidate it in viewWillDisappear, and before scheduling the timer you need to check that it isn't already running.
Also, look at your code: dispatch_after already runs its block on the main queue, so the inner dispatch_async onto the main queue is unnecessary. That pattern is only needed when you are working on some other queue and then need to touch UI elements; in that case you dispatch the UI work back onto the main queue.
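
If you would rather keep dispatch_after than switch to NSTimer, the usual fix is to re-arm it from inside the block instead of looping. A minimal Objective-C sketch (isRunning and settings.delay are assumed to be properties mirroring the question's code):
- (void)scheduleNextImage {
    if (!self.isRunning) return;
    int64_t delta = (int64_t)(self.settings.delay * NSEC_PER_SEC);
    __weak typeof(self) weakSelf = self;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, delta), dispatch_get_main_queue(), ^{
        [weakSelf showNextImage];
        [weakSelf scheduleNextImage]; // re-arm only after the previous delay has elapsed
    });
}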

Timing issue with dispatch_source_t handler function - am I following the right pattern for dispatch timer?

I read the documentation and learned that a timer (dispatch_source_t) skips firing if the handler for a previous iteration is still in progress.
But this whole business of the handler taking longer makes the timing inaccurate, and I am observing that I am unable to stop the timer at the intended times.
My code looks like this:
double secondsToFire = 1.0f;
dispatch_queue_t queue = dispatch_get_main_queue();
m_myTimer = CreateDispatchTimer(secondsToFire, queue, ^{
    //Do some time consuming operation, taking any number of seconds
    int retVal = DoSomeOperation();
    if (retVal == 1)
    {
        cancelTimer(m_myTimer);
        SomeOtherOperation();
    }
});
dispatch_source_t CreateDispatchTimer(double interval, dispatch_queue_t queue, dispatch_block_t block)
{
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    if (timer)
    {
        // dispatch_source_set_timer(timer, dispatch_time(DISPATCH_TIME_NOW, interval * NSEC_PER_SEC), interval * NSEC_PER_SEC, (1ull * NSEC_PER_SEC) / 10);
        dispatch_source_set_timer(timer, dispatch_time(DISPATCH_TIME_NOW, 0), interval * NSEC_PER_SEC, 0);
        dispatch_source_set_event_handler(timer, block);
        dispatch_resume(timer);
    }
    return timer;
}
void cancelTimer(dispatch_source_t _timer)
{
    if (_timer)
    {
        dispatch_source_cancel(_timer);
        _timer = nil;
    }
}
Note:
Inside DoSomeOperation(), I have code enclosed in @synchronized(array), in which I access an array that is being written to by another, private queue. But the entire DoSomeOperation() is executed on the main queue.
My question is: is this the right and accurate timing model? I am posting here because I am facing a lot of inaccuracies: the timer doesn't fire every second, and it doesn't stop as intended either. I can observe that SomeOtherOperation() gets called when retVal == 1, but the timer isn't done yet.
Another Note:
m_myTimer above is an ivar, and my project uses ARC, if that makes any difference.
No, a dispatch timer doesn't get "skipped" if it fires while your handler is running; the handler will be re-invoked for the pending event right away once the previous invocation returns.
If multiple firings occur while the handler is running or enqueued (or while the source is suspended), they will all get coalesced into a single handler invocation (as is the case for all edge-triggered source types).
You can check how many firings a given handler invocation represents with dispatch_source_get_data().
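
For illustration, a minimal sketch of reading that count inside the event handler:
dispatch_source_set_event_handler(timer, ^{
    // For a timer source this is the number of firings coalesced
    // into this invocation (always >= 1).
    unsigned long fires = dispatch_source_get_data(timer);
    NSLog(@"handling %lu coalesced firing(s)", fires);
});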
My concern was about accuracy in starting, firing and stopping, not about correct reporting of things.
I finally ended up moving one of my tasks onto a private queue instead of the main one. The advantage was clearly visible in both timer accuracy and prompt timer cancellation.
Conclusion:
The more a single queue (especially the main queue) gets flogged by timer tasks, the more inaccurate the timers become.
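
To illustrate that conclusion, here is a sketch of the same setup with the timer moved onto a private serial queue (reusing the helper functions above; the queue label is arbitrary):
dispatch_queue_t timerQueue = dispatch_queue_create("com.example.timer", DISPATCH_QUEUE_SERIAL);
m_myTimer = CreateDispatchTimer(1.0, timerQueue, ^{
    int retVal = DoSomeOperation();      // heavy work now runs off the main queue
    if (retVal == 1) {
        cancelTimer(m_myTimer);
        dispatch_async(dispatch_get_main_queue(), ^{
            SomeOtherOperation();        // hop back to main only if UI work is involved
        });
    }
});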

Synchronize two clients temporally over Multipeer Framework

I've been working at this problem for a few days and none of my solutions have been adequate. I'm lacking the theoretical knowledge to make this happen, I think, and would love some advice (it does not have to be iOS specific; I can translate C, pseudocode, whatever, into what I need).
Basically, I have two iPhones. Either one can trigger a repeating action when the user presses a button. It then needs to notify the other iPhone (via the Multipeer framework) to trigger the same action... but they both need to start at the same instant and stay in step. I really need 1/100 sec accuracy, which I think is achievable on this platform.
As a semi-rough gauge of how well in synch I am, I use AudioServices to play a "tick" sound on each device...you can very easily tell by ear how well in synch they are (ideally you would not be able to discern multiple sound sources).
Of course, I have to account for the Multipeer latency somehow... and it's highly variable, anywhere from 0.1 sec to 0.8 sec in my testing.
Having found that the system clock is totally unreliable for my purposes, I found an iOS implementation of NTP and am using that. So I'm reasonably confident that the two phones have an accurate common reference for time (though I haven't figured out a way to test this assumption short of continuously displaying NTP time on both devices, which I do, and it seems nicely in synch to my eye).
What I was trying before was sending the "start time" with the P2P message, then (on the recipient end) subtracting the measured latency from a 1.5 sec constant and performing the action after that delay. On the sender end, I would simply wait for that constant to elapse and then perform the action. This didn't work at all; I was way off.
My next attempt was to wait, on both ends, for a whole second divisible by three. Since latency always seems to be <1 sec, I thought this would work. I use the "delay" method to simply block the thread. It's a cudgel, I know, but I just want to get the timing working, period, before I worry about a more elegant solution. So, my "sender" (the device where the button is pressed) does this:
-(void)startActionAsSender
{
    [self notifyPeerToStartAction];
    [self delay];
    [self startAction];
}
And the recipient does this, in response to a delegate call:
-(void)peerDidStartAction
{
    [self delay];
    [self startAction];
}
My "delay" method looks like this:
-(void)delay
{
    NSDate *NTPTimeNow = [[NetworkClock sharedInstance] networkTime];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSDateComponents *components = [calendar components:NSSecondCalendarUnit
                                               fromDate:NTPTimeNow];
    NSInteger seconds = [components second];
    // If this method gets called on a second divisible by three, wait a second...
    if (seconds % 3 == 0) {
        sleep(1);
    }
    // Spinlock
    while (![self secondsDivideByThree]) {}
}
-(BOOL)secondsDivideByThree
{
    NSDate *NTPTime = [[NetworkClock sharedInstance] networkTime];
    NSCalendar *calendar = [NSCalendar currentCalendar];
    NSInteger seconds = [[calendar components:NSSecondCalendarUnit fromDate:NTPTime] second];
    return (seconds % 3 == 0);
}
This is old, so I hope you were able to get something working. I faced a very similar problem. In my case, I found that the inconsistency was almost entirely due to timer coalescing, which causes timers to be wrong by up to 10% on iOS devices in order to save battery usage.
For reference, here's a solution that I've been using in my own app. First, I use a simple custom protocol that's essentially a rudimentary NTP equivalent to synchronize a monotonically increasing clock between the two devices over the local network. I call this synchronized time "DTime" in the code below. With this code I'm able to tell all peers "perform action X at time Y", and it happens in sync.
+ (DTimeVal)getCurrentDTime
{
    // Requires #import <mach/mach_time.h>
    DTimeVal baseTime = mach_absolute_time();
    // Convert from ticks to nanoseconds:
    static mach_timebase_info_data_t s_timebase_info;
    if (s_timebase_info.denom == 0) {
        mach_timebase_info(&s_timebase_info);
    }
    DTimeVal timeNanoSeconds = (baseTime * s_timebase_info.numer) / s_timebase_info.denom;
    // localDTimeOffset is maintained by the NTP-like sync protocol mentioned above (not shown)
    return timeNanoSeconds + localDTimeOffset;
}
+ (void)atExactDTime:(DTimeVal)val runBlock:(dispatch_block_t)block
{
    // Use the most accurate timing possible to trigger an event at the specified DTime.
    // This is much more accurate than dispatch_after(...), which has a 10% "leeway" by default.
    // However, this method will use battery faster as it avoids most timer coalescing.
    // Use as little as necessary.
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, DISPATCH_TIMER_STRICT, dispatch_get_main_queue());
    dispatch_source_set_event_handler(timer, ^{
        dispatch_source_cancel(timer); // one shot timer
        while (val - [self getCurrentDTime] > 1000) {
            // It is at least 1 microsecond too early...
            [NSThread sleepForTimeInterval:0.000001]; // Change this to zero for even better accuracy
        }
        block();
    });
    // Now, we employ a dirty trick:
    // Since even with DISPATCH_TIMER_STRICT there can be about 1ms of inaccuracy, we set the timer to
    // fire 1.3ms too early, then we use an until(time) { sleep(); } loop to delay until the exact time
    // that we wanted. This takes us from an accuracy of ~1ms to an accuracy of ~0.01ms, i.e. two orders
    // of magnitude improvement. However, of course the downside is that this will block the main thread
    // for 1.3ms.
    dispatch_time_t at_time = dispatch_time(DISPATCH_TIME_NOW, val - [self getCurrentDTime] - 1300000);
    dispatch_source_set_timer(timer, at_time, DISPATCH_TIME_FOREVER /*one shot*/, 0 /* minimal leeway */);
    dispatch_resume(timer);
}
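
For completeness, a hypothetical usage sketch (the Clock class name hosting these methods and the 2-second safety margin are assumptions; the start time itself would be sent to the peer over the Multipeer session):
DTimeVal startAt = [Clock getCurrentDTime] + 2 * NSEC_PER_SEC; // safely beyond worst-case latency
// ... send startAt to the other device over the session, then on both devices:
[Clock atExactDTime:startAt runBlock:^{
    [self startAction]; // fires at (nearly) the same instant on each device
}];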