Static dispatch with final - performance comparison (iOS)

Apple's article Increasing Performance by Reducing Dynamic Dispatch suggests that dynamic dispatch is costly:

dynamic dispatch ... increases language expressivity at the cost of a constant amount of runtime overhead for each indirect usage. In performance sensitive code such overhead is often undesirable

It also lists three ways to improve performance by eliminating such dynamism: final, private, and Whole Module Optimization.
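To illustrate the idea with a made-up sketch (not from the article, and not the benchmark below): a method on a non-final class can be overridden, so a call generally has to be resolved at runtime, whereas a method on a final class cannot be overridden, so the compiler can call it directly or even inline it.

class Animal {
    func sound() -> String { return "..." }            // can be overridden, so calls may be dynamically dispatched
}
class Dog: Animal {
    override func sound() -> String { return "Woof" }
}
final class Cat {
    func sound() -> String { return "Meow" }           // cannot be overridden, so calls can be dispatched statically or inlined
}

let animals: [Animal] = [Animal(), Dog()]
let sounds = animals.map { $0.sound() }                // implementation chosen at runtime
let meow = Cat().sound()                               // implementation known at compile time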
From what I understood, the final keyword guarantees to the compiler that the class will never be subclassed, so its methods can never be overridden and can be dispatched statically, which improves performance.
So, to sum it up, final should increase performance for that class.
I performed a basic test to verify this:
import Foundation

func calculateTimeElapsed(label: String, codeToRun: () -> Void) {
    let startTime = CFAbsoluteTimeGetCurrent()
    codeToRun()
    let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
    print("Time elapsed for \(label): \(timeElapsed) sec")
}

class Foo {
    var number: Int = 1
    func incrementNumber() {
        number += 1
    }
}

final class Bar {
    var number: Int = 1
    func incrementNumber() {
        number += 1
    }
}

let fooObject = Foo()
calculateTimeElapsed(label: "Static Dispatch for Foo") {
    for _ in 0...100000 {
        fooObject.incrementNumber()
    }
}

let barObject = Bar()
calculateTimeElapsed(label: "Static Dispatch for Bar") {
    for _ in 0...100000 {
        barObject.incrementNumber()
    }
}
/// 100000 iterations
// Time elapsed for Static Dispatch for Foo: 7.20068895816803 sec
// Time elapsed for Static Dispatch for Bar: 7.22502601146698 sec
/// 200000 iterations
// Time elapsed for Static Dispatch for Foo: 13.975957989692688 sec
// Time elapsed for Static Dispatch for Bar: 14.329360961914062 sec
/// 500000 iterations
// Time elapsed for Static Dispatch for Foo: 36.355777978897095 sec
// Time elapsed for Static Dispatch for Bar: 36.50222206115723 sec
/// 700000 iterations
// Time elapsed for Static Dispatch for Foo: 51.68453896045685 sec
// Time elapsed for Static Dispatch for Bar: 51.46391808986664 sec
Two classes, one final (Bar) and one not (Foo).
Performed the same operation on objects of both classes.
Measured the time required to execute both operations.
However, to my surprise, the class that is not final takes less time in 3 out of 4 cases. It may be that the result swings in favor of the final class as the number of iterations increases, but I need to be sure that this is the right way to measure it.
I know my example might be wrong; if it is, can anyone please show me another example that demonstrates that final improves performance?

Related

How to add numbers using multiple threads?

I'm trying to add the numbers in a range, e.g. 1 to 50000000, but using a for loop or reduce(_:_:) takes too long to calculate the result.
func add(low: Int, high: Int) -> Int {
    return (low...high).reduce(0, +)
}
Is there any way to do it using multiple threads?
Adding a series of integers does not amount to enough work to justify multiple threads. While this admittedly took 28 seconds on a debug build on my computer, in an optimized, release build, the single-threaded approach took milliseconds.
So, when testing performance, make sure to use an optimized “Release” build in your scheme settings (and/or manually change the optimization settings in your target’s build settings).
But, let us set this aside for a second and assume that you really were doing a calculation that was complex enough to justify running it on multiple threads. In that case, the simplest approach would be to just dispatch the calculation to another thread, and perhaps dispatch the results back to the main thread:
func add(low: Int, high: Int, completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async {
        let result = (low...high).reduce(0, +)
        DispatchQueue.main.async {
            completion(result)
        }
    }
}
And you'd use it like so:
add(low: 0, high: 50_000_000) { result in
    // use `result` here
    self.label.text = "\(result)"
}
// but not here, because the above runs asynchronously
That will ensure that the main thread is not blocked while the calculation is being done. Again, in this example, adding 50 million integers on a release build may not even require this, but the general idea is to make sure that anything that takes more than a few milliseconds is moved off the main thread.
Now, if the computation was significantly more complicated, one might use concurrentPerform, which is like a for loop, but each iteration runs in parallel. You might think you could just dispatch each calculation to a concurrent queue using async, but that can easily exhaust the limited number of worker threads (called “thread explosion”, which can lead to locks and/or deadlocks). So we reach for concurrentPerform to perform calculations in parallel, but to constrain the number of concurrent threads to the capabilities of the device in question (namely, how many cores the CPU has).
Let’s consider this simple attempt to calculate the sum in parallel. This is inefficient, but we’ll refine it later:
func add(low: Int, high: Int, completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async {
        let lock = NSLock()
        var sum = 0

        // the `concurrentPerform` below is equivalent to
        //
        //     for iteration in 0 ... (high - low) { ... }
        //
        // but the iterations run in parallel
        DispatchQueue.concurrentPerform(iterations: high - low + 1) { iteration in
            // do some calculation in parallel
            let value = iteration + low

            // synchronize the update of the shared resource
            lock.synchronized {
                sum += value
            }
        }

        // call completion handler with the result
        DispatchQueue.main.async {
            completion(sum)
        }
    }
}
Note, because we have multiple threads adding values, we must synchronize the interaction with sum to ensure thread safety. In this case, I'm using NSLock and the routine below (because introducing a GCD serial queue and/or a reader-writer pattern in these massively parallelized scenarios is even slower):
extension NSLocking {
    func synchronized<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}
Above, I wanted to show a simple use of concurrentPerform, but you are going to find that it is much slower than the single-threaded implementation. That is because there is not enough work running on each thread and we perform 50 million synchronizations. So we might, instead, "stride", adding a million values per thread:
func add(low: Int, high: Int, completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async {
        let stride = 1_000_000
        let iterations = (high - low) / stride + 1

        let lock = NSLock()
        var sum = 0

        DispatchQueue.concurrentPerform(iterations: iterations) { iteration in
            let start = iteration * stride + low
            let end = min(start + stride - 1, high)
            let subtotal = (start...end).reduce(0, +)

            lock.synchronized {
                sum += subtotal
            }
        }

        DispatchQueue.main.async {
            completion(sum)
        }
    }
}
So, each thread adds up to 1 million values in a local subtotal, and when that calculation is done, it synchronizes the update of sum. This increases the work per thread and dramatically reduces the number of synchronizations. Frankly, adding a million integers is still nowhere near enough work to justify the multithreading overhead, but it illustrates the idea.
If you want to see an example where concurrentPerform might be useful, consider calculating the Mandelbrot set, where each pixel of the calculation might be computationally intense. And we again stride (e.g. each iteration calculates a row of pixels), which (a) ensures that each thread is doing enough work to justify the multithreading overhead, and (b) avoids memory contention issues (a.k.a. "cache sloshing").
If you just want a function that returns the sum of all integers in the range from low to high, you can do it even faster with some simple maths.
The values form an arithmetic sequence starting at low and ending at high with a common difference of 1, containing (high - low + 1) elements.
The sum is then simply:
sum = (high * (high + 1) - low * (low - 1)) / 2
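For reference, a minimal Swift sketch of that closed-form approach (assuming low >= 1 and that the intermediate products fit in Int):

func add(low: Int, high: Int) -> Int {
    // sum of the arithmetic sequence low, low + 1, ..., high
    return (high * (high + 1) - low * (low - 1)) / 2
}

add(low: 1, high: 50_000_000)   // constant time, no loop and no threads needed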

What does DispatchWallTime do on iOS?

I thought the difference between DispatchTime and DispatchWallTime had to do with whether the app was suspended or the device screen was locked or something: DispatchTime should pause, whereas DispatchWallTime should keep going because clocks in the real world keep going.
So I wrote a little test app:
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.
        return true
    }

    func applicationDidEnterBackground(_ application: UIApplication) {
        print("backgrounding the app, starting timers for 60 seconds", Date())
        DispatchQueue.main.asyncAfter(deadline: .now() + 60) {
            print("deadline 60 seconds ended", Date())
        }
        DispatchQueue.main.asyncAfter(wallDeadline: .now() + 60) {
            print("wallDeadline 60 seconds ended", Date())
        }
    }

    func applicationWillEnterForeground(_ application: UIApplication) {
        print("app coming to front", Date())
    }
}
I ran the app on my device. I backgrounded the app, waited for a while, then brought the app to the foreground. Sometimes "waited for a while" included switching off the screen. I got results like this:
backgrounding the app, starting timers for 60 seconds 2018-08-15 17:41:18 +0000
app coming to front 2018-08-15 17:41:58 +0000
wallDeadline 60 seconds ended 2018-08-15 17:42:24 +0000
deadline 60 seconds ended 2018-08-15 17:42:24 +0000
backgrounding the app, starting timers for 60 seconds 2018-08-15 17:42:49 +0000
app coming to front 2018-08-15 17:43:21 +0000
wallDeadline 60 seconds ended 2018-08-15 17:43:55 +0000
deadline 60 seconds ended 2018-08-15 17:43:55 +0000
The delay before the deadline timer fires is not as long as I expected: it's 6 seconds over the 60 second deadline, even though I "slept" the app for considerably longer than that. But even more surprising, both timers fire at the same instant.
So what does wallDeadline do on iOS that's different from what deadline does?
There's nothing wrong with The Dreams Wind's answer, but I wanted to understand these APIs more precisely. Here's my analysis.
DispatchTime
Here's the comment above DispatchTime.init:
/// Creates a `DispatchTime` relative to the system clock that
/// ticks since boot.
///
/// - Parameters:
/// - uptimeNanoseconds: The number of nanoseconds since boot, excluding
/// time the system spent asleep
/// - Returns: A new `DispatchTime`
/// - Discussion: This clock is the same as the value returned by
/// `mach_absolute_time` when converted into nanoseconds.
/// On some platforms, the nanosecond value is rounded up to a
/// multiple of the Mach timebase, using the conversion factors
/// returned by `mach_timebase_info()`. The nanosecond equivalent
/// of the rounded result can be obtained by reading the
/// `uptimeNanoseconds` property.
/// Note that `DispatchTime(uptimeNanoseconds: 0)` is
/// equivalent to `DispatchTime.now()`, that is, its value
/// represents the number of nanoseconds since boot (excluding
/// system sleep time), not zero nanoseconds since boot.
So DispatchTime is based on mach_absolute_time. But what is mach_absolute_time? It's defined in mach_absolute_time.s. There is a separate definition for each CPU type, but the key is that it uses rdtsc on x86-like CPUs and reads the CNTPCT_EL0 register on ARMs. In both cases, it's getting a value that increases monotonically, and only increases while the processor is not in a sufficiently deep sleep state.
Note that the CPU is not necessarily sleeping deeply enough even if the device appears to be asleep.
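As a side note, here is a small sketch (using the public Mach APIs, not libdispatch internals) of the tick-to-nanosecond conversion that the documentation comment describes:

import Darwin

var timebase = mach_timebase_info_data_t()
mach_timebase_info(&timebase)

let ticks = mach_absolute_time()                                     // monotonic ticks since boot, excluding deep sleep
let nanos = ticks * UInt64(timebase.numer) / UInt64(timebase.denom)  // converted to nanoseconds via the timebase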
DispatchWallTime
There's no similarly helpful comment in the DispatchWallTime definition, but we can look at the definition of its now method:
public static func now() -> DispatchWallTime {
    return DispatchWallTime(rawValue: CDispatch.dispatch_walltime(nil, 0))
}
and then we can consult the definition of dispatch_walltime:
dispatch_time_t
dispatch_walltime(const struct timespec *inval, int64_t delta)
{
    int64_t nsec;

    if (inval) {
        nsec = (int64_t)_dispatch_timespec_to_nano(*inval);
    } else {
        nsec = (int64_t)_dispatch_get_nanoseconds();
    }
    nsec += delta;
    if (nsec <= 1) {
        // -1 is special == DISPATCH_TIME_FOREVER == forever
        return delta >= 0 ? DISPATCH_TIME_FOREVER : (dispatch_time_t)-2ll;
    }
    return (dispatch_time_t)-nsec;
}
When inval is nil, it calls _dispatch_get_nanoseconds, so let's check that out:
static inline uint64_t
_dispatch_get_nanoseconds(void)
{
    dispatch_static_assert(sizeof(NSEC_PER_SEC) == 8);
    dispatch_static_assert(sizeof(USEC_PER_SEC) == 8);

#if TARGET_OS_MAC
    return clock_gettime_nsec_np(CLOCK_REALTIME);
#elif HAVE_DECL_CLOCK_REALTIME
    struct timespec ts;
    dispatch_assume_zero(clock_gettime(CLOCK_REALTIME, &ts));
    return _dispatch_timespec_to_nano(ts);
#elif defined(_WIN32)
    static const uint64_t kNTToUNIXBiasAdjustment = 11644473600 * NSEC_PER_SEC;
    // FILETIME is 100-nanosecond intervals since January 1, 1601 (UTC).
    FILETIME ft;
    ULARGE_INTEGER li;
    GetSystemTimePreciseAsFileTime(&ft);
    li.LowPart = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;
    return li.QuadPart * 100ull - kNTToUNIXBiasAdjustment;
#else
    struct timeval tv;
    dispatch_assert_zero(gettimeofday(&tv, NULL));
    return _dispatch_timeval_to_nano(tv);
#endif
}
It consults the POSIX CLOCK_REALTIME clock. So it is based on the common idea of time and will change if you change your device's time in Settings (or System Preferences on a Mac).
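If you want to observe the two clock bases side by side, a quick hypothetical check on Darwin looks like this:

import Darwin

let wall   = clock_gettime_nsec_np(CLOCK_REALTIME)     // calendar clock; jumps if you change the time in Settings
let uptime = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)   // ticks since boot, excluding sleep; the basis of mach_absolute_time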
The Mysterious Six Seconds
You said your timer fired
6 seconds over the 60 second deadline
so let's see where that came from.
Both asyncAfter(deadline:execute:) and asyncAfter(wallDeadline:execute:) call the same C API, dispatch_after. The kind of deadline (or “clock”) is encoded into a dispatch_time_t along with the time value. The dispatch_after function calls the internal GCD function _dispatch_after, which I quote in part here:
static inline void
_dispatch_after(dispatch_time_t when, dispatch_queue_t dq,
        void *ctxt, void *handler, bool block)
{
    dispatch_timer_source_refs_t dt;
    dispatch_source_t ds;
    uint64_t leeway, delta;

    // (snip)

    delta = _dispatch_timeout(when);
    if (delta == 0) {
        if (block) {
            return dispatch_async(dq, handler);
        }
        return dispatch_async_f(dq, ctxt, handler);
    }
    leeway = delta / 10; // <rdar://problem/13447496>

    if (leeway < NSEC_PER_MSEC) leeway = NSEC_PER_MSEC;
    if (leeway > 60 * NSEC_PER_SEC) leeway = 60 * NSEC_PER_SEC;

    // (snip)

    dispatch_clock_t clock;
    uint64_t target;
    _dispatch_time_to_clock_and_value(when, &clock, &target);
    if (clock != DISPATCH_CLOCK_WALL) {
        leeway = _dispatch_time_nano2mach(leeway);
    }
    dt->du_timer_flags |= _dispatch_timer_flags_from_clock(clock);
    dt->dt_timer.target = target;
    dt->dt_timer.interval = UINT64_MAX;
    dt->dt_timer.deadline = target + leeway;
    dispatch_activate(ds);
}
The _dispatch_timeout function can be found in time.c. Suffice to say it returns the number of nanoseconds between the current time and the time passed to it. It determines the “current time” based on the clock of the time passed to it.
So _dispatch_after gets the number of nanoseconds to wait before executing your block. Then it computes leeway as one tenth of that duration. When it sets the timer's deadline, it adds leeway to the deadline you passed in.
In your case, delta is about 60 seconds (= 60 × 10⁹ nanoseconds), so leeway is about 6 seconds. Hence your block is executed about 66 seconds after you call asyncAfter.
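To make the arithmetic concrete, here is the same leeway computation re-expressed in Swift (a sketch of the calculation only, not of what GCD actually executes):

import Foundation

let delta: UInt64 = 60 * NSEC_PER_SEC       // requested delay: 60 s, in nanoseconds
var leeway = delta / 10                     // 6 s
leeway = max(leeway, NSEC_PER_MSEC)         // clamped to at least 1 ms
leeway = min(leeway, 60 * NSEC_PER_SEC)     // and to at most 60 s
// timer deadline = target + leeway, i.e. roughly 66 s after scheduling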
This question has been here for quite a while without any answers, so I'd like to give it a try and point out a subtle difference I noticed in practice.
DispatchTime should pause, whereas DispatchWallTime should keep going
because clocks in the real world keep going
You are correct here; at least they are supposed to act this way. However, it tends to be really tricky to verify that DispatchTime works as expected. When an iOS app is running under an Xcode session, it has unlimited background time and doesn't get suspended. I also couldn't reproduce suspension by running the application without Xcode connected, so it remains an open question under which conditions DispatchTime is actually paused. The main thing to note, however, is that DispatchTime doesn't depend on the system clock.
DispatchWallTime works pretty much the same (it isn't suspended either), except that it depends on the system clock. To see the difference, try a somewhat longer timer, say, 5 minutes. Then go to the system settings and set the time 1 hour forward. If you now reopen the application, you will notice that the wallDeadline timer has expired immediately, whereas the deadline timer keeps waiting out its remaining time.

Sync calls from Swift to C based thread-unsafe library

My Swift code needs to call some C functions that are not thread-safe. All calls need to be:
1) synchronous (sequential invocation, each call made only after the previous one has returned),
2) made on the same thread.
I've tried to create a queue and then access C from within a function:
let queue = DispatchQueue(label: "com.example.app.thread-1", qos: .userInitiated)

func calc(...) -> Double {
    var result: Double!
    queue.sync {
        result = c_func(...)
    }
    return result
}
This has improved the behaviour, yet I still get crashes: sometimes, not as often as before, and mostly while debugging from Xcode.
Any ideas about how to handle this better?
Edit
Based on the comments below, can somebody give a general example of how to use a thread class to ensure sequential execution on the same thread?
Edit 2
A good example of the problem can be seen when using this wrapper around a C library:
https://github.com/PerfectlySoft/Perfect-PostgreSQL
It works fine when accessed from a single queue, but starts producing weird errors if several dispatch queues are involved.
So I am envisaging an approach with a single executor thread which, when called, would block the caller, perform the calculation, unblock the caller and return the result, repeating this for each consecutive caller.
Something like this:
thread 1 ----> |          | ---->
thread 2 ----> | executor | ---->
thread 3 ----> |  thread  | ---->
    ...        |          |
If you really need to ensure that all API calls must come from a single thread, you can do so by using the Thread class plus some synchronization primitives.
For instance, a somewhat straightforward implementation of such idea is provided by the SingleThreadExecutor class below:
class SingleThreadExecutor {
    private var thread: Thread!
    private let threadAvailability = DispatchSemaphore(value: 1)
    private var nextBlock: (() -> Void)?
    private let nextBlockPending = DispatchSemaphore(value: 0)
    private let nextBlockDone = DispatchSemaphore(value: 0)

    init(label: String) {
        thread = Thread(block: self.run)
        thread.name = label
        thread.start()
    }

    func sync(block: @escaping () -> Void) {
        threadAvailability.wait()

        nextBlock = block
        nextBlockPending.signal()
        nextBlockDone.wait()
        nextBlock = nil

        threadAvailability.signal()
    }

    private func run() {
        while true {
            nextBlockPending.wait()
            nextBlock!()
            nextBlockDone.signal()
        }
    }
}
A simple test to ensure the specified block is really being called by a single thread:
let executor = SingleThreadExecutor(label: "single thread test")

for i in 0..<10 {
    DispatchQueue.global().async {
        executor.sync { print("\(i) # \(Thread.current.name!)") }
    }
}

Thread.sleep(forTimeInterval: 5) /* Wait for calls to finish. */
0 # single thread test
1 # single thread test
2 # single thread test
3 # single thread test
4 # single thread test
5 # single thread test
6 # single thread test
7 # single thread test
8 # single thread test
9 # single thread test
Finally, replace DispatchQueue with SingleThreadExecutor in your code and let's hope this fixes your — very exotic! — issue ;)
let singleThreadExecutor = SingleThreadExecutor(label: "com.example.app.thread-1")

func calc(...) -> Double {
    var result: Double!
    singleThreadExecutor.sync {
        result = c_func(...)
    }
    return result
}
An interesting outcome... I benchmarked the performance of the solution by Paulo Mattos that I accepted against my own earlier experiments, where I used a much less elegant, lower-level run loop and object reference approach to achieve the same pattern.
Playground for closure based approach:
https://gist.github.com/deze333/23d11123f02e65c456d16ffe5621e2ee
Playground for run loop & reference passing approach:
https://gist.github.com/deze333/82c0ee3e82fd250097449b1b200b7958
Using closures:
Invocations processed : 1000
Invocations duration, sec: 4.95894199609756
Cost per invocation, sec : 0.00495894199609756
Using run loop and passing object reference:
Invocations processed : 1000
Invocations duration, sec: 1.62595099210739
Cost per invocation, sec : 0.00162432666544195
Passing closures is about 3x slower due to their being allocated on the heap, versus passing an object reference. This really confirms the performance problem of closures outlined in the excellent article Mutexes and closure capture in Swift.
The lesson: don't overuse closures when maximum performance is needed, which is often the case in mobile development.
Closures are so beautifully looking though!
EDIT:
Things are much better in Swift 4 with whole module optimisation. Closures are fast!

How to make a performance test fail if it's too slow?

I'd like my test to fail if it runs slower than 0.5 seconds but the average time is merely printed in the console and I cannot find a way to access it. Is there a way to access this data?
Code
// Measures the time it takes to parse the participant codes from the first 100 events in our test data.
func testParticipantCodeParsingPerformance()
{
    var increment = 0
    self.measureBlock
    {
        increment = 0
        while increment < 100
        {
            Parser.parseParticipantCode(self.fields[increment], hostCodes: MasterCalendarArray.getHostCodeArray()[increment])
            increment++
        }
    }
    print("Events measured: \(increment)")
}
Test Data
[Tests.ParserTest testParticipantCodeParsingPerformance]' measured [Time, seconds] average: 0.203, relative standard deviation: 19.951%, values: [0.186405, 0.182292, 0.179966, 0.177797, 0.175820, 0.205763, 0.315636, 0.223014, 0.200362, 0.178165]
You need to set a baseline for your performance test. Head to the Report Navigator:
and select your recent test run. You'll see a list of all your tests, but the performance ones will have times associated with them. Click the time to bring up the Performance Result popover:
The "Baseline" value is what you're looking for--set it to 0.5s and that will inform Xcode that this test should complete in half a second. If your test is more than 10% slower than the baseline, it'll fail!
The only way to do something similar to what you describe is to set a time limit graphically, as @andyvn22 recommends.
But if you want to do it completely in code, the only thing you can do is extend XCTestCase with a new method that measures the execution time of the closure and returns it to be used in an assertion. Here is an example of what you could do:
extension XCTestCase {
    /// Executes the block and returns the execution time in milliseconds
    public func timeBlock(closure: () -> ()) -> Int {
        var info = mach_timebase_info(numer: 0, denom: 0)
        mach_timebase_info(&info)

        let begin = mach_absolute_time()
        closure()
        let diff = Double(mach_absolute_time() - begin) * Double(info.numer) / Double(1_000_000 * info.denom)

        return Int(diff)
    }
}
And use it with:
func testExample() {
    let millis = self.timeBlock {
        doSomethingLong()
    }
    // Fails if the block took 500 ms or longer
    XCTAssertTrue(millis < 500)
}

Why is decreasing interval not speeding up iOS timer execution?

When I run this timer code for 60 seconds duration/1 sec interval or 6 seconds/.1 sec interval it works as expected (completing 10X faster). However, decreasing the values to 0.6 seconds/.01 seconds doesn't speed up the overall operation as expected (having it complete another 10X faster).
When I set this value to less than 0.1 it doesn't work as expected:
// The interval to use
let interval: NSTimeInterval = 0.01 // 1.0 and 0.1 work fine, 0.01 does not
The rest of the relevant code (full playground here: donut builder gist):
// Extend NSTimeInterval to provide the conversion functions.
extension NSTimeInterval {
    var nSecMultiplier: Double {
        return Double(NSEC_PER_SEC)
    }
    public func nSecs() -> Int64 {
        return Int64(self * nSecMultiplier)
    }
    public func nSecs() -> UInt64 {
        return UInt64(self * nSecMultiplier)
    }
    public func dispatchTime() -> dispatch_time_t {
        // Since the last parameter takes an Int64, the version that returns an Int64 is used.
        return dispatch_time(DISPATCH_TIME_NOW, self.nSecs())
    }
}
// Define a simple function for getting a timer dispatch source.
func repeatingTimerWithInterval(interval: NSTimeInterval, leeway: NSTimeInterval, action: dispatch_block_t) -> dispatch_source_t {
    let timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue())
    guard timer != nil else { fatalError() }
    dispatch_source_set_event_handler(timer, action)
    // This function takes the UInt64 for the last two parameters
    dispatch_source_set_timer(timer, DISPATCH_TIME_NOW, interval.nSecs(), leeway.nSecs())
    dispatch_resume(timer)
    return timer
}

// Create the timer
let timer = repeatingTimerWithInterval(interval, leeway: 0.0) { () -> Void in
    drawDonut()
}

// Turn off the timer after a few seconds
dispatch_after((interval * 60).dispatchTime(), dispatch_get_main_queue()) { () -> Void in
    dispatch_source_cancel(timer)
    XCPlaygroundPage.currentPage.finishExecution()
}
The interval you set for a timer is not guaranteed. It is simply a target. The system periodically checks active timers and compares their target fire time to the current time and if the fire time has passed, it fires the timer. But there is no guarantee as to how rapidly the system is checking the timer. So the shorter the target interval and the more other work a thread is doing, the less accuracy a timer will have. From Apple's documentation:
A timer is not a real-time mechanism; it fires only when one of the
run loop modes to which the timer has been added is running and able
to check if the timer’s firing time has passed. Because of the various
input sources a typical run loop manages, the effective resolution of
the time interval for a timer is limited to on the order of 50-100
milliseconds. If a timer’s firing time occurs during a long callout or
while the run loop is in a mode that is not monitoring the timer, the
timer does not fire until the next time the run loop checks the timer.
Therefore, the actual time at which the timer fires potentially can be
a significant period of time after the scheduled firing time.
This does indeed appear to be a playground limitation. I'm able to achieve an interval of 0.01 seconds when testing on an actual iOS device.
Although I was wrong in my initial answer about the limitation of the run loop speed – GCD is apparently able to work some magic behind the scenes in order to allow multiple dispatch sources to be fired per run loop iteration.
However, that being said, you should still consider that the fastest an iOS device's screen can refresh is 60 times a second, or once every 0.0167 seconds.
Therefore it simply makes no sense to be doing drawing updates any faster than that. You should consider using a CADisplayLink in order to synchronise drawing with the screen refresh rate – and adjusting your drawing progress instead of timer frequency in order to control the speed of progress.
A fairly rudimentary setup could look like this:
var displayLink: CADisplayLink?
var deltaTime: CFTimeInterval = 0
let timerDuration: CFTimeInterval = 5

func startDrawing() {
    displayLink?.invalidate()
    deltaTime = 0
    displayLink = CADisplayLink(target: self, selector: #selector(doDrawingUpdate))
    displayLink?.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSRunLoopCommonModes)
}

func doDrawingUpdate() {
    if deltaTime >= timerDuration {
        deltaTime = timerDuration
        displayLink?.invalidate()
        displayLink = nil
    }
    draw(CGFloat(deltaTime / timerDuration))
    deltaTime += displayLink?.duration ?? 0
}

func draw(progress: CGFloat) {
    // do drawing
}
That way you can ensure that you're drawing at the maximum frame-rate available, and your drawing progress won't be affected if the device is under strain and the run loop is therefore running slower.

Resources