Data Race in Dispatch Timer Source - iOS

ThreadSanitizer detects a data race in the following Swift program run on macOS:
import Dispatch

class Foo<T> {
    var value: T?
    let queue = DispatchQueue(label: "Foo syncQueue")

    init() {}

    func complete(value: T) {
        queue.sync {
            self.value = value
        }
    }

    static func completeAfter(_ delay: Double, value: T) -> Foo<T> {
        let returnedFoo = Foo<T>()
        let queue = DispatchQueue(label: "timerEventHandler")
        let timer = DispatchSource.makeTimerSource(queue: queue)
        timer.setEventHandler {
            returnedFoo.complete(value: value)
            timer.cancel()
        }
        timer.scheduleOneshot(deadline: .now() + delay)
        timer.resume()
        return returnedFoo
    }
}

func testCompleteAfter() {
    let foo = Foo<Int>.completeAfter(0.1, value: 1)
    sleep(10)
}

testCompleteAfter()
When running on iOS Simulator, ThreadSanitizer does not detect a race.
ThreadSanitizer output:
WARNING: ThreadSanitizer: data race (pid=71596)
Read of size 8 at 0x7d0c0000eb48 by thread T2:
#0 block_destroy_helper.5 main.swift (DispatchTimerSourceDataRace+0x0001000040fb)
#1 _Block_release <null>:38 (libsystem_blocks.dylib+0x000000000951)
Previous write of size 8 at 0x7d0c0000eb48 by main thread:
#0 block_copy_helper.4 main.swift (DispatchTimerSourceDataRace+0x0001000040b0)
#1 _Block_copy <null>:38 (libsystem_blocks.dylib+0x0000000008b2)
#2 testCompleteAfter() -> () main.swift:40 (DispatchTimerSourceDataRace+0x000100003981)
#3 main main.swift:44 (DispatchTimerSourceDataRace+0x000100002250)
Location is heap block of size 48 at 0x7d0c0000eb20 allocated by main thread:
#0 malloc <null>:144 (libclang_rt.tsan_osx_dynamic.dylib+0x00000004188a)
#1 _Block_copy <null>:38 (libsystem_blocks.dylib+0x000000000873)
#2 testCompleteAfter() -> () main.swift:40 (DispatchTimerSourceDataRace+0x000100003981)
#3 main main.swift:44 (DispatchTimerSourceDataRace+0x000100002250)
Thread T2 (tid=3107318, running) created by thread T-1
[failed to restore the stack]
SUMMARY: ThreadSanitizer: data race main.swift in block_destroy_helper.5
Is there anything suspicious about the code?

The comment from @Rob made me think again about the issue. I came up with the following modification of static func completeAfter, which ThreadSanitizer is happy with *):
static func completeAfter(_ delay: Double, value: T) -> Foo<T> {
    let returnedFoo = Foo<T>()
    let queue = DispatchQueue(label: "timerEventHandler")
    queue.async {
        let timer = DispatchSource.makeTimerSource(queue: queue)
        timer.setEventHandler {
            returnedFoo.complete(value: value)
            timer.cancel()
        }
        timer.scheduleOneshot(deadline: .now() + delay)
        timer.resume()
    }
    return returnedFoo
}
This change ensures that all accesses to timer are executed on queue, which synchronises the timer that way. This same solution didn't work in my "real" code, but that's probably due to other external factors.
*) We should never assume our code has no races just because ThreadSanitizer doesn't detect one. There may be external factors which just happen to "erase" a potential data race (for example, the dispatch library happens to execute two blocks with a conflicting access on the same thread, so no data race can occur).
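For reference (this is not part of the original answer): on current SDKs, scheduleOneshot(deadline:) has been deprecated in favour of schedule(deadline:repeating:leeway:), so the same fix would be spelled roughly like this:
static func completeAfter(_ delay: Double, value: T) -> Foo<T> {
    let returnedFoo = Foo<T>()
    let queue = DispatchQueue(label: "timerEventHandler")
    queue.async {
        let timer = DispatchSource.makeTimerSource(queue: queue)
        timer.setEventHandler {
            returnedFoo.complete(value: value)
            timer.cancel()
        }
        // schedule(deadline:) replaces the deprecated scheduleOneshot(deadline:);
        // with no `repeating` argument the timer fires once.
        timer.schedule(deadline: .now() + delay)
        timer.resume()
    }
    return returnedFoo
}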

Related

ThreadSanitizer vs. async/await in XCTest

I'm trying to test an async function that creates a WKWebsiteDataStore with some HTTPCookies.
import Foundation
import WebKit
@MainActor
class HttpCookieUtility {
    func createWebsiteDataStore(httpCookies: [HTTPCookie]) async -> WKWebsiteDataStore {
        let websiteDataStore = WKWebsiteDataStore.nonPersistent()
        for cookie in httpCookies {
            await websiteDataStore.httpCookieStore.setCookie(cookie)
        }
        return websiteDataStore
    }
}
The unit test code is here:
import XCTest
@testable import MyFramework

final class HttpCookieUtilityTests: XCTestCase {
    func test_websiteDataStore_is_created_from_cookies() async throws {
        var httpCookies = [HTTPCookie]()
        for index in 1...20 {
            let properties: [HTTPCookiePropertyKey: Any] = [.domain: "www.example.com", .path: ".", .name: "name-\(index)", .value: "value-\(index)"]
            let httpCookie = HTTPCookie(properties: properties)!
            httpCookies.append(httpCookie)
        }
        let websiteDataStore = await HttpCookieUtility().createWebsiteDataStore(httpCookies: httpCookies)
        XCTAssertNotNil(websiteDataStore)
    }
}
The unit test passes. However, when I turn on the Thread Sanitizer in the scheme's Diagnostics settings, I see a series of warnings like the following (not all output is included here for brevity):
WARNING: ThreadSanitizer: data race (pid=47736) Read of size 8 at
0x7b6400040310 by main thread:
#0 (1) suspend resume partial function for HttpCookieUtilityTests.test_websiteDataStore_is_created_from_cookies()
HttpCookieUtilityTests.swift:18 (MyFrameworkTests:x86_64+0x1e793)
#1 swift::runJobInEstablishedExecutorContext(swift::Job*) :2 (libswift_Concurrency.dylib:x86_64+0x2a4b5)
Previous write of size 8 at 0x7b6400040310 by thread T7:
#0 HttpCookieUtilityTests.test_websiteDataStore_is_created_from_cookies()
HttpCookieUtilityTests.swift:18 (MyFrameworkTests:x86_64+0x1e613)
#1 swift::runJobInEstablishedExecutorContext(swift::Job*) :2 (libswift_Concurrency.dylib:x86_64+0x2a4b5)
Location is heap block of size 1032 at 0x7b6400040100 allocated by
thread T7:
#0 malloc :2 (libclang_rt.tsan_iossim_dynamic.dylib:x86_64+0x533ac)
#1 swift::StackAllocator<1000ul, &(swift::TaskAllocatorSlabMetadata)>::getSlabForAllocation(unsigned
long) :2 (libswift_Concurrency.dylib:x86_64+0x2f19a)
#2 swift::runJobInEstablishedExecutorContext(swift::Job*) :2 (libswift_Concurrency.dylib:x86_64+0x2a4b5)
Thread T7 (tid=397262, running) is a GCD worker thread
SUMMARY: ThreadSanitizer: data race HttpCookieUtilityTests.swift:18 in
(1) suspend resume partial function for
HttpCookieUtilityTests.test_websiteDataStore_is_created_from_cookies()
When I run the same code from a normal application, I don't see the same ThreadSanitizer warnings. I'm assuming that running XCTest tests that require the main thread is not supported or is somehow problematic, but I wanted to rule out issues with the actual code. I'm also wondering whether async/await is creating unexpected complications.
Also, I tried this alternate implementation, which does not have the ThreadSanitizer issue and is also significantly faster (presumably because the setCookie operations can be processed concurrently), but it specifically avoids using the modern concurrency support (aside from the continuation).
class HttpCookieUtility {
    func createWebsiteDataStore(httpCookies: [HTTPCookie]) async -> WKWebsiteDataStore {
        return await withCheckedContinuation { continuation in
            DispatchQueue.main.async {
                let websiteDataStore = WKWebsiteDataStore.nonPersistent()
                let waitGroup = DispatchGroup()
                for cookie in httpCookies {
                    waitGroup.enter()
                    websiteDataStore.httpCookieStore.setCookie(cookie, completionHandler: {
                        waitGroup.leave()
                    })
                }
                waitGroup.notify(queue: DispatchQueue.main) {
                    continuation.resume(returning: websiteDataStore)
                }
            }
        }
    }
}
Note that I removed the @MainActor attribute from the HttpCookieUtility class declaration.

Swift: returned result is repeated using DispatchQueue.global (qos: .userInitiated) .asyncAfter

I have the following code:
(ServerApiManager.sharedInstance.fetchMessages is a function that calls the API.)
The result returned is:
====didRequestReloadThread ATC Chat Thread
DispatchQueue.global
633
fetchMessages
DispatchQueue.global
633
fetchMessages
ServerApiManager.sharedInstance.fetchMessage
DispatchQueue
DispatchQueue messagesCollectionView
ServerApiManager.sharedInstance.fetchMessage
DispatchQueue
DispatchQueue messagesCollectionView
==> Wrong result because of the duplication.
Expected results are:
====didRequestReloadThread ATC Chat Thread
DispatchQueue.global
633
fetchMessages
ServerApiManager.sharedInstance.fetchMessage
DispatchQueue
DispatchQueue messagesCollectionView
Can anyone help?
DispatchQueue.global(qos: .userInitiated).asyncAfter(deadline: .now() + 5) {
    print("DispatchQueue.global")
    if self.messages.count > 0 {
        let lastMessage = self.messages[self.messages.count - 1]
        print(633)
        ServerApiManager.sharedInstance.fetchMessages(channel: self.channel, minId: lastMessage.id ?? 0, loggedInUser: self.user, onSuccess: { (messages) -> () in
            self.messages.append(contentsOf: messages)
            print("ServerApiManager.sharedInstance.fetchMessage")
            MessageStorage.sharedInstance.messageDic[self.channel.id] = self.messages
            print("DispatchQueue")
            DispatchQueue.main.async {
                print("DispatchQueue messagesCollectionView")
                self.messagesCollectionView.reloadData()
                self.messagesCollectionView.scrollToBottom()
            }
        }, onFailure: { (msg, logged) -> () in
        })
    }
}
As the logs show, this code block is called two times. So you should check where you call this code and find out why it is called twice. You can use breakpoints and check the stack trace to find where this method is called the second time.
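If breakpoints are awkward to use here, a quick alternative is to dump the call stack at the top of whatever method schedules this block, so the duplicate call site shows up directly in the console. A minimal sketch (purely illustrative; the method name is just taken from the log output above):
import Foundation

func didRequestReloadThread() {
    // Temporary debugging aid: print the callers of this method so the
    // second, unexpected call site is visible in the console.
    print("====didRequestReloadThread called from:")
    Thread.callStackSymbols.forEach { print($0) }

    // ... existing asyncAfter / fetchMessages code goes here ...
}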

Adding condition based on previous result on DispatchQueue

Is it possible to set a condition on the next block of a DispatchQueue? Suppose there are two API calls that should be executed sequentially, callAPI1 -> callAPI2, but callAPI2 should only be executed if callAPI1 returns true. Please check the code below for a clearer picture of the situation:
let dispatchQueue: DispatchQueue = DispatchQueue(label: "queue")
let dispatchGroup = DispatchGroup()
var isSuccess: Bool = false

dispatchGroup.enter()
dispatchQueue.sync {
    self.callAPI1(completion: { (result) in
        isSuccess = result
        dispatchGroup.leave()
    })
}

dispatchGroup.enter()
dispatchQueue.sync {
    if isSuccess { //--> This one always gets false
        self.callAPI2(completion: { (result) in
            isSuccess = result
            dispatchGroup.leave()
        })
    } else {
        dispatchGroup.leave()
    }
}

dispatchGroup.notify(queue: DispatchQueue.main, execute: {
    completion(isSuccess) //--> This one always gets false
})
Currently the above code always returns isSuccess as false even though callAPI1's call returns true, which causes only callAPI1 to be called.
All non-playground code was typed directly into the answer, so expect minor errors.
It appears that you are trying to make an asynchronous call into a synchronous one, and the way you are attempting this simply will not work. Assuming callAPI1 is asynchronous then after:
self.callAPI1(completion: { (result) in
    isSuccess = result
})
the completion block has (in all probability) not yet run, so you cannot test isSuccess immediately, as in:
self.callAPI1(completion: { (result) in
    isSuccess = result
})

if isSuccess
{
    // in all probability this will never be reached
}
Wrapping the code into a synchronous block will have no effect whatsoever:
dispatchQueue.sync
{
    self.callAPI1(completion: { (result) in
        isSuccess = result
    })
    // at this point, in all probability, the completion block
    // has not yet run, therefore...
}
// at this point it has also not run
A sync dispatch just runs its block on a different queue and waits for it to complete; if that block contains asynchronous code, as yours does, then it is not magically made synchronous - it executes asynchronously as normal, the synchronously dispatched block terminates, the sync dispatch returns, and your code continues. The sync dispatch has no real effect (apart from running the block on a different queue while blocking the current one).
If you need to sequence a number of asynchronous calls you can do it a number of ways. One method is to simply chain the calls through the completion blocks. Using this approach your code becomes:
self.callAPI1(completion: { (result) in
    if !result { completion(false) }
    else
    {
        self.callAPI2(completion: { (result) in
            completion(result)
        })
    }
})
Using Semaphores
If you have a long sequence of such calls using the above pattern then the code can become very nested, in such a case instead of nesting you can use semaphores to sequence the calls. A simple semaphore can be used to block (thread) execution, using wait(), until it is signalled (by an unblocked thread), using signal().
Notice the emphasis here on blocking, once you introduce the ability to block execution all sorts of issues have to be considered: among them are UI responsiveness - blocking the UI thread is not good; deadlock - for example if the code that will issue semaphore wait and signal operations is executing on the same thread then after a wait there will be no signal...
Here is a sample Swift Playground script to demonstrate using semaphores. The pattern follows your original code but uses a semaphore in addition to your boolean.
import Cocoa

// some convenience functions for our dummy callAPI1 & callAPI2
func random(_ range: CountableClosedRange<UInt32>) -> UInt32
{
    let lower = range.lowerBound
    let upper = range.upperBound
    return lower + arc4random_uniform(upper - lower + 1)
}

func randomBool() -> Bool
{
    return random(0...1) == 1
}

class Demo
{
    // grab the global concurrent utility queue to schedule our work on
    let workerQueue = DispatchQueue.global(qos: .utility)

    // dummy callAPI1, just pauses and then randomly returns success or failure
    func callAPI1(_ completion: @escaping (Bool) -> Void) -> Void
    {
        // do the "work" on workerQueue, which is concurrent so other work
        // can be executing, or *blocked*, on the same queue
        let pause = random(1...2)
        workerQueue.asyncAfter(deadline: .now() + Double(pause))
        {
            // produce a random success result
            let success = randomBool()
            print("callAPI1 after \(pause) -> \(success)")
            completion(success)
        }
    }

    func callAPI2(_ completion: @escaping (Bool) -> Void) -> Void
    {
        let pause = random(1...2)
        workerQueue.asyncAfter(deadline: .now() + Double(pause))
        {
            let success = randomBool()
            print("callAPI2 after \(pause) -> \(success)")
            completion(success)
        }
    }

    func runDemo(_ completion: @escaping (Bool) -> Void) -> Void
    {
        // We run the demo as a standard async function
        // which doesn't block the main thread
        workerQueue.async
        {
            print("Demo starting...")
            var isSuccess: Bool = false
            let semaphore = DispatchSemaphore(value: 0)

            // do the first call
            // this will asynchronously execute on a different thread
            // *including* its completion block
            self.callAPI1
            { (result) in
                isSuccess = result
                semaphore.signal() // signal completion
            }

            // we can safely wait for the semaphore to be
            // signalled as callAPI1 is executing on a different
            // thread so we will not deadlock
            semaphore.wait()

            if isSuccess
            {
                self.callAPI2
                { (result) in
                    isSuccess = result
                    semaphore.signal() // signal completion
                }
                semaphore.wait() // wait for completion
            }

            completion(isSuccess)
        }
    }
}

Demo().runDemo { (result) in print("Demo result: \(result)") }

// For the Playground
// ==================
// The Playground can terminate a program run once the main thread is done
// and before all async work is finished. This can result in incomplete execution
// and/or errors. To avoid this we sleep the main thread for a few seconds.
sleep(6)
print("All done")

// Run the Playground multiple times, the results should vary
// (different wait times, callAPI2 may not run). Wait until
// the "All done" before starting the next run
// (i.e. don't push stop, it confuses the Playground)
Or...
Another approach to avoid the nesting is to design functions (or operators) which take two async methods and produce a single one by implementing the nesting pattern. Long nested sequences can then be reduced to more linear sequences. This approach is left as an exercise.
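For what it's worth, a minimal sketch of such a combinator, reusing the (Bool) -> Void completion shape from the demo above (the name sequence and its exact signature are my own, not part of the answer):
// Runs `first`; only if it reports success does it run `second`.
// The combined outcome is delivered through a single completion block.
func sequence(_ first: @escaping (@escaping (Bool) -> Void) -> Void,
              _ second: @escaping (@escaping (Bool) -> Void) -> Void,
              completion: @escaping (Bool) -> Void)
{
    first { firstResult in
        if firstResult { second(completion) }
        else { completion(false) }
    }
}

// Usage with the Demo class above:
// let demo = Demo()
// sequence(demo.callAPI1, demo.callAPI2) { print("combined result: \($0)") }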
HTH

How to handle group wait result in Swift 3

I was trying the following code in a playground, but it doesn't seem to work as I expected.
The two group_async operations take about 5-6 seconds in total on my Mac.
When I set the timeout time to DispatchTime.now() + 10, "test returns" and "done" are both printed.
When I set the timeout time to DispatchTime.now() + 1 (some value that makes the group time out), nothing is printed except the output from the two group_async operations.
What I want is to suspend the group and do some clean-up when it times out, and do further operations when the group finishes successfully. Any advice is appreciated. Thanks.
import Dispatch
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

let queue = DispatchQueue.global(qos: .utility)

func test() {
    let group = DispatchGroup()
    __dispatch_group_async(group, queue) {
        var a = [String]()
        for i in 1...999 {
            a.append(String(i))
            print("appending array a...")
        }
        print("a finished")
    }
    __dispatch_group_async(group, queue) {
        var b = [String]()
        for i in 1...999 {
            b.append(String(i))
            print("appending array b...")
        }
        print("b finished")
    }
    let result = group.wait(timeout: DispatchTime.now() + 10)
    if result == .timedOut {
        group.suspend()
        print("timed out")
    }
    print("test returns")
}

queue.async {
    test()
    print("done")
}
This code snippet raises a variety of different questions:
I notice that the behavior differs a bit between the playground and when you run it in an app. I suspect it's some idiosyncrasy of needsIndefiniteExecution of PlaygroundPage and GCD. I'd suggest testing this in an app. With the caveat of the points I raise below, it works as expected when I ran this from an app.
I notice that you've employed this pattern:
__dispatch_group_async(group, queue) {
...
}
I would suggest:
queue.async(group: group) {
...
}
You are doing group.suspend(). A couple of caveats:
One suspends queues, not groups.
And if you ever call suspend(), make sure you have a corresponding call to resume() somewhere.
Also, remember that suspend() stops future blocks from starting, but it doesn't do anything with the blocks that may already be running. If you want to stop blocks that are already running, you may want to cancel them.
Finally, note that you can only suspend queues and sources that you create. You can't (and shouldn't) suspend a global queue.
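To make those caveats concrete, here is a tiny sketch (my own, purely illustrative) of the suspend/resume pairing on a queue you created yourself:
let myQueue = DispatchQueue(label: "com.example.work")

myQueue.suspend()   // blocks already running keep running; queued blocks are held back
myQueue.async { print("runs only after resume()") }
myQueue.resume()    // every suspend() must be balanced by a resume()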
I also notice that you're using wait on the same queue that you dispatched the test() call. In this case, you're getting away with that because it is a concurrent queue, but this sort of pattern invites deadlocks. I'd suggest avoiding wait altogether if you can, and certainly don't do it on the same queue that you called it from. Again, it's not a problem here, but it's a pattern that might get you in trouble in the future.
Personally, I might be inclined to use notify rather than wait to trigger the block of code to run when the two dispatched blocks are done. This eliminates any deadlock risk. And if I wanted to have a block of code to run after a certain amount of time (i.e. a timeout process), I might use a timer to trigger some cleanup process in case those two blocks were still running (perhaps canceling them; see How to stop a DispatchWorkItem in GCD?).
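A rough sketch of that notify-based approach with a separate timeout path (the 10-second figure and the polling check are my own choices, not something from the answer):
let queue = DispatchQueue.global(qos: .utility)
let group = DispatchGroup()

queue.async(group: group) { /* first work item */ }
queue.async(group: group) { /* second work item */ }

// No blocking wait: run the follow-up only when both blocks are done.
group.notify(queue: .main) {
    print("both blocks finished")
}

// Independent timeout path: after 10 seconds, check whether the group is
// still busy (a zero-length wait just polls) and clean up / cancel if so.
DispatchQueue.main.asyncAfter(deadline: .now() + 10) {
    if group.wait(timeout: .now()) == .timedOut {
        print("timed out - trigger cleanup or cancellation here")
    }
}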
@Rob has really nice detailed suggestions for each point. I have noticed that when I run Evan's code with the tweaks from Rob's notes, it seems to work in a Playground. I have not tested this in an app. Notice how group is declared outside the test function so we can call group.notify later on, where we call PlaygroundPage's finishExecution(). Also note that the notify function of DispatchGroup is a great way to do any additional work after the submitted task objects have completed. In the Playground we call notify as shown below:
import Foundation
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

let queue = DispatchQueue.global(qos: .utility)
let group = DispatchGroup()

func test() {
    queue.async(group: group) {
        var a = [String]()
        for i in 1...999 {
            a.append(String(i))
            print("appending array a...")
        }
        print("a finished")
    }
    queue.async(group: group) {
        var b = [String]()
        for i in 1...999 {
            b.append(String(i))
            print("appending array b...")
        }
        print("b finished")
    }
    DispatchQueue.global().asyncAfter(deadline: DispatchTime.now() + 0.01) {
        print("doing clean up in timeout")
    }
}

test()
print("done")

group.notify(queue: DispatchQueue.global()) {
    print("work completed")
    PlaygroundPage.current.finishExecution()
}
Just to compare different approaches, try this in your Playground:
import Foundation

func test(timeout: Double) {
    let queue = DispatchQueue(label: "test", attributes: .concurrent)
    let group = DispatchGroup()
    var stop = false
    let delay = timeout

    queue.async(group: group) {
        var str = [String]()
        var i = 0
        while i < 1000 && !stop {
            str.append(String(i))
            i += 1
        }
        print(1, "did", i, "iterations")
    }
    queue.async(group: group) {
        var str = [String]()
        var i = 0
        while i < 2000 && !stop {
            str.append(String(i))
            i += 1
        }
        print(2, "did", i, "iterations")
    }
    queue.async(group: group) {
        var str = [String]()
        var i = 0
        while i < 100 && !stop {
            str.append(String(i))
            i += 1
        }
        print(3, "did", i, "iterations")
    }
    queue.async(group: group) {
        var str = [String]()
        var i = 0
        while i < 200 && !stop {
            str.append(String(i))
            i += 1
        }
        print(4, "did", i, "iterations")
    }

    group.wait(wallTimeout: .now() + delay)
    stop = true
    queue.sync(flags: .barrier) {} // to be sure there are no more jobs in my queue
}

var start = Date()
test(timeout: 25.0)
print("test done in", Date().timeIntervalSince(start), "from max 25.0 seconds")
print()

start = Date()
test(timeout: 5.0)
print("test done in", Date().timeIntervalSince(start), "from max 5.0 seconds")
it prints (in my environment)
3 did 100 iterations
4 did 200 iterations
1 did 1000 iterations
2 did 2000 iterations
test done in 17.7016019821167 from max 25.0 seconds
3 did 100 iterations
4 did 200 iterations
2 did 697 iterations
1 did 716 iterations
test done in 5.00799399614334 from max 5.0 seconds

how to start a global thread in swift

Is there any way to start and stop a thread in Swift, and also make the thread global so that it can be used anywhere?
As shown below, this is how I create a thread in Swift:
var objThrd = SimpleClass()
let thread = NSThread(target: objThrd , selector: "createSimpleObj", object: nil)
Please give an example if feasible. Or can we achieve this via NSOperation?
Use Grand Central Dispatch
let myQueue: dispatch_queue_t = dispatch_queue_create("com.example.queue", nil)
dispatch_async(myQueue, { () -> Void in
    // Execute some code
})
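For reference, the same snippet in current Swift (Swift 3 and later) would read:
let myQueue = DispatchQueue(label: "com.example.queue")
myQueue.async {
    // Execute some code
}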
I use a small function, which uses Grand Central Dispatch, to start a new thread:
typealias FuncPointer = () -> () // Pointer to a function

func startprocess(_ f: FuncPointer)
{
    let atrb = DispatchQueue.GlobalAttributes.qosUserInteractive
    DispatchQueue.global(attributes: atrb).async(group: DispatchGroup())
    {
        f()
    }
}
To start a function as a thread, you call it like:
startprocess(myFuncName)
The function "myFuncName" is a function without parameters and without a result. This example works in Swift 3.0 beta 3 and Xcode 8 beta 3. You can start multiple threads after each other without waiting for something.
You can use the following function:
_ = group.wait(timeout: DispatchTime.distantFuture)
to wait until all threads of a specific group have finished. So you start the threads not with DispatchGroup(), but with a predefined group variable. This is an easy way to handle a group of more than one thread. Example for six threads:
let group = DispatchGroup()
let myThreadCount: Int = 6

for _ in 1...myThreadCount
{
    DispatchQueue.global(attributes: DispatchQueue.GlobalAttributes.qosUserInteractive).async(group: group)
    {
        Calculate(NumberOfSteps / ThreadCurrentCount)
    }
}

_ = group.wait(timeout: DispatchTime.distantFuture)
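Note that the GlobalAttributes spelling above is specific to the Swift 3 betas; in released Swift the same six-thread example would read roughly like this (a sketch, with the per-thread work stubbed out):
let group = DispatchGroup()
let threadCount = 6

for _ in 1...threadCount
{
    DispatchQueue.global(qos: .userInteractive).async(group: group)
    {
        // do one slice of the work here (e.g. the Calculate(...) call above)
    }
}

_ = group.wait(timeout: .distantFuture) // block until all six blocks have finished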
