Sync calls from Swift to a C-based thread-unsafe library - iOS

My Swift code needs to call some C functions that are not thread safe. All calls need to be:
1) synchronous (each invocation starts only after the previous call has returned),
2) on the same thread.
I've tried to create a queue and then access C from within a function:
let queue = DispatchQueue(label: "com.example.app.thread-1", qos: .userInitiated)

func calc(...) -> Double {
    var result: Double!
    queue.sync {
        result = c_func(...)
    }
    return result
}
This has improved the behaviour, yet I still get crashes: sometimes, though less often than before, and mostly while debugging from Xcode.
Any ideas on how to handle this better?
Edit
Based on the comments below, can somebody give a general example of how to use the Thread class to ensure sequential execution on the same thread?
Edit 2
A good example of the problem can be seen when using this wrapper around a C library:
https://github.com/PerfectlySoft/Perfect-PostgreSQL
It works fine when accessed from a single queue, but it will start producing weird errors if several dispatch queues are involved.
So I am envisaging an approach with a single executor thread which, when called, blocks the caller, performs the calculation, unblocks the caller, and returns the result. Repeat for each consecutive caller.
Something like this:
thread 1 ----------> |          | ---->
thread 2 ----------> | executor | ---->
thread 3 ----------> |  thread  | ---->
...                  |          |

If you really need to ensure that all API calls must come from a single thread, you can do so by using the Thread class plus some synchronization primitives.
For instance, a fairly straightforward implementation of that idea is provided by the SingleThreadExecutor class below:
class SingleThreadExecutor {
    private var thread: Thread!
    private let threadAvailability = DispatchSemaphore(value: 1)
    private var nextBlock: (() -> Void)?
    private let nextBlockPending = DispatchSemaphore(value: 0)
    private let nextBlockDone = DispatchSemaphore(value: 0)

    init(label: String) {
        thread = Thread(block: self.run)
        thread.name = label
        thread.start()
    }

    func sync(block: @escaping () -> Void) {
        threadAvailability.wait()
        nextBlock = block
        nextBlockPending.signal()
        nextBlockDone.wait()
        nextBlock = nil
        threadAvailability.signal()
    }

    private func run() {
        while true {
            nextBlockPending.wait()
            nextBlock!()
            nextBlockDone.signal()
        }
    }
}
A simple test to ensure the specified block is really being called by a single thread:
let executor = SingleThreadExecutor(label: "single thread test")

for i in 0..<10 {
    DispatchQueue.global().async {
        executor.sync { print("\(i) # \(Thread.current.name!)") }
    }
}

Thread.sleep(forTimeInterval: 5) /* Wait for calls to finish. */
0 # single thread test
1 # single thread test
2 # single thread test
3 # single thread test
4 # single thread test
5 # single thread test
6 # single thread test
7 # single thread test
8 # single thread test
9 # single thread test
Finally, replace DispatchQueue with SingleThreadExecutor in your code, and hopefully that fixes your rather exotic issue ;)
let singleThreadExecutor = SingleThreadExecutor(label: "com.example.app.thread-1")

func calc(...) -> Double {
    var result: Double!
    singleThreadExecutor.sync {
        result = c_func(...)
    }
    return result
}

An interesting outcome... I benchmarked the performance of the solution by Paulo Mattos that I accepted against my own earlier experiments, in which I used a much less elegant, lower-level run loop and object-reference approach to achieve the same pattern.
Playground for closure based approach:
https://gist.github.com/deze333/23d11123f02e65c456d16ffe5621e2ee
Playground for run loop & reference passing approach:
https://gist.github.com/deze333/82c0ee3e82fd250097449b1b200b7958
Using closures:
Invocations processed : 1000
Invocations duration, sec: 4.95894199609756
Cost per invocation, sec : 0.00495894199609756
Using run loop and passing object reference:
Invocations processed : 1000
Invocations duration, sec: 1.62595099210739
Cost per invocation, sec : 0.00162432666544195
Passing closures is about 3 times slower, due to closures being allocated on the heap versus simple reference passing. This confirms the performance problem of closures outlined in the excellent article Mutexes and closure capture in Swift.
The lesson: don't overuse closures when maximum performance is needed, which is often the case in mobile development.
Closures do look beautiful, though!
EDIT:
Things are much better in Swift 4 with whole-module optimisation. Closures are fast!
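For reference, the shape of such a micro-benchmark is easy to sketch. This is not the playground code linked above, just a minimal illustration: `work` is a trivial stand-in for the real c_func, and the queue label is made up.

```swift
import Foundation

// Compare a direct call against the same call funneled through
// queue.sync with a closure, the pattern used in the answers above.
func work(_ x: Int) -> Int { return x &+ 1 }

let queue = DispatchQueue(label: "bench")
let iterations = 10_000

var direct = 0
let t0 = CFAbsoluteTimeGetCurrent()
for i in 0..<iterations { direct = work(i) }
let directTime = CFAbsoluteTimeGetCurrent() - t0

var dispatched = 0
let t1 = CFAbsoluteTimeGetCurrent()
for i in 0..<iterations {
    queue.sync { dispatched = work(i) }     // closure + queue hop per call
}
let dispatchedTime = CFAbsoluteTimeGetCurrent() - t1

print("direct: \(directTime)s, queue.sync: \(dispatchedTime)s")
```

The absolute numbers depend heavily on optimisation level, which is exactly the Swift 4 point above: with whole-module optimisation the gap narrows considerably.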

Related

How to use gcd barrier in iOS?

I want to use a GCD barrier to implement a thread-safe store object, but it does not work correctly: the setter sometimes completes earlier than the getter. What's wrong with it?
https://gist.github.com/Terriermon/02c446d1238ad6ec1edb08b607b1bf05
class MutiReadSingleWriteObject<T> {
    let queue = DispatchQueue(label: "com.readwrite.concurrency", attributes: .concurrent)
    var _object: T?

    var object: T? {
        @available(*, unavailable)
        get {
            fatalError("You cannot read from this object.")
        }
        set {
            queue.async(flags: .barrier) {
                self._object = newValue
            }
        }
    }

    func getObject(_ closure: @escaping (T?) -> Void) {
        queue.async {
            closure(self._object)
        }
    }
}
func testMutiReadSingleWriteObject() {
    let store = MutiReadSingleWriteObject<Int>()
    let queue = DispatchQueue(label: "com.come.concurrency", attributes: .concurrent)
    for i in 0...100 {
        queue.async {
            store.getObject { obj in
                print("\(i) -- \(String(describing: obj))")
            }
        }
    }
    print("pre --- ")
    store.object = 1
    print("after ---")
    store.getObject { obj in
        print("finish result -- \(String(describing: obj))")
    }
}
Whenever you create a DispatchQueue, whether serial or concurrent, GCD schedules its work items onto threads drawn from a shared pool; the queue itself acts as an independent execution context. This means that whenever you instantiate a MutiReadSingleWriteObject<T> object, its queue forms a dedicated context for synchronizing your setter and getObject method.
However: this also means that in your testMutiReadSingleWriteObject method, the queue that you use to execute the 100 getObject calls in a loop is its own context too. This means that the method has 3 separate execution contexts to coordinate between:
1. The context that testMutiReadSingleWriteObject is called in (likely the main thread),
2. The context that store.queue maintains, and
3. The context that queue maintains
These contexts run their work in parallel, and this means that an async dispatch call like
queue.async {
    store.getObject { ... }
}
will enqueue a work item to run on queue's thread at some point, and keep executing code on the current thread.
This means that by the time you get to running store.object = 1, you are guaranteed to have scheduled 100 work items on queue, but crucially, how and when those work items actually start executing are up to the queue, the CPU scheduler, and other environmental factors. While somewhat rare, this does mean that there's a chance that none of those tasks have gotten to run before the assignment of store.object = 1, which means that by the time they do happen, they'll see a value of 1 stored in the object.
In terms of ordering, you might see a combination of:
1. 100 getObject calls, then store.object = 1
2. N getObject calls, then store.object = 1, then (100 - N) getObject calls
3. store.object = 1, then 100 getObject calls
Case (2) can actually prove the behavior you're looking to confirm: all of the calls before store.object = 1 should return nil, and all of the ones after should return 1. If you have a getObject call after the setter that returns nil, you'd know you have a problem. But, this is pretty much impossible to control the timing of.
In terms of how to address the timing issue here: for this method to be meaningful, you'll need to drop one thread to properly coordinate all of your calls to store, so that all accesses to it are on the same thread.
This can be done by either:
Dropping queue, and just accessing store on the thread that the method was called on. This does mean that you cannot call store.getObject asynchronously
Make all calls through queue, whether sync or async. This gives you the opportunity to better control exactly how the store methods are called
Either way, both of these approaches can have different semantics, so it's up to you to decide what you want this method to be testing. Do you want to be guaranteed that all 100 calls will go through before store.object = 1 is reached? If so, you can get rid of queue entirely, because you don't actually want those getters to be called asynchronously. Or, do you want to try to cause the getters and the setter to overlap in some way? Then stick with queue, but it'll be more difficult to ensure you get meaningful results, because you aren't guaranteed to have stable ordering with the concurrent calls.
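As a sketch of the coordinated option, a DispatchGroup can make the ordering deterministic: wait until every getObject callback has actually run, then perform the write. The Store class below is a minimal stand-in for the question's MutiReadSingleWriteObject; names and labels are illustrative.

```swift
import Foundation

// Minimal stand-in for the question's barrier-protected store.
final class Store<T> {
    private let queue = DispatchQueue(label: "store", attributes: .concurrent)
    private var _object: T?
    func setObject(_ value: T?) {
        queue.async(flags: .barrier) { self._object = value }
    }
    func getObject(_ closure: @escaping (T?) -> Void) {
        queue.async { closure(self._object) }
    }
}

let store = Store<Int>()
let workers = DispatchQueue(label: "workers", attributes: .concurrent)
let group = DispatchGroup()
let resultsLock = NSLock()
var results = [Int?]()

for _ in 0..<100 {
    group.enter()                         // pair enter/leave around the *callback*,
    workers.async {                       // not just the outer dispatch
        store.getObject { obj in
            resultsLock.lock(); results.append(obj); resultsLock.unlock()
            group.leave()
        }
    }
}

group.wait()          // every read callback has completed here...
store.setObject(1)    // ...so the write is guaranteed to come last
```

Because the group is left inside each callback, all 100 reads are guaranteed to observe the pre-write value (nil) before the write happens.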

How to handle Race Condition Read/Write Problem in Swift?

I have got a concurrent queue with dispatch barrier from Raywenderlich post Example
private let concurrentPhotoQueue = DispatchQueue(label: "com.raywenderlich.GooglyPuff.photoQueue", attributes: .concurrent)
Where write operations is done in
func addPhoto(_ photo: Photo) {
    concurrentPhotoQueue.async(flags: .barrier) { [weak self] in
        // 1
        guard let self = self else {
            return
        }
        // 2
        self.unsafePhotos.append(photo)
        // 3
        DispatchQueue.main.async { [weak self] in
            self?.postContentAddedNotification()
        }
    }
}
While the read operation is done in
var photos: [Photo] {
    var photosCopy: [Photo]!
    // 1
    concurrentPhotoQueue.sync {
        // 2
        photosCopy = self.unsafePhotos
    }
    return photosCopy
}
This resolves the race condition. But why is only the write operation done with a barrier, while the read is done with sync? Why not the other way around: read with a barrier and write with sync? After all, a sync write would wait like a lock, and a barrier read would be the only operation running on the queue.
set(10, forKey: "Number")
print(object(forKey: "Number"))
set(20, forKey: "Number")
print(object(forKey: "Number"))

public func set(_ value: Any?, forKey key: String) {
    concurrentQueue.sync {
        self.dictionary[key] = value
    }
}

public func object(forKey key: String) -> Any? {
    // returns before concurrentQueue has finished the operation,
    // because the block is dispatched asynchronously
    var result: Any?
    concurrentQueue.async(flags: .barrier) {
        result = self.dictionary[key]
    }
    return result
}
With this flipped behavior I am getting nil both times; with the barrier on the write it correctly gives 10 and 20.
You ask:
Why is the read not done with barrier ...?
In this reader-writer pattern, you don’t use barrier with “read” operations because reads are allowed to happen concurrently with respect to other “reads”, without impacting thread-safety. It’s the whole motivating idea behind reader-writer pattern, to allow concurrent reads.
So, you could use barrier with “reads” (it would still be thread-safe), but it would unnecessarily negatively impact performance if multiple “read” requests happened to be called at the same time. If two “read” operations can happen concurrently with respect to each other, why not let them? Don’t use barriers (reducing performance) unless you absolutely need to.
Bottom line, only “writes” need to happen with barrier (ensuring that they’re not done concurrently with respect to any “reads” or “writes”). But no barrier is needed (or desired) for “reads”.
[Why not] ... write with sync?
You could “write” with sync, but, again, why would you? It would only degrade performance. Let’s imagine that you had some reads that were not yet done and you dispatched a “write” with a barrier. The dispatch queue will ensure for us that a “write” dispatched with a barrier won’t happen concurrently with respect to any other “reads” or “writes”, so why should the code that dispatched that “write” sit there and wait for the “write” to finish?
Using sync for writes would only negatively impact performance, and offers no benefit. The question is not “why not write with sync?” but rather “why would you want to write with sync?” And the answer to that latter question is, you don’t want to wait unnecessarily. Sure, you have to wait for “reads”, but not “writes”.
You mention:
With the flip behavior, I am getting nil ...
Yep, so let's consider your hypothetical "read" operation with async:
public func object(forKey key: String) -> Any? {
    var result: Any?
    concurrentQueue.async {
        result = self.dictionary[key]
    }
    return result
}
This effectively says: "set up a variable called result, dispatch a task to retrieve it asynchronously, but don't wait for the read to finish before returning whatever result currently contains (i.e., nil)."
You can see why reads must happen synchronously, because you obviously can’t return a value before you update the variable!
So, reworking your latter example, you read synchronously without barrier, but write asynchronously with barrier:
public func object(forKey key: String) -> Any? {
    return concurrentQueue.sync {
        self.dictionary[key]
    }
}

public func set(_ value: Any?, forKey key: String) {
    concurrentQueue.async(flags: .barrier) {
        self.dictionary[key] = value
    }
}
Note: because the sync method in the "read" operation returns whatever its closure returns, you can simplify the code quite a bit, as shown above.
Or, personally, rather than object(forKey:) and set(_:forKey:), I’d just write my own subscript operator:
public subscript(key: String) -> Any? {
    get {
        concurrentQueue.sync {
            dictionary[key]
        }
    }
    set {
        concurrentQueue.async(flags: .barrier) {
            self.dictionary[key] = newValue
        }
    }
}
Then you can do things like:
store["Number"] = 10
print(store["Number"])
store["Number"] = 20
print(store["Number"])
Note, if you find this reader-writer pattern too complicated, note that you could just use a serial queue (which is like using a barrier for both “reads” and “writes”). You’d still probably do sync “reads” and async “writes”. That works, too. But in environments with high contention “reads”, it’s just a tad less efficient than the above reader-writer pattern.
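A minimal sketch of that serial-queue variant; the type name and queue label are illustrative. Reads stay sync (the caller needs the value back), writes can stay async, and no barriers are needed because a serial queue never runs two blocks at once:

```swift
import Foundation

// Serial-queue store: every access is serialized, so the barrier flag
// is unnecessary. Sync reads, async writes, just as in the answer above.
final class SerialStore {
    private let queue = DispatchQueue(label: "serial.store")  // serial by default
    private var dictionary = [String: Any]()

    subscript(key: String) -> Any? {
        get { return queue.sync { dictionary[key] } }
        set { queue.async { self.dictionary[key] = newValue } }
    }
}

let store = SerialStore()
store["Number"] = 10
print(store["Number"] as Any)
store["Number"] = 20
print(store["Number"] as Any)
```

Because a serial queue is FIFO, the async write enqueued first is guaranteed to run before the sync read that follows it, so the reads here see 10 and then 20 deterministically.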

How to make thread safe code block in swift

I tried multiple solution/answers given on stackoverflow but none of them worked for me. Some of them is as below :
https://stackoverflow.com/a/30495424/3145189
Is this safe to call wait() of DispatchSemaphore several times at one time?
https://stackoverflow.com/a/37155631/3145189
I am trying to achieve a very simple thing: a code block or function should execute serially, regardless of which thread it is called from.
My Example code :
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    DispatchQueue.global().async {
        self.testLog(name: "first")
    }
    DispatchQueue.global().async {
        self.testLog(name: "second")
    }
    DispatchQueue.main.async {
        self.testLog(name: "mainthread")
    }
    return true
}

func testLog(name: String) -> Void {
    for i in 1..<1000 {
        print("thread test \(i) name =\(name)")
    }
}
So output should be like -
first thread call
thread test 1 name =first
thread test 2 name =first
thread test 3 name =first
.
.
.
thread test 999 name =first
second thread call
thread test 1 name =second
thread test 2 name =second
.
.
.
thread test 999 name =second
main thread call
thread test 1 name =mainthread
thread test 2 name =mainthread
.
.
.
thread test 999 name =mainthread
If the function is called on the first thread, it should continue printing logs for that call only. The order of the calls can vary, I don't care: even if it prints the mainthread logs first, then second, then first, it doesn't matter, as long as the logs are grouped.
This will execute the calls serially.
Keep a reference to serialQueue and you can submit blocks from any thread.
let serialQueue = DispatchQueue(label: "serial_queue")

serialQueue.async {
    self.testLog(name: "first")
}
serialQueue.async {
    self.testLog(name: "second")
}
serialQueue.async {
    self.testLog(name: "third")
}
I am trying to achieve a very simple thing: a code block or function should execute serially, regardless of which thread it is called from.
To execute serially you use a Dispatch serial queue. If you were writing a class or struct, you could use a static let at class/struct level to store the queue to which your serialising function dispatches. A static let in this case is equivalent to a "class variable" in some languages.
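A sketch of that class-level static let variant; the class name and queue label are illustrative, the loop is shortened, and the body returns its lines (sync returns the closure's value) so the result is easy to inspect:

```swift
import Foundation

// One queue shared by all instances: every testLog call in the whole
// app serializes through it, whatever thread the caller is on.
class LogWorker {
    private static let serialQueue = DispatchQueue(label: "com.example.app.log")

    func testLog(name: String) -> [String] {
        // sync returns the closure's value, so callers get the lines back.
        return LogWorker.serialQueue.sync {
            (1..<5).map { "thread test \($0) name =\(name)" }
        }
    }
}
```

Because the queue is serial, two concurrent testLog calls can never interleave their bodies, which is exactly the grouping the question asks for.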
If you were writing in (Objective-)C, such variables could also be declared at the function level, that is, a variable with global lifetime but with scope limited to within the function. Swift does not support these within a function, but you can scope a struct to a function...
func testLog(name: String) -> Void {
    struct LocalStatics {
        static let privateQueue = DispatchQueue(label: "testLogQueue")
    }

    // Run the function body on the serial queue - we could use async here
    // and the body would still run without interleaving with other calls,
    // but then the caller would not have to wait for it to finish.
    LocalStatics.privateQueue.sync {
        for i in 1..<1000 {
            print("thread test \(i) name =\(name)")
        }
    }
}
(For a debate on "local statics" in Swift see this SO Q&A)

Mutex alternatives in swift

I have a shared-memory between multiple threads. I want to prevent these threads access this piece of memory at a same time. (like producer-consumer problem)
Problem:
A thread adds elements to a queue and another thread reads these elements and deletes them. They shouldn't access the queue simultaneously.
One solution to this problem is to use Mutex.
As I found, there is no Mutex in Swift. Is there any alternatives in Swift?
There are many solutions for this but I use serial queues for this kind of action:
let serialQueue = DispatchQueue(label: "queuename")

serialQueue.sync {
    // call some code here; I pass a closure from a method here
}
Edit/Update: Also for semaphores:
let higherPriority = DispatchQueue.global(qos: .userInitiated)
let lowerPriority = DispatchQueue.global(qos: .utility)
let semaphore = DispatchSemaphore(value: 1)

func letUsPrint(queue: DispatchQueue, symbol: String) {
    queue.async {
        debugPrint("\(symbol) -- waiting")
        semaphore.wait() // requesting the resource
        for i in 0...10 {
            print(symbol, i)
        }
        debugPrint("\(symbol) -- signal")
        semaphore.signal() // releasing the resource
    }
}

letUsPrint(queue: lowerPriority, symbol: "Low Priority Queue Work")
letUsPrint(queue: higherPriority, symbol: "High Priority Queue Work")

RunLoop.main.run()
Thanks to beshio's comment, you can use semaphore like this:
let semaphore = DispatchSemaphore(value: 1)
use wait before using the resource:
semaphore.wait()
// use the resource
and after using release it:
semaphore.signal()
Do this in each thread.
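Putting those fragments together for the question's producer-consumer queue, a sketch might look like this. The thread setup and names are illustrative, and the consumer busy-waits for brevity; a real implementation would block on a second semaphore counting available items.

```swift
import Foundation

// A queue shared by a producer and a consumer thread, guarded by a
// semaphore with initial value 1 used as a binary mutex. `done` just
// lets the main thread wait for the consumer to finish.
let mutex = DispatchSemaphore(value: 1)
let done = DispatchSemaphore(value: 0)
var sharedQueue = [Int]()
var consumed = [Int]()

let producer = Thread {
    for i in 0..<100 {
        mutex.wait()                 // enter critical section
        sharedQueue.append(i)
        mutex.signal()               // leave critical section
    }
}

let consumer = Thread {
    var count = 0
    while count < 100 {
        mutex.wait()
        if !sharedQueue.isEmpty {
            consumed.append(sharedQueue.removeFirst())
            count += 1
        }
        mutex.signal()
    }
    done.signal()
}

producer.start()
consumer.start()
done.wait()                          // both threads have finished here
print(consumed.count)
```

Since the producer appends in order and the consumer removes from the front, the consumed elements come out as 0 through 99 in order, with the mutex guaranteeing the two threads never touch the array at the same time.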
As people commented (including me), there are several ways to achieve this kind of lock, but I think a dispatch semaphore is better than the others because it seems to have the least overhead. As noted in Apple's documentation ("Replacing Semaphore Code"), it doesn't drop down into kernel space unless the semaphore is already locked (i.e. zero), which is the only case where the code goes into the kernel to switch threads. I think the semaphore is non-zero most of the time (which is, of course, an app-specific matter), so we can avoid a lot of overhead.
One more comment on dispatch semaphores, covering the opposite scenario. If your threads have different execution priorities, and the higher-priority threads have to lock the semaphore for a long time, a dispatch semaphore may not be the solution. This is because there is no "queue" among the waiting threads. What happens in this case is that the higher-priority threads acquire and lock the semaphore most of the time, while the lower-priority threads can lock it only occasionally and thus spend most of their time waiting. If this behavior is not good for your application, you have to consider a dispatch queue instead.
You can use NSLock or NSRecursiveLock. If you need to call one locking function from another locking function use recursive version.
class X {
    let lock = NSLock()

    func doSome() {
        lock.lock()
        defer { lock.unlock() }
        // do something here
    }
}
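And a sketch of the recursive variant, for the case mentioned above where one locking method calls another (the Counter class is illustrative):

```swift
import Foundation

final class Counter {
    private let lock = NSRecursiveLock()
    private(set) var value = 0

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        value += 1
    }

    func incrementTwice() {
        lock.lock()
        defer { lock.unlock() }
        // Safe: NSRecursiveLock lets the same thread re-acquire the lock.
        // With a plain NSLock these nested calls would deadlock.
        increment()
        increment()
    }
}
```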

GCD differences in Swift 3

I was studying Grand Central Dispatch when I noticed Swift 3 changed its syntax.
So, is this:
let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue) { () -> Void in
    let img1 = Downloader.downloadImageWithURL(imageURLs[0])
    dispatch_async(dispatch_get_main_queue(), {
        self.imageView1.image = img1
    })
}
any different from this one?
DispatchQueue.global(qos: .default).async { [weak self] () -> Void in
    let img1 = Downloader.downloadImageWithURL(imageURLs[0])
    DispatchQueue.main.async { () -> Void in
        self?.imageView1.image = img1
    }
}
Should I create a variable to contain DispatchQueue.global(qos: .default).async?
Swift 3 brings many improvements to Grand Central Dispatch syntax and usage.
Previously, we would choose the dispatch method (sync vs async) and then the queue we wanted to dispatch our task to. The updated GCD reverses this order - we first choose the queue and then apply a dispatch method.
DispatchQueue.global(qos: .default).async {
    // Background thread
    DispatchQueue.main.async {
        // UI updates
    }
}
Queue attributes
You will notice that queues now take attributes on init. This is a Swift OptionSet and can include queue options such as serial vs concurrent, memory and activity management options and the quality of service (.default, .userInteractive, .userInitiated, .utility and .background).
The quality of service replaces the old priority attributes that were deprecated in iOS8. If you were used to priority queues, here’s how they map over to QOS cases:
* DISPATCH_QUEUE_PRIORITY_HIGH: .userInitiated
* DISPATCH_QUEUE_PRIORITY_DEFAULT: .default
* DISPATCH_QUEUE_PRIORITY_LOW: .utility
* DISPATCH_QUEUE_PRIORITY_BACKGROUND: .background
Work items
Queues are not the only part of GCD to get a Swift OptionSet. There’s an updated Swift syntax for work items too:
let workItem = DispatchWorkItem(qos: .userInitiated, flags: .assignCurrentContext) {
    // Do stuff
}
queue.async(execute: workItem)
A work item can now declare a quality of service and/or flags on init. Both of these are optional and affect the execution of the work item.
dispatch_once
dispatch_once was very useful for initialisation code and other functions that were to be executed once and only once.
In Swift 3, dispatch_once is deprecated and should be replaced with either global or static variables and constants.
// Examples of dispatch_once replacements with global or static constants and variables.
// In all three, the initialiser is called only once.

// Static properties (useful for singletons).
class Object {
    static let sharedInstance = Object()
}

// Global constant.
let constant = Object()

// Global variable.
var variable: Object = {
    let variable = Object()
    variable.doSomething()
    return variable
}()
dispatch_assert
Also new in this year’s Apple OS releases are dispatch preconditions. These replace dispatch_assert and allow you to check whether or not you are on the expected thread before executing code. This is particularly useful for functions that update the UI and must be executed on the main queue. Here’s a simple example:
let queue = DispatchQueue.global(qos: .userInitiated)
let mainQueue = DispatchQueue.main

mainQueue.async {
    dispatchPrecondition(condition: .notOnQueue(mainQueue))
    // This code won't execute
}

queue.async {
    dispatchPrecondition(condition: .onQueue(queue))
    // This code will execute
}
Source: https://medium.com/swift-and-ios-writing/a-quick-look-at-gcd-and-swift-3-732bef6e1838#.7hdtfwxb4
Aside from the first approach not weakifying self, both calls are equivalent.
Whether or not you create a variable is up to your (convenience) preferences and does not make a technical difference.
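To see what "not weakifying self" means in practice, here is a small sketch (the Worker class is illustrative): an object captured only weakly is not kept alive by a pending async block, whereas a strong capture would retain it until the block runs.

```swift
import Foundation

final class Worker {
    let name = "worker"
    deinit { print("\(name) deallocated") }
}

let queue = DispatchQueue(label: "capture.demo")
queue.suspend()                       // hold the block so we control timing

var weakResult = "not run"
do {
    let w = Worker()
    queue.async { [weak w] in         // weak: does not keep w alive
        weakResult = w?.name ?? "already deallocated"
    }
}                                     // w leaves scope and is deallocated here

queue.resume()
queue.sync { }                        // wait for the async block to finish
print(weakResult)
```

With `[weak w]` the object is gone by the time the block runs; with a strong capture (no capture list) the block itself would have kept the object alive and `weakResult` would have been "worker".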
