Swift serial queue dies at runtime - iOS

In my app I have a task that does a lot of math. If I run this task on the main queue, the screen freezes for a few seconds after each call, but it works. If I run the task on another queue, it does nothing starting from some random iteration: on the main queue I get a debug message on every iteration, but on the other queue the messages stop after a random iteration. It looks like the queue is dying for some reason. CPU and memory usage don't change and stay at 50-70%. I thought about an endless loop, a deadlock, or something similar in the function, but on the main queue it always works fine. What is going wrong?
class MyClass {
    let serialQueue = DispatchQueue(
        label: "com.notrealcompany.hardMathematics",
        qos: .userInteractive
    )

    func doStuff() {
        serialQueue.async {
            node.getArea()
            debugPrint("get area call")
        }
    }
}
Note: serialQueue is already an instance variable here, but that doesn't change the situation.

It sounds like serialQueue is being deallocated when the method your code is in returns. Try moving serialQueue's declaration to an instance variable instead of a local variable.
class MyClass {
    let serialQueue = DispatchQueue(
        label: "com.notrealcompany.hardMathematics",
        qos: .userInteractive
    )

    func doStuff() {
        serialQueue.async {
            node.getArea()
            debugPrint("get area call")
        }
    }
}
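For illustration, here is a self-contained variant you can run as a script. MathWorker and the summing loop are stand-ins for the asker's class and node.getArea(), which isn't shown in the question:

import Foundation

final class MathWorker {
    // Stored as an instance property, so the queue lives as long as the worker.
    private let serialQueue = DispatchQueue(
        label: "com.notrealcompany.hardMathematics",
        qos: .userInteractive
    )

    func doStuff(iteration: Int) {
        serialQueue.async {
            // Stand-in for the heavy math.
            let result = (0..<1_000_000).reduce(0, &+)
            debugPrint("iteration \(iteration) finished: \(result)")
        }
    }
}

// The owner must itself stay alive: if `worker` were a local that goes out of
// scope with nothing retaining it, the async work would lose its context.
let worker = MathWorker()
for i in 0..<10 { worker.doStuff(iteration: i) }

RunLoop.main.run() // keep the process alive so the async work can finish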

Related

How to use gcd barrier in iOS?

I want to use a GCD barrier to implement a safe store object, but it doesn't work correctly: the setter sometimes runs earlier than getters that were submitted first. What's wrong with it?
https://gist.github.com/Terriermon/02c446d1238ad6ec1edb08b607b1bf05
class MutiReadSingleWriteObject<T> {
    let queue = DispatchQueue(label: "com.readwrite.concurrency", attributes: .concurrent)
    var _object: T?

    var object: T? {
        @available(*, unavailable)
        get {
            fatalError("You cannot read from this object.")
        }
        set {
            queue.async(flags: .barrier) {
                self._object = newValue
            }
        }
    }

    func getObject(_ closure: @escaping (T?) -> Void) {
        queue.async {
            closure(self._object)
        }
    }
}

func testMutiReadSingleWriteObject() {
    let store = MutiReadSingleWriteObject<Int>()
    let queue = DispatchQueue(label: "com.come.concurrency", attributes: .concurrent)

    for i in 0...100 {
        queue.async {
            store.getObject { obj in
                print("\(i) -- \(String(describing: obj))")
            }
        }
    }

    print("pre --- ")
    store.object = 1
    print("after ---")

    store.getObject { obj in
        print("finish result -- \(String(describing: obj))")
    }
}
When you create a DispatchQueue, whether serial or concurrent, it gets its own independent execution context: GCD draws threads from a shared pool to run the work items you submit, and schedules them independently of every other queue. This means that whenever you instantiate a MutiReadSingleWriteObject<T> object, its queue independently coordinates your setter and getObject method.
However: this also means that in your testMutiReadSingleWriteObject method, the queue that you use to execute the 100 getObject calls in a loop has its own execution context too. This means that the method has 3 separate contexts to coordinate between:
The thread that testMutiReadSingleWriteObject is called on (likely the main thread),
The context that store.queue maintains, and
The context that queue maintains
These contexts run their work in parallel, and this means that an async dispatch call like
queue.async {
    store.getObject { ... }
}
will enqueue a work item to run on queue at some point, and keep executing code on the current thread.
This means that by the time you get to running store.object = 1, you are guaranteed to have scheduled 100 work items on queue, but crucially, how and when those work items actually start executing is up to the queue, the CPU scheduler, and other environmental factors. While somewhat rare, this does mean that there's a chance that none of those tasks have gotten to run before the assignment of store.object = 1, which means that by the time they do happen, they'll see a value of 1 stored in the object.
In terms of ordering, you might see a combination of:
100 getObject calls, then store.object = 1
N getObject calls, then store.object = 1, then (100 - N) getObject calls
store.object = 1, then 100 getObject calls
Case (2) can actually prove the behavior you're looking to confirm: all of the calls before store.object = 1 should return nil, and all of the ones after should return 1. If you have a getObject call after the setter that returns nil, you'd know you have a problem. But this timing is pretty much impossible to control.
In terms of how to address the timing issue here: for this test to be meaningful, you'll need to drop one of these contexts, so that all of your calls to store are coordinated from the same place and happen in a known order.
This can be done by either:
Dropping queue, and just accessing store on the thread that the method was called on. This does mean that you cannot call store.getObject asynchronously
Make all calls through queue, whether sync or async. This gives you the opportunity to better control exactly how the store methods are called
Either way, the two approaches have different semantics, so it's up to you to decide what you want this method to be testing. Do you want to be guaranteed that all 100 calls will go through before store.object = 1 is reached? If so, you can get rid of queue entirely, because you don't actually want those getters to be called asynchronously. Or do you want to try to cause the getters and the setter to overlap in some way? Then stick with queue, but it'll be more difficult to ensure you get meaningful results, because you aren't guaranteed stable ordering with the concurrent calls.
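As a minimal sketch of the first option, here is the same test with the extra concurrent queue dropped. Everything is now submitted to store.queue from a single thread in a known order, so the barrier setter gives deterministic results: the 100 reads enqueued before the write should all see nil, and the final read should see 1:

func testMutiReadSingleWriteObject() {
    let store = MutiReadSingleWriteObject<Int>()

    // No second concurrent queue: every call below is submitted to
    // store.queue from this one thread, in order.
    for i in 0...100 {
        store.getObject { obj in
            print("\(i) -- \(String(describing: obj))") // all print nil
        }
    }

    print("pre --- ")
    store.object = 1 // barrier: runs only after the 100 reads above finish
    print("after ---")

    store.getObject { obj in
        print("finish result -- \(String(describing: obj))") // prints 1
    }
}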

Prevent multiple async calls to function

I have a thread setup where I call a setup() function:
let queue = DispatchQueue(label: "my-queue", qos: .utility)

queue.async {
    self.setup {
    }
}
If the .async block is triggered multiple times, I want the setup() function to be called either:
multiple times, just not at the same time.
or
one time, with previously unfinished calls being cancelled, and only the last call of the function returning.
Either is acceptable in my case.
Is there a Swifty way to accomplish this, or is RxSwift the only solution?
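No RxSwift needed; plain GCD can do the second behavior with a DispatchWorkItem. A minimal sketch, assuming scheduleSetup() is always called from a single thread (e.g. the main thread) so the pendingSetup property needs no extra protection; the type and method names here are made up:

import Foundation

final class SetupRunner {
    private let queue = DispatchQueue(label: "my-queue", qos: .utility)
    private var pendingSetup: DispatchWorkItem?

    // Cancels any setup() that hasn't started yet, then enqueues the newest
    // one. Because `queue` is serial, the setup() calls that do run can never
    // overlap, which also satisfies the first acceptable behavior.
    func scheduleSetup() {
        pendingSetup?.cancel() // only prevents items that haven't started yet
        let item = DispatchWorkItem { [weak self] in
            self?.setup {
                // completion
            }
        }
        pendingSetup = item
        queue.async(execute: item)
    }

    private func setup(_ completion: @escaping () -> Void) {
        // long-running work...
        completion()
    }
}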

GCD serial async queue vs serial sync queue nested in async

I have to protect a critical section of my code.
I don't want the caller to be blocked by this function, which can be time consuming, so I'm creating a serial queue with background QoS and dispatching asynchronously:
private let someQueue = DispatchQueue(label: "\(type(of: self)).someQueue", qos: .background)

func doSomething() {
    self.someQueue.async {
        // critical section
    }
}
From my understanding, the function will return directly on the calling thread without blocking.
I've also seen code elsewhere that first dispatches asynchronously on a global queue, then synchronously on a serial queue:
private let someQueue2 = DispatchQueue(label: "\(type(of: self)).someQueue2")

func doSomething() {
    DispatchQueue.global(qos: .background).async {
        self.someQueue2.sync {
            // critical section
        }
    }
}
What's the difference between the two approaches?
Which is the right approach?
In the first approach, the calling thread is not blocked, and the task (the critical section) passed in the async block is executed in the background.
In the second approach, the calling thread is also not blocked, but a "background" global-queue thread sits waiting while the sync block (the critical section) is executed by another thread.
I don't know what you do in your critical section, but the first approach seems the best one. Note that the background QoS is quite slow; consider the default QoS for your queue unless you know what you are doing. Also note that the convention is to use your bundle identifier as the label for your queue. So, something like this:
private lazy var someQueue = DispatchQueue(label: "\(Bundle.main.bundleIdentifier ?? "").\(type(of: self)).someQueue")
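Putting the suggestions together, a minimal sketch of the first approach (note the lazy, which lets the stored property's initializer reference self; without it, type(of: self) is not available in a property initializer):

import Foundation

final class CriticalSectionOwner {
    // Default QoS and a bundle-identifier-based label, per the advice above.
    private lazy var someQueue = DispatchQueue(
        label: "\(Bundle.main.bundleIdentifier ?? "").\(type(of: self)).someQueue"
    )

    func doSomething() {
        someQueue.async {
            // critical section: runs serially; the caller has already returned
        }
    }
}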

Synchronise async tasks in a serial queue

let serialQueue = DispatchQueue(label: "Serial Queue")

func performCriticalSectionTask() {
    serialQueue.async {
        performLongRunningAsyncTask()
    }
}

func performLongRunningAsyncTask() {
    /// some long running task
}
The function performCriticalSectionTask() can be called many times from different places. I want only one invocation of it to run at a time, so I put the critical section inside a serial queue's async block.
The problem is that the critical section itself, performLongRunningAsyncTask(), returns immediately, so the serial queue will not wait for the current task to complete before starting the next one.
How can I solve this problem?
If performLongRunningAsyncTask only ran on one thread, it would be called only once at a time. In your case it delegates its work to another thread, so wrapping the call in a serial queue doesn't help: the actual work ends up on another thread anyway.
You could add checks in the method itself; the simplest way is a boolean flag. (Or you could do these checks in the class that executes this method, with a completion handler.)
Other ways are dispatch groups, semaphores, or locks.
If you still need later calls to be executed rather than skipped, use a dispatch group, OperationQueue, or semaphore; one semaphore-based sketch follows the snippet below.
// Assumes `serialQueue` (serial) and `isAlreadyRunning` are properties of the enclosing class.
func performLongRunningAsyncTask() {
    let shouldRun: Bool = serialQueue.sync {
        if isAlreadyRunning { return false } // a bare `return` would only exit this closure
        isAlreadyRunning = true
        return true
    }
    guard shouldRun else { return }

    asyncTask { result in
        self.serialQueue.sync {
            self.isAlreadyRunning = false
        }
    }
}
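If later calls must still run rather than be skipped, a semaphore can gate them one at a time. A minimal sketch, assuming the long-running task can be given a completion handler; the completion parameter and the names here are illustrative, not from the original post:

import Foundation

let gateQueue = DispatchQueue(label: "com.example.task.gate") // serial
let gate = DispatchSemaphore(value: 1)

func performCriticalSectionTask() {
    gateQueue.async {
        gate.wait() // blocks gateQueue's thread until the previous task signals
        performLongRunningAsyncTask {
            gate.signal() // releases the gate from the task's completion handler
        }
    }
}

func performLongRunningAsyncTask(completion: @escaping () -> Void) {
    // stand-in for the real async work
    DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
        completion()
    }
}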

Mutex alternatives in swift

I have memory shared between multiple threads. I want to prevent these threads from accessing this piece of memory at the same time (like the producer-consumer problem).
Problem:
One thread adds elements to a queue and another thread reads these elements and deletes them. They shouldn't access the queue simultaneously.
One solution to this problem is to use a mutex.
As far as I can tell, there is no Mutex type in Swift. Are there any alternatives in Swift?
There are many solutions for this but I use serial queues for this kind of action:
let serialQueue = DispatchQueue(label: "queuename")

serialQueue.sync {
    // call some code here; I pass a closure from a method
}
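Applied to the producer-consumer problem from the question, a minimal sketch: every access to the underlying array is funneled through a private serial queue, so a producer and a consumer thread can never touch it at the same time (SafeQueue and its method names are made up for illustration):

import Foundation

final class SafeQueue<T> {
    private var elements: [T] = []
    private let serialQueue = DispatchQueue(label: "com.example.safequeue")

    func enqueue(_ value: T) {
        serialQueue.sync { elements.append(value) }
    }

    func dequeue() -> T? {
        serialQueue.sync {
            elements.isEmpty ? nil : elements.removeFirst()
        }
    }
}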
Edit/Update: You can also use semaphores:
let higherPriority = DispatchQueue.global(qos: .userInitiated)
let lowerPriority = DispatchQueue.global(qos: .utility)

let semaphore = DispatchSemaphore(value: 1)

func letUsPrint(queue: DispatchQueue, symbol: String) {
    queue.async {
        debugPrint("\(symbol) -- waiting")
        semaphore.wait() // requesting the resource

        for i in 0...10 {
            print(symbol, i)
        }

        debugPrint("\(symbol) -- signal")
        semaphore.signal() // releasing the resource
    }
}

letUsPrint(queue: lowerPriority, symbol: "Low Priority Queue Work")
letUsPrint(queue: higherPriority, symbol: "High Priority Queue Work")

RunLoop.main.run()
Thanks to beshio's comment, you can also use a semaphore directly, like this:
let semaphore = DispatchSemaphore(value: 1)
Call wait before using the resource:
semaphore.wait()
// use the resource
and release it after use:
semaphore.signal()
Do this in each thread.
As people have commented (including me), there are several ways to achieve this kind of lock. I think a dispatch semaphore is better than the others because it seems to have the least overhead. As Apple's docs note under "Replacing Semaphore Code", a dispatch semaphore doesn't drop into kernel space unless the semaphore is already locked (i.e., its count is zero), which is the only case where the code needs the kernel to switch threads. Most of the time the semaphore is not zero (this is, of course, an app-specific matter), so lots of overhead is avoided.
One more comment on dispatch semaphores, concerning the opposite scenario. If your threads have different execution priorities and the higher-priority threads hold the semaphore for a long time, a dispatch semaphore may not be the solution, because there is no queue (no FIFO order) among the waiting threads. What happens in that case is that the higher-priority threads acquire and hold the semaphore most of the time, while the lower-priority threads can acquire it only occasionally and thus mostly just wait. If that behavior is bad for your application, consider a dispatch queue instead.
You can use NSLock or NSRecursiveLock. If you need to call one locking function from another locking function, use the recursive version.
class X {
    let lock = NSLock()

    func doSome() {
        lock.lock()
        defer { lock.unlock() }
        // do something here
    }
}
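And a minimal sketch of why the recursive variant matters: setMany() calls set() while already holding the lock, which would deadlock with a plain NSLock but is fine with NSRecursiveLock (the Cache type is made up for illustration):

import Foundation

final class Cache {
    private let lock = NSRecursiveLock()
    private var storage: [String: Int] = [:]

    func set(_ value: Int, for key: String) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = value
    }

    func setMany(_ values: [String: Int]) {
        lock.lock()
        defer { lock.unlock() }
        for (key, value) in values {
            set(value, for: key) // re-acquires the lock; a plain NSLock would deadlock here
        }
    }
}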
