I’ve been learning RxSwift and applying it to a project from the start. I would like your help to feel more assured about a concept.
I understand that changes to the UI should be performed on the MainScheduler, and that you should explicitly use .observeOn(MainScheduler.instance) in case you don’t use Drivers.
My doubt is: normally, should I explicitly switch to a background thread when performing network requests?
I haven’t found much literature on exactly this, but I’ve read the code of several projects: most of them don’t, but a few do. Those eventually use Drivers or .observeOn(MainScheduler.instance) to make the changes on the UI.
In https://www.thedroidsonroids.com/blog/rxswift-examples-4-multithreading, for instance, the author says:
So as you may have guessed, in fact everything we did was done on the MainScheduler. Why? Because our chain starts from searchBar.rx_text, and this one is guaranteed to be on the MainScheduler. And because everything else is by default on the current scheduler – well, our UI thread may get overwhelmed. How to prevent that? Switch to a background thread before the request and before the mapping, so we only update the UI on the main thread.
So what he does to solve the problem he mentions is to explicitly declare:
.observeOn(ConcurrentDispatchQueueScheduler(globalConcurrentQueueQOS: .Background))
Assuming the API request would be performed in the background anyway, what this does is perform all the other computations in the background as well, right?
Is this good practice? Should I, in every API request, explicitly switch to the background and then switch back to Main only when necessary?
If so, what would be the best way? To observe on background and then on Main? Or to subscribe on background and observe on Main, as is done in this gist: https://gist.github.com/darrensapalo/711d33b3e7f59b712ea3b6d5406952a4 ?
Or maybe another way?
P.S.: sorry for the old code, but among the links I found, these fit my question best.
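For reference, since the snippet above uses RxSwift 2-era naming: in later RxSwift versions the same background switch reads as follows (a minimal sketch; numbers is just a stand-in source):

import RxSwift

let disposeBag = DisposeBag()
let numbers = Observable.of(1, 2, 3)

numbers
    .observeOn(ConcurrentDispatchQueueScheduler(qos: .background))
    .map { $0 * 2 } // from here on, work runs on a background queue
    .observeOn(MainScheduler.instance) // hop back before updating the UI
    .subscribe(onNext: { print($0) })
    .disposed(by: disposeBag)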
Normally, i.e. if you do not specify any schedulers, Rx is synchronous.
The rest really depends on your use case. For instance, all UI manipulations must happen on the main thread scheduler.
Background work, including network requests, should run on background schedulers. Which ones depends on priority and on your preference for concurrent vs. serial execution.
.subscribeOn() defines where the work is done, and .observeOn() defines where its results are handled.
So the answer to your specific question: in the case of a network call whose results will be reflected in the UI, you should subscribe on a background scheduler and observe on the main one.
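For example, a minimal sketch of the pattern, where fetchUser() is a hypothetical wrapper for a network call:

import RxSwift

let disposeBag = DisposeBag()

// Hypothetical wrapper for a network call
func fetchUser() -> Observable<String> {
    return Observable.create { observer in
        observer.onNext("Jane") // imagine a real request + parsing here
        observer.onCompleted()
        return Disposables.create()
    }
}

fetchUser()
    .subscribeOn(ConcurrentDispatchQueueScheduler(qos: .background)) // the work starts on a background scheduler
    .observeOn(MainScheduler.instance) // the results are handled on the main scheduler
    .subscribe(onNext: { name in
        print("Hello, \(name)") // safe place to touch the UI
    })
    .disposed(by: disposeBag)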
You can declare schedulers like this (just an example):
static let main = MainScheduler.instance
static let concurrentMain = ConcurrentMainScheduler.instance
static let serialBackground = SerialDispatchQueueScheduler(qos: .background)
static let concurrentBackground = ConcurrentDispatchQueueScheduler(qos: .background)
static let serialUtility = SerialDispatchQueueScheduler(qos: .utility)
static let concurrentUtility = ConcurrentDispatchQueueScheduler(qos: .utility)
static let serialUser = SerialDispatchQueueScheduler(qos: .userInitiated)
static let concurrentUser = ConcurrentDispatchQueueScheduler(qos: .userInitiated)
static let serialInteractive = SerialDispatchQueueScheduler(qos: .userInteractive)
static let concurrentInteractive = ConcurrentDispatchQueueScheduler(qos: .userInteractive)
P.S. Some third-party libraries may provide observables that are pre-configured to execute on a background scheduler. In that case explicitly calling .subscribeOn() is not necessary, but you need to know for sure whether this is the case.
And a recap:
"normally, should I explicitly switch to a background thread when performing network requests?" – yes, unless a library does it for you.
"Should I, in every API request, explicitly change to background and then change back to Main only when necessary?" – yes.
"If so, what would be the best way?" – subscribe on background and observe on Main.
You are right. Of course the actual network request, along with waiting for and assembling the response, is all done on a background thread. What happens after that depends on the network layer you are using.
For example, if you are using URLSession, the response already comes back on a background thread, so calling observeOn to do anything other than come back to the main thread is unnecessary and a reduction in performance. In other words, in answer to your question: you don't need to switch to a background thread on every request, because it's done for you.
I see in the article that the author was talking in the context of Alamofire, which explicitly responds on the main thread. So if you are using Alamofire, or some other networking layer that responds on the main thread, you should consider switching to a background thread if processing the response is expensive. If all you are doing is creating an object from the resulting dictionary and pushing it to a view, the switch in context is probably overkill and could actually degrade performance, considering you have already had to suffer through one context switch.
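A sketch of that "expensive processing" case; responseJSON() and parse() here are hypothetical stand-ins for a main-thread response source and a heavy parsing step:

import RxSwift

let disposeBag = DisposeBag()

// Hypothetical stand-ins: a main-thread response source and an expensive parse step
func responseJSON() -> Observable<[String: Any]> { return .just(["name": "Jane"]) }
func parse(_ json: [String: Any]) -> String { return json["name"] as? String ?? "" }

responseJSON()
    .observeOn(ConcurrentDispatchQueueScheduler(qos: .utility)) // heavy parsing off the main thread
    .map { parse($0) }
    .observeOn(MainScheduler.instance) // back to main before touching the UI
    .subscribe(onNext: { name in print("render \(name)") })
    .disposed(by: disposeBag)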
I feel it's also important to note that calling subscribeOn is pointless for either network layer. It will only change the thread that the request is made from, not the background thread that waits for the response, nor the thread the response returns on. The networking layer decides which thread it uses to push the data out, and subscribeOn can't change that. The best you can do is use observeOn to reroute the data flow to a different thread after the response. The subscribeOn operator is for synchronous operations, not network requests.
Related
I don't understand the workings of a DispatchQueue and wanted to learn more about how it implements the foundational queueing-theory requirements. I tried to inspect a queue using:
dump(DispatchQueue.global())
And this gave the following output:
- <OS_dispatch_queue_global: com.apple.root.default-qos[0x10c041f00] = { xref = -2147483648, ref = -2147483648, sref = 1, target = [0x0], width = 0xfff, state = 0x0060000000000000, in-barrier}> #0
- super: OS_dispatch_queue
- super: OS_dispatch_object
- super: OS_object
- super: NSObject
I got that the label is com.apple.root.default-qos, which is specified in the Apple docs, and the class is the packaged OS_dispatch_queue_global. I understand qos is queryable on the queue itself, which makes sense as well. Width I think just means the allocated memory size.
What I don't understand is the relevance of xref, ref and sref; I think they are internal IDs for the queues, but I am not sure. I think they are related to fundamental queueing concepts (multithreading came to mind), but it would be great to hone in on this in more detail.
Is the autoreleaseFrequency hidden from this debug description? Also, what does in-barrier = 0 mean? I tried creating a custom queue and this was replaced by in-flight = 0, so I'm confused about that as well.
Any ideas on how these undocumented variables relate to queueing theory? I think these are undocumented internals of the API, so any educated and justified explanations would be welcome!
Thanks.
Why ask this?
This is a fairly broad question about the internals of Grand Central Dispatch. I had difficulty understanding the dumped output because the original WWDC '10 videos and slides for GCD are no longer public. I also didn't know about the open-source libdispatch repo (thanks Rob). That needn't be a problem, but there are no related Q&As on SO explaining the topic in detail.
Why GCD?
According to the WWDC '10 GCD transcripts (Thanks Rob), the main idea behind the API was to simplify the boilerplate associated with using the #selector API for multithreading.
Benefits of GCD
Apple released a new block-based API instead of going with function pointers, which also enables type-safe code that won't crash if the block has the wrong type signature. Using typedefs also makes code cleaner when blocks are used in function parameters, local variables and @property declarations. Queues allow you to capture code and some state as a chunk of data that gets managed, enqueued and executed automatically behind the scenes.
The same session mentions how GCD manages low-level threads under the hood. It enqueues blocks to execute on threads when they need to be executed and then releases those threads (pthreads, to be precise) when they are no longer referenced. GCD manages threads automatically and doesn't expose this API: when a DispatchWorkItem is dequeued, GCD provides a thread for the block to execute on.
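As a small illustration of that hand-off (the work item, QoS and prints here are made up): we hand GCD a work item and it supplies and manages the worker thread; we never create one ourselves.

import Foundation

let item = DispatchWorkItem(qos: .utility) {
    print("Running on a GCD-managed thread:", Thread.current)
}
DispatchQueue.global().async(execute: item)
item.wait() // block the example until the work item has run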
Drawbacks of performSelector
performSelector:onThread:withObject:waitUntilDone: has numerous drawbacks that suggest poor design for the modern challenges of concurrency, waiting and synchronisation, and it leads to pyramids of doom when switching threads within a function. Furthermore, the NSObject.performSelector family of threading methods is inflexible and limited:
No options to optimise for concurrent execution, initially inactive items, or synchronisation on a particular thread, unlike GCD.
Only selectors can be dispatched onto new threads (awful).
Lots of threads for a given function leads to messy code (pyramids of doom).
No support for queueing beyond the NSOperation API, which was limited at the time GCD was announced in iOS 4. NSOperation is a high-level, verbose API that became more powerful after incorporating elements of dispatch (the low-level API that became GCD) in iOS 4.
Lots of bugs related to unhandled invalid-selector errors (no type safety).
DispatchQueue internals
I believe the xref, ref and sref are internal counters that manage reference counts for automatic reference counting. GCD calls dispatch_retain and dispatch_release in most cases when needed, so we don't need to worry about releasing a queue after all its blocks have been executed. However, there were cases when a developer could call retain and release manually, to ensure the queue was retained even when not directly in use. These counters allow libdispatch to crash when release is called on a queue with a positive reference count, for better error handling.
When calling a block with DispatchQueue.global().async or similar, I believe this increments the reference count of that queue (xref and ref).
The variables in the question are not documented explicitly, but from what I can tell:
xref counts the number of external references to a general DispatchQueue.
ref counts the total number of references to a general DispatchQueue.
sref counts the number of references to API serial/concurrent/runloop queues, sources and mach channels (these need to be tracked differently as they are represented using different types).
in-barrier looks like an internal state flag (cf. the barrier flag in DispatchWorkItemFlags) that tracks whether new work items submitted to a concurrent queue should be scheduled or not: only once the barrier work item finishes does the queue return to scheduling work items that were submitted after it (see the sketch after this list). in-flight means that no barrier is currently in force.
state is also not documented explicitly, but I presume it points to memory where the block can access variables from the scope where it was scheduled.
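To make the barrier behaviour concrete, here is a minimal Swift sketch (the queue label and prints are made up for illustration):

import Foundation

let queue = DispatchQueue(label: "com.example.data", attributes: .concurrent)

queue.async { print("read A") }        // may run concurrently with other reads
queue.async(flags: .barrier) {
    print("exclusive write")           // queue is "in-barrier": nothing else on it runs
}
queue.async { print("read B") }        // scheduled only after the barrier finishes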
In iOS, we have GCD and Operation to handle concurrent programming.
Looking into GCD we have QoS classes, and they're simple and straightforward. This question is about why DispatchQueue.main.async is commonly used to asynchronously dispatch tasks to the main thread.
So when we handle updating something in the UI, we usually use that function to prevent any unresponsiveness in the application.
This makes me think: is code written inside a UIViewController usually executed on the main thread?
But also, knowing that callbacks & completion handlers usually execute without our specifying what thread they are on, and the UI never had a problem with that! So are they on a background thread?
How does Swift handle this? And what thread am I writing on by default, without specifying anything?
Since there is more than one question here, let's attempt to answer them one by one.
why DispatchQueue.main.async is commonly used to asynchronously dispatch tasks to the main thread.
Before getting to a direct answer, make sure you don't confuse these pairs:
Serial <===> Concurrent.
Sync <===> Async.
Keep in mind that DispatchQueue.main is a serial queue. Using sync or async has nothing to do with determining whether a queue is serial or concurrent; instead it determines how the task is handled. Thus, saying DispatchQueue.main.async means that the call:
returns control to the current queue right after the task has been sent to be performed on the different queue. It doesn't wait until the task is finished. It doesn't block the queue.
cited from https://stackoverflow.com/a/44324968/5501940 (I'd recommend checking it out).
In other words, async means: enqueue this on the main thread and return immediately; the task will run (and update the UI) when the main queue gets to it. That's what makes what you said:
when we handle updating something in the UI, we usually use that function to prevent any unresponsiveness in the application
seem sensible; using sync instead of async would block the main queue.
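A sketch of the difference, run from a background thread (calling DispatchQueue.main.sync from the main thread itself would deadlock):

import Foundation

DispatchQueue.global().async {
    DispatchQueue.main.async {
        print("async: the caller did not wait for this")
    }
    DispatchQueue.main.sync {
        print("sync: the caller was blocked until this returned")
    }
}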
makes me think: is code written inside a UIViewController usually executed on the main thread?
First of all: by default, without specifying which thread should execute a chunk of code, it will be the main thread. However, your question is somewhat unspecific, because inside a UIViewController we can call functionality that is not executed on the main thread by specifying so.
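For instance (a minimal sketch; ExampleViewController is made up):

import UIKit

class ExampleViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        print(Thread.isMainThread) // true: view-controller code runs on main by default

        DispatchQueue.global(qos: .userInitiated).async {
            print(Thread.isMainThread) // false: we explicitly sent this elsewhere
        }
    }
}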
but also, knowing that callbacks & completion handlers usually execute without our specifying what thread they are on, and the UI never had a problem with that! So are they on a background thread?
"knowing that callback & completionHandler usually execute without specifying on what thread they are in" No! You have to specify it. A good real example for it, actually that's how Main Thread Checker works.
I believe there is something you are missing here: when dealing with a built-in method from UIKit (for instance) that takes a completion handler, we can't see that it wraps the call in something like DispatchQueue.main.async before invoking it. So if you can run UI code inside such a completion handler without wrapping it in DispatchQueue.main.async yourself, you should assume the framework handles it for you; it doesn't mean it is not implemented somewhere.
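A hypothetical sketch of what such an API might do internally: the network callback arrives on a background queue, and the library hops to the main queue before invoking the caller's completion handler. The function name here is made up.

import Foundation

func fetchData(from url: URL, completion: @escaping (Data?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        DispatchQueue.main.async {
            completion(data) // the caller's handler always runs on main
        }
    }.resume()
}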
Another real-world example: Alamofire! When calling
Alamofire.request("https://httpbin.org/get").responseJSON { response in
    // work in here runs on the main thread; Alamofire dispatches the response for you
}
That's why you can call it without facing any "hanging" issue on the main thread; it doesn't mean it's not handled, it means they handle it for you, so you don't have to worry about it.
I am getting race conditions in my code when I run the TSan tool, because the same code is accessed from different queues and threads at the same time. That's why I can't use serial queues or barriers: a queue will only block work submitted to that single queue from accessing the shared resource, not work on other queues.
I used objc_sync_enter(object) / objc_sync_exit(object) and locks (NSLock() and NSRecursiveLock()) to protect the shared resource, but these are also not working.
However, when I use the @synchronized() directive in Objective-C to protect the shared resource, it works as expected and I do not get race conditions in that particular block of code.
So, what is the alternative for protecting data in Swift, given that we can't use the @synchronized() directive in the Swift language?
I don't understand "I can not use Serial queues or barrier as Queue will block only single queues accessing the shared resource not the other queues." Using a queue is the standard solution to this problem.
class MultiAccess {
    private var _property: String = ""
    // One serial queue guards all mutable state in this class
    private let queue = DispatchQueue(label: "MultiAccess")

    var property: String {
        get {
            // Reads are synchronous, so the caller gets a consistent value
            var result: String!
            queue.sync {
                result = self._property
            }
            return result
        }
        set {
            // Writes are asynchronous; the caller doesn't have to wait
            queue.async {
                self._property = newValue
            }
        }
    }
}
With this construction, access to property is atomic and thread-safe, without the caller having to do anything special. Note that it intentionally uses a single queue for the class, not a queue per property. As a rule, you want a relatively small number of queues doing a lot of work, not a large number of queues doing a little work. If you find that you're accessing a mutable object from lots of different threads, you need to rethink your system (probably by reducing the number of threads). There's no simple pattern that will make that work efficiently and safely without thinking carefully about your specific use case.
But this construction is useful for problems where system frameworks may call you back on random threads with minimal contention. It is simple to implement and fairly easy to use correctly. If you have a more complex problem, you will have to design a solution for it.
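For instance, a quick usage sketch (the iteration count is arbitrary): many threads read and write concurrently, and the internal queue serialises every access.

let store = MultiAccess()
DispatchQueue.concurrentPerform(iterations: 100) { i in
    store.property = "value \(i)"
    _ = store.property
}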
Edit: I haven't thought about this answer in a long time, but Brennan's comments brought it back to my attention. Because of the bug I had in the original code, my original answer was OK, but once you fixed the bug it was bad. (If you want to see my original code that used a barrier, look in the edit history; I don't want to put it here because people will copy it.) I've changed it to use a standard serial queue rather than a concurrent queue.
Don't create concurrent queues without thinking carefully about how threads will be generated. If you are going to have many simultaneous accesses, you're going to create a lot of threads, which is bad. If you're not going to have many simultaneous accesses, then you don't need a concurrent queue. GCD talks make promises about managing threads that it doesn't actually live up to; you definitely can get thread explosion (as Brennan mentions).
TL;DR: I'm wondering how UndoManager's automatic undo grouping based on run loops is affected when it is used from a background thread, and what my best option is.
I am using UndoManager (formerly NSUndoManager) in a custom Swift framework with targets for both iOS and macOS.
Within the framework, a decent amount of work takes place on background GCD serial queues. I understand that UndoManager automatically groups top-level registered undo actions per run-loop cycle, but I am not sure how different threading situations affect that.
My Questions:
What effect, if any, would the following situations have on UndoManager's run-loop grouping of registered undo actions?
Which situation (other than Situation 1, which is not feasible) is ideal for providing natural grouping, assuming all changes that require undo registration take place on a single background serial dispatch queue?
In all the situations below, assume methodCausingUndoRegistration() and anotherMethodCausingUndoRegistration() are nothing fancy and call UndoManager.registerUndo from the thread they were called on, without any dispatch.
Situation 1: Inline on Main Thread
// Assume this runs on main thread
methodCausingUndoRegistration()
// Other code here
anotherMethodCausingUndoRegistration()
// Also assume every other undo registration in this framework takes place inline on the main thread
My Understanding: This is how UndoManager expects to be used. Both of the undo registrations above will take place in the same run loop cycle and therefore be placed in the same undo group.
Situation 2: Synchronous Dispatch on Main Thread
// Assume this runs on an arbitrary background thread, possibly managed by GCD.
// It is guaranteed not to run on the main thread to prevent deadlock.
DispatchQueue.main.sync {
methodCausingUndoRegistration()
}
// Other code here
DispatchQueue.main.sync {
anotherMethodCausingUndoRegistration()
}
// Also assume every other undo registration in this framework takes place
// by syncing on main thread first as above
My Understanding: Obviously, I would not want to use this code in production, because synchronous dispatch is not a great idea in most situations. However, I suspect it is possible for these two actions to be placed into separate run-loop cycles based on timing.
Situation 3: Asynchronous Dispatch on Main Thread
// Assume this runs from an unknown context. Might be the main thread, might not.
DispatchQueue.main.async {
methodCausingUndoRegistration()
}
// Other code here
DispatchQueue.main.async {
anotherMethodCausingUndoRegistration()
}
// Also assume every other undo registration in this framework takes place
// by asyncing on the main thread first as above
My Understanding: As much as I would like this to produce the same effect as Situation 1, I suspect it might cause the same kind of undefined grouping as Situation 2.
Situation 4: Single Asynchronous Dispatch on Background Thread
// Assume this runs from an unknown context. Might be the main thread, might not.
backgroundSerialDispatchQueue.async {
methodCausingUndoRegistration()
// Other code here
anotherMethodCausingUndoRegistration()
}
// Also assume all other undo registrations take place
// via async on this same queue, and that undo operations
// that ought to be grouped together would be registered
// within the same async block.
My Understanding: I really hope this will act the same as Situation 1 as long as the UndoManager is used exclusively from this same background queue. However, I worry that there may be some factors that make the grouping undefined, especially since I don't think GCD queues (or their managed threads) always (if ever) get run loops.
TL;DR: When working with UndoManager from a background thread, the least complex option is to disable automatic grouping via groupsByEvent and group manually. None of the background-thread situations above will work as intended. If you really want automatic grouping in the background, you'll need to avoid GCD.
I'll add some background to explain expectations, then discuss what actually happens in each situation, based on experiments I did in an Xcode Playground.
Automatic Undo Grouping
The "Undo manager" chapter of Apple's Cocoa Application Competencies for iOS Guide states:
NSUndoManager normally creates undo groups automatically during a cycle of the run loop. The first time it is asked to record an undo operation in the cycle, it creates a new group. Then, at the end of the cycle, it closes the group. You can create additional, nested undo groups.
This behavior is easily observable in a project or Playground by registering with NotificationCenter as an observer of NSUndoManagerDidOpenUndoGroup and NSUndoManagerDidCloseUndoGroup. By observing these notifications and printing the results to the console (including undoManager.levelsOfUndo), we can see exactly what is going on with the grouping in real time.
The guide also states:
An undo manager collects all undo operations that occur within a single cycle of a run loop such as the application’s main event loop...
This language indicates that the main run loop is not the only run loop UndoManager is capable of observing. Most likely, then, UndoManager observes notifications sent on behalf of whichever CFRunLoop instance was current when the first undo operation was recorded and the group was opened.
GCD and Run Loops
Even though the general rule for run loops on Apple platforms is 'one run loop per thread', there are exceptions to this rule. Specifically, it is generally accepted that Grand Central Dispatch will not always (if ever) use standard CFRunLoops with its dispatch queues or their associated threads. In fact, the only dispatch queue that seems to have an associated CFRunLoop seems to be the main queue.
Apple's Concurrency Programming Guide states:
The main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. This queue works with the application’s run loop (if one is present) to interleave the execution of queued tasks with the execution of other event sources attached to the run loop.
It makes sense that the main application thread would not always have a run loop (e.g. in command-line tools), but if it does, GCD seems guaranteed to coordinate with it. This guarantee does not appear to exist for other dispatch queues, and there does not appear to be any public API or documented way of associating an arbitrary dispatch queue (or one of its underlying threads) with a CFRunLoop.
This is observable by using the following code:
DispatchQueue.main.async {
print("Main", RunLoop.current.currentMode)
}
DispatchQueue.global().async {
print("Global", RunLoop.current.currentMode)
}
DispatchQueue(label: "").async {
print("Custom", RunLoop.current.currentMode)
}
// Outputs:
// Custom nil
// Global nil
// Main Optional(__C.RunLoopMode(_rawValue: kCFRunLoopDefaultMode))
The documentation for RunLoop.currentMode states:
This method returns the current input mode only while the receiver is running; otherwise, it returns nil.
From this, we can deduce that Global and Custom dispatch queues don't always (if ever) have their own CFRunLoop (which is the underlying mechanism behind RunLoop). So, unless we are dispatching to the main queue, UndoManager won't have an active RunLoop to observe. This will be important for Situation 4 and beyond.
Now, let's observe each of these situations using a Playground (with PlaygroundPage.current.needsIndefiniteExecution = true) and the notification-observing mechanism discussed above.
Situation 1: Inline on Main Thread
This is exactly how UndoManager expects to be used (based on the documentation). Observing the undo notifications shows a single undo group being created with both undos inside.
Situation 2: Synchronous Dispatch on Main Thread
In a simple test of this situation, each of the undo registrations ends up in its own group. We can therefore conclude that the two synchronously-dispatched blocks each took place in their own run-loop cycle. This appears to always be the behavior that sync dispatch produces on the main queue.
Situation 3: Asynchronous Dispatch on Main Thread
However, when async is used instead, a simple test reveals the same behavior as Situation 1. It seems that because both blocks were dispatched to the main thread before either had a chance to actually be run by the run loop, the run loop performed both blocks in the same cycle. Both undo registrations were therefore placed in the same group.
Based purely on observation, this appears to introduce a subtle difference between sync and async. Because sync blocks the current thread until done, the run loop must begin (and end) a cycle before the call returns; the run loop then cannot run the other block in that same cycle, because it wasn't queued when the run loop started looking for messages. With async, however, the run loop likely didn't start its cycle until both blocks were already queued, since async returns before the work is done.
Based on this observation, we can simulate Situation 2 inside Situation 3 by inserting a sleep(1) call between the two async calls. This way, the run loop has a chance to begin its cycle before the second block is ever sent, and this indeed causes two undo groups to be created.
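In code, the simulation might look like this (a sketch reusing the question's placeholder methods, and assuming we dispatch from a background context, per Situation 3's "unknown context"):

DispatchQueue.global().async {
    DispatchQueue.main.async { methodCausingUndoRegistration() }
    sleep(1) // give the main run loop time to begin (and end) a cycle
    DispatchQueue.main.async { anotherMethodCausingUndoRegistration() }
}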
Situation 4: Single Asynchronous Dispatch on Background Thread
This is where things get interesting. Assuming backgroundSerialDispatchQueue is a custom GCD serial queue, a single undo group is created immediately before the first undo registration, but it is never closed. Given the discussion of GCD and run loops above, this makes sense: an undo group is created simply because we called registerUndo and there was no top-level group yet, but it is never closed because the undo manager never received a notification about a run-loop cycle ending. It never received that notification because background GCD queues don't get functional CFRunLoops associated with them, so UndoManager was likely never able to observe a run loop in the first place.
The Correct Approach
If using UndoManager from a background thread is necessary, none of the above situations is ideal (other than the first, which does not meet the requirement of being triggered in the background). Two options seem to work, and both assume that UndoManager will only ever be used from the same background queue/thread. After all, UndoManager is not thread-safe.
Just Don't Use Automatic Grouping
The automatic, run-loop-based undo grouping can easily be turned off via undoManager.groupsByEvent. Manual grouping can then be achieved like so:
undoManager.groupsByEvent = false
backgroundSerialDispatchQueue.async {
undoManager.beginUndoGrouping() // <--
methodCausingUndoRegistration()
// Other code here
anotherMethodCausingUndoRegistration()
undoManager.endUndoGrouping() // <--
}
This works exactly as intended, placing both registrations in the same group.
Use Foundation Instead of GCD
In my production code, I intend to simply turn off automatic undo grouping and do it manually, but I did discover an alternative while investigating the behavior of UndoManager.
We discovered earlier that UndoManager was unable to observe custom GCD queues because they do not appear to have associated CFRunLoops. But what if we created our own Thread and set up a corresponding RunLoop instead? In theory this should work, and the code below demonstrates it:
// Subclass NSObject so we can use performSelector to send a block to the thread
class Worker: NSObject {
let backgroundThread: Thread
let undoManager: UndoManager
override init() {
self.undoManager = UndoManager()
// Create a Thread to run a block
self.backgroundThread = Thread {
// We need to attach the run loop to at least one source so it has a reason to run.
// This is just a dummy Mach Port
NSMachPort().schedule(in: RunLoop.current, forMode: .commonModes) // Should be added for common or default mode
// This will keep our thread running because this call won't return
RunLoop.current.run()
}
super.init()
// Start the thread running
backgroundThread.start()
// Observe undo groups
registerForNotifications()
}
func registerForNotifications() {
NotificationCenter.default.addObserver(forName: Notification.Name.NSUndoManagerDidOpenUndoGroup, object: undoManager, queue: nil) { _ in
print("opening group at level \(self.undoManager.levelsOfUndo)")
}
NotificationCenter.default.addObserver(forName: Notification.Name.NSUndoManagerDidCloseUndoGroup, object: undoManager, queue: nil) { _ in
print("closing group at level \(self.undoManager.levelsOfUndo)")
}
}
func doWorkInBackground() {
perform(#selector(Worker.doWork), on: backgroundThread, with: nil, waitUntilDone: false)
}
// This function needs to be visible to the Objective-C runtime
@objc func doWork() {
registerUndo()
print("working on other things...")
sleep(1)
print("working on other things...")
print("working on other things...")
registerUndo()
}
func registerUndo() {
let target = Target()
print("registering undo")
undoManager.registerUndo(withTarget: target) { _ in }
}
class Target {}
}
let worker = Worker()
worker.doWorkInBackground()
As expected, the output indicates that both undos are placed in the same group. UndoManager was able to observe the cycles because the Thread was using a RunLoop, unlike GCD.
Honestly, though, it's probably easier to stick with GCD and use manual undo grouping.
Essentially, I have a set of data in an NSDictionary, but for convenience I'm setting up some NSArrays with the data sorted and filtered in a few different ways. The data will be coming in via different threads (blocks), and I want to make sure only one block at a time is modifying my data store.
I went to the trouble of setting up a dispatch queue this afternoon, and then randomly stumbled onto a post about @synchronized that made it seem like pretty much exactly what I want to be doing.
So what I have right now is...
// a property on my object
@property (assign) dispatch_queue_t matchSortingQueue;

// in my object's init
_matchSortingQueue = dispatch_queue_create("com.asdf.matchSortingQueue", NULL);

// then later...
- (void)sortArrayIntoLocalStore:(NSArray*)matches
{
    dispatch_async(_matchSortingQueue, ^{
        // do stuff...
    });
}
And my question is, could I just replace all of this with the following?
- (void)sortArrayIntoLocalStore:(NSArray*)matches
{
    @synchronized (self) {
        // do stuff...
    }
}
...And what's the difference between the two anyway? What should I be considering?
Although the functional difference might not matter much to you, it's what you'd expect: if you @synchronized, then the thread you're on is blocked until it can get exclusive execution. If you dispatch to a serial dispatch queue asynchronously, then the calling thread can get on with other things, and whatever you're actually doing will always occur on the same, known queue.
So they're equivalent for ensuring that a third resource is used from only one queue at a time.
Dispatching could be a better idea if, say, you had a resource that is accessed by the user interface from the main queue and you wanted to mutate it: then your user-interface code doesn't need to @synchronized explicitly, hiding the complexity of your threading scheme within the object quite naturally. Dispatching will also be a better idea if you have a central actor that can trigger several of these changes on several other actors; that will allow them to operate concurrently.
Synchronising is more compact and a lot easier to step-debug. If what you're doing tends to be two or three lines and you'd need to dispatch it synchronously anyway, then going to the effort of creating a queue may not feel worth it, especially when you consider the implicit costs of creating a block and moving it onto the heap.
In the second case you would block the calling thread until "do stuff" is done. Using queues and dispatch_async, you will not block the calling thread. This is particularly important if you call sortArrayIntoLocalStore: from the UI thread.