How to execute all methods of an object in a certain queue in iOS

For my iOS app I am using the main queue and a root queue. I have several objects and I want their methods to run in the root queue.
So far, what I have been doing is adding a dispatch_async each time I call one of those methods, which will ultimately become very troublesome once I use more queues and want to get back to the main queue.
What I am looking for is a way to assign objects to the root queue so that their methods are executed in the root queue. In other words, I am looking for something like this: [[TestClass alloc] initInQueue:testQueue];

It is possible to create this in a manner similar to KVO. You could swizzle all your methods to wrap them into dispatch_* calls, but I would strongly discourage it. The level of magic is too high, and you will almost certainly tie yourself up in knots. Moreover, you can't wrap an arbitrary method in dispatch_async since you can't have a return result from that. But you also can't wrap arbitrary methods in dispatch_sync because you would likely deadlock. The problems of solving the general case will quickly spiral out of control in my opinion.
What you should be asking instead is whether your queue architecture is correct. Do you really need to keep calling so many small methods on other queues? In many cases it is better to encapsulate full work units (i.e. a coherent sequence of operations that take an input and generate a final result) rather than individual method calls. (Once you think in work units, NSOperation suddenly gets a lot more useful.) While it is occasionally useful to wrap an accessor into a queue for thread-safety, this is not a general solution to concurrency.
While there are advantages to getting off of the main queue, this advice shouldn't be over-applied. You can do reasonable amounts of work on the main queue without any problems. We built single-threaded Cocoa apps on computers less powerful than iPhones long before GCD. (The iPhone is probably more powerful than my old PowerBooks and might be more powerful than my original MacBook.) I'm not discouraging queues here; just make sure you're doing it for the right reasons and don't overcomplicate things.
But if you really need to move the work, then I would recommend just being explicit with dispatch_ calls in the method itself. It's a little more typing, but it's much clearer and less error-prone.
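To make that concrete, here is a minimal sketch of the explicit style, assuming the object stores the queue it was handed at init time; the class, ivar and helper names (TestClass, _workQueue, fetchResults, updateUIWithResults:) are hypothetical:

// TestClass.m (hypothetical) - _workQueue is a dispatch_queue_t ivar that
// was stored by something like initInQueue: when the object was created.
- (void)refresh {
    dispatch_async(_workQueue, ^{
        // Heavy lifting happens on the object's queue.
        NSArray *results = [self fetchResults];   // hypothetical helper

        // Hop back to the main queue for anything touching the UI.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self updateUIWithResults:results];   // hypothetical helper
        });
    });
}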

Related

UIDocument synchronous read - completion handler stalled in dispatch

I tried multiple ways of wrapping a file read within a synchronous method call: using multiple queues, specifying target queues, setting up an NSThread and signalling with NSCondition objects, even moving the allocation of the UIDocument to the background thread, and also trying dispatch_sync on the background queue.
What consistently ended up happening is that the completion handler for UIDocument's openWithCompletionHandler: never executed, although the documentation indicates it should run on the same queue that initiated the openWithCompletionHandler: call.
I figured this ultimately has something to do with control not being returned to the run loop by the outer/top-level method call. It would seem that regardless of what other queues or threads are set up, the dispatch system expects me to return from the outermost method call, or things get blocked. That, however, would defeat the whole synchronous design approach.
My use case requires synchronous file reads (of very small data sizes), and I'd prefer the convenience of UIDocument over moving to lower-level methods or looking for ways to introduce async patterns. I reckon UIDocument was designed for more conventional cases. I understand well enough the ubiquity, and in most cases the user friendliness and efficiency, of async patterns, but here they would make both development and the user experience cumbersome.
I wonder if there is something else with dispatch queues that could still be explored (like manually consuming events from a queue, or creating a custom run loop) to avoid this seemingly global synchronization effect.
EDIT: this is for an Audio Unit app extension. Instantiation is controlled by the platform, and a "half-initialized" state could become problematic. It is pretty much standard in the industry to fully load the plugin before even allowing the host app to start playing any audio, for example, let alone start streaming MIDI/automation events. (That's not to say there aren't extensions with crazy load times that could take another look at their design, but in most cases the load is well justified in this domain.)
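For reference, the blocking wrapper being described is roughly the following shape (a sketch only; MyDocument, documentData and the semaphore are stand-ins for the various signalling mechanisms tried):

// Hypothetical synchronous wrapper - this is the pattern that stalls.
- (NSData *)readDocumentSynchronouslyAtURL:(NSURL *)url {
    __block NSData *result = nil;
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    MyDocument *doc = [[MyDocument alloc] initWithFileURL:url];
    [doc openWithCompletionHandler:^(BOOL success) {
        if (success) {
            result = doc.documentData;   // hypothetical property on the UIDocument subclass
        }
        dispatch_semaphore_signal(done);
    }];

    // The completion handler is delivered back to the queue that called
    // openWithCompletionHandler:, so blocking that queue here means the
    // handler never runs and the wait never returns.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return result;
}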

Alternatives to pervasive use of the @synchronized directive on mutable array access

I am designing a singleton class in Objective-C which will be accessed by multiple threads. There are 3-4 NSMutableArrays in my class; outside classes have access to read, add and remove operations, which are of course wrapped in this class.
As NSMutableArray is not thread safe, I am using @synchronized() to make my operations thread safe; however, this results in a lot of @synchronized() blocks.
For each of the 3-4 arrays I have at least 1 add function, 1 remove function and 5 places where I read the values, so for 1 array I am using at least 7 @synchronized() blocks.
For 4 arrays, I need to add 28 @synchronized blocks in my singleton class.
Is there any better way to approach my problem?
Or, if I do use all these @synchronized directives, will it cause problems?
I know that making my objects thread safe will slow down my code, but besides that, are there any other drawbacks?
Typically it is not sufficient to just synchronize the primitive calls (CRUD) to gain thread safety. That fine-grained level is just the foundation; you'll also have to think on a more global level and make larger blocks of code atomic. How to do this depends heavily on your actual implementation. Multithreading is evil(tm) and demands a comprehensive view, so there is no general answer for this.
Synchronized blocks will typically slow down your application, at least if they are called too frequently. Sometimes it's better to group multiple calls in one synchronized block, to avoid the locking overhead. Or you could use spin locks, if the calls are very short, to prevent unnecessary task suspensions (see here for an old question/answer).
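As a sketch of the "group multiple calls" suggestion, assuming a singleton that exposes one of its mutable arrays through an items property (the names here are made up):

// One @synchronized block around the whole logical operation, instead of
// one around each primitive array call.
- (void)replaceItem:(id)oldItem withItem:(id)newItem {
    @synchronized (self.items) {
        NSUInteger idx = [self.items indexOfObject:oldItem];
        if (idx != NSNotFound) {
            [self.items replaceObjectAtIndex:idx withObject:newItem];
        } else {
            [self.items addObject:newItem];
        }
    }
}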
For more details you can just consult the Apple documentation.

NSOperation, start vs main

According to the Apple documentation on NSOperation, we have to override the main method for non-concurrent operations and the start method for concurrent operations. But why?
First, keep in mind that "concurrent" and "non-concurrent" have somewhat specialized meanings in NSOperation that tend to confuse people (and are used synonymously with "asynchronous/synchronous"). "Concurrent" means "the operation will manage its own concurrency and state." "Non-concurrent" means "the operation expects something else, usually a queue, to manage its concurrency, and wants default state handling."
start does all the default state handling. Part of that is that it sets isExecuting, then calls main, and when main returns, it clears isExecuting and sets isFinished. Since you're handling your own state, you don't want that (you don't want exiting main to finish the operation). So you need to implement your own start and not call super. Now, you could still have a main method if you wanted, but since you're already overriding start (and that's the thing that calls main), most people just put all the code in start.
As a general rule, don't use concurrent operations. They are seldom what you mean. They definitely don't mean "things that run in the background." Both kinds of operations can run in the background (and neither has to run in the background). The question is whether you want default system behavior (non-concurrent), or whether you want to handle everything yourself (concurrent).
If your idea of handling it yourself is "spin up an NSThread," you're almost certainly doing it wrong (unless you're doing this to interface with a C/C++ library that requires it). If it's creating a queue, you're probably doing it wrong (NSOperation has all kinds of features to avoid this). If it's almost anything that looks like "manually handling doing things in the background," you're probably doing it wrong. The default (non-concurrent) behavior is almost certainly better than what you're going to do.
Where concurrent operations can be helpful is in cases where the API you're using already handles concurrency for you. A non-concurrent operation ends when main returns. So what if your operation wraps an async thing like NSURLConnection? One way to handle that is to use a dispatch group and then call dispatch_group_wait at the end of your main so it doesn't return until everything's done. That's OK. I do it all the time. But it blocks a thread that wouldn't otherwise be blocked, which wastes some resources and in some elaborate corner cases could lead to deadlock (really elaborate; Apple claims it's possible and they've seen it, but I've never been able to get it to happen even on purpose).
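A rough sketch of that dispatch-group pattern inside a non-concurrent operation's main; startAsyncWorkWithCompletion: is a hypothetical stand-in for whatever asynchronous API is being wrapped:

- (void)main {
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_enter(group);

    [self startAsyncWorkWithCompletion:^{   // hypothetical async call
        // ... stash the results somewhere ...
        dispatch_group_leave(group);
    }];

    // Block this queue-provided thread until the async work signals completion.
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    // When main returns, the default start machinery marks the operation finished.
}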
So another way you could do it is to define yourself as a concurrent operation, and set isFinished by hand in your NSURLConnection delegate methods. Similar situations happen if you're wrapping other async interfaces like Dispatch I/O, and concurrent operations can be more efficient for that.
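A concurrent operation of that kind looks roughly like the sketch below: override start without calling super, report YES from isConcurrent, and post the KVO notifications for isExecuting and isFinished by hand when the delegate callbacks tell you the work is done. The class name and the startConnection/finish helpers are illustrative only:

@interface MyConcurrentOperation : NSOperation
@end

@implementation MyConcurrentOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isConcurrent { return YES; }
- (BOOL)isExecuting  { return _executing; }
- (BOOL)isFinished   { return _finished; }

- (void)start {
    if (self.isCancelled) {
        [self willChangeValueForKey:@"isFinished"];
        _finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }
    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    [self startConnection];   // hypothetical: kicks off NSURLConnection, Dispatch I/O, etc.
}

// Call this from the delegate callbacks (e.g. connectionDidFinishLoading:).
- (void)finish {
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}

@end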
(In theory, concurrent operations can also be useful when you want to run an operation without using a queue. I can kind of imagine some very convoluted cases where this makes sense, but it's a stretch, and if you're in that boat, I assume you know what you're doing.)
But if you have any question at all, just use the default non-concurrent behavior. You can almost always get the behavior you want that way with little hassle (especially if you use a dispatch group), and then you don't have to wrap your brain around the somewhat confusing explanation of "concurrent" in the docs.
I would assume that concurrent vs. non-concurrent is not just a flag somewhere but a very substantial difference. By having two different methods, it is made absolutely sure that you don't use a concurrent operation where you should use a non-concurrent one or vice versa.
If you get it wrong, your code will absolutely not work because of this design. That's what you want, because then you fix it immediately. If there were only one method, using concurrent instead of non-concurrent would lead to very subtle errors that might be very hard to find, and using non-concurrent instead of concurrent would lead to performance problems that you also might miss.

Select proper multithreading technique in iOS

I am confused about where to use which multithreading tool in iOS for hitting services and changing the UI based on service data.
First I got accustomed to using NSURLConnection and its delegates, using didReceiveResponse:, didReceiveData:, etc. to achieve the task.
Then I learned and used GCD to hit services and update the UI from within the block code.
Now I am learning to use performSelectorInBackground:withObject: to do work on a background thread.
I am clearly confused about which tool to use where.
NSURLConnection with delegate calls is the "old school" way of receiving data from a remote server. It's also not really comfortable to use with several NSURLConnection instances in a single class (a UIViewController or whatnot). Nowadays it's better to use the sendAsynchronousRequest:queue:completionHandler: method. You can also define on which operation queue (the main one for UI, or another, background one) the completion handler will be run.
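For example (the URL and the updateViewsWithData: method are placeholders):

NSURLRequest *request =
    [NSURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/data"]];

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]   // handler runs on the main queue
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    if (data) {
        [self updateViewsWithData:data];   // placeholder UI update
    }
}];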
GCD is good for all sorts of tasks, not only fetching remote resources with the initWithContentsOfURL: methods. You can also control what type of queue will receive your blocks (concurrent, serial, etc.).
performSelectorInBackground: is also an "old school" way of performing a method on a background thread. If you weren't using ARC, you'd need to set up a separate autorelease pool to avoid memory leaks. It also has the limitation of not accepting an arbitrary number of parameters for the given selector. In that case it's recommended to use dispatch_async.
There is also NSOperationQueue with NSOperation and its subclasses (NSInvocationOperation and NSBlockOperation), where you can run tasks in the background as well as get notified on the main thread about finished tasks. IMHO they are more flexible than GCD in that you can create your own operation subclasses as well as define dependencies between them.
The most important thing is that you never change the UI on any thread except the main thread.
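A small sketch of that operation-queue style, with one operation depending on another and a hop back to the main queue at the end (the queue and the downloadData/parseDownloadedData/reloadUI methods are made up):

NSOperationQueue *workQueue = [[NSOperationQueue alloc] init];

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    [self downloadData];          // made-up helper
}];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
    [self parseDownloadedData];   // made-up helper
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self reloadUI];          // UI work stays on the main thread
    }];
}];

[parse addDependency:download];   // parse won't start until download has finished
[workQueue addOperations:@[download, parse] waitUntilFinished:NO];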
I think that all the approaches you mentioned use the same technique under the hood: GCD. But I'm not sure of that.
Anyway, in terms of threading it doesn't matter much which tool you use.
It's rather a question of your purpose. If you want to fetch a small image or just a little data, you can use a contentsOfURL: method in a performSelectorInBackground: call or a GCD dispatch block.
If it's about more data and more information, like progress or error handling, you should stick with NSURLConnection.
I suggest using GCD in all cases. Other techniques are still around but mainly for backward compatibility.
GCD is better for 3 reasons (at least):
It's extremely easy to use and the code remains very readable because of the use of blocks
It is lower level than things like NSOperation, so it is much faster when you need high-performance multithreading
It's lightweight and non-intrusive so your code doesn't have to change substantially when you want to add thread management in the middle of a method.

Using a single shared background thread for iOS data processing?

I have an app where I'm downloading a number of resources from the network, and doing some processing on each one. I don't want this work happening on the main thread, but it's pretty lightweight and low-priority, so all of it can really happen on the same shared work thread. That seems like it'd be a good thing to do, because of the work required to set up & tear down all of these work threads (none of which will live very long, etc.).
Surprisingly, though, there doesn't seem to be a simple way to get all of this work happening on a single, shared thread, rather than spawning a new thread for each task. This is complicated by the large number of paths to achieving concurrency that seem to have cropped up over the years. (Explicit NSThreads, NSOperationQueue, GCD, etc.)
Am I over-estimating the overhead involved in spawning all of these threads? Should I just not sweat it, and use the easier thread-per-task approaches? Use GCD, and assume that it's smarter than I about thread (re)use?
Use GCD — it's the current official recommendation and it's less effort than any of the other solutions. If you explicitly need the things you pass in to occur serially (i.e., as if on a single thread) then you can achieve that, but it's probably smarter just to change, e.g.
[self doCostlyTask];
To:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^()
{
    [self doCostlyTask];

    dispatch_async(dispatch_get_main_queue(), ^()
    {
        // most UIKit tasks are permissible only from the main queue or thread,
        // so if you want to update the UI as a result of the completed action,
        // this is a safe way to proceed
        [self costlyTaskIsFinished];
    });
});
That essentially tells the OS "do this code with low priority wherever it would be most efficient to do it". The various things you post to any of the global queues may or may not execute on the same thread as each other or as the thread that dispatched them, and may or may not occur concurrently. The OS applies the rules it considers optimal.
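If you do need the strict serial ordering mentioned above (the "as if on a single thread" case), the usual sketch is a private serial queue; the label string here is arbitrary:

// Create once and keep a reference. Blocks dispatched to this queue run
// one at a time, in FIFO order, though not necessarily on the same
// underlying thread each time.
dispatch_queue_t workQueue =
    dispatch_queue_create("com.example.app.work", DISPATCH_QUEUE_SERIAL);

dispatch_async(workQueue, ^{
    [self doCostlyTask];
});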
Exposition:
GCD is Apple's implementation of thread pooling, and they introduced closures (as 'blocks') at the same time to make it usable. So the ^(C-style args){code} syntax is a block/closure. That is, it's code plus the state of any variables (subject to caveats) that the code references. You can store and call blocks yourself with no GCD knowledge or use.
dispatch_async is a GCD function that issues a block to the nominated queue. It executes the block on some thread at some time, applying unspecified internal rules to do so in an optimal fashion. It'll judge that based on factors such as how many cores you have, how busy each one is, its current thinking on power saving (which may depend on the power source), how the power costs for that specific CPU work out, etc.
So as far as the programmer is concerned, blocks make code into something you can pass around as an argument. GCD lets you request that blocks be executed according to the best scheduling the OS can manage. Blocks are very lightweight to create and copy, a lot more so than, for example, NSOperations.
GCD goes beyond the basic asynchronous dispatch in the above example (e.g. you can do a parallel for loop and wait for it to finish in a single call), but unless you have specific needs it's probably not all that relevant.
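The parallel for loop referred to there is dispatch_apply; a minimal example, assuming an items array of work to process:

// Runs the block once per index, potentially in parallel, and does not
// return until every iteration has completed.
dispatch_apply(items.count,
               dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
               ^(size_t i) {
    [self processItem:items[i]];   // hypothetical per-item work
});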
Surprisingly, though, there doesn't seem to be a simple way to get all of this work happening on a single, shared thread, rather than spawning a new thread for each task.
This is exactly what GCD is for. GCD maintains a pool of threads that can be used for executing arbitrary blocks of code, and it takes care of managing that pool for best results on whatever hardware is at hand. This avoids the cost of constantly creating and destroying threads and also saves you from having to figure out how many processors are available, etc.
Tommy provides the right answer if you really care that only a single thread should be used, but it sounds like you're really just trying to avoid creating one thread per task.
This is complicated by the large number of paths to achieving concurrency that seem to have cropped up over the years. (Explicit NSThreads, NSOperationQueue, GCD, etc.)
NSOperationQueue uses GCD, so you can use that if it makes life easier than using GCD directly.
Use GCD, and assume that it's smarter than I about thread (re)use?
Exactly.
I would use NSOperationQueue or GCD and profile. Can't imagine thread overhead will beat out network delays.
NSOperationQueue would let you limit the number of simultaneous operations, if they end up getting too greedy. In fact, you can limit it to one if you need to.
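For example, limiting the queue to one concurrent operation gives you the single shared "work lane" the question is after (processDownloadedResource: and resource are placeholders):

NSOperationQueue *processingQueue = [[NSOperationQueue alloc] init];
processingQueue.maxConcurrentOperationCount = 1;   // operations run strictly one after another

[processingQueue addOperationWithBlock:^{
    [self processDownloadedResource:resource];     // placeholder per-resource work
}];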
