My app does a lot of writing/reading from the SQLite DB and I'd like it to execute all of these on another thread, so that the Main thread is not blocked.
But all these DB operations have to be executed one after another, or it won't work.
From what I understand, I should use a serial queue and add all the tasks to it.
If that's right, how do I create a global serial queue and add tasks to it from whatever view I'm in?
Or maybe I didn't get it at all, in which case I need someone to point me in the right direction.
Thanks.
As Ashley Mills suggested, you can create a GCD queue:
dispatch_queue_t queue = dispatch_queue_create("SQLSerialQueue", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
// ...
});
But another option is to use NSOperationQueue, which I prefer:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;
queue.name = @"SQLSerialQueue";
[queue addOperationWithBlock:^{
// ...
}];
NSOperationQueue is built on top of GCD queues and lets you wait for running operations to finish (something like turning an async task into a sync one).
You can also create subclasses of NSOperation for tasks you perform frequently and add them easily to the queue.
Another advantage of NSOperationQueue is the class method +currentQueue; getting the equivalent information is much harder in a GCD environment.
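For illustration, here is a minimal sketch of both points, waiting for completion and checking the current queue, built on the same serial operation queue as above:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;

[queue addOperationWithBlock:^{
    // ... some work ...
}];

// Blocks the calling thread until everything queued so far has finished
// (don't do this on the main thread).
[queue waitUntilAllOperationsAreFinished];

[queue addOperationWithBlock:^{
    // Inside an operation's block you can ask which queue is executing it.
    if ([NSOperationQueue currentQueue] == queue) {
        NSLog(@"Running on my serial queue");
    }
}];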
On the other hand, NSOperationQueue is missing the barrier operations found in GCD. In the end, each feature can be replicated in the other framework, just with a little (or a lot) more work.
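For instance, this is roughly what a GCD barrier looks like (the queue label is made up); reproducing this behaviour with NSOperationQueue takes noticeably more code:
dispatch_queue_t rwQueue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(rwQueue, ^{ /* concurrent read */ });
dispatch_async(rwQueue, ^{ /* concurrent read */ });

dispatch_barrier_async(rwQueue, ^{
    // Runs only after the previously submitted blocks finish,
    // and nothing else on rwQueue runs while it executes.
});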
If you decide to use GCD, but don't like its C interface, check my Objective-C wrapper: Grand Object Dispatch ;)
All you need to do to create a serial queue is:
dispatch_queue_t myQueue = dispatch_queue_create("myqueue", DISPATCH_QUEUE_SERIAL);
Perhaps look at using a singleton object that has myQueue as a property that can be accessed from anywhere in the app.
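A minimal sketch of such a singleton might look like this (the class name DBQueue is just a placeholder):
// DBQueue exposes one shared serial queue for all DB work.
@interface DBQueue : NSObject
@property (nonatomic, readonly) dispatch_queue_t queue;
+ (instancetype)shared;
@end

@implementation DBQueue

+ (instancetype)shared {
    static DBQueue *instance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        instance = [[DBQueue alloc] init];
    });
    return instance;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("SQLSerialQueue", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

@end

// From any view controller:
dispatch_async([DBQueue shared].queue, ^{
    // run your SQLite statements here; blocks execute one at a time, in order
});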
Speaking from my own experience, you don't want to try to thread your database access too much without using a framework to handle it for you. I would suggest looking into FMDatabaseQueue.
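If FMDB is an option, usage is roughly along these lines (dbPath and the SQL are placeholders); FMDatabaseQueue serializes the database access for you, so it's safe to call from any thread:
#import "FMDatabaseQueue.h"
#import "FMDatabase.h"

FMDatabaseQueue *dbQueue = [FMDatabaseQueue databaseQueueWithPath:dbPath];

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [dbQueue inDatabase:^(FMDatabase *db) {
        // Each block handed to inDatabase: runs one at a time.
        [db executeUpdate:@"INSERT INTO log (message) VALUES (?)", @"hello"];
    }];
});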
Related
If an iOS app has to make hundreds of server requests in the background and save the results in a local database, which approach would be better in terms of performance (fewer crashes)?
Passing all the requests as one block on the global background queue:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    for (NSUInteger i = 0; i < users.count; i++)
    {
        // Call Api 1
        // Call Api 2
        // Call Api 3
    }
});
OR
Creating n user serial queues and adding all three API calls as individual blocks to each serial queue:
for (NSUInteger i = 0; i < users.count; i++)
{
    dispatch_queue_t myQueue = dispatch_queue_create([[users objectAtIndex:i] UTF8String], DISPATCH_QUEUE_SERIAL);
    dispatch_async(myQueue, ^{
        // Call Api 1
    });
    dispatch_async(myQueue, ^{
        // Call Api 2
    });
    dispatch_async(myQueue, ^{
        // Call Api 3
    });
}
Edit: In each API call, I am using an NSOperationQueue:
queue = [[NSOperationQueue alloc] init];
[NSURLConnection sendAsynchronousRequest:request queue:queue completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // ...
}];
I would suggest using NSOperations instead. Create n NSOperations, one for each API call, create an NSOperationQueue, add the NSOperations to the queue, then sit back and relax while iOS takes on the burden of deciding how many operations to run concurrently and all the other complicated work associated with threads and memory for you :)
The major difference between GCD and NSOperations is the ability to pause and resume operations :) With GCD, once a block has been submitted it is bound to run, and there is no way you can skip or pause it :)
Adding dependencies between multiple operations in GCD is cumbersome, whereas adding dependencies and prioritising tasks with NSOperations is just a matter of a few statements :)
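A rough sketch of what that looks like (the operation names are placeholders for your API calls):
NSOperationQueue *apiQueue = [[NSOperationQueue alloc] init];

NSOperation *callApi1 = [NSBlockOperation blockOperationWithBlock:^{ /* Call Api 1 */ }];
NSOperation *callApi2 = [NSBlockOperation blockOperationWithBlock:^{ /* Call Api 2 */ }];

[callApi2 addDependency:callApi1];                 // Api 2 waits for Api 1
callApi1.queuePriority = NSOperationQueuePriorityHigh;

[apiQueue addOperations:@[callApi1, callApi2] waitUntilFinished:NO];

apiQueue.suspended = YES;   // pause: nothing new starts
apiQueue.suspended = NO;    // resume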
EDIT
As per your edit, you are already using an NSOperationQueue for each API call. So there is absolutely no need to wrap those calls in dispatch_async again :) Rather, consider creating a single NSOperationQueue and adding all the operations to it :)
Q&A for your comments
1. What if I create a new NSOperationQueue every time instead of adding NSOperations to a single NSOperationQueue?
Creating a separate queue for each NSOperation is never a great idea :)
NSOperationQueue was suggested only with the intention of sparing you the complications of managing multiple threads manually :)
When you submit NSOperations to an NSOperationQueue you can specify how many concurrent operations that queue may perform.
Notice that this is just an upper bound :) If you set the maximum allowed concurrent operations to 10, it means that when iOS has resources like memory and CPU cycles available it may carry out up to 10 operations at once.
You, as a developer, may not always be in a great position to decide the optimal number of threads the system can afford :) but the OS can always do it efficiently. So it is always advisable to pass this burden on to the OS as much as possible :)
But if you do want a separate thread for each API call, then rather than creating a separate queue for each, consider executing the NSOperations individually by calling start on each of them. Creating an NSOperationQueue for every single NSOperation is pure overhead.
If you have any experience with Java, you have probably come across the ExecutorPool concept; NSOperationQueue does exactly what an ExecutorPool does for Java :)
2. Should I use [NSURLConnection sendAsynchronousRequest:request queue:[NSOperationQueue mainQueue] ...] instead of [NSURLConnection sendAsynchronousRequest:request queue:[[NSOperationQueue alloc] init] ...]?
I believe you are aware that all your UI work is performed on the main thread, and that the main thread uses the main queue to dispatch its events. Performing lengthy operations on the main queue and keeping it busy will lead to a very bad user experience.
Calling 100 APIs on the main queue may lead to users uninstalling your app and giving it the worst possible rating :)
I guess you know the answer to your question now :) To be specific, use [[NSOperationQueue alloc] init] for your situation.
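Roughly like this, with the completion handler delivered on a background operation queue and only the UI update bounced to the main queue:
NSOperationQueue *networkQueue = [[NSOperationQueue alloc] init];

[NSURLConnection sendAsynchronousRequest:request
                                   queue:networkQueue
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // parse `data` here, off the main thread
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // touch the UI only here
    }];
}];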
First you should read this: GCD Practicum.
Second, you shouldn't roll your own solution here. Instead use AFNetworking and just make the requests as needed. It has its own operation queue already set up, so you don't need to deal with that. Then set the maximum number of concurrent requests to some value that you tune by hand. Start with four.
I am studying the NSOperation object. According to the Apple documentation (Listing 2-5), we can implement an NSOperation that runs asynchronously. The relevant part of the start method is below:
- (void)start {
    // Always check for cancellation before launching the task.
    if ([self isCancelled])
    {
        // Must move the operation to the finished state if it is canceled.
        [self willChangeValueForKey:@"isFinished"];
        finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }

    // If the operation is not canceled, begin executing the task.
    [self willChangeValueForKey:@"isExecuting"];
    [NSThread detachNewThreadSelector:@selector(main) toTarget:self withObject:nil];
    executing = YES;
    [self didChangeValueForKey:@"isExecuting"];
}
I can see that a new thread is assigned to run the main method:
[NSThread detachNewThreadSelector:@selector(main) toTarget:self withObject:nil];
So it seems that NSOperation itself does nothing about concurrent execution; the asynchrony is achieved by creating a new thread. So why do we need NSOperation at all?
You may use a concurrent NSOperation to execute an asynchronous task. We should emphasize the fact that the asynchronous task's main work will execute on a different thread than the one where its start method was invoked.
So, why is this useful?
Wrapping an asynchronous task in a concurrent NSOperation lets you leverage NSOperationQueue and, furthermore, enables you to set up dependencies between operations.
For example, enqueuing operations into an NSOperationQueue lets you define how many operations execute in parallel. You can also easily cancel the whole queue, that is, all operations that have been enqueued.
An NSOperationQueue is also useful for associating work with certain resources - for example, one queue executes CPU-bound asynchronous tasks, another executes IO-bound tasks, and yet another executes tasks related to Core Data, and so forth. For each queue you can define the maximum number of concurrent operations.
With dependencies you can achieve composition. That means, for example, you can define that operation C only executes after operations A and B have finished. With composition you can solve more complex asynchronous problems.
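A minimal sketch of that composition, using block operations for brevity (the tasks themselves are placeholders):
NSOperation *a = [NSBlockOperation blockOperationWithBlock:^{ /* task A */ }];
NSOperation *b = [NSBlockOperation blockOperationWithBlock:^{ /* task B */ }];
NSOperation *c = [NSBlockOperation blockOperationWithBlock:^{ /* needs results of A and B */ }];

// C will not start until both A and B have finished.
[c addDependency:a];
[c addDependency:b];

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperations:@[a, b, c] waitUntilFinished:NO];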
That's it, IMHO.
I would like to mention that using NSOperations this way is somewhat cumbersome, clunky and verbose, requiring a lot of boilerplate code, subclasses and so on. There are much better alternatives, although they require third-party libraries.
I am new to iPhone development and am going through the GCD concepts for multithreading.
'dispatch_queue_t' creates a serial queue, and I have read that a serial queue will only execute one job at a time. GCD is intended to execute multiple tasks simultaneously, so why do serial queues even exist?
For example, I want to do two tasks, Task A and Task B. I create one serial queue for executing both of these tasks. I am doing this on the main queue. Here is what I am doing:
dispatch_queue_t my_serial_queue = dispatch_queue_create("queue_identifier", 0);
dispatch_sync(my_serial_queue, ^{
    NSLog(@"Task 1");
});
dispatch_sync(my_serial_queue, ^{
    NSLog(@"Task 2");
});
Now, as per the rule, both tasks execute serially since it is a serial queue, i.e. Task A is executed first and only after Task A finishes is Task B executed. And that is exactly the output I get in the log.
So, my question is: what if I want to execute both tasks simultaneously? If the answer to this question is to create another serial queue for Task B, then the code would be structured like this:
dispatch_queue_t my_serial_queue_1 = dispatch_queue_create("queue_identifier_1", 0);
dispatch_queue_t my_serial_queue_2 = dispatch_queue_create("queue_identifier_2", 0);
dispatch_sync(my_serial_queue_1, ^{
    NSLog(@"Task 1");
});
dispatch_sync(my_serial_queue_2, ^{
    NSLog(@"Task 2");
});
I am still getting the same output. The reason is that I am using dispatch_sync instead of dispatch_async. But as I am running the two tasks on different queues, shouldn't they execute simultaneously? If not, then why should we create a different queue at all? I might as well have used the same queue with dispatch_async to execute both tasks simultaneously.
I really need an answer to this question, because it will guide me when designing the structure of my multitasking apps in the future.
Your confusion is pretty much entirely because you're using dispatch_sync. dispatch_sync is not a tool for getting concurrent execution, it is a tool for temporarily limiting it for safety.
Once you're using dispatch_async, you can get concurrency either by having more than one queue, or by using concurrent queues. The purpose of using serial queues in this context is to control which work is done simultaneously.
Consider the following very silly example:
__block void *shared = NULL;
for (;;) {
    dispatch_async(aConcurrentDispatchQueue, ^{
        shared = malloc(128);
        free(shared);
    });
}
this will crash, because eventually two of the concurrently executing blocks will free 'shared' in a row. This is a contrived example, obviously, but in practice nearly all shared mutable state must not be changed concurrently. Serial queues are your tool for making sure that it isn't.
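The same contrived example becomes safe if every access to shared goes through one serial queue (the queue label is made up):
dispatch_queue_t isolation = dispatch_queue_create("com.example.shared-state", DISPATCH_QUEUE_SERIAL);
__block void *shared = NULL;

for (;;) {
    dispatch_async(isolation, ^{
        // Blocks on a serial queue run one at a time, in order,
        // so two of them can never touch `shared` simultaneously.
        shared = malloc(128);
        free(shared);
    });
}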
Summarizing:
When the work in your blocks is truly independent and thread-safe, and purely computational, go ahead and use dispatch_async to a concurrent queue
When you have logically independent tasks that each consist of several related blocks, use a serial queue for each task
When you have work that accesses shared mutable state, do those accesses in dispatch_sync()ed blocks on a serial queue, and the rest of the work on a concurrent queue
When you need to do UI-related work, dispatch_async or dispatch_sync to the main queue, depending on whether you need to wait for it to finish or not
'dispatch_queue_t' doesn't create anything; it is the type of a dispatch queue, which can be either serial or concurrent.
dispatch_queue_create takes two parameters. The second parameter decides whether the queue it creates is serial or concurrent. But usually you don't create concurrent queues yourself; you use one of the existing global concurrent queues.
dispatch_sync dispatches a block on a queue and waits until the block is finished. It should be obvious that this very much limits concurrency. You should almost always use dispatch_async.
Serial queues can only execute one block at a time. It should be obvious that this very much limits concurrency. Serial queues are still useful when you need to perform various blocks one after another, and they can execute concurrently with blocks on other queues.
So for maximum use of the processors, use dispatch_async on a concurrent queue. And there is no need to create more than one concurrent queue: it's concurrent, so it can run any number of blocks concurrently.
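So, for your two tasks, something like this would let them run at the same time:
// dispatch_async returns immediately and the global queue is concurrent,
// so both blocks are free to execute simultaneously.
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_async(concurrentQueue, ^{
    NSLog(@"Task 1");
});
dispatch_async(concurrentQueue, ^{
    NSLog(@"Task 2");
});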
Say I want to implement a pattern like so:
a = some array I download from the internet
b = manipulate a somehow (long operation)
c = first object of b
These would obviously need to be called synchronously, which is causing my problems in Objective-C. I've read about NSOperationQueue and GCD, and I don't quite understand them, or which would be appropriate here. Can someone please suggest a solution? I know I can also use the performSelector variants with waitUntilDone:, but that doesn't seem efficient for larger operations.
So create a serial dispatch queue, dump all the work there (each step in its own block), and in the last block post a message back to yourself on the main queue, telling your controlling class that the work is done.
This is by far and away the best architecture for many such tasks.
I'm glad your question is answered. A couple of additional observations:
A minor refinement in your terminology, but I'm presuming you want to run these tasks asynchronously (i.e. not blocking the main queue and freezing the user interface), while the tasks themselves operate in a serial manner (i.e. each one waits for the prior step to complete before starting).
The easiest approach, before I dive into serial queues, is to just do these three in a single dispatched task:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self doDownloadSynchronously];
[self manipulateResultOfDownload];
[self doSomethingWithFirstObject];
});
As you can see, since this is all a single task running the three steps one after another, you can dispatch it to any background queue (above, it's one of the global background queues); because you're doing it all within a single dispatched block, the three steps will run sequentially, one after the next.
Note that while it's generally inadvisable to make synchronous network requests, doing so on a background queue is less problematic (though you might want to create an operation-based network request, as discussed further below, if you want the ability to cancel an in-progress request).
If you really need to dispatch these three tasks separately, then just create your own private serial queue. By default, when you create your own custom dispatch queue, it is a serial queue, e.g.:
dispatch_queue_t queue = dispatch_queue_create("com.company.app.queuename", 0);
You can then schedule these three tasks:
dispatch_async(queue, ^{
[self doDownloadSynchronously];
});
dispatch_async(queue, ^{
[self manipulateResultOfDownload];
});
dispatch_async(queue, ^{
[self doSomethingWithFirstObject];
});
The operation queue approach is just as easy, but by default, operation queues are concurrent, so if we want it to be a serial queue, we have to specify that there are no concurrent operations (i.e. the max concurrent operation count is 1):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;
[queue addOperationWithBlock:^{
[self doDownloadSynchronously];
}];
[queue addOperationWithBlock:^{
[self manipulateResultOfDownload];
}];
[queue addOperationWithBlock:^{
[self doSomethingWithFirstObject];
}];
This raises the question of why you might use the operation queue approach over the GCD approach. The main reason is cancelability: if you need the ability to cancel operations (e.g. you might want to stop them if the user dismisses the view controller that initiated the asynchronous work), operation queues offer that ability, whereas it's far more cumbersome to do with GCD tasks.
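For example, a sketch of that cancelation, assuming the queue is kept in a property (here called self.queue):
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    // Operations that haven't started yet won't start;
    // already-running block operations will still run to completion.
    [self.queue cancelAllOperations];
}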
The only tricky/subtle issue here, in my opinion, is how you want to perform that network operation synchronously. You can use the NSURLConnection class method sendSynchronousRequest:returningResponse:error:, or just grab the NSData from the server using dataWithContentsOfURL:. There are limitations to these sorts of synchronous network requests (e.g. you can't cancel the request once it starts), so many of us would use an NSOperation-based network request instead.
Doing that properly is probably beyond the scope of your question, so I might suggest that you consider using AFNetworking to create an operation-based network request, which you could integrate into the operation queue approach above, eliminating much of the programming you'd otherwise need for your own NSOperation-based network operation.
The main thing to remember is that when you run this sort of code on a background queue and then need to do UI updates (or update the model), those must take place back on the main queue, not the background queue. So in a GCD implementation you'd do:
dispatch_async(queue, ^{
[self doSomethingWithFirstObject];
dispatch_async(dispatch_get_main_queue(),^{
// update your UI here
});
});
The equivalent NSOperationQueue rendition would be:
[queue addOperationWithBlock:^{
[self doSomethingWithFirstObject];
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
// update your UI here
}];
}];
The essential primer on these issues is the Concurrency Programming Guide.
There are a ton of great WWDC videos on the topic, including the WWDC 2012 videos Asynchronous Design Patterns with Blocks, GCD, and XPC and Building Concurrent User Interfaces on iOS, and the WWDC 2011 videos Blocks and Grand Central Dispatch in Practice and Mastering Grand Central Dispatch.
I would take a look at ReactiveCocoa (https://github.com/ReactiveCocoa/ReactiveCocoa), which is a great way to chain operations together without blocking. You can read more about it at NSHipster: http://nshipster.com/reactivecocoa/
In GCD, is there a way to tell if the current queue is concurrent or not?
I'm currently attempting to perform a delayed save on some managed object contexts, but I need to make sure that the queue the code is currently executing on is thread-safe (i.e., a serial queue).
If you actually have to determine whether or not the queue passed in to you is serial or concurrent, you've almost certainly designed things incorrectly. Typically, an API will hide an internal queue as an implementation detail (in your case, your shared object contexts) and then enqueue operations against its internal queue in order to achieve thread safety. When your API takes a block and a queue as parameters, however, then the assumption is that the passed-in block can be safely scheduled (async) against the passed-queue (when, say, an operation is complete) and the rest of the code is factored appropriately.
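A rough sketch of that shape, with the class and method names invented purely for illustration: the object keeps its own serial isolation queue private, and the caller only supplies a queue for the completion callback.
@interface ContextStore : NSObject
- (void)saveWithCompletionQueue:(dispatch_queue_t)completionQueue
                     completion:(void (^)(NSError *error))completion;
@end

@implementation ContextStore {
    dispatch_queue_t _isolationQueue;   // private serial queue, an implementation detail
}

- (instancetype)init {
    if ((self = [super init])) {
        _isolationQueue = dispatch_queue_create("com.example.context.isolation", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)saveWithCompletionQueue:(dispatch_queue_t)completionQueue
                     completion:(void (^)(NSError *error))completion {
    dispatch_async(_isolationQueue, ^{
        NSError *error = nil;
        // ... perform the save against the shared context here ...
        dispatch_async(completionQueue, ^{
            if (completion) completion(error);
        });
    });
}
@end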
Yes, assuming you're doing the work in an NSOperation subclass:
[myOperation isConcurrent] //or self, if you're actually in the NSOperation
If you need to ensure that some operations always execute one at a time, you can create a specific operation queue and set its maximum concurrent operation count to 1.
NSOperationQueue * synchronousQueue = [[NSOperationQueue alloc] init];
[synchronousQueue setMaxConcurrentOperationCount:1];
GCD takes some planning ahead. The only other way I can think of is to observe the value isExecuting (or similar) on your NSOperation objects. Check out this reference on that. This solution would be more involved, so I hope the other one works for you.
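If you do go the KVO route, a minimal sketch (myOperation as above; remember to remove the observer when you're done):
// Register for changes to the operation's isExecuting key.
[myOperation addObserver:self
              forKeyPath:@"isExecuting"
                 options:NSKeyValueObservingOptionNew
                 context:NULL];

// In the observing class:
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if ([keyPath isEqualToString:@"isExecuting"]) {
        // the operation just started or stopped executing
    }
}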