Make NSOperations mutually exclusive - iOS

Refer to this video from WWDC: https://developer.apple.com/videos/play/wwdc2015/226/
The speaker shows that we can add a dependency between two NSOperation instances of the same type, for example an NSOperation that displays an alert. By doing this we can make sure that we don't throw multiple alerts at the same time and annoy the user.
If one alert is already being displayed, the next one will wait.
I still can't figure out how to implement this dependency between NSOperations across queues. In simpler words, can anyone show an example (implementation) of the following two things?
1. Adding a dependency of operation B from queue 2 on operation A from queue 1.
2. Adding dependencies between multiple instances of the same NSOperation type, even if they are in different queues. Example: if I add multiple instances of "AlertOperation" to different queues, I want to make sure they still take place sequentially among themselves.
I would appreciate it if the examples are in Objective-C.
Please ask for more clarification if needed.

I'm the engineer who presented that session.
The short answer is that in order to make your second operation dependent on the first operation, you have to maintain a reference to the first operation.
The sample code provided with the session uses a global table that keeps track of all the currently-executing operations. When a new operation comes in that specifies it should be mutually exclusive with other operations of the same kind, the code looks in the table for the other operations of that kind. The new operation is then made dependent on the last one in the list.
Since the table is a global table, it works regardless of which queue the operations are actually executing on. The only thing it requires is using the custom NSOperationQueue subclass ("OperationQueue") as the thing that's executing operations.
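A minimal sketch of that idea (not the actual session sample code; the ExclusivityController name and the "Alert" category are illustrative): a shared registry keyed by an exclusivity category, where each new operation is made dependent on the most recently registered operation in the same category, regardless of which queue it ends up on.

```swift
import Foundation

/// Illustrative stand-in for the session's mutual-exclusivity machinery.
final class ExclusivityController {
    static let shared = ExclusivityController()
    private let lock = NSLock()
    private var operations: [String: [Operation]] = [:]

    /// Makes `operation` dependent on the last operation registered for `category`,
    /// so operations in the same category run one at a time across all queues.
    func addOperation(_ operation: Operation, category: String) {
        lock.lock()
        if let last = operations[category]?.last {
            operation.addDependency(last)
        }
        operations[category, default: []].append(operation)
        lock.unlock()

        // Clean up the registry when the operation finishes.
        // (For brevity this overwrites any existing completionBlock.)
        let id = ObjectIdentifier(operation)
        operation.completionBlock = { [weak self] in
            guard let self = self else { return }
            self.lock.lock()
            self.operations[category]?.removeAll { ObjectIdentifier($0) == id }
            self.lock.unlock()
        }
    }
}

// Usage: register alert operations under the same category before enqueueing them,
// even if they go to different queues. BlockOperation stands in for a real AlertOperation.
let queue1 = OperationQueue()
let queue2 = OperationQueue()
let alertA = BlockOperation { print("Alert A") }
let alertB = BlockOperation { print("Alert B") }
ExclusivityController.shared.addOperation(alertA, category: "Alert")
ExclusivityController.shared.addOperation(alertB, category: "Alert")
queue1.addOperation(alertA)
queue2.addOperation(alertB)
```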

From the comments, the underlying question is:
how can I add a dependency to an existing operation when I don't have a reference to it
You should create multiple different queues, and specifically in this case a queue just for alert operations. Technically it can work with a single queue, but you need to do a bit more work.
With a specific queue you can simply iterate the operations currently on the queue and add a dependency to every one. If you don't have a specific queue then you'll need to do a class test (or use some other logic) to decide exactly which operations to add a dependency to.
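A rough sketch of that approach, assuming AlertOperation is your own Operation subclass and alertQueue is a queue used only for alerts:

```swift
import Foundation

// Placeholder for the asker's alert-presenting Operation subclass.
class AlertOperation: Operation { /* presents an alert; body omitted */ }

// A queue used only for alert operations, so everything on it is known to be an alert.
let alertQueue = OperationQueue()

/// Enqueues an alert operation after making it depend on every operation
/// already on the alert queue, so alerts are shown one at a time.
func enqueueAlertOperation(_ operation: Operation) {
    for existing in alertQueue.operations {
        operation.addDependency(existing)
    }
    alertQueue.addOperation(operation)
}

/// If you must share a general-purpose queue instead, filter by class first.
func enqueueAlertOperation(_ operation: Operation, on sharedQueue: OperationQueue) {
    for existing in sharedQueue.operations where existing is AlertOperation {
        operation.addDependency(existing)
    }
    sharedQueue.addOperation(operation)
}
```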

Related

Different DispatchQueue for different requests

I want to have different priorities for different HTTP requests, and I want to be able to stop/pause some requests immediately and execute only some of them if I have a request with high priority. I was going to use several approaches, but it seems that all of them are deprecated now:
1) Use AFHTTPRequestOperation, so I can create different OperationQueues based on DispatchQueues and add operations there, but that's deprecated in the new version of AFNetworking.
2) Use different DispatchQueues with different priorities for different synchronous requests (using NSURLConnection.sendSynchronousRequest), so I can stop some queues when I have highest-priority operations and cancel operations immediately. But as far as I understand, according to this Stack Overflow question, that approach is deprecated, so I can't send a synchronous request.
I understand there are ways to make requests look synchronous using semaphores, but that's only an illusion, because all the requests will still be executed on queues I can't control.
Are there any ways to control the DispatchQueue (or OperationQueue) where the request is executed?
You should/could use NSURLSessionTask/URLSessionTask subclasses to grab data from the network. You can easily encapsulate them inside NSOperation/Operation subclasses (see this slide).
You would just need to add cancellation support to it. You can find information about cancellation inside the Responding to the Cancel Command section of the NSOperation documentation.
To play with priorities you have several tools:
To handle priorities between OperationQueue instances, you can assign them different qualityOfService values.
To handle priorities between Operation instances, you can assign them different qualityOfService, queuePriority or threadPriority values.
Some additional explanation about those parameters can be found in the Prioritize Work with Quality of Service Classes section of the Energy Efficiency Guide for iOS Apps.
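A minimal sketch of such a wrapper, assuming a hypothetical NetworkOperation class: it drives a URLSession data task, manages the asynchronous operation state itself, and cancels the underlying task when the operation is cancelled; priorities are then set through qualityOfService and queuePriority.

```swift
import Foundation

/// Hypothetical Operation subclass wrapping a URLSession data task.
final class NetworkOperation: Operation {
    private let url: URL
    private let completion: (Data?, Error?) -> Void
    private var task: URLSessionDataTask?

    private var _executing = false
    private var _finished = false

    init(url: URL, completion: @escaping (Data?, Error?) -> Void) {
        self.url = url
        self.completion = completion
        super.init()
    }

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        // Responding to the cancel command: bail out before doing any work.
        if isCancelled {
            finish()
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        task = URLSession.shared.dataTask(with: url) { [weak self] data, _, error in
            self?.completion(data, error)
            self?.finish()
        }
        task?.resume()
    }

    override func cancel() {
        super.cancel()
        task?.cancel()   // stop the in-flight request immediately
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}

// Priorities: a lower-priority queue for background requests,
// plus per-operation priority within a queue.
let backgroundQueue = OperationQueue()
backgroundQueue.qualityOfService = .utility

let op = NetworkOperation(url: URL(string: "https://example.com/data")!) { data, error in
    // handle the response
}
op.queuePriority = .high
op.qualityOfService = .userInitiated
backgroundQueue.addOperation(op)
```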

Best way to ensure an initial network request is completed before other requests are sent (iOS app)

An app I am working on requires creating a container object on a server and inserting items into that container. I don't want to create the container object until the first item needs to be inserted. However, creating the container object requires some initialization that may take a little time. While that container is still initializing, the user can still send insertion requests that aren't handled because the container isn't ready yet. I have two main questions:
Should this be dealt with on the client or server side?
What is the best practice for dealing with this kind of issue?
Essentially, I need to ensure my initial createContainer data task is complete before any insertItem requests are sent.
Additional Information
An insertItem request is sent by clicking on a corresponding tableViewCell. The first tableViewCell a user clicks on sends a createContainer request that creates a container holding the first item.
For a container holding n items, the request should be sent in the following order:
createContainer(Container(with: item_1))
insertItem(item_2)
...
insertItem(item_n)
After the first request completes, the remaining n – 1 requests may complete in any order.
My Thoughts
It sounds like I want the createContainer request to be handled synchronously while the insertItem request should be handled asynchronously. I'm not sure if that is the best approach or even how to perform that appropriately, so any guidance would be greatly appreciated.
You can use an NSOperationQueue and multiple NSOperations to implement your desired behavior. An NSOperation instance can be dependent on the completion of another NSOperation instance:
dependencies
An array of the operation objects that must finish executing before the current object can begin executing.
For your example this means that the insertItem operations are dependent on the createContainer operation.
When you add all those operations to a NSOperationQueue your createContainer operation will run first. When it has finished, the other operations will start running as their dependencies are now satisfied. You can also control how many operations you want to run concurrently using maxConcurrentOperationCount on NSOperationQueue.
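A small sketch of that setup (createContainer and insertItem here are placeholder BlockOperations standing in for the asker's real network operations):

```swift
import Foundation

let requestQueue = OperationQueue()
requestQueue.maxConcurrentOperationCount = 4   // remaining requests may run concurrently

// Placeholder operations; in practice these would be asynchronous Operation
// subclasses wrapping the actual network calls.
let createContainer = BlockOperation { print("create container with item 1") }
let insertItems: [Operation] = (2...5).map { index in
    BlockOperation { print("insert item \(index)") }
}

// Every insert depends on the container having been created first.
for insert in insertItems {
    insert.addDependency(createContainer)
}

requestQueue.addOperation(createContainer)
requestQueue.addOperations(insertItems, waitUntilFinished: false)
```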
As you will be using an asynchronous API in your NSOperations, you will need to implement a concurrent operation and handle the state changes yourself. The API Reference explains this in pretty good detail.
Check out the API Reference for NSOperation for further information.
There is also a nice NSHipster article on NSOperations.
Adding to the NSOperationQueue answer, it's sometimes difficult to manually manage all the state changes that an NSOperation requires to handle something asynchronous like a network call.
To simplify that, you can use a Swift library called Overdrive. It's an amazing library in which you simply subclass Task, write your network code in the run() function and, when you're done, call self.finish to finish the task. Here's an example: create a simple download task and then add it to the queue, as sketched below.
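A rough sketch based on that description; the exact Overdrive API (finish(with:), TaskQueue, onValue/onError) is recalled from its README and may differ between versions:

```swift
import Foundation
import Overdrive

/// Hypothetical Overdrive task that downloads data from a URL.
final class DownloadTask: Task<Data> {
    let url: URL

    init(url: URL) {
        self.url = url
        super.init()
    }

    override func run() {
        URLSession.shared.dataTask(with: url) { data, _, error in
            if let data = data {
                self.finish(with: .value(data))           // finish with the result
            } else {
                self.finish(with: .error(error ?? URLError(.unknown)))
            }
        }.resume()
    }
}

// Add it to a queue and observe the result.
let task = DownloadTask(url: URL(string: "https://example.com/file")!)
task.onValue { data in print("downloaded \(data.count) bytes") }
task.onError { error in print("failed: \(error)") }

let queue = TaskQueue()
queue.add(task: task)
```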
You can also add dependencies between tasks, which basically solves your use case.
Hope this helps.

Bolts framework task queue

I'm developing an iOS app and have been looking into using Bolts framework by Parse (facebook) to manage network operations (using Alamofire for network requests).
I'm wondering if there is a good implementation/pattern out there for a task queue for Bolts. I need offline functionality, and therefore I (think I) need some sort of task queue, so that if the user is offline all of their save/create operations are saved (queued and persisted) and then executed once they have a network connection; this is also needed for retrying requests. I've looked at NSOperationQueue, so I may go that route, although I like how Bolts does things with BFTask and would prefer to use that.
I understand your problem, but I think that you mix up the purpose of NSOperation queue and BFTasks a little bit.
BFTasks are used to compose asynchronous and synchronous methods/network requests in a cohesive and minimalistic way. For instance, suppose you have to log in a user, present a search view and then download the user's search query results.
To keep your app optimized and the UI at 60fps, you would need to run your network requests asynchronously. You would present the search view only after the user has logged in (this technique is called "async tasks in series"), and then you would download the search results using parallel async requests. Think about downloading movie artwork for a movie-name query in iTunes: the artworks start downloading at the same time, "in parallel" to each other, so each image is downloaded independently of the others (this is called "async tasks in parallel").
As you can see from this example, we can only achieve the desired logic along with desired performance if we use sequential and parallel async requests.
Bolts framework allows you to achieve all of the aforementioned logic in a VERY cohesive and convenient way.
NSOperation queue, on the other hand, allows you to build a complex sequence of both sync and async methods. It even allows you to get the status of a particular operation and set up dependencies. A good example of this is the view controller lifecycle.
If I were you, I would first learn how to use Bolts and NSOperation queue apart from each other. Then, depending on what you actually need to achieve in your app in terms of functionality, I would start thinking about binding Bolts and NSOperation queue in a class or a struct (in case you use swift). Like using Bolts for "online" stuff (executing network requests) and NSOperation queue for "offline" (storing the sequence of actions the user makes while being offline, in order to execute this sequence when the internet connection is back).
You can read more about NSOperation here and about Bolts for iOS here.
UPDATE:
In terms of an implementation pattern, one suggestion you might want to consider is to create a simple class/struct responsible for storing ("stacking") your Bolts methods. You can use arrays for sequential logic and sets for parallel logic. Sets also make it easy to ensure that some requests happen only once, since sets store only unique objects. Honestly, in my opinion, you should try to implement something similar to what I described, because Bolts itself (almost for sure) incorporates NSOperation and NSOperationQueue.
By the way, since the Parse iOS SDK is open source now, you could see how they implement the saveEventually method, which saves an object when the internet connection is back, and think about how to replicate their logic according to your needs.
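One hedged sketch of that stacking idea, deliberately independent of the Bolts API (the PendingActionStore type and its methods are purely illustrative): keep an ordered list of pending actions while offline and replay them in order when connectivity returns.

```swift
import Foundation

/// Illustrative store for work queued up while offline.
/// Persisting the pending actions to disk is omitted for brevity.
final class PendingActionStore {
    typealias Action = ((Bool) -> Void) -> Void   // takes a success/failure completion

    private var pending: [Action] = []            // an array keeps the sequential order
    private let queue = DispatchQueue(label: "pending-actions")

    func enqueue(_ action: @escaping Action) {
        queue.async { self.pending.append(action) }
    }

    /// Replays queued actions one after another, e.g. when reachability reports
    /// that the connection is back.
    func flush() {
        queue.async {
            guard !self.pending.isEmpty else { return }
            let action = self.pending.removeFirst()
            action { succeeded in
                if succeeded {
                    self.flush()   // continue with the next pending action
                } else {
                    // Put it back and stop; try again on the next flush().
                    self.queue.async { self.pending.insert(action, at: 0) }
                }
            }
        }
    }
}
```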

NSOperation with dependency on another operation on a different queue does not start

I have dependency graph of operations and I use multiple queues to organize various streams of operations.
E.g. peopleQueue, sitesQueue, sessionQueue
sessionQueue: loginOp, fetchUpdatedAccountOp
peopleQueue: mostFrequentlyManagedClientsOp, remainingClientsOp
sitesQueue: mostFrequentlyAccessedSitesOp, remainingSitesOp
dependencies:
*all* -> loginOp
remainingClientsOp -> mostFrequentlyManagedClientsOp
remainingSitesOp -> mostFrequentlyAccessedSitesOp
The current setup works: after login completes, all the other operations kick off
mostFrequently* is a subset fetch that allows for a quick app response; a subsequent op fetches much more data (sometimes in pages) in the background.
Recently I thought I'd add an operation that depended on all the leaf operations.
This latest operation would act as a sentinel to tell me when the graph traversal had completed (firing it would cause an NSNotification post or something). So:
sentinelOp -> remainingClientsOp, remainingSitesOp, fetchUpdatedAccountOp
What I discovered, however, is that even though all its dependencies completed, the sentinel operation never started/fired.
The sentinel, at the time, was queued on the sessionQueue (no particular reason).
After playing around in the debugger, I discovered that I could only get it to fire if the sentinel depended on only operations that were on the same queue.
I finally got the sentinel to run by introducing a 4th queue for just that operation.
The sentinel depends on the other 3 leaf operations in their respective queues and then gets called when they all complete.
I can go with this working model but it really bothers me.
The Apple docs for both mac and iOS suggest that inter-queue dependency should work.
I will need to extend the graph a bit further so it troubles me that using an existing queue for inter-queue dependencies prevents the operation from executing.
Clearly, inter-queue dependencies work to some extent because I got loginOp to be the root dependency for other operations regardless of their queues in the first place.
What am I doing wrong by placing the sentinel operation on one of the existing 3 queues?
I resolved this issue by using only 1 queue. I still can't understand what was wrong with the original implementation but I learned a couple of things that removed the need for multiple queues.
First, it's somewhat straightforward to observe a queue's pending operation count using KVO. This is how I was able to do away with the sentinel (see Reference).
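A small sketch of that observation, assuming Swift block-based KVO (operationCount is documented as KVO-compliant; the notification name is illustrative):

```swift
import Foundation

let queue = OperationQueue()

// Fires whenever the number of pending/executing operations changes;
// a count of zero means the whole graph has drained, which replaces the sentinel.
let observation = queue.observe(\.operationCount, options: [.new]) { _, change in
    if change.newValue == 0 {
        NotificationCenter.default.post(name: Notification.Name("GraphTraversalComplete"),
                                        object: nil)
    }
}

// Keep `observation` alive for as long as you care about the queue draining.
```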
Second, I was maintaining several queues to logically separate related operations. With one queue, I achieved pretty much the same result by splitting the operation-generation code into helper methods, one for each logical unit, and then enqueuing all the operations returned by the helpers.
I am not sure if there are performance implications in going from 3 queues to 1. As far as I can tell, so long as the operations are concurrent and the queue has no restrictions on concurrent execution, it shouldn't matter whether the operations are distributed among multiple queues or all on the same queue.

Is there any reason to share a dispatch queue?

I'm working on some code that I can't contact the original developer.
He passes one class a reference to another class's serial dispatch queue, and I'm not seeing any reason he didn't just create another dispatch queue (each class is a singleton).
It's not creating issues but I'd like to understand it further, so any insight into the positive implications is appreciated, thank you.
Edit: I suggest reading all the answers here; they explain a lot.
It's actually not a good idea to share queues in this fashion (and no, not because they are expensive - they're not, quite the contrary). The rationale is that it's not clear to anyone but a queue's creator just what the semantics of the queue are. Is it serial? Concurrent? High priority? Low priority? All are possible, and once you start passing around internal queues that were actually created for the benefit of a specific class, an external caller can schedule work on them that causes a mutual deadlock or otherwise behaves unexpectedly with the other items on the queue, because caller A expected concurrent behavior while caller B assumed it was a serial queue, without any of the "gotchas" that concurrent execution implies.
Queues should therefore, wherever possible, be hidden implementation details of a class. The class can export methods for scheduling work against its internal queue as necessary, but the methods should be the only accessors since they are the only ones who know for sure how to best access and schedule work on that specific type of queue!
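A quick sketch of that pattern with an illustrative singleton: the serial queue stays a private implementation detail, and callers only ever see methods whose queue semantics the class itself controls.

```swift
import Foundation

/// Illustrative singleton that owns its queue instead of handing it out.
final class ImageCache {
    static let shared = ImageCache()

    // Private serial queue: only this class knows (and relies on) its semantics.
    private let queue = DispatchQueue(label: "com.example.image-cache")
    private var storage: [String: Data] = [:]

    func store(_ data: Data, forKey key: String) {
        queue.async { self.storage[key] = data }
    }

    func data(forKey key: String, completion: @escaping (Data?) -> Void) {
        queue.async { completion(self.storage[key]) }
    }
}
```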
If it's a serial queue, then they may be intending to serialize access to some resource shared between all objects that share it.
Dispatch queues are somewhat expensive to create, and tie up system resources. If you can share one, the system runs more efficiently. For that matter, if your app does work in the background, using a shared queue allows you to manage a single pool of tasks to be completed. So yes, there are good reasons for using a shared dispatch queue.
