Are URLSession objects resource intensive?

Would it be resource intensive to create a new URLSession for every single web request?
Some background:
I'm working on a library for making web requests. I'm trying to add a feature that downloads the result to a file and also reports progress. For that, I'm going to have to become the session's delegate.
This wouldn't be a big deal except that the public interface allows customizing the URLSession used for the requests. I don't want to override any customization the developer wants to make with their own delegate.
Right now, I'm thinking the way to do this would be to quietly make a copy of the session they think is being used (yes, I'm going to do more than copy the object itself) and have my internal delegate call out to the original public session's delegate methods. There could still be confusion or problems if they try to manipulate the session during the request, but that seems like a much smaller edge case.
My only concern right now is this might be very resource intensive if many requests are being made. Does anyone have a sense for that?

Yes, they are intensive. Here is a quote from Apple Staff on the developer forums.
This is a common anti-pattern, one that we specifically warned against at WWDC this year. Creating a session per request is inefficient both on the CPU and, more importantly, on the network. Specifically, it prevents connection reuse, which can radically slow down back-to-back requests. This is especially bad for HTTP/2. We encourage folks to group all similar tasks in a single session, using multiple sessions only if you have different sets of tasks with different requirements (like interactive tasks versus background download tasks). That means that many simple apps can get away with using a single statically-allocated session.
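
To make that recommendation concrete, here is a minimal sketch of the pattern the quote describes: one statically-allocated session reused by every request, so connections can be kept alive between tasks. The APIClient name and the configuration tweak are illustrative, not taken from any particular library.

```swift
import Foundation

// Illustrative only: one statically-allocated session shared by all requests.
// Connection reuse (and HTTP/2 multiplexing) only works when tasks run on the same session.
enum APIClient {
    static let session: URLSession = {
        let config = URLSessionConfiguration.default
        config.waitsForConnectivity = true   // example tweak; adjust to taste
        return URLSession(configuration: config)
    }()

    static func get(_ url: URL,
                    completion: @escaping (Data?, URLResponse?, Error?) -> Void) {
        // Reuse the shared session for every task instead of creating a new session per request.
        session.dataTask(with: url, completionHandler: completion).resume()
    }
}
```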


What are the best practices for NSURLSession?

First, a description of my app:
Most screens use AFNetworking; a small number use NSURLSession.sharedSession or create a new NSURLSession.
One URLProtocol instance handles almost all requests, and at the end of -startLoading a single NSURLSession resumes all the tasks.
My questions are:
I know each URLSession instance causes memory growth and persists for about 10 minutes, so is there a maximum number of URLSession instances an app should hold?
What are the best practices for NSURLSession? Is it recommended to use only one URLSession instance for the entire app, or should each domain use its own NSURLSession (domain A using session A, domain B using session B)?
Should I create several URLProtocol instances to handle requests for different domains?
Thanks!
You want to use as few URLSessions as necessary. Usually this is only one, and it's also unlikely that you invalidate this session and create a new one during the lifetime of the app.
The reason for having as few as possible is that URLSession is specifically designed to handle more than one network request - executing them in parallel or sequentially - and can optimise all requests executed in that session to use less memory, less power and achieve faster execution times.
On the other hand, there are far fewer options to optimise requests executed in different URLSessions. In particular, the performance gains from HTTP/2 cannot be achieved for requests running in different URLSessions.
However, there may be requirements or situations where you create more than one. For example, you use a third-party image loading library which creates its own URLSession. Or you need distinct URLSession configurations - such as cellular usage, cookie policy or cache policy - which cannot be shared.
Or, for example, you want to tie a certain URLSession and its URLCache and credential storage to a certain authenticated user. When you have a "sign-out" feature in your app, you can invalidate that session, clear the credential storage and the URLCache. At the same time you have another URLSession for your "public" API, and another URLSession for your image loading, which are not affected by a sign-out - and where cached responses should be kept.
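As a rough sketch of that sign-out scenario (the class name and cache capacities are made up for illustration), the authenticated session can own its URLCache so both can be discarded together:

```swift
import Foundation

// Illustrative sketch of the "sign-out" scenario above: the authenticated session gets
// its own URLCache so it can be invalidated and purged without touching other sessions.
final class AuthenticatedAPI {
    private let cache = URLCache(memoryCapacity: 4_000_000,
                                 diskCapacity: 50_000_000,
                                 directory: nil)
    private(set) var session: URLSession

    init() {
        let config = URLSessionConfiguration.default
        config.urlCache = cache
        session = URLSession(configuration: config)
    }

    func signOut() {
        // Let in-flight tasks finish, then invalidate so the session can be released.
        session.finishTasksAndInvalidate()
        cache.removeAllCachedResponses()
        // Clearing stored credentials for this user would also happen here.
        // A separate "public" session and an image-loading session are unaffected.
    }
}
```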
It's a gross no-no to create a URLSession for every request, let it hang around after the request completes, and then create another URLSession for the next request. You can hardly do it worse than that.
See also
Apple Developer Forum
WWDC: NSURLSession: New Features and Best Practices
WWDC: Networking with NSURLSession

URLSession across different REST endpoints for the same server

I have an app that makes a whole bunch of different REST calls to the same server from a bunch of different view controllers. What's the best practice with regard to URLSession: share the same URLSession object, share just the URLSessionConfiguration object, or does it not matter either way?
For example, when making a request to an endpoint, should I:
Instantiate a brand new URLSession for each request with the shared URLSessionConfiguration?
Instantiate a single URLSession once for the current active app instance, and reuse it across all requests?
It is not best practice to create multiple URLSessions. Apple recommends creating only one if possible:
WWDC2017 Advances in Networking, Part 2
"We have seen developers that take their old NSURLConnection code and convert it to the new URLSession code by mechanically making a URLSession for every old NSURLConnection they used to have. This is very inefficient and wasteful. For almost all of your apps what you want to have is just one URLSession, which can then have as many tasks as you want. The only time you would want more than one URLSession is when you have groups of different operations that have radically different requirements. And in that case you might create two different configuration objects and create two different URLSessions using those two configuration objects."
Though the Apple developer presenting this session was answering a slightly different question, clearly the answer he gives is good for your question too.
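As a rough illustration of the "two configuration objects, two sessions" case the presenter mentions, the sketch below pairs a default session for interactive API calls with a background session for large downloads; the identifier, type name and delegate body are placeholders:

```swift
import Foundation

// Sketch of two sessions with radically different requirements:
// interactive API calls vs. downloads that should continue while the app is suspended.
final class NetworkStack: NSObject, URLSessionDownloadDelegate {
    // Interactive requests: default configuration, completion handlers are fine.
    lazy var apiSession = URLSession(configuration: .default)

    // Large transfers: background configuration with a delegate.
    lazy var downloadSession: URLSession = {
        let config = URLSessionConfiguration.background(withIdentifier: "com.example.downloads")
        return URLSession(configuration: config, delegate: self, delegateQueue: nil)
    }()

    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Move the downloaded file out of the temporary location here.
    }
}
```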
A long-lived shared URLSession object only makes sense if you need to use methods on that class that affect multiple tasks at the same time. For example if you need to call getTasksWithCompletionHandler(_:) or finishTasksAndInvalidate(), the session object needs to exist for long enough to cover all of the tasks you want those methods to affect.
It might also make sense if creating them on the fly would result in having several identical instances at the same time.
Otherwise, create a URLSession when you need it and then let it get deallocated when you don't.
In either case I wouldn't keep a shared URLSessionConfiguration object in memory at all times. Set up a factory method that can create one, and call that whenever you need a URLSession.
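A minimal sketch of that suggestion might look like the following; the header and cache-policy values are just examples, and the single apiSession corresponds to option 2 from the question:

```swift
import Foundation

// Illustrative factory: build the configuration on demand rather than keeping
// a URLSessionConfiguration instance alive at all times.
func makeAPIConfiguration() -> URLSessionConfiguration {
    let config = URLSessionConfiguration.default
    config.httpAdditionalHeaders = ["Accept": "application/json"]
    config.requestCachePolicy = .useProtocolCachePolicy
    return config
}

// One session for the life of the app, reused by every request.
let apiSession = URLSession(configuration: makeAPIConfiguration())

func fetch(_ url: URL) {
    apiSession.dataTask(with: url) { data, response, error in
        // handle the result
    }.resume()
}
```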

Is using a Web API as dataprovider for a website efficient?

I was thinking about setting up a project with Web API. Basically build the API first and program the web site using this API.
Although it sounds promising, I was wondering:
If I separate the logic in a nice way, I might end up retrieving data on a web page through multiple API calls, which in turn means multiple connections to the server with all the overhead that entails.
For example, if I use, say, 8 different API calls on one page, I can't imagine it won't have an impact on the web page's performance.
So, have I misunderstood something? Is this kind of overhead negligible, or does the need for multiple calls indicate that the design is wrong?
Thanks in advance.
Well, we did it: a Web API server providing REST access to all the data, with independent UI clients consuming it as the only access point to the underlying persistence.
The first request takes some time - it is significantly longer. It must initialise all the UI client machinery and fetch the minimum data needed from the server (menu, user, access rights, metadata, list-view data...).
The point - the real advantage - is hidden in the second, the third... request. A lot of the data is already there on the UI client, and even if something is requested again, caching (server, client, or both) can be introduced.
So this does mean more requests (at least during UI client start-up)... but it does not imply a slower application.
The maintenance benefit is hidden (maybe it is not hidden - it should be obvious) in the separation of concerns. On the server, we are no longer debating where to place the user-data handling - the base controller or a child controller... should it be the master page, the layout controller...
Solved. We take care of single, specific pieces of functionality, published via REST: one method, one business operation. And that's the dream if we'd like to keep the application alive and be its repairman and extender.
One aspect is that you can display the page to the end user very fast. Once the page is loaded, use jQuery async calls and any JavaScript template tool (like AngularJS or Mustache.js) to call the Web API concurrently and build the client page views.
I have used this approach in multiple projects and the user experience is tremendous.
Most modern browsers support 6-8 parallel connections to the same site, so you do have to be careful about that. Unless you are connecting to that many separate systems, I would try to reduce the number of connections, or ensure the calls are made asynchronously by different events to reduce the chance of parallel connections.
Making a series of HTTP calls to obtain data for your page will have an overhead. Only testing will tell you how that might impact in your scenario.
There is little point using Web API just because you can. You should have a legitimate reason for building a RESTful API. Even then, if it is primarily for your own consumption, design it to deliver a ViewModel for each page in one call.

iOS App Offline and synchronization

I am trying to build an offline synchronization capability into my iOS App and would like to get some feedback/advice from the community on the strategy and best practice to be followed to do the same. The app details are as follows:
The app shows a digital catalog to users and allows them to perform actions like creating and placing orders, among others.
Currently the app only works when online, and we have APIs for all actions like viewing the catalog, creating/placing orders which return JSON data.
We would like to provide offline/synchronization capability to users, through which users can view the catalog and create/place orders while offline, and when they come online the order details will be synchronized and updated to our server.
We would also like to pull the latest data from the server, and have the app keep itself up to date in case of catalog changes or order changes that happened at the Server while the app was offline.
Can you guys help me come up with the best design and approach for handling this kind of functionality?
I did something similar at the beginning of this year. After reading about NSOperationQueue and NSOperation, I took a straightforward approach:
Whenever an object is changed/added/... in my local database, I add a new "sync" operation to the queue, without caring whether the app is online or offline (I added a reachability observer which either suspends the queue or puts it back to work; of course, I re-queue an operation if an error occurs, such as losing the network during a sync). The operation itself reads/writes the database and does the networking. My view controllers use an NSFetchedResultsController (with delegate = self) to get callbacks on changes. In some cases I needed some extra local data (it is about counting objects), where I used NSManagedObjectContextObjectsDidChangeNotification.
Furthermore, I used multi-context Core Data, which seemed quite reasonable to use (I have only two contexts).
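A rough Swift sketch of that queue-plus-reachability idea, assuming NWPathMonitor in place of the older Reachability observer and leaving the actual database and networking work as comments:

```swift
import Foundation
import Network

// Illustrative sketch: each local change enqueues a "sync" operation;
// a connectivity observer suspends or resumes the queue.
final class SyncCoordinator {
    private let queue = OperationQueue()
    private let monitor = NWPathMonitor()   // stand-in for the Reachability observer mentioned above

    init() {
        queue.maxConcurrentOperationCount = 1
        monitor.pathUpdateHandler = { [weak self] path in
            // Suspend while offline; resume (and work through the backlog) when back online.
            self?.queue.isSuspended = (path.status != .satisfied)
        }
        monitor.start(queue: DispatchQueue(label: "sync.reachability"))
    }

    /// Called whenever an object is changed/added locally.
    func enqueueSync(of objectID: String) {
        queue.addOperation {
            // Read the local record, send it to the server, and write back the result.
            // On failure (e.g. the network dropped mid-sync), re-enqueue the operation.
        }
    }
}
```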
To get notified about changes from your server, I believe that iOS 7 has something new for you.
On the server side, you should read a little for the actual approach you want to go for: i.e. Data Synchronization by Dan Grover or Developing Android REST Client Applications (of course there are many more good articles out there).
Caution: you might be disappointed if you expect an easy solution. Your requirement is not unusual, but the solution may become more complex than you expect, depending on the "business rules" and other requirements. If you intelligently restrict your requirements you may find a solution you can implement yourself; otherwise you may also consider using a commercial product.
I could imagine that if you design the business logic such that it takes an offline state into account and exposes this explicitly, you may find a solution you can implement yourself with moderate effort. What I mean by this is, for example, when a user creates an order, it is initially in a "not committed" state. The order will only be committed when there is access to the server and the server gives the "OK" that this order can actually be placed by this user. The server may also deny the order, sending corresponding messages to the user.
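As a hypothetical illustration of making the offline state explicit, an order could carry a sync state that only moves to "committed" after the server confirms it; the endpoint, status handling and names below are assumptions, not part of the original design:

```swift
import Foundation

// Hypothetical model: the offline state is explicit in the business logic,
// and an order only becomes "committed" after the server confirms it.
struct Order {
    enum SyncState { case notCommitted, committed, denied(reason: String) }
    let id: UUID
    var state: SyncState = .notCommitted
}

func commit(_ order: Order, using session: URLSession, to endpoint: URL,
            completion: @escaping (Order) -> Void) {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    session.dataTask(with: request) { _, response, error in
        var updated = order
        if error != nil {
            // Still offline or the request failed: keep the order "not committed" and retry later.
            updated.state = .notCommitted
        } else if let http = response as? HTTPURLResponse, http.statusCode == 200 {
            updated.state = .committed
        } else {
            updated.state = .denied(reason: "Rejected by server")   // surface a message to the user
        }
        completion(updated)
    }.resume()
}
```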
There are probably quite a few subtle issues that may arise due to the requirement of eventual consistency.
See also this question, which contains pointers to solutions from commercial products; their web sites give valuable information about the complexity of the problem and how it can be solved.

Dropbox API request pattern

I am using the Dropbox SDK on iOS, and am mirroring a remote directory locally. I understand the basic usage pattern - make a request, wait for the delegate to be called with the results.
When I have a large number of requests to perform, should I serialize them by waiting for the result before making the next call, or make all requests at once and then just wait for them each to come in? Does the Dropbox SDK handle the latter case intelligently (e.g. with an NSOperationQueue), or am I better off doing this myself?
If I am better off handling request queuing myself, should I change behavior when the user is on a wifi vs. cellular connection?
EDIT: I have seen CHBgDropboxSync and other existing solutions. My app requires more control over syncing than these provide, so I need to roll my own.
Depends on how many requests you need to make and how reliant they are on each other. With either GCD or NSOperation you can daisy-chain requests, you can issue them all at once and keep semaphores in your program, or you can make requests rely on others to complete. You're creating an asynchronous state machine, and its design will depend on whether that state machine is dynamic or static.
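For the serialized variant, one generic approach (not specific to the Dropbox SDK) is to run the downloads through your own OperationQueue and pick the concurrency width yourself, e.g. wider on Wi-Fi than on cellular. Everything below is an illustrative sketch:

```swift
import Foundation

// Generic sketch: queue the downloads yourself and choose how many run at once.
// Width 1 serializes them; a larger value issues them in parallel.
let downloadQueue = OperationQueue()
downloadQueue.maxConcurrentOperationCount = 1   // e.g. bump to 4 on Wi-Fi, keep 1 on cellular

func mirror(files: [URL], into directory: URL, using session: URLSession) {
    for remote in files {
        downloadQueue.addOperation {
            let semaphore = DispatchSemaphore(value: 0)
            let task = session.downloadTask(with: remote) { tempURL, _, _ in
                if let tempURL = tempURL {
                    let destination = directory.appendingPathComponent(remote.lastPathComponent)
                    try? FileManager.default.moveItem(at: tempURL, to: destination)
                }
                semaphore.signal()
            }
            task.resume()
            semaphore.wait()   // keep the operation "busy" until the async download finishes
        }
    }
}
```

Blocking each operation on a semaphore is a simple way to make the asynchronous download fit an OperationQueue; a custom asynchronous Operation subclass would be the cleaner long-term design.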
