Using AFNetworking to process multiple JSON responses for a single request - ios

I'm trying to find a way to open up a connection to a web service and have that service send down JSON objects on an as-needed basis.
Say I request 20 profiles from a service. Instead of waiting for the service to build all 20, the service would build the first profile and send it straight back down to the client, continuing until all 20 have been delivered.
I've been using AFNetworking and would like to continue using it. Eventually I'd like to contribute this component back to the community if it requires an addition.
Anyone have any ideas on tackling something like this? Right now I have a service pushing JSON every few seconds to test with.

A couple of thoughts:
If you want to open a connection and respond to transmissions from the server, a socket-based model makes sense. See Ray Wenderlich's How To Create A Socket Based iPhone App and Server for an example (the server-side portion is likely to change based upon your server architecture, but it gives you the idea). But AFNetworking is built on NSURLConnection, not on sockets, so if you wanted to integrate your socket classes into that framework, a considerable amount of work would be involved.
Another, iOS-specific model is to use Apple's push notification service (see the push-related sections of the Local and Push Notification Programming Guide).
A third approach is to stay with a pull mechanism: if you're looking for a way to consume multiple feeds in a non-serial fashion, create multiple AFURLConnectionOperation (or appropriate subclass) operations and submit them concurrently (you may want to constrain maxConcurrentOperationCount on the queue to 4 or 5, as iOS can only sustain so many concurrent network connections to a given server). By issuing these concurrently, you mitigate much of the delay that results from network latency. If you pursue this approach, some care must be taken regarding thread safety, but it's probably easier than the two techniques above.
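By way of illustration, here is a minimal Swift sketch of that third approach, using Foundation's OperationQueue with plain block operations standing in for AFURLConnectionOperation subclasses (which you would enqueue the same way); the URL list is a placeholder:

import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4   // respect iOS's per-host connection limit

let profileURLs: [URL] = []             // placeholder: your 20 profile endpoints

for url in profileURLs {
    queue.addOperation {
        // A synchronous fetch is acceptable here because each operation runs
        // on the queue's background threads, never on the main thread.
        guard let data = try? Data(contentsOf: url) else { return }
        // ... parse the JSON, then marshal the result back to the main queue ...
        _ = data
    }
}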

This sounds like a job for a socket (or a web socket, whatever is easier).
I don't believe there is support for this in AFNetworking. It could be implemented in the NSURLConnection delegate's connection:didReceiveData: method, which is triggered every time a chunk of data is received, so you could do your parsing and messaging from that point. Unfortunately, I can't think of a very clean way to implement this.
Perhaps a better approach is to handle the repeated requests via a pagination-style technique. You would request page 1 of the profiles with one profile per page, then request page 2, and so on. You could then control the flow, i.e. request all pages in parallel, or request one and then the next sequentially. This would be less work to implement, and would (in my opinion) be cleaner and easier to maintain.
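A rough sketch of that sequential pagination flow, shown with URLSession for brevity (AFNetworking's success blocks slot in the same way); Profile and the endpoint URL are hypothetical stand-ins:

import Foundation

struct Profile: Decodable { let name: String }   // placeholder model

func profileURL(page: Int) -> URL {
    URL(string: "https://example.com/profiles?page=\(page)&per_page=1")!  // placeholder endpoint
}

func fetchProfiles(page: Int = 1, upTo lastPage: Int) {
    URLSession.shared.dataTask(with: profileURL(page: page)) { data, _, _ in
        if let data = data, let profile = try? JSONDecoder().decode(Profile.self, from: data) {
            print("got profile \(profile.name)")   // hand this one profile to the UI right away
        }
        if page < lastPage {
            fetchProfiles(page: page + 1, upTo: lastPage)  // sequential: next request starts now
        }
    }.resume()
}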

AFNetworking supports batching of requests with AFHTTPClient -enqueueBatchOfHTTPRequestOperations:progressBlock:completionBlock:.
This method lets you respond as each individual operation completes, as well as when all of the operations in the batch have finished.
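A rough sketch of what that batch call might look like from Swift (AFNetworking 1.x; the exact bridged names may differ), assuming client is your AFHTTPClient and requests is an array of URLRequests you have already built:

let operations: [AFHTTPRequestOperation] = requests.map { request in
    let op = AFHTTPRequestOperation(request: request)
    op.setCompletionBlockWithSuccess({ _, responseObject in
        // fires once per profile: update the UI incrementally here
    }, failure: { _, error in
        // handle this one operation's failure
    })
    return op
}

client.enqueueBatchOfHTTPRequestOperations(operations,
    progressBlock: { numberOfFinished, total in
        print("\(numberOfFinished) of \(total) profiles done")
    },
    completionBlock: { _ in
        // fires once, after all operations have finished
    })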

Related

Asynchronous UI Testing in Xcode With Swift

I am writing an app that makes plenty of network requests. As usual they are
async, i.e. the request method returns immediately and the result is delivered
via a delegate method or in a closure after some delay.
Now on my registration screen I send a registration request to my backend and
want to verify that the success UI is shown when the request finishes.
What options are out there to wait for the request to finish, verify the
success UI, and only then leave the test method?
Also, are there any more clever options than waiting for the request to finish?
Thanks in advance!
Trivial Approach
Apple implemented major improvements in Xcode 9 / iOS 11 that enable you to wait for the appearance of a UI element. You can use the following one-liner:
<#yourElement#>.waitForExistence(timeout: 5)
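Note that waitForExistence(timeout:) returns a Bool you should assert on. Here is a sketch of the one-liner inside a complete UI test; the accessibility identifiers ("email", "Register", "SuccessView") are placeholders for your own UI:

import XCTest

final class RegistrationUITests: XCTestCase {
    func testRegistrationShowsSuccessUI() {
        let app = XCUIApplication()
        app.launch()

        app.textFields["email"].tap()
        app.textFields["email"].typeText("test@example.com")
        app.buttons["Register"].tap()

        // Blocks for up to 5 seconds until the success view appears,
        // returning false (and failing the assert) if it never does.
        let successView = app.otherElements["SuccessView"]
        XCTAssertTrue(successView.waitForExistence(timeout: 5))
    }
}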
Advanced Approach
In general, UI and unit tests (referred to simply as tests here) must run as fast as possible so that developers can run them often and don't get frustrated by needing to run a slow test suite multiple times a day. In some cases, an (internal or security-related) app accesses an API that can only be reached from certain networks / IP ranges / hosts. Also, most CI services offer fairly weak hardware and limited internet-connection speed.
For all of these reasons, it is recommended to implement tests such that they make no real network requests. Instead, they run against fake data, so-called fixtures. A clever developer implements the test suite so that the source of the data can be switched via something as simple as a Boolean property. Additionally, when the switch is set to fetch real backend data, the fixtures can be refreshed/recorded from the backend automatically. This makes it easy to update the fake data and quickly detect changes in the API.
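Here is a minimal sketch of that switch idea; every name in it is hypothetical. A protocol hides where the data comes from, and a single Boolean flips between the real backend and a bundled JSON fixture:

import Foundation

struct Profile: Decodable { let name: String }    // placeholder model

protocol ProfileProvider {
    func fetchProfiles(completion: @escaping ([Profile]) -> Void)
}

struct BackendProfileProvider: ProfileProvider {
    func fetchProfiles(completion: @escaping ([Profile]) -> Void) {
        let url = URL(string: "https://api.example.com/profiles")!   // placeholder endpoint
        URLSession.shared.dataTask(with: url) { data, _, _ in
            let profiles = data.flatMap { try? JSONDecoder().decode([Profile].self, from: $0) } ?? []
            completion(profiles)
        }.resume()
    }
}

struct FixtureProfileProvider: ProfileProvider {
    func fetchProfiles(completion: @escaping ([Profile]) -> Void) {
        // Local data is available immediately, so the completion fires with no delay.
        guard let url = Bundle.main.url(forResource: "profiles_fixture", withExtension: "json"),
              let data = try? Data(contentsOf: url),
              let profiles = try? JSONDecoder().decode([Profile].self, from: data)
        else { return completion([]) }
        completion(profiles)
    }
}

// The switch: flip to `false` to refresh/record the fixtures against the real backend.
let useFixtures = true
let provider: ProfileProvider = useFixtures ? FixtureProfileProvider() : BackendProfileProvider()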
But the main advantage of this approach is speed. Your tests will not make real network requests but instead run against local data, which makes them independent of:
server issues
connection speed
network restrictions
This way you can run your tests very fast and thus much more often - which is a good way of writing code ("Test Driven Development").
On the other hand, you won't detect server changes immediately anymore, since the fake data won't change when the backend data changes. But this is solved by simply refreshing your fixtures using the switch you implemented (because you are a smart developer), which makes this issue a story you can tell your children!
But wait, I forgot something! Why is this a replacement for the trivial approach above, you ask? Simple! Since you use local data, which is available immediately, you can also call the completion handler immediately. So there is no delay between making the request and verifying your success UI. This means you don't need to wait at all, which makes your tests even faster!
I hope this helps some of my fellows out there. If you need more guidance regarding this topic, don't hesitate to reply to this post.
Cya!

Async NSURLConnection triggering other Async NSURLConnection: what is the best way of doing this?

This is an open question aimed at understanding the best practice or the most common solution for a problem that I think might be common.
Let's say I have a list of URLs to download; the list is itself hosted on a server, so I start an NSURLConnection that downloads it. The code in connectionDidFinishLoading then uses the list of URLs to instantiate one new asynchronous NSURLConnection per URL; each of these in turn may trigger even more NSURLConnections, and so on, until there are no more URLs. Think of it as a tree of connections.
What is the best way to detect when all connections have finished?
I'm aiming the question at iOS 7, but comments about other versions are welcome.
A couple of thoughts:
In terms of triggering the subsequent downloads after you retrieve the list from the server, just put the logic to perform those subsequent downloads inside the completion handler block (or completion delegate method) of the first request.
In terms of downloading a bunch of files, if targeting iOS 7 and later, you might consider using NSURLSession instead of NSURLConnection.
First, "download" tasks (rather than "data" tasks) let you download files with a nice, modest memory footprint.
Second, you can do the downloads using a background NSURLSessionConfiguration, which will let the downloads continue even if the user leaves the app. See the Downloading Content in the Background section of the App Programming Guide for iOS. There are a lot of i's that need dotting and t's that need crossing if you do this, but it's a great feature to consider implementing.
See WWDC 2013's What's New in Foundation Networking for an introduction to NSURLSession, or see the relevant chapter of the URL Loading System Programming Guide.
In terms of keeping track of whether you're done, as Wain suggests, you can just keep track of the number of requests issued and the number of requests completed/failed, and in your "task completion" logic, just compare these two numbers, and initiate the "all done" logic if the number of completions matches the number of requests. There are a bunch of ways of doing this, somewhat dependent upon the details of your implementation, but hopefully this illustrates the basic idea.
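A minimal sketch of that counting idea, using NSURLSession download tasks with a background configuration (the session identifier is a placeholder). Because the session's delegate callbacks arrive serially on its delegate queue, the plain counters are safe here:

import Foundation

final class BatchDownloader: NSObject, URLSessionDownloadDelegate {
    private lazy var session = URLSession(
        configuration: .background(withIdentifier: "com.example.downloads"),
        delegate: self, delegateQueue: nil)

    private var issued = 0
    private var finished = 0
    var allDone: (() -> Void)?

    func download(_ urls: [URL]) {
        issued = urls.count
        urls.forEach { session.downloadTask(with: $0).resume() }
    }

    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Move the file at `location` somewhere permanent before this method returns.
    }

    func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
        finished += 1                      // counts successes and failures alike
        if finished == issued { allDone?() }
    }
}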
Instead of using GCD you should consider using NSOperationQueue. You should also limit the number of concurrent operations, certainly on mobile devices, to perhaps 4 so you don't flood the network with requests.
Now, the number of operations on the queue is the remaining count. You can add a block to the end of each operation to check the queue count and execute any completion logic.
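A sketch of that queue-based completion check, with one tweak: reading operationCount from a completion block can race with the queue removing the finished operation, so a final operation that depends on every download is used instead:

import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4      // don't flood the network

let urls: [URL] = []                       // placeholder: the downloaded URL list

let allDone = BlockOperation {
    // runs only after every download operation has finished
}

for url in urls {
    let download = BlockOperation {
        _ = try? Data(contentsOf: url)     // placeholder for the real request
    }
    allDone.addDependency(download)
    queue.addOperation(download)
}
queue.addOperation(allDone)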
As Rob says in his answer you might want to consider NSURLSession rather than doing this yourself. It has a number of advantages.
Other options are building your own download manager class, or using a ready-made third party framework like AFNetworking. I've only worked with AFNetworking a little bit, but from what I've seen it's elegant, powerful, and easy to use.
Our company wrote an async download manager class based on NSURLConnection for a project that predates both AFNetworking and NSURLSession. It's not that hard, but it isn't as flexible as either NSURLSession or AFNetworking.

Is using a Web API as dataprovider for a website efficient?

I was thinking about setting up a project with Web API. Basically build the API first and program the web site using this API.
Although it sounds promising, I was wondering:
If I separate the logic in a nice way, I might end up retrieving data on a web page through multiple API calls, which in turn means multiple connections to the server, with all the overhead that entails.
For example, if I use, let's say, 8 different API calls on one page, I can't imagine it won't have an impact on the page's performance.
So, have I misunderstood something? Is this kind of overhead negligible, or does the need for multiple calls indicate that the design is wrong?
Thanks in advance.
Well, we did it: a Web API server providing REST access to all the data, and independent UI clients consuming it as the only access point to the underlying persistence.
The first request takes some time; it is significantly longer. It must initialize all the UI client machinery and fetch the minimum data needed from the server (menu, user, access rights, metadata, list-view data...).
The point, the real advantage, is hidden in the second, the third... request. A lot of stuff is already there on the UI client, and even if something is requested again, caching (server-side, client-side, or both) can be introduced.
So, this means more requests (at least during UI client start-up)... but it does not imply a slower application.
The maintenance benefit is hidden (maybe it is not hidden; it should be obvious) in the Separation of Concerns. On the server, we are no longer wrestling with where to place the user-data handling: the base controller or a child controller... the master page or the layout controller...
Solved. We take care of single, specific pieces of functionality, published via REST: one method, one business operation. And that's the dream if we'd like to keep the application alive and be its repairman and extender.
One aspect is that you can display the page to the end user very, very fast. Once the page is loaded, use jQuery async calls and any JavaScript templating tool (like AngularJS or Mustache.js) to call the Web API concurrently and build the client-side page views.
I have used this approach in multiple projects, and the user experience is tremendous.
Most modern browsers support 6-8 parallel connections to the same site, so you do have to be careful about that. Unless you are connecting to that many separate systems, I would try to reduce the number of connections, or ensure the calls are triggered asynchronously by different events to reduce the chance of them running in parallel.
Making a series of HTTP calls to obtain data for your page will have an overhead. Only testing will tell you how that might impact in your scenario.
There is little point using Web API just because you can. You should have a legitimate reason for building a RESTful API. Even then, if it is primarily for your own consumption, design it to deliver a ViewModel for each page in one call.

NSURLConnection and multiple asynchronous requests - is it messing with the data being transmitted?

I have an NSArray of links. I want to run each of them through an online article-extractor API (Clear Read) and throw the result returned for each article (some HTML) into an NSString.
My problem arises from the fact that, say my array has 100 URLs in it, I loop through the array firing each item at the API and getting back results in JSON. This launches something like 100 NSURLConnection calls at once, asynchronously.
I wasn't sure if that would be a problem, but when I give it 100 URLs (real strings, none nil), the data that comes back often has empty values for JSON keys that shouldn't be empty, or is nil altogether. There's also a bunch of duplicates.
Should I be handling multiple asynchronous connections better than I am now? If so, how?
A couple of thoughts:
If you're doing concurrent requests with asynchronous NSURLConnection, you'll want to define your own class for the download operation so that every connection keeps track of its own state. That way everything is encapsulated within the class: each download object knows what it has downloaded, what has been parsed, etc. If you're not using asynchronous NSURLConnection (e.g. you're just using dataWithContentsOfURL), it's even easier, though you lose the progress updates and streaming opportunities that NSURLConnection provides.
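To illustrate the encapsulation idea, a hedged Swift sketch of "one object per connection" (the names are hypothetical; NSURLSession works the same way): each instance owns its own buffer and delegate callbacks, so 100 concurrent downloads never share state:

import Foundation

final class ArticleDownload: NSObject, NSURLConnectionDataDelegate {
    private var received = Data()
    private let completion: (Data?) -> Void
    private var connection: NSURLConnection?

    init(url: URL, completion: @escaping (Data?) -> Void) {
        self.completion = completion
        super.init()
        // Starts loading immediately, scheduled on the current run loop.
        connection = NSURLConnection(request: URLRequest(url: url), delegate: self)
    }

    func connection(_ connection: NSURLConnection, didReceive data: Data) {
        received.append(data)          // accumulates only this download's bytes
    }

    func connectionDidFinishLoading(_ connection: NSURLConnection) {
        completion(received)
    }

    func connection(_ connection: NSURLConnection, didFailWithError error: Error) {
        completion(nil)
    }
}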
For best performance, you should issue requests concurrently. Having said that, you should not have more than four or five concurrent requests going to any particular server. This is an iOS-imposed constraint, and especially on a slow network connection you otherwise risk having connections time out.
If you're doing preliminary testing on the simulator, you may want to try out the "network link conditioner". It's part of the "Hardware IO Tools for Xcode", available at the Downloads for Apple Developers. There are issues (such as the aforementioned timeout problems when you have too many concurrent requests going to a particular server) that only manifest themselves on slow connections.
Having said that, you also want to make sure to test your solution on a device with real-world network speeds. It's easy to run massively parallel tasks successfully on the simulator that are too greedy for a device. Limiting the number of concurrent sessions to five will diminish this resource problem, but it should be part of your testing strategy.
I agree with JRG-Developer that you should look into established frameworks such as AFNetworking. Make sure to set the maxConcurrentOperationCount for the AFHTTPClient's operation queue, though, if queueing 100-plus operations.
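That cap is a one-liner (AFNetworking 1.x/2.x, called from Swift; the bridged names may differ, and the base URL is a placeholder):

let client = AFHTTPClient(baseURL: URL(string: "https://api.example.com"))
client.operationQueue.maxConcurrentOperationCount = 4   // stay within the 4-5 per-host limit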
I don't know how much data your 100 requests entail, but be forewarned that the app approval process has been known to reject apps that make extraordinary network requests on cellular networks. What constitutes excessive cellular network activity is not explicitly stated in the app review guidelines, though Avoiding iPhone App Rejection From Apple claims you should ensure you don't exceed 4.5 MB in 5 minutes. You can use Reachability to determine what type of network you are on, and perhaps warn the user if they're on cellular (if the amount of data approaches this threshold).
Have you considered using a third party framework - such as AFNetworking - and limiting the number of asynchronous calls happening at once? Perhaps this might help / solve your problem.
In particular, you might consider creating a networking manager class that creates and manages AFHTTPClient(s), which in turn manages AFHTTPRequestOperations, for each endpoint (base URL) you hit.

Dropbox API request pattern

I am using the Dropbox SDK on iOS, and am mirroring a remote directory locally. I understand the basic usage pattern - make a request, wait for the delegate to be called with the results.
When I have a large number of requests to perform, should I serialize them by waiting for the result before making the next call, or make all requests at once and then just wait for them each to come in? Does the Dropbox SDK handle the latter case intelligently (e.g. with an NSOperationQueue), or am I better off doing this myself?
If I am better off handling request queuing myself, should I change behavior when the user is on a wifi vs. cellular connection?
EDIT: I have seen CHBgDropboxSync and other existing solutions. My app requires more control over syncing than these provide, so I need to roll my own.
Depends on how many requests you need to make and how reliant they are on each other. With either GCD or NSOperation you can daisy-chain requests, you can issue them all at once and keep semaphores in your program, or you can make requests rely on others to complete. You're creating an asynchronous state machine, and its design will depend on whether that state machine is dynamic or static.
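For the "issue them all at once" case, a hedged sketch using a DispatchGroup, shown with URLSession for brevity; the same enter/leave bookkeeping wraps the Dropbox SDK's delegate callbacks just as well:

import Foundation

let group = DispatchGroup()
let fileURLs: [URL] = []        // placeholder: the remote files to mirror

for url in fileURLs {
    group.enter()
    URLSession.shared.dataTask(with: url) { data, _, _ in
        // write `data` to the local mirror here
        group.leave()
    }.resume()
}

group.notify(queue: .main) {
    // every request in the batch has completed or failed
}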
