I'm new to AFNetworking and I'm interested in using it to handle a few dozen JSON requests (for example, using a web service's API that responds with JSON) for my application, but I'm having some trouble understanding how I should do this.
Could anyone offer some insight on how I'd go about accomplishing this? Like I said, I'm new to the library, so an explanation, ideally with code, would be greatly appreciated.
For a more specific example of what I'm trying to do, here's the Clear Read API I'm using: you pass the article's URL as a query parameter and get back a JSON response (the API extracts the article from the page, stripping out the other bloat).
Example URL: http://api.thequeue.org/v1/clear?url=http://blogs.balsamiq.com/product/2012/02/27/uxstackexchange/&format=json
I'll be taking a few dozen URLs and running them all through that service and wish to save the results.
I was previously doing this with NSURLConnection in a for loop, firing off several dozen connections at once, and by the end my data was quite messed up, with timeouts and other failures from so many requests running simultaneously.
I understand that it would be better to run only a few at a time, and AFNetworking seems perfect for this kind of problem, but I'm confused about how I'd use it or subclass it.
I'd recommend starting with their Getting Started guide.
There's not much to it, really: build an AFJSONRequestOperation for each call to the API you want to make, and in the success callback, handle the deserialized JSON appropriately. If you have a bunch of calls to make, use AFHTTPClient to a) simplify some of the repetitive work of building those operations, and b) use the client's operation queue to batch them all up. You can then throttle the number of requests in flight at once with the queue's setMaxConcurrentOperationCount: method.
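For illustration, here's a minimal sketch of that approach with AFNetworking 1.x, using the Clear Read endpoint from the question; the kArticleURLs array is a hypothetical list of article URL strings:

    // Throttled batch of JSON requests (AFNetworking 1.x sketch).
    AFHTTPClient *client = [[AFHTTPClient alloc] initWithBaseURL:
                                [NSURL URLWithString:@"http://api.thequeue.org/v1/"]];
    [client.operationQueue setMaxConcurrentOperationCount:4]; // only a few in flight at once

    for (NSString *articleURL in kArticleURLs) {
        NSURLRequest *request =
            [client requestWithMethod:@"GET"
                                 path:@"clear"
                           parameters:@{ @"url" : articleURL, @"format" : @"json" }];

        AFJSONRequestOperation *operation =
            [AFJSONRequestOperation JSONRequestOperationWithRequest:request
                success:^(NSURLRequest *req, NSHTTPURLResponse *response, id JSON) {
                    // JSON is the deserialized response; save the extracted article here.
                    NSLog(@"Got article: %@", JSON);
                }
                failure:^(NSURLRequest *req, NSHTTPURLResponse *response, NSError *error, id JSON) {
                    NSLog(@"Request failed: %@", error);
                }];

        [client enqueueHTTPRequestOperation:operation];
    }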
This is an open question aimed at understanding the best practice, or the most common solution, for a problem I think might be fairly common.
Let's say I have a list of URLs to download; the list is itself hosted on a server, so I start an NSURLConnection that downloads it. The code in connectionDidFinishLoading then uses the list to start one new asynchronous NSURLConnection per URL; these in turn will trigger even more NSURLConnections, and so on, until there are no more URLs. See it as a tree of connections.
What is the best way to detect when all connections have finished?
I'm aiming the question at iOS 7, but comments about other versions are welcome.
A couple of thoughts:
In terms of triggering the subsequent downloads after you retrieve the list from the server, just put the logic to perform those subsequent downloads inside the completion handler block (or completion delegate method) of the first request.
In terms of downloading a bunch of files, if targeting iOS 7 and later, you might consider using NSURLSession instead of NSURLConnection.
First, you can download files with a nice, modest memory footprint by initiating "download" tasks (rather than "data" tasks).
Second, you can do the downloads using a background NSURLSessionConfiguration, which will let the downloads continue even if the user leaves the app. See the Downloading Content in the Background section of the App Programming Guide for iOS. There are a lot of i's that need dotting and t's that need crossing if you do this, but it's a great feature to consider implementing.
See WWDC 2013 What's New in Foundation Networking for an introduction to NSURLSession. Or see the relevant chapter of the URL Loading System Programming Guide.
In terms of keeping track of whether you're done, as Wain suggests, you can just keep track of the number of requests issued and the number of requests completed/failed, and in your "task completion" logic, just compare these two numbers, and initiate the "all done" logic if the number of completions matches the number of requests. There are a bunch of ways of doing this, somewhat dependent upon the details of your implementation, but hopefully this illustrates the basic idea.
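A minimal sketch of that counting approach with NSURLSession download tasks; urlStrings is a hypothetical array of URL strings, and the counter is bumped on the main queue to keep it thread-safe:

    NSURLSession *session = [NSURLSession sharedSession];
    NSUInteger totalCount = urlStrings.count;
    __block NSUInteger finishedCount = 0;   // completed or failed

    for (NSString *urlString in urlStrings) {
        NSURL *url = [NSURL URLWithString:urlString];
        NSURLSessionDownloadTask *task =
            [session downloadTaskWithURL:url
                       completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
                if (error) {
                    NSLog(@"Download of %@ failed: %@", url, error);
                } else {
                    // Move the file at `location` somewhere permanent before this block returns.
                }
                dispatch_async(dispatch_get_main_queue(), ^{
                    finishedCount++;
                    if (finishedCount == totalCount) {
                        // All requests have completed or failed; run the "all done" logic.
                    }
                });
            }];
        [task resume];
    }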
Instead of using GCD you should consider using NSOperationQueue. You should also limit the number of concurrent operations, certainly on mobile devices, to perhaps 4 so you don't flood the network with requests.
Now, the number of operations on the queue is the remaining count. You can add a block to the end of each operation to check the queue count and execute any completion logic.
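A minimal sketch of that idea; rather than polling operationCount (which can race as operations finish), this variation adds a final operation that depends on every download operation, so it runs exactly once when everything else has finished. downloadURLs is a hypothetical array of NSURLs:

    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    queue.maxConcurrentOperationCount = 4;   // don't flood the network

    NSBlockOperation *allDone = [NSBlockOperation blockOperationWithBlock:^{
        // Every download operation has finished; run the completion logic.
        NSLog(@"All downloads finished");
    }];

    for (NSURL *url in downloadURLs) {
        NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
            // A synchronous fetch is acceptable here because it runs on the queue, not the main thread.
            NSData *data = [NSData dataWithContentsOfURL:url];
            // ...parse and store `data`...
        }];
        [allDone addDependency:download];
        [queue addOperation:download];
    }

    [queue addOperation:allDone];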
As Rob says in his answer you might want to consider NSURLSession rather than doing this yourself. It has a number of advantages.
Other options are building your own download manager class, or using a ready-made third-party framework like AFNetworking. I've only worked with AFNetworking a little, but from what I've seen it's elegant, powerful, and easy to use.
Our company wrote an async download manager class based on NSURLConnection for a project that predates both AFNetworking and NSURLSession. It's not that hard, but it isn't as flexible as either NSURLSession or AFNetworking.
I have a Rails 3 application that lets a user perform a search against a 3rd party database via an API. It could potentially bring back quite a bit of XML data. Also, the API could be busy serving requests for other users and have a nontrivial delay in its response time.
I only run two web servers, so I obviously can't afford to have a delayed request tying one up. I use Sidekiq to process long-running jobs, but in every case where I've needed it, I haven't had to return a value to the screen.
I also use Pusher to communicate back to the user when a background job is finished. I am checking it out, but I don't know if it can be used for the kind of data I want to push to the screen. Right now it just pops up dialog boxes with messages that I send it.
I have thought of some pretty kooky stuff, like running the request via Sidekiq, sending the results to a session object or file, then using Pusher to kick off some kind of event to grab the data and populate the screen with it. Seems kind of Rube Goldberg-ish.
I appreciate any help or insight anyone can offer into the problem!
I had a similar situation not long ago, and the way I fixed it was by using memcache and threads.
I've also thought about using Sidekiq, but Sidekiq is ideal if you don't expect to use the data right away, so memcache and threads worked pretty well and gave us a good amount of control.
Instead of calling the API directly, I would hand the API request off to a thread, and that thread would write to memcache once it was done. In my case this can happen incrementally, with the same endpoint able to return more data until the result is complete.
From the UI I would have a basic AJAX polling mechanism that hits a controller, checks memcache for data, and checks a status flag to see whether the result is complete; that tells the UI whether it needs to keep polling for more data.
I'm trying to find a way to open up a connection to a web service and have that service send down JSON objects on an as-needed basis.
Say I request 20 profiles from a service. Instead of waiting for the service to build all 20, the service would build the first profile and push it back down to the client right away, continuing until all 20 are created.
I've been using AFNetworking and would like to continue using it. Eventually I'd like to contribute this component back to the community if it requires an addition.
Anyone have any ideas on tackling something like this? Right now I have a service pushing JSON every few seconds to test with.
A couple of thoughts:
If you want to open a connection and respond to transmissions from the server, a socket-based model seems to make sense. See Ray Wenderlich's How To Create A Socket Based iPhone App and Server for an example (the server-side stuff is likely to change based upon your server architecture, but it gives you an example). But AFNetworking is built on an NSURLConnection framework, not a socket framework, so if you wanted to integrate your socket classes into that framework, a not-inconsiderable amount of work would be involved.
Another, iOS-specific model is to use Apple's push notification service (see the push-related sections of the Local and Push Notification Programming Guide).
A third approach would be to stay with a pull mechanism; if you're looking for a way to consume multiple feeds in a non-serial fashion, you could create multiple AFURLConnectionOperation (or the appropriate subclass) operations and submit them concurrently (you may want to constrain maxConcurrentOperationCount on the queue to 4 or 5, as iOS can only have so many concurrent network operations). By issuing these concurrently, you mitigate many of the delays that result from network latencies. If you pursue this approach, some care might have to be taken for thread safety, but it's probably easier than the two techniques above.
This sounds like a job for a socket (or a web socket, whatever is easier).
I don't believe there is support for this in AFNetworking. It could be implemented in NSURLConnection's connection:didReceiveData: delegate method, which is triggered every time a piece of data is received, so you could do your parsing and messaging from that point. Unfortunately, I can't think of a very clean way to implement this.
Perhaps a better approach is to handle this with repeated requests via a pagination-style technique. You would request page 1 of profiles with one profile per page, then request page 2, and so on. You could then control the flow, i.e. whether you request them all in parallel or request one after another sequentially. This would be less work to implement, and would (in my opinion) be cleaner and easier to maintain.
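A rough sketch of the sequential variant, assuming AFNetworking 1.x, with self.client an AFHTTPClient that has AFJSONRequestOperation registered; the "profiles" path and the page/per_page parameter names are hypothetical:

    // Request one profile per page, and kick off the next page as soon as one arrives.
    - (void)fetchProfilePage:(NSUInteger)page {
        if (page > 20) {
            return;   // all 20 profiles have been requested
        }
        [self.client getPath:@"profiles"
                  parameters:@{ @"page" : @(page), @"per_page" : @1 }
                     success:^(AFHTTPRequestOperation *operation, id responseObject) {
                         // Hand the single profile to the UI as soon as it arrives...
                         NSLog(@"Got profile: %@", responseObject);
                         // ...then immediately request the next one.
                         [self fetchProfilePage:page + 1];
                     }
                     failure:^(AFHTTPRequestOperation *operation, NSError *error) {
                         NSLog(@"Page %lu failed: %@", (unsigned long)page, error);
                     }];
    }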
AFNetworking supports batching of requests with AFHTTPClient -enqueueBatchOfHTTPRequestOperations:progressBlock:completionBlock:.
You can use this method to return on each individual operation, as well as when all of the operations in the batch have finished.
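For example (AFNetworking 1.x; client is an AFHTTPClient and requests is a hypothetical array of NSURLRequest objects):

    NSMutableArray *operations = [NSMutableArray array];
    for (NSURLRequest *request in requests) {
        AFHTTPRequestOperation *operation =
            [client HTTPRequestOperationWithRequest:request
                success:^(AFHTTPRequestOperation *op, id responseObject) {
                    // Called as each individual operation finishes.
                    NSLog(@"One request finished: %@", responseObject);
                }
                failure:^(AFHTTPRequestOperation *op, NSError *error) {
                    NSLog(@"One request failed: %@", error);
                }];
        [operations addObject:operation];
    }

    [client enqueueBatchOfHTTPRequestOperations:operations
        progressBlock:^(NSUInteger numberOfFinishedOperations, NSUInteger totalNumberOfOperations) {
            NSLog(@"%lu of %lu done", (unsigned long)numberOfFinishedOperations,
                                      (unsigned long)totalNumberOfOperations);
        }
        completionBlock:^(NSArray *completedOperations) {
            // Called once, after every operation in the batch has finished.
            NSLog(@"Batch complete");
        }];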
In my iOS 5 app I make a lot of requests, parsing the results and storing them in Core Data at the same time.
For the requests I use asynchronous ASIHTTPRequest.
But the app has performance problems while these requests are running. What is a good approach to doing this in the background? And how do I avoid conflicts with the managed object context when storing the results in the database? All "commits" are currently performed on the main thread because I ran into problems when putting the requests on a background queue.
Can you give me an example or a good pattern to use in my app?
Since ASI is no longer supported, a lot of people have switched to AFNetworking.
There's also the minimalistic approach of using NSURLConnection with blocks.
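As a minimal block-based sketch (iOS 5+): the request and JSON parsing run on a background queue, and only the Core Data work is pushed back to the main thread's context. The URL and entity handling are placeholders:

    NSURLRequest *request =
        [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/api/items"]];
    NSOperationQueue *backgroundQueue = [[NSOperationQueue alloc] init];

    [NSURLConnection sendAsynchronousRequest:request
                                       queue:backgroundQueue
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
        if (error) {
            NSLog(@"Request failed: %@", error);
            return;
        }
        // Parse off the main thread...
        NSError *jsonError = nil;
        NSArray *items = [NSJSONSerialization JSONObjectWithData:data options:0 error:&jsonError];
        if (!items) {
            NSLog(@"Parse failed: %@", jsonError);
            return;
        }
        // ...and only touch the main-thread managed object context on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            for (NSDictionary *item in items) {
                // Insert/update your NSManagedObject instances here.
            }
            // [context save:&saveError];
        });
    }];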
I develop an iOS app that uses a REST API. The iOS app requests data in worker threads and stores the parsed results in core data. All views use core data to visualize the information. The REST API changes rapidly and I have no real control over the interface.
I am looking for advice how perform integration tests for the app as easy as possible. Should I test against the API or against Mock data? But how to mock GET requests properly if you can create resources with POST or modify them with PUT?
What frameworks do you use for these kind of problems? I played with Frank, which looks nice but is complicated due to rapid UI changes in the iOS app. How would you test the "API request layer" in the app? Worker threads are NSOperations in a queue - everything is build asynchronously. Any recommendations?
I would strongly advise you to mock the server. Servers go down, their behavior changes, and if a test failure only means "maybe my code still works", you have a problem on your hands, because the test no longer tells you whether or not the code is broken, which is the whole point.
As for how to mock the server, for a unit test that does this:
first_results = list_things()
delete_first_thing()
results_after_delete = list_things()
I have a mock data structure that looks like this:
{ list_things_request : [first_results, results_after_delete],
delete_thing_request: [delete_thing_response] }
It's keyed on your request, and the value is an array of responses for that request, in the order they were seen. Thus you can support repeatedly running the same request (like listing the things) and getting a different result. I use this format because in my situation my API calls may run in a slightly different order than they did last time. If your tests are simpler, you might be able to get away with a simple list of request/response pairs.
I have a flag in my unit tests that indicates whether I am in "record" mode (talking to a real server and recording this data structure to disk) or in "playback" mode (replaying responses from the data structure). When I need to work on a test, I "record" the interactions with the server and then play them back.
I use the little-known SenTestCaseDidStartNotification to track which unit test is running and isolate my data files appropriately.
The other thing to keep in mind is that non-determinism is the root of all evil here. Code that iterates over unordered sets, uses the current date, and so on tends to produce requests and responses that vary from run to run, and those don't replay correctly in an offline scenario. So be careful with them.
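To illustrate the playback half, here is a rough Objective-C sketch of the structure described above: responses are keyed by a request identifier and consumed in the order they were recorded. All names are hypothetical:

    @interface StubServer : NSObject
    @property (nonatomic, strong) NSMutableDictionary *cannedResponses;   // request key -> NSMutableArray of responses
    - (id)nextResponseForRequestKey:(NSString *)key;
    @end

    @implementation StubServer
    - (id)nextResponseForRequestKey:(NSString *)key {
        NSMutableArray *responses = self.cannedResponses[key];
        NSAssert(responses.count > 0, @"No recorded response left for %@", key);
        id response = [responses objectAtIndex:0];
        [responses removeObjectAtIndex:0];   // each recorded response is used exactly once
        return response;
    }
    @end

    // Usage in a test:
    //   stub.cannedResponses[@"GET /things"] = [@[ firstResults, resultsAfterDelete ] mutableCopy];
    //   stub.cannedResponses[@"DELETE /things/1"] = [@[ deleteResponse ] mutableCopy];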
(Since nobody has stepped in yet and given you a complete walkthrough) My humble advice: step back a bit, take out the magic of async, regard everything as sync (API calls, parsing, persistence), and isolate each step as a consumer/producer. After all, you don't want to unit-test NSURLConnection, or JSONKit, or whatever (they should have been tested already if you use them); you want to test YOUR code. Your code takes some input and produces output, unaware of the fact that the input was in fact output generated on a background thread somewhere. You can do the isolated tests all sync.
Can we agree on the fact that your Views don't care about how their model data was provided? If yes, well, test your View with mock objects.
Can we agree on the fact that your parser doesn't care about how the data was provided? If yes, well, test your parser with mock data.
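For instance, a parser test could feed a JSON fixture file from the test bundle straight into your parsing code (SenTestingKit-style assertions; MyArticleParser and the fixture name are hypothetical):

    - (void)testParsingArticleFixture {
        NSString *path = [[NSBundle bundleForClass:[self class]] pathForResource:@"article_fixture"
                                                                          ofType:@"json"];
        NSData *fixtureData = [NSData dataWithContentsOfFile:path];
        STAssertNotNil(fixtureData, @"Fixture should be present in the test bundle");

        NSError *error = nil;
        MyArticle *article = [MyArticleParser articleFromJSONData:fixtureData error:&error];

        STAssertNil(error, @"Parsing should not fail");
        STAssertEqualObjects(article.title, @"Expected title", @"Title should come from the fixture");
    }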
Network layer: the same applies as described above; in the end you'll get an NSDictionary of headers and some NSData or NSString of content. I don't think you want to unit-test NSURLConnection or any third-party networking API you trust (asihttp, afnetworking, ...?), so in the end, what's to be tested?
You can mock up URLs, request headers and POST data for each use case you have, and set up test cases for expected responses.
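One way to do that without a third-party library is a custom NSURLProtocol subclass that intercepts matching requests and returns canned responses (stubbing libraries such as Nocilla, mentioned below, work on a similar principle). A trimmed-down sketch; the host check and canned body are placeholders, and you register it with [NSURLProtocol registerClass:]:

    @interface StubURLProtocol : NSURLProtocol
    @end

    @implementation StubURLProtocol

    + (BOOL)canInitWithRequest:(NSURLRequest *)request {
        // Match whatever use case you're stubbing; here, everything on one host.
        return [request.URL.host isEqualToString:@"api.example.com"];
    }

    + (NSURLRequest *)canonicalRequestForRequest:(NSURLRequest *)request {
        return request;
    }

    - (void)startLoading {
        NSData *body = [@"{\"things\":[]}" dataUsingEncoding:NSUTF8StringEncoding];
        NSHTTPURLResponse *response =
            [[NSHTTPURLResponse alloc] initWithURL:self.request.URL
                                        statusCode:200
                                       HTTPVersion:@"HTTP/1.1"
                                      headerFields:@{ @"Content-Type" : @"application/json" }];
        [self.client URLProtocol:self didReceiveResponse:response
              cacheStoragePolicy:NSURLCacheStorageNotAllowed];
        [self.client URLProtocol:self didLoadData:body];
        [self.client URLProtocolDidFinishLoading:self];
    }

    - (void)stopLoading {
        // Nothing to cancel for a canned response.
    }

    @end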
In the end, IMHO, it's all about "normalizing" out async.
Take a look at Nocilla
For more info, check this other answer to a similar question