What are the pros and cons of reusing an AFHTTPRequestOperationManager?

We're making calls to an API to get JSON data using AFHTTPRequestOperationManager.
Currently, we are instantiating a new AFHTTPRequestOperationManager for each request. We're considering instantiating just a single AFHTTPRequestOperationManager and reusing it across requests.
What are the trade-offs?

There are several reasons why a single AFHTTPRequestOperationManager per domain is a handy pattern:
1) You avoid the cost of creating a new manager for each request, which can otherwise create significant memory pressure.
2) Having just one reference to a manager lets you easily manage all the network requests in your app. For example, when a user logs out you may want to cancel all outstanding requests; with a single manager you can access the operation queue and cancel them all at once.
3) Related to #2, having one instance lets you manage the configuration for all your requests at once, for instance adding authorization headers or configuring custom parsers. These could of course be done before every request, but that adds unnecessary complexity.
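A minimal sketch of such a shared manager (the class name, base URL, and header value below are placeholders, not from the question):

```objc
// Sketch: one shared manager per API domain, created once via dispatch_once.
@interface APIClient : AFHTTPRequestOperationManager
+ (instancetype)sharedClient;
@end

@implementation APIClient
+ (instancetype)sharedClient {
    static APIClient *sharedClient = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedClient = [[APIClient alloc] initWithBaseURL:
                        [NSURL URLWithString:@"https://api.example.com"]];
        // Point 3: configure once, and it applies to every request.
        [sharedClient.requestSerializer setValue:@"Bearer <token>"
                              forHTTPHeaderField:@"Authorization"];
    });
    return sharedClient;
}
@end

// Point 2: cancel everything at once, e.g. on logout:
// [[APIClient sharedClient].operationQueue cancelAllOperations];
```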


How can I send common information with all AFNetworking requests?

I am using AFNetworking for my client-server communication. I want to build a wrapper on top of AFNetworking so that I can set common headers and extra information for all HTTP requests. Basically, all my HTTP requests will go through one layer before reaching AFNetworking. That will simplify my client-server communication and let me attach any kind of data to every HTTP request at any point in time. What would be the best way to do this?
As an example, I want to send a token, network status, user info, etc.
More specifically:
I want to include some common info, such as network info, user info, and a token, with every request. Right now it is really difficult to change each and every request, so I want to design things so that all HTTP calls go through one path and I can send anything with an AFNetworking HTTP request without touching every file.
You should create one separate class that manages all the network-related calls: subclass NSObject and give it the methods you need. Import AFNetworking in this class, and use this class throughout the project whenever you need to make a network call!
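A hedged sketch of such a wrapper (the class name, method, and the "token" parameter key are made up for illustration):

```objc
// Illustrative wrapper: the rest of the app calls NetworkManager, never
// AFNetworking directly, so common info is attached in exactly one place.
@interface NetworkManager : NSObject
@property (nonatomic, copy) NSString *token; // common info sent with every call
+ (instancetype)shared;
- (void)GET:(NSString *)path
 parameters:(NSDictionary *)parameters
    success:(void (^)(id responseObject))success
    failure:(void (^)(NSError *error))failure;
@end

@implementation NetworkManager {
    AFHTTPRequestOperationManager *_manager;
}

+ (instancetype)shared {
    static NetworkManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [NetworkManager new];
        shared->_manager = [AFHTTPRequestOperationManager manager];
    });
    return shared;
}

- (void)GET:(NSString *)path
 parameters:(NSDictionary *)parameters
    success:(void (^)(id))success
    failure:(void (^)(NSError *))failure {
    // Merge the common info (token, network status, user info) into every call.
    NSMutableDictionary *params = [parameters mutableCopy] ?: [NSMutableDictionary dictionary];
    params[@"token"] = self.token;
    [_manager GET:path parameters:params
          success:^(AFHTTPRequestOperation *op, id responseObject) {
              if (success) success(responseObject);
          }
          failure:^(AFHTTPRequestOperation *op, NSError *error) {
              if (failure) failure(error);
          }];
}
@end
```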

Monitor network calls in iOS using RestKit

I am completely dependent on RestKit for my app's network calls. I want to log, for each call:
1) Time taken by each API to get a response
2) Size of Request/Response Payload
3) URL of the API
Is there any way I can enable such logging in RestKit? My app calls around 50-60 APIs and I don't want to dig into the entire code base and add manual logs. I also don't want to use a network profiling tool, since I will be tracking this data while actual users are using the application.
I can't use any third-party paid tool either, so I want to log these values in the application's database.
RestKit does have a log you can enable, but that isn't what you want if you plan to actually release this. It also writes to the console, not to a value you can process and save.
Your likely best option is to subclass the RKObjectManager and intercept the requests that are being placed and the NSURLRequests which are being generated.
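A hedged sketch of that subclass, assuming RestKit 0.20's RKObjectManager/RKObjectRequestOperation API; chaining completionBlock assumes RestKit has already installed its own block before the operation is enqueued:

```objc
// Sketch: log URL, payload sizes and elapsed time for every enqueued
// operation. Replace NSLog with an insert into your app's database.
@interface LoggingObjectManager : RKObjectManager
@end

@implementation LoggingObjectManager
- (void)enqueueObjectRequestOperation:(RKObjectRequestOperation *)operation {
    NSURLRequest *request = operation.HTTPRequestOperation.request;
    NSDate *start = [NSDate date];
    __weak RKObjectRequestOperation *weakOperation = operation;
    void (^original)(void) = [operation.completionBlock copy];
    operation.completionBlock = ^{
        if (original) original();
        NSTimeInterval elapsed = -[start timeIntervalSinceNow];
        NSUInteger responseBytes = weakOperation.HTTPRequestOperation.responseData.length;
        NSLog(@"%@ | request %lu bytes | response %lu bytes | %.3f s",
              request.URL,
              (unsigned long)request.HTTPBody.length,
              (unsigned long)responseBytes,
              elapsed);
    };
    [super enqueueObjectRequestOperation:operation];
}
@end
```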

How to format/serialize asynchronous API callback responses when inside the model

I am exposing an API that responds asynchronously to certain requests. This is possible, as the client appends a callback_url in their request, to which the asynchronous action will send the result when it completes.
Problem is, the action completes while inside a model, which makes it tricky to keep a clear separation of concerns, as I usually handle stitching together JSON responses in the controller using ActiveModelSerializer.
Any advice on how to approach this in an idiomatic way?
Thanks
My approach would be to extract the outgoing callback response into a separate service (called from within the model) and place that service on an asynchronous queue.
This service should be as generic as possible. Any logic that relates to building/sending/logging outgoing responses would then be contained within the service, and is separated out of the Model.
I would then wrap the service call in an asynchronous priority queue system, such as DelayedJob. This would allow the Model to do its thing before handing the response off to the service for asynchronous execution.
The benefit to using a queue system is that should anything prevent the response from being posted it will not 'freeze' the Model whilst executing. Bottom line; the Model can hand the response over to the queue and forget about the details of sending the response.
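As a sketch (the class and method names here are hypothetical, not from the answer), the service might serialize the payload itself and POST it to the client-supplied callback URL, with DelayedJob handling the asynchrony:

```ruby
require "json"
require "net/http"
require "uri"

# Hypothetical service object: all logic for building, sending and logging
# the outgoing callback response lives here, outside the model.
class CallbackResponder
  def initialize(callback_url:, payload:)
    @callback_url = callback_url
    @payload = payload
  end

  # Serialize the result the way a controller normally would.
  def body
    JSON.generate(@payload)
  end

  # POST the serialized result to the client-supplied callback URL.
  def call
    uri = URI.parse(@callback_url)
    Net::HTTP.post(uri, body, "Content-Type" => "application/json")
  end
end

# In the model, hand the work to DelayedJob instead of calling it inline:
#   CallbackResponder.new(callback_url: url, payload: result).delay.call
```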
Ryan Bates himself covers this approach in a RailsCast (pro account required); there is also an older RailsCast episode.

Custom Trust Store with AFNetworking

I'm developing some iOS apps that download/upload very sensitive data.
I'm using AFNetworking to make those requests and my question is simple:
My app only ever encounters 3 different certificates; can I customize AFNetworking's layer to accept only these 3 certificates?
The aim is to prevent man-in-the-middle attacks, and so to prevent injection and/or retrieval of any additional information during the HTTP exchanges.
All AFNetworking operations inherit from AFURLConnectionOperation, which defines a block called authenticationChallenge. Setting this block on your operations will define how AFNetworking responds to the NSURLConnectionDelegate method connection:didReceiveAuthenticationChallenge:. Specifically, you will want to inspect challenge.proposedCredential.
If you don't want to set this block on every operation, you could also subclass the operation type you're using (like AFJSONRequestOperation, for example), and override connection:willSendRequestForAuthenticationChallenge: with the behavior you want.
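For example, a hedged sketch of certificate pinning with the authenticationChallenge block (AFNetworking 1.x-era API; the bundled certificate filename is a placeholder):

```objc
// Sketch: accept the connection only if the server's leaf certificate
// exactly matches one of the DER certificates bundled with the app.
[operation setAuthenticationChallengeBlock:^(NSURLConnection *connection,
                                             NSURLAuthenticationChallenge *challenge) {
    SecTrustRef serverTrust = challenge.protectionSpace.serverTrust;
    SecCertificateRef certificate = SecTrustGetCertificateAtIndex(serverTrust, 0);
    NSData *serverCertificateData =
        (__bridge_transfer NSData *)SecCertificateCopyData(certificate);

    // "pinned-cert" is a placeholder for one of your 3 bundled .der files;
    // in practice you would compare against all 3.
    NSData *pinnedCertificateData = [NSData dataWithContentsOfFile:
        [[NSBundle mainBundle] pathForResource:@"pinned-cert" ofType:@"der"]];

    if ([serverCertificateData isEqualToData:pinnedCertificateData]) {
        [challenge.sender useCredential:[NSURLCredential credentialForTrust:serverTrust]
             forAuthenticationChallenge:challenge];
    } else {
        [challenge.sender cancelAuthenticationChallenge:challenge];
    }
}];
```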

Rails application design: Queueing, Resque, Background Services, and Redis

I am designing a Rails app that takes in requests, uses data within the request to call a 3rd party web service, processes the reply, sends a response to the original requestor, and also issues a PUT request to yet another service.
I am trying to wrap my head around how to design this Rails app as it's different from the canonical Rails structure.
The objects are Lists and Tasks. Each List has many Tasks, and each Task belongs to a List.
The request I would get is something like:
http://myrailsapp.heroku.com/v1/lists?id=1&from=2012-02-12&to=2012-02-14&priority=high
In this example I am requesting tasks from 2/12/2012 to 2/14/2012 with a high priority in List #1
I would then issue a 3rd party web service call like this:
http://thirdpartywebservice.com/v1/lists?id=4128&from=2012-02-12&to=2012-02-14&priority=high
As you can see some processing was done on the data (id was changed in this case)
The results are then sent back to the requestor and to another web service via PUT.
My question is, how do I set up the Rails app to handle these types of behaviors? How does the controller structure change? This looks like a good use case for queues, how do I distribute multiple concurrent requests among queues?
For one thing I don't need data persistence (data can be discarded after the response is sent out), and the data structure design is simplified. (I don't think I need Ruby objects; simple dictionaries or hashes representing these would be lighter weight and quicker to implement.)
Edit
So I broke down the work flow of the app into these components
Parse incoming request
Construct the 3rd party web service request
Send 3rd party request
Enqueue a worker to process the expected response
Process the response once it arrives
Send the parsed result back as a response
Which of the standard Rails controllers handles each of these steps? What models are needed besides Lists and Tasks?
You should still use a database, because passing data to Resque is messy. Instead, store it in the database and pass the id to the workers, which fetch the data, commit any new data, or delete the record. It's really up to you, but this method is cleaner. You can also use a push service like Faye to let the user know when the processing is complete.
If you expect to have many concurrent requests, I would recommend Sidekiq as it's less of a memory hog. Having 4-5 resque workers can already suck up about 512 MB. The controller structure should not change. Please comment on anything you need clarified and I'll be happy to update my answer.
EDIT
You would want to use a separate database store, such as Postgres. Not sure if it's important what models you need, but essentially this is what should be happening.
In your controller, create a Request object which contains the query params you want to query this 3rd party service with. Then enqueue a job to be handled by Sidekiq/Resque; let's call it ThirdPartyRequest, and pass in the id of the Request object you just created as an argument. Then render a view showing the Request object. Request#response is still empty because it hasn't been processed yet, so let the user know it's still processing.
A worker then handles your ThirdPartyRequest job. ThirdPartyRequest should fetch the Request object and obtain the query params needed to contact the third party service. It does that, gets a response, updates the Request object with that response, and saves it.
class ThirdPartyRequest
  @queue = :third_party_requests  # queue this Resque worker pulls from

  def self.perform(request_id)
    request = Request.find(request_id)
    # contact third party service
    request.response = ...
    request.save
  end
end
The user can continually refresh the page to check on their Request object. Once it gets updated with the response, they will know it's completed. If you want the page to refresh automatically, look into faye/juggernaut/private_pub or a SaaS solution like Pusher.
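The controller flow described above can be sketched framework-free (all names here are hypothetical stand-ins; in a real app Request would be an ActiveRecord model and the enqueue would be Resque.enqueue(ThirdPartyRequest, request.id)):

```ruby
# Framework-free sketch of the controller flow: persist the params, enqueue
# only the id, and respond while Request#response is still empty. REQUESTS
# and QUEUE stand in for the database and the Resque/Sidekiq queue.
Request = Struct.new(:id, :params, :response)

REQUESTS = {} # stand-in for the requests table
QUEUE = []    # stand-in for the job queue

def create_request(query_params)
  request = Request.new(REQUESTS.size + 1, query_params, nil)
  REQUESTS[request.id] = request
  QUEUE << request.id                      # the worker fetches the record by id
  { id: request.id, status: "processing" } # what the view shows immediately
end
```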
