Is using a Web API as a data provider for a website efficient? - asp.net-mvc

I was thinking about setting up a project with Web API. Basically build the API first and program the web site using this API.
Although it sounds promising, I was wondering:
If I separate the logic in a nice way, I might end up retrieving data for a web page through multiple API calls, which in turn mean multiple connections to the server, with all the attendant overhead.
For example, if I use, say, 8 different API calls on one page, I can't imagine it won't have an impact on the page's performance.
So, have I misunderstood something? Is this kind of overhead negligible, or does the need for multiple calls indicate that the design is wrong?
Thanks in advance.

Well, we did it. A Web API server provides REST access to all the data, and independent UI clients consume it as the only access point to the underlying persistence.
The first request takes some time; it is significantly longer. It must initialize all the UI client stuff and get the minimum data needed from the server (menu, user, access rights, metadata, list-view data...).
The point, the real advantage, is hidden in the second, the third... request. A lot of the data is already there on the UI client, and even if something is requested again, caching (server, client, or both) can be introduced.
So, this means more requests (at least during UI client start-up)... but it does not imply a slower application.
The maintenance benefit is hidden (maybe it is not hidden; it should be obvious) in the separation of concerns. On the server, we are no longer solving issues like where to place the user-data handling, in the base controller or a child controller, or whether there should be a master page or a layout controller...
Solved. We are taking care of single, specific pieces of functionality, published via REST. One method, one business operation. And that's the dream if we'd like to keep that application alive and be the ones repairing and extending it.

One aspect is that you can display the page to the end user very, very fast. Once the page is loaded, use jQuery async calls and any JavaScript template tool (like AngularJS or Mustache.js) to call the Web API simultaneously and build the client-side page views.
I have used this approach in multiple projects and the resulting user experience is tremendous.

Most modern browsers support 6-8 parallel connections to the same site, so you do have to be careful about that. Unless you are connecting to that many separate systems, I would try to reduce the number of connections, or ensure the calls are triggered asynchronously by different events to reduce the chance of hitting the parallel-connection limit.

Making a series of HTTP calls to obtain data for your page will have an overhead. Only testing will tell you what impact that might have in your scenario.
There is little point using Web API just because you can. You should have a legitimate reason for building a RESTful API. Even then, if it is primarily for your own consumption, design it to deliver a ViewModel for each page in one call.

Related

Are URLSession objects resource intensive?

Would it be resource intensive to create a new URLSession for every single web request?
Some background:
I'm working on a library for making web requests. I'm trying to add a feature that allows downloading the result to a file that would also report its progress. For that, I'm going to have to become the session's delegate.
This wouldn't be a big deal except that the public interface allows customizing the URLSession used for the requests. I don't want to override any customization the developer wants to do with their own delegate.
Right now, I'm thinking that the way to do this would be to secretly make a copy of the session they think is being used (yes I'm going to do more than copy the object itself) and then my internal delegate would call out to the original public session's methods. There could still be confusion/problems if they try to manipulate the session during the request, but that seems like a much smaller edge case.
My only concern right now is this might be very resource intensive if many requests are being made. Does anyone have a sense for that?
Yes, they are intensive. Here is a quote from Apple Staff on the developer forums.
This is a common anti-pattern, one that we specifically warned against at WWDC this year. Creating a session per request is inefficient both on the CPU and, more importantly, on the network. Specifically, it prevents connection reuse, which can radically slow down back-to-back requests. This is especially bad for HTTP/2. We encourage folks to group all similar tasks in a single session, using multiple sessions only if you have different sets of tasks with different requirements (like interactive tasks versus background download tasks). That means that many simple apps can get away with using a single statically-allocated session.
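In practice that advice boils down to creating one session up front and routing every task through it. A minimal sketch, assuming a plain default configuration (the APIClient type here is illustrative, not from any library):

```swift
import Foundation

// A minimal sketch of the recommended pattern: one long-lived URLSession
// shared by all similar tasks. Reusing the session lets the system pool
// TCP/TLS connections (and multiplex over HTTP/2) instead of paying the
// connection-setup cost on every request.
final class APIClient {
    static let shared = APIClient()
    private let session = URLSession(configuration: .default)

    func get(_ url: URL, completion: @escaping (Data?, Error?) -> Void) {
        // Every request goes through the same session and its connection pool.
        session.dataTask(with: url) { data, _, error in
            completion(data, error)
        }.resume()
    }
}
```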

Rails process controls to delay/impede multiple submissions

An application depends on its clients' usage and thus their network connections. Some processes are longer (e.g., capturing a hand-written line, converting it to JSON, generating an image, and uploading it to a static server), and flaky connections compound the delay.
Overanxious users may believe something is wrong and continue submitting... making the situation worse.
For specific actions, say update_signature, how could one conceivably discard other requests for that particular action and unique identifier?
You can handle rate limiting with Rack::Throttle, and debounce double clicks (converting them to a single click) with JavaScript.
But don't optimize prematurely.

Using AFNetworking to process multiple JSON responses for a single request

I'm trying to find a way to open up a connection to a web service and have that service send down JSON objects on an as-needed basis.
Say I request 20 profiles from a service. Instead of waiting for the service to build all 20, the service would build the first profile and send it back down to the client right away, continuing until all 20 are created.
I've been using AFNetworking and would like to continue using it. Eventually I'd like to contribute this component back to the community if it requires an addition.
Anyone have any ideas on tackling something like this? Right now I have a service pushing JSON every few seconds to test with.
A couple of thoughts:
If you want to open a connection and respond to transmissions from the server, a socket-based model seems to make sense. See Ray Wenderlich's How To Create A Socket Based iPhone App and Server for an example (the server-side stuff is likely to change based upon your server architecture, but it gives you an example). But AFNetworking is built on an NSURLConnection framework, not a socket framework, so if you wanted to integrate your socket classes into that framework, a considerable amount of work would be involved.
Another, iOS-specific model is to use Apple's push notification service (see the push-related sections of the Local and Push Notification Programming Guide).
A third approach would be to stay with a pull mechanism: if you're looking for a way to consume multiple feeds in a non-serial fashion, create multiple AFURLConnectionOperation (or the appropriate subclass) operations and submit them concurrently (you may want to constrain maxConcurrentOperations on the queue to 4 or 5, as iOS can only have so many concurrent network operations), as sketched below. By issuing these concurrently, you mitigate many of the delays that result from network latencies. If you pursue this approach, some care might have to be taken for thread safety, but it's probably easier than the above two techniques.
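AFNetworking's operation classes are Objective-C; purely to illustrate the concurrency cap described above, here is the same idea sketched in Swift with OperationQueue (the feed URLs and the limit of 4 are placeholders):

```swift
import Foundation

// Submit several fetch operations at once, but cap how many run concurrently
// so iOS isn't asked to hold too many simultaneous network operations.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4

let feedURLs = [URL(string: "https://example.com/feed/1")!,
                URL(string: "https://example.com/feed/2")!]
for url in feedURLs {
    queue.addOperation {
        // A synchronous fetch keeps the sketch short; production code would
        // wrap an asynchronous task in a custom Operation subclass.
        if let data = try? Data(contentsOf: url) {
            print("received \(data.count) bytes from \(url)")
        }
    }
}
queue.waitUntilAllOperationsAreFinished()
```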
This sounds like a job for a socket (or a web socket, whatever is easier).
I don't believe there is support for this in AFNetworking. It could be implemented in NSURLConnection's didReceiveData delegate method, which is triggered every time a piece of data is received, so you could do your parsing and messaging from that point. Unfortunately, I can't think of a very clean way to implement this.
Perhaps a better approach is to handle the appropriate re-requesting via a pagination-style technique. You would request page 1 of profiles with one profile per page, then request page 2, and so on. You could then control the flow, i.e. whether to request all pages in parallel or one after the other sequentially. This would be less work to implement, and would (in my opinion) be cleaner and easier to maintain.
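A rough sketch of that sequential flow, using today's URLSession rather than AFNetworking's classes (the endpoint URL is hypothetical):

```swift
import Foundation

// Hypothetical paginated endpoint serving one profile per page.
func profileURL(page: Int) -> URL {
    URL(string: "https://example.com/profiles?page=\(page)&per_page=1")!
}

// Request pages one after another: each response triggers the next request,
// so profiles stream in individually instead of arriving in one big batch.
func loadProfiles(upTo lastPage: Int, page: Int = 1,
                  onPage: @escaping (Data) -> Void) {
    guard page <= lastPage else { return }
    URLSession.shared.dataTask(with: profileURL(page: page)) { data, _, _ in
        if let data = data { onPage(data) }
        loadProfiles(upTo: lastPage, page: page + 1, onPage: onPage)
    }.resume()
}

// Usage: loadProfiles(upTo: 20) { data in /* parse and display one profile */ }
```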
AFNetworking supports batching of requests with AFHTTPClient -enqueueBatchOfHTTPRequestOperations:progressBlock:completionBlock:.
You can use this method to get a callback as each individual operation completes, as well as when all of the operations in the batch have finished.

Core Data on client (iOS) to cache data from a server Strategy

I have written many iOS apps that communicated with a backend. Almost every time, I used the HTTP cache to cache queries and parsed the response data (JSON) into Objective-C objects. For this new project, I'm wondering if a Core Data approach would make sense.
Here's what I thought:
The iOS client makes requests to the server and parses the objects from JSON into Core Data models.
Every time I need a new object, instead of hitting the server directly, I query Core Data to see if I already made that request. If the object exists and hasn't expired, I use the fetched object.
However, if the object doesn't exist or has expired (some caching logic would be applied here), I fetch the object from the server and update Core Data accordingly.
I think having such an architecture could help with the following:
1. Avoid unnecessary queries to the backend
2. Allow full support for offline browsing (you can still make relational queries against Core Data's SQLite store)
Now here's my question to SO Gods:
I know this kind of requires coding the backend logic a second time (server + Core Data), but is this overkill?
Any limitations that I have underestimated?
Any other ideas?
First of all, if you're a registered iOS dev, you should have access to the WWDC 2010 sessions. One of those sessions covered a bit of what you're talking about: "Session 117, Building a Server-driven User Experience". You should be able to find it on iTunes.
A smart combination of REST / JSON / Core Data works like a charm and is a huge time-saver if you plan to reuse your code, but it requires knowledge of HTTP (and of Core Data, if you want your apps to perform well and safely).
So the key is to understand REST and Core Data.
Understanding REST means understanding HTTP methods (GET, POST, PUT, DELETE... HEAD?), response codes (2xx, 3xx, 4xx, 5xx), and headers (Last-Modified, If-Modified-Since, ETag, ...).
Understanding Core Data means knowing how to design your model, set up relations, handle time-consuming operations (deletes, inserts, updates), and make things happen in the background so your UI stays responsive. And of course how to query the local SQLite store (e.g., prefetching IDs so you can update existing objects instead of creating new ones once you get their server-side equivalents), as in the sketch below.
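To make the background-work point concrete, here is a minimal sketch using today's NSPersistentContainer (which postdates this answer); the "Profile" entity and its attributes are hypothetical:

```swift
import CoreData

// Apply server-side changes on a background context so the main queue,
// and therefore the UI, stays responsive.
func applyUpdates(_ items: [[String: Any]],
                  container: NSPersistentContainer) {
    container.performBackgroundTask { context in
        for item in items {
            guard let serverID = item["id"] as? String else { continue }
            // Look up the local equivalent by server ID: update it if it
            // exists, insert only if it doesn't.
            let request = NSFetchRequest<NSManagedObject>(entityName: "Profile")
            request.predicate = NSPredicate(format: "serverID == %@", serverID)
            let object = (try? context.fetch(request))?.first
                ?? NSEntityDescription.insertNewObject(forEntityName: "Profile",
                                                       into: context)
            object.setValue(serverID, forKey: "serverID")
            object.setValue(item["name"], forKey: "name")
        }
        try? context.save() // one save per batch, not one per object
    }
}
```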
If you plan to implement a reusable API for the tasks you mentioned, you should make sure you understand REST and Core Data, because that's where you will probably do the most coding. Existing APIs (ASIHttpRequest, or any other, for the network layer, and any good JSON lib such as SBJSON for parsing) will do the job.
The key to making such an API simple is to have your server provide a RESTful service and your entities hold the required attributes (dateCreated, dateLastModified, etc.), so you can create requests (easily done with ASIHttpRequest, be they GET, PUT, POST, or DELETE) and add the appropriate HTTP headers, e.g. If-Modified-Since for a conditional GET.
If you already feel comfortable with Core Data, can handle JSON, and can easily make HTTP requests and handle responses (again, ASIHttpRequest helps a lot here, but there are others, or you can stick to the lower-level Apple NS classes and do it yourself), then all you need is to set the correct HTTP headers for your requests and handle the HTTP response codes appropriately (assuming your server is RESTful).
If your primary goal is to avoid needlessly re-updating a Core Data entity from its server-side equivalent, just make sure you have a "last-modified" attribute in your entity and do a conditional GET to the server, setting the If-Modified-Since HTTP header to your entity's "last-modified" date. The server will respond with status code 304 (Not Modified) if the resource didn't change (assuming the server is RESTful). If it changed, the server will set the Last-Modified header to the date of the last change, respond with status code 200, and deliver the resource in the body (e.g. in JSON format).
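Sketched with today's URLSession (the answer predates it; `lastModified` is assumed to be the date string stored on your entity):

```swift
import Foundation

// Conditional GET: send the stored Last-Modified value back as
// If-Modified-Since; a 304 means the cached entity is still current.
func fetchIfModified(url: URL, lastModified: String?,
                     completion: @escaping (Data?) -> Void) {
    var request = URLRequest(url: url)
    // Bypass the local URL cache so the 304 reaches our code directly.
    request.cachePolicy = .reloadIgnoringLocalCacheData
    if let lastModified = lastModified {
        request.setValue(lastModified, forHTTPHeaderField: "If-Modified-Since")
    }
    URLSession.shared.dataTask(with: request) { data, response, _ in
        guard let http = response as? HTTPURLResponse else {
            return completion(nil)
        }
        if http.statusCode == 304 {
            completion(nil) // not modified: keep the local entity
        } else if http.statusCode == 200 {
            // Also persist the response's Last-Modified header for next time.
            completion(data)
        } else {
            completion(nil)
        }
    }.resume()
}
```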
So the answer to your question is, as always, probably "it depends".
It mostly depends on what you'd like to put in your reusable do-it-all Core Data/REST layer.
To give you numbers: it took me 6 months (in my spare time, at a pace of 3-10 hours per week) to get mine where I wanted it to be, and honestly I'm still refactoring and renaming to let it handle special use cases (cancellation of requests, rollbacks, etc.) and provide fine-grained callbacks (reachability, network layer, serialization, Core Data saving...). But it's pretty clean, elaborate, and optimized, and hopefully fits my employer's general needs (an online marketplace for classifieds with multiple iOS apps). That time included learning, testing, optimizing, debugging, and constantly changing my API (first adding functionality, then improving it, then radically simplifying it, and debugging it again).
If time-to-market is your priority, you're better off with a simple and pragmatic approach: never mind reusability, just keep the lessons in mind, and refactor in the next project, reusing and fixing code here and there. In the end, the sum of all those experiences might materialize into a clear vision of HOW your API works and WHAT it provides. If you're not there yet, keep your hands off trying to make it part of the project budget, and just try to reuse as much of the stable third-party APIs out there as possible.
Sorry for the lengthy response; I felt you were stepping into something like building a generic API or even a framework. Those things take time, knowledge, housekeeping, and long-term commitment, and most of the time they are a waste of time, because you never finish them.
If you just want to handle specific caching scenarios to allow offline usage of your app and minimize network traffic, then you can of course just implement those features. Set If-Modified-Since headers in your requests, inspect Last-Modified headers or ETags, and keep that info on your persisted entities so you can resubmit it in later requests. Of course I'd also recommend caching (persistently) resources such as images locally, using the same HTTP headers.
If you have the luxury of modifying (in a RESTful manner) the server-side service, then you're fine, provided you implement it well. From experience, you can save as much as three quarters of the network/parsing code on the iOS side if the service behaves well: returns appropriate HTTP status codes, avoids the need for nil checks and number/date transformations from strings, provides lookup IDs instead of implicit strings, and so on.
If you don't have that luxury, then either the service is at least RESTful (which helps a lot), or you'll have to fix things client-side (which is often a pain).
There is a solution out there that I couldn't try because I'm too far into my project to refactor the server-caching aspect of my app, but it should be useful for people who are still looking for an answer:
http://restkit.org/
It does exactly what I did, but it's much more abstracted than what I did. Very insightful stuff there. I hope it helps somebody!
I think it's a valid approach. I've done this a number of times. The tricky part is when you need to deal with synchronizing: if client and server can both change things at the same time. You almost always need app-specific merging logic for this.

Is there a way to determine if a user is using broadband or dial-up

We have a requirement from a customer to provide a "lite" version for dial-up and all the bells-and-whistles for a broadband user.
The solution will use Flex / Flash / Java EJB and some JSP.
Is there a way for the web server to distinguish between the two?
You don't care about the user's connection type, you care about the download speed.
Have a tiny Flash app that downloads the rest of the Flash content and times how long it takes, or an HTML page that times how long an Ajax download takes.
If the download of the rich-featured app takes too long, have the initially downloaded stub page/Flash redirect to the slow-download page (or download the bare-bones Flash app, or whatever).
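The question is about Flash/Flex, but the timing idea itself is language-agnostic; here is the concept sketched in Swift (the probe URL and the 7 KB/s threshold are placeholders):

```swift
import Foundation

// Estimate throughput by timing the download of a known resource,
// then choose the lite or full experience based on a threshold.
func estimateBandwidth(probeURL: URL,
                       completion: @escaping (Double?) -> Void) {
    let start = Date()
    URLSession.shared.dataTask(with: probeURL) { data, _, _ in
        guard let data = data, !data.isEmpty else { return completion(nil) }
        let seconds = Date().timeIntervalSince(start)
        completion(Double(data.count) / seconds) // bytes per second
    }.resume()
}

// Usage: anything under ~7 KB/s (roughly 56k dial-up) gets the lite site.
estimateBandwidth(probeURL: URL(string: "https://example.com/probe.bin")!) { bps in
    let useLite = (bps ?? 0) < 7_000
    print(useLite ? "serve lite version" : "serve full version")
}
```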
The simplest and most reliable mechanism is probably to get the user to select their connection type from a drop down. Simple, I know, but it may save you a world of grief!
There's no way to distinguish between broadband and dial-up as connection types, but you can make an educated guess from the connection speed.
Gmail does this and provides a link to a basic HTML version of their service if they detect it.
(Screenshot of Gmail's slow-connection notice; source: nirmaltv.com)
My guess is that there is some client-side JavaScript polling done on AJAX requests. If the turnaround time surpasses a threshold, the option to switch to the "lite" version appears.
The best part about this option is that you allow the user to choose if they want to use the lite version instead of forcing them.
Here's a short code snippet from someone who attempted something similar. It's in C#, but it's pretty short, and it's just the concept that's of interest.
Determine the Connection Speed of your client
Of course, there could be a temporary speed problem that has nothing to do with the user's connection at the time you test, etc.
I had a similar problem a couple of years ago and just let the user choose between the high- and low-bandwidth sites. The very first thing I loaded on the page was this option, so they could move on quickly.
I think the typical approach to this is just to ask the user. If you don't feel confident that your users will provide an accurate answer, I suspect you'll have to write an application that runs a speed test on the client. Typically these record how long it takes the client to receive x number of bytes and use that to determine bandwidth.
ActionScript 3 has a library to help you with this task, but I believe it requires you to deploy your Flex/Flash app on Flash Media Server. See ActionScript 3.0 native bandwidth detection for details.
#Apphacker (I'd comment instead of answering if I had enough reputation...):
Can't guarantee the reverse, either--I have Earthlink dial-up, soon to upgrade to Earthlink DSL (it's what's available here...).
You could check their IP and see if it resolves to / is assigned to a dial-up provider, such as AOL, Earthlink, or NetZero. That wouldn't guarantee that those that don't resolve to such a provider are broadband users.
You could:
1. ask the user
2. perform a speed test and ask the user if the result you found is correct
3. perform a speed test and hope that the result is correct
I think a speed test should be enough.
If you only have a small, well-known user group, it is sometimes possible to determine the connection speed by the IP (some providers assign different subnets to dial-up and broadband connections).
