SOA vs MVC - when to use - asp.net-mvc

I've read this topic but still don't have the complete picture, and I would really appreciate your answer to the following question:
for what type of application is the SOA approach the right choice (get JSON from the server side and generate HTML on the client side using a JavaScript framework like Knockout.js, AngularJS and so on),
and when is ASP.NET MVC on the server side the better architectural approach (generate pages entirely on the server side and return views as the result)?
For example, for our last SPA with rich client-side logic, WCF services + Knockout.js (client-side MVVM) produced great results. But what approach is better suited
to a CRUD application (for instance, several tables for adding and updating data, with different user roles in use)?

SOA is about a lot more than just sending JSON to a web client.
Imagine you have a business with a database-driven software system for things like sales, inventory, reporting, etc. Most systems start out small, with just a client or web app talking directly to the database... and that's okay.
However, as the system grows, there are some things you'll find that don't fit well inside this model: long-running batch processes that lock up the app or web page, scheduled jobs that involve more than just the database server, processes involving data living in outside sources, or complex reports that bog down your DB while they run.
At this point, you'll want to think about adding an application server to handle some of these tasks. An application server can take some of that workload off of your clients. It can also take certain loads off what is likely by this time to be an over-worked database, such that the application server requests or moves raw data to and from the DB, and your user-facing client requests/submits transformed data to and from the application server.
As the system grows even more, you'll also find that different parts of the system have unexpected side effects elsewhere as you maintain things. Even simple enhancements become more and more complicated to complete. Development slows, and bug counts increase. The application server now becomes a great place to centralize design efforts on how to make sure a change in one area has the expected consequences (and only the expected consequences) everywhere else.
At its outset, then, what an SOA really is, is taking that application server (which might happen to use JSON over HTTP, but might also offer a completely different interface, or even automatically translate among several data transport technologies) and enforcing that all, not just some, database access goes through this application server: the service layer.
Once this access is enforced, and nothing else talks directly to the database any more (at least, nothing that's not specifically accounted for), the layer also becomes a great place to start enforcing business rules and system logic. It allows you to write traditional application-style code here that's easier to manage in source control than SQL and will automatically be shared among any applications using the system. The code all lives in about the same place, so it's easier to model changes and their effects through the system.
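To make that concrete, here is a minimal sketch of what a service-layer contract might look like in C# (all names here are hypothetical, not from the original post): client applications code against this interface, and only its implementation ever touches the database.

    // Core service contract: every client program calls this instead of SQL.
    public interface IInventoryService
    {
        StockLevel GetStockLevel(string sku);
        void AdjustStock(string sku, int delta, string reason);
    }

    public class StockLevel
    {
        public string Sku { get; set; }
        public int QuantityOnHand { get; set; }
    }

    // The single implementation behind this interface is where business rules,
    // auditing and caching are centralized; no other code issues SQL directly.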
As a bonus, this layer is often very easy to scale out to multiple redundant servers, especially compared to a traditional relational database server. The result is that scaling the application server becomes a way to improve and manage the performance and reliability of a large application. On the back end, it can also improve performance by simplifying and centralizing efforts to use database caching tools like Redis, making it easier to involve a dedicated DBA in performance tuning, and helping you centralize access to data that lives in multiple places.
At this point, your MVC web site is just one more app that connects to the application server in your SOA system. You might also have a legacy client-server app installed on some desktops, or your MVC app may be public sales facing while actual sales and support reps use something completely different, billing uses a different app, and order fulfillment or procurement have yet another interface ... but they all talk to the same service layer. An additional advantage here is this service layer makes it easier to pull in data from multiple sources, so if your manufacturing system needs material availability information from an outside system, the service layer can know how to go find it and front-end code doesn't have to know this data came from anywhere special.
The point of all this is it's not a case of either/or here. If you have an SOA, you can use MVC at one level of the system, and the interface provided by the SOA's service layer will determine some of what your MVC model looks like and how the controller behaves. If you don't have an SOA, MVC just happens to work okay at building the whole stack, from database to presentation, and in fact works such that the model becomes a microcosm for a larger service layer.
The question, then, of when to use JSON vs when to use ASP.Net MVC takes on a new form. ASP.Net MVC can be part of an SOA architecture, and service frameworks that offer JSON data are often implemented using client-side MVC libraries. You really want to know when it's more appropriate to do more on the client side vs more on the server side. Honestly, I think this is mostly personal preference, but there are trade-offs you should be aware of.
Doing more work client side can be great for performance and scalability, because it spreads some of the application's workload among your users' computers, and it can reduce the latency introduced by round trips to a web server or application server.
On the other hand, doing more work server side is good for avoiding the latency of transferring larger data sets over slower public internet links. It can make it easier to meet compliance requirements such as the accessibility mandates of the Americans with Disabilities Act (too much JavaScript can cause problems for accessible browsers), or situations where pushing data to client systems would constitute a privacy or security risk. It can also make it easier to develop, deploy and maintain new code, since more of the processing happens within a single layer.

Client side MV* (MVC, MVP, MVVM etc.) architectures and server side MV* architectures are the same as far as the SOA part of your architecture is concerned.
The Model is where you communicate with the services and fetch data from the various services. The choice between client-side and server-side MV* is orthogonal.
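A hedged sketch of that orthogonality in ASP.NET MVC terms (the service and types are invented for illustration): the same service-backed model can feed either a server-rendered view or a JSON endpoint for a client-side MV* framework.

    using System.Collections.Generic;
    using System.Web.Mvc;

    public class Product
    {
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    // Hypothetical service contract; the Model talks to this, not to a database.
    public interface IProductService
    {
        IList<Product> GetFeatured();
    }

    public class ProductsController : Controller
    {
        private readonly IProductService _service;

        // In a real app this would be supplied by your DI container.
        public ProductsController(IProductService service)
        {
            _service = service;
        }

        // Server-side MV*: the page is rendered here and HTML goes to the browser.
        public ActionResult Index()
        {
            return View(_service.GetFeatured());
        }

        // Client-side MV*: the same data goes out as JSON for Knockout/Angular to bind.
        public JsonResult FeaturedJson()
        {
            return Json(_service.GetFeatured(), JsonRequestBehavior.AllowGet);
        }
    }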

Related

How to implement OData federation for Application integration

I have to integrate various legacy applications with some newly introduced parts. They are silos of information, built at different times with varying architectures. At times these applications may need to get data from another system, if it exists, and display it to the user within their own screens, based on business needs.
I was looking to see if it's possible to implement a generic federation engine that abstracts the aggregation of data from various other OData endpoints and provides a single version of the truth.
A simplistic example could be as below.
I am not really looking to do ETL here, as that may introduce data-related side effects in terms of staleness, etc.
Can someone share some ideas as to how this can be achieved, or point me to any article on the net that shows such a concept?
Regards
Kiran
Officially, the answer is to use either the reflection provider or a custom provider.
Support for multiple data sources (odata)
Allow me to expose entities from multiple sources
To decide between the two approaches, take a look at this article.
If you decide that you need to build a custom provider, the referenced article also contains links to a series of other articles that will help you through the learning process.
Your project seems non-trivial, so in addition I recommend looking at other resources like the WCF Data Services Toolkit to help you along.
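To give a feel for the reflection-provider route, here is a minimal sketch (the entity, context and downstream client are all hypothetical): every IQueryable property on the context class is exposed as an OData entity set by the federating service.

    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;

    // The reflection provider requires each entity to declare a key.
    [DataServiceKey("Id")]
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Stub standing in for a client of one downstream OData endpoint.
    public static class LegacyCrmClient
    {
        public static IQueryable<Customer> QueryCustomers()
        {
            return new[] { new Customer { Id = 1, Name = "Contoso" } }.AsQueryable();
        }
    }

    // Aggregation context: each IQueryable<T> property becomes an entity set.
    public class FederatedContext
    {
        public IQueryable<Customer> Customers
        {
            get { return LegacyCrmClient.QueryCustomers(); }
        }
    }

    // The federating endpoint itself: one service over many underlying sources.
    public class FederationService : DataService<FederatedContext>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }

The custom-provider route gives you more control (computed entity shapes, lazy metadata), at the cost of implementing the provider interfaces yourself.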
By the way, from an architecture standpoint, I believe your idea is sound. Yes, you may have some domain logic behind OData endpoints, but I've always believed this logic should be thin as OData is primarily used as part of data access layers, much like SQL (as opposed to service layers which encapsulate more behavior in the traditional sense). Even if that thin logic requires your aggregator to get a little smart, it's likely that you'll always be able to get away with it using a custom provider.
That being said, if the aggregator itself encapsulates a lot of behavior (as opposed to simply aggregating and re-exposing raw data), you should consider using another protocol that is less data-oriented (but keep using the OData backends in that service). Since domain logic is normally heavily specific, there's very rarely a one-size-fits-all type of protocol, so you'd naturally have to design it yourself.
However, if the aggregated data is exposed mostly as-is or with essentially structural changes (little to no behavior besides assembling the raw data), I think using OData again for that central component is very appropriate.
Obviously, and as you can see in the comments to your question, not everybody would agree with all of this -- so as always, take it with a grain of salt.

Designing a modern platform in Rails 3

I'm in the early stages of prototyping a Rails 3 application that will expose a public API. The site has three separate concerns which I am planning to split across three subdomains.
api.mysite.com
The publicly exposed API.
admin.mysite.com
The admin portal for creating blogs (using the public API).
x.mysite.com
The public blog site created at admin.mysite.com where x is the name of the blog. This too will make use of the public API.
All three will share domain objects. For example, you should be able to login to admin.mysite.com using an account you created on api.mysite.com or x.mysite.com.
Questions
Should I attempt to build one rails application to handle all three concerns or should I split this in multiple applications each handling a specific concern?
What are the Pros & Cons of each?
Does anyone have any insight into how some of the larger sites (basecamp, github, shopify) are organized?
Your question is fairly general so I'll try and answer in general terms. And the fact that you mention "larger sites" leads me to the conclusion that you're concerned about scaling.
In the beginning it is definitely going to be easier to build one application - especially since the domain is shared. You can do separate controllers for the various interfaces (api, html, etc) but with a shared back-end. This will reduce code duplication and the complexity of keeping 3 apps in sync. Also remember that you might change your mind about features based on user feedback and you want to be nimble enough to respond quickly.
The main benefit I can see in separating out three different deployables is that you can have an independent deploy schedule for each. For example, a bug fix in the API won't have to wait for the admin portal to be ready to deploy. You can also have separate teams working in parallel.
If you're careful about what you keep in your session, you'll be able to deploy multiple instances of your application on multiple servers, pointing at the same database (a.k.a. horizontal scaling). Each of these instances is identical to the others, and a load balancer (either dedicated hardware or virtual) directs traffic between them. Eventually this approach runs out of steam when your database can't handle the load. At that point you can look at more caching, sharding, NoSQL and all sorts of clever scaling techniques.
Most (but not all) larger sites end up doing some sort of horizontal scaling with some sharding of data.
All told, focus on getting a useful application to your users. If things take off, you can worry about scaling. More applications fail because the user experience is awful than because they can't scale.

Arguments for using WCF/OData as an access layer instead of EF/L2S/NHibernate directly

We develop mostly low-traffic but highly specialized web applications. Normally we use L2S, EF or NHibernate as the access layer, and then throw ASP.NET MVC on top; for normal CRUD operations we query the ISession/DataContext directly, but more advanced functions/side effects go into some kind of service layer.
Now, I was thinking about publishing the data through OData (WCF Data Services) and querying that from the controllers (or even from jQuery, when a good template engine shows up), and publishing the service operations through a WCF service (or as custom methods on the WCF Data Service?). What advantages/disadvantages does this architecture pose?
Do I gain anything except higher complexity and latency? Better separation of concerns (or is that just an illusion)?
Edit:
Could it be a good idea to create a completely AJAX-driven solution with e.g. WCF RIA Services? Or does one lose too much flexibility? It feels like you could then completely detach your views from your logic; heck, one should be able to write pure HTML, without even needing ASP.NET MVC. But I guess that raises a lot of new problems?
Don't do it. Sorry, but this is a stupid, over-engineered approach. You are IN ONE PROCESS, and you insist on running a network connection AND encoding all the data passing through into XML and back out, plus running it over an HTTP connection with limited query semantics? Don't tell anyone you even tried.
Separation of concern is an illusion here - you replace a highly optimized domain model with a simplified data layer.
THAT SAID: I love OData - it's great. But it is not an in-process technology; it is a FRONT-END technology, like ASP.NET MVC - just not for the end user, but for ANOTHER program to integrate with your data. It should be used in similar scenarios, and when exposing data across trust borders (Silverlight, for example, is a trust border, as the requests can be faked).
It is NOT optimized to replace in-process, high-end application runtime layers like NHibernate.
As TomTom mentions, you don't want to pay the cost of loopback for OData when within a process. If you have direct line-of-sight to your database and it's your own application's database, then there is no reason to put WCF Data Services in the middle. I would continue to use one of the other options you mentioned (L2S, EF, nHibernate).
Now, if you need to expose data over your http endpoint for other applications to consume, or even for your own application if you have some jQuery code in the client that needs to access data from the server, then definitely an OData endpoint may help and WCF Data Services is the simplest way to create one.
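If you do stand up such an endpoint, consuming it from .NET code is brief. A hedged sketch (the URI and the Order type are invented; "Add Service Reference" would normally generate a typed context for you):

    using System;
    using System.Data.Services.Client;
    using System.Data.Services.Common;
    using System.Linq;

    [DataServiceKey("Id")]
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    class Program
    {
        static void Main()
        {
            var ctx = new DataServiceContext(new Uri("http://example.com/Data.svc"));

            // LINQ operators translate into OData query options ($filter, $top)
            // on the wire, so the filtering happens on the server.
            var bigOrders = ctx.CreateQuery<Order>("Orders")
                               .Where(o => o.Total > 100m)
                               .Take(10)
                               .ToList();

            Console.WriteLine("Fetched {0} orders", bigOrders.Count);
        }
    }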
TomTom has a lot of votes and although he's not wrong, he's also not right, in spite of his persuasive tone.
In this particular instance, the OP appears to be writing an intranet LOB-style app that probably only stands to be impeded by an OData service mimicking the underlying database. But what if he were not mimicking the underlying database?
If he were building an application based on various or unknown future data sources, then the services layer could unify, re-present, simplify and aggregate those sources, even if a large proportion of queries eventually went back to a SQL Server in the next room.
Similarly, if you're building an application of massive scale - and by scale I mean millions of users expecting to wait a few seconds between actions, not millions of FX trades an hour - then placing a services layer between your application and the data is a common pattern. The scalability of the internet is based on many small stateless HTTP servers and the caching infrastructure in between.
In real life, the same queries are run countless times; people refresh pages or click the same link over and over. No one really asks for 10m rows, because few humans could take that in at one go. So working in small pages keeps the data flowing and requests interleaving. You also have the opportunity to introduce a shared in-RAM cache in the services layer, or even a RAM database.
You may even find that you need to shard your database or partition it between SQL and a key/value store. You can then do the joins in the middle tier, scaled out, and offload the joining and compute-intensive stuff away from the database server.
The rule at internet scale is that the database is your hot spot, and you need to do everything you can to stop anyone talking to it! Be it the local HTTP cache on an iPad, your ISP's proxy, the IIS output cache or a Redis cache, all those layers help to spread the load and ease the burden.
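As a toy illustration of the shared in-RAM cache idea in the services layer (a sketch only; a real deployment would reach for Redis or similar, and handle invalidation and cache stampedes):

    using System;
    using System.Collections.Concurrent;

    public class ReadThroughCache<TKey, TValue>
    {
        private readonly ConcurrentDictionary<TKey, Tuple<DateTime, TValue>> _entries =
            new ConcurrentDictionary<TKey, Tuple<DateTime, TValue>>();
        private readonly TimeSpan _ttl;

        public ReadThroughCache(TimeSpan ttl) { _ttl = ttl; }

        public TValue GetOrLoad(TKey key, Func<TKey, TValue> load)
        {
            Tuple<DateTime, TValue> hit;
            if (_entries.TryGetValue(key, out hit) && DateTime.UtcNow - hit.Item1 < _ttl)
                return hit.Item2; // served from RAM; the database never sees the query

            var value = load(key); // only a cache miss reaches the database
            _entries[key] = Tuple.Create(DateTime.UtcNow, value);
            return value;
        }
    }

Repeated identical requests within the TTL then never touch the hot spot at all.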
So if Carl came to interview with me and told me he'd considered putting an OData layer before his SQL boxes, I'd be interested to hear his reasoning.
WCF Data Services and OData support JSON, so you can minimize the payload by leveraging that. Plus, with WCF Data Services you completely control your data access. You don't have to use Entity Framework - you can customize everything. The benefit is that the protocol structure is handled for you entirely by WCF Data Services and OData, and consuming the service from MVC is an Add Service Reference away. WCF Data Services runs on WCF, so you can also offer other web services beyond just OData-style delivery, making it extremely flexible.
There are limitations here and there that come with the nature of OData as well as the way WCF Data Services handles OData, but they are fairly specific and if they arise in your architecture there are ways around them.
If your solution is isolated to a single web application, then having the data layer embedded in that application works well. But if you have any need whatsoever for another app or process to hit the data layer or shared business logic, then exploring the option of putting your data layer in a WCF Data Service is well worth it. For example, you could write a PowerShell script that calls a web service method in two lines of code. So if you have domain logic that you want to run from your web app as well as from a command line or scheduled task, your WCF Data Service layer can handle all of those scenarios without duplicated logic or code.
Many ways to skin a cat. I have used both approaches in business applications and would not say that one or the other should be avoided. They both work well and provide plenty of value without being detrimental.
To be fair, there are benefits to this approach that may outweigh the performance concerns, which are admittedly tremendous. An application built this way will have orders of magnitude more latency and may cost several times more in compute resources to execute than an in-process solution.
That having been said, in development scenarios where human resources are limited, this may work better. It allows contractors to be hired to write new screens or whole new applications very quickly, in whatever language suits them. Developers can get up to speed faster than with a proprietary homegrown solution. No more sa passwords in config files; injection of a custom security layer if required; unified logging and auditing; combining several data stores into one consistent resource. If you have a heterogeneous platform, you don't need to write SDKs: they have already been written in many important languages. OData works very well with MS Excel, which is a huge win at many organizations. Depending on your network topology, it might be cheaper and even faster to route out over the internet than to use a leased line if you're in a remote office or behind a firewall (at a client site doing a demo, for instance).
For large datasets, the overhead of the request and packaging becomes less important - in reporting scenarios, for instance. While I have never designed something like this, I can see where it might be useful, depending on your corporate culture and available resources, to consume OData endpoints internally.

Should MVC web applications be 3-tier?

I will be designing a couple of web applications shortly. They will probably be done in ASP.NET MVC.
In my existing web apps, done in Delphi, the data access layer is separated out into a completely separate application, sometimes running on a different server. This was done more for code reuse than for architectural reasons. That won't be a factor in the next app, as it will be all new.
Is having a separate data access application overkill in an MVC app? I will already be separating out the business classes by virtue of using MVC, and I will be using an ORM to handle DB persistence.
Edit: Just to clarify: I use the term tier to refer to separate physical applications, something more than just a logical separation or layer.
The term "tier", in my experience, generally refers to physical application separations, e.g. client tier & server tier.
MVC refers to three "layers", the concern being separation around the three concerns it details: Model (data), View (UI) and Controller (app logic).
Now that I have made that distinction regarding my terminology..
Is having a separate data access application overkill in a mvc app?
I would say no (again, depending on what you mean by application); it is not overkill, as it may in actual fact result in a more maintainable system. Your ORM will possibly allow new data access options to be plugged in, but what if you wish to adopt a new ORM? Having a clearly separated data access layer (DAL) allows for much greater future flexibility in this aspect of your application.
On the other hand, depending on the scale and vision for the application, creating a completely standalone data access application may be overkill; but in a nutshell, separating the DAL into different assemblies is very much a recommended practice in the composition of an application implementing the MVC pattern.
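A hedged sketch of that separation (all names here are invented; the interface lives in a Core assembly, the implementation in a Data assembly):

    using NHibernate;

    // Core assembly: a persistence-agnostic contract the application codes against.
    public interface IOrderRepository
    {
        Order GetById(int id);
        void Save(Order order);
    }

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual decimal Total { get; set; }
    }

    // Data assembly: the NHibernate-backed implementation. Swapping ORMs later
    // means rewriting this class, not its callers.
    public class NHibernateOrderRepository : IOrderRepository
    {
        private readonly ISession _session;

        public NHibernateOrderRepository(ISession session)
        {
            _session = session;
        }

        public Order GetById(int id)
        {
            return _session.Get<Order>(id);
        }

        public void Save(Order order)
        {
            _session.SaveOrUpdate(order);
        }
    }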
Hope this helps. If you need more depth, do comment.
Well, I guess it depends a little on whether you're talking about tiers (physical) or layers (logical/projects).
Regarding the layering - you could take a look at something like S#arp Architecture (code.google.com/p/sharp-architecture/) for an example of how they do it (they have taken a pretty maximal approach to layering).
For an example of more minimalistic views here, take a look at Ayende's blog: ayende.com/Blog/
Regarding tiers - I think needlessly adding extra tiers and putting everything over the wire will just hurt your performance, unless you need to do it for capacity reasons. Get the layers right, and then separate them into tiers as you need to adjust for capacity (this should not take too much refactoring if you've separated your concerns well).
Great comment, Tobias.
I say add enough layers so that the structure makes sense to you and makes the app easier to maintain, and to keep a separation of concerns.

ASP.Net MVC with web service as model?

Does anyone have advice or tips on using a web service as the model in an ASP.Net MVC application? I haven't seen anyone writing about doing this. I'd like to build an MVC app, but not tie it to using a specific database, nor limit the database to the single MVC app. I feel a web service (RESTful, most likely ADO.Net Data Services) is the way to go.
How likely, or useful, is it for your MVC app to be decoupled from your database? How often have you seen, in your application's lifetime, a change from SQL Server to Oracle? In the last 10 years of projects I've delivered, it's never happened.
Architectures are like onions: they have layers of abstraction above the things they depend on. And if you're going to use an RDBMS for storage, that's at the core of your architecture. Abstracting yourself away from the DB so you can swap it around is very much a fallacy.
Now you can decouple your database access from your domain, and the repository pattern is one of the ways to do that. Most mature solutions use an ORM these days, so you may want to have a look at NHibernate if you want a mature technology, or ActiveRecord / linq2sql for a simpler active record pattern on top of your data.
Now that you have your data strategy in place, you have a domain of some sort. When you expose data to your client, you can choose to do so through an MVC pattern, where you'll usually send DTOs generated from your domain for rendering, or you can decide to leverage an architecture style like REST to provide more loosely coupled systems, by providing links and custom representations.
You go from tight coupling to looser coupling as you go towards the external layers of your solution.
If, however, your question was about building an MVC app on top of a REST architecture or web services, and using that as a model... why bother? If you're going to have a domain model, why not reuse it in your system and in your services where it makes sense?
Generating a UI from an MVC app and generating documents needed for a RESTful architecture are two completely different contexts; basing one on top of the other is just going to cause much more pain than needed. And you're sacrificing performance.
It depends on your exact scenario, but a remote XML-based service as the model in MVC is, from experience, not a good idea. It's probably over-engineering and disregards the need for a domain to start with.
Edit 2010-11-27; clarified my thoughts, which was really needed.
A web service most often exposes functionality across different types of applications, not as an abstraction inside a single application. You are probably thinking more of a way of encapsulating commands and reads so that they don't interfere with your controller/view programming.
Use a service from a service bus if you're after decoupling, and use an async pattern in your async pages. See Rhino.ServiceBus, NServiceBus and MassTransit for .NET-native implementations, and RabbitMQ for something different: http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/.
Edit: I've had some time to try Rabbit out, in a setup that pushed messages to my service, which in turn pushed updates to the bookkeeping app. RabbitMQ is a message broker, a.k.a. a MOM (message-oriented middleware), and you could use it to send messages to your application server.
You can also simply provide service interfaces. Read Eric Evans' Domain-Driven Design for a more detailed description.
RESTful service interfaces deal a lot with data, and more specifically with addressable resources. REST can greatly simplify your programming model and allows great control over output through the HTTP protocol. WCF's upcoming programming model uses true REST as defined in the original thesis, where each document should, to some extent, provide URIs for continued navigation. Have a look at this.
(In my first version of this post, I lamented REST for being 'slow', whatever that means.) REST-based APIs are also pretty much what CouchDB and Riak use.
ADO.NET is rather crap (!) [N+1 problems with lazy collections because of coding to implementation, data-access leakage - you always need your DB context where your query code is, etc.] in comparison to, for example, LightSpeed (commercial) or NHibernate. Spring.NET also allows you to wrap service interfaces in its container with a web service facade, but (without having browsed it for a while) I think it's a bit too XML-heavy in its configuration.
Edit 1: By ADO.NET here I mean the default "best practice" with DataSets, DataAdapters and iterating over lots of rows from a DataReader; it breeds rather ugly and hard-to-debug code. The N+1 stuff, yes, that is about the Entity Framework.
(Edit 2: EntityFramework doesn't impress me either!)
Edit 1: Create your domain layer in a separate assembly [a.k.a. Core] and provide all domain and application services there, then import this assembly from your specific MVC application. Wrap data access in a DAO/repository, behind an interface in your Core assembly, which your Data assembly then references and implements. Wire up interface and implementation with IoC. You could even program something for dynamic service discovery with the above-mentioned service buses, to resolve the interfaces. WCF uses interfaces like this, and so do most of the service buses above; you can provide a sub-dependency resolver in your IoC container to do this automatically.
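A hedged illustration of that wiring with Castle Windsor (reusing the hypothetical IOrderRepository/NHibernateOrderRepository pair sketched in an earlier answer; ISession registration and component lifetimes are omitted for brevity):

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    public static class Bootstrapper
    {
        public static IWindsorContainer Wire()
        {
            var container = new WindsorContainer();

            // Bind the Core interface to the Data implementation; callers only
            // ever see IOrderRepository, so the ORM can be swapped in one place.
            container.Register(
                Component.For<IOrderRepository>()
                         .ImplementedBy<NHibernateOrderRepository>());

            return container;
        }
    }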
Edit 2:
A great combo for the above would be CQRS + Event Sourcing + Reactive Extensions. Your write model would take commands, and your domain model would decide whether to accept them; it would push events into the Reactive Extensions pipeline, perhaps also over RabbitMQ, for your read model to consume.
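A minimal sketch of that pipeline with Reactive Extensions (the event and model names are invented; a real system would also persist the events, which is the event-sourcing part):

    using System;
    using System.Reactive.Subjects; // Reactive Extensions (Rx), assumed referenced

    // Hypothetical domain event emitted by the write model.
    public class OrderPlaced
    {
        public Guid OrderId { get; set; }
        public decimal Total { get; set; }
    }

    // Write model: accepts commands, enforces invariants, publishes events.
    public class OrderWriteModel
    {
        private readonly Subject<OrderPlaced> _events = new Subject<OrderPlaced>();
        public IObservable<OrderPlaced> Events { get { return _events; } }

        public void PlaceOrder(Guid id, decimal total)
        {
            if (total <= 0m)
                throw new InvalidOperationException("Command rejected: non-positive total.");
            _events.OnNext(new OrderPlaced { OrderId = id, Total = total });
        }
    }

    class Program
    {
        static void Main()
        {
            var writeModel = new OrderWriteModel();

            // Read model: subscribes to the event stream and maintains its own
            // query-optimized projection (here, just a console line).
            writeModel.Events.Subscribe(
                e => Console.WriteLine("Projecting order {0}: {1}", e.OrderId, e.Total));

            writeModel.PlaceOrder(Guid.NewGuid(), 42m);
        }
    }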
Update 2010-01-02 (edit 1)
The gist of my idea has been codified by something called MindTouch Dream. They have made a screencast in which they treat almost all parts of a web application as (web) services, which are also exposed with REST.
They have created a highly parallel framework using co-routines to handle this, including their own elastic thread pool.
To all the naysayers in this question: in your face! :p Listen to this screencast, especially at 12 minutes in.
The actual framework is here.
If you are into this sort of programming, have a look at how monads work and at their implementations in C#. You can also read up on coroutines.
Happy new year!
Update 2010-11-27 (edit 2)
It turned out coroutines got productized in the Task Parallel Library from Microsoft. Your Task now implements the same features, as it implements IAsyncResult. Caliburn is a cool framework that uses them.
Reactive Extensions took monad comprehensions to the next level of asynchronicity.
The ALT.NET world seems to be moving in the direction I talked about when I first wrote this answer, albeit with new types of architectures I knew little about.
You should define your models in a data-access-agnostic way, e.g. using the Repository pattern. Then you can create concrete implementations backed by specific data access technologies (web services, SQL, etc.).
It really depends on the size of this MVC project. I would say keep the UI and domain in the same running environment if the website is going to be used by a small number of users (< 5,000).
On the other hand, if you are planning a site that is going to be accessed by millions, you have to think distributed, and that means building your website in a way that lets it scale up and out. That means you might need extra servers (web, application and database).
For this to work nicely, you need to decouple your MVC UI site from the application. The application layer would usually contain your domain model and might be exposed through WCF or a service bus. I would prefer a service bus, because it is more reliable and can use persistent queues like MSMQ.
I hope this helps
