I am looking for some guidance on how to incorporate business rules into an ASP.NET MVC application and how they relate to the model.
First, a little background so we know what kinds of solutions are relevant to this question. At work we use WinForms, MVP, BusinessObjects, DataAccessObjects, and DataTransferObjects. At the layer boundaries, DTOs are used to send parameters to methods and as return types, or as List return types.
Right now we are adding a facade layer to translate the DTOs into Domain Objects for the UI to work with, since the architect does not like how using DTOs in the presentation layer is working out currently. I am comfortable with all of this in theory, setting aside whether or not it is practical.
I am making a website for fun, but for the sake of discussion let's say it serves the same amount of traffic as SO, something like 60,000 hits a month, last I heard. I am comfortable with the mechanics of the controllers and the views, and how the model integrates with the two.
I am using NerdDinner as a sample for building the site and I follow the Repository pattern implementation in the examples. What I don't get is how to incorporate business objects into the mix.
I hear people talk about LINQ as the DataAccessLayer/DataAccessObjects. If I force all of my requests through the business objects as I am used to, I introduce some weird dependencies: both my UI and my BOs have to know about my DAOs.
What would make some sense is to use the LINQ classes as a true DAO layer, hide it behind the BOs, and have the BOs transform between POCO and LINQ objects.
My only concern there is that I am fine with binding my UI to LINQ classes and don't really need all the extra work; I am happy with a thin, lightweight approach as in NerdDinner.
So what I essentially have is the Repository, instantiated in the controllers, that takes and returns LINQ objects. My business objects have static methods that just take LINQ classes and perform some calculation, say, apply a certain state's tax percentage, or whatever.
Since a lot of these calculations have to be done across the results of the repository, I am thinking of combining them into one central area, like a facade layer, but one that just does transforms against the data rather than translating to other object sets (DomainObjects <-> DTOs).
Should I do that, or should I say that those business methods really are part of my model and that they should be in the repository methods that return the objects?
From a design standpoint I would structure it like this. Of course, the naming is just for the purposes of this post; you don't have to name your DAL and BLL classes ..Repository and ..Service.
Have repositories (or a single one) where your data access/queries happen. They should ideally contain just queries (compiled or not). I personally have a repository for each data type to help keep queries separated.
The next layer should be your business layer, which I like to call services. These classes are responsible for all logic regarding validation, prep steps, and anything else needed to get the consumer of the service the information it needs. Since this is an ASP.NET MVC app, I have my services return view models, which are then passed directly into strongly-typed views. With my services I usually group them logically together instead of having one for each data type.
This is a great design because it keeps your data access code and presentation code nice and thin and most of the logic where things can go wrong is in your service (or business) layer.
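To make the shape concrete, here is a minimal sketch of that layering; the Dinner, DinnerListViewModel, MyDataContext and related names are hypothetical, loosely following the NerdDinner example mentioned in the question:

// assumes: using System; using System.Collections.Generic; using System.Linq; using System.Web.Mvc;

// Repository: queries only, no business logic.
public class DinnerRepository
{
    private readonly MyDataContext db;                      // assumed LINQ to SQL / EF context
    public DinnerRepository(MyDataContext db) { this.db = db; }

    public IQueryable<Dinner> FindUpcoming(DateTime from)
    {
        return db.Dinners.Where(d => d.EventDate >= from);
    }
}

// Service: validation, prep and any other business logic; returns a view model.
public class DinnerService
{
    private readonly DinnerRepository repository;
    public DinnerService(DinnerRepository repository) { this.repository = repository; }

    public DinnerListViewModel GetUpcomingDinners()
    {
        var dinners = repository.FindUpcoming(DateTime.Now).ToList();
        return new DinnerListViewModel { Dinners = dinners, Count = dinners.Count };
    }
}

// Controller: thin, just hands the view model to a strongly-typed view.
public class DinnersController : Controller
{
    private readonly DinnerService service;
    public DinnersController(DinnerService service) { this.service = service; }

    public ActionResult Index()
    {
        return View(service.GetUpcomingDinners());
    }
}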
Related
I am in the process of making project decisions on development patterns for a solution that uses Entity Framework 6 as the ORM and ASP.NET MVC 5.
I need insight on how transactions and business logic will be implemented. With respect to layers, I came to an initial assumption for the design where Entity Framework on top of SQL Server can be considered the Data Access Layer (DAL). On top of Entity Framework, there will be a Service Layer, where business logic and validation will be implemented. On top of the Service Layer, I will have ASP.NET MVC controllers consuming what the service layer offers.
Let me elaborate on this initial conclusion, drawn as a starting point for defining the architecture:
I want to follow principles that achieve the minimum possible complexity with respect to layers, abstractions, and the responsibilities of the solution's components. As an exercise, with this "simplicity" in mind, I could just embrace the template "proposed" by Microsoft when you create a new Visual Studio ASP.NET MVC web application, but I believe that does not meet the minimum design needed for an enterprise application: in Microsoft's template, the controller directly makes use of the Entity Framework DbContext and consumes the Data Access Layer, and no service layer is present. This leads to several issues, such as extremely tight coupling between the presentation and data access layers, as well as the so-called "fat controller" problem, where controllers become the bloated piece of the software, taking on the added responsibilities of business logic, transactions, and so on. That makes maintainability a mess; for example, the most basic principle of DRY (don't repeat yourself) is violated, since you end up with duplicated code and logic all over your fat controllers.
Ramping up to the next stage on the path from simplicity to complexity, I assume it is fair to add a Service Layer to the design. This way, ASP.NET MVC controllers would talk only to the service layer, which would be responsible for all CRUD operations and their validation, and for all other, more complex business logic. This Service Layer would then talk to the data access layer, represented by Entity Framework.
I would stop there and say the design with these proposed layers is enough, but that's where I need more insight on how to proceed. I need to resolve how transactions should be implemented, if you think of them as a wrapper for a series of individual operations performed by the validation and business logic methods residing in classes inside the service layer. In terms of implementation with Entity Framework, if every individual operation performed by a service layer method issues its own .SaveChanges(), I lose the ability to have the DbContext behave like a Unit of Work, wrapping many individual DbSet operations under a single .SaveChanges(). In that case, the DbSets behave like repositories. Many people argue that Entity Framework's DbContext and DbSet are implementations of the Unit of Work and Repository patterns, respectively.
In a more objective question, then: how can I implement these patterns using DbContext and DbSet directly, without any further abstraction into generic or entity-specific repository classes and a generic unit of work class? The implementation needs to rely on the consumption of a service layer, for the reasons I have already stated.
I think an answer to that would be just the last complexity leap I feel is necessary to reach my "least complex viable design".
Let me put a more concrete example to illustrate:
In my service layer, I have two methods that implement validation logic for two insert operations, each with a programmer-defined Insert method, such as:
EntityOneService.Insert
EntityTwoService.Insert
Each of these methods, in its corresponding service layer class, would have access to a DbContext and use DbSet.Add to signal that the entity should be persisted, provided all validation and/or business logic passes. The desired scenario is that I can use each service layer method call in isolation and/or in groups, such as from a new, different service layer class method like:
OperationOnePlusTwoService.Insert
This new method would implement calls to EntityOneService.Insert and EntityTwoService.Insert, IN A TRANSACTION-LIKE FASHION.
By transaction-like I mean that all calls must succeed, violating no validation or business rule, in order for the persistence layer to commit the operations.
DbContext.SaveChanges() apparently would have to be called only once for this to happen, OUTSIDE of any service layer Insert method implementation. In the context of an ASP.NET MVC controller consuming service layer methods, how could that be achieved without an actual Unit of Work and Repository abstraction over DbContext and DbSet?
Any advice would be very much appreciated. I am not posting this to argue the value of a real abstraction and implementation of the Repository and Unit of Work patterns, or whether Entity Framework's DbContext and DbSet are or are not equivalent to proper Repository and Unit of Work implementations; that's not the point. My project requirements do not in any way involve the need to decouple the application from Entity Framework, or to fully promote testability.
These are not concerns, and I am well aware of the consequences and future maintainability impact of not adopting full-fledged implementations of half a dozen layers and every design pattern possible to build a big, world-class enterprise solution.
desired scenario is that I can use each service layer method call in an isolated way ... but that they behave IN A TRANSACTION-LIKE FASHION.
This is rather simple with EF, assuming your services are not remote (in which case transactions are not advisable in the first place).
In the simplest implementation, each service instance requires a DbContext to be passed in its constructor, and contributes its changes to that DbContext. The orchestrating controller code can control the lifetime of the DbContext and control its use of transactions.
In practice, interfaces rather than concrete service types are typically used, and a Dependency Injection container often supplies the instances instead of constructing them by hand. But the pattern is the same.
e.g.
using (var db = new MyDbContext())
using (var s1 = new SomeService(db))               // both services share the same context
using (var s2 = new SomeOtherService(db))
using (var tran = db.Database.BeginTransaction())
{
    s1.DoStuff();                                  // each service contributes/saves its changes to db
    s2.DoStuff();
    tran.Commit();                                 // nothing becomes permanent unless both succeed
}
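As a hedged sketch of what such a service might look like internally, here is one possible shape using the EntityOneService name from the question and a hypothetical MyDbContext; each service only validates and adds to the shared context, and the orchestrating code saves once:

public class EntityOneService : IDisposable
{
    private readonly MyDbContext db;                 // shared context, owned by the caller
    public EntityOneService(MyDbContext db) { this.db = db; }

    public void Insert(EntityOne item)
    {
        // validation / business rules here; throw (or return a failure result) if they don't pass
        db.EntityOnes.Add(item);                     // contribute the change, but do NOT SaveChanges here
    }

    public void Dispose() { }                        // nothing to dispose; the caller owns the DbContext
}

// Orchestration inside a controller action (or a composing OperationOnePlusTwoService method);
// firstEntity and secondEntity are hypothetical instances to be inserted:
using (var db = new MyDbContext())
using (var one = new EntityOneService(db))
using (var two = new EntityTwoService(db))
{
    one.Insert(firstEntity);
    two.Insert(secondEntity);
    db.SaveChanges();                                // single unit of work: both inserts succeed or neither is persisted
}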
David
I'm (re)designing a large-scale application; we use a multi-layer architecture based on DDD.
We have MVC with data layer (implementation of repositories), domain layer (definition of domain model and interfaces - repositories, services, unit of work), service layer (implementation of services). So far, we use domain models (mostly entities) across all layers, and we use DTOs only as view models (in controller, service returns domain model(s) and controller creates view model, which is passed to the view).
I've read countless articles about using, not using, mapping, and passing DTOs. I understand that there's no definitive answer, but I'm not sure whether or not it's OK to return domain models from services to controllers. If I return a domain model, it's still never passed to the view, since the controller always creates a view-specific view model; in this case, it seems legitimate. On the other hand, it doesn't feel right when a domain model leaves the business layer (service layer). Sometimes a service needs to return a data object that wasn't defined in the domain, and then we either have to add a new, unmapped object to the domain or create a POCO object (this is ugly, since some services would return domain models and some would effectively return DTOs).
The question is: if we strictly use view models, is it OK to return domain models all the way to the controllers, or should we always use DTOs for communication with the service layer? If so, is it OK to adjust domain models based on what services need? (Frankly, I don't think so, since services should consume what the domain has.) If we should strictly stick to DTOs, should they be defined in the service layer? (I think so.) Sometimes it's clear that we should use DTOs (e.g., when a service performs a lot of business logic and creates new objects); sometimes it's clear that we should use just domain models (e.g., when a Membership service returns anemic User(s), it seems it wouldn't make much sense to create a DTO that is the same as the domain model) - but I prefer consistency and good practices.
The article Domain vs DTO vs ViewModel - How and When to use them? (and some others) is very similar to my problem, but it doesn't answer these questions. The article Should I implement DTOs in repository pattern with EF? is also similar, but it doesn't deal with DDD.
Disclaimer: I don't intend to use any design pattern just because it exists and is fancy; on the other hand, I'd like to use good design patterns and practices because they help in designing the application as a whole and with separation of concerns, even when using a particular pattern isn't "necessary", at least at the moment.
it doesn't feel right when a domain model leaves the business layer (service layer)
Makes you feel like you are pulling the guts out, right? According to Martin Fowler, the Service Layer defines the application's boundary; it encapsulates the domain. In other words, it protects the domain.
Sometimes a service needs to return a data object that wasn't defined in the domain
Can you provide an example of this data object?
If we should strictly stick to DTOs, should they be defined in the service layer?
Yes, because the response is part of your service layer. If it is defined "somewhere else" then the service layer needs to reference that "somewhere else", adding a new layer to your lasagna.
is it OK to return domain models all the way to the controllers, or should we always use DTOs for communication with the service layer?
A DTO is a request/response object; it makes sense if you use it for communication. If you use domain models in your presentation layer (MVC controllers/views, WebForms, console apps), then the presentation layer is tightly coupled to your domain, and any change in the domain requires you to change your controllers.
it seems it wouldn't make much sense to create a DTO that is the same as the domain model
This is one of the disadvantages of DTOs to new eyes. Right now you are thinking about duplication of code, but as your project expands it will make much more sense, especially in a team environment where different teams are assigned to different layers.
DTOs might add additional complexity to your application, but so do your layers. DTOs are an expensive feature of your system; they don't come free.
Why use a DTO
This article provides both the advantages and disadvantages of using DTOs: http://guntherpopp.blogspot.com/2010/09/to-dto-or-not-to-dto.html
Summary as follows:
When to Use
For large projects.
Project lifetime is 10 years and above.
Strategic, mission critical application.
Large teams (more than 5)
Developers are distributed geographically.
The domain and presentation are different.
To reduce the overhead of data exchanges (the original purpose of DTOs)
When not to Use
Small to mid size project (5 members max)
Project lifetime is 2 years or so.
No separate team for GUI, backend, etc.
Arguments Against DTO
Duplication of code.
Cost in development time and debugging (use DTO generation tools such as http://entitiestodtos.codeplex.com/)
You must synchronize both models all the time. (Personally, I like this because it helps you see the ripple effect of a change.)
Cost of development: additional mapping is necessary (use auto-mappers like https://github.com/AutoMapper/AutoMapper).
See also: Why are data transfer objects (DTOs) an anti-pattern?
Arguments For DTO
Without DTOs, the presentation and the domain are tightly coupled. (This is OK for small projects.)
Interface/API stability
May provide optimization for the presentation layer by returning a DTO containing only those attributes that are absolutely required. Using a LINQ projection, you don't have to pull an entire entity (see the sketch after this list).
To reduce development cost, use code-generating tools
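As a hedged illustration of the LINQ-projection point above (hypothetical User/UserDto names; db is an assumed EF context, and System.Linq is assumed to be in scope):

// Pulls only the columns the DTO needs instead of materializing whole User entities.
List<UserDto> users = db.Users
    .Where(u => u.IsActive)
    .Select(u => new UserDto
    {
        Id = u.Id,
        DisplayName = u.FirstName + " " + u.LastName
    })
    .ToList();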
I'm late to this party, but this is such a common, and important, question that I felt compelled to respond.
By "services" do you mean the "Application Layer" described by Evan's in the blue book? I'll assume you do, in which case the answer is that they should not return DTOs. I suggest reading chapter 4 in the blue book, titled "Isolating the Domain".
In that chapter, Evans says the following about the layers:
Partition a complex program into layers. Develop a design within each layer that is cohesive and that depends only on the layers below.
There is good reason for this. If you use the concept of partial order as a measure of software complexity then having a layer depend on a layer above it increases complexity, which decreases maintainability.
Applying this to your question, DTOs are really an adapter that is a concern of the User Interface / Presentation layer. Remember that remote/cross-process communication is exactly the purpose of a DTO (it's worth noting that Fowler, in his article on DTOs, also argues against DTOs being part of a service layer, although he isn't necessarily using DDD language).
If your application layer depends on those DTOs, it is depending on a layer above itself and your complexity increases. I can guarantee that this will increase the difficulty of maintaining your software.
For example, what if your system interfaces with several other systems or client types, each requiring their own DTO? How do you know which DTO a method of your application service should return? How would you even solve that problem if your language of choice doesn't allow overloading a method (service method, in this case) based on return type? And even if you figure out a way, why violate your Application Layer to support a Presentation layer concern?
In practical terms, this is a step down a road that will end in a spaghetti architecture. I've seen this kind of devolution and its results in my own experience.
Where I currently work, services in our Application Layer return domain objects. We don't consider this a problem since the Interface (i.e. UI/Presentation) layer is depending on the Domain layer, which is below it. Also, this dependency is minimized to a "reference only" type of dependency because:
a) the Interface Layer is only able to access these Domain objects as read-only return values obtained by calls to the Application layer
b) methods on services in the Application Layer accept as input only "raw" input (data values) or object parameters (to reduce parameter count where necessary) defined in that layer. Specifically, application services never accept Domain objects as input.
The Interface Layer uses mapping techniques defined within the Interface Layer itself to map from Domain objects to DTOs. Again, this keeps DTOs focused on being adapters that are controlled by the Interface Layer.
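A rough sketch of that arrangement, with hypothetical Order/OrderDto names and an assumed orderApplicationService field; the mapping lives in the Interface layer, and the application service accepts only raw values:

// Interface (UI) layer: mapping from a Domain object to a DTO.
public static class OrderDtoMapper
{
    public static OrderDto ToDto(this Order order)          // Order is a Domain object
    {
        return new OrderDto
        {
            Id = order.Id,
            Total = order.Total,
            Status = order.Status.ToString()
        };
    }
}

// Interface layer controller: raw input in, read-only Domain object back, DTO out to the view/client.
public ActionResult Details(int orderId)
{
    Order order = orderApplicationService.GetOrder(orderId);    // application service takes only raw values
    return View(order.ToDto());
}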
In my experience you should do what's practical. "The best design is the simplest design that works" - Einstein. With that in mind...
if we strictly use view models, is it OK to return domain models all the way to the controllers, or should we always use DTOs for communication with the service layer?
Absolutely it's OK! If you have Domain Entities, DTOs, and View Models then, counting database tables as well, you have every field in the application repeated in four places. I've worked on large projects where Domain Entities and View Models worked just fine. The only exception to this is if the application is distributed and the service layer resides on another server, in which case DTOs are required to send data across the wire for serialization reasons.
If so, is it OK to adjust domain models based on what services need? (Frankly, I don't think so, since services should consume what the domain has.)
Generally I'd agree and say no because the Domain model is typically a reflection of the business logic and doesn't usually get shaped by the consumer of that logic.
If we should strictly stick to DTOs, should they be defined in the service layer? (I think so.)
If you decide to use them I'd agree and say yes the Service layer is the perfect place as it's returning the DTOs at the end of the day.
Good luck!
It seems that your application is big and complex enough, given that you have decided to go with a DDD approach.
Don't return your POCO entities, or so-called domain entities and value objects, from your service layer. If you want to do this, then delete your service layer, because you don't need it anymore! View models or data transfer objects should live in the service layer, because they should map to domain model members and vice versa.
So why do you need DTOs? In a complex application with lots of scenarios, you should separate the concerns of the domain and your presentation views: a domain model could be divided into several DTOs, and several domain models could be collapsed into one DTO. So it's better to create your DTOs in a layered architecture, even if a DTO ends up being the same as your model.
Should we always use DTOs for communication with the service layer?
Yes, you should return DTOs from your service layer: in the service layer you talk to your repository with domain model members, map them to DTOs, and return those to the MVC controller, and vice versa.
Is it OK to adjust domain models based on what services need?
A service just talks to repositories, domain methods, and domain services. You should solve the business problem in your domain based on your needs; it's not the service's task to tell the domain what is needed.
If we should strictly stick to DTOs, should they be defined in the service layer? Yes; try to have DTOs or view models only in the service layer, because they should be mapped to domain members there, and it's not a good idea to place DTOs in the controllers of your application (try to use the request/response pattern in your service layer). Cheers!
Late to the party, but I'm facing the exact same type of architecture and I'm leaning towards "only DTOs from the service". This is mainly because I've decided to use domain objects/aggregates only to maintain validity within the object, thus only when updating, creating or deleting. When we're querying for data, we only use EF as a repository and map the results to DTOs. This leaves us free to optimize read queries rather than adapting them to business objects, often using database functions because they are fast.
Each service method defines its own contract and is therefore easier to maintain over time. I hope.
So far, we use domain models (mostly entities) across all layers, and we use DTOs only as view models (in controller, service returns domain model(s) and controller creates view model, which is passed to the view).
Since the Domain Model provides terminology (the Ubiquitous Language) for your whole application, it is better to use the Domain Model widely.
The only reason to use ViewModels/DTOs is to implement the MVC pattern in your application, separating the View (any kind of presentation layer) from the Model (the Domain Model). In this case your presentation and domain model are loosely coupled.
Sometimes a service needs to return a data object that wasn't defined in the domain, and then we either have to add a new, unmapped object to the domain or create a POCO object (this is ugly, since some services would return domain models and some would effectively return DTOs).
I assume you are talking about Application/Business/Domain Logic services.
I suggest you return domain entities when you can. If you need to return additional information, it is acceptable to return a DTO that holds several domain entities.
Sometimes people who use 3rd-party frameworks that generate proxies over domain entities face difficulties exposing domain entities from their services, but that is only a matter of incorrect usage.
The question is: if we strictly use view models, is it OK to return domain models all the way to the controllers, or should we always use DTOs for communication with the service layer?
I would say it is enough to return domain entities in 99.9% of cases.
In order to simplify the creation of DTOs and the mapping of your domain entities into them, you can use AutoMapper.
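For example, a minimal AutoMapper sketch (hypothetical User/UserDto names; the MapperConfiguration API shown here is from more recent AutoMapper versions, while older versions used the static Mapper.CreateMap):

// One-time configuration, e.g. at application startup.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<User, UserDto>();      // like-named properties are mapped automatically
});
var mapper = config.CreateMapper();

// In the service layer, when handing data to the controller:
UserDto dto = mapper.Map<UserDto>(domainUser);
List<UserDto> dtos = mapper.Map<List<UserDto>>(domainUsers);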
If you return part of your domain model, it becomes part of a contract. A contract is hard to change, as things outside of your context depend on it. As such, you would be making part of your domain model hard to change.
A very important aspect of a domain model is that it is easy to change. This makes us flexible to the domain's changing requirements.
I'd suggest analyzing these two questions:
Are your upper layers (i.e. views & view models / controllers) consuming the data in a different way from what the domain layer exposes? If there is a lot of mapping being done, or even logic involved, I'd suggest revisiting your design: it should probably be closer to how the data is actually used.
How likely is it that you will deeply change your upper layers (e.g., swapping ASP.NET for WPF)? If this is highly unlikely and your architecture is not very complex, you may be better off exposing as many domain entities as you can.
I'm afraid it is quite a broad topic, and it really comes down to how complex your system and its requirements are.
In my experience, unless you are using an OO UI pattern (like naked objects), exposing the domain objects to the UI is a bad idea. This is because as the application grows, the needs of the UI change and force your objects to accommodate those changes. You end up serving two masters, the UI and the DOMAIN, which is a very painful experience; believe me, you don't want to be there. The UI model has the function of communicating with the user, the DOMAIN model holds the business rules, and the persistence model deals with storing data effectively. They all address different needs of the application. I'm in the middle of writing a blog post about this; I will add it when it's done.
I'm new to ASP.NET MVC and EF, so excuse me if my question is not clear or easy.
I have read some tutorials about EF Code-First approach and now I'm trying this tutorial Getting Started with EF using MVC.
Using the Code-First approach, I'm using POCO classes to define my DB model, DB logic, DB validation, UI validation and some UI logic. So I can use these classes (or objects) in my presentation layer, or use them as JSON objects when dealing with web services (or in JavaScript code).
My question: isn't that mixing different kinds of logic together? I mean, shouldn't I use special view-model classes for presentation, with those classes holding the UI logic and validation?
Is it good practice to send the POCO objects to the view (or to the client in general)?
Finally, I need some guidance on how to organize my project layers or folders. I see many ways, and I'm confused about what to choose or what structure I should base my own on.
You should not use your data entities in the higher layers of your application. Transform/compose/copy data from the DAL into view model or business classes using a library like AutoMapper. This keeps the layers of your application independent and hides your database schema from end users.
Keep in mind that MVC is only concerned with presentation; all real work should happen in other layers of your app. I try to keep layers independent and use a little bit of glue code to bundle them into an app. I may write data access and business logic layers when working on an MVC project, and then re-use them in a command-line utility. There are a lot of books that talk about writing quality code. I personally found Clean Code, Code Complete 2, and Design Patterns extremely helpful.
As LeffeBrune said, it is bad practice to expose your data entities directly to the presentation layer. You can try to expose some interfaces, in the form of project services, that return view models to the controller. This helps keep the layers separate, and lets you implement the Unit of Work pattern and some other cool stuff.
You can start by reading the Scott Millet book ASP.NET Design Patterns as a starting point for designing a good layered application; here is his blog.
You can define interfaces in your business layer which your EF entities can implement; the business logic doesn't know about the actual implementation. To do this you need the data layer to reference the business layer, which means you're inverting the dependencies; you then use an IoC container to bind the interfaces to their implementations. ...but yeah, that's one of many, many ways to go about it.
The thing is, with single responsibility in mind, your entities shouldn't worry about UI stuff - this can mean your interfaces also need to be implemented by "view model" classes, which implement things like validation and other business & presentation concerns.
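A small sketch of that inversion, with hypothetical names; it assumes a database-first style partial entity class (whose generated half already defines Name and TotalPurchases) and some IoC container at the composition root:

// Business layer: the contract.
public interface ICustomer
{
    string Name { get; }
    bool IsEligibleForDiscount();
}

// Data layer (references the business layer): the generated EF entity implements the interface
// via a partial class, so the business logic never sees the concrete EF type.
public partial class Customer : ICustomer
{
    public bool IsEligibleForDiscount()
    {
        return TotalPurchases > 1000m;   // TotalPurchases is an assumed column on the entity
    }
}

// Composition root: bind abstractions to implementations with your IoC container of choice,
// e.g. (pseudo-registration, syntax varies by container):
// container.Register<ICustomerRepository, EfCustomerRepository>();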
Your questions require a lot of discussion.
Choosing the infrastructure for your project is a complicated issue that depends on lots of factors. There are design patterns that address different requirements in a project and involve you or your team in multiple concepts and technologies. I recommend two valuable resources that will help you understand software architecture with a focus on .NET technologies.
CSLA .NET: The CSLA .NET framework is an application development framework that reduces the cost of building and maintaining applications. Rockford Lhotka, the creator of CSLA .NET, has some books that describe an optimal project infrastructure in depth; all of your questions are answered in his books.
Microsoft Spain - Domain Oriented N-Layered .NET 4.0 Sample App: the project/sample is deliberately restricted to a .NET implementation of the most-used patterns in N-Layered Domain-Oriented Architectures, based on simple, easy-to-understand scenarios (Customers, Orders, Bank Transfers, etc.). The project/sample is very well documented, and all of your questions are also answered there.
All in all, view models, POCOs, layers, etc. are concepts rooted in software architecture, and I believe they cannot be described briefly.
What I normally use is a POCO layer for my data access and a view model layer for my views, just as LeffeBrune said.
All my layers use POCOs, and the controllers are responsible for constructing the view model that each view needs.
To automate this work I use AutoMapper.
So my solution structure usually ends up like this:
Model (POCO)
Data access (NHibernate)
Service (Business Layer)
Web (UI)
ViewModel
So, as I understand it, with good loose coupling I should be able to swap out my DAL with a couple of lines of code at the application root.
I have two DALs written: LINQ to SQL and a JSON file repository (for testing, and because I wanted to try out the System.Web.Scripting.JavascriptSerializer).
LINQ to SQL will create entities instead of my business models and feed them upwards through an IRepository, which is wired up with constructor injection at the application root.
My JSON layer doesn't have any autogenerated classes to deserialize into, so I'm lost as to a simple way to have it depend on an interface or abstract class and still function.
This question is based on the following assumptions/understandings:
I believe I would need the LINQ to SQL layer to implement an interface so that, at compile time, the application domain can dictate that the entity classes have a place to read/write all of the current model's fields.
Any business logic dictates a need for another set of classes with almost the same names and the same properties in the model layer.
Then conversion methods that take the DAL's objects and translate them to business objects and back would be needed (even if both sides implement the same interface, this seems very inefficient).
This code is yet another place that would have to change if the model class or interface changed (interface, business class, view, DAL entity).
Deserialization in any alternative DAL requires that I create 'entities' with the same properties and fields in that layer (more duplication).
So to meet all of the flexibility/agility goals, it appears I need an interface for each application domain/business object, a concrete class where business logic can live, and DAL objects that implement the interface (this means layers that don't autogenerate entities would have to be hand-coded, pure duplication).
How would I use loose coupling without a ton of duplication and loss of DRY?
Welcome to the beautiful and exciting world of loosely coupled code :)
You understand the problem correctly, but let me first reiterate what you are already implying: The Domain Model (that is, all Domain classes) must be defined independently of any specific Data Access technology, so you can't use auto-generated LINQ to SQL (L2S) classes as a basis for your Domain classes for the simple reason that you can't really reuse those together with other technologies (as you have found out with your JSON-based Repository).
Interfaces for each Domain object are not even going to help you, because to avoid Anemic Domain Models you will need to implement behavior in the Domain classes (and you can't put behavior into interfaces).
This means that to hydrate and dehydrate Domain objects you must have some mapping code. It has always been like this: in the old days we had to map from IDataReader instances to Domain classes, while now we need to map from Data (L2S) classes to Domain classes.
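For illustration, a hedged sketch of that mapping code, with hypothetical names (CustomerRecord is the generated L2S class, Customer the hand-written Domain class, ICustomerRepository the Domain-defined abstraction):

public class LinqToSqlCustomerRepository : ICustomerRepository
{
    private readonly MyDataContext db;               // generated LINQ to SQL DataContext
    public LinqToSqlCustomerRepository(MyDataContext db) { this.db = db; }

    public Customer GetById(int id)
    {
        CustomerRecord row = db.CustomerRecords.Single(r => r.Id == id);
        // hydrate: map the data class to the Domain class
        return new Customer(row.Id, row.Name, row.Email);
    }

    public void Save(Customer customer)
    {
        // dehydrate: copy the Domain object's state back onto the data class
        CustomerRecord row = db.CustomerRecords.Single(r => r.Id == customer.Id);
        row.Name = customer.Name;
        row.Email = customer.Email;
        db.SubmitChanges();                          // LINQ to SQL persists the changes
    }
}

A JSON-file repository would implement the same ICustomerRepository and do its own (de)serialization; that is exactly where the duplication you describe shows up.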
Could we wish for something better? Yes. Can we get something better? Probably. The next version of the Entity Framework will support Persistence Ignorance for exactly this reason: you should be able to define your Domain model as POCOs and if you provide a map and a database schema, EF will take care of the rest.
Until that arrives Microsoft doesn't have anything that offers this kind of functionality, but NHibernate does (caveat: I have zero experience with NHibernate, but lots of smart people say that this is true, and I trust them on that). This is a major reason that so many people prefer NHibernate over EF.
Loose coupling requires lots of mapping, so I can only second queen3's suggestion of employing AutoMapper for this kind of tedious work.
As a closing note I do want to point out a related issue: Mapping doesn't necessarily imply a violation of DRY. The best example is when it comes to strongly typed ViewModels that correspond to a given Domain object. Don't be fooled by the semantic similarity. They may have more or less the same properties with the same values, but their responsibilities differ vastly. As an application grows, you will likely experience that little divergences sneak in here and there, and you will be glad you have that separation of concerns - even if it initially looked like a lot of repetitious work.
In any case: Loose coupling is more work at the beginning, but it will enable you to keep on evolving an application where a tightly coupled application would freeze in maintenance hell long before. You are in for the long haul, but instant gratification it ain't.
I'm not sure that I understand the problem correctly, but to deal with the duplicated classes you may use AutoMapper.
Note that you may apply mapping declaratively or using reflection, i.e. semi-automatically. For example, see here - this is not about the data layer, but it shows how simple attributes can help to automate mapping. In that case MVC applies the attributes, but you may invent your own engine that looks for an [Entity("Order")] attribute and applies AutoMapper.
Also, you cannot have 100% persistence independence with just a "couple of lines". ORM selection plays a big role here. For example, LINQ to SQL cannot use plain classes (POCOs), so it's not as easy to re-use them as it is with NHibernate. And with a Repository you're going to have many queries in the data layer; different ORMs usually have different query syntax or implementations (even LINQ is not always compatible between ORMs), so switching data access can be a matter of replacing the data layer completely, which is not a couple of lines (unless your app is "Hello, world!").
The solution with AutoMapper above is actually a kind of self-baked ORM... so maybe you need to consider a better ORM that suits your requirements? Why don't you use EF4, especially given that it supports POCOs now and is very similar to LINQ to SQL, at least in its query language?
My controllers hand the request over to the appropriate service. The service then makes calls to various repositories. The repositories use LINQ to SQL entities purely for data access and then map them to, and return, Domain Objects. The service then decides what the controller will present and replaces the Domain Objects with Presentation Objects, which are returned to the controller to display in the view.
So I have Services - Repositories - Domain Objects - Presentation Objects.
I am asking because it seems like I have a lot of objects, some not doing anything but passing data. Is this a reasonable scenario, or am I not following the MVC pattern properly?
Yes, you've got the right idea. It can be a lot of classes and interfaces (not even counting the unit tests and mock/test classes), but if you have a decent-sized application, you're liable to have many anyway. But to start out, it's a lot of work for not much initial gain.
I have seen projects skip some of the service implementations for basic services that just pass through to the repository without adding any value. They go straight to the repository from the controller and don't seem to lose much.
There are other ways to ease the burden of some classes by using tools where possible. For example, projects like AutoMapper can help streamline your domain object to view model mappings.
If your application is big enough, your pattern makes sense. Otherwise, I smell overengineering...
Ask yourself: what if this layer didn't exist and you'll find out if it's true or not.
I had a very similar scenario. Initially my project had UI, Controllers, Service Layer and Repositories. My unit tests covered both the service layer and repositories (filters) and in some cases the unit tests were doing the same thing (as the service layer was sometimes a pass through to the repositories).
During a large refactor I cut out the service layer, and this gave me a lot of flexibility, with the controllers dealing directly with the repositories and applying filters to get exactly what I want.
One problem I ran into was that you cannot serialize LINQ to SQL objects, and I therefore sometimes had to translate these objects into presentation objects.