MVC 4 - consuming a WCF service in a data layer

In my test project, I created a WCF service and got it running.
Next, I created an MVC 4 project. It is broken into layers under one solution.
The model layer.
The UI/View/Controller layer.
The repository layer.
To do a quick test:
In the UI layer, I added a web reference to my WCF service. In the controller, I called the WCF service (inside a using block) to populate a dropdown in the view.
However, I'm aiming for separation with Dependency Injection.
In the repository layer, I created an interface for populating the dropdown and injected it. Not a problem.
What I'm struggling with conceptually is:
Do I consume the WCF service in the UI layer and reference it in the repository layer? (That doesn't seem right.)
Do I create another project - a data layer - add the web reference to the WCF service there, and then reference the data layer from the repository?
Which brings me to another question - if I add a web reference to a WCF service in a separate project (layer), the WCF configuration is not present in the main application's web.config file...
So I'm struggling to grasp this part... is there anything I should be reading up on?
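Simplified, the interface and the injection look something like this (names are just placeholders):

using System.Collections.Generic;
using System.Web.Mvc;

// Repository layer: the interface the UI depends on
public interface IDropdownRepository
{
    IEnumerable<SelectListItem> GetDropdownItems();
}

// UI layer: the controller receives the interface via constructor injection
public class HomeController : Controller
{
    private readonly IDropdownRepository _dropdownRepository;

    public HomeController(IDropdownRepository dropdownRepository)
    {
        _dropdownRepository = dropdownRepository;
    }

    public ActionResult Index()
    {
        ViewBag.Items = _dropdownRepository.GetDropdownItems();
        return View();
    }
}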

Realizing that your cake can be cut in many ways, I present you here with my take on how to cut it:
Question #1: "Do I consume the WCF service in the UI layer and reference it in the repository layer?"
First off: as Steven points out in the comments, you should think hard about whether you really want to introduce a third server tier. The UI layer can be thought of as the "layer that is hosted in IIS". Seen that way, it is the host for both your MVC presentation layer and your services.
However, this does not mean that your services are implemented in this layer, only that they are hosted here (a matter of referencing the interfaces and implementation namespaces in the web.config, nothing more).
Question #2: "Do I create another project - a data layer - and add a web reference to the WCF service then from the repository, I create a reference to the data layer?"
As I read it, you are asking multiple questions at once. Yes, you should create a separate layer to encapsulate the repository. No, that layer should not contain anything related to WCF. If a server layer needs client access to a WCF service, separate that access into its own logical layer, either as a folder in your business logic layer or as a separate physical "data adapter layer" - never in your repository layer. While it is true that conceptually the WCF data adapter layer sits at the same level as your repository layer, you should not mix database access and service access in the same logical or physical layer.
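As a rough sketch only (all names here are made up, and CustomerServiceClient stands in for the proxy generated from the service reference), the service access layer exposes a plain interface to the business logic and keeps the WCF client details behind it:

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

// Application-side type returned to callers (kept independent of the WCF data contract)
public class CustomerInfo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Interface the business logic depends on - no WCF types leak out of this layer
public interface ICustomerServiceGateway
{
    IList<CustomerInfo> GetCustomers();
}

// Implementation in the service access ("data adapter") layer wraps the generated proxy
public class CustomerServiceGateway : ICustomerServiceGateway
{
    public IList<CustomerInfo> GetCustomers()
    {
        var client = new CustomerServiceClient();
        try
        {
            // map the service's data contracts to our own types so callers never see them
            return client.GetCustomers()
                         .Select(c => new CustomerInfo { Id = c.Id, Name = c.Name })
                         .ToList();
        }
        finally
        {
            // close the channel properly; Abort if it faulted
            if (client.State == CommunicationState.Faulted) client.Abort();
            else client.Close();
        }
    }
}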
The third question I can only answer with a question: what purpose do you want WCF to fulfill in your architecture? Do you want to introduce a separate service tier, or have you confused layer separation with tier separation?
Layers can be coupled at runtime in-process, while tiers can only be coupled at runtime through some sort of over-the-network remoting (such as WCF).
As Steven points out in his comment, you haven't really demonstrated a need for WCF in your architecture at all.
Let's look at your architecture in terms of logical layers and tiers (cutting the cake):
Client-Access-Layer (#1, running in the context of the browser)
Web-Host-Layer (#2, representing the outer server tier; contains all the MVC code - the Model may live in its own physical layer)
Service-Layer (does not exist yet in your architecture, as no need has been demonstrated for this further abstraction of layers into tiers)
Business-Layer (#2, the business logic that sits between the presentation layer and the database/data access layers)
Repository layer (#2, encapsulating database access logic and mapping)
Service Access layer (#2, encapsulating access to services on the outside, can be implemented as a folder in BL layer or in a separate physical layer)
Database tier (#3, the sql server)
So: are you asking how to implement DI in WCF using IIS as the host and C# as the implementation language?
Or:
Are you asking for general guidance on architecting an n-tier .NET application?
The first is easy, as there are tons of examples out there, such as this.
The second is trickier, as the answer will always be "it depends".
It depends on what your needs are, what your level of expertise is, what the projected timespan of your project is and so on.
Hope this helps you a bit.
If you want, I can send you my implementation of the WCF/Unity behavior.
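For reference, the heart of that kind of implementation is usually a custom IInstanceProvider that resolves the service from the container. The sketch below assumes Unity and omits the service behavior / ServiceHostFactory that wires it in; namespaces and lifetime handling vary by Unity version:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using Microsoft.Practices.Unity; // older Unity versions; newer ones use the Unity namespace

public class UnityInstanceProvider : IInstanceProvider
{
    private readonly IUnityContainer _container;
    private readonly Type _serviceType;

    public UnityInstanceProvider(IUnityContainer container, Type serviceType)
    {
        _container = container;
        _serviceType = serviceType;
    }

    // WCF calls this to create the service instance; the container builds it
    // along with its injected dependencies (repositories, etc.)
    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        return _container.Resolve(_serviceType);
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
        // release/dispose per-call dependencies here if your lifetime managers require it
    }
}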
Concepts:
Logical layer - an architectural concept used to isolate responsibility. Layers have downward coupling, never upwards.
Physical layer - an implementation construct used to enforce the logical separation. A .net assembly can be used as the implementation of a physical layer.
Tier - one or more layers that can exist on a machine that is not hosting other tiers.

It sounds to me like you're treating the WCF call as a data source, which makes it a natural fit for the repository layer. The repository's job is to abstract knowledge of the data source's implementation from the other layers.
I would recommend not using separate .NET projects to enforce layer boundaries; instead, use folders within the same project to get logical rather than physical separation. Most use cases don't require physical separation, and it adds complexity, such as the configuration difficulty you noted.
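For example (names invented for illustration, and IProductService standing in for the WCF service contract from the service reference), a repository can hide the fact that its data source is a WCF service at all; the rest of the application only ever sees the repository interface:

using System.Collections.Generic;
using System.Linq;

// The application's own model type
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Lives in a "Repositories" folder/namespace; callers don't know whether the
// data comes from a database or a WCF service.
public interface IProductRepository
{
    IEnumerable<Product> GetProducts();
}

public class WcfProductRepository : IProductRepository
{
    private readonly IProductService _service; // the WCF service contract

    public WcfProductRepository(IProductService service)
    {
        _service = service;
    }

    public IEnumerable<Product> GetProducts()
    {
        // map the service's data contracts to the application's model types
        return _service.GetProducts()
                       .Select(p => new Product { Id = p.Id, Name = p.Name })
                       .ToList();
    }
}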

Related

Entity Framework and ASP.NET MVC - how to directly make use of DbContext and DbSet to behave as the Repository and Unit of Work patterns?

I am in the process of making project decisions on development patterns for a solution, which involves the use of Entity Framework 6 as the ORM choice,
and ASP.NET MVC 5.
I need insight on how transactions and business logic will be implemented. In respect to layers, I came to an initial assumption for the design where
Entity Framework on top of SQL Server can be considered the Data Access Layer (DAL). On top of Entity Framework, there will be a Service Layer, where business logic and validation will be implemented. On top of the Service Layer, I will have ASP.NET MVC Controllers consuming what the service layer offers.
Let me elaborate on this initial conclusion, drawn as a starting point for defining the architecture:
I want to follow principles that achieve the minimum possible complexity with respect to layers, abstractions and the responsibilities of all the solution components. As an exercise, with this "simplicity" in mind, I could just embrace the template "proposed" by Microsoft when you create a new Visual Studio ASP.NET MVC web application, but I believe that does not fit the minimum design needed for an enterprise application, since in Microsoft's template the controller directly makes use of the Entity Framework DbContext and consumes the Data Access Layer, and no service layer is present. This leads to several issues, such as extremely tight coupling
between the presentation and data access layers, as well as the so-called "fat controller" problem, where controllers become the bloated piece of the software, taking on the added responsibilities of business logic, transactions, and so on. That makes maintainability a mess, with even the most basic principle, DRY (don't repeat yourself), being violated, since you end up with duplicated code and logic all over your fat controllers.
Ramping up to the next stage on the path from simplicity to complexity, I assume it is fair to add a Service Layer to the design. That way the ASP.NET MVC controllers would talk only to this service layer, which would be responsible for all CRUD operations and their validation, as well as any more complex business logic. This Service Layer would in turn talk to the data access layer, represented by Entity Framework.
I would stop there and say the design with these proposed layers is enough, but that's where I need more insight on how to proceed. I need to resolve the question of how transactions would be implemented, if you think of them as a wrapper around a series of individual operations performed by the validation and business logic methods residing in service layer classes. In terms of an Entity Framework implementation, if every individual operation performed by a service layer method issues its own .SaveChanges(), I lose the ability to have the DbContext behave like a Unit of Work, wrapping many individual DbSet operations in a single .SaveChanges(). In that case, the DbSets may behave like repositories. Many people argue that Entity Framework's DbContext and DbSet are implementations of the Unit of Work and Repository patterns, respectively.
Putting it more objectively then: how can I implement these patterns using DbContext and DbSet directly, without any further abstraction into generic or entity-specific repository classes and a generic unit of work class? The implementation needs to rely on the consumption of a service layer, for the reasons I have already stated.
I think an answer to that would be just the last complexity leap I feel necessary to get my "least complex viable design".
Let me put a more concrete example to illustrate:
In my service layer, I have 2 methods implementing validation logic for 2 insert operations, with a programmer-defined Insert method such as:
EntityOneService.Insert
EntityTwoService.Insert
Each of these methods, in its corresponding service layer class, would have access to a DbContext and use DbSet.Add to signal that the entity should be persisted,
provided all validation and/or business logic passes. The desired scenario is that I can use each service layer method call in isolation, and/or in groups, such as in a new, different service layer class method like:
OperationOnePlusTwoService.Insert
This new method would implement calls to EntityOneService.Insert and EntityTwoService.Insert, IN A TRANSACTION-LIKE FASHION.
By transaction-like I mean that all calls must succeed, not violating any validation or business rule, in order to have the persistence layer to commit the operations.
DbContext.SaveChanges() apparently would have to be called only once for this to happen, OUTSIDE of any service layer Insert method implementation. In the
context of an ASP.NET controller consuming service layer methods, how could that be achieved, without an actual Unit of Work and Repository abstraction over DbContext and DbSet?
Any advice would be very much appreciated. I am not posting this to argue the value of a real abstraction and implementation of the Repository and Unit of Work patterns, or whether Entity Framework's DbContext and DbSet are or are not equivalent to proper Repository and Unit of Work implementations; that's not the point. My project requirements do not in any way involve the need to decouple the application from Entity Framework, or to fully promote testability.
These are not concerns, and I am well aware of the consequences and future maintainability impact of not adopting full-fledged implementations of half a dozen layers and every design pattern possible to build a big, world-class enterprise solution.
desired scenario is that I can use each service layer method call in an isolated way ... but that behave IN A TRANSACTION-LIKE FASHION.
This is rather simple with EF, assuming your services are not remote (in which case transactions are not advisable in the first place).
In the simplest implementation, each service instance requires a DbContext to be passed in its constructor, and contributes its changes to that DbContext. The orchestrating controller code can control the lifetime of the DbContext and control its use of transactions.
In practice, interfaces rather than concrete service types are typically used, and a Dependency Injection container often does the wiring instead of manual construction. But the pattern is the same.
e.g.
// One DbContext shared by both services; the calling code owns its lifetime
// and the transaction, so the services' changes succeed or fail together.
using (var db = new MyDbContext())
using (var s1 = new SomeService(db))
using (var s2 = new SomeOtherService(db))
using (var tran = db.Database.BeginTransaction())
{
    s1.DoStuff();
    s2.DoStuff();
    tran.Commit(); // nothing is permanent until this point
}
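A sketch of what such a service might look like (hypothetical names, including the SomeEntities DbSet): it works only against the DbContext handed to it, and because the caller owns the transaction, nothing becomes permanent until Commit.

using System;

public class SomeService : IDisposable
{
    private readonly MyDbContext _db;

    public SomeService(MyDbContext db)
    {
        _db = db;
    }

    public void DoStuff()
    {
        // validation / business rules here - throw to abort the whole operation
        _db.SomeEntities.Add(new SomeEntity());
        _db.SaveChanges(); // still provisional: the caller's transaction decides the final outcome
    }

    public void Dispose()
    {
        // nothing to dispose; the context is owned and disposed by the caller
    }
}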
David

Should services always return DTOs, or can they also return domain models?

I'm (re)designing large-scale application, we use multi-layer architecture based on DDD.
We have MVC with data layer (implementation of repositories), domain layer (definition of domain model and interfaces - repositories, services, unit of work), service layer (implementation of services). So far, we use domain models (mostly entities) across all layers, and we use DTOs only as view models (in controller, service returns domain model(s) and controller creates view model, which is passed to the view).
I've read countless articles about using, not using, mapping and passing DTOs. I understand that there's no definitive answer, but I'm not sure whether it's OK to return domain models from services to controllers. If I return a domain model, it's still never passed to the view, since the controller always creates a view-specific view model - in that case it seems legitimate. On the other hand, it doesn't feel right when a domain model leaves the business layer (service layer). Sometimes a service needs to return a data object that wasn't defined in the domain, and then we either have to add a new, unmapped object to the domain, or create a POCO object (which is ugly, since some services then return domain models while others effectively return DTOs).
The question is - if we strictly use view models, is it OK to return domain models all the way to controllers, or should we always use DTOs for communication with the service layer? If so, is it OK to adjust domain models based on what services need? (Frankly, I don't think so, since services should consume what the domain has.) If we should strictly stick to DTOs, should they be defined in the service layer? (I think so.) Sometimes it's clear that we should use DTOs (e.g., when a service performs a lot of business logic and creates new objects), and sometimes it's clear that we should use just domain models (e.g., when a Membership service returns anemic User(s) - it seems it wouldn't make much sense to create a DTO that is the same as the domain model) - but I prefer consistency and good practices.
The article Domain vs DTO vs ViewModel - How and When to use them? (and also some others) is very similar to my problem, but it doesn't answer these questions. The article Should I implement DTOs in repository pattern with EF? is also similar, but it doesn't deal with DDD.
Disclaimer: I don't intend to use a design pattern just because it exists and is fancy; on the other hand, I'd like to use good design patterns and practices because they help in designing the application as a whole and with separation of concerns, even if using a particular pattern isn't strictly "necessary", at least at the moment.
it doesn't feel right when domain model leaves business layer (service layer)
Makes you feel like you are pulling the guts out, right? According to Martin Fowler, the Service Layer defines the application's boundary; it encapsulates the domain. In other words, it protects the domain.
Sometimes service needs to return data object that wasn't defined in the domain
Can you provide an example of this data object?
If we should strictly stick to DTOs, should they be defined in service layer?
Yes, because the response is part of your service layer. If it is defined "somewhere else" then the service layer needs to reference that "somewhere else", adding a new layer to your lasagna.
is it ok to return domain models all the way to controllers, or should we always use DTOs for communication with service layer?
A DTO is a response/request object, it makes sense if you use it for communication. If you use domain models in your presentation layer (MVC-Controllers/View, WebForms, ConsoleApp), then the presentation layer is tightly coupled to your domain, any changes in the domain requires you to change your controllers.
it seems it wouldn't make much sense to create a DTO that is the same as the domain model
This is one of the disadvantages of DTOs to new eyes. Right now you are thinking about duplication of code, but as your project expands it will make much more sense, especially in a team environment where different teams are assigned to different layers.
DTOs might add additional complexity to your application, but so do your layers. DTOs are an expensive feature of your system; they don't come free.
Why use a DTO
This article lays out both the advantages and disadvantages of using DTOs: http://guntherpopp.blogspot.com/2010/09/to-dto-or-not-to-dto.html
Summary as follows:
When to Use
For large projects.
Project lifetime is 10 years and above.
Strategic, mission critical application.
Large teams (more than 5)
Developers are distributed geographically.
The domain and presentation are different.
Reduce overhead data exchanges (the original purpose of DTO)
When not to Use
Small to mid size project (5 members max)
Project lifetime is 2 years or so.
No separate team for GUI, backend, etc.
Arguments Against DTO
Duplication of code.
Cost of development time, debugging. (use DTO generation tools http://entitiestodtos.codeplex.com/)
You must synchronize both models all the time. (Personally, I like this because it helps you see the ripple effect of a change.)
Cost of development: additional mapping is necessary. (Use auto-mappers like https://github.com/AutoMapper/AutoMapper)
Why are data transfer objects (DTOs) an anti-pattern?
Arguments With DTO
Without DTOs, the presentation and the domain are tightly coupled. (This is OK for small projects.)
Interface/API stability
May provide an optimization for the presentation layer by returning a DTO containing only those attributes that are absolutely required. Using a LINQ projection, you don't have to pull an entire entity (see the sketch after this list).
To reduce development cost, use code-generating tools
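Regarding the LINQ projection point above, a small sketch (entity and DTO names made up): the query pulls only the columns the DTO needs instead of whole entities.

// EF translates this into a SELECT of just these columns; the full User entity
// is never materialized.
var users = dbContext.Users
    .Where(u => u.IsActive)
    .Select(u => new UserListDto
    {
        Id = u.Id,
        DisplayName = u.FirstName + " " + u.LastName
    })
    .ToList();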
I'm late to this party, but this is such a common, and important, question that I felt compelled to respond.
By "services" do you mean the "Application Layer" described by Evans in the blue book? I'll assume you do, in which case the answer is that they should not return DTOs. I suggest reading chapter 4 in the blue book, titled "Isolating the Domain".
In that chapter, Evans says the following about the layers:
Partition a complex program into layers. Develop a design within each layer that is cohesive and that depends only on the layers below.
There is good reason for this. If you use the concept of partial order as a measure of software complexity then having a layer depend on a layer above it increases complexity, which decreases maintainability.
Applying this to your question, DTOs are really an adapter that is a concern of the User Interface / Presentation layer. Remember that remote/cross-process communication is exactly the purpose of a DTO (it's worth noting that in that post Fowler also argues against DTOs being part of a service layer, although he isn't necessarily talking DDD language).
If your application layer depends on those DTOs, it is depending on a layer above itself and your complexity increases. I can guarantee that this will increase the difficulty of maintaining your software.
For example, what if your system interfaces with several other systems or client types, each requiring their own DTO? How do you know which DTO a method of your application service should return? How would you even solve that problem if your language of choice doesn't allow overloading a method (service method, in this case) based on return type? And even if you figure out a way, why violate your Application Layer to support a Presentation layer concern?
In practical terms, this is a step down a road that will end in a spaghetti architecture. I've seen this kind of devolution and its results in my own experience.
Where I currently work, services in our Application Layer return domain objects. We don't consider this a problem since the Interface (i.e. UI/Presentation) layer is depending on the Domain layer, which is below it. Also, this dependency is minimized to a "reference only" type of dependency because:
a) the Interface Layer is only able to access these Domain objects as read-only return values obtained by calls to the Application layer
b) methods on services in the Application Layer accept as input only "raw" input (data values) or object parameters (to reduce parameter count where necessary) defined in that layer. Specifically, application services never accept Domain objects as input.
The Interface Layer uses mapping techniques defined within the Interface Layer itself to map from Domain objects to DTOs. Again, this keeps DTOs focused on being adapters that are controlled by the Interface Layer.
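As an illustration (hypothetical types: Order is the domain object, OrderDto the Interface-layer DTO), the mapping then lives entirely in the Interface Layer:

// Defined in the Interface (UI) layer, not in the Application layer
public static class OrderDtoMapper
{
    public static OrderDto ToDto(Order order)
    {
        return new OrderDto
        {
            Id = order.Id,
            CustomerName = order.Customer.Name,
            Total = order.Total
        };
    }
}

// Inside a controller action:
// var order = orderApplicationService.GetOrder(id);  // domain object, used read-only
// var dto = OrderDtoMapper.ToDto(order);             // mapping owned by the Interface Layer
// return View(dto);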
In my experience you should do what's practical. "The best design is the simplest design that works" - Einstein. With that in mind...
if we strictly use view models, is it ok to return domain models all the way to controllers, or should we always use DTOs for communication with service layer?
Absolutely it's OK! If you have Domain Entities, DTOs and View Models, then counting the database tables you have every field in the application repeated in 4 places. I've worked on large projects where Domain Entities and View Models worked just fine. The only exception to this is if the application is distributed and the service layer resides on another server, in which case DTOs are required to send data across the wire, for serialization reasons.
If so, is it ok to adjust domain models based on what services need? (Frankly I don't think so, since services should consume what domain has.)
Generally I'd agree and say no because the Domain model is typically a reflection of the business logic and doesn't usually get shaped by the consumer of that logic.
If we should strictly stick to DTOs, should they be defined in service layer? (I think so.)
If you decide to use them I'd agree and say yes the Service layer is the perfect place as it's returning the DTOs at the end of the day.
Good luck!
It seems that your application is big and complex enough, given that you have decided to take a DDD approach.
Don't return your POCO entities, the so-called domain entities and value objects, from your service layer. If you want to do that, then delete your service layer, because you don't need it anymore! View models or data transfer objects should live in the service layer, because they map to domain model members and vice versa.
So why do you need DTOs? In a complex application with lots of scenarios you should separate the concerns of the domain and your presentation views: a domain model could be split into several DTOs, and several domain models could be collapsed into one DTO. So it's better to create your DTOs in a layered architecture, even when they are the same shape as your models.
Should we always use DTOs for communication with service layer?
Yes, your service layer should return DTOs: it talks to your repository using domain model members, maps them to DTOs, and returns those to the MVC controller, and vice versa.
Is it ok to adjust domain models based on what services need?
A service just talks to repositories, domain methods and domain services. You should solve the business problem in your domain based on your needs; it's not the service's job to tell the domain what is needed.
If we should strictly stick to DTOs, should they be defined in service layer? Yes - try to keep DTOs or view models in the service layer only, because they are mapped to domain members there, and it's not a good idea to place DTOs in your application's controllers (try to use the Request/Response pattern in your service layer). Cheers!
Late to the party, but I'm facing the exact same type of architecture and I'm leaning towards "only DTOs from the service". This is mainly because I've decided to use domain objects/aggregates only to maintain validity within the object, thus only when updating, creating or deleting. When we're querying for data, we just use EF as a repository and map the results to DTOs. This leaves us free to optimize read queries rather than adapting them to business objects, often using database functions, as they are fast.
Each service method defines its own contract and is therefore easier to maintain over time. I hope.
So far, we use domain models (mostly entities) across all layers, and we use DTOs only as view models (in controller, service returns domain model(s) and controller creates view model, which is passed to the view).
Since the Domain Model provides the terminology (Ubiquitous Language) for your whole application, it is better to use the Domain Model widely.
The only reason to use ViewModels/DTOs is an implementation of MVC pattern in your application to separate View (any kind of presentation layer) and Model (Domain Model). In this case your presentation and domain model are loosely coupled.
Sometimes service needs to return data object that wasn't defined in the domain and then we either have to add new object to the domain that isn't mapped, or create POCO object (this is ugly, since some services return domain models, some effectively return DTOs).
I assume that you talk about Application/Business/Domain Logic services.
I suggest you return domain entities when you can. If additional information needs to be returned, it is acceptable to return a DTO that holds several domain entities.
Sometimes people who use third-party frameworks that generate proxies over domain entities face difficulties exposing those entities from their services, but that is only a matter of incorrect usage.
The question is - if we strictly use view models, is it ok to return domain models all the way to controllers, or should we always use DTOs for communication with service layer?
I would say it is enough to return domain entities in 99.9% of cases.
To simplify the creation of DTOs and the mapping of your domain entities into them, you can use AutoMapper.
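A minimal example of that (assuming a reasonably recent AutoMapper version; type names are placeholders):

// One-time configuration, typically at application startup
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Customer, CustomerDto>(); // properties with matching names are mapped automatically
});
var mapper = config.CreateMapper();

// In the service method
CustomerDto dto = mapper.Map<CustomerDto>(customer);
List<CustomerDto> dtos = mapper.Map<List<CustomerDto>>(customers);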
If you return part of your domain model, it becomes part of a contract. A contract is hard to change, as things outside of your context depend on it. As such, you would be making part of your domain model hard to change.
A very important aspect of a domain model is that it is easy to change. This makes us flexible to the domain's changing requirements.
I'd suggest analyzing these two questions:
Are your upper layers (i.e. views & view models / controllers) consuming the data in a different shape from what the domain layer exposes? If there is a lot of mapping being done, or even logic involved, I'd suggest revisiting your design: it should probably be closer to how the data is actually used.
How likely is it that you will deeply change your upper layers? (e.g. swapping ASP.NET for WPF). If this is highly unlikely and your architecture is not very complex, you may be better off exposing as many domain entities as you can.
I'm afraid it is quite a broad topic and it really gets down to how complex your system is and its requirements.
In my experience, unless you are using an OO UI pattern (like Naked Objects), exposing the domain objects to the UI is a bad idea. This is because, as the application grows, the needs of the UI change and force your objects to accommodate those changes. You end up serving two masters, UI and DOMAIN, which is a very painful experience. Believe me, you don't want to be there. The UI model's job is to communicate with the user, the DOMAIN model's is to hold the business rules, and the persistence model's is to store data effectively. They all address different needs of the application. I'm in the middle of writing a blog post about this and will add it when it's done.

Onion Architecture should we inject domain models into the presentation layer?

I am trying to implement an Onion Architecture for an ASP.NET MVC 5 project. I have seen opinions that services should be injected rather than instantiated, even though (correct me if I am wrong) the idea expressed by Jeffrey Palermo (http://jeffreypalermo.com/blog/the-onion-architecture-part-3/) was that any outer layer should be able to call any inner layer directly. So my questions are:
Can the onion architecture work without IOC, and if yes, is it ideal?
Let's say we go with IoC: if the UI should not know about the actual implementation of domain services, should we apply the same principle to the domain models themselves, e.g. injecting models into the UI instead of referencing them directly?
I am trying to understand why some solutions apply IoC to domain services but access the domain models directly in the controllers.
OA can be thought of as n-tier architecture + Dependency Injection--so you would be hard-pressed to implement OA without IOC.
Regarding outer layers using any inner layer, I personally disagree with Palermo on this point. I think that outer layers should be constrained to working with the next layer in (to restate: outer layers should not be allowed to bypass a layer). I asked him about this on twitter once and he said that it's probably not a good idea for the data access implementation code to work with the presentation layer (remembering that the implementation code is on the outer rim of his architecture).
I think Palermo makes room for bypassing a layer precisely because he wants to be able to manipulate a Domain Models and Domain Services in the Controller. As far as I understand Domain Driven Design, Domain Services are only created when logic does not neatly fit into a Domain Model. If that is the case, then Domain Services and Domain Models are not really 2 separate layers. Rather, it's better to think of them as a single Business Layer. If they are both the same layer, then the question of whether you can use both in a Controller resolves itself. Then you can say without contradiction that outer layers should be constrained to talking to the next layer in the onion.
First, remember that the Onion Architecture (OA) is agnostic to application design styles, so there is nothing stopping you from injecting your domain into the UI. Second, the linked article points out that IoC is core, so you won't want to try to go without it. I'm also surmising you're talking about your domain entities - those things with data/state in your domain.
OA is about shielding your domain (business rules, entities, etc) from the vagaries of infrastructure changes, not the other way around. All you're buying yourself by using interfaces to get to your domain is extra code and indirection. Ask yourself this - If my domain changes (the core of your business), is it realistic to expect my user interface won't? Put another way, if your business rules change, the users are probably going to want to see that reflected in the UI if it involves adding or removing fields or whole lines of business.
Now ask the inverse of that question - If I change from Oracle to NoSQL, will my Domain code change? Injection of your infrastructure laden services is meant to give you the answer "No". This presumably means easier to maintain and more stable code.
So, in conclusion, unless you need to hide logic on your domain entities, or there is a compelling architectural reason against it (e.g., you're flinging these out of your server to a remote client or you want to add UI-specific attributes to your properties to affect validation or label display), you should go ahead and reference your concrete domain entities directly in your "application/UI" layer.
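In concrete terms (names invented for the example), that simply looks like a controller taking a core-defined interface and handing domain entities straight to the view:

using System.Collections.Generic;
using System.Web.Mvc;

// Defined in the core; the implementation lives in an outer layer and is
// resolved by the IoC container.
public interface IProductCatalog
{
    IEnumerable<Product> GetFeaturedProducts(); // Product is a domain entity
}

public class ProductsController : Controller
{
    private readonly IProductCatalog _catalog;

    public ProductsController(IProductCatalog catalog)
    {
        _catalog = catalog;
    }

    public ActionResult Featured()
    {
        // domain entities referenced directly - no interface or DTO wrapped around the model itself
        return View(_catalog.GetFeaturedProducts());
    }
}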

Service Layer/Repository Pattern

I am building an MVC app using the Service Layer/Repository/Unit of Work pattern with EF4.
I am a bit confused on the logic. I know the point is to decouple the system, but I am a little confused.
So the MVC Controllers call on the Services to fill the View Models. So is it safe to say the MVC App is coupled to the Service Layer?
Then the Service Layer calls on the Repository to get and persist objects. Is it then safe to say the Service Layer is dependent on the Repository?
The Repository then utilizes EF4 to get and persist data to SQL Server, so I would assume the Repository depends on EF4, which in turn depends on SQL Server.
Where does the Unit of Work fit into all this?
Any examples please?
Thanks!!
I started by hiding the Unit of Work somewhere in a lower layer, but that is the wrong way to do it. After some experience, my opinion is:
In the case of a monolithic application, the UnitOfWork should be accessible to the Controller and lower layers.
In the case of a distributed application (UI and BL on different servers), the UnitOfWork should be accessible to the business layer facade (the service layer for remote calls) and lower layers.
The reason is that these layers define the "business transaction", i.e. the current unit of work. Only this layer knows when it wants to commit changes to the data store. Doing it this way allows service composition (code reuse). I discussed similar questions here and here.
Sam,
Julie Lerman did a good screencast on DNR TV talking about this; there is also another screencast on Channel 9 about creating and testing repositories with just EF here.
The general idea in both is to create an abstraction of the Unit of Work - in NHibernate it would be the Session, in EF it would be your context - and pass that session or context into your repositories. As part of your tests you can then fake the connection with a list or dictionary.
Hope these help.
Iain
You are correct in your assumptions on the layering. Your EF Context is the Unit Of Work. Generally you'll abstract this away through an interface and then constructor inject into each Repository for CRUD operations. Another approach is to expose your Repositories on the UoW interface (I prefer the former). Either way allows for easier unit testing of each layer. A single call to Save on the UnitOfWork from within the service layer will then persist all changes across all Repositories.
Here's a nice article on MSDN that looks at UoW from a unit testing perspective but covers repositories also. Where it references Repositories from the MVC Controller you'll have another intermediary service layer.
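A bare-bones sketch of that shape (using the DbContext API; all names here are invented):

using System.Data.Entity;

public interface IUnitOfWork
{
    IDbSet<T> Set<T>() where T : class;
    void Save();
}

// The EF context is the Unit of Work
public class MyContext : DbContext, IUnitOfWork
{
    public new IDbSet<T> Set<T>() where T : class
    {
        return base.Set<T>();
    }

    public void Save()
    {
        SaveChanges();
    }
}

// Repository gets the unit of work constructor-injected
public class OrderRepository
{
    private readonly IUnitOfWork _uow;

    public OrderRepository(IUnitOfWork uow)
    {
        _uow = uow;
    }

    public void Add(Order order)
    {
        _uow.Set<Order>().Add(order);
    }
}

// Service layer composes repositories and commits once
public class OrderService
{
    private readonly OrderRepository _orders;
    private readonly IUnitOfWork _uow;

    public OrderService(OrderRepository orders, IUnitOfWork uow)
    {
        _orders = orders;
        _uow = uow;
    }

    public void PlaceOrder(Order order)
    {
        _orders.Add(order);
        _uow.Save(); // one call persists changes across all repositories sharing this UoW
    }
}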

Object used for communication between the business layer and presentation layer

This is a general question about design. What's the best way to communicate between your business layer and presentation layer? We currently have an object that gets passed into our business layer; the services read the info from the object and set their results back into it. When the services are finished, we have an object populated with results from the business layer, and the UI can display according to the contents of that object.
Is this the best approach? What other approaches are out there?
The Domain Driven Design books (the condensed version, Domain Driven Design Quickly, is freely available here) can give you insights into this.
In a nutshell, they suggest the following approach: model objects move from the model tier to the view tier seamlessly (this can be tricky if you are using statically typed languages or different languages on client/server, but it is trivial in dynamic ones). Also, services should only be used to perform actions that do not belong to the model objects themselves (or when an action involves lots of model objects).
Also, business logic should be put into the model tier (entities, services, value objects), in order to prevent the famous anemic domain model anti-pattern.
This is another approach. Whether it suits you depends a lot on the team, how much code has already been written, how much test coverage you have, how long the project will run, whether your team is agile or not, and so on. Domain Driven Design Quickly discusses this even further, and any decision will be far less risky if you at least skim it first (getting the original book by Eric Evans will help if you choose to delve further).
We use a listener pattern, and have events in the business layer send information to the presentation layer.
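For example (a rough sketch, hypothetical names), the business layer raises a plain .NET event and the presentation layer subscribes to it:

using System;

// Business layer
public class OrderProcessedEventArgs : EventArgs
{
    public string Result { get; private set; }

    public OrderProcessedEventArgs(string result)
    {
        Result = result;
    }
}

public class OrderProcessor
{
    public event EventHandler<OrderProcessedEventArgs> OrderProcessed;

    public void Process()
    {
        // ... business logic ...
        var handler = OrderProcessed;
        if (handler != null)
            handler(this, new OrderProcessedEventArgs("order accepted"));
    }
}

// Presentation layer: subscribe and update the UI when notified
// processor.OrderProcessed += (sender, e) => DisplayResult(e.Result);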
It depends on your architecture.
Some people structure their code all in the same exe or dll and follow a standard n-tier architecture.
Others might split it out so that their services are all web services instead of just standard classes. The benefit of this is reusable business logic installed in one place within your physical infrastructure, so a single change applies across all applications.
Software as a service and cloud computing are becoming the platforms things are moving towards. Amazon's Elastic Compute Cloud, Microsoft's Azure and other cloud providers all offer numerous services which may affect your architecture decisions.
One I'm about to use is
Silverlight UI
WCF Services - business logic here
NHibernate data access
Sql Server Database
We're only going to allow the layers of the application to talk via interfaces so that we can move up to Azure cloud services once it becomes more mature.
