I am exploring the idea of implementing a web service API using WCF Data Services and EF4. Realizing that some operations require complex business logic, I decided to create a partial class with the same name as the main EF data context class and implement additional methods there to handle the more complex business logic. When the EF context object is used directly, the additional method shows up (via IntelliSense) and works properly. When the EF classes are exposed through a WCF Data Service and a Service Reference is created and consumed in another project, the new method does not show up in IntelliSense or in the generated Service.cs file (of course, I updated the reference and even deleted it and re-added it). The native data methods (i.e. context.AddObject() and context.AddToPeople()) work properly, but the new method isn't even available.
My EF classes look something like this:
namespace PeopleModel
{
    // EF-generated class
    public partial class PeopleEntities : ObjectContext
    {
        // Constructors here
        // Partial methods here
        // etc.
    }

    // Entity classes here

    // My added partial class
    public partial class PeopleEntities
    {
        public void AddPerson(Person person)
        {
            base.AddObject("People", person);
        }
    }
}
There's nothing special about the .svc file. The Reference.cs file containing the auto-generated proxy classes does not have the new "AddPerson()" method.
My questions are:
1. Any idea why the web service doesn't see the added partial class, but when directly using the EF objects the method is there and works properly?
2. Is using a partial class with additional methods a good solution to the problem of handling complex business rules with an EF generated model?
I like the idea of letting the OData framework provide a querying mechanism on the exposed data objects, and the fact that you can have a RESTful web service with some of the benefits of SOAP.
Service operations are only recognized if they are defined on the class that derives from DataService&lt;T&gt;; the WCF Data Service will not look in the context class for them. Also note that methods are not exposed by default: you need to attribute them with either WebGet or WebInvoke and allow access to them in your InitializeService implementation.
http://msdn.microsoft.com/en-us/library/cc668788.aspx
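A minimal sketch of what that looks like (not your exact service): the operation lives on the DataService&lt;T&gt; class and can delegate to your partial-class method. The Person property names below are assumptions for illustration.

using System.Data.Services;
using System.Data.Services.Common;
using System.ServiceModel.Web;
using PeopleModel;

public class PeopleDataService : DataService<PeopleEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("People", EntitySetRights.All);
        // Without this rule the operation is not reachable, even if attributed.
        config.SetServiceOperationAccessRule("AddPerson", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }

    // Service operations take primitive parameters; complex logic can still
    // delegate to the partial-class method on the context.
    [WebInvoke(Method = "POST")]
    public void AddPerson(string firstName, string lastName)
    {
        var person = new Person { FirstName = firstName, LastName = lastName };
        CurrentDataSource.AddPerson(person);
        CurrentDataSource.SaveChanges();
    }
}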
Related
I am implementing a form of CQRS that uses a single data store but separate Query and Command models. For the command side of things I am implementing DDD including Repositories, IoC and Dependency Injection. For the Query side I am using the Finder pattern as described here. Basically, a finder is similar to a Repository, but with Find methods only.
So on the read side of my application, in my DAL, I use ADO.NET and raw SQL for my queries. The ADO.NET work is all abstracted away into a helper class, so my Finder classes simply pass a query to the ADO helper, which returns generic data objects that the finder/mapper class turns into read models.
Currently the Finder methods, like my command repositories, are accessed through interfaces that are injected into my controllers, but I am wondering if the interfaces, DI and IoC are overkill for the query side, as everything I have read about the read side of CQRS recommends a "thin data layer".
Why not just access my Finders directly? I understand the arguments for interfaces and DI, i.e. separation of concerns and testability. In the case of SoC, my DAL already separates out database-specific logic by using a mapper class and putting the ADO.NET work in a helper class. As far as testing is concerned, according to this question unit testing read models is not a necessity.
So in summary, for read models, can I just do this:
public class PersonController : Controller
{
    public ActionResult Details(int id)
    {
        var person = PersonFinder.GetByID(id);
        // TODO: Map person to viewmodel
        return this.View(viewmodel);
    }
}
Instead of this:
public class PersonController : Controller
{
    private IPersonFinder _person;

    public PersonController(IPersonFinder person)
    {
        _person = person;
    }

    public ActionResult Details(int id)
    {
        Person person = _person.GetByID(id);
        // TODO: Map person to viewmodel
        return this.View(viewmodel);
    }
}
Are you using both IoC and DI? That's bad ass! Anyway, the second version is the better one because it doesn't depend on a static class. Using statics opens Pandora's box; don't do it, for all the usual reasons that statics are bad.
You really don't get any benefit from using a static class, and once you are already using a DI container there's no additional cost. You are still using the Finders directly; you just let the DI container instantiate one instead of calling a static object.
Update
A thin read layer refers to using a simplified read model instead of the rich domain objects. It is unrelated to DI: it doesn't matter how the query service is built or by whom; what matters is not involving the business objects in queries.
Read/Write separation is completely unrelated to coding techniques like dependency injection. Your read models are serving fewer purposes than your combined read/write models were before. Could you consider ditching all the server-side code and just using your database's native REST API? Could you wire your controller to directly query the database with SQL and return the data as JSON? Do you need a generic repository-like pattern to deal with specific read requests?
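To make that concrete, here is a minimal sketch of what a thin read side can look like: the controller queries the database directly with ADO.NET and returns JSON, with no finder interface or container registration in between. The connection string name, table, and column names are assumptions for illustration.

using System.Configuration;
using System.Data.SqlClient;
using System.Web.Mvc;

public class PersonQueryController : Controller
{
    public ActionResult Details(int id)
    {
        var connectionString =
            ConfigurationManager.ConnectionStrings["ReadDb"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, FirstName, LastName FROM People WHERE Id = @Id", connection))
        {
            command.Parameters.AddWithValue("@Id", id);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                    return HttpNotFound();

                // Shape the row directly into a read model for the client.
                var readModel = new
                {
                    Id = reader.GetInt32(0),
                    FirstName = reader.GetString(1),
                    LastName = reader.GetString(2)
                };
                return Json(readModel, JsonRequestBehavior.AllowGet);
            }
        }
    }
}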
I am not sure if this is the correct way to go, if so, please advise otherwise.
This is an ASP.NET MVC 4 site using EF 5.x.
Suppose you have your Entity Framework model in a class library on its own.
A Code Generation Item has now generated all of your models (the xxx.tt section of your EF model).
This project is then added/referenced in the development of a site.
You can now access the data via the EF.
Now - in the site project I want to create a partial class of one of my EF models, for example "Users", with an additional property that isn't in the DB.
In the past, on a Web Forms project where the EF model was part of the project rather than a reference, I would simply create the partial class and all would be good; my "Users" class would then have a bunch of other stuff in it that wasn't database-related but was needed on the "User".
I can't seem to get this to work in this MVC project where the EF is in a separate project.
I have tried doing this for example:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using MyTestEntity.Entity;

namespace MyTestMVCSite.Models
{
    public partial class Email
    {
        public string OtherEmail
        {
            get { return "me@myEmail.com"; }
        }
    }
}
I have also tried inheriting the EF models class, like this:
public partial class Email : MyTestEntity.Entity.Email
{
    public string OtherEmail
    {
        get { return "me@myEmail.com"; }
    }
}
Nothing I do seems to give me access to "OtherEmail".
What I actually want is to create a partial class for some of my models and have that partial class implement an interface, so I can inject an instance of the interface into another object rather than overloading.
Am I talking crazy nonsense?
You cannot have two partial classes referring to the same class in two different assemblies (projects). Once an assembly is compiled, the metadata is baked in and the class is no longer partial. Partial classes only let you split the definition of the same class across files within a single assembly.
Is it possible to have two partial classes in different assemblies represent the same class?
If you wish to augment your models with additional properties used for display purposes, then you should consider using view models, and a mechanism for mapping data between your models and view models.
You can then perform validation independently from the model based on the current view. View models will also protect you from accidentally exposing properties on your model that you do not wish users to alter through post data, even if you haven't explicitly specified them in your view.
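A minimal sketch of that approach, assuming the generated Email entity exposes Id and Address properties (those names, and the view model shape, are illustrative):

using MyTestEntity.Entity;

namespace MyTestMVCSite.Models
{
    // Lives in the MVC project; not a partial of the EF class.
    public class EmailViewModel
    {
        public int Id { get; set; }
        public string Address { get; set; }

        // Display-only value with no column in the database.
        public string OtherEmail { get; set; }
    }

    public static class EmailMappings
    {
        public static EmailViewModel ToViewModel(this Email email)
        {
            return new EmailViewModel
            {
                Id = email.Id,
                Address = email.Address,
                OtherEmail = "me@myEmail.com"
            };
        }
    }
}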
In the ServiceStack examples I don't see a single application that is an ASP.NET MVC website first and a ServiceStack service second.
Let's take a very simple ASP.NET MVC web application that renders products through Views. It uses controllers, views, models and viewmodels.
Let's say we have a model of Product which gets persisted into a document DB. Let's assume we have a viewmodel of ProductViewModel which gets mapped from Product and display within MVC Razor View/PartialView.
So that is the web side of things. Now let's assume we want to add a service returning products to various clients, like Windows 8 applications.
Should the request/response classes be completely disconnected from what we already have? Our ProductViewModel might already contain everything we want to return from the service.
Since we already have a Product model class, we can't have another Product class in the API namespace. Well, we could, but that makes things unclear and I'd like to avoid it.
So, should we introduce standalone ProductRequest class and ProductRequestResponse (inherits ProductViewModel) class in the API namespace?
Like so ProductRequestResponse : ProductViewModel?
What I'm saying is, we already have the Model and ViewModel classes, and to construct Request and Response classes for the SS service we would have to create another two files, mostly by copying everything from the classes we already have. This doesn't look DRY to me; it might follow separation-of-concerns guidelines, but DRY is important too, actually more so than separating everything (separating everything leads to duplication of code).
What I would like to see is a case where a web application has already been made, it currently features Models and ViewModels and returns the appropriate Views for display on the Web but can be extended into a fully functional service to support programmatic clients? Like AJAX clients etc...with what we already have.
Another thing:
If you take a look at this example https://github.com/ServiceStack/ServiceStack.Examples/blob/master/src/ServiceStack.MovieRest/MovieService.cs
you will see there is a Movie request class and a Movies request class (one for a single-movie request, the other for a list of movies). As such, there are also two services, MovieService and MoviesService: one dealing with requests for a single movie, the other with requests for a genre of movies.
Now, while I like SS approach to services and I think it is the right one, I don't like this sort of separation merely because of the type of request. What if I wanted movies by director? Would I be inventing yet another request class having a Director property and yet another service (MoviesByDirector) for it?
I think the samples should be oriented towards one service. Everything that has to deal with movies needs to be under one roof. How does one achieve that with ServiceStack?
public class ProductsService : Service
{
    private readonly IDocumentSession _session;
    private readonly ProductsHelperService _productsHelperService;
    private readonly ProductCategorizationHelperService _productCategorization;

    public class ProductRequest : IReturn<ProductRequestResponse>
    {
        public int Id { get; set; }
    }

    // Does this make sense?
    // Please note, we use ProductViewModel in our Views and it holds everything
    // we'd want in the service response as well.
    public class ProductRequestResponse : ProductViewModel
    {
    }

    public ProductRequestResponse GetProducts(ProductRequest request)
    {
        ProductRequestResponse response = null;
        if (request.Id >= 0)
        {
            var product = _session.Load<Product>(request.Id);
            response = new ProductRequestResponse();
            response.InjectFrom(product);
        }
        return response;
    }
}
The Service Layer is your most important Contract
The most important interface you can ever create in your entire system is your external-facing service contract. This is what consumers of your service or application bind to, i.e. the existing call sites that often won't get updated along with your code base; every other model is secondary.
DTOs are a best practice for remote services
Following Martin Fowler's recommendation to use DTOs (Data Transfer Objects) for remote services (MSDN), ServiceStack encourages the use of clean, untainted POCOs to define a well-defined contract that should be kept in a largely implementation- and dependency-free .dll. The benefit is that you can re-use the typed DTOs you defined your services with, as-is, in your C#/.NET clients, providing an end-to-end typed API without any code-gen or other artificial machinery.
DRY vs Intent
Keeping things DRY should not be confused with clearly stating intent, which you should not try to DRY up or hide behind inheritance, magic properties or any other mechanism. Having clean, well-defined DTOs provides a single source of reference that anyone can look at to see what each service accepts and returns; it allows your client and server developers to start their work straight away and bind to the external service models before the implementation has been written.
Keeping the DTOs separated also gives you the freedom to re-factor the implementation from within without breaking external clients, e.g. when your service starts to cache responses or leverages a NoSQL solution to populate them.
It also provides the authoritative source (one that's not leaked into or coupled with your app logic) used to create the auto-generated metadata pages, example responses, Swagger support, XSDs, WSDLs, etc.
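For example, because the same DTOs compile into the client as-is, a .NET consumer gets a typed end-to-end call with nothing generated (the base URL below is an assumption):

// ProductRequest/ProductRequestResponse are the same classes the service was
// defined with, referenced from the shared DTO assembly.
var client = new JsonServiceClient("http://localhost:1337/");
ProductRequestResponse response = client.Get(new ProductRequest { Id = 1 });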
Using ServiceStack's Built-in auto-mapping
Whilst we encourage keeping separate DTO models, you don't need to maintain your own manual mapping, as you can use a mapper like AutoMapper or ServiceStack's built-in auto-mapping support, e.g.:
Create a new DTO instance, populated with matching properties on viewModel:
var dto = viewModel.ConvertTo<MyDto>();
Initialize DTO and populate it with matching properties on a view model:
var dto = new MyDto { A = 1, B = 2 }.PopulateWith(viewModel);
Initialize DTO and populate it with non-default matching properties on a view model:
var dto = new MyDto { A = 1, B = 2 }.PopulateWithNonDefaultValues(viewModel);
Initialize DTO and populate it with matching properties that are annotated with the Attr Attribute on a view model:
var dto = new MyDto { A=1 }.PopulateFromPropertiesWithAttribute<Attr>(viewModel);
When mapping logic becomes more complicated we like to use extension methods to keep code DRY and maintain the mapping in one place that's easily consumable from within your application, e.g:
public static class MappingExtensions
{
    public static MyDto ToDto(this MyViewModel viewModel)
    {
        var dto = viewModel.ConvertTo<MyDto>();
        dto.Items = viewModel.Items.ConvertAll(x => x.ToDto());
        dto.CalculatedProperty = Calculate(viewModel.Seed);
        return dto;
    }
}
Which is now easily consumable with just:
var dto = viewModel.ToDto();
If you are not tied specifically to ServiceStack and just want "fully functional service to support programmatic clients ... with what we already have", you could try the following: Have your controllers return either a ViewResult or a JsonResult based on the request's accept header - Request.AcceptTypes.Contains("text/html") or Request.AcceptTypes.Contains("application/json").
Both ViewResult and JsonResult derive from ActionResult, so the signatures of your actions remain the same, and both View() and Json() accept a view model. Furthermore, if you have a ControllerBase you can add a base method (for example protected ActionResult RespondWith(Object viewModel)) which calls either View() or Json(), so the change to existing code is minimal.
Of course, if your ViewModels are not pure (i.e. have some html-specific stuff or you rely on some ViewBag magic) then it's a little more work. And you won't get SOAP or other binding types provided by ServiceStack, but if your goal is to support a JSON data interface with minimal code changes to the existing MVC app then this could be a solution.
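A minimal sketch of that base-method idea (the derived controller, and the ProductViewModel Id property, are illustrative):

using System.Linq;
using System.Web.Mvc;

public abstract class ControllerBase : Controller
{
    // Returns JSON when the client asks for it, otherwise the normal view.
    protected ActionResult RespondWith(object viewModel)
    {
        if (Request.AcceptTypes != null &&
            Request.AcceptTypes.Contains("application/json"))
        {
            return Json(viewModel, JsonRequestBehavior.AllowGet);
        }
        return View(viewModel);
    }
}

public class ProductController : ControllerBase
{
    public ActionResult Details(int id)
    {
        var viewModel = new ProductViewModel { Id = id };
        // Browsers get the Razor view; AJAX/Win8 clients get JSON.
        return RespondWith(viewModel);
    }
}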
I'm trying to be a better developer...
What I'm working with:
.Net MVC Framework 1.0
Entity Framework 3.5
I've been doing some reading and I think what I want to do is:
Create a repository for each aggregate in the domain. An Order repository for example will manage an Order's OrderItems.
Create a service layer to handle business logic. Each repository will have a corresponding service object with similar methods.
Create DTOs to pass between the repository and service
Possibly create ViewModels which are classes for the View to consume.
I have a base repository interface which my aggregate repository interfaces will implement...
public interface IRepository<T>
{
    IEnumerable<T> ListAll();
    T GetById(int id);
    bool Add(T entity);
    bool Remove(T entity);
}
My Order Repository interface is defined as follows...there will likely be additional methods as I get more into this learning exercise.
public interface IOrderRepository : IRepository<Order>
{
}
My service classes are essentially defined the same as the repositories except that each service implementation includes the business logic. The services will take a repository interface in the constructor (I'm not ready for IoC in this exercise but believe that is where I'd like to end up down the road).
The repository implementations will push and pull data from the database using Entity Framework. When retrieving data, the methods will return only DTOs, not the EF-generated objects.
The services (as I'm calling them) will control the repository and perform the business logic. The services are what you will see in the controller, i.e. _orderService.GetById(1).
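For example, a service wired this way might look like the following sketch (IOrderService and the business rule are illustrative; IOrderRepository and Order's OrderItems come from the design above):

public class OrderService : IOrderService
{
    private readonly IOrderRepository _orderRepository;

    // The repository is handed in through the constructor, so an IoC container
    // (or a test) can supply it later without changing the service.
    public OrderService(IOrderRepository orderRepository)
    {
        _orderRepository = orderRepository;
    }

    public Order GetById(int id)
    {
        return _orderRepository.GetById(id);
    }

    public bool Add(Order order)
    {
        // Business rules live here, not in the repository.
        if (order.OrderItems == null || order.OrderItems.Count == 0)
            return false;

        return _orderRepository.Add(order);
    }
}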
This is where I started flip-flopping and could use some feedback: should my service classes populate ViewModel classes? Should I not have ViewModel classes at all? Maybe that is too much mapping from one type to another?
I would love to get some feedback on the direction I am heading with regards to a separation of concerns.
Thanks
I think you are heading in the right direction with the Repository pattern. Regarding your question about the ViewModel classes, I suggest you use something that transforms the output of the business service methods into the desired output for each screen. For example, your Order business service may have a method called GetOrders(). Using a custom attribute you could define the view class type for it. The view takes the output of this method, possibly joins it with other kinds of data, and returns the result as a collection of objects with anonymous types. In this case the view takes IQueryable<Order> or IEnumerable<Order> as input and returns an IList as output.
This approach helps greatly when you need to show different views of your data on the client side. We have already used something similar (but more complex) in our company's framework.
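A simplified sketch of the idea, with illustrative property names:

using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Takes the service output and shapes it for one particular screen.
public class OrderSummaryView
{
    public IList Transform(IEnumerable<Order> orders)
    {
        return orders
            .Select(o => new
            {
                o.Id,
                o.OrderDate,
                ItemCount = o.OrderItems.Count
            })
            .ToList();
    }
}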
I'm trying to set up NHibernate in an ASP.NET MVC application using a DDD approach. However, I get an error when trying to lazy-load an object's related entity. Here's how I've structured my application:
Infrastructure layer:
Contains the mapping files, the repository implementations and an NHibernate bootstrapper to configure and build a session factory.
Here's a repository example:
public class CustomerRepository : ICustomerRepository
{
    public Customer GetCustomerById(int customerId)
    {
        using (var session = NHibernateBootstrapper.OpenSession())
            return session.Get<Customer>(customerId);
    }
}
Domain layer:
Has simple POCO classes, repository and service interfaces
Application layer:
Contains Service implementations.
Here's a service example:
public class CustomerService : ICustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public Customer GetCustomerById(int customerId)
    {
        return _repository.GetCustomerById(customerId);
    }
}
Presentation layer:
Contains the ASP.NET MVC application. And this is where I discovered my problem.
Using the MVC approach, I have a controller which, using the CustomerService service, gets a customer and displays it in a strongly typed View. This customer has a related entity, Contact, and when I try to access it in my View using Model.Contact (where Model is my Customer object) I get a LazyInitializationException.
I know why I get this: the session used to retrieve the Customer in the CustomerRepository is dead by then. My problem is how to fix it. I would like to avoid fetching the related Contact entity for the Customer in my repository, because some views only need the Customer data, not the Contact data, if that is possible at all.
So to the question: is it possible to delay querying the database until the presentation layer needs the related Contact entity?
I think what I need is something like what this article describes. I just can't figure out how to implement it in the infrastructure layer, or where it should be implemented.
Thanks in advance. Any help will be much appreciated!
As for session management, it is common to use a single session per request. You can see an example implementation here. It is an open-source project designed to make setting up new ASP.NET applications with NHibernate very easy. The source code can be found here.
Hope it helps.
I also recommend Sharp Architecture.
Another approach, and a suggestion, is to avoid passing entities to views. There are other problems with that besides session management: business rules leaking into views, bloated/spaghetti code in there, etc. Use the ViewModel approach.
Another problem you'll run into is storing your entities in Session: once you try to get your Customer from Session["customer"] you'll get the same exception. There are several solutions to this, for example storing IDs instead, or adding repository methods that prevent lazy loading of the objects you're going to store in session (read up on NHibernate's SetFetchMode), which, of course, you can also use for entities passed to views. But as I said, you're better off sticking with the ViewModel approach. Google for ViewModel, or refer to the ASP.NET MVC in Action book, which uses code samples from http://code.google.com/p/codecampserver/. Also read this, for example.
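For example, a repository method that eagerly fetches the Contact for exactly those cases could look roughly like this sketch (the method name is illustrative; it assumes it is added to the CustomerRepository shown in the question):

using NHibernate;
using NHibernate.Criterion;

// Eagerly joins the Contact so the view (or session storage) never triggers lazy loading.
public Customer GetCustomerWithContact(int customerId)
{
    using (var session = NHibernateBootstrapper.OpenSession())
    {
        return session.CreateCriteria<Customer>()
            .Add(Restrictions.IdEq(customerId))
            .SetFetchMode("Contact", FetchMode.Eager)
            .UniqueResult<Customer>();
    }
}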
Are all your properties and methods in your Customer class marked virtual?
How are you opening and closing your session? I use an ActionFilterAttribute called TransactionPerRequest and decorate all my controllers with it.
Check out this for an implementation.
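A rough sketch of such a filter, assuming the NHibernateBootstrapper from the question hands out the session and that your repositories resolve the same request-scoped session (e.g. from HttpContext.Items):

using System.Web.Mvc;
using NHibernate;

// Opens a session and transaction before each action and cleans up after the
// result (the view) has rendered, so lazy loading still works while rendering.
public class TransactionPerRequest : ActionFilterAttribute
{
    private const string SessionKey = "nh-session";

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var session = NHibernateBootstrapper.OpenSession();
        session.BeginTransaction();
        filterContext.HttpContext.Items[SessionKey] = session;
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        var session = (ISession)filterContext.HttpContext.Items[SessionKey];
        if (session == null)
            return;

        try
        {
            if (filterContext.Exception == null)
                session.Transaction.Commit();
            else
                session.Transaction.Rollback();
        }
        finally
        {
            session.Dispose();
            filterContext.HttpContext.Items.Remove(SessionKey);
        }
    }
}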