NHibernate: "failed to lazily initialize...", DDD approach - asp.net-mvc

I'm trying to set up NHibernate in an ASP.NET MVC application using a DDD approach. However, I get an error when trying to lazily load an object's related entity. Here's how I've structured my application:
Infrastructure layer:
Contains mapping files, repository implementations and an NHibernate bootstrapper to configure and build a session factory.
Here's a repository example:
public class CustomerRepository : ICustomerRepository
{
    public Customer GetCustomerById(int customerId)
    {
        using (var session = NHibernateBootstrapper.OpenSession())
            return session.Get<Customer>(customerId);
    }
}
Domain layer:
Has simple POCO classes, repository and service interfaces
Application layer:
Contains Service implementations.
Here's a service example:
public class CustomerService : ICustomerService
{
    private ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public Customer GetCustomerById(int customerId)
    {
        return _repository.GetCustomerById(customerId);
    }
}
Presentation layer:
Contains the ASP.NET MVC application. And this is where I discovered my problem.
Using the MVC approach, I have a controller which, using the CustomerService service, gets a customer and displays it in a View (strongly typed). This customer has a related entity Contact, and when I try to access it in my View using Model.Contact, where Model is my Customer object, I get a LazyInitializationException.
I know why I get this: the session used to retrieve the Customer in the CustomerRepository is closed by then. My problem is how to fix it. I would prefer not to fetch the related Contact entity for the Customer in my repository, because some views only need the Customer data, not the Contact data, if that is possible at all.
So to the question: is it possible to defer querying the database until the presentation layer actually needs the related Contact entity?
I think what I need is something like what this article describes. I just can't figure out how to implement it in the infrastructure layer, or where it should be implemented.
Thanks in advance. Any help will be much appreciated!

As for session management, it is common to use a single session per request. You can see an example implementation here. It is an open source project designed to make setting up new ASP.NET applications with NHibernate very easy. The source code can be found here.
Hope it helps.

I also recommend Sharp Architecture.
Another approach, and a suggestion, is to avoid passing entities to views. There are other problems with it besides session management: business rules leaking into views, bloated/spaghetti code in there, etc. Use the ViewModel approach.
Another problem you'll hit is storing your entities in Session. Once you try to get your Customer from Session["customer"] you'll get the same exception. There are several solutions to this, for example storing IDs, or adding repository methods that prevent lazy loading of the objects you're going to store in session (read up on NHibernate's SetFetchMode), which, of course, you can also use for entities passed to views. But as I said, you'd better stick with the ViewModel approach. Google for ViewModel, or refer to the ASP.NET MVC in Action book, which uses code samples from http://code.google.com/p/codecampserver/. Also read this, for example.
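If you do end up passing an entity to a view and want its Contact loaded up front, a criteria query with SetFetchMode is one way to do it. A minimal sketch, assuming a repository method added for that purpose (the method name and the "Contact" mapping name are illustrative, not from the question):

// Requires: using NHibernate; using NHibernate.Criterion;
public Customer GetCustomerWithContactById(int customerId)
{
    using (var session = NHibernateBootstrapper.OpenSession())
    {
        // Join-fetch the Contact so it is initialized before the session closes.
        return session.CreateCriteria<Customer>()
            .Add(Restrictions.IdEq(customerId))
            .SetFetchMode("Contact", FetchMode.Join)
            .UniqueResult<Customer>();
    }
}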

Are all your properties and methods in your Customer class marked virtual?
How are you opening and closing your session? I use an ActionFilterAttribute called TransactionPerRequest and decorate all my controllers with it.
Check out this for an implementation.
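For reference, a minimal sketch of such a filter, assuming a session-per-request setup where the current ISession can be obtained from the bootstrapper (GetCurrentSession is an assumed helper here, and the linked implementation may differ):

// Requires: using System.Web.Mvc; using NHibernate;
public class TransactionPerRequestAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Begin a transaction before the action runs.
        var session = NHibernateBootstrapper.GetCurrentSession();
        session.BeginTransaction();
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Commit on success, roll back if the action threw.
        var session = NHibernateBootstrapper.GetCurrentSession();
        var tx = session.Transaction;
        if (tx == null || !tx.IsActive)
            return;

        if (filterContext.Exception == null)
            tx.Commit();
        else
            tx.Rollback();
    }
}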

Related

CQRS - Are Interfaces & Dependency Injection Necessary For Read Model?

I am implementing a form of CQRS that uses a single data store but separate Query and Command models. For the command side of things I am implementing DDD including Repositories, IoC and Dependency Injection. For the Query side I am using the Finder pattern as described here. Basically, a finder is similar to a Repository, but with Find methods only.
So in my application, for the read side, my DAL uses ADO.NET and raw SQL for queries. The ADO.NET plumbing is abstracted away into a helper class, so my Finder classes simply pass the query to the ADO helper, which returns generic data objects that the finder/mapper class turns into read models.
Currently the Finder methods, like my command repositories, are accessed through interfaces that are injected into my controllers, but I am wondering if the interfaces, DI and IoC are overkill for the query side, as everything I have read about the read side of CQRS recommends a "thin data layer".
Why not just access my Finders directly? I understand the arguments for interfaces and DI, i.e. separation of concerns and testability. In the case of SoC, my DAL has already separated out database-specific logic by using a mapper class and putting the ADO.NET code in a helper class. As far as testing is concerned, according to this question unit testing read models is not a necessity.
So in summary, for read models, can I just do this:
public class PersonController : Controller
{
    public ActionResult Details(int id)
    {
        var person = PersonFinder.GetByID(id);
        // TODO: Map person to viewmodel
        return this.View(viewmodel);
    }
}
Instead of this:
public class PersonController : Controller
{
    private IPersonFinder _person;

    public PersonController(IPersonFinder person)
    {
        _person = person;
    }

    public ActionResult Details(int id)
    {
        Person person = _person.GetByID(id);
        // TODO: Map person to viewmodel
        return this.View(viewmodel);
    }
}
Are you using both IoC and DI? That's bad ass! Anyways, the second version is the better one because it doesn't depend on a static class. Using statics opens Pandora's box; don't do it, for all the reasons that statics are bad.
You really don't get any benefit from using a static class, and since you are already using a DI container, there's no additional cost. You are still using the Finders directly; you just let the DI container instantiate one instead of calling a static object.
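As a hedged illustration, the wiring for the second version is a one-time registration in whatever container you use; with Ninject (which appears elsewhere in this thread) it would be something like:

// Hypothetical registration: the container hands PersonController an IPersonFinder,
// so no static PersonFinder class is needed anywhere.
kernel.Bind<IPersonFinder>().To<PersonFinder>().InRequestScope();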
Update
A thin read layer refers to using a simplified read model instead of the rich domain objects. It is unrelated to DI; it doesn't matter how the query service is built or by whom, only that the business objects are not involved in queries.
Read/Write separation is completely unrelated to coding techniques like dependency injection. Your read models are serving fewer purposes than your combined read/write models were before. Could you consider ditching all the server-side code and just using your database's native REST API? Could you wire your controller to directly query the database with SQL and return the data as JSON? Do you need a generic repository-like pattern to deal with specific read requests?
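To make that concrete, here is a rough sketch of how thin the read side can get; the connection string name and the dbo.Person columns are made up for illustration, and the point is that no domain objects appear anywhere on the read path:

using System.Configuration;
using System.Data.SqlClient;
using System.Web.Mvc;

public class PersonQueryController : Controller
{
    public ActionResult Details(int id)
    {
        var cs = ConfigurationManager.ConnectionStrings["ReadDb"].ConnectionString;

        using (var connection = new SqlConnection(cs))
        using (var command = new SqlCommand(
            "SELECT Id, FirstName, LastName FROM dbo.Person WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                    return HttpNotFound();

                // Straight from the reader to JSON; no read model class required.
                return Json(new
                {
                    Id = reader.GetInt32(0),
                    FirstName = reader.GetString(1),
                    LastName = reader.GetString(2)
                }, JsonRequestBehavior.AllowGet);
            }
        }
    }
}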

ServiceStack new service side by side with an ASP.NET MVC website

In the examples for ServiceStack I don't see a single application that is an ASP.NET MVC website first and a ServiceStack service second.
Let's take a very simple ASP.NET MVC web application that renders products through Views. It uses controllers, views, models and viewmodels.
Let's say we have a model Product which gets persisted into a document DB. Let's assume we have a viewmodel ProductViewModel which gets mapped from Product and displayed within an MVC Razor View/PartialView.
So this is the web side of things. Now let's assume we want to add a service returning products to various clients, like Windows 8 applications.
Should the request/response classes be completely disconnected from what we already have? Our ProductViewModel might already contain everything we want to return from the service.
Since we already have Product (the model class) we can't have another Product class in the API namespace. Well, we could, but that makes things unclear and I'd like to avoid it.
So, should we introduce a standalone ProductRequest class and a ProductRequestResponse class (inheriting ProductViewModel) in the API namespace?
Like so: ProductRequestResponse : ProductViewModel?
What I'm saying is, we already have the Model and ViewModel classes, and to construct Request and Response classes for the ServiceStack service we would have to create another two files, mostly by copying everything from the classes we already have. This doesn't look DRY to me; it might follow separation-of-concerns guidelines, but DRY is important too, actually more than separating everything (separating everything leads to duplicated code).
What I would like to see is a case where a web application has already been made, currently featuring Models and ViewModels and returning the appropriate Views for display on the web, but which can be extended into a fully functional service to support programmatic clients (AJAX clients, etc.) with what we already have.
Another thing:
If you take a look at this example https://github.com/ServiceStack/ServiceStack.Examples/blob/master/src/ServiceStack.MovieRest/MovieService.cs
you will see there is a Movie request class and a Movies request class (one for a single-movie request, the other for a list of movies). As such, there are also two services, MovieService and MoviesService, one dealing with requests for a single movie, the other with requests for a genre of movies.
Now, while I like the ServiceStack approach to services and I think it is the right one, I don't like this sort of separation based merely on the type of request. What if I wanted movies by director? Would I be inventing yet another request class with a Director property and yet another service (MoviesByDirector) for it?
I think the samples should be oriented towards one service. Everything that has to deal with movies needs to be under one roof. How does one achieve that with ServiceStack?
public class ProductsService : Service
{
    private readonly IDocumentSession _session;
    private readonly ProductsHelperService _productsHelperService;
    private readonly ProductCategorizationHelperService _productCategorization;

    public class ProductRequest : IReturn<ProductRequestResponse>
    {
        public int Id { get; set; }
    }

    // Does this make sense?
    // Please note, we use ProductViewModel in our Views and it holds everything we'd want in the service response also
    public class ProductRequestResponse : ProductViewModel
    {
    }

    public ProductRequestResponse GetProducts(ProductRequest request)
    {
        ProductRequestResponse response = null;
        if (request.Id >= 0)
        {
            var product = _session.Load<Product>(request.Id);
            response = new ProductRequestResponse();
            response.InjectFrom(product);
        }
        return response;
    }
}
The Service Layer is your most important Contract
The most important interface you can ever create in your entire system is your external-facing service contract. This is what consumers of your service or application bind to, i.e. the existing call sites that often won't get updated along with your code-base; every other model is secondary.
DTOs are Best practices for remote services
Following Martin Fowler's recommendation to use DTOs (Data Transfer Objects) for remote services (MSDN), ServiceStack encourages the use of clean, untainted POCOs to define a well-defined contract, which should be kept in a largely implementation- and dependency-free .dll. The benefit is that you can re-use the typed DTOs that define your services, as-is, in your C#/.NET clients, providing an end-to-end typed API without any code-gen or other artificial machinery.
DRY vs Intent
Keeping things DRY should not be confused with clearly stating intent, which you should not try to DRY up or hide behind inheritance, magic properties or any other mechanism. Having clean, well-defined DTOs provides a single source of reference that anyone can look at to see what each service accepts and returns; it allows your client and server developers to start their work straight away and bind to the external service models before the implementation has been written.
Keeping the DTOs separate also gives you the freedom to refactor the implementation from within without breaking external clients, e.g. when your service starts caching responses or leveraging a NoSQL solution to populate them.
It also provides the authoritative source (not leaked into or coupled with your app logic) used to create the auto-generated metadata pages, example responses, Swagger support, XSDs, WSDLs, etc.
Using ServiceStack's Built-in auto-mapping
Whilst we encourage keeping separate DTO models, you don't need to maintain your own manual mapping: you can use a mapper like AutoMapper or ServiceStack's built-in auto-mapping support, e.g.:
Create a new DTO instance, populated with matching properties on viewModel:
var dto = viewModel.ConvertTo<MyDto>();
Initialize DTO and populate it with matching properties on a view model:
var dto = new MyDto { A = 1, B = 2 }.PopulateWith(viewModel);
Initialize DTO and populate it with non-default matching properties on a view model:
var dto = new MyDto { A = 1, B = 2 }.PopulateWithNonDefaultValues(viewModel);
Initialize DTO and populate it with matching properties that are annotated with the Attr Attribute on a view model:
var dto = new MyDto { A=1 }.PopulateFromPropertiesWithAttribute<Attr>(viewModel);
When mapping logic becomes more complicated we like to use extension methods to keep code DRY and maintain the mapping in one place that's easily consumable from within your application, e.g.:
public static class MappingExtensions
{
    public static MyDto ToDto(this MyViewModel viewModel)
    {
        var dto = viewModel.ConvertTo<MyDto>();
        dto.Items = viewModel.Items.ConvertAll(x => x.ToDto());
        dto.CalculatedProperty = Calculate(viewModel.Seed);
        return dto;
    }
}
Which is now easily consumable with just:
var dto = viewModel.ToDto();
If you are not tied specifically to ServiceStack and just want "fully functional service to support programmatic clients ... with what we already have", you could try the following: Have your controllers return either a ViewResult or a JsonResult based on the request's accept header - Request.AcceptTypes.Contains("text/html") or Request.AcceptTypes.Contains("application/json").
Both ViewResult and JsonResult are ActionResult, so the signature of actions remains the same, and both View() and Json() accept a ViewModel. Furthermore, if you have a ControllerBase you can make a base method (for example protected ActionResult RespondWith(Object viewModel)) which calls either View() or Json() so the change to existing code is minimal.
Of course, if your ViewModels are not pure (i.e. have some html-specific stuff or you rely on some ViewBag magic) then it's a little more work. And you won't get SOAP or other binding types provided by ServiceStack, but if your goal is to support a JSON data interface with minimal code changes to the existing MVC app then this could be a solution.
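A minimal sketch of that base-controller idea (the names are illustrative; existing actions simply switch from return View(vm) to return RespondWith(vm)):

using System.Linq;
using System.Web.Mvc;

public abstract class BaseController : Controller
{
    // Returns JSON when the client asks for it, otherwise the usual view.
    protected ActionResult RespondWith(object viewModel)
    {
        if (Request.AcceptTypes != null &&
            Request.AcceptTypes.Contains("application/json"))
        {
            return Json(viewModel, JsonRequestBehavior.AllowGet);
        }

        return View(viewModel);
    }
}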

How to implement an Aggregate Root repository and add a child entity with EF

I'm developing an MVC application. I have a domain model, and I use the repository pattern for data access with Entity Framework Code First. I also have a UnitOfWork class which I call the repository operations through.
My problem mainly arises when I try to take advantage of aggregate roots and handle child objects through their parent repository.
This is the problem:
Parent class "Supplier" has several Contracts with Departments. I've chosen to make the contract a child of Supplier in this case.
To add a new contract I need a method on my SupplierRepository. I tried:
public class SupplierRepository : GenericRepository<Supplier>
{
    public SupplierRepository(MyContext context)
        : base(context)
    {
    }

    public void AddSupplierContract(SupplierContract contract)
    {
        var supplier = context.Suppliers.Find(contract.SupplierId);
        supplier.Contract.Add(contract);
    }
And I also tried:
    public void AddSupplierContract(SupplierContract contract)
    {
        context.Entry(contract).State = EntityState.Added;
    }
}
When I call
_unitOfWork.save();
I get an error telling me:
An entity object cannot be referenced by multiple instances of IEntityChangeTracker
UnitOfWork instantiates my DbContext (myDbContext) and my SupplierRepository, and calls myDbContext.Save().
Why do I get this behavior?
How should I implement an Aggregate Root repository (CRUD operations for the child objects)?
As far as I understand I should have a method in the repository that takes a contract and adds it, and not do this in the Controller of my MVC app, but I don't seem to get it working.
I've seen a lot of information about Aggregate Roots, but no examples of how to implement it.
Thanks.
Solution:
Well I finally figured it out.
So it was not a problem with the repository: the new SupplierContract queried the store for the user entity that created it (through an extension method). Obviously that context was not disposed, and therefore I had two current DbContexts when I instantiated it to save the contract entity.
Hopefully someone saves time by reading this.
I solved the Aggregate Root repository part by just doing this in the SupplierRepository:
public void AddSupplierContract(SupplierContract contract)
{
    db.SupplierContracts.Add(contract);
}
And then calling the UnitOfWork.Save() method.
While technically you may have solved this issue, I do hope you are aware there's something fundamentally flawed in your design (unless you're using Fowler repositories): repositories (the DDD kind) deal in aggregates only. The fact that SupplierContract needs to be added to the context is not a concern of the calling code. So, why expose that method? I would also reconsider having the repository delegate the save (why else have a UoW). As far as aggregates are concerned I get the feeling you seem to be treating them as structural objects, not as behavioral ones. Hence you seem to be in for a world of pain, going through some of the moves, but not getting any of the value.
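To make the structural-versus-behavioral point concrete, here is a hedged sketch of what the calling code could look like if Supplier owned its contracts and the repository only dealt in suppliers; the SupplierContract constructor, GetById and the Department type are assumptions for illustration, not the questioner's actual API:

using System;
using System.Collections.Generic;

public class Supplier
{
    private readonly List<SupplierContract> _contracts = new List<SupplierContract>();

    public virtual IEnumerable<SupplierContract> Contracts
    {
        get { return _contracts; }
    }

    // Behavior lives on the aggregate root; callers never touch the collection directly.
    public virtual SupplierContract SignContract(Department department, DateTime validUntil)
    {
        var contract = new SupplierContract(this, department, validUntil); // assumed constructor
        _contracts.Add(contract);
        return contract;
    }
}

public class SupplierContractingService
{
    private readonly SupplierRepository _suppliers;
    private readonly UnitOfWork _unitOfWork;

    public SupplierContractingService(SupplierRepository suppliers, UnitOfWork unitOfWork)
    {
        _suppliers = suppliers;
        _unitOfWork = unitOfWork;
    }

    public void SignContract(int supplierId, Department department, DateTime validUntil)
    {
        var supplier = _suppliers.GetById(supplierId); // load the whole aggregate
        supplier.SignContract(department, validUntil); // mutate it through behavior
        _unitOfWork.Save();                            // persist it as one unit
    }
}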
To get rid of that error, you should use the same MyContext instance to create all the repositories. If you use a dependency injector, it should allow you to configure the same MyContext object for a single request. For example, with Ninject, that would be:
kernel.Bind<MyContext>().ToSelf().InRequestScope();
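With MyContext bound like that, every repository and the unit of work can simply take MyContext in their constructors and will receive the same request-scoped instance, so only one change tracker is ever involved. A sketch, assuming the UnitOfWork and repository shapes rather than the questioner's exact classes:

using Ninject;
using Ninject.Web.Common;

public static class IocConfig
{
    public static void RegisterServices(IKernel kernel)
    {
        kernel.Bind<MyContext>().ToSelf().InRequestScope();
        kernel.Bind<SupplierRepository>().ToSelf().InRequestScope();
        kernel.Bind<UnitOfWork>().ToSelf().InRequestScope();
    }
}

public class UnitOfWork
{
    private readonly MyContext _context;

    public UnitOfWork(MyContext context)
    {
        _context = context; // the same request-scoped instance the repositories receive
    }

    public void Save()
    {
        _context.SaveChanges();
    }
}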

Service Layer is repeating my Repositories

I'm developing an application using ASP.NET MVC, NHibernate and DDD. I have a service layer that is used by the controllers of my application. Everything uses Unity to inject dependencies (ISessionFactory into repositories, repositories into services and services into controllers) and works fine.
But it's very common that I need a method in a service that only gets an object from my repository, like this (in the service class):
public class ProductService {
    private readonly IUnitOfWork _uow;
    private readonly IProductRepository _productRepository;

    public ProductService(IUnitOfWork unitOfWork, IProductRepository productRepository) {
        this._uow = unitOfWork;
        this._productRepository = productRepository;
    }

    /* should this method exist in DDD??? It's very common */
    public Domain.Product Get(long key) {
        return _productRepository.Get(key);
    }

    /* another common method... is it correct by DDD? */
    public bool Delete(long key) {
        using (var tx = _uow.BeginTransaction()) {
            try
            {
                _productRepository.Delete(key);
                tx.Commit();
                return true;
            } catch {
                tx.Rollback();
                return false;
            }
        }
    }

    /* ... other methods ... */
}
Is this code correct according to DDD? For each service class I have a repository, and for each service class do I need to write a "Get" method for the entity?
Thanks guys
Cheers
Your ProductService doesn't look like it follows Domain-Driven Design principles. If I understand it correctly, it is a part of the Application layer between Presentation and Domain. If so, the methods on ProductService should have business meaning with regard to products.
Let's talk about deleting products. Is it as simple as executing a delete on the database (NHibernate, or whatever)? I think it is not. What about orders which reference the to-be-deleted product? And so on and so forth. Btw, Udi Dahan wrote a great article on deleting entities.
Bottom line is, if your application is so simple that services really do replicate your repositories and contain only CRUD operations, you probably shouldn't do DDD: throw away your repositories and let services operate on entities (which would be simple data containers in that case).
On the other hand, if there is a complicated behavior (like the one with handling 'deleted' products), there is a point in going DDD path and I strongly advocate doing so.
PS. Whichever approach (DDD or not) you eventually take, I would encourage you to use some aspect-oriented programming to handle the transaction- and exception-related stuff. Otherwise you end up with way too many methods such as DeleteProduct with the same TX and exception-handling code.
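One hedged way to get that AOP-ish behavior without a full interceptor framework is a small helper the services delegate to; the names below are illustrative, built on the IUnitOfWork already shown in the question:

using System;

public class TransactionRunner
{
    private readonly IUnitOfWork _uow;

    public TransactionRunner(IUnitOfWork uow)
    {
        _uow = uow;
    }

    // Wraps any piece of work in a transaction with a single commit/rollback policy.
    public bool Run(Action work)
    {
        using (var tx = _uow.BeginTransaction())
        {
            try
            {
                work();
                tx.Commit();
                return true;
            }
            catch
            {
                tx.Rollback();
                return false;
            }
        }
    }
}

With that in place, the service method shrinks to its intent:

public bool Delete(long key)
{
    return _transactionRunner.Run(() => _productRepository.Delete(key));
}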
That looks correct from my perspective. I really didn't like repeating service and repository method names over and over in my ASP.NET MVC project, so I went for a generic repository approach/pattern. This means that I really only need one or two Get() methods in my repository to retrieve my objects. This is possible for me because I am using Entity Framework and I just have my repository's Get() method return an IQueryable. Then I can just do the following:
Product product = (from p in _productRepository.Get() where p.Id == Id select p).FirstOrDefault();
You can probably replicate this in NHibernate with LINQ to NHibernate.
Edit: This works for DDD because it still allows me to interchange my DAL/repositories as long as the data library I am using (NHibernate, EF, etc.) supports IQueryable.
I am not sure how to do a generic repository without IQueryable, but you might be able to use delegates/lambda functions to incorporate it.
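For what it's worth, a hedged sketch of what such a generic repository can look like with Entity Framework (the names are illustrative):

using System.Data.Entity;
using System.Linq;

public interface IRepository<T> where T : class
{
    IQueryable<T> Get();
    void Add(T entity);
    void Remove(T entity);
}

public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;

    public EfRepository(DbContext context)
    {
        _context = context;
    }

    // Returning IQueryable lets callers compose filtering and ordering with LINQ.
    public IQueryable<T> Get()
    {
        return _context.Set<T>();
    }

    public void Add(T entity)
    {
        _context.Set<T>().Add(entity);
    }

    public void Remove(T entity)
    {
        _context.Set<T>().Remove(entity);
    }
}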
Edit2: And just in case I didn't answer your question correctly, if you are asking if you are supposed to call your repository's Get() method from the service then yes, that is the correct DDD design as well. The reason is that the service layer is supposed to handle all your business logic, so it decides exactly how and what data to retrieve (for example, do you want it in alphabetical order, unordered, etc...). It also means that it can perform validation after loading if needed or validation before deleting and/or saving.
This means that the service layer doesn't care exactly how that data is stored and retrieved, it only decides what data is stored and retrieved. It then calls on the repository to handle the request correctly and retrieve/store the data in the way the service layer tells it to. Thus you have correct separation of concerns.

MVC Custom Model - Where is a simple example?

I need to make a web application and I want to use MVC. However, my Model can't be one of the standard Models -- the data is not stored in a database but instead in an external application accessible only via an API. Since this is the first MVC application I've implemented, I'm relying on examples to understand how to go about it. I can't find any examples of a non-DB-based Model. An example of a custom Model would be fine too. Can anyone point me to such a beast? Maybe MVC is just too new and none exist.
It seems like I might be able to get away with the DataSet Model; however, I've not seen any examples of how to use this object. I expect an example of DataSet could help me also. (Maybe it is the same thing?)
Please note: I've seen countless examples of custom bindings. This is NOT what I want. I need an example of a custom Model which is not tied to a specific database/table.
UPDATE
I found a good example from MS located here:
http://msdn.microsoft.com/en-us/library/dd405231.aspx
While this is the "answer" to my question, I don't really like it because it ties me to MS's view of the world. @Aaronaught, @jeroenh, and @tvanfosson give much better answers from a meta perspective, moving my understanding (and yours?) forward with respect to using MVC.
I'm giving the check to #Aaronaught because he actually has example code (which I asked for.) Thanks all and feel free to add even better answers if you have one.
In most cases it shouldn't matter what the backing source is for the actual application data; the model should be exactly the same. In fact, one of the main reasons for using something like a repository is so that you can easily change the underlying storage.
For example, I have an MVC app that uses a lot of web services - rarely does it have access to a local database, except for simple things like authentication and user profiles. A typical model class might look like this:
[DataContract(Namespace = "http://services.acme.com")]
public class Customer
{
    [DataMember(Name = "CustomerID")]
    public Guid ID { get; set; }

    [DataMember(Name = "CustomerName")]
    public string Name { get; set; }
}
Then I will have a repository interface that looks like this:
public interface ICustomerRepository
{
    Customer GetCustomerByID(Guid id);
    IList<Customer> List();
}
The "API" is all encapsulated within the concrete repository:
public class AcmeWSCustomerRepository : ICustomerRepository, IDisposable
{
    private Acme.Services.CrmServiceSoapClient client;

    public AcmeWSCustomerRepository()
        : this(new Acme.Services.CrmServiceSoapClient())
    {
    }

    public AcmeWSCustomerRepository(Acme.Services.CrmServiceSoapClient client)
    {
        if (client == null)
            throw new ArgumentNullException("client");
        this.client = client;
    }

    public void Dispose()
    {
        client.SafeClose(); // Extension method to close WCF proxies
    }

    public Customer GetCustomerByID(Guid id)
    {
        return client.GetCustomerByID(id);
    }

    public IList<Customer> List()
    {
        return client.GetAllCustomers();
    }
}
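The SafeClose extension used in Dispose() isn't shown here; a common hedged implementation of that pattern for WCF proxies looks something like this:

using System;
using System.ServiceModel;

public static class CommunicationObjectExtensions
{
    // Close the proxy cleanly; Abort it if it is faulted or Close itself fails.
    public static void SafeClose(this ICommunicationObject client)
    {
        if (client == null)
            return;

        try
        {
            if (client.State == CommunicationState.Faulted)
                client.Abort();
            else
                client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort();
        }
        catch (TimeoutException)
        {
            client.Abort();
        }
    }
}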
Then I'll also probably have a local testing repository with just a few customers that reads from something like an XML file:
public class LocalCustomerRepository : ICustomerRepository, IDisposable
{
    private XDocument doc;

    public LocalCustomerRepository(string fileName)
    {
        doc = XDocument.Load(fileName);
    }

    public void Dispose()
    {
    }

    public Customer GetCustomerByID(Guid id)
    {
        return
            (from c in doc.Descendants("Customer")
             where (Guid)c.Element("ID") == id
             select new Customer
             {
                 ID = (Guid)c.Element("ID"),
                 Name = (string)c.Element("Name")
             })
            .FirstOrDefault();
    }

    // etc.
}
The point I'm trying to make here is, well, this isn't tied to any particular database. One possible source in this case is a WCF service; another is a file on disk. Neither one necessarily has a compatible "model". In this case I've assumed that the WCF service exposes a model that I can map to directly with DataContract attributes, but the Linq-to-XML version is pure API; there is no model, it's all custom mapping.
A really good domain model should actually be completely independent of the true data source. I'm always a bit skeptical when people tell me that a Linq to SQL or Entity Framework model is good enough to use throughout the entire application/site. Very often these simply don't match the "human" model and simply creating a bunch of ViewModel classes isn't necessarily the answer.
In a sense, it's actually better if you're not handed an existing relational model. It forces you to really think about the best domain model for your application, and not necessarily the easiest one to map to some database. So if you don't already have a model from a database - build one! Just use POCO classes and decorate with attributes if necessary, then create repositories or services that map this domain model to/from the API.
I think what you are looking for is really a non-DB service layer. Models, typically, are relatively simple containers for data, though they may also contain business logic. It really sounds like what you have is a service to communicate with, and you need a layer to mediate between the service and your application, producing the appropriate model classes from the data returned by the service.
This tutorial may be helpful, but you'd need to replace the repository with your class that interacts with the service (instead of the DB).
There is no fixed prescription of what a "Model" in MVC should be, just that it should contain the data that needs to be shown on screen, and probably also manipulated.
In a well-designed MVC application, data access is abstracted away somehow anyway, typically using some form of the Repository pattern: you define an abstraction layer (say, an IRepository interface) that defines the contract needed to get and persist data. The actual implementation will usually call a database, but in your case should call your 'service API'.
Here is an example of an MVC application that calls out to a WCF service.
