When is it good to use both DTOs and Breeze?

I've built my Web API to serve DTOs to the client as a means of separating the domain models from the client-side models. I'm now ramping up on client-side technologies like Breeze and I'm wondering how using Breeze would affect this pattern, and whether it's an either/or kind of scenario. When is it a good idea to use both Breeze and DTOs, if ever?

Breeze doesn't really care whether you want to use a DTO or a more full-fledged domain model 'Entity' object. From a .NET perspective, Breeze can apply its full range of query services to any collection that can be exposed as an IEnumerable or an IQueryable. If you don't want to use queries, you can expose individual DTOs or collections of DTOs via Web API methods that take parameters.
You also have the option of using Breeze queries with projections to construct DTO objects from entities on the server and only work with the DTOs on the client.
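As a rough sketch of both styles (the SalesContext, Customer, and CustomerDto types are invented for illustration), a Web API controller might expose an IQueryable endpoint that Breeze can compose queries against, alongside a parameterized method that returns DTOs directly:

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Http;

    public class CustomersController : ApiController
    {
        // Hypothetical EF context; any IQueryable source would do.
        private readonly SalesContext _context = new SalesContext();

        // Breeze can compose filters, ordering, and paging onto this
        // IQueryable from the client side.
        [HttpGet]
        public IQueryable<Customer> Customers()
        {
            return _context.Customers;
        }

        // Alternative: a parameterized method that hands back a DTO
        // collection, with no client-side query composition.
        [HttpGet]
        public IEnumerable<CustomerDto> CustomersByCountry(string country)
        {
            return _context.Customers
                .Where(c => c.Country == country)
                .Select(c => new CustomerDto { Id = c.Id, Name = c.Name })
                .ToList();
        }
    }

Depending on your setup, the IQueryable action may also need OData-style query support enabled (for example via Breeze's server components or the standard [Queryable]/[EnableQuery] attributes).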
If querying is important to you, then the primary issue with DTOs versus domain model 'Entities' is how easily you can expose your DTOs as queryable objects and how efficient that querying is likely to be. Many ORM tools, like Entity Framework, can transform a query so that most of the heavy processing is performed by the database engine. Such optimizations can be very performant compared with the alternative of iterating over a DTO collection in memory in order to execute a query.
One interesting alternative is to use something like Entity Framework and Web API to expose only the mapped subset of your domain model that you want visible on the client; i.e., you use Entity Framework to do your DTO mapping for you. So you have two EF models: a full domain model and a DTO domain model. The advantage of this is that you still get the benefit of query optimization.
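For instance (the context and the Order/OrderSummaryDto types below are assumptions), a projection that stays an IQueryable lets EF translate both the filter and the column selection into SQL, instead of pulling whole entities into memory and mapping them afterwards:

    using System.Linq;

    public IQueryable<OrderSummaryDto> OpenOrderSummaries(SalesContext context)
    {
        // Only the columns OrderSummaryDto needs are selected, and the
        // WHERE clause runs in the database; nothing executes until the
        // result is enumerated.
        return context.Orders
            .Where(o => o.IsOpen)
            .Select(o => new OrderSummaryDto
            {
                Id = o.Id,
                CustomerName = o.Customer.Name,
                Total = o.Lines.Sum(l => l.Quantity * l.UnitPrice)
            });
    }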
Hope this helps.

Related

Creating Lite mixed entities with breeze

We are working on an SPA which uses Durandal + Breeze, and we have used DTOs for data transfer. Is there a way in Breeze to make some of the properties of an object observable while keeping the rest as plain JavaScript properties?
Please help.
Not easily, but depending on your willingness to look at the Breeze source, you could take a look at the JsonResultsAdapter and the "ko" (Knockout) model library implementation. Together these two control how entities and projections get materialized.

Domain Entities, DTO, and View Models

I have an ASP.NET MVC 2 application with a POCO domain model and an NHibernate repository layer. My domain model has no awareness of my view models, so I use AutoMapper to go from view model to entity and vice versa.
When I introduced WCF to my project (a late requirement), I started having to deal with disconnected objects. That is, I retrieve an entity from the database with NHibernate, and once that entity is serialized it becomes disconnected and each child collection is loaded regardless of whether or not I plan on using it, meaning I'm doing a lot of unnecessary database work.
After reading up on this, I see that it is highly recommended that you not expose your entities outside of your domain project and that you instead use DTOs.
I see the reason for this but I'm having trouble figuring out how to implement it.
Do I map from viewmodel to DTO in ASP.NET MVC, send DTOs through the service layer, and map from DTO to entity in the service layer? Where should I define my DTOs?
I like to have my service layer keep entities encapsulated within it, and return/receive only DTOs. I keep the service contracts as well as the DTOs in a separate assembly which both the MVC project and the service implementation reference.
Inside the service call implementation, the service maps DTOs to entities, then does the interaction with repositories and other entities as it needs to.
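A minimal sketch of that arrangement, with invented type names and hand-written mapping (AutoMapper would do the same job): the contract and DTO live in the shared assembly, and the mapping happens inside the service implementation, so entities never cross the boundary.

    // Shared contracts assembly, referenced by both the MVC project
    // and the service implementation.
    public class OrderDto
    {
        public int Id { get; set; }
        public string Description { get; set; }
    }

    public interface IOrderService
    {
        OrderDto GetOrder(int id);
        void SaveOrder(OrderDto dto);
    }

    // Service implementation assembly: entities never leave this layer.
    public class OrderService : IOrderService
    {
        private readonly IOrderRepository _orders;

        public OrderService(IOrderRepository orders) { _orders = orders; }

        public OrderDto GetOrder(int id)
        {
            Order entity = _orders.Get(id);
            return new OrderDto { Id = entity.Id, Description = entity.Description };
        }

        public void SaveOrder(OrderDto dto)
        {
            // Map the DTO back onto the tracked entity, then persist.
            Order entity = _orders.Get(dto.Id);
            entity.Description = dto.Description;
            _orders.Save(entity);
        }
    }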
On the app/MVC project I sometimes get lazy and just use DTOs as the models for certain actions (especially CRUDy ones). If I need a projection or something like that, then I'll make a view model and convert between DTO and view model with AutoMapper, etc.
How exposed your entities are is a subject of much debate. Some people will push them all the way to the view/app layer. I prefer to keep them in the service layer. I find that when entities leave the service layer, you end up doing business-logic-type work wherever they're used, work that should probably reside in a service.
I treat my DTOs like view models because the UI layer (the MVC app) is requesting them. You could go Entity -> DTO -> ViewModel, but I think that's over-engineering if the only consumer of your service is an MVC application. If the DTOs will actually be used as data and not simply as screen specifications, then you should probably use additional mapping.
I've also simply returned entities from my WCF layer and let the automatically generated proxy objects on the client be the DTOs. The entities effectively become DTOs because of the proxy classes, and no business logic comes over to the client.
And of course, this all depends on what your architectural goals are. This question is borderline subjective and argumentative, IMHO.
I like defining the DTO in the MVC project and then creating extension methods to transform from domain entity to DTO (and vice versa).
The transformation takes place in the MVC actions.
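Something along these lines (Product, ProductDto, and the repository are hypothetical):

    using System.Web.Mvc;

    public static class ProductMappings
    {
        public static ProductDto ToDto(this Product entity)
        {
            return new ProductDto { Id = entity.Id, Name = entity.Name, Price = entity.Price };
        }

        public static Product ToEntity(this ProductDto dto)
        {
            return new Product { Id = dto.Id, Name = dto.Name, Price = dto.Price };
        }
    }

    public class ProductsController : Controller
    {
        private readonly IProductRepository _products;

        public ProductsController(IProductRepository products) { _products = products; }

        public ActionResult Details(int id)
        {
            // The entity-to-DTO transformation happens in the action itself.
            ProductDto dto = _products.Get(id).ToDto();
            return View(dto);
        }
    }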
I just wrote a post about a way of getting around all this DTO <-> DO transformation. Maybe check it out: http://codeblock.engio.net/?p=17

Dto/TransactionScripts and Odata Services

With an OData service, we can query from the client side without using DTOs. Do I really need a DTO layer if I use an OData service? What are the pros and cons if I don't use DTOs? In our old system, the querying mechanism had many query service methods that return DTO collections. But OData services confuse me... it seems as if the responsibility of the server moves to the client. The same confusion applies to transaction scripts. I'm curious about your thoughts.
When you are on the server side, the only thing that matters to OData is either an EDM model or POCO models. So when you generate an EDMX file, you can consider those classes to be your business object or model layer and put them into those namespaces. So in a way there is no business logic being applied there.
But on the client side you can always centralize the OData method invocations. Since they support callbacks, you can have a view model call a repository and pass the callback along. That way you don't bloat your view model with extensive OData query invocation. A sort of repository pattern is what I am talking about.
Hope this gives you a direction.
regards :)

ASP.NET MVC - model decision: how to design it?

This concerns an enterprise application with a very generic database (all objects are identified using data in the database and internationalized/globalized/localized).
Should I make a model for the repository pattern, then make (generate, 1:1) another model for DB access (LINQ to SQL or EF) and use the latter as the repository model's data access layer?
Or should I just use the L2S/EF/NHibernate model directly, mapping the model to the DB and exposing the persistence layer?
Will this dual-model idea (repository pattern) cause problems for building the dynamic, stackable LINQ search queries that are possible when using the L2S/EF model directly?
Please advise.
As long as you are exposing IQueryable objects in your repository, you should have no problem stacking queries in the manner you suggest.
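For example (the types are invented), with the repository returning an IQueryable, a caller can keep stacking operators onto it, and nothing executes until the results are enumerated:

    using System.Collections.Generic;
    using System.Linq;

    public interface IProductRepository
    {
        IQueryable<Product> Query();
    }

    public class CatalogService
    {
        private readonly IProductRepository _products;

        public CatalogService(IProductRepository products) { _products = products; }

        public IList<Product> GetPageOfBooks(int page, int pageSize)
        {
            // Each operator stacks onto the repository's IQueryable; nothing
            // runs until ToList(), and the provider (L2S/EF) translates the
            // whole chain into a single database query.
            return _products.Query()
                .Where(p => p.Category == "Books")
                .OrderBy(p => p.Name)
                .Skip(page * pageSize)
                .Take(pageSize)
                .ToList();
        }
    }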
I would be cautious about using Entity Framework for this, since lazy loading is not supported in the way you might expect. Linq to SQL will handle lazy loading without problems.
For more information about lazy loading in the Entity Framework, see: http://www.singingeels.com/Articles/Entity_Framework_and_Lazy_Loading.aspx
Take a look at Sharp Architecture.
Regarding returning IQueryable from your repository objects: in my opinion, doing so blurs the proper separation of concerns in your application. I'm all for working with IQueryable within your data access layer, but once you start returning objects as IQueryable you give your controllers and/or views the opportunity to start meddling with data access. That may negatively impact the testability of your application as well.
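If you lean that way, the repository can take the query parameters itself and return already-materialized results, so nothing outside the data access layer can extend the query (again, the types here are made up):

    using System.Collections.Generic;
    using System.Linq;

    public interface IOrderRepository
    {
        // A materialized list, not an IQueryable: the query is composed
        // and executed entirely inside the data access layer.
        IEnumerable<Order> FindOpenOrders(int customerId);
    }

    public class OrderRepository : IOrderRepository
    {
        private readonly SalesContext _context;

        public OrderRepository(SalesContext context) { _context = context; }

        public IEnumerable<Order> FindOpenOrders(int customerId)
        {
            return _context.Orders
                .Where(o => o.CustomerId == customerId && o.IsOpen)
                .ToList();
        }
    }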

MVC: Repositories and Services

I am confused as to the limits of what gets defined in the repositories and what to leave to the services. Should the repository only create simple entities matching tables from the database, or can it create complex custom objects that combine those entities?
In other words: should services be making various LINQ to SQL queries against the repository? Or should all the queries be predefined in the repository, with the business logic simply deciding which method to call?
You've actually raised a question here that's currently generating a lot of discussion in the developer community - see the follow-up comments to Should my repository expose IQueryable?
The repository can - and should - create complex combination objects containing multiple associated entities. In domain-driven design, these are called aggregates: collections of associated objects organized into some cohesive structure. Your code doesn't have to call GetCustomer(), GetOrdersForCustomer(), and GetInvoicesForCustomer() separately - you just call myCustomerRepository.Load(customerId) and get back a deep customer object with those properties already instantiated. I should also add that if you're returning individual objects based on specific database tables, that's a perfectly valid approach, but it's not really a repository per se - it's just a data access layer.
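A sketch of that aggregate-style load (the Customer aggregate, SalesContext, and repository are hypothetical; Include is EF's eager-loading call, and NHibernate fetch plans or L2S DataLoadOptions would play the same role):

    using System.Linq;

    public interface ICustomerRepository
    {
        Customer Load(int customerId);
    }

    public class CustomerRepository : ICustomerRepository
    {
        private readonly SalesContext _context;

        public CustomerRepository(SalesContext context) { _context = context; }

        public Customer Load(int customerId)
        {
            // One call returns the whole aggregate: the customer plus its
            // orders and invoices, instead of three separate Get* calls.
            return _context.Customers
                .Include("Orders")
                .Include("Invoices")
                .Single(c => c.Id == customerId);
        }
    }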
On one hand, there is a compelling argument that LINQ to SQL objects, with their 'smart' properties and their deferred execution (i.e. not loading Customer.Orders until you actually use it), are a completely valid implementation of the repository pattern, because you're not actually running database code, you're running LINQ statements (which are then translated into DB code by the underlying LINQ provider).
On the other hand, as Matt Briggs' post points out, L2S is fairly tightly coupled to your database structure (one class per table) and has limitations (no many-to-many mappings, for example) - so you may be better off using L2S for data access within your repository, but then mapping the L2S objects onto your own domain model objects and returning those.
A repository should be the only piece of your application that knows anything about your data access technology. So it should not return objects generated by L2S at all, but should map those properties onto model objects of your own.
If you are using this sort of pattern, you may want to rethink L2S. It generates a data access layer for you, but doesn't really handle the impedance mismatch; you have to do that manually. If you look at something like NHibernate, that mapping is done in a more robust fashion. L2S is more for a two-tier application, where you want a quick and dirty DAL that you can extend easily.
If you're using LINQ, then my belief is that the repository should be a container for your LINQ syntax. This gives you a level of abstraction of the database access routines from your model object interfacing. As Dylan mentions above, there are other views on the matter; some people like to return the IQueryable so they can continue to query the database at a later point after the repository. There is nothing wrong with either of these approaches, as long as you're clear about the standards for your application. There is more information on the best practices I use for LINQ repositories here.
