Does this violate the DRY principle? - asp.net-mvc

I have 3 domain models - Item, ItemProductLine, and ProductLine. Each of these map to already existing database tables. I also have a view model that I use in my view.
Domain models:
public class Item
{
public string itemId { get; set; }
public string itemDescription { get; set; }
public float unitPrice { get; set; }
// more fields
public virtual ItemProductLine itemProductLine { get; set; }
}
public class ItemProductLine
{
public string itemId { get; set; }
public String productLineId { get; set; }
// more fields
public virtual ProductLine productLine { get; set; }
}
public class ProductLine
{
public string productLineId { get; set; }
public string productLine { get; set; }
// more fields
}
View model:
public class ItemViewModel
{
public string itemNumber { get; set; }
public String itemDescription { get; set; }
public Double unitPrice { get; set; }
public string productLine { get; set; }
}
My current query is:
from item in dbContext.Items
where item.unitPrice > 10
select new ItemViewModel()
{
itemNumber = item.itemNumber,
itemDescription = item.itemDescription,
unitPrice = item.unitPrice,
productLine = item.itemProductLine.productLine.productLine
}
I currently have this query in the controller, but I am refactoring the code. I want to put the query code in a repository class in a data access layer. From what I've read, I should not reference any view models in that layer. If I change select new ItemViewModel() to select new Item(), it will return the error:
The entity or complex type 'proj.DAL.Item' cannot be constructed in a LINQ to Entities query.
A solution I have seen is to create a data transfer object (DTO) to transfer data from my domain model to my view model.
However, by doing this, I would have 3 copies of the data. If I need to add another database field and display it, I need to update 3 files. I believe I am violating the DRY principle. Is it inevitable to violate the DRY principle when using DTOs and view models? If not, can you provide an example of how to refactor this to have DRY code?

Having multiple models is not a DRY violation; however, your code breaks the Separation of Concerns principle, because the domain model is the same as (or built upon, read: coupled to) the persistence model. You should keep your models separate for each layer and use a tool like AutoMapper to map between them. This prevents one model from serving more than one purpose.
It looks like repeating yourself, but in fact you are keeping your layers decoupled and ensuring code maintainability.
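For example, a minimal AutoMapper configuration for the models in the question might look like the sketch below; someItem stands in for an entity loaded from the context. Adding a new database column then means touching the domain model, the view model, and at most one mapping line:
    // A minimal sketch, assuming AutoMapper. Same-named properties map by convention;
    // only the nested productLine needs explicit configuration.
    var config = new MapperConfiguration(cfg =>
        cfg.CreateMap<Item, ItemViewModel>()
           .ForMember(vm => vm.productLine,
                      opt => opt.MapFrom(i => i.itemProductLine.productLine.productLine)));
    IMapper mapper = config.CreateMapper();

    // someItem is a placeholder for an Item loaded from the context.
    ItemViewModel model = mapper.Map<ItemViewModel>(someItem);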

Unlike ramiramulu, I would refrain from introducing too many abstractions.
If you use EF, your DAL is actually Entity Framework, so there is no need to abstract that. A lot of people attempt to do this, but it only complicates your code, for no gain. If you were writing SQL requests and calling stored procedures directly, then a DAL would be helpful, but building an abstraction on top of EF (which is itself an abstraction; the same goes for NHibernate) is a bad idea.
Also, pure DTOs as an abstraction are more and more frowned upon, but they can be used if you have a middleware and do not access the database directly - for example, with a message bus like NServiceBus, the messages would be considered DTOs.
Unless you do very simple and pure CRUD (in which case, go ahead and put the logic in controllers - no reason to add complexity for pretty straightforward business), you should definitely move business logic out of your controllers. For this you have many options, but two of the most popular are: a rich domain model with domain-driven design, or rich business services with service-oriented design. There are a lot of ways to do this, but these two illustrate very different approaches.
Rich Domain (Controller per Aggregate)
In the first case, your controller is responsible for acquiring the domain object, calling the logic, and returning a view model. Controllers act as the bridge between the View world and the Model world. How you acquire the domain object(s) needs to be somewhat abstracted; often simple virtual methods work great - keep it simple.
Aggregate Root:
public class Item
{
public string itemId { get; set; }
public string itemDescription { get; set; }
public float unitPrice { get; set; }
// more fields
public virtual ItemProductLine itemProductLine { get; set; }
// Example of logic, should always be in your aggregate and not in ItemProductLine for example
public void UpdatePrice(float newPrice)
{
// ... Implement logic
}
}
View Model:
public class ItemViewModel
{
public int id { get; set; }
public string itemNumber { get; set; }
public String itemDescription { get; set; }
public Double unitPrice { get; set; }
public string productLine { get; set; }
}
Controller:
public class ItemController : Controller
{
[HttpGet]
public ActionResult Edit(int id)
{
var item = GetById(id);
// Some logic to map to the VM, maybe automapper, valueinjector, etc.
var model = item.MapTo<ItemViewModel>();
return View(model);
}
[HttpPost]
public ActionResult Update(int id, ItemViewModel model)
{
// Do some validation
if (!model.IsValid)
{
View("Edit", model); // return edit view
}
var item = GetById(model.id);
// Execute logic
item.UpdatePrice(model.unitPrice);
// ... maybe more logic calls
Save(item);
return RedirectToAction("Edit");
}
public virtual Item GetById(int id)
{
return dbContext.Items.Find(id);
}
public virtual bool Save(Item item)
{
// probably could/should be abstracted in a Unit of Work
dbContext.Items.Update(item);
dbContext.SaveChanges();
return true;
}
}
This works great with logic that trickles down and is very model-specific. It is also great when you are not doing plain CRUD and are very action-based (e.g. a button that updates only the price, compared to an edit page where you can change all item values). It is pretty decoupled and the separation of concerns is there - you can edit and test business logic on its own, you can test controllers without a backend (by overriding the virtual methods), and you do not have hundreds of abstractions built on one another. You might move the virtual methods out into a repository class, but in my experience you always have very specific filters and concerns that are controller/view dependent, and you often end up with one controller per aggregate root, so controllers are a good place for them (e.g. .GetAllItemsWithAPriceGreaterThan(10.0)).
In an architecture like that, you have to be careful about boundaries. For example, you could have a Product controller/aggregate and want to list all Items related to that product, but it should be read-only - you shouldn't call any business logic on Items from Products; you need to navigate to the Item controller for that. The best way to enforce this is to map straight to the ViewModel:
public class ProductController : Controller
{
// ...
public virtual IEnumerable<ItemViewModel> GetItemsByProductId(int id)
{
return dbContext.Items
.Where(x => ...)
.AsEnumerable() // switch to LINQ to Objects so the mapping extension runs in memory
.Select(x => x.MapTo<ItemViewModel>())
.ToList();
// No risks of editing Items
}
}
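The MapTo<T>() calls above are not an MVC or EF API; they assume a small mapping helper of your own. A minimal sketch of such an extension method built on AutoMapper could be the following (inside a LINQ to Entities query you must call AsEnumerable() first, as above, or use AutoMapper's ProjectTo, since EF cannot translate the extension):
    // Hypothetical helper assumed by the examples above; a thin wrapper over an
    // AutoMapper IMapper configured once at application startup.
    public static class MappingExtensions
    {
        public static IMapper Mapper { get; set; } // set at startup

        public static TDestination MapTo<TDestination>(this object source)
        {
            return Mapper.Map<TDestination>(source);
        }
    }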
Rich Services (Controller per Service)
With rich services, you build a more service-oriented abstraction. This is great when business logic spans multiple boundaries and models. Services play the role of the bridge between the View and the Model. They should NEVER expose the underlying Models, only specific ViewModels (which play the role of DTOs in that case). This works very well when, for example, you have an MVC site and a REST Web API working on the same dataset: they can reuse the same services.
Model:
public class Item
{
public string itemId { get; set; }
public string itemDescription { get; set; }
public float unitPrice { get; set; }
// more fields
public virtual ItemProductLine itemProductLine { get; set; }
}
View Model:
public class ItemViewModel
{
public int id { get; set; }
public string itemNumber { get; set; }
public String itemDescription { get; set; }
public Double unitPrice { get; set; }
public string productLine { get; set; }
}
Service:
public class ItemService
{
public ItemViewModel Load(int id)
{
return dbContext.Items.Find(id).MapTo<ItemViewModel>();
}
public bool Update(ItemViewModel model)
{
var item = dbContext.Items.Find(model.id);
// update item with model and check rules/validate
// ...
if (valid)
{
dbContext.Items.Update(item);
dbContext.SaveChanges();
return true;
}
return false;
}
}
Controller:
public class ItemController : Controller
{
public ItemService Service { get; private set; }
public ItemController(ItemService service)
{
this.Service = service;
}
[HttpGet]
public ActionResult Edit(int id)
{
return View(Service.Load(id));
}
[HttpPost]
public ActionResult Update(int id, ItemViewModel model)
{
// Do some validation and update
if (!model.IsValid || !Service.Update(model))
{
View("Edit", model); // return edit view
}
return RedirectToAction("Edit");
}
}
Controllers are only there to call the service(s) and compose the results for the views. They are "dumb" compared to domain-oriented controllers, but if you have a lot of view complexity (tons of composed views, AJAX, complex validation, JSON/XML processing alongside HTML, etc.), this is the preferred approach.
Also, in this case, services do not have to relate to only one model. The same service can manipulate multiple model types if they share business logic, so an OrderService could access the inventory and make adjustments there, and so on. Services are more process-based than model-based.
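As a rough, hypothetical illustration of such a process-based service (every type and member name below is made up, not from the question):
    // Hypothetical process-oriented service: it spans orders and inventory
    // instead of wrapping a single model. All types here are illustrative.
    public class OrderService
    {
        private readonly MyDbContext dbContext;   // assumed EF context

        public OrderService(MyDbContext dbContext)
        {
            this.dbContext = dbContext;
        }

        public bool PlaceOrder(PlaceOrderViewModel model)
        {
            var stock = dbContext.InventoryItems.Find(model.itemId);
            if (stock == null || stock.Quantity < model.quantity)
                return false;                                 // business rule: no overselling

            stock.Quantity -= model.quantity;                 // inventory adjustment
            dbContext.Orders.Add(new Order { ItemId = model.itemId, Quantity = model.quantity });
            dbContext.SaveChanges();
            return true;
        }
    }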

I would do it this way -
My Domain Model -
public class Item
{
// more fields
public virtual ItemProductLine itemProductLine { get; set; }
}
public class ItemProductLine : ProductLine
{
// more fields
}
public class ProductLine
{
// more fields
}
DAL Would be -
public class ItemRepository
{
public Item Fetch(int id)
{
// Get Data from Database into Item Model
}
}
BAL would be -
public class ItemBusinessLayer
{
public Item GetItem(int id)
{
// Do business logic here
return DAL.Fetch(id);
}
}
Controller would be -
public class ItemController : Controller
{
public ActionResult Index(int id)
{
Item _item = BAL.GetItem(id);
ItemViewModel _itemViewModel = AutomapperExt.Convert(_item); // something where automapper will be invoked for conversion process
return View(_itemViewModel);
}
}
Automapper will be maintained in a separate class library.
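A minimal sketch of such a wrapper (assuming AutoMapper and the ItemViewModel from the question; the exact shape of the class library is up to you) might be:
    // Hypothetical conversion facade kept in a separate class library.
    public static class AutomapperExt
    {
        private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
            cfg.CreateMap<Item, ItemViewModel>()).CreateMapper();

        public static ItemViewModel Convert(Item item)
        {
            return Mapper.Map<ItemViewModel>(item);
        }
    }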
The main reason I chose this approach is that, for a particular business, there can be any number of applications/frontends, but the business domain models shouldn't change. So my BAL is not going to change; it returns business domain models itself. That doesn't mean I have to return the Item model every time; instead I can have MainItemModel, MiniItemModel, etc., and all of these models serve business requirements.
Now it is the responsibility of the frontend (probably the controllers) to decide which BAL method to invoke and how much data to use on the frontend.
Some devs might argue that the UI shouldn't have the power to decide how much data to use and what data to show, and that the BAL should make that decision instead. I agree, and that happens in the BAL itself if our domain model is strong and flexible. If security is the main constraint and the domain models are very rugged, then we can do the AutoMapper conversion in the BAL itself; otherwise, simply do it on the UI side. At the end of the day, MVC is all about making code more manageable, cleaner, more reusable and more comfortable.

Related

Query result of a child collection of Breeze entity

I am trying to perform a query using Breeze that will return a filtered selection of child entities. I have two custom dtos defined as follows:
#region Dto Models
public class ProductDto   {
public int ProductDtoId { get; set; }
public int ProductClassId { get; set; }
public ICollection<ProductRequiredInputDto> RequiredInputs { get; set; }  
}
public class ProductRequiredInputDto
{
public int ProductRequiredInputDtoId { get; set; }
public string Product { get; set; }
public string Capacity { get; set; }
public string Electrical { get; set; }
//Navigation properties
public virtual ProductDto ProductDto { get; set; }
}
#endregion
My first query is to simply return a populated ProductDto model.
var query1a = this.entityQuery.from('ProductModel');
return this.entityManager.executeQuery(query1a) // returns a promise
.then(data => { this.product = data.results; });
When I make a call to my web api controller everything works as expected as I receive a singular ProductDto model populated with a collection of ProductRequiredInputDto models. Here is a sample:
0: ProductDto__IPE_Data_DtoModels
ProductClassId: 1
ProductDtoId: 1
RequiredInputs: Array[40]
_backingStore: Object
ProductClassId: 1
ProductDtoId: 1
RequiredInputs: Array[40]
Now, what I am trying to achieve is to perform a second query on the ProductDto model that will return a filtered array of ProductRequiredDto models from the RequiredInputs property. I have looked over the Breeze examples and samples but have not been able to find a solution to this particular question.
Short answer: No I don't think you can filter on ICollection Navigation Properties from the EntityQuery.
Longer answer: You can write a custom method on the controller that uses .Include("RequiredInputs"), and use LINQ to perform the filtering you want in the controller.
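A hedged sketch of such a controller method, assuming an EFContextProvider-style context with a ProductDtos set and a hypothetical capacity filter (names are illustrative):
    // Illustrative Web API action: eagerly load the children, then filter in memory.
    [HttpGet]
    public ProductDto ProductWithFilteredInputs(int id, string capacity)
    {
        var product = _contextProvider.Context.ProductDtos   // assumed set name
            .Include("RequiredInputs")
            .Single(p => p.ProductDtoId == id);

        product.RequiredInputs = product.RequiredInputs
            .Where(r => r.Capacity == capacity)
            .ToList();

        return product;
    }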
Aside: I find it peculiar that you don't have a ProductDtoID property on the ProductRequiredInputDto object.
Is it absolutely necessary to call the function that retrieves ProductDto? Because it doesn't sound logical to me. I would create a controller function:
[HttpGet]
public IQueryable<ProductRequiredInputDto> ProductRequiredInputDtos()
{
return _contextProvider.ProductRequiredInputDto;
}
And use a client side query in the lines of:
var idPredicate = breeze.Predicate.create('id', '==', yourSelectedProductDtoId);
var yourPredicate = breeze.Predicate.create('yourProductRequiredInputDtosProperty', 'yourOperator', 'yourValue');
var query = entityQuery.from('ProductRequiredInputDtos').where(idPredicate.and(yourPredicate));
Jonathan's method would also work, but then you have a specialized controller function for one type of call and those pile up quickly (unless you make them general by receiving params but that's another story). This way you can do any query on this model from your client without cluttering the controller up.

When to expose an IEnumerable instead of an ICollection?

public class Order
{
public int Id {get;set;}
[DisplayName("User")]
public long UserId { get; set; }
[ForeignKey("UserId")]
public virtual User User { get; set; }
public decimal Amount { get; set; }
}
With IEnumerable
public class User
{
public int Id{get;set;}
public virtual IEnumerable<Order> Orders { get; set; }
}
public User GetWithOrders()
{
var myUser=UserRepository.GetByEmail("email@email.com");
myUser.Orders=OrderRepository.GetByUserId(myUser.Id);
return myUser;
}
With ICollection
public class User
{
public int Id{get;set;}
public virtual ICollection<Order> Orders { get; set; }
}
public User GetWithOrders()
{
var myUser=UserRepository.GetByEmail("email@email.com");
return myUser;
}
With IEnumerable I don't get lazy loading for the navigation property, so I have to fetch the orders for this user with a second query.
With ICollection I get navigation, so I can reach the orders from the user. This seems cool. But then I can add new orders to the user in the controller without going through a service or repository.
That is manipulating data at the controller level. Is this an anti-pattern?
But [with ICollection] I can add new order in Controller without using service or repository.
You mean you can do this (assuming there's a viewmodel for adding an order to a user and a SaveChanges() somewhere):
public class UserController
{
public ActionResult AddUserOrder(AddUserOrderModel addOrder)
{
User user = User.GetByEmail(addOrder.UserEmail);
user.Orders.Add(addOrder.Order);
User.SaveChanges();
return RedirectToAction("Index"); // or wherever is appropriate
}
}
If you mean specifically that you can do user.Orders.Add(...), then yes, that's a side effect of exposing entity types from your service or repository layer.
If you want to avoid that, you'd have to define and expose a business object containing the members you want to expose:
public class UserBLL
{
public int Id { get; private set; }
public IEnumerable<Order> Orders { get { return _orders.AsEnumerable(); } }
private readonly List<Order> _orders;
public UserBLL(User user)
{
Id = user.Id;
_orders = user.Orders.ToList();
}
public void AddOrder(Order order)
{
_orders.Add(order);
}
}
There's not a real choice here. ICollection is needed by EF to control some of its aspects, like binding query results and lazy loading. By using IEnumerable, you're essentially turning all that functionality off, and along with it EF's understanding of your underlying structure. When you generate migrations, EF will not generate the requisite underlying join tables for many-to-many relationships, foreign keys on related tables, etc.
Long and short, use ICollection. While you're correct that this allows you to add items simply by adding them to the collection on a related entity, sans DAL, they still can't be saved without access to the context. If you've set up your DAL correctly, that's only available through the DAL itself, so you still have to pass the entity back into your DAL pipeline to persist any of these changes. In other words, don't worry about it.
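For example (a hypothetical repository, not code from the question), adding to user.Orders only mutates the in-memory graph; nothing is persisted until the context behind the DAL saves:
    // Sketch only: the controller can mutate user.Orders, but persistence
    // requires the DbContext, which lives behind the repository.
    public class UserRepository
    {
        private readonly AppDbContext _context;   // assumed EF context

        public UserRepository(AppDbContext context)
        {
            _context = context;
        }

        public User GetByEmail(string email)
        {
            return _context.Users.Single(u => u.Email == email);
        }

        public void SaveChanges()
        {
            _context.SaveChanges();   // changes to user.Orders hit the database only here
        }
    }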

Confusion between DAL, Service Layer and repositories

Say I have simple models like these (a small part of a pretty large app):
public class EHR : IEntity
{
public int ID { get; set; }
public string UserName { get; set; }
public DateTime CreationDate { get; set; }
public virtual ICollection<PhysicalTest> PhysicalTests { get; set; }
}
public class PhysicalTest : IEntity
{
public int ID { get; set; }
public virtual EHR Ehr { get; set; }
public Boolean IsDeleted { get; set; }
}
And I want an easy way to get the physical tests that are NOT deleted for a given EHR.
I can think of three ways of doing this.
One is simply adding a method to my EHR class (it doesn't seem like such a bad idea, because I don't want to suffer from an anemic domain model):
public IEnumerable<PhysicalTest> ActivePhysicalTests()
{
return this.PhysicalTests.Where(m => !m.IsDeleted).ToList();
}
The other is creating an extension method in an EHRRepositoryExtensions class:
public static class EHRRepositoryExtensions
{
public static IEnumerable<PhysicalTest> Active(this IEnumerable<PhysicalTest> physicalTests)
{
return physicalTests.Where(test => !test.IsDeleted).OrderByDescending(test => test.CreationDate).ToList();
}
}
I also think I could extend my IRepository to include a method that returns only the physical tests that aren't deleted.
something like
public class EHRRepository : IRepository<EHR>
{
// TODO: method that returns only the physical tests that aren't deleted.
}
I am still trying to grasp many DDD concepts and I want this to be as pure as possible.
Which of these approaches would you recommend?
What's a rule of thumb on topics like this?
Please help.
The first approach is recommended, since EHR is your aggregate root and it is the information expert about its physical tests.
The second approach is not relevant: you already have the model, so you can add this method to the entity instead.
The third approach would be preferable only if the list of physical tests takes a long time to load from the database. You can still rely on lazy loading, but if you want to separate the fetching from the domain, or you don't use a lazy-loading-enabled ORM, then put it as a query method in the repository.
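If you do go the repository route, a hedged sketch of that query method (assuming an EF context exposing a PhysicalTests set) could be:
    // Sketch only: query active tests straight from the database instead of
    // loading the whole collection and filtering in memory.
    public class EHRRepository : IRepository<EHR>
    {
        private readonly AppDbContext _context;   // assumed EF context

        public EHRRepository(AppDbContext context)
        {
            _context = context;
        }

        public IEnumerable<PhysicalTest> GetActivePhysicalTests(int ehrId)
        {
            return _context.PhysicalTests
                .Where(t => t.Ehr.ID == ehrId && !t.IsDeleted)
                .ToList();
        }

        // ... remaining IRepository<EHR> members
    }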

EF Entities vs. Service Models vs. View Models (MVC)

I'm trying to understand and figure out good practices for designing your app/domain models (POCOs/DTOs).
Let's say I have the following database table, Account:
UserID int
Email varchar(50)
PasswordHash varchar(250)
PasswordSalt varchar(250)
Of course, EF4 would build the entity like so:
public class Account
{
public int UserID { get; set; }
public string Email { get; set; }
public string PasswordHash { get; set; }
public string PasswordSalt { get; set; }
}
Now, let's say I have a view model for registering a new user, which may look something like so:
public class RegistrationViewModel
{
public string Email { get; set; }
public string Password { get; set; }
}
Lastly, I have a service which needs to register the user:
public class RegistrationService
{
public void RegisterUser(??? registration)
{
// Do stuff to register user
}
}
I'm trying to figure out what to pass into the RegisterUser method. The view model is, of course, located under my web app (presentation layer), so I do not want this getting passed to my service.
So, I'm thinking one of four possibilities:
1) Set up a service model that is similar, if not identical, to the RegistrationViewModel, and use this:
public class RegistrationServiceModel
{
public string Email { get; set; }
public string Password { get; set; }
}
public class RegistrationService
{
public void RegisterUser(RegistrationServiceModel registration)
{
// Do stuff to register user
}
}
2) Set up an interface for the model, have my models implement it, and have my method accept the interface:
public interface IRegistrationModel
{
string Email { get; set; }
string Password { get; set; }
}
public class RegistrationServiceModel : IRegistrationModel
{
public string Email { get; set; }
public string Password { get; set; }
}
public class RegistrationService
{
public void RegisterUser(IRegistrationModel registration)
{
// Do stuff to register user
}
}
3) Pass in the Account entity, doing the RegistrationViewModel-to-Account mapping in my controller:
public class RegistrationService
{
public void RegisterUser(Account account)
{
// Do stuff to register user
}
}
4) Move my view model out of the presentation into a domain/service layer, and pass that into the service method:
public class RegistrationService
{
public void RegisterUser(RegistrationViewModel account)
{
// Do stuff to register user
}
}
None of these scenarios seems ideal, as I see problems in each of them. So I'm wondering if there's another approach I haven't thought of.
What are good practices for this?
Thanks in advance.
You never pass a view model to the service. A service doesn't even know about the existence of a view model that you might have defined in your presentation tier. A service works with domain models.
Use AutoMapper to map between the view model and the domain model, and vice versa.
Personally, I've never heard of service models in DDD (view models for services).
Use the 3rd option, for sure. As šljaker said, the Service should be unaware of the presentation part of the application (which your ViewModel is part of).
Also, don't overcomplicate things by introducing tons of transition models like RegistrationServiceModel or - even worse - IRegistrationModel (the latter will lead to an "interface explosion" one day).
So:
Have a Domain entity (POCO entity that is persisted with Entity Framework or NHibernate or NoRM or whatever).
Have a ViewModel that represents your domain model in given context. Don't hesitate to make a ViewModel per Controller Action if necessary. The side-effect benefit of strict ViewModels (those which are 1:1 with your View) is complete absence of over-posting and under-posting problems. It depends on your concrete situation/taste though.
Use DataAnnotation attributes with your ViewModels to provide basic validation (remember to validate business rules too but it should sit behind the wire - inside Services/Repositories layer).
Don't ever let the App Service know about ViewModels. Create a domain entity instance and feed it to the Service instead (to validate/persist).
Use AutoMapper as an option to quickly map from your domain entities to ViewModels.
Map from the incoming ViewModel or FormCollection to your entity in either the Controller action or a custom IModelBinder (see the sketch after this list).
(Optionally) I'd recommend following the Thunderdome Principle. It's a really convenient way of using ViewModels.
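A hedged sketch of that controller-side mapping (the injected service and the redirect target are assumptions; the hash and salt are filled in behind the service):
    // Sketch only: the controller maps the ViewModel to the domain entity and
    // hands the entity to the service; the service never sees the ViewModel.
    [HttpPost]
    public ActionResult Register(RegistrationViewModel model)
    {
        if (!ModelState.IsValid)
            return View(model);

        var account = Mapper.Map<Account>(model);     // AutoMapper; PasswordHash/Salt set later
        _registrationService.RegisterUser(account);   // assumed injected service

        return RedirectToAction("Index");             // assumed redirect target
    }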
In this case it makes perfect sense to use a DTO (Data Transfer Object). You can create an AccountDto class at the service layer and use it to pass the registration data down to the service. It might be similar to the ViewModel in some cases, but generally you can show much more in your View than is required to create a user. To further illustrate the point, your ViewModel will probably at least look something like this:
public class RegistrationViewModel
{
[Required]
public string Email { get; set; }
[Required]
public string Password { get; set; }
[Required]
[Compare("Password")]
public string RepeatPassword { get; set; }
}
While your DTO will only require the Email and Password properties.
public class AccountDto
{
public string Email { get; set; }
public string Password { get; set; }
}
So as you see, the ViewModel only contains the data needed for the View. The email validation and password comparison logic happens in your Web layer. You use the DTO to pass only the email and password to the Service. Then, at the service layer, you hash the password, populate your entity object and persist the values to the database.
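A hedged sketch of that service method (the hashing helper and the context name are stand-ins, not a specific library):
    // Sketch only: the service receives the DTO, hashes the password,
    // builds the entity, and persists it.
    public class RegistrationService
    {
        private readonly AppDbContext _context;   // assumed EF context

        public RegistrationService(AppDbContext context)
        {
            _context = context;
        }

        public void RegisterUser(AccountDto registration)
        {
            string salt = PasswordHelper.CreateSalt();                        // hypothetical helper
            string hash = PasswordHelper.Hash(registration.Password, salt);   // hypothetical helper

            var account = new Account
            {
                Email = registration.Email,
                PasswordHash = hash,
                PasswordSalt = salt
            };

            _context.Accounts.Add(account);
            _context.SaveChanges();
        }
    }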

How to connect Controller to Service layer to Repository layer

Let's say I have the following entities that map to database tables (every matching property name can be considered a PK/FK relationship):
public class Person
{
public int PersonID { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
}
public class Employee
{
public int EmployeeID { get; set; }
public int PersonID { get; set; }
public int Salary { get; set; }
}
public class Executive
{
public int ExecutiveID { get; set; }
public int EmployeeID { get; set; }
public string OfficeNumber { get; set; }
}
public class Contact
{
public int ContactID { get; set; }
public int PersonID { get; set; }
public string PhoneNumber { get; set; }
}
My architecture is as follows: Controller calls Service layer which calls Repository layer.
I have a View called AddExecutive that collects the following information: FirstName, LastName, PhoneNumber, Salary, and OfficeNumber.
What is the best way to commit this data given my architecture? I am thinking that I would post a ViewModel containing all the information I collected and pass it off to a Service method AddExecutive(AddExecutiveViewModel addExecutiveViewModel). Within the Service method I would create new instances of Person, Employee, Executive, and Contact, attach them to each other (via the Person object), and pass ALL the data off to a Repository method AddExecutive(Person person). The Repository method would then simply commit the data. Does that sound right? What would be a better solution?
So long as you maintain separation of concerns, you're good.
Controller: binds data to the service / model.
Service: enforces business logic, hands persistence to the repo.
Repo: performs ACID transactions and queries.
If your viewmodel is decoupled from any sort of framework concerns (i.e., a POCO), you should be good, since you maintain testability.
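A hedged sketch of the service method described in the question (the repository abstraction is assumed, and its signature is widened to take the whole graph, since the entities as shown hold raw FK ints rather than navigation properties):
    // Sketch only: compose the objects from the ViewModel, then hand them to the
    // repository, which inserts them and wires up the generated keys in one transaction.
    public class ExecutiveService
    {
        private readonly IExecutiveRepository _repository;   // assumed abstraction

        public ExecutiveService(IExecutiveRepository repository)
        {
            _repository = repository;
        }

        public void AddExecutive(AddExecutiveViewModel model)
        {
            var person = new Person { FirstName = model.FirstName, LastName = model.LastName };
            var contact = new Contact { PhoneNumber = model.PhoneNumber };
            var employee = new Employee { Salary = model.Salary };
            var executive = new Executive { OfficeNumber = model.OfficeNumber };

            _repository.AddExecutive(person, contact, employee, executive);
        }
    }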
When you talk about committing data, you're talking about a unit of work. So start there:
public ActionResult AddExecutive(AddExecutiveViewModel addExecutiveViewModel)
{
// simplified; no error handling
using (var uow = new UnitOfWork()) // or use constructor injection on the controller...
{
// ???
uow.Commit();
}
return RedirectToAction(// ...
}
Now your services come out of the unit of work (because both the unit of work and the repositories share an ObjectContext in the background; ObjectContext is the EF's "native" unit of work). So we can fill in the // ???:
public ActionResult AddExecutive(AddExecutiveViewModel model)
{
// simplified; no error handling
using (var uow = new UnitOfWork()) // or use constructor injection on the controller...
{
uow.EmployeeService.AddExecutive(model);
uow.Commit();
}
return RedirectToAction(// ...
}
uow.Commit() is a thin shell around ObjectContext.SaveChanges(). The unit of work is injected with the same ObjectContext as the repositories. The services are EF-ignorant.
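A hedged sketch of such a unit of work (the context and service names are assumptions):
    // Sketch only: the unit of work owns the ObjectContext, exposes the services
    // built on the same context, and Commit() forwards to SaveChanges().
    public class UnitOfWork : IDisposable
    {
        private readonly ObjectContext _context;

        public EmployeeService EmployeeService { get; private set; }

        public UnitOfWork()
        {
            _context = new MyEntities();                      // assumed generated context
            EmployeeService = new EmployeeService(_context);  // service shares the context
        }

        public void Commit()
        {
            _context.SaveChanges();
        }

        public void Dispose()
        {
            _context.Dispose();
        }
    }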
For a working (albeit in an early stage) example, see my open source repository/service project, Halfpipe.
