I'm currently working on an ASP.NET MVC project using NHibernate, and I need to keep track of changes on some entities in order to be able to build reports and queries over that data. For this reason I want to store the audit data in a table, but I'm trying to decide where to "hook" the auditing code.
On the NHibernate layer:
PRO: Powerful event system to track any change
PRO: Nothing can be changed in the application without notice (unless someone uses raw SQL...)
CON: As I have a generic repository, I have to filter out the entities worth auditing (I don't need to track everything).
CON: I don't have easy access to the controller and the action so I can only track basic operations (update, delete...). I can get the HttpContext at least to get some info.
On an Action Filter at Controller level:
PRO: Full information on the request and web application status. This way I can distinguish an "edit" from a "status change" and be more descriptive in the audit information.
CON: Someone can forget a filter, and then an important action could be taken without notice, which is a big drawback.
Any clue?
Update: See how to Create an Audit Log using NHibernate Events.
I think doing this at the repository level is a much better fit. Mostly because you may, in the future, decide to add some method of access to your repository which does not go through MVC (e.g., a WCF interface to the data).
So the question becomes, how do you address the cons you've listed about doing it on the NHibernate layer?
Filtering out the useful entities is simple enough. I would probably do this via a custom attribute on the entity type. You can tag the entities you want to track, or the ones you don't; whichever is easier.
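As a rough sketch of that attribute-based filtering (the attribute name, the listener class and the audit-writing step are all hypothetical), an NHibernate post-update event listener using the classic synchronous listener interface might look something like this:

    using System;
    using NHibernate.Event;

    // Hypothetical marker attribute: tag only the entities you want to track.
    [AttributeUsage(AttributeTargets.Class)]
    public class AuditedAttribute : Attribute { }

    public class AuditUpdateListener : IPostUpdateEventListener
    {
        public void OnPostUpdate(PostUpdateEvent @event)
        {
            // Filter out entities that are not tagged for auditing.
            if (!Attribute.IsDefined(@event.Entity.GetType(), typeof(AuditedAttribute)))
                return;

            var propertyNames = @event.Persister.PropertyNames;
            for (var i = 0; i < propertyNames.Length; i++)
            {
                var oldValue = @event.OldState == null ? null : @event.OldState[i];
                var newValue = @event.State[i];
                if (!Equals(oldValue, newValue))
                {
                    // Write an audit row here: entity name, property name,
                    // old/new value, user and timestamp.
                }
            }
        }
    }

The listener would then be registered on the NHibernate Configuration (for example by appending it to Configuration.EventListeners.PostUpdateEventListeners), so it fires on every flush regardless of which controller triggered the change.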
Figuring out what the controller really intended is harder. I'm going to dispute that you can "get the HttpContext"; I don't think it is a good idea to do this in a repository, because of the separation of concerns. The repository should not be dependent on the web.

One method would be to create custom methods on the repository for actions you'd like to track differently; this is especially attractive if there are other aspects of these edits which behave differently, such as different security. Another method is to examine the changes by comparing the old and new versions of the objects and derive the actual nature of the change from that. A third method is to make no attempt to derive the nature of the change, but just store the before and after versions in the log so that the person who reads the log can figure it out for themselves.
I'd rather put it in the data (NHibernate in your case) layer. Putting it in the controller and asking other people (or yourself, in the future) to implement controllers accordingly conflicts with object-oriented design principles.
I do this with NHibernate. Objects that require auditing implement an IAuditable interface, and I use an Interceptor to do the auditing on any object that implements IAuditable by intercepting OnFlushDirty, OnDelete, and OnSave.
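Roughly, such an interceptor might look like the sketch below (the IAuditable marker is just an empty interface here, and the audit-writing calls are placeholders for whatever your audit table needs):

    using NHibernate;
    using NHibernate.Type;

    public interface IAuditable { }   // marker interface on the entities to audit

    public class AuditInterceptor : EmptyInterceptor
    {
        public override bool OnSave(object entity, object id, object[] state,
                                    string[] propertyNames, IType[] types)
        {
            if (entity is IAuditable)
            {
                // Log an "insert" audit entry here.
            }
            return false;   // we did not modify the entity's state
        }

        public override bool OnFlushDirty(object entity, object id, object[] currentState,
                                          object[] previousState, string[] propertyNames, IType[] types)
        {
            if (entity is IAuditable)
            {
                // Compare previousState/currentState and log an "update" audit entry here.
            }
            return false;
        }

        public override void OnDelete(object entity, object id, object[] state,
                                      string[] propertyNames, IType[] types)
        {
            if (entity is IAuditable)
            {
                // Log a "delete" audit entry here.
            }
        }
    }

The interceptor is then supplied when opening the session (for example sessionFactory.OpenSession(new AuditInterceptor())) or set globally via Configuration.SetInterceptor.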
I'm aware that in model-view-controller, the Model is the class part.
If I have a User class and instantiate an object, the object must refer to a single user from the database.
So I'll have the CRUD methods on the user, for that specific user.
But if I need a function to run a SELECT * FROM Users, should I create a function within the User class? Or a function in a helper file? Or in the controller? Where should it go, in order to respect the MVC pattern?
I mean, it makes no sense to instantiate a User object just to run a function to display the Users table.
I'm not sure if this will raise "primarily opinion based" flags. I just don't know where those functions should go. If you consider the question worth closing, that's OK, but tell me in the comments which Stack Exchange community I should ask this in.
Back up a bit. Let's go foundational for a moment.
In the MVC pattern
The model is your state (in simple terms), meaning a representation of the data important to the business functionality you are working with
The view is a way of presenting the state to the user (NOTE user here could be another system, as you can use MVC patterns for service endpoints)
The controller ensures the model gets to the view and back out of the view
In a system designed with good separation of state, business functions are not present in the model, the view or the controller. You segregate business functionality into its own class library. Why? You never know when the application will require a mobile (native, not web) implementation or a desktop implementation, or maybe even become part of a Windows service.
As for getting at data, proper separation of concerns states the data access is separate not only from the model, view and controller, but also from the business functionality. This is why DALs are created.
Before going on, let's go to your questions.
should I create a function within the User class? This is an "active record" pattern, which is largely deprecated today, as it closely couples behavior and state. I am sure there are still some instances where it applies, but I would not use it.
Or a function in a helper file? Better option, as you can separate it out. But I am not fond of "everything in a single project" approach, personally.
Or in the controller? Never, despite Scott Gu's first MVC examples where he put LINQ to SQL (another groan?) in the controller.
Where should it go, in order to respect the MVC pattern?
Here is my suggestion:
Create a DAL project to access the data. One possible pattern that works nicely here is the repository pattern. If you utilize the same data type for your keys in all/most tables, you can create a generic repository and then derive individual versions for specific data. Okay, so this one is really old, but looking over it, it still has the high level concepts (https://gregorybeamer.wordpress.com/2010/08/10/generics-on-the-data-access-layer)
Create a core project for the business logic, if needed. I do this every time, as it allows me to test the ideas on the DAL through unit tests. Yes, I can test directly in the MVC project (and do), but I like a clean separation as you rarely find 0% business rules in a solution.
Have the core pull the data from the DAL project and the MVC project use the core project. Nice flow. Easy to unit test. etc.
If you want this in a single project, then separate out the bits into different folders so you can make individual projects, if needed, in the future.
For the love of all things good and holy, don't use the repository pattern. @GregoryABeamer has a great answer in all respects, except recommending you create repository instances to access your entities. Your ORM (most likely Entity Framework) covers this, and completely replaces the concepts of repositories and unit of work.
Simply, if you need something from your database, hit your ORM directly in your controller. If you prefer, you can still add a level of abstraction to hide the use of the ORM itself, such that you could more easily switch out the data access with another ORM, Web Api, etc. Just don't do a traditional unit of work with tens or hundreds of repository instances. If you're interested, you can read a series of posts I wrote about that on my blog. I use the term "repository" with my approach, but mostly just to contrast with the typical "generic" repository approach you find scattered all over the interwebs.
I'd use some kind of 'Repository' layer. Then my controller calls the UserRepository GetAll method and sends the data to the View layer.
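As a minimal sketch of that layout (the class names and the EF context are made up for illustration), the "list all users" query lives in the repository, and the controller just forwards the result to the view:

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;
    using System.Web.Mvc;

    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class AppDbContext : DbContext   // hypothetical EF code-first context
    {
        public DbSet<User> Users { get; set; }
    }

    public interface IUserRepository
    {
        IList<User> GetAll();
    }

    public class UserRepository : IUserRepository
    {
        private readonly AppDbContext _context;

        public UserRepository(AppDbContext context)
        {
            _context = context;
        }

        // The "SELECT * FROM Users" query lives here, not in the User class or the controller.
        public IList<User> GetAll()
        {
            return _context.Users.ToList();
        }
    }

    public class UsersController : Controller
    {
        private readonly IUserRepository _users;

        public UsersController(IUserRepository users)
        {
            _users = users;
        }

        public ActionResult Index()
        {
            return View(_users.GetAll());
        }
    }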
We are developing an MVC 5 application, and when we ran a security scan using Veracode we got the flaw below:
"Improperly Controlled Modification of Dynamically-Determined Object Attributes"
And it added this link as a reference for fixing it.
I tried applying the Bind attribute to my controllers' HTTP POST actions, and the issue was fixed.
So, in ASP.NET MVC, is it mandatory to use the Bind attribute on every POST action to avoid this security violation?
Or can I ignore this flaw, or is there any other way I can address it? Hard-coding and maintaining Bind attributes gets really difficult in real-world applications.
Please share your views.
It is not mandatory to use the Bind attribute.
The link you have posted is basically the dirtiest example they could have come up with. They are binding an EF model directly in the controller, which no real-world application would do, and I hate it when Microsoft shows you how easily you can go from DB to web by applying the worst-practice patterns without explaining that this is not something you would want to do in real life.
In real life you would create a (View)Model which is tailored to your View. This means the class will ONLY have the properties which you want to accept from the request, therefore you wouldn't really need the Bind attribute in most cases.
EF models are low level classes in your data layer and shouldn't be bound to any controllers IMO.
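As a small illustration (the class and property names are invented), the POST action binds a view model that exposes only the fields the form may change, so a sensitive property on the EF entity can never be over-posted and no Bind attribute is needed:

    using System.Web.Mvc;

    // EF entity - stays in the data layer and is never bound to a request.
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Salary { get; set; }   // must never be settable from a form post
    }

    // View model - contains only the properties the edit form is allowed to submit.
    public class EditEmployeeViewModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class EmployeesController : Controller
    {
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Edit(EditEmployeeViewModel model)
        {
            if (!ModelState.IsValid)
                return View(model);

            // Map model.Id / model.Name onto the Employee entity and save;
            // Salary is not on the view model, so it can never be over-posted.
            return RedirectToAction("Index");
        }
    }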
UPDATE:
Actually, at the top of the link they have posted this:
Note: It's a common practice to implement the repository pattern in order to create an abstraction layer between your controller and the data access layer. To keep these tutorials simple and focused on teaching how to use the Entity Framework itself, they don't use repositories. For information about how to implement repositories, see the ASP.NET Data Access Content Map.
However, this is just talking about the repository pattern, which is a good pattern to abstract your data layer, but the DTO which the repository pattern would return is still too low level for binding to a View.
You should create a model which is tailored to your view and in your controller or service layer you can do the infrastructure mapping between the different layers.
I'd like to create a good app in ASP.NET MVC 5 using the EF 6 code-first approach. I want it to be well-designed, i.e. having, generally speaking, the Presentation, Logic and Data layers separated. I want it to be testable :)
Here's my idea and some issues related to creating the application.
Presentation layer: it's my whole MVC - view models (not models), views, controllers.
I believe that validation should be done somewhere else (in my opinion it's a part of the business logic), but it's quite convenient to use attributes from the DataAnnotations namespace in ViewModels and check validation in the controller.
Logic layer: Services - classes (with their interfaces) that handle the business logic.
I put functions there like AddNewPerson(PersonViewModel Person), SendMessageToPerson(...).
They will use the DB context to perform their actions (there's a chance that not all of them will rely on the context). There's a direct connection between the service and the DB - I mean the service classes have a reference to the context.
Where should I do the mapping between ViewModel and Model? I've heard that the service is a bad place for it - so maybe in the controllers. I've heard that the service should do the work related to the DB exclusively.
Is that right? Is my picture of the service layer good?
Data layer: I've read a lot about the Repository and UoW patterns. There are some articles which suggest that EF6 already implements these two things. I don't want to create extra code if there's no need for such behavior. The question is: am I right to assume that I don't need them?
Here's my flow:
View <-> Controllers (using ViewModels) <-> Services (using Models) <-> DB.
I'm gonna use DI in my project.
What do you think about my project structure?
There is no reason to use a Unit of Work pattern with Entity Framework if you have no need to create a generic data access mechanism. You would only do this if you were:
Using a data access technology that did not natively support a Unit of Work pattern (EF does);
Wanting to be able to swap out data providers sometime in the future - however, this is not as easy as it might seem, as it's very hard NOT to introduce dependencies on specific data technologies even when using a Unit of Work (maybe even BECAUSE you are); or
Needing a way of unifying disparate data sources into an atomic transaction.
If none of those are the case, you most likely don't need a custom Unit of Work. A Repository, on the other hand can be useful... but with EF6 many of the benefits of a Repository are also available since EF6 provides mocking interfaces for testing. Regardless, stay away from a generic repository unless it's simply an implementation detail of your concrete repositories. Exposing generic repositories to your other layers is a huge abstraction leak...
I always use a Repository/Service/Façade pattern, though, to create a separation between my data and business layers (and UI and business, for that matter). It provides a convenient way to mock without having to mock your data access itself, and it decouples your logic from the specifics that are introduced by the LINQ layer used by EF (LINQ is relatively generic, but there are things that are specific to EF; a façade/repository/service interface decouples that).
In general, you're on the right path... However, let me point out that using Data Attributes on your view models is a good thing. This centralizes your validation on your model, rather than making you put validation logic all over the place.
You're correct that you need validation in your business logic as well, but your mistake is the assumption that you should only have it in the business logic. You need validation at all layers of your application. And in particular, your UI validation may have different requirements than your business logic validation.
For instance, you may implement creating a new account as a multi-step wizard in your UI, this would require different validation than your business layer because each step has only a subset of the validation of the total object. Or you might require that your mobile interface has different validation requirements from your web site (one might use a captcha, while the other might use a touch based human validation for instance).
Either way, it's important to keep in mind that validation matters at the client, at the server, and in the various layers in between...
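A small sketch of that split (all names are illustrative): DataAnnotations on the view model handle UI-level validation, while a rule such as "the email must be unique" is enforced again in the business layer regardless of which UI calls it:

    using System;
    using System.ComponentModel.DataAnnotations;

    // UI-level validation: DataAnnotations on the view model drive ModelState
    // and client-side validation in MVC.
    public class RegisterViewModel
    {
        [Required, StringLength(100)]
        public string Name { get; set; }

        [Required, EmailAddress]
        public string Email { get; set; }
    }

    // Business-level validation: enforced in the business layer no matter which UI calls it.
    public class RegistrationService
    {
        public void Register(string name, string email)
        {
            if (EmailAlreadyInUse(email))
                throw new InvalidOperationException("That email address is already registered.");

            // ... create and persist the new account here
        }

        private bool EmailAlreadyInUse(string email)
        {
            // ... query the data layer here
            return false;
        }
    }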
Ok, let’s clarify a few things...
The notion of ViewModel (or the actual wording of ViewModel) is something commonly attributed to Microsoft, but it really goes back to Martin Fowler. In fact, a ViewModel is nothing more than a simple class.
In reality, your Views are strongly typed to classes. Period. To avoid confusion, the wording ViewModel came up to help people understand that
“this class, will be used by your View”
hence why we call them ViewModel.
In addition, although many books, articles and examples use the word ViewModel, let's not forget that it's nothing more than just a Model.
In fact, did you ever notice why there is a Models folder inside an MVC application and not a ViewModels folder?
Also, ever notice how at the top of a View you have an @model directive and not an @viewmodel directive?
That's because everything could be a model.
By the way, for clarity, you are more than welcome to delete (or rename) the Models folder and create a new one called ViewModels if that helps.
Regardless of what you do, you'll ultimately call @model and not @viewmodel at the top of your pages.
Another similar example would be DTO classes. DTO classes are nothing more than regular classes but they are suffixed with DTO to help people (programmers) differentiate between all the other classes (including View Models).
In a recent project I've worked on, that notion wasn't fully grasped by the team, so instead of having their Views strongly typed to Models, they had their Views strongly typed to DTO classes. In theory and in practice everything was working, but they soon found out that they had properties such as IsVisible inside their DTOs when, in fact, these kinds of properties belong in your ViewModel classes, since they are used for UI logic.
So far, I haven’t answered your question but I do have a similar post regarding a quick architecture. You can read the post here
Another thing I'd like to point out is that if, and only if, your Service Layer plans on servicing other things such as a WinForms application, a mobile web site, etc., then your Service Layer should not be receiving ViewModels.
Your Service Layer should not have the notion of what is a ViewModel. It should only accept, receive, send, etc... POCO classes.
This means that from your Controller, inside your ActionResult, once the ModelState is Valid, you need to transform your ViewModel into a POCO which in turn, will be sent to the method inside your Service Layer.
In other words, I'd use/install the AutoMapper NuGet package and create some extension methods that would convert a ViewModel into a POCO and vice versa (a POCO into a ViewModel).
This way, your AddNewPerson() method would receive a Person object for its parameter instead of receiving a PersonViewModel parameter.
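A hand-rolled sketch of such extension methods is below (the property names are assumptions; AutoMapper can generate the equivalent mapping for you so you don't have to write it per property):

    // Assumed shapes for the question's classes - adjust to your real properties.
    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class PersonViewModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class PersonMappingExtensions
    {
        // ViewModel -> POCO, used before calling the Service Layer.
        public static Person ToPerson(this PersonViewModel viewModel)
        {
            return new Person { Id = viewModel.Id, Name = viewModel.Name };
        }

        // POCO -> ViewModel, used before handing data back to the view.
        public static PersonViewModel ToViewModel(this Person person)
        {
            return new PersonViewModel { Id = person.Id, Name = person.Name };
        }
    }

    // In the controller, once ModelState is valid:
    // _personService.AddNewPerson(model.ToPerson());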
Remember, this is only valid if and only if your Service Layer plans on servicing other things...
If that's not the case, then feel free to have your Service Layer receive, send, add, etc...ViewModels instead of POCOs. This is up to you and your team.
Remember, there are many ways to skin a cat.
Hope this helps.
I'm having some trouble deciding on a solution for my MVC application.
Background.
We have an EF model which we perform operations on via WCF Services (not data services).
I have an MVC application which has a number of Repositories that talk directly to the Services and return WCF types back to the controller that calls the repository method - for example a type called WCFUserEntity (it's not actually prefixed with WCF).
Inside the controller I plan to automap the WCFUserEntity to a ViewModel entity.
What is bugging me about this solution is that, because I'm returning a WCFUserEntity to the controller, I have to have a reference to the web service proxy in my controller, which doesn't sit well with me; I'd like my controllers to know nothing about where the repository got the data from. So another option for me is to do the automapping inside the repository and return the ViewModel entity to the controller. I can't find much around which supports this idea though, so really what I'm looking for is validation of this second solution, or help with a third.
thanks, Dom
You may want to consider a third option.
The use of ViewModelBuilders.
In your controller they would work like this:
var myViewModel = myViewModelBuilder.WithX().WithY().Build();
WithX and WithY would be methods that would add stuff to your viewmodel internally (within the builder, for example WithCountriesList() if you want to add a dropdown showing the countries in your view) and the Build method would return the internal viewmodel after adding all the bits with the WithXXX methods. This is so because most of the time you may want to add lists for dropdowns and things that are not part of your original model (your userEntity in this case).
This way, your controller doesn't know anything about how to build the viewmodel, your repository is also agnostic of viewmodels. All the work is done in the Builder. On the downside, you need to create a ViewModelBuilder for each ViewModel.
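A bare-bones sketch of such a builder (the view model, its properties and the countries list are made-up examples):

    using System.Collections.Generic;

    public class EditUserViewModel
    {
        public string UserName { get; set; }
        public IList<string> Countries { get; set; }   // backs the countries dropdown
    }

    public class EditUserViewModelBuilder
    {
        private readonly EditUserViewModel _viewModel = new EditUserViewModel();

        public EditUserViewModelBuilder WithUser(string userName)
        {
            _viewModel.UserName = userName;
            return this;   // return the builder so the calls can be chained
        }

        public EditUserViewModelBuilder WithCountriesList()
        {
            // In a real builder this list would come from a repository or lookup service.
            _viewModel.Countries = new List<string> { "Spain", "France", "Germany" };
            return this;
        }

        public EditUserViewModel Build()
        {
            return _viewModel;
        }
    }

    // In the controller:
    // var myViewModel = new EditUserViewModelBuilder().WithUser("dom").WithCountriesList().Build();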
I hope this helps.
How I would approach this might require some architecture changes, but I would suggest changing your WCF API to return ViewModels instead of entities.
For starters, think about bandwidth (which would be an issue if you are hosting the WCF service in Azure or the cloud). If your ViewModel only uses a few specific properties, why waste the bandwidth returning the other data? In high-traffic scenarios, this could cause a waste of traffic that ends up costing money. For example, if your view only displays a user and his questions, there's no reason to send his email, answers, point count, etc. over the wire.
Another issue to think about is eager loading. By having the WCF service return a ViewModel, you know you have all the data (even when it pertains to related entities) required from the view in one trip to the WCF service. You do not need to get the WCFUserEntity and then ask WCF for WCFDocumentEntities that are related to that specific user.
Finally, if your WCF API is built around ViewModels then you have a MUCH clearer understanding of the business processes involved. You know that this specific request (and view in the system) will give you this specific information, and if you need different information for a different view then you know that it's a completely different business request that has different business requirements. Using stack overflow as an example, it makes it trivial to see that this business process is asking for the current user with his related questions, while this business process is requesting the current user with his related answers.
Using ViewModels in your data retrieval WCF API means that your frontend layers do not necessarily know where the data came from, it just knows that it called a business process and got the data it needs. As far as it knows the data layer connected to the database directly instead of WCF.
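As a rough sketch of what such a view-model-oriented WCF contract could look like (all names invented for illustration):

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // The service returns exactly what the view needs - nothing more goes over the wire.
    [DataContract]
    public class UserQuestionsViewModel
    {
        [DataMember]
        public string DisplayName { get; set; }

        [DataMember]
        public List<string> QuestionTitles { get; set; }
    }

    [ServiceContract]
    public interface IUserViewService
    {
        // One operation per business request, eagerly loading everything the view requires.
        [OperationContract]
        UserQuestionsViewModel GetUserWithQuestions(int userId);
    }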
Edit:
After re-reading, this actually looks like your 3rd option. Most research on the net doesn't talk about this option, and I don't know why, but after running into some of the same frustrations you are having (plus others listed in this post), this is the way I have gone with my business layer. It makes more sense and is actually (IMHO) easier to manage.
I've been reading up on DDD a little bit, and I am confused how this would fit in when using an ORM like NHibernate.
Right now I have a .NET MVC application with fairly "fat" controllers, and I'm trying to figure out how best to fix that. Moving this business logic into the model layer would be the best way to do this, but I am unsure how one would do that.
My application is set up so that NHibernate's session is managed by an HttpModule (it gets the session/transaction out of my way), which is used by repositories that return the entity objects (think S#arp Architecture... it turns out I really duplicated a lot of their functionality in this). These repositories are used by DataServices, which right now are just wrappers around the Repositories (a one-to-one mapping between them, e.g. UserDataService takes a UserRepository, or actually a Repository<User>). These DataServices right now only ensure that the data annotations decorating the entity classes are checked when saving/updating.
In this way, my entities are really just data objects and do not contain any real logic. While I could put some things in the entity classes (e.g. an "Approve" method), when that action needs to do something like sending an e-mail, or touching other non-related objects, or, for instance, checking to see if there are any users with the same e-mail before approving, etc., then the entity would need access to other repositories, etc. Injecting these with an IoC container wouldn't work with NHibernate, so I'm assuming you'd have to use a factory pattern to get them. I don't see how you would mock those in tests, though.
So the next most logical way to do it, I would think, would be to essentially have a service per controller, and extract all of the work being done in the controller currently into methods in each service. I would think that this is breaking with the DDD idea though, as the logic is now no longer contained in the actual model objects.
The other way of looking at that I guess is that each of those services forms a single model with the data object that it works against (Separation of data storage fields and the logic that operates on it), but I just wanted to see what others are doing to solve the "fat controller" issue with DDD while using an ORM like NHibernate that works by returning populated data objects, and the repository model.
Updated
I guess my problem is how I'm looking at this: NHibernate seems to put business objects (entities) at the bottom of the stack, which repositories then act on. The repositories are used by services which may use multiple repositories and other services (email, file access) to do things, i.e.: App > Services > Repositories > Business Objects.
The pure DDD approach I'm reading about seems to reflect an Active Record bias, where the CRUD functions exist on the business objects (i.e. I call User.Delete directly instead of Repository.Delete from a service), and the actual business object handles the logic of things that need to be done in this instance (like emailing the user, deleting files belonging to the user, etc.), i.e.: App > (Services) > Business Objects > Repositories.
With NHibernate, it seems I would be better off using the first approach given the way NHibernate functions, and I am looking for confirmation of my logic. Or, if I'm just confused, some clarification on how this layered approach is supposed to work. My understanding is that if I have an "Approve" method that updates the User model, persists it and, let's say, emails a few people, then this method should go on the User entity object; but to allow for proper IoC so I can inject the messagingService, I need to do this in my service layer instead of on the User object.
From a "multiple UI" point of view this makes sense, as the logic to do things is taken out of my UI layer (MVC), and put into these services... but I'm essentially just factoring the logic out to another class instead of doing it directly in the controller, and if I am not ever going to have any other UI's involved, then I've just traded a "fat controller" for a "fat service", since the service is essentially going to encapsulate a method per controller action to do it's work.
DDD does not have an Active Record slant. Delete is not a method that should be on an Entity (like User) in DDD.
NHibernate does support a DDD approach very well, because of how completely divorced it remains from your entity classes.
"when that action needs to do something like sending an e-mail, or touching other non-related objects"
One piece of the puzzle it seems you are missing is Domain Events. A domain entity shouldn't send an email directly. It should raise an event in the Domain that some significant event has happened. Implement a class whose purpose is to send the email when the event occurs, and register it to listen for the Domain Event.
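A stripped-down sketch of that idea, loosely along the lines of Udi Dahan's Domain Events pattern (the event class, the static dispatcher and the email handler registration are all simplified placeholders):

    using System;
    using System.Collections.Generic;

    public interface IDomainEvent { }

    public class UserApprovedEvent : IDomainEvent
    {
        public string Email { get; set; }
    }

    // Deliberately simplified dispatcher; real implementations usually resolve
    // handlers from an IoC container and reset subscriptions per request.
    public static class DomainEvents
    {
        private static readonly List<Delegate> Handlers = new List<Delegate>();

        public static void Register<T>(Action<T> handler) where T : IDomainEvent
        {
            Handlers.Add(handler);
        }

        public static void Raise<T>(T domainEvent) where T : IDomainEvent
        {
            foreach (var handler in Handlers)
            {
                var action = handler as Action<T>;
                if (action != null)
                    action(domainEvent);
            }
        }
    }

    public class User
    {
        public virtual string Email { get; set; }
        public virtual bool IsApproved { get; protected set; }

        public virtual void Approve()
        {
            IsApproved = true;
            // The entity only announces what happened; it knows nothing about email infrastructure.
            DomainEvents.Raise(new UserApprovedEvent { Email = Email });
        }
    }

    // At application start-up, register the class whose purpose is to send the email:
    // DomainEvents.Register<UserApprovedEvent>(e => emailService.SendApprovalEmail(e.Email));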
"or, for instance, checking to see if there are any users that have the same e-mail before approving"
This should probably be checked before submitting the call to "approve," rather than in the function that does the approving. Push the decision up a level in calling code.
"So the next most logical way to do it, I would think, would be to essentially have a service per controller"
This can work, if it's done with the understanding that the service is an entry point for the client. The service's purpose here is to take in parameters in a DTO from the front end/client and translate that into method calls against an entity in order to perform the desired functionality.
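In code, such an entry-point service can stay very thin (the DTO, repository interface and entity method below are placeholders):

    // DTO coming in from the front end/client.
    public class ApproveUserRequest
    {
        public int UserId { get; set; }
    }

    public class User
    {
        public virtual bool IsApproved { get; protected set; }

        public virtual void Approve()
        {
            IsApproved = true;   // plus whatever other domain logic approval implies
        }
    }

    public interface IUserRepository
    {
        User GetById(int id);
        void SaveOrUpdate(User user);
    }

    // Entry-point service: translate the DTO into method calls against the entity.
    public class UserApprovalService
    {
        private readonly IUserRepository _users;

        public UserApprovalService(IUserRepository users)
        {
            _users = users;
        }

        public void Approve(ApproveUserRequest request)
        {
            var user = _users.GetById(request.UserId);
            user.Approve();                // the domain entity does the actual work
            _users.SaveOrUpdate(user);     // persist via the repository/NHibernate
        }
    }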
The only limitations NHibernate places on your classes are that all methods/properties must be virtual and that a class must have a default constructor (which can be internal or protected). Otherwise, it does not interfere with your object structure and can map to pretty complex models.
The short answer to your question is yes; in fact, I find NHibernate enhances DDD - you can focus on developing (and altering) your domain model with a code-first approach, then easily retrofit persistence later using NHibernate.
As you build out your domain model following DDD, I would expect that much of the business logic that's found its way into your MVC controllers should probably reside in your domain objects. In my first attempt at using ASP.NET MVC I quickly found myself in the same position as you - fat controllers and an anemic domain model.
To avoid this, I'm now following the approach of keeping a rich domain model that implements the business logic and using MVC's model as essentially simple data objects used by my views. This simplifies my controllers - they interact with my domain model and provide simple data objects (from the MVC model) to the views.
Updated
The pure DDD approach I'm reading about seems to reflect an Active Record bias...
To me, the Active Record pattern means entities are aware of their persistence mechanism and an entity maps directly to a database table record. This is one way of using NHibernate (e.g. see Castle ActiveRecord); however, I find this pollutes domain entities with knowledge of their persistence mechanism. Instead, I'll typically have a repository per aggregate root in my domain model which implements an abstract repository. The abstract repository provides basic CRUD methods such as:
public IList<TEntity> GetAll()
public TEntity GetById(int id)
public void SaveOrUpdate(TEntity entity)
public void Delete(TEntity entity)
.. which my concrete repositories can supplement/extend.
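A concrete NHibernate-backed version of that abstract repository might look roughly like this (a sketch with simplified session handling; the Order aggregate is just an example):

    using System.Collections.Generic;
    using NHibernate;

    // Abstract repository exposing the basic CRUD methods listed above.
    public abstract class NHibernateRepository<TEntity> where TEntity : class
    {
        protected readonly ISession Session;

        protected NHibernateRepository(ISession session)
        {
            Session = session;
        }

        public virtual IList<TEntity> GetAll()
        {
            return Session.QueryOver<TEntity>().List();
        }

        public virtual TEntity GetById(int id)
        {
            return Session.Get<TEntity>(id);
        }

        public virtual void SaveOrUpdate(TEntity entity)
        {
            Session.SaveOrUpdate(entity);
        }

        public virtual void Delete(TEntity entity)
        {
            Session.Delete(entity);
        }
    }

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual bool IsShipped { get; set; }
    }

    // Concrete repository per aggregate root, free to add aggregate-specific queries.
    public class OrderRepository : NHibernateRepository<Order>
    {
        public OrderRepository(ISession session) : base(session) { }

        public IList<Order> GetOpenOrders()
        {
            return Session.QueryOver<Order>()
                          .Where(o => o.IsShipped == false)
                          .List();
        }
    }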
See this post on the NHibernate FAQ, which I've based a lot of my stuff on. Also remember that NHibernate (depending on how you set up your mappings) will allow you to de-persist a complete object graph, i.e. your aggregate root plus all the objects hanging off it, and once you've finished working with it, can cascade saves through your entire object graph; this certainly isn't Active Record.
...since the service is essentially going to encapsulate a method per controller action to do it's work...
I still think you should consider what functionality you currently have in your controllers that would, more logically, be implemented within your domain objects. E.g. in your approval example, I think it would be sensible for an entity to expose an Approve method which does whatever it needs to do within the entity and, if, as in your example, it needs to send emails, delegates that to a service. Services should be reserved for cross-cutting concerns. Then, once you've finished working with your domain objects, pass them back to your repository to persist the changes.
A couple of books I've found useful on these topics are:
Domain-Driven Design by Eric Evans
Applying Domain-Driven Design and Patterns by Jimmy Nilsson