Which layer should DbContext, Repository, and UnitOfWork be in? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
I want to use Layered Architecture and EF, Repository and UoW Pattern in the project.
Which layer should DBContext, Repository, and UnitOfWork be in?
DAL or BLL?

I would put your DbContext implementation in your DAL (Data Access Layer). You will probably get different opinions on this, but I would not implement the repository or unit of work patterns. Technically, the DbContext is the unit of work and each IDbSet is a repository. By implementing your own, you are adding an abstraction on top of an abstraction.
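To illustrate the point, here is a minimal EF6 sketch (entity and context names are hypothetical, not from the question) in which the DbContext itself acts as the unit of work and each DbSet<T> acts as a repository:

```csharp
using System.Data.Entity;

// Hypothetical entity and context, for illustration only.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }   // the "repository"
}

public class CustomerEditor
{
    public void Rename(int id, string newName)
    {
        using (var db = new ShopContext())           // the "unit of work"
        {
            var customer = db.Customers.Find(id);    // repository-style lookup
            customer.Name = newName;
            db.SaveChanges();                        // commit the unit of work
        }
    }
}
```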
More on this here and here.

DAL is an acronym for Data Access Layer. DbContext, repositories and Unit Of Work are related to working with data so you should definitely place them in DAL.

"Should" is probably not the correct word here, as there are many views on this subject.
If you want to implement these patterns out of the book, I would check out this link from the ASP.NET guys:
https://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
But I actually have started to layer it like this:
Controller / Logic <- Where business logic and boundary objects are created and transformed.
Repository <- Where logic related to persistence and to transforming entities and query objects lives.
Store <- Where the actual implementations of storage mechanisms reside. This is abstracted away behind an interface.
This way leverages the fact that both the business logic and repository logic are testable, decoupled and free to use whatever mechanism for persistence - or lack thereof. Without the rest of the application knowing anything about it.
This is of course true with other patterns as well; this is just my take on it.
The DbContext should never cross the boundary of the DAL. If you want to put your repositories or units of work there, you are free to; just do not let them leak their details or dependencies upwards. The DbContext should, in my opinion, be scoped as narrowly as possible to keep it as clean as possible - you never know where that context has been... please wear protection! But jokes aside: in a large async, multithreaded, multi-node application that passes DbContexts around everywhere, you will run into general concurrency and data-race issues.
What I like to do is start with an in-memory store that I inject into my controller. As soon as that store starts serving multiple entities and the persistence logic gets more and more complex, I refactor it into a store with a repository on top. Once all my tests pass and it works like I want, I start to create database- or file-system-based implementations of that store.
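As a rough sketch of that progression (all names are illustrative, not from the answer), the controller depends only on a store interface, so the in-memory implementation can later be swapped for a database-backed one:

```csharp
using System.Collections.Generic;

// Illustrative names only.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The store abstraction the controller depends on.
public interface ICustomerStore
{
    IEnumerable<Customer> All();
    void Add(Customer customer);
}

// First implementation: in-memory. A database- or file-system-backed
// store can replace it later without the controller changing.
public class InMemoryCustomerStore : ICustomerStore
{
    private readonly List<Customer> _items = new List<Customer>();

    public IEnumerable<Customer> All() => _items;
    public void Add(Customer customer) => _items.Add(customer);
}
```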
Again my opinions here, because this is a pretty general question, which has few "true" answers, just a lot of opinions.
Most opinions on this are valid, they just have different strengths and weaknesses, and the important part is to figure out which strengths you need, and how you will work with the weaknesses.

Your repository should hold references to DbSet<T> objects, and once you add, update or remove from one or more repositories, you should invoke SaveChanges from the UnitOfWork. Therefore you should place the DbContext inside the Unit of Work implementation.
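A minimal sketch of that arrangement (type names are hypothetical): the unit of work owns the DbContext, hands it to the repositories, and exposes the single SaveChanges call:

```csharp
using System;
using System.Data.Entity;

// Hypothetical entity and context.
public class Customer { public int Id { get; set; } }
public class MyContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public class CustomerRepository
{
    private readonly DbSet<Customer> _set;
    public CustomerRepository(MyContext context) { _set = context.Set<Customer>(); }

    public void Add(Customer customer) => _set.Add(customer);
    public Customer Get(int id) => _set.Find(id);
}

// The unit of work owns the context; repositories share its change tracker.
public class UnitOfWork : IDisposable
{
    private readonly MyContext _context = new MyContext();
    private CustomerRepository _customers;

    public CustomerRepository Customers =>
        _customers ?? (_customers = new CustomerRepository(_context));

    public void Save() => _context.SaveChanges();   // one commit for all repositories
    public void Dispose() => _context.Dispose();
}
```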

Related

Mapping violation of DRY principle? How to separate layers in ASP.NET MVC app [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
I am the lone developer on a new ASP.NET MVC project, so my ability to discuss design with peers is limited. I'd like to ensure that any change to my app in the future is confined to the layer being changed, so that the whole app doesn't have to change at once.
I'm planning to have 3 layers, each in its own project, consisting of a data layer, a service/business layer, and a presentation layer.
My data layer will use Entity Framework with a generic repository. This layer will return Entity types from the repo methods.
My service/business layer will be thin, but I wanted a nice separate place for business logic down the road. In the beginning, it will be nothing more than service classes for each of the major areas of my app, e.g. EmployeeService provides CRUD methods related to Employees that call upon the data layer. At some point, I may replace it with a Web API service layer and serve many clients.
My presentation layer will be ASP.NET MVC, with ViewModels and strongly typed views. Down the road, there may be additional clients.
I'm most interested in the communication between layers and project structure. My initial thoughts were to map data layer Entities to service layer Business Objs/Domain Objs or DTOs using AutoMapper, then mapping again to ViewModels in presentation. The mappings in the beginning would be mostly 1:1 though, so it feels redundant.
Is it a violation of DRY to have a DTO that is the same as the Entity class? That's the only way I know how to decouple from my database structure. Ideally, if I make a database change, I want to only have to change the Entities and the mappings. E.g., I totally rearrange how I'm storing something and I have all new entity classes/relationships; I'd like to map the new data implementation back to the same DTO so the higher layers never know.
The same repetition feeling comes up when mapping from service layer to presentation layer. My DTOs will get mapped to ViewModels. It's mostly the same stuff, but my thinking was that ViewModels also contain UI implementation details like sort fields and UI specific types like SelectListItem.
So is this actually repetition or does it just feel repetitive? Is there another way to accomplish my aim of isolating changes in layers? I'd like to be able to change or replace presentation, service or data layer with relative ease.
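The two-stage mapping described in the question could be sketched with AutoMapper roughly like this (type names are hypothetical; only the UI-specific members differ between the shapes, which is what makes the early mappings feel 1:1):

```csharp
using AutoMapper;

// Hypothetical shapes for each layer.
public class EmployeeEntity { public int Id { get; set; } public string Name { get; set; } }
public class EmployeeDto    { public int Id { get; set; } public string Name { get; set; } }
public class EmployeeViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string SortField { get; set; }   // UI-only detail lives in the view model
}

public static class MappingConfig
{
    // Entity -> DTO (data to service layer), DTO -> ViewModel (service to presentation).
    public static IMapper Build() =>
        new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<EmployeeEntity, EmployeeDto>();
            cfg.CreateMap<EmployeeDto, EmployeeViewModel>();
        }).CreateMapper();
}
```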
I recommend finding a solid (and SOLID) open source MVC project and follow that pattern. No need to re-invent the wheel -- .NET MVC is robust and there are plenty of projects that follow SOLID principles.
Take a look at NopCommerce, you can get the source code and you will see a well-architected app that answers many of your questions

Which data layer / handling architecture or pattern to choose for a non-enterprise web application? (MVC / EF) [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
I need some help in making a design choice for my application. It’s a fairly straightforward web application, definitely not enterprise class or enterprise-anything.
The architecture is standard MVC 5 / EF 6 / C# ASP.NET, and the pages talk to a back-end database that’s in SQL server, and all the tables have corresponding entity objects generated from VS 2013 using the EF designer and I don’t see that changing anytime in the near future. Therefore creating super abstract “what if my database changes” etc. separations is possibly pointless. I am a one-man operation so we're not talking huge teams etc.
What I want is a clean way to do CRUD and query operations on my database, using DbContext and LINQ operations – but I’m not good with database related code design. Here are my approaches
1. Static class with methods - Should I create a static class (my DAL) that holds my datacontext and then provide functions that controllers can call directly
e.g. MyStaticDBLib.GetCustomerById(id)
but this poses problems when we try to update records from disconnected instances (i.e. I create an object from a JSON response and need to ‘update’ my table). The good thing is I can centralize my operations in a Lib or DAL file. This is also quickly getting complicated and messy, because I can’t create methods for every scenario, so I end up with bits of LINQ code in my controllers and bits handled by these Lib methods
2. Class with context, held in a singleton, and called from controller
MyContext _cx = MyStaticDBLib.GetMyContext(“sessionKey”);
var xx = _cx.MyTable.Find(id); // and other LINQ operations
This feels a bit messy as my data query code is in my controllers now but at least I have clean context for each session. The other thinking here is LINQ-to-SQL already abstracts the data layer to some extent as long as the entities remain the same (the actual store can change), so why not just do this?
3. Use a generic repository and unit of work pattern – now we’re getting fancy. I’ve read a bit about this pattern, and there’s so much different advice, including some strongly suggesting that EF6 already builds the repository into its context and that this is therefore overkill. It does feel like overkill, but I need someone here to tell me that given my context
4. Something else? Some other clean way of handling basic database/CRUD
Right now I have the library type approach (1. above) and it's getting increasingly messy. I've read many articles and I'm struggling as there's so many different approaches, but I hope the context I've given can elicit a few responses as to what approach may suit me. I need to keep it simple, and I'm a one-man-operation for the near future.
Absolutely not #1. The context is not thread safe and you certainly wouldn't want it as a static var in a static class. You're just asking for your application to explode.
Option 2 is workable as long as you ensure that your singleton is thread-safe. In other words, it'd be a singleton per-thread, not for the entire application. Otherwise, the same problems with #1 apply.
Option 3 is typical but short-sighted. The repository/unit of work patterns are pretty much replaced by having an ORM. Wrapping Entity Framework in another layer like this only removes many of the benefits of working with Entity Framework while simultaneously increasing the friction involved in developing your application. In other words, it's a lose-lose and completely unnecessary.
So, I'll go with #4. If the app is simple enough, just use your context directly. Employ a DI container to inject your context into the controller and make it request-scoped (new context per request). If the application gets more complicated or you just really, really don't care for having a dependency on Entity Framework, then apply a service pattern, where you expose endpoints for specific datasets your application needs. Inject your context into the service class(es) and then inject your service(s) into your controllers. Hint: your service endpoints should return fully-formed data that has been completely queried from the database (i.e. return lists and similar enumerables, not queryables).
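A sketch of the service pattern described above (names are illustrative): the context is injected into the service, the service into the controller, and the endpoints return fully materialized lists:

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Hypothetical entity and context.
public class Customer { public int Id { get; set; } public bool IsActive { get; set; } }
public class MyContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public class CustomerService
{
    private readonly MyContext _context;   // request-scoped, supplied by the DI container
    public CustomerService(MyContext context) { _context = context; }

    // Returns fully-formed data, not an IQueryable.
    public List<Customer> GetActiveCustomers() =>
        _context.Customers.Where(c => c.IsActive).ToList();
}

public class CustomerController
{
    private readonly CustomerService _service;
    public CustomerController(CustomerService service) { _service = service; }
}
```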
While Chris's answer is a valid approach, another option is to use a very simple concrete repository/service façade. This is where you put all your data access code behind an interface layer, like IUserRepository.GetUsers(), and then in this code you have all your Entity Framework code.
The value here is separation of concerns, added testability (although EF6+ now allows mocking directly, so that's less of an issue) and more importantly, should you decide someday to change your database code, it's all in one place... Without a huge amount of overhead.
It's also a breeze to inject via dependency injection.
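A sketch of such a facade (IUserRepository.GetUsers() is from the answer; the other names are illustrative): all Entity Framework code stays behind the interface, so changing the database code touches exactly one class:

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class User { public int Id { get; set; } public string Name { get; set; } }

public class MyContext : DbContext
{
    public DbSet<User> Users { get; set; }
}

public interface IUserRepository
{
    IList<User> GetUsers();
}

// Concrete repository: the only place that knows about Entity Framework.
public class EfUserRepository : IUserRepository
{
    public IList<User> GetUsers()
    {
        using (var db = new MyContext())
        {
            return db.Users.ToList();
        }
    }
}
```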

The concept of ASP.NET Model design [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
First of all, this is the 2nd week of my MVC learning and I'm very curious about designing a better website structure using MVC.
In the ASP.NET MVC framework, it is highly recommended to write most business logic code in the model, not the controller, and my question is: what's the benefit behind that? Isn't it fine to manipulate data in the controller? Will that take more resources and time?
Any kind of ideas are welcomed. Please send me any article links if you have =]
#MystereMan is only partially correct. In true-blue MVC pattern, yes, all business logic belongs on the model. I'm not talking about ASP.NET MVC, here, but the actual abstract MVC pattern.
In practice, the model is most generally a representation of a table row from your database, so it is often not practical or even possible to place all your business logic in the "model". We tend to refer to a principally database-backed "model" in this sense as an "entity". The entity is a "model" of your database state (or of an alteration to that state in the case of an update). It's not really appropriate, in this sense, to tack on other logic not represented in or applicable to the database layer.
This is why most developers will add in what's called a "view model", a concept borrowed somewhat from a pattern called MVVM (Model-View-ViewModel). This pattern is an alternative to MVC, but the two are not mutually exclusive. In other words, it's possible, and many times even recommended, to mix and match the concepts from both into a sort of hybrid pattern.
In ASP.NET MVC, this usually manifests as just the addition of a "view model" to the existing MVC structure. Your model becomes your database-backed entity, the view model will contain a subset of the model data needed for the view in context and any additional data or logic only relevant to said view, the view utilizes this view model to render itself and the controller still ties everything together.
The basic effect is the same, though. The view model essentially assumes the role of the "model" of MVC, and yes, all of your business logic should go here. A well designed view will have only the minimal amount of server-side code to render it; for loops, simple if statements, etc. are okay, but calculations are not. The controller's job is merely to return the response, which means fetching whatever the view needs to render itself. It should not know about your data, nor care what data is being interacted with. It just passes whatever it gets to the view and sends the response.
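As a small sketch of that division (names are hypothetical): the calculation lives on the view model side, and the controller merely assembles what the view needs:

```csharp
using System.Linq;

public class Order { public decimal Price { get; set; } }

// The view model carries the data and logic the view renders.
public class OrderSummaryViewModel
{
    public int Count { get; set; }
    public decimal Total { get; set; }   // calculated here, never in the view
}

public class OrderController
{
    // The controller only fetches and passes along; it applies no business rules.
    public OrderSummaryViewModel Summary(Order[] orders) =>
        new OrderSummaryViewModel
        {
            Count = orders.Length,
            Total = orders.Sum(o => o.Price),
        };
}
```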
The point of MVC is separation of concerns - the controller should not know where the data comes from, or what format it's in, or what logic need be applied to retrieve it.
The model's job is to provide the data to the controller; no more, no less. The benefit is separation of concern - if you need to change business logic in the future you need only change it one place, in the model.
In terms of resources and time, I don't know that the program would necessarily be less efficient if data manipulation was done in the controller. But it would likely be poorly designed and be harder to maintain.
The MVC wikipedia article is a good place to start.
The idea is not resource utilization; the idea is Separation of Concerns, as mentioned by Mansfield.
Most people misunderstand the term Model as meaning data - raw data. That is not correct. The Model is the underlying logical structure of the data, meaning the data logically structured and filtered for the current context. Hence, only processed data should come out of your Model.
Because of this separation of concerns you can also do independent automated testing of the three parts. You can test your Model completely independently from your Controller, and vice versa.
So, where do you put your business logic? I've wrestled with this for a while, and I have come up with different locations based on the size and complexity of the app.
Tiny app: Put the business logic on your model using data annotations, and implement the IValidatableObject interface for any custom rules you can't realise with the annotations.
Medium app: Build out a service layer, where your service objects act as gateways over your domain models, and can validate any business rules. Here is a great resource for that: http://www.asp.net/mvc/tutorials/older-versions/models-%28data%29/validating-with-a-service-layer-cs
Big app: The service layer is now a facade over your business layer, where validation takes place in complex messaging frameworks and workflows.
I would point out that I like putting validation rules on my models/viewmodels, irrespective of the size of the app. I believe that they should know when they are in an error state, which is different than when a business rule has been violated.
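For the "tiny app" option above, a minimal sketch (the Booking type and its rule are invented for illustration) combining data annotations with IValidatableObject looks like this:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Booking : IValidatableObject
{
    [Required]                       // declarative rule via data annotation
    public string CustomerName { get; set; }

    public DateTime Start { get; set; }
    public DateTime End { get; set; }

    // Custom rule that the annotations cannot express.
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (End <= Start)
            yield return new ValidationResult(
                "End must be after Start.", new[] { nameof(End) });
    }
}
```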

Building a testable MVC3 & EF 4.1 app [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 4 years ago.
First, I apologize for the open-ended nature of this question. However, I've been paralyzed by this for months, and in spite of constant searching, still cannot get past this.
I have been working on an MVC/EF app for a while. I'm trying to get an understanding on how to design and build a testable MVC3 application backed by Entity Framework (4.1). You can see a few questions I've asked on the subject here, here, and here.
I'm trying not to over complicate it, but I want it to be a sound, loosely coupled design that can grow. The way I understand it, the following are pretty much bare minimum required components:
MVC app
This is very thin. As little logic as possible goes here. My views have as little conditional logic as possible, my view models are never more than POCOs, and my controllers simply handle the mapping between the view models and domain models, and calling out to services.
Service layer + interfaces (separate assemblies)
This is where all of my business logic goes. The goal of this is to be able to slap any thin client (forms app, mobile app, web service) on top of this to expose the guts of my application. Interfaces for the service layer sits in another assembly.
Core utilities/cross-cutting + interfaces (separate assemblies)
This is stuff I build that is not specific to my application, but is not part of the framework or any third party plugin I'm using. Again, interfaces to these components sit in their own assembly.
Repository (EF context)
This is the interface between my domain models and my database. My service layer uses this to retrieve/modify my database via the domain models.
Domain models (EF POCOs)
The EF4-generated POCOs. Some of these may be extended for convenience with nested or computed properties (such as Order.Total = Order.Details.Sum(d => d.Price)).
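Extending a generated POCO with such a computed property is typically done in a separate partial class, roughly like this (OrderDetail stands in for the generated detail entity):

```csharp
using System.Collections.Generic;
using System.Linq;

public class OrderDetail { public decimal Price { get; set; } }

// The part the EF designer would generate (simplified here).
public partial class Order
{
    public ICollection<OrderDetail> Details { get; set; }
}

// Hand-written partial class adding the convenience property.
public partial class Order
{
    public decimal Total => Details.Sum(d => d.Price);
}
```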
IoC container
This is what is used for injecting my concrete/fake dependencies (services/utilities) into the MVC app & services. Constructor injection is used exclusively throughout.
Here is where I'm struggling:
1) When integration testing is appropriate vs. unit testing. For example, will some assemblies require a mix of both, or is integration testing mainly for the MVC app and unit testing for my services & utilities?
2) Do I bother writing tests against the repository/domain model code? Of course in the case of POCOs, this is not applicable. But what about when I extend my POCOs w/ computed properties?
3) The proper pattern to use for repositories. I know this is very subjective, as every time I see this discussed, it seems everyone has a different approach. Therefore it makes it hard to figure out which way to go. For example, do I roll my own repositories, or just use EF (DbContext) directly?
4) When I write tests for my services, do I mock my repositories, or do I use SQLite to build a mock database and test against that? (See debates here and here.)
5) Is this an all-or-nothing affair, as in, if I do any testing at all, I should test everything? Or, is it a matter of any testing is better than no testing? If the latter, where are the more important areas to hit first (I'm thinking service layer)?
6) Are there any good books, articles, or sample apps that would help answer most of these questions for me?
I think that's enough for now. If this ends up being too open ended, let me know and I will gladly close. But again, I've already spent months trying to figure this out on my own with no luck.
This is a really complex question. Each of your points is large enough to be a separate question, so I will write only a short summary:
Integration testing and unit testing don't replace each other. You always need both if you want a well-tested application. A unit test is for testing logic in isolation (usually with the help of mocks, stubs, fakes, etc.), whereas an integration test is for testing that your components work correctly together (= no mocks, stubs or fakes). When to use an integration test and when to use a unit test really depends on the code you are testing and on the development approach you are following (for example TDD).
If your POCOs contain any logic, you should write unit tests for them. Logic in your repositories is usually heavily dependent on the database, so mocking the context and testing them without a database is usually useless; you should cover them with integration tests.
It really depends on what you expect from repositories. If a repository is only a stupid DbContext / DbSet wrapper, its value is zero, and it will most probably not make your code unit-testable, as described in some of the referenced debates. If it wraps queries (no LINQ-to-Entities in upper layers) and exposes access to aggregate roots, then the repository serves its purpose: separating data access and exposing a mockable interface.
This fully depends on the previous point. If you expose IQueryable, or methods accepting an Expression<Func<>> that is passed to an IQueryable internally, you cannot correctly mock the repository (well, you can, but you then need to pair each unit test with an integration test exercising the same logic), because LINQ-to-Entities is a "side effect" / leaky abstraction. If you completely wrap the queries inside the repository and use your own declarative query language (the specification pattern), you can mock them.
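A rough sketch of that specification approach (all names invented for illustration): the repository accepts a named specification instead of leaking IQueryable or raw expressions, so tests can mock the interface without touching LINQ-to-Entities:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public class Order { public bool IsPaid { get; set; } }

// A named, declarative query object.
public class Specification<T>
{
    public Expression<Func<T, bool>> Criteria { get; }
    public Specification(Expression<Func<T, bool>> criteria) { Criteria = criteria; }
}

// The repository surface exposes no IQueryable, so it mocks cleanly.
public interface IOrderRepository
{
    IList<Order> Find(Specification<Order> spec);
}

// Domain-level queries get names instead of ad-hoc lambdas in callers.
public static class OrderSpecs
{
    public static Specification<Order> Paid() =>
        new Specification<Order>(o => o.IsPaid);
}
```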
Any testing is better than no testing. Many methodologies expect high-density coverage. TDD even reaches 100% test coverage, because the test is always written first and there is no logic without a test. It comes down to the methodology you are following, and it is your professional decision whether a piece of code needs a test.
I don't think there is any "read this and you will know how to do that". This is software engineering, and software engineering is an art. There is no blueprint that works in every case (nor even in most cases).

Constructor Injection: How many dependencies is too many? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
I've been using manual constructor injection DI for a little bit now. One thing that I notice is that my constructors are starting to get rather long.
I have a class that depends on a bunch of little objects - anywhere between 6 and 10 sometimes. As I continue to break my application into smaller chunks, I could see this number increasing over time. Is this a common problem?
Obviously this is going to depend a great deal on the project. However, the basic question is this:
When do you start to get uncomfortable with the number of dependencies that a class has? What are some strategies that you use to reduce these dependencies?
I would not worry about it.
Instead, I would worry about the class being too complex.
A class with many dependencies that uses them all but has no loops or if statements is fine. In some code I was working on recently there were around 14 dependencies in a class. However, there was only one path through the code and no logical way to group the dependencies into better classes.
A class with a small number of dependencies that contains many branch statements or complex loop conditions should be simplified.
This may be a sign that the class with the 6-10 dependencies itself needs to be refactored.
I would think no more than three or four. If you are getting more than that, I would start thinking about how well you are abstracting your concerns. A single repository object, for example, should fulfill all of your data retrieval needs within the class in question.
Runcible,
Here is a link to the Castle Windsor project. It is an Inversion of Control container. These containers allow factory classes to collect your dependencies together and inject them as a single object into your constructor.
http://www.castleproject.org/container/index.html
I have heard good things about Windsor. Spring also makes an IoC container, and there are others.
A class with 6-10 dependencies is a code smell. It is an indication that the class is probably violating the Single Responsibility Principle.
What are some strategies that you use to reduce these dependencies?
Mark Seemann has made that task clear in his post Refactoring to Aggregate Services, and more so in his book Dependency Injection in .NET. The fact that your class has so many dependencies indicates there is more than one responsibility within the class. Often there is an implicit domain concept waiting to be made explicit by identifying it and making it into its own service. Generally speaking, most classes should never need more than 4-5 dependencies.
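The refactoring Seemann describes can be sketched like this (interface names are illustrative): several related dependencies collapse into one aggregate service that names the implicit concept:

```csharp
// Before: a consumer took these three collaborators separately.
public interface IOrderValidator { bool Validate(int orderId); }
public interface IOrderShipper { void Ship(int orderId); }
public interface IAccountsReceivable { void Bill(int orderId); }

// After: one aggregate service captures the implicit "order fulfillment"
// concept, shrinking the consumer's constructor to a single dependency.
public interface IOrderFulfillment
{
    void Fulfill(int orderId);
}

public class OrderFulfillment : IOrderFulfillment
{
    private readonly IOrderValidator _validator;
    private readonly IOrderShipper _shipper;
    private readonly IAccountsReceivable _receivable;

    public OrderFulfillment(IOrderValidator validator,
                            IOrderShipper shipper,
                            IAccountsReceivable receivable)
    {
        _validator = validator;
        _shipper = shipper;
        _receivable = receivable;
    }

    public void Fulfill(int orderId)
    {
        if (_validator.Validate(orderId))
        {
            _shipper.Ship(orderId);
            _receivable.Bill(orderId);
        }
    }
}
```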
You may also want to see if any of the parameters to your constructor should be combined into a single class as well (assuming that the parameters make sense as a class).
It might also be that you want to look at using the ServiceLocator pattern for some of your dependencies. This is particularly true if you're having to pass the dependencies down a long chain of constructors.
