Although I don't have much experience, in terms of organization and logical thinking I am a big fan of putting business logic only in the BLL layer and against putting any business logic in stored procedures.
I am starting a new project now and am not planning to put any business logic in stored procedures. However, I have a performance concern: if a business operation needs to check, say, 4 pieces of data from the database while executing, then implemented as a stored procedure it visits the database once, but implemented in the business layer it has to visit the database 4 times.
Could this have an unacceptable impact on my application's performance?
In an ASP.NET MVC world, could controllers act as the application layer, calling into my domain objects and services to get the work done (assuming the controllers strictly call the domain and do nothing more)? In the particular case I am dealing with there is very minimal application flow logic that I need to model, hence I am thinking about doing away with the application layer and calling the domain directly from within the controller.
Is this a fair approach?
I guess what you mean here is that your business logic is implemented using the Domain Model pattern. In that case, your application layer should be very simple, and by definition it shouldn't host any business logic. All business logic should reside in the domain layer; the methods in your application layer should resemble the following steps:
Load instance of an aggregate
Execute action
Persist the updated aggregate
Return response to the user
If that's all you do in your application layer, I see no reason not to put it in the controller.
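As a rough sketch only (the Invoice aggregate, its Approve action and IInvoiceRepository are hypothetical names, not something from the question), such a thin controller action might look like this:

using System;
using System.Web.Mvc;

public interface IInvoiceRepository
{
    Invoice GetById(Guid id);      // load an aggregate
    void Save(Invoice invoice);    // persist it again
}

public class Invoice
{
    public Guid Id { get; set; }
    public bool Approved { get; private set; }

    // The business rule lives on the aggregate, not in the controller.
    public void Approve()
    {
        if (Approved) throw new InvalidOperationException("Already approved.");
        Approved = true;
    }
}

public class InvoicesController : Controller
{
    private readonly IInvoiceRepository _invoices;

    public InvoicesController(IInvoiceRepository invoices)
    {
        _invoices = invoices;
    }

    [HttpPost]
    public ActionResult Approve(Guid id)
    {
        var invoice = _invoices.GetById(id);             // 1. load the aggregate
        invoice.Approve();                               // 2. execute the action
        _invoices.Save(invoice);                         // 3. persist the updated aggregate
        return RedirectToAction("Details", new { id });  // 4. return a response to the user
    }
}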
If, on the other hand, your domain model is anemic, and you do have business logic in the application layer, then I'd prefer to introduce a separate layer.
Treating your MVC controllers as your DDD application layer will present a few negative situations.
The biggest one is that your application layer (in DDD terms) is one of the key places where you model your domain/business processes. For example, in an application service you might have a method called:
PayrollService.CalculatePayslips();
This might involve checking a domain service or external system for the currently active employees; it might then need to query another domain service or external system for absences, pension deductions, etc. It will then need to call a domain service (or entity) to do the calculations.
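As a sketch under those assumptions (the IEmployeeDirectory, IAbsenceService, IPayslipCalculator and IPayslipRepository abstractions are invented here for illustration), the application service orchestrates those collaborators but holds no calculation rules itself:

using System.Collections.Generic;

public interface IEmployeeDirectory { IEnumerable<Employee> GetActiveEmployees(); }
public interface IAbsenceService { IEnumerable<Absence> GetAbsencesFor(int employeeId); }
public interface IPayslipCalculator { Payslip Calculate(Employee employee, IEnumerable<Absence> absences); }
public interface IPayslipRepository { void Add(Payslip payslip); }

public class Employee { public int Id { get; set; } }
public class Absence { }
public class Payslip { }

public class PayrollService
{
    private readonly IEmployeeDirectory _employees;
    private readonly IAbsenceService _absences;
    private readonly IPayslipCalculator _calculator;
    private readonly IPayslipRepository _payslips;

    public PayrollService(IEmployeeDirectory employees, IAbsenceService absences,
                          IPayslipCalculator calculator, IPayslipRepository payslips)
    {
        _employees = employees;
        _absences = absences;
        _calculator = calculator;
        _payslips = payslips;
    }

    public void CalculatePayslips()
    {
        // Gather data from domain services / external systems...
        foreach (var employee in _employees.GetActiveEmployees())
        {
            var absences = _absences.GetAbsencesFor(employee.Id);

            // ...then let a domain service (or entity) do the actual calculation.
            var payslip = _calculator.Calculate(employee, absences);
            _payslips.Add(payslip);
        }
    }
}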
Capturing domain/business logic like this in the MVC controllers would cause an architectural issue, should you want to trigger the task of calculating payslips from elsewhere in the system. Also if you ever wanted to expose this process as a web service, or even from another MVC controller, you would have references going across or up into your presentation layer. Solutions like these "might work", but they will contribute towards your application becoming Spaghetti Code or a Big Ball of Mud (both code smells in an architectural sense). You risk causing circular references and highly coupled code, which increases maintenance costs (both time, money, developer sanity, etc.) in the future.
At the end of the day, software development is a game of trade-offs. Architectural concerns become bigger issues, the bigger your application grows. For a lot of very small CRUD apps, architectural concerns are probably negligible. One of the key goals of DDD is to tackle complexity, so it could be argued that most DDD projects should be of sufficient complexity to be concerned about good enterprise architecture principles.
I would not do it.
The effort of creating a separate class for the application service and then invoking it from the controller is minimal. That cost is also only paid when you build the application.
However, the price you have to pay once you start to maintain the application is much higher due to the high coupling between the business layer and the presentation layer.
The cost of testing, reuse, refactoring, etc. is so much lower when you make sure to separate the different concerns in an application.
We are starting a new project. For compatibility with all devices and browsers we have decided to use ASP.NET MVC 4, HTML5 and CSS 3, and we want to use Entity Framework for communicating with the database.
The senior members of our team (the manager and the DBA, who are also new to MVC 4 and EF) are asking us to put everything in stored procedures when communicating with the database, so that maintenance becomes easy.
Is MVC 4 + EF + stored procedures the correct match if we go that way? Will I lose maintainability and performance if I instead go with Code First reverse engineering (the database tables are already ready, which is why I want to do it that way)? Please reply.
Below is the flow we want to follow; please correct me:
As the database is already ready, first we will write the stored procedures for communicating with the DB.
Create a new MVC 4 project, add an .edmx file (EF) and select the tables and stored procedures.
In the MVC controller or Web API, write the code that consumes the stored procedures.
There is nothing technically wrong with the ASP.NET MVC + EF + stored procedures approach at first sight.
But my experience shows that it is typically huge overkill. The common problem I see is conflicting interests between developers and DBAs. In the worst scenarios all DB-related work is controlled by the DBA, so if a developer wants to add or change some feature he needs to wait for the DBA to implement it (or wait for approval, which can also take a long time).
So I personally see that as a more bureaucratic way of developing.
My own perspective is to be more agile in development, and tools like Code First match that. Stored procedures can still play a major role during code/performance optimization, but they are not something to start with.
I agree that using stored procedures in the database is a good approach. Centralizing data validation and calculations in the database ensures data integrity. Client-side validation is important for the user experience but you must also ensure that you test the data validity in the database.
Using Entity Framework, you can generate entities which relate directly to tables in your database, or else you can design entities which use procedures for insert/update/delete operations rather than simple table updates.
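For instance, with EF 6 Code First you can route an entity's insert/update/delete through stored procedures rather than generated table statements. A minimal sketch, where the Customer entity and the procedure names are purely illustrative:

using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map insert/update/delete for Customer to stored procedures
        // instead of direct table updates. Procedure names are assumptions.
        modelBuilder.Entity<Customer>()
            .MapToStoredProcedures(p =>
                p.Insert(i => i.HasName("Customer_Insert"))
                 .Update(u => u.HasName("Customer_Update"))
                 .Delete(d => d.HasName("Customer_Delete")));
    }
}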
In MVC you will use the entities as models to manage your data interactions.
Good luck
This is my personal view; I am sure others will have different ones. Since you are asking this question I am hoping you are open to discussion, otherwise I wouldn't have bothered, as this topic is like a religious debate: lots of people have very strong opinions and are not likely to change them.
Personally, I don't think stored procedures are meant for writing business logic. They should be used for writing data access logic. I would only use a stored procedure if I wanted to optimize an expensive query, such as a dynamic search, but nothing else. You will get slightly less performance if you have your logic in the domain model, but it's not even noticeable in most situations.
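To illustrate that compromise (the SearchProducts procedure, the Product entity and the context are hypothetical), EF still lets you call one such procedure for the expensive search while the rest of the logic stays in C#:

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class CatalogContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    // The one place a stored procedure is used: an expensive dynamic search.
    public List<Product> SearchProducts(string term, decimal? maxPrice)
    {
        return Database.SqlQuery<Product>(
                "EXEC dbo.SearchProducts @term, @maxPrice",
                new SqlParameter("@term", term ?? string.Empty),
                new SqlParameter("@maxPrice", (object)maxPrice ?? DBNull.Value))
            .ToList();
    }
}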
One of the strong arguments for writing business logic in stored procedures is that you can easily change some logic by changing your stored procedure. But should we really go and change the business logic of a deployed application without doing proper testing? What will happen if you accidentally make a mistake? Doing a deployment is not such a big deal now with continuous builds, and I don't think that as a professional developer you should take that risk.
When you decide to write your logic in stored procedures, you give up all the object-oriented concepts and you end up writing the kind of procedural code we wrote maybe 10 years ago. The C# language has come a long way, and you will not be able to use those new language features in the heart of your application, which is the business logic. You also lose the Visual Studio features for refactoring code, the advanced and easy debugging features, etc.
I also don't like the idea of having triggers, as they are not visible in the source code. Imagine someone new in your team trying to add a new feature some time later; if he doesn't know that a trigger exists, he might write some incorrect logic.
If your application contains some complex business logic (and I am sure most applications do), you should have a domain model that contains not just the properties of your entities but also your logic. Otherwise you will be falling into the anti-pattern called the anemic domain model.
You will not be able to test your business logic by writing unit tests if you have your logic in stored procedures.
You will also not be able to deploy your business logic to multiple servers if you have it in stored procedures, should your site become really successful.
You will also not be using all the powerful capabilities of Entity Framework and LINQ if all your logic is in your stored procedures. You actually don't need an ORM mapper if that is the approach you are going to take.
This is what I would recommend for your project.
Even though you already have the database, you can still use the Code First approach of Entity Framework. You can download the EF Code First reverse engineer power tool and have the Code First code auto-generated for you. This is going to be a one-off thing; after that, if you have any more changes, you can make them directly in the database and update the Code First code accordingly. The Fluent API is a bit confusing at first, but you can easily learn it from the generated code.
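The generated mapping classes look roughly like the sketch below (the Customer table and its columns are made up here; the tool generates one such configuration class per table):

using System.Data.Entity.ModelConfiguration;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Roughly the shape of a mapping class the reverse engineer tool generates.
public class CustomerMap : EntityTypeConfiguration<Customer>
{
    public CustomerMap()
    {
        // Primary key
        HasKey(c => c.Id);

        // Properties
        Property(c => c.Name)
            .IsRequired()
            .HasMaxLength(100);

        // Table and column mappings
        ToTable("Customer");
        Property(c => c.Id).HasColumnName("CustomerId");
        Property(c => c.Name).HasColumnName("CustomerName");
    }
}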
Do not access your data context from the controller. Have a repository layer that will contain all your data access logic. You can access the repository from your controller. (This allows you to unit test your code by mocking the repository). There are lots of video tutorials on how to use the repository pattern on asp.net site.
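A minimal sketch of that shape (ICustomerRepository, StoreContext and Customer are placeholder names): the controller only knows the interface, so a unit test can pass in a mock instead of the EF-backed implementation.

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Web.Mvc;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public interface ICustomerRepository
{
    IEnumerable<Customer> GetAll();
    Customer Find(int id);
}

// EF-backed implementation; all data access logic stays in this layer.
public class CustomerRepository : ICustomerRepository
{
    private readonly StoreContext _context;

    public CustomerRepository(StoreContext context)
    {
        _context = context;
    }

    public IEnumerable<Customer> GetAll()
    {
        return _context.Customers.ToList();
    }

    public Customer Find(int id)
    {
        return _context.Customers.Find(id);
    }
}

// The controller depends only on the interface, never on the data context.
public class CustomersController : Controller
{
    private readonly ICustomerRepository _repository;

    public CustomersController(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        return View(_repository.GetAll());
    }
}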
Your domain model is going to be the entities that were generated by Entity Framework. Try to have your business logic in those models. It takes a little while to get used to the domain model pattern, but once you get used to it you will start to appreciate its benefits.
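For example (the Order entity and the discount rule below are invented for illustration, not part of this answer), the behaviour sits on the entity rather than in a stored procedure, which also makes it trivial to unit test:

using System;

public class Order
{
    public int Id { get; private set; }
    public decimal Subtotal { get; private set; }
    public bool IsShipped { get; private set; }

    public void AddLine(decimal unitPrice, int quantity)
    {
        if (IsShipped)
            throw new InvalidOperationException("Cannot modify a shipped order.");
        Subtotal += unitPrice * quantity;
    }

    // A business rule expressed in the domain model, easy to cover with unit tests.
    public decimal TotalWithDiscount()
    {
        var discount = Subtotal > 1000m ? 0.05m : 0m;
        return Subtotal * (1 - discount);
    }

    public void Ship()
    {
        if (Subtotal <= 0)
            throw new InvalidOperationException("Cannot ship an empty order.");
        IsShipped = true;
    }
}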
Hope this helps.
I am developing an application using MVC 3 and Entity Framework. It is a three-layered approach, with the presentation layer hosted on a web server and the business layer and data access layer on an application server. We are not exposing the ObjectContext to the presentation layer or the business layer: the ObjectContext is wrapped inside the data access layer only, which exposes data access and data persistence as methods (i.e. the data access logic is separated and implemented in the data access layer only). The business layer calls the data access layer methods and returns data to the presentation layer.
My concern is that most of the business layer methods exist just to access data, and they simply forward the call to the data access layer without doing anything else. So we are repeating the code in both layers. Is there a better approach to avoid this duplication?
Is it good practice to implement the data access logic in the business layer in a layered approach?
Can someone suggest a good implementation approach for a layered application with a 3-tier architecture?
If you find yourself that all that your business layer is doing is simply delegating the calls to the data access layer then this is a strong indication that this business layer is probably not necessary for your application as it brings no additional value. You can get rid of it and have the controllers talk directly to the data access layer (via an abstraction of course) - that's what most of the tutorials out there show with the EF data context.
In simple applications consisting mainly of CRUD actions, a business layer might not be necessary. As your application grows in complexity you might decide to introduce one later, but don't create too many abstractions from the beginning, especially if those abstractions don't bring you any additional value.
As previously mentioned there is no "right" way to set things up. I have found a few things over the years that help me decide which approach to take.
Two-Tiered
If your shop is stored-procedure heavy, with a lot of database programmers, and tends to put business logic in the database, then go with a two-tiered approach. In this situation you will find that your business layer generally just calls the data layer. Also, if your code base and pages tend to be small and you never repeat functionality, then go with a two-tiered approach. You will save a lot of time.
Three-Tiered
If your shop likes to have a lot of business logic in code and that logic is repeated everywhere, then go with three-tiered. Even if you notice a lot of your business layer just calling the data layer, you will find over time, as you add functionality, that it is a whole lot easier to just add code to the business layer. Also, if you have a large code base with a ton of pages and a lot of repeated logic, then I would definitely go with this approach.
I have found a mixture of both in the enterprise-level application at my work. The problematic areas are the ones where dynamic SQL (gag) was used and business logic was repeated over and over. I'm finding that as I refactor, I pull this code into a 3-tiered architecture and save myself a ton of time in the future because I can just reuse it.
I don't think it's necessarily bad to duplicate the code for logical separation. The time will come when this pays off. Say you replace SQL Server with Oracle, or Microsoft comes up with LINQ 2.0 and some implementations change. You will be thanking yourself for having kept it separate, while those who called the database right from the business layer will have to make modifications in both places - DAL and BLL.
For what it's worth. But again, there's no right answer; it comes down to utility, usability, convenience, and most importantly matching its purpose.
Hope this is of help.
In the banking sector, they use stored procedures for business logic. Their logic is moved into the DB instead of the business logic layer.
What is the reason that banks insist on stored procedures?
Regards
The stored procedures may have been there for 30 years on the mainframe. Client languages have come and gone in the meantime.
Anyway, you have to define "business logic": a lot of "business logic" comes down to being "data integrity" rules (such as "only set this column when the aggregate of child rows is zero"), which need to be transactional and atomic.
Related:
Why are mainframes still around? (SO)
How necessary or convenient is it to write portable SQL? (SO)
Why use Stored Procedures? (Blog)
Simply, my DB code will outlive your client code...
That's certainly not true of many banks I have worked in. Applications in banks are just like applications in any other company, and range from being coded almost entirely in stored procedures, to eschewing stored procedures altogether in favour of something like an ORM.
As for why they might choose to put logic in the stored procedures? Sometimes it's the sensible place to do it. I know the ALT.NET crowd (or whoever the NoSQL/ORM fanbois are for your platform of choice) will have you believe that stored procedures are evil and that ORMs are the only reasonable solution, but in the real world building real applications with real differing requirements, it's not that simple.
In the perfect application every business rule would exist only once.
I work for a shop that enforces business rules as much as possible in the database. In many cases, to achieve a better user experience, we perform identical validations on the client side. Not very DRY. Being a SPOT (single point of truth) purist, I hate this.
On the other end of the spectrum, some shops create dumb databases (the Rails community leans in this direction) and relegate the business logic to a separate tier. But even with this tack, some validation logic ends up repeated client side.
To further complicate the matter, I understand why the database should be treated as a fortress and so I agree that validations be enforced/repeated at the database.
Trying to enforce validations in one spot isn't easy in light of conflicting concerns: stay DRY, keep the database a fortress, and provide a good user experience. I have some ideas for overcoming this issue, but I imagine there are better ones.
Can we balance these conflicting concerns in a DRY manner?
Anyone who doesn't enforce the required business rules in the database where they belong is going to have bad data, simple as that. Data integrity is the job of the database. Databases are affected by many more sources than the application and to put required rules in the application only is short-sighted. If you do this you will get bad data from imports, from other applications when they connect, from ad hoc queries to fix large amounts of data (think increasing all the prices by 10%), etc. It is foolish in the extreme to enforce rules only through an application. But then again, I'm the person who has to fix the bad data that gets into poorly designed databases where application developers think they should do stuff only in the application.
The data will live on long past the application in many cases. You lose the rules when this happens as well.
In a way, a clean separation of concerns where all business logic exists in a single core location is a Utopian fantasy that's hard to uphold
Can't see why.
handle all of the business logic in a separate tier (in Rails the models would house “most” of this)
Correct. Django does this also.
some business logic does end up spilling into other places (in Rails it might spill over into the controllers)
Not really. The business logic can -- and should -- be in the model tier. Some of it will be encoded as methods of classes, libraries, and other bundles of logic that can be used elsewhere. Django does this with Form objects that validate input. These come from the model, but are used as part of the front-end HTML as well as any bulk loads.
There's no reason for business logic to be defined elsewhere. It can be used in other tiers, but it should be defined in the model.
Use an ORM layer to generate the SQL from the model. Everything in one place.
[database] built on constraints that cause it to reject bad data
I think that's a misreading of the Database As A Fortress post. It says "A solid data model", "reject data that does not belong, and to prevent relationships that do not make sense". These are simple declarative referential integrity.
An ORM layer can generate this from the model.
The concerns of database-as-a-fortress and single-point-of-truth are one and the same thing. They are not mutually exclusive. And so there is no need what so ever to "balance them".
What you call "database-as-a-fortress" really is the only possible way to implement single-point-of-truth.
"database-as-a-fortress" is a good thing. It means that the database forces anyone to stay away who does not want to comply (to the truth that the database is supposed to adhere to). That so many people find that a problem (rather than the solution that it really is), says everything about how the majority of database users thinks about the importance of "truth" (as opposed to "I have this shit load of objects I need to persist, and I just want the database to do that despite any business rule what so ever.").
The Model-View-Controller framework is one way to solve this issue. The rails community uses a similar concept...
The basic idea is that all business logic is handled in the controller, and where rules need to be applied in the view or model they are passed into it by the controller.
Well, in general, business rules are much more than sets of constraints, so it's obvious not all business rules can be placed in the database. On the other hand, HLGEM pointed out that it's naive to expect the application to handle all data validation: I can confirm this from experience.
I don't think there's a nice way to put all of the business rules in one place and have them applied on the client, the server side and the database. I live with business rules at the entity level (as hibernate recreates them at the database level) and the remaining rules at the service level.
Database-as-a-fortress is nonsense for relational databases. You need an OODB for that.