Is it possible to properly use DDD with all building blocks in a monolith application? - asp.net-mvc

I have watched some videos and read some blogs about it. SO has many questions and answers on the subject, but I cannot find an exact answer to my question anywhere.
Almost every question and answer lacks usage context.
I have one middle-sized ASP.NET MVC monolith application running in a single process on IIS. I want to (refactor and) go all the way with DDD (and CQRS, without separate storage for reads and writes for now), but to me it looks like an impossible mission without breaking some rules/guidelines/etc.
Bounded Context
For example, I have more than one BC. Each should not cross its boundaries, which means they should not share their storage. Right?
That is not possible if you use the well-known solution (scattered all over the web) of working with an NHibernate session and UoW.
Aggregate Root
Only one AR should be modified per transaction. When other ARs are involved, you should introduce eventual consistency (if I remember correctly, those are Eric Evans' words).
I try to do that, but it is not easy in an app like this. Pub/Sub does not work as desired, because when an event is published, all subscribers take their action within one transaction (NSB/MT work that way).
If event handlers want to modify other ARs, they should be executed in separate transactions, right?
Is it possible to deal with this in a monolith application (an application where all the code runs in one process)?

That is not possible if you use the well-known solution (scattered
all over the web)
[...]
when an event is published, all subscribers take their action
within one transaction
I think you're imposing useless and harmful constraints on yourself by trying to stick to some "state of the art".
Migrating an entire application to DDD + CQRS is a massive undertaking. Some areas of it don't have well-documented beaten paths yet, and you'll probably have a fair bit of exploration to do. My best advice would be to stay at a reasonable distance from "the way people do things": both in traditional ASP.NET web apps, because mainstream practices often don't match the way DDD+CQRS works, and in CQRS itself, because the case studies out there are few and far between and most probably very domain-specific, with a tendency to advocate heavy tools which may not make sense in your context.
You may need to think outside the box, adopt things incrementally, and refrain from gold-plating everything. You'll be better off starting with very simple implementations that suit your needs exactly than throwing a ton of tools and frameworks at your codebase.
For instance, do you really need a service bus, or could a simple Observer pattern suffice?
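As a minimal sketch of that idea (every type and event name below is hypothetical, not something from your codebase): an in-process observer registry lets each subscriber open, and commit, its own transaction, which is exactly the eventual consistency between Aggregate Roots you are after.

using System;
using System.Collections.Generic;

public interface IDomainEvent { }

// Hypothetical event raised when one Aggregate Root commits.
public sealed class OrderShipped : IDomainEvent
{
    public Guid OrderId { get; }
    public OrderShipped(Guid orderId) => OrderId = orderId;
}

// Bare-bones in-process pub/sub: no bus, no framework.
public static class DomainEvents
{
    private static readonly List<Delegate> Handlers = new List<Delegate>();

    public static void Subscribe<T>(Action<T> handler) where T : IDomainEvent
        => Handlers.Add(handler);

    public static void Publish<T>(T domainEvent) where T : IDomainEvent
    {
        foreach (var handler in Handlers)
            if (handler is Action<T> typed)
                typed(domainEvent); // each handler runs its own unit of work,
                                    // outside the publishing transaction
    }
}

A subscriber that updates another Aggregate Root would then open its own session and transaction inside its handler, rather than enlisting in the publisher's.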
Regarding NHibernate, most implementations out there use a (single) Session Per Request approach, but just because it's the most popular doesn't mean it's the only one. Have you tried using multiple ISessions (one for each BC) available at a more programmable level, such as per-action, or managed entirely manually? Conversely, have you considered sharing a database between Bounded Contexts at first and seeing for yourself whether that's bad or not?
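To illustrate the multiple-ISession option, here is a rough sketch (the configuration file names and the handler are assumptions, not anything NHibernate prescribes): one ISessionFactory per Bounded Context, with each unit of work opened and committed explicitly instead of per request.

using System;
using NHibernate;
using NHibernate.Cfg;

public static class Persistence
{
    // One factory per Bounded Context; each can point at its own
    // mappings and even its own connection string.
    public static readonly ISessionFactory Sales =
        new Configuration().Configure("sales.cfg.xml").BuildSessionFactory();

    public static readonly ISessionFactory Inventory =
        new Configuration().Configure("inventory.cfg.xml").BuildSessionFactory();
}

public class OrderPlacedHandler
{
    // Hypothetical handler reacting to an event from the Sales context.
    public void Handle(Guid orderId)
    {
        using (var session = Persistence.Inventory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            // ... reserve stock in the Inventory context ...
            tx.Commit(); // commits independently of the Sales transaction
        }
    }
}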

Related

Is Microsoft Orleans not really made to support legacy applications?

After a bunch of googling, I don't really see a good way to have Orleans work with an existing relational database backend.
Every example I have found for doing this relies on adding columns to deal with concurrency, and I haven't really seen any samples of how to use Orleans with, as is the typical example, the Northwind database or something similar.
This leads me to believe that Orleans is not really intended to be used in this way (because if it were, I would expect someone somewhere to have created a sample app demonstrating it by now). Am I missing something? Has anyone seen a sample project or blog post explaining how to use, say, an existing EF context with Orleans? This needs to be done without adding additional columns. I am working with data that is controlled by multiple teams in a mission-critical system, so there is no way I will get approval to start adding columns to hundreds of tables.
As @Milney says, to my knowledge there is nothing special in Orleans that would prevent you from using a normal EF DbContext, no extra columns required.
If, on the other hand, your issue is that other applications are causing concurrency issues from outside Orleans, then I think you'll need to deal with them as you would in any application (e.g. with optimistic concurrency checks).
But it's possible I'm misunderstanding your use case.
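For what it's worth, here is a minimal sketch of the first point (the grain, context, and entity names are all hypothetical; the concurrency handling assumes you mark an existing column as a concurrency token, which EF supports without adding new columns):

using System;
using System.Data.Entity;                 // EF6
using System.Data.Entity.Infrastructure;
using System.Threading.Tasks;
using Orleans;

// Stand-ins for an existing EF model.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class NorthwindContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Optimistic concurrency against an existing column;
        // no schema changes required.
        modelBuilder.Entity<Customer>()
            .Property(c => c.Name)
            .IsConcurrencyToken();
    }
}

public interface ICustomerGrain : IGrainWithGuidKey
{
    Task Rename(string newName);
}

public class CustomerGrain : Grain, ICustomerGrain
{
    public async Task Rename(string newName)
    {
        using (var db = new NorthwindContext())
        {
            var customer = await db.Customers.FindAsync(this.GetPrimaryKey());
            customer.Name = newName;
            try
            {
                await db.SaveChangesAsync();
            }
            catch (DbUpdateConcurrencyException)
            {
                // A writer outside Orleans won the race; reload and
                // retry, or surface the conflict to the caller.
                throw;
            }
        }
    }
}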

One big Rails application vs separate application

I am working on one big project. Now we need to add new functionality: scheduler management.
It's not the main task of the application, but it is quite a complicated part of it.
Is it a good idea to extract it as a separate application?
It would share some data (users and some other personal data) and it would use the same database.
The main reason I want to do it is to simplify the main application.
I understand that this may be too broad a question, but maybe you can share your experience of developing this kind of application, and maybe there are articles I can read on widely accepted best practices.
While others have mentioned some of the benefits of separating the applications, I'll touch on a couple of reasons why you might NOT want to separate the code.
You're going to need to maintain a single set of tests, especially if both applications are sharing the same database. If you don't do this, it's hard to predict when changing one application would break the other, especially if the applications start to need different things out of the database.
The two applications are obviously going to have a lot of overlap (users, for example). Separating into two applications could force you to duplicate code, since Rails by default has some pretty specific ideas about how a Rails application should be structured. If your applications share certain views, for example, what will you do to coordinate change across both applications when one of them wants to modify a view?
Neither of these is insurmountable, but Rails is easiest to develop with when you follow Rails conventions. As you begin to deviate, you end up having to do more work. Also, don't take either of these points as invalidating the other answers here; they are merely counterpoints that you need to think about.
If you can use the functionality in other projects too, then I would separate it.
Maybe you can create a Rails engine to share it easily between projects.
Consider asking yourself "What about re-usability?" Is the new scheduling functionality likely to be re-usable in another context with another application? If the answer is "yes," then perhaps making the scheduling management more modular in design will save you time in the future. If the answer is "no," then I would think you have more leeway in how tightly you integrate scheduling management with your existing app.
The practical difference here would be writing generalized scheduling-management functionality with assignable tables and methods upon which to act, versus 'hard coding' it against the data/code scheme of your 'one big project'.
Adding management tools to a web app often complicates deployment, in my experience. Especially when use of your application grows and you need to performance-tune it, dragging along a huge "backend" may be problematic.
For the sake of deployability, scalability, and testability, I prefer my applications to be small and focused. Sometimes it even pays off to expose the entire admin environment over REST/XML services.
But as other answers point out, this is more of an "it depends" question. These are my €0.02.

Business Logic Layer Pattern on Rails? MVCL

This is a broad question, and I would appreciate no short/dumb answers like: "Oh, that is the model's job, this question is retarded (period)".
PROBLEM: Where I work, people spent two years building a system for managing the made-to-order manufacturing process in as simple yet broad a way as possible, covering selling, buying, and assembly. The system is written in Ruby on Rails.
The app has been changed many times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: all bad.
The QUESTION is: is there a gem or pattern designed to handle the logic of a large Rails app? The logic should be able to talk fully to the models (whose only concerns would be data formatting and validation).
What I EXPECT is to move complexity out of the various controllers and hard-to-track callbacks into files responsible for handling a single business operation's logic. In some cases there is a need to wait for a response; in others, validating the input is enough and a background process takes over.
For example:
--> Sell some products (need to wait for the operation to finish)
1. Set up a view able to collect the product input
2. The controller gets the product list entered by the employee and calls the logic:
Logic::ExecuteWithResponse('sell', 'products', :prods => @product_list_with_qtt, :when => @date, :employee => current_user())
This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and more.
Keep in mind that a callback on SalesOrder is not enough, since the behavior depends on where it is called from (there is no field for that) and on the class of the user, among other things not visible to the model; in some cases it would also take too long for the model to process.
The inherent complexity of the business objects and logic will need to be tackled, understood, and internalized (or at least documented well) by the team. That will not go away. The recommendation I have is to grab all the logic peppered throughout the "fat" controllers and move it into either the domain objects, application services (a service layer), or simply transaction scripts (see Martin Fowler's "Patterns of Enterprise Application Architecture").
Ideally, all the business logic embedded in the labyrinth of callbacks can be refactored into the components described above to promote understanding. This gets rid of all the incidental complexity built up over time in the controllers. But even after getting rid of all that, I suspect a certain level of inherent complexity will remain in the problem domain.
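As a rough sketch of the transaction-script / application-service shape, in the C#-style of Fowler's book (every name below is hypothetical, and the collaborators would map onto your models or background jobs):

using System;
using System.Collections.Generic;

// Hypothetical collaborators; in a real app these would wrap models,
// repositories, or background workers.
public interface IInventory { void Reserve(IEnumerable<string> skus); }
public interface IScheduler { void Queue(IEnumerable<string> skus, DateTime due); }

public sealed class SaleRequest
{
    public List<string> Skus { get; } = new List<string>();
    public DateTime DueDate { get; set; }
}

// One business operation behind one entry point; the controller only
// validates input and delegates here, instead of driving a chain of
// model callbacks.
public sealed class SellProductsService
{
    private readonly IInventory _inventory;
    private readonly IScheduler _scheduler;

    public SellProductsService(IInventory inventory, IScheduler scheduler)
    {
        _inventory = inventory;
        _scheduler = scheduler;
    }

    public void Execute(SaleRequest request)
    {
        _inventory.Reserve(request.Skus);
        _scheduler.Queue(request.Skus, request.DueDate);
    }
}

The same shape translates directly to a plain Ruby class (a "service object") invoked from the controller.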
The idea of a Service Layer is to cleanly house high-level logic that is not associated with a particular model. If you are developing an enterprise-like system that integrates multiple services and has a number of data sources, you should look at Domain-Driven Design (DDD) by Eric Evans. Fowler's enterprise patterns book is good in this case, too.
Also look at DataMapper (version 2; the first version was similar to ActiveRecord). It has a better design approach for this type of system and has fewer limitations (on a conceptual level) than ActiveRecord in Rails.
Truthfully, it is only because of the dynamic nature of Ruby that people have spent so long tackling the conceptual problems of ActiveRecord and trying to fit it to enterprise needs, IMHO.
Some people out there have done work on service layers and callbacks: Pat Maddox (especially on callbacks) is one; Jay Fields (his very early work around the Rails presenter pattern, later replaced with a service-layer-like pattern) is another. I must admit I like the idea of adding an extra layer. To me, business logic just doesn't belong in models, and models should be decoupled in complex projects. I also prefer an extra layer to callbacks; for me, callbacks become too complex as their number increases.
This is far from full-blown DDD, and at the moment I don't know how DDD would work in Rails; however, I am sure it could, and I am sure someone out there is working on it right now. For my projects, at the moment, it would be a bit of overkill to implement; however, I would consider adding a service layer.

server side db programming: why?

Given that the database is generally the least scalable component (of a web application), are there any situations where one would put logic in procedures/triggers rather than keeping it in his favorite programming language (Ruby...) or her favorite web framework (...Rails!)?
Server-side logic is often much faster, even with a procedural approach.
You can fine-tune your grant options and hide the data you don't want to show.
Having all queries in one place is more convenient than having them scattered all around the code.
And here's a (very subjective) article in my blog on the reason I prefer stored procedures:
Schema Junk
BTW, I generally dislike triggers (as opposed to functions / stored procedures / packages).
They are a completely different story.
You're keeping the processing in the database, along with the data.
If you process on the application side, then you have to transfer the data out to an application process across the network, process it, and (optionally) send it back. You have network bandwidth/latency issues, plus memory overheads.
To clarify: if I have 10m rows of data, my two extreme scenarios are to a) pull those 10m rows across the network and process them on the application side, or b) process them in place in the database using the server and language (SQL) optimised for this purpose. Note that this is a generalisation and not a hard-and-fast rule, but it's the one I follow for most scenarios.
When many heterogeneous applications and various other systems need to access your single database and must be sure that, through all their operations, the data stays consistent without integrity conflicts. In that case you put your logic into triggers and stored procedures that offer an interface to external clients.
Maybe not for most web-based systems, but certainly for enterprise databases. Stored procedures and the like allow you much greater control over security and performance, as well as offering a bit of encapsulation for the database itself. You can change the schema all you want as long as the stored procedure interface remains the same.
In (almost) every situation you would keep the processing that is part of the database in the database. Application code cannot substitute for triggers; you won't get very far before you update the database and fail to fire the application's equivalent of the triggers (the first time you use the DBMS's management console, for instance).
Let the database do the database's work and let the application do the application's work. If you have a specific performance problem with the database, and that performance problem can be addressed by moving processing out of the database, then you might want to consider doing so.
But worrying about database performance without a database performance problem existing (which is what you seem to be doing here) is both silly and, sadly, apparently a preoccupation of many Stack Overflow posters.
Least scalable? SQL???
Look up "federating."
If the database is shared, having logic in the database is better in order to control everything that happens. If it's not it might just make the system overly complicated.
If you have multiple applications that talk to your database, stored procedures and triggers can enforce correctness more pervasively. Accordingly, if correctness is more important than convenience, putting logic in the database is sensible.
Scalability may be a red herring, though. Sometimes it's easier to express the behavior you want in the domain layer of an OO language, but it can actually be more expensive than doing it the idiomatic SQL way.
The security mechanism at a previous company was first built in the service layer, then pushed to the db side. The motivation was actually due to some limitations in a data access framework we were using. The solution turned out to be a bit buggy because our security model was complicated, but the upside was that bugs only had to be fixed in the database; we didn't have to worry about different clients following different rules.
Triggers mean 3rd-party apps can modify the database without creating logical inconsistencies.
If you do that, you are tying your business logic to your model. If you code all your business logic in T-SQL, you aren't going to have a lot of fun if later you need to use Oracle or what have you as your database server. Actually, I'm not sure I understand this question exactly. How do you think this would improve scalability? It really shouldn't.
Personally, I'm really not a fan of triggers, particularly in a database dedicated to a single application. I hate trying to track down why some data is inconsistent, only to find it's down to a poorly written trigger (and they can be tricky to get exactly right).
Security is another advantage of using stored procs. You do not have to set security at the table level if you don't use dynamic code (including in the stored proc). This means your users cannot do anything unless they have a proc to do it. This is one way of reducing the possibility of fraud.
Further, procs are easier to performance-tune than most application code, and even better, when one needs to change, that is all you have to push to production, not recompile the whole application.
Data integrity must be maintained at the database level. That means constraints, default values, foreign keys, and possibly triggers (if you have very complex rules or ones involving multiple tables). If you do not do this at the database level, you will eventually have integrity issues. People will write a quick fix for a problem and run the code in the query window, the required rules will be missed, and a larger problem will be created. A million new records will have to be imported through an ETL program that bypasses the application, because going through the application code, one record at a time, would take too long.
If you think you are building an application where scalability will be an issue, you need to hire a database professional and follow his or her suggestions for a design based on performance. Databases can scale to terabytes of data, but only if they are designed from the start by someone who is a specialist in this kind of thing. By the time the whole application is running slower than dirt and you have a new large client coming on board, it is too late. Database design must consider performance from the beginning, as it is very hard to redesign when you already have millions of records.
A good way to reduce the scalability of your data tier is to interact with it on a procedural basis (fetch a row... process... update a row, repeat).
This can be done within a stored procedure by using cursors, or within an application (fetch a row, process, update a row). The result (poor performance) is the same.
When people say they want to do processing in their application, it sometimes implies this kind of procedural interaction.
Sometimes it's necessary to treat data procedurally. However, in my experience, developers with limited database experience tend to design systems in a way that does not leverage the strengths of the platform, because they are not comfortable thinking in terms of set-based solutions. This can lead to severe performance issues.
For example, to add 1 to a count field for all rows in a table, the following is all that's needed:
UPDATE table SET cnt = cnt + 1
The procedural treatment of the same operation is likely to be orders of magnitude slower in execution, and developers can easily overlook concurrency issues that make their process inconsistent. For example, this kind of code is inconsistent under the available read isolation levels of many RDBMS platforms:
SELECT id,cnt FROM table
...
foreach row
...
UPDATE table SET cnt = row.cnt+1 WHERE id=row.id
...
I think that, just in terms of abstraction and ease of servicing a running environment, utilizing stored procedures can be a useful tool.
Procedure plan caching and a reduced number of network round trips in high-latency environments can also bring significant performance advantages.
It is also true that trying to be too clever, or working very complex problems in the RDBMS's half-baked procedural language, can easily become a recipe for disaster.
"Given that database is generally the least scalable component (of a web application), are there any situations where one would put logic in procedures/triggers over keeping it in his favorite programming language (ruby...) or her favorite web framework (...rails!)."
What makes you think that "scalability" is the only relevant concern in a system design? I agree with rexem's comment that it is very obvious that you are "not" biased...
Databases are sets of assertions of fact. Those sets become more valuable if they can also be guaranteed to conform to certain integrity rules. Those guarantees are not worth a dime if it is the applications that are expected to enforce such integrity. Triggers and sprocs are the only way SQL systems have to allow such guarantees to be offered by the DBMS itself.
That aspect outweighs "scalability" anytime, anywhere, anyhow.

Difference between BPM and App. workflow?

I know there is a lot of talk about BPM these days and I am conscious that some may see it to be a craze rather than a fundamentally important piece of software.
As someone from what most would call 'The Business', I have been doing my best to learn about BPM to ensure we continue to make decisions that not only make sense to the business, but IT as well.
I have noticed while reading that mention is sometimes made of application workflow when discussing BPM. I hadn't given this much thought until recently.
Therefore, what is the difference? When would you use one and not the other?
BPM is about the process and improving it, taking into account users and potentially more than one application; e.g., an ERP system may have more than one application to it (though there may be other uses of the term). Note that the process can be viewed independently of which applications or technologies are used.
Application workflow is how an application is used to get from A to B. Here there is a specific set of code that is used, and what matters is what happens over the course of the application getting from A to B. In this case, the application is front and center rather than the process.
Does that provide an answer? Another way to think of it is that multiple application workflows can make up a system, which is used in a process that can have BPM applied to it.
Late to the game, but workflow is to database as BPMS is to DBMS. (Convenient how the letters line up, huh?)
IOW, BPM(S) is traditionally meant to refer to a particular framework/application that allows you to manage business processes: defining them, storing them, versioning them, measuring them, etc. This is similar to how a DBMS manages databases.
Now, a workflow is a definition, much like a database is a definition. In the former case, it is a definition of operations/work (Fulfill Order), the steps thereof (Send Invoice), and rules/constraints on the work (If no stock, send notice). In the latter, similar case, it is a definition of data structure (CREATE TABLE) and constraints (InvoiceTotal must be > $0.00).
I think this is a potentially confusing subject, particularly as some development environments use a type of process-flow model to generate user-facing applications (I'm thinking of OutSystems here, for example).
But, for me, the distinction is crystal clear. Application workflow, as people talk about it, refers to a user's path through an application, i.e. the pages they complete/visit, the data they enter, etc., on their way to completing a transaction of some sort. "Application workflow" is a poor term for this, though; I think "application flow" would be more meaningful.
BPM, on the other hand, is about modelling and executing a workflow process. By workflow, in this context, I mean a series of discrete steps (or tasks) that have to be completed (either programmatically or via human interaction) in a certain order to complete a process. These tasks can be implemented as individual application modules (each with their own "application workflow"; see above). The job of the workflow engine is to make sure that these separate steps are assigned to the right people (or groups of people) in the right sequence, and that overall the process completes in an orderly way.
I don't think there's a clear answer to this at all. These are words, as opposed to theoretical concepts. If you add the word "checklist" into the mix, that just turns out to be a linear version of a process (though you can have conditionals in checklists, making them a workflow).
I am not sure how to help in reframing this question, but it's almost as if no answer can ever be possible. My own thoughts are at https://tallyfy.com/improving-efficiency-workflow-vs-business-process-management/
