Does anyone know if there is a definitive best practices guide for GORM? I find information scattered across different blogs and other resources, but I can't find a definitive guide. I find advice such as keeping database-related work out of controllers and in the service layer, for example. However, it'd just be nice to see what the suggested approach for writing a simple web app is. Should we always use command objects in controllers and pass those command objects to services? Should we store those command objects in session rather than storing actual domain objects in session, which seems to cause a lot of lazy init exceptions, etc.?
I've tried to piece together the information that I've found, but if anyone knows of a comprehensive resource, that would be great.
There is some great information available from the GORM Gotchas series. It's in three parts.
GORM Gotchas (Part 1)
GORM Gotchas (Part 2)
GORM Gotchas (Part 3)
To answer your specific questions about Services and Command objects.
Q: "Should we always use command objects and Services?"
A: Some would argue that it's overkill, but I personally think it's a great pattern that makes things much easier to test and extend. It may seem like a lot of effort, but it pays off in large projects.
Q: "Should we store command objects in session rather than domain objects?"
A: Store as little in the session as possible (if anything at all). If you have to store something there, it's best that it be small and lightweight. Command objects are (typically) going to be better for this than a Domain class.
Update (11/19/2014)
I'd like to highlight a very good series that outlines a lot of the potential issues you will face when using GORM and Hibernate. It's very long, but worth reading if you plan on using GORM/Hibernate in a large-scale, multi-user project. Don't be put off by the negative tone, because the series contains a lot of useful information.
I don't like Hibernate (and Grails), PART 1
I don't like Hibernate/Grails, part 2, repeatable finder problem: trust in nothing!
I don't like Grails/Hibernate part 3. DuplicateKeyException: Catch it if you can.
I don't like Grails/Hibernate, part 4. Hibernate proxy objects.
I don't like Hibernate/Grails part 5: auto-saving and auto-flushing
I don't like Hibernate/Grails part 6, how to save objects using refresh()
I don't like Hibernate/Grails part 7: working on more complex project
I don't like Hibernate/Grails, part 8, but some like Hibernate and Grails. Why?
I don't like Hibernate/Grails part 9: Testable code
I don't like Hibernate/Grails part 10: Repeatable finder, lessons learned
I don't like Hibernate/Grails part 11. Final thoughts.
The book Grails in Action talks a lot about best practices in Grails. At the time of this writing it isn't published in its final form, but you can buy and read the preview.
I was recently looking for the same answers you are asking and that book has helped me a lot.
After a bunch of googling, I don't really see a good way to have Orleans work with an existing relational database backend.
Every example I have found for doing this relies on adding columns to deal with concurrency, and I haven't really seen any samples of how to use Orleans with, say, the typical example of the Northwind database.
This leads me to believe that Orleans is not really intended to be used in this way (because if it were, I would expect someone somewhere to have created a sample app demonstrating it by now). Am I missing something? Has anyone seen a sample project or blog post explaining how to use, say, an existing EF context with Orleans? This needs to be done without adding additional columns. I am working with data that is controlled by multiple teams in a mission-critical system, so there is no way I will get approval to start adding columns to hundreds of tables.
As @Milney says, to my knowledge there is nothing special in Orleans that would prevent you from using a normal EF DbContext; no extra columns required.
If, on the other hand, your issue is that other applications are causing concurrency issues from outside Orleans, then I think you'll need to deal with them as you would in any application (e.g. with optimistic concurrency checks).
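To make that concrete, here is a minimal sketch, assuming EF Core and Orleans; all class, property, and connection-string names are hypothetical. The concurrency check is configured on an existing column, so the schema stays untouched:

```csharp
// Minimal sketch (hypothetical names): an Orleans grain wrapping a plain
// EF Core DbContext, with optimistic concurrency on an existing column
// so no new columns are added.
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Orleans;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrdersContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Use an existing column as the concurrency token instead of
        // adding a rowversion column to the table.
        modelBuilder.Entity<Order>()
                    .Property(o => o.Status)
                    .IsConcurrencyToken();
    }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer("<your connection string>");
}

public interface IOrderGrain : IGrainWithIntegerKey
{
    Task MarkDispatched();
}

public class OrderGrain : Grain, IOrderGrain
{
    public async Task MarkDispatched()
    {
        using var db = new OrdersContext();
        var order = await db.Orders.FindAsync((int)this.GetPrimaryKeyLong());
        order.Status = "Dispatched";
        try
        {
            await db.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException)
        {
            // Another application changed the row since it was read;
            // reload and retry, or surface the conflict to the caller.
            throw;
        }
    }
}
```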
But it's possible I'm misunderstanding your use case.
I watched some videos and read some blogs about it. SO has many questions and answers on the subject, but I can't find an exact answer to my question anywhere.
Almost every question and answer lacks usage context.
I have one mid-sized ASP.NET MVC monolith application running in one process on IIS. I want to (refactor and) go all the way with DDD (and CQRS, without separate storage for reads and writes for now), but it looks like an impossible mission without breaking some rules/guidelines.
Bounded Context
For example, I have more than one BC. They should not cross their boundaries, which means they should not share storage. Right?
That is not possible if you use the well-known (scattered everywhere over the web) solution for working with the NHibernate session and UoW.
Aggregate Root
Only one AR should be modified per transaction. When other ARs are involved, eventual consistency should be introduced (if I remember correctly, those are Eric Evans' words).
I try to do that, but it is not easy in an app like this. Pub/Sub does not work as desired, because when an event is published all subscribers take their action within the same transaction (that's how NSB/MT do it).
If event handlers want to modify other ARs, they should be executed in separate transactions, right?
Is it possible to deal with this in a monolith application (one where all the code runs in a single process)?
That is not possible if you use the well-known (scattered everywhere over the web) solution
[...]
when an event is published all subscribers take their action within the same transaction
I think you're setting yourself useless and harmful constraints by trying to stick to some "state of the art".
Migrating an entire application to DDD + CQRS is a massive undertaking. Some areas of it don't have well-documented beaten paths yet and you'll probably have a fair bit of exploration to do. My best advice would be to stay at a reasonable distance from "the way people do things". Both in traditional ASP.Net web apps because mainstream practices often don't match the way DDD+CQRS works, and in CQRS itself because the case studies out there are few and far between and most probably very domain specific, with a tendency to advocate the use of heavy tools which may not make sense in your context.
You may need to think out of the box, adopt things incrementally and refrain from goldplating everything. You'll be better off starting with very simple implementations that suit your needs exactly than throwing a ton of tools and frameworks at your codebase.
For instance, do you really need a service bus, or could a simple Observer pattern suffice? (See the sketch at the end of this answer.)
Regarding NHibernate, most implementations out there use a (single) session-per-request approach, but just because it's the most popular doesn't mean it's the only one. Have you tried using multiple ISessions (one for each BC), scoped at a more programmable level such as per-action, or managed entirely manually? Conversely, have you considered sharing a database between Bounded Contexts at first and seeing for yourself whether that's actually bad?
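To make the Observer suggestion concrete, here is a rough sketch (all type names are hypothetical) of an in-process dispatcher that runs each subscriber in its own NHibernate session and transaction, which keeps changes to other ARs out of the publishing transaction and gives you eventual consistency between aggregates:

```csharp
// Rough sketch (hypothetical names): in-process pub/sub where every handler
// gets its own NHibernate session and transaction, instead of all handlers
// joining the transaction that published the event.
using System;
using System.Collections.Generic;
using NHibernate;

public interface IDomainEvent { }

public interface IHandle<TEvent> where TEvent : IDomainEvent
{
    // Handlers receive a fresh session; they never reuse the one
    // that published the event.
    void Handle(TEvent domainEvent, ISession session);
}

public class InProcessEventDispatcher
{
    private readonly ISessionFactory _sessionFactory;
    private readonly Dictionary<Type, List<object>> _handlers = new Dictionary<Type, List<object>>();

    public InProcessEventDispatcher(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void Subscribe<TEvent>(IHandle<TEvent> handler) where TEvent : IDomainEvent
    {
        List<object> list;
        if (!_handlers.TryGetValue(typeof(TEvent), out list))
            _handlers[typeof(TEvent)] = list = new List<object>();
        list.Add(handler);
    }

    public void Publish<TEvent>(TEvent domainEvent) where TEvent : IDomainEvent
    {
        List<object> list;
        if (!_handlers.TryGetValue(typeof(TEvent), out list))
            return;

        foreach (IHandle<TEvent> handler in list)
        {
            // A separate session + transaction per handler: if a handler
            // modifies another aggregate root, that change is committed
            // independently (eventual consistency, not one big transaction).
            using (var session = _sessionFactory.OpenSession())
            using (var tx = session.BeginTransaction())
            {
                handler.Handle(domainEvent, session);
                tx.Commit();
            }
        }
    }
}
```

This is obviously simplistic (no error handling, ordering, or retries), but it shows that a plain Observer gets you a separate transaction per handler without bringing in a bus.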
I'm building an MVC4 app. I've used EF5 model first and kept it pretty simple. This isn't going to be a huge application; there will only ever be 4 or 5 people on it at once, and all users will be authenticated before being able to access any part of the application. It's very simply a place-order, dispatcher-sees-order, dispatcher-completes-order sort of application.
Basically, my question is: do I need to be worrying about repositories and ViewModels if the size and scope of my application is so small? Any view that is strongly typed to a domain entity uses all of the properties within that entity. I'm using TryUpdateModel in my controllers and have read some things saying this can cause a lot of problems, but not a lot of detail on exactly what those problems can be. I don't want to use an incredibly complicated pattern for a very simple app.
Hopefully I've given enough detail, if anyone wants to see my code just ask, I'm really at a roadblock here though, and could really use some advice from the community. Thanks so much!
ViewModels: Yes
I only see bad points when passing EF entities directly to a view:
You need to do manual whitelisting or blacklisting to prevent over-posting and mass assignment (see the sketch after this list)
It becomes very easy to accidentally lazy load extra data from your view, resulting in select N+1 problems
In my personal opinion, a model should closely resemble the information displayed on the view, and in most cases (except for basic CRUD stuff) a view contains information from more than one entity
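To illustrate the first point, here is a minimal sketch (hypothetical names, ASP.NET MVC style): the action binds to a view model that exposes only the editable fields, so over-posting a sensitive entity property simply isn't possible:

```csharp
// Minimal sketch (hypothetical names): bind to a view model, not the entity,
// so only whitelisted fields can ever arrive from a form post.
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

// EF entity: contains a field a user must never be able to set directly.
public class Order
{
    public int Id { get; set; }
    public string Description { get; set; }
    public bool IsApproved { get; set; }   // must never come from a form post
}

// View model: only the fields the edit screen actually shows.
public class EditOrderViewModel
{
    public int Id { get; set; }

    [Required, StringLength(200)]
    public string Description { get; set; }
}

public class OrdersController : Controller
{
    [HttpPost]
    public ActionResult Edit(EditOrderViewModel model)   // over-posting IsApproved is impossible
    {
        if (!ModelState.IsValid)
            return View(model);

        // Load the tracked entity, copy model.Description onto it, save, then:
        return RedirectToAction("Index");
    }
}
```

Mapping between the entity and the view model can be done by hand or with a mapping library; either way the view never touches the EF entity, which also avoids accidental lazy loading from the view.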
Repositories: No
The Entity Framework DbContext already is an implementation of the Repository and Unit of Work patterns. If you want everything to be testable, just test against a separate database. If you want to make things loosely coupled, there are ways to do that with EF without using repositories too. To be honest, I really don't understand the popularity of custom repositories.
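As a sketch of what that looks like in practice (names hypothetical), the controller can depend on the context directly and use it as both repository and unit of work; for tests you simply point it at a separate database:

```csharp
// Sketch (hypothetical names): the DbContext used directly as repository and
// unit of work, injected into the controller with no custom repository layer.
using System.Data.Entity;
using System.Linq;
using System.Web.Mvc;

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }   // Order entity as in the sketch above
}

public class DispatchController : Controller
{
    private readonly ShopContext _db;

    // Supplied by your DI container; for testing, point the context at a
    // separate test database instead of mocking a repository.
    public DispatchController(ShopContext db)
    {
        _db = db;
    }

    public ActionResult Pending()
    {
        var orders = _db.Orders.Where(o => !o.IsApproved).ToList();
        return View(orders);
    }

    [HttpPost]
    public ActionResult Approve(int id)
    {
        var order = _db.Orders.Find(id);
        order.IsApproved = true;
        _db.SaveChanges();               // the context is already the unit of work
        return RedirectToAction("Pending");
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) _db.Dispose();
        base.Dispose(disposing);
    }
}
```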
In my experience, the requirements on a software solution tend to evolve over time well beyond the initial requirement set.
By following architectural best practices now, you will be much better able to accommodate changes to the solution over its entire lifetime.
The Repository pattern and ViewModels are both powerful, and neither is very difficult or time-consuming to implement. I would suggest using them even for small projects.
Yes, you still want to use a repository and view models. Both of these tools let you keep code in one place instead of scattered all over, and they will save you time. More than likely, they will save you copy-paste errors too.
Moreover, having these tools in place will make it easier to expand the system in the future, instead of having to pore over code with poor readability.
Separating your concerns will lead to less code overall, a more efficient system, and smaller controllers / code sections. View models and a repository are not heavily intrusive to implement. It is not like you are going to implement a controller factory or dependency injection.
I am planning to build a web application using ASP.NET MVC3 that runs on Azure with a SQL Azure back end. I would like to use the Microsoft stack and have no plans to ever change to another stack. I am looking into the use of WCF and WF, but that would be in the future.
I looked at the traditional and Code First approaches to using Entity Framework, but I can't see whether there is any advantage to one approach or the other. Sure, they each have advantages, but personally I don't care whether my classes inherit from EF classes. All I want is to find the most efficient solution.
Can anyone out there give me some advice as to which approach might be best?
thanks very much
Richard
This is really more of an opinion-gathering question and probably belongs on the Programmers site of Stack Exchange, but I'll take a stab:
I am definitely a traditional-approach kind of guy. To me, data is key. It is most important. Various objects, layers, applications, and services come, go, and evolve, but data lingers on. That is why I design my databases first. In my experience, data has always been king.
I'd go with Code First approach.
This great blog post by Scott Guthrie explains its advantages.
Code First for me also. If you suddenly started to hate Entity Framework and wanted to switch to NHibernate, you would have a lot less work on your hands.
Also, there is a cleaner separation of concerns by totally isolating your domain layer from your data access layer.
I am not 100% sure it still applies, but I think the code generation and partial-class malarkey of Entity Framework can cause problems when testing.
Did I mention Code First is a lot less hassle?
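For reference, a minimal Code First sketch (hypothetical names): plain POCO classes plus a DbContext, with no designer-generated base classes or partial-class code generation involved.

```csharp
// Minimal Code First sketch (hypothetical names): plain POCO classes and a
// DbContext. No EDMX and no generated base classes, so the domain layer has
// no dependency on the data access layer's tooling.
using System.Collections.Generic;
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public virtual Customer Customer { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Fluent configuration lives here, keeping mapping concerns out of
        // the domain classes themselves.
        modelBuilder.Entity<Customer>()
                    .Property(c => c.Name)
                    .HasMaxLength(200)
                    .IsRequired();
    }
}
```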
Code First is an "architecturally correct" approach, but reality tends to differ on these things when you have to consider effort, value, and speed of development.
Using the "Model First" approach is much faster and easier to maintain. Database changes propagate with a simple right-click "Regen from database", and you don't get strange errors creeping into your code when you forget to change a property name or type.
Having said that, you can have a bit of both with the new POCO support in EF4. You can remove the dependencies on base classes while still using the modelling tools:
A lot of good links in this thread:
Entity Framework 4 / POCO - Where to start?
Maybe I am misstating the problems and conflating the answer with the questions, but please hear me out. I would like to think (communally, with you) about a site based on any of the MVC frameworks (something in PHP or ASP.NET MVC, whatever) that would use a search engine (Lucene/Solr, FAST ESP, whatever) as the back end of the Model. That is to say, there is no database per se in the project, just a giant index of documents that are semi-structured content.
I am looking to understand, keeping in mind that the site is primarily read-only, where I am likely to run into trouble. What are the things that make you think this is a bad idea from the get-go? Also, please assume that there will be a robust infrastructure with caching surrounding the search engine, so while perf comments are welcome, we feel they are not the major problem.
Thanks!
In general, I'd use a tool like Lucene for searching content, and a database for retrieving it. That doesn't mean that it won't work. It's more a question of why you don't want to use a database. Yes, it can work, and it probably will work (depending on the functional requirements of the site, read on), but that still doesn't make a tool like Lucene the right tool for the job per se.
That being said, it does depend on the kind of site. Is it really a site with just a whole bunch of searchable data and nothing else, or is it something much more than that? If it's the former, then good! If it's the latter, there are some issues I can think of:
Updates to the data can be troublesome. "Instant updates" are usually a no-go, as Lucene would have to rebuild its index, which is time-consuming. If there aren't many updates to the data, that's fine; you can just recreate the index a couple of times per day, or nightly, if that works (see the sketch at the end of this answer).
Trying to stuff data into an index that isn't really suited to being indexed is usually not a good idea. If the site lets users register, that user data should really go in a database. It's not impossible to store it in a Lucene index; it's just not the right tool for the job. Use the index as a bunch of indexed documents, but don't use it as a database as well.
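To illustrate the periodic-rebuild idea from the first point, here is a rough sketch, assuming Lucene.NET 4.8 (class and method names may differ in other versions); the field names and document source are hypothetical:

```csharp
// Rough sketch (hypothetical names, assuming Lucene.NET 4.8): rebuild the
// whole index from the canonical content store, e.g. from a nightly job,
// instead of trying to patch it in place for every update.
using System.Collections.Generic;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

public static class IndexRebuilder
{
    public static void Rebuild(string indexPath,
                               IEnumerable<(string Id, string Title, string Body)> content)
    {
        using var dir = FSDirectory.Open(indexPath);
        using var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);

        var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)
        {
            OpenMode = OpenMode.CREATE   // discard the old index and start fresh
        };

        using var writer = new IndexWriter(dir, config);
        foreach (var (id, title, body) in content)
        {
            var doc = new Document();
            doc.Add(new StringField("id", id, Field.Store.YES));
            doc.Add(new TextField("title", title, Field.Store.YES));
            doc.Add(new TextField("body", body, Field.Store.YES));
            writer.AddDocument(doc);
        }
        writer.Commit();    // make the freshly built index visible to searchers
    }
}
```

Meanwhile, user registrations and other transactional data stay in the database, as described in the second point.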