Is Microsoft Orleans not really made to support legacy applications?

After a bunch of googling, I don't see a good way to have Orleans work with an existing relational-database backend.
Every example I have found relies on adding columns to deal with concurrency, and I haven't seen any samples of how to use Orleans with, say, the typical Northwind database.
This leads me to believe that Orleans is not really intended to be used this way (if it were, I would expect someone somewhere to have created a sample app demonstrating it by now). Am I missing something? Has anyone seen a sample project or blog post explaining how to use, say, an existing EF context with Orleans? This needs to be done without adding extra columns. I am working with data that is controlled by multiple teams in a mission-critical system, so there is no way I will get approval to start adding columns to hundreds of tables.

As @Milney says, to my knowledge, there is nothing special in Orleans that would prevent you from using a normal EF DbContext; no extra columns required.
If, on the other hand, your issue is that other applications are causing concurrency issues from outside Orleans, then I think you'll need to deal with them as you would in any application (e.g. with optimistic concurrency checks).
But it's possible I'm misunderstanding your use case.
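To make that concrete, here is a minimal sketch of the "nothing special" case (all names are hypothetical, and it assumes EF Core and a recent Orleans version): a grain reads and writes through an ordinary DbContext, and a [ConcurrencyCheck] annotation on an existing property gives you optimistic concurrency without adding any columns.

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Orleans;

// An existing entity; [ConcurrencyCheck] turns an existing column into an
// optimistic-concurrency token -- no schema changes needed.
public class Customer
{
    public int Id { get; set; }

    [ConcurrencyCheck]
    public string Name { get; set; } = "";
}

public class NorthwindContext : DbContext
{
    public DbSet<Customer> Customers => Set<Customer>();

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlServer("...connection string...");
}

public interface ICustomerGrain : IGrainWithIntegerKey
{
    Task Rename(string newName);
}

public class CustomerGrain : Grain, ICustomerGrain
{
    public async Task Rename(string newName)
    {
        using var db = new NorthwindContext();
        var customer = await db.Customers.FindAsync((int)this.GetPrimaryKeyLong())
                       ?? throw new InvalidOperationException("Customer not found");
        customer.Name = newName;
        try
        {
            await db.SaveChangesAsync(); // fails if another writer changed Name meanwhile
        }
        catch (DbUpdateConcurrencyException)
        {
            // Another application (outside Orleans) modified the row:
            // reload and retry, or surface the conflict to the caller.
            throw;
        }
    }
}
```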

Related

Is it possible to properly use DDD with all building blocks in monolith application?

I watched some videos and read some blogs about it. SO has many questions and answers on that subject, but I cannot find an exact answer to my question anywhere.
Almost every question and answer lacks usage context.
I have one mid-sized ASP.NET MVC monolith application running in one process on IIS. I want to (refactor and) go all the way with DDD (and CQRS, without separate storage for reads and writes for now), but it looks like an impossible mission without breaking some rules/guidelines/etc.
Bounded Context
For example, I have more than one BC. Each should not cross its boundaries, which means they should not share storage. Right?
That is not possible if you use the well-known solution (scattered everywhere over the web) of working with an NHibernate session and UoW.
Aggregate Root
Only one AR should be modified per transaction. When other ARs are involved, eventual consistency should be introduced (if I remember correctly, those are Eric Evans' words).
I try to do it, but it is not easy in an app like that. Pub/Sub does not work as desired, because if an event is published, all subscribers take their action within one transaction (NSB/MT work that way).
If event handlers want to modify other ARs, they should be executed in separate transactions, right?
Is it possible to deal with this in a monolith application (an application where all the code runs in one process)?
That is not possible if you use the well-known solution (scattered everywhere over the web)
[...]
if an event is published, all subscribers take their action within one transaction
I think you're imposing useless and harmful constraints on yourself by trying to stick to some "state of the art".
Migrating an entire application to DDD + CQRS is a massive undertaking. Some areas of it don't have well-documented beaten paths yet, and you'll probably have a fair bit of exploration to do. My best advice would be to stay at a reasonable distance from "the way people do things": both in traditional ASP.NET web apps, because mainstream practices often don't match the way DDD + CQRS works, and in CQRS itself, because the case studies out there are few and far between, most probably very domain-specific, and tend to advocate heavy tools that may not make sense in your context.
You may need to think outside the box, adopt things incrementally, and refrain from gold-plating everything. You'll be better off starting with very simple implementations that suit your needs exactly than throwing a ton of tools and frameworks at your codebase.
For instance, do you really need a service bus, or could a simple Observer pattern suffice?
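For illustration, here is a minimal sketch of that alternative (the names are mine, not from any framework): an in-process dispatcher where each subscriber runs inside its own TransactionScope, so a handler that modifies a second aggregate root does so in a separate transaction, giving you eventual consistency within a single process.

```csharp
using System;
using System.Collections.Generic;
using System.Transactions;

public interface IDomainEvent { }

public static class DomainEvents
{
    private static readonly Dictionary<Type, List<Action<IDomainEvent>>> Handlers = new();

    public static void Subscribe<T>(Action<T> handler) where T : IDomainEvent
    {
        if (!Handlers.TryGetValue(typeof(T), out var list))
            Handlers[typeof(T)] = list = new List<Action<IDomainEvent>>();
        list.Add(e => handler((T)e));
    }

    // Call this AFTER the originating transaction has committed.
    public static void Publish(IDomainEvent @event)
    {
        if (!Handlers.TryGetValue(@event.GetType(), out var list)) return;
        foreach (var handle in list)
        {
            // RequiresNew gives each handler its own transaction, independent
            // of any ambient one; a production version would also catch and
            // log per-handler failures instead of letting them propagate.
            using var tx = new TransactionScope(TransactionScopeOption.RequiresNew);
            handle(@event);
            tx.Complete();
        }
    }
}
```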
Regarding NHibernate, most implementations out there use a (single) Session-Per-Request approach, but just because it's the most popular doesn't mean it's the only one. Have you tried using multiple ISessions (one for each BC), available at a more programmable level such as per-action, or managed entirely manually? Conversely, have you considered sharing a database between Bounded Contexts at first and seeing for yourself whether that's bad or not?
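A sketch of the multiple-sessions idea (the configuration file names are hypothetical): one ISessionFactory per Bounded Context, with sessions opened per action rather than per request.

```csharp
using NHibernate;
using NHibernate.Cfg;

// One factory per BC, each with its own mappings (and possibly its own schema).
var salesFactory   = new Configuration().Configure("sales.cfg.xml").BuildSessionFactory();
var billingFactory = new Configuration().Configure("billing.cfg.xml").BuildSessionFactory();

// Open a session only for the BC a given action actually touches,
// and manage its lifetime per action (or fully manually).
using (ISession session = salesFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // ... work against Sales aggregates only ...
    tx.Commit();
}
```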

Grails GORM best practices

Does anyone know if there is a definitive best-practices guide for GORM? I find information scattered across different blogs and other resources, but I can't find a definitive guide. I find advice that says not to do database-related work in controllers and to keep those things in the service layer, for example. However, it would be nice to see what the suggested approach for writing a simple web app is. Should we always use command objects in controllers and pass those command objects to services? Should we store those command objects in the session rather than storing actual domain objects in the session, which seems to cause a lot of lazy-init exceptions, etc.?
I've tried to piece together the information that I've found, but if anyone knows of a comprehensive resource, that would be great.
There is some great information available from the GORM Gotchas series. It's in three parts.
GORM Gotchas (Part 1)
GORM Gotchas (Part 2)
GORM Gotchas (Part 3)
To answer your specific questions about Services and Command objects:
Q: "Should we always use command objects and Services?"
A: Some would argue that it's overkill to do so, however I personally think it's a great pattern and makes things much easier to test and extend. It may seem like a lot of effort but it does pay off in large projects.
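For what it's worth, here is the shape of that pattern sketched in C#/ASP.NET MVC terms, since that's the stack used elsewhere on this page; the Grails version with command objects and services is directly analogous, and all the names here are invented.

```csharp
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

// The controller binds and validates a small command object, then hands it
// to a service; domain classes never reach the controller or the session.
public class RegisterUserCommand
{
    [Required] public string Email { get; set; }
    [Required, MinLength(8)] public string Password { get; set; }
}

public interface IUserService
{
    void Register(RegisterUserCommand command);
}

public class UserController : Controller
{
    private readonly IUserService _users;
    public UserController(IUserService users) { _users = users; }

    [HttpPost]
    public ActionResult Register(RegisterUserCommand command)
    {
        if (!ModelState.IsValid)
            return View(command);     // redisplay with validation errors

        _users.Register(command);     // the service owns the domain logic
        return RedirectToAction("Index");
    }
}
```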
Q: "Should we store command objects in session rather than domain objects?"
A: Store as little in the session as possible (if at all). If you have to store something there, it's best that it be small and lightweight. Command objects are (typically) going to be better for this than a domain class.
Update (11/19/2014)
I'd like to highlight a very good series that outlines many of the potential issues you may face using GORM and Hibernate. It's very long, but worth reading if you plan on using GORM/Hibernate in a large-scale multi-user project. Don't be put off by the negative tone, because it does contain a lot of useful information.
I don't like Hibernate (and Grails), PART 1
I don't like Hibernate/Grails, part 2, repeatable finder problem: trust in nothing!
I don't like Grails/Hibernate part 3. DuplicateKeyException: Catch it if you can.
I don't like Grails/Hibernate, part 4. Hibernate proxy objects.
I don't like Hibernate/Grails part 5: auto-saving and auto-flushing
I don't like Hibernate/Grails part 6, how to save objects using refresh()
I don't like Hibernate/Grails part 7: working on more complex project
I don't like Hibernate/Grails, part 8, but some like Hibernate and Grails. Why?
I don't like Hibernate/Grails part 9: Testable code
I don't like Hibernate/Grails part 10: Repeatable finder, lessons learned
I don't like Hibernate/Grails part 11. Final thoughts.
The book Grails in Action talks a lot about best practices in Grails. At the time of this writing it isn't published in its final form, but you can buy and read the preview.
I was recently looking for the same answers you are asking about, and that book has helped me a lot.

One big Rails application vs separate application

I am working on one big project. Now we need to add new functionality: scheduler management.
It's not the main task of the application, but it is quite a complicated part of it.
Is it a good idea to extract it as a separate application?
It will share some data (users and some other personal data) and it will use the same database.
The main reason I want to do this is to simplify the main application.
I understand that this may be too broad a question, but maybe you can share your experience of developing this kind of application, and maybe there are articles I can read on world-wide best practices.
While others have mentioned some of the benefits of separating the applications, I'll touch on a couple of reasons why you might NOT want to separate the code.
You're going to need to maintain a single set of tests, especially if both applications are sharing the same database. If you don't do this, it's hard to predict when changing one application would break the other, especially if the applications start to need different things out of the database.
The two applications are obviously going to have a lot of overlap (users, for example). Separating into two applications could force you to duplicate code, since Rails by default has some pretty specific ideas about how a Rails application should be structured. If your applications share certain views, for example, how will you coordinate change across both applications when one of them wants to modify a view?
Neither of these is insurmountable, but Rails is easiest to develop with when you follow Rails conventions. As you begin to deviate, you end up having to do more work. Also, don't take either of these points as invalidating the other answers here; they are merely counterpoints that you need to think about.
If you can use the functionality in other projects too, then I would separate it.
Maybe you can create a Rails engine to share it easily between projects.
Consider asking yourself "What about re-usability?" Is the new scheduling functionality likely to be re-usable in another context with another application? If the answer is "yes," then perhaps making the scheduling management more modular in design will save you time in the future. If the answer is "no," then I would think you have more leeway in how tightly you integrate scheduling management with your existing app.
The practical difference here would be writing generalized scheduling-management functionality with assignable tables and methods upon which to act, versus "hard coding" it against the data/code scheme of your one big project.
hth -
Perry
Adding management tools to a web app often complicates deployment, in my experience. Especially when the use of your application grows and you need to performance-tune it, dragging along a huge "backend" may be problematic.
For the sake of deployability, scalability, and testability, I prefer my applications to be small and focused. Sometimes it even pays off to expose the entire admin environment over REST/XML services.
But as other answers point out, this is more of an "it depends" situation. These are my €0.02.

Is Entity Framework a bad choice for multiple websites and large applications?

Scenario: We currently have a website and are working on building a couple more websites plus an admin website. We are using ASP.NET MVC, SQL Server 2005, and Entity Framework 4. Currently we have a single solution that contains all the websites, and all of them use the same Entity Framework model. The model currently has over 70 tables and will potentially have a lot more in the future... around 400?
Questions: Is the Entity Framework model going to get slower as it grows? I have read quite a few articles saying it is pretty slow compared to, say, ADO.NET, due to the additional layers of mapping. Also, we thought of having multiple models, but that seems to be bad practice too. And is LINQ useful when we are not using any ORM?
So we are curious how the large websites built on technology similar to ours achieve good performance while using an ORM like EF, or whether they never opt for an ORM at all. I have also worked on a LINQ to SQL application that had over 150 tables, and we incurred a huge startup penalty; the site took 15-20 seconds to respond when first loaded. I am pretty sure this was due to the large startup cost of the LINQ to SQL ORM. It would be great if someone could share their experience and thoughts on this. I know it depends on the application, but if performance is a concern, what are the best practices to follow and steps to take?
I don't have a definitive answer for you, but I have found this SO post: ORM performance cost. It will probably be informative for you, especially the second-highest answer, which mentions this site:
http://ormbattle.net/
My personal experience is that for every ORM mapper I have seen so far, Joel's Law of Leaky Abstractions applies heavily. So if you are going to use EF, make sure you have alternatives for optimization at hand.
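For example, one escape hatch worth keeping at hand with EF4 (a sketch; the context, DTO, and SQL are illustrative) is dropping to raw SQL for a hot query via ObjectContext.ExecuteStoreQuery, while keeping EF for everything else:

```csharp
using System.Collections.Generic;
using System.Linq;

// Property names must match the columns returned by the SQL.
public class CustomerDto
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
}

public static class CustomerQueries
{
    public static List<CustomerDto> TopCustomers()
    {
        // NorthwindEntities stands in for your generated EF4 ObjectContext.
        using (var context = new NorthwindEntities())
        {
            return context.ExecuteStoreQuery<CustomerDto>(
                "SELECT TOP 50 CustomerID, CompanyName FROM Customers ORDER BY CompanyName")
                .ToList();
        }
    }
}
```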
I think you can certainly get EF4 to work in a performant way with a database with a large number of tables. That said, you will certainly have to overcome a number of hurdles that are specific to EF.
I don't think LinqToSql is a good alternative since Microsoft has stopped enhancing it for the most part.
What other alternatives have you considered? ADO.NET? NHibernate? Stored Procedures?
I know NHibernate may have trouble establishing the SessionFactory for 400 tables quickly, but that only happens once when the website application starts, which should be fairly rare if the application is used heavily. Each web request generally has a new Session and creating sessions from the session factory is very quick and inexpensive.
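A sketch of that lifecycle (assuming a standard hibernate.cfg.xml; the class name is illustrative):

```csharp
using NHibernate;
using NHibernate.Cfg;

public static class NHibernateBootstrap
{
    // Built once at startup (slow for a 400-table mapping),
    // then reused for the application's lifetime.
    public static readonly ISessionFactory Factory =
        new Configuration().Configure().BuildSessionFactory();

    // Called per web request (fast and inexpensive).
    public static ISession OpenSession() => Factory.OpenSession();
}
```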
My biggest concern with EF is the management of the thing: if you have multiple models, you suddenly have multiple maintenance burdens, making sure you never update the wrong model for the right database, or vice versa. This is a problem for us at the moment, and it looks like it will only get worse.
Personally, I like to write SQL rather than rely on an abstraction on top of an abstraction. The DB knows SQL, so I keep it happy with hand-crafted stored procedures, or hand-crafted SQL in some cases. One huge benefit of this is that I can replay the code to see what it was trying to do, and see the resulting data by copy-and-pasting the SQL from the log into the SQL query editor. That, in my opinion, makes support so much easier that it entirely outweighs any programmer benefit you might get from using an ORM in the first place (especially as EF generates absolutely unreadable SQL).
In fact, come to think of it, the only benefit an ORM gives you is that you can code a bit quicker (once you have everything set up and are not changing the schema, of course), and ultimately I don't think that benefit is worth the cost, not when you consider that I spend most of my coding time thinking about what I'm going to do; the "doing it" part is relatively small compared to the design, test, support, and maintenance parts.

Am I the only one that queries more than one database?

After much reading on Ruby on Rails and multiple database connections, it seems I have found something that not many folks do, at least not with RoR. I am used to querying many different databases and schemas and pulling the information back either for a report or for one seamless page, so that a user doesn't have to log on to several different systems: I can bring all the systems together on one or two web pages.
Is that not a normal occurrence in the web and database driven design?
EDIT: Is this because almost all my original code is in classic ASP?
I honestly think that most ORM designers don't take into account that users may want to access more than one database. This seems to be a pretty common limitation in the ORM universe.
Our client website runs across 3 databases, so I do this too. Actually, I'm condensing everything into views in one central database, which then connects to the others.
I never considered this to be "normal" behavior though. I would guess that most of the time you would be designing for one system and working against that.
EDIT: Just to elaborate, we use LINQ to SQL for our data layer, and we define the objects against database views. This way we keep reports and application code working off the same data model. There is some extra work setting up the LINQ entities, because you have to manually define primary keys and set up associations; however, so far it has definitely proven worthwhile. We tried to do the same with Entity Framework, but had a lot of trouble getting the relationships set up appropriately and had to give up. The funny thing is, I had thought Entity Framework was supposed to be designed for more advanced scenarios like ours...
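For reference, the manual mapping looks roughly like this (a sketch; the view and column names are invented). LINQ to SQL can't infer a key from a view, so you declare it yourself:

```csharp
using System.Data.Linq.Mapping;

// Map a LINQ to SQL entity directly onto a database view.
[Table(Name = "dbo.vw_CustomerSummary")]
public class CustomerSummary
{
    [Column(IsPrimaryKey = true)]   // declared by hand; views carry no key metadata
    public int CustomerId { get; set; }

    [Column]
    public string CompanyName { get; set; }

    [Column]
    public decimal TotalSales { get; set; }
}
```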
It is not uncommon to hit multiple databases during a single part of an application's workflow. However, in every instance that I have done it, this has been performed through several web service calls, which among other things wrap the databases in question.
I have not, to my knowledge, ever had a need to hit multiple databases directly at once and merge results into a single report.
I've seen this kind of architecture in corporate portals, where lots of data is pulled in via different data sources. The whole point of a portal is to bring siloed systems together; users might not want to use lots of systems in isolation (especially if they have to sign into each one). In that sort of scenario it is normal, particularly in a large company that has expanded rapidly and has a large number of heterogeneous systems.
In your case, whether this is the right thing to do depends on why you have these separate DBs.
With ORMs it may be a little difficult, but it can be done: pull the objects as needed from the various databases, then use them as a composite to create the new object that is actually desired. If you can skip the ORM part of the process, you can query the databases directly and build your object from the results.
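A sketch of that composite approach (all types here are hypothetical): query each database through its own context, then compose the result in memory.

```csharp
using System.Linq;

public static class ReportBuilder
{
    public static CustomerReport BuildReport(int customerId)
    {
        using (var crm = new CrmContext())         // database 1
        using (var billing = new BillingContext()) // database 2
        {
            var customer = crm.Customers.Single(c => c.Id == customerId);
            var invoices = billing.Invoices
                                  .Where(i => i.CustomerId == customerId)
                                  .ToList();

            // Compose in memory -- no cross-database query is ever issued.
            return new CustomerReport
            {
                Name = customer.Name,
                OutstandingBalance = invoices.Sum(i => i.Balance),
            };
        }
    }
}
```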
Pulling data from two databases and compiling a report is not uncommon, but because cross-database queries cannot be optimized by the query engine of either database, OLTP systems typically use a single database, to keep the application performant.
If you build the system from the ground up, it is not advisable to do it this way. If you are working with a system you didn't design, there is not much choice, and it is not uncommon (that is the difference between "organic" and "planned" growth).
Not counting master and various test instances, I hit nine databases on a regular basis. Yes, I inherited it, and yes, "Classic" ASP figures prominently. Of course, all the "brillant" designers of this mess are long gone. We're replacing it with things more sane as quickly as we safely can.
I would think that if you're building a new system and keep adding databases, by the time you reach two or three it's probably time to rethink your design. OTOH, if you're aggregating data from multiple, disparate systems, then no, it's not that strange. Depending on the timeliness you need, your budget for throwing hardware at the problem, and whether your data is mostly static, this could be a good scenario for a "reporting server" that pulls the data down from the live server periodically.
