Reflection and performance in web - asp.net-mvc

We know Reflection is a quite expensive mechanism, yet ASP.NET MVC is full of it. And there are so many ways to use and implement additional reflection-based practices: ORMs, mappings between DTOs, entities and view models, DI frameworks, JSON parsing and many others.
So I wonder: do they all affect performance so much that it is strongly recommended to avoid reflection as much as possible and look for other solutions such as scaffolding? And what tools can be used to load test the server?

There's nothing wrong with Reflection. Just use it wisely, i.e. cache the results so that you don't have to perform those expensive calls over and over again. Reflection is used extensively in ASP.NET MVC. For example, when the controller and action names are parsed from the route, Reflection is used to find the corresponding method to invoke. But once found, the result is cached, so the next time someone requests the same controller and action name, the method to invoke is fetched from the cache.
So if you are using a third party framework check the documentation/source code whether it uses reflection and whether it caches the results of those calls.
And if you have to use it in your own code, the same rule applies => cache it.
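To make that concrete, here is a minimal C# sketch of the idea (the helper and its names are purely illustrative, not ASP.NET MVC's actual internals): reflect once per controller type and action name, then serve every later lookup from a cache.

using System;
using System.Collections.Concurrent;
using System.Reflection;

// Hypothetical helper: the expensive GetMethod call runs only the first
// time a (controller type, action name) pair is seen; after that the
// MethodInfo comes straight out of the dictionary.
public static class ActionMethodCache
{
    private static readonly ConcurrentDictionary<string, MethodInfo> Cache =
        new ConcurrentDictionary<string, MethodInfo>();

    public static MethodInfo Find(Type controllerType, string actionName)
    {
        string key = controllerType.FullName + "." + actionName;
        return Cache.GetOrAdd(key, _ =>
            controllerType.GetMethod(actionName,
                BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase));
    }
}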

For stress testing, this SO post gives quite a few possibilities: Stress Testing ASP.Net application.
I have thought about this question myself, and come to the following conclusions:
Most people don't spend their days resubmitting pages over and over again. Most of the time a user spends visiting a website goes into reading and consuming pages, which at worst contain a few Ajax calls. Even if you have a million concurrent users of your application, you will generally not have to deal with a million requests at any given time.
The web is naturally based on string comparisons... there are no types in an HTTP response, so any web application is forced to deal with these kinds of tasks as a fact of everyday life. The fewer string comparisons and dynamic objects the better, but they are at their core, unavoidable.
Although things like mapping by string comparison or dynamic type checking are slow, a site built with a non-compiled, weakly-typed language like PHP will contain far more of these actions. Despite the number of possible performance hits in MVC compared to a C# console application, it is still a superior solution to many others in the web domain.
The use of any framework will have a performance cost associated with it. An application built in C# with the .NET framework will for all intents and purposes not perform as well as an application written in C++. However, the benefits are better reliability, faster coding time and easier testing among others. Given how the speed of computers has exploded over the past decade or two, we have come to accept a few extra milliseconds here and there in exchange for these benefits (which are huge).
Given these points, in developing ASP.NET MVC applications I don't avoid things such as reflection like the plague, because it is clear that they can have quite a positive impact on how your application functions. They are tools, and when properly employed have great benefits for many applications.
As for performance, I like to build the best solution I can and then go back and run stress tests on it. Maybe the reflection I implemented in class X isn't a performance problem after all? In short, my first task is to build a great architecture, and my second is to optimise it to squeeze every last drop of performance from it.

Related

Is it possible to properly use DDD with all building blocks in monolith application?

I have watched some videos and read some blogs about it. SO has many questions and answers on the subject, but I cannot find an exact answer to my question anywhere.
Almost every question and answer lacks usage context.
I have one mid-sized, asp.net-mvc, monolith application which runs in one process on IIS. I want to (refactor and) go all the way with DDD (and CQRS, without separate storage for reads and writes for now), but to me it looks like an impossible mission without breaking some rules/guidelines/etc.
Bounded Context
For example, I have more than one BC. None of them should cross its boundaries, which means they should not share storage. Right?
It is not possible if you use the well-known solution (scattered all over the web) for working with the NHibernate session and UoW.
Aggregate Root
Only one AR should be modified in one transaction. When other ARs are involved, eventual consistency should be introduced (if I remember correctly, those are Eric Evans' words).
I try to do it, but it is not easy in an app like this. Pub/Sub does not work as desired, because if an event is published then all subscribers take their action within one transaction (that is how NSB/MT do it).
If event handlers want to modify other ARs, they should be executed in separate transactions, right?
Is it possible to deal with this in a monolith application (an application where all the code runs in one process)?
It is not possible if you use the well-known solution (scattered all over the web)
[...]
if an event is published then all subscribers take their action within one transaction
I think you're setting yourself useless and harmful constraints by trying to stick to some "state of the art".
Migrating an entire application to DDD + CQRS is a massive undertaking. Some areas of it don't have well-documented beaten paths yet and you'll probably have a fair bit of exploration to do. My best advice would be to stay at a reasonable distance from "the way people do things". Both in traditional ASP.Net web apps because mainstream practices often don't match the way DDD+CQRS works, and in CQRS itself because the case studies out there are few and far between and most probably very domain specific, with a tendency to advocate the use of heavy tools which may not make sense in your context.
You may need to think out of the box, adopt things incrementally and refrain from goldplating everything. You'll be better off starting with very simple implementations that suit your needs exactly than throwing a ton of tools and frameworks at your codebase.
For instance, do you really need a service bus, or could a simple Observer pattern suffice?
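To illustrate, here is a minimal C# sketch of such an Observer-style dispatcher (the names are made up for this example, not taken from NSB, MassTransit or any other framework). The important part is that Raise() merely invokes the handlers; each handler is free to open its own session/transaction, which is what keeps updates to other aggregate roots eventually consistent instead of joining the original transaction.

using System;
using System.Collections.Generic;

public interface IDomainEvent { }

// Minimal in-process publisher: no bus, no broker, just a dictionary of handlers.
public static class DomainEvents
{
    private static readonly Dictionary<Type, List<Action<IDomainEvent>>> Handlers =
        new Dictionary<Type, List<Action<IDomainEvent>>>();

    public static void Register<T>(Action<T> handler) where T : IDomainEvent
    {
        List<Action<IDomainEvent>> list;
        if (!Handlers.TryGetValue(typeof(T), out list))
            Handlers[typeof(T)] = list = new List<Action<IDomainEvent>>();
        list.Add(e => handler((T)e));
    }

    public static void Raise<T>(T domainEvent) where T : IDomainEvent
    {
        List<Action<IDomainEvent>> list;
        if (!Handlers.TryGetValue(typeof(T), out list))
            return;
        foreach (var handle in list)
            handle(domainEvent); // each handler opens its own unit of work/transaction
    }
}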
Regarding NHibernate, most implementations out there use a (single) Session Per Request approach, but just because it's the most popular doesn't mean it's the only one. Have you tried using multiple ISessions (one for each BC) available at a more programmable level, such as per-action, or managed entirely manually? Conversely, have you considered sharing a database between Bounded Contexts at first and seeing for yourself whether that's bad or not?

Wicket: “large memory footprint!”, “Does Wicket scale?”, etc.

Wicket uses the Session heavily, which could mean a “large memory footprint” (as stated by some developers) for larger apps with lots of pages. If you were to explain to a bunch of Fortune 500 CTOs that they have to adopt Apache Wicket for their large web application deployments and that their fears about Wicket's scaling problems are just bad assumptions, what would you argue?
PS:
The question concerns only scaling. Technical details and real-world examples are very welcome.
IMO, Apache Wicket's credibility in very large-scale deployments is demonstrated by the following URL: http://mobile.walmart.com (view the source).
See also http://mexico.com, http://vegas.com, http://adscale.de, and look those domains up on Alexa to see their rankings.
So, yes it is quite possible to build internet scale applications using Wicket. But whether or not you are using Wicket, Struts, SpringMVC, or just plain old JSPs: internet scale software development is hard. No framework can make that easy for you. No framework can give you software with a next-next-finish wizard that services 5M users.
Well, first of all, explain where the footprint comes from; it is mainly the PageMap.
The next step would be to explain what a page map does, what it is for and what problems it solves (back buttons and popup dialogs, for example). These problems would otherwise have to be solved manually, at a similar memory cost but at a much bigger development cost and risk.
And finally, tell them how you can control what goes into the page map and the secondary page cache, and thus how the size can be kept under control.
Obviously you can also show them benchmarks, but probably an even better bet is to drop a line to Martijn Dashorst (although I believe he's reading this post anyway :)).
In any case, I'd try to put two points across:
There's nothing Wicket stores in memory which you wouldn't have to store in memory anyway. It's just better organised, easier to develop, keep consistent, and test.
Java itself means that you're carrying some inevitable excess baggage around all the time. If they are so worried about footprint, maybe Java isn't the language they want to use at all. There are hundreds of large traffic websites written in other languages, so that's a perfectly workable solution. The worst thing they can do is to go with Java, take on the excess baggage and then not use the advantages that come with an advanced framework.
Wicket saves the last N pages in the session. This is done to be able to load the page faster when it is needed, mostly in two cases - using the browser back button or in Ajax applications.
The back button is clear, no need to explain, I think.
About Ajax - each Ajax request needs the current page (the last page in the session cache) to find a component in it and call its callback method, update some model, etc.
From there on the session size depends entirely on your application code. It will be the same for any web framework.
The number of pages to cache (N above) is configurable, i.e. depending on the type of your application you may tweak it as you find appropriate. Even when there is no in-memory cache (N=0), pages are stored on disk (again configurable) and the page will still be found, just a bit more slowly.
About some references:
http://fabulously40.com/ - a social network with many users,
several education sites - I know two in the USA and one in the Netherlands. They also have quite a lot of users,
currently I work on a project that is expected to be used by several million users. Wicket 1.5 will be improved wherever we find hotspots.
Send this to your CTO ;-)

Is entity framework a bad choice for multiple websites and large applications?

Scenario: We currently have a website and are working on building a couple more websites along with an admin website. We are using asp.net-mvc, SQL Server 2005 and Entity Framework 4. So, currently we have a single solution that contains all the websites, and all of them use the same Entity Framework model. The model currently has over 70 tables and will potentially have a lot more in the future... around 400?
Questions: Is the Entity Framework model going to get slower as it grows bigger? I have read quite a few articles saying it is pretty slow, due to the additional layers of mapping, compared to, say, ado.net. Also, we thought of having multiple models, but that seems to be considered bad practice too. And is LINQ useful when we are not using any ORM?
So, we are just curious how all the large websites using technology similar to ours achieve good performance while using an ORM like EF, or do they never opt for an ORM? I have also worked on a LINQ to SQL application that had over 150 tables, and we incurred a huge startup penalty; the site took 15-20 seconds to respond when first loaded. I am pretty sure this was due to the large startup cost of the LINQ to SQL ORM. It would be great if someone could share their experience and thoughts regarding this. What are the best practices to follow? I know it depends on the application, but if performance is a concern, then what are the best steps to take?
I don't have a definite answer for you, but I have found this SO post: ORM performance cost; it will probably be informative for you, especially the second-highest answer, which mentions this site:
http://ormbattle.net/
My personal experience is that for any ORM mapper I have seen so far, Joel's Law of Leaky Abstractions applies heavily. So if you are going to use EF, make sure you have alternatives for optimization at hand.
I think you can certainly get EF4 to work in a performant way with a database with a large number of tables. That said, you will certainly have to overcome a number of hurdles that are specific to EF.
I don't think LinqToSql is a good alternative since Microsoft has stopped enhancing it for the most part.
What other alternatives have you considered? ADO.NET? NHibernate? Stored Procedures?
I know NHibernate may have trouble establishing the SessionFactory for 400 tables quickly, but that only happens once when the web application starts, which should be fairly rare if the application is used heavily. Each web request generally gets a new Session, and creating sessions from the session factory is very quick and inexpensive.
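For what it's worth, a hedged sketch of that pattern (names are illustrative; the mappings are assumed to come from the standard hibernate.cfg.xml):

using System.Web;
using NHibernate;
using NHibernate.Cfg;

public static class NHibernateBootstrap
{
    // Built once per application start; this is the step that is slow
    // when hundreds of tables are mapped.
    public static readonly ISessionFactory SessionFactory =
        new Configuration().Configure().BuildSessionFactory();

    // Called at the start of each web request (e.g. from BeginRequest);
    // opening a session is quick and inexpensive.
    public static ISession OpenSessionForRequest(HttpContext context)
    {
        ISession session = SessionFactory.OpenSession();
        context.Items["nh.session"] = session; // dispose it in EndRequest
        return session;
    }
}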
My biggest concern with EF is the management of the thing. If you have multiple models, you suddenly have multiple sets of maintenance work, making sure you never update the wrong model for the right database, or vice versa. This is a problem for us at the moment, and it looks to only get worse.
Personally, I like to write SQL rather than rely on an abstraction on top of an abstraction. The DB knows SQL, so I keep it happy with hand-crafted stored procedures, or hand-crafted SQL in some cases. One huge benefit of this is that I can read the code to see what it was trying to do, and see the resulting data by copying and pasting the SQL from the log into the SQL query editor. That, in my opinion, makes support so much easier that it entirely outweighs any programmer benefit you might get from using an ORM in the first place (especially as EF generates absolutely unreadable SQL).
In fact, come to think of it, the only benefit an ORM gives you is that you can code a bit quicker (once you have everything set up and are not changing the schema, of course), and ultimately I don't think that benefit is worth the cost, not when you consider that I spend most of my coding time thinking about what I'm going to do; the 'doing it' part is relatively small compared to the design, test, support and maintain parts.

ASP.NET MVC and EF Code First Memory Usage

I have an application built in ASP.NET MVC 3 that uses SQL CE for storage and EF CTP 5 for data access.
I've deployed this site to a shared host only to find that it is constantly being recycled as it's hitting the 100mb limit they set on their (dedicated) application pools.
The site, when running in release mode, uses around 110mb of RAM.
I've tried using SQL Server Express rather than CE and this made little difference.
The only significant difference is when I removed EF completely (using a fake repo). This dropped the memory usage between 30mb-40mb. A blank MVC template uses around 20mb so I figured this isn't too bad?
Are there any benchmarks for "standard" ASP.NET MVC applications?
It would be good to know what memory utilisation other EF CTP users are getting as well as some suggestions for memory profiling tools (preferably free ones).
It's worth mentioning how I'm handling the lifetime of the EF ObjectContext. I am using session per request and instantiating the ObjectContext using StructureMap:
For<IDbContext>().HttpContextScoped().Use(ctx => new MyContext("MyConnStringName"));
Many thanks
Ben
We did manage to reduce our memory footprint quite significantly. The IIS worker process now sits around 50mb compared to the 100+mb before.
Below are some of the things that helped us:
Check the basics. Make sure you compile in release mode and set compilation debug to false in web.config. It's easy to forget such things.
Use DEBUG symbols for diagnostic code. An example of this would be when using tools like NHProf (yes I've been caught out by this before). The easiest thing is to wrap such code in an #if DEBUG directive to ensure it's not compiled into the release of your application.
Don't forget about SQL. ORMs make it too easy to ignore how your application is talking to your database. Using SQL Profiler or tools like EFProf/NHProf can show you exactly what is going on. In the case of EF you will probably feel a little ill afterwards, especially if you make significant use of lazy loading. Once you've got over this, you can begin to optimize (see point below).
Lazy loading is convenient but shouldn't be used in MVC views (IMO). This was one of the root causes of our high memory usage. The home page of our web site was issuing 59 individual queries due to lazy loading (SELECT N+1). After creating a specific viewmodel for this page and eagerly loading the associations we needed, we got down to 6 queries that executed in half the time (a sketch follows a little further down).
Design patterns are there to guide you, not rule the development of your application. I tend to follow a DDD approach where possible. In this case I didn't really want to expose foreign keys on my domain model. However since EF does not handle many-to-one associations quite as well as NH (it will issue another query just to get the foreign key of an object we already have in memory), I ended up with an additional query (per object) displayed on my page. In this case I decided that I could put up with a bit of code smell (including the FK in my model) for the sake of improved performance.
A common "solution" is to throw caching at performance issues. It's important to identify the real problem before you formulate your caching strategy. I could have just applied output caching to our home page (see note below) but this doesn't change the fact that I have 59 queries hitting my database when the cache expires.
A note on output caching:
When ASP.NET MVC was first released we were able to do donut caching, that is, caching a page apart from specific regions. The fact that this is no longer possible makes output caching pretty useless if you have user-specific information on the page. For example, we have a login status within the navigation menu of the site. This alone means I can't use output caching for the page, as it would also cache the login status.
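To illustrate the lazy loading point from the list above, here is a hedged sketch of the idea (the entity and viewmodel names are invented for this example, not our actual model):

using System.Collections.Generic;
using System.Data.Entity;   // EF Code First (CTP5 / 4.1): DbContext, DbSet, Include
using System.Linq;

// Illustrative model only.
public class Tag      { public int Id { get; set; } public string Name { get; set; } }
public class BlogPost { public int Id { get; set; } public string Title { get; set; }
                        public virtual ICollection<Tag> Tags { get; set; } }
public class BlogContext : DbContext
{
    public DbSet<BlogPost> BlogPosts { get; set; }
    public DbSet<Tag> Tags { get; set; }
}

// Viewmodel shaped for one page, so the view never triggers lazy loads.
public class BlogPostSummary
{
    public string Title { get; set; }
    public List<string> TagNames { get; set; }
}

public static class HomePageQueries
{
    public static List<BlogPostSummary> LatestPosts(BlogContext db)
    {
        return db.BlogPosts
            .Include("Tags")                  // eager load instead of N lazy queries
            .OrderByDescending(p => p.Id)
            .Take(10)
            .ToList()                         // one round trip to the database
            .Select(p => new BlogPostSummary
            {
                Title = p.Title,
                TagNames = p.Tags.Select(t => t.Name).ToList()
            })
            .ToList();
    }
}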
Ultimately there is no hard and fast rule on how to optimize an application. The biggest improvement in our application's performance came when we stopped using the ORM for building our associations (for the public facing part of our site) and instead loaded them manually into our viewmodels. We couldn't use EF to eagerly load them as there were too many associations (resulting in a messy UNION query).
An example was our tagging mechanism. Entities like BlogPost and Project can be tagged. Tags and taggable entities have a many-to-many relationship. In our case it was better to retrieve all tags and cache them. We then created a LINQ projection to cache the association keys for our taggable entities (e.g. ProjectId / TagId). When creating the viewmodel for our page we could then build up the tags for each taggable entity without hitting the database. Again, this was specific to our application, but it yielded a massive improvement in performance and in lowering our memory usage.
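A hedged sketch of that idea (the Project/Tag entities and the helper are invented for this example; our real code differs):

using System.Collections.Generic;
using System.Linq;

// Illustrative many-to-many model.
public class Tag     { public int Id { get; set; } public string Name { get; set; } }
public class Project { public int Id { get; set; }
                       public virtual ICollection<Tag> Tags { get; set; } }

public class TagLookup
{
    private Dictionary<int, string> _tagNamesById;   // TagId -> Name
    private ILookup<int, int> _tagIdsByProjectId;    // ProjectId -> TagIds

    // Run once (at startup or when the cache expires): all tags plus the
    // association keys only, projected with LINQ, instead of joining tags
    // into every page query.
    public void Prime(IQueryable<Project> projects, IQueryable<Tag> tags)
    {
        _tagNamesById = tags.ToDictionary(t => t.Id, t => t.Name);

        _tagIdsByProjectId = projects
            .SelectMany(p => p.Tags.Select(t => new { p.Id, TagId = t.Id }))
            .ToList()
            .ToLookup(x => x.Id, x => x.TagId);
    }

    // Building a viewmodel no longer hits the database for tags.
    public List<string> TagsFor(int projectId)
    {
        return _tagIdsByProjectId[projectId]
            .Select(id => _tagNamesById[id])
            .ToList();
    }
}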
Some of the resources / tools we used along the way:
EFProf - to monitor the queries generated by Entity Framework (free trial available)
ANTS Memory Profiler (free trial available)
Windows performance monitor (perfmon)
Tess Ferrandez's blog
Lots of coffee :)
Whilst we did make improvements that took us under the hosting company's (Arvixe) application pool limits, I do feel a duty to advise people who are looking at Windows reseller plans that such restrictions are in place (since Arvixe do not mention this anywhere when advertising the plan). So when something looks too good to be true (unlimited x, y, z), it usually is.
The funny thing is, I think they got their estimate from this URL:
http://blog.whitesites.com/w3wp-exe-using-too-much-memory-and-resources__633900106668026886_blog.htm
P.S. It's a great article to check whether you're doing anything the author describes (for example, caching your pages).
P.P.S. Just checked our system and it's running at 50 megs currently. We're using MVC 2 and EF CTP 4.

Ruby on Rails scalability/performance? [closed]

I have used PHP for a while now and have used it well with CodeIgniter, which is a great framework. I am starting on a new personal project, and last time I was considering what to use (PHP vs RoR) I chose PHP because of the scalability problems I heard RoR had, especially after reading what the Twitter devs had to say about it. Is scalability still an issue in RoR, or have there been improvements?
I would like to learn a new language, and ROR seems interesting. PHP gets the job done but as everyone knows its syntax and organization are fugly and it feels like one big hack.
To expand on Ryan Doherty's answer a bit...
I work in a statically typed language for my day job (.NET/C#), as well as Ruby as a side thing. Prior to my current day job, I was the lead programmer for a ruby development firm doing work for the New York Times Syndication service. Before that, I worked in PHP as well (though long, long ago).
I say that simply to say this: I've experienced rails (and more generally ruby) performance problems first hand, as well as a few other alternatives. As Ryan says, you aren't going to have it automatically scale for you. It takes work and immense amounts of patience to find your bottlenecks.
A large majority of the performance issues we saw from others and even ourselves were dealing with slow performing queries in our ORM layer. We went from Rails/ActiveRecord to Rails/DataMapper and finally to Merb/DM, each iteration getting more speed simply because of the underlying frameworks.
Caching does amazing wonders for performance. Unfortunately, we couldn't cache our data. Our cache would effectively be invalidated every five minutes at most. Nearly every single bit of our site was dynamic. So if/when you can't do that, perhaps you can learn from our experience.
We had to end up seriously fine tuning our database indexes, making sure our queries weren't doing very stupid things, making sure we weren't executing more queries than was absolutely necessary, etc. When I say "very stupid things", I mean the 1 + N query problem...
# 1 query
Dog.find(:all).each do |dog|
  # N queries
  dog.owner.siblings.each do |sibling|
    # N queries per above N query!
    sibling.pets.each do |pet|
      # Do something here
    end
  end
end
DataMapper is an excellent way to handle the above problem (there are no 1 + N problems with it), but an even better way is to use your brain and stop doing queries like that. When you need raw performance, most of the ORM layers won't easily handle extremely custom queries, so you might as well hand write them.
We also did common sense things. We bought a beefy server for our growing database, and moved it off onto its own dedicated box. We also had to do TONS of processing and data importing constantly. We moved our processing off onto its own box as well. We also stopped loading our entire freaking stack just for our data import utilities. We tastefully loaded only what we absolutely needed (thus reducing memory overhead!).
If you can't tell already... generally, when it comes to ruby/rails/merb, you have to scale out, throwing hardware at the problem. But in the end, hardware is cheap; though that's no excuse for shoddy code!
And even with these difficulties, I personally would never start projects in another framework if I can help it. I'm in love with the language, and continually learn more about it every day. That's something that I don't get from C#, though C# is faster.
I also enjoy the open source tools, the low cost to start working in the language, the low cost to just get something out there and try to see if it's marketable, all the while working in a language that often times can be elegant and beautiful...
In the end, it's all about what you want to live, breathe, eat, and sleep in day in and day out when it comes to choosing your framework. If you like Microsoft's way of thinking, go .NET. If you want open source but still want structure, try Java. If you want to have a dynamic language and still have a bit more structure than ruby, try python. And if you want elegance, try Ruby (I kid, I kid... there are many other elegant languages that fit the bill. Not trying to start a flame war.)
Hell, try them all! I tend to agree with the answers above that worrying about optimizations early isn't the reason you should or shouldn't pick a framework, but I disagree that this is their only answer.
So in short, yes there are difficulties you have to overcome, but the elegance of the language, imho, far outweighs those shortcomings.
Sorry for the novel, but I've been there and back with performance issues. It can be overcome. So don't let that scare you off.
RoR is being used with lots of huge websites, but as with any language or framework, it takes a good architecture (db scaling, caching, tuning, etc) to scale to large numbers of users.
There's been a few minor changes to RoR to make it easier to scale, but don't expect it to scale magically for you. Every website has different scaling issues, so you'll have to put in some work to make it scale.
Develop in the technology that is going to give your project the best chance of success - quick to develop in, easy debugging, easy deployment, good tools, you know it inside out (unless the point is to learn a new language), etc.
If you get tens of millions of uniques a month you can always hire in a couple of people and rewrite in a different technology if you need to, as ...
... you'll be rake-ing in the cache (sorry - couldn't resist!!)
First of all, it would perhaps make more sense to compare Rails to Symfony, CodeIgniter or CakePHP, since Ruby on Rails is a complete web application framework. Compared to PHP or PHP frameworks, Rails applications offer the advantage that they are small, clean, and readable. PHP is perfect for small, personal pages (originally it stood for "Personal Home Page"), while Rails is a full MVC framework which can be used to build large sites.
Ruby on Rails does not have a larger scalability problem than comparable PHP frameworks. Both Rails and PHP will scale well if you have only a moderate number of users (10,000-100,000) operating on a similar number of objects. For a few thousand users a classic monolithic architecture will be sufficient. With a bit of M&M (Memcached and MySQL) you can also handle millions of objects. The M&M architecture uses a MySQL server to handle writes and Memcached to handle high read loads. The traditional storage pattern, a single SQL server using normalized relational tables (or at best a SQL master/multiple read slave setup), no longer works for very large sites.
If you have billions of users like Google, Twitter and Facebook, then probably a distributed architecture will be better. If you really want to scale your application without limit, use some kind of cheap commodity hardware as a foundation, divide your application into a set of services, keep each component or service scalable itself (design every component as a scalable service), and adapt the architecture to your application. Then you will need suitable scalable datastores like NoSQL databases and distributed hash tables (DHTs), you will need sophisticated map-reduce algorithms to work with them, you will have to deal with SOA, external services, and messaging. Neither PHP nor Rails offer a magic bullet here.
What it breaks down to with RoR is that unless you're in Alexa's top 100, you will not have any scalability problems. You'll have more issues with stability on shared hosting unless you can squeeze Phusion Passenger or Mongrel out of it.
Take a little while to look at the problems the Twitter people had to deal with, then ask yourself if your app is going to need to scale to that level.
Then build it in Rails anyway, because you know it makes sense. If you get to Twitter-level volumes then you'll be in the happy position of considering performance optimisation options. At least you'll be applying them in a nice language!
You can't compare PHP and RoR directly: PHP is a scripting language, like Ruby, and Rails is a framework, like CakePHP.
That said, I strongly suggest Rails, because you will have an application strictly organized along the MVC pattern, and that is a MUST for your scalability requirement. (Using PHP you would have to take care of the project organization on your own.)
But when it comes to scalability, Rails is not just MVC. For instance, you can start developing your application against one database and switch it along the way with little effort (in most cases), so a Rails application is (almost) database independent thanks to its ORM (which saves you from writing database queries by hand), and you can do a lot of other things. (Take a look at this video: http://www.youtube.com/watch?v=p5EIrSM8dCA)
Just wanted to add some more info to Keith Hanson's smart point about 1 + N problem where he states:
DataMapper is an excellent way to handle the above problem (there are no 1 + N problems with it), but an even better way is to use your brain and stop doing queries like that. When you need raw performance, most of the ORM layers won't easily handle extremely custom queries, so you might as well hand write them.
Doctrine is one of the most popular ORMs for PHP. It addresses the 1 + N complexity problem intrinsic to ORMs by providing a language called Doctrine Query Language (DQL). This allows you to write SQL-like statements that use your existing model relationships, e.g.:
$q = Doctrine_Query::create()
    ->select('m.*, b.*')
    ->from('ModelA m')
    ->leftJoin('m.ModelB b')
    ->execute();
I'm getting the impression from this thread that the scalability issues of RoR come down primarily to the mess that ORMs are in with regard to loading child objects - i.e. the '1+N' problem mentioned above. In the above example that Ryan gave with dogs and owners:
Dog.find(:all).each do |dog|
  # N queries
  dog.owner.siblings.each do |sibling|
    # N queries per above N query!!
    sibling.pets.each do |pet|
      # Do something here
    end
  end
end
You could actually write a single SQL statement to get all that data, and you could also 'stitch' that data up into the Dog.Owner.Siblings.Pets object hierarchy of your custom-written objects. But could someone write an ORM that did that automatically, so that the above example would incur a single round trip to the DB and a single SQL statement, instead of potentially hundreds? Totally. Just join those tables into one dataset, then do some logic to stitch it up. It's a bit tricky to make that logic generic so it can handle any set of objects, but not the end of the world. In the end, tables and objects only relate to each other in one of three ways (1:1, 1:many, many:many). It's just that no one ever built that ORM.
You need a syntax that tells the system upfront what children you want to load for this particular query. You can sort of do this with the 'eager' loading of LinqToSql (C#), which is not a part of RoR, but even though that results in one round trip to the DB, it's still hundreds of separate SQL statements the way it has currently been set up. It's really more about the history of ORMs. They just got started down the wrong path with that and never really recovered, in my opinion. 'Lazy loading' is the default behavior of most ORMs, i.e. incurring another round trip for every mention of a child object, which is crazy. Then with 'eager' loading - loading the children upfront - the children to load are set up statically in everything I am aware of outside of LinqToSql, i.e. which children always load with certain objects, as if you would always need the same children loaded whenever you loaded a collection of Dogs.
You need some kind of strongly-typed syntax saying that this time I want to load these children and grandchildren. I.e., something like:
Dog.Owners.Include()
Dog.Owners.Siblings.Include()
Dog.Owners.Siblings.Pets.Include()
then you could issue this command:
Dog.find(:all).each do |dog|
The ORM system would know which tables it needs to join, then stitch the resulting data up into the OM hierarchy. It's true that you can throw hardware at the current problem, which I'm generally in favor of, but that's no reason the ORM (i.e. Hibernate, Entity Framework, Ruby ActiveRecord) shouldn't just be better written. Hardware really doesn't bail you out of an 8-round-trip, 100-SQL-statement query that should have been one round trip and one SQL statement.
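For comparison, Entity Framework (and LINQ to SQL via DataLoadOptions.LoadWith) does let you declare the graph per query. A hedged C# sketch mirroring the Dog/Owner example above (the entity classes and mapping are simplified for illustration):

using System.Collections.Generic;
using System.Data.Entity;   // Include() is available on DbSet/DbQuery
using System.Linq;

// Entity classes mirroring the Ruby example (illustrative only).
public class Pet   { public int Id { get; set; } }
public class Owner { public int Id { get; set; }
                     public virtual ICollection<Owner> Siblings { get; set; }
                     public virtual ICollection<Pet> Pets { get; set; } }
public class Dog   { public int Id { get; set; } public virtual Owner Owner { get; set; } }

public class PetContext : DbContext
{
    public DbSet<Dog> Dogs { get; set; }
}

public static class EagerLoadingDemo
{
    public static void Run(PetContext db)
    {
        // Declare the whole graph upfront; EF builds the joins and stitches
        // the objects together, so the loops below issue no further queries.
        var dogs = db.Dogs.Include("Owner.Siblings.Pets").ToList();

        foreach (var dog in dogs)
            foreach (var sibling in dog.Owner.Siblings)
                foreach (var pet in sibling.Pets)
                {
                    // Do something here
                }
    }
}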

Resources