Linq to Sql structure standard - asp.net-mvc

I was wondering what the "correct" way is when using a LINQ to SQL model in Visual Studio.
Should I create a new model for each of my components, say Blog, Users, News and so on, and have separate xxxDataContexts with tables and SPROCs added to each of these?
Or should I create one MyDbDataContext and always work against that?
What are the pros/cons? My gut tells me to divide it up into smaller contexts, but it also feels like that could bring problems as the project expands.
What's the deal? Help me Stackoverflow :)

There will always be overhead when creating the data context, as the model needs to be built. Depending on the number of tables in your database this might not be a big deal. If it's only 10 tables or so, the overhead will not be much more than that of a context with, say, 1 table (sorry, I don't have actual stress testing to show the overhead, but hey, maybe that gives me something to blog about this weekend). With large databases, though, the overhead might be enough to consider using separate contexts.
The main advantage I see in using a single data context is that you can use JOINs in your LINQ queries and have them translated to T-SQL, whereas if you do the join after the arrays of objects are pulled, the performance may be slower. Additionally, keeping track of multiple data contexts can be confusing, and good naming conventions would be needed, so building your own data model with business logic that encapsulates the contexts would be a bit harder. I've done this and it's not fun :)
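For illustration, here is a minimal sketch of the kind of join that gets translated to a single T-SQL statement when both tables live in one context. All the names (MyDbDataContext, BlogPosts, Users, AuthorId) are made up for the example:

using System.Linq;

using (var db = new MyDbDataContext())
{
    // Both tables are in the same context, so LINQ to SQL emits one JOIN:
    var postsWithAuthors =
        (from post in db.BlogPosts
         join user in db.Users on post.AuthorId equals user.Id
         select new { post.Title, user.Name }).ToList();
}
// With BlogPosts and Users in separate contexts, the equivalent join would
// have to happen in memory, after both result sets are pulled back.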
However, if you still feel you want to go that route, I would recommend putting similar tables (that you might need to join) in the same context. Also, some tutorials online recommend using a shared MappingSource when multiple contexts map the same source. Information on this can be found here: http://www.albahari.com/nutshell/speedinguplinqtosql.aspx
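Here is a minimal sketch of that technique, assuming designer-generated contexts named BlogDataContext and NewsDataContext (placeholder names). The generated contexts expose a constructor that accepts a MappingSource, and reusing one instance means the attribute-based mapping model is built only once:

using System.Data.Linq.Mapping;

static class ContextFactory
{
    // Built once, reused by every context instance.
    static readonly MappingSource sharedSource = new AttributeMappingSource();

    public static BlogDataContext Blog(string connectionString)
    {
        return new BlogDataContext(connectionString, sharedSource);
    }

    public static NewsDataContext News(string connectionString)
    {
        return new NewsDataContext(connectionString, sharedSource);
    }
}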
Sorry, I know that's not really a black and white answer, but hopefully it helps :)
Addition:
Just wanted to add that I did a small test and ran 20,000 SELECT statements against a small table, using 2 different data contexts:
DataClasses1DataContext contained mappings to all tables in the db (4 total)
DataClasses2DataContext contained a single mapping for just the one table
Results:
Time to execute 20000 SELECTs using DataClasses1DataContext: 00:00:10.4843750
Time to execute 20000 SELECTs using DataClasses2DataContext: 00:00:10.4218750
As you can see, it's not much of a difference.
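A rough reconstruction of such a timing loop might look like this (Stopwatch-based sketch; SomeTables is a placeholder for whichever table the SELECT hits):

using System;
using System.Diagnostics;
using System.Linq;

var sw = Stopwatch.StartNew();
for (int i = 0; i < 20000; i++)
{
    using (var db = new DataClasses1DataContext())
    {
        var row = db.SomeTables.FirstOrDefault(); // issues one SELECT
    }
}
sw.Stop();
Console.WriteLine("Time to execute 20000 SELECTs: " + sw.Elapsed);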

Related

ActiveRecord Views vs Tables (in Rails 4)

I'm only just starting to learn about views in ActiveRecord from reading a few blog posts and some tutorials on how to set them up in Rails.
What I would like to know is what are some of the pros and cons of using a View instead of a query on existing ActiveRecord tables? Are there real, measurable performance benefits of using a view?
For example, I have a standard merchant application with orders, line items, and products. For an admin dashboard, I have various queries, many of which ping a query that I reuse a lot in the code, namely one that returns user_id, order_id and total_revenue from that order. From a business perspective, many good stats are based off that core query. At what point does it make sense to switch to a view instead?
The ActiveRecord docs on Views are also a bit sparse so any references to some good resources on both the why and how would be greatly appreciated.
Clarification: I am not talking about HTML views but SQL database views. Essentially, are the performance wins maintained in an ActiveRecord implementation? Since the docs are sparse, are there any potential obstacles to those performance wins that could be lost if you implement them incorrectly (i.e., any non-obvious gotchas)?
I got this information from another developer off-line, so I am answering my own question here in case other people stumble upon it.
Basically, starting in Rails 3.1, ActiveRecord began implementing prepared statements, which preprocess and cache SQL statement patterns ahead of time, allowing faster query responses later. You can read more about it in this blog post.
This might mean there is actually not much benefit in switching to views in PostgreSQL, since views may not perform much better than prepared statements in PG.
The PostgreSQL documentation on prepared statements is clear and well written. More thoughts on PostgreSQL view performance can be found in this stackoverflow post.
Additionally, it's probably much more likely that your Rails app has performance issues due to N+1 queries; this is a great post that explains the problem and one of the easiest ways to prevent it, eager loading.
This question is not directly related to ActiveRecord; it seems like more of a database question to me. The answer is "it depends". People often use a view because they want to:
represent a subset of the data contained in a table
simplify a query by joining multiple tables into a virtual table represented by the view
do aggregation in the view
hide the complexity of their data
enforce some security restrictions
etc.
But most of the aforementioned features can be achieved with raw tables; it's just a little more complicated than using views. The other place you may consider using a view is for performance reasons: a materialized view (an indexed view in SQL Server). Basically, it saves a copy of your data in the view in the form you want, and that can greatly boost performance.

Entity, dealing with a large number of records (> 35 million)

We have a rather large set of related tables with over 35 million related records each. I need to create a couple of WCF methods that would query the database with some parameters (date ranges, type codes, etc.) and return related result sets (from 10 to 10,000 records).
The company is standardized on EF 4.0 but is open to 4.X. I might be able to make an argument to migrate to 5.0, but that's less likely.
What's the best approach to dealing with such a large number of records using Entity? Should I create a set of stored procs and call them from Entity, or is there something I can do within Entity?
I do not have any control over the databases so I cannot split the tables or create some materialized views or partitioned tables.
Any input/idea/suggestion is greatly appreciated.
At my work I faced a similar situation. We had a database with many tables, most of which contained around 7-10 million records each. We used Entity Framework to display the data, but the page was very slow to load (90 to 100 seconds); even the sorting on the grid took time. I was given the task of seeing whether it could be optimized, and after profiling it (ANTS profiler) I was able to optimize it (to under 7 seconds).
So the answer is yes, Entity Framework can handle loads of records (in the millions), but some care must be taken:
Understand that the call to the database is made only when the actual records are required; all the operations before that just build up the query (SQL). So try to fetch only the piece of data you need rather than requesting a large number of records; trim the fetch size as much as possible.
Yes, not only should you, you must use stored procedures: import them into your model and create function imports for them. You can also call them directly via ExecuteStoreCommand() and ExecuteStoreQuery<>() (see the sketch after this list). The same goes for functions and views, though EF has a really odd way of calling scalar functions: "SELECT dbo.blah(@id)".
EF performs slower when it has to populate an entity with a deep hierarchy, so be extremely careful with entities with deep hierarchies.
Sometimes, when you are requesting records and are not required to modify them, you should tell EF not to watch for property changes (AutoDetectChanges); record retrieval is much faster that way.
Database indexing is good in general, but with EF it becomes very important: the columns you use for retrieval and sorting should be properly indexed.
When your model is large, the VS2010/VS2012 model designer gets really unwieldy, so break your model into medium-sized models. There is a limitation here: entities from different models cannot be shared, even though they may be pointing to the same table in the database.
When you have to make changes to the same entity in different places, use the same entity instance by passing it around, and send the changes only once, rather than having each place fetch a fresh copy, make changes and store it (a real performance-gain tip).
When you need the data in only one or two columns, try not to fetch the full entity; you can either execute your SQL directly or use a mini-entity of some kind (see the projection in the sketch below). You may also want to cache some frequently used data in your application.
Transactions are slow; be careful with them.
If you keep these things in mind, EF should give you almost the same performance as plain ADO.NET, if not identical.
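A minimal sketch tying a few of these tips together against an EF 4.x ObjectContext. MyEntities, Orders, OrderSummary and dbo.GetOrderSummaries are placeholder names, not a real model:

using System;
using System.Data.Objects;
using System.Data.SqlClient;
using System.Linq;

var fromDate = new DateTime(2013, 1, 1);
var toDate = new DateTime(2013, 2, 1);

using (var ctx = new MyEntities())
{
    // Read-only query: skip change tracking for faster materialization.
    ctx.Orders.MergeOption = MergeOption.NoTracking;

    // Trim the fetch: project only the columns you need (a "mini entity")
    // instead of materializing full Order entities.
    var totals = ctx.Orders
        .Where(o => o.OrderDate >= fromDate && o.OrderDate < toDate)
        .Select(o => new { o.OrderId, o.Total })
        .ToList();

    // Push heavy work to the database: call a stored procedure directly.
    // OrderSummary is a plain class whose property names match the columns.
    var summaries = ctx.ExecuteStoreQuery<OrderSummary>(
        "EXEC dbo.GetOrderSummaries @from, @to",
        new SqlParameter("@from", fromDate),
        new SqlParameter("@to", toDate)).ToList();
}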
My experience with EF 4.1, code first: if you only need to read the records (i.e. you won't write them back), you will gain a performance boost by turning off change tracking for your context:
yourDbContext.Configuration.AutoDetectChangesEnabled = false;
Do this before loading any entities. If you need to update the loaded records, you can always call
yourDbContext.ChangeTracker.DetectChanges();
before calling SaveChanges().
The moment I hear statements like "the company is standardized on EF4, or EF5, or whatever", cold shivers run down my spine.
It is the equivalent of a car rental company saying "we have standardized on a single car model for our entire fleet".
Or a carpenter saying "I have standardized on chisels as my entire toolkit. I will not have saws, drills, etc."
There is such a thing as the right tool for the right job.
This statement only highlights that the person in charge of making key software architecture decisions has no clue about software architecture.
If you are dealing with over 100K records and the data models are complex (i.e. non-trivial), maybe EF6 is not the best option.
EF6 is based on the concepts of dynamic reflection and has similar design patterns to Castle Project Active Record
Do you need to load all 100K records into memory and perform operations on them? If yes, ask yourself whether you really need to do that, and why executing a stored procedure across the 100K records wouldn't achieve the same thing. Do some analysis and see what the actual data usage pattern is. Maybe the user performs a search which returns 100K records but only navigates through the first 200; think of a Google search, where hardly anyone goes past page 3 of the millions of results.
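To make the "first few pages" point concrete, a hedged sketch of database-side paging with LINQ (ctx, Orders, TypeCode and OrderDate are placeholder names):

using System.Linq;

string typeCode = "A";           // example filter value
int pageIndex = 0, pageSize = 200;
var page = ctx.Orders
    .Where(o => o.TypeCode == typeCode)
    .OrderBy(o => o.OrderDate)   // Skip/Take require a stable ordering
    .Skip(pageIndex * pageSize)
    .Take(pageSize)
    .ToList();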
If the answer is still yes, and you do need to load all 100K records into memory and perform operations, then maybe you need to consider something else, like a custom-built write-through cache with lightweight objects, perhaps with lazily loaded dynamic object pointers for nested objects, and so on. One instance where I use something like this is large product catalogs for e-commerce sites, where very large numbers of searches get executed against the catalog. The reason is to provide custom behavior such as early-exit search, wildcard search using pre-compiled regexes, and custom hashtable indexes into the product catalog.
There is no one-size-fits-all answer to this question. It all depends on the data usage scenarios and how the application works with the data. Consider "gorilla vs. shark": who would win? It all depends on the environment and the context.
Maybe EF6 is perfect for one piece that would benefit from dynamic reflection, while NetTiers is better for another piece that needs static reflection and an extensible ORM, and low-level ADO is perhaps best for extreme high-performance pieces.

What database should I use in an app where my models don't represent different ideas, but instead different types with overlapping fields?

I'm building an application where I will be gathering statistics from a game. Essentially, I will be parsing logs where each line is a game event. There are around 50 different kinds of events, but a lot of them are related. Each event has a specific set of values associated with it, and related events share a lot of these attributes. Overall there are around 50 attributes, but any given event only has around 5-10 attributes.
I would like to use Rails for the backend. Most of the queries will be event type related, meaning that I don't especially care about how two event types relate with each other in any given round, as much as I care about data from a single event type across many rounds. What kind of schema should I be building and what kind of database should I be using?
Given a relational database, I have thought of the following:
Have a flat structure, where there are only a couple of tables, but the events table has as many columns as there are overall event attributes. This would result in a lot of nulls in every row, but it would let me easily access what I need.
Have a table for each event type, among other things. This would let me save space and improve performance, but it seems excessive to have that many tables given that events aren't really separate 'ideas'.
Group related events together, minimizing both the number of tables and the number of attributes per table. The problem then becomes the grouping: it is far from clear-cut, and it could take a long time to properly establish event supertypes. Also, it doesn't completely solve the problem of there being a fair number of nils.
It was also suggested that I look into using a NoSQL database, such as MongoDB. It seems very applicable in this case, but I've never used a non-relational database before. It seems like I would still need a lot of different models, even though I wouldn't have tables for each one.
Any ideas?
This feels like a great use case for MongoDB and a very awkward fit for a relational database.
The types of queries you would be making against this data are very key to the best schema design, but imagine that your documents (in a single collection, similar to the first option above) look something like this:
{ "round" : 1,
"eventType": "et1",
"attributeName": "attributeValue",
...
}
You can easily query by round, by eventType, getting back all attributes or just a specified subset, etc.
You don't have to know up front how many attributes you might have, which ones belong with which event types, or even how many event types you have. As you build your prototype/application you will be able to evolve your model as needed.
There is a very large, active community of Rails/MongoDB folks, and there's a good chance that you can find a lot of developers you can ask questions of and a lot of code you can look at as examples.
I would encourage you to try it out, and see if it feels like a good fit. I was going to add some links to help you get started but there are too many of them to choose from!
In case you are wondering whether or not to use an object mapper, here's a good answer to that.
A good write-up of dealing with dynamic attributes with Ruby and MongoDB is here.

Am I the only one that queries more than one database?

After much reading on Ruby on Rails and multiple database connections, it seems that I have found something that not that many folks do, at least not with RoR. I am used to querying many different databases and schemas and pulling back the information either for a report or for one seamless page, so that a user doesn't have to log on to several different systems. I can create a page that brings all the systems together on one or two web pages.
Is that not a normal occurrence in the web and database driven design?
EDIT: Is this because most all my original code is in classic asp?
I really, honestly think that most ORM designers don't take into account that users may want to access more than one database. This seems to be a pretty common limitation in the ORM universe.
Our client website runs across 3 databases, so I do this too. Actually, I'm condensing everything into views off of one central database, which then connects to the others.
I never considered this to be "normal" behavior though. I would guess that most of the time you would be designing for one system and working against that.
EDIT: Just to elaborate, we use LINQ to SQL for our data layer, and we define the objects against the database views. This way we keep reports and application code working off the same data model. There is some extra work setting up the LINQ entities, because you have to manually define primary keys and set up the associations; however, so far it has definitely proven worthwhile. We tried to do the same with Entity Framework, but had a lot of trouble getting the relationships set up appropriately and had to give up. The funny thing is, I had thought Entity Framework was supposed to be designed for more advanced scenarios like ours...
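For what it's worth, a sketch of what that setup looks like: a LINQ to SQL entity mapped by hand onto a database view, with the primary key declared manually, since views carry no key metadata for the designer to pick up. The view and column names here are illustrative, not our real schema:

using System.Data.Linq.Mapping;

[Table(Name = "dbo.vw_CustomerOrders")]
public class CustomerOrder
{
    [Column(IsPrimaryKey = true)]   // must be declared by hand for a view
    public int OrderId { get; set; }

    [Column]
    public string CustomerName { get; set; }

    [Column]
    public decimal Total { get; set; }
}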
It is not uncommon to hit multiple databases during a single part of an application's workflow. However, in every instance that I have done it, this has been performed through several web service calls, which among other things wrap the databases in question.
I have not, to my knowledge, ever had a need to hit multiple databases directly at once and merge results into a single report.
I've seen this kind of architecture in corporate portals, where lots of data is pulled in via different data sources. The whole point of a portal is to bring siloed systems together; users might not want to be using lots of systems in isolation (especially if they have to sign into each one). In that sort of scenario it is normal, particularly in a large company that has expanded rapidly and has a large number of heterogeneous systems.
In your case, whether this is the right thing to do depends on why you have these separate DBs.
With ORMs it may be a little difficult. However, it can be done: pull the objects as needed from the various databases, then use them as a composite to create the object that is actually desired. If you can skip the ORM part of the process, you can query the databases directly and build your object from the results.
Pulling data from two databases and compiling a report is not uncommon, but because cross-database queries cannot be optimized by the query engine of either database, OLTP systems typically use a single database, to keep the application performant.
If you are building the system from the ground up, it is not advisable to do it this way. If you are working with a system you didn't design, there is not much choice, and it is not uncommon (that is the difference between "organic" and "planned" growth).
Not counting master and various test instances, I hit nine databases on a regular basis. Yes, I inherited it, and yes, "Classic" ASP figures prominently. Of course, all the "brillant" designers of this mess are long gone. We're replacing it with something more sane as quickly as we safely can.
I would think that if you're building a new system and keep adding databases, by the time you get to two or three it's probably time to rethink your design. OTOH, if you're aggregating data from multiple, disparate systems, then no, it's not that strange. Depending on the timeliness you need, your budget for throwing hardware at the problem, and whether your data is mostly static, this could be a good scenario for a "reporting server" that pulls the data down from the live server periodically.

Why all the Active Record hate? [closed]

As I learn more and more about OOP, and start to implement various design patterns, I keep coming back to cases where people are hating on Active Record.
Often, people say that it doesn't scale well (citing Twitter as their prime example), but nobody actually explains why it doesn't scale well, and/or how to achieve the pros of AR without the cons (via a similar but different pattern?).
Hopefully this won't turn into a holy war about design patterns; all I want to know is, specifically, what's wrong with Active Record.
If it doesn't scale well, why not?
What other problems does it have?
There's ActiveRecord the Design Pattern and ActiveRecord the Rails ORM Library, and there's also a ton of knock-offs for .NET, and other languages.
These are all different things. They mostly follow that design pattern, but extend and modify it in many different ways, so before anyone says "ActiveRecord sucks" it needs to be qualified with "which ActiveRecord? There are heaps."
I'm only familiar with Rails' ActiveRecord, so I'll try to address all the complaints which have been raised in the context of using it.
#BlaM
The problem that I see with Active Records is, that it's always just about one table
Code:
class Person
belongs_to :company
end
people = Person.find(:all, :include => :company)
This generates SQL with LEFT JOIN companies ON companies.id = person.company_id and automatically creates the associated Company objects, so you can do people.first.company and it doesn't need to hit the database again, because the data is already present.
#pix0r
The inherent problem with Active Record is that database queries are automatically generated and executed to populate objects and modify database records
Code:
person = Person.find_by_sql("giant complicated sql query")
This is discouraged as it's ugly, but for the cases where you just plain and simply need to write raw SQL, it's easily done.
#Tim Sullivan
...and you select several instances of the model, you're basically doing a "select * from ..."
Code:
people = Person.find(:all, :select=>'name, id')
This will select only the name and id columns from the database; all the other 'attributes' in the mapped objects will just be nil, unless you manually reload that object, and so on.
I have always found that ActiveRecord is good for quick CRUD-based applications where the Model is relatively flat (as in, not a lot of class hierarchies). However, for applications with complex OO hierarchies, a DataMapper is probably a better solution. While ActiveRecord assumes a 1:1 ratio between your tables and your data objects, that kind of relationship gets unwieldy with more complex domains. In his book on patterns, Martin Fowler points out that ActiveRecord tends to break down under conditions where your Model is fairly complex, and suggests a DataMapper as the alternative.
I have found this to be true in practice. In cases where you have a lot of inheritance in your domain, it is harder to map inheritance to your RDBMS than it is to map associations or composition.
The way I do it is to have "domain" objects that are accessed by your controllers via these DataMapper (or "service layer") classes. These do not directly mirror the database, but act as your OO representation of some real-world object. Say you have a User class in your domain and need to have references to, or collections of, other objects already loaded when you retrieve that User object. The data may be coming from many different tables, and an ActiveRecord pattern can make that really hard.
Instead of loading the User object directly and accessing data using an ActiveRecord-style API, your controller code retrieves a User object by calling, for instance, the UserMapper.getUser() method. It is that mapper that is responsible for loading any associated objects from their respective tables and returning the completed User "domain" object to the caller.
Essentially, you are just adding another layer of abstraction to make the code more manageable. Whether your DataMapper classes contain raw custom SQL, calls to a data abstraction layer API, or even access an ActiveRecord pattern themselves doesn't really matter to the controller code that receives a nice, populated User object.
Anyway, that's how I do it.
I think there are likely very different reasons why people are "hating" on ActiveRecord versus what is actually "wrong" with it.
On the hating issue, there is a lot of venom towards anything Rails related. As for what is wrong with it, it is like all technology: there are situations where it is a good choice and situations where there are better choices. In my experience, the situation where you don't get to take advantage of most of the features of Rails' ActiveRecord is where the database is badly structured. If you are accessing data without primary keys, with things that violate first normal form, or where lots of stored procedures are required to access the data, you are better off using something that is more of a plain SQL wrapper. If your database is relatively well structured, ActiveRecord lets you take advantage of that.
To add to the theme of replying to commenters who say things are hard in ActiveRecord, here is a code-snippet rejoinder:
#Sam McAfee Say you have a User class in your domain, and need to have references to, or collections of other objects, already loaded when you retrieve that User object. The data may be coming from many different tables, and an ActiveRecord pattern can make it really hard.
user = User.find(id, :include => ["posts", "comments"])
first_post = user.posts.first
first_comment = user.comments.first
By using the include option, ActiveRecord lets you override the default lazy-loading behavior.
My long and late answer, not even complete, but a good explanation of WHY I hate this pattern, with opinions and even some emotions:
1) short version: Active Record creates a "thin layer" of "strong binding" between the database and the application code. It solves no logical problems, no whatever-problems, no problems at all. IMHO it does not provide ANY VALUE, except some syntactic sugar for the programmer (who may then use an "object syntax" to access data that lives in a relational database). The effort to create some comfort for the programmers would (IMHO...) be better invested in low-level database access tools, e.g. some variations of a simple, easy, plain hash_map get_record(string id_value, string table_name, string id_column_name="id") and similar methods (of course, the concepts and elegance vary greatly with the language used).
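To make that concrete, a sketch of such a helper in C# with plain ADO.NET (System.Data.SqlClient). GetRecord and its parameters mirror the signature above; note that the table and column names must come from trusted code, since only the id value can be parameterized:

using System.Collections.Generic;
using System.Data.SqlClient;

static class Db
{
    // Returns the first matching row as a column-name -> value map, or null.
    public static Dictionary<string, object> GetRecord(
        string connectionString, string idValue,
        string tableName, string idColumnName = "id")
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT * FROM [" + tableName + "] WHERE [" + idColumnName + "] = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", idValue);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return null;
                var row = new Dictionary<string, object>();
                for (int i = 0; i < reader.FieldCount; i++)
                    row[reader.GetName(i)] = reader.GetValue(i);
                return row;
            }
        }
    }
}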
2) long version: in any database-driven project where I had "conceptual control" of things, I avoided AR, and it was good. I usually build a layered architecture (sooner or later you do divide your software into layers, at least in medium- to large-sized projects):
A1) the database itself: tables, relations, even some logic if the DBMS allows it (MySQL is grown-up now, too)
A2) very often there is more than one data store: the file system (blobs in the database are not always a good decision...), legacy systems (imagine "how" they will be accessed; many varieties are possible... but that's not the point...)
B) the database access layer: at this level, tool methods and helpers for easily accessing the data in the database are very welcome, but AR does not provide any value here, except some syntactic sugar
C) the application objects layer: "application objects" are sometimes simple rows of a table in the database, but most times they are compound objects anyway and have some higher logic attached, so investing time in AR objects at this level is plainly useless, a waste of precious coder time, because the "real value", the "higher logic" of those objects, needs to be implemented on top of the AR objects anyway, with or without AR! And, for example, why would you want an abstraction of "log entry objects"? Application logic code writes them, but should it have the ability to update or delete them? That sounds silly, and App::Log("I am a log message") is orders of magnitude easier to use than le = new LogEntry(); le.time = now(); le.text = "I am a log message"; le.Insert();. And, for example, using a "log entry object" in the log view in your application will work for 100, 1,000 or even 10,000 log lines, but sooner or later you will have to optimize, and I bet that in most cases you will just use that small, beautiful SQL SELECT statement in your application logic (which totally breaks the AR idea), instead of wrapping that small statement in rigid, fixed AR-idea frames with lots of code wrapping and hiding it. The time you wasted writing and/or building AR code could have been invested in a much more clever interface for reading lists of log entries (many, many ways; the sky is the limit). Coders should dare to invent new abstractions to realize their application logic that fit the intended application, and not stupidly re-implement silly patterns that sound good at first sight!
D) the application logic: implements the logic of interacting objects and the creating, deleting and listing(!) of application logic objects (NO, those tasks should rarely be anchored in the application logic objects themselves: does the sheet of paper on your desk tell you the names and locations of all the other sheets in your office? forget "static" methods for listing objects; that's silly, a bad compromise created to make the human way of thinking fit into [some-not-all-AR-framework-like] AR thinking)
E) the user interface: well, what I will write in the following lines is very, very, very subjective, but in my experience, projects built on AR often neglected the UI part of the application; time was wasted on the creation of obscure abstractions. In the end such applications wasted a lot of coder time and feel like applications from coders for coders, tech-inclined inside and out. The coders feel good (hard work finally done, everything finished and correct, according to the concept on paper...), and the customers "just have to learn that it needs to be like that", because that's "professional"... ok, sorry, I digress ;-)
Well, admittedly, this is all subjective, but it is my experience (Ruby on Rails excluded; it may be different, and I have zero practical experience with that approach).
In paid projects, I often heard the demand to start by creating some "active record" objects as building blocks for the higher-level application logic. In my experience, this conspicuously often was some kind of excuse for the fact that the customer (a software dev company in most cases) did not have a good concept, a big view, an overview of what the product should finally be. Those customers think in rigid frames ("in the project ten years ago it worked well..."); they may flesh out entities, they may define entity relations, they may break down data relations and define basic application logic, but then they stop and hand it over to you, and think that's all you need... they often lack a complete concept of application logic, user interface, usability and so on... they lack the big view and they lack love for the details, and they want you to follow the AR way of doing things because... well, why? It worked in that project years ago; it keeps people busy and silent? I don't know. But the "details" separate the men from the boys, or... how did the original advertisement slogan go? ;-)
After many years (ten years of active development experience), whenever a customer mentions an "active record pattern", my alarm bell rings. I have learned to try to get them back to that essential conceptual phase, to let them think twice, to try to get them to show their conceptual weaknesses, or just to avoid them altogether if they are undiscerning (in the end, you know, a customer that does not yet know what it wants, maybe even thinks it knows but doesn't, or tries to externalize concept work to ME for free, costs me many precious hours, days, weeks and months of my time; life is too short...).
So, finally: THIS ALL is why I hate that silly "active record pattern", and I do and will avoid it whenever possible.
EDIT: I would even call this a no-pattern. It does not solve any problem (patterns are not meant to create syntactic sugar). It creates many problems: the root of all of them (mentioned in many answers here) is that it just hides the good old, well-developed and powerful SQL behind an interface that is, by the pattern's definition, extremely limited.
This pattern replaces flexibility with syntactic sugar!
Think about it, which problem does AR solve for you?
Some of the answers are getting me confused.
Some answers are going off into "ORM" vs. "SQL" or something like that.
The fact is that AR is just a simplifying programming pattern where you take advantage of your domain objects to write your database access code in them.
These objects usually have business attributes (properties of the bean) and some behaviour (methods that usually work on these properties).
AR just says "add some methods to these domain objects" for database-related tasks.
And I have to say, from my own opinion and experience, that I do not like the pattern.
At first sight it can sound pretty good. Some modern Java tools like Spring Roo use this pattern.
For me, the real problem is simply an OOP concern: the AR pattern in some way forces you to add a dependency from your domain object to infrastructure objects. These infrastructure objects let the domain object query the database through the methods AR suggests.
I have always said that two layers are key to the success of a project: the service layer (where the business logic resides, or which can be exported through some kind of remoting technology, such as Web Services) and the domain layer. In my opinion, if we add dependencies (not really needed) to the domain layer objects in order to implement the AR pattern, our domain objects will be harder to share with other layers or (rare) external applications.
Spring Roo's implementation of AR is interesting, because it does not rely on the object itself but on AspectJ files. But if you later decide not to work with Roo and have to refactor the project, the AR methods will end up implemented directly in your domain objects.
Another point of view: imagine we did not use a relational database to store our objects. Imagine the application stores our domain objects in a NoSQL database, or just in XML files, for example. Would we implement the methods that do these tasks in our domain objects? I do not think so (for example, in the case of XML, we would add XML-related dependencies to our domain objects... truly sad, I think). Why, then, do we have to implement the relational DB methods in the domain objects, as the AR pattern says?
To sum up, the AR pattern can sound simpler and fine for small and simple applications. But when we have complex and large apps, I think the classical layered architecture is a better approach.
The question is about the Active Record design pattern, not an ORM tool.
The original question is tagged with rails and refers to Twitter, which is built in Ruby on Rails. The ActiveRecord framework within Rails is an implementation of Fowler's Active Record design pattern.
The main thing that I've seen with regard to complaints about Active Record is that when you create a model around a table and select several instances of the model, you're basically doing a "SELECT * FROM ...". This is fine for editing a record or displaying a record, but if you want to, say, display a list of the cities of all the contacts in your database, you could do "SELECT City FROM ..." and only get the cities. Doing this with Active Record would require selecting all the columns, but only using City.
Of course, varying implementations will handle this differently. Nevertheless, it's one issue.
Now, you can get around this by creating a new model for the specific thing you're trying to do, but some people would argue that it's more effort than the benefit.
Me, I dig Active Record. :-)
HTH
Although all the other comments regarding SQL optimization are certainly valid, my main complaint with the active record pattern is that it usually leads to impedance mismatch. I like keeping my domain clean and properly encapsulated, which the active record pattern usually destroys all hope of doing.
I love the way SubSonic does the one column only thing.
Either
DataBaseTable.GetList(DataBaseTable.Columns.ColumnYouWant)
, or:
Query q = DataBaseTable.CreateQuery()
.WHERE(DataBaseTable.Columns.ColumnToFilterOn,value);
q.SelectList = DataBaseTable.Columns.ColumnYouWant;
q.Load();
But Linq is still king when it comes to lazy loading.
#BlaM:
Sometimes I just implemented an active record for the result of a join. It doesn't always have to be the relation Table <--> Active Record. Why not "result of a join statement" <--> Active Record?
I'm going to talk about Active Record as a design pattern; I haven't seen RoR.
Some developers hate Active Record because they read smart books about writing clean and neat code, and these books state that Active Record violates the single responsibility principle, violates the DDD rule that domain objects should be persistence ignorant, and many other rules from those kinds of books.
The second thing: domain objects in Active Record tend to be 1-to-1 with the database, which may be considered a limitation in some kinds of systems (mostly n-tier).
That's just the abstract side; I haven't seen Ruby on Rails' actual implementation of this pattern.
The problem that I see with Active Record is that it's always just about one table. That's okay as long as you really work with just that one table, but when you work with data, in most cases you'll have some kind of join somewhere.
Yes, a join usually is worse than no join at all when it comes to performance, but a join usually is better than a "fake" join where you first read the whole of table A and then use the gained information to read and filter table B.
The problem with ActiveRecord is that the queries it automatically generates for you can cause performance problems.
You end up doing unintuitive tricks to optimize the queries, and then you're left wondering whether it would have been more time-effective to write the query by hand in the first place.
Try doing a many-to-many polymorphic relationship. Not so easy, especially when you aren't using STI.
