Transforming objects to other types - ruby-on-rails

I'm working on a RoR app, but this is a general question of strategy for OOP. Consider the case where there are multiple types of references that you are storing data for: Books, Articles, Presentations, Book Chapters, etc. Each type of reference is part of a hierarchy where common behaviors sit at the most general point of inheritance, and at the DB level I am using single-table inheritance. The type is set by use of a select option, so let's say that I was entering the data as if it were a Book, but then realized that it is only a chapter. So I change the type of reference by selecting "Book Chapter", which then posts an update to the existing model/form. The question is: what is the correct strategy for handling this?
On one hand it seems preferable to transform the existing record in the DB to avoid id exhaustion, and potentially save on operations for creating/deleting records. This however tends to make the update strategy complex.
On the other hand, it seems more in keeping with general object orientation to create a new object (and record) using the old object to initialize values that you want to persist, then delete the old object. This I think makes more sense in terms of an Object Space (heap), and I think is more aligned to ideas like those of general systems.
However, I haven't nailed this down, and after sitting on it for a while, I'm pitching it to this community to see what the "right" way to do this is.

Prefer immutable objects, in other words: the second strategy. Your objects may not be immutable by themselves, but reducing mutability is often a step in the right direction.
Besides that, this is the more natural way. In general OOP terms there's no way to change the type of an object. In your situation you can, but it's still an awkward and unusual thing to do.
On the other hand, if your objects are represented by the same (identical) class and changing the type is done by setting a high-level property, one could argue that re-creating the object is overkill.
Still, reducing mutability is a good thing, but if your class is already designed to be mutable, it might not be worth it. (In the special case where there's really only one class from the language's point of view.)

The transformation you're talking about doesn't seem to warrant a new record or even a new object.
Each of the entries you cited has the same form. They are blocks of text with siblings, parents and children. A chapter may have a block of text, with a parent book and a child endnote, for example. They are differentiated at the DB level only by their type, which itself could be a field.
All you need is a model that handles these 'elements' differently depending on whether each is flagged as a book, a chapter, etc. If an element is flagged as a chapter, yet has no parent, for example, then you might flag it as a 'book' when it is saved to the DB.
Changing the way an element is flagged doesn't change the element, it only changes the way it is viewed. So long as the element knows how to find its children, the data will compute in the same way. As far as the model is concerned, it's just an element you're worrying about. The rest is done in the UI.

Related

When should inferred relationships and nodes be used over explicit ones?

I was looking up how to utilise temporary relationships in Neo4j when I came across this question: Cypher temp relationship
and the comment underneath it made me wonder when they should be used. Since no one argued against it, I thought I would bring it up here.
I come from a mainly SQL background and my main reason for using virtual relationships was to eliminate duplicated data and do traversals to get properties of something instead.
For a more specific example, let's say we have a robust cake recipe, which has sugar as an ingredient. The sugar is what makes the cake sweet.
Now imagine a use case where I don't like sweet cakes so I want to get all the ingredients of the recipe that make the cake sweet and possibly remove them or find alternatives.
Then there's another use case where I just want foods that are sweet. I could work backwards from the sweet ingredients to get to the food or just store that a cake is sweet in general, which saves time from traversal and makes a query easier. However, as I mentioned before, this duplicates known data that can be inferred.
Sorry if the example is too strange, I suck at making them. I hope the main question comes across, though.
My feeling is that the only valid scenario for creating redundant "shortcut" relationships is this:
Your use case has a stringent time constraint (e.g., average query time must be less than 200ms), but your neo4j query -- despite optimization -- exceeds that constraint, and you have verified that adding "shortcut" relationships will indeed make the response time acceptable.
You should be aware that adding redundant "shortcut" relationships comes with its own costs:
Queries that modify the DB would need to be more complex (to modify the redundant relationships) and also slower.
You'd always have to add the redundant relationships -- even if you actually never need some (most?) of them.
If you want to make concurrent updates to the DB, the chances that you may lose some updates and introduce inconsistencies into the DB would increase -- meaning that you'd have to work even harder to avoid inconsistencies.
NOTE: For visualization purposes, you can use virtual nodes and relationships, which are temporary and not actually stored in the DB.

Gave up DDD, but need some of its benefits

I'm giving up traditional DDD, which is often a massive timewaster, and forces me to do endless mapping: data layer <--> domain layer <--> presentation layer.
For even a small change I must change data models, domain models, presentation models / viewmodels, then the repositories, manager/service classes, and of course the AutoMapper maps, and then test the whole thing! Each call requires calling a layer which calls a layer which calls the underlying code. And I don't get anything in return other than "you might need it in the future". Meh.
My current approach is more pragmatic:
I don't worry about the difference between the "data layer" and "domain layer" any longer, as there's no point - the terms are interchangeable. I let EF do its thing, and add interfaces and repositories on top when needed.
I've merged my "data" and "domain" projects (into "core", boring name, I know), and I could almost swear that Visual Studio is actually running faster.
I allow EF entities to go up and down the stack, but I still map them to presentation models / viewmodels as usual.
For simple operations I call repositories directly from controllers, for complex operations I use domain managers/services as usual; the repositories never expose IQueryable.
I define entities/POCOs as partial classes, so I can add domain behavior separately in corresponding partial classes.
The problem: I now use the entities all over the place, so client code can see their navigation properties. And the models are always materialized after they leave a repository, so those navigation properties are often null.
Possible solutions:
1. Live with it. It's ugly but preferable to the problems explained above.
2. For each entity, define an interface which hides the navigation properties; and make client code use the interfaces. But ironically, this means another layer (albeit thin and manageable).
3. What else?
I'm not used to this sort of fast-and-loose programming style, so maybe I'm missing some obvious tricks. Is there anything else I should take into account? I'm sure there are other problems I will encounter soon.
EDIT:
This question is not about DDD. And note that many struggle with a traditional DDD approach -- Seemann appears to arrive at the same conclusion, Rahien speaks about the "Useless Abstraction For The Sake Of Abstraction Anti Pattern", and Evans himself said DDD is only truly useful in 5% of cases. Also see this thread. Some of the comments/answers are predictably about how I'm doing DDD wrong, or how I can tweak my system to do it right. However, I'm not asking about DDD or bashing it for the cases where it is suitable; rather, I'd like to know what others are doing in line with the thinking I've described above. It's not as if DDD is a panacea for all design ills; every decade a new process comes out (RUP anyone? XP, Agile, Booch, blah...). DDD is just the shiniest new one, and the most well known and used. But pragmatism should come first, as I'm trying to build salable products that ship on time and are easy to maintain. The most useful programming axiom I've learned, by far, is YAGNI. What I want is to change my system to a sort of "DDD-lite", where I get its strong design/OOP/pattern philosophy, but without the fat.
A typical persistence approach with DDD is to map the domain model directly to corresponding tables. Technically, the mappings are still there (and are usually declared in code), but there is no explicit data model, as pointed out by lazyberezovsky.
The problem with navigation properties can be resolved in a few different ways, regardless of whether you are employing DDD or not. I dislike approach 1 because it makes it more difficult to reason about your code - you never know which properties will be set and which won't. Approach 2 is much better in theory, because it makes it very explicit what a given query requires, and making things explicit is a good practice in general. A similar, but simpler and less brittle approach is to use read-models, which are just objects designed to fulfill the requirements of a given query or set of queries. Within the context of DDD, they allow you to decouple behavior-rich entities from queries, which are quite often at odds. Now proponents of DRY may scream heresy and come at you with torches and pitchforks, but in practice it is often much easier to maintain a read-model and an entity than to try to coerce entities to fulfill query requirements by way of interfaces or complex mapping strategies. Additionally, the responsibilities of a read-model and a behavior model are quite different, therefore DRY isn't applicable.
This is not to say that DDD is applicable in your scenario. It is often a wise decision to avoid full-fledged DDD, especially in scenarios that are mostly CRUD. You are correct to be cautious; this is a good example of KISS and YAGNI. DDD reaps benefits when your domain consists of complex behavior, not just data. At any rate, the read-model pattern applies.
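To make the read-model idea concrete, here is a minimal C# sketch. It assumes a hypothetical EF context named AppDbContext with an Order entity; every name and property below is illustrative rather than taken from the question.
using System.Collections.Generic;
using System.Linq;
// Flat, query-shaped object: no navigation properties, no behavior.
public class OrderSummary
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}
public class OrderQueries
{
    private readonly AppDbContext _db;   // hypothetical EF context
    public OrderQueries(AppDbContext db) { _db = db; }
    // The projection runs in SQL, so only the needed columns are fetched and
    // there is never a half-materialized entity with null navigation properties.
    public List<OrderSummary> GetRecentOrders(int count)
    {
        return _db.Orders
            .OrderByDescending(o => o.PlacedOn)
            .Take(count)
            .Select(o => new OrderSummary
            {
                OrderId = o.Id,
                CustomerName = o.Customer.Name,
                Total = o.Lines.Sum(l => l.Price * l.Quantity)
            })
            .ToList();
    }
}
The read-model lives alongside the behavior-rich entity; each serves a different purpose, which is why the apparent duplication is acceptable.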
UPDATE
For implementations that don't employ a read-model, take a look at Fetching Strategy Design, where the notion of a fetching strategy allows the specification of exactly what is needed from the database, which mitigates issues with navigation properties. The material referenced in the linked post is also of interest. Overall, this attempts to avoid the layer of indirection present in other approaches. However, in my opinion, using the proposed fetching strategy is more complex than using a read-model while the net result is the same.
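This is not necessarily what the linked article proposes, but as a loose sketch of the general idea: the caller declares which related data it needs, and the repository translates that into EF Include calls. All type and member names below, including AppDbContext, are invented for illustration.
using System.Data.Entity;   // EF6; for EF Core use Microsoft.EntityFrameworkCore
using System.Linq;
public class OrderFetchStrategy
{
    public bool IncludeLines { get; set; }
    public bool IncludeCustomer { get; set; }
}
public class OrderRepositoryWithFetch
{
    private readonly AppDbContext _db;   // hypothetical EF context
    public OrderRepositoryWithFetch(AppDbContext db) { _db = db; }
    public Order Get(int id, OrderFetchStrategy fetch)
    {
        // Only the relationships the caller asked for are eagerly loaded.
        IQueryable<Order> query = _db.Orders;
        if (fetch.IncludeLines)    query = query.Include(o => o.Lines);
        if (fetch.IncludeCustomer) query = query.Include(o => o.Customer);
        return query.Single(o => o.Id == id);
    }
}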
Some thoughts about this point:
... the repositories never expose IQueryable ... the models are always materialized after they leave a repository ...
Your question is tagged with "asp.net-mvc", so you have a web application in mind. 90% or more of all requests will be GET requests that are supposed to fetch some data from the database and show those data in a web view. How often are those needed data really entities rather than only bags of properties (a selection of properties of an entity type or perhaps composed of properties from multiple entities)?
Say, your application has 100 views. Only a minority of these will show complete entities:
50 of them are list views that show selected data (a customer with ID and address, but without the customer's contact person, phone number and sales volume)
20 of them contain autocomplete text boxes to select a reference (the customer for an order, but only the customer's name and city is shown in the autocomplete list, not the rest of the address nor contact person, phone number and sales volume and only the first 5 hits are displayed)
1 is an edit view for a customer that shows everything, but not the sales volume
1 is a details view for a customer with his last five orders
1 is a details view for an order including order items including product of each item but without the product's supplier name
1 is the same view but specialized for the purchasing department, which wants to see the supplier for each item's product along with the supplier's average lead time for the last three months.
1 is a view for the service department that shows the order with only the order items of product category "repair service"
1 view for the Human Resources department shows employees including a photo stored as a big blob
1 view for personnel planning department shows a short version of the employee without photo
etc., etc.
As a UI programmer I would have all kinds of data requirements to render a view with the examples above:
I need only a selection of properties
I need even different selections of the same entity's properties for different views
I need an order including all items but without a reference to a product
I need an order including all items (but not all properties of the items) and including a reference to a product and to a supplier (but not all supplier's properties)
I need an order including only a filtered list of order items
I need a customer including the last five orders, not all 3000 orders he ever had
I need an employee but please without the big blob image
etc., etc.
How to fulfill these requirements as a data access/repository/service developer?
I only provide a handful of methods and materialize entities: load order header, load order header with items, load order header with items and product, load order header with items and product and supplier, load customer header (throw 15 of the 20 properties away, dear UI developer, if you only need five properties), load customer header with all 3000 orders (throw 2995 away, dear UI developer, if you only need five), etc., etc. I return interfaces from the repositories that hide the navigation properties that were not loaded.
I care about every detail that the UI needs: I create repository/service methods like GetFiveCustomerPropertiesForAutoComplete, GetCustomerWithLastFiveOrders, etc. etc. I return interfaces from the repositories that hide the properties (also scalar) I haven't loaded. Or I return "DTOs" that contain the requested properties. I change the repository/services and create new DTOs every day when a UI developer calls with a data requirement for the next view.
I return IQueryable<TEntity> from the repositories and tell the UI developer "create the LINQ query yourself to fetch the data you need for your views". (Next morning the DBA is complaining about hundreds of terrible performing database queries.)
I return "prepared" IQueryable<TEntity>s from the repositories/services that cover - for example - security concerns like applying Where clauses for the user's access rights or append a Where clause for a search term or apply a NoTracking option to the query. I tell the UI developer: "You are allowed to extend the query with a) projections (Select), b) paging (Take and Skip) and perhaps c) sorting (OrderBy) because I consider those three query parts as UI concerns. All other query requirements (filtering, joining, grouping, etc.) have to be implemented in the repository/service layer and are forbidden in the UI layer." The most important piece here are projections that materialize ViewModels directly through the LINQ/SQL query without intermediate mapping layer and without the overhead to load more than the needed columns/properties.
These are only some thoughts. Every approach has its benefits and downsides. Working in small teams where at least one or a few developers have an overview of what is happening in both the repository/service and the UI/"projection" layer, the last option works fine for me in my experience, although it doesn't always work with the strict rules described (for example, the filter by product category for the included order items of an order requires applying a Where clause inside of the projection, i.e. in the UI layer). For POST requests and data modifications I would use DTOs that send the data collected from a view back to a service to be processed there.
For stricter separation of "query layer" and UI layer I would probably prefer something close to the second option, maybe not with an interface/DTO for every UI requirement, but somehow reduced to a set of DTOs for the most common requirements (with the price of a little overhead of sometimes unnecessarily loaded properties). However, I expect that to be more work than the last option due to the larger amount of necessary repository/service methods, the additional maintenance of (perhaps many) DTOs and the intermediate mapping between DTOs and ViewModels.
Personally I am concerned about materializing full entities, especially complex object graphs, when I don't need them 90% of the time. But my concern is not verified by extensive performance measurements proving that this approach is really a problem for a "normal" application that doesn't have special high performance needs.
How can anyone give you sound advice when we have no clue as to what it is you are building? In the grand scheme of things, you might be building the wrong solution (not saying you are). So do realize all we can relate to is technical design issues and similar past experiences.
Many people face your problem, indeed. The mapping is loose coupling tax in the land of static typing. Maybe a more dynamic language could solve some of your pain. Or maybe you might find virtue in automating more (DSL, MDA). You could also switch to client server instead.
Interfaces are not layers, rather abstractions. Use them wisely.
Personally, I'd never take these shortcuts. Been bitten too many times trying to skip steps. Logic starts popping up in odd places. If I have a data-driven app to develop, simple datasets come to mind, EF as well. But I don't call the objects aggregate or entity in the DDD sense, just entity in the ERD sense. Transaction Script might be a better fit than sprinkling methods across partial classes. As for read-model objects, these are not layers of indirection.
Overall, I get the feeling, and it is just that, that you're making a mess of things because you fight the mapping friction by taking on a dependency on objects that don't reveal the required shape (navigation properties that are null), thereby causing problems in a different area.
I'll just try to be short - we went for method 2, i.e. adding a layer of interfaces that you use on the client. You can have EF generate them for you, with just a little tweak of the .tt templates.
Yes, it creates (yet) another layer, but it's logic-free and adds no complexity. Of course, if your client needs to deserialize entities, you have to add (yet) another layer that will handle deserialization and reference both the entity definitions and the interfaces it returns to the client. But it's also thin, so we learned to live with it, because it turned out to work just fine, and the client really stays clean...
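For illustration, a small hand-written C# sketch of what such an interface layer can look like (in the setup above the .tt templates would generate it); Customer, Order and AppDbContext are invented names. The interface exposes only scalar properties, the entity implements it, and client code typed against the interface never sees the navigation properties.
using System.Collections.Generic;
using System.Linq;
public interface ICustomerDto
{
    int Id { get; }
    string Name { get; }
    string City { get; }
}
public partial class Customer : ICustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
    // Navigation property: present on the class, absent from the interface.
    public virtual ICollection<Order> Orders { get; set; }
}
public class CustomerService
{
    private readonly AppDbContext _db;   // hypothetical EF context
    public CustomerService(AppDbContext db) { _db = db; }
    // Returned as ICustomerDto: callers cannot reach Orders or any other
    // possibly-unloaded navigation property.
    public ICustomerDto GetCustomer(int id)
    {
        return _db.Customers.Single(c => c.Id == id);
    }
}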
The problem: I now use the entities all over the place, so client code can see their navigation properties.
I don't quite get why this is a problem and how it's related to EF entities in particular. By client code do you mean presentation layer code or any code consuming your entities?
For UI code a simple solution is to define ViewModels that just don't expose these navigation properties (or only expose a few of them depending on the object graph depth your GUIs need).
For other code it's only normal to be able to see the navigation properties of entities. They are public for a reason. You can end up breaking the Law of Demeter if you abuse them, but it's a matter of developer discipline not to fall into that trap.
An entity contains its own contract - all code that has access to the entity is supposed to be able to use any part of this contract. If you feel like your entities are exposing too much and that you need to put interfaces on top of them to restrain access to certain parts, maybe it's just a different entity.
I don't worry about the difference between the "data layer" and "domain layer" any longer, as there's no point - the terms are interchangeable. I let EF do its thing, and add interfaces and repositories on top when needed.
I've merged my "data" and "domain" projects (into "core", boring name, I know), and I could almost swear that Visual Studio is actually running faster.
I allow EF entities to go up and down the stack, but I still map them to presentation models / viewmodels as usual.
For simple operations I call repositories directly from controllers, for complex operations I use domain managers/services as usual; the repositories never expose IQueryable.
I define entities/POCOs as partial classes, so I can add domain behavior separately in corresponding partial classes.
None of these things seems to be fundamentally anti-DDD to me, except data/domain separation.
Especially if you do database-first EF - DDD is clearly a domain-centric approach and you shouldn't define your tables before defining your entities. It's also not clear whether some of your domain entities talk to the database or EF directly (which is compliant neither with DDD nor, more generally, with a layered architecture) or whether you systematically have data access objects in between (which is DDD compliant).

Would like to Understand 6NF with an Example

I have just read #PerformanceDBA's arguments re: 6NF and E-A-V. I am intrigued. I had previously been skeptical of 6NF as it was presented as "merely" sticking some timestamp columns on tables.
I have always worked with a data dictionary and do not need to be convinced to use one, or to generate SQL code. So I expect an answer that would require a dictionary (or catalog) that is used to generate code.
So I would like to know how 6NF would deal with an extremely simple example. A table of items, descriptions and prices. The prices change over time.
So anyway, what does the Items table look like when converted to 6NF? What is the "explosion of tables" that happens here?
If the example does not work with a table this simple, feel free to add what is necessary to get the point across.
I actually started putting an answer together, but I ran into complications, because you (quite understandably) want a simple example. The problem is manifold.
First, I don't have a good idea of your level of actual expertise re Relational Databases and 5NF; I don't have a starting point to take up and then discuss the specifics of 6NF.
Second, just like any of the other NFs, it is variegated. You can just barely step into it; you can implement 6NF for certain tables; you can go the whole hog on every table, etc. Sure, there is an explosion of tables, but then you Normalise that, and kill the explosion; that's an advanced or mature implementation of 6NF. No use providing the full or partial levels of 6NF when you are asking for the simplest, most straightforward example.
I trust you understand that some tables can be "in 5NF" while others are "in 6NF".
So I put one together for you. But even that needs explanation.
Now SQL barely supports 5NF, it does not support 6NF at all (I think dportas says the same thing in different words). Now I implement 6NF at a deep level, for performance reasons, simplified pivoting (of entire tables; any and all columns, not the silly PIVOT function in MS), columnar access, etc. For that you need a full catalogue, which is an extension to the SQL catalogue, to support the 6NF that SQL does not support, and maintain data Integrity and business Rules. So, you really do not want to implement 6NF for fun, you only do that if you have a need, because you have to implement a catalogue. (This is what the EAV crowd do not do, and this is why most EAV systems have data integrity problems. Most of them do not use the declarative Referential & Data Integrity that SQL does have.)
But most people who implement 6NF don't implement the deeper level, with a full catalogue. They have simpler needs, and thus implement a shallower level of 6NF. So, let's take that, to provide a simple example for you. Let's start with an ordinary Product table that is declared to be in 5NF (and let's not argue about what 5NF is). The company sells various different kinds of Products, half the columns are mandatory, and the other half are optional, meaning that, depending on the Product Type, certain columns may be Null. While they may have done a good job with the database, the Nulls are now a big problem: columns that should be Not Null for certain ProductTypes are Null, because the declaration states NULL, and their app code is only as good as the next guy's.
So they decide to go with 6NF to fix that problem, because the subtitle of 6NF states that it eliminates The Null Problem. Sixth Normal Form is the irreducible Normal Form, there will be no further NFs after this, because the data cannot be Normalised further. The rows have been Normalised to the utmost degree. The definition of 6NF is:
a table is in 6NF when the row contains the Primary Key and, at most, one attribute.
Notice that by that definition, millions of tables across the planet are already in 6NF, without having had that intent. Eg. typical Reference or Look-up tables, with just a PK and Description.
Right. Well, our friends look at their Product table, which has eight non-key attributes, so if they make the Product table 6NF, they will have eight sub-Product tables. Then there is the issue that some columns are Foreign Keys to other tables, and that leads to more complications. And they note the fact that SQL does not support what they are doing, and they have to build a small catalogue. Eight tables are correct, but not sensible. Their purpose was to get rid of Nulls, not to write a little subsystem around each table.
Simple 6NF Example
Readers who are unfamiliar with the Standard for Modelling Relational Databases may find IDEF1X Notation useful in order to interpret the symbols in the example.
So typically, the Product Table retains all the Mandatory columns, especially the FKs, and each Optional column, each Nullable column, is placed in a separate sub-Product table. That is the simplest form I have seen. Five tables instead of eight. In the Model, the four sub-Product tables are "in 6NF"; the main Product table is "in 5NF".
Now we really do not need every code segment that SELECTs from Product to have to figure out what columns it should construct, based on the ProductType, etc, so we supply a View, which essentially provides the 5NF "view" of the Product table cluster.
The next thing we need is the basic rudiments of an extension to the SQL catalog, so that we can ensure that the rules (data integrity) for the various ProductTypes are maintained in one place, in the database, and not dependent on app code. The simplest catalogue you can get away with. That is driven off ProductType, so ProductType now forms part of that Metadata. You can implement that simple structure without a catalogue, but I would not recommend it.
Update
It is important to note that I implement all Business Rules in the database. Otherwise it is not a database (the notion of implementing rules "in application code" is hilarious in the extreme, especially nowadays, when we have florists working as "developers"). Therefore all rules, etc are first and foremost implemented as SQL declarations, CHECK constraints, functions, etc. That preserves all Declarative Referential Integrity, and declarative Data Integrity. The extension to the SQL catalog covers the area that SQL does not have declarations for, and they are then implemented as SQL. Being a good data dictionary, it does much more. Eg. I do not write Views every time I change the tables or add or change columns or their characteristics, they are created directly from the catalog+extension using a simple code generator.
One more very important note. You cannot implement 6NF (or EAV properly, for that matter), without completing a full and faithful Normalisation exercise, to 5NF. The problem I see at every site is, they don't have a genuine 5NF state, they have a mish-mash of partial normalisation or no normalisation at all, but they are very attached to that. Creating either 6NF or EAV from that is a disaster. Creating EAV or 6NF from that without all business rules implemented in declarative SQL is a nuclear disaster, burning for years. You get what you pay for.
End update.
Finally, yes, there are at least four further levels of Normalisation (Normalisation is a Principle, not a mere reference to a Normal Form), that can be applied to that simple 6NF Product cluster, providing more control, less tables, etc. The deeper we go, the more extensive the catalogue. And higher levels of performance. When you are ready, just ask, I have already erected the models and posted details in other answers.
In a nutshell, 6NF means that every relation consists of a candidate key plus no more than one other (key or non-key) attribute. To take up your example, if an "item" is identified by a ProductCode and the other attributes are Description and Price then a 6NF schema would consist of two relations (* denotes the key in each):
ItemDesc {ProductCode*, Description}
ItemPrice {ProductCode*, Price}
This is potentially a very flexible approach because it minimises the dependencies. That's also its main disadvantage however, especially in a SQL database. SQL makes it hard or impossible to enforce many multi-table constraints. Using the above schema, in most cases it will not be possible to enforce a business rule that every product must always have a description AND a price. Similarly, you may not be able to enforce some compound keys that ought to apply (because their attributes could be split over multiple tables).
So in considering 6NF you have to weigh up which dependencies and integrity rules are important to you. In many cases you may find it more practical and useful to stick to 5NF and normalize no further than that.
I had previously been skeptical of 6NF as it was presented as "merely" sticking some timestamp columns on tables.
I'm not quite sure where this apparent misconception comes from. Perhaps the fact that 6NF was introduced in the book "Temporal Data and the Relational Model" by Date, Darwen and Lorentzos? Anyhow, I hope the other answers here have clarified that 6NF is not limited to temporal databases.
The point I wanted to make is, although 6NF is "academically respectable" and always achievable, it may not necessarily lead to the optimal design in every case (and not just when considering implementation using SQL either). Even the aforementioned discoverers and proponents of 6NF seem to agree e.g.
Chris Date: "For practical purposes, stick to 5NF (and 6NF)."
Hugh Darwen: "the 6NF decomposition around Date [not the person!] would be overkill... an optimal design for the soccer club is... 5-and-a-bit-NF!"
Hugh Darwen: "we are in 5NF but not in 6NF, and again 5NF is sufficient" (several similar examples).
Then again, I can also find evidence to the contrary:
Chris Date: "Darwen and I have both felt for some time that all base relvars should be in 6NF".
On a practical note, I recently extended the SQL schema of one of our products to add a minor feature. I adopted 6NF to avoid nullable columns and ended up with six new tables where most (all?) of my colleagues would have used one table (or perhaps extended an existing table) with nullable columns. Despite me providing several 'helper' stored procs and a 'denormalized' VIEW with INSTEAD OF triggers, every coder that has had to work with this feature at the SQL level has gone out of their way to curse me :)
These guys have it down: Anchor Modeling. Great academic papers on the subject, combined with practical examples. Their writings have finally pushed me over the edge to consider building a DW in 6NF on an upcoming project. The POC work I have done has validated (for me, at least) that the enormous benefits of 6NF outweigh the costs.

SPROC to update record: how to handle unchanged values

I'm calling an update SPROC from my DAL, passing in all(!) fields of the table as parameters. For the biggest table this is a total of 78.
I pass all these parameters, even if maybe just one value changed.
This seems rather inefficient to me, and I wondered how to do it better.
I could define all parameters as optional and only pass the ones that changed, but my DAL does not know which values changed, because I'm just passing it the model object.
I could make a select on the table before updating and compare the values to find out which ones changed, but this is probably way too much overhead as well(?).
I'm kinda stuck here ... I'm very interested what you think of this.
edit: forgot to mention: I'm using C# (Express Edition) with SQL 2008 (also Express). The DAL I wrote "myself" (using this article).
It's maybe not the latest state-of-the-art way of doing it (since it's from 2006, "pre-LINQ" so to say, but LINQ works only for local SQL instances in Express anyway), but my main goal was learning C#, so I guess this isn't too bad.
If you can change the DAL (without the changes being discarded once the layer is "regenerated" from the new schema when changes are made), I would recommend passing a structure containing the columns being changed with their values, and a structure containing the key columns and values for the update.
This can be done using hashtables, and if the schema is known, it should be fairly easy to manipulate this in the "new" update function.
If this is an automated DAL, these are some of the drawbacks of using DALs.
You could implement journalized change tracking in your model objects. This way you could keep track of any changes in your objects by saving the previous value of a property every time a new value is set. This information could be stored in one of two ways:
As part of each object's own private state
Centrally in a "manager" class.
In the first solution, you could easily implement this functionality in a base class and have it run in all model objects through inheritance.
In the second solution, you need to create some kind of container class that will keep a reference and a unique identifier to any model object that is created and record all changes in its state in a central store. This is similar to the way many ORM (Object-Relational Mapping) frameworks achieve this kind of functionality.
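A minimal sketch of the first option (each object tracks its own changes), using modern C# and invented names; the original question targets an older VS Express stack, so treat this as an outline of the pattern rather than drop-in code, and note that production ORMs do this with far more care.
using System.Collections.Generic;
using System.Runtime.CompilerServices;
public abstract class TrackedModel
{
    private readonly Dictionary<string, object> _originalValues =
        new Dictionary<string, object>();
    // Remembers the original value of a property the first time it is overwritten.
    protected void SetProperty<T>(ref T field, T value,
        [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return;
        if (!_originalValues.ContainsKey(propertyName))
            _originalValues[propertyName] = field;
        field = value;
    }
    public IReadOnlyDictionary<string, object> OriginalValues => _originalValues;
    public bool IsDirty => _originalValues.Count > 0;
    public void AcceptChanges() => _originalValues.Clear();
}
public class CustomerModel : TrackedModel
{
    private string _name;
    public string Name
    {
        get => _name;
        set => SetProperty(ref _name, value);
    }
}
// The DAL can then build its UPDATE (or choose SPROC parameters) from
// OriginalValues.Keys instead of passing all 78 columns every time.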
There are off the shelf ORMs that support these kinds of scenarios relatively well. Writing your own ORM will leave you without many features like this.
I find the "object.Save()" pattern leads to this kind of behavior, but there is no reason you need to follow that pattern (while I'm not personally a fan of object.Save(), I feel like I'm in the minority).
There are multiple ways your data layer can know what changed, and most of them are supported by off-the-shelf ORMs. You could also potentially make the UI and/or business layers smart enough to pass that knowledge into the data layer.
Two options that I prefer:
Generating/hand coding update methods that only take the set of parameters that tend to change.
Generating the update statements completely on the fly.
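As a rough sketch of the second option, here is on-the-fly UPDATE generation with plain ADO.NET. The helper name is invented; the table and column names must come from your own trusted schema metadata (never from user input), and the dictionary of changed values could be produced by change tracking like the approach sketched above.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
public static class DynamicUpdate
{
    // Builds "UPDATE [table] SET [ColA] = @ColA, ... WHERE [key] = @key"
    // from only the columns that actually changed.
    public static int Execute(SqlConnection conn, string table, string keyColumn,
        object keyValue, IDictionary<string, object> changedColumns)
    {
        if (changedColumns.Count == 0) return 0;
        string setClause = string.Join(", ",
            changedColumns.Keys.Select(c => "[" + c + "] = @" + c).ToArray());
        using (var cmd = new SqlCommand(
            "UPDATE [" + table + "] SET " + setClause +
            " WHERE [" + keyColumn + "] = @key", conn))
        {
            foreach (var kv in changedColumns)
                cmd.Parameters.AddWithValue("@" + kv.Key, kv.Value ?? DBNull.Value);
            cmd.Parameters.AddWithValue("@key", keyValue);
            return cmd.ExecuteNonQuery();
        }
    }
}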

Why is ActiveRecord not smart enough to know that the object_id of the father should be equal to the object_id of the parent of its children?

# Old-style (Rails 2) finder syntax, as in the original question.
@father = Hierarchy.find(:first, :conditions => ['label = ?', 'father'])
@father.children.each do |child|
  puts @father.object_id == child.parent.object_id
end
I would have thought the results here would be all true.
Instead they are all false.
Why does ActiveRecord work this way instead of recognizing that these are the same Ruby objects?
To return existing objects when possible instead of creating new ones, ActiveRecord would have to keep track of which objects were created and which entry in the database they correspond to, which would be some overhead. Even then it would still have to look up child.parent in the database to know that it's the same entry that @father represents, so there wouldn't be any considerable gain performance-wise from this caching (on the Ruby side it saves allocating multiple objects, but at the cost of the bookkeeping overhead; on the database side it should be basically the same).
So the AR people probably decided that preventing different objects from corresponding to the same database entry would either be detrimental or at least not worth the effort, and chose not to do it.
For anyone who stumbles across this years after it was asked (like me!), check out https://stackoverflow.com/a/4116397/1000655.
It suggests using :inverse_of to set up Bi-directional associations. I wish I hadn't skimmed that bit of the documentation so many times in the past!
Object ID is the pointer (sort of) to the object. The loading of rails objects doesn't "share" memory space, so you're getting a copy of the parent object when you do child.parent. To understand this -- you can do parent.something = foo, and subsequently compare child.parent.something and you'll see they are different. You'd have to re-load the child from the database before it will reflect the changes to the parent object.
However, odds are you're using the wrong ID value. If you want the ActiveRecord ID (e.g. the value of the ID column in your SQL DBMS), use @father.id == child.parent.id
Reusing objects is problematic. What if you make changes to a model instance, but before you save it, you do another find that brings back that same object? The new one should reflect the database, but the old one should still have the changed data.
So yes, it would be a lot of overhead to keep track of each instance and the dirty fields in each one. The memory savings would not be worth the effort.
As far as I'm concerned, AR should at least support an Identity Map plugin. This is the biggest issue I deal with when working with AR. I realize it adds some overhead, but short-lived memory overhead is better than the undue tax on the database from having it pull the same records again and again. It's like GC. You can do it yourself or you can use a high-level language to abstract it away so that you can concentrate on business logic. You can really fuss to optimize your AR, or you can take advantage of an Identity Map.
