I am creating an ASP.NET MVC app attempting to avoid the Fat Controller smell. I am doing this by making controller methods simply send lightweight commands to a command bus, which then get picked up by command handlers. The command handlers enact the commands on the domain model, which in turn creates state-change events that are persisted.
I am doing this to try to get away from the CRUD model of "get X from the repository, change it and put it back", to remove all domain-specific knowledge from the web application, and to allow the intent of the user to be communicated directly to the domain model.
So, let's say a Contact aggregate is composed as follows (I have omitted all but one of the setter methods for brevity).
public class Contact
{
    private Address _homeAddress;

    public Address HomeAddress
    {
        get { return _homeAddress; }
        set
        {
            if (value.Equals(_homeAddress)) return;
            _homeAddress = value;
            AddEvent(new HomeAddressChanged(Id, _homeAddress));
        }
    }

    public Address WorkAddress { get; set; }
    public PhoneNumber PhoneNumber { get; set; }
    public EmailAddress EmailAddress { get; set; }
}
The command handler that enacts a change of HomeAddress would look like so.
public class ChangeHomeAddressCommandHandler : IHandleCommand<ChangeHomeAddressCommand>
{
    private readonly IRepository<Contact> _repo;

    public ChangeHomeAddressCommandHandler(IRepository<Contact> repo)
    {
        _repo = repo;
    }

    public void Execute(ChangeHomeAddressCommand command)
    {
        var toEdit = _repo.One(command.Id);
        toEdit.HomeAddress = command.NewHomeAddress;
        _repo.CommitChanges(toEdit);
    }
}
My trouble is that the form the user submits needs to allow editing of a WHOLE CONTACT (i.e. all of its associated addresses, phone numbers, etc.), which means there needs to be a command and a handler for each and every property state change.
Each one of these handlers needs to load the aggregate, make its change and then commit it. So even if you don't change all the properties, the Contact aggregate still has to be loaded and rebuilt four times (once per handler), which is unnecessarily expensive.
I have considered some options...
A "macro" command (called maybe EditContactCommand) into which instances of each possible sub-command (i.e. the individual ChangeHomeAddressCommand) can be added. The macro command loads the aggregate and passes it through the sub-commands and commits changes on dispose.
Making the UI more "task-focused". Instead of the Edit page being a structured collection of textboxes to gather input, use labels accompanied by a "Change" button which invokes a modal dialog. When the modal dialog is OK'd, make an AJAX post back to the controller, which in turn puts a command on the bus. Or indeed, build smaller pages which only expose certain facets of the Contact aggregate. You only ever change what has actually changed, and changes can happen without a big "Save"-style commit. (I'm not sure whether the users would wear this because they seem to like their sea of textboxes!)
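For illustration only, here is a rough sketch of how the macro-command idea (the first option above) might hang together. The EditContactCommand, ISubCommandOf<Contact> and EditContactCommandHandler names are invented for this sketch; only IRepository and IHandleCommand come from the code shown earlier, and the commit happens at the end of Execute rather than on dispose, for simplicity.

public class EditContactCommand
{
    public Guid ContactId { get; set; }

    // Sub-commands that each mutate one facet of the Contact aggregate.
    public IList<ISubCommandOf<Contact>> SubCommands { get; } = new List<ISubCommandOf<Contact>>();
}

// Each sub-command knows how to apply itself to an already-loaded aggregate.
public interface ISubCommandOf<TAggregate>
{
    void ApplyTo(TAggregate aggregate);
}

public class ChangeHomeAddressCommand : ISubCommandOf<Contact>
{
    public Address NewHomeAddress { get; set; }

    public void ApplyTo(Contact contact)
    {
        contact.HomeAddress = NewHomeAddress; // raises HomeAddressChanged only if the value differs
    }
}

// The macro handler loads the aggregate once, applies every sub-command, and commits once.
public class EditContactCommandHandler : IHandleCommand<EditContactCommand>
{
    private readonly IRepository<Contact> _repo;

    public EditContactCommandHandler(IRepository<Contact> repo)
    {
        _repo = repo;
    }

    public void Execute(EditContactCommand command)
    {
        var contact = _repo.One(command.ContactId);
        foreach (var subCommand in command.SubCommands)
            subCommand.ApplyTo(contact);
        _repo.CommitChanges(contact);
    }
}

The aggregate is then loaded and committed exactly once per form submission, however many facets were edited.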
I'd be grateful for any advice, experience and wisdom. Thanks.
The problem might be that you're trying hard to un-CRUDify an application that is (as far as we can tell from that little code) very CRUDish in nature.
No matter how you try to bend your commands to make them look less like CRUD, they won't make any sense if they don't describe a domain reality -- it only adds unnecessary complexity. Changing an email address might be a command in its own right if it triggers a whole process of re-sending a validation email and so on, but not if it just modifies the email field.
I think there's nothing wrong with commands that modify an entire entity, as long as they are valid domain operations/events explored with your domain expert and they don't make up 100% of your commands. Applications are rarely purely CRUD, but when they are, DDD is certainly not the best approach to choose.
You might already be painting yourself into a corner. I'm missing the user's intent. Why is the home address being changed? Did the user make a typo, or did the contact really move? If it's the latter, you might need to send an email - if it's the former, probably not.
Let scenarios drive you to discovering the user's intent.
Related
I have a list of physics parameters (like Pressure, Voltage, etc.) accessible to all users from all tenants (multi-tenant application). Now, I need a way to display the appropriate language to different users.
Parameter is an aggregate root:
class Parameter
{
    public string Name { get; }
    public string Description { get; }
}
I need a way to localize both name and description. My first approach was this:
class Parameter
{
    public IDictionary<Locale, NameAndDescription> Info { get; }
}
but I feel somehow that this is not correct.
Also, administrators will want to write the different translations in the UI, but users will want to see only the selected translation (switchable if needed).
How should I solve this problem? Should I remove it from the domain completely? Can my application layer have methods for writing the appropriate translations (to facilitate administrators)? Should I resolve the current locale from context, or should I expect it to be passed in the URI/DTO when hitting endpoints? Any other information on localization in DDD would be appreciated.
Localization should be in another bounded context, probably implemented using a CRUD architecture as there are no business rules/invariants that need to be protected. Then, in the UI, using translation methods that access that bounded context, names of the parameters are displayed to the user according to their locale and/or administration settings.
Put another way, localization does not seem to play any role inside your core domain; it does not participate in protecting the domain invariants.
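As an illustration only, the separate localization context could be as plain as the CRUD translation store sketched below; ParameterTranslation, ITranslationStore and the method names are assumptions, not an established API.

// Localization bounded context: plain CRUD, no invariants to protect.
public class ParameterTranslation
{
    public Guid ParameterId { get; set; }
    public string Locale { get; set; }        // e.g. "en-US", "de-DE"
    public string Name { get; set; }
    public string Description { get; set; }
}

public interface ITranslationStore
{
    // Administrators write translations through the UI.
    void Save(ParameterTranslation translation);

    // The UI reads the translation for the current user's locale,
    // falling back to a default locale when none exists.
    ParameterTranslation Get(Guid parameterId, string locale);
}

The core domain keeps only the parameter's identity; the UI resolves display text through this store based on the resolved locale.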
Can anyone post a correct and useful example of using EF lazy loading in an MVC application?
I've tried to research the question, but I can't find a proper example.
As a result, my conclusion is: since web apps are stateless, there is no sense in enabling lazy loading on entities. But that sounds strange, which is why I'm asking the question here.
Can you confirm or refute my conclusion?
EDIT
The word "stateless" is important in the context of this question, in my mind. Consider two scenarios: the first relates, for example, to a WPF app and the second to MVC. Suppose there is the following simple object:
public class Person
{
    public int Age { get; set; }
    public string Name { get; set; }
    ...
    public virtual List<Activity> Activities { get; set; }
}
1) WPF. The user is able to request only the Person, without his Activities, so he gets a small portion of data and the overhead is reasonable. At the same time the user can decide to request the person's activities later.
Thanks to the lazy-loading mechanism, EF simply loads the activities without requesting the Person object again, since the Person still exists in the application (assuming, of course, we code it that way).
2) MVC. The same actions happen there, the only difference being that after the server responds, all resources including the Person object are disposed. We can't load the Person's activities as we did in the WPF application; we are forced to load the Person again (so the overhead increases compared with the WPF app).
The point is that lazy loading can only happen within the scope of the context to which the entity is attached - if you dispose the context you cannot use it.
I don't think you understand what lazy loading does, as it has nothing to do with whether there's any state or not. It's not like caching or something. Lazy loading is simply Entity Framework overloading a property to add a custom getter that issues a query to fetch the object or set of objects when the property is accessed for the first time.
For example, if you had something like:
public class Foo
{
    public virtual Bar Bar { get; set; }
}
And you were to query a set of Foos from the database, the Bar property on all of them would be null, as EF would not yet have issued any queries to fetch the related Bar instances. However, if you were to iterate over this list of Foos and access some property on Bar (e.g. foo.Bar.Baz), then EF would issue a just-in-time query for the Bar instance so that it could return the Baz property on it.
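To make the context-lifetime point concrete, here is a minimal sketch assuming classic EF6 with lazy-loading proxies; the MyDbContext and entity names are made up for the example:

using System;
using System.Data.Entity;   // classic EF6
using System.Linq;

public class Foo
{
    public int Id { get; set; }
    public virtual Bar Bar { get; set; }   // virtual => proxy can lazy-load it
}

public class Bar
{
    public int Id { get; set; }
    public string Baz { get; set; }
}

// Assumed context; connection details omitted.
public class MyDbContext : DbContext
{
    public DbSet<Foo> Foos { get; set; }
}

public static class LazyLoadingDemo
{
    public static void Run()
    {
        using (var db = new MyDbContext())
        {
            var foo = db.Foos.First();   // query 1: loads the Foo row only
            var baz = foo.Bar.Baz;       // query 2 fires here, lazily, while the context is alive
        }

        Foo detached;
        using (var db = new MyDbContext())
        {
            detached = db.Foos.First();
        }
        // The context is gone, so the proxy has nothing to query with:
        // accessing detached.Bar here throws (ObjectDisposedException in EF6).
        // var oops = detached.Bar.Baz;
    }
}

In an MVC action the context typically lives only for the duration of the request, which is exactly the situation the question is getting at.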
I have an object called StyleBundle.
public class StyleBundle
{
    public StylePricingType StylePricingType { get; private set; }
    public decimal Price { get; private set; }
    public IEnumerable<Style> Styles { get; set; }
    public DateTime StartDate { get; private set; }
    public TimeSpan Duration { get; private set; }
    public bool IsTransient { get; set; }

    public void ChangeStylePricingType(StylePricingType newStylePricingType)
    {
        this.StylePricingType = newStylePricingType;
    }
}
This StyleBundle object has a property called StylePricingType. The StylePricingType is an enum of two types:
PerStyle
Unlimited
The StylePricingType will affect the overall Price of the StyleBundle. The way it affects the Price is by changing the Styles kept in the Styles list. An Unlimited StyleBundle automatically includes all available Styles, whereas a PerStyle StyleBundle allows a user to manually pick which Styles they want to include.
I now need to allow the StylePricingType to be changed if the StyleBundle is transient (previous rules stated that once a StyleBundle is new'ed up, you can not change the StylePricingType).
BUT, in order to make this change, I need to run a check against the database via a repository/specification/service... aka, however I want to do it.
The check basically looks for any other StyleBundles during the same duration as the current StyleBundle, and makes sure there is no overlap in Styles between the StyleBundles.
Since changing this property on a transient StyleBundle requires a check against other persisted StyleBundles, what is the best way to go about implementing this?
Use constructor injection: inject a service into the StyleBundle entity's constructor. I don't like this, because I don't like injecting dependencies into my entities unless I really need to. And since the dependency is only needed for the one method call that changes the StylePricingType, injecting it through the constructor looks like bad design.
Use method injection: since I would only need the service for this one method call, this seems to make more sense. Yet at the same time, I don't like the idea of the caller being able to change this type without knowing they're running a db query. Also, I'm still injecting a service into my entity, just in a different way, and I really do not like injecting anything into my entities.
Use a domain service: this seems the most explicit of all. I could create a StyleBundleService class with a ChangeStylePricingType method that uses a repository or specification to run the check for a given StyleBundle. This way the requirement is made very explicit in the code, but the drawback is that code could still call the ChangeStylePricingType method directly on the StyleBundle object and BYPASS the method on the service. Even if I made StylePricingType get; set; instead of private set; and got rid of the ChangeStylePricingType method on StyleBundle, code could still make the change, bypassing the domain service.
So, these all seem like legitimate ways to go about doing something like this, so what is the best/most accepted way of doing it using DDD? Also, maybe my StyleBundle object is trying to do too much, and should be broken into smaller classes/functionality that would allow this requirement change to be handled more eloquently?
Mike
This is a common issue encountered in DDD. A similar problem is discussed by Udi Dahan in this post. Option 1 is discussed in this question. Option 2 is discussed elsewhere on SO (don't have exact link), but like you, I am not a fan, even though it is the simplest and most direct way. Option 3 is often associated with an anemic domain model; however, I often find it to be preferable. The reason is that an encapsulating service layer is something that arises naturally as part of DDD - it exposes the domain layer to other layers, such as the presentation layer or an open host service. Furthermore, actions performed on domain entities can be represented as command objects which are handled by the service. In this case, you can have:
class ChangeStylePricingTypeCommand
{
    public string StyleBundleId { get; set; }
    public StylePricingType StylePricingType { get; set; }
}

class StyleBundleService
{
    IStyleBundleRepository db;

    public void Process(ChangeStylePricingTypeCommand command)
    {
        using (var tx = this.db.BeginTransaction())
        {
            var bundle = this.db.Get(command.StyleBundleId);
            // verification goes here.
            bundle.ChangeStylePricingType(command.StylePricingType);
            this.db.Commit();
        }
    }
}
The service StyleBundleService is a perfect place for accessing repositories and other services.
The approach outlined by Udi entails ChangeStylePricingType raising a domain event to which a handler is subscribed, which in turn executes the required business logic. This approach is more decoupled, but also more complex, and may be overkill. The other issue with a domain-event-based approach is that the handler executes after the event has happened, and thus cannot prevent it; it can only deal with the consequences.
While I agree it is a good idea to externalize that behavior out of StyleBundle, I usually try to avoid using Services as much as possible. To be more precise, I try to avoid naming something a Service if there are known pattern names that better suit what you really want the object to do.
In your example, it's still unclear to me whether you simply want to check the validity of a StyleBundle against the new StylePricingType you assign to it, rejecting the operation altogether if the bundle doesn't comply, or if you want to adjust the contents of the bundle according to the new StylePricingType.
In the first case a Specification seems best suited (you mentioned in the comments you're already using one when adding Styles to a bundle). In the second you need an object that will actually act on the Bundle, eliminating non-compliant styles; I'd use a StyleRuleOutStrategy/Policy with an Enforce() method taking the Bundle as a parameter. In both cases you'd call the relevant method of the new Specification/Strategy from the property setter when the StylePricingType changes.
Note that the Strategy part really earns its keep if the action to take when switching to PerStyle differs from the action when switching to Unlimited, but again, from what you explained it is not clear whether this is the case.
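A minimal sketch of the first shape (validity check only); IStyleBundleSpecification, NoOverlappingStylesSpecification and the FindOverlapping repository call are assumptions made for this example:

using System.Linq;

// Specification: answers "may this bundle take the new pricing type?" and nothing more.
public interface IStyleBundleSpecification
{
    bool IsSatisfiedBy(StyleBundle bundle, StylePricingType newType);
}

public class NoOverlappingStylesSpecification : IStyleBundleSpecification
{
    private readonly IStyleBundleRepository _repo;

    public NoOverlappingStylesSpecification(IStyleBundleRepository repo)
    {
        _repo = repo;
    }

    public bool IsSatisfiedBy(StyleBundle bundle, StylePricingType newType)
    {
        // Find persisted bundles whose duration overlaps this one,
        // then make sure none of them share a Style with this bundle.
        var overlapping = _repo.FindOverlapping(bundle.StartDate, bundle.Duration);
        return !overlapping.Any(other => other.Styles.Intersect(bundle.Styles).Any());
    }
}

The second shape would look the same from the outside, except the method would be something like void Enforce(StyleBundle bundle) and would remove non-compliant styles instead of merely reporting.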
I understand that the "proper" structure for separation of concerns in MVC is to have view models for structuring your views and separate data models for persisting to your chosen repository. I started experimenting with MongoDB, and I'm starting to think that this may not apply when using a schema-less, NoSQL-style database. I wanted to present this scenario to the Stack Overflow community and see what everyone's thoughts are. I'm new to MVC, so this made sense to me, but maybe I am overlooking something...
Here is my example for this discussion: when a user wants to edit their profile, they would go to the UserEdit view, which uses the UserEditModel below.
public class UserEditModel
{
    public string Username
    {
        get { return Info.Username; }
        set { Info.Username = value; }
    }

    [Required]
    [MembershipPassword]
    [DataType(DataType.Password)]
    public string Password { get; set; }

    [DataType(DataType.Password)]
    [DisplayName("Confirm Password")]
    [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
    public string ConfirmPassword { get; set; }

    [Required]
    [Email]
    public string Email { get; set; }

    public UserInfo Info { get; set; }

    public Dictionary<string, bool> Roles { get; set; }
}
public class UserInfo : IRepoData
{
    [ScaffoldColumn(false)]
    public Guid _id { get; set; }

    [ScaffoldColumn(false)]
    public DateTime Timestamp { get; set; }

    [Required]
    [DisplayName("Username")]
    [ScaffoldColumn(false)]
    public string Username { get; set; }

    [Required]
    [DisplayName("First Name")]
    public string FirstName { get; set; }

    [Required]
    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [ScaffoldColumn(false)]
    public string Theme { get; set; }

    [ScaffoldColumn(false)]
    public bool IsADUser { get; set; }
}
Notice that the UserEditModel class contains an instance of UserInfo, which inherits from IRepoData? UserInfo is what gets saved to the database. I have a generic repository class that accepts any object inheriting from IRepoData and saves it, so I just call Repository.Save(myUserInfo) and it's done. IRepoData defines the _id (MongoDB naming convention) and a Timestamp, so the repository can upsert based on _id and check for conflicts based on the Timestamp; whatever other properties the object has just get saved to MongoDB. The view, for the most part, just needs to use @Html.EditorFor and we are good to go! Basically, anything that just the view needs goes into the base model, anything that only the repository needs just gets the [ScaffoldColumn(false)] annotation, and everything else is shared between the two. (BTW - the username, password, roles, and email get saved via the .NET providers, which is why they are not in the UserInfo object.)
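For readers following along, here is a rough guess at the shape of the IRepoData contract and the generic repository described above; the actual code isn't shown in the question, and the MongoDB driver calls are deliberately left out:

public interface IRepoData
{
    Guid _id { get; set; }           // MongoDB document id
    DateTime Timestamp { get; set; } // used for optimistic concurrency checks
}

// Generic repository sketch: upsert by _id, reject stale writes via Timestamp.
public class Repository<T> where T : class, IRepoData
{
    public void Save(T item)
    {
        var existing = FindById(item._id);
        if (existing != null && existing.Timestamp > item.Timestamp)
            throw new InvalidOperationException("Conflict: the document was modified by someone else.");

        item.Timestamp = DateTime.UtcNow;
        Upsert(item); // insert or replace the document keyed on _id
    }

    // Driver-specific details omitted; they depend on the MongoDB driver version in use.
    private T FindById(Guid id) { /* ... */ return null; }
    private void Upsert(T item) { /* ... */ }
}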
The big advantages of this scenario are two-fold...
I can use less code, which is therefore more easily understood, faster to develop, and more maintainable (in my opinion).
I can re-factor in seconds... If I need to add a second email address, I just add it to the UserInfo object - it gets added to the view and saved to the repository just by adding one property to the object. Because I am using MongoDB, I don't need to alter my db schema or mess with any existing data.
Given this setup, is there a need to make separate models for storing data? What do you all think the disadvantages of this approach are? I realize the obvious answers are standards and separation of concerns, but can you think of any real-world examples that would demonstrate some of the headaches this would cause?
It's also worth noting that I'm working on a team of two developers total, so it's easy to look at the benefits and overlook bending some standards. Do you think working on a smaller team makes a difference in that regard?
The advantages of view models in MVC exist regardless of the database system used (hell, even if you don't use one). In simple CRUD situations your business model entities will very closely mimic what you show in the views, but in anything more than basic CRUD this will not be the case.
One of the big things is the business logic / data integrity concern with using the same class for data modeling/persistence as for your views. Take the situation where you have a DateTime DateAdded property in your user class, to denote when a user was added. If you provide a form that hooks straight into your UserInfo class, you end up with an action handler that looks like:
[HttpPost]
public ActionResult Edit(UserInfo model) { }
Most likely you don't want the user to be able to change when they were added to the system, so your first thought is to not provide a field in the form.
However, you can't rely on that, for two reasons. First, the value for DateAdded will be whatever you would get from new DateTime(), or it will be null (either way it will be incorrect for this user).
The second issue is that users can spoof this in the form request by adding &DateAdded=<whatever date> to the POST data, and now your application will change the DateAdded field in the DB to whatever the user entered.
This is by design, as MVC's model binding mechanism looks at the data sent via POST and tries to automatically connect them with any available properties in the model. It has no way to know that a property that was sent over wasn't in the originating form, and thus it will still bind it to that property.
ViewModels do not have this issue, because your view model should know how to convert itself to/from a data entity, and it does not have a DateAdded field to spoof; it only has the bare minimum fields it needs to display (or receive) its data.
In your exact scenario, I could reproduce this with ease via POST string manipulation, since your view model exposes your data entity directly.
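To illustrate (UserEditViewModel, ApplyTo and the repository calls are invented for this sketch; UserInfo is simplified down to the hypothetical DateAdded scenario above): a dedicated edit view model simply has no DateAdded property to bind, so a spoofed POST value has nowhere to go.

// Persistence entity - includes a field the user must never set directly.
public class UserInfo
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateAdded { get; set; }   // set once by the system
}

// Edit view model - only what the form is allowed to change.
public class UserEditViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Explicit mapping back onto an existing entity; DateAdded is untouched.
    public void ApplyTo(UserInfo user)
    {
        user.FirstName = FirstName;
        user.LastName = LastName;
    }
}

// Inside a controller: binding targets the view model, so &DateAdded=... in the POST body is ignored.
[HttpPost]
public ActionResult Edit(UserEditViewModel model)
{
    var user = _repository.GetCurrentUser();  // assumed data-access helper
    model.ApplyTo(user);
    _repository.Save(user);
    return RedirectToAction("Index");
}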
Another issue with using data classes straight in the views is when you are trying to present your view in a way that doesn't really fit how your data is modeled. As an example, let's say you have the following fields for users:
public DateTime? BannedDate { get; set; }
public DateTime? ActivationDate { get; set; } // Date the account was activated via email link
Now let's say as an Admin you are interested on the status of all users, and you want to display a status message next to each user as well as give different actions the admin can do based on that user's status. If you use your data model, your view's code will look like:
// In status column of the web page's data grid
@if (user.BannedDate != null)
{
    <span class="banned">Banned</span>
}
else if (user.ActivationDate != null)
{
    <span class="Activated">Activated</span>
}
//.... Do some html to finish other columns in the table

// In the Actions column of the web page's data grid
@if (user.BannedDate != null)
{
    // .. Add buttons for banned users
}
else if (user.ActivationDate != null)
{
    // .. Add buttons for activated users
}
This is bad because you now have a lot of business logic in your views (a banned status always takes precedence over an activated one, banned users are defined as users with a banned date, etc...). It is also much more complicated.
Instead, a better (imho at least) solution is to wrap your users in a ViewModel that has an enumeration for their status, and when you convert your model to your view model (the view model's constructor is a good place to do this) you can insert your business logic once to look at all the dates and figure out what status the user should be.
Then your code above is simplified as:
// In status column of the web page's data grid
@if (user.Status == UserStatuses.Banned)
{
    <span class="banned">Banned</span>
}
else if (user.Status == UserStatuses.Activated)
{
    <span class="Activated">Activated</span>
}
//.... Do some html to finish other columns in the table

// In the Actions column of the web page's data grid
@if (user.Status == UserStatuses.Banned)
{
    // .. Add buttons for banned users
}
else if (user.Status == UserStatuses.Activated)
{
    // .. Add buttons for activated users
}
Which may not look like less code in this simple scenario, but it makes things a lot more maintainable when the logic for determining a user's status becomes more complicated. You can now change how a user's status is determined without having to change your data model (you shouldn't have to change your data model because of how you are viewing data), and it keeps the status determination in one spot.
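A minimal sketch of the conversion described above; UserStatuses, UserViewModel and the Username property are assumed names, while BannedDate and ActivationDate come from the earlier example:

public enum UserStatuses
{
    Pending,
    Activated,
    Banned
}

public class UserViewModel
{
    public string Username { get; private set; }
    public UserStatuses Status { get; private set; }

    // The business rules live in one place: a ban always wins over activation,
    // and no dates at all means the account is still pending.
    public UserViewModel(User user)
    {
        Username = user.Username;

        if (user.BannedDate != null)
            Status = UserStatuses.Banned;
        else if (user.ActivationDate != null)
            Status = UserStatuses.Activated;
        else
            Status = UserStatuses.Pending;
    }
}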
tl;dr
There are at least 3 layers of models in an application, sometimes they can be combined safely, sometimes not. In the context of the question, it's ok to combine the persistence and domain models but not the view model.
full post
The scenario you describe fits equally well with any entity model used directly. It could be a Linq2Sql model used as your view model, an Entity Framework model, a Hibernate model, etc. The main point is that you want to use the persisted model directly as your view model. Separation of concerns, as you mention, does not explicitly force you to avoid doing this. In fact, separation of concerns is not even the most important factor in building your model layers.
In a typical web application there are at least 3 distinct layers of models, although it is possible and sometimes correct to combine these layers into a single object. The model layers are, from highest level to lowest, your view model, your domain model and your persistence model. Your view model should describe exactly what is in your view, no more and no less. Your domain model should describe your complete model of the system exactly. Your persistence model should describe your storage method for your domain models exactly.
ORMs come in many shapes and sizes, with different conceptual purposes, and MongoDB as you describe it is simply one of them. The illusion most of them promise is that your persistence model should be the same as your domain model and the ORM is just a mapping tool from your data store to your domain object. This is certainly true for simple scenarios, where all of your data comes from one place, but it eventually has its limitations, and your storage degrades into something more pragmatic for your situation. When that happens, the models tend to become distinct.
The one rule of thumb to follow when deciding whether your domain model and persistence model can be combined is whether you could easily swap out your data store without changing your domain model. If the answer is yes, they can be combined; otherwise they should be separate models. A repository interface fits naturally here to deliver your domain models from whatever data store is available. Some of the newer lightweight ORMs, such as Dapper and Massive, make it very easy to use your domain model as your persistence model because they do not require a particular data model in order to perform persistence; you simply write the queries directly and let the ORM handle the mapping.
On the read side, view models are again a distinct model layer because they represent a subset of your domain model, combined however you need in order to display information on the page. If you want to display a user's info, with links to all his friends, and when you hover over their names you get some info about those users, then a persistence model that handled that directly, even with MongoDB, would likely be pretty insane. Of course not every application shows such a collection of interconnected data on every view, and sometimes the domain model is exactly what you want to display. In that case there is no reason to pay the extra cost of mapping from an object that has exactly what you want to display to a specific view model with the same properties. In simple apps, if all I want to do is augment a domain model, my view model will directly inherit from the domain model and add the extra properties I want to display. That being said, before your MVC app becomes large, I highly recommend using a view model for your layouts and having all of your page-based view models inherit from that layout model.
On the write side, a view model should only expose the properties you wish to be editable by the type of user accessing the view. Do not send an admin view model to the view for a non-admin user. You could get away with this if you write the mapping layer for this model yourself to take into account the privileges of the accessing user, but that is probably more overhead than just creating a second admin model that inherits from the regular view model and augments it with the admin-only properties.
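For instance (type and property names invented for illustration), the inheritance approach just described can be as simple as:

// What an ordinary user is allowed to edit about themselves.
public class ProfileEditModel
{
    public string DisplayName { get; set; }
    public string Email { get; set; }
}

// What an administrator may additionally edit; only ever bound in admin-only actions.
public class AdminProfileEditModel : ProfileEditModel
{
    public bool IsLockedOut { get; set; }
    public string Role { get; set; }
}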
Lastly about your points:
Less code is only an advantage when it actually is more understandable. Readability and understandability are results of the skills of the person writing it. There are famous examples of short code that has taken even solid developers a long time to dissect and understand. Most of those examples come from cleverly written code which is not more understandable. More important is that your code meets your specification 100%. If your code is short, easily understood and readable but does not meet the specification, it is worthless. If it is all of those things and does meet the specification, but is easily exploitable, the specification and the code are worthless.
Refactoring in seconds safely is the result of well-written code, not its terseness. Following the DRY principle will make your code easy to refactor as long as your specification correctly meets your goals. In the case of model layers, your domain model is the key to writing good, maintainable and easy-to-refactor code. Your domain model will change at the pace at which your business requirements change. Changes in your business requirements are big changes, and care has to be taken to make sure that the new spec is fully thought out, designed, implemented, tested, etc. For example, you say today you want to add a second email address. You will still have to change the view (unless you're using some kind of scaffolding). Also, what if tomorrow you get a requirements change to add support for up to 100 email addresses? The change you originally proposed was rather simple for any system; bigger changes require more work.
I was reading Steven Sanderson's book Pro ASP.NET MVC Framework and he suggests using a repository pattern:
public interface IProductsRepository
{
    IQueryable<Product> Products { get; }
    void SaveProduct(Product product);
}
He accesses the products repository directly from his controllers, but since I will have both a web page and a web service, I wanted to add a "Service Layer" that would be called by both the controllers and the web services:
public class ProductService
{
    private IProductsRepository productsRepository;

    public ProductService(IProductsRepository productsRepository)
    {
        this.productsRepository = productsRepository;
    }

    public Product GetProductById(int id)
    {
        return (from p in productsRepository.Products
                where p.ProductID == id
                select p).First();
    }

    // more methods
}
This seems all fine, but my problem is that I can't use his SaveProduct(Product product) because:
1) I want to only allow certain fields to be changed in the Product table
2) I want to keep an audit log of each change made to each field of the Product table, so I would have to have methods for each field that I allow to be updated.
My initial plan was to have a method in ProductService like this:
public void ChangeProductName(Product product, string newProductName);
Which then calls IProductsRepository.SaveProduct(Product)
But there are a few problems I see with this:
1) Isn't it rather un-"OO" to pass in the Product object like this? However, I can't see how this code could go in the Product class, since it should just be a dumb data object. I could see adding validation to a partial class, but not this.
2) How do I ensure that no other fields of the Product were changed before I persist the change?
I'm basically torn because I can't put the auditing/update code in Product and the ProductService class' update methods just seem unnatural (However, GetProductById seems perfectly natural to me).
I think I'd still have these problems even if I didn't have the auditing requirement. Either way I want to limit what fields can be changed in one class rather than duplicating the logic in both the web site and the web services.
Is my design pattern just bad in the first place or can I somehow make this work in a clean way?
Any insight would be greatly appreciated.
I split the repository into two interfaces, one for reading and one for writing.
The reading one implements IDisposable and reuses the same data context for its lifetime. It returns the entity objects produced by LINQ to SQL. For example, it might look like:
interface Reader : IDisposable
{
    IQueryable<Product> Products { get; }
    IQueryable<Order> Orders { get; }
    IQueryable<Customer> Customers { get; }
}
The IQueryable is important so I get the delayed-evaluation goodness of LINQ to SQL. This is easy to implement with a DataContext, and easy enough to fake. Note that when I use this interface I never use the auto-generated properties for related rows (i.e. no fair using order.Products directly; calls must join on the appropriate ID columns). This is a limitation I don't mind living with, considering how much easier it makes faking the read repository for unit tests.
The writing one uses a separate DataContext per write operation, so it does not implement IDisposable. It does NOT take entity objects as input or output - it takes the specific fields needed for each write operation.
When I write test code, I can substitute the readable interface with a fake implementation that uses a bunch of List<>s which I populate manually. I use mocks for the write interface. This has worked like a charm so far.
Don't get into the habit of passing the entity objects around; they're bound to the DataContext's lifetime, and it leads to unfortunate coupling between your repository and its clients.
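A rough sketch of what the write-side interface described above might look like; the method names and parameters are assumptions, the point being simply that it takes plain values rather than attached entities:

// Each call opens its own DataContext internally, performs one write, and disposes it.
interface Writer
{
    void RenameProduct(int productId, string newName);
    void ChangeProductPrice(int productId, decimal newPrice);
    void PlaceOrder(int customerId, IEnumerable<int> productIds);
}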
To address your need for the auditing/logging of changes, just today I put the finishing touches on a system I'll suggest for your consideration. The idea is to serialize (easily done if you are using LTS entity objects and through the magic of the DataContractSerializer) the "before" and "after" state of your object, then save these to a logging table.
My logging table has columns for the date, username, a foreign key to the affected entity, and title/quick summary of the action, such as "Product was updated". There is also a single column for storing the change itself, which is a general-purpose field for storing a mini-XML representation of the "before and after" state. For example, here's what I'm logging:
<ProductUpdated>
  <Deleted><Product ... /></Deleted>
  <Inserted><Product ... /></Inserted>
</ProductUpdated>
Here is the general purpose "serializer" I used:
public string SerializeObject(object obj)
{
    // See http://msdn.microsoft.com/en-us/library/bb546184.aspx :
    Type t = obj.GetType();
    DataContractSerializer dcs = new DataContractSerializer(t);
    StringBuilder sb = new StringBuilder();
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.OmitXmlDeclaration = true;
    XmlWriter writer = XmlWriter.Create(sb, settings);
    dcs.WriteObject(writer, obj);
    writer.Close();
    string xml = sb.ToString();
    return xml;
}
Then, when updating (this can also be used for logging inserts/deletes), grab the state before you do your model binding, then again afterwards. Shove both into an XML wrapper and log it! (Or I suppose you could use two columns in your logging table for these, although my XML approach lets me attach any other information that might be helpful.)
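Putting the pieces together, the update path could look roughly like this inside a controller action; the LogChange helper and its parameters are invented for the sketch, while SerializeObject is the method shown above:

Product product = db.Products.Single(p => p.ProductID == id);

string before = SerializeObject(product);   // state prior to model binding
TryUpdateModel(product);                    // apply the posted values
string after = SerializeObject(product);    // state after the edit

string changeXml =
    "<ProductUpdated>" +
    "<Deleted>" + before + "</Deleted>" +
    "<Inserted>" + after + "</Inserted>" +
    "</ProductUpdated>";

LogChange(User.Identity.Name, product.ProductID, "Product was updated", changeXml);
db.SubmitChanges();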
Furthermore, if you want to only allow certain fields to be updated, you'll be able to do this with either a "whitelist/blacklist" in your controller's action method, or you could create a "ViewModel" to hand in to your controller, which could have the restrictions placed upon it that you desire. You could also look into the many partial methods and hooks that your LTS entity classes should have on them, which would allow you to detect changes to fields that you don't want.
Good luck! -Mike
Update:
For kicks, here is how I deserialize an entity (as I mentioned in my comment) for viewing its state at some later point in history, after I've extracted it from the log entry's wrapper:
public Product DeserializeProduct(string xmlString)
{
    using (MemoryStream s = new MemoryStream(Encoding.Unicode.GetBytes(xmlString)))
    {
        DataContractSerializer dcs = new DataContractSerializer(typeof(Product));
        Product product = (Product)dcs.ReadObject(s);
        return product;
    }
}
I would also recommend reading Chapter 13, "LINQ in every layer" in the book "LINQ in Action". It pretty much addresses exactly what I've been struggling with -- how to work LINQ into a 3-tier design. I'm leaning towards not using LINQ at all now after reading that chapter.