NHibernate and contextual entities - asp.net-mvc

I'm trying to use NHibernate for a new app with a legacy database. It's going pretty well but I'm stuck and can't find a good solution for a problem.
Let's say I have this model :
a Service table (Id, ServiceName..)
a Movie table (Id, Title, ...)
a Contents table which associates a service and a movie (IdContent, Name, IdMovie, IdService)
So I mapped this and it all went well. Now I can retrieve a movie, get all the contents associated with it, ...
My app is a movie shop "generator". Each "service" is in fact a different shop: when a user enters my website, he's redirected to one of the shops and, obviously, I must show him only the movies available for his shop. The idea is: the user comes, his service is recognized, and I present him the movies which have contents linked to his service. I also need to be able to retrieve all contents for a movie for the backoffice.
I'm trying to find the most transparent way to accomplish this with NHibernate. I can't really make changes to the db model.
I thought about a few solutions :
Add the service condition into all my queries. Would work but it's a bit cumbersome. The model is very complex and has tons of tables/queries..
Use an NHibernate filter (see the sketch after this list). This seemed ideal and worked pretty well: I added a filter on serviceid in all my mappings and called EnableFilter as soon as my user's service was recognized, but NHibernate filtered collections don't work with the 2nd level cache (Redis in my case) and 2nd level cache usage is mandatory.
Add computed properties to my object like Movie.PublishedContents(Int32 serviceId). Probably would work but requires to write a lot of code and "pollutes" my domain.
Add new entities inheriting from my NHibernate entities, like a PublishedMovie : Movie which only presents the contextual data.
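For reference, the filter approach mentioned above looked roughly like this (a minimal sketch; the filter and parameter names are illustrative, not the exact ones from my mappings):

// In the mapping (hbm.xml), roughly:
//   <filter-def name="serviceFilter">
//     <filter-param name="serviceId" type="Int32" />
//   </filter-def>
//   ...and on the Contents mapping/collections:
//   <filter name="serviceFilter" condition="IdService = :serviceId" />
// Then, once the user's service is recognized:
session.EnableFilter("serviceFilter").SetParameter("serviceId", currentServiceId);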
None of these really satisfies me. Is there a good way to do this ?
Thanks !

You're asking about multi-tenancy with all the tenants in the same database. I've handled that scenario effectively using Ninject dependency injection. In my application the tenant is called "manual" and I'll use that in the sample code.
The route needs to contain the tenant e.g.
{manual}/{controller}/{action}/{id}
A constraint can be set on the tenant to limit the allowed tenants.
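A hedged sketch of that route registration (the constraint class name is an assumption; a possible implementation is sketched further down):

routes.MapRoute(
    name: "TenantDefault",
    url: "{manual}/{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional },
    constraints: new { manual = new ManualRouteConstraint() }
);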
I use Ninject to configure and supply the ISessionFactory as a singleton and ISession in session-per-request strategy. This is encapsulated using Ninject Provider classes.
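A minimal sketch of those bindings, assuming provider classes named SessionFactoryProvider and SessionProvider (the names are mine, not from the original setup):

// NinjectWebCommon.RegisterServices, roughly:
kernel.Bind<ISessionFactory>().ToProvider<SessionFactoryProvider>().InSingletonScope();
kernel.Bind<ISession>().ToProvider<SessionProvider>().InRequestScope();

// A provider opens a session from the singleton factory for each request
public class SessionProvider : Provider<ISession>
{
    protected override ISession CreateInstance(IContext context)
    {
        return context.Kernel.Get<ISessionFactory>().OpenSession();
    }
}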
I do the filtering using lightweight repository classes, e.g.
public class ManualRepository
{
    private readonly int _manualId;
    private readonly ISession _session;

    public ManualRepository(int manualId, ISession session)
    {
        _manualId = manualId;
        _session = session;
    }

    public IQueryable<Manual> GetManual()
    {
        return _session.Query<Manual>().Where(m => m.ManualId == _manualId);
    }
}
If you want pretty urls you'll need to translate the tenant route parameter into its corresponding database value. I have these set up in web.config and I load them into a dictionary at startup. An IRouteConstraint implementation reads the "manual" route value, looks it up, and sets the "manualId" route value.
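A hedged sketch of such a constraint (ManualRouteConstraint and the dictionary loader are assumed names, not from the original code):

public class ManualRouteConstraint : IRouteConstraint
{
    // e.g. { "usermanual" -> 1, "adminmanual" -> 2 }; hypothetical loader for the web.config map
    private static readonly IDictionary<string, int> ManualIds = LoadManualMapFromConfig();

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        var manual = values["manual"] as string;
        if (manual == null || !ManualIds.ContainsKey(manual))
            return false;

        // Translate the pretty url segment into its database id for later injection
        values["manualId"] = ManualIds[manual];
        return true;
    }
}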
Ninject can handle injecting the ISession into the repository and the repository into the controller. Any queries in the controller actions must be based on the repository method so that the filter is applied. The trick is injecting the manualId from the routing value. In NinjectWebCommon I have two methods to accomplish this:
private static int GetManualIdForRequest()
{
    var httpContext = HttpContext.Current;
    var routeValues = httpContext.Request.RequestContext.RouteData.Values;
    if (routeValues.ContainsKey("manualId"))
    {
        return int.Parse(routeValues["manualId"].ToString());
    }
    const string msg = "Route does not contain 'manualId' required to construct object.";
    throw new HttpException((int)HttpStatusCode.BadRequest, msg);
}

/// <summary>
/// Binding extension that injects the manualId from route data values to the ctor.
/// </summary>
private static void WithManualIdConstructor<T>(this IBindingWithSyntax<T> binding)
{
    binding.WithConstructorArgument("manualId", context => GetManualIdForRequest());
}
And the repository bindings are declared to inject the manualId. There may be a better way to accomplish this through conventions.
kernel.Bind<ManualRepository>().ToSelf().WithManualIdConstructor();
The end result is that queries follow the pattern
var manual = _manualRepository
    .GetManual()
    .Where(m => m.EffectiveDate <= DateTime.Today)
    .Select(m => new ManualView
    {
        ManualId = m.ManualId,
        ManualName = m.Name
    }).ToList();
and I don't need to worry about filtering per tenant in my queries.
As for the 2nd level cache, I don't use it in this app but my understanding is that you can set the cache region to segregate tenants. This should get you started: http://ayende.com/blog/1708/nhibernate-caching-the-secong-level-cache-space-is-shared
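For example (a sketch that isn't from the original answer, assuming NHibernate's LINQ Cacheable/CacheRegion extensions), the query cache region could include the tenant id:

// Cache this query's results in a per-tenant region
var manuals = _session.Query<Manual>()
    .Where(m => m.ManualId == _manualId)
    .Cacheable()
    .CacheRegion("manual-" + _manualId)
    .ToList();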

Related

persisting MVC data to ORM

The Java and .NET worlds are rich in open source frameworks and libraries. We all like to use Spring and Hibernate almost everywhere.
Everyone agrees that Hibernate is a very handy tool.
What can Hibernate do? Well, basically, Hibernate can track changes to our domain objects and persist only the modified data to the database, that is it.
Basically, that is everything we want. I want to load some records from the database, make some modifications to them, call transaction.commit(), and all the modifications get persisted, instantly.
That is excellent, right!
But what about the web world? In web applications the database session must be closed.
I cannot load some domain objects, wait for the user to make modifications through HTTP, and persist those objects after the modifications.
We have to use detached objects or DTOs. How does that work?
The user makes modifications in the HTML browser, Spring MVC automatically transfers those modifications to our customized DTO objects using MVC model binding,
then we spend some programming effort to transfer the modifications from the DTO objects to the Hibernate domain objects, and only then do we persist them.
For example - we have a web form that updates a customer's address, and another form which updates the customer's details.
We must have two different business layer methods - UpdateAddress() and UpdateDetails(); both methods must accept some kind of DTO,
one representing the address information, the other representing the details information.
We also have custom logic that transfers data from those two DTOs to the domain class 'Customer'.
Yes, of course, instead of DTO objects we could reuse our domain classes. But it does not make things simpler.
In both cases we would still have to implement custom logic that transfers the modifications to the persistent objects.
I cannot persist a detached object right away, because domain classes usually have lots and lots of properties representing numerous relations, e.g. Customer has an Orders property. When I update a customer's address I don't want to update his orders.
Is there a beautiful, universal way of mapping modifications from the MVC model to domain objects without writing a lot of custom code and without the risk of overwriting too many fields?
It's good practice to have a data access layer, which translates into having a repository for each domain object / entity. Furthermore, all repositories share common code, so you naturally have an abstract repository:
public abstract class AbstractRepository<E extends BaseModel> implements Repository<E> {

    @PersistenceContext
    private EntityManager entityManager;

    private Class<E> entityClass;

    public AbstractRepository(Class<E> entityClass) {
        this.entityClass = entityClass;
    }

    protected EntityManager getEM() {
        return entityManager;
    }

    protected TypedQuery<E> createQuery(String jpql) {
        return createQuery(jpql, entityClass);
    }

    protected <T> TypedQuery<T> createQuery(String jpql, Class<T> typeClass) {
        return getEM().createQuery(jpql, typeClass);
    }

    @Override
    public E merge(E entity) {
        return getEM().merge(entity);
    }

    @Override
    public void remove(E entity) {
        getEM().remove(entity);
    }

    @Override
    public E findById(long id) {
        return getEM().find(entityClass, id);
    }
}
It's also good practice to have a service layer where you are to create, update and delete instances of an entity (where you could pass through a DTO to the create and update methods if you so desire).
...
@Inject
private CustomerRepository customerRepository;

public Customer createCustomer(CustomerDto customerDto) {
    Customer customer = new Customer();
    customer.setEmail(customerDto.getEmail());
    ...
    return customerRepository.merge(customer);
}

public Customer updateCustomerAddress(Customer customer, String address) {
    customer.setAddress(address);
    return customerRepository.merge(customer);
}
...
So it's up to you how many update methods you want. I would typically group them into common operations such as updating the customer's address, where you would pass the customer Id and the updated address from the front end (probably via ajax) to your controller listening on a specific endpoint. This endpoint is where you would use the repository to find the entity first by Id and then pass it to your service to do the address update for example.
Lastly, you need to ensure that the data actually gets persisted, so in Spring you can add the @Transactional annotation either to your Spring MVC controller or to the service that does the persisting. I'm not aware of any best practices around this, but I prefer adding it to my controllers so that you're always guaranteed to have a transaction no matter what service you are in.

ServiceStack new service side by side ASP.NET MVC website

In the examples for ServiceStack I don't see a single application that is ASP.NET MVC website first and then made ServiceStack service second.
Let's take a very simple ASP.NET MVC web application that renders products through Views. It uses controllers, views, models and viewmodels.
Let's say we have a model of Product which gets persisted into a document DB. Let's assume we have a viewmodel of ProductViewModel which gets mapped from Product and display within MVC Razor View/PartialView.
So this is the web side of things. Now let's assume we want to add a service returning products to various clients, like Windows 8 applications.
Should the request/response classes be completely disconnected from what we already have? Our ProductViewModel might already contain everything we want to return from the service.
Since we already have Product (model class) we can't have another Product class in the API namespace..well we could but that makes things unclear and I'd like to avoid that.
So, should we introduce standalone ProductRequest class and ProductRequestResponse (inherits ProductViewModel) class in the API namespace?
Like so ProductRequestResponse : ProductViewModel?
What i'm saying is, we already have the Model and ViewModel classes and to construct Request and Response classes for the SS service we would have to create another two files, mostly by copying everything from the classes we already have. This doesn't look DRY to me, it might follow the separation of concerns guidelines but DRY is important too, actually more than separating everything (separating everything leads to duplication of code).
What I would like to see is a case where a web application has already been made, it currently features Models and ViewModels and returns the appropriate Views for display on the Web but can be extended into a fully functional service to support programmatic clients? Like AJAX clients etc...with what we already have.
Another thing:
If you take a look at this example https://github.com/ServiceStack/ServiceStack.Examples/blob/master/src/ServiceStack.MovieRest/MovieService.cs
you will see there is a Movie request class and a Movies request class (one for a single movie request, the other for a list of movies). As such, there are also two services, MovieService and MoviesService, one dealing with requests for a single movie, the other one for a genre of movies.
Now, while I like SS approach to services and I think it is the right one, I don't like this sort of separation merely because of the type of request. What if I wanted movies by director? Would I be inventing yet another request class having a Director property and yet another service (MoviesByDirector) for it?
I think the samples should be oriented towards one service. Everything that has to deal with movies need to be under one roof. How does one achieve that with ServiceStack?
public class ProductsService : Service
{
    private readonly IDocumentSession _session;
    private readonly ProductsHelperService _productsHelperService;
    private readonly ProductCategorizationHelperService _productCategorization;

    public class ProductRequest : IReturn<ProductRequestResponse>
    {
        public int Id { get; set; }
    }

    // Does this make sense?
    // Please note, we use ProductViewModel in our Views and it holds everything we'd want in the service response also
    public class ProductRequestResponse : ProductViewModel
    {
    }

    public ProductRequestResponse GetProducts(ProductRequest request)
    {
        ProductRequestResponse response = null;
        if (request.Id >= 0)
        {
            var product = _session.Load<Product>(request.Id);
            response = new ProductRequestResponse();
            response.InjectFrom(product);
        }
        return response;
    }
}
The Service Layer is your most important Contract
The most important interface that you can ever create in your entire system is your external facing service contract, this is what consumers of your service or application will bind to, i.e. the existing call-sites that often won't get updated along with your code-base - every other model is secondary.
DTOs are Best practices for remote services
Following Martin Fowler's recommendation for using DTOs (Data Transfer Objects) for remote services (MSDN), ServiceStack encourages the use of clean, untainted POCOs to define a well-defined contract, which should be kept in a largely implementation- and dependency-free .dll. The benefit is that you are able to re-use the typed DTOs used to define your services, as-is, in your C#/.NET clients - providing an end-to-end typed API without the use of any code-gen or other artificial machinery.
DRY vs Intent
Keeping things DRY should not be confused with clearly stating intent, which you should not try to DRY up or hide behind inheritance, magic properties or any other mechanism. Having clean, well-defined DTOs provides a single source of reference that anyone can look at to see what each service accepts and returns; it allows your client and server developers to start their work straight away and bind to the external service models before the implementation has been written.
Keeping the DTOs separate also gives you the freedom to re-factor the implementation from within without breaking external clients, e.g. when your service starts to cache responses or leverage a NoSQL solution to populate them.
It also provides the authoritative source (one that's not leaked or coupled inside your app logic) that's used to create the auto-generated metadata pages, example responses, Swagger support, XSDs, WSDLs, etc.
Using ServiceStack's Built-in auto-mapping
Whilst we encourage keeping separate DTO models, you don't need to maintain your own manual mapping as you can use a mapper like AutoMapper or using ServiceStack's built-in Auto Mapping support, e.g:
Create a new DTO instance, populated with matching properties on viewModel:
var dto = viewModel.ConvertTo<MyDto>();
Initialize DTO and populate it with matching properties on a view model:
var dto = new MyDto { A = 1, B = 2 }.PopulateWith(viewModel);
Initialize DTO and populate it with non-default matching properties on a view model:
var dto = new MyDto { A = 1, B = 2 }.PopulateWithNonDefaultValues(viewModel);
Initialize DTO and populate it with matching properties that are annotated with the Attr Attribute on a view model:
var dto = new MyDto { A=1 }.PopulateFromPropertiesWithAttribute<Attr>(viewModel);
When mapping logic becomes more complicated we like to use extension methods to keep code DRY and maintain the mapping in one place that's easily consumable from within your application, e.g:
public static class MappingExtensions
{
    public static MyDto ToDto(this MyViewModel viewModel)
    {
        var dto = viewModel.ConvertTo<MyDto>();
        dto.Items = viewModel.Items.ConvertAll(x => x.ToDto());
        dto.CalculatedProperty = Calculate(viewModel.Seed);
        return dto;
    }
}
Which is now easily consumable with just:
var dto = viewModel.ToDto();
If you are not tied specifically to ServiceStack and just want "fully functional service to support programmatic clients ... with what we already have", you could try the following: Have your controllers return either a ViewResult or a JsonResult based on the request's accept header - Request.AcceptTypes.Contains("text/html") or Request.AcceptTypes.Contains("application/json").
Both ViewResult and JsonResult are ActionResult, so the signature of actions remains the same, and both View() and Json() accept a ViewModel. Furthermore, if you have a ControllerBase you can make a base method (for example protected ActionResult RespondWith(Object viewModel)) which calls either View() or Json() so the change to existing code is minimal.
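A minimal sketch of such a base method (the class and method names are illustrative, not from an existing library):

public abstract class BaseController : Controller
{
    // Return the same view model as HTML or JSON depending on the request's Accept header
    protected ActionResult RespondWith(object viewModel)
    {
        if (Request.AcceptTypes != null && Request.AcceptTypes.Contains("application/json"))
        {
            return Json(viewModel, JsonRequestBehavior.AllowGet);
        }
        return View(viewModel);
    }
}

An action then ends with return RespondWith(model); instead of return View(model);.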
Of course, if your ViewModels are not pure (i.e. have some html-specific stuff or you rely on some ViewBag magic) then it's a little more work. And you won't get SOAP or other binding types provided by ServiceStack, but if your goal is to support a JSON data interface with minimal code changes to the existing MVC app then this could be a solution.
Lp

Can I handle different multi-tenancy with a global filter?

I have an asp.net mvc app which has membership implemented.
So a user has to log in. Each user belongs to a organisation (multi-tenancy).
How would I handle the organisation parameter globally? I was thinking this could be a good case for a global filter because all the data needs to be filtered for the given organisation. And the organisation is connected with the username, but not in the same table.
for example I have a index action like this
public ActionResult Index()
{
var result = _repository.GetListByOrganisation(organisation);
return View(result);
}
I was thinking about having a global attribute that queries the db for an organisation based on a given username. Since my controller already has the authorize attribute, I have the user name. It would be nice to cache the organisation (session, controllercontext) and not query it from the db on each request.
Is this a reasonable way to implement something like this? Or are there other ways which would be better? And how can I set a property on the controller / controllercontext from within a filter?
So any thoughts on this as well as comments would be great...
I would do this via DI.
You can use either a third-party DI container or your own code. Either way, you want to set the organization ID on a per-request basis.
So you'll be creating a unit of work and injecting that in your controller. For the sake of simplicity, let's pretend that your unit of work is the _repository field in your sample code, even though most real-world apps are more complex.
You add a constructor parameter to the controller:
public FooController(IFooRepository repository)
{
    this._repository = repository;
}
...and an organization ID on FooRepository:
public class FooRepository : IFooRepository
{
    public FooRepository(long organizationId)
    {
        this._organizationId = organizationId;
    }
}
Now in either your DI container setup or a MVC controller factory, you set this all up:
builder.Register(c => new FooRepository(GetOrgIdForCurrentUser())).As<IFooRepository>();
builder.Register(c => new FooController(c.Resolve<IFooRepository>()));
Perhaps you could have the organization embedded on the URL, for example if your route looks like this /{organization}/{controller}/{action}
then you'll get URLs like
/bigorg/products/list
/smallorg/products/list
and you'll receive the organization in your controller either as a parameter to your action method or via the RouteData object.
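A hedged sketch of that route registration (route and default names are illustrative):

routes.MapRoute(
    name: "OrganizationDefault",
    url: "{organization}/{controller}/{action}/{id}",
    defaults: new { controller = "Products", action = "List", id = UrlParameter.Optional }
);

// e.g. /bigorg/products/list -> ProductsController.List(string organization)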

ASP.net MVC Controller - Constructor usage

I'm working on an ASP.net MVC application and I have a question about using constructors for my controllers.
I'm using Entity Framework and linq to Entities for all of my data transactions. I need to access my Entity model for nearly all of my controller actions. When I first started writing the app I was creating an entity object at the beginning of each Action method, performing whatever work I needed to and then returning my result.
I realized that I was creating the same object over and over for each action method so I created a private member variable for the Entity object and started instantiating it in the constructor for each controller. Now each method only references that private member variable to do its work.
I'm still questioning myself on which way is right. I'm wondering A.) which method is most appropriate? B.) in the constructor method, how long are those objects living? C.) are there performance/integrity issues with the constructor method?
You are asking the right questions.
A. It is definitely not appropriate to create these dependencies inside each action method. One of the main features of MVC is the ability to separate concerns. By loading up your controller with these dependencies, you are making the controller too thick. They should be injected into the controller. There are various options for dependency injection (DI). Generally these types of objects can be injected either into the constructor or into a property. My preference is constructor injection.
B. The lifetime of these objects will be determined by the garbage collector. GC is not deterministic. So if you have objects that hold connections to resource-constrained services (database connections), you may need to be sure you close those connections yourself (instead of relying on Dispose). Many times the 'lifetime' concerns are separated out into an inversion of control (IoC) container. There are many out there. My preference is Ninject.
C. The instantiation costs are probably minimal. The database transactions cost are where you probably want to focus your attention. There is a concept called 'unit of work' you may want to look into. Essentially, a database can handle transactions larger than just one save/update operation. Increasing the transaction size can lead to better db performance.
Hope that gets you started.
RCravens has some excellent insights. I'd like to show how you can implement his suggestions.
It would be good to start by defining an interface for the data access class to implement:
public interface IPostRepository
{
    IEnumerable<Post> GetMostRecentPosts(int blogId);
}
Then implement a data class. Entity Framework contexts are cheap to build, and you can get inconsistent behavior when you don't dispose of them, so I find it's usually better to pull the data you want into memory, and then dispose the context.
public class PostRepository : IPostRepository
{
    public IEnumerable<Post> GetMostRecentPosts(int blogId)
    {
        // A using statement makes sure the context is disposed quickly.
        using (var context = new BlogContext())
        {
            return context.Posts
                .Where(p => p.BlogId == blogId)
                .OrderByDescending(p => p.TimeStamp)
                .Take(10)
                // ToList ensures the values are in memory before disposing the context
                .ToList();
        }
    }
}
Now your controller can accept one of these repositories as a constructor argument:
public class BlogController : Controller
{
    private IPostRepository _postRepository;

    public BlogController(IPostRepository postRepository)
    {
        _postRepository = postRepository;
    }

    public ActionResult Index(int blogId)
    {
        var posts = _postRepository.GetMostRecentPosts(blogId);
        var model = new PostsModel { Posts = posts };
        if (!posts.Any()) { model.Message = "This blog doesn't have any posts yet"; }
        return View("Posts", model);
    }
}
MVC allows you to use your own Controller Factory in lieu of the default, so you can specify that your IoC framework like Ninject decides how Controllers are created. You can set up your injection framework to know that when you ask for an IPostRepository it should create a PostRepository object.
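With Ninject, for instance, that setup might boil down to a single binding (a sketch; the Ninject MVC integration packages also take care of supplying the controller factory):

// In NinjectWebCommon.RegisterServices
kernel.Bind<IPostRepository>().To<PostRepository>().InRequestScope();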
One big advantage of this approach is that it makes your controllers unit-testable. For example, if you want to make sure that your model gets a Message when there are no posts, you can use a mocking framework like Moq to set up a scenario where your repository returns no posts:
var repositoryMock = new Mock<IPostRepository>();
repositoryMock.Setup(r => r.GetMostRecentPosts(1))
    .Returns(Enumerable.Empty<Post>());

var controller = new BlogController(repositoryMock.Object);
var result = (ViewResult)controller.Index(1);
var model = (PostsModel)result.Model;

Assert.IsFalse(string.IsNullOrEmpty(model.Message));
This makes it easy to test the specific behavior you're expecting from your controller actions, without needing to set up your database or anything special like that. Unit tests like this are easy to write, deterministic (their pass/fail status is based on the code, not the database contents), and fast (you can often run a thousand of these in a second).

Best practices when limiting changes to specific fields with LINQ2SQL

I was reading Steven Sanderson's book Pro ASP.NET MVC Framework and he suggests using a repository pattern:
public interface IProductsRepository
{
    IQueryable<Product> Products { get; }
    void SaveProduct(Product product);
}
He accesses the products repository directly from his Controllers, but since I will have both a web page and web service, I wanted to have add a "Service Layer" that would be called by the Controllers and the web services:
public class ProductService
{
    private IProductsRepository productsRepository;

    public ProductService(IProductsRepository productsRepository)
    {
        this.productsRepository = productsRepository;
    }

    public Product GetProductById(int id)
    {
        return (from p in productsRepository.Products
                where p.ProductID == id
                select p).First();
    }

    // more methods
}
This seems all fine, but my problem is that I can't use his SaveProduct(Product product) because:
1) I want to only allow certain fields to be changed in the Product table
2) I want to keep an audit log of each change made to each field of the Product table, so I would have to have methods for each field that I allow to be updated.
My initial plan was to have a method in ProductService like this:
public void ChangeProductName(Product product, string newProductName);
Which then calls IProductsRepository.SaveProduct(Product)
But there are a few problems I see with this:
1) Isn't it not very "OO" to pass in the Product object like this? However, I can't see how this code could go in the Product class since it should just be a dumb data object. I could see adding validation to a partial class, but not this.
2) How do I ensure that no one changed any other fields other than Product before I persist the change?
I'm basically torn because I can't put the auditing/update code in Product and the ProductService class' update methods just seem unnatural (However, GetProductById seems perfectly natural to me).
I think I'd still have these problems even if I didn't have the auditing requirement. Either way I want to limit what fields can be changed in one class rather than duplicating the logic in both the web site and the web services.
Is my design pattern just bad in the first place or can I somehow make this work in a clean way?
Any insight would be greatly appreciated.
I split the repository into two interfaces, one for reading and one for writing.
The reading one implements IDisposable and reuses the same data context for its lifetime. It returns the entity objects produced by LINQ to SQL. For example, it might look like:
interface Reader : IDisposable
{
    IQueryable<Product> Products { get; }
    IQueryable<Order> Orders { get; }
    IQueryable<Customer> Customers { get; }
}
The IQueryable is important so I get the delayed-evaluation goodness of LINQ to SQL. This is easy to implement with a DataContext, and easy enough to fake. Note that when I use this interface I never use the auto-generated properties for related rows (i.e. no fair using order.Products directly; calls must join on the appropriate ID columns, as in the sketch below). This is a limitation I don't mind living with considering how much easier it makes faking the read repository for unit tests.
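For example, a query under that rule joins explicitly instead of navigating the association (a sketch; the column names are assumptions):

// Instead of customer.Orders, join Customers to Orders on the ID columns
var ordersForCustomer =
    from c in reader.Customers
    join o in reader.Orders on c.CustomerId equals o.CustomerId
    where c.CustomerId == customerId
    select o;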
The writing one uses a separate DataContext per write operation, so it does not implement IDisposable. It does NOT take entity objects as input or output - it takes the specific fields needed for each write operation.
When I write test code, I can substitute the readable interface with a fake implementation that uses a bunch of List<>s which I populate manually. I use mocks for the write interface. This has worked like a charm so far.
Don't get in a habit of passing the entity objects around, they're bound to the datacontext's lifetime and it leads to unfortunate coupling between your repository and its clients.
To address your need for the auditing/logging of changes, just today I put the finishing touches on a system I'll suggest for your consideration. The idea is to serialize (easily done if you are using LTS entity objects and through the magic of the DataContractSerializer) the "before" and "after" state of your object, then save these to a logging table.
My logging table has columns for the date, username, a foreign key to the affected entity, and title/quick summary of the action, such as "Product was updated". There is also a single column for storing the change itself, which is a general-purpose field for storing a mini-XML representation of the "before and after" state. For example, here's what I'm logging:
<ProductUpdated>
<Deleted><Product ... /></Deleted>
<Inserted><Product ... /></Inserted>
</ProductUpdated>
Here is the general purpose "serializer" I used:
public string SerializeObject(object obj)
{
    // See http://msdn.microsoft.com/en-us/library/bb546184.aspx :
    Type t = obj.GetType();
    DataContractSerializer dcs = new DataContractSerializer(t);
    StringBuilder sb = new StringBuilder();
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.OmitXmlDeclaration = true;
    XmlWriter writer = XmlWriter.Create(sb, settings);
    dcs.WriteObject(writer, obj);
    writer.Close();
    string xml = sb.ToString();
    return xml;
}
Then, when updating (can also be used for logging inserts/deletes), grab the state before you do your model-binding, then again afterwards. Shove into an XML wrapper and log it! (or I suppose you could use two columns in your logging table for these, although my XML approach allows me to attach any other information that might be helpful).
Furthermore, if you want to only allow certain fields to be updated, you'll be able to do this with either a "whitelist/blacklist" in your controller's action method, or you could create a "ViewModel" to hand in to your controller, which could have the restrictions placed upon it that you desire. You could also look into the many partial methods and hooks that your LTS entity classes should have on them, which would allow you to detect changes to fields that you don't want.
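For the whitelist option, ASP.NET MVC's model binder can be told exactly which fields it may bind (a sketch; the action and property names are illustrative and it reuses the IProductsRepository from the question):

[HttpPost]
public ActionResult Edit(int id)
{
    var product = productsRepository.Products.First(p => p.ProductID == id);

    // Whitelist: only these fields can be changed by this request
    TryUpdateModel(product, new[] { "ProductName", "Description" });

    productsRepository.SaveProduct(product);
    return RedirectToAction("Index");
}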
Good luck! -Mike
Update:
For kicks, here is how I deserialize an entity (as I mentioned in my comment), for viewing its state at some later point in history: (After I've extracted it from the log entry's wrapper)
public Product DeserializeProduct(string xmlString)
{
    using (var stream = new MemoryStream(Encoding.Unicode.GetBytes(xmlString)))
    {
        DataContractSerializer dcs = new DataContractSerializer(typeof(Product));
        return (Product)dcs.ReadObject(stream);
    }
}
I would also recommend reading Chapter 13, "LINQ in every layer" in the book "LINQ in Action". It pretty much addresses exactly what I've been struggling with -- how to work LINQ into a 3-tier design. I'm leaning towards not using LINQ at all now after reading that chapter.
