Handling transactions with UnitOfWork and EF - asp.net-mvc

I'm using Entity Framework 5.0 and MVC4. I have a couple of domains. Each of them has its own DbContext (which uses the appropriate tables), repository and service. I also implemented UnitOfWork.
Handling a specific flow in a transaction inside one service for a specific domain is simple. I do some operations on tables and at the end I invoke UnitOfWork.Save, which behaves as Transaction.Commit.
But let's assume I have a case in which I have to invoke operations on two different domains, and these two operations must be put inside one transaction. I have access to the domain services from the controller, so these actions are invoked from there. At the moment I can see three solutions:
I must have the UnitOfWork in the controller and call its Save method at the end (I don't like this idea).
Create some service in which I will have the UnitOfWork and access to both services (actually it is the same solution as above, but I'm moving the logic to a separate class).
Create an additional TransactionScope inside the controller and complete it at the end.
Please let me know which option you think is best. If you have any other than the three above, let me know. Or maybe something is wrong with my concept, I mean the domains and their DbContexts?

Assuming your UnitOfWork implementation supports normal .NET transactions, you can do the following and the operations should enlist in the currently running (ambient) transaction.
protected TransactionScope CreateTransactionScope()
{
    var options = new TransactionOptions();
    options.IsolationLevel = IsolationLevel.ReadCommitted;
    options.Timeout = TransactionManager.MaximumTimeout;
    return new TransactionScope(TransactionScopeOption.Required, options);
}

using (var scope = this.CreateTransactionScope())
{
    // do operations with context 1
    // do operations with context 2
    scope.Complete();
}
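If you'd rather not put the scope in the controller, option 2 from the question can use the same idea: a small coordinating class owns the scope and calls both domain services. The following is only a sketch with hypothetical IFirstDomainService/ISecondDomainService stand-ins; each service still calls its own UnitOfWork.Save.
using System.Transactions;

// Hypothetical stand-ins for the two domain services.
public interface IFirstDomainService  { void DoWorkAndSave(); }
public interface ISecondDomainService { void DoWorkAndSave(); }

// Option 2: a coordinator owns the ambient transaction, so neither the
// controller nor the individual domain services deal with it.
public class CrossDomainCoordinator
{
    private readonly IFirstDomainService _first;
    private readonly ISecondDomainService _second;

    public CrossDomainCoordinator(IFirstDomainService first, ISecondDomainService second)
    {
        _first = first;
        _second = second;
    }

    public void DoBothOperations()
    {
        var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            _first.DoWorkAndSave();   // each Save enlists in the ambient transaction
            _second.DoWorkAndSave();
            scope.Complete();         // nothing is committed until both succeed
        }
    }
}
Note that with two different DbContexts (two connections) open inside one scope, the transaction can escalate to a distributed transaction (MSDTC), so either MSDTC needs to be available or the operations need to share a connection.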

Related

Onion Architecture - Repository Vs Service?

I am learning the well-known Onion Architecture from Jeffrey Palermo.
Not specific to this pattern, but I cannot see clearly the separation between repositories and domain services.
I (mis)understand that a repository concerns data access while a service is more about the business layer (referencing one or more repositories).
In many examples, a repository seems to have some kind of business logic behind it, like GetAllProductsByCategoryId or GetAllXXXBySomeCriteriaYYY.
For lists, it seems that the service is just a wrapper around the repository without any logic.
For hierarchies (parent/children/children), it is almost the same problem: is it the role of the repository to load the complete hierarchy?
The repository is not a gateway to access the database. It is an abstraction that allows you to store and load domain objects from some form of persistence store (a database, a cache or even a plain collection). It takes or returns the domain objects instead of their internal fields, hence it is an object-oriented interface.
It is not recommended to add methods like GetAllProductsByCategoryId or GetProductByName to the repository, because you will keep adding more and more methods to the repository as your use cases and object fields grow. Instead it is better to have a query method on the repository which takes a Specification; you can pass different implementations of the Specification to retrieve the products.
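For illustration, a minimal in-memory flavour of such a Specification-based query could look like the sketch below; ISpecification, ProductsInCategory and the Product shape are illustrative assumptions, not taken from any particular library.
using System.Collections.Generic;

public class Product
{
    public int Id { get; set; }
    public int CategoryId { get; set; }
    public string Name { get; set; }
}

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public interface IProductRepository
{
    IEnumerable<Product> Find(ISpecification<Product> specification);
}

// One concrete specification; ByName, ByPriceRange, ... follow the same shape.
public class ProductsInCategory : ISpecification<Product>
{
    private readonly int _categoryId;

    public ProductsInCategory(int categoryId)
    {
        _categoryId = categoryId;
    }

    public bool IsSatisfiedBy(Product candidate)
    {
        return candidate.CategoryId == _categoryId;
    }
}

// Usage: var products = repository.Find(new ProductsInCategory(42));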
Overall, the goal of the repository pattern is to create a storage abstraction that does not require changes when the use cases change. This article talks about the Repository pattern in domain modelling in great detail. You may be interested.
For the second question: If I see a ProductRepository in the code, I'd expect that it returns me a list of Product. I also expect that each of the Product instances is complete. For example, if Product has a reference to a ProductDetail object, I'd expect that Product.getDetail() returns me a ProductDetail instance rather than null. Maybe the implementation of the repository loads ProductDetail together with Product, maybe the getDetail() method invokes ProductDetailRepository on the fly. I don't really care as a user of the repository. It is also possible that the Product only returns a ProductDetail id when I call getDetail(). That is perfectly fine from the repository's contract point of view. However it complicates my client code and forces me to call ProductDetailRepository myself.
By the way, I've seen many service classes that solely wrap the repository classes in the past. I think it is an anti-pattern. It is better to have the callers of the services use the repositories directly.
The Repository pattern mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.
So, repositories provide an interface for CRUD operations on domain entities. Remember that repositories deal with whole Aggregates.
Aggregates are groups of things that belong together. An Aggregate Root is the thing that holds them all together.
Example: Order and OrderLines.
OrderLines have no reason to exist without their parent Order, nor can they belong to any other Order. In this case, Order and OrderLines would probably be an Aggregate, and Order would be the Aggregate Root.
Business logic should be in the domain entities, not in the repository layer; application logic should be in the service layer, which, as you mention, plays the role of coordinator between repositories.
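A rough sketch of that Order aggregate, just to make the shape concrete (illustrative code, not from the question):
using System.Collections.Generic;

public class Order                     // aggregate root
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public int Id { get; private set; }
    public IReadOnlyCollection<OrderLine> Lines { get { return _lines.AsReadOnly(); } }

    // Lines are created only through the root, so they cannot exist
    // without their parent Order or move to another Order.
    public void AddLine(string product, int quantity)
    {
        _lines.Add(new OrderLine(product, quantity));
    }
}

public class OrderLine
{
    public string Product { get; private set; }
    public int Quantity { get; private set; }

    internal OrderLine(string product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }
}

// An IOrderRepository would then expose Get/Add for Order only, never for OrderLine.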
While I'm still struggling with this, I want to post it as an answer, but I also accept (and want) feedback on it.
Take the example GetProductsByCategory(int id).
First, let's think from the initial need. We hit a controller, probably the CategoryController, so you have something like:
public CategoryController(ICategoryService service) {
    // here we inject our service and keep a private variable.
}

public IHttpActionResult Category(int id) {
    CategoryViewModel model = something.GetCategoryViewModel(id);
    return View(model);
}
so far, so good. We need to declare 'something' that creates the view model.
Let's simplify and say:
public IHttpActionResult Category(int id) {
    var dependencies = service.GetDependenciesForCategory(id);
    CategoryViewModel model = new CategoryViewModel(dependencies);
    return View(model);
}
ok, what are the dependencies? We maybe need the category tree, the products, the page, how many total products, etc.
so if we implemented this in a repository way, it could look more or less like this:
public IHttpActionResult Category(int id) {
    var products = repository.GetCategoryProducts(id);
    var category = repository.GetCategory(id); // full details of the category
    var childs = repository.GetCategoriesSummary(category.childs);
    CategoryViewModel model = new CategoryViewModel(products, category, childs); // awouch!
    return View(model);
}
instead, back to services:
public IHttpActionResult Category(int id) {
    var category = service.GetCategory(id);
    if (category == null) return NotFound();
    var model = new CategoryViewModel(category);
    return View(model);
}
much better, but what exactly is inside service.GetCategory(id)?
public class CategoryService {
    public CategoryService(ICategoryRepository categoryRepository, IProductRepository productRepository) {
        // same dependency injection here (both repositories kept in private fields)
    }

    public Category GetCategory(int id) {
        var category = categoryRepository.Get(id);
        var childs = categoryRepository.Get(category.childs); // int[] of ids
        var products = productRepository.GetByCategory(id);   // this doesn't look that good...
        return category;
    }
}
Let's try another approach, the unit of work. I will use Entity Framework as the UoW and the repositories, so there's no need to create those.
public class CategoryService {
    public CategoryService(DbContext db) {
        // same dependency injection here
    }

    public Category GetCategory(int id) {
        // Find lives on DbSet and can't be chained after Include, so query instead
        var category = db.Category.Include(c => c.Childs).Include(c => c.Products).SingleOrDefault(c => c.Id == id);
        return category;
    }
}
So here we are querying through the ORM directly instead of going through our own repository methods; rather than implementing our own complex queries, we can let the ORM do the work. Also, we have access to ALL the repositories (the DbSets), so we can still do our unit of work inside the service.
Now we need to select which data we want; I probably don't want all the fields of my entities.
The best place I can see this happening is actually in the ViewModel. Each ViewModel may need to map its own data, so let's change the implementation of the service again.
public class CategoryService {
    public CategoryService(DbContext db) {
        // same dependency injection here
    }

    public Category GetCategory(int id) {
        var category = db.Category.Find(id);
        return category;
    }
}
so where are all the products and inner categories?
let's take a look at the ViewModel. Remember this will ONLY map data to values; if you are doing something else here, you are probably giving too much responsibility to your ViewModel.
public CategoryViewModel(Category category) {
    Name = category.Name;
    Id = category.Id;
    Products = category.Products.Select(p => new CategoryProductViewModel(p));
    Childs = category.Childs.Select(c => c.Name); // only the childs' names.
}
you can imagine the CategoryProductViewModel by yourself right now.
BUT (why is there always a but??)
We are doing 3 db hits, and we are fetching all the category fields because of the Find. Also, Lazy Loading must be enabled. Not a real solution, is it?
To improve this, we can replace Find with Where... but this delegates the Single or Find to the ViewModel, and it returns an IQueryable<Category> where we know there should be exactly one.
Remember I said "I'm still struggling"? This is mostly why. To fix this, we should return exactly the needed data from the service (also known as the ..... you know it .... yes! the ViewModel).
so let's go back to our controller:
public IHttpActionResult Category(int id) {
    var model = service.GetProductCategoryViewModel(id);
    if (model == null) return NotFound();
    return View(model);
}
inside the GetProductCategoryViewModel method, we can call private methods that return the different pieces and assemble them as the ViewModel.
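As a hedged sketch, reusing the hypothetical DbContext from above and assuming the view models expose settable properties plus the usual CategoryId/ParentId columns, that assembly could look like this:
public CategoryViewModel GetProductCategoryViewModel(int id) {
    var category = db.Category.Find(id);
    if (category == null) return null;

    return new CategoryViewModel {
        Id = category.Id,
        Name = category.Name,
        Products = GetProductSummaries(id),   // private helpers return the pieces
        Childs = GetChildNames(id)
    };
}

private IEnumerable<CategoryProductViewModel> GetProductSummaries(int categoryId) {
    return db.Product
             .Where(p => p.CategoryId == categoryId)
             .Select(p => new CategoryProductViewModel { Id = p.Id, Name = p.Name })
             .ToList();   // only the columns the view model needs
}

private IEnumerable<string> GetChildNames(int categoryId) {
    return db.Category
             .Where(c => c.ParentId == categoryId)
             .Select(c => c.Name)
             .ToList();
}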
this is bad, now my services know about viewmodels... let's fix that.
We create an interface; this interface is the actual contract of what this method will return.
ICategoryWithProductsAndChildsIds // quite verbose, I know.
nice, now we only need to declare our ViewModel as
public class CategoryViewModel : ICategoryWithProductsAndChildsIds
and implement it the way we want.
The interface looks like it has too many things; of course it can be split into ICategoryBasic, IProducts, IChilds, or whatever you may want to name those.
So when we implement another ViewModel, we can choose to do only IProducts.
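A rough sketch of what those split contracts might look like (the property shapes are assumptions on my side):
using System.Collections.Generic;

public interface ICategoryBasic {
    int Id { get; }
    string Name { get; }
}

public interface IProducts {
    IEnumerable<CategoryProductViewModel> Products { get; }
}

public interface IChilds {
    IEnumerable<string> Childs { get; }
}

// The full view model opts into all three contracts...
public class CategoryViewModel : ICategoryBasic, IProducts, IChilds {
    public int Id { get; set; }
    public string Name { get; set; }
    public IEnumerable<CategoryProductViewModel> Products { get; set; }
    public IEnumerable<string> Childs { get; set; }
}

// ...while a leaner one implements only what it needs.
public class ProductsOnlyViewModel : IProducts {
    public IEnumerable<CategoryProductViewModel> Products { get; set; }
}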
We can have our services expose methods (private or not) that retrieve those contracts, and glue the pieces together in the service layer. (Easier said than done.)
When I get to fully working code, I might create a blog post or a GitHub repo, but I don't have it yet, so this is all for now.
I believe the Repository should be only for CRUD operations.
public interface IRepository<T>
{
    void Add(T item);
    void Remove(T item);
    T Get(object id);
    // ...
}
So IRepository would have: Add, Remove, Update, Get, GetAll and possibly a version of each of those that takes a list, i.e., AddMany, RemoveMany, etc.
For performing search retrieval operations you should have a second interface such as an IFinder. You can either go with a specification, so IFinder could have a Find(criteria) method that takes criteria. Or you can go with things like IPersonFinder, which would define custom functions such as FindPersonByName, FindPersonByAge, etc.
public interface IMyObjectFinder
{
    MyObject FindByName(string name);
    MyObject FindByEmail(string email);
    IEnumerable<MyObject> FindAllSmallerThen(int amount);
    IEnumerable<MyObject> FindAllThatArePartOf(Group group);
    // ...
}
The alternative would be:
public interface IFinder<T>
{
    IEnumerable<T> Find(Criteria criteria); // the shape of Criteria is up to you (see below)
}
This second approach is more complex. You need to define a strategy for the criteria. Are you going to use a query language of some sort, or a simpler key-value association, etc.? The full power of the interface is also harder to understand from simply looking at it. It's also easier to leak implementations with this method, because the criteria could be based around a particular type of persistence system, like if you take a SQL query as criteria for example. On the other hand, it might prevent you from having to continuously come back to the IFinder because you've hit a special use case that requires a more specific query. I say it might, because your criteria strategy will not necessarily cover 100% of the querying use cases you might need.
You could also decide to mix both together, and have an IFinder defining a Find method, and IMyObjectFinders that implement IFinder, but also add custom methods such as FindByName.
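A small sketch of that mixed approach; the Func<T, bool> criteria and the Person type are only placeholders for whatever criteria strategy and entity you settle on:
using System;
using System.Collections.Generic;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public interface IFinder<T>
{
    IEnumerable<T> Find(Func<T, bool> criteria);
}

// The custom finder keeps the generic entry point but also names the common queries.
public interface IPersonFinder : IFinder<Person>
{
    Person FindByName(string name);
    IEnumerable<Person> FindByAge(int age);
}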
The service acts as a supervisor. Say you need to retrieve an item but must also process the item before it is returned to the client, and that processing might require information found in other items. The service would then retrieve all appropriate items using the Repositories and the Finders, send the item to be processed to objects that encapsulate the necessary processing logic, and finally return the item requested by the client. Sometimes no processing and no extra retrievals are required; in such cases, you don't need a service. You can have clients call directly into the Repositories and the Finders. This is one difference between the Onion and a layered architecture: in the Onion, everything that is more outside can access everything more inside, not only the layer directly before it.
It would be the role of the repository to load the full hierarchy of what is needed to properly construct the item that it returns. So if your repository returns an item that has a List of another type of item, it should already resolve that. Personally though, I like to design my objects so that they don't contain references to other items, because that makes the repository more complex. I prefer to have my objects keep the Id of other items, so that if the client really needs that other item, he can query it again with the proper Repository given the Id. This flattens out all items returned by the Repositories, yet still lets you create hierarchies if you need to.
You could, if you really felt the need to, add a restraining mechanism to your Repository, so that you can specify exactly which fields of the item you need. Say you have a Person and only care about his name; you could do Get(id, name) and the Repository would not bother with getting every field of the Person, only its name field. Doing this, though, adds considerable complexity to the repository. And doing this with hierarchical objects is even more complex, especially if you want to restrict fields inside fields of fields. So I don't really recommend it. The only good reason for this, to me, would be cases where performance is critical and nothing else can be done to improve it.
In Domain Driven Design the repository is responsible for retrieving the whole Aggregate.
Onion and Hexagonal Architectures purpose is to invert the dependency from domain->data access.
Rather than having a UI->api->domain->data-access,
you'll have something like UI->api->domain<-data-access (note the reversed last arrow),
so that your most important asset, the domain logic, sits in the center and is free of external dependencies.
Generally this is done by splitting the Repository into interface/implementation and putting the interface alongside the business logic.
Now to services; there's more than one type of service:
Application Services: your controller and view model, which are external concerns for UI and display and are not part of the domain
Domain Services: which provide domain logic. In your case, if the logic you have in application services starts to do more than its presentation duties, you should look at extracting it into a domain service
Infrastructure Services: which would, as with repositories, have an interface within the domain, and an implementation in the outer layers
@Bart Calixto, you may have a look at CQRS; building your view model gets too complex when you're trying to use repositories which you designed for domain logic.
You could just write another repo for the ViewModel, using SQL joins for example, and it doesn't have to be in the domain.
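A hedged sketch of such a read-side repository, living outside the domain and querying straight into the view model (the table, column and view model shapes are made up for illustration):
using System.Collections.Generic;
using System.Data.SqlClient;

public interface ICategoryPageReadRepository
{
    CategoryViewModel GetCategoryPage(int categoryId);
}

public class SqlCategoryPageReadRepository : ICategoryPageReadRepository
{
    private readonly string _connectionString;

    public SqlCategoryPageReadRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public CategoryViewModel GetCategoryPage(int categoryId)
    {
        // One joined query shaped for the page, not for the domain model.
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            @"SELECT c.Id, c.Name, p.Id AS ProductId, p.Name AS ProductName
              FROM Categories c
              LEFT JOIN Products p ON p.CategoryId = c.Id
              WHERE c.Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", categoryId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                CategoryViewModel model = null;
                var products = new List<CategoryProductViewModel>();
                while (reader.Read())
                {
                    if (model == null)
                        model = new CategoryViewModel { Id = reader.GetInt32(0), Name = reader.GetString(1) };
                    if (!reader.IsDBNull(2))
                        products.Add(new CategoryProductViewModel { Id = reader.GetInt32(2), Name = reader.GetString(3) });
                }
                if (model != null) model.Products = products;
                return model;
            }
        }
    }
}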
is it the role of repository to load the complete hierarchy ?
Short answer: yes, if the repository's outcome is a hierarchy
The role of repository is to load whatever you want, in any shape you need, from the datasource (e.g. database, file system, Lucene index, etc).
Let's suppose a repository (interface) has the GetSomeHierarchyOrListBySomeCriteria operation - the operation, its parameters and its outcome are part of the application core!
Let's focus on the outcome: its shape (list, hierarchy, etc.) doesn't matter, the repository implementation is supposed to do everything necessary to return it.
If one is using a NoSQL database, then the GetSomeHierarchyOrListBySomeCriteria implementation might need only one NoSQL database query, with no other conversions or transformations, to get the desired outcome (e.g. a hierarchy). For a SQL database, on the other hand, the same outcome might imply multiple queries and complex conversions or transformations - but that's an implementation detail; the repository interface is the same.
repositories vs domain services
According to The Onion Architecture : part 1, and I'm pointing here to the official page, not someone else's interpretation:
The first layer around the Domain Model is typically where we would find interfaces that provide object saving and retrieving behavior, called repository interfaces. [...] Only the interface is in the application core.
Notice the Domain Services layer above the Domain Model one.
Starting with the second official page, The Onion Architecture : part 2, the author forgets about the Domain Services layer and depicts IConferenceRepository as part of the Object Services layer, which sits right above the Domain Model, replacing the Domain Services layer! The Object Services layer continues in The Onion Architecture : part 3, so I ask: what Domain Services? :)))
It seems to me that the author's intent is for Object Services or Domain Services to consist only of repositories; otherwise he leaves no clue about anything else.

How to manage transactions in the service layer?

We’re developing a .Net application with the following architecture: presentation layer (using MVC pattern with ASP.Net MVC 2), service layer, data access layer (using repository pattern over Entity Framework).
We’ve decided to put the transaction management in the service layer but we’re not sure about how to implement it. We want to control the transaction entirely at the service layer level. That is, every time a controller calls a method in the service layer, it has to be an atomic operation regarding database updates.
If there were no relation between different services provided in the service layer, then it would be simple: each method should commit the changes at the end of its execution (that is, call the save method on the context it uses). But sometimes services at the service layer work together.
e.g.: we provide a shipment service that has a confirm method which receives the following parameters: the shipment id, a flag indicating if it corresponds to a new customer or an existing one, the customer id (in case the shipment confirmation is for an existing customer) and a customer name (in case it is for a new customer). If the flag is set to "new customer", then the service layer has to (a) create the customer and (b) confirm the shipment. For (a) the shipment service calls the customer service (which already implements the validations and logic needed to create a new customer and store it in the database).
Who should commit the changes in this scenario?
Should the customer service do it? It cannot commit the changes right after creating the new customer, because something could still go wrong later in the shipment confirmation method, but it does have to commit its changes when it is called directly (in another use case, provided to create a customer).
Should the controller calling the service method do it? But the controller shouldn't know anything about transactions; we've decided to put all transaction knowledge in the service layer.
A transaction manager in the service layer? How do we design it? Who calls it and when?
Is there a design pattern for this that we should follow?
I have a Commit() on my service; it only commits if the UnitOfWork was created by the service itself. If the UnitOfWork was passed in through the constructor, Commit does nothing.
I used a second (internal) constructor for the service:
public class MyService
{
    private IUnitOfWork _uow;
    private bool _uowInternal;   // true when the UnitOfWork arrived via the internal constructor, i.e. is owned by the caller

    public MyService()
    {
        _uow = new UnitOfWork();
        _uowInternal = false;
    }

    internal MyService(IUnitOfWork uow)
    {
        _uow = uow;
        _uowInternal = true;
    }

    public void MyServiceCall()
    {
        // Create second service, passing in my UnitOfWork:
        var svc2 = new MySecondService(_uow);

        // Do your stuff with both services.
        // ...

        // Commit my UnitOfWork, which will include all changes from svc2:
        Commit();
    }

    public void Commit()
    {
        // Only commit when this service owns the UnitOfWork;
        // otherwise the outermost owner commits.
        if (!_uowInternal)
            _uow.Commit();
    }
}
In a similar architecture with WCF and L2S instead of EF, we chose to use transactions in the main service interface implementation class. We used TransactionScope to achieve this:
public void AServiceMethod() {
    using (TransactionScope ts = new TransactionScope()) {
        service1.DoSomething();
        service2.DoSomething();
        ts.Complete();
    }
}
The main disadvantage is that the transaction may get big. In that case, if for example one of the service calls in the transaction block requires only read-only access, we wrap it in a nested TransactionScope(TransactionScopeOption.Suppress) block to prevent it from holding row/table locks for the rest of the transaction's lifetime.
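For illustration, the same method with such a suppressed, read-only block nested inside (service and method names are placeholders):
public void AServiceMethod() {
    using (TransactionScope ts = new TransactionScope()) {
        service1.DoSomething();               // enlisted in the outer transaction

        // Read-only work runs outside the transaction, so it takes no locks with it.
        using (var readOnlyScope = new TransactionScope(TransactionScopeOption.Suppress)) {
            reportingService.ReadSomething();
            readOnlyScope.Complete();
        }

        service2.DoSomething();
        ts.Complete();
    }
}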

ASP.net MVC Controller - Constructor usage

I'm working on an ASP.net MVC application and I have a question about using constructors for my controllers.
I'm using Entity Framework and LINQ to Entities for all of my data transactions. I need to access my Entity model for nearly all of my controller actions. When I first started writing the app I was creating an entity object at the beginning of each action method, performing whatever work I needed to and then returning my result.
I realized that I was creating the same object over and over for each action method so I created a private member variable for the Entity object and started instantiating it in the constructor for each controller. Now each method only references that private member variable to do its work.
I'm still questioning which way is right. I'm wondering: A) which approach is most appropriate? B) with the constructor approach, how long do those objects live? C) are there performance/integrity issues with the constructor approach?
You are asking the right questions.
A. It is definitely not appropriate to create these dependencies inside each action method. One of the main features of MVC is the ability to separate concerns. By loading up your controller with these dependencies, you are making the controller too thick. They should be injected into the controller. There are various options for dependency injection (DI). Generally these types of objects can be injected either into the constructor or into a property. My preference is constructor injection.
B. The lifetime of these objects will be determined by the garbage collector. GC is not deterministic. So if you have objects that hold connections to resource-constrained services (database connections), then you may need to make sure you close those connections yourself (instead of relying on Dispose). Many times the 'lifetime' concerns are separated out into an inversion of control (IoC) container. There are many out there. My preference is Ninject.
C. The instantiation costs are probably minimal. The database transaction costs are where you probably want to focus your attention. There is a concept called 'unit of work' you may want to look into. Essentially, a database can handle transactions larger than just one save/update operation, and increasing the transaction size can lead to better db performance.
Hope that gets you started.
RCravens has some excellent insights. I'd like to show how you can implement his suggestions.
It would be good to start by defining an interface for the data access class to implement:
public interface IPostRepository
{
    IEnumerable<Post> GetMostRecentPosts(int blogId);
}
Then implement a data class. Entity Framework contexts are cheap to build, and you can get inconsistent behavior when you don't dispose of them, so I find it's usually better to pull the data you want into memory, and then dispose the context.
public class PostRepository : IPostRepository
{
    public IEnumerable<Post> GetMostRecentPosts(int blogId)
    {
        // A using statement makes sure the context is disposed quickly.
        using (var context = new BlogContext())
        {
            return context.Posts
                .Where(p => p.BlogId == blogId)
                .OrderByDescending(p => p.TimeStamp)
                .Take(10)
                // ToList ensures the values are in memory before disposing the context
                .ToList();
        }
    }
}
Now your controller can accept one of these repositories as a constructor argument:
public class BlogController : Controller
{
    private IPostRepository _postRepository;

    public BlogController(IPostRepository postRepository)
    {
        _postRepository = postRepository;
    }

    public ActionResult Index(int blogId)
    {
        var posts = _postRepository.GetMostRecentPosts(blogId);
        var model = new PostsModel { Posts = posts };
        if (!posts.Any()) { model.Message = "This blog doesn't have any posts yet"; }
        return View("Posts", model);
    }
}
MVC allows you to use your own Controller Factory in lieu of the default, so you can specify that your IoC framework like Ninject decides how Controllers are created. You can set up your injection framework to know that when you ask for an IPostRepository it should create a PostRepository object.
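For example, with Ninject the wiring is essentially a single binding; a hedged sketch (the exact bootstrapping depends on which Ninject/MVC integration you use):
using Ninject;

// e.g. somewhere in application start-up
var kernel = new StandardKernel();
kernel.Bind<IPostRepository>().To<PostRepository>();

// Any controller that asks for an IPostRepository in its constructor
// now receives a PostRepository built by Ninject.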
One big advantage of this approach is that it makes your controllers unit-testable. For example, if you want to make sure that your model gets a Message when there are no posts, you can use a mocking framework like Moq to set up a scenario where your repository returns no posts:
var repositoryMock = new Mock<IPostRepository>();
repositoryMock.Setup(r => r.GetMostRecentPosts(1))
              .Returns(Enumerable.Empty<Post>());

var controller = new BlogController(repositoryMock.Object);
var result = (ViewResult)controller.Index(1);
var model = (PostsModel)result.Model;

Assert.IsFalse(string.IsNullOrEmpty(model.Message));
This makes it easy to test the specific behavior you're expecting from your controller actions, without needing to set up your database or anything special like that. Unit tests like this are easy to write, deterministic (their pass/fail status is based on the code, not the database contents), and fast (you can often run a thousand of these in a second).

Service Layer is repeating my Repositories

I'm developing an application using asp.net mvc, NHibernate and DDD. I have a service layer that is used by the controllers of my application. Everything uses Unity to inject dependencies (ISessionFactory into repositories, repositories into services and services into controllers) and it works fine.
But it's very common that I need a method in a service just to get an object from my repository, like this (in the service class):
public class ProductService {
    private readonly IUnitOfWork _uow;
    private readonly IProductRepository _productRepository;

    public ProductService(IUnitOfWork unitOfWork, IProductRepository productRepository) {
        this._uow = unitOfWork;
        this._productRepository = productRepository;
    }

    /* should this method exist in DDD??? It's very common */
    public Domain.Product Get(long key) {
        return _productRepository.Get(key);
    }

    /* another common method... is this correct DDD? */
    public bool Delete(long key) {
        using (var tx = _uow.BeginTransaction()) {
            try {
                _productRepository.Delete(key);
                tx.Commit();
                return true;
            } catch {
                tx.RollBack();
                return false;
            }
        }
    }

    /* ... other methods ... */
}
Is this code correct DDD? For each service class I have a repository, and does each service class need a "Get" method for its entity?
Thanks guys
Cheers
Your ProductService doesn't look like it follows Domain-Driven Design principles. If I understand it correctly, it is part of the Application layer between Presentation and Domain. If so, the methods on ProductService should have business meaning with regard to products.
Let's talk about deleting products. Is it as simple as executing a delete on the database (NHibernate, or whatever)? I think it is not. What about orders which reference the to-be-deleted product? And so on and so forth. Btw, Udi Dahan wrote a great article on deleting entities.
Bottom line is, if your application is so simple that services do really replicate your repositories and contain only CRUD operations, you probably shouldn't do DDD, throw away your repositories and let services operate on entities (which would be simple data containers in that case).
On the other hand, if there is a complicated behavior (like the one with handling 'deleted' products), there is a point in going DDD path and I strongly advocate doing so.
PS. Regardless of which approach (DDD or not) you eventually take, I would encourage you to use some Aspect Oriented Programming to handle the transaction and exception related stuff. Otherwise you end up with way too many methods such as DeleteProduct containing the same TX and exception handling code.
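As a hand-rolled illustration of that idea (a simple decorator rather than a full AOP/interception framework, and assuming an IProductService interface extracted from the ProductService above):
public interface IProductService {
    bool Delete(long key);
}

// Every call gets the same TX + exception handling in one place,
// so the inner service stays free of that boilerplate.
public class TransactionalProductService : IProductService {
    private readonly IProductService _inner;
    private readonly IUnitOfWork _uow;

    public TransactionalProductService(IProductService inner, IUnitOfWork uow) {
        _inner = inner;
        _uow = uow;
    }

    public bool Delete(long key) {
        using (var tx = _uow.BeginTransaction()) {
            try {
                var result = _inner.Delete(key);
                tx.Commit();
                return result;
            } catch {
                tx.RollBack();
                return false;
            }
        }
    }
}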
That looks correct from my perspective. I really didn't like repeating service and repository method names over and over in my asp.net MVC project, so I went for a generic repository approach/pattern. This means that I really only need one or two Get() methods in my repository to retrieve my objects. This is possible for me because I am using Entity Framework and I just have my repository's Get() method return an IQueryable. Then I can just do the following:
Product product = (from p in _productRepository.Get() where p.Id == Id select p).FirstOrDefault();
You can probably replicate this in NHibernate with LINQ to NHibernate.
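A minimal sketch of such a generic repository on top of Entity Framework (the names and the plain DbContext are assumptions for illustration):
using System.Data.Entity;
using System.Linq;

public interface IRepository<T> where T : class {
    IQueryable<T> Get();      // callers compose their own Where/OrderBy on top
    void Add(T entity);
    void Remove(T entity);
}

public class EfRepository<T> : IRepository<T> where T : class {
    private readonly DbContext _context;

    public EfRepository(DbContext context) {
        _context = context;
    }

    public IQueryable<T> Get()    { return _context.Set<T>(); }
    public void Add(T entity)     { _context.Set<T>().Add(entity); }
    public void Remove(T entity)  { _context.Set<T>().Remove(entity); }
}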
Edit: This works for DDD because it still allows me to interchange my DAL/repositories as long as the data library I am using (NHibernate, EF, etc.) supports IQueryable.
I am not sure how to do a generic repository without IQueryable, but you might be able to use delegates/lambda functions to incorporate it.
Edit2: And just in case I didn't answer your question correctly, if you are asking if you are supposed to call your repository's Get() method from the service then yes, that is the correct DDD design as well. The reason is that the service layer is supposed to handle all your business logic, so it decides exactly how and what data to retrieve (for example, do you want it in alphabetical order, unordered, etc...). It also means that it can perform validation after loading if needed or validation before deleting and/or saving.
This means that the service layer doesn't care exactly how that data is stored and retrieved, it only decides what data is stored and retrieved. It then calls on the repository to handle the request correctly and retrieve/store the data in the way the service layer tells it to. Thus you have correct separation of concerns.

Is there any benefit to using a single Repository instance for an Asp.net Mvc application?

In the CarTrackr project, a technique is used that creates only 1 repository instance for all requests in the Asp.net Mvc website, and the UnityControllerFactory class manages all repository instances (handing them to the requesting controller).
Is there any benefit to using a single repository instance compared with creating a new repository instance for every request?
I know it may improve overall performance, but does it cause any transaction problems?
partial Global.asax
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
        RegisterDependencies();
    }

    protected static void RegisterDependencies()
    {
        IUnityContainer container = new UnityContainer();

        // Registrations
        container.RegisterType<IUserRepository, UserRepository>(new ContextLifetimeManager<IUserRepository>());
        container.RegisterType<ICarRepository, CarRepository>(new ContextLifetimeManager<ICarRepository>());

        // Set controller factory
        ControllerBuilder.Current.SetControllerFactory(
            new UnityControllerFactory(container)
        );
    }
}
partial CarController.cs
[Authorize]
public class CarController : Controller
{
    private IUserRepository UserRepository;
    private ICarRepository CarRepository;

    public CarController(IUserRepository userRepository, ICarRepository carRepository)
    {
        UserRepository = userRepository;
        CarRepository = carRepository;
    }
}
Thanks,
Creating a repository instance per request by itself shouldn't cause any performance issue; the repository is often pretty shallow, and when it needs to access data things like connection pooling minimise the cost of establishing actual connections. Object creation is astonishingly cheap, especially for short-lived things like web requests where the object gets collected while still in "generation zero".
As to whether to have a single repository or a repository per request - that depends on the repository ;-p
The biggest question is: is your repository thread safe? If not: one per request.
Even if it is, though: if your repository itself keeps something like a LINQ-to-SQL DataContext (that you synchronize somehow), then you have big problems if you keep it long-term, in particular with the identity manager. You'll quickly use a lot of memory and get stale results. Far from ideal.
With a single repository instance, you will probably also end up with a lot of blocking trying to get thread safety. This can reduce throughput. Conversely, the database itself has good ways of achieving granular locks - which is particularly useful when you consider that often, concurrent requests will be looking at separate tables etc - so no blocking at the database layer. This would be very hard to do just at the repository layer - so you'd probably have to synchronize the entire "fetch" - very bad.
IMO, one per request is fine in most cases. If you want to cache data, do it separately - i.e. not directly on the repository instance.
I think you're misunderstanding what's happening with the ContextLifetimeManager. By passing the manager into the RegisterType() call you're telling Unity to set the caching scope for your repository instance to HttpContext.
It is actually incorrect to say:
It uses a technique that creates only 1 repository instance for all requests in the Asp.net Mvc website
There is no repository singleton. Unity is creating one for each request. It sounds like this is actually your desired behavior.
When the manager's scope is set to HttpContext the container looks to HttpContext for an existing instance of the requested type (in this case, your repository). Since the HttpContext is fresh on each request, the container will not have this instance, thus a new one will be created.
When you ask:
Is there any benefit to using a single repository instance compared with creating a new repository instance for every request?
No.
As far as transaction problems go: threading will definitely be an issue. The CarRepository appears to be using Linq2Sql or Linq2Entities. Its ctor requires an active DataContext. DataContext is NOT thread safe. If the DataContext is being stored at a scope higher than the current request, there will be problems.
Using the ContextLifetimeManager, the lifetime of a repository is limited to one request. This means that every request, each repository is instantiated (if needed) and destroyed once a response has been sent to the client.
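For reference, such a lifetime manager is typically implemented by parking the instance in HttpContext.Items; a hedged sketch, assuming the older Unity LifetimeManager base class with parameterless GetValue/SetValue/RemoveValue (the actual CarTrackr implementation may differ):
using System.Web;
using Microsoft.Practices.Unity;

// The instance lives in HttpContext.Items, so it exists for exactly one request.
public class ContextLifetimeManager<T> : LifetimeManager
{
    private readonly string _key = typeof(T).AssemblyQualifiedName;

    public override object GetValue()
    {
        return HttpContext.Current.Items[_key];
    }

    public override void SetValue(object newValue)
    {
        HttpContext.Current.Items[_key] = newValue;
    }

    public override void RemoveValue()
    {
        HttpContext.Current.Items.Remove(_key);
    }
}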
