I re-posted this question because I think the original was a bit vague.
I am currently using a Windows service that runs on a 2-minute timer. I am using EF Code First with a repository pattern for data access, and Ninject to inject my dependencies. I have the following bindings in my NinjectDependencyResolver class:
ConnectionStringSettings connectionStringSettings =
    ConfigurationManager.ConnectionStrings["Database"];

Bind<IDatabaseFactory>().To<DatabaseFactory>()
    .InSingletonScope()
    .WithConstructorArgument("connectionString", connectionStringSettings.Name);
Bind<IUnitOfWork>().To<UnitOfWork>().InSingletonScope();
Bind<IMyRepository>().To<MyRepository>().InSingletonScope();
When my service runs every 2 minutes, I do something similar to this:
foreach (var row in rows)
{
    var existing = myRepository.GetById(row.Id);
    if (existing == null)
    {
        existing = new Row();
        myRepository.Add(existing);
        unitOfWork.Commit();
    }
}
I am starting to see an error in my logs that says:
The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.
Is it correct to use InSingletonScope when using Ninject in a Windows service? I believe I tried different scopes, like InTransientScope, but I could only get InSingletonScope to work with data access. Does the error message have anything to do with scope, or is it unrelated?
Assuming the service is not the only process that operates on the database, you shouldn't use singleton scope. What happens in that case is that you are reusing a DbContext that has cached entities which are out of date.
The better way is to treat each timer execution of the service like a web/WCF request and create a new job processor for each one:
var processor = factory.CreateRowsProcessor();
processor.ProcessRows(rows);
public class RowsProcessor
{
    private readonly IUnitOfWork unitOfWork;
    private readonly IMyRepository myRepository;

    public RowsProcessor(IUnitOfWork unitOfWork, IMyRepository myRepository)
    {
        this.unitOfWork = unitOfWork;
        this.myRepository = myRepository;
    }

    public void ProcessRows(Row[] rows)
    {
        foreach (var row in rows)
        {
            var existing = myRepository.GetById(row.Id);
            if (existing == null)
            {
                existing = new Row();
                myRepository.Add(existing);
                unitOfWork.Commit();
            }
        }
    }
}
Depending on the problem, it might be even better to put the loop outside and create a new processor for each single row.
Read http://www.planetgeek.ch/2011/12/31/ninject-extensions-factory-introduction/ for more information about factories. Also have a look at InCallScope from the named-scope extension if you need to inject the UoW into multiple classes: http://www.planetgeek.ch/2010/12/08/how-to-use-the-additional-ninject-scopes-of-namedscope/
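A minimal sketch of the factory approach with those extensions (the factory interface name is illustrative; ToFactory comes from Ninject.Extensions.Factory and InCallScope from the named-scope extension):

// Ninject generates an implementation of this factory interface.
public interface IRowsProcessorFactory
{
    RowsProcessor CreateRowsProcessor();
}

// Bindings: the factory is resolved once, but each CreateRowsProcessor
// call builds a fresh object graph with its own unit of work.
Bind<IRowsProcessorFactory>().ToFactory();
Bind<IUnitOfWork>().To<UnitOfWork>().InCallScope();
Bind<IMyRepository>().To<MyRepository>().InCallScope();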
InSingletonScope will create a singleton context: one context for the whole lifetime of your service. That is a very bad solution. Because the context holds all objects from all previous timer events, its memory consumption grows, and you open yourself to errors like the one you are receiving at the moment (the error really can be unrelated to your singleton context, but most likely it is not). The exception says that you have two different objects with the same key identifier tracked by the context; that is not allowed.
Instead of using a singleton UoW, repository, and context, use a singleton factory and request fresh instances from the factory in each timer event. Dispose the context at the end of the timer-event processing.
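A rough sketch of what that could look like, reusing the abstractions from the question (the factory and the constructor signatures below are assumptions):

public interface IUnitOfWorkFactory
{
    IUnitOfWork Create();
}

// The factory itself can safely be a singleton; everything it hands out is fresh.
public class UnitOfWorkFactory : IUnitOfWorkFactory
{
    private readonly string connectionString;

    public UnitOfWorkFactory(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public IUnitOfWork Create()
    {
        // A fresh database factory (and thus a fresh context) per call.
        return new UnitOfWork(new DatabaseFactory(connectionString));
    }
}

// In the timer callback (assuming IUnitOfWork is IDisposable):
using (var unitOfWork = factory.Create())
{
    var repository = new MyRepository(unitOfWork);
    // ... process rows and Commit() ...
} // disposing here throws away the change tracker and its stale entities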
Related
I have a SharePoint "remote web" application that will be managing data for multiple tenant databases (and thus, ultimately, multiple tenant database connections). In essence, each operation will deal with two databases.
The first is our tenancy database, where we store information that is specific for each tenant. This can be the SharePoint OAuth Client ID and secret, as well as information about how to connect to the tenant's specific database, which is the second database. This means that connecting to the first database will be required before we can connect to the second database.
I believe I know how to do this using Simple Injector for HTTP requests. I could register the first connection type (whether that be an IDbConnection wrapper using ADO.NET or a TenancyDbContext from Entity Framework) with a per-web-request lifetime.
I could then register an abstract factory to resolve the connections to the tenant-specific databases. This factory would depend on the first database type, as well as the Simple Injector Container. Queries & commands that need to access the tenant database will depend on this abstract factory and use it to obtain the connection to the tenant database by passing an argument to a factory method.
My question mainly has to do with how to handle this in the context of an operation that may or may not have a non-null HttpContext.Current. When a SharePoint app is installed, we are sometimes running a WCF .svc service to perform certain operations. When SharePoint invokes this, sometimes HttpContext is null. I need a solution that will work in both cases, for both database connections, and that will make sure the connections are disposed when they are no longer needed.
I have some older example code that uses the LifetimeScope, but I see now that there is an Execution Context Scoping package available for Simple Injector on NuGet. I am wondering if I should use that to create hybrid scoping for these 2 database connections (with / without HTTP context), and if so, how is it different from lifetime scoping using Container.GetCurrentLifetimeScope and Container.BeginLifetimeScope?
Update
I read up on the execution scope lifestyle, and ended up with the following 3-way hybrid:
var hybridDataAccessLifestyle = Lifestyle.CreateHybrid( // create a hybrid lifestyle
lifestyleSelector: () => HttpContext.Current != null, // when the object is needed by a web request
trueLifestyle: new WebRequestLifestyle(), // create one instance for all code invoked by the web request
falseLifestyle: Lifestyle.CreateHybrid( // otherwise, create another hybrid lifestyle
lifestyleSelector: () => OperationContext.Current != null, // when the object is needed by a WCF op,
trueLifestyle: new WcfOperationLifestyle(), // create one instance for all code invoked by the op
falseLifestyle: new ExecutionContextScopeLifestyle()) // in all other cases, create per execution scope
);
However, my question really has to do with how to create a dependency that will get its connection string sometime after the root is already composed. Here is some pseudo-code I came up with that implements an idea I have for how to do this:
public class DatabaseConnectionContainerImpl : IDatabaseConnectionContainer, IDisposable
{
    private readonly AllTenantsDbContext _allTenantsDbContext;
    private TenantSpecificDbContext _tenantSpecificDbContext;
    private Uri _tenantUri = null;

    public DatabaseConnectionContainerImpl(AllTenantsDbContext allTenantsDbContext)
    {
        _allTenantsDbContext = allTenantsDbContext;
    }

    public TenantSpecificDbContext GetInstance(Uri tenantUri)
    {
        if (tenantUri == null) throw new ArgumentNullException("tenantUri");
        if (_tenantUri != null && _tenantUri.Authority != tenantUri.Authority)
            throw new InvalidOperationException(
                "You can only connect to one tenant database within this scope.");
        if (_tenantSpecificDbContext == null)
        {
            var tenancy = _allTenantsDbContext.Set<Tenancy>()
                .SingleOrDefault(x => x.Authority == tenantUri.Authority);
            if (tenancy == null)
                throw new InvalidOperationException(string.Format(
                    "Tenant with URI Authority {0} does not exist.", tenantUri.Authority));
            _tenantSpecificDbContext = new TenantSpecificDbContext(tenancy.ConnectionString);
            _tenantUri = tenantUri;
        }
        return _tenantSpecificDbContext;
    }

    void IDisposable.Dispose()
    {
        if (_tenantSpecificDbContext != null) _tenantSpecificDbContext.Dispose();
    }
}
The bottom line is that there is a runtime Uri variable that will be used to determine what the connection string will be to the TenantSpecificDbContext instance. This Uri variable is passed into all WCF operations and HTTP web requests. Since this variable is not known until runtime after the root is composed, I don't think there is any way to inject it into the constructor.
Any better ideas than the one above, or will the one above be problematic?
Since you want to run operations in two different contexts (one with the availability of a web request, and one without) within the same AppDomain, you need to use a hybrid lifestyle. Hybrid lifestyles switch automatically from one lifestyle to the other. The example given in the Simple Injector documentation is the following:
ScopedLifestyle hybridLifestyle = Lifestyle.CreateHybrid(
    lifestyleSelector: () => container.GetCurrentLifetimeScope() != null,
    trueLifestyle: new LifetimeScopeLifestyle(),
    falseLifestyle: new WebRequestLifestyle());

// The created lifestyle can be reused for many registrations.
container.Register<IUserRepository, SqlUserRepository>(hybridLifestyle);
container.Register<ICustomerRepository, SqlCustomerRepository>(hybridLifestyle);
Using this custom hybrid lifestyle, instances are stored for the duration of an active lifetime scope, but we fall back to caching instances per web request in case there is no active lifetime scope. If there is neither an active lifetime scope nor a web request, an exception will be thrown.
With Simple Injector, a scope for a web request is implicitly created for you under the covers. For the lifetime scope, however, this is not possible. This means that you have to begin such a scope yourself explicitly (as shown here). This will be trivial for you, since you use command handlers.
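For illustration, explicitly beginning such a scope around a command handler might look like this (a minimal sketch; the handler and command names are hypothetical, and BeginLifetimeScope comes from the Lifetime Scoping package):

using (container.BeginLifetimeScope())
{
    var handler = container.GetInstance<ICommandHandler<DisableUserCommand>>();
    handler.Handle(new DisableUserCommand { UserId = userId });
} // scoped instances (such as a DbContext) are disposed here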
Now, your question is about the difference between the lifetime scope and the execution context scope. The difference between the two is that a lifetime scope is thread-specific: it can't flow over asynchronous operations that might jump from thread to thread. It uses a ThreadLocal under the covers.
The execution context scope, however, can be used when you use async/await and return Task<T> from your methods. In this case the scope can be disposed on a different thread, since it stores all cached instances in the CallContext class.
In most cases you will be able to use the execution context scope in places where you would use the lifetime scope, but certainly not the other way around. If your code doesn't flow asynchronously, though, the lifetime scope gives better performance (although probably not a significant difference compared to the execution context scope).
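The asynchronous counterpart of the earlier sketch, using an execution context scope, might look like this (again a sketch; BeginExecutionContextScope comes from the Execution Context Scoping package, and the handler names are hypothetical):

using (container.BeginExecutionContextScope())
{
    var handler = container.GetInstance<IAsyncCommandHandler<DisableUserCommand>>();
    await handler.HandleAsync(new DisableUserCommand { UserId = userId });
} // the scope may be disposed on a different thread after the await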
It is an MVC application with Entity Framework Code First as the ORM and MEF as the IoC container.
If I mark the DbContext with PartCreationPolicy.Shared, there is an error saying the object already exists in the container every time I try to perform an edit.
But what if I simply mark the DbContext with PartCreationPolicy.NonShared so it gets created for every request?
Is there a terrible performance impact for that?
Update
Here is the code for save:
Provider IRepository<Provider>.Put(Provider item)
{
    if (item.Id == Guid.Empty)
    {
        item.Id = Guid.NewGuid();
        this.Providers.Add(item);
    }
    else this.Entry<Provider>(item).State = EntityState.Modified;
    return item;
}
And this is the error when using Shared:
An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key.
You should definitely use PartCreationPolicy.NonShared. Everything you can read about context lifecycle management, whether for LINQ to SQL, Entity Framework, or NHibernate (sessions), agrees on one thing: the context should be short-lived. An easy rule of thumb: use it for one unit of work, which means create a context, do stuff, call SaveChanges once, and dispose. Most of the time this rule works well for me.
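In code, the rule of thumb amounts to this (a minimal sketch; MyDbContext is illustrative):

using (var ctx = new MyDbContext()) // create a context
{
    provider.Id = Guid.NewGuid();
    ctx.Providers.Add(provider);    // do stuff
    ctx.SaveChanges();              // call SaveChanges once
}                                   // dispose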
A shared (or singleton) context is the pattern that actually hurts performance, because the context gets bloated over time: the change tracker needs to track more and more objects, and relationship fixup gets slower. You will also find yourself refreshing (re-loading) entities time and again.
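For completeness, marking the context as non-shared in MEF looks roughly like this (MyDbContext is illustrative):

using System.ComponentModel.Composition;
using System.Data.Entity;

[Export]
[PartCreationPolicy(CreationPolicy.NonShared)] // a fresh instance per import
public class MyDbContext : DbContext
{
    public DbSet<Provider> Providers { get; set; }
}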
We are trying to develop our own EF provider for our legacy APIs. We managed to get the "GET/POST" operations working successfully.
However, for the "PUT/MERGE" operations, the method "CreateDbCommandDefinition" (of the DbProviderServices implementation) fires twice: once with a "DbQueryCommandTree" and again with a "DbUpdateCommandTree".
I understand that it needs to fetch the entity prior to updating it (for change tracking, I guess). In our case, we don't need the entity to be fetched prior to the update; we simply want to call our legacy APIs with the entity sent for update. How can we tell it not to do the work of the "DbQueryCommandTree" (and do only the work of the "DbUpdateCommandTree") for "PUT/MERGE" operations?
The client code looks something like the one below:
public void CustomerUpdateTest()
{
    try
    {
        Ctxt.MergeOption = MergeOption.NoTracking;
        var oNewCus = new Customer()
        {
            MasterCustomerId = "1001",
            SubCustomerId = "0",
            FirstName = "abc",
            LastName = "123"
        };
        Ctxt.AttachTo("Customers", oNewCus);
        Ctxt.UpdateObject(oNewCus);
        //Ctxt.SaveChanges();
        Ctxt.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);
    }
    catch (Exception ex)
    {
        Assert.Fail(ex.Message);
    }
}
You will have to write your own IDataServiceUpdateProvider to make this happen. For EF, the built-in EF update provider does two queries: one to get the entity that needs to be modified, and one for the actual modification. We are planning to make this provider public in our next release, so folks can derive from it and just override one or more methods. But for now, you will have to implement the interface yourself.
For PUT/MERGE requests, WCF Data Services calls IDataServiceUpdateProvider.GetResource to get the entity to update. In your implementation of this method, you can return a token that represents the object that needs to be modified (you will have to visit the expression tree that gets passed to this method to find the entity set and the key value of the entity in question).
In SaveChanges, you can push the update based on the token. That way you can avoid one round trip to the database.
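Very roughly, the token approach might be shaped like the sketch below. Only the two members discussed are shown, the helper methods are hypothetical stand-ins, and a real provider must implement all of IDataServiceUpdateProvider/IUpdatable:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical token: just enough information to address the entity,
// so no fetch from the store is required.
public sealed class EntityToken
{
    public string EntitySetName;
    public object KeyValue;
    public Dictionary<string, object> ModifiedValues = new Dictionary<string, object>();
}

public class LegacyApiUpdateProviderSketch
{
    private readonly List<EntityToken> pending = new List<EntityToken>();

    // Counterpart of IDataServiceUpdateProvider.GetResource: instead of
    // executing the query, inspect its expression tree and return a token.
    public object GetResource(IQueryable query, string fullTypeName)
    {
        EntityToken token = ExtractToken(query.Expression);
        pending.Add(token);
        return token;
    }

    // Counterpart of SaveChanges: push updates straight to the legacy API
    // using the token data, skipping the extra fetch round trip.
    public void SaveChanges()
    {
        foreach (EntityToken token in pending)
            CallLegacyUpdateApi(token);
        pending.Clear();
    }

    // Expression-tree visitor elided; hypothetical helper.
    private static EntityToken ExtractToken(Expression expression)
    {
        throw new NotImplementedException();
    }

    // Hypothetical call into the legacy API.
    private static void CallLegacyUpdateApi(EntityToken token)
    {
        throw new NotImplementedException();
    }
}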
Hope this helps.
This question is very similar to this one. However, the resolutions to that question:
Do not seem to apply, or
Are somewhat suspect, and don't seem like a good approach to resolving the problem.
Basically, I'm iterating over a generic list of objects and inserting them, using MVC 2 and EF 4 with the default code generation.
foreach (Requirement r in requirements)
{
    var car = new CustomerAgreementRequirement();
    car.CustomerAgreementId = viewModel.Agreement.CustomerAgreementId;
    car.RequirementId = r.RequirementId;
    _carRepo.Add(car); // Save new record
}
And the Repository.Add() method:
public class BaseRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private TxRPEntities txDB;
    private ObjectSet<TEntity> _objectSet;

    public void Add(TEntity entity)
    {
        SetUpdateParams(entity);
        _objectSet.AddObject(entity);
        txDB.SaveChanges();
    }
}
I should note that I've been successfully using the Add() method throughout my code for single inserts; this is the first time I've tried to use it to iteratively insert a group of objects.
The error:
System.InvalidOperationException: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.
As stated in the prior question, the EntityKey is set to True, StoreGeneratedPattern = Identity. The actual table being inserted into is a relationship table: it is comprised of an identity field and two foreign-key fields. The error always occurs on the second insert, regardless of whether that specific entity has been inserted before, and I can confirm that the values are always different; there are no key conflicts as far as the database is concerned. My suspicion is that it has something to do with the temporary EntityKey that gets set prior to the actual insert, but I don't know how to confirm that, nor do I know how to resolve it.
My gut feeling is that the solution in the prior question, to set the SaveOptions to None, would not be the best solution. (See prior discussion here)
I've had this issue with my repository using a loop as well, and thought that it might be caused by some weird race-like condition. What I've done is refactor out a UnitOfWork class, so that the repository's Add() method strictly adds the entity, without saving the context. Thus, the repository is only responsible for the collection itself, and every operation on that collection happens in the scope of the unit of work.
The issue there is that, in a loop, you run out of memory damn fast with EF4. So you do need to save the changes periodically; I just don't save after every Add.
public class BaseRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private TxRPEntities txDB;
    private ObjectSet<TEntity> _objectSet;

    public void Add(TEntity entity)
    {
        SetUpdateParams(entity);
        _objectSet.AddObject(entity);
    }

    public void Save()
    {
        txDB.SaveChanges();
    }
}
Then you can do something like
int count = 0;
foreach (Requirement r in requirements)
{
    var car = new CustomerAgreementRequirement();
    car.CustomerAgreementId = viewModel.Agreement.CustomerAgreementId;
    car.RequirementId = r.RequirementId;
    _carRepo.Add(car); // Queue up the new record
    if (++count % 100 == 0) // some number-limiting condition if you have thousands
        _carRepo.Save(); // Save periodically and clear memory
}
_carRepo.Save();
Note: I don't really like this solution, but I hunted around to try to find why things break in a loop when they work elsewhere, and that's the best I came up with.
We have had some odd collision issues if the entity is not added to the context directly after being created (before doing any assignments). The only time I've noticed the issue is when adding objects in a loop.
Try adding the newly created entity to the context, then do the assignments, then save the context. Also, you don't need to save the context each time you add a new entity unless you absolutely need the primary key.
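Applied to the loop from the question, that ordering might look like this (a sketch, using the Add/Save split from the answer above):

foreach (Requirement r in requirements)
{
    var car = new CustomerAgreementRequirement();
    _carRepo.Add(car); // attach to the context right after creation
    car.CustomerAgreementId = viewModel.Agreement.CustomerAgreementId;
    car.RequirementId = r.RequirementId;
}
_carRepo.Save(); // one save at the end, since the generated keys aren't needed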
I'm writing an app in which we may later switch out the repository (currently Entity Framework) to use either Amazon or Windows Azure storage.
I have a service method that disables a user by ID; all it does is set a property to true and set the DisabledDate. Should I call the repository, get that user, set the properties in the service, then call the save method in the repository? If I do this, that's two database calls; should I worry about that? And what if the user is updating their profile at the same time the admin calls the disable method, and the user's save runs in the repository (which at that point still holds false for the IsDisabled property)? Wouldn't that set the user back to enabled if it's called right after the disable?
What is the best way to solve this problem? How do I update data in a high concurrent system?
CustomerRepository:
// Would be called from a more specific method in the service layer, e.g. DisableUser
public void Update(Customer c)
{
    var stub = new Customer { Id = c.Id };  // create "stub"
    ctx.Customers.Attach(stub);             // attach "stub" to graph
    ctx.ApplyCurrentValues("Customers", c); // override scalar values of "stub"
    ctx.SaveChanges();                      // save changes - 1 call to DB; leave this out if you're using UoW
}
That should serve as a general-purpose "UPDATE" method in your repository. It should only be used when the entity already exists.
That is just an example; in reality you should/could be using generics, checking for the existence of the entity in the graph before attaching, etc.
But that will get you on the right track.
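For example, a generic variant with an existence check might look like this (a sketch only; ctx is assumed to be an ObjectContext field, and ChangeObjectState marks the whole entity as modified rather than individual properties):

public void Update<TEntity>(TEntity entity, string entitySetName) where TEntity : class
{
    EntityKey key = ctx.CreateEntityKey(entitySetName, entity);
    ObjectStateEntry entry;

    if (ctx.ObjectStateManager.TryGetObjectStateEntry(key, out entry))
    {
        // Already tracked: overwrite the tracked entity's scalar values.
        ctx.ApplyCurrentValues(entitySetName, entity);
    }
    else
    {
        // Not tracked: attach, then mark the entity as modified so
        // SaveChanges issues an UPDATE for it.
        ctx.AttachTo(entitySetName, entity);
        ctx.ObjectStateManager.ChangeObjectState(entity, EntityState.Modified);
    }

    ctx.SaveChanges();
}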
As long as you know the ID of the entity you want to save, you should be able to do it by attaching the entity to the context first, like so:
var c = new Customer();
c.Id = someId;
context.AttachTo("Customer", c);
c.PropertyToChange = "propertyValue";
context.SaveChanges();
Whether this approach is recommended or not, I'm not so sure, as I'm not overly familiar with EF, but it will allow you to issue the update command without having to first load the entity.