We have a SharePoint "remote web" application that will manage data for multiple tenant databases (and thus, ultimately, multiple tenant database connections). In essence, each operation will deal with 2 databases.
The first is our tenancy database, where we store information that is specific to each tenant. This includes the SharePoint OAuth client ID and secret, as well as information about how to connect to that tenant's own database, which is the second database. This means that connecting to the first database is required before we can connect to the second.
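For reference, a tenancy record looks roughly like this (simplified; only the Authority and ConnectionString properties appear in the pseudo code further down, the rest is illustrative):

// Simplified tenancy record stored in the tenancy (first) database.
public class Tenancy
{
    public int Id { get; set; }
    public string Authority { get; set; }        // URI authority identifying the tenant
    public string ClientId { get; set; }         // SharePoint OAuth client ID
    public string ClientSecret { get; set; }     // SharePoint OAuth client secret
    public string ConnectionString { get; set; } // how to reach the tenant-specific (second) database
}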
I believe I know how to do this using Simple Injector for HTTP requests. I could register the first connection type (whether that be an IDbConnection wrapper using ADO.NET or a TenancyDbContext from Entity Framework) with a per-web-request lifestyle.
I could then register an abstract factory to resolve the connections to the tenant-specific databases. This factory would depend on the first database type, as well as the Simple Injector Container. Queries & commands that need to access the tenant database will depend on this abstract factory and use it to obtain the connection to the tenant database by passing an argument to a factory method.
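Roughly, I am picturing an abstraction like this (the names are placeholders):

// Placeholder abstraction: resolves the connection to a tenant-specific
// database from an argument identifying the tenant.
public interface ITenantConnectionFactory
{
    TenantSpecificDbContext CreateFor(Uri tenantUri);
}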
My question mainly has to do with how to handle this in the context of an operation that may or may not have a non-null HttpContext.Current. When a SharePoint app is installed, we are sometimes running a WCF .svc service to perform certain operations. When SharePoint invokes this, sometimes HttpContext is null. I need a solution that will work in both cases, for both database connections, and that will make sure the connections are disposed when they are no longer needed.
I have some older example code that uses the LifetimeScope, but I see now that there is an Execution Context Scoping package available for Simple Injector on NuGet. I am wondering if I should use that to create hybrid scoping for these 2 database connections (with / without HTTP context), and if so, how is it different from lifetime scoping using Container.GetCurrentLifetimeScope and Container.BeginLifetimeScope?
Update
I read up on the execution scope lifestyle, and ended up with the following 3-way hybrid:
var hybridDataAccessLifestyle = Lifestyle.CreateHybrid( // create a hybrid lifestyle
    lifestyleSelector: () => HttpContext.Current != null, // when the object is needed by a web request,
    trueLifestyle: new WebRequestLifestyle(), // create one instance for all code invoked by the web request
    falseLifestyle: Lifestyle.CreateHybrid( // otherwise, create another hybrid lifestyle
        lifestyleSelector: () => OperationContext.Current != null, // when the object is needed by a WCF op,
        trueLifestyle: new WcfOperationLifestyle(), // create one instance for all code invoked by the op
        falseLifestyle: new ExecutionContextScopeLifestyle()) // in all other cases, create per execution scope
);
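The registrations would then look something like this (a sketch; the connection container is the type from the pseudo code below):

// Register both database dependencies with the 3-way hybrid lifestyle.
container.Register<AllTenantsDbContext>(hybridDataAccessLifestyle);
container.Register<IDatabaseConnectionContainer, DatabaseConnectionContainerImpl>(
    hybridDataAccessLifestyle);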
However, my question really has to do with how to create a dependency that will get its connection string sometime after the root is already composed. Here is some pseudo code for the idea I have about how to implement this:
public class DatabaseConnectionContainerImpl : IDatabaseConnectionContainer, IDisposable
{
    private readonly AllTenantsDbContext _allTenantsDbContext;
    private TenantSpecificDbContext _tenantSpecificDbContext;
    private Uri _tenantUri = null;

    public DatabaseConnectionContainerImpl(AllTenantsDbContext allTenantsDbContext)
    {
        _allTenantsDbContext = allTenantsDbContext;
    }

    public TenantSpecificDbContext GetInstance(Uri tenantUri)
    {
        if (tenantUri == null) throw new ArgumentNullException("tenantUri");
        if (_tenantUri != null && _tenantUri.Authority != tenantUri.Authority)
            throw new InvalidOperationException(
                "You can only connect to one tenant database within this scope.");
        if (_tenantSpecificDbContext == null)
        {
            var tenancy = _allTenantsDbContext.Set<Tenancy>()
                .SingleOrDefault(x => x.Authority == tenantUri.Authority);
            if (tenancy == null)
                throw new InvalidOperationException(string.Format(
                    "Tenant with URI Authority {0} does not exist.", tenantUri.Authority));
            _tenantSpecificDbContext = new TenantSpecificDbContext(tenancy.ConnectionString);
            _tenantUri = tenantUri;
        }
        return _tenantSpecificDbContext;
    }

    void IDisposable.Dispose()
    {
        if (_tenantSpecificDbContext != null) _tenantSpecificDbContext.Dispose();
    }
}
The bottom line is that there is a runtime Uri variable that will be used to determine what the connection string will be to the TenantSpecificDbContext instance. This Uri variable is passed into all WCF operations and HTTP web requests. Since this variable is not known until runtime after the root is composed, I don't think there is any way to inject it into the constructor.
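So consuming code would look something like this (a sketch; the handler and the Order entity are made up for illustration):

// Hypothetical consumer: the tenant Uri arrives as a runtime argument and
// the container resolves the tenant-specific context on demand.
public class GetOrdersQueryHandler
{
    private readonly IDatabaseConnectionContainer _connectionContainer;

    public GetOrdersQueryHandler(IDatabaseConnectionContainer connectionContainer)
    {
        _connectionContainer = connectionContainer;
    }

    public List<Order> Handle(Uri tenantUri)
    {
        TenantSpecificDbContext db = _connectionContainer.GetInstance(tenantUri);
        return db.Set<Order>().ToList();
    }
}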
Any better ideas than the one above, or will the one above be problematic?
Since you want to run operations in two different contexts (one with the availability of a web request, and one without) within the same AppDomain, you need to use a hybrid lifestyle. Hybrid lifestyles switch automatically from one lifestyle to the other. The example given in the Simple Injector documentation is the following:
ScopedLifestyle hybridLifestyle = Lifestyle.CreateHybrid(
    lifestyleSelector: () => container.GetCurrentLifetimeScope() != null,
    trueLifestyle: new LifetimeScopeLifestyle(),
    falseLifestyle: new WebRequestLifestyle());

// The created lifestyle can be reused for many registrations.
container.Register<IUserRepository, SqlUserRepository>(hybridLifestyle);
container.Register<ICustomerRepository, SqlCustomerRepository>(hybridLifestyle);
Using this custom hybrid lifestyle, instances are stored for the duration of an active lifetime scope, but we fall back to caching instances per web request in case there is no active lifetime scope. If there is neither an active lifetime scope nor a web request, an exception will be thrown.
With Simple Injector, a scope for a web request will implicitly be created for you under the covers. For the lifetime scope, however, this is not possible, which means you have to begin such a scope yourself explicitly (as shown here). This will be trivial for you since you use command handlers.
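Beginning such a scope is a one-liner, for instance (a sketch assuming the SimpleInjector.Extensions.LifetimeScoping package; the handler and command types are just examples):

// Wrap the operation in an explicit lifetime scope; scoped instances are
// disposed when the using block disposes the scope.
using (container.BeginLifetimeScope())
{
    var handler = container.GetInstance<ICommandHandler<SyncTenantCommand>>();
    handler.Handle(command);
}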
Now your question is about the difference between the lifetime scope and the execution context scope. The difference between the two is that a lifetime scope is thread-specific: it can't flow over asynchronous operations that might jump from thread to thread, because it uses a ThreadLocal under the covers.
The execution context scope, however, can be used when you use async/await and return Task<T> from your methods. In this case the scope can be disposed on a different thread, since it stores all cached instances in the CallContext class.
In most cases you will be able to use the execution context scope in places where you would use the lifetime scope, but certainly not the other way around. If your code doesn't flow asynchronously though, the lifetime scope gives better performance (although the difference from the execution context scope is probably not significant).
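With the execution context scope the pattern is the same, but the scope flows across await points (a sketch assuming the SimpleInjector.Extensions.ExecutionContextScoping package; again, the handler and command types are examples):

// The execution context scope is stored in the logical CallContext, so it
// survives awaits and can be disposed on a different thread.
using (container.BeginExecutionContextScope())
{
    var handler = container.GetInstance<IAsyncCommandHandler<SyncTenantCommand>>();
    await handler.HandleAsync(command);
}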
Related
I have the following test (which is probably more of a functional test than an integration test, but...):
@Integration(applicationClass = Application)
@Rollback
class ConventionControllerIntegrationSpec extends Specification {

    RestBuilder rest = new RestBuilder()
    String url

    def setup() {
        url = "http://localhost:${serverPort}/api/admin/organizations/${Organization.first().id}/conventions"
    }

    def cleanup() {
    }

    void "test update convention"() {
        given:
        Convention convention = Convention.first()

        when:
        RestResponse response = rest.put("${url}/${convention.id}") {
            contentType "application/json"
            json {
                name = "New Name"
            }
        }

        then:
        response.status == HttpStatus.OK.value()
        Convention.findByName("New Name").id == convention.id
        Convention.findByName("New Name").name == "New Name"
    }
}
The data is being loaded via BootStrap (which admittedly might be the issue), but the problem is in the then block: it finds the Convention by the new name and the id matches, yet the assertion on the name field fails because the instance still has the old name.
While reading the documentation on testing, I think the problem lies in which session the data gets created in. Since @Rollback uses a session that is separate from BootStrap's, the two sets of data never see each other. For example, if I load the data via the test's given block, then that data doesn't exist when my controller is called by the RestBuilder.
It is entirely possible that I shouldn't be doing this kind of test this way, so suggestions are appreciated.
This is definitely a functional test - you're making HTTP requests against your server, not making method calls and then making assertions about the effects of those calls.
You can't get automatic rollbacks with functional tests because the calls are made in one thread and handled in another, whether or not the test runs in the same JVM as the server. The code in BootStrap runs once before all of the tests and gets committed (either because you made the changes in a transaction or via autocommit). The 'client' test code then runs in its own new Hibernate session, inside a transaction that the testing infrastructure starts (and rolls back at the end of the test method). The server-side code runs in its own Hibernate session (because of OSIV) and, depending on whether your controller(s) and service(s) are transactional, may run in a different transaction or may just autocommit.
One thing that is probably not a factor here, but should always be considered with Hibernate persistence tests, is session caching. The Hibernate session is the first-level cache, and a dynamic finder like findByName will probably trigger a flush, which you want, but should be explicit about in tests. A flush won't clear any cached instances though, so you run the risk of false positives with code like this: you might not actually be loading a new instance, because Hibernate might return a cached one. It definitely will when calling get(). I always add a flushAndClear() method to integration and functional base classes, and I'd put a call to it after the put call in the when block to be sure everything is flushed from Hibernate to the database (not committed, just flushed) and the session is cleared to force real reloading. Just pick a random domain class and use withSession, e.g.
protected void flushAndClear() {
    Foo.withSession { session ->
        session.flush()
        session.clear()
    }
}
Since the put happens in one thread/session/tx and the finders run in their own, this shouldn't have an effect here, but it should be the pattern used in general.
What's the difference between these two controller actions:
@Transactional
def save(SomeDomain someDomain) {
    someDomain.someProperty = firstService.createAndSaveSomething(params) // transactional
    someDomain.anotherProperty = secondService.createAndSaveSomething(params) // transactional
    someDomain.save(flush: true)
}
and
def save(SomeDomain someDomain) {
    combinedService.createAndSave(someDomain, params) // transactional, encapsulating first and second service calls
}
My purpose is to roll back the whole save() action if a transaction fails, but I'm not sure which one I should use.
You can use both approaches.
Your listing #1 will roll back the controller transaction when firstService or secondService throws an exception.
Listing #2 (I expect the createAndSave method of combinedService to be annotated with @Transactional) will roll back the transaction if createAndSave throws an exception. The big plus of this approach is that the service method is theoretically reusable in other controllers.
One of the key points about @Transactional is that there are two separate concepts to consider, each with its own scope and life cycle:
the persistence context
the database transaction
The transactional annotation itself defines the scope of a single database transaction. The database transaction happens inside the scope of a persistence context. Your code:
@Transactional
def save(SomeDomain someDomain) {
    someDomain.someProperty = firstService.createAndSaveSomething(params) // transactional
    someDomain.anotherProperty = secondService.createAndSaveSomething(params) // transactional
    someDomain.save(flush: true)
}
In JPA, the persistence context is the EntityManager, implemented internally using a Hibernate Session (when using Hibernate as the persistence provider). Your code:
def save(SomeDomain someDomain) {
    combinedService.createAndSave(someDomain, params) // transactional, encapsulating first and second service calls
}
Note: the persistence context is just a synchronizer object that tracks the state of a limited set of Java objects and makes sure that changes to those objects are eventually persisted back into the database.
Conclusion: the declarative transaction management mechanism (@Transactional) is very powerful, but it can easily be misused or misconfigured.
Understanding how it works internally is helpful when troubleshooting situations where the mechanism is not working at all or is working in an unexpected way.
I re-posted this question as I think it is a bit vague. New Post
I am currently using a Windows Service that runs on a 2-minute timer. I am using EF Code First with a repository pattern for data access. I am using Ninject to inject my dependencies. I have the following bindings in my NinjectDependencyResolver class:
ConnectionStringSettings connectionStringSettings = ConfigurationManager.ConnectionStrings["Database"];
Bind<IDatabaseFactory>().To<DatabaseFactory>()
.InSingletonScope()
.WithConstructorArgument("connectionString", connectionStringSettings.Name);
Bind<IUnitOfWork>().To<UnitOfWork>().InSingletonScope();
Bind<IMyRepository>().To<MyRepository>().InSingletonScope();
When my service runs every 2 minutes I do something similar to this:
foreach (var row in rows)
{
    var existing = myRepository.GetById(row.Id);
    if (existing == null)
    {
        existing = new Row();
        myRepository.Add(existing);
        unitOfWork.Commit();
    }
}
I am starting to see an error in my logs that says:
The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.
Is it correct to use InSingletonScope when using Ninject in a Windows Service? I believe I tried using different scopes like InTransientScope, but I could only get InSingletonScope to work with data access. Does the error message have anything to do with scope, or is it unrelated?
Assuming that the service is not the only process that operates on the database, you shouldn't use a singleton. What happens in this case is that you are reusing a DbContext that has cached entities which are out of date.
The better way is to treat each timer execution of the service like a web/WCF request and create a new job processor for it.
var processor = factory.CreateRowsProcessor();
processor.ProcessRows(rows);

public class RowsProcessor
{
    public RowsProcessor(UoW uow, ....)
    {
        ...
    }

    public void ProcessRows(Row[] rows)
    {
        foreach (var row in rows)
        {
            var existing = myRepository.GetById(row.Id);
            if (existing == null)
            {
                existing = new Row();
                myRepository.Add(existing);
                unitOfWork.Commit();
            }
        }
    }
}
Depending on the problem, it might even be better to put the loop outside and have a new processor for each single row.
Read http://www.planetgeek.ch/2011/12/31/ninject-extensions-factory-introduction/ for more information about factories. Also have a look at the InCallScope of the named scope extension if you need to inject the UoW into multiple classes. http://www.planetgeek.ch/2010/12/08/how-to-use-the-additional-ninject-scopes-of-namedscope/
InSingletonScope will create a singleton context, i.e. one context for the whole lifetime of your service. That is a very bad solution. Because the context holds all objects from all previous timer events, its memory consumption grows, and you can get errors like the one you are receiving at the moment (the error could be unrelated to your singleton context, but most likely it is not). The exception says that you have two different objects with the same key identifier tracked by the context, which is not allowed.
Instead of using a singleton UoW, repository and context, use a singleton factory and request fresh instances from it on each timer event. Dispose the context at the end of the timer event processing.
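A sketch of what that can look like with Ninject.Extensions.Factory (the factory interface name is made up; Ninject generates its implementation):

// Illustrative factory interface; ToFactory() lets Ninject generate the
// implementation, so every CreateRowsProcessor() call resolves a fresh
// object graph instead of reusing singletons.
public interface IRowsProcessorFactory
{
    RowsProcessor CreateRowsProcessor();
}

// Bindings: drop InSingletonScope for the context-bound services.
Bind<IDatabaseFactory>().To<DatabaseFactory>()
    .WithConstructorArgument("connectionString", connectionStringSettings.Name);
Bind<IUnitOfWork>().To<UnitOfWork>().InCallScope(); // one UoW per resolution (named scope extension)
Bind<IMyRepository>().To<MyRepository>();
Bind<IRowsProcessorFactory>().ToFactory();          // Ninject.Extensions.Factory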
We are trying to develop our own EF provider for our legacy APIs. We managed to get "GET/POST" operations working successfully.
However, for "PUT/MERGE" operations, the method "CreateDbCommandDefinition" (of the DbProviderServices implementation) fires twice: once with a "DbQueryCommandTree" and once with a "DbUpdateCommandTree".
I understand that it needs to fetch the entity prior to updating it (for change tracking, I guess). In our case, I don't need the entity information to be fetched prior to the update; I simply want to call our legacy APIs with the entity sent for update. How can we strictly ask it not to do the work of the "DbQueryCommandTree" (and do only the work of the "DbUpdateCommandTree") when working with "PUT/MERGE" operations?
The client code looks something like the one below:
public void CustomerUpdateTest()
{
    try
    {
        Ctxt.MergeOption = MergeOption.NoTracking;
        var oNewCus = new Customer()
        {
            MasterCustomerId = "1001",
            SubCustomerId = "0",
            FirstName = "abc",
            LastName = "123"
        };
        Ctxt.AttachTo("Customers", oNewCus);
        Ctxt.UpdateObject(oNewCus);
        //Ctxt.SaveChanges();
        Ctxt.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);
    }
    catch (Exception ex)
    {
        Assert.Fail(ex.Message);
    }
}
You will have to write your own IDataServiceUpdateProvider to make this happen. For EF, the built-in EF update provider does 2 queries: one to get the entity which needs to be modified and one for the actual modification. We are planning to make this provider public in our next release, so folks can derive from it and just override one or more methods. But for now, you will have to implement the interface yourself.
For PUT/MERGE requests, WCF Data Services calls IDataServiceUpdateProvider.GetResource to get the entity to update. In your implementation of this method, you can return a token that represents the object that needs to be modified (you will have to visit the expression tree that gets passed to this method to find out the entity set and the key value of the entity in question).
In SaveChanges, you can push the update based on the token. That way you can avoid one round trip to the database.
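A very rough sketch of the idea (the token type and all names are made up, and most members are omitted; a real class must implement every member of IDataServiceUpdateProvider, which includes all of IUpdatable):

// Rough sketch - not a complete implementation.
public class LegacyApiUpdateProvider // : IDataServiceUpdateProvider
{
    private readonly List<ResourceToken> _pendingUpdates = new List<ResourceToken>();

    // Called for PUT/MERGE to locate the resource. Instead of executing the
    // query, visit its expression tree and return a lightweight token
    // describing the entity set and key.
    public object GetResource(IQueryable query, string fullTypeName)
    {
        ResourceToken token = ResourceToken.FromExpression(query.Expression);
        _pendingUpdates.Add(token);
        return token;
    }

    // Property writes are recorded on the token rather than on a real entity.
    public void SetValue(object targetResource, string propertyName, object propertyValue)
    {
        ((ResourceToken)targetResource).Values[propertyName] = propertyValue;
    }

    // Push the recorded changes straight to the legacy API - no read first.
    public void SaveChanges()
    {
        foreach (ResourceToken token in _pendingUpdates)
        {
            // e.g. legacyApi.Update(token.EntitySetName, token.Key, token.Values);
        }
        _pendingUpdates.Clear();
    }
}

// Made-up token type: just enough to identify and update the entity.
public class ResourceToken
{
    public ResourceToken()
    {
        Values = new Dictionary<string, object>();
    }

    public string EntitySetName { get; set; }
    public object Key { get; set; }
    public IDictionary<string, object> Values { get; private set; }

    public static ResourceToken FromExpression(Expression expression)
    {
        // Walk the expression tree to extract the set name and key value
        // (elided - this is the part you have to write yourself).
        throw new NotImplementedException();
    }
}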
Hope this helps.
I'm writing an app where we may later switch out the repository (currently Entity Framework) to use either Amazon or Windows Azure storage.
I have a service method that disables a user by ID; all it does is set a property to true and set the DisabledDate. Should I call the repository, get that user, set the properties in the service, then call the save function in the repository? If I do this, that's 2 database calls; should I worry about that? And what if the user is updating their profile at the same time the admin calls the disable method, and the user's update then calls the save method in the repository with an entity that still holds false for the IsDisabled property? Wouldn't that set the user back to enabled right after the disable method ran?
What is the best way to solve this problem? How do I update data in a highly concurrent system?
CustomerRepository:
// Would be called from a more specific method in the Service Layer - e.g. DisableUser
public void Update(Customer c)
{
    var stub = new Customer { Id = c.Id };  // create "stub"
    ctx.Customers.Attach(stub);             // attach "stub" to graph
    ctx.ApplyCurrentValues("Customers", c); // override scalar values of "stub"
    ctx.SaveChanges();                      // save changes - 1 call to DB. leave this out if you're using UoW
}
That should serve as a general-purpose "UPDATE" method in your repository. It should only be used when the entity exists.
That is just an example - in reality you should/could be using generics, checking for the existence of the entity in the graph before attaching, etc.
But that will get you on the right track.
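For instance, a generic version with an existence check might look something like this (a sketch, assuming an ObjectContext field named ctx and entities keyed by a single Id property):

// Illustrative key interface so the generic method can build a stub.
public interface IEntity
{
    int Id { get; set; }
}

// Generic "stub" update: attach a stub only if the entity isn't already
// tracked, then overwrite its scalar values and save.
public void Update<TEntity>(TEntity entity, string entitySetName)
    where TEntity : class, IEntity, new()
{
    var stub = new TEntity { Id = entity.Id };
    EntityKey key = ctx.CreateEntityKey(entitySetName, stub);

    ObjectStateEntry entry;
    if (!ctx.ObjectStateManager.TryGetObjectStateEntry(key, out entry))
    {
        ctx.AttachTo(entitySetName, stub); // not tracked yet - attach the stub
    }

    ctx.ApplyCurrentValues(entitySetName, entity); // override scalar values
    ctx.SaveChanges();                             // leave this out if you're using UoW
}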
As long as you know the id of the entity you want to save you should be able to do it by attaching the entity to the context first like so:
var c = new Customer();
c.Id = someId;
context.AttachTo("Customer", c);
c.PropertyToChange = "propertyValue";
context.SaveChanges();
Whether this approach is recommended or not, I'm not so sure as I'm not overly familiar with EF, but this will allow you to issue the update command without having to first load the entity.