Responsibility of a DCI Context?

The methodful roles contain the actual algorithm, but what should the Context's Execute method do other than invoke one of those methods?
public class SomeContext
{
    // ... Constructor omitted ...
    public void Execute()
    {
        // Is this all?
        someRole.DoStuff(this.anotherRole, this.otherData);
    }
}
It seems very simple, so I'm thinking that the Context should be responsible for, say, database lookups. Wouldn't that simplify the methodful roles?

The main responsibility of a Context is to bind roles to objects. Sometimes one or more of the "execute" methods will be complex, but often they are not; they are there to capture the interaction between objects.
The binding of roles to objects is atomic: it happens at one location in the Context, and for all roles at the same time.
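As a sketch, a Context that binds all its roles atomically in the constructor and exposes a thin Execute method might look like this (the money-transfer names are illustrative, not from the question):

```csharp
// Illustrative DCI-style context; Account, SourceAccount and
// DestinationAccount are hypothetical types for this sketch.
public class MoneyTransferContext
{
    private readonly SourceAccount source;
    private readonly DestinationAccount destination;
    private readonly decimal amount;

    // Atomic role binding: every role is bound here, in one place,
    // before any interaction runs.
    public MoneyTransferContext(Account from, Account to, decimal amount)
    {
        this.source = new SourceAccount(from);
        this.destination = new DestinationAccount(to);
        this.amount = amount;
    }

    // The execute method usually just triggers one methodful role.
    public void Execute() => this.source.TransferTo(this.destination, this.amount);
}
```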

Related

Avoiding all DI antipatterns for types requiring asynchronous initialization

I have a type Connections that requires asynchronous initialization. An instance of this type is consumed by several other types (e.g., Storage), each of which also requires asynchronous initialization (static, not per-instance, and these initializations also depend on Connections). Finally, my logic types (e.g., Logic) consume these storage instances. I'm currently using Simple Injector.
I've tried several different solutions, but there's always an antipattern present.
Explicit Initialization (Temporal Coupling)
The solution I'm currently using has the Temporal Coupling antipattern:
public sealed class Connections
{
    Task InitializeAsync();
}
public sealed class Storage : IStorage
{
    public Storage(Connections connections);
    public static Task InitializeAsync(Connections connections);
}
public sealed class Logic
{
    public Logic(IStorage storage);
}
public static class GlobalConfig
{
    public static async Task EnsureInitialized()
    {
        var connections = Container.GetInstance<Connections>();
        await connections.InitializeAsync();
        await Storage.InitializeAsync(connections);
    }
}
I've encapsulated the Temporal Coupling into a method, so it's not as bad as it could be. But still, it's an antipattern and not as maintainable as I'd like.
Abstract Factory (Sync-Over-Async)
A commonly proposed solution is the Abstract Factory pattern. However, in this case we're dealing with asynchronous initialization, so I could only use Abstract Factory by forcing the initialization to run synchronously, which adopts the sync-over-async antipattern. I really dislike sync-over-async here because I have several storages, and in my current code they're all initialized concurrently. Since this is a cloud application, making them serially synchronous would increase startup time, and parallel synchronous is also not ideal due to resource consumption.
Asynchronous Abstract Factory (Improper Abstract Factory Usage)
I can also use Abstract Factory with asynchronous factory methods. However, there's one major problem with this approach. As Mark Seemann comments here, "Any DI Container worth its salt will be able to auto-wire an [factory] instance for you if you register it correctly." Unfortunately, this is completely untrue for asynchronous factories: AFAIK there is no DI container that supports this.
So, the Asynchronous Abstract Factory solution would require me to use explicit factories, at the very least Func&lt;Task&lt;T&gt;&gt;, and these end up being everywhere ("We personally think that allowing to register Func delegates by default is a design smell... If you have many constructors in your system that depend on a Func, please take a good look at your dependency strategy."):
public sealed class Connections
{
    private Connections();
    public static Task<Connections> CreateAsync();
}
public sealed class Storage : IStorage
{
    // Use static Lazy internally for my own static initialization
    public static Task<Storage> CreateAsync(Func<Task<Connections>> connections);
}
public sealed class Logic
{
    public Logic(Func<Task<IStorage>> storage);
}
This causes several problems of its own:
All my factory registrations have to pull dependencies out of the container explicitly and pass them to CreateAsync. So the DI container is no longer doing, you know, dependency injection.
The results of these factory calls have lifetimes that are no longer managed by the DI container. Each factory is now responsible for lifetime management instead of the DI container. (With the synchronous Abstract Factory, this is not an issue if the factory is registered appropriately).
Any method actually using these dependencies would need to be asynchronous, since even the logic methods must await the storage/connections initialization. This is not a big deal for me on this app, since my storage methods are all asynchronous anyway, but it can be a problem in the general case.
Self Initialization (Temporal Coupling)
Another, less common, solution is to have each member of a type await its own initialization:
public sealed class Connections
{
    private Task InitializeAsync(); // Use Lazy internally
    // Used to be a property BobConnection
    public async Task<X> GetBobConnectionAsync()
    {
        await InitializeAsync();
        return BobConnection;
    }
}
public sealed class Storage : IStorage
{
    public Storage(Connections connections);
    private static Task InitializeAsync(Connections connections); // Use Lazy internally
    async Task<Y> IStorage.GetAsync()
    {
        await InitializeAsync(_connections);
        var connection = await _connections.GetBobConnectionAsync();
        return await connection.GetYAsync();
    }
}
public sealed class Logic
{
    public Logic(IStorage storage);
    public async Task<Y> GetAsync()
    {
        return await _storage.GetAsync();
    }
}
The problem here is that we're back to Temporal Coupling, this time spread throughout the system. Also, this approach requires all public members to be asynchronous methods.
So there are really two DI design perspectives at odds here:
Consumers want to be able to inject instances that are ready to use.
DI containers push hard for simple constructors.
The problem is - particularly with asynchronous initialization - that if DI containers take a hard line on the "simple constructors" approach, then they are just forcing the users to do their own initialization elsewhere, which brings its own antipatterns. E.g., why Simple Injector won't consider asynchronous functions: "No, such feature does not make sense for Simple Injector or any other DI container, because it violates a few important ground rules when it comes to dependency injection." However, playing strictly "by the ground rules" apparently forces other antipatterns that seem much worse.
The question: is there a solution for asynchronous initialization that avoids all antipatterns?
Update: Complete signature for AzureConnections (referred to above as Connections):
public sealed class AzureConnections
{
    public AzureConnections();
    public CloudStorageAccount CloudStorageAccount { get; }
    public CloudBlobClient CloudBlobClient { get; }
    public CloudTableClient CloudTableClient { get; }
    public async Task InitializeAsync();
}
This is a long answer. There's a summary at the end. Scroll down to the summary if you're in a hurry.
The problem you have, and the application you're building, is atypical. It's atypical for two reasons:
you need (or rather want) asynchronous start-up initialization, and
your application framework (Azure Functions) supports asynchronous start-up initialization (or rather, there seems to be little framework surrounding it).
This makes your situation a bit different from a typical scenario, which might make it a bit harder to discuss common patterns.
However, even in your case the solution is rather simple and elegant:
Extract initialization out of the classes that hold it, and move it into the Composition Root. At that point you can create and initialize those classes before registering them in the container and feed those initialized classes into the container as part of registrations.
This works well in your particular case, because you want to do some (one-time) start-up initialization. Start-up initialization is typically done before you configure the container (or sometimes after if it requires a fully composed object graph). In most cases I’ve seen, initialization can be done before, as can be done effectively in your case.
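That suggestion can be sketched against the question's own types (Connections, Storage, IStorage, Logic). The Simple Injector calls are real API, but treat this as an outline under those assumptions rather than a drop-in:

```csharp
public static async Task BootstrapAsync(Container container)
{
    // Do all one-time async initialization first, in the Composition Root...
    var connections = new Connections();
    await connections.InitializeAsync();
    await Storage.InitializeAsync(connections);

    // ...then hand the already-initialized instances to the container,
    // so composing object graphs stays fast, synchronous, and verifiable.
    container.RegisterInstance(connections);
    container.RegisterSingleton<IStorage>(() => new Storage(connections));
    container.Register<Logic>();
    container.Verify();
}
```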
As I said, your case is a bit peculiar, compared to the norm. The norm is:
Start-up initialization is synchronous. Frameworks (like ASP.NET Core¹) typically do not support asynchronous initialization in the start-up phase.
Initialization often needs to be done per-request and just-in-time rather than per-application and ahead-of-time. Often components that need initialization have a short lifetime, which means we typically initialize such an instance on first use (in other words: just-in-time).
There is usually no real benefit to doing start-up initialization asynchronously. There is no practical performance benefit because, at start-up time, there will only be a single thread running anyway (and although we might parallelize this, that obviously doesn't require async). Also note that although some application types might deadlock on sync-over-async, in the Composition Root we know exactly which application type we are using and whether this will be a problem. A Composition Root is always application-specific. In other words, when we have initialization in the Composition Root of a non-deadlocking application (e.g. ASP.NET Core, Azure Functions, etc.), there is typically no benefit to doing start-up initialization asynchronously, except perhaps for the sake of sticking to the advised patterns and practices.
Because you know whether sync-over-async is a problem in your Composition Root, you could even decide to do the initialization on first use and synchronously. Because the amount of initialization is finite (compared to per-request initialization), there is no practical performance impact in doing it on a background thread with synchronous blocking if you wish. All you have to do is define a Proxy class in your Composition Root that makes sure initialization is done on first use. This is pretty much the idea that Mark Seemann proposed as an answer.
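A minimal sketch of such a first-use proxy, assuming the IStorage abstraction from the question (with GetAsync returning Task&lt;Y&gt;):

```csharp
// Composition Root proxy: defers initialization to first use.
// The sync-over-async is deliberate and confined to this one place,
// where we know the application type does not deadlock on it.
public sealed class LazyInitializedStorage : IStorage
{
    private readonly Lazy<IStorage> inner;

    public LazyInitializedStorage(Func<Task<IStorage>> createAsync)
    {
        this.inner = new Lazy<IStorage>(
            () => createAsync().GetAwaiter().GetResult());
    }

    // First call triggers creation + initialization; later calls reuse it.
    public Task<Y> GetAsync() => this.inner.Value.GetAsync();
}
```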
I was not familiar at all with Azure Functions, so this is actually the first application type (except Console apps, of course) that I know of that supports async initialization. In most framework types, there is no way for users to do this start-up initialization asynchronously at all. Inside an Application_Start event in an ASP.NET application, or in the Startup class of an ASP.NET Core application, for instance, there is no async. Everything has to be synchronous.
On top of that, application frameworks don't allow you to build their framework root components asynchronously. So even if DI Containers supported the concept of asynchronous resolves, this wouldn't work because of the lack of support in application frameworks. Take ASP.NET Core's IControllerActivator, for instance. Its Create(ControllerContext) method allows you to compose a Controller instance, but the return type of Create is object, not Task&lt;object&gt;. In other words, even if DI Containers provided us with a ResolveAsync method, it would still cause blocking, because the ResolveAsync calls would be wrapped behind synchronous framework abstractions.
In the majority of cases, you’ll see that initialization is done per-instance or at runtime. A SqlConnection, for instance, is typically opened per request, so each request needs to open its own connection. When you want to open the connection ‘just in time’, this inevitably results in application interfaces that are asynchronous. But be careful here:
If you create an implementation that is synchronous, you should only make its abstraction synchronous in case you are sure that there will never be another implementation (or proxy, decorator, interceptor, etc.) that is asynchronous. If you invalidly make the abstraction synchronous (i.e. have methods and properties that do not expose Task<T>), you might very well have a Leaky Abstraction at hand. This might force you to make sweeping changes throughout the application when you get an asynchronous implementation later on.
In other words, with the introduction of async you have to take even more care in the design of your application abstractions. This holds for your specific case as well. Even though you might only require start-up initialization now, are you sure that the abstractions you defined (and AzureConnections as well) will never need just-in-time initialization? In case the synchronous behavior of AzureConnections is an implementation detail, you will have to make it async right away.
Another example of this is your INugetRepository. Its members are synchronous, but that is clearly a Leaky Abstraction, because the reason it is synchronous is because its implementation is synchronous. Its implementation, however, is synchronous because it makes use of a legacy NuGet package that only has a synchronous API. It’s pretty clear that INugetRepository should be completely async, even though its implementation is synchronous, because implementations are expected to communicate over the network, which is where asynchronicity makes sense.
In an application that applies async, most application abstractions will have mostly async members. When this is the case, it would be a no-brainer to make this kind of just-in-time initialization logic async as well; everything is already async.
Summary
In case you need start-up initialization: do it before or after configuring the container. This makes composing object graphs itself fast, reliable, and verifiable.
Doing initialization before configuring the container prevents Temporal Coupling, but might mean you will have to move initialization out of the classes that require it (which is actually a good thing).
Async start-up initialization is impossible in most application types. In the other application types it is typically unnecessary.
In case you require per-request or just-in-time initialization, there is no way around having asynchronous interfaces.
Be careful with synchronous interfaces if you're building an asynchronous application; you might be leaking implementation details.
Footnotes
ASP.NET Core actually does allow async start-up initialization, but not from within the Startup class. There are several ways to achieve this: either implement and register hosted services that contain (or delegate to) the initialization, or trigger the async initialization from within the async Main method of the Program class.
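As a sketch, the hosted-service option could look like this, reusing the Connections type from the question (the registration line is the standard AddHostedService call):

```csharp
// Runs async start-up initialization before the host starts serving.
public sealed class InitializationHostedService : IHostedService
{
    private readonly Connections connections;

    public InitializationHostedService(Connections connections) =>
        this.connections = connections;

    public Task StartAsync(CancellationToken cancellationToken) =>
        this.connections.InitializeAsync();

    public Task StopAsync(CancellationToken cancellationToken) =>
        Task.CompletedTask;
}

// In ConfigureServices:
// services.AddHostedService<InitializationHostedService>();
```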
While I'm fairly sure the following isn't what you're looking for, can you explain why it doesn't address your question?
public sealed class AzureConnections
{
    private readonly Task<CloudStorageAccount> storage;

    public AzureConnections()
    {
        this.storage = Task.Factory.StartNew(InitializeStorageAccount);
        // Repeat for the other cloud properties...
    }

    private static CloudStorageAccount InitializeStorageAccount()
    {
        // Do any required initialization here...
        return new CloudStorageAccount( /* Constructor arguments... */ );
    }

    public CloudStorageAccount CloudStorageAccount
    {
        get { return this.storage.Result; }
    }
}
In order to keep the design clear, I only implemented one of the cloud properties, but the two others could be done in a similar fashion.
The AzureConnections constructor will not block, even if it takes significant time to initialise the various cloud objects.
It will, on the other hand, start the work, and since .NET tasks behave like promises, the first time you try to access the value (using Result) it's going to return the value produced by InitializeStorageAccount.
I get the strong impression that this isn't what you want, but since I don't understand what problem you're trying to solve, I thought I'd leave this answer so at least we'd have something to discuss.
It looks like you are trying to do what I am doing with my proxy singleton class.
services.AddSingleton<IWebProxy>(sp =>
{
    // Notice the GetService outside the Task. It was locking when it was inside.
    var data = sp.GetService<IData>();
    return Task.Run(async () =>
    {
        var credentials = await data.GetProxyCredentialsAsync();
        if (credentials != null)
        {
            return new WebHookProxy(credentials);
        }
        else
        {
            return (IWebProxy)null;
        }
    }).Result; // Back to sync
});

When are .NET Core dependency injected instances disposed?

ASP.NET Core uses extension methods on IServiceCollection to set up dependency injection, then when a type is needed it uses the appropriate method to create a new instance:
AddTransient<T> - adds a type that is created again each time it's requested.
AddScoped<T> - adds a type that is kept for the scope of the request.
AddSingleton<T> - adds a type when it's first requested and keeps hold of it.
I have types that implement IDisposable and that will cause problems if they aren't disposed - in each of those patterns when is Dispose actually called?
Is there anything I need to add (such as exception handling) to ensure that the instance is always disposed?
The resolved objects have the same lifetime/dispose cycle as their container, unless you manually dispose the transient services in code using a using statement or the .Dispose() method.
In ASP.NET Core you get a scoped container that's instantiated per request and disposed at the end of the request. At that point, scoped and transient dependencies that were created by this container get disposed too (provided they implement the IDisposable interface), which you can also see in the source code here:
public void Dispose()
{
    lock (ResolvedServices)
    {
        if (_disposeCalled)
        {
            return;
        }
        _disposeCalled = true;
        if (_transientDisposables != null)
        {
            foreach (var disposable in _transientDisposables)
            {
                disposable.Dispose();
            }
            _transientDisposables.Clear();
        }
        // PERF: We're enumerating the dictionary so that we don't allocate to enumerate.
        // .Values allocates a ValueCollection on the heap; enumerating the dictionary
        // allocates a struct enumerator.
        foreach (var entry in ResolvedServices)
        {
            (entry.Value as IDisposable)?.Dispose();
        }
        ResolvedServices.Clear();
    }
}
Singletons get disposed when the parent container gets disposed, usually means when the application shuts down.
TL;DR: As long as you don't instantiate scoped/transient services during application startup (using app.ApplicationServices.GetService&lt;T&gt;()) and your services correctly implement the IDisposable interface (as described on MSDN), there is nothing you need to take care of.
The parent container is unavailable outside of Configure(IApplicationBuilder app) method unless you do some funky things to make it accessible outside (which you shouldn't anyways).
Of course, it's encouraged to free up transient services as soon as possible, especially if they consume a lot of resources.
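If you do need a scoped or transient service outside a request (for example, in a background job), you can create and dispose a scope yourself; the CreateScope extension lives in Microsoft.Extensions.DependencyInjection. MyScopedService here is a hypothetical registration:

```csharp
using (var scope = app.ApplicationServices.CreateScope())
{
    // Resolve from the scope's provider, not the root provider.
    var service = scope.ServiceProvider.GetRequiredService<MyScopedService>();
    service.DoWork();
} // Disposing the scope disposes the IDisposable services it created.
```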

Injecting correct object graph using StructureMap in Queue of different Objects

I have a queuing service that has to inject a different dependency graph depending on the type of object in the queue. I'm using Structure Map.
So, if the object in the queue is TypeA the concrete classes for TypeA are used and if it's TypeB, the concrete classes for TypeB are used.
I'd like to avoid code in the queue like:
if (typeA)
{
    // setup TypeA graph
}
else if (typeB)
{
    // setup TypeB graph
}
Within the graph, I also have generic classes, such as an IReader(ISomething, ISomethingElse), where IReader is generic but needs the correct ISomething and ISomethingElse injected for the type. ISomething will also have dependencies, and so on.
Currently I create a TypeA or TypeB object, inject a generic Processor class into it using StructureMap, and then manually pass a TypeA or TypeB factory into a method like:
Processor.Process(new TypeAFactory) // perhaps I should have an abstract factory...
However, because the factory then creates the generic IReader mentioned above, I end up manually injecting all the TypeA or TypeB classes from there on.
I hope enough of this makes sense.
I am new to StructureMap and was hoping somebody could point me in the right direction here for a flexible and elegant solution.
Thanks
I don't know if I fully understand your question, but in general your queue processor needs access to some sort of factory for processing those objects. The most convenient approach would be if your queue consists of messages/commands (DTOs) and you have some sort of abstraction over the command handling logic, such as ICommandHandler&lt;TCommand&gt;.
In that case your queue processor might look like this:
private readonly ICommandHandlerFactory factory;

public void Process(IEnumerable<object> commandQueue)
{
    foreach (object command in commandQueue)
    {
        dynamic handler = this.factory.CreateHandlerFor(command.GetType());
        handler.Handle((dynamic)command);
    }
}
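The abstractions that snippet assumes might look roughly like this (the interface names follow the answer; the factory shape and TypeAHandler are illustrative):

```csharp
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public interface ICommandHandlerFactory
{
    // Returns the ICommandHandler<TCommand> matching the runtime command
    // type; declared as object so callers can dispatch via 'dynamic'.
    object CreateHandlerFor(Type commandType);
}

// Example: a handler whose dependency graph is specific to TypeA.
public sealed class TypeAHandler : ICommandHandler<TypeA>
{
    public void Handle(TypeA command)
    {
        // TypeA-specific collaborators are constructor-injected here.
    }
}
```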

Autofac: long-lived objects requiring short-lived objects during single method calls

I have a class X that I register in Autofac as single-instance because it's rather costly to create.
X has a method DoSomething that performs some action. However, to do its task in DoSomething, X needs additional dependencies. Typically, I'd inject them in the constructor, but in this case this gets difficult, because the dependency is bound to a narrower scope, e.g. instance-per-httprequest or instance-per-lifetime-scope. I can't use Func<T>, because this still resolves the objects in the lifetime in which the delegate is instantiated, so I don't gain anything.
The next option would have been to pass in the dependency as an argument to DoSomething, however the fact that there is a dependency is really just an implementation detail. In fact, I access X through an interface. I'd rather not cause a leaking abstraction by adding this parameter.
Resolving the dependency manually in the method (i.e., service-locator style) isn't that great either, of course. And even then I have the problem that I'm not sure how to access the proper IComponentContext. The class may be used in a web application or in a conventional application, or in a thread of a web application but outside any request. How do I determine the "current" lifetime scope?
So the basic problem is this:
class X : ISomething
{
    public void DoSomething()
    {
        IDependency dependency = ?;
        dependency.UseMe();
        /* more stuff */
    }
}
Ideally I'd like to be able to inject something into the constructor that will later allow me to resolve the actual object inside the current lifetime scope, like so:
class X : ISomething
{
    IResolveLater<IDependency> dependencyResolver;
    public X(IResolveLater<IDependency> dependencyResolver)
    {
        this.dependencyResolver = dependencyResolver;
    }
    public void DoSomething()
    {
        IDependency dependency = dependencyResolver.Resolve();
        dependency.UseMe();
        /* more stuff */
    }
}
I'm certainly smelling design issues here, but I can't really put my finger on it. How can I solve this general problem: a long-lived object that requires locally-scoped, short-lived objects for single operations. I normally much prefer the different order: short-lived objects depending on long-lived objects.
I was also thinking about somehow moving the long-lived stuff out of X and create an additional, short-lived class XHelper that acts as some kind of "adapter":
class X
{
    void DoSomething(IDependency dependency)
    {
        /* do something */
    }
}
class XHelper : ISomething
{
    X x;
    IDependency dependency;
    public XHelper(X x, IDependency dependency)
    {
        this.x = x;
        this.dependency = dependency;
    }
    public void DoSomething()
    {
        x.DoSomething(dependency);
    }
}
Thus, when I need an ISomething, I'll resolve an XHelper (instead of an X) as needed which automatically gets the proper dependencies injected. It's a bit cumbersome that I need to introduce an additional type just for this.
How can I resolve this situation in the most elegant way?
It sounds to me like your use case is actually a simple factory pattern in disguise.
Create a simple DependencyFactory that has a Create() method that returns instances of whatever concrete class you want that implements IDependency, then have your DependencyFactory implement an interface IDependencyFactory.
Register the IDependencyFactory with the container, and modify the constructor of Class X to take an IDependencyFactory.
Use the IDependencyFactory instance to resolve a concrete instance of IDependency inside DoSomething(), and use that instance there.
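A sketch of the factory those steps describe (ConcreteDependency and the registration line are illustrative; the Autofac RegisterType call is real API):

```csharp
public interface IDependencyFactory
{
    IDependency Create();
}

// Creates a fresh, short-lived IDependency per operation, so the
// long-lived X never holds on to one beyond a single method call.
public sealed class DependencyFactory : IDependencyFactory
{
    public IDependency Create() => new ConcreteDependency();
}

// Registration:
// builder.RegisterType<DependencyFactory>().As<IDependencyFactory>();
```

Note that if the dependency must come from the current lifetime scope (e.g. per-HTTP-request), the factory itself needs access to that scope, which is exactly the hard part of the question; a plain new inside Create() sidesteps the container's lifetime management.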

What is wrong with putting Using Blocks in Repository?

I have using blocks in each method of my repository. If I want to cross-reference methods, it seems it would be against best practices to initialize another DataContext. What am I doing wrong? If I declare a DataContext in the class instead of using blocks in the methods, won't I lose the ability to dispose it?
public IList<something> GetSomething()
{
    using (DB db = new DB())
    {
        // ... calls GetListofSomethingElse(id) ...
    }
}
public IList<somethingelse> GetListofSomethingElse(int id)
{
    using (DB db = new DB())
    {
        // ... return the IList ...
    }
}
Actually, I think it is semantically not correct to create and dispose a DataContext in your repository.
I mean: if you open a new connection to the DB in each and every method of your repository, you're doing it wrong IMHO. That is too fine-grained.
The repository class has no knowledge of the "context" in which it is being used, so your repository should not be responsible for opening/closing connections or starting and committing transactions.
Context is king, and since the repository has no knowledge of the context in which it is being used, it is IMHO the responsibility of the application layer or service layer to open new DataContext objects and to close/dispose of them. (The same applies to transactions.)
So, this is how I do it (note that I do not use Entity Framework but NHibernate; I assume that the DataContext class in EF is similar to the ISession in NHibernate):
using (ISession s = theSessionFactory.OpenSession())
{
    ICustomerRepository cr = RepositoryFactory.GetCustomerRepository(s);
    Customer c1 = cr.GetCustomer(1);
    Customer c2 = cr.GetCustomer(2);
    // do some other stuff
    s.StartTransaction();
    cr.Save(c1);
    cr.Save(c2);
    s.Commit();
}
(This is not real-world code, of course, and it won't even compile, since ISession doesn't have a Commit method. ;) Instead, StartTransaction returns an ITransaction which has some kind of commit method, but I think you'll catch my drift. ;) )
If you don't use a using statement, you can still dispose explicitly. Even if you don't dispose of the data context though, cross-referencing these methods will still create a new data context. That may or may not be a good thing, depending on your usage. Think about the state management aspect of the data context, and whether you want to isolate the methods from each other or not. If you want to avoid creating a new context all the time, overload the methods with versions which take the context as a parameter.
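The overloading suggestion might look like this sketch (keeping the question's DB and something identifiers; the query body is a placeholder):

```csharp
public IList<something> GetSomething()
{
    using (DB db = new DB())
    {
        // Delegate to the overload below so both methods share one context.
        return GetSomething(db);
    }
}

// Overload that accepts an existing context, so cross-referencing
// methods don't each open their own connection.
public IList<something> GetSomething(DB db)
{
    return db.Somethings.ToList(); // hypothetical query
}
```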
Note that you don't usually need to dispose of a data context, although I tend to dispose of anything implementing IDisposable.
The using statement is syntactic sugar. It compiles to a try/finally block with the Dispose() call in the finally section. It ensures that Dispose will be called even if an exception occurs.
You can call .Dispose() on a class without using a 'using' statement - usually you'll do this in the Dispose method of your repository, if you've got one.
