When are .NET Core dependency injected instances disposed? - dependency-injection

ASP.NET Core uses extension methods on IServiceCollection to set up dependency injection; when a type is needed, it uses the appropriate method to create a new instance:
AddTransient<T> - creates a new instance every time the type is requested.
AddScoped<T> - creates one instance per request and reuses it for the duration of that request.
AddSingleton<T> - creates one instance the first time the type is requested and reuses it for the lifetime of the application.
I have types that implement IDisposable and that will cause problems if they aren't disposed - in each of those patterns when is Dispose actually called?
Is there anything I need to add (such as exception handling) to ensure that the instance is always disposed?

Resolved objects have the same lifetime/dispose cycle as their container, unless you manually dispose transient services in code with a using block or an explicit .Dispose() call.
In ASP.NET Core you get a scoped container that is instantiated per request and disposed at the end of the request. At that point, any scoped and transient dependencies created by that container are disposed too (provided they implement the IDisposable interface), as you can see in the source code here:
public void Dispose()
{
    lock (ResolvedServices)
    {
        if (_disposeCalled)
        {
            return;
        }
        _disposeCalled = true;
        if (_transientDisposables != null)
        {
            foreach (var disposable in _transientDisposables)
            {
                disposable.Dispose();
            }
            _transientDisposables.Clear();
        }

        // PERF: We're enumerating the dictionary so that we don't allocate to enumerate.
        // .Values allocates a ValueCollection on the heap; enumerating the dictionary
        // allocates a struct enumerator.
        foreach (var entry in ResolvedServices)
        {
            (entry.Value as IDisposable)?.Dispose();
        }
        ResolvedServices.Clear();
    }
}
Singletons are disposed when the parent container is disposed, which usually means when the application shuts down.
TL;DR: As long as you don't instantiate scoped/transient services during application startup (using app.ApplicationServices.GetService<T>()), and your services correctly implement the IDisposable pattern (as described on MSDN), there is nothing you need to take care of.
The parent container is unavailable outside of the Configure(IApplicationBuilder app) method unless you do some funky things to make it accessible elsewhere (which you shouldn't do anyway).
Of course, it's encouraged to free up transient services as soon as possible, especially if they consume a lot of resources.
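As an illustration of the scope-disposal behaviour described above, here is a minimal sketch assuming the Microsoft.Extensions.DependencyInjection package; MyService is a hypothetical disposable service:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public sealed class MyService : IDisposable
{
    public void Dispose() => Console.WriteLine("MyService disposed");
}

public static class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddScoped<MyService>();

        using (var provider = services.BuildServiceProvider())
        {
            // Each scope mirrors one request; disposing the scope disposes
            // the scoped (and transient) IDisposables it created.
            using (var scope = provider.CreateScope())
            {
                var svc = scope.ServiceProvider.GetRequiredService<MyService>();
            } // "MyService disposed" is written here
        }
    }
}
```

Disposing the scope is what triggers Dispose on everything it created; the root provider disposes singletons only when it is disposed itself.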

Related

PerRequestLifetimeManager and Task.Factory.StartNew - Dependency Injection with Unity

How to manage new tasks with PerRequestLifeTimeManager?
Should I create another container inside a new task? (I wouldn't like to change PerRequestLifeTimeManager to PerResolveLifetimeManager/HierarchicalLifetimeManager.)
[HttpPost]
public ActionResult UploadFile(FileUploadViewModel viewModel)
{
    var cts = new CancellationTokenSource();
    CancellationToken cancellationToken = cts.Token;
    Task.Factory.StartNew(() =>
    {
        // _fileService = DependencyResolver.Current.GetService<IFileService>();
        _fileService.ProcessFile(viewModel.FileContent);
    }, cancellationToken);
}
You should read this article about DI in multi-threaded applications. Although it is written for a different DI library, you'll find most of the information applicable to the concept of DI in general. To quote a few important parts:
Dependency injection forces you to wire all dependencies together in a single place in the application: the Composition Root. This means that there is a single place in the application that knows about how services behave, whether they are thread-safe, and how they should be wired. Without this centralization, this knowledge would be scattered throughout the code base, making it very hard to change the behavior of a service.
In a multi-threaded application, each thread should get its own object graph. This means that you should typically call [Resolve<T>()] once at the beginning of the thread’s execution to get the root object for processing that thread (or request). The container will build an object graph with all root object’s dependencies. Some of those dependencies will be singletons; shared between all threads. Other dependencies might be transient; a new instance is created per dependency. Other dependencies might be thread-specific, request-specific, or with some other lifestyle. The application code itself is unaware of the way the dependencies are registered and that’s the way it is supposed to be.
The advice of building a new object graph at the beginning of a thread also holds when manually starting a new (background) thread. Although you can pass on data to other threads, you should not pass on container-controlled dependencies to other threads. On each new thread, you should ask the container again for the dependencies. When you start passing dependencies from one thread to the other, those parts of the code have to know whether it is safe to pass those dependencies on. For instance, are those dependencies thread-safe? This might be trivial to analyze in some situations, but prevents you from changing those dependencies with other implementations, since now you have to remember that there is a place in your code where this is happening and you need to know which dependencies are passed on. You are decentralizing this knowledge again, making it harder to reason about the correctness of your DI configuration and making it easier to misconfigure the container in a way that causes concurrency problems.
So you should not spin off new threads from within your application code itself. And you should definitely not create a new container instance, since this can cause all sorts of performance problems; you should typically have just one container instance per application.
Instead, you should pull this infrastructure logic into your Composition Root, which allows your controller's code to be simplified. Your controller code should be no more than this:
[HttpPost]
public ActionResult UploadFile(FileUploadViewModel viewModel)
{
    _fileService.ProcessFile(viewModel.FileContent);
}
On the other hand, you don't want to change the IFileService implementation, because multi-threading shouldn't be its concern. Instead, we need some infrastructural logic that we can place between the controller and the file service, without either having to know about it. The way to do this is by implementing a proxy class for the file service and placing it in your Composition Root:
private sealed class AsyncFileServiceProxy : IFileService
{
    private readonly ILogger logger;
    private readonly Func<IFileService> fileServiceFactory;

    public AsyncFileServiceProxy(ILogger logger, Func<IFileService> fileServiceFactory)
    {
        this.logger = logger;
        this.fileServiceFactory = fileServiceFactory;
    }

    void IFileService.ProcessFile(FileContent content)
    {
        // Run on a new thread
        Task.Factory.StartNew(() =>
        {
            this.BackgroundThreadProcessFile(content);
        });
    }

    private void BackgroundThreadProcessFile(FileContent content)
    {
        // Here we run on a different thread and the
        // services should be requested on this thread.
        var fileService = this.fileServiceFactory.Invoke();
        try
        {
            fileService.ProcessFile(content);
        }
        catch (Exception ex)
        {
            // Logging is important, since we run on a different thread.
            this.logger.Log(ex);
        }
    }
}
This class is a small piece of infrastructural logic that allows processing files on a background thread. The only thing left is to configure the container to inject our AsyncFileServiceProxy instead of the real file service implementation. There are multiple ways to do this. Here's an example:
container.RegisterType<ILogger, YourLogger>();
container.RegisterType<RealFileService>();
container.RegisterType<Func<IFileService>>(() => container.Resolve<RealFileService>(),
new ContainerControlledLifetimeManager());
container.RegisterType<IFileService, AsyncFileServiceProxy>();
One part however is missing from the equation, and that is how to deal with scoped lifestyles, such as the per-request lifestyle. Since you are running work on a background thread, there is no HttpContext, which basically means that you need to start some 'scope' to simulate a request (since your background thread is basically its own new request). This, however, is where my knowledge about Unity stops. I'm very familiar with Simple Injector, where you would solve this using a hybrid lifestyle (mixing a per-request lifestyle with a lifetime-scope lifestyle) and explicitly wrap the call to BackgroundThreadProcessFile in such a scope. I imagine the solution in Unity to be very close to this, but unfortunately I don't have enough knowledge of Unity to show you how. Hopefully somebody else can comment on this, or add an extra answer explaining how to do this in Unity.

Ninject interception in multithreaded environment

I'm trying to create an interceptor using Ninject.Extensions.Interception.DynamicProxy to log method completion times.
In a single threaded environment something like this works:
public class TimingInterceptor : SimpleInterceptor
{
    readonly Stopwatch _stopwatch = new Stopwatch();
    private bool _isStarted;

    protected override void BeforeInvoke(IInvocation invocation)
    {
        // Guard first, so a concurrent invocation doesn't silently reset the stopwatch.
        if (_isStarted) throw new Exception("resetting stopwatch for another invocation => false results");
        _isStarted = true;
        _stopwatch.Restart();
        invocation.Proceed();
    }

    protected override void AfterInvoke(IInvocation invocation)
    {
        Debug.WriteLine(_stopwatch.Elapsed);
        _isStarted = false;
    }
}
In multi-threaded scenarios this would not work, however, because the Stopwatch is shared between invocations. How can I pass an instance of Stopwatch from BeforeInvoke to AfterInvoke so that it is not shared between invocations?
This should work just fine in a multi-threaded application, because each thread should get its own object graph. So when you start processing some task, you start with resolving a new graph and graphs should not be passed from thread to thread. This allows keeping the knowledge of what is thread-safe (and what not) centralized to the one single place in the application that wires everything up: the composition root.
When you work like this, it means that when you use this interceptor to monitor classes that are singletons (and used across threads), each thread will still get its own interceptor (when it's registered as transient), because every time you resolve, you get a new interceptor (even though you reuse the same 'intercepted' instance).
This does mean, however, that you have to be very careful where you inject this intercepted component, because if you inject the intercepted object into another singleton, you will be in trouble again. This particular sort of 'trouble' is called a captive dependency, a.k.a. a lifestyle mismatch. It's really easy to accidentally misconfigure your container and get yourself into trouble this way, and unfortunately Ninject lacks the ability to warn you about it.
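A captive dependency is easy to picture without any container at all; a minimal sketch, with all type names illustrative:

```csharp
using System;

class TransientWorker          // intended lifestyle: transient (new per resolve)
{
    public DateTime CreatedAt { get; } = DateTime.UtcNow;
}

class SingletonConsumer        // intended lifestyle: singleton (one app-wide instance)
{
    private readonly TransientWorker _worker;

    public SingletonConsumer(TransientWorker worker) => _worker = worker;

    // _worker was meant to be short-lived, but because this class is a
    // singleton, the instance is now held ("captive") for the whole
    // application lifetime and shared across threads.
    public DateTime WorkerCreatedAt => _worker.CreatedAt;
}
```

The container resolves SingletonConsumer once, so its constructor runs once, and the single TransientWorker it received effectively becomes a singleton too.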
Do note, though, that your problems disappear if you use decorators instead of interceptors, because with a decorator you can keep everything in a single method. That means even the decorator can be a singleton without causing any threading issues. Example:
// Timing cross-cutting concern for command handlers
public class TimingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;

    public TimingCommandHandlerDecorator(ICommandHandler<TCommand> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        var stopwatch = Stopwatch.StartNew();
        this.decoratee.Handle(command);
        Debug.WriteLine(stopwatch.Elapsed);
    }
}
Of course, using decorators is often only possible when you have correctly applied the SOLID principles to your design, because you usually need clear, generic abstractions to be able to apply decorators to a large range of classes in your system. It can be daunting to use decorators efficiently in a legacy code base.
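For completeness, here is a container-agnostic sketch of wiring the decorator in the composition root; ICommandHandler, MyCommand, and MyCommandHandler are illustrative stand-ins for your own abstractions:

```csharp
using System;
using System.Diagnostics;

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class MyCommand { }

// Illustrative "real" handler.
public class MyCommandHandler : ICommandHandler<MyCommand>
{
    public static int Calls;
    public void Handle(MyCommand command) => Calls++;
}

// Timing decorator: the Stopwatch is a local, so one shared
// instance is safe across threads.
public class TimingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;

    public TimingCommandHandlerDecorator(ICommandHandler<TCommand> decoratee)
        => this.decoratee = decoratee;

    public void Handle(TCommand command)
    {
        var stopwatch = Stopwatch.StartNew();
        this.decoratee.Handle(command);
        Debug.WriteLine(stopwatch.Elapsed);
    }
}

public static class Program
{
    public static void Main()
    {
        // Composition root: wrap the real handler once.
        ICommandHandler<MyCommand> handler =
            new TimingCommandHandlerDecorator<MyCommand>(new MyCommandHandler());
        handler.Handle(new MyCommand());
    }
}
```

With a container you would register the decorator around the real handler in one place; the calling code only ever sees ICommandHandler<TCommand>.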

How do I prevent Structuremap nested container from disposing an instance

I'm using a "nested container per request" pattern. It's wrapped in a using block, so the nested container is disposed after handling the request. This causes all instances created by the nested container to also be disposed at the end of the request.
This works great and is usually exactly what I want to happen. However, I have run into a use case where I don't want one particular instance to be disposed while everything else is.
I tried explicitly removing the instance at the bottom of the using block like so:
using (var nestedContainer = _container.GetNestedContainer())
{
    nestedContainer.Configure(x =>
        x.For<IFoo>()
         .Use(getFooForRequest())
    );
    var handler = (IRequestHandler) nestedContainer.GetInstance(handlerType);
    handler.execute(....)
    nestedContainer.EjectAllInstancesOf<IFoo>();
}
Unfortunately it appears that EjectAllInstancesOf is also calling dispose.
My application has a few different instances of IFoo, and they all need to live for the entire lifetime of the application. However incoming requests need to be dynamically associated with one particular instance of IFoo.
Injecting it into the nested container like above accomplished this goal, but my IFoo is getting disposed with the nested container, which is bad.
So how do I block the dispose for IFoo while still letting it dispose everything else?
If that's not possible, is there some other way to dynamically select my per-request IFoo without manual injection via nested.Configure()?

Is it necessary to dispose StructureMap "per request" instances?

I'm adapting some code originally written for Windsor to use StructureMap. In the Windsor example we release the handler. Is it necessary to do this with StructureMap instances that are cached "per request"? The code is:
foreach (var handler in ObjectFactory.GetAllInstances<IHandle<TEvent>>())
{
    handler.Handle(@event);
    // do I need to dispose here?
}

// or should I do this:
ObjectFactory.EjectAllInstancesOf<IHandle<TEvent>>();
Thanks
Ben
StructureMap does not hold on to references to "per request" instances at all, so you do not have to take any steps to tell StructureMap to release them.
However, if retrieved services expect to be explicitly disposed (because they implement IDisposable), it is still your responsibility to dispose them. StructureMap just gives you the instance, and it's up to you to use it appropriately.
With one exception - if you retrieve an IDisposable instance from a nested container, Dispose() will be called on the instance when the nested container is disposed.
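If you do need to dispose services resolved from the root container yourself, a plain try/finally (or using) around each usage is enough; here is a sketch with illustrative types standing in for the resolved handlers:

```csharp
using System;
using System.Collections.Generic;

public class OrderPlaced { }

// Illustrative handler; only some resolved services are disposable.
public class AuditHandler : IDisposable
{
    public bool Disposed;
    public void Handle(OrderPlaced e) { /* real work */ }
    public void Dispose() => Disposed = true;
}

public static class Program
{
    public static bool RunAll()
    {
        var handlers = new List<AuditHandler> { new AuditHandler(), new AuditHandler() };
        var evt = new OrderPlaced();
        foreach (var handler in handlers)
        {
            try
            {
                handler.Handle(evt);
            }
            finally
            {
                // Dispose only the services that are actually disposable,
                // even if Handle throws.
                (handler as IDisposable)?.Dispose();
            }
        }
        return handlers.TrueForAll(h => h.Disposed);
    }

    public static void Main() => Console.WriteLine(RunAll());  // True
}
```

The `as IDisposable` cast keeps the loop safe when a handler type does not implement IDisposable at all.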

What is wrong with putting Using Blocks in Repository?

I have using blocks in each method of my repository. If I want to cross-reference methods, it seems it would be against best practices to initialize another DataContext. What am I doing wrong? If I declare a DataContext in the class instead of using blocks in the methods, won't I lose the ability to dispose it?
public IList<something> GetSomething()
{
    using (DB db = new DB())
    {
        ...GetListofSomethingElse(id)
    }
}

public IList<somethingelse> GetListofSomethingElse(int id)
{
    using (DB db = new DB())
    {
        ... return IList
    }
}
Actually, I think it is semantically incorrect to create and dispose a DataContext inside your repository.
I mean: if you open a new connection to the DB in each and every method of your repository, you're doing it wrong IMHO. That is too fine-grained.
The repository class has no knowledge of the 'context' in which it is being used. Your repository should not be responsible for opening/closing connections or starting and committing transactions.
Context is king, and the repository has no knowledge of the context in which it is being used. So, IMHO, it is the responsibility of the application layer or service layer to open new DataContext objects and to close/dispose of them. (The same applies to transactions.)
So, this is how I do it (note that I do not use Entity Framework, but NHibernate; I assume that the DataContext class in EF is similar to the ISession in NHibernate):
using (ISession s = theSessionFactory.OpenSession())
{
    ICustomerRepository cr = RepositoryFactory.GetCustomerRepository(s);
    Customer c1 = cr.GetCustomer(1);
    Customer c2 = cr.GetCustomer(2);

    // do some other stuff

    s.StartTransaction();
    cr.Save(c1);
    cr.Save(c2);
    s.Commit();
}
(This is not real-world code, of course, and it won't even compile, since ISession doesn't have a Commit method. ;) Instead, StartTransaction returns an ITransaction which has some kind of commit method, but I think you'll catch my drift. ;) )
If you don't use a using statement, you can still dispose explicitly. Even if you don't dispose of the data context though, cross-referencing these methods will still create a new data context. That may or may not be a good thing, depending on your usage. Think about the state management aspect of the data context, and whether you want to isolate the methods from each other or not. If you want to avoid creating a new context all the time, overload the methods with versions which take the context as a parameter.
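The overload approach from the last sentence can be sketched as follows, reusing the hypothetical DB, Something, and method names from the question (the Somethings table is assumed):

```csharp
// Parameterless version: owns the context for standalone calls.
public IList<Something> GetSomething()
{
    using (var db = new DB())
    {
        return GetSomething(db);   // delegate to the context-taking overload
    }
}

// Overload: works against a caller-supplied context, so methods that
// cross-reference each other share one DataContext instead of each
// creating (and disposing) their own.
public IList<Something> GetSomething(DB db)
{
    return db.Somethings.ToList();
}
```

Callers that need several repository operations in one unit of work create the context once and pass it to the overloads; simple callers use the parameterless versions.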
Note that you don't usually need to dispose of a data context, although I tend to dispose of anything implementing IDisposable.
The using statement is syntactic sugar. It compiles to a try/finally block with the Dispose() call in the finally section. It ensures that Dispose will be called even if an exception occurs.
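Concretely, a using block expands to roughly the following (Resource is an illustrative IDisposable; the outer catch exists only so the sample can continue past the exception):

```csharp
using System;

class Resource : IDisposable
{
    public static bool Disposed;
    public void Dispose() => Disposed = true;
}

static class Program
{
    public static void Main()
    {
        // using (var r = new Resource()) { throw ...; } expands to roughly:
        var r = new Resource();
        try
        {
            try
            {
                // body of the using block; here it throws
                throw new InvalidOperationException("boom");
            }
            finally
            {
                // compiler-generated cleanup: runs despite the exception
                if (r != null) r.Dispose();
            }
        }
        catch (InvalidOperationException)
        {
            // caught here only so the sample continues
        }
        Console.WriteLine(Resource.Disposed);  // True
    }
}
```

The finally block is what guarantees Dispose runs on both the normal and the exceptional path.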
You can call .Dispose() on a class without a using statement - usually you'll do this in the Dispose method of your repository, if you have one.
