I've run into the same question repeatedly whenever using a new DI framework... how do you run a massively parallel operation kicked off from an HttpRequest, where each thread needs its own unique copy of the dependencies? In my case, I'm using Ninject.
The specific case I always run into is a CPU-intensive report, using Parallel.ForEach, that needs an Entity Framework DbContext; the EF context must be unique to the thread, but outside of these special reports the EF context must be InRequestScope.
How do you achieve this with Ninject? Preferably in a way that allows disposing the EF context with each task in the Parallel.ForEach, since the data loaded with the context would otherwise just stay in the context and consume memory.
Note that this report is big enough to warrant Parallel.ForEach but small enough that it can run synchronously on a web request without timing out the browser (<60 seconds). Maybe I'm weird, but I run into this need a lot.
The solution has several different moving parts that, IMO, aren't terribly well-documented parts of Ninject. The upside is that after implementing something like this, you should start feeling comfortable with Ninject in a hurry!
First, you need to change the scope of your objects so they use the HttpContext if it exists, and fall back to the current thread if it doesn't. There is no documentation for this, but a DefaultScopeCallback property was added to the settings a while back. Set that property to your own scope callback, which uses the same code as the Ninject.Web.Common source to get the HttpContext, but then uses "?? Thread.CurrentThread" as the fallback. Do that in the CreateKernel code that should have been created automatically when you installed the NuGet package.
(I have substituted the StandardScopeCallbacks.Thread(ctx) where I used to have Thread.CurrentThread, since the former could conceivably change at some point. Currently those two are identical in what they do.)
private static IKernel CreateKernel()
{
    var settings = new NinjectSettings { DefaultScopeCallback = DefaultScopeCallback };
    var kernel = new StandardKernel(settings);

    // The rest of the default implementation of CreateKernel left out for brevity
}

private static Object DefaultScopeCallback(Ninject.Activation.IContext ctx)
{
    var scope = ctx.Kernel.Components.GetAll<INinjectHttpApplicationPlugin>()
        .Select(c => c.GetRequestScope(ctx)).FirstOrDefault(s => s != null);

    return scope ?? Ninject.Infrastructure.StandardScopeCallbacks.Thread(ctx);
}
Also, don't forget that the kernel needs to be set aside as a static object for access later; you don't want to new up a Kernel every time you need it. I make mine accessible via "MyConfig.ObjectFactory". While this smells of the service locator anti-pattern, we're going to great lengths here to avoid the anti-pattern as much as possible.
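For reference, here is a minimal sketch of what that static accessor might look like; the MyConfig / ObjectFactory names are just my own convention, not anything Ninject provides:

public static class MyConfig
{
    private static IKernel _kernel;

    // Called once from CreateKernel() at application start-up.
    public static void SetKernel(IKernel kernel)
    {
        _kernel = kernel;
    }

    // Exposed for the few places that still need to resolve directly.
    public static IKernel ObjectFactory
    {
        get { return _kernel; }
    }
}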
Second, according to the commit description, the DefaultScopeCallback only affects explicit bindings with no explicit scope. So if, like me, you were depending on a bunch of implicit bindings that you hadn't added, you now need to configure them:
kernel.Bind(i => i.From(Assembly.GetExecutingAssembly(), Assembly.GetAssembly(typeof(Bll.MyConfig)))
    .SelectAllClasses()
    .BindToSelf());
If you don't like doing the above, there's another, arguably more elegant way of setting the default scope for all implicit bindings; see "Changing default object scope with Ninject 2.2".
Third, if you'd like to clear all cached objects from the scope at the end of each parallel operation, so that memory usage doesn't skyrocket due to EF caching or whatnot, here's how to clear the Ninject cache scoped to the current thread:
Parallel.ForEach(myList, i =>
{
    var threadDb = MyConfig.ObjectFactory.Get<MyContext>();
    CreateModelsForItem(i, threadDb);

    MyConfig.ObjectFactory.Components.Get<Ninject.Activation.Caching.ICache>().Clear(Thread.CurrentThread);
});
Note that I did some testing without that Clear line at the end, and it seemed like the EF context was getting re-used even after the HttpRequest finished and I generated the report several more times. This was not what I wanted, so the Clear operation was important. Really, the behavior I want is closer to InCallScope, but trying to get InRequestScope with InCallScope as a fallback is a can of worms I'll open another day.
I believe I understand the basic concepts of DI / IoC containers, having written a couple of applications using them and read a lot of Stack Overflow answers as well as Mark Seemann's book. There are still some cases I have trouble with, especially when it comes to integrating a DI container into a large existing architecture where the DI principle hasn't really been used (think big ball of mud).
I know the ideal scenario is to have a single composition root / object graph per operation, but in a legacy system this might not be possible without major refactoring (only the new, and some select refactored old, parts of the code could have dependencies injected through the constructor; the rest of the system would use the container as a service locator to interact with the new parts). This effectively means that a stack trace deep within an operation might include several object graphs, with calls being made back and forth between new subsystems (a single object graph until exiting into an old segment) and traditional subsystems (a service locator call at some point into code under the DI container).
With the (potentially faulty, I might be overthinking this or be completely wrong in assuming this kind of hybrid architecture is a good idea) assumptions out of the way, here's the actual problem:
Let's say we have a thread pool executing scheduled jobs of various types defined in a database (or any other external place). Each separate type of scheduled job is implemented as a class inheriting a common base class. When a job is started, it gets fed the information about which targets it should write its log messages to and the configuration it should use. The configuration could probably be handled by just passing the values as method parameters to whatever class needs them, but if the job implementation grows larger than, say, 10-20 classes, that doesn't seem very handy.
Logging is the larger problem. Subsystems the job calls probably also need to write things to the log, and usually in examples this is done by just requesting an instance of ILog in the constructor. But how does that work in this case, when we don't know the details / implementation until runtime? Since:
Due to (non DI container controlled) legacy system segments in the call chain (-> there potentially being multiple separate object graphs), child container cannot be used to inject the custom logger for specific sub-scope
Manual property injection would basically require the complete call chain (including all legacy subsystems) to be updated
A simplified example to help better perceive the problem:
class JobXImplementation : JobBase
{
    // injected through the constructor
    private readonly ILoggerFactory _loggerFactory;
    private readonly JobXExtraLogic _jobXExtras;

    public void Run(JobConfig configurationFromDatabase)
    {
        ILog log = _loggerFactory.Create(configurationFromDatabase.targets);

        // if there were no legacy parts in the call chain, I would register log as an instance
        // in a child container, Resolve the next part of the call chain, and everyone
        // requesting ILog would get the correct logging targets

        // do stuff
        _jobXExtras.DoStuff(configurationFromDatabase, log);
    }
}

class JobXExtraLogic
{
    public void DoStuff(JobConfig configurationFromDatabase, ILog log)
    {
        // call to legacy sub-system
        var old = new OldClass(log, configurationFromDatabase.SomeRandomSetting);
        old.DoOldStuff();
    }
}

class OldClass
{
    public void DoOldStuff()
    {
        // moar stuff
        var old = new AnotherOldClass();
        old.DoMoreOldStuff();
    }
}

class AnotherOldClass
{
    public void DoMoreOldStuff()
    {
        // call to a new subsystem
        var newSystemEntryPoint = DIContainerAsServiceLocator.Resolve<INewSubsystemEntryPoint>();
        newSystemEntryPoint.DoNewStuff();
    }
}

class NewSubsystemEntryPoint : INewSubsystemEntryPoint
{
    public void DoNewStuff()
    {
        // want to log something...
    }
}
I'm sure you get the picture by this point.
Instantiating old classes through DI is a non-starter since many of them use (often multiple) constructors to inject values instead of dependencies and would have to be refactored one by one. The caller basically implicitly controls the lifetime of the object and this is assumed in the implementations (the way they handle internal object state).
What are my options? What other kinds of problems could you possibly see in a situation like this? Is trying to only use constructor injection in this kind of environment even feasible?
Great question. In general, I would say that an IoC container loses a lot of its effectiveness when only a portion of the code is DI-friendly.
Books like Working Effectively with Legacy Code and Dependency Injection in .NET both talk about ways to tease apart objects and classes to make DI viable in code bases like the one you described.
Getting the system under test would be my first priority. I'd pick a functional area to start with, one with few dependencies on other functional areas.
I don't see a problem with moving beyond constructor injection to setter injection where it makes sense, and it might offer you a stepping stone to constructor injection. Adding a property is usually less invasive than changing an object's constructor.
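As a rough illustration of that stepping stone (all type names here are hypothetical), a property with a lazily created default lets existing callers keep compiling while tests or a container can substitute the dependency, and the property can later be promoted to a constructor parameter:

public class ReportBuilder
{
    private IReportRepository _repository;

    // Setter injection with a local default: old call sites keep working,
    // new code (or tests) can assign a different implementation.
    public IReportRepository Repository
    {
        get { return _repository ?? (_repository = new SqlReportRepository()); }
        set { _repository = value; }
    }
}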
It's not my intent to engage in a debate over validation in DDD, where the code belongs, etc. but to focus on one possible approach and how to address localization issues. I have the following behavior (method) on one of my domain objects (entities) which exemplifies the scenario:
public void ClockIn()
{
    if (WasTerminated)
    {
        throw new InvalidOperationException("Cannot clock-in a terminated employee.");
    }

    ClockedInAt = DateTime.Now;
    // ...
}
As you can see, when the ClockIn method is called, the method checks the state of the object to ensure that the Employee has not been terminated. If the Employee was terminated, we throw an exception consistent with the "don't let your entities enter an invalid state" approach.
My problem is that I need to localize the exception message. This is typically done (in this application) using an application service (ILocalizationService) that is imported using MEF in classes that require access to its methods. However, as with any DI framework, dependencies are only injected/imported if the object was instantiated by the container. This is typically not the case with DDD.
Furthermore, everything I've learned about DDD says that our domain objects should not have dependencies and those concerns should be handled external from the domain object. If that is the case, how can I go about localizing messages such as the one shown above?
This is not a novel requirement, as a great many business applications require globalization/localization. I'd appreciate some recommendations on how to make this work while staying consistent with the goals of DDD.
UPDATE
I failed to originally point out that our localization is all database driven, so we do have a Localization Service (via the injectable ILocalizationService interface). Therefore, using the static Resources class Visual Studio provides as part of the project is NOT a viable option.
ANOTHER UPDATE
Perhaps it would move the discussion along to state that the app is a RESTful service app. Therefore, the client could be a simple web browser. As such, I cannot code with any expectation that the caller can perform any kind of localization, code mapping, etc. When an exception occurs (and in this approach, attempting to put the domain object into an invalid state is an exception), an exception is thrown and the appropriate HTTP status code returned along with the exception message which should be localized to the caller's culture (Accept-Language).
Not sure how helpful this response is to you, but localization is really a front-end concern. Localizing exception messages as per your example is not common practice, as end users shouldn't see technical details such as those described in exception messages (and whoever will be troubleshooting your exceptions probably has a sufficient level of English even if it is not their native language).
Of course, if necessary you can always handle exceptions and present a localized, user-friendly message to your users in your front-end. But keeping it as a front-end concern should simplify your architecture.
As Clafou said, you shouldn't use exceptions for passing messages to the UI in any way.
If you still insist on doing this, one option is to throw an error code instead of the message:
throw new InvalidOperationException("ERROR_TERMINATED_EMPLOYEE_CLOCKIN");
and then, when it happens, do whatever you need to do with the exception (log, look up localization, whatever).
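For example, a catch block near the top of the stack could translate the code into a localized message through the ILocalizationService the question mentions. Note that GetString and the culture argument are assumptions here, since the exact members of that interface aren't shown:

try
{
    employee.ClockIn();
}
catch (InvalidOperationException ex)
{
    // ex.Message carries the error code, e.g. "ERROR_TERMINATED_EMPLOYEE_CLOCKIN".
    // GetString(key, culture) is a hypothetical member of ILocalizationService.
    var message = _localizationService.GetString(ex.Message, requestCulture);
    // return the localized message along with the appropriate HTTP status code
}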
If localisation is an important part of the domain/application, you should make it a first-class citizen and inject it wherever it belongs. I am not sure what you mean by "DDD says that our domain objects should not have dependencies" - please explain.
You are correct for trying to avoid adding internal dependencies to your domain model objects.
A better solution would be to handle the action inside of a service method such as:
public class EmployeeServiceImpl : EmployeeService
{
    public void ClockEmployeeIn(Employee employee)
    {
        if (employee.WasTerminated)
        {
            // Localize using a resource lookup code..
            throw new InvalidOperationException("Error_Clockin_Employee_Terminated");
        }

        employee.ClockedInAt = DateTime.Now;
    }
}
You can then inject the service using your DI framework at the point where you will be making the clockin call and use the service to insulate your domain objects from changes to business logic.
I am using Entity Framework for my project, with MVC as the front end, and I have implemented the unit of work pattern together with the repository pattern.
I have a service layer on top of the repositories to handle business logic.
My question is: where should I handle exceptions?
Is it a good idea to let all exceptions pass through to the presentation layer and handle them in the controller, or do I need to handle them in the lower layers?
Well, the general idea is not to let the UI handle all exceptions, nor does that make much sense. Let's say you have a data layer implemented with ADO.NET. The general pattern here is to handle SqlExceptions in the data layer and then wrap the SqlException in a more meaningful DatabaseLayerException that the upper layers should handle - and you follow this pattern all the way to the top, so you can have something like InfrastructureException, ApplicationException, etc.
At the very top, you catch all ApplicationExceptions that went unhandled (and you make all your exceptions inherit this one for polymorphism), and you catch all unhandled Exceptions as a special case that is not likely to happen, and try to recover from it.
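A rough sketch of the wrapping idea at the data layer; DatabaseLayerException and the repository names here are your own types rather than framework classes, and SqlException comes from System.Data.SqlClient:

public Customer GetCustomer(int id)
{
    try
    {
        // run the ADO.NET query and map the result (details omitted)
        return ReadCustomerFromDatabase(id);
    }
    catch (SqlException ex)
    {
        // Wrap, keeping the original exception as InnerException so no detail is lost.
        throw new DatabaseLayerException("Failed to load customer " + id, ex);
    }
}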
I also suggest the use of logging, either manually or with AOP - you will find plenty of resources online (perhaps log4net?).
I think in any exception handling strategy you have these options:
Recover from the exception if possible (for instance, the server is down, so wait a while and try again).
Ignore the exception because it's not serious enough, or for whatever other reason.
Bubble it upwards. Multiple strategies exist here, such as a bare throw; or throw new Exception("message", innerException); to name a few.
Lastly, there is the global option of logging the exception somewhere, sending an email, etc. I don't include this among the three options above because it applies to all of them.
So having said that, I think that at each layer in the application described above you have to ask yourself in which of the three ways above you can handle the exception. If you cannot recover from it or ignore it, then you should bubble it upwards to a friendly error page where you do the final cleaning up and presentation of the exception.
I'm refactoring a class and adding a new dependency to it. The class is currently taking its existing dependencies in the constructor. So for consistency, I add the parameter to the constructor.
Of course, there are a few subclasses plus even more for unit tests, so now I am playing the game of going around altering all the constructors to match, and it's taking ages.
It makes me think that using properties with setters is a better way of getting dependencies. I don't think injected dependencies should be part of the interface for constructing an instance of a class. You add a dependency and now all your users (subclasses and anyone instantiating you directly) suddenly know about it. That feels like a breach of encapsulation.
This doesn't seem to be the pattern with the existing code here, so I am looking to find out what the general consensus is, pros and cons of constructors versus properties. Is using property setters better?
Well, it depends :-).
If the class cannot do its job without the dependency, then add it to the constructor. The class needs the new dependency, so you want your change to break things. Also, creating a class that is not fully initialized ("two-step construction") is an anti-pattern (IMHO).
If the class can work without the dependency, a setter is fine.
The users of a class are supposed to know about the dependencies of a given class. If I had a class that, for example, connected to a database, and didn't provide a means to inject a persistence layer dependency, a user would never know that a connection to the database would have to be available. However, if I alter the constructor I let the users know that there is a dependency on the persistence layer.
Also, to prevent yourself from having to alter every use of the old constructor, simply apply constructor chaining as a temporary bridge between the old and new constructor.
public class ClassExample
{
    public ClassExample(IDependencyOne dependencyOne, IDependencyTwo dependencyTwo)
        : this(dependencyOne, dependencyTwo, new DependencyThreeConcreteImpl())
    { }

    public ClassExample(IDependencyOne dependencyOne, IDependencyTwo dependencyTwo, IDependencyThree dependencyThree)
    {
        // Set the properties here.
    }
}
One of the points of dependency injection is to reveal what dependencies the class has. If the class has too many dependencies, then it may be time for some refactoring to take place: Does every method of the class use all the dependencies? If not, then that's a good starting point to see where the class could be split up.
Of course, putting dependencies in the constructor means that you can validate them all at once. If you assign things into read-only fields then you have some guarantees about your object's dependencies right from construction time.
It is a real pain adding new dependencies, but at least this way the compiler keeps complaining until it's correct. Which is a good thing, I think.
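A small illustration of that point (the names are made up): guard clauses in the constructor plus readonly fields mean the object is either fully wired or never constructed at all.

public class InvoiceService
{
    private readonly IInvoiceRepository _repository;

    public InvoiceService(IInvoiceRepository repository)
    {
        // Validate once, up front; after this the field can never be null or reassigned.
        if (repository == null)
            throw new ArgumentNullException("repository");

        _repository = repository;
    }
}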
If you have a large number of optional dependencies (which is already a smell) then setter injection is probably the way to go. Constructor injection reveals your dependencies better, though.
The general preferred approach is to use constructor injection as much as possible.
Constructor injection states exactly which dependencies are required for the object to function properly - nothing is more annoying than newing up an object and having it crash when calling a method on it because some dependency is not set. The object returned by a constructor should be in a working state.
Try to have only one constructor, it keeps the design simple and avoids ambiguity (if not for humans, for the DI container).
You can use property injection when you have what Mark Seemann calls a local default in his book "Dependency Injection in .NET": the dependency is optional because you can provide a fine working implementation but want to allow the caller to specify a different one if needed.
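A sketch of that "local default" style; the logger types here are placeholders rather than a specific library:

public class OrderProcessor
{
    // Sensible local default; the class works out of the box.
    private ILogger _logger = new ConsoleLogger();

    // Optional dependency: the caller or the container may replace the default.
    public ILogger Logger
    {
        get { return _logger; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("value");
            _logger = value;
        }
    }
}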
(Former answer below)
I think that constructor injection is better if the injection is mandatory. If this adds too many constructors, consider using factories instead of constructors.
Setter injection is nice if the injection is optional, or if you want to change it halfway through. I generally don't like setters, but it's a matter of taste.
It's largely a matter of personal taste.
Personally I tend to prefer setter injection, because I believe it gives you more flexibility in the way you can substitute implementations at runtime.
Furthermore, constructors with a lot of arguments are not clean in my opinion, and the arguments provided to a constructor should be limited to non-optional arguments.
As long as the class's interface (API) is clear about what it needs to perform its task, you're good.
I prefer constructor injection because it helps "enforce" a class's dependency requirements. If it's in the constructor, a consumer has to supply the objects to get the app to compile. If you use setter injection they may not know they have a problem until run time - and depending on the object, it might be late in run time.
I still use setter injection from time to time when the injected object itself needs a bunch of work, like initialization.
I personally prefer the Extract and Override "pattern" over injecting dependencies in the constructor, largely for the reason outlined in your question. You can set the properties as virtual and then override the implementation in a derived testable class.
I prefer constructor injection, because this seems most logical. It's like saying my class requires these dependencies to do its job. If it's an optional dependency then properties seem reasonable.
I also use property injection for setting things that the container does not have a reference to, such as an ASP.NET View on a presenter created using the container.
I don't think it breaks encapsulation. The inner workings should remain internal, and the dependencies deal with a different concern.
One option that might be worth considering is composing complex multiple dependencies out of simple single dependencies. That is, define extra classes for compound dependencies. This makes things a little easier with respect to constructor injection - fewer parameters per call - while still maintaining the must-supply-all-dependencies-to-instantiate property, as sketched below.
Of course it makes most sense if there's some kind of logical grouping of dependencies, so the compound is more than an arbitrary aggregate, and if there are multiple dependents for a single compound dependency - but the parameter block "pattern" has been around for a long time, and most of those I've seen have been pretty arbitrary.
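Something along these lines, with all names hypothetical: three related dependencies are grouped behind one "parameter object", so a dependent class's constructor takes a single argument.

public class PersistenceContext
{
    public PersistenceContext(IConnectionFactory connections, IQueryCache cache, IDbLogger logger)
    {
        Connections = connections;
        Cache = cache;
        Logger = logger;
    }

    public IConnectionFactory Connections { get; private set; }
    public IQueryCache Cache { get; private set; }
    public IDbLogger Logger { get; private set; }
}

// A dependent class now needs just one constructor parameter:
public class CustomerRepository
{
    private readonly PersistenceContext _context;

    public CustomerRepository(PersistenceContext context)
    {
        _context = context;
    }
}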
Personally, though, I'm more a fan of using methods/property-setters to specify dependencies, options etc. The call names help describe what is going on. It's a good idea to provide example this-is-how-to-set-it-up snippets, though, and make sure the dependent class does enough error checks. You might want to use a finite state model for the setup.
I recently ran into a situation where I had multiple dependencies in a class, but only one of the dependencies was necessarily going to change in each implementation. Since the data access and error logging dependencies would likely only be changed for testing purposes, I added optional parameters for those dependencies and provided default implementations of those dependencies in my constructor code. In this way, the class maintains its default behavior unless overridden by the consumer of the class.
Using optional parameters can only be accomplished in frameworks that support them, such as .NET 4 (for both C# and VB.NET, though VB.NET has always had them). Of course, you can accomplish similar functionality by simply using a property that can be reassigned by the consumer of your class, but you don't get the advantage of immutability provided by having a private interface object assigned to a parameter of the constructor.
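As a sketch of that approach (the type names are invented): C# optional parameters must default to a compile-time constant, so null acts as the marker and the real default implementations are supplied inside the constructor, still ending up in readonly fields.

public class ReportGenerator
{
    private readonly IDataAccess _dataAccess;
    private readonly IErrorLogger _logger;

    public ReportGenerator(IDataAccess dataAccess = null, IErrorLogger logger = null)
    {
        // Fall back to the default implementations unless the consumer overrides them.
        _dataAccess = dataAccess ?? new SqlDataAccess();
        _logger = logger ?? new FileErrorLogger();
    }
}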
All of this being said, if you are introducing a new dependency that must be provided by every consumer, you're going to have to refactor your constructor and all code that consumes your class. My suggestions above really only apply if you have the luxury of being able to provide a default implementation for all of your current code but still want the ability to override the default implementation if necessary.
Constructor injection does explicitly reveal the dependencies, making code more readable and less prone to unhandled run-time errors if arguments are checked in the constructor, but it really does come down to personal opinion, and the more you use DI the more you'll tend to sway back and forth one way or the other depending on the project. I personally have issues with code smells like constructors with a long list of arguments, and I feel that the consumer of an object should know the dependencies in order to use the object anyway, so this makes a case for using property injection. I don't like the implicit nature of property injection, but I find it more elegant, resulting in cleaner-looking code. But on the other hand, constructor injection does offer a higher degree of encapsulation, and in my experience I try to avoid default constructors, as they can have an ill effect on the integrity of the encapsulated data if one is not careful.
Choose injection by constructor or by property wisely based on your specific scenario. And don't feel that you have to use DI just because it seems necessary and it will prevent bad design and code smells. Sometimes it's not worth the effort to use a pattern if the effort and complexity outweighs the benefit. Keep it simple.
This is an old post, but in case it is needed in the future, maybe this will be of some use:
https://github.com/omegamit6zeichen/prinject
I had a similar idea and came up with this framework. It is probably far from complete, but it is an idea of a framework focusing on property injection
It depends on how you want to implement it.
I prefer constructor injection wherever I feel the values that go into the implementation don't change often. E.g., if the company strategy is to go with an Oracle server, I will configure my datasource values for a bean achieving connections via constructor injection.
Otherwise, if my app is a product and there is a chance it can connect to any database of the customer, I would implement such db configuration and multi-brand support through setter injection. I have just taken one example, but there are better ways of implementing the scenarios I mentioned above.
When to use Constructor injection?
When we want to make sure that the object is created with all of its dependencies, and to ensure that required dependencies are not null.
When to use Setter injection?
When we are working with optional dependencies that can be assigned reasonable default values within the class. Otherwise, not-null checks must be performed everywhere the code uses the dependency.
Additionally, setter methods make objects of that class open to reconfiguration or re-injection at a later time.
Sources:
Spring documentation,
Java Revisited