I have an architectural problem (and, on top of that, I am not very familiar with Castle Windsor, which my application uses as its container).
I have a Web application that implements the unit of work design pattern.
UnitOfWork implements the IDisposable interface.
I see no particular reason for the actions performed in UnitOfWork's Dispose method (they have already been done at an earlier point).
Also, all my components are registered with the transient lifestyle.
Almost every component also uses a repository instance, which is likewise transient and also implements IDisposable (again, for no particular reason).
Most of those components are also used by some desktop applications.
The problem I ran into was a memory leak caused by transient components that implement IDisposable, as described here: http://nexussharp.wordpress.com/2012/04/21/castle-windsor-avoid-memory-leaks-by-learning-the-underlying-mechanics/ .
I also noticed that Dispose never actually gets called, neither from the web application nor from the desktop clients (and I found more posts saying it is only called when Release is called on the component).
One option for fixing the memory leak (and without resorting to NoTrackReleasePolicy!) is to simply stop implementing IDisposable.
But I guess this would be similar to specifying NoTrackReleasePolicy, which can supposedly lead to even bigger problems than a memory leak (although I do not see how?) - that is my first question.
I also tried specifying PerWebRequest instead of Transient, but how would the components behave in the desktop applications in this case, since there is no web request/context? That is my second question.
One thing I would rather not consider is manually calling Release for every component I resolve...
Any ideas for solving this in the safest/most elegant way, with the fewest changes, would be much appreciated...
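To make it concrete, here is roughly what the registration looks like (a simplified sketch; the type names are illustrative, not my real code):

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    // Both components are transient and implement IDisposable, so Windsor's
    // default release policy keeps a reference to every instance it resolves
    // until Release is called - which is where the leak comes from.
    var container = new WindsorContainer();
    container.Register(
        Component.For<IUnitOfWork>().ImplementedBy<UnitOfWork>().LifestyleTransient(),
        Component.For<IRepository>().ImplementedBy<Repository>().LifestyleTransient());

    var unitOfWork = container.Resolve<IUnitOfWork>();
    // ... used and forgotten; Release(unitOfWork) is never called.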
There are a few steps you should take:
Stop whatever you're doing.
Learn about the tools you're trying to use, especially the part about lifestyles.
Make sure you understand why Windsor is tracking the components.
Make sure you understand why tracking is important and why NoTrackingReleasePolicy is a Bad Idea™.
Make sure you understand how Unit of Work works.
Now that you know a few basics: if you're looking for inspiration on how to solve this on the web, have a look at this tutorial, which shows how to implement UoW with Windsor in a web app.
On the desktop it's more complicated and scenario-dependent; just make sure you're not trying to reuse your registration code between the two apps.
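For the desktop side, one common approach (a sketch based on Windsor's scoped lifestyle, if I remember the API correctly - check the docs before relying on it) is to register the unit of work as scoped rather than PerWebRequest, and let an explicit scope play the role the web request plays on the web:

    using Castle.MicroKernel.Lifestyle;
    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    var container = new WindsorContainer();
    // PerWebRequest on the web; an explicit scope in a desktop app.
    container.Register(
        Component.For<IUnitOfWork>().ImplementedBy<UnitOfWork>().LifestyleScoped());

    using (container.BeginScope())
    {
        var unitOfWork = container.Resolve<IUnitOfWork>();
        // ... do the work, commit ...
    }   // scope ends: Windsor disposes and releases the unit of work here

That way nothing leaks and nobody has to call Release by hand.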
Related
I'm working on an iOS app where we need different binaries for each customer based on their needs. A customer may want to change all the colors, icons and texts. We can do that through white labeling process. The problem here, though, is when they ask for different behavior, for instance, removing login screen and making it optional to login.
I thought we can use dependency injections and use different handlers for each customer if needed. For instance, we can have LoginHandler1 and LoginHandler2, both implementing ILoginHandler and inherit from UIViewController.
However, dependency injection is costly: it slows down the app, because resolving is expensive compared to normal instantiation.
The other way is to define all these behaviors in the app and enable/disable them in a plist file, like "is login optional? yes/no".
Any suggestions?
Thanks
You should create the entire object graph up-front, in the composition root. Object creation, and constructor injection, should not take much time at all as long as your constructors are not doing any actual work.
That being said, there are times when creating the entire object graph at the start of the application may take longer than is acceptable. In those cases, you can use lazy-loading to defer the costly initialization until later - while still creating the objects in the composition root.
Mark Seemann describes this approach in more detail here: Compose object graphs with confidence.
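A minimal sketch of that idea in C# (illustrative names; the same pattern applies on iOS): wrap the costly dependency in Lazy<T>, so the graph is still wired in the composition root but the expensive constructor only runs on first use.

    using System;

    public interface ILoginHandler { void Show(); }

    public class HeavyLoginHandler : ILoginHandler
    {
        public HeavyLoginHandler() { /* costly initialization */ }
        public void Show() { /* ... */ }
    }

    public class MainScreen
    {
        private readonly Lazy<ILoginHandler> _loginHandler;

        // Composed up-front, but HeavyLoginHandler's constructor only
        // runs the first time .Value is touched.
        public MainScreen(Lazy<ILoginHandler> loginHandler)
        {
            _loginHandler = loginHandler;
        }

        public void ShowLogin() { _loginHandler.Value.Show(); }
    }

    // Composition root:
    // var screen = new MainScreen(new Lazy<ILoginHandler>(() => new HeavyLoginHandler()));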
I thought we can use dependency injections and use different handlers for each customer if needed.
You thought right. Flexibility is one of the main reasons people use DI.
However, use of dependency injection is costly, it slows down the app because resolving is expensive comparing to normal instantiation.
It really doesn't cost that much at all. Have you tried it yourself? Unless the object being injected is very expensive to instantiate, you have no real reason to stay away from DI and Inversion of Control. Also, as @Lilshieste noted above, creating the object graph up front (see AppDelegate) will probably make this even less of a problem.
A good way of doing that is described here:
http://cocoapatterns.com/passing-data-between-view-controllers/ and here http://cocoapatterns.com/ios-view-controller-transitions-mediator-pattern/
The other way is to define all these behaviors in the app and enable/disable them in a plist file. like "is login optional? yes/no"
While less "elegant", this solution is a pretty useful one, especially if the project is not really big in terms of number of classes and VCs. It is also the easiest one to implement if the app code is already laid out and introducing major design changes would ask for lots of refactoring.
Always take action based on the task at hand, there is rarely if ever a single solution to a software design problem.
When using an IoC library like Ninject, is there a performance cost, or is it mostly a one-time hit during Application_Start?
Are you having performance problems? Do you have specific targets to meet that you aren't meeting? Have you used a profiler to trace the performance problems to your use of an IoC framework?
If the answer to any of these questions is "no", then the real answer to your question is "it doesn't matter". If the answer to all of them is "yes", then you already know the answer.
But, yes, of course there is a performance cost to using an IoC framework. Using new is a single operation, whereas resolving through an IoC container does more work than that, so it will have some cost. Does it matter to your application? Probably not. You've got the internet at one end, presumably a database at the other, and most likely some internal networking in between. Compiled code is rarely the bottleneck in web applications.
Depends on how you use it, but unless you are instantiating thousands of objects at once, there shouldn't be any noticeable bottleneck.
When an object is resolved from an IoC container, typically the container will use reflection to scan that class's constructors and public properties, and then loop through some internal collection to find the best match for each service that object requires. The instantiation is going to take as long as it would when you instantiate it manually, plus a small amount of time for the reflection calls.
If you're using a transient lifestyle, and resolving an object inside a loop, you MAY notice a slight performance hit, but at that point I'd ask if there is any better way of executing that code.
And if you haven't yet noticed a performance hit, then it shouldn't even matter to you. Don't optimize until you absolutely have to.
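To make the cost concrete, here is roughly the work a container does per resolve - a toy sketch for illustration, not any real container's code:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class TinyContainer
    {
        private readonly Dictionary<Type, Type> _map = new Dictionary<Type, Type>();

        public void Register<TService, TImpl>() where TImpl : TService
        {
            _map[typeof(TService)] = typeof(TImpl);
        }

        public T Resolve<T>() { return (T)Resolve(typeof(T)); }

        private object Resolve(Type service)
        {
            Type impl = _map.ContainsKey(service) ? _map[service] : service;

            // The "expensive" part: reflect over the constructors, pick the
            // greediest one, and recursively resolve each parameter.
            var ctor = impl.GetConstructors()
                           .OrderByDescending(c => c.GetParameters().Length)
                           .First();
            var args = ctor.GetParameters()
                           .Select(p => Resolve(p.ParameterType))
                           .ToArray();
            return ctor.Invoke(args);
        }
    }

A handful of reflection calls per object: measurable in a tight loop, invisible next to a database call.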
Check out Munq.DI. It is a simple and VERY fast IoC container inspired by Funq. It uses lambda expressions to define the creation method, and supports Container, Cache, Session, Request, and AlwaysNew lifetime managers.
Sample application includes integration with ASP.NET MVC.
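The lambda-registration style is what makes containers like this fast: resolving invokes a stored delegate instead of doing reflection. A generic sketch of the idea (this is not Munq's actual API):

    using System;
    using System.Collections.Generic;

    public class LambdaContainer
    {
        private readonly Dictionary<Type, Func<LambdaContainer, object>> _factories =
            new Dictionary<Type, Func<LambdaContainer, object>>();

        // Registration stores a delegate; no reflection is needed later.
        public void Register<T>(Func<LambdaContainer, T> factory)
        {
            _factories[typeof(T)] = c => factory(c);
        }

        // Resolution is a dictionary lookup plus a delegate call.
        public T Resolve<T>() { return (T)_factories[typeof(T)](this); }
    }

    // Usage (names invented):
    // container.Register<IRepository>(c => new Repository(c.Resolve<IDbContext>()));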
I've read a lot about IoC and DI, but I'm not really convinced that you gain a lot by using them in most situations.
If you are writing code that needs pluggable components, then yes, I see the value. But if you are not, then I question whether changing a dependency from a class to an interface is really gaining you anything, other than more typing.
In some cases, I can see where IoC and DI help with mocking, but if you're not using mocking or TDD, then what's the value? Is this a case of YAGNI?
I doubt you will find any hard data on it, so I will add some thoughts.
First, you don't use DI (or other SOLID principles) because it helps you do TDD. It's the other way around: you do TDD because it helps you with the design - which usually means you end up with code that follows those principles.
Discussing why to use interfaces is a different matter, see: https://stackoverflow.com/questions/667139/what-is-the-purpose-of-interfaces.
I will assume you agree that having your classes do many different things results in messy code. Thus, I am assuming you are already going for SRP.
Because you have different classes that do specific things, you need a way to relate them. If you relate them inside the classes (i.e. the constructors), you get plenty of code that uses specific versions of the classes. This means that making changes to the system will be hard.
You are going to need to change the system; that's a fact of software development. You can invoke YAGNI about not adding specific extra features, but not about never needing to change the system. In my case that's really important, as I do weekly sprints.
I use a DI framework where configuration is done through code. With a really small amount of configuration code you hook up lots of different relations, so once you take away the interface-vs-concrete-class debate, you are actually saving typing, not the other way around. Also, when a concrete class appears in a constructor, the framework hooks it up automatically (I don't have to configure it) while building the rest of the relations. It also lets me control some objects' lifetimes; in particular, I can configure an object as a singleton and it hands out the single instance every time.
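For example (a Windsor-style sketch with invented names; my actual framework differs only in syntax):

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    public interface ISettings { }
    public class AppSettings : ISettings { }
    public interface IOrderService { }
    // The concrete ISettings dependency gets wired up automatically.
    public class OrderService : IOrderService
    {
        public OrderService(ISettings settings) { }
    }

    public static class Bootstrapper
    {
        public static IWindsorContainer Configure()
        {
            var container = new WindsorContainer();
            // Two lines of configuration hook up the whole relation;
            // the singleton hands out one instance every time.
            container.Register(
                Component.For<ISettings>().ImplementedBy<AppSettings>().LifestyleSingleton(),
                Component.For<IOrderService>().ImplementedBy<OrderService>().LifestyleTransient());
            return container;
        }
    }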
Also note that simply using these practices isn't extra overhead. Using them for the first time is what causes the overhead (because of the learning process and, in some cases, a mindset change).
Bottom line: you ain't gonna need to put all those constructor calls all over the place to go faster.
The most significant gains from DI are not necessarily due to the use of interfaces. You do not actually need to use interfaces to have beneficial effects of dependency injection. If there's only one implementation you can probably inject that directly, and you can use a mix of classes and interfaces.
You're still getting loose coupling, and in quite a few development environments you can introduce that interface with a few keystrokes if needed.
Hard data on the value of loose coupling I cannot give, but it's been a vision in textbooks for as long as I can remember. Now it's real.
DI frameworks also give you some quite amazing features when it comes to hierarchical construction of large structures. Instead of looking for the leanest DI framework around, I'd recommend you look for a full-featured one. Less isn't always more, at least when it comes to learning about new ways of programming. Then you can go for less.
Apart from testing, the loose coupling alone is worth it.
I've worked on components for an embedded Java system, which had a fixed configuration of objects after startup (about 50 mostly different objects).
The first component was legacy code without dependency injection, and the subobjects were created all over the place. It happened several times that, for some modification, a piece of code needed to talk to an object that was only available three constructors away. So what can you do but add another parameter to the constructor and pass it through, or even store it in a field to pass it on later. In the long run, things became even more tangled than they already were.
The second component I developed from scratch, using dependency injection (without knowing it at the time). That is, I had one factory which constructed all the objects and injected them on a need-to-know basis. Adding another dependency was easy: just add it to the factory and the object's constructor (or add a setter to avoid loops). No unrelated code needed to be touched.
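In C# the factory looked roughly like this (a sketch with made-up types; the original was Java):

    public class MessageBus { }
    public class Sensor { public Sensor(MessageBus bus) { } }
    public class Display { public Display(MessageBus bus) { } }
    public class Controller { public Controller(Sensor sensor, Display display) { } }

    public static class ComponentFactory
    {
        // One place constructs the fixed object graph and injects each
        // object's collaborators. Adding a dependency touches only this
        // factory and the receiving constructor - nothing in between.
        public static Controller Build()
        {
            var bus = new MessageBus();
            return new Controller(new Sensor(bus), new Display(bus));
        }
    }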
I'm fairly new to the DI concept, but I have been using it to some extent in my designs - mainly by 'injecting' interfaces into constructors and having factories create my concrete classes. Okay, it's not configuration-based - but it's never NEEDED to be.
I started to look at DI frameworks such as Spring.NET and Castle Windsor, and stumbled across this blog by Ayende.
What I got from this is
A) DI frameworks are awesome, but
B) It means we don't have to worry about how our system is designed in terms of dependencies.
For me, I'm used to thinking hard about how to loosely couple my system while still keeping some sort of control over dependencies.
I'm a bit scared of losing this control, and it being just a free-for-all. ClassA needs ClassB = no problem, just ask and ye shall receive! Hmmm.
Or is that just the point and this is the future and I should just go with it?
Thoughts?
One basic OO principle is that you want your code to depend on interfaces, not implementations; DI is how we do that. Historically, here is how it evolved:
People initially created classes they depended upon by "new'ing" them:
IMyClass myClass = new MyClass();
Then we wanted to remove instantiation so there were static methods to create them:
IMyClass myClass = MyClass.Create();
Then we no longer depended on the lifecycle of the class, but still depended on it for instantiation, so then we used the factory:
IMyClass myClass = MyClassFactory.Create();
This moved the direct dependency from the consuming code to the factory, but we still had the dependency on MyClass indirectly, so we used the service locator pattern like this:
IMyClass myClass = (IMyClass)Context.Find("MyClass");
That way we were only dependent on an interface and the name of a class in our code. But it can be made better: why not depend simply on an interface in our code? We can, with dependency injection. If you use property injection, you simply put a property setter for the interface you want in your code. You then configure what the actual dependency is outside of your code, and the container manages the lifecycle of that class and your class.
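Completing the progression above, the DI step looks like this (property injection first, then the constructor-injection variant):

    public interface IMyClass { }

    public class Consumer
    {
        // Property injection: the container assigns an implementation from
        // outside; this class names only the interface.
        public IMyClass MyClass { get; set; }
    }

    public class Consumer2
    {
        private readonly IMyClass _myClass;

        // Constructor injection: same idea, but the dependency is mandatory.
        public Consumer2(IMyClass myClass)
        {
            _myClass = myClass;
        }
    }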
I wouldn't say that you don't have to think about dependencies, but using an IoC framework allows you to change the types which fulfill the dependencies with little or no hassle, since all the wiring is done in a central place.
You still have to think about what interfaces you need and getting them right is not always a trivial matter.
I don't see how a loosely coupled system could be considered lazily designed. If you go through all the trouble of getting to know an IoC framework, you're certainly not taking the shortcut.
I think that, ideally, if you already have a loosely coupled system, using a container will only move the place where you wire up dependencies out of your code, making them softer, and let your system depend on the container to build your object graph.
In reality, attempting to use the container will probably show you that your system is not as loosely coupled as you thought it was, so in this way it may help you create a better design.
Well, I'm a newbie at this subject, so maybe I'm not entirely right.
Cheers.
I must be high, because I thought the whole point of dependency injection is that the code that does stuff simply declares its dependencies so that someone who's creating it will know what to create with it for it to operate correctly.
How does dependency injection make you lazy? Maybe in that it forces someone else to deal with the dependencies? That's the whole point! That someone else doesn't really need to be someone else; it just means the code you write doesn't need to be concerned with its dependencies, because it declares them up front. And they can be managed because they are explicit.
Edit: Added the last sentence above.
Dependency injection can be a bit difficult to get used to - instead of a direct path through your code, you end up looking at seemingly unconnected objects, and a given action traces its path through a series of these objects whose coupling seems, to be kind, abstract.
It's a paradigm shift similar to getting used to OO. The intention is that your objects are written to have a focused, single responsibility, using the dependent objects as they're declared by the interface and handled by the framework.
This not only makes loose coupling easier, it makes it almost unavoidable, which makes it much simpler to do things like run your object in a mock environment - the IoC container takes the place of the run environment.
I would disagree and say they lead to better design in many cases. Too often devs create components that do too much and have too many dependencies. With IoC, I find developers tend to migrate to a better way of thinking and produce smaller, simpler components that can be assembled together into an app.
If they follow the spirit and write tests, they will further refine their components. Both exercises force you to write better, testable components, which fits very well with how IoC containers work.
You still have to worry. My team uses Castle Windsor in our current project. It annoys me that it delays dependency lookup from compile time to runtime.
Without Castle Windsor, you write code, and if you haven't sorted out your dependencies - bang, the compiler complains. With Castle Windsor, you configure the dependencies in an XML file. They're still there, just separated out from the bulk of your code. The problem is, your code can compile fine even if you make a mess of defining the dependencies. At runtime, Castle Windsor uses reflection to look up a concrete class to service each request for an interface; if the dependency can't be found, you get an error at runtime.
I think Castle Windsor does check that the dependencies exist when it's initialized, so it can throw an error fairly quickly. But it's still annoying, when using a strongly typed language, that this fuss can't be sorted out at compile time.
So... anyway. Dependencies still seriously matter. You'll almost certainly pay more attention to them using DI than before.
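To illustrate the failure mode (a sketch; the file name and interface here are hypothetical):

    using Castle.Windsor;
    using Castle.Windsor.Configuration.Interpreters;

    public interface IEmailSender { void Send(string message); }

    public static class Program
    {
        public static void Main()
        {
            // Compiles fine even if windsor.xml maps IEmailSender to nothing,
            // or to a type that no longer exists.
            var container = new WindsorContainer(new XmlInterpreter("windsor.xml"));

            // The mistake only surfaces here, at runtime.
            var sender = container.Resolve<IEmailSender>();
        }
    }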
We wrote a custom DI framework. Though it took some time to get right, it was worth the effort. We divided the whole system into layers, and the dependency injection in each layer is bound by rules. E.g. in the Log layer, CUD and BO interfaces cannot be injected.
We are still contemplating the rules; some of them change every week, while the others remain the same.
I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes whose constructors take in a boatload of interfaces. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels that there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it doesn't really deliver one.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class implementing interface B still has to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses I've found valuable for an Inversion of Control pattern:
A plugin architecture. By defining the right entry points, you define the contract for the service that must be provided.
Workflow-like architecture, where you can connect components dynamically, wiring the output of one component to the input of another.
Per-client application. Let's say you have various clients, each paying for a set of "features" of your product. Using dependency injection you can easily ship just the core components plus the "added" components that provide the features the client has paid for (see the sketch after this list).
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as the application needs them, including RTL or LTR user interfaces.
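For instance, the per-client case might be composed like this (a Windsor-flavored sketch; all the type and feature names are invented):

    using System.Collections.Generic;
    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    public interface IReportService { }
    public class ReportService : IReportService { }
    public interface IExporter { }
    public class PdfExporter : IExporter { }

    public static class ClientComposition
    {
        public static IWindsorContainer Compose(ISet<string> paidFeatures)
        {
            var container = new WindsorContainer();
            // Core components every client gets.
            container.Register(Component.For<IReportService>().ImplementedBy<ReportService>());

            // "Added" components only for clients who paid for the feature.
            if (paidFeatures.Contains("pdf-export"))
                container.Register(Component.For<IExporter>().ImplementedBy<PdfExporter>());

            return container;
        }
    }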
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got issues with single responsibility and static dependencies. Isolate code that performs a single function into a class, and break the static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. Doing this makes it trivial to test the two parts separately.
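A before/after sketch of that refactoring (invented names; the stubs are just for illustration):

    public class Order { public string Confirmation = "..."; }
    public static class Smtp { public static void Send(string message) { } }

    // Before: the static dependency is baked in, so testing Process
    // means sending real mail.
    public class OrderProcessor
    {
        public void Process(Order order) { Smtp.Send(order.Confirmation); }
    }

    // After: the dependency sits behind an interface that the DI framework
    // (or a test) supplies, so the logic can be tested with a fake sender.
    public interface IMailSender { void Send(string message); }

    public class TestableOrderProcessor
    {
        private readonly IMailSender _mail;

        public TestableOrderProcessor(IMailSender mail) { _mail = mail; }

        public void Process(Order order) { _mail.Send(order.Confirmation); }
    }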
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architectures). At a lot of companies, the time it would take to test your logic would be almost prohibitive if your components DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The work involved in changing resources should be small (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
For example, cross-project, I would provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world... every decision boils down to a judgment call. (I forget where I read that.)
I think it's more of an experience / flight-time call.
Basically, if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see "ClassA and its dependencies" as one block for substitution, then I probably won't use DI for A's dependencies.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway, but if there's any doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another thing I wrestle with is where to use dependency injection. Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/MicroKernel; I have no experience with anything else, but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well. If the class is so simple that it doesn't need unit tests, feel free to instantiate it in the class; otherwise, you probably want to take it as a dependency through the constructor.
As for whether you should create an interface versus just making your methods and properties virtual: I think you should go the interface route either a) if you can see the class having some level of reusability in a different application (e.g. a logger), or b) if, because of the number of constructor parameters or a significant amount of logic in the constructor, the class is otherwise difficult to mock.