We are building a computational engine in which a number of objects interact to perform the computation. The objects have dependencies among themselves and model a subset of a real-world system. We are building the engine in phases, incrementally modelling more parts of the system, so the dependency graph may change as we progress. We could state the dependencies between the objects explicitly in code, but that portion of code might then have to change in the future. Would using IoC alleviate this problem, or would it be overkill?
There are several ways that applying dependency injection can be useful:
It allows you to abstract code that needs to be tested in isolation. In your case, you might want to split the computational engine into multiple parts so that the smaller parts are easier to test, or to abstract the database engine that the computational engine uses internally.
It allows the engine to be developed by multiple teams. By depending on the abstraction that another team provides (or that you specified for that team), you can make progress without being blocked by the other team's progress.
If the engine consists of smaller parts that must be changeable (the Specification pattern), injecting abstractions for those parts can help you achieve this. You could even swap them at runtime if you simply depend upon an abstraction.
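To illustrate, here is a minimal C# sketch of such a seam (IIntegrator and Engine are hypothetical names, not taken from your system): the engine depends only on an abstraction, so the concrete part can be swapped in the composition code without touching the engine itself.

```csharp
// Hypothetical example: the engine depends on an abstraction,
// so the concrete integrator can be swapped without changing Engine.
public interface IIntegrator
{
    double Step(double state, double dt);
}

public class EulerIntegrator : IIntegrator
{
    // Toy model: exponential decay, state' = -0.5 * state.
    public double Step(double state, double dt) => state + dt * (-0.5 * state);
}

public class Engine
{
    private readonly IIntegrator integrator;

    public Engine(IIntegrator integrator) => this.integrator = integrator;

    public double Run(double initialState, double dt, int steps)
    {
        var state = initialState;
        for (var i = 0; i < steps; i++)
            state = integrator.Step(state, dt);
        return state;
    }
}
```

When the dependency graph changes, only the code that composes the graph (e.g., `new Engine(new EulerIntegrator())`) needs to change.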
If, however, this computational engine is developed by one team, has no dependencies on anything that needs to be abstracted (database, file system, etc.), and isn't so complex that testing its parts separately would make development and verification easier, then using dependency injection in it might not help.
I am confused about this line
Aspect-Oriented Programming and Dependency Injection are very different concepts, but there are limited cases where they fit well together.
from this website
http://www.postsharp.net/blog/post/Aspect-Oriented-Programming-vs-Dependency-Injection
I understand the advantages of DI over AOP, but why aren't they used together more often? Why are there only limited cases where they fit together? Is it because of the way AOP is compiled that using both is difficult?
How do you define "limited cases"? I myself always use AOP and DI together.
There are basically three ways to apply AOP, which are:
Using code weaving tools such as PostSharp.
Using dynamic interception tools such as Castle Dynamic Proxy.
Using decorators.
The use of DI with code-weaving tools doesn't mix very well, and I think that's the reason the PostSharp site states that "there are limited cases where they fit well together". One reason they don't mix is that Dependency Injection is about loose coupling, while code weaving hard-couples your code and the aspects together at compile time. From the perspective of DI, code weaving becomes an anti-pattern. In section 11.2 of our book, Mark and I make this argument very clear. In summary, we state:
The aim of DI is to manage Volatile Dependencies by introducing Seams into your application. This enables you to centralize the composition of your object graphs inside the Composition Root.
This is the complete opposite of what you achieve when applying compile-time weaving: it causes Volatile Dependencies to be coupled to your code at compile time. This makes it impossible to use proper DI techniques and to safely compose complete object graphs in the application's Composition Root. It's for this reason that we say that compile-time weaving is the opposite of DI; using compile-time weaving on Volatile Dependencies is an anti-pattern. [page 355]
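To make that coupling concrete, here is a minimal sketch of compile-time weaving, assuming PostSharp's OnMethodBoundaryAspect API (the aspect and calculator class are hypothetical):

```csharp
using System;
using PostSharp.Aspects;
using PostSharp.Serialization;

[PSerializable] // PostSharp requires aspects to be serializable
public class LogAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine($"Entering {args.Method.Name}");
    }
}

public class PriceCalculator
{
    // The aspect is woven into the compiled method body. Removing or
    // replacing it requires recompilation: the dependency is hard-coupled.
    [LogAspect]
    public decimal Calculate(decimal basePrice) => basePrice * 1.2m;
}
```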
If, however, you use dynamic interception, which means applying cross-cutting concerns at runtime by generating decorators on the fly, it works great with DI. It is easily integrated with most DI libraries out there, and it can be done when using Pure DI as well, which is something we demonstrate in section 11.1.
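As a sketch of what such runtime interception looks like with Castle Dynamic Proxy (IOrderService and the interceptor here are made-up examples, not from the book):

```csharp
using System;
using Castle.DynamicProxy;

public interface IOrderService
{
    void Ship(int orderId);
}

public class OrderService : IOrderService
{
    public void Ship(int orderId) => Console.WriteLine($"Shipping order {orderId}");
}

// The cross-cutting concern lives in an interceptor, not in the class.
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine($"Calling {invocation.Method.Name}");
        invocation.Proceed(); // call the real implementation
        Console.WriteLine($"Returned from {invocation.Method.Name}");
    }
}

public static class Program
{
    public static void Main()
    {
        var generator = new ProxyGenerator();
        IOrderService service = generator.CreateInterfaceProxyWithTarget<IOrderService>(
            new OrderService(), new LoggingInterceptor());
        service.Ship(42); // the proxy wraps the real call with logging
    }
}
```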
My personal preference is to use decorators. My systems are designed around a few well-defined generic abstractions, and this allows me to apply cross-cutting concerns at almost all places that are important to my system. That leaves a few rare spots where decorators don't work very well, but this is almost always caused by design flaws, either my own limitations as a developer or design flaws in the .NET framework or some other tool. One famous design flaw is the INotifyPropertyChanged interface. You might have guessed it: in our book we describe the decorator approach in a lot of detail, and we spend a complete chapter (10) on this topic.
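As an illustration of that decorator approach, here is a minimal sketch around a generic command-handler abstraction (all names here are hypothetical, chosen only to show the pattern):

```csharp
// Hypothetical generic abstraction of the kind mentioned above.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class ShipOrder
{
    public int OrderId { get; set; }
}

public class ShipOrderHandler : ICommandHandler<ShipOrder>
{
    public void Handle(ShipOrder command)
    {
        // business logic goes here
    }
}

// A cross-cutting concern applied as a decorator: it wraps any handler
// without that handler knowing anything about auditing.
public class AuditingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;

    public AuditingCommandHandlerDecorator(ICommandHandler<TCommand> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        System.Console.WriteLine($"Executing {typeof(TCommand).Name}");
        decoratee.Handle(command);
    }
}
```

The Composition Root then wraps every handler, e.g. `new AuditingCommandHandlerDecorator<ShipOrder>(new ShipOrderHandler())`, so business classes stay free of cross-cutting code.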
I am currently creating a small personal Windows (desktop) .NET LOB application, and I want to use the opportunity to increase my knowledge of and experience with DI. I've separated my application into model, DAO, and GUI parts, but I am wondering how to implement some cross-cutting concerns such as:
currently logged-on user - used for:
asserting rights - in some parts of the application I check whether the user has the necessary rights
auditing - recording user actions into a separate database table
etc
current application parameters (loaded from configuration file or table) - used for:
defining business strategy
defining UI (theme for example)
etc
logging to file/database log - used for:
logging UI actions (clicking on buttons etc.)
logging business processes (results of calculations, strategy decisions etc.)
logging infrastructure stuff (SQL used for CRUD operations)
etc
At the moment I can think of several ways to provide this information:
Using static properties - UserEntity.Current, Configuration.Current, Logger.Current, etc.
Pros:
Simple to implement
Simple to use
Cons:
Gets messy
It is unclear which part of the application uses what
Cannot be used if you need finer granularity (for example, if some processes in the application need to override the current values)
Using DI - giving each class which needs this information a property/constructor parameter (see the sketch after this list)
Pros:
It is clear for each class what it needs
It is easy to unit test
Cons:
It just seems to explode constructors
Causes problems if a class needs to have a default constructor
Difficult to set up when classes get instantiated by a 3rd party (XAML)
Using ServiceLocator
Pros:
Easy to set up
Easy to use
Cons:
It is unclear which part of the application uses what
Difficult to set up finer granularity (but not impossible)
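To make option 2 concrete, here is a minimal sketch; IUserContext and IAuditLogger are hypothetical abstractions for the concerns listed above:

```csharp
public interface IUserContext
{
    string UserName { get; }
}

public interface IAuditLogger
{
    void Log(string action);
}

public class InvoiceService
{
    private readonly IUserContext user;
    private readonly IAuditLogger audit;

    // The class states exactly which cross-cutting information it needs.
    public InvoiceService(IUserContext user, IAuditLogger audit)
    {
        this.user = user;
        this.audit = audit;
    }

    public void Approve(int invoiceId)
    {
        // rights check and business logic would go here
        audit.Log($"{user.UserName} approved invoice {invoiceId}");
    }
}
```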
I am currently leaning towards ServiceLocator, as I've worked with it before and it worked quite nicely. However, I am concerned about the loss of control: it becomes very easy to just reach for the service locator instead of trying to fix a design problem.
Can somebody share their experience/knowledge?
This sounds like a perfect case for starting with an aspect-oriented approach. Your application's LOB will be designed according to business functional requirements that are cross-cut by different non-functional requirements: authentication, auditing, logging, etc.
At the same time, some of the current application requirements can be addressed with dependency injection. To start, I recommend identifying the Composition Root; in WPF applications, for example, it is the Application.OnStartup method. If you can identify a Composition Root, it is better to avoid a service locator. A service locator adds unneeded complexity to maintenance and testing, because it can resolve literally anything, which makes dependency management complicated.
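For example, a Pure DI sketch of a WPF Composition Root; the logger and service types here are hypothetical placeholders:

```csharp
using System.Windows;

// Hypothetical abstractions and implementations, for illustration only.
public interface ILogger
{
    void Log(string message);
}

public class FileLogger : ILogger
{
    private readonly string path;
    public FileLogger(string path) => this.path = path;
    public void Log(string message) => System.IO.File.AppendAllText(path, message + "\n");
}

public class OrderService
{
    private readonly ILogger logger;
    public OrderService(ILogger logger) => this.logger = logger;
    public void Ship(int orderId) => logger.Log($"Shipped order {orderId}");
}

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        base.OnStartup(e);

        // Composition Root: the single place where the object graph is wired.
        var orderService = new OrderService(new FileLogger("app.log"));

        var window = new Window { Title = "LOB app", DataContext = orderService };
        window.Show();
    }
}
```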
The next step is to decide whether the dependency-injection and aspect-oriented approaches should be kept separate or combined. Both options have benefits and drawbacks.
If you keep them separate, you could use PostSharp, which brings a lot of benefits: good samples and documentation, a community, and ready-to-use aspects. But nothing comes for free: PostSharp offers only a limited number of features in its free version, and its integration with continuous integration is complicated.
Another solution is to combine dependency injection with dynamic proxies. As long as you follow the principle of programming to an interface, not an implementation, you can achieve all of the requirements. The benefit is one place to wire all components. There are two major drawbacks: first, dynamic proxies are themselves quite limited; second, integration between a dependency injection container and a dynamic proxy library already exists for some containers but not for others. Examples: the Ninject Interception extension, or StructureMap and its interception support.
I recommend taking a look at the following resources to find more answers yourself:
* The book AOP in .NET: Practical Aspect-Oriented Programming by Matthew D. Groves; the first chapter is available for free.
* The book Dependency Injection in .NET by Mark Seemann: a well-written book about dependency injection, with chapter 9 dedicated to interception, an approach I found quite useful for the cases you describe in your question. The author also has an excellent blog dedicated to dependency injection and a video about aspect-oriented programming with dependency injection.
What is DI for, and what is its use case, when we have ServiceManager?
They appear to be similar, since in the configuration files for both zend-di and zend-servicemanager we can set up options such as aliases and invokables.
I am trying to get a better understanding of what is happening behind the scenes with these components, and the documentation did not give me enough information.
Could you please tell me what the difference is and when I should use Di instead of ServiceManager?
Zend\Di relies on magic, like reflection, to detect and inject dependencies, while the service manager uses user-provided factories. That is the main difference.
Zend\Di is more or less deprecated in the community in favor of the service manager, due to complexity, debugging, and performance issues.
It is supposed to be good for RAD, but you need above-average knowledge to use it properly.
On the other hand, the service manager has pretty verbose and explicit wiring; you can open your code a year later and easily figure out what is going on.
Zend\Di takes care of wiring your classes together, whereas with Zend\ServiceManager you have to wire things manually and write a factory closure for every class you want to instantiate.
Zend\ServiceManager is much faster since it does not rely on the slow Reflection API. On the other hand, writing closures for large applications with hundreds of classes becomes very tedious. Keeping your closures up-to-date will get trickier as your application grows.
To address this problem, I have written a Zend Framework 2 module called ZendDiCompiler. It relies on Zend\Di to scan your code and auto-generates factory code to instantiate your classes. You get the best of both components: the power of Zend\Di and the performance of Zend\ServiceManager.
I have put quite a bit of work into the documentation of ZendDiCompiler and some easy and more advanced usage examples are provided, too.
Basically the difference is as follows:
Zend\ServiceManager = Factory-driven IoC container
Zend\Di = Autowiring IoC implementation
Zend\Di was refactored for version 3. Its behaviour is now more solid and predictable than v2's, and it is designed to integrate seamlessly with zend-servicemanager to provide auto-wiring capabilities (no more odd magic). Since it uses PHP's reflection API to resolve dependencies, it is slower than a factory-driven approach. Therefore version 3 comes with an AoT compiler to create a pre-resolved injector that omits the use of reflection. An additional benefit: the generated factories can also be used with Zend\ServiceManager directly.
There is a guide for using AoT with both components: https://zendframework.github.io/zend-di/cookbook/aot-guide/
I'm doing research on software architecture and layering, and I have looked at lots of open-source .NET projects, like Orchard CMS.
I think Orchard is a good example of some design patterns.
As far as I know, UI, services, repositories, and entities should be in separate assemblies to prevent misuse. But in Orchard (because it is modular and pluggable), I see service, repository, and entity classes and interfaces in the same folder and the same namespace.
Isn't that an anti-pattern, or is it correct with respect to the patterns?
TL;DR: assemblies are not necessarily the right separation device.
No, what's important is that they are separated, not that they are in separate assemblies. Furthermore, the way you would factor things in most applications has to be different from what you do in an extensible CMS. The right separation in an extensible CMS is into decoupled features that can be added and removed at will, whereas regular tiered applications require decoupling of layers so those can be worked on and refactored with minimal risk and impact. The right comparison is actually between one of those applications and a module or feature in Orchard, not with Orchard as a whole. But of course, good practices should be used within modules, and they usually are.
Now separation into assemblies is a separate concern, that is more technical than architectural. You can see an assembly as a container of self-contained code, created for the purpose of code reuse and dynamic linking, but not especially as a way to separate layers. This is why they coincide in Orchard with the unit of code reuse, the module.
Also consider the practical aspect of this: good architectural practices have one main goal, which is to make applications easier and cheaper to maintain (and not, surprisingly (NOT!), to make consultants rich by enabling them to set up astronaut architectures that only they can understand). A secondary goal is to codify what makes scalable and well-performing applications (although that is a trickier goal, as it can easily lead to premature optimization, the root of most software evil).
For that first goal, conceptual separation is the most important, but the way this separation is made is usually not very important.
The secondary goal unfortunately conflicts with the idea of using assemblies as a separation device: Orchard as it is already has dozens of assemblies before you even start to add optional modules. And assemblies do not come for free. They need to be dynamically compiled, loaded, jitted, come with memory overhead, etc. In other terms, for good performance, you'll usually want to reduce the number of assemblies.
If you wanted to separate an Orchard site into assemblies for modules as it is today, and then separate each of those modules into layered assemblies, you would have to multiply the number of modules by the number of layers. That would be hundreds of assemblies to load. Not good. As a matter of fact, we are even considering an option for dynamic compilation to build all modules into a single assembly.
I'm attempting to refactor a large codebase to use StructureMap. Does anyone know if there's a tool to quickly scan a codebase and report the number of volatile dependencies within classes? Sure, I could always search all files for the word "new", but this would also find non-volatile dependencies, such as those from the BCL, which are not material. I suppose that NDepend could, indirectly, provide some report on the degree of coupling, which is, indirectly, what I'm looking to eliminate. I'm just wondering if there is a tool specifically designed to assist with migrating towards the use of an IoC container.
I am not aware of anything like that. I think it would be quite hard to write a tool to identify volatile dependencies, since they can be as diverse as file I/O calls and database calls. Also, many developers draw the line between volatile and non-volatile in different places: some would say System.Configuration is volatile, others would argue it is not...