Functional dependency injection

When writing object-oriented software, I use dependency injection a lot:
to compose together high-level functionality from lower-level capabilities: my account management service uses repositories and validation services rather than implementing them itself.
to isolate components from their dependencies: my account management service uses its dependencies through interfaces, so that I can swap implementations, mock for unit testing and so on.
What patterns exist in functional programming languages to achieve these goals?
edit: a commenter rightly asks: "what about just passing round functions?". I think that the following comment about function grouping hits the nail on the head - a service is a collection of functions with a shared set of dependencies that I can handle as an atomic group.
In Clojure it seems like protocols solve this nicely, but I was really wondering how the problem is solved more generally...

Some time ago I read a post describing how dependency injection can be seen as currying in functional programming. I think it's very interesting, and it gives a good perspective on the topic.
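To make the currying view concrete, here's a minimal Scala sketch (names like EmailClient and sendReminder are made up for illustration): the dependency sits in its own parameter list, and "injection" is just partial application.

```scala
// A capability the service depends on.
trait EmailClient {
  def send(to: String, body: String): Unit
}

object Reminders {
  // The dependency gets its own (first) parameter list...
  def sendReminder(client: EmailClient)(user: String): Unit =
    client.send(user, "Don't forget your appointment!")
}

object CurryingDemo extends App {
  // Single-abstract-method trait, so a lambda works as an implementation.
  val smtp: EmailClient = (to, body) => println(s"SMTP -> $to: $body")

  // ...so "injecting" the dependency is just partial application.
  val remind: String => Unit = Reminders.sendReminder(smtp)

  remind("alice@example.com") // callers never see the dependency
}
```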

At the small scale, things like currying and functions-as-parameters cut down the need for module dependencies. At a larger scale, things like Standard ML functors are very useful for this purpose. Racket has a system called units that does a good job on this too.
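For a rough feel of the functor idea without leaving Scala (a hedged approximation, hypothetical names): a trait plays the role of the ML signature, and a class parameterized by it plays the functor. This also gives you the "atomic group of functions with a shared set of dependencies" from the question.

```scala
// "Signature": what the account module requires from its environment.
trait AccountRepo {
  def find(id: String): Option[String]
  def save(id: String, name: String): Unit
}

// "Functor": a whole group of functions parameterized by the signature.
// Instantiating it with different repos yields interchangeable modules.
class AccountService(repo: AccountRepo) {
  def rename(id: String, newName: String): Boolean =
    repo.find(id) match {
      case Some(_) => repo.save(id, newName); true
      case None    => false
    }
}

// One possible instantiation: an in-memory implementation for tests.
object InMemoryRepo extends AccountRepo {
  private val data = scala.collection.mutable.Map.empty[String, String]
  def find(id: String): Option[String] = data.get(id)
  def save(id: String, name: String): Unit = data(id) = name
}

object FunctorDemo extends App {
  val accounts = new AccountService(InMemoryRepo)
  InMemoryRepo.save("42", "Alice")
  println(accounts.rename("42", "Alicia")) // true
}
```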

I developed a little library which I found helpful for DI in a functionally-inspired (JavaScript) environment; it's nothing special, just an approach I like.

Related

Is there any plan to replace the DI implementation in Guice with dagger2

I like dagger2 a lot and want to use it in my new project. The only gotcha is that with dagger2 we still have to write some boilerplate code, and it's missing support for CDI.
Since Google is developing and maintaining dagger2 and also using it for their Android development, I am wondering if they are thinking of replacing the DI implementation in Guice with dagger2, which is my first question.
If they are, then I can start using guice expecting that with some future update I will get the goodness of dagger.
But if they are not, is there a way that I can use both in the same project, where Guice can be limited to CDI?
I'm not a Dagger expert, but I'll try to answer your question... (and I hope to do it well)
I am wondering if they are thinking of replacing the DI implementation in Guice with dagger2,
Nope. There is no good reason to do it. Dagger and Guice take totally different approaches to the Dependency Injection concept: the former uses code generation, the latter runtime reflection.
(...) is there a way that I can use both in the same project where guice can be limited to CDI?
I don't think it's a good idea to mix CDI, Dagger and Guice in the same project. Besides, CDI is only a specification, not an actual implementation like Weld or OpenWebBeans - so I guess you meant to say "DI"?
Anyway: there is the dagger-adapter extension, which allows using Dagger2 modules with Guice (via DaggerAdapter) if you really want to mix Dagger with Guice 4.
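If you do want to experiment with that mix, the usage is roughly as follows. This is a hedged, unverified sketch in Scala (for consistency with the other examples here; the adapter itself is a Java API), with GreetingModule and AppModule invented for illustration:

```scala
import com.google.inject.{AbstractModule, Guice}
import com.google.inject.daggeradapter.DaggerAdapter
import dagger.{Module, Provides}

// A hypothetical Dagger module: plain @Provides methods, which
// DaggerAdapter can expose to Guice as ordinary bindings.
@Module
class GreetingModule {
  @Provides def greeting: String = "hello from a Dagger module"
}

// An ordinary Guice module living alongside it.
class AppModule extends AbstractModule {
  override def configure(): Unit = () // regular Guice bindings go here
}

object MixedWiring extends App {
  val injector = Guice.createInjector(
    new AppModule,
    DaggerAdapter.from(new GreetingModule) // Dagger module, adapted
  )
  println(injector.getInstance(classOf[String]))
}
```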
By the way, I would like to give you an idea of what Dagger is and what it never will be. Below is a quote from Christian Gruber (who worked on Dagger) on this subject:
Guice will always have a superset of features compared to Dagger, though we do have projects using Dagger on the server and in stand-alone java apps. But Dagger is not as evolved in terms of the surrounding "scaffolding" code (servlet support, etc.) as Guice, and won’t be for quite some time. Additionally, some teams will need or want some advanced Guice features that will never make it in to Dagger.
You may ask what these "advanced features" are. One example is AOP support, like intercepting methods, which might be crucial for many developers.
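To illustrate what such method interception looks like in Guice, here's a hedged Scala sketch (LoggingInterceptor and ReportService are hypothetical names; bindInterceptor is Guice's AOP entry point):

```scala
import com.google.inject.AbstractModule
import com.google.inject.matcher.Matchers
import org.aopalliance.intercept.{MethodInterceptor, MethodInvocation}

// The cross-cutting concern, written once.
class LoggingInterceptor extends MethodInterceptor {
  override def invoke(invocation: MethodInvocation): AnyRef = {
    println(s"entering ${invocation.getMethod.getName}")
    try invocation.proceed()
    finally println(s"leaving ${invocation.getMethod.getName}")
  }
}

// A hypothetical service whose methods we want intercepted. Guice AOP
// subclasses it at runtime, so it must be created by the injector and
// must be a non-final class with interceptable (non-final) methods.
class ReportService {
  def run(): String = "report generated"
}

class AopModule extends AbstractModule {
  override def configure(): Unit =
    bindInterceptor(
      Matchers.subclassesOf(classOf[ReportService]), // which classes
      Matchers.any(),                                // which methods
      new LoggingInterceptor
    )
}
```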
You can read the whole discussion (February 2014), which is available here.
As someone working on Java application framework development at Google, I can assure you that Google has large important projects built both with Guice and with Dagger, and both DI systems will continue to be used and developed for the foreseeable future.
My personal idea (which is not an official Google plan or statement) is that over time we will build both more powerful abstractions on top of Dagger (likely in add-on frameworks and/or libraries) so that Dagger continues to become suitable for a larger and larger set of applications, and also more powerful tooling around Guice to make the Guice developer experience become more and more comparable to the Dagger developer experience, at least for a subset of Guice applications that are doing "normal" things.
Both Dagger and Guice are useful tools, each with a different set of trade-offs and a different target audience. Using both in the same project is something that should be possible, although that isn't really the ideal solution because then you can't fully take advantage of the strengths of either of them. But better interoperability is a goal, and the Guice and Dagger teams regularly communicate about how to standardize and coordinate efforts.
Stumbled upon this after having issues with Guice and Java 11. As we barely use Guice anyway, I intend to rip it out in favor of Dagger for now. Guice is giving me a hugely complicated ASM-based exception that is buried, hard to root-cause, and apparently not addressed by the framework. Having stepped through the Guice code trying to figure this out over a week or so, I also find the "scaffolding" to be too much for at least my use case, and it has me questioning the merits of DI frameworks generally. Dagger2 at least operates at compile time.

Is it recommended to use Dependency Injection when there is only a single implementation?

I know that it is recommended to use DI when we have multiple implementations of an interface. But is there any other benefit that recommends using DI without having multiple implementations?
I've often found that the bigger the solution, the smaller the percentage of interfaces having multiple implementations. But as Mikhail pointed out, it's certainly a lot easier to plug in newer implementations should they arise.
However, the strongest benefit of dependency injection is that it can make testing a lot easier: by injecting interfaces in the unit under test, you're able to mock those interfaces so that they return some dummy objects that can help you reach certain code paths.
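A tiny sketch of that point (Scala, hypothetical names): because the unit under test receives its dependency as an interface, a test can substitute a stub and force the code path it wants, no mocking framework required.

```scala
trait PaymentGateway {
  def charge(accountId: String, cents: Long): Boolean
}

// The unit under test depends only on the interface.
class CheckoutService(gateway: PaymentGateway) {
  def checkout(accountId: String, cents: Long): String =
    if (gateway.charge(accountId, cents)) "receipt" else "declined"
}

object CheckoutTest extends App {
  // A stub that forces the failure path -- no mocking library needed.
  val alwaysDeclines: PaymentGateway = (_, _) => false

  val service = new CheckoutService(alwaysDeclines)
  assert(service.checkout("acct-1", 999) == "declined")
  println("failure path covered")
}
```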
I also think that it's easier and more elegant/readable to scale up a project through this inversion-of-control concept, and it's also pretty handy for following SOLID design.
Just that other implementation(s) might appear in the future.

Why AOP and DI aren't used together very often

I am confused about this line
Aspect-Oriented Programming and Dependency Injection are very different concepts, but there are limited cases where they fit well together.
from this website
http://www.postsharp.net/blog/post/Aspect-Oriented-Programming-vs-Dependency-Injection
I understand the advantages of DI over AOP, but why aren't they used together more often? Why are there only limited cases where they fit together? Is it the way AOP is compiled that makes using both difficult?
How do you define "limited cases"? I myself always use AOP and DI together.
There are basically three ways to apply AOP, which are:
Using code weaving tools such as PostSharp.
Using dynamic interception tools such as Castle Dynamic Proxy.
Using decorators.
The use of DI with code weaving tools doesn't mix very well, and I think that's the reason the PostSharp site states that "there are limited cases where they fit well together". One reason is that Dependency Injection is about loose coupling, while code weaving hard-couples your code and the aspects together at compile time. From the perspective of DI, code weaving becomes an anti-pattern. In section 11.2 of our book, Mark and I make this argument very clear. In summary we state:
The aim of DI is to manage Volatile Dependencies by introducing Seams into your application. This enables you to centralize the composition of your object graphs inside the Composition Root.
This is the complete opposite of what you achieve when applying compile-time weaving: it causes Volatile Dependencies to be coupled to your code at compile time. This makes it impossible to use proper DI techniques and to safely compose complete object graphs in the application's Composition Root. It's for this reason that we say that compile-time weaving is the opposite of DI; using compile-time weaving on Volatile Dependencies is an anti-pattern. [page 355]
If you use dynamic interception, however, which means applying cross-cutting concerns at runtime by generating decorators on the fly, it works great with DI. It integrates easily with most DI libraries out there, and it can be done as well when using Pure DI, which is something we demonstrate in section 11.1.
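To show what "generating decorators on the fly" means mechanically, here's a hedged Scala sketch using the JDK's built-in dynamic proxies (names invented); tools like Castle Dynamic Proxy industrialize this same idea:

```scala
import java.lang.reflect.{InvocationHandler, Method, Proxy}

trait Repository {
  def save(record: String): Unit
}

object DynamicInterceptionDemo extends App {
  val real: Repository = record => println(s"saved: $record")

  // A decorator generated at runtime: logs around every call, then delegates.
  val handler = new InvocationHandler {
    def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]): AnyRef = {
      println(s"before ${method.getName}")
      val result = method.invoke(real, args: _*)
      println(s"after ${method.getName}")
      result
    }
  }

  val logged = Proxy.newProxyInstance(
    classOf[Repository].getClassLoader,
    Array(classOf[Repository]),
    handler
  ).asInstanceOf[Repository]

  logged.save("order-1") // a DI container can hand out `logged` instead of `real`
}
```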
My personal preference is to use decorators. My systems are designed around a few well-defined generic abstractions, and this allows me to apply cross-cutting concerns at almost all places that are important to my system. That leaves me, in very rare cases, with a few spots where decorators don't work very well, but this is almost always caused by design flaws - either my own limitations as a developer, or design flaws in the .NET framework or some other tool. One famous design flaw is the INotifyPropertyChanged interface. You might have guessed it, but in our book we describe this in a lot of detail; we spend a complete chapter (10) on this topic.
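For comparison, the decorator option is plain code with no tooling at all. A hedged Scala sketch (hypothetical names): the decorator implements the same interface, wraps the real implementation, and the composition root decides whether to apply it.

```scala
trait Notifier {
  def send(userId: String, message: String): Unit
}

class EmailNotifier extends Notifier {
  def send(userId: String, message: String): Unit =
    println(s"email to $userId: $message")
}

// The cross-cutting concern as an ordinary class: same interface,
// wraps another Notifier, adds its behavior, then delegates.
class AuditingNotifier(inner: Notifier) extends Notifier {
  def send(userId: String, message: String): Unit = {
    println(s"audit: notifying $userId")
    inner.send(userId, message)
  }
}

object CompositionRoot extends App {
  // Whether to decorate is a pure wiring decision, made in one place.
  val notifier: Notifier = new AuditingNotifier(new EmailNotifier)
  notifier.send("alice", "your order shipped")
}
```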

Zend Di vs ServiceManager dependency injection containers

What is DI for, and what is its use case, when we have the ServiceManager?
They appear to be similar since in configuration files for both zend-di and zend-servicemanager we can set up some options such as aliases and invokables.
I am trying to get a better understanding of what is happening behind the scenes with these components, and the documentation did not give me enough info.
Could you please tell me what the difference is and when I should use Di instead of ServiceManager?
Zend\Di relies on magic, like reflection, to detect and inject dependencies, while the ServiceManager uses user-provided factories. That is the main difference.
Di is sort of deprecated in the community in favor of the ServiceManager due to complexity, debugging and performance issues.
It is supposed to be good for RAD, but you need above-average knowledge to use it properly.
On the other hand, the ServiceManager has pretty verbose but explicit wiring: you can open your code a year later and easily figure out what is going on.
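To make the factory-driven style concrete, here is a hedged Scala toy (not Zend's actual API): every service is built by a user-supplied closure, so nothing is guessed by reflection and all the wiring stays explicit.

```scala
// A toy factory-driven container in the spirit of a ServiceManager:
// every service is created by a user-supplied closure, nothing is guessed.
class Container {
  private var factories = Map.empty[String, Container => Any]

  def register(name: String)(factory: Container => Any): Unit =
    factories += name -> factory

  def get[A](name: String): A =
    factories(name)(this).asInstanceOf[A]
}

object WiringDemo extends App {
  class Mailer { def send(msg: String): Unit = println(s"mail: $msg") }
  class Signup(mailer: Mailer) { def run(): Unit = mailer.send("welcome") }

  val c = new Container
  // One explicit factory per class: verbose, but a year from now the
  // wiring is still right here to read.
  c.register("mailer") { _ => new Mailer }
  c.register("signup") { c => new Signup(c.get[Mailer]("mailer")) }

  c.get[Signup]("signup").run()
}
```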
Zend\Di takes care of wiring your classes together, whereas with Zend\ServiceManager you have to wire things manually and write a factory closure for every class you want to instantiate.
Zend\ServiceManager is much faster since it does not rely on the slow Reflection API. On the other hand, writing closures for large applications with hundreds of classes becomes very tedious. Keeping your closures up-to-date will get trickier as your application grows.
To address this problem, I have written a Zend Framework 2 module called ZendDiCompiler. It relies on Zend\Di to scan your code and auto-generates factory code to instantiate your classes. You get the best of both components: the power of Zend\Di and the performance of Zend\ServiceManager.
I have put quite a bit of work into the documentation of ZendDiCompiler and some easy and more advanced usage examples are provided, too.
Basically the difference is as follows:
Zend\ServiceManager = Factory driven IoC Container
Zend\Di = Autowiring IoC implementation
Zend\Di was refactored for version 3. Its behaviour is now more solid and predictable than in v2, and it is designed to integrate seamlessly into zend-servicemanager to provide auto-wiring capabilities (no more odd magic). Since it uses PHP's Reflection API to resolve dependencies, it is slower than a factory-driven approach. Therefore version 3 comes with an AoT compiler to create a pre-resolved injector that avoids the use of reflection. An additional benefit: the generated factories can also be used with Zend\ServiceManager directly.
There is a guide for using AoT with both components: https://zendframework.github.io/zend-di/cookbook/aot-guide/

Decomposition (modularity) in functional languages

Got an idea: functions (in FP) could be composed in ways similar to components in OOP. For components in OOP we use interfaces; for functions we could use delegates. The goal is to achieve decomposition, modularity and interchangeability. We could employ dependency injection to make this easier.
I tried to find something about the topic. No luck. Probably because there are no functional programs big enough to need this? While searching for enterprise-scale applications written in FP I found this list, Functional Programming in the Real World, and this paper.
I hope I just missed the killer applications for FP, which would be big enough to deserve decomposition.
Question: Could you show decent real-world FP application (preferably open source), which uses decomposition into modules?
Bonus chatter: What is the usual pattern used? What kind of functions are usually decomposed into separate modules? Are the implementations ever mocked for testing purposes?
Some time ago I was learning F# and wondering about the same topics, so I asked about quality open source projects to learn from.
The reason why you're not seeing anything similar to dependency injection in functional programming is that it's just "natural": you "inject dependencies" simply by passing or composing functions. Or, as this article puts it, "Functional dependency injection == currying", but that's just one mechanism.
Mocking frameworks are not necessary. If you need to mock something, you just pass a "stub" function.
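For example (a hedged Scala sketch, names invented): if the dependency is just a function parameter, the "mock" is just another function.

```scala
object StubDemo extends App {
  // The "service" takes its dependency as a plain function value.
  def greetUser(fetchName: String => Option[String])(userId: String): String =
    fetchName(userId).fold("who are you?")(name => s"hello, $name")

  // Production might pass a database-backed lookup; a test passes a stub.
  val stub: String => Option[String] = {
    case "u1" => Some("Alice")
    case _    => None
  }

  assert(greetUser(stub)("u1") == "hello, Alice")
  assert(greetUser(stub)("nope") == "who are you?")
  println("stubbed both paths, no framework involved")
}
```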
See also this question about real-world Scala applications.
Either we're talking at cross-purposes (it's possible: I'm rather unfamiliar with OOP terminology) or you're missing a lot about functional programming. Modules and abstraction (i.e. interchangeability) were basically invented in the functional language CLU. The seminal papers on abstract types are James Morris's Protection in programming languages and Types are not sets. Later, most improvements in module systems and abstraction have come from the functional programming world, through ML-like languages.
The killer application for functional programming is often said to be symbolic manipulation. Most compilers for functional languages are written in the language itself, so you could look up the source of your favorite functional language implementation. But pretty much any nontrivial program (functional or not) is written in a modular way to some extent — maybe I'm missing something about what you mean by “decomposition”? The modularity will be more visible and use more advanced concepts in strongly typed languages with an advanced module system, such as Standard ML and Objective Caml.
